OpenCV: jet colormap to grayscale, reversing applyColorMap()
To convert a grayscale image to a colormap, I do:
import cv2
im = cv2.imread('test.jpg', cv2.IMREAD_GRAYSCALE)
im_color = cv2.applyColorMap(im, cv2.COLORMAP_JET)
cv2.imwrite('colormap.jpg', im_color)
Then,
cv2.imread('colormap.jpg')
# ??? What should I do here?
Obviously, reading it back in grayscale (by passing 0 as the second argument to cv2.imread) won't magically recover the original grayscale values, so how do I do it?
Solution 1:[1]
You could create an inverse of the colormap, i.e., a lookup table from the colormap values to the associated gray values. A lookup table requires the exact values of the original colormap, so the false-color image will most likely need to be saved in a lossless format to avoid the colors being changed by compression. There is probably a faster way to map over the numpy array; and if exact values cannot be preserved, a nearest-neighbor lookup in the inverse map is needed instead.
import cv2
import numpy as np
# load a color image as grayscale, convert it to false color, and save false color version
im_gray = cv2.imread('test.jpg', cv2.IMREAD_GRAYSCALE)
cv2.imwrite('gray_image_original.png', im_gray)
im_color = cv2.applyColorMap(im_gray, cv2.COLORMAP_JET)
cv2.imwrite('colormap.png', im_color) # save in lossless format to avoid colors changing
# create an inverse from the colormap to gray values
gray_values = np.arange(256, dtype=np.uint8)
color_values = map(tuple, cv2.applyColorMap(gray_values, cv2.COLORMAP_JET).reshape(256, 3))
color_to_gray_map = dict(zip(color_values, gray_values))
# load false color and reserve space for grayscale image
false_color_image = cv2.imread('colormap.png')
# apply the inverse map to the false color image to reconstruct the grayscale image
gray_image = np.apply_along_axis(lambda bgr: color_to_gray_map[tuple(bgr)], 2, false_color_image)
# save reconstructed grayscale image
cv2.imwrite('gray_image_reconstructed.png', gray_image)
# compare reconstructed and original gray images for differences
print('Number of pixels different:', np.sum(np.abs(im_gray - gray_image) > 0))
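The per-pixel dict lookup via np.apply_along_axis above is slow on large images. A vectorized nearest-neighbor variant covers both the speed concern and the case where exact values are not preserved; this is a minimal sketch (not part of the original answer, variable names are illustrative):
# vectorized inverse lookup: nearest palette entry per pixel (sketch)
palette = cv2.applyColorMap(np.arange(256, dtype=np.uint8),
                            cv2.COLORMAP_JET).reshape(256, 3).astype(np.int32)
pixels = false_color_image.reshape(-1, 1, 3).astype(np.int32)
# squared L2 distance from every pixel to every palette color: shape (N, 256)
d2 = ((pixels - palette[None, :, :]) ** 2).sum(axis=2)
gray_image = np.argmin(d2, axis=1).astype(np.uint8).reshape(false_color_image.shape[:2])
Note that the (N, 256, 3) intermediate gets large for big images, so you may want to process the pixels in chunks.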
Solution 2:[2]
The other answer works if you have exact color values.
If your colors have been compressed lossily (JPEG), you need a different approach.
Here's an approach using FLANN. It finds the nearest color and tells you the difference too, so you can handle implausible values.
complete notebook: https://gist.github.com/crackwitz/ccd54145bec1297ccdd4a0c8f4971deb
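The highlights below assume that `jet` (matplotlib's jet colormap) and `im` (the false-color BGR image) are already defined in the notebook. A minimal setup sketch, assuming matplotlib's registered 'jet' colormap and an illustrative filename (it touches the private `_lut` field, as the notebook does):
import cv2 as cv
import numpy as np
from matplotlib import colormaps

jet = colormaps['jet']  # LinearSegmentedColormap with N == 256
jet._init()             # populates the private jet._lut field used below

im = cv.imread('colormap.jpg')  # lossy false-color input
height, width = im.shape[:2]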
Highlights:
norm = cv.NORM_L2
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
fm = cv.FlannBasedMatcher(index_params, search_params)
# JET, BGR order, excluding special palette values (>= 256)
fm.add(255 * np.float32([jet._lut[:256, (2,1,0)]])) # jet
fm.train()
# look up all pixels
query = im.reshape((-1, 3)).astype(np.float32)
matches = fm.match(query)
# statistics: `result` is palette indices ("grayscale image")
output = np.uint16([m.trainIdx for m in matches]).reshape(height, width)
result = np.where(output < 256, output, 0).astype(np.uint8)
dist = np.uint8([m.distance for m in matches]).reshape(height, width)
Source of the colormapped picture: "Separating Object Contours OpenCV"
Solution 3:[3]
The answer above from Christoph Rackwitz is brilliant, but it is a bit confusing due to Python notebook specifics. Here is the complete code for the conversion.
import cv2
import numpy as np
from matplotlib import colormaps  # colormaps['jet'], colormaps['turbo']
from matplotlib.colors import LinearSegmentedColormap
from matplotlib._cm import _jet_data

def convert_jet_to_grey(img):
    (height, width) = img.shape[:2]

    cm = LinearSegmentedColormap("jet", _jet_data, N=2 ** 8)
    # cm = colormaps['turbo']  # swap in if you used the turbo colormap instead of jet
    cm._init()  # must be called first; creates the cm._lut data field

    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    fm = cv2.FlannBasedMatcher(index_params, search_params)

    # JET, BGR order, excluding special palette values (>= 256)
    fm.add(255 * np.float32([cm._lut[:256, (2, 1, 0)]]))  # jet
    fm.train()

    # look up all pixels
    query = img.reshape((-1, 3)).astype(np.float32)
    matches = fm.match(query)

    # `result` is palette indices ("grayscale image")
    output = np.uint16([m.trainIdx for m in matches]).reshape(height, width)
    result = np.where(output < 256, output, 0).astype(np.uint8)
    # dist = np.uint8([m.distance for m in matches]).reshape(height, width)

    return result  # return `result, dist` as well if you want the accuracy image
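A usage sketch (the filenames are illustrative; the input must be a jet-colormapped BGR image):
img = cv2.imread('colormap.jpg')
gray = convert_jet_to_grey(img)
cv2.imwrite('gray_reconstructed.png', gray)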
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution   | Source
-----------|-------------------
Solution 1 |
Solution 2 | Christoph Rackwitz
Solution 3 | Vadim Smirnov