Getting a Black and White UIImage (Not Grayscale)
I need to get a pure black and white UIImage from another UIImage (not grayscale). Can anyone help me?
Thanks for reading.
EDITED:
Here is the solution I ended up with. Thanks to all. Although I know this is probably not the best way to do it, it works fine.
// Gets a pure black and white image from an original image.
// The bitmap is ARGB, so byte 0 of each pixel is alpha and bytes 1-3 are RGB.
- (UIImage *)pureBlackAndWhiteImage:(UIImage *)image {
    unsigned char *dataBitmap = [self bitmapFromImage:image];
    for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
        // If the summed RGB value is below half the maximum, snap to black; otherwise, to white.
        if ((dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 3 / 2)) {
            dataBitmap[i + 1] = 0;
            dataBitmap[i + 2] = 0;
            dataBitmap[i + 3] = 0;
        } else {
            dataBitmap[i + 1] = 255;
            dataBitmap[i + 2] = 255;
            dataBitmap[i + 3] = 255;
        }
    }
    image = [self imageWithBits:dataBitmap withSize:image.size];
    return image;
}
EDITED 1:
In response to the comments, here are the bitmapFromImage and imageWithBits methods.
// Retrieves the bits from the context once the image has been drawn.
- (unsigned char *)bitmapFromImage:(UIImage *)image {
    // Creates an ARGB bitmap context for the given image size.
    CGContextRef context = CreateARGBBitmapContext(image.size);
    if (context == NULL) {
        return NULL;
    }
    CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
    CGContextDrawImage(context, rect, image.CGImage);
    // The backing buffer outlives the context release; the caller is
    // responsible for freeing it.
    unsigned char *data = CGBitmapContextGetData(context);
    CGContextRelease(context);
    return data;
}
// Fills an image with bits.
- (UIImage *)imageWithBits:(unsigned char *)bits withSize:(CGSize)size {
    // Creates a color space.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        free(bits);
        return nil;
    }
    CGContextRef context = CGBitmapContextCreate(bits, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaPremultipliedFirst);
    if (context == NULL) {
        fprintf(stderr, "Error: context not created\n");
        free(bits);
        CGColorSpaceRelease(colorSpace);
        return nil;
    }
    CGColorSpaceRelease(colorSpace);
    CGImageRef ref = CGBitmapContextCreateImage(context);
    free(CGBitmapContextGetData(context));
    CGContextRelease(context);
    UIImage *img = [UIImage imageWithCGImage:ref];
    CFRelease(ref);
    return img;
}
Solution 1:[1]
If what you're looking for is to threshold the image -- everything brighter than a certain value turns white, everything darker turns black, and you pick the value -- then a library like GPUImage will work for you.
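As a plain illustration of what such thresholding does per pixel (independent of GPUImage; the function name and the use of Rec. 601 luma weights are my own choices here), a minimal sketch over a flat RGBA byte buffer might look like this:

```swift
// Illustrative helper: compares each pixel's luminance against a cutoff
// and snaps it to pure black or pure white. Operates in place on a
// flat RGBA buffer (4 bytes per pixel); alpha is left untouched.
func thresholdRGBA(_ pixels: inout [UInt8], cutoff: UInt8 = 128) {
    for i in stride(from: 0, to: pixels.count, by: 4) {
        // Rec. 601 luma weights approximate perceived brightness.
        let luma = 0.299 * Double(pixels[i])
                 + 0.587 * Double(pixels[i + 1])
                 + 0.114 * Double(pixels[i + 2])
        let value: UInt8 = luma < Double(cutoff) ? 0 : 255
        pixels[i]     = value  // R
        pixels[i + 1] = value  // G
        pixels[i + 2] = value  // B
    }
}
```

The cutoff is the "value you pick": raising it pushes more mid-tones to black, lowering it pushes them to white.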
Solution 2:[2]
This code may help (note that it sets every channel to the value of the pixel's brightest channel, which produces a grayscale image rather than pure black and white):
for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
    if (dataBitmap[i + 0] >= dataBitmap[i + 1] && dataBitmap[i + 0] >= dataBitmap[i + 2]) {
        dataBitmap[i + 1] = dataBitmap[i + 0];
        dataBitmap[i + 2] = dataBitmap[i + 0];
    } else if (dataBitmap[i + 1] >= dataBitmap[i + 0] && dataBitmap[i + 1] >= dataBitmap[i + 2]) {
        dataBitmap[i + 0] = dataBitmap[i + 1];
        dataBitmap[i + 2] = dataBitmap[i + 1];
    } else {
        dataBitmap[i + 0] = dataBitmap[i + 2];
        dataBitmap[i + 1] = dataBitmap[i + 2];
    }
}
Solution 3:[3]
With Swift 3, I was able to accomplish this effect by using CIFilters: first applying CIPhotoEffectNoir (to make it grayscale), then applying the CIColorControls filter with the kCIInputContrastKey input parameter set to a high value (e.g. 50). Setting the kCIInputBrightnessKey parameter will also adjust how intense the black-and-white contrast appears: negative for a darker image, positive for a brighter image. For example:
extension UIImage {
    func toBlackAndWhite() -> UIImage? {
        guard let ciImage = CIImage(image: self) else {
            return nil
        }
        guard let grayImage = CIFilter(name: "CIPhotoEffectNoir", withInputParameters: [kCIInputImageKey: ciImage])?.outputImage else {
            return nil
        }
        let bAndWParams: [String: Any] = [kCIInputImageKey: grayImage,
                                          kCIInputContrastKey: 50.0,
                                          kCIInputBrightnessKey: 10.0]
        guard let bAndWImage = CIFilter(name: "CIColorControls", withInputParameters: bAndWParams)?.outputImage else {
            return nil
        }
        guard let cgImage = CIContext(options: nil).createCGImage(bAndWImage, from: bAndWImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}
Solution 4:[4]
While it may be overkill for your purposes, I do just that for live video from the iPhone camera in my sample application here. That application takes a color and a sensitivity, and can turn all pixels white that are within that threshold and transparent if not. I use OpenGL ES 2.0 programmable shaders for this in order to get realtime responsiveness. The whole thing is described in this post here.
Again, this is probably overkill for what you want. In the case of a simple UIImage that you want to convert to black and white, you can probably read in the raw pixels, iterate through them, and apply the same sort of thresholding I did to output the final image. This won't be as fast as the shader approach, but it will be much simpler to code.
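The color-plus-sensitivity test described above can be sketched on the CPU as follows. This is a simplified illustration, not the actual shader from the sample application; the `Pixel` type, the function name, and the Euclidean RGB distance metric are my own assumptions:

```swift
// CPU sketch of the shader's test: a pixel within `sensitivity` of the
// target color becomes opaque white; anything else becomes fully
// transparent. Distance is Euclidean in RGB space (an assumption; the
// original shader may weight channels differently).
struct Pixel { var r, g, b, a: UInt8 }

func matchColor(in pixels: inout [Pixel], target: Pixel, sensitivity: Double) {
    for i in pixels.indices {
        let dr = Double(pixels[i].r) - Double(target.r)
        let dg = Double(pixels[i].g) - Double(target.g)
        let db = Double(pixels[i].b) - Double(target.b)
        let distance = (dr * dr + dg * dg + db * db).squareRoot()
        pixels[i] = distance <= sensitivity
            ? Pixel(r: 255, g: 255, b: 255, a: 255)  // within threshold: white
            : Pixel(r: 0, g: 0, b: 0, a: 0)          // outside: transparent
    }
}
```

Running this per frame on the CPU is exactly what the shader approach avoids, which is why the OpenGL ES version is needed for realtime video.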
Solution 5:[5]
The code worked for me; it just needed some tweaks. Here are the changes I made to get it working properly, by also assigning a value to the zeroth index of the dataBitmap[] array:
for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
    // The zeroth element of each pixel is now included in the sum,
    // so the threshold multiplies by four instead of three.
    if ((dataBitmap[i + 0] + dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 4 / 2)) {
        dataBitmap[i + 0] = 0;
        dataBitmap[i + 1] = 0;
        dataBitmap[i + 2] = 0;
        dataBitmap[i + 3] = 0;
    } else {
        dataBitmap[i + 0] = 255;
        dataBitmap[i + 1] = 255;
        dataBitmap[i + 2] = 255;
        dataBitmap[i + 3] = 255;
    }
}
Hope it will work.
Solution 6:[6]
Here's a Swift 3 solution:
class func pureBlackAndWhiteImage(_ inputImage: UIImage) -> UIImage? {
    guard let inputCGImage = inputImage.cgImage, let context = getImageContext(for: inputCGImage), let data = context.data else { return nil }
    let white = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
    let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)
    let width = Int(inputCGImage.width)
    let height = Int(inputCGImage.height)
    let pixelBuffer = data.bindMemory(to: RGBA32.self, capacity: width * height)
    for row in 0 ..< height {
        for column in 0 ..< width {
            let offset = row * width + column
            // Any pixel with a nonzero color channel becomes black;
            // pure black pixels become white.
            if pixelBuffer[offset].red > 0 || pixelBuffer[offset].green > 0 || pixelBuffer[offset].blue > 0 {
                pixelBuffer[offset] = black
            } else {
                pixelBuffer[offset] = white
            }
        }
    }
    guard let outputCGImage = context.makeImage() else { return nil }
    return UIImage(cgImage: outputCGImage, scale: inputImage.scale, orientation: inputImage.imageOrientation)
}
class func getImageContext(for inputCGImage: CGImage) -> CGContext? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = inputCGImage.width
    let height = inputCGImage.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo
    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }
    context.setBlendMode(.copy)
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))
    return context
}
struct RGBA32: Equatable {
    var color: UInt32

    var red: UInt8 {
        return UInt8((color >> 24) & 255)
    }
    var green: UInt8 {
        return UInt8((color >> 16) & 255)
    }
    var blue: UInt8 {
        return UInt8((color >> 8) & 255)
    }
    var alpha: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
}

func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
    return lhs.color == rhs.color
}
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | |
| Solution 3 | jmad8 |
| Solution 4 | Brad Larson |
| Solution 5 | Nazik |
| Solution 6 | Daniel McLean |