Core Graphics image masks are handy, but if you want to load the mask image from a file, things don’t always work the way you expect.
The function CGImageCreateWithMask() can take either a mask or an image as the second parameter, but it turns out that Core Graphics (at least on iOS) is pretty picky about what counts as an acceptable image for the mask.
I’ve seen this snippet of code suggested in a few places:
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(image),
                                    CGImageGetHeight(image),
                                    CGImageGetBitsPerComponent(image),
                                    CGImageGetBitsPerPixel(image),
                                    CGImageGetBytesPerRow(image),
                                    CGImageGetDataProvider(image),
                                    NULL,
                                    false);
The idea is that you create a mask from the same pixels as the loaded image, but it turns out that this code is not 100% reliable either.
The truth of the matter is that CGImage is an incredibly versatile object. The bits that represent the image can be in a variety of formats, bit depths, and colour spaces, and when you load an image from a file, you are not guaranteed what format those bits will be in. For example, there are reports online of image masks working when the mask image is saved one way from an image editing program, but not when it is saved a different way (e.g. http://stackoverflow.com/questions/1133248/any-idea-why-this-image-masking-code-does-not-work ).
Thus, I’ve found that the best and most reliable way to generate an image mask from an arbitrary image is to do this:
- Create a bitmap graphics context that is in an acceptable format for image masks
- Draw your image into this bitmap graphics context
- Create the image mask from the bits of the bitmap graphics context.
The following function has worked well for me so far:
CGImageRef createMaskWithImage(CGImageRef image)
{
    int maskWidth = CGImageGetWidth(image);
    int maskHeight = CGImageGetHeight(image);
    // round bytesPerRow to the nearest 16 bytes, for performance's sake
    int bytesPerRow = (maskWidth + 15) & 0xfffffff0;
    int bufferSize = bytesPerRow * maskHeight;

    // we use CFData instead of malloc(), because the memory has to stick around
    // for the lifetime of the mask. if we used malloc(), we'd have to
    // tell the CGDataProvider how to dispose of the memory when done. using
    // CFData is just easier and cleaner.
    CFMutableDataRef dataBuffer = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CFDataSetLength(dataBuffer, bufferSize);

    // the data will be 8 bits per pixel, no alpha
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(CFDataGetMutableBytePtr(dataBuffer),
                                             maskWidth, maskHeight,
                                             8, bytesPerRow,
                                             colourSpace, kCGImageAlphaNone);

    // drawing into this context will draw into the dataBuffer.
    CGContextDrawImage(ctx, CGRectMake(0, 0, maskWidth, maskHeight), image);
    CGContextRelease(ctx);

    // now make a mask from the data.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(dataBuffer);
    CGImageRef mask = CGImageMaskCreate(maskWidth, maskHeight,
                                        8, 8, bytesPerRow,
                                        dataProvider, NULL, FALSE);

    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(colourSpace);
    CFRelease(dataBuffer);

    return mask;
}
Example of use:
UIImage *maskSource = [UIImage imageNamed:@"mask.png"];
CGImageRef mask = createMaskWithImage(maskSource.CGImage);
Then use the mask as you wish, for example with the aforementioned CGImageCreateWithMask() or with CGContextClipToMask(). And don’t forget to dispose of the mask when you’re done: createMaskWithImage() returns the mask with a retain count of 1, and expects the caller to take ownership.
CGImageRelease(mask);
Hi,
thank you for this great tutorial.
I have tested it and it works fine, but I want to add one more function. I’d like the user to be able to display the mask, move and resize it, and then apply it to the original image. Is this possible?