Removing transparency from NSImage
I played around with a little test project that uses layer-backed NSImageViews for its graphical display. The hit-testing bit me just as I ran out of time, but a nice little NSImage category came out of it. It's not very fast, but it takes an NSImage and creates a copy with all the fully transparent areas around it removed. It also gives you a rectangle indicating where (relative to the upper left of the original image) the new image lies. So if you've created a layered image in Pixelmator, where every object in your scene is on its own layer, you can just export each layer as a full-size PNG, then use this category to cut them down to their actual size and calculate the rectangle each object will need on your screen. Much more fun than doing this manually.

The basic trick is doing a lockFocus on your image and then looping over the coordinates, doing an NSReadPixel() for each pixel. And there lies the problem: NSReadPixel() creates an NSColor object for every pixel, and tries to interpolate between pixels in the image if you end up at a coordinate between them (which I'm sure the code below still does).

I tried creating an NSBitmapImageRep with a fixed depth and examining its pixels using -getPixel:atX:y:, but that didn't seem much faster. Copying the whole image and resampling is apparently just as slow as creating a bunch of NSColors. I guess one would have to write code that works on the actual image data and can handle any depth, maybe drop down to CGImageRef, to get more performance out of this, but for an automated batch preprocessing tool this is already suitable. Here's the code:

@interface NSImage (UKRemoveTransparentAreas)
-(NSImage*) imageByRemovingTransparentAreasWithFinalRect: (NSRect*)outBox;
@end
@implementation NSImage (UKRemoveTransparentAreas)
-(NSImage*) imageByRemovingTransparentAreasWithFinalRect: (NSRect*)outBox
{
	NSRect oldRect = NSZeroRect;
	oldRect.size = [self size];
	*outBox = oldRect;
	
	[self lockFocus];
	
	// Cut off any empty rows at the bottom:
	for( int y = 0; y < oldRect.size.height; y++ )
	{
		for( int x = 0; x < oldRect.size.width; x++ )
		{
			NSColor* theCol = NSReadPixel( NSMakePoint( x, y ) );
			if( [theCol alphaComponent] > 0.01 )
				goto bottomDone;
		}
		outBox->origin.y += 1;
		outBox->size.height -= 1;
	}
	
bottomDone:
	// Cut off any empty rows at the top:
	for( int y = oldRect.size.height -1; y >= 0; y-- )
	{
		for( int x = 0; x < oldRect.size.width; x++ )
		{
			NSColor* theCol = NSReadPixel( NSMakePoint( x, y ) );
			if( [theCol alphaComponent] > 0.01 )
				goto topDone;
		}
		outBox->size.height -= 1;
	}
topDone:
	// Cut off any empty columns at the left:
	for( int x = 0; x < oldRect.size.width; x++ )
	{
		for( int y = 0; y < oldRect.size.height; y++ )
		{
			NSColor* theCol = NSReadPixel( NSMakePoint( x, y ) );
			if( [theCol alphaComponent] > 0.01 )
				goto leftDone;
		}
		outBox->origin.x += 1;
		outBox->size.width -= 1;
	}
leftDone:
	// Cut off any empty columns at the right:
	for( int x = oldRect.size.width -1; x >= 0; x-- )
	{
		for( int y = 0; y < oldRect.size.height; y++ )
		{
			NSColor* theCol = NSReadPixel( NSMakePoint( x, y ) );
			if( [theCol alphaComponent] > 0.01 )
				goto rightDone;
		}
		outBox->size.width -= 1;
	}
rightDone:
	[self unlockFocus];
	// Now create a new image with that subsection:
	NSImage* returnImg = [[[NSImage alloc] initWithSize: outBox->size] autorelease];
	NSRect destBox = *outBox;
	destBox.origin = NSZeroPoint;
	
	[returnImg lockFocus];
	[self drawInRect: destBox fromRect: *outBox operation: NSCompositeCopy fraction: 1.0];
	// Debugging aid: outlines the trimmed image in red so you can see the new bounds.
	// Remove these two lines for production use.
	[[NSColor redColor] set];
	[NSBezierPath strokeRect: destBox];
	[returnImg unlockFocus];
	
	return returnImg;
}

@end
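For completeness, calling it from a batch tool looks something like this (the file path here is just a placeholder, not part of any real project):

	// Trim an exported layer PNG and log where it sits on the original canvas.
	NSImage* layerImage = [[[NSImage alloc] initWithContentsOfFile: @"/tmp/Layer1.png"] autorelease];
	NSRect finalRect = NSZeroRect;
	NSImage* trimmedImage = [layerImage imageByRemovingTransparentAreasWithFinalRect: &finalRect];
	NSLog( @"Layer occupies %@ of the original canvas", NSStringFromRect( finalRect ) );
	// trimmedImage can now be written out, or handed to a layer-backed
	// NSImageView positioned at finalRect within the scene.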
And yes, I'm using goto; get over it. It just made the code much more readable, and as long as C doesn't add named loops so I could write break outerloop;, I guess it'll stay that way. It's not as if I were jumping all over the place, so the control flow stays obvious, and NSAutoreleasePools save me from any scope issues goto might or might not have.

Another approach I haven't tried yet, which might make this work in realtime, would be to use CoreImage. Maybe one could build a histogram of certain areas of the image (only the alpha channel) and thus quickly home in on big transparent spots. Anyone wanna chime in?

Peter Hosey writes:

> I tried creating an NSBitmapImageRep with a fixed depth and examining its pixels using -getPixel:atX:y:, but that didn't seem much faster.
The fastest way would be to create a CGBitmapContext with a pixel format of your choice, draw the source image into it, iterate directly on its backing buffer to determine the rect, and then use CGBitmapContextCreateImage and CGImageCreateWithImageInRect to crop out the desired image.
I don't think you can use Core Image for this. You already have the alpha channel, and there's nothing a CIFilter can do with it that helps solve the problem; and since the CI Filter Language has no data-dependent loops, it can't solve the complete problem on its own. You would have to dynamically generate the CIFL code to (looplessly) find the bounds of the image.
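A minimal sketch of that CGBitmapContext approach might look like the following. It assumes an 8-bit premultiplied ARGB buffer; the function name is made up, and error handling and the fully transparent image case are left out:

	// Draw the image into a bitmap whose layout we dictate, find the opaque
	// bounding box by scanning the alpha bytes, then crop with CoreGraphics.
	static CGImageRef CreateImageByTrimmingAlpha( NSImage *image, NSRect *outBox )
	{
		size_t width = (size_t)[image size].width;
		size_t height = (size_t)[image size].height;
		size_t bytesPerRow = width * 4;
		unsigned char *buffer = calloc( height, bytesPerRow );
		
		CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
		CGContextRef bitmapContext = CGBitmapContextCreate( buffer, width, height, 8,
										bytesPerRow, colorSpace,
										kCGImageAlphaPremultipliedFirst );
		CGColorSpaceRelease( colorSpace );
		
		// Draw the NSImage into the bitmap context via an NSGraphicsContext wrapper.
		[NSGraphicsContext saveGraphicsState];
		[NSGraphicsContext setCurrentContext:
			[NSGraphicsContext graphicsContextWithGraphicsPort: bitmapContext flipped: NO]];
		[image drawInRect: NSMakeRect( 0, 0, width, height ) fromRect: NSZeroRect
				operation: NSCompositeCopy fraction: 1.0];
		[NSGraphicsContext restoreGraphicsState];
		
		// Scan the alpha channel (first byte of each ARGB pixel with the default
		// byte order). Buffer row 0 is the top scanline, so the resulting rect is
		// in top-left-origin coordinates, which is what CGImageCreateWithImageInRect
		// expects; flip the y if you need Cocoa-style coordinates.
		size_t minX = width, minY = height, maxX = 0, maxY = 0;
		for( size_t y = 0; y < height; y++ )
		{
			for( size_t x = 0; x < width; x++ )
			{
				if( buffer[y * bytesPerRow + x * 4] > 2 )
				{
					if( x < minX ) minX = x;
					if( x > maxX ) maxX = x;
					if( y < minY ) minY = y;
					if( y > maxY ) maxY = y;
				}
			}
		}
		*outBox = NSMakeRect( minX, minY, maxX - minX + 1, maxY - minY + 1 );
		
		// Crop out the interesting part.
		CGImageRef fullImage = CGBitmapContextCreateImage( bitmapContext );
		CGImageRef cropped = CGImageCreateWithImageInRect( fullImage,
									CGRectMake( minX, minY, maxX - minX + 1, maxY - minY + 1 ) );
		CGImageRelease( fullImage );
		CGContextRelease( bitmapContext );
		free( buffer );
		
		return cropped;	// Caller releases.
	}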
Ken Ferry writes:

Hey Uli!
The recommended way to do something like this is to make a new NSBitmapImageRep in a _known_pixel_format_, draw the image into it, then examine the data of that bitmap. This is fast because you aren't churning through making objects or indirecting through pointers or anything like that. This is safe because the drawing machinery is basically canonicalizing your abstract image into a single format you understand - no need to deal specially with arbitrary depths or arbitrary anything.
There's a discussion in the AppKit release notes under "NSBitmapImageRep: CoreGraphics impedance matching and performance notes".
"So, to sum up:
(1) Drawing is fast. Playing with pixels is not.
(2) If you think you need to play with pixels, (a) consider if there's a way to do it with drawing or (b) look into CoreImage.
(3) If you still want to get at the pixels, draw into a bitmap whose format you know and look at those pixels."
This is case 3.
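A quick sketch of the approach Ken describes, drawing into an NSBitmapImageRep with a known format and then scanning its bytes directly; the specific format (meshed 8-bit RGBA) is my choice, image stands in for the source NSImage, and it's untested:

	// Draw the image into a bitmap whose pixel format we know, then scan it.
	NSRect bounds = NSZeroRect;
	bounds.size = [image size];
	
	NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc]
			initWithBitmapDataPlanes: NULL
			pixelsWide: bounds.size.width
			pixelsHigh: bounds.size.height
			bitsPerSample: 8
			samplesPerPixel: 4
			hasAlpha: YES
			isPlanar: NO
			colorSpaceName: NSCalibratedRGBColorSpace
			bytesPerRow: 0		// Let AppKit pick the row stride...
			bitsPerPixel: 0] autorelease];
	
	[NSGraphicsContext saveGraphicsState];
	[NSGraphicsContext setCurrentContext:
			[NSGraphicsContext graphicsContextWithBitmapImageRep: rep]];
	[image drawInRect: bounds fromRect: NSZeroRect operation: NSCompositeCopy fraction: 1.0];
	[NSGraphicsContext restoreGraphicsState];
	
	// ...but ask for the stride it actually used before scanning. Row 0 of
	// bitmapData is the topmost scanline, so flip y if you need Cocoa coordinates.
	unsigned char *data = [rep bitmapData];
	NSInteger bytesPerRow = [rep bytesPerRow];
	for( NSInteger y = 0; y < (NSInteger)bounds.size.height; y++ )
	{
		for( NSInteger x = 0; x < (NSInteger)bounds.size.width; x++ )
		{
			unsigned char alpha = data[y * bytesPerRow + x * 4 + 3];	// RGBA: alpha last
			if( alpha > 2 )
			{
				// Grow the bounding box here, as in the category above.
			}
		}
	}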