October Challenge

October 1st, 2010


I have decided to try my hand at the October Challenge, AKA PoV’s Challenge. It’s a personal challenge to create and sell at least one copy of a game before the end of October.

I’ve been meaning to participate in one of these game-making challenges for a while (like Ludum Dare, or Toronto Game Jam), but never have, partly because those 2- or 3-day sprints are a little too easy to procrastinate about (blink and they’re over! and I am a master procrastinator), and partly because they seem a little too intensive for me in my advancing age (I like sleep!).

This one seems like something I could actually do, while still being a kick-in-the-pants challenge. I like that PoV references NaNoWriMo in his post :)

Since I plan to make an iOS game, fulfilling the last part of the challenge (sell a copy) is partly at the mercy of Apple’s app review process, but I’m going to give it my best shot anyway. And of course I plan to blog about it all here.

Wish me [good] luck! And let me know if you’ll be participating too.

iOS4 multitasking: subtle UIViewController change

August 31st, 2010

UIApplicationDelegate changed a lot with the introduction of multitasking in iOS4 (see Dr. Touch’s post and charts [although there are still some small omissions and inaccuracies there]).

But UIApplicationDelegate was not the only class affected. UIViewController’s behaviour changes slightly in the presence of multitasking: specifically, the view(Will|Did)Disappear: methods.

If your iOS4-built app is running on iPhone OS 3, or if UIApplicationExitsOnSuspend is set to true in your Info.plist, then when the user presses the Home button, the frontmost view controller’s viewWillDisappear: and viewDidDisappear: methods are called before the app exits. However, if UIApplicationExitsOnSuspend is false and the app is running on a multitasking-capable device (iPhone 3GS or later, or a third-generation iPod touch), viewWillDisappear: and viewDidDisappear: are not called as the app enters the background.

That was a messy couple of sentences so here’s a chart!

UIApplicationExitsOnSuspend?   multitasking-capable device and OS?   When Home button is pressed:
false                          true                                  viewWill/DidDisappear: NOT called
false                          false                                 viewWill/DidDisappear: IS called
true                           true                                  viewWill/DidDisappear: IS called
true                           false                                 viewWill/DidDisappear: IS called

It’s subtle, but it might make a difference to your code.
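
If you rely on view(Will|Did)Disappear: for cleanup, one way to cover the background case is to also listen for UIApplicationDidEnterBackgroundNotification. Here’s a rough sketch (doCleanup is a hypothetical method you’d share with viewWillDisappear:; if you also deploy to iPhone OS 3, you’d want to check that the weak-linked notification name symbol exists first):

- (void)viewDidLoad
{
    [super viewDidLoad];
    //  viewWillDisappear: is skipped when the app is backgrounded on a
    //  multitasking device, so listen for the notification as well
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(enteredBackground:)
                                                 name:UIApplicationDidEnterBackgroundNotification
                                               object:nil];
}

- (void)enteredBackground:(NSNotification *)note
{
    [self doCleanup];   //  hypothetical shared cleanup method
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [super dealloc];
}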

The Sparrow Framework

August 22nd, 2010


When I first started iPhone programming last year, I decided I wanted to stay away from third-party frameworks at first, so I could learn as much of the native environment as possible. My first animation-based project used CALayers, but I later converted it to use OpenGL for better performance.

I am definitely not opposed to using third-party frameworks. When I’m not trying to wring the last bit of performance out of a device, I’d rather deal with higher-level abstractions than directly with OpenGL.

Cocos2D-iPhone is a very popular open source framework for 2D games and graphics applications. It seems very feature-rich, including things like visual effects, particle systems and even integrated physics engines!

But I was immediately drawn to the Sparrow Framework when I first heard about it. It, too, is an open source 2D graphics/game framework for iOS. It has far fewer features than Cocos2D (possibly a boon, depending on your outlook—less code to add to your app) but its main attraction (to me) is that it is modelled after the ActionScript 3 API. For someone like myself who has used Flash for many years, this is a definite plus.

When I was writing the Vampire simulator, I needed to make the vampire sparkle. I figured that this simple animation task would be well suited for my first exploration of the Sparrow framework.

Creating a new Sparrow app is very simple. Just duplicate the “scaffold” folder and rename the Xcode project within. You will have to do a one-time Xcode settings change: adding a SPARROW_SRC folder reference to point to where the Sparrow source files are on your hard drive.

The documentation that is available for Sparrow is minimal but very, very clearly written. Also, the source code is easy to follow. If you have any background with the ActionScript 3 class library, the learning curve is practically zero. I was shocked at how quickly I was making things happen with it.

Here’s a simple example from the vampire app. This snippet places the image “vampire.png” at the centre of the screen:

SPImage *image = [SPImage imageWithContentsOfFile:@"vampire.png"];
//  centre the image on screen
image.x = (self.width - image.width) / 2;
image.y = (self.height - image.height) / 2;
[self addChild:image];

Responding to events (touch events, or timing) will be familiar to you if you’ve used ActionScript (or JavaScript, for that matter), using the addEventListener method:

 [self addEventListener:@selector(onEnterFrame:) atObject:self forType:SP_EVENT_TYPE_ENTER_FRAME];

This will cause the onEnterFrame: method on self to be called on every frame of the animation.
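
A matching handler might look something like this sketch (updateSparkles: is a hypothetical method; I believe SPEnterFrameEvent exposes the elapsed time as passedTime):

- (void)onEnterFrame:(SPEnterFrameEvent *)event
{
    //  passedTime is the time since the previous frame, in seconds
    [self updateSparkles:event.passedTime];
}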

Refugees from Flash should note: while Sparrow is modelled after the ActionScript 3 libraries, it implements only a small subset of them. For example, it does not include any of the drawing API (on the other hand, if you want to do custom drawing, you can subclass SPDisplayObject and draw with OpenGL directly).

I definitely plan to use Sparrow for whatever my next game project might be. I’ll likely have more to say about it then. I’ll be interested to see how performance holds up if a lot of elements are flying around the screen.

Thoughts on iOS 4 camera APIs: privacy issues, new UI possibilities?

August 17th, 2010

While playing with the new AVFoundation APIs, it occurred to me that in iOS 4, apps can now easily access the camera with no feedback to the user. Before, apps had to use UIImagePickerController, which shows the iris-opening animation before recording starts, even if you hide the preview image using cameraViewTransform. With AVFoundation’s AVCaptureSession, there is no indication to the user at all that the camera is in use unless the app provides its own. There is no permission alert, nor any recording-indicator LED like many webcams have. An app could secretly be recording your face with the iPhone 4’s front-facing camera and sending it to who knows where. I wonder if Apple’s app review team checks for this in some way?
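
To illustrate how little is involved, here is a rough sketch of starting a capture session that shows nothing on screen (error handling omitted; assumes self adopts AVCaptureVideoDataOutputSampleBufferDelegate):

AVCaptureSession *session = [[AVCaptureSession alloc] init];

NSError *error = nil;
AVCaptureDevice *camera =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
[session addInput:input];

//  deliver frames to a delegate on a background queue; nothing is
//  displayed, and the user gets no hint that the camera is running
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
[session addOutput:output];

[session startRunning];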

On the other hand, the new APIs make it much easier to integrate non-photo-taking uses of the camera into an app. I could imagine using the iPhone 4’s front camera for non-touch gesture controls or facial expression recognition. Makes me wish I knew something about real time image processing!

Turn your iPhone into a vampire with AVFoundation and iOS 4

August 15th, 2010

iOS 4 added a lot to AVFoundation, including classes and APIs that give you much more control over the iPhone camera. One of the things you can now do with the camera is read the video frame data in real time.

In this post, I’ve created a simple demo that simulates a Twilight-style vampire. In the Twilight series, vampires aren’t hurt by daylight; instead, they sparkle. Yes, sparkle.

Here are a couple of screenshots from the app:

And here’s a low-quality video of the vampire simulator in action.

The app detects the amount of light shining on the phone by doing very simple image analysis of the incoming video frames from the camera. The brighter the image seen by the camera, the more sparkles it draws on the vampire.

So how does this all work?
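
In rough outline (a sketch, not the app’s actual code): assuming the AVCaptureVideoDataOutput is configured for kCVPixelFormatType_32BGRA pixels, the sample buffer delegate can average a colour channel over a sparse sampling of the frame. setSparkleLevel: below is a hypothetical method.

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef frame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(frame, 0);

    uint8_t *base      = (uint8_t *)CVPixelBufferGetBaseAddress(frame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(frame);
    size_t width       = CVPixelBufferGetWidth(frame);
    size_t height      = CVPixelBufferGetHeight(frame);

    //  average the green channel of every 4th pixel in each direction,
    //  a cheap approximation of overall scene brightness
    unsigned long total = 0, count = 0;
    for (size_t y = 0; y < height; y += 4) {
        uint8_t *row = base + y * bytesPerRow;
        for (size_t x = 0; x < width; x += 4) {
            total += row[x * 4 + 1];    //  green byte of a BGRA pixel
            count++;
        }
    }
    CVPixelBufferUnlockBaseAddress(frame, 0);

    //  0.0 = black, 1.0 = fully bright; a real app would hop to the
    //  main thread before touching any UI with this value
    float brightness = count ? total / (count * 255.0f) : 0.0f;
    [self setSparkleLevel:brightness];
}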

iDevBlogADay

August 8th, 2010

Like many people with barely-updated blogs, I want to blog more often. I’m not much of a writer but that can only change with practice, right? Plus, I’m a big believer in community and the sharing of knowledge, and I wanted to contribute to that more. But where would I find the writing discipline?

I’ve also been a fan of personal writing challenges for some time. For example, I’ve participated in National Novel Writing Month (NaNoWriMo) for many years, where the goal is to write a 50,000-word novel in 30 days (in November). The concrete and public goal, combined with the camaraderie of the others involved, makes for a fun creative exercise and certainly helps with motivation.

When I heard about #iDevBlogADay on Twitter, I had to know more about it. It apparently all started when independent iPhone developer @MysteryCoconut wanted the impetus to blog more often, a sentiment I can definitely relate to. What began as an offhand tweet has ballooned into what could become a bona fide movement.

Here’s how it works: Each day of the week is assigned to two indie iOS developers, who must publish a blog post on their assigned day. If they miss a day, they’re out and sent to the end of the current waiting list. The next blogger on the waiting list then takes their place.

What’s cool is that this kind of motivation helps everyone. The explosion of shared knowledge and inspiration pouring forth from these blogs has been pretty awesome.

It’s now my turn to take a spot on the Sunday roster. I have to admit I’m intimidated, as the quality of the blog posts has been high! I can only hope I can live up to the standards that have been set. (And I apologize for the “fluffy” nature of this post—I had a “crunchier” blog idea that I had apparently been sitting on for far too long, since it was rendered obsolete by recent versions of the iPhone OS. That’ll teach me!)

Hats off to MysteryCoconut for starting a fun “game” that helps and fosters the entire iDevelopment community :)

Loading an image mask from a file

July 23rd, 2010

[illustration of an image being masked]
Core Graphics image masks are handy, but if you want to load the mask image from a file, things don’t always work the way you expect.

The function CGImageCreateWithMask() can take either a mask or an image as the second parameter, but it turns out that Core Graphics (at least on iOS) is pretty picky about what is an acceptable image for the mask.

I’ve seen this snippet of code suggested in a few places:

CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(image),
                                    CGImageGetHeight(image),
                                    CGImageGetBitsPerComponent(image),
                                    CGImageGetBitsPerPixel(image),
                                    CGImageGetBytesPerRow(image),
                                    CGImageGetDataProvider(image),
                                    NULL, false);

The idea is to create a mask directly from the pixels of the loaded image, but it turns out that this code is not 100% reliable either.

The truth of the matter is that CGImage is an incredibly versatile object. The bits that represent the image can be in a variety of formats, bit depths, and colour spaces. When you load an image from a file, there is no guarantee what format those bits will be in. For example, there are reports online of image masks working when the mask image is saved one way from an image editing program, but not when it is saved another way (e.g. http://stackoverflow.com/questions/1133248/any-idea-why-this-image-masking-code-does-not-work ).

Thus, I’ve found that the best and most reliable way to generate an image mask from an arbitrary image is to do this:

  1. Create a bitmap graphics context that is in an acceptable format for image masks.
  2. Draw your image into this bitmap graphics context.
  3. Create the image mask from the bits of the bitmap graphics context.

The following function has worked well for me so far:

CGImageRef createMaskWithImage(CGImageRef image)
{
    int maskWidth               = CGImageGetWidth(image);
    int maskHeight              = CGImageGetHeight(image);
    //  round bytesPerRow up to a multiple of 16 bytes, for performance's sake
    int bytesPerRow             = (maskWidth + 15) & 0xfffffff0;
    int bufferSize              = bytesPerRow * maskHeight;
 
    //  we use CFData instead of malloc(), because the memory has to stick around
    //  for the lifetime of the mask. if we used malloc(), we'd have to
    //  tell the CGDataProvider how to dispose of the memory when done. using
    //  CFData is just easier and cleaner.
 
    CFMutableDataRef dataBuffer = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CFDataSetLength(dataBuffer, bufferSize);
 
    //  the data will be 8 bits per pixel, no alpha
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx            = CGBitmapContextCreate(CFDataGetMutableBytePtr(dataBuffer),
                                                        maskWidth, maskHeight,
                                                        8, bytesPerRow, colourSpace, kCGImageAlphaNone);
    //  drawing into this context will draw into the dataBuffer.
    CGContextDrawImage(ctx, CGRectMake(0, 0, maskWidth, maskHeight), image);
    CGContextRelease(ctx);
 
    //  now make a mask from the data.
    CGDataProviderRef dataProvider  = CGDataProviderCreateWithCFData(dataBuffer);
    CGImageRef mask                 = CGImageMaskCreate(maskWidth, maskHeight, 8, 8, bytesPerRow,
                                                        dataProvider, NULL, FALSE);
 
    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(colourSpace);
    CFRelease(dataBuffer);
 
    return mask;
}

Example of use:

UIImage *maskSource = [UIImage imageNamed:@"mask.png"];
CGImageRef mask = createMaskWithImage(maskSource.CGImage);

Then use the mask as you wish, for example in the aforementioned CGImageCreateWithMask() or CGContextClipToMask().
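
For instance, a masked UIImage could be produced like this (a quick sketch; “photo.png” is just a placeholder):

UIImage *photo       = [UIImage imageNamed:@"photo.png"];
CGImageRef maskedCG  = CGImageCreateWithMask(photo.CGImage, mask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedCG];
CGImageRelease(maskedCG);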

And don’t forget to dispose of the mask when you’re done. createMaskWithImage() returns the mask with a retain count of 1, and expects the caller to take ownership.

CGImageRelease(mask);

Uncanny resemblance

June 7th, 2010

iPhone offline web applications: tips and gotchas

January 24th, 2010

I spent this evening updating my “iPhone VR” JavaScript/CSS demo to work with iPhone OS 3.0 (it had stopped working 100% correctly since the OS update). I also decided to spend some time making it work as an offline-capable web application.

The basic process of making your web app cacheable offline is, in theory, fairly straightforward, and generally well-documented at Apple’s website. I ran into some interesting headaches though.

Gotchas:

  • First thing to note is that your web server must serve your cache manifest file with a MIME type of text/cache-manifest. This may mean editing mime.types to add a line that looks like this:
    text/cache-manifest   manifest

    or perhaps a line in httpd.conf that looks like this:

    AddType text/cache-manifest  .manifest
  • The next thing that had me stumped for a while: while your web server may know how to take the URL http://example.com/webapp/ and automatically and invisibly serve up the file http://example.com/webapp/index.html, your offline web app knows nothing of this mapping of webapp/ to webapp/index.html. If the URL bar reads http://example.com/webapp/ and you save it locally using a home screen bookmark, it will fail to launch correctly if the device is offline, even if the index.html file has been cached. The device simply does not know to look for webapp/index.html instead of webapp/. Thus, you must ensure that the URL bar reads webapp/index.html before the user makes a home screen bookmark.
  • At first, I thought I would enforce the above with a tiny bit of JavaScript that simply reads window.location and redirects to window.location + "index.html" if the URL ends in a slash. This worked while online, but it broke my app when offline, even when the redirection was not taken. Why? It seems that any reference to window.location in your script is treated as network access. Since the device is offline, it generates an error alert.

    Instead, I made my app live at webapp/main.html, and created a small webapp/index.html file that simply redirects to main.html (see the sketch below). (I could have used a server redirect inside of an .htaccess file instead, but I chose not to, for no particular reason.)
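
    A redirect page like that can avoid JavaScript entirely. For example, a plain meta refresh would do it (a sketch, not necessarily the exact file I used):

    <html>
      <head>
        <!-- send the browser straight to the cached app page -->
        <meta http-equiv="refresh" content="0; url=main.html">
      </head>
      <body></body>
    </html>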

Tips:

  • You can specify a custom icon for the home screen bookmark using a <link> element, like so:
    <link rel="apple-touch-icon" href="custom_icon.png" />

    If you don’t want iPhone to automatically apply the “shine” effect on your icon, use the following instead:

    <link rel="apple-touch-icon-precomposed" href="custom_icon.png" />
  • Since iPhone OS 3.0, offline web apps can have custom splash screens, just like a native app! Simply add a link element to your web page:
    <link rel="apple-touch-startup-image" href="/splash.png" />

If you want to see the results of all this, go to http://bunnyherolabs.com/iphone/xform/ on your iPhone or iPod touch, then tap the “+” (add bookmark) button and choose “Add to Home Screen.” Now you can tap the “Panorama” icon on your home screen to see the demo at any time, even when not connected to the internet.

How universal is “universal”?

January 1st, 2010

Tip: if building a +universal variant in MacPorts (for example, because you’re trying to build and install the Python Imaging Library), then you should check /opt/local/etc/macports/macports.conf to see what MacPorts considers “Universal”:

universal_archs
    The machine architectures to use for +universal variant (multiple entries must be space delimited). Options include: ppc, i386, ppc64, x86_64
    Default: x86_64 i386 (ppc i386 for 10.5 and earlier)

(from http://guide.macports.org/#internals.configuration-files)

On Mac OS X 10.6 (Snow Leopard), the default value for universal_archs is x86_64 i386. Note that it does not include the PowerPC architecture. This makes perfect sense, because Snow Leopard doesn’t run on PowerPCs. Unfortunately, when installing PIL with pip, it builds for PPC as well, and thus requires PPC architectures in its dependent libraries, even on 10.6. I do not know how to disable PPC support in PIL (or pip?). All of this Python extension building stuff is new to me :)

The upshot: if you are running 10.6, then you must edit macports.conf and add “ppc” to universal_archs before you follow the directions in the linked article.
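
On 10.6 the edited line in /opt/local/etc/macports/macports.conf would then look something like this:

universal_archs         x86_64 i386 ppc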