October Challenge

October 1st, 2010 § Comments Off on October Challenge § permalink

I have decided to try my hand at the October Challenge, AKA PoV’s Challenge. It’s a personal challenge to create and sell at least one copy of a game before the end of October.

I’ve been meaning to participate in one of these game-making challenges for a while (like Ludum Dare, or Toronto Game Jam), but never have, partly because those 2- or 3-day sprints are a little too easy to procrastinate about (blink and they’re over! and I am a master procrastinator), and partly because they seem a little too intensive for me in my advancing age (I like sleep!).

This one seems like something I could actually do, while still being a kick-in-the-pants challenge. I like that PoV references NaNoWriMo in his post :)

Since I plan to make an iOS game, fulfilling the last part of the challenge (sell a copy) is partly at the mercy of Apple’s app review process, but I’m going to give it my best shot anyway. And of course I plan to blog about it all here.

Wish me [good] luck! And let me know if you’ll be participating too.

iOS4 multitasking: subtle UIViewController change

August 31st, 2010 § Comments Off on iOS4 multitasking: subtle UIViewController change § permalink

UIApplicationDelegate changed a lot with the introduction of multitasking in iOS4 (see Dr. Touch’s post and charts [although there are still some small omissions and inaccuracies there]).

But UIApplicationDelegate was not the only class affected. UIViewController’s behaviour changes slightly in the presence of multitasking: namely, the view(Will|Did)Disappear: methods.

If your iOS4-built app is running on iPhone OS 3, or if UIApplicationExitsOnSuspend is set to true, then when the user presses the Home button, the frontmost view controller’s viewWillDisappear: and viewDidDisappear: methods are called before the app exits. However, if UIApplicationExitsOnSuspend is false and the app is running on a multitasking-capable device (iPhone 3GS or later; iPod touch 3rd generation), viewWillDisappear: and viewDidDisappear: are not called when the app enters the background.

That was a messy couple of sentences so here’s a chart!

UIApplicationExitsOnSuspend?   multitasking-capable device and OS?   When Home button is pressed:
false                          true                                  viewWill/DidDisappear: NOT called
false                          false                                 viewWill/DidDisappear: IS called
true                           true                                  viewWill/DidDisappear: IS called
true                           false                                 viewWill/DidDisappear: IS called

It’s subtle, but it might make a difference to your code.
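If you were relying on those methods for cleanup, one workaround (my own sketch, using the standard iOS 4 notification, not something from the post above) is to also listen for UIApplicationDidEnterBackgroundNotification in the view controller:

```
//  In a UIViewController subclass: also run the "disappear" cleanup when
//  the app is backgrounded, since viewWillDisappear: won't fire then.
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(appDidEnterBackground:)
                                                 name:UIApplicationDidEnterBackgroundNotification
                                               object:nil];
}

- (void)appDidEnterBackground:(NSNotification *)note {
    //  pause timers, save state, etc. -- whatever viewWillDisappear: would have done
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [super dealloc];
}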

The Sparrow Framework

August 22nd, 2010 § Comments Off on The Sparrow Framework § permalink

When I first started iPhone programming last year, I decided I wanted to stay away from third-party frameworks at first, so I could learn as much of the native environment as possible. My first animation-based project used CALayers, but I later converted it to use OpenGL for better performance.

I am definitely not opposed to using third-party frameworks. When I’m not trying to wring the last bit of performance out of a device, I’d rather deal with higher-level abstractions than directly with OpenGL.

Cocos2D-iPhone is a very popular open source framework for 2D games and graphics applications. It seems very feature-rich, including things like visual effects, particle systems and even integrated physics engines!

But I was immediately drawn to the Sparrow Framework when I first heard about it. It, too, is an open source 2D graphics/game framework for iOS. It has far fewer features than Cocos2D (possibly a boon, depending on your outlook—less code to add to your app) but its main attraction (to me) is that it is modelled after the ActionScript 3 API. For someone like myself who has used Flash for many years, this is a definite plus.

When I was writing the Vampire simulator, I needed to make the vampire sparkle. I figured that this simple animation task would be well suited for my first exploration of the Sparrow framework.

Creating a new Sparrow app is very simple. Just duplicate the “scaffold” folder and rename the Xcode project within. You will have to do a one-time Xcode settings change: adding a SPARROW_SRC folder reference to point to where the Sparrow source files are on your hard drive.

The documentation that is available for Sparrow is minimal but very, very clearly written. Also, the source code is easy to follow. If you have any background with the ActionScript 3 class library, the learning curve is practically zero. I was shocked at how quickly I was making things happen with it.

Here’s a simple example from the vampire app. This snippet places the image “vampire.png” at the centre of the screen:

SPImage *image = [SPImage imageWithContentsOfFile:@"vampire.png"];
image.x = (self.width - image.width) / 2;
image.y = (self.height - image.height) / 2;
[self addChild:image];

Responding to events (touch events, or timing) will be familiar to you if you’ve used ActionScript (or JavaScript, for that matter), using the addEventListener method:

 [self addEventListener:@selector(onEnterFrame:) atObject:self forType:SP_EVENT_TYPE_ENTER_FRAME];

This will cause the onEnterFrame: method on self to be called on every frame of the animation.
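For completeness, here is what a matching handler might look like (a sketch based on Sparrow 1.x, where enter-frame handlers receive an SPEnterFrameEvent; mImage is a hypothetical instance variable, and I'm assuming the event's passedTime property holds the seconds elapsed since the last frame):

```
//  a minimal enter-frame handler: rotate a display object a little each frame,
//  scaled by the time elapsed so the speed is frame-rate independent.
- (void)onEnterFrame:(SPEnterFrameEvent *)event
{
    mImage.rotation += SP_D2R(90.0f) * event.passedTime;  //  90 degrees per second
}
```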

Refugees from Flash should note: while Sparrow is modelled after the ActionScript 3 libraries, it is only a small, small subset of them. For example, it does not include any of the drawing API (on the other hand, if you want to do custom drawing, you can subclass SPDisplayObject and draw with OpenGL directly).

I definitely plan to use Sparrow for whatever my next game project might be. I’ll likely have more to say about it then. I’ll be interested to see how performance holds up if a lot of elements are flying around the screen.

Thoughts on iOS 4 camera APIs: privacy issues, new UI possibilities?

August 17th, 2010 § Comments Off on Thoughts on iOS 4 camera APIs: privacy issues, new UI possibilities? § permalink

While playing with the new AVFoundation APIs, it occurred to me that in iOS 4, apps can now easily access the camera with no feedback to the user. Before, apps had to use UIImagePickerController, which shows the iris-opening animation before recording starts, even if you hide the preview image using cameraViewTransform. With AVFoundation’s AVCaptureSession, there is no indication to the user at all that the camera is in use unless the app provides its own. There is no permission alert, nor any LED indicator like a webcam. An app could secretly be recording your face with the iPhone 4’s front-facing camera and sending it to who knows where. I wonder if Apple’s app review team checks for this in some way?
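To illustrate how little ceremony is involved, here is roughly what starting a silent capture session looks like (a sketch with error handling omitted; `self` is assumed to be a delegate conforming to AVCaptureVideoDataOutputSampleBufferDelegate):

```
//  start receiving camera frames with no visible UI at all (iOS 4).
AVCaptureSession *session = [[AVCaptureSession alloc] init];

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:NULL];
[session addInput:input];

AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:output];

//  no iris animation, no preview layer, no indicator of any kind:
[session startRunning];
```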

On the other hand, the new APIs make it much easier to integrate non-photo-taking uses of the camera into an app. I could imagine using the iPhone 4’s front camera for non-touch gesture controls or facial expression recognition. Makes me wish I knew something about real time image processing!

Turn your iPhone into a vampire with AVFoundation and iOS 4

August 15th, 2010 § 7 comments § permalink

iOS 4 added a lot to AVFoundation, including classes and APIs that give you much more control over the iPhone camera. One of the things you can now do with the camera is read the video frame data in real time.

In this post, I’ve created a simple demo that simulates a Twilight-style vampire. In the Twilight series, vampires aren’t hurt by daylight; instead, they sparkle. Yes, sparkle.

Here are a couple of screenshots from the app:

And here’s a low-quality video of the vampire simulator in action.

The app detects the amount of light shining on the phone by doing very simple image analysis of the incoming video frames from the camera. The brighter the image seen by the camera, the more sparkles it draws on the vampire.
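Roughly, the per-frame work amounts to averaging the brightness of the pixels. This is a sketch of the idea, not the app's exact code, and it assumes the capture output has been configured to deliver 32-bit BGRA frames:

```
//  in the AVCaptureVideoDataOutput delegate: estimate frame brightness (0-255)
//  by averaging the green channel of a BGRA frame (green is a decent luminance proxy).
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef frame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(frame, 0);

    uint8_t *pixels    = CVPixelBufferGetBaseAddress(frame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(frame);
    size_t width       = CVPixelBufferGetWidth(frame);
    size_t height      = CVPixelBufferGetHeight(frame);

    unsigned long long total = 0;
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = pixels + y * bytesPerRow;
        for (size_t x = 0; x < width; x++)
            total += row[x * 4 + 1];    //  G component of BGRA
    }
    unsigned brightness = (unsigned)(total / (width * height));

    CVPixelBufferUnlockBaseAddress(frame, 0);
    //  ...use brightness to decide how many sparkles to draw...
}
```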

So how does this all work?
» Read the rest of this entry «


iDevBlogADay

August 8th, 2010 § Comments Off on iDevBlogADay § permalink

Like many people with barely-updated blogs, I want to blog more often. I’m not much of a writer but that can only change with practice, right? Plus, I’m a big believer in community and the sharing of knowledge, and I wanted to contribute to that more. But where would I find the writing discipline?

I’ve also been a fan of personal writing challenges for some time. For example, I’ve participated in National Novel Writing Month (NaNoWriMo) for many years, where the goal is to write a 50,000-word novel in 30 days (in November). The concrete and public goal, combined with the camaraderie of the others involved, makes for a fun creative exercise and certainly helps with motivation.

When I heard about #iDevBlogADay on Twitter, I had to know more about it. It apparently all started when independent iPhone developer @MysteryCoconut wanted the impetus to blog more often, a sentiment I can definitely relate to. What began as an offhand tweet has ballooned into what could become a bona fide movement.

Here’s how it works: Each day of the week is assigned to two indie iOS developers. They must post a blog post on their assigned day. If they miss a day, they’re out and sent to the end of the current waiting list. The next blogger in the waiting list now takes that person’s place.

What’s cool about this is that this kind of motivation helps everyone. The explosion of shared knowledge and inspiration pouring forth from these blogs has been pretty awesome.

It’s now my turn to take a spot on the Sunday roster. I have to admit I’m intimidated, as the quality of the blog posts has been high! I can only hope I can live up to the standards that have been set. (And I apologize for the “fluffy” nature of this post—I had a “crunchier” blog idea that I had apparently been sitting on for far too long, since it was rendered obsolete by recent versions of the iPhone OS. That’ll teach me!)

Hats off to MysteryCoconut for starting a fun “game” that helps and fosters the entire iDevelopment community :)

Loading an image mask from a file

July 23rd, 2010 § 1 comment § permalink

illustration of an image being masked
Core Graphics image masks are handy, but if you want to load the mask image from a file, things don’t always work the way you expect.

The function CGImageCreateWithMask() can take either a mask or an image as the second parameter, but it turns out that Core Graphics (at least on iOS) is pretty picky about what is an acceptable image for the mask.

I’ve seen this snippet of code suggested in a few places:

CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(image),
CGImageGetHeight(image), CGImageGetBitsPerComponent(image),
CGImageGetBitsPerPixel(image), CGImageGetBytesPerRow(image),
CGImageGetDataProvider(image), NULL, false);

The idea is that you create a mask from the pixels of the loaded image, but it turns out that this code is not 100% reliable either.

The truth of the matter is that CGImage is an incredibly versatile object. The bits that represent the image can be in a variety of formats, bit depths, and colour spaces. When you load an image from a file, you have no guarantee what format those bits will be in. For example, there are reports online of image masks working when the mask image is saved one way from an image-editing program, but not when it is saved another way (e.g. http://stackoverflow.com/questions/1133248/any-idea-why-this-image-masking-code-does-not-work ).

Thus, I’ve found that the best and most reliable way to generate an image mask from an arbitrary image is to do this:

  1. Create a bitmap graphics context that is in an acceptable format for image masks
  2. Draw your image into this bitmap graphics context
  3. Create the image mask from the bits of the bitmap graphics context.

The following function has worked well for me so far:

CGImageRef createMaskWithImage(CGImageRef image)
{
    int maskWidth               = CGImageGetWidth(image);
    int maskHeight              = CGImageGetHeight(image);
    //  round bytesPerRow up to the next multiple of 16 bytes, for performance's sake
    int bytesPerRow             = (maskWidth + 15) & 0xfffffff0;
    int bufferSize              = bytesPerRow * maskHeight;
    //  we use CFData instead of malloc(), because the memory has to stick around
    //  for the lifetime of the mask. if we used malloc(), we'd have to
    //  tell the CGDataProvider how to dispose of the memory when done. using
    //  CFData is just easier and cleaner.
    CFMutableDataRef dataBuffer = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CFDataSetLength(dataBuffer, bufferSize);
    //  the data will be 8 bits per pixel, no alpha
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx            = CGBitmapContextCreate(CFDataGetMutableBytePtr(dataBuffer),
                                                        maskWidth, maskHeight,
                                                        8, bytesPerRow, colourSpace, kCGImageAlphaNone);
    //  drawing into this context will draw into the dataBuffer.
    CGContextDrawImage(ctx, CGRectMake(0, 0, maskWidth, maskHeight), image);
    //  now make a mask from the data.
    CGDataProviderRef dataProvider  = CGDataProviderCreateWithCFData(dataBuffer);
    CGImageRef mask                 = CGImageMaskCreate(maskWidth, maskHeight, 8, 8, bytesPerRow,
                                                        dataProvider, NULL, FALSE);
    //  the mask retains everything it needs, so release our intermediate objects
    CGDataProviderRelease(dataProvider);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colourSpace);
    CFRelease(dataBuffer);
    return mask;
}

Example of use:

UIImage *maskSource = [UIImage imageNamed:@"mask.png"];
CGImageRef mask = createMaskWithImage(maskSource.CGImage);

Then use the mask as you wish, for example with the aforementioned CGImageCreateWithMask() or CGContextClipToMask().

And don’t forget to dispose of the mask when you’re done. createMaskWithImage() returns the mask with a retain count of 1, and expects the caller to take ownership.
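Putting it together, usage might look like this (a sketch; `photo` is a hypothetical UIImage to be masked):

```
//  apply the mask to another image, then release what we own.
UIImage *maskSource = [UIImage imageNamed:@"mask.png"];
CGImageRef mask     = createMaskWithImage(maskSource.CGImage);

CGImageRef masked   = CGImageCreateWithMask(photo.CGImage, mask);
UIImage *result     = [UIImage imageWithCGImage:masked];

CGImageRelease(masked);  //  UIImage retains the CGImage it wraps
CGImageRelease(mask);    //  createMaskWithImage() gave us ownership
```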


Uncanny resemblance

June 7th, 2010 § Comments Off on Uncanny resemblance § permalink

Planet iDev tweak

March 4th, 2010 § Comments Off on Planet iDev tweak § permalink

Some of the articles included by Planet iDev were getting pretty long, so I’ve decided to display just excerpts of the posts on the website rather than the entire articles. The RSS/Atom feeds should still have the full text.

Some of the excerpts seem to be showing up empty. I’m not 100% sure what is causing that, but I’ll look into it (I made some changes to FeedWordPress, so it’s likely something I did :)


Planet iDev

January 31st, 2010 § Comments Off on Planet iDev § permalink

When I was doing more Flash development, one of the most valuable web resources I used was Adobe Feeds. It aggregates many, many Adobe-oriented developer blogs from all over the web. Not only is it a great place to keep an eye on the pulse of the Flash development world, but it’s where smaller Flash developers can get more exposure to more of the community. Thanks to my own blog’s inclusion in Adobe Feeds, I was able to get some answers to some Flash questions more quickly than I probably would have otherwise, all while sharing what knowledge I had to as broad an audience as possible.

When I started doing iPhone development, I looked for something similar for iPhone dev blogs. While there are a lot of iPhone developers out there blogging, I could not find a good aggregator. I did find Planet iPhone SDK, but it does not seem to be active (I tried contacting the owner, but did not receive a response). Planet Cocoa is good and quite active, but iPhone development is just part of the coverage there (the rest is Mac desktop development, which I don’t do myself).

Thus, I am starting Planet iDev. Armed with FeedWordPress and a free WordPress theme, I have cobbled together a simple blog aggregator. I should note that many of the blogs I am aggregating I found on this tremendously helpful post by Travis Dunn.

Having never run a blog aggregator before, I am unsure of the ethics and etiquette of Planet-style sites. I have no intention of financially profiting from other people’s hard work, nor do I want to negatively impact others’ endeavours. With that in mind:

  • There will never, ever be ads on Planet iDev.
  • All authors are credited, and all articles link back to their original sources (as do all of the items in the Planet iDev feed).
  • I have set up the WordPress install to not be indexed by search engines.

Regardless, if any blog owner wants to be removed from this aggregation, I will be more than happy to do so, and I offer my sincerest apologies.

On the other hand, if any iPhone developer out there wants their blog to be added to this Planet, simply email me at planetidev@gmail.com (or comment on this post) with your blog’s information, and it will be considered for inclusion. Preference will be given to blogs that focus on the development and production process, rather than blogs that are mostly for product promotion.

I still have some questions regarding Planet etiquette. While most Planet sites seem to include the entire contents of the aggregated blogs’ posts, I wonder if it would be more polite to only display an excerpt (but include the entire article in the feed). Please share any thoughts you have on this matter, or on Planet iDev in general.

I hope others out there find this resource valuable and useful, and I hope this helps encourage a sense of community. Thanks!

Visit Planet iDev.