Sunday, June 16, 2013

A basic entity system using Haxe and NME

A while back I learned about a game development pattern called "component-entity system" (or "entity-component system" or just "entity system"). It's a brilliant architectural approach for reducing complexity in game development. When you use typical object-oriented inheritance your object graph can become more complex as you add functionality. But with an entity system you use composition, constructing entities from components and minimizing (if not completely flattening) your object graph.

An entity is like a namespace for components. In fact, an entity doesn't need to be more than an identifier that associates components. Components themselves are pure data; they could be primitives or even structs, but it's convenient to express them as classes. Systems are the third big aspect of this pattern: systems look for entities with components they care about and take some action.

To make things happen in an entity system, you create a bunch of components, associate those components with entities, and then set up a bunch of systems to process the entities. It's typical to have systems for rendering, movement, input, AI, physics, and so on, and have components that are used by (and modified by) these systems such as a texture component, a location component, a player-character tag component, etc. Simply by wiring up the right set of components you can create new entities.
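To make this concrete, here's a minimal sketch in Haxe (all names here are my own, not from any particular library): the entity is nothing but an Int, the components are plain data classes, and the system acts on any entity that has the components it cares about.

```haxe
// A component is just data.
class Position {
    public var x:Float;
    public var y:Float;
    public function new(x:Float, y:Float) { this.x = x; this.y = y; }
}

class Velocity {
    public var dx:Float;
    public var dy:Float;
    public function new(dx:Float, dy:Float) { this.dx = dx; this.dy = dy; }
}

// A system processes every entity that has the components it needs.
class MovementSystem {
    public function new() {}

    public function update(entities:Array<Int>,
                           positions:Map<Int, Position>,
                           velocities:Map<Int, Velocity>):Void {
        for (id in entities) {
            if (positions.exists(id) && velocities.exists(id)) {
                var p = positions.get(id);
                var v = velocities.get(id);
                p.x += v.dx;
                p.y += v.dy;
            }
        }
    }
}
```

Notice that an "entity" never appears as a class at all; it's just the Int key that ties a Position and a Velocity together.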

The best part is you can re-use functionality just about anywhere without having to worry about whether that functionality is available in one of your entity base classes. The functionality is in your systems. Just add the right component and you're good to go.

There's a universe of information out there already about entity systems. Instead of explaining them here I highly recommend you visit the Entity Systems Project. It's a great starting point for learning more about this approach.

One thing I think is really neat about entity systems is that the concept can be used outside of game development. In normal application development, web development, or enterprise development the maxim "favor composition over inheritance" is well known. The way entity systems turn inheritance on its head and make you re-think how you use objects can have a big impact. It's kind of like learning about OOP for the first time. I feel like I have a new and powerful tool at my disposal, and I have found ways to re-use the knowledge in my day-to-day work.

My implementation

So, I wanted to write my own entity system. I built it using Haxe and NME. I haven't touched it in a while, and I feel like it could be a great starting point for a more complete engine, so I wanted to release it in case anyone might be able to learn from it or re-use it in their own project. There are other entity systems out there for NME, I believe, but there are some things about my implementation that I think are interesting and might be useful.

One of my big initial hang-ups with entity systems was the question of how systems know about the entities they wish to process. There are lots of options here: You could explicitly associate entities and systems, or you could have systems look for entities with a particular component attached, etc. The approach I chose was to use a filter-based system. Each time a component is added to or removed from an entity, systems are notified and they can process the entity through a series of filters to determine if they care about that entity. For example, I have a RequiredEntityFilter that indicates some set of components must exist in an entity for the system to process it, and an ExcludedEntityFilter that does the opposite. You can chain these filters, allowing you to say things like "I care about entities that have both a location component and a velocity component but do not have a player-character component", and so on.
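A sketch of what such filters can look like in Haxe (these signatures are illustrative; the real implementation may differ). The Entity class here just tracks which component classes it holds:

```haxe
// Minimal entity that only knows which component classes it has.
class Entity {
    var components:Map<String, Dynamic>;
    public function new() { components = new Map(); }
    public function add(c:Dynamic):Void {
        components.set(Type.getClassName(Type.getClass(c)), c);
    }
    public function has(cls:Class<Dynamic>):Bool {
        return components.exists(Type.getClassName(cls));
    }
}

interface IEntityFilter {
    function matches(e:Entity):Bool;
}

// Passes only entities that have every required component.
class RequiredEntityFilter implements IEntityFilter {
    var required:Array<Class<Dynamic>>;
    public function new(required:Array<Class<Dynamic>>) { this.required = required; }
    public function matches(e:Entity):Bool {
        for (c in required) if (!e.has(c)) return false;
        return true;
    }
}

// Passes only entities that have none of the excluded components.
class ExcludedEntityFilter implements IEntityFilter {
    var excluded:Array<Class<Dynamic>>;
    public function new(excluded:Array<Class<Dynamic>>) { this.excluded = excluded; }
    public function matches(e:Entity):Bool {
        for (c in excluded) if (e.has(c)) return false;
        return true;
    }
}
```

Chaining a RequiredEntityFilter for [Location, Velocity] with an ExcludedEntityFilter for [PlayerCharacter] expresses exactly the example above.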

Another issue I struggled with was how to deal with input. I created a Buffer class that provides a means of examining the set of input events that occurred during the last frame. This avoids the issue of systems having to be updated via some notification mechanism when an input event occurs. My goal was to make the systems as "pure" as possible and not use callbacks, events, or anything like that to achieve functionality. In retrospect, I think this was a little naive. It's probably better to think of the entity system as the "model" of your application, and find a way to wire the input events, which are part of the "view", to that model using some sensible mechanism. I think what I have here is a decent first pass at this but it could be improved.

There are some other little gems in there. Take a look at the code and see if it might be of use to you. I spent a lot of time working on it about a year ago and I hope it can help someone else.

Tuesday, August 28, 2012

Basic animation with NME and drawTiles

In this post I'm going to cover the basics of animation with drawTiles in NME. It builds upon the project in my previous post, Getting started with Haxe and NME.

Source for this example can be downloaded from github.

The fastest way to draw sprites with NME is to use a technique called blitting. This basically means you copy the pixel data from one bitmap into another, and then draw the final composed image to the screen. This is opposed to drawing several bitmaps to the screen, which is generally slower.

In NME this is done via the Tilesheet class and its drawTiles method. (Joshua Granick posted some benchmark numbers, if you'd like to see just how fast drawTiles is.)

Using drawTiles is pretty easy. All we need is a Tilesheet and some tile data, which is an array of Floats telling the Tilesheet what to draw.

First I'm going to declare two instance variables:
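Something like the following (the variable names are my own, but they match how the rest of this post refers to them):

```haxe
private var tilesheet:Tilesheet;
private var tileData:Array<Float>;
```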

Then later I can set them up:
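The setup might look roughly like this, assuming the spritesheet is included as "images/chef.png" (a hypothetical path; this uses nme.display.Tilesheet, nme.geom.Rectangle and nme.Assets):

```haxe
tilesheet = new Tilesheet(Assets.getBitmapData("images/chef.png"));
tilesheet.addTileRect(new Rectangle(4, 0, 8, 16));   // tile ID 0
tilesheet.addTileRect(new Rectangle(20, 0, 8, 16));  // tile ID 1
```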

Here I create my Tilesheet with a spritesheet asset, which is basically just an image that contains all my animation frames.

After creating the Tilesheet I tell it the location of my two tiles: The first starts at position 4, 0 and is 8 pixels wide by 16 pixels high. The second starts at position 20, 0 and is also 8x16.

Now I need to set up my tileData, and here's where things get a little more interesting. We're eventually going to ask our Tilesheet to drawTiles using this tileData. The data is formatted as an array of Floats, and each position in the array has a specific meaning. By default, each "tile" is specified by its x position, y position and tile ID. In my array, I'm saying that I want tile ID 0 (the first tile I added to my tilesheet) to be drawn at 10, 10 and also at 20, 10.
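So tileData ends up as six Floats, two tiles' worth of [x, y, tile ID]:

```haxe
tileData = [10.0, 10.0, 0,    // first tile:  x, y, tile ID
            20.0, 10.0, 0];   // second tile: x, y, tile ID
```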

So now I've set up my tilesheet and I need a way to draw its tiles every frame. To do that, I set up a listener for the ENTER_FRAME event. The specified method will be called on each frame, allowing me to set up the scene to be rendered.
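Wiring up the listener is one line (Event comes from nme.events.Event, and onEnterFrame is my handler method):

```haxe
addEventListener(Event.ENTER_FRAME, onEnterFrame);
```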

My onEnterFrame method is going to use a very crude mechanism to change the current animation frame, and I need some instance variables to support it:
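Two variables are enough for this crude approach: the current tile ID and the time of the last toggle (names are my own):

```haxe
private var currentTileID:Int = 0;
private var lastSwitchTime:Int = 0;
```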

OK, here's the method itself:
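A sketch of what such a method can look like, matching the description that follows (Lib.getTimer returns milliseconds since startup):

```haxe
private function onEnterFrame(event:Event):Void {
    // Clear last frame's drawing so we don't paint on top of it.
    graphics.clear();

    // Crude animation: toggle the tile ID between 0 and 1 every 500 ms.
    var now = Lib.getTimer();
    if (now - lastSwitchTime >= 500) {
        currentTileID = (currentTileID == 0) ? 1 : 0;
        lastSwitchTime = now;
    }

    // Positions 2 and 5 of tileData hold the two tiles' IDs.
    tileData[2] = currentTileID;
    tileData[5] = currentTileID;

    // Draw everything described by tileData in one call.
    tilesheet.drawTiles(graphics, tileData);
}
```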

First we clear graphics. Every Sprite, including the class I'm working with here, has a Graphics member. The tilesheet will draw our tiles to it, and if we don't clear it before drawing then we'll just keep drawing on top of whatever we had last frame.

Next you can see my crude animation mechanism. It's not something I would use in production code, but it's good for this example. Basically, every 500 milliseconds we're going to toggle the tile ID between 0 and 1. Then we specify that positions 2 and 5 of our tileData array should be set to this ID. Take a look at the code above, where we initialized tileData, and you can see that position 2 refers to the ID of the first tile and position 5 refers to the ID of the second tile.

Finally, we're ready to drawTiles. We simply provide our Graphics instance and our tileData, and we're set.

drawTiles can do other interesting things too, like scale, rotate or smooth your tiles. Here's the doc, taken from Tilesheet.hx:
Notice how using additional features changes the number of array elements per tile in your tileData.

Here's the complete class:
Run this, and you should be greeted by two happily bouncing chefs:

And there you go. Happy Haxe-ing. :-)

Sunday, August 26, 2012

Getting started with Haxe and NME

NME and Haxe are really amazing. Using them, you can create cross-platform applications for Windows, Mac, Linux, iOS, Android and more.

The history of Haxe is very interesting. Its roots are in ActionScript and Flash development, and so as a language it's very similar to ActionScript. It even uses many of the same classes. As I'm learning Haxe I feel like I'm also learning ActionScript.

In this blog post I'm going to walk through setting up NME and creating a very basic app that displays a bitmap sprite. Full source for this application can be found on github.


To install NME, check out the instructions. I've been using Sublime Text 2 to write Haxe and build NME projects, and I highly recommend it. Using Sublime Package Control you can install the Haxe Bundle for Sublime Text 2 and get syntax highlighting, code completion, and build tooling.

Project skeleton

First off, let's take a look at the structure of a simple project:
We have three top-level folders:
  • Assets: Where the application icon and other assets will reside.
  • Export: This is where builds will go.
  • Source: Where source files will go.
And three files:
  • nme.svg: An application icon I borrowed from the sample projects. SVG is used so that at compile time the icon can be rasterized and scaled to whatever size(s) the target platform requires. For SVG work I use and recommend the Open Source vector graphics editor Inkscape.
  • NME1.nmml: NMML files are used to configure NME's install tool. This file will specify things like the window size, icon name, etc. We'll walk through it.
  • Main.hx: The only Haxe source file in my simple project.
Go ahead and create a folder structure similar to this. You can grab the SVG file here.

The NMML file

First, let's take a look at the contents of the NMML file:
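A minimal NMML file along these lines might look like the following (the titles, package name, sizes and ndll list are illustrative, not copied from the real project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<project>
    <app file="NME1" title="NME1" package="com.example.nme1"
         main="Main" version="1.0.0" company="Example" />

    <window width="640" height="480" orientation="portrait" />

    <set name="BUILD_DIR" value="Export" />

    <classpath name="Source" />

    <haxelib name="nme" />

    <assets path="Assets/images" rename="images" include="*" exclude="*.svg" />

    <icon name="Assets/nme.svg" />

    <ndll name="std" />
    <ndll name="regexp" />
    <ndll name="zlib" />
    <ndll name="nme" haxelib="nme" />
</project>
```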
The app node allows us to specify things about the app, including the class that contains our main method (the application entry point).

The window node indicates the size and orientation of our window.

Next we specify the location of our build directory by setting BUILD_DIR. A subdirectory will be created for each platform we build to.

The classpath node is used to specify where our source files are located.

We can use haxelib nodes to pull in libraries. Since we're using NME we definitely need that.

The assets node allows us to specify how assets are included in our project. Here we are basically saying that the Assets/images directory should become the images directory in our build (we will load images using the path images/<file>) and that we want to include all images except icons.

The ndll nodes allow us to include native libraries. Here we're using the standard ones.

Check out the official documentation for more in-depth info on the NMML file format.

A barebones class

OK, so let's check out a bare-bones, do-nothing class. Create a file in the Source directory called Main.hx:
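Something like the following is about as bare-bones as it gets (a sketch; the class uses NME's Flash-style nme.* packages):

```haxe
import nme.display.Sprite;
import nme.Lib;

class Main extends Sprite {

    public function new() {
        super();
    }

    // The application entry point: create a Main and put it on the stage.
    public static function main() {
        Lib.current.addChild(new Main());
    }
}
```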
Every NME application requires a main method to serve as its entry point, which you can see above. The class that implements this method should extend nme.display.Sprite.

In our main method we're basically creating an instance of Main and adding it to Lib.current as a child. Lib.current will, depending on the platform you are targeting, return a different implementation of MovieClip, which is akin to the ActionScript class of the same name. MovieClips are also Sprites. As we'll see next, Lib.current allows you to get and set useful stage properties such as the scaling mode, width, height and pixel density.

If you want to build and test this app from the command line, you can do the following:

nme test NME1.nmml flash

If you're using Sublime Text 2 with the Haxe bundle, you can hit ctrl+shift+b to select your target and then ctrl+enter to build and run. Ta da! A boring white screen. Excellent.

Displaying a bitmap

OK, let's make this slightly more interesting and display a bitmap image on the screen. First, you'll need to grab an image, such as this, and drop it in Assets/images.

Next we could probably use a constructor in our Main class:
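The constructor is short, assuming the two helper methods described next:

```haxe
public function new() {
    super();
    initialize();
    addSprite();
}
```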

Constructors in Haxe simply have the name new. They must invoke their superclass constructor by calling super(). After doing this, we're going to call initialize to set up some stage configuration and then addSprite to display the bitmap.

Here's initialize:
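A sketch of it, assuming NO_BORDER is the proportional-scale-with-cropping mode described below (uses nme.display.StageAlign, StageScaleMode and StageDisplayState):

```haxe
private function initialize():Void {
    var stage = Lib.current.stage;
    stage.align = StageAlign.TOP_LEFT;           // anchor content to the top left
    stage.scaleMode = StageScaleMode.NO_BORDER;  // scale proportionally, cropping if needed
    stage.displayState = StageDisplayState.FULL_SCREEN;
}
```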

Notice how everything we configure is via Lib.current (the MovieClip) and its stage property. Stage is akin to the ActionScript class of the same name, and represents the main drawing area.

Here I'm saying that the stage should align towards the top left, that it should scale proportionally (with cropping if the stage becomes too small in either dimension) and that it should fill the screen. I'm just doing it this way because I want to be able to see my tiny sprite and I also want it in the same aspect ratio no matter what. StageScaleMode.NO_SCALE is more typical. I encourage you to play around with the possible settings to see what they do.

Here's addSprite:
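A sketch of it, assuming the bitmap file is named "chef.png" (hypothetical; use whatever you dropped into Assets/images):

```haxe
private function addSprite():Void {
    // The path starts with "images", not "Assets/images", per the NMML config.
    var bitmapData = Assets.getBitmapData("images/chef.png");
    var bitmap = new Bitmap(bitmapData);
    addChild(bitmap);
}
```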

This function uses the Assets class to load bitmap data from a specified file. Notice how we're specifying the file location as images and not Assets/images because of the configuration in our NMML file.

Bitmap is itself a DisplayObject, and it just so happens that our Main class, due to the fact that it extends Sprite, is also a DisplayObjectContainer. Anything we add as a child will be displayed on the stage.

Here's the complete class, with proper import statements:

Go ahead and run this, and you should see something like this (after hitting esc to exit fullscreen):
So there he is, our happy little chef sprite. I might do more with him later.

Additional resources

There are some excellent tutorials on the NME website I highly recommend. They show more complete program structure, how to properly cache bitmap resources, etc.

Thanks for reading, and feel free to leave any questions or feedback in the comments.

Monday, July 30, 2012

How to set up OpenGL on iOS using GLKit

In this post we'll set up OpenGL ES on iOS and clear the screen, just like we did in a previous post, but this time we'll use GLKit to do it. I recommend checking out the previous post before you dig into this one, because it explains some of the OpenGL concepts that are glossed over here.

A finished Xcode project with the code from this post can be found on github.

GLKit is a framework built on top of OpenGL ES. Using it can save time because it reduces the amount of boilerplate code you have to write.

For this example, create a new Empty Application iOS project in Xcode. I'm using automatic reference counting -- you may have to change the code if you want to use manual memory management. I'm also using Xcode 4.4.

Add QuartzCore, OpenGLES and GLKit frameworks to your project (project settings, Build Phases, Link Binary With Libraries):
Now add a new Storyboard file to your project and adjust your project settings to make it your main storyboard. Storyboards allow you to create multiple scenes and specify how they are related. For this example we'll just use a single scene:
With the Storyboard open, check out the Object Library (in the Utilities drawer). You should see a GLKit View Controller. Drag this on to your Storyboard:
If you have the Navigator drawer open you should see the GLKit View Controller. Make sure it's selected:
Back in the Utilities drawer, under the Identity inspector, you should see a Custom Class section where GLKViewController is specified. This means that GLKViewController is the backing controller class for the view:
We'll need to change this to our own custom subclass. Add a new Objective-C class to your project named MyViewController. Make it a subclass of GLKViewController. Make sure to import GLKit.h in your header file:

#import <GLKit/GLKit.h>

@interface MyViewController : GLKViewController

@end

Now you can go back to your Storyboard and set your custom class:
While you're here you can click on the Connections inspector. Notice how your view controller's outlets have been configured to reference a GLKit View:
Now click on the GLKit View in your Scene:
Check out its Identity inspector. Notice how the custom class is GLKView:
When we added the GLKit View Controller to the scene it was automatically set up to manage a GLKView instance. This GLKView instance, in turn, will manage your framebuffer for you.

OK, let's get back to the code. Open up MyViewController.m. Let's add a property for an EAGLContext:

#import "MyViewController.h"

@interface MyViewController ()

@property (strong) EAGLContext *glContext;

@end

@implementation MyViewController

@synthesize glContext = _glContext;


Now let's add an empty viewDidLoad method to the MyViewController class:

- (void)viewDidLoad
{
    [super viewDidLoad];
}

Now let's fill it out. First, let's create our EAGLContext:

self.glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

Here we've specified the use of OpenGL ES version 2. Let's check to make sure the context got created properly:

if (!self.glContext) {
    NSLog(@"Unable to create OpenGL context");
}

Next, make the context current:

[EAGLContext setCurrentContext:self.glContext];

Before leaving the viewDidLoad method we'll tell the GLKView instance (being managed by our view controller) about the context:

GLKView *view = (GLKView *)self.view;
view.context = self.glContext;

Let's finish off the MyViewController class with a simple mechanism for clearing the screen:

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(150.0/255.0, 200.0/255.0, 255.0/255.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
}

The method we're implementing here, glkView:drawInRect:, is part of the GLKViewDelegate protocol, which GLKViewController adopts. We can implement this method to draw whatever we want using OpenGL commands.

The last thing we need to do before running this application is to go into AppDelegate.m and modify our application:didFinishLaunchingWithOptions method to simply return YES so our Storyboard gets used:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    return YES;
}

Launch the app, and you should see a lovely periwinkle screen!
Thanks for reading. Feel free to post comments, questions or suggestions.

Sunday, July 15, 2012

Using Blender on a MacBook

Blender is an amazing 3D modeling, animation, video-editing, compositing, and game-making tool that's designed to be used when sitting at a desk with a full keyboard (including numpad) and a three-button mouse. If you want to use Blender on a MacBook then you'll have to do a bit of extra work to set it up.

There are two main issues you have to solve when using Blender on a MacBook: First is the fact that newer MacBooks have no numpad, second is the fact that OS X provides no built-in way of making a middle mouse click with the trackpad.

Two free tools will solve this problem.

First, the keyboard.

Most information I've seen on the web for using Blender on a laptop advises the use of the "Emulate Numpad" setting in Blender's user preferences. This setting causes Blender to act as though the regular number keys are in fact numpad keys. This means you can use the number keys to switch between various 3D views, but unfortunately you lose the ability to use the number keys for their original purpose, which is selecting layers.

I recommend you leave "Emulate Numpad" off. Instead, use KeyRemap4MacBook. This is a nifty preference pane that will let you use the fn key plus number keys to simulate numpad input. This means you can use the numbers to switch between layers in Blender, or use fn+numbers to switch between views. To enable this functionality, install KeyRemap4MacBook, go into System Preferences, open the KeyRemap4MacBook preference pane, and under the "Change Key" tab locate the "Change Num Key (1...0)" item. Click the little triangle to open the item, then check the "Fn+Number to KeyPad" preference. Now, whenever you press fn+<some number> it will be as though you used the numpad to make the key press.

Next, the mouse.

To get right mouse (RMB) input just make sure you have "Secondary click" enabled in the Trackpad system preference pane. This lets you use two fingers to get RMB clicks.

The middle mouse button is a bit trickier. For this we'll need another piece of software, the very awesome BetterTouchTool.

After installing and running BTT you will see a little icon at the top of your screen that looks like a finger on a trackpad. Go into its preferences. Click "Basic Settings", and enable "Launch BetterTouchTool on startup" (if you want).

In Blender, middle mouse is used to move about the 3D view. We want to be able to hold MMB and move the mouse. It's pretty straightforward to enable MMB clicks in BTT, but being able to drag with MMB is a little trickier.

To enable MMB drag in BTT, go into its preferences and click the "Advanced" button. You should now see a little magic wand icon at the top of the window labeled "Action Settings". Click this. Go to the "Stuff" tab, and select "Use special middleclick mode".

Special middleclick mode won't work if you don't have a middleclick gesture defined, so click on "Gestures", select "Global" in the menu on the left, and then click "Add new gesture". Set the Touchpad Gesture to "Three Finger Click" (not Three Finger Tap, that won't work) and set the Predefined Action to "Middleclick".

Now if you go into Blender, push the trackpad down with three fingers and keep it down while lifting two of them, you should be able to move your remaining finger around to navigate in the 3D view.

All in all, a little bit of effort and now you can use Blender full-on with just the MacBook keyboard and trackpad, no external devices required. Enjoy!

Thursday, July 12, 2012

How to set up OpenGL on iOS

OpenGL ES is a scaled-down version of the OpenGL API for 2D and 3D graphics programming on mobile devices. iOS supports versions 1.1 and 2.0 of the API. Version 1.1 is simpler; version 2.0 is more powerful and flexible. For this particular example I'll be using version 2.0 to create a bare-bones OpenGL app that does nothing but clear the screen with a particular color. There's a lot to it, and OpenGL does have a bit of a learning curve, but I think in the long run it's a rewarding thing to learn.

A complete Xcode project for this post can be found on github.

OK, let's go.

On iOS all OpenGL content is rendered to a special Core Animation layer called CAEAGLLayer. Our basic application will create a UIView subclass called GLView which will wrap a CAEAGLLayer. We do this by overriding UIView's layerClass method to specify that our view is backed by a CAEAGLLayer:

+ (Class)layerClass
{
    return [CAEAGLLayer class];
}

The CAEAGLLayer instance is managed for us by our parent class. We can retrieve it via the layer property:

CAEAGLLayer *glLayer;
glLayer = (CAEAGLLayer *)self.layer;

Once we have a reference to our CAEAGLLayer we can configure it. By default the layer is transparent. We have to change that. If the layer is not opaque performance will suffer:

glLayer.opaque = YES;

Now we need a context. In OpenGL the context is used to store current state. The class we use for this on iOS is EAGLContext. When we initialize the context we tell it which API version we wish to use:

EAGLContext *glContext;
glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

We then make our context the current context so OpenGL will use it:

[EAGLContext setCurrentContext:glContext];

Next we need to ask OpenGL to create a renderbuffer for us. A renderbuffer is a chunk of memory where the rendered image for the current frame will be stored. To create one, we use the glGenRenderbuffers command:

GLuint renderbuffer;
glGenRenderbuffers(1, &renderbuffer);

Notice we passed in the address of a GLuint variable. This variable holds an identifier that we can use to refer to this particular renderbuffer.

Once we have a renderbuffer, we bind it to the GL_RENDERBUFFER target. All this means is that when we execute commands that involve the bound renderbuffer in some way, this particular renderbuffer will be used:

glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);

Now we need to allocate storage for the renderbuffer:

[glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:glLayer];

Notice how we didn't explicitly specify which renderbuffer to allocate storage for. Instead, we specified the GL_RENDERBUFFER target. This is a good example of how the OpenGL API is "stateful". OpenGL creates and manages a bunch of internal objects that we don't directly control. Instead, we use OpenGL commands to build up the current state, and then use other OpenGL commands to manipulate the current state. If we want to manipulate some other state, say if we wanted to work in another context or use a different renderbuffer, we would have to tell OpenGL to use this other state before executing commands that would manipulate it.

This is an important concept in OpenGL. When working with the API, we have to make sure that we're using the correct state. If I was managing multiple renderbuffers I would have to make sure I told OpenGL which one was bound to the GL_RENDERBUFFER target before executing commands that manipulate the currently bound renderbuffer. This simple example only has a single renderbuffer and a single context, but this is a fundamental aspect of OpenGL and important to keep in mind.

Now that we have our renderbuffer, we need a framebuffer. The framebuffer is another chunk of memory that is used when rendering the current frame:

GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);

Now we bind the framebuffer to the GL_FRAMEBUFFER target so that framebuffer-related commands act upon it:

glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

Then we attach the renderbuffer to the framebuffer:

glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);

Notice we specified the GL_COLOR_ATTACHMENT0 slot. This is the framebuffer's attachment point for a renderbuffer. Sometimes renderbuffers are called "color buffers" or "color renderbuffers" because they're basically just a color image.

Now OpenGL is initialized and ready to use. Let's fill our renderbuffer with a solid color and push it to the screen.

First, we set the clear color:

glClearColor(150.0/255.0, 200.0/255.0, 255.0/255.0, 1.0);

When setting a color we specify values for four channels: red, green, blue and alpha (transparency). Each can have a value between 0 and 1. Normally we're using a color mode that has 8 bits per channel, so there are 256 distinct values any channel can have. Specifying a value between 0 and 255 is easy, and a lot of tools for working with color support this. To change these "human readable" values into the 0 to 1 range OpenGL expects, we simply divide.

Now that we've specified the clear color we can fill the currently bound renderbuffer with it:

glClear(GL_COLOR_BUFFER_BIT);
Finally, we present the contents of the renderbuffer to the screen:

[glContext presentRenderbuffer:GL_RENDERBUFFER];

Take a look at the Xcode project on github to see the full GLView class. Take a look at the AppDelegate code to see how we attach GLView as a subview of the window. Run it, and you should see a blue screen. :-)

That's it for now. Comments and questions welcome. Thanks!

Monday, July 2, 2012

How to set up OpenAL and play a sound file on iOS

OpenAL is a powerful library that provides audio playback, 3D sound and other cool stuff. This post will help you get up and running quickly with OpenAL on iOS, but it only scratches the surface of what you can do. For a more detailed look at this topic, check out the excellent book Beginning iPhone Games Development.

A complete Xcode project based on this post can be found on github.

OK, here we go.

First, you'll need a Core Audio Format (.caf) sound file that is little-endian, 16-bit, and has a sampling rate of 44,100 Hz. OS X comes with a utility called afconvert that can be used to convert audio files into the proper format:

/usr/bin/afconvert -f caff -d LEI16@44100 Sosumi.aiff Sosumi.caf

Once you've got your .caf file, go ahead and add it to your Xcode project. Then modify your build configuration to link the OpenAL.framework and AudioToolbox.framework libraries.

Now you're ready to import OpenAL headers:

#import <OpenAL/al.h>
#import <OpenAL/alc.h>
#include <AudioToolbox/AudioToolbox.h>

To set up OpenAL, you will need a device, a context, a source and a buffer.

The device represents a physical sound device, such as a sound card. Create a device with alcOpenDevice, passing NULL to indicate you wish to use the default device:

ALCdevice* openALDevice = alcOpenDevice(NULL);

You can use alGetError at any time to see if there is a problem with the last OpenAL call you made:

ALenum error = alGetError();

if (AL_NO_ERROR != error) {
    NSLog(@"Error %d when attempting to open device", error);
}

The context keeps track of the current OpenAL state. Use alcCreateContext to create a context and associate it with your device:

ALCcontext* openALContext = alcCreateContext(openALDevice, NULL);

Then make the context current:

alcMakeContextCurrent(openALContext);
A source in OpenAL emits sound. Use alGenSources to generate one or more sources, noting their identifiers (either a single ALuint or an array). This allocates memory:

ALuint outputSource;
alGenSources(1, &outputSource);

You can set various source parameters using alSourcef. For example, you can set the pitch and gain:

alSourcef(outputSource, AL_PITCH, 1.0f);
alSourcef(outputSource, AL_GAIN, 1.0f);

Buffers hold audio data. Use alGenBuffers to generate one or more buffers:

ALuint outputBuffer;
alGenBuffers(1, &outputBuffer);

Now we have a buffer we can put audio data into, a source that can emit that data, a device we can use to output the sound, and a context to keep track of state. The next step is to get audio data into the buffer. First we'll get a reference to the audio file:

NSString* filePath = [[NSBundle mainBundle] pathForResource:@"Sosumi" ofType:@"caf"];
NSURL* fileUrl = [NSURL fileURLWithPath:filePath];

Now we need to open the file and get its AudioFileID, which is an opaque identifier that Audio File Services uses:

AudioFileID afid;
OSStatus openResult = AudioFileOpenURL((__bridge CFURLRef)fileUrl, kAudioFileReadPermission, 0, &afid);
if (0 != openResult) {
    NSLog(@"An error occurred when attempting to open the audio file %@: %ld", filePath, (long)openResult);
}

A couple things to note about this last bit of code: First is the use of __bridge: This is only necessary if you are using ARC in iOS 5. Second is the literal value 0: This indicates that we're not providing a file type hint. We don't need to provide a hint because the extension will suffice.

Now we have to determine the size of the audio file's data. To do this, we will use AudioFileGetProperty. This function takes the AudioFileID we got from AudioFileOpenURL, a constant indicating the property we're interested in (see the complete list), a reference to a variable holding the size of the output value, and a reference to the variable that will receive the property value itself. The size is passed by reference because AudioFileGetProperty sets it to the number of bytes actually written.

UInt64 fileSizeInBytes = 0;
UInt32 propSize = sizeof(fileSizeInBytes);

OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propSize, &fileSizeInBytes);
if (0 != getSizeResult) {
    NSLog(@"An error occurred when attempting to determine the size of audio file %@: %ld", filePath, (long)getSizeResult);
}

UInt32 bytesRead = (UInt32)fileSizeInBytes;

Note that the kAudioFilePropertyAudioDataByteCount value is an unsigned 64-bit integer, but I've downcast it to an unsigned 32-bit integer. The reason I've done this is because we can't use the 64-bit version with the code coming up. Hopefully your audio files aren't large enough for this to matter. ;-)

OK, now we're ready to read data from the file and put it into the output buffer. The first thing we have to do is allocate some memory to hold the file contents:

void* audioData = malloc(bytesRead);

Then we read the file. We pass the AudioFileID, false to indicate that we don't want to cache the data, 0 to indicate that we want to read the file from the beginning, a reference to bytesRead, and the pointer to the memory location where the file data should be placed. After the data is read, bytesRead will contain the actual number of bytes read.

OSStatus readBytesResult = AudioFileReadBytes(afid, false, 0, &bytesRead, audioData);
if (0 != readBytesResult) {
    NSLog(@"An error occurred when attempting to read data from audio file %@: %ld", filePath, (long)readBytesResult);
}

Now we can close the file:

AudioFileClose(afid);
And we can copy the data into our OpenAL buffer:

alBufferData(outputBuffer, AL_FORMAT_STEREO16, audioData, bytesRead, 44100);

Now that we've copied the data we can clean it up:

if (audioData) {
    free(audioData);
    audioData = NULL;
}

Then we can attach the buffer to the source:

alSourcei(outputSource, AL_BUFFER, outputBuffer);

At long last, the source can emit the sound data contained in the buffer!

alSourcePlay(outputSource);
When you're ready to clean up you should delete your source and buffers, destroy the context and close the device:

alDeleteSources(1, &outputSource);
alDeleteBuffers(1, &outputBuffer);
alcDestroyContext(openALContext);
alcCloseDevice(openALDevice);

I had trouble getting the sound to play when I tried to initialize OpenAL and play the sound inside my viewDidLoad method. So I created a button and used its action to play the sound. Then everything worked fine.

If you have any questions or feedback, please feel free to comment. Thanks!