I recently needed to play with OpenAL and, despite the many examples available on the Internet, I found most of them incomplete, inexact, or simply non-working as-is. I will explain why later, but for the moment I think it is better to start with an example, so you can see which traps you can fall into (like I did), understand this basic OpenAL example, and eventually build more complex software on top of it.

If you want to jump directly to the example code, it is here.

What is OpenAL about?

OpenAL is the audio counterpart of what OpenGL is for graphics, and as such defines a standard, portable API for building audio applications on top of it. The API covers both playback and capture use cases.


There are various implementations of OpenAL in the wild, the one being used in this example is called OpenAL Soft. This implementation is available for most Unices and Linux systems.

Optionally, you can use the OpenAL Utility Toolkit (alut), which provides a set of functions to help you with some simple tasks.

Key concepts

Since OpenAL was specified much like OpenGL was, there are key concepts directly inherited from OpenGL, in particular:

  • the API naming convention follows the OpenGL one

  • there is a current context to create and use for your scene's lifetime

  • "rendering" audio is done in an audio scene (think about pipelining commands as well as rendering asynchronously)

  • the API deals with audio streams (that is, raw PCM data), not with audio codecs

We will get back to these concepts later because they are crucial to understanding why our example might or might not work out of the box.


Since OpenAL is about audio, it also introduces new concepts, in particular:

  • the listener object

  • the source object

  • the buffer object

Each of these objects has properties which can be set with its respective API (al*Object*).

It is not mandatory to define all of these objects; in particular the listener can be skipped, since there is a default listener. The source and buffer objects, however, need explicit initialisation and manipulation.

Let’s get started

Now let’s start with a short example which plays back a WAV file to a playback device.

Device opening

The very first thing to do is to open a handle to a device. This is done like this:

#include <AL/al.h>
#include <AL/alc.h>
ALCdevice *device;

device = alcOpenDevice(NULL);
if (!device) {
        // handle errors
}

Here we open a handle to the default device. You can explicitly specify which device you want to open by passing its name as an argument to alcOpenDevice(). Below is an example of how to enumerate the list of available devices.

Device enumeration

Prior to attempting an enumeration, note that OpenAL provides an extension querying mechanism which allows you to know whether the runtime OpenAL implementation supports a specific extension. In our case, we want to check whether OpenAL supports enumerating devices:

ALboolean enumeration;

enumeration = alcIsExtensionPresent(NULL, "ALC_ENUMERATION_EXT");
if (enumeration == AL_FALSE) {
        // enumeration not supported
} else {
        // enumeration supported
}

Retrieving the device list

If the enumeration extension is supported, we can proceed with listing the audio devices. If the enumeration is not supported, listing audio devices only returns the default device, which is the expected behaviour so as not to break any application.

The OpenAL specification says that the list of devices is organized as a single string: device names are separated by a NULL character, and the list is terminated by two consecutive NULL characters. We will define a helper for parsing this list of devices.

static void list_audio_devices(const ALCchar *devices)
{
        const ALCchar *device = devices, *next = devices + 1;
        size_t len = 0;

        fprintf(stdout, "Devices list:\n");
        fprintf(stdout, "----------\n");
        while (device && *device != '\0' && next && *next != '\0') {
                fprintf(stdout, "%s\n", device);
                len = strlen(device);
                device += (len + 1);
                next += (len + 2);
        }
        fprintf(stdout, "----------\n");
}

list_audio_devices(alcGetString(NULL, ALC_DEVICE_SPECIFIER));

Passing NULL to alcGetString() indicates that we do not want the device specifier of a particular device, but of all of them.
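If the double-NULL layout is hard to picture, here is a self-contained sketch of the same walk over a hand-built list (plain char instead of ALCchar, and the device names are made up for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Walk a device list laid out the way alcGetString() returns it:
 * names separated by a single NUL, list terminated by an extra NUL.
 * Prints each name and returns the number of names found. */
static int count_devices(const char *devices)
{
        const char *device = devices;
        int count = 0;

        while (*device != '\0') {
                printf("%s\n", device);
                device += strlen(device) + 1; /* jump past the name and its NUL */
                count++;
        }
        return count;
}
```

A hand-built list such as "OpenAL Soft\0Some other device\0" (the string literal's implicit terminator provides the second NUL) yields two names; in a real program you would pass alcGetString(NULL, ALC_DEVICE_SPECIFIER) instead.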

Error stack initialisation and usage

One big aspect of OpenAL, inherited from OpenGL, is the error stack. This error stack must be manipulated with caution, otherwise you will get errors which do not correspond to your latest al*() call, and this will be definitely puzzling while troubleshooting.

Fortunately the error stack has a depth of one error, so you won't have to pop all the errors, just the last one. Also note that success (AL_NO_ERROR) is reported through the same mechanism. This means that every call to al*() should consequently be checked, but you should check all return values anyway.
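To illustrate why a stale error sticks around, here is a toy model of that single-slot error state (an illustration of the behaviour described above, not OpenAL's actual implementation):

```c
/* Toy single-slot error state: a new error is only recorded when no
 * error is pending, and reading the error resets the slot. This is
 * why you must clear the state before the call you care about. */
static int toy_error; /* 0 stands in for AL_NO_ERROR */

static void toy_record_error(int err)
{
        if (toy_error == 0)
                toy_error = err; /* later errors are dropped until read */
}

static int toy_get_error(void)
{
        int err = toy_error;

        toy_error = 0; /* reading resets the slot */
        return err;
}
```

If two failing calls happen back to back, reading the error once yields the first failure, and a second read yields "no error" again.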

The very first thing to do, prior to any al*() call, is to reset the error stack. This is done by issuing a dummy read like this:


alGetError();

Retrieving an error is done pretty much the same way. If you want to check whether an error occurred:

ALenum error;

error = alGetError();
if (error != AL_NO_ERROR) {
        // something wrong happened
}

If you want to print a literal error string (à la strerror()), you will need to use alut's alutGetErrorString() to do so.

Context creation and initialization

In order to render an audio scene, we need to create and initialize a context for this. We do this by the following calls:

ALCcontext *context;

context = alcCreateContext(device, NULL);
if (!alcMakeContextCurrent(context)) {
        // failed to make context current
}
// test for errors here using alGetError();

There is nothing specific about our context, so NULL is passed as the attribute list argument.

Defining and configuring the listener

Since there is a default listener, we do not need to explicitly create one: it is already present in our scene. This actually makes sense, because audio is meant to be heard by a listener anyway. If the listener does not actually hear a sound, it is because of the source properties (source too far away, …).

If we want to define some of our listener properties however, we can proceed like this:

// orientation is given as two vectors: "at" (facing direction), then "up"
ALfloat listenerOri[] = { 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f };

alListener3f(AL_POSITION, 0, 0, 1.0f);
// check for errors
alListener3f(AL_VELOCITY, 0, 0, 0);
// check for errors
alListenerfv(AL_ORIENTATION, listenerOri);
// check for errors

Source generation

In order to play back audio, we must create an audio source object; this source is actually the "origin" of the sound and as such must be defined in the audio scene. If you combine audio with graphics, most likely quite a lot of your graphics objects will also include an audio source object.

Note that you hold a reference (an id) to a source object; you don't manipulate the source object directly.

ALuint source;

alGenSources((ALuint)1, &source);
// check for errors

alSourcef(source, AL_PITCH, 1);
// check for errors
alSourcef(source, AL_GAIN, 1);
// check for errors
alSource3f(source, AL_POSITION, 0, 0, 0);
// check for errors
alSource3f(source, AL_VELOCITY, 0, 0, 0);
// check for errors
alSourcei(source, AL_LOOPING, AL_FALSE);
// check for errors

Here we generate a single source (thus the 1), and we later on define several of its properties:

  • pitch

  • gain

  • position

  • velocity

  • looping

Buffer generation

The buffer object is the object actually holding the raw audio stream; on its own, a buffer does not do much but occupy memory, so we will see later on what to do with it. Just like sources, we hold a reference to the buffer object.

ALuint buffer;

alGenBuffers((ALuint)1, &buffer);
// check for errors

Loading an audio stream to a buffer

We mentioned earlier in this document that OpenAL manipulates raw audio streams and does not care about the audio format, since that is not its scope.

To keep things simple, let's take the example of a WAV file (*.wav). In order to ease the parsing of the WAV format (not that it is particularly complex, but I am lazy), we can use either alut or libaudio, for instance.

Here is the example with alut:

ALsizei size, freq;
ALenum format;
ALvoid *data;
ALboolean loop = AL_FALSE;

alutLoadWAVFile("test.wav", &format, &data, &size, &freq, &loop);
// check for errors

That’s very simple but it is a deprecated API, so here is the replacement using libaudio in case you want to do it the hard way:

WaveInfo *wave;
char *bufferData;
int ret;

wave = WaveOpenFileForReading("test.wav");
if (!wave) {
        fprintf(stderr, "failed to read wave file\n");
        return -1;
}

ret = WaveSeekFile(0, wave);
if (ret) {
        fprintf(stderr, "failed to seek wave file\n");
        return -1;
}

bufferData = malloc(wave->dataSize);
if (!bufferData) {
        return -1;
}

ret = WaveReadFile(bufferData, wave->dataSize, wave);
if (ret != wave->dataSize) {
        fprintf(stderr, "short read: %d, want: %d\n", ret, wave->dataSize);
        return -1;
}
The trickiest thing with libaudio is to understand that WaveSeekFile actually seeks in the WAVE file starting from the audio data offset of the WAVE format, not from the beginning of the file.
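To make that offset business concrete, here is a minimal sketch of what such a seek has to do under the hood: skip the 12-byte RIFF/WAVE header, then walk the chunks until the "data" chunk is found. This assumes a little-endian host, an in-memory buffer and no odd-sized chunk padding; it is not libaudio's actual code.

```c
#include <stdint.h>
#include <string.h>

/* Return the byte offset of the samples inside a canonical WAV
 * buffer, storing the sample data length in *data_len,
 * or -1 if no "data" chunk is found. */
static long find_data_chunk(const uint8_t *buf, long size, uint32_t *data_len)
{
        long pos = 12; /* skip "RIFF" <file size> "WAVE" */

        while (pos + 8 <= size) {
                uint32_t len;

                memcpy(&len, buf + pos + 4, 4); /* chunk length, little-endian */
                if (memcmp(buf + pos, "data", 4) == 0) {
                        *data_len = len;
                        return pos + 8; /* samples start right after the chunk header */
                }
                pos += 8 + len; /* skip chunk header and payload */
        }
        return -1;
}
```

For a canonical 16-byte "fmt " chunk followed directly by "data", this lands on offset 44, the classic start of samples in a simple WAV file.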

Now we can finally proceed with loading the raw audio stream into our buffer. This is done like this with alut:

alBufferData(buffer, format, data, size, freq);
// check for errors

and like this for libaudio:

static inline ALenum to_al_format(short channels, short samples)
{
        bool stereo = (channels > 1);

        switch (samples) {
        case 16:
                if (stereo)
                        return AL_FORMAT_STEREO16;
                else
                        return AL_FORMAT_MONO16;
        case 8:
                if (stereo)
                        return AL_FORMAT_STEREO8;
                else
                        return AL_FORMAT_MONO8;
        default:
                return -1;
        }
}

alBufferData(buffer, to_al_format(wave->channels, wave->bitsPerSample),
                bufferData, wave->dataSize, wave->sampleRate);
// check for errors

Binding a source to a buffer

In order to actually output something to the playback device, we need to bind the source to its buffer. Obviously you can bind the same buffer to several sources, and queue several buffers on the same source. Binding is done like this:

alSourcei(source, AL_BUFFER, buffer);
// check for errors

Playing the source

We now have everything ready to start playing our source. Do you remember that we mentioned at the beginning that audio rendering is asynchronous?

This means that a call to alSourcePlay() will start playing the source and return immediately; we do not wait until the source has been fully played. The reason for this is actually very simple: since it is an audio scene, one may perfectly well want to stack up different sounds playing at different moments.

The underlying implementation of alSourcePlay() in OpenAL Soft actually spawns a detached thread as soon as you issue alSourcePlay(), so guess what happens in the following code snippet:


alSourcePlay(source);

// cleanup context
alDeleteSources(1, &source);
alDeleteBuffers(1, &buffer);
device = alcGetContextsDevice(context);
alcMakeContextCurrent(NULL);
alcDestroyContext(context);
alcCloseDevice(device);

You hear … nothing, because your main thread cleans up the entire audio scene while the detached thread attempts to play the source.

Instead, since we are doing this in a main thread for example purposes, you must do the following:

ALint source_state;

alSourcePlay(source);
// check for errors

alGetSourcei(source, AL_SOURCE_STATE, &source_state);
// check for errors
while (source_state == AL_PLAYING) {
        alGetSourcei(source, AL_SOURCE_STATE, &source_state);
        // check for errors
}
Now we make the main thread block until the source has finished playing, finally allowing us to hear that "test.wav" sound.

Cleaning up context and resources

Obviously, each and every object we "generated" must be freed. The following does this for us:

alDeleteSources(1, &source);
alDeleteBuffers(1, &buffer);
device = alcGetContextsDevice(context);
alcMakeContextCurrent(NULL);
alcDestroyContext(context);
alcCloseDevice(device);


The biggest trap not to fall into is forgetting that your OpenAL implementation has to make some calls non-blocking for the caller, which usually means creating threads. Those threads are completely detached from the main thread spawning them, so the main thread is not blocked waiting for them to complete.

In a real-world application, the asynchronous nature of the OpenAL API is not much of a problem, because rendering graphics usually means dedicating one or several threads of execution to rendering and building the scene.

I hope this explained example has been useful to you!