Friday, 10 August 2012

OpenAL Soft -- demonstration of binaural 3D audio

OpenAL Soft is a software implementation of the OpenAL API, with the mixing and filtering done on the CPU, and it adds support for head-related transfer functions (HRTFs). HRTFs permit 3D auralization through ordinary stereo earphones.

So here follows an example of using OpenAL. One looping sound effect (footsteps) makes a random walk through the world while you, the listener, stand still.

For the footsteps.raw, I did this:

Got the file from

Then processed it using 'sox':
> sox footsteps-4.wav -b 16 footsteps.raw channels 1 rate 44100

/* footsteps.c
 * To compile:
 *   gcc -o footsteps footsteps.c -lopenal
 * Requires data "footsteps.raw", which is any signed-16bit
 * mono audio data (no header!); assumed samplerate is 44.1kHz.
 */

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>  /* for usleep() */
#include <math.h>    /* for sqrtf() */
#include <time.h>    /* for time(), to seed srand() */

/* OpenAL headers */
#include <AL/al.h>
#include <AL/alc.h>
#include <AL/alext.h>

/* load a file into memory, returning the buffer and
 * setting bufsize to the size-in-bytes */
void* load( char *fname, long *bufsize ){
   FILE* fp = fopen( fname, "rb" );
   if( !fp ){ fprintf( stderr, "Failed to open %s\n", fname ); exit( 1 ); }
   fseek( fp, 0L, SEEK_END );
   long len = ftell( fp );
   rewind( fp );
   void *buf = malloc( len );
   fread( buf, 1, len, fp );
   fclose( fp );
   *bufsize = len;
   return buf;
}

/* randomly displace 'a' by one meter +/- in x or z */
void randWalk( float *a ){
   int r = rand() & 0x3;
   switch( r ){
      case 0: a[0]-= 1.; break;
      case 1: a[0]+= 1.; break;
      case 2: a[2]-= 1.; break;
      case 3: a[2]+= 1.; break;
   }
   printf("Walking to: %.1f,%.1f,%.1f\n",a[0],a[1],a[2]);
}

int main( int argc, char *argv[] ){
   /* current position and where to walk to... start just 1m ahead */
   float curr[3] = {0.,0.,-1.};
   float targ[3] = {0.,0.,-1.};

   /* initialize OpenAL context, asking for 44.1kHz to match HRIR data */
   ALCint contextAttr[] = {ALC_FREQUENCY,44100,0};
   ALCdevice* device = alcOpenDevice( NULL );
   if( !device ){ fprintf( stderr, "Failed to open the default audio device\n" ); return 1; }
   ALCcontext* context = alcCreateContext( device, contextAttr );
   alcMakeContextCurrent( context );

   /* listener at origin, facing down -z (ears at 1.5m height) */
   alListener3f( AL_POSITION, 0., 1.5, 0. );
   alListener3f( AL_VELOCITY, 0., 0., 0. );
   float orient[6] = { /*fwd:*/ 0., 0., -1., /*up:*/ 0., 1., 0. };
   alListenerfv( AL_ORIENTATION, orient );

   /* this will be the source of ghostly footsteps... */
   ALuint source;
   alGenSources( 1, &source );
   alSourcef( source, AL_PITCH, 1. );
   alSourcef( source, AL_GAIN, 1. );
   alSource3f( source, AL_POSITION, curr[0],curr[1],curr[2] );
   alSource3f( source, AL_VELOCITY, 0.,0.,0. );
   alSourcei( source, AL_LOOPING, AL_TRUE );

   /* allocate an OpenAL buffer and fill it with monaural sample data */
   ALuint buffer;
   alGenBuffers( 1, &buffer );
   {
      long dataSize;
      const ALvoid* data = load( "footsteps.raw", &dataSize );
      /* for simplicity, assume raw file is signed-16b at 44.1kHz */
      alBufferData( buffer, AL_FORMAT_MONO16, data, dataSize, 44100 );
      free( (void*)data );
   }
   alSourcei( source, AL_BUFFER, buffer );

   /* state initializations for the upcoming loop */
   srand( (int)time(NULL) );
   float dt = 1./60.;
   float vel = 0.8 * dt;
   float closeEnough = 0.05;

   /** BEGIN! **/
   alSourcePlay( source );

   fflush( stderr ); /* in case OpenAL reported an error earlier */

   /* loop forever... walking to random, adjacent, integer coordinates */
   for(;;){
      float dx = targ[0]-curr[0];
      float dy = targ[1]-curr[1];
      float dz = targ[2]-curr[2];
      float dist = sqrtf( dx*dx + dy*dy + dz*dz );
      if( dist < closeEnough ) randWalk( targ );
      else{
         /* advance at constant speed toward the target */
         float toVel = vel/dist;
         float v[3] = {dx*toVel, dy*toVel, dz*toVel};
         curr[0]+= v[0];
         curr[1]+= v[1];
         curr[2]+= v[2];

         alSource3f( source, AL_POSITION, curr[0],curr[1],curr[2] );
         alSource3f( source, AL_VELOCITY, v[0],v[1],v[2] );
         usleep( (int)(1e6*dt) );
      }
   }

   /* cleanup that should be done when you have a proper exit... ;) */
   alDeleteSources( 1, &source );
   alDeleteBuffers( 1, &buffer );
   alcDestroyContext( context );
   alcCloseDevice( device );

   return 0;
}

  1. Hi,

    I have been trying to get OpenAL-soft to work for quite some time, because I need 3D positional sound for a research project I am working on. However, for some reason I cannot get it to work with 3D sound. I have compiled your code with Visual Studio using the Additional Include Directories: openal-soft\include,
    the Additional Library Directories: openal-soft\build\Release,
    and OpenAL32.lib as an additional dependency. When I run your code, the sound seems to come from within my skull (using simple headphones).

    Is there another parameter that I need to set somewhere to get OpenAL-soft to work with 3D sound? Or have I messed up my additional directories?

    My soundcard is a basic laptop Conexant card, and I am running Windows 7.

    Thanks in advance

  2. Hi Robrecht,

    Stereo output at 44.1kHz is all that's needed from the hardware/OS. What might be happening is that your audio output is set to 48kHz (this has become quite common). I don't know offhand, but I think OpenAL-soft "fails gracefully", by not performing 3D localization if the output isn't 44.1kHz -- just functioning as normal without using HRTFs.

    The reason for this restriction is that HRTF databases have been recorded at 44.1kHz, and OpenAL-soft will convolve this data with the intended output -- at the same sample rate.
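    In case it helps, here is roughly what the relevant config looks like. This is a sketch from memory (the file is ~/.alsoftrc on Linux, or alsoft.ini beside the application on Windows), showing only the two options that matter here:

```ini
# Ask OpenAL Soft for 44.1kHz output and enable HRTF filtering.
# The OS output must also end up at 44.1kHz, or HRTF is skipped.
[general]
frequency = 44100
hrtf = on
```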

    I don't have Windows to check with, but here's a link I found where someone sets the output sample rate:

    I hope that was the problem!

    1. Hiya,

      Thanks for the quick reply. I force OpenAL-soft to 44.1kHz using alsoft.ini (the same as the config file for Linux), and that is also where I put hrtf=on. If I change it to 48kHz, it gives a warning that the sample rate is wrong.

      alGetString(AL_VERSION) and alGetString(AL_RENDERER)
      give me 1.1 ALSOFT 1.15.1 and OpenAL Soft, so that seems to be correct.

      However, the sound does not change when I move the source object in the z-axis. Is there another way to determine whether HRTF is enabled?

      Thanks for the help

    2. Are you certain your hardware/OS are set to 44.1kHz though? OpenAL doesn't set this... though it should report another error to stderr (in my crude example program here, add "fflush(stderr)" before entering that infinite loop so you can see it -- or running openal-info also shows this error). If I let my system use 48kHz by default (even though I set 44100 in the alsoft config) I get this:

      > AL lib: UpdateDeviceParams: Failed to set 44100hz, got 48000hz instead
      > AL lib: GetHrtf: Incompatible format: Stereo 48000hz

      In this case localized sounds will be panned and attenuated by distance, but you can't really tell where they are. They do pretty much sound "in your head". When I configure my audio device to also run at 44.1kHz, sounds are externalized and you can track the footsteps in this example.

      If this works and you do get that error after flushing stderr, I am deeply sorry... because it's probably my fault that you've been on a goose chase. :( I didn't realize I might be masking such an error notification by having that infinite loop. It's a case of "worked for me, so I didn't know there was a problem lurking" -- until I tried duplicating what might be happening to you.
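      Besides watching stderr, you can also ask the device directly what rate it actually opened at. A minimal standalone sketch (standard ALC 1.1 calls, same context attributes as my example):

```c
#include <stdio.h>
#include <AL/al.h>
#include <AL/alc.h>

int main( void ){
   /* request 44.1kHz, as the HRTF data requires */
   ALCint attrs[] = { ALC_FREQUENCY, 44100, 0 };
   ALCdevice* dev = alcOpenDevice( NULL );
   if( !dev ){ fprintf( stderr, "No audio device\n" ); return 1; }
   ALCcontext* ctx = alcCreateContext( dev, attrs );
   alcMakeContextCurrent( ctx );

   /* the device reports the rate it is really running at,
    * which may differ from what we requested */
   ALCint freq = 0;
   alcGetIntegerv( dev, ALC_FREQUENCY, 1, &freq );
   printf( "Requested 44100 Hz, device running at %d Hz\n", freq );

   alcMakeContextCurrent( NULL );
   alcDestroyContext( ctx );
   alcCloseDevice( dev );
   return 0;
}
```

      If that prints 48000, the OS mixer won the negotiation and HRTF won't engage.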

      If this isn't the problem, you could try running in a debugger to see whether the HRTF code is being called, or get some sense of where it's failing. Or expert help is readily available on IRC, in the #openal channel. The author "kcat" was there and very helpful when I had some questions!

      And you made me realize I should update my library. I was still using 1.13. :)

  3. Hi again,

    Thanks again for the quick reply and the help. It really drives me forward :)

    I am pretty sure I am at 44.1kHz, because when I change it to 48kHz I indeed get the same message. If I put print messages in hrtf.c, they also display in the terminal. Perhaps my hrtf tables are wrong? Could you send me your hrtf tables? Then I can rebuild using those, and maybe it will work.

    Again, thanks for all the help; it really reduces stress levels (I am working on my Master's thesis)


    openal-info.exe gives me the following:
    Available playback devices:
    Speakers (Conexant 20585 SmartAudio HD)
    Available capture devices:
    External Microphone (Conexant 20585 SmartAudio HD)
    Default playback device: Speakers (Conexant 20585 SmartAudio HD)
    Default capture device: External Microphone (Conexant 20585 SmartAudio HD)
    ALC version: 1.1

    ** Info for device "Speakers (Conexant 20585 SmartAudio HD)" **
    ALC version: 1.1
    ALC extensions:
    ALC_EXT_thread_local_context, ALC_SOFT_loopback
    OpenAL vendor string: OpenAL Community
    OpenAL renderer string: OpenAL Soft
    OpenAL version string: 1.1 ALSOFT 1.15.1
    OpenAL extensions:
    AL_EXT_MULAW_MCFORMATS, AL_EXT_OFFSET, AL_EXT_source_distance_model,
    AL_LOKI_quadriphonic, AL_SOFT_buffer_samples, AL_SOFT_buffer_sub_data,
    AL_SOFTX_deferred_updates, AL_SOFT_direct_channels, AL_SOFT_loop_points,
    EFX version: 1.0
    Max auxiliary sends: 4
    Supported filters:
    Supported effects:
    EAX Reverb, Reverb, Echo, Ring Modulator, Dedicated Dialog, Dedicated LFE

      My output from openal-info is identical from "ALC version" down. And if HRTF functions are being called...

      I have an idea. Do you get a L/R panning of the footsteps as they do a random walk? Or does it just sound louder and quieter but balanced left and right? I once used airline earphones and realized they are only mono! So I was only getting the left channel for both ears.

      Have you tried the virtual barbershop recording? That would be a good way to verify whether the problem is with OpenAL or with my sample code. :) If you can hear the barber making noises around the room with those same earphones, then I wonder if my sample code does something wrong on Windows. Have you tried setting the sound-source location to some fixed position? Like:
      alSource3f( source, AL_POSITION, 3., 1., -3. );
      which should be 45 degrees to your right.

      I could send you my hrtf tables, but they're the default ones included with OpenAL.

      I'm now realizing it would be nice if OpenAL had some simple examples using 3D spatialization for testing/verification!

      I think we have a winner here. The 3D positional demos on YouTube seem to suffer from the same behind-my-head strangeness that I hear in your code. So this probably means that although my headphones are stereo, they do something to the sound which ruins the effect. I am now trying my external soundcards to see if they fix the problem. It is especially weird because I did hear the 3D effect a couple of times on this computer, but now all sounds seem to come from behind me. I am also starting to wonder if it might be because my ears are weird and the HRTFs being used are not applicable to me :)

      Anyway, thank you for all the help, I now know where to look to solve it. Once I manage to do so, I will report back with any additional information.


    3. Ah! Some progress. :) Well, you could have a friend try out your setup to see how they perceive it.

      Our ears are all different, and given the KEMAR dataset there will be some people whose brains aren't fooled at all, while for many it will be "okay". I would love to have access to an HRTF recording studio to get my own personal HRTFs!

      As VR makes a comeback (I think the tech is finally at a point where it will, and the old stigma from being over-hyped has died out), it will be important to have spatialized audio again. Hopefully this brings more HRTF recordings and we can find our own most-suitable dataset to use.

      If it does turn out that someone else hears externalized sound cues from your setup, you could try some other datasets. OpenAL Soft has two others in the utils directory, I think. They might have to be run through the makehrtf utility.

      You're welcome, and good luck!

  4. Back again :) With some new information.

    It seems like your code should not work, as the orientation of the listener needs two vectors: at and up.

    alListener3f( AL_ORIENTATION, 0., 0., -1. , 0., 1., 0. );

    At least, according to the #openal irc.

    I cannot test it directly, as I was also reinstalling the DirectX SDK and that broke openal-soft :) The life of a programmer.

    Will soon report back!

    1. Bah! Haha, thanks for finding that!
      It would actually need to be:

      float orient[6] = { 0., 0., -1., 0., 1., 0. };
      alListenerfv( AL_ORIENTATION, orient );

      I changed my code and it sounds the same to me, but I imagine the listener up-vector happens to default to +y (though it's not something to rely on -- so, good to set it!). I'll edit my post with the correction.

      Ugh, yeah, I remember when the life of a programmer was more programming and less library/API voodoo. Of course, then you spent a lot of time to make simple things. :)

      I have another thought... when I ran this again, my eyes were open and I was mentally tracking where the sound was coming from, but it still sounded "in my head" to a degree. Maybe a better way to phrase it is "in my imagination" -- positioned, but not in my physical environment. When I closed my eyes, it would seem more externalized. Our brains tend to give visual cues priority over what we hear. :) So, without a plausible visible source, the illusion can become apparent.

      But I think you've said there's no change in the sound, so this probably isn't what you're experiencing either. Anyway, just a thought.

      The visual cues will not be a problem for my research, as the application I am making is meant for visually impaired people. I managed to get DirectSound to be the default output for OpenAL and rebuilt everything. If I try to point to where the sound is, it always sounds like it is moving behind me. I set curr[0] = cos(x); and curr[2] = sin(x); and instead of a circle around you, it sounds like the footsteps are moving in a half circle behind you.

      It is hard for me to judge whether my sounds are properly externalized, but I guess they should be, as all settings seem to be correct. I will try to record the stereo channels and also run makehrtf on the other datasets.

      Again, thanks so much for the help, really appreciated!

      Good to hear that you're having some success! Front/back flips are a common issue, though I haven't heard of sounds consistently localizing behind the listener. I'm sure you'll have better results from another dataset.

      A test program for finding a dataset, similar to an optician narrowing down corrective lenses, would be fantastic. It's one of the many projects on that always-growing TODO list. :)

      Your project sounds interesting -- good luck with it!

    5. growing todo list... sounds familiar :)

      Anyway, I am going to a recording studio to get my own HRTF measured! It is a bio-med research group that is currently doing a lot of research on 3d sound and 3d visual positioning and they are interested in my research and want to work together.

      I will also eventually implement one of the methods where you take some pictures of your own head and create an HRTF based on that; once I do, I will definitely make a small stand-alone program out of it.

  5. I have to tell you how much I appreciate the code you've pinned. I wish you good luck and good fortune.
    Greetings from Slovenia.

  6. Hi Tony,

    I just ran your code and it works fine.
    The problem is that it is really hard to tell whether the source is in front of or behind the listener without looking at the data.

    I modified the code to walk
    1. left -> right -> left ...
    2. front -> back -> front ...

    In the first case we could easily tell the position of the source from the sound we heard, but in the second we were not able to tell whether the source was in front or behind.

    I have two questions:
    1. Is your code using HRTF?
    2. Are you sure the code is correct with regard to front/back playback?