“What happens in Vegas stays on Facebook” is a meme that couldn’t be more true nowadays. “Vegas” can be almost any public place in the physical world or any discussion forum on the Internet. In addition to Facebook, our opinions, words and actions are recorded and stored in any number of places, data centres and private computers. Once we say something, it is archived forever, and we never know when it will surface again, probably out of its original context.

From this (rather dystopian) setting I came up with an idea for an interactive audio installation that records everything people say when they visit it and plays back the recorded clips in a random manner, repeating them and creating an ever-evolving soundbed of people’s thoughts and opinions. And of course people can make any sounds they wish, which will be even more fun. The soundbed keeps evolving as long as the installation is open to visitors and they dare to say something into the microphone. Once your words or sounds are recorded into the system, there is no way of deleting them. (Well, of course it’s possible, but not for the public.) Only after the exhibition is over will the recorded files be erased.

From the idea I quickly moved on to doing something about it. I decided to create the heart of the installation with Pure Data (Pd), an easy-to-learn dataflow programming language for audio. In addition to the computer running the Pd patch, the other components would be an audio interface, a microphone and a surround speaker array.

[Screenshot: the Words never forgotten Pd patch under development]

So far the programming has been fun and rewarding. Even though there are still many challenges and a lot of tweaking ahead, the basic functionalities are already there: recording triggered by an input threshold, randomising the playback order of the recorded files, randomising their spatial positioning and movement, and so on. Some features are still to be added, such as dynamic automation for individual clips, look-aheads and volume envelopes to smooth the starts of the recordings, a few effects and probably something more that I don’t know about yet. It would be cool to apply some level of AI to make the experience more interactive, but at least for now my skills don’t reach that far.
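To make the recording logic a bit more concrete, here is a rough Python/numpy sketch of what the patch does: start recording when the input level crosses a threshold, stop after the level has stayed below it for a while, and fade in the start of the clip. This is not the Pd patch itself, just the same idea in another language, and all the names and numbers (THRESHOLD_DB, HANG_BLOCKS and so on) are my own placeholders, not values from the actual patch.

```python
import numpy as np

SAMPLE_RATE = 44100
BLOCK_SIZE = 512
THRESHOLD_DB = -30.0   # input level that triggers recording (placeholder)
HANG_BLOCKS = 80       # blocks of silence before recording stops (~0.9 s)
FADE_IN_MS = 10.0      # short fade-in to smooth the clip start

def block_level_db(block):
    """RMS level of one audio block in dBFS."""
    rms = np.sqrt(np.mean(block ** 2)) + 1e-12
    return 20.0 * np.log10(rms)

def apply_fade_in(clip):
    """Linear fade-in over the first few milliseconds of a clip."""
    n = min(int(SAMPLE_RATE * FADE_IN_MS / 1000.0), len(clip))
    clip[:n] *= np.linspace(0.0, 1.0, n)
    return clip

def record_clips(blocks):
    """Yield threshold-triggered clips from an iterable of audio blocks."""
    buffer, recording, silent_blocks = [], False, 0
    for block in blocks:
        loud = block_level_db(block) > THRESHOLD_DB
        if not recording and loud:
            recording, silent_blocks = True, 0
        if recording:
            buffer.append(block)
            silent_blocks = 0 if loud else silent_blocks + 1
            if silent_blocks >= HANG_BLOCKS:
                yield apply_fade_in(np.concatenate(buffer))
                buffer, recording = [], False
```

A sketch like this also shows why a look-ahead matters: as written, the very first attack of a sound is lost because recording only starts once the threshold is crossed. A real version would keep a small ring buffer of the most recent blocks and prepend it to each clip.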

Here’s a short video demonstrating the first tests of the randomised soundbed. Because I don’t like my own voice too much, I invited some political celebrities into my studio…

Well, it sounds a bit chaotic, and for some reason the audio in the screen recording is in mono. But as said, there is still a lot of tweaking to be done, and the actual installation will use quite different sound material.

The installation will definitely require a surround speaker array to enable the individual sound objects to travel around the listeners. All audio clips are positioned with horizontal and elevation angles on a full 360-degree sphere. There’s a simple Ambisonics encoder that creates first-order B-format output, which can be used directly if the exhibition space has a full-sphere Ambisonics system. If not, the system decodes the B-format into one of several surround setups, some of which I have already included in the patch. The next thing I will test is whether a simple quadraphonic or 4.0 speaker setup (four speakers in the corners) is enough to create a strong enough experience, as that would be the easiest setup for most exhibition spaces.
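For the curious, the maths behind this is compact. Below is a numpy sketch of traditional first-order B-format encoding and a very basic horizontal decode to four speakers; the encoding formulas are the standard ones, but the decode gains in a real decoder depend on the decoder design (velocity vs. energy decoding, exact speaker angles), so treat those numbers as illustrative rather than as what my patch does.

```python
import numpy as np

def encode_bformat(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal into traditional first-order B-format (W, X, Y, Z)."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)              # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)   # front-back figure-of-eight
    y = signal * np.sin(az) * np.cos(el)   # left-right figure-of-eight
    z = signal * np.sin(el)                # up-down figure-of-eight
    return w, x, y, z

def decode_quad(w, x, y):
    """Basic decode to four speakers at 45, 135, -135 and -45 degrees."""
    speaker_azimuths = np.radians([45.0, 135.0, -135.0, -45.0])
    # simple first-order ("velocity") decode; real decoders refine these gains
    return [0.5 * (np.sqrt(2.0) * w + x * np.cos(az) + y * np.sin(az))
            for az in speaker_azimuths]  # FL, BL, BR, FR
```

So `encode_bformat(clip, azimuth_deg=90, elevation_deg=0)` places a clip directly to the left, and sweeping the azimuth over time moves it around the listeners. Note that `decode_quad` simply ignores the Z channel, so elevation collapses onto the horizontal plane; whether that still feels convincing is exactly what the planned 4.0 test should reveal.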

So, next time I will hopefully report from inside the first practical test installation!
