Ben Carey is a Sydney-based saxophonist/composer/technologist with interests in contemporary classical, improvised, interactive and electro-acoustic music. After completing a Bachelor of Music at the Sydney Conservatorium of Music in 2005, Ben moved to France to study saxophone and contemporary music under Marie-Bernadette Charrier at the Conservatoire de Bordeaux. Back in Australia, Ben is currently undertaking a PhD at the University of Technology, Sydney focused upon the design and development of interactive musical systems for improvised performance with instrumental musicians. Ben has performed and exhibited work in Australia, New Zealand, France, Austria, the United States and Switzerland.
Newly formed electro-acoustic duo Covalent (Zane Banks - electric guitar, Ben Carey - saxophone and electronics) will perform a concert of semi-improvised and composed works for electric guitar, saxophone and live electronics.
The Sydney algorithmic improviser hack-together will take place over three days in April and will bring together scientists, artists and other enthusiasts in pursuit of music-playing software that exhibits modest but noticeable forms of autonomy and musical capability when paired with human improvising musicians. The hack-together will provide an opportunity to collaborate and explore technologies, techniques and critical issues of musical autonomy. The gathering will culminate in a performance on April 21st showcasing systems produced prior to and during the workshop. The performance will include the following musicians: Peter Hollo (cello), Adrian Lim-Klumpes (Rhodes), Evan Dorian (drumkit), Ben Carey (saxophone) and Roger Dean (keyboard).
In the late 1970s, Robert Plutchik adapted his concept of the eight primary emotions into a striking graph known as "Plutchik's Flower." Though this represents a simplified version of the emotional experience, it conveys the elegance of our evolution from natural origins. Using sounds and images taken from the natural world, and a new system for live spectral composition based on Plutchik's Flower, we will return to the wild, unfurled landscape of the mind. Experience a new approach to improvisation and a unique perspective on the forms of nature in an evening of musical and visual impressions.
Benedict Carey (composer and electronics), Daniel Mayne (visual artist), Rhia Parker (recorders), Benjamin Carey (saxophones), Megan Clune (clarinet), and Ben Goodger (electric guitars).
diffuse 6 @ UTS | Bon Marche Theatre, (University of Technology, Sydney) November 18th, 6.30pm
At this address you will find up-to-date downloads, videos, audio and texts related to the software. Please read on for an overview of the project written in late 2011:
(saxophonist Joshua Hyde rehearses with _derivations at IRCAM, Paris - November 2012)
In several previous blog posts, as well as in the audio and video sections of the site, I've referred to a system I've been working on called "_derivations." Having posted work-in-progress snippets and bits and pieces of information about the system over the past year, I thought it high time to give some more detailed information about the genesis of this project: what I have been trying to achieve with it, how it works, and where I believe it has taken me in my thinking about designing for interaction in instrumental performance. What follows is some detail about this particular creative project, which has preoccupied me recently as part of my PhD research. If you're interested in reading about the system, this is the place to find out more; if you prefer just to hear what it's capable of, there are numerous examples over at the sounds and videos sections of the site.
_derivations is a system designed for use by a solo instrumentalist, and it derives all of its sonic responses to improvisational input - both synthetic and via live sampling - from the instrumentalist's live performance. A great catalyst that launched me into designing performative systems in the first place was the desire for a hands-free or unmediated mode of performance with electronics. I have been interested in creating performative environments for an instrumentalist that require no physical intervention on the part of the performer (or anyone else, for that matter) once a performance has begun - i.e. the performer's interaction with the machine is entirely through sound. In order to achieve this, and to enable a mutually influential interactive relationship, the machine must be able to listen to and interact with the performer in some kind of autonomous manner. In _derivations, unlike in my previous system Multiple Players, the computer's sonic vocabulary, as well as its generative and decision-making capabilities, are directly related to the timbre of the instrument being analysed. In Multiple Players I was concerned with creating novel generative responses to instrumental input based upon notes, rhythms, dynamics and articulations - in short, all of the kinds of musical information available in a system based upon the representation of musical data via the MIDI standard. Although I am by no means the first to realise the limitations of this approach, I was very keen to develop a system that relied upon the analysis of timbre, not least in order to enable the blending of acoustic and computer-generated sounds.
The _derivations system evolved over a period of months, with the final design centred around the grouping of various interconnected modules that were all initially built in isolation as specific interactive/sound design experiments. The audio above is an example of the kind of synthesis that kickstarted the project. What you hear in this excerpt is the sinusoidal re-synthesis of an instrumental signal (in this case a series of alto saxophone multiphonics), with the synthetic timbres mixed with some white noise and filtered through vocal formant filters. Although the resultant timbres are by no means a completely accurate portrayal of the timbre of the instrument being analysed, it was the potential for the real-time expressive use of analysis and re-synthesis to allow a clear and direct connection between acoustic and synthesised sound that excited me here. In playing with these sounds and thinking about their interactive potential, it quickly became apparent that in order to use this type of synthesis interactively, I would need to think about ways of grouping the analysed spectral snapshots for later use.
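To give a sense of the idea in code: sinusoidal re-synthesis boils down to summing one oscillator per analysed spectral peak. The real module is a MaxMSP patch; the Python sketch below is only illustrative, and the snapshot values are invented for the example rather than taken from an actual analysis.

```python
import math

def resynthesize(peaks, duration, sr=44100):
    """Additively re-synthesise a spectral snapshot.

    peaks: list of (frequency_hz, amplitude) pairs, e.g. taken from a
    sinusoidal analysis of an instrumental signal.
    Returns a list of samples summing one sinusoid per peak.
    """
    n = int(duration * sr)
    out = [0.0] * n
    for freq, amp in peaks:
        for i in range(n):
            out[i] += amp * math.sin(2 * math.pi * freq * i / sr)
    return out

# A multiphonic-like snapshot: several inharmonic partials sounding at once
# (frequencies and amplitudes here are made up for illustration).
snapshot = [(233.0, 0.5), (622.0, 0.3), (987.0, 0.2)]
audio = resynthesize(snapshot, duration=0.1)
```

In the actual system these snapshots stream in continuously from the analysis of the live signal, and the mix with noise and formant filtering happens downstream of this resynthesis stage.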
In parallel with these synthesis experiments, I was also experimenting with techniques for the automated segmentation, storage and playback of a continuously sampled stream of audio. As has been demonstrated in the work of Hsu, Ciufo and others, it is often useful for such interactive music systems to refer to analysed phrases of a continuous audio stream. This is often achieved through the detection of phrase boundaries in the instrumental performance, and in my circumstances I chose to detect such boundaries through the use of a silence threshold - i.e. once an instrument has been silent for a certain amount of time, the system reports the end of a phrase. In this way, regardless of the kind of sample manipulation or audio processing involved, it would at the very least be possible for the system to link its musical output directly to specific phrases performed previously by the musician on stage.
My initial experiments focused upon segmenting individual phrases and saving them as discrete audio files for later reference; however, after attending a MaxMSP programming course at IRCAM in February of this year, I decided that it would be much easier to create a database that referred to a continuously recording audio buffer. This phrase database was simply a collection of timing points for the beginnings and ends of phrases detected in a musical performance. Whilst this database was created to be used for the recall of audio, it was only a small step to also apply these phrase boundaries to the database of spectral information used by the analysis/re-synthesis module described previously. Now the output of these sinusoidal models would be in phrases of spectral data. This ensured that these snapshots were output within the original context in which they were analysed (although still with the potential to be greatly modified and transformed).
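Conceptually the phrase database is nothing more than a list of timing points indexing into one long recording. A minimal sketch (class and method names are my own, purely for illustration):

```python
class PhraseDatabase:
    """Timing points into a single continuously-recorded buffer,
    rather than one audio file per phrase."""

    def __init__(self):
        self.boundaries = []          # list of (start_sample, end_sample)

    def add_phrase(self, start, end):
        """Called when the silence detector reports a phrase end."""
        self.boundaries.append((start, end))

    def phrase_audio(self, buffer, index):
        """Recall the audio for one stored phrase from the buffer."""
        start, end = self.boundaries[index]
        return buffer[start:end]
```

The same boundary pairs can index the spectral-frame store just as easily as the audio buffer, which is exactly why applying them to the re-synthesis module was such a small step.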
(screen shots: left - phrases database; right - statistics database)
As was mentioned previously, _derivations incorporates a number of modules that were initially created in isolation as interactive/sound design experiments. The phrase database was never conceived simply to play back stored phrases unaltered, but rather for audio processing modules to access live sampled material as a basis for their sonic responses. The two modules in _derivations that access the audio buffer directly are the phase vocoder and granulator modules. Each of the two modules uses the above phrase database as a reference from which to choose phrases within the audio file. The former is comprised of a bank of four phase vocoder/samplers, allowing the system to play back segmented phrases at various speeds without changing the transposition of the audio, and to transpose the audio without changing the speed of playback (this module was initially comprised of phase vocoders built from scratch in MaxMSP, but has since been replaced with the more professional and clean-sounding supervp~ collection developed at IRCAM). The granulator (a purpose-built granular synthesiser) can severely alter the sound of the original phrase's audio, with adjustable ranges for the scrubbing of both sound file position and grain density (a separate version of this patch - bc.granulator - can be downloaded here).
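For readers unfamiliar with granular synthesis: the granulator scatters many short, enveloped "grains" drawn from positions within a chosen phrase region of the buffer. The sketch below is an illustrative Python reduction of that idea, not the actual bc.granulator patch; the parameter names are my own.

```python
import random

def granulate(buffer, start, end, n_grains=50, grain_len=441, spread=1.0):
    """Draw short enveloped grains from one phrase region [start, end)
    of the continuous buffer. spread (0..1) widens the range of
    positions from which grain onsets are drawn - the 'scrubbing'
    of sound file position."""
    region = end - start - grain_len
    grains = []
    for _ in range(n_grains):
        pos = start + int(random.random() * spread * max(region, 1))
        grain = buffer[pos:pos + grain_len]
        # apply a simple triangular envelope to avoid clicks
        half = grain_len / 2
        grains.append([s * (1 - abs(i - half) / half)
                       for i, s in enumerate(grain)])
    return grains
```

In a real granulator the grains are overlapped and summed in time at a controllable density; raising `n_grains` per second smears the phrase into a texture, while narrowing `spread` freezes it around one instant.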
Having settled upon the synthesis and audio processing capabilities of the system, as well as the way in which each module would access a central database of phrases, the question remained as to how the system would use this cumulative history in order to respond within a performance in a musically plausible and interesting way. As I mentioned previously, central to the design of the system was a desire for an autonomous and mutually influential relationship between the machine and the performer. To my mind, this meant that the machine would need to be aware of the musical and sonic context in which the musician was performing - and be able to relate the musician's current performance to its growing database of analysed phrases from the past. Using the analysis of four sound descriptors from the instrumental signal (pitch, loudness, noisiness and brightness), the system was designed to gather statistics related to the timbral identity of each performed phrase stored in the database (the average and standard deviation of each descriptor are stored upon the detection of the end of a phrase). This statistics database then allows the computer to make an informed choice on which phrase to recall and send to the audio processing modules, as the current performance of the instrumentalist is constantly being compared with the growing database of statistics of past musical phrases. Once a performer's recent phrase is completed, the computer searches through the statistics database to find the two closest matching phrases for each descriptor. These phrase indexes are then compared amongst descriptors; if a phrase is chosen across more than one descriptor it is chosen as the closest match - if not, one of the returned phrases is chosen at random.
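The statistics-and-matching step described above can be sketched as follows. This is an illustrative Python reduction of the logic (the real implementation is a MaxMSP patch, and function names here are invented); the matching votes across the four descriptors exactly as described: two nearest candidates per descriptor, a multi-descriptor winner if one exists, otherwise a random pick among the candidates.

```python
import random
import statistics

DESCRIPTORS = ("pitch", "loudness", "noisiness", "brightness")

def phrase_stats(frames):
    """frames: per-frame dicts of descriptor values for one phrase.
    Stored on phrase-end detection: the mean and standard deviation
    of each descriptor over the phrase."""
    stats = {}
    for d in DESCRIPTORS:
        values = [f[d] for f in frames]
        stats[d] = (statistics.mean(values), statistics.pstdev(values))
    return stats

def closest_phrase(current, database):
    """Vote across descriptors: for each descriptor, find the two
    phrases whose stored mean is nearest the current phrase's mean.
    A phrase named by more than one descriptor wins; otherwise one
    of the candidates is chosen at random."""
    votes = {}
    for d in DESCRIPTORS:
        ranked = sorted(range(len(database)),
                        key=lambda i: abs(database[i][d][0] - current[d][0]))
        for idx in ranked[:2]:                  # two closest per descriptor
            votes[idx] = votes.get(idx, 0) + 1
    best = max(votes.values())
    winners = [i for i, n in votes.items() if n == best]
    if best > 1:                                # chosen across descriptors
        return winners[0]
    return random.choice(winners)               # no consensus: pick randomly
```

The standard deviations stored alongside the means are what let the system distinguish, say, a static held multiphonic from a volatile run that merely averages to the same pitch.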
(screenshot of Rehearsal database splash screen)
An interesting part of the process of creating _derivations has been the way in which each iteration of the software has posed new questions about the nature of the design of such systems, but more importantly of the potential that software systems such as these have in defining new interactive relationships between performers and technology. Throughout the project, a recurring theme has been the storage and delayed use of analysed data captured from the audio signal. In a performance with a system such as this, the potential for increasing complexity and richness of musical material is clearly evident, as the vocabulary of the interactive system grows throughout a performance with the accumulation of more and more performance data. With these databases in place however, there was nothing to say that this data could not be stored past the temporal restriction of one performance-time interaction. After all, the audio from the performance has been recorded, the data stored in data collections - why not make use of it? Furthermore, what if this data could then inform and complexify the interactions of a subsequent improvisation, and then the data from this improvisation be used to influence the next improvisation and so on? This is the idea that led to the latest stage of development in _derivations, that of the introduction of cumulative rehearsal databases. In the current system design, each performance with the system can be treated as a rehearsal. A performer can then choose to recall an accumulated database of previous rehearsal sessions with the system. The data that is stored includes everything from the audio recordings, timing information for phrase segmentation, spectral data for re-synthesis and all of the statistics aligned to each phrase previously analysed.
This then enables an interactive paradigm in which, from the outset of an improvisation, _derivations consults not only a database built up during the current performance, but from all the previous interactions loaded. With such a database loaded before performance, the system begins a performance with an already rich vocabulary of phrases and spectral information, in addition to the information being analysed and added to the database in real-time.
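In implementation terms, the rehearsal database is essentially a persistence layer: each session's phrase data is appended to a store on disk, and the whole store is loaded before the next improvisation begins. A minimal Python sketch of that idea (JSON and the function names are illustrative choices of mine, not the actual storage format):

```python
import json
import os

def load_rehearsals(path):
    """Load every previously stored rehearsal session (or none)."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f)

def save_rehearsal(path, session):
    """Append one session's phrase data - timing points, descriptor
    statistics, a reference to the session's audio recording - to the
    accumulating store, so a subsequent improvisation can load it."""
    sessions = load_rehearsals(path)
    sessions.append(session)
    with open(path, "w") as f:
        json.dump(sessions, f)
```

With something like this in place, the system's starting vocabulary for any given performance is simply whatever subset of past sessions the performer chooses to load.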
(the above audio example demonstrates a performance with a database of three previous rehearsals loaded - Alana Blackburn | tenor recorder)
The inclusion of the rehearsal database is the freshest development in _derivations to date - and is once more posing interesting questions about designing interactive systems for instrumental performance. The system is now no longer designed just for one performance-time interaction, one live performance. The design of the system takes into account the unique nature of the rehearsal space in musical performance, but also questions the nature of instrumental performance with digital technology. Can a system such as this be used by performers as a type of creative workshop environment, rather than just performed with once? What effect does the ability for the performer to make decisions over the interactive mapping of the software have on the eventual outcome of a performance? What other elements of a rehearsal or workshop space can be thought about in the design of interactive systems for instrumental performance? These are intriguing and exciting questions, questions that I am only just now beginning to think about in my research and creative practice.
Ben Carey - December 2011
thoughts, experiments and discoveries in sound, interaction and electroacoustic music
An additive synthesiser for mixing 3-voice tone clusters. The user navigates through coloured nodes to mix the amplitudes of the chosen tone clusters. Three performance modes: droning, rhythms and one-shot triggering.
An additive synthesis patch used as an ear training and sound design tool. The patch was programmed for students to construct timbres by manipulating the amplitudes of the first 16 partials of the harmonic series.
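The principle behind the patch - building a timbre by weighting the first 16 partials of the harmonic series - can be sketched in a few lines. This Python version is only an illustration of the idea (the patch itself is MaxMSP, and the function name and example amplitudes are mine):

```python
import math

def harmonic_timbre(fundamental, amplitudes, duration=0.5, sr=44100):
    """Sum the first len(amplitudes) partials of the harmonic series.
    amplitudes[k] scales partial k+1, at frequency fundamental*(k+1)."""
    n = int(duration * sr)
    samples = []
    for i in range(n):
        t = i / sr
        s = sum(a * math.sin(2 * math.pi * fundamental * (k + 1) * t)
                for k, a in enumerate(amplitudes))
        samples.append(s)
    return samples

# e.g. an approximate square-wave timbre: odd partials only,
# with amplitude falling off as 1/n.
amps = [1.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(16)]
tone = harmonic_timbre(220.0, amps, duration=0.01)
```

As an ear-training exercise, students adjust the 16 amplitude values by hand and listen for how each partial colours the composite tone.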