THE ROUTLEDGE RESEARCH COMPANION TO ELECTRONIC MUSIC
The theme of this Research Companion is ‘connectivity and the global reach of electroacoustic music and sonic arts made with technology’. The possible scope of such a companion in the field of electronic music has changed radically over the last 30 years. The definitions of the field itself are now broader – there is no clear boundary between ‘electronic music’ and ‘sound art’. Also, what was previously an apparently simple divide between ‘art’ and ‘popular’ practices is no longer easy or helpful to draw, and there is a rich cluster of streams of practice with many histories, including world music traditions. This leads in turn to a steady undermining of what was a primarily Euro-American enterprise in the second half of the twentieth century. Telecommunications technology, most importantly the development of the internet in the final years of the century, has made materials, practices and experiences ubiquitous and apparently universally available – though some contributions to this volume reassert the influence and importance of local cultural practice. Research in this field is now increasingly multi-disciplinary. Technological developments are embedded in practices which may be musical, social, individual and collective. The contributors to this companion embrace technological, scientific, aesthetic, historical and social approaches and a host of hybrids – but, most importantly, they try to show how these join up. Thus the intention has been to allow a wide variety of new practices to have voice – unified through ideas of ‘reaching out’ and ‘connecting together’ – and in effect showing that a different kind of ‘global music’ is emerging.

Simon Emmerson studied at Cambridge University and City University London. He is a composer, writer and professor at De Montfort University, Leicester. He works mostly with live electronics and has published widely in electroacoustic music studies. Keynote addresses include ACMC (Auckland), ICMC (Huddersfield), WOCMAT (Taiwan), and the Edgard-Varèse Guest Professorship (Berlin).
THE ROUTLEDGE RESEARCH COMPANION TO ELECTRONIC MUSIC
Reaching out with Technology
Edited by Simon Emmerson
First published 2018 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2018 selection and editorial matter, Simon Emmerson; individual chapters, the contributors
The right of Simon Emmerson to be identified as the author of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
Names: Emmerson, Simon, 1950–
Title: The Routledge research companion to electronic music : reaching out with technology / edited by Simon Emmerson.
Description: Abingdon, Oxon ; New York, NY : Routledge, 2018. | Includes bibliographical references and index.
Identifiers: LCCN 2018000824 | ISBN 9781472472915 (hardback) | ISBN 9781315612911 (ebook)
Subjects: LCSH: Electronic music—History and criticism. | Computer music—History and criticism. | Music and technology.
Classification: LCC ML1380 .R76 2018 | DDC 786.7—dc23
LC record available at https://lccn.loc.gov/2018000824
ISBN: 978-1-472-47291-5 (hbk)
ISBN: 978-1-315-61291-1 (ebk)
Typeset in Bembo by Apex CoVantage, LLC
CONTENTS
List of figures
List of contributors

Introduction: music practice – reaching out with technology
Simon Emmerson

PART I
Global reach – local identities

1 Research-creation in Latin America
Ricardo Dal Farra (and others)

2 Electronic music in East Asia
Marc Battier and Lin-Ni Liao

3 The three paths: cultural retention in contemporary Chinese electroacoustic music
Leigh Landy

4 Technologies of genre: digital distinctions in Montreal
Patrick Valiquet

5 Dancing in the technoculture
Hillegonda C Rietveld

PART II
Awareness, consciousness, participation

6 Participatory sonic arts: the Som de Maré project – towards a socially engaged art of sound in the everyday
Pedro Rebelo and Rodrigo Cicchelli Velloso

7 The problems with participation
Atau Tanaka and Adam Parkinson

8 The agency of sonic art in changing climates
Leah Barclay

9 Tuning and metagesture after new natures
Sally Jane Norman

10 Music neurotechnology: a natural progression
Eduardo Reck Miranda and Joel Eaton

PART III
Extending performance and interaction

11 Where are we? Extended music practice on the internet
Simon Emmerson and Kenneth Fields

12 Rendering the invisible: BEAST and the performance practice of acousmatic music
Jonty Harrison

13 Creative coding for audiovisual art: the CodeCircle platform
Mick Grierson

Index
FIGURES
1.1 The Silent Drum by Jaime Oliver La Rosa
1.2 Veerkracht by Menno Van der Woude
1.3 Yemas by Daniel Gómez
1.4 New version of Yemas by Daniel Gómez
1.5 Iluminado by Ricardo Brazileiro
1.6 The interactive performance re(PER)curso (2007)
1.7 The Brain Orchestra performance (2009)
1.8 An overall view of the stage during performance of continuaMENTE. It is possible to see the interactive carpet and the interactive mallets played on the stage floor
1.9 An overall view of the stage during performance of continuaMENTE. It is possible to see the interactive carpet and the interactive mallets played on the stage floor
1.10 AURAL – Robotic Variations – the dancer, the robots and the interactive scenery
1.11 AURAL – Robotic Variations – the robots and the musicians
1.12 AURAL – Robotic Variations – the dancer, the robots and the musicians during the rehearsals
2.1 A photo of the now famous ‘Class of 1978’, the students admitted at the first class of composition post-cultural revolution at the Beijing Conservatory of Music
2.2 This photograph of the ‘New Wave’ composers (from the People’s Music Journal 1986) includes Xu Yi (third from left) and Xu Shuya (right)
2.3 Yuasa Jôji, Tokyo, 2017
2.4 Wang Chi performing Flowing Sleeves (2015)
2.5 EMSAN Day 2012, Beijing Central Conservatory of Music: Kenneth Fields, Ban Wenlin, Anthony De Ritis, Liao Lin-Ni, Mizuno Mikako, Zhang Xiaofu, Marc Battier, Li Qiuxiao
6.1 Som da Maré participant annotating the Museu da Maré courtyard with text and drawing pointing out the sounds of footsteps, wind, noise and speech
6.2 Amanhã . . . (Tomorrow . . .): Detail of ‘sound wall’, Gallery 1, Museu da Maré
6.3 Leituras em diálogo (Readings in dialogue): Detail of newspaper photos
6.4 Banho de Chuva (Rain Shower): Detail of installation at MAC Niteroi
7.1 Chiptune Marching Band workshop with Kazuhiro Jo
7.2 Chiptune Marching Band participants marching with self-made instruments at the Maker Faire
7.3 Young people participating in SiDE workshop with mobile music technologies
8.1 Leah Barclay recording in Noosa Biosphere Reserve
8.2 Biosphere Soundscapes Residency, Sian Ka’an Mexico, 2015
8.3 ‘A Bird’s Paradise: Interactive Tropical Birds Soundscape’ – Dr Stéphane Pigeon
8.4 Biologist Dr Sandra Gallo-Corona recording in Sian Ka’an Biosphere Reserve, Mexico
8.5 Rainforest Listening Installation, South Bank Parklands, Brisbane, Australia, 2016
8.6 Leah Barclay recording in Noosa Biosphere Reserve
8.7 Leah Barclay recording with Aquarian Audio Hydrophones, 2016
8.8 Leah Barclay, River Listening, 2016
8.9 Leah Barclay, River Listening, 2016
9.1 The DESPATCH/ARTSAT2 model
10.1 Image of a typical hen embryo aggregate neuronal culture on a scanning electron microscope, magnified 2,000 times
10.2 Plotting of the first second of a data file showing the activity of the spheroid in terms of µV against time. Induced spikes of higher amplitudes took place between 400 ms and 600 ms
10.3 Cochleogram of an excerpt of a sonification where spikes of higher amplitude can be seen just after the middle of the diagram
10.4 A raster plot illustrating collective firing behaviour of a simulated network of spiking neurones. Neurone numbers are plotted (y-axis) against time (x-axis) for a simulation of 50 neurones over a period of 10 seconds. Each dot represents a spiking event
10.5 At the top is a sinusoid signal that stimulated the network that produced the spiking activity represented by the raster plot at the bottom
10.6 An example of a run of the simulation that produced sparse spiking activity because the amplitude of the sine wave and the spiking sensitivity of the neurones were set relatively low
10.7 Transcribing spikes from a raster plot as semiquavers on a score
10.8 The process of converting the spikes into musical notes was done manually
10.9 Resulting rhythmic template
10.10 The resulting musical passage
10.11 A patient with locked-in syndrome testing the BCMI system
10.12 Each target image is associated with a musical instrument and a sequence of notes
10.13 Notes are selected according to the level of the SSVEP signal
10.14 A rehearsal of The Paramusical Ensemble, with locked-in syndrome patients performing Activating Memory
10.15 Photo of our new SSVEP stimuli device. In this photograph, the LCD screens are showing numbers, but in Activating Memory they display short musical phrases, such as the ones shown in Figure 10.16
10.16 Detail from the SSVEP stimuli device, showing a short musical phrase displayed on the LCD screen
10.17 An example of two sets of four musical riffs on offer for two subsequent sections of the violoncello part
10.18 Photo of musicians and hybrid BCMI performer (centre, rear) preparing to begin a performance of A Stark Mind. At the start of the performance the visual score is projected on the back of the stage for the musicians, hybrid BCMI performer and the audience to see
10.19 Diagram of the hybrid BCMI system for A Stark Mind. Features from the three methods of control are extracted from EEG and mapped to parameters of the visual score through transformation algorithms
11.1 A typical connection scheme from an early concert
12.1 The basic configuration of stereo: two loudspeakers (squares) and a listener (A) in an equilateral triangle
12.2 The problems with basic stereo in a larger concert space
12.3 The BEAST Main 8
12.4 The BEAST Main 8 plus Side Fill speakers
12.5 As in Figure 12.4, with the addition of sub-woofers and tweeters
12.6 Adding height: two speakers on the Proscenium Lighting Bar, plus two pairs in the Gallery
12.7 Additional loudspeaker pairs: Front/Back (for wide halls), Flood, Punch, Floor and Desk
12.8 The DACS 3D – a custom-built diffusion desk used by BEAST during the 1990s and early 2000s
12.9 A typical BEAST fader layout for the DACS 3D, based on the system design of Figure 12.7 (though without the Front/Back pair)
12.10 Mackie control surfaces used by BEAST in the mid-2000s
12.11 The three MotorBEASTs, custom built by Sukandar Kartadinata of Glui, Berlin in 2008. The full BEAST system is normally controlled by two of these
12.12 Diffusion score of Unsound Objects (1995)
12.13 Two common but incompatible standards for 8-channel works: what in BEAST are called the ‘French Ring’ (dark) and the ‘Double Diamond’ (light)
12.14 A screenshot showing some of the features of the BEASTmulch software, used to deal with signal routing and fader control of the BEAST system in concerts
12.15 BEAST at the 2007 Øresundsbiennalen SoundAround festival, Copenhagen
12.16 BEAST in the CBSO Centre, Birmingham, in 2009
12.17 BEAST in the Elisabethkirche, Berlin, for the 2010 Inventionen festival
12.18 The main computers with outputs from the audio interfaces (Berlin, 2010)
12.19 The amplifiers for BEAST’s passive loudspeakers (Berlin, 2010)
12.20 The trussing in the roof of the new Elgar Concert Hall in the Bramall Music Building, University of Birmingham, for the hall’s opening festival, which coincided with BEAST’s 30th anniversary. Only the square of trussing remains, from which the Keystones and all 10 Tweeter Trees are now suspended
12.21 Building the BEAST system in the Elgar Concert Hall, Bramall Music Building, University of Birmingham (the concerts in May 2014 used the same configuration as was used for the opening festival in December 2012): the main circle of eight ATCs in a ‘French Ring’
12.22 Adding four Genelec 1037s to allow the creation of a ‘distant’ image (by replacing the front four ATCs with the 1037s)
12.23 Eight APG speakers, on the floor and facing away from the audience, form the Diffuse array (also configured as a French Ring)
12.24 Eight Genelec 8030s at slightly above ear height form the Close array, configured as a Double Diamond
12.25 Eight Genelec 8030s on telescopic stands form the Truss array (so-called because these were formerly hung from aluminium lighting trussing); these are above the listeners’ heads but still relatively close
12.26 The Keystone speakers – four Genelec 8030s located on the central square of trussing, suspended from a central lighting bar over the audience. A central crosspiece and eight outriggers also allow the 10 Tweeter Trees to be suspended
12.27 Figures 12.24–12.26 combine to create a dome-like array of matched speakers
12.28 Four more Genelec 8030s form the Desk array. These are normally on the floor, facing directly upwards, though other configurations have also been tried, ranging from two to eight speakers, facing up, out or even directly down at the floor
12.29 Eight Genelec 8040s form the Mid-High array (in the Double Diamond configuration). The May 2014 events were the last to use this array, as it was felt that the height difference between these and the ‘ground-floor’ speakers was insufficient to create a noticeably different image
12.30 The High array – eight Genelec 8040s on the upper gallery and the front lighting bar
12.31 The two arrays of 8040s (Figures 12.29 and 12.30), together with the eight Main ATCs and the two APGs high up on the mesh in the roof (see Figure 12.32) form a kind of ‘outer dome’
12.32 The FX (special effects) array – speakers in the ECH’s reverberation chambers, on the choir stalls, on the mesh high up in the ceiling and on the top of the lighting box. Because the concert also required a 5.1 array (see Figure 12.33), only one Genelec 8050 was available for this location
12.33 The five full-range speakers (Genelec 8050s) for the 5.1 array
12.34 Flood (APG) and Punch (Tannoy Lynx) speakers for stereo pieces
12.35 The Subs (eight Genelec 7070s and two Genelec 1094s)
12.36 The full 96-channel system
12.37 Screenshot of Going/Places (2015) showing the 32 source tracks, originally colour-coded (but here visible in gray-tone) by stems: Diffuse (1–8), Main (9–16), Reference (17–18), Desk (19–20), Solo (21–24), High (25–28), Low (29–32)
12.38 Layout for Going/Places (2015) – each source track from Figure 12.37 is routed directly to the correspondingly numbered loudspeaker
13.1 The CodeCircle front page
13.2 The basic CodeCircle Interface. The code editor is on the right. Above the editor are file operations, including elementary permissions. The code is rendered underneath
13.3 WebGL shader code executing in the CodeCircle platform
13.4 C++ DSP code transpiled and running in the CodeCircle platform
13.5 BMinorThing by ‘pressxtoskip’, https://live.codecircle.com/d/cxuENqhG9ifkfkLHN
13.6 Yee-algo pattern FX by ‘Cyleung274’, https://live.codecircle.com/d/LyRCpnLw9tf9YoGiA
13.7 A sphere being deformed by frequency modulation on the CodeCircle platform. This work was created by a second-year undergraduate student enrolled on Goldsmiths’ creative computing programme
13.8 Two screenshots from Memo Akten’s Particle system example
13.9 A wireframe superformula on CodeCircle
13.10 Note the white pop-up window and hazard symbol, clearly indicating code errors on line 131 and providing relevant reparatory advice
CONTRIBUTORS
Leah Barclay is an Australian sound artist, composer and researcher working at the intersection of art, science and technology. She specialises in electroacoustic music, acoustic ecology and emerging fields of biology exploring environmental patterns and changes through sound. Her work has been commissioned, performed and exhibited to wide acclaim internationally.

Marc Battier is a composer of instrumental and electroacoustic music. He is Professor emeritus of musicology at the Université Paris-Sorbonne (electroacoustic music, organology), DeTao Master of Electroacoustic Music (Beijing), professor of composition at the Aichi University of Fine Arts (Japan, 2018), and distinguished professor at the Suzhou Academy of Music (China).

Carolina Brum Medeiros is a researcher and engineer, CTO of Fliprl and affiliate of the IDMIL Lab, Montreal. She received BS and MS degrees from Universidade Federal de Santa Catarina (UFSC), Brazil, and a PhD in music technology from McGill University. Her research focuses on sensing, estimation and tracking of human motion.

Filipe Calegario holds a PhD from the Universidade Federal de Pernambuco, Brazil. His research interests include design of digital musical instruments and new input devices for musical interaction. He is currently an affiliate of the Music Technology and Interaction (MusTIC) Research Group of the Federal University of Pernambuco and researcher at Daccord Music Technology.

Rodrigo Cicchelli Velloso is a composer and flutist from Rio de Janeiro, Brazil. He has a PhD in electroacoustic composition from the University of East Anglia, UK. He is Professor Titular at the Departamento de Composição, Escola de Música, Universidade Federal do Rio de Janeiro.

Ricardo Dal Farra is a composer, new media artist, educator, historian and curator, and Professor of Music and Media Arts at Concordia University, Canada. He has created the largest publicly available collection of Latin American Electroacoustic Music. He is founder of the international conference series Understanding Visual Music and Balance-Unbalance.
Joel Eaton is a musician, creative technologist and education specialist. He has a PhD in brain-computer music interfacing from Plymouth University, UK. His research has investigated new applications of using BCI systems for music control and performance. His work has been performed at international festivals, concerts and events.

Simon Emmerson studied at Cambridge University and City University London. He is a composer, writer and Professor at De Montfort University, Leicester. He works mostly with live electronics and has published widely in electroacoustic music studies. Keynote addresses include ACMC (Auckland), ICMC (Huddersfield), WOCMAT (Taiwan) and the Edgard-Varèse Guest Professorship (Berlin).

Kenneth Fields is Professor of Networked and Electronic Music at the Central Conservatory of Music, Beijing. Previously, he held the position of Canada Research Chair in Telemedia Arts, University of Calgary, investigating live music performance over high-speed networks. He is developing Artsmesh, a live networked music platform for the World Live Web.

Mick Grierson is a professor in the Department of Computing at Goldsmiths. His research explores new approaches to the creation of sounds, images and interactions through techniques such as signal processing, machine learning and information retrieval. His software is widely used by award-winning artists and creative industries practitioners.

Jonty Harrison is a composer of acousmatic music with numerous international prizes, commissions and residencies, four solo albums (empreintes DIGITALes, Montreal) and works on several compilations. He is Emeritus Professor, University of Birmingham, UK, founder and former director of BEAST, and Compositeur Associé, Maison des Arts Sonores/KLANG! acousmonium, Montpellier, France.

Fernando Iazzetta teaches music technology and electroacoustic composition at the University of São Paulo and is the director of NuSom – Research Centre on Sonology. He is currently a research fellow at CNPq, the Brazilian National Council of Scientific and Technological Development.

Leigh Landy is Director of the Music, Technology and Innovation Research Centre at De Montfort University. His compositions have been performed worldwide. His publications, including several books, focus on the study of electroacoustic music. He is editor of Organised Sound, directs the ElectroAcoustic Resource Site (EARS) projects and is a founding director of the Electroacoustic Music Studies Network.

Lin-Ni Liao examines interculturality in all fields of her work: composition, artistic production and academic research, the last of which focuses on musical analysis, artistic identity, cultural heritage and the role of women in contemporary music from the Far East.

Jônatas Manzolli combines contemporary musical creation and cognitive sciences, focusing on the dialogues between music and science. This interdisciplinary study results in electroacoustic and multimodal works. A composer and mathematician, full professor at the Institute of Arts, University of Campinas, Brazil, he is a pioneer in Brazilian research in computer music.
Hamilton Mestizo’s work primarily explores the interfaces of arts, science and technology and their critical, ecological and social-political implications. In the last decade, Mestizo has combined his artistic practice with education and research focused on open source software-hardware development, DIY-DIWO culture and biotechnology.

Sally Jane Norman is Director of the New Zealand School of Music – Te Kōkī, Victoria University Wellington, New Zealand, and a performance historian and practitioner working on embodiment, art and technology. She was previously Professor of Performance Technologies and co-director of Sussex Humanities Lab, founding Director of Culture Lab (Newcastle University), and artistic co-director of the Studio for Electro-Instrumental Music (Amsterdam).

Jaime Oliver La Rosa is a computer musician and scholar. He develops open source software and interfaces for live performance that explore the malleability of the concept of ‘musical instrument’ in new media. He is Assistant Professor at New York University and holds a PhD from UC San Diego.

Adam Parkinson is an electronic musician and senior lecturer in Sound Design at London South Bank University. He has published on a range of topics including human-computer interaction, sound art, design practices and sonification. As Dane Law, he releases fragmented, sample-based electronic music on labels including Quantum Natives and Conditional.

Pedro Rebelo is a composer, sound artist and performer, and Professor of Sonic Arts at Queen’s University Belfast. His work focuses on practice-based research, and he has recently been awarded over £1 million from the Arts and Humanities Research Council (UK) for interdisciplinary projects investigating relationships between sound, music and conflict situations.

Eduardo Reck Miranda is a composer working at the crossroads of music and science. He gained an MSc in music technology from the University of York and a PhD in music from the University of Edinburgh. He is Professor in Computer Music at Plymouth University, where he directs the Interdisciplinary Centre for Computer Music Research.

Juan Reyes, composer and researcher, embraces mathematical and computer models for expression in music and the arts. He has pursued studies in computer science, mathematics and music at the University of Tampa and Stanford University. He has been professor of arts and music at several universities in Colombia.

Hillegonda C Rietveld is Professor of Sonic Culture at London South Bank University. She has published extensively on electronic dance music cultures and has co-edited the collection DJ Culture in the Mix: Power, Technology, and Social Change in Electronic Dance Music, and a special issue, ‘Echoes of the Dubdiaspora’, for Dancecult: Journal of Electronic Dance Music Culture.

Atau Tanaka is Professor of Media Computing at Goldsmiths. His research in embodied musical interaction takes place at the intersection of Human-Computer Interaction and gestural computer music performance. His performances and installations have been presented worldwide, and his work has been supported by the European Research Council (ERC).

Patrick Valiquet is a Canadian musicologist specializing in the ethics and epistemology of experimental musics. He holds degrees from McGill University, Concordia University, the Institute of Sonology and the University of Oxford and has held postdoctoral fellowships at the University of Edinburgh and the Institute of Musical Research (UK).

Marcelo M. Wanderley is Professor of Music Technology at McGill University, Canada, and International Chair at Inria Lille, France. His research interests include the design and evaluation of digital musical instruments and the analysis of performer movements. He is a senior member of the IEEE and of the ACM.
INTRODUCTION
Music practice – reaching out with technology
Simon Emmerson
The changing value of practices reflected in the changing terminology

At the Alternative Histories of Electronic Music conference at London’s Science Museum in April 2016, I presented a personal account of how I felt the study, research and teaching of electronic/electroacoustic music had evolved since my earliest experiences of the professional world following my graduation from university in 1972. Importantly, one of the tasks was to see how that view – even at a personal level – could be reviewed and often completely re-evaluated in the light of subsequent history.

From 1972 to 1974 I taught physics and music at a state secondary school in Hertfordshire (UK). This county had a very progressive education authority which organised awayday sessions on new teaching ideas and practices; in the music field this included a weekend workshop with John Paynter1 – who introduced a range of contemporary music techniques with an emphasis on group involvement, utilising instrumental and vocal resources available to all irrespective of traditional instrumental competence. In the school I was encouraged to use recording technology, improvisation, graphic and free scores and so on, both in classes and extracurricular music activities. I also became a tutor in further education evening classes at Hatfield Polytechnic (now the University of Hertfordshire) shortly after. Thus some of my earliest purchases included ‘DIY’ tape technique (Dwyer 1971) and teaching ‘experimental music’ texts. It was a golden age for teaching text publication in the UK (George Self (1967), Brian Dennis (1970), John Paynter and Peter Aston (1970), Trevor Wishart (1974), Richard Orton (1981)) which had (I believe) a long-term effect across many aspects of British culture which we still do not fully appreciate.

In 1974 I returned to university to study towards a PhD in ‘electronic music’. One clear memory from my research student years, which I now find surprising, was my belief that it was possible to possess all the texts relevant to my field – probably on a shelf or two, with extra space for journals (of which there were then very few). The relationship of ‘research’ to ‘teaching’ texts was very fluid at university level; there were agreed canons of primary texts (by composers, studio reports, technical ‘how to’ books and the like) and secondary texts (introductions to the field with a varying mix of technical and musical descriptions). Almost the same goes for (LP) recordings. Recordings of most works cited in main texts had by then become available with a degree of perseverance. There was a core group of ‘easy to obtain’ labels (DG, Philips, CBS . . .)
joined by specialist labels available only ‘on import/to order’ – labels for the early French musique concrète and some American labels were collector’s items almost as soon as they were issued. While the bias of my bookshelf was anglophone – I possessed all the American texts of the day (these were the earliest teaching texts in English and dominated the field) – I had a good selection of French and German primary texts in addition. Italian, Netherlands and Swedish contributions were also present, though usually via translations. The classical music canon defined by these texts was clear – it was a standard Euro-American history simply defined and largely accepted without question.
The canon and early vocabulary – words and practices

This canon defined the basis of a vocabulary that remains with us (more or less) to this day, although within a very different context. I shall summarise this canonic vocabulary, as originally used and as it evolved, and then try to see how revisionist histories have steadily eroded its centrality, especially within the practice of teaching programmes. Thus the classic (European) division of elektronische Musik and musique concrète, along with American ‘tape music’, was the basis of a foundational history.

Musique concrète has been largely mis-summarised in earlier English-language writings – its key tool is écoute réduite (reduced listening), that is, removing from our considered perception all notions of source and cause to focus on the quality of the sound in itself.2 Indeed its founder, Pierre Schaeffer, focused on the rich nature of recorded sound and declared his total dislike of the poverty of the electronic sound of the time. But as synthesisers became more sophisticated (and produced greater complexity and interest) they could be (and were) installed in the Paris studio and included in source sound production in this field.3 Musique concrète is thus a description of an attitude to listening (and consequent approach to composition), not the sound source itself – any sound can be perceived as concrète in the right circumstances.

Elektronische Musik, on the other hand, had a very different ideology, what we might call hardcore synthesis. An atomistic additive approach starting from ‘elementary particles’ – sine tones, impulses – was complemented by a subtractive route based on the filtering out of frequencies from noise bands. The fact that its original practitioners advocated organisational principles derived from serialism4 at every level of working was perhaps a cause of later confusion in terminology – at least a schism (we shall return to this).

‘Tape music’ was an early term and had much wider usage than is recorded in textbooks. As an umbrella label for some key experimental work from Otto Luening, Vladimir Ussachevsky, Pauline Oliveros, John Cage and Steve Reich, it appears to avoid the worst of the ideological baggage of the European traditions. Tape is the medium; what is inscribed on it is (almost literally) immaterial. That said, there was a hidden ideology left barely stated – that of the so-called experimental or avant-garde practices of the USA at the time: after all, all recorded music – classical, popular, jazz and so forth – would be ‘on tape’ by this time. But this was music that could only be on tape. In this sense it was much more widely used, including in Europe, where the phrase ‘on tape’ or ‘with tape’ became simply a description of anything using pre-recorded sound.5

Shortly after came the use of computers to generate sound and music, which has its foundation myth around Max Mathews at Bell Labs in 1957.6 ‘Computer music’ – a descriptor of means – retained an apparently separate identity until the advent of personal computers brought all options under one umbrella in the 1990s. But this was a complex process of analogue becoming digital.7 The powerful command line approaches of the ‘Music N’ family of programmes did not take over as this process progressed, but instead the ghost of analogue practices became visualised screen icons within the emerging DAW culture of the new ‘tapeless’ studio. The term survives because – well, nowadays everything is computer music, isn’t it? Well, not exactly . . .
as we shall see.
Live electronic

Another of our first generation terms is ‘live electronic’ – strangely (but creatively) this was never strictly defined and always covered a wide variety of practices and approaches. In trying to construct this part of the discussion I realise that it is very difficult to see the term as defining any kind of canon. Both the field itself and its associated vocabulary might be seen as a Trojan horse within the canon of 1950s and 1960s foundation myths. It is destabilising, challenging and innovative in its influence – no other practice (and terminology) will be left unscathed as it evolves. If there is any kind of canonic description it tends to come from a few, sometimes only generally described, roots. Sometimes even within these vague notions there are tensions and contradictions.

These might be summarised as a three-way schism between ‘art music’, ‘experimental music’ and techniques associated with popular music.8 The first two notions correspond broadly with Michael Nyman’s (1974) paradigmatic definition/distinction of ‘avant-garde’ (as acknowledging and eventually renewing historical continuity in a high art tradition) and ‘experimental’ (paying much less attention to precedents and making no claims to renewing an ongoing tradition of any kind). While such a binary distinction has now become rather blurred – the experimental might be said to have become a tradition – there is a useful tool here that at least points out an anti-canonic element within the canon (as it were). The high art line focussed on maintaining the modernist view of the ‘work’, whereas from the outset the experimental stressed a more ‘DIY’ approach, also clearly including improvisation (or at least more open scoring) and a greater emphasis on process. If we include the third part of the divide – possible cross-overs to popular music practices – then we glimpse a much larger elephant in the room. Early synthesisers are often included here, such as the Telharmonium, Theremin, Ondes Martenot and Hammond organ. Missing or referenced only in passing is the vast area of amplification (at first guitar and voice in the 1920s and 1930s). We shall return to this later.

But then in contradistinction we have the earliest DIY approaches to live electronics. John Cage playing turntables in a radio studio in the 1940s has often been cited as the first case, then a proliferation in the 1950s, using radios, turntables and phonograph cartridges to ‘amplify small sounds’; David Tudor’s chaining of home-made circuits into amoeba-like growths, usually attached to resonators within feedback systems in the 1960s and 1970s. Similarly Alvin Lucier’s harnessing of every kind of acoustic space within a resonant mode of construction is close to later interactive sound installation but remains rooted in a powerful ritual of performance. Indeed the American scene saw one of the first live electronic groups (Sonic Arts Union, founded in 1966), cited in most foundational texts. In Europe STEIM (Amsterdam) drew much from this tradition in an eclectic mix of in-house and invited residency productions; always live and challenging – and anti-canonic by ignoring any such normative tendencies.9 Its title – foundation for ‘electro-instrumental’ music – introduces a term that surely should have gained wider circulation but did not.
Then there is the (originally) French denomination of ‘mixed music’ (live instrument(s) or voice(s) and electroacoustic sound) – a terminology which (again) did not gain wide usage; originally the ‘electroacoustic’ referred to pre-recorded (thus fixed) material, but recently any such sound (including that produced in real time) can be included in this description. Stockhausen’s move to use the ring modulator ‘live’ in Mixtur (1966) was an early ‘art music’ example. At around this time he established a small group for the performance of his works. While usually described as one of the first generation of ‘live electronic music’ ensembles, its practice was (in retrospect) more ‘live with electronics’ than strictly live electronic.10 Then in the later 1960s there was an explosion of such performance groups, small, flexible, experimental, usually embracing improvisation and often the use of visual media.11 While the real-time digital revolution was to
expand possibilities vastly in later decades (to which we shall return), the (sometimes analogue) DIY, low-tech ‘live’ has developed steadily, too (Collins 2009).
Electronic music – as more widely used

It is strange that we return to the most obvious term ‘electronic music’ in this discussion of first generation vocabulary. The fact that the German contribution (elektronische Musik) appeared to resist translation into English (or any other language!) indicates that we associate it with a very specific set of aesthetic ideals. Most non-German usage12 was never associated with the specific avant-garde practices surrounding serialism. The English term ‘electronic’ was indeed adopted in the anglophone world quite rapidly, at first for the esoteric and ‘art music’ applications that came from continental Europe, including, strangely, references to musique concrète. If there was initially a split between the ‘high art’ experimenters and those aiming at a wider, more general audience, this was bound to be undermined by a new generation of sound creation possibilities in dedicated ‘synthesisers’ developed in the 1960s and 1970s – Buchla, Moog, EMS, Sequential Circuits, Oberheim, as well as the meteoric rise of Japanese manufacturers – Yamaha, Roland, Korg, Casio – when the term ‘electronic’ was more fully adopted into mainstream use.
Electroacoustic and acousmatic

From the outset there were conflicts and contradictions in these descriptive terms, whether based on sound materials, generation, medium of creation (and dissemination), listening condition or attitude . . . and so on. When the newer term ‘electroacoustic’ came along it was initially treated with caution, even rejected by many. Yet, conversely, to this day many prefer ‘electroacoustic’, believing ‘electronic’ to have either a more specifically synthetic method at its roots,13 or more generally to suggest popular music origins. While probably introduced via the French tradition,14 its use spread more widely. It was inclusive, appearing to cross the boundaries of recorded and electronically generated sound. It probably benefitted from an existing (related) use. For many years there had been an entire section of any engineering library devoted to ‘electroacoustics’ focusing on those two key tools of the field – the microphone and the loudspeaker, and their associated physical and electronic systems. Yet it also maintained its ‘art house’ roots, never really crossing over to experimental forms (whether popular or otherwise).

The relationship of the term ‘electroacoustic’ to ‘acousmatic’15 has never been clear. The latter did not replace the former, which (as I remarked earlier) has never disappeared – indeed remains one of the most used terms for this field as a whole, at least its art music manifestation (Emmerson and Landy 2016) – but seems to have somehow layered on top of it, taking only some of the territory. At its most straightforward, ‘acousmatic’ (a term claiming ancient origins) is a condition of listening without seeing the source or cause of what is heard – and has been applied to all of the sound practices produced through the loudspeaker (radio, film, sound recording etc.) but a lot of other (non-loudspeaker) situations besides (hidden acoustic sources in religion, theatre, soundscape, geosphere (Kane 2014)). For our purposes it also became applied to a smaller group of art music practices within this wider usage. It had lain relatively quiescent after its new coinage around 195516 until brought to life and advocated as a genre descriptor in the 1980s by (amongst others) François Bayle (1993). This term has often claimed a mantle, too – that of successor genre to musique concrète and (most of) electroacoustic music on fixed media.17
I say ‘most of’ because there has been some confusion here. Soundscape-based art might be included in the acousmatic – but its practitioners encourage recognition not simply of source and cause, but of location, site and identity – the very antithesis of écoute réduite. So this could not have been included in the earlier of the two genres (musique concrète), strictly defined. To confuse matters further, in the meantime the strict abstractions of écoute réduite have (in practice) been relaxed and, in many circumstances, recognisable sounds have returned to common usage within acousmatic composition – usually designed to engage a ‘play’ of sound quality/sound source.18 ‘Acousmatic’ has since grown steadily in use but in a sense more exclusively to refer to clearly art music practices which actively exploit the condition. This ‘practice or condition’ dichotomy of meaning of the term is the cause of much confusion in its use.
Popular music glimpsed

The exclusion of reference to non-‘art’ musics was almost complete in early canonic textbooks. But not quite. In a relatively superficial manner, most of the key introductory texts addressed some of the more experimental rock (and other popular) musics of the time. The obvious technical (studio-based) experiments of the Beatles and Beach Boys and the more noisy extended textural live performances of the Velvet Underground and Grateful Dead were usually cited from the 1960s, with innovative studio and live developments by such groups as Pink Floyd following shortly after. This was a very limited tip of a large iceberg – at least for the scholarly writings of the time. It failed to establish a canon of practice in this part of the field – possibly because those writing about it were not practitioners in any committed sense. Even within the expanding area of popular music recording studies which emerged from around this time, little space was devoted to the relationship of the technology to sound quality. There was a greater emphasis on performance aspects and overall ‘style’ in a wider visual and cultural context. It is only relatively recently that much of the innovatory use of new sound methods in rock and popular music has been disentangled from the proprietary secrecy of the recording studio and gig – for example the (usually DIY) modifications that gave that ‘special edge’ to certain sounds (that of Jimi Hendrix is a case in point).

But slowly, as this history has gained a more balanced place in a discussion of the subject area (of music made with technology), there has emerged a desire to have a deeper perspective on where these 1960s and 1970s developments came from. The invention of the electric guitar and the application of amplification to the solo singer were responses to the changes in relationship of performer to audience dating from the establishment of mass media in the 1920s – both larger concert halls (and eventually open-air venues) on the one hand and the demands of radio on the other. Barely receiving a mention in early literature, these seminal developments in mediation of sound became well established in examinations of the development of popular music yet have only recently been accepted into discussions on ‘live electronics’ (Emmerson 2007; Mulder 2013).

Thus experimental and innovative popular music has made two revitalising strikes at the heart of canonic thinking in electroacoustic music. First, an increasing emphasis on sound quality – I do not refer to increasing hi-fidelity but to the timbral-textural turn in all its sound senses. The complexity of sequencing and layering, often polystylistic use of cross-genre sampling, the play with the medium itself – there is a multiplicity of approaches that challenge hegemony (as we saw in the ‘Linz debates’ of the 1990s). Then, secondly, we have its reception phenomenology – performance in every sense and how that is perceived. An increasing emphasis on the total experience takes precedence. From the presentation and PA systems for large-scale music festivals
to the increasingly realistic immersive experience of computer games engines, ‘amazement’ has inevitably fallen away almost completely from most electroacoustic concert music experiences.
Undermining the canon: towards many canons or none?

The canon was never a truly fixed entity. Any fuzzy definition contains the seeds of change and development. If we look back at the revisionist histories that have progressively challenged early versions of the canon, they are based on material practices, most of which were available at the time but were somehow marginalised, ignored or otherwise left in the dark of ignorance. First, the pre-histories – often described as such without a hint of irony. There was an understandable tendency from a European perspective to view the post-World War II developments as setting out from the ‘zero hour’ (or year zero) of that catastrophe: renewal and regeneration, not simply ab initio but ex nihilo. The accepted view was of a very limited pre-history: some noise intoners, electric instruments, recorded montages, turntable experiments, all briefly described in terms designed to feed forward to the explosion of ideas and practices around 1950. The unwritten assumption that this somehow applied to the American scene was entirely unfounded. Perhaps New York did set out to ‘steal’ modern art from its Paris home (as Serge Guilbaut [1985] claims). John Cage’s rejection of a ‘centred’ view of culture and Steve Reich’s firm rejection of European historicism both deny the black hole agony of the year zero model. Even within a modernist narrative, a founding moment of this creative divide was systematically hidden from scholarly view for several decades – apart from a fragmentary glimpse. The Boulez-Cage correspondence (1993) reveals a personal and creative encounter, first in Paris in 1949 and later in New York, that comes paradoxically close to identity yet exposes profound difference: the techniques in practice so similar – yet the motivations so far apart. Both rejected anecdotal material, and both embodied contradictions in their respective world views of the proper function of music. Steve Reich of course added the dimension of jazz and popular music references, plus the syntax of world music (its rhythmic aspects) that Cage would later come not to accept.19

Two models of canonic thinking seem finally to have emerged – both deliberately fuzzy (and one at least denying its canonic heritage): on the one hand, the demand simply to widen the canon to be more inclusive; on the other, the ‘many canons’ approach suggesting that each genre or practice generates ‘good examples’ worthy of further consideration. Even marked as ‘paradigm shifting or defining’, these are usually influential on further practice. If, however, we agree to fix some notion of ‘the good’, we need an accountable, transparent and open way to evolve, change, add and subtract. This applies both to criteria for inclusion and exclusion. Thus the two models are in effect in a close relationship – perhaps in the end indistinguishable.

Nowhere is this more important than in the move towards a recognition of women’s contribution that was always long overdue but has gathered momentum in recent years. Leading pioneers did not need to wait for this more even-handed view of history – Pauline Oliveros was cited in most early texts. I have elsewhere examined issues of organisational failure (in the UK) to address this exclusion until the 1990s (Emmerson 2016). But this throws up alternative views as to how to go from here. Are we to redraw the canon with better representation of women’s works? Clearly this process has already begun.20 But we are also seeing how that very canonic thinking has been undermined.
Perhaps a separate canon, as more pragmatically discussed earlier, will emerge. But suggestions have already been made that this risks the creation of an iconography that might not in the end be helpful. We might cite as under-explored two further key areas: experimental film sound has been the subject of considerable research in recent years (d’Escriván 2007). Film studios had widely developed techniques for sound processing and generation in the 1930s, and the development
of visual analogies to sound for the optical soundtrack was driven into reverse and used as an experimental synthesis method.21 The vocal cry of Johnny Weissmuller’s Tarzan and the great ape King Kong are now well discussed; the film soundtrack work of Norman McLaren is now fully circulated, but more keeps appearing – suppressed (and nearly lost) Russian material from the 1910s to 1930s has only recently been glimpsed (and is still little known) (Smirnov 2013); there are also recreations of lost machines such as those of Oskar Fischinger and Daphne Oram.22
From electronic to electronica

The invention and definition of terms is sometimes disparagingly related to marketing and sales. While it is inevitable for our discussion to group and make sense of the many and various practices, we will stick to an advocacy of awareness rather than abandonment – while some labels are externally applied and resented, others are self-created and worn with pride. Nowhere is the labelling of genres in this broad field more problematic than around a group of experimental practices known variously as electronic music, electronica, IDM (intelligent dance music), EDM (electronic dance music). Other new wave practices include glitch, noise, lower case etc. – some ‘beat driven’, others much more ambient and textural – all challenging the superfinely wrought three-dimensional clarity of the electroacoustic (or acousmatic) ‘art’ traditions. I do not intend here a definitive discussion of all the many terms and sub-genres of this area – more important is a global shift in territorial claims once again.

The high art canon had previously excluded musics on the basis not only of materials as heard (standing against pulse and melodic materials) but of social practices, too – anything seeming to come from the commercial world (however minority appealing in actual fact) was excluded, using a number of descriptions in addition to ‘commercial/popular’ – ‘music for use’, ‘music for the everyday’ etc. With the rise of personal studios at exactly this time, reliance on expensive institutional centres was no longer needed; sites of production and consumption were on the move: performance of this group of practices often happened in very different spaces to traditional fixed-seating concert halls.

The art music citadel was clearly under siege for claiming some kind of monopoly on the notion of ‘experimental’ and ‘innovative’. The debates surrounding submissions and prizes for the Ars Electronica (Linz) from the early 1990s are paradigmatic. Members of the judging panels felt increasingly that much of what was submitted was ‘academic’ and had clearly been eclipsed by a stylistically more innovative group of practices, some of which had roots in techno, others in sound art, which somehow lacked (at that time) the scholarly backdrop of those claiming to be the successors to the original canon. While a binary description is too simple,23 the longer term arguments have slowly played out in the steady proliferation of an even wider range of ‘sonic art’ practices – which we need now to address.
Sonic art, sound art, sound sculpture, sound installation

Hardly mentioned in any early studies in this field – and with its own set of competing definitions – is a group of practices around installation art, sound sculpture and sound art. Without any hope of definitive description, ‘sound sculpture’ demands the discipline of the three-dimensional object as part of the experience, while ‘sound installation’ may not (perhaps an ‘empty’ structure within which sound is projected). ‘Sonic art’ and ‘sound art’ tend to cover a wider remit24 – but all definitions in this field are fuzzy and liable to shift. If we add other languages, the field becomes a mire. ‘Klangkunst’, ‘Klangart’, ‘art sonore’ may be translations of the English terms (or the other way round!), but they do not necessarily mean the same thing. And European language speakers are only just beginning an engagement with non-European descriptions.25
Usually the experience is circumscribed in time by the participant listener, who may set the duration and the form of the encounter. Hence exactly what is the beginning, middle or end of the aesthesis of the artwork may not be controllable by the (original) creator. The art is there to be encountered and the experience formed through time by the newcomer. Exactly how this challenges a canonic view seems at first a trivial question. It is clearly an ‘other’ tradition, an ‘art school’ invention set quite apart from – and thus largely ignored by – an established ‘musical’ core practice. No ‘beginning–middle–end’ paradigm here (in any traditional musical sense). In the end the autonomous artwork must give up its claim to precedence and join the many other options for musical aesthesis available. But there has also been an unexpected outmanoeuvring of this apparently irreconcilable difference between sound art and music: changing listening habits (and locations) from within experimental music itself. As early as 1962 Umberto Eco had discussed the notion of ‘open work’ – relating to the then-avant-garde (and drawing parallels with developments in literary theory) (Eco 1989). Along with the influence of chance and choice that had already invaded concert works (for example by Boulez and Stockhausen), the loosening of control over the time domain of notated music allowed the almost sculptural aspects of ‘moment form’ to emerge – with clear, often cited, relations to the likes of sculptor Alexander Calder’s mobiles.26
Improvisation

This involves another key practice (not a genre, as such, but a wide and varied type of music making with examples from many traditions) – improvisation and open form performances. Perhaps only a small, focused group of such practices directly influences the direction of my argument here. Clearly electronic apparatus and processing played a major role in some such practices within the proliferation of small ensembles (and some improvising soloists) from the late 1960s. But it is more in the realm of the performative that this influence might be felt. There are as many views on the nature of listening to improvisation as there are improvisers. Most demand the same degree of concentration as for any ‘composed concert’ music – from beginning to end. But others that emerged in the 1970s engaged with the greater informality of some venues, encouraging greater social exchange – and this led to ‘sampled’ listening. Thus open form entered the listening experience – only a recording listened to later would reveal what the performing musicians thought of as the whole performance. Hence – not without some heated debate – sampled listening entered the realm of ‘music’27 and thus opened a door to a closer relationship to ‘sound art’.

Yet there was always the work of John Cage (a maverick member of the original canon) to challenge assumptions – his hybrids of performance and installation defy categorisation. Encouraging mobility of audience within an immersive stream of consciousness seems to suggest an ultimate freedom of choice in focus (as in installation) – but there remains a strong residual stress upon the role of performer within the experience that roots the work more as a kind of ‘music’, however challenging – even estranged. So more than the beginning–middle–end paradigm, the notion of the performer defined as separate from the audience (even when influential as participant) seems to this day to distinguish what we describe as ‘music’.
Order, organisation, control – the behavioural turn
The ideologies of early canonic genres mostly included a philosophy of order and organisation. Whether serialism as an organising principle or the (claimed) ear-based philosophy of musique concrète, there were clear distinctions and combinations (Emmerson 1986). Algorithmic
approaches to combining ‘events’ appear to be related to the former but become ever more sophisticated and wide ranging following the introduction of more freely available computing. On desktop computers, event processing (predating sound processing by a decade) undermines our original divisions and artificial distinctions. This is a greater revolution for genre distinctions, however. We can now accurately model processes of the world we live in, where we can make a mathematical (algorithmic) abstraction – behaviour modelling. The challenge is simply stated but complex in application. It crosses boundaries we have previously established. A swarm behaviour may be found in any of our previous ‘categories’ (tape music, live electronic,28 installation, soundscape . . .). While earlier writers had established a vocabulary of musical behaviours of sounds in general, pertinent to composition (Schaeffer 1966, for example), here we have not simply descriptions but tools. The ability of computer algorithms to generate such behaviours as an emergent property of a large number of individual ‘events’ has allowed us creative access to ‘big data’. Sometimes explicitly, earlier philosophies stressed the product above the process in performance. That is, while Schaeffer wrote much on an approach to composition, this was concealed entirely from the listener in the end. But an increasing range of practices has thrown this up in the air. Perhaps the audience needs to perceive the process as part of the aesthesis of the work. This is a clear negation of the old adage ‘you should be able to appreciate the music without a programme note’. Here you join the composer as co-equal ‘experimenter’, fully aware of intentions and tools from pre-concert talk and programme note – does it ‘work’? You the listener can decide – and let the composer know (through feedback and social media). This changes our use of vocabulary, too. The emphasis has shifted towards what links, not what distinguishes. So (for example) swarm behaviour becomes our character keyword or tag in the meta-data – not a genre or practice but some kind of emergent property that we wish to follow. Thus different grouping characteristics across (previous) genre differences now emerge as dominant identifiers.
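To make the idea of behaviour modelling concrete, the sketch below (a hypothetical illustration, not drawn from any system discussed in this volume) implements a crude swarm: fifty agents drift towards their common centroid with a little random motion, and their positions are mapped arbitrarily to pitch and loudness. No single event is composed, yet the output exhibits the kind of emergent group behaviour described above; the mapping and parameter values are invented for the example.

```python
import random

class Agent:
    def __init__(self):
        self.x, self.y = random.random(), random.random()
        self.vx, self.vy = 0.0, 0.0

def step(agents, cohesion=0.05, jitter=0.02, damping=0.9):
    # Each agent is pulled towards the swarm centroid (cohesion) and
    # nudged at random (jitter); damping keeps velocities bounded.
    cx = sum(a.x for a in agents) / len(agents)
    cy = sum(a.y for a in agents) / len(agents)
    for a in agents:
        a.vx = damping * a.vx + cohesion * (cx - a.x) + random.uniform(-jitter, jitter)
        a.vy = damping * a.vy + cohesion * (cy - a.y) + random.uniform(-jitter, jitter)
        a.x = min(1.0, max(0.0, a.x + a.vx))
        a.y = min(1.0, max(0.0, a.y + a.vy))

def to_events(agents):
    # Arbitrary mapping: horizontal position -> MIDI pitch,
    # vertical position -> loudness. Any synthesis engine could consume these.
    return [(36 + int(a.x * 48), 20 + int(a.y * 100)) for a in agents]

swarm = [Agent() for _ in range(50)]
for tick in range(200):
    step(swarm)
    events = to_events(swarm)  # one 'frame' of emergent musical events
```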
Interfaces
We may also add the new ways we interface with the technology as another ‘grouping agent’29 in the discussion. Whereas recent writing has tended to stress the new and contemporary in interface design, there is clearly a developing interest in how we interfaced with earlier technologies of sound. It lies hidden behind many studio practices that we can only hear from their results,30 as well as in the grainy images and variable quality recordings of historical performance practice on stage. Looking at interfaces is in fact simply another aspect of the ‘behavioural turn’ remarked upon earlier – although this time embodied and performative. What aspects of human activity shall we engage (and extend) with technology to perform or produce sound? This is the beginning of an enormous ‘prosthesis’ which lies at the focus of the concerns of this present volume of essays. From this perspective the canon appears long forgotten. The widest range of behaviours from the most constrained to the most extensive develops its own taxonomy of forms. It is not true to say these are all independent of the sound they produce – any interface will have a characteristic symbiosis with its potential sound. There will be a wide range of tighter and looser such relationships. Thus hacking and circuit bending, DIY electronics31 embody behaviour at one end of the spectrum – where an instrument, ‘score’ and even performance are built together. At the other a games engine model increasingly immerses the user within a system that mimics elements of a realistic (yet not real) environment. In this worldview the musical practice can become completely polystylistic – not always consciously constructed as such but simply more open and aesthetically plural than a traditional view of genre would suggest. A shift to an emphasis on narrative structure and process – both technically and socially constructed – and a new set of vocabularies has clearly developed.
Distributed modernism?
Let us finally return to where we set out. The original Euro-American-centred modernist enterprise retained elements of a neo-colonial history. There was sometimes an almost missionary zeal among some of its advocates, who felt they were exporting ‘universal’ practices to other parts of the world – at first via programs that brought a worldwide constituency to festivals, courses, classes and study set in the heartlands of Europe and America, then later in exchange programmes and music promotions. It is sometimes supposed that participants returned to their native lands with like-minded good intentions. But that is too simple a description of what happened – the discourse was always two-way. Core ideas were challenged, even rejected. Visitors did not possess a tabula rasa waiting to be inscribed. The more nuanced histories of diaspora we see emerging in recent times apply to music, fundamentally. The tendency to see modernist art and music as essentially universal (pancultural) suggests a virtual exclusion of indigenous musics. As we observe in contributions to this volume from Latin America and Asia, such a philosophy was unsustainable. Such a version of modernism was probably never actually transplanted in any pure form – and the eventual adoption of local sounds, instruments, approaches, ideas has led first to reciprocal feedback, then to the more networked approach we have seen emerging recently (and shall discuss further).
Towards this book – the many meanings of ‘reaching out’
So the history of the cluster of terms surrounding ‘electronic music’ has been one of ever greater ubiquity. We could ask what music was not electronic in some part of the chain of its production and distribution. But the problem is that as the term comes to mean something ever wider, it loses meaningful use. In practice there are sometimes contradictory uses of terms by different groups – this worries academics more than practitioners on the ground. For many users, language and terminology are best understood in context. Social scientists used at one time to think in terms of ‘tribes’, but this seems increasingly problematic, as an individual may now have many parallel – or fluid – allegiances. Nonetheless most users know what they mean – whether to use ‘electronic’ or ‘acousmatic’ for example – depending on who they are with. In creeping out to include an ever widening field we might suggest that what we now address is ‘music made with technology’. Tightening the definition somewhat we might add that the music requires the technology for at least some of its essential presence. Loudspeakers are almost universal in performance but not a strictly necessary condition. Computers, too, seem so universally integrated into the environment of production and performance – but I can spot a couple of musics in this volume that avoid (even reject) them. So some boundaries do exist – must exist to allow a sensible conversation – but they are fluid and so steadily shifting. We try to fix them, but they flow through our fingers soon enough. Practices, after all, evolve in a three-way negotiation of individual, society and technology. This book is not an encyclopedia; it will not attempt to encapsulate the ‘state of knowledge’ of a subject field at a particular time – it does not aim to cover the full breadth of research in the field. I believe this is well-nigh impossible between two traditional book covers – indeed it requires the likes of the internet to fulfil. I am suggesting that we shift to seeing a bigger picture – not a ‘world music’ but a music of the world. The process of establishing ‘global reach’ thus became a theme of this book. I propose that this best be done through a series of snapshots,
touching or overlapping – all different ‘takes’ on the same ideas of reaching out and connection – but most resulting in a two-way interaction. For example, as suggested earlier, several contributions show the apparent diaspora of first world modernism to almost all parts of the globe – and yet a nearly immediate rejection of such a ‘one-way’ model at the same time. Ideas considered ‘universal’ (at least for a certain kind of twentieth-century urban life) are rapidly subverted, inflected with a lot more than ‘local colour’ and returned to network and interact – East Asia and Latin America are contrasted in three contributions to this volume. Some kinds of popular music, too, have similarly spread across the world. Techno practices (and there are many) seem to have sprung from localised music, then spread and mutated rapidly in new circumstances. How genres of music (both popular and experimental) coexist, interact, mutually engage and repel are key themes in two contributions. Technology not only allows and encourages extension – it becomes a manifestation of that extension.32 Sound recording and communications technology created a necessary prosthesis – as musicians our bodies could reach out and play at remote locations, sending and receiving sound and muscular effort. In some situations – internet performance, for example – the price was latency: a ‘drag’ on the response of the system. We become like vast trees – move a limb, and the far-away end takes time to catch up with the nearer-to-trunk movement. Then we encounter ‘reaching out’ to new communities within societies already (apparently) fully equipped for music made with technology. The assumption that ‘we are all connected now’ would be too simple if it were not plainly ignorant. How the creative musician (using technology) views the directly practical issues of inclusion and exclusion is not avoided in at least two contributions here. ‘Outreach’ has been a common feature of organisational arts practice (in the UK at least) for many years, addressing increasing audience participation and an increasingly important skill (and potential empowerment) agenda. There are differences plain to see between Rio de Janeiro and London – but striking similarities of issues and problems too. My idealised model must therefore guard against a one-way notion of reach – that has always been the basis of the patronising (let alone the colonial). Our network should ideally have nodes that send and receive (‘reach both ways’) across links – true exchange. In practice this sometimes happens more, sometimes less.33 I shall revisit this short summary now in a more detailed discussion of the individual chapters and how these contribute to our cubist perspective on the ideas of ‘reaching out’ through music made with technology.
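The scale of that latency ‘drag’ can be sketched with back-of-the-envelope arithmetic. Light in optical fibre travels at roughly 200 km per millisecond, so physical distance alone sets a floor under network delay before any routing, buffering or audio-interface overhead is added; the overhead figures below are illustrative assumptions, not measurements.

```python
def one_way_latency_ms(distance_km, buffer_samples=256, sample_rate=48000,
                       routing_overhead_ms=10.0):
    # Propagation in optical fibre: roughly 200 km per millisecond.
    propagation = distance_km / 200.0
    # Audio capture and playback buffers at each end of the link.
    buffering = 2 * buffer_samples / sample_rate * 1000.0
    # Placeholder for switching, queueing and protocol overhead (varies widely).
    return propagation + buffering + routing_overhead_ms

# London to Rio de Janeiro is roughly 9,300 km by great circle:
print(round(one_way_latency_ms(9300)))   # on the order of 65-70 ms one way
```

Even under these optimistic assumptions the one-way delay comfortably exceeds the roughly 20-30 ms that co-present ensemble players can normally absorb, which is one reason networked performance practice tends to treat latency as a condition to work with rather than a defect to engineer away.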
Contents
Part I ‘Global reach – local identities’
In this section we see how two vast continents have in very different ways adopted, adapted and made these practices their own, using much locally generated material. First our own myth of origins is challenged. Argentina had working electronic music studios before Britain, and in many spheres of research and creation Latin America has made vital contributions ‘from the start’ (as Ricardo Dal Farra clearly argues in Chapter 1, ‘Research-Creation in Latin America’). He has assembled a range of contributions in a microcosmic chapter (with respect to the book as a whole) which aims to show a vital continent – well connected to its diaspora – building new ways of researching, developing and integrating new music with societal concerns, local musics and instruments (drawing on pre-Columbian materials, for example). Latin America emerges as a global continent. Two contributions address the increasing global presence of Asia. Marc Battier and Lin-Ni Liao look at East Asia (China, Japan, South Korea, Taiwan) as a culturally related area, with a
certain regional identity, rapidly developing a sophisticated globalised infrastructure (Chapter 2, ‘Electronic Music in East Asia’).They focus on the proliferation of electroacoustic music studios, festivals, cultural centres and associated activities over the last half-century. They identify key composers, many little known outside their country or region, as well as ‘schools’ of practice and their relationships. This is a comprehensive and thorough examination of a major cultural area of the world in transition. In contrast, Leigh Landy (Chapter 3, ‘The Three Paths: Cultural Retention in Contemporary Chinese Electroacoustic Music’) looks in much more detail at the relationship to a specific national culture – to what extent Chinese composers draw on specifically Chinese ideas (sound, instrument or philosophy). China is in fact far from being a single monolithic culture; the country has its own strongly differentiated regional cultures which come into play in this discussion. He has chosen to focus on two contrasting areas: firstly, a group of composers across three generations based at Beijing’s Central Conservatory of Music (CCOM): Zhang Xiaofu, who effectively founded the electroacoustic studio practice in its modern form, and two generations of his students, now established outside of academia. Of course such composers inevitably reflect some aspects of their cultural environment, but in what way, and how consciously is open to discussion – it turns out that there are strong elements of rejection and questioning as well as integration. Then Landy contrasts this with a look at similar cultural concerns in the ‘underground’ electronic music scene that has also emerged in China, as elsewhere in the world. The erosion of boundaries goes hand in hand with increasingly revisionist histories – those that question canonic accounts of ‘what happened when’ (as discussed earlier). Hybridity of genres and practices has long been seen as the key site of great innovation and enterprise – but coexistence itself has often led to richer activity, participation and innovation. Montreal has been such a place for several decades. Genres previously considered quite separate both coexist and cross-influence; different groups interact and hybridise in novel ways. Patrick Valiquet (Chapter 4, ‘Technologies of Genre: Digital Distinctions in Montreal’) examines this perhaps unique environment as it ‘plays with’ these many music scenes – some using ‘retro’ identifiers to undermine assumptions about the ‘modernity’ of digitisation, overtly rebelling against its uniformity and ubiquity. The rich mix of options is dynamic and fast changing. This chapter challenges traditional notions of genre itself as well as addressing practices that buck the trends of a reductive linear history: first there was analogue, then digital – is clearly too simple. Truly global forms of music practice remain rare, but from the more popular area of techno we see a lot is shared across continents, spaces and practices. Hillegonda Rietveld (Chapter 5, ‘Dancing in the Technoculture’) develops the idea of globalisation, but not through a single genre. Techno emerges as a cluster of related practices which share a highly technologised infrastructure yet which draw on ethnic, social and localised traditions. Ever wider networks of communication see a rhizomic spread of new sounds and innovative practices – or (as often) the new contexts for ‘older’ sounds. 
Whether these are driven by beat-driven dance practice or more ambient jazzy contemplation, she looks at an ever changing landscape of sound which has recently reached virtually all parts of the world. While acknowledging an ever-changing relationship of continuity and transience, we gain some insight into the complex web of genres and sub-genres of this global music.
Part II ‘Awareness, consciousness, participation’
The promotion of a new generation of smart technology has been unrelentingly optimistic in ways that are not always justified. The reasons for social exclusion are many and various. There emerged several decades ago a general consensus that at least some aspects of cultural policy
should include a conscious attempt at social inclusion – an extension of the constituency for art to include a greater range. The idea (originally known as outreach) has taken many forms, from how better to communicate ‘our work’ (contemporary music) through to generating entirely new music within a specified client community. A logical more recent development of this has aimed at empowering entire social groups through arts practice – and not simply in ‘artistic’ terms. But can smart technology furnish us with the cheap tools that might allow such wider empowerment? In this part we have a full report on the Som de Maré project from Rio de Janeiro. Pedro Rebelo and Rodrigo Velloso (Chapter 6, ‘Participatory Sonic Arts: The Som de Maré Project – Towards a Socially Engaged Art of Sound in the Everyday’) show how we can regain relationships and cross strong divisions in social systems. But how to engage without ‘parachuting in’ needs a very local touch. Crossing senses to include flavour, food, image and memory relationships, technology can help articulate not only what is already there but also create new experience from a well-understood starting point. Stark alienation from the expected norms of community can also be an incentive – brought immediately to the experience of the participants when the area (the Maré favela) was the subject of a direct military ‘pacification’ exercise while the project was running. When all is said and done in projects such as these, too much of a feel good glow should better be avoided – Atau Tanaka and Adam Parkinson (Chapter 7, ‘The Problems with Participation’) examine a series of projects based in the UK which had overt aims to increase participation and creative engagement as well as specific technical skills. Perhaps paradoxically, too great a focus on this a priori threw up issues not foreseen of who exactly benefits and how. How resilient is something apparently achieved as a result of direct intervention in a ‘problem as perceived’? When is the job finished? They suggest, too, that there are bound to be unforeseen issues if projects are designed ‘top down’ rather than the more socially embedded ‘bottom up’. Also real benefits may not correspond to what is apparently ‘predicted’ in any forward plan and application for funding. This is realistic not pessimistic – good may be the outcome in ways not foreseen. They suggest we may need to be more open and flexible in our objectives and measurements. The socially embedded dimension remains central to Leah Barclay (Chapter 8, ‘The Agency of Sonic Art in Changing Climates’) who presents the artist committed to addressing environmental issues. She examines how technological mediation can put a wider community back in contact with environmental change through sound and music. She advocates greater use of sound as an indication of health and well-being of all that surrounds us as a civilisation. In a series of local community projects she promotes an essentially participatory functionality. At the same time we need to multiply this essentially local activity to encompass a global network of understanding. She reflects on the need for participation – does it work? Do people listen? What is the legacy of this work? Does it have any lasting impact? Of course the responsibility of the artist comes strongly to the fore in this direct aim at consciousness raising. Artists must do much more than warn – they must communicate and act to change attitudes, to change minds. 
So we move to Sally Jane Norman’s look (Chapter 9, ‘Tuning and Metagesture after New Natures’) at the deep relationship of human mind to mediated world through the notion of ‘tuning’, both being attuned to and tuning in. Her discussion starts from how sounds are formed into gestures which signify the world. But at the same time recorded and mediated sound extends (possibly distorts) any simple perceptions, changing relationships (‘tunings’). How we describe sounds is crucial (certainly not neutral) – and assumed paradigms are always in need of questioning. Drawing on theories and practices of acousmatic music making, she reflects on the changes technology has brought about in this tuning process. Paradoxically, technology can better help ‘reveal’ the workings of nature. Our exosomatic organs include the ear and
hearing – and being vastly expanded through technology, these are perhaps best considered new organs better attuned to the new meta-gestures. Technology is slowly revealing aspects of the hidden workings of the brain.34 Eduardo Reck Miranda and Joel Eaton (Chapter 10, ‘Music Neurotechnology: A Natural Progression’) look at examples of the newly created hybrid area of music neurotechnology. Reaching out can also include a form of reaching inside. Steadily over the past half-century, bio-medical science has developed both theories and tools for understanding the working of the brain – its division into areas of function and activity. The vast complexity has only recently been described and is hardly understood. Rather like astronomers trying to make sense of the universe through the signals we receive, so the neuroscientist can ‘read’ brain activity in ever more detail through electrical signals, magnetic resonance techniques (and the like). Miranda and Eaton give several contrasting examples of using brain activity data for music making. On the one hand they devise compositional possibilities, using the idea of patterns of activity becoming electronic or instrumental sound, showing how they ‘mapped’ such noisy and complex data to a sound or a score for performance – but then also a different approach, using the ability to detect how we ‘think’ different options to liberate the possibility of generating music by people otherwise unable to move limbs to do so, showing us that we have now successfully moved beyond the foothills of thought-driven music.
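By way of illustration only – this is not Miranda and Eaton’s method, and the mapping is invented for the example – the sketch below shows the general shape of such brain-to-music mappings: estimate the power of an EEG signal in standard frequency bands, then map the relative band powers to note parameters.

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    # Power of one EEG channel in a frequency band, via a plain FFT.
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

def window_to_note(eeg_window, fs=256):
    # Hypothetical mapping: relative alpha (8-12 Hz) power chooses the pitch,
    # relative beta (13-30 Hz) power chooses the loudness.
    alpha = band_power(eeg_window, fs, 8, 12)
    beta = band_power(eeg_window, fs, 13, 30)
    total = band_power(eeg_window, fs, 1, 40) + 1e-9
    pitch = 48 + int(24 * alpha / total)      # MIDI note in the range 48-72
    velocity = 30 + int(90 * beta / total)    # MIDI velocity in the range 30-120
    return pitch, velocity

# One second of random data stands in for a real EEG stream:
fake = np.random.randn(256)
print(window_to_note(fake))
```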
Part III ‘Extending performance and interaction’
Music made on the internet has been around for as long as the internet itself – more than 20 years now. This may be the ultimate prosthesis, as potentially the entire globe is connectable. Simon Emmerson and Kenneth Fields (Chapter 11, ‘Where Are We? Extended Music Practice on the Internet’) take stock of what has been done. They focus on the music; while of course the boundaries and affordances are set by the technology, some musical issues are becoming clearer. Some kinds of music making, musical materials and performances might work better than others. As yet the amazing local characteristics of places connected together have not been much exploited (at least for live music making). A multi-site specific aesthetic has been slow to emerge. Maybe we can move away from the global greenhouse – where everything seems to be (more or less) the same, a lab with connectors, computers and projectors – to something that reflects local colour. We may need, too, to better ‘get a feel’ for the internet in performance – simply following score instructions would probably not be sufficient unless complemented with such a ‘listening out’ to the network. Kenneth Fields’s Syneme Project and the development of the Artsmesh application are reviewed in this contribution. Even within canonic and post-canonic repertoires we can see this inevitable ‘reach out’ at work. As mediated sound now surrounds us in the everyday experiences of the world, so we demand more sophisticated means and methods to project the musical work to become (for a time) that world. We gain greatly from the sense of immersion in sound systems – whatever the genre of the sound art or music which is presented. The earliest of the high modernist installations is probably that built for the Philips Pavilion at the Brussels World Fair of 1958.35 The French pioneered the idea of an ‘orchestra of loudspeakers’ firstly in Bourges (Gmebaphone 1973) and Paris (Acousmonium 1974), then spreading steadily thereafter. In 1982 Jonty Harrison founded the Birmingham Electro-Acoustic Sound Theatre (BEAST) in Birmingham UK. This multi-loudspeaker system evolved steadily (continuing to the present), initially based on French models (in the pragmatics of its aims and objectives), though with some clear influence from the more ‘geometric’ constructions of German traditions (Osaka World Fair pavilion 1970 through to the more recent Klangdom systems).36 While Harrison has written about the art of
‘sound diffusion’ (and presentation) in general (1998), he has recorded for this volume a definitive history of this system (Chapter 12, ‘Rendering the Invisible: BEAST and the Performance Practice of Acousmatic Music’), balancing discussion on loudspeaker positioning for optimal audience sound field perception, with clear pointers to how the technical and musical should be locked together in performance. Sound diffusion becomes a physically active construction of the work from a fixed basis to something sculpted into the site of the performance. The aim is always to present the work to best effect for a range of audience positions within the sound field. And he finally reflects on how an experience of this extended presentation might influence composition itself. Mick Grierson places coding and data at the centre of his discussion (Chapter 13, ‘Creative Coding for Audiovisual Art: The CodeCircle Platform’). The increasing reach he describes works in several dimensions at once. He foregrounds a ‘tight’ view of audio-visual art – one which locks the two together in a very close embrace. He further presents new ideas for creative coding upon which all this is founded – how this, too, moves out from a singular point and is developed across networks, between nodes of activity. This means that ideas and practices of interactivity, collaboration – even performance – move much closer together. Interactive coding, collaborative authoring and code development become dynamic as speeds become sufficient for real-time creative exchange.This is exemplified in the CodeCircle 2 multi-user web environment for collaborative (and creative) coding. It was only with great difficulty that I divided this volume into these three parts that relate both internally and to each other. Many other groupings of ideas, ‘tags’ or ‘keywords’ would have been possible. Most chapters deal with performance – and how technology has increased its range and linkages from cause to effect; most deal with space and issues around how we can best come to terms with remote spaces and places that come to us through mediation; all (I hope) deal with aspects of how human beings relate through music (and making tools for music), a vital social dimension. For, in the end, whatever the technology enables – and however confused the language and terms used to describe the growing number of possibilities – music made with technology should be there to add to the quality of human experience of, and in, the world. I hope this volume makes a small contribution to an understanding of this aim.
Notes 1 John Paynter was a lecturer in music at York University UK from 1969, professor from 1982. From 1973 to 1982 he directed the UK Schools Council Project Music in the Secondary School Curriculum. 2 The earliest études were more ambiguous in this aim, as Schaeffer’s philosophy took some time to emerge – it had done so by the time of the early publication À la recherche d’une musique concrète (1952, English version 2012). 3 Pierre Schaeffer had commissioned the Coupigny synthesiser for the GRM in the mid-1960s. We also see the British produced EMS ‘Synthi’ (ca. 1972) in Bernard Parmegiani’s home studio image. (See images in booklet to CD INA C 1012–1013 Bernard Parmegiani (1992)). In his De Natura Sonorum (1974) we can clearly hear the electronically produced sounds. 4 Generalised and abstracted into a numerical form. 5 Thus pieces for instrument/voice ‘and tape’ – the usage survives to this day. Attempts to replace this (in French and English) with ‘fixed sounds’ or ‘fixed media’ have not gained much traction. 6 There were earlier examples that did not develop. 7 For a substantial time the elements of computer music, unit generators, envelopes, filters etc. were digital code versions of earlier analogue devices. 8 In all senses, not simply the clearly technological line from rock’n’roll to techno (and offshoots). 9 The STEIM archives contain much of value including the enormous contribution of Michel Waisvisz (co-founder (1969) and director from 1981 to his death in 2008). 10 For example the piano sound was not transformed.
Simon Emmerson 1 1 Too numerous to list here, see Emmerson 2017 for a fuller discussion. 12 The pre-World War II (and specifically pre-Nazi) German developments are finally receiving the greater coverage they deserve. Mostly keyboard oriented, they survived to the WDR studio of the early 1950s but were eclipsed by individual high-quality electronic modules combined with ‘tape techniques’. (See the images in the booklet to Stockhausen CD3 Elektronische Musik 1952–1960). 13 Even when the user remains largely ignorant of prior usages. 14 There is no definitive date. The term was well in use by the late 1960s (in both French and English), especially as musique concrète fell out of favour. 15 Brian Kane (2014) has made a definitive critical study of the earliest uses of the term and its recent evolution to what we now accept. 16 By Jérôme Peignot, cited by Schaeffer (1966, chapter IV). 17 And some writers would include aspects of ‘live electronics’ in the term acousmatic – to confuse the overlapping terminology still further! 18 Jonty Harrison has, while paying homage to Schaefferian roots, described this as ‘expanded listening’ (1996) in which different listening modes can be engaged. 19 Cage had a direct link to some of the earliest Asian musics imported to the USA, through his contact with Henry Cowell. This influence is directly perceived in works through to the Sonatas and Interludes (1946–1948), but the later use of indeterminacy and the elimination of memory and habit removed much of this. 20 In the UK this would include Daphne Oram and Delia Derbyshire; Eliane Radigue (France) might also be another. But there are many others. 21 Such as in the work of Norman McLaren and Oskar Fischinger, amongst others. 22 The Oramics machine is now in London’s Science Museum; see Keefer (2013) for a discussion of Fischinger’s machines and work. 23 Christopher Haworth (2016) examines, in a much more nuanced way, this and related debates which emerged around genre at the time. 24 In the UK the ‘Electroacoustic Music Association’ became ‘Sonic Arts Network’ in 1989, recognizing a wider remit in the new term (referring also to the title of Trevor Wishart’s book ‘On Sonic Art’ [1985/1996]). 25 Hiromi Ishii (2008) has placed on record the extreme difficulty of any clear mapping of terms between a European and the Japanese language. 26 Expressly in a widely used early teaching text, Stuckenschmidt (1969, p. 221), but also through practice in the relationship of Cage, Duchamp and Calder – see Calder’s Small Sphere and Heavy Sphere, 1932–1933 which produces random sounds as a mobile of two suspended spheres rotate, slowly striking resonant objects. 27 I exclude discussion on the history of ‘silent listening’ – clearly some cultures, performance situations and eras allowed social interaction during performances – although a degree of concentration was usually demanded at key moments. 28 See the ‘Live Algorithms for Music’ network (http://doc.gold.ac.uk/~mas01tb/LAM.html). 29 The existence of NIME – The International Conference on New Interfaces for Musical Expression (nime.org) is an example. 30 There are rare exceptions: in 1971 Pierre Schaeffer reconstructed an early studio ‘performance’ using 78 rpm turntables for ORTF television (‘La naissance de la musique concrète et électro-acoustique’). 
http://fresques.ina.fr/artsonores/fiche-media/InaGrm00208/la-naissance-de-la-musique-concreteet-electro-acoustique.html 31 John Richards (2013) has suggested that ‘DIY’ should in many cases be replaced by ‘DIT’ (‘doing it together’). 32 A tool is an extension of a limb – we include musical instruments in this description. 33 We might suggest that all links are equal but that some are more equal than others. 34 The relationship of brain to mind is left somewhat ambiguous – and will no doubt be the subject of much future discussion as the subject develops. 35 Estimated numbers of loudspeakers for this installation vary – but 350 seems a sensible figure (http:// analogarts.org/anablog/pavilion/). See also Treib 1996. 36 See images at www.stockhausen.org/osaka.html and http://on1.zkm.de/zkm/stories/storyReader$ 5041
References Bayle, F. 1993. Musique Acousmatique: Propositions . . . Positions. Paris: Buchet/Chastel. Boulez, P. and Cage, J. (eds. J.-J. Nattiez and R. Samuels). 1993. The Boulez-Cage Correspondence. Cambridge: Cambridge University Press. Collins, N. 2009. Handmade Electronic Music – the Art of Hardware Hacking (2nd Edition). London: Routledge. Dennis, B. 1970. Experimental Music in Schools: Towards a New World of Sound. Oxford: Oxford University Press. Dwyer, T. 1971. Composing With Tape Recorders – Musique Concrète for Beginners. Oxford: Oxford University Press. Eco, U. 1989. The Open Work. London: Hutchinson Radius. Emmerson, S. (ed.) 1986. The Language of Electroacoustic Music. London: Macmillan. Emmerson, S. 2007. Living Electronic Music. London: Ashgate/Routledge. ———. 2016. ‘EMAS and Sonic Arts Network (1979–2004): Gender, Governance, Policies, Practice’, Contemporary Music Review, 35(1), pp. 21–31. ———. 2017. ‘Performance With Technology: Extending the Instrument – From Prosthetic to Aesthetic’, in The Routledge Companion to Sounding Art (Marcel Cobussen,Vincent Meelberg and Barry Truax, eds.). New York: Routledge, pp. 427–437. Emmerson, S. and Landy, L. 2016. Expanding the Horizon of Electroacoustic Music Analysis. Cambridge: Cambridge University Press. d’Escriván, J. 2007. ‘Electronic Music and the Moving Image’, in The Cambridge Companion to Electronic Music. Cambridge: Cambridge University Press, pp. 156–170. Guilbaut, S. 1985. How New York Stole the Idea of Modern Art. Chicago: University of Chicago Press. Harrison, J. 1996. Articles indéfinis (CD liner notes). Empreintes Digitales: IMED 9627. ———. 1998. ‘Sound, Space, Sculpture: Some Thoughts on the “What”, “How” and “Why” of Sound Diffusion’, Organised Sound, 3(2), pp. 117–127. Haworth, C. 2016. ‘ “All the Musics Which Computers Make Possible”: Questions of Genre at the Prix Ars Electronica’, Organised Sound, 21(1), pp. 15–29. Ishii, H. 2008. ‘Japanese Electronic Music Denshi-Ongaku (電子音楽): Its Music and Terminology, Definition and Confusion’, Proceedings of the Electroacoustic Music Studies Network Conference, Paris. www.emsnetwork.org/ems08/papers/ishii.pdf Kane, B. 2014. Sound Unseen: Acousmatic Sound in Theory and Practice. Oxford: Oxford University Press. Keefer, C. (ed.). 2013. Oskar Fischinger, 1900–1967: Experiments in Cinematic Abstraction. Amsterdam: EYE Filmmuseum. Mulder, J. 2013. Making Things Louder: Amplified Music and Multimodality. PhD thesis. University of Technology Sydney. Nyman, M. 1974, 1999. Experimental Music: Cage and Beyond. London: Studio Vista (2nd Edition). Cambridge: Cambridge University Press. Orton, R. 1981. Electronic Music for Schools. Cambridge: Cambridge University Press. Paynter, J. and Aston, P. 1970. Sound and Silence: Classroom Projects in Creative Music. Cambridge: Cambridge University Press. Richards, J. 2013. ‘Beyond DIY in Electronic Music’, Organised Sound, 18(3), pp. 274–281. Schaeffer, P. 1952, 2012. A la recherche d’une musique concrète. Paris: Éditions du Seuil. Translated by Christine North and John Dack: In Search of a Concrete Music. Berkeley, CA: University of California Press. ———. 1966, 2017. Traité des objets musicaux. Paris: Éditions du Seuil. Translated by Christine North and John Dack: Treatise on Musical Objects. Oakland, CA: University of California Press. Self, G. 1967. New Sounds in Class – a Practical Approach to the Understanding and Performing of Contemporary Music in Schools. London: Universal Edition. Smirnov, A. 2013. 
Sound in Z: Experiments in Sound and Electronic Music in Early 20th Century Russia. London: Koenig Books. Stuckenschmidt, H. H. 1969. Twentieth Century Music. London: Weidenfeld and Nicolson. Treib, M. 1996. Space Calculated in Seconds:The Philips Pavilion, Le Corbusier, Edgard Varèse. Princeton: Princeton University Press. Wishart, T. 1974. Sun – Creativity and Environment. London: Universal Edition. ———. 1985/1996. On Sonic Art. Amsterdam: Harwood Academic Publishers.
PART I
Global reach – local identities
1
RESEARCH-CREATION IN LATIN AMERICA
Ricardo Dal Farra with Carolina Brum Medeiros, Filipe Calegario, Marcelo M. Wanderley, Jaime Oliver La Rosa, Jônatas Manzolli, Juan Reyes, Fernando Iazzetta, Hamilton Mestizo
Introduction Ricardo Dal Farra Composers and researchers from Latin America have not only been aware of electroacoustic music but have widely contributed to its development since the very beginning. Mauricio Kagel (Argentina, 1931 – Germany, 2008) composed eight electroacoustic studies in Argentina between 1950 and 1953 (Davies 1968). Then, from 1953 to 1954, he created Música para la Torre, a long work which included an essay on musique concrète, for an industrial exhibition in Mendoza. He was among many composers who were setting the foundations of a rich history of experimentation and creation in the region. We can add Juan Blanco in Cuba, Reginaldo Carvalho and Jorge Antunes in Brazil, León Schidlowsky and José Vicente Asuar in Chile, Joaquín Orellana in Guatemala, Horacio Vaggione from Argentina and César Bolaños from Peru.1 Of course these are just a few names in the ocean of creativity that has always been Latin America (Dal Farra 2004). Considering technology developments applied to music: composer and lawyer Juan Blanco obtained a patent for a musical device that can be seen as a direct antecedent of the Mellotron, back in 1942. Engineer Raúl Pavón was working on the prototype of an analog voltagecontrolled synthesizer in Mexico in 1960, and Fernando von Reichenbach created an automatic graphic to (electronic) sound converter in Argentina a few years later (Dal Farra 2004). In Chile, Asuar produced a hybrid analog-digital computer system in the mid-1970s exclusively devoted to creating music. Once again, this is a brief list only including some of the many names that started the long tradition of electroacoustic music in Latin America, from the musical as well as from the scientific and technology-applied perspective. Far from a full coverage of the rich situation of experimental and academic music made with electronic technologies, this chapter focuses on sharing a wide range of initiatives and projects that could serve just as a first approach to some of the activities being developed in the Latin American region as well as outside its borders by Latin American creators and researchers. It is worth mentioning here that the term ‘Latin America’ is used loosely in this text to refer to all the American countries south of the United States where the Spanish and Portuguese languages predominate. 21
This section of the book has been possible thanks to the participation of a number of artists and scientists who are currently working in the music and technology field. Some of them have been developing their activities in their countries of origin, while others live outside the borders of Latin America but still keep tight links to colleagues and projects in the region. The invitation to contribute to this chapter was sent to researchers-composers who, taken together, could cover a broad spectrum of scientific and creative endeavour. Latin America is a large and diverse region, and it is treated here as a whole to create a context for the work that has been carried out by some of its people – people who, considering the cultural diversity between countries, find strong and rich links with those colleagues who are or feel themselves part of this territory, wherever they are based (Dal Farra 2006).
The science behind musical expression2
Marcelo M. Wanderley, from the Input Devices and Music Interaction Laboratory of the Centre for Interdisciplinary Research in Music Media and Technology at McGill University in Montreal, Canada, has been a prolific contributor to the music field, helping to understand the links between musicians and their instruments as well as leading a new generation of researchers developing innovative musical interfaces. Wanderley, together with Filipe Calegario and Carolina Brum Medeiros – all of them from Brazil – wrote the following entry. Beyond the discussion of its main subject, this article shows the interest of Brazilian researchers in expanding the music field through renewing the concept we traditionally accept for a ‘musical instrument’. This section will describe some examples.
Design of responsive digital musical instruments
Carolina Brum Medeiros, Filipe Calegario, Marcelo M. Wanderley
The recent widespread availability of prototyping tools for physical computing (Sullivan and Igoe 2004) has had a major impact on the research on new interfaces for musical expression (NIME – www.nime.org) and digital musical instruments (DMI) (Miranda and Wanderley 2006). Tools such as the Arduino platform (Kushner 2011) have made the prototyping of interfaces an accessible task to a community of users who did not possess the technical engineering or design background needed to work with sensors until the early 2000s. The downside of this trend is often a trade-off between design simplicity and design quality. Indeed, it has been shown that many of the recent DMIs present sub-optimal engineering solutions and/or only use the most common sensors available (Medeiros and Wanderley 2014b). Although this might not be an issue for most general prototypes, it might severely limit the response of musical interfaces, as music performers are known for highly developed motor skills and auditory perception, hence requiring accurate and repeatable instruments to express themselves. A second issue is the design itself. Although such tools definitely help with the design of interfaces, they do not define what the design will be. This is still a task of the interface designer, who typically needs to find the proper inspiration and ideas before using prototyping tools. The process of conceiving and developing a DMI can be roughly defined as cycles of inspiration, exploration, discovery, implementation and validation. Generally, these cycles are long and make instrument evolution slow. Prototyping is a crucial part of this process. It is the concrete realization of ideas used for better understanding the initial concept, for validating it and also for generating more ideas. But DMI design is not only about the artifact itself; it is also about the resulting sounds that this artifact allows the musician to play. The instrument
is a means for a creative process. DMI design demands functional prototypes that can react to players’ actions in real time. Functional prototypes are harder to create than non-functional ones. They take more time to build and, in general, are the bottleneck of the DMI design cycle. At the Input Devices and Music Interaction Laboratory (IDMIL), McGill University, we work on these two issues: first, to improve the technical design of musical interfaces and second, to provide tools to help designers create their own designs before using tools to implement them. In this text we give a brief overview of these two research directions.
Improved technical design The PhD thesis work of Carolina Brum Medeiros (Medeiros 2015) focused on the choice for advanced sensing instrumentation and signal processing techniques to improve the performance of DMIs. Musical instruments are controlled by skilled performers, whose refined motor control and auditory perception empower easy identification of any discrepancies between executed motion and resulting sounds. DMIs should not hinder musical performance due to technical limitations. Indeed, most DMI limitations, such as non-linear and non-monotonic sensor response, can be solved by signal conditioning and/or processing. Alternatively, linear sensors can be deployed with the cost of more complex installation, conditioning and processing. An example of improvement in sensing was made in the DMI ‘The Rulers’, originally designed by David Birnbaum, where the deflection and movement of metal beams are measured and this information used to control sound synthesis algorithms. Initially, simpler sensors based on infrared and Hall effect were used. Although straightforward to use, they have limitations that are reflected in the data obtained (non-linearity and non-monotonicity). However, a third type of sensor based on strain gauges, although somewhat harder to implement, can overcome such limitations and provide linear response at defined movement ranges (Medeiros and Wanderley 2011). Data processing and estimation opens another new perspective on evaluating and improving accuracy of measured physical quantities given by low fidelity sensors. A technique for improving accuracy in sensing systems is the use of sensor fusion algorithms, which typically rely on the use of measurements from multiple sensors and on the knowledge of the system’s process models. One strand of our research is to develop data processing frameworks for human input signals, focusing on sensor fusion.3 Our main work in this direction is the framework for standard Kalman filter implementation (Medeiros and Wanderley 2014a). The solution for the problem is not obvious, as driven motions – led by human input – cannot be modeled by a known pattern, sequence, rule or probability (i.e. one cannot tell exactly what gestures musicians will make during a performance). The framework includes a filter evaluation scheme for Kalman filter implementations which can also be applied in any sensing system where the definition of a process model is not easily predictable. The proposed framework is more predictive for known physical models, while being more corrective when user input deviates the system from a well-defined physical model. This balance is given by a filter evaluation protocol that guides the model selection and the gesture classification for each user-driven gesture.
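As an indication of what such a framework builds on, the following is a textbook one-dimensional constant-velocity Kalman filter, not the IDMIL implementation itself; the process and sensor variances are made-up values that would in practice be tuned (or adapted) to real gesture data. It runs the same predict/correct cycle that a full sensor-fusion framework repeats for each incoming measurement.

```python
import numpy as np

def kalman_1d(measurements, dt=0.01, process_var=1e-3, sensor_var=1e-2):
    """Smooth a single noisy position sensor with a constant-velocity model."""
    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    H = np.array([[1.0, 0.0]])               # we only measure position
    Q = process_var * np.eye(2)
    R = np.array([[sensor_var]])
    out = []
    for z in measurements:
        # Predict: advance the state with the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct: blend the prediction with the new measurement.
        y = np.array([z]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out
```

The balance the text describes (more predictive when the motion fits a known model, more corrective when the performer deviates from it) corresponds to how Q and R are weighted against each other; choosing and evaluating that balance for unpredictable, user-driven gestures is precisely what a dedicated framework has to add on top of this basic loop.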
Helping the design of musical interfaces – Probatio
The design process for musical interfaces is typically idiosyncratic and ad hoc. In general, instruments are built by a designer for a specific player, or the roles of designer and player are merged into the same person. With easy access to sensors and development platforms and the trend of the maker movement (Anderson 2012; Dougherty 2012) and do-it-yourself communities, lots
of projects are being shared. What before was related to “How to do it?” now becomes “What to do with what we have?” To address this problem, we have proposed a tool to help idea exploration which also aims to act as a catalyst for the prototyping phase. The Probatio experimentation toolkit for DMI prototyping – the thesis work of Filipe Calegario at the Universidade Federal de Pernambuco (Brazil) in collaboration with the IDMIL – is inspired by the design method called morphological analysis or morphological box, in which existing artifacts are split into fundamental or basic parts and then recombined to generate more ideas. In our case, a list of acoustical, electric and digital instruments was defined and analyzed to extract ways of controlling the instrument, as well as the positions of the instrument relative to the player (Calegario et al. 2017). This work proposes a conceptual morphological box and a tangible experimentation toolkit with which the user can make functional prototypes of DMIs by combining parts of existing instrument controls and parts of instrument structure such as arm or body. One example of an instrument following this approach is the Pandivá,4 based on a guitar body; the player can trigger notes by tapping a pandeiro-like skin and can alter the pitch by moving a trombone-like slide. The Probatio project aims to provide designers with ideas for design, as well as reduce the gap between idea and prototype by merging exploration and implementation into a tool and a method that allow the user to have functional prototypes in less time. These can be used by the designer to better communicate with the musician or by the designer-player to explore new musical interaction ideas. In the near future, the development of new modules and supports for the experimentation toolkit is expected to increase the potential for combination, and, therefore, designer and performer can achieve more adequate results for their musical context and intention of use.
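The morphological-box idea itself is easy to illustrate: decompose instruments into a few dimensions and enumerate the combinations. The categories below are invented for the example and do not correspond to Probatio’s actual modules and supports.

```python
from itertools import product

# Hypothetical decomposition of existing instruments into basic parts;
# recombining them yields a space of candidate instrument concepts.
controls = ["keys", "slide", "membrane", "strings", "breath sensor"]
positions = ["held like a guitar", "on a stand", "worn on the arm", "flat on a table"]
excitations = ["tap", "bow", "blow", "press"]

combinations = list(product(controls, positions, excitations))
print(len(combinations), "candidate instrument concepts")
for control, position, excitation in combinations[:5]:
    print(f"- {excitation} a {control}, {position}")
```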
Pre-Columbian instruments and new music Following also on the development of innovative digital music controllers, composer Jaime Oliver La Rosa, originally from Peru and currently based in the United States, has been working with pre-Columbian musical instruments since 2001. During these years he has recorded a large quantity of whistling vessel jars or silbadores at the Peruvian National Museum of Archaeology, collaborating with the project Waylla Kepa, which documents pre-Columbian instruments5 from Peru. In his work, archaeological sites and museums become sound sources. The composer’s role is to articulate musical works using the sounds of cultures that did not leave any music recording or notation of any kind, just their instruments. Silbadores work by filling them with water and rocking them so as to generate bubbles of air pressure that activate a whistle. In this way, these instruments produce bird-like sounds as well as glissandi and air or breathing sounds, thus generating sonorities and sound structures that are extremely difficult to obtain otherwise. The work by Oliver La Rosa has multiple links with Latin America’s culture, current and ancient. No doubt one such link is the strong and clear bridge he has built between Andean musical instruments and modern digital musical controllers. In the following entry, he explains both his technical and his conceptual approach to music and performance.
Digital musical instrument design based on pre-Columbian models
Jaime Oliver La Rosa
The contemporary use of pre-Columbian musical instruments brings up interesting political and cultural issues. For most Peruvians, ancient Andean culture is a symbol of idealized pre-colonial times which must be preserved intact and which generates a large tourist economy.
By manipulating and transforming these sounds with modern computer techniques, I intend to produce music that does not contribute to the idealization or commodification of preColumbian cultures as mystical pre-colonial tourist attractions. In this way, this work confronts the political and cultural issues of using historic sound sources and technological artifacts in contemporary Peru.6 Pre-Columbian silbadores are also a means to enquire about the nature of musical instruments at large since they fail to conform to dominant Western concepts of the musical instrument (Kartomi 1990), mainly by not producing pitch scales. I began to build sensor-augmented silbadores as part of my compositional research in 2006.7 Since then, I have been active in the development of new computer-based controllers and musical instruments. Using video cameras as sensors to track hand-shape features continuously over time, these instruments attempt to push the boundaries of the concept of musical instrument itself. They, and some of the compositions that make use of them, have received prizes in the United States, Brazil and Germany.8 The Silent Drum (Oliver and Jenkins 2008) and the MANO controller (Oliver 2010) were released using open source licenses and a detailed tutorial for their construction (available on his personal website).9 These tutorials have enabled other people to replicate these controllers, perform several of my own compositions and contribute to and modify the source code. These controllers’ multidimensional and interdependent set of gesture features undergo a variety of classification techniques and allow the performer/composer to immediately produce complex sound behaviors that would probably be impossible to achieve in disembodied contexts. Furthermore, the impulse in the design of these instruments is aesthetic, therefore producing music compositions – as opposed to ‘demos’ – which give the instrument a specific musical context.10 Through the release of the instrument’s C programming code with open source licenses there have been a few recreations of the instruments around the world, both in academic and non-academic contexts. However, three specific cases are worth exploring in detail, as they not merely replicate the Silent Drum but rather appropriate and transform it in interesting directions, to create altogether new works and devices. The Silent Drum (see Figure 1.1) is a drum with a very elastic head made of spandex fabric. When the head is pressed, the hands and fingers of the performer produce shapes in the fabric
Figure 1.1 The Silent Drum by Jaime Oliver La Rosa
that can be analyzed and tracked over time, generating multiple control signals that are then used for sound generation and transformation. Menno Van der Woude, a dancer and audiovisual programmer-artist from The Netherlands, developed the Veerkracht (Resilience) project in 2010 (see Figure 1.2). Upon seeing the Silent Drum, Van der Woude thought, “What if the drum was bigger and the drummer would be in the drum?” This initial question led to the construction of a large surface which the dancer or dancers inhabit, so to speak, and rather than pressing a drum head, they push it from within.11 In Cali, Colombia, computer music researcher Daniel Gomez developed Yemas (Gómez et al. 2013) at the University of the Colombian Institute of Advanced Studies (ICESI), a drum instrument that adopts the Silent Drum’s basic principles but reformulates the tracking strategy using a 3D algorithm developed together with fellow ICESI researcher Jose Moncada, using a Kinnect camera in OpenFrameworks. Daniela Sanchez and Mauricio Ossa, students at ICESI, then designed a new body for the instrument. Yemas has a larger diameter than the Silent Drum and provides three-dimensional parameters for the musician to use (see Figure 1.3). Gomez, Vega and Arce-Lopera developed new and interesting sound mappings for Yemas, using its surface to navigate predetermined “timbre spaces” described using a variety of spectral features.12 A new version of the instrument discards the drum body altogether and simply stretches the fabric between two tables, as seen in Figure 1.4.
Figure 1.2 Veerkracht by Menno Van der Woude
Figure 1.3 Yemas by Daniel Gómez
Figure 1.4 New version of Yemas by Daniel Gómez
Figure 1.5 Iluminado by Ricardo Brazileiro
Media artist Ricardo Brazileiro, from Recife, Brazil, has been developing Iluminado13 since 2009 (see Figure 1.5). Brazileiro considers Iluminado to be a “fork”14 of the Silent Drum from an Afro-Brazilian perspective. A traditional Afro-Brazilian drum called Ilú is modified by attaching to it the tracking hardware and software of the Silent Drum. Brazileiro’s idea was to integrate AfroBrazilian instrument builders and hackers to mesh traditional knowledge with digital approaches to music. Brazileiro collaborated with the Associação Recreativa Carnavalesca Afoxé Alafin Oyó15 to develop Iluminado. He often performs with ensembles combining traditional Afro-Brazilian afoxé16 with computer-generated and -controlled sounds. Brazileiro uses Iluminado’s openly gestural qualities to teach computer music in a more intuitive and attractive way in both classroom, concert and installation settings, and he has taught many people to perform with the instrument. 27
Open source software and the widespread availability of common computing and sensing hardware have worked as mechanisms for integration into a larger community of Latin American developers and creators. Code and its associated knowledge are produced and appropriated by local artists/researchers to produce art and generate knowledge, returning transformed code to the world and to the web. These instruments and their code are created with "compositional" or aesthetic intention. Therefore, code exchange within Latin America and with the rest of the world creates new nodes and flows of knowledge and aesthetics. The internet has allowed these broader communities to articulate, produce and share information. However, while the internet and open source era allows for exchanges that were previously impossible, there is still a risk of authorship being erased in these open source communities, in particular in countries where the resources to properly record and archive these histories are still limited. To conclude: some important questions still remain to be discussed. Considering the complex social and cultural makeup of these postcolonial societies, to what extent has Western history erased the multiple contributions of Latin Americans?
Music and cognitive processes
As the group leader of the Interdisciplinary Nucleus for Sound Studies (NICS) at the University of Campinas in Brazil, Professor Jônatas Manzolli has been focusing on the interweaving of musical creation, computer technology, psychoacoustics and the cognitive sciences. His work with interactive mediated music, studying the development of evolutionary processes for composition and sound design, has led to projects like Roboser, created with Paul Verschure, which combines an interactive musical system with neuromorphic devices (Manzolli and Verschure 2005, 2013). While based in Brazil, NICS has strong links with an international network of research institutions, including IDMIL in Canada, led by Brazilian researcher Marcelo M. Wanderley (see the first section of this chapter).
Musical creation based on studies of cognitive processes supported by interactive and immersive media
Jônatas Manzolli
We report here on recent research activities and artistic creation at the Interdisciplinary Nucleus for Sound Studies (NICS), University of Campinas (UNICAMP). Directed by Professor Jônatas Manzolli until September 2015, NICS is an interdisciplinary nucleus founded in 1983 to develop research on sound communication in a broad sense. It studies different manifestations of sound, considered as the main object of informational content. Bringing together researchers from different areas of knowledge, centred on the arts and sciences, NICS has developed interdisciplinary projects that aim to build relationships between musical creation and the discovery of new models for the production, control and analysis of the sonic phenomenon.17 In the past 10 years, more than 20 students have graduated from NICS – in collaboration with the Institute of Arts and the Faculty of Electrical Engineering – with thesis topics such as interactive modeling for sound synthesis and spatialization in the micro-temporal domain, computer-aided orchestration using audio descriptors, studies on the influence of expectation on auditory perception, music creation and performance in the context of digital musical instruments, recursive models applied to percussion music and interactivity modeled by a self-organized instrument-space.
NICS also works in partnership with international institutions. The RepMus research group at IRCAM (Paris) collaborates in the development of software environments dedicated to computer-assisted music analysis. Recently, in August 2014, we organized the First French-Brazilian Colloquium on Computer Aided Musical Creation and Analysis. This event included lectures, round tables, concerts and workshops addressing topics in advanced processes applied to musical analysis and creation, real-time interaction and the formalization of musical structures. SPECS, at the Centre of Autonomous Systems and NeuroRobotics (NRAS) of Pompeu Fabra University, Barcelona, contributes its experience in cognitive systems and interactive media. Two works created in collaboration with SPECS are presented below. There is a productive international collaboration between the Input Devices and Music Interaction Laboratory (IDMIL) and the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill and the Interdisciplinary Nucleus for Sound Studies (NICS), University of Campinas (UNICAMP), Brazil. An effective and fruitful program of student exchange has been developed since 2010. There follows an overview of the concepts underlying NICS's research and the methodological approach of the Laboratory of Interactive Media and Digital Immersion (ImCognita). We then summarize recent projects with references to publications and associated audio-visual material.
Overview
Composing interactive pieces that integrate multimedia resources is a creative challenge. It requires a balance between several sources of information and the control of different devices and technologies. It brings compositional design into an interdisciplinary framework that relates electroacoustic music, mediated performance, improvisation, audio-visuals and the development of new digital music interfaces. Complementarily, in the context of new music technologies, an interactive environment can function as a laboratory to test computational models of musical cognitive processing and interactive behavior. With the advent of new technologies that have emphasized interaction and novel interfaces, alternative forms and modes of interactive media have been realized (Rowe 1993; Winkler 2001). These developments raise fundamental questions about the role of embodiment, the environment and real-time music interaction. From the NICS perspective, this study should be supported by the generation of real-time data focused on understanding human-machine interplay through real-time music creation.
Laboratory of Interactive Media and Digital Immersion
Located at NICS, the ImCognita Lab is a multi-user laboratory designed to capture and analyze multimodal signals (audio, video, images, human movement and bio-signals). It develops a research program based on the creation of artworks that combine multiple modalities using interactive media, to produce digital immersion and augmented cognition, and to study human cognition and creativity using sensory devices, computer graphics, motion capture and bio-signals. The methodological point of view is that the notion of presence indicates that there are essential inputs for the construction of self-referential agents (Bernardet et al. 2010). Thus we deploy methodological efforts focused on interactive media within a mixed reality environment in order to study the construction of meaningful relationships between agents and environmental stimuli in a virtual space and to explore and analyze real-time interaction data. The ImCognita Lab aims to create a unified experience where data and users are merged in space (i.e. a true mixed reality experience) and evolve coherently in time (i.e. narrative progression). In this process the ImCognita engine itself will act as an autonomous adaptive sentient
guide that helps humans to explore high-dimensional data and discover novel patterns driven by both their implicit and explicit (re)actions. Applications of these concepts in multimodal performances are presented next: the works re(PER)curso (2007)18 and Multimodal Brain Orchestra (2009).19 The main objective is to study human cognition based on the integration of multimodal signals, real-time interaction with interactive audio-visuals and storage and analysis of all collected information.
re(PER)curso and Multimodal Brain Orchestra
In two previous works, we have developed artistic performances based on the perspective of presence: re(PER)curso (2007) and Multimodal Brain Orchestra (2009). In the first, the approach was to integrate algorithmic composition with interactive narratives and scientific sonification and visualization. The structural pillars of the work were not a script or textual narrative, but how the concept of recursion could be used as a way of constructing meaning. Specifically, the interaction between two human agents that produced recurring changes in the physical world and an avatar in the virtual world created a substrate for the emergence of meaning (Mura et al. 2008). The interactive performance re(PER)curso was presented in Barcelona in 2007 at the Museu d'Art Contemporani de Barcelona and in 2008 at the Art Futura festival at the Mercat de Les Flores, Spain (see Figure 1.6). The performance explored the confluence of the physical and the virtual dimensions which underlie existence and experience and posed questions about the significance of artificial sentience and our ability to create and coexist with it. In the second study, we demonstrated how the internal and external representations of the world could be joined together in a performance to create music, sounds and video, which can also be seen as a form of interactive narrative of a mixed reality (Le Groux et al. 2010). The four Brain Orchestra members played virtual musical instruments through brain-computer interface (BCI) technology alone. The orchestra was conducted while, at the same time, an "emotional conductor," seated in the front right corner, drove the affective content of this multimodal composition by means of her physiological state (see Figure 1.7).
Figure 1.6 The interactive performance re(PER)curso (2007)
Figure 1.7 The Brain Orchestra performance (2009)
These two previous experiences raised the fundamental question of whether there is a universal, even basic, structure underlying human auditory and visual experiences that is based on a single property of the human brain, as opposed to being fragmented between a number of modality-specific properties. This invariant property could be the drive of the brain to find meaning in events organized in time, or to define a narrative structure for the multimodal experiences to which it is exposed. In short, these works illustrate that aesthetic experience can be, at least partially, obtained as an organization emerging from the interaction between human users and an interactive system.
Interactive audio-visual composition: continuaMENTE
Jônatas Manzolli created continuaMENTE (2007),20 an interactive audio-visual piece for tape, texts, video, interactive percussion and live electronics (Manzolli 2008). It explores the relationship between percussion, audiovisuals and digital music: interactive mallets, a carpet, gloves and a laptop were used as interfaces, and three percussionists were the agents. Commissioned by the Itaú Cultural Foundation, São Paulo, Brazil, it was first performed in August 2007 in the exposition "Memória do Futuro" (Memory of the Future). ContinuaMENTE integrated several types of material, including real-time sounds generated from the musical gestures produced by the three musicians on stage. The use of three interfaces – interactive mallets, gloves and a carpet (see Figures 1.8 and 1.9) – allowed the musicians' actions to generate complex sound textures on a MIDI-controlled piano. The structuring elements of continuaMENTE are the sound diffusion system, video and light effects, the spatial distribution of instruments and performers on the stage, three new interfaces and the performers' gestures. The performers' bodies were taken as musical instruments, and their tactile exploration with interfaces on different percussion setups (distributed all over the stage) created new sounds in real time. The musicians and the composer performing on the stage developed a dialogue exploring all possible means of interaction between digital interfaces, acoustic instruments and the real-time computer composition system. The performance role of each agent was guided by sonic cues
Figure 1.8 An overall view of the stage during performance of continuaMENTE. It is possible to see the interactive carpet and the interactive mallets played on the stage floor
Figure 1.9 An overall view of the stage during performance of continuaMENTE. It is possible to see the interactive carpet and the interactive mallets played on the stage floor
and a script with improvisation guidelines. During the performance, tactile explorations were possible when the musicians wore piezoelectric sensor gloves. These interfaces acted as close-up or zoom controls for gesture: gesture details were amplified and sent to a computer that in turn controlled a MIDI piano. This performance system integrated audiovisual material and provided support for reinterpretation and the reassignment of meaning.
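A minimal sketch may help to clarify the signal path described above, from an amplified gesture reading to the MIDI-controlled piano. It is an illustration under assumed values, not the actual continuaMENTE patch: a normalized amplitude derived from a piezoelectric glove sensor is mapped to note and velocity on a MIDI output (the mido package, the default output port and the scaling constants are assumptions).

```python
import mido

THRESHOLD = 0.15                      # below this, the gesture is treated as silence
out = mido.open_output()              # default MIDI output (hypothetical piano port)

def gesture_to_piano(amplitude: float, register: int = 48) -> None:
    """Map a normalized piezo amplitude (0.0-1.0) to a single piano note-on."""
    if amplitude < THRESHOLD:
        return
    note = register + int(amplitude * 24)        # stronger gesture -> higher note
    velocity = min(int(30 + amplitude * 90), 127)
    out.send(mido.Message("note_on", note=note, velocity=velocity))
    # note-off scheduling is omitted here for brevity

# e.g. called for each incoming sensor frame:
gesture_to_piano(0.6)
```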
Interactive composition and robotics: AURAL projects
AURAL (2009) and AURAL2 (2011),21 created by Artemis Moroni and Jônatas Manzolli, were collaborations between NICS and the research centre in Computer Technology "Renato Archer" (CTI), Campinas, Brazil.22 The AURALs were designed to focus on an interaction metaphor that a robotic interface establishes between the virtual and the real world. In the AURAL environment, the behavior of mobile robots in an arena is applied as a compositional strategy, and the sonification is generated by mapping the trajectories of the robots into sound events (Moroni et al. 2014; Moroni and Manzolli 2015). Closing AURAL's first exhibition at the UNICAMP art gallery, a dancer, three musicians and the AURAL system itself, with four robots, performed the interactive concert Robotic Variations (Moroni and Manzolli 2015). The trajectories used to generate the material for the composition were recorded and transmitted again to the master robot for the performance. Interactive scenery displayed real-time processed images on the walls. The dancer was invited to interact with the robots in the arena in a live performance. For the visual tracking, a brightly coloured panel was fixed on top of each robot. The choreography was designed so that the robot with a red panel left the room and was replaced by the dancer wearing a red hat. Her position was tracked by the visual system through the red hat and hence influenced the performance of the sound, setting up another human-machine interaction cycle. Figures 1.10–1.12 show some pictures of the musicians and of the dancer taken during the rehearsals.
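The mapping idea at the heart of AURAL – robot trajectories turned into sound events – can be sketched generically. The following is an illustration with assumed arena dimensions and an assumed OSC destination, not the AURAL implementation: each tracked (x, y) position is converted into a pitch/pan pair and sent to a synthesis engine.

```python
from pythonosc.udp_client import SimpleUDPClient

ARENA_W, ARENA_H = 4.0, 4.0                  # assumed arena size in metres
client = SimpleUDPClient("127.0.0.1", 9001)  # hypothetical synthesis engine port

def sonify(robot_id: int, x: float, y: float) -> None:
    """Map one tracked robot position to a pitch/pan sound event."""
    pitch = 48 + int(24 * min(max(x / ARENA_W, 0.0), 1.0))   # x spans two octaves
    pan = min(max(y / ARENA_H, 0.0), 1.0)                     # y spans left/right
    client.send_message("/robot/event", [robot_id, pitch, pan])

# e.g. called once per tracking frame for each robot:
sonify(robot_id=1, x=1.7, y=3.2)
```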
Figure 1.10 AURAL – Robotic Variations – the dancer, the robots and the interactive scenery
Figure 1.11 AURAL – Robotic Variations – the robots and the musicians
Figure 1.12 AURAL – Robotic Variations – the dancer, the robots and the musicians during the rehearsals
Colombian contributions to computer music
The work of Colombian researchers and composers in electronic and computer music is not widely known. Camilo Rueda is recognized for his contributions to computer-assisted composition within programming environments such as PatchWork and OpenMusic, available for multiple platforms. Among Rueda's earlier (but still ongoing) projects is AVISPA (Ambientes Visuales de Programación Aplicativa), research focused on defining computational models that might allow an understanding of the nature and behavior of complex systems, such as observing the evolution and interaction of their processes. The resulting applications could be applied
to a diversity of fields in the sciences, engineering or the arts (e.g. formal languages and tools for computer music).
Wiring, an open source programming framework for microcontrollers, was created with artists and designers in mind. It is a well-documented open project initiated by Hernando Barragán in 2003 that "allows writing [of] cross-platform software to control devices attached to a wide range of microcontroller boards to create all kinds of creative coding, interactive objects, spaces or physical experiences" (http://wiring.org.co). Free to download, open source and open hardware, Wiring is being used around the world. The Wiring language builds on Processing, an open project started by Ben Fry and Casey Reas. The Wiring board, along with the Wiring development environment, can be used to create stand-alone interactive objects or can be connected via USB to Max/MSP, Pd or other software, opening it up to anyone working with sound.
The following contribution by composer Juan Reyes spans a variety of aspects of this work. His research is aimed toward the semantics of gesture and perception as well as novel modes of performance and expression. In this sense Reyes, who has degrees in music and mathematics, has been very active in expanding the possibilities of telematic concerts as well as exploring innovative sound synthesis modalities.
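As a hedged illustration of the USB connection to Pd or Max mentioned above, the sketch below shows one possible host-side bridge: sensor readings printed by a Wiring/Arduino-style board over its serial port are read on the computer and forwarded as OSC messages that an audio environment can receive. It is not part of Wiring itself; the serial device path, baud rate, OSC port and one-reading-per-line message format are all assumptions, and the pyserial and python-osc packages are used.

```python
import serial                                   # pyserial
from pythonosc.udp_client import SimpleUDPClient

PORT = "/dev/ttyUSB0"        # hypothetical serial device exposed by the board
BAUD = 115200                # must match the rate set in the board's sketch
client = SimpleUDPClient("127.0.0.1", 8000)     # audio environment assumed to listen for OSC here

with serial.Serial(PORT, BAUD, timeout=1) as board:
    while True:                                  # stop with Ctrl-C
        line = board.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            value = int(line)                    # board assumed to print one reading per line
        except ValueError:
            continue
        client.send_message("/sensor/0", value)  # forward to the synthesis patch
```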
Extending the range: gesture, performance, synthesis and telematics
Juan Reyes
A range of developments, from software-based automatic computer-assisted composition to teleconcerts, shows that the practice of computer music in Colombia, as well as in other Andean countries, has been steadily established through the years. This section shows how research in computer science by a few Colombians was seminal for methods of algorithmic composition and real-time interaction around the world. Most of this research is still ongoing and gives us a vision of how the field is approached in the region. Recent telematic performances across the continent reflect the directions in which new music presentation might be going.
Introduction
In the developing relationship between music and technology, some of the methods developed more than 20 years ago are still current. For instance, "constraint-based composition" remains an active topic among computer scientists, composers and performers. This paradigm is still in use at IRCAM and other centers around the world (Truchet and Assayag 2011). Colombian computer scientist Camilo Rueda, his research group and his students are still developing heuristics for a wide range of applications.23 Likewise, the Arduino revolution began as an idea of another Colombian computer scientist, Hernando Barragán. His idea for an embedded system for physical interaction using sensors and computers was to put within the reach of artists and designers the notions, terminology and craft previously in the domain of engineers. Following ideas of Bill Verplank24 at the Ivrea Institute of Design,25 Barragán developed a hardware-software environment called "Wiring" which enabled the flexible connection of input-output devices for controlling human-machine interaction (Barragán et al. 2012). Physical interaction has become a daily feature of many artists' practice, part of their search as they develop extensions to musical instruments, alternatives to performance and new ideas for interfaces, controllers and instruments. This is a widespread practice in Colombia, as in other countries in the region, mainly because Wiring's documentation and examples are written in Spanish.
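To make the idea of constraint-based composition mentioned at the start of this introduction concrete, here is a deliberately tiny sketch. It is only a toy illustration, far simpler than the PatchWork/OpenMusic libraries or the AVISPA models: pitches for a short melody are found by a backtracking search subject to declarative constraints, rather than being computed procedurally. The pitch domain and the two constraints are arbitrary choices for the example.

```python
DOMAIN = list(range(60, 73))      # MIDI pitches C4..C5

def constraints_ok(melody):
    """All pitches distinct, and no melodic leap larger than a perfect fifth."""
    if len(set(melody)) != len(melody):
        return False
    return all(abs(a - b) <= 7 for a, b in zip(melody, melody[1:]))

def solve(length, partial=()):
    """Depth-first search over the pitch domain, pruning with the constraints."""
    if len(partial) == length:
        return list(partial)
    for pitch in DOMAIN:
        candidate = partial + (pitch,)
        if constraints_ok(candidate):
            result = solve(length, candidate)
            if result:
                return result
    return None

print(solve(8))   # one 8-note melody satisfying the constraints
```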
Interaction
Many have followed the path of creating custom controllers and interfaces, some modeled on successful approaches such as Jaime Oliver's Silent Drum (Oliver and Jenkins 2008), Sergi Jordà's Reactable (Jorda et al. 2005) and the Mathews-Boie Radio Drum (Mathews 1991). Owing to the effects of globalization, synthesis engines such as Pd and Max are also popular in the region, because of their features and their integration with Wiring and Arduino boards. The term "haptics" was first heard on the Colombian music scene in April 1993, in a conversation with John Chowning on one of his visits, referring to dexterity in digital audio editing, particularly for blind and disabled people. Since then, haptics has been a continuing line of inquiry in schools of design and in some music contexts as well. Juan Reyes, who was also a student of Bill Verplank and Max Mathews at Stanford, and a regular collaborator on their music interaction course, has carried out in-depth research on tactile systems and haptics while working on various goals in real-time performance and expression (Reyes 2008).26
Signal processing and physical modeling
Physical models of plucked and bowed strings have been used in composition, as well as those of air columns and the vibration modes of solid bodies and bars (Chafe 2004). Juan Reyes has used these models in most of his later compositions. His earlier work Straw-Berri (1997)27 uses parametric algorithms to model the sounds of plucked strings and flutes. A virtual computer model of maracas is part of the instrumentation in Wadi-Musa (2001).28 Since physical models are independent of their actual interface, performance parameters can easily be changed, so boundary conditions can take arbitrary and sometimes extreme values in Reyes's works. PPP (2000)29 is a duet piece for traditional piano and its synthetic physical model. The difference between these piano sounds is that the model is custom-tuned to the 13-step Bohlen-Pierce scale (Mathews and Pierce 1989, p. 167). Similarly, Open Spaces (2010–2014) is a composition including computer models of singing Tibetan bowls, video and performers in situ or in remote locations.30 Sound durations are well beyond those of the real instrument, achieving different and fresh timbres. The tunings of the different bowls, owing to their unconventional shapes and sizes, result in micro-tonalities. Few pieces in the electroacoustic repertoire use this kind of technique. The handling of parameters in this sound synthesis method is very complex: there is almost too much to control within the increased flexibility offered for the acoustics of the instrument being modeled, and this can sometimes go beyond what the compositional ideas require.
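Two of the technical points above – a plucked-string physical model and the 13-step Bohlen-Pierce tuning – can be sketched compactly. The following is a generic illustration, not Reyes's own synthesis code: a Karplus-Strong plucked string (a standard, simple plucked-string model) whose delay-line length is set from frequencies drawn from an equal-tempered Bohlen-Pierce scale, i.e. 13 equal divisions of a 3:1 "tritave" rather than a 2:1 octave. The sample rate, damping factor and 220 Hz reference are assumptions.

```python
import numpy as np

FS = 44100  # sample rate in Hz

def bohlen_pierce(base_hz: float, step: int) -> float:
    """Frequency `step` BP steps above `base_hz`: 13 equal divisions of 3:1."""
    return base_hz * 3.0 ** (step / 13.0)

def pluck(freq_hz: float, seconds: float = 2.0, damping: float = 0.996) -> np.ndarray:
    """Karplus-Strong: a noise burst circulating in a delay line with averaging."""
    n = int(FS / freq_hz)                    # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)       # initial excitation (the 'pluck')
    out = np.empty(int(FS * seconds))
    for i in range(out.size):
        out[i] = line[i % n]
        # two-point average acts as the string's lowpass loss filter
        line[i % n] = damping * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

# a short BP arpeggio from a (hypothetical) 220 Hz reference:
notes = [pluck(bohlen_pierce(220.0, k)) for k in (0, 3, 7, 10)]
```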
Scanned synthesis
A combination of physical models and haptic (tactile) perception has also been explored by Juan Reyes. This notion was introduced by Mathews and Verplank through a technique called "scanned synthesis" (Verplank et al. 2000). This model is based on the sense of touch, how things are manipulated and how these manipulations are perceived (e.g. haptic feedback). A prototype for scanned synthesis was made up of a radio drum as a physical controller and a visualization program. Performers would see an elastic object vibrating on the computer screen and control the shape of this motion with the radio-drum mallets. The mallets could also agitate the system or mute it, and hence control its haptics. A variety of springs of different densities can be used as elastic objects. Vibrations are scanned along the length of the spring, resembling the friction of human fingers and skin on corrugated surfaces or on strings of pearls. Frequency is produced by the speed of scanning. Furthermore, timbral nuances are obtained from the intricacies of the objects being scanned. Feather
Rollerball (2002) is another duet piece for piano and live radio baton, counterpointing gestures achieved through scanned synthesis with those of the piano performance. Haptic feedback for each gesture is the result of the interaction between the performers, the visuals of the system and the perceived timbres. The system comprises a radio baton hooked to a computer, and the baton performer shapes sounds with its mallets. A choice of waveforms and modes of vibration is described in the score of the piece. Another work, Os Grilos (2014), explores sound textures by using wave shapes achieved through the low-frequency vibrations of scanned synthesis.31 In this case there is an ensemble (like having several radio batons at once) of virtual instruments that blend in the space, creating in-phase and out-of-phase beatings in addition to trajectories for spatial motion.
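The mechanism described in this section – a slowly vibrating elastic object whose shape is scanned at audio rate, with the scanning speed setting the pitch – can be sketched as follows. This is a generic illustration under assumed parameters, not the Mathews/Verplank prototype or Reyes's implementation: a damped mass-spring string is updated at a low "haptic" rate, and its displacement shape is read like a wavetable at audio rate.

```python
import numpy as np

FS = 44100          # audio sample rate
N = 128             # number of masses along the virtual string
HAPTIC_RATE = 200   # string-shape updates per second (well below audio rate)

def render(freq_hz=110.0, seconds=2.0, stiffness=0.1, damping=0.002):
    shape = np.hanning(N)               # initial 'pluck' shape of the string
    velocity = np.zeros(N)
    out = np.empty(int(FS * seconds))
    phase = 0.0
    step = N * freq_hz / FS             # scan increment per audio sample sets pitch
    update_every = FS // HAPTIC_RATE
    for i in range(out.size):
        if i % update_every == 0:
            # simple damped wave-equation update of the string (fixed ends)
            accel = stiffness * (np.roll(shape, 1) + np.roll(shape, -1) - 2 * shape)
            velocity = (1 - damping) * velocity + accel
            shape += velocity
            shape[0] = shape[-1] = 0.0
        # scan the current shape as a wavetable (linear interpolation)
        j = int(phase)
        frac = phase - j
        out[i] = (1 - frac) * shape[j % N] + frac * shape[(j + 1) % N]
        phase = (phase + step) % N
    return out - out.mean()             # remove the DC offset of the initial shape
```

As the shape evolves slowly under the wave-equation update, the timbre of the scanned output drifts, which is the effect the prose above describes as timbral nuance arising from the intricacies of the object being scanned.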
Teleconcerts
At present, a lot of computer music activity is related to teleconcerts and remote interaction between artists (Chafe and Others 2014). The southern part of the Americas has not been absent from this scene. Various concerts between Brazil, Chile, Colombia, the US and Europe have taken place thanks to advanced networks like Internet2 and RedClara in Latin America.32 Colombian performers have also been able to play with their counterparts in different cities of the country as well as with performers in the rest of the continent and the world (Valencia et al. 2015).33 Chilean engineer and musician Juan Pablo Caceres has been a key figure in the development of this technology, partly materialized in his software JackTrip (Caceres and Chafe 2009). This telematic approach has spawned a generation of artists and performers willing to extend the state of the art by appropriating its features. Aside from concerts, performances and installations, telematic-telepresence gathering venues are being developed in the region in Colombian cities such as Manizales, Medellin, Cali and Barranquilla (Valencia and Reyes 2015). The search for a language surrounding telepresence is ongoing. While the 1990s saw the dawn of computer technology in the arts and music of Colombia, it should be said that its electroacoustic music practice has developed from the tape cutting and splicing methods of the 1960s to new forms of real-time tele-performance on wide networks across the world. Extended as well as new musical instruments taking advantage of Arduino-like microcontrollers and mini-computers have also taken the stage. This is a community where technology has played an important role in the past, and it is now joined by a growing group from the new generations. Steps taken by the likes of Rueda, Barragán and Reyes testify that there are trails to follow if a composer or artist in Colombia or its surroundings wants to take new paths in his or her creative research.
Experimentalism and collective work
The research work led by Professor Fernando Iazzetta at the University of São Paulo in Brazil has focused on composition and performance using new technologies. As is already clear from the title of his PhD dissertation, "Silicon Sounds: Bodies and Machines Making Music" ("Sons de Silício: Corpos e Máquinas Fazendo Música"), his interdisciplinary projects have brought together artists, scientists and engineers, with sound and music always playing a central role. The following entry includes aspects of his research that are increasingly interesting for composers/performers from around Latin America: telematic concerts. Worth mentioning here is the number of researchers around centers devoted to electronic and computer music in Latin America. Iazzetta mentions 30–40 people in the center he has been leading. In addition, it needs to be said that this is one of several research centres in Brazil focusing on similar interests, meaning that the investigation of musical creation and performance using new technologies is an area of relevance in Latin America at the highest academic level.
The work at NuSom
Fernando Iazzetta
For the last 15 years a group of artists and scholars interested in connecting artistic production and academic research have been working together at the University of São Paulo in an interdisciplinary way. Although the group includes people from different backgrounds – from music to visual arts to engineering – their main interest is related to sound in its connections with art, science, technology and society. Since 2012 the group has been gathered around a research centre, the NuSom–Research Centre on Sonology.34 The Centre is the result of successive research projects converging on the interconnection between music and technology. For example, one of the first projects, titled AcMus,35 proposed the creation of a software toolbox for measuring and analyzing room acoustics. AcMus was followed by another project, Mobile,36 which was more inclined towards the production and analysis of interactive music systems. By 2012 the group had decided to change the direction of its work, shifting the emphasis from music technology to a broader framework that goes beyond music to include other artistic, cultural and social practices in which sound plays a central role. At that time the group adopted the term "sonology" due to its neutral and general reference in Portuguese. The Centre, directed by Fernando Iazzetta, has brought together about 30 to 40 people coming from different backgrounds – music, visual and performing arts, computer sciences, engineering – and has been playing a leading role in the establishment of a growing academic scene interested in sound studies and experimental music. Funded both by the University of São Paulo and Fapesp (São Paulo Research Foundation), NuSom's main goal is to overcome the boundaries between artistic production and scientific research by integrating the production of creative works, technological research and critical reflection in a unified process. Three aspects direct the group's activities. The first is an emphasis on collaborative processes of artistic creation in order to decrease the weight of authorship in the resulting artworks. At the same time, by giving up the development of individual creation, the group manages to integrate people from different areas such as sound art, architecture and computer science. In this sense, improvisation is taken as an important component of NuSom's works. We should mention the work of the Orquestra Errante, a collective of musicians directed by Rogério Costa, one of the members of NuSom. The orchestra became a rich environment for exploring the potentialities of free improvisation. The second aspect is an inclination towards experimentation. Although the group is not associated with any aesthetic orientation, the choice of an experimental attitude reflects its proposal for integration between artistic production and academic research. Finally, the third aspect is the commitment of the group's production to local reality: issues such as integration with the community, collaboration with and support of other groups by allowing access to its facilities, and the use of locally developed technologies are among NuSom's concerns. These interests also determine the group's relationship with technology. The choice of tools and systems to be employed is also part of its aesthetic discussions and reflects the social engagement of the group.
That is, instead of focusing efforts on developing new tools, the group seeks to explore local sources – both technological and cultural – when building its music devices, tools and instruments. Parallel to the use of high-end resources or the development of custom-made software, the group explores a low-tech perspective, again assuming a critical posture regarding the somewhat widespread technicist perspective in current art movements. This attitude is aligned with a post-digital discourse and can also be related to the concept of gambiarra, a Brazilian term originally used to refer to informal, improvised and unusual solutions to
unforeseen technical problems. Despite its originally pejorative connotation, the concept has been adopted – in a provocative way – by several Brazilian artists to represent their aesthetic position with respect to the current technological context. To exemplify NuSom's artistic production, we may mention two initiatives that have been carried out by the group in recent years: the ¿Música? series and the NetConcerts. The ¿Música? concert series37 originated in 2006 from the need to create a collective space to present the results of the group's creative research. The paired question marks in the title allude to the question of what should be considered music, especially within an academic and institutionalized environment aimed at concert music. The series became regular (currently NuSom produces two to three different ¿Música? performances every year) and performances take place in different environments, both inside and outside the university. Each concert explores specific characteristics of the venue's space and audience and proposes a critical reflection about the ways we make and listen to music: this means composing site-specific sound works, building new instruments and sound devices, and creating new strategies of interaction between artists and audiences. In its turn, the NetConcerts series38 began as research into the idiosyncrasies of real-time audio transmission over the internet. The first steps benefitted from a partnership in 2011 with the Sonic Arts Research Centre (SARC), Belfast, Northern Ireland, whose previous experience of promoting music performances over the internet helped the Brazilian group to develop tools and music works and to participate in several collaborative concerts involving groups located in different countries. The collaboration with SARC started with some preliminary informal performances in 2011 in which musicians located in São Paulo and Belfast played together through fast internet connections. After that, the group was engaged in different projects related to remote music performance, such as the Sonorities Festival in Belfast (2012), the Festival Internacional de la Imagen (2013) in collaboration with musicians from CCRMA–Stanford University and other universities in Colombia, and the CCRMA TeleConcert (2014)39 involving musicians from São Paulo, Stanford, Paris, Geneva, Virginia, Kansas City and Manizales.40 This is an example of research guided by artistic rather than technical concerns which, at the same time, turned out to bring some relevant outcomes in terms of academic research. The NetConcerts project led to the production of collaborative artistic works that were created to explore the specific characteristics of the internet as a musical medium, as well as to develop academic research guided by artistic demands. For example, the Medusa project was created in the scope of a PhD programme within NuSom's activities and consists of a distributed music environment that allows easy use of network music communication in common music applications.41 NuSom has also worked in partnership with other groups and artists and has promoted academic and artistic meetings. One of these partnerships has led to the constitution of the Personne Ensemble.42 This group includes artists from different parts of Brazil producing audiovisual performances that use recycled sounds and images brought by each of its members.
Their performances also connect local and global content by mixing sounds, images and sound instruments collected from these different contexts. Personne's first work was a tour de force to recreate Pierre Schaeffer and Pierre Henry's classic musique concrète work, Symphonie pour un homme seul, in the form of a live instrumental performance.43 Being somewhat removed from the Eurocentric tradition in which musique concrète was born and cultivated, the Brazilian group was able to craft an unusual, although very refined, realization of the Symphonie. More recently the group has been interested in exploring the possibilities brought by technology to promote the engagement of different social groups in musical processes. By promoting
performances in non-regular venues such as parks, galleries and open spaces, and by using handmade electronic instruments and other sonic gadgets in their performances, they also propose the inclusion of a more critical view of technology within artistic practice. This perspective appears in the themes of two international conferences organized by NuSom in 2016. The theme of the 12th International Symposium on Computer Music Multidisciplinary Research (CMMR)44 was "Bridging People and Sound," inviting participants to reflect on how musical practices have been directed, influenced or restricted by the devices, techniques and tools applied in music production. In the same direction, SONOLOGIA 2016 – Out of Phase45 proposed a critical perspective on the use of sound in different contexts, including the study of warfare and the politics of sound; cultural and technological critiques; sound art and associated praxis; urban phonography and acoustic ecology; sonic epistemologies; and new musicology and historically situated reflections. In this perspective, NuSom has expanded the scope of its research interests to include the social and political aspects implicated in the use of sound technologies in the artistic domain.
Popular culture, community and electronic art
Electronic technologies have permeated popular culture in places like Colombia to such an extent that the diversity of possibilities we can hear in several of the largest cities of the country, and even in some less well-known towns, is vast: sometimes far from an academic environment, maybe in a pub, an open-air space or a small old house turned into a cultural center; at other times crossing links between major museums, universities and science parks open to the public, and small collective initiatives forged from the effort of a group of composers, musicians and electronic artists working cooperatively. Colombia is a country where sound artists and composers/performers are experimenting, in many cases breaking down the traditional barriers between musical genres, and crossing boundaries back and forth between so-called popular music and the contemporary or electroacoustic "art" music scene. Hamilton Mestizo – whose projects bring together art/music, science and new technologies – introduces in the following section part of this world, where free experimentation, software hacking, circuit bending and DIY culture usually dominate.
Electrocumbia and beyond
Hamilton Mestizo
Cumbia was born in the seventeenth century in the northern part of South America, on the Caribbean coast; it is a traditional music of Colombia and Panama. In its origin, cumbia derived from the process of colonialism, as the meeting and blending of different traditional rhythms, sounds and instruments from Africa, Spain and native pre-Columbian (indigenous) cultures. The legacy of cumbia has extended in recent years, and it spread to many places in Latin America during the twentieth century. For local musicians, its rhythm represents an opportunity to create experimental sounds and mix them with local music and traditions. In the last few years cumbia has been adapted to electronic resources; many new bands have appeared using synthesizers and computers to create very experimental music. Cumbia also symbolizes a genuine, state-of-the-art multiculturalism in Central and South America; it is open-minded in its experimentation, addressing a direct relationship between the traditional and the contemporary.
Festivals, exhibitions and meetings in the first decade of the twenty-first century
The Festival Internacional de la Imagen (International Image Festival) has been running in Manizales, Colombia, since 1997. It has been an important reference point for new media, electronic and sound arts, maintaining a tradition of dialogue between art, technology and science. This festival has enabled encounters between different global and local sociocultural manifestations, inviting artists, students and academics to engage in an open dialogue around topics related to the electronic arts in Colombia and Latin America.46 Other relevant festivals from the early 2000s included Artrónica47 (2003–2005) and Experimenta Colombia48 (2005–2009) in Bogotá. These festivals worked in a similar direction, introducing artists, projects and pieces of art related to new media, net-art, video-art and robotics. In 2010 Bogotá's Museum of Modern Art (MAMBO) organized the exhibition Sonare: Arte Sonoro, curated by Mauricio Bejarano. This exhibition foregrounded the term "Sound Art" and invited 16 artists (both well established and early career) to contribute.49 It opened up the opportunity for sound art to be part of the art world in Colombia. During the last few years, sound art, as well as other practices such as circuit bending, software sonification, recording and electroacoustic music, has grown in popularity, encouraged by academic studies in universities. Many exhibitions, workshops and meetings have taken place during the last few years, for example Sonemalab50 (Bogotá, 2011–2014), Densidades51 (Bogotá, 2011), Sonósferas52 (Bogotá, 2013), Festival en tiempo real53 (Bogotá, 2009–2015), touch me sound54 (Pasto, 2014–2015) and RADAR55 (Bogotá, 2015).
Sound factory, compilation and community
Sonema56 is a collective from Bogotá founded in 2011; it has a multidisciplinary team whose members are principally interested in experimenting with sound in many dimensions. The first Sonema meeting used the "excuse" of sound to generate a public event based on workshops in an atmosphere of collaboration. They have worked in electronics and circuit bending, mapping, urban intervention, sound laboratories, artist residencies, publications and online compilations. Sonema has articulated a network, principally in Latin America and Spain, that involves internet practices such as radiolibre.co (a network of free radios) and the printed book Bogotá Fonográfica, which uses sounds recorded in Bogotá to generate images by visual artists, who receive the anonymous sounds from the internet. The book includes the image and the sound by means of a QR code. Another related proposal is Matik-Matik,57 a coffee-bar in the Chapinero neighbourhood in Bogotá that has offered an eclectic space for the creation and exhibition of sound art, electroacoustic and experimental music since 2008. It has been a point of confluence for artists, musicians and the public. Thus, Matik-Matik has acted as a platform for the sound scene in Bogotá, creating communities interested in convergence and sharing. More than 650 events have been held, a very eclectic selection including experimental music concerts (electroacoustic, experimental, free jazz, improvisation) and alternative-underground music (electrocumbia, noise, instrumental music, among others).58 In another city, Medellín, there has been a remarkable cultural development in the last few years, in part influenced by LabSURLab59 in 2011. This meeting discussed the idea of the medialab and hacking in social and cultural contexts, principally in the Americas and Europe. It motivated an interchange of different practices around art, design, cultural management, hacking and open source software. In fact, it has generated many collaborative processes between
different cultural agents in the city, including official and unofficial spaces and institutions such as museums, galleries, cultural centers, independent groups, hackspaces, coffee shops and bars. The Museum of Modern Art of Medellín – MAMM60 – has recently inaugurated a new building, which includes a laboratory dedicated to sound experimentation and creation. Also, Platohedro,61 a non-profit organization, has been active since 2014. There is La Jaqueescool, an edupunk academy where people from many different backgrounds experiment with sound, noise and electronics. In this school pupils learn about open source and "free culture," as well as other topics such as hacktivism, neo-alchemy, technoshamanism and bioart. Periodically an artist is invited to stay in Platohedro's house, generating an interchange of knowledge and networking through workshops, classes, performances, talks and exhibitions. It is evident that during the last few years, experimental music (involving cumbia and other mixed rhythms) and the sound and electronic arts have been increasing in popularity in Colombia and the region. Many new scenarios have been created to generate dialogue between art, technology, science and community, driven by artists, musicians, collectives, amateurs and institutions that are looking for alternative spaces of creation, experimentation and production. What these scenarios have in common is networking and cooperation in transdisciplinary communities where ideas and proposals can be shared and improved, generating different kinds of projects. The trend is toward the creation of groups and communities based on open source technologies, DIY and DIWO culture, multiculturalism, diversity, networking and collaboration.
CMMAS and CEIArtE
Ricardo Dal Farra
This brief section can only give pointers to the rich contribution that composers and researchers from Latin America have been making during the past several decades. I would like to add two further organizations and activities that have been making a difference in recent years, one in Mexico and one in Argentina. CMMAS, the Mexican Centre for Music and Sound Art, located in Morelia, is a very active public organization supported by the federal and state governments. Directed by Dr. Rodrigo Sigal and with a highly active staff of over 20 professionals and academics, it offers a wide variety of possibilities to musicians and composers, both local and international (www.cmmas.org). Each year the Visiones Sonoras festival gathers together Mexican and international musicians working with new technologies. Up to October 2015, 15 editions of Visiones Sonoras had been held, and over 200 online videos, 24 CDs, seven books and 15 issues of the bilingual journal Ideas Sónicas/Sonic Ideas had been published. Ideas Sónicas/Sonic Ideas has a different guest editor and theme for each issue, covering a wide range of topics such as music from a philosophical perspective; the teaching of electroacoustic composition and sound art creation; compositional theories; the research-creation contributions of composers from the Caribbean on music produced with new technologies; and aspects linking the sonic arts to their cultural and social context (www.sonicideas.org). By that same date, CMMAS had produced about 500 concerts, with over 63,000 people attending. It has hosted over 230 artistic residencies, supported the creation of more than 1,100 works and taught hundreds of hours of courses and workshops each year. The program of activities of CMMAS is impressive and has a significant impact at the Mexican level, as well as in the Latin American region and well beyond, having exchange agreements with Japan, the UK and Canada, amongst many others.
The Electronic Arts Experimentation and Research Center (CEIArtE) of the National University of Tres de Febrero in Buenos Aires, Argentina, has been developing multiple activities focusing on electroacoustic music and sound art, as well as on the electronic arts in general (http://ceiarteuntref.edu.ar). Directed by Dr. Ricardo Dal Farra and working together with a group of composers, new media artists and researchers, CEIArtE has been producing electroacoustic and visual music concerts, electronic art exhibitions and international sound art competitions in collaboration with organizations such as the Red Cross Climate Centre (http://ceiarteuntref.edu.ar/art_climate_2014). Part of the research done at CEIArtE has focused on electroacoustic music, on media arts education in Latin America and also on sound art, such as the project Argentina Suena coordinated by composer Raúl Minsburg. In addition, the Center has established an open database of technology resources developed for the community of electronic artists and electroacoustic composers, and recently an international collaboration to develop a network to empower artistic research-creation activities and works that could help to reduce the consequences of climate change. The website of the Center offers a significant number of electronic publications in PDF, audio and audiovisual formats and receives several thousand consultations from around the world each month. CEIArtE has also been the host organization for one of the largest electroacoustic music events held in Argentina, the Electroacoustic Music Studies Network – EMS09 International Conference (http://ceiarteuntref.edu.ar/ems09), and has organized events such as the Antevasin colloquium, with engineers and technology experts presenting and debating issues in electronic art (http://ceiarteuntref.edu.ar/antevasin), as well as the first international Balance-Unbalance (BunB) conference, devoted to facing the challenges of the environmental crisis with a multidisciplinary approach using the electronic arts as a catalyst (http://ceiarteuntref.edu.ar/eq-deseq-en). In addition, substantial initiatives in "visual music" include a number of activities around the project Understanding Visual Music (UVM), such as the second edition of the UVM conference in 2013 (between Montreal 2011 and Brasilia 2015) and, recently, a Fulldome Workshop focusing on the research-creation of visual music at the beautiful planetarium of Buenos Aires, both initiatives arousing substantial interest not only from experts in the field but also from the large public audiences coming to the shows (http://ceiarteuntref.edu.ar/tallerfulldomeuvm2015-2016). The local as well as the international impact of both CMMAS and CEIArtE has gone beyond the original expectations. CMMAS today has a continuous flow of composers and performers producing and presenting their work in Morelia, and has become one of the main hubs for those working with new music and technology in Latin America and a place recognized worldwide. CEIArtE, with its focus on the electronic arts, also has a strong impact on the local community of sound artists, composers and researchers in Argentina and has grown from its beginnings as a reference place for those creating art with new technologies in Buenos Aires to being today a well-known Centre whose links and projects have reached all continents.
Last words . . .
Ricardo Dal Farra
This chapter could be extended to include many more projects and activities by researchers and creators from Latin America. No doubt most countries of the region today have experts in this field, whether living in their home countries or outside Latin America but still feeling themselves part of it, usually maintaining a very active channel of exchange. There are countless
cases that could be mentioned, but composers and researchers born in Mexico, Argentina, Brazil, Colombia, Cuba, Peru, Chile, Costa Rica, the Dominican Republic, Ecuador, El Salvador, Guatemala, Paraguay, Puerto Rico, Uruguay and Venezuela are building bridges to develop international projects. It is not possible to close this collection of contributions without making reference to a research project that has made more visible (and accessible!) the work of hundreds of composers, some of whom started about 60 years ago. The Latin American Electroacoustic Music Collection, a research project directed by Ricardo Dal Farra and hosted by the Daniel Langlois Foundation for Art, Science and Technology of Montreal, Canada, has become a reference on the work of almost 400 creators from the region, including over 1,700 compositions whose recordings are being preserved in digital format, a major database with over 200,000 words, and some scores, photos and excerpts of interviews with a number of recognized pioneers of the electroacoustic music field. It is worth mentioning that (to date) 558 compositions – from the total number of works being preserved – are fully available for listening online (www.fondationlanglois.org/html/e/page.php?NumPage=556). Latin America has always been a major contributor to the electroacoustic music world, with so many compositions, concerts, radio series, recordings, publications and technological innovations produced over the decades. Today the activity is impressive, from sound art to electroacoustic music, from visual music to interactive sound installations and more. Latin America has not only creators, researchers and innovators, infrastructure and equipment, but also large audiences attending most activities and projects, something that we know well is not always easy to achieve in this field. We have been here for a long time. Welcome to the new (old) world!
Notes
1 Bolaños also experimented in Argentina during his early years as a composer, not only with tape music but also creating multimedia works as well as using computers to assist the compositional process.
2 Unless otherwise attributed, introductory sections in this chapter were written by Ricardo Dal Farra.
3 Sensor fusion comprises prediction, association and estimation of variable states, having as goal the reduction of the overall error on estimating these states (Hall D.L., Llinas, J. Handbook of Multisensor Data Fusion. New York: CRC; 2001).
4 'Pandivá' by Filipe Calegario (http://batebit.cc/instrumento/pandiva/) (undated) last accessed 22 February 2018.
5 Waylla Kepa is an Andean archaeomusicology project for interdisciplinary scientific research and art experimentation that catalogs, documents, records and replicates archaeological sound devices. www.escuelafolklore.edu.pe/investigacion/project_wayllakepa.php – see also Mansilla (2004).
6 Links to works from 2001 and 2014: https://soundcloud.com/jaiolix/silbadores-41 https://soundcloud.com/jaiolix/silbadores-1-2001-je-oliver-la-rosa
7 Links to instruments: www.jaimeoliver.pe/instrumentos/silent-drum www.jaimeoliver.pe/instrumentos/mano
8 Guthman Musical Instrument Competition (USA), File Prix Lux (Brazil), Giga-Hertz (Germany), IRCAM Musical Research Residency Program (France).
9 'How to build a Silent Drum' by Jaime E. Oliver (www.jaimeoliver.pe/archives/518) (undated) last accessed 22 February 2018.
10 Links to a few pieces with each instrument: www.jaimeoliver.pe/flexura www.jaimeoliver.pe/workobra/9g www.jaimeoliver.pe/workobra/silent-construction-series/sc1 www.jaimeoliver.pe/workobra/3-environments
11 Van der Woude ported the algorithm to OpenFrameworks (oF), a C++ framework for creative coding, and sent data to Pd via OSC to control sound synthesis processes.
12 Ibid.
13 The name combines the name of the ilú drum with the idea of being illuminated and its associated religious connotations, also playing with the light used for video tracking.
14 In software engineering, a project fork happens when developers take a copy of source code from one software package and start independent development of it, creating a distinct and separate piece of software. (https://en.wikipedia.org/wiki/Fork_(software_development))
15 Recreational Carnival Association "Afoxé Alafin Oyó".
16 Afoxé is a secular manifestation of candomblé (ref: https://en.wikipedia.org/wiki/Afox%C3%AA).
17 The NICS research team is currently formed of 23 scholars ranging from senior to younger, postdoctoral researchers, and of a group of master and PhD students.
18 're(PER)curso trailer' (https://youtu.be/L5Gkwz9jC2U) and 're(per)curso' (https://youtu.be/qln4f1kUpZU) last accessed 22 February 2018.
19 'World Premiere of Brain Orchestra. 2009' (https://youtu.be/PW0x-lV8XgE) last accessed 22 February 2018.
20 'ContinuaMENTE' (https://youtu.be/qeb14Yl69ww) last accessed 22 February 2018.
21 'Aural – Robotic Variations' (https://youtu.be/ZT64Ql11D24 and https://youtu.be/NaeKtvBnTmU) last accessed 22 February 2018.
22 Aural (2009) Art Gallery of UNICAMP, March 2009: www.iar.unicamp.br/galeria/aural/GaleriaH2a.pdf – Aural2 (2011) INSTANTE Art & Technology Exposition, SESC Campinas, September to November 2011.
23 See Avispa. http://cic.javerianacali.edu.co/wiki/doku.php?id=grupos:avispa:avispa
24 See Bill Verplank homepage. www.billverplank.com/professional.html
25 See Ivrea. https://interactionivrea.org/en/index.asp
26 Among his accomplishments, there are graduate level courses in art, gesture and expression offered in several universities in Colombia [Reyes 2009].
27 Straw-Berri is available at www.fondation-langlois.org/html/e/oeu.php?NumEnregOeu=o00001514 – last accessed 22 February 2018.
28 www.fondation-langlois.org/html/e/oeu.php?NumEnregOeu=o00002309 Reyes J., Point Reyes, Modeled Computer Music, YoYo Music, Bogota, Colombia, 2009. ISBN: 978–958–98717–98717–98715
29 PPP is available at www.fondation-langlois.org/html/e/oeu.php?NumEnregOeu=o00002307 – last accessed 22 February 2018.
30 'Espacios Abiertos (2014)' (www.youtube.com/watch?v=ORkXVztMiMY) and 'Tele-Espacios Abiertos et Open Spaces (2014)' (https://ccrma.stanford.edu/~juanig/descrips/openSpaces.html) – last accessed 22 February 2018.
31 'Os Grilos (2014)' (https://ccrma.stanford.edu/~juanig/descrips/osgrilos.html) – last accessed 22 February 2018.
32 See RedClara. www.redclara.net/index.php/en/we-are/about-redclara
33 Manizales/San Diego: https://vimeo.com/159571260 – Bogotá/UC Irvine: www.youtube.com/watch?v=K9d6VVFLsUA – Cali/Stanford/Michigan: www.youtube.com/watch?v=XUCM2nMA23w – Manizales/Sao Paulo/Stanford: www.youtube.com/watch?v=ORkXVztMiMY&index=5&list=PLMJca0djxjP0zDcgIjvKD34eUDDz2c5i_
34 More information about NuSom can be obtained at: http://www2.eca.usp.br/nusom/
35 AcMus was conducted by Fernando Iazzetta and Fabio Kon at the University of São Paulo from 2002 to 2007. One of its main outcomes was software for room acoustics measurement and analysis. See: http://gsd.ime.usp.br/acmus/
36 Mobile was carried out from 2009 to 2013 at the University of São Paulo under the direction of Fernando Iazzetta and included four main areas: Sonology; Development of interactive systems; Artistic production with interactive systems; and Music acoustics, psychoacoustics and auralization. See: http://www2.eca.usp.br/mobile/
37 More information about the ¿Música? series can be obtained at: www2.eca.usp.br/nusom/arte
38 Besides concerts, there are a number of outcomes derived from the NetConcerts research, including software design, PhD dissertations and music compositions. Some information about it can be reached at: www2.eca.usp.br/nusom/netconcerts
39 'CCRMA Teleconcert' (www.youtube.com/playlist?list=PLMJca0djxjP0zDcgIjvKD34eUDDz2c5i_) – last accessed 22 February 2018.
40 See note 33 for additional links.
41 The Medusa project was developed by Flavio Schiavoni. See: Schiavoni and Queiroz 2012, Schiavoni, Queiroz and Wanderley 2013.
42 Personne Ensemble does not have a fixed line-up (see Iazzetta et al. 2015). Some of the recent performances were given by Fernando Iazzetta, Rodolfo Caesar, José Augusto Mannis, Lílian Campesato and Alexandre Fenerich. One example of the audiovisual performances can be seen at: www.youtube.com/watch?v=KUveD-vKnO4
43 The recreation of the Symphonie pour un homme seul (1950), by Pierre Schaeffer and Pierre Henry, was carried out by a group of nine musicians: Alexandre Fenerich, Caique Bellaver, Doriana Mendes, Fernando Iazzetta, Janete El Haouli, José Augusto Mannis, Lilian Campesato, Michelle Agnes and Rodolfo Caesar. José Augusto Mannis was in charge of transcribing the acousmatic piece into a score, which was later reworked by the group. The performance took place on 15 October 2010 at Parque Lage, Rio de Janeiro: www.youtube.com/watch?v=OQOPHVTtio0
44 See the conference website: http://cmmr2016.ime.usp.br/ – last accessed 22 February 2018.
45 See the conference website: www2.eca.usp.br/sonologia – last accessed 22 February 2018.
46 See the festival website: http://festivaldelaimagen – last accessed 22 February 2018. This festival hosted the 5th Balance-Unbalance (BunB) international conference in 2016 (http://balance-unbalance2016.org/) and, for the first time in Latin America, the International Symposium on Electronic Art (ISEA) in 2017 (http://isea2017.isea-international.org/).
47 See the festival archive: http://recursos.bibliotecanacional.gov.co/content/artronica-iii-muestra-internacional-de-artes-electronicas-2005 – last accessed 22 February 2018.
48 See the festival website: http://experimentacolombia-lp.blogspot.com – last accessed 22 February 2018.
49 Mauricio Bejarano, Juan Reyes, Beatriz Eugenia Díaz, Gustavo Zalamea, Jaydy Díaz, Sebastián Bejarano, Andrés Ñañez, Hamilton Mestizo, Leonel Vásquez, Ricardo Arias, Fabián Cano and Pilar Torres.
50 Organisation website: http://sonema.org – last accessed 22 February 2018.
51 Organisation website: www.fotografiacolombiana.com/exposicion-de-arte-sonoro-densidades-inauguracion-30-de-agosto/ – last accessed 22 February 2018.
52 Organisation website: www.espacioodeon.com/programacion/artesplasticas/exposicion-de-arte-sonoro-en-odeon – last accessed 22 February 2018.
53 Festival website: https://tiemporealyladob.wordpress.com – last accessed 22 February 2018.
54 Event website: http://ccomunicaciones.udenar.edu.co/?p=15063
55 Festival website: www.vice.com/es_co/read/festival-radar-una-nueva-experiencia-del-arte-sonoro
56 Organisation website: http://sonema.org – last accessed 22 February 2018.
57 Project website: www.matik-matik.com – last accessed 22 February 2018.
58 Matik-Matik's shop distributes local music, books, magazines and flyers from other places promoting events and concerts. They also produce music compilations and have an active schedule of concerts, workshops, meetings, installations, performances and conferences.
59 Archive report: https://mastersofmedia.hum.uva.nl/blog/2011/04/09/labsurlab-report-medellin-07-04/ – last accessed 22 February 2018.
60 Museum website: www.elmamm.org/ – last accessed 22 February 2018.
61 Organisation website: www.platohedro.org/ – last accessed 22 February 2018.
References
Anderson, C. 2012. Makers: The New Industrial Revolution. London: Random House Business.
Barragán, H. et al. 2012. Wiring. http://wiring.org.co/. Retrieved April 2012.
Bernardet, U. et al. 2010. 'The Experience Induction Machine: A New Paradigm for Mixed-Reality Interaction Design and Psychological Experimentation'. In The Engineering of Mixed Reality Systems. London: Springer. pp. 357–379.
Caceres, J. and Chafe, C. 2009. 'Jacktrip: Under the Hood of an Engine for Network Audio'. Proceedings of the 2009 International Computer Music Conference, ICMA, Montreal.
Calegario, F., Wanderley, M. M., Huot, S., Cabral, G. and Ramalho, G. 2017. 'Method and Toolkit for Generating Ideas and Prototyping Digital Musical Instruments'. IEEE MultiMedia, 24(1), pp. 63–71.
Chafe, C. 2004. 'Case Studies of Physical Models in Music Composition'. Proceedings of the 2004 International Congress on Acoustics, Kyoto, Acoustical Society of Japan.
Chafe, C. et al. 2014. World Teleconcert. https://ccrma.stanford.edu/events/ccrma-teleconcert. Retrieved February 2015.
Dal Farra, R. 2004. Latin American Electroacoustic Music Collection. Canada: La Fondation Daniel Langlois pour l'art, la science et la technologie. www.fondation-langlois.org/html/e/page.php?NumPage=556; Historical Background: www.fondation-langlois.org/pdf/e/Dal_Farra_EN.pdf
Dal Farra, R. 2006. A journey of sound through the electroacoustic wires: Art and new technologies in Latin America. Montreal: Université du Québec à Montréal. https://archipel.uqam.ca/2062/1/D1367.pdf
Davies, H. 1968. Répertoire international des musiques électroacoustiques/International Electronic Music Catalog. Cambridge: MIT Press.
Dougherty, D. 2012. 'The Maker Movement'. Innovations, 7(3), pp. 11–14.
Gómez, D., Vega, R. and Arce-Lopera, C. 2013. 'Design of a Customizable Timbre Space Synthesizer'. Proceedings of the 10th International Symposium on Computer Music Multidisciplinary Research, Marseille, France.
Iazzetta, F., Caesar, R., Mannis, J. A., Campesato, L. and Fenerich, A. S. 2015. 'Game Encounters: The Experience of Personne'. ARJ (Art Research Journal), 2, pp. 1–35. Available at: https://periodicos.ufrn.br/artresearchjournal/article/view/7029/5683
Jordà, S., Kaltenbrunner, M., Geiger, G. and Bencina, R. 2005. 'The Reactable'. Proceedings of the 2005 International Computer Music Conference, Barcelona.
Kartomi, M. 1990. On Concepts and Classification of Musical Instruments. Chicago: University of Chicago Press.
Kushner, D. 2011. 'The Making of Arduino: How Five Friends Engineered a Small Circuit Board That's Taking the DIY World by Storm'. IEEE Spectrum. http://spectrum.ieee.org/geek-life/hands-on/the-making-of-arduino
Le Groux, S., Manzolli, J. and Verschure, P. 2010. 'Disembodied and Collaborative Musical Interaction in the Multimodal Brain Orchestra'. Proceedings of NIME 2010, Sydney, Australia.
Mansilla, C. 2004. 'Proyecto Waylla Kepa, investigación científica y experimentación artística'. Cuadernos Arguedianos N° 5, Revista de la Escuela Nacional Superior de Folklore José María Arguedas.
Manzolli, J. 2008. 'ContinuaMENTE: Integrating Percussion, Audiovisual and Improvisation'. Proceedings of the International Computer Music Conference 2008, Belfast.
Manzolli, J. and Verschure, P. 2005. 'Roboser: A Real-World Composition System'. Computer Music Journal, 29, pp. 55–74.
Mathews, M. V. 1991. 'The Radio Baton and the Conductor Program or: Pitch, the Most Important and Least Expressive Part of Music'. Computer Music Journal, 15(4), pp. 37–46.
Mathews, M. and Pierce, J. 1989. Current Directions in Computer Music Research. Cambridge: MIT Press.
Medeiros, C. B. 2015. Advanced Instrumentation and Sensor Fusion Methods in Input Devices for Musical Expression. Doctoral dissertation, McGill University. Retrieved from eScholarship@McGill.
Medeiros, C. B. and Wanderley, M. M. 2011. 'Evaluation of Sensor Technologies for the Rulers, A Kalimba-Like Digital Musical Instrument'. Proceedings of the Sound and Music Computing Conference 2011.
———. 2014a. 'Multiple-Model Linear Kalman Filter Framework for Unpredictable Signals'. IEEE Sensors Journal, 14(4), pp. 979–991.
———. 2014b. 'A Comprehensive Review of Sensors and Instrumentation Methods in Devices for Musical Expression'. Sensors, 14, pp. 13556–13591.
Miranda, E. R. and Wanderley, M. M. 2006. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Madison: A-R Editions.
Moroni, A. and Manzolli, J. 2015. 'Robotics, Evolution and Interactivity in Sonic Art Installations'. In S. Washington (Ed.), New Developments in Evolutionary Computation Research. New York: Nova Science Publishers. pp. 159–182.
Moroni, A., Manzolli, J. and Shellard, M. 2014. 'AURAL: Robots, Evolution and Algorithmic Composition'. International Conference on Social Robotics – Workshop on Robots and Art, Sydney. https://epress.lib.uts.edu.au/ocs/index.php/WICSR/WICSR2014/paper/viewFile/509/102
Mura, A., Rezazadeh, B., Duff, A. et al. 2008. 'Re(PER)curso: An Interactive Mixed Reality Chronicle'. SIGGRAPH, ACM, Los Angeles, p. 1.
Oliver, J. 2010. 'The MANO Controller: A Video Based Hand Tracking System'. Proceedings of the International Computer Music Conference, New York, 2010.
Oliver, J. and Jenkins, M. 2008. 'The Silent Drum Controller: A New Percussive Gestural Interface'. Proceedings of the 2008 International Computer Music Conference, ICMA, Belfast.
Reyes, J. 2008. Elements and Introduction to Physical Interaction: Tutorials for Artists and Musicians. www.maginvent.org/articles/pidht/pidtoot/pidtoot.html. Retrieved June 2015.
———. 2009. Arte y gesto, manipulaciones: Conexiones y reconocimiento del gesto. www.maginvent.org/articles/programaayg/index.html. Retrieved June 2015.
Rowe, R. 1993. Interactive Music Systems – Machine Listening and Composing. Cambridge: MIT Press.
Schiavoni, F. and Queiroz, M. 2012. 'Network Distribution in Music Applications With Medusa'. Linux Audio Conference, 2012, Stanford. Proceedings of the Linux Audio Conference, Vol. 1, pp. 9–14.
Schiavoni, F., Queiroz, M. and Wanderley, M. 2013. 'Network Music With Medusa: A Comparison of Tempo Alignment in Existing MIDI APIs'. SMAC/SMC.
O'Sullivan, D. and Igoe, T. 2004. Physical Computing: Sensing and Controlling the Physical World With Computers. Boston: Thomson.
Truchet, C. and Assayag, G. (eds.). 2011. Constraint Programming in Music. London, Hoboken: Wiley-ISTE.
Valencia, M. and Reyes, J. 2015. Tele espacios activos 2. www.festivaldelaimagen.com/es/eventos/paisajessonoros/2436-tele-espacio-activo. Retrieved April 2015.
Valencia, M., Reyes, J. et al. 2015. 'Creacion visual y sonora en red (visor)'. Proceedings of Seminario Internacional, Festival de la Imagen 2015. Festival Internacional de la Imagen de Manizales.
Verplank, B., Mathews, M. and Shaw, R. 2000. 'Scanned Synthesis'. Proceedings of the 2000 International Computer Music Conference, ICMA, Berlin.
Verschure, P. and Manzolli, J. 2013. 'Computational Modelling of Mind and Music'. In M. Arbib (Ed.), Language, Music, and the Brain: A Mysterious Relationship. Frankfurt a.M.: Ernst Strüngmann Forum.
Winkler, T. 2001. Composing Interactive Music: Techniques and Ideas Using Max. Cambridge: MIT Press.
2
ELECTRONIC MUSIC IN EAST ASIA
Marc Battier and Lin-Ni Liao
Introduction
While musical production and research from Asia have grown significantly over the past few years, they remain insufficiently discussed and disseminated. There have indeed been more exchanges between the Western world and East Asia, but it is also true that, historically, electronic music has spread unevenly across Asia as a whole. The region's complicated history, exposed to political turmoil and, in some cases, to a lack of opportunities to communicate with the rest of the world, explains how difficult it has been to become acquainted with and understand its musical thought. This is why, in this chapter, we will present some brief historical background whenever we feel it may be useful, but we will mostly present recent developments and the current trends of electronic music in East Asia.1 Rather than focusing on a few specific examples, we have chosen to present an overall landscape in which individual experiments are conducted.
There is also something specific about contemporary music in East Asia which is rooted in the contextual presence of traditional culture, such as poetry, music, religion and philosophy. A number of Chinese composers, from the mainland as well as from Taiwan or Hong Kong, are influenced by Taoism or Buddhism, and this gives their music some particularities worth noting.2
Electronic music started in the early 1950s in Japan, while it developed somewhat later in Taiwan and South Korea. In China it started several years after the end of the Cultural Revolution, during the 1980s. There, after an initial period of learning abroad, composers returned to develop their own approach, which is often based on traditional Chinese culture and religion. Therefore, a great deal of research and experimentation has followed the initial introduction of electronic music in China, as will be discussed later.
Japan, and, to a lesser extent, Taiwan and South Korea, have seen several lines of development which have explored areas still unknown or poorly represented in the West. During the 1950s, the Experimental Workshop (Jikken Kôbô), with composers Toru Takemitsu and Joji Yuasa, mixed electroacoustic music and instrumental music with ballet,3 theatre and performances (Galliano 2002).4 Later, electronic scores were often included in commercial films, which considerably enriched the scope of what electronic music was able to offer. Another aspect of specific research has come out of Japan: the control of spatialization and its frequent use in electronic music, something which can be linked to cultural aspects of the country.5
A note should be made of the development of electronic music research in South-East Asia, although this chapter will not attempt to discuss it in detail. For instance, a group was formed in 2014 in Malaysia around the question of the notation of electronic music with instruments (known in French as musique mixte). Electronic music is, in fact, flourishing in Malaysia. Works by composers like Chong Kee Yong6 are widely performed in Asia and elsewhere. As vice-president of the Society of Malaysian Contemporary Composers since 2011, Chong is an advocate of the vitality of the Malaysian contemporary music scene and a practitioner of live electronic music with Western and Chinese instruments. Also in Malaysia, Noris Mohd Norowi, of Universiti Putra Malaysia, is currently conducting research in computational approaches to music, partially carried out at the University of Plymouth in the UK. She has published a number of papers which demonstrate advanced investigation in the areas of music information retrieval, automatic musical genre classification and human-computer interaction.7 Singapore, Indonesia and Malaysia have thriving electronic music scenes in which it is fascinating to observe the blend of latter-day technology and specific cultural traits (such as shadow theatre) (Wahid 2007, 2008). Vietnam also has a lively electronic music scene, but not at an academic level, as this field has yet to be taught in the music conservatories. As of this writing, the National Music Academy of Hanoi, which is the largest music school in the country, is, however, moving in the direction of including music technology and electronic music in the official curriculum.
As a final note, there is still a certain divide between Western countries and Asian countries and regions in the approach to electronic music making. This chapter will try to explore this question as it can be observed in musical production, in musical research and in the diffusion in festivals and symposia across Asia, but we will limit its scope to East Asia, namely the Chinese world, Japan and South Korea.
Early steps in the development of electronic music in the Chinese world
To survey electronic music research in China, it is necessary to consider separately the cases of four entities: mainland China, Hong Kong, Macau and Taiwan. In addition, there are a number of expatriate Chinese who, while they live and work abroad, often maintain strong links to their native land and, because of their understanding of Chinese culture and language, provide bridges with the West.8 In the historical development of electronic music in the Chinese world, quite a few young composers have sought to improve their training in other countries (Liu 2010). We will discuss this point briefly, but what can be observed today is that they either come back to become teachers of younger generations9 or frequently visit their homeland and share their experience with young Chinese composers.10 Furthermore, the rhythm of development of electronic music and the spread of the need for research have been notably different in each of the four entities.
Composers from mainland China were only able to approach electronic music from the mid-1980s; not only had there been the caesura of the Cultural Revolution, which started in 1966, but prior to this date, Western contemporary music was only seen in a distorted way, first through the Russian school and then, when China cut off its relations with the Soviet Union around 1960, from older Western theory, and no electronic music could be heard. This situation was of course specific to mainland China. Hong Kong and Macau were only brought under the jurisdiction of China in 1997 and 1999, respectively. Hence, composers from these territories were exposed to new music, could freely travel abroad and mixed with Western composers who settled there. Finally, the case of Taiwan is yet different. The best extant document of the development of electronic music in Taiwan can be
found in a study by Liao Lin-Ni (Liao 2015), but its particularities will be briefly discussed in this chapter. In observing the growing phenomenon of electroacoustic music in East Asia, through its diversity and in light of recent developments, it is useful to look at the academic system of music education, including its historical development, which has differed for over a century between Taiwan and China. At the beginning of the twentieth century, China sent students to Europe, especially Germany, such as the former education minister Tsai Yuan-Pei and the composer and first Chinese PhD in music, Hsiao You-Mei. The latter founded the Shanghai Conservatory of Music in 1927, which naturally adopted the European conservatory system. In Taiwan, the academic system created by the Japanese was limited to music education at the National Taiwan Normal University, with the sole aim of training future teachers in the art of teaching propaganda songs. For advanced musical studies, the rare candidates had to go to Japan. After the departure of the Japanese in 1945, under the influence and protection of the United States military, the American university system adapted by the nationalist Guomintang government prevailed, in which teachers were former students who had left Japan before the Second World War.
Electronic music in mainland China
Throughout the 1980s, young Chinese composers sought abroad the knowledge they had been deprived of during their early formative years because of the Cultural Revolution, which started in 1966 and ended in 1976. During those ten years, Western music could not be practiced or studied. Several of the young composers who were admitted to the conservatories after 1976 found ways to further their studies abroad. What they brought back reflected the various styles, practices and trends which were largely the products of the Western schools they had studied at, although they quickly strove to develop personal styles nourished by their Chinese culture (Kouwenhoven 1992). Here is a photo11 of the first class of composition that had been selected and admitted in 1977 (Figure 2.1). Because the conservatory had to be renovated after the strictures imposed by the Cultural Revolution, classes only began in 1978. Among them, we can see some of the pioneers of new Chinese music, including electronic music: Tan Dun (1957), Qu Xiaosong (1952), Chen Qigang (1951), Zhou Long (1953), Guo Wenjing (1956), Chen Yi (1953), Liu Sola (1955), Ma Jianping (1957), Su Cong (1957), Zhang Xiaofu (1954) and Chen Yuanlin (1957), among others.
It is generally accepted that the milestone which marks the beginning of electronic music experiments, composition and performance in China is a concert held on September 24, 1984, at the Central Conservatory. It was put together by Chen Yuanlin, Zhang Xiaofu, Tan Dun, Chen Yi, Zhou Long13 and Zhu Shirui (1954). According to Zhang Xiaofu's testimony, none of them knew what a concert of electronic music should be like, so they followed their imagination. Furthermore, it was very difficult to find appropriate equipment, and 1984 was still a bit early for MIDI equipment (the MIDI specification was finalised in 1983).
Soon after, in 1986, Chen Yuanlin (1957) established a studio at the Central Conservatory, which he called the 'Computer and Electronic Music Studio'. This is very early considering the state of knowledge accessible and the equipment available at the time. After obtaining his master's degree, he continued his studies in the United States and obtained a PhD from the State University of New York at Stony Brook. There he developed a style mixing Chinese musical material, Chinese musical instruments such as percussion and pipa, a jazz-influenced idiom and electronic sounds. It is probably safe to say that his music is a product of his Chinese origin and American musical influences.
Figure 2.1 A photo of the now famous 'Class of 1978', the students admitted to the first composition class post-Cultural Revolution at the Beijing Conservatory of Music12
Even if Chen's studio did not survive long, it has a significant position in the history of electronic music in China, as it marks a milestone in the development of new music. However, identifying a consistent timeline for the development of electroacoustic music in China is difficult. As Beijing's Zhang Xiaofu put it, "It is hard to say who was the first person to compose electronic music in China".14 One of the reasons is that there has been quite a bit of confusion as to the very nature of the field, which can be defined in musical terms but also according to the various types of technology used. Furthermore, there has always been, since the beginning of this music, a quest for technology usable to create electroacoustic materials. We will return to this.
A significant step forward was the establishment of the Centre for Electronic Music in China (CEMC) at the Central Conservatory. This step was taken by Zhang Xiaofu, one of the former students of the Central Conservatory. In 1988, he composed one of the first mixed music works in China, Chant intérieur (1988), for Chinese flute and electronic tape. As is often the case with his music, Zhang later revised the piece and made a new version in 2001.15 The following year, in 1989, Zhang Xiaofu received a grant from the Chinese Ministry of Culture to further his studies. This grant was for a year to study new music, but it was not meant to include electronic music, a field largely unknown to the Chinese authorities at the time. After a year at the Ecole normale de musique in Paris, he went on to study at the Gennevilliers Conservatoire, where he followed the electroacoustic music teaching of Jean Schwarz, a member of the GRM. By then, he was on his own financially, receiving no support from China. Upon his return, he could not find any studio in which to work on electronic music, and that is when he decided to create one. In 1993, Zhang established the Centre for Electronic Music of China, or CEMC, at the Beijing Central Conservatory. The CEMC thrived and gave birth the following year to the first Beijing electroacoustic music festival, which came to be known as MUSICACOUSTICA-BEIJING, a name it took in 2004 when it became an annual event. In 1997, the CEMC was able to deliver a master's degree, which was extended to a bachelor's in 2001 and later a doctoral program.
The conservatories of Beijing and Shanghai were the front runners in the field of electronic music in China. For example, in 1984, the Shanghai Conservatory of Music introduced the MIDI system for its electroacoustic music curriculum. In 1986, Chen Yuanlin created the first studio of digital composition synthesisers at the Beijing Central Conservatory, two years after the first 'avant-garde' concert of electroacoustic music with synthesisers took place there. He was also the first student at the Central Conservatory to compose an electronic work with MIDI material for his exams.
At the Shanghai Conservatory of Music, noteworthy figures include its former president Xu Shuya (1961) and Xu Yi (1963), who are prominent figures of the New Wave movement (Zhou 1993) (Figure 2.2).16 Along with An Chengbi (1967), the two were trained at the Conservatoire National Supérieur de Musique et de Danse in Paris and have worked closely with centres of musical research, such as IRCAM and Paris's GRM. Also of note is Wen Deqing (1958), an alumnus of the Conservatoire National Supérieur de Musique et de Danse in Lyon, who oversaw the opening of the Chinese branch of Paris's Centre for Contemporary Music Documentation and created the New Music Week Festival at the Shanghai Conservatory. An advanced program in electronic music was established in 2003 under the artistic direction of An Chengbi. The Electro-Acoustic Music Centre of Shanghai Conservatory (EAMCSCM), connected to the composition department, offers six studios of various sizes to accommodate a fairly large number of students.
In Shanghai, to avoid a generational gap between the 'Masters', those over 50 years of age, and the latest generation of students, a project was set up to bring back and integrate young composers trained abroad so that they could begin their teaching careers and share their rich international experience, such as Xu Yi (2004–2010), Wang Ying (2013–2016) and Shen Ye (professor of orchestration and composition from 2013 to the present). The leadership of the Shanghai Conservatory of Music gave these young teachers considerable latitude, allowing them to go abroad in order to continue their training and not be cut off from the influential musical community of Europe. It is important to emphasise the necessity of this mutually beneficial system, which permits ongoing enrichment and continuity of ties with the West. The resulting dynamic of cultural and musical exchange relies on networks established between the composers' places of birth and
Figure 2.2 This photograph of the 'New Wave' composers (from the People's Music Journal 1986) includes Xu Yi (third from left) and Xu Shuya (right)
Reproduced by permission of the photographer, Jiang Li
the foreign cities where they studied. Similarly, the economic and cultural dynamics which have allowed the opening of many teaching posts in this field in China over the last 10 years must also be taken into consideration, as we discuss later.
The Wuhan Conservatory was a pioneer in establishing an electronic music studio. This was in 1987, and soon after, the studio, founded by Liu Jian (1954–2012) and Wu Yuebei (1957), opened a bachelor's program in electronic music. In addition, the Huangzhong–Journal of Wuhan Conservatory of Music17 published a substantial number of articles on this music. However, Wu Yuebei later moved to Beijing, where he now teaches composition at the Central Conservatory, while Liu Jian remained in Wuhan with a growing interest in interactive music. There, he composed what is considered one of the earliest electronic music works from China, Ornamentation, realised in Wuhan in 1985.18
However, in this period, it was felt that the political situation was not fostering these endeavours and that there was also a lack of competent teachers. Several young composers went to Europe or North America to further their training, such as Zhu Shirui (1954), who travelled to Stuttgart in Germany; Chen Yuanlin, who moved to the United States; or Zhang Xiaofu (1954): Zhang went to France to study at the Ecole Normale de Musique de Paris, then at the Gennevilliers Conservatoire with Jean Schwarz, as well as at the INA-GRM in Paris. French influence was felt in other endeavours: a class taught by Xu Shuya, a composer who resided in France for many years and studied at the Conservatoire national supérieur de musique, where he took classes on electroacoustic music – a class founded by Pierre Schaeffer. He gave this class at the China Conservatory in Beijing before going to the Shanghai Conservatory as president, a position he kept for several years. It was at the Shanghai Conservatory that a studio was established in 2002. Founded by Chinese-Korean composer An Chengbi, the Electro-Acoustic Music Centre Shanghai Conservatory of Music (EAMCSCM) still thrives. However, one can note a substantial difference between the teaching of Zhang Xiaofu and that of An Chengbi. Although both were trained in France, they received very different educations and followed two different schools. As noted, Zhang was associated with the GRM's teaching and adhered to the precepts of Pierre Schaeffer, of tape music and acousmatic music; An Chengbi went to IRCAM.19 This attitude towards real-time digital processors is the mark of An Chengbi's teaching and can be observed in his students' works and his own. We will discuss the question of live electronic music in the section 'Styles and Trends'.
There have been some very partial accounts of the development of computer music in China. The very choice of 'computer music', as opposed to electronic or electroacoustic music, which are more general terms, is significant in denoting an American influence; Chinese studios described as computer music laboratories in the late 1980s and during the 1990s are often devoted to technological research of various kinds and not to music composition. Computer music is really a subset of electroacoustic music, or, rather, a technical environment which is itself complex and is composed of a multiplicity of different technologies.
Furthermore, or as a result, computer music is a research field, because new tools constantly have to be developed: sound synthesis algorithms, digital audio processing techniques, gestural interfaces, real-time interactive software, motion capture, video analysis, common musical notation software, timbre notation, computer-aided composition software and more. This explains why so many communications presented at the International Computer Music Conferences (ICMC) are in fact authored by researchers and engineers, and relatively few by musicians. This trend can be observed in the ICMCs since they started to be held occasionally in Asia. For example, a researcher, Ying Yu, gave a presentation at the 1992 ICMC entitled 'Computer Music in China'.20 In this paper, Ying described several experiments with digital technology but did not discuss electronic music or music composition; instead, he mentioned "computer assisted music
instruction for musical grammar, harmony, and ear training", ethnomusicological research and early forms of music information retrieval. The same author gave studio reports at later ICMCs, in Hong Kong in 1996 and in Beijing in 1999.21 All these communications followed a similar trend and presented technical endeavours, but did not discuss musical composition. This is also the case for another early communication at ICMC, this time by Wu Jian, who presented experiments at the Apple Centre in Beijing.22 At the Hong Kong ICMC, almost no Chinese composer took part, and, in 1999 in Beijing, the few Chinese contributions were technical.
Computer music is not only about composition. It is a technical field which enables research in many different areas. As the reports just mentioned demonstrate, there was some sort of confusion between electronic music, computer music, informatics and software engineering, and sound engineering. Some studios were created, but with an emphasis on sound engineering and sound recording. Although this trend should not be confused with electronic music composition, there are several historical and tactical reasons which justify this development: first, audio technology (microphones, mixers, loudspeakers and physical electroacoustics) is indeed a necessary field. Also, this technology may be useful for the creation of electronic music. Similarly, computer science and software engineering can often render services for music creation which uses computers; hence, strong technical and even scientific areas meet the needs of musicians who are willing to experiment.23
Electronic music and academic teaching in China
The educational structure for the practice of electronic music seems extensive at the Beijing Central Conservatory. There is a master's programme which continues into a PhD.24 While the Centre for Electronic Music of China (CEMC) itself has been integrated into the large composition department, the studios and the staff have remained in place to introduce students to this music and teach it, with subsidies from the ministries of Technology, Education and Culture. Of note is the promotion effort by the Central Conservatory to gather pieces awarded prizes during the MUSICACOUSTICA-BEIJING festival and to produce CDs. This exposes young composers to a large audience and gives them the opportunity to be heard by accomplished composers and musicians from abroad. Those students who have not stayed abroad benefit from the prestige of the Central Conservatory to engage in professional life with a rather commercial orientation. This image of the trade makes their position more favourable compared to composers trained in Taiwan, where there is still a gap in understanding between academics and the commercial world, an aspect of music that has long been marked by a contemptuous glance stemming from classical Confucian thought, which sees music as a discipline of moral elevation. Conversely, in Hong Kong, which is influenced by a long English presence and a comprador (trader agent) tradition, the importance of commercial orientation largely prevails over composition.
Today, the life of the electroacoustic musical universe in China continues to revolve around the conservatories of Beijing and Shanghai, although other important conservatories, such as the Sichuan Conservatory in Chengdu or the Beijing China Conservatory, also generate significant activity. As more schools began to support the discipline, it spread very quickly. This growth of activity relied on a few key personalities who used their personal networks to extend development in the field. Among them, Zhang Xiaofu established a strong link with the INA-GRM and other world centres for electronic music, as well as through the International Confederation of Electroacoustic Music (CIME).25 In 2002, Zhang Xiaofu founded the Electroacoustic Music Association of China (EMAC), of which he later became president. The scale of this structure mirrors the personality of Zhang, who tirelessly seeks to develop electronic music in China (Zhang 2018). Zhang, who is noted as "a figure in electronic music in China" by the Encyclopedia
of Chinese Culture, is the backbone of the MUSICACOUSTICA-BEIJING Festival and advocates for young composers through the many activities of its national network (Wang 2018). Young composers and interested students can now find studios and teachers in a number of conservatories across China. In the city of Chengdu, the largest conservatory in China, the Sichuan Conservatory,26 boasts an extensive program under the guidance of Hu Xiao (1967), director of the department of electronic music, whose facilities include the Kyma system. The composition department has a dynamic and creative team of professors, such as Lu Minjie,27 who was the first master's degree recipient in electronic music at the Sichuan Conservatory, and Bai Xiaomo.28 In Beijing, the China Conservatory is also a growing centre nowadays; there, Ban Wenlin,29 a Chinese composer who received her training in Japan, is an expert Max/MSP practitioner and is active in publishing scholarly papers on electronic music. In the same section, that of science and technology, there are instructors such as Cheng Yibing30 (Perry Cheng) and others who specialise in various aspects of electronic music and audio technology.
A problem which remains for Chinese students is the difficulty in accessing musical and textual information. In addition to the language barrier which exists today, ordering books and CDs from abroad remains difficult. Access to many Western sites, such as YouTube and many others, is blocked. A solution has been to translate textbooks into Chinese. A colossal effort was undertaken under the leadership of Kenneth Fields to translate the textbook by Curtis Roads published by The MIT Press, The Computer Music Tutorial, which is now available in two volumes from the People's Music Publishing House (2011).31 This terminology problem prompted Zhang Ruibo to extend this effort and map it to the British EARS site: this was how CHEARS (Chinese EARS) was conceived, with vocabulary and reference material.32
Students who have not studied abroad benefit from the prestigious reputation of the Beijing Central Conservatory, which allows them to engage in the professional milieu with a rather commercial orientation (Li 2018). This approach to the music trade puts them in an advantageous position compared to composers trained in Taiwan, where (as we remarked above) classical Confucian thought has a contemptuous view of music judged as commercial, thus creating a longstanding gap between the academic and commercial spheres.
Taiwan
The Taiwanese electroacoustic adventure began abroad in the late 1950s, with Lin Erh performing his first computer music experiments in the United States. Lin entered the National Taiwan University in electrical engineering in 1951, then went on to study at Northwestern University and attended various electroacoustic music laboratories in the United States and Germany. In 1965, he presented in concert a work composed entirely by computer, and later went on to compose 10 pieces for instruments and the Illiac II computer. His research and computer compositions ceased after his return to Taiwan in 1975. Lin's innovative experiments were followed by those of Wen Loong-Hsing (1944), who worked in Europe, the United States and Japan during the 1970s and 1980s. In 1985, he organised the first electroacoustic music workshop in Taiwan, and in 1987, he founded the Electroacoustic Music Laboratory of Fu-Jen University; the laboratory was taken over in 1989 by Kao Hwei-Chung, who was trained at the University of Illinois at Urbana-Champaign. In 1991, the National Taiwan Normal University opened a classroom with electroacoustic equipment purchased by composer Tzeng Shing-Kwei (1946) (Lien 2008).
Original and unclassifiable, these two exceptional personalities can also be considered ahead of their time, and they did not leave a deep mark in Taiwan. It was not until 1989, with the arrival on
the scene of Phil Winsor, that one can begin to speak of electroacoustic music in Taiwan (Chen 2008). He introduced the teaching curricula and taught the first courses himself at the Institute for Applied Arts, National Chiao Tung University. Eighteen years later, in 2007, a new program, the Master's Program of Sound and Music Innovative Technologies, was created outside of the music department. The program is composed of two groups: (1) sound synthesis, computerised physical modelling, singing voice synthesis, and brainwave and music-effect recognition models and (2) robotic music performance, intelligent audio-visual learning systems and intelligent sensor-based interactive technology. Instructors come from various departments such as technology, computer science and music. The new program encourages students who strive to invent new instruments and design new computer programs, with a focus on connecting with the commercial world of technology.
Winsor was the source of what would become two generations of Taiwanese graduates from the University of North Texas, where he taught. A second notable composer, Huang Chih-Fang33 (Jeff Huang), after earning a PhD in mechanical engineering, obtained a master's degree in music from the National Chiao Tung University. As a conductor, researcher and composer, Huang is a prominent figure in contemporary electronic music in Taiwan, standing at the crossroads of science and art through his work in electronic music. He is currently Associate Professor at Kainan University and Chairman of the Taiwan Computer Music Association (TCMA).
A phenomenon of ephemeral experiments in electronic music appeared throughout the 1980s in China and Taiwan, resulting from public and private university initiatives. Research in electroacoustic music flourished, and several studios were created, often in the science departments of universities for researchers, engineers, technicians and non-musician computer scientists, and then later in conservatories for composers and musicians.
The dynamism of electroacoustic music in Taiwan is due in large part to the composer Tseng Yu-Chung, who returned from the United States in 1998. Tseng Yu-Chung,34 a very versatile composer, is an instructor and researcher at the National Chiao Tung University. He is adept at what he calls the 'European style', or acousmatic music, but also versed in the latest live electronic technologies. Under Tseng's direction, young composers are breaking into the world of acousmatic music. A prolific researcher and educator, Tseng began directing master's theses towards the historical research of electroacoustic music in Taiwan and inaugurated the teaching of the analysis of electroacoustic and mixed music at the National Chiao Tung University. He is the father of the third generation of young electroacoustic composers, who have distinguished themselves by the quality of their work on the international scene. Tseng has greatly contributed to establishing cultural links between Taiwan, Europe and the United States.
Today, sound art35 is widely practiced across the whole of East and South-East Asia.
In China, Hong Kong and Taiwan, it has been heralded since 1997 by sound artist Yao Dajuin (1959), through 'China Sound Unit' and 'Taipei Sound Unit'. Yao was trained at the University of California at Berkeley and currently heads the Performative Media Art Centre of the China Academy of Art. This field crosses over into noise and electronic dance music and is often practiced by artists who did not receive formal musical training (Wang 2015). Another pole of young 'sound artists' emerging from electroacoustic music is found at the National Taipei University of Arts, where music is tackled 'without borders' in every sense of the term, both in an interdisciplinary approach and between different geographical points – China, Taiwan, Hong Kong, France, etc. Other figures of alternative electronic music are Wang Fujui36 (1969), director of the Centre for Art and Technology of the National Taipei University of Arts, Yao Chung-Han37 (1981) and Wang Chung-Kun38 (1982), who engage in noise music.
Hong Kong and Macau
The teaching of electroacoustic music in Hong Kong is still quite traditional. Of note are Clarence Mak (1959) at the Hong Kong Academy for Performing Arts and, since 2008, Hong Kong's only contemporary music ensemble, the Hong Kong New Music Ensemble. Other composers making multimedia and electronic music include Christopher Coleman39 (1958), Tsang Yiu-Kwan,40 Lo Hau-Man41 (1965), Steve Hui42 (1974), Lam Lai,43 Patrick Chan44 (1986) and Kelvin King Fung Ng45 (1985). However, a number of composers have been active in electronic music, and some have engaged in research, such as Lydia Ayers, who has developed software experiments for synthesizing sounds derived from instruments, including many Chinese ones. With Andrew Horner, she authored Cooking with Csound. Part 1: Woodwind and Brass Recipes, a book published by A-R Editions in 2002. Active in the Hong Kong computer music community, Ayers chaired the 1996 International Computer Music Conference in the city, and since 1995, she has contributed several research presentations to the ICMCs.
Macau began to develop cultural activities in this field only from the early 1990s. Hence, little music has been produced so far. According to Li Hongjun, professor at the Macau Academy for Performing Arts, who gave a lecture in 2008 at the first EMSAN symposium46 in Beijing, new music in the small territory remains something of a wasteland. Nevertheless, composers from Macau have realised some electronic music pieces, such as Lam Bun-Ching's47 Three Easy Pieces (1985) for trombone and electronics, or Wen Bihe's48 Dye, for flute and electronic music. Probably because of the vibrant and cosmopolitan nightlife of Macau, which has an abundance of casinos and clubs, the DJ and electronic dance music scene is well developed,49 but this is outside of this chapter's scope.
Japan
Japanese composers became interested in electronic music from the early 1950s. It is often considered that the earliest recorded tape composition dates from 1953 and was realised at the JOQR studio of Nippon Cultural Broadcasting by Mayuzumi Tôshiro (1929–1997), with Work XYZ for musique concrète. This pioneering piece was inspired by the contact the young Mayuzumi had with French musique concrète while in Paris between 1951 and 1952. There, he studied at the national conservatory. He had a chance to hear musique concrète during the festival in which Pierre Schaeffer demonstrated the whole repertoire of this music,50 and he left for Japan a few days later (Battier 2017). The piece he realised upon his return reflected the influence of the Paris School,51 although it did not strictly adhere to Schaeffer's teaching of working with fragmented sound objects (Loubet 1997, 1998).
A few years later, the Japan Broadcasting Corporation in Tokyo, known as NHK,52 established an electronic music studio which was patterned after the Cologne studio.53 In any case, Japanese electronic music came under the influence of the Cologne studio, as can be heard in a piece jointly composed at the NHK studio by Mayuzumi and Moroi Makoto (1930–2013), Variations on the Numerical Principle of Seven, in 1956;54 entirely realised with electronic sounds, it evokes the two Stockhausen electronic studies (Fujii 2004). It is this large studio which saw the birth of 'classic' electronic music production by Mayuzumi, Moroi, Ichiyanagi Toshi (1933), Takemitsu Tôru (1930–1996) and later Stockhausen, who visited in 1966.55
Despite the two examples by Mayuzumi and Moroi, the development of electronic music in Japan was marked by a quest for exploration and independence. It started in the very early 1950s with a group of artists and musicians who called themselves the 'Experimental Workshop'56 and included composers Takemitsu Tôru (1930–1996) and Yuasa Jôji (1929) (see Figure 2.3).
Figure 2.3 Yuasa Jôji, Tokyo, 2017 © Marc Battier
The Experimental Workshop started producing tape music even before Mayuzumi, with pieces by Akiyama Kuniharu and Yuasa Jôji, using early tape recorders from Sony (Galliano 2012). Later, these composers, joined by Takemitsu and Ichiyanagi, moved to the Sogetsu Art Centre in Tokyo. However, the 1968 compilation of electronic music by Hugh Davies57 listed 32 production centres in Japan, often a radio station with just a few pieces. This is nonetheless remarkable and testifies to the thirst for new forms of music in this country after the Second World War. It also shows the dispersion of effort and a lack of unity in the styles and experiments at that time, a situation which has to some extent persisted to this day (Mizuno 2008).
Japan's electronic music research and production is scattered across the country. The Tokyo NHK studio, established in 1955, has somehow failed to respond to the increasing demands of composers. There are still today many small computer music studios, either private or institutional. While China is attempting to propose a unified definition of electronic music practices with the goal of facilitating education in the field (with, admittedly, some difficulty), it remains to be seen whether Japan will attempt something similar.
However, in the immediate period after the Second World War, throughout the 1950s, creative movements – in music, visual art, poetry, literature – were striving to distance themselves from the values exacerbated by the nationalists before the war (Herd 1987). For this they avidly
studied the latest Western trends in their fields (Kojima 2008). When the 1960s arrived, and because Japan had to some extent recovered from the terrible damage and poverty inflicted by the war, their attitude changed. As musicologist Fukunaka Fuyuko put it:
Following a period of greedily absorbing, in the early '50s, the most up-to-date compositional trends in Europe – from total serialism to musique concrète – they were now searching for their own language, a language that defined their own creative voice distinct from another and, more important, from the one embedded in the European heritage.58
There is a movement towards scholarly research on the history and the trends of electronic music in Japan. This can partly be explained by the long history of electronic music in the country. Recently, an attempt has been made with the rather conservative Japan Musicological Society to take electronic music into consideration, thanks to the efforts of the Japan Society for Electronic Music (JSEM). More generally, studies of electronic music are led by composers and scholars like Mizuno Mikako, from the Nagoya City University, Ishii Hiromi, who resides in Cologne, Germany, Numano Yuji, from the Toho Gakuen School of Music, and Kawasaki Koji, who authored an authoritative book on the history of electronic music in Japan (Kawasaki 2000).59 Other academics are also engaged in musicological approaches to new music and electronic music, such as Kakinuma Toshie in Kyoto, who studied at the University of California at San Diego and remains attached to the American experimentalists, or Fukunaka Fuyuko, who was trained at New York University and conducts her research from the Tokyo University of Fine Arts and Music (Fukunaka 2014). Of all the East Asian countries, Japan is the most advanced in this scholarly approach.
Electromechanical music has been an important part of instrument innovation in the twentieth century. Recently, electromechanical robots with computer control have enabled further developments in this direction. In Japan, a prominent actor is Yasuno Taro, who invented Zombie Music. This is a multi-instrument composed of a number of recorders which are played by a pressure pump of the kind used in construction. Once a reserve of air has been produced, it is led to the flute mouthpieces, while a large number of electromechanical fingers are applied to the body of the flutes. In his Duet of the Living Dead (2013), Yasuno Taro uses soprano and alto recorders. The reference to zombies is emphasised by the use of finger mouldings that look like the real thing, although they are white in colour. In Quartet of the Living Dead (2013), the number of instruments is increased. The work of Yasuno Taro has been awarded various prizes in Japan, and as of this writing, two sets of his music are offered online at Apple Music.60 The electromechanical devices are controlled by MIDI messages that are applied to the set of fingers above the flutes. They are, thus, hybrid systems, and there is an increasing number of mechanical instruments placed under computer control, also called alternative musical instruments.61 Yasuno's zombie instruments have been very successful and show how the development of computer-controlled acoustical instruments is an important field of research for music.
An important step forward in organising events across countries has been the establishment of the Asia Computer Music Project (ACMP), which sponsors concerts and symposia across East Asia.
Among the founders are two composers most active in Japan and Korea, respectively: Osaka Naotoshi and Shin Seongah. The project aims mostly to present recent computer music from Asia and to provide an international forum for research in the field. As an example of the outcome of ACMP, in 2016, it helped set up the East Asian Computer Music Exchange Concert and Lecture symposium, organised at Beijing's China Conservatory by Ban Wenlin. The event gathered composers from Korea (CREAMA, Hanyang University), Japan (Tokyo's
Senzoku Gakuen College of Music and Gifu's Institute of Advanced Media Arts and Sciences, or IAMAS) and the China Conservatory.
Today, electronic music is a forefront topic among young composers, and the various music schools have taken notice. There is now a wide selection of academic studios in universities and conservatories, such as the Tokyo University of Fine Arts and Music, with teaching in the field by composers Nishioka Tatsuhiko and Nodaira Ichiro, the Nagoya City University with Mizuno Mikako, Kunitachi College of Music with Imai Shintaro, Shobi University with composer Kojima Yuriko, the Aichi University of Fine Arts and Music, and IAMAS – but there are too many to cite them all.
As was noted for China, the question of terminology and the translation of specialised vocabulary is acute in Japan. It would seem obvious that there is a need for precise terminology which enables proper communication and learning. This was discussed by Ishii Hiromi, a composer and scholar, in a presentation on the difficulties of selecting proper Japanese terms for such practices as electroacoustic music or electronic music.62 In her paper, Ishii states that "in Japan, there is an extreme blurring differentiation of genres and the confused usage of terms is accelerating it". This creates problems not only for transmission but also for the question of giving electronic music the status of art music and attracting the attention of musicologists. Nevertheless, efforts are constantly made in Japan to present the history and technology of electronic music, the latest example being a book of 430 pages by composer and performer Goto Suguru,63 who now teaches at the Tokyo University of Arts.
South Korea
It is generally accepted that the first electronic music piece composed in South Korea was Feast of Id, composed in 1966 by Kang Sukhi64 (1934). This composer had firm relations with Germany, a tradition established in the wake of Isang Yun.65 Isang Yun moved to Germany in 1957 and lived there subsequently, except for a period of around two years when he was forced to go back to South Korea. Because of his activities as a composition teacher, there has been a steady stream of young composers who followed the German track. Kang Sukhi, for instance, became artistic assistant at the studio of the Berlin Technical University.66 Even today, many young Korean composers consider that Germany is a necessary step for their training. Such is the case of Unsuk Chin, a student of Kang Sukhi at the Seoul National University, who has been residing in Berlin since 1988 after having spent three years studying with György Ligeti in Hamburg (Germany). The beneficiary of wide international recognition, Unsuk Chin has realised a number of electronic music works, for tape alone or for mixed music, at the electronic studio of the Technical University Berlin.67
Hanyang University has long been a focal point for the development of electronic music in Korea. In 2005, the Centre for Research in Electro-Acoustic Music and Audio, or CREAMA, was founded there. It is currently headed by Yim Jongwoo and Richard Dudas. The Korean computer music community meets international composers and researchers at the annual Seoul International Computer Music Festival (SICMF). The event supports several electronic music concerts and a symposium (Gluck 2007). Lee Donoung, Professor of Composition at Seoul National University, directed the Experimental Studio of the Heinrich-Strobel Foundation of SWF in Freiburg in the 1990s and was president of the Korean Electro-Acoustic Music Society for several years. He has been at the forefront of computer music research and composition in Korea for over 20 years and has conducted experiments with sensors applied to various instruments.
Despite its relatively late start, Korea now offers a complete array of electronic music education as well as public concerts; the SICMF festival receives submissions from all over the
world and programs papers and pieces from many different countries. Young Korean composers travel more than ever, and not only to Germany. Research is presented at international professional gatherings such as ICMC or EMS. Composers like Yim Jongwoo; Anh Doo-Jin, former president of KEAMS; Shin Seongah, co-founder of the Asia Computer Music Project (ACMP); Chang Jaeho; Kim Jinho; Lee Eun Hwa; Choi Kyong Mee; Richard Dudas; Gerald Eckert and so many others represent a rich variety of trends in music and in multimedia work.
Styles and trends

The initial years of setting up studios, training young composers and developing a practice were influenced by models imported from Europe and North America. Composers had to travel to these regions to receive a properly organised education in the theory and aesthetics of electronic music. The question of where they went and what they gained from these periods abroad is most interesting, as often their own teaching, once back in their countries, was naturally influenced by the schools where they received their training. This can be observed, for instance, in the foundation of the electronic music studio at Tokyo’s NHK radio in 1955, which was modelled on the Cologne WDR studio. Later, in China, Zhang Xiaofu brought back from Paris the French styles of electroacoustic music as practiced at the INA-GRM.68 In mainland China, new music has been fostered by the Chinese nationalism encouraged by Xi Jinping. Chinese composers who work and teach in mainland conservatories encourage their students to dwell on Chinese culture. At the Central Conservatory in Beijing, for instance, the notion of the ‘Chinese Model’ is presented to students in composition. About this development of electroacoustic music, Zhang Xiaofu wrote: This historic breakthrough in sound material in electroacoustic music offers not only a new practical method with which to compose, but also inspiring prospects for composers to explore the significance and effects of abstract and concrete sounds. For electroacoustic music, sounds are resources; with advantages of sounds comes also superiorities of sources. Therefore, it is quite crucial to obtain the gene and nutrition from Chinese traditional cultures.69 In Zhang Xiaofu’s own production, there are numerous cultural references, such as the use of Chinese instruments in a mixed music setting, where the Chinese flute is in a dialogue with the electroacoustic music tape, for instance; there may be reference to Buddhism70 or the use of Chinese opera singers, with percussion and electroacoustic music tape, as in Visages peints dans les Opéras de Pékin. In Shanghai, the strong influence of the French electronic music tradition can be felt in the teaching and works of An Chengbi, which mix techniques inherited from the INA-GRM and IRCAM, and, in the case of Xu Shuya, the influence of the Paris Conservatoire. American influences can be traced in the piece for orchestra and tape Chronosphere by Kojima Yuriko,71 who studied at Columbia University and now oversees the composition program at Shobi University near Tokyo. However, recently, the influences have become so numerous that it is hard to separate any schools based on style. For instance, real-time and interactive techniques, using sensors and software such as Max/MSP and SuperCollider, or the Kyma system, have become ubiquitous and are routinely used by Jin Ping in Beijing or Bai Xiaomo in Chengdu. The limitations and possibilities of real-time (or ‘live’) electronic and acousmatic music are very different. Today, and probably for the past 25 or 30 years, musique mixte has superseded other, older types of practice. Musique mixte is of at least two kinds: the first is the most
ancient but is still frequent these days. In this approach, the electronic music material is composed before the performance and stored on a fixed medium, such as a magnetic tape or a digital storage medium. The benefit is that a wide range of processes and sound synthesis methods can be used to produce the material; in fact, it is limited only by the available techniques and technologies. One common usage is to record sounds and then process them. In the case of mixed music with traditional Asian instruments, the instruments can be recorded and later processed, using the tools available in a studio, digital or otherwise (Battier 2014). The second type of mixed music is more focused and also more recent. In this mode, the instrument on stage is played along with a number of real-time electronic techniques, usually digital these days, a situation which gives the performer quite a high degree of freedom.72 The disadvantage is that this comes with a limitation some may consider severe: the processing toolbox, so to speak, is limited to a small number of algorithms and techniques, although it is possible to play back pre-recorded audio sequences. Nevertheless, mixed music is nowadays widely encountered among the various schools of electronic music and deeply rooted in training and education, so much so that it poses a new challenge in Asia when a piece calls for electronic material and traditional instruments. It is often the case that talented and virtuoso traditional instrument performers have had limited experience with new music – new Western music, that is – and even less, if any, with electronic music. In this context, a mixed music in which the electronic material is made of processed recordings of the traditional instruments, or in which the traditional instruments are processed live, offers the performer a setting where there are landmarks. In a way, the electronic material made of processed instruments works as an extension of the live instruments. The electronic music serves as a reference with which the performer is able to recognise behaviours and patterns, timbre or rhythms (Uno 2004). An example of this approach can be found in a piece by composer and pianist Nodaira Ichiro (1953), Quatorze écarts vers le défi (1990–1991) for MIDI piano, eight strings and real-time digital analysis, synthesis and processing system,73 in which the instruments’ sounds are processed through units such as a harmoniser, frequency shifter, sampler, reverberation and spatialisation, and played back during the performance. In this piece, the computer which drives the processing is controlled by the keyboardist, who plays a MIDI piano – a piano which sends MIDI messages for the pitches being played. With this information, new processes are computed or selected in real time, making this piece an early example of successful interaction. Another Japanese composer, Mari Kimura,74 has conceived her own interactive system; as a virtuoso violinist, she drives her computer while playing. In turn, the computer executes a number of predetermined audio actions. For this, Kimura now uses a complex sensor system attached to the wrist of her right hand. This system, which she calls the Augmented Violin, was built at IRCAM to her specifications.75 Despite living in the United States, Kimura has maintained strong relations with Japan. Other approaches are also encountered in mixed music.
In a work by Ishii Hiromi, Kaze no michi (Wind Way), for shakuhachi and live electronics (2005),76 the composer described the relationship between the electronics and the instrument: “The computer part is not an extension of the instrument, but rather is designed as environment. The material sound is not the shakuhachi, but environmental sound recorded at Shinjuku train station, Tokyo”.77 Here, the electronic part plays the role of an ever-changing soundscape, and the attention of the listener is oriented towards the shakuhachi. Leading the audience’s focus towards the performer while using electronic material has been a constant preoccupation since the onset of electronic music, and many different settings and techniques have been experimented with. Since the 1980s, composers have been tempted to use interactive technologies, which enable a constant dialogue between the instrumentalist and the technology.
An example of interactive systems can be found in the works of Wang Chi. She moved to the University of Oregon for her postgraduate studies to focus on advanced interactive music. There, she studied with Jeffrey Stolet, professor of music technology, and became acquainted with various sensing and movement capture devices. She places her Kyma system under the control of gestural interfaces and, as did Stolet, experiments with a number of interactive configurations. Her work with sensors has led to complex interactive configurations and has placed her at the forefront of this approach in China. The photo (see Figure 2.4) shows Wang performing Flowing Sleeves (2015), an interactive work of electronic music for Kyma and eMotion systems in which the composer/performer is dressed in a long Chinese dress; her gestures are freely inspired by Chinese opera characters while the hands, masked by the long sleeves, hold sensing devices which transmit commands via wi-fi to the Kyma system. Here is how the composer describes her piece: In Chinese Opera, Water Sleeves can be an essential part of different types of characters’ costumes. During performances, long flowing sleeves can be flicked and waved like water, to facilitate emotive body movements. My composition Flowing Sleeves is inspired by a poem of the Tang Dynasty that depicts the women’s ritual of preparation in dressing, art of face makeup, and spiritual routine before the daily exposure to the public. In Flowing Sleeves, the performer operates an eMotion Twist with the left hand and an eMotion Accelerate with the right hand. The choreographic movements as well as performative actions together shape the visual and musical experience in real time. Traditional instruments of Asia are known for their richness of timbre. As Jing Jang put it, “The linear beauty of Chinese melody is based on the variety in timbre of its every note, which may be graced, fluctuated, vibrated, glided, and varied in movement, strength, and intensity” (Jing, 92). The same could be said of Japanese and Korean instruments. It is unavoidable that
Figure 2.4 Wang Chi performing Flowing Sleeves (2015)
particular attention be given to timbre in electronic music. In Japan, research on timbre and an electronic timbre dictionary is conducted by Osaka Naotoshi at Tokyo Denki University (Osaka 2009). An acute attention to timbre can also be observed in works by Chinese composer Zhang Xiaofu, particularly in the alliance of Chinese traditional instruments and electroacoustic components.
Electronic music as a professional activity in Asia

Gradually, several domestic organizations have taken charge of computer music activities in their respective countries or regions. These work as federations in that they supervise concerts and conferences which gather local studios and centres, as well as establish links to foreign institutions.
Electroacoustic Music Association of China (EMAC)

The Electroacoustic Music Association of China was founded in Beijing in 2002. It was the first national structure attempting to federate academic electronic music activities in China.78 As of this writing, there are over a hundred professional members, and over 300 students, from varied places in mainland China, such as the Wuhan Conservatory of Music, Beijing’s Central Conservatory, China Conservatory, Sichuan Conservatory, the Shenyang, Harbin and Suzhou Academies of Music, Soochow University School of Music, Zhejiang, Shandong etc.
Electroacoustic Music Studies Asia Network

EMSAN was first conceived in 2008 as a research project in the field of musicology. Its aim was to gather information and provide it to researchers and anyone interested in Asian electronic music production. From the start, EMSAN was structured as a network. Each network node gathers scholars, doctoral students and postdoctoral scholars from one country or territory; there are as many nodes as countries or territories involved. Since 2008 – the date of the actual launch of EMSAN through a first symposium held in Beijing in October – the network has been active. The life of EMSAN includes an annual symposium, usually held at the Beijing Central Conservatory, although it has also been organised in Taipei and twice in Japan (Gifu, 2015 and Tokyo, 2016). Among its other activities are publications.79 The main project of EMSAN is its database of musical works and publications aimed at researchers, who can find there a wealth of information on electronic music pieces in Asia. The databases, however, could easily be used for other goals (the promotion of music, for instance).80 The database presents the names of composers and the titles of pieces in their original language and as a transcription into Latin characters. This way, the original writing is preserved, which is important for Chinese, Korean and Japanese characters, while the database can still be used by non-speakers of these languages. The EMSAN network has, since 2008, organised a yearly ‘EMSAN Day’ symposium (see Figure 2.5), usually held in Beijing during the MUSICACOUSTICA-BEIJING festival, although some editions have been organised in Taipei, Gifu (Japan) and Tokyo. EMSAN has also been invited by the annual Electroacoustic Music Studies conference (EMS) to hold a dedicated track at each conference. In this way, EMSAN has contributed to giving Asian electronic music a somewhat more visible place, which it needed, not least for the musicological study of this music.
Figure 2.5 EMSAN Day 2012, Beijing Central Conservatory of Music: Kenneth Fields, Ban Wenlin, Anthony De Ritis, Liao Lin-Ni, Mizuno Mikako, Zhang Xiaofu, Marc Battier and Li Qiuxiao
Japanese Society of Electronic Music

The first federation of electronic music in Asia, the Japanese Society of Electronic Music (JSEM), was founded in 1992 by a number of people involved in contemporary and computer music, with Minami Hiroaki of the Tokyo University of the Arts and Bin Kaneda of the Aichi University of the Arts, joined by Matsui Akihiko, Matsushita Isao, Terai Naoyuki, Nishioka Tatsuhiko and others. Several of them are in a position to foster the development of electronic music in Japan. In 2003, composer and musicologist Mizuno Mikako joined the board of directors and added to its range of activities the missing aspect of scholarly research, a role reinforced when she became vice-president in 2009.
Japanese Society for Sonic Arts

Research in electronic music in Japan is now federated around a recent organization. The Japanese Society for Sonic Arts (JSSA) was founded with the aim of being a community composed of researchers who provide an insight into sound and music and are able to present experimental tools and systems; composers who incorporate those means into their advanced creations; and musicologists engaged in electroacoustic music studies who propose novel ideas for theoretical and conceptual approaches to electronic music. Being a domestic organization, JSSA focuses on such activities emerging on Japanese soil, but it has established strong links to international institutions. In other words, JSSA unites music and musicology, science and technology by giving researchers and composers a common forum in which to present and share their current endeavours. The association was born from a small community founded by Osaka Naotoshi, a professor at Tokyo Denki University (Osaka 2008), which he called ON-Juku, with on, a word for sound, and juku, designating a scholarly institution. ON-Juku was based at the laboratory where Osaka worked at Tokyo Denki University. This was the skeleton of a platform associating research, for instance on timbre, with composition, and it held meetings for this purpose. After a few years, it appeared to Osaka that the model implemented by ON-Juku could be expanded by associating a larger number of Japanese researchers and musicians, and, in 2009, he founded the Japanese
Society for Sonic Arts. This is a unique structure in that it holds regular meetings throughout the year, organises concerts and, as importantly, serves as a Japanese base for activities from international organizations, such as, in 2016, EMSAN, and musical groups, such as the NWEAMO81 festival.
Korea Electro-Acoustic Music Society (KEAMS)

One of the first national organisations for electronic and computer music in Asia was the Korea Electro-Acoustic Music Society (KEAMS), founded in 1993 by Sung Ho Hwang. Since 1994, KEAMS has organised the Seoul International Computer Music Festival, during which a symposium for presenting current research and compositions, the Korean Electro-Acoustic Music Society’s Annual Conference, is held. It also publishes a journal in Korean, Emile, which first appeared in 2001. The Seoul International Computer Music Festival, or SICMF, is an important outcome of KEAMS in that it combines Korean and non-Korean new electronic music in its concerts, together with electroacoustic music studies; in addition, it holds an international symposium. The SICMF has become a prominent part of Korean new music life.
MUSICACOUSTICA-BEIJING

An important step was made in 1994 at the Beijing Central Conservatory of Music by Zhang Xiaofu when he established the Centre for Electronic Music of China (CEMC). The Centre became the focal point of production and research in China, particularly with the foundation of the first electronic music festival in the country, the MUSICACOUSTICA-BEIJING festival.82 Right from the beginning, the festival had a policy of inviting composers and researchers from abroad, and this has remained a strong feature. By establishing links to other centres across the globe, it helped the Chinese scene to be open to all kinds of styles and practices.83 At the same time, it created links to other centres and schools in China; one of the outcomes has been the definition of a national curriculum for the teaching of electronic music in China. The festival, which lasts a week, became an annual event in 2004. Now considered an intense moment in China’s musical life, with concerts, masterclasses, lectures, competitions, symposia, meetings and round tables, it sends ripples throughout the year across the Central Conservatory and other places in China, and it has become a template for increasing the awareness of electronic music in the country. Hence, other festivals with comparable components have appeared in China, notably in Shanghai, which has been holding the Shanghai Conservatory of Music International Electronic Music Week since 2006, and in Chengdu.
Taiwan Computer Music Association (TCAM)

The Taiwan Computer Music Association was founded in 1999, the year of the International Computer Music Conference in Beijing, by a group of composers involved in computer music. The first president was Shing-Kwei Tzeng, and conductor, composer and researcher Huang Chih-Fang served as chairman from 2011 to 2015. The current president (at the time of writing) is Tung Chao-Ming, a composer trained in Essen (Germany). Members of the board, like Cheng Chien-Wen, are prominent figures of computer music research in Taiwan. The association organises concerts domestically and abroad and takes part in the WOCMAT symposium.
Workshop on Computer Music and Audio Technology (WOCMAT)

The Workshop on Computer Music and Audio Technology, to some extent, matches the Chinese MUSICACOUSTICA-BEIJING festival and symposium, but on a much smaller scale. Careful to give the event an international scope, WOCMAT has most of the time invited overseas composers, scholars and researchers to attend and give keynotes. WOCMAT was founded in 2005 and is aimed mainly at composers of electroacoustic music and engineers. Through international calls for both paper and music submissions, it highlights the work of young researchers and composers during the three days of the event. The organization is entrusted to five Taiwanese universities, each of which in turn takes responsibility for the festival. One of the highlights of the festival is the discussion session between engineers, composers and other players in the field. But gradually, in the face of the divergence of interests between engineers and composers – but also due to the recurring financial difficulties of organising the festival – the centre of gravity has moved towards a few composers, the most important being from the National Chiao Tung University Music Department (Professor Tseng Yu-Chung and Professor Tung Chao-Ming), National Taiwan Normal University (Professor Chao Ching-Wen) and Kainan University (Professor Jeff Huang). Beyond the concert-conference aspect, WOCMAT serves as a springboard for reaching a wider audience and above all organises workshops for university students for educational purposes with Western organisations or institutions, such as the TPMC (Tout Pour la Musique Contemporaine, Paris) for a Mixed Music Workshop at National Chiao Tung University for a week in 2015 and the WOCMAT-IRCAM Forum Conference at Kainan University for three days in 2016.
Conclusion

What are the characteristics of Asian electronic music? One cannot assume they are similar for each country or region. What gives it a unique flavour? There is in fact a debate which reflects the question of interculturality and how musicians today absorb different cultural influences. In a recent article from the book Contemporary Music in East Asia, edited by Hee Sook Oh, Christian Utz quotes a statement by German sinologist Barbara Mittler. From Mittler’s perspective, “New Chinese Music ought to be considered on an international stage, not as a music both exotic and Other, but as a music in its own right”.84 She does not deny that some music may have a ‘Chineseness’, but that would have to be the choice of an individual composer. This, of course, is contradicted by the position of some Chinese composers and teachers, such as Zhang Xiaofu, mentioned earlier, who posits the Chinese model (itself an emanation of the Chinese regime). In fact, in the same chapter, Christian Utz explains that: It is by continuously reframing and reconsidering established concepts of identity – and not merely by some “individual style” – that the art music created by East Asian composers continues to vitally and effectively challenge the global hegemony of Western concepts of music.85 In this chapter, we have tried to outline some characteristics of electronic music in Asia whenever this was relevant; indeed, because of their insistence on finding their own voice through ties to their local culture, some composers, particularly in China, deal with particular forms of interculturality.86 This manifests itself differently in Japan, where, after a period of rejection of traditional culture, which had been insisted upon before the Second World War, there has been
a process of rediscovering it and using traditional elements, such as gagaku or Buddhist shomyo chanting, for instance. To study the question of intercultural processes in the electronic music of East Asia, it is necessary to have at one’s disposal a compilation, as comprehensive as possible, of the musical pieces which comprise the repertoire. This is the ambition of the EMSAN project, presented earlier: to serve as a platform for making the Asian repertoire better known.
Notes

1 As is customary in East Asia, in this chapter family name usually precedes given name. 2 This trend is often encountered among Asian composers. Chen Yi built several works on the concept of qi (vital energy), a Chinese term which covers several spiritual meanings, including that of spirit. Two pieces by the Taiwanese composer Wang Miao-Wen can be mentioned here with an analytic introduction of the 64 hexagrams: Yi – Etudes 8 éléments (2013) for electroacoustic music (www.youtube.com/watch?v=Pj7pHhHXXH8) and Contradiction harmonieuse (1994) for two cellos and electronics (www.youtube.com/watch?v=8QH-UJN56Lk). 3 Of note is the ballet L’Eve future (mirai no ivu), on the text by Villiers de l’Isle Adam (1886) with music by Toshiro Mayuzumi, choreography and dance by the Matsuo Akemi Ballet Company. See Doryun Chong, ed., 2012, Tokyo, 1955–1970: A New Avant-Garde, New York (NY), Museum of Modern Art. 4 See Experimental Workshop. Jikken Kôbô, Kamakura (Japan), The Museum of Modern Art, 2013. Partially in English. The collaboration of musicians with other arts in the Experimental workshop is also discussed in detail in Miwako Tezuka, 2005, Jikken Kôbô (Experimental Workshop): Avant-Garde Experiments in Japanese Art of the 1950s, PhD dissertation, Columbia University. 5 Japanese conception of space can be observed in its traditional architecture, where mobile screens redefine volumes or small gardens represent complex landscapes. It is striking that composers paid special attention to the spatial projection of electronic music. We will see some examples of this trend later. 6 This well-known pedagogue-composer travels regularly in East Asia for his numerous concerts and workshops as invited professor-composer. We can listen to his 14 works on SoundCloud: https://soundcloud.com/chong-kee-yong, the work Timeless echoes (2010) composed for cello solo, live performer on painting (video projection) and live electronics: www.youtube.com/watch?v=AnS5auY7kXg. Chong Kee Yong’s biography on his personal website: www.chongkeeyong.com 7 Her 2013 PhD dissertation can be downloaded from the University of Plymouth site, at https://pearl.plymouth.ac.uk/bitstream/handle/10026.1/1606/2013mohdnorowi10167917phd.pdf?sequence=94. An Artificial Intelligence Approach to Concatenative Sound Synthesis, PhD dissertation, University of Plymouth, 2013. See also: http://profile.upm.edu.my/noris/profail.html. 8 Several composers will be mentioned in this chapter, such as Zhang Xiaofu (1954, China–France), Wen Deqing (1958), Xu Shuya (1961), Xu Yi (1963), Wang Ying (1976), An Chengbi (1967) and others. 9 For example, Shen Ye (1977) studied at the Hochschule für Musik und Theater Hamburg (Germany) and in the IRCAM Cursus 2013–2014 in Paris (France) before teaching at the Shanghai Conservatory of Music. We can listen to the world-premiere recording of Ondulation (2014) for piano and live electronics: http://medias.ircam.fr/x9fbefa 10 For example, Wang Ying (1976) studied at the Hochschule für Musik und Tanz Köln, the Hochschule für Musik und Darstellende Kunst Frankfurt and in the IRCAM Cursus 2011–2012 in Paris. His music composed before 2013 can be heard on SoundCloud: https://soundcloud.com/ying-wang-composer 11 See: www.philharmonicsociety.org/pdf/CFOnlineKit928.pdf, ‘CLASS OF 1978’ concert, Monday, October 26, 2009, Carnegie Hall, New York. 12 Photo taken from the website mentioned in note 11.
13 Zhou is currently professor at the University of Missouri–Kansas City (UMKC), where he lives with his wife, composer Chen Yi, also a professor of composition at that university. In 2011, Zhou received a Pulitzer Prize for his opera Madame White Snake. 14 Gluck Bob and Ping Jin, 2006, ‘A Conversation with Zhang Xiaofu’, New York, EMF Institute. Available at http://econtact.ca/162/gluck_zhang.html 15 For an analysis of Zhang Xiaofu’s Chant intérieur, see Li Qiuxiao, 2017, ‘Characteristics of Early Electronic Music Composition in China’s Mainland’, Contemporary Music Review 37(1–2), Marc Battier
and Kenneth Fields, eds., Electroacoustic Music in East Asia, in press. The version with bass bamboo flute (1988–2000) is available on CD ISRC CN-M35–06–0002–0000/A.J6. 16 The label ‘New Wave’ was given in 1986 in an article by An Anguo in the journal Zhonaquo Yinyue Xue (Musicology in China), in which he wrote: “in the new era there have emerged a group of young and talent composers”. See Zhou Jinmin, 1993, New Wave Music in China, PhD dissertation, University of Maryland, pp. 1–2. 17 Written and published in Chinese. 18 This piece is briefly discussed in Li Qiuxiao, 2017, ‘Characteristics of Early Electronic Music Composition in China’s Mainland’, Contemporary Music Review 37(1–2), Marc Battier and Kenneth Fields, eds., Electroacoustic Music in East Asia, in press. 19 IRCAM is the music research institute founded by Pierre Boulez and opened in 1977. Boulez insisted on creating electronic music with instruments, or musique mixte; for this purpose, he fostered the development of real-time digital sound processors (the 4A, 4C, 4X series of the late 1970s and 1980s). Almost all pieces realised at IRCAM are of the mixed music variety. 20 Yu Ying, 1992, ‘Computer Music in China’, Proceedings of the International Computer Music Conference, San Jose (CA). 21 Yu Ying, 1996, ‘Computer Music Shenzen, P.R. China’, Proceedings of the International Computer Music Conference, Hong Kong; Yu Ying, 1999, ‘Sound of China: Computer Music’, Proceedings of the International Computer Music Conference, Beijing. 22 Wu Jian, 1992, ‘Computer Aided Composition in Chinese Studio’, Proceedings of the International Computer Music Conference, Tokyo. 23 A case in point is IRCAM, the Paris music research institute, or CCRMA, at Stanford University, MIT, in Cambridge, MA, and numerous other centres around the world. 24 Op. cit. Zhang Xiaofu, translated by Qi Minjie. 25 The International Confederation of Electroacoustic Music (CIME) was founded in 1981 and is a part of the Unesco International Music Council. It gathers representatives from 20 countries. www.cime-icem.net 26 The Sichuan Conservatory of Music, founded in 1939, now combines 31 departments, 18 research centres and over 14,000 students. Sichuan Conservatory website: www.sccm.cn/english/about.asp 27 Lu Minjie (Iris Lu) received her bachelor’s degree in electronic information engineering at the University of Electronic Science and Technology of China in Chengdu. She was sponsored by the China Scholarship Council to be a visiting scholar at the University of Oregon. 28 Bai Xiaomo received his bachelor’s degree in electronic information engineering at the University of Electronic Science and Technology of China in Chengdu. He started to teach at the Sichuan Conservatory of Music in 2003. About his music, we can find a video dated 2011 of The Fish’s Talk for two fishes and MAX, with a brief description of the work in English: https://vimeo.com/5212942 29 She received her PhD at the Tokyo University of the Arts in Music and focuses on audio expression based on technologies such as computers and recording technologies. 30 He graduated from China Conservatory of Music in 1991. While studying music psychology for his master’s degree he began his career as a teacher. Now as the tutor of graduate and undergraduate students, his research fields include the composition of computer music/electronic music, technical theory such as sound synthesis, and education theory. Biography consulted on the conservatory website.
31 The preparation of this translation required a collective effort from several Chinese institutions.This led to the main theme of the Electroacoustic Music Studies Conference in 2006 held at the Beijing Central conservatory. This book was also translated into Japanese as Roads, Curtis (2001), Computer-ongaku [Computer music tutorial], Japanese translation edited by Aoyagi Tatsuya, Naotoshi Osaka, Keiji Hirata and Yasuo Horiuchi, TDU, Tokyo. 32 Online at http://chears.info/ (modeled on www.ears.dmu.ac.uk). 33 Huang Chih-Fang gained a doctorate from the Mechanical Engineering Control Group at the National Chiao Tung University in 2001 and a master’s in music from the same university in 2003. He studied composition with Wu Ting-Lien and electroacoustic music with Phil Winsor for 10 years. He taught until 2009 as a lecturer and has also supervised numerous dissertations in fields as varied as electroacoustic music, mechanics and national defence within the engineering department. This composer is exemplary in his curriculum for the training of key actors of the electroacoustic world of Taiwan. He works at the crossroads of science and the arts to make electroacoustic music better known, and especially the history of this discipline in the West. Today, he is Associate Professor at Kainan University
and the director of the Automated Music Analysis and Composition Laboratory at National Chiao Tung University. 34 Tseng Yu-Chung taught at Taipei Municipal University of Education between 2000 and 2007 and, since 2008, full time in the department of music at National Chiao Tung University. A composer, pedagogue and researcher active in the field of electroacoustic music, his acousmatic and mixed music has obtained numerous international awards. Under his direction, his students have also begun to break into the international acousmatic music scene – Lin Kuei-Fan, Chen Ying-Jun, Feng Ling-Hsiuan, Shih Yu, Chang Cheng-Ya, Lin Chia-Yi and Wang Ting-Yun, amongst many. Currently Tseng Yu-Chung is an associate professor and Director of the Music Technology Laboratory at National Chiao Tung University. He is a graduate of the University of North Texas (1998), where he worked on composition and computer music with Larry Austin, Jon C. Nelson and Phil Winsor. Since 2004 Tseng Yu-Chung has lectured extensively at the universities mentioned. In 2004 he wrote the first memoir on the history of electroacoustic music in Taiwan, supported by Lin Hsiao-Yun (林曉筠). This energetic pedagogue teaches courses not only on practice and composition but also on the history of electroacoustic music and, more recently, on musical analysis. On reading the master’s dissertations directed by this composer, we can see bibliographic gaps in the field, both in the literature written in Chinese and in translations. Indeed, because of the youth of this discipline, there are few bibliographical references on this subject. It is also observed that some young researchers are turning to academic publications in China. In order to remedy this situation, Tseng Yu-Chung has undertaken major work in the publication of research and teaching in order to establish a more abundant bibliography and has tackled musical analysis such as that of Ligeti’s Artikulation, on the basis of the theory developed in the Treatise on Musical Objects by Pierre Schaeffer. 35 This chapter does not discuss sound art (as a general term for a diversified field), and the readers interested in underground music culture in China can find some recent information in these three dissertations: Carolyn Lee, 2012, Noise and Silence: Underground Music and Resistance in the People’s Republic of China, Master of Arts dissertation, University of Southern California; Christina Yan Chau Wong, 2006, Exploring the Spaces for a Voice – The Noises of Rock Music in China (1985–2004), PhD dissertation, The Chinese University of Hong Kong; Jing Wang, 2012, Making and Unmaking Freedom: Sound, Affect and Beijing, PhD dissertation, The College of Fine Arts of Ohio University. See also RPM: Ten Years of Sound Art in China, a virtual exhibition of Chinese sound artists, online at www.rpm13.com/. 36 Wang Fujui is an artist and curator specialized in sound art and interactive art whose work has played a key role in establishing sound as a new artistic genre in Taiwan. A pioneer of sound art in Taiwan, in 1993 he founded NOISE, the country’s first experimental sound label.
He is currently an assistant professor of the Taipei National University of the Arts, worked for the TranSonic Lab at the Center for Art and Technology of the Taipei National University of the Arts and has curated numerous exhibitions and festivals, including the 2008, 2009, 2010 and 2012 editions of the TransSonic Sound Art Festival and the 2007 to 2009 editions of the Digital Art Festival Taipei. Complete biography of the artist, www. pfarts.com/wang-fujui-295793111929790.html; http://archive.avat-art.org/mediawiki/index.php/王 福瑞, 15.11.2016. 37 Working as an educator, artist and designer during the day,Yao Chung-Han swings his body between noise and dance music during the night.Yao Chung-Han attained his MFA degree from the Graduate School of Art and Technology,Taipei National University of the Arts. His works recall conversation and contrast between fluorescent light and sound and furthermore trigger physical imagination of audiences. As a pioneer in Taiwan’s light and sound art,Yao Chung-Han was the recipient of First Prize in Digital Art Festival Taipei for Sound Art category. He was invited to participate in Fukuoka Asian Art Triennale, NTT ICC-Emergencies!014, STEIM – Massive Light Boner and many other events. His works were presented in various international exhibitions. He is currently a full-time artist and lectures at Taipei National University of the Arts and Shih Chien University. www.yaolouk.com/works.html, 11.11.2016. 38 Wang Chung-Kun is one of the rising stars in Media arts in Taiwan. He has created various forms of machinery that have consistently maintained an intriguing purity and peculiar sense of beauty. As the viewers approach, these machines operate on their own untiringly. Sound making, switching on and off, exhaling, spinning or twinkling, they can simply do more than a single action. Rather, they have their own rhythm variation, as if they had a life of their own. Wang lives and continues his creation in Taipei. He is an expert at sound art, kinetic sculpture and interactive art. www.pfarts.com/wangchung-kun-295792021022531.html, 11.11.2016.
39 Composer, conductor, trombonist, currently Composition Concentration Coordinator of the Hong Kong Baptist University Department of Music. Biography consulted on the website of the Hong Kong Baptist University: www.hkcg.org/coleman-christopher 40 Trained at the Boston Conservatory, he is currently a lecturer at the Hong Kong Academy for Performing Arts. His work list includes eight pieces (multimedia or computer music 1992–2006) on the website of Hong Kong Composers’ Guild: www.hkcg.org/tsang-yiu-kwan 41 “Lo got the DMus in composition from the Chinese University of Hong Kong and taught in Hong Kong Institution of Education and Hong Kong Academy for Performing Arts, is now teaching in the Chinese University of Hong Kong, active as a composer as well as a conductor and also Chairman of Hong Kong Composers’ Guild”. Biography consulted: www.hkcg.org/lo-hau-man 42 Steve Hui temporarily taught at Hong Kong Academy for Performing Arts, where he was trained. With little electroacoustic work, his music is more aimed at “crossing boundaries, experimenting tradition and remixing art forms” as he highlights on his website: www.lo4nerve.com He was the prizewinner of the Asian Cultural Council 2016. The Rockefeller family-supported foundation annually selects internationally renowned Chinese artists (from China, Hong Kong and Macau). 43 Lam Lai, composer and sound artist, trained at the Hong Kong Academy for Performing Arts under the direction of Law Wing-fai and Clarence Mak and then the Royal Conservatory of Den Haag with Martijn Padding and Yannis Kyriakides. “As a composer, she tends to create new hybrids of media and exploring sounds experience in music. She has focused on combining conventional performance practices with other forms of art such as electronic sound, visual art, film, literature and theatre. Currently, she is working with the music-theatre company de Veenfabriek for the performance RAARRR and touring around the Netherlands”. – Biography consulted on the Hong Kong Composers’ Guild: www.hkcg.org/lam-lai; Composer’s personal website: https://lamlai.net/about/c 44 Patrick Chan, composer and audio engineer, currently assistant professor at Ball State University. Personal website: www.chintingchan.com 45 Kelvin King Fung Ng, a composer specializing in interactive electronic music, studied at the Chinese University of Hong Kong, University of Missouri-Kansas City Conservatory of Music and Dance, Hong Kong Academy of Performing Arts and Kunstuniversität Graz, currently resident in Berlin. 46 Regarding the EMSAN symposia, see later. 47 Lam Bun-Ching was born in Macau in 1954, but she received her education in Hong Kong and got her PhD at the University of California, San Diego. 48 Wen Bihe, born in Macau in 1991, studied at the Beijing Central Conservatory CEMC. 49 With, for instance, popular electronic music festivals such as the Macau Electronic Music Festival; and Macau hosted the nomad festival Road to Ultra in 2015. 50 The Festival ‘L’Œuvre du XXe siècle’ proposed concerts of French electronic music on May 21 and 24. The program was conceived by Pierre Schaeffer and presented electronic music pieces by himself, Pierre Henry, Pierre Boulez (the two studies), Olivier Messiaen (Timbres-durées), Monique Rollin and André Hodeir. 51 Here, the Paris School denotes the style and methods invented by Pierre Schaeffer, which he first called musique concrète before renaming it electroacoustic music. Later, François Bayle named it acousmatic music.
52 NHK: Nippon Hōsō Kyōkai. 53 This studio was founded in Germany in 1951 at Cologne radio by composer Herbert Eimert, researcher Werner Meyer-Eppler and others. The radio station was then called NWDR. It is this studio that the young Stockhausen joined in 1953 and did a string of electronic music masterpieces. The equipment of the NHK Electronic music studio was modelled on the Cologne studio (ring modulator, Friedrich Trautwein’s Monochord keyboard instrument, pulse and noise generators, etc.), yielding pieces resembling German electronic music, but only for a few years. After that, Japanese composers strove to find their own compositional paths. 54 Available on the CD Search for Spring of Sound, R-10A1017. 55 Where he composed the 5-channel tape Telemusik, taking advantage of a Sony 6-track prototype tape recorder which was owned by the NHK studio, and the mixed (live electronic) piece Solo. 56 In Japanese, Jikken Kôbô. The group led many different actions with music, dance, photography and other activities between 1951 and 1958. 57 Davies Hugh, 1968. Répertoire international des musiques électroacoustiques/International Electronic Music Catalog, compilé par/compiled by Hugh Davies. A cooperative publication of Le Groupe de Recherches Musicales de l’O.R.T.F. and The Independant Electronic Music Center, distributed by The M.I.T. Press.
58 Fukunaka Fuyuko, 2014. ‘Re-Situating Japan’s Post-War Musical Avant-Garde Through Re-Situating Cage: The Sôgetsu Art Center and the Aesthetics of Spontaneity’, in Contemporary Music in East Asia, Hee Sook Oh, ed., Seoul: Seoul National University Press, 191. 59 Kawasaki Koji, 2009. Japanese Electronic Music, second edition, Tokyo, Ai iku sha. 60 Taro Yasuno’s Zombie Music (Duet of the Living Dead), and Taro Yasuno’s Zombie Music: Quartet of the Living Dead. 61 Such as the ones built in the United States by Eric Singer and the League of Electronic Musical Urban Robots (LEMUR). 62 Ishii Hiromi, 2008. ‘Japanese Electronic Music Denshi-Ongaku (電子音楽): Its Music and Terminology, Definition and Confusion’, Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/ishii.pdf. 63 Goto Suguru, 2016, Emprise, Tokyo: Stylenote. 64 Also written Kang Seok Hee in the Revised System of Romanization of Korean, but he is widely known as Kang Sukhi. 65 Noted composer Isang Yun (1917–1995) started living in Germany in 1957. There, he is known for having taught many Asian composers. Even today, it is not rare that young Korean composers establish strong links with Germany, either by studying or working there. 66 An example of his music can be heard in the CD 50 Years Studio TU Berlin published in 2005, reference EMF DVD054. Kang made the music for a visual piece by Robert Darroll, Stone Lion (1990), using a Synclavier II. 67 See for instance Xi (1997/1998) for ensemble and electronics, realized at the electronic studio of Technical University Berlin, CD Deutsche Grammophon 00289–00477–05118, 2005, and Allegro ma non troppo (1994), also realized at the TU Berlin studio, CD 50 Years Studio TU Berlin published in 2005, reference EMF DVD054. 68 The INA-GRM (Groupe de recherches musicales de l’Institut national de l’audiovisuel) was founded in 1958 by Pierre Schaeffer. It was later headed by François Bayle, until Daniel Teruggi took over in 1997. 69 Zhang Xiaofu, 2017, ‘The Power of Arterial Language in Constructing a Musical Vocabulary of One’s Own: Inheriting the Inspiration and Gene of Innovation in Electroacoustic Music from Chinese Culture’, Contemporary Music Review 37(1–2), Marc Battier and Kenneth Fields, eds., Electroacoustic Music in East Asia, in press. 70 See Zhang Xiaofu’s electronic piece, Nuo Ri Lang (1996), based on Tibetan Buddhism and available on CD ISRC CN-M35–06–0002–0000/A.J6. Two versions of the piece are presented on the CD: an acousmatic version and a mixed piece with multi-ensemble percussion. 71 See her reflections on composing new music in Yuriko Hase. 2008. ‘Listening to the Sound: Meanings in Making Music’, Proceedings of the Electroacoustic Music Studies Network conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/kojima.pdf. 72 In much English-language writing this is referred to as ‘live electronic music’ (Ed.). 73 Nodaira Ichiro, 1953, Quatorze écarts vers le défi (1990–1991) for MIDI piano, eight strings and real-time analysis, synthesis and processing digital system, 43’, CD Contemporary Composers from Japan, Fontec FOCD2535 74 We are using the usual order of her name, as Mari Kimura lives and works in New York City. 75 The reader can visit a page of her website which describes the Augmented Violin: www.marikimura.com/augmented-violin.html. In a recent CD of her music, she uses this system.
Some pieces include a mode of playing she has developed, violin subharmonics: CD Mari Kimura, Voyage Apollinian, Innova 958. 76 Ishii Hiromi, Wind Way, CD Wergo ARTS 8112.2, 2006. See also her analysis of the piece in Ishii Hiromi, 2007. ‘Finding Rules in Shakuhachi Timbre and Applying Them to Structure Music: Composition for Shakuhachi and Live Electronics “Kaze no Michi” (Wind Way)’, Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Leicester. On line: www.ems-network.org/IMG/pdf_IshiiEMS07.pdf. 77 Ibid. 78 EMAC has a very comprehensive website, emac-china.org. 79 An issue of Contemporary Music Review 37(1–2) is being completed at the time of writing. It is soberly titled ‘Electroacoustic Music in East Asia’. 80 The databases are accessible at this address: www.ums3323.paris-sorbonne.fr/EMSAN/. They are hosted by the Research Institute in Musicology, Paris, and Sorbonne University.
81 New West Electronic Arts & Music Organization, founded in the United States in 1998. 82 The first editions of the festival bore the name Beijing Electroacoustic Music Week. 83 Its link to the International Confederation of Electroacoustic Music (CIME) was noted earlier. 84 Barbara Mittler, 2008, ‘Against National Style: Individualism and Internationalism in New Chinese Music (Revisiting Lam Bun-Ching and Others)’, Proceedings of the Symposium at the 2003 Chinese Composers Festival, Hong Kong: Hong Kong Composers’ Guild, quoted by Christian Utz, 2014, ‘Neo-Nationalism and Anti-Essentialism in East Asia Art Music Since the 1960s and the Role of Musicology’, in Contemporary Music in East Asia, Hee Sook Oh, ed., Seoul, Seoul National University Press, p. 24. 85 Christian Utz, ibid., p. 25. 86 This is discussed in Chapter 3 of this volume, The Three Paths: Cultural Retention in Chinese Electroacoustic Music, by Leigh Landy.
References

Battier, Marc. 2014. ‘Écrire de la musique mixte pour instruments traditionnels de Chine et du Japon: shakuhachi, pipa, guqin’, in Fusion du temps: Passé-Présent, Extrême Orient – Extrême Occident, Lin-Ni Liao and Marc Battier, eds., Paris: Delatour-France, pp. 109–119.
———. 2017. ‘Studying Japanese Electroacoustic Music: A View From Paris 日本の電子音響音楽研究――パリからの視点’, in Mixed Muses n°12, Japan: Aichi University of the Arts, pp. 17–34 (in English and Japanese).
Chen, Hui-Mei. 2008. ‘The Electro-Acoustic Music in the Higher Education in Taiwan – Its Evolution and Its Reception’. Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/chen.pdf.
Davies, Hugh. 1968. Répertoire international des musiques électroacoustiques/International Electronic Music Catalog, compilé par/compiled by Hugh Davies. A cooperative publication of Le Groupe de Recherches Musicales de l’O.R.T.F. and The Independent Electronic Music Center, distributed by The M.I.T. Press.
Fujii, Koichi. 2004. ‘Chronology of Early Electroacoustic Music in Japan: What Types of Source Materials Are Available?’ Organised Sound 9(1): 63–77.
Fukunaka, Fuyuko. 2014. ‘Re-Situating Japan’s Post-War Musical Avant-Garde Through Re-Situating Cage: The Sôgetsu Art Center and the Aesthetics of Spontaneity’, in Contemporary Music in East Asia, Hee Sook Oh, ed., Seoul: Seoul National University Press, pp. 181–208.
Galliano, Luciana. 2002. Yôgaku: Japanese Music in the Twentieth Century, Lanham: Scarecrow Press. Translated by Martin Mayes.
———. 2012. The Music of Jôji Yuasa, Newcastle upon Tyne: Cambridge Scholars Publishing.
Gluck, Bob. 2007. ‘Electronic Music in South Korea’, eContact 11(3), originally published by the EMF Institute, Electronic Music Foundation. On line: econtact.ca/11_3/southkorea_gluck.html.
Herd, Judith Ann. 1987. Change and Continuity in Contemporary Japanese Music: A Search for a National Identity, PhD dissertation, Providence: Brown University.
Ishii, Hiromi. 2007. ‘Finding Rules in Shakuhachi Timbre and Applying Them to Structure Music: Composition for Shakuhachi and Live Electronics “Kaze no Michi” (Wind Way)’. Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Leicester. On line: www.ems-network.org/IMG/pdf_IshiiEMS07.pdf.
———. 2008. ‘Japanese Electronic Music Denshi-Ongaku (電子音楽): Its Music and Terminology, Definition and Confusion’. Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/ishii.pdf.
Kawasaki, Koji. 2000. Nihon no denshi ongaku, zouho kaitei ban (日本の電子音楽 増補改訂版 Japanese Electronic Music, enlarged and augmented edition), Tokyo: Ai iku sha.
Kojima, Yuriko Hase. 2008. ‘Listening to the Sound: Meanings in Making Music’. Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/kojima.pdf.
Kouwenhoven, Frank. 1992. Out of the Desert: Mainland China’s New Music, special issue, Chime Journal (5), Spring.
Li, Qiuxiao. 2018. ‘Characteristics of Early Electronic Music Composition in China’s Mainland’, in Contemporary Music Review 37(1–2), Marc Battier and Kenneth Fields, eds., Electroacoustic Music in East Asia, in press.
Liao, Lin-Ni. 2015. Héritages culturels et pensée moderne: Les compositeurs taiwanais de musique contemporaine formés à l’étranger. Paris: Delatour.
Lien, Hsien-Sheng. 2008. ‘Sound, Song and Cultural Memory – Concerning the History of Electroacoustic Music in Taiwan’, Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/lien.pdf.
Liu, Ching-chih. 2010. A Critical History of New Music in China, Hong Kong: The Chinese University Press. Translated by Caroline Mason.
Loubet, Emmanuelle. 1997. ‘The Beginnings of Electronic Music in Japan, With a Focus on the NHK Studio: The 1950s and 1960s’, Computer Music Journal 21(4): 11–22.
———. 1998. ‘The Beginnings of Electronic Music in Japan, With a Focus on the NHK Studio: The 1970s’, Computer Music Journal 22(1): 49–55.
Mizuno, Mikako. 2008. ‘Electroacoustic Music in Cultural Context – Two Points Towards Materials and Structure’. Proceedings of the Electroacoustic Music Studies Network conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/mizuno.pdf.
Osaka, Naotoshi. 2008. ‘Planning for a Research Consortium, “ON-Juku”, on Advanced Art Music Creation’, Journal of Information Processing Society of Japan (IPSJ) 4(4): 445–449. (in Japanese)
———. 2009. ‘An Electronic Timbre Dictionary and 3D Timbre Display’. Proceedings of the International Computer Music Conference, Montreal, International Computer Music Association, San Francisco, pp. 9–12.
Uno, Everett Yayoi. 2004. ‘Intercultural Synthesis in Postwar Western Art Music: Historical Contexts, Perspectives, and Taxonomy’, in Locating East Asia in Western Art Music, Y. Uno Everett and F. Lau, eds., Middletown (CT): Wesleyan University Press, pp. 1–21.
Utz, Christian. 2014. ‘Neo-Nationalism and Anti-Essentialism in East Asia Art Music Since the 1960s and the Role of Musicology’, in Contemporary Music in East Asia, Hee Sook Oh, ed., Seoul: Seoul National University Press, pp. 3–28.
Wahid, Hasnizam Abdul. 2007. ‘Recent Experiment and Emerging Works at the Faculty of Applied and Creative Arts, Universiti Malaysia Sarawak – the Integration and an Experiment of a Traditional Wayang Kulit Performances and Electroacoustic Music’. Proceedings of the Electroacoustic Music Studies Network Conference, Leicester. On line: www.ems-network.org/IMG/pdf_EMS07HAbdulWahid.pdf.
———. 2008. ‘A Review on “Experimentation” and Exploration on Installation Electroacoustic Work in Malaysia’. Proceedings of the Electroacoustic Music Studies Network Conference, EMSAN session, Paris. On line: www.ems-network.org/ems08/papers/wahid.pdf.
Wang, Hefei. 2018. ‘Exploration and Innovation, the Chinese Model of the MUSICACOUSTICA-BEIJING Festival’, in Contemporary Music Review 37(1–2), Marc Battier and Kenneth Fields, eds., Electroacoustic Music in East Asia, in press.
Wang, Jing. 2015. ‘Considering the Politics of Sound Art in China in the 21st Century’, Leonardo Music Journal 25: 73–78.
———. 2012. Making and Unmaking Freedom: Sound, Affect and Beijing, PhD dissertation, Ohio University.
Zhang, Xiaofu. 2018. ‘The Power of Arterial Language in Constructing a Musical Vocabulary of One’s Own: Inheriting the Inspiration and Gene of Innovation in Electroacoustic Music from Chinese Culture’, in Contemporary Music Review 37(1–2), Marc Battier and Kenneth Fields, eds., Electroacoustic Music in East Asia, in press.
Zhou, Jinmin. 1993. New Wave Music in China, PhD dissertation, University of Maryland.
Web sites

ACMP – Asian Computer Music Project www.acmp.asia
CHEARS – Chinese EARS chears.info
EMAC – Electroacoustic Music Association of China emac-china.org
EMS – Electroacoustic Music Studies Network www.ems-network.org/
EMSAN – Electroacoustic Music Studies Asia Network www.ums3323.paris-sorbonne.fr/EMSAN
JSEM – Japanese Society for Electroacoustic Music jsem.sakura.ne.jp
KEAMS – Korea Electro-Acoustic Music Society www.keams.org
MUSICACOUSTICA-BEIJING 2016.musicacoustica.cn
SICMF – Seoul International Computer Music Festival computermusic.asia
WOCMAT – Workshop on Computer Music and Audio Technology
There is no permanent website for WOCMAT, but a site is published each year for the current workshop.
3
THE THREE PATHS
Cultural retention in contemporary Chinese electroacoustic music
Leigh Landy
Prelude

In the mid-1990s when I made my first tour as composer/musicologist in Brazil, I spent a week teaching experimental composition (encompassing both new approaches to note-based works as well as electroacoustic approaches) at a summer school at the Universidade Federal da Bahia in Salvador. The ethnic diversity of this Brazilian group was extremely broad, reminding me how global contemporary and, in particular, experimental music composition had become. It turned out that the composer who founded the composition specialisation in Salvador was a Swiss composer, Ernst Widmer, who introduced serial composition to this part of Brazil. Most of the students, therefore, presented me with pieces that had roots in the Second Viennese School of composition. However, in their speech and body language and during their free time, it was the music of Brazil and, more specifically, the Afro-Latin music of Bahia that they loved. I asked them during one of the sessions why their music did not resemble that, say, of Villa-Lobos more, and their response was that he was a nationalist who had supported the former dictatorship and therefore they would never compose music with such Brazilian elements. This was a very surprising experience, as, outside of the country, Villa-Lobos was seen to be such an icon of Brazilian music, and these students clearly all loved their country’s music as well. Leaving the Villa-Lobos issue aside, I have heard many Brazilian electroacoustic works throughout the years that include elements of its music, not least its rhythms as well as sounds of the daily life of Brazil, such as its street markets and its nature, often expressed through today’s sampling culture. One can also identify the passion or spirit of Brazil in a number of electroacoustic works, an influence that seems at first to be extra-musical. These links to a culture and its musical heritage form the focus of the current chapter.
Contextual elements
This chapter, including its case studies, has been inspired by a couple of areas that have not been given much attention in the field of electroacoustic music studies:

•	Socio-cultural aspects of electroacoustic music associated with the field of ethnomusicology. This subject can be approached from the point of view of where a particular music fits within a given society – this will not be our focus here. But it can also deal with cultural and musical elements that can be found in a given culture's repertoire. In this chapter, the cultural area will be China, an interesting case due to the fact that most forms of contemporary music (in the Western sense) were banned during the Cultural Revolution (ca. 1966–1976), leading to the question: which sources of inspiration have been of importance to Chinese electroacoustic music composers?1
•	The use of samples2 as a building block in composition. How sampled materials are treated musically can form an important aspect of composition. Part of the world of sample-based composition, or of works in which samples play any role, is the use of sound and musical materials from a given culture. In electroacoustic works, when using samples one can take something old and make it new by recomposing it, 'orchestrating' it so to speak.3 Sampling culture involves the grabbing (appropriation, plundering) of something existent and placing it in a new context. In this chapter, all cases of sampling will be based on the composer's respect for the sounds in their original contexts.
Composing with sound can be found across a wide range of practices and involves people of various backgrounds, ages and abilities. For example, UNESCO, in its DigiArts project,4 celebrated the use of the sample: its goal was to get people of all ages, but in particular young people, to work creatively with a selection of universally shared themes (such as water), organising sounds they had recorded and/or made themselves. This visionary initiative focused on developing countries, making digital sonic art accessible to people around the globe. Tools for sampling are easily available on portable digital devices, and sampling culture might be seen today as a contemporary form of folk art (or people's art).
In this chapter, the case studies will initially be chosen from both the professional and 'underground' arts cultures. Our subject will be the use of aspects of Chinese culture, including its music, by Chinese contemporary composers of electroacoustic music. Our case studies will focus on a number of 'generations' of composers who have studied at the Central Conservatory of Music (CCoM) in Beijing, starting with a pioneer of electroacoustic music in China, Zhang Xiaofu,5 and, following this, on so-called 'underground musicians' who are not associated with China's elite conservatories. My understanding of how these composers approach the various aspects of Chinese music and culture in their works from an etic (outside-of-culture) point of view has been supported through interviews and internet correspondence with the composers, to ascertain their views on how they composed certain works from their own emic (inside-of-culture) point of view.
A bird's eye history of electroacoustic music in China6
There is very little literature regarding electroacoustic music in China, in particular during its early years. A Beijing-based team completed the first version of an as-yet-unpublished document, China Electronic Music Development Events (中国电子音乐发展大事记), in early 2016 during the writing of this chapter. The combination of this document and the interviews has led to this introductory overview.
The story begins in 1981, not many years after the end of the Cultural Revolution, when Jean-Michel Jarre performed an 'Electronic Night' at major venues in Beijing and Shanghai and visited the CCoM demonstrating an FM synthesiser, the first such demonstration in China. Three years later, in 1984, the CCoM held the first-ever electroacoustic concert in the country. In the same year a Computer Music Lab was opened for research at the Jiaotong University in Shanghai, the first of many. In 1986, Martin Wesley-Smith and Ian Fredericks came to CCoM to install and introduce composers to the Fairlight Computer Music Instrument (from Australia).
In the same year, the sister Beijing institution, the China Conservatory of Music (CCM), opened an electronic music 'tone colour' lab under the direction of Li Xian. However, it was in the following year that the first electroacoustic production studio was launched by Liu Jian, with Wu Yuebei, at the Wuhan Conservatory of Music. Liu had been in attendance at early CCoM events and had already created electroacoustic works. In this same year, Zhang Xiaofu collaborated with Chen Yuanlin to make the first electroacoustic work for film in China. The first Chinese electroacoustic music CD (Zhang) was to follow in 1988, at which point China was already catching up – at least within the conservatories and some universities – with developments (such as the use of MIDI, analogue studios, digital approaches, etc.) throughout the country. At this time there were nine conservatories in China, all of which offer instruction in electroacoustic music today. Xi'an joined in 1988, and the expansion continued until, within the last ten years, Chengdu (Sichuan Conservatory), Tianjin and Shenyang all offered their first courses and equipped their studios. In 2015 Hangzhou opened the tenth conservatory (Zhejiang Conservatory) and immediately started to build its studios, appointing two full-time members of staff to teach electroacoustic music.7
Throughout the 1990s festivals started to make their appearance, symposia became more frequent and, in 1999, the International Computer Music Conference was held at Tsinghua University in Beijing. Today, both CCoM and the Shanghai Conservatory hold annual back-to-back international festivals in October. Many universities now offer courses in electroacoustic composition, performance and education throughout China. As in other countries, sonic creation can also be found in art schools, design departments and media faculties.
The development of electroacoustic music in China has been fairly rapid and reflects the country's unique economic journey. Equally important to its growth in terms of education, creativity and dissemination is how musicians started working in its early years, given the political developments and the restricted cultural scene that preceded it. To investigate this we return to the CCoM in the late 1970s to early 1980s, where a new generation of composers was forming, all studying at more or less the same time, all having experienced severe restrictions in their musical lives when they were young. These composers included Tan Dun, Chen Yuanlin and Chen Yi (the first woman to gain a master's in composition at CCoM) as well as Zhang Xiaofu. Zhang finished his degree in 1983 and became Professor in the Composition Department in 1984. Tan, Chen Yuanlin and Chen Yi all provided masterclasses at CCoM during this period. All were originally trained in instrumental composition, including – if not focusing on – Chinese musical traditions. They were all to add what was then called electronic music (as an all-embracing term) in the early 1980s. The CCoM had major international composers visiting from time to time but housed only a very modest collection of cassette and LP recordings, and virtually no scores of Western contemporary works, which might explain why they would all include electronic sounds in their works within the styles that they were already developing. There might have been more of a tabula rasa due to the historical hole caused by the Cultural Revolution, in combination with there being little repertoire from which to profit.
This clear evolution probably planted the seeds of the relatively high interest in Chinese elements in the electroacoustic works that were to emerge over the following decades, as this was already part, if not the main focus, of their approach. As Zhang put it in a personal communication,8 there were three domains of electronic music in the early days: first, the seeds of what was to become electroacoustic music in China; second, 'electronic music for everyone', referring to the more popular approach of musicians such as Jean-Michel Jarre, as well as those making electronic music for film, radio and television but also for performing arts productions, as he did; and, finally, the new computer music that usually involved a keyboard, as was the case with the Fairlight CMI and with early MIDI devices. The earliest studio at CCoM was limited to stereo, and composers were only able to create basic electronic sounds, thus making integration with their instrumental approaches challenging. Electroacoustic music production as we know it began to flourish in the mid-1990s. By this time, Zhang had studied in Paris, and the others had all gone to the United States, two of whom (Tan and Chen Yi) studied composition and Chinese music with Chou Wen Chung at Columbia University.9 Tan has subsequently evolved into a major composer, perhaps China's most famous, and Chen Yi was a finalist for the prestigious Pulitzer Prize in 2006. All of these composers have continued writing China-influenced music despite lengthy periods outside of their native land. It is within this context that the following case study has been chosen.
Case study: 'the three paths'
Two tendencies have evolved that are worthy of note: (a) the proportion of mixed11 music pieces appears to be higher than in most other countries, and (b) the proportion of music that consciously involves aspects from the musicians' own culture – China in general but also its diverse regions – is also higher than in most other nations in which there is an active electroacoustic music scene. With this in mind, what is of interest is: (a) the artistic result of pieces in an essentially sound-based medium that include note-based instruments; and (b) the motivation for making China-informed electroacoustic compositions.
It is somewhat difficult to determine exactly how the preference for mixed music evolved in China. Some think that a number of key composers of the older generation who studied abroad were attracted by European mixed music practices of the 1980s and '90s, along with the fact that they were trained in (Chinese) instrumental composition. Others present arguments heard throughout the globe that audiences want something to look at during concerts. Others still point out that most electroacoustic music composers were originally trained as instrumental musicians. The truth can probably be found somewhere between these. In any case, as live electronics developed rather slowly in China – although perhaps more rapidly in underground scenes during this century – mixed music performance makes sense given the wealth of talented performers at the conservatories.
Mixed music compositions can basically be classified into two types: (a) works in which the recorded part (or, these days, live electronic part) uses similar materials to the live performers, and (b) works in which there are different sound worlds, the live part offering primarily note-based performance and the sound part often a mixture of notes and other sounds and/or resonances. The vast majority of Chinese mixed works fall into the second group. I have suggested that a distinction can be made between music which is based on the traditional musical note (possessing pitch) and works focusing on sounds not originally intended as notes. I created a term for this second type, sound-based music.12 Mixed works of the second category, due to the focus on the live performer(s), tend to concentrate on note-based material, thus merging traditional practices with the sonic opportunities provided by electroacoustic music composition (for example, pitch-based resonances that extend the sound of the live performers).
This remark is a bit of a simplification, however. Most Chinese instruments produce sounds that can equally function as 'notes' or 'sounds' in the terms defined earlier. In addition, the development of extended techniques starting in the previous century means that there is at times a fine line between the perception of a note or a sound in such works. Think of some of the noise textures that a pipa (which belongs to the same family of instruments as the lute) can produce, or an erhu (a bowed two-stringed instrument), a suona (double reed), a dizi (bamboo flute) and so on. Furthermore, East Asian music is unique in its ability to articulate a single note so that it evolves morphologically in highly sophisticated ways, recalling the Buddhist notion that 'a note has the right to be its own universe'. Where Western ears are more acquainted with the sequence or gesture, Eastern music often focuses on the level of the single breath or note. Here one might draw parallels with the electroacoustic notion of focusing on sound quality and morphology as opposed to a note embedded within many others in a score.
This focus on mixed music is, however, not the key interest here. It is the other tendency that will be investigated in greater depth, namely the desire of composers of a broad range of ages, working at or having studied at the CCoM, to include a variety of Chinese influences in their music as a form of inspiration.
China-inspired electroacoustic music
In preparation for writing this chapter, four composers were interviewed to examine the influences that were most relevant to their electroacoustic music compositions in which Chinese elements were of importance. This case study involved online discussions with the composers, listening to their works and face-to-face interviews.13 The composers were Zhang Xiaofu (b. 1954, CCoM) and three of his (former) students: Guan Peng (b. 1971, currently working at CCoM), Li Qiuxiao (b. 1985, who recently completed her PhD and now works at the brand new Zhejiang Conservatory in Hangzhou) and Qi Mengjie (known as Maggie, b. 1989, studying for her PhD with Zhang Xiaofu at the time of these interviews). A key focus of the discussions with the composers therefore concerned their decision to employ Chinese elements in their music. Three types of influences or 'paths' evolved which may be seen to be reasonably distinct. What was of interest was to discover commonalities and differences between the composers regarding these three paths (as exemplified here). Two of the three paths are based on musical ideas, and the third is based on ideas from culture beyond music. They are:

1	Sampling
	This is the most obvious of the three. Take sounds from Chinese music and offer them as sonic material within an electroacoustic music composition, allowing recognition of the source (or type) of the music to act as a means towards identification within the composed work. Musical samples might involve, for example, Beijing Opera, Tibetan chant, overtone singing, the sound of any Chinese instrument and so on. Similarly, the use of samples from daily life can form different types of experiential links. This latter approach is more commonly used by musicians outside of Chinese academe and also by artists working in the area of sound art (see the later section on underground scenes).
2	The use of Chinese instruments and/or musical approaches
	The presence of Chinese instruments offers listeners an experiential common ground, at least for those in China or within its diaspora. Composers see this choice both as a celebration of musical roots and as a means of opening up the world of electroacoustic music through the 'known'. Beyond this, the use of Chinese compositional approaches or vocal/instrumental techniques is another potential common-ground aspect of electroacoustic music. Personal experience suggests that this application of approaches and techniques is not always as immediately audible as a composer might think, thus limiting some listeners' links to experience.
3	Inspiration from Chinese culture (e.g., Buddhism, Taoism, poetry, philosophy)
	This is perhaps the least tangible and is more difficult for younger composers in general. This third path focuses on Chinese extra-musical cultural ideals being reflected musically, whether they are to do with the Taoist avoidance of formalisation or the ability to interpret a poem of Li Po in a wide variety of manners. Some composers have been accused of citing such inspirations as a marketing tool, but composers who articulate such extra-musical influences are normally expressing their pride in their culture, aiming to make it an integral foundation of their creative work. Inspiration from Chinese culture can influence the sounds chosen, structural approaches and even the composer's relationship with his or her work. As was the case regarding the application of Chinese compositional approaches and techniques, this final influence is understandably not always audible to untrained listeners. For example, a work influenced by Taoist thought that has a structure resembling the flow of water, as opposed to a pre-imposed form, might not be heard as such by a listener unaware of the form's signposts. Regardless, we are speaking about dramaturgical forms of inspiration, not necessarily a means towards guiding the listening experience.
There may be other sources of Chinese inspiration for today's composers of electroacoustic music, but these three featured in most discussions, and no other key type emerged during this research. It might be suggested that these three paths could be related to more common archetypes, namely materials, instruments and philosophy. This is true, but I have decided to stick with these more tangible items for the sake of clarity. The main thoughts raised during the interviews now follow.
Zhang Xiaofu
Key works chosen:
Le chant intérieur: poème fantastique – for Chinese bass flute [dizi] player and electroacoustic recording using material from the dizi, xiao and xun (bamboo flute, end-blown flute and ocarina), 1988–2001. Versions: with and without live performance, with and without video. The latest version (2001) is discussed here.
Nuo Ri Lang – fixed medium version, 1996.14
Visages peint dans l'Opéra de Pékin II – fixed medium version with video, 2008.15

The lengthiest, most unstructured and wide-reaching interview took place with Zhang Xiaofu. It covered the early years of electroacoustic music production in Beijing and the many problems encountered at the time, his experience studying in Paris and, of particular relevance here, how his works, in particular the three presented, are influenced by his culture and its musical heritage. The three were chosen due to their availability (in two cases) and their overt links with Chinese culture. They are by no means exceptional in his oeuvre.
He made it clear that, despite all that came with the Cultural Revolution, he was first and foremost a product of Chinese culture and the nation's cultural history. He was resident for advanced studies in Paris at the Conservatoire Edgar Varèse and the École Normale de Musique de Paris between 1988 and 1993. This was of great value in terms of gaining insights into electroacoustic composition techniques (first analogue, then digital) and, of course, mixed music also formed a major part of electroacoustic music composition in France. Over the years, his links with both the GRM in Paris and Grame in Lyon demonstrate that he has never lost his connections with the country of this extended residency. While he maintains a very strong leaning towards the acousmatic approach to electroacoustic composition, the content of virtually his entire oeuvre is very much oriented towards Chinese musical traditions and cultural values. This was the primary focus of the interview.
Zhang is someone who aims to echo and celebrate the profundity of his culture in his compositions. This can manifest itself in a number of ways. Studying a number of his works led me, in fact, to the three paths that are being presented here, as all of them appear, some more than others. Reading any of his programme notes, one finds that the third path (inspiration from Chinese culture) seems always to be taken. It is true that many people are sceptical about what is placed in liner or programme notes, but the basis of Zhang's works, beyond any compositional and/or technical challenges, is to do with concepts emanating from (ancient) Chinese thought. Clearly these form part of the inspiration and the dramaturgy of the works. In some cases, such as Nuo Ri Lang, he is inspired by his experiences in other, distinctly different Chinese cultures, in this case in Tibet. Although it shares Buddhism with most of the rest of China, Tibet is seen as an exotic culture by the Han Chinese. As it turns out, Buddhism is at the heart of this piece.
The three pieces have been chosen because they range from the primarily sample-based (Visages peint, including the video content), through a combination of sample-based material with a reliance on the sound of Chinese music (Nuo Ri Lang), to the use of traditional and extended instrumental techniques (Le chant intérieur). All three demonstrate the combination of respect for and extension of Chinese musical traditions. Although Zhang presents his works to his students in some detail, what he communicates to his public through programme notes has virtually nothing to do with how he composes or what technology he uses. He focuses instead on the dramaturgy of his works. This is very important, for the listener will notice a clear fusion of two disparate influences, those derived from China and those acquired during his years in France, much more than specific concepts of new compositional methodologies. The French influence is not specifically based on Schaefferian theory (or any other in particular) but instead on the sound that has developed over the decades at the GRM and in its international diaspora. As this is a true fusion, these techniques of acousmatic gestural music are applied using Chinese materials – this is Zhang's signature and stands in contrast to more internationalist tendencies heard worldwide.
He often speaks of finding a balance between music and what he calls 'noise' (notes and sounds),16 and, as can easily be heard, this is indeed a key characteristic of most of his works, while pitch and rhythm play a fundamental role in all of them. In short, his training as an instrumental composer is never totally relinquished. We shall look chronologically at the versions of the three works that have been chosen. It is important to know that Zhang often makes several versions of each work – he calls this one of his trademarks – and therefore we need to see what happens when we hear other versions of these same pieces.17
Le chant intérieur is a fully composed work for Chinese bass bamboo flute (dizi) and stereo recording featuring the sounds of three Chinese wind instruments. It is the only work of the three that, in terms of sound, does not vary to a large extent between its electroacoustic versions. The work was originally composed in 1987 for dizi and large Chinese orchestra, a work in three movements lasting about 30 minutes. The following year, using a production studio in Paris, as opposed to an electroacoustic music studio, Zhang produced the first electroacoustic version with the limited technology present, focusing on the first movement only.18 He was able to obtain several synthesisers and attempted to find sounds that would blend well with the Chinese flute. As successful as this 1988 version is, it would be after Zhang's return to China in 1993 that he would become able to create the sounds he really sought for this work, culminating in the 2001 version, which employed source material from the three Chinese wind instruments. From this point onwards, there was one version of the work focusing on the live performer and the recording, while in another the performer is recorded and integrated with the recorded part, enhanced by a beautiful abstract video that is often projected as part of the performance.
The recording includes instrumental material that is abstracted and floats between note and sound events. The live material, although primarily consisting of notes, is composed in a traditional way including the extended techniques that bring us closer to what Zhang calls 'noise'; for example, the use of bamboo with the dizi can sound like a prepared flute, with paper and resin inserted, creating a noisy texture. There is a good balance between homogeneous-sounding passages (dizi, recording) and parallel but complementary ones. In short, this is mixed music in which some of the live sound material is integrated into the recording part. Nonetheless, the work, given the prominence of the main dizi part, comes across like a concerto for wind instrument and fixed medium recording. Listening to the music, one might hear reflections of non-Chinese genres such as short jazz-inspired riffs, but the techniques involved are mostly so clearly Chinese that one never feels that one wanders far from a modern work based on traditional musical characteristics. The recorded part is filled with note or sound gestures that, due to the careful use of reverberation, seem to create a coherent musical space. Le chant intérieur is seen by many specialists in China and beyond as Zhang's strongest work. The virtuosity in performance is a key characteristic, and the work develops structurally in clear episodes, but the listener's focus never leaves the interplay between live and recorded (wind instrumental) sounds.
Nuo Ri Lang is a fixed medium work that starts immediately as if one is entering a ritual. Inspired by Tibetan Buddhism, the work uses much source material that can be linked to Tibet's religious music, whether from metallic sources or using vocal sounds. In particular, the vocal sounds are taken from Buddhist chant and manipulated musically to create this celebration of 'the spiritual essence of Tibetan culture' (liner notes with the CD recording). Zhang mentions one stimulus in this work, namely things to do with 'rotation'19 due to the Tibetan notions of 'Zhuan Jung' (turning the wheel with a sutra inscribed) and 'Samsara' (the Wheel of Transmigration). This is easily seen in the video part of the work but is much more challenging to perceive aurally. Nuo Ri Lang shares the episodic quality of Le chant intérieur.
Zhang never makes his works too dense or complex for the listener, and navigating one's way through is not at all difficult. In this work, pitch-based passages are interwoven with more textural ones, but the contrasts are limited, so that they all seem to belong together naturally. The journey is, as remarked earlier, one of a ritual, that of a culture within China that is somewhat exotic to the composer, although he has visited Tibet on more than one occasion. It is in this work that the distinct versions demonstrate another aspect of Zhang's oeuvre.20
Where the dizi piece (Le chant intérieur) was further developed over the years, its versions consisting only of a live/combined recording version with or without video added, Nuo Ri Lang exists in many versions: a recording-only form; recording plus video; a version with a percussionist; and another version with several percussionists, several dancers and video. This is, in fact, rather typical of Zhang's approach to his works. He claims that the percussion part is also inspired by Chinese thought through the choice of the materials used – bronze, leather, wood and stone – underlining the third path. Nonetheless, the addition of the percussion, according to the composer, offers a level of contrast between the lengthy materials in the recording and the more grain-like sounds produced by the live performer. Although much of this material is not pitched, this part of the version comes across as more note-based and indeed forms a strong contrast with the recorded part, which offers nothing similar. The visual additions of the dancers, using Buddhist movement material as part of their choreography, and the video (with Tibetan imagery) create a strong multimedia performative element that demonstrates our paths well, but the key contrast is between the recorded part and the percussion, turning this into a very different listening experience.21
Visages peint is perhaps the work that falls closest to Chinese musical tradition, as the material is solely based on sounds from Beijing Opera. At the same time, it is perhaps the most experimental of the three. There are so many characteristic sounds that one can associate with this genre, and many of them are used regularly throughout this piece, although presented differently due to the use of fairly straightforward electroacoustic montage techniques including simple, clear transformations. Its video part consists solely of normal and manipulated images related to Beijing Opera, in particular the actors' faces and masks. The integration of real and manipulated passages from Beijing Opera forms part of the special character of the work. Also, the rhythmic development of non-percussion samples adds a level of surrealism, as these appear to be coming from the 'wrong' sources. The version of this work with five percussionists22 is again quite different, as the listener's attention is focused on the interplay between pre-recorded materials and live performance and less on the detail of the recordings themselves. Both the audio-visual and the live percussion plus recording versions have their individual strengths, as is the case with the mixed versions of Nuo Ri Lang. In ethnomusicological terms, one might speak of variants as opposed to versions due to the significant differences for the audience. The fixed medium versions of these two works have been chosen above the variants in the first instance, as their links to the three paths are more clearly audible.
Guan Peng
Although Guan's list of electroacoustic works is in fact quite modest, the path that took this composer from his self-described 'early works' (2000 onwards) to the composition Variation (2008) is one from student to the discovery of a sonic signature. The two Fusion works that follow, both mixed pieces, to a degree move away from the influence of his teacher, Zhang. Guan stated in our interview that his earliest works were written whilst studying with Zhang. In fact, the first four works were bundled into a suite entitled 'Cadenza', three of which were included on his CD of that title.23 He describes this period as one where he was introduced to Zhang's works (including analyses of traditional Chinese music in support) and to works from the 1970s and 1980s from the GRM, but not to a huge amount of other repertoire. The CCoM Musicacoustica Festival was still in its early phase,24 not the large-scale annual festival that it is today. There were fewer visitors, in particular international visitors, meaning that the opportunity to expand knowledge of repertoire was not as great as it is for Chinese students today. He divides his few electroacoustic works into 'periods'. The four initial works forming 'Cadenza' are seen to be clearly Zhang-inspired. 'Cadenza' consists of the works Feng Yue (2000), General's Order (2002), Dust (2004) and Extremer (2006).
Guan's technique develops with every work. The connection with Chinese materials becomes more sophisticated as well, the last two works focusing on the Chinese double-reed instrument suona and on Tibetan chant (as in Zhang's Nuo Ri Lang), as well as the multiphonic singing which can be found in and around Mongolia, other outer regions of China and beyond.
The piece that follows the 'Cadenza' suite is Variation (2008). This work, focusing on bell sounds (including Tibetan bells), is very different from the early ones. Guan explains that this is due to two important steps: (a) an interest in Denis Smalley's concepts related to spectromorphology, which he heard about at a Musicacoustica event, and, in consequence, (b) a greater interest in 'sounds' above notes. He maintains that the Chinese influence is less than in the earlier works. However, after I discussed specific aspects such as rhythm, dynamic development and gestures with him, he admitted that his intention was to depart from 'typical' Chinese sounds towards spectromorphologically developed sonic gestures using Chinese materials. To my ear, through this development Guan has found his compositional voice, and he agrees. Variation plays a very careful balancing act between sound and note, exploring gesture and texture much more than the earlier works.
The two Fusion works that follow are Guan's first mixed works and thus focus much more on pitch and note-based aspects than Variation. This might seem strange, as it is not the logical progression that one might have assumed from his previous work focused on spectromorphological issues that had little to do with note-based approaches. He claims that in Fusion 1 (for piano and electroacoustic sounds, 2010) he was attempting to blend Oriental and Western influences, in particular in terms of his approach to the piano, as some of the writing was inspired by guqin (a zither) techniques. In Fusion 2 (for piano, cello, flute and electronic sounds, 2013), he was more interested in the fusion between live instrumental and electronic sounds. He claims (and I agree) that this is his least China-influenced electroacoustic work thus far.25
Guan, like the composers who now follow, was much more guarded than Zhang when speaking of the conscious use of Chinese culture and philosophy in his works. Whether this was due to a personal standpoint or a lesser knowledge of these things is hard to tell. He offered the suggestion that one of the reasons to follow in Zhang's path was the lack of knowledge of the international electroacoustic repertoire. This may have been true, relatively speaking, at the time. However, students in later years clearly have had access (perhaps with a VPN link allowing them to break the 'Great Fire Wall')26 to a much larger repertoire, yet they, too, follow the paths we have defined.
Li Qiuxiao
This composer's electroacoustic oeuvre, featuring several mixed works, is more extensive than Guan Peng's, including multimedia works and some experiments with live electronics. Hearing the Chinese influences is slightly more challenging than in the works of the other composers presented here. Her CCoM experience started at the Conservatory's middle school when she was 12 years old, but she wasn't to hear any electroacoustic music until her 18th year. Her reason for going into the field was most surprising: it was her belief that she would more easily find a job if she chose it! In fact, just after completing her doctorate, she was offered a place at the brand new Zhejiang Conservatory in Hangzhou, which opened officially in October 2015, that is, the month that I met her. Some of her interview comments were equally surprising – even provocative. For example, she said she preferred McDonald's to Chinese food and spoke of Smetana's Moldau as a great inspiration to her. She followed this with the remark that she used to hate Beijing Opera, as it was 'noisy'.
Were the children of Chinese economic success rejecting the past and turning inevitably towards Westernisation and modernisation? Prying a bit further led us back to the paths I have defined, as during her years of study she in fact gained a strong respect for Chinese music and culture, including the much-disdained Beijing Opera. Of course, this respect was further supported under Zhang's tutelage.
Of all of the composers interviewed, Li is the least involved with sampling. She is more interested in translating the techniques and spirit of Chinese music in her works. To exemplify this, she cited her work Bristle with Anger (for clarinet, violin, cello and fixed medium recording, 2009), which is inspired by the Beijing Opera aria 'Ji gu ma cao' and involves modest sampling of Beijing Opera materials. She also said that aspects of her instrumental writing were taken directly from Beijing Opera recitation and instrumental performance. In another Beijing Opera-inspired work, Magnolia (for soprano, related to the 'Qing Yi' role, and fixed medium recording, 2010), the inspiration for the piece and the text used offer a specific Chinese element to the work's content. She says that the best way to convey the Chinese quality of a work is through a story – through lyrics, a programme note or audience knowledge of the original story behind a work – and that this is a path she follows often in her works. Yet Li's approach to composition is also inspired by a variety of tendencies, not least the highly dissonant passages that evolved in Western contemporary music over the last century, and this is quite audible, too, in both compositions.27
In discussing the paths and the border between notes and sounds, she claims that she was taught to pay attention to Chinese philosophy by Zhang but finds it difficult to express and apply. She thus focuses on notated and recorded musical elements and, where relevant, samples. She admits that instruments can produce a wide spectrum of sounds, but as most players in her experience are uncomfortable with new music, she works with them as note-based interpreters. With this in mind, she says that contrasting sounds appear in the recorded parts, thus creating a more heterogeneous mix. She even speaks of using different 'languages' for the live and recorded parts, one being deeply rooted in note-based (contemporary) composition, the other offering broader and largely different sonic possibilities. I do not believe that Zhang would ever describe compositional technique in this way. Li has composed in a combined Chinese-influenced but also Western-inspired manner for the last eight years. She claims to be involved with Chinese thought and music in each work, but her compositions sound much more internationally focused in style than those of the first two composers, due in part to the experience she gained during her study in the US.
Qi Mengjie
The final composer we shall discuss was in the midst of her PhD study at CCoM at the time of the interview. She has been guided equally by former CCoM composition instructor Jin Ping (now Professor at the China Conservatory of Music in Beijing) and by Zhang. This is of interest, as Jin has been involved in teaching live electronics and multimedia to his students in China and the US and thus perhaps offered her a musical outlook complementary to that of Zhang. Qi claimed straight away that the use of Chinese elements in composition was indeed introduced by Zhang but was not forced on his students. She was a relative latecomer to electroacoustic music, making her first work under Jin's guidance when she was 21. In the following year, her second work, Echoes for Woodblock from Peking Opera (2010), composed as a first-year master's student, won first prize in the Musicacoustica Festival's electroacoustic music (acousmatic) competition of that year. This work clearly uses percussion samples but, at the same time, is highly gestural and at times abstract.
Although one can hear that she was brought up with note-based composition in her approaches to horizontal, vertical and structural development, this is nonetheless a much more sonic ('sound-based') work than most of the works by the other three composers. As she puts it, although many of the musical ideas may come through note-based concepts, they 'move' to sound by way of spectral thinking. It is with this piece that she claims to have found her 'way' of composing, creating a structure through joining together clear sections, not dissimilar from Zhang's episodic approach. As she put it in the interview, she is interested in articulating elements from traditional music 'in another way' using electroacoustic approaches. These may include sampling, using granulation to create rhythms (a process sketched at the end of this section) or working with vocal sounds using analysis/resynthesis approaches. Qi is the only composer who integrated a modest discussion of techniques with that of her musical vision. Her approach to Chinese philosophy is limited to words describing her spirit whilst composing. She says that this differs from work to work, from how it is inspired to how it is composed.
Her style has continued to develop. Her latest piece at the time of writing was Lin Chong Fled at Night (for Peking Opera performer and electroacoustic recording, 2015) – from 'Water Margin', one of the most famous novels adapted for Beijing Opera. In this work she uses, for example, granulation of Beijing Opera samples and a composition method involving what she calls the restructuring of traditional musical structure.28 During the first two performances of this work, Qi changed the piece slightly for the two different circumstances. The premiere was in a mid-sized space, the CCoM recital hall, where the Beijing Opera performer was not far away from anyone in the space. He therefore not only danced but also vocalised part of the piece using improvised utterances from Beijing Opera not dissimilar from those recorded in the electroacoustic part. In a second performance a few days later, at a large theatre in the city of Xiamen, she asked the performer just to dance, as the necessary intimacy of the vocals would have been lost in that space. This flexible approach, taking spaces and audiences into account when performing experimental music such as Qi's, is a means of better connecting with the public. Although the material that forms the basis of this work is clearly note-based, the result is much more a sound-based composition. Her next work, untitled and completed whilst this chapter was being written, was for live guqin and recording, thus a step towards her professor's main means of composition. Nonetheless, when asked whether these new forms of presentation would influence her sonic signature, her reply was that this was unlikely, as she had found her way of composing and felt that it was working for her. Regarding the three paths and the future, her response was quite clear: 'I am a Chinese composer and need to keep my speciality and character'.
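For readers unfamiliar with the granulation processes mentioned above, the following is a minimal, hypothetical sketch of how short 'grains' taken from a recorded sample can be scattered onto a pulse to generate rhythm. It is not a reconstruction of any of these composers' actual tools; the function name, parameters and the NumPy-based approach are illustrative assumptions only.

```python
import numpy as np

def granulate(source, sr, grain_ms=60, interval_ms=125, duration_s=4.0, jitter=0.2, seed=0):
    """Scatter short, windowed grains of `source` (a mono float array at
    sample rate `sr`) onto a regular pulse, yielding a rhythmic texture.

    grain_ms    -- length of each grain in milliseconds
    interval_ms -- spacing between grain onsets (the generated rhythm)
    jitter      -- random onset offset as a fraction of the interval
    """
    grain_len = int(sr * grain_ms / 1000)
    interval = int(sr * interval_ms / 1000)
    out = np.zeros(int(sr * duration_s))
    window = np.hanning(grain_len)          # smooth envelope to avoid clicks
    rng = np.random.default_rng(seed)

    for onset in range(0, len(out) - grain_len, interval):
        # displace each onset slightly and pick a random excerpt of the sample
        onset += int(rng.uniform(-jitter, jitter) * interval)
        onset = max(0, min(onset, len(out) - grain_len))
        start = rng.integers(0, len(source) - grain_len)
        out[onset:onset + grain_len] += source[start:start + grain_len] * window

    peak = np.max(np.abs(out))
    return out / peak if peak else out      # normalise to prevent clipping
```

In practice a composer would vary grain length, density and source position over time; fixed values are used here only to keep the sketch short.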
Discussion
These four composers have very different voices, yet there is a reasonably strong case for saying that Zhang Xiaofu is responsible for a form of 'school of composition'. In his case the goal was to celebrate the possibilities of electroacoustic technology whilst, at the same time, offering his students the ability to celebrate the breadth and depth of Chinese culture, including Chinese musical culture, in their works. If we argue that dramaturgy is one of the greatest access tools in this field, it is surprising that the third path, of non-musical aspects related to Chinese culture, was the one least trodden. Again, this may be a generational development that reflects the entire world's loosening connections with the past. Look, for example, at the demise of most European countries' folk music over the last 100 years or so. Chinese youth are also exhibiting similar changes of attitude and interest. Nonetheless, what is not explicit can be made implicit, as Chinese music is a reflection of broader culture, and thus these new works may indirectly have been informed by, for example, Taoist or Buddhist thinking.
In contrast to most Western composers, there was a clear (and noteworthy) avoidance in all four interviews of talking much about compositional construction. This does not mean that all four composers engage solely in intuitive, bottom-up composition. But the avoidance of formal techniques as a topic and the greater emphasis on the flow of sounds suggest a more oriental sensibility. This third path is perhaps the most intangible – as it would be in most countries, unless one is creating stereotypes. Still, it was clear that all the composers were consciously demonstrating a desire to treat their own culture with both respect and dynamism.
China's underground scenes
It has to be said that everything discussed previously is intended to be presented as 'high art'. I have argued elsewhere29 that sound-based music forms its own paradigm (or 'supergenre') that often ignores the high/low art division. As time has progressed, some staff and students at Chinese conservatories have created works that sit outside of this high/low boundary as well, some in the form of live electronic performance, others as sound installations, for example. Of course, there is a good deal of electroacoustic music made outside of the walls of conservatories or universities in China. I have seen very little crosstalk between the conservatories and the independent musicians creating very inventive works. These musicians extend the range of usage of Chinese elements to soundscape, text-sound composition and much more.30 Many have a more experimental spirit than the four composers introduced here, and thus perhaps they come closer to the tabula rasa approach introduced earlier, in the sense that some of these artists may not have had any formal training or have been introduced to key repertoire at all, at least prior to the availability of works by way of the internet.
The pioneering work of the Taiwan-born and Hangzhou-based sound artist and art historian Yao Dajuin has provided invaluable materials, including online broadcasts31 and a series of CD recordings, such as a wonderful overview of underground work that can be found on the CD 'China: The Sonic Avant-Garde'.32 The online liner notes,33 although brief, offer a valuable overview of the types of experimentation that Yao included on this double CD release. Examples of interest to this discussion include:
•	Hu Mage's 'chai-mi-you-yan-jiang-cu-tang' (firewood, rice, oil, salt, soy, vinegar & sugar, 2002), a humorous piece in which the title's syllables are grabbed from samples taken mainly from popular music. These are difficult for anyone to catch immediately; however, when they do, this familiar Chinese phrase is reborn artistically.
•	Yao Dajuin's (or Dajün, as notated in the liner notes) own soundscape and sampled compositions under the artistic name of {City name} Sound Unit. The two examples here are (Beijing Sound Unit) minibus pimps (1999), which captures the cacophony of sounds of minibus drivers attempting to attract customers on the streets of Beijing, made in collaboration with Li Ruyi. Yao himself, as Shanghai Sound Unit, recorded two broadcasts of one single text, which he synchronises in the piece Leili Fengxing (With the Power of Thunderbolt and Speed of Wind, 1998). Yao calls this 'reverse-engineered sound art', using a government propaganda text that will leave its Chinese listeners in hysterics, not least due to its cliché-driven writing style and, thus, the sound of the synchronous voices that results from it. Yao comments: '[there is] a juxtaposition of high-minded bureaucratic ideals and lower-class colloquial diction' allowing him to double 'the sarcasm inherent in the text' (from the online CD notes).
Yan Jun is an important underground figure in this context and will serve as this section's case study. Yan (b. 1973) was involved in popular music criticism and, later, in organisation prior to investigating more experimental forms of music making.
Today he is known as an experimental musician and poet, but also as someone who has organised a number of experimental music events, many involving both improvisation and noise, especially in Beijing, where he is based. He is also responsible for the guerrilla label Sub Jam. In conversation with him in May 2015 (Beijing) and December 2016 (Leicester), he spoke of the early years of experimentalism in China and of how important Yao's broadcasts had been.34
Yan's work ranges from popular music-influenced approaches to field recording, chant, noise and much more. One might suggest that he is more concept-driven than someone with his own particular 'sound'. He agreed that his music was neither high art nor popular culture but simply something for his particular listening community. He added that none of the Chinese musicians in his direct circle had any links to conservatories such as CCoM and that many are self-trained.
I asked Yan about the three pathways and was told that he does not normally use samples specifically from Chinese music in his works but might include samples taken from recordings of any kind that he possesses. He also said that he does not feel that his music is inspired by any particular Chinese music tradition.35 Nonetheless, he had been aware of traditional Chinese music from a young age, including Qinqiang, a form of Shaanxi opera distinct from Beijing Opera. However, regarding the point of non-musical influences, he shared a number of passionate comments. He suggested, for example, that he prefers to call many experimental improvising musicians 'free-will musicians' (as opposed to musicians influenced by free jazz).36 This notion of free-will music demonstrates, like many other statements Yan shared, deep roots in Buddhist thought. He makes a distinction between the 'free' in music and the will of being, something he sees as natural. He is not particularly interested in musicians who simply perform or act. Although it is hard to conceive of him rehearsing a particular fixed performance, he stated: 'If you don't practice, you can't talk'. This sentence concerned those who claim to understand Buddhism, illustrating his view about knowledge and the right to communicate; he added that this was obviously also true of music. One could claim that this thought also holds for students in the Western tradition, but there the goal is normally fixed; in Yan's case, practice allows for free will. Speaking and acting represent, in Buddhism, reaching one of its highest levels (of which there are eight in Chinese Buddhism).37
If musical principles were not directly influential, artistic ones would be. He spoke of the path from calligraphy to painting for Chinese artists throughout history, the learning of embodied flow and gesture forming the basis of both. He cited the work of Ni Zan (a fourteenth-century painter) in which the painted shore of a body of water may appear like a straight line; however, upon closer inspection, the embodied gestures lead towards a complex of textures and movement based on a well-practised flow. This reminded me of the aforementioned remark about notes possessing the right to their own universe. When Yan agreed, I suggested that this was a particularly Oriental approach applied by only a few in the West. A Western approach might be based more on a phrase or other form of passage work. However, Yan does not like to see things simplified to 'this sounds Chinese' or Asian.
Another Buddhist-inspired remark he made was: 'Don't try to make a sound better – just a sound', meaning that a given sound already possesses its own beauty. For someone associated with the art of noise, these are remarkable thoughts. When discussing underground music in China, in particular in Beijing, he declared that many, but not all, such musicians had roots in popular music traditions. He distinguished between those who continue to draw on those roots within a more experimental sonic context and those who have moved on, perhaps becoming more involved with approaches associated with conceptual art. He claimed that there is currently a renaissance of concept art, including many younger artists who may not know the work of the Western conceptualists of the 1980s. He also made some distinctions between artists of his generation, the first generation to have the freedom to experiment, and those practising today.
Firstly, he stated that his generation knew little about repertoire and history until Yao's broadcasts became available, as other repertoire did subsequently over the internet. He distinguishes his earlier career from that of younger artists today, who are confronted with (as he says) 'too much information'. Secondly, he describes the artists of his generation who developed these new means of organising sounds as 'small', relating to their modest personalities, a clear anti-celebrity stance. He added that one might call this 'loser's music' but also music made by people 'who are content with small sounds that last a lifetime'. Regarding the second-generation artists, he added that 'being big is retro'. Unhappy with that, he suggested that artists 'do not need to be sexy – don't move, jump' but instead should investigate novelty through their dealings with sound.38 He felt that some younger musicians were becoming more influenced by a popular culture that they shared and hence were more interested in spectacle, a form of 'big'. The names of friends, artists of his generation, include Zhe Wenbo, a self-taught clarinet player; Soviet Pop, a synthesiser duo; Yan Yulong, a self-taught violin player who is 'unable to read scores'; and Liu Xinyu, 'a no-input mixing board player who had never heard of Glenn Gould and was not interested in Cage'. Although these artists may, in some way, be demonstrating the influence of the likes of Cage, Lucier and Fluxus, 'they had no knowledge of them'; instead, he suggested, this music was 'reacting to its own reality'.
Naturally, not all underground music celebrates Chinese culture in any direct way; still, these few paragraphs have demonstrated that many outside of academe are just as linked to their culture as Zhang Xiaofu. What we learn from these examples is that two of our three pathways are being followed, namely the use of samples and extra-musical Chinese considerations, in this case more socio-cultural or political than religious or philosophical. The use of specifically Chinese composition practices, particularly instrumental ones, is most likely less relevant in these sound-based contexts. Suffice it to say that the paths presented in this chapter's CCoM-based discussion are by no means unique to the Chinese conservatories, as illustrated by the examples introduced in this section.
In sum
A key point of interest is how these traditional elements can be innovatively applied in music. Although our CCoM case study is a specific example, it is to a large extent generalisable across conservatories in China.39 As discovered in the examples just given, the three paths are also of value in experimental music outside of the conservatories, although perhaps less clearly. What is important here is how the specific cultural elements, and the respect associated with them, when combined with a musician's aesthetic, are communicated to various audience communities – a subject worthy of further investigation.
I have often spoken of an audience's need to connect with new experimental forms of music. Shared experience is the most efficient way of achieving this, and using cultural aspects that are identifiable to the listener offers a great means of access. The works by Zhang Xiaofu and the 'generations' of students that have followed him offer various approaches to composing electroacoustic music as well as means of connecting with a public broader than the elitist one mostly associated with this musical genre elsewhere. This has to do, at least in part, with the experiential links they offer in their music.
To what extent are these paths of relevance beyond China? The answer has to do with the extent to which composers seek such cultural and experiential links. As China's introduction to electroacoustic music has been somewhat different to that of other nations, no generalisation is possible beyond the fact that this country would find itself at the high end of the scale in the application of an international version of the three paths.
The proportion of European and American composers, for example, consciously following the path of using local musical or extra-musical cultural influences would most likely be lower. In contrast, in these other countries the application of sampling in its broadest sense would perhaps be comparable with our Chinese examples, whether the samples are of other music, urban or nature sounds, or sounds from daily life, even adding the links gained by performing site-specific works. This relaxation of interest in one's own culture can be seen as both an indication and a consequence of today's global culture. Put another way, such musicians' culture is more of a global culture, making the pursuit of the first path a very different notion than in the Chinese case. This same global culture is today quite present in China, too; however, thus far many musicians have chosen to retain and modernise their culture through their music.40
Postlude
Through the exemplary vision and sound of Zhang Xiaofu, I have found ways of working with Chinese musical approaches, using Chinese instruments and Chinese recordings of music and voice (for example from Chinese radio broadcasts, but also music recordings), focusing both on the cultural and the musical aspects in some of my own works. Perhaps being ‘an outsider’ can allow one to take things to places that others from within the culture itself might not visit creatively, whilst intending to demonstrate the most profound respect to the ‘other culture’. All three paths have been followed. The music produced obviously offers a different form of signification to Chinese and non-Chinese listeners, as is the case in the works discussed earlier, but it has been invigorating to reach deeply into a culture that has proven its importance both historically and through permanent renewal.41
Notes
1 I first visited China in 1993 as a visiting professor at the Central Conservatory of Music in Beijing and have visited on a regular basis since. Watching electroacoustic music develop rapidly in this country has been an extraordinary phenomenon. The term ‘conservatory’ is used here instead of the British ‘conservatoire’ due to its being the favoured word in China.
2 The word ‘samples’ is meant here as the use of a recording, regardless of length, of any sound, be it a clip from a musical work, a nature recording or anything else.
3 See Manuella Blackburn (2016) ‘Analysing the identifiable: cultural borrowing in Diana Salazar’s La voz del fuelle’ for a related discussion.
4 See, for example, http://portal.unesco.org/culture/en/ev.php-URL_ID=1391&URL_DO=DO_TOPIC&URL_SECTION=-277.html [accessed 11.11.2015].
5 Chinese names commence with the family name, and this tradition will be maintained in this chapter.
6 For a wider context within the region see Battier and Liao, ‘Electronic Music in East Asia’ (this volume, Chapter 2).
7 The eleventh conservatory is due to open in 2016 in Harbin, where a similar electroacoustic presence is expected as in Hangzhou.
8 Voice-recorded WeChat messages on 3 February 2016.
9 The author also studied with Chou Wen Chung at Columbia, a composer closely associated with Edgard Varèse. He played a major inspirational role regarding both Chinese music and culture.
10 Based on the author’s experience of Chinese electroacoustic works and conversations with Chinese composers.
11 Mixed music normally refers to works for voice(s) and/or instrument(s) and fixed medium recording. Today it can also refer to the former combined with the use of live real-time electronics.
12 In Landy (2007).
13 The interviews with all four individually took place during and just after the 2015 Musicacoustica Festival in Beijing (30 October to 1 November).
14 Both these first two works can be found on the CD with this piece as its title, on Zhongguo wen lian yin xiang chu ban gong si (中国文联音像出版公司) (China Federation of Audio-visual Publication Companies): Beijing [2006] ISRC CN-M35–06–0002–0000/A•J6.
15 Recording not yet commercially available.
16 It is worth taking note of the fact that a note can be notated onto a score, but one also hears notes or note events/objects in recorded materials. Similarly, one can notate sounds beyond notes on scores using extended techniques; sounds or sound events/objects are common currency of acousmatic sonic composition.
17 Examples from all three works, as well as the other composers discussed, are in the CCoM Case Studies – List of Works (below).
18 He said that he had also made some popular music and chanson arrangements in this studio during his stay.
19 Zhang used the word ‘round’ in his programme notes.
20 See also the remark about variants following the discussion of Visages peint next.
21 There is a notable exception of the percussion-focused episode within the piece for a short while starting at ca. 11.30 in the tape-only version – in this case integration takes place between the percussion and the recording.
22 It was composed prior to the video version, although both are dated 2007. Further versions for two voices and recording (2009) and jinghu (a bowed instrument) and recording (2011) were also composed.
23 ‘Cadenza’ People’s Music Publishing House: Beijing [2010] ISRC CN-M26–09–0011–0010/A•J6. This CD also includes his work Variation.
24 It commenced in 1994 and grew progressively to the week-long annual event.
25 Guan also composes instrumental music including music for film, similar to Zhang. These works are not covered in the present discussion.
26 At the time of writing this chapter, there was no way to gain access to sites such as Google, Facebook and, more important here, YouTube or Soundcloud without a VPN, through which one is ‘seen’ to be logged in in a country other than China, thus allowing the user to reach such sites.
27 These tendencies were further reinforced during a visiting scholar residency she had at the University of Missouri-Kansas City (2014/15) at the end of her PhD period. It is at this institution that Chen Yi is currently based.
28 This interesting notion is to do with recontextualisation and is analogous with the use of a sample recontextualised in a work.
29 E.g., (Landy 2007) op. cit.
30 It is hoped that this will be a subject of a future publication. In the research that took place during the preparation of this chapter, it was discovered how modest many of these musicians were, and how difficult they were to find and to interview, in sharp contrast to the conservatory composers.
31 A worthwhile starting point is a webpage with downloads of a number of his broadcasts including a very wide mix of Chinese and overseas music: www.mediafire.com/?d6hm7hniwrneu (accessed 11 November 2015).
32 Post-Concrete 005: Berkeley (2003).
33 www.post-concrete.com/005/notes.html (accessed 11 November 2015).
34 They have subsequently worked and performed together.
35 Ironically, having informed me of this, he performed an untitled piece the following night in which he used techniques borrowed from Tibetan chant and, whilst using a microphone and particular loudspeakers, moved around the space, changing pitch until his chant found and benefitted from a resonating frequency of the space. This was clearly an exception, whether conscious or not.
36 This point and a few others have been taken from both our conversation and Yan’s text that is to appear in the second volume of Audio Culture, edited by Christoph Cox and Daniel Warner, in preparation at the time of writing.
37 In our email exchanges, he also spoke of noise groups, such as Mefeisan, being influenced by an anarchic form of Taoism and of educated high art composers as Confucianists.
38 There are, of course, exceptions: one of the younger musicians with whom he works who is clearly interested in sound and very conceptually driven is Zhu Wenbo.
39 All of the composers interviewed agreed that this was the case.
40 A laudable goal in my view.
41 The works include: Sonic Highway Exits Neglect Grammar for sheng (mouth organ) player/speaker and stereo recording (1995), China/Music 中國/音樂 Old/New 舊/新 for eight-channel recording (2013), Chinese Radio Sound 中国广播之声 for five-channel recording (2013) written in collaboration with Shenyang Conservatory of Music students, Zhao Zhengye and LIU Zhuoxuan, and xūn 埙 old/new 舊/新 for eight-channel recording (2015).
References
Blackburn, M. (2016), ‘Analysing the identifiable: cultural borrowing in Diana Salazar’s La voz del fuelle’, in Expanding the Horizon of Electroacoustic Music Analysis (S. Emmerson & L. Landy eds.). Cambridge: Cambridge University Press.
Landy, L. (2007), Understanding the Art of Sound Organization. Cambridge, MA: MIT Press.
CCoM Case Studies – List of Works
Zhang Xiaofu – electroacoustic works
Le Chant Intérieur (flûte basse Chinoise et bande), 1987–1988 (10’12”). Premiere: Musicacoustica-Beijing, Wang Ciheng, 1994.
Ciel cent réponse (soprano dramatique et musique électroacoustique), 1992 (11’30”). Premiere: Paris, Chen Yanyu, 1992.
Dialogue entre le monde différents – Suite de musique électroacoustique, 1992–1993 (9’50”). Premiere: Paris, 1993.
Nuo Ri Lang (multi-group percussion mixte et musique électroacoustique), 1996 (18’44”). Premiere: INA GRM, Radio France, Jean Pierlot, 1996.
Esprits de la montagne (soprano dramatique et musique électroacoustique), 1996 (11’44”). Premiere: MUSICACOUSTICA-BEIJING, Silvia Schiavoni, 1996.
Virtual Reality (two erhu and electronics), 1996 (14’20”). Premiere: MUSICACOUSTICA-BEIJING, Yu Hongmei, 1996.
Beijing Arias (jing hu, Beijing Opera voice and electronics), 1997 (13’58”). Premiere: MUSICACOUSTICA-BEIJING, Yu Hongmei, Wang Xiaoyan, 1999.
The Yarlung Zangbo River II (3 Tibetan voices, symphony orchestra and electronics), 2001–2002 (22’50”). Premiere: Beijing, China National Opera Symphony Orchestra, Luo Sang, 2002.
Le Chant Intérieur – Fantastic Tone Poem (bass bamboo flute, digital image and electroacoustic music), 1987–2004 (13’30”). Digital image: Ma Ge. Premiere: MUSICACOUSTICA-BEIJING, Wang Ciheng, 2004.
Nuo Rilang (multi-group percussion, digital image and electroacoustic music), 1996–2004 (18’40”). Digital image: Ma Ge. Premiere: MUSICACOUSTICA-BEIJING, Yu Xin, 2004.
Visages peint dans les Opéra de Pékin No.1 (5 groupes de percussion et musique electroacoustique), 2007 (17’30”). Premiere: Beijing, Percussion Clavier de Lyon, 2007.
Visages peint dans les Opéra de Pékin No.2 (image digital et musique electroacoustique), 2007 (13’55”). Digital image: Ma Ge. Premiere: MUSICACOUSTICA-BEIJING, 2008.
Visages peint dans les Opéra de Pékin No.3 (Peking Opera soprano, dramatic soprano and electroacoustic music), 2009 (12’50”). Premiere: MUSICACOUSTICA-BEIJING, Wang Xiaoyan, Xenia Hanusiak, 2009.
Visages peint dans les Opéra de Pékin No.4 (jing hu and electroacoustic music), 2010–2011 (11’50”). Premiere: MUSICACOUSTICA-BEIJING, Yu Hongmei, 2011.
@ to the Mars (percussion, interactive image, sound and electroacoustic music), 2012 (10’20”). Premiere: MUSICACOUSTICA-BEIJING, Yang Yiping, Feng Jinshuo, Wang Chi, Wu Xi, 2011.
Le Chant Intérieur – Fantastic Tone Poem (bass bamboo flute, dance, digital image and electroacoustic music), 1987–2012 (13’30”). Choreography: Chen Maoyuan, Digital Image: Ma Ge. Premiere: MUSICACOUSTICA-BEIJING, 2012.
Ciel Cent Réponse (voix de baryton dramatique, 2 groupes de percussion et musique électroacoustique), 2014 (11’20”). Premiere: Poly Theatre, Beijing, Shi Kelong, Percussion Clavier de Lyon, 2014.
Earthbound Annals: Memories of Sound from Traditional China (multimedia music theatre for Chinese folk singers, suona horn, mixed percussion, natural sounds and electronics), 2014 (11’20”). Premiere: Poly Theatre, Beijing, Qi Fulin, Chen Youping, Wang Min, Xu Haixia, Shen Xiangli, 2014.
Nuo Ri Lang (multimedia music theatre for multi-group percussion, dance, digital image and electroacoustic music), 2014 (18’50”). Choreography: Wan Su. Digital image: Ma Ge, Zhang Chao. Premiere: Poly Theatre, Beijing, Thierry Miroglio, 2014.
Dancing Ink (xun/xiao/di, erhu, pipa, percussion and electroacoustic music), 2015 (17’20”). Premiere: GRM, Radio France, Li Yue, Yu Lingling, Guo Gan, Thierry Miroglio, 2015.
Dancing Ink (xun/xiao/di, erhu, pipa, zheng, percussion, electroacoustic music, video animation and multimedia performance), 2015 (17’20”). Video animation: Lampo Leong. Premiere: MUSICACOUSTICA-BEIJING, Li Yue, Li Jia, Song Feifei, Wang Ning, Bai Kai, 2015.
Guan Peng – electroacoustic works
Feng Yue (acousmatic music from Suite Cadenza), 2000 (6’00”)
General’s Order (acousmatic music from Suite Cadenza), 2002 (6’04”)
Dust (acousmatic music from Suite Cadenza), 2004 (10’30”)
Extremer (acousmatic music from Suite Cadenza), 2006 (12’31”)
Variation (acousmatic music), 2008 (11’30”)
Fusion I (piano and electroacoustic music), 2010 (8’44”)
Fusion II (piano, cello, flute and electronic sounds), 2013 (8’40”)
Li Qiuxiao – electroacoustic works
Trip to the distant past (electroacoustic music), 2008 (7’10”).
Sacrifice (guanzi, zheng, cello, percussion and fixed media), 2008 (11’30”).
Rhythm (electroacoustic music), 2009 (8’12”).
Bristle with anger (clarinet, violin, cello and fixed media), 2009 (7’14”).
Magnolia (soprano, Peking opera ‘qing yi’ and fixed media), 2010 (8’11”).
Speak softly water (electroacoustic music), 2011 (6’52”).
Mushroom cloud (electroacoustic music), 2012? (6’10”).
Silhouette of childhood (piano and fixed media), 2013 (15’16”).
Speak softly water (electroacoustic music and video), 2013 (5’10”).
The dancing shadow (flute, cello, piano and fixed media), 2013 (8’30”). (Commissioned by ELECTROACOUSTIC-BEIJING).
Wu Song fights the tiger (clarinet and fixed media), 2014 (6’18”). (Written for Dr. Jun Qian’s “East meets West” recording project).
Comedy (string quartet and fixed media), 2014 (9’20”).
Dancing on the plate of jade (electroacoustic music), 2015 (5’01”).
Qi Mengjie – electroacoustic works
Electroacoustic music
Spectral Color, 2011 (6’27”)
Echoes of woodblock from Peking Opera, 2012 (5’20”)
Transfiguring of Crystal, 2014 (4’46”)
The Road to Krakow, 2014 (4’32”)
Mixed electroacoustic music
Autumn (violin and electroacoustic music), 2013 (11’00”)
Lin Chong Fled at Night (Peking opera singer and electroacoustic music), 2015 (7’00”)
4 TECHNOLOGIES OF GENRE
Digital distinctions in Montreal
Patrick Valiquet
Introduction
Genre is a perennial problem in the history and theory of electronic music. The very use of a term like ‘electronic music’ throws up tricky problems of association, distinction and categorisation, all of which can be linked to questions of genre. What musics can we identify as electronic, and how are they related? What musics does the category exclude? Are such distinctions merely pragmatic – should categories change depending on when and how we want to use them – or do they help us say something essential about the way the music is made and what it means? Different genres of electronic music offer different responses to these questions. The electroacoustic literature provides several examples. One of the most familiar systems of distinction in the canon revolves around national or studio-based ‘schools’. Pierre Schaeffer’s brief 1967 guide to musique concrète for the instructional Que sais-je? series, for example, surveys the field by country, inviting novice listeners to hear particular studios or composers as aesthetically ‘characteristic’ of their nationality (Schaeffer 1967, 114). He divides the work of his own compatriots into aesthetic tendencies defined by the series of historical impasses he sees as revealed in his own research. To a contemporary listener, many of his examples might not seem to count as musique concrète. His impulse is to claim as much territory as he can. Since the majority of the production he describes took place in public radio studios, the nation was an obvious point of reference. While he does imply a loose set of inclusions and exclusions structured around the distinction between ‘abstract’ and ‘concrete’, he never explicitly disentangles the questions of genre and geography. More recent treatments of electroacoustic genre tend to focus more on an audible means of distinction. In Leigh Landy’s (2007) account, for example, the appeal to aesthetic plurality meets its limit at the boundary between ‘sound-based’ and ‘note-based’ forms of expression. Here understanding a form of music making as concrete is no longer a matter of where and when it occurs but of how it is conceived and materialised. There has been much discussion of the tendency toward eclecticism and hybridity in electroacoustic music (Atkinson and Emmerson 2016). Whether or not they agree upon which genre distinctions should play a role in the way electroacoustic music is made, used or analysed, many theorists and composers have a stronger and stronger sense of the role genre does play. Surprisingly few, however, have asked deeper questions about genre in action, nor have many engaged with current debates in genre theory across disciplines. Landy treats the problem as a matter of taxonomy: his primary concern is with expanding audiences, and for him this means finding meaningful ways of classifying the music to help listeners make sense of the relationships that musicians might take for granted. He leaves any question about how genre relations are formed – the ‘binding characteristics’ of works within a genre, the relations between genre construction and listening experience, or the role of ‘valorisation’ – to future research (Landy 2007, 208). Problems of this sort demand an empirical response, to be sure, but they also invite us to rethink our theoretical starting points. For one thing, as Georgina Born reminds us, no empirical study is undertaken from a neutral theoretical point of view. But furthermore, the goal of empirical study should be to adapt and improve theory (Born 2010). Studying the mechanisms of genre in electronic music thus demands both an awareness of theoretical starting points and an openness to new theoretical possibilities. It is in this spirit that turning to other disciplines can be most useful. Other kinds of music and art can tell us important things about the way genre works in electronic music, and thus my chapter provides a theoretical primer that cuts across a fairly wide range of musicological and aesthetic analyses. I do so in order to establish a starting point which is more holistic than typical treatments of genre in the electroacoustic literature. Following Born and others, I argue that systems of categorisation, as well as situating the musics they organise, are situated culturally and historically themselves. They orient listeners with respect to the value of musical objects but also the responsibilities and identities of musicians and their audiences (Born 2011). They may also presume a particular understanding of the historical succession, temporality and future direction of musical expression (Born 2015). The work of genre orders not just the kinds of music in question, then, but the whole universe of musical materials, behaviours and values. Digital technology is often cited as a mitigating factor in the dynamics of genre. Assessments of its influence in electroacoustic music are generally optimistic. We often read of the ways digitalisation has enabled the rise of the ‘bedroom producer’, and thus of new ‘hybrid’ styles unencumbered by academic tradition, by ‘democratising’ tools which were once too expensive for all but the best-funded institutions (Waters 2000; Emmerson 2001). As I show elsewhere, however, narratives of media democratisation pre-exist the technologies we tend to associate with them today (Valiquet 2014; Valiquet forthcoming). Such assessments deploy the notion of the digital as a synecdoche for the speed, interchangeability and mutability of modernisation as a whole (Rabinovitz and Geil 2004, 4). But while in some cases it is certain that digitalisation has opened access and broken down barriers, in others the digital constitutes new impasses. The sound and materiality of the digital can represent stasis and rigidity just as easily as change. This is illustrated best by situations in which technologies are used to isolate or exclude genred associations. In order to understand such situations, we must think of digital technologies not as independent objects external to the dynamics of genre but as themselves mobilised by and for genre. It is in this sense that I allude in my title to the Foucauldian account of gender construction in Teresa de Lauretis’ 1987 essay Technologies of Gender.
Genre, like gender, is not a property of things but a complex political apparatus that uses bodies, behaviours and machines to construct relations of belonging (Lauretis 1987, 3–4). What the digital does to genre depends upon what genres the digital is understood to embody at a particular time and place. Even if we see evidence of hybridisation on a large scale, we cannot assume that the logic is the same at a local level. My argument has global implications, but my illustrations focus on electronic musicians in the Canadian city of Montreal. Although for decades Montreal has been recognised in the electroacoustic literature as a central site of acousmatic production (Dhomont 1996), it has received relatively little attention from electroacoustic historians and theorists. The ethnographic fieldwork I conducted there in 2011 and 2012 as a member of Georgina Born’s ‘Music, Digitization, Mediation’ project examined negotiations of digital media policy among educators, producers and audiences of a variety of electronic musics. While my aim here is not to pin down a particular aesthetic or historical background against which to measure the various genres I studied, those I touch upon in my examples can be seen as related in part by their rejections of (or by) the city’s academic electroacoustic scene. My goal in fieldwork was to find out to what extent the development of electronic music practices outside the university studios bore out the dominant narrative of digital technology as a democratising and hybridising force. Although my points of reference are diverse, this should not be read simply as a celebration of new or potential inclusions. Aesthetic plurality provides an even more important opportunity for fresh critical reflection on the changing materialisation of exclusion.
Repetition and difference
Histories and theories of electroacoustic music traditionally organise genres around the trope of technological progress. The reasons for the bias have been partly professional. Composer-historians like Joel Chadabe (1997) and Peter Manning (2013) wrote in part to valorise the work of peers and predecessors whom they saw as being misunderstood by a conservative establishment. One of the central assumptions behind their historiography was that technology would necessarily evolve in the direction of affording humans more sophisticated musical expressions.1 But the trope of technological progress also helps to exclude musics understood to be not ‘idiomatic’ to the medium. The widely read history by Thom Holmes (2008), for example, defines its object as ‘music that exists because of the use of electronics rather than music that simply uses electronics’. The correspondingly ‘idiomatic’ teleology moves toward more and more sophisticated and immediate ways of manipulating sound and thus toward more and more direct channels of communication between composer and listener (Théberge 1997, 158–159). In a similar vein, many electroacousticians cast a McLuhan-esque distinction between what they do and the antiquated disciplines of ‘note-based’, ‘literate’ or, in Trevor Wishart’s terminology, ‘lattice-based’ musics still organised around scales and grids (Wishart 1996). This celebration of electroacoustic music as a kind of ‘secondary orality’ (Ong 2002) can serve social and political purposes as well. Georgina Born (1997), for example, finds that an idealisation of orality can regulate the economy of technical knowledge in cultural institutions, sustaining hierarchical power relationships at the same time as it sustains classical liberal notions of universal accessibility and freedom of expression. The assumptions here are effectively akin to the ‘phonocentrism’ that Jacques Derrida once criticised in French semiology (Derrida 1974, 12). Expressions that bear the trace of writing are cast aside as pathological deviations from human ‘nature’, compromises to be abandoned as knowledge about music progresses towards the ‘truth’ of pure sound (ibid, 38). While they rarely become salient on a critical level, these historiographical currents work between the lines of the electroacoustic literature to expand and to police its genre boundaries, particularly those that distinguish it from the electronic dance musics which have threatened its hegemony since the 1970s. Around the turn of the millennium, however, electroacousticians began to rethink their histories. Reconsiderations of the Western art music canon had already spread across musicology over the previous decade (Bergeron and Bohlman 1992; Cook and Everist 1999). New voices downplayed modernist tropes of formal autonomy, progress and authenticity, as well as extending a limited degree of legitimacy to dance and popular musics (Chadabe 2000; Emmerson 2001). Electroacoustic composers were increasingly forced to confront the challenges of the burgeoning popular field. Several factors contributed to the proliferation of electronic genres outside of academe, including new dance subcultures (Thornton 1995), new digital production technologies based on sampling and MIDI-sequencing (Schloss 2004) and a recording industry keen to reignite the market for dance music on compact disc by rebranding it as ‘electronica’ (Taylor 2001; Morris 2010; Lynskey 2015). But the nascent postmodernism did not disturb the balance of power. A new trope of ‘trickle-down’ innovation took hold, according to which electronic dance musicians had simply reappropriated techniques pioneered by the avant-garde (Waters 2000). Prestigious competitions like those at the Ars Electronica festival changed their profiles to incorporate the perceived expansion, awarding top prizes to emerging musicians like Aphex Twin and labels like Mego (Haworth 2016). The trope of diversification and democratisation found purchase in middle-brow music criticism as well, especially in magazines like The Wire and in popularising surveys like Mark Prendergast’s The Ambient Century (2000) and Peter Shapiro’s Modulations (2000). Thus, the new genres were easily subsumed into the phonocentric narrative of electroacoustic progress. As electronica moved closer to the interests of electroacoustic music, the story of its origins shifted from one of sordid rave chill-out rooms and ambient techno B-sides (Reynolds 1998, 381–400) to one of ahistorical dynamic forces. As Fabian Holt (2007, 126–127) has argued, however, the vibrant cosmopolitan pluralities the new critics advertised often stood for very narrow social spaces in practice. Scholars have persisted in selecting ‘intelligent’ dance music genres for insertion into a lineage of high art engagements with repurposed media and nonstandard synthesis (Thomson 2004; Kelly 2009; Haworth 2013). ‘Musicians on the fringes of dance music soon enough looked backward to discover the great history of experimental electronic music’, writes a contributor to the 2009 Oxford Handbook of Computer Music, ‘and automatically merged to become part of that progression (even had they not looked, they could not have helped the latter)’ (Collins 2009, 339). But the mix of faint praise and outright dismissal in these readings, wrapped in a reduction of aesthetic change to technological progress, undermines the electroacoustic tradition’s claims to diversification. The unfortunate implication is that the privileged white male musicians who make and listen to electroacoustic music see fit to attribute more agency to technology than to their counterparts in predominantly black and gay nightclub subcultures.2 As Timothy Taylor (2001, 67) has written,

going back to the European avant-garde is more compelling than a more historically accurate [genealogy] that traces their music to African Americans and gays. As such, these latter groups are almost wholly exscripted as techno is championed as an intellectual music to be listened to, not danced to.

What we find in the historical record of the millennial turn to electronica is thus not so much unfettered diversification as selective canonisation. The technological agents imagined to be behind the shift mask human agents whose practices and identities might break the teleological model. This is far from providing useful terms for a theory of genre as such. But it does foreground a relationship that any workable theory of genre in electronic music must consider: that between technology and power. Genre theory has seen a resurgence in recent work on late-twentieth-century and contemporary musics.
Because avant-gardes place a high degree of value on innovation and individuality, they present a particular challenge to theories of genre that emphasise textual regularity and iterability (Atton 2012; Malcolmson 2013). Musicians who see creativity as the defining element of their practice will often disavow the kinds of convention that identification with a genre community implies. Musicologists and ethnographers examining these complexities have been forced to fashion more holistic theories of the genre formation process. Born, for example, proposes to move beyond Pierre Bourdieu’s class-derived account of genre hierarchy towards ‘a positive account of aesthetic formations, attentive to their productivity and genealogical longevity as well as to artists’ role in reproducing or transforming them’ (Born 2010, 188). Other theorists, many of them building upon Born’s work, have turned to actor-network theory (ANT) for tools to analyse the production of genres (Drott 2013; Piekut 2014; Levaux 2015; Haworth 2016). These scholars highlight rhetorical flexibility and invention over stable, long-term codifications. But it is important to reconcile the dynamic aspects of ANT with Born’s insistence on the weight of technological and institutional mediations. Studies of popular music typically emphasise the way genres normalise creativity by mediating social hierarchy (Fabbri 1982). The rules, restraints and distinctions negotiated by musicians help to assemble listening publics into more or less stable ‘genre cultures’ or ‘taste communities’ (Frith 1996; Negus 1999). But the homological relationship between these social and aesthetic orders is complicated by the way genres are organised and used in everyday life. Ruth Finnegan (1989), for example, has charted how musicians and listeners form habitual ‘pathways’ between genres as they navigate a local musical scene. Finnegan’s subjects choose between contrasting musical ‘worlds’ that may or may not map easily to class, generation or gender. Simon Frith (1996) suggests a similarly experiential model when he calls attention to the differing ways listeners live in a genre. ‘[T]he genre labelling process is better understood as something collusive than as something invented individually’, Frith writes, ‘as the result of a loose agreement among musicians, fans, writers and disc jockeys’ (1996, 88–89 [original emphasis]). These approaches leave us with a notion of genre as a kind of socio-aesthetic contract, connecting particular sets of sound conventions with particular sets of people, however dynamically or contingently, based on agreements that shift across time and space. Accounts of change in such generic allegiances typically focus on expansion. Perhaps the most systematic is that of Leonard Meyer (1989), who placed genre within the historical succession of stylistic ‘schemata’ that constrain the set of intelligible musical expressions at a given period and place. His model is like a production-oriented counterpart to Hans Robert Jauss’s (1982) notion of the ‘horizon of expectations’ that makes literary genres intelligible in reception. For Meyer, periods of aesthetic change boiled down to successive acts of deviation against generic codes, followed by absorption into a normalising mainstream. Such deviations both drive forward the process of transformation and affirm the dominance of the norm. Of course, the rate of expansion was not always the same. Meyer famously argued that late-twentieth-century ‘radical pluralism’ could be understood as a kind of ‘fluctuating stasis’ (Meyer 1967; see also Taruskin 2009, 46). But as Born and Hesmondhalgh (2000, 39) point out, Meyer’s generally formalist account falls short of explaining the ‘different self-reflective cultural, psychological, and affective properties’ that motivate new stylistic pluralities. The matter of which differences receive reinforcement and which are marginalised is not arbitrary but contingent upon the particular power dynamics in play. Inventions, transgressions and re-articulations have accordingly received a great deal of attention in genre theory.
David Brackett (2016) highlights difference both in the musical text, where genres can be thought of in structuralist terms as ‘systems of difference’ in which relation precedes meaning, and on a social level, in which musical utterances are articulated to hierarchies and ideologies. In popular musics, the zone of productive difference provides a neat explanation for the existence of dynamism in spite of strong aesthetic and social conventions. Jason Toynbee (2000), for example, uses what he sees as the basic dialectical tension between ‘reason’ and ‘desire’ to explain the accumulated deviations that transformed acid house into jungle in the early British rave scene. Elsewhere, Brackett (2005) shows how black musicians in the United States have mobilised a similar tension to articulate marginal identities through the production of ‘crossover’ cover songs. Fabian Holt (2007, 59–60) splits the process into the three interconnected stages of ‘disruption’, ‘outreach’ and ‘resistance’. The model seems to apply as well to situations where exception is the rule. Exploring the practices of British free improvisation, for example, Christopher Atton (2012) has put forward the suggestion that the act of stylistic ‘disruption’ can itself become a norm when grounded in social and territorial regularities. But to what extent can we reduce genre formation to a dialectical spiral of ‘repetition and difference’ (Neale 1980)? Is difference always productive? Do technologies necessarily privilege one side or the other? Musicologists informed by ANT have tried to pick apart these entangled forces by highlighting the performative side of genre. Eric Drott, for example, argues for a ‘more flexible, pragmatic understanding of the concept’ that links together ‘a variety of material, institutional, social, and symbolic resources’ (2013, 9). ANT offers us a view of categories through their mediations, ‘comprising a meshwork of human and non-human entities, all contributing something unique (though not equal) to the assemblage, yet none being essential to its functioning’ (Haworth 2016, 22). According to Drott, a more holistic view of the genre formation process favours the creative agency of producers over the normative agency of shared convention.

‘As an ensemble of correlations,’ he goes on, ‘a genre is not so much a group as a grouping, the gerund ending calling attention to the fact that it is something that must be continually produced and reproduced. Genres, in other words, result from acts of assemblage, acts performed by specific agents in specific social and institutional settings.’ (Drott 2013, 9 [original emphasis])

Thus both instability and stability are pragmatic and performative. The ‘appearance’ of stable substance emerges through ‘recursive inscription’ (Drott 2013, 12). Instability is not necessarily a matter of audible difference but an effect of contestation over which contexts we privilege when we frame or define those differences (Korsyn 1999). The meaning of a genre is thus figured by ANT as a kind of agency accumulating in the relations between people, texts, objects and events. New meaning in one domain results from the ‘translation’ of relations and agencies from other domains (Callon 1986) – such as when academic electroacoustic music attempted to liberalise itself by internalising late-‘90s electronica genres. Drawing upon ANT’s analytical toolkit thus allows us to see genres as strengthened by the very acts of deviation that traditional genre theorists might have seen as weakening. In the analysis of Gérard Grisey’s Les espaces acoustiques which Drott’s argument frames, for example, he points to the mobilisation of generic associations as contextualising resources. He foregrounds the way composers may inscribe in a text ‘the numerous groupings that pieces of music afford’ (Drott 2013, 39) without committing to a single identity. As Christopher Haworth writes, instead of regarding the ‘untidy, overlapping quality of genres’ as an aberration, ANT treats heterogeneity as an ‘inescapable’ ontological condition (Haworth 2016, 22). The focus shifts from belonging to active participation. Genre in this view is not so much a constraint as an asset. Of course, in a broader historical perspective there is little novelty to ANT’s pragmatism. Drott acknowledges that his own turn to plurality owes a debt to the ‘fissured’ and ‘heterogeneous’ approaches typical of poststructuralism (Drott 2013, 40–41). Pragmatic accounts of genre also come to us from musicologists looking at the classical tradition.
Jeffrey Kallberg (1998), for example, has shown how nineteenth-century piano composers used Wittgensteinian ‘family resemblances’ between categories of works to mediate gendered and ideological associations. Speaking to examples from disco-era popular music, Charles Kronengold (2008) stretches the normative notion of genre to analyse differences in the sharing of conventions across an overlapping complex of musics. ‘Genres’, he writes, ‘are in works as much as works are in genres’ (Kronengold 2008, 43). The intertextual approach has been important in ethnomusicology as well. Ethnomusicologists working in the American tradition of linguistic anthropology have given close attention to the way genre works as a mutable expressive resource. In this perspective, indebted to the anglophone rediscovery of Bakhtin in the 1980s, genre is the means by which musicians situate musical utterances socially, historically and semantically (Bauman 2000, 84–87; Bakhtin 1986). Louise Meintjes (2003), for example, highlights the importance of genred utterance in performances of professional and cultural prestige. Genre for Meintjes is both collective and individual. It indexes notions of identity by allowing us to locate our group belonging in relation to ‘our music’, and it is also ‘intimately tied to the self-making rhetoric that elaborates artistic reputations’ (Meintjes 2003, 19–36). The flexible and productive manner in which musicians mobilise conventions can both confirm and complicate the ANT account. To understand this complication it may be useful to turn to Jacques Derrida’s (1980) play on the homonymy between the words for ‘genre’ and ‘gender’ in French. Taking his rhetorical move a bit more literally, we might notice significant overlap between the poststructuralist account of genre and the notion of ‘performativity’ that Judith Butler (1990) uses to describe the relationship between expressions of gender and the biology of sex. What ANT doesn’t do is offer insights into the politics of repression and exclusion at play in such a relationship. The structuralist notion of difference implies a positive, relational contribution to the whole, but what happens when such relations are blocked? Bruno Latour himself has been criticised not only for ignoring asymmetrical categories like race and gender but also for reinforcing such inequalities by dismissing the ‘critical sociology’ of power as a whole (Haraway 1997; Sturman 2006). But the critique of asymmetry and exclusion is perhaps poststructuralism’s most important and enduring contribution to our understanding of human culture. Marcia Citron’s (1993, 125) classic study of the Western canon, for example, shows that the marking of musical genre can be just as powerful for what and whom it delegitimises as for what and whom it enshrines. ANT may be very good at telling us why canons include certain musics and how those inclusions can change. But, as Benjamin Piekut has suggested, it is not very good for explaining their exclusions.3 Accounts of genre’s mediation by technology have expanded the kinds of associations we recognise as holding genre together, but they often contradict the way musicians identify their own work. Must we assume that musicians and listeners who reject genred identifications are unaware of the meaning of their actions? Do they contradict genre theorists in bad faith? The answer is not always clear. With the flattening notion of the ‘actant’, ANT expands the range of objects and texts we need to take into account alongside those of human musicians. But it is important to remember that Latour’s ‘actant’ is not a given. ‘An actant’, writes Latour, ‘is a list of answers to trials – a list which, once stabilised, is hooked to a name of a thing and to a substance’ (Latour 1991, 122). The list of trials given to music technologies does include generic attachments, but it also includes hierarchies of skill, technical knowledge and monetary value. As Born argues, Latour’s approach emphasises immediate acts of inscription over longer institutional and historical arcs and thus offers only a very distorted lens through which to examine such relations of power (Born 2012).
Bringing technology into the mix should not only expand the number of opportunities we recognise musicians as having to make and affirm genres; it should also draw our attention to the different ways the breaks between genres materialise. We can hold on to Latour’s assertion that actants are not given in advance, but this should place even more critical emphasis on the asymmetrical politics of cultural practice.
Fluorescent friends
Outside of Montreal’s universities, many of the musicians and artists I encountered in 2011–2012 were openly hostile towards the academic electroacoustic tradition. Critique usually revolved around the tradition’s perceived phonocentrism, its tendency to ignore or dismiss aspects of production and performance not directly related to sound. Drop-outs from the university studios often ended up in the city’s vibrant noise scene, which at the time of my fieldwork was relatively more welcoming to practices outside the electroacoustic canon, especially those involving instrument building, conceptualism and site-specificity. Instead of the studied, reflective listening of electroacoustic convention, Montreal’s noise spaces offered mixed pleasures where visual and haptic elements mattered just as much as sound. This aesthetic openness did not, however, translate into greater political engagement or socio-cultural diversity: the noise scene was still overwhelmingly white and male. Much attention has been paid to the ways instruments and recording formats seem to reinforce the boundaries of experimental rock and noise scenes. As David Novak (2011, 626) has written, the physical ‘distortion’ typical of the analog also indexes the ethical and aesthetic values of the ‘underground’ culture of circulation in which listeners must participate in order to gain access to analog recordings as commodities. Theorists such as Paul Hegarty (2007) and Joanna Demers (2010) have boiled this complex material-discursive formation down to the opposition it seems to present to mainstream conventions of beauty and progress. As many music industries moved towards cheaper and more mobile digital forms of distribution in the 2000s, noise’s saturation with ‘residual’ instruments and formats seemed designed for resistance against the music industry (Acland 2007). It intensified the fetishistic ‘paratexts’ shed by digital music commodities since the 1990s (Straw 2009) and re-enchanted the hand-to-hand exchange of DIY barter networks (Novak 2013, 198–225). Some have celebrated this in a Benjaminian tone as a kind of popular resistance to capitalist hegemony (Ghazala 2004; Collins 2006; Richards 2013). But it is wrong to think of noise aesthetics as politicised in such a simple way. What sets noise practice apart is not the use of residual media as such but a particular style of material selection and elaboration situated in a complex network of generic relations and stoppages. One of Montreal’s most prominent noise labels in 2011 and 2012 was the informal label and promotion unit known as Fluorescent Friends. It had formed around the work of Ottawa-born musician Blake Hargreaves shortly after his move to Montreal in 2002. From 2007 to 2012 Hargreaves also ran a series of festivals in collaboration with the now-defunct studio collective La Brique. He played in and recorded the work of several groups in Montreal, eastern Ontario and New England. The roster of artists ranged from guitar-based post-punk to drone, synth pop and performance art. The audience cut across Montreal’s art, indie rock and experimental music scenes. Publicity circulated over student radio and online through Facebook, Bandcamp and a dedicated website. Having a label seemed to Hargreaves an aesthetic rather than a business decision. Inspired by American bands like Lightning Bolt (2001) and Black Dice (2002), Hargreaves treated the label as a way of framing and grouping releases. The style of noise associated with these bands was more rhythmic and melodic than the more raw, continuous sounds normally held up as typical of the genre.
Their artwork and performances combined images of psychedelic kitsch, political and sexual subversion, and exaggerated violence, all embedded in an ethos of authenticity and craft. ‘It was like window dressing,’ he told me. ‘The closest I can get to just instinct, and then thinking about it later, the better.’4 Releases trickled out in a variety of formats, in runs of anywhere between 1 and 300 copies, almost always with artwork assembled by hand. Particularly distinctive were a series of Plexiglas discs with music cut into one side and colourful artwork on the other. First each disc was painted with multiple layers of spray paint and stencils. Then Hargreaves etched the music directly onto the opposite surface using an antique mastering lathe he had bought on the online auction site eBay. For packaging he repurposed LP covers from second-hand stores. The difficulty of repeating the etching and painting processes made mass production impossible, so instead Hargreaves saw the device as a part of the creative process: ‘For me it’s one-off music art: sort of concept pieces.’ The finished product blurred the line between instrument and recording medium. Tracks were selected from a pool of digital sound files that could be reused in a variety of formats. Parts of an album originally released on cassette could show up later on a lathe-cut disc, a Bandcamp page or even a CDR from a different label. Thus, the sale price reflected not so much the rarity of the music as the cost of labour and materials that went into the format. ‘That’s like forty dollars for an LP,’ he assured me.

That, people when they see it, they don’t really feel sure that it’s going to sound as good as they want it to. I don’t sell ones that sound really bad. They sound a little sketchy, but I can get it to sound pretty good, and I set pretty high standards for that.

He also saw no reason not to contract his work out to other musicians. In the spring of 2012, for example, he had worked on a commission from the turntable improviser Martin Tétreault, a prominent figure in Quebec’s musique actuelle tradition (Stévance 2012). Hargreaves cut a set of discs for use by Tétreault’s turntable quartet (2011) for a series of concerts in Quebec City. Tétreault’s work explores the turntable as a complex amplification mechanism – the needle as a contact microphone and the platter as a resonator – rather than a transducer of recorded sounds per se. The blur between medium and instrument resonated strongly with Hargreaves’ approach. ‘Projects like that are nice because I’m just trying to do a good job with the lathe,’ he said. ‘I’m not really thinking about this band, or will people want this. I’m this craftsman. Like a cabinet maker or something. Just giving him the specs of what he needs.’ In both cases – as a medium and as an instrument – the use of the disc worked against the possibility of hearing the music as pure sound. This was not a matter of nostalgia for the analog. It was a new cut across a contemporary aesthetic network. The same was true of more conventionally ‘retro’ technologies like the cassette. Critics of popular music have depicted the afterlife of cassette culture in the digital age as a nostalgic evocation of lost authenticity (Reynolds 2011). The cassette’s redundancy appears to leave it little value aside from arbitrary subcultural validation. Its classic role as a grassroots medium – as in Peter Manuel’s (1993) account of the democratisation of popular music recording in India – has eroded to the point that, as Elodie Roy (2015) argues, the cassette now operates more as a keepsake than a medium of circulation. For noise musicians, however, cassettes are more than media. Katherine Kline, Hargreaves’ bandmate in the duo Dreamcatcher, described to me how her use of cassettes wove a personal history of mix-tape trading into her instrumental practice. On Dreamcatcher’s 2008 cassette for the Ecstatic Peace label, A Team Come True, Kline runs the audio output of a cassette player into the external input of her Electribe ER-1 drum machine. When she found a section she liked she could simply flip the cassette, advance on the opposite side, and then flip again to make a ‘loop’. ‘It’s always a surprise,’ she told me.

Because I don’t mix. I haven’t chosen specific loops or moments. There are moments that work better than others with certain beats, but I’ll just let it play [. . .]
and depending on where the tape is at, it always kind of changes the quality of the music.5

Many of Kline’s cassettes came from second-hand stores, like the covers of Hargreaves’ Plexiglas discs, but she also used tapes she’d traded with other bands. Notice also how Kline’s use of the cassette bends the generic links we normally associate with the drum machine. She told me that when she had begun to work with Dreamcatcher her choice of instrument had been inspired by electroclash bands like the Detroit-based Adult (2003). It did give Dreamcatcher’s music a rhythmic and repetitive quality, but Kline’s associations changed when she saw how rarely people in the noise scene danced. When the duo was later invited to play at Montreal’s MUTEK festival in a showcase of local talent curated by Eric Mattson, Kline found herself on the opposite side of the distinction. As she described it, the fact that she was now playing among the dance-oriented musicians who had once inspired her brought the irony of her drum machine playing even more strongly into the foreground. It now seemed ridiculous to her that their musics should be thought of as related just because of a single device. She had plugged it in differently, and a different music came out. LPs, cassettes, and even hardware drum machines usually fall outside of the domain we normally consider when we talk about ‘digital music’. But as recent work by Roger Moseley (2015) and Jonathan Sterne (2016) has shown, the deeper we dig into the components and interconnections of musical media, the less easy it is to distinguish between analog and digital in technical terms. More often than not, the term analog marks a judgement of value against the digital – continuous as opposed to discrete, old as opposed to new, hardware as opposed to software, real as opposed to imaginary – which can only be made sense of against a particular cultural background. Recognising this can help us push back against the reduction of the analog to a simple retrograde of digital culture. But it also changes the way we attribute agency in the development of genres like noise. Because genres shift and mutate in relation to perspective and over time, generic associations cannot be hard-wired into technologies like an instruction manual or script. Even as uses become sedimented, this does not restrict the user to a particular generic attachment. Noise musicians are thus not passive subjects of a general analog nostalgia. They actively shape their instruments and media in ways that avoid the narrative of technological progress. The space they establish excludes the technical binaries that structure electroacoustic hierarchies.
KANTNAGANO
Some musicians I worked with placed even greater stress on their instruments’ contradictory capacities. This was especially the case in the more conceptual performance art circles that proliferated on the boundaries of Montreal’s noise scene in 2011–2012. Conceptual art can encompass sound and music but is not primarily a music genre itself. When music is used in this context, the citational quality may be amplified. This can be true both of the sound itself and also of the instrumental and gestural repertoires involved. Consider Janet Cardiff’s 2001 installation 40 Part Motet, for example (Christov-Bakargiev 2002). The piece is not reducible to a recording of Thomas Tallis’s Spem in Alium, even if this is the installation’s only sonic content. Cardiff is citing both the work as a whole and its various internal relations, which the listener can discover by walking amidst the speakers. This framing makes the work unique. Whereas Spem in Alium participates in certain genres of Elizabethan sacred choral music, 40 Part Motet does not, or at least not in any direct way. The work of the trio KANTNAGANO is similarly multilayered. The group’s genealogy connects with a variety of local traditions of experimental music and theatre, but they make no solid claims to belonging. All three members – guitarist Jonathan Parent, bassist Alexandre St-Onge and synthesist Alexander Wilson – were born and raised in Quebec, and all three have roots in the late 1990s post-rock scene. Two – St-Onge and Wilson – have since pursued doctoral degrees, but both chose to study philosophy and critical theory rather than music. They resisted associating with academic composition or free improvisation, but at the same time their work was often too disjointed and cerebral to sit well with rock or electronic dance music audiences. Instead, they treated the proliferation of boundaries as a source of formal and conceptual tension. Their deliberate ambivalence did put them equally at home in rock clubs, electronic music festivals, dance venues and art museums, but this venue-hopping was not informed by a politics of reconciliation like other attempts at postmodern cross-genre synthesis. The group’s tongue-in-cheek name, a play on the name of Montreal Symphony music director Kent Nagano, signalled an irreverence towards serious music at the same time as it established a clear penchant for other kinds of seriousness. Explicitly or not, it also rooted the group firmly in the culture of the city. The proliferation of translational registers is typical of literary and vocal performance in Montreal, permeating everything from billboard advertising to poetry (Simon 1994; Heller 2011). In the spring of 2012, KANTNAGANO announced a new album entitled Blessure narcissique (‘Narcissistic Injury’) in conjunction with a 45-minute audiovisual performance at the Musée d’art contemporain de Montréal. It was to be their first release on vinyl. The style of the performance floated between a kind of psychedelic sublime and science fiction kitsch. The musicians sat in a triangular formation in the centre of the audience. Laser lights controlled by Wilson’s synthesisers traced repeating patterns on the walls around them. St-Onge and Parant played MIDI-augmented guitars which, instead of making their own sounds, controlled samples of drums, animal noises or synthesisers. Analog and digital infected and inflected one another, both in the performance and in the album. There were 30 copies, in hand-numbered 12-inch LP sleeves. But each contained a large-format vinyl sticker instead of a disc. The image on the sticker was a ‘quick response’ or QR code, which, when scanned with a cell phone camera, directed the listener’s web browser to the album’s Bandcamp page. St-Onge described the strategy as one of ‘impurification’. The point of making so many cuts and connections was to set up conditions in which aesthetic agency could slip in and out of the performer’s grasp. ‘It’s about multiplying the inputs and outputs,’ Wilson told me, ‘so that we can get intertwined within it, have a direct relationship with it, but not know exactly what’s going to come out of it’. This foregrounding of the digital set KANTNAGANO far apart from their contemporaries in the noise scene, which tended to prefer low-tech, intuitive setups. Their use of MIDI, for example, opened up a tangible gap between gesture and sound. But they made no attempt to fill the gap with ‘human’ expression, and in this way also distanced themselves from the academic scene. For decades, computer music researchers have tried to invent more organic, high-bandwidth ways of interfacing with digital synthesis on the basis of this supposed lack. As composer and engineer F. Richard Moore (1988) once wrote, poor resolution and ‘sluggish’ serial transmission channels made the standard anathema to the ideal of ‘control intimacy’. The natural foil for Moore was the human voice, in which ‘the microgestural movements of the performer’s body are translated into sound in ways that allow the performer to evoke a wide range of affective quality in the musical sound’ (Moore 1988, 22). The difficulty of interfacing with the first generation of MIDI-enabled instruments has grown to mythical proportions.
According to one version of the story, technicians for a prominent synthesiser model (in some tellings the DX-7, in others the Prophet-5) noticed that the majority of units came back to the factory for repair with their preset patch information untouched. The assumption in computer music circles has been that the users must have been incapable of programming and relied entirely on presets, despite the fact that they may just as well have saved their own patches elsewhere to protect their work (Théberge 1997, 75–83). The truth behind the stereotype is that MIDI seems to open a gap between the performer and the production of sound, which the electroacoustic tradition has seen as a problem to be solved. For KANTNAGANO, however, this gap afforded a kind of 'abjected' playing that they could use as a productive resource in its own right.6 The 'bad' qualities of MIDI-enabled playing are not repressed, but internalised and reframed. The goal is to stage the alienation, almost as a mockery of those who abhor it.
Other performances placed the logic of connection itself in the foreground. At a collective event curated by the group in April 2012, St-Onge presented a solo performance entitled Aimer la concrescence (‘Loving concrescence’). He stood hunched in front of a microphone, his bass guitar lying on the floor in front of him, humming and singing while breaking a bundle of wooden sticks in his mouth. As he vocalised the pieces of wood fell from his mouth and struck the strings. After a few minutes he repeated the action with a sheet of paper, chewing it to pieces and dropping it on to the strings of his bass.The sounds of voice and bass were not mixed separately, however, but triggered a feedback system running on Ableton Live and an analog synthesiser. He controlled the modulations with a Wiimote attached to his back. The surreal series of movements and vocalisations, added to the fact that he rarely touched the instruments, gave the slowly evolving composite a dream-like, disjointed quality. The piece’s title referenced Gilbert Simondon’s theory of technological evolution. A device or system is abstract, according to Simondon, when its components are differentiated from each other and therefore only held together by the whole. As elements become adapted to each other’s functions and less separable from the whole, the system becomes more purposefully individuated and thus more concrete. The process Simondon calls concrescence is the movement between these ‘phases’ (Simondon 1958; Dumouchel 1995, 261–263). We can think of the individual components of St-Onge’s setup in a similar way. Separately they point to a variety of generic attachments that we must read and evaluate separately. Connecting them so loosely sets up a condition in which the associations of the components and the associations of the whole almost compete with each other for priority. The generic identity of the whole is not a composite so much as a framing of the conflict between the parts. Borrowing from Judith Butler (1993), Georgina Born has argued that cultural institutions produce their aesthetic identities not only by managing internal differences but also as a function of their ‘constitutive outsides’, the musics they absent or repress (Born 2010, 193).There are clear resonances with the semiotics of genre. As David Brackett has shown, a generic assemblage is produced by both internal iterations and external interactions at the same time (Brackett 2016, 10). Indeed, for Karen Barad, we should think of the act of dividing things ‘together-apart’ as producing agency and materiality as such (Barad 2012, 32). KANTNAGANO’s citational use of digital technologies worked in precisely this manner, rejecting any belonging to computer music on one hand while also representing computer music (if ironically) for their experimental rock audiences on the other. This generic double negation is not built into their instruments, however. It materialised at a particular place and time, in a specific generic assemblage, in relation to a local set of discursive and social practices.
Conclusion
I began by questioning the narrative that pins generic hybridisation to the processes of technological democratisation. This is not to say that hybridisation and democratisation are bad things. But we need to know exactly what each process entails if their interrelationship is to tell us anything useful or specific about musical aesthetics. It is not enough to hypothesise a correlation based on a theoretical principle – such as Meyer's classic recourse to entropy – and then look for examples that back the theory up. Technological and aesthetic transformations are equally shaped by human desire, and as such by hierarchical mechanisms of social order like gender, race and class. Recognising this should also stop us from assuming that all musics necessarily aspire to belong to the same imagined wholes. In effect, I am arguing that generic change should be even more complex than the millennial heralds of postmodernity would have had us believe. New pluralities, where they exist, are not
flat landscapes of productive difference. Some pluralities simply apply a new aesthetic surface to old social exclusions. Some close gaps while others engender distance. We may have done away with the grand, Adornian negation that once separated high art from entertainment, but this should not blind us to breaks and exclusions on a local scale. The more closely we attend to differences, the less likely it is that we will be able to relate them all in an inclusive way. At any rate, the concept of hybridisation offers us only a very blunt instrument with which to do so. Technological progress is similarly difficult to generalise. And we need to be especially suspicious of terms like democratisation, which give a distinct political value to this anonymous force. Technical innovation is increasingly taken as a model for political governance as a whole (Barry 2001). However, whether it means extending popular access to technology or enhancing public participation in technological evolution, the notion of democratisation assumes a great many things about how that governance should work. A democratic politics is not necessarily a politics in which every voice and every issue has equal weight. Indeed Langdon Winner (1995) and Andrew Feenberg (1999) have argued that political ideals like democracy, if they are worth preserving at all, should hold back the progress of technological change instead of being driven by it. A degree of free public access to the means of production is a good thing, but popular access is not a substitute for responsible collective administration. For Feenberg the goal should be a ‘deep’ democracy in which popular agency is normalised and incorporated into the procedures of an electorally empowered technological governance (Feenberg 1999, 146–147). Regardless of whether we agree with Feenberg’s prescription, it is important to recognise that the politics of technological change is itself a work in progress. The value of democracy depends by definition upon the responsibility of those who have a stake in it. As I hope to have shown in my illustrations, the forms of technological and generic change are contingent upon both local circumstances and the broader social and historical background. Accounts of a digitally mediated explosion of new electronic musics are incomplete without close consideration of the boundaries and asymmetries in play. Instruments have been opened up to new generic relations, but in many cases musicians understand these identifications as liabilities rather than assets, problems rather than solutions. In addition to resolving conflicts, then, plurality can also intensify and multiply exclusions.
Notes
1 Early examples of this trope appear in Luening 1968 and Appleton and Kasdan 1970.
2 See Taylor (2001) for a more detailed discussion on this point. Popular music scholars like Fikentscher (2000) and McLeod (2001) have gone further, suggesting that disco's absence from many electronic dance music genealogies conceals efforts to suppress its close association with gay nightclubs. Several authors have also pointed out how frequently electroacoustic theorists slide into tropes foregrounding 'embodiment' over intelligence when describing music associated with black subcultures, including Lewis (1996), Born and Hesmondhalgh (2000) and Meintjes (2003).
3 Indeed, this is a central point of Piekut's conclusion to his ANT-inspired book on experimentalism: 'Future histories of experimental music may well include the Stooges, but that would require that these histories explain how and why certain musicians, performances, or venues were previously thought to be outside the boundaries of experimental music. In short: those future histories must include exclusion' (Piekut 2011, 196).
4 Personal communication, 23 March 2012.
5 Personal communication, 12 August 2011.
6 For a Lacanian account of abjection in musical performance see Schwarz (1997, 143). In his analysis of the paintings of Francis Bacon, Gilles Deleuze (2004, 15–16) relates the abject to the image of a body that 'attempts to escape from itself through one of its organs in order to rejoin the field or material structure'. While potentially informative, further inquiry into the psychoanalytical value of the abject would be beyond the scope of this chapter.
References Acland, Charles, ed. Residual Media. Minneapolis: University of Minnesota Press, 2007. Appleton, Jon, and Leonard Kasdan. ‘Tradition and Change: The Case of Music’. Comparative Studies in Society and History 12, no. 1 (1970): 50–58. Atkinson, Simon, and Simon Emmerson. ‘Editorial’. Organised Sound 21, no. 1 (2016): 1–3. Atton, Chris. ‘Genre and the Cultural Politics of Territory: The Live Experience of Free Improvisation’. European Journal of Cultural Studies 15, no. 4 (2012): 427–441. Bakhtin, Mikhail Mikhailovich. Speech Genres and Other Late Essays. Austin: University of Texas Press, 1986. Barad, Karen. ‘Nature’s Queer Performativity’. Kvinder, Køn & Forskning 1–2 (2012): 25–53. Barry, Andrew. Political Machines: Governing a Technological Society. London: Athlone Press, 2001. Bauman, Richard. ‘Genre’. Journal of Linguistic Anthropology 9, no. 1–2 (2000): 84–87. Bergeron, Katherine, and Philip V. Bohlman, eds. Disciplining Music: Musicology and Its Canons. Chicago: University of Chicago Press, 1992. Born, Georgina. ‘Computer Software as a Medium: Textuality, Orality and Sociality in an Artificial Intelligence Research Culture’. In Rethinking Visual Anthropology, edited by Marcus Banks and Howard Morphy, 139–169. New Haven:Yale University Press, 1997. ———. ‘The Social and the Aesthetic: For a Post-Bourdieuian Theory of Cultural Production’. Cultural Sociology 4, no. 2 (2010): 171–208. ———. ‘Music and the Materialization of Identities’. Journal of Material Culture 16, no. 4 (2011): 376–388. ———. ‘Music and the Social’. In The Cultural Study of Music: A Critical Introduction. 2nd ed., edited by Martin Clayton, Trevor Herbert, and Richard Middleton, 261–274. London: Routledge, 2012. ———. ‘Making Time:Temporality, History and the Cultural Object’. New Literary History 46, no. 3 (2015): 361–386. Born, Georgina, and David Hesmondhalgh. ‘Introduction: On Difference, Representation, and Appropriation in Music’. In Western Music and Its Others: Difference, Representation and Appropriation in Music, edited by Georgina Born and David Hesmondhalgh, 1–58. Berkeley: University of California Press, 2000. Brackett, David. ‘Questions of Genre in Black Popular Music’. Black Music Research Journal 25, no. 1/2 (2005): 73–92. ———. Categorizing Sound: Genre and Twentieth-Century Popular Music. Berkeley: University of California Press, 2016. Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. London: Routledge, 1990. ———. Bodies That Matter: On the Discursive Limits of ‘Sex’. London: Routledge, 1993. Callon, Michel. ‘Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fisherman of St Brieuc Bay’. In Power, Action and Belief: A New Sociology of Knowledge? edited by John Law, 196–223. London: Routledge, 1986. Chadabe, Joel. Electric Sound:The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice-Hall, 1997. ———. ‘Remarks on Computer Music Culture’. Computer Music Journal 24, no. 4 (2000): 9–11. Christov-Bakargiev, Carolyn. Janet Cardiff: A Survey of Works Including Collaborations With George Bures Miller. New York: P.S. 1 Contemporary Art Center, 2002. Citron, Marcia J. Gender and the Musical Canon. Cambridge: Cambridge University Press, 1993. Collins, Nick. ‘Electronica’. In The Oxford Handbook of Computer Music, edited by Roger T. Dean, 334–353. Oxford: Oxford University Press, 2009. Collins, Nicolas. Handmade Electronic Music:The Art of Hardware Hacking. New York: Routledge, 2006. Cook, Nicholas, and Mark Everist. Rethinking Music. 
Oxford: Oxford University Press, 1999. Deleuze, Gilles. Francis Bacon:The Logic of Sensation. Minneapolis: University of Minnesota Press, 2004. Demers, Joanna. Listening Through the Noise: The Aesthetics of Experimental Electronic Music. Oxford: Oxford University Press, 2010. Derrida, Jacques. Of Grammatology. Translated by Gayatri Chakravorty Spivak. Baltimore: Johns Hopkins University Press, 1974. ———. ‘The Law of Genre’. Critical Inquiry 7, no. 1 (1980): 55–81. Dhomont, Francis. ‘Is There a Québec Sound?’ Organised Sound 1, no. 1 (1996): 23–28. Drott, Eric. ‘The End(s) of Genre’. Journal of Music Theory 57, no. 1 (2013): 1–45. Dumouchel, Paul. ‘Gilbert Simondon’s Plea for a Philosophy of Technology’. In Technology and the Politics of Knowledge, edited by Andrew Feenberg and Alastair Hannay, 255–271. Bloomington: Indiana University Press, 1995.
Patrick Valiquet Emmerson, Simon.‘From Dance! To “Dance”: Distance and Digits’. Computer Music Journal 25, no. 1 (2001): 13–20. Fabbri, Franco. ‘What Kind of Music?’ Popular Music 2 (1982): 131–143. Feenberg, Andrew. Questioning Technology. London: Routledge, 1999. Fikentscher, Kai. ‘You Better Work!’: Underground Dance Music in New York City. Middletown: Wesleyan University Press, 2000. Finnegan, Ruth. The Hidden Musicians: Music-Making in an English Town. Cambridge: Cambridge University Press, 1989. Frith, Simon. Performing Rites: Evaluating Popular Music. Oxford: Oxford University Press, 1996. Ghazala, Qubais Reed. ‘The Folk Music of Chance Electronics: Circuit-Bending the Modern Coconut’. Leonardo Music Journal 14 (2004): 97–104. Haraway, Donna. Modest_witness@second_millennium.femaleman©_meets_oncomouseTM: Feminism and Technoscience. New York: Routledge, 1997. Haworth, Christopher. ‘Xenakian Sound Synthesis: Its Aesthetics and Influence on “Extreme” Computer Music’. In Resonances: Noise and Contemporary Music, edited by Michael Goddard, Benjamin Halligan, and Nicola Spelman, 183–197. New York: Bloomsbury, 2013. ———. ‘ “All the Musics Which Computers Make Possible”: Questions of Genre at the Prix Ars Electronica’. Organised Sound 21, no. 1 (2016): 15–29. Hegarty, Paul. Noise/Music: A History. London: Continuum, 2007. Heller, Monica. Paths to Post-Nationalism: A Critical Ethnography of Language and Identity. Oxford: Oxford University Press, 2011. Holt, Fabian. Genre in Popular Music. Chicago: University of Chicago Press, 2007. Holmes, Thom. Electronic and Experimental Music: Pioneers in Technology and Composition. 2nd ed. London: Routledge, 2008. Jauss, Hans Robert. Toward an Aesthetic of Reception. Minneapolis: University of Minnesota Press, 1982. Kallberg, Jeffrey. Chopin at the Boundaries: Sex, History and Musical Genre. Cambridge: Harvard University Press, 1998. Kelly, Caleb. Cracked Media:The Sound of Malfunction. Cambridge: MIT Press, 2009. Korsyn, Kevin. ‘Beyond Privileged Contexts: Intertextuality, Influence, and Dialogue’. In Rethinking Music, edited by Nicholas Cook and Mark Everist, 55–72. Oxford: Oxford University Press, 1999. Kronengold, Charles. ‘Exchange Theories in Disco, New Wave, and Rock’. Criticism 50, no. 1 (2008): 43–82. Landy, Leigh. Understanding the Art of Sound Organization. Cambridge: MIT Press, 2007. Latour, Bruno. ‘Technology Is Society Made Durable’. In A Sociology of Monsters? Essays on Power,Technology and Domination, edited by John Law, 103–132. London: Routledge, 1991. Lauretis,Teresa de. Technologies of Gender: Essays on Theory, Film, and Fiction. Bloomington: Indiana University Press, 1987. Levaux, Christophe. ‘DIY as a Tool for Understanding Musicological Paradigms’. In Keep It Simple, Make It Fast! An Approach to Underground Music Scenes, edited by Paula Guerra and Tânia Moreira, 385–389. Porto: University of Porto: Faculty of Arts and Humanities, 2015. Lewis, George E. ‘Improvised Music After 1950: Afrological and Eurological Perspectives’. Black Music Research Journal 16, no. 1 (1996): 91–122. Luening, Otto. ‘An Unfinished History of Electronic Music’. Music Educators Journal 55, no. 3 (1968): 43–145. Lynskey, Dorian. ‘How the Compact Disc Lost Its Shine’. The Guardian, May 28, 2015. www.theguardian. com/music/2015/may/28/how-the-compact-disc-lost-its-shine. Malcolmson, Hettie. ‘Composing Individuals: Ethnographic Reflections on Success and Prestige in the British New Music Network’. Twentieth-Century Music 10, no. 1 (2013): 115–136. 
Manning, Peter. Electronic and Computer Music. 4th ed. Oxford: Oxford University Press, 2013. Manuel, Peter. Cassette Culture: Popular Music and Technology in North India. Chicago: University of Chicago Press, 1993. McLeod, Kembrew. ‘Genres, Subgenres, Sub-Subgenres, and More: Musical and Social Differentiation Within Electronic/Dance Music Communities’. Journal of Popular Music Studies 13 (2001): 59–75. Meintjes, Louise. Sound of Africa! Making Music Zulu in a South African Studio. Durham: Duke University Press, 2003. Meyer, Leonard. Music, the Arts and Ideas. Chicago: University of Chicago Press, 1967. ———. Style and Music:Theory, History, and Ideology. Chicago: University of Chicago Press, 1989.
Technologies of genre Moore, F. Richard. ‘The Dysfunctions of MIDI’. Computer Music Journal 12, no. 1 (1988): 19–28. Morris, Jeremy Wade. ‘Understanding the Digital Music Commodity’. PhD Dissertation, McGIll University, 2010. Moseley, Roger. ‘Digital Analogies: The Keyboard as Field of Musical Play’. Journal of the American Musicological Society 68, no. 1 (2015): 151–227. Neale, Steven. Genre. London: British Film Institute, 1980. Negus, Keith. Music Genres and Corporate Culture. London: Routledge, 1999. Novak, David. ‘The Sublime Frequencies of New Old Media’. Public Culture 23, no. 3 (2011): 603–634. ———. Japanoise: Music at the Edge of Circulation. Durham: Duke University Press, 2013. Ong, Walter J. Orality and Literacy:The Technologizing of the Word. New York: Routledge, 2002. Piekut, Benjamin. Experimentalism Otherwise: The New York Avant-Garde and Its Limits. Berkeley: University of California Press, 2011. Piekut, Benjamin. ‘Actor-Networks in Music History: Clarifications and Critiques’. Twentieth Century Music 11, no. 2 (2014): 191–215. Prendergast, Mark. The Ambient Century From Mahler to Trance – the Evolution of Sound in the Electronic Age. London: Bloomsbury, 2000. Rabinovitz, Lauren, and Abraham Geil. ‘Introduction.’ In Memory Bytes: History, Technology, and Digital Culture, edited by Lauren Rabinovitz and Abraham Geil. Durham: Duke University Press, 2004. Richards, John. ‘Beyond DIY in Electronic Music’. Organised Sound, 18, no. 3 (2013): 274–281. Reynolds, Simon. Energy Flash: A Journey through Rave Music and Dance Culture. London: Picador, 1998. ———. Retromania: Pop Culture’s Addiction to Its Own Past. London: Faber and Faber, 2011. Roy, Elodie. Media, Materiality and Memory: Grounding the Groove. Farnham: Ashgate, 2015. Schaeffer, Pierre. La musique concrète. Paris: Presses Universitaires de France, 1967. Schloss, Joseph G. Making Beats: The Art of Sample-Based Hip-Hop. Middletown: Wesleyan University Press, 2004. Schwarz, David. Listening Subjects: Music, Psychoanalysis, Culture. Durham: Duke University Press, 1997. Shapiro, Peter, ed. Modulations: A History of Electronic Music.Throbbing Words on Sound. New York: Distributed Art Publishers, 2000. Simon, Sherry. Le trafic des langues:Traduction et culture dans la littérature québécoise. Montreal: Boréal, 1994. Simondon, Gilbert. Mode d’existence des objets techniques. Paris: Aubier, 1958. Sterne, Jonathan. ‘Analog’. In Digital Keywords: A Vocabulary of Information Society and Culture, edited by Benjamin Peters, 31–45. Princeton: Princeton University Press, 2016. Stévance, Sophie. Musique actuelle. Montreal: Les Presses de l’Université de Montréal, 2012. Straw, Will. ‘In Memoriam: The Music CD and Its Ends’. Design and Culture 1, no. 1 (2009): 79–92. Sturman, Susan. ‘On Black-Boxing Gender: Some Social Questions for Bruno Latour’. Social Epistemology 20, no. 2 (2006): 181–184. Taruskin, Richard. The Danger of Music and Other Anti-Utopian Essays. Berkeley: University of California Press, 2009. Taylor, Timothy D. Strange Sounds: Music,Technology and Culture. New York: Routledge, 2001. Théberge, Paul. Any Sound You Can Imagine: Making Music/Consuming Technology. Middletown: Wesleyan University Press, 1997. Thomson, Phil. ‘Atoms and Errors: Towards a History and Aesthetics of Microsound’. Organised Sound 9, no. 2 (2004): 207–218. Thornton, Sarah. Club Cultures: Music, Media and Subcultural Capital. Cambridge: Polity Press, 1995. Toynbee, Jason. Making Popular Music: Musicians, Creativity and Institutions. London: Arnold, 2000. 
Valiquet, Patrick. ‘ “The Digital Is Everywhere”: Negotiating the Aesthetics of Digital Mediation in Montreal’s Electroacoustic and Sound Art Scenes’. DPhil Thesis, University of Oxford, 2014. ———. ‘ “All Sounds Are Created Equal”: Mediating Democracy in Acousmatic Education’. In Critical Approaches to the Production of Music and Sound, edited by Samantha Bennett and Elliot Bates. New York: Bloomsbury, forthcoming. Waters, Simon. ‘Beyond the Acousmatic: Hybrid Tendencies in Electroacoustic Music’. In Music, Electronic Media and Culture, edited by Simon Emmerson, 56–83. Aldershot: Ashgate, 2000. Winner, Langdon. ‘Citizen Virtues in a Technological Order’. In Technology and the Politics of Knowledge, edited by Andrew Feenberg and Alastair Hannay, 64–85. Bloomington: Indiana University Press, 1995. Wishart, Trevor. On Sonic Art, Revised Edition. Edited by Simon Emmerson. London: Routledge, 1996.
Discography Adult. Anxiety Always. Compact Disc. Detroit: Ersatz Audio, 2003. Black Dice. Beaches and Canyons. Compact Disc. New York: DFA, 2002. Dreamcatcher. A Team Come True. Cassette. Easthampton: Ecstatic Peace!, 2008. KANTNAGANO. Blessure narcissique. FLAC. Montreal: Les Encodages de l’Oubli, 2012. Lightning Bolt. Ride the Skies. Compact Disc. Providence: Load Records, 2001. St-Onge, Alexandre. Aimer la concrescence. Streaming Audio. Glendale: Absence of Wax, 2012. Tétreault, Martin + Le Quatuor de Tourne-Disques. Points, Lignes Avec Haut-Parleurs. Compact Disc. Montreal: Oral, 2011.
Acknowledgements The research presented in this article was supported by doctoral award 752–2011–0655 from the Social Sciences and Humanities Research Council of Canada and by the European Research Council Advanced Grants scheme under the European Union’s Seventh Framework Programme (FP7/2007–2013), ERC grant agreement no. 249598. The research programme, directed by Professor Georgina Born, was titled ‘Music, Digitization, Mediation: Towards Interdisciplinary Music Studies’ and ran from 2010–2015 at the University of Oxford. I am grateful to Georgina Born, Kyle Devine, Adam Harper, and Mads Krogh for feedback on earlier drafts.
5 DANCING IN THE TECHNOCULTURE Hillegonda C Rietveld
Introduction
This chapter addresses transnational, yet locally definable, electronic dance music culture as it engages with and intervenes in globalising experiences of urbanised life and the digitally networked information and communication technologies that characterise and dominate what is currently known as "the technoculture" (Shaw 2008; Robins and Webster 1999; Constance and Ross 1991). Electronic dance music, as it developed during the late 1980s into the 1990s, can be understood here as DJ-friendly dance music, characterised by the dominant use of electronic music technologies, such as synthesisers, sequencers and digital audio workstations (DAWs – computer software for music composition, recording, editing and production). In this rhythm- and texture-dominated music genre, tempos range from around 125 beats per minute (bpm) for a strutting house-styled track, through 135 bpm for what is now regarded as techno proper, to around 160 bpm for drum'n'bass. Extremes exist, with soulful vocalised deep house at the slow end and electronic body music and techno-styled gabber house (or "gabba") at the fast end. The development of electronic dance music now spans nearly four decades, encompassing a wide range of subgenres, DJs, dance parties, dance clubs, raves and festivals across globally mediated hubs that can loosely be mapped onto much the same global territory as social media such as Facebook and YouTube, or music-based streaming services such as Boiler Room (https://boilerroom.tv), a magazine-format music television service that streams online DJ performances, which are then archived via dedicated online video channels. Boiler Room's programming illustrates the spread and diversity of current electronic dance music scenes well. Its DJ performances take place in a wide spread of urban locations, first from London, followed by offices in Berlin, New York and Los Angeles, expanding their reach across a wide range of cities that include Buenos Aires, Mexico City, Tokyo, Edinburgh and Amsterdam. The spread of electronic dance music culture goes further, to cities like Bangkok, Jakarta, Hong Kong, Singapore, Melbourne, Sydney and Bombay, which feature dance clubs and provide gateways to dance parties and outdoor festivals in the countryside. A slightly different scenario presents itself in the Indian coastal province of Goa, where since the late 1980s long-term hippy migrant/tourists have engaged in a dance scene based on beach parties (preferably held during full moon) that eventually gave shape to Goa trance during the early 1990s and which subsequently morphed into psytrance (see Rietveld 2010a; Davies 2004). The processes of glocalisation
can be noticed, in which a cultural concept that moves along the communication pathways of globalisation acquires local meanings (Robertson 1995). Over time, electronic dance music has become a wide-ranging musical category, including most electronically generated music that serves the purpose of dancing at dedicated events. A range of styles can be identified under the umbrella of electronic dance music, from vocalised dance songs to instrumental noise tracks. The musical common denominator seems to be techno, however, a mostly instrumental form of electronic dance music that foregrounds (rather than hides) its electronic sound generation with the use of synthesisers, sequencers and digital software-based plug-ins that are associated with DAWs (digital audio workstations). Also in the study of electronic dance music, techno is often regarded as the main musical form without specifying (sub)genres (see Butler 2006, for further discussion). This includes genres, such as dub step and drum’n’bass, with mixed traces of Black Atlantic syncopations and break beats that are part of a reggae continuum, as well as what Reynolds (2013) identifies as part of a “hardcore continuum” that is linked to hardcore rave, and will here be understood through the lens of techno. Meanwhile, as a subgenre, techno is itself arguably a “post-soul” music genre (Albiez 2005) and a post-industrial amalgamation of cross-Atlantic electronic dance music. In the words of seminal techno producer Kevin Saunderson: It’s all called techno or dance music now, ’cause it’s all electronic music created with technological equipment. Maybe that should be the only name,“dance music”, because everybody has a different vision of what techno is now. (cited by Rubin 2000: 108) Amplified on a large sound system and combined into a soundtrack by a DJ, techno is mostly instrumental and foregrounds its electronic textures, shaping a technologised sonic dance environment that has been well documented (Attias et al. 2013; Barr 2000; Brewster and Broughton 2006; Collin 1997; Garratt 1998; Reynolds 1998; Rubin 2000; Savage 1996; Sharp 2000; Sicko 1999). However, perhaps confusingly, in the United States the term “EDM” can now be understood as a “poppy” subgenre of electronic dance music that has been associated with mega raves since 2012–2013. Therefore, to differentiate EDM as a specific genre from the umbrella genre “electronic dance music”, this chapter will use the term “electronica” by way of abbreviation, while for the purpose of the argument here, the main focus will be on techno’s machine aesthetic. During the 1980s and 1990s, techno music opened a space to articulate a specific response to the post-industrial experience of a rapidly emerging information society (Albiez 2005; Eshun 1998; Rietveld 2004). Referring to Detroit’s once buoyant car industry, Detroit techno pioneer Derrick May commented in 1988 that this “post-soul” music was partly inspired by a shift in production processes: “today the automobile plants use robots and computers to make their cars” (cited in Cosgrove 1988a: sleeve notes). Not only automatisation but an actual disappearance of manufacturing industries has impacted on the techno music aesthetic; for example, when on a North English dance floor, cultural commentator Savage observed that: as machine noise swirls around us, it hits me. This is industrial displacement. 
Now that Britain has lost most of its heavy industry, its children are simulating an industrial experience for their entertainment and transcendence. (1996: 311) Robins and Webster (1999: 1) note that "The idea of the technological revolution has become normative – routine and commonplace – in our technocultural times" where, technocultural
commentator Davies (1998: 8) argues, “the spiritual imagination seizes information technology for its own purposes”. Vietnam vet Rick Davis, co-founder of Detroit’s electronic band Cybotron, imagines that transformation into a “supra-human entity” is possible through an “interfacing of the spirituality of human beings into the cybernetic matrix” (in: Reynolds. 1998: 8). Existing uniquely in a studio-generated soundscape, synthesised bleeping and modulating sounds dominate techno’s surreal dance recordings, allowing an exploration of the experience of a cyber-future in which information and communication technologies play a central role: “the electronic sounds all too accurately reproduce the snap of synapses forced to process a relentless, swelling flood of electronic information” (Savage 1996: 312). The aesthetic of techno does not only offer a musical response to the technoculture, it enables an immersive and kinetic engagement with an increasingly posthuman condition on a somatic and spiritual level (Rietveld 2004). In other words, techno offers the chance to dance to the technoculture. The futurism embedded in techno music is simultaneously a seeming rejection of the past in order to empower its listeners to cope with a bewilderingly accelerating present, offering an aesthetic that has spread globally through rapidly developing electronic networks. Local differences in identity politics have given rise to an ongoing discursive struggle regarding the ownership of the futurist memories that techno’s musical hybrids embody within their sonic characteristics. DJs, producers and promoters battle out the meaning of techno through their music productions. In particular, although the term “techno” was used earlier in Germany (Dir. Sextro and Wick, 2008), techno was marketed in the UK as Detroit’s response to house music from Chicago and electro from New York. The cultural contexts in which it is listened and danced to arguably emerged earlier, in the 1970s, during the development of disco, in which dancers would mainly dance individually or in groups (rather than as couples) to groove-based recordings selected by DJs (Fikentscher 2000; Lawrence 2003). During the 1980s, electronic dance music genres emerged that partly took cues from electro-funk, Italo-disco, post-punk electronic pop from the UK and electronic body music from Belgium and Germany. New York’s electro-funk mixed hip-hop culture with post-punk electronica during the early 1980s. And from around 1986 onwards, house music was initially exported from Chicago to New York, the UK and elsewhere in Europe, while it also inspired the inception of techno in Detroit (Collin 1997; Garratt 1998; Rietveld 1998, 2007; Reynolds 1998). Such discourses are further explored in dance and music production magazines and websites. Numerous academic publications have addressed various aspects of electronic dance music and its cultures since the early 1990s, including Albiez (2005), Butler (2006, 2014), Fikentscher (2000), Lawrence (2003), Pini (2001), Attias et al (2013) and St John (2004), while trade and subcultural magazines for DJs, music producers and dance fans have reported on its development from its inception. In 2009, a dedicated academic journal appeared, Dancecult, Journal of Electronic Dance Music Culture, led by Graham St John. The journal’s editorial team and advisory board both show an international but mainly English-language scholarship based in the USA, Canada, Australia and Western Europe. 
Topics range from dance music festivals, DJ cultures and production issues, to the genre aesthetics of psytrance, Afrofuturism and dub. Here the scholarship is mainly centred on the contexts of instrumental electronic dance music, techno and trance, defined by "four to the floor" beats, although the range of genres under discussion is becoming more inclusive. Certain "technocracies" have evolved, a term that is used playfully here; rather than referring to governance on the basis of technical expertise, in this discussion it refers to ongoing power struggles regarding the validity and dominance of distinct techno subgenres. For example, the term "techno" was coined by Andreas Tomalla (DJ Talla 2XL) during the early 1980s, when he worked in a record shop in Frankfurt, Germany, in order to indicate post-punk pop and dance
music associated with electronic body music and the Neue Welle (“new wave”), as well as electro and the tech-noir sound of Suicide from New York, electro-funk from Detroit and, later in the 1980s, the almost always instrumental minimalist sound of acid house that came from Chicago. In this case, the word “techno” referred to music that was produced “technologically”, through electronic music technology. In Germany, then, all forms of electronic dance music, whether house music or otherwise, are referred to as “techno”. In the English-speaking world, however, the story is told in a different way. This will be illustrated here through locating formative moments of techno in Detroit and drum’n’bass in London before returning to Germany, and elsewhere, for a discussion of trance.
The new dance sound of Detroit
In 1988, Techno! The New Dance Sound of Detroit, compiled by Northern Soul collector Neil Rushton for 10 Records, was released in the UK. The compilation gave techno its conceptual manifesto, written as the liner notes by Stuart Cosgrove (1988a), who also wrote a seminal introductory article around the same time for British style magazine The Face, in which he quotes Detroit techno producer Derrick May giving a much-cited short-hand definition of the Detroit sound of techno: "The music is just like Detroit, a complete mistake. It's like George Clinton and Kraftwerk stuck in an elevator" (Cosgrove 1988b: 86). The term "techno" was employed as a marketing device to distinguish Detroit's dance music output from Chicago house music around that time. Chicago house and Detroit techno were stylistically intertwined, as can be heard on the double album in tracks such as "Share this House" by Members of the House and in the title of the "megamix" (a DJ mix of tracks), "Detroit is Jacking", as the word "jacking" refers to a dance style associated with the Chicago house music scene. In the sleeve notes, co-compiler and techno DJ-producer Derrick May explains the difference between the local styles: "House music still has its heart in the '70s disco. We don't have any of that respect for the past. It's strictly future music" (cited by Cosgrove 1988a: sleeve notes). Chicago house music may be partially regarded as a revival of disco, albeit in electronic form; it thereby reinterprets the past for the present. Detroit techno rejects a nostalgic view of the past; its producers embed their music in Afrofuturist cultural politics (Williams 2001; Veen 2013). The 12 tracks on the compilation are diverse, from the previously mentioned house tracks and the commercially successful vocal dance anthem "Big Fun" performed by Inner City and produced by Kevin Saunderson to the genre-defining electronic funk abstractions of Rhythim Is Rhythim's "It Is What It Is", by Derrick May, and Juan Atkins's "Techno Music", which ultimately defined the genre of Detroit techno. Atkins appropriated the term "techno" from Toffler's 1980 publication The Third Wave, which suggests that the "techno-rebel" takes control of technology rather than being controlled by it, by making it part of frontline culture: The techno-rebels are, whether they recognize it or not, agents of the Third Wave. They will not vanish but multiply in the years ahead. For they are as much part of the advance to a new stage of civilization as our missions to Venus, our amazing computers, our biological discoveries, or our explorations of the oceanic depths. (1980: 153) Toffler refers here to an economic-historical three-wave model: agricultural (extraction), industrial (manufacture) and post-industrial (information). The last, the third wave, is post-Fordist in production structure,1 in which flexible specialisation has become a central characteristic, a response
to individualised lifestyles, to decentralisation of production, to automation, to freelance work and to a rapidly developing electronic home worker industry. Toffler speaks of "the rise of the prosumer" (265), the do-it-yourself consumer who in effect becomes part of the production process. A fledgling DIY electronic music producer could certainly identify with this model. Toffler observes that "The techno-rebels contend that technology need not be big, costly, or complex in order to be 'sophisticated'" (1980: 152). The introduction of chip technology enabled the compact and relatively cheap mass production of digital music instruments in the 1980s, resulting in widening access to music production and a mushrooming of home recording studios based on digital music systems (Katz 2004; Théberge 1997). This has led to the production and fluid metamorphoses of a wide range of electronic music formats, including electro, Italo-disco, house music, hardcore rave, drum'n'bass, trance and techno. Although other once industrially leading cities, such as Manchester (UK) or Chicago, experienced similar post-industrial change to Detroit during the 1970s, the latter uniquely encountered this shift quite brutally: In the period 1940 through 1963, Detroit was the greatest manufacturing city in the world, unmatched in real physical productivity. But during the period 1964–2004, Detroit became synonymous with blight and decay beyond imagination. (Freeman 2004: web source) Detroit was once proud of its dominant car industry, giving soul music label Motown its name and creating a significant black middle class with social control, even though corporate control remained in white hands. When the city's industrial economic core collapsed in the 1960s and 1970s, a middle-class population flight occurred, leaving behind an almost halved impoverished population in an urban ruin (Sicko 1999). Even recording label Motown left, heading for Los Angeles' entertainment network in 1972. Many of the decaying buildings have since been cleared, making space for new corporate structures, or simply left undeveloped, creating an empty ghost town. In the 1990s, remaining manufacturing spaces lent themselves well to techno dance parties, organised, for example, by Canadian Richie Hawtin (Rubin 2000). In a haunting take on electro pop, Cybotron's 1984 recording "Techno City"2 depicts a posthuman city, its sound design suggesting a hollow, desolate urban space; although the lyrics hold futuristic promise, the overall effect is an inescapable dystopian vortex. The song's epic direction was not where Juan Atkins wished to go, and he left Cybotron to Richard Davis. Based in a middle-class suburban town near Detroit in the 1980s, the "Belleville Three" (Juan Atkins, Kevin Saunderson and Derrick May) experimented further to give electronic musical form to Detroit's alienating experience. These first self-declared techno producers took inspiration from a range of sources, such as the futurist electro sound of Afrika Bambaataa's "Planet Rock" (Tommy Boy, 1982); the dance format of Chicago house; Detroit's electronic funk, by pioneers George Clinton, Bernie Worrell, and Bootsy Collins of Parliament-Funkadelic; the electroacoustic Motown soul of Stevie Wonder; and the European electronic music of Gary Numan, New Order, Giorgio Moroder and Kraftwerk.
Albiez (2005) points out that early techno's "post-soul aesthetic" should be understood in the context of "how musical production in Detroit, and elsewhere, is caught up in the global and trans-Atlantic flows (of) popular culture" (p. 4). The particular mix of these musical influences was mediated by Detroit radio, especially DJ the Electrifyin' Mojo's regular show (Sicko 1999). Techno's formation drew importantly on the Afrofuturism of '70s funk and on an interest in science fiction (SF). Afrofuturism was loosely based on the premise that African Americans are aliens without a place to return to; although fettered in the post-industrial present, they have a liberating
technological future to look forward to (Dery 1994). In this context, Eshun writes about "sonic fiction", science fiction in musical format: The bedroom, the party, the dancefloor, the rave: these are the labs where the 21st C nervous systems assemble themselves, the matrices of the Futurhytmicmachinic Discontinuum. (1998: -001) As part of this rhizomic discontinuum, electronic imports appealed to Detroit's techno producers "because the European music sounded as alien as they felt" (Jonker 2002: web source), and according to Atkins, the interest in electronic music from Europe was a black middle-class tactic, "to distance themselves from the kids that were coming up in the projects, the ghetto" (cited by Reynolds 1998: 5). Through such influences, Detroit's sonic fiction engaged with European futurism. In postmodernist fashion, the German group Kraftwerk successfully pioneered an experimental bridge between modernist avant-garde intellectualism and soul music in the 1970s to produce electronic pop (Rietveld 2010b). Presenting themselves as "man machines" that design music for the computerised future (Barr 1998; Bussy 1993), they stated, "We are not entertainers, we are sound scientists" (in Shapiro 2000: 249). A machine aesthetic can be found throughout the twentieth century in avant-garde art, film and music. Italian futurist Russolo envisaged artistic potential in the harnessing of urban and industrial machine noise, which he articulated in his 1913 manifesto, "The Art of Noises" (1973). Set in a hierarchically segregated city that inspired Cybotron's song "Techno City", Fritz Lang's expressionist Metropolis (1927) offers a dystopian vision of the machine getting out of control in the year 2026; the film collapses this image with the integration of a female spirit into a robot, who becomes both a "supra-human entity" (Davis, cited by Reynolds 1998: 8) and a "vamp in the machine" (Huyssen 1986), which subsequently leads the masses to an irrationally destructive freedom from the depths of a dehumanised industrial existence. Near the end of the century, Bradby (1993) identified a similar patriarchal construction of a sexualised feminine cyborg in the use of female vocals within techno's variants, positioning women and machines as significant others within the technocultural aesthetic. The flowing disco funk machine rhythm of Chicago house music gave a danceable framework for the Afrofuturist reworking of the machine aesthetic in techno. Of particular importance was the cyberdelic abstraction found in Phuture's 1987 recording "Acid Tracks" (Trax Records), a minimalist track that consists of a rigid disco drum pattern and an idiosyncratic modulating bass sequence. The track was the result of an "accident": the bass sequence was produced by a sound-generating sequencer, the Roland TB-303 Bass Line, without programmed memory. Its random notes sounded as though the machine was tripping on acid (LSD). The producers, including DJ Pierre, had never heard anything like it and decided to frame this in a "four-to-the-floor" house music beat (Rietveld 1998). Here the machine was allowed to get out of control in order to produce alien music for alienated "cyborgs". The machine texture was enhanced by overdriving the resonance of the bass sound, put to tape and tested on the psychedelic dance floor of Chicago DJ Ron Hardy.
This was the start of acid house, which inspired not only budding Detroit techno producers Derrick May and Kevin Saunderson but also the rave craze in the UK and, eventually, the formation of trance in Germany. In Techno! The New Dance Sound of Detroit, one can hear the range of influences that contributed to the manifestation of techno. Albiez has argued that Detroit techno was an African-American
response to acutely felt “racial antagonism” in “a post-industrial, postmodern, post-soul America” (2005: 18). Williams (2001: 171) similarly states, Detroit techno represents an African American aesthetic approach to technology, but one that has been so muddied by cultural exchange that it cannot be tied to one single physical place. . . . (Black) alienation is converted to cyborg identity; and the practice of international musical data exchange becomes a utopian myth of non-property-based, “open source” collaboration that functions as a resolution to the contradictions and inequities of global electronic capitalism. In doing so, techno embraced both European and Afrofuturisms, dislocating the African-American subject from the past, producing an intense yet deterritorialised sound that moves towards the future. This has enabled techno participants elsewhere to make sense of their experience of alienation in the rapidly emerging global technoculture. Rhizomically, techno’s hybrid futurist notions have dispersed, intermingling with local sensitivities and cultural sensibilities. This will be illustrated by the formation of drum’n’bass, which uniquely developed in the UK, contrasted by a discussion of a distinctly different formation of trance.
Black Secret Technology
In 1995, A Guy Called Gerald released the acclaimed drum'n'bass album Black Secret Technology, containing washes of synth sounds, sampled segments from dance music recordings, and intricate break beat rhythms. An occasional female vocal sounds hauntingly sensual, at times as if from across a deserted plain, at other times twisted in digitally warped sonic space. The album was re-released by the artist in 2008, who stated that: Balanced on the razor edge between mysticism and frenzy, Black Secret Technology remains a powerful comment on the eternal struggle between man and his future. (Simpson 2008: web source) The album bridges the technocultural experiments of Detroit techno and the development of UK break beat music. Gerald Simpson hails from a working-class Jamaican family in post-industrial Manchester (UK), where he made early waves in 1988, not least in the local Haçienda club, with a haunting techno track, "Voodoo Ray" (Rham! Records), partially a response to the Detroit techno he heard being played on local radio by DJ Stu Allen. Experimenting with a then-novel digital audio sampler, Gerald processed and reversed a female vocal, with the effect that it sounds otherworldly. Its sonic treatment seems to resemble the acoustic space of the now demolished high-rise buildings of Hulme council estate in Moss Side, the inner-city neighbourhood where Gerald grew up. The album track "Voodoo Rage" revisits this earlier recording, digitally deconstructing the now longer version of the sampled vocal and lining it up alongside time-shifting percussive break beats within lush synthesised textures (see also Rietveld 2014). In between these two recordings, in 1990, Gerald went to Detroit to meet Derrick May, who eloquently clarified their difference in approach by describing Gerald as a raw diamond that should not be polished. Drum'n'bass emerged in the UK during the mid-'90s, just a little later than jungle, or jungle-techno, which developed in the early 1990s and is culturally closer to dancehall reggae and ragga than drum'n'bass. Still, the terms are often used synonymously, as from a historical distance there
is more resemblance than difference. These genres are, like "hardcore rave" music, characterised by speeded-up versions of electro break beats. For example, an accelerated drum sample from The Winstons' 1969 soul B-side, "Amen Brother", itself a faster version of the gospel song "Amen", has become the beat source material for the ubiquitous "Amen Break" (Butler 2006), which was deconstructed and popularised by electro producer Kurtis Mantronik (ya Salaam and ya Salaam 2006). An example is Luke Vibert's pastiche "Junglism" for the project Amen Andrews (Rephlex, 2006).3 On average, with variations on either side, drum'n'bass and jungle recordings typically have a speed of around 160 beats per minute (bpm). In contrast to the rapid break beats, the genre's bass lines often seem superimposed from dub reggae and run at half the speed of the drum programming. This, in turn, lends it to a multi-dimensional way of listening, as can be seen in dance styles that range from rapid jungle-rave crew jump-up and flailing hands to occasional jazz-steps or a sensuous hip swing. Like techno, drum'n'bass and jungle embrace futuristic technologised sensibilities that seem to break with the past. Yet break beat genres embody distinct memories of Black Atlantic musical pathways, which are not only audible in the pattern and low bass of the bass line. Jonker, for example, shows how Black Secret Technology points to a rhizomatic concept of machine magic: It's important to note that in Jamaican patois, "science" refers to obeah, the African grab-bag of herbal, ritual and occult lore popular on the island. Black secret technology is postmodern alchemy, voodoo magic. (2002: web source) Drum'n'bass and jungle are as much part of the techno-continuum as they are related to reggae. In a discussion about DJ-producers Fabio and Grooverider, promoters of the jungle-defining London club Rage, Collin states, Like reggae sound system DJs, they consciously sought to transform the very nature of the music they played, part of a long black futurist lineage dating back through Detroit techno and Jamaican dub, Jimi Hendrix and the cosmic bands of Sun Ra. (1997: 241) Jungle DJs are supported by MCs (masters of ceremony/"mike controllers"), while the reggae residual "dub plate" (a unique single pressing of a version or remix; see, for example, Belle-Fortune 2004 and Rietveld 2013b) is central in break beat DJ practices. In addition, the jungle DJ trick of the "rewind", exciting the crowd with a return to the start of a record, stems directly from the reggae sound system. According to the 1995 TV documentary All Junglists: A London Somet'in Dis (Sharp Image Prod, 1995), showing interviews with many of the main protagonists in the scene, jungle was a black British response to a perceived white dominance of British acid house raves, and it enabled the bonding of multi-racial youth in their shared experience of alienation in post-industrial Britain. UK break beat music is thereby not necessarily homologous to the black British experience but has formative links to this. Gilroy's concept of a fluid, discontinuous "changing same" (2004) is useful here. Responses to varying cultural formats may be changing, but the historical context of (post-)colonial racism remains. In this manner, the technological experimentalism and sonic alienation of Jamaican dub in the 1970s parallels the embrace of techno-rebellion by Detroit techno producers in the 1980s and the techno trickery of drum'n'bass.
Stylistically, and in terms of MC delivery, jungle seems closer to Jamaican dancehall ragga than drum’n’bass. Collin traces the historical development of break beat dance music back to London: East End duo Shut Up and Dance crystallised a direction for the breakbeat house of 1989 and 1990, turning it into a recognizable genre all of its own: raw, urgent and noisy. (1997: 243) It developed partly from the rave scene’s hardcore sound, itself a mixture of techno and house music, plus electro heard at sound systems owned by second-generation Jamaicans during the ’80s. Sharp explains the hardcore aesthetic was a resourceful creation within limited material circumstances: Hardcore’s graininess and garishness were largely forced on it by limitations of the equipment on which it was made. (2000: 139) Jungle has kept this rough-cut, even anarchic, approach to sound. By 1994, the assertive sound of jungle and ragga breaks had exploded in the London dance scene, DJs glitching, rewinding and cutting up tracks. At times jungle sounds like hysterical rave music with accelerated vocal samples, break beats and melody lines, for example the seminal old School jungle track “Junglist” by Rebel MC’s Demon Boyz (Tribal Bass Records, 1992).4 At other times, rasta MC Congo Natty presents a dancehall styled “Junglist Soldier” (Zion 10, 1998).5 At other times, jungle can be reflectively moody or deeply dark, belching out aggressive low bass lines, as can be heard in Dead Dred’s “Dred Bass” (Moving Shadow, 1994).6 James (1997) identifies drum’n’bass as arguably lighter in texture than jungle, more thoughtful and more melodic, while Sharp notes that, If hardcore was a collage – a roughneck ride through a succession of intense experiential instants – drum and bass synthesized those experiences into a swathe of new texture. (2000: 139) In the album tracks of Black Secret Science, drum’n’bass takes on board Detroit techno’s leanings towards jazz, funk and the suggestion of a cinematic sonic space. LTJ Bukem’s atmospheric drum’n’bass sound has paved the way in this jazzier ambient direction. In 1994, he established London club night Speed to promote drum’n’bass to a wider audience. Trained in classical piano and musically developed in the 1980s acid jazz and rare groove soul scene, LTJ Bukem has brought a carefully constructed ambient approach to techno’s conceptual direction. For example, his recording “Rainfall” (Looking Good Record, 1995) uses a sophisticated sonic palette of soothing sounds that suggest digitised raindrops and an almost Japanese music scheme, hinting simultaneously at a pastoral escape from city grime as well as at a broad awareness of a global force in electronic culture, Japan. Presented to a soundscape of lush electronic string pads, the break beats are no longer necessary for dancing but rather function as a rapid flow of data, which slots seamlessly into a world of informational overload. The “darkness” that is spoken of in relation to certain break beat subgenres seems to relate to a black majority dance scene (Hesmondhalgh and Melville 2001). However, in particular
darkcore and dark-step, darkness suggest an almost Gothic or post-punk sensibility, oozing an eerie and angry dystopian feeling of living in an encroaching ghoulish automated city (Christodoulou 2015). In this sense, it signifies a similar notion of the dangerous city as film noir and is even comparable to the nihilist tech-noir recordings of New York’s electronic post-punk duo Suicide. This direction can also be heard in Goldie’s epic drum’n’bass triptych “Timeless”, in which the digital design of part 1, “Inner City Life”, suggests the dark sonic space of a lonely rainy night in a concrete housing estate. Such musical darkness appeals to an ethnically diverse crowd and seems related to an avid consumption of sci-fi films and computer games. Listening to London’s (illegal) pirate radio, some examples of jungle can sound like rumbling artillery to defend against a coercing machine, like a refusal to become an obedient component that works according to the manual and to march, on the hegemonic beat, according to the rules and regulations that are perceived to produce social inequalities. As memory traces of acid house and dub meet, it is the ruptured flow of the runaway renegade machine that is celebrated.
Trance

Trance produces an industrial, spacious soundscape that veers away from rhythmic rupture. In 1992, German recording label Harthouse, co-owned by DJ Sven Väth (famed for his marathon hardhouse sessions), released Hardfloor’s genre-defining track “Hardtrance Acperiance 1” (Harthouse, 1992). The recording returns to the acid meme of Phuture’s acid bass line in “Acid Tracks” (Trax Records, 1987); on this occasion, however, the modulating squelching sequences are multi-tracked, looped and exaggerated into more ornate sonic shapes, producing a baroque version of techno that contrasts starkly with its minimalist predecessor. In addition, breakdowns (pauses in the drum patterns) are programmed to allow the whooshing electronic textures of sequenced arpeggios to dominate. Such breakdowns are based in a DJ practice of occasionally turning off the bass frequency; they tease and bond the crowd, who halt their dance movements, expecting the kick drum to return for another round of slam-dance. In the recording, the drums are reintroduced by a snare riff that sounds like the announcement of a circus trick, running along the 16s (semiquavers) of a mid-high textured sequence, adding to the increasing intensity to excite the dancer. This simple “fort-da” suspense trick has proven so successful that most contemporary trance recordings sport several quite lengthy breakdowns each. Trance is characterised by metronomic arpeggiated monotones, sections of which shift up and then down by a semitone, adding to a perpetual state of focussed desire and a powerful feeling of imminence, that something important may arrive soon, keeping the dancer engaged, hypnotised, to go on and on and on. Trance offers swirling repetitive sequences in an epic expansive space, seemingly wide open to be explored and possibly conquered. Psytrance, a subgenre favoured at countercultural raves, adds to this a spiritual encounter with the sonic equivalence of mandala shapes. Musical structures spiral into a virtual trip through an infinite time tunnel, to nowhere in particular. In emphasising a mid- to high frequency range, trance seems to physically affect one’s upper body more than the lower body. Dancers often move their arms in the air without moving their hips, hopping on the spot. Mixed seamlessly by the DJ, the tracks glide into each other, deleting individual differences between them. As tracks often start and end with synthesised layers of sound, the harmonic blend overtakes beat mixing. The repetitive structures of trance tracks do not pose an intellectual challenge to change the future; instead, their predictability soothes the dancer, making it safe to indulge in an experience of amnesia or to focus deeper on the moment of the trip through movement. In the words of Goa Gil, the
oldest trance DJ in Indian hippie tourist resort Goa: “Dance! Dance is active meditation” (cited by Saldanha 2007: 71). Europe’s twentieth-century machine aesthetic has contributed an industrial dimension to techno. In 1987, Kraftwerk member Ralf Hutter described their experimental electronic tracks “Autobahn” (Phillips 1974) and “Trans-Europe Express” (EMI 1977) as a type of trance music:

Letting yourself go. Sit on the rails and ch-ch-ch-ch-ch. Just keep going. Fade in and fade out rather than trying to be dramatic or trying to implant into the music a logical order, which I think is ridiculous. In our society everything is in motion. Music is a flowing artform.
(Toop 1992: 21)

Trance travels, it transports – taking the dancer on a journey in a mesmerising ebb and flow of machine noise that resonates with the cyberdelic qualities of acid house:

Trance is trippy, in both the LSD and motorik senses of the word, evoking frictionless trajectories of video-games, virtual reality . . . hurtling through cyber space.
(Reynolds 1998: 184)

The “motorik” is an insistent beat emphasising rhythm that was introduced in the early 1970s by German drummer Klaus Dinger, founding member of rock band Neu! and of Kraftwerk (Reynolds 2000). The motorik can be heard in the 1980s sound of the Neue Deutsche Welle (such as German electronic outfit DAF); industrial electronic body music (such as Belgian Front 242); HiNRG (a type of Eurodisco); Dutch gabber house; and trance-techno. Like a relentless machine, averaging a speed of 145 bpm, trance moves along a 4/4 kick drum and regular semiquaver electronic sequences. Although Hutter voices a desire to move away from “a logical order”, the motorik shows a rationalist approach to music: its machine motion is industrial in its repetitive rhythmic time management, fast like driving a car on the German motorway (Autobahn) and hypnotic like taking an international train across Europe (TEE). In Europe, the notion of “trance” seems common when addressing electronic dance. For example, as early as 1988, raves in Amsterdam, which often hosted acid house DJs from London, were referred to as “trance parties” (Rietveld 1998). In Berlin, trance and techno dominated the annual Love Parade during the 1990s, celebrating the demolition of the Berlin Wall and consequent German reunification. By the end of the 1990s, this sound system street party attracted over a million young people in one day. The slogan for the 1997 Berlin Love Parade was “One Nation Under a Groove”, which recontextualised Funkadelic’s 1978 recording (Warner Bros) as a statement of national peace, celebrated underneath the historical Siegessäule (Victory Column). During the mid-’90s, Berlin even became a temporary hometown to various successful Detroit techno DJ-producers, such as Jeff Mills. The two cities had, for different historical reasons, experienced abrupt change that urgently necessitated a new noise, a new music, to enter a shared global technoculture. In 2004, Dutch DJ Tiësto famously DJ-ed a trance music set at the opening ceremony of the Olympics in Athens, “Parade of the Athletes” (Magik Muzik, 2004),7 as athletes from around the world paraded into the arena in view of a global TV audience. Such exposure seems to indicate that trance music offers an all-inclusive soundtrack to the global technoculture, a musical
aesthetic for transnational “non-places” (Augé 1995). In an ethnographic study of countercultural trance parties in hippy resort Goa, India, Saldanha observed a variation on this:

transcendental experience in white psychedelic counterculture is considered a means to overcome one’s kind of racial formation to embrace all of humanity or all of the planet.
(2007: 72)

Detroit techno’s deterritorialised escape from racial identifiers made its abstracted format a suitable accompaniment to transcendental dance parties. By default, techno is often hegemonically designated as “white” or “European” (Albiez 2005). In addition, in the alternative party scene of Goa and psytrance, a hyperreal nostalgia is mixed with the machine aesthetic of trance; technologised versions of imaginary ancient, pagan, spiritual practices can accompany especially psytrance parties (Moreton 1995). An example is the title of a DJ mix, Shamanic Trance, compiled by the Japanese trance DJ Tsuyoshi Suzuki for London-based label Return to the Source (1996).8 In an embedded study of the Goa trance scene, Saldanha (2007) wonders “to what extent techno-shamanism escapes white modernity” (73), concluding that trance-psychedelics are about “whiteness reinventing itself” (74). As the demographic composition of long-term tourists in Goa is mixed, perhaps the notion of “whiteness” may be best understood as a complex and uneven relationship between the escapist parties of global travellers and the daily material realities of the local population. Taking the problematic further into a global context, trance offers an opportunity to investigate how technoculture reimagines itself musically. A combination of cosmopolitanism and a deterritorialised format makes trance a conveniently “odourless” (Iwabuchi 2002) exportable product that can be localised: “trance in all its forms has truly conquered the world, forming the backbone of the rave scene in America as well as the expanding club circuit” (Broughton and Brewster 2006: 370). Trance has spread in directions where the global economy dominates, including Europe, the US, Japan, Singapore, Australia, Greece, various backpacker holiday destinations, from Goa and Thailand to Peru, and especially Israel (listen, for example, to Infected Mushroom’s 2003 album Converting Vegetarians, on Yoyo Records,9 where rock, European classical melody lines and Middle-Eastern tonality are mixed into a spacious yet pulsating, danceable digital soundtrack). As a functional response to technocultural anxiety, which abstractly combines a countercultural mission with post-industrial traces of techno-rebellion (Rietveld 2010a), trance has been adapted to various youth cultural agendas. At one end of the spectrum are anarchic outdoor raves, fuelled by psytrance; at the other end, a version of trance is utilised in a corporate endeavour, favouring brand names that market international club concepts, recording labels and DJs. In between, trance-flavoured dance pop can be heard in various local DJ bars, from Thailand to Britain and back. In a process of rapid hybridisation, memories of the future seem to dissipate, displaced by a seemingly perpetual present that forgets the genealogy of a musical journey which is, nevertheless, traceable. 
Although trance is a subgenre under the umbrella term of electronic dance music, within the popular imagination it is often confusingly regarded as synonymous with electronic dance music. With its deterritorialised digital sound and relatively simple musical structures, during the 1990s trance’s trajectory partially morphed into a popular genre that was briefly referred to as “epic trance”. By 2012 it had developed further in the American “mainstream” rave scene as a pop-dance genre that is now referred to as “EDM” (Reynolds 2012; Rietveld 2013a), an abbreviation of “Electronic Dance Music”, not to be confused with previous academic uses of this term.
DJ aesthetics

The DJ cultures associated with electronic dance music have been instrumental in forging music genres in specific urban locations, and subsequent musical tourism has developed to globally connected cities that are associated with such genre formation. When the DJ segues or mixes two records, a “third record” is produced that results in both recombination and innovation (Rietveld 2007, 2011). Common to the different subgenres, including house music, techno, trance, drum’n’bass, dub step and myriad others, electronic dance music is characterised by a functionality that enhances an extended dance experience through DJ-friendly musical structures. For example, when recordings start and end in ways that enable the layering of two recordings, seamlessly segueing one into the next, dancers can escape a rationalised sense of time and instead enter into a different here and now, of living forever in a sonic space that seems without boundaries. Being locked in a groove within a dance space that is delineated by sonic dominance, which is produced through a powerful sound system, can create a contemporary ritualised type of transcendence that leaves dancers free to vote with their feet: with just one messy mix or mistaken music selection, the magic can be broken. During the 1970s, some New York-based disco DJs would remix a song so that the rhythm section would start the track before voices and melody enter (Lawrence 2003). At the start of the 1980s, the use of reasonably affordable drum machines (soon even cheaper when obtained second-hand) enabled the production of stable beats that are easier to mix than recordings based on the organic “feel” of human drummers. Within the electro scene at the Roxy club in New York, DJs used the drum machine (especially the Roland TR-808) as a basis over which to mix music from vinyl records, and soon this became, together with rap, a signature sound for the electro-funk genre. In Chicago, innovative DJ Frankie Knuckles remixed existing recordings with additional drum machine sounds to “beef up” the bass and the “foot” of his music selections, playing these on reel-to-reel tape, mixed in with vinyl-based recordings; for example, his mix of Jamie Principle’s “Baby Wants to Ride”.10 This, in turn, gave rise to the formation of house music (Rietveld 1998), which by 1987 was closely followed by acid house in Chicago and techno in Detroit. The processed sampled sounds of the Roland TR-909 provided a biting sound palette that was not only heard in the mix work of Frankie Knuckles. Derrick May, for example, had informed Knuckles of this drum machine and later made musical history as Rhythim is Rhythim with the seminal Detroit techno track “Strings of Life” (Transmat 1987). Even though these drum machines were taken out of production by Roland not long after their introduction, the sound of the TR-909 drum machine is iconic, dominating the sound of techno and related dance music genres. Listen, for example, to the intensely pulsating UK techno recording “Pagga” by Surgeon (Kickin Records, 1996)11 or the Detroit techno recording “Jaguar” by The Aztec Mystic a.k.a. DJ Rolando (Underground Resistance, 1999).12 A standard for current techno productions was thereby shaped, as can be heard on, for example, Ansome’s revival acid techno recording “Halyard Hitch” (Mord, 2014).13 Techno producers still use the sound of the TR-909, whether with the original machine, the revival Roland Boutique TR-09, or its sampled sounds within the virtual music production spaces of DAWs. 
DJ-friendly track openings can also be heard in drum’n’bass, jungle and related UK break beat genres that have followed since, such as UK Garage and dub step. Simon Reynolds presents this sequence of genre formation as the hardcore continuum, as these styles have some of their beginnings in East London’s hardcore rave scene, yet there is also a detectable reggae continuum in these specific styles of electronic dance music. DJ styles for these subgenres not only developed from a beat-matching layered DJ style that can be heard at techno and house music events
but also from the ruptured reggae sound system performance styles. During related dance events, the manner in which DJs manipulate the crowd and MCs pick up the mike to address the crowd seems comparable to what Back (1988) found in his study of reggae sound system events of the 1970s and ’80s. As the older reggae sound systems only used one single record player, the MC (called the DJ, comparable to a radio DJ) spoke between records to keep the crowd engaged. In turn, the DJ (called the selector) would add energy to a dance by teasing the crowd, stopping a record after the intro. At times, people would demand a stop and ask for a “rewind”, which literally means rewinding a vinyl recording while leaving the needle on the groove and starting it again. In particular, this may be the case when a unique pressing, the dub plate, is used, as this would be the only opportunity to hear this recording. In this specific context, a beat-matched layered mix between two recordings is not necessary – instead, the recording needs to be instantly recognisable. Reggae recordings are often structured like pop songs on 7-inch vinyl, without a mix-friendly set of melody-free beats at the start (as one finds in disco, house and techno). This format means that recordings in the reggae continuum do not necessarily always start with a sparse rhythm track, even though their mixed genealogy in house and techno music makes this a possibility. In other words, the DJ-friendly mix is less uniform in these subgenres of electronic dance music. Influence works in a broken and roundabout manner via multiple routes that, in the case of break-beat-related electronic dance music, include dancehall, electro-funk and the impact of house, techno and trance in UK rave culture. For trance, the development of DJ-friendly mixes differs again. Trance is a dominant genre at events such as Boom Festival in Portugal, Burning Man in Nevada, or Voov Experience/Vuuv Festival in Germany. As the style of this subgenre is based on a mix of techno, acid house, industrial and electronic body music, trance tracks follow a structure similar to many house music and techno tracks by starting a recording with quite a sparse rhythm track to enable a beat mix. However, trance partly developed within the countercultural context of the Goa scene, where, with other DJs such as Fred Disko, DJ Goa Gil developed his unique tantric DJ style during long sets that start relatively funky but find their peak during the deep night, when the crowd dances to a nightmarish world of pounding machine rhythms of industrial music and repetitive musical components that have the effect of animated mandalas, to finally mellow at sunrise, enabling subject “rebirth” into the normalising visual space of the light of day (see also Davies 2004; Rietveld 2010a, 2013a). Up to the mid-’90s, the format of the music was affected by a combination of climate and available audio medium: in the heat of a tropical climate vinyl records tend to buckle, making digital audio tape (DAT) attractive (before the embrace of the CDR) and enabling DJs like Goa Gil to play imported productions before their release. This had an effect on the formation of Goa trance and psychedelic trance, as audio tape does not lend itself well to beat-matching. 
Instead, devoid of beats, the introductions of Goa trance recordings tend to open with washes of sound; for example, SFX’s 1995 recording Celestial Groove,14 which after an ambient start develops into a fast-moving (and TR-909-driven) melodic trance track. Although neither DAT nor vinyl have been of major consequence since the rise of CDJ players and digital DJ software, which gained popularity from the start of the twenty-first century (Rietveld 2016), this musical structure has left traces in subsequent psytrance and its subgenres, demonstrating how even a genre that prefers not to emphasise the past carries within it a specific genealogy of relationships that have shaped its aesthetic.
Technocultural networks

Similarly to other music genres, fans connect via communication networks (Thornton 1995), which have increased in efficiency and speed. Whereas during the 1980s there were still
print-based fanzines and an exchange of mix cassette tapes, during the 1990s email-based and online electronic dance music discussion forums appeared that, by the end of the 1990s, migrated to the World Wide Web and peer-to-peer music exchange networks. An example of a discussion group is UK-Dance (Rietveld 1999). The current website, set up in 2007, states that “UKDance was started in 1992 as a place for people to discuss everything to do with dance music culture in the UK: clubs, parties, special events, record shops, radio, records and anything else to do with the underground scene” (www.uk-dance.org). In addition to reviewing records, dance events and other related cultural items, such as documentaries and film, participants also share other aspects of their personal preferences, whether food or politics. This has led on occasion to engaged debate and has helped to act as social glue between participants who would otherwise not really know each other. Although DJ mixes were swapped on cassette and CD during the 1990s, this activity has now mostly been replaced by links to music upload websites like Soundcloud and Mixcloud, the audio equivalents to the video format of YouTube. In the case of some participants of UK-Dance, music labels were created – a good example being Hyperdub, which was at the forefront of the development of dub step during the early part of the millennium. In other cases, journalistic and academic work developed through discussions, my own included. In the US, the Hyperreal list (http://music.hyperreal.org/) engaged a similar mix of fans, practitioners and authors, reaching out beyond the borders of the US to include a wide range of fans, of Detroit techno specifically, among them the writer Dan Sicko, who produced the definitive history of that genre (Sicko 1999). Such networks merged into a social media format during the mid-noughties, at first with MySpace, a Web 2.0 site that was embraced particularly by American, UK-based, West European and also Japanese underground dance DJs and producers during the first decade of the twenty-first century. There is less debate, as these sites are not text-based, and although there are links to journalism, reviews, mixes and music videos, formats like Facebook seem to encourage the visual spectacle for a distracted generation. Streaming tailor-made DJ performances online from a global range of urban hubs, Boiler Room (https://boilerroom.tv/) has expanded considerably since its inception in 2010, starting first in London, next in New York, Los Angeles and Berlin, and now broadcasting from a wide range of cities, including Buenos Aires, Mexico City, Tokyo, Edinburgh and Amsterdam. Videos of the shows are placed on sites such as YouTube and Daily Motion, while additionally the platform offers background articles on specific artists and local scenes. Initiated by Blaise Bellville with Thristian “bPm” Richards and Femi Adeyemi, this website started out in London as a streaming platform in spring 2010, with relatively simple DJ set-ups of two turntables and a mixer in front of a camera (BNTL 2012). The audio-visual shows are streamed live, although some shows are repeated during one day, while there is also a growing online archive that can be found on YouTube. Since 2012, there have been offices in Los Angeles, New York and Berlin. The latter especially proved popular with an international online audience. The site mainly shows dance DJ and electronic music performances in the presence of small audiences in a variety of large cities around the world. 
The attraction for online audiences is being able to feel part of its exclusive shows; they can even comment online as a show occurs. Auslander (2011: 196) observes that, traditionally, liveness implies the “physical and temporal co-presence of performer and audience”, but by contrast, a website is perceived as live when it can be interacted with through the possibility of feedback, whereby the cultural order has shifted from personal “relationships” to online “connections” (ibid: 197). This may leave the platform open to the accusation of a type of social impoverishment. Yet, in a study of Boiler Room, Heuguet (2016: 83) argues that, as the online shows are open access, a broadcasting event is offered, rather than a closed club experience, which is supplemented by social media chat features, taking
“advantage of the connotations of social affinity organised on Facebook and Twitter to suggest that the DJ mix resembles a community of experience”. The added attraction of Boiler Room is the proximity of the camera(s), showing the techniques of trendsetting DJ-producers in what seems an intimate (VIP) closeness to the viewer, yet at a mediated online long distance (Rietveld 2016). Performing artists include underground favourites and emerging as well as established artists, such as Theo Parrish, Laurent Garnier, Richie Hawtin, Masters At Work (MAW), James Blake and Flying Lotus. The site has further expanded into other musical styles, such as hip-hop, jazz and local ethnic (“world”) music. In 2014, performances were also streamed from other cities, such as Buenos Aires, and performances in studio settings were explored, as well as classical music. According to Rolling Stone (2014), “Both its listeners and guest DJs demand surprises and proper set creation, bringing back DJ sets to the notion of a proper dance floor journey rather than a hit parade”. The site has partnered with international festivals such as Sónar (in Barcelona, Spain) and Amsterdam Dance Event (ADE, in the Netherlands). The cultural credibility of Boiler Room with its audiences makes it an attractive vehicle for selected advertising, which means it can offer a free service. In particular, it has a deal with YouTube (Stolman 2013), and it organises events within the annual Red Bull Music Academy, an international project that provides scholarships for DJ-producers to meet and learn from each other and from their role models. As recordings of some of the performances can be viewed at a later time, this site seems a useful research tool, providing insight into some of the practices and performance technologies of the digital DJ. Unlike the more predictable pop sound of EDM and the embrace of trance-related styles by superstar DJs, the musical performances of Boiler Room15 DJs differ widely. Despite the appearance of a global village, the question raised by such online performances and musical exchanges is how a sense of a cohesive global music scene is possible across different cultural contexts. With reference to Auslander (2008), would it be possible to understand this cultural phenomenon as a type of mediated glocalisation (Robertson 1995)? Or is a new understanding of transnational electronic music dance culture required, one that may point to shared post-colonial links, comparable metropolitan experiences and similar relationships to electronic technologies?
Compilation

The dancer can enter the musical experience by embodying its electronic sound. As I have also shown elsewhere (Rietveld 2004, 2010a, 2018), whether at club nights, raves or dance festivals, during a peak experience on the dance floor a type of cyborg subjectivity is produced, helping participants to culturally internalise a sense of posthuman alienation. What otherwise could be an experience of abject horror is instead socialised and internalised as part of a regime of pleasure. Meanwhile, the ongoing repetitive beats provide a temporary escape from the knowledge that life is finite: time is now – despite the dominance of a futuristic trope in techno and the musical intertextuality achieved through the DJ’s music selections, there is little sense of past or future during the ritual of dance. As Till (2016) also shows, with reference to processes of entrainment, the shared experience of dancing to techno can produce an empowering group experience. Specific ecologies of affect can produce a powerful shared feeling that, according to Till (2016), changes individual dancers – these include the vibe of the event in terms of sociality and the vibration of the music (St John 2009), as well as the sonic qualities of techno music, such as rhythm, deep bass frequencies and electronic textures. When understood as techno, electronic dance music becomes a type of transnational lingua franca, shared across global networks that currently operate via the internet, whether through social media, distribution sites, videos, discussion forums, blogs and journals on the World Wide Web
or within a range of smartphone apps. Techno is not just a simple music genre, but more a musical aesthetic that can be found in a range of subgenres of electronic dance music. There is, I also argue elsewhere (Rietveld 2004), a specific reason why electronic dance music seems to be globally embraced despite, and perhaps exactly because of, its relatively abstract machine aesthetic. Non-vocal instrumental versions of techno and trance enable the dancing subject to relate to the experience of ubiquitous electronic technology. Although the cultural contexts within which techno and trance are danced to can differ significantly, and physical mobility across borders may not be the same for all dance fans, the DJ-producers and dance participants of electronic dance music share technological and logistical experiences of intense urbanisation and digitalised communications, which can be (literally) shockingly alien to the organic rhythms and needs of our evolutionarily developed physical bodies. Electronic dance music, in particular its techno aesthetic, offers an opportunity to engage sonically with the experience of electronic technologies and of the acceleration that characterises the posthuman world – an experience that may well be termed the technoculture.
Notes
1 Ford (a “second wave” company) introduced a manufacturing process in Detroit in which a uniform car was mass-produced within the company – no work was shipped out and labour division was mainly static.
2 ‘Cybotron – Techno City (Instrumental) 1984’ www.youtube.com/watch?v=cZFL2Ewo-oI [last accessed 23 February 2018]
3 ‘Amen Andrews Vs. Spac Hand Luke (Luke Vibert) – Junglism’ www.youtube.com/watch?v=RdhuSgWUOLg [last accessed 23 February 2018]
4 ‘Demon Boyz - Junglist (Armshouse Mix)’ www.youtube.com/watch?v=i3tKoCqGFQo [last accessed 23 February 2018]
5 ‘Congo Natty Junglist’ www.youtube.com/watch?v=QL4GQ5H0lkc&list=PLRzhgNCalkqH__XDEBYzE_nZGZbZnIten [last accessed 23 February 2018]
6 ‘Dead Dred - Dred Bass (Back 2 Basics Remix)’ www.youtube.com/watch?v=jofBL7vg1BY [last accessed 23 February 2018]
7 ‘Tiesto – Heroes’ www.youtube.com/watch?v=Sw3Di_0sd7Q&list=PLeoy7CSXMi2gm_F3t7LjTEfhaveRt446 (play “Heroes”) [last accessed 23 February 2018]
8 ‘Shamanic Trance – Dada Funk Mix By Tsuyoshi Suzuki 1996’ www.youtube.com/watch?v=m9KOjKqoAos [last accessed 23 February 2018]
9 ‘Infected Mushroom – Converting Vegetarians – 2003 Full Album’ www.youtube.com/watch?v=zqwPjNGDuIQ [last accessed 23 February 2018]
10 ‘Frankie Knuckles – Baby Wants To Ride’ www.youtube.com/watch?v=oTbuYH84bfg [last accessed 23 February 2018]
11 ‘Surgeon – Pagga’ www.youtube.com/watch?v=K7OwOvgD1C0 [last accessed 23 February 2018]
12 ‘Dj Rolando – Jaguar (Original mix) (1999)’ www.youtube.com/watch?v=huQaQ62hFR0 and ‘Underground Resistance — Jaguar, live (DJ Rolando)’ (live jazz) www.youtube.com/watch?v=NUeE_QXfwUw [last accessed 23 February 2018]
13 ‘Ansome – Halyard Hitch [MORD010]’ www.youtube.com/watch?v=bVQTydJVepY [last accessed 23 February 2018]
14 ‘Astral Projection – Celestial groove’ www.youtube.com/watch?v=UNIHimdrl4o&index=9&list=PLusJz-rd-6gtkIVfzgmy4xvnktGlTnaS1 [last accessed 23 February 2018]
15 ‘Boiler Room’ https://boilerroom.tv/ [last accessed 14 March 2017]
References
Albiez, Sean (2005) ‘Post Soul Futurama: African American Cultural Politics and Early Detroit Techno’. European Journal of American Culture, Vol. 24, No. 2. www.intellectbooks.co.uk/journalarticles.php?issn=14660407&v=24&i=2&d=10.1386/ejac.24.2.131/1
Auslander, Philip (2008) Liveness: Performance in A Mediatized Culture. London: Routledge.
Auslander, Philip (2011) ‘Afterword: Is There Life After Liveness?’ In Susan Broadhurst and Josephine Machon (Eds) Performance and Technology: Practices of Virtual Embodiment and Interactivity. Basingstoke and New York: Palgrave Macmillan: 194–198.
Attias, Bernardo A., Gavanas, Anna, and Rietveld, Hillegonda C. (Eds) (2013) DJ Culture in the Mix: Power, Technology, and Social Change in Electronic Dance Music. London and New York: Bloomsbury Academic.
Augé, Marc (1995) Non-Places: Introduction to an Anthropology of Supermodernity. London: Verso.
Back, Les (1988) ‘Coughing Up Fire: Soundsystems in South-East London’. New Formations, Vol. Summer, No. 5, pp. 141–153.
Barr, Tim (2000) Techno: The Rough Guide. London: Rough Guides.
——— (1998) Kraftwerk: From Dusseldorf to the Future (with Love). London: Ebury Press.
Bussy, Pascal (1993) Kraftwerk: Man Machine and Music. Wembley: SAF.
Belle-Fortune, Brian (2004) All Crews: Journeys Through Jungle/Drum & Bass Culture. London: Vision Publishing.
Bradby, Barbara (1993) ‘Sampling Sexuality: Gender, Technology and the Body in Dance Music’. Popular Music, Vol. 12, No. 1.
Broughton, Frank and Brewster, Bill (2006) Last Night a DJ Saved My Life: The History of the Disc Jockey. London: Headline.
Butler, Mark J. (2014) Playing With Something That Runs: Technology, Improvisation and Composition in DJ and Laptop Performance. Oxford and New York: Oxford University Press.
——— (2006) Unlocking the Groove: Rhythm, Meter, and Musical Design in Electronic Dance Music. Bloomington and Indianapolis: Indiana University Press.
Collin, Matthew (1997) Altered State: The Story of Ecstasy Culture and Acid House. London: Serpent’s Tail.
Cosgrove, Stuart (1988a) ‘Detroit’. Sleeve notes to: Neil Rushton (Compiler) Techno! The New Dance Sound of Detroit. UK: 10 Records.
——— (1988b) ‘Seventh City Techno’. The Face, Vol. 97, May, pp. 86–88. Reprinted in The Faber Book of Pop, eds. Hanif Kureshi and Jon Savage. London: Faber and Faber, 1995/2002.
Christodoulou, Chris (2015) ‘Darkcore: Dub’s Dark Legacy in Drum ‘n’ Bass Culture’. In Rietveld, Hillegonda C., and van Veen, Tobias (Eds) Echoes From the Dub Diaspora. Special Issue of Dancecult: Journal of Electronic Dance Music Culture, Vol. 7, No. 2. https://dj.dancecult.net/index.php/dancecult/article/view/695/6974
Davies, Erik (2004) ‘Hedonic Tantra: Golden Goa’s Trance Transmission’. In Graham St John (Ed.) Rave and Religion. New York: Routledge.
——— (1998) Technosis. London: Serpent’s Tail.
Dery, Mark (1994) ‘Black to the Future’. In Mark Dery (Ed.) Flame Wars: The Discourse of Cyberculture. Durham: Duke UP.
Eshun, Kodwo (1998) More Brilliant Than the Sun: Adventures in Sonic Fiction. London: Quartet.
Freeman, Richard (2004) ‘Death of Detroit: Harbinger of Collapse of Deindustrialized America’. Executive Intelligence Review, 23 April issue. www.larouchepub.com/other/2004/3116detroit_dies.html
Fikentscher, Kai (2000) “You Better Work!” Underground Dance Music in New York City. Hanover, NH: Wesleyan University Press.
Garratt, Sheryl (1998) Adventures in Wonderland: A Decade of Club Culture. London: Headline.
Gilroy, Paul (2004) ‘Sounds Authentic: Black Music, Ethnicity and the Challenge of a Changing Same’. In Simon Frith (Ed.) Popular Music: Critical Concepts in Media and Cultural Studies. London: Routledge.
Heuguet, Guillaume (2016) ‘When Club Culture Goes Online: The Case of Boiler Room’. Trans. Luis-Manuel Garcia.
Dancecult, Vol. 8, No. 1, pp. 73–87. https://dj.dancecult.net/index.php/dancecult/article/view/904/776
Hesmondhalgh, David, and Casper Melville (2001) ‘Urban Breakbeat Culture’. In Tony Mitchell (Ed.) Global Noise: Rap and Hip-Hop Outside the USA. New York: Wesleyan UP.
Huyssen, Andreas (1986) ‘The Vamp and the Machine: Fritz Lang’s Metropolis’. In After the Great Divide: Modernism, Mass Culture, Postmodernism. Bloomington: Indiana University Press.
Hyperreal. http://music.hyperreal.org/ [last accessed 12 March 2017]
Iwabuchi, Kochi (2002) ‘Japanese Cultural Presence in Asia’. In Diana Crane, Noboko Kawashima and Ken’ichi Kawasaki (Eds) Global Culture: Media, Arts, Policy and Globalisation. New York: Routledge.
James, Martin (1997) State of Bass. Jungle: The Story So Far. London: Boxtree.
Jonker, Julian (2002) ‘Black Secret Technology (The Whitey on the Moon Dub)’. In Arthur and Marilouise Kroker, Ctheory. www.ctheory.net/articles.aspx?id=358
Katz, Mark (2004) Capturing Sound: How Technology Has Changed Music. Berkeley and Los Angeles: University of California Press.
Lawrence, Tim (2003) Love Saves the Day: A History of American Dance Music Culture, 1970–1979. Durham, NC: Duke University Press.
Moreton, Cole (1995) ‘Goa Dance and LSD Craze Sweeps Clubs’. Independent, 29 October. www.independent.co.uk/news/uk/home-news/goa-dance-and-lsd-craze-sweeps-clubs-1579958.html [last accessed 21 March 2017].
Dancecult, Journal of Electronic Dance Music Culture, https://dj.dancecult.net
Penley, Constance, and Ross, Andrew (1991) Technoculture. Minnesota and London: University of Minnesota Press.
Pini, Maria (2001) Club Cultures and Female Subjectivity: The Move From Home to House. New York: Palgrave Macmillan.
Reynolds, Simon (2013) ‘The Wire 300: Simon Reynolds on the Hardcore Continuum: Introduction’. https://www.thewire.co.uk/in-writing/essays/the-wire-300_simon-reynolds-on-the-hardcore-continuum_introduction
Reynolds, Simon (2012) ‘How Rave Music Conquered America’. The Guardian, 2 August. www.theguardian.com/music/2012/aug/02/how-rave-music-conquered-america
——— (2000) ‘Krautrock. Kosmik Dance: Krautrock and Its Legacy’. In Peter Shapiro (Ed.) Modulations: A History of Electronic Music, Throbbing Words on Sound. New York: Caipirinha Productions.
——— (1998) Energy Flash: A Journey Through Rave Music and Dance Culture. London: Picador.
Rietveld, Hillegonda C. (2018) ‘Machine Possession: Dancing to Repetitive Beats’. In Olivier Julien and Christophe Levaux (Eds) Over and Over: Exploring Repetition in Popular Music. New York: Bloomsbury Academic.
——— (2016) ‘Authenticity and Liveness in Digital DJ Performance’. In Ioannis Tsioulakis and Elina Hytönen-Ng (Eds) Musicians and Their Audiences. New York and London: Routledge.
——— (2014) ‘Voodoo Rage: Blacktronica From the North’. In Jon Stratton and Nabeel Zuberi (Eds) Black Popular Music in Britain Since 1945. Farnham and Burlington, VT: Ashgate: 153–168.
——— (2013a) ‘Journey to the Light? Immersion, Spectacle and Mediation’. In B. A. Attias, A. Gavanas and H. C. Rietveld (Eds) DJ Culture in the Mix: Power, Technology, and Social Change in Electronic Dance Music. Bloomsbury Academic: 79–102.
——— (2013b) ‘Dubstep: Dub Plate Culture in the Age of Digital DJ-ing’. Situating Popular Musics. 16th Biennial IASPM International Conference, 2011, Rhodes University, Grahamstown, South Africa. Conference proceedings: www.iaspm.net/proceedings/index.php/iaspm2011/iaspm2011/schedConf/
——— (2011) ‘Disco’s Revenge: House Music’s Nomadic Memory’. Dancecult: Journal of Electronic Dance Music Culture, Vol. 2, No. 1, pp. 4–23. https://dj.dancecult.net/index.php/dancecult/article/view/298/284
——— (2010a) ‘Infinite Noise Spirals: The Musical Cosmopolitanism of Psytrance’. In Graham St John (Ed.) The Local Scenes and Global Culture of Psytrance. New York and London: Routledge: 69–88.
——— (2010b) ‘Trans-Europa Express: Tracing the Trance Machine’. In Sean Albiez and David Pattie (Eds) Kraftwerk: Music Non-Stop. New York and London: Continuum: 214–230.
——— (2007) ‘The Residual Soul Sonic Force of the Vinyl 12” Dance Single’. In Charles Ackland (Ed.) Residual Media. Minnesota: University of Minnesota Press: 46–61.
——— (2004) ‘Ephemeral Spirit: Sacrificial Cyborg and Communal Soul’. In Graham St John (Ed.) Rave and Religion. New York and London: Routledge: 45–60.
——— (1999) ‘Spinnin’: Dance Culture on the WWW’. In Jane Stokes with Anna Reading (Eds) The Media in Britain.
London: Macmillan Press.
——— (1998) This Is Our House: House Music, Technologies and Cultural Spaces. Aldershot: Ashgate.
Rietveld, Hillegonda C., and van Veen, Tobias (Eds) (2015) ‘Echoes From the Dub Diaspora’. Special Issue of Dancecult: Journal of Electronic Dance Music Culture, Vol. 7, No. 2. https://dj.dancecult.net/index.php/dancecult/issue/view/82
Robertson, Roland (1995) ‘Glocalization: Time-Space and Homogeneity-Heterogeneity’. In Featherstone, Mike, et al. (Eds) Global Modernities. London and Thousand Oaks, CA: Sage: 25–44.
Robins, Kevin, and Webster, Frank (1999) Times of the Technoculture: From the Information Society to the Virtual Life. London: Routledge.
Rolling Stone (2014) ‘50 Most Important People in EDM: The Movers, Shakers and Speaker-Quakers Shaping Dance in 2014’. www.rollingstone.com/music/lists/50-most-important-people-in-edm-20140317/thristian-richards-and-blaise-bellville-boiler-room-19691231 [last accessed 7 January 2015]
Rubin, Mike (2000) ‘Techno: Days of Future Past’. In Peter Shapiro (Ed.) Modulations: A History of Electronic Music, Throbbing Words on Sound. New York: Caipirinha Productions.
Russolo, Luigi (1973) ‘The Art of Noises (Extracts) 1913’. In Umbro Apollonio (Ed.) Futurist Manifestos. London: Thames & Hudson.
St John, Graham (2009) Technomad: Global Raving Countercultures. London: Equinox Publishing.
——— (Ed.) (2004) Rave and Religion. New York: Routledge.
ya Salaam, Mtume, and ya Salaam, Kalamu (2006) ‘The Winston’s Amen Brother’. Breath of Life: A Conversation About Black Music, 12 March. www.kalamu.com/bol/?www.kalamu.com/bol/2006/03/12/187/?PHPSESSID=be95c18b2c46e4faa9317ffb2f1e7cb9
Saldanha, Arun (2007) Psychedelic White: Goa Trance and the Viscosity of Race. Minneapolis: University of Minnesota Press.
Savage, Jon (1996) ‘Machine Soul: A History of Techno’. In Time Travel – From the Sex Pistols to Nirvana: Pop, Media and Sexuality, 1977–1996. London: Chatto and Windus.
Shapiro, Peter (Ed.) (2000) Modulations: A History of Electronic Music, Throbbing Words on Sound. New York: Caipirinha Productions.
Sharp, Chris (2000) ‘Jungle. Modern States of Mind’. In Peter Shapiro (Ed.) Modulations: A History of Electronic Music, Throbbing Words on Sound. New York: Caipirinha Productions.
Shaw, Debra Benita (2008) Technoculture: The Key Concepts. London and New York: Bloomsbury Academic.
Sicko, Dan (1999) Techno Rebels: The Renegades of Electronic Funk. New York: Billboard.
Simpson, Gerald (2008) Promotional Blurb to the Album Black Secret Technology. www.amazon.com/Black-Secret-Technology-Called-Gerald/dp/B0019M62ZA/ref=pd_bbs_sr_1?ie=UTF8&s=music%27n%27qid=1219787172%27n%27sr=8-1
Stolman, Eissa (2013) ‘Inside Boiler Room, Dance Music’s Internet Streaming Party’. Billboard, 11 April. www.billboard.com/articles/columns/code/1556765/inside-boiler-room-dance-musics-internetstreaming-party?page=0%2C1 [last accessed 4 November 2014]
Till, Rupert (2016) ‘DJ Culture, Liminal Space and Place’. AAG Conference 2016, San Francisco.
Théberge, Paul (1997) Any Sound You Can Imagine: Making Music/Consuming Technology. Hanover, NH: Wesleyan UP.
Thornton, Sarah (1995) Club Cultures: Music, Media and Subcultural Capital. Cambridge: Polity Press.
Toffler, Alvin (1980) The Third Wave. Toronto and New York: Bantam Books.
Toop, David (1992) ‘Giorgio Moroder: Throbbery With Intent’. The Wire, Issue 98, April, p. 21.
Van Veen, Tobias (Ed.) (2013) ‘Special Issue on Afrofuturism’. Dancecult: Journal of Electronic Dance Music Culture, Vol. 5, No. 2. https://dj.dancecult.net/index.php/dancecult/issue/view/50/showToc
Williams, Ben (2001) ‘Black Secret Technology’. In Alondra Nelson and Thuy Linh N. Tu with Alicia Headlam Hines (Eds) Technicolor: Race, Technology, and Everyday Life. New York and London: New York University Press: 154–176.
Discography
Ansome (2014) ‘Halyard Hitch’. Mord.
A Guy Called Gerald (1988) ‘Voodoo Ray’. Rham! Records.
A Guy Called Gerald (1995) Black Secret Technology. SRD.
Afrika Bambaataa (1982) ‘Planet Rock’. Tommy Boy Records.
Amen Andrews Vs. Spac Hand Luke, a.k.a. Luke Vibert (2006) ‘Junglism’. Amen Andrews Vs. Spac Hand Luke. Rephlex.
Cybotron (1984) ‘Techno City’, Interface. Fantasy.
Dead Dred (1994) ‘Dred Bass’. Moving Shadow.
Frankie Knuckles presents Jamie Principle (1987) ‘Baby Wants to Ride’. Trax Records.
Funkadelic (1978) ‘One Nation Under a Groove’, One Nation Under a Groove. Warner Bros. Records.
Goldie (1995) ‘Timeless’, Timeless. FFRR.
Hardfloor (1992) ‘Hardtrance Acperiance 1’. Harthouse.
Infected Mushroom (2003) Converting Vegetarians. Yoyo Records.
Kraftwerk (1978) The Man Machine. Capitol.
Kraftwerk (1977) ‘Trans-Europe Express’, Trans-Europe Express. Capitol Records.
Kraftwerk (1974) ‘Autobahn’, Autobahn. Phillips.
LTJ Bukem (1995) ‘Rainfall’. Looking Good Records.
MC Congo Natty (1998) ‘Junglist Soldier’. Zion 10.
Phuture (1987) ‘Acid Tracks’. Trax Records.
Rebel MC’s Demon Boyz (1992) ‘Junglist’. Tribal Bass Records.
Rhythim is Rhythim (1987) ‘Strings of Life’. Transmat.
SFX (1995) ‘Celestial Groove’. On Trust In Trance 2. Outmosphere Records, Phonokol.
Surgeon (1996) ‘Pagga’. Kickin Records.
The Aztec Mystic a.k.a. DJ Rolando (1999) ‘Jaguar’. Underground Resistance.
The Winstons (1969) ‘Amen Brother’, B-side of ‘Color Him Father’. Metromedia.
Tiësto (2004) Parade of the Athletes. Magik Muzik.
Various (1996) Shamanic Trance. Compiled by DJ Tsuyoshi Suzuki. Return to the Source.
Various (1988) Techno! The New Dance Sound of Detroit. Compiled by Neil Rushton. 10 Records.
Film/Video
‘A Guy called Gerald meets Derrick May in 1990’ YouTube, Discodelirio: 19 October 2010. www.youtube.com/watch?v=qGCM0hlsm88 [accessed 13 March 2017]
All Junglists; A London Somet’in Dis (Sharp Image Prod, 1995, England)
BNTL (2012) Inside Boiler Room. Vimeo. http://vimeo.com/40169111 [accessed 4 November 2014]
Metropolis (dir. Fritz Lang, 1927, Germany)
We Call It Techno (dir. Maren Sextro and Holger Wick, 2008, Germany)
PART II
Awareness, consciousness, participation
6
PARTICIPATORY SONIC ARTS
The Som de Maré project – towards a socially engaged art of sound in the everyday
Pedro Rebelo and Rodrigo Cicchelli Velloso
Introduction

From participatory and socially engaged art practices there has emerged, over the last couple of decades, a myriad of aesthetic and methodological strategies across different media. These are artistic practices that have a primary interest in participation, affecting social dynamics, dialogue and at times political activism. Nato Thompson, in Living as Form: Socially Engaged Art from 1991–2011, surveys these practices, which range from theatre to urban planning, visual art to health care. Linked to notions such as relational aesthetics (Bourriaud 1998), community art and public art, socially engaged art often focuses on the development of a sense of ownership on the part of participants. If an artist is working truly collaboratively with participants and addressing the reality of a particular community, the long-term effect of a project lies in the process of engagement as well as in the artwork itself. Projects by New York-based artist Pablo Helguera, for example, use different media to engage with social inequalities through participative action while rejecting the notion of art for art’s sake.

Socially engaged art functions by attaching itself to subjects and problems that normally belong to other disciplines, moving them temporarily into a space of ambiguity. It is this temporary snatching away of subjects into the realm of art-making that brings new insights to a particular problem or condition and in turn makes it visible to other disciplines.
(Helguera 2011)

Socially engaged practices develop the notion of artwork about or by a community into work of a community. In this chapter we address how socially engaged, participatory approaches can form a context for the sonic arts, arguably less explored than practices such as theatre and performance art. The use of sound is clearly present in a wide range of socially engaged work; for example, Pablo Helguera’s Aelia Media enabling a nomadic radio station in Bologna, or Maria Andueza’s Immigrant Sounds – Res(on)Art (Stockholm) exploring ways of sonically resonating a city, or Sue McCauley’s The Housing Project addressing ways of representing the views of urban dwellers on public space through sound art.1 It is nevertheless rare to encounter projects which take our experience of sound in the everyday as a trigger for community social engagement in a participatory context.
We address concepts and methodologies behind the project Som da Maré,2 a participatory sonic arts project in the favelas of Maré, Rio de Janeiro. In doing so, we attempt to create links between literature in areas such as sound studies, acoustic ecology, music, education and art criticism. The focus on sound throughout the text and indeed the project under discussion is intended to be just that, a focus, a point of departure, sometimes an unexpected one. This sonic-centeredness is not meant to undermine other media or sensory experiences but rather to re-evaluate them. To begin from sound is to change perspective, to reflect on a multisensory embodied understanding of the everyday, the sounds that surround us and how they relate to everything else going on (both externally and internally).
Soundscape and the everyday

The term soundscape, as developed by Murray Schafer in the 1970s (Schafer 2006), describes an analogy to landscape featuring sound as a primary mode of operation and perception. In considering the soundscape of a city or neighbourhood one is reflecting on the aural dynamics of a geographical region, a community and a series of activities or events which have an overt sonic manifestation. Schafer and associates such as Barry Truax have, through the World Soundscape Project, been responsible for establishing many of the methodologies and lines of enquiry that still frame sound studies today. Practices such as field recording are introduced as methods serving both the collating of sound sources for composition and the development of a sound ethnography which can be readily used by geographers, urban planners, architects and social scientists. The field of acoustic ecology emerges then as a movement to promote education and research addressing “social, cultural, scientific and ecological aspects of the sonic environment”.3 At the core of any discussion of soundscape is the act of listening. Although this is beyond the scope of this text, it is worth pointing out a few understandings of listening which have imbued the idea of soundscape, or more widely the relationship between sound, space and place. Pierre Schaeffer’s disciplined, analytical and studio-driven ‘reduced listening’ reminds us of what it is to listen in and out of context. Schaeffer made a key contribution to how we understand the relationship between sound and meaning through his thorough investigation into the notion of the sound object. Although the idea of objectifying sound to the point of removing all its associations and context is problematic for more recent authors and practitioners, the methodologies and strategies that Schaeffer laid out in his pursuit of a solfège de l’objet sonore have undoubtedly shaped our approach to recording, manipulating and listening to sound. Closer to the qualities of listening we want to practise in the context of participatory sonic arts and the project discussed here is the notion of anecdotal sound as proposed by composer Luc Ferrari. Ferrari’s move away from Schaeffer’s clinical approach to listening led him to embrace sound with all its meaning, context and grain. Ferrari’s accidental or anecdotal recordings are full of humanity, full of social interaction, full of signs (Kim-Cohen 2016). This anecdotal approach to field recording and composition brings to the foreground all of the imperfections, the contradictions, the presences of everyday life, including the presence of the author himself, in works such as the Presque Rien series. Here, the soundscape is not something that the researcher or composer listens to and analyses but rather something we all are involved in making. The notion of soundscape has, since its coinage by Schafer, been critiqued for its sole association with one sense (hearing) and for emphasising a world experience artificially sliced up by sensory paths (Ingold 2007). Going further and reflecting on an analogous relationship
between light and vision, Ingold articulates the risk in studying sound as an object of our perception:

[L]istening to our surroundings, we do not hear a soundscape. For sound, I would argue, is not the object but the medium of our perception.
(Ingold 2007)

An alternative approach to our relationship with everyday sound, which brings together the acoustic and physical characteristics of sonic phenomena with qualitative descriptors, is the notion of the “sonic effect” proposed by Augoyard and Torgue in Sonic Experience: A Guide to Everyday Sounds (Augoyard 2005):

The sonic effect was first used in the social sciences. Our work on perceptions and everyday sound behaviours indicated four important psycho-sociological processes: sound marking of inhabited or frequented space; sound encoding and interpersonal relationship; symbolic meaning and value linked to everyday sound perceptions and actions; and interaction between heard sounds and produced sounds.
(p. 8)

Augoyard’s approach comes from a psycho-sociological perspective and considers the environment as a range of possibilities which manifest themselves through sound, a kind of repertoire of effects that anchor our experience in the everyday. The notion of acoustemology (acoustic epistemology) developed by Steven Feld (2003) suggests a more encompassing study of aural culture, aiming at an understanding of the world as shaped by environmental, cultural and historic factors. Here, sound stands for a mediation of knowledge and a way of investigating “the primacy of sound as a modality of knowing and being in the world” (Feld 2003, p. 226). The listening in everyday life that concerns us primarily, however, is one that affords participatory processes in which collaborators are not necessarily sound experts. Sound, in this context, is not so much the object of study but a trigger to articulate experiences of place and memory. We are therefore primarily concerned with the relationship between the act of listening and a sense of place, analogous to Merleau-Ponty’s ‘anthropological space’ as distinct from ‘geometrical space’. This anthropological space is concerned with the notion that space is existential and “existence is spatial”, as discussed by De Certeau (Certeau 2002). A focus on ‘a sense of place’ provides an effective way of discussing the aurality of listening situations while doing so in an embodied and situated manner, which inevitably triggers narratives and memories.
Sound technologies and mediation

The history of sonic arts is implicated in the development of electronic sound technologies. Beginning with Pierre Schaeffer’s ‘unorthodox’ use of the radio studio or Stockhausen’s exploration of scientific audio equipment for producing electronic tones, this type of practice is characterised by complex, sometimes conflicting discourses negotiating aesthetics with technological development. Technological mediation plays an important role here in framing an environment in which specific tools create fields of possibilities and transgression.4 Studio technology, as developed for radio and music production, is at the same time essential and overwhelmingly beyond reach for sonic arts practitioners. Audio technologies are essential in providing an environment
that permits the exploration of sound beyond the here and now; recording, being without a doubt the most transformative in our relationship with sound, is at the centre of a myriad of practices, philosophies and social phenomena that range from popular music (Katz 2004) to the audio ethnography of the British Library Sound Archive5 or the Reel to Real Sound Archive at the Pitt Rivers Museum in Oxford.6 The effect of the phonograph on ethnography is illustrated by Brady in A Spiral Way: How the Phonograph Changed Ethnography (Brady 2012) through, for example, the implications of the duration supported by a particular recording format in relation to a performance activity.7 These technologies are, at the same time, beyond reach for sonic arts practitioners, as they are, for the most part, developed for commercial applications. Implied aesthetics associated with the commercial environment in which these technologies are developed are, in many cases, in conflict with the very nature of sound as a medium, which the sonic arts take as a core methodology. Think of dealing with the concept of ‘song’ or ‘meter’ embedded in many digital audio workstations. This relationship with technology is told through numerous stories that place ‘the artist’ as someone who accepts the commercial drive for technological development but creates his or her own environment for experimentation and research. The examples are numerous and illustrative . . . Jean-Claude Risset and Max Mathews famously ‘stayed on’ at the Bell Labs facilities after hours to work on their private projects in parallel to their commitments to the company. A sonic artist’s love affair with electronics is fraught with difficulties. Technologies are of course present in all types of artistic endeavour, from pencil and paper to the orchestra. It is electronics in particular, or ‘music technologies’, that cause both fascination and suspicion. These music technologies are in many cases common to the experimental artist and the popular DJ, even though the workflows and modes of production are distinct. Interestingly, technology-based situations such as the solo laptop performer developed distinctly but in parallel in two social domains: academic experimental music and night life. In the experimental sonic arts milieu, laptop performance in the 1990s was seen, on the one hand, as the promise of an orchestra at one’s fingertips (a common attribute of new technologies in their marketing or honeymoon period) or, on the other, as a problematic, lifeless and disembodied type of performance practice in need of correction or extension through embodied devices such as controllers and sensors (Jaeger 2003). The impoverishment associated with a performer appearing on the concert stage with ‘just a laptop’ became an opportunity for the DJ in the preparation and performance of sets with precisely ‘just a laptop’. The complex sociology of highly technologised environments is eloquently exposed by Born in Rationalizing Culture, an ethnographic study of IRCAM revealing relationships between modernist techno prowess and aesthetic expectation (Born 1995). It is these social dynamics, as manifestations of relationships with technologies, that we want to focus on in the context of participatory practices in the sonic arts. We will address this relationship through the most ubiquitous but perhaps also most transformative of sound technologies: recording and editing. 
While everyday tasks for all those involved with the sound world, these two activities are relatively uncommon for those not working with audio. The act of recording, in the sense of capturing, became, with the advent of affordable portable devices and social media, a core element of everyday life. Our most familiar methods of recording are, however, focused on the visual and not on sound. As such, we constantly operate on image and its relationship with the everyday, through various flavours of photography and video.8 The reasons we have become accustomed to, if not dependent on, documenting the everyday through image and not through sound are beyond the current discussion. For our purpose it is, however, important to reflect on the opportunity that emerges out of this image-sound dichotomy when working in an environment with participants who are not familiar with sound recording, and hence with recalling experiences from an aural perspective.
The unfamiliar nature of sound recording (as opposed to audio in a video recording) is somewhat uncanny, as it is in itself akin to video recording, which has supported innovative and engaging ways of sharing aspects of everyday life.9 The use of portable sound recorders is unusual due to the very fact that no image is captured. Needless to say, the motivation for picking up a device to make a short movie clip or an audio recording remains distinct. The most important aspect associated with sound recording as a strategy for engaging with the world is the space it opens for listening and sharing. In the same way someone shares a story when showing a photograph on a mobile device, sound recordings potentiate the making of meaning around a few minutes of audio. This meaning making can go from the factual and banal – "this was recorded in school" – to the personal and poetic – "I recorded water because it makes me feel at home".10 What concerns us when dealing with a listening experience mediated by recording is not so much the audio information captured by the device but the act of recording itself, the choices made that have to do with time, duration, location, position, equipment. Primarily for 'non-professional' recordists, what matters even more is what motivates one to record something in the first place. The action is inevitably imbued with meaning – perhaps a memory or a story fragment, a recognition of place? These motivations are hardly captured in the recording itself but might be communicated through other means. This represents another layer of mediation which is critical to the participatory processes under discussion here. A recording is mediated by the recordist through his or her presence in the recording itself and through the context that is articulated around the recording. Matilde Meireles' project Moving Still (2013–2015)11 places the recordist at the centre of both the recording process itself and the mediation with an audience. A recording made in one single spot from sunrise to sunset is 'extended' by text notes and photography (also from one single spot), punctuating a sense of presence and duration. The work edits the recording down to 17 minutes and conveys this sense of presence beyond sound through video projection, which displays the photographs and visibly articulates the passing of time through changes in light, and through text, which reminds us of the act of listening through the embodiment of the recordist and the evocation of events or information not captured through sound or image.12 The presence of the recordist or initiator is, in sound field recording, articulated differently from photography or video. The person making the recording, choosing the subject matter, technique, time and duration is a critical element for the audio that is eventually produced and distributed as a media object, critical in a way due to the individualised nature of listening (Blesser and Salter 2006) and to the relational abstraction embedded in a sound object. Unlike the pictorial elements that situate a photograph in an instant, a sound recording is forever ambiguous and fleeting. How is a recording of daily traffic in one of Maré's favelas significantly different from a recording of traffic in another part of Rio de Janeiro, or for that matter, another part of the world?
Here, we must elaborate on two aspects that emerge when using sound recording in a context where recordists are personally engaged in telling their story as opposed to conducting the professional task of capturing a particular sound event or soundscape. Firstly, the motivation or cause behind the recording act has the capacity to transform the listening experience. A Som da Maré participant came to the group with a recording of birdsong from outside his bedroom window which evoked a sense of calmness and normality, contrasted only by the fact that his community had been occupied by the army just a few days earlier. Secondly, the ability to openly explore recording techniques in idiosyncratic and personalised ways often produces uncanny sonic snapshots which are rich and evocative in their own right. A recording produced by another participant included an edit (made on the fly by pressing record and pause on the recorder) of an instrumental intro to a favourite song played out through a mobile phone.
The participatory context addressed here is framed and embedded in technologies of recording which, as seen earlier, play an active role in the articulation and sharing of the everyday. Recording presents an opportunity for engaging a group in reframing their everyday through a medium which, for better or for worse, is both familiar and foreign. Given the two points elaborated here, the articulation of the reasoning behind the recording act, or for that matter behind any engaged listening activity, becomes crucial in understanding how sound permeates life. The Som da Maré project brought together a series of methodologies for facilitating the articulation of sound experiences and memories. Participatory methodologies are well established in Brazil and are often used as a mode of resistance in the arts (e.g. Hélio Oiticica), in theatre (Augusto Boal) or in education (Paulo Freire).
Participatory methodologies
The work of Brazilian pedagogue Paulo Freire (1921–1997) has been deeply influential when it comes to participatory methodologies beyond pedagogy itself. Much of our understanding of participatory practices is owed to Freire's Pedagogy of the Oppressed, originally published in Brazil in 1968. Freire starts off with the assumption that "[n]obody educates anybody else, nobody educates himself, people educate each other through their interactions with the world" (Freire 1993). He develops an analysis of the traditional educational model through the notion of banking and identifies the educator-educated relationship as intrinsically narrative or discursive. The educator is the agent whose function is to 'fill' the educated with the contents of his own narration. In this discourse, reality is edited and the word is emptied of its concrete dimension, transforming itself into an alienated and alienating verbosity.

In the banking concept of education, knowledge is a gift bestowed by those who consider themselves knowledgeable upon those whom they consider to know nothing. Projecting an absolute ignorance onto others, a characteristic of the ideology of oppression, negates education and knowledge as processes of inquiry.
(Freire 1993: 72)

The banking concept positions educators and educated in contradiction with each other. The educator knows, thinks, speaks, disciplines, makes decisions, chooses programmes of activities and identifies the authority of knowledge in his own functional authority. The educated don't know, rely on someone else's thinking, listen passively, are disciplined, follow the programme, have the illusion of action, get comfortable with someone else's choices, adapt themselves to the prescriptions of authority and are mere objects. For Freire, a truly transformative education must raise problems and must be liberating. For this to happen the educator must understand his own contradictory position and seek to bring together his action with that of the educated, moulding it towards the goal of humanisation for both sides. This can replace the delivery of knowledge with genuine thinking, in partnership. For this to happen it is necessary to root the pedagogical process in a dialogical practice, and it is precisely this aspect of Freire's pedagogy that is at the root of the participatory methodologies discussed here. Human beings are not built in silence, but in word, in work, in action-reflection.

In this dialogical practice the educator becomes an educator-educated and the educated become educated-educators; both parties acknowledging their incompleteness in a constant search for "being more".
(p. 88)
The dialogue begins in the search for the programme of study and in the research of generative themes. What Freire calls 'liberating education' seeks to decode a concrete situation and make it the subject of research through themes that are the starting point of the dialogical pedagogical process. The elements of Freire's pedagogy which are at the core of the participatory practices presented here can be summarised under what he calls the 'circle of thematic research'. In a first phase, researchers (meaning all involved in the educative process) conduct a critical observation of a particular situation through dialogue with all involved. This requires participation and investment in various aspects of the situation under study; observing it at different moments, at work, in meetings, during leisure time. It also requires due attention to language, ways of talking, thinking and doing. After this initial phase we proceed to an evaluative phase which, with participation from all researchers, aims to re-present the situation under study. This is a phase of thematic generation in which aspects of the situation are codified through pictures, sounds or verbally (though Freire warns this should be done with 'few words'). These codifications are meant to be objects that represent real situations and challenge critical reflection on the part of the researchers involved. This reflective process in turn generates new themes which follow the same process. This 'circle of thematic research' represents a pragmatic mechanism for engaging heterogeneous groups of participants/researchers as, through dialogue, all aspects of the situation under study can be brought forward and discussed without a prescribed programme, which by definition would include some and exclude others. Paying tribute to Freire's work, the ethnomusicology research group Musicultura has developed work in the Maré favelas for over 10 years and has been a fundamental interlocutor for the Som da Maré project. Samuel Araújo, the founder of the group, summarises Freire's influence in the field of ethnomusicology, referring to the work of Catherine Ellis in Australia, discussing horizontal exchange between academic researchers and Aboriginal people (Ellis 1994), and Angela Impey in South Africa, addressing her participative experience in mediating a research group bringing together music, culture and ecology (Impey 2002). Central to the work of Musicultura, which brings together academic researchers and young Maré residents, is the implementation of participative strategies based on Freire's concept of dialogic research.

Following participatory action models, particularly the one proposed by Paulo Freire [. . .] the university researchers would act as mediators of discussions among the youngsters on relevant musical subjects and categories, as well as on strategies for music research, would indicate relevant readings upon the group's request, and would facilitate the actual documentation through the use of digital technology.
(Araújo and Musicultura 2006, p. 298)

The influence of Freire's work on Musicultura is again addressed in a later publication, reaffirming the impact of his concepts and methods in participative ethnographic research. In this sense, the critical notions of autonomy and dialogue are fundamental in the creation of a context in which discussion about soundscape and the everyday can flourish.
The basic idea, as suggested by Paulo Freire (1970), is that the divides between learning and knowing mediated by self-experienced research can be broken down little by little, and a new comprehension of the educational process, one which is not only instrumental but also political, is potentially ready to be apprehended.
(Araújo and Cambria 2013, p. 32)

In this manner, members of the Musicultura group have become gradually more conscious and critical of the social processes in which they are situated and their implied contradictions.
The "open-ended dialogic process", in the context of an academic research career, has, for Araújo and Cambria, radical consequences for researchers themselves and "may profoundly challenge the prevalent beliefs and doings of research institutions, demanding newer politico-epistemological foundations" (Araújo and Cambria 2013, p. 39). The impact of certain groupings, associations, institutions, their agendas and social dynamics is paramount in activities which attempt a 'horizontal' mode of dealing with decision making. In the context of what she calls 'collaborative creativity', Bishop (2006, p. 12) reminds us of the motivations behind participatory work, going back to the Dada movement in the 1920s, as emerging from and producing a "more positive and non-hierarchical social model". The experience of Som da Maré, and indeed of previous projects with similar aims,13 reveals a tension between the openness that participatory projects require and the expectations of institutions which follow traditional modes of production. Participatory projects of this sort often involve institutions in areas such as education (schools, study centres, universities), social justice and activism (non-governmental organisations, community centres, local organisations), and the arts (museums, galleries) as well as the media. The complex relationship between these is briefly discussed later in the context of Som da Maré.
Social mediation
A complementary aspect of the technology-based mediation addressed earlier is how the social or collective networks required to engage in participatory work also mediate relationships and outcomes. The Som da Maré project is based on an engagement with local culture which depends on effective diplomacy, negotiation and mediation, articulating the aims of the project with the local community. This mediation is centred on the project's host, the Museu da Maré, which provided an important type of methodological and public engagement alignment through its approaches to local aural histories. The Museu da Maré focuses its activity on the archiving, preservation and dissemination of community history in the Maré favela complex. These activities feature a permanent exhibition, research into aural history, art and education workshops and regular temporary exhibitions, within which Som da Maré was hosted. The museum was established in 1998 with a view to engaging with local memory and culture in association with CEASM (Centro de Estudos e Ações Solidárias da Maré), a non-governmental organisation focusing on the valorisation of favelas in the context of the plurality of urban space. CEASM and the museum have the primary aim of serving Maré residents and facilitating access to culture and education through the affirmation of citizenship.14 The museum's mission played a key role in mediating between participants and the Maré public for the Som da Maré project. The museum provided both a physical and cultural home to the project, allowing participants to engage in something that, while of limited duration, is part of a larger ongoing effort to articulate a sense of place and community in Maré. The social mediation that is so critical in this context articulates the diverse and ever-changing expectations of the different sub-groups engaged in the work: participating secondary-level students and Maré residents, UFRJ (Universidade Federal do Rio de Janeiro) academics and post-graduate students, museum archivists and directors, and the curator and team from SARC (Sonic Arts Research Centre, Queen's University Belfast).15 A further, non-institutional layer of social mediation takes place as project participants begin to involve their family members and friends. This creates an extended network of complicity and support which contributes to a sense of critical mass but also points towards societal heterogeneity. As the network of participants grows (both in size and in level of engagement), initial expectations are often transformed into something more concrete and often more demanding. Questions such as 'How is my story represented?' or 'How many of my ideas will actually be
realised?' begin to concern those participating. Here, the diplomacy that needs to emerge between individual participants, institutions and indeed those leading the project becomes critical. It is at this intersection that the tension between quality and action begins to manifest itself. On the one hand, an artist such as Thomas Hirschhorn redefines concepts of 'elitist aesthetic criteria' such as quality in favour of energy and coexistence.16 On the other, in the influential text Artificial Hells, Bishop (2012) reclaims the necessity for value judgement and warns us against an overly end-driven social science approach.

There is an urgent need to restore attention to the modes of conceptual and affective complexity generated by socially oriented art projects, particularly to those that claim to reject aesthetic quality, in order to render them more powerful and grant them a place in history.
(Bishop 2012, p. 8)

Bishop highlights the importance of mediating objects, text, materials and, in our case, sound to articulate links between the work and a 'secondary audience' – those who did not directly participate in the work. Once again the role of institutions comes to the fore, as it is the responsibility of the authors/initiators/curators to negotiate between the needs of a local museum that might, for example, be interested in an archival approach to the activity and those of an art public that expects to be drawn to the work and led through the process without necessarily engaging initially with its social context.
Sonic experiences methodology
A methodology for mapping out individual and collective sound experiences was developed with a view to articulating participatory strategies in a sonic arts context. This consists of a series of small-group activities aiming to explore and share the notion of event, sense of place and time through sound. This exploration is focused on articulating a phenomenology of sound through first-hand lived experience. Each of the activities was employed to frame a weekly session, and progressively we witnessed the development of a shared vocabulary and shared experiences. The experience of sound in these activities is often contrasted with the experience of taste to provide opportunities for exploring a phenomenological experience of sensation in two different realms. This exposes a number of interesting points of departure, which include relationships such as memory-narrative, sensation-context and experience-sharing. The sonic experiences methodology includes the following activities and was shared with participants one week in advance, not only to allow for some preparation but also to provide some continuity to a reflection on sound and the everyday:
Sonic Experience I
1 Describe a positive or negative aural sensation from childhood
2 Describe a positive or negative taste sensation from childhood
3 Describe a situation from childhood in which sound had an important role
4 Describe a situation from childhood in which taste had an important role
Reflect on the different types of language used to describe and share sensations of sound and taste. To what extent do we have control of our own memories and to what extent do they control us?
Sonic Experience II
1 Describe a musical experience from childhood
2 Describe a gastronomic experience from childhood
Communicate your experiences to another participant, who will then share them with the group. Reflect on the significance of musical and gastronomic experiences for the notion of identity. What is different about relating the experience of another? What are the responsibilities at play, and what is the impact of hearing another sharing one's own experience?
Sonic Experience III
Bring a sound and a piece of food to share, both related to your personal experience. Reflect on to what extent sound and food are representative of a particular experience and how effective they are in sharing that experience. Consider and reflect on the logistical conditions that this activity imposes.
Sonic Experience IV
Bring a sound and a photograph that can be enriched through a personal story. First share the sound and photo with no context, then with a personal story. Record the story and reflect on how aural history complements or compensates for sound and image. Map out what types of elements (linguistic, gestural etc.) are used to enrich the media in order for them to serve as a platform for personal storytelling.
Sonic Experience V
Record, photograph and write a text about an experience of your choice. Reflect on issues of media and format, aggregation of experiences, events and places and on the relationship between text, sound and image. What is easiest to convey with each medium?
Sonic Experience VI
Identify a listening space and annotate with chalk, on the actual space, the various sound sources present. Map the space onto paper, translating the sound map created on the ground (see Figure 6.1). Reflect on the relationship between space and sound through graphic strategies for notating a listening experience.
Sonic Experience VII
Identify a listening space, photograph it and annotate the photos with text tags pointing to sound source locations. Reflect on the framing associated with the photographs in relation to listening. Discuss the type of vocabulary used to identify sound sources.
Figure 6.1 Som da Maré participant annotating the Museu da Maré courtyard with text and drawing pointing out the sounds of footsteps, wind, noise and speech
Sonic Experience VIII
Identify a sonic experience with special significance in everyday life and relate it to others through text, sound recording and a drawing.
Sonic Experience IX
Relate your everyday routine from morning to night-time through sonic experience. Reflect on how events or actions throughout the day relate to sound and the implications of creating a chronological narrative.

'Sonic Experiences' is a simple methodological framework which provides some continuity (even development) across a number of sessions aiming to devise a shared vocabulary to talk about sound. By asking participants to engage simultaneously with sound and taste experiences in the first few sessions (arguably taste could be replaced by any other type of experience), we aim to reflect on memories and on how personal stories are inevitably entangled in multisensorial experiences. By contrasting the experience of sound with other senses, specific vocabularies and ways of accessing memories and experiences begin to emerge. As the work develops, it becomes increasingly important for the group to think about articulating experiences through a variety of strategies and media which complement and extend field recording as a core method. These activities are designed to encourage a depth of experience which can be articulated through shared memory or individual stories. Throughout the process there is an emphasis on embodied everyday experiences, as banal as these might sound. With the starting point in the everyday, the routine, the seemingly unimportant or at times incongruous, the group begins to share an understanding against which the personal, the individual, the extraordinary can then be established.
Som da Maré
In May 2014, the participative project Som da Maré brought together the creative energy of a group of inhabitants from a cluster of favelas in Maré (Rio de Janeiro)17 through the sonic arts. The work recalled everyday experiences, memories, stories and places. These memories elicited narratives that leave traces in space while contributing to the workings of local culture. The result of four months of workshops and fieldwork formed the basis of two cultural interventions: an exhibition in the Museu da Maré and guided soundwalks in the city of Rio de Janeiro. These interventions presented realities, histories and ambitions of everyday life in the Maré favelas through immersive sound installation, documentary photography, text and objects. Som da Maré included groups of participants who together developed themes, materials and strategies for the articulation of elements of everyday life in Maré. Participants included secondary-level students under the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) 'Young Talent' scholarships and their families, post-graduate students at the Universidade Federal do Rio de Janeiro (UFRJ), PhD students from the Sonic Arts Research Centre (Queen's University Belfast) and members of the Cia Marginal, a theatre company based in Maré. The project also counted on the participation of academics from music, ethnomusicology, visual art and architecture at UFRJ and a partnership with the Museu da Maré, making up a team of over 30 people. The project was curated and coordinated by Pedro Rebelo, Professor of Sonic Arts at the Sonic Arts Research Centre, Queen's University Belfast, and visiting senior professor at the Universidade Federal do Rio de Janeiro. During the development of the project, various changes occurred in the city as a whole and in Maré in particular, which generated an alteration in the territorial politics related to the spaces implicated in the project.18 The way in which these events became part of the everyday life of the inhabitants emerged as yet another point of reflection on identity, territory and community. This is discussed in detail in Silva et al. (2015), who identify the militarisation process that occurred in Maré in 2014 as an example of urban colonialism with a direct impact on all aspects of favela life, including sound and music practices. Control over sonic demonstrations (bailes funk, parties, playing music outdoors) was one of the overtly oppressive ways in which the army, given the task of 'pacifying' the favela, expressed its power over the inhabitants. Som da Maré articulates small fragments of all this cultural expression through sound. Between technologies and everyday life, these experiences can echo in the form of memories but also of future ambition, re-signifying space and place through time. Workshops focusing on articulating the everyday through sound, through methodologies such as those described earlier, helped generate numerous thematic strands, which in some cases were common to a number of participants. Specific sonic experiences acted as a trigger for storytelling, identification of places or simply daily routine. As categories of experience began to emerge, it became possible to cluster contributions into themes such as domestic life, street life and fear. A map of experiences, each traced back to an interview fragment or a comment made as part of an activity, was created and served as a collection of ideas as they were being generated.
Domestic life included sonic experiences such as the precise jingle of keys on a gate, preceded by the sound of a car, signalling the arrival of a participant's father. The ubiquitous 'shhhhhhhh' made by the pressure cooker cooking the traditional Brazilian black beans was shared by many as a sound of the home. Street life brought together a wide range of experiences with particularly evocative sonic manifestations: the sound of the skipping rope, the van selling sweets and all the children screaming as they run towards it, or the sound of game songs (piquetá, piquesconde). Conflict emerged as a theme given Maré's complex history of drug traffic-controlled territory and regular police, army and special forces interventions, often resulting in violence and death. This condition manifests itself very
directly in sound through the 'warning music' played by the special forces armoured vehicle, the caveirão.19 Daily explosions at midday in a nearby quarry were themselves frightening for local children and provided an ironic contrast to memories of a participant's mother placing cotton wool in her ears to attenuate the sound of shooting outside the house. The resulting exhibition articulates a selection of themes through three gallery works (installed in the Museu da Maré by the project team).
Amanhã . . . (Tomorrow . . .) Gallery 1
The installation articulates the ambitions and future plans of a group of youngsters from Maré through personal reflections, including individual dreams but also their projections of Maré's future. The sound wall projects the sound of their voices commenting on their future and on that of Maré, while triggering the visitor's participation. The sound wall is also a chalkboard, offering visitors the possibility of intervening in the installation by writing their own projections (see Figure 6.2).
Leituras em diálogo (Readings in dialogue) Gallery 2
The gallery shares a dialogue that suggests two forms of reflecting on Maré: everyday life and the military occupation of the favela, as mediated by the local press. The installation exposes the contrast between daily life experiences and the reporting of an event that changed some of the reflections and dynamics within the project, and that somehow interfered with Som da Maré itself: the militarisation of Maré. A quadraphonic sound piece occupies the gallery space with recordings of the newspaper reports and interviews, coloured by sonic environment recordings and everyday sonic events. Alongside, eight photographs show four of the youngsters in two very distinct situations: reading O Globo newspaper headlines about the military occupation (see Figure 6.3), and in four photographic montages showing a selection of their selfies.
Figure 6.2 Amanhã . . . (Tomorrow . . .): Detail of ‘sound wall’, Gallery 1, Museu da Maré
Figure 6.3 Leituras em diálogo (Readings in dialogue): Detail of newspaper photos
Banho de Chuva (Rain Shower) Gallery 3
The sound of the rain was one of the strongest memories identified across the community, from the strong summer rain on the metal rooftops to the times of the palafitas (the first wooden houses on stilts that were built in Maré). This sound memory emerges from different situations, ranging from the joyful street baths in the summer rains to a domestic preoccupation – rain falling inside the house. The installation proposes an interactive experience in which various intensities of rain are played back through recycled pans hung from the ceiling. A wooden platform was placed on the floor, recalling the times of the palafitas, when locals would move throughout the neighbourhood using wooden platforms. Contact microphones placed under the wooden structure worked as sensors, changing the intensity of the rain as the visitor progressed along the platform (see Figure 6.4). The three gallery pieces installed in the Museu da Maré provided a readable context (helped in no small way by the permanent exhibition in the museum, which tells the story of Maré). One of the aims of the project was always to create a dialogue with other parts of the city and to question the too easily accepted distinction between morro ('mountain', standing for favela, as this is where a number of them are located in Rio de Janeiro) and asfalto ('pavement', standing for the 'official city'). This dialogue was articulated through a guided soundwalk exploring very much the same themes as the exhibition.
Passeio Sonoro Som da Maré (guided soundwalk in Parque do Flamengo)
A maximum of 15 persons are guided by members of Cia Marginal, a theatre company from Maré. The soundwalk articulates statements, sonic environments and field recordings accompanied by the actors' performances. Maré is remapped in Parque do Flamengo, bringing to this
Figure 6.4 Banho de Chuva (Rain Shower): Detail of installation at MAC Niterói
new space some of the sonic memories and experiences of residents from Maré, and others common to us all. The predefined route introduced a new way of looking at space through sound, narrative and the act of walking, composing an experience that relates the act of listening to the physical experience of the space. The soundwalks took place throughout the week over the course of one month. The meeting point was the project's van, which also marked the beginning of the immersive walk.
Sonic arts and ethnography
The attraction and tension between artistic practice and ethnography, or more widely anthropology, is at play in the type of site-specific participatory work addressed here. Writing in 1996, Hal Foster is critical of the artist as ethnographer while enumerating the various reasons why an anthropological approach to the other is attractive to artists. "[E]thnography is considered contextual, the rote demand for which contemporary artists share with other practitioners today, some of whom aspire to fieldwork in the everyday" (Foster 1996, p. 305). Foster explicitly criticises the caricature situation in which an artist is flown in for a project engaging a community, a project that subsequently emerges as one without enough time and resources to address the basic principles of ethnography as far as building a relationship of trust is concerned. These tensions are undoubtedly present in projects such as Som da Maré, and hence notions of the participative and of horizontal structures were cultivated rather than an alterity which would have placed the community in question as the object of, rather than the interlocutor in, an exploration of sound, place and the everyday. Feld, in Doing Anthropology in Sound (2004), addresses issues of aural culture, artistic intervention and representation more directly when discussing his own recordings. Albums such as Voices of the Rainforest present an edited and selective daily routine of the Kaluli people in New Guinea. Even though Feld's voice is clearly present in his recording, there is an effort to share the process of recording as the key element of the type of mediation addressed earlier. As in Som da Maré, Feld is explicit about the recording process with his participants and plays back recordings in the field, which serves not only to embed meaning but also to aid in the selecting, editing and storytelling that characterises much of the material.
In the case of Som da Maré, the core activity resulting from a participative field process was not in itself an ethnography but a sound art exhibition and soundwalk. There are undoubtedly ethnographic strategies at play, though by virtue of the participatory methods established, these are closer to auto-ethnography than to an ethnography of the other. The source recording materials, including field environmental recordings, interviews and workshop sessions, constitute an archive, now with the Museu da Maré, and hence have a potential future life in anthropological or museological treatments which might contrast with the admittedly selective, aestheticised sonic arts approach that governed the curation of the exhibition and soundwalk.20 The notion of authorship was partially addressed through a curatorial model in which all participants share equal status, even though the intensity and qualities of participation varied widely. As pointed out by Drever, "a contemporary ethnographic approach to soundscape composition may require that the composer displace authorship of the work, engaging in a collaborative process, facilitating the local inhabitants to speak for themselves" (Drever 2002). The question of authorship is complex and beyond the scope of this text. However, a level of engagement and the ability for any participant to recognise aspects of the work, as well as to trace ideas that are manifested in the 'end result' back to specific workshops or interviews, stand as important ways of ensuring the work is true to the process. Ownership of the work by all participants is, of course, a central objective in this practice. The curatorial model suggests certain strategies for negotiating the dynamics within the group and, at the same time, for focussing on the inevitable, resource-intensive production stage that sooner or later needs to begin. Curatorship in this context is by no means unproblematic, as it introduces a hierarchy and a position of power. The experience of Som da Maré suggests more nimble and flexible models are possible, as far as the working methods of the participants (including the initiators) are concerned. What is perhaps more difficult to implement is the interface between these horizontal structures and certain types of institutions. This undoubtedly remains an area for development through trust, management of expectations and the flexibility to engage in multiple ways of producing artwork, engaging communities and the wider public.
Conclusion
We have addressed the context, ambitions and challenges associated with creating a sonic arts practice which takes the experience of sound in everyday life to articulate aspects of identity, relationship with place, and personal and shared memory. Sound represents a privileged medium for engaging in participatory work precisely because of the difficulties in articulating listening as an experience, its personal character and its inability to be captured objectively. All this enriches a process which aims to develop an awareness of our sound environments and an understanding of how they too make up what we are, as individuals and communities. We have discussed aspects of mediation not only in relation to technologies of sound – their capabilities, their limitations and, most importantly, their ability to frame our listening experiences – but also in terms of the social. In this respect, mediation becomes a method for access and community building. Shared understanding of specific conditions, places, memories and participatory working methods contributes towards a potentially positive space of mediation which aims towards social inclusion and an approximation of the objectives of individuals and institutions. Brazil, with its tradition in participatory work represented by Freire, Boal, Oiticica and many others, in a context of widespread social injustice, makes for fertile ground as far as socially engaged arts are concerned. Projects such as Som da Maré, used here as a case study, demonstrate the vast amount of materials, ideas and ontologies that can be generated and shared in societies which increasingly need to articulate their histories, ambitions and prospects. The work discussed here
presents nothing more than a small contribution to that process, in which the arts can be an effective way to highlight the need for an understanding of 'the other' which is self-mediated and not at the mercy of preconceived narratives of poverty, violence, power or social inclusion. The participatory, socially engaged processes discussed here do, without doubt, contribute to this understanding, but they do so in fragmentary, personal, sometimes incomplete or contradictory ways. Such is the nature of lived experience as it emerges through these types of processes. Such is the context in which artistic production thrives in the unexpected, the evocative, the surprising, the banal and the ironic. The privileged position of the arts, and the unique space occupied by the sonic arts, provides a platform for the ephemeral to bring to light what would otherwise remain suppressed. This 'temporary snatching away' that Helguera refers to in the citation at the beginning of this text is what makes this whole process fragile and fraught with difficulties and contradictions – but at the same time unquestionably worthwhile.
Notes
1 See 'Pablo Helguera Archive' http://pablohelguera.net/2011/10/ælia-media-2011/, 'María Andueza ('Immigrant Sounds')' https://mariaandueza.org/2014/09/06/immigrant-sounds-resonart-stockholm/, 'Sue McCauley (personal site)' www.firestation.ie/programme/curators/sue-mccauley/ – last accessed 23 February 2018.
2 See 'Som Da Maré' site http://somdamare.wordpress.com – last accessed 23 February 2018.
3 See 'World Forum for Acoustic Ecology' site http://wfae.net/ – last accessed 23 February 2018.
4 The notion of mediation, as we understand it here, is anchored in the work of Antoine Hennion, Bruno Latour and other authors subscribing to the field of actor-network theory. In this context, a fundamental step was taken by Ann Swidler, before Hennion, in the notion of 'toolkit'. The starting point is that different actors engage with categories, concepts, ideas and whatever else is present in their environment as tools, in addition to the more common technological tools that might be utilised in more or less conventional ways. There is an evident tension between the idea of transgression mentioned earlier and this 'more or less conventional' use of these tools as they are shaped by what their own history embedded them with; a conflictual discourse negotiating aesthetics with technological development. For an in-depth discussion of mediation, see Swidler 1986, Hennion 2007: part II and Latour 2007: 91ss.
5 See 'Sound [British Library]' www.bl.uk/soundarchive – last accessed 23 February 2018.
6 See 'Reel to Real: Sound at the Pitt Rivers Museum' http://web.prm.ox.ac.uk/reel2real/ – last accessed 23 February 2018.
7 Brady reflects on the "mechanical presence" embodied in a recording format such as wax cylinders and how it determines the way information is captured and preserved, hence significantly changing the fieldwork interaction.
8 Practices such as selfies and their distribution on social media have undoubtedly changed our relationship with the camera. Social media networks such as Instagram and Facebook both encourage and frame specific forms of engaging with the moment through the ways photography gets displayed, captioned and tagged.
9 Arguably, our relationship with the everyday through video has itself gone through a dramatic transformation. One only has to compare the culture of the long, unedited and shaky home movies of the 1980s to the slick, fast-paced clips encouraged by sites such as vine.com – "the best way to see and share life in motion" – or http://1secondeveryday.com/, in which one-second snippets are automatically edited together to create a larger-scale movie clip, or indeed the 360° video revolution, which has significant implications for the production of sound.
10 These are two quotes from Som da Maré participants when asked to present their own sound recordings.
11 See 'Moving Still: 1910 Avenida Atlântica, Rio de Janeiro, Brazil' http://matildemeireles.com/portfolio/movingstill – last accessed 23 February 2018.
12 These practices are articulated by Meireles through the notion of extended phonography, a way of capturing that retains a focus on sound while expanding into media such as image and text.
13 Sounds of the City (Rebelo, Chaves, Meireles, McEvoy, 2012) established a number of methodologies later used in Som da Maré and was developed in the context of a commission by the Metropolitan Arts Centre in Belfast. http://soundsofthecity.info and http://www.socasites.qub.ac.uk/soundsofthecity/ – last accessed 23 February 2018.
14 See the websites http://museudamare.org.br/ and http://www.ceasm.org.br/ – last accessed 23 February 2018.
15 A well-known art school in the centre of Rio de Janeiro was initially involved as a site for exhibition and as a way of promoting exchange between favela residents and art students. As the project evolved, the directorship of the school became increasingly concerned with the development of the project and finally decided to pull out. This is an example of the tension mentioned earlier, which is a manifestation of a dichotomy between open-ended, process-focused participatory work and traditional result-driven artistic production.
16 See 'Thomas Hirschhorn' (personal site) https://art21.org/artist/thomas-hirschhorn/ – last accessed 23 February 2018.
17 With nearly 140,000 inhabitants, Maré is one of Rio's largest clusters of favelas, composed of 16 different communities in the north part of Rio (zona norte). The area is at the edge of Baía de Guanabara and has been inhabited since the 1940s. It now covers 800,000 square metres across two of the main circulation routes in Rio: Avenida Brasil and Linha Amarela.
18 The favelas in Maré were occupied by the special forces (BOPE) on 30 March 2014 as part of a process known as pacification.
19 Carioca funk, a genre of music very much associated with favela life, was used by the police and special forces, BOPE (Batalhão de Operações Especiais da Polícia Militar), when entering favelas with armoured vehicles such as the caveirão (named after the skull imagery the vehicle is modelled on). The music would be familiar to residents, but lyrics were replaced to emphasise the intended oppressive message. For a discussion of relationships between music and violence in Maré, including a reflection on the caveirão, see Araújo 2006.
20 The impact of Som da Maré is examined in 'Sounding Conflict: from resistance to reconciliation' (https://www.qub.ac.uk/research-centres/SoundingConflict/).
References
Araújo, S. 2006. 'A violência como conceito na pesquisa musical; reflexões sobre uma experiência dialógica na Maré, Rio de Janeiro.' Trans. Revista Transcultural de Música 10. http://www.sibetrans.com/trans/p5/trans-10-2006 [unpaginated]
Araújo, S. and Cambria, V. 2013. 'Sound Praxis, Poverty and Social Participation; Perspectives From a Collaborative Study in Rio de Janeiro.' Yearbook for Traditional Music 1, pp. 28–42.
Araújo, S. and Musicultura, G. 2006. 'Conflict and Violence as Conceptual Tools in Present-Day Ethnomusicology; Notes from a Dialogical Experience in Rio de Janeiro.' Ethnomusicology, Estados Unidos 50, no. 2, pp. 287–313.
Augoyard, J.-F. and Torgue, H. 2005. Sonic Experience: A Guide to Everyday Sounds. Kingston: McGill-Queen's University Press.
Bishop, C. (ed.) 2006. Participation. London: Whitechapel Art Gallery.
Bishop, C. 2012. Artificial Hells: Participatory Art and the Politics of Spectatorship. London; New York: Verso Books.
Blesser, B. and Salter, L.-R. 2006. Spaces Speak, Are You Listening? Experiencing Aural Architecture. Cambridge: The MIT Press.
Born, G. 1995. Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-garde. Berkeley; London: University of California Press.
Bourriaud, N. 1998. Relational Aesthetics. Dijon: Les Presses du Réel.
Brady, E. 2012. A Spiral Way: How the Phonograph Changed Ethnography. Jackson: University Press of Mississippi.
Certeau, M. de. 2002. The Practice of Everyday Life. New edition. Berkeley: University of California Press.
Drever, J. L. 2002. 'Soundscape Composition: The Convergence of Ethnography and Acousmatic Music.' Organised Sound 7, no. 1, April. doi:10.1017/S1355771802001048.
Ellis, C. 1994. 'Powerful Songs: Their Placement in Aboriginal Thought.' The World of Music 36, no. 1, pp. 3–20.
Feld, S. 2003. 'A Rainforest Acoustemology,' in The Auditory Culture Reader, eds. Bull, M., and Back, L. Sensory Formation Series. New York: Berg.
Foster, H. 1996. The Return of the Real: The Avant-Garde at the End of the Century. Cambridge: MIT Press.
Freire, P. 1993. Pedagogy of the Oppressed. London: Penguin Books.
Hennion, A. 2007. La Passion Musicale. Paris: Métailié. Also 2015. The Passion for Music: A Sociology of Mediation. New edition (translated by Margaret Rigaud). London: Routledge.
Impey, A. 2002. 'Culture, Conservation and Community Reconstruction: Explorations in Advocacy Ethnomusicology and Action Research in Northern KwaZulu.' Yearbook for Traditional Music 34, pp. 9–24.
Ingold, T. 2007. 'Against Soundscape,' in Autumn Leaves: Sound and the Environment in Artistic Practice, ed. Carlyle, E. Paris: Double Entendre, 10–13.
Jaeger, T. 2003. 'The (Anti-)Laptop Aesthetic.' Contemporary Music Review 22, no. 4, pp. 53–57. doi:10.1080/0749446032000156982.
Katz, M. 2004. Capturing Sound: How Technology Has Changed Music. Berkeley: University of California Press.
Kim-Cohen, S. 2016. Against Ambience and Other Essays. New York: Bloomsbury Publishing USA.
Latour, B. 2007. Changer de société, refaire de la sociologie. Paris: La Découverte.
Schafer, M. 2006. Soundscape. Rochester: Inner Traditions.
Silva, A. D. da, Emery, A., Carvalho, E. M. C. L. de, et al. (Grupo Musicultura). 2015. 'É permitido proibir: a práxis sonora da pacificação.' Revista Vórtex (Dossiê Som e/ou Música, Violência e Resistência – Org.: Guazina, Laize), Curitiba, 3, no. 2, pp. 149–158.
Swidler, A. 1986. 'Culture in Action: Symbols and Strategies.' American Sociological Review 51, no. 2.
Thompson, N. 2012. Living as Form: Socially Engaged Art from 1991–2011. Cambridge: MIT Press.
7 THE PROBLEMS WITH PARTICIPATION
Atau Tanaka and Adam Parkinson
Introduction
Participation is a broad, attractive notion that has become increasingly expected in contemporary creative practice. Is it the manifestation of a utopian vision, or even the complete opposite? Participation emerges in different disciplines and takes on multiple meanings: from the general notion of simply taking part, to a taking part that creates positive change in participants and their social surroundings, to art practices that involve the audience and design practices where the "users" are consulted. Participation in art and design is often conceived as being democratising, and perhaps a route to raising an individual's consciousness, catalysing social change and creating associated new social relations, resonating with the political implications of the word. Since the 1960s, we have witnessed a growth of artistic activities in which the spectator ostensibly has more agency and effectively constitutes part of their materials. In the computer science field of human-computer interaction (HCI), through practices such as participatory design and user-centred design, we have witnessed a reversal of the otherwise top-down dynamic of design-led technology development to encourage bottom-up methods for gauging end-user desires and needs as drivers for the conception and elaboration of new technologies or services.1 The risks and consequences of participation have been critically interrogated through art and theory. Participatory art can fail to meet its democratising ideals (when it claims to have them) and even become instrumentalised by forces acting against such ideals. Likewise, participatory design, which emerged from Scandinavian workers' unions and Marxist critiques of Taylorism,2 has seen its methods co-opted into the more politically neutral user-centred design, where it potentially becomes a tool for product design that has lost its ethical underpinnings and focus on democratic empowerment.3 How do we honour the utopian vision of participation and conceive of technology-based music projects where the inclusion of the spectator maintains its originally intended beneficial or boundary-breaking effect? This chapter retraces as case studies a series of projects involving the authors and collaborators spanning the period 2008–2015, where participation was a key factor in their conception, delivery and raison d'être. We will look at ways in which participation-driven approaches can work but also consider when they break down and the pitfalls they engender. By describing the projects and the legacy they left, and looking at the broader issues surrounding participation as evoked in art and theory, we will problematise the concept as we have encountered it and reflect upon the projects themselves.
The underlying question throughout is whether we successfully facilitated meaningful participation or whether we risked being instrumentalised, designing projects and writing up outcomes expected by those above, be they sponsors, partners or the participants themselves. How can this work inform our understanding of the messy complexities of embodied, material reality: of different ways of knowing, different types of participant and different ways of participating? By situating the projects beyond personal subjectivity in the contexts of creative practice, the cultural sector and the academic constraints which shape them, we seek to invite broader discussion about the material underpinnings and myriad forces acting upon contemporary research. We seek to be self-critical and to interrogate the necessity to fit with policy agendas. Rather than rest on the panacea of interdisciplinarity, we allow ourselves to be exposed to tensions between disciplines and epistemological divides, the pressures of funding and the drive to produce research "impact". All of this forms an essential but often ignored element in present-day research and its effects, and we wish to bring it into the open, with the hope that it can better inform participatory research methodologies.
Participatory methods
A key methodological device in the work presented is the workshop. Ad hoc communities are nurtured and brought together through workshop environments and the types of sociality and collective learning they facilitate. We have previously described this as "workshopping",4 drawing on Christopher Small's notion of musicking.5 Small introduces the term to point to the richness of musical experience and the many different, valid ways in which one can participate in music without being subject to an implicit value hierarchy of composing, performing and listening. By proposing the verb "to music", he asserts that music is an activity as opposed to a thing. There are multiple ways to participate in a musical event, and the audience can be participants who contribute as opposed to passive consumers. Drawing on Small's process- and action-based way of thinking about musical activity, we proposed workshopping as a similar notion that captured the various acts of participating in a way that could be reified. The concept of workshopping emerged from studying our activities that combined workshops and performance. These workshops responded to situations whereby fixed ideas such as "user", "listener" and other supposedly passive roles were blurred through the advent of digital technologies alongside concurrent socio-cultural changes. A process-based approach evolved whereby the workshop enabled different types of participation in collective musical practices. This brought a fresh perspective on sharing knowledge and teaching technical topics, where participants themselves, based on their prior experience, entered into a dynamic of "mutual teaching and self-organised peer-learning". The projects described here take musicking to heart as an ethos for digital musical practice and deploy workshopping as a method to trigger creative activity using sophisticated digital music technologies with participants from a wide range of backgrounds.
The projects
We present a series of six projects which we initialised or were involved in, spanning the period 2008–2015:
1 Chiptune Marching Band
Workshops and musical performances with DIY electronics
2 "Turn Your iPhone into a Sensor Instrument!"
Workshops and performances turning the Apple iPhone into a musical instrument whilst teaching programming.
3 Social Inclusion through the Digital Economy
A project exploring multiple ways in which digital tools can extend participation to the disenfranchised.
4 Design Patterns for Inclusive Collaboration
Creating an accessibility tool for audio producers with visual impairments.
5 Form Follows Sound
Using workshops to explore embodied sonic interaction design.
6 BEAM@NIME and EAVI concert series
Broadening audience participation through concerts.
These projects provide a lens by which to study the deployment and impetus of “participation”. They all invoked participation in different ways, connected music with a number of disciplines and took place in different geographical areas. They were not conceived as a series, but upon reflection we can see how they addressed a consistent set of themes in different contexts. The projects were centred in the UK and run out of British universities with funding from UK research councils (in both the sciences and the arts), cultural sector actors, and European funding bodies. While the activities were concentrated in the UK, certain activities or parts of projects also took place in continental Europe and the United States. Some projects included hosts or partners from countries such as Norway, France, Portugal and Spain. In the UK, the projects took place in the northeast of England, where culture-led regeneration was a successful socio-economic prerogative in the 1990s–2000s, and in southeast London, where the interface between a world-class university and a poor surrounding neighbourhood in the throes of gentrification created a dynamic propitious to this work. The projects for the most part used workshops as a delivery vehicle, where the workshop structure and delivery were shaped by the specific theme in question, the participants (or users) and the context. These participants ranged from youth groups to art school students, from experimental music performers to visually impaired audio producers. The venues varied from DIY happenings to academic conferences, from pubs to festivals. In some cases, participation through social inclusion was an explicit driver. In others, introduction to digital technologies as catalysts to participation motivated the event, while in still others the driving force was broadening awareness of new forms of music.
DIY: Chiptune Marching Band
The Chiptune Marching Band (CTMB) was conceived by Jamie Allen and Kazuhiro Jo at Culture Lab, a cross-disciplinary research centre at Newcastle University, and sought to capture the burgeoning energy of the DIY and maker scenes to create a band of self-sufficient electronic musical instruments. The project put in place an “ecology of practice” combining educational workshops, collective instrument making and musical performance (see Figures 7.1 and 7.2). It introduced basic electronic circuit design to a broad range of users, and resulted in ad hoc “performances” in the streets.
Figure 7.1 Chiptune Marching Band workshop with Kazuhiro Jo
Figure 7.2 Chiptune Marching Band participants marching with self-made instruments at the Maker Faire
Participants were encouraged to think about self-sufficiency, off-grid energy generation and a democratic approach to music making.6 CTMB workshops were held eight times over a two-year period, 2008–2010, in locations including Newcastle upon Tyne, Helsinki, New York City and Tokyo. In each iteration, 7–20 participants worked together for three hours to each build a small sound-making circuit which was sensor-driven and powered by an alternative energy source – a hand-crank. The process of making the instrument facilitated social interactions around knowledge exchange and materials and educated participants on a number of topics, from basic electronics to broader issues of energy consumption. The workshop was followed by a marching musical performance in the surrounding area, where participants played the instruments they had made.7
Jo and Allen use the term “ecology of practice” to describe the complex meshwork of interrelationships within a CTMB workshop/performance, involving the learning and exchange of skills, engagement with and awareness of materials, and creativity expressed in making and performing. As they describe, “The overall motivation for the design of CTMB is the creation of an event and environment wherein people of various walks of life converge around a set of ecologies”.8 People were brought together in the CTMB workshops through their collective engagement and knowledge exchange around limited resources. CTMB involved sharing knowledge between different types of participant, allowing each to bring different elements to the process, be they hackers, musicians or makers. Within the workshops, traditional notions of musical creators and performers were blurred, making everyone a combination of creator-performer-listener, themes that will be investigated further here as we interrogate the notion of the “user”. CTMB effectively used the format of the workshop to create a social environment where people learned and gained a sense of both group and individual agency, ideas which will surface throughout the projects discussed. The project also explored novel ways to capture participant feedback, glean insight and communicate the essence of the event. Brief semi-structured interviews recorded participant reactions on site. Postcards were distributed to participants, providing them with a playful channel to communicate after the event was over by asking them “What had become of your instrument?”, thereby attempting to capture the multiplicity of the event and its extension into people’s lives. Documentation focused on the creative process, exploring the participants’ experience of creativity within the collaborative workshop environment and how this might fit into their everyday life. The project was summarised by a short, stylised film portraying a typical CTMB event.
Mobile music: “Turn Your iPhone into a Sensor Instrument!”
The workshop series “Turn Your iPhone into a Sensor Instrument!” had a similar workshop-performance structure along with strong pedagogical elements. The technology presented in the workshop/performance harnessed the increasing computational power of smartphones at the time (2009) to perform real-time digital sound synthesis. We used RjDj, an app that allowed “patches” authored in the musical programming language Pure Data to be deployed on the iPhone. We had been using Pure Data on computers for some time, for laptop performances that used off-the-shelf MIDI controllers and bespoke gestural interfaces for performer input. The growing processing power of the iPhone meant that the same Pure Data patches, such as granular synthesisers, could run on the mobile, albeit sounding grittier at half the sampling rate. Meanwhile the phone offered something the computer did not have: its tilt sensor meant that the controller was not an external interface, and one could have instant, embodied interaction with the sound. By plugging the headphone output of the iPhone directly into an amplifier, it became a self-contained instrument consisting of sensor input, signal processing/sound synthesis and audio output all in a single handheld device that had instrumental autonomy akin to that of an electric guitar. Suddenly, we could perform computer music without the computer, bringing us into an era of what we dubbed “post-laptop music”. With one phone in each hand and in duo ensemble, we performed as Adam & Atau, 4-Hands iPhone, in a tip of the hat to four-hand piano duets.9 Alongside the performative potential of this system, it also seemed apt to use as a teaching platform to introduce interactive music programming to non-specialists.
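The patches themselves were authored in Pure Data and ran inside RjDj, so none of the project’s code appears here. The following Python fragment is only a rough sketch of the kind of tilt-to-sound mapping involved – one tilt axis steering pitch, the other loudness, rendered offline to a WAV file. The function names, parameter ranges and the sample “gesture” are illustrative assumptions rather than the workshop material.

```python
import math
import struct
import wave

SR = 22050  # a deliberately modest sample rate, echoing the mobile trade-off

def tilt_to_params(tilt_x, tilt_y):
    """Map tilt values in [-1, 1] to a theremin-like pitch (Hz) and amplitude (0-1)."""
    freq = 440.0 * (2.0 ** tilt_x)                  # sweep roughly two octaves around 440 Hz
    amp = max(0.0, min(1.0, (tilt_y + 1.0) / 2.0))  # raise the phone, raise the volume
    return freq, amp

def render(gesture, seconds_per_reading=0.1, path="tilt_theremin.wav"):
    """Render a list of (tilt_x, tilt_y) readings to a mono 16-bit WAV file."""
    frames = bytearray()
    phase = 0.0
    for tilt_x, tilt_y in gesture:
        freq, amp = tilt_to_params(tilt_x, tilt_y)
        for _ in range(int(SR * seconds_per_reading)):
            phase += 2.0 * math.pi * freq / SR
            frames += struct.pack("<h", int(32767 * amp * math.sin(phase)))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(bytes(frames))

# An imaginary gesture: tilting slowly from left to right while gradually raising the phone.
render([(x / 10.0, x / 20.0) for x in range(-10, 11)])
```

In the RjDj version this mapping lived inside a Pure Data patch and responded to the accelerometer in real time on the phone; the offline rendering here is purely for illustration.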
We began teaching half-day workshops in which we introduced 12–16 participants to Pure Data and guided them through building a simple, gesture-controllable synth on their computers – a primitive version of the theremin – and downloading it to play on an iPhone or iPod touch. This echoed the personal instrument building that took place in the earlier CTMB project. The workshops were presented over 10 times in the period 2009–2011 in a range of settings across the UK, Europe and North America, from schools to festivals, with participants ranging from children to musicians, artists, students and specialist programmers. Each workshop was followed by a public event where the workshop participants were invited to demonstrate or perform the “instrument” they had made earlier. The event ended with a concert performance by Adam & Atau, 4-Hands iPhone.10 One of the appeals of Pure Data and similar programming environments is that they are open-ended toolkits or “blank canvases” where the user can sketch, develop and modify creative ideas. However, this can also be a barrier, as the blank canvas can be daunting. Starting something when the possibilities are infinite can be difficult, as can deciding that the same something is finished, in an environment which affords constant editing and modification. The iPhone instrument provided a structure within which learning could take place, creating a goal-oriented task with a fixed end-point. Once a patch was loaded onto the phone it became locked and “instrumentalised” – that is, turned into a (relatively) fixed musical instrument. In this way, we hoped that participants would not feel as though they were learning to program; rather, they were building a synth, and it just happened that they would pick up some programming on the way. Seen in this light, the workshops served to “trick” participants into learning Pure Data and being introduced to the potential of musical programming. We hoped that by taking part in the workshop, the participants would come to see programming as a way of turning a locked-down consumer device into a creative, noisy tool that they could use to realise and explore their own musical ideas. The approach was successful, with many participants quickly grasping the basics and overcoming elements of the notoriously steep learning curve. It has evolved into a pedagogical strategy used by one of the authors when teaching musical programming to those who might not be immediately drawn to it, or see the advantages of learning it. One workshop saw 15-year-olds developing synths and even extending them beyond the workshop goals, a point which relates to the next case study. The show-and-tell segment of the process was intended to bridge the workshop and the concert. At art school renditions of the event, students happily gave demonstrations of their projects. However, we found that in general participants were unwilling to perform with the technology, which could say as much about the social relations of musical performance as anything. Just because people have developed the knowledge to be able to make music on a stage and are interested in musical interactions with their phone does not mean that they will want to perform, whether because of nervousness or other factors. Wanting to perform can be quite separate from wanting to personally explore creative interactions with something.
Social Inclusion through the Digital Economy
Social Inclusion through the Digital Economy (SiDE) was a major research “hub” that ran from 2009 to 2014. Funded by the UK’s Engineering and Physical Sciences Research Council (EPSRC), it was one of three hubs funded in the UK at the time under the council’s then-new cornerstone programme on the digital economy. It was driven at the national level by research funding bodies responding to government imperatives to broaden the economic impact of digital technologies.
Given the history of post-industrial decline in northeast England caused by the dismantling of the mining and shipbuilding industries under the Thatcher government (1979–1990), followed by an ambitious culture-led regeneration of the region under New Labour (1997–2010), social exclusion was an appropriate theme for a digital economy research centre based in Newcastle upon Tyne. SiDE covered a broad range of disciplines, from transport to the connected home to accessibility. Within this hub, one of the authors represented the creative industries, ostensibly to conduct research on the role of digital technologies in enhancing social inclusion in creative media such as film, music and gaming. This shaped the design and delivery of a series of participatory art projects within SiDE.11 The SiDE project involved working with young people on a range of projects, often with local partner organisations (see Figure 7.3). The first was a participatory design workshop to teach gaming technologies to young people aged 14–18 who were part of the Regional Youth Work Unit, a social benefit organisation. A workshop presentation informed the participants about the basics of how and where video games were made, with the goal of raising aspirations by showing that “games are made by someone”, and in fact that there were major game developers in northeast England (that they “don’t just come from somewhere else”). This was followed by presenting user-friendly game authoring technologies, so the participants felt empowered and able to create games. After brainstorming, the group decided to make a game that was not necessarily artistic as such but that informed young people about education pathways. This was an unexpected divergence from the original project goals. However, in the spirit of participation, we followed the course decided by our participants. The result was the Future Options Pack, an interactive brochure developed by the young people to navigate their way through educational choices. The medium of an interactive game allowed them to imagine garnering the interest of people their age who risked being less engaged with education, and to discuss education choices with their parents. In another partnership, this time with the Generator Music charity, SiDE researchers developed the workshop Remix Your ’Hood. Here our partner was interested in teaching young people digital music making technologies. The techniques we had adopted in the iPhone workshops described in the previous section were adapted to this user group. These involved meeting with groups of young people and lending them iPod Touches with the RjDj app, which allowed them to sonically “remix” the city, as different programs running within the app would take environmental sounds and loop and process them in different ways.
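We cannot reproduce the RjDj scenes themselves here; as a hedged illustration of the kind of loop-and-process treatment just described, the sketch below cuts a loop from an ambient recording and runs it through a simple feedback delay. The function, its parameters and the stand-in “field recording” are invented for this example and are not the app’s code.

```python
def loop_and_delay(recording, loop_len, delay, feedback=0.5, repeats=4):
    """Loop the first `loop_len` samples of `recording` and apply a feedback delay.

    recording: mono float samples in [-1, 1] (e.g. street noise captured on the mic)
    loop_len:  number of samples kept as the loop
    delay:     delay time in samples
    feedback:  proportion of the delayed signal fed back (0-1)
    repeats:   how many times the loop is played back-to-back
    """
    loop = list(recording[:loop_len]) * repeats
    out = []
    for i, x in enumerate(loop):
        delayed = out[i - delay] if i >= delay else 0.0
        out.append(x + feedback * delayed)
    peak = max(1.0, max(abs(s) for s in out))  # normalise so the feedback cannot clip
    return [s / peak for s in out]

# Example with a stand-in "field recording": a sparse train of clicks.
field_recording = [1.0 if i % 1000 == 0 else 0.0 for i in range(8000)]
processed = loop_and_delay(field_recording, loop_len=4000, delay=1500)
```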
Figure 7.3 Young people participating in SiDE workshop with mobile music technologies
Through some of these activities, SiDE attempted to utilise public spaces as areas for engagement. Remix Your ’Hood encouraged groups of youths to remix ambient sounds in their everyday environments. In its publication and dissemination activities, the Creative Industries group in SiDE responded to a call from the Arts & Humanities Research Council (AHRC)’s Connected Communities programme for scoping studies on the topic of communities.12 It is interesting to note that the original project, to mobilise the participatory potential of digital technologies, was funded by a scientific research grant. The follow-on project, to critically examine the communitarian dynamic that was emerging in an IT-driven society, was supported by humanities funding. The result was a published policy document and a public symposium and exhibition.13
Accessibility research: Design Patterns for Inclusive Collaboration
From 2012 to 2015, the authors took part in a collaborative research project between Goldsmiths, Queen Mary University and the University of Bath, funded by the EPSRC and entitled Design Patterns for Inclusive Collaboration (DePIC). The aim of the project was to apply techniques of multimodal interaction with disabled communities. By engaging users through participatory methods, we sought to develop a set of design guidelines for mapping information across sensory modes (sight, audition, touch) to create more accessible interfaces.14 We worked with audio producers who have visual impairments through a series of workshops, with the goal of developing a set of accessible tools to assist them in the workplace. We started with an initial series of ethnographic workshops to understand the barriers our users met in everyday professional situations and the solutions they used to overcome them. The workshops began with discussion sessions to learn about the existing ways in which the participants overcame their disabilities to integrate themselves into the demanding world of professional audio production. This aided us in learning our users’ existing solutions as well as hearing about their needs and their frustrations. Our users were accustomed to using standard accessibility tools such as screen readers; however, there was a high level of frustration amongst them on account of the inability of these screen readers to verbalise all elements of a highly graphic digital audio workstation (DAW) user interface. This was compounded by the fact that the nature of the content being edited (audio) clashed with the mode (speech) into which visual interfaces were being translated. This phase of the research was ethnographic in the sense that it helped us, as sighted researchers with no personal experience of blindness, to better understand and imagine our users’ everyday workplace realities. We next presented potential enabling technologies in the workshop as a way to engage in group brainstorming and prototyping. We presented existing interfaces such as the Novint Falcon haptic controller, coupled with custom software that mapped the audio waveform amplitude to a haptic representation, causing resistance in the device’s articulated arm. This form of cross-modal mapping was enlightening to our users, and the experience triggered lively brainstorming on possible ideas for a dedicated audio-haptic interface. At the same time, there was healthy scepticism on the part of our users. They found it difficult to orient themselves in free space without orientation guides to navigate linear audio data. Thus, the six degrees of freedom of the Falcon were not effective in helping the user interact with a two-dimensional waveform representation. Informed by this, the authors developed the Haptic Wave, a device that allows users to scrub through an audio recording with a rail-mounted motorised slider, using haptic feedback to inform them of the amplitude of the recording at any point in time.
An initial lo-fi prototype was produced in the hacker/DIY spirit by repurposing a disused flatbed scanner. This version was tested in a second round of workshops to get feedback from our users. Here the symmetrical visual representation of audio (above and below 0), which was mapped to two sliders around a fixed centre point, was deemed unnecessary and even un-ergonomic. Our users found it awkward to rotate their wrist into a position to span the two sliders. Perhaps more significantly, in one user’s case, because he had never seen an audio waveform drawing, he was not even aware of the default symmetrical representation. These insights informed the design of a production version of the Haptic Wave.15 We worked with an industrial designer to develop a new single-slider device. The final prototype of the Haptic Wave was deployed as a “technology probe” in real-world studio trials in the UK and USA, where participants lived with the Haptic Wave in their studios for several weeks, kept diaries about how they used it, and answered questionnaires about their use of it.16
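The Haptic Wave itself combined custom hardware, firmware and a motorised fader; the Python sketch below illustrates only the underlying cross-modal mapping – slider position to local signal amplitude to a force level – using simplified, assumed interfaces rather than the device’s actual software.

```python
import math

def amplitude_envelope(samples, num_positions=512):
    """Pre-compute an RMS amplitude value for each position along the slider's travel."""
    window = max(1, len(samples) // num_positions)
    env = []
    for i in range(num_positions):
        chunk = samples[i * window:(i + 1) * window]
        env.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)) if chunk else 0.0)
    return env

def haptic_force(position, envelope, max_force=1.0):
    """Map a normalised slider position (0-1) to a force level.

    Louder regions of the recording push back harder, so scrubbing the slider
    lets a user 'feel' the shape of the waveform without seeing it.
    """
    index = min(len(envelope) - 1, int(position * len(envelope)))
    return max_force * envelope[index]

# Example: a fade-in should feel progressively stronger towards the end of the travel.
audio = [(i / 8000.0) * math.sin(2 * math.pi * 220 * i / 8000.0) for i in range(8000)]
env = amplitude_envelope(audio)
for pos in (0.1, 0.5, 0.9):
    print(pos, round(haptic_force(pos, env), 3))
```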
Embodied sonic interaction: Form Follows Sound
We applied our workshopping methods in a sonic interaction design (SID) context in Form Follows Sound (FFS), designed by Goldsmiths colleagues Alessandro Altavilla and Baptiste Caramiaux with Scott Pobiner from the Parsons School of Design (New York). The workshop was carried out at Goldsmiths (London), Parsons, IRCAM (Paris) and ZHdK Academy of Arts (Zurich).17 Here, participation was deployed to study people’s embodied relation to sound in the everyday and to make participants think critically about their sonic environments and the role of sound in their daily interactions. SID is a design practice in which sound content assets and interactions are designed.18 This can inform users about the task they are doing (by auditory icons or through audification of activity), enabling refinement of their actions,19 or facilitate non-task-oriented and creative activities through the sonic augmentation of everyday objects.20 FFS deployed embodied sonic interaction in a workshop setting, not so much to design sonic interactions but as a way to learn about people’s visceral interactions with sound in the everyday. Then, by giving participants themselves the means to design gestural sound interactions, we were interested to see if an active process of design might help them become more attuned to sound in their daily lives and give them ways to apprehend and describe sound more sensitively. The workshop took place in two phases, “Ideation” and “Realisation”, where interactive audio technologies were introduced only in the second phase. In the ideation phase, we first asked participants to remember a notable sound from their recent everyday life. This followed the critical incident technique21 from psychology, which elicits specific memories related to particular recent moments lived by the subject. By applying the technique to aural memory, we proposed the Sonic Incident method. The ideation phase concluded with a storyboarding exercise where participants imagined a corporeal interaction with their sonic incident and sketched it out, cartoon style, on paper. In the realisation phase, a gesture-sound authoring toolkit comprising software in Max MSP and a small accelerometer sensor was introduced. Participants brought their storyboards to life by acting out their scenarios. These skits were presented to the group, were filmed as “video prototypes”,22 and served as the basis of discussion. The exercise of rendering a sonic incident to be interactive served as a way to understand the embodied nature of the sonic experience. Here we followed Gibson’s notion of environmental affordances to encourage participants to think about the possible embodied affordances of sound, or sonic affordances.23
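The workshop toolkit was built in Max MSP and is not reproduced here; as a loose sketch of the sort of gesture-sound mapping participants authored, the Python fragment below turns raw accelerometer readings into a smoothed gestural “energy” envelope and maps it to a granular-synthesis density. The names, ranges and sample readings are assumptions made for illustration, not the toolkit’s interface.

```python
def gesture_energy(accel_stream, smoothing=0.9):
    """Turn raw accelerometer readings into a smoothed gestural 'energy' envelope.

    accel_stream: iterable of (x, y, z) readings in g
    smoothing:    one-pole low-pass coefficient (closer to 1 means smoother)
    Yields one energy value per reading.
    """
    energy = 0.0
    for x, y, z in accel_stream:
        magnitude = (x * x + y * y + z * z) ** 0.5
        excitation = abs(magnitude - 1.0)  # subtract gravity so a device at rest reads ~0
        energy = smoothing * energy + (1.0 - smoothing) * excitation
        yield energy

def energy_to_grain_density(energy, min_density=1.0, max_density=50.0):
    """Map gesture energy to a granular-synthesis density (grains per second)."""
    return min(max_density, min_density + energy * (max_density - min_density))

# Example stream: stillness, a vigorous shake, then stillness again.
readings = [(0.0, 0.0, 1.0)] * 20 + [(0.8, 0.2, 1.6), (-0.7, 0.1, 0.3)] * 10 + [(0.0, 0.0, 1.0)] * 20
densities = [energy_to_grain_density(e) for e in gesture_energy(readings)]
```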
With Form Follows Sound, we wanted to find out whether sound evoked forms of embodied imagination and sought ways in which interactive digital audio technologies could facilitate the process. In this case, participation was important in two ways: first, as a way for personal encounters with everyday sound to be suggested by our workshop participants, and second, to provide our participants with a route into practice with these interactive technologies.
Broadening audience participation: BEAM@NIME and the EAVI Nights
The projects described earlier used digital musical practice as a vehicle through which to conduct research into social inclusion and accessibility. Workshopping was a key method, where the workshop event focused and abstracted “real-world” incidents out of context. We were also interested in studying digital musical practice – production and performance – in its “native” context, that of the concert. In this way, we were interested in making public events objects of study and seeing if the way in which we framed new music could increase accessibility to and engagement with it. We were interested in extending Gaver’s notion of the cultural probe24 to see whether public events like concerts could become sites of, and objects of, practice-led research, to see if we could think of “a concert as cultural probe”. In the period 2012–2017, we staged a series of concerts known as “EAVI Nights”. We programmed a diverse range of musicians, from Leafcutter John to Trevor Wishart, Kaffe Matthews, AGF, Cathy Lane, People Like Us and David Toop, to perform in the back room of a pub near Goldsmiths. Much effort was made to keep the entrance fee low, and the audience was generally a relatively varied cross-section incorporating students and people who lived in the area, as well as people from music or computing: in short, it generally extended slightly beyond the usual audience for a night of experimental electronic music. The environment of a pub, with the hum of conversation, the smell of beer and indie rock drifting in from the front bar, might not make for the perfect listening environment for experimental electronic music, but it can be more accessible than the formal and highly codified traditions of the concert hall. The mode of listening was less prescriptive; people could drift in and out of the room rather than feeling trapped in a seat and committed to watching a whole performance. It was through recognising and creating affordances for these multiple ways of listening to and engaging with new music that we hoped to broaden participation in electronic music. Engagement has become an increasingly central criterion for cultural sector funding bodies. Arts Council England, for example, requires an estimation of the number of “People who benefit from your activity” in their application form. They request audience numbers and feedback forms as part of post-event reporting. Rather than oblige the concert-going audience to fill out surveys, we are currently looking at ways in which engagement can be encouraged and documented in ways that are intrinsic to the conception of the events we organise, and are editing a series of video documentaries of our concerts. In 2014, we hosted the New Interfaces for Musical Expression (NIME) conference in London.25 We were interested in extending the traditional format of an academic conference and in experimenting with aspects of the festival format. We collaborated with Sarah Nicolls of the Brunel Electronic and Analogue Music festival under the banner BEAM@NIME, and organised a series of high-profile concerts that were presented in established cultural venues across London. Alongside the concerts, we also presented an exhibition programme of sound artworks. We extended the peer review process of academic publication to have alongside it a curated portion of the programme.
The two types of selection were not separated in the concert programmes themselves, resulting in hybrid programmes where submitted and invited works were side by side.
This facilitated an innovative mix of student and academic work sharing the stage on equal footing with high-profile pop artists such as Imogen Heap, or including, in what is ostensibly an electronic music event, purely acoustic instrumentalists, such as saxophonist John Butcher, who use extended technique to push the timbral limits of their instruments. The exhibition programme took place in a gallery setting on site at the conference and also included a commission for a public-facing installation with interactive sonic bikes by sound artist Kaffe Matthews in a local park. This prompted us to appoint a specific outreach officer who co-ordinated school visits. Through this combination of co-ordinated school visits, high-profile artists and interactive installations in public places, we were able to reach beyond the normal NIME crowd of conference attendees and engage with the general gig-going public, local residents in the London borough of Lewisham, school children and more, who did not generally have access to, interest in or awareness of computer music. Both BEAM@NIME and the EAVI Nights were attempts on our part to broaden participation in electronic music. Increasing audiences at concerts and providing multiple access points to electronic music, whilst making those gigs as affordable and welcoming as possible, is part of that. Work is ongoing on documentation and feedback methods, to examine the potential of using the entity of the concert as a research probe. We will explore ways in which documentation can be used creatively to capture the unique, subjective experiences of the participants, whilst still giving us materials that can be used and analysed with appropriate methodological rigour.
Discussion
We will use the projects described as case studies to frame a critical discussion of participation and the ways it can create contexts for electronic and computer music. We will do this by drawing upon critical literature and artistic practices which have invoked participation, comparing their conclusions and critiques with our own projects and experience. We hope that this critical interrogation of our own work will be informative in terms of studying the way in which concepts are mobilised and may be instrumentalised by the powers that be, and will suggest some ways in which participation can be productively used despite its associated risks.
Who is participating?
We cannot discuss participation without considering who it is that participates. This brings up questions about the subject: the spectator in art and the user in computer systems. The increasing technological dominance of contemporary society has meant that the idea of the user has been widely adopted outside of technical realms, often without second thought. A common term in HCI is that of “the end user”. The idea of the end user invokes a certain hierarchy of production and consumption, with the user firmly situated at the “end” of said chain. Certain types of participation start to problematise this idea of the end user. Users can be active subjects: they might be involved in the design of the things they use, they might hack and personalise what they buy, or they might be creative within an artistic project. Seen in this way, there is no single archetypal user. There may be multiple types of “user”, or stakeholders, in a given project, with each user not only “using” (taking) but contributing (giving). We may think of such stakeholders not just as users, but as actants (evoking actor-network theory),26 or simply as “actors”. In the SiDE project, we encountered situations where the user was not an individual but an organisation or group. Our partner organisations, such as the Regional Youth Work Unit, Generator Music and Helix Arts, gave us access to existing networks of young people.
In this sense they acted as gatekeepers. But as organisational “users”, they were also keenly interested to learn and adopt the digital media techniques and knowledge that we as researchers offered. We can think of these situations as having multiple tiered levels of user. The CTMB and iPhone workshops blurred boundaries between workshop attendee, performer and audience. People attending one of these workshops might find themselves momentarily in the role of a teacher as they worked with someone else in the group explaining a concept, then as a hacker as they used a consumer technology in a new way, then as a performer as they marched around a city with a noise maker or did a gestural music performance with a smartphone. While they remained a user of their smartphone, they became a kind of “power user” or hacker. These devices encourage creativity through easy-to-use apps for music making, photography and other artistic activities; however, the modes of interaction they actually offer are formatted and pre-determined by the software publisher. The projects we describe here used participation to open up the creative process beyond just being an end user of a ready-made app to becoming the app maker, or instrument builder. This blurring of roles in the workshop parallels the potential for flattening of musical engagement in Small’s musicking and makes our notion of workshopping not just a method but an activity of participation that is less deterministic in structure and more open-ended in anticipated results. The DePIC project framed workshop participants not as end users of a product, or even end users who had designed a product, but as experts in their field. The members of the highly committed group of visually impaired audio engineers we assembled as our “user group” were to an extent “hackers” (though they would perhaps not identify as such), writing their own scripts to adapt accessibility tools for their DAW software or developing DIY configurations of commercial studio devices to facilitate editing according to their specific abilities. They worked out of necessity to modify and “hack” products in order to use them, as most music software is not built with visually impaired users in mind. As such, they represented a unique and interesting kind of expert user who cannot easily be captured by conventional categories. In this research, we framed ourselves and those we worked with as actors in an attempt to articulate the non-hierarchical and iterative way in which we worked together, with the design process creating a space for dialogue. The concertgoers we have engaged with through the EAVI gigs and BEAM@NIME may initially be thought of as more conventional users, but viewed from a musicking perspective, the audience should not be thought of as mere passive end users or consumers. The unique social situation of the EAVI concerts was built only through the continual actions – and participation – of all those there, from the bar staff to the sound engineer through to the audience and performers. The concerts cannot be separated from the audience, as they were constituted in part by the audience. The audience were not mere users – they were the event. These elements can be thought of as actors in Latour’s actor-network theory, or as constituting an “ecology” as Jo and Allen imagined in the Chiptune Marching Band. We owe this point of view to Small, for whom all aspects of a musical event are important elements in the act of musicking.27
Trust building, partnerships and collaboration
Participatory work is often built upon the unspoken assumption that researchers are sensitive to the needs of the subjects. Best practice in research ethics can assure due diligence in this matter through processes such as ethical approval at the institutional level and asking for informed consent from subjects. The researcher in classical anthropology and sociology is detached from the subject for the sake of rigour.28 Over the course of the twentieth century, a growing self-reflexivity in subjects involving field research – from anthropology to ethnomusicology and even art theory (as we shall see) – has revealed such detachment to be, on the whole, something of an illusion.
We have seen a rise of “situated perspectives” in human-computer interaction, particularly in experience-based third wave HCI. Williams and Irani note how ethnographic engagement in the design process undermines the traditional perception of the designer as a “neutral observer” and disrupts boundaries between designers, researchers and users.29 This exposes the limitations and problems of discourses which maintain a “designer-user dichotomy”.30 The authors look to ways in which design practices can allow users to represent themselves – and therefore more effectively participate – rather than be represented by designers. This resonates with sociologist Les Back’s assertion of the importance of researchers in contemporary sociology being sensitive to the subject’s lived experience and willing to “listen on their terms”.31 Also exploring the role of ethnography in design, Blomberg and Giacomi describe a continuum from the extremes of the “observer participant” – who strives to be the invisible “fly on the wall”, as unobtrusive as possible – to the “participant observer”, who becomes a full participant in the activities studied. Both these extreme positions have advantages and disadvantages and, to a degree, represent ideals that cannot necessarily be realised but are roles between which a researcher might move. Indeed, our role as researchers in these projects was fluid. We drew upon social science and HCI methods, but we did not maintain the distance of an outside researcher. We were ourselves practitioners, not just interested in sharing our musical experience but also open to challenging unconscious assumptions we might hold. These open-ended, participatory activities were not just dissemination channels but ideally should have become two-way streets where we learned from our participants at the same time as our participants learned from us. From the listening social scientist, to the sensitive designer, to the ethically rigorous researcher, there is a growing appreciation of the subject’s dignity and stake in research. But how can we communicate this respect to the subjects themselves in the midst of research? The interviewer’s dictaphone, a structured brainstorming activity or a signed consent form, despite all being tools of user-centric research, may still be bewildering or alienating to the subject. Given that the researcher necessarily remains “the other”, we need to find ways in which to build trust with our research participants. Much of the participation throughout the projects described here was enabled through collaborations with organisations or groups where trust already existed with the user group in question. SiDE’s partners obviated the need for us to recruit users in areas where we were new, instead leveraging the networks and trust established over time by our partners. We were able to access potential beneficiaries in often precarious, difficult-to-access communities, something we as privileged academic researchers simply could not have done, and inspire confidence amongst them by association with our partners. This enabled us to bring to the table digital mobile music technologies within the context of existing partner activities. However, there were also tensions in these collaborations and clashes between agendas which caused stalling or prevented ideas from being realised.
As we discussed, institutions can be seen as a type of user or actor, possessing a certain agency and having certain goals or responsibilities to sponsors. Any institutional collaboration creates potential for such clashes. Our funders, a national research council, had expectations about the power of digital technologies as economic drivers. Our project colleagues, a mix of artists, social scientists and computer engineers, each brought specific views towards work with communities. The charities we worked with were gatekeepers to the actual participants in question and gatekeepers for these participants to the activities and technologies we were offering. It is remarkable that, with such a large number and broad range of stakeholders, vested interests did not create more diverging agendas.
In DePIC, one of the researchers on the team, Tony Stockman, is himself visually impaired and is a member of the community of audio engineers with visual impairments. This community of practice, while small, is highly motivated and well connected regionally and internationally.32 Tony, as a member of this community, had an existing trust relationship with the others. It took a great deal of commitment for participants to attend our workshops – one participant travelled from Wales to London on several occasions, and others traversed London repeatedly to spend long days giving us feedback in our workshops. Such commitment relied upon trust and a healthy dynamic of collaboration. The group quickly acknowledged our appreciation of their cause and enthusiastically shared with us their common understanding: forms of tacit knowledge of the types of tasks that music production involves for people of their specific abilities.33 Although the project is over, at the time of writing the group remains in contact through, for example, a mailing list set up by one member on the occasion of our first workshop, and we continue to demonstrate the Haptic Wave and to look at ways of moving it into production so that we can permanently put the technology into the hands of those who worked with us to develop it.
The problems with participation
We now turn to the broader context of critical writings on participation and the role of participation in artistic practices, in order to reflect upon our work. As we use the literature to examine the practice, we can also use the practice to re-examine the literature and ask whether our experiences bear out the various critiques and taxonomies of participation. Arnstein questions the assumption that participation is inherently “good” or that all participation is equal by proposing a ladder of citizen participation, where partial participation can in fact represent forms of exploitation.34 She proposes three stages of citizen participation, moving from non-participation to tokenism and finally citizen power. Within these stages there is an eight-rung ladder of participation, moving from manipulation to citizen control, which Arnstein uses to illustrate different levels of what might be called “participation” in society, using this model to expose potentially deceptive uses of the concept by those with political and economic power. This prompts us to ask: have we been deceptive to our users by offering partial forms of participation? With the Future Options Pack, a workshop ostensibly meant to be about gaming was appropriated by our participants to create an interactive tool to present education pathways. One could say these participants exercised “citizen power” by appropriating a games workshop to produce an educational resource. The participants in Remix Your ’Hood were led to believe that the audio processing of found sounds would somehow reflect their creativity, but ultimately the sound effects were pre-programmed. Were we as researchers guilty of promoting tokenistic creative agency? In the “Turn Your iPhone into a Sensor Instrument!” workshop, “deception” took a positive twist: through the seemingly fun activity of tinkering with mobile phones and app software, our users unwittingly found themselves learning how to program Pure Data. Had the workshop been advertised as a programming tutorial, the attendance and engagement of participants would likely have been quite different. In Arnstein’s article, an image of a poster from the Ateliers Populaires (1968) uses a play on words in the (mis)conjugation of the French verb participer (to participate), to say “I participate, you participate, they exploit”. Interestingly, this poster is used in Arnstein’s text as well as, decades later, in Chapter 3 of Bishop’s Artificial Hells, a text which examines the history of participatory art. This is useful for examining our own work: whilst our projects may not have been art projects per se, similar structures and hierarchies arise as different interest groups and agendas flock around the concept.
In Participation, an earlier collection of essays, Bishop focuses on the social possibilities of participatory art in catalysing experiences for the participants:

the history of those artistic practices since the 1960s that appropriate social forms as a way to bring art closer to everyday life . . . they differ in striving to collapse the distinction between performer and audience, professional and amateur, production and reception. Their emphasis is on collaboration, and the collective dimension of social experience.35

Throughout our projects, we sought to create a sense of collective experience by orchestrating the social and participatory elements in different ways. We blurred not just participant/researcher roles in the iPhone and CTMB workshops, but the very identity of, and distinction between, learning activity and performance. We sought to re-contextualise musical performances away from concert halls and “ivory towers” and place them in other kinds of venues, be they the pubs of the EAVI gigs or the public park where we presented Kaffe Matthews’ Sonic Bikes piece during BEAM@NIME. In Artificial Hells, Bishop critiques the nature of that participation or its actual manifestations, pointing to gaps between artistic rhetoric and practice. Through a series of case studies of participatory art in three key periods in twentieth-century European history (the rise of fascism, May ’68, the fall of the Berlin Wall), Bishop seeks to “problematise contemporary claims that participation is synonymous with collectivism, and . . . that participatory art under state socialism was often deployed as a means to create a privatised sphere of individual expression”. Ultimately, there is the suggestion that “artistic models of democracy have only a tenuous relationship to actual forms of democracy”. This might undermine some of the utopianism in, for example, the SiDE project, where too much faith may have been placed in digital technology and in the mechanisms through which participation in our projects was expected by our sponsors to provide our users with the skills and knowledge to participate in society itself. This brings up the delicate question: can we self-assess the “participatory” nature of our own projects? We find within Bishop a problematisation of the very way in which we might report upon participation:

Today’s participatory art is often at pains to emphasise process over a definitive image, concept or object. It tends to value what is invisible: a group dynamic, a social situation, a change of energy, a raised consciousness. As a result, it is an art dependent on first hand experience, and preferably over a long duration (days, months or even years). Very few observers are in a position to take such an overview of long-term participatory projects: students and researchers are usually reliant on accounts provided by the artist, the curator, a handful of assistants, and if they are lucky, maybe some of the participants.36

These are all qualities we claim for our own projects: communities were built through social situations we facilitated (in DePIC, for instance); people became aware of their sonic surroundings (Form Follows Sound); or people’s attitudes towards musical programming were changed (Turn Your iPhone into a Sensor Instrument!). Are these accounts credible when reported here? Could even an outside observer be trusted, and does such a person truly exist?
Bishop ultimately questions even her own outsider status as, through site visits and discussion with those involved with pieces, she becomes imbricated in the process. Objectivity remains a fantasy even for the critic, and in response, we hope that we have developed enough self-criticism and an openness to criticism to avoid making false claims about participation in our projects.
Indeed, it is for questions of objectivity and rigour that the social sciences, before the “sensitive turn”, maintained distance from their subjects. The problem of observers who are also implicated in that which they observe is not, of course, confined to participatory art but has been tackled in anthropology, ethnomusicology and numerous other disciplines. If anything, it is symptomatic of the inherently social dimension of these practices that they create a dynamic which is at times too rich to capture unproblematically or too complex and interconnected to observe without affecting. The problems with participation do not stop with the potential gulf between intentions and rhetoric and what happens on the ground. This gap causes participatory projects to risk glossing over real social problems and, through their rhetoric, offering false hope of reconciliation. As David Beech notes in “Include Me Out!”, participation can present a fantasy vision of social reconciliation at the expense of the messy material reality in which it is embedded:

In both art and politics, participation is an image of a much longed for social reconciliation but it is not a mechanism for bringing about the required transformation.37

This exposes the SiDE assumption that participation in one of our projects would lead to participation in the “digital economy”. CTMB played on this gap in a tongue-in-cheek manner – its logo is a raised fist grasping electronic components, resistors and capacitors. By teaching basic electronics and parading about town, did it really empower participants to transform the music industry or our fossil-fuel-driven society? The gaps between the rhetoric and reality of participation are clear from the outset in some of the projects. Whilst there was an attempt to empower all participants as musical performers, in reality the CTMB performances often devolved into a sociable walk. This might suggest that musical performance does not come naturally to everyone, no matter how empowering the act of instrument building might be. Nonetheless, this should not be the sole measure by which the workshops are judged: even if the project didn’t create a team of performers, it served to educate participants in electronics and to nurture socialisation, neither of which was directly measured or quantified. Were we right to assume that workshop participants should perform or would know “how” to perform? Was there a misplaced assumption that DIY instrument building could somehow replace traditional musical training? Or, if the “performance” was more a “happening” in the tradition of Allan Kaprow, what was the nature of the social intervention? Participatory art does not exist outside political and economic context. In the contemporary context, not only might participatory art fall short of its ideological claims, it might even be used in the service of the politics that on the surface it appears to oppose, providing something like a bandage of aesthetics over very real social rifts. According to Josephine Berry, public and participatory art might be mobilised to aestheticise problematic spaces in urban areas, where the tension between urban planning, political agendas and the actual well-being of communities might be at its greatest.
She discusses London’s Olympic Park, a heavily branded public space in an area of rising rents and social inequality, pointing out that art rooted in community and participation risks being a propaganda tool for gentrification by which housing can be withdrawn and life rendered naked and exposed to the relentless forces of the market.38 Through this lens, one could ask whether our activities in the SiDE projects were convenient for the ambitious culturally driven urban regeneration of Newcastle upon Tyne.
Were the young people who remixed their neighbourhoods gaining an aesthetic sense of their psychogeography, perhaps pragmatically learning essential transferable digital skills, or were we part of a possible gentrification of those neighbourhoods?39 The app used in Remix Your ’Hood was presented to participants as being open source, connected to hacking culture and inherently subversive. However, in reality the app ultimately presented the software developer’s vision for how sounds could be remixed. The young people who wanted to sample city sounds and use them in their own compositions at home were left wanting, as the compelling sounds they heard around them were absorbed into the app’s templates and used according to someone else’s musical agenda. The composer was still the software developer, whose drum sequences and melodies subsumed the urban sounds. The agency of the participants was limited: they remained passive listeners, but the passiveness was dressed up with motion sensors and hidden by a rhetoric of hacking, creativity, open source technology and intervention. One could argue it engaged more with an idea of participation suggested from above than with the type of engagement that the participants actually desired, potentially succumbing to the traps suggested by Berry, Bishop and Beech. Even if participation might seem beyond redemption – an empty buzzword or, worse yet, a tool in the hands of capitalist urban planning unanchored from any ethics – there is still the potential for it to be productively used within socially engaged electronic music projects. Participatory art often aims to create circumstances which may facilitate some extension of the cognition or awareness of the participants, as Bishop describes. The Form Follows Sound workshops created a space where participants became more aware of their surroundings, and the Turn Your iPhone into a Sensor Instrument! workshops used the appeal of instrument making and hacking to trick 16-year-olds into taking a programming class. Sometimes the projects engendered participation in ways we had not anticipated. The DePIC project used workshops to design and develop a tool, but the workshops also served to nurture a community. The first workshop allowed one participant to present his idea for the AIMS mailing list for audio engineers with visual impairments: this is now an active mailing list, used, for instance, by software companies looking for accessibility advice or by individuals wanting to learn about certain software functions, amongst other things. Did we invoke participation in a way that transcended empty gesture or fantasies of political reconciliation? We hope that through a continual emphasis on process, maintaining different points of entry, being open to different modes of participation and maintaining a self-critical awareness, participation was present in a palpable, positive way in our work, possibly even in ways beyond those captured in the reports or publications that emerged from the projects. We hope, for instance, that the iPhone workshops encouraged people to begin programming or exploring electronic music performance and production, that the EAVI gigs and the NIME concerts helped extend the MP3 collections and musical interests of some individuals – maybe even encouraging some of them to begin creating music – or that participants in the Chiptune Marching Band had a newfound confidence with electronics. Many of the events could serve in small ways to help build communities or relationships between practitioners.
Interdisciplinarity, differing epistemologies and tacit knowledge
Bishop notes the “breakdown of medium specific artforms” as being part of the post-1960s context out of which participatory art emerged, with its fusion of elements from theatre, performance and more, alongside conventional artistic ideas.
We can draw parallels here to the context within which our own participatory projects occur, in an environment of interdisciplinarity where practices are not confined to specific disciplines or media, as we ourselves have moved through music, design and computer science. Like participation, interdisciplinarity is often presented as being indiscriminately desirable, but the material reality muddies any relentlessly positive view of it. Whilst within music the personal experience of an individual can be grounds for research, within the human-computer interaction community there is more of an emphasis upon user studies, quantitative analysis, or the gathering of multiple subjective reports. The movement between these disciplines is becoming increasingly fluid, as is evidenced in the New Interfaces for Musical Expression (NIME) field. Rather than simply taking the methodologies of HCI to music, we should be mindful to bring the awareness of embodied experience and of the richness of musical experiences to HCI and use this to problematise areas where we see naive positivism. This is not to undermine the methodologies of HCI but to enrich them. The concert series that we have presented – both BEAM@NIME and the EAVI gig series – have considered the cultural event as a site of research. In seeking to articulate that research and explore what knowledge may be generated through these events, we evoke Gaver’s cultural probes. This is necessarily not a direct transposition of these methods – in fact, from an HCI perspective we run the risk of distorting them. However, these qualitative research methods, as open-ended as they may seem, are specific to their original research field, in this case third wave HCI, which studies technology interaction in societal contexts and requires rigour in their use. As music practitioners, we are inspired by these techniques: having a methodological basis built on third wave HCI helps us to explore the very different and increasingly intangible forms of experiential knowledge that arise in live, interactive music performance and to articulate them to the diverse research communities within which we operate. Here we hope that the methodologies from HCI enrich musical practice and aid the understanding of concert spectatorship as a conduit to sharing tacit knowledge about embodied interactive music technologies and practice. Throughout these different disciplines we encounter the importance of tacit knowledge: forms of knowledge that people have but which they cannot always express in language.40 Performing music necessarily involves tacit and embodied, enactive ways of knowing;41 musicians might not be able to describe how they play an instrument, but this has little bearing on their ability to play it. Likewise, listening to music can also draw on tacit knowledge, as one might know whether something sounds “right” without being able to articulate just why. Enactive knowledge is not secondary or inferior to symbolic knowledge and can be just as “exact”; musicians (and often listeners) demand high precision and accuracy in most of their tools.
Some see one of the primary functions of participatory design as drawing out and utilising the tacit knowledge of the “end users” of technology.42 Through creating situations in which people act, without necessarily attempting to translate that embodied knowledge into anything symbolic, such as the scenarios commonly used in design practices, tacit knowledge can be articulated and shared, drawing on the axiom from anthropology that “what people say and what people do are not the same”.43 The projects we have described often used musical practices as domains in which various types of tacit knowledge are expressed. Furthermore, workshopping can successfully create spaces where people can take on different roles, often enabling them to share tacit knowledge through acts that require it and roles that draw them into demonstrating it. Participants in a
workshop move between roles of student, teacher, demonstrator, collaborator and performer. The “leader” acts not as a “sage on the stage” but a “guide on the side”, engineering a situation where different kinds of knowledge exchange – and participation – occur.44 Participants both learn and teach through watching, communicating, helping, assisting and demonstrating. Recognising the validity of tacit knowledge and creating spaces where it can be exchanged potentially open up spaces for participation without being overly prescriptive about how that participation should occur.
Conclusion
We have presented a survey of projects that the authors were involved in, taking place across different institutions in different geographical locations. Using these projects as case studies allows us to reflect upon some of the innate challenges associated with participation. Participation might be used by funding bodies or gatekeepers as a “buzzword” without sensitivity to the different levels of actual participation or types of user that may be represented within such a term. We hope nonetheless that we have managed to enable participation effectively by focusing on process-based activities and critical self-reflexivity, recognising that through communal involvement one can be a “participant” and that there are modes of participation and ways of knowing that will always overspill what can be articulated in a conference paper or to a funding body, yet remain with one, informing one’s worldview and the way one approaches the next project. We hope that our critical reflections upon our own projects can offer some guidance for people working within similar funding structures or in participation-driven projects. This is a conclusion but also an invitation, then, to value the material complexities of participation and to preserve them against the reductionist and quantitative tendencies that the structures we work within might produce. No one paper, project report or user study might capture the rich, experiential reality of a project, but rather than let this be cause for despair, we should let an awareness of that rich reality inform the way in which we carry out such projects and prevent us from accepting uncritically any reductionist uses of the concepts we encounter.
Acknowledgements
The research reported here has been made possible by grants from the EPSRC (EP/J018120/1, EP/G066019/1), AHRC (AH/J500905/1) and European Research Council (ERC FP7–283771). The cultural sector projects have received generous support from Arts Council England and the Performing Rights Society. The work in the projects described reflects the efforts of the following colleagues and collaborators: Lalya Gaye, Joëlle Bitton, Andreia Cavaco, Graham Mearns, Ben Jones, Ranald Richardson, Kazuhiro Jo, Jamie Allen, Alessandro Altavilla, Baptiste Caramiaux, Scott Pobiner, Nick Bryan-Kinns, Tony Stockman, Oussama Metatla, Fiore Martin, David Cameron, Sarah Nicolls and Steph Horak. We would like to sincerely thank all of the participants in the different projects.
Notes
1 Chadia Abras, Diane Maloney-Krichmar, and Jenny Preece, “User-Centered Design,” in Bainbridge, W. (ed.) Encyclopedia of Human-Computer Interaction (Thousand Oaks: SAGE Publications, 2004); Michael J. Muller, “Participatory Design: The Third Space in HCI,” Human-Computer Interaction: Development Process 4235 (2003): 165–185.
2 Taylorism, named after a nineteenth-century industrialist, is an approach to systematically analysing and organising production (and production lines) in order to achieve greater efficiency.
3 Clay Spinuzzi, “The Methodology of Participatory Design,” Technical Communication 52, no. 2 (2005): 165–168.
4 Kazuhiro Jo, Adam Parkinson, and Atau Tanaka, “Workshopping Participation in Music,” Organised Sound 18, no. 3 (2013): 282–291.
5 Christopher Small, Musicking: The Meanings of Performing and Listening (Middletown: Wesleyan University Press, 2011).
6 Jamie Allen, Areti Galani, and Kazuhiro Jo, “An Ecology of Practice: Chiptune Marching Band” (Proceedings of the seventh ACM conference on Creativity and Cognition, ACM, 2009), 347–348.
7 www.youtube.com/watch?v=QJ663robrbg
8 Allen, Galani, and Jo, “An Ecology of Practice: Chiptune Marching Band.”
9 Atau Tanaka, “Mapping Out Instruments, Affordances, and Mobiles” (New Interfaces for Musical Expression, NIME, 2010), 88–93, www.nime.org/proceedings/2010/nime2010_088.pdf.
10 For a video of 4-Hands iPhone see https://youtu.be/jkXAFP9IGV0
11 Atau Tanaka, Lalya Gaye, and Ranald Richardson, “Co-Production and Co-Creation: Creative Practice in Social Inclusion,” in Cultural Computing (Springer, 2010), 169–178, http://link.springer.com/chapter/10.1007/978-3-642-15214-6_17.
12 Joëlle Bitton et al., “Situating Community through Creative Technologies and Practice,” AHRC Connected Communities Scoping Study, n.d.
13 Joëlle Bitton et al., United We Act. A Scoping Study and a Symposium on Connected Communities (lulu.com, 2011), http://dm.ncl.ac.uk/benjones/files/2012/06/cc_book_download3.pdf (retrieved on 6th September 2016).
14 Oussama Metatla et al., “Audio-Haptic Interfaces for Digital Audio Workstations,” Journal on Multimodal User Interfaces, 2016, 1–12.
15 Adam Parkinson, David Cameron, and Atau Tanaka, “Haptic Wave: Presenting the Multiple Voices, Artefacts and Materials of a Design Research Project,” 2015.
16 Atau Tanaka and Adam Parkinson, “Haptic Wave: A Cross-Modal Interface for Visually Impaired Audio Producers” (Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, ACM, 2016), 2150–2161, doi:10.1145/2858036.2858304.
17 Baptiste Caramiaux et al., “Form Follows Sound: Designing Interactions From Sonic Memories” (Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, ACM, 2015), 3943–3952, http://dl.acm.org/citation.cfm?id=2702515.
18 Karmen Franinović and Stefania Serafin, Sonic Interaction Design (Cambridge: MIT Press, 2013).
19 Thomas Hermann and Andy Hunt, “An Introduction to Interactive Sonification,” IEEE Multimedia, 2005, 20–24.
20 Nicolas Rasamimanana et al., “The Urban Musical Game: Using Sport Balls as Musical Interfaces” (CHI’12 Extended Abstracts on Human Factors in Computing Systems, ACM, 2012), 1027–1030; Davide Rocchesso, Pietro Polotti, and Stefano Delle Monache, “Designing Continuous Sonic Interaction,” International Journal of Design 3, no. 3 (2009).
21 John C Flanagan, “The Critical Incident Technique,” Psychological Bulletin 51, no. 4 (1954): 327.
22 Wendy E Mackay, “Using Video to Support Interaction Design,” DVD Tutorial, CHI 2, no. 5 (2002).
23 James J. Gibson, The Ecological Approach to Visual Perception: Classic Edition (New York: Psychology Press, 2014).
24 Bill Gaver, Tony Dunne, and Elena Pacenti, “Design: Cultural Probes,” Interactions 6, no. 1 (1999): 21–29.
25 “NIME 2014,” NIME 2014, accessed February 10, 2017, www.nime.org/2014/
26 Bruno Latour, Reassembling the Social: An Introduction to Actor-Network-Theory (Oxford: Oxford University Press, 2005).
27 Ibid.; Allen, Galani, and Jo, “An Ecology of Practice: Chiptune Marching Band”; Small, Musicking: The Meanings of Performing and Listening.
28 Claude Lévi-Strauss, Structural Anthropology, vol. 1 (New York: Basic Books, 1963).
29 Amanda M Williams and Lilly Irani, “There’s Methodology in the Madness: Toward Critical HCI Ethnography” (CHI’10 Extended Abstracts on Human Factors in Computing Systems, ACM, 2010), 2727.
30 Ibid., 2728.
31 Les Back, The Art of Listening (Oxford: Berg, 2007).
32 Etienne Wenger, Communities of Practice: Learning, Meaning, and Identity (Cambridge: Cambridge University Press, 1998).
33 Michael Polanyi, The Tacit Dimension (Chicago: University of Chicago Press, 2009).
34 Sherry R Arnstein, “A Ladder of Citizen Participation,” Journal of the American Institute of Planners 35, no. 4 (1969): 216–224.
35 Claire Bishop, ed., Participation, Documents of Contemporary Art (London: Cambridge, Mass: Whitechapel; MIT Press, 2006), 10.
36 Claire Bishop, Artificial Hells: Participatory Art and the Politics of Spectatorship (New York: Verso Books, 2012), 6.
37 Dave Beech, “Include Me Out!” Art Monthly, April 2008, 315.
38 Josephine Berry, “Everyone Is Not an Artist: Autonomous Art Meets the Neoliberal City,” New Formations 84, no. 84–85 (2015): 20.
39 Stuart Cameron and Jon Coaffee, “Art, Gentrification and Regeneration – From Artist as Pioneer to Public Arts,” European Journal of Housing Policy 5, no. 1 (2005): 39–58.
40 Polanyi, The Tacit Dimension.
41 Francisco J. Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (Cambridge, MA: MIT Press, 1991).
42 Spinuzzi, “The Methodology of Participatory Design.”
43 Jeanette Blomberg et al., “Ethnographic Field Methods and Their Relation to Design,” Participatory Design: Principles and Practices, 1993, 130.
44 Mark Spikell and Behrouz Aghevli, “The Workshop Method of Teaching: An Example From the Discipline of Mathematics Education,” Inventio 2, no. 1 (1999).
References
Abras, Chadia, Diane Maloney-Krichmar, and Jenny Preece. “User-Centered Design.” In Bainbridge, W. (ed.) Encyclopedia of Human-Computer Interaction. Thousand Oaks: SAGE Publications, 2004.
Allen, Jamie, Areti Galani, and Kazuhiro Jo. “An Ecology of Practice: Chiptune Marching Band.” In Proceedings of the Seventh ACM Conference on Creativity and Cognition. ACM, 2009, pp. 347–348.
Arnstein, Sherry R. “A Ladder of Citizen Participation.” Journal of the American Institute of Planners 35, no. 4 (1969): 216–224.
Back, Les. The Art of Listening. Oxford: Berg, 2007.
Beech, Dave. “Include Me Out!” Art Monthly, April 2008.
Berry, Josephine. “Everyone Is Not an Artist: Autonomous Art Meets the Neoliberal City.” New Formations 84, no. 84–85 (2015): 20–39.
Bishop, Claire. Artificial Hells: Participatory Art and the Politics of Spectatorship. New York: Verso Books, 2012.
———, ed. Participation. Documents of Contemporary Art. London; Cambridge, MA: Whitechapel; MIT Press, 2006.
Bitton, Joëlle, Andreia Cavaco, Lalya Gaye, and Ben Jones. United We Act: A Scoping Study and a Symposium on Connected Communities. lulu.com, 2011. Accessed September 6, 2016. http://dm.ncl.ac.uk/benjones/files/2012/06/cc_book_download3.pdf
Bitton, Joëlle, Andreia Cavaco, Lalya Gaye, Ben Jones, Graeme Mearns, Ranald Richardson, and Atau Tanaka. “Situating Community through Creative Technologies and Practice.” AHRC Connected Communities Scoping Study, n.d.
Blomberg, Jeanette, Jean Giacomi, Andrea Mosher, and Pat Swenton-Wall. “Ethnographic Field Methods and Their Relation to Design.” Participatory Design: Principles and Practices (1993): 123–155.
Cameron, Stuart, and Jon Coaffee. “Art, Gentrification and Regeneration – From Artist as Pioneer to Public Arts.” European Journal of Housing Policy 5, no. 1 (2005): 39–58.
Caramiaux, Baptiste, Alessandro Altavilla, Scott G. Pobiner, and Atau Tanaka. “Form Follows Sound: Designing Interactions From Sonic Memories.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2015, pp. 3943–3952. http://dl.acm.org/citation.cfm?id=2702515.
Flanagan, John C. “The Critical Incident Technique.” Psychological Bulletin 51, no. 4 (1954): 327.
Franinović, Karmen, and Stefania Serafin. Sonic Interaction Design. Cambridge: MIT Press, 2013.
Gaver, Bill, Tony Dunne, and Elena Pacenti. “Design: Cultural Probes.” Interactions 6, no. 1 (1999): 21–29.
Gibson, James J. The Ecological Approach to Visual Perception: Classic Edition. New York: Psychology Press, 2014.
Hermann, Thomas, and Andy Hunt. “An Introduction to Interactive Sonification.” IEEE Multimedia (2005): 20–24.
Jo, Kazuhiro, Adam Parkinson, and Atau Tanaka. “Workshopping Participation in Music.” Organised Sound 18, no. 3 (2013): 282–291.
Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press, 2005.
Lévi-Strauss, Claude. Structural Anthropology. Vol. 1. New York: Basic Books, 1963.
Mackay, Wendy E. “Using Video to Support Interaction Design.” DVD Tutorial, CHI 2, no. 5 (2002).
Metatla, Oussama, Fiore Martin, Adam Parkinson, Nick Bryan-Kinns, Tony Stockman, and Atau Tanaka. “Audio-Haptic Interfaces for Digital Audio Workstations.” Journal on Multimodal User Interfaces (2016): 1–12.
Muller, Michael J. “Participatory Design: The Third Space in HCI.” Human-Computer Interaction: Development Process 4235 (2003): 165–185.
“NIME 2014.” NIME 2014. Accessed September 7, 2016. www.nime2014.org/.
Parkinson, Adam, David Cameron, and Atau Tanaka. “Haptic Wave: Presenting the Multiple Voices, Artefacts and Materials of a Design Research Project,” 2015.
Polanyi, Michael. The Tacit Dimension. Chicago: University of Chicago Press, 2009.
Rasamimanana, Nicolas, Frédéric Bevilacqua, Julien Bloit, Norbert Schnell, Emmanuel Fléty, Andrea Cera, Uros Petrevski, and Jean-Louis Frechin. “The Urban Musical Game: Using Sport Balls as Musical Interfaces.” In CHI’12 Extended Abstracts on Human Factors in Computing Systems. ACM, 2012, pp. 1027–1030.
Rocchesso, Davide, Pietro Polotti, and Stefano Delle Monache. “Designing Continuous Sonic Interaction.” International Journal of Design 3, no. 3 (2009).
Small, Christopher. Musicking: The Meanings of Performing and Listening. Middletown: Wesleyan University Press, 2011.
Spikell, Mark, and Behrouz Aghevli. “The Workshop Method of Teaching: An Example From the Discipline of Mathematics Education.” Inventio 2, no. 1 (1999).
Spinuzzi, Clay. “The Methodology of Participatory Design.” Technical Communication 52, no. 2 (2005): 163–174.
Tanaka, Atau. “Mapping Out Instruments, Affordances, and Mobiles.” In New Interfaces for Musical Expression (NIME), 2010, pp. 88–93. www.nime.org/proceedings/2010/nime2010_088.pdf.
Tanaka, Atau, and Adam Parkinson. “Haptic Wave: A Cross-Modal Interface for Visually Impaired Audio Producers.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016, pp. 2150–2161. doi:10.1145/2858036.2858304.
Tanaka, Atau, Lalya Gaye, and Ranald Richardson. “Co-Production and Co-Creation: Creative Practice in Social Inclusion.” In Cultural Computing, pp. 169–178. Springer, 2010. http://link.springer.com/chapter/10.1007/978-3-642-15214-6_17.
Varela, Francisco J., Evan Thompson, and Eleanor Rosch. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press, 1991.
Wenger, Etienne. Communities of Practice: Learning, Meaning, and Identity. Cambridge: Cambridge University Press, 1998.
Williams, Amanda M., and Lilly Irani. “There’s Methodology in the Madness: Toward Critical HCI Ethnography.” In CHI’10 Extended Abstracts on Human Factors in Computing Systems. ACM, 2010, pp. 2725–2734.
8
THE AGENCY OF SONIC ART IN CHANGING CLIMATES
Leah Barclay
Overview
Climate change is arguably the most critical issue of the twenty-first century. The evidence is clear, with 97% of scientists agreeing that human influence is the dominant cause of global warming (Cook et al. 2013). However, climate change is not simply a scientific concern; it is a cultural crisis that requires interdisciplinary action and integration with social and political systems to mobilise collective action. This has inspired a global movement of artists leading critical discourse on climate change through transdisciplinary collaborations, public art and community engagement (McKibben 2015). In our visually dominant society, listening to the state of the environment can connect and inspire us at a profound sensory level. Immersive sonic experiences can transport us to a place and time and evoke empathetic and philosophical responses to climate change. Composers and sound artists drawing on environmental field recordings have an unprecedented opportunity to respond to climate change through the artistic and scientific possibilities of sonic art and acoustic ecology. This research explores these opportunities through the development of two large-scale interdisciplinary research projects that position composers and sound artists within multi-platform collaborations working directly with communities. Biosphere Soundscapes and River Listening are introduced through the lens of contemporary acoustic ecology, a socially embedded interdisciplinary field that can inspire communities across the world to listen to the environment. This research aims to demonstrate the value of creativity, the importance of environmental listening and the possibilities of sonic art in addressing the most critical needs of our time (Carlyle and Lane 2013).
Listening to climate change
In December 2015, members of the United Nations Framework Convention on Climate Change (UNFCCC) agreed to strengthen the global response to climate change and accelerate actions to keep the global temperature increase well below 2 degrees Celsius. The Paris Agreement came at a time when climate change is undeniably reaching a global tipping point, with increasing evidence of the catastrophic impacts expected in the future (McNutt 2013). Ocean acidification, rising sea levels, atmospheric temperature increases and diminishing ice have all
been attributed to human influence (Pachauri and Meyer 2014). Scientists continue to urge action to address the dramatic consequences of climate change and the irreversible impacts for global communities and ecosystems. The critical steps towards a sustainable low carbon future have been clearly outlined by scientists and widely championed by public figures, yet public engagement and behavioural change remain a challenge. In order to implement the Paris Agreement, the policies surrounding climate change mitigation and adaptation must look beyond science to facilitate societal and behavioural change. The underpinning causes of climate change are engrained in unsustainable ways of thinking and acting, meaning the ecological crisis is not just scientific but political, social and cultural (Yusoff and Gabrys 2011). Climate change mitigation and adaptation require interdisciplinary approaches that bridge the ethical and societal challenges and reframe the boundaries of sustainability. The arts and humanities are beginning to play a key role in reframing the narrative of climate change, particularly after two decades of scientific consensus have had limited success in public engagement. Nisbet (2009) advocates for carefully researched metaphors and creative examples to trigger a feeling of the personal relevance of climate change. Hoffman (2012) believes greater inclusion of the social sciences in climate change mitigation and adaptation strategies will assist in navigating public engagement and provide insight into the social acceptance of new policies and solutions. Boulton (2016) suggests that the scientific framing of climate change has not accounted for the importance of cultural, ontological and psychological dimensions and draws on interdisciplinary research to suggest that the current framing of global warming exceeds our cognitive and sensory abilities. The public engagement required to successfully implement the Paris Agreement urgently needs new tools to facilitate cultural and societal change at a deeply philosophical and sensory level. While the value of visual art is well established – from large-scale installations igniting the public imagination to data visualisations that reveal new ways of interpreting science (Knebusch 2008) – the interdisciplinary possibilities of sound as a tool for understanding environmental change are yet to be fully explored. Sound has a profound ability to make us feel present and deeply connected to our environment. In our visually dominant society, listening to changing environments and immersive interpretations of climate change can connect us at a deeply philosophical, empathetic and sensory level. At a time when there is a critical need to listen to the environment, composers have an opportunity to engage the public in the complexities of climate change by drawing on sonic material which may expose the state of the environment and provide an immersive and embodied experience to understand the temporal complexities of changing climates. When these experiences are embedded in long-term research, interdisciplinary frameworks and community engagement, possibilities may arise for cultural change at both micro and macro levels. Biosphere Soundscapes and River Listening are two large-scale interdisciplinary projects underpinned by the creative possibilities of acoustic ecology, bioacoustics and the rapidly evolving fields of biology that record environmental patterns and changes through sound.
Both projects are designed to inspire communities across the world to listen to the environment and explore the value of sound as a measure for environmental health. These projects position composers at the core of multi-platform initiatives working directly with scientists and communities in mapping changing soundscapes. Biosphere Soundscapes and River Listening sit at the intersection of art and science, with the recordings providing valuable scientific data for biodiversity analysis and incredible source material for creative works that bring awareness to these environments. These research projects are inherently collaborative and interdisciplinary, designed with the assumption that creative practitioners have a desire and responsibility to contribute towards climate change mitigation and adaptation.
Prior to outlining the development of Biosphere Soundscapes and River Listening, I will introduce my personal background as a composer and contextualise this research in the growing body of artists and researchers working in the field of environmental sound art and acoustic ecology.
Sonic explorations
At the beginning of my professional career as a composer, I was interested in exploring the way in which my artistic practice could contribute towards environmental awareness and engagement. My doctoral research focused on the development and dissemination of original electroacoustic music compositions drawing on environmental field recordings from various parts of the world (Barclay et al. 2014). The central research question investigated the possibilities of electroacoustic music in contributing towards environmental engagement and awareness. These works were composed in cultural immersion by travelling to the locations that inspired the projects and by working intensively within local communities and the environment. The social and cultural context was a vital element in the realisation of each project. I was conscious of the need to avoid the exoticism and exploitation that is often associated with the practice of field recording, and I wanted the resulting compositions to be relevant and meaningful for the collaborating communities. The locations ranged from the centre of the Amazon rainforest to significant rivers in India, Korea, China and Australia. Throughout the research, it became clear that my creative process was influenced and inspired by engaging with the community in which I was composing. Experimenting with different methods of dissemination and community engagement became extremely valuable in formulating preliminary responses to my research intentions. The findings and observations from each project led me to believe that composers had a valuable role to play in environmental awareness and engagement, but it was essential that the creative projects were socially embedded, multi-platform, interdisciplinary and designed with clear community engagement strategies. As a result, I developed the Sonic Ecologies framework, which documents and presents a number of procedures that have become integral to my creative practice (Barclay 2013). My research began by exploring the value of electroacoustic music as a tool for ecological engagement and evolved into a series of multi-platform projects harnessing music composition to raise cultural, social and environmental awareness. My first experiments with electroacoustic music late in my undergraduate studies drew me towards the infinite possibilities of environmental sound. The soundscapes of the natural environment that had inspired my instrumental and vocal music immediately became source material in my compositions as I focused my practice towards electronic music. This transition was shaped by a seminal text, The Language of Electroacoustic Music (1986), which remains one of the most valuable resources in this field. This book encompasses the work of leading academics and composers such as Denis Smalley, Trevor Wishart and David Keane, but it was the chapter by Simon Emmerson that was most influential, particularly when Emmerson spoke of electroacoustic music being the first musical genre to place under the composer’s control ‘an acoustic palette as wide as that of the environment itself’ (Emmerson 1986, 18). While my initial explorations in electroacoustic music opened up a liberating language of infinite possibilities, composing with found sounds was by no means a revolutionary concept; the ideas were explored in Luigi Russolo’s manifesto The Art of Noises in 1913, among many
other examples in the early twentieth century. The proposed use of natural sound in music composition was suggested in John Cage’s famous 1937 lecture, in which he stated:

I believe that the use of noise to make music will continue and increase until we reach a music produced through the aid of electrical instruments which will make available for musical purposes any and all sounds that can be heard.
(Cage 1961)

Mexican composer and music theorist Carlos Chávez shared a similar vision. In his book Toward a New Music: Music and Electricity (Chávez 1937) he spoke about the possibilities of electricity becoming a legitimate art of our era. Australian composer Percy Grainger (1882–1961) also pioneered an innovative approach to creating sound with Free Music, a concept he conceived at an early age when observing the waves on a lake in Melbourne. Free Music was a liberating approach to the constraints of rhythm, pitch and harmony, and he spent a great deal of his career developing complex machines to realise his visions (Balough 1982). Jacques Attali’s (1985) seminal text refers to music as not simply a reflection of culture but a ‘harbinger of change’. He states that ‘For twenty-five centuries, western knowledge has tried to look upon the world. It has failed to understand that the world is not for the beholding. It is for hearing. It is not legible, but audible’ (Attali 1985). This notion resonates strongly with the field of acoustic ecology founded by R. Murray Schafer in the late 1960s. His premise was that we should attempt to hear the acoustic environment as music, and we should take responsibility for its composition (Schafer 1977). Schafer was actively involved in education and has been a strong advocate for integrating listening skills and ‘sonological competence’ into the school curriculum (Wrightson 2000). Schafer’s book The Tuning of the World, published in 1977, remains one of the most valuable resources on acoustic ecology. Schafer launched the World Soundscape Project in the late 1960s as the first major acoustic ecology project at Simon Fraser University in Vancouver along with his founding colleagues, including Hildegard Westerkamp and Barry Truax, who remain the most influential figures in the field today. Acoustic ecology has evolved into an accessible and dynamic interdisciplinary field concerned with the social, ecological and cultural contexts of our sonic environment. It intersects with numerous other disciplines and continues to inform critical discourse on environmental changes in fields including landscape ecology, geography and emerging fields of biology concerned with environmental patterns and changes through sound. In our current state of environmental crisis, accessible biodiversity analysis and community engagement are critical to understanding the rapid ecological changes taking place across the globe. In the last 10 years there has been a strong emergence of non-invasive monitoring involving auditory recordings of the environment. This scientific discipline is often referred to as soundscape ecology and shares many parallels with other fields, including acoustic ecology and bioacoustics, a well-established field of acoustics and biology studying animal communication (Krause 1987, 1993). The term soundscape ecology was introduced by Canadian composer Barry Truax as the ‘study of the effects of the acoustic environment on the physical responses or behaviour of those living in it’ (Truax and Barrett 2011).
The field was proposed to leverage and combine landscape ecology and acoustic ecology and enhance our understanding of anthropogenic impact on ecosystems. American musician and ecologist Bernie Krause adopted the term soundscape ecology to describe his pursuits of recording the natural world. He coined the terms ‘biophony’ to describe biological sounds created by living organisms and ‘geophony’, which refers to nonbiological environmental sound such as rain and wind. In collaboration with a team of scientists,
Krause later introduced the term ‘anthrophony’ to describe anthropogenic sounds created by humans (Pijanowski et al. 2011, Krause 2009). While the term ‘ecoacoustics’ is often interchangeable with acoustic ecology, it has recently been adopted to define a new field of science that studies sound along a broad range of spatial and temporal scales to understand environmental changes (Sueur and Farina 2015). The discipline of ecoacoustics is also leveraging new advances in reliable and affordable audio recorders and the increasing scientific interest in environmental sound as a non-invasive proxy for monitoring environmental changes (Towsey et al. 2014). Ecoacoustics calls for greater collaboration with other disciplines including electronics, remote sensing, big data and social sciences (Sueur and Farina 2015). Soundscape ecology and ecoacoustics are rapidly advancing, particularly with increased engagement from the scientific community in Australia, Europe and North America. These fields played a vital role in the development of Biosphere Soundscapes and River Listening; however, the acoustic metrics and terminology used in these disciplines can be problematic in the context of these interdisciplinary projects. When the health of an environment is measured through the presence of biophony and anthrophony, the resulting data often has negative connotations around the anthrophony. While invasive anthropogenic noise is certainly negative, anthrophony categorisations can be particularly problematic when working in an interdisciplinary context with indigenous communities whose language and music are deeply connected to the temporal and spatial variations of their environments; this perspective has been critical in the development of both Biosphere Soundscapes and River Listening. Indigenous knowledge systems can provide incredible insight into environmental change, particularly from deep listening perspectives. While terminology is being expanded in more recent ecoacoustic studies, including a collaboration with the Gitga’at Nation in Hartley Bay, Canada (Ritts et al. 2016), acoustic ecology remains the most appropriate frame for studying the cultural, social and ecological aspects of the sonic environment. It encompasses these emerging scientific disciplines while being grounded in a significant body of literature and practice that includes social, cultural and scientific perspectives. The broad possibilities of acoustic ecology have naturally attracted critics and a spectrum of contested ideas; some find the discipline restrictive, while others have attempted to reframe acoustic ecology in a contemporary context. The acoustic ecology literature from Schafer and Truax (1999), particularly in the categorisation of hi-fi and lo-fi soundscapes and criticisms of urban noise, can be contentious amongst some artists and scholars. Australian sound artist Jordan Lacey believes we should build a new relationship with urban noise rather than attempt to escape it; we should actively reshape the ‘urban roar’ (Lacey 2016). There are countless sound artists who have worked within this realm since the 1970s; Max Neuhaus’s urban sound installation Times Square (1977), for instance, can be discovered amongst the sensory overload of New York City. Some composers drawing on found sound and environmental field recordings in their compositions have felt alienated from acoustic ecology and preferred to adopt more contemporary terms.
However, throughout my research I continue to advocate for the interdisciplinary potential of acoustic ecology and the opportunities it offers composers, particularly with increased engagement across a wide spectrum of disciplines and the ongoing need for experienced listeners to be active collaborators in research projects. Fortunately, many artists and composers share this ambition, and we are now seeing the emergence of organisations and collectives coming together to share environmental projects and ideas. Most notably, these include organisations such as Ear to the Earth (eartotheearth.org), founded by the president of the Electronic Music Foundation (emf.org), Joel Chadabe, in New York. Chadabe recently stated that the current artistic practices of electroacoustic music
composers are rooted in the idea that new technologies, unlike traditional musical instruments, can produce sounds used to communicate core messages, including information about the state of our environment. He claims that this field is participating in the emergence of a new type of music accessible to anyone, which can be used to communicate ideas that relate more closely to life than those communicated through traditional musical forms. He believes composers need to think of themselves as ‘leaders in a magnificent revolution rather than the defenders of an isolated and besieged avant-garde’ (Chadabe 2011). Argentinean/Canadian composer and academic Dr Ricardo Dal Farra founded the Balance-Unbalance International Conference series in 2010 to explore how artists can participate in the challenges of our ecological crisis. The event inspires creative thinking and transdisciplinary action to create perceptual and pragmatic changes. Balance-Unbalance is not just a conference but the catalyst for new ideas, collaborations and most importantly actions in shaping our collective futures. It is a global initiative designed to harness the talents of innovators working at the forefront of the arts, science and technology to explore transdisciplinary approaches to sustainability. One resulting example is a global sound art competition initiated at Balance-Unbalance 2011 in Montreal which is devoted to the power of organised sound and calls for works related to the effects of climate change and the global environmental crisis (Dal Farra 2014). It was developed in partnership with the Red Cross/Red Crescent Climate Centre and the Electronic Arts Experimentation and Research Centre (CEIArtE-UNTREF) of the National University of Tres de Febrero in Argentina. Each iteration of the competition has responded to a theme proposed by the Red Cross that has been selected to inspire awareness and engagement around pertinent issues that require global attention. For example, the selected works for 2015 included ‘The Rising Forces’ (2015) by Jules Bryant, which draws on a pulsating single oboe note slowly rising in pitch to signify rising sea levels, and Daniel Blinkhorn’s ‘FrostbYte – Red Sound’ (2012), drawing on field recordings from the Arctic region of Raudfjorden, where Blinkhorn describes the reactive terrain of sound and light as a world so ‘finely tuned it responded to every nuance in temperature, no matter how slight’ (Blinkhorn 2012). The project has attracted hundreds of submissions from across the world and is now a core element of the conference program. The sonic art resulting from this initiative has been used by the Red Cross in events, field activities and engagement campaigns. This project showcases the possibilities of aligning the mission of a large-scale humanitarian organisation with the work of socially engaged sound artists. In addition to creating a database of functional creative resources, the sound art competition attracted global attention and encouraged a dialogue around the role of sound and creativity in responding to climate change. Both Chadabe and Dal Farra are active electroacoustic composers who have made a gradual shift to diversify their practice to encompass these environmental engagement activities. Their commitment and passion for environmental issues are increasingly evident through their compositions.
Dal Farra’s ‘Entre mi cielo y tu agua’ (Between My Sky and Your Water), composed in 2007, explores the geography and culture of Nordic and Latin American regions through their relationship with water and climate (Dal Farra 2007). Chadabe’s ‘One World 1’ (2006) juxtaposes field recordings from New York City and New Delhi in a complex interaction of social and cultural soundscapes exploring energy, turbulence, chaos and human interaction to question social and political systems through sound. As the recent documentary Racing Extinction (2015) highlights: if we can bring the sights and sounds of the natural world to wider audiences who would otherwise never think about them, they may be motivated and inspired to alter their habits enough to take action and respond to the ramifications of climate change. Racing Extinction director Louie Psihoyos sees his latest film as a catalyst to create change and inspire people across the world to take action in responding to the mass extinction of endangered species.
Dr Christopher W. Clark, director of the Cornell Lab of Ornithology Bioacoustics Research Program (BRP), features in the opening scenes of Racing Extinction, speaking passionately about the songs of the natural world. His poetic words captured the imaginations of the audience, and statements such as ‘the whole world is singing, but we’ve stopped listening’ produced an audible gasp in the audience at the Racing Extinction premiere I attended in Los Angeles in September 2015. While this documentary targets mainstream audiences, it highlights the increased awareness of environmental sound and another important medium for public engagement. In her New York Times review of the film, Jeannette Catsoulis praises the documentary’s mission and aesthetic and concludes by saying ‘Yet it’s the film’s sounds that really wrench. If you’ve ever wondered what a breaking heart sounds like, it’s right here in the futile warble of the last male of a species of songbird, singing for a mate that will never come’ (Catsoulis 2015; see also www.racingextinction.com). Electroacoustic composers have a profound ability to connect listeners to place through complex explorations of field recordings. Ros Bandt’s Mungo (1992) was composed with source material collected on site at Lake Mungo in Australia using large Aeolian harps installed in the environment and collaborations with the local Aboriginal community. Hildegard Westerkamp’s Into India (Westerkamp 2002) explores a country in which the composer spent substantial time over a 10-year period working closely with communities. Other important examples include David Dunn’s The Lion in Which the Spirits of the Royal Ancestors Make Their Home (1995), set in Zimbabwe; Steven Feld’s Rainforest Soundwalks (2001), in the Bosavi rainforest in Papua New Guinea, and Annea Lockwood’s river sound maps (1989; 2008; 2010). Many of these composers have pursued other activities to draw greater engagement and scholarship around acoustic ecology and sound studies. This diversification of practice remains a critical point in highlighting the agency of sonic art in changing climates. Ros Bandt has published prolifically in the field, including anthologies such as Hearing Places: Sound, Place, Time, Culture (Bandt et al. 2009), which invited 37 international authors to investigate place as acoustic space and was edited in collaboration with geographer Michelle Duffy and historian Dolly MacKinnon. Composer and clarinetist David Rothenberg performs with the sounds of insects, birds and whales, a practice documented in his publications Why Birds Sing (2005), Thousand Mile Whale Song (2008) and Bug Music (2014), which have served as valuable resources for musicians interested in the soundscapes of the natural world. Through projects and publications, British composer Katharine Norman has called for a greater engagement with listening, sound and place. She states, ‘My hope is that through sustained, truly multidisciplinary research our boundaries may continue to shift and extend, so enlarging the places where listening as art takes place’ (Norman 2011). Sound artist Francisco Lopez (franciscolopez.net) has hosted international residencies in remote locations in the Central Amazon Rainforest and South Africa, inviting composers and field recordists to join his expeditions and gain a greater understanding of the aesthetics of field recording and environmental sound.
Soundscape ecologist Bernie Krause established Wild Sanctuary (wildsanctuary.com) to record, research, archive and disseminate the sounds of the natural world. The initiative includes outreach activities, education programs, creative works and publications including Voices of the Wild (2015) and The Great Animal Orchestra (Krause 2012). Bryan Pijanowski and Catherine Guastavino created The Global Sustainable Soundscapes Network (soundscapenetwork.org) in 2011 with a grant from the U.S. National Science Foundation, intended to bring together landscape ecologists, conservation biologists and acoustic ecologists to coordinate international research projects and collaborations. Chicago-based sound artist Eric Leonardson is the current president of the World Forum of Acoustic Ecology (wfae.net) and founded the World Listening Project (worldlisteningproject.org)
in 2008 as an organisation devoted to understanding the world and its natural environment, societies and cultures through the practices of listening and field recording. The most notable initiative the organisation has produced is World Listening Day, an annual global event on July 18 to celebrate the practice of listening and acoustic ecology internationally. The event now attracts hundreds of participants from across the world for collaborative projects, concerts, soundwalks and educational initiatives, all exploring the value of listening (worldlisteningproject.org). World Listening Day also maintains an active social media presence and has acted as a valuable entry point into acoustic ecology for many people. The Deep Listening Institute (deeplistening.org) was established to promote the music and deep listening practice of pioneering composer Pauline Oliveros (1932–2016). The organisation was founded in 1985 as the Pauline Oliveros Foundation and evolved into the Deep Listening Institute in 2005, where it now connects and supports musicians, artists and scientists exploring the sound of the world through interdisciplinary perspectives. Oliveros was profoundly influential in my personal development as a composer, particularly her perspectives on sound, embodiment, listening and tuning to our sonic environment. In 2008, Alaskan-born composer Matthew Burtner created EcoSono (ecosono.com), an activist network designed to advocate for environmental preservation through experimental sound art. Through EcoSono, Burtner promotes ecological education through field institutes for the study of ecoacoustics, environmentalism and music in addition to a wide spectrum of engagement tools such as publications, performances and lectures. Burtner’s creative approach and trajectory with EcoSono resonate strongly with my research and highlight the agency of sonic arts in responding to climate change. Burtner states,

By creating responsible interactions between the natural world and audiences, those involved may come to think in new ways about the environment. They may convince others of the need for sustainable practices, or they may join preservationist societies and activism networks.
(Burtner 2011)

Perhaps the most relevant example for this chapter is David Monacchi’s Fragments of Extinction (fragmentsofextinction.org). Monacchi is a multidisciplinary Italian composer and researcher who is pioneering new ecoacoustic compositional approaches based on 3D soundscape (ambisonic) recordings of ecosystems to foster discussion on the biodiversity crisis. Fragments of Extinction is a multidisciplinary research project that was initiated in 2001 with the intention of recording the world’s undisturbed primary equatorial forests to highlight the disappearing soundscapes of nature (Monacchi 2013). The project has evolved into a non-profit organisation that works in collaboration with artists, scientists and sound engineers to produce immersive installations. The intention of these immersive experiences is to increase public engagement and scientific knowledge of the acoustic biodiversity of equatorial rainforests, with the understanding that the soundscapes provide critical information about the systemic behaviour of these undisturbed ecosystems (Monacchi 2016). The soundscapes are recorded in high-definition ambisonics and ideally diffused in Monacchi’s Eco-acoustic theatre, which enables a 3D reconstruction of the sonic environments.
There is a sense of urgency in Monacchi’s research, as he believes the ongoing ecocide is silencing soundscapes we have not yet heard or recorded and we are losing a ‘sonic heritage of millions of years of evolution’ (Monacchi 2016). These examples highlight composers working across activism and education who are initiating large-scale research projects and interdisciplinary collaborations with the intention of responding to the greatest environmental challenges of our time. While some initiatives highlight
the entrepreneurial capacities of composers, they serve as valuable examples of sonic artists diversifying their practice and acting as catalysts for change. This expansion and diversification is in line with my personal artistic practice, where I have shifted from an internal and often isolated creative process to an expanded awareness and social consciousness in which artistic outcomes have become milestones in large-scale interdisciplinary research projects that have ingrained social purpose and intent within a community and environment. This transition inspired the Sonic Ecologies framework, an adaptable and responsive practice-led research methodology for embedding acoustic ecology projects in multi-platform community engagement and interdisciplinary partnerships to ascertain long-term impact and inspire a culture of listening (Barclay 2013). These creative foundations and my ongoing work in community-embedded acoustic ecology inspired the development of the two projects that will now be discussed. Fundamentally, these projects explore how composers drawing on environmental field recordings have an unprecedented opportunity to respond to climate change through the artistic and scientific possibilities of sonic art and acoustic ecology. This notion is considered along with the premise that a deeper awareness and engagement with the natural environment, and potentially a stronger auditory perception, could result in a greater respect for the environment and, consequently, in behavioural changes in responding to the afflictions associated with climate change. These concepts will be explored individually through Biosphere Soundscapes (biospheresoundscapes.org), launched in the Noosa Biosphere Reserve in Australia in 2012, and River Listening (riverlistening.com), launched along four Australian river systems in 2014 (see Figure 8.1).
Biosphere Soundscapes
Biosphere Soundscapes is a large-scale interdisciplinary project underpinned by the creative possibilities of acoustic ecology and the emerging fields of biology concerned with the study of environmental patterns and changes through sound. This project is designed to inspire communities across the world to listen to the environment and explore the value of sound as a measure for
Figure 8.1 Leah Barclay recording in Noosa Biosphere Reserve Image: Robert Munden
environmental health and ecological engagement in UNESCO biosphere reserves.1 It is delivered through immersive residencies with artists and scientists, research laboratories, intensive masterclasses and a diversity of creative projects spanning four continents. Biosphere Soundscapes draws on indigenous knowledge systems and responsive community engagement to explore the social, cultural and ecological soundscapes of biosphere reserves. The project demonstrates that listening to the environment can reveal the interconnected nature of cultural and biological diversity, in what Timothy Morton (2012) describes as the vast intertangling ‘mesh’ flowing through all dimensions of life. This also resonates with Steven Feld’s concept of acoustemology, exploring sound as a valuable and distinctive medium for knowing the world (Feld 1996). Biosphere reserves are sites recognised under UNESCO’s Man and the Biosphere Program (MAB) to promote innovative approaches to sustainable development. There are currently 669 biosphere reserves in 120 countries comprising terrestrial, marine, freshwater and coastal ecosystems. Each biosphere reserve is designed and managed in a different way, but all seek to reconcile the conservation of biological and cultural diversity. They differ from world heritage sites in that they encourage active community participation and are ideal locations to test and demonstrate innovative approaches to ecosystem monitoring and sustainable development. The MAB Programme was introduced by UNESCO in 1976 and has evolved over the last four decades through strategic plans and collaborative projects. The Action Plan of the Minsk Conference (1983) framed the initial strategy and was followed by the Seville Strategy and the Statutory Framework of the World Network of Biosphere Reserves (1995), the Madrid Action Plan for Biosphere Reserves (2008) and finally the Lima Action Plan (2016), which was introduced at the 4th World Congress of UNESCO Biosphere Reserves hosted in April 2016 in Lima, Peru. The key goals of a biosphere reserve include: encouraging conservation with a balanced focus on cultural and biological diversity, advocating sustainable development and enhancing community resilience and adaptation. Biosphere reserves are designed to engage in innovative research, ecosystem monitoring and learning activities related to conservation and sustainability, and to share knowledge locally, nationally and internationally (UNESCO 1995). Biosphere Soundscapes draws on the inherently interdisciplinary nature of sound to explore cultural and biological diversity through acoustic ecology, accessible audio recording technologies, community capacity building and environmental engagement with local and global communities. Biosphere Soundscapes was conceived in 2011 and officially launched on World Listening Day 2012 in Queensland, Australia, with a field recording expedition in the Noosa Biosphere Reserve, a symposium featuring international sound artists including Ros Bandt, Gerardo Dirié and Daniel Blinkhorn, and a pilot sound map. The project was endorsed by UNESCO in 2013 and is the first international research initiative documenting the changing soundscapes of UNESCO biosphere reserves. Biosphere Soundscapes sits at the intersection of art and science, with the recordings providing valuable scientific data for biodiversity analysis and incredible source material for creative works that bring awareness to these environments.
Biosphere Soundscapes pivots on a network of site-specific acoustic ecology projects embedded in multi-layered community engagement processes within biosphere reserves. Composers, field recordists, scientists and community members in the biosphere reserve can contribute recordings, compositions and soundscapes to a virtual community sound map and collaborate with other locations online (biospheresoundscapes.org). The engagement programs are adaptable and responsive depending on the capacity of the community and accessibility of the environment. The most successful tools throughout the initial phase of research included community soundwalks, participatory field recording sessions, acoustic ecology workshops, and providing access to the appropriate field recording technology for the community to remain engaged in the ongoing process. This was particularly evident with the Noosa Biosphere Reserve and Great Sandy
Biosphere Reserve in Australia, which have been involved since the beginning of the project. The Noosa Biosphere Reserve has an active community and has served as an experimental site for designing and developing each phase of Biosphere Soundscapes. This has included hosting Biosphere Soundscapes masterclasses and field labs delivered by some of the major international figures of sonic art including Francisco Lopez (Spain) and Andrea Polli (USA). The virtual platform, developed in collaboration with the Australian cultural development agency Feral Arts, is designed to host the sound database and showcase outcomes from the interdisciplinary residencies, which are the core activity in implementing this global project. All of the sound, text and imagery from the Biosphere Soundscapes residencies are geolocated in an interactive biosphere reserve map and are also available through a timeline feature to compare the seasonal changes in the soundscapes. This content is all made available and accessible to the local community of the biosphere reserve and in some instances made public online. Each residency involves 10 days of immersive field recording with a selected group of participants, workshops with artists and scientists, and knowledge-sharing experiences with the community. The residencies are designed in consultation with the local community with a focus on collaboration, experimentation and exploration and have a balanced engagement with biological and cultural diversity. Residencies have taken place across the Asia-Pacific region and Latin America, including the Sian Ka’an Biosphere Reserve in the Mexican state of Quintana Roo. The Sian Ka’an Residency was supported by Fonoteca Nacional de Mexico, Mexico’s national sound archive, and delivered in collaboration with Amigos de Sian Ka’an, the local conservation organisation responsible for managing the biosphere reserve (see Figure 8.2). It was supported and promoted by CONANP (The Mexican National Commission for Natural Protected Areas) and acted as a catalyst for bringing together national arts, humanities and conservation organisations that would not usually have the opportunity to interact or collaborate. The residency received applications from across the world ranging from Hollywood film composers interested in expanding their sound design library to anthropologists keen to deepen their engagement with sound studies. The large majority of the applications were from early career researchers who had recently graduated from master’s or doctoral degrees and were interested in shifting towards interdisciplinary approaches. These included biologists entering
Figure 8.2 Biosphere Soundscapes Residency, Sian Ka’an Mexico, 2015
the field of soundscape ecology and composers and field recordists experimenting with the scientific possibilities of their practice. This particular residency application process showed a dramatic increase in participants wanting to explore the interdisciplinary possibilities of sound. The participants were selected based on their creative or scientific backgrounds, capacity to collaborate and potential to make a contribution to the field. The selection panel also made conscious decisions to achieve a balance between disciplines, experience and geographical locations. While participants are not expected to produce an outcome during the residency, they are encouraged to publish their recordings and research, share the results and act as catalysts to engage other biosphere reserves and communities in the intentions of this project. The residency has a structure that incorporates daily field recording, presentations and thematic dialogues, but it allows flexibility for participants to explore the environment from their personal perspectives and disciplines. It is always fascinating to observe these environmental interactions that result in an incredible diversity of recordings. In one instance, Australian sound artist Kate Carr recorded ants crawling on a broken wire with contact microphones at our research station while Mexican bioacoustics researcher Manrico Montero searched for the sounds of specific invertebrates deep in the Sian Ka’an jungle. All the resulting recordings were included in the Fonoteca sound archive and catalogued for the Biosphere Soundscapes community map. In addition to compositions and installations from the participants, the results often inspire a spectrum of other projects, both from the local community and online. Following the residency, French sound artist Félix Blume published his resulting recordings on Freesound, which inspired signal processing engineer Dr Stéphane Pigeon to create a generative online project titled ‘A Bird’s Paradise: Interactive Tropical Birds Soundscape’ (see https://mynoise.net) using the recordings (see Figure 8.3). This piece was published on Pigeon’s website myNoise.net, which attracts hundreds of daily users to listen to generative environmental soundscapes. The scientific outcomes of the residency are being led by Mexican biologist Sandra Gallo-Corona, who was the lead scientist working with participants during the residency. Following the residency she has identified the species in the resulting recordings and assisted in the design of annual monitoring programs for the Sian Ka’an Biosphere Reserve (see Figure 8.4). While this is just one residency example, this case study shows the diversity of outcomes from facilitating collaborations between national organisations to inspiring wider engagement with the soundscapes in virtual environments. The Biosphere Soundscapes residencies are accompanied by workshops, research laboratories, internships and masterclasses which have taken place internationally. The most recent series of masterclasses were developed with support from UNESCO (Jakarta Office) and delivered as blended learning experiences focused on creative approaches to ecosystem monitoring with sound. This series attracted participation from Australia, Mexico, India, Cambodia, Vietnam and the Philippines and highlighted the value of listening to the environment and the unique opportunity for synthesising experiences and sharing knowledge in response to the ramifications of climate change (UNESCO 2016).
The creative outcomes are disseminated at international events and realised as performances, installations and augmented reality experiences. Recent examples include Rainforest Listening (rainforestlistening.com), an interactive augmented reality installation layering the tropical rainforest soundscapes of the Central Amazon Biosphere Reserve in urban environments across the world (see Figure 8.5). Rainforest Listening launched in September 2015 in the centre of Times Square, with an augmented reality soundwalk that mapped the Amazon Rainforest to New York City as a featured event for Climate Week NYC 2015. The sounds of the rainforest grew across New York City, where hundreds of people engaged with this experience in iconic locations
Figure 8.3 ‘A Bird’s Paradise: Interactive Tropical Birds Soundscape’ – Dr Stéphane Pigeon
Figure 8.4 Biologist Dr Sandra Gallo-Corona recording in Sian Ka’an Biosphere Reserve, Mexico
Figure 8.5 Rainforest Listening Installation, South Bank Parklands, Brisbane, Australia, 2016
such as Central Park and Dag Hammarskjold Plaza, the gateway to the United Nations. Rainforest Listening was also featured at COP21 (Conference of the Parties), the United Nations Climate Change Conference in Paris, where over 200 sounds were planted across the city for COP21 delegates to discover throughout the event. The Eiffel Tower and surrounding parklands were transformed into an immersive sonic experience layering rainforest soundscapes over the city. Each observatory platform of the Eiffel Tower was interpreted as one of the four distinct layers of tropical rainforest vegetation through immersive soundscapes and original sonic art created exclusively for COP21. These creative outcomes from Biosphere Soundscapes are critical factors
for international awareness and provide immersive, sensory experiences to reframe climate change and inspire public engagement.

Biosphere Soundscapes was featured during the 4th World Congress of UNESCO Biosphere Reserves in Lima, Peru, in April 2016, which marked the first time acoustic ecology was included on the UN agenda, during a keynote presentation I delivered on creative approaches to ecosystem monitoring and acoustic ecology. This was also a valuable opportunity to contribute towards the Lima Action Plan, the strategic document that will provide the operating framework for the World Network of Biosphere Reserves for 2016–2025. The Lima Action Plan aims to harness lessons learned through sustainability science and test, experiment and disseminate the results globally through open, transparent and accessible platforms. This plan positions biosphere reserves as priority sites and observatories for ecosystem-based climate change action and living laboratories for the sustainable management of biodiversity through interdisciplinary research that embraces new technologies (UNESCO 2016).

Through the implementation of the Lima Action Plan, UNESCO was actively seeking best practice examples of research that demonstrates the potential of biosphere reserves, and Biosphere Soundscapes was well positioned to showcase examples of the initial impact of the project. The project was identified as a valuable initiative that highlights the key strategic focus areas in the 2016–2025 agenda and has the potential to showcase the global importance of exploring biological and cultural diversity through creative projects and soundscapes that represent all major ecosystems of our planet. The project benefits UNESCO and the World Network of Biosphere Reserves in providing a platform that exemplifies the 2016–2025 MAB strategy and engages the communities of biosphere reserves in MAB’s vision of a world in which people are conscious of their common future and act collectively and responsibly to respond to climate change. The international impact of the project has resulted in a dialogue where the outcomes can now be shared directly with UNESCO and influence and inspire how biosphere reserves are managed, monitored and designated. A number of locations in Peru, Colombia and Papua New Guinea have also applied to become biosphere reserves as a result of engaging with Biosphere Soundscapes. Unexpected outcomes such as this open up the possibilities for the project to contribute more broadly towards sustainability.

The future potential of Biosphere Soundscapes revolves around the digital platform and sound map that provide real-time interaction and engagement for anyone with access to an internet-enabled device. This sound map will become a constantly evolving interface drawing attention to the changing conditions of the sonic environment in each given location, from both a cultural and a biological perspective. In the future, this sound database and map will enable live streaming, real-time soundscape mixing and the monitoring of the changing soundscapes of biosphere reserves. It will host soundscapes, creative responses and networked performances exploring the complex temporalities of climate change from a diversity of perspectives. This will also provide access to soundscapes currently at risk, allowing virtual collaborators opportunities to explore the sounds of central Australia, the Amazon rainforest or Kenya’s Mount Elgon, all within an accessible interface.
Biosphere Soundscapes combines art, science, technology and communities to highlight the changing soundscapes of biosphere reserves with the potential to engage a global audience online. The resulting soundscapes continue to provide a valuable scientific database, while at the same time offering infinite possibilities for creative inspiration. Biosphere Soundscapes is an example of combining passionate community engagement with the expansive possibilities of creative technology to inspire a culture of listening and environmental awareness. From local primary school children participating in soundwalks to changes instigated in the strategic and governing
frameworks for UNESCO biosphere reserves, this project explores the micro and macro possibilities of sonic artists acting as change agents (see Figure 8.6). Perhaps the most exciting aspect of this project is that there is no end in sight: it is a constantly expanding process that will continue adapting and evolving with new collaborations, partnerships and discoveries in the years to come.
River Listening

River Listening is an interdisciplinary research project that explores the creative possibilities of aquatic ecoacoustics and the potential for new approaches in the conservation of global river systems. It was developed in collaboration with the Australian Rivers Institute (www.rivers.edu.au) in 2014 across four river systems in Queensland, Australia. The project shares many similarities with Biosphere Soundscapes, particularly in its intention, design and multi-platform community engagement methodologies, but is differentiated through its focus on river systems and freshwater biodiversity. River Listening was developed through a Synapse Residency, a joint initiative of the Australia Council for the Arts and the Australian Network for Art and Technology, which supports innovative research collaborations between artists and scientists in Australia. This allowed me to work directly with leading scientists and develop the foundations of the project in residence at the Australian Rivers Institute in Queensland.

River Listening was designed to inspire community engagement through interactive listening labs, field recordings, sound maps, immersive performances and interactive sound installations. It combines digital technologies, creativity and interdisciplinary design to further the understanding of river health and aquatic biodiversity. Recognising the critical value of river systems, it seeks to engage local and global communities in conservation. As Mark Angelo (1992) believes: ‘Rivers are the arteries of our planet; they are lifelines in the truest sense’.
Figure 8.6 Leah Barclay recording in Noosa Biosphere Reserve Photo: Robert Munden
The core ideas for River Listening emerged out of a body of my creative work spanning 10 years of collaboration with river communities across the world. This body of work begins with instrumental compositions – drawing inspiration from rivers – and develops into immersive sound installations and electroacoustic works using field recordings from river systems. During this time, hydrophone (underwater) recordings became integral to my practice. I am always eager to listen to the acoustic ecology of a river system. While healthy river systems are usually perceived as quiet and tranquil environments, there is often a captivating sound world just beneath the surface (see Figure 8.7).

One of the most critical outcomes of my larger river collaborations, such as Sound Mirrors (2009–2011) and The DAM(N) Project (2011–2013), was realising the opportunities for hydrophone recordings as a measure of river health. The soundscapes of rivers can expose many qualities, including active aquatic life and the impact of anthropogenic sound. From the surface alone, it is virtually impossible to gauge the health of a river system. The impacts of climate change are often visible in terrestrial environments, yet dramatic changes in river systems can go unnoticed simply because so little of the ecosystem is visible. While I could never predict exactly what the river would sound like when I lowered the hydrophones into the water, the resulting recordings were always extremely revealing. The polluted and stagnant waterways were silent, often with a hum of anthropogenic sound from boats and machinery on the riverbanks. The healthy waterways were usually filled with sound ranging from dolphins, fish and turtles to shrimp and insects (riverlistening.com).

I regularly used this material in my compositions and installations, and while some sounds were very easy to identify, others were left ambiguous. This became problematic during exhibitions of Sound Mirrors in 2010, when listeners would ask about the source material and the identification of fish and aquatic insects. As the sound of river systems could potentially be a tool for understanding river health, it became necessary to gain a greater understanding of the scientific literature in this field. Considering the current biodiversity crisis and the dramatic impact on freshwater ecosystems, it would seem logical that aquatic bioacoustics would be an active monitoring tool in river systems. Biodiversity assessment is critical to understanding the rapid ecological changes taking place across the globe. We have seen a dramatic increase in bioacoustics and
Figure 8.7 Leah Barclay recording with Aquarian Audio Hydrophones, 2016
ecoacoustics for non-invasive environmental monitoring, as outlined earlier in this chapter, yet the large majority of this work has focused on terrestrial and marine environments, with very few studies exploring aquatic bioacoustics in freshwater environments. Initial studies in the 1960s focused on identifying and cataloguing soniferous fish sounds (Fish and Mowbray 1970), while more recent research has taken an ecoacoustic approach, studying acoustic communities in freshwater environments (Desjonquères et al. 2015) and exploring acoustic patterns from a holistic perspective that incorporates the physical habitat of the river ecosystem (Tonolla et al. 2011). Although research in this field is expanding, there are still minimal studies in comparison to marine bioacoustics, and extensive research is required to identify and catalogue freshwater species.

I had applied for the Synapse Residency to work with the Australian Rivers Institute primarily due to their emerging research interest in aquatic bioacoustics, which was led by Dr Simon Linke, one of the world’s leading freshwater conservation scientists. Dr Linke’s pioneering work in biomonitoring and river conservation planning has been used by agencies and NGOs from South East Queensland to the Congo, and he was interested in the possibilities of aquatic bioacoustics. I was first introduced to Dr Linke’s work by Dr Toby Gifford, a music technologist and software programmer who shared our interest in aquatic bioacoustics. This marked the beginnings of our River Listening collaboration and a year of experimental research and creative development in Queensland river systems throughout 2014.

River Listening was designed to extend the existing creative work I have done in this area and explore new approaches to the conservation of river systems. The process involved not just recording and composing but collaborating with communities, developing scientific measurement tools, listening to the differences in each river, and responding and adapting to other processes that emerged. Dr Linke believes that classic techniques for measuring aquatic biodiversity (such as electrofishing) are problematic, as they potentially injure the study organism and can be biased as they only provide a brief snapshot at the time of observation rather than a continuous time series. He believes that passive acoustics presents a viable, non-invasive and as yet largely unexplored approach to freshwater ecosystem monitoring.

It should be noted that the practice of recording underwater sounds with hydrophones is not considered new or emerging in artistic disciplines; it has been pioneered by artists and has provided composers with rich source material for decades. Annea Lockwood has been actively recording rivers since the 1960s, beginning with her river archive of field recordings, which evolved into her iconic river sound maps: A Sound Map of the Hudson River (1982), A Sound Map of the Danube (2005) and A Sound Map of the Housatonic River (2010) (see Lockwood 1989, 2004, 2007). The soundscapes of rivers feature prominently in the work of Ros Bandt, particularly in compositions including Voicing the Murray (1996), Blue Gold (2005) and Rivers Talk (2012), which all explore social and political themes in the management of aquatic ecosystems (see Bandt 1996, 2012).
David Monacchi’s Stati d’Acqua (States of Water), composed in 2006, was inspired by the multiple physical transformations of water (such as evaporation and condensation) through processed field recordings from the Tiber River in Rome (Monacchi 2006).

River Listening (see Figure 8.8) was initially developed across four Queensland river systems: the Brisbane River, the Mary River, the Noosa River and the Logan River. The initial phase of the project involved listening labs, field recording, sound mapping, performances and installations to experiment with hydrophonic recording, virtual technologies and community engagement in understanding river health and aquatic biodiversity. During each listening lab we created short compositions and allowed the local community to hear the difference in river systems. Each community was very receptive to the ideas and actively assisted in forming local partnerships to disseminate this project.
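The differences made audible in these listening labs can also be quantified. As a minimal, illustrative sketch only (not the project’s actual analysis pipeline), the following Python fragment computes a crude spectral-entropy index over a hydrophone recording, one of many possible acoustic indices for comparing such recordings; the file names, frame length and choice of index are assumptions for illustration.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def spectral_entropy_index(path, nperseg=4096):
    """Crude 'richness' index for a hydrophone recording: the mean
    normalised spectral entropy across short-time analysis frames."""
    rate, audio = wavfile.read(path)              # placeholder WAV file
    if audio.ndim > 1:                            # mix multichannel to mono
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float64)
    _, _, sxx = spectrogram(audio, fs=rate, nperseg=nperseg)
    sxx = sxx + 1e-12                             # avoid log(0) on silent frames
    p = sxx / sxx.sum(axis=0, keepdims=True)      # per-frame spectral distribution
    entropy = -(p * np.log2(p)).sum(axis=0) / np.log2(p.shape[0])
    return float(entropy.mean())                  # 0 = energy in one bin, 1 = evenly spread

# Hypothetical comparison of two recordings gathered during listening labs:
# print(spectral_entropy_index("mary_river.wav"),
#       spectral_entropy_index("logan_river.wav"))
```

Comparing the returned values for recordings made with similar equipment and gain gives a rough first-pass sense of how evenly acoustic energy is spread across the spectrum in each river, before any species-level identification is attempted.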
Figure 8.8 Leah Barclay, River Listening, 2016 Photo: Robert Munden
Among these partnerships was Tiaro Landcare (maryriverturtle.tiarolandcare.org.au), a small not-for-profit organisation based in rural communities along the iconic Mary River. The organisation has been instrumental in the conservation of the endangered Mary River turtle and comprises some of the most passionate and committed community volunteers I have encountered. Leading the local research team was Marilyn Connell, a turtle specialist with incredible knowledge about the ecosystem of the Mary River. Marilyn was intrigued by our project and was immediately enthralled by hearing the soundscapes beneath the surface of the Mary River. This collaboration has continued to develop, and Tiaro Landcare hosted our first River Listening community performance in late 2014, most likely the only multichannel electronic music performance on the side of the Bruce Highway in the regional town of Tiaro. While some of the local community were noticeably perplexed, others were enthralled and captivated by the soundscape beneath the surface of their river system.

The Queensland river systems share many similarities in terms of climate and biodiversity, so many of the resulting soundscapes certainly share key features. In direct contrast, we launched the pilot for the River Listening sound installation on the Thames in London during the 25th Anniversary of the EVA (Electronic Visualisation and the Arts) London Conference in 2014. Listening to the Thames explored real-time hydrophonics as a means of revealing the hidden world beneath the river surface (Barclay et al. 2014). The hydrophones in the Thames streamed continuously for five days in a physical installation and through a public website online. We used social media, particularly Twitter, to provide insight into the soundscapes and attract online listeners. Many people in London were surprised by the intensity of the sound, which at times could be likened to a busy highway. While the boat traffic made it difficult to hear the presence of fish or aquatic insects, this provided a valuable contrast to our database of Australian rivers and allowed us to demonstrate the dramatic differences in the soundscapes of global river systems.

River Listening has expanded into various other communities across Australia, Europe and North America. In a similar manner to Biosphere Soundscapes, the project has an established interdisciplinary framework but is adaptable and responsive to each community and river system. The scientific grounding of the project has continued to develop, with our team advocating
for passive acoustics as a potentially revolutionary development in freshwater ecology, one that can enable dynamic detection of events and the monitoring of temporal and spatial ecosystem dynamics to inform the conservation and management of global river systems. As this is still an emerging field from a scientific perspective, education and community engagement remain a core focus of River Listening. Many of the leading scholars in freshwater bioacoustics have been actively advocating for education, including Dr Rodney Rountree, a marine biologist and fish ecologist who predicted that ‘with the advent of new acoustic technologies, passive acoustics will become one of the most important and exciting areas of fisheries research in the next decade’ (Rountree et al. 2006). While he has produced some of the most important scientific studies on soniferous fish, he has placed equal importance on public outreach and education, including producing a children’s e-book called Listening to Fish: New Discoveries in Science, available from his website, fishecology.org.

The creative outcomes from River Listening, including installations and performances drawing on our evolving database of hydrophone recordings, remain the core method of public engagement and awareness. As rivers across the world continue to be impacted by human activity and natural disasters, the River Listening project is designed to bring attention to rivers through compositions, sound installations and community engagement while being deeply grounded in the scientific possibilities of passive acoustics and hydrophone recording. As international interest in the possibilities of aquatic bioacoustics expands, there are clear opportunities to harness virtual technologies to develop accessible community engagement around the creative and scientific possibilities of listening to the environment. River Listening is a catalyst for interdisciplinary thinking at a time when the management of aquatic ecosystems is a critical priority and we urgently need new ways to engage communities in river conservation and find techniques that can provide up-to-date and reliable ecological information to decision makers.

The flexible structure of River Listening has allowed me to explore interdisciplinary ideas without restrictions, which has resulted in a wide spectrum of outcomes spanning from collaborations with local primary school children to international research partnerships with conservation organisations. The opportunity to collaborate with some of Australia’s leading scientists has been instrumental in expanding and challenging my creative ideas and has also provided a strong foundation for interdisciplinary research. River Listening is not about art interpreting science or science informing the development of art but about finding a balance between different modes of thinking that can contribute towards conservation and public engagement. This interdisciplinary balance is evident in our creative outcomes, particularly our focus on augmented reality soundwalks, which have become a central tool in public engagement and awareness. The first soundwalk opened on August 27, 2015 on the Noosa River in Queensland, Australia. WIRA – River Listening was designed as an interactive sound installation that reimagines the world beneath the surface of the Noosa River for the international art and ecology event Floating Land at the Noosa Regional Gallery.
WIRA was experienced by walking along the river with a smartphone and listening to content geotagged from Noosa Regional Gallery to the river mouth. As you walked along the riverbank, the sounds of the Noosa River system were layered with sonic art, stories and soundscapes that I had recorded over the previous 10 years through my involvement in the Floating Land festival. Many of these soundscapes included my recordings of the voice of Gubbi Gubbi artist Lyndon Davis speaking about Gubbi Gubbi language, the history of the Noosa River and his approach to creative collaborations. When experiencing WIRA on location, these geolocated soundscapes were layered with binaural and hydrophone recordings and live streams of the Noosa River. These recordings and streams evolved and adapted based on the conditions of the Noosa River, meaning every walk was a unique experience.
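Mechanically, a geotagged soundwalk of this kind reduces to a proximity test between the listener’s GPS position and each tagged sound. The sketch below is an illustration only: the zone names, coordinates and trigger radii are invented rather than taken from WIRA, and the haversine formula stands in for whatever positioning logic the installation actually used.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geotagged zones: (name, latitude, longitude, trigger radius in metres)
ZONES = [
    ("gallery_intro",   -26.3930, 153.0910, 40),
    ("hydrophone_live", -26.3905, 153.0932, 60),
    ("river_mouth",     -26.3818, 153.0969, 80),
]

def audible_zones(listener_lat, listener_lon):
    """Return the zones whose sounds should currently be playing."""
    return [name for name, lat, lon, radius in ZONES
            if haversine_m(listener_lat, listener_lon, lat, lon) <= radius]

# Example: a listener standing near the second zone.
print(audible_zones(-26.3906, 153.0931))
```

In practice the returned zone names would be handed to an audio engine that fades the corresponding layers in and out as the listener moves along the riverbank.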
It was essential that WIRA was accessible for the local community, so those without access to a smartphone could also listen inside Noosa Regional Gallery at the WIRA listening station. I also created an accessible version online (leahbarclay.com/wira) with selected compositions to allow community members who were unable to walk the distance the opportunity to listen from any location. This webpage also includes video documentation of the project and hosted the live streams during the exhibition in 2015.

While many consider mobile technologies key factors in our disconnection from the environment, particularly amongst the younger generations, this project explored the possibilities of repurposing these technologies to reconnect us to the environment and facilitate collaborations that showcase ecological systems through accessible creative technology. WIRA allows communities of listeners to hear sounds beneath the surface of a river that they would not usually think about. This built on my research and creative practice in GPS soundwalks dating back to 2009, but the interdisciplinary focus of River Listening allowed me to expand the possibilities of environmental interaction and real-time engagement.

As WIRA stretches towards the river mouth, looking out towards the ocean, the voices of indigenous communities in Vanuatu can be heard across the surface of the water. These include the rich soundscapes of the Leweton Cultural Group and Vanuatu Women’s Water Music, who are struggling with ways to preserve their cultural knowledge. Sandy Sur, the leader of the Leweton Cultural Group, visited Noosa and listened to the sounds of his communities for the first time along the Noosa River when the installation opened at Floating Land. He stood in silence at the river mouth and looked out towards Vanuatu, listening intently. He believed this technology could be powerful in showcasing his culture and bringing awareness to the Pacific Islands that are experiencing the true ramifications of climate change. These island communities will be among the first climate refugees and are at risk of losing not just their homes but also the cultural knowledge systems inherently attached to local environments.

Accessible mobile technologies present another important tool for public engagement around changing soundscapes and offer valuable artistic and scientific possibilities: from immersive compositions tracing cultural knowledge along a riverbank to dynamic algorithms that can identify soniferous fish and insects. The projects introduced in this chapter are designed to inspire environmental stewardship and connect communities through the interdisciplinary possibilities of listening to the environment. While this might seem far removed from what we traditionally associate with being a composer of electronic music, it has been a natural and instinctive process with the same intention as many of my other compositions. I love the inherently interdisciplinary nature of sound and its ability to create a sense of wonder and curiosity, particularly in young people. It is this generation who will experience the true ramifications of climate change, and I feel it is critical that these projects are engaging and relevant to them.
Biosphere Soundscapes and River Listening are designed as participatory, collaborative endeavours that provide a platform for artists and scientists to work together. While I am very active as a composer in leading these initiatives, they are designed to inspire others to collectively explore the value of sound in understanding the environment. Biosphere Soundscapes and River Listening (Figure 8.9) demonstrate that there are clear opportunities for composers and sound artists to facilitate collaborations that bridge the divide between art and science. This is a call to action for artists to engage in participatory projects, whether through field recording, art-science collaborations, community engagement, education or the creation and dissemination of sonic art that inspires engagement around the changing soundscapes of global ecosystems. I truly believe sonic artists have an important role to play at this critical time in history. As Canadian philosopher Marshall McLuhan said, ‘I think of art, at its most significant, as a DEW
Figure 8.9 Leah Barclay, River Listening, 2016 Photo: Robert Munden
line, a Distant Early Warning system that can always be relied on to tell the old culture what is beginning to happen to it’ (McLuhan 1994).
Note
1 See UNESCO website ‘Ecological Sciences for Sustainable Development’ www.unesco.org/new/en/natural-sciences/environment/ecological-sciences/biosphere-reserves/ Accessed 23 February 2018.
References
Angelo, Mark. “Rivers Are the Arteries of Our Planet; They Are Lifelines in the Truest Sense.” (1992). http://worldriversday.com/ Accessed October 1, 2016.
Attali, Jacques. Noise: The Political Economy of Music. Minneapolis: University of Minnesota Press, 1985.
Balough, Teresa, ed. A Musical Genius From Australia. Perth: University of Western Australia, 1982.
Bandt, Ros. Voicing the Murray, Mildura Art Gallery, 1996 Mildura Festival, Australia, 1996.
———. Blue Gold, Hearing Places, 2012, CD.
———. Rivers Talk, Hearing Places, 2012, CD.
Bandt, Ros, Michelle Duffy, and Dolly MacKinnon. Hearing Places: Sound, Place, Time and Culture. Newcastle: Cambridge Scholars Publishing, 2009.
Barclay, Leah. “Biosphere Soundscapes: The Rainforests of Brazil to the Coastlines of Australia.” Paper presented at the 2013 International Computer Music Conference, Perth, Western Australia, August 12–16, 2013.
———. “River Listening.” In Environmental Sound Artists, edited by Frederick Bianchi and V. J. Manzo, 127–137. New York: Oxford University Press, 2016.
———. “WIRA River Listening.” Accessed July 15, 2016. http://leahbarclay.com/portfolio_page/wira/
Barclay, Leah, and the Australian Rivers Institute. “River Listening.” Accessed October 1, 2015. http://riverlistening.com
Barclay, Leah, T. Gifford, and S. Linke. “Listening to the Thames.” Paper presented at the Electronic Visualisation and the Arts Conference, London, UK, July 8–10, 2014.
———. “River Listening.” Paper presented at Invisible Places, Sounding Cities: Sound, Urbanism and Sense of Place, Viseu, Portugal, July 18–20, 2014.
Biosphere Soundscapes. Accessed July 15, 2016. www.biospheresoundscapes.org
Blinkhorn, Daniel. “Frostbyte.” (2012). Australian Music Centre. www.australianmusiccentre.com.au/work/blinkhorn-daniel-frostbyte-red-sound Accessed August 3, 2016.
Boulton, Elizabeth. “Climate Change as a ‘Hyperobject’: A Critical Review of Timothy Morton’s Reframing Narrative.” Wiley Interdisciplinary Reviews: Climate Change 7, no. 5 (2016): 772–785. Accessed September 19, 2016. doi:10.1002/wcc.410
Burtner, Matthew. “EcoSono: Adventures in Interactive Ecoacoustics in the World.” Organised Sound 16, no. 3 (2011): 234–244. doi:10.1017/S1355771811000240
Cage, John. “The Future of Music: Credo.” In Silence. Middletown: Wesleyan University Press, 1961.
Carlyle, Angus, and Cathy Lane, eds. On Listening. London: CRiSAP, 2013.
Catsoulis, Jeanette. “Review: ‘Racing Extinction’ Charts the Slaughter of Vital Species.” New York Times, September 17, 2015. www.nytimes.com/2015/09/18/movies/review-racing-extinction-charts-the-slaughter-of-vital-species.html?_r=0
Chadabe, Joel. “A Call for Avant-Garde Composers to Make Their Work Known to a Larger Public.” Musicworks 111, no. 6 (2011).
———. “One World.” Ear to the Earth. 2006. http://eartotheearth.org/2015/02/joel-chadabe/ Accessed October 1, 2015.
Chávez, Carlos. Toward a New Music: Music and Electricity. Translated by Herbert Weinstock. New York: Da Capo Press, 1937.
Cook, John, and Peter Jacobs. “Scientists Are From Mars, Laypeople Are From Venus: An Evidence-Based Rationale for Communicating the Consensus on Climate.” Reports of the National Center for Science Education 34 (2014). Accessed September 3, 2016. doi:10.6084/M9.FIGSHARE.1534562
Cook, John, Dana Nuccitelli, Sarah A. Green, Mark Richardson, Bärbel Winkler, Rob Painting, Robert Way, Peter Jacobs, and Andrew Skuce. “Quantifying the Consensus on Anthropogenic Global Warming in the Scientific Literature.” Environmental Research Letters 8, no. 2 (2013). Accessed September 3, 2016. http://iopscience.iop.org/article/10.1088/1748-9326/8/2/024024/meta.
Dal Farra, Ricardo. “Between My Sky and Your Water.” 2007. Accessed May 15, 2015. https://soundcloud.com/ricardo-dal-farrra/entre-mi-cielo-y-tu-agua.
———. “Balance-Unbalance: Art and Environmental Crisis.” Leonardo 47, no. 5 (2014): 490–490. Accessed October 1, 2015. doi:10.1162/LEON_a_00815
Desjonquères, Camille, Fanny Rybak, Marion Depraetere, Amandine Gasc, Isabelle Le Viol, Sandrine Pavoine, and Jérôme Sueur. “First Description of Underwater Acoustic Diversity in Three Temperate Ponds.” PeerJ 3 (2015). https://doi.org/10.7717/peerj.1393
Dunn, David. The Lion in Which the Spirits of the Royal Ancestors Make Their Home, EarthEar, 1995, CD.
Emmerson, Simon. “The Relation of Language to Materials.” In The Language of Electroacoustic Music, edited by Simon Emmerson, 17–39. Basingstoke, UK: The Macmillan Press, 1986.
Feld, Steven. “Waterfalls of Song: An Acoustemology of Place Resounding in Bosavi, Papua New Guinea.” In Senses of Place, edited by Steven Feld and Keith H. Basso, 91–135. New Mexico: School of American Research Press, 1996.
———. Rainforest Soundwalks, EarthEar, 2001, CD.
Fish, Marie Poland, and William H. Mowbray. Sounds of Western North Atlantic Fishes: A Reference File of Biological Underwater Sounds. London: Johns Hopkins Press, 1970.
Hoffman, Andrew, and P. Devereaux Jennings. “The Social and Psychological Foundations of Climate Change.” Solutions 3, no. 4 (2012): 58–65. Accessed September 8, 2016. www.thesolutionsjournal.org/node/1130
Lacey, Jordan. Sonic Rapture. Sydney: Bloomsbury, 2016.
Krause, Bernie. “Bioacoustics, Habitat Ambience in Ecological Balance.” Whole Earth Review 57 (1987): 14–18.
———. “The Niche Hypothesis: A Hidden Symphony of Animal Sounds, the Origins of Musical Expression and the Health of Habitats.” Explorers Journal 71, no. 4 (1993): 156–160.
———. “Where the Sounds Live.” In The Book of Music and Nature, edited by David Rothenberg and M. Ulvaeus, 215–213. Middletown: Wesleyan University Press, 2009.
———. The Great Animal Orchestra. New York: Little, Brown and Company, 2012.
Knebusch, J. “Art and Climate (Change) Perception: Outline of a Phenomenology of Climate.” In Sustainability: A New Frontier for the Arts and Cultures, 242–261. Frankfurt am Main: Verlag für Akademische Schriften, 2008.
LaBelle, Brandon. Acoustic Territories: Sound Culture and Everyday Life. London: Continuum, 2010.
Lockwood, Annea. A Sound Map of the Hudson River, Lovely Music, 1989, CD.
———. “Sound Mapping the Danube River From the Black Forest to the Black Sea: Progress Report, 2001–2003.” Soundscape: The Journal of Acoustic Ecology 5, no. 1 (2004): 32–34.
———. “What Is a River?” Soundscape: The Journal of Acoustic Ecology 7, no. 1 (2007): 43–44.
———. A Sound Map of the Danube, Lovely Music, 2008, CD.
———. A Sound Map of the Housatonic River, 2010. www.annealockwood.com/compositions/housatonic.htm
McKibben, Bill. “Four Years After My Pleading Essay, Climate Art Is Hot, in Art in a Changing Climate.” Grist. Accessed July 11, 2015. http://grist.org/article/2009-08-05-essay-climate-art-update-bill-mckibben/
McLuhan, Marshall. Understanding Media: The Extensions of Man. Cambridge: MIT Press, 1994.
McNutt, Marcia. “Climate Change Impacts.” Science 341, no. 6145 (2013): 435. Accessed August 26, 2016. doi:10.1126/science.1243256
Monacchi, David. Stati d’Acqua (States of Water), 2006.
———. “Fragments of Extinction – an Eco-Acoustic Music Project on Primary Rainforest Biodiversity.” Leonardo Music Journal 23 (2013): 23–25.
———. “A Philosophy of Eco-Acoustics in the Interdisciplinary Project Fragments of Extinction.” In Environmental Sound Artists, edited by Frederick Bianchi and V. J. Manzo, 159–168. New York: Oxford University Press, 2016.
Morton, Timothy. The Ecological Thought. Cambridge: Harvard University Press, 2012.
Nisbet, Matthew C. “Communicating Climate Change: Why Frames Matter for Public Engagement.” Environment: Science and Policy for Sustainable Development 51, no. 2 (2009): 12–23. Accessed September 15, 2016. doi:10.3200/ENVT.51.2.12–23
Norman, Katharine. “Editorial.” Organised Sound 16, no. 3 (2011): 203–205. doi:10.1017/S1355771811000197
Pachauri, R. K., and L. A. Meyer, eds. “IPCC, 2014: Climate Change 2014: Synthesis Report.” Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, IPCC, Geneva, Switzerland (2014): 151.
Pijanowski, Bryan C., Almo Farina, Stuart H. Gage, Sarah L. Dumyahn, and Bernie Krause. “What Is Soundscape Ecology? An Introduction and Overview of an Emerging New Science.” Landscape Ecology 26 (2011): 1213–1232. Accessed October 1, 2015. doi:10.1007/s10980-011-9600-8
Red Cross/Red Crescent Climate Centre and Centro de Experimentación e Investigación en Artes Electrónicas, Universidad Nacional de Tres de Febrero. “art! ⋈ climate/arte! ⋈ clima.” Accessed October 29, 2014. https://soundcloud.com/ceiarte
Ritts, Max, Stuart H. Gage, Chris R. Picard, Ethan Dundas, and Steven Dundas. “Collaborative Research Praxis to Establish Baseline Ecoacoustics Conditions in Gitga’at Territory.” Global Ecology and Conservation 7 (2016): 25–38. doi:10.1016/j.gecco.2016.04.002
Rothenberg, David. Why Birds Sing. New York: Basic Books, 2005.
———. Thousand Mile Song: Whale Music in a Sea of Sound. New York: Perseus Books Group, 2008.
———. Bug Music: How Insects Gave Us Rhythm and Noise. New York: Picador, 2014.
Rountree, Rodney A., R. Grant Gilmore, Clifford A. Goudey, Anthony D. Hawkins, Joseph J. Luczkovich, and David A. Mann. “Listening to Fish: Applications of Passive Acoustics to Fisheries Science.” Fisheries 31, no. 9 (2006): 433–446.
Schafer, R. Murray. The Tuning of the World. San Diego: Random House Inc, 1977.
Sueur, Jérôme, and Almo Farina. “Ecoacoustics: The Ecological Investigation and Interpretation of Environmental Sound.” Biosemiotics 8, no. 3 (2015): 493–502. Accessed July 11, 2016. doi:10.1007/s12304-015-9248-x
Tonolla, Diego, Mark S. Lorang, Kurt Heutschi, Chris C. Gotschalk, and Klement Tockner. “Characterization of Spatial Heterogeneity in Underwater Soundscapes at the River Segment Scale.” Limnology and Oceanography 56, no. 6 (2011): 2319–2333.
Towsey, Michael, Stuart Parsons, and Jérôme Sueur. “Ecology and Acoustics at a Large Scale.” Ecological Informatics 21 (2014): 1–3.
Truax, Barry. Handbook for Acoustic Ecology. Somerville: Cambridge Street Publishing, 1999. CD-ROM Edition.
Truax, Barry, and Gary W. Barrett. “Soundscape in a Context of Acoustic and Landscape Ecology.” Landscape Ecology 26, no. 9 (2011): 1201–1207. doi:10.1007/s10980-011-9644-9
UNESCO. Biosphere Reserves: The Seville Strategy and the Statutory Framework of the World Network. Paris: UNESCO, 1995.
———. “Lima Action Plan.” As endorsed by the 4th World Congress of Biosphere Reserves, Lima, Peru, March 17, 2016. www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/SC/pdf/Lima_Action_Plan_en_final.pdf
Westerkamp, Hildegard. Into India, Earsay Frace, 2002, CD.
Wrightson, Kendall. “An Introduction to Acoustic Ecology.” Soundscape: The Journal of Acoustic Ecology 1, no. 1 (2000): 10–13.
Yusoff, Kathryn, and Jennifer Gabrys. “Climate Change and the Imagination.” Wiley Interdisciplinary Reviews: Climate Change 2, no. 14 (2011): 516–534. Accessed September 17, 2016. doi:10.1002/wcc.117
9
TUNING AND METAGESTURE AFTER NEW NATURES
Sally Jane Norman
Introduction

Tuning is key to musicking.1 The word ‘tune’, an unexplained fourteenth-century variant of ‘tone’ that from the fifteenth century designates the state of being in proper pitch, today has multiple practical and metaphorical meanings. We tune in and out, we tune machines as well as instruments, and fine-tune to adapt ourselves and our systems. R. Murray Schafer’s Tuning of the World (1977), dealing with ‘sounds that matter’ (p. 12), sets out a framework that widens musicking to the study of soundscapes, opening up new fields such as ecoacoustics. Richard Coyne’s Tuning of Place (2010) studies the social synchronisation and calibration effects of pervasive, interconnected media.

This chapter is focussed on the tuning demands of our transformed instrumental practices and technological environments. Radically extended conceptions of scale and reach require new tunings and metagestures – gestures generated and relayed through digital prosthetics (Bec 2015).2 In turn, the evolving dynamics of tuning and metagesture reshape our creative practices. Beyond pragmatic implementations, tuning and the yearnings of metagesture are also volitional forces. They drive efforts to explore, recognise and respect nuances, to savour and value difference as a cultural strength and to non-invasively discover new natures (Leroi-Gourhan 1993). Acts of tuning and metagesture are deeply ethical.

In musicking we strive to tune instruments so that they resonate in certain ways, to accord sounds with other sounds and with perceived or imagined phenomena, to synchronise with other agents making or listening to sounds. Tuning engages sensory and cognitive experience at different scales: today we continue to discern melodic signatures produced dozens of miles away with alphorns derived from Neolithic trumpets (Avanzo et al. 2010)3 whilst being able to enjoy globally networked performances (as in Oliveros’s Deep Listening)4 or musicians far from our planet (as in Wally Schirra and Thomas Stafford’s Jingle Bells rendering from Gemini 6 in 1965).5 We regularly come across evidence of our ancestors’ experiments with instruments and tunings: Aurignacian bird bone and mammoth ivory flutes dating back 43,000 years, found in the Geissenkloesterle Cave in the German Swabian Jura, have renewed speculations about prehistoric culture, its objects and the environments in which they were effectively used (Higham et al. 2012).

Forensic work on sounding bodies and resounding spaces shows how tightly tuning behaviours are caught up in the acoustics of musicking venues – dedicated and custom built, requisitioned or improvised. Performative acoustics of the Oracle Room of the Hypogeum
Hal-Saflieni in Malta (3,000–2,500 BC), where a voice speaking at a fundamental of around 110 Hz produces resonances said to vibrate other minds, exemplifies this conjoining of physical and intangible symbolic affordances (the Hypogeum houses the remains of over 7,000 human bodies) (Debertolis and Bisconti 2013). Social instrumentalisation inherent to the design of cultural edifices drives their efficiency as political tuning systems, European live arts venues offering striking examples. Enthusiasts of oral expressivity denounced oculocentric trends in late eighteenth-century performance houses: Drury Lane and Covent Garden refurbishments turned these upscaled, profit-seeking enterprises into ‘theatres for spectators rather than playhouses for hearers’ (Richard Cumberland, 1806, cited in Nagler 1959, p. 408). Nineteenth-century musicking venues were characterised by the audience silence that ‘reined in the concerts of the bourgeoisie, who affirmed thereby their submission to the artificialized spectacle of harmony – master and slave, the rule governing the symbolic game of their domination’ (Attali 1985, p. 47).6

Technical and architectural affordances are integral to the development of prosthetics: together with spectacles and opera glasses, hearing aids and head-mounted displays, they enable us to weave cognitive, experiential and social links to our environments, constituting what Robert Innis calls exosomatic organs (Innis 2002). This mass of related, often intertwined technical artefacts that extend, substitute and compensate for natural powers of the human body (e.g. microscopes, computers, languages, weaving looms, airplanes, institutions) ‘have their own “trajectories” – dynamic logics or vectorial paths – and define and predefine the grounds for the historical variability of consciousness, and forms of perception and apprehension’ (Innis 1984, p. 68). In their co-evolution with perceptual schemata and motor skills, exosomatic organ-spaces construct as much as they construe what Innis (like Merleau-Ponty, Polanyi and others before him) denotes as the world at the end of the cane: material qualities of the blind man’s cane – rigidity, weight, texture – serve as both probe and filter.

Decried by those who see them as dangerously outstripping our biological rhythms, exosomatic systems are defended by those who insist that we are drawn to defying constraints – indeed, who see this attraction as a defining characteristic of our species. Yet debate about our very identity as a species leads to interrogation about what kinds and scales of experience remain meaningful, as we strive to reach beyond familiar domains. On the one hand, exosomatic systems allow us to access forms of biological and cosmological existence that extend our sense of the intelligible universe we are part of and depend on. On the other hand, if existence ‘seamlessly articulated with intelligent machines’ warrants our positioning as posthuman (Hayles 1999, p. 3), this in turn raises questions as to how we might coherently articulate experience and cognition characteristic of our posthuman standing. These are questions this chapter attempts to address, drawing on experiments in tuning that are motivated by or have clear ramifications for musicking and broader artistic practices.
It looks at relations between our physiological and technical co-evolution, in turn influenced by the wider social organisations in which it takes place, considering the three kinds of essential organs implicated in the performing arts – physiological, technical, social – as constitutive of a ‘general organology’ (Stiegler 1998, 2012). In this context, tuning to different exosomatic systems demands different kinds of metagesture, i.e. gesture that is generated, relayed and amplified by technological prostheses, encompassing actual and potential behavioural (reflexive), fabricated (artistic) and communicational (semiotic) activity.7

The starting point of this text is material: tuning and gesture, resonance and reach, gesture and metagesture reference physical events. Vibrations of sound waves, like motor forces causing gesture, involve substantial bodies and dynamics. Even if compositional processes forgo physical sounding objects as in much electronic and digital sound art, musicking nonetheless requires
our engagement as more-or-less consciously entrained and embodied receivers in the physical world, who can but relate this experience to gestural dynamics or surrogacy (Smalley 1986, pp. 82–83, also Smalley 1997).
Ars imitatur naturam in sua operatione / art imitates nature in its workings
Thomas Aquinas, Summa theologiae8

How we culturally value sound affects the technologies we use to deal with it: clearly real-world provenance may be considered determinant, or priority may instead be given to transcendent qualities associated with electronic or digital synthesis and processing. In other words, sound objects may be valued for their obvious attachment to ‘real’ sources (Smalley’s ‘source bonding’)9 or for acousmatic values generated by calculated and/or algorithmic processes. The obvious shortcoming of oppositional framings like these is that they undermine crucially mixed approaches and the complexities of jointly evolving perceptive and technical apparatus: something apprehended via one conceptual and instrumental configuration might look totally different through another. Distinct historical and cultural settings harbour divergences which are in turn subject to transformations: our understanding of the rainbow – which for Christian fundamentalists signifies God’s post-flood promise to never again drown the earth – and of related principles of light composition and decomposition has been steadily transformed by optical experiments with instruments refined in the course of many centuries.

The mediaeval theory of art as expounded by Aquinas was a theory of human technology seen as an extension of nature (Eco 1988). Art’s status as derivative and accidental compared with the substantiality of ontologically prior natural substances – divine creation preceding human production – is what endows it with so-called imitative qualities, albeit as a powerful combinatorial force: ‘There is an activity in the soul of man which, by separating and joining, forms different images of things, even of things not received from the senses’ (Aquinas cited in Eco 1988, p. 172). Eco illustrates this statement by referring to the architect or builder of a house:

he does not extract the idea of a house from some internal store of ideas, by means of divine illumination, or from a hyperuranic source. Instead, his experience enables him to conceive of the possibility of something not given in nature, but which can be realized through the use of natural objects and through constructional activities analogous to those of nature.
(Eco 1988, p. 172)

This is more generally the case with all human tool development: constructional activities analogous to those of nature in its workings frame the nature they explore to produce insights that prompt the invention of tools to explore freshly disclosed phenomena, and so it goes on in an endlessly spiralling process. Each probe-filter shapes and differently reveals the material it investigates, yielding sometimes irreconcilable visions (as in Feynman’s double-slit experiment). Intertwined practices and situations underpinning these visions resist facile, dualism-prone labeling – as natural, cultural, technological, human, non- or posthuman – making nature in its workings particularly hard to define when it comes to arts and craftsmanship (Aquinas’s artifex groups poets and painters with utilitarian workers like blacksmiths and sheepshearers). While artificially or artistically produced forms draw and depend on pre-existing concrete reality and experience, reality and experience are themselves transformed by the means we invent
to explore and affirm them. Phenomena identified as workings of nature thus lend themselves to very different readings, thence to different kinds of artistic synthesis.

Music cannot be boiled down to a well-defined language, nor can it thus be coded merely by usage. Music is always in the making, always groping its way through some frail and mysterious passage – and a very strange one it is – between nature and culture.
(Schaeffer 1967, p. 11)

Electronic music offers a rich vantage point for seeing how ‘nature in its workings’ might be grasped and artistically modeled by means of our extended exosomatic organs, given the variation in types of musicking events and in the kinds of links they assert with their wider environments (however ephemeral such links may be within an evolving legacy). These variations underpin cultural visions and practices. For example, while it offers a means of classification in theory, Schaeffer’s quest for a morphological and typological description of sounds abstracted from their modes and moments of production is thwarted in practice by our prior awareness of real sounds in changing contexts that play into our hearing, thence into our descriptive languages. Harvesting ‘concrete sound’, regardless of its origins, in order to abstract its potential musical values appears as a salutary alternative to mid-twentieth-century compositional strategies that noted musical ideas by means of music theory symbols, then entrusted their concrete production to familiar instruments. But decades of experimentation, expectations and possibilities that have built on openings created by Schaeffer and others confirm the essentially historical and transient – and all the more impressively radical – nature of this groping passage between nature and culture.

François Bayle’s pioneering speleolithophonic10 work exemplifies changes in the status of sound objects due to evolving exosomatic affordances – in this case, natural found objects that enjoy specific transcontextual mobility.11 An acousmatic pioneer, Schaeffer’s collaborator and successor as director of the Groupe de Recherches Musicales from 1966, Bayle was inspired by his discovery of the Jeïta Cave in Lebanon in 1968. Jeïta (ou murmure des eaux) (1970) is a creation based on recordings – natural water sounds, use of speleothems as lithophones, casual music making and ambient noise of local workers, and even sounds from adjoining plumbing facilities adjudged compositionally valuable – initially made for an electroacoustic piece commissioned to open the cave’s upper gallery (inaugurated in January 1969).12 The following year, Bayle produced 17 studies based on dynamic patterns identified in the original materials (all but two sketches feature sounds recorded in the actual cave), using acousmatic compositional principles and the GRM’s prototype mixing desk and synthesiser.
Coupigny’s synthesiser, its 20 generators linked via a pin connector matrix, allowed modeling of broad sonic envelopes and ‘typo-morphologies’ requiring the composer’s close listening, in keeping with Schaeffer’s insistence on the importance of human perception (cf. Teruggi 2007). With Coupigny’s system, complex sounds could be readily generated and controlled globally, as morphological phenomena lending themselves to the Schaefferian ‘making through listening’ approach.13 Studio 54 allowed a single user to perform tasks which, according to conventionally hierarchised (and ardently union-defended) roles, previously required the separate skills of a composer, desk technician and studio manager. The new situation, akin to that of the future garage- or bedroom-based creative industries practitioner as much as the traditional lone composer, was a revolutionary social aspect of the GRM’s general organology: artist-technicians could explore sounds independently, in freely looped listening, learning and shaping activity. Implementing the Schaefferian making-through-listening approach and acousmatic morphoconcepts that yoke together gesture/matter/instrument, Jeïta is the product of constructional
activities that appear to be analogous to those of nature. While recognising that this work qualified as concrete by virtue of its operating mode and resources, Bayle adamantly defended his search for abstract sound organisation principles, devoid of causal or narrative frameworks.14 With hindsight, this comes across forcefully in his notes for Jeïta Retour (1999):

With this music (from the 1970s) I had my heart set on starting an ‘abstract’ method – neither causal nor narrative – of dealing with sound organisation. In arranging displays of energies, it seems to me that I gave new prospects to the idea of development: the acousmatic horizon opened up, onto a music of harmonized forms and movements. This specific cave therefore became the symbolic one where, sheltered from chance and time, nature labours to create innumerable models.

Sounds from the limestone cave were generated, relayed and amplified by a range of technical systems – exosomatic organs – operating at multiple levels. These organs, or technical extensions of our auditory senses, were operational as the instruments and microphones which were used to make the initial recordings, as the synthesiser and mixing desk used to remediate the resultant materials in the GRM facilities, then finally as the amplifier and speaker systems used to deploy the opus thus composed in the original site and elsewhere. In terms of institutional implications, in addition to Studio 54’s disruption of professional studio work hierarchies, the Jeïta cave events are emblematic of a wider search for alternative cultural venues that neither overtly impose nor tacitly corroborate conventional audience hierarchies, like those denounced by Attali (1985).

Bayle emphasises the fact that technology-enabled changes of scale in the ways we apprehend nature require profound changes in the language of music, which for him means a new aesthetic focus on the energetic aspects of sound. The Jeïta episode is here related not to debate real or virtual, concrete or abstract qualities of acousmatics and electroacoustically processed sounds (questions admirably addressed by Windsor 2000 and Field 2000), but rather to echo Bayle’s insistence on the need to invent and invest organologies attuned to our constantly evolving expressive potential. Stripped of their causal cues (the cave’s sheer volume and physical atmosphere, calcite draperies and speleothems and dripping water, not forgetting the human and technical agents conducting the actual recordings), the harvested Jeïta sounds offered new acousmatic resources.15 By artfully separating and joining things, we can synthesise novel images of things that – to paraphrase Aquinas – cannot be directly received from the senses. Reworked, and despite – or perhaps because of – Bayle’s respect for their provenance, Jeïta sounds were resolutely instated as non-figurative entities:

the murmurs of waters or of waves, the rhythmical patter of droplets, the thrill of whispers, the “noises”, for all their evocative power, make no attempt to describe. In the words of Magritte’s famous title, this is not a cave!
(Bayle 1999)
This situation inexorably gives rise to creep in the ways we read reality: as yesterday’s discoveries fold into today’s humdrum, demarcations of natural versus synthetic become slippery – all the more curiously so when new tools reveal previously unfathomable phenomena which nonetheless seem indisputably and recognisably natural. To illustrate this historic creep, Brilliant Noise (2006) by tandem Semiconductor (Ruth Jarman, 207
Joe Gerhardt) is an intriguing comparator for the Jeïta cave piece. Semiconductor accessed hundreds of thousands of computer files collated from ground-based and satellite solar laboratories and observatories, selecting raw images of previously unseen activity which they reorganised into spectral groups to create time-lapse sequences.16 Mapping the luminescence of dynamic regions in the resultant images to sound derived from natural solar radio, Semiconductor produced a work literally conducted by fluctuations in solar intensity, evidenced by the visuals.17 Bayle’s claim that his 1969 Jeïta work ‘is not a cave’ contrasts amusingly with Semiconductor’s assertion almost 50 years later that Brilliant Noise is, quite literally, a ‘symphony by the Sun’.18 In baldest possible form:

the computer began as a tool – an object for the manipulation of machines, objects, and equations. But bit by bit (byte by byte), computer designers deconstructed the notion of a tool itself as the computer came to stand not for a tool, but for nature itself.
(Galison 1997, p. 777)

Mythologies that convey our attempts to relate to an unfathomable universe and to meaningfully assimilate barely sensed magnitudes of time and space have filled collective imaginations from the earliest times. Superhuman scales projected and processed by computational means, encompassing and stretching perceptions gleaned by ‘natural’ organs, are radically transforming the nature of experience: our new organology allows us to sound out places we cannot physically penetrate and map events otherwise out of range to human bandwidth. Sonification and auralisation are largely driven by utilitarian goals: our ability to discern complex patterns and syntax in auditory information can facilitate the parsing of big data, optimising the distribution of cognitive tasks across discrete sensory channels.

The synthesis of complex data can also yield new aesthetic potential: granular synthesis, a technique based on Dennis Gabor’s time-frequency analysis (leading to Gabor atoms and the Gabor transform), was introduced to music by Iannis Xenakis’s fastidious use of tape splicing and analogue tone generators. First featured in Analogique A-B (1958–1959), where Analogique B constitutes the granular synthesis response to the initial, stochastically composed orchestral part A, the technique involves splitting sonic samples into 1–50 ms grains that are layered and manipulated to constitute audible events whose sonic qualities cannot be attained by traditional synthesis methods.19 Barry Truax’s implementation of real-time granular synthesis in interactive compositional environments20 and Curtis Roads’ invention of the digital granular synthesis engine boosted its uptake. Increasingly fine adjustments to independently treat speed, pitch and formant characteristics of audio samples now make granular synthesis a core feature of such widely used audio programming environments as Csound, SuperCollider, Reaktor, Pure Data and ChucK.21 Crafting textured, evolving soundscapes by shaping masses of indivisible sonic quanta calls for tuning and metagesture scoped beyond the reach of familiar activities.
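To make the principle concrete, here is a minimal sketch in Python (an illustration only, not Xenakis’s, Truax’s or Roads’ implementations, and not one of the environments named above): short Hann-windowed grains drawn from a source sample are scattered across an output buffer, with grain length, density and overall duration as the control parameters; all parameter values are arbitrary examples.

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=30, density=200, length_s=5.0, seed=0):
    """Naive granular synthesis: scatter Hann-windowed grains drawn from random
    positions in `source` across an output buffer at `density` grains per second."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)          # e.g. 30 ms -> 1323 samples
    window = np.hanning(grain_len)                 # envelope to avoid clicks
    out = np.zeros(int(sr * length_s))
    for _ in range(int(density * length_s)):
        src_pos = rng.integers(0, len(source) - grain_len)
        out_pos = rng.integers(0, len(out) - grain_len)
        out[out_pos:out_pos + grain_len] += source[src_pos:src_pos + grain_len] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out         # normalise to avoid clipping

# Example with a synthetic source (a 2-second sine sweep) rather than a recording.
t = np.linspace(0, 2, 2 * 44100, endpoint=False)
sweep = np.sin(2 * np.pi * (200 + 400 * t) * t)
cloud = granulate(sweep)
```

Even this naive version shows why the technique invites metagestural control: no single grain is musically meaningful on its own, so the composer’s gestures necessarily address the statistical behaviour of the whole cloud rather than individual events.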
Computers ensuring the corresponding calculations are, in Galison's terms, generators of a new nature, or at least an unprecedented scale of nature, in that their sampling-based models challenge anthropocentric definitions that have long depended on direct observability.22 Like the pseudo-random numbers of Monte Carlo simulations that link fields – mathematics, physics, chemistry – to found novel representations and implementations (for example those of the Manhattan Project – cf. Galison 1997), the recombinant possibilities of granular synthesis demand creative approaches that actively build and extend our organology. Technical scaling to make legible data that would otherwise outstrip human understanding, allowing us to generate and sensibly manipulate stochastic masses of data, involves dealings with consensually recognised 'nature' as much as with our demiurgic projections. In the sonic realm,
such processes have given formidable impetus to initiatives launched and inspired by Murray Schafer's World Soundscape Project (Schafer 1977; Wrightson 1999), reinforced by concerns for our endangered planetary survival, and calls to rethink the logics and ethics of the Anthropocene (Zylinska 2014). By using extensive statistical analysis to demonstrate the dynamics of population and landscape ecologies, ecoacoustics shows species spread and diversity – niche and adaptive behaviours – within predefined temporal and spatial frameworks (Sueur and Farina 2015). Discretising and interpreting signals from the environment, such techniques provide insights that cannot be gained by direct observation, while addressing the senses with multimodal models of information that are more or less 'earthed' in or abstracted from their subject matter.23 Primarily designed to inform, such meta-material instantiations and their conceptual frameworks also inspire new creative approaches (Davis and Turpin 2015); a simple example of such an index is sketched below.

Like other empirical sciences, ecoacoustics investigates vital materialities by harnessing probabilistic and predictive mathematical models to core subject-matter considerations, giving rise to a computational gap between reality in the field and abstraction of the models that address it. Such tensions frequently punctuate human efforts to reconcile abstract and empirical reckoning: the computation-versus-fieldwork gap encountered in ecoacoustics might, for example, be whimsically compared with the myth-versus-astronomy gap that prompted our ancestors' readings of terrestrial animals in the celestial zodiac.

How can one not establish a radical difference between universal Nature and relative culture? But the very notion of culture is an artifact created by bracketing Nature off. Cultures – different or universal – do not exist, any more than Nature does. There are only natures-cultures, and these offer the only possible basis for comparison.
(Latour 1993, p. 104)

Latour thus sums up the paradox whereby our purportedly natural environment is constantly transformed by exosomatic organs which recursively and simultaneously transform the objects of their attention. In a world where we are forever renewing our grasp of nature in its workings, analysis of tuning practices means reflecting on our positions and categorisations and on corresponding time scales deemed relevant. The intangible resonances and physical vestiges of cultural legacies combine to produce discrete, tightly interwoven durational nodes: like other artistic artefacts, traces of past musicking are treasured for polysemic values that outstrip obvious meanings and functions. When re-injected into contemporary creation, they generate new temporal patterns: we 'imagine the flow of time as assuming the shapes of fibrous bundles, with each fiber corresponding to a need upon a particular theater of action, and the lengths of the fibers varying', such that cultural bundles 'consist of variegated fibrous lengths of happening' (Kubler 1962, p. 111). Variegated, stranded patterns and 'shapes of time', which Kubler contrasts to the central lens of sensibility radiating from artists at a given place and time (as per the structural methodology or Strukturforschung approach), allow multiple temporalities to be deployed across artistic processes and productions, extending our tuning range and abilities.
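The ecoacoustic indices mentioned above are, at base, statistics computed over spectrograms. As one illustrative example (not taken from the studies cited), the sketch below computes a normalised spectral entropy, a component of the acoustic diversity measures associated with Sueur and colleagues; Python with NumPy is assumed and the frame size is arbitrary.

```python
import numpy as np

def spectral_entropy(signal, n_fft=1024):
    """Normalised spectral entropy of a mono signal: values near 1 indicate a
    flat, noise-like spectrum; values near 0 a spectrum dominated by few tones."""
    n_frames = len(signal) // n_fft
    frames = signal[:n_frames * n_fft].reshape(n_frames, n_fft)
    # average magnitude spectrum over short Hann-windowed frames
    spectrum = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)).mean(axis=0)
    p = spectrum / (spectrum.sum() + 1e-12)      # spectral probability mass
    h = -(p * np.log2(p + 1e-12)).sum()          # Shannon entropy in bits
    return h / np.log2(len(p))                   # normalise to the 0..1 range

# toy comparison: a pure tone versus broadband noise
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).normal(size=sr)
print(spectral_entropy(tone), spectral_entropy(noise))
```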
In contrast to the vertical transmission of biological evolution, subject to extinction and irrevocability, cultural evolution features retroactive appearances of outdated or superseded phenomena and horizontal coopting of innovations from concurrent branches (Vaccari and Barnet 2009).24 These resurgences and cooptations are made possible by the anchorage of human culture in or as storable media. This gives it a resilience denied to its biological/biodegradable human makers, and incomparable potential diversity. In recent history, the range of sonic materials open to musicking has been massively augmented by the development of storable media and associated processing techniques,
exploited through compositional strategies like Schaeffer’s that privilege perceived attributes of sound objects over speculatively organised instrumental palettes blanched by academic traditions (Schaeffer 1973). In parallel, performance and sound art, multimedia and computational art, and vernacular ‘noise’ culture have contributed to new aesthetic sensibilities, enriching the range of sonic materials we tune to and value.Yet the advent of novel exosomatic organs and the race to stay perceptively and cognitively abreast of openings they offer also raise real quandaries: what are the links between collectively readable physical signals evidenced by instrumentation and the percepts they produce in our minds? How might we relate our analysis of poietic processes (composition or production of sound) with analysis of aesthetic processes (its reception)?25 François Delalande addressed these questions when trying to devise a new methodological starting point for analysing electroacoustic music in the late 1990s (Delalande 1998). To get beyond then mainstream approaches to melodic and rhythmic organisation, he studied listener responses to Pierre Henry’s Sommeil,26 characterising them as taxonomic (distinction of key morphological units to acquire a synoptic sense of the work), empathic (attention to individually felt sensations and experience of sound dynamics), and figurativist (interpretation of the sound work as a narrative) and/or as mixes of these elements. While they remain valuable 20 years later, Delalande’s categories must now be read against growing awareness of how much our technical organs are answerable to unruly amalgams of habits, expectations and aspirations. However compellingly reproducible their findings, their implementation mobilises tacit, historically layered knowledges that make abstract reasoning and empirical evidence tortuously interdependent.27 The fact that unfamiliar events disclosed by tools and technical systems are often staged to take on aesthetic value adds to this complexity, as does use of tools and systems to elicit new meanings from old materials: ‘There is history, there is culture, and there are the artefacts which carry them beyond our death: technics’ (Vaccari and Barnet 2009, p. 10). Materials and artefacts taken for granted – or dismissed as obsolete – can resurge as creative means endowed with unique affordances that mobilise sometimes deeply sedimented and spatially remote cultural experience. For example, recent grid technology-enabled investigation of complex sounds of certain long-lost instruments has produced inspirational sonic libraries used by contemporary musicians, as well as historians. Resources thus derived from models of the Ancient Greek epigonion, a wooden-framed, 48 stringed harp, have prompted research into other ancient instruments including the salpinx, barbiton, aulos and syrinx, further enriching our twenty-first-century organology.28 Use of exosomatic organs to source deep and distant real-world phenomena inaccessible to unaided perception is a hallmark of Alvin Lucier’s work, whose serendipitous meeting with physicist and amateur organist Edmond Dewan created the conceptual and technical conditions for his Music for Solo Performer (1965).29 Dewan’s system to investigate alpha brainwaves struck Lucier more for theatrical or visionary reasons than for sound or musical reasons, because I didn’t know what it was going to sound like. 
Actually, it doesn’t sound like anything because it’s ten hertz and below audibility; it isn’t a sound idea, it’s a control or energy idea. (quoted in Dewar 2012, pp. 2–3)30 Quasimodo,The Great Lover,31 conceived in 1970 and inspired by the long-distance sound-sending ability of whales, orchestrates input sounds relayed by and actual sounds of a set of very different kinds of connected spaces: In large, single places such as prairies, glaciers or ocean basins, use single systems of great power or several weaker systems in series. Connect small, separated spaces such 210
as rock formations within faults, detached railroad cars on sidings, the rooms, foyers, and corridors of houses, schools, or municipal buildings with relays of systems, adding shorter distances to make longer ones.
(Lucier cited in Kahn 2013, p. 167)

Kahn coins and defines transperception as hearing in a sound the influences of intervening space traversed by a signal or sound (p. 109), where channels of intervening space and time are not presumed to be evacuated (p. 171). He suggests that Lucier allows us 'to create and understand mixes and mashes that are transperceived environmentally', placing works like Quasimodo in the context of 'transductive trajectories in the "mixed circuits" in the history of telecommunications, in earth returns, ionospheric reflections, and elsewhere' (p. 169). Such mixes and mashes have become an integral part of everyday planetary media, often spurred on by pioneering collaborations between ICT operators and artists.

For The Virtual Abbey (1995), musician-producers Luc Martinez, David Hykes and John Maxwell Hobbes fused historic and spatial sensibilities by connecting the twelfth-century Abbaye de Thoronet in the Var to The Kitchen in New York, where the Harmonic Choir performed Hykes's Earth to the Unknown Power via an ISDN network supplied by Telecom Interactive.32, 33 The Cistercian abbey's acoustics have been prized since its foundation under Folquet de Marseille, its first abbot elected in 1199, son of a wealthy Genoese merchant and renowned ex-troubadour, composer and singer of secular love songs, who in 1195 left his stellar music career for the divine voices of monkhood. Some 800 years later, Hykes's New York choir was digitally relayed to Thoronet, played live through a sound system that encoded the abbey's acoustic signature, and transmitted back to The Kitchen, which was thus able to host The Virtual Abbey performance. Sharing of ambient sound and images from both sites reinforced their respective audiences' sense of co-location. Hykes and Hobbes dubbed this real-time application of acoustic parameters from a distant location to source signals acoustic transportation. Through culturally and affectively potentiated recognition of malleability of materials previously experienced as unyielding, we readily make these materials part of newly 'naturalised' substrates of reality.

In the seemingly chaotic 'noise' originating from info streams, nature and the observations of the universe, certain structures, rhythms, and cycles exist. By processes of filtering, emphasizing and amplifying these rhythms of the electromagnetic waves and data structures, artists and musicians are remodelling the con-texture of acoustic space.
(Smite and Smits 2000)

Tuning to remote spaces has long inspired creative visions and strategies: moon-bounce or earth-moon-earth experiments using the moon as a passive communications satellite for terrestrial signals have been underway since the 1940s.
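For a sense of the scale such works play with, a back-of-the-envelope figure (assuming the mean Earth-Moon distance of roughly 384,400 km, which in fact varies by tens of thousands of kilometres over the lunar orbit) gives the round-trip delay that moon-bounce makes audible:

$$ t \approx \frac{2 \times 384\,400\ \mathrm{km}}{299\,792\ \mathrm{km\,s^{-1}}} \approx 2.6\ \mathrm{s} $$

It is this delay, together with heavy path loss and scattering off the lunar surface, that returns moon-bounced speech and music as distinct, degraded echoes rather than as imperceptible latency.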
Oliveros's explorations of the sonosphere – defined as the earth's primary sonic envelope, interwoven with secondary biospheric and tertiary technospheric layers (Oliveros 2006, p. 481) – include over a decade of Echoes from the Moon events launched in 1987 with engineer Scott Gresham-Lancaster and ham radio operator and moon-bounce specialist David Oleon.34 Aelectrosonics (Kahn 2013)35 and other energetic and electromagnetic arts offer exciting resources to artists seeking new realms of sonic experience and – in keeping with the social implications of general organology – new kinds of cultural gatherings and configurations fittingly and sympathetically attuned to this experience.

The 2001 Acoustic Space Lab in Irbene, Latvia, was set up to explore the social and creative potential of sound and acoustic environments, relations between data streams and radio
waves, and collaborative broadcasting and streaming dynamics. Several dozen sound artists, net and community radio activists gathered in August around 'Little Star', a 32 m-diameter radio telescope sabotaged and abandoned by the Soviet Army when it left the Baltic States, previously used for monitoring planetary, stellar and extragalactic radiation, for very long baseline interferometry and for surveillance. The Ventspils International Radio Astronomy Center (VIRAC) salvaged the radio telescope and collaborated with Latvian artists Rasa Smite and Raitis Smits to run the Acoustic Space Lab workshop and symposium.36 The so-called acoustic group explored expressive possibilities of the actual dish (a 600-ton mass with an 800 m2 surface), rigging microphones to pick up near environment sounds (neighbouring forest, bird cries, wind noise, groaning of the telescope's pan-and-tilt mechanics from its 25 m tower). The surveillance group led by Marko Peljhan/Makrolab switched the feed horn to 1.5 GHz to eavesdrop on INMARSAT communications satellites serving mobile phone services, ship-to-shore communications, air traffic control signals and data packet transmissions. The radio astronomy group focussed on planetary observation, producing 2D and 3D renderings, line graphs and control parameters for audio applications. Participants collectively explored and jammed with their findings during a six-hour webcast from Riga, organised with partners including Kunstradio in Vienna, who thereafter made the resulting archives available on an open source platform. Through their will to expand and glean artistic materials from beyond the precinct of conventional practices, Smite and Smits opened up ways to collectively create with transduced Aelectrosonic materials (Kahn 2013, p. 55). The Acoustic Space Lab's 'self-unconcealment of data' inspiringly reveals as much as it interprets these unimaginable resources (Whitelaw 2013, p. 226), incentivising artists in their attempts to tune to unknown energies.

The 'beep, beep' pulses of Sputnik, the first artificial satellite, were captured by, and in turn captured the imaginations of, radio operators all over the world in 1957. Spacecraft velocity and trackability and antenna radiation patterns have been tested in increasingly effective acoustic and radio-frequency anechoic chambers since our earliest extraterrestrial excursions. In addition to their being made operable by the usual battery of frequency tuning tests, the world's first two art satellites, collaboratively developed by Tama Art University and Tokyo University, feature creative sonic content. Missions of the low-cost 10 cm cube nano-satellite ARTSAT1 INVADER, launched in February 2014, included algorithmic generation and transmission of synthesised voice, music and poems. ARTSAT2 DESPATCH (Deep Space Amateur Troubadour's Challenge) is a 3D-printed space probe launched in December 2014, whose last signal, detected in January 2015 from 4.7 million km, set the world distance record for amateur radio (see Figure 9.1).
This record is all the more striking given DESPATCH's singular artistic profile: the probe's sensor readings serve to generate a kind of acoustic poetry structured according to a rhythm-phrase based on Dadaist Hugo Ball's poem Gadji beri bimba.37 Converted to current or angular velocity, this rhythm-phrase allows the vessel's trajectory to be ascertained by cooperative data communication and reconstruction: ground stations receiving fragments of the broadcast poetry share it through the Web and social networks to collectively estimate the probe's position.38 There is a peculiar poignancy to this use of staggered, polyrhythmic cues to derive spacecraft coordinates. By thus poeticising the Deep Space Amateur Troubadour's extraterrestrial dynamics, its artistic and technical makers perform a humanising metagesture that is powerfully moving. Frequencies, rhythms, patterns and cycles thus scaffold parallel explorations of embodied time and space and of the computational datasphere where 'digital data is figured (here) as exactly the thing that it is not: matter', and where the 'sound particle stands for a (problematic) convergence of data, sound and matter' (Whitelaw 2003, pp. 93, 95).
Figure 9.1 The DESPATCH/ARTSAT2 model
These tunings to purportedly alien phenomena stretch cultural bandwidth and our ability to discern difference without savagely colonising it, opening up vital senses of possibility.

Artificiality is not a characteristic that denotes the manufactured origin of the object as opposed to nature's productive spontaneity. Artificiality is something that is within the artificializing action of man, regardless of whether this action affects a natural object or an entirely fabricated object.
(Simondon 1958, p. 71)

Cultural positioning embedded in concepts of tuning plays into and is played out by exosomatic organs employed in musicking. The heterogeneity of these interfaces is channeled by biases that underpin their design at pragmatic and conceptual levels and that determine their optimal modes of operation. These biases are often attributable to the standardisation quest that facilitates communication and uptake of techniques but at the same time limits differences, constraining expressive potential. They may also be attributable to more or less conscious ideological choices.

Modern Western concepts of tuning are largely anchored in nineteenth-century scientific decisions favouring often elegant mathematical models that facilitate the development of holistic systems. Joseph Fourier's concept of waveform synthesis was applied to sound by Georg Ohm, who claimed that our perceptive apparatus resolves complex tones into simple, sinusoidal components (Rodgers 2010). Searching for analogies across vision and audition and building on what he dubbed 'Ohm's law', Hermann von Helmholtz posited correspondences between loudness, pitch and timbre, and brightness, hue and saturation of colour, arguing that we can only perceive tones that correspond to sinusoidal vibrations. Helmholtz's tuning fork apparatus demonstrated Ohm's theory by synthesising complex sounds from sinusoidal components (Rodgers 2010, p. 123). Deemed compliant with a take on reality affirmed as incontestably objective, this model triumphed over August Seebeck's equally legitimate model that addresses features glossed over by the sine wave. Intrigued by the way we hear as continuous the discrete sonic pulses produced by a siren, Seebeck showed pitch to be perception based and determined by periodic phenomena (pulse frequency) rather than by a purported fundamental frequency. Tara Rodgers' study of the contradictions between Seebeck's empirical findings and Fourier's analysis-based theory, adopted by Ohm, discusses this moment as a historical turning point. Advocating Ohm's definition of tone in terms of its sinusoidal components rather than
its periodicity, as claimed by Seebeck, Helmholtz paved the way for viewing the harmonic oscillations of continuous tones as more naturally integrated to human perception than the discontinuous tones exemplified by the discrete impulses of a siren (Rodgers 2010, p. 123). By equating the sine wave with purity, neutrality and musical value, in keeping with neoclassical aesthetics of simplicity and order, conventional acoustics ascribes lesser status to the physical/ timbral characteristics of sound associated with aperiodic waveforms (Rodgers 2010, p. 126). Sine wave-based definitions of tone moreover foreground diagrammatic modes of scientific representation, in other words, legibly rationalised information that was and often still is considered more trustworthy than the supposedly fickle findings of the senses. As Rodgers notes, historic tendencies to ascribe direct perceptive appeal to sine-based tones, ruling out the validity of more complex aperiodic conceptualisations, prefigure construals that today oppose analogue media, seen as closer to ‘natural’ reality by virtue of their continuous encoding, and digital media seen as alienated from supposed ‘real-world’ continuity by virtue of their discretised encoding. Over-simplification evident in both construals begs the question of whether, and how far, one can argue for the interdependence of theoretical constructs and of their physical instantiations or ramifications. Conflicting views like these are at least partially imputable to the ways we differentially map and value theoretical conjectures and their real-world objects. Suarez identifies two main approaches in theory-to-world tuning: (1) we approximate theory to the problem situation by introducing corrections into the theoretical system (qualified as ‘construct’ idealisation); and (2) we approximate the problem situation to the theory by means of simplifying the problem situation itself (qualified as ‘causal’ idealisation) (1999, p. 174). Our concepts of computation testify to these differences and to our difficulties trying to reconcile agonistic terms while avoiding simplistic or reductionist holism. At once a technique of abstraction, using formal logic, mathematics and manipulations of symbols and languages to ground its procedures, computation is also a ‘technology of material agency’ which pragmatically partakes in the world it models and represents (Fazi and Fuller 2016).39 Instead of imposing distance, discretisation is consequently viewed as the means to tightly weave computational processes into the fabric of the world. Sound art, infused with the legacies of abstract, theoretical logics and with visceral, temporal resonances that prompt complex affective responses, is charged with this ambivalence. We attend to sonic events at multiply layered levels, seeking to (re-)cognise temporal processes we can map to our rhythmic sensibilities – via taxonomic, empathic, figurativist or other kinds of constructs. Tuning in involves tuning out – filtering out unwanted frequencies, and signals that would interfere with operations, distort the final output (the sounds), or produce other ‘interference effects’. But interference banding serves as a metaphor for the rich potential, and shortcomings, within interoperating models. It also points to the potential for the generation of unintended social effects and hybrid artifacts. Images and sounds can be combined to produce not just averages, but new entities. (Coyne 2010, p. 
35)

Auditory interfaces allowing humans to tune to machine irregularities and interference have been built into or onto computer circuits to extract hardware and software information since early mainframe days. Louis D. Wilson recounts computer audio experiments in 1949: static from background radio kept running during night test shifts on the BINAC revealed patterns of activity in the computer (Miyazaki 2012). Wilson installed a detector, amplifier and speaker to make this more audible, thus useful for monitoring purposes. Machine listening techniques were developed for the American UNIVAC-1 and TX-0 computers, the Australian CSIRAC,
English Pilot ACE and Pegasus machines, and the Dutch PASCAL (acronym for Philips Akelig Snelle CALculator – 'Philips' horribly fast calculator').40 In 1962, electronics engineer Willem Nijenhuis of Philips Natuurkundig Research Laboratory in Eindhoven published a 45 rpm vinyl recording of computer sound experiments entitled Rekengeluiden van PASCAL, made by amplifying radiofrequencies generated as the machine ran through different programmes. The resultant stretches of noise, or variously textured electronic bursts, resemble the gritty, leaky sonic materials prized by contemporary aficionados of 'noise music' and analogue synthesisers, predating 'snap, crackle, glitch' aesthetics presciently identified as post-digital by Kim Cascone (Cascone 2000).

Recent media archaeology experiments which bridge these generations and communities of practice confirm how thoroughly such materials have been integrated into wider cultural contexts. Matt Parker's Imitation Archive (2015)41, for example, has established a permanent sonic repository covering 70 years of computing history and a set of his own compositions bearing titles like Test Patterns, Wrens, Bombes of Bletchley, Terminal, Transmission from Overseas. Commissioned by the British Library Sound and Vision Archive, the repository was created during a residency at Bletchley Park's National Museum of Computing.42 Parker recorded characteristic sounds of historical machines considered key for computing – and indeed for world history – including the restored Harwell Dekatron relay-based 'WITCH' (1951), the Tunny cipher machine and the recently completed 1943–1944 Colossus replica. Together with the sonic signatures of mainframe computers, the repository features sounds of the mechanical comptometers and calculators that were integral to their operating environment (e.g. Facit, Brunsviga, Contex devices). Layered rhythmic patterns and harmonics punctuated by the whirring and hissing of machine parts that accelerate and decelerate are compiled in Parker's sophisticated, techno-romantic compositions. These contrast with the more rawly literal machine-listening legacies of artists like Jonathan Reus, whose iMac Music media archaeological performance explores digital software routines and the hardware housing them.43 Running iMac G3 circuitry is surgically live-hacked by performers, whose fine-tipped sound amplifying probes generate acoustic 'signatures' of the ongoing processes while messing with desktop screen displays. As malfunctions multiply, distortion levels rise, until the system's memory, corrupted beyond recognition, fatally crashes this sonic and machinic theatre of anatomy.

Electronic machines and synthesisers have been used in music for over a century: Meissner's 1936 electronic music history publication already evokes a 40-year incubation (Rodgers 2010, p. 9). Given the enrichment of compositional material by various kinds of electromagnetic fluctuations, much creative investigation in recent decades has focussed on the edge of analogue and digital systems, if not on the wilful blurring of that edge. For example, live coding's embodied, audiovisual displays of real-time algorithmic processing imbue what is initially and essentially a computational practice with the gestural immediacy of performance.
We tune to diverse kinds of signals and to the micro- and macro-temporalities emanating from the digital computation processes and from the coder's physical interventions, to enjoy the vitality of this 'differential distribution of intensities' (Murphie 2013, non-paginated).

Antidatamining works by the French RYBN artists' collective exploit what they call real-time archaeology of data flows. RYBN mines web-extracted data to foreground significant moments of socio-economic and geopolitical imbalance exacerbated by the proliferation and impenetrability of digital data. For Antidatamining VII – Flashcrash, commissioned for the Raisons d'agir festival at Espace Mendès France, Poitiers, in April 2011, Kevin Bartoli, Marika Dermineur and Julie Morel based their sound performance/installation on the trillion-dollar stock market crash of May 6, 2010, where massive plunges in key stock values triggered general financial chaos and breakdown. RYBN acquired publicly available data from the whole-market data company Nanex,
a fierce critic of high frequency trading (HFT) like that behind the notorious Flash Crash, and scraped further material from Yahoo!Finance via IP proxies. Installed in the Poitiers planetarium, RYBN's Flashcrash used a 9.1 surround-sound speaker system where the audio channels relayed data streams corresponding to eight markets peripheral to the central New York Stock Exchange.44 The work featured high frequency bursts and pulses, bass rumblings and propagation of resonance effects to heighten intensity:

the whole signal remains fabricated, and is based on very complex phenomena of feedback interactions. . . . Financial noise is created by the sum of all its internal feedbacks, anticipation process(es), and mimetic forces. The noise we can produce in the framework of antidatamining, is based on the matter we explore. HFT provides a wide range of frequencies, infinite structural composition sets, and a strong symbolic and metaphoric matter.45

Rekengeluiden, iMac Music, Imitation Archive and Flashcrash foreground what Whitelaw defines as transmateriality: 'a view of media and computation as always and everywhere material but constantly propagating or transducing patterns through specific instantiations' (2013, p. 223). To return to Smalley's terms, sonic materials here are source-bonded: they meet expectations regarding their relations to supposed causes and discernibly voice their interrelations within a realm of operationally tuned human-computer intermediation. Theirs is a new kind of performativity, a machinic orality or machinic processuality akin to that described by Guattari, its 'flight into machinations and deterritorialised machinic paths capable of engendering mutant subjectivities' (1995, p. 90).
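RYBN's own pipeline is considerably more elaborate, but the underlying principle of parameter-mapping sonification that such works build on can be shown in a few lines. The sketch below (Python with NumPy assumed; the 'tick' series and frequency range are invented for illustration, not RYBN's data or mappings) renders a one-dimensional data series, such as a price feed, as a gliding tone whose pitch follows the data.

```python
import numpy as np

SR = 44100

def sonify(series, total_seconds=10.0, f_lo=80.0, f_hi=2000.0):
    """Parameter-mapping sonification: one oscillator whose pitch follows the series.

    Values are rescaled to the f_lo..f_hi range and rendered as a continuous
    sine tone whose frequency glides through the data."""
    series = np.asarray(series, dtype=float)
    norm = (series - series.min()) / (series.max() - series.min() + 1e-12)
    n = int(total_seconds * SR)
    # resample the data to audio rate, then integrate frequency to obtain phase
    freqs = np.interp(np.linspace(0, len(series) - 1, n),
                      np.arange(len(series)), f_lo + norm * (f_hi - f_lo))
    phase = 2 * np.pi * np.cumsum(freqs) / SR
    return 0.5 * np.sin(phase)

# hypothetical stand-in for scraped market data: a slow drift with a sudden crash
ticks = np.concatenate([np.linspace(100, 104, 300),
                        np.linspace(104, 60, 20),     # the 'flash crash'
                        np.linspace(60, 95, 180)])
audio = sonify(ticks)
```

A design point worth noting is that frequency is accumulated into phase rather than applied directly, so that the pitch glide stays click-free however violently the data jumps.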
To conclude

[T]he 'post-digital' approach to creating music has changed into a digital deconstruction of audio by manipulating the prima materia itself: bits . . . this bit-space (sans sound-objects) contains background information as most spaces do . . . we have only recently begun to consider the gauzy veil of hums, clicks, whirs and crackling as worthy of our attention . . . data-mining the noise-floor is today's alchemical pursuit of turning bits into atmospheres.
(Cascone 1999)
Developing affordances to cope with increasingly technologised nature-culture is essential for our evolution as creative extremophiles or 'savanturiers' (adventurers of knowledge),46 driven to constantly extend our exosomatic organ-spaces. Tuning works in at least two ways: by opening up to the discrepant, the superfluous and the deviant (Coyne 2010, p. 46), it extends our experiential and cognitive bandwidth. Normative or monopolistic tuning, in contrast, flattens and inhibits horizons of possibility. Art, geared towards promoting imaginative diversity and honing our capacity for discernment yet beholden to the communities it seeks to address, is acutely prey to this tension. But this is fundamentally an ethical issue, to do with willingness to tune to difference, to multiplicity, to unfamiliar dynamics and entities. Only by positioning ourselves as genuinely open to aesthetic experience, relinquishing conventional expectations, expedient interpretations and formulaic reductions, can we hope to freely explore the nexus of potentially meaningful relations offered by a work of art and enhance our ability to entertain otherness.
This chapter ends by reformulating the questions it began with. Can musicking help us meaningfully integrate the hybrid, multispecies communications opened up by our new affordances, their unforeseeable coincidences and connections, their posthuman rhythms and scales? Might our new organs and metagestures allow us to draw on resources from the biosphere, the cosmos, and the acoustic habitats of our expanding datascapes, in order to build compellingly post- or superhuman visions comparable to those of ancient creation myths?
Acknowledgements

Sussex Humanities Lab colleagues, particularly Liam Berriman, David Berry, Alice Eldridge, Beatrice Fazi and Chris Kiefer, have been generous and stimulating discussants for ideas raised here. Kim Cascone forwarded a welcome offline residue of his Residualism manifesto.
Notes 1 Small’s term ‘musicking’ is used throughout this text to designate the broader processual, making and listening qualities of our human musical activities (see Small 1998). 2 Bec’s reflexion on gesture and metagesture, developed in collaboration with Vilém Flusser (1920–1991), runs throughout his publications and artistic and paedagogical initiatives. Our numerous collaborations include the joint organisation of a debate on ‘Metageste et Machinerie‘ during the 2006 Avignon Festival, Rencontres de la Maison Jean Vilar. 3 Used for millennia for signalling purposes in European Alpine regions, the carved wooden alphorn is considered a descendent of Neolithic auroch or buffalo horn instruments. From about 3,500 bc, the introduction of farming and animal husbandry techniques dramatically increased the development and spread of trumpets in Europe (Hirt 2015, p. 1). 4 Oliveros began her ‘Deep Listening’ experiments in 1988 with musicians Stuart Dempster and Panaiotis and engineer Albert Swanson, in an unused water cylinder (186 foot diameter) in Port Townsend, Washington, featuring a 45-second reverberation at low frequencies (for an example, cf. Suiren at www.youtube.com/watch?v=7qp2Js_4urQ). Given access difficulties and the compromised acoustics audience presence would entrain, Jonas Braasch et al at Rensselaer Polytechnic Institute created and implemented a digital model in RPI’s Experimental Media and Performing Arts Center in 2012 for Oliveros’s 80th birthday. Her ‘Deep Listening’ practices often involve online collaborative improvisation sessions. 5 On December 16, the astronauts reported sighting an unknown satellite-type object, then played Jingle Bells on a harmonica and bells they had smuggled on board (now preserved at the Smithsonian Institute). www.youtube.com/watch?v=HqfIEQKnkJU 6 ‘How many errors would have been avoided in social science over the past two centuries if it had known how to analyze the relations between spectators and musicians and the social composition of the concert halls. A precise reflection of the spectators’ relation to power would have been seen immediately’ (Attali 1985, p. 118). 7 Borrowed from Bec, these terms roughly map to the three areas Kahn relates to what he calls the ‘variable technology’ of communications, namely the experiential (aesthetic/ artistic), the scientific and the communicative (Kahn 2013, p. 20). 8 Via Ceylonese philosopher Ananda Coomaraswamy, Aquinas’s nature ‘in its workings’ – translated as its ‘manner of operation’ – inspired John Cage. For discussion of Cage’s Eastern and Western influences and attributions, see Crook 2011. 9 ‘Source bonding is the term I use to encapsulate the natural tendency to relate sounds to supposed sources and causes, and to relate sounds to each other because they appear to have shared or associated origins’ (Smalley 1994, p. 37). 10 A lithophone (Greek prefix litho- meaning stone, and suffix -phone meaning sound) is a percussion instrument made of stones. Speleothem music (Greek prefix speleo- meaning cave) is percussion music obtained by tapping or striking cave formations; a speleolithophone is a lithophone designed to be played in a cave.
Sally Jane Norman 11 ‘Transcontextuality can be used as a tool to lend old or existing contexts new meanings [. . .] the borrowing of materials may come from any genre, and that borrowing may or may not be reflected in the formal design of the work the foreign materials are inserted into’ (Field 2000, pp. 50–51). 12 For four November evenings in 1969, the Jeïta Cave hosted the polyphonic rhythmic patterns and layered overtones of Stockhausen’s Stimmung (‘tuning’ of the voice – Stimme) [double meaning – Stimmung means ‘tuning’ or ‘voicing’], composed in 1968 for six vocalists and six microphones, for the Collegium Vocale Köln. 13 Schaeffer was wary of abstract, procedurally calculated forms of control that risked undermining, if not altogether omitting, human perception and listening, a force he considered essential in the creative compositional process. Regarding his mistrust of parametrically controllable and digital systems, cf. Teruggi (2007) and Battier (2007). 14 Bayle thus pursues the philosophy of his predecessor, Schaeffer, who ‘did not include in his music any reference to nature, or any recognisable sound – these diaboli in musica that “corrupted” or dramatised the perception of music’, but instead ‘ “the use of sounds with no relation to a specific meaning, that is, “listen to it by itself ” with no external signification that would pollute the perception’ (Teruggi 2007, 214). See also Bayle 2007. 15 A new version of the first 1970 Jeïta series, created in 2012, can be consulted at www.youtube.com/ watch?v=XO3mervXG3A 16 The work was made during an Arts Council England International Artists Fellowship at the NASA Space Sciences Laboratory, University of California, Berkeley. For further information and an extract from Brilliant Noise, see http://semiconductorfilms.com/art/brilliant-noise/ 17 Brilliant Noise has been presented in configurations including multiscreen and multichannel or surround sound audio, DVD and live remixes by artists seeking their own readings of this uniquely sourced material. 18 Works produced during Semiconductor’s 2015 CERN residency will no doubt add spice to this debate. 19 Roads segments time (past, now and future) into periods, delay effects, frequencies, and perception and action, identifying nine time scales: infinite, supra, macro, meso, sound object, micro, sample, subsample and infinitesimal, where normal auditory recognition spans the macro to micro range. (Roads 2001, pp. 1–42). See also Roads 2015. The 1964 GRM Analogique A-B recording is available at www.youtube.com/watch?v=mXIJO-af_u8 20 Truax’s 1986 Riverrun, a powerful instantiation of his real-time synthesis technique, was recreated as an 8-channel tape version in 2004 (cf. www.sfu.ca/~truax/octo.html). A brief extract can be heard at www.sfu.ca/sonic-studio/excerpts/Riverrun.mp3 21 Affordances include wavelength synthesis where grain length is determined by pitch of contents, Clisson synthesis where grain contents are modified with a glissando, and pulsar synthesis where each grain is generated by an impulse generator. 22 “The real question is whether it is legitimate to have an “anthropocentric ontology”, that is, to draw the line between the real and the non-real by what we humans can directly observe. What makes our scale of observation, in space or time, so privileged? [. . .] Why should we study things in “real time” [. . .] instead of at longer periods . . . ?’ (DeLanda 2003). 
23 Modeling processes are variously linked to physical objects in terms of their correlations with empirical facts (tallying of prediction models and experimental findings) and with fundamental theory (mathematical justification of the validity of assertions). ‘Floating models’ are insufficiently attached to either empirical or theoretical premises. Cf. Morgan and Morrison 1999. 24 This passage draws heavily on Vaccari and Barnet’s text, while reconfiguring their ideas in very different ways that hopefully read as respectful of, and resonant with, their work. 25 These terms and modes of differentiation are cited from Delalande, who references Nattiez’s distinction between the immanent structure of a musical work, its compositional (poietic) processes and its perceptual (esthesic) processes. Delalande 1998. 26 Sommeil constitutes the first movement (2’52) of Henry’s 25-movement Les Variations pour une porte et un soupir. Available at www.youtube.com/watch?v=SLDPcnicyUA 27 ‘We prove the value of an empirical law by making it the basis of a line of reasoning. We legitimate a line of reasoning by making it the basis of an experiment.’ Gaston Bachelard, La Philosophie du non, Paris, Presses Universitaires de France, 1940, p. 5. 28 For an account by scientist Domenico Vicinanza, see ‘Stories from the grid, Episode 2: the Epigonion’. www.youtube.com/watch?v=S-AL3Z0GmlM 29 Excerpt available at www.youtube.com/watch?v=bIPU2ynqy2Y
Tuning and metagesture after new natures 30 In focussing on Innis’s exosomatic rather than what might be called ‘endosomatic’ organs, this chapter leaves aside the kinds of tuning achieved by means like Lucier’s alphawaves or Jacob Kirkegaard’s use of otoacoustic emissions (see the documentation of his work Labyrinthitis (2008) at http://fonik.dk/ works/labyrinthitis.html).The dividing line between ‘inner’ and ‘outer’ employed here, however, necessarily remains somewhat arbitrary. 31 Recreated in 2007 by Matt Rogalsky, Laura Cameron et al (https://mattrogalsky.bandcamp.com/ track/quasimodo-the-great-lover-alvin-lucier), and in 2009 by Tintinnabulate, located at Rensselaer Polytechnic Institute in Troy, New York, led by Pauline Oliveros, and the VistaMuse Ensemble at University of California, San Diego led by Mark Dresser (www.greenleafmusic.com/ telematic-performance-quasimodo-the-great-lover/). 32 Technical and industrial partners included Artists on Line, the Marcel Network led by Don Foresta, and Telecom Interactive (see www.youtube.com/watch?v=qLW78_lMBiY). The concert was released on CD by Catalyst/BMG. 33 Speleothems were used in a related 1999 initiative: the International Telecommunication Union (ITU) set up a duplex link between percussionist Alex Grillo, who performed on stalactites in the cave of St Cézaire-sur-Siagnes (Alpes-Maritimes), and Martinez, who improvised on stage 500 kilometers away at ITU’s Geneva headquarters. 34 After a 1996 performance, audience members at the Moon Festival, California State University at Hayward, queued to hear their voices echo back when ‘talking to the moon’. Oliveros’s Echoes from the Moon events, including the 1997 Salzburg Festival and 1999 St Pölten Höfefest, have featured moonprocessed sound with a range of instruments including a conch shell, gas pipe whistle, wood block, temple block and Tibetan cymbals, as well as with Oliveros’s hallmark accordion. For detailed information, see Kahn 2013. 35 ‘I coined the term Aelectrosonic as a way to accommodate the way “nature” (that is, naturally-generated rather than human-generated electromagnetic activity) was heard as music or otherwise aestheticallyengaged. [. . .] [I]t was obvious that aesthetic or musical listening occurred on the newest media (telegraph and telephone lines), manifesting both mechanical/acoustical and electromagnetic energies [. . .].Whereas the mechanical/acoustical manifestation had a word – Aeolian – and a varied history, the electromagnetic had neither” (Kahn interviewed by Macauley; Kahn and Macauley 2014). 36 Participants and sponsors are listed at http://acoustic.space.re-lab.net/lab/history3.html.As co-founders of the RIXC Centre for New Media Culture in Riga and of the Acoustic Space journal (created in 1998), Smite and Smits have developed interdisciplinary, collaborative networks for several decades. 37 Used for Talking Heads’ I Zimbra track (Fear of Music, 1979). 38 For information on ARTSAT1: http://artsat.jp/en/; ARTSAT2: http://despatch.artsat.jp/en/ 39 Examples cited by Fazi and Fuller include the algorithmic organisation of commercial warehouses, air traffic, and administrative data, as well as extensive arrays of networked social practices. For another view see Thrift 2008. 40 The record was enclosed with Issue 4/5 of the Philips Technical Review, devoted to scientific and musical discussion of the algorithmic materials. 
See Fritz 2011.Tracks made available online by Kees Tazelaar at www.keestazelaar.com/philips.html 41 See documentation of The Imitation Archive at https://www.earthkeptwarm.com/the-imitation-archive/ 42 Further information and extracts: www.earthkeptwarm.com/the-imitation-archive/ 43 Further information and extracts: www.jonathanreus.com/index.php/project/imac-music/ 44 Information sourced at http://emf.fr/6567/upgrade/ and provided by Patrick Tréguer, director of Lieu Multiple Poitiers, which organised and hosted this event. 45 RYBN, quote from an interview published by Knouf 2013, pp. 149–150. 46 ‘Creative extremophiles’ and ‘savanturiers’ are Bec’s terms, the latter being ambiguous in that savoir (knowledge) and saveur (taste) derive from the same Latin root sapere.
References Attali, Jacques (1985, French edition 1977), Noise: The Political Economy of Music, Manchester: Manchester University Press. Avanzo, Salvatore; Barbera, Roberto; De Mattia, Francesco; La Rocca, Giuseppe; Sorrentino, Mariapaola; Vicinanza, Domenico (2010), The ASTRA (Ancient Instruments Sound/Timbre Reconstruction Application) Project Brings History to Life!, in Lin, Simon C.;Yen, Eric (eds.) Managed Grids and Cloud
Sally Jane Norman Systems in the Asia-Pacific Research Community, Proceedings of the International Symposium on Grid Computing (Taiwan, 2009), New York: Springer, 145–156. Bachelard, Gaston (1940), La Philosophie du non, Paris: Presses Universitaires de France. Battier, Marc (2007), What the GRM Brought to Music: From Musique Concrète to Acousmatic Music, Organised Sound, 12(3): 189–202. Bayle, François (1999), www.electrocd.com/en/select/piste/?id=mgcb_1399-1.19 ——— (2007), Space, and More, Organised Sound, 12(3): 241–249. Bec, Louis (2015), Zoosystémie: Ecrits d’un zoosystémician, Prague: CIANT (iTunes). Cascone, Kim (1999), Residualism (manifesto originally published online, kindly forwarded by the author). ——— (2000), The Aesthetics of Failure: ‘Post-Digital’ Tendencies in Contemporary Computer Music, Computer Music Journal, 24(4): 12–18. Coyne, Richard (2010), The Tuning of Place: Sociable Spaces and Pervasive Digital Media, Cambridge, MA: MIT Press. Crook, Edward James (2011), John Cage’s Entanglement With the Ideas of Coomaraswamy, PhD thesis, University of York. Davis, Heather; Turpin, Etienne (eds.) (2015), Art in the Anthropocene: Encounters Among Aesthetics, Politics, Environments and Epistemologies, Cambridge, MA: MIT Press. Debterolis, Paolo; Bisconti, Niccolò (2013), Archaeoacoustics Analysis and Ceremonial Customs in an Ancient Hypogeum, Sociology Study, 3(10): 803–814. Delalande, François (1998), Music Analysis and Reception Behaviours: Sommeil by Pierre Henry, translated by Christiane ten Hoopen and Denis Smalley, Journal of New Music Research, 27(1–2): 13–66. Delanda, Manuel (2003), 1000 Years of War: CTHEORY Interview, www.ctheory.net/articles.aspx?id=383 Dewar, Andrew Raffo (2012), Reframing Sounds: Recontextualization as Compositional Process in the Work of Alvin Lucier, On-Line Supplement: Lucier Celebration, Leonardo Music Journal, 22(1) unpaginated. Eco, Umberto (1988), The Aesthetics of Thomas Aquinas, Cambridge, MA: Harvard University Press. Fazi, Beatrice; Fuller, Matthew (2016), Computational Aesthetics, in Paul, Christiane (ed.), A Companion to Digital Art, Hoboken, NJ: Wiley, pp. 281–296. Field, Ambrose (2000), Simulation and Reality: The New Sonic Objects, in Emmerson, S. (ed.), Music, Electronic Media and Culture, Aldershot: Ashgate, pp. 36–55. Fritz, Darko (2011), The Beginnings of Computer-Generated Art in the Netherlands, http://darkofritz.net/text/ DARKO_FRITZ_NL_COMP_ART_n.pdf Galison, Peter (1997), Image and Logic: A Material Culture of Microphysics, Chicago: University of Chicago Press. Guattari, Félix (1995, French edition 1992), Chaosmosis: An Ethico-Aesthetic Paradigm, Bloomington: Indiana University Press. Hayles, Katherine (1999), How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, Chicago: University of Chicago Press. ——— (2008), Electronic Literature: New Horizons for the Literary, Notre Dame: University of Notre Dame. Higham,Thomas; Basell, Laura; Jacobi, Roger;Wood, Rachel; Bronk Ramsay, Christopher; Conard, Nicholas J. (2012),Testing Models for the Beginnings of the Aurignacian and the Advent of Figurative Art and Music: The Radiocarbon Chronology of Geißenklösterle, Journal of Human Evolution, 62(6): 664–676. Hirt, Aindrias (2015), Addendum to ‘The Devolution of the Shepherd Trumpet and Its Seminal Importance in Music History’, Special Supplement to the International Trumpet Guild Journal, January, 1–23. Innis, Robert E. 
(1984), Technics and the Bias of Perception, Philosophy and Social Criticism, 10(1): 67–89. ——— (2002), Pragmatism and the Forms of Sense. Language, Perception, Technics, Pennsylvania: Pennsylvania State University Press. Kahn, Douglas (2013), Earth Sound Earth Signal. Energies and Earth Magnitude in the Arts, Berkeley: University of California Press. Kahn, Douglas; Macauley, William R. (2014) On the Aelectrosonic and Transperception, Journal of Sonic Studies, 8, sonicstudies.org Knouf, Nicholas Adrian (2013), Noisy Fields: Interference and Equivocality in the Sonic Legacies of Information Theory, PhD thesis, Cornell University. Kubler, George (1962), The Shape of Time, New Haven:Yale University Press. Latour, Bruno (1993, original French edition 1991), We Have Never Been Modern, Cambridge, MA: Harvard University Press. Leroi-Gourhan, André (1993, French edition 1964), Gesture and Speech, Cambridge, MA: MIT Press.
Tuning and metagesture after new natures Miyazaki, Shintaro (2012), Algorhythmics: Understanding Micro-Temporality in Computational Cultures, Computational Culture, 2, http://computationalculture.net/article/algorhythmics-understandingmicro-temporality-in-computational-cultures Morgan, Mary S.; Morrison, Margaret (eds.) (1999), Models as Mediators: Perspectives on Natural and Social Science, Cambridge: Cambridge University Press. Murphie, Andrew (2013), Convolving Signals.Thinking the Performance of Computational Processes, Performance Paradigm, Issue 9, www.performanceparadigm.net/index.php/journal/article/view/135, n.p. Nagler, A. M. (1959, original edition 1952), A Source Book in Theatrical History:Twenty-Five Centuries of Stage History in More Than 300 Basic Documents and Other Primary Material, New York: Dover. Oliveros, Pauline (2006), Improvisation in the Sonosphere, Contemporary Music Review, 25(5/6): 481–482. Roads, Curtis (2001), Microsound, Cambridge, MA: MIT Press. ——— (2015), Composing Electronic Music: A New Aesthetic, Oxford: Oxford University Press. Rodgers, Tara (2010), Synthesizing Sound: Metaphor in Audio-Technical Discourse and Synthesis History, PhD thesis, Montreal: McGill University. Schaeffer, Pierre (1967), Solfège de l’objet sonore (1967), re-edition 1998–2005, Foreword, p. 11 (translated by Abbaye Traductions), Paris: Groupe de Recherches Musicales, Institut National de l’Audiovisuel. ———. (1973), La musique concrète, Paris: Presses Universitaires de France. Schafer, R. Murray (1977), The Tuning of the World, New York: Knopf. Simondon, Gilbert (1980, French edition 1958), On the Mode of Existence of Technical Objects, Ninian, London, Ontario: University of Western Ontario (Canada Council). Small, Christopher (1998), Musicking: The Meanings of Performing and Listening, Middletown, CT: Wesleyan University Press. Smalley, Denis (1986), Spectro-Morphology and Structuring Processes, in Emmerson, S. (ed.), The Language of Electroacoustic Music, London: Macmillan, pp. 61–93. ——— (1994), Defining Timbre – Refining Timbre, Contemporary Music Review, 10(2): 35–48. ——— (1997), Spectromorphology: Explaining Sound-Shapes, Organised Sound, 2: 107–126. Smite, Rasa; Smits, Raitis (2000), http://kunstradio.at/PROJECTS/CURATED_BY/RR/mainframe. html Stiegler, Bernard (1998, French edition 1994), Technics and Time:The Faults of Epimetheus, Stanford: Stanford University Press. ——— (2012), Die Aufklärung in the Age of Philosophical Engineering, Ars Industrialis, http://arsindus trialis.org/bernard-stiegler-%C3%A0-www2012 Suarez, Maurizio (1999), The Role of Models in the Application of Scientific Theories: Epistemological Implications, in Morgan, Mary S.; Morrison, Margaret (eds.), Models as Mediators. Perspectives on Natural and Social Science, Cambridge: Cambridge University Press, pp. 168–195. Sueur, Jérôme; Farina, Alsmo (2015), Ecoacoustics:The Ecological Investigation and Interpretation of Environmental Sound, Biosemiotics, 8(3): 493–502. Teruggi, Daniel (2007),Technology and Musique Concrète:The Technical Developments of the Groupe de Recherches Musicales and Their Implication in Musical Composition, Organised Sound, 12(3): 213–231. Thrift, Nigel (2008), Non-Representational Theory: Space, Politics, Affect, London: Routledge. Vaccari, Andrés; Barnet, Belinda (2009), Prolegomena to a Future Robot History: Stiegler, Epiphylogenesis and Technical Evolution, Transformations, 17. 
Whitelaw, Mitchell (2003), Sound Particles and Microsonic Materialism, Contemporary Music Review, 22(4): 93–100. ——— (2013),Transmateriality: Presence Aesthetics and the Media Arts, in Ekman, Ulrik (ed.), Throughout: Art and Culture Emerging With Ubiquitous Computing, Cambridge, MA: MIT Press, pp. 223–235. Windsor, Luke (2000),Through and Around the Acousmatic:The Interpretation of Electroacoustic Sounds, in Emmerson, S. (ed.), Music, Electronic Media and Culture, Aldershot: Ashgate, pp. 7–35. Wrightson, Kendall (1999), An Introduction to Acoustic Ecology, Journal of Electroacoustic Music, 12. Zylinska, Joanna (2014), Minimal Ethics for the Anthropocene, Michigan: Open Humanities Press.
10
MUSIC NEUROTECHNOLOGY
A natural progression

Eduardo Reck Miranda and Joel Eaton
Introduction

Imagine if you could play a musical instrument with signals detected directly from your brain. Would it be possible to generate music that represents brain activity? What would the music of our brains sound like? These are some of the questions addressed by our research into music neurotechnology,1 a new field of research that is emerging at the crossroads of neurobiology, engineering sciences and music. Systems that interact directly with our nervous system (Rosenboom 2003), sonification methods to diagnose brain disorders (Vialatte et al. 2012) and biocomputing devices (Braund and Miranda 2015) are emerging as plausible technologies for musical creativity, which 100 years ago were thinkable perhaps only in the realm of science fiction.

Many recent advances in the neurosciences, especially in computational neuroscience, have led to a deeper understanding of the behaviour of individual and large groups of biological neurones, and we can now begin to apply biologically informed neuronal functional paradigms to problems of design and control, including applications pertaining to music technology and creativity. For instance, we have been exploring the behaviour of computational models of brain functioning to make music with. We find their ability to generate very complex biological-like behaviour from the specification of relatively simple parametric variables compelling and inspiring. They allow for the design of complex sound generators and sequencers controlled by a handful of parameters. We have been looking into designing new musical instruments based on such models. Moreover, a better understanding of the brain combined with the emergence of increasingly sophisticated devices for scanning the brain is enabling the development of musical interfaces with our neuronal systems. These interfaces have tremendous potential to enable access to active music making for people with severe physical impairments, such as severe paralysis after a stroke or accident damaging the spinal cord, in addition to opening the doors to completely new ways to harness creative practices.

This chapter discusses four projects that epitomise the research that we have been conducting in the field of music neurotechnology at Plymouth University's Interdisciplinary Centre for Computer Music Research (ICCMR).2 The first looks into harnessing the behaviour of neuronal tissue cultured in vitro, with a long-term ambition to build hybrid bio-silicon musical processors. The second project concerns developing methods to compose music inspired
and informed by neurobiology; more specifically, we introduce Shockwaves, a violin concertino whose first half was composed using rhythms generated with a simulated neuronal network. Then we introduce our work into developing brain-computer music interfacing (BCMI) technology and present two projects, one aimed at enabling people with physical disabilities to make music and another which explored the potential of BCMI technology for creative musical composition and performance more generally.
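The introduction mentions rhythms generated with simulated neuronal networks. As a rough illustration of the kind of model involved (a stand-in for exposition, not necessarily the network used for Shockwaves), the sketch below steps a single Izhikevich spiking neuron in its bursting 'chattering' regime and reads the resulting spike times as rhythmic onsets; Python with NumPy is assumed and the parameter values are the standard published ones.

```python
import numpy as np

def izhikevich_spikes(current=10.0, duration_ms=2000, dt=0.5,
                      a=0.02, b=0.2, c=-50.0, d=2.0):
    """Simulate one Izhikevich neuron and return its spike times in milliseconds.

    The model: v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u),
    with v reset to c and u incremented by d whenever v reaches +30 mV.
    The default c, d values give the bursting ('chattering') firing pattern."""
    v, u = c, b * c
    spike_times = []
    for step in range(int(duration_ms / dt)):
        # forward-Euler update of membrane potential and recovery variable
        v += dt * (0.04 * v * v + 5 * v + 140 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike: record onset and reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return np.array(spike_times)

# read inter-spike intervals as a rhythmic pattern (values in milliseconds)
onsets = izhikevich_spikes()
rhythm = np.diff(onsets)
print(rhythm[:8])
```

Varying the injected current or the a, b, c, d parameters reshapes the spike patterning from regular pulses to grouped bursts, which is the sense in which a handful of parameters can steer a complex sequencer.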
Sound synthesis with in vitro neuronal networks

Computational paradigms informed by the principles of information processing in physical, chemical and biological systems are promising new avenues for the development of new types of intelligent machines. There has been a growing interest in research into the development of neurochips coupling living brain cells and silicon circuits together. The ambition here is to harness the intricate dynamics of in vitro neuronal networks to perform computations. A number of researchers have been looking into developing ways to culture brain cells in mini Petri-like dishes measuring only a few square millimetres. These devices are referred to as MEA (short for “multi-electrode array”) devices. They are embedded with electrodes that detect the electrical activity of aggregates of cells and stimulate them with electrical pulses. It has been observed that in vitro cultures of brain cells spontaneously branch out, even if they are left to themselves without external input. They have a strong disposition to form synapses, even more so if subjected to electrical stimulation (Potter et al. 2006). Research into hybrid wetware-silicon devices with in vitro neuronal networks has been making continual progress. DeMarse et al. (2001) reported the pioneering development of a neuronally controlled artificial animal – or Animat – using dissociated cortical neurones from rats cultured on a MEA device. Distributed patterns of neuronal activity, also referred to as spike trains, controlled the behaviour of the Animat in a computer-simulated virtual environment. The Animat provided electrical feedback about its movement within its virtual environment to the cells on the MEA device. Changes in the Animat’s behaviour were studied together with the neuronal processes that produced those changes in an attempt to understand how information was encoded and processed by the cultured neurones. Potter et al. (2004) described a similar study, but they used physical robots instead of simulated Animats. In this case, different patterns of spike trains triggered specific robotic movements; e.g., step forward, turn right and so on. The robot was fitted with light sensors and returned brightness information to the MEA as it got closer to the light source. The researchers monitored the activity of the neurones for new signals and emerging neuronal connections. We are interested in developing interactive musical computers based on such neurochips. As an entry point to kick-start our research towards this end, the BioMusic team at ICCMR teamed up with scientists at University of the West of England (UWE), Bristol, to look into developing methods for rendering the temporal behaviour of in vitro neuronal networks into sound (Miranda et al. 2009). The dynamics of in vitro neuronal networks represent a source of rich temporal behaviour, which inspired us to develop and test a number of rendering methods using different sound synthesis techniques, one of which will be introduced later. The UWE team developed a method to extract brain cells from hen embryos at day seven in ovo and maintain them for relatively long periods of time, typically several months (Uroukov et al. 2006). Figure 10.1 shows a typical hen embryo aggregate neuronal culture, also referred to as a spheroid. In our experiments, spheroids were grown in culture in an incubator for 21 days.
Then they were placed into a MEA device in such a way that at least two electrodes made connections into the neuronal network inside the spheroid. One electrode was designated as the
Figure 10.1 Image of a typical hen embryo aggregate neuronal culture on a scanning electron microscope, magnified 2,000 times. Courtesy of Larry Bull, University of the West of England, UK
input by which we applied electrical stimulation and the other as the output from which we recorded the effects of the stimulation on the spheroid’s spiking behaviour. The appropriateness of a connection is ascertained through the recording of the constant spontaneous spiking activity within the spheroid on a given electrode (Uroukov et al. 2006). Electrical stimulation at the input electrode consisted of a train of biphasic pulses of 300 mV each, coming once every 300 ms. This induced change in the stream of spikes at the output electrode, which was recorded and saved into a file.3 The resulting neuronal activity for each session was saved as separate files. Figure 10.2 plots an excerpt lasting for 1 second of typical neuronal activity from one of the sessions. Note that the neuronal network is constantly firing spontaneously. The noticeable spikes of higher amplitude indicate concerted increases of firing activity by groups of neurones in response to input stimuli. We developed and tested a number of methods to sonify the activity of the neuronal network using different synthesis techniques, including FM, AM, subtractive synthesis, additive synthesis and granular synthesis. Here we introduce a method that combined aspects of granular synthesis and additive synthesis. We implemented an additive synthesiser (Miranda 2002) with nine sinusoidal oscillators, which required three input values to generate a tone: frequency (freq), amplitude (amp) and duration (dur). We established that the neuronal data would produce freq and amp values for the first oscillator only. Then the values for the other oscillators were calculated relative to the values of the first oscillator; e.g., freqosc2 = freqosc1 × 0.7, freqosc3 = freqosc1 × 0.6, and so on; the calculation of dur is explained later. The synthesiser was implemented in Csound, and we wrote an application in C++ to generate the respective Csound score files from the data files. Initially, we synthesised a tone for every datum. However, this produced excessively long sounds. In order to address this problem, a data compression technique was developed which preserved the data behaviour that we were interested in sonifying, namely patterns of neuronal activity and induced spikes. For clarity, we firstly describe the method whereby we produced a
Figure 10.2 Plotting of the first second of a data file showing the activity of the spheroid in terms of µV against time. Induced spikes of higher amplitudes took place between 400 ms and 600 ms
tone for every datum. Then we present the method using data compression. We experimented with a number of values on an ad hoc basis and made choices intuitively based on the results obtained; there were no specific a priori criteria. In the case of synthesis of one tone per datum, each datum yielded three values for the synthesiser: frequency (freq), amplitude (amp) and duration (dur). The frequency value is calculated in Hz as follows: freq = (datum × 20) + α. We set α = 440 as an arbitrary reference to 440 Hz; changes to this value produce sonifications at different registers. The synthesiser’s amplitude parameter is a number between 0 and 10. The amplitude is calculated as follows: amp = 2 × log10(abs(datum) + 0.01) + 4.5. This produces a value between 0.5 and 9.5. In order to avoid negative amplitudes we take the absolute value of the datum. Then, 0.01 is added in order to avoid the case of logarithm of 0, which cannot be computed. We later decided to multiply the result of the logarithm by 2 in order to increase the interval between the amplitudes. Since log10(0.01) = −2, if we multiply this result by 2, then the minimum possible outcome would be equal to −4. We add 4.5 to the result because our aim is to assign a positive amplitude value to every datum, even if its value is 0 µV. The duration of each sound is calculated in seconds; it is proportional to the absolute value of the datum, which is divided by a constant c, as follows: dur = abs(datum) / c + t
In the case of the present example, c was set equal to 100. The higher the value of c, the more “granular-like” the results. We add a constant t to the result in order to account for excessively short or possibly null durations (e.g., t = 0.05). The algorithm to yield shorter sounds was implemented as follows: it begins by creating a set with the value of a datum. To start with, this will be the first sample of the data. Then it feeds in the second sample, the third and so on. The value of each incoming sample is compared with the value of the first sample in order to check if they are close to each other according to a given
distance threshold Δ. If the difference between them is lower than Δ, then the incoming datum is stored in the set. Otherwise, the values of all data stored in the set are averaged and used to generate a tone; this provided an efficient way to compress the data while preserving its overall behaviour. Then a new set is created whose first value is the value of the datum that prompted the creation of the last tone, and so forth. In this case, the frequency of a tone is calculated as follows, where n is the minimum value found in the data file that is being sonified and x is the maximum value: freq = ((set_average − n) × 900) / (x − n) + 100
The result is scaled in order to fall in the range between 100 Hz and 1 kHz. The amplitude is calculated as for the case of one tone per datum, as described earlier, with the only difference that the datum is replaced by the set average. The duration is also calculated as described earlier, with the difference that we introduce a bandwidth defined by minimum and maximum duration thresholds. If the calculated duration of a tone falls outside the bandwidth, then the system assigns a predetermined duration value; e.g., the tone is assigned a duration of 0.1 s if its calculated duration is below the minimum threshold. Figure 10.3 shows the cochleogram of an excerpt of a sonification, where one can clearly observe sonic activity corresponding to induced spiking activity.
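By way of a concrete sketch, the mappings just described can be expressed in a few lines of code. The Python fragment below uses the constants given in the text (α = 440, c = 100, t = 0.05); the duration bandwidth, the fallback value and the overall structure are assumptions for illustration only, since the original system generated Csound score files from a C++ application rather than synthesising directly.

```python
import math

ALPHA = 440.0    # register offset in Hz (the chapter's arbitrary reference to 440 Hz)
C = 100.0        # duration divisor; the higher the value, the more "granular" the result
T = 0.05         # constant added to avoid excessively short or null durations (seconds)

def amp_from(value):
    """Map a value in microvolts to the synthesiser's 0-10 amplitude scale (0.5-9.5 in practice)."""
    return 2 * math.log10(abs(value) + 0.01) + 4.5

def tone_per_datum(datum):
    """One tone per datum: freq = (datum x 20) + alpha, dur = abs(datum)/c + t."""
    return datum * 20 + ALPHA, amp_from(datum), abs(datum) / C + T

def compressed_tones(data, delta, dur_min=0.05, dur_max=2.0, fallback=0.1):
    """Average runs of samples that stay within `delta` of the run's first sample,
    then emit one tone per run, with frequency scaled into 100 Hz - 1 kHz."""
    n, x = min(data), max(data)
    tones, group = [], [data[0]]
    for datum in data[1:]:
        if abs(datum - group[0]) < delta:
            group.append(datum)
            continue
        avg = sum(group) / len(group)
        freq = (avg - n) * 900 / (x - n) + 100
        dur = abs(avg) / C + T
        if not dur_min <= dur <= dur_max:   # outside the duration bandwidth
            dur = fallback                  # the chapter gives 0.1 s for the below-minimum case
        tones.append((freq, amp_from(avg), dur))
        group = [datum]                     # the datum that prompted the tone starts the next set
    return tones
```

Feeding the recorded µV values into compressed_tones with a suitable Δ yields one (freq, amp, dur) triple per averaged set, which could then be written out as score lines for the synthesiser.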
Rhythms of spiking neurones

Obviously, more work is needed in order to establish how we might be able to exert controllability and repeatability in systems based on in vitro neuronal networks. However, basic experiments such as the one introduced earlier are useful because they can lead to ideas that we would not have had otherwise. For instance, during our sonification experiments we were amazed by the variety of rhythmic patterns produced by the neurones, which gave us the idea of generating rhythms for a piece of music using a model of a spiking neuronal network. We teamed up with Etienne B. Roesch, a cognitive scientist at the University of Reading, to implement a computer model of a network of spiking neurones. As with the aforementioned experiments with in vitro neurones, when the network model is stimulated with an external signal, the neurones of the network produce bursts of activity, forming streams of rhythmic patterns. The advantages of working with a computer model are that we can set its parameters to
Figure 10.3 Cochleogram of an excerpt of a sonification where spikes of higher amplitude can be seen just after the middle of the diagram
produce different behaviours in a highly controlled manner and can trace the spiking activity of each individual neurone on a raster plot, which is a graph plotting the spikes. In this section we will introduce a method we developed to compose music based on such raster plots and Shockwaves,4 Eduardo Reck Miranda’s violin concertino for orchestra, percussion and electronics. One of the musical ideas that the composer wanted to convey in this composition is the notion of a rhythmic structure that would emerge from widely dispersed events in the timeline. These somewhat pointillist events would become increasingly frequent and complex as the piece evolved, which would then lead to a regular rhythm resembling a samba school. This is the form of the first half of the concertino and was composed entirely with raster plots. In a nutshell, the composer orchestrated raster plots by allocating each instrument of the orchestra to a different neurone of the network simulation. Each time a neurone produced a spike, its respective instrument was prompted to play a certain note. The actual pitches for the notes were assigned based upon a series of chords, which served as a framework to make simultaneous spikes sound in harmony. Our implementation is based on a model that simulates the spiking behaviour of biological neurones, developed by computational neuroscientist Eugene Izhikevich. A biological neurone aggregates the electrical activity of its surroundings over time until it reaches a given threshold. At this point it generates a sudden burst of electricity, or spike, referred to as an action potential. The model has a number of parameters which define how the neurones behave; for instance, one of the parameters defines the spiking threshold, or sensitivity of the neurones to release an action potential.5 As we ran the model, each action potential produced by a neurone was registered and transmitted to other neurones, producing waves of activation which spread over the entire network. A raster plot showing an example of such collective spiking behaviour, taken from a simulation of a network of 50 neurones, is shown in Figure 10.4. This results from a simulation of the activity of this group of 50 artificial neurones over a period of 10 seconds: the neurones are numbered on the y-axis (with neurone number 1 at the bottom and neurone number 50 at the top) and time, which runs from zero to 10,000 milliseconds, is on the x-axis. Every time one such neurone spikes, a dot is placed on the graph at the respective time. Figure 10.4 shows periods of intense collective spiking activity separated by quieter moments. These moments of relative quietness in the network are due to the refractory period during which neurones that have spiked remain silent as their electrical potentials decay back to a baseline value. Of course, the network needs to be stimulated to produce such patterns of activation. For the composition of Shockwaves, the network was stimulated with a sinusoidal signal that was input to all neurones of the network simultaneously. Generally speaking, the amplitude of this signal controlled the overall intensity of firing through the network. For instance, the bottom of Figure 10.5 shows a raster plot generated by a network of 50 spiking neurones stimulated by the sinusoid shown at the top of the figure.
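A minimal sketch of this kind of simulation is given below. It uses Izhikevich's regular-spiking parameter values, a random excitatory coupling matrix and a common sinusoidal drive; the interpretation of the chapter's 0 to 5 "sensitivity" value as a simple input gain is an assumption, and none of this should be taken as the implementation actually used for Shockwaves.

```python
import numpy as np

def simulate(n=50, duration_ms=10_000, dt=1.0, drive_amp=2.0, drive_freq=0.5,
             sensitivity=4.4, seed=0):
    """Izhikevich-style network driven by a common sinusoid; returns (time_ms, neurone) spikes."""
    rng = np.random.default_rng(seed)
    a, b = 0.02, 0.2                          # regular-spiking parameters
    c = -65.0 + 15 * rng.random(n) ** 2       # per-neurone reset values (standard heterogeneity)
    d = 8.0 - 6 * rng.random(n) ** 2
    w = 0.5 * rng.random((n, n))              # random excitatory coupling (assumption)
    v = -65.0 * np.ones(n)
    u = b * v
    fired = np.zeros(n, dtype=bool)
    spikes = []
    for step in range(int(duration_ms / dt)):
        t = step * dt / 1000.0
        drive = drive_amp * 0.5 * (1 + np.sin(2 * np.pi * drive_freq * t))   # 0.5 Hz, non-negative
        # input = scaled sinusoid + synaptic input from neurones that spiked last step + noise
        I = sensitivity * drive + w[:, fired].sum(axis=1) + rng.standard_normal(n)
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        fired = v >= 30.0                     # action potentials this millisecond
        for idx in np.flatnonzero(fired):
            spikes.append((step * dt, int(idx)))
        v[fired] = c[fired]                   # reset after a spike
        u[fired] += d[fired]
    return spikes
```

Plotting the returned (time, neurone) pairs as dots gives a raster plot of the kind shown in Figures 10.4 to 10.6.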
As the undulating line at the top of Figure 10.5 rises, the spiking activity is intensified.6 In order to compose Shockwaves, the network was set up with 50 neurones and the simulation was run several times, lasting for 10 seconds each. For all runs of the simulation the stimulating sinusoid was set to a frequency of 0.5 Hz, which means that each cycle of the wave lasted for 2 seconds. Therefore, each simulation took five cycles of the wave, which can be seen at the top of Figures 10.5 and 10.6, respectively. The amplitude of the sinusoid and the sensitivity of the neurones to fire varied for each run of the simulation. The amplitude of the model’s stimulating signal could be varied from 0.0 (no power at all) to 5.0 (maximum power), and the sensitivity of the neurones could be varied from 0.0 (no sensitivity at all; would never fire) to 5.0 (very sensitive). For instance, for one of the first
Figure 10.4 A raster plot illustrating collective firing behaviour of a simulated network of spiking neurones. Neurone numbers are plotted (y-axis) against time (x-axis) for a simulation of 50 neurones over a period of 10 seconds. Each dot represents a spiking event
Figure 10.5 At the top is a sinusoid signal that stimulated the network that produced the spiking activity represented by the raster plot at the bottom
Figure 10.6 An example of a run of the simulation that produced sparse spiking activity because the amplitude of the sine wave and the spiking sensitivity of the neurones were set relatively low
runs of the simulation the power of the stimulus was set to 1.10 and sensitivity of the neurones to 2.0 (Figure 10.6), whereas in a late run these were set to 2.0 and 4.4, respectively (Figure 10.5). One can see that the higher the power of the stimulus and the higher the sensitivity, the more likely the neurones are to fire, and therefore the more spikes the network produces overall. The composer established that each cycle of the stimulating sinusoid would produce spiking data for three measures of music (with time signature equal to 4/4). This was established intuitively, after experimenting with the density of spiking activity produced with a range of different amplitudes for the sinusoid. More than three measures would produce too many notes at maximum sinusoid amplitude (i.e., high density of spikes), and less than three measures would produce excessively long periods of silence at lower amplitudes (i.e., very few spikes). In order to transcribe the spikes as musical notes he decided to adjust them to fit a metric of semiquavers. Then, he associated each instrument of the orchestra to a neurone, as shown in Table 10.1. As the orchestra comprised 34 instruments, only the first 34, counting from the bottom of the raster plots upwards, were used.7 The compositional process progressed through three major steps: the establishment of a rhythmic template, the assignment of pitches to the template and the articulation of the musical material. In order to establish the rhythmic template, firstly the composer transcribed the spikes as semiquavers onto the score. Figure 10.7 shows an excerpt of the result of this transcription for a section of the strings. The spikes were converted into musical notes manually. The raster plots for each cycle of the stimulating signal were printed and enlarged on a photocopier. Then a template drawn on an acetate sheet was placed on top of each print to establish the positions of the spikes and transcribe them into musical notation (Figure 10.8). In order to forge more musically plausible rhythmic figures, the durations of the notes and rests were altered slightly, while preserving the original spiking pattern. Figure 10.9 shows the
Table 10.1 Instruments are associated to neurones of the network. Each instrument plays the spikes produced by its respective neurone.

Neurone   Instrument        Neurone   Instrument
1         Contrabass        18        Solo violin
2         Cello 3           19        Snare drum
3         Cello 2           20        Maracas
4         Cello 1           21        Bass drum
5         Viola 3           22        Cymbal
6         Viola 2           23        Wood blocks
7         Viola 1           24        Tubular bells
8         2nd Violin 4      25        Timpani
9         2nd Violin 3      26        Trumpet 2
10        2nd Violin 2      27        Trumpet 1
11        2nd Violin 1      28        Horn 2
12        1st Violin 6      29        Horn 1
13        1st Violin 5      30        Bassoon 2
14        1st Violin 4      31        Bassoon 1
15        1st Violin 3      32        Oboe 2
16        1st Violin 2      33        Oboe 1
17        1st Violin 1      34        Flute
Figure 10.7 Transcribing spikes from a raster plot as semiquavers on a score
Figure 10.8 The process of converting the spikes into musical notes was done manually
Figure 10.9 Resulting rhythmic template
new version of the score shown in Figure 10.7 after this process. Figure 10.10 shows the final result of the compositional process, with pitches and articulation. As mentioned earlier, the notes for the rhythmic template (Figure 10.9) were assigned based on a chord progression. Pitches were assigned differently as the piece progressed; for instance, sometimes a chord provided pitches for a beat of the 4/4 measure, but at some other times a
Figure 10.10 The resulting musical passage
single chord provided pitches through various measures. In general, those figures to be played by instruments of lower tessitura were assigned the lower pitches of the chords, and those to be played by instruments of higher tessitura were assigned the higher pitches, and so on. There were occasions where pitches were transposed one octave upwards or downwards in order to best fit specific contexts or technical constraints of the various instruments. Other adjustments also occurred during the process of articulating the musical materials; for example, pitches might have been changed in order to render a specific passage more idiomatic for the respective instrument and/or forge smoother voice leading. This strategy worked well for the composer’s ears, and the conductor welcomed a score that did not require long hours of rehearsal to practice passages that would otherwise have been technically cumbersome to play.
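As described above, the transcription for Shockwaves was done entirely by hand with printed raster plots and an acetate template. For readers who want to experiment with the idea computationally, the sketch below outlines an automated equivalent; the timing constants follow the chapter (a 2-second sinusoid cycle mapped onto three 4/4 measures of semiquavers), while the data structures and function names are hypothetical.

```python
CYCLE_MS = 2000.0                   # one cycle of the 0.5 Hz stimulating sinusoid
SEMIQUAVERS_PER_CYCLE = 3 * 16      # each cycle maps onto three 4/4 measures of semiquavers

def transcribe(spikes, instruments):
    """Snap each spike to the nearest semiquaver position and tag it with its instrument.
    `spikes` is a list of (time_ms, neurone_index) pairs with indices counted from 0;
    `instruments` maps the 1-based neurone numbers of Table 10.1 to instrument names.
    Neurones without an entry (35-50 in the final scoring) are simply ignored."""
    grid_ms = CYCLE_MS / SEMIQUAVERS_PER_CYCLE      # roughly 41.7 ms per semiquaver
    events = set()
    for time_ms, neurone in spikes:
        name = instruments.get(neurone + 1)
        if name is not None:
            events.add((round(time_ms / grid_ms), name))
    return sorted(events)

# e.g. neurone 0 -> "Contrabass", neurone 17 -> "Solo violin", following Table 10.1
```

Pitch assignment and articulation, as the chapter makes clear, remained compositional decisions taken by hand over the resulting rhythmic template.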
Brain-computer music interfacing

Brain-computer interfacing technology, or BCI, allows a person to control devices by means of commands expressed by brain signals, which are detected through brain monitoring technology (Dornhege et al. 2007). We are interested in developing brain-computer interfacing technology for music, or BCMI,8 aimed at people with special needs and music therapy, in particular for people with severe physical disability who have relatively standard cognitive functions. Severe brain injury, spinal cord injury and locked-in syndrome result in weak, minimal or no active movement, which therefore prevents the use of gesture-based devices. These patient groups are currently either excluded from music recreation and therapy or are left to engage in a less active manner through listening/receptive methods only (Miranda et al. 2011). Currently, the most viable and practical method of detecting brain signals for BCMI is through the electroencephalogram, abbreviated as EEG,9 recorded with electrodes placed on the scalp. The EEG expresses the overall electrical activity of millions of neurones, but it is a difficult
signal to handle because it is extremely faint and is filtered by the membranes that separate the cortex from the skull (meninges), the skull itself, and the scalp. This signal needs to be amplified significantly and harnessed through signal processing techniques in order to be used in a BCI or a BCMI (Miranda 2010). In general, power spectrum analysis is the most commonly used method to analyse the EEG signal.10 In simple terms, power spectrum analysis (typically computed with the fast Fourier transform, or FFT) breaks the EEG signal into different frequency bands and reveals the distribution of power between them. This is useful because it is believed that specific distributions of power in the spectrum of the EEG can encode different cognitive behaviours (Miranda and Castet 2014). As far as BCI systems are concerned, the most important frequency activity in the EEG spectrum lies below 40 Hz. There are five, possibly six, recognised bands of EEG activity below 40 Hz, also referred to as EEG rhythms, which are often associated with specific states of mind. For instance, the frequencies between 8 Hz and 13 Hz are referred to as alpha rhythms and are usually associated with a state of relaxed wakefulness; e.g., as in a state of meditation. The exact boundaries of these bands are not so clearly defined, and the meaning of these associations can be contentious. In practice, however, the exact meaning of EEG rhythms is not so crucial for a BCI system. What is crucial is to be able to establish whether or not users can produce power within distinct frequency bands voluntarily. For instance, alpha rhythms have been used to implement an early proof-of-concept BCMI system, which enabled a person to switch between two types of generative algorithms to produce music on a MIDI-controlled Disklavier piano in the style of Robert Schumann (when alpha rhythms were detected in the EEG) and Ludwig van Beethoven (when alpha rhythms were not detected) (Miranda 2006). Broadly speaking, there are two approaches to steering the EEG for a BCI: conscious effort and operant conditioning. Conscious effort induces changes in the EEG by engaging in specific cognitive tasks designed to produce specific EEG activity (Miranda et al. 2004; Curran and Stokes 2003). The cognitive task that is most often used in this case is motor imagery because it is possible to detect changes in the EEG of a subject imagining the movement of a limb, such as a hand (Dornhege et al. 2007). Operant conditioning involves the presentation of a task in conjunction with some form of feedback, which allows the user to control the system without having to think about the task at hand explicitly (Kaplan et al. 2005). In between these two aforementioned approaches sits a paradigm referred to as evoked potentials. Evoked potentials (EP) are spikes that appear in the EEG in response to external stimuli. EP can be evoked from auditory, visual or tactile stimuli producing auditory (AEP), visual (VEP) and somatosensory11 (SSEP) evoked potentials, respectively. It is extremely difficult to detect the electrophysiological response to a single event in an ongoing EEG stream. However, if the person is subjected to repeated stimulation at short intervals (e.g., 9 repetitions per second, or 9 Hz) then the brain’s response to each subsequent stimulus is evoked before the response to the prior stimulus has decayed. Thus, rather than being allowed to return to a baseline state, a so-called steady-state response can be detected.
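In code, the band-power measurement that underpins such systems can be sketched as follows; the detection rule and threshold below are illustrative only and are not the implementation used in any of the systems described here.

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Power of one EEG channel within the [lo, hi] Hz band, from an FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg - np.mean(eeg))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def alpha_present(eeg, fs, threshold=0.4):
    """Crude switch of the kind behind the Schumann/Beethoven proof of concept:
    report 'alpha detected' when the 8-13 Hz band holds more than `threshold`
    of the total sub-40 Hz power. The threshold is an illustrative tuning constant."""
    return band_power(eeg, fs, 8, 13) / (band_power(eeg, fs, 1, 40) + 1e-12) > threshold
```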
Steady-state visual evoked potential (SSVEP) is a robust paradigm for a BCI, on the condition that the user is not severely visually impaired. Typically, visual targets are presented to a user on a computer monitor representing tasks to be performed. These could be spelling words from an alphabet or selecting directions for a wheelchair to move, and so on. Each target is encoded by a flashing visual pattern reversing at a unique frequency. In order to select a target, the user must simply direct gaze at the flashing pattern corresponding to the action he or she would like to perform. As the user’s spotlight of attention falls over a particular target, the frequency of the unique pattern reversal rate can be accurately detected in his or her EEG through spectral analysis. It is possible to classify not only a user’s choice of target but also the extent to which he
or she is attending the target. This gives scope for SSVEP-based BCI systems where each target is not a simple binary switch but can represent an array of options depending on the user’s level of attention. Effectively, each target of an SSVEP-based BCI system can be implemented as a switch with a potentiometer. This immediately suggests a number of musical applications. In 2011 we completed the implementation of our first SSVEP-based BCMI system, which we tested with a patient with locked-in syndrome at the Royal Hospital for Neuro-disability, in London. The system comprised four targets, as shown on the computer screen in front of the patient in Figure 10.11. Each target image represents a different musical instrument and a sequence of notes (Figure 10.12). Each image flashes reversing its colour (in this case the colour was red) at different frequencies: 7 Hz, 9 Hz, 11 Hz and 15 Hz, respectively. Thus, for instance, if the person gazes at the image flashing at 15 Hz, then the system will activate the xylophone and will produce a melody using the sequence of 6 notes that was associated with this target; these notes are
Figure 10.11 A patient with locked-in syndrome testing the BCMI system
Figure 10.12 Each target image is associated with a musical instrument and a sequence of notes
set beforehand, and the number of notes can be other than 6. The more the person attends to this icon, the more prominent is the magnitude of the brain’s SSVEP response to this stimulus, and vice versa. This produces a varying control signal which is used to make the melody. Also, it provides visual feedback to the user; the size of the icon increases or decreases as a function of this control signal. The melody is generated as follows: the sequence of six notes is stored in an array whose index varies from 1 to 6. The amplitude of the SSVEP signal is normalised so that it can be used as an index sliding up and down through the array. As the signal varies, the corresponding index triggers the respective musical notes stored in the array (Figure 10.13). The system requires just three electrodes on the scalp of the user: a bipolar pair placed on the region of the visual cortex and a ground electrode placed on the front of the head. Filters were programmed to reduce interference of AC mains noise and artefacts such as those generated by blinking eyes or moving facial muscles. SSVEP data was then filtered via band-pass filters to measure the band power across the frequencies correlating to the flashing stimuli. The patient took approximately 15 minutes to learn how to use the system, and she was able to quickly learn how to make melodies by increasing and decreasing the level of her SSVEP signal. We collected suggestions and criticism from the staff of the hospital and the patient with respect to improvements and potential further developments (Miranda et al. 2011). Two important challenges emerged from this exercise:
1 Everyone felt that the music sounded mechanical and it lacked expressivity because the system produced synthesised sounds. As far as the patients and the hospital’s professionals were concerned, it would be preferable to make music with real acoustic musical instruments.
2 Our system enabled a one-to-one interaction with a musical system. However, it was immediately apparent that it would be desirable to design a system that would promote interaction amongst the participants. Therefore, our BCMI system should enable a group of participants to make music together.
Figure 10.13 Notes are selected according to the level of the SSVEP signal
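The mapping from SSVEP level to notes can be sketched as follows; the note names and the calibration bounds are hypothetical, and in the real system the magnitude comes from the band-pass filtered EEG described above.

```python
def note_from_ssvep(ssvep_magnitude, notes, mag_min, mag_max):
    """Normalise the SSVEP magnitude for the gazed-at target into an index that slides
    up and down the target's preset note array. The calibration bounds mag_min/mag_max
    are per-user values assumed here purely for illustration."""
    level = (ssvep_magnitude - mag_min) / (mag_max - mag_min)
    level = min(max(level, 0.0), 1.0)                       # clamp to 0-1
    index = min(int(level * len(notes)), len(notes) - 1)    # 0 .. len(notes)-1
    return notes[index]

# e.g. the xylophone target (15 Hz) with a hypothetical six-note sequence
print(note_from_ssvep(0.8, ["C4", "D4", "E4", "G4", "A4", "C5"], 0.0, 1.0))
```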
Activating Memory and The Paramusical Ensemble

In order to address the aforementioned challenges, we adopted a slightly different research methodology. We started by imagining a musical composition and a performance scenario first, and then we considered how that would work in practice with our BCMI technology. In order to address the issue of lack of expressivity we came up with the idea that the patient would generate a score on the fly for a human musician to sight-read, instead of relaying it to a synthesiser. In order to promote group interaction we established that the composition would be generated collectively by a group of participants. However, the generative process would be simple and clearly understood by the participants. Also, the brain-controlling participants would need to clearly feel that they have control of what is happening with the music. Moreover, everyone involved would need to agree that the outcome sounds musical; whatever “musical” means, it should be an enjoyable experience. Clearly, these were not trivial tasks. In the end, we established that the act of generating the music collectively and in real time would have to be like playing a musical game, but with no winners or losers. We thought of designing something resembling a game of dominoes; that is, musical dominoes, played by sequencing blocks of pre-composed musical phrases selected from a pool. Finally, we created the concept of a musical ensemble where severely physically disabled and non-disabled musicians make music together: The Paramusical Ensemble. By way of related work, the concert of the British Paraorchestra12 at the opening of the London Olympic Games in 2012 came to mind. However, the work introduced here addresses the development of bespoke technology and musical composition for a very specific type of impairment that is not tackled by the British Paraorchestra: brain-computer music interfacing for locked-in syndrome. The result is Eduardo Reck Miranda’s Activating Memory, a piece for eight participants, a string quartet and a BCMI quartet, and a new version of the SSVEP-based system. Each member of the BCMI quartet is furnished with the SSVEP-based BCMI system, which enables him or her to generate a musical score in real time. Each generates a part for the string quartet, which is displayed on a computer screen for the respective string performer to sight-read during the performance (Figure 10.14).
Figure 10.14 A rehearsal of The Paramusical Ensemble, with locked-in syndrome patients performing Activating Memory
This new BCMI system works similarly to the one described earlier, with the fundamental difference that the visual targets are associated with short musical phrases. Instead of flashing images on a computer monitor, we built a device with flashing LEDs and LCD screens to display what the LEDs represent (Figures 10.15 and 10.16). This new device increased the SSVEP response to the stimuli because it enabled us to produce more precise flashing rates than the ones we were able to produce using the standard computer monitors. Moreover, the LCD screens provided an efficient way to change the set of options available for selection. Subliminally, it promotes the notion that one is using a bespoke musical device to interact directly with others rather than via a computer. Activating Memory is generated on the fly by sequencing four voices of predetermined musical sections simultaneously. For each section, the system provides four choices of musical phrases, or riffs, for each part of the string quartet, which are selected by the BCMI quartet (Figure 10.17). The selected riffs for each instrument are relayed to the computer monitors facing the string quartet for sight-reading. While the string quartet is playing the riffs for a section, the system provides the BCMI quartet with another set of choices for the next section. Once the current section has been played, the chosen new riffs for each instrument are subsequently relayed to the musicians, and so on. In order to give enough time for the BCMI quartet to make choices, the musicians repeat the respective riffs four times. The system follows an internal metronome, which guarantees synchronisation.
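The control flow of this riff-sequencing game can be summarised as follows; all the function and parameter names are illustrative, and the sketch serialises steps that run concurrently in the real system (the BCMI quartet makes its next choices while the string quartet is still playing).

```python
def perform(sections, choose, relay, play, repeats=4):
    """`sections` is a list of dicts mapping each string part to its four candidate riffs;
    `choose(options)` returns {part: index 0-3}, standing in for the four SSVEP selections;
    `relay(riffs)` puts the chosen riffs on the players' screens for sight-reading;
    `play(riffs, repeats)` runs the metronome-driven playback of one section."""
    chosen = choose(sections[0])                  # choices for the opening section
    for i, section in enumerate(sections):
        riffs = {part: options[chosen[part]] for part, options in section.items()}
        relay(riffs)                              # update the sight-reading screens
        if i + 1 < len(sections):
            chosen = choose(sections[i + 1])      # offer the next section's four options per part
        play(riffs, repeats)                      # each riff is repeated four times
```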
Figure 10.15 Photo of our new SSVEP stimuli device. In this photograph, the LCD screens are showing numbers, but in Activating Memory they display short musical phrases, such as the ones shown in Figure 10.16
Figure 10.16 Detail from the SSVEP stimuli device, showing a short musical phrase displayed on the LCD screen
Figure 10.17 An example of two sets of four musical riffs on offer for two subsequent sections of the violoncello part
Activating Memory had been publicly performed on a number of occasions with members of the ICCMR laboratory before we performed with The Paramusical Ensemble. This allowed us to make final adjustments to the system and music. The Paramusical Ensemble’s first public performance of Activating Memory took place on 17 July 2015 at the Royal Hospital for Neuro-disability in Putney, London.13
A Stark Mind

A Stark Mind is a live audiovisual performance piece designed by Joel Eaton that expands the control on offer in Activating Memory by using a hybrid BCMI: an interface that combines more than one method of EEG detection. Designed for a hybrid BCMI performer and a trio of musicians playing violin, viola and percussion (Figure 10.18), the system provides the BCMI performer with a mix of conscious and unconscious control over multiple musical parameters at the same time. During a performance, the BCMI performer’s objective is to conduct the musicians by controlling a visual score. The score is projected onstage for both the audience to see and the musicians to sight-read. Unlike the traditional musical notation used in Activating Memory, the score for A Stark Mind consists of colourful, abstract visual patterns. The decision for this was two-fold. Firstly, an abstract score allows for much more varied artistic interpretation, and although the graphics have direct musical connotations that are translated by the musicians, the option for musical variety is much wider, and this sense of freedom can help push the music in new directions, allowing the musicians to work together in novel ways, making every performance different. Secondly, projecting the score onstage draws the audience in closer to the performance and allows them to see how the hybrid BCMI performer’s brainwaves are able to control the visual display and conduct the musicians at the same time, without having to be able to read musical notation. The primary method of control in A Stark Mind is, again, the SSVEP technique. However, for this piece we have expanded the SSVEP control to allow eight channels provided by combining two of our stimuli units. The SSVEP choices allow the hybrid BCMI performer to select visual patterns and effects that correspond to different musical phrases and instrumental playing techniques. In addition, there are two other methods of brainwave control in A Stark Mind. The SSVEP technique is considered to be an active method of user control because the user is able to choose which icon to select; affective responses, by contrast, can be considered as a less precise means of control because it is difficult to precisely identify signatures of emotion in the EEG. Nevertheless, measuring emotional indicators, known as affective responses, in EEG offers a particularly interesting area for creative exploration, especially when considering the role music
Figure 10.18 Photo of musicians and hybrid BCMI performer (centre, rear) preparing to begin a performance of A Stark Mind. At the start of the performance the visual score is projected on the back of the stage for the musicians, hybrid BCMI performer and the audience to see
plays in influencing the emotions of a listener (Schmidt and Trainor 2001). As such, emotional control for music making presents itself as a natural pairing due to the emotional associations inherent with music for many listeners. Levels of arousal and valence, two indicators of affect, are commonly detected from EEG measured in the frontal cortex area of the brain. Russell’s two-dimensional model of affect arranges emotions around a circular form according to arousal (vertical axis) and valence (horizontal axis) and provides a way of parameterising emotional responses to music in two dimensions (Russell 1980). In the hybrid BCMI we take arousal as the measure of mental activation and valence, measured as the symmetry across the left and right hemispheres of the brain, as an indication of either a positive or negative engagement with the music. During performances of A Stark Mind changes in valence and arousal are mapped to parameters of the visual score to invoke musical changes associated with different affective states. For example, if the hybrid BCMI performer’s measure of arousal decreases during one time window of analysis, indicating a move towards a state of “calm”, the playback speed of a particular visual pattern will increase by a corresponding amount. This conducts the musicians to play faster during the next time window. This increase in musical tempo has the knock-on effect of increasing the hybrid BCMI performer’s arousal, which the system would target as “excited”, and so the mapping of arousal (and also of valence) is used to regulate the affective states of the hybrid BCMI performer during the performance by responding to his or her affective changes in real time. This provides a novel approach to using emotional indicators in EEG to control music and also induce affective states in a manner that adds an element of unpredictability and variance to the live performance. In addition to SSVEP control and affective response, a third – and another active – method of control is used. Motor imagery is a technique where a user imagines a specific physical movement. When programmed accordingly, the BCMI records the difference in brainwave patterns between imagining the movement and relaxation. This ability to detect distinctions in imaginations presents a particularly fascinating stage in the development of BCMI control, as it is closely linked to reading thoughts by being able to distinguish mental actions without any external stimulus present. Sensorimotor rhythms are idle oscillatory waves that reduce in amplitude during motor imagery, providing a means of voluntary control. In the hybrid BCMI, motor imagery is measured by the detection of such amplitude reduction, known as event-related desynchronisation (ERD), in alpha rhythms across the motor cortex. In practice, if users perform a motor imagery task such as imagining squeezing their right hand, ERD is expected in the alpha-band power over the left motor cortex (note that the left motor cortex is contralateral to the right hand). The imagination of a relaxed motor task has the opposite effect of increasing the alpha-band power back to the idle state (Daly et al. 2014). In the hybrid BCMI, monitoring which states are active is applied during SSVEP gazing as an extended control option – a switch that can be used to add an extra mapping to an SSVEP choice.
For example, when SSVEP is used to select a specific visual pattern for the string instruments to play, a motor imagery extension can allow the hybrid BCMI to select either the viola or the violin. The integration of three control methods in the hybrid BCMI not only increases the number of options available for a user but also allows for simultaneous musical control across three EEG dimensions (Figure 10.19). This simultaneous control provides the BCMI equivalent of polyphony, a concept ingrained in many traditional musical interfaces. By combining two methods of active control (SSVEP and motor imagery) with the passive control method of mapping affective responses to music, A Stark Mind demonstrates a unique application of how BCMI systems can push the boundaries of creativity in computer music.
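The kinds of EEG features described here can be sketched in code as follows. The formulations of arousal (beta power relative to alpha) and valence (frontal alpha asymmetry), and the ERD threshold, are common in the literature but are offered only as illustrations; they are not necessarily the exact measures used in A Stark Mind.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Power of an EEG channel within [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def affective_state(left_frontal, right_frontal, fs):
    """Arousal as overall frontal activation (beta relative to alpha); valence as the
    left/right asymmetry of frontal alpha power. One common formulation among several."""
    alpha_l = band_power(left_frontal, fs, 8, 13)
    alpha_r = band_power(right_frontal, fs, 8, 13)
    beta_l = band_power(left_frontal, fs, 13, 30)
    beta_r = band_power(right_frontal, fs, 13, 30)
    arousal = (beta_l + beta_r) / (alpha_l + alpha_r + 1e-12)
    valence = np.log(alpha_r + 1e-12) - np.log(alpha_l + 1e-12)
    return arousal, valence

def right_hand_imagery(left_motor, fs, idle_alpha, erd_ratio=0.7):
    """Motor-imagery switch: flag event-related desynchronisation (ERD) when alpha power
    over the left motor cortex drops well below its idle baseline `idle_alpha`."""
    return band_power(left_motor, fs, 8, 13) < erd_ratio * idle_alpha
```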
Figure 10.19 Diagram of the hybrid BCMI system for A Stark Mind. Features from the three methods of control are extracted from EEG and mapped to parameters of the visual score through transformation algorithms
Concluding remarks

In this chapter we introduced four illustrative projects in the field of music neurotechnology, ranging from the development of basic research and pragmatic systems targeted at medical applications to more artistic and creative works. We hope to have demonstrated how biology, more specifically neurobiology, can inform and inspire musical research and new developments in music technology. Practical outcomes from research into developing biochips with in vitro neuronal networks are likely to be something for the distant future. We reckon that progress on this front will occur in tandem with progress in the field of synthetic biology, which is looking into synthesising neurones artificially. In the meantime, computer models provide a viable way to explore the behaviour of neuronal networks for composition. As for the BCMI research, currently most EEG-music initiatives do not employ adequate hardware. Unfortunately, there are a number of low-cost pieces of EEG equipment on the market that do not perform as well as their manufacturers advertise. Practitioners’ general lack of technical knowledge means that these are commonly used due to budgetary constraints, which ends up being a false economy: research progress is hindered due to lack of EEG measurement precision. Moreover, EEG-music initiatives are largely based on rather simplistic direct sonification of the EEG signal, which is often contaminated by noise due to the low quality of the equipment used. It is hoped that cost-effective equipment of reasonably good quality becomes more affordable in the near future, and artists might soon benefit from more scientifically robust techniques to use EEG to control musical systems. As the kinds of EEG control methods that we tested in the piece A Stark Mind evolve in sophistication, we hope that more possibilities will be available for BCMI designers and composers.
Notes

1 The term ‘music neurotechnology’ appeared in print for the first time in 2009 in the editorial of Computer Music Journal, volume 33, number 1, page 1.
2 Created in 2003, ICCMR develops musical research and provides post-graduate training at the crossroads of art and science. Its research expertise ranges from musicology and composition to biomedical applications of music and development of new music technologies. Website: http://cmr.soc.plymouth.ac.uk/
3 The reader is invited to consult (Miranda et al. 2009) for more information about the stimulation and observed behaviour of the network.
4 Shockwaves was premiered on 20 June 2015 in The House, Plymouth, by Ten Tors Orchestra under the baton of Simon Ible, with Pierre-Emmanuel Largeron on the solo violin. A recording is available on SoundCloud: https://soundcloud.com/ed_miranda/shockwaves
5 A detailed explanation of the model is beyond the scope of this chapter; please refer to (Izhikevich 2007) for more information.
6 Note that a more complex signal could replace the sinusoid; for instance, a sound other than a sinusoid could be used to stimulate the network. In this case, the raster plots would look much more complex than the ones shown in this chapter.
7 Initially, the composition was planned for an orchestra of 50 instruments, but due to unforeseen circumstances the commission ended up being for an ensemble of 34 instruments.
8 The expression “brain-computer music interfacing”, or BCMI, was coined by the ICCMR team to denote BCI systems for musical applications and it has been generally adopted by the research community.
9 The EEG is a measurement of brainwaves detected using electrodes placed on the scalp. It is measured as the voltage difference between two or more electrodes on the surface of the scalp, one of which is taken as a reference. Other methods for measuring brain activity include MEG (magnetoencephalography), PET (positron emission tomography) and fMRI (functional magnetic resonance imaging), but they are not practical for BCI.
10 Please refer to (Miranda and Castet 2014) for an overview of EEG analysis methods.
11 Our somatosensory system informs us about objects in our external environment through touch and about the position and movement of our body parts (proprioception) through the stimulation of muscle and joints. The somatosensory systems also monitor the temperature of the body, external objects and environment and provide information about painful, itchy and tickling stimuli. In the context of this chapter, however, we are concerned with visual stimuli.
12 www.paraorchestra.com/
13 A video documentary is available on Vimeo: https://vimeo.com/143363985, and a studio recording of one of the millions of possible renderings of Activating Memory is available on SoundCloud: https://soundcloud.com/ed_miranda/activating-memory
References

Braund, E. and Miranda, E. (2015). BioComputer Music: Generating Musical Responses With Physarum Polycephalum-Based Memristors. Proceedings of 11th Computer Music Multidisciplinary Research (CMMR15): Music, Mind, and Embodiment, Plymouth University, Plymouth, UK. Available online, accessed 7 July 2015, http://cmr.soc.plymouth.ac.uk/publications/CMMR2015EBEM.pdf
Curran, E. A. and Stokes, M. J. (2003). Learning to Control Brain Activity: A Review of the Production and Control of EEG Components for Driving Brain-Computer Interface (BCI) Systems. Brain and Cognition, 51(3): 326–336.
DeMarse, T., Wagenaar, D. A., Blau, A. W. and Potter, S. M. (2001). The Neurally Controlled Animat: Biological Brains Acting With Simulated Bodies. Autonomous Robots, 11(3): 305–310.
Dornhege, G., del Millan, J., Hinterberger, T., McFarland, D. and Muller, K-R. (Eds.) (2007). Toward Brain-Computer Interfacing. Cambridge, MA: The MIT Press.
Izhikevich, E. M. (2007). Dynamical Systems in Neuroscience. Cambridge, MA: The MIT Press. ISBN 978-0262090438.
Kaplan, A., Ya Kim, J. J., Jin, K. S., Park, B. W., Byeon, J. G. and Tarasova, S. U. (2005). Unconscious Operant Conditioning in the Paradigm of Brain-Computer Interface Based on Color Perception. International Journal of Neurosciences, 115: 781–802.
Miranda, E. R. (2002). Computer Sound Design: Synthesis Techniques and Programming. Oxford, UK: Elsevier/Focal Press.
———. (2006). Brain-Computer Music Interface for Composition and Performance. International Journal on Disability and Human Development, 5(2): 119–125.
———. (2010). Plymouth Brain-Computer Music Interfacing Project: From EEG Audio Mixers to Composition Informed by Cognitive Neuroscience. International Journal of Arts and Technology, 3(2/3): 154–176.
Miranda, E. R., Bull, L., Gueguen, F. and Uroukov, I. S. (2009). Computer Music Meets Unconventional Computing: Towards Sound Synthesis With in Vitro Neuronal Networks. Computer Music Journal, 33(1): 9–18.
Miranda, E. R. and Castet, J. (eds.) (2014). Guide to Brain-Computer Music Interfacing. London: Springer.
Miranda, E. R., Magee, W., Wilson, J. J., Eaton, J. and Palaniappan, R. (2011). Brain-Computer Music Interfacing (BCMI): From Basic Research to the Real World of Special Needs. Music and Medicine, 3(3): 134–140.
Miranda, E. R., Roberts, S. and Stokes, M. (2004). On Generating EEG for Controlling Musical Systems. Biomedizinische Technik, 49(1): 75–76.
Potter, A. M., DeMarse, T. B., Bakkum, D. J., Booth, M. C., Brumfield, J. R., Chao, Z., Madhavan, R., Passaro, P. A., Rambani, K., Shkolnik, A. C., Towal, R. B. and Wagenaar, D. A. (2004). Hybrots: Hybrids of Living Neurons and Robots for Studying Neural Computation. Proceedings of Brain Inspired Cognitive Systems, Stirling, UK. Available online, accessed 6 October 2008, www.cs.stir.ac.uk/~lss/BICS2004/CD/CDBISprog.html.
Potter, S. M., Wagenaar, D. A. and DeMarse, T. B. (2006). Closing the Loop: Stimulation Feedback Systems for Embodied MEA Cultures. In M. Taketani and M. Baudry (eds.) Advances in Network Electrophysiology Using Multi-Electrode Arrays, pp. 215–242. New York: Springer.
Rosenboom, D. (2003). Propositional Music From Extended Musical Interface With the Human Nervous System. In G. Avanzini et al. (eds.) The Neurosciences and Music – Annals of the New York Academy of Sciences, vol. 999, pp. 263–271. New York: New York Academy of Sciences.
Russell, J. A. (1980). A Circumplex Model of Affect. Journal of Personality and Social Psychology, 39(6): 1161–1178.
Schmidt, L. A. and Trainor, L. J. (2001). Frontal Brain Electrical Activity (EEG) Distinguishes Valence and Intensity of Musical Emotions. Cognition and Emotion, 15(4): 487–500.
Uroukov, I., Ma, M., Bull, L. and Purcell, W. (2006). Electrophysiological Measurements in 3-Dimensional In Vivo-Mimetic Organotypic Cell Cultures: Preliminary Studies With Hen Embryo Brain Spheroids. Neuroscience Letters, 404: 33–38.
Vialatte, F., Dauwels, J., Musha, T. and Cichocki, A. (2012). Audio Representations of Multi-Channel EEG: A New Tool for Diagnosis of Brain Disorders. American Journal of Neurodegenerative Disease, 1(3): 292–304. Available online, accessed 7 July 2015, www.ncbi.nlm.nih.gov/pmc/articles/PMC3560465/.
PART III
Extending performance and interaction
11 WHERE ARE WE?
Extended music practice on the internet
Simon Emmerson and Kenneth Fields
Introduction

At least two decades have passed – taking stock

At the time of writing, at least 20 years of extended sound-based performance1 via the internet is now behind us.2 Already in 2003 Àlvaro Barbosa summarised the elements that had emerged in the first generation of networked performances. His ‘classification space for computer-supported collaborative music’ distinguishes modes of interaction (synchronous/asynchronous) as well as location (co-located/remote). He distinguishes four classes of music making: local inter-connected musical networks, music composition support systems, remote music performance systems and shared sonic environments (Barbosa 2003, Fig.4). It is interesting to see how the emphasis has been further refined or changed since.3 The first two categories seem to have been absorbed into a more general ‘sharing’ culture of the internet and are more taken for granted than before; and local area network (low latency) performance is available in some parts of the recording and film industry.4 The remaining two will be our focus here – shared environments5 and remote performance. These have characteristics that do considerably more than enhance previous options: they open up some entirely new possibilities. Network-specific practices, distinct from simply importing and adapting other practices, are emerging steadily. These must clearly address new features. Here we can think of the network latency (discussed further later) becoming a primary feature – not a limitation. And the same goes for presence – now replaced by the idea of re-presence. We take presence absolutely for granted in the local situation, but in distributed performance re-presencing becomes an outstanding feature. Of course we need to encourage the mix, even hybridisation, of old6 and new practices to see what works for what we need to do – an experimental and empirical approach.7 It is time to take stock of what has been further achieved and to look at current issues, questions and trajectories. While we are bound to link the developing technology to this history, we intend to keep the focus of this chapter more towards musicking (Small 1998). The music and its making will be firmly within the frame: what has been achieved in practice and how performance and composition have been influenced by these radical changes in the networked relationship of one musician to another. Some well-established questions of interactivity and liveness are once again
brought into the foreground. This is of course aimed at encouraging future work in making music that engages with the specific characteristics of the internet.
Space is mediated into time

The space-time relationship is not new

Practising musicians have always known that space and time cannot be separated. A large orchestra ‘feels’ the subtle timing differences from front to rear8 necessary to have a synchronised wave front at the centre of the concert hall – especially for a unison ‘attack’. The performance of sound in large reverberant spaces is now believed to have played an important role in prehistoric times and is potentially fundamental to our species’ development of music (Debertolis and Bisconti 2014). Nearer our time the relationship of the acoustic of music performance space to musical style is now well understood to be mutually dependent – along with a strong relationship to the social spaces of the community: the nature of the ritual, site, number of participants (and their relationships) and so on (Forsyth 1985). There are however limits (or at least transitions) in this acoustic world. Beyond a delay of about 50 ms we hear a delayed sound as a separate, discrete sound – an echo.9 Some of the largest human-constructed spaces for music have crossed this boundary. The Royal Albert Hall in London has had at least three generations of acoustic treatments to overcome problems caused by the long path lengths of its reflections. If acoustic musicians and audiences have just about tolerated such idiosyncrasies, network music moves through another transitional threshold: that of localised to distributed practices, from a feeling of an ‘immediate now’ surrounding us to perceptible delays above 50 to 100 ms in receiving what is present elsewhere. In the twentieth-century music world, composers and performers have used extended distance as a constructive component of the affect of the music – for example Charles Ives, Murray Schafer, Henry Brant, Iannis Xenakis (Harley 1994). One of the most ambitious works actually to have been performed was Arseny Avraamov’s Symphony of Sirens realised in its full form twice (Baku 1922 and Moscow 1923). This was a city-wide performance involving vast logistical forces (human and machine) coordinated through site and sound (Molina Alarcón 2008). And there are further examples from sound installation and soundscape art (de la Motte-Haber 1999). The transition to larger spaces of performance requiring amplification is documented and discussed elsewhere (Emmerson 2007; Mulder 2013). Even amplified music has to contend with the acoustic delay time across (for example) a festival arena; such installations use signal delay processes to compensate and to create coherent wavefronts at great distances.10
The new space/time – relativity

The new synch is no synch – multiple times

Rather than a single central clock in the world, the agreed international time-keeping standard is based on an average of more than 400 atomic clocks worldwide, using GPS transfer and agreed standard calculations. This is known as International Atomic Time. Its aim is to agree on an absolute time of day around the world and to allow the best possible degree of synchronisation of the world’s clocks. The problem then starts as this clock information is bussed around a network with a wide range of latency times. The Network Time Protocol has been agreed upon to try to keep track of time in such an environment, using the notion of ‘timestamping’ data to indicate what happened
Errors inevitably creep in, and time accuracy is compromised. This is in addition to the basic latency that is inevitably defined by (at best) speed-of-light transmission between two points in a network.11 The further apart these two points are, the longer the delay – almost 1.3 seconds for the radio signal from the moon's surface to earth – and if that were part of a network, its ping time would be 2.6 seconds! Such network time servers can supply an 'absolute clock', so different performing groups could coordinate to the same (central) clock. We would know the delay (latency) and could apply an exact correction such that all such points could agree on a notional simultaneity – but this defines synchronicity of your own activity with respect to this clock, not as perceived by other performers. Thus synchronisation of the separate elements of the performance itself would not be perceived by any of the contributing parties. We would need to construct a node that was the equivalent of a 'sweet spot' in a media concert, at which all the incoming information arrived at the correct (that is, intended) time and amplitude. From any other position in the world there would be a different set of corrections and a jumble of delay times from the same set of performance points. In other words, without serious computation, simultaneity/synchronicity can only practically be worked out for an agreed single point. We are now in a world of multiple and relative local times that inevitably 'slip' when perceived from different listening points. Such natural asymmetries can only be 'corrected' artificially and locally. So, in internet performance, for the first time we can have no simple control over the space/time delays of our distributed sources, no clear definition of a neutral point from which others are observed and heard. These are no longer parameters to be corrected – they may be compensated for, or accounted for, but remain a given (and possibly a variable) for each performance.
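Since the argument leans on the Network Time Protocol's 'timestamping' idea, a minimal illustration may help. The sketch below is our own (in Python; all timestamps are invented) and simply shows the standard four-timestamp calculation by which a client estimates its clock offset and the round-trip delay – not an implementation of NTP itself.

```python
# Minimal sketch of NTP-style clock offset and round-trip delay estimation.
# t0: client send time (client clock), t1: server receive time (server clock),
# t2: server send time (server clock), t3: client receive time (client clock).

def ntp_offset_and_delay(t0, t1, t2, t3):
    """Return (estimated clock offset, round-trip network delay) in seconds."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # how far the client clock lags the server clock
    delay = (t3 - t0) - (t2 - t1)            # time actually spent on the network (the ping)
    return offset, delay

# Invented example: ~90 ms each way, client clock running 20 ms slow.
offset, delay = ntp_offset_and_delay(t0=0.000, t1=0.110, t2=0.110, t3=0.180)
print(offset, delay)   # 0.020 s offset, 0.180 s round trip
```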
The new affordances

In this chapter we shall examine how these new relativistic parameters have been tackled by musicians. Importantly, we may try to disentangle what is appropriate (even essential) to the network. This is not a simple process and will include reflection on experience with a look at existing recordings of the music. We might expect new practices to emerge that articulate networked music making in a way no other approach has done before, generating kinds of experience not possible previously. These might be from many sources – ideas and materials might be available from a wide variety of other music making, which might find a new kind of use in this world. We shall return to this later. James Gibson has given us a basic definition of affordance:

The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill. [. . .] It implies the complementarity of the animal and the environment.
(Gibson 1979, chapter 8)

Luke Windsor has applied and developed this idea with respect to acousmatic (electroacoustic) music, arguing that for such music, far from being separated from the 'real' events of causation,

The listener inhabits an environment rich in stimulation, rich in structure, and will perceive affordances not only through the pick up of structured auditory information from the piece but from the environment as a whole, whether acoustic or not.
(Windsor 2000, 20)
Windsor's analysis never excludes the totality of our environment and all our senses – even if such an exclusion is the composer's wish and an apparent part of the aesthetics of the work.12 Our internet performances often demonstrate a kind of 'displaced' or disturbed causality. Even given stable audio and video streams, our ability to perceive the cause of sound becomes ambiguous at a greater distance in space and hence in time.

Where the immediate information from a particular source is insufficient the human being not only hunts for additional information from the "natural" environment but also from the social and cultural environment.
(Windsor 2000, 21)

We will discuss the internet as a site for making music (musicking) – whether previously described as composing or performing. Perhaps we should try to avoid negative definitions. For example, we might say that the internet performance space has lost possibilities of secure synchronisation, but we need also to consider the alternative potential gains – and previously impossible opportunities. Musicking on the network is as much about relationships as it ever was previously. We have often quoted Christopher Small (1998, 193), who has written: "it is the relationships that it brings into existence in which the meaning of a musical performance lies". His discussion includes participants (performers and listeners both), the site (physical setting) of the performance and the sounds being made. He took for granted traditional acoustic instruments, so I would want to add the relationship of those taking part to the sound production (means and methods). In addition, the relationships themselves are clearly influenced fundamentally by the mediation involved – we actively interpret, and may now need to develop strategies to reinterpret, the delays and inflections of responses from other participants in the performance. Relationship is not simply defined by a description of physical location and connection, even including latency, time and space considerations – Small demanded we include the broader social constructions of all of these.13 He asked pertinently "What's really going on here?" (1998, chapter 12). How our many and various performance nodes are constructed in every sense will contribute to the meaning of what they do. Why these groups of musicians? What for? Who with? What with? (and so on).
Site and location

Phase relationships of the world's cycles

What makes internet performance different is not only a matter of displaced location and consequent time delays. This could simply be defined in terms of a geometric construction within an absolute Newtonian space and time 'frame'. However, there is one inherited dilemma that internet music might articulate in new ways. In so much music making in the world, time and place play a vital role. Unique site- and time-specific elements need in the end to be accounted for to understand the music. In Western concert music, however, this specificity seems to have declined steadily, and the romantic view of 'the work' has prevailed – it somehow exists independently of its real performances. Of course many contemporary and experimental practices, sound art, installation and soundscape art have done much to push back this dominance – but this dichotomy is still very much with us. The transglobal location and interplay of cities involves the crossing of time zones, climate differences across seasons and their cultural articulations.14 These factors have influenced our listening
habits and probably contributed to the formation of our notions of making music together since prehistoric times.15 To network across latitude might connect a morning to an afternoon (even the middle of the night), across longitude, a spring to an autumn, a summer to a winter. Such rhythms of the rotating world and its relationship to our nearest star are not of course lost in this process but placed in stark juxta- and superposition – their phase relationships (temporally) collapsed – so what new relationships may form? Even this view needs considerable nuance – it tends to treat the world as a ‘flat’ (or at least flattened) projection dominated by simplified notions of polar to equatorial conditions. In some countries (in Central and South America, for example) altitude defines climate; in others, wet (monsoon) and dry seasons are the prime descriptors. The reduction of these differences to insignificance within a uniform controlled environment becomes a kind of ‘greenhouse’ – perhaps the ultimate ‘white cube’ for contemporary music making – a space where local character is minimised.16 I am here, of course, referring to usually enclosed spaces for (and with) human performers. Much of the currently archived material shows this spatio-temporal grey-out inherited from modernist art’s claims – the music somehow existing independently of its places and times of production.17 But in contrast to this approach we are now potentially connecting local characteristics together. We have the means to create the ‘multi-site-specific’ performance – a new site created from many sites displaced in space and time. The streaming of remote soundscapes has already begun such a process of site-stamped localisation.18 If such connectivity becomes more site and season sensitive, as in other wave theories might we have an equivalent of interference or phasing? Psychological or social interference patterns . . . imagine, given full access to the visual and sonic aspects of the location, connecting positions of longest and shortest days together in a new kind of ritual ‘celebration’. Only with such site sensitivity (in all aspects) can this truly global affordance be realised.
Space in a relationship – construction

We bring to the conception of this new space a vocabulary based on what we have already experienced. Acoustic music is a 'broadcast' – albeit slow and local; it is omnidirectional until constrained.19 The relationship of media to broadcast or narrowcast formats is more complex. Although in principle omnidirectional, radio broadcasts can be directed, or at least localised. In the case of computer networks, such omnidirectionality needs to be simulated – a mix of physical (fibre optic) connections and encrypted radio links is not usually apparent to the end user. The illusion of universal availability conceals that the receiver still needs to seek it out – the link needs to be made to suitable routers and servers. In specific network performances the signal is constrained into a narrow passageway to form the link, to send sound from one to another, thus behaving as a pipe. The nearest acoustic equivalent would be 'two cans and a taut string' as a primitive telephone link. This narrows communication down to a simple loop – to and fro – in, strictly speaking, one dimension. In classical acoustics we have 'ray tracing', which allows us to construct and follow the 'directionality' of sound, reflection paths and the like – a radical simplification of the actual wave in space and its relationships. But from this simplification we gain a comprehensible model of (for example) a reverberant environment, with separate early reflections defining the nearest boundaries and, later, a generalised reverberant field articulating some measure of the size of the enclosure and the absorption of the surrounding reflective surfaces. In terms of our perception of space ('where we are') we are finely tuned to interpret what this affords us. Performers have reported that they sense an 'internet space' with at least some parallel properties to an acoustic space, but the relationship is far from simple.
The equivalent of each wave path in the acoustic model has been specifically constructed in this new media space, although boundaries, reflection and absorption will have very different interpretations. Nonetheless, the idea is clearly attractive and recasts the unfamiliar in familiar terms (as in Netrooms – see Rebelo and King 2010). We are slowly developing a suitable vocabulary to describe the characteristics of even the simplest music internet space.20 Such a space emerges through the addition of nodes and inter-node relationships. The development of the listening skills of the performer is crucial: grasping an emergent totality from the complex experience may not be easy. We have a first stage of relatively simple listening that allows us to hear network exchange; a singular 1:1 relationship with a fellow performer may be relatively direct to grasp. Complexity increases exponentially as we extend the network to further performance nodes: the more inter-node relationships are added, the more we have to hold in mind of what is happening. Letting the internet space sound – or 'speak' – has strong parallels (as we remarked earlier) to how musicians respond to the acoustics of their performance space. Software visualisation aids may play an important role (Rebelo and King 2010) as we learn to feel at home within the emergent properties of the space. Such a space may have no single character – it may be best to think of it as a multiple space depending on the point of observation (which we will discuss further later), with differences in 'mix' and mode of listening. So the question 'where am I?' has two related senses – 'what kind of space do I inhabit?' and 'where am I (and where are others) within that space?'
Point of perspective, processes, proportions – the listener

But – to whom? We must be careful not to extend the Newtonian frame to assume a fixed and universal observer. Where we observe this relationship from will radically change the nature of what is observed. We might be alongside a performer in one location or another – or suspended somewhere else in between. No one location can be the prime location for listening. Of course each performance node will have its own local mix of the others. Many perspectives mean many time frames, and there might be one appointed node to collect all the feeds and create a single mix for broadcast more widely than simply back to the performers. How signals are routed to a third-party listener would also create a different view. If we simply assembled audio from all perspectives in real time during the performance, then this would be 'true' only for the one listening position. There can be no one true 'record' of the event after it has concluded. Post-performance remixing21 can never recreate an objective listening point – perhaps only an idealised generalisation. It will inevitably be a construction made from the multiple points of perspective – a multiplicity of relativistic listening points to what amounts to multiple performances. There are thus many options for such a 'mix'. A distributed model might suggest six different mixes for six nodes – leaving us the option to switch perspectives ourselves, whether listening during or after the event.
Time > space in performance and listening

Where are we? Feeling space through time

Spatialised textures and timbres may be a rich area to explore. I suspect that the recorded documentation of performances to date does not do justice to this domain. The photographic and video imagery available is strongly suggestive of this dimension, and we appear to glimpse it in the recordings, but usually with inadequate detail. In a physical acoustic space sound changes in transmission – so works by Charles Ives or Henry Brant, performed from a distance within a
landscape, will have the acoustic influence of that landscape indelibly imprinted upon them, due to reflection and absorption at surfaces, along with the complex frequency-dependent absorption in air over distance, all to a degree dependent on temperature, humidity and so on. In contrast, a relatively dry sound lab projected from Beijing to Calgary will have a different set of transfer functions superimposed by the mediating electronic technology. So how do we construct a sense of distance in this situation? Will we have a sense of prosthetic space 'out there beyond'? We not only use our physical senses of performance presence but include knowledge and information we learn before and during the performance itself. The technology infrastructure is part of the new environment, not simply a transparent medium of connection. We need to sense and conceive of this space and learn its effects on our relationships and their meanings. Generally speaking, we trust the information we have been given about the system – we believe that the images and sounds we hear are produced at some great distance from us. The sound and video system gains a character of its own, emerging from the complex of 'pipes' down which the connections are made. The consequences of the time and space displacement give it this characteristic – and ideally these are supplemented (and possibly explained) by our outside-performance-time knowledge of the system. The notion of the acousmatic – well established in many areas of discussion of recorded sound and music (Kane 2014) – becomes more problematic here. The degree of synchronisation between audio and video streams may not be good. When clues to the cause of a sound are mediatised and delayed there is increasing ambiguity – cause may be confirmed only after the effect (the sound). Such delays must usually be within the span of short-term memory, or vital links may be lost. But, as in our previous discussion of affordance, we would be wrong to focus on the sound of the music alone. Learning the 'feel' of the system may engage positively with what were originally thought of as negative features. The glitches, timing errors, inevitable latency delays and lack of audio-video synch become indicators signalling the distance apart of the performing groups. This may stimulate the affirmation of community, relationships and values that emerges through the ritual of making music together.
Time and performance

Pulse – from duo to trio (and beyond)

The inevitability of relative timing in internet music making – that is, the impossibility of establishing shared perceptions of synchronisation – does not necessarily exclude pulse-based music if we see the links between nodes as simple delay lines. If we conceive of a simple metrical pulsed beat, then we may be able to align more than one performing group to a shared pulse (even though not strictly synchronised to the same downbeat). Let us start with performers at two nodes (A, B) separated by some appreciable distance such that the transmission delay time between them (∆t) is perceivable. Our aim is the simple alignment of beats (beat synch).

A initiates: performs an event to give a (first) beat;
Then ∆t later, B plays a (first) beat in synch with receiving A's (first) beat;
Then ∆t later, A plays a (second) beat in synch with receiving B's (first) beat.

The overall delay time between beats for any individual performer is thus 2 x ∆t, the ping time, becoming a pulse of 60,000 (ms)/ping time (ms) = bpm. So if ∆t = 100 ms, ping time = 200 ms, hence 60,000/200 = 300 bpm (or 5 bps). Generated in this way there is an absolute 1-beat delay in performance between the two nodes; in practice, it is also possible (and sometimes easier) to gravitate towards a 2-beat delay between the performers.
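The pulse arithmetic above is easily automated. A minimal, purely illustrative sketch (our own, in Python) converts a measured one-way delay into the latency-derived tempo:

```python
# Latency-derived pulse for two nodes, as described above: each performer
# answers the other's beat on arrival, so the period heard locally is the
# full round trip (2 x the one-way delay).

def latency_bpm(one_way_delay_ms: float) -> float:
    ping_ms = 2 * one_way_delay_ms   # round-trip (ping) time
    return 60_000 / ping_ms          # beats per minute

print(latency_bpm(100))   # 300.0 bpm - the worked example in the text
print(latency_bpm(197))   # ~152.3 bpm - the Calgary-Beijing link discussed later
```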
While two performer locations can quite easily generate this latency-derived pulsed metric, when we add even a single further node the combinations become more complex and potentially irrational. So for A, B, C there are now three combinations overall: AB, AC, BC. At any given node there will now be two pulse metrics (bpm) generated – most likely a polyrhythm or polymetre (or polytempi) – from the two delays to the other two nodes. If all three nodes are sharing all sound feeds, then one performer will also hear (but not generate) the exchange between the other two. In the unlikely event that two of the latency times are the same (an isosceles triangle, so to speak, where AC = BC),22 A and B may appear in synch to C but not to each other. Overall it is therefore unlikely that, if pulse-based beat synch is the aim, there can meaningfully be more than three nodes, for this reason.23 Caceres and Renaud (2008) give an excellent view of how best to engage with the inevitable delays due to latency. They utilise feedback locking to generate the strict pulse-based patterns we discuss here; but in addition they use more complex patterns of network delays (again with controlled use of feedback and spatial location control) to create quasi-reverberant spaces, with interference effects such as phasing. Of course there are many other approaches to time management in a more general sense, where any kind of synchronisation is not the aim. Ideas of more flexible (relativistic) conductors/artificial assistants, based on multiple delay times, may be created. So, now that strictly synchronised polyphony is effectively impossible, what might 'slipped' relativistic time afford the performers? What other kinds of music and music making?
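Before moving on, the three-node arithmetic just described can be made concrete. The one-way delays in the sketch below are hypothetical illustrative values (including the 'isosceles' case where AC = BC), not measurements from any performance:

```python
from itertools import combinations

# Pairwise latency-derived tempi for three nodes; one-way delays in ms are invented.
delays_ms = {("A", "B"): 100, ("A", "C"): 140, ("B", "C"): 140}  # 'isosceles': AC = BC

for pair in combinations("ABC", 2):
    one_way = delays_ms[pair]
    bpm = 60_000 / (2 * one_way)
    print(pair, f"{bpm:.1f} bpm")
# A-B: 300.0 bpm, A-C: 214.3 bpm, B-C: 214.3 bpm
# Node A therefore hears two different pulses (towards B and towards C),
# while C may hear A and B aligned with each other but not with itself.
```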
Simplicity and complexity – some models of layering

There are many other useful models that we can find in the world's traditional musics as well as the more experimental Western music and sound art developed in the second half of the twentieth century. We will give some pointers here; some are already found in the internet music examples we shall discuss later, others not.
Heterophony

A commonly understood musical line is played in varied forms simultaneously, sometimes at different speeds, but usually with a broadly interpreted but fuzzy sense of 'keeping together' (it is not at all strictly synchronised). Heterophony has been central to many melodic musics from around the world. There are many examples in traditional musics from Arab cultures, Indonesian gamelan, Japanese gagaku, traditional jazz, Balkan traditions and Hebridean Gaelic psalm singing (see https://en.wikipedia.org/wiki/Heterophony). In some of these the heterophonic elaboration acts to separate the lines in the listener's perception, emphasising the contrasting contributions of the elements.
Micropolyphony

'Micropolyphony' was a term invented to describe a particular technique in works by György Ligeti of the 1960s.24 This was a kind of multilayered polyphony where similar melodic lines (one for each instrument or voice) ran at slightly different tempi through the use of 'tuplets' – subdivisions of (say) 2, 3, 4, 5 within the beat – which was thus effectively lost in the resulting wash of sound. The lines are not the same but similar in 'weave' – usually waves rising and falling by small chromatic steps. Such a specific example might be seen as a subset of heterophony, but as the lines of music proliferate – they are deliberately designed to fill out a changing
‘bandwidth’ (cluster) of chromatic pitch space – the perception of the result rapidly crosses over into texture (discussed further later).
Juxtaposition and superposition

An obvious technique to cite here is that of the deliberate juxtaposition or superposition of disparate elements, where the different events or layers have no special relationship in need of exact coordination. In the second of his Three Places in New England (Putnam's Camp) Charles Ives creates the effect of two marching bands crossing in a parade – they play music at different tempi, each sufficient to itself.25 There are many other works by Ives and others that use multiple tempi in like manner. Such ideas fed the experimental traditions emerging in the 1950s – the ideas of John Cage, for example, might transfer the notions of agency and expression to the network itself. Clearly ideas for performance that allow audience movement (or involvement) within a shifting indeterminate happening could be reapplied to network performance.26 The ideas of David Tudor might go further in distributed form, too, towards creating the cybernetic equivalent of emergent life forms from the interconnection of small modules (Rogalsky 2010). We may also have many possibilities for discrete and isolated sounds. While in many ways the Japanese concept of ma is almost impossible to translate, it is possible that focus on 'the gap between things' would provide useful musical insights here.
Exchange (alternation)

Some of the previous discussion tends to downplay the importance of an identifiable event – a gesture of potential drama and surprise. Two specific approaches could be of use in this non-synchronised world. One of the oldest basic phrase forms in music (from many genres) is call and response, from worksong to piano concerto, soul, jazz, monkey chant and Mozart. While it is true that all of these are based on a shared pulse, there may be good reason to re-engage with this structural resource. The caller and respondent have different functions (the latter often a chorus). It might be possible to develop meaningful call and response materials across networks. When these roles are more equal, we might have the possibility of hocket. In its strict form hocket involves the alternation of events between two performing groups. Usually the summation of the two builds the intended musical line.27 Such techniques are found in many cultures, from Andean pipes and Western European mediaeval music to contemporary music such as Louis Andriessen's large ensemble work Hoketus.
Emergent behaviours – from events to texture and timbre

The establishment of a sonic texture might be approached from two directions – which correspond broadly to time- and frequency-domain pathways. First (time domain) we have the accumulation of events becoming increasingly dense, moving from individuated (in perception) through grain to texture. One tool for this has been granular synthesis. As recently developed, this has usually handled grains on a micro-scale and their accumulation into sustained sounds of a very wide variety of sound types, from pitched to broadband noise (Roads 2004; Truax 1988). More recently, swarm algorithm (and related) techniques have redeveloped the middleground of grain and texture, within which individual events and sounds are distinguishable but not isolated or functional in any real sense. Then (frequency domain) we have the ever greater layering of pitched (or narrowly focused noise) sustained sound occupying greater bandwidths of perceived frequency space. In this
transition lies a range of possibilities of pitched drone, sustained harmony (both harmonic and inharmonic)28 through to steadily denser forms of noise (Smalley 1986 terms these stages ‘note, node, noise’). Within these sound materials the timing of individual components may not be crucial to the result. Thus emergent perceptions of change in general, within which any specific ordering of elements is not functioning in the musical language, are the salient feature. Used creatively, such spectral distribution and movement can also reinforce our senses of space (Smalley 2007). More recently developed studio techniques for the spatialisation of a ‘single’ sound may be extremely valuable for this kind of distributed texture performance. We might imagine an emergent texture whose components are generated quite remote from each other but which is assembled at each node location in a subtly different way. Well-handled spatialisation affords separation and nuanced judgement of texture and timbre at the local level. In encountering such a texture (and probably being immersed in it for some time), some ‘deep listening’ strategies may be brought into play and possibly extended in new ways (Oliveros 2005). Barry Truax has written of ‘listening inside a sound’ in situations of granular timestretch (Truax 1992), although such focused listening to internal change has a long history that might be related to mindfulness, meditation and even shamanic practices – with Buddhist chant, Mongolian khoomi and Stockhausen’s Stimmung as examples.29 Handled creatively there are opportunities to play on the very ambiguities seen previously as limitations. A holistic and emergent sense of unifying two separate textures may work in different ways depending on the point of perspective.
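The time-domain route from grain to texture described at the start of this section can be illustrated with a very small granular sketch. The code below is our own illustration in Python/NumPy (not a tool used in any of the performances discussed): short enveloped sine grains are scattered at random onsets, and raising the density carries the result from separable events into texture.

```python
import numpy as np

SR = 44100  # sample rate

def grain(freq_hz, dur_s):
    """A single sine grain with a Hann envelope."""
    t = np.linspace(0, dur_s, int(SR * dur_s), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t) * np.hanning(t.size)

def granular_texture(length_s=10.0, density_per_s=40, rng=np.random.default_rng(0)):
    """Scatter short grains at random onsets: low density reads as events, high density as texture."""
    out = np.zeros(int(SR * length_s))
    n_grains = int(length_s * density_per_s)
    for _ in range(n_grains):
        g = grain(freq_hz=rng.uniform(200, 2000), dur_s=rng.uniform(0.02, 0.08))
        onset = rng.integers(0, out.size - g.size)
        out[onset:onset + g.size] += g
    return out / np.max(np.abs(out))   # normalise to +/-1

texture = granular_texture(density_per_s=200)  # raise density to cross from grains into texture
```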
The evolution of the Syneme playlist

In this section Ken Fields discusses30 aspects of the evolution of internet performance practice through highlights of the Syneme group's contributions. Many were part of the annual MUSICACOUSTICA Festival (Beijing) based at the Central Conservatory of Music (CCOM). Syneme is the working name of Fields' research lab/program as Canada Research Chair in Telemedia Arts at the University of Calgary, 2008–2013, and of his current lab in Beijing. The term is intentionally enigmatic so as to serve as an unresolved conceptual 'attractor'. The term 'syneme' continually prompts us to consider the meaning of the conjunction of the key concepts of 'presence' and 'signal', this being the ultimate driver of Syneme's research agenda. It comes from a quote from Sergei Bulgakov, who draws a parallel between the name and the icon:

The phonème corresponds to the colours of the form of an icon, the morphème to the hieroglyphic character of the "original" that provides the design for the representation. The synème is the name itself, the energy of the representation.
(cited in Arjakovsky 2009, 33–4)

That formulation could stand, but we have moved from a linguistic/representational to a media/communicational paradigm and beyond, towards a new materialism. The essential/physical aspect of media is the signal itself, so the last sentence of the quote might better now be rendered as: "The syneme is the signal itself, the energy of the communication". That is the link to signaletics (Thomsen 2012), the science of signals (Canales 2011) and, importantly, the relativity of space/time as evidenced by the very signals generated in our work, travelling close to the speed of light over fibre optic cables, producing both qualitatively and quantitatively different events at different locations. In general, I have approached many of these performances as 'use case scenarios', wanting to demonstrate some prototypical uses of the network for making music. So I was not always as
concerned with adjustments at the low level of the note/event or effect as I should have been. But I did not really have a choice, as it was hard enough simply to put all the intricate pieces together and get performers and audiences set for the downbeat. The early performances were often first (and only) complete run-throughs. We always seemed to get settled down just 5 minutes before a performance, thus apportioning time 90% to tech and 10% to music. Of course, our practice is constantly evolving, and that ratio is evening out. It was not common to have partners who could meet regularly, experiment, get used to the connection process and hence make that knowledge tacit. Partners usually started weeks out from a performance by contacting their network departments and then spent only a couple of days setting up in a space they could reserve for a short period. The Canada Research Chair position allowed me the exclusive luxury of a full-time, dedicated studio space, fully wired and always on, with plenty of high-level network support (Canarie Net – pan-Canada). This allowed a (rare) commitment to this one focus. What follows is an accompanying commentary to a series of publicly available audio-visual recordings of distributed internet performances.31 Quality is very variable (in several senses!) and editing is often minimal – this does sometimes reveal the real difficulties of set-up and preparation for performance, but we hope that the atmosphere of experiment and technical problem solving – inevitable in coming to terms with the characteristics of a radically new environment – does not totally obscure the new kinds of music making that are glimpsed.
2010

Calgary NetTets Concert, January 2010

Christian Wolff: Stones (1969) (excerpt)
www.youtube.com/watch?v=vXYBv_ijd1s
Zhang Xiaofu: Nou Ri Lang (excerpt)
www.youtube.com/watch?v=Ix1sZxNWRKU

NetTets was an annual telemusic event staged in collaboration with the Happening Festival at the University of Calgary. NetTets 2010 was a collaboration between Syneme (Calgary), CEMC (China Electronic Music Center at CCOM, Beijing), Tavel Arts Technology Research Center (Indianapolis), the Yong Siew Toh Conservatory of Music with the Interactive and Digital Media Institute of the National University of Singapore, and the Sonic Arts Research Centre (SARC), Belfast (earlier in the day for a Netrooms performance). Two of the more musically successful pieces in the concert were Zhang Xiaofu's Nou Ri Lang for four percussionists (in Beijing, Singapore, Calgary and Indianapolis) and Stones by Christian Wolff – a particularly successful example of adapting experimental music scores to contemporary situations. The prototyping ideas we intended to investigate included sonification – mining network statistics unique to the performance to drive some of the musical parameters – and presence: striking and scraping closely miked stones together produced intimately close and strongly embodied sounds, which were projected across the planet. In addition, this was a test of remote processing: using a particular node for the live acoustic sound source and then remotely processing the sound at another node before sending it back, or forwarding it on if more nodes were involved. While only a short clip is available, Zhang's percussion piece does show the complexity of the set-up and the interaction of some of the resources – most clearly the relationship of attack to textural percussion, which was the result of a deliberate strategy by the composer to create
solo foreground to texture background alternations (Fields 2012) as a response to latency and coordination issues.
Summer workshop in Calgary, 2010: Calgary, Hong Kong and Skype performances

www.youtube.com/watch?v=b1hLN8y9MnQ

This is an early workshop performance with students in distributed teams, who had a whole semester and summer course in which to work daily towards one performance. The results were useful, though maybe not fully worked out. The students blogged, we discussed theory32 and network music performance (NMP) history, and we steadily developed one group piece. We were ambitious in trying to put all possibilities into networked play: dance, live video, audio and Open Sound Control, remote body motion tracking, as well as 3D environments as remotely staged and streamed set backgrounds. In this workshop we applied the 'salad methodology', a kind of networked multimedia performance that reflected all aspects of the group's interests to make one unified piece. These included live poetry reading over Skype; body motion capture data from a studio in Hong Kong converted to OSC and sent to Calgary to control sound and video; and projection – the video mapping of multiple sources (remote and local) in Isadora;33 in addition, Curtis McKinney in the UK created a controlled network feedback loop with Calgary using SuperCollider (RecursionNet). A 'strange' and remote atmosphere around the sources seems to be well captured here – vocal and instrumental (erhu, gong sound) interventions stand out from a multilayered and spatialised drone.
MUSICACOUSTICA, Beijing, October 2010

Grid 2010 (Beijing-Calgary) – Large Scale Computed Synth Cluster
Working Members: Ken Fields and Haku Wang
www.youtube.com/watch?v=dNt9odPTE00

The prototypical idea of Grid 2010 was to use multiple remote network nodes as a massively distributed supercomputing grid of sound wave generators. Of course, if we could get it working on one node, it could be extended to any number of nodes. I was usually thinking in these terms, such that even if only two nodes were involved (P2P), pieces were conceived for a variable-sized group that could be rolled out more widely. In this case, I used one 8-core server in Calgary as the 'computing grid', with 8 SuperCollider sound servers (one on each core). Both machines (Calgary and Beijing) were in an OSCGroup, so I could use an iPad in Beijing to start/control/stop each sound server separately in Calgary – or (as suggested) anywhere in the world. Calgary streamed 8 channels of audio to me in Beijing, while locally I played some live sampled sounds as foreground material. I used freesound.org in the role of a remote database store which SuperCollider could directly search and download into a running patch. The result is a potentially powerful approach to sound grid computing on the network, generating a slowly developing texture. The literal distribution of sound production with inherent delays results in interesting spatialised interference. Glitchy sounds add foreground space and enhance a sense of the layered spaces assembling from a distance (via imperfect transmission channels as a 'mark' of the system).
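The control pattern described here – one tablet starting and stopping sound servers scattered across the network – reduces to sending OSC messages to a list of remote addresses. The sketch below uses the python-osc library purely as an illustration; the host names, port and OSC address patterns are hypothetical, not those used in Grid 2010 (which ran SuperCollider servers within an OSCGroup).

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical remote sound-server nodes (host, port) - illustrative only.
NODES = [("calgary.example.net", 57120), ("beijing.example.net", 57120)]

clients = [SimpleUDPClient(host, port) for host, port in NODES]

def broadcast(address, *args):
    """Send one OSC message to every registered node."""
    for client in clients:
        client.send_message(address, list(args))

# e.g. start every remote synth, then stop synth 3 on the first node only
broadcast("/grid/start", 1)
clients[0].send_message("/grid/stop", [3])
```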
ResoNations 2010: An International Telematic Music Concert for Peace: United Nations NY, Calgary, Beijing, Seoul (December 2010)

UN WEB-TV VERSION/PERSPECTIVE
http://webtv.un.org/watch/resonations-2010-an-international-telematic-music-concertfor-peace/5240597314001

CALGARY PERSPECTIVE EDITED INTO THREE PARTS
1 www.youtube.com/watch?v=VMcVBhNUoLc
2 www.youtube.com/watch?v=BQopPNv2d_I
3 www.youtube.com/watch?v=EOkKvW10UX4

Composers from each location produced a work for collaborative presentation coordinated from the UN (NYC) by Sarah Weaver.34 Within this document of the total event35 may be found many of the musical materials, approaches and problems discussed earlier. In the first piece (Heo Yoon-Jeong's The Spirits of the Water) – centred on the group from Seoul but with contributions from the others – slow heterophonic materials are complemented by improvisatory imitative (and some textural electronic) sounds. There is also a strongly pulsed section that achieves beat synchronisation – this becomes looser and freer.36 In the second work (Min Xiao-Fen's Harmony – centred on Beijing but with her contribution from the UN), the network delay is used to enhance the quality of instrumental and vocal exchanges. The final piece (Sarah Weaver's Ascension) has a textural (and perpetual) drone at its basis, though with slow changes and evolution, acting as an anchor from which improvisatory elements, carefully controlled accelerandi and ritardandi, are drawn. Some complex sounds and atmospheres develop from the interaction of the different ensembles working together to form the drone continuity.37 The composer's central coordinating role over the remotely performing ensembles is clearly visible and audible. In this high-profile UN presentation involving four centres (Calgary, Beijing, Seoul and the UN in New York), my role was as facilitator (based in Calgary, which had both IPv4 and IPv6 standard connections), bridging Asia to North America. New York only had IPv4, so I forwarded Asia's Jacktrip audio received on IPv6 to New York University on IPv4, which NYU forwarded to the UN (which only had a commercial network). It was a complicated spaghetti of connections. The feed from Seoul was supposed to be coming from the university, but that didn't work out, so they performed from a theatre. Beijing performed from their recital hall at the conservatory of music (CCOM), with my students running the tech there. As a demonstration or use case scenario (from the Calgary perspective), this showed how a node could act as a non-performing participant – or as a technical participant only, in other words outsourcing some aspects of a project, such as mixing, recording, bridging and planning. We (Calgary) were situated in the middle, in a convenient timezone, so that we could rehearse tech separately with Asia or New York and then put the pieces together later (probably one day ahead of performance, as usual). From the UN's perspective, the use case was more straightforward in terms of connecting a multicultural/national 'concert for peace' in New York City – the implication being that if one could put three cities together, why not 100 for a major event (given the availability of bandwidth)? The musical possibilities given 100 different delay times suggest a highly textural approach or an algorithmic approach to beat syncing (as discussed earlier in this chapter).
2011

NetTets Concert, January 2011 (Montreal – Calgary – Edmonton)

CanDLE: Durées (first public performance). CLOrk (Concordia Laptop Orchestra, Montreal), NuMuLO (New Music Ensemble Laptop Orchestra, Calgary), Mark Hannesson and Scott Smallwood (Edmonton). Directed by Eldad Tsabary, organised by Syneme.
https://vimeo.com/19596478 and www.youtube.com/watch?v=5200JAKTJ9k

One of our most complete performances, based on three locations in Canada, this is an instance of excellent network conditions on the CANarie network. Three laptop orchestras jammed, producing clear, glitchless multichannel audio with, in addition, a live Jitter video performance from the Edmonton group – some of the highest quality sound and media production to date. Of course laptop ensembles are complex – and without much to go on in terms of cause/effect clues – although the relationship of the Concordia band to Tsabary's sign/gesture language is fascinating to follow.38 The resulting textures are very well paced in evolution and immersive spatiality. The world of the sound is eclectic, although with only limited recognisable sound sampling. We sense the space between the layers of sound as between the cities – circumstantial suggestion, possibly, but clearly a case of the musicians generating 'meaningful response' across the network.39
2012

MUSICACOUSTICA, Beijing, October 2012

The next two pieces resulted from a project designed to investigate networked and interactive score systems. The performances link Beijing, Indiana, Calgary and Waikato/Hamilton (NZ), demonstrating distributed OSC-controlled scores. First, Ian Whalley's GNMISS (Graphic Networked Music Interactive Scoring System) (Whalley 2014).

Ian Whalley: Sensai Na Chikai (shakuhachi and electronics)
www.youtube.com/watch?v=d6G07znKrSU

Sensai na Chikai (Fragile Vows) was written for Beijing MUSICACOUSTICA 2012. Performers in Hamilton, New Zealand, included Ian Whalley (wind controller, foot controller, synthesiser programming) and Hannah Gilmour (keyboard controller, pad controller) triggering synthesis parts and traditional Maori instrument samples. In Beijing, Bruce Gremo played shakuhachi and shakulute (a shakuhachi-flute hybrid) (Whalley 2014). The composer has created a circular score driven remotely by OSC and designed to be interpreted freely to account for distributed and local needs and resources. Guided interpretation is a better description than improvisation – Whalley has designed a symbolic system that engages with fundamental archetypes of human emotion and expression. The aim to engage a sensitivity in the performers is well illustrated in this performance, producing a varying soundworld in which the layers are broadly coordinated without the need for any sense of over-defined timing.

Ken Fields: Performing Research nr 1 2012 (shakuhachi controller, flute and electronics). Performers: Ken Fields, Bruce Gremo and Alex Chung.
www.youtube.com/watch?v=o3mXIHM7Jmg
This piece is performed from an OSC-driven graphic score using a remote instance of Iannix40 (3D), which sends pre-mapped OSC messages. Iannix has a web front end for public viewing while in run mode – this was displayed on a monitor in the hall. In addition, I used Kinect body-mapping OSC control from Calgary to control Ableton Live in Beijing. There is very clear contrast and trajectory of sound types and textures, using extended instrumental and processed sounds along with electronically generated materials. The gesture of the instruments clearly works to control the overall production of the sound in the remote locations.
2013

MUSICACOUSTICA Beijing, October 2013: Telematic Concert41

Beat Bits
Concordia Laptop Orchestra (Montreal) and McMaster Cybernetic Orchestra (Hamilton, Ontario) performed to an audience in Beijing's Central Conservatory of Music (organised by Ken Fields).
www.youtube.com/watch?v=drDujB__X1E and www.youtube.com/watch?v=WYjs_klw-I8

The orchestras performed a shared composition/improvisation in synchronized fashion with the use of a network metronome (espGrid, designed by David Ogborn). With the addition of smart delay compensation, the two orchestras were able to synchronize their clocks and coordinate a joint metronomic performance in real-time, presented to audience in Beijing.
(Programme note)

This is a good example of coproduction with a degree of synchronisation (or at least coordination) of materials – described as a 'telematic metronomic piece'. Some very interesting and immersive textures and atmospheres evolve and develop. This arose from what was essentially a laptop orchestra jam session (directed locally by Eldad Tsabary at Concordia and David Ogborn at McMaster, Hamilton (live coding), with additional live electronics from Beijing). Interestingly, there are two recordings of this performance from different locations, giving very different perspectives on the dynamic of the interaction (a different feel from both audio and video feeds).

Performing Research nr 2 – Oscilosound 2013
Ken Fields and Bruce Gremo (Cilia controller)
www.youtube.com/watch?v=vnLowdWGpGY

This piece uses network delay as its main constituent. A mono audio channel is generated from SuperCollider and visualised on oscilloscope channel 1 (x-axis). It is also routed to Sydney to be looped back to Beijing and input into oscilloscope channel 2 (y-axis), so the signal returns out of phase with the original by the round-trip (Beijing to Sydney and back) delay time. Interesting glitches were network transmission errors, which also showed in the oscilloscope display. This produces tight audio-visual sync and a highly spatialised sound from the (delay) interference of the signals. Bruce Gremo's Cilia controller interfaces to a Max patch for this duo.
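The X–Y figure at the heart of Oscilosound can be modelled offline by pairing a signal with a copy of itself delayed by the round trip. The sketch below is our own offline illustration (not the concert patch); the oscillator frequency and the delay value are invented.

```python
import numpy as np

SR = 48000
freq = 220.0               # test oscillator frequency (illustrative)
round_trip_s = 0.350       # notional Beijing-Sydney-Beijing delay (illustrative)
delay_samples = int(round_trip_s * SR)

t = np.arange(SR) / SR                     # one second of signal
signal = np.sin(2 * np.pi * freq * t)

x = signal[delay_samples:]                 # direct signal -> oscilloscope channel 1 (x-axis)
y = signal[:-delay_samples]                # loopback copy, late by the round trip -> channel 2 (y-axis)
# Plotting y against x traces an ellipse-like figure whose shape depends on the
# phase offset 2*pi*freq*round_trip_s (mod 2*pi); network jitter and transmission
# glitches perturb the figure, as seen in the recording.
```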
2014

MUSICACOUSTICA, Beijing, October 2014; Telemusic Performance: Calgary/Beijing

Performing Research 3
Performers: Ken Fields, Bruce Gremo (controller), Meng Qi
www.youtube.com/watch?v=uu5DGQXBF7Q

This performance demonstrated the potential to distribute sequencers in remote locations (based on the Nodal app from Monash University). So in Beijing I simply triggered one sequencer in Calgary, but with all in one OSCGroup the sequencers can be distributed widely for complex textures. In this case they were not streaming audio but just MIDI over OSC. The sound was then generated locally in Beijing. Musically this produced good texture-driven layering plus punctuating events (from the piano, for example), resulting in an attractive immersive atmosphere.42

David Eagle: Change Ringing: Etude 1
www.youtube.com/watch?v=DN-OZJVfyuU

This was a strategy piece and hard to perform. It demonstrates clearly the coordination issues between remote locations – in this case between Beijing and Calgary. It is basically a pattern process based on interactive change ringing (with more complex rhythmic development in the final section). The metric beat is synchronised with the network delay to allow accurate and coordinated performance of the changing patterns. It demonstrates well how to engage with (and make good use of) an unavoidable network characteristic (discussed earlier).
Graduate research student final presentations (University of Calgary), December 2014 – Beijing/Calgary

Nathan Bosse: Untitled
www.youtube.com/watch?v=fqx8PD7dT4U
Michael Young: Network Ostinato in F# Minor
www.youtube.com/watch?v=ekeV5vFkhfI

These were two graduate student works that focused on very specific network characteristics to develop their ideas. Nathan Bosse's piece controls Max/Jitter remotely, using OSC interactively and applying sound and video game ideas. Performers in Beijing and Calgary both use Kinect controllers to interact with objects projected on screens, manipulating and colliding them to produce sound events. The tight sound-image coordination develops over time to create a simple overall narrative. Michael Young's piece starts with a simple riff at 153 bpm, based on the network delay time between the two guitarists' locations, then shifts steadily towards a texture generated as a result of the intercontinental feedback – a steadily evolving drone.
Study performance, December 2014, Calgary/Beijing

Ethan Cayko: Percussion Studies
www.youtube.com/watch?v=XnCI5BPp21Y
Ethan Cayko (a master’s student) also created a set of tightly coupled Percussion Studies for Calgary/Beijing based on the 197 ms (152 bpm) network delay time – he has integrated subtle uses of exact beat synch and phase shifting across the beat.
2016

Network Music Concert: Indiana, Calgary, Beijing, April 2016

Ethan Cayko: untitled
www.youtube.com/watch?v=lAjkz4ruiOI

This was one of Ethan Cayko's master's degree (Calgary) pieces. We collaborated weekly over the year, carrying out rhythmic experiments in sync with the network delay. His work explored many rhythm synchronisation strategies and how to notate them – contrasting drone and rhythm elements. This piece used Max to add delay to a node where necessary so that we could synchronise to a common multiple delay time shared by the three nodes (one simple reading of this idea is sketched below). The video shows screenshots of the Artsmesh set-up and Max control modules (see the Artsmesh discussion following).
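A minimal sketch of that delay-padding idea (our own illustration; only the 197 ms Calgary–Beijing figure comes from the percussion studies above – the other link delays are invented): every link is padded up to the slowest one, and the shared pulse is then derived from that common value.

```python
# Pad unequal one-way link delays up to a common value so that all three nodes
# can share one latency-derived tempo. Only the Calgary-Beijing figure is from
# the text; the other two are hypothetical.
measured_ms = {"Indiana-Calgary": 45, "Calgary-Beijing": 197, "Indiana-Beijing": 160}

target = max(measured_ms.values())                       # slowest link sets the common delay
added = {link: target - d for link, d in measured_ms.items()}
shared_bpm = 60_000 / (2 * target)

print(added)        # artificial delay to insert on each link (ms)
print(shared_bpm)   # ~152.3 bpm shared pulse
```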
The evolution of Artsmesh (a native macOS application for network music performance)

The vision

Plans drawn up following Ken Fields' appointment to a Canada Research Chair in Telemedia Arts (CRC Calgary) in 2008 aimed toward a large-scale art/science collaborative research grid: clusters of computers/performance spaces connected by gigabit networks, scientists working with artists to realise an always-available, globally networked performance laboratory. Telemedia arts would prioritise real-time protocols (World Live Web) and would be modelled upon an ontology of presence. Ubiquity and an ecology of performance networks were to follow as bandwidth became economically accessible to smaller venues and home studios. Ten years later, this larger vision has yet to emerge, as we still await the alignment of bandwidth, business models, public attention and creative fascination with musical interaction involving the special characteristics of working between distant collaborators (as discussed previously). In this formative time, however, a professional interface (Artsmesh) has been honed for multimedia 'presence engineering', which we have generally termed a Digital Presence Workstation (DPW).
Development – first stages

The first version of Artsmesh rolled off the press about three years after the commencement of the Canada Research Chair position (2011). The aim was to move forward from traditional software modelled on the needs of teleconferencing. At the time, the Access Grid software was widely available for users on research networks (Internet 2) but lacked the 'social/semantic' dimension clearly developing in the contemporary social media of the time. Existing systems in general lacked the ability to support a 'dramaturgy of live performance' (Rebelo 2009; Schroeder 2009). There was the need to consolidate the specialised low-latency protocols and communication tools emerging in the networked stage-crafting trade (such as OSCgroups, syphon, jackaudio, jacktrip, gnusocial, ffmpeg).
Our goal was to optimise the platform for creative telepresence and artistic collaboration. For interactive artwork, and music especially, the quality and low latency of sound and video were the highest priority, mandating the implementation of the next-generation IPv6 protocol: a route to lower signal losses (packet loss), less need for time-critical media compression,43 less network jitter and greater overall reliability, avoiding IPv4's multi-layered hierarchy of internet gatekeepers and the recursive Network Address Translation (NATs within NATs) required along the way. Artsmesh 1.0 was simply a jack/jacktrip connection GUI with iChat (XMPP) support. It put users in groups and used the short username, rather than a long typed-in IP address, to connect jacktrip. Little of the original code and design remains in the currently more ambitious platform to support the myriad of processes that make a network music collaboration: from conception, through experimentation, rehearsal, performance and post-production, to archiving (2009). Research initiatives such as the Telemediations project, a joint conference initiated by Kjell Peterson at IT University Denmark with Guto Nobrego (UFRJ, Brazil) and Ken Fields (CCOM, China) in 2011, allowed for deeper exploration of telematic theories such as Thomas Pederson's 'Situative Space Model' for the analysis of telematic performance space (Pederson and Surie 2008). The premise of this approach was to move from a focus on human-computer interaction (HCI) to that of distributed human-network interaction (HNI), which must consider the constraints involving multiple points of presence and the overlapping of their non-aligned temporal frames; not the perspective of a shared virtual space (third space) but the explicit awareness of distributed nodes in the real world. It is difficult to convey the complexity involved in the final three-continent performance (Copenhagen, Rio, Beijing) that was conceived out of this year-long collaborative project. Audio (IPv4 and IPv6 streams coexisting) and control signals (OSC) had to be handled and routed individually between participants, some without telepresence hardware (see Figure 11.1). This was the performance that broke the camel's back and shouted the need to move away from the previous practice of choreographing innumerable applications, protocols and long textual terminal commands toward a natively coded GUI, consisting of a unified collection of connection/visualisation widgets. (Fields 2012)
Development – application

In reality, putting a network performance together is disorienting with even two nodes. With every additional node it becomes exponentially more difficult; thus a rigorous and focused networked stage-management expertise becomes very important. Artsmesh as a protocol interface facilitates the user's (or a team's) control precisely by requiring them to engage and think – it is deliberately not a 'one-button solution'. It promotes the need for 'presence engineering'. We have witnessed the establishment and evolution of the studio recording engineering industry – people with trained ears and the ability to situate the tech in the process of the art. There is now the recognition that a similar expert is needed in the domain of presence, such that the overwhelming disorientation of connecting multiple remote sessions and the multiple signal paths between them will not interrupt but will integrate with the artistic process (and be its own creative domain). A new industry is evolving. Network music performance has proven needs, which could be called its interaction or task space.
Figure 11.1 A typical connection scheme from an early concert
Of course much needs to be decided apart from the composition and performance aspects of the music itself: (1) the essential interface connection issues – IP needs, node identity, known ports, whether IPv4 or IPv6, protocols (udp/tcp/http/icmp/rtp/OSC), formats (mpeg/h264), the bandwidth and the agreed sample and frame sizes/rates for the group; (2) then a small step towards technical staging issues (though these remain infrastructure problems) – how many channels are needed, what specific channel routing, parallel audio/video communication channels, chat links, audio/video mixing and finally recording the performance. Only then can we come to grips with artistic performance issues. There may be further routing interface issues (foldback, for example), metronome/timer coordination demands and score issues – the long-distance communication of performance instructions; and the question of whether the network latency is part of the interpretation (for example where the delay time generates the metre in bpm – discussed earlier). Surrounding this there are larger consequences of location and timezone, both for the performers and for public dissemination – the how and where of broadcasting the performance. So the interface for Artsmesh evolved to constrain the high potential for disorientation. The decade to the time of writing has seen exciting developments. There was endless redesigning/sketching, brainstorming and U-turns.44 Early mockups, Max/MSP prototypes and performances were supplemented with hundreds of workshops, tutorials, community building, etc. The first
prototype version (Artsmesh 1.0) was developed through support from Canadian research funds (previously mentioned, 2008–2013). From 2013–2017, Detao Group funding (China) allowed development of Artsmesh 2.0, a currently complete and working Macintosh application. Artsmesh is now entering into its third generation: in 2017, the development of Artsmesh 3.0 moves to the USA (Santa Barbara, CA).
Artsmesh and HNI (the future)

Compared to the multi-decade agenda of human-computer interaction (HCI), human-network interaction (HNI) has not yet scaled out – or it has been sidetracked by ISPs, social media, network security, cloud services and data centres, and a longer list of IT middlemen and intermediary structures. The 'desktop' and the 'cloud' as metaphors have very different effects regarding facilitation or obfuscation of the user's knowledge and hence participation. We learned how to put a file in a folder, but how do decentralisation-oriented cybernauts navigate 'the cloud'? Instead, users will need a basic understanding of their networked environment – of routers and peering – just as they have adapted to highway systems, tolls, automobiles and airports to transport the physical body. In the future, an extended sense or consciousness of remote/complex presence will most assuredly be required beyond the current unidimensional verbs of 'follow' and 'like', which are all that contemporary asynchronous social networks allow. There are only a handful of full-time Artsmesh users in the world, and maybe a hundred people with any significant experience of P2P networked performance at all. All universities have such networks now, and there are currently a few cities with 1 gigabit fibre-optic network links to the home. Otherwise, commercial connections are still a little expensive, but not out of reach for budding professional music networkers. According to many experts, we are entering the age of the 'world live web', or next-generation radio/TV. We stand to correct what Bertolt Brecht described in his article from 1932, 'Radio as an Apparatus of Communication':

But quite apart from the dubiousness of its functions, radio is one-sided when it should be two. It is purely an apparatus for distribution, for mere sharing out. So here is a positive suggestion: change this apparatus over from distribution to communication. The radio would be the finest possible communication apparatus in public life, a vast network of pipes. That is to say, it would be if it knew how to receive as well as to transmit, how to let the listener speak as well as hear, how to bring him into a relationship instead of isolating him. On this principle the radio should step out of the supply business and organize its listeners as suppliers.
(Brecht 1964) ('Der Rundfunk als Kommunikationsapparat', in Blätter des Hessischen Landestheaters Darmstadt, No. 16, July 1932)

We hope to convince the Periscope/YouTube/Facebook Live crowd to begin collaborating across the net, and not to perpetuate the solo perform-to-the-camera studio broadcast paradigm.
Notes
1 Of course we go further back if we include the use of the internet for construction of performance, MIDI file exchange and the like.
2 This corresponds broadly to the launch and development of Internet2 by US research and education institutions from 1997 (https://en.wikipedia.org/wiki/Internet2).
3 Weinberg (2005), Föllmer (2005) and Carôt and Werner (2007) have developed these categorisations.
4 As will P2P network music very soon. And although not an interactive exchange in Barbosa's terms, the growth of concert and opera relays to local cinema audiences is another example.
5 We do not consider many aspects of shared environments here – that would include multi-user virtual reality, for example.
6 'Old' means here practices from pre-internet music making, which may be from truly ancient to relatively recent. 'New' may mean internet or at least computer dependent.
7 Maybe we should invent the internet simulator – place groups around (say) a campus connected by links that simulate the latency (and other technical realities) of internet connectivity, to allow testing and practice of this music making.
8 At 10 m from choir stall to first violins that would be about 30 ms delay.
9 Below 50 ms the original and delayed sound add together in ways that interfere or 'blur' the sound (depending on spectral detail).
10 Without such a delay the sound might arrive at the listener from the loudspeaker first, which means it sounds as if produced there and not on the stage.
11 The to and fro time of transmission (network packet echo) is measured by Ping, an agreed software utility – used also by Chris Chafe as the basis for a site-specific sound installation (2001–2005) with digital artist Greg Niemeyer (http://crossfade.walkerart.org/ping/). Currently and ideally, transmission over fibre optics might be around 70% the speed of light in a vacuum – but in the lab and in the future this will get faster.
12 'Hardline' (though by no means all) acousmatic music is based on écoute réduite (reduced listening) – the deliberate bracketing out of possible sources and causes in the real world – sound for sound's sake, perhaps.
13 He did not live into the age of media social networking but his ideas seem to predict its importance.
14 An excellent example is the Ethernet Orchestra (see http://ethernetorchestra.netpraxis.net/info/). Climate change (and eco-sonic possibilities) as a topic in its own right may be addressed within this view of the site (see Barclay (Chapter 8, this volume)).
15 Formal research is well on its way (Debertolis and Bisconti 2014), but the public remains fascinated by the reports that 'Stone Age Art Caves May Have Been Concert Halls' (http://news.nationalgeographic.com/news/2008/07/080702-cave-paintings.html).
16 While often chaotic, the spaces so far used for distributed telemusic performance inevitably focus on a studio 'scene' for technical reasons – the liberation of connectivity to a wider, more characterful set of locations has developed, of course, but is not yet much in evidence in the archived materials.
17 A wonderful exception is the use of Chinese ensembles in some examples of internet performance discussed later.
18 'Localisation' seems to be the wrong word if one includes streaming of sferics (and other such geophysical, sonified data, off-world phenomena etc.) from websites such as NASA (nasa.gov) – but I mean the 'pinning down of origin'.
19 Of course not equally in all directions – performance spaces usually reflect the signal towards a designated audience area; the notion of forward demonstrates the greater sensitivity of human perception (and power of acoustic production) to the straight ahead, ca. 120 degrees.
20 Those working in this area have taken words from other disciplines: nodes, edges, vertices, mesh, bridge, propagation, lateral, arboreal, reticular . . . etc.
21 Carried out on many of the YouTube (and other) clips available.
22 More unlikely still – but perfectly possible – would be an equilateral triangle where the delay times were all equal: AB = BC = CA.
23 Typically, Indianapolis may be 216 ms and Calgary 199 ms delay from Beijing; that is slightly less than 20 ms difference, and they are 35 ms delay from each other. Below the 50 ms threshold is a 'zone of tolerance' allowing this combo to follow a common pulse of ca. 150 bpm.
24 His Requiem, Atmosphères and Lux Aeterna most notably, made famous through their use in the soundtrack of Stanley Kubrick's 2001.
25 True, to make this work in performance it is an accurately notated relationship, but one can surmise other perspectives on the scene than the one Ives chose.
26 Musicircus and Roaratorio come immediately to mind as good cases for distributed performance. While not teleporting audience from one node to another, we could send sensor information as a valid substitution.
27 We shall cite David Eagle's Change Ringing: Etude 1 between Calgary and Beijing later (www.youtube.com/watch?v=DN-OZJVfyuU).
28 LaMonte Young's drone music is multi-faceted, from the open form fluxus pieces ('hold for a very long time') to complex frequency ratio constructions based on different tunings and temperaments.
29 And more speculatively the reinforcement of spectral components of sounds produced in cave and constructed prehistoric environments.
30 To make clear that this is Ken's direct experience, he speaks in the first person describing the origins of the projects and specific technical and musical developments. Some of the comments on the music as heard are from the second author, in addition.
31 The titles given are the (often awkward, sometimes shortened) names used in the YouTube (or Vimeo etc.) listing.
32 For example, Latour's Actor Network Theory (ANT), to try to understand the large distributed network projects we were involved with.
33 Because of a difficult timezone issue, we had to record a morning live performance to use in a later evening performance.
34 Remarks from listening were made before obtaining Sarah Weaver's MA dissertation discussion (Weaver 2011), which examines in detail the technical-musical issues for these three works, suitable materials and notations, as well as ascertaining from the audience their perception of the latency-related issues.
35 (Strangely repeated twice.) Heo Yoon-Jeong's The Spirits of the Water (starts at 12.30); Min Xiao-Fen's Harmony (at 37.20); Sarah Weaver's Ascension (at 50.00).
36 My ear suggests that there are sections where the network delay is 'pulling' the beat into line rather than the beat being consciously defined through deliberate 'lining up'.
37 The composer refers to the influence of tai chi in this process. She invites all the (desynchronised) audiences to hum the drone with the performance.
38 A bias of the documentation – as there is no equivalent visual feed from Calgary.
39 I have suggested (Emmerson 2013) that this is a more helpful characteristic than 'liveness' in thinking about network interaction.
40 Described as 'a graphical open source sequencer, based on Iannis Xenakis works, for digital art. Iannix syncs via Open Sound Control (OSC) events and curves to your real-time environment' (iannix.org).
41 A good example of timezone differences for this performance: 10 am Beijing, 10 pm Montreal, 9 pm Hamilton, 8 pm Calgary, 1 pm Sydney.
42 Unfortunately, the video mix was poor, as I missed fully capturing the sound recording of the instrumentalists.
43 Jack/Jacktrip use uncompressed, multichannel audio.
44 To follow a chronicle of this development see the embedded gnuSocial microblog stream (http://artsmesh.io).
References
Arjakovsky, A. 2009. 'Glorification of the Name and Grammar of Wisdom (Sergii Bulgakov and Jean-Marc Ferry)'. In Pabst, A. and Schneider, C. (eds.), Encounter Between Eastern Orthodoxy and Radical Orthodoxy – Transfiguring the World through the Word. Farnham: Ashgate. (Original conference presentation 2005).
Barbosa, Á. 2003. 'Displaced Soundscapes: A Survey of Network Systems for Music and Sonic Art Creation'. Leonardo Music Journal 13, pp. 53–59.
Brecht, B. 1964. Brecht on Theatre. Translated and edited by John Willett. New York: Hill and Wang.
Cáceres, J. P. and Renaud, A. 2008. 'Playing the Network: The Use of Time Delays as Musical Devices'. In Proceedings of the International Computer Music Conference, Belfast 2008. ICMA.
Canales, J. 2011. 'A Science of Signals: Einstein, Inertia, and the Postal System'. Inertia. Special Issue. Thresholds Journal 39, pp. 12–25.
Carôt, A. and Werner, C. 2007. 'Network Music Performance – Problems, Approaches and Perspectives'. In Proceedings of the Music in the Global Village Conference, September 2007, Budapest.
De la Motte-Haber, H. 1999. Klangkunst – Tönende Objekte und klingende Räume. Germany: Laaber.
Debertolis, P. and Bisconti, N. 2014. 'Archaeoacoustic Analysis of an Ancient Hypogeum in Italy'. In Proceedings of Conference 'Archaeoacoustics. The Archaeology of Sound', Malta, pp. 131–139.
Emmerson, S. 2007. Living Electronic Music. London: Ashgate.
Emmerson, S. 2013. 'Rebalancing the Discussion on Interactivity'. Proceedings of the Electroacoustic Music Studies Network Conference, Lisbon, June 2013. www.ems-network.org
Fields, K. 2012. 'Syneme: Live'. Organised Sound 17(1), pp. 86–95.
Föllmer, G. 2005. 'Electronic, Aesthetic and Social Factors in Net Music'. Organised Sound 10(3), pp. 185–192.
Forsyth, M. 1985. Buildings for Music: The Architect, the Musician, the Listener From the Seventeenth Century to the Present Day. Cambridge: Cambridge University Press.
Gibson, J. J. 1979. The Ecological Approach to Visual Perception. New York: Lawrence Erlbaum.
Harley, M. A. 1994. Space and Spatialization in Contemporary Music. Unpublished PhD thesis, McGill University.
Kane, B. 2014. Sound Unseen: Acousmatic Sound in Theory and Practice. Oxford: Oxford University Press.
Molina Alarcón, M. 2008. Baku: Symphony of Sirens – Sound Experiments in the Russian Avant Garde. London: ReR Megacorp.
Mulder, J. 2013. Making Things Louder: Amplified Music and Multimodality. Unpublished PhD thesis, University of Technology Sydney.
Oliveros, P. 2005. Deep Listening: A Composer's Sound Practice. New York: iUniverse.
Pederson, T. and Surie, D. 2008. 'A Situative Space Model for Distributed Multimodal Interaction'. Future Mobile Experiences, Workshop at NordiCHI, October 2008, Lund.
Rebelo, P. 2009. 'Dramaturgy in the Network'. Contemporary Music Review 28(4–5), pp. 387–393.
Rebelo, P. and King, R. 2010. 'Anticipation in Networked Musical Performance'. In Proceedings of the EVA Conference, London 2010.
Roads, C. 2004. Microsound. Cambridge: MIT Press.
Rogalsky, M. 2010. '"Nature" as an Organising Principle: Approaches to Chance and the Natural in the Work of John Cage, David Tudor and Alvin Lucier'. Organised Sound 15(2), pp. 133–136.
Schroeder, F. 2009. 'Dramaturgy as a Model for Geographically Displaced Collaborations: Views From Within and Views From Without'. Contemporary Music Review 28(4–5), pp. 377–385.
Small, C. 1998. Musicking – the Meanings of Performing and Listening. Middletown: Wesleyan University Press.
Smalley, D. 1986. 'Spectro-Morphology and Structuring Processes'. In S. Emmerson (ed.), The Language of Electroacoustic Music. London: Macmillan, pp. 61–93.
Smalley, D. 2007. 'Space-Form and the Acousmatic Image'. Organised Sound 12(1), pp. 35–58.
Thomsen, B. M. S. 2012. 'Signaletic, Haptic and Real-Time Material'. Journal of Aesthetics & Culture 4.
Truax, B. 1988. 'Real-Time Granular Synthesis With a Digital Signal Processor'. Computer Music Journal 12(2), pp. 14–26.
Truax, B. 1992. 'Electroacoustic Music and the Sound-Scape: The Inner and Outer World'. In J. Paynter, T. Howell, R. Orton and P. Seymour (eds.), Companion to Contemporary Musical Thought. London: Routledge.
Weaver, S. 2011. Latency: Music Composition and Technology Solutions for Perception of Synchrony in 'ResoNations 2010: An International Telematic Music Concert for Peace'. Unpublished Masters dissertation, New York University.
Weinberg, G. 2005. 'Interconnected Musical Networks: Toward a Theoretical Framework'. Computer Music Journal 29(2), pp. 23–39.
Whalley, I. 2014. 'GNMISS: A Scoring System for Internet2 Electroacoustic Music'. Organised Sound 19(3), pp. 244–259.
Windsor, L. 2000. 'Through and Around the Acousmatic: The Interpretation of Electroacoustic Sounds'. In S. Emmerson (ed.), Music, Electronic Media and Culture. Aldershot: Ashgate.
12
RENDERING THE INVISIBLE
BEAST and the performance practice of acousmatic music
Jonty Harrison
Introduction In 1980 I was appointed to a lectureship in the music department of the University of Birmingham, UK. The department had an Electroacoustic Music Studio (the main attraction for me) which had been founded two years earlier by my predecessor, John Casken. After four years of freelancing in London, it took me a while to adjust to this strange new world of academia, but by the autumn of 1982, I felt the time had come to present both established international repertoire and new work from the Birmingham Studio in a public forum. Using the mixer, four channels of amplification and four loudspeakers from the studios, plus my own amplifiers and speakers (four more channels), plus two channels of piezoelectric tweeters, BEAST (Birmingham ElectroAcoustic Sound Theatre) presented its first concert in the Elgar Concert Room on the university campus in December 1982. This modest event was the start of a 32-year experiment in which my thinking about both acousmatic performance practice and composition developed alongside and because of BEAST’s gradual transformation into a 96-channel system. In this chapter I hope to outline the history and development of BEAST but also to use this as a convenient construct through which to explore more generally the fuzzy boundary between composition and performance that is such a feature of acousmatic music. The progressive expansion of the BEAST system provides a timeline that enables me to offer a rationale for the practice of sound diffusion and its complex relationship with composition itself – predominantly in stereo but increasingly in multichannel formats – as well as to discuss some of the more practical aspects of performing acousmatic music.1 At its simplest, sound diffusion is the performance practice of distributing – dynamically and in real time – the source signal(s) of a work of acousmatic music over multiple loudspeakers located in different parts of the performance space. The control of this distribution is normally in the hands of a single performer located somewhere near the centre of the auditorium, and the device most commonly used to enact this control is a bank of linear faders. The caveats in the preceding paragraph are there mainly because there are no universally accepted standard means by which any of this is achieved. Indeed, there is not even agreement between composers working in the field about whether such a practice is necessary, desirable or even valid in the first place. Nevertheless . . . 272
First steps The pace of technological development in electroacoustic music, between Pierre Schaeffer’s first experiments in musique concrète in 19482 and the present day, has been astonishing – I now have access to a studio many times more powerful than the one I first used as a postgraduate at the University of York in the early 1970s – and I can carry it around with me in the form of a laptop computer. From the early 1950s to somewhere around the late 1980s or early 1990s, when a gradual move to digital formats occurred,3 the standard medium for sound storage was magnetic tape. It is no surprise, therefore, that the works presented in BEAST’s first concert in 1982 were all stereo tape pieces. We should perhaps take a moment to discuss these two aspects: magnetic tape and stereophony. Magnetic tape, though capable of higher quality sound than the discs and turntables with which Schaeffer made his first experiments and much more practical as a medium in and with which to work, is far from perfect. The frequency range of tape tends to ‘roll off ’ at both the high and low ends of the theoretical frequency range of human hearing (roughly 20–20,000 Hz); the resulting sense that the sound ‘lacks presence’ is problematic. Tape is also restricted in dynamic range, between distortion if signal levels are too high and hiss that is clearly audible through low-level signals.The usable dynamic range of a professional tape recorder (i.e. between these extremes of distortion and hiss) is around 63 dB; enhanced by noise reduction systems like Dolby A,4 this could be improved to around 72 dB. Even then, this is nowhere near the range of the human ear, which is around 120 dB. So, although composers could still offer a clear indication of the relative amplitudes of different sound materials, they were obliged to constrain the dynamic range of what they committed to tape. The result is that the quality and physicality of recorded sound, when compared to the ‘real-world’ original, was rather underwhelming, with everything, especially the dynamic, seeming to end up ‘in the middle’ – a kind of spectral and dynamic ‘averaging out’ of certain features. In performance, therefore, it is reasonable that one might, at the very least, expand the playback levels to restore the dynamic range by rendering the loud sounds louder and the quiet sounds quieter (and in the case of the latter operation also reduce even further the risk of tape hiss becoming audible in the quieter passages). So here we have arrived at the first key reason for intervening in the playback of a work composed on a fixed medium: we need to restore the implied dynamic range to something more appropriate for a public listening environment. And as the shaping of sounds through fader movements on a mixer, together with starting and stopping tape recorders (which we might consider ‘performance gestures’) was an intrinsic aspect of composing in a tape-based studio, this enlargement of the dynamic range in performance can thus be considered a kind of ‘reenactment’ of the compositional fader gestures on top of the stored sound. If multiple speakers are involved, then clearly such fader manipulation also introduces a spatial dimension – though, in my view, the manipulation of space, so commonly thought of as the defining aspect of sound diffusion, is only part of the bigger story. So let us now address the issue of stereophony. Stereo has been the ‘default format’ for works throughout most of the history of acousmatic music, and remains so even today. 
It is vastly superior to mono (the format with which Schaeffer began his research) for several reasons: it offers an enhanced dynamic range (not all the energy is contained in a single channel); it enables any intrinsic spatial characteristics of a source sound to be captured for compositional use; it sounds subjectively more 'open' and natural; it permits both width and depth to be conveyed, thereby enhancing perception not just of amplitude/volume, but also of the 'physical size' of an event – a sound can seem 'bigger' within a stereo field than in mono. Furthermore, it is relatively simple to understand and to set up, which makes it extremely
portable (that is to say, stereo works are more likely to be performed in concerts than pieces in more ‘exotic’ formats – an important consideration for composers!). Of course, I am not claiming that stereo is a perfect format – far from it; it actually has many problems (or, at least, limitations) in respect of human perception. Its most obvious shortcoming is that sound in the real world does not necessarily originate only in a frontally oriented 60-degree arc in the horizontal plane. Our acceptance of the stereo format is, I presume, primarily because our ears simply work better in the area broadly definable as ‘in front of us’.5 Connected with this frontal bias of our hearing is our familiarity with the convention of the stage or the concert platform – we assume that is where the ‘performance’ originates; reflected sounds from other directions in the space captured in a recording are generally minimal (or minimised) and, in practice, are substituted by those of the room in which we are listening, even if these are unlikely to match the acoustic of the original recording location; and any other extraneous sounds are not considered part of the performance, so are not included in recorded music. So stereo is an artificial, yet accepted and acceptable convention – a compromise, requiring just two loudspeakers (so low cost, where ‘surround sound’ would require more loudspeakers and amplifiers), but still a huge advance on mono. Despite its apparent shortcomings, what is particularly useful about stereophonic sound from a composer’s point of view is that it is able to present a coherent sonic image, which exists between and behind the loudspeakers. This image is not necessarily fixed or static – it can fluctuate in size (both width and depth)6 to a greater degree than the more quantitative (serially informed), architectonic approach of elektronische Musik,7 in which the distances between the points of origin of sounds (‘spatial intervals’, analogous to serially organised intervals of pitch and dynamic) might be considered more important. Manipulation of phase can even suggest point sources and movement beyond the limitations of the stereo field defined by the loudspeaker positions.
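As a minimal illustration of how inter-channel amplitude relationships alone can position a phantom image between two loudspeakers, the sketch below implements the standard constant-power (sine/cosine) pan law in Python. It is a generic formulation for the sake of the argument, not a description of any system or work discussed in this chapter.

```python
import math

def constant_power_pan(position: float) -> tuple[float, float]:
    """Return (left_gain, right_gain) for a position from -1.0 (hard left)
    to +1.0 (hard right). Equal gains at 0.0 produce the central phantom image;
    the squares of the gains always sum to 1, keeping perceived power constant."""
    theta = (position + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(pos)
    print(f"pos {pos:+.1f}  L {left:.3f}  R {right:.3f}  power {left*left + right*right:.3f}")
```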
Sonic images

It is impossible to discuss diffusion without referring to the kinds of images that stereo is able to deliver. If we avoid the common assumption that diffusion is concerned only with quantifiable space (location, provenance, trajectory, etc.) and think instead of spatiality's qualitative features (speed and character of motion, energy profiles, the 'size' of the sound, etc.), we might find it easier to deal with what it is we are really trying to do in diffusion, which is to translate the composer's intentions from the controlled acoustic environment of the studio (as encoded in the Urtext – the 'tape', disc or sound file produced by the composer) to the significantly less controlled environment of a large public space, containing many people, not all able to sit in the so-called 'sweet spot'.8 My listening experience suggests that the kinds of images that occur most commonly in stereo works are (or certainly include):
1 close and intimate, with a central focus – 'solo', if you like;
2 bold and dramatic, featuring clear lateral movement or placement;
3 distant, implying also a depth of field, and including approaching and receding;
4 enveloping, implying that the sound is filling the room, not just the frontal 'stage'.
It is relatively easy to hear these images in the studio (where the acoustic environment is controlled and neutral and the audience usually comprises just one person) or even a domestic living room (which is relatively small and probably not very reverberant because of soft furnishings); in both cases, the listener will probably be in or close to the ideal relationship with the
loudspeakers [Figure 12.1].There is no central speaker, but an in-phase signal of equal amplitude on both speakers will appear to be centrally located; left-right movement or placement will be clear, and a sense of depth or distance behind the speakers can be perceived, especially when aided by studio trickery such as added reverberation, reduction of both level and high frequency content, and narrowing the stereo image to an aural equivalent of the vanishing point. And here we arrive at the second rationale for sound diffusion. If 9 we assume, as I do, that there is intrinsic merit in coming together, precisely and primarily to listen to music in an active, focused and attentive way, over higher quality equipment than most of us can afford at home (for the sake of argument, let us call it ‘a concert’),10 then this social gathering will require the performance to take place in a larger space. And larger spaces mean, quite simply, that much of the spectral, temporal, dynamic and spatial detail so carefully placed in the Urtext by the composer could simply disappear because of the dimensions and quirks of the larger acoustic (longer reverberation times in particular), the size of the audience and, finally, the related ‘double-whammy’ of the unpredictable distances between various loudspeakers and different listeners and the impossibility of placing everyone in the sweet spot. This problem is illustrated in Figure 12.2, which now represents a public performance space. Even though still in the
Figure 12.1 The basic configuration of stereo: two loudspeakers (squares) and a listener (A) in an equilateral triangle
Figure 12.2 The problems with basic stereo in a larger concert space
theoretical sweet spot, in a larger space, listener A will receive a different and significantly less precise image in comparison with the equivalent position in the studio shown in Figure 12.1 – and this is in theory the best seat in the house; listener B (way off to one side) loses the virtual centre of the stereo image and most of the accuracy of the lateral presentation; listener C has a ‘hole’ in the middle of the image from being too close to the plane of the speakers; and listener D hears everything as ‘distant’ and mostly mono.
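One way to make listener B's problem concrete is to compare the path lengths from the two loudspeakers to an off-centre seat: the nearer speaker's signal arrives both earlier and louder, and once the arrival-time difference exceeds roughly a millisecond the phantom centre collapses towards that speaker. The geometry below is entirely hypothetical (it is not taken from Figure 12.2), and 343 m/s is assumed for the speed of sound.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, a room-temperature assumption

def arrival(listener, speaker):
    """Distance (m), delay (ms) and relative level (dB re 1 m) from one speaker."""
    d = math.dist(listener, speaker)
    return d, 1000.0 * d / SPEED_OF_SOUND, -20.0 * math.log10(d)

# Hypothetical hall: a stereo pair 10 m apart along the x-axis at the stage edge.
left_spk, right_spk = (0.0, 0.0), (10.0, 0.0)

for label, seat in {"A (central)": (5.0, 8.0), "B (off to one side)": (1.0, 8.0)}.items():
    (dl, tl, gl), (dr, tr, gr) = arrival(seat, left_spk), arrival(seat, right_spk)
    print(f"Listener {label}: time difference {abs(tl - tr):5.2f} ms, "
          f"level difference {abs(gl - gr):4.1f} dB")
```

For the invented seat positions above, the central listener receives the two signals simultaneously, while the off-centre listener hears the nearer speaker more than ten milliseconds earlier and several decibels louder, which is why the virtual centre of the image is lost.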
The BEAST Main 8

How, then, can we deliver the implied qualitative images encoded on the storage medium to as many people in the audience as possible (whilst still accepting that there will, of course, be some local variations for each individual listener)? The answer is that we need to 'help' the sound – re-scaling it to fit the size of the auditorium and the size of the audience – so that those qualitative images I listed stand more chance of being perceived by everyone. This is where it is necessary to expand not only the dynamic range of the Urtext, as previously discussed, but also to increase the number of loudspeakers used for playback to enlarge its spatial aspects.
I would usually try to ensure that I have at least eight full-range loudspeakers at my disposal (happily, the number available for that first BEAST concert in 1982), which I consider the minimum for stereo diffusion. I would deploy them in something I call ‘the BEAST Main 8’: Distant, Main, Wide and Rear pairs (Figure 12.3). The exact positions would depend on the layout of the venue and seating; this is a generalised version of the plan.11 But the fairly even spacing of the Mains and Wides (narrower and wider stereo arrays) in a frontal arc helps our perception of the most detailed spatial information – as I mentioned, this is the region in which our hearing is most spatially (and spectrally) acute. Note that I would normally turn the Distants in towards each other (as indicated by the small arrow emerging from the boxes in Figure 12.3), both to try to retain the sense of the distant image being behind the Mains12 and also because the off-axis positioning of the loudspeaker drivers reduces our perception of those speakers’ high frequency output – and a reduction in high frequency content is a ‘distance’ cue for the brain. By emphasising (but not necessarily soloing) respectively the Main, Wide, Distant and Rear speakers, the BEAST Main 8 can start to deliver precisely those four common images I listed above – close and intimate; bold and dramatic; distant; enveloping – and provide variations in the perceived ‘size’ of the sounds. And subtle combining and rebalancing of these speaker pairs create variations or even different images altogether. Though possible, I tend not to use much
Figure 12.3 The BEAST Main 8
in the way of ‘diagonal’ combinations of faders/speakers (for example, Distant Left and Wide Right), as this starts to distort the stereo image (which is only truly coherent when it is squarely in front of us – a nominally ‘stereo’ signal played over two loudspeakers to our right just sounds ‘on the right’; a perception of true stereophony is only restored by turning our heads to the right so that the signal is now ‘in front of us’), although an image that might be called ‘erratic and flighty’, which results from the rapid movement of all the Main 8 faders (including the inevitable random emergence of diagonals), can be achieved with startling spatial results.13 A deliberate feature of the BEAST Main 8 is that it is not a regular circular array but is frontweighted.This reflects both the convention of stereo as a kind of aural proscenium arch framing a stage or concert platform and, once again, the fact that our ears privilege frontal sound, particularly in relation to spatial precision – if you have a sound that tracks across the stereo stage in finely nuanced small gradations, do not put it on the rear speakers, because no-one will notice the detail; it will be vaguely left and vaguely right, but will disappear in an ‘acoustic shadow’ around the centre and mostly will simply be ‘behind’ you. And, of course, as a species, if we hear a sound behind us, our most likely reaction is to turn to face that direction . . . at which point, not only our visual faculties come into play, but we also hear more accurately once again – see also the previous discussion about ‘stereo’ sound in speakers on only one side. Of course, this front weighting can be problematic, especially in halls that are longer than they are wide. In this case something is needed that can link the frontal array to the Rears. Adding in some Side speakers is the obvious solution. But what I call Side Direct speakers are extremely dangerous – they are like giant headphones and can easily dominate the rest of the array. A simple way of overcoming this tendency is to use Side Fill speakers [Figure 12.4], turned away from the audience and possibly pointing slightly up and reflecting off the walls to reduce the degree of sound localisation. These could be in fairly frequent use, but only at a low level; they are there merely to provide ‘aural glue’ between front and rear arrays, without drawing too much attention to themselves.
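The idea of 'emphasising but not necessarily soloing' particular pairs of the Main 8 to obtain the four common image types can be summarised as a small set of gain presets. The sketch below is purely illustrative: the numbers are placeholders, not measured BEAST settings, and nothing here implies that BEAST stores or recalls presets in this form.

```python
# Per-pair emphasis (linear gain 0.0-1.0) over the four stereo pairs of the
# BEAST Main 8, one preset per qualitative image described in the text.
# All values are invented for illustration.
IMAGE_PRESETS = {
    "close and intimate": {"Main": 1.0, "Wide": 0.2, "Distant": 0.0, "Rear": 0.0},
    "bold and dramatic":  {"Main": 0.9, "Wide": 1.0, "Distant": 0.1, "Rear": 0.2},
    "distant":            {"Main": 0.2, "Wide": 0.0, "Distant": 1.0, "Rear": 0.1},
    "enveloping":         {"Main": 0.7, "Wide": 0.8, "Distant": 0.5, "Rear": 1.0},
}

def pair_gains(image: str) -> dict:
    """Look up the per-pair emphasis for one of the four qualitative images."""
    return IMAGE_PRESETS[image]

print(pair_gains("distant"))
```

In practice, of course, such settings would be continuously reshaped by hand in performance; the point of the sketch is only that a qualitative image corresponds to a balance across pairs rather than to a single routing decision.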
Going up?

The next issue to consider is height. This is an interesting topic as, strictly speaking, stereo cannot actually encode this. Nevertheless we seem to 'hear' it in the stereo format, because the brain tends to attribute higher frequencies to higher physical realms and lower frequencies to lower ones. This is, presumably, based on observations from the real world, where slow, lumbering animals that usually emit lower frequency calls tend to be earth-bound, whilst higher, lighter creatures that emit higher frequencies often have the ability to fly (there are, as always, exceptions), and is echoed in our use of spatial metaphor when talking about 'high' and 'low' frequencies and pitches in music. Denis Smalley14 refers to this as 'spectral space' and, in a diffusion system, we can enhance and 'lift' the stereo image by positioning Tweeters (usually piezoelectric, which weigh very little and have the added bonus of being very low cost) above the heads of the listeners and by adding Subs (sub-woofers) to 'ground' the low frequencies on the floor [Figure 12.5]. We can, of course, go a step further and hang full-range speakers from lighting bars or place them on any balconies or galleries that may exist in the space [Figure 12.6]. A word of warning, though – as it is the time and amplitude differences between the signal from the same source arriving at our two ears that enable us to locate the sound's direction and point of origin, and our ears are, of course, most commonly in the horizontal plane, we do not resolve detailed height information as well as we do lateral positioning; the difference between individuals in this respect is quite marked. Experiments with BEAST have demonstrated that two levels of higher speakers on balconies or galleries can easily be one too many, especially if you also have closer
Figure 12.4 The BEAST Main 8 plus Side Fill speakers
overhead speakers as well (in which case their proximity seems more important than their angle of elevation). Similarly, it is interesting that one of the few spaces I know to have the option of placing speakers underneath the audience, the Sonic Arts Research Centre (SARC) in Belfast, originally featured two levels of subterranean speakers; this was later reduced to one, because of the difficulty of actually hearing the distinction between 'below' and 'even further below'. Rather like rear sound, it is usually enough to know that sound is 'below you' (qualitative) rather than being able to measure exactly how far below (quantitative) it might be.
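One simple way of realising the 'spectral space' idea of grounding lows in sub-woofers and lifting highs towards overhead tweeters is to band-split the programme signal. The crossover frequencies below are arbitrary illustrations, and the code makes no claim about how BEAST actually feeds its tweeters or subs; it is a sketch of the general principle only.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def spectral_space_feeds(signal: np.ndarray, fs: int,
                         sub_cutoff: float = 80.0,
                         tweeter_cutoff: float = 5000.0):
    """Derive sub-woofer and overhead-tweeter feeds from a full-range signal.
    Cutoff frequencies are illustrative placeholders, not BEAST practice."""
    sub_sos = butter(4, sub_cutoff, btype="lowpass", fs=fs, output="sos")
    tweeter_sos = butter(4, tweeter_cutoff, btype="highpass", fs=fs, output="sos")
    return sosfilt(sub_sos, signal), sosfilt(tweeter_sos, signal)

fs = 48000
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 8000 * t)
sub_feed, tweeter_feed = spectral_space_feeds(test, fs)
```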
Expansion: BEAST in the '90s

After covering the basics with something like the Main 8, system design depends very much on the idiosyncrasies of the performance space. During the 1990s, BEAST gradually added further pairs of loudspeakers (and these were from a range of manufacturers, including ATC, KEF, Tannoy and Urei), which were incorporated into our set-ups. Figure 12.7 shows some typical positions and functions: Flood, on the stage floor, pointing up and out and filling the space without too much precise localisation; Punch, close to the audience, for accentuation; Desk, at the diffusion desk, pointing up, down or out, to pull the image in from the periphery
Figure 12.5 As in Figure 12.4, with the addition of sub-woofers and tweeters
of the space; Floor speakers, aimed down at the floor, to suggest sound below us; and a central Front/Back pair, useful for filling in the ‘hole in the middle’ if the hall is wider than normal (interestingly, sending just one of the stereo channels to the Front and the other to the Back normally achieves this, even though it theoretically skews the stereo image – another instance of pragmatic decision making). Beyond this I would also consider using (i.e. exploiting) any particular (or peculiar) architectural features of the space by placing speakers in unusual positions. In BEAST’s recently completed new home, the Elgar Concert Hall in the Bramall Music Building in Birmingham, for instance, we put ‘special effects’ speakers in the hall’s adjustable reverberation chambers, on the mesh grid high up in the roof and in several other unusual locations specific to that space. The mention of additional speakers from a range of manufacturers raises an interesting point about choosing loudspeakers for diffusion purposes. Like many people, I feel that it is important to have different sizes and brands of speakers in a system to add ‘colour’ and to enhance the character of the images that the speaker placement is aiming to achieve – I would always use the best speakers for the Mains and Wides, as these cover the frontal arc of greatest aural sensitivity. As we have already discussed, the Distants are deliberately angled to reduce their perceived high frequency output, so speakers with less intrinsic treble energy could be deployed here; similarly, 280
Figure 12.6 Adding height: two speakers on the Proscenium Lighting Bar, plus two pairs in the Gallery
the Rears need not be particularly accurate in what loudspeaker manufacturers and journalists call ‘imaging’ (rendering minute spatial gradations across the stereo field) because, as discussed earlier, our ears are not especially good at detecting fine spatial gradations in stereophonic sound originating behind us. So, with careful selection and matching of speaker to function within the system, even relatively poor quality speakers can make a valid contribution to sound diffusion; my motto from around this period was that, for diffusion, there is no such thing as an unusable loudspeaker.
How do you 'drive' a larger system?

I mentioned early on that the normal control interface for diffusing is a set of faders. Before we move beyond stereo diffusion, let us consider issues concerning the desk – a term I prefer to 'mixer', as what we are doing here is not mixing to stereo, but splitting from stereo. Nevertheless, many standard mixers can split signals in this way through the use of group outputs and post-fade auxiliary busses (though you cannot always get a good fader layout using groups); alternatively, if your mixer has a post-fade direct output on each input channel, you can use some kind of signal splitter to create multiples of your stereo source to send to the inputs. BEAST started
Figure 12.7 Additional loudspeaker pairs: Front/Back (for wide halls), Flood, Punch, Floor and Desk
with a fairly standard mixer with direct outputs on each channel and then moved to a modified Soundcraft 200B mixer. We had a custom diffusion desk, the DACS 3D [Figure 12.8], built in the early 1990s, with push-button routing of stereo signals to multiple output pairs. Please note that this was an analogue desk in which each fader controlled the individual left or right signal sent to a specific amplifier channel driving a particular speaker. The point about fader layout is not a trivial one and, for me, it is a vital part of system design that the desk is configured in a way that makes it intuitive to drive – in diffusion you need very fast decision making and action to find the faders that control the particular speakers you are hoping to use. I do not favour one of the most common approaches to fader layout, in which the left-most pair of faders controls the speakers furthest away in front of you. The next two faders control the next closest pair . . . etc – ending up with the right-most pair of faders controlling the speakers furthest to the rear. The problem here is that the speakers that can make the most significant difference within a large system (such as the BEAST Main 8) end up scattered all over the desk, and you cannot adjust them all at the same time (i.e. within one gesture) because your hands are not big enough!15 I group my faders by function, and always in pairs, starting with the BEAST Main 8 at the dead centre of the run of faders, and placing other groupings (high,
Figure 12.8 The DACS 3D – a custom-built diffusion desk used by BEAST during the 1990s and early 2000s
stage, etc.) progressively outwards from this centre: the further faders are from the centre, the less likely will be their heavy use in concert (Figure 12.9). Later, especially with multichannel pieces becoming more common and as the BEAST system grew, we had to move from analogue desks to control surfaces that merely send data to the VCAs in the audio interfaces – we tried an off-the-shelf solution in the form of the Mackie Control (Figure 12.10) but encountered problems with calibration and with the slow speed of MIDI. In 2008 we asked Sukandar Kartadinata of Glui in Berlin (www.glui.de) to design and build compact (motorised) fader controllers running over OSC. These have the advantage of connecting to the system over ethernet, so two of our three ‘MotorBEASTs’ (Figure 12.11), a laptop (effectively running as a terminal for the main machine) and an ethernet hub are all that is needed at the diffusion position; the main computer, the audio interfaces and all the signal cabling can be moved off to the side or hidden altogether. This means a very small footprint at the diffusion position, which I believe should be somewhere near the centre of the audience (not further forward, where the antics of the performer become a visual spectacle which distracts from the listening experience; I also prefer to sit to diffuse, as this also makes me less conspicuous). It also frees up some of the best seats, creating a larger audience somewhere near the sweet spot. Positioning the diffusion desk within the audience is essential, as it gives the performer the best chance of hearing what the public is hearing. This is, of course, why I favour listeners all facing in the same direction – we are all hearing more or less the same thing, so my use of frontal and rear arrays makes sense.
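The underlying principle – that diffusion is 'splitting from stereo', with each fader carrying one channel of the source to one specific amplifier and speaker – can be sketched as a simple gain matrix. The code below is a generic illustration in Python/NumPy; the speaker names and gain values are invented, and it is not a description of the DACS 3D, the MotorBEASTs or BEASTmulch.

```python
import numpy as np

def split_stereo(stereo_block: np.ndarray, fader_gains: dict) -> np.ndarray:
    """stereo_block: (n_samples, 2) array of the source.
    fader_gains: {"Main L": g, "Main R": g, ...}, one linear gain per speaker feed.
    Returns an (n_samples, n_speakers) block: splitting, not mixing."""
    names = list(fader_gains)
    out = np.zeros((stereo_block.shape[0], len(names)))
    for i, name in enumerate(names):
        source_channel = 0 if name.endswith("L") else 1   # left-hand faders carry channel 0
        out[:, i] = stereo_block[:, source_channel] * fader_gains[name]
    return out

faders = {"Main L": 0.8, "Main R": 0.8, "Wide L": 0.5, "Wide R": 0.5,
          "Distant L": 0.0, "Distant R": 0.0, "Rear L": 0.2, "Rear R": 0.2}
block = np.random.default_rng(0).standard_normal((256, 2))
speaker_feeds = split_stereo(block, faders)   # shape (256, 8)
```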
Before moving on . . .

Before we leave stereo we must address aspects of performance itself. How do we approach it? What preparations should we make for presenting works in a public concert? What might we
Figure 12.9 A typical BEAST fader layout for the DACS 3D, based on the system design of Figure 12.7 (though without the Front/Back pair)
Figure 12.10 Mackie control surfaces used by BEAST in the mid-2000s
Figure 12.11 The three MotorBEASTs, custom built by Sukandar Kartadinata of Glui, Berlin, in 2008. The full BEAST system is normally controlled by two of these
use by way of scores or other aides-mémoire? And, finally (it cannot be avoided), what makes a diffusion good or bad? Firstly, there is no substitute for knowing the work. My preparation normally consists of multiple listenings in the studio, trying to ‘get inside’ the piece, but if I am diffusing works by other people (and even for new works of my own) I will probably also make a kind of graphic diffusion score (Figure 12.12).16 This is not a ‘score’ in the traditional sense – it is not a detailed blueprint of what to do and, even less, a true representation of what the music sounds like. My diffusion score is hand-drawn; its primary function is to indicate the precise timings of the events I should already know by ear, along with a few graphic squiggles to remind me what kind of motion might be in the sound – again, a qualitative assessment. (These days, and for very specific reasons associated with the large number of tracks in my recent pieces, I often use the waveform display on my computer screen – though this is not all that easy to read either, in terms of differentiation between sound types.) In my opinion, scores should not specify too much about fader movements, except in very general terms, like ‘+ Wides’, ‘fade Roof ’, etc., and should certainly not indicate specific levels for faders, as any levels set while rehearsing in an empty hall will probably not apply when the acoustic space is altered through the presence of 150 people at the concert! So, by all means have a starting set-up (with automated desks, these can be stored and recalled), but then go with the flow of the music and be prepared to adjust for the prevailing conditions. If you have too detailed and prescriptive a score or ‘plan’, it can all go haywire the moment you miss just one fader change (as you are performing and will probably be nervous, this is very likely to happen). Remember that what you are trying to do is deliver the images that you know are in the work – it is more important that your diffusion delivers the kinds of motion or energy profiles possessed by the materials (slow and stately; fleeting and erratic . . .) than that a specific sound event appears exactly here or there, or at an exact level. Another important aspect of diffusion is that it can help articulate a work’s structure. It is worth bearing in mind that good diffusion can improve a mediocre piece and make a good 285
Figure 12.12 Diffusion score of Unsound Objects (1995)
work sound exceptional. But it is also true that bad diffusion – by which I mean ignoring the cues and clues about movement and dynamics that the composer put in the Urtext, or using inappropriate strategies for deploying the materials on the system – can ruin even the best piece.
Figure 12.12 (Continued)
Some examples (from Unsound Objects)

Let me use the opening sections of my work Unsound Objects (1995) (Harrison 1995) to illustrate my point about structure (the diffusion score appears in Figure 12.12). The work starts with a short statement, followed by an elaboration/development of related material, cascading down to the sound of a wave breaking on the shore (underlining the preceding 'water' references) and, finally, 'water droplets' dying away into an implied distance. In diffusion, I tend to present the initial statement on the Mains, thereafter following the energy curve of the elaboration across other speakers in the system (certainly the BEAST Main 8, but possibly more), but ensuring that the breaking wave is presented as a strong frontal image. The implied movement of the sound away into the distance is underlined by then moving the sound away to the Distant loudspeakers. The next musical paragraph (0'50–1'59) follows a similar unfolding, though with different (or, at least, significantly developed) material, and ends with a similar implied dying away of water droplets. The diffusion strategy I use for this is essentially the same as for the opening paragraph – so a repeating pattern of musical shaping is reinforced by an analogous pattern of spatial shaping in diffusion. It is important to emphasise that this diffusion strategy is not arbitrary – it is strongly implied by the material itself. The third, somewhat shorter paragraph (1'59–2'39) is rather different, picking up from the distant water drops and opening out to the sound of a stream, which then 'dissolves' or 'evaporates' into some rather cloudy, gritty, imprecise sounds. In diffusion, I underline that this is a contrasting section by starting from the Distants (where the previous paragraph ended), then following the opening out of the stream by introducing the Wides, before taking the image (via the Side Fills) completely off to the Rear speakers at the end (an unusual strategy for me, as I tend not to like 'rear only' sound). The use of the Mains to start sections and the recurring dissolve into the Distants are things that I do quite a bit in diffusing this piece17 because this motion and these images echo those on
the tape. Another instance is the receding section of ‘bouncing’ sounds, whose movement away from the listener is also implied dynamically. The move to the Distants here creates a spatial link (around 9’30) to the following section, in which the sound of a distant door being unlocked subsequently ‘opens’ into a reprise of rain and thunder material heard earlier in the work. This reprise is underlined by a recapitulation of the spatial/diffusion strategies used for the earlier appearance of these natural elements – namely the use of almost the full system, especially any located above the audience’s heads, for the thunderclaps. In the ‘fire’ section (ca. 7’28–ca. 8’30, but hinted at from around 6’55), I use a slightly different strategy: small shifts (mostly between Mains and Wides) to articulate and underline the small rhythmic dislocations that occur, and there are several moments where the energy profile of the fire sound (especially 7’30–7’48) suggests that it could engulf the listener – so I use rapid fader movements (‘wiggly-wiggly’) to place the spits and cracks of the fire randomly in speakers all over the auditorium to create the illusion that listeners are inside the fire. Finally, throughout Unsound Objects there is a lurking underlay of human presence and human agency (footsteps on various surfaces, unlocking doors, children on the beach, etc.). This can raise issues about the appropriateness of certain diffusion strategies. One scene (3’03–ca. 4’20) features a more or less unprocessed recording of someone chopping wood, interspersed by a succession of wilder developments, in which the sounds are much less naturalistic and seem to ‘splinter off ’ or ‘fly out’ from the chopping sounds. I expand these transformed versions in diffusion by moving them around the system rapidly, but always return to the Mains for the ‘real’ chopping sounds – there is something rather unnerving about sounds that imply low-flying human beings!
And then came 8-channel

With the emergence of affordable digital technology came a wave of devices that offered eight tracks of high-quality recording and playback: the ADAT and Tascam DA-88 digital tape machines in the early 1990s, for example, and a number of computer sound cards. Composers were eager to use these devices, and '8-channel' soon established itself as something of a new standard for acousmatic composition. Unfortunately, 'standard' was hardly the word, because there was no agreement on what an 8-channel loudspeaker layout should look like. The two set-ups most commonly favoured (Figure 12.13) are mutually exclusive if one assumes, as I do, a forward-facing audience; any images composed on the dark array are horribly skewed if simply placed on the light array, and vice versa.18 When I composed my first 8-channel pieces in the late 1990s, I rejected both these circular formats – in which all the loudspeakers are equidistant from a supposed sweet spot at the centre, the only differentiation being the angle of orientation – and composed instead for the eight channels of the BEAST Main 8. The reason for this was to enable me to use the different angles and distances of the speakers of the Main 8 for different images within the 8-channel space (I did not think of it this way at the time, but I have since realised that I was effectively composing in spatial stems).19 In Streams (1999) I used the frontal arc of Mains and Wides to deliver an extremely detailed frontal image, using the Distants and Rears more for envelopment and 'special effects' – there are significant stretches of time in this work where pairs of tracks are completely silent. In Rock'n'Roll (2004), I again used the BEAST Main 8, but this time the Distants, Wides and Rears (which could be interpreted as a kind of roughly regular hexagonal array) created an ambient image (implying open spaces) within which the Mains were used to deliver a focused, intimate image (implying closeness). In both works there are moments when these apparently fixed images fall away and all eight speakers are used to deliver other images and, of
Figure 12.13 Two common but incompatible standards for 8-channel works: what in BEAST are called the ‘French Ring’ (dark) and the ‘Double Diamond’ (light)
course, if played on a system that offers more than just eight speakers, these different ‘stems’ can be differentially diffused. There is, though, a distinct drawback to using the BEAST Main 8 for 8-channel works, to which I have already alluded – and that is that it is more difficult to get them performed. I found I could not ignore this issue of portability, so all my subsequent 8-channel works are in either the dark configuration from Figure 12.13 – what in BEAST we called, because this was the format that seemed to be favoured by the GRM, the ‘French Ring’, or Figure 12.13’s light array, the so-called ‘Double Diamond’. In fact, the four works that comprise ReCycle (Rock’n’Roll, Internal Combustion, Free Fall and Streams) (Harrison 1999–2006) use three different physical speaker configurations (BEAST Main 8, French Ring, Double Diamond and BEAST Main 8 respectively) with even the ‘same’ configuration being handled, as we have seen, completely differently in Rock’n’Roll and Streams.
Middle age spread

My decision to start working with regular 8-channel arrays, however, presented me with the same limitations (but in eight channels) that stereo has if only two speakers are available: how
could I deliver, over just eight speakers, all the same distance from the central sweet spot, different qualitative images (close, intimate, dramatic, distant, diffuse, high, etc.)? As we have seen, such qualitative distinctions could be achieved by composing 8-channel works for the BEAST Main 8, with its four stereo sub-sets or spatial stems, whereas eight ‘equal’ source channels in a regular circular array can only imply such images rather than making them explicit. One solution, of course, is to diffuse the source(s) over multiple 8-channel arrays, each of which is designed to deliver one of those qualitative images. I made some initial progress in this direction for a research demonstration at BEAST’s twentieth anniversary event in 2002, borrowing additional speakers from other institutions to create the largest system we had ever installed up to that point in the CBSO Centre in Birmingham. I soon realised that driving such a system was impossible from an analogue desk like the 3D: with each fader carrying the signal for just one channel, even a simple cross-fade between two 8-channel arrays required 16 fingers on the faders. The temporary solution for the demonstration was to use a Peavey MIDI fader box, with each fader controlling an entire 8-channel array via a simple Max patch; this led us to the Mackie control surfaces and, later, the MotorBEASTs. Luckily, a significant grant in 2005 under the SRIF-2 scheme to BEAST and the Studios at Birmingham enabled us to buy audio interfaces, controllers, amplifiers and a substantial number of loudspeakers (not just in pairs, but in eights!), and the system grew to have the 96-channel capability I mentioned at the start of this chapter.
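The stop-gap just described – one MIDI fader scaling a whole 8-channel array – amounts to applying a single group gain to eight speaker feeds at once. The sketch below shows only that arithmetic; it is not a reconstruction of the Max patch used at the demonstration, and the 0–127 MIDI-to-linear-gain mapping is a common convention rather than a documented detail of that event.

```python
import numpy as np

def group_fader(block: np.ndarray, midi_value: int) -> np.ndarray:
    """Scale an 8-channel block by one fader. midi_value: 0-127 from the controller;
    the simple linear mapping here is a common convention, not a documented detail."""
    return block * (midi_value / 127.0)

def to_two_arrays(source_block: np.ndarray, midi_a: int, midi_b: int) -> np.ndarray:
    """Send the same 8-channel source to two physical arrays, one fader per array:
    a cross-fade then needs two fingers rather than the sixteen mentioned in the text.
    Returns (n_samples, 16): columns 0-7 feed array A, columns 8-15 feed array B."""
    return np.hstack([group_fader(source_block, midi_a),
                      group_fader(source_block, midi_b)])

source = np.random.default_rng(2).standard_normal((512, 8))
feeds = to_two_arrays(source, midi_a=110, midi_b=20)
```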
8-channel diffusion

The notion of diffusing 8-channel works over multiple 8-channel arrays has a significant practical impact on system design and installation. Much more pre-planning is required; it is no longer feasible just to turn up at a venue and then decide how the system should be installed, as one is able to do to a large degree with stereo material. Where possible, therefore, I tried to visit the performance space ahead of time – at significant additional cost if that happened to be abroad. Other issues arose, too: whereas previously the system used mostly passive speakers, requiring only a speaker cable to be run to each cabinet, BEAST's expansion resulted in a mixture of active and passive loudspeakers, with mains power and signal cable required to reach the active speakers; also, the weight of the system in transit20 now meant that we had to use a trucking company to move it around, thus incurring more cost (though it has to be said that this was less stressful than driving a 7.5 tonne truck across Europe ourselves, in addition to installing and removing the system). Practical issues aside, the enhancements to the system began to open many new possibilities for performance, and these inevitably fed back into my own compositional thinking and that of colleagues and students. These included the significant number of speakers available for the kind of 'differentiated' diffusion I mentioned earlier – for example, composing in eight channels, of which two would be diffused and six fixed, and so on. This kind of thinking was made easier to implement by the move to the control surfaces I described earlier and by the software developed to control the system, BEASTmulch (Figure 12.14). BEASTmulch was the result of an AHRC-funded Research Project, Development of an intelligent software controlled system for the diffusion of electroacoustic music on large arrays of mixed loudspeakers, led by my colleague Dr Scott Wilson. A PhD student, Sergio Luque, co-authored some of the SuperCollider code with Scott, and I was Co-investigator. The project funding enabled a number of installations of the system to be made outside of normal concert-giving times, purely for the purpose of trying things out and evaluating them, rather than trying to squeeze
Figure 12.14 A screenshot showing some of the features of the BEASTmulch software, used to deal with signal routing and fader control of the BEAST system in concerts
experimentation and testing into the mad rush that normally characterises concert get-ins and rehearsals. Many things were tested, including the use of vector base amplitude panning (VBAP) over dome-like arrays of matching speakers (which became a regular component of BEAST configurations). Of particular interest to me were some of the tests we ran to explore the use of ‘distant’ loudspeakers. Through one experiment, we realised that a sense of distance could be given to an 8-channel image by moving only the four frontal tracks onto more distantly located speakers whilst leaving the four rear channels in the original speakers. This confirmed once again that frontal sound plays a larger role in qualitative assessment of images than rear sound (and had the additional advantage of saving money!). Another experiment involving distance was very telling and confirmed even further my thinking about the importance of the qualitative assessment of sonic images. Scott had developed a swarm granulator that moved multiple small sound grains around an arbitrary number of speakers in a manner reminiscent of a flock of birds or a swarm of bees. We evaluated the effectiveness of this on the system, both with and without time-alignment (which applies delays to compensate for speakers at different distances from the listener), and every single member of BEAST who was present agreed that it sounded better without the time-alignment. This suggested that, in practice and in large spaces, idealised, phase-coherent sound systems can be less effective than a pragmatic approach of simply finding out what sounds best – the whole point about, for example, Distant speakers, is that they should be and sound distant!
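The time-alignment tested in these experiments rests on a simple calculation: every speaker nearer to the listening position than the most distant one is delayed by the difference in travel time. A minimal sketch of that calculation, in JavaScript and assuming a speed of sound of roughly 343 m/s, is given below.

```javascript
// Compute per-speaker delays so that sound from all speakers arrives together
// at the listening position. distances: metres from the listener to each speaker.
function timeAlignmentDelays(distances, speedOfSound = 343) {
  const farthest = Math.max(...distances);
  // Nearer speakers wait for the extra travel time of the farthest one.
  return distances.map(d => (farthest - d) / speedOfSound); // seconds
}

// Example: a distant speaker at 20 m and a close one at 5 m.
console.log(timeAlignmentDelays([20, 5])); // [0, ~0.0437] - the close speaker waits ~44 ms
```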
The enlarged BEAST system and n-channel composition

When describing a large, complex system like BEAST, it is all too easy to lapse into a technology-based discussion and lose sight of such a system’s primary purpose: to articulate the sounds and structures of acousmatic music. Nevertheless, I should make some mention of the way in which the BEAST system was actually configured as a result of the SRIF-2 grant and which remained current up to my retirement as director in September 2014. I have already described the set-up at the desk: normally two MotorBEASTs, and a laptop, all connected through an ethernet hub to the main machine. The laptop is able to ‘see’ and control the main computer (normally off to the side) via JollysVNC. This main machine, a Mac tower, contained a MOTU 424 PCI card, connected via MOTU’s AudioWire to 4 MOTU 24 i/o interfaces, each offering 24 channels of analogue audio output. From here, the audio signals were sent to the amplifier/speaker chain (active or passive), using multicore cables to reach different parts of the concert space. In the Elgar Concert Hall of the new Bramall Music Building at the University of Birmingham, the MOTU system is replaced by the in-house Roland REAC system, as the building itself has an ethernet infrastructure. The bulk of the speakers added courtesy of the SRIF-2 award were Genelecs (24 8030s, 16 8040s, eight 8050s and eight 7070 subs, plus two 1037s to add to two we already had; we also had eight 1029s available if required). In addition, we bought four Volt speakers (kits from Wilmslow Audio), bringing our total up to eight, and 12 APG stage monitor-style speakers. New studio monitors enabled us to establish a Main ring of eight ATCs permanently assigned to BEAST. We also bought more amplifiers, flight cases, a considerable quantity of stands and other metalwork, including lighting trusses from which to hang speakers, and what must have been many kilometres of cables, multicores, etc. Some of this can be seen in the photographs (Figures 12.15–12.20, taken in Copenhagen, the CBSO Centre, Birmingham, several in the Elisabethkirche, Berlin, and in the new Elgar Concert Hall, University of Birmingham).21
Figure 12.15 BEAST at the 2007 Øresundsbiennalen SoundAround festival, Copenhagen
Figure 12.16 BEAST in the CBSO Centre, Birmingham, in 2009
System design from around 2006 was primarily based on accommodating 8-channel works, using multiple 8-channel arrays to obtain a range of qualitative sound images in a way analogous to stereo diffusion. The ability of the MOTU 424 card to drive four 24 i/o interfaces giving 96 audio channels was a happy coincidence: as what I would consider a ‘reasonable’ system for stereo diffusion might comprise around 24 speakers (12 arrays of two – for example: the BEAST Main 8, Side Fill, Flood, Punch, Desk, Roof Front, Roof Rear, Subs and Tweeters), it seemed completely logical to me that 12 arrays of eight (i.e. 96) loudspeakers would be needed for 8-channel diffusion.
Figure 12.17 BEAST in the Elisabethkirche, Berlin, for the 2010 Inventionen festival
Figure 12.18 The main computers with outputs from the audio interfaces (Berlin, 2010)
Performance and composition: chicken or egg?

Throughout this chapter, there have been – I hope – some strong hints that pragmatism in dealing with the public performance of acousmatic music (based on a similar approach to composition itself) can frequently cause ‘eureka’ moments as a result of playing on and with a system like BEAST. My work with BEAST has without doubt influenced my compositional thinking, and
Figure 12.19 The amplifiers for BEAST’s passive loudspeakers (Berlin, 2010)
Figure 12.20 The trussing in the roof of the new Elgar Concert Hall in the Bramall Music Building, University of Birmingham, for the hall’s opening festival, which coincided with BEAST’s 30th anniversary. Only the square of trussing remains, from which the Keystones and all 10 Tweeter Trees are now suspended
I have evidence of this happening with a succession of students. For an acousmatic composer, the sound system over which one’s work is played can be considered roughly equivalent to ‘an instrument’ (indeed, many sound systems are described as ‘loudspeaker orchestras’). And, as with historical developments and improvements in the design of acoustic instruments,22 there exists in acousmatic music that blurred boundary between the normally distinct domains of composition and performance to which I referred at the very beginning of this chapter.
The way I compose in the studio is in a kind of feedback loop. Sound material attracts me and prompts me to work with it, in what I think of as a partnership. I propose transformations and processing; the material responds: yes, no, maybe – but with more of this or that; try again; test again; listen again; assess again . . . and eventually we – material and I – arrive at something we can call ‘a work’.The way I diffuse and work with a large sound system like BEAST is very similar: the concrete (and I choose this word very deliberately) sound experience excites my musical interest. Working with a large and incredibly flexible system like BEAST, it is not surprising that compositional strategies and possibilities suggest themselves. This is precisely what happened with BEASTory (2010) (Harrison 2010b), a kind of ‘portrait’ work based on the sounds of the BEAST system ‘at rest’ (metalwork, speaker stands, etc.) and being installed, packed up and loaded into a truck (trundling flight cases, winding cables, etc.), which was a direct result of experiencing BEASTmulch as the means of controlling the system. This is an n-channel piece: some of the component tracks are spatial stems (mostly stereo) intended for real-time spatialisation in performance, using BEASTmulch’s inbuilt VBAP plug-ins. Further compositional ideas were triggered by performance-related experiences and experiments during the summer of 2012 when, before officially moving into the Bramall Music Building at the University of Birmingham, we had a full week of testing the BEAST system in the new Elgar Concert Hall. Hearing loudspeaker arrays with certain ‘colours’ and in specific locations in relation to each other made me think about the kinds of materials that might work best on each array. This led to an approach related to the stem-based method described earlier, but now with specific and unique materials composed specifically for sub-sets (stereo, 4-channel, 8-channel) within a larger BEAST set-up. BEASTiary (2012), another ‘portrait’ work, based on the same materials as BEASTory, was composed for the opening festival in the Elgar Concert Hall in December 2012 (which coincided with BEAST’s thirtieth anniversary). It was originally composed in 72 channels (seven 8-channel stems, two 4-channel stems and four stereo stems) for direct mapping onto 72 speakers of the 96-channel BEAST system installed for the event.
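Real-time spatialisation of stems, as described above, ultimately rests on amplitude panning between loudspeakers. As a point of reference only (this is not the BEASTmulch implementation), the sketch below shows constant-power panning across a single pair of speakers, the two-speaker case that pairwise schemes such as VBAP generalise to arbitrary pairs and, in 3D, triplets.

```javascript
// Constant-power panning between a pair of loudspeakers.
// pan: 0 = fully left, 1 = fully right. Total radiated power stays constant
// because left^2 + right^2 = 1 for every pan position.
function constantPowerPan(pan) {
  const angle = pan * Math.PI / 2;
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

console.log(constantPowerPan(0.5)); // { left: ~0.707, right: ~0.707 }
```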
BEAST in 2014

The following description of the system is based on my design for that thirtieth anniversary concert, based in turn on the direct, concrete experience of our week of testing, and was refined for my last BEAST weekend as Director in May 2014. I shall go through the design layer by layer and try to explain the function of each array of speakers, not only for BEASTiary or even 8-channel works, but also for stereo pieces – in most BEAST concerts, stereo is still the predominant format.23 The schematic of the hall does not show the balconies clearly, so I shall mention height when we reach the appropriate arrays. We start with the Main array (Figure 12.21) of eight ATC SCM50 loudspeakers (one of the tasks of our AHRC-funded Research Project was to try to evolve a clear taxonomy; for this array it would be Main Front Left and Right, Main Wide Left and Right, Main Side Left and Right and Main Rear Left and Right, always using stereo pairs and moving from front to rear). Please note that this is not the same as the BEAST Main 8, although for concerts including both 8-channel and stereo works it is possible to use the Front, Wide and Rear pairs from this circular array as the Main, Wide and Rear pairs of the BEAST Main 8 for stereo diffusion (as I mentioned earlier, for stereo works, I would prefer to use Side Fill speakers instead of the Side speakers of this Main circular array). Figure 12.22 shows this same array, extended for distant 8-channel images, as discussed earlier, by four Genelec 1037s (for the diffusion of stereo works, the Distant Front pair of these
[Legend for the hall schematics in Figures 12.21–12.36: ATC (8), 8050 (8), APG (12), 1037 (4), 8040 (16), Lynx (2), 7070 (8), 8030 (24), Volt (4), Tweeters (10), 1094 (2), D&B PA speakers (if available); ‘HIGH’ marks the technical gallery and front lighting bar.]
Figure 12.21 Building the BEAST system in the Elgar Concert Hall, Bramall Music Building, University of Birmingham (the concerts in May 2014 used the same configuration as was used for the opening festival in December 2012): the main circle of eight ATCs in a ‘French Ring’
Figure 12.22 Adding four Genelec 1037s to allow the creation of a ‘distant’ image (by replacing the front four ATCs with the 1037s)
four speakers act as the Distants of the BEAST Main 8, and the Distant Wide 1037s can provide a kind of ‘stereo reference’). Figure 12.23 shows the Diffuse 8-channel array (Diffuse Front, Diffuse Wide, Diffuse Side, Diffuse Rear) of APG stage monitors – these all sit on the floor angled up (conveniently achieved via their wedge shape) and pointing out to reflect off the walls.The Diffuse Side pair double as the Side Fills for stereo diffusion. Next we have a Double Diamond array of small speakers (Genelec 8030s) set at just above head-height [Figure 12.24]. In BEAST taxonomy, these are labelled Close Wide Left and Right, Close Side Left and Right, Close Rear Left and Right and Close Front/Back. For most stereo diffusion, the Front/Back pair is of limited use and the Sides risk the ‘giant headphone’ effect, so should be used with caution. Above these and offset with them in a French 8 array are the ‘Truss’ speakers (Front, Wide, Side, Rear; also Genelec 8030s [Figure 12.25]), so called because for many years these were suspended from aluminium lighting trusses, above the listeners’ heads, but quite close.24 Higher still is a square of new trussing on which are four more 8030s (known as Keystone Front and Rear), pointing more or less directly down (Figure 12.26). The Tweeters, eight or 10 ‘stars’ or ‘inverted umbrellas’, each carrying six piezoelectric tweeters, also hang from this square of trussing.The Keystone speakers grew from our experiments with VBAP on domelike loudspeaker structures, and the entire dome of 8030s (as well as the tweeters) can be seen in Figure 12.27. To hint at the dome being part of a full sphere25 and also to emulate the Desk speakers in our stereo set-up, there are four more 8030s on the floor at the Desk [Figure 12.28].
Figure 12.23 Eight APG speakers, on the floor and facing away from the audience, form the Diffuse array (also configured as a French Ring)
Figure 12.24 Eight Genelec 8030s at slightly above ear height form the Close array, configured as a Double Diamond
Figure 12.25 Eight Genelec 8030s on telescopic stands form the Truss array (so-called because these were formerly hung from aluminium lighting trussing); these are above the listeners’ heads but still relatively close
Figure 12.26 The Keystone speakers – four Genelec 8030s located on the central square of trussing, suspended from a central lighting bar over the audience. A central crosspiece and eight outriggers also allow the 10 Tweeter Trees to be suspended
Figure 12.27 Figures 12.24–12.26 combine to create a dome-like array of matched speakers
Figure 12.28 Four more Genelec 8030s form the Desk array. These are normally on the floor, facing directly upwards, though other configurations have also been tried, ranging from two to eight speakers, facing up, out or even directly down at the floor
On the first balcony in the Elgar Concert Hall is an offset ring (Double Diamond) of Genelec 8040s (the Wides and Front are actually on tall stands on the stage to match the elevation of the others [Figure 12.29]). Above these are eight more 8040s (Figure 12.30), hanging from lighting bars on the front of the second balcony (the technical gallery), with the Front pair on a lighting bar above the stage. These 8040s can very roughly be thought of as part of an ‘outer dome’ which also includes the Main ATCs and two Keystone speakers very high up on the mesh in the roof of the auditorium (Figure 12.31). I originally thought of the two roof speakers as one pair of ‘special effects’ speakers (Figure 12.32) that I needed for BEASTiary but which are also useful for stereo diffusion. This array also includes speakers in the ECH’s reverberation chambers, another pair perched on the top of the retracting choir seating and a solitary rear speaker on the roof of the lighting control box. This single channel resulted from using all our Genelec 8050s elsewhere (the choir seating just mentioned and a true 5.1 array [Figure 12.33], necessitating the mixdown of two of my 72 BEASTiary channels to mono!). Because the BEAST thirtieth anniversary concert also contained stereo works (including Klang (Harrison 1982), performed back at the first BEAST concert in December 1982), we needed to include stereo-specific arrays for stereo diffusion, such as Flood and Punch speakers (Figure 12.34). The Sub-woofer arrays (Figure 12.35) complete the full installation, which can be seen in its entirety in Figure 12.36.
Figure 12.29 Eight Genelec 8040s form the Mid-High array (in the Double Diamond configuration). The May 2014 events were the last to use this array, as it was felt that the height difference between these and the ‘ground-floor’ speakers was insufficient to create a noticeably different image
Figure 12.30 The High array – eight Genelec 8040s on the upper gallery and the front lighting bar
Figure 12.31 The two arrays of 8040s (Figures 12.29 and 12.30), together with the eight Main ATCs and the two APGs high up on the mesh in the roof (see Figure 12.32), form a kind of ‘outer dome’
Figure 12.32 The FX (special effects) array – speakers in the ECH’s reverberation chambers, on the choir stalls, on the mesh high up in the ceiling and on the top of the lighting box. Because the concert also required a 5.1 array (see Figure 12.33), only one Genelec 8050 was available for this location
Figure 12.33 The five full-range speakers (Genelec 8050s) for the 5.1 array
Figure 12.34 Flood (APG) and Punch (Tannoy Lynx) speakers for stereo pieces
Figure 12.35 The Subs (eight Genelec 7070s and two Genelec 1094s)
Figure 12.36 The full 96-channel system
Wrapping up (and heading out?)

My 32 years of experience with BEAST taught me a huge amount about acousmatic music performance, but also about composition – the border between the two remains fuzzy. Despite the performance practice of diffusion being nearly as established as the acousmatic medium itself, it remains something of a novelty in certain regions of the world. For example, of almost 300 works performed at the 2015 ICMC in Denton, Texas, only a handful were actively diffused, despite the considerable trouble taken by the organisers to install good sound systems for the concerts. This is a significant expressive loss for composers and listeners alike. For I remain firmly committed to the belief that, whilst the Urtext alone can realistically represent ‘the work’ in a controlled acoustic environment, it is far less likely to reveal its interior detail in larger public venues. I therefore have some trouble understanding composers who have presumably struggled for hours, days, weeks in the studio to create something as close to perfection as they can, but who are content for all that inner detail, so crucial to a full appreciation and understanding of their music, to be swallowed up and rendered inaudible in a public space. There is always something that can be done to improve the audience’s reception of a piece. From a personal point of view, I am convinced of the necessity of some level of intervention in the presentation of a work in a public space; even in the case of my own examples of what in BEAST came to be called ‘seriously multichannel composition’, I constantly adjust relative levels and diffuse images to different loudspeaker arrays if the system on which I am performing offers such additional arrays. Nevertheless, my enthusiasm for this way of working at the unstable interface between composition and performance has to be tempered by an infusion of hard reality. I am no longer director of BEAST, and it is clear to me that I cannot and should not have a central role in the next stages of BEAST’s story. Although I am assured by the new BEAST team, Dr Scott Wilson (director), Dr Annie Mahtani (my replacement at Birmingham) and Dr James Carpenter (technician), of plenty of future opportunities to perform on the system, it makes no sense to compose only for such a system. I want to ensure that my music can be presented in as many contexts as possible, and there are not that many systems in the world which offer BEAST’s flexibility. This has made me think carefully about what my approach to composition and performance should be from this point onward. Perhaps I shall spend time reinvestigating stereo; alternatively, if 8-channel really is, as some have claimed, the ‘new stereo’ (though one might ask which ‘8-channel’!), I could continue with that format, on the basis that, if enough speakers are available, then diffusion – even of eight channels – is still an option to enhance the delivery of the composed images. On the other hand, two post-BEAST experiences have led me to consider other approaches to both composing and performance.
What initially triggered this line of thought was the presentation of both older and more recent pieces of mine in concerts with systems of just 10 loudspeakers – arranged as the BEAST Main 8 plus Side Fills (facing away) for stereo works, then turned to face the audience as Side Directs for 8-channel pieces, with the option of using the Distants for the front two source channels at certain points.26 These concerts were remarkably successful, which led me to consider the possibility of composing directly for this 10-channel format, which is easily addressed by some compact interfaces. A further prompt to think about new formats grew directly out of the experience of composing Going/Places (an hour-long, 32-channel work) for the Huddersfield Immersive Sound System (HISS) (Harrison 2015). This piece was my first after retiring from Birmingham and
continued my stem-based approach from BEASTiary but, because of the specifics of HISS and the performance venue, was much less profligate in its use of channels and, in particular, of 8-channel stems as the norm. Going/Places incorporates four stereo and two 4-channel stems as well as two 8-channel ones [Figures 12.37 and 12.38] and demonstrates that, contrary to my earlier exposition of diffusing 8-channel stems over multiple 8-channel arrays, it might not really be necessary to have eight high speakers (for example) to give the cue of ‘high’ to the brain; four (or even two) might be enough. The same goes for ‘low’ images and possibly even ‘diffuse’ ones, too. So I am beginning to look for efficiencies with regard to the overall size of the loudspeaker array and the number of composed channels. This is thus a refinement of my ideas about spatial stems for differentiated diffusion over irregular arrays like the BEAST Main 8, enhanced by adding the ‘most qualitatively significant’ speakers/ images – all capable of being established within relatively small systems without undue difficulty these days. At the same time and in direct contradiction to this, I have recently become seriously interested in the possibilities offered by ambisonics – certainly for composition, but possibly also for performance. This interest grew from discussions with a former PhD student, Dr Joseph Anderson, now a researcher at DXARTS at the University of Washington in Seattle, who volunteered to undertake the gigantic task of rendering the 32 channels of Going/Places and the 30 channels of Espaces cachés (2014) to a format suitable for CD release. His solution was UHJ stereo, into which much massaging of the characteristics of the different loudspeaker arrays of these two works was somehow rendered; the stress was on finding a pragmatic solution that captured the perceived qualitative aspects of the different arrays. The CD (Voyages, on the empreintes DIGITALes label) does not represent the Urtext as with a piece that was originally stereo, but is a version, rendered for a specific listening situation. Of course, there are still issues surrounding the use of ambisonics – the need to decode for a given speaker array (both for monitoring during composition and for concert presentation, necessitating multiple decode options for different systems and venues), the number of channels required for higher orders27 and how to organise compositional workflow within a digital audio workstation being just three. The most problematic aspect to me is the possible decoupling, because of these complexities, of spatial processing from any other sound transformations; ‘space’ runs the risk of being treated as a ‘parameter’ (quantitative thinking!) that is ‘bolted on’ afterwards rather than emerging organically from the quintessential characteristics of one’s sound material. 
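For readers unfamiliar with the technique, the core of first-order ambisonic encoding can be sketched in a few lines: a mono signal is weighted into a four-channel B-format sound field according to its direction. The example below uses one traditional (FuMa) convention and illustrates the general principle only; it does not represent the tools used for Voyages.

```javascript
// Encode a mono sample into traditional first-order B-format (FuMa convention):
// W (omnidirectional, scaled by 1/sqrt(2)), X (front-back), Y (left-right), Z (up-down).
// azimuth: radians anticlockwise from front; elevation: radians above the horizontal.
function encodeFOA(sample, azimuth, elevation) {
  return {
    W: sample * Math.SQRT1_2,
    X: sample * Math.cos(azimuth) * Math.cos(elevation),
    Y: sample * Math.sin(azimuth) * Math.cos(elevation),
    Z: sample * Math.sin(elevation),
  };
}

// A source 90 degrees to the left, on the horizontal plane:
console.log(encodeFOA(1.0, Math.PI / 2, 0)); // W ~0.707, X ~0, Y 1, Z 0
```

Decoding then derives one feed per loudspeaker from these four channels for whatever array is available, which is precisely why multiple decode options are needed for different systems and venues.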
Nevertheless, the quality of the spatial render in ambisonics demands further investigation, despite my suspicion of any kind of idealised single solution – I am particularly excited by the ability to create 3D images of perceptually different sizes, distances and locations within the total sound field, the fact that the sound field is not limited to only the horizontal plane and the smoothness and realism with which sounds move within the field, apparently decoupled from the actual loudspeakers.28 Whatever happens, it seems that my own journey into seriously multichannel composition, for reasons of sheer practicality, may be drawing to a close – composers want their music to be heard, and you will not hear much of my recent work unless the concert promoter can provide a large loudspeaker system. Am I concerned about coming to the end of my research with and through BEAST? Yes, because spatiality remains a vital element in what attracts me to sound materials in the first place, and it is something that access to a large system enabled me to explore in an extremely open-ended way (and, it has to be said, with a great deal of enjoyment); no, because I remain a pragmatist – I shall continue to use whatever tools I have at my disposal to explore and experiment with sound in all its glorious variety.
Figure 12.37 Screenshot of Going/Places (2015) showing the 32 source tracks, originally colour-coded (but here visible in gray-tone) by stems: Diffuse (1–8), Main (9–16), Reference (17–18), Desk (19–20), Solo (21–24), High (25–28), Low (29–32)
[Key to the loudspeaker layout shown in Figure 12.38: tracks 1–8 to the 8 Diffuse speakers (4 wedge and 4 Mackie); 9–16 to the 8 Main speakers (Meyer + sub); 17–18 to the 2 Reference/Solo speakers (Meyer + sub); 19–20 to the 2 Desk speakers (2 × 2 Moca Audio); 21–24 to the 4 Solo/Effects speakers (Bose); 25–28 to the 4 High Quad speakers (Mackie); 29–32 to the 4 Low Quad speakers (Bellecour) – total channel count 32.]
Figure 12.38 Layout for Going/Places (2015) – each source track from Figure 12.37 is routed directly to the correspondingly numbered loudspeaker
Notes
1 There will be numerous issues raised in this narrative that would benefit from further exploration but which space constraints do not permit.
2 Mention of Schaeffer will indicate to most readers my musical and aesthetic affiliations with regard to the historical division between musique concrète and elektronische Musik. For more elaboration of this point, as well as a fuller discussion of the historical and aesthetic links between musique concrète, acousmatic music and sound diffusion, see Harrison (1998, 2000) and Harrison and Wilson (2010) (though I should point out that, like François Bayle, I am a huge Stockhausen fan!).
3 The speed of transition was largely determined by the budget available.
4 For the principles of Dolby A noise reduction see Dolby (1967).
5 See Howard and Angus (2009) for a good introduction to this (and other aspects of sound and hearing).
6 Something that characterises the more qualitative, organic nature of musique concrète, whose composers were generally advocates of diffusion.
7 Once again, see Harrison (1998, 2000) and Harrison and Wilson (2010) for a fuller discussion of my thinking regarding these terms and the distinctions between the two early European schools.
8 The point at which the stereo image is deemed to be ‘correct’ – normally assumed to be at the third corner of an equilateral triangle with the two loudspeakers – see Figure 12.1.
9 I accept that it is a big ‘if’ in these days of personal listening devices and internet streaming.
10 The conventions of the concert are frequently targeted for special criticism, largely arising from their links to supposedly outmoded social behaviours and their lack of appeal to potential new audiences for acousmatic music. I do not have space to address all the dimensions of this issue here, though the orientation of the ears of the listeners is a matter of some concern to me, as I make clear elsewhere in the text.
11 This point in fact applies to all the speakers in a system – one should always be prepared to try adjusting speaker positions and angles to achieve improvements; and note that the assessment should be made by ear, in the actual space, and not simply by reference to some idealised set-up.
12 An idea first expressed to me by Denis Smalley, who was also the person responsible for engaging me in diffusing one of my works (Pair/Impair) for the first time in, I think, 1979.
13 Among members of BEAST this was normally referred to by describing the actions of the fingers on the faders: ‘wiggly-wiggly’.
14 See Smalley (2007).
15 Nor do I like layouts in which all the left and right speakers are mirrored outwards from the centre of the desk – good for moving between front and rear, but it is almost impossible to be sure you are grabbing a specific pair of speakers – and you need two hands to do so.
16 This is sometimes known as ‘evocative transcription’ and has been common since the earliest years of such performance (see Chion 1982 on Parmegiani, Bayle 1993, Wishart 2012).
17 Including at the very end of the work, where the breaking wave is recapitulated on the frontal array, leaving resonant filters to die away on the Distants.
18 Worse still is the lack of any standard of track/channel numbering beyond stereo; when I send multichannel works to other performers and promoters, I always send multiple mono files – one per track – and a map of the speaker configuration with the source track for each speaker clearly marked.
19 The use of stems is common in commercial music and cinema sound, where tracks containing similar materials are often grouped together, effectively as sub-mixes – normally for the purpose of more efficient signal processing.
20 From the early days, I had tried to make BEAST a system capable of touring, rather than just playing in its home venue – as we did not really have one back then, this was an obvious route to take.
21 One of the main things these pictures show is how difficult it is to photograph a large sound system, distributed all over a performance space.
22 Were they the result of demands from composers or were they the reason that composers are able to think differently?
23 Stereo diffusion does not really require all 96 speakers, which are there for the multichannel works – all the same, most composers of stereo pieces seem to think they have to use them all!
24 Unfortunately, the trussing was deemed unsafe in 2014, so the Truss speakers are now mounted individually on very high floor stands (marked X in Figure 12.25).
25 We had earlier attempted – for example, at the Inventionen festival in Berlin in 2010 – more of a full sphere by placing Genelec 1029s on the floor at the foot of each of the ‘truss’ stands; these were very effective in rehearsal but did not work at all in concert, as all the sound was absorbed by the audience members closest to those speakers.
26 The question of whether one might prefer Side Directs or Side Fills at any given moment is a niggling problem, but one which could also be addressed and overcome through processing at the composition stage, if one had already decided on composing for this 10-channel format.
27 A detailed discussion of ambisonic theory and practice is outside our remit here. There is a good introduction on Wikipedia (https://en.wikipedia.org/wiki/Ambisonics).
28 With ‘normal’ amplitude panning techniques, there is a tendency for moving sounds to arrive in any loudspeakers in their trajectory with something of a ‘thump’.
References
Bayle, F. 1993. Musique Acousmatique – Propositions . . . Positions, Paris: INA & Buchet/Chastel.
Chion, M. 1982. L’Envers d’une oeuvre (Parmegiani: De Natura Sonorum), Paris: Buchet/Chastel.
Dolby, R. 1967. ‘An Audio Noise Reduction System’. Journal of the Audio Engineering Society, 15(4), pp. 383–388.
Harrison, J. 1998. ‘Sound, Space, Sculpture: Some Thoughts on the What, How and Why of Sound Diffusion’. Organised Sound, 3(2), pp. 117–127, Cambridge: Cambridge University Press.
———. 2000. ‘Diffusion: Theories and Practices, With Particular Reference to the BEAST System’. eContact, 2(4), Montreal: CEC. http://cec.concordia.ca/econtact/Diffusion/diffindex.htm
Harrison, J. & Wilson, S. 2010. ‘Rethinking the BEAST: Recent Developments in Multichannel Composition at Birmingham Electro Acoustic Sound Theatre’. Organised Sound, 15(3), pp. 239–250, Cambridge: Cambridge University Press.
Howard, D. & Angus, J. 2009. Acoustics and Psychoacoustics (4th edition), Oxford: Focal Press.
Smalley, D. 2007. ‘Space-form and the Acousmatic Image’. Organised Sound, 12(1), pp. 35–58, Cambridge: Cambridge University Press.
Wishart, T. 2012. Sound Composition, York: Wishart (ISBN 978-0-9510313-3-9).
Recordings
Harrison, J. 1982. Klang, on Évidence matérielle, Montreal: empreintes DIGITALes (IMED 0052).
———. 1995. Unsound objects, on Articles indéfinis, Montreal: empreintes DIGITALes (IMED 9627).
———. 1999–2006. ReCycle (Rock’n’Roll; Internal Combustion; Free Fall; Streams), on Environs, Montreal: empreintes DIGITALes (IMED 0788).
———. 2010b. BEASTory, on Musik für mehr als einen Lautsprecher (30 Jahre Inventionen VII), Berlin: Edition RZ (ed. RZ 3006–3008).
———. 2012. BEASTiary – there is currently no recording available.
———. 2015. Going/Places, on Voyages, Montreal: empreintes DIGITALes (IMED 16139).
13
CREATIVE CODING FOR AUDIOVISUAL ART
The CodeCircle platform
Mick Grierson
Introduction

Audiovisual art is a popular form of global expression. However, it is a discipline that occupies a liminal space between art, science, music and film. As such it can be difficult to define and challenging to master. Its primary roots are in experimental cinema, but also electroacoustic music and related practices (Garro 2012; Brougher and Mattis 2005). It is in many ways a technical art,1 most often requiring low-level knowledge of computer graphics and sound synthesis methods. Pioneers in the field such as John Whitney Sr, Larry Cuba and Vibeke Sorensen are considered key figures in computer graphics history (Sito 2013; Sitney 2002), whilst also being recognised as artists and composers (Moritz 1986). Furthermore, a number of those studying electronic music consider themselves creators of audiovisual art, and it is increasing in popularity in the academy. Although audiovisual art’s relationship to coding is less explicit, it is clearly a historically important element of the practice. A similar relationship persists in other fields, for example, live coding (McLean 2004) and ‘creative coding’ (Lopes 2009), and it is likewise challenging to define the boundaries of these forms of art making (Carvalho and Lund 2015; Daniels, Naumann and Thoben 2010). Creative computing is becoming a more widely accepted academic discipline, and this in turn occupies similar territory. Despite these difficulties in definitions, there is increased interest in creative technology practices and a thirst for technical skills that facilitate their creation. Developing a practice in audiovisual art and similar disciplines can be challenging due to the requirements of learning the necessary technical skills. There are a number of reasons why these challenges are important to address. It can be argued that these forms of art are important to the continued development of a wide range of technologies and media. For example, in the case of Whitney, Sorensen and Cuba, they were amongst the first to use computers to make audiovisual art. Their contributions also incorporate the first computer-generated title sequence (Vertigo, 1957, title graphics by Whitney), the first computer-generated 3D graphics used in a movie (Star Wars, 1977, 3D Death Star/flight control graphics by Cuba) and the inspirational paradigm for the 3D computer graphics software Maya.2 This interaction between research and practice continues to be a feature of the discipline. As an example, our work3 on the use
of generative adversarial networks for the creation of artworks has led to an exhibition in the Whitney Museum of American Art (Broad and Grierson 2017) and is considered state of the art in technical terms. More importantly, there are a great many people who would wish to develop these skills and apply them in their own work. ‘Massive open online courses’ (MOOCs) on the topic have reached very high numbers of participants (Grierson et al. 2013).4 However, these more global, distributed methods of learning reveal the need for better, more interactive coding tools. In order to more successfully deliver learning that can effectively develop and enhance the experience and knowledge of the creative practitioner, it is essential to provide better methods for understanding and developing knowledge of the underlying techniques and practices of the form. Improving accessibility to and functionality of creative computing platforms is a potentially valuable approach that can be used to tackle this problem. This realisation has led to the creation of a new approach to making audiovisual art through creative code, which we call ‘CodeCircle’ (www.codecircle.com). CodeCircle represents an attempt to find better methods for supporting the learning and use of computer programming for interactive sound and graphics (see Figure 13.1). It might be more appropriate to call it CodeCircle V2,5 as it is the second generation of the tool, reflecting the longstanding and widely held philosophy of creative computing at Goldsmiths. It is a project that places embodied audiovisual interaction and creative coding at its centre. The platform fuses a number of approaches to writing software that are borrowed from the history of computing, including the notion of interactive programming (explored later in this chapter). Further, it introduces the idea of collaborative code, where creators can work together at the same time and on the same documents, which can be considered an interesting method for learning.6 CodeCircle is an online, web-based programming tool created by Goldsmiths Computing. The tool is designed to be specifically tailored to the creation of practical work in the field. This includes computer music, computer graphics, digital signal processing, real-time interaction, interactive machine learning, games development and design. All such practices share a historical link to the field of digital audiovisual art. CodeCircle consists of a browser-based HTML5 integrated development environment (IDE) with bug detection, real-time rendering and social features. Although many such platforms exist, CodeCircle uniquely fuses interactive programming with collaborative coding, providing just in time (JIT) compilation (where available) alongside real-time, socially oriented document editing in the browser. Users can work together on software that features accelerated computer graphics, buffer-level audio, signal processing, real-time user interfaces and any other HTML5/CSS3/JavaScript compatible features they wish to use. Crucially, updates to the program code are simultaneously pushed to all connected users and then immediately rendered, enabling a novel form of instant, interactive collaboration that is not available in any other platform, to our knowledge, at the time of writing. What follows is an attempt to properly define the requirements for CodeCircle based on informed, pedagogical and creative practice needs.
This is done through a brief definition of audiovisual art methods in the context of creative computing, further contextualising its position as a domain of enquiry that depends on and informs technological innovation in sound, graphics and interaction. We also briefly explore and delineate creative, interactive, and collaborative coding, including key associated benefits and problems, before describing how these features form part of the design of the CodeCircle platform and providing examples of its use and evaluation.
Figure 13.1 The CodeCircle front page
Audiovisual art: a brief definition

Audiovisual art is a modern artistic discipline and a global phenomenon, with leading artists including Ryoji Ikeda, Ryoichi Kurokawa, Paul Prudence and many others who, although less well known, produce work of very high quality. The ease with which artists can distribute their work online has not diluted the field – rather it has demonstrated its potential to create new audiences, even in its most abstract, experimental forms. As mentioned earlier, the technical difficulties associated with its production, particularly in real-time interaction contexts, have naturally led to its association with technological and scientific progress. As such it is closely aligned to other disciplines that fall within the broader academic field of creative computing. Despite its popularity and success, audiovisual art is a discipline that is misunderstood and poorly characterised. Academics from related disciplines, such as music, digital media arts and film, tend to think of audiovisual art as a subfield of their own, perhaps because that is how it has emerged from the perspective of these academic disciplines and also perhaps because of the natural territoriality of academics with respect to the disciplinary specificity of their fields (Birtwistle 2010; Chion 1994; Le Grice 1981). Authors naturally focus on those aspects of the field that fit the remit of their discipline (for example, Garro 2012 explores the field in terms of music and Lopes 2009 contextualises it as operating within creative coding). Filmmakers who make music, or musicians that have made films, may refer to their work as audiovisual art, but terms such as ‘film’, ‘music video’ and ‘VJing’ are often sufficient to describe such practices (Richardson, Gorbman and Vernallis 2013). I assert that neither the historical nor the emerging canon of audiovisual art could be properly described by those terms. What best characterises audiovisual art is the long-established practice exploring relationships between formal ideas and experiential observations shared across image and sound composition (McDonnell 2010, Lund and Lund 2009). Much of this can be thought of as extending the concept of visual music (Moritz 1986, 1997). The earliest works of visual music were preoccupied with formal characteristics of music composition applied to image making, in all probability due to the lack of sufficient vocabulary to describe abstract time-based media concepts in the visual arts. In addition, grammars of visual composition, such as graphic design principles developed by Bauhaus tutors, including Johannes Itten, follow a line of thought derived from their colleagues’ early experimentation in similar fields (Itten 1975). The early work of John and James Whitney (Abstract Film Exercises, 1948) and certain of Norman McLaren’s animations (for example, Dots, 1942) are known for their extraordinary innovations in sound synthesis. These innovations were primarily achieved through the application of techniques from the visual arts to sound practice, namely drawing and animation.7 It should be noted that in both these cases, the works mentioned aimed to create and explore an abstract, unified, audiovisual production method and outcome: abstract so as to highlight the formal relationships between the two media and unified in such a way that neither image nor sound would be meaningful without the other.
So it is clear that audiovisual art predates electroacoustic music as a field of enquiry incorporating experimental sound synthesis, and also that it has its own aesthetic concerns. These explorations are now known to exploit dedicated multi-sensory cells in the brain that only fire when strong audiovisual connections occur. It is known that these effects cause modulation in attention mechanisms, leading to a concrete, observable form of complex neurological stimulation from wholly abstract material. When added to the consideration that this stimulation is involuntary, we get much closer to a fuller understanding of audiovisual art as an art form. The notion of a physically affective experience created through effective connections between abstract image and sound lies firmly at the centre of the practice.
So to summarise, it is three things that most accurately characterise audiovisual art: firstly, the focus on the audiovisual experience over the musical or visual, for the purposes of specifically and directly modulating attention through multisensory stimulation; secondly, the use of abstraction to focus the work on these experiential qualities, as opposed to other qualities such as story, characters, context – elements that might detract from the experience of the relationship itself; and thirdly, the development and application of new technological approaches to more effectively explore these principles in practice (Rees 2013). We can consider the CodeCircle project, presented here, as an example of such a practice, but it is also more coherently understood as an attempt to enhance pedagogy around such practices.
The importance of code in audiovisual art

Audiovisual art often focuses on pattern making – sonically, visually and as an audiovisual interaction between the two. One of the central challenges of audiovisual art is to improve the manner by which such patterns and relationships can be explored. Non-real time methods for audiovisual art have been, and continue to be, functionally identical to standard film-making and animation practice in the majority of cases. However, the problem with such approaches is that although it is far easier to generate high-quality visual art and sound in non-real time, it is more challenging to create and explore strong connections between the two streams, largely because production methods do not afford for the easy sharing of data or information across sound and image. For example, most video post-production tools do not allow audio data to be analysed by feature extraction tools or images to be synthesised based on their relationship to sound. This is important, because, as I have stated in the past (Grierson 2005), textural information – for example, the timbre of a sound, or the visual texture of an image, can be considered of primary importance in audiovisual art and composition – more so than simple, temporal mapping or alignment between data streams (Abbado 1988; Ikeshiro 2013). In order to capture and explore textural relationships between sounds and images, composers require access to raw data – the ability to both generate and analyse pixels and audio samples, including any associated features that can be derived from them. One of the most efficient and powerful methods for achieving this is through signal processing at the level of computer code. To be absolutely clear, without code and algorithms, no form of digital art making would be possible at all. Therefore, code is a primary medium for the production of audiovisual art. Code can therefore be seen as central both in real-time and non-real-time audiovisual composition. It could even be said that to a large degree, the field of audiovisual art and composition is dominated by code. One can produce a work in real time and/or render a high-definition version of a piece in non-real time, but to do so relies upon code. There are exceptions, such as analogue video synthesis and physical systems, and such practices definitely deserve special attention, but the definition and use of these approaches is not the subject of this chapter.
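As a concrete, minimal illustration of that access to raw data (and only a sketch, using the browser APIs discussed later in this chapter; the element ids are invented for the example), an audio feature such as RMS loudness can directly drive the brightness of rendered pixels:

```javascript
// Map an audio feature (RMS loudness) onto a visual parameter (brightness).
// Assumes an <audio> element with id "src" and a <canvas> with id "out".
const ctx = new AudioContext();
const analyser = ctx.createAnalyser();
ctx.createMediaElementSource(document.getElementById('src')).connect(analyser);
analyser.connect(ctx.destination);

const canvas = document.getElementById('out');
const g2d = canvas.getContext('2d');
const samples = new Float32Array(analyser.fftSize);

function draw() {
  analyser.getFloatTimeDomainData(samples);            // raw audio samples
  const rms = Math.sqrt(samples.reduce((s, x) => s + x * x, 0) / samples.length);
  const brightness = Math.min(255, Math.floor(rms * 4 * 255));
  g2d.fillStyle = `rgb(${brightness}, ${brightness}, ${brightness})`;
  g2d.fillRect(0, 0, canvas.width, canvas.height);      // pixels follow the sound
  requestAnimationFrame(draw);
}
draw();
```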
Creative code and interactivity

Historically, artists from the world of fine art are not often considered to be programmers, and vice versa. There are welcome exceptions to this, as already mentioned, and one or two key historical events that are well known, such as the 1968 Cybernetic Serendipity Exhibition. However, coders are often seen as technicians, and artists seen as visionaries. This is not always the case in the fields of computer music and computer graphics, where computer use is primary. It is important to remember that John Whitney Sr, considered by many to be the first computer
graphics artist, canonised the term ‘audiovisual composition’, and I consider it in much the same manner he described (Whitney 1980).8 Furthermore, computer music pioneers were as preoccupied with art making as with signal processing – for example, Daphne Oram pioneered synthesiser design9 in the United Kingdom whilst also attempting to define electronic music as a compositional approach (Oram 1972). Despite these exceptions, computer programming is not historically associated with the arts in the broadest sense, and even within the fields mentioned, artists often see programming as a technical act, performed by technicians, not artists.10 This view leads to the frequent pairing of artists or composers with technicians for the realisation of digital artworks11 and tends to lend weight to assumptions that technical skill is not a requirement in order for creative acts to be produced. However, this approach is contrasted by recent practices such as creative code, where craft and technique are essential for the realisation of artistic expression. It is fully accepted that to many, the notion of technique being central to any creative art is in itself an anathema, but it is entirely clear it is the confluence of science, technology and art, and the mastery of that liminal space between them, that leads to audiovisual art, creative code and related practices. In the last decade there has been a noticeable increase in the number of artists working with code as material. This can be attributed in large part to the increased accessibility of online educational resources, related creative coding software tools such as Processing (processing.org), openFrameworks (openframeworks.cc), Cinder (libcinder.org), Supercollider (supercollider.github.io), and visual data-flow languages such as Pure Data (msp.ucsd.edu), Max (cycling74.com) and VVVV (vvvv.org).12 These resources have made it much easier to code whilst increasing access to the knowledge required to do so. With the use of carefully constructed languages, simplified programming syntax, and well-documented examples, these environments make it more possible for artists to hone their craft and develop creative ideas interactively. This has allowed them to engage in more meaningful contemporary art making, ushering in the era of creative code. However, such resources are not without their problems. The nature of the workflow is much the same as traditional programming in many cases, specifically those that feature textual programming. Perhaps you type code into an integrated development environment (IDE). You compile your software (which can easily take minutes). You watch it run and interact with it. You attempt to improve it – without necessarily being clear with regard to the impact of your edits, then wait for your program to compile before you can experience it again. This is, in some ways, very far from an embodied experience and not something we would associate with a natural, human-like, creative process, such as plucking a string on a cello or striking a drum. Visual data-flow languages such as Max, Pure Data and VVVV are more interactive and, perhaps as a result, more embodied than traditional approaches. For example, both Max and PD run software interactively as the program is edited. This provides a useful affordance – the experience of your edits is immediate.
It is possibly for this reason that visual data-flow approaches are popular amongst musicians, as musicians often use real-time feedback and experience-led approaches in order to understand how their creative acts unfold in the world. However, although this affordance is welcome, and in some cases inspiring, unfortunately, the restrictions surrounding the visual data-flow paradigm can make certain forms of signal processing more challenging to learn and understand. For example, I would argue it is more challenging to compute buffer-level signal processing in visual data-flow languages than it is to simply program them using traditional methods.13 This is particularly the case if users engage with a specifically designed C++ DSP toolkit that has been designed to be easy to use, such as Maximilian (https://github.com/micknoise/Maximilian). So – to summarise – if the language
and syntax of textual programming methods are good, but the interaction method is not, and if visual data-flow environments have better interaction, but are not flexible enough to code bespoke signal processing routines, what are the next logical steps, and how do they impact on our design decisions for CodeCircle?
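Before turning to that question, the earlier contrast is worth making concrete: the kind of bespoke, buffer-level routine that is awkward to express in a visual data-flow patch takes only a few lines of textual code. The sketch below is illustrative only, written in JavaScript rather than Maximilian's C++, and applies a one-pole low-pass filter sample by sample to a block of audio.

```javascript
// Buffer-level DSP in plain code: a one-pole low-pass filter applied
// sample by sample to a block of audio - the kind of bespoke routine that
// is awkward to express as a visual data-flow patch.
function onePoleLowpass(input, cutoffHz, sampleRate = 44100) {
  const output = new Float32Array(input.length);
  const a = Math.exp(-2 * Math.PI * cutoffHz / sampleRate); // feedback coefficient
  let state = 0;
  for (let i = 0; i < input.length; i++) {
    state = (1 - a) * input[i] + a * state;
    output[i] = state;
  }
  return output;
}

// Example: smooth a block of white noise with a 500 Hz cutoff.
const noise = Float32Array.from({ length: 512 }, () => Math.random() * 2 - 1);
const smoothed = onePoleLowpass(noise, 500);
```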
Just in time and interactive programming

Interactive programming describes a process whereby code compiles and runs whilst it is being written. The idea has been incorporated into CodeCircle in order to address the issues raised earlier regarding the lack of an interactive feedback process in traditional programming, in the hope that it will help make learning to program more experiential and embodied. The idea has many potential benefits, one being that it may allow users to understand more quickly the relationships between their actions and any associated outcomes, increasing the likelihood that their actions might be implicitly associated with the output of the program directly. This may increase the possibility for intuitive leaps to occur whilst also making the process of programming considerably more engaging. Useful for programming where the desired outcome or problem is not fully known or understood, this method is a strong candidate for practices such as computer music and audiovisual art. In fact, it has flourished in this domain, with Max, PD and Supercollider being prominent examples. Interactive programming tools most often use a just in time (JIT) approach. JIT platforms have been around for decades in a variety of forms (McCarthy 1960). Contemporary C++ compilers such as LLVM now have the capacity for JIT compilation that can enable interactive programming (CLANG). The contemporary C++ framework, JUCE, now includes an integrated development environment (IDE) based on the principle (The Projucer, 2015). The Chuck programming language can also be described as an interactive programming platform (Wang 2008). Such approaches have become very powerful in the domain of JavaScript web applications, particularly since contemporary browsers began to support JIT compilation, greatly accelerating JavaScript, which is an otherwise interpreted language. Modern browsers now support JIT compilation of accelerated graphics (the graphics framework, webGL, was finalised 2011), and buffer-level audio DSP (the webAudio framework was finalised 2014, and is now the subject of a regular academic conference, WAC (The WebAudio Conference)). Platforms such as glslsandbox.com and shadertoy.com are excellent examples of contemporary interactive coding platforms that rely on such processes. These are used by learners and visual artists alike to develop and demonstrate their skill, and tools such as livecodelab.net use the same approach (graphically at least), coupled with a hugely simplified yet elegant syntax to achieve the same results with children who are learning to code. What is crucial here is that these platforms are becoming a central mechanism whereby people learn to code for creative purposes. I would argue that these platforms, specifically shadertoy.com, glslsandbox.com and livecodelab.net, represent a step change in how people now engage with audiovisual art and creative code.
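The essence of the approach can be sketched in a few lines of browser JavaScript. This is an illustration rather than CodeCircle's implementation: it simply re-evaluates the contents of an (assumed) editor element shortly after each edit, so the running program always reflects the current state of the code.

```javascript
// A naive sketch of interactive programming in the browser: re-run the user's
// code shortly after every edit. Assumes a <textarea id="editor"> on the page.
// Real platforms add error recovery, sandboxing and incremental compilation.
const editor = document.getElementById('editor');
let pending;

editor.addEventListener('input', () => {
  clearTimeout(pending);
  pending = setTimeout(() => {
    try {
      new Function(editor.value)();   // execute the current state of the code
    } catch (err) {
      console.warn('Code not yet runnable:', err.message); // keep editing
    }
  }, 300); // wait for a brief pause in typing
});
```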
Live coding and interactive programming
Although there is a great deal of crossover between interactive programming as an approach and the live coding movement represented by collectives such as TOPLAP, interactive programming is a separate field of interaction research. In addition to using ideas from interactive programming, live coding highlights concerns that are fundamentally performative,
aesthetic, cultural and conceptual in nature – such as the imperative for performers to show their code to audiences and the importance of performer- or domain-specific language design. Interactive programming is an interaction technique; it is not concerned with aesthetics or culture, but with problems of learning, usage, experience and output. It is clear, as has already been mentioned, that live coding is a form of practice that shares a number of approaches and methods with audiovisual art and creative code generally, and the concerns raised here may well apply across such practices. However, it is important to recognise that interactive coding is a technical solution to an interaction problem that has been the subject of debate for over 50 years, whereas live coding is considerably more than just this – it is an art form with a number of different technical and aesthetic challenges.14

Importantly, the JavaScript-based interactive programming tools mentioned earlier use more-or-less immediate interpretation; that is, they execute code as soon as the user finishes typing. It is perhaps this which is most interesting to our discussion with respect to embodied interaction, learning and use in the context of programming. However, this approach may not always be a useful or recommended live coding technique. It makes it difficult for the user to dictate when the code will be executed – it is executed immediately, and following any further edits, it will execute from the start once more. In live coding, specifying precisely when code runs can be central to a live performance.

Nevertheless, it is clear that there are shared concerns between the more or less technical field of interactive programming and live coding, as evidenced by the excellent livecodelab.net and other impressive platforms such as Tidal.15 It is clearer to say, though, that interactive programming is a technique that features in some forms of live coding as well as in other fields of practice, but that on its own is not ‘live coding’. This does not mean that interactive programming platforms such as CodeCircle cannot be used for live coding; rather, because CodeCircle currently features interactive programming methods only as a means of solving interaction issues – and because the running program is interrupted by every edit – it may not be the best platform available for such purposes.
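To make the immediate-interpretation behaviour described above concrete, the following is a minimal sketch of how such an editor loop might work, assuming a plain HTML textarea stands in for a real editor component. Actual platforms use proper editor widgets and JIT-compiled evaluation, but the principle – re-running the whole document shortly after every edit – is the same.

// Re-evaluate the user's code a moment after they stop typing.
const editor = document.querySelector('textarea');    // stand-in for a real code editor
let pending;
editor.addEventListener('input', () => {
  clearTimeout(pending);
  pending = setTimeout(() => {
    try {
      new Function(editor.value)();                    // the document runs again from the start
    } catch (err) {
      console.error(err);                              // report errors rather than halting the editor
    }
  }, 300);                                             // a short debounce: effectively immediate execution
});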
Collaborative coding
As the name suggests, collaborative coding describes a process whereby different people collaborate to write a single piece of software. This is a common practice, made more effective with version control tools such as git16 (by Linus Torvalds, creator of Linux), svn and similar. These tools provide a central online repository where all code is stored. Users make copies of this repository, make edits and then submit those edits back to the repository, at which point any conflicts between different users’ edits are managed. This process helps to ensure that edits made by different people can be effectively controlled and integrated – for example, when they happen at similar times, or when they relate to the same section of the code being edited. It is safe to say that almost no software is or should be written without some form of version control, as described earlier.17

A similar, more interactive kind of collaborative coding has recently emerged in new software development platforms such as cloud9 (c9.io), collabedit.com and etherpad.org. Users can see each other’s edits immediately, or at least as soon as possible after they occur. This kind of interaction can be experienced in Google’s popular ‘Google Docs’ platform,18 but it is not supported by major professional development environments such as Microsoft’s Visual Studio or Xcode. This may be with good reason, but as little is currently known regarding the possible impact of this new approach, it remains a largely unexplored territory.
The advantages of such collaborative approaches are numerous. Programming can be a highly challenging task, and code sharing is common; supporting it in this way seems eminently sensible, allowing users to work together on complex programming problems – just as composers and artists often work together when making various forms of art. However, computer programming is still largely treated as a solitary task – and it need not be. In education and the creative arts this new method may be hugely important. For example, teachers can comment on student work, make suggestions and monitor group work in real time, with the advantage of knowing precisely what each student has contributed. It can also facilitate peer learning, rewarding students for social interaction and for assisting other learners through mutual shared experience. Finally, it may bring to light collaborative approaches, reflecting improvised composition and performance, that would otherwise not emerge in the creative computing discipline. As we will see, it is a founding element of CodeCircle for precisely these reasons.
CodeCircle
CodeCircle has been developed using the full-stack web development framework Meteor (www.meteor.com), in collaboration with Dr Matthew Yee-King and Jakub Fiala (Fiala et al. 2016). Meteor is an excellent tool for creating websites that feature multi-user interaction, due to its support for reactive architectures: users interact with databases that automatically update themselves when clients or servers cause or detect changes to database elements. We adapted this approach, applying it to the development of a web-based programming tool rather than to website design specifically. We do this by representing code documents as database objects whilst recording code edits as user-specific database entries (a schematic sketch of this model is given at the end of this section). Crucially, we render the document in the browser, with the code window on top and to the right, as can be seen in Figure 13.2.

Figure 13.2 shows the main interface for the document editor. On the right-hand side one can clearly see the code edit window. This is a version of the ACE code editor (https://ace.c9.io/), which includes highlighting for different languages (HTML5, CSS3, JavaScript, CoffeeScript etc.). We have also included support for dynamic error checking using JSHint (http://jshint.com). This combination is powerful – providing instant, in-place feedback on code edits, allowing users to understand the implications of their creative coding decisions more fluidly.

Figure 13.2 also shows that the file has been edited by a number of authors, represented by individual usernames in the upper right section of the document window. By parsing the database, it is very simple to discover which edits were created by which users. This is of the highest importance with respect to assessing document ownership and individual effort. In this way, group projects can be undertaken with confidence that convincing evidence will be available to indicate the provenance of any code excerpt within a document. Furthermore, information about how people code can be analysed offline and used to help better understand the strategies people might use in certain circumstances. We have used this approach to gather data on thousands of users in order to better understand the creative, exploratory strategies that might help people develop better work.

Figure 13.2 also shows an interactive session featuring buffer-level signal processing. A discussion regarding how to implement stereo audio is visible in the collapsible comments pane. Here we can see the potential educational value in terms of remote tutorial support. Figure 13.3 shows the potential for accelerated computing provided by the platform. The screenshot shows a template created in order to instruct students how to implement a basic fragment shader. The template has been specifically designed to be compatible with
Figure 13.2 The basic CodeCircle interface. The code editor is on the right. Above the editor are file operations, including elementary permissions. The code is rendered underneath
Figure 13.3 WebGL shader code executing in the CodeCircle platform
glslsandbox.com, a popular interactive coding platform – documents can be copied and pasted directly into CodeCircle. In addition, the comments section gives specific advice to beginners regarding how to improve the performance of the graphical content – a factor that varies from machine to machine.

The Maximilian C++ DSP framework has been converted to JavaScript via emscripten (https://github.com/kripken/emscripten), a tool that transpiles19 C++ code to JavaScript (see Figure 13.4). This provides comprehensive, buffer-level support for professional-level synthesis, sample manipulation, granular synthesis, FFT/iFFT-based manipulation, Music Information Retrieval and atomic (wavelet-based) synthesis, amongst many other features not supported by the webAudio framework. The webAudio framework is an excellent innovation, standardising digital audio across all modern browsers, but there are some specific electronic and computer music approaches that it does not support. Transpiling offers users the opportunity to implement such methods, with the same functionality as complex C++ signal processing libraries, whilst interactively programming in a web browser.

In addition to buffer-level signal processing and accelerated computer graphics, the platform supports comprehensive asset loading and manipulation. Samples, images, video and other assets are uploaded to the platform and then stored in the database alongside the document. When a document is forked (copied by another user), these assets migrate with the document. When a user downloads any document, all assets are contained within it, and the document can be run in any web browser, complete with those assets, without the need for a web server.20
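As promised above, here is a schematic sketch of the document-and-edits model, written in the style of a Meteor application. The collection names, field names and the render routine are hypothetical, chosen only to illustrate the idea of storing documents as database objects and edits as user-specific entries; they are not CodeCircle’s actual schema.

import { Mongo } from 'meteor/mongo';
import { Meteor } from 'meteor/meteor';
import { Tracker } from 'meteor/tracker';

// Hypothetical collections: one for code documents, one for individual edits.
const Documents = new Mongo.Collection('documents');
const Edits = new Mongo.Collection('edits');

// Record every edit against the signed-in user, so provenance can be recovered later.
function recordEdit(docId, change) {
  Edits.insert({ docId, userId: Meteor.userId(), change, at: new Date() });
}

// Reactive rendering: re-runs automatically whenever any client changes the document.
function watchDocument(docId, render) {          // render is the application's own display routine
  Tracker.autorun(() => {
    const doc = Documents.findOne(docId);
    if (doc) render(doc.source);
  });
}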
CodeCircle in the wild
This new, interactive version of CodeCircle has been in active development online since late 2015. During that time we have used it for the delivery of two MOOCs: one as part of an international collaboration with the Los Angeles-based online learning provider Kadenze, and a second with the UK MOOC provider FutureLearn (produced by Dr Matthew Yee-King). The Kadenze programme was a ten-week course in audiovisual art, supporting thousands of learners. Learning materials on audio signal processing, visualisation techniques, computer vision, algorithmic composition methods and other key topics were presented as runnable documents on CodeCircle (which are still freely available at codecircle.com). Students were instructed to make copies of example code and work through video tutorials delivered by the Kadenze.com platform to explore audiovisual art-making principles. All assignments were completed on CodeCircle, allowing us to perform statistical analysis to more fully understand how students learn to program for creative practice.

Analysis of activity by users attempting to complete specific tasks in audiovisual processing, undertaken by Dr Matthew Yee-King, Professor Mark d’Inverno and myself, has produced a potentially important finding, which we reported at the 2017 IEEE EDUCON conference (Yee-King et al. 2017). Our results suggest that students who are engaged in programming for the purposes of creative activity of their own devising, such as the creation of artworks and similar projects, tend to achieve higher grades in programming tasks than students engaged in more traditional science, technology, engineering and maths (STEM) exercises. The evidence for this comes from the number of delete operations carried out by high-achieving learners: in our study, delete operations are statistically significantly correlated with higher grades and also with creative arts methods.21

This result suggests that the platform itself is useful for exploring creative approaches to technology-enhanced creativity, a subject that sits right at the heart of creative computing and
Figure 13.4 C++ DSP code transpiled and running in the CodeCircle platform
audiovisual art specifically. This work is ongoing, but it does appear to show that the design of CodeCircle may help to reveal more effectively the significance of this liminal, poorly defined space that creative computing practices occupy between science and the arts.

We have also been able to use CodeCircle to explore methods that are only now emerging in the field of audiovisual art and creative code, such as machine learning (ML) for electroacoustic music. For example, we present work towards rapid prototyping of electronic musical instrument interfaces using interactive machine learning (IML) in “Rapid Prototyping of New Instruments with CodeCircle” (Zbyszyński et al. 2017). This work shows how creative coders can use machine learning to help develop better interactive tools through the use of our own machine learning libraries embedded in the CodeCircle interface (the rapid-mix-api). The machine learning application programming interface (API) is based on Rebecca Fiebrink’s Wekinator (Fiebrink and Cook 2010) and was developed as part of the RAPID-MIX project.22
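For readers unfamiliar with the interactive machine learning workflow just described, the following sketch illustrates the basic loop – record input/output example pairs, ‘train’, then run the mapping on live input. The tiny nearest-neighbour lookup used here is only a stand-in for the regression and classification models offered by libraries such as the rapid-mix-api or Wekinator; none of the names below are taken from those libraries.

// Minimal interactive machine learning loop (illustrative only).
const examples = [];

// 1. Demonstration: pair a control input (e.g. sensor values) with desired synth parameters.
function addExample(input, output) {
  examples.push({ input, output });
}

// 2/3. 'Training' and running collapsed into a nearest-neighbour lookup:
//      return the output of the recorded example closest to the live input.
function run(input) {
  let best = null;
  let bestDistance = Infinity;
  for (const example of examples) {
    const distance = example.input.reduce((sum, v, i) => sum + (v - input[i]) ** 2, 0);
    if (distance < bestDistance) {
      bestDistance = distance;
      best = example.output;
    }
  }
  return best;
}

// Usage: two demonstrations, then a query with a new gesture.
addExample([0.1, 0.9], { cutoff: 400, resonance: 0.2 });
addExample([0.8, 0.2], { cutoff: 4000, resonance: 0.7 });
run([0.7, 0.3]);   // returns the parameters of the second, closer example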
Examples of work created on the platform
Overall there are currently several thousand users on the CodeCircle platform, each with a number of documents. This creates something of a problem, in that it is quite hard to find specific examples of good practice amongst the available material. However, by browsing the platform and looking at the data in order to discover which documents are more popular than others, it has been possible to extract a number that are of interest. Because the platform is permanently online, it is possible to retrieve by URL specific work by student learners that demonstrates the potential of the platform, and this has been done on occasion. Importantly, as is the case with other creative coding tools mentioned earlier in this chapter, including Max, PD, openFrameworks etc., users most often use the platform to sketch out ideas, and only a very small number of public documents feature completed works. Documents produced by the community include basic demonstrators for sound generation and composition methods, audiovisual instruments and a few presentable pieces. Some examples of such works follow; a video of these and a great number of others can be found at www.doc.gold.ac.uk/~mus02mg/cc.mp4.

The example BMinorThing by user ‘pressxtoskip’ demonstrates proximity-based audiovisual synthesis, where the volume of specific partials relates to the distance between a single moving circle and a series of static circles (see Figure 13.5). The first circle is controlled with the mouse, and as it approaches the other circles in the composition, the amplitude of a specific partial or set of partials is increased as a function of the distance (the mapping is sketched below). Visual links between each element in the system are drawn thicker as objects get closer together, reinforcing the relationships between each partial and the dynamic system that controls them. This creates a simple audiovisual instrument that can be used as a basis for generating harmonic textures.

Yee-algo pattern FX (see Figure 13.6) is a simple audiovisual composition by user ‘cyleung274’, whose username appears to be a pun on Cycling ’74, the company that makes the popular creative programming platform Max. This work features the use of webGL and Maximilian, used together to create a scene with ‘techno’ aesthetics, using neon-like, primary-coloured geometry on a white background, with synthesised audio. Other webGL work includes a series of experiments with GLSL, where accelerated graphics are used to render complex scenes quickly. One such example is shown in Figure 13.7. It is by a user on the second-year creative computing programme at Goldsmiths and shows a 3D sphere being deformed by frequency modulation.
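The proximity-to-amplitude mapping used in BMinorThing can be approximated as follows. This is a reconstruction of the idea described above rather than the author’s code; the linear mapping and the maxDistance parameter are my own simplifications.

// Proximity-based amplitude: loudest when the moving circle touches a static circle,
// silent once they are maxDistance apart or further.
function partialAmplitude(mouse, circle, maxDistance) {
  const dx = mouse.x - circle.x;
  const dy = mouse.y - circle.y;
  const distance = Math.sqrt(dx * dx + dy * dy);
  return Math.max(0, 1 - distance / maxDistance);
}

// Example: the level of one partial, given the current mouse and circle positions.
const amplitude = partialAmplitude({ x: 120, y: 80 }, { x: 300, y: 200 }, 400);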
Figure 13.5 BMinorThing by ‘pressxtoskip’, https://live.codecircle.com/d/cxuENqhG9ifkfkLHN
Figure 13.6 Yee-algo pattern FX by ‘cyleung274’, https://live.codecircle.com/d/LyRCpnLw9tf9YoGiA
Figure 13.7 A sphere being deformed by frequency modulation on the CodeCircle platform. This work was created by a second-year undergraduate student enrolled on Goldsmiths’ creative computing programme
The artist Memo Akten (www.memo.tv) produced a tutorial for the CodeCircle audiovisual art MOOC on Kadenze, in which he presented a series of code examples that helped users learn how to build dynamic systems, specifically particle systems.23 Two screenshots of the final work can be seen in Figure 13.8. In the video,24 you can see the dynamic behaviour of the particle system and hear how it impacts on the sound synthesis approach. This example incorporates some excellent programming and, although it is only an example, represents a very interesting approach to generating audiovisual art.

Figure 13.9 shows a 3D superformula algorithm being explored by students, who are themselves extending one of the examples provided on the platform. In my experience, students sometimes consider the superformula mysterious and challenging. It allows a large number of shapes to be generated by quite simple means and was patented by Johan Gielis in 2005.25 Despite its reputation, it is remarkably simple, being an extension of quite basic methods for shape generation in spherical coordinate systems and useful for creating curved shapes with polar coordinates. It is similar, in fact, to the approach Whitney describes in Digital Harmony (Whitney 1980). The work has been extended by a number of students at Goldsmiths, including Kingsley Ash, who used the superformula for audiovisual synthesis.26 The superformula is also used by the audiovisual artist Paul Prudence in a number of his pieces, including the phenomenal work Cyclotone II.27
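For readers who want to see just how simple the superformula is, here is its two-dimensional polar form in JavaScript. The parameter names follow the commonly published form of Gielis’s equation; the sample loop and the parameter values are my own illustration, not the students’ code, and the three-dimensional shapes in Figure 13.9 are typically obtained by combining two such functions over latitude and longitude.

// Gielis superformula: r(phi) = ( |cos(m*phi/4)/a|^n2 + |sin(m*phi/4)/b|^n3 ) ^ (-1/n1)
function superformula(phi, m, n1, n2, n3, a = 1, b = 1) {
  const t1 = Math.pow(Math.abs(Math.cos(m * phi / 4) / a), n2);
  const t2 = Math.pow(Math.abs(Math.sin(m * phi / 4) / b), n3);
  return Math.pow(t1 + t2, -1 / n1);
}

// Sample one closed outline as (x, y) points; m controls the rotational symmetry.
const points = [];
for (let phi = 0; phi < 2 * Math.PI; phi += 0.01) {
  const r = superformula(phi, 6, 1, 7, 8);
  points.push([r * Math.cos(phi), r * Math.sin(phi)]);
}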
Figure 13.8 Two screenshots from Memo Akten’s Particle system example
Figure 13.9 A wireframe superformula on CodeCircle
There is a range of very interesting audio-only works on the platform, some of which can be heard in the video referenced earlier. These explore a range of approaches to sound making and are offered up with full source code – as all CodeCircle documents are – to the community. One example features the use of DSP methods taken from chiptune aesthetics. This is fascinating, particularly for those not familiar with early digital audio synthesis on personal computing platforms. The example makes great use of bitwise operations, where calculations are performed on individual bits rather than whole bytes.28 In this way, the mathematical operations, rather than being performed on real numbers, are performed on 8-bit number representations.
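The bitwise, 8-bit idiom mentioned above can be illustrated with a generic ‘bytebeat’-style expression of the kind such documents use. The particular expression below is my own illustration rather than the CodeCircle document itself: a running sample counter is combined with shifts and a bitwise OR, and the result is masked to eight bits before being scaled for audio output.

// A generic bytebeat-style chiptune voice: t is a sample counter.
function chip(t) {
  const byte = (t * (t >> 10 | t >> 8)) & 255;   // shifts, OR and an 8-bit mask do the composing
  return byte / 128 - 1;                         // scale 0..255 to roughly -1..1 for audio output
}

// Fill one second of audio at 8 kHz with the resulting waveform.
const buffer = new Float32Array(8000);
for (let t = 0; t < buffer.length; t++) {
  buffer[t] = chip(t);
}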
There are a great many more documents on the platform, and they contain a range of fascinating approaches by students on our MOOCs and on our undergraduate and postgraduate programmes. For a greater selection of such examples, please visit the platform at www.codecircle.com and see the example video at www.doc.gold.ac.uk/~mus02mg/cc.mp4.

Conclusion
From our initial experiments using the platform, a key question has arisen: ‘Is language design really more important than interaction approach?’ The reason for this question is best explained with reference to the earlier arguments in this chapter regarding interactivity. With respect to well-known creative coding platforms such as those already discussed, language design has been the fundamental area of concern. However, our experience demonstrates the possibility that with real-time, instant interactive programming fuelled by powerful JIT compilation, the use of more complex syntax may be made easier, as the distance between the act of creation and the experiencing of the outcome is as small as is practicably possible. Our platform supports any language or framework that can be run in JavaScript (including, via transpiling, C++). As feedback and debugging are detailed and instantaneous (see Figure 13.10), it may not be necessary to sacrifice the power and detail of a language in order to increase ease of use. What this means is that although language design is important for a number of reasons, it may be that the interaction method has a greater impact on the ability of users to use code to create art.

Indications that transpiled C++ code may outperform some embedded browser features require significant scrutiny. The notion of C++ code running in JavaScript may make many developers sceptical, mainly as a result of the generally poor performance of JavaScript when compared to C++, but there is at least some comfort in the possibility that, whatever the truth of this, the general approach could yield significant benefits in terms of functionality, accessibility and learning.

On the subject of collaborative coding, it seems clear that the social features can promote engagement and improve learning. My own experience is that the approach has led to very quick improvements in collaborative projects. The ability to witness changes to the code in real time, and to understand the outcome of those changes immediately, is a feature that has generated great excitement, energy and engagement amongst users. A possibly negative aspect of real-time collaboration in this context has been the potential for anarchic and irresponsible action. Users sometimes found that other users would break code in their absence or, occasionally, in front of them. On the other hand, there is an element of this that might add to creative outcomes; notions of hacking, breaking and cracking are an important part of both the creative arts and computing culture, and the humorous outcomes they can lead to offer potential for creativity. CodeCircle allows users to back up, duplicate and hide their documents, yet it is clear that such breakage can be compelling – there is something magical about the notion of a random editor copy-pasting a running 3D world into another person’s document. So CodeCircle projects sometimes become deliberately broken. However, the platform allows all such adjustments to documents to be easily repaired. Exactly how we will allow
Figure 13.10 Note the white pop-up window and hazard symbol, clearly indicating code errors on line 131 and providing relevant reparatory advice
such interactions to manifest is still emerging, but at present, users must protect their documents to ensure that they work, and it may be that this happens by default in the future. The CodeCircle project has allowed us to explore better methods for embodied, globally oriented audiovisual artwork creation. The platform potentially improves access and support for creative practitioners exploring audiovisual art and other related areas of creative computing practice whilst simultaneously allowing us to better understand the impact of interaction and language design in the context of technology art practices. The platform is free to use and contains a large number of interactive coding examples that we continue to use in teaching and research. Finally, the platform received funding in November 201629 to facilitate its development and maintenance as a teaching resource. This will allow the project to continue for the foreseeable future, aiding the dissemination of knowledge, methods and approaches that remain core to audiovisual art making and related creative practices.
Notes
1 Although not exclusively.
2 Sorensen’s Maya (1993) was funded by the National Science Foundation, led to improvements in stereoscopic 3D graphics approaches and inspired the widely used software of the same name.
3 By which I mean work conducted in the Goldsmiths Embodied Audiovisual Interaction Group (EAVI), specifically by Terence Broad and myself, in this case.
4 Over 150,000 users have taken our “Creative Programming” MOOC – it was the first MOOC from an English university, and the first in the world on the topic of creative code.
5 It is important here to acknowledge Professor Mark d’Inverno and Dr Matthew Yee-King, whose European Commission-funded Framework Programme 7 project, PRAISE, led to the creation of the first CodeCircle. Version 2 was a complete redesign as part of research in the Embodied Audiovisual Interaction group (EAVI) at Goldsmiths, undertaken by Matthew Yee-King, Jakub Fiala and myself.
6 It is our understanding that our new version of CodeCircle represents the first time this method has been used for pedagogical purposes in this field.
7 John Whitney is recognised as one of the fathers of computer graphics, and motion graphics more generally (Moritz 1986).
8 Although he worked mainly in the field of graphical signal processing in later life.
9 The Oramics Machine is amongst the first music synthesisers to feature digital control.
10 This presents a subtle paradox, as one could say the same of any musician whose instrument requires technical skill.
11 A practice which is common at a number of institutions, including the eminent Institut de Recherche et Coordination Acoustique/Musique (IRCAM).
12 VVVV is used by many creative coders who focus on the visual arts, but it is generally similar to Max/PD.
13 Consider implementing a time-domain convolution reverb in Max or PD, and you might see what I mean. What is just a few lines of code in C++ can be very challenging to program in Max or PD.
14 For an introduction to the culture and practices of live coding, visit the organisation’s website https://toplap.org – last accessed 23 February 2018.
15 See the TidalCycles music language website https://tidalcycles.org – last accessed 23 February 2018.
16 See the Git system website https://git-scm.com – last accessed 23 February 2018.
17 At Goldsmiths, all computing students must use git version control, regardless of the complexity of their task.
18 See the website www.google.com/docs/about/ – last accessed 23 February 2018.
19 As opposed to compiling – specifically, the library is compiled into bytecode first, and then translated or ‘transpiled’ to asm.js, which is a low-level form of JavaScript.
20 This functionality represented a considerable technical challenge, specifically relating to cross-domain requests and other issues.
21 Yee-King, d’Inverno and myself reflect on how this might relate to John Dewey’s notions of experience, creativity and ‘Inquiry’, put forward in Art as Experience (1934) and Logic: The Theory of Inquiry (1938). See Yee-King, Grierson and d’Inverno (2017).
22 Funded by the European Commission, Horizon 2020. The IML tutorials are freely available on CodeCircle.com, tagged RAPID.
23 A particle system is a method often used in computer graphics whereby individual elements, or ‘particles’, appear like a group, often by being programmed to behave in certain specific ways or by following certain rules. They can be thought of as different to swarms, as they are not necessarily attempting to emulate life of any kind. They are often used to generate clusters of physically modeled objects, like elements in an explosion.
24 See ‘CodeCircle – Examples of work’ www.doc.gold.ac.uk/~mus02mg/cc.mp4 – last accessed 23 February 2018.
25 EP patent 1177529, Gielis, Johan, “Method and apparatus for synthesizing patterns”, issued 2005.
26 See ‘KINGSLEY ASH: SUPERFORMULA004’ https://freshyorkshireaires.wordpress.com/portfolio/kingsley-ash-superformula004/ – last accessed 23 February 2018.
27 See ‘Paul Prudence – Cyclotone II (LEV & Elektra Festivals)’ www.paulprudence.com/?p=553 – last accessed 23 February 2018.
28 See ‘CodeCircle: Chiptune Advanced’ https://live.codecircle.com/d/xCZs38ihyxxRBupdW – last accessed 23 February 2018.
29 From HEFCE’s Catalyst Fund Call A for small-scale, ‘experimental’ innovations in learning and teaching: www.hefce.ac.uk/lt/innovationfund/
References
Abbado, A. (1988). Perceptual correspondences of abstract animation and synthetic sound. Leonardo, Supplemental Issue, 1, p. 3.
Birtwistle, A. (2010). Cinesonica: sounding film and video. Manchester: Manchester University Press.
Broad, T. and Grierson, M. (2017). Autoencoding Blade Runner: reconstructing films with artificial neural networks. SIGGRAPH, 2017.
Brougher, K. and Mattis, O. (2005). Visual music: synaesthesia in art and music since 1900. London: Thames & Hudson.
Carvalho, A. and Lund, C. (2015). The audiovisual breakthrough. Berlin: Collin&Maierski Print GbR.
Chion, M. (1994). Audio-vision: sound on screen. New York: Columbia University Press.
Daniels, D., Naumann, S. and Thoben, J. (2010). Audiovisuology: an interdisciplinary survey of audiovisual culture. Köln: Walther König.
Fiala, J., Yee-King, M. and Grierson, M. (2016). Collaborative coding interfaces on the web. Proceedings of the International Conference on Live Interfaces, pp. 49–57.
Fiebrink, R. and Cook, P. R. (2010). The Wekinator: a system for real-time, interactive machine learning in music. Proceedings of the Eleventh International Society for Music Information Retrieval Conference (ISMIR 2010) (Utrecht).
Garro, D. (2012). From sonic art to visual music: divergences, convergences, intersections. Organised Sound, 17(2), pp. 103–113.
Grierson, M. (2005). Audiovisual composition. Thesis, Canterbury: University of Kent at Canterbury.
Grierson, M., Yee-King, M. and Gillies, M. (2013). Creative programming for digital media and mobile apps. Coursera, accessed 1/10/2016, www.coursera.org/learn/digitalmedia
Ikeshiro, R. (2013). Studio composition: live audiovisualisation using emergent generative systems. Doctoral thesis, Goldsmiths: University of London.
Itten, J. (1975). Design and form: the basic course at the Bauhaus and later. New York: Van Nostrand Reinhold.
Le Grice, M. (1981). Abstract film and beyond. Cambridge, MA: MIT Press.
Lund, C. and Lund, H. (2009). Audio visual: on visual music and related media. Stuttgart: Arnoldsche Art Publishers.
Maya, 1993, web: http://vibeke.info/maya/, last accessed 19 Feb 2018.
McCarthy, J. (1960). Recursive functions of symbolic expressions and their computation by machine. Communications of the ACM, April 1960.
McDonnell, M. (2010). Visual music – a composition of the “things themselves”. Sounding Out 5, Bournemouth: Bournemouth University.
McLean, A. (2004). Hacking Perl in nightclubs, perl.com, accessed 1/10/2016, www.perl.com/pub/2004/08/31/livecode.html
Moritz, W. (1986). Towards an aesthetics of visual music. Center for Visual Music, accessed 24/3/2017, www.centerforvisualmusic.org/TAVM.htm
———. (1997). The dream of color music, and machines that made it possible. Animation World Magazine.
Oram, D. (1972). An individual note: of music, sound and electronics. London: Galliard.
Projucer, 2015–2018, web: https://juce.com/projucer, last accessed 19 Feb 2018.
Lopes, D. (2009). A philosophy of computer art. London: Routledge.
Rees, A. L. (2013). A history of experimental film and video: from the canonical avant-garde to contemporary British practice. Basingstoke, Hampshire: Palgrave Macmillan.
Richardson, J., Gorbman, C. and Vernallis, C. (2013). The Oxford handbook of new audiovisual aesthetics. Oxford: Oxford University Press.
Sitney, P. A. (2002). Visionary film: the American avant-garde 1943–2000. Oxford: Oxford University Press.
Sito, T. (2013). Moving innovation: a history of computer animation. Cambridge: MIT Press.
Wang, G. (2008). The ChucK audio programming language: a strongly-timed and on-the-fly environ/mentality. Thesis, Princeton University.
Whitney, J. (1980). Digital harmony: on the complementarity of music and visual art. New York: McGraw-Hill.
Yee-King, M., Grierson, M. and d’Inverno, M. (2017). Steam works: student coders experiment more and experimenters gain higher grades. IEEE International Engineering Education Conference, Athens, 2017.
Zbyszyński, M., Yee-King, M. and Grierson, M. (2017). Rapid prototyping of new instruments with CodeCircle. Proceedings of the New Interfaces for Musical Expression Conference, Copenhagen.
INDEX
Note: Page numbers for figures are in italics. affordances 164 – 165, 204 – 206, 216, 251 – 255 afoxé 27, 45n16 African Americans 99, 117 – 119 Afro-Brazilian instruments 27 Afrofuturism 117 – 119 Afro-Latin music 77 agency 257 AGF (musicians) 165 Aichi University 61, 66 Aimer la concrescence 107 AIMS mailing list 172 Akiyama, K. 59 Akten, M. 329, 329 Albiez, S. 115 – 119 Allen, J. 158 – 160, 167 Allen, S. 119 All Junglists; A London Somet’in Dis (documentary) 120 alpha rhythms 235 alphorns 203, 217n3 Altavilla, A. 164 alternation 257 Alternative Histories of Electronic Music conference 1 Amanhã . . . (Tomorrow . . .) installation 149, 149 Amazon rainforest 192 Ambient Century,The (Prendergast) 99 Ambientes Visuales de Programación Aplicativa (AVISPA) 34 – 35 ambisonics 307 “Amen” (song) 120 Amen Andrew project 120 “Amen Break” (song) 120 “Amen Brother” (song) 120 Amigos de Sian Ka’an 188
8-channel recording 288 – 298, 289, 296 – 298, 306 – 307 40 Part Motet 105 96-channel BEAST system 272, 290, 296, 305 Abbaye de Thoronet 211 Ableton Live (software) 263 absolute clock 251 abstract music 96 Access Grid (software) 265 accessibility research 163 – 164 ACE code editor 320 acid house 118, 125 “Acid Tracks” (song) 118, 122 ACMP (Asia Computer Music Project) 60 – 62 AcMus project 38, 45n35 acousmatic music 4 – 5, 83, 205 – 206, 255, 272 – 311 acoustemology 139, 187 acoustic delay time 250 acoustic ecology 179 – 199 Acoustic Space Lab 211 – 212 actants 102, 166 – 167 Action Plan of Minsk Conference 187 action potential 227 Activating Memory 238 – 240, 238 – 241 actor-network theory (ANT) 100 – 102, 167 ADAT digital tape machines 288 additive synthesisers 224 Adeyemi, F. 127 Adult (band) 105 aelectrosonics 211 – 212, 219n35 Aelia Media 137 AEP (auditory evoked potentials) 235 affect, indicators of 242
Index Ash, K. 329 Asia 10 – 12, 49. see also East Asia Asia Computer Music Project (ACMP) 60 – 62 Associação Recreativa Carnavalesca Afoxé Ala n Oyó 27 Aston, P. 1 Asuar, J. 21 ATC loudspeakers 296, 297, 301, 303 Ateliers Populaires 169 Atkin, J. 116 – 118 atomic clocks 250 – 251 Attali, J. 181, 207 attention mechanisms 315 – 316 Attias, B. 115 Atton, C. 101 audience participation 165 – 166 audio samples 316 audiovisual art, coding for 312 – 335 AudioWire 292 auditory evoked potentials (AEP) 235 Augmented Violin 63 Augoyard, J.-F.: Sonic Experience:A Guide to Everyday Sounds 139 AURAL 33, 33 – 34 AURAL2 33 auralisation 208 Aurignacian bird bone 203 Auslander, P. 127 – 128 Australia Council for the Arts 193 Australian Network for Art and Technology 193 Australian Rivers Institute,The 193 – 195 authorship 28 “Autobahn” (song) 123 automatisation 114 avant-garde 3, 8, 99, 118 AVISPA (Ambientes Visuales de Programación Aplicativa) 34 – 35 Avraamov, A. 250 awareness 12 – 14, 137 – 245 Ayers, L.: Cooking with Csound. Part 1:Woodwind and Brass Recipes 58 Aztec Mystic (DJ Rolando) 125
amplification 5 amplitude 224 – 228, 224 – 231 Amsterdam 123 Amsterdam Dance Event 128 An, C. 53 – 54, 62 analog-digital computer systems 21 Analogique A-B 208 analog recordings 103 – 105 analysis/resynthesis approaches 88 Andean culture 24 – 25 Anderson, J. 307 Andriessen, L. 257 Andueza, M. 137 Angelo, M. 193 Anh, D-J. 62 Animat (artificial animal) 223 animation practices 315 – 316 Ansome (band) 125 ANT (actor-network theory) 100 – 102, 167 Antevasin colloquium 43 anthrophony 182 Anthropocene 209 anthropological space 139 Antidatamining VII – Flashcrash 215 – 216 Antunes, J. 21 aperiodic waveforms 214 APG speakers 298, 303 Aphex Twin 99 Apple iPhones 158 – 162, 167 – 172 application programming interface (API) 325 Aquarian Audio Hydrophones 194 aquatic bioacoustics 193 – 197 Aquinas, T.: Summa theologiae 205 – 207 Araújo, S. 143 – 144 Arce-Lopera, C. 26 Arduino platform 22, 35 – 37 Argentina 11, 21, 42 – 44 Argentina Suena 43 Arnstein, S. 169 arousal levels 242 Ars Electronica 7, 99 art: mediaeval theory of 205; nature and 205 – 216 Art Futura festival 30 Artificial Hells (Bishop) 145, 169 – 170 artificiality 213 artificial sentience 30 art music 3 – 4, 7 Art of Noises,The (Russolo) 118, 180 – 181 Artrónica 41 ARTSAT1 INVADER (nano-satellite) 212 ARTSAT2 DESPATCH (Deep Space Amateur Troubadour’s Challenge) space probe 212, 213 art/science collaboration 265 Arts & Humanities Research Council (AHRC) 163, 290 Artsmesh (application) 14, 265 – 268 asfalto 150
“Baby Wants to Ride” (song) 125 Back, L. 126, 168 Bahia 77 Bai, X. 56, 62, 70n28 Bakhtin, M. 102 Balance-Unbalance International Conference series 183 Ball, H.: Gadji beri bimba 212 Bambaataa, A. 117 Ban, W. 56, 60, 66 Bandcamp 103 – 106 Bandt, R. 187, 195; Hearing Places: Sound, Place, Time, Culture 184 bandwidth 257
Index Biosphere Soundscapes 178 – 182, 186 – 193, 188, 196 – 198 ‘Bird’s Paradise: Interactive Tropical Birds Soundscape’ 189, 190 Birmingham ElectroAcoustic Sound Theatre (BEAST) 14 – 15, 272 – 311, 293 – 305 Bishop, C. 144, 172; Artificial Hells 145, 169 – 170; Participation 170 bitwise operations 331 Black Atlantic 114, 120 Black Dice (band) 103 Black Secret Science (album) 120 – 121 Black Secret Technology (album) 119 – 122 Blake, J. 128 Blanco, J. 21 Blessure narcissique (album) 106 Blinkhorn, D. 183, 187 Blomberg, J. 168 Blue Gold 195 Blume, F. 189 BMinorThing 325, 326 Boal, A. 152 body music 113, 116, 123 Bogotá Fonográfica 41 Bogotá Museum of Modern Art (MAMBO) 41 Bohlen-Pierce scale 36 Boiler Room (streaming service) 113, 127 – 128 Bolaños, C. 21, 44n1 Bombes of Bletchley 215 Boom Festival 126 Born, G. 97 – 102, 107; Radicalizing Culture 140 Bosse, N. 264 Boulez, P. 6, 70n19 Boulez-Cage correspondence 6 Boulton, E. 179 bouncing sounds 288 Bourdieu, P. 99 – 100 Brackett, D. 100, 107 Bradby, B. 118 Brady, E.: A Spiral Way: How the Phonograph Changed Ethnography 140 brain-computer interface (BCI) technology 30, 234 – 237 brain-computer music interfacing (BCMI) technology 223, 234 – 245, 236, 241, 244n8 brain disorders 222 Brain Orchestra 30 – 31, 31 Bramall Music Building 280, 292, 296, 297 Brant, H. 250, 254 – 255 Brazil 21 – 22, 27 – 33, 37 – 39, 77, 138, 141 – 143, 154n17 – 18 Brazileiro, R. 27 break beats 114, 119 – 120 breakdowns 122 breaking 331 – 323 Brecht, B.: ‘Radio as an Apparatus of Communication’ 268
Banho de Chuva (Rain Shower) installation 150, 151 banking concept 142 Barad, K. 107 Barbosa, A. 249 Barclay, L. 13, 186, 193 – 196 Barragán, H. 35 – 37 barter networks 103 Bartoli, K. 215 – 216 Battier, M. 11 – 12, 66 Bauhaus design 315 Bayle, F. 4, 206 – 208, 218n14 BCI (brain-computer interface) technology 30, 234 – 237 BCMI (brain-computer music interfacing) technology 223, 234 – 245, 236, 241, 244n8 Beach Boys (band) 5 BEAM@NIME concert series 158, 165 – 167, 170, 173 BEAST (Birmingham ElectroAcoustic Sound Theatre) 14 – 15, 272 – 311, 293 – 305 BEASTiary 296, 301 BEAST Main 8 276 – 278, 277 – 279, 282 – 283, 288 – 290, 298 BEASTmulch 290 – 292, 291, 296 BEASTory 296 Beat Bits 263 Beatles (band) 5 Bec, L. 217n2, 217n7 bedroom producers 97 Beech, D. 172; “Include Me Out!” 171 Beethoven, L. van 235 behaviour modelling 8 – 9 Beijing 12, 42 – 44, 51 – 62 Beijing Central Conservatory (CCoM) 12, 42 – 44, 51 – 56, 52, 65 – 67, 78 – 81, 85 – 91, 258, 263 Beijing China Conservatory (CCM) 55 – 56, 65, 79 Beijing Opera 85 – 88 Bejarano, M. 41 “Belleville Three” (band) 117 Bell Labs 2, 140 bells 86 Bellville, B. 127 Benjamin, W. 103 Berlin Love Parade 123 Berlin Technical University 61 Berry, J. 171 – 172 “Big Fun” (song) 116 Bin, K. 66 BINAC (computer) 214 – 215 biochips 243 bio-computing devices 222 biodiversity 179 – 181, 185 – 187, 192 – 196 BioMusic team 223 biophony 181 – 182 bio-silicon musical processors 222 – 223
Index CEIArtE (Electronic Arts Experimentation and Research Center) 42 – 43 CEIArtE-UNTREF (Red Cross/Red Crescent Climate Centre and the Electronic Arts Experimentation and Research Centre) 183 Celestial Groove (song) 126 CEMC (Centre for Electronic Music in China) 52, 55, 67, 259 Central Amazon Biosphere Reserve 189 Central Park 191 Centre for Art and Technology of National Taipei University of Art 57 Centre for Contemporary Music Documentation of Paris 53 Centre for Electronic Music in China (CEMC) 52, 55, 67, 259 Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) 29 Centre for Research in Electro-Acoustic Music and Audio (CREAMA) 61 Centre of Autonomous Systems and NeuroRobotics (NRAS) 29 Centro de Estudos e Ações Solidárias da Maré (CEASM) 144 Certeau, M. De 139 Chadabe, J. 98, 182 – 183 “chai-mi-you-yan-jiang-cu-tang” 89 Chan, P. 58, 72n44 Chang, J. 62 Change Ringing: Etude 1 264 Chávez, C.: Toward a New Music: Music and Electricity 181 CHEARS (Chinese EARS) 56 Chen, Q. 51 Chen,Yi 51, 69n2, 79 – 80 Chen,Yuanlin 51 – 54, 79 Cheng, C-W. 67 Cheng,Y. (Perry) 56 Chicago 115 – 118, 125 Chile 21 Chin, U. 61 China 12, 42, 49 – 68, 77 – 95 China Electronic Music Development Events 78 “China Sound Unit” 57 ‘China: The Sonic Avant-Garde’ 89 Chinese EARS (CHEARS) 56 Chinese instruments 81 Chinese Ministry of Culture 52 ‘Chinese Model’ 62 Chinese Opera 64 chip technology 117 chiptune aesthetics 331 Chiptune Marching Band (CTMB) 157 – 161, 159, 167, 170 – 172 Choi, K. 62 Chong, K. 50 Chou, W. 80
Brilliant Noise 207 – 208 Brisbane River 195 Bristle with Anger 87 British free improvisation 101 British Library Sound and Vision Archive 215 British Library Sound Archive 140 British Paraorchestra 238 broadcast formats 253 – 254 browsers 313, 318 – 320 Brunel Electronic and Analogue Music festival 165 Brussels World Fair (1958) 14 Bryant, J. 183 Buchla synthesisers 4 Buddhism 49, 62, 81 – 85, 88 – 90, 258 Bug Music (Rothenberg) 184 Bulgakov, S. 258 Burning Man 126 Burtner, M. 185 Butcher, J. 166 Butler, J. 102, 107 Butler, M. 115 C++ 317 – 318, 323, 324, 331 Caceres, J. 37, 256 ‘Cadenza’ 85 – 86 Cage, J. 2 – 3, 6 – 8, 16n19, 90, 181, 217n8, 257 Calder, A. 8 Calegario, F. 22 – 24, 44n4 call and response 257 Cambria,V. 143 – 144 CANarie network 262 CanDLE: Durées 262 candomblé 45n16 canons 1 – 15, 99, 102 capitalism 103 Caramiaux, B. 164 Cardiff, J. 105 Carioca funk 154n19 Carpenter, J. 306 Carr, K. 189 Carvalho, R. 21 Cascone, K. 215 – 217 Casio synthesisers 4 Casken, J. 272 cassettes 104 – 105 Catsoulissept, J. 184 causation 251 – 252 caveirão 149 Cayko, E. 264 – 265 CBSO Centre 291, 293 CCoM (Central Conservatory of Music) 12, 42 – 44, 51 – 56, 52, 78 – 81, 85 – 91, 258, 263 CCRMA TeleConcert 39 CCRMA–University of Stanford 39 CEASM (Centro de Estudos e Ações Solidárias da Maré) 144
Index ‘Computer Music in China’ (Ying) 54 – 55 Computer Music Multidisciplinary Research (CMMR) 40 Computer Music Tutorial,The (Roads) 56 Computer Technology Centre “Renato Archer” (CTI) 33 CONANP (Mexican National Commission for Natural Protected Areas) 188 conceptual art 90, 105 – 107 Concordia Laptop Orchestra (CLOrk) 262 – 263 concrete sound 206 Conference of the Parties (COP21) 191 – 192 Confucian thought 55 – 56 Connected Communities programme 163 Connell, M. 196 conscious effort 235 consciousness 12 – 14, 137 – 245 Conservatoire Edgar Varèse 83 Conservatoire National Supérieur de Musique et de Danse 53, 53 – 54 constraint-based composition 35 Contemporary Music in East Asia (Hee) 68 continuaMENTE 31 – 33, 32 Contradiction harmonieuse 69n2 control 8 – 9 Cooking with Csound. Part 1:Woodwind and Brass Recipes (Ayers and Horner) 58 Coomaraswamy, A. 217n8 COP21 (Conference of the Parties) 191 – 192 Cosgrove, S. 116 Costa, R. 38 Coupigny synthesisers 206 Covent Garden 204 Cowell, H. 16n19 Coyne, R.: Tuning of Place 203 cracking 331 – 323 craftsmanship 205 – 206 CREAMA (Centre for Research in ElectroAcoustic Music and Audio) 61 creative coding 312 – 335 Creative Industries group 163 critical incident technique 164 crossover cover songs 100 CSIRAC (computer) 214 – 215 CSound (software language) 208 CSS3 313 CTI (Computer Technology Centre “Renato Archer”) 33 CTMB (Chiptune Marching Band) 157 – 161, 159, 167, 170 – 172 Cuba 21 Cuba, L. 312 – 313 cultural probes 165, 173 cultural retention, in China 77 – 95 Cultural Revolution 49 – 51, 78 – 79, 83 Culture Lab 158 cumbia 40 – 42
Chowning, J. 36 Chronosphere 62 ChucK (software) 208, 318 Chung, A. 262 – 263 Cia Marginal (theatre company) 148 – 150 CIME (International Confederation of Electroacoustic Music) 55, 70n25 Cinder (software) 317 CIRMMT (Centre for Interdisciplinary Research in Music Media and Technology) 29 citizen participation 169 Citron, M. 99 Clark, C. 184 classical music canon 2 climate change 43, 178 – 202 ClimateWeek NYC 2015 189 – 191 Clinton, G. 116 – 117 CLOrk (Concordia Laptop Orchestra) 262 – 263 cloud, the 268 cloud9 (software development platform) 319 CMMAS (Mexican Centre for Music and Sound Art) 42 – 43 CMMR (Computer Music Multidisciplinary Research) 40 cochleograms 226 code and coding 15, 28, 312 – 335 CodeCircle 2 platform 15 CodeCircle platform 312 – 335, 314, 321 – 324, 327 – 332 code sharing 319 – 320 cognitive processes 28 – 34 Cognitive System and Interactive Media 29 Coleman, C. 58 collabedit.com (software development platform) 319 collaboration 167 – 169, 266 collaborative coding 313, 319 – 320, 331 – 323 Collin, M. 121 Collins, B. 117 Cologne WDR studio 58, 62 Colombia 34 – 37, 40 – 42 Colossus replica (computer) 215 Columbia University 62, 80 command line approaches 2 communication networks 126 – 128 community 23 – 24, 40 – 42, 137, 163 – 164, 178 – 199 compilation 41 – 42, 128 – 129 composed concert music 8 composition: Birmingham ElectroAcoustic Sound Theatre (BEAST) and 294 – 296; constraintbased 35; electroacoustic music 78 – 82, 88; interactive audio-visual 31 – 33 comprador (trader agent) tradition 55 computational neuroscience 222 Computer and Electronic Music Studio 51 computer music 2, 54 – 55
Index “Detroit is Jacking” (song) 116 Development of an intelligent software controlled system for the diffusion of electroacoustic music on large arrays of mixed loudspeakers (research project) 290 – 292 Dewan, E. 210 DEW (Distant Early Warning) system 198 – 199 dialogic research 143 – 144 diaspora 10 – 11, 83 difference, repetition and 98 – 102 Diffuse 8-channel array 298, 298 diffusion 15, 272 – 293, 286, 306 – 307 DigiArts project 78 digital audio tape (DAT) 126 digital audio workstation (DAW) 2, 125, 163, 167 Digital Harmony (Whitney) 329 digitalisation 97 digital music 105 Digital Presence Workstation (DPW) 265 Dinger, K. 123 d’Inverno, M. 323 Dirié, G. 187 disabled communities 163 – 164 disco 101, 115 – 118 discussion groups 127 Disko, F. 126 dissonance 87 Distant Early Warning (DEW) system 198 – 199 distant speaker pairs 277 – 281, 287 – 288, 292, 297, 298 distributed modernism 10 DIY approaches 1 – 4, 23 – 24, 103, 157 – 161, 164, 167, 171 dizi 81 – 84 DJ Goa Gil 122 – 123, 126 DJ Pierre 118 DJ Rolando (Aztec Mystic) 125 DJs 125 – 126, 140 DJ Tiësto 123 – 124 DMI design 22 – 24 Doing Anthropology in Sound (Feld) 151 do-it-yourself. see DIY approaches Dolby A (noise reduction system) 273 ‘Double Diamond’ format 289, 289, 298, 299 DPW (Digital Presence Workstation) 265 dramaturgy 83, 88 Dreamcatcher (band) 104 – 105 “Dred Bass” (song) 121 Drever, J. 152 Drott, E. 101 drum’n’bass 113 – 116, 119 – 122 Drury Lane 204 DSP frameworks 317 – 318, 323, 324, 331 dub plates 120, 126 dub reggae 120 dub step 114, 125, 127 Dudas, R. 61 – 62
curatorship 152 Cybernetic Serendipity exhibition 316 Cybotron (band) 115 – 118 Cycling74 company 325 Cyclotone II 329 cyleung274 325, 327 DACS 3D diffusion desk 282, 283 – 284, 290 Dada 144 Dag Hammarskjold Plaza 191 Daily Motion (website) 127 Dal Farra, R. 42 – 44, 183 DAM(N) Project,The 194 Dancecult, Journal of Electronic Dance Music Culture 115 dancehall 119 – 121 dance music 7, 98 – 99, 113 – 133 Daniel Langlois Foundation for Art, Science and Technology 44 darkcore/dark-step 122 darkness 121 – 122 DAT (digital audio tape) 126 data 15, 208 – 209, 212 – 213, 316 data-flow languages 317 – 318 Davies, E. 115 Davies, H. 59 Davis, L. 197 Davis, R. 115 – 117 DAW (digital audio workstation) 2, 125, 163, 167 Dead Dred (group) 121 deep house 113 Deep Listening 217n4, 258 Deep Listening Institute 185 Deep Space Amateur Troubadour’s Challenge (ARTSAT2 DESPATCH) space probe 212, 213 Delalande, F. 210 delay 250 – 251, 255 – 256, 263 DeMarse, T. 223 Demers, J. 103 democratisation 97 – 99, 104, 107 – 108, 156 Demon Boyz (group) 121 Dempster, S. 217n4 Dennis, B. 1 Densidades 41 Derbyshire, D. 16n20 De Ritis, A. 66 Dermineur, M. 215 – 216 Derrida, J. 98, 102 designer-user dichotomy 168 Design Patterns for Inclusive Collaboration (DePIC) 158, 163 – 164, 167 – 169, 172 desk speaker pairs 279 – 280, 282, 298, 301 DESPATCH 212 DESPATCH/ARTSAT2 212, 213 detachment 167 – 168 Detao Group 268 Detroit 114 – 120, 124 – 125
Index electro-funk 125 electromechanical robots 60 electronica 7, 99 – 101, 114 – 115 electronic art 40 Electronic Arts Experimentation and Research Center (CEIArtE) 42 – 43 electronic dance music (EDM) 113 – 133 electronic music: Birmingham ElectroAcoustic Sound Theatre (BEAST) and 272 – 311; in China 77 – 95; climate change and 178 – 202; coding for audiovisual art and 312 – 335; in East Asia 11 – 12, 49 – 76; internet and 249 – 271; in Latin America 10 – 11, 21 – 48; in Montreal 96 – 112; music neurotechnology and 222 – 245; problems with participation and 156 – 177; Som de Maré project 137 – 155; technoculture and 113 – 133; terminology for 1 – 17, 56, 61, 96 – 112; tuning, metagesture, and 203 – 221 Electronic Music Foundation 182 – 183 ‘Electronic Night’ 78 Electronic Visualizations in the Arts (EVA) 196 elektronische Musik 2, 274, 309 – 310n2 Elgar Concert Hall 272, 280, 292, 295 – 297, 296, 301 Elisabethkirche 294 Ellis, C. 143 EMAC (Electroacoustic Music Association of China) 55, 65 embodied sonic interaction 164 – 165 emergent behaviours 257 – 258 Emile journal 67 Emmerson, S. 14; The Language of Electroacoustic Music 180 eMotion system 64 empathic responses 210 EMSAN (Electroacoustic Music Studies Asia Network) 65 – 69 EMSAN Day symposium 58, 65, 66 emscripten (transpiler) 323 EMS synthesisers 4, 62 Encyclopedia of Chinese Culture 55 – 56 end users 166 – 167, 173 Engineering and Physical Sciences Research Council (EPSRC) 161 – 163 ‘Entre mi cielo y tu agua’ 183 EP (evoked potentials) 235 epic trance 124 epigonions 210 epistemologies 172 – 174 ERD (event-related desynchronisation) 242 erhu 81 Eshun, K. 118 Espace Mendès France 215 Espaces cachés 307 etherpad.org (software development platform) 319 ethnography 140, 151 – 152, 163 ethnomusicology 77 – 78
Duet of the Living Dead 60 Duffy, M. 184 Dunn, D. 184 duration 224 – 226, 231 – 233 Dust 85 – 86 DXARTS 307 Dye 58 Eagle, D. 264 EAMCSCM (Electro-Acoustic Music Centre of Shanghai Conservatory of Music) 53 – 54 EARS (ElectroAcoustic Resource Site) 56 earth-moon-earth experiments 211 Earth to the Unknown Power 211 Ear to the Earth 182 – 183 East Asia 11 – 12, 49 – 76. see also China East Asian Computer Music Exchange Concert and Lecture symposium 60 – 61 Eaton, J. 14, 241 EAVI Nights 158, 165 – 167, 170 – 173 echo 250 Echoes for Woodblock from Peking Opera 87 – 88 Echoes from the Moon 211 ECH’s reverberation chambers 301, 303 Eckert, F. 62 eclecticism 96 Eco, U. 8, 205 ecoacoustics 182, 185, 209 École Normale de Musique de Paris 52 – 54, 83 ecology of practice 158 – 160 EcoSono network 185 écoute réduite 2, 5 Ecstatic Peace (label) 104 EDM (electronic dance music) 113 – 133 EDM (experimental dance music) 7 EEG (electroencephalogram) 234 – 236, 241 – 243, 244n9 Eiffel Tower 191 Eimert, H. 72n53 electric guitars 5 Electrifyin’ Mojo (DJ) 117 electroacoustic music 3 – 7; in China 77 – 95; climate change and 178 – 202; in East Asia 51 – 62; in Latin America 21, 37, 40 – 44, 77; machine learning (ML) and 325; in Montreal 96 – 112; tuning, metagesture, and 210; see also electronic music Electroacoustic Music Association of China (EMAC) 55, 65 Electro-Acoustic Music Centre of Shanghai Conservatory of Music (EAMCSCM) 53 – 54 Electroacoustic Music Studies Asia Network (EMSAN) 65 – 69 Electroacoustic Music Studies Network – EMS09 International Conference 43 electrocumbia 40 – 42 electroencephalogram (EEG) 234 – 236, 241 – 243, 244n9
Index Flowing Sleeves 64, 64 Fluorescent Friends (label) 102 – 105 Flusser,V. 217n2 Fluxus 90 Flying Lotus 128 FM synthesisers 78 folk art 78 folk music 88 Fonoteca Nacional de Mexico 188 – 189 formalisation 82 Form Follows Sound (FFS) workshop 158, 164 – 165, 172 “fort-da” 122 Foster, H. 151 Fourier, J. 213 – 214 Fragments of Extinction 185 Fredericks, I. 78 free improvisation 101 Free Music 181 Freesound 189 free-will music 90 Freire, P. 152; Pedagogy of the Oppressed 142 – 143 French-Brazilian Colloquium on Computer Aided Musical Creation and Analysis 29 ‘French Ring’ format 289, 289, 297, 298 frequency 122, 212 – 213, 224 – 227, 235 Frith, S. 100 front/back speaker pairs 280, 282, 296 – 298 ‘FrostbYte – Red Sound’ 183 Fry, B. 35 Fu-Jen University 56 – 57 Fukunaka, F. 60 Fulldome Workshop 43 Full-Stack 320 Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) 148 funk 117 Funkadelic (band) 117, 123 Fusion works 85 – 86 Futurelearn (learning provider) 323 Future Options Pack (workshop) 162, 169 futurism 115, 118
Euro-American history canon 2 European style 57 EVA (Electronic Visualizations in the Arts) 196 event-related desynchronisation (ERD) 242 evoked potentials (EP) 235 exchange 257 exosomatic systems 204 – 206, 210 – 211 Experimenta Colombia 41 experimental dance music (EDM) 7 experimental film sound 6 – 7 experimental music 1 – 6, 103 Experimental Studio of the Heinrich-Strobel Foundation of SWF 61 Experimental Workshop 49, 58 – 59 expression 257 Extremer 85 – 86 Fabio (DJ-producer) 120 Face,The (magazine) 116 Facebook 103, 113, 127, 268 faders 272 – 273, 278, 281 – 285, 284, 288 Fairlight CMI 80 fanzines 127 FAPERJ (Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro) 148 Fapesp (São Paulo Research Foundation) 38 favelas 154n17 – 18 Feast of Id 61 Feather Rollerball 36 – 37 feedback locking 256 Feenberg, A. 108 Feld, S. 139, 184, 187; Doing Anthropology in Sound 151 Feng Yue 85 – 86 Feral Arts (cultural development agency) 188 Ferrari, L. 138 Festival en tiempo real 41 Festival Internacional de la Imagen (International Image Festival) 39 – 41 Festival ‘L’Œuvre du XXe siècle 72n50 FFS (Form Follows Sound) workshop 158, 164 – 165, 172 FFT/iFFT-based manipulation 323 Fiala, J. 320 Fiebrink, R. 325 field recording 138, 141, 178 Fields, K. 14, 56, 66, 258 – 266 figurativist responses 210 Fikentscher, K. 115 filmmaking practices 315 – 316 Finnegan, R. 100 Fischinger, O. 7 fishecology.org 197 Flashcrash 216 Floating Land event 197 – 198 flood speaker pairs 279, 282, 301, 304 floor speaker pairs 280, 282
gabber house 113, 123 Gabor, D. 208 Gadji beri bimba (Ball) 212 Galison, P. 208 Gallo-Corona, S. 189, 191 gambiarra 38 – 39 Garnier, L. 128 Gaver, B. 165, 173 gay subcultures 99 Geissenkloesterle Cave 203 gender 97, 102 Genelecs speakers 292, 296 – 298, 297 – 304, 301 General’s Order 85 – 86 Generator Music 162 – 163, 166 – 167
Index Guattari, F. 216 Gubbi Gubbi language 197 Guo, W. 51 guqin 86 – 88 Guy Called Gerald, A 119 – 122
Gennevilliers Conservatoire 54 genre, technologies of 96 – 112 genres, hybridity of 12, 96 – 98, 107 – 108 gentrification 171 – 172 geometrical space 139 geophony 181 Germany 115 – 118, 203 gestures 31 – 37, 203 – 221 Giacomi, J. 168 Gibson, J. 164, 251 Gielis, J. 329 Gifford, T. 195 Gilmour, H. 262 Gilroy, P. 120 git (version control tool) 319 Gitga’at Nation 182 glitch 7 globalisation 12, 36, 114 global reach 10 – 15, 20 – 133 Global Sustainable Soundscapes Network, The 184 global warming. see climate change glocalisation 113 – 114 GLSL 325, 329 glslsandbox.com (coding platform) 318, 323 Glui 283, 285 GNMISS (Graphic Networked Music Interactive Scoring System) 262 Goa 113 – 114, 123 – 126 Goa trance 113 Going/Places 306 – 307, 308 – 309 Goldie 122 Goldsmiths 163 – 165, 313, 325, 329 Gomez, D. 26 Google Docs platform 319 Goto, S. 61 Gould, G. 90 GPS soundwalks 198 Grainger, P. 181 Grame 83 granular synthesis 208 granulation 88 Graphic Networked Music Interactive Scoring System (GNMISS) 262 Grateful Dead (band) 5 Great Animal Orchestra,The 184 Great Sandy Biosphere Reserve 187 – 188 Gremo, B. 262 – 264 Gresham-Lancaster, S. 211 Grid 2010 260 Grierson, M. 15 Grisey, G. 101 GRM 52 – 54, 83 – 85, 206 – 207, 289 Grooverider (DJ-producer) 120 Groupe de Recherches Musicales 206 Guan, P. 81, 85 – 86, 95 Guastavino, C. 184 Guatemala 21
hacking 164, 167, 331 – 323 “Halyard Hitch” (song) 125 Hammond organ 3 Han Chinese 83 Hangzhou Conservatory 79 Hannesson, M. 262 Hanyang University 61 Happening Festival 259 happenings 171 haptics 36 – 37, 163 – 164 Haptic Wave device 163 – 164, 169 Harbin Academy of Music 65 hardcore rave 114, 120 – 121, 125 – 126 Hardfloor (band) 122 “Hardtrance Acperiance 1” (song) 122 Hardy, R. 118 Hargreaves, B. 103 – 104 Harmonic Choir 211 Harrison, J. 14 – 15, 16n18 Harthouse (label) 122 Harwell Dekatron (computer) 215 Haworth, C. 16n23, 101 Hawtin, R. 117, 128 Hayles, K. 207 HCI (human-computer interaction) 156, 166 – 168, 173, 207, 266 – 268 Heap, I. 166 Hearing Places: Sound, Place,Time, Culture (Bandt) 184 Hee, S.: Contemporary Music in East Asia 68 Hegarty, P. 103 height, stereo 278 – 279, 281, 296 Helguera, P. 137, 152 Helix Arts 166 – 167 Helmholtz, H. von 213 – 214 Hendrix, Jimi 5, 120 hen embryo 224 Henry, P. 39, 210 Hesmondhalgh, D. 100 heterophony 256 high frequency trading (HFT) 216 high/low art 89 HiNRG 123 Hirschhorn, T. 145 HISS (Huddersfield Immersive Sound System) 306 – 307 HNI (human-network interaction) 266 – 268 Hobbes, J. 211 hocket 257 Hoffman, A. 179 Hoketus 257
Index Ilú 27, 44n13 Iluminado 27, 27 iMac G3 circuitry 215 iMac Music (performance) 215 – 216 image 140 – 141 ImCognita (Laboratory of Interactive Media and Digital Immersion) 29 – 30 Imitation Archive 215 – 216 IML (interactive machine learning) 325 immersive media 28 – 34 Immigrant Sounds – Res(on)Art (Stockholm) 137 Impey, A. 143 improvisation 8, 101 INA-GRM 54 – 55, 62, 73n68 “Include Me Out!” (Beech) 171 India 104 Indonesia 50 infrared and Hall effect 23 Ingold, T. 139 INMARSAT communications satellites 212 Inner City (band) 116 “Inner City Life” (song) 122 Innis, R. 204, 219n30 Input Devices and Music Interaction Laboratory (IDMIL) 23 – 24, 29 installation art 7 – 8 Institute for Applied Arts, National Chiao Tung University 57 Institute of Advanced Media Arts and Sciences (IAMAS) 60 – 61 Institute of Arts and Faculty of Electrical Engineering 28 integrated development environment (IDE) 313, 317 – 318 intelligent dance music (IDM) 7, 99 interaction 14 – 15, 249 – 335 interactive audio-visual composition 31 – 33 interactive machine learning (IML) 325 interactive media 28 – 34 interactive musical computers 223 interactive programming 313, 318 – 319 interdisciplinarity 172 – 174 Interdisciplinary Centre for Computer Music Research (ICCMR) 222 – 223, 240, 244n2 Interdisciplinary Nucleus for Sound Studies (NICS) 28 – 29, 33, 45n17 interface connection issues 266 – 267, 267 interfaces 9 – 10, 31 interference 214 – 215, 253 intermediation 207 International Atomic Time 250 – 251 International Computer Music Conferences (ICMC) 54 – 55, 58, 62, 67, 79, 306 International Confederation of Electroacoustic Music (CIME) 55, 70n25 International Image Festival (Festival Internacional de la Imagen) 39 – 41
Holmes, T. 98 Holt, F. 99 – 101 Hong Kong 49 – 50, 55, 58 Hong Kong Academy for Performing Arts 58 Hong Kong New Music Ensemble 58 Horner, A.: Cooking with Csound. Part 1:Woodwind and Brass Recipes 58 house music 116 – 118, 125 Housing Project,The 137 ‘How to build a Silent Drum’ (Oliver) 44n9 Hsiao,Y-M. 51 HTML5 development environments 313 Hu, M. 89 Hu, X. 56 Huang, C-F. (Jeff) 57, 70 – 71n33 Huangzhong–Journal of Wuhan Conservatory of Music 54 Huddersfield Immersive Sound System (HISS) 306 – 307 Hui, S. 58, 72n42 human-computer interaction (HCI) 156, 166 – 168, 173, 207, 266 – 268 human-machine interaction 33 – 35 human-network interaction (HNI) 266 – 268 human technology 205 Hutter, R. 123 Hwang, S. 67 hybrid BCMI 241 – 242, 243 hybridisation 12, 96 – 98, 107 – 108, 124, 249 hydrophone recordings 194 – 197 Hykes, D. 211 Hyperdub (label) 127 Hyperreal list 127 Hypogeum Hal-Saflieni 203 – 204 IAMAS (Institute of Advanced Media Arts and Sciences) 60 – 61 Iannix 263 Iazzetta, F. 37 – 40; “Silicon Sounds: Bodies and Machines Making Music” 37 ICCMR (Interdisciplinary Centre for Computer Music Research) 222 – 223, 240, 244n2 ICESI (University of the Colombian Institute of Advanced Studies) 26 Ichiyanagi, T. 58 – 59 ICMC (international computer music conferences) 54 – 55, 58, 62, 67, 79, 306 IDE (integrated development environment) 313, 317 – 318 Ideas Sónicas/Sonic Ideas 42 idiomatic teleology 98 IDM (intelligent dance music) 7, 99 IDMIL (Input Devices and Music Interaction Laboratory) 23 – 24, 29 ieee EDUCON conference 323 Ikeda, R. 315 Illiac II computer 56
jungle 119 – 122 “Junglism” (song) 120 “Junglist” (song) 121 “Junglist Soldier” (song) 121 just in time (JIT) compilation 313, 318, 331 juxtaposition 257
internet 14 – 15, 28, 249 – 271 Internet2 37 inter-node relationships 254 intertextual approach 101 – 102 Into India 184 Inventionen festival 294 in vitro neuronal networks 222 – 226, 243 iPhones 158 – 162, 167 – 172 IPv6 protocol 266 Irani, L. 168 IRCAM 29, 35, 53 – 54, 62 – 63, 70n19, 164 ISDN networks 211 Ishii, H. 16n25, 60 – 63, 63 Itaú Cultural Foundation 31 iterability 99 Itten, J. 315 IT University Denmark 266 “It What It Is” (song) 116 Ives, C. 250, 254 – 257 Ivrea Institute of Design 35 Izhikevich, E. 227
Kadenze (learning provider) 323, 329 Kagel, M.: Música para la Torre 21 Kahn, D. 211, 217n7 Kainan University 57, 68 Kakinuma, T. 60 Kallberg, J. 101 Kalman filter implementation 23 Kaluli people 151 Kane, B. 16n15 Kang, S. 61 KANTNAGANO (band) 105 – 107 Kao, H-C. 56 – 57 Kaprow, A. 171 Kartadinata, S. 283, 285 Kawasaki, K. 60 Kaze no michi (Wind Way) 63 KEAMS (Korea Electro-Acoustic Music Society) 61 – 62, 67 Keane, D. 180 Kenya 192 Kepa, W. 24 – 25, 44n5 Keystone speakers 295, 300, 301 khoomi 258 Kim, J. 62 Kimura, M. 63 Kinect 263 – 264 King Kong 7 Kinect cameras 26 Kitchen, The (performance space) 211 Kline, K. 104 – 105 Knuckles, F. 125 Kojima,Y. 61 – 62 Korea Electro-Acoustic Music Society (KEAMS) 61 – 62, 67 Korg synthesisers 4 Kraftwerk (band) 116 – 118, 123 Krause, B. 181 – 184 Kronengold, C. 101 – 102 Kubler, G. 209 Kurokawa, R. 315 Kyma system 56, 62 – 64
jacking 116 Jacktrip software 37, 266 “Jaguar” (song) 125 Jamaican dancehall 119 – 121 Jamaican dub 120 James, M. 121 Japan 49 – 51, 58 – 62 Japan Broadcasting Corporation in Tokyo (NHK) 58 – 59, 62 Japanese Society for Electronic Music (JSEM) 60, 66 Japanese Society for Sonic Arts (JSSA) 66 – 67 Japan Musicological Society 60 Jarre, J-M. 78 – 79 Jauss, H. 100 JavaScript 313, 318 – 319, 323, 331 jazz 84 Jeïta (ou murmure des eaux) 206 – 208 Jeïta Retour 207 Jiaotong University 78 Jin, P. 62, 87 Jing, J. 64 JIT (just in time) compilation 313, 318, 331 Jo, K. 158 – 160, 159, 167 JollysVNC 292 Jonker, J. 120 JOQR studio 58 Jorda, S. 36 JSEM (Japanese Society for Electronic Music) 60, 66 JSHint 320 JSSA (Japanese Society for Sonic Arts) 66 – 67 JUCE (C++ framework) 318
labels 1 – 2 Laboratory of Interactive Media and Digital Immersion (ImCognita) 29 – 30 La Brique (studio collective) 103 LabSURLab 41 – 42 Lacey, J. 182 ladder of citizen participation 169 La Jaqueescool 42
live coding 312, 318 – 319 live electronic, defined 3 – 5 liveness 127, 249 – 250 Living Form: Socially Engaged Art from 1991 – 2011 (Thompson) 137 LLVM compilers 318 Lo, H-M. 58 local identities 11 – 12, 20 – 133 locked-in syndrome 236, 236 – 238, 238 Lockwood, A. 184, 195 Logan River 195 London 116, 121 – 122, 125 – 126, 171 Lopez, F. 184, 188 loudspeakers 4, 10, 14 – 15, 272 – 309, 275 – 282 lower case 7 filter evaluation protocols 23 LTJ Bukem 121 Lu, M. 56, 70n27 Lucier, A. 3, 90, 210 – 211 Luening, O. 2 Luque, S. 290
Lam, B-C. 58, 72n47 Lam, L. 58, 72n43 Landy, L. 12, 96 – 97 Lane, C. 165 Lang, F. 118 language barriers 56, 61 Language of Electroacoustic Music,The (Emmerson) 180 laptop performance 140, 262 latency 249 – 252, 255 – 256, 267 Latin America 10 – 11, 21 – 48 Latin American Electroacoustic Music Collection 44 Latour, B. 102, 167, 209 lattice-based music 98 Latvia 211 – 212 Lauretis,T. de: Technologies of Gender 97 Lawrence, T. 115 layering 256 – 258 LCD screens 239, 239 – 240 Leafcutter John 165 Le chant intérieur: poème fantastique 52, 82 – 84 LEDs 239 Lee, D. 61 Lee, E. 62 Leili Fengxing (composition) 89 Leituras em diálogo installation 149, 150 Leonardson, E. 184 – 185 Les espaces acoustiques 101 L’Eve future (mirai no ivu) 69n3 Leweton Cultural Group 198 Li, H. 58 Li, P. 82 Li, Q. 66, 81, 86 – 87, 95 Li, R. 89 Liao, L-N. 11 – 12, 51, 66 Ligeti, G. 61, 256 – 257 Lightning Bolt (band) 103 Lima Action Plan 187, 192 Lin, E. 56 Lin Chong Fled at Night 88 Linke, S. 195 Lion in Which the Spirits of the Royal Ancestors Make Their Home,The 184 listener responses 210 listening 138, 254 – 256 Listening to Fish: New Discoveries in Science (Rountree) 197 Listening to the Thames 196 literate music 98 lithophones 206, 217n10 ‘Little Star’ (radio telescope) 212 Liu, J. 54, 79 Liu, S. 51 Liu, X. 90 Live Algorithms for Music 16n28 livecodelab.net (coding platform) 318 – 319
ma 257 Ma, J. 51 MAB (Man and the Biosphere Program) 187, 192 Macau 50, 58 Macau Academy for Performing Arts 58 machine aesthetic 118, 124 machine learning (ML) 325 machine listening 214 – 215 Mackie Control 283, 284, 290 MacKinnon, D. 184 Madrid Action Plan for Biosphere Reserves 187 magnetic tape 273 Magnolia 87 Magritte, R. 207 Mahtani, A. 306 mainland China 50 – 56, 62 – 66 main speaker pairs 277 – 280, 287 – 288, 292, 296, 303 Mak, C. 58 Maker Faire 159 maker movement 23 – 24 making-through-listening approach 206 – 207 Makrolab 212 Malaysia 50 Malta 204 MAMBO (Bogotá Museum of Modern Art) 41 mammoth ivory flutes 203 Man and the Biosphere Program (MAB) 187, 192 Manchester 117 – 119 Manning, P. 98 MANO controller 25 Mantronik, K. 120 Manuel, P. 104 Manzolli, J. 28, 31 – 33 Maré, Rio de Janeiro 138, 141 – 145, 154n17 – 18
Index micropolyphony 256 – 257 MIDI devices 53, 60, 80, 106, 264, 283, 290 MIDI pianos 31 – 33, 63, 235 Mills, J. 123 Minami, H. 66 minibus pimps (composition) 89 Minsburg, R. 43 Miranda, E. 14, 227, 238 Mittler, B. 68 mix cassette tapes 127 Mixcloud 127 mixed music 3, 62 – 63, 70n19, 80 – 86, 92n11 Mixed Music Workshop 68 mixers 281 – 282, 284 Mixtur 3 Mizuno, M. 60, 66, 66 ML (machine learning) 325 Mobile project 38, 45n36 modernisation 97 modernism 10, 98 Modulations (Shapiro) 99 Moldau 86 Monacchi, D. 185, 195 Moncada, J. 26 Mongolia 86 mono 273 – 276 Monte Carlo simulations 208 Montero, M. 189 Montreal 12, 96 – 112 MOOCs (Massive open online courses) 313, 323, 329 Moog synthesisers 4 moon-bounce experiments 211 Moore, F. 106 Morel, J. 215 – 216 Moroder, G. 117 Moroi, M. 58 Moroni, A. 33 morphème 258 morphological analysis (morphological box) 24 morphology 81 morro 150 Morton, T. 187 Moseley, R. 105 MotorBEASTs 283, 285, 290 – 292 motorik 123 Motown (label) 117 MOTU systems 292 – 293 Mount Elgon 192 Moving Still 141 multiculturalism 40 multi-electrode array (MEA) devices 223 – 224 Multimodal Brain Orchestra 30 – 31, 31 multimodalities 29 – 31 multiphonic singing 86 multi-site-specific performance 253 Mungo 184
Marseille, F. de 211 Martinez, L. 211 Marxism 156 Mary River 195 – 196 Massive open online courses (MOOCs) 313, 323, 329 Masters At Work (MAW) 128 Mathews, M. 2, 36, 140 Mathews-Boie Radio Drum 36 Matik-Matik 41, 46n58 Matsui, A. 66 Matsuo Akemi Ballet Company 69n3 Matsushita, I. 66 Matthews, K. 165 – 166, 170 Mattson, E. 105 MAW (Masters At Work) 128 Max (programming platform) 317 – 318, 325 Maximilian (programming platform) 317, 323 – 325 Max/Jitter 264 Max/MSP software 35 – 36, 62 May, D. 114 – 119, 125 Maya (software) 312 Mayuzumi, T. 58 – 59 McCauley, S. 137 MC Congo Natty 121 McGill University 22 – 24, 29 McKinney, C. 260 McLaren, N. 7 McLuhan, M. 98, 198 – 199 McMaster Cybernetic Orchestra 263 MCs 126 MEA (multi-electrode array) devices 223 – 224 Medeiros, C. 22 – 23 mediation 139 – 145 Medusa project 39, 45n41 megamixes 116 Mego (label) 99 Meintjes, L. 102 Meireles, M. 141 Meissner, B. 215 Members of the House (band) 116 “Memória do Futuro” exposition 31 Mercat de Les Flores 30 Merleau-Ponty, M. 139 Mestizo, H. 40 – 42 metagesture 203 – 221 Meteor (web development framework) 320 Metropolis (film) 118 Mexican Centre for Music and Sound Art (CMMAS) 42 – 43 Mexican National Commission for Natural Protected Areas (CONANP) 188 Mexico 21, 42 Meyer, L. 100 Meyer-Eppler, W. 72n53 microphones 4
Index Neue Deutsche Welle 123 Neue Welle 116. see also New Wave movement Neuhaus, M. 182 neurochips 223 neurones 222 – 245, 228 – 231 Newcastle University 158 New Guinea 151 New Interfaces for Musical Expression (NIME) conference 165 – 166, 172 – 173 New Labour (United Kingdom) 162 Newman, G. 117 New Music Week Festival 53 New Order (band) 117 new stereo 306 Newtonian frame 252 – 254 New Wave movement 7, 53, 53, 70n16, 116 New York 115 – 116, 125, 189 – 191 New York Stock Exchange 216 New York University 60 Ng, K. 58, 72n45 NHK (Japan Broadcasting Corporation in Tokyo) 58 – 59, 62 Ni, Z. 90 Nicolls, S. 165 NICS (Interdisciplinary Nucleus for Sound Studies) 28 – 29, 33, 45n17 Nijenhuis, W. 215 NIME (New Interfaces for Musical Expression) conference 165 – 166, 172 – 173 Nippon Cultural Broadcasting 58 Nisbet, M. 179 Nishioka, T. 61, 66 NMP (network music performance) 260 Nobrego, G. 266 Nodaira, I. 61 – 63 noise 7, 83 – 84, 103 – 106 noise reduction systems 273 Noosa Biosphere Reserve 186, 186 – 188, 193 Noosa Regional Gallery 197 Noosa River 195 – 198 Norman, K. 184 Norman, S. J. 13 – 14 Norowi, N. 50 Northwestern University 56 note-based music 80 – 88, 96 – 98 Nou Ri Lang 259 – 260 Novak, D. 103 Novint Falcon haptic controller 163 NRAS (Centre of Autonomous Systems and NeuroRobotics) 29 Numano,Y. 60 Nuo Ri Lang 82 – 85 NuSom–Research Centre on Sonology 38 – 40, 45n35 – 36 NWEAMO festival 67 Nyman, M. 3
Musée d’art contemporain de Montréal 106 Museu da Maré 144, 147, 148 – 152 Museu d’Art Contemporani de Barcelona 30 Museum of Modern Art of Medellín – MAMM 42 ‘Music, Digitization, Mediation’ project 97 – 98 MUSICACOUSTICA-BEIJING festival 52, 55 – 56, 65 – 68, 85 – 87, 258 – 264 musical expression 22 – 24 Música para la Torre (Kagel) 21 ¿Música? series 39 Music for Solo Performer 210 Music Information Retrieval and Atomic synthesis 323 musicking 157, 167, 203 – 206, 209 – 210, 217, 217n1, 249, 252 music neurotechnology 14, 222 – 245 music practice 1 – 17 Musicultura 143 – 144 ‘Music N’ family of programmes 2 musique actuelle 104 musique concrète 2 – 5, 8 – 9, 21, 39, 58, 72n51, 96, 273, 309 – 310n2 musique mixte 3, 62 – 63, 70n19 MUTEK festival 105 myNoise.net 189 MySpace 127 Nagano, K. 106 Nagoya City University 60 – 61 Nanex (data company) 215 – 216 nano-satellites 212 narrowcast formats 253 national characteristics 96 National Chiao Tung University 57, 68 National Museum of Computing 215 National Music Academy of Hanoi 50 National Taipei University of Arts 57 National Taiwan Normal University 51, 56, 68 National Taiwan University in Electrical Engineering 56 National University of Singapore 259 National University of Tres de Febrero 43, 183 nature, art and 205 – 216 n-channel composition 292 – 293 Neolithic trumpets 203 Netconcerts series 39 NetTets Concerts 259 – 260, 262 Network Address Translation (NATs) 266 network delay 263 network exchange 254 Network Music Concert 265 network music performance (NMP) 260 Network Ostinato in F# Minor 264 network-specific practices 249 Network Time Protocol 250 – 251 Neu! (band) 123
Index Parkinson, A. 13 Parliament-Funkadelic (band) 117 Parmegiani, B. 15n3 Parque do Flamengo 150 – 151 Parsons School of Design 164 participant observers 168 participation 12 – 14, 137 – 245 Participation (Bishop) 170 participatory design 156 participatory methodologies 142 – 144, 157 participatory sonic arts 137 – 155 particle systems 329, 329 partnerships 167 – 169, 296 PASCAL (computer) 215 Passeio Sonoro Som da Maré installation 150 – 151 passive acoustics 195 – 197 passive loudspeakers 295 PatchWork (programming environment) 34 Pauline Oliveros Foundation 185 Pavón, E. 21 Paynter, J. 1, 15n1 Peavey MIDI fader 290 Pedagogy of the Oppressed (Freire) 142 – 143 Pederson, T.: ‘Situative Space Model’ 266 peer-to-peer (P2P) networks 127 Pegasus (computer) 215 Peljhan, M. 212 People Like Us (musician) 165 People’s Music Publishing House 56 Percussion Studies 264 – 265 performance 5 – 6, 14 – 15, 35 – 37, 171; Birmingham ElectroAcoustic Sound Theatre (BEAST) and 272 – 311; extending 249 – 335; genre and 101; live coding and 318 – 319; time and space in 254 – 256 performance art 105 – 107 performative acoustics 203 – 204 Performative Media Art Centre of the China Academy of Art 57 performativity 102 Performing Research 3 264 Performing Research nr 1 2012 262 – 263 Performing Research nr 2 – Oscilosound 2013 263 Periscope 268 Personne Ensemble 39, 45n42 Peru 21, 24 – 25 Peruvian National Museum of Archaeology 24 – 25 Peterson, K. 266 phase 252 – 253, 274 Philips Natuurkundig Research Laboratory 215 phonème 258 phonocentrism 98 – 99, 103 phonograph cartridges 3 phonographs 140 photography 140 – 141 Phuture (band) 118, 122 physical modeling 36
obeah 120 Oberheim synthesisers 4 objectivity 170 – 171 observer participants 168 O Globo newspaper 149 Ohm, G. 213 – 214 Oiticica, H. 152 Oleon, D. 211 Oliver, J.: ‘How to build a Silent Drum’ 44n9 Oliver La Rosa, J. 24 – 25 Oliveros, P. 2, 6, 36, 185, 211, 217n4 Olympic Park (London) 171 omnidirectionality 253 Ondes Martenot 3 “One Nation Under a Groove” (song) 123 ‘One World 1’ 183 ON-Juku community 66 – 67 open form performances 8 openFrameworks (software tool) 26, 44n11, 317, 325 OpenMusic (programming environment) 34 open source software 28 Open Spaces 36 open work 8 operant conditioning 235 Oracle Room 203 – 204 orality 98, 216 Oram, D. 7, 16n20, 317 Oramics machine 16n22 orchestra of loudspeakers 14 – 15 order 8 – 9 Orellana, J. 21 Øresundsbiennalen SoundAround festival 293 organisation 8 – 9 organology 204, 208 – 210 Ornamentation 54 Orquestra Errante 38 Orton, R. 1 Osaka, N. 60, 65 – 67 OSC 260 – 264, 283 Os Grilos 37 Ossa, M. 26 Oxford Handbook of Computer Music 99 “Pagga” (song) 125 palafitas 150 Panaiotis 217n4 Panama 40 Pandivá 24, 44n4 “Parade of the Athletes” 123 – 124 Paramusical Ensemble, The 238, 238 – 240 Parant, J. 105 – 107 paratexts 103 Paris Agreement 178 – 179 Parish, T. 128 Paris School 58, 72n51 Parker, M. 215
qi 69n2 Qi, M. 81, 87 – 88, 95 Qinqiang 90 QR codes 41, 106 Qu, X. 51 Quartet of the Living Dead 60 Quasimodo,The Great Lover 210 – 211 Quatorze écarts vers le défi 63 Queen Mary University 163 Queensland river systems 193 – 197 Que sais je? series 96
Piekut, B. 102 piezoelectric sensor gloves 33 Pigeon, S. 189, 190 Pijanowski, B. 184 Pilot ACE (computer) 215 Pini, M. 115 Pink Floyd (band) 5 pipa 80 – 81 pirate radio 122 pitch 84 – 86, 233 – 234 pitch scales 25 Pitt Rivers Museum (Oxford) 140 pixels 316 “Planet Rock” (song) 117 Platohedro (non-profit organization) 42 Plymouth University 222 Pobiner, S. 164 Poitiers planetarium 216 Polli, A. 188 polyphony 256 – 257 Pompeu Fabra University, Barcelona 29 popular culture 40 popular music 3 – 6, 11, 101, 140 portable sound recorders 141 positivism 173 post-laptop music 160 postmodernism 99, 118 post-soul music 114, 117 poststructuralism 102 Potter, A. 223 power spectrum analysis 235 PPP 36 pre-Columbian instruments 24 – 28 prehistoric culture 203 pre-histories 6 Prendergast, M.: The Ambient Century 99 presence 29 – 30, 249, 259, 265 – 266, 273 Presque Rien series 138 pressxtoskip 325, 326 primary texts 1 Principle, J. 125 Probatio project 23 – 24 Processing (software tool) 35, 317 progress, narrative of 98 – 99, 108 project forks 27, 44n14 Proscenium Lighting Bar 281 prosumers 117 prototyping tools 22 – 23 Prudence, P. 315, 329 Psihoyos, L. 183 psytrance 113, 122 – 126 public art 137 pulse-based music 255 – 257 punch speaker pairs 279, 282, 301, 304 Pure Data (PD) programming language 35 – 36, 160 – 161, 169, 208, 317 – 318
Racing Extinction (documentary) 183 – 184 RADAR 41 Radicalizing Culture (Born) 140 radical pluralism 100 Radigue, E. 16n20 radio 3, 214 – 215, 251 – 253 ‘Radio as an Apparatus of Communication’ (Brecht) 268 radiolibre.co 41 ragga 119 – 121 “Rainfall” (song) 121 Rainforest Listening 189 – 191, 191 Rainforest Soundwalks 184 Raisons d’agir festival 215 RAPID-MIX project 325 “Rapid Prototyping of New Instruments with CodeCircle” 325 raster plots 227, 228 – 229, 231, 232 Raudfjorden 183 rave 114, 118 – 128 raw data 316 ray tracing 253 reaching out, meanings of 10 – 15 Reactable 36 Reaktor (software language) 208 rear speaker pairs 277 – 278, 281, 287 – 288, 296 – 298 Reas, C. 35 Rebel MC 121 Rebelo, P. 13, 148 reception phenomenology 5 – 6 recordings, availability of 1 – 2 recursion 30 ReCycle 289 Red Bull Academy 128 RedClara 37 Red Cross Climate Centre 43 Red Cross/Red Crescent Climate Centre and the Electronic Arts Experimentation and Research Centre (CEIArtE-UNTREF) 183 reduced listening 138 Reel to Real Sound Archive 140 reggae 114, 120, 125 – 126 reggae sound systems 126 Regional Youth Work Unit 162, 166 – 167
‘Rulers, The’ 23 Rushton, N. 116 Russian school 50 Russolo, L.: The Art of Noises 118, 180 – 181 RYBN (collective) 215 – 216
Reich, S. 2, 6 Reichenbach, F. von 21 Rekengeluiden van PASCAL 215 – 216 relational aesthetics 137 relationship, defined 252 relativity 250 – 251 remixing, post-performance 254 Remix Your ’Hood workshop 162 – 163, 169, 172 remote performance 249 remote processing 259 Renaud, A. 256 re(PER)curso 30, 30 – 31 repetition and difference 98 – 102 RepMus research group 29 re-presence 249 research-creation 21 – 48 ResoNations 2010 (concert) 261 Return to the Source (label) 124 Reus, J. 215 reverberation 84 rewinds 120, 126 Reyes, J. 35 – 37 Reynolds, S. 114, 125 Rhythim Is Rhythim (band) 116, 125 rhythmic patterns 226 – 234, 233 Richards, J. 16n31 Richards, T. 127 Rietveld, H. 12 Rio de Janeiro 13, 138, 141 – 143, 154n17 – 18 ‘Rising Forces,The’ 183 Risset, J-C. 140 River Listening 178 – 182, 186, 193 – 199, 196 Rivers Talk 195 RjDj (app) 160 – 163 Roads, C. 208; The Computer Music Tutorial 56 Robins, K. 114 – 115 Robotic Variations 33, 33 – 34 Rock’n’Roll 288 – 289 Rodgers, T. 213 – 214 Roesch, E. 226 Roland Boutique TR09 125 Roland REAC system 292 Roland synthesisers 4, 125 Roland TR303 Bass-line sequencer 118 Roland TR-909 drum machines 125 Rolling Stone 128 roof speakers 301 rotation 84 Rothenberg, D.: Bug Music 184; Thousand Mile Whale Song 184; Why Birds Sing 184 Rountree, R.: Listening to Fish: New Discoveries in Science 197 Roxy club 125 Roy, E. 104 Royal Albert Hall 250 Royal Hospital for Neuro-disability 236, 240 Rueda, C. 34 – 37
St John, G. 115 St-Onge, A. 105 – 107 salad methodology 260 Saldanha, A. 124 sampled listening 8 sampling 78, 81 – 88, 92, 92n2, 125, 208 Samsara 84 Sanchez, D. 26 São Paulo Research Foundation (Fapesp) 38 Saunderson, K. 114 – 118 Savage, J. 114 scale 207 – 209 Scandinavian workers unions 156 scanned synthesis 36 – 37 Schaeffer, P. 2, 9, 15n3, 16n30, 39, 54, 58, 72n51, 96, 138 – 139, 206 – 207, 210, 218n13 – 14, 273, 309 – 310n2 Schaefferian theory 83 Schafer, M. 138, 181 – 182, 209, 250; The Tuning of the World 181, 203 Schidlowsky, L. 21 Schumann, R. 235 Schwartz, J. 52, 54 Science Museum (London) 1, 16n22 scores 285 secondary texts 1 Seebeck, A. 213 – 214 Self, G. 1 self-referral agents 29 Semiconductor 207 – 208 semiquavers 231, 232 Sensai Na Chikai 262 sensing systems 23 sensor fusion algorithms 23 sensorimotor rhythms 242 Seoul International Computer Music Festival (SICMF) 61 – 62, 67 Seoul National University 61 sequencers 222 Sequential Circuits synthesisers 4 serialism 2 – 4, 8 – 9 Seville Strategy and the Statutory Framework of the World Network of Biosphere Reserves 187 SFX (band) 126 Shaanxi opera 90 shadertoy.com (coding platform) 318 shakuhachi 63 Shamanic Trance (DJ mix) 124 Shandong 65 Shanghai 51 – 55 Shanghai Conservatory 51 – 55, 79
Index Society of Malaysian Contemporary Composers 50 software visualisation aids 254 Sogetsu Art Centre 59 Solfège de l’objet sonore 138 somatosensory evoked potentials (SSEP) 235 Som de Maré project 13, 137 – 155, 147 Sommeil 210 Sonare:Arte Sonoro 41 Sónar festival 128 Sonema 41 Sonemalab 41 sonic arts 7 – 8, 151 – 152, 178 – 202 Sonic Arts Research Centre (SARC) 39, 144, 148, 259, 279 Sonic Bikes 166, 170 Sonic Ecologies framework 180, 186 sonic effect 139 Sonic Experience:A Guide to Everyday Sounds (Augoyard and Torgue) 139 sonic experiences methodology 145 – 147 sonic images 274 – 276 Sonic Incident method 164 sonic interaction design (SID) 164 sonification 208, 222, 225, 226, 259 SONOLOGIA 2016 – Out of Phase 40 sonology 38 Sonorities Festival Belfast 39 Sonósferas 41 Sony 59 Soochow University School of Music 65 Sorensen,V. 312 – 313 sound, cultural value of 205 sound art 7 – 8, 41, 57, 71n35, 214 sound-based music 80 – 81, 88, 96 Soundcloud 127 sound converters 21 Soundcraft 200B mixer 282 sound diffusion 15, 272 – 293, 286 sound factory 41 – 42 sound generators 222 sound installation 7 – 8 Sound Map of the Danube, A 195 Sound Map of the Housatonic River, A 195 Sound Map of the Hudson River, A 195 Sound Mirrors 194 sound quality 5, 81 soundscape ecology 179 – 199 soundscapes 5, 138 – 143, 208 sound sculpture 7 – 8 sound synthesis 223 – 226, 315 sound technologies 139 – 142 soundtracks 7 sound waves 204 – 205, 260 source bonding 205, 216, 217n9 South Korea 49 – 50, 61 – 62 Soviet Pop 90
Shanghai Conservatory of Music International Electronic Music Week 67 Shanghai Sound Unit 89 Shapiro, P.: Modulations 99 shared environments 249 “Share this House” (song) 116 Sharp, C. 121 Shen,Y. 53 Shenyang Academy of Music 65 Shenyang Conservatory 79 Shin, S. 60 – 62 Shobi University 62 Shockwaves (concertino) 223, 227 – 234, 244n4 Sian Ka’an Biosphere Reserve 188, 188 – 189, 191 Sian Ka’an Residency 188 Sichuan Conservatory of Music (Chengdu) 55 – 56, 65, 70n26, 79 Sicko, D. 127 SICMF (Seoul International Computer Music Festival) 61 – 62, 67 SID (sonic interaction design) 164 side fill speakers 278, 279 SiDE (Social Inclusion through the Digital Economy) workshop 158, 161 – 163, 162, 168 – 172 Sigal, R. 42 signaletics 258 signal processing 36, 316 – 317, 320 silbadores (whistling vessel jars) 24 – 25 Silent Drum 25, 25 – 27, 36 silent listening 16n27 “Silicon Sounds: Bodies and Machines Making Music” (Iazzetta) 37 Silva, A. 148 Simondon, G. 107 Simon Fraser University 181 Simpson, G. 119 – 122 simultaneity 251 sine waves 214, 230 Singapore 50 sinusoids 213 – 214, 224, 227 – 231, 229 site-specific elements 252 – 253 situated perspectives 168 ‘Situative Space Model’ (Pederson) 266 Skype 260 Small, C. 157, 167, 217n1, 252 Smalley, D. 86, 180, 216, 278 Smallwood, S. 262 Smetana, B. 86 Smite, R. 212 Smits, R. 212 social engagement 137 – 155 social exclusion 12 – 13 Social Inclusion through the Digital Economy (SiDE) workshop 158, 161 – 163, 162, 168 – 172 social mediation 144 – 145
Suzuki, T. 124 svn (version control tool) 319 Swanson, A. 217n4 Symphonie pour un homme seul 39, 46n43 Symphony of Sirens 250 Synapse Residency 193 – 195 synchronisation 250 – 252, 255 – 256, 261, 263 synème 258 Syneme Project 14, 258 – 259, 262 synthesis 35 – 37, 208, 213 – 214, 223 – 226, 315, 323 synthesisers 2 – 4, 15n3, 21, 206, 213 – 215, 224, 317 synthetic biology 243 synthetic methods 4
Soviet Union 50 space: construction of 253 – 254; Japanese conception of 69n5 space probes 212 space-time relationship 250 – 251, 254 – 258 spatialisation 258, 296 speakers 4, 10, 14 – 15, 272 – 309, 275 – 282 special effects speakers 301, 303 SPECS 29 spectators 166 – 167 spectral space 278 spectromorphology 86 speleothems 206 – 207, 217n10, 219n33 Spem in Alium 105 spheroids 223 – 224, 225 spike trains 223 – 235, 226, 228 – 233 Spiral Way: How the Phonograph Changed Ethnography (Brady) 140 Sputnik 212 SRIF-2 290 – 292 SSEP (somatosensory evoked potentials) 235 Stark Mind, A. 241, 241 – 243, 243 Stati d’Acqua 195 steady-state visual evoked potential (SSVEP) 235 – 242, 236 – 240 STEIM 3, 15n9 stem-based approach 307 stereophony 273 – 285, 275 – 276 Sterne, J. 105 Stimmung 258 Stockhausen, K. 3, 58, 139, 258 Stockhausen electronic studies 58 Stockman, T. 169 Stolet, J. 64 Stones 259 – 260 Straw-Berri 36 streaming services 113, 127 Streams 288 – 289 “Strings of Life” (song) 125 structuralism 100 – 102 structure, of work 285 – 286 Studio 54 206 – 207 Su, C. 51 Suarez, M. 214 Sub Jam (label) 90 sub-woofers 278, 280, 301, 305 Suicide (band) 116, 122 Summa theologiae (Aquinas) 205 – 207 Summer workshop in Calgary 260 suona 86 SuperCollider (software) 62, 208, 260, 317 – 318 superformulas 329 – 330, 330 superposition 257 Sur, S. 198 Surgeon (band) 125 Suzhou Academy of Music 65
tacit knowledge 172 – 174 “Taipei Sound Unit” 57 Taiwan 49 – 51, 55 – 57 Taiwan Computer Music Association (TCMA) 57, 67 Takemitsu, T. 49, 58 – 59 Tallis, T. 105 Tama Art University 212 Tan, D. 51, 79 – 80 Tanaka, A. 13 Tang Dynasty 64 Taoism 49, 82, 88 tape music 2 Taro,Y. 60 Tarzan 7 Tascam DA-88 digital tape machines 288 task space 266 Tavel Arts Technology Research Center 259 taxonomic responses 210 Taylor, T. 99 Taylorism 156, 175n2 TCMA (Taiwan Computer Music Association) 57, 67 teaching, in China 55 – 56 Team Come True, A 104 Technical University Berlin 61 techno 113 – 120, 123 – 128 “Techno City” (song) 117 – 118 technocracies 115 – 116 technoculture 113 – 133 Technologies of Gender (Lauretis) 97 technology, music practice and 1 – 17 “Techno Music” (song) 116 techno practices 11 – 12 Techno! The New Dance Sound of Detroit 116 – 119 Telecom Interactive 211 teleconcerts 37 teleconferencing 265 – 266 telematics 35 – 37, 266 Telemedia Arts 258, 265 – 266 Telemediations project 266 telepresence 266 Telharmonium 3
“Trans-Europe Express” (song) 123 transmateriality 216 Transmission from Overseas 215 transperception 211 transpiling 323 Truax, B. 138, 181 – 182, 208, 258 ‘Truss’ speakers 298, 299 trust building 167 – 169 Tsabary, E. 262 Tsai,Y-P. 51 Tsang,Y-K. 58 Tseng,Y-C. 57, 71n34 Tsinghua University 79 Tudor, D. 3, 257 Tung, C-M. 67 tuning 13 – 14, 203 – 221 Tuning of Place (Coyne) 203 Tuning of the World,The (Schafer) 181, 203 Tunny cipher machine 215 turntables 3 “Turn Your iPhone into a Sensor Instrument!” (workshop) 158 – 162, 167 – 172 turtles 196 tweeters 278, 280, 295, 298, 300 TX-0 (computer) 214 – 215 Tzeng, S-K. 56, 67
tempo 113 Terai, N. 66 Terminal 215 terminology: for audiovisual art 315; barriers in 56, 61; genre and 96 – 112; music practice and 1 – 17 Test Patterns 215 Tétreault, M. 104 textual regularity 99 textural information 316 texture 257 – 258 Thames 196 Thatcher, M. 161 Theremin 3 Third Wave,The (Toffler) 116 – 117 Thompson, N.: Living Form: Socially Engaged Art from 1991 – 2011 137 Thousand Mile Whale Song (Rothenberg) 184 Three Easy Pieces 58 three paths, the 77 – 95 Three Places in New England (Putnam’s Camp) 257 three-wave model 116 – 117 Tianjin Conservatory 79 Tiaro Landcare 196 Tibetan Buddhism 83 – 86 Tidal (coding platform) 319 Till, R. 128 timbral-textural turn 5 timbre 26, 257 – 258 time-frequency analysis 208 time-keeping standards 250 – 251 “Timeless” (song) 122 time-specific elements 252 – 256 Times Square 189 Times Square (installation) 182 timestamping 250 – 251 Toffler, A.: The Third Wave 116 – 117 Toho Gakuen School of Music 60 Tokyo Denki University 65 – 66 Tokyo University 212 Tokyo University of the Arts 60 – 61, 66 Tomalla, A. 115 – 116 tone, generation of 224 – 226 tool development 205 – 206 Toop, D. 165 TOPLAP collective 318 Torgue, H.: Sonic Experience:A Guide to Everyday Sounds 139 Toshiro, M. 69n3 touch me sound 41 Tout Pour la Musique Contemporaine (TPMC) 68 Toward a New Music: Music and Electricity (Chávez) 181 Toynbee, J. 100 trader agent (comprador) tradition 55 trance 113, 116 – 118, 122 – 124 trance-techno 123
UFRJ (Universidade Federal do Rio de Janeiro) 144, 148 UHJ stereo 307 UK-Dance (discussion group) 127 UK Garage 125 underground musicians 78 – 80, 88 – 91, 103 UNESCO 78, 187 – 189, 192 – 193 UNICAMP (University of Campinas) 28 – 29, 33 United Kingdom 13, 100 – 101, 119 – 122, 125 – 126, 158, 161 – 163 United Nations Climate Change Conference 191 United Nations Framework Convention on Climate Change (UNFCCC) 178 – 179 UNIVAC-1 (computer) 214 – 215 Universidade Federal da Bahia 77 Universidade Federal de Pernambuco 24 Universidade Federal do Rio de Janeiro (UFRJ) 144, 148 Universiti Putra Malaysia 50 University of Bath 163 University of Birmingham 14 – 15, 272 – 311, 297 University of Calgary 258 – 259, 264 University of California at Berkeley 57 University of California at San Diego 60 University of Campinas (UNICAMP) 28 – 29, 33 University of Illinois at Urbana-Champaign 56 University of North Texas 57 University of Oregon 64 University of Plymouth 50 University of São Paulo 37 – 38
Voov Experience/Vuuv Festival 126 VVVV (programming platform) 317
University of the Colombian Institute of Advanced Studies (ICESI) 26 University of the West of England (UWE) 223 University of Washington 307 Unsound Objects 286 – 287, 287 – 288 urban noise 182 Urtext 275 – 276, 286, 306 – 307 user-centred design 156 users 166 – 168, 173, 320 U.S. National Science Foundation 184 Ussachevsky,V. 2 Utz, C. 68 UWE (University of the West of England) 223
WAC (WebAudio Conference) 318 Wadi-Musa 36 Waisvisz, M. 15n9 Wanderley, M. 22 Wang, C. 64, 64 Wang, C-K. 57, 71n38 Wang, F. 57, 71n36 Wang, H. 260 Wang, M-W. 69n2 Wang,Y. 53 ‘Water Margin’ (Beijing Opera novel) 88 Water Sleeves 64 waveform synthesis 213 – 214 Weaver, S. 261 WebAudio Conference (WAC) 318 webAudio framework 323 WebGL (programming platform) 322, 325, 329 Webster, F. 114 – 115 Weissmuller, J. 7 Wekinator (application programming interface) 325 Wen, B. 58, 72n48 Wen, D. 53 Wen, L-H. 56 Wesley-Smith, M. 78 Westerkamp, H. 181, 184 wetware-silicon devices 223 Whalley, I. 262 whistling vessel jars (silbadores) 24 – 25 Whitelaw, M. 216 whiteness 124 Whitney, James 315 Whitney, John 312 – 317; Digital Harmony 329 Whitney Museum of American Art 313 Why Birds Sing (Rothenberg) 184 wide speaker pairs 277 – 280, 287 – 288, 296 – 298 Widmer, E. 77 Wiimotes 107 Wild Sanctuary initiative 184 Williams, A. 168 Williams, B. 119 Wilson, A. 105 – 107 Wilson, L. 214 – 215 Wilson, S. 290 – 292, 306 Windows Visual Studio 319 Windsor, L. 251 – 252 Wind Way (Kaze no michi) 63 Winner, L. 108 Winsor, P. 56 – 57 Winstons, The 120 WIRA – River Listening 197 – 198 Wire,The (magazine) 99 wireframe superformula 330
Vaggione, H. 21 valence levels 242 Valiquet, P. 12 Van der Woude, M. 26, 44n11 Vanuatu Women’s Water Music 198 Variation 85 – 86 Variations on the numerical principle of seven 58 Väth, S. 122 vector base amplitude panning (VBAP) 292 Veerkracht 26, 26 Vega, R. 26 Velloso, R. 13 Velvet Underground (band) 5 Ventspils International Radio Astronomy Center (VIRAC) 212 VEP (visual evoked potentials) 235 Verplank, B. 35 – 36 Vibert, L. 120 vibrations 204 – 205 video 140 – 141 video post-production tools 316 Viennese school 77 Vietnam 50 Villa Lobos, H. 77 Villiers de l’Isle Adam, A. 69n3 VIRAC (Ventspils International Radio Astronomy Center) 212 Virtual Abbey,The 211 Visages peint dans l’Opéra de Pékin II 82 – 85 Visages peints dans les Opéras de Pékin 62 Visiones Sonoras festival 42 visual data-flow languages 317 – 318 visual evoked potentials (VEP) 235 visual music 43, 315 vocabulary 1 – 17 vocal sounds 88 Voices of the Rainforest 151 Voices of the Wild 184 Voicing the Murray 195 Volt speakers 292 “Voodoo Rage” (song) 119 “Voodoo Ray” (song) 119
Xu,Y. 53, 53 xun 82
Wiring (programming framework) 35 – 36 Wishart, T. 1, 165, 180 WITCH (computer) 215 Wittgenstein, L. 101 WOCMAT-IRCAM Forum Conference 68 Wolff, C. 259 – 260 women, recognition of 6 Wonder, S. 117 Workshop on Computer Music and Audio Technology (WOCMAT) 67 – 68 workshops 157 – 174 Work XYZ for musique concrète 58 World Forum of Acoustic Ecology 184 World Listening Day 185 – 187 World Listening Project 184 – 185 world live web 268 world music 6, 10 World Network of Biosphere Reserves for 2016 – 2025 192 World Soundscape Project 138, 181, 209 World Wide Web 127 Worrell, B. 117 Wrens 215 Wu, J. 55 Wu,Y. 54, 79 Wuhan Conservatory of Music 54, 65, 79
Yahoo!Finance 216 Yamaha synthesisers 4 Yan, J. 89 – 91 Yan,Y. 90 Yao, C-H. 57, 71n37 Yao, D. 57, 89 – 91 Yao,Y. 89 Yee-algo pattern FX 325, 327 Yee-King, M. 320, 323 Yemas 26, 26 – 27 Yi – Etudes 8 éléments 69n2 Yim, J. 61 – 62 Ying,Y.: ‘Computer Music in China’ 54 – 55 Yong Siew Toh Conservatory of Music 259 Young, M. 264 YouTube 113, 127 – 128, 268 Yuasa, J. 49, 58 – 59, 59 Yun, I. 61, 65 – 73 zero hour model 6 Zhang, R. 56 Zhang, X. 12, 51 – 56, 62, 65 – 68, 66, 78 – 94, 259 – 260 ZHdK Academy of Arts 164 Zhe, W. 90 Zhejiang Conservatory 65, 86 Zhou, L. 51 Zhu, S. 51, 54 Zhuan Jung 84 Zombie Music 60
Xcode 319 Xenakis, I. 208, 250 Xi, J. 62 xiao 82 Xu, S. 53, 53 – 54, 62