Complete SuperCollider Opus. [1 ed.]



English Pages [2217] Year 2024


Table of contents:
Santa Clara University
Scholar Commons
2014
A Gentle Introduction to SuperCollider (2nd edition)
Bruno Ruviaro
Recommended Citation
I BASICS
Hello World
Server and Language
Booting the Server
Your first sine wave
Error messages
Changing parameters
Comments
Precedence
The last thing always gets posted
Code blocks
How to clean up the Post window
Recording the output of SuperCollider
Variables
"Global" vs. Local
Reassignment
II PATTERNS
The Pattern family
Meet Pbind
Pseq
Make your code more readable
Four ways of specifying pitch
More keywords: amplitude and legato
Prand
Pwhite
Expanding your Pattern vocabulary
More Pattern tricks
Chords
Scales
Transposition
Microtones
Tempo
Rests
Playing two or more Pbinds together
Using variables
Starting and stopping Pbinds independently
Pbind as a musical score
EventStreamPlayer
Example
III MORE ABOUT THE LANGUAGE
Objects, classes, messages, arguments
Receiver notation, functional notation
Nesting
Enclosures
Quotation marks
Parentheses
Brackets
Curly Braces
Conditionals: if/else and case
Functions
Fun with Arrays
Creating new Arrays
That funny exclamation mark
The two dots between parentheses
How to "do" an Array
Getting Help
IV SOUND SYNTHESIS AND PROCESSING
UGens
Mouse control: instant Theremin
Saw and Pulse; plot and scope
Audio rate, control rate
The poll method
UGen arguments
Scaling ranges
Scale with the method range
Scale with mul and add
linlin and friends
Stopping individual synths
The set message
Audio Buses
Out and In UGens
Microphone Input
Multichannel Expansion
The Bus object
Panning
Mix and Splay
Playing an audio file
Synth Nodes
The glorious doneAction: 2
Envelopes
Env.perc
Env.triangle
Env.linen
Env.pairs
Envelopes—not just for amplitude
ADSR Envelope
EnvGen
Synth Definitions
SynthDef and Synth
Example
Under the hood
Pbind can play your SynthDef
Control Buses
asMap
Order of Execution
Groups
V WHAT'S NEXT?
MIDI
OSC
Sending OSC from another computer
Sending OSC from a smartphone
Quarks and plug-ins
Extra Resources
1 Getting started with SuperCollider
1.1 About SuperCollider
1.2 SC overview
1.3 Installation and use
1.4 Objectives, references, typographical conventions
2 Programming in SC
2.1 Programming languages
2.2 Minima objectalia
2.3 Objects in SC
2.4 Methods and messages
2.5 The methods of type post and dump
2.6 Numbers
2.7 Conclusions
3 Syntax: basic elements
3.1 Brackets
3.2 Expressions
3.3 Comments
3.4 Strings
3.5 Variables
3.6 Symbols
3.7 Errors
3.8 Functions
3.9 Classes, messages/methods and keywords
3.10 A graphic example
3.11 Control Structures
3.12 Yet another GUI example
3.13 Conclusions
4 Synthesis, I: Fundamentals of Signal Processing
4.1 A few hundred words on acoustics
4.2 Analog vs. digital
4.3 Synthesis algorithms
4.4 Methods of Signal
4.5 Other signals and other algorithms
4.6 Still on signal processing
4.7 Control signals
4.8 Conclusions
5 SC architecture and the server
5.1 Client vs. Server
5.2 Ontology of the server as an audio synthesis plant
5.3 The server
5.4 SynthDefs
5.5 UGens and UGen graphs
5.6 Synths and Groups
5.7 A theremin
5.8 An example of real-time synthesis and control
5.9 Expressiveness of the language: algorithms
5.10 Expressiveness of the language: abbreviations
5.11 Conclusions
6 Control
6.1 Envelopes
6.2 Generalizing envelopes
6.3 Sinusoids \& sinusoids
6.4 Pseudo-random signals
6.5 Busses
6.6 Procedural structure of SynthDef
6.7 Multichannel Expansion
6.8 Conclusions
7 Organized sound: scheduling
7.1 Server-side, 1: through UGens
7.2 Server side, 2: Demand UGen
7.3 Language-side: Clocks and routines
7.4 Clocks
7.5 Synthesizers vs. events
7.6 Graphic interlude: drawings and animations
7.7 Routines vs. Tasks
7.8 Patterns
7.9 Events and Event patterns
7.10 Conclusions
8 Synthesis, II: introduction to basic real-time techniques
8.1 Oscillators and tables
8.2 Direct generation
8.3 Spectral modelling
8.4 Physical Modeling
8.5 Time-based methods
8.6 Conclusions
9 Communication
9.1 From server to client: use of control buses
9.2 From server to client: use of OSC messages
9.3 OSC to and from other applications
9.4 The MIDI protocol
9.5 Reading and writing: File
9.6 Pipe
9.7 SerialPort
9.8 Conclusions
Cover
Series
SuperCollider for the Creative Musician: A Practical Guide
Copyright
Dedication
Contents
Acknowledgments
About the Companion Website
Introduction
Part I Fundamentals
Chapter 1  Core Programming Concepts
1.1 Overview
1.2 A Tour of the Environment
1.3 An Object-Oriented View of the World
1.4 Writing, Understanding, and Evaluating Code
1.5 Getting Help
1.6 A Tour of Classes and Methods
1.7 Randomness
1.8 Conditional Logic
1.9 Iteration
1.10 Summary
Chapter 2 Essentials of Making Sound
2.1 Overview
2.2 Booting the Audio Server
2.3 Unit Generators
2.4 UGen Functions
2.5 Envelopes
2.6 Multichannel Signals
2.7 SynthDef and Synth
2.8 Alternate Expression of Frequency and Amplitude
2.9 Helpful Server Tools
Part II Creative Techniques
Chapter 3 Synthesis
3.1 Overview
3.2 Additive Synthesis
3.3 Modulation Synthesis
3.4 Wavetable Synthesis
3.5 Filters and Subtractive Synthesis
3.6 Modal Synthesis
3.7 Waveform Distortion
3.8 Conclusions and Further Ideas
Chapter 4 Sampling
4.1 Overview
4.2 Buffers
4.3 Sampling UGens
4.4 Recording UGens
4.5 Granular Synthesis
Chapter 5 Sequencing
5.1 Overview
5.2 Routines and Clocks
5.3 Patterns
5.4 Additional Techniques for Pattern Composition
5.5 Real-Time Pattern Control
Chapter 6 Signal Processing
6.1 Overview
6.2 Signal Flow Concepts on the Audio Server
6.3 Delay-Based Processing
6.4 Real-Time Granular Synthesis
Chapter 7 External Control
7.1 Overview
7.2 MIDI
7.3 OSC
7.4 Other Options for External Control
Chapter 8 Graphical User Interfaces
8.1 Overview
8.2 Basic GUI Principles
8.3 Intermediate GUI Techniques
8.4 Custom Graphics
Part III Large-Scale Projects
Chapter 9 Considerations for Large-Scale Projects
9.1 Overview
9.2 waitForBoot
9.3 Asynchronous Commands
9.4 Initialization and Cleanup Functions
9.5 The Startup File
9.6 Working with Multiple Code Files
Chapter 10 An Event-Based Structure
10.1 Overview
10.2 Expressing Musical Events Through Code
10.3 Organizing Musical Events
10.4 Navigating and Rehearsing an Event-Based Composition
10.5 Indeterminacy in an Event-Based Composition
Chapter 11 A State-Based Structure
11.1 Overview
11.2 Simple State Control
11.3 Composite States
11.4 Patterns in a State-Based Composition
11.5 One-Shots in a State-Based Composition
11.6 Signal Processing in a State-Based Composition
11.7 Performing a State-Based Composition
Chapter 12 Live Coding
12.1 Overview
12.2 A Live Coding Problem and Solution
12.3 NodeProxy
12.4 Additional NodeProxy Features
12.5 TaskProxy
12.6 Recording a Live Coding Performance
Index
Cover
Copyright
Credits
About the Author
About the Reviewers
www.PacktPub.com
Table of Contents
Preface
Chapter 1: Scoping, Plotting, and Metering
Plotting audio, numerical datasets, and functions
Using plot and plot graph
Using plotter
Using SoundFileView
Scoping signals
Scoping waveforms
Scoping spectra
Metering levels
Monitoring signals
Monitoring numerical data
Nonstandard and complex visualizers
Nonstandard visualizers
A complex scope
Summary
Chapter 2: Waveform Synthesis
Waveform synthesis fundamentals
Time domain representation
Waveform species
DC, amplitude, frequency, and phase
Custom waveform generators
Wavetable lookup synthesis
Using envelopes as wavetables
Custom aperiodic waveform generators
Waveform transformations
Waveshaping
Unary operations
Binary operations
Bitwise operations
Summary
Chapter 3: Synthesizing Spectra
Introducing the frequency domain
Spectra
Fast Fourier Transform in SuperCollider
Synthesizing the spectra
Aggregating and enriching spectra
Sculpting and freezing spectra
Shifting, stretching, and scrambling spectra
Using the pvcalc method
Visualizing spectra
Limitations of spectral scoping
Optimizing spectra for scoping
Summary
Chapter 4: Vector Graphics
Learning the vector graphics fundamentals
Drawing primitive shapes and loading images
Complex shapes and graphics state
Introducing colors, transparency, and gradients
Abstractions and models
Objects and prototypes
Factories
Geometrical transformations, matrices, and trailing effects
Complex structures
Particle systems
Fractals
Summary
Chapter 5: Animation
Fundamentals of motion
Motion species
Using UserView
Animating complex shapes and sprites
Fundamental animation techniques
Trailing effects
Interaction and event-driven programming
Particle systems
Advanced concepts
Animating fractals
Adding dynamics to simulate physical forces
Kinematics
Summary
Chapter 6: Data Acquisition and Mapping
Data acquisition
Dealing with local files
Accessing data remotely
Using OSC
Using MIDI
Using Serial Port
Machine listening
Tracking amplitude and loudness
Tracking frequency
Timbre analysis and feature detection
Onset detection and rhythmical analysis
Basic mappings
Preparing and preprocessing data on client side
Preparing and preprocessing data on server side
Basic encodings and interpolation schemes
Sharing and distributing data
Summary
Chapter 7: Advanced Visualizers
Audio visualizers
Trailing waveforms
Spectrogram
Music visualizers
Rotating windmills
Kinematic patterns
Visualizing and sonifying data
Particles and grains
Fractalizer
Summary
Chapter 8: Intelligent Encodings and Automata
Analyzing data
Statistical analyses and metadata
Probabilities and histograms
Dealing with textual datasets
Advanced mappings
Complex and intelligent encodings
Neural networks
Automata
Cellular automata
Game of Life
Summary
Chapter 9: Design Patterns and Methodologies
Blackboard
Methodology
Model-View-Controller
Handling multiple files and environments
Threads, semaphores, and guards
The View
Clients and interfaces
Implementation
Strategies and policies
The Model
Aggregates and wrappers
Software agents
Introducing software actors and finalizing the model
The Controller
Game of Life
Finalizing the Controller
Summary
Index
2 - TENOR_BOSTON_2023_paper_5657 Nowakowski.pdf
1. Introduction
2. Method
3. Results
3.1 System Usability Score (SUS)
3.2 AttrakDiff2
3.3 Liveness
4. Discussion
4.1 Limitations and Problems
4.2 Metrics in detail
4.3 Correlating the results
5. Conclusion & Future Work
6. References
3 - TENOR_BOSTON_2023_paper_5929 Loui.pdf
ABSTRACT
1. INTRODUCTION
2. Studies in Musical Creativity
3. Challenges and Motivations Behind Present Research
4. The BP Sequencer
5. Experiment 1: Sequence Production Task: Generating Creative Output
6. Experiment 2: Sequence Ratings Task: Perception of Creativity
7. Experiment 3: EEG Signatures of Creativity from BP Sequencer Data
8. Conclusions
9. References
Acknowledgments
4 - TENOR_BOSTON_2023_paper_8103 Frame.pdf
1. Background
1.1 Documentation for Digital Musical Instruments
1.2 The AirSticks Community
2. Related Work
2.1 Prescriptive notation
2.2 Descriptive notation
2.3 Describing experience?
3. The notation system
3.1 Overview
3.2 Capturing AirStick experiences
3.3 Technical process
3.4 Case study
4. Discussion
4.1 Utility of new systems
4.2 Future work
5. References
5 - TENOR_BOSTON_2023_paper_5652 Celerier.pdf
1. Introduction
2. An ossia score primer
3. Distributing scores
3.1 Abstracting over hardware with groups
3.2 Distribution of interaction
3.3 Polyphony
4. Distributing data
5. Visual language extensions
6. Implementation
7. Distribution examples
7.1 Sending data between machines
7.2 Combining control data across a group of players
7.3 Duplicating an input
7.4 Score for SMC2022
7.5 Polyphony, sharing and visual language
8. Conclusion
6 - TENOR_BOSTON_2023_paper_4288 Privato.pdf
1. Introduction
2. Background
2.1 Instruments-Scores and Non-visual Inscriptions
2.2 Event Scores and Non-visual Inscriptions
2.3 Permanent Magnets
3. The Magnetic Score
3.1 Magnetic Board
3.2 Magnetic Discs
3.3 Sound Processing
4. Presenting the Magnetic Score
5. Discussion
5.1 Magnetic Inscriptions
5.2 The Magnetic Score as Inherent Score
5.3 Relational Inscriptions
6. Future Work
7. Conclusions
8. Acknowledgments
9. References
8 - TENOR_BOSTON_2023_paper_7600 Armitage.pdf
1. Introduction
2. Background
2.1 Perspectives on Agency
2.2 Exploring Agency through Boundary Objects
3. Agential Scores
3.1 Agency of Points and Lines
3.2 A Typology of Entanglements with Agential Scores
3.3 Assemblages and Intra-action
3.4 Agential Scores in Practice via Artificial Life
4. Tölvera: a Library of Number Beings
4.1 Number Beings
4.2 Mappings and Visualisations
4.3 Implementation
5. Musical Encounters with Tölvera
5.1 Encounters Summaries
5.1.1 Encounter 1: Boids & Two Guitars
5.1.2 Encounter 2: Physarum & Two Guitars
5.1.3 Encounter 3: Boids, Physarum, Guitar & Conductor
5.1.4 Encounter 4: Reversing Roles from Encounter 3
5.2 Post-Encounters Discussion
6. Discussion
6.1 Fluid Material Agency
6.2 Mapping of Self Onto Agential Materials
6.3 Perceiving the Intra-Actants
6.4 Future Considerations
7. Conclusion
8. References
9 - TENOR_BOSTON_2023_paper_2697 Hori.pdf
1. Introduction
2. Note-Tablature-Form Tree for Monophonic Cases
2.1 Fingering decision based on HMM
2.2 Note-tablature-form tree
3. Note-Tablature-Form Tree for Polyphonic Cases
3.1 From chord to tablature
3.2 From tablature to form
3.2.1 Representing forms by finger numbers
3.2.2 Numbering string-fret pairs
3.2.3 Non-decreasing finger numbers
3.2.4 Enumerating left hand forms
3.2.5 Inserting mandatory separators
3.2.6 Inserting optional separators
4. Conclusion
5. References
10 - TENOR_BOSTON_2023_paper_8126 Panariello.pdf
1. Introduction
2. Motivation
3. Class description
3.1 fileName
3.2 midicents
3.3 magnitudes
3.4 rhythmTree
3.5 metronome
3.6 quantization
3.7 threshold
3.8 dynamics
4. Examples
4.1 Writing a score from patterns
4.2 Writing a score from spectral data
5. Case study – generating a piano piece using SuperOM
6. Limitations
7. Conclusions and Future work
8. References
11 - TENOR_BOSTON_2023_paper_9804 Shapiro.pdf
1. Introduction
2. Related Work
3. Language Features
3.1 Low-Level Fundamentals
3.2 High-Level Templates
3.3 Additional Features
4. Sample Program
5. Compiler Structure
6. Template Expansion Logic
6.1 Backbone Logic
6.1.1 Generating Notes in a Diatonic Scale
6.1.2 Generating Chord Templates in a Diatonic Scale
6.2 Template Expansions
6.2.1 Scales
6.2.2 Chords and Arpeggios
6.2.3 Cadences
6.2.4 Harmonic Sequences
7. Conclusion
8. References
12 - TENOR_BOSTON_2023_paper_6679 Yamamoto.pdf
1. Introduction
2. Preliminaries
2.1 Tonal Pitch Space
2.2 Distance Models concerning Harmonic Features
3. Our Approach
3.1 From Chord Names to Chord Interpretation Paths
3.2 Between Chroma Vectors and Chord Interpretations
3.3 From Chroma Vectors to Chord Interpretation Paths
4. Experiments
4.1 Dataset
4.2 Results
5. Conclusion
6. References
13 - TENOR_BOSTON_2023_paper_9279 Gaulhiac.pdf
1. Introduction
2. Background
3. Harmonic Descriptors
3.1 Implementation & Spectra Computation
3.2 Concordance
3.3 Third Order Concordance
3.4 Roughness
4. From Harmonic Descriptors to Harmonic Maps
4.1 Stability of Sounds
4.2 Timbral Considerations
5. Interactive Harmonic Maps
5.1 Implementation
5.2 MPE Control & Harmonic Trajectories
6. Examples
6.1 Influence of the Number of Partials
6.2 Influence of Timbre
6.3 Influence of Dynamics & Playing Techniques
6.4 Influence of Harmonicity
6.5 Roughness
6.6 Third Order Concordance
7. Conclusions & Future Work
8. References
14 - TENOR_BOSTON_2023_paper_7968 Lepper.pdf
1. Introduction
2. Beaming Rules as a Transformation Pipeline
2.1 Foundation: Genuine Beams
2.2 Modification of Genuine Beams
2.3 Beams for Rhythms
2.4 Local Transformations of Beam Patterns
3. Additional External Data
3.1 Indirect Influence by Stem Direction
3.2 Direct Influence
3.3 Beams expressing Tempo – "Feathered" Beams
4. Two-Dimensional Layout: Vertical Position and Pitch Height
4.1 Ergonomic Significance of Beam Inclination
4.2 Stem Direction of Beam Aggregates
4.3 Graphical Placement of Beam Aggregates
4.4 Fine Tuning against the Staff Lines
4.5 Resolving Conflicts by Breaking Beams
4.6 Resolving Conflicts by Knees
4.7 Resolving Conflicts by Changing Height and/or Inclination
5. Aspects Not Covered
6. Conclusion
7. References
A. Appendices
A.1 Polymetric Constellations Expressible by Beams
16 - TENOR_BOSTON_2023_paper_2367 Onttonen.pdf
1. Introduction
2. Main features
2.1 Leader interface
2.2 Musician interface
3. Design principles
4. Development process
5. Technical implementation and limitations
6. Case: Labra
6.1 General remarks
6.2 Two examples
7. Conclusions and future work
8. References
18 - TENOR_BOSTON_2023_paper_9910 Bell.pdf
1. Introduction
1.1 Are scores maps?
1.2 Maps do not represent time
1.2.1 Databases as an art form
1.2.2 Morton Feldman and the European clock makers
2. Corpus-Based Concatenative Sound Synthesis (CBCS) today
2.1 Timbre Space
2.2 Corpus-Based Concatenative Synthesis - State of the art
3. First attempts
4. Motivations
5. Workflow
5.1 Corpus Selection
5.2 Analysis in FluCoMa
5.2.1 Slicing
5.2.2 mfcc on each slice - across one whole slice/segment
5.2.3 statical analysis over each slice
5.2.4 Normalization
5.2.5 Dimensionality Reduction
5.2.6 Neighbourhood queries
5.3 PatchXR
5.3.1 Interaction and OSC communication
6. Future works: the Raspberry Pi Orchestra
7. Conclusions
8. References

Santa Clara University

Scholar Commons Faculty Book Gallery

2014

A Gentle Introduction to SuperCollider (2nd edition) Bruno Ruviaro Santa Clara University, [email protected]

Follow this and additional works at: http://scholarcommons.scu.edu/faculty_books
Part of the Composition Commons

Recommended Citation
Ruviaro, Bruno, "A Gentle Introduction to SuperCollider (2nd edition)" (2014). Faculty Book Gallery. 91.
http://scholarcommons.scu.edu/faculty_books/91

This Book is brought to you for free and open access by Scholar Commons. It has been accepted for inclusion in Faculty Book Gallery by an authorized administrator of Scholar Commons. For more information, please contact [email protected].

A Gentle Introduction to SuperCollider by Bruno Ruviaro

This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-sa/4.0/.

Contents

Part I: BASICS
1 Hello World
2 Server and Language
   2.1 Booting the Server
3 Your first sine wave
4 Error messages
5 Changing parameters
6 Comments
7 Precedence
8 The last thing always gets posted
9 Code blocks
10 How to clean up the Post window
11 Recording the output of SuperCollider
12 Variables
   12.1 "Global" vs. Local
   12.2 Reassignment

Part II: PATTERNS
13 The Pattern family
   13.1 Meet Pbind
   13.2 Pseq
   13.3 Make your code more readable
   13.4 Four ways of specifying pitch
   13.5 More keywords: amplitude and legato
   13.6 Prand
   13.7 Pwhite
   13.8 Expanding your Pattern vocabulary
14 More Pattern tricks
   14.1 Chords
   14.2 Scales
   14.3 Transposition
   14.4 Microtones
   14.5 Tempo
   14.6 Rests
   14.7 Playing two or more Pbinds together
   14.8 Using variables
15 Starting and stopping Pbinds independently
   15.1 Pbind as a musical score
   15.2 EventStreamPlayer
   15.3 Example

Part III: MORE ABOUT THE LANGUAGE
16 Objects, classes, messages, arguments
17 Receiver notation, functional notation
18 Nesting
19 Enclosures
   19.1 Quotation marks
   19.2 Parentheses
   19.3 Brackets
   19.4 Curly Braces
20 Conditionals: if/else and case
21 Functions
22 Fun with Arrays
   22.1 Creating new Arrays
   22.2 That funny exclamation mark
   22.3 The two dots between parentheses
   22.4 How to "do" an Array
23 Getting Help

Part IV: SOUND SYNTHESIS AND PROCESSING
24 UGens
   24.1 Mouse control: instant Theremin
   24.2 Saw and Pulse; plot and scope
25 Audio rate, control rate
   25.1 The poll method
26 UGen arguments
27 Scaling ranges
   27.1 Scale with the method range
   27.2 Scale with mul and add
   27.3 linlin and friends
28 Stopping individual synths
29 The set message
30 Audio Buses
   30.1 Out and In UGens
31 Microphone Input
32 Multichannel Expansion
33 The Bus object
34 Panning
35 Mix and Splay
36 Playing an audio file
37 Synth Nodes
   37.1 The glorious doneAction: 2
38 Envelopes
   38.1 Env.perc
   38.2 Env.triangle
   38.3 Env.linen
   38.4 Env.pairs
      38.4.1 Envelopes—not just for amplitude
   38.5 ADSR Envelope
   38.6 EnvGen
39 Synth Definitions
   39.1 SynthDef and Synth
   39.2 Example
   39.3 Under the hood
40 Pbind can play your SynthDef
41 Control Buses
   41.1 asMap
42 Order of Execution
   42.1 Groups

Part V: WHAT'S NEXT?
43 MIDI
44 OSC
   44.1 Sending OSC from another computer
   44.2 Sending OSC from a smartphone
45 Quarks and plug-ins
46 Extra Resources

A Gentle Introduction to SuperCollider
Bruno Ruviaro
September 6, 2014

Part I

BASICS

1 Hello World

Ready to create your first SuperCollider program? Assuming you have SC up and running in front of you, open a new document (menu File→New, or shortcut [ctrl+N]) and type the following line:

"Hello World".postln;

Leave your cursor anywhere in that line (it doesn't matter if it's at the beginning, middle, or end). Press [ctrl+Enter] to evaluate the code. "Hello World" appears in the Post window. Congratulations! That was your first SuperCollider program.


[Figure 1: SuperCollider IDE interface.]

TIP: Throughout this document, ctrl (control) indicates the modifier key for keyboard shortcuts that is used on Linux and Windows platforms. On Mac OSX, use cmd (command) instead.

Figure 1 shows a screenshot of the SuperCollider IDE (Integrated Development Environment) when you first open it. Let's take a moment to get to know it a bit. What is the SuperCollider IDE? It is "a cross-platform coding environment developed specifically for SuperCollider (...), easy to start using, handy to work with, and sprinkled with powerful features for experienced coders. It is also very customizable. It runs equally well and looks almost the same on Mac OSX, Linux and Windows."*

* Quoted from the SuperCollider Documentation: http://doc.sccode.org/Guides/SCIde.html. Visit that page to learn more about the IDE interface.

The main parts you see on the SC window are the Code Editor, the Help Browser, and the Post window. If you don't see any of these when you open SuperCollider, simply go to the menu View→Docklets (that's where you can show or hide each of them). There is also the Status Bar, always located on the bottom right corner of the window. Always keep the Post window visible, even if you don't understand yet all the stuff being printed there. The Post window displays the responses of the program to our commands: results of code evaluation, various notifications, warnings, errors, etc.

TIP: You can temporarily enlarge and reduce the editor font size with the shortcuts [Ctrl++] and [Ctrl+-] (that's the control key together with the plus or minus keys, respectively). If you are on a laptop without a real plus key, use [Ctrl+shift+=].

2 Server and Language

On the Status Bar you can see the words "Interpreter" and "Server." The Interpreter starts up turned on by default ("Active"), while the Server is turned off (that's what all the zeros mean). What is the Interpreter, and what is the Server? SuperCollider is actually made of two distinct applications: the server and the language. The server is responsible for making sounds. The language (also referred to as client or interpreter) is used to control the server. The first is called scsynth (SC-synthesizer), the second sclang (SC-language). The Status Bar tells us the status (on/off) of each of these two components.


Don't worry if this distinction does not make much sense to you just now. The two main things you need to know at this point are:

1. Everything that you type in SuperCollider is in the SuperCollider language (the client): that's where you write and execute commands, and see results in the Post window.

2. Everything that makes sound in SuperCollider is coming from the server (the "sound engine," so to speak), controlled by you through the SuperCollider language.

2.1 Booting the Server

Your “Hello World” program produced no sound: everything happened in the language, and the server was not used at all. The next example will make sound, so we need to make sure the Server is up and running. The easiest way to boot the server is with the shortcut [ctrl+B]. Alternatively, you can also click on the zeros in the Status Bar: a menu pops up, and one of the options is “Boot Server.” You will see some activity in the Post window as the server boots up. After you have successfully started the server, the numbers on the Status Bar will turn green. You will have to do this each time you launch SC, but only once per session.
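You can also boot the server from code. The single letter s stands for the default server (more about variables soon), and sending it the boot message does the same thing as the shortcut:

s.boot; // boot the default server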

3 Your first sine wave

"Hello World" is traditionally the first program that people create when learning a new programming language. You've already done that in SuperCollider. Creating a simple sine wave might be the "Hello World" of computer music languages. Let's jump right to it. Type and evaluate the following line of code. Careful—this can be loud. Bring your volume all the way down, evaluate the line, then increase the volume slowly.


{SinOsc.ar}.play;

That's a beautiful, smooth, continuous, and perhaps slightly boring sine wave. You can stop the sound with [ctrl+.] (That's the control key plus the period key.) Memorize this key combination, because you will be using it a lot to stop any and all sounds in SC. Now let's make this sine wave a bit more interesting. Type this:

{SinOsc.ar(LFNoise0.kr(10).range(500, 1500), mul: 0.1)}.play;

Remember, you just need to leave your cursor anywhere within the line and hit [ctrl+Enter] to evaluate. Alternatively, you could also select the entire line before evaluating it.

TIP: Typing the code examples by yourself is a great learning tool. It will help you to build confidence and familiarize yourself with the language. When reading tutorials in digital format, you may be tempted to simply copy and paste short snippets of code from the examples. That's fine, but you will learn more if you type it up yourself—try that at least in the first stages of your SC learning.

4 Error messages

No sound when you evaluated the last example? If so, your code probably had a typo: a wrong character, a missing comma or parenthesis, etc. When something goes wrong in your code, the Post window gives you an error message. Error messages can be long and cryptic, but don't panic: with time you will learn how to read them. A short error message could look like this:

ERROR: Class not defined.
  in file 'selected text'
  line 1 char 19:

  {SinOsc.ar(LFNoiseO.kr(12).range(400, 1600), mul: 0.01)}.play;
-----------------------------------
nil

This error message says, "Class not defined," and points to the approximate location of the error ("line 1 char 19"). Classes in SC are those blue words that start with a capital letter (like SinOsc and LFNoise0). It turns out this error was due to the user typing LFNoiseO with a capital letter "O" at the end. The correct class is LFNoise0, with the number zero at the end. As you can see, attention to detail is crucial. If you have an error in your code, proofread it, change as needed, and try again until it's fixed. If you had no error at first, try introducing one now so you can see what the error message looks like (for example, remove a comma).

TIP: Learning SuperCollider is like learning another language like German, Portuguese, or Japanese... just keep trying to speak it, work on expanding your vocabulary, pay attention to grammar and syntax, and learn from your mistakes. The worst that can happen here is to crash SuperCollider. Not nearly as bad as taking the wrong bus in São Paulo because of a mispronounced request for directions.

5 Changing parameters

Here's a nice example adapted from the first chapter of the SuperCollider book.* As with the previous examples, don't worry about trying to understand everything. Just enjoy the sound result and play with the numbers.

{RLPF.ar(Dust.ar([12, 15]), LFNoise1.ar([0.3, 0.2]).range(100, 3000), 0.02)}.play;

Stop the sound, change some of the numbers, and evaluate again. For example, what happens when you replace the numbers 12 and 15 with lower numbers between 1 and 5? After LFNoise1, what if instead of 0.3 and 0.2 you tried something like 1 and 2? Change them one at a time. Compare the new sound with the previous sound, listen to the differences. See if you can understand what number is controlling what. This is a fun way of exploring SuperCollider: grab a snippet of code that makes something interesting, and mess around with the parameters to create variations of it. Even if you don't fully understand the role of every single number, you can still find interesting sonic results.

TIP: Like with any software, remember to frequently save your work with [ctrl+S]! When working on tutorials like this one, you will often come up with interesting sounds by experimenting with the examples provided. When you want to keep something you like, copy the code onto a new document and save it. Notice that every SuperCollider file has the extension .scd, which stands for "SuperCollider Document."

* Wilson, S. and Cottle, D. and Collins, N. (Editors). The SuperCollider Book, MIT Press, 2011, p. 5. Several things in the present tutorial were borrowed, adapted from, or inspired by David Cottle's excellent "Beginner's Tutorial," which is the first chapter of the book. This tutorial borrows some examples and explanations from Cottle's chapter, but—differently from it—assumes less exposure to computer music, and introduces the Pattern family as the backbone of the pedagogical approach.


6 Comments

All text in your code that shows in red color is a comment. If you are new to programming languages, comments are a very useful way to document your code, both for yourself and for others who may have to read it later. Any line that starts with a double slash is a comment. You can write comments right after a valid line of code; the comment part will be ignored when you evaluate. In SC we use a semicolon to indicate the end of a valid statement.

2 + 5 + 10 - 5; // just doing some math
rrand(10, 20); // generate a random number between 10 and 20

You can evaluate a line even if your cursor is in the middle of the comment after that line. The comment part is ignored. The next two paragraphs will be written as "comments" just for the sake of the example.

// You can quickly comment out one line of code using the shortcut [ctrl+/].
"Some SC code here...".postln;
2 + 2;
// If you write a really long comment, your text may break into what looks like a new line that does *not* start with a double slash. That still counts as a single line of comment.
/* Use "slash + asterisk" to start a longer comment with several lines.
Close the big comment chunk with "asterisk + slash."
The shortcut mentioned above also works for big chunks: simply select the lines
of code you want to comment out, and hit [ctrl+/]. Same to un-comment. */

7 Precedence

SuperCollider follows a left to right order of precedence, regardless of operation. This means, for example, that multiplication does not happen first:

// In high school, the result was 9; in SC, it is 14:
5 + 2 * 2;
// Use parentheses to force a specific order of operations:
5 + (2 * 2); // equals 9.

When combining messages and binary operations, messages take precedence. For example, in 5 + 2.squared, the squaring happens first.
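A quick sketch you can evaluate line by line to confirm this:

5 + 2.squared; // the message squared is applied first: 5 + 4 = 9
(5 + 2).squared; // parentheses force the sum first: 7 squared = 49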

8 The last thing always gets posted

A small but useful detail to understand: SuperCollider, by default, always posts to the Post window the result of whatever was the last thing to be evaluated. This explains why your Hello World code prints twice when you evaluate. Type the following lines onto a new document, then select all with [ctrl+A] and evaluate all lines at once:

"First Line".postln;
"Second Line".postln;
(2 + 2).postln;
3 + 3;
"Finished".postln;

All five lines are executed by SuperCollider. You see the result of 2 + 2 in the Post window because there was an explicit postln request. The result of 3 + 3 was calculated, but there was no request to post it, so you don't see it. Then the command of the last line is executed (the word "Finished" gets posted due to the postln request). Finally, the result of the very last thing to be evaluated is posted by default: in this case, it happened to be the word "Finished."

9 Code blocks

Selecting multiple lines of code before evaluating can be tedious. A much easier way of running a chunk of code all at once is by creating a code block: simply enclose in parentheses all lines of code that you want to run together. Here's an example:

(
// A little poem
"Today is Sunday".postln;
"Foot of pipe".postln;
"The pipe is made of gold".postln;
"It can beat the bull".postln;
)

The outer parentheses are delimiting the code block. As long as your cursor is anywhere within the parentheses, a single [ctrl+Enter] will evaluate all lines for you (they are executed in order from top to bottom, but it’s so fast that it seems simultaneous). Using code blocks saves you the trouble of having to select all the lines again every time you change something and want to re-evaluate. For example, change some of the words between double quotes, and hit [ctrl+Enter] right after making the change. The entire block of code is evaluated without you having to manually select all lines. SuperCollider highlights the block for a second to give you a visual hint of what’s being executed.

10 How to clean up the Post window

This is such a useful command for cleaning freaks that it deserves a section of its own: [ctrl+shift+P]. Evaluate this line and enjoy cleaning the Post window afterwards:

100.do({"Print this line over and over...".scramble.postln});

You’re welcome.

11 Recording the output of SuperCollider

Soon you will want to start recording the sound output of your SuperCollider patches. Here's a quick way:

// QUICK RECORD
// Start recording:
s.record;
// Make some cool sound
{Saw.ar(LFNoise0.kr([2, 3]).range(100, 2000), LFPulse.kr([4, 5]) * 0.1)}.play;
// Stop recording:
s.stopRecording;
// Optional: GUI with record button, volume control, mute button:
s.makeWindow;

The Post Window shows the path of the folder where the file was saved. Find the file, open it with Audacity or similar program, and verify that the sound was indeed recorded. For more info, look at the “Server” Help file (scroll down to “Recording Support”). Also online at http://doc.sccode.org/Classes/Server.html.


12 Variables

You can store numbers, words, unit generators, functions, or entire blocks of code in variables. Variables can be single letters or whole words chosen by you. We use the equal sign (=) to "assign" variables. Run these lines one at a time and watch the Post window:

x = 10;
y = 660;
y; // check what's in there
x;
x + y;
y - x;

The first line assigns the number 10 to the variable x. The second line puts 660 into the variable y. The next two lines prove that those letters now "contain" those numbers (the data). Finally, the last two lines show that we can use the variables to do any operations with the data. Lowercase letters a through z can be used anytime as variables in SuperCollider. The only single letter that by convention we don't use is s, which by default represents the Server. Anything can go into a variable:

a = "Hello, World"; // a string of characters
b = [0, 1, 2, 3, 5]; // a list
c = Pbind(\note, Pwhite(0, 10), \dur, 0.1); // you'll learn all about Pbind later, don't worry
// ...and now you can use them just like you would use the original data:
a.postln; // post it
b + 100; // do some math
c.play; // play that Pbind
d = b * 5; // take b, multiply by 5, and assign that to a new variable


Often it will make more sense to give better names to your variables, to help you remember what they stand for in your code. You can use a ~ (tilde) to declare a variable with a longer name. Note that there is no space between the tilde and the name of the variable.

~myFreqs = [415, 220, 440, 880, 220, 990];
~myDurs = [0.1, 0.2, 0.2, 0.5, 0.2, 0.1];

Pbind(\freq, Pseq(~myFreqs), \dur, Pseq(~myDurs)).play;

Variable names must begin with lowercase letters. You can use numbers, underscores, and uppercase letters within the name, just not as the first character. All characters must be contiguous (no spaces or punctuation). In short, stick to letters and numbers and the occasional underscore, and avoid all other characters when naming your variables. ~myFreqs, ~theBestSineWave, and ~banana_3 are valid names. ~MyFreqs, ~theBest&*#SineWave, and ~banana!!! are bad names.

12.1 "Global" vs. Local

The variables you have seen up to now (the single lowercase letters a through z, and those starting with the tilde (~) character) may be loosely called "global variables," because once declared, they will work "globally" anywhere in the patch, in other patches, even in other SC documents, until you quit SuperCollider.*

* Technically speaking, variables starting with a tilde are called Environment variables, and lowercase letter variables (a through z) are called Interpreter variables. SuperCollider beginners do not need to worry about these distinctions, but keep them in mind for the future. Chapter 5 of the SuperCollider book explains the differences in detail.


Local variables, on the other hand, are declared with the reserved keyword var at the beginning of the line. You can assign an initial value to a variable at declaration time (var apples = 4). Local variables only exist within the scope of that code block. Here's a simple example comparing the two types of variables. Evaluate line by line and watch the Post window.

// Environment variables
~galaApples = 4;
~bloodOranges = 5;
~limes = 2;
~plantains = 1;

["Citrus", ~bloodOranges + ~limes];
["Non-citrus", ~plantains + ~galaApples];

// Local variables: valid only within the code block.
// Evaluate the block once and watch the Post window:
(
var apples = 4, oranges = 3, lemons = 8, bananas = 10;
["Citrus fruits", oranges + lemons].postln;
["Non-citrus fruits", bananas + apples].postln;
"End".postln;
)

~galaApples; // still exists
apples; // gone


12.2 Reassignment

One last useful thing to understand about variables is that they can be reassigned: you can give them a new value at any time.

// Assign a variable
a = 10 + 3;
a.postln; // check it
a = 999; // reassign the variable (give it a new value)
a.postln; // check it: the old value is gone.

A very common practice that is sometimes confusing for beginners is when the variable itself is used in its own reassignment. Take a look at this example:

x = 10; // assign 10 to the variable x
x = x + 1; // assign x + 1 to the variable x
x.postln; // check it

The easiest way to understand that last line is to read it like this: "take the current value of variable x, add 1 to it, and assign this new result to the variable x." It's really not complicated, and you will see later on how this can be useful.*

* This example clearly demonstrates that the equal sign, in programming, is not the same equal sign that you learned in mathematics. In math, x = x + 1 is impossible (a number cannot be equal to itself plus one). In a programming language like SuperCollider, the equal sign can be seen as a kind of action: take the result of the expression on the right side of the sign, and "assign it" to the variable on the left side.
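Here is a minimal sketch of why that is useful (the variable name ~count is just an arbitrary example): a variable can act as a counter that grows each time a piece of code runs.

~count = 0;
5.do({ ~count = ~count + 1 }); // run the reassignment five times in a row
~count.postln; // posts 5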


Part II

PATTERNS

13 The Pattern family

Let's try something different now. Type and run this line of code:

Pbind(\degree, Pseries(0, 1, 30), \dur, 0.05).play;

13.1 Meet Pbind

Pbind is a member of the Pattern family in SuperCollider. The capital P in Pbind and Pseries stands for Pattern; we'll meet other members of the family soon. For now, let's take a closer look at Pbind only. Try this stripped down example:

Pbind(\degree, 0).play;

The only thing this line of code does in life is to play the note middle C, one time per second. The keyword \degree refers to scale degrees, and the number 0 means the first scale degree (a C major scale is assumed, so that's the note C itself). Note that SuperCollider starts counting things from 0, not from 1. In a simple line like the above, the notes C, D, E, F, G... would be represented by the numbers 0, 1, 2, 3, 4... Try changing the number and notice how the note changes when you re-evaluate. You can also choose notes below middle C by using negative numbers (for example, -2 will give you the note A below middle C). In short, just imagine that the middle C note of the piano is 0, and then count white keys up or down (positive or negative numbers) to get any other note.
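For instance, here is a quick sketch to hear it (run one line at a time; stop the sound with [ctrl+.]):

Pbind(\degree, 4).play; // G: four white keys above middle C
Pbind(\degree, -2).play; // A: two white keys below middle C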

Now play around a bit with the duration of the notes. Pbind uses the keyword \dur to specify durations in seconds:

Pbind(\degree, 0, \dur, 0.5).play;

Of course this is still very rigid and inflexible—always the same note, always the same duration. Don’t worry: things will get better very soon. But first let’s take a look at the other ways you can specify pitch inside a Pbind.

13.2 Pseq

Let's go ahead and play several notes in sequence, like a scale. Let's also make our notes shorter, say, 0.2 seconds long.

Pbind(\degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7], 1), \dur, 0.2).play;

This line introduces a new member of the Pattern family: Pseq. As the name might suggest, this pattern deals with sequences. All that Pseq needs in order to play a sequence is:

• a list of items between square brackets
• a number of repetitions

In the example, the list is [0, 1, 2, 3, 4, 5, 6, 7], and the number of repeats is 1. This Pseq simply means: "play once all the items of the list, in sequence." Notice that these two elements, list and number of repeats, are inside Pseq's own parentheses, and they are separated by a comma. Also notice where Pseq appears within the Pbind: it is the input value of \degree. This is important: instead of providing a single, fixed number for scale degree (as in our first simple Pbind), we are providing a whole Pseq: a recipe for a sequence of numbers. With this in mind, we can easily expand upon this idea and use another Pseq to control durations as well:

Pbind(\degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7], 5), \dur, Pseq([0.2, 0.1, 0.1, 0.2, 0.2, 0.35], inf)).play;

What is happening in this example? First, we have changed the number of repeats of the first Pseq to 5, so the entire scale will play five times. Second, we have replaced the previously fixed \dur value of 0.2 with another Pseq. This new Pseq has a list of six items: [0.2, 0.1, 0.1, 0.2, 0.2, 0.35]. These numbers become duration values for the resulting notes. The repeats value of this second Pseq is set to inf, which stands for "infinite." This means that the Pseq has no limit on the number of times it can repeat its sequence. Does the Pbind play forever, then? No: it stops after the other Pseq has finished its job, that is, after the sequence of scale degrees has been played 5 times. Finally, the example has a total of eight different notes (the list in the first Pseq), while there are only six values for duration (second Pseq). When you provide sequences of different sizes like this, Pbind simply cycles through them as needed. Answer these questions to practice what you have learned:

• Try the number 1 instead of inf as the repeats argument of the second Pseq. What happens?
• How can you make this Pbind play forever?

Solutions are at the end of the book.[1]

13.3 Make your code more readable

You may have noticed that the line of code above is quite long. In fact, it is so long that it wraps to a new line, even though it is technically a single statement. Long lines of code can be confusing to read. To avoid this, it is common practice to break the code into several indented lines; the goal is to make it as clear and intelligible as possible. The same Pbind above can be written like this:

(
Pbind(
    \degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7], 5),
    \dur, Pseq([0.2, 0.1, 0.1, 0.2, 0.2, 0.35], inf)
).play;
)

From now on, get into the habit of writing your Pbinds like this. Writing code that looks neatly arranged and well organized will help you a lot in learning SuperCollider. Also, notice that we enclosed this Pbind within parentheses to create a code block (remember section 9?): because it is no longer in a single line, we need to do this to be able to run it all together. Just make sure the cursor is anywhere within the block before evaluating.

13.4 Four ways of specifying pitch

Pbind accepts other ways of specifying pitch, not just scale degrees.

• If you want to use all twelve chromatic notes (black and white keys of the piano), you can use \note instead of \degree. 0 will still mean middle C, but now the steps include black keys of the piano (0 = middle C, 1 = C#, 2 = D, etc.).
• If you prefer to use MIDI note numbering, use \midinote (60 = middle C, 61 = C#, 62 = D, etc.).
• Finally, if you'd rather specify frequencies directly in Hertz, use \freq.

See Figure 2 for a comparison of all four methods. In the next example, the four Pbinds all play the same note: the A above middle C (A4).

[Figure 2: Comparing scale degrees, note numbers, midinotes, and frequencies]

Pbind(\degree, 5).play;
Pbind(\note, 9).play;
Pbind(\midinote, 69).play;
Pbind(\freq, 440).play;

TIP: Remember that each type of pitch specification expects numbers in a different sensible range. A list of numbers like [-1, 0, 1, 3] makes sense for \degree and \note, but doesn't make sense for \midinote nor \freq. The table below compares some values using the piano keyboard as a reference.

Note                        \degree   \note   \midinote   \freq
A0 (lowest piano note)        -23      -39       21        27.5
C4                              0        0       60       261.6
A4                              5        9       69       440
C5                              7       12       72       523.2
C8 (highest piano note)        28       48      108      4186

13.5 More keywords: amplitude and legato

The next example introduces two new keywords: \amp and \legato, which define the amplitude of events and the amount of legato between notes. Notice how the code is fairly easy to read thanks to nice indentation and being spread over multiple lines. Enclosing parentheses (top and bottom) are used to delimit a code block for quick execution.

(
Pbind(
    \degree, Pseq([0, -1, 2, -3, 4, -3, 7, 11, 4, 2, 0, -3], 5),
    \dur, Pseq([0.2, 0.1, 0.1], inf),
    \amp, Pseq([0.7, 0.5, 0.3, 0.2], inf),
    \legato, 0.4
).play;
)

Pbind has many of these pre-defined keywords, and with time you will learn more of them. For now, let's stick to just a few: one for pitch (choose from \degree, \note, \midinote, or \freq), one for durations (\dur), one for amplitude (\amp), and one for legato (\legato). Durations are in beats (in this case, 1 beat per second, which is the default); amplitude should be between 0 and 1 (0 = silence, 1 = very loud); and legato works best with values between 0.1 and 1 (if you are not sure about what legato does, simply try the example above with 0.1, then 0.2, then 0.3, all the way up to 1, and hear the results).

Take the last example as a point of departure and create new Pbinds. Change the melody. Come up with new lists of durations and amplitudes. Experiment using \freq for pitch. Remember, you can always choose to use a fixed number for any given parameter, if that's what you need. For example, if you want all notes in your melody to be 0.2 seconds long, there is no need to write Pseq([0.2, 0.2, 0.2, 0.2, ...], inf), not even Pseq([0.2], inf): simply remove the whole Pseq structure and write 0.2 in there.
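For example, here is a sketch of the same melody with every duration fixed at 0.2 seconds (the fixed \amp value is arbitrary):

(
Pbind(
    \degree, Pseq([0, -1, 2, -3, 4, -3, 7, 11, 4, 2, 0, -3], 5),
    \dur, 0.2, // one fixed value replaces the whole Pseq
    \amp, 0.5, // arbitrary fixed amplitude for this sketch
    \legato, 0.4
).play;
)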

13.6 Prand

Prand is a close cousin of Pseq. It also takes in a list and a number of repeats. But instead of playing through the list in sequence, Prand picks a random item from the list every time. Try it:

(
Pbind(
    \degree, Prand([2, 3, 4, 5, 6], inf),
    \dur, 0.15,
    \amp, 0.2,
    \legato, 0.1
).play;
)

Replace Prand with Pseq and compare the results. Now try using Prand for durations, amplitudes, and legato.


13.7 Pwhite

Another popular member of the Pattern family is Pwhite. It is an equal distribution random number generator (the name comes from "white noise"). For example, Pwhite(100, 500) will get you random numbers between 100 and 500.

(
Pbind(
    \freq, Pwhite(100, 500),
    \dur, Prand([0.15, 0.25, 0.3], inf),
    \amp, 0.2,
    \legato, 0.3
).trace.play;
)

The example above also shows another helpful trick: the message trace just before play. It prints out in the Post window the values chosen for every event. Very useful for debugging or simply to understand what is going on! Pay attention to the differences between Pwhite and Prand: even though both have to do with randomness, they take in different arguments, and they do different things. Inside Pwhite’s parentheses you only need to provide a low and a high boundary: Pwhite(low, high). Random numbers will be chosen from within that range. Prand, on the other hand, takes in a list of items (necessarily between square brackets), and a number of repeats: Prand([list, of, items], repeats). Random items will be chosen from the list. Play around with both and make sure you fully understand the difference.
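A minimal side-by-side sketch (run one line at a time and listen):

Pbind(\degree, Pwhite(0, 7), \dur, 0.2).play; // any integer between 0 and 7, even ones you never listed
Pbind(\degree, Prand([0, 2, 4, 7], inf), \dur, 0.2).play; // only items picked from the given list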


TIP: A Pwhite with two integer numbers will generate only integers. For example, Pwhite(100, 500) will output numbers like 145, 278, 496, but not 145.6, 450.32, etc. If you want floating point numbers in your output, write Pwhite(100, 500.0). This is very useful for, say, amplitudes: if you write Pwhite(0, 1) you are only getting 0 or 1, but write Pwhite(0, 1.0) and you will get everything in between.

Try the following questions to test your new knowledge:

a) What is the difference in output between Pwhite(0, 10) and Prand([0, 4, 1, 5, 9, 10, 2, 3], inf)?

b) If you need a stream of integer numbers chosen randomly between 0 and 100, could you use a Prand?

c) What is the difference in output between Pwhite(0, 3) and Prand([0, 1, 2, 3], inf)? What if you write Pwhite(0, 3.0)?

d) Run the examples below. We use \note instead of \degree in order to play a C minor scale (which includes black keys). The list [0, 2, 3, 5, 7, 8, 11, 12] has eight numbers in it, corresponding to pitches C, D, Eb, F, G, Ab, B, C, but how many events does each example actually play? Why?

// Pseq
(
Pbind(
    \note, Pseq([0, 2, 3, 5, 7, 8, 11, 12], 4),
    \dur, 0.15
).play;
)


// Prand
(
Pbind(
    \note, Prand([0, 2, 3, 5, 7, 8, 11, 12], 4),
    \dur, 0.15
).play;
)

// Pwhite
(
Pbind(
    \note, Pseq([0, 2, 3, 5, 7, 8, 11, 12], 4),
    \dur, Pwhite(0.15, 0.5)
).play;
)

Answers are at the end of this tutorial.2

TIP: A Pbind stops playing when the shortest internal pattern has finished playing (as determined by the repeats argument of each internal pattern).
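A quick illustration of this behavior (the values are arbitrary): the \degree pattern below could supply eight notes, but the \dur pattern runs out after four values, so the whole Pbind stops after four events.

(
Pbind(
    \degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7], 1), // eight degrees available
    \dur, Pseq([0.2], 4) // but only four durations
).play;
)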

13.8 Expanding your Pattern vocabulary

By now you should be able to write simple Pbinds on your own. You know how to specify pitches, durations, amplitudes, and legato values, and you know how to embed other patterns (Pseq, Prand, Pwhite) to generate interesting parameter changes.


This section will expand your Pattern vocabulary a bit. The examples below introduce six more members of the Pattern family. Try to figure out by yourself what they do. Use the following strategies:

• Listen to the resulting melody; describe and analyze what you hear;
• Look at the Pattern name: does it suggest something? (for example, Pshuf may remind you of the word "shuffle");
• Look at the arguments (numbers) inside the new Pattern;
• Use .trace.play as seen earlier to watch the values being printed in the Post window;
• Finally, confirm your guesses by consulting the Help files (select the name of the pattern and hit [ctrl+D] to open the corresponding Help file).

// Expanding your Pattern vocabulary

// Pser
(
Pbind(
    \note, Pser([0, 2, 3, 5, 7, 8, 11, 12], 11),
    \dur, 0.15
).play;
)

// Pxrand
// Compare with Prand to hear the difference
(
p = Pbind(
    \note, Pxrand([0, 2, 3, 5, 7, 8, 11, 12], inf),
    \dur, 0.15
).play;
)

// Pshuf
(
p = Pbind(
    \note, Pshuf([0, 2, 3, 5, 7, 8, 11, 12], 6),
    \dur, 0.15
).play;
)

// Pslide
// Takes 4 arguments: list, repeats, length, step
(
Pbind(
    \note, Pslide([0, 2, 3, 5, 7, 8, 11, 12], 7, 3, 1),
    \dur, 0.15
).play;
)

// Pseries
// Takes three arguments: start, step, length
(
Pbind(
    \note, Pseries(0, 2, 15),
    \dur, 0.15
).play;
)

// Pgeom
// Takes three arguments: start, grow, length
(
Pbind(
    \note, Pseq([0, 2, 3, 5, 7, 8, 11, 12], inf),
    \dur, Pgeom(0.1, 1.1, 25)
).play;
)

// Pn
(
Pbind(
    \note, Pseq([0, Pn(2, 3), 3, Pn(5, 3), 7, Pn(8, 3), 11, 12], 1),
    \dur, 0.15
).play;
)

Practice using these Patterns—you can do a lot with them. Pbinds are like a recipe for a musical score, with the advantage that you are not limited to writing fixed sequences of notes and rhythms: you can describe processes of ever-changing musical parameters (this is sometimes called "algorithmic composition"). And this is just one aspect of the powerful capabilities of the Pattern family. In the future, when you feel the need for more pattern objects, the best place to go is James Harkins' "A Practical Guide to Patterns," available in the built-in Help files.∗

∗ Also online at http://doc.sccode.org/Tutorials/A-Practical-Guide/PG_01_Introduction.html


14 More Pattern tricks

14.1 Chords

Want to write chords inside Pbinds? Write them as lists (comma-separated values enclosed in square brackets):

(
Pbind(
    \note, Pseq([[0, 3, 7], [2, 5, 8], [3, 7, 10], [5, 8, 12]], 3),
    \dur, 0.15
).play;
)

// Fun with strum
(
Pbind(
    \note, Pseq([[-7, 3, 7, 10], [0, 3, 5, 8]], 2),
    \dur, 1,
    \legato, 0.4,
    \strum, 0.1 // try 0, 0.1, 0.2, etc.
).play;
)

14.2 Scales

When using \degree for your pitch specification, you can add another line with the keyword \scale to change scales (note: this only works in conjunction with \degree, not with \note, \midinote, or \freq):

(
Pbind(
    \scale, Scale.harmonicMinor,
    \degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7], 1),
    \dur, 0.15
).play;
)

// Evaluate this line to see a list of all available scales:
Scale.directory;

// If you need a chromatic note in between scale degrees, do this:
(
Pbind(
    \degree, Pseq([0, 1, 2, 3, 3.1, 4], 1)
).play;
)

// The 3.1 above means one chromatic step above scale degree 3
// (in this case, F# above F). Note that when you don't explicitly
// specify a \scale, Scale.major is assumed.

14.3 Transposition

Use the \ctranspose keyword to achieve chromatic transposition. This will work in conjunction with \degree, \note, and \midinote, but not with \freq.

(
Pbind(
    \note, Pser([0, 2, 3, 5, 7, 8, 11, 12], 11),
    \ctranspose, 12, // transpose an octave above (= 12 semitones)
    \dur, 0.15
).play;
)

14.4 Microtones

How to write microtones:

// Microtones with \note and \midinote:
Pbind(\note, Pseq([0, 0.5, 1, 1.5, 1.75, 2], 1)).play;
Pbind(\midinote, Pseq([60, 69, 68.5, 60.25, 70], 1)).play;

14.5 Tempo

The values you provide to the \dur key of a Pbind are in number of beats, that is, 1 means one beat, 0.5 means half a beat, and so on. Unless you specify otherwise, the default tempo is 60 BPM (beats per minute). To play at a different tempo, you simply create a new TempoClock. Here's a Pbind playing at 120 beats per minute:

(
Pbind(
    \degree, Pseq([0, 0.1, 1, 2, 3, 4, 5, 6, 7]),
    \dur, 1
).play(TempoClock(120/60)); // 120 beats over 60 seconds: 120 BPM
)

By the way, did you see that the Pseq above is taking only one argument (the list)? Where is the repeats value that always came after the list? You can hear that the example plays through the sequence only once, but why? This is a common property of all Patterns (and in fact, of many other objects in SuperCollider): if you omit an argument, it will use a built-in default value. In this case, the default repeats for Pseq is 1. Remember your first ridiculously simple Pbind? It was a mere Pbind(\degree, 0).play and it only knew how to play one note. You didn't provide any info for duration, amplitude, legato, etc. In these cases Pbind simply goes ahead and uses its default values for those.
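For instance, in this minimal sketch only \degree is specified; everything else (duration, amplitude, legato) falls back to defaults, including the default \dur of one beat:

// Only \degree is given; \dur, \amp, \legato all use default values
Pbind(\degree, Pseq([0, 4, 2, 5], 1)).play;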

14.6 Rests

Here is how to write rests. The number inside Rest is the duration of the rest in beats. Rests can go anywhere in the Pbind, not just in the \dur line.

(
Pbind(
    \degree, Pwhite(0, 10),
    \dur, Pseq([0.1, 0.1, 0.3, 0.6, Rest(0.3), 0.25], inf)
).play;
)

14.7 Playing two or more Pbinds together

To start a few Pbinds simultaneously, simply enclose all of them within a single code block:

( // open big block
Pbind(
    \freq, Pn(Pseries(110, 111, 10)),
    \dur, 1/2,
    \legato, Pwhite(0.1, 1)
).play;

Pbind(
    \freq, Pn(Pseries(220, 222, 10)),
    \dur, 1/4,
    \legato, Pwhite(0.1, 1)
).play;

Pbind(
    \freq, Pn(Pseries(330, 333, 10)),
    \dur, 1/6,
    \legato, 0.1
).play;
) // close big block

In order to play Pbinds in a time-ordered fashion (other than simply evaluating them manually one after the other), you can use { }.fork:

// Basic fork example. Watch the Post window:
(
{
    "one thing".postln;
    2.wait;
    "another thing".postln;
    1.5.wait;
    "one last thing".postln;
}.fork;
)

// A more interesting example:
(
t = TempoClock(76/60);
{
    Pbind(
        \note, Pseq([[4, 11], [6, 9]], 32),
        \dur, 1/6,
        \amp, Pseq([0.05, 0.03], inf)
    ).play(t);

    2.wait;

    Pbind(
        \note, Pseq([[-25, -13, -1], [-20, -8, 4], \rest], 3),
        \dur, Pseq([1, 1, Rest(1)], inf),
        \amp, 0.1,
        \legato, Pseq([0.4, 0.7, \rest], inf)
    ).play(t);

    2.75.wait;

    Pbind(
        \note, Pseq([23, 21, 25, 23, 21, 20, 18, 16, 20, 21, 23, 21], inf),
        \dur, Pseq([0.25, 0.75, 0.25, 1.75, 0.125, 0.125, 0.80, 0.20, 0.125, 0.125, 1], 1),
        \amp, 0.1,
        \legato, 0.5
    ).play(t);
}.fork(t);
)

For advanced ways of playing Pbinds simultaneously and in sequence, check out Ppar and Pspawner. For more about fork, check out the Routine Help file.

14.8 Using variables

In the earlier section "Expanding your Pattern vocabulary," did you notice how you had to type the same note list [0, 2, 3, 5, 7, 8, 11, 12] several times for multiple Pbinds? Not very efficient to copy the same thing by hand over and over, right? In programming, whenever you find yourself doing the same task repeatedly, it's probably time to adopt a smarter strategy to accomplish the same goal. In this case, we can use variables. As you may remember, variables allow you to refer to any chunk of data in a flexible and concise way (review section 12 if needed). Here's an example:

// Using the same sequence of numbers a lot? Save it in a variable:
c = [0, 2, 3, 5, 7, 8, 11, 12];
// Now you can just refer to it:
Pbind(\note, Pseq(c, 1), \dur, 0.15).play;
Pbind(\note, Prand(c, 6), \dur, 0.15).play;
Pbind(\note, Pslide(c, 5, 3, 1), \dur, 0.15).play;

Another example to practice using variables: let's say we want to play two Pbinds simultaneously. One of them does an ascending major scale, the other does a descending major scale one octave above. Both use the same list of durations. Here is one way of writing this:

~scale = [0, 1, 2, 3, 4, 5, 6, 7];
~durs = [0.4, 0.2, 0.2, 0.4, 0.8, 0.2, 0.2, 0.2];
(
Pbind(
    \degree, Pseq(~scale),
    \dur, Pseq(~durs)
).play;

Pbind(
    \degree, Pseq(~scale.reverse + 7),
    \dur, Pseq(~durs)
).play;
)

Interesting tricks here: thanks to variables, we reuse the same list of scale degrees and durations for both Pbinds. We wanted the second scale to be descending and one octave above the first. To achieve this, we simply use the message .reverse to reverse the order of the list (type ~scale.reverse on a new line and evaluate to see exactly what it does). Then we add 7 to transpose it one octave above (test it as well to see the result).∗ We played two Pbinds at the same time by enclosing them within a single code block.

∗ We could also have used \ctranspose, 12 to get the same transposition.

Exercise: create one additional Pbind inside the code block above, so that you hear three simultaneous voices. Use both variables (~scale and ~durs) in some different way—for example, use them inside a pattern other than Pseq; change the transposition amount; reverse and/or multiply the durations; etc.

15 Starting and stopping Pbinds independently

This is a very common question that comes up about Pbinds, especially the ones that run forever with inf: how can I stop and start individual Pbinds at will? The answer will involve using variables, and we’ll see a complete example soon; but before going there, we need to understand a little more of what happens when you play a Pbind.

15.1 Pbind as a musical score

You can think of Pbind as a kind of musical score: it is a recipe for making sounds, a set of instructions to realize a musical passage. In order for the score to become music, you need to give it to a player: someone who will read the score and make sounds based on those instructions. Let's conceptually separate these two moments: the definition of the score, and the performance of it.

// Define the score
(
p = Pbind(
    \midinote, Pseq([57, 62, 64, 65, 67, 69], inf),
    \dur, 1/7
); // no .play here!
)

// Ask for the score to be played
p.play;

The variable p in the example above simply holds the score—notice that the Pbind does not have a .play message right after its closing parenthesis. No sound is made at that point. The second moment is when you ask SuperCollider to play from that score: p.play. A common mistake at this point is to try p.stop, hoping that it will stop the player. Try it and verify for yourself that it doesn’t work this way. You will understand why in the next couple of paragraphs.

15.2 EventStreamPlayer

Clean the Post window with [ctrl+shift+P] (not really needed, but why not?) and evaluate p.play again. Look at the Post window and you will see that the result is something called an EventStreamPlayer. Every time you call .play on a Pbind, SuperCollider creates a player to realize that action: that's what EventStreamPlayer is. It's like having a pianist materialize in front of you every time you say "I want this score to be played right now." Nice, huh? Well, yes, except that after this anonymous virtual player shows up and starts the job, you have no way to talk to it—it has no name. In slightly more technical terms, you have created an object, but you have no way to refer to that object later.

Maybe at this point you can see why doing p.stop won't work: it's like you are trying to talk to the score instead of talking to the player. The score (the Pbind stored in the variable p) knows nothing about starting or stopping: it is just a recipe. The player is the one who knows about starting, stopping, "would you please take it from the beginning," etc. In other words, you have to talk to the EventStreamPlayer. All you need to do is to give it a name; in other words, store it into a variable:

// Try these lines one by one:
~myPlayer = p.play;
~myPlayer.stop;
~myPlayer.resume;
~myPlayer.stop.reset;
~myPlayer.start;
~myPlayer.stop;

In summary: calling .play on a Pbind generates an EventStreamPlayer; and storing your EventStreamPlayers into variables allows you to access them later to start and stop patterns individually (no need to use [ctrl+.], which kills everything at once).

15.3 Example

Here's a more complex example to wrap up this section. The top melody is borrowed from Tchaikovsky's Album for the Youth, and a lower melody is added in counterpoint. Figure 3 shows the passage in musical notation.

// Define the score
(
var myDurs = Pseq([Pn(1, 5), 3, Pn(1, 5), 3, Pn(1, 6), 1/2, 1/2, 1, 1, 3, 1, 3], inf) * 0.4;
~upperMelody = Pbind(
    \midinote, Pseq([69, 74, 76, 77, 79, 81, Pseq([81, 79, 81, 82, 79, 81], 2), 82, 81, 79, 77, 76, 74, 74], inf),
    \dur, myDurs
);
~lowerMelody = Pbind(
    \midinote, Pseq([57, 62, 61, 60, 59, 58, 57, 55, 53, 52, 50, 49, 50, 52, 50, 55, 53, 52, 53, 55, 57, 58, 61, 62, 62], inf),
    \dur, myDurs
);
)

// Play the two together:
(
~player1 = ~upperMelody.play;
~player2 = ~lowerMelody.play;
)

// Stop them separately:
~player1.stop;
~player2.stop;

// Other available messages
~player1.resume;
~player1.reset;
~player1.play;
~player1.start; // same as .play

First, notice the use of variables. One of them, myDurs, is a local variable. You can tell it's a local variable because it doesn't start with a tilde (~) and it's declared at the top with the reserved keyword var. This variable holds an entire Pseq that will be used as \dur in both of the Pbinds. myDurs is really only needed at the moment of defining the score, so it makes sense to use a local variable for that (though an environment variable would work just fine too). The other variables you see in the example are environment variables—once declared, they are valid anywhere in your SuperCollider patches.

Second, notice the separation between score and players, as discussed earlier. When the Pbinds are defined, they are not played right away—there is no .play immediately after their closing parenthesis. After you evaluate the first code block, all you have is two Pbind definitions stored into the variables ~upperMelody and ~lowerMelody. They are not making sound yet—they are just the scores. The line ~player1 = ~upperMelody.play creates an EventStreamPlayer to do the job of playing the upper melody, and that player is given the name ~player1. Same idea for ~player2. Thanks to this, we can talk to each player and request it to stop, start, resume, etc.

[Figure 3: Pbind counterpoint with a Tchaikovsky melody]

At the risk of being tedious, let's reiterate this one last time:

• A Pbind is just a recipe for making sounds, like a musical score;
• When you call the message play on a Pbind, an EventStreamPlayer object is created;
• If you store this EventStreamPlayer into a variable, you can access it later to use commands like stop and resume.


Part III

MORE ABOUT THE LANGUAGE

16 Objects, classes, messages, arguments

SuperCollider is an Object-Oriented programming language, like Java or C++. It is beyond the scope of this tutorial to explain what this means, so we'll let you search that on the web if you are curious. Here we'll just explain a few basic concepts you need to know to better understand this new language you are learning.

Everything in SuperCollider is an object. Even simple numbers are objects in SC. Different objects behave in different ways and hold different kinds of information. You can request some info or action from an object by sending it a message. When you write something like 2.squared, the message squared is being sent to the receiver object 2. The dot between them makes the connection. Messages are also called methods, by the way.

Objects are specified hierarchically in classes. SuperCollider comes with a huge collection of pre-defined classes, each with their own set of methods. Here's a good way to understand this. Let's imagine there is an abstract class of objects called Animal. The Animal class defines a few general methods (messages) common to all animals. Methods like age, weight, picture could be used to get information about the animal. Methods like move, eat, sleep would make the animal perform a specific action. Then we could have two subclasses of Animal: one called Pet, another called Wild. Each one of these subclasses could have even more subclasses derived from them (like Dog and Cat derived from Pet). Subclasses inherit all methods from their parent classes, and implement new methods of their own to add specialized features. For example, both Dog and Cat objects would happily respond to the .eat message, inherited from the Animal class. Dog.name and Cat.name would return the name of the pet: this method is common to all objects derived from Pet. Dog has a bark method, so you can call Dog.bark and it will know what to do. Cat.bark would throw you an error message: ERROR: Message 'bark' not understood.

In all these hypothetical examples, the words beginning with a capital letter are classes which represent objects. The lowercase words after the dot are messages (or methods) being sent to those objects. Sending a message to an object always returns some kind of information. Finally, messages sometimes accept (or even require) arguments. Arguments are the things that come inside parentheses right after a message. In Cat.eat("sardines", 2), the message eat is being sent to Cat with some very specific information: what to eat, and quantity. Sometimes you will see arguments declared explicitly inside the parentheses (keywords ending with a colon). This is often handy to remind the reader what the argument refers to. Dog.bark(volume: 10) is more self-explanatory than just Dog.bark(10).
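For the curious, here is a sketch of how such a hierarchy would look in SC's class-definition syntax. These classes are entirely hypothetical (they do not exist in SuperCollider), and real class definitions must be saved into the class library rather than evaluated in a normal code window:

Animal {
    *eat { "Munching...".postln }    // hypothetical class method
    *sleep { "Zzz...".postln }
}

Pet : Animal {
    *name { ^"no name yet" }    // all Pets respond to name
}

Dog : Pet {
    *bark { "Woof!".postln }
}

Cat : Pet {
    // no bark method here: Cat.bark would raise
    // ERROR: Message 'bark' not understood
}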

Figure 4: Hypothetical class hierarchy.

OK—enough of this quick and dirty explanation of object-oriented programming. Let's try some examples that you can actually run in SuperCollider. Run one line after the other and see if you can identify the message, the receiver object, and the arguments (if any). The basic structure is:

Receiver.message(arguments)

Answers at the end of this document.3

[1, 2, 3, "wow"].reverse;
"hello".dup(4);
3.1415.round(0.1); // note that the first dot is the decimal point of 3.1415
100.rand; // evaluate this line several times
// Chaining messages is fun:
100.0.rand.round(0.01).dup(4);

17 Receiver notation, functional notation

There is more than one way of writing your expressions in SuperCollider. The one we just saw above is called receiver notation: 100.rand, where the dot connects the Object (100) to the message (rand). Alternatively, the exact same thing can also be written like this: rand(100). This one is called functional notation. You can use either way of writing. Here's how this works when a message takes two or more arguments:

5.dup(20); // receiver notation
dup(5, 20); // same thing in functional notation

3.1415.round(0.1); // receiver notation
round(3.1415, 0.1); // functional notation

In the examples above, you might read dup(5, 20) as "duplicate the number 5 twenty times," and round(3.1415, 0.1) as "round the number 3.1415 to one decimal place." Conversely, the receiver notation versions could be read as "Number 5, duplicate yourself twenty times!" (for 5.dup(20)) and "Number 3.1415, round yourself to one decimal place!" (for 3.1415.round(0.1)). In short: Receiver.message(argument) is equivalent to message(Receiver, argument).

Choosing one writing style over another is a matter of personal preference and convention. Sometimes one method can be clearer than the other. Whatever style you end up preferring (and it's fine to mix them), the important thing is to be consistent. One convention that is widespread among SuperCollider users is that classes (words that begin with uppercase letters) are almost always written as Receiver.message(argument). For example, you will always see SinOsc.ar(440), but you will almost never see ar(SinOsc, 440), even though both are correct.

Exercise: rewrite the following statement using functional notation only: 100.0.rand.round(0.01).dup(4); Solution at the end.4

18 Nesting

The solution to the last exercise has led you to nest things one inside the other. David Cottle has an excellent explanation of nesting in the SuperCollider book, so we will just quote it here.∗

To further clarify the idea of nesting, consider a hypothetical example in which SC will make you lunch. To do so, you might use a serve message. The arguments might be salad, main course, and dessert. But just saying serve(lettuce, fish, banana) may not give you the results you want. So to be safe you could clarify those arguments, replacing each with a nested message and argument.

serve(toss(lettuce, tomato, cheese), bake(fish, 400, 20), mix(banana, icecream))

∗ Cottle, D. "Beginner's Tutorial." The SuperCollider Book, MIT Press, 2011, pp. 8-9.

SC would then serve not just lettuce, fish, and banana, but a tossed salad with lettuce, tomato, and cheese; a baked fish; and a banana sundae. These inner commands can be further clarified by nesting a message(arg) for each ingredient: lettuce, tomato, cheese, and so on. Each internal message produces a result that is in turn used as an argument by the outer message.

// Pseudo-code to make dinner:
serve(
    toss(
        wash(lettuce, water, 10),
        dice(tomato, small),
        sprinkle(choose([blue, feta, gouda]))
    ),
    bake(catch(lagoon, hook, bamboo), 400, 20),
    mix(
        slice(peel(banana), 20),
        cook(mix(milk, sugar, starch), 200, 10)
    )
);

When the nesting has several levels, we can use new lines and indents for clarity. Some messages and arguments are left on one line, some are spread out with one argument per line—whichever is clearer. Each indent level should indicate a level of nesting. (Note that you can have any amount of white space—new lines, tabs, or spaces—between bits of code.) [In the dinner example] the lunch program is now told to wash the lettuce in water for 10 minutes and to dice the tomato into small pieces before tossing them into the salad bowl and sprinkling them with cheese. You've also specified where to catch the fish and to bake it at 400 degrees for 20 minutes before serving, and so on.

To "read" this style of code you start from the innermost nested message and move out to each successive layer. Here is an example aligned to show how the innermost message is nested inside the outer messages.

exprand(1.0, 1000.0);
dup({exprand(1.0, 1000.0)}, 100);
sort(dup({exprand(1.0, 1000.0)}, 100));
round(sort(dup({exprand(1.0, 1000.0)}, 100)), 0.01);

The code below is another example of nesting. Answer the questions that follow. You don't have to explain what the numbers are doing—the task is simply to identify the arguments in each layer of nesting. (Example and exercise questions also borrowed and slightly adapted from Cottle's tutorial.)

// Nesting and proper indentation
(
{
    CombN.ar(
        SinOsc.ar(
            midicps(
                LFNoise1.ar(3, 24,
                    LFSaw.ar([5, 5.123], 0, 3, 80)
                )
            ),
            0, 0.4
        ),
        1, 0.3, 2)
}.play;
)


a) What number is the second argument for LFNoise1.ar?
b) What is the first argument for LFSaw.ar?
c) What is the third argument for LFNoise1.ar?
d) How many arguments are in midicps?
e) What is the third argument for SinOsc.ar?
f) What are the second and third arguments for CombN.ar?

See the end of this document for the answers.5

TIP: If for whatever reason your code has lost proper indentation, simply select all of it and go to menu Edit→Autoindent Line or Region, and it will be fixed.

19 Enclosures

There are four types of enclosures: (parentheses), [brackets], {braces}, and "quotation marks". Each one that you open will need to be closed at a later point. This is called "balancing," that is, keeping properly matched pairs of enclosures throughout your code. The SuperCollider IDE automatically indicates matching parentheses (also brackets and braces) when you close a pair—they show up in red. If you click on a parenthesis that lacks an opening/closing match, you will see a dark red selection telling you something is missing.

Balancing is a quick way to select large sections of code for evaluation, deletion, or copy/paste operations. You can double-click an opening or closing parenthesis (also brackets and braces) to select everything within.

19.1 Quotation marks

Quotation marks are used to enclose a sequence of characters (including spaces) as a single unit. These are called Strings. Single quotes create Symbols, which are slightly different from Strings. Symbols can also be created with a backslash immediately before the text. Thus 'greatSymbol' and \greatSymbol are equivalent.

"Here's a nice string"; 'greatSymbol';

19.2 Parentheses

Parentheses can be used to:

• enclose argument lists: rrand(0, 10);
• force precedence: 5 + (10 * 4);
• create code blocks (multiple lines of code to be evaluated together), as in the example below.
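Here is a minimal sketch combining all three uses (the values are arbitrary):

// A code block: everything between the outer parentheses
// can be evaluated together
(
x = rrand(0, 10);   // argument list in parentheses
y = 5 + (10 * x);   // parentheses forcing precedence
y.postln;
)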

19.3 Brackets

Square brackets define a collection of items, like [1, 2, 3, 4, "hello"]. These are normally called Arrays. An array can contain anything: numbers, strings, functions, patterns, etc. Arrays understand messages such as reverse, scramble, mirror, choose, to name a few. You can also perform mathematical operations on arrays.

[1, 2, 3, 4, "hello"].scramble;
[1, 2, 3, 4, "hello"].mirror;
[1, 2, 3, 4].reverse + 10;
// convert midi to frequency in Hz
[60, 62, 64, 65, 67, 69, 71].midicps.round(0.1);

More on Arrays coming soon in section 22.

19.4 Curly Braces

Braces (or "curly braces") define functions. Functions encapsulate some kind of operation or task that will probably be used and reused multiple times, possibly returning different results each time. The example below is from the SuperCollider book.

exprand(1, 1000.0);
{exprand(1, 1000.0)};

David Cottle walks us through his example: "the first line picks a random number, which is displayed in the post window. The second prints a very different result: a function. What does the function do? It picks a random number. How can that difference affect code? Consider the lines below. The first chooses a random number and duplicates it. The second executes the random-number-picking function 5 times and collects the results in an array."∗

rand(1000.0).dup(5); // picks a number, duplicates it
{rand(1000.0)}.dup(5); // duplicates the function of picking a number
{rand(1000.0)}.dup(5).round(0.1); // all of the above, then round
// essentially, this (which has a similar result)
[rand(1000.0), rand(1000.0), rand(1000.0), rand(1000.0), rand(1000.0)]

∗ Cottle, D. "Beginner's Tutorial." The SuperCollider Book, MIT Press, 2011, p. 13.

More about functions soon. For now, here's a summary of all possible enclosures:

Collections: [list, of, items]
Functions: { often multiple lines of code }
Strings: "words inside quotes"
Symbols: 'singlequotes' or preceded by a \backslash

20 Conditionals: if/else and case

If it's raining, I'll take an umbrella when I go out. If it's sunny, I'll take my sunglasses. Our days are filled with this kind of decision making. In programming, these are the moments when your code has to test some condition, and take different courses of action depending on the result of the test (true or false). There are many types of conditional structures. Let's take a look at two simple ones: if/else and case. The syntax for an if/else in SC is: if(condition, {true action}, {false action}). The condition is a Boolean test (it must return true or false). If the test returns true, the first function is evaluated; otherwise, the second function is. Try it:

// if / else
if(100 > 50, { "very true".postln }, { "very false".postln });
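A case works through pairs of functions in order: each pair is a test function followed by an action function, and the first test that returns true has its paired action evaluated. Here is a minimal sketch (the test values are arbitrary):

// case: the first test returning true wins
(
x = 5.rand; // random integer from 0 to 4
case
    { x < 2 } { "x is small".postln }
    { x < 4 } { "x is medium".postln }
    { true }  { "x is big".postln };
)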


The table below, borrowed from the SuperCollider book, presents some common Boolean operators that you can use. Note the distinction between a single equal sign (x = 10) and two equal signs (x == 10). The single sign means "assign 10 to the variable x," while the double sign means "is x equal to 10?" Type and run some of the examples from the true or false columns, and you will actually see true or false results in the Post window.

Symbol   Meaning                     True example   False example
==       equal to                    10 == 10       10 == 99
!=       not equal to                10 != 99       10 != 10
>        greater than                10 > 5         5 > 10
<        less than                   10 < 99        99 < 10
>=       greater than or equal to    10 >= 10       3 >= 10
<=       less than or equal to       10 <= 10       99 <= 10

22 Fun with Arrays

a = [10, 11, 12, 13, 14, 15, 16, 17]; // an example array to work with (any array will do)
a.size;
["wow", 99] ++ a; // concatenates the two arrays into a new one
a ++ \hi; // a Symbol is added as a single item
a ++ 'hi'; // same as above
a ++ "hi"; // a String is a collection of characters
a.add(44); // creates a new array with a new element at the end
a.insert(5, "wow"); // inserts "wow" at position 5, pushes other items forward (returns a new array)
a; // evaluate this and see that none of the above operations actually changed the original array
a.put(2, "oops"); // put "oops" at index 2 (destructive; evaluate the line above again to check)
a.permute(3); // permute: item in position 3 goes to position 0, and vice-versa
a.mirror; // makes it a palindrome
a.powerset; // returns all possible combinations of the array's elements

You can do math with arrays:

[1, 2, 3, 4, 5] + 10;
[1, 2, 3, 4, 5] * 10;
([1, 2, 3, 4, 5] / 7).round(0.01); // notice the parentheses for precedence
x = 11; y = 12; // try some variables
[x, y, 9] * 100;
// but make sure you only do math with proper numbers
[1, 2, 3, 4, "oops", 11] + 10; // strange result

22.1 Creating new Arrays

Here are a few ways of using the class Array to create new collections:

// Arithmetic series
Array.series(size: 6, start: 10, step: 3);
// Geometric series
Array.geom(size: 10, start: 1, grow: 2);
// Compare the two:
Array.series(7, 100, -10); // 7 items; start at 100, step of -10
Array.geom(7, 100, 0.9); // 7 items; start at 100; multiply by 0.9 each time

// Meet the .fill method
Array.fill(10, "same");
// Compare:
Array.fill(10, rrand(1, 10));
Array.fill(10, {rrand(1, 10)}); // function is re-evaluated 10 times

// The function for the .fill method can take a default argument that is a counter.
// The argument name can be whatever you want.
Array.fill(10, {arg counter; counter * 10});
// For example, generating a list of harmonic frequencies:
Array.fill(10, {arg wow; wow + 1 * 440});

// The .newClear method
a = Array.newClear(7); // creates an empty array of given size
a[3] = "wow"; // same as a.put(3, "wow")

22.2 That funny exclamation mark

It is just a matter of time until you see something like 30!4 in someone else's code. This shortcut notation simply creates an array containing the same item a number of times:

// Shortcut notation:
30!4;
"hello" ! 10;
// It gives the same results as the following:
30.dup(4);
"hello".dup(10);
// or
Array.fill(4, 30);
Array.fill(10, "hello");

22.3 The two dots between parentheses

Here is another common syntax shortcut used to create arrays:

// What is this?
(50..79);
// It's a shortcut to generate an array with an arithmetic series of numbers.
// The above has the same result as:
series(50, 51, 79);
// or
Array.series(30, 50, 1);

// For a step different than 1, you can do this:
(50, 53 .. 79); // step of 3
// Same result as:
series(50, 53, 79);
Array.series(10, 50, 3);

Note that each command implies a slightly different way of thinking. The (50..79) allows you to think this way: "just give me an array from 50 to 79." You don't necessarily think about how many items the array will end up having. On the other hand, Array.series allows you to think: "just give me an array with 30 items total, counting up from 50." You don't necessarily think about what the last number in the series will be. Also note that the shortcut uses parentheses, not square brackets. The resulting array, of course, will be in square brackets.

22.4 How to "do" an Array

Often you will need to do some action over all items of a collection. We can use the method do for this:

~myFreqs = Array.fill(10, {rrand(440, 880)});

// Now let's do some simple action on every item of the list:
~myFreqs.do({arg item, count;
    ("Item " ++ count ++ " is " ++ item ++ " Hz. Closest midinote is " ++ item.cpsmidi.round).postln;
});

// If you don't need the counter, just use one argument:
~myFreqs.do({arg item; {SinOsc.ar(item, 0, 0.1)}.play});
~myFreqs.do({arg item; item.squared.postln});

// Of course something as simple as the last one could be done like this:
~myFreqs.squared;

In summary: when you "do" an array, you provide a function. The message do will iterate through the items of the array and evaluate that function each time. The function can take two arguments by default: the array item at the current iteration, and a counter that keeps track of the number of iterations. The names of these arguments can be whatever you want, but they are always in this order: item, count. See also the method collect, which is very similar to do, but returns a new collection with all intermediate results.
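A quick illustration of the difference (values are arbitrary):

// do returns the original array; the postln is just a side effect
[1, 2, 3, 4].do({arg item; (item * 10).postln});
// collect gathers the results into a new array: [10, 20, 30, 40]
[1, 2, 3, 4].collect({arg item; item * 10});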

23 Getting Help

Learn how to make good use of the Help files. Often you will find useful examples at the bottom of each Help page. Be sure to scroll down to check them out, even (or especially) if you don't fully understand the text explanations at first. You can run the examples directly from the Help browser, or you can copy and paste the code onto a new window to play around with it. Select any valid class or method in your SuperCollider code (double-clicking the word will select it), and hit [ctrl+D] to open the corresponding Help file. If you select a class name (for example, MouseX), you will be directed to the class Help file. If you select a method, you will be directed to a list of classes that understand that method (for example, ask for help on the method scramble).∗

∗ Attention: SuperCollider will display in blue any word that starts with a capital letter. This means that the color blue does not guarantee that the word is typo-free: for example, if you type Sinosc (with a wrong lowercase "o"), it will still show up in blue.

Other ways to explore the Help files in the SuperCollider IDE are the "Browse" and "Search" links. Use Browse to navigate the files by categories, and Search to look for words in all Help files. Important note about the Help Browser in the SuperCollider IDE:

• Use the top-right field (where it says "Find...") to look for specific words within the currently open Help file (like you would do a "find" on a website);
• Use the "Search" link (to the right of "Browse") to search text across all Help files.

When you first open parentheses to add arguments to a given method, SC displays a little "tooltip help" to show you what the expected arguments are. For example, type the beginning of the line that you see in figure 6. Right after opening the first parenthesis, you see the tooltip showing that the arguments to a SinOsc.ar are freq, phase, mul, and add. It also shows us what the default values are. This is exactly the same information you would get from the SinOsc Help file. If the tooltip has disappeared, you can bring it back with [ctrl+Shift+Space].

Figure 6: Helpful info is displayed as you type.

Another shortcut: if you would like to explicitly name your arguments (like SinOsc.ar(freq: 890)), try hitting the tab key right after opening the parentheses. SC will autocomplete the correct argument name for you, in order, as you type (hit tab after the comma for subsequent argument names).

TIP: Create a folder with your own "personalized help files." Whenever you figure out some new trick or learn a new object, write a simple example with explanations in your own words, and save it for the future. It may come in handy a month or a year from now. The exact same Help files can also be found online at http://doc.sccode.org/.


Part IV

SOUND SYNTHESIS AND PROCESSING

At this point you already know quite a lot about SuperCollider. The last part of this tutorial introduced you to nitty-gritty details about the language itself, from variables to enclosures and more. You also learned how to create interesting Pbinds using several members of the Pattern family. This part of the tutorial will (finally!) introduce you to sound synthesis and processing with SuperCollider. We will start with the topic of Unit Generators (UGens).∗

∗ Most tutorials start with Unit Generators right away; in this intro to SC, however, we chose to emphasize the Pattern family first (Pbind and friends) for a different pedagogical approach.

24 UGens

You have already seen a few Unit Generators (UGens) in action in sections 3 and 18. What is a UGen? A unit generator is an object that generates sound signals or control signals. These signals are always calculated in the server. There are many classes of unit generators, all of which derive from the class UGen. SinOsc and LFNoise0 are examples of UGens. For more details, look at the Help files called "Unit Generators and Synths," and "Tour of UGens."

When you played your Pbinds earlier in this tutorial, the default sound was always the same: a simple piano-like synth. That synth is made of a combination of unit generators.† You will learn how to combine unit generators to create all sorts of electronic instruments with synthetic and processed sounds. The next example builds up from your first sine wave to create an electronic instrument that you can perform live with the mouse.

† Since you used Pbinds to make sound in SuperCollider so far, you may be tempted to think: "I see, so the Pbind is a Unit Generator!" That's not the case. Pbind is not a Unit Generator—it is just a recipe for making musical events (a score). "So the EventStreamPlayer, the thing that results when I call play on a Pbind, THAT must be a UGen!" The answer is still no. The EventStreamPlayer is just the player, like a pianist, and the pianist does not generate sound. In keeping with this limited metaphor, the instrument piano is the thing that actually vibrates and generates sound. That is a more apt analogy for a UGen: it's not the score, nor the player: it's the instrument. When you made music with Pbinds earlier, SC would create an EventStreamPlayer to play your score with the built-in piano synth. You didn't have to worry about creating the piano or any of that—SuperCollider did all the work under the hood for you. That hidden piano synth is made of a combination of a few Unit Generators.

24.1 Mouse control: instant Theremin

Here's a simple synth that you can perform live. It is a simulation of the Theremin, one of the oldest electronic music instruments:

{SinOsc.ar(freq: MouseX.kr(300, 2500), mul: MouseY.kr(0, 1))}.play;

If you don't know what a Theremin is, please stop everything right now and search for "Clara Rockmore Theremin" on YouTube. Then come back here and try to play the Swan song with your SC Theremin.

SinOsc, MouseX, and MouseY are UGens. SinOsc is generating the sine wave tone. The other two are capturing the motion of your cursor on the screen (X for horizontal motion, Y for vertical motion), and using the numbers to feed frequency and amplitude values to the sine wave. Very simple, and a lot of fun.

24.2 Saw and Pulse; plot and scope

The theremin above used a sine oscillator. There are other waveforms you could use to make the sound. Run the lines below—they use the convenient plot method—to look at the shape of SinOsc, and compare it to Saw and Pulse. The lines below won't make sound—they just let you visualize a snapshot of the waveform.

{ SinOsc.ar }.plot; // sine wave
{ Saw.ar }.plot; // sawtooth wave
{ Pulse.ar }.plot; // square wave

Now rewrite your theremin line replacing SinOsc with Saw, then Pulse. Listen to how different they sound. Finally, try .scope instead of .play in your theremin code, and you will be able to watch a representation of the waveform in real time (a "Stethoscope" window will pop up).
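For example, the sawtooth version of the theremin would look like this (the same MouseX/MouseY ranges as before):

{Saw.ar(freq: MouseX.kr(300, 2500), mul: MouseY.kr(0, 1))}.play;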

25 Audio rate, control rate

It is very easy to spot a UGen in SuperCollider code: they are nearly always followed by the messages .ar or .kr. These letters stand for Audio Rate and Control Rate. Let's see what this means. From the "Unit Generators and Synths" Help file:

A unit generator is created by sending the ar or kr message to the unit generator's class object. The ar message creates a unit generator that runs at audio rate. The kr message creates a unit generator that runs at control rate. Control rate unit generators are used for low frequency or slowly changing control signals. Control rate unit generators produce only a single sample per control cycle and therefore use less processing power than audio rate unit generators.∗

∗ http://doc.sccode.org/Guides/UGens-and-Synths.html

In other words: when you write SinOsc.ar, you are sending the message "audio rate" to the SinOsc UGen. Assuming your computer is running at the common sampling rate of 44100 Hz, this sine oscillator will generate 44100 samples per second to send out to the loudspeaker. Then we hear the sine wave.

Take a moment to think again about what you just read: when you send the ar message to a UGen, you are telling it to generate forty-four thousand and one hundred numbers per second. That's a lot of numbers. You write {SinOsc.ar}.play in the language, and the language communicates your request to the server. The actual job of generating all those samples is done by the server, the "sound engine" of SuperCollider.

Now, when you use kr instead of ar, the job is also done by the server, but there are a couple of differences:

1. The amount of numbers generated per second with .kr is much smaller. {SinOsc.ar}.play generates 44100 numbers per second, while {SinOsc.kr}.play outputs a little under 700 numbers per second (if you are curious, the exact amount is 44100 / 64, where 64 is the so-called "control period").

2. The signal generated with kr does not go to your loudspeakers. Instead, it is normally used to control parameters of other signals—for example, the MouseX.kr in your theremin was controlling the frequency of a SinOsc.

OK, so UGens are these incredibly fast generators of numbers. Some of these numbers become sound signals; others become control signals. So far so good. But what numbers are these, after all? Big? Small? Positive? Negative? It turns out they are very small numbers, often between -1 and +1, sometimes just between 0 and 1. All UGens can be divided into two categories according to the range of numbers they generate: unipolar UGens and bipolar UGens. Unipolar UGens generate numbers between 0 and 1. Bipolar UGens generate numbers between -1 and +1.

25.1 The poll method

Snooping into the output of some UGens should make this clearer. We can't possibly expect SuperCollider to print thousands of numbers per second in the Post window, but we can ask it to print a few of them every second, just for a taste. Type and run the following lines one at a time (make sure your server is running), and watch the Post window:

// just watch the Post window (no sound)
{SinOsc.kr(1).poll}.play;
// hit ctrl+period, then evaluate the next line:
{LFPulse.kr(1).poll}.play;

The examples make no sound because we are using kr—the result is a control signal, so nothing is sent to the loudspeakers. The point here is just to watch the typical output of a SinOsc. The message poll grabs 10 numbers per second from the SinOsc output and prints them out in the Post window. The argument 1 is the frequency, which just means that the sine wave will take one second to complete a whole cycle. Based on what you observed, is SinOsc unipolar or bipolar? What about LFPulse?6

Bring down the volume before evaluating the next line, then bring it back up slowly. You should hear soft clicks.

{LFNoise0.ar(1).poll}.play;

Because we sent it the message ar, this Low Frequency Noise generator is churning out 44100 samples per second to your sound card—it's an audio signal. Each sample is a number between -1 and +1 (so it's a bipolar UGen). With poll you are only seeing ten of those per second. LFNoise0.ar(1) picks a new random number every second. All of this is done by the server. Stop the clicks with [ctrl+.] and try changing the frequency of LFNoise0. Try numbers like 3, 5, 10, and then higher. Watch the output numbers and hear the results.

26 UGen arguments

Most of the time you will want to specify arguments to the UGens you are using. You have already seen that: when you write {SinOsc.ar(440)}.play, the number 440 is an argument to the SinOsc.ar; it specifies the frequency that you will hear. You can be explicit about naming the arguments, like this: {SinOsc.ar(freq: 440, mul: 0.5)}.play. The argument names are freq and mul (note the colon immediately after the words in the code). The mul stands for "multiplier", and is essentially the amplitude of the waveform. If you don't specify mul, SuperCollider uses the default value of 1 (maximum amplitude). Using mul: 0.5 means multiplying the waveform by half; in other words, it will play at half of the maximum amplitude.

In your theremin code, the SinOsc arguments freq and mul were explicitly named. You may recall that MouseX.kr(300, 2500) was used to control the frequency of the theremin. MouseX.kr takes two arguments: a low and a high boundary for its output range. That's what the numbers 300 and 2500 were doing there. Same thing for the MouseY.kr(0, 1) controlling amplitude. Those arguments inside the mouse UGens were not explicitly named, but they could be.

How do you find out what arguments a UGen will accept? Simply go to the corresponding Help file: double-click the UGen name to select it, and hit [ctrl+D] to open the Documentation page. Do that now for, say, MouseX. After the Description section you see the Class Methods section. Right there, it says that the arguments of the kr method are minval, maxval, warp, and lag. From the same page you can learn what each of them does. Whenever you don't provide an argument, SC will use the default values that you see in the Help file. If you don't name the arguments explicitly, you have to provide them in the exact order shown in the Help file. If you do name them explicitly, you can put them in any order, and even skip some in the middle. Naming arguments explicitly is also a good learning tool, as it helps you to better understand your code. An example is given below.

// minval and maxval provided in order, no keywords
{MouseX.kr(300, 2500).poll}.play;
// minval, maxval and lag provided, skipped warp
{MouseX.kr(minval: 300, maxval: 2500, lag: 10).poll}.play;

27 Scaling ranges

The real fun starts when you use some UGens to control the parameters of other UGens. The theremin example did just that. Now you have all the tools to understand exactly what was going on in one of the examples of section 3. The last three lines of the example demonstrate step by step how the LFNoise0 is used to control frequency:

{SinOsc.ar(freq: LFNoise0.kr(10).range(500, 1500), mul: 0.1)}.play;

// Breaking it down:
{LFNoise0.kr(1).poll}.play; // watch a simple LFNoise0 in action
{LFNoise0.kr(1).range(500, 1500).poll}.play; // now with .range
{LFNoise0.kr(10).range(500, 1500).poll}.play; // now faster

27.1 Scale with the method range

The method range simply rescales the output of a UGen. Remember, LFNoise0 produces numbers between -1 and +1 (it is a bipolar UGen). Those raw numbers would not be very useful to control frequency (we need sensible numbers in the human hearing range). The .range takes the output between -1 and +1 and scales it to whatever low and high values you provide as arguments (in this case, 500 and 1500). The number 10, which is the argument to LFNoise0.kr, specifies the frequency of the UGen: how many times per second it will pick a new random number.


In short: in order to use a UGen to control some parameter of another UGen, first you need to know what range of numbers you want. Are the numbers going to be frequencies? Do you want them between, say, 100 and 1000? Or are they amplitudes? Perhaps you want amplitudes to be between 0.1 (soft) and 0.5 (half the maximum)? Or are you trying to control number of harmonics? Do you want it to be between 5 and 19? Once you know the range you need, use the method .range to make the controlling UGen do the right thing.

Exercise: write a simple line of code that plays a sine wave, the frequency of which is controlled by an LFPulse.kr (provide appropriate arguments to it). Then, use the .range method to scale the output of LFPulse into something that you want to hear.
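One possible answer (a sketch only; the frequency values are arbitrary):

// LFPulse.kr(2) alternates between 0 and 1 twice per second;
// .range maps that output to two alternating frequencies
{SinOsc.ar(freq: LFPulse.kr(2).range(440, 880), mul: 0.1)}.play;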

27.2 Scale with mul and add

Now you know how to scale the output of UGens in the server using the method .range. The same thing can be accomplished on a more fundamental level by using the arguments mul and add, which pretty much all UGens have. The code below shows the equivalence between the range and mul/add approaches, both with a bipolar UGen and a unipolar UGen.

// This:
{SinOsc.kr(1).range(100, 200).poll}.play;
// ...is the same as this:
{SinOsc.kr(1, mul: 50, add: 150).poll}.play;

// This:
{LFPulse.kr(1).range(100, 200).poll}.play;
// ...is the same as this:
{LFPulse.kr(1, mul: 100, add: 100).poll}.play;
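The arithmetic behind the equivalence: the output is signal * mul + add. For a bipolar UGen (output from -1 to +1), choose mul = (high - low) / 2 and add = (high + low) / 2; for a unipolar UGen (output from 0 to 1), choose mul = high - low and add = low. That is where the numbers above come from: SinOsc needs mul 50 and add 150 to span 100 to 200, while LFPulse needs mul 100 and add 100 for the same range.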


Figure 7 helps visualize how mul and add work in rescaling UGen outputs (a SinOsc is used as demonstration).

Figure 7: Scaling UGen ranges with mul and add

27.3 linlin and friends

For any other arbitrary scaling of ranges, you can use the handy methods linlin, linexp, explin, and expexp. The method names hint at what they do: convert a linear range to another linear range (linlin), linear to exponential (linexp), etc.


// A bunch of numbers
a = [1, 2, 3, 4, 5, 6, 7];
// Rescale to 0-127, linear to linear
a.linlin(1, 7, 0, 127).round(1);
// Rescale to 0-127, linear to exponential
a.linexp(1, 7, 0.01, 127).round(1); // don't use zero for an exponential range

28 Stopping individual synths

Here's a very common way of starting several synths and being able to stop them separately. The example is self-explanatory:

// Run one line at a time (don't stop the sound in between):
a = { Saw.ar(LFNoise2.kr(8).range(1000, 2000), mul: 0.2) }.play;
b = { Saw.ar(LFNoise2.kr(7).range(100, 1000), mul: 0.2) }.play;
c = { Saw.ar(LFNoise0.kr(15).range(2000, 3000), mul: 0.1) }.play;

// Stop synths individually:
a.free;
b.free;
c.free;

29 The set message

Just like with any function (review section 21), arguments specified at the beginning of your synth function are accessible by the user. This allows you to change synth parameters on the fly (while the synth is running). The message set is used for that purpose. Simple example:



x = {arg freq = 440, amp = 0.1; SinOsc.ar(freq, 0, amp)}.play;
x.set(\freq, 778);
x.set(\amp, 0.5);
x.set(\freq, 920, \amp, 0.2);
x.free;

It’s good practice to provide default values (like the 440 and 0.1 above), otherwise the synth won’t play until you set a proper value to the ’empty’ parameters.

30 Audio Buses

Audio buses are used for routing audio signals. They are like the channels of a mixing board. SuperCollider has 128 audio buses by default. There are also control buses (for control signals), but for now let's focus on audio buses only.∗

∗ We will take a quick look at control buses in section 41.

Hit [ctrl+M] to open up the Meter window. It shows the levels of all inputs and outputs. Figure 8 shows a screenshot of this window and its correspondence to SuperCollider's default buses.

In SuperCollider, audio buses are numbered from 0 to 127. The first eight (0-7) are by default reserved to be the output channels of your sound card. The next eight (8-15) are reserved for the inputs of your sound card. All the others (16 to 127) are free to be used in any way you want, for example, when you need to route audio signals from one UGen to another.

30.1 Out and In UGens

Now try the following line of code:

{Out.ar(1, SinOsc.ar(440, 0, 0.1))}.play; // right channel

Figure 8: Audio buses and Meter window in SC.


The Out UGen takes care of routing signals to specific buses. The first argument to Out is the target bus, that is, where you want this signal to go. In the example above, the number 1 means that we want to send the signal to bus 1, which is the right channel of your sound card. The second argument of Out.ar is the actual signal that you want to "write" into that bus. It can be a single UGen, or a combination of UGens. In the example, it is just a sine wave. You should hear it only on your right speaker (or in your right ear if using headphones). With the Meter window open and visible, go ahead and change the first argument of Out.ar: try any number between 0 and 7, and watch the meters. You will see that the signal goes wherever you tell it to.

TIP: Most likely you have a sound card that can only play two channels (left and right), so you will only hear the sine tone when you send it to bus 0 or bus 1. When you send it to other buses (2 to 7), you will still see the corresponding meter showing the signal: SC is in fact sending the sound to that bus, but unless you have an 8-channel sound card you will not be able to hear the output of buses 2-7.

One simple example of an audio bus being used for an effect is shown below.

// start the effect
f = {Out.ar(0, BPF.ar(in: In.ar(55), freq: MouseY.kr(1000, 5000), rq: 0.1))}.play;
// start the source
n = {Out.ar(55, WhiteNoise.ar(0.5))}.play;

The first line declares a synth (stored into the variable f) consisting of a filter UGen (a Band Pass Filter). A band pass filter takes any sound as input, and filters out all frequencies except the one frequency region that you want to let through. In.ar is the UGen we use to read from an audio bus; with In.ar(55) being used as the input of the BPF, any sound that we send to bus 55 will be passed to the band pass filter. Notice that this first synth does not make any sound at first: when you evaluate the first line, bus 55 is still empty. It will only make sound when we send some audio into bus 55, which happens on the second line.

The second line creates a synth and stores it into the variable n. This synth simply generates white noise, and outputs it not to the loudspeakers directly, but to audio bus 55 instead. That is precisely the bus that our filter synth is listening to, so as soon as you evaluate the second line, you should start hearing the white noise being filtered by synth f. In short, the routing looks like this:

noise synth → bus 55 → filter synth

The order of execution is important. The previous example won't work if you evaluate the source before the effect. This will be discussed in more detail in section 42, "Order of Execution."

One last thing: when you wrote synths like {SinOsc.ar(440)}.play in earlier examples, SC was actually doing {Out.ar(0, SinOsc.ar(440))}.play under the hood: it assumed you wanted to send the sound out to bus 0, so it automatically wrapped the first UGen with an Out.ar(0, ...) UGen. In fact a few more things are happening there behind the scenes, but we'll come back to this later (section 39).
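You can hear this equivalence for yourself by running these two lines one at a time (both play the same tone out of bus 0):

// What you have been writing all along...
{SinOsc.ar(440, 0, 0.1)}.play;
// ...is implicitly doing this:
{Out.ar(0, SinOsc.ar(440, 0, 0.1))}.play;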

31 Microphone Input

The example below shows how you can easily access sound input from your sound card with the SoundIn UGen.∗

∗ Since In.ar will read from any bus, and you know that your sound card inputs are by default assigned to buses 8-15, you could write In.ar(8) to get sound from your microphone. That works just fine, but SoundIn.ar is a more convenient option.


// Warning: use headphones to avoid feedback
{SoundIn.ar(0)}.play; // same as In.ar(8): takes sound from the first input bus
// Stereo version
{SoundIn.ar([0, 1])}.play; // first and second inputs
// Some reverb just for fun?
{FreeVerb.ar(SoundIn.ar([0, 1]), mix: 0.5, room: 0.9)}.play;

32 Multichannel Expansion

With your Meter window open ([ctrl+M]), watch this:

{Out.ar(0, Saw.ar(freq: [440, 570], mul: Line.kr(0, 1, 10)))}.play;

We are using a nice Line.kr UGen to ramp up the amplitude from 0 to 1 in 10 seconds. That's neat. But there is a more interesting magic going on here. Did you notice that there are 2 channels of output (left and right)? Did you hear that there is a different note on each channel? And that those two notes come from a list—[440, 570]—that is passed to Saw.ar as its freq argument?

This is called Multichannel Expansion. David Cottle jokes that "multichannel expansion is one [application of arrays] that borders on voodoo."∗ It is one of the most powerful and unique features of SuperCollider, and one that may puzzle people at first. In a nutshell: if you use an array anywhere as one of the arguments of a UGen, the entire patch is duplicated. The number of copies created is the number of items in the array. These duplicated UGens are sent out to as many adjacent buses as needed, starting from the bus specified as the first argument of Out.ar.

∗ Cottle, D. "Beginner's Tutorial." The SuperCollider Book, MIT Press, 2011, p. 14.

In the example above, we have Out.ar(0, ...). The freq of the Saw wave is an array of two items: [440, 570]. What does SC do? It "multichannel expands," creating two copies of the entire patch. The first copy is a sawtooth wave with frequency 440 Hz, sent out to bus 0 (your left channel); the second copy is a sawtooth wave with frequency 570 Hz, sent out to bus 1 (your right channel)! Go ahead and check that for yourself. Change those two frequencies to any other values you like. Listen to the results. One goes to the left channel, the other goes to the right channel. Go even further, and add a third frequency to the list (say, [440, 570, 980]). Watch the Meter window. You will see that the first three outputs are lighting up (but you will only be able to hear the third one if you have a multichannel sound card).

What's more: you can use additional arrays in other arguments of the same UGen, or in arguments of other UGens in the same synth. SuperCollider will do the housekeeping and generate synths that follow those values accordingly. For example: right now both frequencies [440, 570] are fading in from 0 to 1 in 10 seconds. But change the code to Line.kr(0, 1, [1, 15]) and you'll have the 440 Hz tone take 1 second to fade in, and the 570 Hz tone take 15 seconds to fade in. Try it.

Exercise: listen to this simulation of a "busy tone" of an old telephone. It uses multichannel expansion to create two sine oscillators, each playing a different frequency on a different channel. Make the left channel pulse 2 times per second, and the right channel pulse 3 times per second.7

a = {Out.ar(0, SinOsc.ar(freq: [800, 880], mul: LFPulse.ar(2)))}.play;
a.free;
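And here is the per-channel fade idea mentioned a moment ago, written out as a small sketch (same patch as before, only the Line.kr duration changed to an array):

// Left channel fades in over 1 second, right channel over 15 seconds
{Out.ar(0, Saw.ar(freq: [440, 570], mul: Line.kr(0, 1, [1, 15])))}.play;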


33 The Bus object

Here's an example that uses everything you just learned in the previous two sections: audio buses and multichannel expansion.

// Run this first ('turn reverb on' -- you won't hear anything at first)
r = {Out.ar(0, FreeVerb.ar(In.ar(55, 2), mix: 0.5, room: 0.9, mul: 0.4))}.play;

// Now run this second ('feed the busy tone into the reverb bus')
a = {Out.ar(55, SinOsc.ar([800, 880], mul: LFPulse.ar(2)))}.play;
a.free;

Thanks to multichannel expansion, the busy tone uses two channels. When (in synth a) we route the busy tone to bus 55, two buses are actually being used up—number 55, and the immediately adjacent bus 56. In the reverb (synth r), we indicate with In.ar(55, 2) that we want to read 2 channels starting from bus 55: so both 55 and 56 get into the reverb. The output of the reverb is in turn also expanded to two channels, so synth r sends sound out to buses 0 and 1 (left and right channels of our sound card).

Now, this choice of bus number (55) to connect a source synth to an effect synth was arbitrary: it could have been any other number between 16 and 127 (remember, buses 0-15 are reserved for sound card outputs and inputs). How inconvenient it would be if we had to keep track of bus numbers ourselves. As soon as our patches grew in complexity, imagine the nightmare: “What bus number did I choose again for reverb? Was it 59 or 95? What about the bus number for my delay? I guess it was 27? Can't recall...” and so on and so forth. SuperCollider takes care of this for you with Bus objects. We only hand-assigned the infamous bus 55 in the examples above for the sake of demonstration. In your daily SuperCollider life, you should simply use the Bus object. The Bus object does the job of choosing an available bus for you and keeping track of it. This is how you use it:


// Create the bus
~myBus = Bus.audio(s, 2);

// Turn on the reverb: read from myBus (source sound)
r = {Out.ar(0, FreeVerb.ar(In.ar(~myBus, 2), mix: 0.5, room: 0.9, mul: 0.4))}.play;

// Feed the busy tone into ~myBus
b = {Out.ar(~myBus, SinOsc.ar([800, 880], mul: LFPulse.ar(2)))}.play;

// Free both synths
r.free;
b.free;

The first argument of Bus.audio is the variable s, which stands for the server. The second argument is how many channels you need (2 in the example). Then you store that into a variable with a meaningful name (~myBus in the example, but it could be ~reverbBus, ~source, ~tangerine, or whatever makes sense to you in your patch). After that, whenever you need to refer to that bus, just use the variable you created.
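If you are ever curious which bus numbers a Bus object actually reserved for you, you can simply ask it. A minimal sketch (index, numChannels, and free are standard Bus methods):

~myBus = Bus.audio(s, 2);
~myBus.index.postln;       // first bus number the server picked (e.g., 16)
~myBus.numChannels.postln; // 2
~myBus.free;               // release the bus indices when you no longer need them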

34 Panning

Panning is the spreading of an audio signal into a stereo or multichannel sound field. Here's a mono signal bouncing between the left and right channels thanks to Pan2:∗

p = {Pan2.ar(in: PinkNoise.ar, pos: SinOsc.kr(2), level: 0.1)}.play;
p.free;

From the Pan2 Help file, you can see that the argument pos (position) expects a number between -1 (left) and +1 (right), 0 being center. That's why you can use a SinOsc directly in that argument: the sine oscillator is a bipolar UGen, so it outputs numbers between -1 and +1 by default.

∗ For multichannel panning, take a look at Pan4 and PanAz. Advanced users may want to take a look at SuperCollider plug-ins for Ambisonics.

Here's a more elaborate example. A sawtooth wave goes through a very sharp band pass filter (rq: 0.01). Notice the use of local variables to modularize different parts of the code. Analyze and try to understand as much as you can in the example below. Then answer the questions that follow.

(
x = {
    var lfn = LFNoise2.kr(1);
    var saw = Saw.ar(
        freq: 30,
        mul: LFPulse.kr(
            freq: LFNoise1.kr(1).range(1, 10),
            width: 0.1));
    var bpf = BPF.ar(in: saw, freq: lfn.range(500, 2500), rq: 0.01, mul: 20);
    Pan2.ar(in: bpf, pos: lfn);
}.play;
)
x.free;

Questions:

(a) The variable lfn is used in two different places. Why? (What is the result?)

(b) What happens if you change the mul: argument of the BPF from 20 to 10, 5, or 1? Why was such a high number as 20 used?

(c) What part of the code is controlling the rhythm?

Answers are at the end of this document.8

35 Mix and Splay

Here's a cool trick. You can use multichannel expansion to generate complex sounds, and then mix it all down to mono or stereo with Mix or Splay:

// 5 channels output (watch Meter window)
a = { SinOsc.ar([100, 300, 500, 700, 900], mul: 0.1) }.play;
a.free;

// Mix it down to mono:
b = { Mix(SinOsc.ar([100, 300, 500, 700, 900], mul: 0.1)) }.play;
b.free;

// Mix it down to stereo (spread evenly from left to right)
c = { Splay.ar(SinOsc.ar([100, 300, 500, 700, 900], mul: 0.1)) }.play;
c.free;

// Fun with Splay:
(
d = {arg fundamental = 110;
    var harmonics = [1, 2, 3, 4, 5, 6, 7, 8, 9];
    var snd = BPF.ar(
        in: Saw.ar(32, LFPulse.ar(harmonics, width: 0.1)),
        freq: harmonics * fundamental,
        rq: 0.01,
        mul: 20);
    Splay.ar(snd);
}.play;
)
d.set(\fundamental, 100); // change fundamental just for fun
d.free;

Can you see the multichannel expansion at work in that last Splay example? The only difference is that the array is first stored in a variable (harmonics) before being used in the UGens. The array harmonics has 9 items, so the synth will expand to 9 channels. Then, just before the .play, Splay takes in the array of nine channels and mixes it down to stereo, spreading the channels evenly from left to right.∗

∗ The last line before the .play could be explicitly written as Out.ar(0, Splay.ar(snd)). Remember that SuperCollider is graciously filling in the gaps and throwing in an Out.ar(0, ...) there—that's how the synth knows it should play into your left (bus 0) and right (bus 1) channels.

Mix has another nice trick: the method fill. It creates an array of synths and mixes it down to mono all at once.

// Instant cluster generator
c = { Mix.fill(16, {SinOsc.ar(rrand(100, 3000), mul: 0.01)}) }.play;
c.free;

// A note with 12 partials of decreasing amplitudes
(
n = { Mix.fill(12, {arg counter;
    var partial = counter + 1; // we want it to start from 1, not 0
    SinOsc.ar(partial * 440, mul: 1/partial.squared) * 0.1
    })
}.play;
FreqScope.new;
)
n.free;

You give two things to Mix.fill: the size of the array you want to create, and a function (in curly braces) that will be used to fill up the array. In the first example above, Mix.fill evaluates the function 16 times. Note that the function includes a variable component: the frequency of the sine oscillator can be any random number between 100 and 3000. Sixteen sine waves will be created, each with a different random frequency. They will all be mixed down to mono, and you'll hear the result on your left channel. The second example shows that the function can take a “counter” argument that keeps track of the number of iterations (just like Array.fill). Twelve sine oscillators are generated following the harmonic series, and mixed down to a single note in mono.
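To make the parallel with Array.fill concrete, here is the same 12-partial note written the long way (a sketch; the one-step Mix.fill above produces the same result):

(
n = {
    // Build the array of 12 sine oscillators first...
    var sines = Array.fill(12, {arg counter;
        var partial = counter + 1;
        SinOsc.ar(partial * 440, mul: 1/partial.squared)
    });
    // ...then mix the 12-item array down to mono
    Mix(sines) * 0.1;
}.play;
)
n.free;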

36 Playing an audio file

First, you have to load the sound file into a buffer. The second argument to Buffer.read is the path of your sound file between double quotes. You will need to change that accordingly so that it points to a WAV or AIFF file on your computer. After buffers are loaded, simply use the PlayBuf UGen to play them back in various ways.

TIP: A quick way to get the correct path of a sound file saved on your computer is to drag the file onto a blank SuperCollider document. SC will give you the full path automatically, already in double quotes!

// Load files into buffers:
~buf1 = Buffer.read(s, "/home/Music/wheels-mono.wav"); // one sound file
~buf2 = Buffer.read(s, "/home/Music/mussorgsky.wav"); // another sound file

// Playback:
{PlayBuf.ar(1, ~buf1)}.play; // number of channels and buffer
{PlayBuf.ar(1, ~buf2)}.play;

// Get some info about the files:
[~buf1.bufnum, ~buf1.numChannels, ~buf1.path, ~buf1.numFrames];
[~buf2.bufnum, ~buf2.numChannels, ~buf2.path, ~buf2.numFrames];

// Changing playback speed with 'rate'
{PlayBuf.ar(numChannels: 1, bufnum: ~buf1, rate: 2, loop: 1)}.play;
{PlayBuf.ar(1, ~buf1, 0.5, loop: 1)}.play; // play at half the speed
{PlayBuf.ar(1, ~buf1, Line.kr(0.5, 2, 10), loop: 1)}.play; // speeding up
{PlayBuf.ar(1, ~buf1, MouseY.kr(0.5, 3), loop: 1)}.play; // mouse control

// Changing direction (reverse)
{PlayBuf.ar(1, ~buf2, -1, loop: 1)}.play; // reverse sound
{PlayBuf.ar(1, ~buf2, -0.5, loop: 1)}.play; // play at half the speed AND reversed
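One detail worth flagging: the first argument of PlayBuf.ar must match the number of channels in the buffer. A brief sketch, assuming a stereo file at a hypothetical path (adjust it to a real file on your computer):

~buf3 = Buffer.read(s, "/home/Music/some-stereo-file.wav"); // hypothetical stereo file
{PlayBuf.ar(2, ~buf3)}.play; // 2 channels, because the buffer is stereo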

37 Synth Nodes

In the previous PlayBuf examples, you had to hit [ctrl+.] after each line to stop the sound. In other examples, you assigned the synth to a variable (like x = {WhiteNoise.ar}.play) so that you could stop it directly with x.free. Every time you create a synth in SuperCollider, you know that it runs in the server, our “sound engine.” Each running synth in the server is represented by a node. We can take a peek at this tree of nodes with the command s.plotTree. Try it. A window named NodeTree will open.

// open the GUI
s.plotTree;

// run these one by one (don't stop the sound) and watch the Node Tree:
w = { SinOsc.ar(60.midicps, 0, 0.1) }.play;
x = { SinOsc.ar(64.midicps, 0, 0.1) }.play;
y = { SinOsc.ar(67.midicps, 0, 0.1) }.play;
z = { SinOsc.ar(71.midicps, 0, 0.1) }.play;

w.free;
x.free;
y.free;
z.free;

Every rectangle that you see in the Node Tree is a synth node. Each synth gets a temporary name (something like temp_101, temp_102, etc.) and stays there for as long as it is running. Try playing the four sines again, and hit [ctrl+.] (watch the Node Tree window). The shortcut [ctrl+.] ruthlessly stops at once all nodes that are running in the server. With the .free method, on the other hand, you can be more subtle and free up specific nodes one at a time.

One important thing to realize is that synths may stay running in the server even if they are generating only silence. Here's an example. The amplitude of this WhiteNoise UGen will go from 0.2 to 0 in two seconds. After that, we don't hear anything. But you will see that the synth node is still there, and won't go away until you free it.

// Evaluate and watch the Node Tree window for a few seconds
x = {WhiteNoise.ar(Line.kr(0.2, 0, 2))}.play;
x.free;
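If you prefer a text-based view, the server can also post a snapshot of the same information to the Post window (a quick aside; see the Server Help file):

s.queryAllNodes; // prints the current tree of groups and synth nodes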

37.1 The glorious doneAction: 2

Luckily there is a way to make synths smarter in that regard: for example, wouldn't it be great if we could ask Line.kr to notify the synth when it has finished its job (the ramp from 0.2 to 0), upon which the synth would automatically free itself up? Enter the argument doneAction: 2 to solve all our problems. Play the examples below and compare how they behave with and without doneAction: 2. Keep watching the Node Tree as you run the lines.

// without doneAction: 2
{WhiteNoise.ar(Line.kr(0.2, 0, 2))}.play;
{PlayBuf.ar(1, ~buf1)}.play; // PS: this assumes you still have your sound file loaded into ~buf1 from the previous section

// with doneAction: 2
{WhiteNoise.ar(Line.kr(0.2, 0, 2, doneAction: 2))}.play;
{PlayBuf.ar(1, ~buf1, doneAction: 2)}.play;

The synths with doneAction: 2 will free themselves up automatically as soon as their job is done (that is, as soon as the Line.kr ramp is over in the first example, and as soon as the PlayBuf.ar has finished playing the sound file in the second example). This knowledge will be very useful in the next section: Envelopes.

38 Envelopes

Up to now most of our examples have been continuous sounds. It is about time to learn how to shape the amplitude envelope of a sound. A good example to start with is a percussive envelope. Imagine a cymbal crash. The time it takes for the sound to go from silence to maximum amplitude is very small—a few milliseconds, perhaps. This is called the attack time. The time it takes for the sound of the cymbal to decrease from maximum amplitude back to silence (zero) is a little longer, maybe a few seconds. This is called the release time.

Think of an amplitude envelope simply as a number that changes over time to be used as the multiplier (mul) of any sound-producing UGen. These numbers should be between 0 (silence) and 1 (full amp), because that's how SuperCollider understands amplitude. You may have realized by now that the last example already included an amplitude envelope: in {WhiteNoise.ar(Line.kr(0.2, 0, 2, doneAction: 2))}.play, we make the amplitude of the white noise go from 0.2 to 0 in 2 seconds. A Line.kr, however, is not a very flexible type of envelope. Env is the object you will be using all the time to define all sorts of envelopes. Env has many useful methods; we can only look at a few here. Feel free to explore the Env Help file to learn more.

38.1 Env.perc

Env.perc is a handy way to get a percussive envelope. It takes in four arguments: attackTime, releaseTime, level, and curve. Let's take a look at some typical shapes, outside of any synth.

Env.perc.plot; // using all default args
Env.perc(0.5).plot; // attackTime: 0.5
Env.perc(attackTime: 0.3, releaseTime: 2, level: 0.4).plot;
Env.perc(0.3, 2, 0.4, 0).plot; // same as above, but curve: 0 means straight lines

Now we can simply hook it up in a synth like this:

{PinkNoise.ar(Env.perc.kr(doneAction: 2))}.play; // default Env.perc args
{PinkNoise.ar(Env.perc(0.5).kr(doneAction: 2))}.play;
{PinkNoise.ar(Env.perc(0.3, 2, 0.4).kr(2))}.play;
{PinkNoise.ar(Env.perc(0.3, 2, 0.4, 0).kr(2))}.play;

All you have to do is add the bit .kr(doneAction: 2) right after Env.perc, and you are good to go. In fact, you can even remove the explicit declaration of doneAction in this case and simply write .kr(2). The .kr is telling SC to “perform” this envelope at control rate (like other control rate signals we have seen before).
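Putting the pieces of this section together, here is one more variation to try (a small sketch: a short sine “blip” shaped by a percussive envelope; the frequency and times are arbitrary choices):

// 10 ms attack, 0.3 s release; the synth frees itself when the envelope ends
{SinOsc.ar(880) * Env.perc(0.01, 0.3).kr(2) * 0.2}.play;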

38.2 Env.triangle

Env.triangle takes only two arguments: duration and level. Examples:

// See it:
Env.triangle.plot;

// Hear it:
{SinOsc.ar([440, 442], mul: Env.triangle.kr(2))}.play;

// By the way, an envelope can be a multiplier anywhere in your code
{SinOsc.ar([440, 442]) * Env.triangle.kr(2)}.play;

38.3 Env.linen

Env.linen describes a line envelope with attack, sustain portion, and release. You can also specify level and curve type. Example:

// See it:
Env.linen.plot;

// Hear it:
{SinOsc.ar([300, 350], mul: Env.linen(0.01, 2, 1, 0.2).kr(2))}.play;

38.4 Env.pairs

Need more flexibility? With Env.pairs you can have envelopes of any shape and duration you want. Env.pairs takes two arguments: an array of [time, level] pairs, and a type of curve (see the Env Help file for all available curve types).

(
{
    var env = Env.pairs([[0, 0], [0.4, 1], [1, 0.2], [1.1, 0.5], [2, 0]], \lin);
    env.plot;
    SinOsc.ar([440, 442], mul: env.kr(2));
}.play;
)

Read the array of pairs like this:

At time 0, be at level 0;
At time 0.4, be at level 1;
At time 1, be at level 0.2;
At time 1.1, be at level 0.5;
At time 2, be at level 0.

38.4.1 Envelopes—not just for amplitude

Nothing is stopping you from using these same shapes to control something other than amplitude. You just need to scale them to the desired range of numbers. For example, you can create an envelope to control changes of frequency over time:

(
{
    var freqEnv = Env.pairs([[0, 100], [0.4, 1000], [0.9, 400], [1.1, 555], [2, 440]], \lin);
    SinOsc.ar(freqEnv.kr, mul: 0.2);
}.play;
)

Envelopes are a powerful way to control any synth parameters that need to change over time.
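In the same spirit, here is a sketch of one more possible use (an illustration of the idea, not from the original examples): an envelope driving the pan position of a Pan2 instead of amplitude or frequency.

(
{
    // Sweep from left (-1) to right (+1) and back over 4 seconds
    var panEnv = Env.pairs([[0, -1], [2, 1], [4, -1]], \lin);
    Pan2.ar(PinkNoise.ar(0.2), panEnv.kr(2));
}.play;
)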

38.5 ADSR Envelope

All envelopes seen up to now have one thing in common: they have a fixed, pre-defined duration. There are situations, however, when this type of envelope is not adequate. For example, imagine you are playing on a MIDI keyboard. The attack of the note is triggered when you press a key. The release is when you take your finger off the key. But the amount of time that you keep the finger down is not known in advance. What we need in this case is a so-called “sustained envelope.” In other words, after the attack portion, the envelope must hold the note for an indefinite amount of time, and only trigger the release portion after some kind of cue or message—i.e., the moment you “release the key.” An ASR (Attack, Sustain, Release) envelope fits the bill. A more popular variation of it is the ADSR envelope (Attack, Decay, Sustain, Release). Let's look at both.

// ASR
// Play note ('press key')
// attackTime: 0.5 seconds, sustainLevel: 0.8, releaseTime: 3 seconds
x = {arg gate = 1, freq = 440;
    SinOsc.ar(freq: freq, mul: Env.asr(0.5, 0.8, 3).kr(doneAction: 2, gate: gate))
}.play;

// Stop note ('finger off the key' - activate release stage)
x.set(\gate, 0); // alternatively, x.release

// ADSR (attack, decay, sustain, release)
// Play note:
(
d = {arg gate = 1;
    var snd, env;
    env = Env.adsr(0.01, 0.4, 0.7, 2);
    snd = Splay.ar(BPF.ar(Saw.ar((32.1, 32.2..33)), LFNoise2.kr(12).range(100, 1000), 0.05, 10));
    Out.ar(0, snd * env.kr(doneAction: 2, gate: gate));
}.play;
)

// Stop note:
d.release; // this is equivalent to d.set(\gate, 0);

Key concepts:

• Attack: the time (in seconds) that it takes to go from zero (silence) to peak amplitude.

• Decay: the time (in seconds) that it takes to go down from peak amplitude to sustain amplitude.

• Sustain: the amplitude (between 0 and 1) at which to hold the note (important: this has nothing to do with time).

• Release: the time (in seconds) that it takes to go from sustain level to zero (silence).

Since sustained envelopes do not have a total duration known in advance, they need a notification as to when to start (trigger the attack) and when to stop (trigger the release). This notification is called a gate. The gate is what tells the envelope to “open” (1) and “close” (0), thus starting and stopping the note. For an ASR or ADSR envelope to work in your synth, you must declare a gate argument. Normally the default is gate = 1 because you want the synth to start playing right away. When you want the synth to stop, simply send it either the .release or the .set(\gate, 0) message: the release portion of the envelope will then be triggered. For example, if your release time is 3, the note will take three seconds to fade away from the moment you send the message .set(\gate, 0).

38.6 EnvGen

For the record, you should know that the construction you learned in this section to generate envelopes is a shortcut, as shown in the code below.

// This:
{ SinOsc.ar * Env.perc.kr(doneAction: 2) }.play;

// ... is a shortcut for this:
{ SinOsc.ar * EnvGen.kr(Env.perc, doneAction: 2) }.play;

EnvGen is the UGen that actually plays back breakpoint envelopes defined by Env. For all practical purposes, you can continue to use the shortcut. However, it is useful to know that these notations are equivalent, as you will often see EnvGen being used in the Help files or other online examples.

39 Synth Definitions

So far we have been seamlessly defining synths and playing them right away. In addition, the .set message gave us some flexibility to alter synth controls in real time. However, there are situations when you may want to just define your synths first (without playing them immediately), and only play them later. In essence, this means we have to separate the moment of writing down the recipe (the synth definition) from the moment of baking the cake (creating the sound).

39.1 SynthDef and Synth

SynthDef is what we use to “write the recipe” for a synth. Then you can play it with Synth. Here is a simple example.

// Synth definition with SynthDef object
SynthDef("mySine1", {Out.ar(0, SinOsc.ar(770, 0, 0.1))}).add;

// Play a note with Synth object
x = Synth("mySine1");
x.free;

// A slightly more flexible example using arguments
// and a self-terminating envelope (doneAction: 2)
SynthDef("mySine2", {arg freq = 440, amp = 0.1;
    var env = Env.perc(level: amp).kr(2);
    var snd = SinOsc.ar(freq, 0, env);
    Out.ar(0, snd);
}).add;

Synth("mySine2"); // using default values
Synth("mySine2", [\freq, 770, \amp, 0.2]);
Synth("mySine2", [\freq, 415, \amp, 0.1]);
Synth("mySine2", [\freq, 346, \amp, 0.3]);
Synth("mySine2", [\freq, rrand(440, 880)]);

The first argument to SynthDef is a user-defined name for the synth. The second argument is a function where you specify the UGen graph (that's what your combination of UGens is called). Note that you have to explicitly use Out.ar to indicate the bus to which you want to send the signal. Finally, SynthDef receives the message .add at the end, which means you are adding it to the collection of synths that SC knows about. This will be valid until you quit SuperCollider.

After you have created one or more synth definitions with SynthDef, you can play them with Synth: the first argument is the name of the synth definition you want to use, and the second (optional) argument is an array with any parameters you may want to specify (freq, amp, etc.).

39.2 Example

Here's a longer example. After the SynthDef is added, we use an array trick to create a 6-note chord with random pitches and amplitudes. Each synth is stored in one of the slots of the array, so we can release them independently.

// Create SynthDef
(
SynthDef("wow", {arg freq = 60, amp = 0.1, gate = 1, wowrelease = 3;
    var chorus, source, filtermod, env, snd;
    chorus = freq.lag(2) * LFNoise2.kr([0.4, 0.5, 0.7, 1, 2, 5, 10]).range(1, 1.02);
    source = LFSaw.ar(chorus) * 0.5;
    filtermod = SinOsc.kr(1/16).range(1, 10);
    env = Env.asr(1, amp, wowrelease).kr(2, gate);
    snd = LPF.ar(in: source, freq: freq * filtermod, mul: env);
    Out.ar(0, Splay.ar(snd))
}).add;
)

// Watch the Node Tree
s.plotTree;

// Create a 6-note chord
a = Array.fill(6, {Synth("wow", [\freq, rrand(40, 70).midicps, \amp, rrand(0.1, 0.5)])}); // all in a single line

// Release notes one by one
a[0].set(\gate, 0);
a[1].set(\gate, 0);
a[2].set(\gate, 0);
a[3].set(\gate, 0);
a[4].set(\gate, 0);
a[5].set(\gate, 0);

// ADVANCED: run the 6-note chord again, then evaluate this line.
// Can you figure out what is happening?
SystemClock.sched(0, {a[5.rand].set(\freq, rrand(40, 70).midicps); rrand(3, 10)});

To help you understand the SynthDef above:

• The resulting sound is the sum of seven closely-tuned sawtooth oscillators going through a low pass filter.

• These seven oscillators are created through multichannel expansion.

• What is the variable chorus? It is the frequency multiplied by an LFNoise2.kr. The multichannel expansion starts here, because an array with 7 items is given as an argument to LFNoise2. The result is that seven copies of LFNoise2 are created, each one running at a different speed taken from the list [0.4, 0.5, 0.7, 1, 2, 5, 10]. Their outputs are constrained to the range 1.0 to 1.02.

• The source sound LFSaw.ar takes the variable chorus as its frequency. In a concrete example: for a freq value of 60 Hz, the variable chorus would result in an expression like 60 * [1.01, 1.009, 1.0, 1.02, 1.015, 1.004, 1.019], in which the numbers inside the list would be constantly changing up and down according to the speeds of each LFNoise2. The final result is a list of seven frequencies always sliding between 60 and 61.2 (60 * 1.02). This is called a chorus effect, thus the variable name.

• When the variable chorus is used as the freq of LFSaw.ar, multichannel expansion happens: we now have seven sawtooth waves with slightly different frequencies.

• The variable filtermod is just a sine oscillator moving very slowly (1 cycle over 16 seconds), with its output range scaled to 1-10. This will be used to modulate the cutoff frequency of the low pass filter.

• The variable snd holds the low pass filter (LPF), which takes source as input, and filters out all frequencies above its cutoff frequency. This cutoff is not a fixed value: it is the expression freq * filtermod. So in the example, assuming freq = 60, this becomes a number between 60 and 600. Remember that filtermod is a number oscillating between 1 and 10, so the multiplication would be 60 * (1 to 10).

• LPF also multichannel expands to seven copies. The amplitude envelope env is also applied right there.

• Finally, Splay takes this array of seven channels and mixes it down to stereo.
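To hear the chorus idea described above in isolation, here is a minimal sketch distilled from the SynthDef (an illustration, not part of the original example):

(
x = {
    var freq = 60;
    // seven copies of LFNoise2, each drifting between 1.0 and 1.02
    var chorus = freq * LFNoise2.kr([0.4, 0.5, 0.7, 1, 2, 5, 10]).range(1, 1.02);
    // seven slightly detuned saws, mixed down to stereo
    Splay.ar(LFSaw.ar(chorus)) * 0.1;
}.play;
)
x.free;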

39.3 Under the hood

This two-step process of first creating the SynthDef (with a unique name) and then calling a Synth is what SC does all the time when you write simple statements like {SinOsc.ar}.play. SuperCollider unpacks that into (a) the creation of a temporary SynthDef, and (b) the immediate playback of it (thus the names temp_01, temp_02 that you see in the Post window). All of this happens behind the scenes, for your convenience.

// When you do this:
{SinOsc.ar(440)}.play;

// What SC is doing is this:
{Out.ar(0, SinOsc.ar(440))}.play;

// Which in turn is really this:
SynthDef("tempName", {Out.ar(0, SinOsc.ar(440))}).play;

// And all of them are shortcuts to this two-step operation:
SynthDef("tempName", {Out.ar(0, SinOsc.ar(440))}).add; // create a synth definition
Synth("tempName"); // play it

40 Pbind can play your SynthDef

One of the beauties of creating your synths as SynthDefs is that you can use Pbind to play them. Assuming the "wow" SynthDef is still loaded in memory (it should be, unless you quit and reopened SC after the last example), try the Pbinds below:


(
Pbind(
    \instrument, "wow",
    \degree, Pwhite(-7, 7),
    \dur, Prand([0.125, 0.25], inf),
    \amp, Pwhite(0.5, 1),
    \wowrelease, 1
).play;
)

(
Pbind(
    \instrument, "wow",
    \scale, Pstutter(8, Pseq([
        Scale.lydian,
        Scale.major,
        Scale.mixolydian,
        Scale.minor,
        Scale.phrygian], inf)),
    \degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7], inf),
    \dur, 0.2,
    \amp, Pwhite(0.5, 1),
    \wowrelease, 4,
    \legato, 0.1
).play;
)

When using Pbind to play one of your custom SynthDefs, just keep in mind the following points:

• Use the Pbind key \instrument to declare the name of your SynthDef.

• All the arguments of your SynthDef are accessible and controllable from inside Pbind: simply use them as Pbind keys. For example, notice the argument called \wowrelease used above. It is not one of the default keys understood by Pbind—rather, it is unique to the synth definition wow (the silly name was chosen on purpose).

• In order to use all of the pitch conversion facilities of Pbind (the keys \degree, \note, and \midinote), make sure your SynthDef has an argument input for freq (it has to be spelled exactly like that). Pbind will do the math for you.

• If using a sustained envelope such as Env.adsr, make sure your synth has a default argument gate = 1 (gate has to be spelled exactly like that, because Pbind uses it behind the scenes to stop notes at the right times).

• If not using a sustained envelope, make sure your SynthDef includes a doneAction: 2 in an appropriate UGen, in order to automatically free synth nodes in the server.

Exercise: write one or more Pbinds to play the "pluck" SynthDef provided below. For the mutedString argument, try values between 0.1 and 0.9. Have one of your Pbinds play a slow sequence of chords. Try arpeggiating the chords with \strum.

(
SynthDef("pluck", {arg amp = 0.1, freq = 440, decay = 5, mutedString = 0.1;
    var env, snd;
    env = Env.linen(0, decay, 0).kr(doneAction: 2);
    snd = Pluck.ar(
        in: WhiteNoise.ar(amp),
        trig: Impulse.kr(0),
        maxdelaytime: 0.1,
        delaytime: freq.reciprocal,
        decaytime: decay,
        coef: mutedString);
    Out.ar(0, [snd, snd]);
}).add;
)
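If you get stuck, here is one possible starting point for the chord part of the exercise (a sketch, not the only valid answer; the degree and duration choices are arbitrary):

(
Pbind(
    \instrument, "pluck",
    \degree, Pseq([[0, 2, 4], [-1, 1, 4], [0, 3, 5]], inf), // arrays make chords
    \strum, 0.1, // arpeggiate each chord slightly
    \dur, 2,
    \mutedString, Pwhite(0.1, 0.9)
).play;
)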

41 Control Buses

Earlier in this tutorial we talked about Audio Buses (section 30) and the Bus object (section 33). We chose to leave aside the topic of Control Buses at that time in order to focus on the concept of audio routing. Control Buses in SuperCollider are for routing control signals, not audio. Except for this difference, there is no other practical or conceptual distinction between audio and control buses. You create and manage a control bus the same way you do with audio buses, simply using .kr instead of .ar. SuperCollider has 4096 control buses by default.

The first part of the example below uses an arbitrary bus number just for the sake of demonstration. The second part uses the Bus object, which is the recommended way of creating buses.

// Write a control signal into control bus 55
{Out.kr(55, LFNoise0.kr(1))}.play;

// Read a control signal from bus 55
{In.kr(55).poll}.play;

// Using the Bus object
~myControlBus = Bus.control(s, 1);
{Out.kr(~myControlBus, LFNoise0.kr(5).range(440, 880))}.play;
{SinOsc.ar(freq: In.kr(~myControlBus))}.play;

The next example shows a single control signal being used to modulate two different synths at the same time. In the Blip synth, the control signal is rescaled to control the number of harmonics between 1 and 10. In the second synth, the same control signal is rescaled to modulate the frequency of the Pulse oscillator.

// Create the control bus
~myControl = Bus.control(s, 1);

// Feed the control signal into the bus
c = {Out.kr(~myControl, Pulse.kr(freq: MouseX.kr(1, 10), mul: MouseY.kr(0, 1)))}.play;

// Play the sounds being controlled
// (move the mouse to hear changes)
(
{
    Blip.ar(
        freq: LFNoise0.kr([1/2, 1/3]).range(50, 60),
        numharm: In.kr(~myControl).range(1, 10),
        mul: LFTri.kr([1/4, 1/6]).range(0, 0.1))
}.play;

{
    Splay.ar(
        Pulse.ar(
            freq: LFNoise0.kr([1.4, 1, 1/2, 1/3]).range(100, 1000)
                * In.kr(~myControl).range(0.9, 1.1),
            mul: SinOsc.ar([1/3, 1/2, 1/4, 1/8]).range(0, 0.03))
    )
}.play;
)

// Turn off control signal to compare
c.free;


41.1 asMap

In the next example, the method asMap is used to directly map a control bus to a running synth node. This way you don't even need In.kr in the synth definition.

// Create a SynthDef
SynthDef("simple", {arg freq = 440; Out.ar(0, SinOsc.ar(freq, mul: 0.2))}).add;

// Create control buses
~oneBus = Bus.control(s, 1);
~anotherBus = Bus.control(s, 1);

// Start controls
{Out.kr(~oneBus, LFSaw.kr(1).range(100, 1000))}.play;
{Out.kr(~anotherBus, LFSaw.kr(2, mul: -1).range(500, 2000))}.play;

// Start a note
x = Synth("simple", [\freq, 800]);
x.set(\freq, ~oneBus.asMap);
x.set(\freq, ~anotherBus.asMap);
x.free;
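One detail the example does not show: the mapping is undone the moment you set the argument to a plain number again. A brief sketch continuing from the code above (assuming the "simple" SynthDef and ~oneBus still exist):

x = Synth("simple", [\freq, 800]);
x.set(\freq, ~oneBus.asMap); // freq now follows the control bus
x.set(\freq, 440);           // back to a fixed value; the mapping is removed
x.free;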

42 Order of Execution

When discussing Audio Buses in section 30 we hinted at the importance of order of execution. The code below is an expanded version of the filtered noise example from that section. The discussion that follows will explain the basic concept of order of execution, demonstrating why it is important.

// Create audio buses
~fxBus = Bus.audio(s, 1);
~masterBus = Bus.audio(s, 1);

// Create SynthDefs
(
SynthDef("noise", {Out.ar(~fxBus, WhiteNoise.ar(0.5))}).add;
SynthDef("filter", {Out.ar(~masterBus, BPF.ar(in: In.ar(~fxBus), freq: MouseY.kr(1000, 5000), rq: 0.1))}).add;
SynthDef("masterOut", {arg amp = 1; Out.ar(0, In.ar(~masterBus) * Lag.kr(amp, 1))}).add;
)

// Open Node Tree window:
s.plotTree;

// Play synths (watch Node Tree)
m = Synth("masterOut");
f = Synth("filter");
n = Synth("noise");

// Master volume
m.set(\amp, 0.1);

First, two audio buses are assigned to the variables ~fxBus and ~masterBus. Second, three SynthDefs are created:

• "noise" is a noise source that sends white noise to an effects bus;

• "filter" is a band pass filter which takes its input from the effects bus, and sends the processed sound out to the master bus;

• "masterOut" takes in the signal from the master bus and applies a simple volume control to it, sending the final sound with adjusted volume to the loudspeakers.

Watch the Node Tree as you run the synths in order. Synth nodes in the Node Tree window run from top to bottom. The most recent synths get added to the top by default.

Figure 9: Synth nodes in the Node Tree window

In figure 9, you can see that "noise" is on top, "filter" comes second, and "masterOut" comes last. This is the right order we want: reading from top to bottom, the noise source flows into the filter, and the result of the filter flows into the master bus. If you now try running the example again, but evaluating the lines m, f, and n in reverse order, you will hear nothing, because the signals are being calculated in the wrong order.

Evaluating the right lines in the right order is fine, but it might get tricky as your code becomes more complex. In order to make this job easier, SuperCollider allows you to explicitly define where to place synths in the Node Tree. For this we use the target and addAction arguments.

n = Synth("noise", addAction: 'addToHead');
m = Synth("masterOut", addAction: 'addToTail');
f = Synth("filter", target: n, addAction: 'addAfter');

Now, no matter in what order you execute the lines above, you can be sure that the nodes will fall in the right places. The "noise" synth is explicitly told to be added to the head of the Node Tree; "masterOut" is added to the tail; and "filter" is explicitly added right after target n (the noise synth).

42.1 Groups

When you start to have lots of synths—some of them for source sounds, others for effects, or whatever you need—it may be a good idea to organize them into groups. Here's a basic example:

// Keep watching everything in the Node Tree
s.plotTree;

// Create some buses
~reverbBus = Bus.audio(s, 2);
~masterBus = Bus.audio(s, 2);

// Define groups
(
~sources = Group.new;
~effects = Group.new(~sources, \addAfter);
~master = Group.new(~effects, \addAfter);
)

// Run all synths at once
(
// One source sound
{ Out.ar(~reverbBus, SinOsc.ar([800, 890]) * LFPulse.ar(2) * 0.1) }.play(target: ~sources);

// Another source sound
{ Out.ar(~reverbBus, WhiteNoise.ar(LFPulse.ar(2, 1/2, width: 0.05) * 0.1)) }.play(target: ~sources);

// This is some reverb
{ Out.ar(~masterBus, FreeVerb.ar(In.ar(~reverbBus, 2), mix: 0.5, room: 0.9)) }.play(target: ~effects);

// Some silly master volume control with mouse
{ Out.ar(0, In.ar(~masterBus, 2) * MouseY.kr(0, 1)) }.play(target: ~master);
)

For more information about order of execution, look up the Help files “Synth,” “Order of Execution,” and “Group.”


Part V

WHAT'S NEXT?

If you have read and more or less understood everything in this tutorial up to now, you are no longer a beginner SuperCollider user! We covered a lot of ground, and from here on you have all the basic tools needed to start developing your personal projects and continue learning on your own. The following sections provide a brief introduction to a few popular intermediate-level topics. The very last section presents a concise list of other tutorials and learning resources.

43 MIDI

An extended presentation of MIDI concepts and tricks is beyond the scope of this tutorial. The examples below assume some familiarity with MIDI devices, and are provided just to get you started.

// Quick way to connect all available devices to SC
MIDIIn.connectAll;

// Quick way to see all incoming MIDI messages
MIDIFunc.trace(true);
MIDIFunc.trace(false); // stop it

// Quick way to inspect all CC inputs
MIDIdef.cc(\someCC, {arg a, b; [a, b].postln});

// Get input only from cc 7, channel 0
MIDIdef.cc(\someSpecificControl, {arg a, b; [a, b].postln}, ccNum: 7, chan: 0);



// A SynthDef for quick tests
SynthDef("quick", {arg freq, amp; Out.ar(0, SinOsc.ar(freq) * Env.perc(level: amp).kr(2))}).add;

// Play from a keyboard or drum pad
(
MIDIdef.noteOn(\someKeyboard, { arg vel, note;
    Synth("quick", [\freq, note.midicps, \amp, vel.linlin(0, 127, 0, 1)]);
});
)

// Create a pattern and play that from the keyboard
(
a = Pbind(
    \instrument, "quick",
    \degree, Pwhite(0, 10, 5),
    \amp, Pwhite(0.05, 0.2),
    \dur, 0.1
);
)

// test
a.play;

// Trigger pattern from pad or keyboard
MIDIdef.noteOn(\quneo, {arg vel, note; a.play});

A frequently asked question is how to manage note on and note off messages for sustained notes. In other words, when using an ADSR envelope, you want each note to be sustained for as long as a key is pressed. The release stage comes only when the finger comes off the corresponding key (review the section on ADSR envelopes if needed).

In order to do that, SuperCollider simply needs to keep track of which synth node corresponds to each key. We can use an Array for that purpose, as shown in the example below.

// A SynthDef with ADSR envelope
SynthDef("quick2", {arg freq = 440, amp = 0.1, gate = 1;
    var snd, env;
    env = Env.adsr(0.01, 0.1, 0.3, 2, amp).kr(2, gate);
    snd = Saw.ar([freq, freq*1.5], env);
    Out.ar(0, snd)
}).add;

// Play it with a MIDI keyboard
(
var noteArray = Array.newClear(128); // array has one slot per possible MIDI note

MIDIdef.noteOn(\myKeyDown, {arg vel, note;
    noteArray[note] = Synth("quick2", [\freq, note.midicps, \amp, vel.linlin(0, 127, 0, 1)]);
    ["NOTE ON", note].postln;
});

MIDIdef.noteOff(\myKeyUp, {arg vel, note;
    noteArray[note].set(\gate, 0);
    ["NOTE OFF", note].postln;
});
)

// PS: Make sure SC MIDI connections are made (MIDIIn.connectAll)

To help you understand the code above:

• The SynthDef "quick2" uses an ADSR envelope. The gate argument is the one responsible for turning notes on and off.

• An Array called noteArray is created to keep track of notes being played. The indices of the array are meant to correspond to the MIDI note numbers being played.

• Every time a key is pressed on the keyboard, a Synth starts playing (a synth node is created in the server), and the reference to that synth node is stored in a unique slot in the array; the array index is simply the MIDI note number itself.

• Whenever a key is released, the message .set(\gate, 0) is sent to the appropriate synth node, retrieved from the array by note number.

In this short MIDI demo we only discussed getting MIDI into SuperCollider. To get MIDI messages out of SuperCollider, take a look at the MIDIOut Help file.
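As a teaser for that Help file, here is a minimal MIDIOut sketch (hedged: device indices vary by platform, and your system must expose at least one MIDI destination):

MIDIClient.init;      // initialize MIDI support and list available devices
m = MIDIOut(0);       // connect to the first MIDI destination found
m.noteOn(0, 60, 100); // channel 0, middle C, velocity 100
m.noteOff(0, 60, 0);  // matching note off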

44 OSC

OSC (Open Sound Control) is a great way to communicate any kind of message between different applications or different computers over a network. In many cases, it is a much more flexible alternative to MIDI messages. We don't have space to explain it in more detail here, but the example below should serve as a good starting point. The goal of the demo is to send OSC messages from a smartphone to your computer, or from computer to computer.

On the receiver computer, evaluate this simple snippet of code:

(
OSCdef(
    key: \whatever,
    func: {arg ...args; args.postln},
    path: '/stuff')
)

Note: hitting [ctrl+.] will interrupt the OSCdef and you won’t receive any more messages.
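If you would rather have the OSCdef survive [ctrl+.], you can mark it as permanent (a small sketch; see the OSCdef Help file for details):

OSCdef(\whatever, {arg ...args; args.postln}, '/stuff').permanent_(true);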

44.1 Sending OSC from another computer

This assumes that both computers are running SuperCollider and connected to a network. Find the IP address of the receiver computer, and evaluate the following lines on the sender computer:

// Use this on the machine sending messages
~destination = NetAddr("127.0.0.1", 57120); // use the correct IP address of the destination computer
~destination.sendMsg("/stuff", "heelloooo");

44.2 Sending OSC from a smartphone

• Install any free OSC app on the phone (for example, gyrosc);

• Enter the IP address of the receiver computer into the OSC app (as 'target');

• Enter SuperCollider's receiving port into the OSC app (usually 57120);

• Check the exact message path that the app uses to send OSC to, and change your OSCdef accordingly;

• Make sure your phone is connected to the network.

As long as your phone is sending messages to the proper path, you should see them arriving on the computer.

45 Quarks and plug-ins

You can extend the functionality of SuperCollider by adding classes and UGens created by other users. Quarks are packages of SuperCollider classes, extending what you can do in the SuperCollider language. UGen plug-ins are extensions for the SuperCollider audio synthesis server. Please visit http://supercollider.sourceforge.net/ to get up-to-date information on how to add plug-ins and quarks to your SuperCollider installation. The “Using Quarks” Help file is also a good starting point: http://doc.sccode.org/Guides/UsingQuarks.html. From any SuperCollider document, you can evaluate Quarks.gui to see a list of all available quarks (it opens in a new window).

46 Extra Resources

This is the end of this introduction to SuperCollider. A few extra learning resources are listed below. Enjoy!

• An excellent series of YouTube tutorials by Eli Fieldsteel: http://www.youtube.com/playlist?list=PLPYzvS8A_rTaNDweXe6PX4CXSGq4iEWYC

• The standard SC get-started tutorial by Scott Wilson and James Harkins, available online and in the built-in Help files: http://doc.sccode.org/Tutorials/Getting-Started/00-Getting-Started-With-SC.html

• The official SuperCollider mailing list is the best way to get friendly help from a large pool of users. Beginners are very welcome to ask questions in this list. You can sign up here: http://www.birmingham.ac.uk/facilities/BEAST/research/supercollider/mailinglist.aspx

• Find a SuperCollider meet-up group in your city. The official sc-users mailing list is the best way to find out if there is one where you live. If there is no meet-up group in your area, start one!

• Lots of interesting snippets of code can be found here: http://sccode.org/. Sign up for an account and share your code too.

• Have you heard of SuperCollider tweets? http://supercollider.sourceforge.net/sc140/


Notes

1. First question: when you use the number 1 instead of inf as the repeats argument of the second Pseq, the Pbind will stop after 6 notes have been played (that is, after one full sequence of duration values has been performed). Second question: to make a Pbind play forever, simply use inf as the repeats value of all inner patterns.

2. a) Pwhite(0, 10) will generate any number between 0 and 10. Prand([0, 4, 1, 5, 9, 10, 2, 3], inf) will only pick from the list, which has some numbers between 0 and 10, but not all (6, 7, 8 are not there, so they will never occur in this Prand).

b) Technically you could use a Prand if you provide a list with all the numbers between 0 and 100, but it makes more sense to use a Pwhite for this task: Pwhite(0, 100).

c) Prand([0, 1, 2, 3], inf) picks items from the list at random. Pwhite(0, 3) arrives at the same kind of output through different means: it will generate random integer numbers between 0 and 3, which ends up being the same pool of options as the Prand above. However, if you write Pwhite(0, 3.0), the output is now different: because one of the input arguments of Pwhite is written as a float (3.0), it will now output any floating-point number between 0 and 3, like 0.154, 1.0, 1.45, 2.999.

d) The first Pbind plays 32 notes (4 times the sequence of 8 notes). The second Pbind plays only 4 notes: four random choices picked from the list (remember that Prand, unlike Pseq, has no obligation to play all the notes from the list: it will simply pick as many random notes as you tell it to). The third and last Pbind plays 32 notes, like the first.

3. First line: the Array [1, 2, 3, "wow"] is the receiving object; reverse is the message. Second line: the String "hello" is the receiving object; dup is the message; 4 is the argument to dup. Third line: 3.1415 is the receiving object; round is the message; 0.1 is the argument to round. Fourth line: 100 is the receiving object; rand is the message. Last line: 100.0 is the receiver of the message rand, the result of which is a random number between 0 and 100. That number becomes the receiver of the message round with argument 0.01, so that the random number is rounded to two decimal places. Then this result becomes the receiving object of the message dup with argument 4, which creates a list with four duplicates of that number.

4. Rewriting using functional notation only: dup(round(rand(100.0), 0.01), 4);

5. Answers:

a) 24

b) [5, 5.123] (both numbers and brackets)

c) The entire LFSaw line

d) Only one

e) 0.4

f) 1 and 0.3

6. SinOsc is bipolar because it outputs numbers between -1 and +1. LFPulse is unipolar because its output range is 0 to 1 (in fact, LFPulse in particular only outputs zeros or ones, nothing in between).

7. Solution: a = {Out.ar(0, SinOsc.ar(freq: [800, 880], mul: LFPulse.ar([2, 3])))}.play;

8. (a) The variable lfn simply holds an LFNoise2. The role of LFNoise2 in life is to generate a new random number every second (between -1 and +1), and slide to it from the previous random number (unlike LFNoise0, which jumps to the new number immediately). The first use of this variable lfn is in the freq argument of the BPF: lfn.range(500, 2500). This takes the numbers between -1 and +1 and scales them to the range 500-2500. These numbers are then used as the center frequency of the filter. These frequencies are the pitches that we hear sliding up and down. Finally, lfn is used again to control the position of the panner Pan2. It is used directly (without a .range message) because the numbers are already in the range we want (-1 to +1). The nice result of this is that we couple the change of frequency with the change of position. How? Every second, LFNoise2 starts to slide toward a new random number, and this becomes a synchronized change in the frequency of the filter and the panning position. If we had two different LFNoise2 UGens in the two places, the changes would be uncorrelated (which might be fine too, but it's a different aural result).

(b) A mul: of 1 would just be too soft. Because the filter is so sharp, it takes so much out of the original signal that the amplitude drops too much. We need to boost the signal back to a reasonably audible range, so that's why we have mul: 20 at the end of the BPF line.

(c) The rhythm is driven by the LFPulse that is the mul: argument of the Saw. The LFPulse frequency (how many pulses per second) is controlled by an LFNoise1 that produces numbers between 1 and 10 (interpolating between them). Those numbers are the “how many notes per second” of this patch.


Computer Music with examples in SuperCollider 3

David Michael Cottle

05_01_05

Copyright © August 2005 by David Michael Cottle
SuperCollider 3 copyright © August 2005 by James McCartney

Contents

Syllabus
Course Information
Goals
Grades
1 - Introduction, Music Technology
What's in the text?
What's New
Terms of Use
Why SuperCollider 3?
Section I: Introductory Topics
2 - Macintosh OS X ("Ten")
Why Mac?
The Finder
The Dock
System Preferences
Other Applications
Exposé
Documents
Get Info
Storage Devices
Server Space
Unmounting or Ejecting a Storage Device
Mac OS X Survival
What You Don't Need to Know
3 - Digital Music
Language
Hexadecimal, MIDI Numbers
File Formats
MIDI: a popular 80s standard
NIFF
XML
Text Converters
JPG and PDF
Screen Shots
AIFF, SDII, WAV
MP3 and Other Compression Formats
File Size
Software for MIDI, Music Notation, Optical Character Recognition
Optical Character Recognition
4 - Sound
Peaks and Valleys of Density
Patterns, Noise, Periods
Properties of sound
Frequency
Phase
Amplitude
Harmonic Structure
The Expressive Nature of Musical Instruments [New]
5 - The Party Conversation Chapter
Constructive and Destructive Interference
Tuning an Instrument using "Beats" [New]
Phase Cancellation
Musical Intervals
Tuning, Temperament, and the Pythagorean Comma
6 - Sound Quality, Recording, and Editing Techniques
Stereo
Mono
Sample Rate
Bit Depth
File Size
Noise
Distortion
Setting Levels
Two Track Recorders
Clean Edits
7 - Microphones, Cords, Placement Patterns
Connectors and Cords
Dynamic Mics
Condenser Mics
Stereo Pairs
Handling and Care
Placement, Proximity
Axis
Patterns
Mbox, 001, 002
Section II: Digital Synthesis Using SuperCollider 3
8 - SC3 Introduction, Language, Programming Basics
Basics
Error messages
Objects, messages, arguments, variables, functions, and arrays
Enclosures (parentheses, braces, brackets)
Arguments
Sandwich.make
Experimenting With a Patch (Practice)
Practice
9 - The Nature of Sound, Writing Audio to a File
Frequency
Amplitude
Periods, Shapes and Timbre
Phase
Recording (Writing to a File)
Practice
10 - Keyword Assignment, MouseX.kr, MouseY.kr, Linear and Exponential values
Keyword Assignment
MouseX.kr and MouseY.kr
Other External Controls
Practice
11 - Variables, Comments, Offset and Scale using Mul and Add
Variables and Comments
Offset and Scale using Mul and Add
Practice
12 - Voltage Control, LFO, Envelopes, Triggers, Gates, Reciprocals
Vibrato
Block Diagrams
Theremin
Envelopes
Triggers, Gates, messages ar (audio rate) and kr (control rate)
Duration, Frequency, and Reciprocals
Gates
The Experimental Process
Practice, Bells
13 - Just and Equal Tempered Intervals, Multi-channel Expansion, Global Variables
Harmonic series
Just vs. Equal Intervals [New]
Practice, Free-Just, and Equal-Tempered Tuning [New]
14 - Additive Synthesis, Random Numbers, CPU usage
Harmonic Series and Wave Shape
114 Additive Synthesis ......................................................................................................... 116 Shortcuts ........................................................................................................................ 119 Filling an array............................................................................................................... 121 Inharmonic spectra......................................................................................................... 124 Random Numbers, Perception ........................................................................................ 124 Bells............................................................................................................................... 128 CPU Usage .................................................................................................................... 129 Practice: flashing sines, gaggle of sines, diverging, converging, decaying gongs ............ 130 Noise, Subtractive Synthesis, Debugging, Modifying the Source.................................... 135 Noise.............................................................................................................................. 135 Subtractive Synthesis ..................................................................................................... 136 Voltage Controlled Filter................................................................................................ 138 Chimes........................................................................................................................... 140 Debugging, commenting out, balancing enclosures, postln, postcln, postf, catArgs......... 141 Modifying the source code ............................................................................................. 146 Practice, Chimes and Cavern.......................................................................................... 147 Karplus/Strong, Synthdef, Server commands.................................................................. 150

4

Karplus-Strong Pluck Instrument ................................................................................... 150 Delays............................................................................................................................ 150 Delays for complexity [New] ......................................................................................... 152 Synth definitions ............................................................................................................ 152 Practice: Karplus-Strong Patch....................................................................................... 159 17 - FM/AM Synthesis, Phase Modulation, Sequencer, Sample and Hold.............................. 162 AM and FM synthesis or "Ring" Modulation.................................................................. 162 Phase Modulation........................................................................................................... 163 Sequencer....................................................................................................................... 165 Sample and Hold............................................................................................................ 167 Practice S&H FM........................................................................................................... 171 18 - Busses and Nodes and Groups (oh my!), Linking Things Together................................. 176 Disclaimer...................................................................................................................... 176 Synth definitions ............................................................................................................ 176 Audio and Control Busses .............................................................................................. 178 Nodes............................................................................................................................. 182 Dynamic bus allocation .................................................................................................. 185 Using busses for efficiency............................................................................................. 187 Groups ........................................................................................................................... 189 Group Manipulation ....................................................................................................... 189 Practice: Bells and Echoes.............................................................................................. 193 Section III: Computer Assisted Composition........................................................................... 197 19 - Operators, Precedence, Arguments, Expressions, and User Defined Functions ............... 197 Operators, Precedence.................................................................................................... 197 Messages, Arguments, Receivers ................................................................................... 198 Practice, Music Calculator.............................................................................................. 200 Functions, Arguments, Scope ......................................................................................... 201 Practice, just flashing ..................................................................................................... 
204 Practice: Example Functions .......................................................................................... 206 20 - Iteration Using do, MIDIOut .......................................................................................... 208 MIDIOut ........................................................................................................................ 210 Practice, do, MIDIOut, Every 12-Tone Row................................................................... 212 21 - Control Using if, do continued, Arrays, MIDIIn, Computer Assisted Analysis [New] ..... 215 Control message "if" ...................................................................................................... 215 while [New] ................................................................................................................... 218 for, forBy [New] ............................................................................................................ 219 MIDIIn [New]................................................................................................................ 220 Real-Time Interpolation [New] ...................................................................................... 220 Analysis [New] .............................................................................................................. 222 Practice .......................................................................................................................... 222 22 - Collections, Arrays, Index Referencing, Array Messages ............................................... 224 Array messages .............................................................................................................. 226 Practice, Bach Mutation ................................................................................................. 228 23 - Strings, Arrays of Strings ............................................................................................... 231 A Moment of Perspective. .............................................................................................. 233 Practice, Random Study ................................................................................................. 234

5

24 - More Random Numbers ................................................................................................. 237 Biased Random Choices................................................................................................. 237 25 - Aesthetics of Computer Music ....................................................................................... 245 Why Computers?............................................................................................................ 245 Fast ................................................................................................................................ 245 Accurate [New].............................................................................................................. 245 Complex and Thorough: I Dig You Don't Work ............................................................. 246 Obedient and Obtuse ...................................................................................................... 247 Escaping Human Bias .................................................................................................... 248 Integrity to the System ................................................................................................... 249 26 - Pbind, Mutation, Pfunc, Prand, Pwrand, Pseries, Pseq, Serialization............................... 252 Pbind.............................................................................................................................. 252 dur, legato, nextEvent..................................................................................................... 253 Prand, Pseries, Pseq........................................................................................................ 256 Serialization Without Synthesis or Server using MIDIout............................................... 260 Practice: Total Serialization using MIDI only................................................................. 261 MIDI Using Pbind.......................................................................................................... 262 27 - Total Serialization Continued, Special Considerations.................................................... 266 Absolute vs. Proportional Values, Rhythmic Inversion................................................... 266 Pitch............................................................................................................................... 266 Duration and next event ................................................................................................. 267 Next Event ..................................................................................................................... 268 Non-Sequential Events................................................................................................... 268 Amplitude ...................................................................................................................... 268 Rhythmic Inversion........................................................................................................ 268 28 - Music Driven by Extra-Musical Criteria, Data Files ....................................................... 272 Extra Musical Criteria .................................................................................................... 272 Text Conversion............................................................................................................. 
272 Mapping......................................................................................................................... 273 Working With Files........................................................................................................ 278 29 - Markov Chains, Numerical Data Files............................................................................ 281 Data Files, Data Types ................................................................................................... 290 Interpreting Strings......................................................................................................... 292 30 - Concrète, Audio Files, Live Audio DSP ......................................................................... 295 Music Concrète .............................................................................................................. 295 Buffers ........................................................................................................................... 295 31 - Graphic User Interface Starter Kit .................................................................................. 303 Display........................................................................................................................... 303 Document....................................................................................................................... 304 Keyboard Window ......................................................................................................... 307 Windows and Buttons .................................................................................................... 310 Slider ............................................................................................................................. 312 APPENDIX ............................................................................................................................ 315 A. Converting SC2 Patches to SC3...................................................................................... 315 Converting a simple patch .............................................................................................. 315 iphase............................................................................................................................. 318

6

rrand, rand, choose, Rand, TRand, TChoose................................................................... 318 Spawning Events............................................................................................................ 318 B. Cast of Characters, in Order of Appearance .................................................................... 320 C. OSC ............................................................................................................................... 321 D. Step by Step (Reverse Engineered) Patches .................................................................... 322 // Rising Sine Waves ...................................................................................................... 322 // Random Sine Waves .................................................................................................. 323 // Uplink........................................................................................................................ 326 // Ring and Klank .......................................................................................................... 328 // Tremulate................................................................................................................... 329 // Police State ................................................................................................................ 332 // Pulse .......................................................................................................................... 338 // FM............................................................................................................................. 340 // Filter .......................................................................................................................... 344 // Wind and Metal.......................................................................................................... 345 // Sci-Fi Computer [New]............................................................................................... 348 // Harmonic Swimming [New] ....................................................................................... 349 // Variable decay bell [New]........................................................................................... 352 // Gaggle of sine variation [New].................................................................................... 353 // KSPluck..................................................................................................................... 354 // More .......................................................................................................................... 354 E. Pitch Chart, MIDI, Pitch Class, Frequency, Hex, Binary Converter: ................................... 355 Answers to Exercises .............................................................................................................. 357

7

Index of Examples

Waves: Clear Pattern (Periodic), Complex Pattern, No Pattern (Aperiodic)
Frequency Spectrum of Speech
Graphic Representations of Amplitude, Frequency, Timbre, Phase
Constructive and Destructive Interference
Interference in Proportional Frequencies: 2:1 and 3:2
Sample Rate (Resolution) and Bit Depth
Waves: Sampling Rate and Bit Depth
Non-Zero Crossing Edit
Connectors: RCA, XLR, TRS, TS
Hello World
Booting the server
First Patch
Second Patch
Balancing enclosures
Balancing enclosures with indents
Arguments
SinOsc using defaults
Experimenting with a patch
Rising Bubbles
SinOsc
Amplitude using mul
Distortion
Noise wave shapes
Phase
Phase
Phase you can hear (as control)
Record bit depth, channels, filename
Wandering Sines, Random Sines, Three Dimensions
Defaults
Keywords
First patch using keywords
MouseX
MouseY controlling amp and freq
Exponential change
Practice sci-fi computer
Variable declaration, assignment, and comments
Offset and scale with mul and add
Map a range
Harmonic swimming from the examples folder, variable decay bells
SinOsc as vibrato
SinOsc as vibrato
Vibrato
Theremin
Better vibrato
Other LFO controls
Trigger and envelope
Trigger with MouseX
Envelope with trigger
Duration, attack, decay
Envelope using a gate
Envelope with LFNoise as gate
Complex envelope
Bells
Intervals
Multi-channel expansion
Intervals
Function return: last line
Audio frequencies
Ratios from LF to audio rate
Ratios from LF to audio rate
Tuning
String Vibration and Upper Harmonics
Vibrating Strings
Spectral Analysis of "Four Score"
Adding sines together
Additive synthesis with a variable
Additive saw with modulation
Additive saw with independent envelopes
additive synthesis with array expansion
additive synthesis with array expansion
additive synthesis with array expansion
Array.fill
Array.fill with arg
Additive saw wave, separate decays
Additive saw wave, same decays
Single sine with control
Gaggle of sines
Inharmonic spectrum
rand
Test a random array
Error from not using a function
Client random seed
Server random seed
Post clock seed
random frequencies (Pan2, Mix, EnvGen, Env, fill)
flashing (MouseButton, Mix, Array.fill, Pan2, EnvGen, Env, LFNoise1)
noise from scratch (rrand, exprand, Mix, fill, SinOsc)
Types of noise
Filtered noise
Filtered saw
Filter cutoff as pitch
Resonant array
chime burst (Env, perc, PinkNoise, EnvGen, Spawn, scope)
chimes (normalizeSum, round, Klank, EnvGen, MouseY)
Tuned chime (or pluck?)
running a selection of a line
running a selection of a line
commenting out
debugging using postln
debugging using postln in message chains
Formatting posted information
postn
Subtractive Synthesis (Klank, Decay, Dust, PinkNoise, RLPF, LFSaw)
noise burst
Noise burst with delay
midi to cps to delay time
Delay to add complexity
playing a synthDef
stopping a synthDef
playing a synthDef
SynthDef
Multiple nodes of SH
Syntax for passing arguments
Transition time between control changes
Multiple nodes of SH
Multiple nodes of SH
SynthDef Browser
KSpluck SynthDef (EnvGen, Env, perc, PinkNoise, CombL, choose)
Practice: K S pluck (EnvGen, PinkNoise, LFNoise1, Out, DetectSilence)
From LFO to FM
AM Synthesis (SinOsc, scope, mul, Saw)
FM Synthesis
PM Synthesis
Controls for carrier, modulator, and index
Efficiency of PM
Carrier and modulator ratio
Envelope applied to amplitude and modulation index
Sequencer (array, midicps, SinOsc, Sequencer, Impulse, kr)
scramble, reverse (Array.rand, postln, scramble, reverse)
sequencer variations
Latch
Latch
Latch sample and speed ratio (Blip, Latch, LFSaw, Impulse, mul)
Complex Wave as Sample Source (Mix, SinOsc, Blip, Latch, Mix, Impulse)
Practice, Sample and Hold, FM
Browsing Synth Definitions
First Patch (play, SinOsc, LFNoise0, .ar)
First SynthDef
Audio and Control Busses [New]
Assigning busses
Patching synths together with a bus
Patching synths together with a bus, dynamic control sources
Patching synths together with a bus, dynamic control sources
Several controls over a single bus
node order, head, tail
Execution order, node order
node order, head, tail
Bus allocation and reallocation
Bus allocation and reallocation
Bus allocation
inefficient patch
more efficient modular approach using busses
Groups, group manipulation
Automated node creation
Source Group, Fx Group
Bells and echoes
Operators (+, /, -, *)
More operators
Binary operators (>, <, ==, %)
If the signal were passed through a chain of effects in series, say source -> fx1 -> fx2, -> fx3, then I would need to have them and/or their groups in the correct order. But here they are in parallel: the source is routed to both echoes and both echoes are mixed to busses 0 and 1. I've changed the output bus for the ping to 16 in order to route it to the echoes. I send all the definitions at once, then create a source group for the pings and an fx group for the echoes. The order of creation doesn't matter because their nodes are determined by the group: synthGroup is at the head, where it should be, and fxGroup is at the tail. I can start and stop them in any order. There are actually going to be three "echoes" in the fx group. Two are echoes using Comb units, but in neither of those do I mix in the dry signal, so echo1 and echo2 are just the wet signal, no source. I use a dry synth that simply reroutes the source to busses 0 and 1 so that I can control it separately, adding or removing it from the final mix. This could also have been done using Out.ar([0, 1, 16, 17], a), routing the source to 16 and 17 for fx and 0 and 1 for the dry signal.

18.20. Source Group, Fx Group

(
SynthDef("ping", {arg fund = 400, harm = 1, rate = 0.2, amp = 0.1;
    a = Pan2.ar(
        SinOsc.ar(fund*harm, mul: amp) *
            EnvGen.kr(Env.perc(0, 0.2), gate: Dust.kr(rate)),
        Rand(-1.0, 1.0));
    Out.ar(16, a)
}).load(s);

SynthDef("dry", {var signal;
    signal = In.ar(16, 2);
    Out.ar(0, signal);
}).load(s);

SynthDef("echo1", {
    var signal, echo;
    signal = In.ar(16, 2);
    echo = CombC.ar(signal, 0.5, [0.35, 0.5]);
    Out.ar(0, echo);
}).load(s);

SynthDef("echo2", {
    var signal, echo;
    signal = In.ar(16, 2);
    echo = Mix.arFill(3, {
        CombL.ar(signal, 1.0, LFNoise1.kr(Rand(0.1, 0.3), 0.4, 0.5), 15)
    });
    Out.ar(0, echo*0.2)


}).load(s);
)

~synthGroup = Group.head(s);
~fxGroup = Group.tail(s);

// 12.do will not allow me to access each one, but it doesn't matter
(
12.do({arg i; Synth("ping", [\harm, i+1, \amp, (1/(i+1))*0.4], ~synthGroup)});
)

// "ping" is playing on bus 16, so we don't hear it
// Start the echo1 (wet), echo2 (still wet), then dry
a = Synth("echo1", target: ~fxGroup);
b = Synth("echo2", target: ~fxGroup);
c = Synth("dry", target: ~fxGroup);

b.free; // remove each in a different order
a.free;
c.free;

// The original ping is still running, so stop it.
~synthGroup.freeAll;

// This also works
a = Synth("echo1", target: ~fxGroup);
b = Synth("echo2", target: ~fxGroup);
12.do({arg i; Synth("ping", [\harm, i+1, \amp, (1/(i+1))*0.4], ~synthGroup)});
c = Synth("dry", target: ~fxGroup);
~synthGroup.freeAll; // Stop the source, but the echoes are still running

// Start the source again
12.do({arg i; Synth("ping", [\harm, i+1, \amp, (1/(i+1))*0.4], ~synthGroup)});
~synthGroup.set(\rate, 0.8);
~synthGroup.set(\rate, 5);

~synthGroup.free;
~fxGroup.free;

Practice: Bells and Echoes

18.21. Bells and echoes

(
SynthDef("bells", {arg freq = 100;


    var out, delay;
    out = SinOsc.ar(freq, mul: 0.1) *
        EnvGen.kr(Env.perc(0, 0.01), gate: Dust.kr(1/7));
    out = Pan2.ar(
        Klank.ar(`[Array.fill(10, {Rand(100, 5000)}),
            Array.fill(10, {Rand(0.01, 0.1)}),
            Array.fill(10, {Rand(1.0, 6.0)})], out),
        Rand(-1.0, 1.0));
    Out.ar(0, out*0.4); //send dry signal to main out
    Out.ar(16, out*1.0); //and send louder dry signal to fx bus
}).load(s);

SynthDef("delay1", // first echo
    {var dry, delay;
    dry = In.ar(16, 2);
    delay = AllpassN.ar(dry, 2.5,
        [LFNoise1.kr(2, 1.5, 1.6), LFNoise1.kr(2, 1.5, 1.6)], 3, mul: 0.8);
    Out.ar(0, delay);
}).load(s);

SynthDef("delay2", // second echo
    {var delay, dry;
    dry = In.ar(16, 2);
    delay = CombC.ar(dry, 0.5, [Rand(0.2, 0.5), Rand(0.2, 0.5)], 3);
    Out.ar(0, delay);
}).load(s);

SynthDef("delay3", // third echo
    {var signal, delay;
    signal = In.ar(16, 2);
    delay = Mix.arFill(3, {
        CombL.ar(signal, 1.0, LFNoise1.kr(Rand([0.1, 0.1], 0.3), 0.4, 0.5), 15)
    });
    Out.ar(0, delay*0.2)
}).load(s);
)

//define groups
~fxGroup = Group.tail;
~bellGroup = Group.head;

// start one of the echoes and 4 bells
f = Synth("delay3", target: ~fxGroup);
4.do({Synth("bells", [\freq, rrand(30, 1000)], target: ~bellGroup)})

// stop the existing echo and change to another
f.free; f = Synth("delay1", target: ~fxGroup);
f.free; f = Synth("delay2", target: ~fxGroup);
f.free; f = Synth("delay3", target: ~fxGroup);

Synth("delay1", target: ~fxGroup); // add delay1 without removing delay3



18. Exercises

18.1. Modify this patch so that the LFSaw is routed to a control bus and returned to the SinOsc using In.kr.

{Out.ar(0, SinOsc.ar(LFSaw.kr(12, mul: 300, add: 600)))}.play

18.2. Create an additional control (perhaps a SinOsc) and route it to another control bus. Add an argument for the input control bus on the original SinOsc. Change between the two controls using Synth or set.

18.3. Create a delay with outputs routed to busses 0 and 1. For the delay input use an In.ar with an argument for bus number. Create another stereo signal and assign it to busses 4 and 5. Set your sound input to either mic or line and connect a signal to the line. Use set to switch the In bus from 2 (your mic or line) to 4 (the patch you wrote).

18.4. Assuming the first four bus indexes are being used by the computer's out and in hardware, and you run these lines:

a = Bus.ar(s, 2);
b = Bus.kr(s, 2);
c = Bus.ar(s, 1);
c.free;
d = Out.ar(Bus.ar(s), SinOsc.ar([300, 600]));

- Which bus or bus pair has a SinOsc at 600 Hz?
- What variable is assigned to audio bus 6?
- What variable is assigned to control bus 3?
- What variable is assigned to audio bus 4?

18.5. Assuming busses 0 and 1 are connected to your audio hardware, in which of these examples will we hear the SinOsc?

({Out.ar(0, In.ar(5))}.play; {Out.ar(5, SinOsc.ar(500))}.play)
({Out.ar(5, SinOsc.ar(500))}.play; {Out.ar(0, In.ar(5))}.play)
({Out.ar(0, In.ar(5))}.play; {Out.kr(5, SinOsc.ar(500))}.play)
({Out.ar(5, In.ar(0))}.play; {Out.ar(0, SinOsc.ar(500))}.play)
({Out.ar(0, SinOsc.ar(500))}.play; {Out.ar(5, In.ar(0))}.play)


Section III: Computer Assisted Composition

19 - Operators, Precedence, Arguments, Expressions, and User Defined Functions

The previous section dealt with synthesis. From here on we will use SC to construct systems for organizing events, using either the synth definitions from previous sections or outboard MIDI instruments.

Operators, Precedence

The examples below combine numbers with the operators +, /, -, and *. Evaluate each line separately.

19.1.

Operators (+, /, -, *)

1 + 4
5/4
8*9-5
9-5*8
9-(5*8)

The last three expressions look similar but have different results. In the third, 8*9 is calculated first, then minus 5, giving 67; in the fourth, 9-5 is calculated first, then times 8, giving 32. The difference is precedence: the order in which each operator and value is realized in the expression. Precedence in SC is quite simple (and different from the rules you learned in school): enclosures first, then left to right. Since 8*9 comes first in the third expression, it is calculated before the -5. In the line below it, 9-5 is calculated first and that result is multiplied by 8. The last line demonstrates how parentheses force precedence: 5*8 is calculated first, because it is inside parentheses, then that result is subtracted from 9. Can you predict the result of each line before you evaluate the code?

19.2.

More operators

1 + 2 / 4 * 6
2 / 4 + 2 * 6
(2 * 6) - 5
2 * (6 - 5)
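For checking your predictions, here are the same four lines worked out by hand, strictly left to right (verify them against the Post window):

1 + 2 / 4 * 6   // (1+2) = 3, 3/4 = 0.75, 0.75*6 = 4.5
2 / 4 + 2 * 6   // (2/4) = 0.5, 0.5+2 = 2.5, 2.5*6 = 15.0
(2 * 6) - 5     // parentheses first: 12-5 = 7
2 * (6 - 5)     // parentheses first: 2*1 = 2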


Here are some other binary operators. Run each line and see what SC returns: > greater than, < less than, == equals, % modulo. 19.3.

Binary operators (>, <, ==, %)

10 > 5
5 < 1
12 == (6*2)
106%30

The > (greater than) and < (less than) symbols return true or false. SC understands that the number 10 is greater than 5 (therefore true) and that 5 is not less than 1 (false). We will use this logic in later chapters. Modulo (%) is a very useful operator that returns the remainder of the first number after dividing by the second. For example, 43%10 will reduce 43 by increments of 10 until it is less than 10, returning what is left: 3. 12946%10 is 6. A common musical application is modulo 12, which reduces MIDI numbers to within a single octave. Can you predict the results of these expressions?

19.4.

Predict

(8+27)%6
((22 + 61) * 10 )%5
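Worked out by hand (again, check these in SC): (8+27)%6 is 35%6, or 5; ((22+61)*10)%5 is 830%5, or 0, since 830 is an exact multiple of 5. And here is a minimal sketch of the octave-reduction idea mentioned above:

// MIDI notes an octave apart share a pitch class under modulo 12
[60, 64, 67, 72, 76]%12   // returns [0, 4, 7, 0, 4]
(72%12) == (60%12)        // true: C5 and C4 are both pitch class 0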

All of these examples use integers. Integers are whole numbers (if you don't remember math, those are the numbers used for counting items, without a decimal point: 1, 2, 3, 4, etc.). Numbers that use a decimal are called floating-point values. In SC you express integers by just writing the number (7, 142, 3452). Floating-point values must have the decimal with numbers on both sides, even for values below 1: 5.142, 1.23, 456.928, 0.0001, 0.5 (not .0001 or .5).

Messages, Arguments, Receivers

You should be comfortable with messages, arguments, and receivers. Remember that numbers can be objects. The message usually has a meaningful name such as sum or abs, and is followed by parentheses that enclose arguments separated by commas. Following are typical messages used in computer music. In previous chapters messages have used the syntax Object.message. This is called receiver notation; the object is the receiver. An equivalent syntax is shown in the examples below, where I use functional notation: message(argument). The object is placed as the first argument in the argument list.
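If you're unsure which kind of number (or which notation) you have, SC will tell you; a quick check using the class message:

7.class          // posts Integer
0.5.class        // posts Float
postln("hello")  // functional notation: same as "hello".postln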


5.rrand(10) can be expressed rrand(5, 10). Likewise, min(10, 100) can be expressed 10.min(100). 19.5.

Music related messages

cos(34)         //returns cosine
abs(-12)        //returns absolute value
sqrt(3)         //square root
midicps(56)     //given a midi number, returns the cycles per second
                //in an equal tempered scale
cpsmidi(345)    //given cps, returns midi
midiratio(7)    //given a midi interval, returns ratio
ratiomidi(1.25) //given a ratio, returns midi number
rand(30)        //returns a random value between 0 and 29
rand2(20)       //returns a random value between -20 and 20
rrand(20, 100)  //returns a random value between 20 and 100

// Here are examples in receiver notation.
30.cos     //same as cos(30)
0.7.coin   //same as coin(0.7)
20.rand    //same as rand(20)
7.midiratio

// Binary functions have two arguments.
min(6, 5)          //returns the minimum of two values
max(10, 100)       //returns the maximum
round(23.162, 0.1) //rounds the first argument to the second argument

// Arguments can be expressions
min(5*6, 35)
max(34 - 10, 4)


Practice, Music Calculator

SC is useful even when you're not producing music.38 I often launch it just as a music calculator: to find the frequency of an equal tempered Ab, how many cents an equal tempered fifth is from a just fifth, or the interval, in cents, between two frequencies. Here are some examples.

19.6.

Music calculator

// Major scale frequencies
([0, 2, 4, 5, 7, 9, 11, 12] + 60).midicps.round(0.01)

// Major scale interval ratios
[0, 2, 4, 5, 7, 9, 11, 12].midiratio.round(0.001)

// Phrygian scale frequencies
([0, 1, 3, 5, 7, 8, 10, 12] + 60).midicps.round(0.01)

// Phrygian scale interval ratios
[0, 1, 3, 5, 7, 8, 10, 12].midiratio.round(0.001)

// Equal and Just Mixolydian scale compared
[0, 2, 3, 4, 5, 7, 9, 10, 12].midiratio.round(0.001)
[1/1, 9/8, 6/5, 5/4, 4/3, 3/2, 8/5, 7/4, 2/1].round(0.001)

// Just ratios (mixolydian) in equal tempered cents
// (and therefore their deviation from equal temperament)
[1/1, 9/8, 6/5, 5/4, 4/3, 3/2, 8/5, 7/4, 2/1].ratiomidi.round(0.01)

// Retrograde of a 12-tone set
[0, 11, 10, 1, 9, 8, 2, 3, 7, 4, 6, 5].reverse

// Inversion of a 12-tone set
12 - [0, 11, 10, 1, 9, 8, 2, 3, 7, 4, 6, 5]

// And of course, retrograde inversion (see where I'm heading?)
(12 - [0, 11, 10, 1, 9, 8, 2, 3, 7, 4, 6, 5]).reverse

// Random transpositions of a 12-tone set
([0, 11, 10, 1, 9, 8, 2, 3, 7, 4, 6, 5] + 12.rand)%12

// Random permutation of a 12-tone set (out of 479,001,600)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11].permute(479001600.rand) + 60
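Two of the calculations mentioned in the prose don't appear in 19.6, so here is one way to do them (a sketch, not the only way): the interval between two frequencies in cents is just the MIDI interval of their ratio times 100.

// cents between 440 Hz and 330 Hz (a 4/3 fourth, about 498.04 cents)
((440/330).ratiomidi * 100).round(0.01)

// frequency of the equal tempered Ab above middle C (MIDI 68)
68.midicps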

38. Or should I say sound? These examples fall under the category of imaginary music.


Functions, Arguments, Scope

A function is a series of expressions enclosed in two braces. The entire function is usually (but not always) assigned to a variable. The lines of code are executed in order and the result of the last line is returned. When the function is called or run anywhere in the program, it is as if all the lines were inserted in place of the function. A function is evaluated by using the value message. Here is a simple function with no variables or arguments, with its call:

19.7.

Function

(
var myFunc;
myFunc = {100 * 20};
myFunc.value.postln;
)

The first line declares the variable name that will be used for the function. The second line is the function assignment ("make myFunc equal to {100 * 20}," or "store the expression 100 * 20 in myFunc"). Every place you put myFunc in your code, the value 2000 will be used. Arguments are like variables, but they can be passed to the function when it is called. Declare arguments right after the opening brace; next you declare variables, if you want to use them. Here is a function with arguments.

19.8.

Function with arguments

(
var func;
func = { arg a, b;
    var r;
    r = (b * 20)%a;
    r.postln;
};
func.value(15, 5);
)

[New] It is often useful to pass an array containing a collection of items to the function, for example, when the number of elements you want to use is subject to change. Consider the first example below, which sums three numbers. What if you wanted to sum three values one time, then ten the next, then two, or six? Using arguments you would have to change the number of arguments each time, or write a new function. The solution is to pass an array as an argument. To do this you must first declare the array argument by preceding it with three dots, as seen in the last examples.

19.9.

Function with array arguments

(
var func;
func = { arg a = 1, b = 2, c = 4;
    [a, b, c].sum;
};
func.value(15, 5, 100);
)

(
var func;
func = { arg ... a;
    a.postln;
    a.sum.postln;
    (a + 1.0.rand).sum
};
func.value(15, 5, 100);
func.value(15, 5, 100, 3, 78, 18, 367);
func.value(1, 2);
)

// Combine both syntaxes
(
var func;
func = { arg a = 0, b = 0 ... c;
    [a, b, c].postln;
    c.sum.postln;
    (c + 3.0.rand).sum.postln;
    (a/b*c).postln;
};
func.value(15, 5, 100, 45);
func.value(15, 5, 100, 3, 99, 754, 78, 18, 367);
func.value(1, 2, 3, 4, 5);
)

Scope describes the effective range of a variable or argument. A variable can only be used inside the function where it is declared; a function can, however, use any variable declared in a function that encloses it. A global variable, declared with a tilde, will work everywhere.

19.10. Function with arguments and variables

var func, outside = 60;
~myGlobal = 22;
func = { arg first = 5, second = 9;
    var inside = 10;
    inside = (first * 11)%second;
    [first, second, inside, outside, ~myGlobal].postln; // all of these work


    (outside/inside).postln; //works
};
//inside.postln; // uncomment this, it will not work
func.value(15, 6); // arguments passed to the function

Can you predict the values of the three myFunc calls before running this code? The first call passes no arguments, so the function uses both defaults (10 and 2). The second passes only the first argument and uses the default for the second; the third passes both.

19.11. Function calls

( //line 1
var myFunc;
myFunc = { arg a = 10, b = 2;
    b = (b * 100)%a;
    b.postln;
};
myFunc.value; //line 7
myFunc.value(15); //line 8
myFunc.value(11, 30); //line 9
)

You can also use keywords with your own functions.

19.12. Keywords

(
var myFunc;
myFunc = { arg firstValue = 10, secondValue = 2;
    firstValue = (firstValue * 100)%secondValue;
    firstValue.postln;
};
myFunc.value;
myFunc.value(firstValue: 15);
myFunc.value(firstValue: 30, secondValue: 11);
myFunc.value(secondValue: 30, firstValue: 11);
myFunc.value(secondValue: 23);
)

In the previous examples the last line prints the final result. But in most functions a value is returned to the place where the function is called: the last line of the function is the return value. Here is an example that makes a little more musical sense.

19.13. Return

(
var octaveAndScale;
octaveAndScale = { arg oct = 4, scale = 0;
    var scales, choice;


    oct = (oct + 1)*12; //translate "4" (as in C4) to MIDI octave (60)
    scales = [
        [0, 2, 4, 5, 7, 9, 11],    //major
        [0, 2, 3, 5, 6, 8, 9, 11], //octatonic
        [0, 2, 4, 6, 8, 10]        //whole tone
    ];
    scale = scales.at(scale); //more on the "at" message below
    choice = scale.choose;    //choose a pitch
    choice = choice + oct;    //add the octave
    choice                    //return the final result
};
octaveAndScale.value;           //choose from major scale, C4 octave
octaveAndScale.value(3);        //choose from C3 octave, major scale
octaveAndScale.value(7, 2);     //choose from C7 octave, whole tone scale
octaveAndScale.value(scale: 1); //choose from C4 octave, octatonic scale
)

When should you use a function? We have used them in just about every example so far; messages such as max, choose, and midicps are functions (in receiver notation) that were put together by the authors of SC. When do you write your own? The first reason is convenience or clarity: when you use a section of code you developed over and over, rather than repeating the code in each place, it is clearer and more efficient to write a function and call that single function each time you need it. The other situation is when there is no existing message or function that does exactly what you need, so you have to tailor your own.39 There is rand, rand2, rrand, bilinrand, exprand, etc., but maybe you need a random number generator that always returns pitches from a major scale in the bass clef. In that case, you could tailor a function to your needs.

Practice, just flashing

In this patch the function returns the next frequency to be used in a flashing instrument. Each new pitch has to be calculated based on the previous pitch, because it uses pure ratios, as we did in a previous chapter. I've added a short printout that shows what the actual pitch is, and what the nearest equal tempered equivalent would be.40 It begins with a patch from a previous chapter placed in a SynthDef with added arguments: fundamental, decay, and filter. This instrument is played successively using a task and loop. Before entering the loop I define the freqFunc, which picks a new pitch based on the previous pitch and a new pure interval ratio. This is called free just intonation (very difficult to do on natural instruments, very easy for computers). I add a wrap to keep the frequency within a certain range. The nextEvent not only controls when the next event will be, but that

39. Check carefully, there probably is one that does what you want.

40. If rounded to the nearest MIDI value. The actual difference will probably be greater if it strays more than a half step. I couldn't figure out how to do that (without an if statement). You do it and send me the code.


value is passed to the Flash instrument as a decay, ensuring one sound will decay shortly after the next sounds. At this writing I'm not convinced the filter works the way it should.

19.14. Function practice, free, just tempered flashing

( //run this first
SynthDef("Flash", {
    arg fund = 400, decay = 4, filter = 1;
    var out, harm;
    out = Mix.ar(
        Array.fill(7, { arg counter;
            var partial;
            partial = counter + 1;
            SinOsc.ar(fund*partial) *
                EnvGen.kr(Env.linen(0, 0, decay + 2),
                    levelScale: 1/(partial*filter)) *
                max(0, LFNoise1.kr(rrand(5.0, 12.0)))
        })
    )*0.3; //overall volume
    out = Pan2.ar(out, Rand(-1.0, 1.0));
    DetectSilence.ar(out, doneAction:2);
    Out.ar(0, out)
}).play(s);
)

( //then this
r = Task({
    var freqFunc, pitch = 440, nextEvent;
    freqFunc = {arg previousPitch;
        var nextPitch, nextInterval;
        nextInterval = [3/2, 2/3, 4/3, 3/4, 5/4, 4/5, 6/5, 5/6].choose;
        nextPitch = (previousPitch*nextInterval).wrap(100, 1000);
        nextPitch.round(0.01).post; " != ".post;
        nextPitch.cpsmidi.round(1).midicps.round(0.01).postln;
        nextPitch
    };
    {
        nextEvent = [0.5, 0.25, 5, 4, 1].choose;
        pitch = freqFunc.value(pitch);
        Synth("Flash", [\fund, pitch, \decay, nextEvent,
            \filter, rrand(1.0, 4.0)]);
        //Choose a wait time before next event
        nextEvent.wait;
    }.loop;


}).play
)
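Footnote 40 wishes for a printout of the actual deviation in cents. One way to get it without an if statement (a sketch, not the author's solution): subtract the pitch's rounded MIDI value from its exact MIDI value, then multiply by 100.

// replace the printout lines in freqFunc with something like:
// deviation from the nearest equal tempered pitch, in cents
((nextPitch.cpsmidi - nextPitch.cpsmidi.round(1)) * 100).round(0.1).postln;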

Practice: Example Functions

Here are some other functions you can try. They should all be inserted into the patch above (replacing the existing var list and freqFunc).

19.15. Pitch functions

var freqFunc, pitches, pitch = 440, count, midiNote, nextEvent;
pitches = [60, 61, 62, 63, 64]; //declare an array of pitches
freqFunc = {
    midiNote = pitches.choose; //pick a pitch from the array
    midiNote.midicps;          //return the cps for that pitch
};

var freqFunc, pitches, pitch = 440, count, midiNote, nextEvent;
pitches = [60, 62, 64, 67, 69, 72, 74, 76]; //declare an array of pitches
count = 0; //initialize count
freqFunc = {
    midiNote = pitches.wrapAt(count); //wrapped index of count
    if(count%30 == 29, //every 30th time
        {pitches = pitches.scramble} //reset "pitches" to a scrambled
                                     //version of itself
    );
    count = count + 1; //increment count
    midiNote.midicps;  //return cps
};

// My favorite:
var freqFunc, pitches, pitch = 440, count, midiNote, nextEvent;
pitches = [60, 62, 64, 67, 69, 72, 74, 76].scramble;
count = 0; //initialize count (needed before the first wrapAt)
freqFunc = {
    midiNote = pitches.wrapAt(count); //wrap index of count
    if(count%10 == 9, //every tenth time
        {pitches.put(5.rand, rrand(60, 76))} //put a new pitch between
                                             //60 and 76 into the array pitches
                                             //at a random index
    );
    count = count + 1; //increment count
    midiNote.midicps;  //return cps
};
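Earlier the text imagined "a random number generator that always returns pitches from a major scale in the bass clef." As an exercise in tailoring, here is one possible sketch of that function in the same style as the examples above (the choice of octaves 2 and 3 as "roughly the bass clef" is my assumption):

var freqFunc, pitches, pitch = 440, count, midiNote, nextEvent;
pitches = [0, 2, 4, 5, 7, 9, 11]; //C major pitch classes
freqFunc = {
    //choose a scale degree, then an octave near the bass clef: C2 (36) or C3 (48)
    midiNote = pitches.choose + [36, 48].choose;
    midiNote.midicps; //return cps
};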


19. Exercises

19.1. Write a function with two arguments (including default values): low and high MIDI numbers. The function chooses a MIDI number within that range and returns the frequency of the number chosen.

19.2. Write a function with one argument: root. The function picks between minor, major, or augmented chords and returns that chord built on the supplied root. Call the function using keywords.


20 - Iteration Using do, MIDIOut

Functions and messages use arguments, and sometimes one of the arguments is itself another function. The function can be assigned to a variable and the variable used in the argument list, or it can be nested inside the argument list. Here are both examples.

20.1.

function passed as argument

// function passed as variable
var myFunc;
myFunc = { (10*22).rand };
max(45, myFunc.value);

// function nested (add .value so max receives a number, not the function itself)
max(45, {(10*22).rand}.value)

The do function or message is used to repeat a process a certain number of times. The first argument is a list of items, typically an array, to be "done." The second argument is a function, which is repeated for each item in the list. The items in the list need to be passed to the function so they can be used inside it. This is done with an argument. You can name the argument anything you want, but it has to be the first argument.

20.2.

do example

do(["this", "is", "a", "list", "of", "strings"], {arg eachItem; eachItem.postln;})

// or

do([46, 8, 109, 45.8, 78, 100], {arg whatever; whatever.postln;})

If the first argument is a number, then it represents the number of times the do will repeat. So do(5, {etc}) will “do” 0, 1, 2, 3, and 4. 20.3.

do example

do(5, {arg theNumber; theNumber.postln;})

Be sure the second argument is a function by enclosing it in braces. Try removing the braces in the example above. Notice the difference.


To me it makes more sense using receiver notation where the first argument, the 5, is the object or receiver and do is the message. The two grammars are equivalent. 20.4.

do in receiver

do(5, {"boing".postln})

//same result
5.do({"boing".postln;})

In an earlier chapter we found it useful to keep track of each repetition of a process (when generating overtones). The do function has a method for doing this: the items being done are passed to the function as the first argument, and the do also counts each iteration and passes that count to the function as the second argument. You can name both anything you want, but they have to be in the correct order. This next concept is a bit confusing, because in the case of 10.do the first and second arguments are the same; it "does" the numbers 0 through 9 in turn while the counter also moves from 0 through 9 as it counts the iterations.

20.5.

do(10) with arguments

do(10, {arg eachItem, counter; eachItem.postln; counter.postln})

It is clearer when using an array as the first argument, or object being done. Remember that the position of each argument, not its name, determines which is which. Notice that in the second example the names seem incorrect, but the item being iterated over is still the first argument and the count is still the second. With numbers (5.do) the difference is academic, but with an array the results can be very different, as shown in the third and fourth examples. Note also that the last item in this array is itself an array: a nested array. You can use the term inf for an infinite do. Save everything before you try it!

20.6.

array.do with arguments

[10, "hi", 12.56, [10, 6]].do({arg eachItem, counter; [counter, eachItem].postln})

[10, "hi", 12.56, [10, 6]].do({arg count, list; [count, list].postln}) //wrong names

[10, 576, 829, 777].do({arg count, items; (items*1000).postln}); //multiplies the counter
[10, 576, 829, 777].do({arg items, count; (items*1000).postln}); //multiplies each item

inf.do({arg i; i.postln}) //this will, of course, crash SC
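If what you actually want is an open-ended repetition that you can stop, a Routine is the safer idiom (a sketch; Routines and Tasks are covered in more detail elsewhere in the text):

r = Routine({ inf.do({arg i; i.postln; 0.5.wait}) });
r.play;  // counts upward, pausing half a second each time
r.stop;  // stop it whenever you like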


MIDIOut

Though the distinction is blurring, I still divide computer music into two broad categories: synthesis (that is, instrument or sound design) and event/structure design. Event/structure design deals with pitch organization, duration, next-event scheduling, instrument choice, and so on. If you are more interested in that type of composition then there really isn't much reason to design your own instruments; you can use just about anything.41 This is where MIDI comes in handy.

MIDI is complicated by local variations in setup: you may have four external keyboards connected through a MIDI interface while I'm using SimpleSynth42 on my laptop. Because of this I'll have to leave the details up to each individual, but the example below shows how I get MIDI playback. To start and stop notes we will use noteOn and noteOff, each with arguments for MIDI channel, pitch, and velocity. The noteOff has a velocity argument, but note-off commands are really just a note-on with a velocity of 0, so I think it is redundant; you could just as well do both the note on and the note off with a note-on, the second with a velocity of 0: m.noteOn(1, 60, 100); m.noteOn(1, 60, 0). The rest of the examples in this text that use MIDI will assume you have initialized MIDI, assigned an out, and that it is called "m." The second example shows a simple alternative instrument.

20.7.

MIDI out

(
MIDIClient.init;
m = MIDIOut(0, MIDIClient.destinations.at(0).uid);
)

m.noteOn(1, 60, 100); //channel, MIDI note, velocity (max 127)
m.noteOff(1, 60);     //channel, MIDI note

// Same thing:
m.noteOn(1, 60, 100);
m.noteOn(1, 60, 0);

// Or if you don't have MIDI
(

41. Many of Bach's works have no instrumentation, and even those that do are transcribed for other instruments. In these works his focus was also on the events, not the instrumentation.

42. SimpleSynth is available at pete.yandell.com.


SynthDef("SimpleTone", { //Beginning of Ugen function
    arg midiPitch = 60, dur = 0.125, amp = 0.9;
    var out;
    out = SinOsc.ar(midiPitch.midicps, mul: amp);
    out = out*EnvGen.kr(Env.perc(0, dur), doneAction:2);
    Out.ar(0, out)
}).play(s);
)

//Then in the examples replace this
m.noteOn(arguments)
//with
Synth("SimpleTone", arguments)
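For instance, using the argument names from the SynthDef above, the substitution would look like this:

// MIDI version
m.noteOn(1, 60, 100);
// SimpleTone version
Synth("SimpleTone", [\midiPitch, 60, \dur, 0.5, \amp, 0.5]);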

The next example is conceptual or imaginary music.43 In the first few years of my exploration of computer assisted composition I began to think about writing every possible melody. I was sure this was within the scope of a computer program. After realizing all the possible variables (rhythms, lengths, variations, register, articulation), I decided to try a limited study of pitches only: every possible 12-tone row. That experiment was one of the many times I locked up the mainframe. Then I wondered if it were really necessary to actually play every row, or would writing it count as composition? What constitutes creation? Would a printout do? How about a soft copy? (Which I think is actually what crashed the machine back then.) Why not the idea of writing every melody? Couldn't the code itself, with the potential of every 12-tone row, be a representation of every possible 12-tone melody? Conceptual music; that's where I left it.

And here is the latest incarnation, which actually plays every row, given enough time. It doesn't include inversions, retrogrades, or inverted retrogrades because theoretically those will emerge as originals. It begins somewhere in the middle, but then wraps around, so it will eventually play every variation. The total.do will repeat for every possible variation. It differs from a loop in that it passes the argument count as a counter, which is used to advance to each new permutation. The permute message takes a single argument, the number of the permutation, and returns an array with all the elements reordered, without repetition. Try the short example first as an illustration. I calculate 12 factorial as 479,001,600: the total possible variations. Once the permutation is chosen, a second do steps through that array playing each note. The r.start and r.stop start and

[43] See Tom Johnson's wonderful Imaginary Music, unfortunately out of print. Thank heaven for interlibrary loan.

The r.start and r.stop start and stop the Task. The last 127.do is an "all notes off" message to stop any hanging pitches if you execute the r.stop in the middle of an event. The thisThread.clock.sched is used to turn off the MIDI pitch: it schedules the off event for next*art, where art is the articulation. So if the articulation is set to 0.9 the result is rather legato; if it is set to 0.1 it will be staccato.

Practice: do, MIDIOut, Every 12-Tone Row
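Before the full example, here is the scheduling idiom in isolation (a minimal sketch, assuming m is the MIDIOut initialized above; the pitch and durations are arbitrary):

(
r = Task({
	var next = 0.5, art = 0.9;
	4.do({
		m.noteOn(1, 60, 100);
		// schedule the matching note-off at 90% of the duration
		thisThread.clock.sched(next*art, {m.noteOff(1, 60); nil});
		next.wait;
	});
});
r.start;
)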

20.8. Every row

// Permute
25.do({arg count;
	postf("Permutation %: %\n", count, [1, 2, 3, 4].permute(count));
})

//Every row
( //run this first
var original, total, begin, next, art;
original = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11];
total = 479001600;
begin = total.rand;
next = 0.125;
art = 0.9;
("Total playback time = " ++ (total*next/3600).asString ++ " hours.").postln;
r = Task({
	total.do({arg count;
		var thisVar;
		thisVar = original.permute(count + begin);
		thisVar.postln;
		(thisVar + 60).do({arg note;
			m.noteOn(1, note, 100);
			thisThread.clock.sched(next*art, {m.noteOff(1, note, 100); nil});
			(next).wait
		});
	})
})
)

//then these
r.start;
r.stop;
127.do({arg i; m.noteOff(1, i, 0)})

This is conceptually correct because it steps through every possible permutation. It is, however, a bit pedantic, since the variations from one permutation to the next are slight. The following example chooses a row at random. It is not strictly 12-tone, since it is possible for pitches to be repeated. Theoretically this version would take longer to present every possible permutation, since it can repeat permutations.

20.9. Every random row


(
var original, total, begin, next, art;
original = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11];
total = 479001600;
next = 0.125;
art = 0.9;
("Total playback time = " ++ (total*next/3600).asString ++ " hours.").postln;
r = Task({
	total.do({
		var thisVar;
		thisVar = original.permute(total.rand);
		thisVar.postln;
		(thisVar + 60).do({arg note;
			m.noteOn(1, note, 100);
			thisThread.clock.sched(next*art, {m.noteOff(1, note, 100); nil});
			(next).wait
		});
	})
})
)

r.start;
r.stop;
127.do({arg i; m.noteOff(1, i, 0)})


20. Exercises

20.1. Write an example using two do functions, nested, so that it prints a multiplication table for values 1 through 5 (1*1 = 1; 1*2 = 2; . . . 5*4 = 20; 5*5 = 25).

20.2. Write another nested do that will print each of these arrays on a separate line with colons between each number and dashes between each line: [[1, 4, 6, 9], [100, 345, 980, 722], [1.5, 1.67, 4.56, 4.87]]. It should look like this:
1 : 4 : 6 : 9 :
---------------------
100 : 345 : etc.


21 - Control Using if, do continued, Arrays, MIDIIn, Computer Assisted Analysis [New]

Control message "if"

Artificial intelligence and computer-assisted composition begin with logic controls; that is, telling the machine what to do in certain circumstances. If you are hungry, do open the fridge. If there is no bread, then do go to the store. If you have enough money, do buy 4 loaves. Do come home and open the fridge again. If there is jam and jelly, do choose between the two and make a sandwich. Don't use marinated tofu.

There are several methods of iteration and control, such as while, for, and forBy, but do and if are the most common. The if message or function takes three arguments: an expression to be evaluated, a true function, and a false function. It evaluates the expression to be true or false (as in the previous examples where 10 < 20 returned true or false) and returns the results of the first function if true, the second if false.

if(expression, {true function}, {false function})

The true or false often results from a comparison of two values separated by the operators covered earlier, such as ">" for greater than or "==" for equals. (Note the difference between two equal signs and one: "=" means store this number in the variable, while "==" means "is it equal to?") Run both examples of code below. The first one evaluates a statement which returns true (because 1 does indeed equal 1), so it runs the first function. The second is false, so the false function is run.

21.1. if examples

if(1 == 1, {"true statement";}, {"false statement";})
if(1 == 4, {"true statement";}, {"false statement";})

// Commented:
if(
	1 == 1, //expression to be evaluated; "1 is equal to 1" true or false?
	{"true statement";}, //if the statement is true run this code
	{"false statement";} //if it is false run this code
)

Here are other Boolean operators:

<	less than
>	greater than
<=	less than or equal to
>=	greater than or equal to


!=	not equal to[44]
==	equal to

The message or combines two statements, returning true if either is correct: or(a > 20, b > 100). The message and combines two statements, returning true only if both are correct: and(a > 20, b > 100). The word true is true and false is false.

[44] In computer parlance the exclamation point, or "bang," means "not." The inside joke is that it lends a more accurate representation of advertising, such as "The must-see movie of the year! (not!)"

21.2. if examples

if((1 == 1).and(5 < 7), {"both are true"}, {"maybe only one is true";})
if((1 == 20).and(5 < 7), {"both are true";}, {"one or both are false";})
if((1 == 20).and(24 < 7), {"both are true";}, {"one or both are false";})
if((1 == 4).or(true), {"true is always true";}, {"1 does not equal 4";})
if(false.or(true), {"true is always true";}, {"true wins with or";})
if(false.and(true), {"true is always true";}, {"but false wins with and";})
if(or(10 > 0, 10 < 0), {34}, {78})
if((1 == 1).and((10 > 0).or((5 < 0).or(100 < 200))), {78}, {88})

These isolated numerical examples seem moot without a context (i.e., why would I ever use the expression 10 > 0? and why would you just post "true statement" or "false statement"?). The if function is usually used in combination with some iterative process such as do. Here is a real musical example. The code below begins at MIDI 60 (C4). It then picks a new MIDI interval, adds it to m, and returns that new value. Watch the results.

21.3. do 50 MIDI intervals

(
m = 60;
50.do({
	m = m + [6, 7, 4, 2, 11, 8, -2, -6, -1, -3].choose;
	m.postln;
})
)

I've biased the choices so that there are more intervals up than down, so eventually the MIDI values exceed a reasonable range for most instruments. Even if I carefully balanced the


choices,[45] it is conceivable that a positive value is chosen 20 times in a row. The if statement below checks the value during each iteration, reducing it by two octaves if it exceeds 72 and increasing it by two octaves if it drops below 48.

[45] There are several better schemes for managing interval choices. For example, you could always choose positive values to add, reduce that result to a single octave using modulo so that all values are between 0 and 12, then choose the octave separately.

21.4. do 50 MIDI intervals

(
m = 60;
50.do({
	var next;
	next = [6, 17, 14, 2, 11, 8, -12, -16, -1, -3].choose;
	"next interval is : ".post; next.postln;
	m = m + next;
	"before being fixed: ".post; m.post;
	if(m > 72, {m = m - 24});
	if(m < 48, {m = m + 24});
	" after being fixed: ".post; m.postln;
})
)

When writing a little piece of code like this it is worth poking around in SC to see if a function already exists that will do what you want. There is wrap, which wraps a value around a range, but in this example we are buffering back by octaves, not wrapping, so here we really do need the extra lines of code. Below, a do iterates over an array of pitch classes with an if test looking for C, E, or G. The computer doesn't understand these as actual pitches (see the discussion on strings below), but just as text. Even so, it does know how to compare them to see if they are equal.
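As an aside, a quick sketch of why wrap would not do in the previous example: wrapping jumps by the whole range, which changes the pitch class, while subtracting whole octaves preserves it.

85.wrap(48, 72).postln; // back in range, but a different pitch class
(85 - 24).postln;       // 61: two octaves down, same pitch class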

21.5. pitch class do

(
["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"].do(
	{arg item, count;
		if((item == "C").or(item == "E").or(item == "G"), //Boolean test
			{item.post; " is part of a C chord.".postln;}, //True function
			{item.post; " is not part of a C chord".postln;} //False function
		)
	}
)
)


You might say we have taught the computer a C chord. This is where AI begins. A series of if statements can be used to define a region on the screen. When the mouse enters the region, with an x greater than 0.3 but less than 0.5, and a y greater than 0.3 but less than 0.7 (all conditions "true"), it generates a positive value, or a trigger. In this example the * is equivalent to and. There are no true or false functions, just the values 1 (on) and 0 (off).

21.6. Mouse Area Trigger

(
{
var aenv, fenv, mgate, mx, my;
mx = MouseX.kr(0, 1);
my = MouseY.kr(0, 1);
mgate = if((mx > 0.3) * (mx < 0.5) * (my > 0.3) * (my < 0.7), 1, 0);
// ... the rest of this example (the aenv and fenv envelopes and the
// sounding synth) is missing in this copy ...
}.play
)

IdentityDictionary[
	$H -> 6, $x -> 6, $b -> 6, $T -> 6, $W -> 6,
	$e -> 11, $o -> 11, $c -> 11, $, -> 11, $. -> 11,
	$n -> 3, $y -> 3, $m -> 4, $p -> 8, $l -> 9
];

The numbers associated with the characters are then used as MIDI pitches, after being raised to the appropriate octave. Alternatively, they can be used as MIDI intervals. After using the IdentityDictionary for a few projects I found it cumbersome to change values. (Too much typing.) So I settled on a computationally less efficient algorithm that uses a two-dimensional array.


The first element of the second dimension is the list of characters comprising a string; the second element is the associated value. They are parsed using a do function which looks at each of the first elements, and if a match is found (using includes) the mappedValue variable is set to the second element.

28.3. mapping array

var mappedValue, intervMap;
intervMap = [
	["ae", 2], ["io", 4], [" pst", 5], ["Hrn", -2],
	["xmp", -1], ["lfg", -4], ["Th", -5], [".bdvu", 1]
];
intervMap.do({arg item;
	if(item.at(0).includes($o), {mappedValue = item.at(1)})
});
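The test at the heart of that loop is includes, which asks whether a string contains a given character:

"io".includes($o).postln; // true, so mappedValue is set to 4
"ae".includes($o).postln; // false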

Here is a patch which controls only pitched elements, in an absolute relationship (that is, actual pitches rather than intervals).

28.4. Extra-Musical Criteria, Pitch Only

(
var noteFunc, blipInst, midiInst, channel = 0, port = 0, prog = 0,
intervMap, count = 0, ifNilInt = 0, midin = 0, inputString;

//The input stream.
inputString = "Here is an example of mapping. The, them, there, these,
there, then, that, should have similar musical interpretations.
Exact repetition; thatthatthatthatthatthat will also be similar.";

//intervMap is filled with arrays containing a collection of
//characters and a value. In the functions below the character
//strings are associated with the numbers.
intervMap = [
	["ae", 2], ["io", 4], [" pst", 5], ["Hrn", 7],
	["xmp", 1], ["lfg", 3], ["Th", 6], [".bdvu", 11]
];

"// [Char, Interval, ifNilInt, midi interval, octave, midi]".postln;

noteFunc = Pfunc({var parseInt, octave;
	//Each array in the intervMap is checked to see if the
	//character (inputString.wrapAt(count)) is included. If
	//it is then parseInt is set to the value at item.at(1)
	intervMap.do({arg item;
		if(item.at(0).includes(inputString.wrapAt(count)),
			{parseInt = item.at(1)})
	});
	//If parseInt is notNil, midin is set to that.
	//ifNilInt is for storing each parseInt to be used if
	//no match is found and parseInt is nil the next time around.
	if(parseInt.notNil,
		{midin = parseInt; ifNilInt = parseInt},
		{midin = ifNilInt}
	);
	octave = 60;
	"//".post;
	[inputString.wrapAt(count), parseInt, ifNilInt, midin,
		octave/12, midin + octave].postln;
	count = count + 1;
	midin + octave
});

Pbind(
	\midinote, noteFunc,
	\dur, 0.125,
	\amp, 0.8,
	\instrument, "SimpleTone"
).play;
)

I admit it's not very interesting. That is partially because pitch values alone usually do not account for a recognizable style. We are more accustomed to a certain level of dissonance, not pitch sequences alone, and dissonance is a function of interval distribution, not pitch distribution. To achieve a particular balance of interval choices, the intervMap should contain proportional interval values rather than absolute pitches. The numbers may look pretty much the same, but in the actual parsing we will use midin = parseInt + midin%12 (thus parseInt becomes an interval) rather than midin = parseInt. Interest can also be added by mapping pitches across several octaves, or by mapping octave choices to characters. Notice in this example that the series "thatthatthatthat" results in an intervallic motive, rather than repeated pitches.
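One caution about that expression: SC evaluates binary operators strictly left to right, so parseInt + midin%12 means (parseInt + midin) % 12, which is what keeps the running sum in range. A quick check:

(11 + 40 % 12).postln;   // 3, evaluated as (11 + 40) % 12
(11 + (40 % 12)).postln; // 15, what conventional precedence would give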

28.5. Extra-Musical Criteria, Total Control

(
var noteFunc, blipInst, midiInst, channel = 0, port = 0, prog = 0,
intervMap, count = 0, ifNilInt = 0, midin = 0, ifNilDur = 1,
durMap, durFunc, ifNilSus = 1, susMap, susFunc, ifNilAmp = 0.5,
curAmp = 0.5, ampMap, ampFunc, inputString;

//The input stream.
inputString = "Here is an example of mapping. The, them, there, these,
there, then, that, should have similar musical interpretations.
Exact repetition; thatthatthatthatthatthat will also be similar.";

//intervMap is filled with arrays containing a collection of
//characters and a value. In the functions below the character
//strings are associated with the numbers.
intervMap = [
	["ae", 6], ["io", 9], [" pst", 1], ["Hrn", -3],
	["xmp", -1], ["lfg", -4], ["Th", -5], [".bdvu", 1]
];
durMap = [
	["aeiouHhrsnx", 0.125], ["mplf", 0.5], ["g.T,t", 0.25],
	["dvc", 2], [" ", 0]
];
susMap = [
	["aei ", 1.0], ["ouHh", 2.0], ["rsnx", 0.5],
	["mplf", 2.0], ["g.T,t", 4.0], ["dvc", 1.0]
];
ampMap = [
	["aeHhrsnx ", 0.8], ["ioumplfg.T,tdvc", 1.25]
];

noteFunc = Pfunc({var parseInt, octave = 48;
	//Each array in the intervMap is checked to see if the
	//character (inputString.wrapAt(count)) is included. If
	//it is then parseInt is set to the value at item.at(1)
	intervMap.do({arg item;
		if(item.at(0).includes(inputString.wrapAt(count)),
			{parseInt = item.at(1)})
	});
	//If parseInt is notNil, midin is set to that plus previous
	//midin. ifNilInt is for storing each parseInt to be used if
	//no match is found and parseInt is nil.
	if(parseInt.notNil,
		{midin = parseInt + midin%48; ifNilInt = parseInt},
		{midin = ifNilInt + midin%48}
	);
	[inputString.wrapAt(count)].post;
	["pitch", parseInt, midin, octave/12, midin + octave].post;
	midin + octave
});

durFunc = Pfunc({var parseDur, nextDur;
	durMap.do({arg item;
		if(item.at(0).includes(inputString.wrapAt(count)),
			{parseDur = item.at(1)})
	});
	if(parseDur.notNil,
		{nextDur = parseDur; ifNilDur = parseDur},
		{nextDur = ifNilDur}
	);
	["dur", nextDur].post;
	nextDur
});

susFunc = Pfunc({var parseSus, nextSus;
	susMap.do({arg item;
		if(item.at(0).includes(inputString.wrapAt(count)),
			{parseSus = item.at(1)})
	});
	if(parseSus.notNil,
		{nextSus = parseSus; ifNilSus = parseSus},
		{nextSus = ifNilSus}
	);
	["sustain", nextSus.round(0.01)].post;
	nextSus
});

ampFunc = Pfunc({var parseAmp;
	ampMap.do({arg item;
		if(item.at(0).includes(inputString.wrapAt(count)),
			{parseAmp = item.at(1)})
	});
	if(parseAmp.notNil,
		{curAmp = curAmp*parseAmp; ifNilAmp = parseAmp},
		{curAmp = curAmp*ifNilAmp}
	);
	count = count + 1;
	if(0.5.coin, {curAmp = rrand(0.2, 0.9)});
	["amp", curAmp.round(0.01)].postln;
	curAmp.wrap(0.4, 0.9)
});

Pbind(
	\midinote, noteFunc,
	\dur, durFunc,
	\legato, susFunc,
	\amp, ampFunc,
	\instrument, "SimpleTone"
).play;
)

Working With Files

It is not always practical to type the text or data values you want to use into the actual code file. Once you have devised an acceptable map for text, you can consider the map the composition and the text a modular component that drives the music. In this case a method is required for reading the text as a stream of data into the running program. With this functionality in place, the mapping composition can exist separate from the files that contain the text to be read. SC has standard file management tools that can be used for this purpose.

The type of file we are working with is text (see data types below), so we should create text files. But an MS Word document, or even an rtf document, contains non-text formatting information that will interfere. I tried creating "text" files using SimpleText, BBEdit, the SC editor, and Word. Word and BBEdit were the only two that didn't add extra unwanted stuff.

Pathnames also require a little explanation. When SC (and most programs) opens a file, it first looks for the file in the directory where SC resides. So if the file you are using is in the same folder as SC, the pathname can just be the name of the file. If it resides anywhere else, a folder hierarchy must be included. The hierarchy is indicated with folder names separated by slashes. If the data file resides in a folder which is in the folder where SC resides, then the pathname can begin with that folder. If the file resides outside the SC folder, then you need to give the entire path name, beginning with e.g. "Users." So a file in the same folder as SC is simply "MyFile"; if it is in a subfolder it might be "/Data Files/MyFile"; if in another area then perhaps "/Users/Students/Documents/Computer Music/Data Files/MyFile".

To open and read the file, first declare a variable to hold the file pointer. Then use File() to open the file and identify what mode you will be using (read, write, append, etc.). In this example we use the read mode, or "r." Once you have opened the file you could retrieve each character one at a time using getChar, but I would suggest reading the entire file and storing it as a single string, since in the previous examples the text is stored in a string. Here is the code, assuming the text file is named "Test File." This section can be inserted in the patch above (minus the input.postln) in place of the input = "Here is . . ." etc.

28.6. reading a file


(
var input, filePointer; //declare variables
filePointer = File("Test File", "r");
input = filePointer.readAllString;
filePointer.close;
input.postln;
)

The longer but more user-friendly method uses openDialog. I'm usually working with just one file, so this is an extra step I don't need; but in many cases it takes out the pathname guesswork. It can also be used just to figure out the exact pathname of any given file. There is one glitch: either everything has to be inside the openDialog function, or you have to use a global variable and run the two sections of code separately.

28.7. reading a file

// Print any pathname for later use
File.openDialog("", {arg pathName; pathName.postln});

// Open using a dialog
(
var input, filePointer; //declare variables
File.openDialog("",
	{arg pathname;
		filePointer = File(pathname, "r");
		input = filePointer.readAllString;
		input.postln;
		// Everything has to be inside this function
	},
	{"File not found".postln});
)

// Or open file and store in global
(
var filePointer; //declare variables
File.openDialog("",
	{arg pathname;
		filePointer = File(pathname, "r");
		~input = filePointer.readAllString;
	},
	{"File not found".postln});
)

// Then
~input.postln;
// Or include ~input in the patch.


28. Exercises

28.1. None at this writing.


29 - Markov Chains, Numerical Data Files

Artificial intelligence is describing human phenomena to a computer in a language it understands: numbers, probabilities, and formulae. Any time you narrow the choices a computer makes in a musical frame you are in a sense teaching it something about music, and it is making an "intelligent" or informed choice. This is true with random walks: if you narrow the choice to a MIDI pitch, for example, you have taught the patch (by way of converting MIDI values to cps) about scales, ratios, intervals, and equal-tempered tuning. If we limit random choices to a C-major scale, then the cpu is "intelligent" about the whole step and half step relationships in a scale. If we bias those choices such that C is chosen more often than any other pitch, then the cpu understands a little about tonality.

This bias map is usually in the form of ratios and probabilities. In a simple biased random choice there is only one level of probability: a probability for each possible choice in the scale. This is known as a Markov process with a zeroth order probability. A zeroth order system will not give us a sense of logical progression, since musical lines are fundamentally reliant on relationships between pitches, not the individual pitches themselves or the general distribution of pitches in a piece. We perceive melody and musical progression in the relationship of the current pitch to the next pitch, and the last three pitches, and the pitches we heard a minute ago. In order to describe a melody, you have to describe the connections between pitches, i.e. intervals. To get a connection between pitches we need a higher order Markov chain; first or second at least. The technique is described in Computer Music by Charles Dodge (page 283) and Moore's Elements of Computer Music (page 429). I suggest you read those chapters, but I will also explain it here.

The way to describe the connection between two pitches is to have a chart of probable next pitches given the current pitch. Take for example the pitch G in the key of C. If you wanted to describe a tonal system in terms of probability, you would say there is a greater chance that C follows G (resulting in a V-I relationship) than that F follows G (a retrogressive V-IV). If the current pitch is F, on the other hand, then there is a greater chance that the next pitch is E (resolution of the IV) than C (retrogressive).

Markov chains are not intended solely for tonal musics. In non-tonal musics you might likewise describe relationships by avoiding the connection G to C: if the current pitch is G and avoiding a tonal relationship is the goal, you might say there is a very small chance that the next pitch is C, but a greater chance that it is D-sharp or A-flat. You can describe any style of music using Markov chains. You can even mimic an existing composer's style based on an analysis of existing works. For example, you could analyze all the tunes Stephen Foster wrote, examining the pitch G (or its equivalent in each key) and the note that follows each G. You would then generate a chart with all the possible choices that might follow G. Count each occurrence of each of those subsequent pitches in his music and enter that number in the chart. This would be a probability chart describing precisely Stephen Foster's treatment of the pitch G, or the fifth step of the scale.


If we have determined the probabilities of one pitch based on our analysis, the next step is to compute similar probabilities for all possible current pitches and combine them in a chart. This is called a transition table. To create such an analysis of the tune "Frere Jacque" you would first create a chart with all the possible current pitches heading the columns and all possible next pitches labeling the rows. The first pitch is C and it is followed by D; we represent this single combination with a 1 in the D row under the C column.

	C	D	E	F	G
C	0	0	0	0	0
D	1	0	0	0	0
E	0	0	0	0	0
F	0	0	0	0	0
G	0	0	0	0	0

Next we count up all the times C is followed by D and enter that number (2) in that cell. We then examine all the other Cs and count the number of times C is followed by C, E, F, or G, entering each of those totals into the same column. We do the same for the pitch D, then E, and so on. This is the resulting chart or transition table:

	C	D	E	F	G
C	1	0	2	0	0
D	2	0	0	0	0
E	1	2	0	0	1
F	0	0	2	0	0
G	0	0	0	2	0
Total	4	2	4	2	1

For each column the total number of combinations is listed at the bottom. The probability for each cell is calculated as the number of actual occurrences in that column divided by the column total, such that the C column values become 1/4, 2/4, 1/4:

	C	D	E	F	G
C	.25	0	.5	0	0
D	.5	0	0	0	0
E	.25	1.0	0	0	1.0
F	0	0	.5	0	0
G	0	0	0	1.0	0
Total	4	2	4	2	1
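In SC, turning a column of counts into probabilities is one message, normalizeSum (the same message used in the examples later in this chapter); here for the C column:

[1, 2, 1, 0, 0].normalizeSum.postln; // [0.25, 0.5, 0.25, 0, 0]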


This is a first order transition table.[55] Because we are using only the previous and next note (one connection), it will still lack a convincing sense of melodic progression. To imitate the melody we really need to look at patterns of two or three notes. This brings us to a second order Markov chain. A second order adds one level to the sequence; that is to say, given the last two pitches, what is the probability of the next pitch being C, D, etc.? Here is the same chart expanded to include all of "Frere Jacque" with a second order of probability. There are 36 possible pairs, but not all of them occur (e.g. C-A); those that don't occur need not be included, so I've removed those columns.

[55] One could argue that a set of probabilities describing intervals rather than pitches is already a 1st order transition table, since even a single interval describes the relationship between two pitches.

	C-C	C-D	C-E	C-G	D-E	E-C	E-F	F-E	F-G	G-C	G-E	G-F	G-G	G-A	A-G	Total
C	.	.	.	2	2	2	.	2	.	1	.	.	.	.	.	9
D	1	.	.	.	.	.	.	.	.	.	.	.	.	.	.	1
E	.	2	.	.	.	1	.	.	1	.	.	2	.	.	.	6
F	.	.	1	.	.	.	.	.	.	.	1	.	.	.	2	4
G	2	.	.	.	.	1	2	.	1	.	.	.	.	2	.	8
A	.	.	.	1	.	.	.	.	.	.	.	.	1	.	.	2
Total	3	2	1	3	2	4	2	2	2	1	1	2	1	2	2	30

Here are some guidelines. Note that I've totaled all the combinations at the bottom; this is a quick way to check that you have the correct number of total connections. The total should equal the number of notes in the piece minus two (because the first two don't yet form a connection of three items, or second order). The other thing you have to watch out for is a broken link: a reference to a connection that doesn't have a probability column on the chart. Take for example the combination C-C. If you entered a probability for the F row in the C-C column, then the combination C, C, F could result; but there is no column of probabilities for C-F, and the program would return a nil value and crash (or get stuck in a loop). I don't have a quick or clever way to check for bad leads; you just have to proof carefully. Here is the chart with percentages:

	C-C	C-D	C-E	C-G	D-E	E-C	E-F	F-E	F-G	G-C	G-E	G-F	G-G	G-A	A-G	Total
C	.	.	.	.66	1.0	.50	.	1.0	.	1.0	.	.	.	.	.	9
D	.33	.	.	.	.	.	.	.	.	.	.	.	.	.	.	1
E	.	1.0	.	.	.	.25	.	.	.50	.	.	1.0	.	.	.	6
F	.	.	1.0	.	.	.	.	.	.	.	1.0	.	.	.	1.0	4
G	.66	.	.	.	.	.25	1.0	.	.50	.	.	.	.	1.0	.	8
A	.	.	.	.33	.	.	.	.	.	.	.	.	1.0	.	.	2
Total	3	2	1	3	2	4	2	2	2	1	1	2	1	2	2	30

The biggest problem with this type of system is the memory requirement. If, for example, you were to do a chart for the piano works of Webern, assuming a four-octave range with 12 pitches per octave, second order probability would require a matrix of 110,592 references for pitch alone. If you expanded the model to include rhythm, instrument choice, dynamics, and articulation, you could be in the billions in no time. So there need to be efficient ways of describing the matrix. That is why, in the Foster example mentioned below, I take a few confusing but space-saving shortcuts. The chart above for "Frere Jacque" is demonstrated in the file Simple Markov. Following are some explanations of the code.

29.1. transTable

//A collection of the pitches used
legalPitches = [60, 62, 64, 65, 67, 69];

//An array of arrays, representing every possible previous pair.
transTable = [
	[0, 0], //C, C
	[0, 1], //C, D
	[0, 2], //C, E
	[0, 4], //C, G
	[1, 2], //D, E
	[2, 0], //E, C
	[2, 3], //E, F
	[3, 2], //F, E
	[3, 4], //F, G
	[4, 0], //G, C
	[4, 2], //G, E
	[4, 3], //G, F
	[4, 4], //G, G
	[4, 5], //G, A
	[5, 4] //A, G
];

It would be inefficient to use actual MIDI values, since so many MIDI values are skipped in a tonal scheme. So legalPitches is used to describe all the pitches I will be working with, and the actual code looks for and works with array positions, not MIDI values (that is, the array positions which contain the MIDI values). The variable transTable describes the previous pairs of my transition table. Each possible previous pair is stored as a two-position array (arrays inside an array). The value I use to compare and store the current two pitches is currentPair. It is a single array holding two items: the first and second pitch in the pair I will use in the chain. At the beginning of the program they are set to 0, 0, or C, C. Next I have to match the currentPair with the array transTable. This is done with a do iteration. In this function each of the two-position arrays is compared to the variable currentPair, which is also a two-position array. When a match is found, the index of that match (the position in the array where it was found) is stored in nextIndex. In other words, I have found the index position of currentPair. This is necessary because I have pared down the table to include only combinations I'm actually using.

29.2. Parsing the transTable

transTable.do({arg index, i;
	if(index == currentPair, {nextIndex = i; true;}, {false})
});

Next I describe the index for each previous pair. If, for example, the current pair was D, E, their values in the transTable would be [1, 2], and the lines of code above would find a match at array position 4 (remember to count from 0). That means I should use the probability array at position 4 in the chart below. In this chart it says that I have a 100% chance of following the D, E in currentPair with a C. I modified (where noted) some of the probabilities from the original chart in Dodge.

29.3. Probability chart

nPitchProb = [
	//C     D     E     F     G     A
	[0.00, 0.33, 0.00, 0.00, 0.66, 0.00], //C, C
	[0.00, 0.00, 1.00, 0.00, 0.00, 0.00], //C, D
	[0.00, 0.00, 0.00, 1.00, 0.00, 0.00], //C, E
	[0.66, 0.00, 0.00, 0.00, 0.00, 0.33], //C, G
	[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //D, E
	[0.50, 0.00, 0.25, 0.00, 0.25, 0.00], //E, C
	[0.00, 0.00, 0.00, 0.00, 1.00, 0.00], //E, F
	[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //F, E
	[0.00, 0.00, 0.50, 0.00, 0.50, 0.00], //F, G
	[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //G, C
	[0.00, 0.00, 0.00, 1.00, 0.00, 0.00], //G, E
	[0.00, 0.00, 1.00, 0.00, 0.00, 0.00], //G, F
	[0.00, 0.00, 0.00, 0.00, 0.00, 1.00], //G, G
	[0.00, 0.00, 0.00, 0.00, 1.00, 0.00], //G, A
	[0.00, 0.00, 0.00, 1.00, 0.00, 0.00]  //A, G
];

The choice is actually made using windex. The function windex (weighted index) takes an array of probabilities as its argument. The array nPitchProb is an array of arrays, and I want to use one of those inner arrays as my probability array, say index 4. The way I identify the array within the array is nPitchProb.at(4). In the Foster example below it is a multi-dimensional array, so I have to reference it twice, e.g. nPitchProb.at(4).at(5); using variables this becomes nPitchProb.at(prevPitch).at(nextIndex). The array given to windex has to total 1, and I don't remember why I entered values that total 16, but that is fixed with normalizeSum. The return from windex is an array position, which I store in nextPitch; the variable that tells me which probability array to use is nextIndex. The variable nextPitch is an array position that can then be used in conjunction with legalPitches to return the MIDI value for the correct pitch: legalPitches.at(nextPitch). It is also used for some necessary bookkeeping: I need to rearrange currentPair to reflect my new choice. The value in the second position of the array currentPair needs to be moved to the first position, and the nextPitch value needs to be stored in the second position. (In other words, currentPair was D, E, or array positions 1, 2, and I just picked a C, according to the table, or a 0. So what was [1, 2] needs to be changed to [2, 0] for the next pass through the function.)

currentPair.put(0, currentPair.at(1));
currentPair.put(1, nextPitch);

A more complex example of a transition table and Markov process is demonstrated in the file Foster Markov, which uses the chart for Stephen Foster tunes detailed in Dodge's Computer Music on page 287. I wrote this a while back, and I think there are more efficient ways to do the tables (and I make some confusing shortcuts), but it does work. For duration you can use the first line, 0.125; this is probably a more accurate realization, since we haven't discussed rhythm. But to give it a more melodic feel I've added some random rhythmic cells. The system is not given any information about phrase structure or cadences, and you can hear that missing component. Even so, it is very Fosterish.
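Before the full example, windex in isolation. It returns an index, choosing each position with the given probability; normalizeSum scales arbitrary weights so they total 1:

[0.7, 0.2, 0.1].windex.postln;           // usually 0
[8, 4, 4].normalizeSum.postln;           // [0.5, 0.25, 0.25]
Array.fill(8, {[8, 4, 4].normalizeSum.windex}).postln;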

29.4. Foster Markov


(
var wchoose, legalPitches, previousIndex, prevPitch, currentPitch,
nextIndex, nextPitch, nPitchProb, pchoose, blipInst, envelope,
pClass, count, resopluck;
prevPitch = 3;
currentPitch = 1;
count = 1;
pClass = #["A3", "B3", "C4", "D4", "E4", "F4", "F#4", "G4", "A4", "B4", "C5", "D5"];

//pchoose is the mechanism for picking the next value.
pchoose = {
	legalPitches = [57, 59, 60, 62, 64, 65, 66, 67, 69, 71, 72, 74];
	//Both prevPitch and nextPitch are not pitches, but array positions.
	previousIndex = [
		[2], //previous is 0 or A3
		[2], //1 or B3
		[0, 1, 2, 3, 4, 5, 7, 9, 10], //2: C4
		[1, 2, 3, 4, 7, 10], //3: D4
		[2, 3, 4, 5, 7, 8], //4: E4
		[4, 5, 7, 8], //5: F4
		[7], //6: F#4
		[2, 4, 5, 6, 7, 8, 10], //7: G4
		[2, 4, 5, 6, 7, 8, 10], //8: A4
		[8, 10], //9: B4
		[7, 8, 9, 10, 11], //10: C5
		[7, 9] //11: D5
	];
	previousIndex.at(prevPitch).do({arg index, i;
		if(index == currentPitch, {nextIndex = i; true;}, {false})});
	nPitchProb = [
		// [00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11] array position
		//  A,  B,  C,  D,  E,  F, F#,  G,  A,  B,  C,  D
		[ //arrays for A3
		[00, 00, 16, 00, 00, 00, 00, 00, 00, 00, 00, 00] // one array: C4
		],
		[ //arrays for B3
		[00, 00, 05, 06, 00, 00, 00, 05, 00, 00, 00, 00] // C4 only
		],
		[ //arrays for C4
		[00, 00, 16, 00, 00, 00, 00, 00, 00, 00, 00, 00], // A3
		[00, 00, 16, 00, 00, 00, 00, 00, 00, 00, 00, 00], // B3
		// [00, 02, 02, 09, 02, 10, 00, 00, 00, 00, 00, 00], original C4
		[00, 06, 02, 09, 02, 06, 00, 00, 00, 00, 00, 00] // C4
		// ... remaining rows for C4 garbled in this copy ...
		]
		// ... the weight arrays for D4 through D5 (from Dodge's Stephen
		// Foster chart, p. 287) are garbled beyond recovery in this copy ...
	];
	nextPitch = (nPitchProb.at(prevPitch).at(nextIndex).normalizeSum).windex;
	//current is set to previous, next is current for next run. The actual pitch
	//is returned from legalPitches at nextPitch.
	[pClass.at(nextPitch), legalPitches.at(nextPitch)].post;
	// if((count%10) == 0, {"".postln};);
	count = count + 1;
	prevPitch = currentPitch;
	currentPitch = nextPitch;
	legalPitches.at(nextPitch)
};

Pbind(
	\dur, 0.125, //simple version (overridden by the Prand below)
	\dur, Prand([
		Pseq(#[1]),
		Pseq(#[0.5, 0.5]),
		Pseq(#[0.5, 0.5]),
		Pseq(#[0.25, 0.25, 0.25, 0.25]),
		Pseq(#[0.5, 0.25, 0.25]),
		Pseq(#[0.25, 0.25, 0.5]),
		Pseq(#[0.25, 0.5, 0.25])
	], inf),
	\midinote, Pfunc(pchoose),
	\db, -10,
	// \instrument, "SimpleTone",
	\pan, 0.5
).play
)


Data Files, Data Types

As with the previous section, it might be useful to store the data for transition tables in a file, so that the program can exist separate from the specific tables for each composition. The data files can then be used as a modular component with the basic Markov patch.

Text files, as discussed above, contain characters. But the computer understands them as integers (ASCII numbers). The program you use to edit a text file converts the integers into characters. You could use SimpleText to create a file that contained integers representing a transition table, but the numbers are not really numbers, rather characters. To a cpu "102" is not the integer 102, but three characters (whose ASCII integer equivalents are 49, 48, and 50) representing 102. The chart below shows the ASCII numbers and their associated characters. Numbers below 32 are non-printing characters such as carriage returns, tabs, beeps, and paragraph marks. The ASCII number for a space (32) is included here because it is so common. This chart stops at 127 (the maximum for an 8 bit number), but there are ASCII characters above 127; the corresponding characters are usually diacritical combinations and Latin letters.

032 (space)  033 !  034 "  035 #  036 $  037 %  038 &  039 '
040 (        041 )  042 *  043 +  044 ,  045 -  046 .  047 /
048 0        049 1  050 2  051 3  052 4  053 5  054 6  055 7
056 8        057 9  058 :  059 ;  060 <  061 =  062 >  063 ?
064 @        065 A  066 B  067 C  068 D  069 E  070 F  071 G
072 H        073 I  074 J  075 K  076 L  077 M  078 N  079 O
080 P        081 Q  082 R  083 S  084 T  085 U  086 V  087 W
088 X        089 Y  090 Z  091 [  092 \  093 ]  094 ^  095 _
096 `        097 a  098 b  099 c  100 d  101 e  102 f  103 g
104 h        105 i  106 j  107 k  108 l  109 m  110 n  111 o
112 p        113 q  114 r  115 s  116 t  117 u  118 v  119 w
120 x        121 y  122 z  123 {  124 |  125 }  126 ~  127 (del)
If you would like to test this, create a text file using BBEdit, SimpleText, or MS Word (save it as text only) and run these lines of code, which retrieves each 8 bit integer from the file then prints it as an integer, then the ascii equivalent. (Do a get info on the file you save to

290

check for a hidden extension such as ".rtf." Note that an rtf file has a lot more information than just the text you wrote.) 29.5.

29.5. test ascii

var fp;
fp = File("Testascii.rtf", "r"); //open a text file
fp.length.do({
	a = fp.getInt8; //read the file as 8 bit integers
	[a, a.asAscii].postln; //post each integer and its character
});
fp.close;

Data appropriate for a transition table (integers or floats) could be opened in a text editor, but it would display gibberish, not the transition data. So the question is: how do you create a data file? It is not as simple as a text file. It must be done with a program that writes and reads data streams other than characters. SC can create such files. (But be sure to read ahead; there is a simpler method.) The transition tables above used integers, but the probability table used floating points. It is important to distinguish between the two. Below are examples of code for writing and reading files containing integers and floating point values. There are three messages for integers: putInt8, putInt16, and putInt32, for 8 bits (a byte), 16 bits (two bytes), and 32 bits (four bytes). Each size has a limited capacity: an 8 bit integer can be as large as 127, a 16 bit integer as large as 32,767, and a 32 bit integer a little over 2 billion. There are two messages for floats: putFloat and putDouble. A "double" is larger and therefore more precise, but doubles take up twice the space, and floats should be sufficient for what we are doing. For characters the messages putChar and putString can be used. It is important to understand these data types because you need to read the same type that you write: if you write data as 16 bit integers but read them as 8 bit integers, the numbers will not be the same. Following are code segments that write and read floating-point values and integers.

29.6. data files

var fp, data;
fp = File("TestInt", "w"); //open a file
data = [65, 66, 67, 68, 69, 70, 71];
data.do({arg eachInt; fp.putInt16(eachInt)}); //place each int in file
fp.close;

var fp, data;
fp = File("TestInt", "r"); //open a file
data = fp.readAllInt16; //read all as Int array
data.postln;
fp.close;

var fp, data;
fp = File("TestFloat", "w"); //open a file
data = [6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1];
data.do({arg eachFloat; fp.putFloat(eachFloat)});
fp.close;

var fp, data;
fp = File("TestFloat", "r"); //open a file
data = fp.readAllFloat; //read all as array
data.postln;
fp.close;

I chose to write the integers 65 through 71 because they correspond with the ASCII characters A through G. To confirm this, open the TestInt file, which supposedly contains only integers, not characters, with SimpleText, MS Word, or SC: these programs convert them to text characters.

Interpreting Strings

Files containing integers and floats still present a problem: it is difficult to manage the data, especially if they are grouped in arrays of arrays, as in the Markov patch. There is really no fast, easy way to check the file to make sure all the data are correct. You can't read it as a text file; you have to read it as 8, 16, or 32 bit ints, or floating point values. If you get any of the data types incorrect, or if a value is in the wrong position, the structures and arrays will be off. (Don't misunderstand; it can be done, it's just a hassle and invites error.) It would be easier if we could create the files using a text editor or SC, but read them as data files; or read them as strings, but parse them into the correct data.

For those of you who have worked with C, you know that managing and parsing data represents a large amount of programming. True to form, it is very simple in SC. The interpret message translates a string into code that SC will understand. Strings can be read from files and interpreted as code. You can save entire functions, data structures, lists, error messages, and macros in files that can be edited as text with SC or any text editor, but used as code in an SC patch. To test this, first open a new window (just text, not rich text) and type these lines, which represent an array of arrays containing different data types (note that I did not end with a semicolon):

[
	[1, 2, 3],
	[2.34, 5.12],
	["C4", "D4", "E4"],
	Array.fill(3, {rrand(1.0, 6.0)})
]


I've intentionally filled this array with several different data types, including strings,[56] to illustrate how simple this is in SC compared to more archaic languages. If I were managing this text with a C compiler I would have to keep close track of each data type, the number of items in each array, the total size and length of each line, etc. It was a hassle. Now run this code, and notice that while the first postln shows that the array is indeed a string when first read from the file, in the subsequent code it is being treated as code.

29.7. interpreting a string

var fp, array;
fp = File("arrayfile", "r");
array = fp.readAllString; //read the file into a string
array.postln; //print to confirm it is a string
array = array.interpret; //interpret it and store it again
array.at(0).postln; //confirm it is code not a string
array.at(1).sum.postln;
array.at(2).at(0).postln;
array.at(3).postln;
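The same message works on any string of SC code; for example:

"[1, 2, 3].sum + 4".interpret.postln; // 10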

The advantage of this method is that I can open the arrayfile with BBEdit or any text editor and modify the data as text. Even more promising: I can import data from a database such as FileMaker Pro or Excel as a text file. Any data source that can be saved as text can be read into an SC program. Now it is possible to have a Markov patch with modular files containing data for different probabilities. [Future discussion: genetic probabilities]

[56] Note that a string within a string will not work if typed directly into SC because of the duplicate quotation marks. For example, a = "[[1, 2, 3], ["C4", "D4"]]" is not allowed; the compiler reads the second quote, next to the C4, as the closing quote. You can get around this using the backslash character: "[[1, 2, 3], [\"C4\", \"D4\"]]". But when contained in a file and interpreted as it is here, the backslash is unnecessary.

29. Exercises

29.1. None at this writing.


30 - Concrète, Audio Files, Live Audio DSP

Music Concrète

The most common definition for concrète music, the one you will hear in an electro-acoustic music course, is composition using recordings of real sounds as raw material. This definition takes its meaning partly as a distinction from analog electronic music, whose sounds are purely electronic. The attraction of concrète is the richness and complexity of the source; the sounds have a depth that is difficult to reproduce with pure electronics.

The second, perhaps more accurate definition, which I like the more I think of it, is any recorded sound. It is concrète because it will never change: every performance will be exactly the same. A recorded symphony is analogous to a statue; set in stone. This definition reminds us that live music has a dimension recorded pieces do not: the potential for growth and change. A written work that has to be realized is not the performance, but the potential for a performance, or a failed performance, or a brilliant performance, or in the case of aleatory and improvisation, something completely new, never heard before. I believe too many people equate a recording with a performance. In a sense it has tainted our expectations: we attend a concert expecting what we heard on the CD.

This chapter could have been titled sound file manipulation, but I use concrète in perhaps an ironic sense, because the real-time dynamic controls available in SC allow us to take manipulation of concrète audio out of the realm of classic concrète in the second sense; not set in stone, but always evolving.

Buffers

Before processing audio it needs to be loaded into a memory buffer on the server. After audio is loaded into a buffer, either from a sound file on disk or from an input source, it is available for processing, quotation, or precise playback manipulation. The first line below reads audio from a sound file. The arguments for Buffer are the server where the audio is loaded (think of it as a tape deck) and the sound file name (see the discussion on file pathnames above). Check the post window after running the first line. If the file was buffered correctly it should display Buffer(0, -1, 1, sounds/africa2). The information displayed is buffer number, number of frames, channels, and pathname. You can also retrieve these values using bufnum, numChannels, path, and numFrames. The arguments for PlayBuf are number of channels and buffer number.

30.1. Loading Audio into and Playing From a Buffer

b = Buffer.read(s, "sounds/africa2");
c = Buffer.read(s, "sounds/africa1", numFrames: 44100); // one second

// check data:
[b.bufnum, b.numChannels, b.path, b.numFrames].postln;
[c.bufnum, c.numChannels, c.path, c.numFrames].postln;

{PlayBuf.ar(1, 0)}.play(s); // Your buffer number may differ
{PlayBuf.ar(1, 1)}.play(s);

A frame is a single sample from all channels. If your sound file is mono, a frame is a single sample. If it is stereo, a frame has two samples, one each from the left and right channels. If your file has 8 channels, a frame contains 8 samples. The number of frames per second is the sample rate, e.g. 44,100 per second. If -1 is given for the number of frames, all the frames in the file are read.

You can also record to a buffer from any input. Assuming you have a two-channel system, the inputs should be 2 and 3. (Check your sound settings in the system preferences.) You can record over an existing buffer, e.g. the ones we just created above, but it is more likely you will want a new one, which has to be created, or allocated, beforehand. The size of the buffer is the sample rate * number of seconds. To record, set the input source (using In.ar) and designate the buffer number. You can either check the post window to get the correct buffer number or use d.bufnum as shown below. If you are using an external mic, mute your output or use headsets to avoid feedback. It seems incorrect to use the play message to record, but think of it as hitting the record and play buttons on a cassette recorder.

30.2. Loading Audio into a Buffer from Live Audio Source

d = Buffer.alloc(s, 44100 * 4.0, 1); // a four second, 1 channel Buffer
{RecordBuf.ar(In.ar(2), d.bufnum, loop: 0)}.play;
d; // check the post window for the buffer info
{PlayBuf.ar(1, d.bufnum)}.play(s);
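A quick check of the allocation arithmetic (a sketch, using the buffer d just created):

(d.numFrames / 44100).postln; // 4.0: the length in seconds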

I have to expose a few preferences for this next section. I like loops. Maybe not as foreground, but certainly as background; they add a coherent foundation for the customarily more abstract ideas in the foreground of electronic composition. I also lean toward foreign languages for spoken text. I hope it's not just faux sophistication, but rather a way to disengage the listener's search for meaning, shifting focus to patterns and inflection. I should also reveal an aversion: pitch shift of voices. Pitch shift or modulated playback speed is wonderful with other sounds, but I've never liked it with spoken concrète examples.

PlayBuf and RecordBuf have a loop argument, set to 0 (no loop) by default. To loop playback, set this value to 1. In the case of RecordBuf the loop applies to both playback and record. There are two additional arguments that affect the recording process: recLevel and preLevel. The first is the level of the source as it is recorded. The second is the level at which the existing loop (pre-recorded material) is mixed when repeated. If, for example, preLevel is set to 0.5, the loop will slowly fade away with each repetition.

One strategy for looping material is to create files that are the length you want the loops to be and play them using PlayBuf with loop on. I think this is inefficient and less flexible: you would have to create a file for each loop. Also, once your loops are defined they won't change with each performance, and it is harder to change the loop in real time, e.g. augmentation or diminution (not just playing the loop slower or faster, but lengthening it by adding more material to the loop) or shifting in time. It is better to load an entire concrète example into a buffer and then loop selections using BufRd. The arguments for BufRd are number of channels, buffer number, phase, and loop. Phase is the index position of the file for playback (index in frames) and can be controlled with an audio rate source.

Indulge me one more walking-to-school-in-the-snow story. Even in the days when concrète composition was well established, we were limited to tape, decks, and play heads. You could play back at the speeds available on the deck, or you could rock the reels back and forth by hand. One clever composition called for the playback head to be removed from the deck and moved by hand over the magnetic tape, allowing the performer to "draw" the playback on a patch of tape. It seemed like an interesting idea, but the precise tolerance of the head position was impossible to set up correctly, and it was not very effective. Here is the SC version of that idea. This example assumes you have loaded a 4 second example into a buffer. The phase control has to be an audio rate signal, so I use K2A to convert MouseX to audio rate.

30.3. Playback with Mouse

b.free;
b = Buffer.read(s, "sounds/africa2", 0, 4*44100);
{BufRd.ar(1, b.bufnum, K2A.ar(MouseX.kr(0, 4*44100)))}.play

Not as cool as I thought it would be back then. Our next goal is to loop sections of the file. At a 44,100 Hz sampling rate a two second loop contains 88200 frames. If we wanted to begin the loop 1.5 seconds into the file, the start point of the loop would be frame 66150 and the end point 66150 + 88200, or 154350. Still using the mouse as a control, we would just have to enter those two numbers for start and end. Better yet, use a variable for the sample rate and multiply it by the number of seconds.

To create a simple loop on the same section as above we use an LFSaw (but see also Phasor, which I also tried but didn't find as flexible), which generates a smooth ramp from -1 to 1. There are two critical values for controlling the playback position: first, the scale (and offset) of the LFSaw have to be correct for the number of frames; second, the rate of the saw has to be correct for normal playback speed. The help file uses two utilities for this: BufFrames,

which returns the number of frames in a buffer, and BufRateScale, which returns the correct rate in Hz for normal playback speed of a given buffer. But both of these assume you are playing the entire file. We are looping sections, so we have to do our own calculations. It would be easier to define the loop points in seconds rather than samples; to convert seconds to frames, multiply by the sample rate. Defining loops using time requires that you know the length of the entire file. Exceeding the length won't cause a crash; it will just loop around to the beginning. Not a bad thing, just perhaps not what you want, maybe even a good thing, so I'll leave that option open. You can also define the loops as a percentage of the entire file; see below for an example.

(When first trying these examples I was terribly confused about why an LFSaw that had a scale equal to the total frames of the file, but had not yet been offset, would still play the file correctly. I was sure the LFSaw was generating negative values. It was, but it was just wrapping around back to the beginning of the file, playing it from start to finish while the LFSaw was moving from -1 to 0, then again as it moved from 0 to 1. So you could loop an entire file by only setting the scale to the total frames, but read on.)

The scale and offset of the LFSaw could be set in terms of seconds rather than frame numbers. Given a loop beginning at 2" and ending at 6", the offset would be 4 and the scale 2. Or you could say that the offset is (length/2) + beginning and the scale is length/2. With these settings the LFSaw will move between 2 and 6. Multiplying the entire LFSaw by the sample rate will return the correct frame positions. I prefer using LinLin, which converts any linear set of values to another range. The arguments are the input (e.g. the ugen you want to convert), input low and high (i.e. the range of the input ugen by default), and output low and high (what you want it converted to). It does the same as an offset and scale, but is clearer if you are thinking in terms of a range. So the first two lines of code below have the same result, but with one important difference: with LinLin you can invert the signal (useful for reverse playback) with values such as -1, 1, 1000, 700. The last example won't play, but shows how the LFSaw needs to be set up to correctly sweep through a range of samples.

30.4. LinLin, LFSaw for Sweeping Through Audio File

// Same thing:
{SinOsc.ar(LinLin.kr(SinOsc.kr(5), -1, 1, 700, 1000))}.play
{SinOsc.ar(SinOsc.kr(5, mul: 150, add: 850))}.play

// Will sweep through frames 66150 to 110250, or 1.5" to 2.5" of the audio file
LinLin.ar(LFSaw.ar(1), -1, 1, 1.5, 2.5)*44100
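A quick check of the offset and scale arithmetic described above (a sketch, using the hypothetical 2" to 6" loop at 44,100 Hz):

(
var srate = 44100, begin = 2, length = 4, offset, scale;
offset = (length/2) + begin; // 4
scale = length/2;            // 2
// an LFSaw scaled and offset this way ranges from 2 to 6 seconds;
// multiplying by the sample rate gives the frame positions:
[(offset - scale) * srate, (offset + scale) * srate].postln; // [88200, 264600]
)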

We have one more complicated calculation before doing a loop: playback rate. The rate of playback is controlled by the frequency of the LFSaw. How fast do we want it to sweep through the given samples? If the length of the loop is 1 second, then we want the LFSaw to move through all the values in 1 second, so the frequency is 1. But if the length of the loop is 2 seconds, we want the saw to move through the samples in 2 seconds. Remember that duration and frequency are reciprocal: for an LFSaw duration of 2 seconds, the frequency would be 1/2, or 1/duration. We're not done yet. 1/duration is normal playback; if we want a faster or slower playback rate, we multiply the frequency by that ratio: ratio*(1/duration), or ratio/duration. So 2/duration plays back twice as fast, 0.5/duration half as fast. Phew. Ok, on to the example.

No, wait! One more thing. When you are looping possibly random points in the file (which by now you know I'm heading for) you have potential for clicks. If the beginning point is a positive sample in the wave and the end is a negative one, the result is a sharp transition, the equivalent of a non-zero-crossing edit. This problem is corrected with an envelope the length of the loop with a fast attack and decay, and a gate equal to the frequency of the loop. Ok, the example. Oh yeah, and we also want the LFSaw to begin in the middle of its phase, which is where it reaches its lowest value, so the phase of the saw is set to 1 radian. This is not an issue with Phasor, but there was some other problem when I did an example with it.
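The rate arithmetic in one place (assuming a two-second loop):

(
var duration = 2;
(1/duration).postln;   // 0.5 Hz LFSaw = normal playback
(2/duration).postln;   // 1.0 Hz = twice as fast
(0.5/duration).postln; // 0.25 Hz = half speed
)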

Looping a Section of a File

(
{
var bufNum = 0, srate = 44100, start = 0, end = 3, duration, rate = 1;
// Use these lines for proportional lengths
// var bufNum = 0, srate = 44100, start = 0.21, end = 0.74,
// rate = 1, duration, total;
// total = BufFrames.kr(bufNum)/44100;
// end = end*total; start = start*total;
duration = abs(end - start);
BufRd.ar(1, bufNum, // Buffer 0
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.98, 0.01),
    timeScale: duration, gate: Impulse.kr(1/duration));
}.play
)
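To test the ratio/duration reasoning, here is the same loop at half speed; a sketch of my own, assuming buffer 0 holds a sound file and the server is booted. Only rate changes, and the envelope gate and timeScale follow it (the pitch drops an octave, as with any half-speed playback):

(
{
var bufNum = 0, srate = 44100, start = 0, end = 3, duration, rate = 0.5;
duration = abs(end - start);
BufRd.ar(1, bufNum,
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.98, 0.01),
    timeScale: duration/rate, gate: Impulse.kr(rate/duration));
}.play
)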

Magically, the same example works with proportional values, i.e., a percentage of the total length of the file (commented out above). In this case the ugen BufFrames is useful: divided by the sample rate, it returns the total length of the file in seconds. Of course, when using this method the length should not exceed 1.0, and start + length should not exceed 1.0 (but don't let me stop you; it wraps around).

30.6. Looper

(
{
var bufNum = 0, srate = 44100, start = 0.21, end = 0.74,
    rate = 1, totalDur = 20, pan = 0;
var out, duration, total;
start = [0.3, 0.2];
end = [0.2, 0.3];
total = BufFrames.kr(bufNum)/44100;
end = end*total;
start = start*total;
duration = abs(end - start);
BufRd.ar(1, bufNum, // Buffer 0
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.98, 0.01),
    timeScale: duration, gate: Impulse.kr(1/duration));
}.play
)

Enough work, on to the fun. You might place this looper in a SynthDef with a task running different examples, but what I want to demonstrate would be more complicated as such, so the example above just uses a stereo array for start and end. Here are some things to try (change the start and end values; remember these are proportional, i.e., a percentage of total audio file time):

– forward and backward loops of the same material, as shown above (start = [0.2, 0.3]; end = [0.3, 0.2];)
– loops with slightly different lengths, so that they slowly move out of phase, then eventually come back into phase, e.g. every tenth time (start = [0.2, 0.2]; end = [0.31, 0.3];)
– different lengths in a given ratio, such as 3/2 (start = 0.2 for both; end = 0.3 for one, so the length is 0.1; the second length would then be 0.15, so its end is start + 0.15, or 0.35), so that one plays 3 times while the other plays 2, like an interval. This is a pretty cool effect with short examples (end = [0.35, 0.3])
– expand the playback rate into an array, achieving effects similar to different lengths (see the sketch below)
– combine different playback rates with different lengths
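The rate-as-array idea from that list might look like this; a sketch with my own values (buffer 0 assumed), putting normal speed in the left channel and 3/2 speed in the right:

(
{
var bufNum = 0, srate = 44100, start = 0, end = 2, duration, rate = [1, 1.5];
duration = abs(end - start);
BufRd.ar(1, bufNum,
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.98, 0.01),
    timeScale: duration/rate, gate: Impulse.kr(rate/duration));
}.play
)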

Of course the audio file or live input is a signal and can be modulated in any of the ways previously discussed. Following are a few ideas. They all assume you are using buffer 0, and that you are recording live audio to it or have loaded a sound file into it.

30.7. Modulating Audio Buffers

// FM Modulation
(
{
var bufNum = 0, srate = 44100, start = 0, end = 3, duration, rate = 1, signal;
duration = abs(end - start);
// or
// end = [2.3, 3.5];
signal = BufRd.ar(1, bufNum, // Buffer 0
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.98, 0.01),
    timeScale: duration, gate: Impulse.kr(1/duration));
SinOsc.ar(LFNoise1.kr([0.4, 0.43], mul: 200, add: 200))*signal;
// or
// SinOsc.ar(LFNoise0.kr([12, 15], mul: 300, add: 600))*signal;
// or
// SinOsc.ar(LFNoise1.kr([0.4, 0.43], mul: 500, add: 1000))*signal;
}.play(s)
)

// Pulsing in and out
(
{
var bufNum = 0, srate = 44100, start = 0, end = 3, duration, rate = 1;
var pulse;
pulse = [6, 10];
duration = abs(end - start);
BufRd.ar(1, bufNum, // Buffer 0
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.3, 0.01),
    timeScale: duration/pulse, gate: Impulse.kr(pulse/duration));
}.play(s)
)

// Filtered
(
{
var bufNum = 0, srate = 44100, start = 0, end = 3, duration, rate = 1, signal;
duration = abs(end - start);
signal = BufRd.ar(1, bufNum, // Buffer 0
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.98, 0.01),
    timeScale: duration, gate: Impulse.kr(1/duration));
RLPF.ar(signal, LFNoise1.kr([12, 5], mul: 700, add: 1000), rq: 0.05)*0.2;
}.play(s)
)

// Modulating one with another, dry in left, dry in right, modulated center
// Listen for a while for full effect
(
{
var bufNum = 0, srate = 44100, start = 0, end = 3, duration, rate = 1, signal;
end = [2.5, 2.9];
duration = abs(end - start);
signal = BufRd.ar(1, bufNum, // Buffer 0
    LinLin.ar(LFSaw.ar(rate/duration, 1), -1, 1, start, end)*srate
)*EnvGen.kr(Env.linen(0.01, 0.98, 0.01),
    timeScale: duration, gate: Impulse.kr(1/duration));
(signal*0.1) + (signal.at(0) * signal.at(1))
}.play(s)
)


31 - Graphic User Interface Starter Kit

GUIs are rarely about the function of the patch; they are about connecting controls to a user or performer. When first building on an idea it is far more efficient for me to work with a non-graphic (code-based) synthesis program. For that reason (and also because I tend toward automation that excludes external control) GUIs are the last consideration for me. That's one of the reasons I prefer working with SC: GUIs are optional. My second disclaimer is that since GUIs are not about the actual synthesis process, I care even less about understanding the details of the code and syntax. I usually have a few models or prototypes that I stuff my code into. That said, here are a few that I have found useful.

Display

Display is the quickest and easiest way I've seen to attach sliders, buttons, number boxes, and names to a patch. Below is a prototype. The first argument is the display window itself. You use this argument to set attributes such as the name, GUI items, and the patch to which it is attached. The next three arguments, a through c, are the sliders and numbers.

31.1. Display window

(
Display.make({arg thisWindow, a, b, c;
    thisWindow.name_("Example");
}).show;
)

The default values for the sliders are 0 to 1, a starting point of 0.5, and an increment of 0.00001 (or so). If you want to change those defaults, use the .sp message, with arguments of default starting value, low value, high value, step value, and warp.

31.2. Display with .sp defaults

(
Display.make({arg thisWindow, a, b, c, d;
    a.sp(5, 0, 10, 1); // start, low, high, increment, warp
    b.sp(400, 0, 4000, 0.1, 1);
    c.sp(0, -500, 500);
    d.sp(0.3, 0, 0.9);
    thisWindow.name_("Example");
}).show;
)

Next you can define and attach a patch to the display, connecting the controls as arguments using synthDef.

31.3. Display with synthDef

(
Display.make({arg thisWindow, sawFreq, sawOffset, sawScale, volume;
    sawFreq.sp(5, 0, 10, 1);
    sawOffset.sp(400, 10, 1000, 0.1, 1);
    sawScale.sp(0, -500, 500);
    volume.sp(0.3, 0, 0.9);
    thisWindow.name_("Example");
    thisWindow.synthDef_({arg sawFreq, sawOffset, sawScale, volume;
        Out.ar(0,
            SinOsc.ar(
                abs(LFSaw.kr(sawFreq, mul: sawScale, add: sawOffset)),
                mul: volume
            )
        )},
        [\sawFreq, sawFreq, \sawOffset, sawOffset,
            \sawScale, sawScale, \volume, volume]
    );
}).show;
)

There is a way to link all these items together using an environment, but I'll leave that to you. I mostly use this as a quick, easy interface for examples and experiments. You can declare extra arguments even if they aren't used, so I keep this model handy and just insert the synth as needed.

31.4. Display shell

(
Display.make({arg thisWindow, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p;
    // [Your variable defaults go here], e.g.
    a.sp(440, 100, 1000, 1, 1);
    thisWindow.synthDef_({arg a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p;
        Out.ar(0,
            // [Your patch goes here with variables a, b, c, etc.], e.g.
            SinOsc.ar(a);
        )},
        [\a, a, \b, b, \c, c, \d, d, \e, e, \f, f, \g, g, \h, h,
            \i, i, \j, j, \k, k, \l, l, \m, m, \n, n, \o, o, \p, p]
    );
    thisWindow.name_("Example");
}).show;
)

Document

The next is Document. This object manages documents (the files represented by the windows where you type code) in SC3. You can open, close, or change the size, background color, and visibility of any window of any document. All objects in SC have attributes. These are stored in variables that you can (if they are coded as such) retrieve or change (get or set). Any open document has a title, a background color, and a boundary. You can get or change them with the following lines. Run each line below and watch the post window. The second-to-last line makes the window nearly invisible, so I've added a last line that returns it to a more opaque state. Seeing the window disappear is a little disconcerting; just hit enter one more time to run the last line.


31.5. This Document

d = Document.current;
d.title; // get the title
d.bounds; // get the bounds
d.title_("new title"); // set the title
d.bounds_(Rect(100, 100, 500, 500)); // set the bounds
d.background_(Color(0.5, 0.2, 0.7, 0.3)); // set color and visibility
d.background_(Color(0.9, 0.7, 0.7, 1)); // set color and visibility

The Rect arguments are points (72 to an inch?) and describe the position and size of the document window: the first two values place it from the left of the screen in, then from the bottom up (it used to be top to bottom), and the next two give its width and height. The background Color arguments are red, green, blue, and alpha (transparency)57. Try changing the alpha to 0.1. (Save it first.) It disappears not only from view but is also invisible to the mouse. You will no longer be able to bring the document to the front by clicking in the window; you have to click the top of the scroll bars. It's a good idea to use global variables for the document names and the actions inside the functions. Local variables will not be understood by subsequent commands.

57 Look in the help file for a collection of colors by name. These have apparently been disabled in the .sc file, but you can copy and paste them.

31.6. Flashing alpha

~doc = Document.current;
Task({
    100.do({
        ~doc.background_(Color(
            0.2, // red
            0.1, // green
            0.9, // blue
            rrand(0.3, 0.9) // alpha, or transparency
        ));
        0.1.wait;
    })
}).play(AppClock)

// Each open document has an index number:
Document.allDocuments.at(1).front;

(
Document.new("Test one", "test text");
Document.new("Test two", "more text");
Document.new("Test three", "three");
Document.new("Four", "four");
Task({
    Document.allDocuments.do({arg thisOne; // random
        thisOne.front;
        thisOne.background_(
            Color(rrand(0.1, 0.9), rrand(0.1, 0.9), rrand(0.1, 0.9)));
        0.4.wait;
    });
}).play(AppClock)
)

Pretty flashy, but just flashy. A more practical application is attaching functions to the actions of keys, mouse clicks in a document, or a document being brought to the front. Take close note of how to turn these actions off. They will engage every time you click the mouse, type, or bring the document to the front, and that can get annoying pretty fast. First identify the current document and assign it to a variable. (The current document is the one you are typing the code into.) Then set the toFrontAction with a function to be evaluated. After running the first two lines below, switch to and from that document and watch the post window.

31.7. This Document to Front Action

(
~doc.toFrontAction_({"bong".postln});
~doc.endFrontAction_({"bing".postln});
)

~doc.toFrontAction_({nil}); // turn action off
~doc.endFrontAction_({nil});

The most obvious application of this feature is to start or stop a process. Again, use global variables (or a global synth definition). Any code that you can evaluate from an open window can be placed inside a toFrontAction.

31.8. This Document to front

~randSineDoc = Document.current;

~randSineDoc.toFrontAction_({
    ~sine = {Mix.ar({SinOsc.ar(rrand(200, 900))}.dup(20))*0.01}.play
})

~randSineDoc.endFrontAction_({~sine.free})

See what I mean about being able to turn it off? Now every time you try to edit the stupid thing you get random sine waves! You can create similar actions linked to functions using key down or mouse down. The arguments for the keyDownAction function are the document itself (so you could insert thisDoc.title.postln), the key pressed, the modifier, and the ASCII number of the key pressed. Use an if statement to test for each key, using either key == $j or num == 106, where 106 is the ASCII equivalent of j. In the example below there are two ways to turn the events off: explicitly, with key == $k and a free message, or globally with s.freeAll, which is assigned to ASCII 32, the space bar. The last line shows how to assign a freeAll (or any other action) to a mouse down. I use global variables for consistency in this example; it would be better to use Synth or OSC commands to start and stop instruments you've already defined.

31.9. Mouse and Key Down

(
~doc.keyDownAction_({arg thisDoc, key, mod, num;
    if(key == $j, /* or num == 106 */ {
        ~mySine = {SinOsc.ar(
            LFNoise0.kr(rrand(8, 15), mul: 500, add: 1000),
            mul: 0.2) * EnvGen.kr(Env.perc(0, 3))
        }.play;
    });
    if(key == $h, {
        ~randSine = {
            Mix.ar({SinOsc.ar(rrand(200, 900))}.dup(20))*0.01
        }.play
    });
    if(key == $k, {~randSine.free});
    if(num == 32, {s.freeAll}); // space bar
});
)

~doc.mouseDownAction_({s.freeAll});
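A related pattern I find handy (my own sketch, not one of the book's examples): one key toggling a synth on and off by testing a global variable. Here $t and ~toggleSine are arbitrary names.

(
~doc = Document.current;
~doc.keyDownAction_({arg thisDoc, key, mod, num;
    if(key == $t, {
        if(~toggleSine.isNil, {
            ~toggleSine = {SinOsc.ar(440, mul: 0.1)}.play;
        }, {
            ~toggleSine.free;
            ~toggleSine = nil;
        });
    });
});
)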

Keyboard Window

The next very clever interface is the keyboard window. It is currently in the examples folder of the SC3 folder, reproduced without modification below. Run the top part to bring up the window, then drag functions onto any key. That key (either typed on the keyboard or clicked in the keyboard window) will then execute the dragged function. Note the method for using a single key to turn the same function on and off.

31.10. Keyboard Window From Examples (by JM?)


(
var w; // window object
var courier; // font object
// an Array of Strings representing the key layout.
var keyboard = #["`1234567890-=", "QWERTYUIOP[]\\", "ASDFGHJKL;'", "ZXCVBNM,./"];
// horizontal offsets for keys.
var offsets = #[42, 48, 57, 117];
var actions; // an IdentityDictionary mapping keys to action functions.
var makeKey; // function to create an SCDragSink for a key.

courier = Font("Courier-Bold", 14);

// an IdentityDictionary is used to map keys to functions so that
// we can look up the action for a key
actions = IdentityDictionary.new; // create actions dictionary

// define a function that will create an SCDragSink for a key.
makeKey = {|char, keyname, bounds|
    var v;
    keyname = keyname ? char.asString;
    bounds = bounds ? (24 @ 24);
    v = SCDragBoth(w, bounds);
    v.font = courier;
    v.string = keyname;
    v.align = \center;
    v.setBoth = false;
    v.acceptDrag = { SCView.currentDrag.isKindOf(Function) };
    v.action = {
        ("added key action : " ++ keyname).postln;
        if (char.isAlpha) {
            actions[char.toUpper] = v.object;
            actions[char.toLower] = v.object;
        }{
            actions[char] = v.object;
        };
        w.front;
    };
};

w = SCWindow("keyboard", Rect(128, 320, 420, 150));
w.view.decorator = FlowLayout(w.view.bounds);

// define a function to handle key downs.
w.view.keyDownAction = {|view, char, modifiers, unicode, keycode|
    var result;
    // call the function
    result = actions[char].value(char, modifiers);
    // if the result is a function, that function becomes the
    // new action for the key
    if (result.isKindOf(Function)) {
        actions[char] = result;
    };
};

// make the rows of the keyboard
keyboard.do {|row, i|
    row.do {|key| makeKey.(key) };
    if (i==0) { makeKey.(127.asAscii, "del", 38 @ 24) };
    if (i==2) { makeKey.($\r, "retrn", 46 @ 24) };
    w.view.decorator.nextLine;
    w.view.decorator.shift(offsets[i]);
};

// make the last row
makeKey.($ , "space", 150 @ 24);
makeKey.(3.asAscii, "enter", 48 @ 24);

w.front;
)

////////////////////
// Drag these things to the keyboard to test it.

(
{
    var synth, original;
    original = thisFunction;
    synth = { SinOsc.ar(exprand(500,1200),0,0.2) }.play;
    { synth.free; original }
}
)

(
{
    {
        Pan2.ar(
            SinOsc.ar(
                ExpRand(300,3000),
                0,
                SinOsc.kr(ExpRand(1,15),0,0.05).max(0)),
            Rand(-1,1))
    }.play;
}
)

{ s.sendMsg(\n_free, \h, 0); } // kill head
{ s.sendMsg(\n_free, \t, 0); } // kill tail

(
{{
    var eg, o, freq, noise;
    eg = EnvGen.kr(Env.linen(0.1,2,0.4,0.2), doneAction: 2);
    freq = Rand(600,1200);
    noise = {LFNoise2.ar(freq*0.1, eg)}.dup;
    o = SinOsc.ar(freq,0,noise);
    Out.ar(0, o);
}.play})

(
{{
    var in, sr;
    in = LFSaw.ar([21000,21001], 0, LFPulse.kr(ExpRand(0.1,1),0,0.3,0.2,0.02));
    sr = ExpRand(300,3000) + [-0.6,0.6];
    Out.ar(0, RLPF.ar(in * LFPulse.ar(sr, 0, MouseY.kr(0.01, 0.99)),
        sr * (LFPulse.kr(ExpRand(0.1,12),0,0.4,0.2,0.2) +
            LFPulse.kr(ExpRand(0.1,12),0,0.7,0.2)), 0.1));
}.play;})

(
{{
    var in;
    in = In.ar(0,2);
    ReplaceOut.ar(0, CombN.ar(in, 0.24, 0.24, 8, 1, in.reverse).distort);
}.play})

(
{{
    var in;
    in = In.ar(0,2);
    ReplaceOut.ar(0, in * SinOsc.ar(MouseX.kr(2,2000,1)));
}.play})

Windows and Buttons

When creating an interface the first thing you need is a window. (See also Sheet and ModalDialog.) It is traditionally assigned to the variable w, but you can use any name you want. We will create two, v and w. Once a window is created and brought to the front, buttons can be added. The first argument for SCButton is the window where it is placed; the second is its size. The states message describes an array of button states, each of which includes a name, text color, and background color. Button b has only one state and is placed in window v using defaults for color, while button c has two states with specified colors for state one. It's a good idea to close windows when you're done; otherwise they pile up.

31.11. Windows and Buttons

(
v = SCWindow("Window v", Rect(20, 400, 400, 100));
v.front;
w = SCWindow("Window w", Rect(460, 400, 400, 100));
w.front;
b = SCButton(v, Rect(20, 20, 340, 30));
b.states = [["Button b"]];
c = SCButton(w, Rect(20, 20, 340, 30));
c.states = [["Button c on", Color.black, Color.red], ["Button c off"]];
)

// When finished experimenting, close both:
v.close; w.close;

Next we assign some type of action to each button. Pressing the button evaluates the function and passes the state argument (the state the button is in after this click). The state is always 0 for button b, since there is only one state; it is 0 or 1 for button c, since there are two states. Note that the button is created at state 0, so state 1 will be the first state evaluated when the button is pushed, and 0 will be evaluated at the next press. So I label the 0 state "start" because that is what will happen when the button goes to state 1, and state 1 is labeled "stop" because that's what the return to state 0 will do.

31.12. States and Actions of Buttons

(
v = SCWindow("Window v", Rect(20, 400, 400, 100));
v.front;
w = SCWindow("Window w", Rect(460, 400, 400, 100));
w.front;
b = SCButton(v, Rect(20, 20, 340, 30));
b.states = [["Button b"]];
c = SCButton(w, Rect(20, 20, 340, 30));
c.states = [["Start (State 0)", Color.black, Color.red], ["Stop (State 1)"]];
b.action = {w.view.background = Color(0.8, 0.2, rrand(0.2, 0.9))};
c.action = { | state | // shorthand for arg state;
    if(state.value == 0, {s.freeAll});
    if(state.value == 1, {{SinOsc.ar}.play})
};
)

// When finished experimenting, close both:
v.close; w.close;

Slider

Next we add a slider to control frequency. In SC2, when you added new items to a window you had to calculate the layout carefully so you knew where to place each new button or slider. SC3 has a handy feature, FlowLayout, which understands the nextLine message and will position the next button, slider, text box, etc. on the next line. The sliders are going to control frequency and amplitude, so this example starts with a SynthDef with arguments for freq and amp. You can run the entire section at once, or run the expressions one at a time to see each new button and slider appear in the window. The arguments for EZSlider are destination window, bounds (I use another shorthand: 500 @ 24 = width and height of the slider or button), name, control specification, and a function to be evaluated with the control's value as its argument. The ControlSpec arguments are minimum, maximum, warp, and step value.

31.13. Slider

(
SynthDef("WindowSine", {arg freq = 440, amp = 0.9;
    Out.ar(0, SinOsc.ar(freq, mul: amp))
}).load(s);

w = SCWindow("Window w", Rect(460, 400, 600, 200));
w.front;
w.view.decorator = FlowLayout(w.view.bounds);

c = SCButton(w, 500 @ 24);
c.states = [["Start (State 0)"], ["Stop (State 1)"]];
c.action = { | state | // shorthand for arg state;
    if(state.value == 0, {a.free});
    if(state.value == 1, {a = Synth("WindowSine")})
};

w.view.decorator.nextLine;

EZSlider(w, 500 @ 24, "Frequency",
    ControlSpec(200, 1000, \exponential, 1),
    {|ez| a.set(\freq, ez.value) });

w.view.decorator.nextLine;

EZSlider(w, 500 @ 24, "Volume",
    ControlSpec(0.1, 1.0, \exponential, 0.01),
    {|ez| a.set(\amp, ez.value) });
)

// When finished experimenting, close both:
v.close; w.close;
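If the warp and step arguments are unfamiliar, you can try a ControlSpec by itself in the language; a quick sketch (my values):

~spec = ControlSpec(200, 1000, \exponential, 1);
~spec.map(0.5);   // halfway on the slider maps to about 447 Hz, not 600, because of the warp
~spec.unmap(600); // and 600 Hz sits at about 0.68 of the slider's travel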

Complicated? And you're not done yet: slider settings aren't remembered, command-period stops the sound but the button doesn't change, and closing the window doesn't stop the sound. I'll refer you to the GUI demonstration in the examples folder for solutions to those problems. There are many situations where I use a GUI, but I think too much programming time is often spent on the user interface. Wouldn't you rather be working on the music?


30. Exercises

30.1. None at this writing.


APPENDIX A. Converting SC2 Patches to SC3

Converting a simple patch

For years I put off learning SC3, thinking I had to start from scratch; then a student, to whom I am very grateful, took the plunge and discovered the crucial fact that finally pushed me forward: for the most part, your code will run in SC3. The principal things that won't run are spawners. The other good news is that the language hasn't changed that much. An object is still an object, messages still have argument lists, etc. There are several good help files that explain the conceptual differences. This section is intended to give you the practical details: what changes you need to make to get older code to run.

The biggest change is that SC now has three components: the program that edits and evaluates code, an internal server, and a local server. The servers are synthesizers. They take the definitions and instruments you design and play them. They know nothing about the code; they just play. The server is the performer, the language is the score, and the server can't see the score; it just knows how to play. The two synths are sometimes a source of confusion. You can use either one, but the local server is usually the default. You have to send definitions to the one that is running; not specifying which server may send a definition to the wrong one. In a nutshell: the local server is more stable, but scope only works on the internal server. Otherwise you can use either one.

To make SC3 backward compatible the programmers have added some utilities that are less efficient and have fewer control options than doing it the new way. All of the details, like creating a synth, naming it, and assigning a node, are done automatically, so Synth.play({ SinOsc.ar }) and {SinOsc.ar}.play both work. But it is a good idea to designate the server, so I usually use {SinOsc.ar}.play(s), where s is the server.

31.14. Converting a simple patch to SC3

// SC2 patch
(
Synth.play(
    { SinOsc.ar(LFNoise0.kr(12, mul: 500, add: 600)) }
)
)

// SC3 backward compatible version
s = Server.internal.boot;

(
{ SinOsc.ar(LFNoise0.kr(12, mul: 500, add: 600)) }.play(s)
)

When you run this code SC3 creates a synth, names it, chooses a bus out, sends it to the server with a node number, and plays it. I was pleasantly surprised to find most of my code still worked using these backward compatible features. The problem with letting SC do all that behind the scenes is that you have no control over the synth once it's playing. If you do those things explicitly, you can manipulate the synth once it is on the server. Here is the same code done the SC3 way. The ugen graph function is the same; you just place it in a SynthDef.

31.15. SynthDef

// SC2 patch
(
Synth.play(
    { SinOsc.ar(LFNoise0.kr(12, mul: 500, add: 600)) }
)
)

// SC3 method
s = Server.internal.boot;

(
SynthDef("MySine", {
    var out;
    out = SinOsc.ar(LFNoise0.kr(12, mul: 500, add: 600));
    Out.ar(0, out);
}).load(s) // or .play(s) or .send(s)
)

// Prototype
Synth.play({oldUgenFunc})

SynthDef("Name", {
    var out;
    out = oldUgenFunc;
    Out.ar(0, out);
}).load(s)

Look in the help files for the differences between play, load, and send. Now the synth exists on the server, but it isn't playing. To play it, run Synth("name"). It is a good idea to assign it to a variable, because that allows us to send it commands.

31.16. SynthDef

// After the synth has been sent to the server, play it
a = Synth("MySine")

// And stop it
a.free;

If you add arguments to the ugen graph function, then you can send commands to change them while the synth is running.

31.17. SynthDef with arguments

(
SynthDef("MySine", {
    arg rate = 12;
    var out;
    out = SinOsc.ar(LFNoise0.kr(rate, mul: 500, add: 600));
    Out.ar(0, out);
}).load(s) // or .play(s) or .send(s)
)

a = Synth("MySine")
// or
a = Synth("MySine", [\rate, 15]);

// Change the rate while it runs
a.set(\rate, 22);
a.set(\rate, 3);

// And stop it
a.free;

All the other details about creating and managing synths, routing to busses, managing node numbers, etc. are completely different from SC2, so you just need to read those sections. Those methods often require you to rework the patches entirely. For example, the patch above could be broken into two synths: the LFNoise0 and the SinOsc, which would be patched together using busses.


iphase

In several ugens, most notably LFSaw and LFPulse, there is an additional argument: iphase. If you had a patch that used LFSaw(3, 500, 900) for freq, mul, and add, you will now have to add a 0 for iphase or use keyword assignments. These are the only two I've encountered; there might be more.

rrand, rand, choose, Rand, TRand, TChoose

Once you send a definition to the server, all the parameters are hard-wired unless you change them using set. For example, if you use rrand(200, 800) or [100, 300, 500, 700].choose for a frequency argument, the program will choose one value when the synth is defined. That value, not the rrand function, is hard-wired into the synth definition and will stay the same each time the synth is played. The ugens Rand, TRand, TWChoose, and TChoose replace the functionality of rand or rrand on the server side. These will choose a new value each time the synth is played, or when they are triggered.

Spawning Events

There are no spawners in SC3. The servers don't understand how to spawn events on their own. The way you create multiple events is by sending commands to the server to start several synths, or several copies of the same synth. The language side of the program manages when the events occur, how many there are, and what attributes they have. Presumably these events have a beginning and an end. The spawners in SC2 recouped the processor power for each of these events when they were completed. With SC3 we need to do a similar action. There is an argument called doneAction in EnvGen which tells the synth what to do when it is finished. The default is to do nothing. We need to set it to 2, which means "turn yourself off." If we don't, our spawned events would build up and crash the processor.

31.18. Spawn

(
SynthDef("MySine", {
    arg rate = 12, offset = 1000, scale = 900;
    var out;
    out = SinOsc.ar(LFNoise0.kr(rate, mul: scale, add: offset)) *
        EnvGen.kr(Env.perc(0, 2), doneAction: 2);
    Out.ar(0, out)
}).load(s) // or .play(s) or .send(s)
)

// Run this line several times.
// Each one is equivalent to a spawned event.
Synth("MySine", [\rate, rrand(4, 16), \offset, 600, \scale, 400])
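To hear the client-side/server-side difference described above, compare these two sketches (mine, not from the original; "FixedRand" and "FreshRand" are arbitrary names). The first hard-wires one rrand value into the definition, so every Synth gets the same frequency; the second uses Rand, so each Synth picks its own:

// client-side: rrand is evaluated once, when the def is built
SynthDef("FixedRand", {
    var out;
    out = SinOsc.ar(rrand(300, 900), mul: 0.2) *
        EnvGen.kr(Env.perc(0.01, 1), doneAction: 2);
    Out.ar(0, out)
}).load(s);

// server-side: Rand is evaluated each time the synth is played
SynthDef("FreshRand", {
    var out;
    out = SinOsc.ar(Rand(300, 900), mul: 0.2) *
        EnvGen.kr(Env.perc(0.01, 1), doneAction: 2);
    Out.ar(0, out)
}).load(s);

Synth("FixedRand"); // same pitch every time
Synth("FreshRand"); // new pitch every time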


Now that the server understands how to play the instrument, we need to set up a routine to send start-event messages to it. This replaces the spawn. We can either start new events each time or modify events that are still sounding; since our instrument has a rather short decay, we'll send multiple events. How is this different from just building a trigger into the synth? This way we can have several events happening at once. Note that the language side of the program understands rrand, so we can use it here.

31.19. Spawning events with Task

(
// run a task to play the synth
r = Task({
    {
        Synth("MySine", [\rate, rrand(5, 20),
            \offset, rrand(400, 800), \scale, rrand(100, 300)]);
        // Choose a wait time before next event
        rrand(0, 3.0).wait;
    }.loop;
}).play
)


APPENDIX B. Cast of Characters, in Order of Appearance

[Maybe next time]

APPENDIX C. OSC

[See Mark Polishook's tutorial for information on OSC]

APPENDIX D. Step by Step (Reverse Engineered) Patches

Here are some patches that are broken down into steps. Many of them are taken from the SC2 examples folder (written by James).

Server.default = s = Server.internal.boot

// Rising Sine Waves
(
{
SinOsc.ar(440, mul: 0.4) // SinOsc
}.play(s)
)

(
{
var cont2;
// control frequency with LFSaw
// midi values are converted to frequency
cont2 = LFSaw.kr(2/5, 0, 24, 80).midicps;
SinOsc.ar(cont2, mul: 0.4)
}.play(s)
)

(
{
var cont1;
cont1 = LFSaw.kr(8, 0, 3, 80).midicps; // second control for offset
SinOsc.ar(cont1, mul: 0.4)
}.play(s)
)

(
{
// combine the two, but do the midicps only once
var cont1, cont2;
cont1 = LFSaw.kr(8, 0, 3, 80);
cont2 = LFSaw.kr(2/5, 0, 24, cont1).midicps;
SinOsc.ar(cont2, mul: 0.4)
}.play(s)
)

(
{
var cont1, cont2;
// add random values and stereo
cont1 = LFSaw.kr([rrand(6.0, 8.0), rrand(6.0, 8.0)], 0, 3, 80);
cont2 = LFSaw.kr(2/5, 0, 24, cont1).midicps;
SinOsc.ar(cont2, mul: 0.4)
}.play(s)
)

(
{
var cont1, cont2, out;
cont1 = LFSaw.kr([rrand(6.0, 8.0), rrand(6.0, 8.0)], 0, 3, 80);
cont2 = LFSaw.kr(2/5, 0, 24, cont1).midicps;
out = SinOsc.ar(cont2, mul: 0.1);
out = CombN.ar(out, 0.2, 0.2, 4); // add echo
out
}.play(s)
)

(
SynthDef("RisingSines", {
    arg busOut = 3, sweep = 3, rate = 0.4, offset = 80, range = 24;
    var cont1, cont2, out;
    cont1 = LFSaw.kr([rrand(6.0, 8.0), rrand(6.0, 8.0)], 0, sweep, offset);
    cont2 = LFSaw.kr(rate, 0, range, cont1).midicps;
    out = SinOsc.ar(cont2, mul: 0.1);
    Out.ar(busOut, out);
}).send(s);

SynthDef("Echo", {
    arg busIn = 3, delay = 0.2;
    var out;
    out = CombN.ar(In.ar(busIn, 2), delay, delay, 4);
    Out.ar(0, out);
}).send(s)
)

// Echo is created first; the RisingSines synths created later are added to
// the head of the group by default, placing the source before the effect.
Synth("Echo");
Synth("RisingSines");
Synth("RisingSines", [\rate, 1/7, \offset, 60, \range, 32]);

// Random Sine Waves
(
{
FSinOsc.ar(exprand(700, 2000)) // single random Sine
}.play(s)
)

(
{
FSinOsc.ar(exprand(700, 2000), 0,
    // Random envelopes using LFNoise1
    // Let it run for a while
    max(0, LFNoise1.kr(3/5, 0.9)))
}.play(s)
)

(
{
Pan2.ar( // pan position
    FSinOsc.ar(exprand(700, 2000), 0,
        max(0, LFNoise1.kr(3/5, 0.9))),
    // random moving pan, let it run for a while
    LFNoise1.kr(1/3))
}.play(s)
)

(
{
var sines;
sines = 60;
// Mix a bunch of them down and decrease frequency of LF env
Mix.ar({Pan2.ar(
    FSinOsc.ar(exprand(700, 2000), 0,
        max(0, LFNoise1.kr(1/9, 0.7))),
    LFNoise1.kr(1/3))}.dup(sines))*0.2
}.play(s)
)

(
{
var sines;
sines = 60;
// Increase frequency of env
Mix.ar({Pan2.ar(
    FSinOsc.ar(exprand(700, 2000), 0,
        max(0, LFNoise1.kr(9, 0.7))),
    LFNoise1.kr(1/3))}.dup(sines))*0.2
}.play(s)
)

// Sync Saw
(
{
SyncSaw.ar(440, mul: 0.2) // simple Saw
}.play(s)
)

(
{
SyncSaw.ar(
    100, // Saw frequency
    MouseX.kr(50, 1000), // Sync frequency
    mul: 0.2)
}.scope(1)
)

(
{
SyncSaw.ar(
    100, // Saw frequency
    // Sync controlled by SinOsc
    SinOsc.ar(1/5, 0, mul: 200, add: 300),
    mul: 0.2)
}.scope(1)
)

(
{
SyncSaw.ar(
    100, // Saw frequency
    // Separate phase for left and right channel
    SinOsc.ar(1/5, [0, 3.0.rand], mul: 200, add: 300),
    mul: 0.2)
}.scope(2)
)

(
{
SyncSaw.ar(
    [100, 100*1.02], // Separate freq for L, R
    SinOsc.ar(1/5, [0, 3.0.rand], mul: 200, add: 300),
    mul: 0.2)
}.scope(2)
)

(
{
var freq;
freq = rrand(30, 80).midicps; // choose freq
SyncSaw.ar(
    [freq, freq*1.02], // freq variable replaces static values
    SinOsc.ar(1/5, [0, 3.0.rand], mul: freq*2, add: freq*3),
    mul: 0.2)
}.scope(2)
)

( // add an envelope
{
var freq, sig, env;
freq = rrand(30, 80).midicps;
env = EnvGen.kr(Env.linen(rrand(1.0, 3.0), rrand(4.0, 7.0), rrand(2.0, 3.0)));
sig = SyncSaw.ar(
    [freq, freq*1.002], // Saw frequency
    SinOsc.ar(1/5, [0, 3.0.rand], mul: freq*2, add: freq*3),
    mul: 0.1);
sig = CombN.ar(sig, 0.3, 0.3, 4, 1); // Add echo
sig*env
}.scope(2)
)


( // Send synth def to server with freq argument
SynthDef("SyncSaw-Ex", {
    arg freq;
    var sig, env;
    env = EnvGen.kr(Env.linen(rrand(1.0, 3.0), rrand(4.0, 7.0),
        rrand(2.0, 3.0)), doneAction: 2);
    sig = SyncSaw.ar(
        [freq, freq*1.002], // Saw frequency
        SinOsc.ar(1/5, [0, 3.0.rand], mul: freq*2, add: freq*3),
        mul: 0.1);
    sig = CombN.ar(sig, 0.3, 0.3, 4, 1); // Add echo
    sig = sig*env;
    Out.ar(0, sig*0.8)
}).play(s)
)

( // run a task to play the synth
r = Task({
    {
        Synth("SyncSaw-Ex", [\freq, rrand(30, 80).midicps]);
        // Choose a wait time before next event
        rrand(2.0, 5.0).wait;
    }.loop;
}).play
)

// Uplink
(
{
LFPulse.ar(200, 0, 0.5, 0.4) // simple pulse
}.play(s)
)

(
{
var freq;
freq = LFPulse.kr(10, 0, 0.3, 2000, 200); // freq control
LFPulse.ar(freq, 0, 0.5, 0.4)
}.play(s)
)

(
{
var freq;
// add random values and additional control for add
freq = LFPulse.kr(rrand(10, 20), 0, rrand(0.1, 0.8),
    LFPulse.kr(1, 0, 0.5, 4000, 700));
LFPulse.ar(freq, 0, 0.5, 0.4)
}.play(s)
)

(
{
var freq;
// duplicate and add the two together
freq = LFPulse.kr(20.rand, 0, rrand(0.1, 0.8),
    LFPulse.kr(rrand(1.0, 5.0), 0, rrand(0.1, 0.8),
        8000.rand, 2000.rand));
freq = freq + LFPulse.kr(20.rand, 0, rrand(0.1, 0.8),
    LFPulse.kr(rrand(1.0, 5.0), 0, rrand(0.1, 0.8),
        8000.rand, 2000.rand));
LFPulse.ar(freq, 0.5, 0.1)
}.play(s)
)

(
{
var freq, out, env;
// add an envelope
env = EnvGen.kr(Env.linen(rrand(4.0, 7.0), 5.0, rrand(2.0, 5.0)));
freq = LFPulse.kr(20.rand, 0, rrand(0.1, 0.8),
    LFPulse.kr(rrand(1.0, 5.0), 0, rrand(0.1, 0.8),
        8000.rand, 2000.rand));
freq = freq + LFPulse.kr(20.rand, 0, rrand(0.1, 0.8),
    LFPulse.kr(rrand(1.0, 5.0), 0, rrand(0.1, 0.8),
        8000.rand, 2000.rand));
// pan and echo
out = Pan2.ar(LFPulse.ar(freq, 0.5, 0.1), 1.0.rand2);
2.do(out = AllpassN.ar(out, [rrand(0.1, 0.01), rrand(0.1, 0.01)]));
out*env
}.play(s)
)

( // Send synth def to server with freq argument
SynthDef("Uplink-Ex", {
    var freq, out, env;
    // add an envelope
    env = EnvGen.kr(Env.linen(rrand(4.0, 7.0), 5.0, rrand(2.0, 5.0)),
        doneAction: 2);
    freq = LFPulse.kr(20.rand, 0, rrand(0.1, 0.8),
        LFPulse.kr(rrand(1.0, 5.0), 0, rrand(0.1, 0.8),
            8000.rand, 2000.rand));
    freq = freq + LFPulse.kr(20.rand, 0, rrand(0.1, 0.8),
        LFPulse.kr(rrand(1.0, 5.0), 0, rrand(0.1, 0.8),
            8000.rand, 2000.rand));
    // pan and echo
    out = Pan2.ar(LFPulse.ar(freq, 0.5, 0.1), 1.0.rand2);
    2.do(out = AllpassN.ar(out, [rrand(0.1, 0.01), rrand(0.1, 0.01)]));
    out*env;
    Out.ar(0, out*0.8)
}).play(s)
)

( // run a task to play the synth
r = Task({
    {
        Synth("Uplink-Ex");
        // Choose a wait time before next event
        rrand(4.0, 9.0).wait;
    }.loop;
}).play
)

// Ring and Klank
(
{
Dust.ar(20, 0.2) // noise bursts
}.play(s)
)

(
{
var partials;
partials = 8;
Klank.ar(
    // fill klank with random partials and amplitudes
    `[Array.rand(partials, 100, 10000), nil, Array.rand(partials, 0.2, 0.9)],
    Dust.ar(20, 0.2))
}.play(s)
)

(
{
// ring element
SinOsc.ar(LFNoise2.kr(1.0, 200, 300), mul: 0.5)
}.play(s)
)

(
{
var partials, out, filter, bell;
partials = 8;
filter = SinOsc.ar(LFNoise2.kr(1.0, 200, 300), mul: 0.3);
bell = Klank.ar(
    `[Array.rand(partials, 100, 10000), nil, Array.rand(partials, 0.2, 0.9)],
    Dust.ar(20, 0.2))*filter; // ring klank with filter
bell
}.play(s)
)

(
{
var partials, out, filter, bell;
partials = 8;
filter = SinOsc.ar(
    LFNoise2.kr(rrand(0.7, 1.3),
        rrand(200, 400), // add random choices
        rrand(500, 1000)), mul: 0.2);
Mix.ar({ // insert inside Mix
    bell = Klank.ar(
        `[Array.rand(partials, 100, 10000), nil,
            Array.rand(partials, 0.2, 0.9)],
        Dust.ar(12, 0.2))*filter;
    bell = Pan2.ar(bell, LFNoise2.kr(2/3));
    bell}.dup(4))*0.4
}.play(s)
)

// Tremulate
{FSinOsc.ar(440)}.play // Sine oscillator

// Amp control begins with LFNoise2
{LFNoise2.ar(20, 0.9)}.scope(1)

// Max removes negative values (makes them 0)
{max(0, LFNoise2.ar(20, 0.9))}.scope(1)

(
{
var ampCont;
// Amp controlled by LFNoise
ampCont = max(0, LFNoise2.ar(20, 0.4));
FSinOsc.ar(440, mul: ampCont)}.play
)

(
{
var ampCont;
ampCont = max(0, LFNoise2.ar([20, 30], 0.1));
FSinOsc.ar([400, 500], mul: ampCont)}.play
)

(
{
var ampCont, rate, freq, chord;
rate = rrand(30, 70);
freq = 500;
chord = [1, 5/4, 3/2, 15/8];
ampCont = max(0, LFNoise2.ar([rate, rate, rate, rate], 0.1));
// create a bunch of these then mix them down
Mix.ar(FSinOsc.ar(freq*chord, mul: ampCont))}.play
)

(
({
var ampCont, rate, freq, chord;
rate = rrand(30, 70);
freq = rrand(300, 1000);
chord = [
    [1, 5/4, 3/2, 15/8],
    [1, 6/5, 3/2, 9/5],
    [1, 4/3, 3/2, 9/5],
    [1, 9/8, 3/2, 5/3]];
ampCont = max(0, LFNoise2.ar([rate, rate, rate, rate], 0.1));
// choose a chord
Mix.ar(FSinOsc.ar(freq*chord.choose, mul: ampCont))}).play
)

(
{
// Add pan and env
var ampCont, rate, freq, chord, env, panp, out;
rate = rrand(30, 70);
freq = rrand(300, 1000);
panp = 1.0.rand2;
env = EnvGen.kr(Env.linen(0.1, 2.0, 5.0));
chord = [
    [1, 5/4, 3/2, 15/8],
    [1, 6/5, 3/2, 9/5],
    [1, 4/3, 3/2, 9/5],
    [1, 9/8, 3/2, 5/3]];
ampCont = max(0, LFNoise2.ar([rate, rate, rate, rate], 0.1));
// choose a chord
out = Mix.ar(
    Pan2.ar(FSinOsc.ar(freq*chord.choose, mul: ampCont), panp)
);
out*env;
}.play
)


// Harmonic Swimming and Tumbling
(
{
FSinOsc.ar(500, mul: 0.3) // Sine oscillator
}.play(s)
)

(
{
FSinOsc.ar(500,
    // amp control same as tremulate
    mul: max(0, LFNoise1.kr(rrand(6.0, 12.0), mul: 0.6)))
}.play(s)
)

(
{
FSinOsc.ar(500,
    mul: max(0, LFNoise1.kr(rrand(6.0, 12.0), mul: 0.6,
        add: Line.kr(0, -0.2, 20)))) // slow fade
}.play(s)
)

(
{
var freq;
freq = 500;
// two frequencies a fifth apart
FSinOsc.ar(freq*[1, 3/2],
    mul: max(0, LFNoise1.kr(rrand([6.0, 6.0], 12.0), mul: 0.6,
        add: Line.kr(0, -0.2, 20))))
}.play(s)
)

(
{
var signal, partials, freq;
signal = 0;
partials = 8;
// Begin with low fundamental
freq = 50;
// duplicate and sum frequencies at harmonic intervals
partials.do({arg harm;
    harm = harm + 1;
    signal = signal + FSinOsc.ar(freq * [harm, harm*3/2],
        mul: max(0, LFNoise1.kr(rrand([6.0, 6.0], 12.0),
            mul: 1/(harm + 1) * 0.6,
            add: Line.kr(0, -0.2, 20))))
});
signal
}.play(s)
)

(
SynthDef("Tumbling", {arg freq = 50;
    var signal, partials;
    signal = 0;
    partials = 8;
    partials.do({arg harm;
        harm = harm + 1;
        signal = signal + FSinOsc.ar(freq * [harm, harm*3/2],
            mul: max(0, LFNoise1.kr(Rand([6.0, 6.0], 12.0),
                mul: 1/(harm + 1) * 0.6)))
    });
    signal = signal*EnvGen.kr(Env.perc(0.2, 20.0), doneAction: 2);
    Out.ar(0, signal*0.8)
}).send(s)
)

( // run a task to play the synth
r = Task({
    {
        Synth("Tumbling", [\freq, rrand(30, 80)]);
        // Choose a wait time before next event
        rrand(12.0, 20.0).wait;
    }.loop;
}).play
)

// Police State
(
{ // single siren
SinOsc.ar(
    SinOsc.kr(0.1, 0, 600, 1000),
    0, 0.2)
}.play(s)
)

(
{
SinOsc.ar( // random frequencies and phase
    SinOsc.kr(Rand(0.1, 0.12), 2pi.rand,
        Rand(200, 600), Rand(1000, 1300)),
    mul: 0.2)
}.play(s)
)

(
{
SinOsc.ar(
    SinOsc.kr(Rand(0.1, 0.12), 6.0.rand,
        Rand(200, 600), Rand(1000, 1300)),
    // control scale
    mul: LFNoise2.ar(Rand(100, 120), 0.2))
}.play(s)
)

(
{ // pan and mix several
Mix.arFill(4, {
    Pan2.ar(
        SinOsc.ar(
            SinOsc.kr(Rand(0.1, 0.12), 6.0.rand,
                Rand(200, 600), Rand(1000, 1300)),
            mul: LFNoise2.ar(Rand(100, 120), 0.1)),
        1.0.rand2)
})
}.play(s)
)

(
{
LFNoise2.ar(600, 0.1) // second component
}.play(s)
)

(
{ // ring modulate?
LFNoise2.ar(LFNoise2.kr(2/5, 100, 600),
    LFNoise2.kr(1/3, 0.1, 0.06))
}.play(s)
)

(
{ // stereo
LFNoise2.ar(LFNoise2.kr([2/5, 2/5], 100, 600),
    LFNoise2.kr([1/3, 1/3], 0.1, 0.06))
}.play(s)
)

(
{ // add the two and add echo
CombL.ar(
    Mix.arFill(4, {
        Pan2.ar(
            SinOsc.ar(
                SinOsc.kr(Rand(0.1, 0.12), 6.0.rand,
                    Rand(200, 600), Rand(1000, 1300)),
                mul: LFNoise2.ar(Rand(100, 120), 0.1)),
            1.0.rand2)
    })
    + LFNoise2.ar(
        LFNoise2.kr([2/5, 2/5], 90, 620),
        LFNoise2.kr([1/3, 1/3], 0.15, 0.18)),
    0.3, 0.3, 3)
}.play(s)
)

// Latch or Sample and Hold

// Simple Oscillator
(
{
SinOsc.ar(
    freq: 440,
    mul: 0.5
);
}.play(s)
)

// Add a frequency control using a Saw
(
{
SinOsc.ar(
    freq: LFSaw.ar(freq: 1, mul: 200, add: 600), // Saw controlled freq
    mul: 0.5
);
}.play(s)
)

// Place the LFSaw inside a latch, add a trigger
(
{
SinOsc.ar(
    freq: Latch.ar( // Using a latch to sample the LFSaw
        LFSaw.ar(1, 0, 200, 600), // Input wave
        Impulse.ar(10) // Trigger (rate of sample)
    ),
    mul: 0.5
);
}.play(s)
)

// SinOsc is replaced by Blip, try replacing
// the 1.1 with a MouseX
(
{
Blip.ar( // Audio Ugen
    Latch.kr( // Freq control Ugen
        LFSaw.kr(1.1, 0, 500, 700), // Input for Latch
        Impulse.kr(10)), // Sample trigger rate
    3, // Number of harmonics in Blip
    mul: 0.3 // Volume of Blip
)
}.play(s)
)

// Freq of the Saw is controlled by a Saw
(
{
Blip.ar( // Audio Ugen
    Latch.kr( // Freq control Ugen
        LFSaw.kr( // input for Latch
            Line.kr(0.01, 10, 100), // Freq of input wave, was 1.1
            0, 300, 500), // Mul and Add for input wave
        Impulse.kr(10)), // Sample trigger rate
    3, // Number of harmonics in Blip
    mul: 0.3 // Volume of Blip
)
}.play(s)
)

// A variable is added for clarity.
(
{
var signal;
signal = Blip.ar( // Audio Ugen
    Latch.kr( // Freq control Ugen
        LFSaw.kr(6.18, 0, // Freq of input wave (Golden Mean)
            300, 500), // Mul and Add for input wave
        Impulse.kr(10)), // Sample trigger rate
    3, // Number of harmonics in Blip
    mul: 0.3 // Volume of Blip
);
// reverb
2.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4) });
signal // return the variable signal
}.play(s)
)

// Add a Pan2
(
{
var signal;
signal = Blip.ar( // Audio Ugen
    Latch.kr( // Freq control Ugen
        LFSaw.kr(6.18, 0, // Freq of input wave
            300, 500), // Mul and Add for input wave
        Impulse.kr(10)), // Sample trigger rate
    3, // Number of harmonics in Blip
    mul: 0.3 // Volume of Blip
);
signal = Pan2.ar(
    signal, // input for the pan
    LFNoise1.kr(1) // Pan position: between -1 and 1, once per second
);
// reverb
4.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4,
    mul: 0.3, add: signal) });
signal // return the variable signal
}.play(s)
)

// Control the number of harmonics
(
{
var signal;
signal = Blip.ar( // Audio Ugen
    Latch.kr( // Freq control Ugen
        LFSaw.kr(6.18, 0, // Freq of input wave
            300, 500), // Mul and Add for input wave
        Impulse.kr(10)), // Sample trigger rate
    LFNoise1.kr(0.3, 13, 14), // Number of harmonics in Blip
    mul: 0.3 // Volume of Blip
);
signal = Pan2.ar(
    signal, // input for the pan
    LFNoise1.kr(1) // Pan position
);
// reverb
4.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4,
    mul: 0.3, add: signal) });
signal // return the variable signal
}.play(s)
)

// Add an envelope
(
{
var signal, env1;
env1 = Env.perc(
    0.001, // attack of envelope
    2.0 // decay of envelope
);
signal = Blip.ar( // Audio Ugen
    Latch.kr( // Freq control Ugen
        LFSaw.kr(6.18, 0, // Freq of input wave
            300, 500), // Mul and Add for input wave
        Impulse.kr(10)), // Sample trigger rate
    LFNoise1.kr(0.3, 13, 14), // Number of harmonics in Blip
    mul: 0.3 // Volume of Blip
);
signal = Pan2.ar(
    signal, // input for the pan
    LFNoise1.kr(1) // Pan position
);
// reverb
4.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4,
    mul: 0.3, add: signal) });
signal*EnvGen.kr(env1) // return the variable signal
}.play(s)
)

// Place it in a Pbind
(
SynthDef("S_H", {
    var signal, env1;
    env1 = Env.perc(
        0.001, // attack of envelope
        2.0 // decay of envelope
    );
    signal = Blip.ar( // Audio Ugen
        Latch.kr( // Freq control Ugen
            LFSaw.kr(Rand(6.0, 7.0), 0, // Freq of input wave
                Rand(300, 600), Rand(650, 800)), // Mul and Add for input wave
            Impulse.kr(Rand(10, 12))), // Sample trigger rate
        LFNoise1.kr(0.3, 13, 14), // Number of harmonics in Blip
        mul: 0.3 // Volume of Blip
    );
    signal = Pan2.ar(
        signal, // input for the pan
        LFNoise1.kr(1) // Pan position
    );
    // reverb
    4.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4,
        mul: 0.3, add: signal) });
    signal = signal*EnvGen.kr(env1, doneAction: 2); // return the variable signal
    Out.ar(0, signal*0.9)
}).load(s);

SynthDescLib.global.read;

e = Pbind(
    \server, Server.internal,
    \dur, 0.3,
    \instrument, "S_H"
).play;
)


e.mute;
e.reset;
e.pause;
e.play;
e.stop;

// Add random values for each event
e = Pbind(
    \server, Server.internal,
    \dur, Prand([0, 0.1, 0.25, 0.5, 0.75, 1], inf),
    \instrument, "S_H"
).play;

e.stop;

// Pulse
(
{
var out;
out = Pulse.ar(
    200, // Frequency
    0.5, // Pulse width. Change with MouseX
    0.5
);
out
}.play(s)
)

// Add a control for frequency
(
{
var out;
out = Pulse.ar(
    LFNoise1.kr(
        0.1, // Freq of LFNoise change
        mul: 20, // mul = (-20 to 20)
        add: 60 // add = (40, 80)
    ),
    0.5,
    0.5);
out
}.play(s)
)

// Control pulse
(
{
var out;
out = Pulse.ar(
    LFNoise1.kr(0.1, 20, 60),
    SinOsc.kr(
        0.2, // Freq of SinOsc control
        mul: 0.45,
        add: 0.46
    ),
    0.5);
out
}.play(s)
)

// Expand to Stereo
(
{
var out;
out = Pulse.ar(
    LFNoise1.kr([0.1, 0.15], 20, 60),
    SinOsc.kr(0.2, mul: 0.45, add: 0.46),
    0.5);
out
}.play(s)
)

// Add reverb
(
{
var out;
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
    SinOsc.kr(0.2, mul: 0.45, add: 0.46), 0.5);
4.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4,
    mul: 0.4, add: out)});
out
}.play(s)
)

// Smaller pulse widths
(
{
var out;
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
    SinOsc.kr(0.2, mul: 0.05, add: 0.051), 0.5);
4.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4,
    mul: 0.4, add: out)});
out
}.play(s)
)

// Add an envelope
(
{
var out, env;
env = Env.linen([0.0001, 1.0].choose, 2.0.rand, [0.0001, 1.0].choose);
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
    SinOsc.kr(0.2, mul: 0.05, add: 0.051), 0.5);
4.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4,
    mul: 0.4, add: out)});
out*EnvGen.kr(env)
}.play(s)
)

// Define an instrument
(
SynthDef("Pulse1", {arg att = 0.4, decay = 0.4;
    var out, env;
    env = Env.linen(att, Rand(0.1, 2.0), decay);
    out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
        SinOsc.kr(0.2, mul: 0.05, add: 0.051), 0.5);
    4.do({out = AllpassN.ar(out, 0.05, [Rand(0.01, 0.05), Rand(0.01, 0.05)], 4,
        mul: 0.4, add: out)});
    out = out*EnvGen.kr(env, doneAction: 2);
    Out.ar(0, out*0.4);
}).load(s);

SynthDescLib.global.read;

e = Pbind(
    \server, Server.internal,
    \dur, 3,
    \instrument, "Pulse1"
).play;
)

e.stop;

// Add another instrument and random values
(
e = Pbind(
    \server, Server.internal,
    \att, Pfunc({rrand(2.0, 5.0)}),
    \decay, Pfunc({rrand(4.0, 6.0)}),
    \dur, Prand([0, 1.0, 2.0, 2.5, 5], inf),
    \instrument, Prand(["S_H", "Pulse1"], inf)
).play;
)

e.stop;

// Add more structure, more instruments, nest Pseq, Prand, Pfunc, etc.

// FM

// Begin with LFO control
(
{
var out;
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        5, // freq of control
        mul: 10, // amp of control
        add: 800), // add of control
    mul: 0.3 // amp of audio SinOsc
);
out
}.play(s)
)

// Add a control to move into audio range. The MouseX represents
// the control frequency, the add is the carrier. Mul is the index.
(
{
var out;
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        MouseX.kr(5, 240), // freq of control
        mul: 10, // amp of control
        add: 800), // add of control
    mul: 0.3 // amp of audio SinOsc
);
out
}.play(s)
)

// Control of amp, or index.
(
{
var out;
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        131, // freq of control
        mul: MouseX.kr(10, 700), // amp of control
        add: 800), // add of control
    mul: 0.3 // amp of audio SinOsc
);
out
}.play(s)
)

// Both
(
{
var out;
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        MouseY.kr(10, 230), // freq of control
        mul: MouseX.kr(10, 700), // amp of control
        add: 800), // add of control
    mul: 0.3 // amp of audio SinOsc
);
out
}.play(s)
)

// Add must be higher than mul, so a variable is added to
// make sure it changes in relation to mul.
(
{
var out, mulControl;
mulControl = MouseX.kr(10, 700);
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        MouseY.kr(10, 230), // freq of control
        mul: mulControl, // amp of control
        add: mulControl + 100), // add will be 100 greater than mulControl
    mul: 0.3 // amp of audio SinOsc
);
out
}.play(s)
)

// Replace Mouse with LFNoise control
(
{
var out, mulControl;
mulControl = LFNoise1.kr(0.2, 300, 600); // store control in variable
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        LFNoise1.kr(0.4, 120, 130), // freq of control
        mul: mulControl, // amp of control
        add: mulControl + 100), // add will be 100 greater than mulControl
    mul: 0.3 // amp of audio SinOsc
);
out
}.play(s)
)

// Another control
(
{
var out, mulControl;
mulControl = LFNoise1.kr(0.2, 300, 600);
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        LFNoise1.kr(0.4, 120, 130), // freq of control
        mul: mulControl, // amp of control
        add: mulControl + LFNoise1.kr(0.1, 500, 600)), // add of control
    mul: 0.3 // amp of audio SinOsc
);
out
}.play
)

// Multichannel expansion
(
{
var out, mulControl;
mulControl = LFNoise1.kr([0.2, 0.5], 300, 600);
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        LFNoise1.kr(0.4, 120, 130), // freq of control
        mul: mulControl, // amp of control
        add: mulControl + LFNoise1.kr(0.1, 500, 600)), // add of control
    mul: 0.3 // amp of audio SinOsc
);
out
}.play
)

// Reverb and envelope
(
{
var out, mulControl, env, effectEnv;
// effectEnv = Env.perc(0.001, 3);
env = Env.linen(0.01.rand, 0.3.rand, rrand(0.1, 3.0));
mulControl = LFNoise1.kr([0.2, 0.5], 300, 600);
out = SinOsc.ar(
    SinOsc.ar( // control Osc
        LFNoise1.kr(0.4, 120, 130), // freq of control
        mul: mulControl, // amp of control
        add: mulControl + LFNoise1.kr(0.1, 500, 600)), // add of control
    mul: 0.3 // amp of audio SinOsc
);
out*EnvGen.kr(env, doneAction: 2);
}.play
)

(
SynthDef("FMinst", {
    var out, mulControl, env, effectEnv;
    env = Env.linen(Rand(0.01, 1.0), Rand(0.03, 0.09), Rand(0.01, 1.0));
    mulControl = LFNoise1.kr([0.2, 0.5], 300, 600);
    out = SinOsc.ar(
        SinOsc.ar( // control Osc
            LFNoise1.kr(0.4, 120, 130), // freq of control
            mul: mulControl, // amp of control
            add: mulControl + LFNoise1.kr(0.1, 500, 600)), // add of control
        mul: 0.3 // amp of audio SinOsc
    );
    out = out*EnvGen.kr(env, doneAction: 2);
    Out.ar(0, out)
}).load(s)
)

SynthDescLib.global.read;

// Note that this is not a very interesting composition. But you get the idea.
// Also be aware that there are probably more efficient ways to do these using
// busses. For now I'm just trying to get them to work.
(
e = Pbind(
    \server, Server.internal,
    \att, Pfunc({rrand(2.0, 5.0)}),
    \decay, Pfunc({rrand(4.0, 6.0)}),
    \dur, Prand([0, 1.0, 2.0, 2.5, 5], inf),
    \instrument, Prand(["S_H", "Pulse1", "FMinst"], inf)
).play;
)

e.stop;

// Filter

// Saw and filter
(
{
RLPF.ar( // resonant low pass filter
    Saw.ar(100, 0.2), // input wave at 100 Hz
    MouseX.kr(100, 10000) // cutoff frequency
)}.play
)

// Control with SinOsc
(
{
RLPF.ar(
    Saw.ar(100, 0.2),
    SinOsc.ar(0.2, 0, 900, 1100)
)
}.play
)

// Control resonance
(
{
RLPF.ar(
    Saw.ar(100, 0.2),
    SinOsc.kr(0.2, 0, 900, 1100),
    MouseX.kr(1.0, 0.001) // resonance, or "Q"
)}.play(s)
)

// Two controls
(
{
RLPF.ar(
    Saw.ar( // input wave
        LFNoise1.kr(0.3, 50, 100), // freq of input
        0.1
    ),
    LFNoise1.kr(0.1, 4000, 4400), // cutoff freq
    0.04 // resonance
)}.play(s)
)

// Add a pulse
(
{
var freq;
freq = LFNoise1.kr(0.3, 50, 100);
RLPF.ar(
    Pulse.ar( // input wave
        freq, // freq of input
        0.1, // pulse width
        0.1 // add, or volume of pulse
    ),
    LFNoise1.kr(0.1, 4000, 4400), // cutoff freq
    0.04 // resonance
)}.play(s)
)

// Wind and Metal
{LFNoise1.ar}.scope // random wave
{max(0, LFNoise1.ar)}.scope // random wave with max
{min(0, LFNoise1.ar)}.scope // random wave with min
{PinkNoise.ar(max(0, LFNoise1.ar(10)))}.scope // used as amp control
{PinkNoise.ar(max(0, LFNoise1.ar(1)))}.play // let this one run a while
{PinkNoise.ar * max(0, LFNoise1.ar([10, 1]))}.play // expanded to two channels
{PinkNoise.ar * max(0, LFNoise1.ar([10, 10]))}.play

// Scale and offset control how often LFNoise moves to positive values
// Use the mouse to experiment:
{max(0, LFNoise1.ar(100, 0.75, MouseX.kr(-0.5, 0.5)))}.scope(zoom: 10)

(
{
PinkNoise.ar * max(0, LFNoise1.ar([10, 10], 0.75, 0.25))
}.play
)

// Klank with one frequency.
{Klank.ar(`[[500], [1], [1]], PinkNoise.ar(0.05))}.play

// An array of freqs
{Klank.ar(`[[100, 200, 300, 400, 500, 600, 700, 800]], PinkNoise.ar(0.01))}.play

// Add amplitudes. Try each of these and notice the difference.
(
{Klank.ar(`[
    [100, 200, 300, 400, 500, 600, 700, 800], // freq
    [0.1, 0.54, 0.2, 0.9, 0.76, 0.3, 0.5, 0.1] // amp
], PinkNoise.ar(0.01))}.play
)

(
{Klank.ar(`[
    [100, 200, 300, 400, 500, 600, 700, 800], // freq
    [0.54, 0.2, 0.9, 0.76, 0.3, 0.5, 0.1, 0.3] // amp
], PinkNoise.ar(0.01))}.play
)

(
{Klank.ar(`[
    [100, 200, 300, 400, 500, 600, 700, 800], // freq
    [0.9, 0.76, 0.3, 0.5, 0.1, 0.3, 0.6, 0.2] // amp
], PinkNoise.ar(0.01))}.play
)

// Using inharmonic frequencies.
{Klank.ar(`[[111, 167, 367, 492, 543, 657, 782, 899]], PinkNoise.ar(0.01))}.play

// Use Array.fill to fill an array with exponential values (biased toward 100)
Array.fill(20, {exprand(100, 1000).round(0.1)})

// compare with (even distribution)
Array.fill(20, {rrand(100.0, 1000).round(0.1)})

// Added to the patch. Run this several times. The postln will print
// the freq array.
(
{Klank.ar(
    `[Array.fill(10, {exprand(100, 1000)}).round(0.1).postln],
    PinkNoise.ar(0.01))}.play
)

// Add LFNoise for amp control.
(
{Klank.ar(
    `[Array.fill(10, {exprand(100, 1000)}).round(0.1).postln],
    PinkNoise.ar(0.01) * max(0, LFNoise1.ar([10, 10], 0.75, 0.25)))}.play
)

// Same thing with variables.
(
{
var excitation, speed, filters, range;
range = {exprand(100, 1000)};
filters = 10;
excitation = PinkNoise.ar(0.01) * max(0, LFNoise1.ar([10, 10], 0.75, 0.25));
Klank.ar(`[Array.fill(filters, range).round(0.1).postln], excitation)}.play
)

// With ring times and amplitudes.
(
{
var excitation, speed, filters, range, freqBank, ampBank, ringBank;
range = {exprand(100, 1000)};
filters = 10;
excitation = PinkNoise.ar(0.01) * max(0, LFNoise1.ar([10, 10], 0.75, 0.25));
freqBank = Array.fill(filters, range).round(0.1).postln;
ampBank = Array.fill(filters, {rrand(0.1, 0.9)}).round(0.1).postln;
ringBank = Array.fill(filters, {rrand(1.0, 4.0)}).round(0.1).postln;
Klank.ar(`[freqBank, ampBank, ringBank], excitation)
}.play
)

// Finally, slow down the excitation:
(
{
var excitation, speed, filters, range, freqBank, ampBank, ringBank;
range = {exprand(100, 1000)};
filters = 10;
excitation = PinkNoise.ar(0.01) * max(0, LFNoise1.ar([0.1, 0.1], 0.75, 0.25));
freqBank = Array.fill(filters, range).round(0.1).postln;
ampBank = Array.fill(filters, {rrand(0.1, 0.9)}).round(0.1).postln;
ringBank = Array.fill(filters, {rrand(1.0, 4.0)}).round(0.1).postln;
Klank.ar(`[freqBank, ampBank, ringBank], excitation)
}.play
)

// Sci-Fi Computer [New]
(
{
PMOsc.ar(
    MouseX.kr(700, 1300),
    MouseY.kr(700, 1300),
    3)
}.play
)

(
{
PMOsc.ar(
    MouseX.kr(700, 1300),
    LFNoise0.kr(10, 1000, 1000),
    MouseY.kr(0.1, 5.0),
    mul: 0.3)
}.play
)

(
{
PMOsc.ar(
    LFNoise1.kr(10, 1000, 1000),
    LFNoise0.kr(10, 1000, 1000),
    MouseY.kr(0.1, 5.0),
    mul: 0.3)
}.play
)

(
{
PMOsc.ar(
    LFNoise1.kr([10, 10], 1000, 1000),
    LFNoise0.kr([10, 10], 1000, 1000),
    MouseY.kr(0.1, 5.0),
    mul: 0.3)
}.play
)

(
{
PMOsc.ar(
    LFNoise1.kr(
        MouseX.kr([1, 1], 12),
        mul: 1000,
        add: 1000),
    LFNoise0.kr(
        MouseX.kr([1, 1], 12),
        mul: 1000,
        add: 1000),
    MouseY.kr(0.1, 5.0),
    mul: 0.3)
}.play
)

(
{
PMOsc.ar(
    LFNoise1.kr(
        MouseX.kr([1, 1], 12),
        mul: MouseY.kr(10, 1000),
        add: 1000),
    LFNoise0.kr(
        MouseX.kr([1, 1], 12),
        mul: MouseY.kr(30, 1000),
        add: 1000),
    MouseY.kr(0.1, 5.0),
    mul: 0.3)
}.play
)
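For reference (not in the original text): PMOsc's first three arguments are carrier frequency, modulator frequency, and modulation index, so the simplest possible starting point for the patches above is something like this, with my own values:

{PMOsc.ar(800, 150, MouseX.kr(0, 10), mul: 0.2)}.play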

// Harmonic Swimming [New]
( // harmonic swimming
play({
    var fundamental, partials, out, offset;
    fundamental = 50; // fundamental frequency
    partials = 20; // number of partials per channel
    out = 0.0; // start of oscil daisy chain
    offset = Line.kr(0, -0.02, 60); // causes sound to separate and fade
    partials.do({ arg i;
        out = FSinOsc.ar(
            fundamental * (i+1), // freq of partial
            0,
            max(0, // clip negative amplitudes to zero
                LFNoise1.kr(
                    6 + [4.0.rand2, 4.0.rand2], // amplitude rate
                    0.02, // amplitude scale
                    offset // amplitude offset
                )
            ),
            out
        )
    });
    out
})
)

(
{
var out = 0;
2.do({ |i|
    out = out + FSinOsc.ar(400 * (i + 1),
        mul: max(0, LFNoise1.kr(rrand(6.0, 10.0))))
});
out
}.play
)

(
{
var out = 0;
4.do({ |i|
    out = out + FSinOsc.ar(400 * (i + 1),
        mul: max(0, LFNoise1.kr(rrand(6.0, 10.0), 0.2)))
});
out
}.play
)

(
{
var out = 0;
20.do({ |i|
    out = out + FSinOsc.ar(400 * (i + 1),
        mul: max(0, LFNoise1.kr(rrand(6.0, 10.0), 0.2)))
});
out
}.play
)

(
{
var out = 0, fundamental = 50, partials = 20;
partials.do({ |i|
    out = out + FSinOsc.ar(fundamental * (i + 1),
        mul: max(0, LFNoise1.kr(rrand(6.0, 10.0), 0.2)))
});
out
}.play
)

(
{
var out = 0, fundamental = 50, partials = 20;
partials.do({ |i|
    out = out + FSinOsc.ar(fundamental * (i + 1),
        mul: max(0, LFNoise1.kr(rrand(6.0, 10.0), 0.2, MouseX.kr(0, -0.2))))
});
out
}.play
)

(
{
var out = 0, fundamental = 50, partials = 20;
partials.do({ |i|
    out = out + FSinOsc.ar(fundamental * (i + 1),
        mul: max(0, LFNoise1.kr(rrand(6.0, [10.0, 10.0]), 0.2,
            Line.kr(0, -0.2, 60))))
});
out
}.play
)

// Variable decay bell [New]
{SinOsc.ar(400 * LFNoise1.kr(1/6, 0.4, 1))}.play

(
{
SinOsc.ar(
    400 * LFNoise1.kr(1/6, 0.4, 1),
    mul: EnvGen.kr(Env.perc(0, 0.5), Dust.kr(1))
)
}.play
)

// add formula so that low has long decay, high has short
(
{
SinOsc.ar(
    100 * LFNoise1.kr(1/6, 0.4, 1),
    mul: EnvGen.kr(
        Env.perc(0, (100**(-0.7))*100),
        Dust.kr(1))
)
}.play
)

(
{
SinOsc.ar(
    3000 * LFNoise1.kr(1/6, 0.4, 1),
    mul: EnvGen.kr(
        Env.perc(0, (3000**(-0.7))*100),
        Dust.kr(1))
)
}.play
)

(
{
Pan2.ar(
    SinOsc.ar(
        3000 * LFNoise1.kr(1/6, 0.4, 1),
        mul: EnvGen.kr(
            Env.perc(0, (3000**(-0.7))*100),
            Dust.kr(1))
    ),
    LFNoise1.kr(1/8)
)
}.play
)
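A quick check of that decay formula, evaluated in the language (my arithmetic):

(100 ** (-0.7)) * 100;  // about 3.98 seconds of decay for a 100 Hz partial
(3000 ** (-0.7)) * 100; // about 0.37 seconds for a 3000 Hz partial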

( { SinOsc.ar( 3000 * LFNoise1.kr(1/6, 0.4, 1), mul: EnvGen.kr( Env.perc(0, (3000**(-0.7))*100), Dust.kr(1)) ) }.play ) ( { Pan2.ar( SinOsc.ar( 3000 * LFNoise1.kr(1/6, 0.4, 1), mul: EnvGen.kr( Env.perc(0, (3000**(-0.7))*100), Dust.kr(1)) ), LFNoise1.kr(1/8) ) }.play ) ( { Mix.fill(15,

352

{ var freq; freq = exprand(100, 3000); Pan2.ar( SinOsc.ar( freq * LFNoise1.kr(1/6, 0.4, 1), mul: EnvGen.kr( Env.perc(0, (freq**(-0.7))*100), Dust.kr(1/5)) ), LFNoise1.kr(1/8) )*0.2 }) }.play )

// Gaggle of sine variation [New]

{SinOsc.ar(400, mul: max(0, FSinOsc.kr(2)))}.play
{SinOsc.ar(400, mul: max(0, FSinOsc.kr([2, 4])))}.play
{SinOsc.ar([400, 800], mul: max(0, FSinOsc.kr([2, 3])))}.play

(
{Mix.ar(SinOsc.ar([400, 800, 1200],
	mul: max(0, FSinOsc.kr([1, 2, 3]))))*0.1}.play
)

(
{
var harmonics = 4, fund = 400;
Mix.fill(harmonics,
	{arg count;
	SinOsc.ar(fund * (count+1),
		mul: max(0, FSinOsc.kr(count)))
	}
)*0.1}.play
)

(
{
var harmonics = 4, fund = 400;
Mix.fill(harmonics,
	{arg count;
	SinOsc.ar(fund * (count+1),
		mul: max(0, FSinOsc.kr(count/5)))
	}
)*0.1}.play
)

(
{
var harmonics = 16, fund = 400;
Mix.fill(harmonics,
	{arg count;
	SinOsc.ar(fund * (count+1),
		mul: max(0, FSinOsc.kr(count/5)))
	}
)*0.1}.play
)

(
{
var harmonics = 16, fund = 50;
Mix.fill(harmonics,
	{arg count;
	Pan2.ar(
		SinOsc.ar(fund * (count+1),
			mul: max(0, FSinOsc.kr(count/5))),
		1.0.rand2)
	}
)*0.07}.play
)

// KSPluck

// More

{max(0, LFNoise1.ar)}.scope // random wave with max


E. Pitch Chart, MIDI, Pitch Class, Frequency, Hex, Binary Converter:

Notes for the pitch chart: PC = Pitch Class, MN = MIDI Number, Int = Interval, MI = MIDI Interval, ETR = Equal-Tempered Ratio, ETF = ET Frequency, JR = Just Ratio, JC = Just Cents, JF = Just Frequency, PR = Pythagorean Ratio, PC = Pyth. Cents, PF = Pyth. Freq., MR = Mean-Tone Ratio, MC = MT Cents, MF = MT Freq. The shaded columns are chromatic pitches and correspond with the black keys of the piano. Italicized values are negative. Some scales show intervals, ratios, and cents from middle C. The Pythagorean scale is mostly positive numbers showing intervals and ratios from C1. I did this to show the overtone series. All ratios with a 1 as the denominator are overtones. They are bold. To invert the ratios just invert the fraction; 3:2 becomes 2:3. I restart the cents chart at 000 for C4 in the PC column because I just don't think numbers higher than that are very useful. Also, here is a music number converter. The Hex and Binary refer to the MIDI number.

31.20. Pitch class, MIDI number, Frequency, Hex, Binary conversion GUI

(
Sheet({ arg l;
	var pcstring;
	pcstring = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"];
	SCStaticText(l, l.layRight(70, 30)).string_("MIDI");
	SCStaticText(l, l.layRight(70, 30)).string_("Pitch");
	SCStaticText(l, l.layRight(70, 30)).string_("Frequency");
	SCStaticText(l, l.layRight(70, 30)).string_("Hex");
	SCStaticText(l, l.layRight(70, 30)).string_("Binary");
	l.view.decorator.nextLine;
	m = SCNumberBox(l, l.layRight(70, 30));
	p = SCTextField(l, l.layRight(70, 30));
	f = SCNumberBox(l, l.layRight(70, 30));
	h = SCTextField(l, l.layRight(70, 30));
	b = SCTextField(l, l.layRight(70, 30));
	p.value = "C4";
	f.value = 60.midicps.round(0.01);
	m.value = 60;
	h.value = "0000003C";
	b.value = "00111100";
	m.action = { arg numb;
		var array;
		numb.value.asInteger.asBinaryDigits.do({arg e, i;
			array = array ++ e.asString});
		p.value = pcstring.wrapAt(numb.value) ++ (numb.value/12 - 1).round(1).asString;
		f.value = numb.value.midicps.round(0.001);
		h.value = numb.value.asInteger.asHexString;
		b.value = array;
	};
	//p.defaultKeyDownAction = {arg a, b, c, d, e; [a, b, c, d, e].value.postln;};
}, "Conversions");
)
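Note that Sheet and the SC-prefixed GUI classes above predate the current Qt GUI system and may not run in recent versions of SuperCollider. The conversions themselves, though, are ordinary language-side operations; here is a minimal sketch (assuming only standard sclang methods) of what the GUI computes for MIDI note 60:

(
var midi = 60;
var pcstring = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"];
("pitch: " ++ pcstring.wrapAt(midi) ++ (midi/12 - 1).asInteger).postln; // pitch: C4
("freq: " ++ midi.midicps.round(0.01)).postln;                          // freq: 261.63
("hex: " ++ midi.asHexString(2)).postln;                                // hex: 3C
("binary: " ++ midi.asBinaryDigits(8).join).postln;                     // binary: 00111100
)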


Pitch chart (each labeled row below is one column of the original chart; values read left to right correspond to the pitches C2 through C6, MIDI notes 36 through 84):

PC:  C2 Db D Eb E F F# G Ab A Bb B C3 Db D Eb E F F# G Ab A Bb B C4 Db D Eb E F F# G Ab A Bb B C5 Db D Eb E F F# G Ab A Bb B C6

MN:  36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84

Int: P15 M14 m14 M13 m13 P12 A11 P11 M10 m10 M9 m9 P8 M7 m7 M6 m6 P5 A4 P4 M3 m3 M2 m2 P1 m2 M2 m3 M3 P4 A4 P5 m6 M6 m7 M7 P8 m9 M9 m10 M10 P11 A11 P12 m13 M13 m14 M14 P15

MI:  24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

ETR: 0.250 0.265 0.281 0.298 0.315 0.334 0.354 0.375 0.397 0.421 0.446 0.472 0.500 0.530 0.561 0.595 0.630 0.668 0.707 0.749 0.794 0.841 0.891 0.944 1.000 1.059 1.122 1.189 1.260 1.335 1.414 1.498 1.587 1.682 1.782 1.888 2.000 2.118 2.244 2.378 2.520 2.670 2.828 2.996 3.174 3.364 3.564 3.776 4.000

ETF: 65.41 69.30 73.42 77.78 82.41 87.31 92.50 98.00 103.8 110.0 116.5 123.5 130.8 138.6 146.8 155.6 164.8 174.6 185.0 196.0 207.7 220.0 233.1 246.9 261.6 277.2 293.7 311.1 329.6 349.2 370.0 392.0 415.3 440.0 466.2 493.9 523.3 554.4 587.3 622.3 659.3 698.5 740.0 784.0 830.6 880.0 932.3 987.8 1047

JR:  1:4 4:15 5:18 3:10 5:16 1:3 16:45 3:8 2:5 5:12 4:9 15:32 1:2 8:15 5:9 3:5 5:8 2:3 32:45 3:4 4:5 5:6 8:9 15:16 1:1 16:15 9:8 6:5 5:4 4:3 45:32 3:2 8:5 5:3 9:5 15:8 2:1 32:15 9:4 12:5 5:2 8:3 45:16 3:1 16:5 10:3 18:5 15:4 4:1

JR (decimal): 0.250 0.266 0.278 0.300 0.312 0.333 0.355 0.375 0.400 0.416 0.444 0.469 0.500 0.533 0.556 0.600 0.625 0.667 0.711 0.750 0.800 0.833 0.889 0.938 1.000 1.067 1.125 1.200 1.250 1.333 1.406 1.500 1.600 1.667 1.800 1.875 2.000 2.133 2.250 2.400 2.500 2.667 2.813 3.000 3.200 3.333 3.600 3.750 4.000

JF:  65.406 69.767 72.674 78.488 81.758 87.209 93.023 98.110 104.65 109.01 116.28 122.64 130.81 139.53 145.35 156.98 163.52 174.42 186.05 196.22 209.30 218.02 232.56 245.27 261.63 279.07 294.33 313.95 327.03 348.83 367.91 392.44 418.60 436.04 470.93 490.55 523.25 558.14 588.66 627.90 654.06 697.67 735.82 784.88 837.20 872.09 941.85 981.10 1046.5

PR:  0.500 0.527 0.562 0.592 0.633 0.667 0.702 0.750 0.790 0.844 0.889 0.950 1.000 1.053 1.125 1.185 1.266 1.333 1.424 1.500 1.580 1.688 1.778 1.898 2.000 2.107 2.250 2.370 2.531 2.667 2.848 3.000 2.914 3.375 3.556 3.797 4.000 4.214 4.500 4.741 5.063 5.333 5.695 6.000 5.827 6.750 7.111 7.594 8.000

PF:  65.406 68.906 73.582 77.507 82.772 87.219 93.139 98.110 95.297 110.37 116.29 124.17 130.81 137.81 147.16 155.05 165.58 174.41 186.24 196.22 190.56 220.75 232.55 248.35 261.63 275.62 294.33 310.06 331.12 348.85 372.52 392.44 381.12 441.49 465.10 496.70 523.25 551.25 588.66 620.15 662.24 697.66 745.01 784.88 762.28 882.99 930.21 993.36 1046.5

MR:  0.250 0.268 0.280 0.299 0.313 0.334 0.349 0.374 0.400 0.418 0.447 0.467 0.500 0.535 0.559 0.598 0.625 0.669 0.699 0.748 0.800 0.836 0.895 0.935 1.000 1.070 1.118 1.196 1.250 1.337 1.398 1.496 1.600 1.672 1.789 1.869 2.000 2.140 2.236 2.392 2.500 2.674 2.796 2.992 3.200 3.344 3.578 3.738 4.000

MF:  65.406 70.116 73.255 78.226 81.889 87.383 91.307 97.848 104.65 109.36 116.95 122.18 130.81 139.97 146.25 156.45 163.52 175.03 182.88 195.70 209.30 218.72 234.16 244.62 261.63 279.94 292.50 312.90 327.03 349.79 365.75 391.39 418.60 437.44 468.05 488.98 523.25 559.88 585.00 625.81 654.06 699.59 731.51 782.78 837.20 874.88 936.10 977.96 1046.5

Chromatic (sharp) spellings for selected pitches:

PC  MN  Int  MI  ETR    ETF    JR     JR (dec)  JF      PR     PF      MR     MF
C#  61  A1   1   1.059  277.2  25:24  1.042     272.53  1.068  279.38  1.045  279.94
Gb  66  d5   6   1.414  370.0  64:45  1.422     372.09  1.405  367.50  **     **
G#  80  A5   8   3.174  830.6  25:16  1.563     408.79  1.602  419.07  **     **
A#  70  m7   10  1.782  466.2  45:16  1.758     459.89  **     **      1.869  488.98
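The relationship between the ratio and frequency columns can be verified in code. A minimal sketch (assuming middle C = 261.63 Hz, using the perfect fifth above middle C as the test case):

(
var middleC = 261.63;
var etr = 2 ** (7/12);  // equal-tempered ratio for a perfect fifth
var jr = 3/2;           // just ratio for the same interval
(middleC * etr).round(0.01).postln;          // ET frequency, ~392.0 (the ETF column)
(middleC * jr).round(0.01).postln;           // just frequency, ~392.44 (the JF column)
((jr / etr).log2 * 1200).round(0.01).postln; // difference, ~1.96 cents
)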

Answers to Exercises

Answers. Open this file in SC to run examples, since many of the answers are given as code or formulas to calculate the answer.

4.1. Of the wave examples below (each is exactly 1/100th of a second), which is the loudest? softest? highest pitch? brightest sound? is aperiodic? is periodic? (p. 44)
A, B, B, B (C?), D, ABC

4.2. What is the length of each of these waves: 10 Hz, 250 Hz, 440 Hz, 1000 Hz? (p. 44)
1000/[10, 250, 440, 1000]

4.3. What is the lowest pitch humans can hear? What is the highest? What is the lowest frequency humans can hear? (p. 44)
20 to 20 kHz, infinite

4.4. The Libby performance hall is about 200 feet deep. If you are at one end of the hall, how long would it take sound to get back to you after bouncing off of the far wall? (p. 44)
400/1000

5.2. Beginning at C4 (261.6) calculate the frequencies for F#4, G3, B4, A3, and E4, using lowest possible ratios (only M2, m3, M3, 4, 5). (p. 44)
261.6 * 9/8 * 9/8 * 9/8
261.6 * 3/2
261.6 * 3/2 * 5/4
261.6 * 3/2 * 9/8
261.6 * 5/4

5.3. Prove the Pythagorean comma (show your work). (p. 44)
A2, A3, A4, A5, A6, A7, A8, A9 = 7 octaves
A2, E3, B3, F#4, C#5, G#5, D#6, A#6, F7, C8, G8, D9, A9 = 12 fifths
(110 * pow(3/2, 12)) * pow(1/2, 7)
(110 * pow(3/2, 12)) - (110 * pow(2, 7))
110*3/2*3/2*3/2*3/2*3/2*3/2*3/2*3/2*3/2*3/2*3/2*3/2
110*2*2*2*2*2*2*2
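As a quick numeric check (a minimal sketch; twelve pure fifths overshoot seven octaves by the ratio 531441:524288, about 23.5 cents):

(
var fifths = (3/2) ** 12;   // ~129.75
var octaves = 2 ** 7;       // 128
(fifths / octaves).postln;               // ~1.013643, the Pythagorean comma
((fifths / octaves).log2 * 1200).postln; // ~23.46 cents
)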

5.4. Story problem! Two trains leave the station, . . . No, wait, two microphones are set up to record a piano. One is 10' away, the other is 11 feet away (a difference of one foot). What frequency will be cancelled out? (p. 44)
A wave with a length of 2': 500 Hz (1000/500 = 2)

6.1. Calculate the sizes of a 10 minute recording in these formats: mono, 22k sampling rate, 8 bit; stereo, 88.2k, 24 bit; mono, 44.1k, 16 bit; and mono, 11k, 8 bit. (p. 52)
10 minutes at 44.1k, stereo, 16 bit = 100M
12.5M, 300M, 50M, 6.25M
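The figures above scale from the stated baseline; a minimal sketch of that arithmetic (assuming the 100 MB baseline given in the answer):

(
var baseline = 100; // MB: 10 minutes, stereo, 44.1k, 16 bit
[
	baseline * (22.05/44.1) * (8/16) * (1/2),  // mono, 22k, 8 bit     -> 12.5
	baseline * (88.2/44.1) * (24/16) * (2/2),  // stereo, 88.2k, 24 bit -> 300
	baseline * (44.1/44.1) * (16/16) * (1/2),  // mono, 44.1k, 16 bit  -> 50
	baseline * (11.025/44.1) * (8/16) * (1/2)  // mono, 11k, 8 bit     -> 6.25
].postln;
)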

8.1. Identify all objects, functions, messages, arrays, and argument lists. (p. 69)
{RLPF.ar(
	LFSaw.ar([8, 12], 0, 0.2),
	LFNoise1.ar([2, 3].choose, 1500, 1600),
	0.05, mul: 0.4
)}.play
Objects are RLPF, LFSaw, all numbers, LFNoise1, and the function. The function is everything enclosed in { and }. The messages are .ar, .choose, and .play. The arrays are [8, 12] and [2, 3]. The keywords are mul. The argument list for RLPF is LFSaw.ar([8, 12], 0, 0.2), LFNoise1.ar([2, 3].choose, 1500, 1600), 0.05, mul: 0.4. The argument list for LFSaw is [8, 12], 0, 0.2. The argument list for LFNoise1 is [2, 3].choose, 1500, 1600.

8.2. Identify the two errors in this example. (p. 69)
{SinOsc.ar(LFNoise0.ar([10, 15], 400 800), 0, 0.3}.play
No comma after 400; missing ")".

8.3. Explain what each of the numbers in the argument list mean (using help). MIDIOut.noteOn(0, 60, 100) (p. 69)
channel, note, velocity

8.4. Modify the CombN code below so that the contents of each enclosure are indented successive levels. For example, SinOsc.ar(rrand(100, 200)) would become:
SinOsc.ar(
	rrand(
		100,
		200
	)
)
CombN.ar(WhiteNoise.ar(0.01), [1, 2, 3, 4], XLine.kr(0.0001, 0.01, 20), -0.2) (p. 69)
CombN.ar(
	in: WhiteNoise.ar(
		mul: 0.01
	),
	maxdelaytime: 0.01,
	delaytime: XLine.kr(
		start: 0.0001,
		end: 0.01,
		dur: 20
	),
	decaytime: -0.2
)

8.5. In the above example, which Ugens are nested? (p. 69)
WhiteNoise.ar, XLine.kr

8.6. Which of these are not legal variable names? lfNoise0ar, 6out, array6, InternalBus, next pitch, midi001, midi.01, attOne (p. 69)
6out (begins with number), InternalBus (first letter capitalized), next pitch (not contiguous), midi.01 (period)

9.3. Write a small patch with a SinOsc with a frequency of 670, a phase of 1pi, and an amplitude of 0.5. (p. 77)


{SinOsc.ar(670, 1pi, 0.5)}.play

11.1. Rewrite this patch replacing all numbers with variables. (p. 89)
{SinOsc.ar(
	freq: SinOsc.ar(512, mul: 673, add: LFNoise0.kr(7.5, 768, 600)),
	mul: 0.6
)}.play
(
{
var modFreq, carrierFreq, indexFreq, indexScale, indexOffset, amp, modulator, indexControl, signal;
modFreq = 512;
carrierFreq = 673;
indexFreq = 7.5;
indexScale = 768;
indexOffset = 600;
amp = 0.6;
indexControl = LFNoise0.kr(indexFreq, indexScale, indexOffset);
modulator = SinOsc.ar(modFreq, mul: carrierFreq, add: indexControl);
signal = SinOsc.ar(modulator, mul: amp);
signal
}.play;
)

11.2. Rewrite the patch above with variables so that the scale of the SinOsc (673) is twice the frequency (512), and the offset of LFNoise0 (600) is 200 greater than the scale (768). (p. 89)
(
{
var modFreq, carrierFreq, indexFreq, indexScale, indexOffset, amp, modulator, indexControl, signal;
modFreq = 512;
carrierFreq = modFreq*2;
indexFreq = 7.5;
indexScale = 768;
indexOffset = 200 + indexScale;
amp = 0.6;
indexControl = LFNoise0.kr(indexFreq, indexScale, indexOffset);
modulator = SinOsc.ar(modFreq, mul: carrierFreq, add: indexControl);
signal = SinOsc.ar(modulator, mul: amp);
signal
}.play;
)

11.3. How would you offset and scale an LFNoise0 so that it returned values between the range of C2 and C6 (C4 is 60)? (p. 89)
Four octaves from C2 to C6 is 36 to 84. Scale: (84 - 36)/2, offset: 36 + (84 - 36)/2.

11.4. An envelope is generating values between 400 and 1000. What is the offset and scale? (p. 89)
Range of 600. Envelopes default to a range of 0 to 1. Offset = 400, scale = 600.

12.1. In this patch, what is the speed of the vibrato? {SinOsc.ar(400 + SinOsc.ar(7, mul: 5))}.play (p. 104)
7 Hz


12.2. Rewrite the patch above with a MouseX and MouseY to control the depth and rate of vibrato. (p. 104)
{SinOsc.ar(400 + SinOsc.ar(MouseX.kr(2, 7), mul: MouseY.kr(2, 8)))}.play

12.3. Rewrite the patch above using an LFPulse rather than a SinOsc to control frequency deviation. (p. 104)
{SinOsc.ar(400 + LFPulse.ar(MouseX.kr(2, 7), mul: MouseY.kr(2, 8)))}.play

12.4. Rewrite the Theremin patch so that vibrato increases with amplitude. (p. 104)
({
var amp;
amp = MouseX.kr(0.02, 1);
SinOsc.ar(
	freq: MouseY.kr([3200, 1600], [200, 100], warp: 1, lag: 0.5)
		* (1 + SinOsc.kr(amp*6, mul: 0.02)), // Vibrato
	mul: abs(amp) // Amplitude
)
}.play)

12.5. Write a patch using a Pulse and control the width with a Line moving from 0.1 to 0.9 in 20 seconds. (p. 104)
{Pulse.ar(50, Line.kr(0.1, 0.9, 20))}.play

12.6. Write a patch using a SinOsc and EnvGen. Use the envelope generator to control frequency of the SinOsc and trigger it with Dust. (p. 104)
{SinOsc.ar(300 + EnvGen.kr(Env.linen(0.1, 0.1, 0.1), gate: Dust.kr(2), levelScale: 1000))}.play

12.7. Start with this patch: {SinOsc.ar(200)}.play. Replace the 200 with another SinOsc offset and scaled so that it controls pitch, moving between 400 and 800 3 times per second. Then insert another SinOsc to control the freq of that sine so that it moves between 10 and 20 once every 4 seconds. Then another to control the frequency of that sine so that it moves between once every 4 seconds (a frequency of 1/4) to 1 every 30 seconds (a frequency of 1/30), once every minute (a frequency of 1/60). (p. 104)
Wow. This one was tough.
(
{
var lastRange;
lastRange = (1/4) - (1/30);
SinOsc.ar(
	SinOsc.ar(
		SinOsc.ar(
			SinOsc.ar(1/60, mul: lastRange/2, add: 1/30 + (lastRange/2)),
			mul: 5, add: 15),
		mul: 200, add: 600)
)
}.play
)

12.8. What is the duration of a single wave with a frequency of 500 Hz? (p. 104)
1/500 (0.002)

12.9. What is the frequency, in Hz (times per second), of two and a half minutes? (p. 104)
1/150 or 0.00667

13.1. What is the 6th harmonic of a pitch at 200 Hz? (p. 113)
1200

13.2. What is the phase of the sine wave in channel 4? SinOsc.ar([10, 15, 20, 25], [0, 1.25], 500, 600) (p. 113)
1.25 (the second array wraps to match the first: [0, 1.25, 0, 1.25])

13.3. What value is returned from this function? {a = 10; b = 15; c = 25; a = c + b; b = b + a; b} (p. 113)
Just run it: {a = 10; b = 15; c = 25; a = c + b; b = b + a; b}.value

13.4. Arrange these frequencies from most consonant to most dissonant: 180:160, 450:320, 600:400, 1200:600. (p. 113)
1200:600 = 2/1, 600:400 = 3/2, 180:160 = 9/8, 450:320 = 45/32

14.1. Using any of the additive patches above as a model, create a harmonic collection of only odd harmonics, with decreasing amplitudes for each partial. What is the resulting wave shape? (p. 134)
(
{
Mix.fill(20,
	{arg i;
	SinOsc.ar(500*(i*2+1).postln, mul: 1/(i+1))
})*0.5
}.scope
)
Pulse

14.2. Begin with the "additive saw with modulation" patch above. Replace all the LFNoise1 ugens with any periodic wave (e.g. SinOsc), offset and scaled to control amplitude, with synchronized frequencies (3/10, 5/10, 7/10). (p. 134)
(
{
var speed = 14;
f = 300;
t = Impulse.kr(1/3);
Mix.fill(12,
	{arg i;
	SinOsc.ar(f*(i+1),
		mul: SinOsc.ar((i+1)/1, mul: 0.5, add: 0.5)/(i+1))
})*0.5
}.scope(1)
)

14.3. Modify the "additive saw with independent envelopes" patch so that all the decays are 0, but the attacks are different (e.g. between 0.2 and 2 seconds). Use a random function if you'd like. (p. 134)
(
{
f = 300;
t = Impulse.kr(1/3);
Mix.fill(12,
	{arg i;
	SinOsc.ar(f*(i+1),
		mul: EnvGen.kr(Env.perc(rrand(0.2, 2.0), 0), t)/(i+1))
})*0.5
}.scope(1)
)

14.4. Use Array.fill to construct an array with 20 random values chosen from successive ranges of 100, such that the first value is between 0 and 100, the next 100 and 200, the next 200 and 300, etc. The result should be something like [67, 182, 267, 344, 463, 511, etc.]. (p. 134)
Array.fill(20, {arg i; rrand(i*100, i+1*100)}) // evaluation is left to right, so i+1*100 is (i+1)*100

14.5. In the patch below, replace 400 and 1/3 with arrays of frequencies, and 0.3 with an array of pan positions. You can either write them out, or use {rrand(?, ?)}.dup(?). (p. 134)
(
{Mix.ar(
	Pan2.ar(
		SinOsc.ar({rrand(100, 1000)}.dup(8), // freq
			mul: EnvGen.kr(Env.perc(0, 2),
				Dust.kr({rrand(5, 1/9)}.dup(8)) // trigger density
			)*0.1),
		{1.0.rand}.dup(8) // pan position
	)
)}.play
)

14.6. Devise a method of ordering numbers and/or letters that would appear random to someone else, but is not random to you. (p. 134)
3, 9, 12, 36, 39, 17, 20, 60, 63, 89, 92, 76, 79, 37, 40, 20, 23, 69, 72, 16, 19, 57, 60, 80, 83, 49, 52, 56, 59, 77, 80, 40, 43, 29, 32, 96, 99, 97, 0, 0, 20
i = 0; 20.do({i = (i + 3)%100; i.postln; i = (i*3)%100; i.postln;})


15.1. Write a patch using an RLPF with a 50 Hz Saw as its signal source. Insert an LFNoise0 with frequency of 12 to control the frequency cutoff of the filter with a range of 200, 2000. (p. 148)
{RLPF.ar(Saw.ar(50), LFNoise0.kr(12, 900, 1100), 0.05)*0.2}.play

15.2. Write a patch similar to "tuned chimes" using Klank with PinkNoise as its input source. Fill the frequency array with harmonic partials but randomly detuned by + or - 2%. In other words, rather than 100, 200, 300, etc., 105, 197, 311, 401, etc. (Multiply each one by rrand(0.98, 1.02)). (p. 148)
The order (i + 1)*rrand(0.8, 1.2)*fund is critical. Note that (i+1)*fund*rrand(0.9, 1.1) will give vastly differing results.
{
var fund = 200;
Mix.fill(
	6,
	{
	var harm;
	harm = rrand(5, 20);
	Pan2.ar(
		Klank.ar(
			`[Array.fill(harm, {arg i; (i + 1)*rrand(0.8, 1.2)*fund}),
				1,
				Array.fill(harm, {rrand(1.0, 3.0)})],
			PinkNoise.ar*Decay.ar(Dust.ar(1))
		)*0.01,
		1.0.rand2)
})
}.play

15.3. Without deleting any existing code, rewrite the patch below with "safe" values to replace the existing trigger with a 1 and the fund with 100. (p. 148)
({
var trigger, fund;
trigger = Dust.kr(3/7);
fund = rrand(100, 400);
trigger = 1;
fund = 400;
Mix.ar(
	Array.fill(16,
		{arg counter;
		var partial;
		partial = counter + 1;
		Pan2.ar(
			SinOsc.ar(fund*partial)
			* EnvGen.kr(Env.adsr(0, 0, 1.0, 5.0), trigger, 1/partial)
			* max(0, LFNoise1.kr(rrand(5.0, 12.0))),
			1.0.rand2)
	}))*0.5
}.play)

15.4. In the patch above monitor the frequency arguments only of all the SinOscs, LFNoise1s, and the pan positions by only adding postlns. (p. 148)
({
var trigger, fund;
trigger = Dust.kr(3/7);
fund = rrand(100, 400);
trigger = 1;
fund = 400;
Mix.ar(
	Array.fill(16,
		{arg counter;
		var partial;
		partial = counter + 1;
		Pan2.ar(
			SinOsc.ar((fund*partial).postln)
			* EnvGen.kr(Env.adsr(0, 0, 1.0, 5.0), trigger, 1/partial)
			* max(0, LFNoise1.kr(rrand(5.0, 12.0))),
			1.0.rand2)
	}))*0.5
}.play)

15.5. In the patch above comment out the EnvGen with all its arguments, but not the max(). Be careful not to get two multiplication signs in a row (* *), which means square. Check using syntax colorize under the format menu. (p. 148)
({
var trigger, fund;
trigger = Dust.kr(3/7);
fund = rrand(100, 400);
trigger = 1;
fund = 400;
Mix.ar(
	Array.fill(16,
		{arg counter;
		var partial;
		partial = counter + 1;
		Pan2.ar(
			SinOsc.ar((fund*partial).postln)
			* /* EnvGen.kr(Env.adsr(0, 0, 1.0, 5.0), trigger, 1/partial) * */
			max(0, LFNoise1.kr(rrand(5.0, 12.0))),
			1.0.rand2)
	}))*0.5
}.play)

15.6. Without deleting, replacing any existing code, or commenting out, replace the entire Mix portion with a "safe" SinOsc.ar(400). (p. 148)
({
var trigger, fund;
trigger = Dust.kr(3/7);
fund = rrand(100, 400);
trigger = 1;
fund = 400;
Mix.ar(
	Array.fill(16,
		{arg counter;
		var partial;
		partial = counter + 1;
		Pan2.ar(
			SinOsc.ar((fund*partial).postln)
			* EnvGen.kr(Env.adsr(0, 0, 1.0, 5.0), trigger, 1/partial)
			* max(0, LFNoise1.kr(rrand(5.0, 12.0))),
			1.0.rand2)
	}))*0.5;
SinOsc.ar(400)
}.play)

16.1. In a KS patch, what delay time would you use to produce these frequencies: 660, 271, 1000, 30 Hz? (p. 161)
Easy: 1/660, 1/271, 1/1000, 1/30

16.2. Rewrite three patches from previous chapters as synth definitions written to the hard disk with arguments replacing some key variables. Then write several Synth() lines to launch each instrument with different arguments. Don't forget the Out.ar(0, inst) at the end. For example:
{SinOsc.ar(400)}.play
Would be:
SynthDef("MySine", {arg freq = 200; Out.ar(0, SinOsc.ar(freq))}).play
Then:
Synth("MySine", [\freq, 600])
(p. 161)

SynthDef("Chime", {
	arg baseFreq = 100;
	var totalInst = 7, totalPartials = 12, out;
	out = Mix.ar(etc.);
	Out.ar(0, out)
}).load(s)

SynthDef("Cavern", {
	arg base = 100;
	var out, totalPartials = 12;
	out = Mix.ar(etc.);
	Out.ar(0, out);
}).play(s)

Etc.

Synth("Cavern", [\base, 500])
Synth("Chime", [\baseFreq, 300])

Etc.

17.1. What are the AM sidebands of a carrier at 440 Hz and a modulator of 130? (p. 175)
570 and 310

17.2. What are the PM sidebands of a carrier at 400 Hz and a modulator at 50? (p. 175)
Potentially infinite sidebands, but assuming 6: (-6..6)*50 + 400

17.3. Create a PM patch where the modulator frequency is controlled by an LFNoise1 and the modulation index is controlled by an LFSaw ugen. (p. 175)
{
PMOsc.ar(
	LFNoise1.kr(1/3, 800, 1000),
	LFNoise1.kr(1/3, 800, 1000),
	LFSaw.kr(0.2, mul: 2, add: 2)
)
}.play


17.4. Create a PM patch where the carrier frequency, modulator frequency, and modulation index are controlled by separate sequencers (with arrays of different lengths) but all using the same trigger. (p. 175)
{
var carSeq, modSeq, indexSeq, trigger;
trigger = Impulse.kr(8);
carSeq = Select.kr(
	Stepper.kr(trigger, max: 20),
	Array.fill(20, {rrand(200, 1000)}).postln);
modSeq = Select.kr(
	Stepper.kr(trigger, max: 10),
	Array.fill(10, {rrand(200, 1000)}).postln);
indexSeq = Select.kr(
	Stepper.kr(trigger, max: 15),
	Array.fill(15, {rrand(0.1, 4.0)}).round(0.1).postln);
PMOsc.ar(carSeq, modSeq, indexSeq)
}.play

17.5. Create a PM patch where the carrier frequency is controlled by a TRand, the modulator is controlled by an S&H, and the index a sequencer, all using the same trigger. (p. 175)
{
var triggerRate, carSeq, modSeq, indexSeq, trigger;
trigger = Impulse.kr([6, 9]);
carSeq = TRand.kr(200, 1200, trigger);
modSeq = Latch.kr(LFSaw.kr(Line.kr(0.1, 10, 70), 0, 500, 600), trigger);
indexSeq = Select.kr(
	Stepper.kr(trigger, max: 15),
	Array.fill(15, {rrand(0.2, 4.0)}));
PMOsc.ar(carSeq, modSeq, indexSeq)
}.play

18.1. Modify this patch so that the LFSaw is routed to a control bus and returned to the SinOsc using In.kr. {Out.ar(0, SinOsc.ar(LFSaw.kr(12, mul: 300, add: 600)))}.play (p. 195)
{Out.kr(0, LFSaw.kr(12, mul: 300, add: 600))}.play
{Out.ar(0, SinOsc.ar(In.kr(0)))}.play

18.2. Create an additional control (perhaps, SinOsc) and route it to another control bus. Add an argument for the input control bus on the original SinOsc. Change between the two controls using Synth or set. (p. 195)


{Out.kr(0, LFSaw.kr(12, mul: 300, add: 600))}.play
{Out.kr(1, SinOsc.kr(120, mul: 300, add: 1000))}.play
a = SynthDef("Sine", {arg bus = 0; Out.ar(0, SinOsc.ar(In.kr(bus)))}).play
a.set(\bus, 1);

18.3. Create a delay with outputs routed to bus 0 and 1. For the delay input use an In.ar with an argument for bus number. Create another stereo signal and assign it to 4/5. Set your sound input to either mic or line and connect a signal to the line. Use set to switch the In bus from 2 (your mic or line) to 4 (the patch you wrote). (p. 195)

a = SynthDef("Echo", {arg busIn = 4;
	Out.ar(0, In.ar(busIn, 1) + CombN.ar(In.ar(busIn, 1), 2, [0.3, 0.5], 4))}).play;
{Out.ar(4, SinOsc.ar(LFNoise1.kr(0.1, 500, 1000), mul: LFPulse.kr(3, width: 0.1)))}.play
a.set(\busIn, 2);

18.4. Assuming the first four bus indexes are being used by the computer's out and in hardware, and you run these lines:
a = Bus.ar(s, 2);
b = Bus.kr(s, 2);
c = Bus.ar(s, 1);
c.free;
d = Out.ar(Bus.ar(s), SinOsc.ar([300, 600]));
-Which bus or bus pair has a SinOsc at 600 Hz?
-What variable is assigned to audio bus 6?
-What variable is assigned to control bus 3?
-What variable is assigned to audio bus 4?
(p. 195)
I think bus 7 will have a sine at 600 Hz. There is no variable assigned to bus 6. There is no variable assigned to control bus 3. Variable a is assigned to audio bus 4 (a holds the pair 4/5).

18.5. Assuming busses 0 and 1 are connected to your audio hardware, in which of these examples will we hear the SinOsc? (p. 195)
({Out.ar(0, In.ar(5))}.play; {Out.ar(5, SinOsc.ar(500))}.play)
({Out.ar(5, SinOsc.ar(500))}.play; {Out.ar(0, In.ar(5))}.play)
({Out.ar(0, In.ar(5))}.play; {Out.kr(5, SinOsc.ar(500))}.play)
({Out.ar(5, In.ar(0))}.play; {Out.ar(0, SinOsc.ar(500))}.play)
({Out.ar(0, SinOsc.ar(500))}.play; {Out.ar(5, In.ar(0))}.play)
I think this is right:
({Out.ar(0, In.ar(5))}.play; {Out.ar(5, SinOsc.ar(500))}.play)  // Yes
({Out.ar(5, SinOsc.ar(500))}.play; {Out.ar(0, In.ar(5))}.play)  // No, destination before source
({Out.ar(0, In.ar(5))}.play; {Out.kr(5, SinOsc.ar(500))}.play)  // No, control bus
({Out.ar(5, In.ar(0))}.play; {Out.ar(0, SinOsc.ar(500))}.play)  // Yes, though bus 5 has audio too
({Out.ar(0, SinOsc.ar(500))}.play; {Out.ar(5, In.ar(0))}.play)  // Yes

19.1. Write a function with two arguments (including default values); low and high midi numbers. The function chooses a MIDI number within that range and returns the frequency of the number chosen. (p. 205)
~myfunc = {
	arg low = 0, high = 100;
	rrand(low, high).midicps;
};
~myfunc.value;
~myfunc.value(40, 80);

19.2. Write a function with one argument; root. The function picks between minor, major, or augmented chords and returns that chord built on the supplied root. Call the function using keywords. (p. 205)
~chord = {arg root = 0; ([[0, 3, 7], [0, 4, 7], [0, 4, 8]].choose + root)%12;};
~chord.value(root: 4);
~chord.value(root: 7);
~chord.value(root: 2);

20.1. Write an example using two do functions, nested so that it prints a multiplication table for values 1 through 5 (1*1 = 1; 1*2 = 2; . . . 5*4 = 20; 5*5 = 25). (p. 212)
5.do({arg i; i = i + 1;
	5.do({arg j; j = j + 1;
		[i, " times ", j, " equals ", (i*j)].do({arg e; e.post});
		"".postln;
	})
})

20.2. Write another nested do that will print each of these arrays on a separate line with colons between each number and dashes between each line: [[1, 4, 6, 9], [100, 345, 980, 722], [1.5, 1.67, 4.56, 4.87]]. It should look like this:
1 : 4 : 6 : 9 :
---------------------
100 : 345 : etc. (p. 212)
[[1, 4, 6, 9], [100, 345, 980, 722], [1.5, 1.67, 4.56, 4.87]].do({arg next;
	next.do({arg number;
		number.post;
		" : ".post;
	});
	"\n---------------------------\n".post;
});

21.1. True or false? (p. 220)
if(true.and(false.or(true.and(false))).or(false.or(false)), {true}, {false})
False

21.2. Write a do function that returns a frequency between 200 and 800. At each iteration multiply the previous frequency by one of these intervals: [3/2, 2/3, 4/3, 3/4]. If the pitch is too high or too low, reduce it or increase it by octaves. (p. 220)
var freq;
freq = rrand(200, 800);


100.do({freq = (freq * [3/2, 2/3, 4/3, 3/4].choose).round(0.1);
	if(freq > 800, {freq.postln; "too high".postln; freq = freq/2});
	if(freq < 200, {freq.postln; "too low".postln; freq = freq*2});
	freq.postln;})

21.3. Write a 100.do that picks random numbers. If the number is odd, print it, if even, print an error. (Look in the SimpleNumber help file for odd or even.) (p. 220)
100.do({var n; n = 1000.rand; if(n.odd, {n.postln}, {"Error: number is even.".postln})})

21.4. In the patch below, add 10 more SinOsc ugens with other frequencies, perhaps a diatonic scale. Add if statements delineating 12 vertical regions of the screen (columns) using MouseX, and triggers for envelopes in the ugens, such that motion across the computer screen will play a virtual keyboard. (p. 220)
There are certainly better methods to achieve this effect, but for the sake of the "if" assignment, here is how it's done:
(
{
var mx, mgate1, mgate2, mgate3, mgate4;
mx = MouseX.kr(0, 1);
mgate1 = if((mx>0) * (mx<(1/12)), 1, 0);
mgate2 = if((mx>(1/12)) * (mx<(2/12)), 1, 0);
mgate3 = if((mx>(2/12)) * (mx<(3/12)), 1, 0);
mgate4 = if((mx>(3/12)) * (mx<(4/12)), 1, 0);
...

Evaluating MIDIClient.destinations returns an array of available MIDI destinations, for example:

[ MIDIEndPoint("IAC Driver", "IAC Bus 1"),
  MIDIEndPoint("IAC Driver", "IAC Bus 2"),
  MIDIEndPoint("UltraLite mk3 Hybrid", "MIDI Port 1"),
  MIDIEndPoint("UltraLite mk3 Hybrid", "MIDI Port 2"),
  MIDIEndPoint("Oxygen 49", "Oxygen 49"),
  MIDIEndPoint("OSCulator In (8000)", "OSCulator In (8000)") ]

A connection to a MIDI destination can be established by creating a new instance of MIDIOut. The most reliable and most cross-platform-friendly way to specify a destination is to provide the device and port name via the newByName method:

m = MIDIOut.newByName("UltraLite mk3 Hybrid", "MIDI Port 2");

Alternatively, a MIDIOut object can be specified via new, along with the array index of the desired destination. However, the array order may be different if your setup changes, so this is not as reliable as newByName (see note 1 at the end of this chapter):

m = MIDIOut.new(3); // item in MIDIClient.destinations at index 3

After creating a new instance of MIDIOut, we can apply instance methods to generate and send messages to the target device. For instance, the following line will generate a note-on message on channel 0, corresponding to note number 72 with a velocity of 50 (keep in mind that most receiving devices envision MIDI channels as being numbered 1–16, and will interpret channel n from SC as equivalent to channel n + 1). If the receiving destination is a sound-producing piece of hardware or software, and is actively "listening" for MIDI data, the following line should play a sound:

m.noteOn(0, 72, 50);

At this point, from the perspective of the MIDI destination, the situation is no different from a user holding down key 72. This imaginary key can be "released" by sending an appropriate note-off message (note that the release velocity may be ignored by some receiving devices):

m.noteOff(0, 72, 50);

The process of sending a sequence of MIDI messages can be automated by constructing and playing a routine, pictured in Code Example 7.6.


CODE EXAMPLE 7.6: USING A ROUTINE TO AUTOMATE THE PRODUCTION AND TRANSMISSION OF MIDI MESSAGES TO AN EXTERNAL DEVICE.

( // assumes 'm' is an appropriate instance of MIDIOut
r = Routine({
	inf.do({
		var note = rrand(40, 90);
		m.noteOn(0, note, exprand(20, 60).asInteger);
		(1/20).wait;
		m.noteOff(0, note);
		(3/20).wait;
	});
}).play;
)

r.stop;

This routine-based approach works well enough, but if the routine is stopped between a note-on message and its corresponding note-off, the result is a "stuck" note. Obviously, [cmd]+[period] will have no effect because the SC audio server is not involved. One solution is to use iteration to send all 128 possible note-off messages to the receiving device:

(0..127).do({ |n| m.noteOff(0, n) });

This solution can be enhanced by encapsulating this expression in a function and registering it with the CmdPeriod class, so that the note-off action is performed whenever [cmd]+[period] is pressed. This action can be un-registered at any time by calling remove on CmdPeriod (see Code Example 7.7).

CODE EXAMPLE 7.7: AUTOMATING THE REMOVAL OF STUCK NOTES USING CMDPERIOD.

(
~allNotesOff = {
	"all notes off".postln;
	(0..127).do({ |n| m.noteOff(0, n) });
};
CmdPeriod.add(~allNotesOff);
)

CmdPeriod.remove(~allNotesOff); // un-register this action




Events provide a more elegant interface for sending MIDI messages to external devices. In Chapter 5, we introduced note- and rest-type Events. There is also a built-in midi type event, whose keys include \midiout and \midicmd, which specify the instance of MIDIOut to be used, and the type of message to be sent. We can view valid options for \midicmd by evaluating:

Event.partialEvents.midiEvent[\midiEventFunctions].keys;

The type of MIDI message being sent determines additional keys to be specified. For example, if we specify \noteOn as the value for the \midicmd key, the Event will also expect \chan, \midinote, \amp, \sustain, and \hasGate. A value between 0 and 1 should be provided for \amp, which is automatically multiplied by 127 before transmission. When \hasGate is true (the default), a corresponding note-off message is automatically sent after \sustain beats have elapsed. If \hasGate is false, no note-off message will be sent. Code Examples 7.8 and 7.9 demonstrate the use of Events to send MIDI data to an external destination.

CODE EXAMPLE 7.8: SENDING A NOTE-ON AND AUTOMATIC NOTE-OFF MESSAGE TO AN EXTERNAL DEVICE BY PLAYING AN EVENT.

(
(
	type: \midi,
	midiout: m,
	midicmd: \noteOn,
	chan: 0,
	midinote: 60,
	amp: 0.5,
	sustain: 2 // note-off sent 2 beats later
).play;
)

CODE EXAMPLE 7.9: USE OF PBIND TO CREATE A STREAM OF MIDI-TYPE EVENTS.

(
t = TempoClock.new(108/60);
p = Pbind(
	\type, \midi,
	\dur, 1/4,
	\midiout, m,
	\midicmd, \noteOn,
	\chan, 0,
	\midinote, Pseq([60, 72, 75], inf),
	\amp, 0.5,
	\sustain, 1/8,
);
~seq = p.play(t);
)

~seq.stop;

7.3  OSC

Open Sound Control (OSC) is a specification for communication between computers, synthesizers, and other multimedia devices, developed by Matt Wright and Adrian Freed at UC Berkeley CNMAT in the late 1990s, and first published in 2002 (see note 2 at the end of this chapter). Designed to meet the same general goals of MIDI, it allows devices to exchange information in real-time, but offers advantages in its customizability and flexibility. In contrast to MIDI, OSC supports a greater variety of data types, and includes high-resolution timestamps for temporal precision. OSC also offers an open-ended namespace using URL-style address tags instead of predetermined message types (such as note-on, control change, etc.), and is optimized for transmission over modern networking protocols. It's a common choice for projects involving device-to-device communication, such as laptop ensembles, multimedia collaborations, or using a smartphone as a multitouch controller.

Little needs to be known about the technical details of OSC to use it effectively in SC. To send a message from one device to another, the simplest option is for both devices to be on the same local area network, which helps avoid security and firewall-related obstacles. The sending device needs to know the IP address of the receiving device, as well as the network port on which that device is listening. The structure of an OSC message begins with a URL-style address, followed by one or more pieces of data. Code Example 7.10 shows an example of an OSC message, as it would be displayed in SC, which includes an address followed by three numerical values.

CODE EXAMPLE 7.10: AN EXAMPLE OF AN OSC MESSAGE IN SC.

['/sine/freqs', 220, 220.3, 221.05]

Since its creation, OSC has been incorporated into many creative audio/video software platforms. SC, in particular, is deeply intertwined with OSC. In addition to being able to exchange OSC with external devices, OSC is also how the language and server communicate with each other. Any code that produces a server-side reaction (adding a SynthDef, allocating a Buffer, creating a Synth, etc.) is internally translated into OSC messages.

7.3.1  THE TRIVIAL CASE: SELF-SENDING OSC IN THE LANGUAGE

To begin demonstrating the basics, it's instructive to have the SC language send an OSC message to itself. In SC, NetAddr is the class that represents a network device, defined by its IP address (a string), and the port (an integer) on which it listens. You can look up your computer's local IP address in your network settings, or you can use the self-referential IP address "127.0.0.1". By default, the language listens for OSC data on port 57120, or sometimes 57121. The incoming OSC port can be confirmed by evaluating:

NetAddr.langPort;

The following NetAddr represents the instance of the SC language running on your computer:

~myself = NetAddr.new("127.0.0.1", NetAddr.langPort);

OSCdef is a primary class for receiving OSC messages, generally similar to MIDIdef in behavior and design. At minimum, an OSCdef expects (1) a symbol, serving as a unique identifier for the OSCdef, (2) a function, evaluated in response to a received OSC message, and (3) the OSC address against which incoming messages are compared (non-matching addresses are ignored). Inside the function, four arguments can be declared: the OSC message, a timestamp, a NetAddr representing the sending device, and the port on which the message was received. Often, we only need the first of these four arguments. OSC addresses begin with a forward slash, and symbols in SC often begin with a backslash. SC can't parse this combination of characters, which is why symbols are typically expressed using single-quote enclosures in the context of OSC.

\/test; // invalid
'/test'; // valid

Code Example 7.11 shows the essentials of sending an OSC message to/from the SC language. After creating instances of NetAddr and OSCdef, we can send an OSC message to ourselves with sendMsg, including the OSC address tag, followed by any number of comma-separated pieces of data. The message is represented as an array when received by the OSCdef, so we can use an expression like msg[2] to access a specific piece of data within the message.


CODE EXAMPLE 7.11: THE BASIC STRUCTURE FOR SENDING AN OSC MESSAGE FROM THE LANGUAGE, TO BE RECEIVED BY THE LANGUAGE. THE OSCdef RESPONDS BY INDEXING INTO THE MESSAGE AND PRINTING THE RANDOM VALUE.

(
~myself = NetAddr("127.0.0.1", NetAddr.langPort);

OSCdef(\receiver, {
	|msg, time, addr, port|
	("random value is " ++ msg[2]).postln;
}, '/test'
);
)

~myself.sendMsg('/test', 5, exprand(100, 200), -9); // send a message

7.3.2  SENDING OSC FROM THE LANGUAGE TO THE SERVER

Though it's possible to explicitly send OSC from the language to the server, this is rarely necessary or advantageous, because these OSC messages are already encapsulated in higher-level classes that are more intuitive (Synth, SynthDef, Buffer, etc.). Nonetheless, simulating the "behind-the-scenes" flow of OSC data from language to server may provide clarity on fundamental concepts. With the server booted, consider the two expressions in Code Example 7.12. We can produce the same result by sending OSC messages explicitly.

CODE EXAMPLE 7.12: CREATING A SYNTH AND SETTING AN ARGUMENT VIA THE SYNTH CLASS, IMPLICITLY SENDING OSC MESSAGES TO THE SERVER.

x = Synth(\default, [freq: 300], s, \addToHead);

x.set(\gate, 0);

By default, the audio server listens for OSC messages on port 57110, and automatically has an instance of NetAddr associated with it, accessible via s.addr. The server is programmed to create a new Synth in response to messages with the '/s_new' address. Following the address, it expects the SynthDef name, a unique node ID (assigned automatically when using the Synth class), an integer representing an addAction (0 means \addToHead), the node ID of the target Group or Synth (the default Group has a node ID of 1), and any number of comma-separated pairs, representing argument names and values. Messages with the '/n_set' address tag are equivalent to sending a set message to a Node (either a Synth or Group). It requires the node ID, followed by one or more argument-value pairs. Code Example 7.13 performs the same actions as Code Example 7.12, but builds the OSC messages from scratch. Note that we don't need to use s.addr to access the NetAddr associated with the server; sendMsg can be directly applied to the instance of the localhost server.

CODE EXAMPLE 7.13: CREATING A SYNTH AND SETTING AN ARGUMENT BY EXPLICITLY SENDING OSC MESSAGES TO THE SERVER.

s.sendMsg('/s_new', "default", ~id = s.nextNodeID, 0, 1, "freq", 300);

s.sendMsg('/n_set', ~id, "gate", 0);

Documentation of the OSC messages the server can receive is detailed in a help file titled "Server Command Reference." Keep in mind it's almost always preferable to use Synth and other language-side classes that represent server-side objects, rather than manually writing and sending OSC messages. The primary goal of this section is simply to demonstrate the usefulness and uniformity of the OSC protocol.

7.3.3  SENDING OSC FROM THE SERVER TO THE LANGUAGE

Sending data from the server to the language involves a little more work but is considerably more useful. For example, we might want to know the current value of a UGen on the server, so that we can incorporate that value into a new Synth. The SendReply UGen is designed to do exactly this. It sends an OSC message to the language when it receives a trigger. To continuously capture values from a UGen, the trigger source can be a periodic impulse generator, whose frequency determines the "framerate" at which OSC messages are generated (a frequency between 10 and 30 Hz is sensible). The trigger signal, SendReply, and the UGen being captured should all be running at the same rate (audio or control).

In the UGen function at the top of Code Example 7.14, a low-frequency noise generator drives the center frequency of a band-pass filter, and SendReply transmits this value back to the language twenty times per second. Below, a language-side OSCdef listens for messages with the appropriate address. When this code is evaluated, a stream of data will appear in the post window. The raw message generated by SendReply is verbose; it contains the address, the node ID, a reply ID (-1 if unspecified), and finally, the signal value of the UGen. In most cases, only the UGen value is relevant, which we can access by returning the message item at index three. Once the Synth and OSCdef are created, we have a "live" language-side value that follows the noise generator, which can be freely incorporated into other processes. A third block of code plays a short sine tone whose frequency mirrors the real-time filter frequency.


CODE EXAMPLE 7.14: USING SendReply AND OSCdef TO SEND A SERVER-SIDE VALUE TO THE LANGUAGE, WHICH IS THEN INCORPORATED INTO A DIFFERENT SOUND.

(
{
	var sig, freq, trig;
	trig = Impulse.kr(20);
	freq = LFDNoise3.kr(0.2).exprange(200, 1600);
	SendReply.kr(trig, '/freq', freq);
	sig = PinkNoise.ar(1 ! 2);
	sig = BPF.ar(sig, freq, 0.02, 4);
}.play;
)

(
OSCdef(\getfreq, {
	arg msg;
	~freq = msg[3].postln;
}, '/freq');
)

(
{ // evaluate repeatedly
	var sig = SinOsc.ar(~freq * [0, 0.2].midiratio);
	sig = sig * Env.perc.kr(2) * 0.2;
}.play;
)

(
s.freeAll; // cleanup
OSCdef.freeAll;
)

7.3.4  EXCHANGING OSC WITH OTHER APPLICATIONS

Beyond SC, many software platforms are either natively OSC-compliant or augmentable with external OSC libraries. Other audio languages such as Max, Pd, Kyma, ChucK, and Csound have built-in OSC capabilities, and OSC libraries exist for general-purpose languages such as C++, Java, and Python. Some DAWs can also be manipulated with OSC. Mobile apps like TouchOSC and Lemur transform a smartphone or tablet into a customizable multitouch controller. Though the design specifics for these platforms vary, general OSC principles remain the same. In the TouchOSC Mk1 editor software, for example, the OSC address and numerical data associated with a graphical widget appear in the left-hand column when it is selected (see Figure 7.1). In this example, we have a knob that sends values between 0 and 1, which arrive with the '/1/rotary1' OSC address.

FIGURE 7.1  A screenshot of the TouchOSC Mk1 Editor software, displaying a graphical knob that transmits values that have the address “/1/rotary1.”

To send OSC to SC from TouchOSC on a mobile device, it should be on the same local network as your computer and needs the IP address and port of your computer, which it considers to be its "host" device. This information can be configured in the OSC settings page of the mobile TouchOSC app, pictured in Figure 7.2 (note that the self-referential IP address "127.0.0.1" cannot be used here, since we are dealing with two separate devices). The final step is to create an OSCdef to receive the data, shown in Code Example 7.15, after which data should appear in the post window when interacting with the TouchOSC interface. To send OSC data from SC to TouchOSC, we create an instance of NetAddr that represents the mobile device running TouchOSC, and send it a message, for example, one that randomly moves the knob. If OSC data doesn't appear, OSCFunc.trace(true) can be used as a debugging tool that prints all incoming OSC messages. Accidentally mistyping the OSC address or IP address is a common source of problems that can cause OSC transmission to fail.
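As a minimal usage sketch, tracing is typically switched on while debugging and off again afterward:

OSCFunc.trace(true);  // post all incoming OSC messages
OSCFunc.trace(false); // stop posting when finished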


FIGURE 7.2  OSC configuration settings in the TouchOSC Mk1 mobile app.

CODE EXAMPLE 7.15: AN OSCDEF THAT RECEIVES AND PRINTS DATA FROM THE GRAPHICAL KNOB PICTURED IN FIGURE 7.1, AND A PAIR OF EXPRESSIONS THAT SEND A MESSAGE FROM THE SC LANGUAGE TO TOUCHOSC.

(
OSCdef(\fromTouchOSC, { |msg|
	"data received: ".post;
	msg[1].postln;
}, '/1/rotary1');
)

~touchOSC = NetAddr("10.195.91.103", 9000);

~touchOSC.sendMsg('/1/rotary1', rrand(0.0, 1.0));

7.3.5  SENDING OSC TO ANOTHER COMPUTER RUNNING SC

Sending OSC from one instance of the SC language to another is similar to previous examples and involves the same general principles: both computers should be on the same local network, and the sending computer must know the IP address and port of the receiving computer. The receiver should create an appropriate OSCdef, and the sender should create an appropriate NetAddr and use sendMsg to transmit messages. The receiver can create as many OSCdefs as needed to accommodate a larger collection of addresses and data, or include conditional logic in an OSCdef in order to create different logical branches. Data "sanitization" may be appropriate in some cases; in Code Example 7.16, the receiving device uses conditional logic and clip to avoid bad values.

CODE EXAMPLE 7.16: A SIMPLE STRUCTURE FOR SENDING OSC MESSAGES FROM THE LANGUAGE TO ANOTHER INSTANCE OF THE SC LANGUAGE RUNNING ON A DIFFERENT COMPUTER.

(
// On the receiving computer:
OSCdef(\receiver, { |msg|
	var freq;
	if(msg[1].isNumber, {
		freq = msg[1].clip(20, 20000);
		{
			var sig = SinOsc.ar(freq) * 0.2 ! 2;
			sig = sig * XLine.kr(1, 0.0001, 0.25, doneAction: 2);
		}.play(fadeTime: 0);
	});
}, '/makeTone');
)

// On the sending computer:
~sc = NetAddr("192.168.1.15", 57120); // local IP/port of receiver

~sc.sendMsg('/makeTone', 500);

Though it's possible to send OSC data from the language to an audio server running on a separate computer, this may not necessarily be the simplest option, for reasons explained in Section 7.3.2. From a practical perspective, you may find that sending data from language to language is a more straightforward option (albeit less direct), in which the receiving computer creates Synths and other server-related actions within the function of an OSCdef. For the interested reader, information on sending OSC directly to a remote audio server can be found in a pair of guide files in the help documentation, titled "Server Guide" and "Multi-client Setups."

7.4  Other Options for External Control

MIDI and OSC are commonly used and cover a lot of ground, but these protocols are not the only options for external device communication. This section briefly details serial communication and the HID specification, which provide two additional options. Relative to MIDI/OSC, these options are encountered less often, and may not feel as user-friendly. However, they are essential when using certain types of control interfaces.


7.4.1  SERIAL COMMUNICATION

The SerialPort class provides an interface for communicating with certain types of devices connected to a USB port on your computer. Serial port communication can be useful for reading sensor data from microcontrollers and prototyping kits, such as Arduino and Raspberry Pi. In these types of projects, the workflow involves programming your microcontroller to output the desired data, and specifying a baudrate, that is, the rate at which data is transferred. A baudrate of 9,600 bits per second is common, but other options exist. The Arduino code in Figure 7.3 reads and digitizes the value from its 0th analog input and writes it to the serial port roughly once per millisecond, at a baudrate of 9,600 bits per second. The "a" character is used as a delimiter to mark the end of one number and the beginning of the next.

FIGURE 7.3  A simple Arduino program that writes analog input values to its serial output port.

A random segment of the data stream resulting from the Arduino code in Figure 7.3 will look something like this:

...729a791a791a792a793a792a...




Once your controller is connected to your computer's USB port, you can evaluate SerialPort.devices to print an array of strings that represent available serial port devices. Using this information, the next step is to create an instance of SerialPort that connects to the appropriate device, by providing the name and baudrate. Identifying the correct device name may involve some trial-and-error if multiple devices are available:

~port = SerialPort.new("/dev/tty.usbmodem14201", 9600);

The final step, shown in Code Example 7.17, is to build a looping routine that retrieves data from the serial port by repeatedly calling read, and storing the returned value in a global variable. The details may involve some experimentation, depending on how the transmitted data is formatted. For example, when an Arduino sends a number to the serial port, it writes the data as ASCII values that represent each character in sequence. So, if the Arduino sends a value of 729, it arrives as the integers 55, 50, and 57, which are the ASCII identifiers for the characters "7," "2," and "9." In each routine loop, we read the next value from the serial port and convert it to the character it represents, and then we perform a conditional branch based on whether it is a digit or the letter "a." If it's a digit, we add the character to an array. If it's the letter "a," we convert the character array to an integer and empty the array. While the routine plays, ~val is repeatedly updated with the latest value from the USB port, which can be used elsewhere in your code. Interestingly, we have a looping routine with no wait time, yet SC does not crash when it runs. This is because read pauses the routine thread while waiting for the Arduino to send its next value. Because the Arduino waits for one millisecond between sending successive values (see Figure 7.3), this wait time is effectively transferred into the routine.
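As a minimal sketch of this conversion (assuming nothing beyond standard sclang methods), the three ASCII values for "729" can be reassembled manually:

(
var bytes = [55, 50, 57];                     // raw values read from the port
var chars = bytes.collect({ |n| n.asAscii }); // -> [$7, $2, $9]
chars.collect({ |c| c.digit }).convertDigits.postln; // -> 729
)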

CODE EXAMPLE 7.17: SC CODE THAT READS DATA FROM THE USB PORT AND CONVERTS IT INTO USEABLE DATA. NOTE THAT THE DOLLAR SIGN DESIGNATES A CHARACTER (AN INSTANCE OF THE Char CLASS).

(
// assumes ~port is a valid instance of SerialPort
var ascii = 0, chars = [];
r = Routine({
	loop{
		ascii = ~port.read.asAscii;
		if(ascii.isDecDigit) {chars = chars.add(ascii)};
		if(ascii == $a) {
			~val = chars.collect({ |n| n.digit }).convertDigits;
			chars = [];
		};
	};
}).play;
)


7.4.2  HUMAN INTERFACE DEVICES

The Human Interface Device (HID) specification exists to standardize and simplify communication between computers and various types of peripheral devices. Common HIDs include game controllers, joysticks, and computer keyboards/mice, but numerous others exist. A guide file titled "Working with HID" details its implementation and related classes. Notably, at the time of writing this chapter, HID functionality is not yet fully implemented on Windows. Other cross-platform discrepancies and permission issues may also arise (on macOS, for instance, you may need to be logged in as a root-level user and/or adjust access permissions in your security and privacy settings). Despite being a bit unwieldy compared to MIDI/OSC, HID is an essential tool for interaction between SC and certain control devices. Communication with an HID begins by finding and posting information about available devices, which is done by evaluating the following two lines:

(
HID.findAvailable;
HID.postAvailable;
)

If, for example, you have an external mouse connected to your computer, information about that device should appear, and may look something like this:

9:  Usage name and page: Keyboard, GenericDesktop
    Vendor name:
    Product name: HID Gaming Mouse
    Vendor and product ID: 7119, 2232
    Path: USB_1bcf_08b8_14200000
    Serial Number:
    Releasenumber and interfaceNumber: 256, -1

A connection to an HID can be established using open, and specifying the device's Vendor ID and Product ID:

~device = HID.open(7119, 2232);

Keep in mind your device may cease its normal functions while this connection remains open! A connection can be closed with the close method, or by using the class method closeAll:

~device.close;
HID.closeAll;

An HID encompasses some number of HID elements, which represent individual aspects of the device, such as the state of a button, or the horizontal position of a joystick. Once a connection has been opened, the following statement will print a line-by-line list of all the device’s elements:



~device.elements.do({ |n| n.postln });

For instance, a gaming mouse with several different buttons may print something like this:

a HIDElement(0: type: 2, usage: 9, 1)
a HIDElement(1: type: 2, usage: 9, 2)
a HIDElement(2: type: 2, usage: 9, 3)
a HIDElement(3: type: 2, usage: 9, 4)
...etc...

Figuring out which elements correspond to which features may involve some trial-and-error. The process begins by creating an HIDdef, much like creating a MIDIdef/OSCdef. An HIDdef's action function expects a relatively large argument list, but the first two values (the normalized value and raw value of the element) are usually the most relevant. An integer, which follows the function as the third argument for HIDdef, determines the element to which the HIDdef is listening. This message-filtering behavior is essentially the same as getting the first MIDIdef in Code Example 7.2 to respond only to the mod wheel.

CODE EXAMPLE 7.18: AN HIDdef THAT PRINTS THE NORMALIZED AND RAW VALUES OF ELEMENT ZERO OF A CONNECTED DEVICE.

(
HIDdef.element(\getElem0, { |val, raw|
	[val, raw].postln;
}, elID: 0 // only respond to element 0
);
)

Once an HID element has been identified, the HIDdef function can be dynamically changed and re-evaluated to accommodate the needs of a particular project. Like MIDIdef and OSCdef, an HIDdef can be destroyed with free, or all HIDdef objects can be collectively destroyed with HIDdef.freeAll.
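For instance (a minimal usage sketch, reusing the \getElem0 key from Code Example 7.18 and assuming HIDdef supports the same key-based lookup as MIDIdef and OSCdef):

HIDdef(\getElem0).free; // destroy a single HIDdef by its key
HIDdef.freeAll;         // or destroy all HIDdefs at once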

Notes

1. For Linux users, additional steps are required to connect an instance of MIDIOut to a target destination, because of how MIDI is handled on this operating system. Detailed information is available in the MIDIOut help file, under a section titled "Linux specific: Connecting and disconnecting ports." This section of the help file recommends reliance on MIDIOut.newByName for maximizing cross-platform compatibility.

2. Matthew J. Wright and Adrian Freed, "Open SoundControl: A New Protocol for Communicating with Sound Synthesizers," in Proceedings of the 1997 International Computer Music Conference (San Francisco: International Computer Music Association, 1997).

CHAPTER 8

GRAPHICAL USER INTERFACES

8.1 Overview

A Graphical User Interface (GUI) refers to an arrangement of interactive objects, like knobs and sliders, that provides a system for controlling a computer program and/or displaying information about its state. Because SC is a dynamically interpreted language that involves real-time code evaluation, a GUI may not always be necessary, and may even be a hindrance in some cases. In other situations, building a GUI can be well worth the effort. If you are reverse-engineering your favorite hardware/software synthesizer, emulating an analog device, sending your work to a collaborator who has minimal programming experience, or if you simply want to conceal your scary-looking code, a well-designed GUI can help. Older versions of SC featured a messy GUI "redirect" system that relied on platform-specific GUI classes that were rife with cross-platform pitfalls. Since then, the SC development community has unified the GUI system, which now uses Qt software on all supported operating systems, resulting in more simplicity and uniformity.

8.2  Basic GUI Principles

8.2.1 WINDOWS

A GUI begins with a new Window, a class that provides a rectangular space on which other elements can be placed. A newly created window is invisible by default, but can be made visible by calling front on the instance:

Window.new().front;

On creation, a new window accepts several arguments, shown in Code Example 8.1. The first argument is a string that appears in the window’s title bar. The second, bounds, is an instance of Rect (short for rectangle) that determines the size and position of the window relative to your computer’s screen. A new Rect involves four integers: the first two determine the horizontal/vertical pixel distance from the bottom-left corner of your screen, and the second two integers determine pixel width/height. Two additional Booleans determine whether a window can be resized and/or moved. Setting these to false prevents the user from manipulating the window with the mouse, which is useful when the window is meant to remain in place. Experimenting with these arguments (especially the bounds) is a great way to understand how they work. A window can be destroyed with close, and multiple windows can be closed with Window.closeAll.



CODE EXAMPLE 8.1: CREATION OF A NEW WINDOW WITH CUSTOM ARGUMENTS.

(
w = Window(
    name: "Hello World!",
    bounds: Rect(500, 400, 300, 400),
    resizable: false,
    border: false
).front;
)

w.close;

The class method screenBounds returns a Rect corresponding to the size of your computer screen. By accessing certain attributes of this Rect, a window can be made to appear in a consistent location, irrespective of screen size. The instance method alwaysOnTop can be set to a Boolean that determines whether the window will remain above other windows, regardless of the window that currently has focus. Making this attribute true keeps the window visible while working in the IDE, which can be desirable during GUI development to avoid having to click back and forth between windows. Code Example 8.2 demonstrates the use of these two methods.

CODE EXAMPLE 8.2: CREATION OF A CENTERED WINDOW THAT ALWAYS REMAINS ON TOP OF OTHER WINDOWS.

(
w = Window(
    "A Centered Window",
    Rect(
        Window.screenBounds.width / 2 - 150,
        Window.screenBounds.height / 2 - 200,
        300,
        400
    )
)
.alwaysOnTop_(true)
.front;
)

8.2.2 VIEWS

View is the parent class of most recognizable/useful GUI classes, such as sliders, buttons, and knobs, and it's also the term used to describe GUI objects in general. Table 8.1 provides a descriptive list of commonly used subclasses. The View class defines core methods and behaviors, which are inherited by its subclasses, establishing a level of behavioral consistency across the GUI library. At minimum, a new view requires a parent view, that is, the view on which it will reside, and a Rect that determines its bounds, relative to its parent. Unlike windows, which are positioned from the bottom-left corner of your screen, a view's coordinates are measured from the top-left corner of its parent view. If two views are placed on a window such that their bounds intersect, the view created second will be rendered on top of the first, partially obscuring it (this may or may not be desirable, depending on context). Once a view is created, it can be permanently destroyed with remove. Code Example 8.3 demonstrates these techniques by placing a slider on a parent window. Note that the rectangular space on the body of a window is itself a type of view (an instance of TopView, accessible by calling the view method on the window). In the Qt GUI system, the distinction between a window and its TopView is minimal, and both are valid parents.

CODE EXAMPLE 8.3: PLACEMENT OF A SLIDER ON A WINDOW.

(
w = Window("A Simple Slider", Rect(500, 400, 300, 400))
    .alwaysOnTop_(true).front;
x = Slider(w, Rect(40, 40, 40, 320));
)

x.remove; // remove the slider

w.close; // close the window

TABLE 8.1 A list of commonly used View classes and their descriptions.

Class Name       Description
Button           multi-state button
Knob             rotary controller
ListView         display list of items
MultiSliderView  array of multiple sliders
NumberBox        modifiable field for numerical values
PopUpMenu        drop-down menu for selectable items
RangeSlider      slider with extendable handle on either end
Slider           linear controller
Slider2D         two-dimensional slider
StaticText       non-editable text display
TextField        simple editable text display
TextView         editable, formattable, multi-line text display

8.2.3 LAYOUT MANAGEMENT

Without tools for managing the placement and organization of views, building a GUI in SC quickly devolves into tedious pixel-hunting. Layout classes, discussed in the "Layout Management" guide file, help avoid this drudgery by automatically making smart choices about how child views should be placed on a parent. These tools are HLayout and VLayout, which organize views in horizontal and vertical lines, GridLayout, which organizes views in a two-dimensional arrangement of rows and columns, and StackLayout, which overlays multiple views in the same space while allowing the user to specify the topmost element. Code Example 8.4 demonstrates the use of these layout tools and highlights the fact that we can "nest" a layout class within another to create more complex arrangements. Notably, when using these layout classes to organize views, we don't have to specify the parent view (because the layout method is already attached to the parent), nor do we have to specify bounds, which are inferred based on the type of view and the available space on the parent. These layout classes enlarge a window if it's too small to accommodate its child views, prevent a window from becoming too small to render its views, and dynamically adjust views' bounds if the parent window is resized. There are some situations (featured in some Companion Code files for this chapter) where a high degree of pixel precision is needed, in which case we may decide not to use layout classes and instead manually specify bounds information. When pixel precision is not an issue, however, we can offload the pixel-hunting onto these layout tools, which saves quite a bit of time.

CODE EXAMPLE 8.4: USE OF LAYOUT MANAGEMENT CLASSES TO AUTOMATE THE PLACEMENT OF VIEWS ON A WINDOW.

(
Window("Layout Management", Rect(100, 100, 250, 500)).front
.layout_(
    VLayout(
        HLayout(Knob(), Knob(), Knob(), Knob()),
        HLayout(Slider(), Slider(), Slider(), Slider()),
        Slider2D(),
        Button()
    )
);
)

8.2.4 GETTING AND SETTING GUI ATTRIBUTES

Getting and setting attributes, introduced in Chapter 1, is one of the primary ways we design and interact with GUIs during development. Views are defined by their attributes, accessible via method calls, which determine their appearances and behaviors. Some attributes are common to all views, while others are specific to individual classes. For instance, every view has a visible attribute, which determines whether it will be displayed, and an enabled attribute, which determines whether the user can interact with the view. Most views also have a background attribute, which determines a background color. As a reminder: to "get" an attribute, we call the method, and the attribute's value is returned. To "set" an attribute to a new value, we can assign the value using an equals symbol or the underscore syntax, both demonstrated in Code Example 8.5. Recall that the underscore setter is advantageous because it returns the receiver and lets us chain setter commands back-to-back in a single expression. Note that the underscore syntax and setter-chaining have already appeared in Code Example 8.2.

CODE EXAMPLE 8.5: GETTING AND SETTING ATTRIBUTES OF A VIEW.

(
~slider = Slider();
w = Window("A Slider", Rect(500, 400, 100, 400)).front
.alwaysOnTop_(true)
.layout_(HLayout(~slider));
)

~slider.visible; // get attribute (returns "true")

~slider.visible = false; // set attribute (make invisible)

// set multiple attributes (visible, non-interactable, and yellow)
~slider.visible_(true).enabled_(false).background_(Color(1, 1, 0));

As a side note, color is expressed using the Color class, which encapsulates four floats between 0 and 1. The first three are red, green, and blue amounts, and the fourth is a transparency value called alpha. The alpha value defaults to 1, which represents full opacity, while 0 represents full transparency.
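For example (a small illustration; the specific color values are arbitrary):

Color(1, 0, 0); // opaque red (alpha defaults to 1)

Color(1, 0, 0, 0.5); // the same red at half opacity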

TIP.RAND(); EXPRESSING COLOR

The Color class features a flexible variety of creation methods, detailed in its help file. In addition to a handful of convenience methods for common colors (e.g., Color.red, Color.cyan), color can also be specified as follows:

Color.new255(250, 160, 20); // RGB integers between 0–255

Color.fromHexString("BF72C4"); // hexadecimal string

Color.hsv(0.1, 0.6, 0.9); // hue, saturation & value


8.2.5 VALUES AND ACTIONS

GUIs are not just meant to look nice; they're designed to perform specific actions in response to input from the user. For example, Code Example 8.6 demonstrates a simple approach to controlling the amplitude of a signal using a slider. Here, the methods value, action, and valueAction, which are directly linked to a view's state and behavior, come into play. The value attribute stores the state of a view. In the case of a slider or knob, its value is a float between 0 and 1. The value of a button is an integer corresponding to the index of its current state (for instance, a toggle button has two states, with indices 0 and 1). A view's action attribute references a function that is evaluated in response to user interaction. An argument declared inside an action function represents the view instance itself, thus enabling access to other view attributes inside the function (this is essential, for instance, when we want a toggle button to perform one of two actions based on its value). When the user interacts with a slider by clicking and dragging the mouse, its value attribute is updated, and the action is performed for each value change. We can also simulate user interaction by calling the valueAction setter, which updates the value and performs the action. By contrast, if we use the value method as a setter, the view's value is updated, but the action is not performed. This approach is useful for "silently" updating a view's state.

CODE EXAMPLE 8.6: USING A SLIDER AND BUTTON TO CONTROL THE AMPLITUDE OF A SOUND.

s.boot;

(
~amp = 0.3;

~synth = { |amp, on = 0|
    var sig = LFTri.ar([200, 201], mul: 0.1);
    sig = sig * amp.lag(0.1) * on;
}.play(args: [amp: ~amp]);

~slider = Slider()
.value_(~amp)
.action_({ |v|
    ~amp = v.value;
    ~synth.set(\amp, ~amp);
});

~button = Button()
.states_([
    ["OFF", Color.gray(0.2), Color.gray(0.8)],
    ["ON", Color.gray(0.8), Color.green(0.7)]
])
.action_({ |btn| ~synth.set(\on, btn.value) });

Window("Amplitude Control", Rect(500, 400, 100, 400))
.layout_(VLayout(~slider, ~button))
.onClose_({ ~synth.release(0.1) })
.alwaysOnTop_(true)
.front;
)

~slider.valueAction_(rrand(0.0, 1.0)); // simulate random user interaction

The code in Code Example 8.6 has a few noteworthy features. Our button has two states, defined by setting the states attribute equal to an array containing one internal array for each state. Each internal array contains three items: a string to be displayed, the string color, and the background color. The lag method is applied to the amplitude argument, which wraps the argument in a Lag UGen, whose general purpose is to smooth out discontinuous changes to a signal over a time interval (in this case, a tenth of a second). Without lagging, any large, instantaneous, or fast changes to the slider may result in audible pops or "zipper" noise (if you remove the lag, re-evaluate, and rapidly move the slider with the mouse, you'll hear a distinct "roughness" in the sound). Finally, the onClose attribute stores a function to be evaluated when the window closes, useful for ensuring sound does not continue after the window disappears.

Although all views understand value, not all views respond to this method in a meaningful or useful way. Some classes rely on one or more alternative method calls. For instance, text-oriented objects such as StaticText and TextView consider their text to be their "value," which they return in response to string. Likewise, Slider2D returns its values as two independent coordinates through the methods x and y. Companion Code 8.1 combines several of the techniques presented thus far and expands upon Table 8.1 by providing an interactive tour of several different types of views.
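The following sketch illustrates these alternative accessors (the window and layout here are arbitrary, chosen only for demonstration):

(
~text = TextField().string_("hello");
~pad = Slider2D();
w = Window("Alternative Values", Rect(500, 400, 220, 260)).front
.layout_(VLayout(~text, ~pad));
)

~text.string; // returns "hello"

~pad.x; // horizontal position, a float between 0 and 1

~pad.y; // vertical position, a float between 0 and 1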

8.2.6 RANGE-MAPPING

A numerical range between 0 and 1 is acceptable for signal amplitude but unsuitable for frequency, MIDI data, decibels, and many other parameters. Even when the default range is suitable, the inherent linear behavior of sliders and knobs may not be. Suppose we want our slider to control frequency instead of amplitude. To produce a sensible frequency range, one option is to apply a range-mapping method such as linexp before the value is passed to a signal algorithm. Range-mapping can alternatively be handled with ControlSpec, a class designed to map values back and forth between 0 and 1, and another custom range, using the methods map and unmap. A ControlSpec requires a minimum value, a maximum value, and a warp value, which collectively determine its range and curvature. The warp value may be a symbol (e.g., \lin, \exp), or an integer (similar to Env curve values). Code Example 8.7 demonstrates the use of ControlSpec.
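Before turning to that example, here is a one-line sketch of the linexp option mentioned above (the range values are illustrative):

0.25.linexp(0, 1, 100, 2000); // -> roughly 211.5, usable as a frequency in Hz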


CODE EXAMPLE 8.7: USE OF CONTROLSPEC TO MAP SLIDER VALUES TO AN APPROPRIATE FREQUENCY RANGE.

(
~freqspec = ControlSpec(100, 2000, \exp);
~freq = ~freqspec.map(0.2);

~synth = { |freq, on = 0|
    var sig = LFTri.ar(freq.lag(0.1) + [0, 1], mul: 0.05);
    sig = sig * on;
}.play(args: [freq: ~freq]);

~slider = Slider()
.value_(0.2)
.action_({ |v|
    ~freq = ~freqspec.map(v.value);
    ~synth.set(\freq, ~freq);
});

~button = Button()
.states_([
    ["OFF", Color.gray(0.2), Color.gray(0.8)],
    ["ON", Color.gray(0.8), Color.green(0.7)]
])
.action_({ |btn| ~synth.set(\on, btn.value) });

Window("Frequency Control", Rect(500, 400, 100, 400))
.layout_(VLayout(~slider, ~button))
.onClose_({ ~synth.release(0.1) })
.front;
)

Several pre-built ControlSpecs are available and can be viewed with:

ControlSpec.specs.keys;

A ControlSpec can also be created by calling asSpec on the relevant symbol:

~freqspec = \freq.asSpec; // -> a ControlSpec suitable for frequency




As an exercise to help you understand these fundamental techniques more deeply, consider combining the amplitude and frequency GUIs in Code Examples 8.6 and 8.7 into a single GUI with a pair of sliders and an on/off button.

8.3 Intermediate GUI Techniques

8.3.1 KEYBOARD AND MOUSE INTERACTION

On a basic level, sliders, knobs, and other "moveable" views are pre-programmed to respond to input from your mouse and keyboard. Clicking and dragging has a predictable response, as does pressing the arrow keys on your keyboard. A few other built-in keyboard actions exist as well: pressing [r] will randomize a slider or knob, [c] will center it, and [n] and [x] will set the view to its minimum/maximum. Pressing [tab] cycles focus through focusable views, and pressing [spacebar] when a button is in focus has the same effect as clicking it. Still, manipulating GUIs with only these elementary modes can be a limiting experience. Notably, the mouse can only interact with one view at a time. Catching and processing a greater variety of keyboard and mouse input can enhance the user experience. Table 8.2 lists some common methods that register actions to be performed in response to keyboard/mouse input. These actions exist alongside a view's normal action function and are defined by setting the attribute to a function. In contrast to a normal action, mouse/keyboard functions accept a longer list of arguments. Keyboard functions are passed arguments that represent:

1. the view instance;
2. the character that was pressed;
3. information on which modifier keys were held;
4. a unicode integer;
5. a hardware-dependent keycode; and
6. a key integer defined by the Qt framework.

The sixth argument is described in the SC help documents as being the "most reliable way to check which key was pressed." Mouse functions accept arguments that represent some combination of the following:

1. the view instance;
2. the horizontal pixel position of the mouse, relative to the view;
3. the vertical pixel position of the mouse, relative to the view;
4. information on which modifier keys were held;
5. an integer corresponding to the mouse button that was pressed; and
6. a click count within your system's double-click timing window.

One of the best ways to understand these functions and their arguments is to create an empty view that posts argument values, as demonstrated in Code Example 8.8.

TABLE 8.2 A descriptive list of methods for defining keyboard/mouse actions.

Method             Description
.mouseDownAction   evaluated when a mouse button is clicked on the view
.mouseUpAction     evaluated when a mouse button is released on the view
.mouseMoveAction   evaluated when the mouse moves after clicking/holding on the view
.mouseOverAction   evaluated when the mouse moves over the view, regardless of clicks (requires acceptsMouseOver to be true)
.mouseEnterAction  evaluated when the mouse moves into the view's bounds
.mouseLeaveAction  evaluated when the mouse moves out of the view's bounds
.keyDownAction     evaluated when the view is in focus and a key is pressed
.keyUpAction       evaluated when the view is in focus and a key is released

CODE EXAMPLE 8.8: A VIEW THAT POSTS INFORMATION ABOUT KEY PRESSES AND MOUSE CLICKS.

(
w = Window("Keyboard and Mouse Data").front
.layout_(VLayout(
    StaticText()
    .align_(\center)
    .string_("press keys/click the mouse")
));

w.view.keyDownAction_({ |view, char, mod, uni, keycode, key|
    postln("character: " ++ char);
    postln("modifiers: " ++ mod);
    postln("unicode: " ++ uni);
    postln("keycode: " ++ keycode);
    postln("key: " ++ key);
    "".postln;
});

w.view.mouseDownAction_({ |view, x, y, mod, button, count|
    postln("x-position: " ++ x);
    postln("y-position: " ++ y);
    postln("modifiers: " ++ mod);
    postln("button ID: " ++ button);
    postln("click count: " ++ count);
    "".postln;
});
)




As a practical example of incorporating keystrokes into a GUI, Companion Code 8.2 builds a virtual piano keyboard that can be played using the computer keyboard.

8.3.2 CONTROLLING GUI WITH MIDI/OSC

External devices that send MIDI or OSC data can be used to control a GUI. For the most part, the process is simple: the action function of a MIDI/OSC receiver should call the appropriate GUI commands. However, MIDIdef/OSCdef functions cannot directly interact with windows and views, because MIDIdef and OSCdef functions are considered to exist outside of the "main application context." This limitation can be overcome by enclosing problematic code in curly braces and applying the defer method, which passes the task of evaluation to the AppClock, a low-priority scheduler capable of interacting with GUIs. Code Example 8.9 demonstrates this technique by displaying incoming MIDI note numbers in a NumberBox. If the defer enclosure is removed, SC will produce an error when a note-on message arrives.

CODE EXAMPLE 8.9: DEFERRING A FUNCTION TO CONTROL A GUI WITH INCOMING MIDI MESSAGES.

(
MIDIIn.connectAll;

w = Window("MIDI Control").front
.layout_(VLayout(
    StaticText().align_(\center)
    .string_("press a key on your MIDI controller"),
    ~numbox = NumberBox().align_(\center)
    .enabled_(false)
    .font_(Font("Arial", 40))
));

MIDIdef.noteOn(\recv, { |vel, num| { ~numbox.value_(num) }.defer });
)

Companion Code 8.3 reinforces these techniques by creating an interface that can dynamically “learn” MIDI inputs and assign their data to control specific GUI objects.

8.3.3 GUI AND CLOCKS

Some GUIs involve timing elements, such as a clock that shows elapsed time, or a button with a "cooldown" period that prevents multiple presses within a time window. Generally, these situations are addressable by playing a routine from within a view's action function. However, code scheduled on TempoClock or SystemClock is (like MIDIdef/OSCdef) considered to exist outside the main application context and must be deferred to the AppClock. Code Example 8.10 creates a button with a cooldown to demonstrate timing considerations. When clicked, the button disables itself and plays a routine that decrements a counter. Once the counter has reached a threshold, the button is re-enabled. As an alternative to wrapping GUI code in a deferred function, we can explicitly play the routine on the AppClock. Also, note that the val counter serves two purposes: it is part of the conditional logic that determines when to exit the while loop and simultaneously influences the button's background color during the cooldown period.

CODE EXAMPLE 8.10: USING APPCLOCK TO SCHEDULE A "COOLDOWN" PERIOD FOR A BUTTON.

(
~button = Button()
.states_([["Click Me", Color.white, Color(0.5, 0.5, 1)]])
.action_({ |btn|
    btn.enabled_(false);
    Routine({
        var val = 1;
        while({ val > 0.5 }, {
            btn.states_([[
                "Cooling down...",
                Color.white,
                Color(val, val, 1)
            ]]);
            val = val - 0.01;
            0.05.wait;
        });
        btn.enabled_(true);
        btn.states_([[
            "Click Me",
            Color.white,
            Color(val, val, 1)
        ]]);
    }).play(AppClock);
});

w = Window("Cooldown Button", Rect(100, 100, 300, 75)).front
.layout_(VLayout(~button));
)

Companion Code 8.4 reinforces these timing techniques by creating a simple stopwatch.




8.4 Custom Graphics

SC supports the creation of simple custom graphics, both static and animated. Although a project's needs are often met by the core collection of built-in GUI objects, it may be desirable or practical to add custom graphical features to a GUI in order to add emphasis, mimic the design of another interface, or create a more inviting appearance.

8.4.1 LINES, CURVES, SHAPES

UserView is a class that provides a blank rectangular canvas on which shapes can be drawn using the Pen class. Instructions for what to draw on a UserView are contained in a function, stored in the UserView's drawFunc attribute. Inside this function is the only place the Pen class can be used. Pen is an unusual class in that we don't create new instances of it. Instead, the class itself serves as a singular, imagined drawing implement, manipulated through method calls. Its methods manage things like setting colors, changing stroke width, and performing various drawing tasks. When using Pen, there is a distinction between constructing a shape and rendering a shape. Construction specifies shape existence but does not actualize it. Rendering, executed by calling fill, stroke, fillStroke, or a related method, is what produces visible results. Code Example 8.11 demonstrates the use of UserView and Pen by creating some basic shapes. Note that because most Pen methods return the Pen class itself, methods can often be chained together into compound expressions. Methods that alter an aspect of Pen persist through the drawFunc until they are changed. For example, because strokeColor remains unchanged after being set for the first shape, the same color is used when fillStroke renders the third shape. Code Example 8.11 also indirectly introduces the Point class, frequently used in GUI contexts to represent a point on the Cartesian plane. A point can be created via Point.new(x, y), or written more concisely using the syntax shortcut x @ y. For example, the addArc method constructs all or part of a circular arc and needs to know the x/y coordinates of the center pixel, which are specified as a Point. The help file for the Pen class documents a large collection of useful and interesting methods, accompanied by numerous code examples. Table 8.3 provides a partial list of Pen methods.

CODE EXAMPLE 8.11: CONSTRUCTING AND RENDERING BASIC SHAPES USING PEN, USERVIEW, AND DRAWFUNC.

(
u = UserView().background_(Color.gray(0.2))
.drawFunc_({
    Pen.width_(2) // set Pen characteristics
    .strokeColor_(Color(0.5, 0.9, 1))
    .addArc(100 @ 100, 50, 0, 2pi) // construct a circle
    .stroke; // render the circle (draw border, do not fill)

    Pen.fillColor_(Color(0.9, 1, 0.5)) // set Pen characteristics
    .addRect(Rect(230, 90, 120, 70)) // construct a rectangle
    .fill; // render the rectangle (fill, do not draw border)

    Pen.width_(6)
    .fillColor_(Color(0.2, 0.8, 0.2))
    .moveTo(90 @ 250) // construct a triangle, line-by-line
    .lineTo(210 @ 320).lineTo(90 @ 320).lineTo(90 @ 250)
    .fillStroke; // render the triangle (fill and draw border)

    Pen.width_(2)
    .strokeColor_(Color(1, 0.5, 0.2));
    8.do({
        Pen.line(280 @ 230, 300 @ 375);
        Pen.stroke;
        // 'translate' modifies what Pen perceives as its origin by a
        // horizontal/vertical shift. You can imagine translation as
        // shifting the paper underneath a pen.
        Pen.translate(10, -5);
    });
});

w = Window("Pen", Rect(100, 100, 450, 450))
.front.layout_(HLayout(u));
)

TABLE 8.3 A selection of Pen methods and their descriptions.

Pen Method               Description
.width_(n)               Set the line/curve thickness to n pixels.
.strokeColor_(a Color)   Set the color of drawn lines/curves.
.fillColor_(a Color)     Set the color used to fill closed paths.
.moveTo(x @ y)           Move the pen to the point (x, y).
.lineTo(x @ y)           Construct a line from the current pen position to the point (x, y). After this, the pen position is (x, y).
.line(x @ y, z @ w)      Construct a line between two specified points. After this, the pen position is (z, w).
.addArc(x @ y, r, p, q)  Construct a circular arc around the point (x, y) with radius r (pixels), starting at angle p and rotating by q (radians).
.addRect(a Rect)         Construct the specified rectangle.
.stroke                  Render all lines, curves, arcs, etc. previously constructed.
.fill                    Render the insides of paths. An unclosed path will be filled as if its endpoints were connected by a straight line.
.fillStroke              Combination of stroke and fill.




Companion Code 8.5 explores these techniques by creating a custom transport control interface (i.e., a bank of buttons with icons representing play, pause, stop, etc.).

8.4.2 ANIMATION

A UserView's drawFunc executes when the window is first made visible and can be re-executed by calling refresh on the UserView. If a drawFunc's elements are static (as they are in Code Example 8.11), refreshing has no discernible effect. But, if the drawFunc includes dynamic elements (such as randomness or some "live" element), refreshing will alter its appearance. A cleverly designed drawFunc can produce animated effects. There is no need to construct and play a routine that repeatedly refreshes a UserView. Instead, this process can be handled internally by making a UserView's animate attribute true. When animation is enabled, a UserView automatically refreshes itself at a frequency determined by its frameRate, which defaults to 60 frames per second. This value cannot be made arbitrarily high and will be constrained by your computer screen's refresh rate. In many cases, a frame rate between 20 and 30 provides an acceptable sense of motion.

By default, when a UserView is refreshed, it will clear itself before redrawing. This behavior can be changed by setting clearOnRefresh to false. When false, the UserView will draw each frame on top of the previous frame, producing an accumulation effect. By drawing a semi-transparent rectangle over the entire UserView at the start of the drawFunc, we can produce a "visual delay" effect (see Code Example 8.12) in which moving elements appear to leave a trail behind them.

CODE EXAMPLE 8.12: AN ANIMATED "TUNNEL" EFFECT.

(
var win, uv, inc = 0;
win = Window("Tunnel Vision", Rect(100, 100, 400, 400)).front;
uv = UserView(win, win.view.bounds)
.background_(Color.black)
.drawFunc_({ |v|
    // draw transparency layer
    Pen.fillColor_(Color.gray(0, 0.05))
    .addRect(v.bounds)
    .fill;

    // green color gets brighter as arcs get "closer"
    Pen.width_(10)
    .strokeColor_(Color.green(inc.linlin(0, 320, 0.2, 1)))
    // draw random arc segments with increasing radii
    .addArc(
        200 @ 200,
        inc.lincurve(0, 320, 0, 320, 6),
        rrand(0, 2pi),
        rrand(pi/2, 2pi)
    ).stroke;

    // counter increases by 5 each frame
    // and resets to zero when it reaches 320
    inc = (inc + 5) % 320;
})
.clearOnRefresh_(false)
.frameRate_(30)
.animate_(true);
)

Bear in mind that SC is not optimized for visuals in the same way it is optimized for audio. A looping process that calls screen updates may strain your computer’s CPU, significantly so if it performs intensive calculations and/or has a relatively high frame rate. It’s usually best to have a somewhat conservative attitude toward animated visuals in SC, rather than an enthusiastically decorative one! To conclude this chapter, Companion Code 8.6 creates an animated spectrum visualizer that responds to sound.

PART III

LARGE-SCALE PROJECTS

On an individual basis, the topics presented throughout Parts I and II should be relatively accessible and learnable for a newer SC user. With a little dedication and practice, you will soon start to feel increasingly comfortable creating and adding SynthDefs, working with buffers, playing Synths and Pbinds, and so on. Putting all of these elements together into a coherent and functional large-scale structure, however, often poses unique and unexpected challenges to the user, sometimes manifesting confounding problems with elusive solutions. The beauty (and perhaps, the curse) of SC is that it is a sandbox-style environment, with few limitations placed on the user, and no obvious guideposts that direct the user toward a particular workflow. These final chapters seek to address these challenges by introducing tips and strategies for building an organized performance structure from individual sounds and ideas, and examining specific forms these projects might take.

CHAPTER 9

CONSIDERATIONS FOR LARGE-SCALE PROJECTS

9.1 Overview

SC naturally invites a line-by-line or chunk-by-chunk approach to working with sound. If you're experimenting, sketching out new ideas, or just learning the basics, this type of interaction is advantageous, as it allows us to express ideas and hear the results with ease. Nearly all the previous examples in this book are split into separate chunks; for example, one line of code boots the server, a second block reads audio files into buffers, another block adds SynthDefs, and so on. However, as your ideas grow and mature, you may find yourself seeking to combine them into a more robust and unified structure that can be seamlessly rehearsed, modified, performed, and debugged. A scattered collection of code snippets can certainly get the job done, and in fact, there is a pair of keyboard shortcuts ([cmd]+[left square bracket] and [cmd]+[right square bracket]), which navigate up and down through parenthetically-enclosed code blocks. However, the chunk-by-chunk approach can also be unwieldy, time-consuming, and prone to errors. It's arguably preferable to write a program that can be activated with a single keystroke, and which includes intuitive mechanisms for adjustment, navigation, and resetting. It should be noted that not all project types will rely on SC as a real-time performance vehicle. Some projects may only need its signal-generating and signal-processing capabilities. A common example is the use of SC as a "rendering farm" for sonic material (making use of real-time recording features discussed in Section 2.9.6), while using multitrack software for assembly, mixing, and fine-tuning. This is a perfectly sensible way to use SC, particularly for fixed-media compositions, but it forgoes SC's rich library of algorithmic tools for interactivity, indeterminacy, and other dynamic mechanisms. All things considered, when tackling a big project in SC, it's a good idea to have a plan. This is not always possible (and experimentation is part of the fun), but even a partially formed plan can supply crucial guidance on the path forward. For example, how will the musical material progress through time? Will the music advance autonomously and deterministically, along a predetermined timeline? Or will a hardware interface (the spacebar, a MIDI controller) be used to advance from one moment to the next? Or is the order of musical actions indeterminate, with some device for dynamic interaction? Will a GUI be helpful to display information during performance? Plunging ahead into the code-void without answers to these types of questions is possible but risks the need for major changes and significant backtracking later.



In recognition of the many divergent paths a large-scale SC project might take, this chapter focuses on issues that are widely applicable to large-scale performance structures, regardless of finer details. Specifically, this chapter focuses on the importance of order of execution, which in this context means the sequence in which setup- and performance-related actions must be taken. This concept is relevant in all programming languages; when a program is compiled, an interpreter parses the code in the order it is written. Variables must be declared before they can be used, memory must be allocated before data can be stored there, and functions/routines must be defined before they can be executed. In SC, which exists as a client-server duo communicating via OSC, the order in which setup actions take place is perhaps even more important, and examples of pitfalls are plentiful: a Buffer must be allocated before samples can be recorded into it, a SynthDef must be fully added before a corresponding Synth can be spawned, and of course, the audio server must be booted before any of this can happen.

9.2 waitForBoot

If one of our chief goals is to be able to run an arbitrarily complex sound-generating program with a single keystroke, then a good first step is to circumvent the inherently two-step process of (1) booting the server and (2) creating sound after booting is complete. Sound-generating code can only be called after the booting process has completely finished, and a direct attempt to bundle these two actions into a single chunk of code will fail (see Code Example 9.1), a bit like pressing the accelerator pedal in a car at the instant the ignition begins to turn, but before the engine is actually running.

CODE EXAMPLE 9.1: A FAILED ATTEMPT TO BOOT THE SERVER AND PLAY A SOUND IN ONE CODE EVALUATION.

s.quit; // quit first to properly demonstrate

(
s.boot;
{ PinkNoise.ar(0.2 ! 2) * XLine.kr(1, 0.001, 2, doneAction: 2) }.play;
)

The essence of the problem is that the server cannot receive commands until booting is complete, which requires a variable amount of time, usually at least a second or two. The language, generally ignorant of the server’s status, evaluates these two expressions with virtually no time between them. The play method, seeing that the server is not booted, posts a warning, but there is no inherent mechanism for delaying the second expression until the time is right. Instead, the pink noise function is simply not received by the server. The waitForBoot method, demonstrated in Code Example 9.2, provides a solution. The method is applied to the server and given a function containing arbitrary code. This method will boot the server and evaluate its function when booting is complete.




CODE EXAMPLE 9.2: USING waitForBoot TO BOOT THE SERVER AND PLAY A SOUND IN ONE CODE EVALUATION.

s.quit; // quit first to properly demonstrate

(
s.waitForBoot({
    { PinkNoise.ar(0.2 ! 2) * XLine.kr(1, 0.001, 2, doneAction: 2) }.play;
});
)

In practice, a waitForBoot function usually contains a combination of server-side setup code, such as creating Groups, adding SynthDefs, allocating Buffers, and instantiating signal-processing Synths.
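For instance, here is a sketch of what such a function might contain (the names and values are illustrative, not drawn from a specific example in this book):

(
s.waitForBoot({
    ~mainGroup = Group(); // create a Group
    ~buf = Buffer.alloc(s, s.sampleRate * 4); // allocate a four-second buffer
    SynthDef(\ping, { // add a SynthDef
        var sig = SinOsc.ar(\freq.kr(500) ! 2) * Env.perc.kr(2) * 0.1;
        Out.ar(0, sig);
    }).add;
});
)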

9.3 Asynchronous Commands

Packing all your server-side code into a waitForBoot function is tempting, but this alone will not guarantee that your program will work correctly. Within a waitForBoot function, some code might rely on the completion of previous code, for example, the creation of a Synth is only possible if its corresponding SynthDef has already been built and added to the server. Like booting the server, building a SynthDef requires a variable amount of time that depends on the number of UGens in the SynthDef, and the complexity of their interconnections. Code Example 9.3 demonstrates an example of this problem.

CODE EXAMPLE 9.3: A FAILURE TO CREATE A NEW SYNTH, RESULTING FROM THE FACT THAT ITS CORRESPONDING SYNTHDEF IS NOT YET FULLY ADDED TO THE SERVER WHEN SYNTH CREATION IS ATTEMPTED.

s.quit; // quit first to properly demonstrate

(
s.waitForBoot({
    SynthDef(\tone_000, {
        var sig = SinOsc.ar([350, 353], mul: 0.2);
        sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
        Out.ar(0, sig);
    }).add;

    Synth(\tone_000);
});
)


When the language tries to create the Synth, the server has not yet finished building the SynthDef, and produces a “SynthDef not found” error. On second evaluation, the code in Code Example 9.3 will work properly, because enough time will have passed to allow the SynthDef-building process to finish. This issue can be a common source of confusion, resulting in code that always fails on the first try, but works fine on subsequent tries. This example highlights the difference between synchronous and asynchronous commands, discussed in a guide file titled “Synchronous and Asynchronous Execution.” In the world of digital audio, certain actions must occur with a high level of timing precision, such as processing and playing audio signals. Without sample-accurate timing, audio samples get dropped during calculation, producing crackles, pops, and other unacceptable glitches. These time-sensitive actions are referred to as “synchronous” in SC and receive the highest scheduling priority. Asynchronous actions, on the other hand, are those that require an indeterminate amount of time to complete, and which generally do not require precise timing or high scheduling priority, such as adding a SynthDef or allocating a Buffer. The problem of waiting for the appropriate duration while asynchronous tasks are underway is solved with the sync method, demonstrated in Code Example 9.4. When the language encounters s.sync, it sends a message to the server, asking it to report back when all of its ongoing asynchronous commands are complete. When the server replies with this confirmation, the language then proceeds to evaluate code that occurs after the sync message. Note that although the SynthDef code remains the same in Code Example 9.4, its name has been changed so that the server interprets it as a brand new SynthDef that has not yet been added.

CODE EXAMPLE 9.4: USING sync TO ALLOW ASYNCHRONOUS TASKS TO COMPLETE BEFORE PROCEEDING.

s.quit; // quit first to properly demonstrate

(
s.waitForBoot({
    SynthDef(\tone_001, {
        var sig = SinOsc.ar([350, 353], mul: 0.2);
        sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
        Out.ar(0, sig);
    }).add;

    s.sync;

    Synth(\tone_001);
});
)




Because a sync message involves suspending and resuming code evaluation in the language, it can only be called from within a routine, or inside a method call (such as waitForBoot) that implicitly creates a routine. Thus, the code in Code Example 9.5 will fail, but will succeed if enclosed in a routine and played.

CODE EXAMPLE 9.5: A FAILURE PRODUCED BY CALLING sync OUTSIDE OF A ROUTINE.

(
// audio server assumed to be already booted
SynthDef(\tone_002, {
    var sig = SinOsc.ar([350, 353], mul: 0.2);
    sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
    Out.ar(0, sig);
}).add;

s.sync; // -> ERROR: yield was called outside of a Routine.

Synth(\tone_002);
)
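For completeness, here is a sketch of the corrected version, enclosing the same code in a routine and playing it, as described above:

(
Routine({
    SynthDef(\tone_002, {
        var sig = SinOsc.ar([350, 353], mul: 0.2);
        sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
        Out.ar(0, sig);
    }).add;

    s.sync;

    Synth(\tone_002);
}).play;
)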

Liberal usage of sync messages is generally harmless but does not provide any benefits. For example, if adding several SynthDefs to the server, there is no need to sync between each pair; a SynthDef is an independent entity, whose existence does not rely on the existence of other SynthDefs. Similarly, there is no need to sync between a block of buffer allocations and a block of SynthDefs, since these two processes do not (or at least should not) depend on each other. Generally, a sync message is only necessary before running code that depends on the completion of a previous asynchronous server command.

9.4 Initialization and Cleanup Functions

The [cmd]+[period] shortcut is useful when composing, testing, and experimenting, and provides an essential "panic button." The downside of this keystroke is that it indiscriminately wipes out every Synth and Group on the server. This behavior can be annoying when working on a project that involves one or more signal-processing Synths, such as reverbs or delays. When the audio server is wiped with [cmd]+[period], these Synths must be re-instantiated before rehearsal or performance can resume. Without taking steps to automate this process, re-instantiation becomes laborious, and may require lots of scrolling up and down through your code. Consider the following example, in which we allocate a bus, create a source/reverb SynthDef pair, and instantiate a reverb Synth. After evaluating the setup code block, we can begin a "performance" by creating a source Synth and routing it to the bus (see Code Example 9.6). If we press [cmd]+[period] while the sound is playing, both Synths are destroyed, and a re-instantiation of only the source Synth will produce silence since it relies on the reverb as part of its output path. As a result, we must first navigate back to our setup code and re-evaluate it (or at least, re-instantiate the reverb Synth) to hear sound again. Though not a huge chore in this specific case, this back-and-forth becomes tedious in a more complex project.

CODE EXAMPLE 9.6: INITIALIZING THE SERVER WITH AN AUDIO BUS AND A PAIR OF SOURCE/REVERB SYNTHS.

s.quit; // quit first to properly demonstrate

(
s.newBusAllocators;
~bus = Bus.audio(s, 2);

s.waitForBoot({
    SynthDef(\source, { |out = 0|
        var sig, env, freq, trig;
        trig = Trig.kr(Dust.kr(4), 0.1);
        env = EnvGen.kr(Env.perc(0.001, 0.08), trig);
        freq = TExpRand.kr(200, 1500, trig);
        sig = SinOsc.ar(freq ! 2, mul: 0.2);
        sig = sig * env;
        Out.ar(out, sig);
    }).add;

    SynthDef(\reverb, { |in = 0, mix = 0.2, out = 0|
        var sig, fx;
        sig = In.ar(in, 2);
        fx = FreeVerb2.ar(sig[0], sig[1], 1, 0.85);
        sig = sig.blend(fx, mix);
        Out.ar(out, sig);
    }).add;

    s.sync;

    ~reverb = Synth(\reverb, [in: ~bus]);
});
)

Synth(\source, [out: ~bus, freq: exprand(200, 1500)]);

ServerBoot, ServerTree, and ServerQuit are classes that allow automation of server-related tasks. Each class is essentially a repository where specific action functions can be registered, and each class evaluates its registered actions when the audio server enters a particular state. ServerBoot evaluates registered actions when the server boots, ServerQuit does the same when the server quits, and ServerTree evaluates its actions when the node tree is reinitialized, that is, when all nodes are wiped via [cmd]+[period] or by evaluating s.freeAll. Actions are registered by encapsulating the desired code in a function, and adding the function to the appropriate repository. Code Example 9.7 demonstrates a simple example of making SC say "good-bye" whenever the server quits.

CODE EXAMPLE 9.7: USE OF ServerQuit TO POST A GOOD-BYE MESSAGE WHEN THE SERVER QUITS.

(
~quitMessage = {
    " ***************** ".postln;
    " *** good-bye! *** ".postln;
    " ***************** ".postln;
};

ServerQuit.add(~quitMessage);
)

s.boot;

s.quit; // message appears in post window

A specific action can be removed from a repository with the remove method:

ServerQuit.remove(~quitMessage);

All registered actions can be removed from a repository with removeAll:

ServerQuit.removeAll;

TIP.RAND(); RISKS OF REMOVING ALL ACTIONS FROM A SERVER ACTION REPOSITORY

Most of the time, calling removeAll on a repository class will not disrupt your workflow in a noticeable way. However, when SC is launched, these three repository classes may have one or more functions that are automatically attached to them. At the time of writing, ServerBoot includes automated actions that allow the spectrum analyzer (FreqScope.new) and Node proxies (discussed in Chapter 12) to function properly. If you evaluate ServerBoot.removeAll and reboot the server, you'll find that these objects no longer work correctly. If you ever need to reset one or more repositories to their default state(s), the simplest way to do so is to recompile the SC class library, which can be done with [cmd]+[shift]+[L]. Alternatively, you can avoid the use of removeAll and instead remove custom actions individually.

Care should be taken to avoid accidental double-registering of functions. If the code in Code Example 9.8 is evaluated multiple times, ServerTree will interpret ~treeMessage as a new function on each evaluation, even though the code is identical. As a result, pressing [cmd]+[period] will cause the message to be posted several times. For functions that only post text, duplicate registrations clog the post window, but don't do any real harm. However, if these functions contain audio-specific code, duplicate registrations can have all sorts of unexpected and undesirable effects. A safer approach is to remove all actions before re-evaluating/re-registering, as depicted in Code Example 9.9.

CODE EXAMPLE 9.8: CODE THAT INADVERTENTLY RE-REGISTERS A FUNCTION TO A REPOSITORY WHEN EVALUATED A SECOND TIME.

(
s.waitForBoot({
    ~treeMessage = { "Server tree cleared".postln };
    ServerTree.add(~treeMessage);
});
)

// press [cmd]+[period] to see the message

CODE EXAMPLE 9.9: CODE THAT AVOIDS ACCIDENTAL RE-REGISTERING OF SERVER REPOSITORY FUNCTIONS.

(
s.waitForBoot({
    ServerTree.removeAll;
    ~treeMessage = { "Server tree cleared".postln };
    ServerTree.add(~treeMessage);
});
)

// press [cmd]+[period] to see the message

TIP.RAND(); INITIALIZING THE NODE TREE

Initializing the Node tree is an inherent part of the server-booting process, so any actions registered with ServerTree will occur when the server boots. For example, if you run the code in Code Example 9.9, and then quit and reboot the server, "Server tree cleared" will appear in the post window, despite the fact that we did not press [cmd]+[period] or evaluate s.freeAll.




Registered repository actions can be manually performed by calling run on the repository class:

ServerTree.run;

Calling run only evaluates registered actions and spoofs the normal triggering mechanism (e.g., evaluating ServerBoot.run does not actually boot the server; it only evaluates its registered functions). Armed with these new tools, Code Example 9.10 improves the audio example in Code Example 9.6. Specifically, we can automate the creation of the reverb Synth by adding an appropriate function to ServerTree. To avoid accidental double-registrations, we also define a cleanup function that wipes Nodes and removes all registered actions. This cleanup function is called once when we first run the code, and also whenever we quit the server.

CODE EXAMPLE 9.10: INITIALIZING THE SERVER WITH THE HELP OF SERVER ACTION REPOSITORY CLASSES.

(
s.newBusAllocators;
~bus = Bus.audio(s, 2);

~cleanup = {
    s.freeAll;
    ServerBoot.removeAll;
    ServerTree.removeAll;
    ServerQuit.removeAll;
};

~cleanup.();
ServerQuit.add(~cleanup);

s.waitForBoot({
    SynthDef(\source, { |out = 0|
        var sig, env, freq, trig;
        trig = Trig.kr(Dust.kr(4), 0.1);
        env = EnvGen.kr(Env.perc(0.001, 0.08), trig);
        freq = TExpRand.kr(200, 1500, trig);
        sig = SinOsc.ar(freq ! 2, mul: 0.2);
        sig = sig * env;
        Out.ar(out, sig);
    }).add;

    SynthDef(\reverb, { |in = 0, mix = 0.2, out = 0|
        var sig, fx;
        sig = In.ar(in, 2);
        fx = FreeVerb2.ar(sig[0], sig[1], 1, 0.85);
        sig = sig.blend(fx, mix);
        Out.ar(out, sig);
    }).add;

    s.sync;

    ~makeReverb = { ~reverb = Synth(\reverb, [in: ~bus]) };
    ServerTree.add(~makeReverb);
    ServerTree.run;
});
)

Synth(\source, [out: ~bus, freq: exprand(200, 1500)]);

In this improved version, pressing [cmd]+[period] no longer requires jumping back to our setup code. Instead, a new reverb Synth automatically appears to take the place of its predecessor, allowing us to focus our creative attention exclusively on our sound-making code. At the same time, we've taken appropriate precautions with our cleanup code, so that if we do re-evaluate our setup code (intentionally or unintentionally), it does not create any technical problems. When we're finished, we can quit the server, which removes all registered actions.

9.5 The Startup File

If you have code that you want to be evaluated every time you launch SC, you can include it in the language startup file, detailed in a reference file titled "Sclang Startup File." When SC is launched, it looks for a file named "startup.scd" in your user configuration folder, which is located at the path returned by Platform.userConfigDir. If a properly named file exists at this location, the interpreter evaluates its contents on startup (technically, this file is evaluated whenever the class library is recompiled, which is one of several actions that occur at startup). Though we're accustomed to seeing an outermost pair of parentheses around a code block, these parentheses are mainly a convenience for being able to evaluate code with [cmd]+[return] and need not be included in the startup file.

If you find yourself running the same code at the start of every session, consider moving it to the startup file to save time and space. Common startup actions include setting the server's sample rate, selecting a hardware device for audio input/output, registering actions with ServerTree, or just booting the server. The startup file might also include actions that occur post-boot, like adding SynthDefs or allocating buffers. Any valid code is fair game.

Remember that your startup file is specific to your computer. If you move a code project from one computer to another, the startup file does not automatically accompany it. So, if you're working on a project that will eventually be run on a different machine, using the startup file might not be a good choice. However, if your work will always be performed on the same computer, the startup file may be beneficial and is worth considering.
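As a point of reference, here is a sketch of what a startup.scd might contain, based on the common actions listed above (the sample rate and device name are hypothetical):

s.options.sampleRate = 48000; // set the server's sample rate
s.options.device = "My Audio Interface"; // hypothetical hardware device name
ServerTree.add({ "node tree initialized".postln }); // register a ServerTree action
s.boot; // boot the server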

9.6 Working with Multiple Code Files

A large body of code stored in a single file becomes more cumbersome to navigate as it grows, even with impeccable organization. The IDE has a few features that facilitate navigation. For example, the "Find" function allows quick jumping to a specific piece of text (particularly useful if you leave unique combinations of characters as "bookmarks" for yourself). Similarly, the "split" feature allows multiple parts of the same document to be displayed simultaneously (see Figure 9.1). Still, even with the advantages of these features, there are limits to what the interpreter can handle. If you try to evaluate an enormous block of code that contains thousands of nested functions, you may even encounter the rare "selector table too big" error message.

FIGURE 9.1  Viewing two parts of the same code file in the IDE using a split view. Split options are available in the “View” drop-down menu.

For large or even medium-sized projects, partitioning work into separate code files has distinct advantages. Modularization yields smaller files that focus on specific tasks. These files are generally easier to read, understand, and debug. Once a sub-file is functional and complete, you can set it aside and forget about it. In some cases, some of your sub-files may be general-purpose enough to be incorporated into other projects (for example, a sub-file that contains a library of your favorite SynthDefs). Code in a saved .scd file can be evaluated by calling load on a string that represents the absolute path to that file. If your code files are all stored in the same directory, the process is even simpler: you can call loadRelative on a string that contains only the name and extension of the file you want to run. The loadRelative method requires that the primary file and the external file have previously been saved somewhere on your computer. Otherwise, the files don't technically exist yet, and SC will have no idea where to look for them. You can create a simple demonstration of loadRelative as follows. First, save a SC document called "external.scd" that contains the following code:

s.waitForBoot({
    {
        var sig = SinOsc.ar([500, 503], mul: 0.2);
        sig = sig * Env.perc.kr(2);
    }.play;
});

Then, save a second SC file named "main.scd" (or some other name) in the same location. In the main file, evaluate the following line of code. The server should boot, and you should hear a tone.

"external.scd".loadRelative;
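For comparison, the absolute-path approach with load looks like this (the path shown is hypothetical):

"/Users/yourname/project/external.scd".load;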

To conclude this chapter, Companion Code 9.1 brings several of these concepts together and presents a general-purpose template for large-scale SC projects, which can be adapted for a variety of purposes.

CHAPTER 10

AN EVENT-BASED STRUCTURE

10.1 Overview

One advantage of creating music with SC is its potential to introduce randomness and indeterminacy into a composition, enabling musical expression that is difficult to achieve in other contexts. In other words, choices about certain musical elements can be driven by algorithms, without having predetermined orders or durations, allowing the composer to take a higher-level approach to music-making. However, algorithmic composition introduces new layers of open-endedness that can prove challenging for composers who are new to this kind of approach. As an entryway into algorithmic composition, this chapter focuses on structuring a composition that follows a chronological sequence of musical events. We begin with structures that are completely deterministic, that is, those that produce the exact same result each time. Toward the end of the chapter, we add some indeterminate elements that leverage SC's algorithmic capabilities, while maintaining an overall performance structure that remains ordered.

10.2 Expressing Musical Events Through Code

In the context of musical performance, what exactly do we mean by "musical event"? In short, this term refers to some discrete action, occurring at a distinct point in time, which produces some musically relevant result (note that we are not referring to the Event class). Some examples of musical events might include the production of a sound, the articulation of a musical phrase or sequence, or the onset of some parameter trajectory, such as vibrato or a timbral shift. With varying degrees of complexity, virtually any piece of music can be represented as a sequence of discrete events. The ability of music to be discretized in this manner is, in part, what enables us to represent music via score notation, and it is also a fundamental principle behind the MIDI protocol, similarly designed to represent a musical performance through discrete messages. Thinking broadly and simply about musical events, we could perhaps group them into the following six categories:

1. Create a new sound.
2. Modify a parameter of an existing sound.
3. Terminate an existing sound.
4. Create/Enable a new signal-processing effect.
5. Modify a parameter of an existing signal-processing effect.
6. Terminate/Disable an existing signal-processing effect.

In practice, a musical event may be treated as a singular unit but may comprise several different simultaneous events from this list. For example, an event might create ten or 100 new sounds at once, or an event might simultaneously terminate one sound while creating another. Further still, an event might initiate a timed sequence of sounds whose onsets are separated in time (such as an arpeggiated chord). Even though multiple sounds are produced, we can still conceptualize the action as a singular event.

So how are these types of musical events represented in code? A Synth is arguably the most basic unit of musical expression. When a new Synth is created, a signal-calculating unit is born on the server. Other times, we play a Pbind to generate a sequence of Synths. Thus, categories (1) and (4) can be translated into the creation of Synths and/or the playing of Pbinds. For categories (2) and (5), a set message is a logical translation, which allows manipulation of Synth arguments in real-time. Alternatively, we might iterate over an array of Synths, or issue a set message to a Group, to influence multiple Synths at once. In the context of Pbind, we might call upon a Pdefn to modify some aspect of an active EventStreamPlayer. Lastly, categories (3) and (6) might translate into using free to destroy one or more Synths, or more commonly, using a set message to zero an envelope gate to create a smooth fade, or we might update a Pdefn to gradually fade the amplitude values of an EventStreamPlayer.

Musical events can also be categorized as "one-shots" or "sustaining" events. In a one-shot event, all generative elements have a finite lifespan and terminate themselves automatically, usually as a result of a fixed-duration envelope or finite-length pattern. The Synths and Pbinds in a one-shot event typically do not need to be stored in named variables, because they are autonomous and there is rarely a need to reference them after creation. Sustaining events, on the other hand, have an indefinite-length existence, and rely on a future event for termination. Unlike one-shots, sound-generating elements in sustaining events must rely on some storage mechanism and naming scheme, so that they can be accessed and terminated later.

The first step toward building a chronological event structure is to express your musical events as discrete chunks of code. In other words, determine what's going to happen and when, and type out the code that performs each step. In early stages of composing, this can be as easy as putting each chunk of code in its own parenthetical enclosure, so that each event can be manually evaluated with [cmd]+[return]. As an initial demonstration, consider the code in Code Example 10.1, which represents a simple (and uninspired) event-based composition.

CODE EXAMPLE 10.1: A SIMPLE EVENT-BASED COMPOSITION, EXPRESSED AS DISCRETE CHUNKS OF CODE. SEVERAL SUBSEQUENT CODE EXAMPLES IN THIS CHAPTER RELY ON THESE TWO SYNTHDEFS.

s.boot;

(
SynthDef(\sine, {
    arg atk = 1, rel = 4, gate = 1, freq = 300,
    freqLag = 1, amp = 0.1, out = 0;
    var sig, env;
    env = Env.asr(atk, 1, rel).kr(2, gate);
    sig = SinOsc.ar(freq.lag(freqLag) + [0, 2]);
    sig = sig * amp * env;
    Out.ar(out, sig);
}).add;

SynthDef(\noise, {
    arg atk = 1, rel = 4, gate = 1, freq = 300,
    amp = 0.2, out = 0;
    var sig, env;
    env = Env.asr(atk, 1, rel).kr(2, gate);
    sig = BPF.ar(PinkNoise.ar(1 ! 2), freq, 0.02, 7);
    sig = sig * amp * env;
    Out.ar(out, sig);
}).add;
)

( // event 0: create sine synth
~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
)

( // event 1: create two noise synths
~noise0 = Synth(\noise, [freq: 75.midicps, amp: 0.15]);
~noise1 = Synth(\noise, [freq: 53.midicps, amp: 0.3]);
)

( // event 2: modify frequencies of noise synths
~noise0.set(\freq, 77.midicps);
~noise1.set(\freq, 56.midicps);
)

( // event 3: modify frequency of sine synth
~sine0.set(\freq, 59.midicps);
)

( // event 4: fade all synths
[~sine0, ~noise0, ~noise1].do({ |n| n.set(\gate, 0) });
)

Though devoid of brilliance and nuance, this example illustrates the essential technique of discretizing a musical performance into individual actions. In practice, your events will likely be substantially larger and more complex, involving dozens of Synths and/or Pbinds. Your SynthDefs, too, may be larger, more developed, and more numerous. In its current form, our “composition” is performable, but involves some combination of mouse clicking, scrolling, and/or keyboard shortcuts. We can improve the situation by loading these event chunks into an ordered structure, explored in the next section.
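Before moving on, it may help to sketch the pattern-based translations mentioned earlier, since they don’t appear in Code Example 10.1. The following is a minimal sketch (not part of the example above; the variable ~pat and the Pdefn key \amp are arbitrary choices, though the “sine” SynthDef is reused): a Pbind whose amplitude flows through a Pdefn, so that later events can modify it (category 2) or zero it as a crude fade (category 3).

( // a sustaining pattern whose amplitude is controlled by a Pdefn
~pat = Pbind(
    \instrument, \sine,
    \dur, 0.5,
    \freq, 60.midicps,
    \amp, Pdefn(\amp, 0.05),
).play;
)

Pdefn(\amp, 0.02); // a later event modifies the amplitude
Pdefn(\amp, 0); // another event silences it
~pat.stop; // and a final event terminates the pattern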


10.3  Organizing Musical Events

SC offers various options for organizing a sequence of musical events. The Collection class, for instance, is a parent class of several useful subclasses. The Array class naturally rises to the surface as a simple, effective solution. An unordered collection, such as a Dictionary, offers some benefits that arrays do not provide. Further still, a stream generated from a Pseq provides an elegant solution as well.

10.3.1  AN ARRAY OF MUSICAL EVENTS

Consider the even-simpler event sequence depicted in Code Example 10.2, which includes a version of the “sine” SynthDef from Code Example 10.1, modified with a fixed-duration envelope. The first event creates a one-shot Synth, and the next event creates another.

CODE EXAMPLE 10.2: A SIMPLE PERFORMANCE SEQUENCE CONSISTING OF TWO GENERATIVE ONE-SHOT EVENTS.

(
SynthDef(\simple, {
    arg atk = 0.2, rel = 1, gate = 1, freq = 300,
    freqLag = 1, amp = 0.1, out = 0;
    var sig, env;
    env = Env.linen(atk, 1, rel).kr(2);
    sig = SinOsc.ar(freq.lag(freqLag) + [0, 2]) * amp * env;
    Out.ar(out, sig);
}).add;
)

Synth(\simple, [freq: 330, amp: 0.05]); // event 0
Synth(\simple, [freq: 290, amp: 0.05]); // event 1

Code Example 10.3 shows a tempting but erroneous attempt to populate an array with these two events. When the array is created, both sounds play immediately. An attempt to access either item returns the Synth object, but no sound is produced. This behavior is a byproduct of the separation between the language and the server. To create the array, the language must interpret the items it contains. But, once the language interprets a Synth, an OSC message to the server is automatically generated, and the Synth comes into existence.




CODE EXAMPLE 10.3: AN INCORRECT ATTEMPT TO POPULATE AN ARRAY WITH MUSICAL EVENTS, RESULTING IN THE SOUNDS BEING HEARD IMMEDIATELY.

(
~events = [
    Synth(\simple, [freq: 330, amp: 0.05]),
    Synth(\simple, [freq: 290, amp: 0.05])
];
)

~events[0]; // returns the Synth, but no sound is produced

The solution is to wrap each event in a function, shown in Code Example 10.4. When the interpreter encounters a function, it only sees that it is a function, but does not “look inside” until the function is explicitly evaluated.

CODE EXAMPLE 10.4: A VALID APPROACH FOR CREATING AND PERFORMING AN ARRAY OF MUSICAL EVENTS.

(
~events = [
    {Synth(\simple, [freq: 330, amp: 0.05])},
    {Synth(\simple, [freq: 290, amp: 0.05])}
];
)

~events[0].(); // play 0th event
~events[1].(); // play 1st event

If the array is large, we can avoid dealing with a long list of evaluations by creating a global index and defining a separate function that evaluates the current event and increments the index, demonstrated in Code Example 10.5. If the index is beyond the range of the array, access attempts return nil, which can be harmlessly evaluated. The index can be manually reset to zero to return to the beginning, or to some other integer to jump to a point in the middle of the sequence.


CODE EXAMPLE 10.5: USING AN INDEX THAT IS AUTOMATICALLY INCREMENTED WHENEVER AN EVENT IS PERFORMED.

(
~index = 0;
~events = [
    {Synth(\simple, [freq: 330, amp: 0.05])},
    {Synth(\simple, [freq: 290, amp: 0.05])},
    {Synth(\simple, [freq: 420, amp: 0.05])},
    {Synth(\simple, [freq: 400, amp: 0.05])}
];
~nextEvent = {
    ~events[~index].();
    ~index = ~index + 1;
};
)

~nextEvent.(); // evaluate repeatedly
~index = 0; // reset to beginning

Code Example 10.5 essentially reinvents the behaviors of next and reset in the context of routines (introduced in Section 5.2), but without the ability to advance through events automatically with precise timing. If precise timing is desired, we can bundle event durations into the array and play a simple routine that retrieves these pieces of information from the array as needed. In Code Example 10.6, each item in the event array is an array containing the event function and a duration.

CODE EXAMPLE 10.6: USING A ROUTINE TO AUTOMATICALLY ADVANCE THROUGH AN ARRAY OF MUSICAL EVENTS WITH SPECIFIC TIMING.

(
~index = 0;
~events = [
    [{Synth(\simple, [freq: 330, amp: 0.05])}, 2],
    [{Synth(\simple, [freq: 290, amp: 0.05])}, 0.5],
    [{Synth(\simple, [freq: 420, amp: 0.05])}, 0.25],
    [{Synth(\simple, [freq: 400, amp: 0.05])}, 0],
];

~seq = Routine({
    ~events.do({
        ~events[~index][0].();
        ~events[~index][1].wait;
        ~index = ~index + 1;
    });
});
)

~seq.play;

As an exercise for the reader, consider converting the performance events in Code Example 10.1 into either the array or routine structures that appear in Code Examples 10.5 and 10.6.

10.3.2  A DICTIONARY OF MUSICAL EVENTS

A Dictionary is an unordered collection. At first glance, it seems like an odd choice for storing a chronological musical sequence. Instead of a numerical index, each item in a dictionary is paired with a symbol, which allows a more verbose and human-readable naming scheme.1 Basic usage appears in Code Example 10.7. When using an array, it can be difficult to remember exactly what happens at each index (consider, for example, the meaninglessness of ~events[37]), which can pose difficulties in making changes or debugging. A naming scheme such as ~events[\modulateSubTone] tends to be more meaningful and memorable.

CODE EXAMPLE 10.7: USING A DICTIONARY TO STORE AND PERFORM A SEQUENCE OF MUSICAL EVENTS.

(
~events = Dictionary()
.add(\play330sine -> {Synth(\simple, [freq: 330, amp: 0.05])})
.add(\play290sine -> {Synth(\simple, [freq: 290, amp: 0.05])});
)

~events[\play330sine].();
~events[\play290sine].();

The decision of whether to use a dictionary or array is a matter of taste, and perhaps also dictated by the nature of the project; it depends on whether the familiarity of named events outweighs the convenience of numerical ordering. Though the ability to name events is useful, using a dictionary is somewhat more prone to human error (e.g., accidentally using the same name twice). Arguably, a similar effect can be achieved with arrays by strategically placing


comments at points throughout your code. Or, perhaps your memory is sharp enough to recall events by number alone!
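If you want named events and a defined order, one compromise (a minimal sketch, not from the text; the names ~order and ~nextEvent are arbitrary) is to pair the dictionary with an array of symbols that specifies the chronological sequence:

(
~events = Dictionary()
.add(\play330sine -> {Synth(\simple, [freq: 330, amp: 0.05])})
.add(\play290sine -> {Synth(\simple, [freq: 290, amp: 0.05])});
~order = [\play330sine, \play290sine]; // chronological order of named events
~index = 0;
~nextEvent = {
    ~events[~order[~index]].();
    ~index = ~index + 1;
};
)

~nextEvent.(); // evaluate repeatedly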

10.3.3  A STREAM OF MUSICAL EVENTS

We typically use Pseq in the context of Pbind, but it has an application here as well. As we saw in Chapter 5, a pattern responds to asStream by returning a routine. Code Example 10.8 revisits the code in Code Example 10.1 and uses Pseq to express these events as a stream.

CODE EXAMPLE 10.8: THE MUSICAL EVENTS FROM CODE EXAMPLE 10.1, EXPRESSED AND PERFORMED AS A STREAM CREATED FROM A Pseq. THIS EXAMPLE RELIES ON THE SYNTHDEFS IN CODE EXAMPLE 10.1.

(
~events = Pseq([
    {
        ~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05])
    },
    {
        ~noise0 = Synth(\noise, [freq: 75.midicps, amp: 0.15]);
        ~noise1 = Synth(\noise, [freq: 53.midicps, amp: 0.3]);
    },
    {
        ~noise0.set(\freq, 77.midicps);
        ~noise1.set(\freq, 56.midicps);
    },
    {
        ~sine0.set(\freq, 59.midicps);
    },
    {
        [~sine0, ~noise0, ~noise1].do({ |n| n.set(\gate, 0) });
    }
], 1).asStream;
)

~events.next.(); // evaluate repeatedly
~events.reset; // reset to beginning

When using Pseq, we don’t need to maintain an event index, and instead merely need to call next on the stream to retrieve the next function. The stream can also be reset at will. Once the stream is reset, we can jump to a point along the event timeline by extracting a certain number of events from the stream, but not evaluating them, as shown in Code Example 10.9.




CODE EXAMPLE 10.9: A TECHNIQUE FOR SKIPPING AHEAD IN A STREAM OF MUSICAL EVENTS.

(
~events.reset;
1.do({~events.next}); // retrieve the first event but do not evaluate
)

~events.next.(); // retrieve and evaluate the next event

Retrieving without evaluating is a useful technique, but it reveals a problem. In a chronological event-based composition, it’s common to have events that depend on the execution of previous events. For example, the penultimate event in Code Example 10.8 sets the frequency of a Synth created during the first event. If we skip to this event by changing 1.do to 3.do in Code Example 10.9, the next event produces no sound, and we’ll see a failure message in the post window. Solutions to this problem are explored in the next section.

10.4  Navigating and Rehearsing an Event-Based Composition

For timeline-based compositions, the ability to “skip ahead” is essential for rehearsal and troubleshooting. If there’s a problem at the eleventh minute, you shouldn’t have to sit through ten minutes of music to get there! Notated scores have measure numbers and rehearsal marks for this purpose, and in DAWs and waveform editors, we can click or drag the playback cursor to a desired location or set timeline markers. In a programming language like SC, a bit of work is needed to address this issue.

It can be tempting to ignore the need for rehearsal cues, and simply accept the fact that some sounds may be absent or inaccurate if we start in the middle. This approach is sometimes workable, but highly impractical in other cases. For example, if the first musical event of a composition plays a sustained sound, and the remaining events manipulate this sound, then no events will do anything unless the first event is executed. Another tempting option is to quickly “mash” through earlier events to arrive at a target location. Doing so ensures that preceding events are performed in order, but the accelerated timing may produce unpleasantly loud or distorted sound, and/or overload your CPU. Some preceding sounds may also be one-shots with long durations, and we’ll have to wait for them to end.

A better approach is to identify key moments that serve as useful starting points, and build a special collection of rehearsal cues, which each place the program in a specific state when executed. Generally, rehearsal cues create Synths and/or play Pbinds, while supplying parameter values that are appropriate for that specific moment. Code Example 10.10 depicts a dictionary of rehearsal cues named ~startAt, meant to accompany the musical events in Code Example 10.8. Pseq is not the best option for rehearsal cues, because we may want to “jump around” from one rehearsal cue to another in non-chronological order, so next is of little use to us. An array would be slightly better, but the number of rehearsal cues may not


match the number of events, in which case numerical indexing is unhelpful. Therefore, we use a dictionary for storing rehearsal cues, for the ability to give each cue a meaningful name.

CODE EXAMPLE 10.10: A Dictionary OF REHEARSAL CUES THAT AUGMENTS THE MUSICAL EVENTS IN CODE EXAMPLE 10.8, MEANT TO FACILITATE PLAYING THE COMPOSITION FROM ARBITRARY POINTS ALONG ITS TIMELINE. THIS EXAMPLE RELIES ON THE SYNTHDEFS IN CODE EXAMPLE 10.1 AND THE ~events STREAM FROM CODE EXAMPLE 10.8.

(
~startAt = Dictionary()
.add(\event1 -> {
    ~events.reset;
    1.do({~events.next});
    ~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
})
.add(\event2 -> {
    ~events.reset;
    2.do({~events.next});
    ~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
    ~noise0 = Synth(\noise, [freq: 75.midicps, amp: 0.15]);
    ~noise1 = Synth(\noise, [freq: 53.midicps, amp: 0.3]);
})
.add(\event3 -> {
    ~events.reset;
    3.do({~events.next});
    ~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
    ~noise0 = Synth(\noise, [freq: 77.midicps, amp: 0.15]);
    ~noise1 = Synth(\noise, [freq: 56.midicps, amp: 0.3]);
})
.add(\event4 -> {
    ~events.reset;
    4.do({~events.next});
    ~sine0 = Synth(\sine, [freq: 59.midicps, amp: 0.05]);
    ~noise0 = Synth(\noise, [freq: 77.midicps, amp: 0.15]);
    ~noise1 = Synth(\noise, [freq: 56.midicps, amp: 0.3]);
});
)

~events.reset;
~events.next.(); // evaluate repeatedly

// press [cmd]+[period], then cue an event
~startAt[\event3].value;
~events.next.(); // play the new "next" event

With this improvement, if we interrupt a performance with [cmd]+[period], we can call a rehearsal cue, which transports us to a specific moment in time. For example, the function stored in ~startAt[\event3] navigates to the appropriate stage within the ~events stream, creates all three Synths, and provides appropriate frequency values for the noise Synths (which are normally set by the previous event).

This approach of building rehearsal cues is no different for larger and more complex pieces. At its core, it involves identifying useful rehearsal points, identifying active musical processes at these moments, and building a collection of functions that construct these moments. To this end, creating a spreadsheet, or even using pencil and paper, can be helpful for sketching out a “map” of events, which may help clarify exactly what musical processes occur, when they occur, and how long they last. Not every moment in a composition is a practical place for a rehearsal cue, much in the same way it can be awkward for an instrumental ensemble to begin rehearsing in the middle of a musical phrase. There is also a point of diminishing returns for rehearsal cues; beyond a certain quantity, additional rehearsal cues require time and effort to code, but they do not provide much practical benefit. So, it’s wise to be judicious in selecting moments for rehearsal cues.

10.5  Indeterminacy in an Event-Based Composition

An event-based composition can be identical with each performance but doesn’t need to be. As discussed at the start of this chapter, we have the option of introducing randomness and other algorithmic techniques to produce a composition that remains chronological when viewed as a whole, but which has varying degrees of unpredictability in its lower-level details. The simplest and most obvious way to add randomness to a composition is to build random features into the event code itself. For example, we might have an event that creates (and another that later fades) a random collection of pitches from a scale, or a one-shot pattern that produces a tone burst with a random number of articulated notes (see Code Examples 10.11 and 10.12).


CODE EXAMPLE 10.11: A MUSICAL EVENT THAT PLAYS A CLUSTER OF SYNTHS WHOSE PITCHES ARE RANDOMLY SELECTED FROM A SCALE. THIS EXAMPLE RELIES ON THE “NOISE” SYNTHDEF IN CODE EXAMPLE 10.1.

(
var scl = [0, 3, 5, 7, 9, 10];
var oct = [72, 72, 84, 84, 84, 96];
var notes = oct.collect({ |n| n + scl.choose});
~synths = ([60] ++ notes).collect({ |n|
    // MIDI note 60 is prepended to avoid randomizing the bass note
    Synth(\noise, [freq: n.midicps, amp: 0.15]);
});
)

~synths.do({ |n| n.set(\gate, 0)});

CODE EXAMPLE 10.12: A MUSICAL EVENT THAT PLAYS A PATTERN WITH A RANDOM NUMBER OF NOTES. THIS EXAMPLE RELIES ON THE “SINE” SYNTHDEF IN CODE EXAMPLE 10.1.

(
Pfin(
    exprand(3, 15).round, // creates between 3–15 synths
    Pbind(
        \instrument, \sine,
        \dur, 0.07,
        \scale, [0, 3, 5, 7, 9, 10],
        \degree, Pbrown(10, 20, 3),
        \sustain, 0.01,
        \atk, 0.002,
        \rel, 0.8,
        \amp, 0.03
    )
).play;
)




Randomness can also be implemented at a higher level, influencing the sequential arrangement of events. In Code Example 10.13, a Pseq is nested inside the outermost Pseq to randomize the number of tone burst patterns that are played before the filtered noise chord is faded out. The noteworthy feature of this inner Pseq is that the repeats value is expressed as a function, guaranteeing that the number of repeats will be uniquely generated each time the sequence is performed. Using a similar technique, we can randomize the order in which certain events occur. Code Example 10.14 uses Pshuf to randomize the order of the first two events, so that either the tone bursts or the filtered noise chord may occur first.

CODE EXAMPLE 10.13: USING AN INSTANCE OF PSEQ INSIDE THE EVENT SEQUENCE TO RANDOMIZE THE NUMBER OF TIMES AN EVENT REPEATS BEFORE MOVING ON.

(
~events = Pseq([
    { // create cluster chord
        var scl = [0, 3, 5, 7, 9, 10];
        var oct = [72, 72, 84, 84, 84, 96];
        var notes = oct.collect({ |n| n + scl.choose});
        ~synths = ([60] ++ notes).collect({ |n|
            Synth(\noise, [freq: n.midicps, amp: 0.15]);
        });
    },
    // repeat a pattern a random number of times
    Pseq([
        {
            Pfin(
                exprand(3, 15).round,
                Pbind(
                    \instrument, \sine,
                    \dur, 0.07,
                    \degree, Pbrown(10, 20, 3),
                    \scale, [0, 3, 5, 7, 9, 10],
                    \sustain, 0.01,
                    \atk, 0.002,
                    \rel, 0.8,
                    \amp, 0.03
                )
            ).play;
        }
    ], {rrand(3, 5)}),
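As a minimal sketch of the Pshuf technique just described (using simplified stand-ins for the two events; it relies on the SynthDefs in Code Example 10.1), the first two event functions can be wrapped in a Pshuf with one repeat:

(
~events = Pseq([
    Pshuf([
        { ~noise0 = Synth(\noise, [freq: 75.midicps, amp: 0.15]) }, // chord stand-in
        { ~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]) }, // burst stand-in
    ], 1),
    { [~sine0, ~noise0].do({ |n| n.set(\gate, 0) }) }, // fade
], 1).asStream;
)

~events.next.(); // one of the first two events, chosen at random
~events.next.(); // then the other
~events.next.(); // then the fade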

Every proxy has a source, which is the content for which the proxy serves as a placeholder. Initially, a NodeProxy’s source is nil, which results in silence when played (here, ~n is assumed to be a NodeProxy created and played earlier, e.g., with ~n = NodeProxy().play):

~n.source; // -> nil

A NodeProxy’s source is usually a UGen function, similar to the content of a SynthDef. However, in the case of NodeProxy, there’s no need to use In/Out UGens (in fact, doing so may create problems). As we’ll see shortly, routing is automatically and dynamically handled by internal NodeProxy mechanisms. The following two expressions each define a different source. Assuming you’ve evaluated the previous lines, you can run the following two lines repeatedly, in any order, and as many times as you like. There’s no risk of creating awkward gaps of silence or redundant copies of the proxy.

~n.source = {PinkNoise.ar(0.1 ! 2)};
~n.source = {SinOsc.ar([300, 301], mul: 0.2)};




Throughout this chapter, keep in mind that live coding is a dynamic, improvisatory experience. A book isn’t an optimal venue for conveying the “flow” of a live coding session, because it can’t easily or concisely capture code modifications and deletions that take place over time. When changing a proxy’s source, you might type out a new line (as pictured above), or you might simply overwrite the first line. In any case, you’re encouraged to improvise and explore as you follow along with these examples and figure out a style that suits you!

On a basic level, we’ve solved the problem presented in the previous section. Instead of having to remove an old Synth and create a new one, we now have a singular object, whose content can be spontaneously altered. However, this change is abrupt, and this might not be what we want. Every NodeProxy also has a fadeTime that defaults to 0.02 seconds, which defines the length of a crossfade that is applied whenever the source changes:

~n.fadeTime = 1;
~n.source = {PinkNoise.ar(0.1 ! 2)};
~n.source = {SinOsc.ar([300, 301], mul: 0.2)};

If the source of a new NodeProxy is an audio rate signal, it automatically becomes an audio rate proxy, and the same is true for control rate sources/proxies. When a NodeProxy’s source is defined, it automatically plays to a private audio bus or control bus. This bus index, too, is selected automatically. When we play a NodeProxy, what we’re actually doing is establishing a routing connection from the private bus to hardware output channels, and as a result, we hear the signal. The stop message undoes this routing connection but leaves the source unchanged and still playing on its private bus. The stop method accepts an optional fade time. The play method also accepts a fade time, but because it’s not the first argument, it must be explicitly specified by keyword if provided all by itself:

~n.stop(3);
~n.play(fadeTime: 3);

TIP.RAND(); WHEN TO PLAY A NODEPROXY

Don’t get into the habit of instinctively calling play on every NodeProxy you create. It’s rarely a good idea to play a control rate proxy, even though this is technically possible (the control rate signal will be upscaled to the audio rate using linear interpolation). Proxies that represent control rate signals, such as LFOs, are generally not signals we want to monitor, because their numerical output ranges often extend well beyond ±1, and may result in loud, distorted sound. You should only call play on proxies meant to be heard directly!

The clear message, which also takes an optional fade time, will fully reset a NodeProxy, severing the monitoring connection and resetting its source to nil:

~n.clear(3);


12.3.2  NODEPROXY ROUTING AND INTERCONNECTIONS

In previous chapters, we’ve relied on manual bus allocation and the use of input/output UGens to route signals from one Synth to another, while carefully managing node order on the server. When using NodeProxy, the process is simpler and more intuitive. In particular, instances of NodeProxy can be nested inside of other NodeProxy sources. Let’s have a look at a few examples, starting with a simple square wave:

~sig = NodeProxy().fadeTime_(3).play;
~sig.source = {Pulse.ar([100, 100.5], mul: 0.03)};

Suppose we want to pass this oscillator through a low-pass filter. While it’s true that we could just redefine the source to include a filter, this approach doesn’t showcase the modularity of NodeProxy. Instead, we’ll create and play a second proxy that represents a filter effect:

~filt = NodeProxy().fadeTime_(3).play;

And now, with ~sig still playing, we define the source of the filter proxy and supply the oscillator proxy as the filter’s input. We also stop monitoring the square wave proxy, which is now being indirectly monitored through the filter proxy. These two statements can be run simultaneously, or one after the other. How you evaluate these statements primarily depends on how/whether you want the filtered and unfiltered sounds to overlap.

(
~filt.source = {RLPF.ar(~sig.ar(2), 800, 0.1)};
~sig.stop(3);
)

When we nest a NodeProxy inside of another, best practice is to supply the proxy rate and number of channels. Providing only the name (e.g., ~sig) may work in some cases but may produce warning messages about channel size mismatches in others.

Let’s add another component to our signal flow, but in the upstream direction instead of downstream: we’ll modulate the filter’s cutoff frequency with an LFO. The process here is similar: we create a new proxy, define its source, and plug it into our existing NodeProxy infrastructure. This new proxy automatically becomes a control rate proxy, due to its control rate source. Note that we don’t play the LFO proxy; its job is not to be heard, but rather to influence some parameter of another signal.

~lfo = NodeProxy();
~lfo.source = {SinOsc.kr(0.5).range(55, 80).midicps};
~filt.source = {RLPF.ar(~sig.ar(2), ~lfo.kr(1), 0.1)};




With these basic concepts in mind, now is an excellent time to take the wheel and start modifying these sounds on your own! Or, if you’re finished, you can simply end the performance:

~filt.clear(3); // fade the source signal first

(
~sig.clear; // then clean up the others
~lfo.clear;
)

12.3.3  NDEF: A STYLISTIC ALTERNATIVE

Practically speaking, using NodeProxy requires the use of environment variables to maintain named references. With this approach, there is always a risk of accidentally overwriting a variable with a new NodeProxy assignment, in which case the old proxy remains alive but becomes inaccessible. For example, if we evaluate ~n = NodeProxy().play twice in a row, we’ll create two instances of NodeProxy, but the variable name ~n only refers to the second version, and this might become a problem later on.

Ndef is a subclass of NodeProxy; it thus inherits all of NodeProxy’s methods and behaviors while offering a more concise syntax style that avoids the need for environment variables. Ndef follows the same syntax as other “def-style” classes in SC (e.g., MIDIdef, OSCdef, Pdef), involving a symbol name and the object it represents. If the content for an existing Ndef is redefined, the new data replaces the old data stored in that existing Ndef (instead of creating a new, additional Ndef). Code Example 12.1 replicates the filter/LFO example from the previous section, using Ndef instead of NodeProxy. Once again, note the syntax for providing the rate and channel count when nesting one Ndef inside of another. As with other def-style classes, it’s possible to access every instance of Ndef with all, which allows the use of iteration for wide message broadcasts, such as clearing every proxy.

CODE EXAMPLE 12.1: USING Ndef TO RECREATE THE INTERCONNECTED NODEPROXY EXAMPLE PRESENTED IN PIECES THROUGHOUT THE BEGINNING OF CHAPTER 12.

// play the square wave proxy
Ndef(\sig, {Pulse.ar([100, 100.5], mul: 0.03)}).fadeTime_(3).play;

// play the filter proxy, nesting the oscillator proxy inside
Ndef(\filt, {RLPF.ar(Ndef.ar(\sig, 2), 800, 0.1)}).fadeTime_(3).play;

// stop playing the original proxy
Ndef(\sig).stop(3);

// create the LFO proxy (but don't play it)
Ndef(\lfo, {SinOsc.kr(0.5).range(55, 80).midicps});

// patch the LFO into the filter proxy
Ndef(\filt, {RLPF.ar(Ndef.ar(\sig, 2), Ndef.kr(\lfo, 1), 0.1)});

// fade everything over 5 seconds
Ndef.all.do({ |n| n.clear(5) });

12.3.4  PROXYSPACE: ANOTHER STYLISTIC ALTERNATIVE

If you’ve been following along with this book, you’ve probably come to rely on environment variables, i.e., the often-called “global” variables that begin with a tilde and don’t require a separate declaration step. Technically, these named containers are not global but are local to the environment in which they were created. They only seem global because there’s rarely a need to change from one environment to another.

An Environment is a collection-type class that functions as a space in which language-side data can be stored by name. It is the immediate parent class of the Event class, to which it is nearly identical. The current environment SC is using is accessible through the special keyword currentEnvironment, demonstrated in Code Example 12.2.

CODE EXAMPLE 12.2: CREATION OF TWO ENVIRONMENT VARIABLES, AND ACCESS OF THE CURRENT ENVIRONMENT IN WHICH THEY ARE STORED.

(
~a = 5; // create two environment variables
~b = 7;
)

currentEnvironment; // ~a and ~b live here, possibly other items too

   currentEnvironment; // ~a and ~b live here, possibly other items too It’s possible to create and use a different environment to manage a totally separate collection of variables, and then return to the original environment at any time, where all of our original variables will be waiting for us. Environments let us compartmentalize our data, instead of having to messily stash everything in one giant box. A good way to think about managing multiple environments is to imagine them in a spring-loaded vertical stack, with the current environment on the top, as illustrated in




Code Example 12.3. At any time, we can push a new environment to the top of the stack, pressing the others down, and the new topmost environment becomes our current environment. When finished, we can pop that environment out of the stack, after which the others rise to fill the empty space, and the new topmost environment becomes our current environment.

CODE EXAMPLE 12.3: MANAGING MULTIPLE ENVIRONMENTS BY PUSHING AND POPPING.

e = Environment().push; // create and push to the top of the stack

(
~apple = 17; // store some data
~orange = 19;
)

currentEnvironment; // check the data: only ~apple and ~orange live here

e.pop; // pop it from the stack

currentEnvironment; // we've returned to our previous environment

   currentEnvironment; // we've returned to our previous environment What does all this have to do with live coding? A ProxySpace is a special type of environment that facilitates the use of NodeProxy objects. We start by pushing a new ProxySpace to the top of the stack: p = ProxySpace().push;

Inside of a ProxySpace, every environment variable is an instance of NodeProxy, created as soon as its name is queried:

~sig; // -> a NodeProxy

Instead of setting fade times on an individual basis, we can set a fade time for the ProxySpace itself, which will be inherited by all proxies within the environment, although we retain the ability to override the fade time of an individual NodeProxy:

p.fadeTime = 3; // all NodeProxies adopt a 3-second fade time

Code Example 12.4 recreates the square wave/filter/LFO example using ProxySpace. Note that all proxies in a ProxySpace can be simultaneously cleared by sending a clear message to the environment itself.


CODE EXAMPLE 12.4: USING ProxySpace TO RECREATE THE INTERCONNECTED NDEF EXAMPLE PRESENTED IN THE PREVIOUS SECTION.

p = ProxySpace().fadeTime_(3).push; // (if not already inside a ProxySpace)

~sig.play;
~sig.source = {Pulse.ar([100, 100.5], mul: 0.03)};

~filt.play;
~filt.source = {RLPF.ar(~sig.ar(2), 800, 0.1)};
~sig.stop(3);

~lfo.source = {SinOsc.kr(0.5).range(55, 80).midicps};
~filt.source = {RLPF.ar(~sig.ar(2), ~lfo.kr(1), 0.1)};

p.clear(5);
p.pop;

   p.pop; The beauty of ProxySpace is that it provides full access to the NodeProxy infrastructure without us ever having to type “NodeProxy” or “Ndef ”. The primary disadvantage of using ProxySpace, however, is that every environment variable is an instance of NodeProxy, and therefore they cannot be used to contain other types of data, like Buffers or GUI classes, because these types of objects are not valid sources for a NodeProxy. For example, the following line of code looks perfectly innocent, but fails spectacularly while inside of a ProxySpace: ~b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

Luckily, the global interpreter variables (lowercase letters a through z) can still be used normally inside of a ProxySpace. There are only twenty-six of these, so using them on an individual basis is not sustainable! Instead, Code Example 12.5 demonstrates a better approach, involving the creation of a multilevel collection-type structure, such as nested Events, all contained within a single interpreter variable.




CODE EXAMPLE 12.5: USING NESTED EVENTS STORED WITHIN A SINGLE INTERPRETER VARIABLE WHILE INSIDE A PROXYSPACE, TO CONTAIN DATA THAT ARE INVALID PROXY SOURCES.

b = (); // main structure
b[\buf] = (); // a "subfolder" named 'buf'

// inside this subfolder, a buffer named '0'
b[\buf][\0] = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

b[\buf][\0].play; // it is accessible and playable

b[\chords] = (); // another subfolder named 'chords'
b[\chords][\major] = [0, 4, 7]; // store a chord in this subfolder, etc.

When finished using a ProxySpace, it’s always a good idea to pop it from the stack and return to the previous environment:

p.pop;
currentEnvironment; // back where we started

TIP.RAND(); SIMILARITIES BETWEEN NDEF AND PROXYSPACE

The Ndef and ProxySpace styles are not actually that different from each other. When using an Ndef-based approach, there is a singular instance of ProxySpace used by all Ndefs, which resides in the background. This ProxySpace is accessible by calling proxyspace on any Ndef. You can then chain additional methods that would ordinarily be sent to an instance of ProxySpace, like setting a global fadeTime for all Ndefs. The following example demonstrates these similarities:

Ndef(\a).proxyspace.fadeTime_(5); // all Ndefs (current and new) have a 5-second fade time
Ndef(\a).fadeTime; // -> 5
Ndef(\b).fadeTime; // -> 5


Whichever of these three styles (NodeProxy, Ndef, ProxySpace) you decide to use is ultimately a matter of individual preference. Speaking from personal experience, most users seem to prefer Ndef, perhaps for its concise style and avoidance of environment variables, while retaining the ability to freely use environment variables as needed. Keep in mind, however, that these styles aren’t amenable to a “mix-and-match” approach. For a live coding session, it’s necessary to pick one style and stick with it.

12.4  Additional NodeProxy Features

The techniques in the previous section should be enough to get you started with some live coding experiments of your own. NodeProxy has some additional features that may be useful in certain situations, detailed here.

12.4.1  NODEPROXY ARGUMENTS

Like a SynthDef, the source of a NodeProxy can include a declaration of arguments, which will respond to set messages, demonstrated in Code Example 12.6. We also have the option of calling xset, which uses the proxy’s fade time to crossfade between argument values.5 An argument can be set to a number (as is typical) or another NodeProxy object. Note that once a proxy argument has been set, it will “remember” its value, even if the source is redefined.

CODE EXAMPLE 12.6: USING THE set/xset METHODS WITH NODEPROXY.

Ndef(\t).fadeTime_(2).play;

(
Ndef(\t, {
    arg freq = 200, width = 0.5;
    VarSaw.ar(freq + [0, 2], width: width, mul: 0.05);
});
)

Ndef(\t).set(\freq, 300); // immediate change
Ndef(\t).xset(\freq, 400); // crossfaded change

(
// after a source change, freq remains at 400, even though the default is 200
Ndef(\t, {
    arg freq = 200, width = 0.5;
    var sig = SinOsc.ar(freq + [0, 2], mul: 0.1);
    sig = sig * LFPulse.kr(6, 0, 0.3).lag(0.03);
});
)

(
Ndef(\lfo, {LFTri.kr(0.25).exprange(300, 1500)});
Ndef(\t).xset(\freq, Ndef(\lfo));
)

Ndef.all.do({ |n| n.clear(2) });

12.4.2  CLOCKS AND QUANTIZATION

We saw in parts of Chapter 5 that a Pbind or routine can be rhythmically quantized when played, which guarantees proper timing alignment. A nearly identical process can be applied to a NodeProxy. Each NodeProxy has a pair of attributes named clock and quant, which determine the timing information for source changes. Code Example 12.7 demonstrates the usage of these attributes.

CODE EXAMPLE 12.7: NODEPROXY QUANTIZATION TECHNIQUES.

(
// first, create a clock at 108 bpm and post beat information
t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
)

// play a proxy and specify timing information
Ndef(\p).fadeTime_(0).clock_(t).quant_(4).play;

(
// now, any source change to the proxy will be quantized:
Ndef(\p, { |freq = 1000|
    var trig, sig;
    trig = Impulse.kr(t.tempo);
    sig = SinOsc.ar(freq) * 0.1 ! 2;
    sig = sig * Env.perc(0, 0.1).kr(0, trig);
});
)

(
Ndef(\p).clear; // clean up
t.stop;
)

Note that using clock and quant does not require a fadeTime of 0; a zero fade time is used here to create a quantized source change that occurs precisely on a desired beat. If a source change occurs and the fadeTime is greater than zero, the crossfade will begin on the next beat specified by quant.
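As a minimal sketch of this interaction (assuming the clock t and the proxy from Code Example 12.7 are still active, i.e., before its clean-up step), a nonzero fade time combined with quant produces a crossfade that starts on a quantized beat:

Ndef(\p).fadeTime_(2); // two-second crossfades from now on

( // the crossfade to the new source begins on the next multiple of 4 beats
Ndef(\p, { |freq = 1500|
    var sig = SinOsc.ar(freq) * 0.1 ! 2;
    sig = sig * Env.perc(0, 0.1).kr(0, Impulse.kr(t.tempo));
});
)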


When using ProxySpace instead of Ndef, the instance of ProxySpace can be assigned clock and quant values, and all of the proxies within that space automatically inherit this information. At the same time, we retain the ability to override an individual proxy’s attributes to be different from that of its environment. These techniques are demonstrated in Code Example 12.8.

CODE EXAMPLE 12.8: QUANTIZING PROXIES WHILE INSIDE A PROXYSPACE.

(
// create a clock, post beats, and push a new ProxySpace
t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
p = ProxySpace(clock: t).quant_(8).push; // all proxies inherit clock/quant
)

~sig.play;

(
// source changes are quantized to the nearest beat multiple of 8
~sig = {
    var freq, sig;
    freq = ([57, 60, 62, 64, 67, 69, 71]).scramble
    .collect({ |n| n + [0, 0.1] }).flat.midicps;
    sig = Splay.ar(SinOsc.ar(freq)) * 0.05;
};
)

~sig.quant_(0); // override quant for this proxy

(
// change now occurs immediately
~sig = {
    var freq, sig;
    freq = ([57, 60, 62, 64, 67, 69, 71] - 2).scramble
    .collect({ |n| n + [0, 0.1] }).flat.midicps;
    sig = Splay.ar(SinOsc.ar(freq)) * 0.05;
};
)

(
p.clear;
t.stop;
)

p.pop;




12.4.3  “RESHAPING” A NODEPROXY

Every NodeProxy has a channel size, stored in its numChannels attribute. Often, the channel size of a NodeProxy is determined automatically, based on the channel size of the source. Other times, the channel size (and rate) of a NodeProxy can be provided at creation time, while its source is still nil. If a NodeProxy is created without a source and without specifying a channel size, it defaults to two channels if running at the audio rate, and one channel if running at the control rate. If we play a proxy, we are implicitly telling SC it should run at the audio rate.

But, regardless of how a proxy’s channel size is determined, a good question is: What happens if we change the source such that its channel size is different from that of its proxy? Code Example 12.9 demonstrates the use of an attribute called reshaping, which determines a NodeProxy’s behavior in response to such a change. By default, reshaping is nil, which means the NodeProxy will not adapt to accommodate a differently sized source. If the new source has more channels than available in the proxy, excess channels are mixed with lower-numbered channels, in order to match the proxy’s size. If reshaping is set to the symbol \elastic, the proxy will grow or shrink to accommodate the channel size of its new source, if different from the previous source. If reshaping is \expanding, the proxy will grow, but not shrink, to accommodate a differently sized source.

CODE EXAMPLE 12.9: THE reshaping BEHAVIOR OF NODEPROXY IN RESPONSE TO DIFFERENTLY SIZED SOURCES.

Ndef(\sines).play;
Ndef(\sines).numChannels; // -> 2
Ndef(\sines).reshaping; // -> nil (no reshaping)

(
// Define a 2-channel source
Ndef(\sines, {
    var sig = SinOsc.ar([425, 500]);
    sig = sig * Decay2.ar(Impulse.ar([2, 3]), 0.005, 0.3, 0.1);
});
)

(
// Define a 4-channel source. No reshaping is done, and excess channels are
// mixed with the lowest two. A notification appears in the post window.
Ndef(\sines, {
    var sig = SinOsc.ar([425, 500, 750, 850]);
    sig = sig * Decay2.ar(Impulse.ar([2, 3, 4, 5]), 0.005, 0.3, 0.1);
});
)

Ndef(\sines).numChannels; // -> 2

Ndef(\sines).reshaping_(\elastic); // change reshaping behavior

(
// Defining a 4-channel source now reshapes the proxy. All four signals are
// on separate channels. If working with only two speakers, we'll only hear
// the first two channels.
Ndef(\sines, {
    var sig = SinOsc.ar([425, 500, 750, 850]);
    sig = sig * Decay2.ar(Impulse.ar([2, 3, 4, 5]), 0.005, 0.3, 0.1);
});
)

Ndef(\sines).numChannels; // -> 4

(
// An elastic proxy will shrink to accommodate a smaller source
Ndef(\sines, {
    var sig = SinOsc.ar([925, 1100]);
    sig = sig * Decay2.ar(Impulse.ar([6, 7]), 0.005, 0.3, 0.1);
});
)

Ndef(\sines).numChannels; // -> 2

Ndef(\sines).clear;
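The \expanding mode isn’t shown in Code Example 12.9; the following is a minimal sketch of it (the proxy name \grow is arbitrary), in which the proxy grows to fit a larger source but does not shrink back down:

Ndef(\grow).reshaping_(\expanding).play;
Ndef(\grow, {SinOsc.ar([300, 400], mul: 0.05)});
Ndef(\grow).numChannels; // -> 2
Ndef(\grow, {SinOsc.ar([300, 400, 500, 600], mul: 0.05)}); // grows to fit
Ndef(\grow).numChannels; // -> 4
Ndef(\grow, {SinOsc.ar([300, 400], mul: 0.05)}); // does not shrink
Ndef(\grow).numChannels; // -> 4
Ndef(\grow).clear;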

12.4.4  PBIND AS A NODEPROXY SOURCE

Strange though it may seem, a Pbind is a valid NodeProxy source, demonstrated in Code Example 12.10. An EventStreamPlayer is not technically a type of Node, but is a process that often generates a timed sequence of Synths, and can therefore be treated similarly. Attributes such as fadeTime and quant can still be used and have predictable results. A NodeProxy that has a Pbind source can be used as part of another NodeProxy’s source, without the need for any special steps or precautions.

CODE EXAMPLE 12.10: USING PBIND AS A NODEPROXY SOURCE.

(
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

SynthDef(\play, {
    arg atk = 0.002, rel = 0.08, buf = 0,
    rate = 1, start = 0, amp = 0.5, out = 0;
    var sig, env;
    env = Env.perc(atk, rel).ar(2);
    sig = PlayBuf.ar(
        1,
        buf,
        rate * BufRateScale.kr(buf),
        startPos: start
    );
    sig = sig * env * amp ! 2;
    Out.ar(out, sig);
}).add;

t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
)

// create a proxy and provide an initial source
Ndef(\a).fadeTime_(0).clock_(t).quant_(4).play;

( // source change occurs on a quantized beat
Ndef(\a, Pbind(
    \instrument, \play,
    \dur, 1/2,
    \buf, b,
    \amp, 0.2,
    \start, 36000,
));
)

// set a crossfade time of four beats
Ndef(\a).fadeTime_(t.beatDur * 4);

( // pattern-based proxies are crossfaded as expected
Ndef(\a, Pbind(
    \instrument, \play,
    \dur, 1/4,
    \buf, b,
    \amp, 0.3,
    \start, Pwhite(50000, 70000),
    \rate, -4.midiratio,
));
)

// create an effect and route the first proxy through it
Ndef(\reverb).fadeTime_(5).play;

(
Ndef(\reverb, {
    var sig;
    sig = Ndef.ar(\a, 2);
    sig = LPF.ar(GVerb.ar(sig.sum, 300, 5), 1500) * 0.2;
});
)

( // clean up:
Ndef.all.do({ |n| n.clear });
t.stop;
)

12.5  TaskProxy

TaskProxy serves as a proxy for a routine (introduced in Section 5.2), allowing us to manipulate a timed sequence in real time, even when that sequence isn’t fully defined.6 Though it’s possible to use TaskProxy directly (similar to using NodeProxy), here we will only focus on its subclass Tdef, which (like Ndef) inherits all the methods of its parent and avoids the need to use environment variables. Note that a Tdef’s source should not be an instance of a routine, but rather a function that would ordinarily appear inside of one. Code Example 12.11 provides a basic demonstration.

If a TaskProxy is in a “playing” state when its source is defined, it will begin executing its source function immediately. If the source stream is finite, the proxy will remain in a playing state when finished and will begin executing again as soon as a new source is supplied, even if the new source is identical to the previous one. Similarly, if a TaskProxy has reached its end, the play message will restart it. Like routines, an infinite loop is a valid source for a TaskProxy (just be sure to include a wait time!). When paused and resumed, a TaskProxy “remembers” its position and continues from that point. When stopped, it will start over from the beginning when played again. Like other proxies, clear will stop a TaskProxy and reset its source to nil.

CODE EXAMPLE 12.11: BASIC USAGE OF TaskProxy, ACCESSED THROUGH ITS SUBCLASS, Tdef.

Tdef(\t).play;

( // a finite-length source, execution begins immediately
Tdef(\t, {
    3.do{
        [6, 7, 8, 9].scramble.postln;
        0.5.wait;
    };
    "done.".postln
});
)

Tdef(\t).play; // do it again

( // a new, infinite-length source
Tdef(\t, {
    ~count = Pseq((0..9), inf).asStream;
    loop{
        ~count.next.postln;
        0.25.wait;
    };
});
)

Tdef(\t).pause;
Tdef(\t).resume; // continues from pause location
Tdef(\t).stop;
Tdef(\t).play; // restarts from beginning

Tdef(\t).clear;

Every TaskProxy runs on a clock and can be quantized, as shown in Code Example 12.12. If unspecified, a Tdef uses the default TempoClock (60 beats per minute) with a quant value of 1. Usage is essentially identical to Ndef. Like \dur values in a Pbind, wait times in a Tdef are interpreted as beat values with respect to its clock.

CODE EXAMPLE 12.12: SPECIFYING CLOCK AND QUANTIZATION INFORMATION FOR A TDEF.

(
// create a verbose clock at 108 bpm
t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
)

// create a task proxy and set clock/quant values
Tdef(\ticks).clock_(t).quant_(4).play;

(
// post a visual effect, execution begins on next quantized beat
Tdef(\ticks, {
    loop{
        4.do{ |n|
            "*---".rotate(n).postln;
            0.25.wait;
        }
    }
});
)

(
// clean up
Tdef(\ticks).clear;
t.stop;
)

Code Examples 12.11 and 12.12 aim to demonstrate basic usage, but mainly post values and don’t really do anything useful. How might a Tdef be used in a practical setting? Whenever you have some sequence of actions to be executed, Tdef is always a reasonable choice, especially if you envision dynamically changing the sequence as it runs. One option is to automate the sending of set messages to a NodeProxy. In Code Example 12.13, we have an Ndef that plays a sustained chord. Instead of repeatedly setting new note values ourselves, we can delegate this job to a Tdef.

CODE EXAMPLE 12.13: USING TDEF TO AUTOMATE THE PROCESS OF UPDATING A NODEPROXY WITH set MESSAGES.

Ndef(\pad).fadeTime_(3).play;

(
Ndef(\pad, { |notes = #[43, 50, 59, 66]|
    var sig;
    sig = notes.collect({ |n|
        4.collect({
            LFTri.ar(
                freq: (n + LFNoise2.kr(0.2).bipolar(0.25)).midicps,
                mul: 0.1
            );
        }).sum
    });
    sig = Splay.ar(sig.scramble, 0.5);
    sig = LPF.ar(sig, notes[3].midicps * 2);
});
)

(
Tdef(\seq, {
    var chords = Pseq([
        [48, 55, 62, 64],
        [41, 48, 57, 64],
        [55, 59, 64, 69],
        [43, 50, 59, 66],
    ], inf).asStream;
    loop{
        Ndef(\pad).xset(\notes, chords.next);
        8.wait;
    }
}).play
)

(
// clean up
Tdef(\seq).stop;
Ndef(\pad).clear(8);
)

TIP.RAND(); ARRAY ARGUMENTS IN A UGEN FUNCTION

It is possible for an argument declared in a UGen function (e.g., in a SynthDef or NodeProxy function) to be an array of numbers, as is the case in Code Example 12.13. However, once an array argument is declared, its size must remain constant. Additionally, the argument’s default value must be declared as a literal array, which involves preceding the array with a hash symbol (#). Literal arrays were briefly mentioned in Section 1.6.9, but not discussed in detail.
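As a minimal sketch of this rule (the SynthDef name \chordSketch and its parameters are arbitrary, not drawn from the text), note the hash symbol in the default value, and that any array supplied later must have the same size:

(
SynthDef(\chordSketch, { |notes = #[60, 64, 67], amp = 0.1, out = 0|
    var sig = SinOsc.ar(notes.midicps).sum * amp ! 2;
    sig = sig * Env.perc(0.01, 2).kr(2); // fixed-duration envelope frees the synth
    Out.ar(out, sig);
}).add;
)

Synth(\chordSketch); // default triad
Synth(\chordSketch, [notes: [57, 60, 64]]); // ok: same size (three notes)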

12.6  Recording a Live Coding Performance

What options do we have for capturing a live coding session? We could evaluate s.makeGui and record the audio straight to our hard drive, but this only captures the sound, and not the code, which is arguably a crucial element of the performance. If at some point mid-performance we happened to make some truly impressive sound, the code that generated it might be lost in the process; for example, it may have been deleted or overwritten in the heat of the moment. We could alternatively use screen recording software to capture the visual elements, but video files tend to be relatively large, and piling this recording task on top of


real-time audio processing might strain your computer. Plus, we’d still need to copy the code manually when watching the video later.

The History class, demonstrated in Code Example 12.14, provides a lightweight solution for recording code as it is evaluated over time. When a history session is started, it captures every subsequent piece of code that runs, exactly when it runs, until the user explicitly ends the history session. Once a history session has ended, we have the option of recording the code history to a text file, so that it can be studied, sent to a collaborator, or replayed. The saveCS method (short for “save as compile string”) writes a file to a path as a List of individual code evaluations, formatted as strings, which can be loaded in as the current history using History.loadCS, and played back automatically with History.play. This will feel as if a “ghost” version of yourself from the past is typing at your keyboard! If desired, a history playback can be stopped partway through with History.stop. Alternatively, a history can be saved with saveStory, which produces a more human-readable version, complete with commented timestamps. This type of save is meant to be played back manually and is well-suited for studying or editing.

CODE EXAMPLE 12.14: USE OF THE History CLASS TO RECORD AND REPLAY CODE EVALUATION.

History.start; // start a history session

// now, run a sequence of code statements:
s.boot;

Ndef(\k).fadeTime_(1).play;

Ndef(\k, {SinOsc.ar([200, 201], mul: 0.1)});

Ndef(\k, {SinOsc.ar([250, 253], mul: 0.1)});

(
Ndef(\k, {
    var sig, mod;
    mod = LFSaw.kr(0.3, 1).range(2, 40);
    sig = LFTri.ar([250, 253], mul: 0.1);
    sig = sig * LFTri.kr(mod).unipolar(1);
});
)

Ndef(\k).clear(3);

s.quit;

History.end; // stop the session

// save to your home directory
History.saveCS("~/myHistoryCS.scd".standardizePath);

History.clear; // clear the history, to demonstrate properly
History.play; // confirm history is currently empty

History.loadCS("~/myHistoryCS.scd".standardizePath); // load recorded history
History.play; // replay

// save to your home directory in "story" format
History.saveStory("~/myHistoryStory.scd".standardizePath);

Companion Code 12.1 features a live coding demonstration/performance, which exists in both history formats and a live screen recording.

Notes

1. https://toplap.org/about/.
2. https://sonic-pi.net/tutorial.html.
3. https://doc.sccode.org/Overviews/JITLib.html.
4. https://doc.sccode.org/Tutorials/JITLib/jitlib_basic_concepts_01.html.
5. Technically, when calling xset on a NodeProxy, this method does not directly “crossfade” between two argument values (which would create a glissando or something similar); rather, an xset message instantiates a new source Synth using the updated argument value and crossfades between it and the older Synth. This behavior can be confirmed by visually monitoring the Node tree.
6. TaskProxy is a reference to the Task class, a close relative of routines. Don’t be distracted by the fact that this class is not named “RoutineProxy”; routines and tasks are very similar, and both are subclasses of Stream. Though tasks and routines are not identical nor fully interchangeable, it is rare during practical usage to encounter an objective that can be handled by one but not the other.



The SuperCollider Book

edited by Scott Wilson, David Cottle, and Nick Collins

The MIT Press
Cambridge, Massachusetts / London, England

Contents

Foreword — James McCartney
Introduction — Scott Wilson, David Cottle, and Nick Collins

Tutorials
1 Beginner's Tutorial — David Cottle
2 The Unit Generator — Joshua Parmenter
3 Composition with SuperCollider — Scott Wilson and Julio d'Escriván
4 Ins and Outs: SuperCollider and External Devices — Stefan Kersten, Marije A. J. Baalman, and Till Bovermann

Advanced Tutorials
5 Programming in SuperCollider — Iannis Zannos
6 Events and Patterns — Ron Kuivila
7 Just-in-Time Programming — Julian Rohrhuber and Alberto de Campo
8 Object Modeling — Alberto de Campo, Julian Rohrhuber, and Till Bovermann

Platforms and GUI
9 Mac OSX GUI — Jan Trützschler von Falkenstein
10 SwingOSC — Hanns Holger Rutz
11 SuperCollider on Windows — Christopher Frauenberger
12 "Collision with the Penguin": SuperCollider on Linux — Stefan Kersten and Marije A. J. Baalman

Practical Applications
13 Sonification and Auditory Display in SuperCollider — Alberto de Campo, Julian Rohrhuber, Till Bovermann, and Christopher Frauenberger
14 Spatialization with SuperCollider — Marije A. J. Baalman and Scott Wilson
15 Machine Listening in SuperCollider — Nick Collins
16 Microsound — Alberto de Campo
17 Alternative Tunings with SuperCollider — Fabrice Mogini
18 Non-Real-Time Synthesis and Object-Oriented Composition — Brian Willkie and Joshua Parmenter

Projects and Perspectives
19 A Binaural Simulation of Varèse's Poème Électronique — Stefan Kersten, Vincenzo Lombardo, Fabrizio Nunnari, and Andrea Valle
20 High-Level Structures for Live Performance: dewdrop_lib and chucklib — James Harkins
21 Interface Investigations — Thor Magnusson
22 SuperCollider in Japan — Takeko Akamatsu
23 Dialects, Constraints, and Systems within Systems — Julian Rohrhuber, Tom Hall, and Alberto de Campo

Developer Topics
24 The SuperCollider Language Implementation — Stefan Kersten
25 Writing Unit Generator Plug-ins — Dan Stowell
26 Inside scsynth — Ross Bencina

Appendix: Syntax of the SuperCollider Language — Iannis Zannos
Subject Index
Code Index

Foreword James McCartney

Why use a computer programming language for composing music? Specifically, why use SuperCollider? There are several very high-level language environments for audio besides SuperCollider, such as Common Music, Kyma, Nyquist, and Patchwork. These all demonstrate very interesting work in this area and are worth looking into. SuperCollider, though, has the unique combination of being free, well supported, and designed for real time. It is a language for describing sound processes. SuperCollider is very good at allowing one to create nice sounds with minimal effort, but more important, it allows one to represent musical concepts as objects, to transform them via functions or methods, to compose transformations into higher-level building blocks, and to design interactions for manipulating music in real time, from the top-level structure of a piece down to the level of the waveform. You can build a library of classes and functions that become building blocks for your working style and in this way make a customized working environment. With SuperCollider, one can create many things: very long or infinitely long pieces, infinite variations of structure or surface detail, algorithmic mass production of synthesis voices, sonification of empirical data or mathematical formulas, to name a few. It has also been used as a vehicle for live coding and networked performances. Because of this open-endedness, early on, I often felt it difficult to know how best to write the documentation. There were too many possible approaches and applications.

Thus, I am pleased that there will now be a book on SuperCollider, and the best part of it for me is that I have not had to do much of the hard work to get it done. Since I made SuperCollider open source, it has taken on a life of its own and become a community-sustained project as opposed to being a project sustained by a single author. Many people have stepped up and volunteered to undertake tasks of documentation, porting to other operating systems, interfacing to hardware and software, writing new unit generators, extending the class library, maintaining a Web site, mailing lists, and a wiki, fixing bugs, and, finally, writing and editing the chapters of this book. All of these efforts have resulted in more features, better documentation, and a more complete, robust, and bug-free program.


SuperCollider came about as the latest in a series of software synthesis languages that I have written over the years. I have been interested in writing software to synthesize electronic music ever since I was in high school. At that time, I had written on a piece of notebook paper a set of imaginary subroutine calls in BASIC for implementing all of the common analog synthesizer modules. Of course, doing audio synthesis in BASIC on the hardware of that time was completely impractical, but the idea had become a goal of mine. When I graduated college, I went to have a look at E-mu in California and found that it was operating out of a two-story house. I figured that the synthesizer industry was a lot smaller than I had imagined and that I should rethink my career plans.

The first software synthesizer I wrote was a graphical patching environment called Synfonix that generated samples for the Ensoniq Mirage sampling keyboard using a Macintosh computer. I attempted to sell this program but sold only two copies, one to Ivan Tcherepnin at Harvard and another to Mark Polishook. A business lesson I learned from this is not to sell a product that requires purchasers to already own two niche products. The intersection of two small markets is near zero. In 1990, I wrote a program called Synth-O-Matic that I used personally but never meant to distribute, even though one copy I gave to a friend got circulated. I created this program after learning CSound and deciding that I never wanted to actually have to use CSound's assembly-language-like syntax. Synth-O-Matic had a more expression-oriented syntax for writing signal flow graphs and a graphical user interface for editing wave tables. I used this program on and off, but it was quite slow so I stopped using it, for the most part, in favor of hardware synthesizers.

It wasn't until the PowerPC came out that it became practical to do floating-point signal processing in real time on a personal computer. At the time I had been working on music for a modern dance piece, using Synth-O-Matic to do granular synthesis. It was taking a long time to generate the sound, and I was running behind schedule to get the piece done. On the day in March 1994 when the first PowerPC-based machine came out, I went and bought the fastest one. I recompiled my code, and it ran 32 times faster. I was then able to complete the piece on time. I noticed that my code was now running faster than real time, so I began working on a program designed to do real-time synthesis. Around this time I got a note in the mail from Curtis Roads, who had apparently gotten one of the circulating copies of Synth-O-Matic, encouraging me to further develop the program. So I took the Synth-O-Matic engine and combined it with the Pyrite scripting language object which I had written for MAX. This became SuperCollider version 1, which was released in March 1996. The first two orders were from John Bischoff and Chris Brown of The Hub.

The name "SuperCollider" has an amusing origin. During the early 1990s I worked in the Astronomy Department of the University of Texas at Austin on the Hubble Space Telescope Astrometry Science Team, writing software for data analysis and telescope observation planning. On the floors below my office was the Physics Department, some of the members of which were involved in the Superconducting Super Collider project. In 1993, Congress cut the funding for the project, and there were many glum faces around the building after that. I had been thinking about this merging, or "collision," if you will, of a real-time synthesis engine with a high-level garbage-collected language, and it seemed to me that it was an experiment that would likely fail, so I named it after the failed Big Science project of the day. Except that the experiment didn't fail. To my surprise, it actually worked rather well.

The version 1 language was dynamically typed, with a C-like syntax, closures borrowed from Scheme, and a limited amount of polymorphism. After using version 1 for a couple of years, and especially after a project on which I was invited by Iannis Zannos to work on a concert in an indoor swimming pool in Berlin, I realized that it had severe limitations on its ability to scale up to create large working environments. So I began working on version 2, which borrowed heavily from Smalltalk. It was for the most part the same as the language described in this book except for the synthesis engine and class library, which have changed a great deal. The goal of SuperCollider version 2 and beyond was to create a language for describing real-time interactive sound processes. I wanted to create a way to describe categories of sound processes that could be parameterized or customized. The main idea of SuperCollider is to algorithmically compose objects to create sound-generating processes. Unit generators, a concept invented by Max Mathews for his Music N languages, are very much like objects and are a natural fit for a purely object-oriented Smalltalk-like language.

In 2001 I was working on version 3 of SuperCollider, and because of the architecture of the server, it was looking like it really should be open source, so that anyone could modify it however they liked. I was (barely) making a living at the time selling SuperCollider, so the decision to give it away was a difficult one to make. But financially it was looking like I would need a "real" job soon, anyway. I was also worried that the period of self-employment on my résumé would begin looking suspect to potential employers. Ultimately, I did get a job, so I was able to open source the program. On the day that I made all of my previously copyright-protected programs free, my Web site experienced an eightfold increase in traffic. So obviously there was a lot of interest in an open source audio language and engine, especially one that is free. I hope that this book will enable and inspire the reader to apply this tool in useful and interesting ways. And I hope to hear and enjoy the results!

Introduction Scott Wilson, David Cottle, and Nick Collins

Welcome to The SuperCollider Book. We're delighted to present a collection of tutorials, essays, and projects that highlight one of the most exciting and powerful audio environments. SuperCollider (SC to its friends) is a domain-specific programming language specialized for sound but with capabilities to rival any general-purpose language. Though it is technically a blend of Smalltalk, C, and ideas from a number of other programming languages, many users simply accept SuperCollider as its own wonderful dialect, a superlative tool for real-time audio adventures. Indeed, for many artists, SuperCollider is the first programming language they learn, and they do so without fear because the results are immediately engaging; you can learn a little at a time in such a way that you hardly notice that you're programming until it's too late and you're hooked! The potential applications in real-time interaction, installations, electroacoustic pieces, generative music, audiovisuals, and a host of other possibilities make up for any other qualms. On top of that, it's free, powerful, and open source, and has one of the most supportive and diverse user and developer communities around.

Pathways

This book will be your companion to SuperCollider; some of you will already have experience and be itching to turn to the many and varied chapters further on from this point. We'll let you follow the book in any order you choose! But we would like to take care to welcome any newcomers and point them straight in the direction of chapter 1, which provides a friendly introduction to the basics. For those on Windows or Linux it may be read together with chapter 11 or 12, respectively, which cover some of the cross-platform installation issues. From there we suggest beginners continue through until chapter 4, as this path will provide you with some basic skills and knowledge which can serve as a foundation for further learning. For more advanced users, we suggest you look at the more "topics"-oriented chapters which follow. These chapters aren't designed to be read in any particular
order, so proceed with those of particular interest and relevance to you and your pursuits. Naturally we have referred to other chapters for clarification where necessary and have tried to avoid duplication of materials except where absolutely crucial for clarity. These "topics" chapters are divided into sections titled Advanced Tutorials, Platforms and GUI, and Practical Applications. They begin with chapter 5, "Programming in SuperCollider," which provides a detailed overview of SuperCollider as a programming language. This may be of interest to beginners with a computer science background who'd rather approach SC from a language design and theory perspective than through the more user-friendly approach in chapter 1. Chapters on a variety of subjects follow, including sonification, spatialization, microsound, GUIs, machine listening, alternative tunings, and non-real-time synthesis. Following these chapters is a section for intermediate and advanced users titled Projects and Perspectives. The material therein provides examples of how SuperCollider has been used in the real world. These chapters also provide some philosophical insight into issues of language design and its implications (most specifically in chapter 23, "Dialects, Constraints, and Systems within Systems"). This sort of intellectual pursuit has been an important part of SuperCollider's development; SC is a language that self-consciously aims for good design, and to allow and encourage elegance, and even beauty, in the user's code. Although this might seem a little abstract at first, we feel that this sets SC apart from other computer music environments and that as users advance, awareness of such things can improve their code. Finally, there is a section titled Developer Topics, which provides detailed "under the hood" information on SC. These chapters are for advanced users seeking a deeper understanding of SC and its workings and for those wishing to extend SC, for instance, by writing custom unit generator plug-ins.

Code Examples and Text Conventions

Initially SuperCollider was Mac only, but as an open source project since 2001, it has widened its scope to cover all major platforms, with only minor differences between them. Most code in this book should run on all platforms with the same results, and we will note places where there are different mechanisms in place; most of the time, the code itself will already have taken account of any differences automatically. For instance, SC includes cross-platform GUI classes such as View, Slider, and Window. These will automatically redirect to the correct GUI implementation, either Cocoa (on Mac OSX; see chapter 9) or SwingOSC (on all platforms; see chapter 10). However, there are some differences in the programming editor environments (such as available menu items) and the keyboard shortcuts. You are referred
as well to the extensive Help system that comes with the SuperCollider application; a Help file on keyboard shortcuts for the various platforms is prominently linked from the main Help page. Just to note, when you come across keyboard shortcuts in the text, they'll appear like this: [enter] designates the "enter" (and not the "return") key, [ctrl+a] means the control key plus the "a" key, and so on. Furthermore, all text appearing in the code font will almost always be valid SuperCollider code (very occasionally there may be exceptions for didactic purposes, such as here). You will also encounter some special SuperCollider terms (e.g., Synth, SynthDef, and Array) that aren't in code font and are discussed in a friendly manner; this is because they are ubiquitous concepts and it would be exhausting to have them in the code font every time. You may also see them appearing with a capital letter (i.e., Synths), or all lower case (synths), depending again on how formal we are being. Anyway, if you're new to SuperCollider, don't worry about this at all; chapter 1 will start you on the righteous path, and you'll soon be chatting about Synths and UGens like the rest of us.

The Book Web Site

This brings us to the accompanying Web site for the book, which contains all the code reproduced within, ready to run, as well as download links to the application itself, its source code, and all sorts of third-party extras, extensions, libraries, and examples. A standardized version of SuperCollider is used for the book, SuperCollider 3.4, for which all book code should work without trouble. Of course, the reader may find it productive to download newer versions of SuperCollider as they become available, and it is our intention to provide updated versions of the example code where needed. Although we can make no hard promises, in this fast-paced world of operating system shifts, that the code in this book will remain eternally correct—the ongoing development and improvement of environments such as SuperCollider are a big part of what makes them so exciting—we've done our best to present you with a snapshot of SuperCollider that should retain a core validity in future years.

Final Thoughts

Please be careful with audio examples; there is of course the potential to make noises that can damage your hearing if you're not sensible with volume levels. Until you become accustomed to the program, we suggest you start each example with the volume all the way down, and then slowly raise it to a comfortable level. (And if you're not getting any sound, remember to check if you've left the monitors off or
the computer audio muted.) Some examples may use audio input and have the potential to feed back unless your monitoring arrangements are correct. The easiest way to deal with such examples is to monitor via headphones. We couldn't possibly cover everything concerning SuperCollider, and there are many online resources to track down new developments and alternative viewpoints, including mailing lists, forums, and a host of artists, composers, technology developers, and SuperCollider maniacs with interesting pages. We have provided a few of the most important links (at the time of writing) below, but Wikigoopedigle, or whatever your contemporary equivalent is, will allow you to search out the current SuperCollider 15.7 as necessary. We're sure you'll have fun as you explore this compendium, and we're also sure you'll be inspired to some fantastic art and science as you go. Enjoy exploring the SuperCollider world first charted by James McCartney but since expanded immeasurably by everyone who partakes in this infinitely flexible open source project.

Primary Web Resources

Main community home page:
Application download and project site:
James McCartney's home page:
The venerable swiki site:

Acknowledgments

We owe a debt of gratitude to the chapter contributors and to the wider SuperCollider community, who have supported this project. SC's community is one of its greatest strengths, and innumerable phone calls, e-mails, chats over cups of tea at the SC Symposium, and many other interactions, have contributed to making this book and SC itself stronger. We apologize to all who have put up with our insistent editing and acknowledge the great efforts of the developers to prepare a stable SuperCollider 3.4 version for this book. A thousand thank-yous:

1000.do({ "domo arigato gozaimashita".postln }); // repeat count reconstructed

Thanks to all at MIT Press who have assisted in the handling of our proposal and manuscript for this book. Thank you to friends, family, and colleagues who had to deal with us while we were immersed in the lengthy task of organizing, editing, and assembling this book. Editing may seem like a solitary business, but from their perspective we’re quite sure it was a team effort!


Many thanks to our students, who have served as guinea pigs for pedagogical approaches, tutorial materials, experimental developments, and harebrained ideas. Now back to your exercise wheels! Finally, the editors would like to thank each other for support during the period of gestation. At the time of writing, we’ve been working on this project for 2 years to bring the final manuscript to fruition. Though Scott and Nick have met on many occasions (and were co-organizers of the 2006 SuperCollider Symposium at the University of Birmingham), neither has ever met David in person (we sometimes wonder if he really exists!); but you should see the number of e-mails we’ve sent each other. For many months, in the heat of the project, David modified his routine to include a 3 A.M. e-mail check to keep up with those using GMT, which imparts a celestially imposed 8-hour advantage. In any case, although it has at times been exhausting, seeing this book through to fruition has been a pleasure, and we hope it brings you pleasure to read it and learn about SC.

3 Composition with SuperCollider

Scott Wilson and Julio d'Escriván

3.1 Introduction

The actual process of composing, and deciding how to go about it, can be one of the most difficult things about using SuperCollider. People often find it hard to make the jump from modifying simple examples to producing a full-scale piece. In contrast to Digital Audio Workstation (DAW) software such as Pro Tools, for example, SC doesn't present the user with a single "preferred" way of working. This can be confusing, but it's an inevitable side effect of the flexibility of SC, which allows for many different approaches to generating and assembling material. A brief and incomplete list of ways people might use SC for composition could include the following:

• Real-time interactive works with musicians
• Sound installations
• Generating material for tape music composition (to be assembled later on a DAW), perhaps performed in real time
• As a processing and synthesis tool kit for experimenting with sound
• To get away from always using the same plug-ins
• To create generative music programs
• To create a composition or performance tool kit tailored to one's own musical ideas

All of these activities have different requirements and suggest different approaches. This chapter attempts to give the composer or sound artist some starting points for creative exploration. Naturally, we can’t hope to be anywhere near exhaustive, as the topic of the chapter is huge and in some senses encompasses all aspects of SC. Thus we’ll take a pragmatic approach, exploring both some abstract ideas and concrete applications, and referring you to other chapters in this book where they are relevant.


3.1.1 Coding for Flexibility

The notion of making things that are flexible and reusable is something that we'll keep in mind as we examine different ideas in this chapter. As an example, you might have some code that generates a finished sound file, possibly your entire piece. With a little planning and foresight, you might be able to change that code so that it can easily be customized on the fly in live performance, or be adapted to generate a new version to different specifications (quad instead of stereo, for instance). With this in mind, it may be useful to utilize environment variables which allow for global storage and are easily recalled. You'll recall from chapter 1 that environment variables are preceded by a tilde (~).

// some code we may want to use later (numeric values reconstructed)
~something = { Pulse.ar(440) * EnvGen.ar(Env.perc, doneAction: 2) };
// when the time comes, just call it by its name and play it
~something.play;

Since environment variables do not have the limited scope of normal variables, we'll use them in this chapter for creating simple examples. Keep in mind, however, that in the final version of a piece there may be good reasons for structuring your code differently.
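To make the difference in scope concrete, here is a minimal sketch of our own (the variable names are arbitrary):

(
var local;
local = "I vanish after this code block is evaluated";
~kept = "I can be recalled from any code block later";
local.postln;
)
~kept.postln; // still available
// local.postln; // would fail: normal variables are not visible here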

3.2 Control and Structure

When deciding how to control and structure a piece, you need to consider both practical and aesthetic issues: Who is your piece for? Who is performing it? (Maybe you, maybe an SC Luddite . . .) What kind of flexibility (or expressiveness!) is musically meaningful in your context? Does pragmatism (i.e., maximum reliability) override aesthetic or other concerns (i.e., you're a hard-core experimentalist, or you are on tenure track and need to do something technically impressive)? A fundamental part of designing a piece in SC is deciding how to control what happens when. How you do this depends upon your individual needs. You may have a simple list of events that need to happen at specific times, or a collection of things that can be triggered flexibly (for instance, from a GUI) in response to input from a performer, or algorithmically. Or you may need to combine multiple approaches. We use the term structure here when discussing this issue of how to control when and how things happen, but keep in mind that this could mean anything from the macro scale to the micro scale. In many cases in SC the mechanisms you use might be the same.


3.2.1 Clocks, Routines, and Tasks

Here's a very simple example that shows you how to schedule something to happen at a given time. It makes use of the SystemClock class.

SystemClock.sched(2, { "foo".postln }); // delay value is arbitrary

The first argument to the sched message is a delay in seconds, and the second is a Function that will be evaluated after that delay. In this case the Function simply posts the word "foo," but it could contain any valid SC code. If the last thing to be evaluated in the Function returns a number, SystemClock will reschedule the Function, using that value as the new delay time.

// "foo" repeats every second
SystemClock.sched(1, { "foo".postln; 1 });
// "bar" repeats at a random delay
SystemClock.sched(1, { "bar".postln; 1.0.rand });
// clear all scheduled events
SystemClock.clear;

SystemClock has one important limitation: it cannot be used to schedule events which affect native GUI widgets on OSX. For this purpose another clock exists, called AppClock. Generally you can use it in the same way as SystemClock, but be aware that its timing is slightly less accurate. There is a shortcut for scheduling something on the AppClock immediately, which is to wrap it in a Function and call defer on it.

// causes an "operation cannot be called from this Process" error
SystemClock.sched(1, { SCWindow.new.front });
// defer reschedules GUI code on the AppClock, so this works
SystemClock.sched(1, { { SCWindow.new.front }.defer });

GUI, by the way, is short for Graphical User Interface and refers to things such as windows, buttons, and sliders. This topic is covered in detail in chapters 9 and 10, so although we'll see some GUI code in a few of the examples in this chapter, we won't worry too much about the nitty-gritty details of it. Most of it should be pretty straightforward and intuitive, anyway, so for now, just move past any bits that aren't clear and try to focus on the topics at hand. Another Clock subclass, TempoClock, provides the ability to schedule events according to beats rather than in seconds. Unlike the clocks we've looked at so far, you need to create an instance of TempoClock and send sched messages to it, rather than to the class. This is because you can have many instances of TempoClock, each with its own tempo, but there's only one each of SystemClock and AppClock. By varying
a TempoClock's tempo (in beats per second), you can change the speed. Here's a simple example.

t = TempoClock.new; // make a new TempoClock
t.sched(1, { "Hello!".postln; 1 }); // repeat every beat
t.tempo = 2; // twice as fast
t.clear;

TempoClock also allows beat-based and bar-based scheduling, so it can be particularly useful when composing metric music. (See the TempoClock Help file for more details.) Now let's take a look at Routines. A Routine is like a Function that you can evaluate a bit at a time, and in fact you can use one almost anywhere you'd use a Function. Within a Routine, you use the yield method to return a value and pause execution. The next time you evaluate the Routine, it picks up where it left off.

r = Routine({
    "foo".yield;
    "bar".yield;
});
r.value; // foo
r.value; // bar
r.value; // we've reached the end, so it returns nil

Routine has a commonly used synonym for value, which is next. Although "next" might make more sense semantically with a Routine, "value" is sometimes preferable, for reasons we'll explore below. Now here's the really interesting thing: since a Routine can take the place of a Function, if you evaluate a Routine in a Clock, and yield a number, the Routine will be rescheduled, just as in the SystemClock example above.

r = Routine({
    "foo".postln;
    1.yield; // reschedule after 1 second
    "bar".postln;
    1.yield;
    "foobar".postln;
});
SystemClock.sched(0, r);


// Fermata (pitch and timing values reconstructed)
(
s.boot;
r = Routine({
    x = Synth(\default, [freq: 76.midicps]);
    1.wait;
    x.release;
    y = Synth(\default, [freq: 73.midicps]);
    "Waiting...".postln;
    nil.yield; // fermata
    y.release;
    z = Synth(\default, [freq: 69.midicps]);
    2.wait;
    z.release;
});
)
// do this, then wait for the fermata
r.play;
// feel the sweet tonic...
r.play;

Figure 3.1 A simple Routine illustrating a musical use of yield.

Figure 3.1 is a (slightly) more musical example that demonstrates a fermata of arbitrary length. This makes use of wait, a synonym for yield, and of Routine's play method, which is a shortcut for scheduling it in a clock. By yielding nil at a certain point, the clock doesn't reschedule, so you'll need to call play again when you want to continue, thus "releasing" the fermata. Functions understand a message called fork, which is a commonly used shortcut for creating a Routine and playing it in a Clock.

(
{
    "something".postln;
    1.wait;
    "something else".postln;
}.fork;
)

Figure 3.2 is a similar example with a simple GUI control. This time we'll use a Task, which you may remember from chapter 1.


(
// numeric values reconstructed
t = Task({
    loop({ // loop the whole thing
        3.do({ // do this 3 times
            x.release(0.1);
            x = Synth(\default, [freq: 76.midicps]);
            0.5.wait;
            x.release(0.1);
            x = Synth(\default, [freq: 73.midicps]);
            0.5.wait;
        });
        "I'm waiting for you to press resume".postln;
        nil.yield; // fermata
        x.release(0.1);
        x = Synth(\default, [freq: 69.midicps]);
        1.wait;
        x.release(0.1);
    });
});

w = Window.new("Task Example", Rect(400, 400, 300, 40)).front;
w.view.decorator = FlowLayout(w.view.bounds);
Button.new(w, Rect(0, 0, 90, 30))
    .states_([["Play/Resume", Color.black, Color.clear]])
    .action_({ t.resume(0) });
Button.new(w, Rect(0, 0, 90, 30))
    .states_([["Pause", Color.black, Color.clear]])
    .action_({ t.pause });
Button.new(w, Rect(0, 0, 90, 30))
    .states_([["Finish", Color.black, Color.clear]])
    .action_({
        t.stop;
        x.release(0.1);
        w.close;
    });
)

Figure 3.2 Using Task so you can pause the sequence.


A Task works almost the same way that a Routine does, but is meant to be played only with a Clock. A Task provides some handy advantages, such as the ability to pause. As well, it prevents you from accidentally calling play twice. Try playing with the various buttons and see what happens. Note that the example above demonstrates both fixed scheduling and waiting for a trigger to continue. The trigger needn't be from a GUI button; it can be almost anything, for instance, audio input. (See chapter 15.) By combining all of these resources, you can control events in time in pretty complicated ways. You can nest Tasks and Routines or combine fixed scheduling with triggers; in short, anything you like. Figure 3.3 is an example that adds varying tempo to the mix, as well as adding some random events. You can reset a Task or Routine by sending it the reset message.

r.reset;
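Task's transport messages can also be exercised without any GUI; a minimal sketch of our own:

t = Task({ loop({ "tick".postln; 1.wait }) });
t.start;  // begin
t.pause;  // freeze it
t.resume; // pick up where it left off
t.stop;   // a stopped Task can be started again later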

3.2.2 Other Ways of Controlling Time in SC

There are two other notable methods of controlling sequences of events in SC: Patterns and the Score object. Patterns provide a high-level abstraction based on Streams of events and values. Since Patterns and Streams are discussed in chapter 6, we will not explore their workings in great detail at this point, but it is worth saying that Patterns often provide a convenient way to produce a Stream of values (or other objects), and that they can be usefully combined with the methods shown above. Figure 3.4 demonstrates two simple Patterns: Pseq and Pxrand. Pseq specifies an ordered sequence of objects (here numbers used as durations of time between successive events) and a number of repetitions (in this case an infinite number, indicated by the special value inf). Pxrand also has a list (used here as a collection of pitches), but instead of proceeding through it in order, a random element is selected each time. The "x" indicates that no individual value will be selected twice in a row. Patterns are like templates for producing Streams of values. In order to use a Pattern, it must be converted into a Stream, in this case using the asStream message. Once you have a Stream, you can get values from it by using the next or value messages, just as with a Routine. (In fact, as you may have guessed, a Routine is a type of Stream as well.) Patterns are powerful because they are "reusable," and many Streams can be created from one Pattern template. (Chapter 6 will go into more detail regarding this.) As an aside, and returning to the idea of flexibility, the value message above demonstrates an opportunity for polymorphism, which is a fancy way of saying that different objects understand the same message.1 Since all objects understand "value" (most simply return themselves), you can substitute any object (a Function, a Routine, a number, etc.) that will return an appropriate value for p or q in the example above.
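For instance, all of the following understand value (a minimal sketch of our own):

5.value; // a number simply returns itself
{ rrand(60, 72) }.value; // a Function evaluates itself
Pseq([60, 62, 64], inf).asStream.value; // a Stream returns its next value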


(
// numeric values reconstructed
r = Routine({
    c = TempoClock.new; // make a TempoClock
    // start a "wobbly" loop
    t = Task({
        loop({
            x.release(0.1);
            x = Synth(\default, [freq: 61.midicps, amp: 0.2]);
            0.2.wait;
            x.release(0.1);
            x = Synth(\default, [freq: 67.midicps, amp: 0.2]);
            rrand(0.075, 0.25).wait; // random wait
        });
    }, c); // use the TempoClock to play this Task
    t.start;
    nil.yield;
    // now add some notes
    y = Synth(\default, [freq: 73.midicps, amp: 0.3]);
    nil.yield;
    y.release;
    y = Synth(\default, [freq: 79.midicps, amp: 0.3]);
    c.tempo = 2; // double time
    nil.yield;
    t.stop; y.release; x.release; // stop the Task and Synths
});
)
r.next; // start loop
r.next; // first note
r.next; // second note; loop goes double time
r.next; // stop loop and fade

Figure 3.3 Nesting Tasks inside Routines.


// random notes from lydian b7 scale (pitch set reconstructed)
p = Pxrand([64, 66, 68, 70, 71, 73, 74], inf).asStream;
// ordered sequence of durations
q = Pseq([1, 2, 0.5], inf).asStream;
t = Task({
    loop({
        x.release(2);
        x = Synth(\default, [freq: p.value.midicps]);
        q.value.wait;
    });
});
t.start;
t.stop; x.release(2);

Figure 3.4 Using Patterns within a Task.

Since p and q are evaluated each time through the loop, it's even possible to do this while the Task is playing. (See figure 3.5.) Taking advantage of polymorphism in ways like this can provide great flexibility, and can be useful for anything from generic compositions to algorithmically variable compositions.
The second method of controlling event sequences is the Score object. Score is essentially an ordered list of times and OSC commands. This takes the form of nested Arrays. That is,

[
    [time, [cmd]],
    [time, [cmd]],
    ...
]

As you'll recall from chapter 2, OSC stands for Open Sound Control, which is the network protocol SC uses for communicating between language and server. What you probably didn't realize is that it is possible to work with OSC messages directly, rather than through objects such as Synths. This is a rather large topic, so since the OSC messages which the server understands are outlined in the Server Command Reference Help file, we'll just refer you there if you'd like to explore further. In any case, if you find over time that you prefer to work in "messaging style" rather than "object style," you may find Score useful. Figure 3.6 provides a short example. Score also provides some handy functionality for non-real-time synthesis (see chapter 18).
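As a taste of "messaging style," here is a sketch of our own that creates and releases a Synth by sending raw OSC commands to the server (argument values are arbitrary):

// /s_new takes defName, nodeID, addAction, and targetID, followed by control pairs
x = s.nextNodeID;
s.sendMsg("/s_new", "default", x, 0, 1, "freq", 440);
s.sendMsg("/n_set", x, "gate", 0); // release it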


// numeric values reconstructed
p = 60; // a constant note
q = Pseq([1, 2, 0.5], inf).asStream; // ordered sequence of durations
t = Task({
    loop({
        x.release(2);
        x = Synth(\default, [freq: p.value.midicps]);
        q.value.wait;
    });
});
t.start;
// now change p
p = Pseq([60, 62, 64], inf).asStream; // to a Pattern: do, re, mi
p = { rrand(60, 72) }; // to a Function: random notes from a chromatic octave
t.stop; x.release(2);

Figure 3.5 Thanks to polymorphism, we can substitute objects that understand the same message.

(
// numeric values reconstructed
SynthDef(\ScoreSine, { arg freq = 440;
    Out.ar(0,
        SinOsc.ar(freq, 0, 0.2)
        * Line.kr(1, 0, 1, doneAction: 2)
    );
}).add;

x = [
    // args for s_new are synthdef, nodeID, addAction, targetID, synth args
    [0.0, [\s_new, \ScoreSine, 1000, 0, 0, \freq, 600]],
    [0.5, [\s_new, \ScoreSine, 1001, 0, 0, \freq, 660]],
    [1.0, [\s_new, \ScoreSine, 1002, 0, 0, \freq, 720]],
    [1.5, [\c_set, 0, 0]] // dummy command to mark end of NRT synthesis time
];

z = Score(x);
z.play;
)

Figure 3.6 Using “messaging style”: Score.


// here's a synthdef that allows us to play from a buffer with a fade-out
// (numeric values and folder path reconstructed)
SynthDef(\playbuf, { arg out = 0, buf = 0, gate = 1;
    Out.ar(out,
        PlayBuf.ar(1, buf, BufRateScale.kr(buf), loop: 1)
        * Linen.kr(gate, 0.01, 1, 1, doneAction: 2) // releases synth when fade done
    );
}).add;
// load all the paths in the sounds folder into buffers
~someSounds = "sounds/*".pathMatch.collect({ |path| Buffer.read(s, path) });
// now here's the "score," so to speak; execute these one line at a time
~nowPlaying = Synth(\playbuf, [\buf, ~someSounds[0]]);
~nowPlaying.release; ~nowPlaying = Synth(\playbuf, [\buf, ~someSounds[1]]);
~nowPlaying.release; ~nowPlaying = Synth(\playbuf, [\buf, ~someSounds[2]]);
~nowPlaying.release;
// free the buffer memory
~someSounds.do(_.free);

Figure 3.7 Executing one line at a time.

3.2.3 Cue Players

Now let's turn to a more concrete example. Triggering sound files, a common technique when combining live performers with a "tape" part, is easily achieved in SuperCollider. There are many approaches to the construction of cue players. These range from a list of individual lines of code that you evaluate one by one during a performance, to fully fledged GUIs that completely hide the code from the user. One question you need to ask is whether to play the sounds from RAM or stream them from hard disk. The former is convenient for short files, and the latter for substantial cues that you wouldn't want to keep in RAM. There are several classes (both in the standard distribution of SuperCollider and within extensions by third-party developers) that help with these two alternatives. Here's a very simple example which loads a file into RAM and plays it:

~myBuffer = Buffer.read(s, "sounds/a11wlk01.wav"); // load a sound
~myBuffer.play; // play it, and notice it will release the node after playing

Buffer's play method is really just a convenience method, though, and we'll probably want to do something fancier, such as fade in or out.


(
// numeric values and folder path reconstructed
SynthDef(\playbuf, { arg out = 0, buf = 0, gate = 1;
    Out.ar(out,
        PlayBuf.ar(1, buf, BufRateScale.kr(buf), loop: 1)
        // with doneAction: 2 we release the synth when the fade is done
        * Linen.kr(gate, 0.01, 1, 1, doneAction: 2)
    );
}).add;
~someSounds = "sounds/*".pathMatch.collect({ |path| Buffer.read(s, path) });
n = 0; // a counter
// here's our GUI code
w = Window.new("Simple Cue Player", Rect(400, 400, 200, 40)).front;
w.view.decorator = FlowLayout(w.view.bounds);
// this will play each cue in turn
Button.new(w, Rect(0, 0, 80, 30))
    .states_([["Play Cue", Color.black, Color.clear]])
    .action_({
        if(n < ~someSounds.size, {
            if(n > 0, { ~nowPlaying.release });
            ~nowPlaying = Synth(\playbuf, [\buf, ~someSounds[n]]);
            n = n + 1;
        });
    });
// this sets the counter to the first cue
Button.new(w, Rect(0, 0, 80, 30))
    .states_([["Stop/Reset", Color.black, Color.clear]])
    .action_({ n = 0; ~nowPlaying.release });
// free the buffers when the window is closed
w.onClose = { ~someSounds.do(_.free) };
)

Figure 3.8 Playing cues with a simple GUI.

Figure 3.7 presents an example which uses multiple cues in a particular order, played by executing the code one line at a time. It uses the PlayBuf UGen, which you may remember from chapter 1. The middle two lines of the latter section of figure 3.7 consist of two statements each, and thus do two things when you press the enter key to execute. You can of course have lines of many statements, which can all be executed at once. (Lines are separated by carriage returns; statements, by semicolons.) The "one line at a time" approach is good when developing something for yourself or an SC-savvy user, but you might instead want something a little more elaborate or user-friendly. Figure 3.8 is a simple example with a GUI. SC also allows for streaming files in from disk using the DiskIn and VDiskIn UGens (the latter allows for variable-speed streaming). There are also a number of
third-party extension classes that do things such as automating the required housekeeping (e.g., Fredrik Olofsson's RedDiskInSampler). The previous examples deal with mono files. For multichannel files (stereo being the most common case) it is simplest to deal with interleaved files.2 Sometimes, however, you may need to deal with multiple mono cues. Figure 3.9 shows how to sort them based on a folder containing subfolders of mono channels.
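Disk streaming itself is straightforward; here is a minimal sketch of our own using DiskIn (the cue path is the standard SC sample):

SynthDef(\diskPlayer, { arg out = 0, buf = 0;
    Out.ar(out, DiskIn.ar(1, buf));
}).add;
b = Buffer.cueSoundFile(s, "sounds/a11wlk01.wav", 0, 1);
x = Synth(\diskPlayer, [\buf, b]);
// when done:
x.free; b.close; b.free;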

3.3 Generating Sound Material

The process of composition deals as much with creating sounds as it does with ordering them. The ability to control sounds and audio processes at a low level can be great for finding your own compositional voice. Again, an exhaustive discussion of all of SuperCollider's sound-generating capabilities would far exceed the scope of this chapter, so we'll look at a few issues related to generating and capturing material in SC and give a concrete example of an approach you might want to adapt for your own purposes. As before, we will work here with sound files for the sake of convenience, but you should keep in mind that what we're discussing could apply to more or less any synthesis or processing technique.

3.3.1 Recording

At some point you're probably going to want to record SC's output for the purpose of capturing a sound for further audio processing or "assembly" on a DAW, for documenting a performance, or for converting an entire piece to a distributable sound file format. To illustrate this, let's make a sound by creating an effect that responds in an idiosyncratic way to the amplitude of an input file and then record the result. You may not find a commercial plug-in that will do this, but in SC, you should be able to do what you can imagine (more or less!). The Server class provides easy automated recording facilities. Often, this is the simplest way to capture your sounds. (See figure 3.10.) After executing this, you should have a sound file in SC's recordings folder (see the doc for platform-specific locations) labeled with the date and time SC began recording: SC_YYMMDD_HHMMSS.aif. Server also provides handy buttons on the Server window (appearance or availability varies by platform) to prepare, stop, and start recording. On OSX it may look like this, or similar (see figure 3.11). The above example uses the default recording options. Using the methods prepareForRecord(path), recChannels_, recHeaderFormat_, and recSampleFormat_, you can customize the recording process. The latter three methods must be called before prepareForRecord. A common case is to change the sample format.
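For instance, a minimal sketch of our own (the path and settings are arbitrary):

s.recChannels_(4); // e.g., a quad version
s.recHeaderFormat_("wav");
s.recSampleFormat_("int24");
s.prepareForRecord("recordings/myPiece.wav".standardizePath);
s.record;
s.stopRecording;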


// gather all your folder paths (folder names and values reconstructed)
// this will pathMatch each folder in the collection, i.e., we will have a collection
// of collections of paths
~groupOfindivCueFolders = "sounds/*".pathMatch.collect({ |item| (item ++ "*").pathMatch });
Post << ~groupOfindivCueFolders; // see them all
// check how many cues you will have in the end
~groupOfindivCueFolders.size;
// automate the buffering process for all cues
~bufferedCues = ~groupOfindivCueFolders.collect({ |item, i|
    item.collect({ |path| Buffer.read(s, path) });
});
// now all our cue files are sitting in their buffers
Post << ~bufferedCues[0]; // here is cue 1; see it in the post window
// play them all in a Group, using our previous synthdef
// we use bind here to ensure they start simultaneously
s.bind({
    ~nowPlaying = Group.new(s); // a group to put all the channel synths in
    ~bufferedCues[0].do({ |cue| Synth(\playbuf, [\buf, cue], ~nowPlaying) });
});
// fade them out together by sending a release message to the group
~nowPlaying.release;

Figure 3.9 Gathering up files for multichannel cues.


s.boot; // make sure the server is running
// first, evaluate this section
b = Buffer.read(s, "sounds/a11wlk01.wav"); // a source
s.prepareForRecord; // prepare the server to record (you must do this first)
// simultaneously start the processing and recording
s.bind({
    // here's our funky effect (filter values reconstructed)
    x = { var columbia, amp;
        columbia = PlayBuf.ar(1, b, loop: 1);
        amp = Amplitude.ar(columbia, 0.01, 2); // "sticky" amp follower
        Out.ar(0, Resonz.ar(columbia, 200 + (amp * 4000), 0.1)); // filter freq follows amp
    }.play;
    s.record;
});
s.pauseRecording; // pause
s.record; // start again
s.stopRecording; // stop recording and close the resulting sound file

Figure 3.10 Recording the results of making sounds with SuperCollider.

Figure 3.11 A screen shot of a Server window.


The default is to record as 32-bit floating-point values. This has the advantage of tremendous dynamic range, which means you don't have to worry about clipping and can normalize later, but it's not compatible with all audio software.

s.recSampleFormat_("int16");

More elaborate recording can be realized, of course, by using the DiskOut UGen. Server's automatic functionality is in fact based on this. SC also has non-real-time synthesis capabilities, which may be useful for rendering CPU-intensive code. (See chapter 18.)
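A minimal DiskOut sketch of our own (path, formats, and buffer size are arbitrary choices):

b = Buffer.alloc(s, 65536, 2); // a buffer for disk writing
b.write("recordings/diskout.aiff".standardizePath, "aiff", "int16", 0, 0, true); // leaveOpen: true
x = { DiskOut.ar(b, In.ar(0, 2)); Silent.ar }.play(s, addAction: \addToTail);
// ... later:
x.free; b.close; b.free;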

3.3.2 Thinking in the Abstract

Something that learners often find difficult to do is to stop thinking about exactly what they want to do at the moment, and instead consider whether the problem they're dealing with has a general solution. Generalizing your code can be very powerful. Imagine that we want to make a sound that consists of three bands of resonated impulses. We might do something like this:

(
// numeric values reconstructed
{
    Resonz.ar(Dust.ar(20), 300, 0.02)
    + Resonz.ar(Dust.ar(20), 600, 0.02)
    + Resonz.ar(Dust.ar(20), 900, 0.02)
    * 3.reciprocal // scale to ensure no clipping
}.play;
)

Now, through a bit of careful thinking, we can abstract the problem from this concrete realization and come up with a more general solution:

(
f = 300; // base frequency (values reconstructed)
n = 3; // number of resonators
{
    Mix.fill(n, { |i| Resonz.ar(Dust.ar(20), f * (i + 1), 0.02) })
    * n.reciprocal // scale to ensure no clipping
}.play;
)

This version has an equivalent result, but we've expressed it in terms of generalized instructions. It shows you how to construct a Synth consisting of resonated impulses tuned in whole-number ratios rather than as an exact arrangement of objects and connections, as you might do in a visual patching language such as Max/MSP. We've
also used variables (f for frequency and n for number of resonators) to make our code easy to change. This is the great power of abstraction: by expressing something as a general solution, you can be much more flexible than if you think in terms of exact implementations. Now it happens that the example above is hardly shorter than the first, but look what we can do with it:

(
f = 40; // (values reconstructed)
n = 50;
{
    Mix.fill(n, { |i| Resonz.ar(Dust.ar(20), f * (i + 1), 0.02) })
    * n.reciprocal // scale to ensure no clipping
}.play;
)

By changing f and n we're able to come up with a much more complex variant. Imagine what the hard-coded version would look like with 50 individual Resonz UGens typed out by hand. In this case, not only is the code more flexible, it's shorter; and because of that, it's much easier to understand. It's like the difference between saying "Make me 50 resonators" and saying "Make me a resonator. Make me a resonator. Make me a resonator. . . ." This way of thinking has potential applications in almost every aspect of SC, even GUI construction (see figure 3.12).

3.3.3 Gestures

For a long time, electroacoustic and electronic composition has been a rather "manual" process. This may account for the computer's being used today as a virtual analog studio; many sequencer software GUIs attest to this way of thinking. However, as software has become more accessible, programming may in fact be replacing this virtual splicing approach. One of the main advantages of a computer language is generalization, or abstraction, as we have seen above. In the traditional "tape" music studio approach, the composer does not differentiate gesture from musical content. In fact, traditionally they amount to much the same thing in electronic music. But can a musical gesture exist independently of sound? In electronic music, gestures are, if you will, the morphology of the sound, a compendium of its behavior. Can we take sound material and examine it under another abstracted morphology? In ordinary musical terms this could mean a minor scale can be played in crescendo or diminuendo and remain a minor scale. In electroacoustic music this can happen, for example, when we modulate one sound with the spectrum of another.


(
// numeric values reconstructed
f = 100;
n = 16; // number of resonators
t = Array.fill(n, { |i|
    { Resonz.ar(Dust.ar(20), f * (i + 1), 0.02) * n.reciprocal }.play // scale to ensure no clipping
});
// now make a GUI
// a scrolling window, so we don't run out of space
w = Window.new("Buttons", Rect(100, 100, 400, 300), scroll: true);
w.view.decorator = FlowLayout.new(w.view.bounds); // auto-layout the widgets
n.do({ |i|
    Button.new(w, Rect(0, 0, 120, 30)).states_([
        ["Freq " ++ (f * (i + 1)) ++ " On", Color.black, Color.white],
        ["Freq " ++ (f * (i + 1)) ++ " Off", Color.white, Color.black]
    ]).action_({ arg butt;
        t[i].run(butt.value == 0);
    });
});
w.front;
)

Figure 3.12 A variable number of resonators with an automatically created GUI.

The shape of one sound is generalized and applied to another; we are accustomed to hearing this in signal-processing software. In this section we would like to show how SuperCollider can be used to create "empty gestures," gestures that are not linked to any sound in particular. They are, in a sense, gestures waiting for a sound, abstractions of "how to deliver" the musical idea. First we will look at some snippets of code that we can reuse in different patches, and then we will look at some Routines we can call up as part of a "Routine of Routines" (i.e., a score, so to speak). If you prefer to work in a more traditional way, you can just run the Routines with different sounds each time, record them to hard disk, and then assemble or sample as usual in your preferred audio editing/sequencing software. However, an advantage of doing the larger-scale organization of your piece within SC is that since you are interpreting your code during the actual performance of your piece, you can add elements of variability to what is normally fixed
at the time of playback. You can also add elements of chance to your piece without necessarily venturing fully into algorithmic composition. (Naturally, you can always record the output to a sound file if desired.) This, of course, brings us back to issues of design, and exactly what you choose to do will depend on your own needs and inclinations.

3.3.4 Making "Empty" Gestures

Let's start by making a list where all our Buffers will be stored. This will come in handy later on, as it will allow us to call up any file we opened with our file browser during the course of our session. In the following example we open a dialogue box and can select any sound(s) on our hard disk:

// you will be able to add multiple sound files; just shift-click when selecting
(
var file, soundPath;
~buffers = List[];
Dialog.getPaths({ arg paths;
    paths.do({ |soundPath|
        // post the path to verify that it is the one you expect
        soundPath.postln;
        // add the most recently selected Buffer to your list
        ~buffers.add(Buffer.read(s, soundPath));
    });
});
)

You can check to see how many Buffers are in your list so far (watch the post window!),

~buffers.size;

and you can see where each sound is inside your list. For example, here is the very first sound stored in our Buffer list:

~buffers[0];

Now that we have our sound in a Buffer, let's try some basic manipulations. First, let's just listen to the sound to verify that it is there:

~buffers[0].play;

Now, let's make a simple SynthDef so we can create Synths which play our Buffer (for example, in any Routine, Task, or other Stream) later on. For the purposes of this demonstration we will use a very simple percussive envelope, making sure we have doneAction: 2 in order to free the synth after the envelope terminates:


// buffer player with done action and control of envelope and panning
// (argument defaults reconstructed)
SynthDef(\samplePlayer, { arg out = 0, buf = 0, rate = 1, at = 0.01, rel = 0.1, pos = 0, pSpeed = 1, lev = 0.5;
    var sample, panT, amp, aux;
    sample = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf));
    panT = FSinOsc.kr(pSpeed);
    amp = EnvGen.ar(Env.perc(at, rel, lev), doneAction: 2);
    Out.ar(out, Pan2.ar(sample, panT, amp));
}).add;

As mentioned in chapter 1, we use the add method here rather than one of the more low-level SynthDef methods such as send. In addition to sending the def to the server, add also stores it within the global SynthDescLib in the client app, so that its arguments can be looked up later by the Patterns and Streams system (see chapter 6). We'll need this below. Let's test the SynthDef:

Synth(\samplePlayer, [\out, 0, \buf, ~buffers[0], \rel, 0.25]);

As you can hear, it plays 0.25 second of the selected sound. Of course, if you have made more than one Buffer list, you can play sounds from any list, and also play randomly from that list. For example, from the list we defined earlier we could do this:

Synth(\samplePlayer, [\out, 0, \buf, ~buffers.choose, \rel, 0.25]);

Let's define a Routine that allows us to create a stuttering/rushing gesture in a glitch style. We'll use a new Pattern here, Pgeom, which specifies a geometric series.3 Note that Patterns can be nested. Figure 3.13 shows a Pseq whose list consists of two Pgeoms. Remember that you can use a Task or Routine to sequence several such gestures within your piece. You can, of course, modify the Routine to create other accel/decel Patterns by substituting different Patterns. You can also add variability by making some of them perform choices when they generate their values (e.g., using Prand or Pxrand). You can use this, for example, to choose which speaker a sound comes from without repeating speakers:

Pxrand([0, 1, 2, 3], inf) // bus indices reconstructed
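To see what Pgeom produces, here is a quick sketch of our own:

// Pgeom(start, grow, length): each value is the previous one times grow
Pgeom(1, 0.5, 5).asStream.nextN(5).postln; // [1, 0.5, 0.25, 0.125, 0.0625]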

The advantage of having assigned your gestures to environment variables (using the tilde shortcut) is that now you are able to experiment in real time with the ordering, simultaneity, and internal behavior of your gestures. Let's take a quick look at one more important Pattern: Pbind. It creates a Stream of Events, which are like a kind of dictionary of named properties and associated values. If you send the message play to a Pbind, it will play the Stream of Events, in a fashion similar to the Clock examples above.
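An Event can also be played on its own; a minimal sketch of our own, using the default SynthDef:

(freq: 440, dur: 0.5, amp: 0.2).play; // property names paired with values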


/*
a routine for creating a ritardando stutter with panning; you must have run the code in figure 3.9, so that this routine may find some sounds already loaded into buffers; you can change the index of ~bufferedCues to test the routine on different sounds
*/
(
// Pattern and Synth argument values reconstructed
~stut = Routine({
    var dur, pos;
    ~stutPatt = Pseq([Pgeom(0.05, 1.1, 20), Pn(0.5, 4), Pgeom(0.5, 0.9, 20)]);
    ~str = ~stutPatt.asStream;
    44.do({
        dur = ~str.next;
        dur.postln; // so we can check values on the post window
        ~sample = Synth(\samplePlayer, [\out, 0, \buf, ~bufferedCues[0][0], \at, 0.01, \rel, dur, \pSpeed, 2]);
        dur.wait;
    });
});
)
// now play it
~stut.play;
// reset before you play again
~stut.reset;

Figure 3.13 Making a stuttering gesture using a geometric Pattern.

Here's a simple example which makes sound using what's called the "default" SynthDef:

// randomly selected frequency; duration 1 second (values reconstructed)
Pbind(\freq, Prand([300, 500, 231.2, 399.2], inf), \dur, 1).play;

It's also possible to substitute Event Streams as they play. When you call play on a Pattern, it returns an EventStreamPlayer, which actually creates the individual Events from the Stream defined by the Pattern. EventStreamPlayer allows its Stream to be substituted while it is playing.

// (values reconstructed)
~gest = Pbind(\instrument, \samplePlayer, \dur, 0.2, \rel, 0.2);
~player = ~gest.play; // make it play
// substitute the stream
~player.stream = Pbind(\instrument, \samplePlayer, \dur, 0.2, \rate, Pxrand([0.5, 1, 1.5, 2], inf), \rel, 0.2).asStream;
~player.stop;


If you have evaluated the expressions above, you will notice that you don't hear the simple default SynthDef, but rather the one we made earlier. Since we added it above, the Pbind is able to look it up in the global library and get the information it needs about the def. Now, the Pbind plays repeatedly at intervals specified by the \dur argument, but it will stop playing as soon as it receives nil for this or any other argument. So we can take advantage of this to make Streams that are not repetitive and thus make single gestures (of course, we can also choose to work in a looping/layering fashion, but more of that later). Here is a Pbind making use of our accelerando Pattern to create a rushing sound:

// (values reconstructed)
~gest = Pbind(\instrument, \samplePlayer, \dur, Pgeom(0.5, 0.9, 30), \rel, 0.2);
~gest.play;

When the Stream created from the Pgeom ends, it returns nil and the EventStreamPlayer stops playing. If you call play on it again, you will notice that it makes the same rushing sound without the need to reset it, as we had to do with the Routine, since play returns a new EventStreamPlayer each time. More complex gestures can be made, of course, by nesting Patterns:

Pbind(\instrument, \samplePlayer,
    \dur, Pseq([Pgeom(0.5, 0.8, 20), Pgeom(0.05, 1.25, 20)]),
    \rel, 0.2, \pSpeed, 1).play;

Pbind(\instrument, \samplePlayer,
    \dur, Pseq([Pgeom(0.5, 0.8, 20), Pgeom(0.05, 1.25, 20)]),
    \rate, Pxrand([0.5, 1, 2], inf),
    \rel, 0.2, \pSpeed, 1).play;

Similar things can be done with the Pdef class from the JIT library (see chapter 7). Let's designate another environment variable to hold a sequence of values that we can plug in at will and change on the fly. This Pattern holds values that would work well for \dur:

// the nil is so it will stop
~rhythm = Pseq([0.4, 0.2, 0.2, 0.1, 0.1, nil]);

We can then plug it into a Pdef, which we'll call \a:

~gest = Pdef(\a, Pbind(\instrument, \samplePlayer, \dur, ~rhythm, \rel, 0.2, \pSpeed, 1));
~gest.play;

If we define another sequence of values we want to try,

~rhythm = Pseq([0.1, 0.1, 0.1, 0.4, 0.4, nil]);


and then reevaluate the Pdef,

~gest = Pdef(\a, Pbind(\instrument, \samplePlayer, \dur, ~rhythm, \rel, 0.2, \pSpeed, 1));

we can hear that the new ~rhythm has taken the place of the previous one. Notice that it played immediately, without our needing to execute ~gest.play. This is one of the advantages of working with the Pdef class: once the Stream is running, anything that is "poured" into it will come out. In the following example, we assign a Pattern to the rate values and obtain an interesting variation:

~gest = Pdef(\a, Pbind(\instrument, \samplePlayer, \att, 0.01, \rel, 0.2,
    \lev, {rrand(0.2, 0.8)}, \dur, 0.1,
    \rate, Pseq([Pbrown(0.5, 2, 0.2, 40)], inf)));

Experiments like these can be conducted by creating Patterns for any of the arguments that our SynthDef will take. If we have "added" more than one SynthDef, we can even modulate the \instrument key by getting it to choose among several different options, as in the sketch below. Once we have a set of gestures we like, we can trigger them in a certain order using a Routine, or we can record them separately and load them as audio files into our audio editor. The latter approach is useful if we want to use a cue player for the final structuring of a piece.
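For instance, a sketch of that idea, assuming the \samplePlayer def from above and a hypothetical second def called \sineBlip:

Pbind(
    \instrument, Prand([\samplePlayer, \sineBlip], inf), // \sineBlip is hypothetical
    \dur, 0.25,
    \rel, 0.2
).play;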

3.4 Conclusions

What next? The best way to compose with SuperCollider is to set yourself a project with a deadline! In this way you will come to grips with the specific things you need to know, and you will learn them much better than by just reviewing everything the program can do. SuperCollider offers a variety of approaches to electronic music composition. It can be used for sound creation, thanks to its rich offering of UGens (see chapter 2), as well as for assembling your piece in flexible ways. We have shown that the assembly of sounds can itself become a form of synthesis, illustrated by our use of Patterns and Streams. Another approach is to review some of the classic techniques used in electroacoustic composition and try to re-create them yourself using SuperCollider. Below we refer you to some interesting texts that may enhance your creative investigations.

Further Reading

Budón, O. 2000. "Composing with Objects, Networks, and Time Scales: An Interview with Horacio Vaggione." Computer Music Journal, 24(3): 9-22.

Collins, N. 2010. Introduction to Computer Music. Chichester: Wiley.


Dodge, C., and T. A. Jerse. 1997. Computer Music: Synthesis, Composition, and Performance, 2nd ed. New York: Schirmer.

Holtzman, S. R. 1981. "Using Generative Grammars for Music Composition." Computer Music Journal, 5(1): 51-64.

Loy, G. 1989. "Composing with Computers: A Survey of Some Compositional Formalisms and Music Programming Languages." In M. V. Mathews and J. R. Pierce, eds., Current Directions in Computer Music Research, pp. 291-396. Cambridge, MA: MIT Press.

Loy, G., and Abbott, C. 1985. "Programming Languages for Computer Music Synthesis, Performance, and Composition." ACM Computing Surveys, 17(2): 235-265.

Mathews, M. V. 1963. "The Digital Computer as a Musical Instrument." Science, 142(3592): 553-557.

Miranda, E. R. 2001. Composing Music with Computers. London: Focal Press.

Roads, C. 1996. The Computer Music Tutorial. Cambridge, MA: MIT Press.

Roads, C. 2001. Microsound. Cambridge, MA: MIT Press.

Wishart, T. 1994. Audible Design: A Plain and Easy Introduction to Practical Sound Composition. York, UK: Orpheus the Pantomime.

Notes

1. You may have noticed that the terms "message" and "method" are used somewhat interchangeably. With polymorphism the distinction becomes clear: different objects may respond to the same message with different methods. In other words, the message is the command, and the method is what the object does in response.

2. Scott Wilson's De-Interleaver application for OS X and Jeremy Friesner's cross-platform command-line tools audio_combine and audio_split allow for convenient interleaving and deinterleaving of audio files.

3. A geometric series is a series with a constant ratio between successive terms.

25 Writing Unit Generator Plug-ins

Dan Stowell

Writing a unit generator (UGen) for SuperCollider 3 can be extremely useful, allowing the addition of new audio generation and processing capabilities to the synthesis server. The bulk of the work is C++ programming, but the API (Application Programming Interface) is essentially quite simple, so even if you have relatively little experience with C/C++, you can start to create UGens based on existing examples. You're probably already familiar with UGens from other chapters. Before creating new UGens of your own, let's first consider what a UGen really is, from the plug-in programmer's point of view.

25.1 What Is a UGen, Really?

A UGen is a component for the synthesis server, defined in a plug-in, which can receive a number of floating-point data inputs (audio- or control-rate signals, or constant values) and produce a number of floating-point data outputs, as well as "side effects" such as writing to the post window, accessing a buffer, or sending a message over a network. The server can incorporate the UGen into a synthesis graph, passing data from one UGen to another. When using the SC language, we need to have available a representation of each UGen which provides information about its inputs and outputs (their number, type, etc.). These representations allow us to define synthesis graphs in SC language (SynthDefs). Therefore, each UGen also comes with an SC class; these classes are always derived from a base class, appropriately called UGen. So to create a new UGen, you need to create both the plug-in for the server and the class file for the language client.

25.2 An Aside: Pseudo UGens

Before we create a "real" UGen, we'll look at something simpler. A pseudo UGen is an SC class that "behaves like" a UGen from the user's point of view but doesn't


involve any new plug-in code. Instead, it just encapsulates some useful arrangement of existing units. Let's create an example, a simple reverb effect (the delay times here are approximate):

Reverb1 {
    *ar { |in|
        var out = in;
        out = AllpassN.ar(out, 0.05, rand(0.05), 1);
        ^out
    }
}

This isn't a very impressive reverb yet, but we'll improve it later. As you can see, this is a class like any other, with a single class method. The *ar method name is not special — in fact, you could use any method name (including

*new). We are free to use the full power of the SC language, including constructs such as rand, to choose a random delay time for our effect. The only real requirement for a pseudo UGen is that the method return something that can be embedded in a synth graph. In our simple example, what is returned is an AllpassN applied to the input.

Copy the above code into a new file and save it as, for instance, Reverb1.sc in your SCClassLibrary or Extensions folder; then recompile. You'll now be able to use Reverb1.ar within your SynthDefs, just as if it were a "real" UGen. Let's test this:

s.boot;

x = {
    var freq, son, out;
    // Chirps at arbitrary moments
    freq = EnvGen.ar(Env.perc(0.01, 0.2), Dust.ar(2), 400, 100);
    son = SinOsc.ar(freq, 0, 0.2);
    // We apply reverb to the left and right channels separately
    out = {Reverb1.ar(son)}.dup;
}.play(s);

x.free;

You may wish to save this usage example as a rudimentary Help file, Reverb1.html.

To make the reverb sound more like a reverb, we modify it to perform six similar all-pass delays in a row, and we also add some LPF units to the chain to create a nice frequency roll-off. We also add parameters:

Reverb1 {
    *ar { |in, wet = 1, cutoff = 3000|
        var out = in;
        6.do({ out = LPF.ar(AllpassN.ar(out, 0.05, rand(0.05), 1), cutoff) });
        ^(out * wet) + (in * (1 - wet))
    }
}

This is on the way toward becoming a useful reverb unit, without our having created a real plug-in at all. The approach has definite limitations, though. It is confined to processes that can be expressed as a combination of existing units — it can't create new types of processing or new types of server behavior. It may also be less efficient than an equivalent UGen, because it creates a small subgraph of units that pass data to each other and must each maintain their own internal state. Now let's consider what is involved in creating a "real" UGen.

25.3 Steps Involved in Creating a UGen

1. First, consider exactly what functionality you want to encapsulate into a single unit. An entire 808 drum machine, or just the cymbal sound? Smaller components are typically better, because they can be combined in many ways within a SynthDef. Efficiency should also be a consideration.

2. Second, write the Help file. Really — it's a good idea to do this before you start coding, even if you don't plan to release the UGen publicly. As well as being a good place to keep the example code which you can use while developing and testing the UGen, it forces you to think clearly about the inputs and outputs and how the UGen will be used in practice, thus weeding out any conceptual errors. A Help file is also a good reminder of what the UGen does — don't underestimate the difficulties of returning to your own code, months or years later, and trying to decipher your original intentions! The Help file will be an HTML file with the same name as the UGen. There is a "Documentation Style Guide" in the SC Help system which includes tips and recommendations for writing Help documentation. But, of course, during development the Help file doesn't need to be particularly beautiful.

3. Third, write the class file. You don't need to do this before starting on the C++ code, but it's a relatively simple step. Existing class files (e.g., for SinOsc, LPF, Pitch, Dwhite) can be helpful as templates. More on this shortly.

4. Fourth, write the plug-in code. The programming interface is straightforward, and again existing plug-in code can be a helpful reference: all UGens are written as plug-ins — including the "core" UGens — so there are lots of code examples available.

We now consider writing the class file and writing the plug-in code in turn.


25.4 Writing the Class File

A class file for a UGen is much like any other SC class, with the following conditions:

It must be a subclass of UGen. This is so that methods defined in the UGen class can be used when the language builds the SynthDef (synth graph definition).

The name of the class must match the name used in the plug-in code — the class name is used to tell the server which UGen to instantiate.

It must implement the appropriate class methods for the rates at which it can run (e.g., *ar, *kr, and/or *ir). These method names are referenced for rate checking during the SynthDef building process.

The class methods must call the multiNew method (defined in the main UGen class), which processes the arguments and adds the UGen correctly to the SynthDef that is being built.

The class file does not have any direct connection with the C++ plug-in code — after all, it's the server that uses the plug-in code, while the class file is for the language client. Let's look at a well-known example:

SinOsc : UGen {
    *ar { arg freq = 440.0, phase = 0.0, mul = 1.0, add = 0.0;
        ^this.multiNew('audio', freq, phase).madd(mul, add)
    }
    *kr { arg freq = 440.0, phase = 0.0, mul = 1.0, add = 0.0;
        ^this.multiNew('control', freq, phase).madd(mul, add)
    }
}

As you can see, SinOsc is a subclass of UGen and implements two class methods. Both of these methods call multiNew and return the result, which is one or more instances of the UGen we are interested in. The methods also call madd, which we'll discuss shortly.

The first argument to multiNew is a symbol indicating the rate at which the particular UGen instance will operate: this can be 'audio', 'control', 'scalar', or 'demand'. The remaining arguments are those that will actually be passed to the C++ plug-in — here freq and phase. If any of these arguments are arrays, multiNew performs multichannel expansion, creating a separate unit to handle each channel. Indeed, this is why the method is called multiNew.

Note that the mul and add arguments are not passed to multiNew. This means that the actual plug-in code for SinOsc will never be able to access them.


Instead, this UGen makes use of the madd method, which is essentially a convenience for multiplication and addition of the unit's output. As well as saving the programmer from having to implement the multiplication and addition part of the process, the madd method performs some general optimizations: in the very common degenerate case of multiplying by 1 and adding 0, no processing is really required, so the UGen is simply returned unaltered. It is the convention to add mul and add arguments to UGens as the final two arguments, as is done here; these two arguments are often very useful and are supported by many UGens. (Because they are so common, they are often left undocumented in Help files.)

Let's start to draft the class file for a UGen we can implement. We'll create a basic "flanger" which takes some input and then adds an effect controlled by rate and depth parameters (the default values here are illustrative):

Flanger : UGen {
    *ar { arg in, rate = 0.1, depth = 0.5, mul = 1.0, add = 0.0;
        ^this.multiNew('audio', in, rate, depth).madd(mul, add)
    }
    *kr { arg in, rate = 0.1, depth = 0.5, mul = 1.0, add = 0.0;
        ^this.multiNew('control', in, rate, depth).madd(mul, add)
    }
}

Save this as Flanger.sc in your extensions directory. If you recompile, you'll find that this is sufficient to allow you to use Flanger.ar or Flanger.kr in SynthDefs, which the SuperCollider language will happily compile — but of course those SynthDefs won't run yet, because we haven't created anything to tell the server how to produce the Flanger effect.

25.4.1 Checking the Rates of Your Inputs

Because SuperCollider supports different signal rates, it is useful to add a bit of "sanity checking" to your UGen class to ensure that the user doesn't try to connect things in a way that doesn't make sense — for example, plugging an audio-rate value into a scalar-rate input. The UGen class provides a checkInputs method which you can override to perform any appropriate checks. When the SynthDef graph is built, each UGen's checkInputs method is called. The default method defined in UGen simply passes through to checkValidInputs, which checks that each of the inputs is really something that can be plugged into a synth graph (and not some purely client-side object such as, say, an SCWindow or a Task).


The BufWr UGen is an example which implements its own rate checking. Let's look at what the class does:

checkInputs {
    if (rate == 'audio' and: {inputs.at(1).rate != 'audio'}, {
        ^("phase input is not audio rate: " + inputs.at(1) + inputs.at(1).rate)
    });
    ^this.checkValidInputs
}

If BufWr is used to write audio-rate data to a buffer, then the input specifying the phase (i.e., the position at which data are written) must also be audio rate — there's no natural way to map control-rate index data to a buffer which is taking audio-rate data. Therefore the class overrides the checkInputs method to test explicitly for this. The rate variable is the rate of the unit under consideration (a symbol, just like the first argument to multiNew). The inputs variable is an array of the unit's inputs, each of which will be a UGen and thus will also have a rate variable. So the method compares the present unit's rate against its first input's rate. It simply returns a string if there's a problem (returning anything other than nil is a sign of an error found while checking inputs). If there's no problem, it passes through to the default checkValidInputs method — if you implement your own input checking, don't forget to pass through to this check.

Many UGens produce output at the same rate as their first input — for example, filters such as LPF or HPF. If you look at their class definitions (or their superclass, in the case of LPF and HPF — an abstract class called Filter), you'll see that they call a convenience method for this common case, checkSameRateAsFirstInput. Observe the result of these checks:

s.boot;
x = {LPF.ar(WhiteNoise.kr)}.play(s); // Error
x = {LPF.ar(WhiteNoise.ar)}.play(s); // OK
x.free;
x = {LPF.kr(WhiteNoise.ar)}.play(s); // Error
x = {LPF.kr(WhiteNoise.kr)}.play(s); // OK
x.free;

What happens if you don't add rate checking to your UGens? Often it makes little difference, but ignoring rate checking can sometimes lead to unusual errors that are hard to trace. For example, a UGen that expects control-rate input is relatively safe, because it expects less input data than an audio-rate UGen — so if given audio-rate data, it simply ignores most of it. But in the reverse case, a UGen that expects audio-rate data but is given only control-rate data may read garbage input from memory that it shouldn't be reading.


Returning to the Flanger example created earlier, you may wish to add rate checking to that class. In fact, since the Flanger is a kind of filter, you might think it sensible to use the checkSameRateAsFirstInput approach, either directly or by modifying the class so that it subclasses Filter rather than UGen.
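A minimal sketch of the direct approach — only the new method is shown; the *ar and *kr methods stay as before:

Flanger : UGen {
    // ... *ar and *kr class methods as defined earlier ...

    checkInputs {
        ^this.checkSameRateAsFirstInput
    }
}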

25.5 Writing the C++ Code

25.5.1 Build Environments: Xcode, scons...

UGen plug-ins are built just like any other C++ project. To make things easier for yourself as a developer, you can use and adapt one of the project files which are distributed along with SuperCollider's source code:

On Mac, the Xcode project file Plugins.xcodeproj is used to build the core set of SuperCollider plug-ins. It's relatively painless to add a new "target" to this project in order to build your own plug-ins; this is the approach used in the SuperCollider Help document "Writing Unit Generators," which has more details about the Xcode specifics.

On Linux, the scons project file SConstruct is used to build SuperCollider as a whole. You can edit this file using a text editor to add your plug-in's build instructions. Alternatively, the "sc3-plugins" SourceForge project provides an SConstruct file purely for building UGens; you may find it easier to start from that as a template.

On Windows, Visual Studio project files are provided to compile plug-ins, including a UGEN_TEMPLATE_VCPROJ.vcprojtemplate file which you can use as a basis.

You can, of course, use other build environments if you prefer.

25.5.2 When Your Code Will Be Called

The server (scsynth) will call your plug-in code at four distinct points:

When scsynth boots, it calls the plug-in's load function, which primarily declares which UGens the plug-in can provide.

When a UGen is instantiated (i.e., when a synth starts playing), scsynth calls the UGen's constructor function to perform the setting up of the UGen.

To produce sound, scsynth calls each UGen's calculation function in turn, once for every control period. This is typically the function which does most of the interesting work in the UGen. Since it is called only once during a control period, this function must produce either a single control-rate value or a whole block's worth of audio-rate values during one call. (Note: demand UGens don't quite fit this description and will be covered later.)


When a synth is ended, some UGens may need to perform some tidying up, such as freeing memory. If so, these UGens provide a destructor function, which is called at this point.

25.5.3 The C++ Code for a Basic UGen

The code in figure 25.1 shows the key elements we need to include in our Flanger plug-in code. Here is what this code does:

First, the #include command pulls in the main header file for SuperCollider's plug-in interface, SC_PlugIn.h. This is sufficient to include enough SuperCollider infrastructure for most types of UGen. (For phase vocoder UGens, more may be needed, as described later.)

The static InterfaceTable pointer is a reference to a table of SuperCollider functions, such as the ones used to register a new UGen.

We define a data structure (a "struct") which will hold any data we need to store during the operation of the UGen. Anything that needs to be remembered or passed from one audio block to the next must be stored here. Note that the struct inherits from the base struct Unit — this is necessary so that scsynth can correctly write information into the struct, such as the rate at which the unit is running.

We declare our UGen's functions, using the extern "C" specifier so that the scsynth executable is able to reference the functions using C linkage. In a given plug-in we are allowed to define one or more UGens. Each of these will have one constructor ("Ctor") function, one or more calculation ("next") functions, and optionally one destructor ("Dtor") function.

Our constructor function, Flanger_Ctor(), takes a pointer to a Flanger struct and must prepare the UGen for execution. It must do the following three things:

1. Initialize the Flanger struct's member variables appropriately. In this case we initialize the delaysize member to a value representing a 20-millisecond maximum delay, making use of the SAMPLERATE macro which the SuperCollider API provides to specify the sample rate for the UGen. For some of the other struct members, we wish to calculate the values based on an input to the UGen. We can do this using the IN0 macro, which grabs a single control-rate value from the specified input. Here, we use IN0(1) — remembering that numbering starts at 0, this corresponds to the second input, defined in the Flanger class file as "rate." These macros (and others) will be discussed later.

2. Tell scsynth what the calculation function will be for this instance of the UGen. The SETCALC macro stores a reference to the function in our unit's struct. In our example there's only one choice, so we simply call SETCALC(Flanger_next). It's possible


#include "SC_PlugIn.h"

static InterfaceTable *ft;

// the struct will hold data which we want to pass from one function to another
// (e.g. from the constructor to the calc func), or from one call of the calc
// func to the next
struct Flanger : public Unit {
    float rate, delaysize, fwdhop, readpos;
    int writepos;
};

// function declarations, exposed to C
extern "C" {
    void load(InterfaceTable *inTable);
    void Flanger_Ctor(Flanger *unit);
    void Flanger_next(Flanger *unit, int inNumSamples);
}

void Flanger_Ctor(Flanger *unit) {
    // Here we must initialise state variables in the Flanger struct,
    // typically with reference to control-rate/scalar-rate inputs.
    unit->delaysize = SAMPLERATE * 0.02f; // Fixed 20ms max delay
    float rate = IN0(1);
    // Rather than using rate directly, we're going to calculate the size of
    // jumps we must make each time to scan through the delay line at "rate"
    float delta = (unit->delaysize * rate) / SAMPLERATE;
    unit->fwdhop = delta + 1.0f;
    unit->rate = rate;

    // IMPORTANT: This tells scsynth the name of the calculation function
    // for this UGen.
    SETCALC(Flanger_next);

    // Should also calc 1 sample's worth of output --
    // ensures each ugen's pipes are "primed".
    Flanger_next(unit, 1);
}

void Flanger_next(Flanger *unit, int inNumSamples) {
    float *in = IN(0);
    float *out = OUT(0);

    float depth = IN0(2);

    float rate = unit->rate;
    float fwdhop = unit->fwdhop;
    float readpos = unit->readpos;
    int writepos = unit->writepos;
    int delaysize = unit->delaysize;

    float val, delayed;

    for (int i = 0; i < inNumSamples; ++i) {
        val = in[i];

        // Do something to the signal before outputting! (not yet done)

        out[i] = val;
    }

    unit->rate = rate;
    unit->fwdhop = fwdhop;
    unit->writepos = writepos;
    unit->readpos = readpos;
}

void load(InterfaceTable *inTable) {
    ft = inTable;
    DefineSimpleUnit(Flanger);
}

Figure 25.1 C++ code for a Flanger UGen. This code doesn't add any effect to the sound yet, but contains the key elements required for all UGens.


to define multiple calculation functions and allow the constructor to decide which one to use. This is covered later.

3. Calculate one sample's worth of output, typically by calling the unit's calculation function and asking it to process one sample. The purpose of this is to "prime" the inputs and outputs of all the unit generators in the graph and to ensure that the constructors for UGens farther down the chain have their input values available, so they can initialize correctly.

Our calculation function, Flanger_next(), should perform the main audio processing. In this example it doesn't actually alter the sound — we'll get to that shortly — but it illustrates some important features of calculation functions. It takes two arguments passed in by the server: a pointer to the struct and an integer specifying how many values are to be processed (this will be 1 for control rate, more for audio rate — typically 64).

The last thing in our C++ file is the load() function, called when the scsynth executable boots up. We store the reference to the interface table which is passed in — although you don't see any explicit references to ft elsewhere in the code, they are hidden behind macros which make use of it to call functions in the server. We must also declare to the server each of the UGens which our plug-in defines. This is done using the macro DefineSimpleUnit(Flanger), which tells the server to register a UGen with the name Flanger and with a constructor function named Flanger_Ctor. It also tells the server that no destructor function is needed. If we did require a destructor, we would instead use DefineDtorUnit(Flanger), which tells the server that we've also supplied a destructor function named Flanger_Dtor. You must name your constructor/destructor functions in this way, since the naming convention is hardcoded into the macros.

So what is happening inside our calculation function? Although in our example the input doesn't actually get altered before being output, the basic pattern for a typical calculation function is given. We do the following:

Create pointers to the input and output arrays which we will access:

float *in = IN(0);
float *out = OUT(0);

The macros IN and OUT return appropriate pointers for the desired inputs/outputs — in this case the first input and the first output. If the input is audio-rate, then in[0] will refer to the first incoming sample, in[1] to the next incoming sample, and so on. If the input is control-rate, then there is only one incoming value, in[0].

We use the macro IN0 again to grab a single control-rate value, here the "depth" input. Note that IN0 is actually a shortcut to the first value in the location referenced by IN: IN0(2) is exactly the same as IN(2)[0].


We copy some values from the UGen's struct into local variables. This can improve the efficiency of the unit, since the C++ optimizer will typically cause the values to be loaded into registers.

Next we loop over the number of input frames, each time taking an input value, processing it, and producing an output value. We could take values from multiple inputs, and even produce multiple outputs, but in this example we're using only one full-rate input and producing a single output. Two important notes:

If an input/output is control-rate and you mistakenly treat it as audio-rate, you will be reading/writing memory you should not be, and this can cause bizarre problems and crashes; essentially this is just the classic C/C++ "gotcha" of accidentally treating an array as being bigger than it really is. Note that in our example we assume that the input and output are of the same size, although it's possible that they aren't — some UGens can take audio-rate input and produce control-rate output. This is why it is useful to make sure your SuperCollider class code includes the rate-checking code described earlier in this chapter. You can see why the checkSameRateAsFirstInput approach is useful in this case.

The server uses a "buffer coloring" algorithm to minimize the use of buffers and to optimize cache performance. This means that any of the output buffers may be the same as one of the input buffers, which allows for in-place operation — very efficient. You must be careful, however, not to write any output sample before you have read the corresponding input sample. If you break this rule, the input may be overwritten with output, leading to undesired behavior. If you can't write the UGen efficiently without breaking this rule, you can instruct the server not to alias the buffers by using the DefineSimpleCantAliasUnit or DefineDtorCantAliasUnit macros in the load() function, rather than the DefineSimpleUnit or DefineDtorUnit macros. (The Help file on writing UGens provides an example in which this ordering is important.)

Finally, having produced our output, we may have modified some of the variables we loaded from the struct; we need to store them back to the struct so the updated values are used next time. Here we store the rate value back to the struct — although we don't modify it in this example, we will shortly change the code so that this may happen.

The code in figure 25.1 should compile correctly into a plug-in. With the class file in place and the plug-in compiled, you can now use the UGen in a synth graph (the argument values here are illustrative):

s.boot;

x = {
    var son, dly, out;
    son = Saw.ar([100, 101], 0.2).mean;
    out = Flanger.ar(son, 0.5, 0.8);
    out.dup;
}.play(s);

Remember that Flanger doesn't currently add any effect to the sound. But we can at least check that it runs correctly (outputting its input unmodified and undistorted) before we start to make things interesting.

25.5.4 Summary: The Three Main Rates of Data Input

Our example has taken input in three different ways:

Using IN0 in the constructor to take an input value and store it to the struct for later use. Since this reads a value only once, the input is being treated as a scalar-rate input.

Using IN0 in the calculation function to take a single input value. This treats the input as control-rate.

Using IN in the calculation function to get a pointer to the whole array of inputs. This treats the input as audio-rate. Typically the size of such an input array is accessed using the inNumSamples argument, but note that if you create a control-rate UGen with audio-rate inputs, then inNumSamples will be wrong (it will be 1), so you should instead use the macro FULLBUFLENGTH (see table 25.2).

If the data fed to one of your UGen's inputs is actually audio-rate, there is no danger in treating it as control-rate or scalar-rate; the end result is simply to ignore the "extra" data provided to your UGen. Similarly, a control-rate input can safely be treated as scalar-rate. The result would be crude downsampling without low-pass filtering, which may be undesirable but will not crash the server.
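As a compact reminder, here is how the three styles might look for a hypothetical unit whose input 1 drives them (these are fragments to place in the constructor or calculation function, not a complete plug-in):

// in the constructor: read once -- the input is treated as scalar-rate
unit->rate = IN0(1);

// in the calculation function: read once per call -- control-rate
float rate = IN0(1);

// in the calculation function: take the whole block -- audio-rate
float *rates = IN(1);
for (int i = 0; i < inNumSamples; ++i) {
    // use rates[i] here
}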

25.5.5 Allocating Memory and Using a Destructor

Next we can develop our Flanger example so that it applies an effect to the sound. In order to create a flanging effect, we need a short delay line (around 20 milliseconds). We vary the amount of delay and mix the delayed sound with the input to produce the effect. To create a delay line, we need to allocate some memory and store a reference to that memory in the UGen's data structure. And, of course, we need to free this memory when the UGen is freed. This requires a UGen with a destructor. Figure 25.2 shows the full code, with the destructor added, as well as the code to allocate, free, and use the memory. Note the change in the load() function — we use DefineDtorUnit rather than DefineSimpleUnit. (We've also added code to the calculation function which reads and writes to the delay line, creating the flanging effect.)


#include "SC_PlugIn.h"

static InterfaceTable *ft;

// the struct will hold data which we want to pass from one function to another
// (e.g. from the constructor to the calc func), or from one call of the calc
// func to the next
struct Flanger : public Unit {
    float rate, delaysize, fwdhop, readpos;
    int writepos;
    // a pointer to the memory we'll use for our internal delay line
    float *delayline;
};

// function declarations, exposed to C
extern "C" {
    void load(InterfaceTable *inTable);
    void Flanger_Ctor(Flanger *unit);
    void Flanger_next(Flanger *unit, int inNumSamples);
    void Flanger_Dtor(Flanger *unit);
}

void Flanger_Ctor(Flanger *unit) {
    // Here we must initialise state variables in the Flanger struct,
    // typically with reference to control-rate/scalar-rate inputs.
    unit->delaysize = SAMPLERATE * 0.02f; // Fixed 20ms max delay
    float rate = IN0(1);
    // Rather than using rate directly, we're going to calculate the size of
    // jumps we must make each time to scan through the delay line at "rate"
    float delta = (unit->delaysize * rate) / SAMPLERATE;
    unit->fwdhop = delta + 1.0f;
    unit->rate = rate;
    unit->writepos = 0;
    unit->readpos = 0;

    // Allocate the delay line
    unit->delayline = (float*)RTAlloc(unit->mWorld,
        unit->delaysize * sizeof(float));
    // Initialise it to zeroes
    memset(unit->delayline, 0, unit->delaysize * sizeof(float));

    // IMPORTANT: This tells scsynth the name of the calculation function
    // for this UGen.
    SETCALC(Flanger_next);

    // Should also calc 1 sample's worth of output --
    // ensures each ugen's pipes are "primed".
    Flanger_next(unit, 1);
}

void Flanger_next(Flanger *unit, int inNumSamples) {
    float *in = IN(0);
    float *out = OUT(0);

    float depth = IN0(2);

    float rate = unit->rate;
    float fwdhop = unit->fwdhop;
    float readpos = unit->readpos;
    float *delayline = unit->delayline;
    int writepos = unit->writepos;
    int delaysize = unit->delaysize;

    float val, delayed, currate;

    currate = IN0(1);
    if (rate != currate) { // rate input needs updating
        rate = currate;
        fwdhop = ((delaysize * rate) / SAMPLERATE) + 1.0f;
    }

    for (int i = 0; i < inNumSamples; ++i) {
        val = in[i];

        // Write to the delay line
        delayline[writepos++] = val;
        if (writepos == delaysize)
            writepos = 0;

        // Read from the delay line
        delayed = delayline[(int)readpos];
        readpos += fwdhop;
        // Update position. NB we may be moving forwards or backwards,
        // depending on input.
        while ((int)readpos >= delaysize)
            readpos -= delaysize;
        while ((int)readpos < 0)
            readpos += delaysize;

        // Mix dry and wet together, and output them
        out[i] = val + (delayed * depth);
    }

    unit->rate = rate;
    unit->fwdhop = fwdhop;
    unit->writepos = writepos;
    unit->readpos = readpos;
}

void Flanger_Dtor(Flanger *unit) {
    RTFree(unit->mWorld, unit->delayline);
}

void load(InterfaceTable *inTable) {
    ft = inTable;
    DefineDtorUnit(Flanger);
}

Figure 25.2 Completed C++ code for the Flanger UGen.


Table 25.1 Memory Allocation and Freeing

Typical C allocation/freeing:

void *ptr = malloc(numBytes);
free(ptr);

In SuperCollider (using the real-time pool):

void *ptr = RTAlloc(unit->mWorld, numBytes);
RTFree(unit->mWorld, ptr);

SuperCollider UGens allocate memory differently from most programs. Ordinary memory allocation and freeing can be a relatively expensive operation, so SuperCollider provides a real-time pool of memory from which UGens can borrow chunks in an efficient manner. The functions to use in a plug-in are in the lower part of table 25.1, and the analogous functions (the ones to avoid) are shown above them. RTAlloc and RTFree can be called anywhere in your constructor/calculation/destructor functions. Often you will RTAlloc the memory during the constructor and RTFree it during the destructor, as is done in figure 25.2. Memory allocated in this way is taken from the (limited) real-time pool and is not accessible outside the UGen (e.g., to client-side processes). If you require large amounts of memory or wish to access the data from the client, you may prefer to use a buffer allocated and then passed in from outside — this is described later.

25.5.6 Providing More Than One Calculation Function

Your UGen's choice of calculation function is specified within the constructor rather than being fixed. This gives an opportunity to provide different functions optimized for different situations (e.g., one for control-rate and one for audio-rate input) and to decide which to use. This code, used in the constructor, would choose between two calculation functions according to whether the first input was audio-rate or not:

if (INRATE(0) == calc_FullRate) {
    SETCALC(Flanger_next_a);
} else {
    SETCALC(Flanger_next_k);
}

You would then provide both a Flanger_next_a and a Flanger_next_k function. Similarly, you could specify different calculation functions for audio-rate versus control-rate output (e.g., by testing whether BUFLENGTH is 1; see table 25.2), although this is often catered for automatically when your calculation function uses the inNumSamples argument to control the number of loops performed, and so on.


Table 25.2 Useful Macros for UGen Writers

IN(index): A float* pointer to input number index.

OUT(index): A float* pointer to output number index.

IN0(index): A single (control-rate) value from input number index.

OUT0(index): A single (control-rate) value at output number index.

INRATE(index): The rate of input index, an integer value corresponding to one of the following constants: calc_ScalarRate (scalar rate), calc_BufRate (control rate), calc_FullRate (audio rate), calc_DemandRate (demand rate).

SETCALC(func): Set the calculation function to func.

SAMPLERATE: The sample rate of the UGen, as a double. (Note: for control-rate UGens this is not the full audio rate but audio rate / block size.)

SAMPLEDUR: Reciprocal of SAMPLERATE (seconds per sample).

BUFLENGTH: Equal to the block size if the unit is audio rate, and to 1 if the unit is control rate.

BUFRATE: The control rate, as a double.

BUFDUR: The reciprocal of BUFRATE.

GET_BUF: Treats the UGen's first input as a reference to a buffer; looks this buffer up in the server and provides variables for accessing it, including float *bufData, which points to the data; uint32 bufFrames, for how many frames the buffer contains; and uint32 bufChannels, for the number of channels in the buffer.

ClearUnitOutputs(unit, inNumSamples): A function which sets all the unit's outputs to 0.

Print(fmt, ...): Print text to the SuperCollider post window; the arguments are just like those for the C function printf.

DoneAction(doneAction, unit): Perform a "doneAction," as used in EnvGen, DetectSilence, and others.

RTAlloc(world, numBytes): Allocate memory from the real-time pool; analogous to malloc(numBytes).

RTRealloc(world, pointer, numBytes): Reallocate memory in the real-time pool; analogous to realloc(pointer, numBytes).

RTFree(world, pointer): Free allocated memory back to the real-time pool; analogous to free(pointer).

SendTrigger(node, triggerID, value): Send a trigger from the node to clients, with integer ID triggerID and float value value.

FULLRATE: The full audio sample rate of the server (irrespective of the rate of the UGen), as a double.

FULLBUFLENGTH: The integer number of samples in an audio-rate input (irrespective of the rate of the UGen).

The unit's calculation function can also be changed during execution — the SETCALC macro can safely be called from a calculation function, not just from the constructor. Whenever you call SETCALC, this changes which function the server will call, from the next control period onward. The Help file on writing UGens shows more examples of SETCALC in use.

25.5.7 Trigger Inputs

Many UGens make use of trigger inputs. The convention is that a trigger occurs when the input crosses from nonpositive (i.e., zero or negative) to any positive value. If you wish to provide trigger inputs, use this same convention. Detecting the change from nonpositive to positive requires checking the trigger input's value against its previous value, so our struct will need a member to store the previous value for checking. Assuming that our struct contains a float member prevtrig, the following sketch outlines how we handle the incoming data in our calculation function:

float trig = IN0(0); // Or whichever input you wish
float prevtrig = unit->prevtrig;
if ((prevtrig <= 0.f) && (trig > 0.f)) {
    // do something
}
unit->prevtrig = trig; // Store the current value -- next time it'll be the previous value

The sketch is for a control-rate trigger input, but a similar approach is used for audio-rate triggering, too. For audio-rate triggering, you need to compare each value in the input block against the value immediately before it. Note that for the very first value in the block, you need to compare against the last value from the previous block (which you must have stored).
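A sketch of the audio-rate case, again assuming the struct stores the block's final value in a float member prevtrig:

float *trig = IN(0);
float prevtrig = unit->prevtrig; // last sample of the previous block
for (int i = 0; i < inNumSamples; ++i) {
    float curtrig = trig[i];
    if ((prevtrig <= 0.f) && (curtrig > 0.f)) {
        // a trigger has occurred at sample i
    }
    prevtrig = curtrig;
}
unit->prevtrig = prevtrig; // store for the next block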


For complete code examples, look at the source of the Trig UGen, found in TriggerUGens.cpp in the main SC distribution.

25.5.8 Accessing a Buffer

When a buffer is allocated and then passed in to a UGen, the UGen receives the index number of that buffer as a float value. In order to get a pointer to the correct chunk of memory (as well as the size of that chunk), the UGen must look it up in the server's list of buffers. In practice this is most easily achieved by using a macro called GET_BUF. You can call GET_BUF near the top of your calculation function, and then the data are available via a float pointer bufData, along with two integers defining the size of the buffer, bufChannels and bufFrames. Note that the macro assumes the buffer index is the first input to the UGen (this is the case for most buffer-using UGens). For examples which use this approach, look at the code for the DiskIn or DiskOut UGens, defined in DiskIO_UGens.cpp in the main SC distribution.

Your UGen does not need to free the memory associated with a buffer once it ends; the memory is managed externally by the buffer allocation/freeing server commands.
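As a minimal sketch of the pattern, here is a hypothetical calculation function that copies the start of a buffer to its output (the name MyBufReader is illustrative; GET_BUF assumes the buffer number arrives as input 0):

void MyBufReader_next(MyBufReader *unit, int inNumSamples) {
    GET_BUF // provides bufData, bufFrames, and bufChannels
    float *out = OUT(0);
    for (int i = 0; i < inNumSamples; ++i) {
        // naive single-channel read; real code would wrap or interpolate
        out[i] = (i < (int)bufFrames) ? bufData[i] : 0.f;
    }
}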

25.5.9 Randomness

The API provides a convenient interface for accessing good-quality pseudorandom numbers. The randomness API is specified in SC_RGen.h and provides functions for random numbers from standard types of distribution: uniform, exponential, bilinear, and quasi-Gaussian (such as sum3rand, also available client-side). The server creates an instance of the random number generator for UGens to access. The following excerpt shows how to generate random numbers for use in your code:

RGen& rgen = *unit->mParent->mRGen;
float rfl = rgen.frand();         // A random float, uniformly distributed 0 to 1
int rval = rgen.irand(4);         // A random integer, uniformly distributed 0 to 3
float rgaus = rgen.sum3rand(1.0); // Quasi-Gaussian, limited to the range +-1

25.5.10 When Your UGen Has No More to Do

Many UGens carry on indefinitely, but often a UGen reaches the end of its useful "life" (e.g., it finishes outputting an envelope or playing a buffer). There are three specific behaviors that might be appropriate if your UGen does reach a natural end:


1. Some UGens set a "done" flag to indicate that they've finished. Other UGens can monitor this and act in response to it (e.g., Done, FreeSelfWhenDone). See the Help files for examples of these UGens. If you wish your UGen to indicate that it has finished, set the flag as follows:

unit->mDone = true;

This doesn't affect how the server treats the UGen — the calculation function will still be called in the future.

2. UGens such as EnvGen, Linen, Duty, and Line provide a "doneAction" feature, which can perform actions such as freeing the node once the UGen has reached the end of its functionality. You can implement this yourself simply by calling the DoneAction macro, which performs the desired action. You would typically allow the user to specify the doneAction as an input to the unit. For example, if the doneAction is the sixth input to your UGen, you would call

DoneAction(IN0(5), unit);

Since this can perform behaviors such as freeing the node, many UGens stop calculating/outputting after they reach the point of calling this macro. See, for example, the source code for DetectSilence, which sets its calculation function to a no-op DetectSilence_done function at the point where it calls DoneAction. Not all doneActions free the synth, though, so additional output is not always redundant.

3. If you wish to output zeroes from all outputs of your unit, you can simply call the ClearUnitOutputs function as follows:

ClearUnitOutputs(unit, inNumSamples);

Notice that this function has the same signature as a calculation function: as arguments it takes a pointer to the unit struct and an integer number of samples. You can take advantage of this similarity to provide an efficient way to stop producing output:

SETCALC(ClearUnitOutputs);

Calling this means that your calculation function will not be called in future iterations; instead, ClearUnitOutputs will be called. This provides an irreversible but efficient way for your UGen to produce silent output for the remainder of the synth's execution.

25.5.11 Summary of Useful Macros

Table 25.2 summarizes some of the most generally useful macros defined for use in your UGen code. Many of them are discussed in this chapter, but not all are covered explicitly. The macros are defined in SC_Unit.h and SC_InterfaceTable.h.


25.6 Specialized Types of UGen

25.6.1 Multiple-Output UGens

In the C++ code, writing UGens which produce multiple outputs is very straightforward. The OUT macro gets a pointer to the desired-numbered output. Thus, for a 3-output UGen, assign each one (OUT(0), OUT(1), OUT(2)) to a variable, then write output to these three pointers.

In the SuperCollider class code, the default is to assume a single output, and we need to modify this behavior. Let's look at the Pitch UGen to see how it's done:

Pitch : MultiOutUGen {
    *kr { arg in = 0.0, initFreq = 440.0, minFreq = 60.0, maxFreq = 4000.0,
            execFreq = 100.0, maxBinsPerOctave = 16, median = 1,
            ampThreshold = 0.01, peakThreshold = 0.5, downSample = 1;
        ^this.multiNew('control', in, initFreq, minFreq, maxFreq, execFreq,
            maxBinsPerOctave, median, ampThreshold, peakThreshold, downSample)
    }
    init { arg ... theInputs;
        inputs = theInputs;
        ^this.initOutputs(2, rate)
    }
}

There are two differences from an ordinary UGen. First, Pitch is a subclass of MultiOutUGen rather than of UGen; MultiOutUGen takes care of some of the changes needed to work with a UGen that has multiple outputs. Second, the init method is overridden to say exactly how many outputs this UGen will provide (in this case, two). For Pitch the number of outputs is fixed, but in some cases it might depend on other factors. PlayBuf is a good example of this: its number of outputs depends on the number of channels in the buffer(s) it is expected to play, specified using its numChannels argument. The init method for PlayBuf takes the numChannels input (i.e., the first value from the list of inputs passed to init) and specifies that as the number of outputs.
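A class-file sketch of the PlayBuf approach (MyMultiOut is hypothetical; its first input sets how many outputs each instance provides):

MyMultiOut : MultiOutUGen {
    *kr { arg numChannels, in = 0.0;
        ^this.multiNew('control', numChannels, in)
    }
    init { arg ... theInputs;
        inputs = theInputs;
        // the first input determines the number of outputs
        ^this.initOutputs(theInputs[0], rate)
    }
}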

25.6.2 Passing Arrays into UGens

25.6.2.1 The class file

As described earlier, the multiNew method automatically performs multichannel expansion if any of the inputs are arrays — yet in some cases we want a single unit to handle a whole array, rather than having one unit per array element. The BufWr and RecordBuf UGens are good examples of UGens that do exactly this: each UGen can


take an array of inputs and write them to a multichannel buffer. Here's how the class file handles this:

RecordBuf : UGen {
    *ar { arg inputArray, bufnum = 0, offset = 0.0, recLevel = 1.0,
            preLevel = 0.0, run = 1.0, loop = 1.0, trigger = 1.0;
        ^this.multiNewList(['audio', bufnum, offset, recLevel, preLevel,
            run, loop, trigger] ++ inputArray.asArray)
    }
}

Instead of calling the UGen method multiNew, we call multiNewList, which is the same except that all the arguments are passed as a single array rather than as a separated argument list. This means that the inputArray argument (which could be either a single unit or an array), when concatenated onto the end of the argument list using the ++ array concatenation operator, in essence appears as a set of separate input arguments rather than as a single array argument.

Note that RecordBuf doesn't know in advance what size the input array is going to be. Because of the array flattening that we perform, this means that the RecordBuf C++ plug-in receives a variable number of inputs each time it is instantiated. Our plug-in code will be able to detect how many inputs it receives in a given instance.

Why do we put inputArray at the end of the argument list? Why not at the beginning, in parallel with how a user invokes the RecordBuf UGen? The reason is to make things simpler for the C++ code, which will access the plug-in inputs according to their numerical position in the list. The recLevel input, for example, is always the third input, whereas if we inserted inputArray into the list before it, its position would depend on the size of inputArray.

The Poll UGen uses a very similar procedure, converting a string of text into an array of ASCII characters and appending them to the end of its argument list. However, the Poll class code must perform some other manipulations, so it is perhaps less clear as a code example than RecordBuf. But if you are developing a UGen that needs to pass text data to the plug-in, Poll shows how to do it using this array approach.

25.6.2.2 The C++ code

Ordinarily we access input data using the IN or IN0 macro, specifying the number of the input we want to access. Arrays are passed into the UGen as a separate numeric input for each array element, so we access these elements in exactly the same way. But we need to know how many items to expect, since the array can be of variable size. The Unit struct can tell us how many inputs in total are being provided (the member unit->mNumInputs). Look again at the RecordBuf class code given above. There


are seven "ordinary" inputs, plus the array appended to the end. Thus the number of channels in our input array is unit->mNumInputs - 7. We use this information to iterate over the correct number of inputs and process each element, as in the sketch below.
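In sketch form, the iteration might look like this inside RecordBuf-style calculation code (the variable names are illustrative):

int numInputChannels = unit->mNumInputs - 7; // size of the appended array
for (int ch = 0; ch < numInputChannels; ++ch) {
    float *chanIn = IN(7 + ch); // array element ch arrives as input 7 + ch
    // ... write chanIn[i] into channel ch of the buffer ...
}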

25.6.3 Demand-Rate UGens

25.6.3.1 The class file

Writing the class file for a demand-rate UGen is straightforward. Look at the code for units such as Dseries, Dgeom, or Dwhite as examples. They differ from other UGen class files in two ways:

1. The first argument to multiNew (or multiNewList) is 'demand'.

2. They implement a single class method, *new, rather than *ar/*kr/*ir. This is because although some UGens may be able to run at multiple rates (e.g., audio rate or control rate), a demand-rate UGen can run at only one rate: the rate at which data are demanded of it.

25.6.3.2 The C++ code

The C++ code for a demand-rate UGen works as normal, with the constructor specifying the calculation function. However, the calculation function behaves slightly differently. First, it is not called regularly (once per control period) but only when demanded, which during a particular control period could be more than once or not at all. This means that you can't make assumptions about regular timing, such as the assumptions made in an oscillator which increments its phase by a set amount each time it is called. Second, rather than being invoked directly by the server, the calculation-function calls are actually passed up the chain of demand-rate generators. Rather than using the IN or IN0 macros to access an input value (whose generation will have been coordinated by the server), we instead use the DEMANDINPUT macro, which requests a new value directly from the unit farther up the chain, "on demand." Note: because of the method used to demand the data, demand-rate UGens are currently restricted to being single-output.
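Returning to the class-file side for a moment, a minimal sketch in the style described above (Dmydemand is hypothetical):

Dmydemand : UGen {
    *new { arg lo = 0.0, hi = 1.0;
        ^this.multiNew('demand', lo, hi)
    }
}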

25.6.4 Phase Vocoder UGens

Phase vocoder UGens operate on frequency-domain data stored in a buffer (produced by the FFT UGen). They don't operate at a special "rate" of their own: in reality they are control-rate UGens. They produce and consume a control-rate signal which acts as a type of trigger: when an FFT frame is ready for processing, its value


is the appropriate buffer index; otherwise, its value is -1. This signal is often referred to as the "chain" in SC documentation.

25.6.4.1 The class file

As with demand-rate UGens, phase vocoder UGens (PV UGens) can have only a single rate of operation: the rate at which FFT frames are arriving. Therefore, PV UGens implement only a single *new class method, and they specify their rate as 'control' in the call to multiNew. See the class files for PV_MagMul and PV_BrickWall as examples of this.

PV UGens process data stored in buffers, and the C++ API provides some useful macros to help with this. The macros assume that the first input to the UGen is the one carrying the FFT chain where data will be read and then written, so it is sensible to stick with this convention.

25.6.4.2 The C++ code

PV UGens are structured just like any other UGen, except that to access the frequency-domain data held in the external buffer, there are certain macros and procedures to use. Any of the core UGens implemented in PV_UGens.cpp should serve as a good example to base your own UGens on. Your code should include the header file FFT_UGens.h, which defines some PV-specific structs and macros.

Two important macros are PV_GET_BUF and PV_GET_BUF2, one of which you use at the beginning of your calculation function to obtain the FFT data from the buffer. These macros implement the special PV UGen behavior: if the FFT chain has "fired," they access the buffer(s) and continue with the rest of the calculation function; but if the FFT chain has not "fired" in the current control block, they output a value of -1 and return (i.e., they do not allow the rest of the calculation function to proceed). This has the important consequence that although your calculation function code will look "as if" it is called once per control block, in fact your code will be executed only at the FFT frame rate.

PV_GET_BUF takes the FFT chain indicated by the first input to the UGen and creates a pointer to these data called buf. PV_GET_BUF2 is for use in UGens which process two FFT chains and write the result back out to the first chain: it takes the FFT chains indicated by the first and second inputs to the UGen and creates pointers to the data called buf1 and buf2. It should be clear that you use PV_GET_BUF or PV_GET_BUF2, but not both.

Having acquired a pointer to the data, you will of course wish to read/write it. Before doing so, you must decide whether to process the complex-valued data as polar coordinates or Cartesian coordinates. The data in the buffer may be in


either format (depending on what has happened to it so far). To access the data as Cartesian values, you use

SCComplexBuf *p = ToComplexApx(buf);

and to access the data as polar values, you use

SCPolarBuf *p = ToPolarApx(buf);

These two data structures, and the two functions for obtaining them, are declared in FFT_UGens.h. The name p for the pointer is of course arbitrary, but it's what we'll use here. FFT data consist of a complex value for each frequency bin, with the number of bins related to the number of samples in the input. But in the SuperCollider context the input is real-valued data, which means that (a) the bins above the Nyquist frequency (which is half the sampling frequency) are a mirror image of the bins below, and can therefore be neglected; and (b) phase is irrelevant for the DC and Nyquist frequency bins, so these two bins can each be represented by a single magnitude value rather than a complex value. The end result is that we obtain a data structure containing a single DC value, a single Nyquist value, and a series of complex values for all the bins in between. The number of bins in between is given by the value numbins, which is provided for us by PV_GET_BUF or PV_GET_BUF2.

The data in a Cartesian-type struct (an SCComplexBuf) are of the form

p->dc
p->bin[0].real, p->bin[0].imag
p->bin[1].real, p->bin[1].imag
...
p->bin[numbins-1].real, p->bin[numbins-1].imag
p->nyq

The data in a polar-type struct (an SCPolarBuf) are of the form

p->dc
p->bin[0].mag, p->bin[0].phase
p->bin[1].mag, p->bin[1].phase
...
p->bin[numbins-1].mag, p->bin[numbins-1].phase
p->nyq


Note that the indexing is slightly strange: engineers commonly refer to the DC component as the "first" bin in frequency-domain data. However, in these structs, because the DC component is represented differently, bin[0] is actually the first non-DC bin — what would sometimes be referred to as the second bin. Similarly, keep in mind that numbins represents the number of bins not including the DC or Nyquist bins.

To perform a phase vocoder manipulation, simply read and write to the struct (which actually lives directly in the external buffer). The buffer will then be passed down the chain to the next phase vocoder UGen; you don't need to do anything extra to "output" the frequency-domain data.

When compiling your PV UGen, you will need to compile/link against SCComplex.cpp from the main SuperCollider source, which provides the implementation of these frequency-domain data manipulations.
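Putting the pieces together, here is a sketch of a complete PV calculation function that simply halves every magnitude (PV_MyHalve is hypothetical; PV_GET_BUF supplies buf and numbins, and returns early when no FFT frame is ready):

#include "FFT_UGens.h"

void PV_MyHalve_next(PV_Unit *unit, int inNumSamples) {
    PV_GET_BUF // provides buf and numbins, or outputs -1 and returns

    SCPolarBuf *p = ToPolarApx(buf); // ensure a polar representation
    p->dc = p->dc * 0.5f;
    p->nyq = p->nyq * 0.5f;
    for (int i = 0; i < numbins; ++i) {
        p->bin[i].mag = p->bin[i].mag * 0.5f; // halve each bin's magnitude
    }
    // no explicit output step: the data were modified in the buffer itself
}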

25.7 Practicalities

25.7.1 Debugging

Standard C++ debugging procedures can be used when developing UGens. The simplest method is to add a line to your code which prints out the values of variables — you can use the standard C printf function, which in a UGen will print text to the post window. For more power, you can launch the server process and then attach a debugger such as gdb (the GNU debugger) or Xcode's debugger (which is actually gdb with a graphical interface) to perform tasks such as pausing the process and inspecting the values of variables.

On Mac, if you use the debugger to launch SuperCollider.app, remember that the local server runs in a process different from the application. You can either launch the application using the debugger and boot the internal server, or launch just the server (scsynth) using the debugger, in which case it runs as a local server. In the latter case you need to ensure that your debugger launches scsynth with the correct arguments (e.g., -u plus a UDP port number).

When debugging a UGen that causes server crashes, you may wish to look at your system's crash log for scsynth. The most common cause of crashes is introduced when using RTAlloc and RTFree — if you try to RTFree something that has not yet been RTAlloc'ed, or is otherwise not a pointer into the real-time memory pool, this can cause bad-access exceptions to appear in the crash log. If the crash log seems to reveal that your UGen is somehow causing crashes inside core UGens which normally behave perfectly, check that your code does not write data outside of the expected limits: make sure you RTAlloc the right amount of space for what you're

718

Dan Stowell

doing (for example, with arrays, check exactly which indices your code attempts to access). 25.7.2
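As a hypothetical illustration of the printf approach (the unit and its mDebugCount field are invented for this example), throttling the output keeps the post window readable; remember that printing from the real-time context is itself real-time unsafe, so such lines should be removed once the bug is found:

void MyUGen_next(MyUGen* unit, int inNumSamples)
{
    if ((unit->mDebugCount++ & 255) == 0) {        // print only on every 256th call
        printf("MyUGen: freq input = %g\n", ZIN0(1));
    }
    // ... the normal sample computation follows ...
}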

25.7.2 Optimization

Optimizing code is a vast topic and often depends on the specifics of the code in question. However, we can suggest some optimization tips for writing SuperCollider UGens.

Efficiency/speed of execution is usually the number-one priority, especially since a user may wish to employ many instances of the UGen simultaneously. The difference between a UGen that takes 2.5% and another that takes 1.5% CPU may seem small, but the first limits you to 40 simultaneous instances, while the second will allow up to 66, a 65% increase. Imagine doing your next live performance on a 4-year-old processor: that's essentially the effect of the less efficient code.

Avoid calls to "expensive" procedures whenever possible. For example, floating-point division is typically much more expensive than multiplication, so if your unit must divide values by some constant value which is stored in your struct, rewrite this so that the reciprocal of that value is stored in the struct and you can perform a multiplication rather than a division. If you want to find an integer power of 2, use bit shifting (1 << n) rather than the expensive math function (pow(2, n)). Other expensive floating-point operations are square-root finding and trigonometric operations (sin, cos, tan, etc.). Precalculate and store such values wherever possible, rather than calculating them afresh every time the calculation function is called. As a typical example, often a filter UGen will take a user parameter (such as cutoff frequency) and use it to derive internal filter coefficients. If you store the previous value of the user parameter and use it to check whether the parameter has changed at all (updating the coefficients only upon a change), you can improve efficiency, since UGens are often used with fixed or rarely changing parameters.

One of the most important SuperCollider-specific choices is whether to read a certain input, or even perform a given calculation, at scalar, control, or audio rate. It can be helpful to allow any and all values to be updated at audio rate, but if you find that a certain update procedure is expensive and won't usually be required to run at audio rate, it may be preferable to update only once during a calculation function. Creating multiple calculation functions, each appropriate to a certain context (e.g., to a certain combination of input rates, as demonstrated earlier), and choosing the most appropriate can allow a lot of optimization. For example, a purely control-rate calculation can avoid the looping required for audio-rate calculation and typically results in a much simpler calculation. There is a maintenance overhead in providing these alternatives, but the efficiency gains can be large. In this tension between efficiency and code comprehensibility/reusability, you should remember the importance of adding comments to your code to clarify the flow and the design decisions you have made.

In your calculation function, store values from your struct, as well as input/output pointers/values, as local variables, especially if referring to them multiple times. This avoids the overhead of indirection and can be optimized (by the compiler) to use registers better.

Avoid DefineSimpleCantAliasUnit and DefineDtorCantAliasUnit. As described earlier, DefineSimpleCantAliasUnit is available as an alternative to DefineSimpleUnit in cases where your UGen must write output before it has read from the inputs, but this can decrease cache performance.

Avoid peaky CPU usage. A calculation function that does nothing for the first 99 times it's called, then performs a mass of calculations on the 100th call, could cause audio dropouts if this spike is very large. To avoid this, "amortize" your unit's effort by spreading the calculation out, if possible, by precalculating some values which are going to be used in that big 100th call.

On Mac, Apple's vDSP library can improve speed by vectorizing certain calculations. If you make use of this, or other platform-specific libraries, remember the considerations of platform independence. For example, use preprocessor instructions to choose between the Mac-specific code and ordinary C++ code:

#if SC_DARWIN
    // The Mac-specific version of the code (including, e.g., vDSP functions)
#else
    // The generic version of the code
#endif

SC_DARWIN is a preprocessor value set to 1 when compiling SuperCollider on Mac (this is set in the Xcode project settings). Branching like this introduces a maintenance overhead, because you need to make sure that you update both branches in parallel.
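To make the cached-parameter idiom above concrete, here is a sketch of a one-pole lowpass that re-derives its coefficient only when the cutoff input actually changes. The unit itself is hypothetical, but IN, OUT, ZIN0, and SAMPLEDUR are the standard macros from SC_PlugIn.h:

struct CachedLPF : public Unit {
    float mPrevFreq, mCoef, mY1;
};

void CachedLPF_next(CachedLPF* unit, int inNumSamples)
{
    float* in = IN(0);
    float* out = OUT(0);
    float freq = ZIN0(1);            // control-rate cutoff input
    if (freq != unit->mPrevFreq) {   // cheap comparison guards the expensive expf()
        unit->mCoef = expf(-2.f * 3.14159265f * freq * SAMPLEDUR);
        unit->mPrevFreq = freq;
    }
    float coef = unit->mCoef;        // locals avoid repeated struct indirection
    float y1 = unit->mY1;
    for (int i = 0; i < inNumSamples; ++i) {
        float x = in[i];
        y1 = x + coef * (y1 - x);    // one-pole smoothing
        out[i] = y1;
    }
    unit->mY1 = y1;                  // write state back once, after the loop
}

Note how the same sketch also applies the local-variable advice from above: the coefficient and filter state are copied into locals before the loop and stored back once afterward.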

25.7.3 Distributing Your UGens

Sharing UGens with others contributes to the SuperCollider community and is a very cool thing to do. A SourceForge project, "sc3-plugins," exists as a repository for downloadable UGen plug-ins produced by various people. You may wish to publish your work either there or separately.

Remember that SuperCollider is licensed under the well-known GNU General Public License (GPL) open-source license, including the plug-in API. So if you wish to distribute your plug-ins to the world, they must also be GPL-licensed. (Note: you retain copyright in any code you have written. You do not have to sign away your copyright in order to GPL-license a piece of code.) Practically, this has a couple of implications:

• You should include a copyright notice, a copy of the GPL license text, and the source code with your distributed plug-in.
• If your plug-in makes use of third-party libraries, those libraries must be available under a "GPL-compatible" copyright license. See the GNU GPL Web site for further discussion of what this means.

25.8 Conclusion

This chapter doesn't offer an exhaustive list of all that's possible, but it provides you with the core of what all UGen programmers need to know. If you want to delve deeper, you will find the online community to be a valuable resource for answers to questions not covered here, and the source code for existing UGens provides a wealth of useful code examples. The open-source nature of SuperCollider makes for a vibrant online developer community. Whether you are tweaking one of SuperCollider's core UGens or developing something very specialized, you'll find that the exchange of ideas with SuperCollider developers can be rewarding for your own projects as well as for others, and can feed into the ongoing development of SuperCollider as a uniquely powerful and flexible synthesis system.

26 Inside scsynth

Ross Bencina

This chapter explores the implementation internals of scsynth, the server process of SuperCollider 3, which is written in C++. The chapter is intended to be useful to people who are interested in modifying or maintaining the scsynth source code, and also to those who are interested in learning about the structure and implementation details of one of the great milestones in computer music software. By the time you've finished this chapter, you should have improved your understanding of how scsynth does what it does, and also have gained some insight into why it is written the way it is.

In this chapter we'll sometimes simply refer to scsynth as "the server." "The client" usually refers to sclang or any other program sending OSC commands to the server. Although the text focuses on the server's real-time operating mode, the information presented here is equally relevant to understanding scsynth's non-real-time mode. As always, the source code is the definitive reference and provides many interesting details which space limitations didn't allow to be included here.

Wherever possible, the data structure and function names used in this chapter match those in the scsynth source code. However, at the time of writing there was some inconsistency in class and structure naming: sometimes the source file, the class name, or both have an SC_ prefix. I have omitted such prefixes from class and function names for consistency. Also note that I have chosen to emphasize an object-oriented interpretation of scsynth, using UML diagrams to illuminate the code structure, as I believe scsynth is fundamentally object-oriented, if not in an idiomatically C++ way. In many cases structs from the source code appear as classes in the diagrams. Where appropriate, I have taken the liberty of interpreting inheritance where a base struct is included as the first member of a derived struct. However, I have resisted the urge to translate any other constructs (such as the pseudo member functions mentioned below). All other references to names appear here as they do in the source code.

Now that formalities are completed, in the next section we set out on our journey through the scsynth implementation with a discussion of scsynth's coding style. Following that, we consider the structure of the code which implements what I call the scsynth domain model: Nodes, Groups, Graphs, GraphDefs, and their supporting infrastructure. We then go on to consider how the domain model implementation communicates with the outside world; we consider threading, interthread communications using queues, and how scsynth fulfills real-time performance constraints while executing all of the dynamic behavior offered by the domain model. The final section briefly highlights some of the fine-grained details which make scsynth one of the most efficient software synthesizers on the planet. scsynth is a fantastically elegant and interesting piece of software; I hope you get as much out of reading this chapter as I did in writing it!

26.1 Some Notes on scsynth Coding Style

scsynth is coded in C++ but for the most part uses a "C++ as a better C" coding style. Most data structures are declared as plain old C structs, especially those which are accessible to unit plug-ins. Functions which in idiomatic C++ might be considered member functions are typically global functions in scsynth. These are declared with names of the form StructType_MemberFunctionName(StructType *s, ...), where the first parameter is a pointer to the struct being operated on (the "this" pointer in a C++ class). Memory allocation is performed with custom allocators or with malloc(), free(), and friends. Function pointers are often used instead of virtual functions. A number of cases of what can be considered inheritance are implemented by placing an instance of the base class (or struct) as the first member of the derived struct. There is very little explicit encapsulation of data using getter/setter methods.

There are a number of pragmatic reasons to adopt this style of coding. Probably the most significant is the lack of an Application Binary Interface (ABI) for C++, which makes dynamically linking with plug-ins using C++ interfaces compiler-version-specific. The avoidance of C++ constructs also has the benefit of making all code operations visible, in turn making it easier to understand and predict the performance and real-time behavior of the code. The separation of data from operations, and the explicit representation of operations as data using function pointers, promotes a style of programming in which types are composed by parameterizing structs with function pointers and auxiliary data. The use of structs instead of C++ classes makes it less complicated to place objects into raw memory. Reusing a small number of data structures for many purposes eases the burden on memory allocation by ensuring that dynamic objects belong to only a small number of size classes. Finally, being able to switch function pointers at runtime is a very powerful idiom which enables numerous optimizations, as will be seen later.
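The following sketch is illustrative (it is not copied from the source), but it shows the flavor of these idioms together: a base struct embedded as the first member, a function pointer in place of a virtual function, and the StructType_FunctionName naming convention:

struct Node {
    int mID;
    struct Node* mNext;                      // sibling link within a Group
    void (*mCalcFunc)(struct Node* inNode);  // function pointer instead of a virtual
};

struct Group {
    struct Node mNode;   // base struct as first member: a Group* can be cast to Node*
    struct Node* mHead;  // head of the child list
};

// "member function" convention: the first parameter is the 'this' pointer
void Group_Calc(struct Node* inNode)
{
    struct Group* group = (struct Group*)inNode;
    for (struct Node* child = group->mHead; child; child = child->mNext)
        (*child->mCalcFunc)(child);          // dispatch through the stored pointer
}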


26.2 The scsynth Domain Model

At the heart of scsynth is a powerful yet simple domain model which manages dynamic allocation and evaluation of unit generator graphs in real time. Graphs can be grouped into arbitrary trees whose execution and evaluation order can be dynamically modified (McCartney, 2002). In this section we explain the main behaviors and relationships between entities in the domain model. The model is presented without concern for how client communication is managed or how the system is executed within real-time constraints. These concerns are addressed in later sections.

Figure 26.1 shows an implementation-level view of the significant domain entities in scsynth. Each class shown on the diagram is a C++ class or struct in the scsynth source code. SC users will recognize the concepts modeled by many of these classes. Interested readers are advised to consult the "ServerArchitecture" section of the Help files for further information about the roles of these classes and the exact operations which can be performed by them.

Figure 26.1 Class diagram of significant domain entities.

World is the top-level class which (with the exception of a few global objects) aggregates and manages the run-time data in the server. It is created by World_New() when scsynth starts up. An instance of WorldOptions is passed to World_New(); it stores the configuration parameters, which are usually passed to scsynth on the command line.

scsynth's main task is to synthesize and process sound. It does this by evaluating a tree of dynamically allocated Node instances (near middle-left of figure 26.1), each of which provides its own NodeCalcFunc function pointer, which is called by the server to evaluate the Node at the current time step. Node::mID is an integer used by clients to identify specific Nodes in server commands (such as suspending or terminating the Node, or changing its location in the tree).

There are 2 subtypes of Node: Graph and Group. Graph is so named because it executes an optimized graph of UGens. It can be likened to a voice in a synthesizer or an "instrument" in a Music N-type audio synthesis language such as Csound. The Graph type implements the SuperCollider concept of a Synth. Group is simply a container for a linked list of Node instances, and since Group is itself a type of Node, arbitrary trees may be constructed containing any combination of Group and Graph instances; readers may recognize this as the Composite design pattern (Gamma et al., 1995). The standard NodeCalcFunc for a Group (Group_Calc() in SC_Group.cpp) simply iterates through the Group's contained Nodes, calling each Node's NodeCalcFunc in turn. Although most code deals with Nodes polymorphically, the Node::mIsGroup field supports discriminating between Nodes of type Graph and of type Group at runtime. Any node can be temporarily disabled using the /n_run server command, which switches NodeCalcFuncs. When a Node is switched off, a NodeCalcFunc which does nothing is substituted for the usual one. Disabling a Group disables the whole tree under that Group.

A Graph is an aggregate of interconnected Unit subclasses (also known as Unit Generators or UGens). Unit instances are responsible for performing primitive audio DSP operations such as mixing, filtering, and oscillator signal generation. Each Graph instance is carved out of a single memory block to minimize the number of expensive calls to the memory allocator. Units are efficiently allocated from the Graph's memory block and evaluated by iterating through a linear array containing pointers to all of the Graph's Units. Each Unit instance provides a UnitCalcFunc function pointer to compute samples, which affords the same kind of flexibility as the NodeCalcFunc described above. For example, many Units implement a form of self-modifying code by switching their UnitCalcFuncs on the fly to execute different code paths, depending on their state.

Graphs are instantiated using a GraphDef (Graph Definition), which defines the structure of a class of Graphs. The GraphDef type implements the SuperCollider concept of a SynthDef. A GraphDef includes both data for passive representation (used on disk and as communicated from clients such as sclang) and optimized in-memory information used to efficiently instantiate and evaluate Graphs. GraphDef instances store data such as the memory allocation size for Graph instances, Unit initialization parameters, and information about the connections between Units. When a new GraphDef is loaded into the server, most of the work is done in GraphDef_Read(), which converts the stored representation to the run-time representation. Aside from allocating and initializing memory and wiring in pointers, one of the main tasks GraphDef_Read() performs is to determine which inter-Unit memory buffers will be used to pass data between Units during Graph evaluation. The stored GraphDef representation specifies an interconnected graph of named Unit instances with generalized information about input and output routing. This information is loaded into an in-memory array of UnitSpec instances, where each Unit name is resolved to a pointer to a UnitDef (see below), and the Unit interconnection graph is represented by instances of InputSpec and OutputSpec. This interconnection graph is traversed by a graph-coloring algorithm to compute an allocation of inter-Unit memory buffers, ensuring that the minimum number of these buffers is used when evaluating the Graph. Note that the order of Unit evaluation defined by a GraphDef is not modified by scsynth.

scsynth's tree of Nodes is rooted at a Group referenced by World::mTopGroup. World is responsible for managing the instantiation, manipulation, and evaluation of the tree of Nodes. World also manages much of the server's global state, including the buses used to hold control and audio input and output signals (e.g., World::mAudioBus) and a table of SndBuf instances (aka Buffers) used, for example, to hold sound data loaded from disk. An instance of World is accessible to Unit plug-ins via Unit::mWorld and provides World::ft, an instance of InterfaceTable, which is a table of function pointers which Units can invoke to perform operations on the World. An example of Units using World state is the In and Out units, which directly access World::mAudioBus to move audio data between Graphs and the global audio buses.

Unit subclasses provide all of the signal-processing functionality of scsynth. They are defined in dynamically loaded executable "plug-ins." When the server starts, it scans the nominated plug-in directories and loads each plug-in, calling its load() function; this registers all available Units in the plug-in with the World via the InterfaceTable::fDefineUnit function pointer (a minimal sketch of such an entry point appears at the end of this section). Each call to fDefineUnit results in a new UnitDef being created and registered with the global gUnitDefLib hash table, although this process is usually simplified by calling the macros defined in SC_InterfaceTable.h, such as DefineSimpleUnit() and DefineDtorUnit().

Some server data (more of which we will see later) is kept away from Unit plug-ins in an instance of HiddenWorld. Of significance here are HiddenWorld::mNodeLib, a hash table providing fast lookup of Nodes by integer ID; HiddenWorld::mGraphDefLib, a hash table of all loaded GraphDefs, which is used when a request to instantiate a new Graph is received; and HiddenWorld::mWireBufSpace, which contains the memory used to pass data between Units during Graph evaluation.
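For reference, the entry point of a typical plug-in of this era is tiny. The following minimal sketch uses a hypothetical unit name and assumes MyUGen's constructor and calc function are defined elsewhere in the same file; DefineSimpleUnit ends up calling through InterfaceTable::fDefineUnit as described above:

#include "SC_PlugIn.h"

static InterfaceTable* ft;     // saved so the wrapper macros can reach the server

extern "C" void load(InterfaceTable* inTable)
{
    ft = inTable;              // the server hands us its function table
    DefineSimpleUnit(MyUGen);  // registers a UnitDef for "MyUGen" in gUnitDefLib
}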

26.3 Real-Time Implementation Structure

We now turn our attention to the context in which the server is executed. This includes considerations of threading, memory allocation, and interthread communications. scsynth is a real-time system, and the implementation is significantly influenced by real-time requirements. We begin by considering what "real-time requirements" means in the context of scsynth and then explore how these requirements are met.

26.3.1 Real-Time Requirements

scsynth's primary responsibility is to compute blocks of audio data in a timely manner in response to requests from the OS audio service. In general, the time taken to compute a block of audio must be less than the time it takes to play it. These blocks are relatively small (on the order of 2 milliseconds for current-generation systems), and hence tolerances can be quite tight. Any delay in providing audio data to the OS will almost certainly result in an audible glitch. Of course, computing complex synthesized audio does not come for free and necessarily takes time. Nonetheless, it is important that the time taken to compute each block is bounded and as close to constant as possible, so that exceeding timing constraints occurs only due to the complexity or quantity of concurrently active Graphs, not to the execution of real-time unsafe operations. Such unsafe operations include:

• Algorithms with high or unpredictable computational complexity (for example, amortized-time algorithms with poor worst-case performance)
• Algorithms which intermittently perform large computations (for example, precomputing a lookup table or zeroing a large memory block at Unit startup)
• Operations which block or otherwise cause a thread context switch

The third category includes not only explicit blocking operations, such as attempting to lock a mutex or wait on a file handle, but also operations which may block due to unknown implementation strategies, such as calling a system-level memory allocator or writing to a network socket. In general, any system call should be considered real-time unsafe, since there is no way to know whether it will acquire a lock or otherwise block the process. Put simply, no real-time unsafe operation may be performed in the execution context which computes audio data in real time (usually a thread managed by the OS audio service). Considering the above constraints alongside the dynamic behavior implied by the domain model described in the previous section, and the fact that scsynth can read and write sound files on disk, allocate large blocks of memory, and communicate with clients via network sockets, you may wonder how scsynth can work in real time at all. Read on, and all will be revealed.

26.3.2 Real-Time Messaging and Threading Implementation

SuperCollider carefully avoids performing operations which may violate real-time constraints by using a combination of the following techniques:

• Communication to and from the real-time context is mediated by lock-free First In, First Out (FIFO) queues containing executable messages
• Use of a fixed-pool memory allocator which is accessed only from the real-time context
• Non-real-time-safe operations (when they must be performed at all) are deferred and executed asynchronously in a separate "non-real-time" thread
• Algorithms which could introduce unpredictable or transient high computational load are generally avoided
• Use of user-configurable, nonresizable data structures; exhaustion of such data structures typically results in scsynth operations failing

The first point is possibly the most important to grasp, since it defines the pervasive mechanism for synchronization and communication between non-real-time threads and the real-time context which computes audio samples. When a non-real-time thread needs to perform an operation in the real-time context, it enqueues a message which is later performed in the real-time context. Conversely, if code in the real-time context needs to execute a real-time unsafe operation, it sends the message to a non-real-time thread for execution. We will revisit this topic on a number of occasions throughout the remainder of the chapter.

Figure 26.2 shows another view of the scsynth implementation, this time focusing on the classes which support the real-time operation of the server. For clarity, only a few key classes from the domain model have been retained (shaded gray). Note that AudioDriver is a base class: in the implementation, different subclasses of AudioDriver are used depending on the target OS (CoreAudio for Mac OS X, PortAudio for Windows, etc.).

Figure 26.2 Real-time threading and messaging implementation structure.

Figure 26.3 illustrates the run-time thread structure and the dynamic communication pathways between threads via lock-free FIFO message queues. The diagram can be interpreted as follows: thick rectangles indicate execution contexts, which are either threads or callbacks from the operating system. Cylinders indicate FIFO message queue objects. The padlock indicates a lock (mutex), and the black circle indicates a condition variable. Full arrows indicate synchronous function calls (invocation of queue member functions), and half arrows indicate the flow of asynchronous messages across queues. The FIFO message queue mechanism will be discussed in more detail later in the chapter, but for now, note that the Write() method enqueues a message, Perform() executes message-specific behavior for each pending message, and Free() cleans up after messages which have been performed. The Write(), Perform(), and Free() FIFO operations can be safely invoked by separate reader and writer threads without the use of locks.

Figure 26.3 Real-time thread and queue instances and asynchronous message channels.

Referring to figures 26.2 and 26.3, the dynamic behavior of the server can be summarized as follows:

1. One or more threads listen to network sockets to receive incoming OSC messages which contain commands for the server to process. These listening threads dynamically allocate OSC_Packet instances and post them to "the Engine," using the ProcessOSCPacket() function, which results in Perform_ToEngine_Msg() (a FifoMsgFunc) being posted to the mOscPacketsToEngine queue. OSC_Packet instances are later freed, using FreeOSCPacket() (a FifoFreeFunc) by way of MsgFifo::Free(), via a mechanism which is described in more detail later.

2. "The Synthesis Engine," or "Engine" for short (also sometimes referred to here as "the real-time context"), is usually a callback function implemented by a concrete AudioDriver which is periodically called by the OS audio service to process and generate audio. The main steps relevant here are that the Engine calls Perform() on the mOscPacketsToEngine and mToEngine queues, which executes the mPerformFunc of any messages enqueued from other threads. Messages in mOscPacketsToEngine carry OSC_Packet instances which are interpreted to manipulate the Node tree, instantiate new Graphs, and so on. Whenever the Engine wants to perform a non-real-time-safe operation, it encodes the operation in a FifoMessage instance and posts it to the non-real-time thread for execution via the mFromEngine queue. Results of such operations (if any) will be returned via the mToEngine queue. After processing messages from mOscPacketsToEngine, mToEngine, and any previously scheduled OSC messages in mScheduler, the Engine performs its audio duties by arranging for real-time audio data to be copied between OS buffers and mWorld->mAudioBus and evaluating the Node tree via mWorld->mTopGroup. When the Engine has completed filling the OS audio output buffers, it calls Signal() on mAudioSync and returns to the OS.

3. Before the server starts servicing OS audio requests, it creates a thread for executing real-time unsafe operations (the non-real-time or NRT thread). This thread waits on mAudioSync until it is signaled by the Engine. When the non-real-time thread wakes up, it calls Free() and Perform() on the mFromEngine queue to perform any non-real-time-safe operations which the server has posted, then processes the mTriggers, mNodeEnds, and mDeleteGraphDefs queues. These queues contain notifications of server events. Performing the enqueued notification messages results in OSC messages being sent to clients referenced by ReplyAddress. After calling Perform() on all queues, the non-real-time thread returns to waiting on mAudioSync until it is next wakened by the Engine. Note that mAudioSync is used to ensure that the NRT thread will always wake up and process Engine requests in a timely manner. However, it may never sleep, or it may not process the queues on every Engine cycle if it is occupied with time-consuming operations. This is acceptable, since the Engine assumes non-real-time operations will take as long as necessary.

The description above has painted the broad strokes of the server's real-time behavior. Zooming in to a finer level of detail reveals many interesting mechanisms which are worth the effort to explore. A number of these are discussed in the sections which follow.

26.3.2.1 Real-time memory pool allocator

Memory allocations performed in the real-time context, such as allocating memory for new Graph instances, are made using the AllocPool class. AllocPool is a reimplementation of Doug Lea's fast general-purpose memory allocator algorithm (Lea, 2000). The implementation allocates memory to clients from a large, preallocated chunk of system memory. Because AllocPool is invoked only by code running in the real-time context, it doesn't need to use locks or other mechanisms to protect its state from concurrent access, and hence is real-time safe. This makes it possible for the server to perform many dynamic operations in the real-time thread without needing to defer to an NRT thread to allocate memory. That said, large allocations and other memory operations which are not time-critical are performed outside the real-time context. Memory allocated with an AllocPool must of course also be freed into the same AllocPool, and in the same execution context, which requires some care to be taken. For example, FifoMsg instances posted by the Engine to the NRT thread with a payload allocated by AllocPool must ensure that the payload is always freed into AllocPool in the real-time execution context. This can be achieved using MsgFifo::Free(), which is described in the next section.
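From a plug-in's point of view, this pool is what the RTAlloc and RTFree wrappers of chapter 25 reach (through the InterfaceTable). A hypothetical unit's constructor and destructor might use them as follows:

void MyUGen_Ctor(MyUGen* unit)
{
    // allocated from the real-time AllocPool: safe to call in the RT context
    unit->mTable = (float*)RTAlloc(unit->mWorld, 512 * sizeof(float));
}

void MyUGen_Dtor(MyUGen* unit)
{
    RTFree(unit->mWorld, unit->mTable);  // must be returned to the same pool
}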

26.3.2.2 FIFO queue message passing

As already mentioned, scsynth uses FIFO queues for communicating between threads. The basic concept of a FIFO queue is that you push items on one end of the queue and pop them off the other end later, possibly in a different thread. A fixed-size queue can be implemented as a circular buffer (also known as a ring buffer) with a read pointer and a write pointer: new data are placed in the queue at the write pointer, which is then advanced; when the reader detects that the queue is not empty, data are read at the read pointer and the read pointer is advanced. If there's guaranteed to be only 1 reading thread and 1 writing thread, and you're careful about how the pointers are updated (and take care of atomicity and memory-ordering issues), then it's possible to implement a thread-safe FIFO queue without needing to use any locks. This lock-free property makes the FIFO queue ideal for implementing real-time interthread communications in scsynth.

The queues which we are most concerned with here carry a payload of message objects between threads. This is an instance of the relatively well known Command design pattern (Gamma et al., 1995). The basic idea is to encode an operation to be performed as a class or struct and then pass it off to some other part of the system for execution. In our case the Command is a struct containing data and a pair of function pointers, one for performing the operation and another for cleaning up. We will see later that scsynth also uses a variant of this scheme in which the Command is a C++ class with virtual functions for performing an operation in multiple stages. But for now, let's consider the basic mechanism, which involves posting FifoMsg instances to a queue of type MsgFifo. Figure 26.2 shows that the mOscPacketsToEngine, mToEngine, and mFromEngine queues carry FifoMsg objects. The code below shows the FifoMsgFunc type and the key fields of FifoMsg:

typedef void (*FifoMsgFunc)(struct FifoMsg*);

struct FifoMsg {
    FifoMsgFunc mPerformFunc;
    FifoMsgFunc mFreeFunc;
    void* mData;
    // ...
};


To enqueue a message, the sender initializes a FifoMsg instance and passes it to MsgFifo::Write(). Each FifoMsg contains the function pointer members mPerformFunc and mFreeFunc. When the receiver calls MsgFifo::Perform(), the mPerformFunc of each enqueued message is called with a pointer to the message as a parameter. MsgFifo also maintains an additional internal pointer which keeps track of which messages have been performed by the receiver. When MsgFifo::Free() is called by the sending execution context, the mFreeFunc is invoked on each message whose mPerformFunc has already completed. In a moment we will see how this mechanism is used to free SequencedCommand objects allocated in the real-time context.

A separate MsgFifoNoFree class is provided for those FIFOs which don't require this freeing mechanism, such as mTriggers, mNodeEnds, and mDeleteGraphDefs. These queues carry specialized notification messages. The functionality of these queues could have been implemented by dynamically allocating payload data and sending it using FifoMsg instances; however, since MsgFifo and MsgFifoNoFree are templates parameterized by message type, it was probably considered more efficient to create separate specialized queues using message types large enough to hold all of the necessary data, rather than invoking the allocator for each request.

The FifoMsg mechanism is used extensively in scsynth, not only for transporting OSC message packets to the real-time engine but also for arranging for the execution of real-time unsafe operations in the NRT thread. Many server operations are implemented by the FifoMsgFuncs defined in SC_MiscCmds.cpp. However, a number of operations need to perform a sequence of steps alternating between the real-time context and the NRT thread. For this, the basic FifoMsg mechanism is extended using the SequencedCommand class.

26.3.2.3 SequencedCommand

Unlike FifoMsg, which just stores two C function pointers, SequencedCommand is a C++ abstract base class with virtual functions for executing up to 4 stages of a process. Stages 1 and 3 execute in the real-time context, while stages 2 and 4 execute in the NRT context. The Delete() function is always called in the RT context, potentially providing a fifth stage of execution. SequencedCommands are used for operations which need to perform some of their processing in the NRT context. At the time of writing, all SequencedCommand subclasses were defined in SC_SequencedCommand.cpp. They are mostly concerned with the manipulation of SndBufs and GraphDefs. (See table 26.1 for a list of SequencedCommands defined at the time of writing.)

Table 26.1 Subclasses of SequencedCommand defined in SC_SequencedCommand.cpp

Buffer Commands: BufGenCmd, BufAllocCmd, BufFreeCmd, BufCloseCmd, BufZeroCmd, BufAllocReadCmd, BufReadCmd, SC_BufReadCommand, BufWriteCmd
GraphDef Commands: LoadSynthDefCmd, RecvSynthDefCmd, LoadSynthDefDirCmd
Miscellaneous: AudioQuitCmd, AudioStatusCmd, SyncCmd, NotifyCmd, SendFailureCmd, SendReplyCmd, AsyncPlugInCmd

To provide a concrete example of the SequencedCommand mechanism, we turn to the Help file for Buffer (aka SndBuf), which reads: "Buffers are stored in a single global array indexed by integers beginning with zero. Buffers may be safely allocated, loaded and freed while synthesis is running, even while unit generators are using them." Given that a SndBuf's sample storage can be quite large, or contain sample data read from disk, it is clear that it needs to be allocated and initialized in the NRT thread. We now describe how the SequencedCommand mechanism is used to implement this behavior.

To begin, it is important to note that the SndBuf class is a relatively lightweight data structure which mainly contains metadata such as the sample rate, channel count, and number of frames of the stored audio data. The actual sample data are stored in a dynamically allocated floating-point array pointed to by SndBuf::data. In the explanation which follows, we draw a distinction between the instance data of SndBuf and the sample data array pointed to by SndBuf::data.

In contrast to the client-oriented worldview presented in the Help file, World actually maintains 2 separate arrays of SndBuf instances: mSndBufs and mSndBufsNonRealTimeMirror. Each is always in a consistent state but is accessed or modified only in its own context: mSndBufs in the RT context via World_GetBuf() and mSndBufsNonRealTimeMirror in the NRT thread via World_GetNRTBuf(). On each iteration the engine performs messages in mToEngine and then evaluates the Node tree to generate sound. Any changes to mSndBufs made when calling mToEngine->Perform() are picked up by dependent Units when their UnitCalcFunc is called. The code may reallocate an existing SndBuf's sample data array. It is important that the old sample data array is not freed until we can be certain no Unit is using it. This is achieved by deferring freeing the old sample data array until after the new one is installed into the RT context's mSndBufs array. This process is summarized in figure 26.4. The details of the individual steps are described below.

We now consider the steps performed at each stage of the execution of BufAllocReadCmd, a subclass of SequencedCommand, beginning with the arrival of an OSC_Packet in the real-time context. These stages are depicted in 4 sequence diagrams, figures 26.5 through 26.8. The exact function parameters have been simplified from those in the source code, and only the main code paths are indicated, to aid understanding. The OSC message to request allocation of a Buffer filled with data from a sound file is as follows:

/b_allocRead bufnum path startFrame numFrames
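From the client's side, this entire four-stage round-trip is triggered by a single message. For example, from sclang (the sound file shown ships with SuperCollider):

// send the raw OSC command: buffer 0, whole file
s.sendMsg("/b_allocRead", 0, "sounds/a11wlk01.wav", 0, 0);

// or, equivalently, via the Buffer class, which sends the same message
b = Buffer.read(s, "sounds/a11wlk01.wav");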


Figure 26.4 Overview of multithreaded processing of the b_allocRead command.

Stage 1 (see figure 26.5): The real-time context processes an OSC packet containing the b_allocRead message. The OSC dispatch mechanism looks up the correct function pointer to invoke from gCmdLibrary, in this case meth_b_allocRead(). meth_b_allocRead() calls CallSequencedCommand() to instantiate a new BufAllocReadCmd instance (a subclass of SequencedCommand) which we will call cmd. CallSequencedCommand() calls cmd->Init(), which unpacks the parameters from the OSC packet, and then calls cmd->CallNextStage(), which in turn invokes cmd->Stage1(), which in the case of BufAllocReadCmd does nothing. It then enqueues cmd to the NRT thread, using SendMessageFromEngine() with DoSequencedCommand() as the FifoMsgFunc.

Figure 26.5 Stage 1 of processing the b_allocRead command in the real-time context.

Stage 2 (see figure 26.6): Some time later, the mFromEngine FIFO is processed in the NRT thread. The FifoMsg containing our cmd is processed, which results in cmd->Stage2() being called via DoSequencedCommand() and cmd->CallNextStage(). cmd->Stage2() does most of the work: first it calls World_GetNRTBuf(), which retrieves a pointer to the NRT copy of the SndBuf record for cmd->mBufIndex. Then it opens the sound file and seeks to the appropriate position. Assuming no errors have occurred, the pointer to the old sample data array is saved in cmd->mFreeData so it can be freed later. Then allocBuf() is called to update the SndBuf with the new file information and to allocate a new sample data array. The data are read from the file into the sample data array and the file is closed. A shallow copy of the NRT SndBuf is saved in cmd->mSndBuf. Finally, cmd->CallNextStage() enqueues the cmd with the real-time context.

Figure 26.6 Stage 2 of processing the b_allocRead command in the non-real-time (NRT) context.

Stage 3 (see figure 26.7): Similarly to stage 2, only this time in the real-time context, cmd->Stage3() is called via DoSequencedCommand() and cmd->CallNextStage(). A pointer to the real-time copy of the SndBuf for index cmd->mBufIndex is retrieved using World_GetBuf(cmd->mBufIndex), and the SndBuf instance data initialized in stage 2 is shallow copied into it from cmd->mSndBuf. At this stage the sample data array which was allocated and loaded in stage 2 is now available to Units calling World_GetBuf(). cmd is then sent back to the non-real-time thread.

Figure 26.7 Stage 3 of processing the b_allocRead command in the real-time context.

Stage 4 (see figure 26.8): Once again, back in the non-real-time thread, cmd->Stage4() is invoked, which frees the old sample data array which was stored into cmd->mFreeData in stage 2. Then the SendDone() routine is invoked, which sends an OSC notification message back to the client that initiated the Buffer allocation. Finally, cmd is enqueued back to the real-time context with the FreeSequencedCommand() FifoMsgFunc, which will cause cmd to be freed, returning its memory to the real-time AllocPool.

Figure 26.8 Stage 4 of processing the b_allocRead command in the non-real-time (NRT) context.

26.3.2.4 Processing and dispatching OSC messages

The ProcessOSCPacket() function provides a mechanism for injecting OSC messages into the real-time context for execution. It makes use of mDriverLock to ensure that only 1 thread is writing to the mOscPacketsToEngine queue at any time (this could occur, for example, when multiple socket listeners are active). To inject an OSC packet using ProcessOSCPacket(), the caller allocates a memory block using malloc(), fills it with an OSC packet (for example, by reading from a network socket), and then calls ProcessOSCPacket(). ProcessOSCPacket() takes care of enqueuing the packet to the mOscPacketsToEngine queue and deleting packets, using free(), once they are no longer needed.

Once the real-time context processes OSC packets, they are usually freed using the MsgFifo message-freeing mechanism; however, packets whose time-stamp values are in the future are stored in the mScheduler PriorityQueue for later execution. Once a scheduled packet has been processed, it is sent to the NRT thread to be freed.

scsynth dispatches OSC commands by looking up the SC_CommandFunc associated with a given OSC address pattern. At startup, SC_MiscCmds.cpp wraps these functions in LibCmd objects and stores them into both the gCmdLib hash table and the gCmdArray array. OSC commands sent to the server may be strings or special OSC messages with a 4-byte address pattern in which the low byte is an integer message index. Command strings are compatible with any OSC client, whereas the integer command indices are more efficient but don't strictly conform to the OSC specification. When integer command indices are received, PerformOSCMessage() looks up the appropriate SC_CommandFunc in the gCmdArray array; otherwise it consults the gCmdLib hash table.

The mTriggers, mNodeEnds, and mDeleteGraphDefs FIFOs are used by the real-time context to enqueue notifications which are translated into OSC messages in the NRT thread and are sent to the appropriate reply address by invoking ReplyAddress::mReplyFunc.

26.3.2.5 Fixed-size data structures

In real-time systems, a common way to avoid the potentially real-time unsafe operation of reallocating memory (which may include the cost of making the allocation and of copying all of the data) is simply to allocate a "large enough" block of memory in the first place and have operations fail if no more space is available. This fixed-size allocation strategy is adopted in a number of places in scsynth, including the size of:

• The FIFO queues which interconnect different threads
• mAllocPool (the real-time context's memory allocator)
• The mScheduler priority queue for scheduling OSC packets into the future
• The mNodeLib hash table, which is used to map integer Node IDs to Node pointers

In the case of mNodeLib, the size of the table determines the maximum number of Nodes the server can accommodate, as well as the speed of Node lookup as mNodeLib becomes full. The sizes of many of these fixed-size data structures are configurable in WorldOptions (in general, by command-line parameters), the idea being that the default values are usually sufficient, but if your usage of scsynth causes any of the default limits to be exceeded, you can relaunch the server with larger sizes as necessary.
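For example, from sclang these limits can be raised through ServerOptions before booting; the particular values here are arbitrary:

// sclang: enlarge two of the fixed-size structures, then restart the server
s.options.memSize = 8192 * 8;   // real-time AllocPool size, in kilobytes
s.options.maxNodes = 1024 * 4;  // capacity of the mNodeLib node table
s.reboot;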

26.4 Low-Level Mechanisms

As may already be apparent, scsynth gains much of its power from efficient implementation mechanisms. Some of these fall into the category of methods with low, bounded complexity, which contribute to the real-time capabilities of the server, while others are more like clever optimizations which help the server to run faster. Of course, the whole server is implemented efficiently, so looking at the source code will reveal many more optimizations than can be discussed here; however, a number of those which I have found interesting are briefly noted below. As always, consult the source code for more details.

• The Str4 string data type consists of a string of 32-bit integers, each containing 4 chars. Aside from being the same format that OSC uses, the implementation improves the efficiency of comparison and other string operations by being able to process 4 chars at once.
• Hash tables in scsynth are implemented using open addressing with linear probing for collision resolution. Although these tables don't guarantee constant-time performance in the worst case, when combined with a good hashing function (Wang, 2007) they typically provide close to constant performance so long as they don't get too full.
• One optimization to hashing used in a number of places in the source code is that the hash value for each item (such as a Node) is cached in the item. This improves performance when resolving collisions during item lookup.
• The World uses a "touched" mechanism which Units and the AudioDriver can use to determine whether audio or control buses have been filled during a control cycle: World maintains mBufCounter, which is incremented at each control cycle. When a Unit writes to a bus, it sets the corresponding touched field (for example, in the mAudioBusTouched array for audio buses) to mBufCounter. Readers can then check the touched field to determine whether the bus contains data from the current control cycle. If not, the data doesn't need to be copied, and zeros can be used instead.
• Delay lines typically output zeros until the delay time reaches the first input sample. One way to handle this is to zero the internal delay storage when the delay is created or reset. The delay unit generators in scsynth (see DelayUGens.cpp) avoid this time-consuming (and hence real-time unsafe) operation by using a separate UnitCalcFunc during the startup phase. For example, BufDelayN_next_z() outputs zeros for the first bufSamples samples, at which point the UnitCalcFunc is switched to BufDelayN_next(), which outputs the usual delayed samples.
• For rate-polymorphic units, the dynamic nature of UnitCalcFuncs is used to select functions specialized to the rate type of the Unit's parameters. For example, BinaryOpUGens.cpp defines UnitCalcFuncs which implement all binary operations in separate versions for each rate type: there are separate functions for adding an audio vector to a constant, add_ai(), and for adding 2 audio vectors, add_aa(). When the binary-op Unit constructor BinaryOpUGen_Ctor() is called, it calls ChooseNormalFunc() to select among the available UnitCalcFuncs based on the rates of its inputs.
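The calc-function switch at the heart of the last two points is a one-liner using the SETCALC macro. Here is a sketch of the delay startup idiom; the names are illustrative, not from DelayUGens.cpp:

void MyDelay_next_z(MyDelay* unit, int inNumSamples)
{
    // ... output zeros while the delay line is still filling ...
    if (unit->mWritten >= unit->mDelaySamples) {
        SETCALC(MyDelay_next);  // switch to the cheaper steady-state function;
    }                           // the startup test disappears from later calls
}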

This concludes our little journey through the wonderful gem that is scsynth. I invite you to explore the source code yourself; it has much to offer, and it's free!

References

Gamma, E., R. Helm, R. Johnson, and J. Vlissides. 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley.

Lea, D. 2000. "A Memory Allocator" (accessed January 9, 2008).

McCartney, J. 2002. "Rethinking the Computer Music Language: SuperCollider." Computer Music Journal, 26(4): 61–68.

Wang, T. 2007. "Integer Hash Function" (accessed January 9, 2008).

Subject Index

This index includes topics from the main body of the text. Ubiquitous topics have been limited to principal references. For messages and classes from the SC language, see the code index. For definitions of terms, see the syntax appendix. 12-Tone Matrix, 34–35 Abstraction, 210–211. See also chapter 7 Additive Synthesis, 3, 6, 34–37, 128 AIFF, 25, 195, 254, 483–485 Algorithm (algorithmic), 122, 420, 460, 560 composition, 599 inside a method, 161–162 as pattern, 607–608 for pitch extraction, 441–442 synthesis, 385, 390, 644, 653 Ambient Lights (project), 118 Ambisonics, 424–425 Analysis FFT, 431 real time, 440–446 signal, 61, 65 UGens, 122 Arduino, 120–124 Arguments, 6–10, 132–133, 148–149 Array, 11–14, 23, 28–36, 56–57 indexing, 238–240 literal, 742–744 nested, 89 ASCII, 115, 121, 128, 165, 362 Association, 163, 744 Audio rate, 17, 42, 56, 196 Balancing enclosures, 12 Beat Tracking. See Machine listening

Binary, 65, 133–134 numbers, 640, 642 operators, 12 Binaural, 420–422, 560, 582–586. See also chapter 19 Bipolar, 21, 42–46, 57, 65 Boolean, 31–33, 65 BPF (Band pass filter). See Filter Buffer, 24–29, 61, 76, 151, 184, 200, 205, 367–372, 480–484, 710, 725, 731–734, 737 Bus, 25–36, 43, 57–60, 80, 538, 550 Byte Code, 146–147, 676–679 C++, 55, 120, 128, 178, 240, 357, 483, 578, 659, 697–704, 712–723, 741–742 Carrier (phase modulation), 15–16, 20, 45 Cents, 508–509 Char, 287, 742 Class (classes), 56, 128–130, 168–172 as object models, 241–243 tree, 173 writing, 694–695 Clock (class), 83, 87, 101, 219, 228 AppClock, 83, 234, 246, 282, 626 SystemClock, 67, 83–84, 549, 626 TempoClock, 83–84, 197–202, 645 Cloud (CloudGenerator), 258, 261, 478– 480


Cocoa, 349, 356, 375 CocoaDocument (see Document) Coding conventions, 659–660 networked live, 230 scsynth style, 722 Collection, 12, 14, 28–29, 60, 87, 115, 118, 128, 132, 134, 152, 162–165 Comb (N, L, C), 61, 76, 80 Combinatorics, 230 Comments, 10–11, 52, 719 Compilation (compiler), 146, 659–660, 664–666 Composition. See chapter 3 DAW style, 53, 81, 93 object oriented (see chapter 18) Compression, 62 Conductor. See Patterns, conductor Constraints. See chapter 23 Control rate, 17, 26, 56, 79–80, 196 ControlSpec, 277, 279 Convolution, 417 CPU (usage), 72–76, 264, 401, 718–719 Crucial library. See Libraries, crucial Csound, 61, 723 Cue Players, 91 DAW (Digital audio workstation) Composition (see Composition, DAW) DC (offset), 716–717 Debugging, 48, 55, 62, 108, 325, 361, 717 Decorrelation, 428–436 Delay, 61–62, 76, 79–80, 415–417 Devices, external. See External devices Dialects, 635–637. See also chapter 23 Dialogue (windows), 99, 151, 302 Dictionary, 139, 141, 163, 165 Distortion, 62, 494 Document Emacs, 373 OS X GUI (Cocoa), 299–303 Dot (receiver dot message), 10–11, 114, 130 Drag and Drop, 288 Emacs (scel), 355–357, 366–374 Encapsulation, 557–564

Enclosures, 12 Envelope, 18, 25, 47, 99, 343, 345, 414 Environment, 55, 120–124, 166–167, 648 variables, 25, 41, 139 Evaluation (of code), 4, 7, 146 Event, 180–182. See also Patterns; chapter 6 note (keys), 184–189 as object models, 241–243 PatternProxy, 220–221 protoEvent, 193–197, 202–205, 603–605 streams, 220–222, 225–230 triggering, 55 Extensions. See Libraries External devices. See chapter 4 FFT, 440–442 Filter, 5, 17, 61, 68, 80 BPF, 122, 212 HPF, 80, 122 Klank, 36, 38, 50 Lag (Lag2), 62, 69, 185, 212 LeakDC, 402 LPF, 80, 397, 429, 692–693 Median, 122 Ringz, 231–232, 346, 353 RLPF, 3, 345 Flange, 695, 698, 703 Float (floating-point), 11, 239 Flow control, 160–162 FlowLayout, 285 Fourier, 357, 417 Frequency modulation (FM). See Modulation, frequency FreeVerb, 68–70, 417, 419 Function, 11–14, 60, 143–144, 147 FunctionDef, 671 iterating, 152 return, 130–131, 144 Garbage collection, 659–660, 684–685 Gate, 18, 28, 151 Gestures, 97 Granular synthesis, 64, 80, 197, 258, 432, 465–469. See also Microsound client-side, 432–433 server side, 480–483


sound files, 489–490 wave sets, 490–500 Grouping. See Precedence GUI (Graphical user interface). See also Platforms; chapters 9–12 cross-platform, 298–299 dynamically generated, 295–297 Emacs (see chapter 12) JSCUserView, 319–321 OS X, 274–276 SCUser, 291–294 static (singleton), 298 SwingOSC (see chapter 10) tuning, 529 windows, 349 GVerb, 68–70, 417, 419 Harmonic spectrum. See Spectrum, harmonic series HID (Human Interface Devices), 105–111 Linux, 365 HierSch, 644–647 History, 230–235, 243 HPF (High pass filter). See Filter Human Interface Devices. See HID IdentityDictionary, 163, 165–166, 183 If (statements). See Flow control Inharmonic spectrum. See Spectrum, inharmonic Inheritance, 129, 168–171 Instance methods, 130, 137, 139 Instance variables. See Variables, instance Interpolation, 56, 65, 68, 74, 76, 79–80 Interpreter, 180, 182, 205, 208–209, 240, 246, 307, 679–685 variables, 140–141 Introspection. See Linux, introspection iPhone, 633 Iteration, 28–31, 152–153 ixiQuarks, 614–619, 624–628 JACK. See Linux, JACK Japan. See chapter 22 Java, 128–309, 315–319, 326–329 JITLib (Just In Time), 102, 480, 603, 648

JSCUserView. See GUI Juggling, 395 Key Tracking. See Machine listening Keyboard and Mouse, 286–287 Keywords, 16, 132, 171, 741 Klank. See Filter Lag (Lag2), 62, 69, 185, 212 LazyEnvir, 211, 215, 646 LeakDC. See Filter Libraries C, 659 chucklib, 603–607 crucial, 303 dewdrop_lib, 589–611 extensions, 55, 62, 79, 303, 546, 572 Linux, 359–360 platform specific, 719–720 Windows (platform), 351 quarks, 615 Linear. See Interpolation Linux, 3–4, 11. See also chapter 12 ALSA, 363–365 introspection, 371–372 JACK, 362–363 Live performance. See chapter 20 ListPattern, 141 Literals, 129–130 Localization, 385 Logical expressions, 33. See also ==, !=, >, <

SuperCollider supports several bitwise operators on signals: & (bitwise and), | (bitwise or), bitXor (bitwise exclusive or), not, << (binary shift-left), and >> (binary shift-right). Bitwise operations are techniques fundamental to low-level computing and to the architecture of hardware processors themselves. Fundamentally, they are primitive ways to manipulate binary numerals at the level of their individual bit patterns. For instance, & will return 1 only if both of its operands are 1, | will return 1 if at least one of its operands is 1, and so on. In particular, as far as audio signals are concerned, the & operator will return 1 if the absolute values of both of its operands are greater than or equal to 1 and at least one of them is positive, -1 if they are both less than or equal to -1, and 0 in all other cases. The | operator will return 1 if at least one of its operands is greater than or equal to 1, -1 if at least one of its operands is less than or equal to -1, and 0 in all other cases. The bitXor operator is very similar to |, the only difference being that it generates 0 if both operands' absolute values are greater than or equal to 1. The not operator is unary and will return 1 when the input is negative and 0 when the input is positive.


Bit-shifting operators are a bit idiosyncratic, especially for signals. The << and >> operators will shift the bit pattern of the left operand to the left or to the right, respectively, by as many digits as instructed by the value of the right operand. For decimal numbers, we can think of these operations as multiplying the left operand by 2 raised to the right operand in the case of <<, or as dividing it by the same power of 2 in the case of >> (for instance, 24 >> 3 equals 24 / (2**3), that is, 3). As far as signals are concerned, bit-shifting operations are only meaningful with integer values and will truncate their operands' values to the nearest integer. Therefore, in order to have any serious effect on our waveforms, we have to resort to the common trick of DC-biasing or amplifying our signals before the operation, though this time we will probably have to be more extreme. Due to the nature of the bitwise operations, all the resulting waveforms will consist of straight-line and rectangular segments. Nonetheless, these operations are very useful, as they can sculpt waveforms in ways that are almost impossible to achieve otherwise. Consider the following examples:

// bitwise operations on waveforms
{ (SinOsc.ar(mul: 1.2) | WhiteNoise.ar(mul: 1.2)) * 0.7 }.scope; // complex waveform from two basic ones
{ LeakDC.ar(LFSaw.ar(mul: 4, add: 2) >> 1) * 0.1 }.scope; // sculpt a sawtooth wave using bit-shifting
{ LeakDC.ar(SinOsc.ar(mul: 4, add: 2) << 3) * 0.2 }.scope; // generate complex shapes using bit-shifting

The resulting waveforms are shown in the following image:


Summary

In this chapter, we discussed time-domain audio representation and elaborated on various ways to synthesize and manipulate signals to achieve imaginative waveforms. These included standard waveshaping and wavetable-lookup techniques, as well as less common ones such as bitwise transformations, demand-rate stochastic generators, and envelope-based oscillators. In the next chapter, we will turn to the frequency domain and examine techniques to synthesize and process spectra, the frequency-domain equivalent of waveforms.


Synthesizing Spectra

Audio signals are usually dealt with either as functions of time or as functions of frequency. In the previous chapter, we discussed time-domain audio representation and elaborated on various ways in which we can synthesize and manipulate waveforms. Likewise, we will now discuss frequency-domain audio representation and elaborate on the various techniques we can use to synthesize or manipulate spectra (the equivalents of waveforms in the frequency domain). Again, we will be primarily concerned with their visual aspects rather than with their acoustic properties, as there is already a wealth of technical resources on the latter available to the reader. The topics that will be covered in this chapter are as follows:

• Frequency-domain fundamentals
• Fast Fourier Transform (FFT) in SuperCollider
• Synthesizing new spectra
• Transforming the existing spectra
• Optimizing spectra for scoping

Introducing the frequency domain

The frequency domain is nothing more than an alternative way to represent some signal. Nevertheless, it is of fundamental importance as it visualizes certain kinds of information that cannot be appreciated otherwise.

Synthesizing Spectra

Spectra

Signals in the frequency domain are represented as functions of amplitude (vertical axis) versus frequency (horizontal axis). As such, a spectrum is fundamentally different from a waveform in that it represents how sound manifests in perceptual, rather than physical, space. Indeed, spectra give no indication of how a signal would manifest in the physical world if translated to sound, yet they do accurately describe what the harmonic content of this sound would be. This should come as no surprise if we are familiar with the mechanics of hearing and, particularly, the physiology of the inner ear. Therein, a number of sensory hair cells along the basilar membrane, each of which is sensitive to a particular frequency range, will fire neural spikes when stimulated. That is to say, the inner ear performs some sort of spectral analysis to inform the brain of a sound's harmonic content. Leaving cognition aside, we largely perceive sound as a time-varying spectrum. Despite some superficial analogies, spectra and waveforms are very different beings, and subsequently, it is not very helpful to think of one in terms of the other. Their visual characteristics stand for completely different kinds of information. In a very similar fashion to our auditory apparatus, we can analyze a time-domain signal according to a bank of fixed frequency ranges (the so-called bins) in order to represent it in the frequency domain. Before being able to listen to such a signal, of course, we have to first synthesize a waveform out of it, that is, convert it back into the time domain. From the plethora of algorithms that implement spectral analysis, the most important is unanimously the Fast Fourier Transform (FFT). Jean Baptiste Joseph Fourier was an 18th-19th century French mathematician who claimed that any kind of continuous periodic signal, however complex it may be, can be accurately represented as a sum of sine and cosine waves. Today, after his ideas have been thoroughly refined and evolved, we can rely on FFT to accurately model any kind of signal as a sum of partials (that is, frequency coefficients), with implementations fast enough for real-time applications.

The basilar membrane is a stiff membrane within the cochlea of the inner ear, which separates two liquid-filled organs (the scala media and the scala tympani) and is also the base for the sensory cells of human hearing.
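To see Fourier's claim in action before turning to FFT proper, here is a minimal time-domain sketch (the 220 Hz fundamental and the number of partials are arbitrary choices) that approximates a square wave by summing odd harmonics weighted by 1/k:

( // approximating a square wave as a sum of sine partials
{
    var n = 9; // how many odd partials to sum
    Mix.fill(n, { arg i;
        var k = (2 * i) + 1; // odd harmonic numbers: 1, 3, 5, ...
        SinOsc.ar(220 * k, mul: 1/k) // each partial weighted by 1/k
    }) * 0.2
}.play;
)

The more partials we add, the closer the waveform gets to an ideal square wave.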


Fast Fourier Transform in SuperCollider

A formal discussion of FFT would be far beyond the scope of this book; it suffices to say that the FFT algorithm analyzes temporal snapshots of our signal in order to generate a time-varying spectrum. Nonetheless, it has to be said that FFT is not a transparent process; there are several caveats to consider, the most important being a trade-off between spectral resolution and accurate timing: the greater the first, the less accurate the second, and vice versa. Technically speaking, a spectrum is usually represented in either Cartesian (complex) or polar form. These two forms are merely different ways to represent the same information, yet they can be conceptualized differently. In its complex flavor, the signal represents the amplitudes of the cosine coefficients (real part) and the amplitudes of the sine coefficients (imaginary part) that would synthesize the original signal if added together. Since cosine signals are essentially just sine ones with their phase shifted by π/2 radians, we can easily think of the polar representation as consisting of the magnitudes of the bins and their phase offsets. In SuperCollider, both the FFT as well as its inverse (that is, synthesizing a time-domain signal out of a spectrum) are implemented, namely the FFT and the IFFT UGens. FFT will analyze the time-domain signal, store the data inside an instance of Buffer (we typically use LocalBuf for convenience), and return an FFT chain. Note that the size of the Buffer object has to be a power of two and a multiple of SuperCollider's block size; typical sizes are 512, 1024, 2048, and 4096. The resolution of the spectral analysis (the number of bins) depends on the size of our Buffer object; bear in mind, however, that as already mentioned, the greater the size, the slower the analysis and thus, the lesser the time accuracy. A single FFT buffer will hold both the magnitudes and the phases, so for each of these measurements only half of its size is available; yet there is no real information loss, since the output of an FFT analysis of a digital signal is made of two halves, mirrored at half of the sampling frequency. Note also that for multichannel signals, we need to provide an appropriate array of Buffer objects (and not a multichannel one). Then, having switched to the frequency domain, we can chain up instances of the various available phase-vocoder UGens (identified by the PV_ prefix) to manipulate and process spectra. These UGens will convert, as needed, between Cartesian and polar representations, making it impossible to know in which form the values will be at any given time. Finally, when we are done processing in the frequency domain, we can use the IFFT UGen to synthesize a time-domain signal out of the FFT chain.
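As a minimal sketch of this round trip (assuming the default server is booted), the following analyzes a signal and immediately resynthesizes it, with no processing in between:

( // FFT analysis followed directly by IFFT resynthesis
{
    var in, chain;
    in = Saw.ar(220, 0.2); // any time-domain signal
    chain = FFT(LocalBuf(2048), in); // 2048-point analysis, 1024 bins
    IFFT(chain); // back to the time domain; should sound like the input
}.play;
)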


Nevertheless, we do not necessarily have to rely on frequency-domain techniques to synthesize spectra; we can do so by relying solely on time-domain techniques, as will soon be demonstrated. Yet, in order to have some spectrum visualized, we do need to perform some kind of spectral analysis (which is usually part of the visualizer's very implementation, as is the case with FreqScope). This implies that our visualization will suffer FFT artifacts even if our audio signal does not. We will discuss ways to compensate for this later in this chapter.

In mathematics, the Cartesian coordinate system specifies each point in two-dimensional space with a pair of numerical coordinates, which represent the distances from the nominal point at which the two axes meet. Cartesian coordinates may also be represented using complex numbers. In mathematics, the polar coordinate system is a two-dimensional coordinate system wherein each point on a plane is determined by a distance from a fixed point and an angle from a fixed direction.
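The two forms are easy to relate within the language itself, since SuperCollider ships with Complex and Polar classes; a small sketch with arbitrary values:

// relating Cartesian (complex) and polar representations
c = Complex(3, 4); // real (cosine) and imaginary (sine) parts
p = c.asPolar; // convert to polar form
p.rho.postln; // -> 5.0, the magnitude
p.theta.postln; // -> 0.927..., the phase offset in radians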

Creating and manipulating spectra

Unlike waveforms, which can only convey limited information on how a signal sounds, spectra, to some extent, reflect the way we perceive sound, and therefore the shape of a spectrum is a very straightforward indication of how a signal will sound. This explains why spectral synthesis techniques are very common. Here we will assume that the reader is already accustomed to basic techniques, such as additive or subtractive synthesis and amplitude/frequency modulation, and rather emphasize less obvious ways to synthesize or manipulate spectra.

Aggregating and enriching spectra

The most straightforward way to synthesize a custom spectrum would be to simply aggregate individual signals of known spectral content. The idea obviously follows the well-known additive synthesis paradigm, yet we extend this stratagem here to any kind of signal and not merely sinusoids. In such a context, we can use pure sine waves to pointillistically add specific frequencies, more complex oscillators to create series of harmonically related partials, and band-limited (that is, filtered) aperiodic generators to add energy in consecutive frequency ranges. Moreover, we can use control signals to dynamically control how our spectra will evolve in the course of time. We can easily extend this technique to the frequency domain using PV_Add, which simply performs spectral addition. In the following code, we use a series of individually modulated units to synthesize a time-varying periodical spectrum:

( // synthesizing spectra by aggregating time-domain signals
Server.default.waitForBoot({ // boot server
    {
        // amplitude-varying sine wave
        SinOsc.ar(540, mul: SinOsc.kr(0.1, pi).range(0, 0.3))
        // amplitude-varying band-limited noise
        + Resonz.ar(ClipNoise.ar, 3000, 0.1, mul: SinOsc.kr(0.05).range(0, 1))
        // frequency-varying sawtooth oscillator
        + Saw.ar(LFTri.kr(0.1).range(200, 260), mul: 0.2)
        // amplitude-varying additive synthesis (3 partials)
        + (Klang.ar(`[[800, 803, 811], [0.3, 0.7, 0.4], [0, 0, pi]])
            * SinOsc.kr(0.5).range(0, 1));
    }.scopeResponse;
});
)

We can see the spectrum in the following screenshot:


Sometimes we may want to enrich (that is, add harmonics to) an existing spectrum so that everything we add follows the original spectrum's permutations over time. However, adding harmonics is neither possible nor meaningful for all kinds of signals. Our best chances are with simple oscillators or spectra consisting of a few partials only. We can easily add harmonics to such spectra relying either on the good old amplitude/frequency/ring modulation, which we will assume is already known here, or on standard waveshaping. For example:

// adding harmonics with clip
{Mix.new(SinOsc.ar((1..5) * LFNoise2.ar(10).range(200, 500)))}.scopeResponse; // original
{LeakDC.ar(Mix.new(SinOsc.ar((1..5) * LFNoise2.ar(10).range(200, 500))).clip)}.scopeResponse; // with harmonics added

Remember that whenever using waveshaping techniques, as already discussed in the previous chapter, we always have to be careful about potential time-domain side effects, such as DC bias. While we can suspect DC problems from the presence of excessive energy in the lowest bins, it is a good idea to use a standard Stethoscope when experimenting with spectral synthesis.
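A minimal sketch of such a check follows; the asymmetric clipping here is just an arbitrary example of a DC-inducing waveshaper, scoped side by side with its DC-corrected version:

( // time-domain check for DC bias introduced by waveshaping
s.waitForBoot({
    {
        var sig = SinOsc.ar(220).clip(0, 1); // asymmetric clipping adds DC
        [sig, LeakDC.ar(sig)] * 0.2 // channel 1: raw; channel 2: corrected
    }.scope;
});
)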

Sculpting and freezing spectra

The polar opposite to aggregating spectra would be removing or manipulating specific partials of a complex one, much like a sculptor. In the time domain, we can easily do this by using standard filters or resonators. For example:

( // sculpting a spectrum in the time domain
{
    var signal = ClipNoise.ar(0.1); // start with a signal rich in partials
    // use a bank of resonators to sculpt away partials
    // and modulate their resonant frequencies
    signal = DynKlank.ar(`[[400, 800, 1300, 4000], nil, [0.3, 1, 0.4, 1]],
        signal, SinOsc.kr(0.1).range(0.5, 4));
    signal = BPF.ar(signal, 2000, 0.1); // band-pass filter to remove more from the original
}.scopeResponse;
)


The subsequent screenshot demonstrates the spectrum:

We have more options in the frequency domain, however. We could use PV_BrickWall (or its interpolated flavor, PV_Cutoff) as a very drastic low-pass or high-pass filter, PV_MagAbove or PV_MagBelow to clear off bins above or below some threshold, PV_MagClip to clip partials to a certain threshold, PV_MagSmear to average magnitudes across adjacent bins, or PV_RectComb to create periodic magnitude gaps (much like the gaps in a comb). For instance:

( // sculpt a complex spectrum with PV_ UGens
{
    var sound = FFT(LocalBuf(512), ClipNoise.ar()); // start with a complex spectrum
    sound = PV_BrickWall(sound, SinOsc.kr(0.5).range(0, 0.1)); // filter off the low end
    // create a varying number of gaps of varying width
    sound = PV_RectComb(sound, SinOsc.kr(0.2).range(2, 7), 0,
        SinOsc.kr(1).range(0, 0.5));
    sound = IFFT(sound); // synthesize the time-domain signal
}.scopeResponse;
)

The following screenshot shows the spectrum:


Another very common frequency-domain technique is momentarily freezing the spectra, an operation meaningful only for time-varying spectra, of course. We can freeze all bins or solely their magnitudes using PV_Freeze or PV_MagFreeze, respectively. The latter does not freeze phase data; therefore, bins will keep the same magnitude, but spectral changes within each bin will still pass through. In both cases, all we need to do is set the freeze argument to a non-zero value whenever we want our spectrum frozen. For instance:

( // freezing spectra
var buffer = Buffer.read(Server.default, Platform.resourceDir +/+ "sounds/a11wlk01.wav"); // read a soundfile into the buffer
{
    var signal = PlayBuf.ar(1, buffer, BufRateScale.kr(buffer), loop: 1);
    signal = FFT(LocalBuf(1024), signal);
    signal = PV_Freeze(signal, Duty.kr(1, 0, Dseq([0, 1], inf))); // freeze signal every other second
    signal = IFFT(signal); // synthesize time-domain equivalent
}.scopeResponse;
)

Shifting, stretching, and scrambling spectra

Another common frequency-domain technique is to displace the positions of the partials in a spectrum. We can easily shift or stretch them using PV_BinShift or PV_MagShift (the latter affects the positions of only the magnitudes of each bin). Their use is straightforward; we just provide a stretch factor to scale bins (or magnitudes) accordingly and a shift offset to move the whole spectrum to the left or the right. In the following example, we periodically stretch and shift the magnitudes of a given spectrum:

( // shifting and stretching magnitudes
var buffer = Buffer.read(Server.default, Platform.resourceDir +/+ "sounds/a11wlk01.wav"); // read a soundfile into the buffer
{
    var signal = PlayBuf.ar(1, buffer, BufRateScale.kr(buffer), loop: 1); // playback the soundfile
    signal = FFT(LocalBuf(1024), signal); // spectral analysis
    signal = PV_MagShift(signal,
        stretch: LFTri.kr(0.1).range(0.2, 4), // stretch magnitudes
        shift: LFTri.kr(0.07).range(0, 100) // shift magnitudes
    );
    signal = IFFT(signal) * 0.5; // synthesize time-domain equivalent
}.scopeResponse;
)


Another way to displace partials is to randomly scramble them using the PV_BinScramble UGen. We can define how many bins should be scrambled (setting the wipe argument from zero, for none, to one, for all) and their maximum allowed deviation (setting width, again from zero to one). We can also force it to generate new random orderings by means of a trigger. In the following example, we scramble all bins and then gradually morph back to the original spectrum:

( // scrambling bins
var buffer = Buffer.read(Server.default, Platform.resourceDir +/+ "sounds/a11wlk01.wav"); // read a soundfile into the buffer
{
    var signal = PlayBuf.ar(1, buffer, BufRateScale.kr(buffer), loop: 1);
    signal = FFT(LocalBuf(1024), signal);
    signal = PV_BinScramble(signal, Line.kr(1, 0, 15), 1); // scramble bins
    signal = IFFT(signal) * 0.5; // synthesize time-domain equivalent
}.scopeResponse;
)

There are other UGens that we can use to randomize spectra, too, such as PV_RandComb, which will create random gaps in our spectrum, or PV_Diffuser, which will shift each bin by a random phase offset. For example:

( // random gaps and phase offsets
var buffer = Buffer.read(Server.default, Platform.resourceDir +/+ "sounds/a11wlk01.wav"); // read a soundfile into the buffer
{
    var signal = PlayBuf.ar(1, buffer, BufRateScale.kr(buffer), loop: 1);
    signal = FFT(LocalBuf(1024), signal);
    signal = PV_RandComb(signal, LFNoise0.kr(1).range(0.3, 1)); // add random gaps, modulated randomly
    signal = PV_Diffuser(signal, Impulse.kr(1)); // randomly bias phases, new distributions every second
    signal = IFFT(signal); // synthesize time-domain equivalent
}.scopeResponse;
)


Using the pvcalc method

We can manually modify a frequency-domain signal using PV_ChainUGen's pvcalc method with our own custom function as its argument. Our function must return an array containing two arrays: one with the desired magnitudes and one with the desired phases. The function will be passed an array with the input's magnitudes and an array with the input's phases as arguments. Subsequently, we can calculate the output either with respect to the input or independently. We can also pinpoint a specific range of bins by means of the frombin and tobin parameters. Note that in all cases, the returned arrays have to be of an adequate size, which is either half that of the FFT window plus one, or equal to the custom range set with frombin/tobin. In the following example, we explicitly create a spectrum with energy in specific bins only:

( // using pvcalc to create a custom spectrum
{
    var sound = FFT(LocalBuf(512), Silent.ar()); // we start with silence since we will populate the signal manually
    sound = sound.pvcalc(512, {
        var magnitudes, phases;
        // for each of the numbers from 0 to 256, either 1 (if the number is a power of two) or 0
        magnitudes = Array.fill(257, {arg i; if (i.isPowerOfTwo) {1} {0} });
        phases = Array.fill(257, {1.0.rand}); // random phases
        [magnitudes, phases]; // return magnitudes and phases
    });
    sound = IFFT(sound) * 5; // synthesize the time-domain equivalent
}.scopeResponse;
)

In this example, we start with silence (since we will replace the entire signal anyway), and then we invoke pvcalc with the FFT window's size and our custom function as the only arguments. Inside the function, we fill an array describing the magnitudes (energy only at bin indices that are powers of two) and use an array of random values as phases. Note that the size of each returned array is half the size of the FFT window plus one (as defined internally in the FFT algorithm). In this case, we explicitly create the whole signal ourselves; however, we can just as easily use the pvcalc method to process some input spectrum, as in the following example, where we silence out a specific region of a spectrum:

( // silence the 100th to 200th partials in a 512 window
{
    var sound = FFT(LocalBuf(512), ClipNoise.ar()); // we start with clip noise
    sound = sound.pvcalc(512, {
        arg inMags, inPhases; // the input's magnitudes and phases for the selected bins are passed as arguments
        var outMags, outPhases;
        outMags = 0 ! 101; // fill an array with 101 zeroes
        outPhases = inPhases; // keep the input's phases for these bins (101 values)
        [outMags, outPhases]; // return 101 magnitudes and 101 phases
    },
    frombin: 100, tobin: 200); // replace the 100th-200th partials of the input with the ones we generated herein
    sound = IFFT(sound) * 0.5; // synthesize time-domain signal
}.scopeResponse;
)

Visualizing spectra

Unlike waveform scoping, spectral scoping is idiosyncratic. Deciphering or fine-tuning spectral visualizations can be subtler and more involved, as explained here.

Limitations of spectral scoping

When scoping spectra, we get a time-varying representation of them as fluctuations of energy per bin. The horizontal axis represents the continuum of the frequency range (typically from DC to the Nyquist frequency), divided into discrete frequency ranges (that is, the bins). The vertical axis stands for the magnitude of energy of the bins. Accordingly, each point in the graph represents the energy of a particular frequency range (and not that of individual partials). The graph is constantly updated with respect to the FFT temporal window; once the FFT algorithm has analyzed a snapshot of our signal, the graph is updated to represent it. As already explained, the more time-accurate a spectral scope is, the fewer bins it can accurately represent. As discussed in Chapter 1, Scoping, Plotting, and Metering, we can typically select between linear (that is, all bins have the same width on the horizontal axis) or logarithmic (that is, bins in the lower register occupy larger regions than those in the higher one) scaling to achieve a representation closer to the way we perceive sound. (Note, however, that scopeResponse will always result in logarithmically scaled graphs.) In either mode, the resolution of the spectral visualization does not depend exclusively on the specifics of the FFT analysis but also on the dimensions of the View we use. SuperCollider will not complain if we use a 400-pixel-wide View to visualize 4096 bins; yet, since it is impossible to fit them all, we would end up with a graph illustrating approximately 10 percent of the available spectral information. This is a pretty serious limitation, given that we are always constrained by the physical dimensions of our screen. Evidently, the spectral analysis resolution should ideally match the width of the scoping View, or we may end up with both low spectral resolution and slower update rates than what the FFT algorithm may deliver.
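A back-of-the-envelope calculation (using the figures just mentioned) makes the mismatch concrete:

// how much of a 4096-bin analysis fits into a 400-pixel-wide View?
(400 / 4096 * 100).round(0.1).postln; // -> 9.8, roughly 10 percent
// a window size better matched to 400 pixels:
(400 * 2).nextPowerOfTwo.postln; // -> 1024, which yields 513 bins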


Another serious limitation is that FFT is not a transparent process. Even when visualizing a single sinusoid, the scope will erroneously indicate that spectral energy extends over a broader area, as FFT analysis will typically create artificial ramps around any isolated partials. Such artifacts and inaccuracies are not related to the scope per se, but to intrinsic limitations of the underlying mathematical formulas and the very nature of discrete signals.

The Nyquist frequency, named after electronic engineer Harry Nyquist, is half the sampling rate of the signal processing system and represents the highest frequency that this system can accurately reproduce.

Optimizing spectra for scoping

To some extent, and depending on the context, we can compensate for some of the limitations mentioned earlier. The first step would be to decide whether a linear or a logarithmic representation is better for our particular application. We can then pinpoint the amplitude range that interests us by setting dbRange accordingly (in an instance of FreqScopeView). Unfortunately, in its current implementation, FreqScopeView does not allow us to set a custom frequency region. Therefore, if we want to pinpoint specific bins, we need to come up with a hack: we can give FreqScopeView extraordinarily large bounds and position it inside a smaller Window (or CompositeView) object so that only the part we are interested in is visible (the rest will be out of bounds). In the next example, just by tweaking the amplitude range and via this hack, we will visualize the first example of this chapter in a very different way:

( // pinpoint a certain frequency range only
// the window
var window = Window("Optimized FreqScoping", Rect(0, 0, 600, 300)).front
    .onClose_({ freqScope.kill; sound.free; });
// the signal
var sound = {
    SinOsc.ar(540, mul: SinOsc.kr(0.1, pi).range(0, 0.3))
    + Resonz.ar(ClipNoise.ar, 3000, 0.1, mul: SinOsc.kr(0.05).range(0, 1))
    + Saw.ar(LFTri.kr(0.1).range(200, 260), mul: 0.2)
    + (Klang.ar(`[[800, 803, 811], [0.3, 0.7, 0.4], [0, 0, pi]])
        * SinOsc.kr(0.5).range(0, 1));
}.play;
// the freqscope; notice the dimensions
var freqScope = FreqScopeView(window, Rect(-600, 0, 2100, 300))
    .active_(true).freqMode_(1); // logarithmic scaling
freqScope.background_(Color.cyan);
freqScope.waveColors_([Color.red]);
freqScope.dbRange_(30); // custom db range
)

We could even use a scrolling Window object (just set scroll to true) if we want to achieve better resolution and still be able to focus on various parts of the spectrum. Much like what we did in the previous chapter, we could also use instances of Bus and various synthesis techniques to optimize our signals for frequency scoping. For example, we could use PV_MagAbove to focus only on the most prominent partials, or we could stretch our spectrum using PV_BinShift so that some particular range of bins we are interested in expands over a greater area and is better appreciated. Traditional time-domain filters, such as BPF, LPF, HPF, and BRF, can be very helpful in allowing us to smoothly filter off the energy in bins that we are not interested in visualizing. Depending on the context, we could even try more adventurous optimizations, for instance, using PV_Freeze to freeze the partials at regular intervals. Consider yet another way to frequency-scope the previous signal:

( // optimizing a spectrum for frequency-scoping
// window
var window = Window("Optimized FreqScoping", Rect(0, 0, 600, 300)).front
    .onClose_({ freqScope.kill; sound.free; });
// audio bus
var bus = Bus.audio(Server.default);
// sound
var sound = {
    var signal;
    signal = SinOsc.ar(540, mul: SinOsc.kr(0.1, pi).range(0, 0.3))
    + Resonz.ar(ClipNoise.ar, 3000, 0.1, mul: SinOsc.kr(0.05).range(0, 1))
    + Saw.ar(LFTri.kr(0.1).range(200, 260), mul: 0.2)
    + (Klang.ar(`[[800, 803, 811], [0.3, 0.7, 0.4], [0, 0, pi]])
        * SinOsc.kr(0.5).range(0, 1));
    Out.ar(0, signal); // write to audio output
    // optimize for scoping
    signal = FFT(LocalBuf(4096), signal);
    signal = PV_BinShift(signal, 4); // stretch bins for better resolution
    signal = PV_MagAbove(signal, 3); // do not show weak bins
    signal = IFFT(signal);
    Out.ar(bus, signal); // write to bus
}.play;
var freqScope = FreqScopeView(window, Rect(0, 0, 600, 300))
    .active_(true).freqMode_(0).inBus_(bus); // set freq-scope to read from bus
freqScope.background_(Color.cyan);
freqScope.waveColors_([Color.red]);
freqScope.dbRange_(80); // custom db range
)


In the following figure, we can see how the last two visualizations compare with the original:

Summary

In this chapter, we discussed a series of audio synthesis techniques to synthesize new, or manipulate pre-existing, spectra so that we are capable of creating optimized, ready-to-scope signals in the frequency domain too. In the next chapter, we will introduce the fundamentals of computer graphics and learn how to draw shapes and structures of arbitrary complexity in SuperCollider using the Pen class.


Vector Graphics

So far, we have elaborated on how to scope, plot, and meter signals and data, as well as on how to create good-looking (in any subjective way), ready-to-scope audio signals. In this chapter, we will introduce two-dimensional vector graphics, and we will learn how to use the Pen class to generate shapes of arbitrary complexity as well as more sophisticated structures such as fractals and particle systems. It has to be said that while SuperCollider is arguably less featured than other dedicated computer graphics environments, such as the various OpenGL-based frameworks, it is nevertheless powerful enough and a lot easier to master; more importantly, it is also bundled with one of the most advanced audio synthesis engines on the planet, thereby simplifying the task of integrating Computer-generated Imagery (CGI) with computer-generated audio. The topics that will be covered in this chapter are as follows:
• Learning the vector graphics fundamentals
• Drawing simple and complex shapes
• Modeling complex objects and structures
• Geometrical transformations and trailing effects
• Designing particle systems and fractals


Learning the vector graphics fundamentals

Generating vector graphics involves formally describing a drawing in mathematical terms by means of geometrical primitives, graphics state transformations, and simple drawing instructions. In this context, a drawing consists of paths, which are made of one or more line segments connected by two or more anchor points. Paths are to be delimited in our canvas (that is, the View we draw into) using Cartesian coordinates, which are pairs in the form of (x,y), where x and y denote the horizontal and vertical deviations, respectively, from a nominal point. We may speak of absolute coordinates when the nominal (0,0) point is fixed in space; relative coordinates are those that are relative to some other arbitrary point. An (x,y) pair can also be understood as a complex number with x representing the real and y the imaginary part, thus simplifying mathematical operations in certain contexts. To represent coordinates in SuperCollider, we use instances of Point, which can also be created using the convenient form x@y. Unlike the traditional Cartesian notation, the 0@0 point in SuperCollider stands for the upper-left corner of some View, with x incrementing rightward and y downward. Once we have defined the anchor points of a path, we can sketch it by combining straight lines and curves or built-in primitive shapes, defining colors, line thicknesses, and various other graphics state attributes. Vector graphics need to be rendered to pixels before they are sent to our screen, which requires a conversion to raster graphics; yet this approach has significant advantages over drawing raster graphics in the first place. For instance, when zooming in to some detail in the vector domain, we preserve the maximum resolution, since we merely render a different set of instructions rather than magnifying individual pixels. More importantly, we can easily prototype complex structures and transformations as minimal sets of instructions so that we can later generate contingent instances of these efficiently and in different contexts. Performance benefits too: it is a lot faster to perform mathematical operations on a limited number of anchor points than to scan and alter the state of multidimensional matrices of pixels.
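A few lines in the interpreter illustrate the Point conventions just described (the values are arbitrary):

// Points and the top-left origin convention
p = 100@150; // shorthand for Point(100, 150)
p.x.postln; // -> 100 (pixels rightward from the upper-left corner)
p.y.postln; // -> 150 (pixels downward from the upper-left corner)
(p + (10@10)).postln; // -> Point(110, 160); arithmetic works coordinate-wise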


Paths are the shapes or line segments used as the building blocks of all drawings in a vector graphics context. Anchor points are those points that delimit a path in a vector graphics context. Raster graphics is an alternative to the vector graphics paradigm, wherein a drawing is represented as a dot matrix structure with the color values of each of the individual pixels necessary to print or project it to some medium. Pixels are either the elementary atoms of some raster image or the smallest addressable elements in a display device such as a computer screen.

Drawing primitive shapes and loading images

In SuperCollider, we can draw simple lines and basic shapes by invoking the appropriate methods from Pen inside the drawFunc function of Window. There are primitives for arcs, lines, rectangles, ellipses, and wedges. For example:

( // primitive shapes with Pen
var window = Window("Pen Example", 450@450).front; // a window
window.drawFunc_({ // all the drawing has to be done in this function
    Pen.line(0@0, 100@100); // a line between 2 points
    Pen.line(350@100, 450@0); // a line between 2 points
    Pen.addArc(200@150, 20, 0, pi); // half a circle (an arc with an angle of pi radians)
    Pen.addArc(250@200, 40, pi, pi/2); // a quarter of a circle
    Pen.addRect(Rect(50, 100, 350, 300)); // a rectangle
    Pen.addOval(Rect(100, 220, 250, 80)); // an ellipse
    Pen.addWedge(350@350, 40, 1.5pi, pi/2); // a pi/2 radians wedge
    Pen.addAnnularWedge(345@355, 15, 40, 0, 1.5pi); // and an annular wedge
    Pen.stroke; // draw only the outlines
});
)


In this example, no drawing will occur unless we explicitly instruct Pen to do so; after having defined the desired paths, we used *stroke to draw only their outlines. Every path has a stroke (that is, an outline) and a fill (that is, the surface delimited by its outline) area that we can selectively draw using *stroke (stroke only), *fill (fill only), or *fillStroke (both strokes and fills). There also exists a *draw method, which will draw paths according to the given argument: the numbers 0, 2, and 3 are equivalent to fill, stroke, and fillStroke, respectively, while 1 and 4 draw either fills or strokes and fills following the even-odd rule. This rule guarantees that adjacent areas will not be filled, so that the internal fragmentation of some path is always respected. We can also load and display images in our canvas using the Image class. However, note that Image is not functional in the current (as of this writing) SuperCollider stable version (3.6). We need to use 3.7, which is already available as a source code bundle, to evaluate the following code:

( // loading and displaying images
var image = Image.new("path/to/some/png/image/here"); // load some image
Window.new.front.drawFunc_({
    image.drawAtPoint(0@0, image.bounds); // display image
});
)

Of course, instead of the dummy path/to/some/png/image/here, we are expected to provide a valid path pointing to a real file on our computer. To display only a part of an image, we could have passed a suitable instance of Rect instead of its bounds.

Complex shapes and graphics state

We can easily draw custom shapes of arbitrary complexity by simply connecting anchor points together with line segments using the methods *moveTo, *arcTo, *lineTo, *curveTo, and *quadCurveTo (quadratic curves). The *moveTo method merely sets the current position of Pen to some point, while the rest of the methods create segments from the current position, whatever it may be, to some given ending point, which subsequently becomes the new current position. These directives stand for arcs, lines, and Bezier curves. For example:

( // generating a custom path
var window = Window("Pen Example", 450@450).front;
window.drawFunc_({
    Pen.moveTo(78@122); // go to point 78@122
    Pen.curveTo(284@395, 280@57, 78@122); /* make a Bezier curve from 78@122
        to 284@395 (which is now the new current position);
        280@57 and 78@122 are curvature points */
    Pen.curveTo(280@57, 80@332, 284@395); // make another Bezier curve
    Pen.curveTo(80@332, 405@225, 280@57); // make another Bezier curve
    Pen.curveTo(405@225, 78@122, 80@332); // make another Bezier curve
    Pen.curveTo(78@122, 284@395, 405@225); // make another Bezier curve
    Pen.draw(4); // fill according to the even-odd rule
});
)

One important thing to note is that if we had invoked *draw before all of the segments were described, we would have ended up with a very different drawing; only those segments between drawing methods are assigned to the same path. The reader is invited to insert more Pen.draw(4) statements between the Bezier curves in the preceding code and find out for themselves. Pen also features a set of methods and variables to change the graphics state itself, for example, *width (changes the width of the stroke), *smoothing (switches anti-aliasing on/off for smoother images), *joinStyle (changes the way lines are joined), *lineDash (sets up a dashed line pattern), *alpha (sets global transparency), and others to be discussed in detail shortly. Note that graphics state transformations are always cumulative and will affect all of the subsequent drawing commands unless they are reset.

Bézier curves are parametric curves named after the French engineer Pierre Bézier, who first systematized their study in the '60s.
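A minimal sketch of a few of these settings follows (the particular width, dash pattern, and shapes are arbitrary); note that *lineDash expects a FloatArray:

( // a few graphics state settings in action
var window = Window("Graphics state", 300@300).front;
window.drawFunc_({
    Pen.width_(4); // 4-pixel strokes
    Pen.lineDash_(FloatArray[10, 5]); // 10 pixels on, 5 pixels off
    Pen.alpha_(0.7); // global transparency
    Pen.line(20@20, 280@20); // a dashed line
    Pen.addArc(150@150, 80, 0, 2pi); // a full circle
    Pen.stroke; // everything is drawn with the state set above
});
)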

Introducing colors, transparency, and gradients

Adding color to our drawing is easy using the *fillColor or *strokeColor variables of Pen to define colors for the fill and the stroke of our paths, and the background variable of Window to set the background color of the canvas. As with all graphics state transformations, once we set a specific color, it remains the default until we explicitly set another. Colors in SuperCollider are represented as instances of the Color class. Typical use is either through convenience methods (such as *red, *white, *black, *yellow, and so on) or by means of describing the color in terms of its RGBA (Red, Green, Blue, and Alpha) or its HSVA (Hue, Saturation, Value, and Alpha) coefficients, where saturation signifies colorfulness and value signifies brightness. The Alpha channel stands for how transparent or opaque a color is. We can create specific colors via the *new (expects Float in the range of 0 to 1), *new255 (expects Integer in the range of 0 to 255), *fromHexString (expects an eight-character-long string with the RGBA coefficients in hexadecimal notation, that is, in the range of 00-FF), or *hsv (expects the HSVA coefficients as Float) methods. Other useful methods to remember are *rand, which will generate a random color, and the various binary operators such as add, subtract, multiply, divide, and blend, among others.
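A quick sketch of these constructors side by side, all describing more or less the same red (except for the final blend):

// several ways to construct colors
Color.red; // convenience method
Color.new(1, 0, 0, 1); // RGBA as Floats in the range of 0 to 1
Color.new255(255, 0, 0, 255); // RGBA as Integers in the range of 0 to 255
Color.fromHexString("FF0000FF"); // RGBA in hexadecimal notation
Color.hsv(0, 1, 1, 1); // hue, saturation, value, alpha
Color.red.blend(Color.blue, 0.5); // halfway between red and blue

Consider the following example: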

( // transparency and custom color example
var window = Window("Pen Example", 450@450).front;
window.background_(Color.white); // set background color
window.drawFunc_({
    Pen.width_(10); // set stroke width to 10 pixels
    Pen.strokeColor_(Color.cyan); // set cyan as stroke color
    Pen.fillColor_(Color.fromHexString("FF0000FF")); // set red as fill color
    Pen.addRect(Rect(30, 30, 300, 300)); // add a rectangle
    Pen.draw(4); // draw rectangle
    Pen.strokeColor_(Color.rand); // set a random color as stroke
    Pen.fillColor_(Color.new255(0, 255, 0, 50)); // set a transparent green as fill color
    Pen.addRect(Rect(220, 220, 200, 200)); // add another rectangle
    Pen.draw(4); // draw
});
)

Apart from solid colors, we can also fill our paths using gradients, that is, smooth progressions between two colors. Gradients come in two flavors: axial gradients, specified by two points and a color at each point, and radial gradients, specified by one color at the outer perimeter of a circular arc and another at its center. In both cases, the colors in the middle are calculated with linear interpolation. The Pen class has two corresponding methods: *fillAxialGradient and *fillRadialGradient. For example:

( // custom path with gradient
var window = Window("Pen Example", 450@450).front.drawFunc_({
    Pen.moveTo(78@122);
    Pen.curveTo(284@395, 280@57, 78@122);
    Pen.curveTo(280@57, 80@332, 284@395);
    Pen.curveTo(80@332, 405@225, 280@57);
    Pen.curveTo(405@225, 78@122, 80@332);
    Pen.curveTo(78@122, 284@395, 405@225);
    Pen.fillRadialGradient(225@225, 225@225, 0, 250, Color.red, Color.green);
});
)


As we can see in the following screenshot, the preceding code results in a windmill-like shape:

Abstractions and models

Suppose that we really like the particular shape in the preceding screenshot and that we want to integrate it into a series of different drawings; in other words, to cast it as a sprite (that is, an independent structure integrated into a broader scheme). Of course, having to manually define the positions of the anchor points for each different case would be tedious, counterintuitive, and really shortsighted from a programmer's point of view, so we need to come up with some kind of abstraction. We could just put all the necessary instructions inside a function and make all the calculations relative to its arguments. However, this approach proves shortsighted too, as sooner or later we will encounter situations wherein we want to interact with our shape after it has been created. What we really need is an abstract prototype we could use to spawn unique, independent instances of our structure that we can later interact with. Furthermore, using prototypes, we can easily go beyond modeling simple sprites to modeling whole families of contingent structures, such as windmills having different numbers of wings, different color combinations, and different positioning and sizes.


Objects and prototypes

SuperCollider, being a purely object-oriented programming language, fosters object modeling through Class, Environment, or Event, every approach having its pros and cons. Using classes in SuperCollider is a bit idiosyncratic; we have to recompile the whole class library every time we make some minor change to a definition, and what's more, it is not really intuitive to have all sorts of project-specific classes globally available every time we launch SuperCollider. Classes are ideal when we want to extend SuperCollider's overall functionality with objects we plan to use very often or with features we want to be globally available, such as the custom scope meter we designed in Chapter 1, Scoping, Plotting, and Metering. As far as projects of more limited scope are concerned, such as our windmill herein, using Event makes more sense. Also, it is always easier to convert the latter into a Class, if we do happen to use it that often, than the opposite. Notwithstanding, there are certain caveats to using Event as an object prototype. Firstly, we should never use names for our variables or methods that match existing methods of Event (or its superclasses), for instance, size, at, play, resume, pause, release, update, fill, use, test, and others. Doing so will certainly lead to very obscure and difficult-to-track bugs. A fast way to get a complete reference of all problematic names is to type Event and press command + I (or Ctrl + I) while in the SCide. Secondly, we should be extremely cautious about typos, as the interpreter will not complain if we attempt to access or set some nonexistent entry. Thirdly, SuperCollider does not support private membership (this is also true for classes, unfortunately); therefore, we cannot easily distinguish between a model's interface (that is, methods and data the user is supposed to access) and its implementation (that is, methods and data for internal use that should be hidden from the user). We will soon describe how to partly compensate for this; it is a good tactic, nevertheless, to only interact with objects following certain conventions. Throughout this book, we will only interact with our objects via methods such as refresh, animate, or draw and will never directly set some member variable. For instance, the structure of a windmill object could appear as shown in the following code:

( // Event as an object prototype
    position: /* sprite's position */,
    points: /* the anchor points */,
    refresh: { arg self, newPosition;
        // set new position and recalculate anchor points here
    },
    draw: { arg self;
        // draw path here
    }
)


Notice that the first argument in every method is always named self. This is a standard mechanism to share data inside our object; every method will be implicitly passed the whole Event as an argument so that we can easily access other member variables and methods from within. This argument is not visible externally; when invoking refresh, in this case, only one argument, newPosition, will be considered.
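A minimal sketch makes the mechanism tangible (the member names here simply mirror the windmill prototype):

( // the implicit self argument in action
var obj = (
    position: 0@0,
    refresh: { arg self, newPosition;
        self.position = newPosition; // self is the Event itself
    }
);
obj.refresh(100@50); // self is passed implicitly; we supply only newPosition
obj.position.postln; // -> Point(100, 50)
)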

Factories

Having modeled a windmill object, we also need a mechanism to create and initialize instances of it, namely, a windmill factory. The idea is to use an instance of Function that takes the desired attributes of our windmills as arguments and have it define, initialize, and return an instance. A significant gain in this approach is that we can now define private data members and methods within the body of our function that will only be accessible to our object's methods and not to its clients, thereby achieving information hiding, which is a key concept in more sophisticated object-oriented designs, yet not directly supported by some built-in structure. Another important plus is that we can now segregate between defining, initializing, and using an object so that we only perform those calculations when needed. Back to our example, a proper windmill factory should be capable of producing more than just one type of windmill, all having different numbers of wings, sizes, and colors. Carefully considering what the interface of our factory should be is the first step towards designing it. A possible structure for our windmill factory could look as shown in the following code:

{
    arg position, radius, numOfWings, colors;
    var object;
    // ..private data/methods and auxiliary calculations here
    object = ( // define and initialize
        position: /* define and initialize position */,
        points: /* define and initialize anchor points */,
        refresh: { arg self, newPosition; // define refresh method
            // set newPosition here
        },
        draw: { arg self; // define draw method
            // draw path here
        }
    );
    object; // explicitly return the object
};


And now all we have to do is programmatically describe the specifics of our object's construction and use, which of course is a task largely dependent on the kind of object we are dealing with. In the windmill's case, we first need to calculate the angular distance between the wings so that we can space them accordingly, and then, via an iterative structure, calculate the starting, ending, and curvature points for each segment with respect to its position and radius. A possible windmill factory implementation is given in the following code. However, note that due to the nature of the math involved, and in order to keep things fairly simple, this particular factory will only properly create windmills whose number of wings is not 6 plus a multiple of 4 (that is, not of the form 6 + [n*4]). Note also that only those operations needed for the actual drawing and updating exist within the body of our Event. Everything related to initialization is calculated inside the factory's body and then either stored as a data member of our object (if it should ever be modulated, for instance, position or points) or hard-coded into its methods' definitions (if it should be immutable, for example, numberOfWings).

( // windmill factory
~windmillFactory = {
    arg position = 0@0, radius = 100, numberOfWings = 5,
    colors = [Color.red, Color.green];
    // calculate step (angular difference between consecutive points)
    var step = if (numberOfWings.odd) {
        (2pi / numberOfWings) * (numberOfWings/2).floor;
    } {
        (2pi / numberOfWings) * ((numberOfWings/2) - 1);
    };
    // calculate points' coordinates and store in an array
    var points = Array.fill(numberOfWings, {
        /* we only need one point per wing as they are connected
        with each other diametrically */
        arg i;
        var x, y;
        x = position.x + (radius * cos(step * i));
        y = position.y + (radius * sin(step * i));
        x@y; // return the anchor point
    });
    var windmill = ( // event as an object prototype
        position: position, // sprite's position
        points: points, // the anchor points (to be updated if needed)
        refresh: { arg self, newPosition;
            self.position = newPosition; // set new position
            // re-calculate points according to newPosition
            self.points = Array.fill(numberOfWings, {
                arg i;
                var x, y;
                x = newPosition.x + (radius * cos(step * i));
                y = newPosition.y + (radius * sin(step * i));
                x@y; // return the anchor point
            });
        },
        draw: { arg self;
            Pen.moveTo(self.points[0]); // move to the first point
            numberOfWings.do{ // iterate over the array of anchor points
                arg i;
                var pointA, pointB, pointC; // three consecutive points
                pointA = self.points[i];
                pointB = self.points.wrapAt(i+1);
                pointC = self.points.wrapAt(i+2);
                Pen.curveTo(pointB, pointC, pointA); // define Bezier segment
            };
            // fill with radial gradient
            Pen.fillRadialGradient(self.position, self.position, 0,
                radius * 1.5, colors[0], colors[1]);
        }
    );
    windmill; // return windmill
};
)

We should save the preceding code in its own file so that we can load it from our code and use it as follows:

( // draw windmills
var windmillA, windmillB, windmillC, windmillD;
(PathName(thisProcess.nowExecutingPath).pathOnly
    ++ "9677OS_04_06.scd").loadPaths; // first load the windmill factory
windmillA = ~windmillFactory.(150@150, 150, 15, [Color.red, Color.magenta]);
windmillB = ~windmillFactory.(100@500, 80, 23);
windmillC = ~windmillFactory.(500@100, 100, 5, [Color.magenta, Color.black]);
windmillD = ~windmillFactory.(400@420, 200, 9, [Color.black, Color.blue]);
Window.new("Windmills", 640@640).background_(Color.white).front
    .drawFunc_({
        windmillA.draw();
        windmillB.draw();
        windmillC.draw();
        windmillD.draw();
    });
)


Note that the proper way to load files would be using Document.current.dir, which returns the path of the folder containing the current file. Unfortunately, this is broken in the current (as of this writing) version of SCide (it is functional in other IDEs, such as emacs); therefore, we will have to either use the less preferred PathName(thisProcess.nowExecutingPath).pathOnly or wait for the next major update.

Information hiding is, in Computer Science, the principle of segregation between an object's interface (what the users of the object will encounter and use) and its implementation (intrinsic design details which might change). Object-oriented design is a certain approach to software development wherein systems of interacting objects are used to solve a problem.

Geometrical transformations, matrices, and trailing effects

Geometrical transformations are operations that map each individual point in a set to another unique point. They greatly simplify the task of modeling some particular structure. The most important geometrical transformations are *translate (move the whole coordinate system by x and y offsets), *scale (scale a drawing according to scaling factors for the horizontal and vertical dimensions), *skew (skew paths with respect to the given coordinates), and *rotate (rotate the path around a given point). As is the case with all graphics state operations, geometrical transformations affect all of the subsequent drawing commands and are cumulative. However, quite often we will want to apply some geometrical or other transformation to a specific structure only, and at other times we will need to revert to an unknown graphics state (for example, when transformations occur within the draw method of some prototype, we want them to be valid only locally and then revert to the previous state, whatever it may be).
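Before turning to state management, here is a minimal sketch of these transformations in isolation (window size and values are arbitrary); note how each call affects everything drawn after it:

( // basic geometrical transformations
var window = Window("Transformations", 300@300).front;
window.drawFunc_({
    Pen.translate(150, 150); // move the origin to the center
    Pen.rotate(pi/4); // rotate subsequent drawing by 45 degrees
    Pen.scale(1.5, 0.5); // stretch horizontally, squash vertically
    Pen.addRect(Rect(-50, -50, 100, 100)); // a square, distorted by the above
    Pen.stroke;
});
)

Reverting to the untransformed state afterwards is exactly the problem that matrices, discussed next, address.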


Luckily, there are simple ways to deal with such situations, namely, using transformation matrices, which are really nothing more than a description of the current graphics state. Whenever we want to apply geometrical transformations or otherwise alter the graphics state (for example, setting a different color) in a given context only, we can simply push a new matrix wherein we apply all our transformations; once done, we can pop (destroy) it to revert to the previous graphics state. Push and pop are operations associated with a stack, a Last-In-First-Out (LIFO) container, which is used internally to hold an arbitrary number of matrices. In this way, we can easily revert to a default graphics state, and in addition, we can efficiently stack an arbitrary number of matrices on top of each other. A standard way to handle matrices in SuperCollider is via the *use method of Pen, which will evaluate an instance of Function within a new matrix and then revert to the previous graphics state automatically. Nevertheless, since Pen does cater for a *push and a *pop method, we will stick with those methods throughout this book, for reasons of conceptual clarity as well as because this is the standard way most major computer graphics frameworks handle matrices anyway. In the following code, we perform basic geometrical transformations to create trailing effects with our windmills:

( // trailing effects using geometrical transformations
(PathName(thisProcess.nowExecutingPath).pathOnly
    ++ "9677OS_04_06.scd").loadPaths; // first load the windmill factory
Window("Trailing Effects Example", 640@480).background_(Color.white)
    .front.drawFunc_({
        // trailing effects with rotation
        Pen.push; // push a new matrix
        5.do{ arg i;
            var windmill = ~windmillFactory.value(150@200, 130, 11); // create 5 instances of an 11-winged windmill
            Pen.rotate(i * 0.1, 150, 200); // incrementally rotate each instance around its own axis
            Pen.alpha_(1 - (i * 0.1)); // decrementally set transparency
            windmill.draw(); // draw the windmills
        };
        Pen.pop; // pop matrix to revert to the original graphics state
        // trailing effects with translation
        Pen.push; // push a new matrix
        10.do{ arg i;
            var windmill = ~windmillFactory.(420@120, 130, 7); // create 10 instances of a 7-winged windmill
            Pen.translate(10, 10); // cumulatively translate each instance 10 pixels rightwards and downwards
            Pen.alpha_(1 - (i * 0.1)); // decrementally set transparency
            windmill.draw(); // draw the windmills
        };
        Pen.pop; // pop matrix to revert to the original graphics state
        // trailing effects with scaling
        Pen.push; // push a new matrix
        3.do{ arg i;
            var windmill = ~windmillFactory.(80@400, 60, 7); // create 3 instances of a 7-winged windmill
            Pen.scale(1.7, 1); // cumulatively scale each instance's horizontal dimension
            Pen.alpha_(1 - (i * 0.1)); // decrementally set transparency
            windmill.draw(); // draw the windmills
        };
        Pen.pop; // pop matrix to revert to the original graphics state
    });
)

The following screenshot illustrates the result:


Complex structures

We can achieve more sophisticated structures and systems of arbitrary complexity by combining individual sprites, transformations, and a set of specialized techniques. Typical examples are particle systems and fractals.

Particle systems

A particle system is the computer graphics equivalent of granular synthesis (that is, synthesizing complex sounds by means of elementary sonic grains): we generate complex visual structures by dispersing elementary particles in space. The latter are usually, but not exclusively, instances of the same prototype. Much like in a granular synthesis engine, we typically permute each particle's appearance to allow divergence. Particles may be distributed in space in a number of ways, according to canonical, noncanonical, and even more complex patterns. The following code randomly spreads windmills on our canvas:

( // a noncanonical particle system
var window = Window("A noncanonical particle system", 640@480)
    .background_(Color.black).front;
(PathName(thisProcess.nowExecutingPath).pathOnly
    ++ "9677OS_04_06.scd").loadPaths; // first load the windmill factory
window.drawFunc_({
    500.do{ // iterate 500 times
        var x, y, radius, windmill;
        x = window.bounds.width.rand; // a random x (but within bounds)
        y = window.bounds.height.rand; // a random y (but within bounds)
        radius = rrand(10, 50); // a random radius
        Pen.push; // push a new matrix
        Pen.alpha_(1.0.rand); // set a random level of transparency
        Pen.rotate(2pi.rand, x, y); // randomly rotate each particle around its own axis
        windmill = ~windmillFactory.value(x@y, radius, (5,7..25).choose,
            [Color.rand, Color.rand]); /* generate a windmill object centered
            at x@y with a random radius, a random odd number of wings,
            and random colorings */
        windmill.draw(); // draw windmills
        Pen.pop; // destroy matrix and revert to the default state
    };
});
)


A possible result is shown in the following screenshot:

Note that when resizing our Window, its drawFunc will be evaluated again, so we will get a different random distribution. The following code demonstrates a canonical distribution this time:

( // a canonical particle system
var window = Window("A canonical particle system", 640@480)
    .background_(Color.yellow).front;
(PathName(thisProcess.nowExecutingPath).pathOnly
    ++ "9677OS_04_06.scd").loadPaths; // first load the windmill factory
window.drawFunc_({
    forBy(0, window.bounds.width - 50, 50, { // iterate over width minus 50 (to leave a margin) by steps of 50
        arg ix;
        forBy(0, window.bounds.height - 50, 50, { /* for each iteration over
            width, iterate over height minus 50 (to leave a margin)
            by steps of 50 */
            arg iy;
            var x, y, windmill; // coordinates and our windmill
            x = 45 + ix; // incrementally (by 50) calculate x, offset by 45 so that the first element is not right on the edge
            y = 45 + iy; // incrementally (by 50) calculate y, offset by 45 so that the first element is not right on the edge
            Pen.push; // push new matrix
            Pen.rotate(2pi.rand, x, y); // randomly rotate each windmill around its own axis
            /* generate windmills so that each row has more wings than the
            previous and so that colors are a function of position */
            windmill = ~windmillFactory.(x@y, 20, (5,7..27).wrapAt(ix/50),
                [Color(sin(ix/50).abs, sin(iy/50).abs, 1), Color.black]);
            windmill.draw(); // draw windmill
            Pen.pop; // pop matrix
        });
    });
});
)


Fractals

Fractals are structures characterized by replication too, yet of a very different kind: some prominent pattern is ever present at all scales; hence, they are self-similar. Fractals are everywhere in the natural world; consider, for example, some coastline: it looks self-similar however much we zoom in or out to/from some part of it. We can generate fractals of arbitrary complexity recursively or iteratively. In computer science, we may speak of recursion whenever a part of the definition of some function is a call to itself. A physical-world analogy would be that of holding a mirror against another. Consider the following code, wherein we compute factorials recursively:

f = { arg n; if (n > 1) { n * f.value(n-1) } { 1 } }; // a recursive function
f.(5).postln; // factorial of 5

In all recursive functions, it is imperative to use some kind of mechanism to prevent infinite function calls, a state also referred to as infinite recursion or an infinite loop, which would crash the interpreter at once. In the factorial example, we used an if statement to ensure that however big n is, the recursive calls will indeed cease at some point. We can also use the thisFunction keyword instead of the function's own name to emphasize that we are indeed within a recursive structure; however, we should always assign it to some local variable to clarify what function we are referring to; otherwise, we may encounter very obscure bugs whenever nested functions are involved. We will follow this approach herein for reasons of conceptual clarity. To create fractals, we need to define some drawing pattern that will repeat itself on an arbitrary number of levels. Each level consists of several branches, each being the parent of child branches, and so on, until the last level is reached, which features only its own branches. In the following example, we start from a central point and create line segments (our branches) that canonically spread in all directions. To achieve canonicity, all angles between adjacent branches must be identical, thus equal to 2π/numBranches radians. Each branch, starting at its own center (the center of the parent segment), will spawn its own children branches until the last level is reached. In actual programming practice, and to avoid infinite loops, we typically start with a variable set at the maximum level and decrement it in subsequent recursive calls until we reach 0, when recursion ceases. Again, we will use a factory so that we can create contingent structures with different characteristics with respect to the number of levels, number of branches, size (radius), and a changing factor (used to modulate the amount of change between subsequent levels).


( // a fractal factory
~fractalFactory = { arg numLevels, numBranches, position, radius, changeFactor;
    var fractalFunc = thisFunction; // assign thisFunction to a variable
    var points, children, fractal; // declare variables
    // calculate ending points for our segments
    points = Array.fill(numBranches, { arg i;
        var x, y;
        x = position.x + (radius * numLevels * cos((2pi/numBranches) * i));
        y = position.y + (radius * numLevels * sin((2pi/numBranches) * i));
        x@y;
    });
    // generate children
    if (numLevels > 0) { // if there are more levels to go
        var childrenPoints, childrenRadius;
        // calculate the children points for each of the branches
        childrenPoints = Array.fill(numBranches, { arg i;
            var x, y;
            x = (points[i].x + position.x) / 2;
            y = (points[i].y + position.y) / 2;
            x@y;
        });
        // calculate the children radii
        childrenRadius = radius * changeFactor;
        /* for each branch, generate a child fractal and add it to the children array */
        numBranches.do{ arg i;
            children = children.add(fractalFunc.(numLevels - 1, numBranches,
                childrenPoints[i], childrenRadius, changeFactor));
        };
    } { // if there are no more levels to go, set children to nil
        children = nil;
    };


    // create fractal object
    fractal = (
        children: children, /* an array with the children (all of them fractal objects, too, or nil if in the last level) */
        branches: numBranches, // how many branches
        draw: { arg self, colorFunc; // drawing function
            // draw self
            self.branches.do{ arg i;
                Pen.strokeColor_(colorFunc.()); // set a color for each branch
                Pen.line(position, points[i]); // create lines
                Pen.stroke; // stroke lines
            };
            // draw children
            if (self.children.notNil) { // if there are children
                // draw all of their branches
                self.children.do{ arg item;
                    item.draw(colorFunc);
                };
            };
        }
    );
    fractal; // explicitly return fractal
};
)

The preceding code can be used as follows:

( // a fractal example
var window, fractal; // declare variables
(PathName(thisProcess.nowExecutingPath).pathOnly ++ "9677OS_04_12.scd").loadPaths; // first load the fractal factory
window = Window("a fractal !", 640@640).background_(Color.black).front;
fractal = ~fractalFactory.(6, 4, window.bounds.center, 60, 0.6);
window.drawFunc_({
    fractal.draw({Color.rand});
});
)


The preceding code results in a fractal as shown in the following screenshot:

By modulating our factory's arguments, we may achieve a whole family of very different, albeit related, fractals. We could try these settings, for example:

~fractalFactory.(6, 6, window.bounds.center, 100, 0.5);


This way, we obtain the image displayed in the following screenshot:

Fractals are a very intriguing, albeit mathematically involved, subject. There are numerous kinds of fractals and numerous ways to implement them. The fractal factory herein is merely an example and should not be considered a rule set in stone; nonetheless, it does exemplify how to handle the most fundamental concepts, namely recursion levels and children branches. When dealing with fractals, we should always bear in mind that they are typical examples of exponential growth. Each additional level multiplies the number of recursive function calls, since every single branch will automatically acquire additional levels, each of which will have several branches having several children each, and so on. Fractals are computationally greedy beings, so we should always start with just a few levels/branches and gradually increment them to find out what our computer can handle.


Summary

Throughout this chapter, we introduced both fundamental and more advanced notions and techniques concerning two-dimensional vector graphics, and with numerous examples we demonstrated how to generate simple drawings as well as more sophisticated structures and shapes. In the next chapter, we will discuss the different types of motion and learn how to animate both individual shapes and more complex structures in various ways.


Animation

In the previous chapter, we learned how to generate complex shapes and structures using Pen and a series of fundamental techniques. In this chapter, we will introduce ourselves to the fundamentals of motion, and we will learn how to animate vector graphics using UserView. A series of more advanced concepts and techniques are also discussed, such as how to simulate physical forces to make our animations behave in a more natural way and how to animate articulated bodies. The topics that will be covered in this chapter are as follows:
• Fundamentals of motion
• Animating shapes and sprites
• Creating trailing effects
• User interaction and event-driven programming
• Animating particle systems and fractals
• Dynamics and kinematics

Fundamentals of motion

Animation is just a succession of different images that produces an illusion of movement. Therefore, to create interesting animations, we need to familiarize ourselves with the various ways in which we can set shapes, sprites, and more complex structures in motion, as well as with motion as a medium per se.


Motion species

Different kinds of motion evoke different emotional and cognitive responses. That is to say, motion has its own significance, which has to be carefully considered as a fundamental quality of a work, be it of artistic, scientific, or of any other nature. In a computer graphics context, we can distinguish between three basic types of motion:
• Uniform motion: The direction and the speed of the moving object(s) are kept unchanged
• Accelerated motion: The direction and the speed of the moving object(s) depend on various forces
• Chaotic motion: The motion is random and unpredictable to some degree

Uniform motion is almost absent from the physical world, wherein gravity, friction, acceleration, and miscellaneous other forces affect the way objects move in a completely causal way. We could emulate this behavior within the context of accelerated motion; nevertheless, the latter may stand for motion dependent on any kind of forces, even uncanny, out-of-this-world ones, since coherency and causality are not specific to reality. Chaotic motion is largely computer-specific and relies more or less on complex stochastic equations to behave in an explicitly non-linear and unpredictable manner. We should be comfortable with all types of motion so that we can grant our animations the particular quality we are after in any given context. Of course, different kinds of motion can be integrated into the same scenery, either temporally or spatially, so that we can achieve more complex scenarios.
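In code, the three species essentially boil down to different per-frame position updates. The following lines are a rough sketch of our own (the variable names are hypothetical, and this is not yet an animation):

( // per-frame update rules for the three motion species (a sketch)
var x = 0, velocity = 2, acceleration = 0.1;
x = x + velocity; // uniform: constant speed and direction
velocity = velocity + acceleration; // accelerated: speed changes under some force
x = x + velocity;
x = x + rrand(-2.0, 2.0); // chaotic: random, unpredictable displacement
x.postln; // the resulting position after the three kinds of update
)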

Using UserView

Having discussed the different kinds of motion, we need mechanisms to implement them. The standard way to animate vector graphics in all major frameworks is through some sort of callback function. This function will typically be evaluated several times per second (depending on the frame rate settings), each time calculating what the subsequent frame (that is, every individual image in the animating sequence) will look like. To animate something, we basically have to calculate how it should change for every consecutive frame and redraw it accordingly. Therefore, unlike traditional or stop-motion animation, computer animation is achieved simply by algorithmically describing how the scenery will permute over time.


Animation in SuperCollider is primarily addressed through a specialized UserView class. This will evaluate its drawFunc several times per second (at a rate set through the frameRate variable) and draw the resulting images once its animate variable is set to true. We can select either to have each resulting image replace the previous one, or to merge it with it, using the clearOnRefresh variable (its default value is true, which means that drawings will be replaced). Then, for a very basic animation, all we need to do is redraw a sprite at a new position every time so that it appears as if it is moving towards some direction. There are two ways to do this: by manually calculating what the new coordinates are, or by using geometrical transformations. In both cases, we need some sort of counter variable that will increment with respect to some unit of time, and we will use it to modulate our sprite's positioning. A readily available counter is the frame instance variable of UserView, which corresponds to the number of frames that have passed since the animation started. For example:

( // A descending circle
var window = Window("a descending circle", 400@400).front; // create the window
var userView = UserView(window, 450@450) // create the UserView
    .background_(Color.white) // set background color
    .animate_(true) // start animation !
    .frameRate_(60) // set frameRate to 60 frames per second
    .drawFunc_({ // callback drawing function
        var counter = userView.frame; // count the number of frames
        var x = 100; // no change in the horizontal axis
        var y = counter % (userView.bounds.height + 200); // calculate y as the modulo of the passed frames
        Pen.fillColor_(Color.yellow); // set color to yellow
        Pen.addOval(Rect(x, y - 200, 200, 200)); // create a circle
        Pen.fillStroke; // draw circle
    });
)

Notice how we use the % (modulo) operation herein to calculate the y coordinate. With modulo, we can easily map an ever-incrementing left operand into the range from 0 to the right operand minus one, which in our case ensures that when our circle goes out of bounds, it wraps back to its initial position. In this way, we can achieve a constantly repeating movement. Note that at each frame, the position of our circle is 1 pixel past the previous one (since frame is incremented by one every time). Dividing or multiplying our counter to control the difference (in pixels) between every subsequent sprite position will change the speed of its motion accordingly. However tempting it may be, it is not a good idea to modulate the frame rate to achieve different speeds; consider, for instance, what would happen if we had to deal with several objects, all moving at different speeds.
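To see the wrapping behavior in isolation, consider this minimal sketch:

( // an ever-incrementing counter wrapped into the range 0..399
(0, 100..900).do{ arg counter;
    [counter, counter % 400].postln; // 400 maps back to 0, 500 to 100, and so on
};
)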


Stop motion is an animation technique wherein objects are physically moved in small increments between individually photographed frames, creating the illusion of movement when these frames are played in sequence.

Animating complex shapes and sprites

Remember the windmills we designed in the previous chapter? In the following code, we rotate three of them in different ways. For this code to work, we need to evaluate the file holding the windmill factory definition first; be sure to update the path if needed.

( // rotating windmills
var window, userView, windmills;
(PathName(thisProcess.nowExecutingPath).pathOnly ++ "9677OS_05_windmill_factory.scd").loadPaths; // first load the windmill factory
windmills = [ // an array with three windmills
    ~windmillFactory.(100@100, 80),
    ~windmillFactory.(300@100, 80),
    ~windmillFactory.(500@100, 80)
];
window = Window("animation and mouse interaction", 600@200).front;
userView = UserView(window, 600@200).background_(Color.white)
    .animate_(true).frameRate_(60).drawFunc_({ // setup UserView and callback func
        var speed = 100; // change this to make rotation faster or slower
        Pen.push; // uniform motion
        Pen.rotate(userView.frame / speed, 100, 100); // simply use frame count
        windmills[0].draw();
        Pen.pop;
        Pen.push; // accelerated motion: back and forth
        Pen.rotate(sin(userView.frame / speed) * 2pi, 300, 100); // use the sinusoid of frame count
        windmills[1].draw();
        Pen.pop;
        Pen.push; // even more accelerated !
        Pen.rotate(tan(userView.frame / speed) * 2pi, 500, 100); // use the tangent of frame count
        windmills[2].draw();
        Pen.pop;
    });
)


The preceding code exemplifies how to easily achieve accelerated motion with trigonometric operations, as well as how to imply the existence of some environmental force; indeed, the windmills look as if they are rotating because of the wind. Note that drawFunc will be evaluated several times per second; therefore, to optimize performance, we should ensure that no redundant calculations are performed therein. This is why, in the previous chapter, we created the windmill factory in such a way that its construction and initialization stages are separated. Had we done everything inside a drawing function instead, we would have to perform the same calculations (computing the angular distances between each wing) 60 times per second for every windmill, resulting in unnecessary computation.

Fundamental animation techniques

By using counters and simple mathematical calculations, we can indeed describe all sorts of movements that a sprite of arbitrary complexity may perform over time. Nevertheless, animation is not limited to moving sprites around; quite often, we will want to implement certain effects or more complex kinds of motion.

Trailing effects

A typical case is that of adding trailing effects to an animation. If done wisely, trailing effects will make our animations a lot more interesting and organic. We can easily achieve such effects if we merge the current frame with the previous ones rather than replacing them. Consider the following code, wherein we set the clearOnRefresh variable to false (to instruct UserView to merge every frame with the previous ones) and use a semi-transparent rectangle to dampen the previous contents before actually drawing the new content.

( // rotating windmill trailing effect
var window, userView, windmill, speed = 100;
(PathName(thisProcess.nowExecutingPath).pathOnly ++ "9677OS_05_windmill_factory.scd").loadPaths; // first load the windmill factory
windmill = ~windmillFactory.(225@225, 150); // a new windmill
window = Window("Trailing Effect", 450@450).front;
userView = UserView(window, 450@450).background_(Color.white)
    .animate_(true).frameRate_(60).clearOnRefresh_(false).drawFunc_({
        Pen.fillColor_(Color(1, 1, 1, 0.4)); // a transparent white
        Pen.addRect(Rect(0, 0, 450, 450)); /* create a semi-transparent rectangle to dampen previous contents */
        Pen.fill; // draw rectangle
        Pen.push;
        Pen.rotate(tan(userView.frame / speed) * 2pi, 225, 225); // rotating windmill
        windmill.draw(); // draw windmill
        Pen.pop;
    });
)


Interaction and event-driven programming

For certain applications, we will need to interact with our animations to dynamically change the scenery at will. The generic strategy is to use variables of broader scope inside our drawFunc, so that we can later access and modify them externally. Departing from the previous example, all we need to do is make sure we change the positioning of our windmill with respect to some x and y variables inside our callback function:

windmill.refresh(x@y);

We will then use sliders to change the values of x and y respectively, as shown in the following code snippet:

EZSlider.new(window, 430@40, "x", ControlSpec(0, 440), {arg slider; x = slider.value});
EZSlider.new(window, 430@40, "y", ControlSpec(0, 440), {arg slider; y = slider.value});
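For reference, here is a minimal self-contained sketch of the same strategy (our own reconstruction, using a plain circle instead of the windmill so that it does not depend on the factory file):

( // a shape controlled by sliders, a minimal sketch
var x = 200, y = 200; // broader-scope variables accessed inside drawFunc
var window = Window("slider interaction", 450@530).front;
var userView = UserView(window, 450@450).background_(Color.white)
    .animate_(true).frameRate_(60).drawFunc_({
        Pen.fillColor_(Color.red);
        Pen.addArc(x@y, 20, 0, 2pi); // draw the circle at the current position
        Pen.fill;
    });
EZSlider(window, Rect(10, 455, 430, 30), "x", ControlSpec(0, 440), {arg slider; x = slider.value});
EZSlider(window, Rect(10, 490, 430, 30), "y", ControlSpec(0, 440), {arg slider; y = slider.value});
)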

The entire windmill version of this code can be found online with the code bundle of this book. The result is shown in the following screenshot:


Sometimes we may want to interact not through some GUI but through user actions, such as typing on the keyboard, clicking the mouse, resizing a window, or even through some audio signal (we will elaborate on such cases in subsequent chapters). Whenever the flow of a program relies on such user actions, we may speak of Event-driven Programming (EDP). In this programming paradigm, a typical way to associate some user action with a specific task is through event handlers: dedicated callback subroutines which will perform some task when a particular user action is detected. Dedicated event handlers for a wide range of user actions are already implemented in UserView (actually, in any kind of View), including moving/dragging/clicking the mouse, using the keyboard, performing drag-and-drop, resizing/moving a Window object, and so on. Using event handlers in our context is very similar to using GUI objects as before; the only difference is that we will use the former to modulate our variables. We then need to pass our handler a callback function, which will be evaluated when the corresponding user action is detected, with several arguments implicitly passed. These include the parent View itself, the handler it is attached to, as well as a number of other arguments relevant to the individual user action; for instance, in the case of mouseDownAction, the arguments passed are the parent View, the cursor's x and y coordinates, modifiers (which modifier keys are in effect, if any), buttonNumber (which button is pressed), and clickCount (for single, double, or more clicks). We can easily adapt the previous example to use the mouse's cursor to control the positioning of our windmill, and mouse clicks to select a new random motion speed (single-click) or a new random density for the trailing effect (double-click). We will have to add the event handlers, as shown in the following code:

// event handlers
window.acceptsMouseOver_(true); // this has to be set to true for the handlers to function properly
userView.mouseDownAction_({ arg view, x, y, modifiers, buttonNumber, clickCount;
    if (clickCount == 1) { // on one click
        speed = rrand(10, 200); // use this to change speed
    } { // on more clicks
        trailsDensity = rrand(0.1, 0.6); // change trailing effect's density
    }
});
userView.mouseOverAction_({ arg view, x, y; // on mouseOver
    position = x@y; // use this to change rotation's center
    windmill.refresh(x@y); // change windmill's positioning
});


We also have to make sure that we declare the variables we need, and update the drawFunc function so that rotation occurs with respect to the new positioning and so that the speed and trailing effect density are modulated. The entire code can be found online in this book's code bundle. Consider the following screenshot:

Particle systems

Sometimes we may want to set groups of related objects, or particle systems, in motion. All we need to do is iterate through all of the elements and describe the motion with respect to the iterator's index (or indices) if we want to achieve some sort of interdependent movement. Consider the following code as the departing point:

( // animating particles
var particleEngine = { arg width, height, distance, counter;
    (width / distance).floor.do{ arg ix;
        (height / distance).floor.do{ arg iy;
            var x, y; // positioning
            var color, radius, xoffset, yoffset;
            // replace the following as needed
            color = Color.white;
            radius = 30;
            xoffset = 0;
            yoffset = 0;
            x = (distance / 2) + (ix * distance) + xoffset;
            y = (distance / 2) + (iy * distance) + yoffset;
            Pen.fillColor_(color);
            Pen.push;
            Pen.rotate(2pi.rand, x, y);
            Pen.addArc(x@y, radius, 0, 2pi);
            Pen.fill;
            Pen.pop;
        };
    };
};
var window = Window("animating particles", 640@640).front;
var userView = UserView(window, 640@640).background_(Color.black)
    .animate_(true).frameRate_(60).drawFunc_({
        var counter = userView.frame / 30; // counter
        particleEngine.value(640, 640, 70, counter); // width, height, distance between particles' centers, and counter
    });
)

Now we can simply change the way we calculate radius, color, offset, and counter to achieve various kinds of motion, as shown in the following code:

color = Color(sin(counter).abs, cos(counter).abs, sin(counter/4).abs); // modulate color
radius = sin(counter / 2).abs * 20; // modulate radius
xoffset = sin(counter) * 10; // move left and right
yoffset = sin(counter/2) * 10; // move up and down

Alternatively, to achieve a different kind of motion, we could try the following code:

color = Color(sin(ix).abs, cos(ix).abs, sin(iy+ix).abs); // modulate color
radius = sin((ix+1) * (iy+1) * (counter/10)).abs * 30; // modulate radius
xoffset = 0;
yoffset = 0;


Accelerated motion can be realized by changing the global speed settings as follows:

counter = tan(userView.frame / 100).abs;

A still from the animation is shown in the following screenshot:

Advanced concepts

More complex animations can easily be generated by combining the fundamental techniques examined previously. Nevertheless, certain kinds of motion and certain kinds of structures are impossible to deal with without a sufficient understanding of more advanced concepts, to be touched upon herein. While it is impossible to scrutinize such specialized topics in depth here, we will attempt an introduction and give several examples; readers interested in such topics may look for more specialized resources.


Animating fractals

As far as fractals are concerned, there are lots of ways in which we can set them in motion. In the following code, for instance, we gradually animate the fractal rather than have all of it drawn at once. The idea is to start with lines of zero length and gradually extend them until the whole fractal is formed. In the fractal factory we implemented in the previous chapter, we calculated the anchor points of all the line segments the fractal consists of; consequently, we already know the initial and final coordinates for every segment. All we need to do, then, is use a counter variable that increments from 0 to 1 at certain steps (defined by the speed variable) to compute what the ending points of our segments should be for any given frame. Accordingly, we will progressively blend their colors too, using the blend instance method of Color, again with respect to the counter variable. We only need to add a couple of additional variables and an additional animate method to the fractal object we designed in the previous chapter, as follows:

counter: 0, // counter for animation
animatePoints: Array.newClear(numBranches), // intermediate points for animation
animate: { arg self, speed = 0.01, colors = [Color.red, Color.green];
    self.branches.do{ arg i;
        if (self.counter < 1) { // if not done
            self.counter_(self.counter + speed); // update counter
            // calculate line segments to draw with respect to counter
            self.animatePoints[i] = Point(
                position.x + ((points[i].x - position.x) * self.counter),
                position.y + ((points[i].y - position.y) * self.counter),
            );
            // draw segment
            Pen.strokeColor_(colors[0].blend(colors[1], self.counter)); // progressively blend colors with respect to counter
            Pen.line(position, self.animatePoints[i]);
            Pen.stroke;
        } { // if done
            // draw the completed fractal
            Pen.strokeColor_(colors[0].blend(colors[1], self.counter));
            Pen.line(position, self.animatePoints[i]);
            Pen.stroke;
            self.counter_(0); // reset counter to start from scratch
        };
    };
    // animate children
    if (self.children.notNil) { // if there are children
        // draw all of their branches
        self.children.do{ arg item;
            item.animate(speed, colors);
        };
    };
},

The full factory can be found in this book's code bundle online. Now we can simply continue as follows:

( // fractal animation
var window, userView, fractal;
// first load the fractal factory
(PathName(thisProcess.nowExecutingPath).pathOnly ++ "9677OS_05_07.scd").loadPaths;
window = Window("fractal animation", 640@640).front;
fractal = ~fractalFactory.value(5, 7, window.bounds.center, 60, 0.5); // create a fractal
userView = UserView(window, 640@640).background_(Color.black)
    .animate_(true).frameRate_(30).drawFunc_({
        fractal.animate(0.001, [Color.red, Color.green]); // animate it
    });
)


A still frame from the animation is shown in the following screenshot:

By the way, this is a CPU-demanding example that might cause SuperCollider to crash on weaker computers; it is always a good idea to start with fractals of just a few levels and gradually attempt deeper ones to avoid surprises.


Adding dynamics to simulate physical forces

We have already demonstrated how using simple trigonometric functions to achieve accelerated motion implies the presence of wind. We can go further than merely implying physical forces and actually emulate them by adding dynamics to our animations. Simulating physics can be very tedious and mathematically involved. However, bear in mind that we don't necessarily have to programmatically describe all the laws of physics to have our sprites move in more realistic ways. Actually, we don't even have to be interested in real-world physics to use dynamics; we may as well create our own systems by defining non-real-world physical rules specific to them. In any case, we can programmatically describe physics and cast the motion behavior accordingly. This can be done by defining the individual physical forces that affect a scenery, as well as the individual physical properties of every structure that is affected; the latter modulate how the former affect a body. For example, a bouncing ball may be affected by gravity and wind acceleration (physical forces), but exactly how it responds depends on its mass and flex. To successfully emulate the behavior of such an object, we merely need to calculate its new positioning with respect to those forces and qualities. However, before actually doing so, we need to familiarize ourselves with vectors.

Certain quantities, such as weight, mass, or flex, have magnitude alone and, consequently, can be described by some simple number; such quantities are broadly referred to as scalars. Forces, however, are not scalars, as they typically have both magnitude and direction. Consider, for example, acceleration, velocity, or pressure; none of these can be described simply with a number. Such quantities are referred to as vectors in physics. In a computer graphics context, a vector is a mathematical entity characterized by both a magnitude and a direction. The easiest way to conceptualize a vector is as an n-dimensional arrow pointing in a certain direction and having its origin at the center (whatever that may stand for) of the structure that is affected. Any force can then be represented as a set of n-dimensional Cartesian coordinates representing the point at the end of that arrow minus its origin (which is usually considered the zero point, for simplicity). Relying on vector algebra, we can calculate what the overall effect of individually applied forces would be on some body. In the following figure, for example, we can see towards what direction a body would move if we apply to it three vectors: P(2,0,0), P(0,3,0), and P(0,0,5). The result is P(2,3,5).


(Figure: a three-dimensional coordinate system with origin O, wherein the vectors P(2,0,0), P(0,3,0), and P(0,0,5) sum to the resulting vector P(2,3,5).)

The significant advantage of using vectors is that we do not have to perform all these calculations ourselves, but can rather rely on the appropriate methods of some vector object. While there is no built-in support for vectors in SuperCollider, there is a Quark extension, namely VectorSpace, which provides us with miscellaneous vector classes we can use to represent vectors and perform all major algebraic operations on them. The following code exemplifies how we can use RealVector3D (RealVector2D is similar in spirit):

a = RealVector3D[1,2,3]; // create a new 3d vector
a.x; // access first coefficient
a.y; // access second coefficient
a.z; // access third coefficient
a = a * RealVector3D[pi,2.4,3]; // multiply it with another
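For instance, the situation in the preceding figure could be sketched as follows (assuming the VectorSpace Quark is installed):

( // summing three forces into a resulting direction
var result = RealVector3D[2,0,0] + RealVector3D[0,3,0] + RealVector3D[0,0,5];
result.postln; // the combined force, pointing at (2,3,5)
)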


Now we will use vectors to emulate how a ball would bounce in the presence of various forces, such as gravity, wind, and friction, and with respect to its mass and flex. We will start by designing a model for the ball. We need variables for position, mass, and flex, as well as for velocity and acceleration. We will also need an addForce method that will first normalize the force added with respect to the ball's mass and then calculate what the ball's acceleration should be with respect to this normalized force. We can later have our ball affected by as many forces as we want (using addForce) and have the ball's acceleration updated accordingly. Afterwards, we need a draw method that will calculate the ball's current velocity with respect to acceleration, compute what the ball's positioning should be, and draw it accordingly. Its bouncing behavior is implemented inside draw using if structures. The ball will bounce whenever it encounters any of our canvas' edges, with respect to its flex; some of its acceleration should also be lost due to friction, as shown in the following code:

( // a ball factory
~ballFactory = { arg radius = 40, initialPosition = 0@0, color = Color.green,
    mass = 10, flex = 0.9, bounds; // the bounds define when the ball should bounce
    var ball = (
        velocity: RealVector3D[0,0,0], // initial velocity
        mass: mass, // the mass of the ball
        flex: flex, // 1 is perfectly elastic and 0 perfectly inelastic
        position: initialPosition, // position of the ball
        acceleration: RealVector3D[0,0,0], // acceleration of the movement
        addForce: { arg self, force; // add forces to the ball
            var normalizedForce;
            normalizedForce = force * self.mass; // force should be affected by the mass
            self.acceleration_(self.acceleration + normalizedForce); // calculate acceleration
        },
        draw: { arg self; // draw ball
            self.velocity_(self.velocity + self.acceleration); // calculate current velocity
            self.acceleration_(RealVector3D[0,0,0]); // reset acceleration
            self.position_(self.position + self.velocity); /* calculate new position - we can indeed add a RealVector2D to a point ! */
            // make ball bounce
            if (self.position.y > bounds.y) {
                self.velocity[1] = self.velocity[1].neg * self.flex;
                self.position.y = bounds.y;
            };
            if (self.position.y < 1) {
                self.velocity[1] = self.velocity[1].neg * self.flex;
                self.position.y = 1;
            };
            if ((self.position.x > bounds.x) || (self.position.x < 1)) {
                self.velocity[0] = self.velocity[0].neg * self.flex;
            };
            Pen.fillColor_(color);
            Pen.addArc(self.position, radius, 0, 2pi);
            Pen.fill;
        }
    );
    ball;
};
)

Now we can proceed as follows:

( // bouncing balls example
var window, userView, ball; // window, userView and ball
var wind, gravity, frictionX, frictionY; // various forces
// first load the ball factory
(PathName(thisProcess.nowExecutingPath).pathOnly ++ "9677OS_05_10.scd").loadPaths;
window = Window("bouncing ball", 640@640).front;
userView = UserView(window, 640@640).background_(Color.black)
    .animate_(true).frameRate_(60).drawFunc_({
        if ((userView.frame % 240) == 0) { // every 4 seconds (4 x 60 frames)
            // create a ball of random characteristics
            var mass = rrand(5,15);
            var flex = rrand(0.7,1.0).trunc(0.1);
            ball = ~ballFactory.(radius: 40, initialPosition: Point(rrand(0,400), 0),
                color: Color.rand, mass: mass, flex: flex, bounds: 640@640);
            ("New ball of mass" + mass + "and of flex" + flex + "created").postln;
            // create random forces
            wind = RealVector3D[rrand(0.01,0.3).trunc(0.01), rrand(0.01,0.3).trunc(0.01),
                rrand(0,0.3).trunc(0.01)]; // wind from some random direction
            gravity = RealVector3D[0,0.4,0]; // constant downward gravity
            frictionX = RealVector3D[0.1,0,0]; // horizontal friction
            frictionY = RealVector3D[0,0.1,0]; // vertical friction
            ("Forces applied are: Wind," + wind + "Gravity," + gravity + "Horizontal Friction," + frictionX + "Vertical Friction," + frictionY).postln;
        };
        // add forces to the ball
        ball.addForce(wind);
        ball.addForce(gravity);
        if ((ball.position.x > 640) || (ball.position.x < 1)) {
            // if touching the horizontal edges apply horizontal friction
            ball.addForce(frictionX);
        } { // else, if touching the bottom, apply vertical friction
            if (ball.position.y == 640) { ball.addForce(frictionY) };
        };
        ball.draw(); // draw the ball
    });
)

Kinematics

Hitherto, we have only dealt with monolithic sprites that move as a whole towards some direction. But what if the body we want to set in motion is articulated? In that case, we would have to calculate what the new position should be for each one of its parts according to its intrinsic rules. And how can we model articulated bodies that behave organically in the first place? To deal with such cases, we must resort to kinematics: the study of how mechanical points, bodies, and systems of bodies move. Herein, we will demonstrate how to model and move an articulated snake-like creature. Our snake will consist of line segments of gradually decreasing width, each of which is able to bend up to a certain angle. We then need to describe programmatically how every segment should move when the whole body is asked to move towards some arbitrary direction. We will have a variable (named theta) constantly incremented by a small number so that the tail of the snake has a tendency to move, and then have every movement of the head back-propagate accordingly. This is done by computing what the position of each segment should be as a function of simple trigonometric operations and with respect to the positioning of the adjacent ones. A snake factory is given herein. Notice that the color argument should be a function, and that it will be implicitly passed an index incrementing from 0 to 1; we can use this to create smooth color progressions and gradients, as shown in the following code:


( // a kinematic snake factory
~snakeFactory = { arg numberOfSegments = 50, length = 20, width = 40,
    colorFunc = {arg i; Color(1,0,i)};
    // the body
    var body = Array.fill(numberOfSegments, { arg i;
        (position: 0@(i * 2), radius: (2 * (50 - i) / 2)); // an event
    });
    var snake = ( // the snake
        position: 0@0, // the current position of the snake
        theta: 0.1, // used to calculate the angles
        draw: { arg self;
            self.theta_(self.theta + 0.0005); // increment theta
            body[0].position_(self.position); // the position of the head
            body[1].position_(Point(body[0].position.x + sin(pi + self.theta),
                body[0].position.y + cos(pi + self.theta).neg)); // the segment next to the head is calculated as a function of theta
            /* calculate the position and color of each segment with respect to the adjacent ones */
            (numberOfSegments - 4).do{ arg i; /* iterate over the rest of the segments (-4 because we access i+2, i+1 and i herein) */
                var newPosition, hypotenuse, points;
                var index = i + 2;
                var color = colorFunc.(index / numberOfSegments); // calculate color
                newPosition = body[index].position - body[index - 2].position; // calculate the new position as a function of a previous segment's position
                hypotenuse = newPosition.x.hypot(newPosition.y); // calculate the hypotenuse between x and y of this new position
                body[index].position_(body[index - 1].position + ((newPosition * length) / hypotenuse)); // set the positioning of this segment
                points = [ // array with the positions of 2 consecutive segments
                    body[index - 1].position,
                    body[index].position
                ];
                // draw segment
                Pen.strokeColor_(color);
                Pen.width_(width * (numberOfSegments - index) / numberOfSegments);
                Pen.line(points[0], points[1]);
                Pen.stroke();
            };
        },
        refresh: { arg self, newPosition;
            self.position_(newPosition); // update position
        }
    );
    snake;
};
)

Then, we can use the model as shown in the following code:

( // kinematics example
var window, userView, snake;
// first load the snake factory
(PathName(thisProcess.nowExecutingPath).pathOnly ++ "9677OS_05_12.scd").loadPaths;
snake = ~snakeFactory.(50, 20, 40);
window = Window("Kinematic snake", 640@640).front.acceptsMouseOver_(true); // to enable mouse actions
userView = UserView(window, 640@640).background_(Color.black)
    .animate_(true).frameRate_(60).drawFunc_({
        snake.draw;
    });
userView.mouseOverAction_({ arg m, x, y;
    snake.refresh(x@y);
});
)

A screenshot is shown as follows:


Summary

In this chapter, we discussed animation and elaborated on both basic and more advanced techniques, including animating monolithic shapes and sprites, implementing interaction and trailing effects, emulating the effect of environmental forces, and setting in motion particle systems, fractals, and articulated bodies. In the next chapter, we will learn how to retrieve data from various sources, including offline and online databases, as well as by means of analyzing audio signals; we will also learn how to manipulate and preprocess data and perform data mappings and encodings.


Data Acquisition and Mapping

In the previous two chapters, we dealt with computer-generated graphics and the various ways to achieve animation, thus preparing the groundwork for designing more sophisticated data/audio visualizers. Doing so, however, also involves retrieving, manipulating, and encoding data appropriately, which will be discussed in this chapter. More to the point, we will examine various mechanisms to acquire and generate data from a wide range of possible sources, as well as methodologies to process, encode, and distribute them within our programs. Such techniques are invaluable in miscellaneous contexts, and even if at this point they appear largely irrelevant to visualization, they are rather fundamental to it and are encountered even in the simplest scenarios. Readers primarily interested in the latter will have to be patient during this chapter; all will make sense in the next one. The topics that will be covered in this chapter are as follows:
• Retrieving data from local or remote databases
• Using OSC and serial communication protocols
• Machine listening and audio information retrieval
• Testing and preprocessing data
• Basic mappings and encodings
• Exchanging data within our programs


Data acquisition

These days, data is literally everywhere: stored in local or remote databases, accessed through dedicated Application Programming Interfaces (APIs), distributed through the File Transfer Protocol (FTP), and even generated dynamically by specialized hardware or software. Data acquisition stands for the techniques involved in acquiring already existent data, and is not to be confused with the related, albeit fundamentally different, tasks of information retrieval or feature extraction; the latter refer to generating or extracting (otherwise nonexistent) information by means of analyzing data. In all cases, to import data into SuperCollider we need a source and a channel, the former being the place where the data of interest happens to be, and the latter being the way to retrieve it.

An Application Programming Interface specifies how certain software components are to be accessed extrinsically and how they are supposed to interact with each other. The File Transfer Protocol is a network protocol used to transfer files from one host to another and is typically used on the Internet.

Dealing with local files

The most fundamental of all data acquisition techniques is reading from or writing to some local file; in other words, performing file I/O (that is, input/output) operations. The reason is that quite often we can simplify more complex data acquisition problems by simply using local files as intermediates. In SuperCollider, all file I/O tasks are addressed through the File class. Writing data to a file is very easy:
1. Open the file for either writing (using the "w" keyword, or the "wb" keyword for binary files) or for appending (using the "a" keyword, or the "ab" keyword for binary files).
2. Invoke write with our dataset as an argument.


When in writing mode, a new file will be created, replacing any already existent one having the same name (there is no way to undo this, so we should be careful). In appending mode, a new file will be created too, but this time, if there is an already existent one with the same name, data will be appended to its end. A simple example follows:

( // storing data to a local file
var data = Array.fill(1000, {rrand(0,1000)}); // an array of random values
var file = File("dataset.dat".absolutePath, "w"); // open for writing operations
data.do{ arg i;
    file.write(i + "\n"); // write data, adding a newline character to the end
};
file.close; // close file when done
)

Note that we have to add some kind of delimiter (a newline in this case) between each piece of data if we want to be able to distinguish between the entries later; otherwise, each datum would stick to the next, in this case resulting in one long, indistinguishable string of digits. We can use the absolutePath (or standardizePath) method to resolve ~ (which stands for the home directory in POSIX (that is, Unix-like) operating systems, and which is incomprehensible to Microsoft Windows) to a proper path (which in my case is /Users/Marinos/). If we only provide a filename instead of a full path, our new file will be created in the default directory, which is the folder wherein SuperCollider is installed. Then, to read the contents of our newly generated file, we can use the readAllString method:

( // read a file
var file = File("dataset.dat".absolutePath, "r"); // open for reading operations
var data = file.readAllString;
data.postln;
file.close; // close file when done
)


This method reads the whole file as a single String object. Such an approach, however, can be problematic for several reasons; the major drawback is that we copy the contents of the whole file into our computer's memory, which could easily be overloaded if large datasets are read. When dealing with large datasets, or with datasets of unknown size, it is wiser to read chunks of data one at a time instead. We can do so using the getLine, getChar, or getFloat methods (depending on what kind of data we need to retrieve) within some routine, for example:

( // reading chunks of data
var file, data;
file = File("~/dataset.dat".absolutePath, "r"); // open for reading operations
fork{loop{ // use a routine to read a chunk at a time
    if (file.pos != file.length) { // if there are data left
        data = file.getLine; // get a new line of data
        data.postln; // do something with data
    } { // close file and stop routine when done
        "done !".postln;
        file.close;
        thisThread.stop;
    };
    0.01.wait; // wait before iterating through the remaining data
}};
)

Every open file is associated with an implicit variable indicating the position in the file from which the next value will be read. This position pointer starts at 0 and increments accordingly every time we access the data. In this example, we use pos to access this variable and test it against the total length of the file so as to stop when we have read all the data. It is worth mentioning, too, that had we not used a newline (that is, \n) delimiter before, we wouldn't have been able to use the getLine method in this example. Generally speaking, it is of great importance to know how the contents of a file are structured before attempting to read them. Most kinds of files containing data follow the convention of beginning with some sort of header (that is, a string of text giving information on the kind of data, and so on) followed by the data entries, separated by some kind of delimiter. Regarding the header, we may want to read it and have our algorithm configured accordingly, or we can totally ignore it (by simply omitting the first line) if we already know what kind of data we are dealing with. As far as delimiters are concerned, the most common ones are tabs, commas, semicolons, newlines, or spaces. We have already demonstrated how to get the next entry when data are delimited with newlines. There are also specialized file readers available for other kinds of delimiters, namely TabFileReader, SemiColonFileReader, CSVFileReader (CSV stands for Comma-Separated Values), or the generic FileReader, which can be instructed to identify any kind of delimiter. For example:

( // using custom delimiters
var data = Array.fill(1000, {rrand(0,1000)}); // an array of random values
var file = File("dataset.dat".absolutePath, "w"); // open for writing operations
data.do{ arg item;
    file.write(item.asString + $@); // write data adding a custom delimiter
};
file.close; // close file when done
// read data with FileReader
data = FileReader.read("dataset.dat".absolutePath, delimiter: $@);
data[0].postln; // print data in the post window
)

Note that FileReader.read will return a two-dimensional array wherein each entry represents the data found on the corresponding line. In our case, all of our data is placed consecutively on a single line; therefore, all of it is to be found in the first entry of the result.
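Note also that the entries arrive as strings; assuming we want numbers back, the conversion could be sketched as follows:

( // convert the string entries back to integers (a sketch)
var data = FileReader.read("dataset.dat".absolutePath, delimiter: $@);
var numbers = data[0].select({ arg s; s.notEmpty }) // drop a possible empty entry after the trailing delimiter
    .collect(_.asInteger); // FileReader returns strings, so convert each entry
numbers.postln;
)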

Accessing data remotely

Cases that involve accessing data from some remote location are probably the norm rather than the exception these days, be it via the World Wide Web, FTP, or some private host. Our basic approach is to first download the corresponding file locally and then proceed as before. We can programmatically download files by means of some third-party command-line utility and SuperCollider's shell support. A shell is a command-line interface for an operating system, and as such it will respect different kinds of commands depending on our platform. In this book, I assume a POSIX operating system, such as Mac OS X or some flavor of Linux, albeit most of these commands are compatible with Windows too, with no or minor modifications. Shell support in SuperCollider is implemented through a set of dedicated methods of the String class, namely unixCmd (execute a command asynchronously, that is, without waiting for it to finish before our program advances), systemCmd (execute a command synchronously, that is, waiting for it to finish before continuing), unixCmdGetStdOut (execute a command synchronously and return the output), and unixCmdGetStdOutLines (execute a command synchronously and store each line of the output into an Array object).
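To get a feel for the difference between these methods, here is a quick sketch using harmless POSIX commands:

( // the shell-support methods at a glance
"sleep 1".unixCmd; // asynchronous: returns immediately, posts the exit status later
"echo hello".systemCmd.postln; // synchronous: blocks, then returns the exit status
"echo hello".unixCmdGetStdOut.postln; // synchronous: returns the output as a String
)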


As far as the third-party utility is concerned, there are many options, the most famous of which are probably wget and curl. We will use the latter hereinafter, which, if not already installed, can be downloaded from http://curl.haxx.se/download.html (binaries exist for most major operating systems). Curl will print the contents of a file to stdout (that is, the standard output, which is typically our screen), or write them to a file if a -o flag and a filename are specified. Consider the following example, wherein we download a file containing comma-separated numerical data from a remote server and then read it into an array:

( // accessing remotely stored data with curl
var data, path;
path = "arrhythmia.data".absolutePath; // destination path
("curl \"http://archive.ics.uci.edu/ml/machine-learning-databases/arrhythmia/arrhythmia.data\" -o" + path).systemCmd;
"ok reading !".postln;
data = CSVFileReader.read("arrhythmia.data".absolutePath, true, true);
)

This particular dataset is multidimensional, contains cardiac arrhythmia measurements, and is freely available on the Internet in plain-text format. If our file were HTML-formatted, we could use the readAllStringHTML method of File instead, to strip out all markup and keep only the actual text. There are also cases wherein we can only access data through some dedicated API. The website http://www.random.org offers on-demand true random numbers that are generated using measurements of atmospheric noise. To get the numbers, we have to use its specialized HTML API to describe how many and what kind of numbers we want; details on that particular API can be found at http://www.random.org/clients/http/. In the following example, we demand 52 random integers in plain-text format:

( // accessing remote data from random.org
var data = "curl \"http://www.random.org/sequences/?min=1&max=52&col=1&format=plain&rnd=new\" ".unixCmdGetStdOutLines;
data.postln;
)

In this particular case, wherein the API is an HTML interface, all we had to do was use curl as before, but with a properly formatted HTTP address this time. In other cases, we might have to use some other kind of utility or programming environment to access the data. Afterwards, depending on the particular tools we use, we may either store the data to some file and then read it in SuperCollider, use shell support to read it from the standard output, or send it to SuperCollider via one of the supported communication protocols, as discussed promptly.


Using OSC

There are cases wherein we need to establish real-time communication between SuperCollider and third-party software or hardware; for example, when data is generated on the fly elsewhere, or when we rely on some specialized API that sends data asynchronously. Open Sound Control (OSC) is currently the most significant communication protocol, being highly optimized for modern networking technology, accurate, fast, and highly customizable. If the hardware/software we are interested in bridging with SuperCollider supports it, this is the protocol to use. By the way, all of SuperCollider's internal language/server communication is also built on OSC. Before discussing OSC's messaging style, we need to discuss how to establish communication between the various parties involved. The OSC protocol is based upon the so-called User Datagram Protocol (UDP), which is a purely network-oriented protocol. Hence, we have to first set up a network and establish communication with the other end, even if it is just an application on the same computer. The sender has to be configured to send OSC messages to the Internet Protocol (IP) address and port that the SuperCollider language (SCLang) is listening to. As far as the port is concerned, the default is 57120, but we can always evaluate NetAddr.langPort just to double-check. The situation is a bit more complicated as far as IP addresses are concerned. If communication is only to occur within our computer, then we can simply use 127.0.0.1, which is the local IP address. If we want to communicate with some device in our local network, that is, directly connected to our computer or to the same router we are connected to, we need to find out our internal IP address using some utility such as ifconfig (on POSIX systems) or ipconfig (on Windows); on my computer, if I evaluate "ifconfig".unixCmd, I get several lines of text, some of which are:

en1: flags=8863 mtu 1500
    ether 00:26:bb:09:16:09
    inet6 fe80::226:bbff:fe09:1609%en1 prefixlen 64 scopeid 0x5
    inet 192.168.10.9 netmask 0xffffff00 broadcast 192.168.10.255
    media: autoselect
    status: active


Therefore, my internal IP address is 192.168.10.9. If communication is to be established with remote clients outside our local network, for instance with some web server or with some computer in a geographically different location, we need to find out our external IP address. This we cannot do locally, since our operating system is unaware of it; we rather have to use some service such as http://ip.alt.io/ or http://www.whatismyip.com/. Bear in mind, however, that both the internal and external IP addresses will most likely change when we reboot our router or when we disconnect and reconnect to the network. There are ways to guarantee a static IP address if needed for critical applications: the easiest, but not free, way is to ask for one from our Internet Service Provider (ISP); another alternative is to use Dynamic DNS services and configure our router accordingly, though note that several ISPs consider this a violation of the contract. Thereafter, we need to register an OSCFunc object to schedule something to happen when the desired message arrives; such objects are referred to as responders. We can configure our responder to only listen to messages arriving from a particular IP address or port, or to a particular kind of message, by passing the appropriate arguments. As for ports, we need to clarify that the sender's port is the one from which messages are dispatched, and that this is not (necessarily) the same port to which we are sending the messages. OSCFunc filters on the former, rather than the latter; the sender's port can be found either by consulting the specifications of the third-party software/hardware we are using or by evaluating OSCFunc.trace. The latter will print detailed information on all incoming OSC messages in the post window, wherein we can also see the sender's IP address and port number, as well as the message itself. Note that the server tends to send a lot of OSC messages to SCLang when active, so we shouldn't be surprised if we witness a lot of messages that we didn't send. Having established a communication channel, we can start sending OSC messages to SuperCollider. OSC messages are identified by their path, which is a string of keywords separated by slashes, for instance "/msg/test". To register a responder for this message, we can simply write:

( // respond to an incoming OSC message
OSCFunc({ arg msg;
    msg.postln; // print the message bundle
}, '/msg/test'); // listen to the /msg/test message
)


The actual body of an OSC message may consist of an arbitrary number of elements. We can also send Arrays, but they have to consist of 8-bit integers only. We can test our responder from within SuperCollider as follows:

( // send an OSC message to the client
var receiver = NetAddr.new("127.0.0.1", 57120); // localhost
var data = Int8Array.fill(100, {rrand(0,100)}); // 8-bit data
receiver.sendMsg("/msg/test", data); // send OSC message
)

It has to be stressed that the UDP protocol does not check whether our data has arrived intact or not; therefore, it is not a good strategy to send large datasets at once via OSC, because if they never arrive we might not be able to tell. Possible solutions to this problem are either sending chunks of data, so that even if some messages never arrive the cost is bearable, or implementing a custom communication scheme ourselves, wherein the responder replies to the sender once it has received a message, and the latter waits for the receiver's confirmation before advancing to the next message. Depending on the context, even more sophisticated communication could be implemented, wherein each message sent would also contain its unique ID and the number of chunks remaining, so that the receiver could keep track of everything and explicitly ask for a particular chunk if it never arrived.

UDP, designed by David Reed in 1980, is one of the core network protocols used on the Internet; it allows computer applications to send messages to other hosts without having to explicitly set up special transmission channels beforehand. An Internet Protocol address is a numerical label assigned to each device participating in a computer network that uses the Internet Protocol for communication.
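Returning to the chunking strategy, a minimal sketch could look as follows (the "/msg/chunk" path and the chunk size are hypothetical choices of ours):

( // send a large dataset in indexed chunks of 50 values each
var receiver = NetAddr("127.0.0.1", 57120);
var data = Array.fill(1000, {rrand(0,100)});
var chunks = data.clump(50); // split the dataset into chunks of 50 values
fork{
    chunks.do{ arg chunk, i;
        // send chunk index and total count along with the payload
        receiver.sendMsg("/msg/chunk", i, chunks.size, Int8Array.newFrom(chunk));
        0.01.wait; // give the network some breathing room between messages
    };
};
)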


Using MIDI

The Musical Instrument Digital Interface (MIDI) is reminiscent of the 80s, but still remains the protocol of choice for several software and hardware manufacturers. In my opinion, it is both unfortunate and sad that certain contemporary pieces of software or hardware only support MIDI and not OSC; nevertheless, this is a scenario encountered quite often these days, so we must be fluent with MIDI as well. Bear in mind that using MIDI is our only option when we need to communicate with outdated hardware synthesizers, computers, and relevant equipment created before OSC was standardized. When compared to OSC, MIDI is a very limited protocol. The kinds of messages we can send are very specific, namely note on messages (comprising a MIDI note number and a velocity value), note off messages, control change messages (comprising a controller number and its new value), and program change messages (to change a device's patch). Other specialized kinds of MIDI messages may be encountered too, such as System Exclusive (SysEx) messages, which are device-specific and follow no definite standard, as well as various kinds of timing messages such as MIDI Time Code (MTC) or Society of Motion Picture and Television Engineers (SMPTE) time code. Indicative of MIDI's limitations is that, as far as the standard messages are concerned, all data sent has to be in the range 0-127 (7 bits). To communicate via MIDI, we need to physically connect some MIDI-capable device to our computer, either via DIN-5 cables and some specialized MIDI interface for older hardware, or via USB for more recent devices. When only software is concerned, we have to rely on some software utility to create and configure a virtual MIDI path between SuperCollider and the application in question. Listening to MIDI messages in SuperCollider is addressed by the MIDIFunc class, either through the generic *new method or through the specific *cc, *noteOn, *noteOff, *sysex, *program, *smpte, and *mtcQuarterFrame methods. The MIDI protocol defines 16 discrete channels of communication. Both MIDI-capable hardware and software are typically configured to send messages either to a particular one or to all of them. Likewise, we can set up an instance of MIDIFunc to listen to all channels or to some in particular; furthermore, we can configure it to only respond to messages coming from a particular node by supplying its unique source ID. As with OSCFunc, there is a *trace method available which we can use to monitor all incoming messages and find out the specifics of an individual sender. The following code registers MIDI responders for control change and system exclusive messages:

( // registering MIDI responders
MIDIIn.connectAll; // connect incoming ports
MIDIFunc.cc({ arg value, ccNumber; // listen to control change messages
    "Control Change message received !".postln;
    [value, ccNumber].postln; // do sth with the value and the controller's number
}, nil); /* nil stands for listening to any cc message coming from anywhere */
MIDIFunc.sysex({ arg data; // listen to sysex messages
    "System Exclusive received !".postln;
    [data].postln; // do sth with the message
}, nil); // listen to any message coming from anywhere
)

SysEx messages are of particular interest, as they are the only kind of MIDI message we can use to send packets of data. Such packets should always start with the hexadecimal number 0xf0 and end with the hexadecimal number 0xf7. We can test the previous responders from within SuperCollider. First, we need to make sure that some virtual MIDI port is installed and enabled on our computer. Then we can try the following code:

( // sending MIDI messages
var midi, data, sysExPacket;
midi = MIDIOut(0); // assuming a virtual midi port at index 0
midi.control(10, 34, 124); // send 124 at cc34, channel 10
data = Int8Array.fill(15, { rrand(0, 100) }); // generate data
sysExPacket = data.insert(0, 0xf0).add(0xf7); // format data as sysEx
midi.sysex(sysExPacket); // send a sysEx packet
)

Using Serial Port

It may be that we want to retrieve data from some microcontroller or other specialized hardware over some serial computer bus. How to do so largely depends on the kind of hardware we want to interface with, yet there does exist a generic SerialPort class. We first need to connect our hardware to our computer and identify its serial bus by invoking SerialPort.listDevices; then, we can proceed as follows:

( // reading bytes from some serial bus
var port = SerialPort(
	/* port path here */,
	baudrate: 9600,
	crtscts: true /* enable hardware data flow control */
);
fork {
	5.do { // read the next 5 bytes
		port.next.postln; // read next byte
		1.wait; // wait 1 second
	};
	SerialPort.closeAll; // close all ports when done
};
)



Of course, the baud rate (that is, the rate of dataflow) we have set must match that of the hardware we use. In the case of the famous (at least in DIY circles) Arduino series of microcontroller-based prototyping boards, there are specialized quarks already available that we can use instead of the SerialPort class, namely Arduino and SCPyduino. An example with the latter (remember to install it first) would look as follows:

( // polling data from Arduino
var arduino, loop;
// connect on the given port with the baud rate set to 57600
arduino = SCPyduino.new("/dev/tty.usbmodem411", 57600);
arduino.analog[0].active_(1); // activate polling on analog pin 0
loop = fork {
	1000.do { // read 1000 values from the arduino
		arduino.iterate; // sync with arduino's clock
		arduino.analog[0].value; // do sth with the data read
	};
	arduino.close; // close when done
};
)

In this example, the "/dev/tty.usbmodem411" parameter is our device's path and 57600 is the baud rate we have used. In order to make this code work, we also need to load the StandardFirmata example code onto our Arduino, which we can find in the Examples | Firmata submenu of the Arduino Integrated Development Environment (IDE), which in this case is the Arduino software we have downloaded from www.arduino.cc. We also have to make sure that the baud rate used in StandardFirmata does indeed match the one we use in SuperCollider. Firmata is a specialized library designed for fast and efficient communication with microcontrollers such as the Arduino. The Arduino quark achieves communication via the standard serial console instead. An Integrated Development Environment is an application targeting software developers and providing them with relevant facilities such as a source code editor, build automation tools, an interpreter, and a debugger.



Machine listening

So far we've examined in detail how we can acquire data from various sources. In a visualization context, however, we may encounter situations wherein we need to control some element of an animation with respect to some particular characteristic of a signal, for example, its amplitude or its frequency. Yet these kinds of information are attributes of the signal, rather than parts of it. In other words, we need something to happen not with respect to some existent data (that is, our signal in this context) but with respect to certain characteristics of a data flow. Consider that an audio signal is completely unaware of how loud it is or of what its frequency is. Remember that audio signals are merely streams of numbers and that sounds are merely fluctuations of air pressure. The reason we understand sounds as having loudness or pitch is that our auditory apparatus analyzes them and provides the brain with information on certain sonic qualities. Further, more sophisticated perceptual and cognitive processes perform additional kinds of analyses to extract as well as attribute information and meanings, so that we perceptually decipher what we hear. Likewise, we can say that a signal is periodic and has a certain frequency only if we somehow analyze it. Remember that the output of a sinusoidal wave at a frequency of 200 Hz is just a flow of numbers between ±1. The datum 200 is not part of this signal, so the only way to make something happen with respect to this number is to actually generate it by means of analyzing the audio signal for its frequency. The task of retrieving statistical and other kinds of information from audio signals is generally referred to as machine listening.



Machine listening is, in essence, the analysis of signals in order to generate information that represents certain of their qualities. To properly understand and evaluate the kind of information we may get from some machine listening algorithm, it is worth briefly distinguishing the different kinds of properties a signal may have. Acoustic properties refer to physical properties of sound, and consequently of audio signals, particularly qualities such as amplitude, frequency, and spectrum. Psychoacoustic properties refer to low-level perceptual properties of audio signals, such as loudness, pitch, and timbre. Psychoacoustic properties are fundamentally different from their acoustic equivalents, the former being intrinsically linked to perception. For instance, loudness refers to how loud something sounds, while amplitude stands for the actual amount of displacement of the air particles that occurs in physical space. It has to be stressed that the various psychoacoustic qualities do relate to and depend upon the acoustic properties of sound; nonetheless, the relationships are very complex and not as straightforward as they may appear to be. For example, loudness does not depend exclusively upon amplitude; it also depends upon frequency, spectral content, and even upon a series of psychological and other factors.

We can also speak of several families of higher-level perceptual properties, such as musical ones (scale, tonality, rhythm, genre, expressivity, and so on), cognitive ones (semantics, symbolic signification, and so on), and psychological ones (irritability, entertainability, ability to cause relaxation, and so on). Again, such properties may depend on or relate to the acoustic or psychoacoustic qualities of sound to some extent; yet the inter-relationships may be extremely complex and, in certain cases, not even fully understood. Machine listening algorithms are not limited to simple acoustic properties of a signal; sophisticated algorithms have been proposed for more complex problems as well, such as musical style recognition and rhythm extraction. As far as musical qualities are concerned, the more specialized term music information retrieval is sometimes encountered too. In SuperCollider we can easily perform basic audio analyses to retrieve information on both physical and certain perceptual properties of audio signals using the available machine listening UGens, the most important of which will be discussed next. Music Information Retrieval (MIR) is an interdisciplinary field of science dealing with how to retrieve and classify information from music.



Tracking amplitude and loudness

In Chapter 1, Scoping, Plotting, and Metering, we briefly demonstrated how to use the Amplitude UGen to track peak amplitude linearly. We can also use the Peak UGen, which returns the maximum peak amplitude every time it receives a trigger, or the PeakFollower UGen, which smoothly decays from the maximum value over some specified decay time. To track the minimum or the maximum value of a signal we can use the RunningMin or RunningMax UGens. To track Root Mean Square (RMS) amplitude, we can use the RunningSum UGen. The following example shows how to use these UGens:

( // tracking amplitude
{
	var sound = SinOsc.ar(mul: LFNoise2.kr(1).range(0, 1)); // source
	RunningSum.rms(sound, 100).poll(label: 'rms'); // rms
	Amplitude.kr(sound).poll(label: 'peak'); // peak
	Peak.kr(sound, Impulse.kr(1)).poll(label: 'peak_trig'); // peak when triggered
	PeakFollower.kr(sound).poll(label: 'peak_dec'); // peak with decay
	RunningMin.kr(sound).poll(label: 'min'); // minimum
	RunningMax.kr(sound).poll(label: 'max'); // maximum
	Out.ar(0, sound); // write to output
}.play;
)

Sometimes we may want something to happen when a signal is silent, or at least when it is below a certain level. In such cases we can use DetectSilence. There also exists a Loudness UGen, which estimates loudness in sones (the unit of loudness). It is designed to analyze spectra and requires an FFT window of size 1024 for sampling rates of 44100 or 48000, and of size 2048 for 88200 or 96000, respectively. For example:

( // track loudness
{
	var sound, loudness;
	sound = SinOsc.ar(LFNoise2.ar(1).range(100, 10000),
		mul: LFNoise0.ar(1).range(0, 1)); // source
	loudness = FFT(LocalBuf(1024), sound); // sampling rates of 44.1/48K
	// loudness = FFT(LocalBuf(2048), sound); // sampling rates of 88.2/96K
	loudness = Loudness.kr(loudness).poll(label: \loudness);
	Out.ar(0, sound);
}.play;
)



Tracking frequency

As far as frequency is concerned, there are a number of relevant UGens, each of them implemented differently. The simplest one is ZeroCrossing, which estimates the frequency by keeping track of how often an input signal crosses the horizontal axis, which represents 0 in terms of amplitude. Pitch is a more accurate frequency tracker, which also allows for some tweaking. Note that, regardless of its name, it performs frequency tracking rather than pitch tracking, the latter also depending on a series of other factors. More advanced frequency trackers are Tartini (which is based on the method used in the homonymous open source pitch tracker) and Qitch (which has to be used along with one of the special auxiliary WAV files it is distributed with). Tartini and Qitch are not included in the standard SuperCollider distribution but in the SC3plugins extension bundle (available at http://sc3plugins.sourceforge.net/). Pitch, Tartini, and Qitch all return an array of instances of OutProxy containing both the estimated frequency and a flag of 1 or 0 to denote whether they successfully tracked some frequency or not. When attempting to track frequency we should always bear in mind that, frequency tracking being a complicated process, not all trackers will work equally well for all kinds of signals. For example:

( // frequency tracking
var qitchBuffer = Buffer.read(Server.default,
	"/Users/marinos/Library/Application Support/SuperCollider/Extensions/SC3plugins/PitchDetection/extraqitchfiles/QspeckernN2048SR44100.wav"); // path to the auxiliary wav file for Qitch
{ // a complex signal
	var sound = Saw.ar(LFNoise2.ar(1).range(500, 1000).poll(label: \ActualFrequency))
		+ WhiteNoise.ar(0.4);
	ZeroCrossing.ar(sound).poll(label: \ZeroCross);
	Pitch.kr(sound).poll(label: \Pitch);
	Tartini.kr(sound).poll(label: \Tartini);
	Qitch.kr(sound, qitchBuffer).poll(label: \Qitch);
	Out.ar(0, sound ! 2);
}.play;
)

For this signal, Qitch is probably the most reasonable choice, judging by the output on my machine:

ActualFrequency: 864.222
ZeroCross: 6368.27
Pitch: 171.704
Pitch: 1
Tartini: 95.0917
Tartini: 1
Qitch: 845.466
Qitch: 1


Timbre analysis and feature detection

Timbre is a psychoacoustic quality, and refers to what makes sounds distinct even if they have the same loudness and pitch. Of course, this is a broad oversimplification of a very complex subject; in reality there isn't even a consensus on what exactly timbre stands for. While timbre has been proposed to depend on several qualities, in a machine listening context timbre recognition is almost exclusively based on analyzing spectra. Herein, we will focus on how to broadly detect several spectral features, rather than timbre per se, which is a rather indefinite quality. By the term feature we refer to anything that could be characteristic of a signal's spectrum. In SuperCollider there is a plethora of relevant UGens, both in the standard distribution and in extension libraries. Among the most useful are SpecCentroid and SpecFlatness, used to calculate the spectral centroid and the spectral flatness, respectively. The former roughly stands for the most perceptually prominent frequency range in our signal, while the latter is an indicator of how complicated our signal is (for example, it would be 0 for a sinusoid and close to 1 for white noise). The SpecPcile UGen calculates the cumulative distribution of a spectrum and, given a percentile of spectral energy as an argument, returns the frequency below which the given percentile of the spectral energy lies. In the SC3plugins extension bundle we will also find the FFTCrest UGen, which calculates the spectral crest of a signal (in short, an indication of how flat or peaky it is), and the SensoryDissonance UGen, which attempts to calculate how dissonant a signal is (with 1 being totally dissonant and 0 being totally consonant). The FFTSpread UGen measures the spectral spread of a signal, that is, how wide or narrow its spectrum is, and FFTSlope calculates the slope of the linear correlation line derived from the spectral magnitudes. Finally, the Goertzel UGen calculates the magnitude and phase at a single specified frequency. For example:

( // feature extraction
{
	var sound = SinOsc.ar(240, mul: 0.5)
		+ Resonz.ar(ClipNoise.ar, 2000, 0.6, mul: SinOsc.kr(0.05).range(0, 0.5))
		+ Saw.ar(2000, mul: SinOsc.kr(0.1).range(0, 0.3)); // a complex signal
	var fft = FFT(LocalBuf(2048), sound);
	SpecCentroid.kr(fft).poll(label: \Centroid);
	SpecFlatness.kr(fft).poll(label: \Flatness);
	SpecPcile.kr(fft, 0.8).poll(label: \Percentile);
	FFTCrest.kr(fft, 1800, 2200).poll(label: \Crest);
	SensoryDissonance.kr(fft).poll(label: \Dissonance);
	Out.ar(0, sound ! 2);
}.play;
)



Onset detection and rhythmical analysis

There are some specialized UGens we can use to perform beat tracking, which is to analyze the rhythmical characteristics of a signal. BeatTrack, for example, returns an array comprising the currently detected tempo as well as impulse ticks at quarter, eighth, and sixteenth note rates. Note that it takes about six seconds before it starts predicting. A beat tracker similar in spirit is BeatTrack2 which, however, follows a different approach internally. An example with BeatTrack is as follows:

( // beat tracking example
var buffer = Buffer.read(Server.default,
	"/path/to/some/audio/file/with/prominent/rhythm"); // use an audio file with a prominent rhythm here
{
	var sound = PlayBuf.ar(1, buffer, BufRateScale.ir(buffer), loop: 1) * 4; // loop through the file
	var fft = FFT(LocalBuf(512), sound);
	var analysis = BeatTrack.kr(fft); // analyze it
	var tempo = analysis[3].poll(label: \EstimatedTempo); // print the estimated tempo
	var beat = Decay.kr(analysis[1], 0.2) * WhiteNoise.ar(0.1); // clicks produced on the right channel
	Out.ar(0, [sound, beat]);
}.play;
)

SuperCollider also features a series of UGens we can use to perform onset detection. Onset detectors, generally speaking, have the ability to spot changes. Depending on both the kind of signal and the algorithm we use, these changes may signify tonal, chordal, morphological, spectral, or other kinds of permutations. The most important time-domain onset detectors are the Coyote (to be found in the SC3plugins extension bundle) and Slope UGens, the first of which performs a sophisticated amplitude analysis while the latter measures the rate of a signal's change per second. Slope returns an array of two instances of OutProxy. Several specialized onset detectors are available too, for example, the PV_HainsworthFoote, PV_JensenAndersen, and Onsets UGens, the latter specializing in musical signals. We can see all of them in action in the following example; note that some of these UGens output triggers, which won't be visible if polled directly, therefore we have them trigger a nominal signal instead.

( // onset detection example
Server.default.waitForBoot({
	{ // a complex signal
		var sound, sequence, fft, analysis;
		sequence = Demand.kr(Impulse.kr(2), 0,
			Dseq([[250, 300], [420, 650], [100, 150], [1000, 2300]], inf));
		sound = Saw.ar(sequence, mul: 0.2)
			+ Resonz.ar(ClipNoise.ar(), sequence, 0.3, 1);
		sound = sound * EnvGen.ar(Env([0, 1, 0], [0, 0.5]), Impulse.kr(2));
		Coyote.kr(sound).poll(label: \Coyote);
		Slope.ar(sound).poll(label: \Slope);
		fft = FFT(LocalBuf(512), Mix.new(sound));
		analysis = PV_HainsworthFoote.ar(fft, 0.5, 0.5, threshold: 1);
		K2A.ar(1).poll(K2A.ar(analysis), label: \HainsworthFoote);
		analysis = PV_JensenAndersen.ar(fft, threshold: 0.2);
		K2A.ar(1).poll(K2A.ar(analysis), label: \JensenAndersen);
		analysis = Onsets.kr(fft, threshold: 1);
		K2A.ar(1).poll(K2A.ar(analysis), label: \Onsets);
		Out.ar(0, sound);
	}.play;
})
)

Basic mappings

Having discussed both how to acquire data and how to extract information out of audio signals, it is time to discuss how we can map them to other ranges, so that we can use them to control visual elements or, in general, other parts of our programs. As far as mappings are concerned, we can distinguish between a series of tasks that are likely to be involved, namely: generate, acquire, store, probe, preprocess, and finally encode and distribute. Depending on the nature of each project and the kind of data involved, some of these steps might not be applicable or may be extrinsic to SuperCollider, and in exceptional cases more steps may be involved. We have already talked extensively about how to generate data by means of analyzing signals, as well as how to acquire data from various sources and, while doing so, we have also demonstrated ways in which we can store it. Before we elaborate on the later stages, we need to briefly discuss a fundamental schism in SuperCollider's architecture, namely the one between the Server (that is, the audio synthesis engine) and the Client, or sclang (the SuperCollider programming language). These two parts of SuperCollider are largely independent, with information exchange carried out internally using the OSC protocol. We can exchange data in both directions; therefore we will examine both client-side and server-side mapping techniques, so that we are in a position to select the most efficient strategy in every context. Note that while sending data from the server to the client via OSC is generally an acceptable practice, the poll method, as used in the previous examples, should only be employed for testing purposes and never in final projects, since it is a CPU-intensive task.
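As an illustration of the acceptable practice just mentioned, the following is a minimal sketch of server-to-client communication using SendReply instead of poll; the '/amplitude' address and the 20 Hz reply rate are assumptions made for the example:

( // server-to-client communication via OSC instead of poll
OSCdef(\ampListener, { arg msg;
	msg[3].postln; // the amplitude value; drive visual elements with it here
}, '/amplitude');
{
	var sound = SinOsc.ar(mul: LFNoise2.kr(1).range(0, 1));
	SendReply.kr(Impulse.kr(20), '/amplitude', Amplitude.kr(sound)); // 20 messages per second
	Out.ar(0, sound);
}.play;
)

Unlike poll, this lets us choose the rate at which values reach the client, and the receiving function can do arbitrary client-side work with them.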



Preparing and preprocessing data on the client side

It is probably the norm, rather than the exception, that a dataset will require some sort of preparation before we can go on and use it. This may happen for several reasons. For example, our dataset might not be in the right format; it may contain strings or booleans when what we really need are floats. It is also very typical that data has been corrupted, resulting in invalid entries that need to be filtered out. It is also quite often the case that we need to compensate for software/hardware inaccuracies or for random environmental events that might have biased our data; think, for example, of some machine listening algorithm that failed to identify pitch in somebody's cough, or of some hardware measurement device that was affected by somebody's cellphone or by electrical induction. More importantly, it is quite often the case that the kind of information we need can only be acquired if we somehow analyze the data, for technical or other reasons. Imagine, for example, that we are interested in tracking frequencies only within the range 400-800 Hz: the only way to do this is to first track frequencies and then filter out those that are outside our range of interest. Of course there is no predefined way to prepare our data, but there are some fundamental methodologies upon which we can build all kinds of complex manipulations. The first thing to consider is that we need to perform data tests, so that we know if and what kind of invalid data there may be. Tests can be either inclusive, that is, testing whether every unique entry is valid, or exclusive, that is, testing whether there are invalid entries in our dataset. The former is safer but the latter may be faster, as we don't have to test every single element. Of course, when a real-time dataflow is concerned, we will have to adapt these techniques so that they are meaningful in this context. At a more rudimentary level, we will probably have to perform tests to find out in what form our data has arrived, as this is sometimes unknown. Consider, for example, the cardiac arrhythmia data that we retrieved from the Internet at the beginning of this chapter. Let's perform some basic tests to see in what form the data has arrived:

// probing a dataset
~data = CSVFileReader.read("arrhythmia.data".absolutePath, true, true); // read data
~data.class; // the dataset is an instance of Array
~data.collect(_.species).as(IdentitySet); // containing other Arrays
~data.size; // 452 of them actually
~data.flatten.collect(_.species).as(IdentitySet); // each of which contains Strings



Now, having determined the structure of our dataset, we can decide what kind of transformations we need to perform before we can actually use the data in a specific project. Indeed, we will most likely have to convert it into a mono-dimensional array of numbers, rather than strings. Apparently, the kinds of tests we need to do largely depend on what we want to do subsequently with our data. In this example, we demonstrated how to use generic methods such as size, species, and as to perform basic tests. collect(_.species) is just a shortcut for collect{arg item; item.species}. collect merely evaluates the given function for each element and returns a new collection containing the results. A very useful trick is to then convert the dataset into an instance of the IdentitySet class, this way removing all duplicate entries. In this particular case, we do so in order to see what kinds (species) of objects our dataset consists of, but in a different context we could have done so to probe the different kinds of elements a dataset consists of. More useful testing methods are inherited by all collections from their base class Collection, such as includes (tests whether some object is included in the collection), includesAny (tests whether any of a series of objects is included), includesAll (tests whether all the given objects are included), occurrencesOf (returns the number of occurrences of an object in the collection), any (answers whether a given instance of Function returns true for at least one item in the collection), every (answers whether a given instance of Function returns true for every item in the collection), and count (answers the number of items for which a given instance of Function returns true). As far as manipulating collections is concerned, we can apply almost any possible transformation by using collect, select, or reject (the latter two evaluate the given instance of Function for every item and return a new collection consisting of only those items for which it returned true, for select, or false, for reject), and, of course, any other method that may be useful. To see how this works in practice, we will now assume that we are only interested in non-repeating, non-zero entries of our arrhythmia dataset, and that we want to omit (rather than substitute) all invalid entries. We could proceed as follows:

// filter irrelevant data
~data = ~data.flatten.collect(_.asInteger); // convert to a mono-dimensional array of Integers
~data = ~data.select(_ != 0); // remove zeros
~data = ~data.as(IdentitySet).asArray; /* convert to IdentitySet and back to Array to filter out duplicates */
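To make the testing methods mentioned above concrete, here are a few quick illustrations on small hypothetical arrays (the expected results are given as comments):

// quick illustrations of the Collection testing methods
[1, 2, 3, 2].includes(2); // true
[1, 2, 3].includesAny([5, 3]); // true: 3 is included
[1, 2, 3].includesAll([1, 5]); // false: 5 is missing
[1, 2, 3, 2].occurrencesOf(2); // 2
[1, 2, 3].any(_ > 2); // true: at least one item is greater than 2
[1, 2, 3].every(_ > 0); // true: all items are greater than 0
[1, 2, 3].count(_.odd); // 2: the number of odd items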



Preparing and preprocessing data on the server side

Preprocessing on the server side is very different in philosophy. Firstly, we will not normally need to probe our data to find out what their structure is and what they consist of, since every kind of information handled by the server is already a signal of some sort. Indeed, the only kind of data we may acquire on the server side is data retrieved by analyzing signals. Secondly, all operations that we can perform on signals are exclusively via UGens or via certain operations (which technically redirect to UGens themselves). That being said, the kinds of manipulation we can do are far from fundamental; there are specialized objects for all kinds of simple and more sophisticated operations. As far as tests are concerned, there are operations or UGens that will return either 1 or 0 with respect to some characteristic of the input. To test some signal against some range of values, we may use the InRange UGen or the various comparison operations.
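As a minimal sketch of such server-side tests (the 400-800 Hz range echoes the earlier client-side example; all other values are assumptions):

( // server-side testing with InRange and comparison operators
{
	var freq, inRange, above;
	freq = Pitch.kr(SoundIn.ar(0))[0]; // track the frequency of the input signal
	inRange = InRange.kr(freq, 400, 800); // 1 when within 400-800 Hz, else 0
	above = freq > 1000; // comparison operators also return 1-or-0 signals
	inRange.poll(label: \inRange);
	above.poll(label: \above);
	Silent.ar(1); // no audible output needed here
}.play;
)

Such 1-or-0 signals can then gate, trigger, or scale other signals directly on the server, without any round trip to the client.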

if (numLevels > 0) { // if there are more levels to go
	var childrenPoints, childrenRadius;
	// calculate the children positions for each of the branches
	childrenPoints = points ! numBranches; // points are the same
	/* for each level generate all branches and add them to the fChildren array */
	numBranches.do { arg i;
		fChildren = fChildren.add(fractalFunc.(numLevels - 1, numBranches, childrenPoints[i], colors));
	};
} { // else set children to nil
	fChildren = nil;
};
// create the fractal object
fractal = (
	children: fChildren, /* an array with the children (all of them fractal objects, too), or nil */
	branches: numBranches, // how many branches
	originalPoints: points,
	counter: 0,
	animatePoints: nil, // to be updated by animate
	colors: colors,
	animate: { arg self, speed = 0.01, centerPoint;
		var localCounter;
		self.counter = self.counter + speed; // increment counter
		localCounter = self.counter.fold2(1).abs;
		// set animate points
		self.animatePoints = Array.fill(self.branches, { arg i;
			self.originalPoints * (localCounter.sin);
		});
		Pen.scale(0.99, 0.99); // scale the whole fractal
		self.branches.do { arg i; // for each branch
			Pen.moveTo(self.animatePoints[i][0]); // start at the first point
			Pen.scale(0.99, 0.99); // scale subsequent segments
			Pen.rotate(i / 2pi, centerPoint.x, centerPoint.y); // rotate subsequent segments
			Pen.strokeColor_(colors[0].blend(colors[1], localCounter)); // gradually move to the target color
			// create segments to all subsequent points
			self.animatePoints[i].do { arg point;
				Pen.lineTo(point);
			};
			Pen.stroke; // draw strokes only
		};
		// animate children
		if (self.children.notNil) { // if there are children
			// draw all of their branches
			self.children.do { arg item;
				item.animate(speed, centerPoint);
			};
		};
	};
);
fractal; // explicitly return the fractal
};
)

In the following example, we will use an instance of Routine to create new fractal objects using our factory, with respect to random data retrieved in real time from http://www.random.org. We will ask for different sets of data, so that we determine the points and colors variables of every fractal independently. The same data used for determining the points variable will also be encoded accordingly to create glissando trajectories in the audio part. We will repeat this process every 15 seconds, so that we constantly get new fractals.



Our instance of Routine would appear as follows:

// retrieve and encode data
loop = fork { loop {
	{ // defer
		var data, points, colors;
		// retrieve points
		"curl \"http://www.random.org/integers/?num=10&min=1&max=640&col=1&base=10&format=plain&rnd=new\" > data.temp".unixCmd({
			// this function is called when the process has exited
			data = FileReader.read("data.temp", delimiter: '\n');
			data = data.collect(_.asInteger); // convert to Integer
			data = data.reshape(6, 2); // reshape as pairs
			points = data.collect(_.asPoint); // convert to Point
			"rm data.temp".unixCmd; // delete the temporary file
			// map points as frequencies for our Synths
			points.do { arg point;
				var freqA, freqB;
				freqA = point.x.linlin(0, 640, 100, 1000); // linear mapping
				freqB = point.y.linlin(0, 640, 100, 1000); // linear mapping
				sound.free; // first stop the previous synth
				sound = Synth(\gliss, [\freqA, freqA, \freqB, freqB, \dur, 15]);
			};
			// retrieve colors
			"curl \"http://www.random.org/integers/?num=6&min=1&max=255&col=1&base=10&format=plain&rnd=new\" > data.temp".unixCmd({
				// this function is called when the process has exited
				data = FileReader.read("data.temp", delimiter: '\n');
				data = data.collect(_.asInteger); // convert to Integer
				data = data.reshape(2, 3); // reshape as triples
				colors = [
					Color.new255(data[0][0], data[0][1], data[0][2]),
					Color.new255(data[1][0], data[1][1], data[1][2])
				];
				"rm data.temp".unixCmd; // delete the temporary file
				// create a new fractal (only here are both points and colors guaranteed to be set)
				fractal = ~spiralFractalFactory.value(4, 3, points, colors);
			});
		});
	}.defer;
	15.wait; // repeat the process every 15 seconds
} };


And then we simply invoke our new fractal object's animate method from within drawFunc. Note that if we accessed www.random.org synchronously, our program would freeze until all the data had been retrieved, which would cause glitches in both our animation and our sound synthesis. This is why we use the asynchronous unixCmd method here to retrieve the data: we ask the shell to download the data and save it to some temporary file. Once the command has been executed, the provided instance of Function is evaluated, wherein we read the data from the temporary file into the data variable, preprocess and encode it accordingly, and finally delete the temporary file. The full code for the previous example can be found online in the book's code bundle. A still from the resulting fractal animation is shown in the following screenshot:



Summary

In this chapter, we have demonstrated how to implement advanced visualizers by means of combining several techniques and methodologies previously introduced in this book. The examples covered a wide range of scenarios, including how to achieve more complex waveform scoping, how to implement a spectrogram, how to visualize patterns and musical information using sprites and kinematic structures, and how to implement data-driven fractals and particle systems. In the next chapter, we will deal with more advanced topics such as automata and complex encodings, and introduce ourselves to probability distributions, textual parsing, and neural networks, among others.


Intelligent Encodings and Automata

This chapter aspires to introduce and familiarize the reader with more advanced concepts, such as statistical data analyses, textual parsing, and ways to implement intelligent encodings. We will, further, examine the concept of the automaton and demonstrate how we can implement autonomous systems that generate audiovisual structures on their own. Yet, it has to be emphasized that this chapter serves primarily as a pragmatic introduction rather than a formal treatise on the aforementioned. Even though I have done my best to ensure that the examples are indicative of both the complexities and the potential of the topics discussed, those interested in an in-depth discussion of the technical challenges involved in any of these areas should refer to more specialized resources. The topics that will be covered in this chapter are as follows:

• Statistical analysis and probability distributions
• Textual parsing
• Intelligent encodings
• Neural networks
• Cellular automata
• The Game of Life


Analyzing data

In Chapter 6, Data Acquisition and Mapping, we discussed how to acquire data as well as how to generate it by means of machine-listening techniques. It is also often the case that we need to analyze non-audio signals or data collections of some sort. However, data analysis stands for an infinite range of operations we may perform on some collection; additionally, it is often the case that we blindly probe a collection for potentially interesting patterns, rather than looking for something in particular. Dealing with such cases in real-life projects would be overwhelming if there were no generalized methodologies to serve as a starting point. Fortunately, there is already a science dedicated to the systematic study of the collection, organization, analysis, interpretation, and presentation of any kind of data, namely statistics. As such, it provides us with a very sophisticated background for performing analyses and feature extraction of various sorts. By applying statistical analysis to our datasets, we can easily interpret our data with respect to some desired feature (as long as we can mathematically formalize the latter), as well as probe it for interesting behavior by means of calculating some standard measures. We will now discuss the most fundamental concepts and techniques, which we can combine to achieve even more complicated analyses.

Statistical analyses and metadata

Let us introduce ourselves to some fundamental statistical notions and measures. The mode of a data collection is the value with the highest probability or, in other words, the value that appears most often. The opposite of the mode is usually referred to as the least repeated element. Note that, paradoxically, in a list of numbers wherein no number is repeated and all the values have equal chances of appearing, the mode is also the least repeated number. The head of a dataset stands for those values that appear quite often, and the tail represents the rest. The mean is the average of the data collection in question. The median is the value that separates the higher from the lower half of the dataset or, in other words, the "middle" value of the dataset. The range is simply the distance between the lowest and the highest number. Since the latter will be very misleading if our dataset includes just a couple of very big or very small numbers, the interquartile range (usually abbreviated as iqr), defined as the distance between the upper and the lower quartile, has also been introduced. Variance stands for the average of the squared differences from the mean. Standard deviation, or σ (the Greek letter sigma), is the square root of the variance and hence another measure of dispersion. Of course, most of these measures are meaningful only for numerical data. In the following code, we will demonstrate how to calculate them for our arrhythmia dataset. Note that, apart from the standard select and reject instance methods, we also use maxItem and minItem, which return the item that gives either the maximum or the minimum result, respectively, when passed to the supplied function, as shown in the following code:

( // calculate statistical metadata
var data, mode, leastProbableNumber, head, tail, mean, median, range, iqr, variance, deviation;
// first load and prepare our dataset
data = CSVFileReader.read("arrhythmia.data".absolutePath, true, true); // read from file
data = data[0].collect(_.asInteger); // consider just a chunk and convert its elements to Integers
data = data.select(_ != 0); // remove zeros
// calculate metadata
mode = data.maxItem({ arg item; data.occurrencesOf(item) });
("Mode is: " + mode).postln;
leastProbableNumber = data.minItem({ arg item; data.occurrencesOf(item) });
("Least probable number is: " + leastProbableNumber).postln;
head = data.select { arg item; data.occurrencesOf(item) >= 6 }; // only those values that appear at least 6 times
("Head is: " + head.as(IdentitySet)).postln;
tail = data.reject { arg item; data.occurrencesOf(item) >= 6 }; // values that appear less than 6 times
("Tail is: " + tail.as(IdentitySet)).postln;
mean = data.sum / data.size; // the sum of all data divided by the size of the dataset
("Mean is: " + mean).postln;
median = data.sort[data.size div: 2]; // the 'middle' element when the array is sorted
("Median is: " + median).postln;
range = data.maxItem - data.minItem; // range
("Range is: " + range).postln;
iqr = data.at(((data.size/4) .. ((data.size*3)/4))); // keep only the second and the third quartiles
iqr = iqr.maxItem - iqr.minItem; // calculate the iqr
("Interquartile range is: " + iqr).postln;
variance = (data.collect { arg item; (item - mean).squared }).sum / data.size; // calculate variance
("Variance is: " + variance).postln;
deviation = variance.sqrt; // calculate deviation
("Deviation is: " + deviation).postln;
)

Calculating these measures essentially results in the generation of metadata (descriptive metadata, to be precise), which is a fundamental concept for statistics and data analysis in general. Metadata stands for data that represents abstract characteristics, properties, or attributes of other data, and which usually originates from the analysis of other data.


Probabilities and histograms

Two very important statistical notions are those of probability and of probability distribution. Probability is a measure of how likely it is for an event to happen. When dealing with discrete datasets, an event would be to retrieve the next element of a dataset. We can easily calculate probabilities by simply dividing the number of occurrences of a specific element within our dataset by the latter's total size. Graphs of elements (horizontal dimension) versus their occurrences within a dataset are termed histograms and are extremely useful in providing an overview of the probability distribution of all the elements in a dataset. Naturally, nonexistent elements are represented by a probability of zero. In the following example, we will calculate a probability distribution as an array with as many indices as the range of possible values in the original dataset, each entry holding how many instances of the corresponding value the dataset contains. Of course, when we are dealing with negative values, we need to bias everything accordingly and then compensate for it on the graph, using the Plotter class's domainSpecs instance variable to set a new horizontal range. As of this writing, however, this approach will fail to properly set up the values due to an internal bug that is to be fixed in some future version of SuperCollider.

( // calculate a histogram
var data, histogram, histSize;
data = "curl \"http://www.random.org/integers/?num=1000&min=-100&max=100&col=1&base=10&format=plain&rnd=new\"".unixCmdGetStdOutLines; // retrieve random numbers in the range (-100,100) from random.org
data = data.collect(_.asInteger); // convert to integers
histSize = data.maxItem - data.minItem + 1; // calculate the size
histogram = Array.fill(histSize, { 0 }); // as many elements as the range of values we are interested in
data.do({ arg item;
	var count, histoIndex;
	histoIndex = item + data.minItem.abs; // compensate for negative items
	count = histogram.at(histoIndex); // read the previous value
	histogram.put(histoIndex, count + 1); // increment it
});
histogram.plot().domainSpecs_([-100, 100, \lin, 1].asSpec); // make a histogram
)



In probability theory, independent events are those that are not affected in any possible way by other events; for instance, any toss of a coin has exactly 50 percent probability of being either heads or tails. We can also speak of joint probability, that is, the probability of more than one event occurring together, for example, the probability that the next three items retrieved from a set will have particular values. We can calculate joint probabilities of independent events simply by multiplying the probabilities of each individual event together or, in mathematical notation, P(A and B) = P(A) x P(B), where P(A) stands for the probability of the event A. Likewise, dependent events are events that depend on other events. For example, when we are iterating through a dataset, rather than randomly asking for numbers out of it, the probability of an item having a certain value does not depend solely on the number of its occurrences within the dataset, but also on how many times this value has already been retrieved and how many items are left in the dataset; in principle, we would have to recalculate its probability for the remaining dataset before we can calculate the actual probability. In the case of dependent events, we can speak of conditional probability, which stands for the probability of an event given some other event. In mathematical terms, we can calculate the probability of A given B as P(A | B) = P(A and B) / P(B). By means of these simple rules, we can calculate the probability of complicated events and implement algorithms that target very specific cases. However, always bear in mind that probabilities are just indicators and at times can be misleading: it could be that the next element in a dataset is one with a probability of only 0.1 percent.
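By way of illustration, the following snippet calculates such probabilities for a small hypothetical dataset; all values are made up for the example, and the expected results are given as comments:

( // probabilities over a small hypothetical dataset
var data, pA, pB;
data = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4];
pA = data.occurrencesOf(3) / data.size; // P(A), retrieving a 3: 0.3
pB = data.occurrencesOf(4) / data.size; // P(B), retrieving a 4: 0.4
("P(A and B), independent retrievals: " + (pA * pB)).postln; // 0.12
// a dependent case: drawing without replacement changes the odds
("P(second 3 | first 3): " + ((data.occurrencesOf(3) - 1) / (data.size - 1))).postln; // 2/9
)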

Dealing with textual datasets

So far, we have only dealt with numerical datasets. It is, nonetheless, quite common to deal with certain kinds of non-numerical data, such as text-related data. Depending on the specifics of each application and on what kind of information we are interested in extracting, dealing with textual datasets can be anything from extremely simple to overwhelmingly complicated. For instance, we can easily calculate how probable it is for a certain string to appear by counting its occurrences in a dataset, and then we can even calculate probability distributions or map strings to audio synthesis parameters. Yet it would be extremely challenging, if not completely impossible with current technology, to automatically synthesize the abstract of this chapter, even if its contents were available as a String object. Typically, performing sophisticated tasks with text involves stages such as lexical analysis, syntactical analysis (or parsing), and semantic analysis, which are complicated enough to be the subject of dedicated books and hence impossible to discuss in depth herein.
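The simple case, counting string occurrences, is straightforward; a minimal sketch with a made-up sentence:

( // counting occurrences of a string in a textual dataset
var text, words, probability;
text = "the snake met the other snake and ignored the other snake";
words = text.split($ ); // split into words on spaces
probability = words.occurrencesOf("snake") / words.size; // relative frequency
("Probability of 'snake': " + probability).postln; // 3/11 here
)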



Let us consider a quite complicated, albeit very useful, syntactical analysis problem. Assuming that the whole text of this chapter is stored in some plain text file, how can we analyze it so that we extract only those blocks of text that are valid SuperCollider code, and later evaluate them at will? Given that SuperCollider is able to both parse and lexically analyze text formatted as valid SuperCollider code, all we need to do is analyze the text, identify such blocks, and encapsulate them as individual String objects; then, we can simply invoke their interpret instance method when we need to evaluate them. To actually implement such an algorithm, we need to describe, in a computer-understandable way, what makes a code block different from irrelevant text. For our implementation herein, we will scan the text until we identify a parenthesis followed by a blank character and a comment line delimiter (that is, a '( //' token); then, all we need to do is find the match of this parenthesis, which signifies the end of the block. This is not necessarily a prerequisite for all valid code in SuperCollider; however, throughout this chapter, we have followed the convention that all standalone examples are formatted this way. We can implement a basic parenthesis-matching algorithm if we simply increment a counter for every opening parenthesis and decrement it for every closing one. When the counter equals zero, we know that we have found the ending of the code block in question. There is still a problem, though, since we have already used the '( //' token in this chapter outside the context of a code block (like in this very sentence), and since, within the code examples, parentheses could be (and actually are, in this case) contained within instances of String, Symbol, or Char, or within comments, where they are neither necessarily matched nor signify the opening of a block of code. For example, in this chapter we have intentionally enclosed those off-code-block appearances in quotes and made sure that the in-code ones are either matched or preceded by a quote or a $ symbol so that, with the addition of some simple rules, we can safely ignore them. The code is as follows:

( // extract and evaluate code from a text file
var file, path, text; // used to read the text from file
var cues; // initial positions of all occurrences of '( //'
var chunks; // array with chunks of text containing potential code
var code; // an array with the parsed code
path = PathName(thisProcess.nowExecutingPath).pathOnly ++ "9677OS_08_chapterInPlainText.txt"; // the path to the file
file = File(path, "r"); // open for reading operations
text = file.readAllString; // read all text into a string
file.close; // close the file
cues = text.findAll("( //"); // find the positions of all occurrences of '( //'
cues = cues.select { arg index;
	(text[index - 1] != $') && (text[index - 1] != $")
}; // remove all invalid parentheses (ones preceded by ' or ")
(cues.size - 1).do { arg index;
	chunks = chunks.add(text[(cues[index] .. cues[index + 1])].toString); // copy all text between subsequent cues and add it to the chunks array
};
chunks = chunks.add(text[(cues.last .. text.size)].toString); // also add the last chunk
chunks.do { arg item, index; // for every chunk of text
	var counter = 0, position = 0, done = false;
	item.do { arg char, i; // for every character in the chunk
		if (done.not) { // if not done, increment the counter for every '(' and decrement it for every ')'
			case
			{ char == $( } { counter = counter + 1 }
			{ char == $) } { counter = counter - 1 };
			if (counter == 0) { position = i; done = true; }; // if the counter equals 0, the code ends at position i
		}
	};
	code = code.add(item[(0 .. position)].toString); // copy the parsed code to the code array
};
(code.size + " blocks of code have been successfully extracted from the text file").postln;
"The sixth code block will now be evaluated".postln;
code[5].interpret; // evaluate the sixth example
)

However simplistic this example is, having to correctly parse and evaluate code sent to SuperCollider from some remote client is a real-life scenario, or at least something that I personally have had to do many times in various projects. It would not necessarily make sense to do something like this, yet it is theoretically possible to even implement our own programming language within SuperCollider, such that the latter would correctly parse and translate it into equivalent code that could later be evaluated by sclang.
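To make the remote-evaluation scenario concrete, here is a minimal sketch; the '/code' address and the loopback client are assumptions, and a real project should first validate incoming strings, for instance with the parsing logic above:

( // a minimal sketch of remote code evaluation over OSC
OSCdef(\remoteCode, { arg msg;
	var codeString = msg[1].asString; // the received code as a String
	codeString.interpret; // let sclang parse and evaluate it
}, '/code');
// a client on the same machine could then send, for instance:
NetAddr("127.0.0.1", NetAddr.langPort).sendMsg('/code', "{ SinOsc.ar(440, mul: 0.1) }.play;");
)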


Advanced mappings

In Chapter 6, Data Acquisition and Mapping, we demonstrated how we can essentially map any continuous range to any other with respect to distribution curves. In this section, we will extend our arsenal of encoding techniques and learn how to implement complex and intelligent encodings.
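As a quick reminder of such range mappings (the values here are chosen purely for illustration):

// mapping 0.5 from the range 0-1 to the range 200-4000, linearly and exponentially
0.5.linlin(0, 1, 200, 4000).postln; // -> 2100
0.5.linexp(0, 1, 200, 4000).postln; // -> ca. 894.43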

Complex and intelligent encodings

There are situations wherein what we need is some kind of intelligence that will take the necessary decisions and select the appropriate process from a broader range of candidates in order to encode our data properly. To realize such mappings, we need some kind of mechanism that ensures the right decisions are taken and, of course, we need to define alternative behaviors. A simplistic way to implement decision-making algorithms is by using test mechanisms and control flow structures, such as if or case. For the following simplistic example, assume that we want to sonify floating-point numerical values in the range of 0 to 1 so that they control oscillators that are either in a low (200 to 400) or in a high (2000 to 4000) frequency register. That is to say, our destination range is not continuous. Consider this possible solution:

( // simple decision-making encoder
Server.default.waitForBoot({
	var data = Array.fill(100, { rrand(0, 1.0) }); // our dataset
	var mappingFunc = { arg datum; // the mapping function
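A minimal sketch of how the body of such a mapping function might look, assuming (as the prose suggests) that values below 0.5 select the low register and the rest the high one:

( // sketch of the decision-making mapping, thresholds assumed
var mappingFunc = { arg datum;
	if (datum < 0.5) {
		datum.linlin(0, 0.5, 200, 400); // low frequency register
	} {
		datum.linlin(0.5, 1, 2000, 4000); // high frequency register
	};
};
mappingFunc.(0.25).postln; // -> 300
mappingFunc.(0.75).postln; // -> 3000
)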
	{state.asBoolean && (neighbours > 3)} {newState = 0} // it dies from overpopulation
	{state.asBoolean.not && (neighbours == 3)} {newState = 1}; // birth
	// update
	cells[xIndex][yIndex] = newState;
};


And now, using the same instance of SynthDef as before, we can proceed with the drawFunc function like this:

.drawFunc_({ // set up the UserView's callback func
	var speed = userView.frame % 4;
	synth.set(\array, cells.flatten); // sonify
	cells.do { arg xItem, xIndex; // for each cell
		xItem.do { arg yItem, yIndex;
			if (yItem != 0) { // draw the current state
				Pen.fillColor_(Color.new255(214, 176, 49));
				Pen.addRect(Rect(xIndex * 20, yIndex * 20, 20, 20));
				Pen.fill;
			};
			if (speed == 0) { updateCell.(xIndex, yIndex) }; // calculate and draw the new state
		};
	};
});

The full code for this example can be found online at Packt's website in the book's code bundle. A still from the visualization is as follows:



Summary

In this chapter, we dealt with more advanced topics, such as how to perform statistical analyses on datasets, how to parse textual information, how to perform intelligent encodings, and how to implement one-dimensional and two-dimensional cellular automata. The latter are examples of generative systems that produce audio and video autonomously. In the next chapter, we will implement a more sophisticated generative system as a case study to demonstrate various design patterns and software architecture paradigms, so that we are in a position to properly design and implement code for more sophisticated systems.


Design Patterns and Methodologies

Being towards the end of this book, and having discussed in depth how to address the various challenges of mapping and visualizing audio and data, as well as how to implement animation and audiovisual generative programs in SuperCollider, we will now dedicate this chapter to software architecture itself. We will discuss how to design, implement, and finalize a fairly involved example. With the introduction of miscellaneous common design patterns, we will exemplify how certain problems can be rendered trivial using the appropriate techniques. Of course, there is no single approach to software design and we may have to come up with our own particular designs for certain kinds of problems; nevertheless, the methodologies examined herein are very likely to occur in real-life projects, since the kinds of problems they solve are very common. It has to be said that this chapter does not pretend to be a treatise on design patterns; rather, it follows a pragmatic approach to demonstrate how we can solve real-life problems in efficient and elegant ways by means of well-known computer science paradigms, even if in a broader or even abusive fashion. The topics that will be covered in this chapter are as follows:

• Understanding the Model-View-Controller paradigm
• Modeling objects with Environment
• Handling multiple files
• Designing patterns
• Understanding software agents
• Introducing actors


Blackboard

In this section we will examine the overall structure of our application and take decisions that impose a very particular kind of architecture on all the files and models we will be using. Prior to doing this, however, we need to discuss the details and the ramifications of our methodology.

Methodology

No programming language is an island, and SuperCollider is no exception. The chances are that, from a computer science perspective, the types of problems we are likely to encounter have already been encountered, studied, analyzed, and solved by others. More importantly, relevant algorithms, design patterns, and whole programming paradigms do exist, and we can exploit them to accelerate our creativity. It must also be said that familiarizing oneself with such techniques has a significant psychological advantage too, as it fosters a more abstract way of thinking, wherein everything is solvable once we identify the kind of structural elements we will most likely use. Therefore, it is of fundamental importance, even for a casual programmer, to be aware of several recurring design patterns and strategies, so that they may efficiently and quickly conceptualize possible solutions to the various problems they will encounter. Note that we have already encountered such patterns, for example, Factories or Handlers. The most important stage in software design is to actually conceptualize and formally describe a possible solution. Sometimes, writing code is the least significant task of a programmer these days and may only occupy a small fragment of their time. This may sound like an exaggeration; yet, consider how trivial it is to write the necessary code for a series of short, well-documented, and rigorously described algorithms, and what it takes to arrive at this stage. As far as complex projects are concerned, once we have conceptualized and formally described a possible prototype, we are halfway there. The first step in designing a project is to formalize and then study the requirements; these are descriptions, in textual form, of what our program should do (not how, but what). Without haste, we will proceed with a solid idea for a generative audiovisual application that combines several techniques we have encountered hitherto. Due to an obvious lack of inspiration, we will name our project Snakes. The requirements for the project are as follows:

• Snakes must be a generative, audiovisual work featuring real-time, computer-generated audio and video.
• An initial number of intelligent kinematic snake-like creatures must wander freely in a two-dimensional space, also producing sound.


• Each creature must have a body (its visual representation), a personality (which could be either introvert or extrovert), a distinct voice (a unique sound-synthesis algorithm), and a very basic form of artificial intelligence (an ANN-driven brain).
• The snakes must largely be unaware of their spatial environment and should only be able to sense an evenly present datastream, which we will call gestalt (being largely influenced by Greg Egan's novel Diaspora), and which in essence is merely random data retrieved in real time from www.random.org.
• The snakes, being autonomous, intelligent creatures, should be able to interpret the data they sense and decide how to move and vocalize at will.

Much like man, no snake is an island, and whenever more than one society (maybe population is a better term where snakes are concerned) is formed, one creature affects the other. Therefore, every time a snake encounters another, there are three possible outcomes with respect to their personalities:

1. Whenever an introvert snake and an extrovert snake meet, the chances are that they fall in love and give birth to a new snake.
2. Whenever two introvert snakes meet, the chances are that they will simply ignore each other, and hence nothing will happen.
3. Whenever two extrovert snakes meet, the chances are that they will fight each other, an action leading to mutual death.

However, life is strange and there is always a small possibility that things do not work out in this way, so our program should make sure that occasionally the results are different from those suggested. The next step is to carefully consider what our program has to do, based on this description, and think of possible implementation designs. We should start with small steps and follow the divide-and-conquer paradigm, which dictates that we should break up complex problems into simpler, easier-to-solve ones. The idea is to have a rough plan as soon as possible, which will lead to a working prototype. Always bear in mind that programming is an iterative process; in all probability, you will definitely revisit this design and revise it. The complexities of some problems cannot be fully appreciated, or even understood, until we actually start implementing them. Bear in mind that the code presented hereinafter was not written in this way. In fact, I typically had to revisit the original design and make minor modifications to compensate for various kinds of problems that emerged. Nevertheless, once there is an initial quasi-working design, it is easy to refine it.


Model-View-Controller

If we reflect a bit on the previously mentioned requirements, we can identify certain objects that we will most likely need to design. In the world of object-oriented design, we should treat all key nouns and subjects encountered in this text as candidate objects to model, and all the key verbs as candidate methods. Even if it is highly unlikely that the final design will be a straightforward representation of these elements, it will be very close or will make significant use of them. For example, in our case here, and even at such an early stage, studying these key elements (which are all highlighted in the preceding text) makes it quite obvious that our solution will definitely feature a snake factory, and that we will also need models for its various parts, such as the body or the brain.

It's also quite obvious that our project is quite involved; therefore, following the fundamental divide-and-conquer rule, we will have to break it into smaller parts, each of which will have a distinct role, and also implement some mechanism to permit interactions between them. Herein we will be using the famous Model-View-Controller (MVC) architecture in a rather generalized and liberal fashion. The idea is to separate our program into three distinct parts: the Model, which will comprise all the models and factories for the various structures we will use in our program; the View, which will be the program's frontend, that is, the part of it responsible for delivering video and (in our case) audio; and the Controller, which will act as a mediator between the two when needed. That being so, the Controller is both the Model's interface and its modulator.

As we will see, the View and the Model should be distinct and isolated from each other, only providing public interfaces for the Controller to bridge them. Typically, in an event-driven design, the Controller's role is twofold: to update the Model's behavior with respect to the View (with which the user interacts) and vice versa (since changes in the Model should somehow reach the user). Our project is quite different, however, since we will not allow any user interaction (other than simply closing the animation window). Hence, in our case, the roles will be slightly different. The Controller's role will be only to update the Model's internal state (such as managing the population of snakes) and to set up the View so that it visualizes this state. In any case, a question arises: who will take care of defining and initiating the Model, the View, and the Controller? This introduces us to another design pattern, namely Blackboard, which will be our main, initial process and will take care of defining and launching the various parts of our program (Model, View, Controller) in the correct order.


The following figure exemplifies what we know until now about Snakes:

[Figure: an overview of Snakes. A Blackboard process hosts the Model, the Controller (some kind of process), and the View. The Model comprises the Population (holding Snake objects, each made of a Body, a Personality, a Brain, and a VoiceDef) and the Gestalt, which communicates asynchronously with www.random.org. The View comprises the Audio Synthesis Server and a UserView. The Controller mediates between the Model and the View.]

Notice that models are represented as diamonds in the preceding figure (as opposed to simple variables, functions, and routines). In the figure, VoiceDef is used instead of Voice; this is because the unique synthesis algorithm in our requirements is apparently closer to a unique SynthDef object than to a unique Synth object, as we will discuss later.


Handling multiple files and environments

Before discussing the specifics of the various parts, we need to start with Blackboard itself and consider the various kinds of actions it will have to perform. At this point, we can safely assume that our project will most likely be large enough to qualify for being split into multiple files. It does make sense, from a conceptual viewpoint, to keep at least the code for Blackboard, Model, View, and Controller in separate files. Note that all these parts are models themselves, yet we will need exactly one instance of each in our project. Therefore, we only need to model them as singleton objects; no factory is needed; all we need to do is evaluate the proper code once. By the way, since the whole idea of loading code from another file into the body of our program is similar (at least superficially) to the #include preprocessor directive in the C/C++ languages, it does make sense (at least to me) to implement a similar class in SuperCollider, as shown in the following code:

Include { // class to evaluate code from other files
    *absolute { arg path;
        path.loadPaths; // evaluate code in file
    }
    *relative { arg path;
        var pathToLoad = Document.current.dir +/+ path;
        pathToLoad.loadPaths; // evaluate code in file
    }
}

This class does nothing more than invoke loadPaths on an absolute or relative path. However, in this chapter we will prefer to speak of including, rather than loading, since in our context it is conceptually clearer. Note that, as of this writing, Document.dir was broken in the SC IDE; while it will probably be fixed in some future update, one may use PathName(thisProcess.nowExecutingPath).pathOnly instead, as we did in the previous chapters. We can then include our various resources as shown in the following code snippet:

// include resources
Include.relative("Model.scd"); // evaluate code in Model.scd
Include.relative("View.scd"); // evaluate code in View.scd
Include.relative("Controller.scd"); // evaluate code in Controller.scd


In reality, we only plan to include the definitions of the various models and the helper functions that we will use in these files; yet it should be explicit that the Include class does not import definitions; rather, it evaluates the code in these files like a normal block of SuperCollider code. It is up to the designer (us) to enforce any particular restrictions on what kind of code should be contained therein. It should also be highlighted that, from Blackboard's point of view, once a file is included, there should be a way to access the various definitions, so that it is up to Blackboard to decide when and under what circumstances it will initiate and launch its elements. At least, that is the architecture we are trying to enforce herein.

Hitherto, we have used environment variables (the ones prefixed with ~) inside auxiliary files so that we can address these objects globally by means of their names. However, this is not the best approach, since the corresponding objects would indeed be available globally to all running programs, including the ones we try to isolate from each other (for instance, Model and View). Generally speaking, this is unacceptable, as it opens the door to all sorts of disasters (maybe not the kind of disasters that actually kill people or ruin buildings, but nevertheless disasters that may cost hours of sitting miserably in front of the screen trying to locate some bug well hidden under several nested Includes). Actually, the problem is even more complicated than this; we need to ensure that only certain parts of our program are allowed to access certain others, and we also need to ensure that we don't accidentally use a name already reserved for something else, which may occur if we are including files that include other files. In other words, we need an additional safety net so that we can control who can access the code and under what circumstances.

Starting with the naming problem, wouldn't it be nice if we could still use simple names such as draw, refresh, update, and so on (instead of more explicit ones such as someObjectDraw), yet have them within a protected scope, that is, a namespace? Among other things, this would also guarantee that only those objects that have access to this scope may access these names. We can easily achieve this using Environment, as shown in the following code:

( // Blackboard.scd
var snakesProject = Environment.new;
snakesProject.use{
    // ------------------- LOAD RESOURCES ----------------------------
    Include.relative("Model.scd"); /* Model.scd is evaluated WITHIN this
    (specific to this program) environment */
    ~model[\someElement].postln; // access elements within model
};
snakesProject[\model][\someElement].postln; /* the same, but outside the
.use structure */
)


and also in the following code snippet:

// Model.scd
~model = Environment.new; /* define a new Environment WITHIN the
current environment */
~model.use{ // create elements WITHIN the model Environment
    ~someElement = "this is an element";
}

In SuperCollider, the ~ symbol is a shortcut for currentEnvironment.at; however, in our preceding example, the currentEnvironment object for all code within Model.scd is not the default one, but a new Environment object, accessible only to the code within the Blackboard.scd file (to be precise, only to that part of the code that is within parentheses). The preceding code demonstrates the syntax for accessing the internals of the model. Note that, if need be, we can also have our code evaluated in some anonymous Environment object, using Environment.use directly (a small example follows the note below). In this chapter we will elaborate on how we can create solid and safe object models using Environment; however, it should be noted that, despite all our precautions, SuperCollider will still allow us to replace what a key holds with something totally different; therefore, under no circumstances should we do so.

What we have achieved with this small trick may not be evident immediately, but if we consider it for a minute or two, we will soon realize that it is quite impressive. First of all, we now have a file-specific namespace nested within a program-specific namespace, and we can allow more nested layers if need be. In this way, within Model.scd, we can refer to our element simply as ~someElement, without risk of name clashes, because Blackboard has to explicitly refer to it as ~model[\someElement]. So even if there were an element with the same name, for example, in View.scd (and if we follow the same architecture throughout, of course), this would only be accessible as ~view[\someElement]. More importantly, we have now achieved privacy for our Blackboard object. There is no way for anything other than Blackboard to access ~model[\someElement], unless we explicitly include it there too. In this way, we know that nothing can mess up our program, unless we allow it to do so. There is also a third, equally if not more important, gain with this architecture, which we will discuss later in this chapter.

In object-oriented programming, the singleton pattern is a design pattern that restricts the instantiation of a class to solely one object.
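As a side note, a minimal sketch of evaluating code in an anonymous, throwaway Environment (assuming ~scratch is not already defined in the enclosing environment) could look as follows:

// evaluate code within an anonymous, temporary Environment
Environment.use{
    ~scratch = [1, 2, 3].scramble; // ~scratch lives only in here
    ~scratch.postln;
};
~scratch.postln; // -> nil (undefined in the enclosing environment)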


Threads, semaphores, and guards

It does not take much for a project to involve tasks that depend upon other tasks, or even situations of asynchronous communication wherein we should wait for something even though we have no idea when it will end. Actually, in SuperCollider such cases are the norm rather than the exception. Consider that even to finalize the simplest of all programs, namely {SinOsc.ar}.play, we still have to ensure that it is evaluated once an instance of Server has been booted. Waiting until something is done before a block of code is evaluated is almost always necessary when sound is involved. Yet, even in cases wherein it is not strictly necessary, it is sometimes a nice idea. For example, in our Snakes project, it does make sense that first the View is initialized, then the Model, and then the Controller. It makes sense because the Controller is a mediator between the Model and the View; therefore, they should already exist. Whether this is strictly necessary or not depends upon the specifics of our code; it would be a nice idea, nevertheless, to always ensure that the parts of our program are initiated in this order. The same applies to the various internal elements of these structures; certain elements are likely to depend on the creation of others. In other words, we need a solid synchronization mechanism.

We have already demonstrated how we can synchronize with asynchronous events while adding SynthDefs to the Server, waiting for it to boot, reading files, invoking Unix commands, or even waiting for triggers or messages via OSC or MIDI. Now we need to discuss ways in which we can synchronize custom blocks of code with each other. We can easily do so, since SuperCollider is a multithreading environment that allows us to divide the flow of our programs into different threads of execution. Note that by multithreading, herein, we are not referring to our computer processor's multithreading capabilities (even if SuperCollider tries to take advantage of them implicitly); we are simply referring to how we can implement parallelism in our programs by means of several pseudo-simultaneous threads (we do not need to be concerned with what this may mean). Actually, this is very typical; we do it all the time in SuperCollider, and we even do more complicated things, such as sharing data between different instances of Thread. Consider that, in essence, every instance of Task or Routine is a different thread (by the way, Routine is a subclass of Thread, which is an abstract class, and Task is essentially a pausable wrapper around a Routine).


Many of those who are new to SuperCollider will typically have their threads wait for an arbitrary amount of time whenever they want them to sync with something, so as to allow it to finish its job. Apart from not being elegant at all, this approach has obvious drawbacks, since in all cases we will wait either more or less than actually needed. Luckily, there are more elegant ways around synchronization problems. Computer science formalizes two possible solutions with the Semaphore and Condition design patterns, and both are already implemented in SuperCollider, so we can use them directly. If we want to allow only a specific number of concurrent instances of Routine, we can use instances of the Semaphore class to control them (a small sketch follows the next example). More typically, however, we will want to wait for something before we proceed. We can achieve this using the Condition class. Consider the following example:

( // Sync with Condition
var condition = Condition(false);
fork{
    condition.wait; // wait for the other thread to finish before you start
    "Thread A: I'm running now, yeah!".postln;
};
fork{
    "Thread B: Let's imagine that I have to do sth asynchronously that will last for 10 seconds".postln;
    10.wait;
    condition.test_(true); // set to true
    condition.signal; // notify anyone interested
}
)
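Although Snakes will only need Condition, a minimal sketch of the Semaphore approach mentioned above could look as follows (the number of permits, 2 here, is an arbitrary choice):

( // Sync with Semaphore: at most two routines work concurrently
var semaphore = Semaphore(2); // two permits available
4.do{ arg i;
    fork{
        semaphore.wait; // acquire a permit (blocks while none are left)
        "Routine % started..".format(i).postln;
        1.wait; // pretend to do one second's worth of work
        "Routine % done.".format(i).postln;
        semaphore.signal; // release the permit
    }
}
)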

Armed with Condition, our Blackboard can be implemented as follows:

( // Blackboard.scd
var snakesProject = Environment.new;
snakesProject.use{
    // ------------------- LOAD RESOURCES ----------------------------
    Include.relative("Model.scd");
    Include.relative("View.scd");
    Include.relative("Controller.scd");
    // ---------------------- CONDITIONS -----------------------------
    // flags to control the flow of execution
    ~viewReady = Condition(false);
    // etc ...
    // ------------------- INIT FUNCTIONS ----------------------------
    // initiate View
    ~initView = {
        "Attempting to initiate View".postln; // notify system
        ~view[\init].value; // initiate View
        ~view[\initiated].value.wait; // wait for it to be initialized
        "Done initiating View".postln; // notify system
        ~viewReady.test_(true); // change condition when done initiating
        ~viewReady.signal; // propagate change
    };
    // also initiate the Model and Controller here ...
    // ------------------- LAUNCH PROGRAM ----------------------------
    fork { // initiate the various parts of the program
        ~initView.value; // initiate View
        ~viewReady.wait; // proceed only when done!
        // etc ...
        "All parts of the program have been initiated successfully!".postln;
        "Now the program will start!!".postln;
    };
};
)

The complete code can be found online in this book's code bundle. All of this is pretty straightforward; however, there are a couple of things that need to be clarified. First, notice how Blackboard imposes a particular design on the levels below it. Model, View, and Controller should all implement an init method, as well as an initiated member returning an instance of Condition, to indicate that they have been successfully initiated. It's up to Blackboard whether it should exploit this feature or not, but generally speaking it should. Blackboard's role is simply to redirect the initiation to each corresponding object; it is then this particular object's responsibility to initiate itself properly and notify Blackboard when done. Typically, we will apply the same methodology whenever needed, to ensure that the various internal parts of the Model, View, and Controller objects are initiated in the right order. Also note that we have added notification messages everywhere so that, in the case of errors (which, as we already know, happen more often than not while developing code), we know exactly in which part of our program they occurred; therefore, we can easily identify, isolate, and fix them. Again, we should follow the same style throughout all our objects to facilitate debugging; a minimal skeleton of this contract is sketched below.

An abstract class is a class that is not supposed to be instantiated, but is rather designed to be specifically used as a base class for other subclasses to inherit from it.
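To make the contract explicit, each of the three parts could follow a skeleton like the one below (a sketch; the name ~somePart is hypothetical):

// SomePart.scd: a minimal skeleton of a Blackboard-compliant part
var initiated = Condition(false); // private readiness flag
var initFunc = { fork {
    // ... perform all (possibly asynchronous) setup here ...
    initiated.test_(true); // flag successful initialization
    initiated.signal; // notify whoever is waiting
}};
~somePart = Environment.new;
~somePart.use{ // public interface
    ~init = initFunc; // start initialization
    ~initiated = { initiated }; // accessor returning the Condition
};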


The View

We are now ready to proceed with our View. In our case, the View is not only responsible for what we see, but also for all those parts of our program that constitute its output. Since Snakes is an audiovisual work, the View should be responsible for constructing and holding an instance of Window with a properly configured instance of UserView, as well as for initiating the sound synthesis engine, that is, an instance of Server. Note that the View is not just a file packed with code to unburden Blackboard. It is an object in its own right and has certain tasks and responsibilities to carry out, as reinforced by Blackboard's requirement that it implement an init method and an initiated member.

Clients and interfaces

A properly designed object should only expose those parts of itself that the other parts of the program need to access, and nothing more. Moreover, it should only expose these parts in a safe way, that is, in a way that does not allow third parties to alter their internal structure, thus fostering encapsulation. Every decent, well-designed object should communicate with the world exclusively through a specialized interface. We have already discussed interfaces in Chapter 4, Vector Graphics, wherein we highlighted the importance of keeping certain things private. This is a fundamental principle that we should always attempt to maintain at all costs. Nothing should be exposed unless completely necessary, and what is exposed should only be exposed through some specialized interface.

The interface's task is twofold. On the one hand, it isolates the internals of an object so that they cannot be accidentally or erroneously modified in unwanted ways. For example, our Controller will have to control the way certain elements of the View behave, but not these very elements per se. If the View did expose the underlying UserView or Window, it would introduce a vulnerability into the stability of our program, since it would make it possible to break the View's implementation from within the Controller. This is the way to disaster, as it could cause the whole program to crash simply because of a typo. We will see later how to solve this particular problem efficiently and safely.

Now we need to consider the second fundamental task of an interface: to isolate the internals of an object in order to hide their complexities. This is also encapsulation, but of a different and more subtle form. Consider our View object; all we want it to do is initiate itself, provide an instance of Condition so that its clients know when it has done so, and offer a couple of methods to change its behavior. This means that all the underlying tasks of constructing and setting up the various GUI elements need not be exposed, either as objects or as information. In other words, the interface should be simple and easy to use from its client's perspective.


So who is this client? In our case, we have two clients, Blackboard and the Controller, as nothing and no one else is supposed to ever use the View. Generally speaking, though, the client of an object is anything or anyone who might use it. This includes actual human beings as well as pieces of software and even hardware. Note also that the whole idea of creating abstract models is to foster code reusability; subsequently, an object's client may end up being something or someone completely different from what or who we had in mind while designing the interface. It is also important to realize that, from the client's perspective, an object's interface should be as simple and as conceptually straightforward as possible; thus, it should completely hide all the intrinsic mechanics of the object and the complexities of its implementation. The interface should reflect the client's desires, and not the logic of its implementation.

On account of that, some simple rules follow. Nothing should mess with the internal state of an object. Do not return objects directly; rather, provide an accessor (sometimes also referred to as a getter) method that returns these objects or their values. The actual objects should remain encapsulated within the original object, thereby ensuring that it is protected. Notice that this is why, in the preceding Blackboard code, we call .value.wait on ~view[\initiated], and not simply .wait: ~initiated is an instance of Function returning a Condition, which is safer than returning the instance of Condition itself. Likewise, if we want to allow the modification of some object, we should again do this with specialized methods (referred to as modifiers or setters) that should ideally also ensure that their argument satisfies certain criteria (such as being of the right type or within the right range of values) before actually modifying anything; a small sketch follows the note below. Other than this, we should simply make sure that our interfaces are easy to use and easy to understand. Coming back to our View object, its public interface should feature, apart from the init and initiated methods that are standard in our design, just a setDrawFunc method and a setOnCloseFunc method, to set up our UserView object's drawFunc function and the onClose handler of our parent Window object, respectively.

In object-oriented programming, encapsulation is an attribute of object design wherein a class' data members remain intrinsic and hidden within the object itself and are generally accessible only to other members of the same class and their respective subclasses.
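A hedged sketch of such a guarded accessor/modifier pair, in the Environment-based style used throughout this chapter (the names ~someView, getFrameRate, and setFrameRate are hypothetical), could be:

// a guarded accessor/modifier pair for a hypothetical frameRate member
var frameRate = 30; // private state
~someView = Environment.new;
~someView.use{
    ~getFrameRate = { frameRate }; // accessor: return the value, not its holder
    ~setFrameRate = { arg rate; // modifier: validate before mutating
        if (rate.isNumber and: { (rate > 0) && (rate <= 60) }) {
            frameRate = rate;
        } {
            "setFrameRate: invalid argument ignored.".warn;
        };
    };
};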


Implementation

Languages such as C++, Java, or Objective-C, for example, are fundamentally structured around the schism between an interface and its implementation, and hence provide special keywords to define whether some element is publicly accessible or is to be kept private. In SuperCollider, private membership is not explicitly supported in any object, yet it can easily be implemented, as we demonstrated in Chapter 4, Vector Graphics, wherein we designed our first Factory. The idea is simple: we can use variables with a limited scope, therefore inaccessible to anything outside this scope, and have our public methods act as a bridge towards them. SuperCollider will automatically destroy variables and their contents once they are out of scope and once there is nothing pointing (or referring, if you prefer) at them. Consider the following code snippet:

// some object in a separate scd file
var someVariable = 30;
var someFunction = { someVariable = 50; };
~someObject = Environment.new;
~someObject.use{
    // public interface
    ~accessor = {
        someVariable.postln;
        someFunction.value;
        someVariable.postln;
    }
}

When we use Include to evaluate this file from another file, we can only access its interface, that is, ~someObject[\accessor], through which we can access (but not set) someVariable and someFunction, which can be understood as private member variables of someObject. These are still referred to in our program, even if they are technically just remnants of ordinary variables that have gone out of scope. Note that the implementation of an object is fundamentally different in scope from its interface; therefore, the same guidelines do not apply, quite the contrary actually. Herein, our target group is programmers (which could be just us, really, albeit this is no less serious a situation). Bear in mind that we can never tell the scope of our code and that, even if our objects are project-specific, we may quite often return to them to adapt them for some other project. Take the View as an example: the way we will implement it here is of potential use to any kind of audiovisual project; therefore, we should try our best to keep it easily configurable. Subsequently, it is always a good idea to be scholastic in writing both meaningful and useful comments and, if necessary, more sophisticated textual descriptions of our code.
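Returning to the someObject example above, a quick test from the including file (assuming the code above lives in a hypothetical SomeObject.scd) might look as follows:

// hypothetical test from the file that includes the object above
Include.relative("SomeObject.scd"); // evaluate the file
~someObject[\accessor].value; // -> posts 30, then 50
~someObject[\someVariable].postln; // -> nil: the variable itself is not exposed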


A possible implementation of the View, along with its public interface, is shown in the following code:

// View.scd
// ------------------- IMPLEMENTATION ----------------------------
var parentWindow, userView; // graphics elements
var serverReady = Condition(false); // a condition
var guiReady = Condition(false); // a condition
var drawFunc = {}; // default drawing function
var initiated = Condition(false); // condition to notify clients
// --- private helper functions ---
var bootServer = { // function to boot the server
    "Attempting to initiate the sound synthesis engine".postln;
    Server.default.waitForBoot({
        "Sound synthesis engine is running!".postln;
        serverReady.test_(true); // change condition to proceed
        serverReady.signal; // propagate change
    });
};
var makeGui = { // create GUI elements
    { // defer
        "Attempting to create the animation window..".postln;
        parentWindow = Window.new("Snakes Project", 640@640, false).front;
        userView = UserView.new(parentWindow, parentWindow.bounds);
        userView.background_(Color.black).animate_(true)
        .frameRate_(30).clearOnRefresh_(false).drawFunc_({
            // add trailing effect
            Pen.fillColor_(Color(0,0,0,0.5)); // a transparent black
            Pen.addRect(Rect(0,0,640,640)); /* create a semi-transparent
            rectangle to cover previous contents (for trailing effects) */
            Pen.fill; // draw rectangle
            /* call custom drawFunc with userView.frame passed as argument */
            drawFunc.value(userView.frame);
        });
        guiReady.test_(true); // set flag to true
        guiReady.signal; // notify anybody interested
        "The animation window has been created!".postln;
    }.defer;
};
var initFunc = { fork { // initiate View
    bootServer.value; // boot server
    serverReady.wait; // wait for Server to boot
    makeGui.value; // create GUI elements
    guiReady.wait; // wait for Window to be made
    initiated.test_(true); // set initiated flag to true
    initiated.signal; // notify anybody interested
}};
var setDrawFunc = { arg f; // set a custom Function as the drawing Function
    drawFunc = f;
};
var setOnCloseFunc = { arg f; // set a custom Function as the onClose handler
    parentWindow.onClose_(f);
};
// ------------------- PUBLIC INTERFACE ----------------------------
~view = Environment.new;
~view.use{ // public interface of View
    ~init = initFunc;
    ~initiated = {initiated};
    ~setDrawFunc = setDrawFunc;
    ~setOnCloseFunc = setOnCloseFunc;
};

Strategies and policies

The View's implementation hereinbefore is very straightforward. Let us focus on the following parts of the code:

.drawFunc_({
    // .. trailing effects implementation
    drawFunc.value(userView.frame);
});

and

setDrawFunc = { arg f; // set a custom Function as the drawing Function
    drawFunc = f;
};


This may seem quite uncanny to some, but it perfectly exemplifies the spirit of this whole chapter. It is straightforward for the client, elegant for the programmer, and, more importantly, as safe as it gets. To appreciate why, consider an alternative, as shown in the following code snippet:

setDrawFunc = { arg f; // set a custom Function as the drawing Function
    userView.drawFunc_(f);
};

This may seem fine, but let us have a second look. There are at least three problems with this code. First, it would be up to the Controller to implement the trailing effects; yet it is not really its role to control this kind of decorative effect, it's the View's. Doing so would be conceptually wrong and would mess with the architecture of our program. Second, this way we cannot access userView.frame from within our function; therefore, we would have to define some sort of custom getFrameCount accessor, which may indeed be trivial, and yet, since the place where we would use the counter is inside our function, we would complicate the kind of code our client has to write. This is not an elegant approach. By the way, now that we are in the last chapter, you will probably agree with me that elegance, as far as programming is concerned, is the door to both aesthetic and pragmatic rewards. The most important argument against such a design, however, is that the setDrawFunc setter in the preceding code still returns userView implicitly, which is a dangerous practice. For instance:

~view[\setDrawFunc].value({/* some drawing function here */}).postln;

We could, of course, have our function return something different, but what, and why? Now consider our solution again. Here, setDrawFunc returns just a function, which happens to be exactly the function we want to use, so there is absolutely no way we can use it to do something different; userView is inaccessible. Since userView calls our custom f within its drawFunc function, we can still have it run the View-specific code before, or even after, the Function object is evaluated. The roles of the various objects are not violated this way, and the View's clients can be limited only to what is conceptually meaningful. And finally, we can now access the invaluable userView.frame counter from within drawFunc in a very elegant way (which also happens to be quite idiomatic in SuperCollider and is therefore preferred). Now we can do things as follows:

~view[\setDrawFunc].value({arg counter; /* use counter here */ });


This can be done easily without violating View's privacy, as there's still no way to access userView externally. Even if the function we provide to setDrawFunc is erroneous and causes an error, it will keep on evaluating without affecting the View (unless the error is so severe that the whole interpreter crashes), therefore, we will even have a chance to fix things, at least in theory. By the way, we will not use this counter in our Snakes project, but it's a feature worth having, and also makes the View a nice candidate to be considered in the context of other projects.
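For illustration, a hypothetical client-side drawing function exploiting this counter (not part of the Snakes code) might look as follows:

// a hypothetical drawing function using the frame counter
~view[\setDrawFunc].value({ arg counter;
    var hue = (counter % 360) / 360; // cycle through hues, frame by frame
    Pen.fillColor_(Color.hsv(hue, 1, 1));
    Pen.addOval(Rect(300, 300, 40, 40)); // a small circle
    Pen.fill;
});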

The Model

The Model is probably the most complicated object in our project, since it holds all the custom models we will use in Snakes. We need to design two singleton objects, namely population and gestalt. Both are singletons because in Snakes we need exactly one instance of each. We also need snakeFactory to construct snake objects. However, as we have already discussed, the latter consist of several parts; therefore, we need to model each of these parts as well and implement all the necessary factories, since we will need more than one of each. Because of its complexity, we will only give excerpts of the code here; the complete Model.scd file can be found online in this book's code bundle.

Aggregates and wrappers

A snake entity is complex, or what we call in computer science an aggregate (or a composition), since it consists of various dissimilar parts. For instance, each snake has a body, a personality, a brain, and a voiceDef associated with it. voiceDef stands for a unique sound synthesis algorithm for each snake, which is the basis of its voice. In essence, voiceDef is an instance of SynthDef; therefore, the task of voiceDefFactory is to algorithmically create a new and unique SynthDef object. For example:

// -- voiceDefFactory --
var voiceDefFactory = { // voiceDef should be a unique SynthDef
    var uniqueName = Main.elapsedTime.asSymbol; /* use elapsedTime as a
    unique name identifier */
    /* create a unique SynthDef by means of choosing through possible
    UGens */
    var voiceDef = SynthDef(uniqueName, { arg freq = 200;
        var sound = [SinOsc,LFSaw,LFTri,LFPulse].choose.ar(freq);
        sound = [SinOsc,Saw,Pulse].choose.ar(sound.range(freq*0.5,freq));
        sound = sound * [
            [SinOsc,Saw,Pulse].choose.ar(rrand(freq/2,freq*2)),
            [WhiteNoise,BrownNoise].choose.ar(rrand(0.2,0.8))
        ].choose;
        sound = sound * EnvGen.kr(Env([0,1,0],[1,1]).circle,
            timeScale: rrand(0.1,3));
        Out.ar(0, Pan2.ar(sound, 1.0.rand2) * 0.4);
    });
    voiceDef; // return the SynthDef
};
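To try out the factory in isolation, a small test sketch like the following could be used (assuming a booted default server):

( // fabricate and audition a unique voiceDef
fork {
    var voiceDef = voiceDefFactory.value; // create a unique SynthDef
    voiceDef.add; // send it to the server
    Server.default.sync; // wait until the server has received it
    Synth(voiceDef.name, [\freq, 300]); // start a voice
}
)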

The factory code is quite straightforward; each time it is called, it will create an instance of SynthDef by choosing random candidate UGens from its lists and combining them. We already have a body; remember that we implemented a similar factory in Chapter 5, Animation. Yet, from this chapter's perspective, there are certain problems with this model. In particular, its interface is not in accordance with the guidelines we have set hereinbefore in several respects, the most important being that it violates privacy. In such cases, and to avoid rewriting the whole factory, we can simply wrap our body in a new object that will guarantee a new, and in some respects better, interface to our old object. Such objects are called wrappers. For example:

// -- bodyFactory --
var include = Include.relative("9677OS_kinematicSnakeFactory.scd");
// include the kinematic snake factory
var bodyFactory = { arg position = Point(100,100),
    numberOfSegments = 20, length = 10, width = 30;
    var bodyWrapper; /* a wrapper around the body to implement a new
    interface */
    var colorFunc = { // a function returning a function to use as color
        var a = rrand(0.4,1); // a random coefficient
        var b = rrand(0,1); // another random coefficient
        [
            {arg i; Color.new(a,b,i)},
            {arg i; Color.new(i,a,b)},
            {arg i; Color.new(a,i,b)},
            {arg i; Color.new(b,a,i)},
            {arg i; Color.new(b,i,a)}
        ].choose; // choose a random colorFunc
    };
    var body = ~snakeFactory.value(numberOfSegments, length, width,
        colorFunc.value);
    body.refresh(position); /* set position (the old interface wouldn't
    do that directly) */
    // our new public interface
    bodyWrapper = ( // snake body's public interface
        getPosition: { body.position },
        refresh: { arg self, position; body.refresh(position); },
        draw: { body.draw }
    );
    bodyWrapper; // return the body object
};


Notice that we use the kinematicSnakeFactory.scd file (which we introduced in Chapter 5, Animation) that contains the definition of the original ~snakeFactory. Generally speaking, it is better to include such dependencies at the beginning of our files (as we do in the proper Model.scd file found online), so that we can tell immediately whether a file depends on others. The reason we assign the result of Include to a variable is not because we plan to use it, but simply to avoid breaking our particular style here, wherein we introduce variables along with their definitions rather than having them all grouped at the beginning (which I personally find counterintuitive). As far as personality is concerned, we do not need to make a new object; we can simply use an instance of Symbol to declare whether the snake is introvert or extrovert. The brain could be an ANN trained using random input and a provided numerical rule (which we will call brainSeed) as the desired output. A possible design is shown in the following code snippet:

// -- brainFactory --
var brainFactory = { arg brainSeed = [ [1,1],[0,0] ]; // create a brain!
    var ann = NeuralNet(4,20,2,0.01,1.0); // a primitive brain
    var sample = brainSeed.collect{ arg i;
        [ Array.fill(4,{rrand(0.0,1.0)}), i ]
    }; /* pairs of random values are mapped to each of the brainSeed
    elements */
    "Now creating a new artificial brain for the snake!".postln;
    ann.trainExt(sample,0.1,1000); // train network
    ann; // explicitly return ann
};
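A quick smoke test of such a brain (assuming the NeuralNet extension is installed) could be:

// feed the brain four random inputs and inspect its two outputs
var brain = brainFactory.value([ [1,1], [0,0] ]);
brain.calculate(Array.fill(4, { rrand(0.0, 1.0) })).postln;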

Software agents

And now that we have all its parts ready, we can finally code snakeFactory in the following manner:

// -- snakeFactory --
var snakeFactory = { arg position, numberOfSegments, length, width,
    brainSeed, gestalt;
    // create the various parts out of their factories here
    // ...
    var initRoutine = fork { // set up snake!
        /* use a routine and a condition to check whether the brain has
        been trained here */
        // ...
        // add voiceDef, sync to the Server and start a voice here ...
        // now our little snake is ready to start a life on its own!!
        life = fork { loop{
            var value, newPosition, newFrequency;
            value = brain.calculate(gestalt.value); // first interpret gestalt
            // decide towards where to move, and do it!
            newPosition = body.getPosition.translate(
                value.linlin(0,1,-10,10).asPoint);
            // newPosition is relative to the old position
            newPosition = newPosition.wrap(0,640);
            body.refresh(newPosition); // set new position
            // decide what to say, and speak!
            newFrequency = value.sum.linlin(0,2,100,3000);
            voice.set(\freq, newFrequency);
            1.0.rand.wait; // move every random amount of seconds
        }};
        snakeReady.test_(true); // change flag now that snake is ready
        snakeReady.signal; // propagate change to anyone interested
    };
    var killFunc = {
        // kill snake: free voice Synth and stop life Routine here
    };
    var snake = ( // snake's public interface
        getPersonality: { personality }, // return personality (a Function)
        getPosition: { body.getPosition }, // return position (a Function)
        kill: killFunc,
        draw: { body.draw },
        isReady: { snakeReady }
    );
    snake; // return snake
};

Again, only the most important parts are shown here; the complete code can be found online in this book's code bundle. During this stage of object modeling, it is essential to write testing code for all of our objects, so that we know whether they behave the way they should before we proceed. In this way, we will save debugging time in the long run.


The snake objects generated by our factory are quite particular; more than being simply complex aggregates made of various other parts, they are also alive! They are living entities that move and produce sounds on their own, driven by some internal motor (their brain). The latter may be primitive, but it is nevertheless some form of intelligence that casts our snakes as autonomous. During their lifetime, our snakes will regularly call value on gestalt; this is their only means to probe and interpret their surroundings and behave according to what they perceive. Subsequently, a snake object, once created, acts on its own, without the intervention of any client being necessary. Such elements, which are independent, autonomous, and act on their own, are called software agents. (By the way, nowadays software agents are to be found everywhere in computing, especially on the World Wide Web, where they probe for information of all sorts, for example, while indexing websites on behalf of search engines and similar applications.)

So, since our snakes are autonomous, and since clients should not directly interact with them, how do we control them? Well, we do not need to control them; at least, that was the idea in the first place. Remember our original requirements? The only thing we need to do is make sure we can kill existing snakes or spawn new ones when needed; this is the only rule of the game. Note that a snake's public interface provides accessors for position and personality, so that we can later use them to perform all the necessary tests and decide under which particular circumstances a particular snake should be killed or kept alive.

Introducing software actors and finalizing the model

It will be the task of population to kill or spawn snakes when needed. A population is really nothing more than a frontend to an instance of Array containing snakes, but with a proper public interface, so that nobody can access the snake objects themselves. The code is trivial, so it will not be discussed in detail here; a rough sketch follows below. An instance of population is a special kind of object too. It is responsible for managing the life and death of other objects; therefore, it is a kind of Collection (which is a well-known design pattern in SuperCollider). Yet, unlike most collections we normally use, it also has complex responsibilities and is capable of communicating with its elements through their public interface. An instance of population will be present constantly, from the beginning until the end of our program, and will wait for its client (it will be the Controller, really) to tell it what to do. Then, and only then, will it perform some action. Such kinds of objects are sometimes referred to as software actors, and they differ from agents in that actors must be explicitly asked to do something before they do it. In other words, an actor will not perform an action autonomously. (I'm really not sure if this is the case with human actors, by the way.)
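For the sake of completeness, a simplified sketch of what population could look like follows (the details are assumptions; the actual file is in the code bundle):

// -- population (sketch): a software actor wrapping an Array of snakes --
var snakes = Array.new; // the private collection
var population = ( // public interface
    getNumberOfSnakes: { snakes.size },
    getPosition: { arg self, index; snakes[index].getPosition },
    getPersonality: { arg self, index; snakes[index].getPersonality },
    spawnNew: { arg self ... args;
        var snake = snakeFactory.valueArray(args); // delegate to the factory
        snakes = snakes.add(snake);
        snake.isReady; // return the Condition so that clients may wait
    },
    kill: { arg self, index;
        snakes[index].kill; // free the voice and stop the life routine
        snakes.removeAt(index);
    },
    killAll: { snakes.do{ arg s; s.kill }; snakes = Array.new; }
);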


The last element of our Model is the gestalt object, which is a singleton object responsible for retrieving data from www.random.org and distributing it at will. We will access the data using the asynchronous unixCmd and a temporary file, as we did in the example in Chapter 7, Advanced Visualizers. The code for gestalt is also trivial and will not be discussed here. To finish our Model, all we need is its initFunc, which will initialize the Model, and its public interface. Initialization simply retrieves the first chunk of data from www.random.org and creates an initial population of 10 snakes. The public interface is pretty straightforward and allows clients to access only initFunc, the initiated Condition, and the public interfaces of population and gestalt. For example:

// ------------------- PUBLIC INTERFACE ----------------------------
~model = Environment.new; // define a new environment
~model.use{ /* notice that there is no way to directly access
    snakeFactory; snake objects can be accessed indirectly only */
    ~init = initFunc;
    ~initiated = {initiated};
    ~population = population;
    ~gestalt = gestalt;
};

The complete Model.scd file can be found online in the code bundle of this book.
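For reference, a minimal sketch of how the gestalt object could retrieve its data (assuming curl is available; the URL parameters, file path, and interface details are assumptions) might look as follows:

// -- gestalt (sketch): fetch random data asynchronously via curl --
var data = []; // private cache of numbers
var tmpPath = "/tmp/gestalt.txt"; // a temporary file (hypothetical path)
var url = "https://www.random.org/integers/?num=100&min=0&max=100&col=1&base=10&format=plain&rnd=new";
var gestalt = ( // public interface
    retrieveNewData: { // asynchronously fetch a fresh chunk of data
        ("curl -s -o" + tmpPath + url.quote).unixCmd({ arg exitCode;
            if (exitCode == 0) { // parse the file once curl is done
                data = FileReader.read(tmpPath, skipEmptyLines: true)
                    .flatten.collect(_.asInteger);
            };
        });
    },
    returnDatum: { // hand out a single datum, normalized to 0..1
        if (data.isEmpty) { 1.0.rand } { data.choose / 100 };
    }
);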

The Controller

Having implemented the View and the Model, we now need to ensure that they communicate with each other and that their elements are properly updated. The Controller is a simple object, at least when compared to the Model. It has only two basic responsibilities: to be the mediator between the Model and the View, and to update the state of the Model itself according to the rules that govern our system. The Controller will consist of the standard (in our design) init and initiated members, as well as the necessary gestaltUpdate and populationUpdate agents. Note that the Controller needs access to both the Model and the View; therefore, when we initialize it, we should make sure we assign them to variables accessible to all elements of the Controller, as sketched below.
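A hedged sketch of such a Controller follows (simplified, and with hypothetical details; the populationUpdate agent is discussed in the next section, and the actual Controller.scd is in the code bundle):

// Controller.scd (sketch): wiring the Model and the View together
var model, view; // will point to the actual Model and View
var gestaltUpdate, populationUpdate; // the two agents
var initiated = Condition(false);
var initFunc = { arg theModel, theView;
    model = theModel; // keep references accessible to all elements
    view = theView;
    gestaltUpdate = fork{loop{ // agent: refresh gestalt regularly
        model[\gestalt].retrieveNewData;
        10.wait; // a hypothetical refresh interval
    }};
    view[\setDrawFunc].value({ // let the View visualize the Model's state
        model[\population].getNumberOfSnakes.value.do{ arg i;
            model[\population].draw(i); // hypothetical per-snake draw accessor
        };
    });
    view[\setOnCloseFunc].value({ model[\population].killAll });
    // ... launch the populationUpdate agent here (see the next section) ...
    initiated.test_(true);
    initiated.signal;
};
~controller = Environment.new;
~controller.use{ // public interface
    ~init = initFunc;
    ~initiated = { initiated };
};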


Game of Life

In essence, Snakes is a kind of Game of Life. The population changes according to a very specific set of rules that determine what will happen when two snakes meet. Keeping in mind our requirements and the possible results of an encounter between two particular snakes, it is easy to implement the following code:

// ------- Rules -------
var resultOfEncounterFunc = { arg personalityA = \introvert,
    personalityB = \introvert;
    var result, scenarios;
    scenarios = [\love, \nothing, \death]; // the possible results
    case
    { personalityA != personalityB } {
        // if an introvert and an extrovert meet,
        // they will most likely love each other
        scenarios.wchoose([0.8,0.1,0.1])
    }
    { (personalityA == personalityB) && (personalityA == \introvert) } {
        // if they are both introvert, the chances are
        // that they will ignore each other
        scenarios.wchoose([0.1,0.8,0.1])
    }
    { (personalityA == personalityB) && (personalityA == \extrovert) } {
        // if they are both extrovert, the chances are
        // that they will kill each other
        scenarios.wchoose([0.1,0.1,0.8])
    };
};
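A quick sanity check of the rule function could be:

resultOfEncounterFunc.value(\introvert, \extrovert).postln; // most likely \love
resultOfEncounterFunc.value(\extrovert, \extrovert).postln; // most likely \death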


Of course, we will need an ever-present background agent to observe and regulate the population according to these rules:

// ------- gameOfLife -------
var gameOfLifeFunc = {
    populationUpdate = fork{loop{ // population updating agent
        var killIndices = Set.new; /* if we remove some item while in the
        loop, the next iterations will be affected, so we collect indices
        here and kill snakes afterwards (we use a Set so that we don't
        attempt to delete the same element twice) */
        var spawnNewPositions = Set.new;
        // nested do loops to test each element against every other one
        model[\population].getNumberOfSnakes.value.do{ arg indexA;
            model[\population].getNumberOfSnakes.value.do{ arg indexB;
                if (indexB > indexA) { /* to avoid testing an item against
                itself, as well as against items it has already been
                tested against */
                    var dist = model[\population].getPosition(indexA)
                        .dist(model[\population].getPosition(indexB));
                    // calculate distance
                    if (dist < 10) { // if distance is less than 10 pixels
                        var action = resultOfEncounterFunc.(
                            model[\population].getPersonality(indexA),
                            model[\population].getPersonality(indexB));
                        ("Two snakes have encountered.. the result is: "
                            + action).postln; // notify of the result
                        case
                        {action == \love} {
                            /* spawn a new snake 30 pixels away from the
                            first one */
                            spawnNewPositions = spawnNewPositions.add(
                                model[\population].getPosition(indexA) + 30);
                        }
                        {action == \nothing} { /* do nothing */ }
                        {action == \death} {
                            // keep indices to kill later
                            killIndices = killIndices.add(indexA);
                            killIndices = killIndices.add(indexB);
                        };
                    }
                }
            }
        };
        // kill snakes
        killIndices.do{ arg item;
            model[\population].kill(item);
        };
        // spawn new snakes
        spawnNewPositions.do{ arg position;
            model[\population].spawnNew(position, 25, rrand(5,15),
                rrand(5,30),
                Array.fill(rrand(1,4), { Array.fill(2,{rrand(0,1.0)}) }),
                { model[\gestalt].returnDatum }
            ).wait; // remember, it returns a condition!
        };
        0.1.wait; // pause briefly, so that actions do not occur immediately
    }};
};

Do not worry about the fact that model stands for nothing here; we will make sure it points to the actual Model when we initialize the Controller, which, if we follow our design throughout, is guaranteed to happen before we actually call gameOfLifeFunc. Other than this, there are some complexities in this code, the major one being how to test the position of each snake against all the others. These sorts of problems are fundamental in programming, generally speaking. One thing is sure: in a process looping every 0.1 seconds, we do not want to perform more tests than absolutely required. The implementation above is not optimal, yet it does the job in an easy-to-understand way. In principle, we nest a full iteration over all the snake objects within another, so that we can test the position of each snake against every other; however, we only perform tests when the index of the latter is greater than that of the former. In this way, we ensure that we do not test the position of a snake against itself, and we also guarantee that we don't perform the same test twice. By the way, we could have implemented the same code using a single do structure, but I find this approach conceptually more explicit and easier to read. An important complication in all cases is that we do not want to kill or spawn snakes within the body of such an iterative loop; this would alter the very collection that we are currently iterating through, and would immediately open the door to strange bugs and errors.
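Stripped of all project specifics, the pairwise-testing idiom at the heart of this agent is simply:

// visit each unordered pair of indices exactly once
var n = 4;
n.do{ arg a;
    n.do{ arg b;
        if (b > a) { [a, b].postln }; // -> [0,1], [0,2], [0,3], [1,2], ...
    }
};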


Finalizing the Controller

Now all that is left to do is to initiate our Controller and provide a public interface. The responsibilities of the init method are: to assign the Model and the View (which should be passed as arguments) to the model and view variables, respectively; to launch an agent for updating gestalt at regular intervals; to set up the View's drawFunc and onCloseFunc functions; and to launch the agent responsible for updating the population. The code is trivial and can be found online in this book's code bundle.

And now we are ready to launch the last, and definitely the most complicated, example of this book! Time to enjoy an ever-changing organic population of snakes moving in all directions and producing all sorts of funny noises while leading their tiny artificial snake lives: sometimes making love to each other to spawn new children (a fundamental and quite joyful activity of living creatures, by the way), sometimes killing each other (an equally fundamental, albeit not-so-joyful, property of life), and sometimes simply ignoring each other (which sometimes seems like the most fundamental of all of life's properties). A still from the animation is shown as follows:

[Figure: a still from the Snakes animation.]



The following figure abstractly describes the entire architecture of our program:

[Figure: the complete architecture of Snakes. The Blackboard runs the Model, the Controller, and the View, each exposing init and initiated. The Model holds the Population (with getNumberOfSnakes, getPersonality, getPosition, killAll, kill, and spawnNew) and the Gestalt (with returnDatum, retrieveNewData, init, and initiated). Each Snake aggregates a Body (built by the SnakeFactory from Chapter 5, with position, theta, draw, and refresh), a Brain (a NeuralNet, really), a VoiceDef (a SynthDef, really), and a personality (a Symbol, really), and exposes getPersonality, getPosition, kill, draw, and isReady. The View holds the Audio Synthesis Server and the UserView.]


Summary

In this chapter, we demonstrated how to deal with complicated, real-life situations and how to apply certain methodologies to design and implement them. By means of a quite involved example, we illustrated how to break more complicated tasks into small parts and how to apply well-known programming patterns to code and finalize them easily and efficiently. It must be noted that some of these patterns have been used rather liberally hereinbefore, and in a rather broad sense; however, this chapter does not pretend to be an exhaustive treatise on the subject, but rather a hands-on introduction to an object-oriented way of thinking. Being pragmatic, we simply pinpointed how to solve real-life problems in efficient and elegant ways by means of well-known computer science paradigms.


Index Symbols *draw method 61 *joinStyle method 61 *lineDash method 61 *moveTo method 60 *rotate, geometrical transformation 68 *scale, geometrical transformation 68 *skew, geometrical transformation 68 *trace method 112 *translate, geometrical transformation 68 *width method 61

A abstractions 63 accelerated motion 82 addForce method 96 Aliasing distortion 27 amplitude 28 anchor points 59 animation interaction and event-driven programming 86, 87 particle systems 88-90 trailing effects, adding 85 ANN 159-162 aperiodic waveform generators 32 Application Programming Interfaces (APIs) 104 artificial neural networks. See  ANN Audio 24 Audio visualizers about 131 spectrogram 133-136 waveforms, trailing 132

Automata about 162 cellular automaton 163, 166 Game of Life 166-168 axial gradients 62

B baud rate 114 Bezier curves 61 binary operations clipping operations 38 comparison operations 38 mathematical operations 38 quantization operations 38 simple mathematical operations 38 biological neuron 162 Bitwise operations 39, 40 Blackboard about 172 methodology 172, 173 multiple files, handling 176-178 Buffer 45

C Cartesian coordinate system 46 cellular automaton 163-166 chaotic motion 82 clearOnRefresh variable 83 client 121 clipping operations 37, 38 Collection base class 123 colors 61 Comma Separated Values. See  CSV comparison operations 38 complex visualizers 21

F

CompositeView 54 Controller about 174, 193 finalizing 197, 198 Game of Life 194-196 ControlSpecs class 127 CSV 106 curl URL 108 currentEnvironment object 178 curverange method 127 custom aperiodic waveform generators 32

D data analyzing 150 histograms 152, 153 probability 152, 153 statistical analyses and metadata 150, 151 textual datasets 153-155 data acquisition about 104 data, accessing remotely 107, 108 local files, dealing with 104-106 Musical Instrument Digital Interface (MIDI), using 112, 113 Open Sound Control (OSC), using 109-111 Serial Port, using 113, 114 Digital Signal Processing (DSP) 23 direct current (DC) 27 Document.dir 176 domainSpecs instance 152 drawFunc 85 drawFunc variable 86 dynamics adding, to simulate physical forces 94-97

E encapsulation 182 envelopes using, as wavetables 31, 32 Event-driven Programming (EDP) 86-88 expexp 127 explin 127 external IP address 110

Factories 65-68 Fast Fourier Transform. See  FFT FFT in SuperCollider 45 FFTSpread UGen 119 FileReader.read 107 File Transfer Protocol (FTP) 104 fillstroke 60 Firmata 114 Fractalizer 144-147 fractals about 74, 77, 78 animating 91-93 frameRate variable 83 FreqScopeView 54 FreqScopeView method 16, 17 frequency domain about 43 spectra 44 Function 123

G Game of Life about 166 rules 166 gaussCurve 127 geometrical transformations 68 getLine method 106 getter 183 Goertzel UGen 119 gradients about 62 axial gradients 62 radial gradients 62 grains 142, 143

H HSVA (Hue, Saturation, Value, and Alpha) 61

[ 202 ]

I images loading 59 index position 12 information hiding 68 Integrating Development Environment (IDE) 114 internal IP address 109 Internet Protocol (IP) address 109 Internet Service Provider (ISP) 110 interpolation 126 interquartile range (iqr) 150

K Kinematic patterns 138, 139 Kinematics 98, 100

L LevelIndicator class 17, 19 LevelIndicator object 19 levels metering 17 numerical data, monitoring 19 signals, monitoring 17, 18 lincurve 127 linexp 127 logarithmic and exponential operations 36

M Machine Learning 162 machine listening about 115, 116 amplitude, tracking 117 features, detecting 119 frequency, tracking 118 onset, detecting 120 Timbre 119 Tracking frequency 118 mappings about 121 auxiliary functions, code 158, 159 basic encodings 126 data, distributing 128, 129 data, preparing on client side 122, 123

data, preparing on server side 124 data, preprocessing on client side 122, 123 data, preprocessing on server side 124 data, sharing 128, 129 encodings, complex 156, 157 encodings, intelligent 156, 157 interpolation scheme 126, 127 Markov chains 33 mathematical operations about 38 miscellaneous mathematical operations 37 matrices 69 metadata 151 MIDI 112 Model about 62, 63, 174, 188 aggregates 188, 190 finalizing 192 software actors 192 software agents 190, 192 Model-View-Controller 174 modifiers 183 motion about 81 accelerated motion 82 chaotic motion 82 complex shapes, animating 84, 85 species 82 sprites, animating 84, 85 uniform motion 82 UserView, using 82, 83 Musical Instrument Digital Interface. See  MIDI Music Information Retrieval (MIR) 116 Music visualizers about 136 Kinematic patterns 138, 139 windmills, rotating 137

N namespace 177 NeuralNet 160, 161 neurone. See  biological neuron Nonstandard visualizers 20 numerical data 140 Nyquist frequency 53 [ 203 ]

O

R

object-oriented design 68 objects 64, 65 Open Sound Control (OSC) 109 OSCFunc object 110

radial gradients 62 random walk 140 raster graphics 58 readAllStringHTML method 108 rester graphics 59 RGBA (Red, Green, Blue, and Alpha) 61 Root Mean Square (RMS) amplitude 117

P particle system 71, 72, 141, 142 paths 59 peak amplitude 117 phase offset 28 Pitch 118 pixels 59 plot graph using 8, 9 plotGraph method 9 plot index 12 plotter using 10, 12 points variable 145 Polar coordinate system 46 polarity operations 36 polymorphic 8 polymorphism 9 port 109 POSIX 105 primitive shapes drawing 59, 60 probability 152 prototypes 64, 65 PV_BinShift 55 PV_BrickWall 49 pvcalc method 52 PV_Cutoff 49 PV_Freeze 55 PV_MagAbove 55 PV_MagFreeze 50 PV_MagSmear 49 PV_RectComb 49

Q Qitch 118 quantization operations 38

S SCLang 121 SCMIR library URL 160 scopeResponse method 16, 53 ScopeView class 132 scrollTo method 13 Semaphore 180 SerialPort class 113, 114 Server 121 setDrawFunc function 187 setters 183 shapes complex shapes, animating 84 signals about 24 scoping 13 spectra, scoping 16, 17 waveforms, scoping 14-16 simple mathematical operations 38 Sones 117 SortedArray 159 sound 24 SoundFileView using 12, 13 SpecPcile UGen 119 spectra about 44 aggregating 46, 48 enriching 46 freezing 48, 50 optimizing, for scoping 54-56 scoping 16, 48 scoping, limitations 53, 54 scrambling 50, 51 sculpting 48, 50 shifting 50, 51 [ 204 ]

stretching 50 synthesizing 46 visualizing 53 spectral centroid 119 spectral crest 119 spectral features 119 spectral flatness 119 spectral spread 119 Spectrogram 133, 134 sprites animating 84, 85 stack 69 static IP address 110 stdout 108 SuperCollider about 58, 154 FFT 45 SynthDef function 175 SystemClock class 13 systemCmd 107 System Exclusive (SysEx) 112

T Tartini 118 textual datasets 153 thisFunction keyword 74 threads 179, 180 time domain representation about 24 amplitude 28 DC (direct current) 27 frequency 28 Waveform, species 26 trailing effects about 69 adding, to animation 85 train method 161 Trigonometric operations 37

U UDP 109 unary operations clipping operations 37 logarithmic and exponential operations 36 miscellaneous mathematical operations 37 polarity operations 36 Trigonometric operations 37 uniform motion 82 unixCmd 107 unixCmdGetStdOut 107 unixCmdGetStdOutLines 107 unmap method 127 updateColumns 134 User Datagram Protocol. See  UDP UserView using 82, 83

V vector graphics about 58 colors 61 gradients 62 shapes 60 state 60 View about 174, 182 clients and interfaces 182, 183 implementation 184 strategies and policies 186-188 VoiceDef 175


W wave 24 Waveform aperiodic 26 generators 29 periodic 26 scoping 14, 15 synthesis fundamentals 24 trailing 132 transformations 33 Waveshaping about 34-36 binary operations 38 bitwise operations 39 unary operations 36, 37 Wavetable lookup synthesis 29, 30 wavetables envelopes, using as 31, 32 weblog. See  blog windmills rotating 137, 138 wrappers 189

Z ZeroCrossing 118


Thank you for buying Mapping and Visualization with SuperCollider



SuperOM: a SuperCollider class to generate music scores in OpenMusic
Claudio Panariello, Emma Frid

To cite this version: Claudio Panariello, Emma Frid. SuperOM: a SuperCollider class to generate music scores in OpenMusic. International Conference on Technologies for Music Notation and Representation (TENOR), May 2023, Boston, United States. DOI: 10.17760/D20511476. hal-04331456.

HAL Id: hal-04331456 https://hal.science/hal-04331456 Submitted on 8 Dec 2023


SUPEROM: A SUPERCOLLIDER CLASS TO GENERATE MUSIC SCORES IN OPENMUSIC

Claudio Panariello
Sound and Music Computing Group, KTH Royal Institute of Technology, Stockholm, Sweden
[email protected]

Emma Frid
IRCAM STMS Lab, Paris, France / KTH Royal Institute of Technology, Stockholm, Sweden
[email protected]

ABSTRACT

This paper introduces SuperOM, a class built for the software SuperCollider in order to create a bridge to OpenMusic and thus facilitate the creation of musical scores from SuperCollider patches. SuperOM is primarily intended as a tool for SuperCollider users who make use of assisted composition techniques and want the output of such processes to be captured through automatic notation transcription. This paper first presents an overview of existing transcription tools for SuperCollider, followed by a detailed description of SuperOM and its implementation, as well as examples of how it can be used in practice. Finally, a case study in which the transcription tool was used as an assistive composition tool to generate the score of a sonification – which was later turned into a piano piece – is discussed.

1. INTRODUCTION

Automatic generation of notation is a complex topic [1]. The design of computational algorithms to convert acoustic music signals into some form of music notation, so-called Automated Music Transcription (AMT), is a challenging task in signal processing and artificial intelligence [2]. Typically, AMT systems take an audio waveform as input and compute a time-frequency representation, from which a representation of pitches over time is output [2]. This process involves, among other subtasks, multi-pitch estimation (MPE), onset and offset detection, beat and rhythm tracking, interpretation of expressive timing and dynamics, as well as score typesetting. A comprehensive overview of signal processing methods for music transcription was presented in [3]; discussions of challenges for AMT have been published in [4, 2]. While the technological aspects of AMT tools are highly relevant to the work presented in this paper, they differ somewhat from Computer-Aided Composition (CAC) tools (software that allows composers to design computer processes that generate musical structures and data [5]) in the sense that they are primarily designed to aid the transcription process, not necessarily the composition process, which is the purpose of the SuperOM presented in this paper. More specifically, the purpose of the current work is to translate code into scores, not to generate scores directly from audio files 1 .

Sound synthesis tools used in assisted composition usually do not include automatic music notation features, mainly because these tools were not designed with that particular use case as a primary motivation. However, the possibility to visualize sounds in standard Western music notation can be useful in many composition contexts. Different attempts have been made to fill this gap, but the efforts have often been characterized by a lack of documentation, making it difficult to present a full review of the tools and methods used. A recurring strategy in this context seems to be to use SuperCollider 2 [6] classes that can bridge to LilyPond [7], either directly or through third-party software. SuperCollider is a programming language for audio synthesis and algorithmic composition; LilyPond 3 is a free music engraving system. One of the oldest attempts to bridge sound synthesis software with automatic notation tools is LilyCollider 4 , developed by Bernardo Barros [8]. LilyCollider is an interactive program that builds sequences of music notation, extending the SuperCollider programming language (sclang). LilyCollider wraps the LilyPond music notation software, meaning that it can be used to generate a LilyPond score from SuperCollider code. However, the system has some limitations when it comes to rendering time: it requires you to wait until the score is engraved [9]. Today, LilyCollider has been abandoned in favor of SuperFomus 5 , which was also developed by Barros. SuperFomus relies on both LilyPond and FOMUS 6 to generate a music score. FOMUS is an open-source application developed by David Psenicka that automates many musical notation tasks for composers and musicians. It was designed with composers who work with algorithms and computer music software languages in mind, and it facilitates the process of creating professionally notated scores.

1 Although that would be possible using the SuperOM, for example using spectral analysis features.
2 https://supercollider.github.io/
3 https://lilypond.org/
4 https://github.com/smoge/LilyCollider
5 https://github.com/smoge/superfomus
6 https://fomus.sourceforge.net/

Copyright: © 2022 Author Name. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

An interesting aspect of SuperFomus is that it allows for musicXML export. In other words, it can be used to generate a file that can easily be edited in any scorewriter software. However, SuperFomus has some limitations, specifically when it comes to more advanced rhythm algorithms, and it may not be the optimal solution for working with metric structures [9]; in practice, it can be somewhat unreliable when dealing with complex music notation structures. In addition, it appears that it is no longer maintained, which may add some troubleshooting time when installing it. Another system is Fosc 7 , which stands for FO-rmalised S-core C-ontrol (FO-r S-uperC-ollider). Fosc is an API that ports much of the Python code base of Abjad 8 , a system that supports composers in building complex pieces of music notation in iterative and incremental ways, to SuperCollider. Since Abjad wraps the LilyPond music notation package, Fosc can be used for the generation of musical notation in LilyPond. Despite being powerful, Fosc does not allow for musicXML export, thus limiting the quantity and quality of information that can be preserved in the score. Finally, a custom-made SuperCollider class called SonaGraphLily was included in the SonaGraph framework, a harmonic spectrum analyzer suitable for assisted and algorithmic composition, developed by Andrea Valle [10]. SonaGraphLily manages the mapping from sonographic data (i.e. spectrum over time) to music notation, using LilyPond code. It creates LilyPond source files that are rendered as graphic files. However, besides the fact that this class is optimized to work on a SonaGraph instance, it does not support musicXML export.

2. MOTIVATION

Although the above-described software solutions may be useful for certain use cases, they all fall short when it comes to generating musically complex scores while allowing for musicXML export, which is the main aim of the SuperOM, described in the forthcoming sections. In other words, the goal of SuperOM is to enable the generation of scores from SuperCollider in the fastest way possible and with as few dependencies as possible, while at the same time preserving extremely high precision in the notation and allowing for musicXML export.

The SuperOM is a SuperCollider class that produces music scores in the form of OpenMusic (OM) files, i.e. .omi files. OpenMusic [11] is an open-source environment 9 dedicated to music composition, developed at IRCAM at the end of the 1990s (see e.g. [12, 13]). It is a visual programming language based on Lisp that allows the user to design processes for the generation and manipulation of music material, but it can also be used for other applications [13]. Similarly to other graphical environments such as Pure Data (Pd) 10 and Max/MSP 11 , the workflow for coding programs in OpenMusic is based on patching together different modules. However, as opposed to such tools, the output of OM processes is also visualized using conventional music notation. A screenshot of the OpenMusic interface is displayed in Figure 1. OpenMusic has been used by a wide community of composers and computer musicians throughout the years. Notable composers include, among others, Kaija Saariaho, Marco Stroppa, Brian Ferneyhough, Philippe Manoury, Fabien Lévy, and Mauro Lanza. 12

There are several reasons why OpenMusic was adopted in this project. Firstly, OpenMusic's powerful capabilities, especially in terms of handling very complex music structures, as well as its exporting features, were important considerations. OpenMusic allows for export to many different file formats, e.g. MIDI, bach [14], and musicXML. As such, it enables the generation of files that can be opened and edited in most common scorewriter software (such as MuseScore, Sibelius, Finale, or Dorico, just to name a few). Another reason is the simplicity of the overall installation: SuperOM's only dependency is, in fact, OpenMusic, whose installation is quite straightforward. Moreover, no prior knowledge of OpenMusic is actually required, since the only OpenMusic features a SuperCollider user needs are file import and export. Finally, yet another reason is the aim to bridge two open-source software solutions as well as their communities. In fact, .omi files generated by the SuperOM in SuperCollider are fully valid, working OpenMusic files. This means that they can be manipulated directly by OpenMusic users. This aspect can encourage collaborative frameworks in which SuperCollider and OpenMusic users work together, exchanging their own material. However, as with all software solutions, the adoption of OpenMusic might have some drawbacks. For example, the playback functionality in OpenMusic could be considered an obstacle, since it requires a third-party software synth player to work, which is out of the scope of this paper. Nevertheless, it seems that future versions of OpenMusic will have an embedded synth available. 13 On the other hand, if a SuperCollider user decides to use OpenMusic merely as a way to export a musicXML file for a notation software, the playback part can indeed be skipped.

3. CLASS DESCRIPTION

The main method of SuperOM is .writeOMfile, which takes the following arguments: fileName, midicents, magnitudes, rhythmTree, metronome, quantization, threshold, dynamics (see Listing 1). Once an .omi file has been produced, it can be imported into an OpenMusic patch and opened from there. The file can then be edited directly in OpenMusic and exported as an XML file (for example using the POLY object).

7 https://github.com/n-armstrong/fosc
8 https://abjad.github.io/
9 It can be downloaded for free from https://openmusic-project.github.io/openmusic/
10 https://puredata.info/
11 https://cycling74.com/
12 https://en.wikipedia.org/wiki/OpenMusic
13 Please see the OpenMusic IRCAM Forum: https://discussion.forum.ircam.fr/t/open-music-midi-player/39930, accessed 14 December 2022.

Figure 1. A basic example of an OpenMusic patch. In this example, several modules are connected to generate a harmonic series on the fundamental frequency of 240 Hz, up to the 24th harmonic. The CHORD window displays the result using conventional music notation.

o = SuperOM.new;
o.writeOMfile(fileName:, midicents:, magnitudes:, rhythmTree:, metronome:, quantization:, threshold:, dynamics:);

Listing 1. SuperOM.

A more detailed description of the arguments taken by .writeOMfile follows below.

3.1 fileName

The argument fileName is the name of the output file, including the .omi extension (e.g. "test.omi"). The output will be produced in the same folder where the SuperCollider file is located.

3.2 midicents

The argument midicents is an array of notes, expressed in midicents. This argument is always passed as an array of staves, leaving to SuperOM the task of interpreting how many staves should be written in the OM output file. For example, in the case of [[l0, l1, l2, ...], [m0, m1, m2, ...], [n0, n1, n2, ...], ...], the rows are interpreted as subsequent separate staves: the first staff will contain the notes [l0, l1, l2, ...], the second staff the notes [m0, m1, m2, ...], the third staff the notes [n0, n1, n2, ...], et cetera. Chords can be specified with additional brackets, e.g. [[n0, n1, [n2, n3], n4, ...]]. In this example, there is one staff, and the notes n2 and n3 are interpreted as a bichord.

3.3 magnitudes

The argument magnitudes is an array (or an array of arrays) of the notes' magnitudes. magnitudes can be expressed either in decibels (dB) or as MIDI velocities (i.e. values from 0 to 127); SuperOM will take care of interpreting the given array in the correct way with the method .dbvelSmart. magnitudes must match the size of midicents. If no magnitudes are given, all notes will automatically be set to velocity = 100.

3.4 rhythmTree

rhythmTree is an array of rhythms, provided as floats (or fractions), i.e. 0.5 or 1/4 for a quarter note, 0.25 or 1/8 for an eighth note, etc. A positive value represents the duration of a note, while a negative value represents a pause: for example, -1/16 represents a rest of a sixteenth note. A rest can also be specified using SuperCollider's Rest() operand. If this argument is nil, the magnitudes argument will be used as a source to create a rhythm tree (see Section 4.2).

3.5 metronome

This argument specifies the metronome of the score. If only one value is given, the metronome will be the same for the entire score. As an alternative, an array of metronome values can be specified, matching the number of rows of the midicents array, thus assigning a different metronome to each staff. The default metronome value is 60 bpm.

3.6 quantization

The amount of quantization of the notes, expressed in midicents. The default value is 50 (i.e. a quarter-tone).

3.7 threshold

This argument, expressed in dB, sets a threshold for the magnitudes: values below this level will be considered as silence, i.e. pauses. The default value is -36 dB.

3.8 dynamics

The dynamics flag can be set to true or false. If true, the output file will display the notes' magnitudes as music dynamics (i.e. from "ppp" to "fff") in the score. The MIDI velocities are converted into music dynamics using the method .veldyn. The default value is false.
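To summarize, the following is a minimal sketch of a complete .writeOMfile call using the keyword interface from Listing 1. All values, including the file name, are invented for illustration and are not taken from the paper's own examples:

// A minimal illustrative sketch combining the arguments described above.
// Only the argument names come from Listing 1; all values are invented.
o = SuperOM.new;
o.writeOMfile(
    fileName: "demo.omi",                       // written next to the .scd file
    midicents: [
        [6000, [6400, 6700], 7200],             // staff 1: C4, an E4+G4 bichord, C5
        [4800, 5500, 5900]                      // staff 2: C3, G3, B3
    ],
    magnitudes: [[-6, -9, -3], [-12, -9, -6]],  // in dB here; MIDI velocities (0-127) also work via .dbvelSmart
    rhythmTree: [[1/4, 1/8, -1/8, 1/8], [1/4, -1/8, 1/8, 1/4]], // negative values are rests
    metronome: 90,                              // a single value applies to all staves
    quantization: 50,                           // 50 midicents = quarter-tone grid
    threshold: -36,                             // magnitudes below this become pauses
    dynamics: true                              // print ppp...fff derived from the magnitudes
);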

4. EXAMPLES

In the following section, a number of examples are provided to demonstrate potential use cases for SuperOM. All of the example .omi files, as well as the corresponding XML output, are available for download from https://tinyurl.com/9j2fba5p.

Listing 2 shows the most basic example of SuperOM: the array pitches contains a list of notes and a chord, everything expressed in midicents; the array rhythm contains a list of durations and pauses. Line 5 of Listing 2 produces an .omi file with the name "example0.omi" that contains the music material specified in the two arrays. The result is a file that, once imported in OM, will look like the one displayed in Figure 2.

1 var pitches = [6000, [6200, 6550, 6800], 7000];
2 var rhythm = [1/2, -1/6, 2/6, -2/5, 3/5];
3
4 o = SuperOM.new;
5 o.writeOMfile("example0.omi", [pitches], rhythmTree: rhythm, metronome: 144);

Listing 2. Basic usage of SuperOM.

Figure 2. Instance of an OpenMusic POLY class. The figure shows the content of the file produced by the code shown in Listing 2.

Listing 3 shows another simple usage of the class, this time producing a chromatic scale with eighth-tones, starting from C4, as 32nd notes. Please notice that in order to create a score with eighth-tones, we have to specify the correct quantization in writeOMfile, using quantization: 25.

var pitches = (6000, 6025..7200);
var rhythm = {1/32}.dup(pitches.size);
o = SuperOM.new;
o.writeOMfile("example1.omi", [pitches], rhythmTree: rhythm, metronome: 144, quantization: 25);

Listing 3. A chromatic scale with eighth-tones starting from C4.

Listing 4 shows a simple variation of the previous example, adding random magnitudes and printing them in the score, with the flag for the argument dynamics set to true. The result imported in OM will look like the one displayed in Figure 3.

var pitches = (6000, 6025..7200);
var mags = {rrand(-18, -3)}.dup(pitches.size);
var rhythm = {1/32}.dup(pitches.size);

o = SuperOM.new;
o.writeOMfile("example2.omi", [pitches], magnitudes: mags, rhythmTree: rhythm, metronome: 144, quantization: 25, dynamics: true);

Listing 4. A chromatic scale with eighth-tones starting from C4, with dynamics.

Figure 3. Instance of an OpenMusic POLY class showing the content of the file produced by the code shown in Listing 4.

Using standard features provided in SC, a more complex score can be generated with a few lines of code. Listing 5 shows the code to produce a score with the following properties: five staves with different chromatic scales, and rhythms and pauses chosen from a given set, with each staff having a different metronome. It is worth noticing that in this example the argument pitches is passed without extra brackets (see line 7), since it has already been created as an array of staves (see line 2). The result imported in OM will look like the one displayed in Figure 4.

Figure 4. Instance of an OpenMusic POLY class showing the content of the file produced by the code shown in Listing 5.

1 var staves = 5;
2 var pitches = {(6000, 6050..7200) + (rrand(-5, 5)*100)}.dup(staves);
3 var rhythm = {{[1/4, 1/8, 1/16].choose * [-1, 1].choose}.dup(pitches.shape[1]*2)}.dup(staves);
4 var metronomes = {rrand(102, 144)}.dup(staves);
5
6 o = SuperOM.new;
7 o.writeOMfile("example3.omi", pitches, rhythmTree: rhythm, metronome: metronomes);

Listing 5. Snippet of code to produce a score with five staves with different chromatic scales, and rhythms with pauses chosen from a given set.

Listing 6 shows code that produces a score with eight staves, each of them containing random frequencies taken from a harmonic series, random magnitudes, and random rhythms, quantized to 32nd notes. The frequencies are conveniently translated into midicents thanks to the .cpsmidicents method.

var notes = 200, staves = 8;
var pitches = {{Array.fill(24, {|i| (i+1) * rrand(50, 51)}).choose}.dup(notes)}.dup(staves).cpsmidicents;
var mags = {{rrand(-18, -3)}.dup(notes)}.dup(staves);
var rhythm = {{rrand(0.1, 1).softRound(1/32, 0, 1)}.dup(notes)}.dup(staves);

o = SuperOM.new;
o.writeOMfile("example4.omi", pitches, mags, rhythm, 84, 25, -36, true);

Listing 6. Snippet of code to produce a more complex score with eight staves.

4.1 Writing a score from patterns

Patterns are typical SuperCollider data structures that allow for the management of events in time, specifying the rules for the production of such events [1]. As demonstrated in Listing 7, a musical piece expressed through a pattern can then be translated into a score using the SuperOM. In order to do that, events must be conveniently stored in separate arrays, for example using the method .collect. These can then be used as arguments for generating the output score.

var length = 50, pitches, rhythm;
p = Pbind(
    \midinote, Pxrand([60, 62, 64, 66, 68, 70], inf),
    \dur, Prand([1/16, 1/8, Rest(1/16), Rest(1/8)], inf));

e = p.asStream;
pitches = length.collect({e.next(()).midinote})*100;
rhythm = length.collect({e.next(()).dur});

o = SuperOM.new;
o.writeOMfile("example6.omi", [pitches], rhythmTree: rhythm, metronome: 144);

Listing 7. Writing a score from a pattern.

4.2 Writing a score from spectral data

Another interesting application of SuperOM is the ability to write a score from spectral data, as seen in Listing 8. This feature can be useful in situations where you want to create a score from a series of frequencies and magnitudes resulting from a spectral analysis process. The strategy used here is based on grouping subsequent notes that have the same frequency and magnitude. Grouping in this context means that rhythmic values are summed together (for example, two 1/8 notes are replaced by one 1/4 note). Notes that have different magnitudes will not be grouped. Magnitudes below the given threshold value are interpreted as silence, meaning that such notes are transformed into pauses.

var freqs = {rrand(400, 500)}.dup(50).cpsmidicents;
var mags = {[-18, -12, -9, -6].choose}.dup(freqs.size);
freqs.postln;
mags.postln;
o = SuperOM.new;
o.writeOMfile("example5.omi", [freqs], magnitudes: mags, rhythmTree: nil, metronome: 144, quantization: 100, threshold: -12, dynamics: true);

Listing 8. Writing a score from spectral data.
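Listing 8 uses random data; as a complement, the following hedged sketch uses deliberately repeated values so the grouping becomes visible in the output. The file name and all values are invented for illustration and are not from the paper:

// Illustrative sketch of the grouping strategy described above (values invented).
// With rhythmTree: nil, SuperOM derives the rhythm from the magnitudes:
// consecutive notes sharing frequency and magnitude are merged (their
// durations summed), and magnitudes below the threshold become pauses.
var freqs = [440, 440, 466, 440].cpsmidicents; // the two opening 440s should merge
var mags = [-6, -6, -9, -20];                  // -20 dB falls below the -12 dB threshold
o = SuperOM.new;
o.writeOMfile("grouping_demo.omi", [freqs], magnitudes: mags,
    rhythmTree: nil, metronome: 60, threshold: -12, dynamics: true);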

5. CASE STUDY – GENERATING A PIANO PIECE USING SUPEROM

The SuperOM has been intensively used by the first author in his artistic practice. The tool has been especially useful since it allows for the creation of transcriptions of material derived from spectral analysis. Below is a short account of a case study in which SuperOM was used in a composition process involving sonification of video material.

SuperOM was successfully used in a joint study carried out together with the pianist and researcher Johan Fröst. This project made use of sonification of a video that was created from multiple video recordings of Fröst playing Debussy's "Reflets dans l'eau". More specifically, the aim of the project was to sonify moving images using a Disklavier piano, a self-playing piano designed by Yamaha 14 . Sonification is defined as the use of non-speech audio to convey information [15] 15 . The video to be sonified was created starting from multiple video recordings of Fröst's performance. The videos were edited together, highlighting musical events and the musical narrative of Debussy's piece. The final merged video was then sonified, and the sonification was used as starting material in the composition process. The aim of the composition process was to create a new piano piece to be played in sync with the original video in a live performance at a concert series in the spring of 2023. The incipit of the piano piece is shown in Figure 5. The overall sonification workflow involved three stages and two different software environments: 1) the video was loaded into Max/MSP, in which video processing took place using, among others, the cv.jit package 16 ; 2) the data was sent to SuperCollider via OSC 17 , where the mappings for the actual sonification were implemented; 3) MIDI messages were generated in SuperCollider and sent to the Disklavier, which played the newly generated piano piece. To establish a strong connection between the video and the generated sound, the sonification used harmonic content derived from the same Debussy piece that drove the creation of the video.

Once the sonification was realized, SuperOM was used to generate an actual score of it. In order to do that, the mappings relative to pitches, note durations, and velocities were stored in separate arrays and used to initialize an instance of SuperOM. The output score was an accurate representation of the sonification: as a matter of fact, the complex metric structures produced by the sonification were completely captured in the score. Once exported to musicXML, the file could be opened without errors in a commercial music notation software, where the composition process continued by selecting and merging musically interesting materials, and by reducing their complexity in order to make them playable. However, this workflow had some limitations. Firstly, despite the score containing all the note velocity information, thus making its MIDI playback sound correct, it did not have any music dynamics printed on the staves. This was solved by improving the code and adding the dynamics flag presented in Section 3. A second obstacle was that the score produced by the SuperOM contained a large number of staves (namely 25), resulting from how the sonification data was stored. This aspect made the score quite impractical to read and difficult to work on, and it required a staves-merging operation, done by hand, in order to achieve a typical piano-looking score. In the future, such a problem could be solved by carefully designing SuperOM methods to collapse many staves into one.

It should be noted that the final product of the process outlined above was not a literal sonification mapping the video input directly to output (i.e. the final piece) in an objective way; the author interacted with the generated material, merging and adapting different generated parts into a final piece, thus disrupting the direct connection between input and output. In other words, sonification was used as a subtask within the composition process, assisting the author's composition process by providing ideas for the final piano piece. This process is somewhat similar to what is referred to as mixed-initiative interaction in the field of Human-Computer Interaction (HCI), in which each agent (human or computer) contributes what it is best suited for at the most appropriate time [16]. Using the mixed strategy outlined above, the piece closes the circle of re-mediation from a piano piece by Debussy, to: 1) a video made with the intent to visualize the musical material of Debussy's piano music; 2) a sonification based on this video, using material from the original piano music; 3) the use of tonal material created in step 2), which in turn served as material for a new composition for the piano; and finally 4) a performance played in real time by a pianist.
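As an illustration of stage 2 of this workflow, the following heavily simplified sketch shows how incoming OSC data might be accumulated into the separate arrays that are later handed to SuperOM. The OSC address, message layout, and mapping choices are all invented for this sketch and are not the authors' code:

// Hypothetical sketch only: accumulate mapping data from incoming OSC
// messages into separate arrays, then write them out with SuperOM.
var pitches = List.new, durs = List.new, vels = List.new;

OSCdef(\videoFeatures, { |msg|
    // msg[1] and msg[2] are assumed to be normalized (0..1) video features
    pitches.add(msg[1].linlin(0, 1, 48, 84).round * 100); // map to midicents
    durs.add([1/16, 1/8, 1/4].choose);                    // invented rhythm choice
    vels.add(msg[2].linlin(0, 1, 20, 110).round);         // map to MIDI velocity
}, '/video/features');

// Later, once enough events have been collected:
// o = SuperOM.new;
// o.writeOMfile("sonification.omi", [pitches.asArray], magnitudes: [vels.asArray],
//     rhythmTree: durs.asArray, metronome: 60, dynamics: true);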

14 https://www.disklavier.com/
15 This definition has later been expanded by Thomas Hermann to: "(...) data-dependent generation of sound, if the transformation is systematic, objective and reproducible (...)"; see https://sonification.de/son/definition/.
16 https://jmpelletier.com/cvjit/
17 https://ccrma.stanford.edu/groups/osc/index.html

Figure 5. Excerpt from the beginning of the piano piece composed starting from a sonification of video images.

6. LIMITATIONS

A limitation of the current version of the SuperOM concerns combining midicents staves of different lengths. As a matter of fact, the midicents arrays must all have the same length, in order to create a final rectangular matrix. One workaround for this issue is to fill the shorter arrays with zeros (a sort of zero padding), thus matching the size of the longest array; see Listing 9. In this way, the additional 0 pitches will be ignored, as long as the rhythm tree doesn't contain rhythm information for them.

var pitches1 = [7200, 7400, 7500, 7600];
var rhythm1 = [1/6, -2/6, 1/4, 1/4, 1/4];

var pitches2 = [6000, 6200, 6550, 6800, 7000, 6800, 5300, 5625, 6378, 6740];
var rhythm2 = [1/2, -1/6, 0, 0, 2/6, -2/5, 3/5, 1/6, -1/6, 0, 0, 1/6, 1/4];

var pitches3 = [5500, [5600, 5950], 5700, 6050];
var rhythm3 = [-1/8, 1/8, 1/8, 1/8];

pitches2.do({pitches1 = pitches1 ++ 0});
pitches2.do({pitches3 = pitches3 ++ 0});

o = SuperOM.new;
o.writeOMfile("example9.omi", [pitches1, pitches2, pitches3], rhythmTree: [rhythm1, rhythm2, rhythm3], metronome: 144);

Listing 9. Generating a score from midicents arrays with different lengths.
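The zero-padding workaround can be automated. Below is a small hypothetical helper (not part of SuperOM) that pads every staff to the length of the longest one, which is sufficient for the technique shown in Listing 9:

// Hypothetical convenience function (not part of SuperOM): pad each midicents
// array with zeros up to the length of the longest one, as in Listing 9.
~padStaves = { |staves|
    var maxLen = staves.collect(_.size).maxItem;
    staves.collect({ |staff| staff ++ Array.fill(maxLen - staff.size, 0) });
};

// Usage with the arrays from Listing 9:
// o.writeOMfile("example9.omi", ~padStaves.value([pitches1, pitches2, pitches3]),
//     rhythmTree: [rhythm1, rhythm2, rhythm3], metronome: 144);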

Interestingly, rhythm trees that contain zeroes make subsequent notes collapse, thus creating chords. This effect is demonstrated in the example in Listing 10, in which two ways of writing the same musical score are compared. As before, the shorter array needs to be zero-padded here as well.

var pitches1 = [6000, [6200, 6550, 6800], 7000, 6800, [5300, 5625, 6378], 6740];
var rhythm1 = [1/2, -1/6, 2/6, -2/5, 3/5, 1/6, -1/6, 1/6, 1/4];

var pitches2 = [6000, 6200, 6550, 6800, 7000, 6800, 5300, 5625, 6378, 6740];
var rhythm2 = [1/2, -1/6, 0, 0, 2/6, -2/5, 3/5, 1/6, -1/6, 0, 0, 1/6, 1/4];

pitches2.size.do({pitches1 = pitches1 ++ 0});

o = SuperOM.new;
o.writeOMfile("example10.omi", [pitches1, pitches2], rhythmTree: [rhythm1, rhythm2], metronome: 144);

Listing 10. Comparison of two different ways of producing the same score in output.

7. CONCLUSIONS AND FUTURE WORK

This paper presents SuperOM, a class that bridges SuperCollider with OpenMusic, thus enabling the generation of complex music scores with a high level of precision. Files generated with SuperOM can be imported and edited in OpenMusic, which allows for collaborative frameworks between the two software environments.

The SuperCollider implementation of SuperOM is a work in progress and is continuously improved. The code is readily available from https://github.com/claudiopanariello/SuperOM. The material provided includes the SuperOM class files and a tutorial file. At the time of writing, the first author is using SuperOM in several projects, for example in the generation of scores from algorithmic compositions realized in SuperCollider, or in the creation of music transcriptions of material derived from spectral analysis of recorded audio (especially audio feedback recordings). This ongoing work will continue to inform the design of SuperOM, allowing it to be iteratively improved over time. The use case presented in Section 5 serves as a formative evaluation of the SuperOM, carried out by the author from a first-person perspective. In the future, there is a need for empirical evaluation with actual users, to identify weaknesses and areas of possible improvement. It is worth mentioning that the music scores discussed in this paper refer to traditional Western notation. There are many situations in which other score notations might be more appropriate. Therefore, a possible future direction might be to design other classes that could allow for non-standard notations, similarly to what Ghisi and Agostini did in extending bach by introducing the dada library [17].

Acknowledgments

The first author would like to express his gratitude to Mauro Lanza, who kindly helped out with the OpenMusic code.

8. REFERENCES

[1] A. Valle, Introduction to SuperCollider. Logos Verlag Berlin GmbH, 2016.

[2] E. Benetos, S. Dixon, Z. Duan, and S. Ewert, "Automatic music transcription: An overview," IEEE Signal Processing Magazine, vol. 36, no. 1, pp. 20–30, 2018.

[3] A. Klapuri and M. Davy, Signal Processing Methods for Music Transcription. Springer Science & Business Media, 2007.

[4] E. Benetos, S. Dixon, D. Giannoulis, H. Kirchhoff, and A. Klapuri, "Automatic music transcription: Challenges and future directions," Journal of Intelligent Information Systems, vol. 41, no. 3, pp. 407–434, 2013.

[5] D. Bouche, J. Nika, A. Chechile, and J. Bresson, "Computer-aided composition of musical processes," Journal of New Music Research, vol. 46, no. 1, pp. 3–14, 2017.

[6] S. Wilson, D. Cottle, and N. Collins, The SuperCollider Book. The MIT Press, 2011.

[7] H.-W. Nienhuys and J. Nieuwenhuizen, "LilyPond, a system for automated music engraving," in Proceedings of the XIV Colloquium on Musical Informatics (XIV CIM 2003), vol. 1. Citeseer, 2003, pp. 167–171.

[8] B. Barros, "LilyCollider and rhythmic structures," Revista Vórtex, vol. 2, no. 2, 2014.

[9] B. Barros, "Music Notation with SuperCollider." [Online]. Available: http://bernardobarros.com/files/lilycollider-sc2013/slides.pdf

[10] A. Valle, "SonoGraph: a cartoonified spectral model for music composition," in Proceedings of the 16th Sound & Music Computing Conference. SMC, 2019, pp. 462–469.

[11] G. Assayag, C. Rueda, M. Laurson, C. Agon, and O. Delerue, "Computer-assisted composition at IRCAM: From PatchWork to OpenMusic," Computer Music Journal, vol. 23, no. 3, pp. 59–72, 1999.

[12] C. Agon, "OpenMusic: Un langage visuel pour la composition musicale assistée par ordinateur," Ph.D. dissertation, Paris 6, 1998.

[13] J. Bresson, C. Agon, and G. Assayag, "OpenMusic: Visual programming environment for music composition, analysis and research," in Proceedings of the 19th ACM International Conference on Multimedia, 2011, pp. 743–746.

[14] A. Agostini and D. Ghisi, "A Max library for musical notation and computer-aided composition," Computer Music Journal, vol. 39, no. 2, pp. 11–27, 2015.

[15] G. Kramer, B. Walker, T. Bonebright, P. Cook, J. H. Flowers, N. Miner, and J. Neuhoff, "Sonification report: Status of the field and research agenda," University of Nebraska–Lincoln, Tech. Rep., 2010.

[16] J. E. Allen, C. I. Guinn, and E. Horvitz, "Mixed-initiative interaction," IEEE Intelligent Systems and their Applications, vol. 14, no. 5, pp. 14–23, 1999.

[17] D. Ghisi and A. Agostini, "Extending bach: A family of libraries for real-time computer-assisted composition in Max," Journal of New Music Research, vol. 46, no. 1, pp. 34–53, 2017.