NIME07 – Concert 3

June 11, 2007

19:00 – Concert 3
Frederick Loewe Theater, NYU

  • PercusBot Study No. 1
    Troy Rogers
  • Cyber Kendang
Yoichi Nagashima
  • Digital Sankirna
    Ajay Kapur, Eric Singer
  • Desconcierto pt2
    Gregory Kowalski, Andrea Pensado
  • Things In An Open Field (for Laser Koto)
    Miya Masaoka
  • Six Axioms
    Randy Jones
  • the electronic unicorn
    Georg Holzmann
  • Them and the Others
    Tom Mays

NIME07 – Concert 2

June 11, 2007

New Works for Commissioned Performers
Kathleen Supové, piano
Todd Reynolds, violin

  • Music For Sextet and Computer
    Cort Lippe

Kathleen Supové

  • Private Lesson
    Eric Lyon
  • Landmine
    Dafna Naphtali
  • Delta Space
    Lukas Ligeti

  • Digits
    Neil Rolnick

Todd Reynolds

  • Grande Étude Symphonique
    Phil Kline

  • Beginner’s Mind
    Todd Reynolds
  • September Canons
    Ingram Marshall
  • For Reynolds
    Andreas Weixler, Se-Lien Chuang
  • Three Pieces to End One Half of a Concert
    Todd Reynolds
  • Requiem
    Todd Reynolds

  • Lasso and Corral: Variations on an Ill-Formed Meter
    Dan Trueman; Ken Thomson, bass clarinet

Arduino Workshop

June 11, 2007

Arduino is an open-source physical computing platform based on a simple I/O board, and a development environment for writing Arduino software. The Arduino programming language is an implementation of the language used by Wiring, while the Arduino environment is based on Processing. Arduino can be used to develop interactive objects, taking inputs from a variety of switches or sensors and controlling a variety of lights, motors, and other outputs. Arduino projects can be stand-alone, or they can communicate with software running on your computer (e.g. Flash, Processing, Max/MSP). Arduino received an Honorary Mention in the Digital Communities section of the 2006 Prix Ars Electronica.
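
For example (my own minimal sketch, not from the workshop): the computer side of an Arduino-to-software link can be as simple as reading lines from the serial port with Python and pyserial. The port name, and the assumption that the Arduino sketch does Serial.println(analogRead(A0)), are mine:

    import serial  # pyserial: pip install pyserial

    # Port name is an assumption; on Windows it might be 'COM3'.
    port = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=1.0)

    while True:
        line = port.readline().decode('ascii', errors='ignore').strip()
        if not line:
            continue  # read timed out with no data
        try:
            value = int(line)  # one analogRead() value per line, 0-1023
        except ValueError:
            continue  # ignore malformed lines
        print('sensor value:', value)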

Paper Session 6 – Systems and Standards

June 10, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

    * New Musical Interfaces in Context: Sonic Interaction Design in the Urban Setting
Karmen Franinovic, Yon Visell

– recorded sounds are played back by spinning black dishes
– work in public space: it’s about interaction design
– social listening, exploring what such a system is about, negotiating with strangers about what the soundscape is
– interactive installation in public space: various users, platforms for discovery of public space, turning production and consumption into a more creative space, people discover new and unintended experiences
– design by discovery (urban probes)
– these instruments are an alternative medium to let passers-by appropriate public space
– expanded performance
– social and political questions: it’s about sociality as expanded (musical) performance
– see also the Situationist movement: the citizen is pushed into a creative environment
– if you want to draw a lot of people, make something that is visually striking
– questions of context: sound is an ecologically sensitive medium; performance, sensory perception and timbre perception are cultural acts; musicology offers cultural settings for this type of art

    * Extended Applications of the Wireless Sensor Array (WISEAR)
David Topper, Virginia Center for Computer Music

– general idea: using Linux-based SBCs in performance, robotics, etc.
– Linux is close to OS X
– Linux is open source
– desire for something more flexible / universal / easier to use than BASIC Stamps & PIC chips
– first generation WISEAR: built on the TS-5500 board from Technologic Systems (www.embeddedarm.com)
– second generation board: less power hungry: 6 sensors, data transmission via off-the-shelf wireless
– but still too many problems: now we’re moving to Gumstix (www.gumstix.com)
– Gumstix: built-in audio, ADC, DIO, GPIO, modular platform with extensive features: USB, Bluetooth, ct, etc.; 600 MHz, wiki docs, better user base, very small


    * CELERITAS: Wearable Wireless System
Giuseppe Torre, Mikael Fernstrom, Brendan O’Flynn

– a wearable wireless sensor unit for the purpose of live dance performance
– main tasks: low latency (~15 ms with eight nodes), low cost, allowing solo and group performances, usability (programmable by composer and choreographer), reliability (lasting through a whole performance)
– signal path: 25 mm WIMU, base station, serial port, computer (driver), Max/MSP object (external), mapping, AV output
– measures 50 x 25 x 25 mm, weighs 30 g, 3.4 V lithium-ion rechargeable battery (approx. 3 h life)
– FIFO: first in, first out
– the driver is compiled for Windows and Mac and runs with Max/MSP and Pd
– objective: a user-friendly interface
– to create a 3D virtual interface instrument around the body of the dancer. For this we need the exact position of each node; currently they only have relative positions.
– negative: strong jitter, which is solved by averaging in Max/MSP
– results: acceleration, speed, distance, orientation, angles, quaternion, etc.
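
A sketch of what the jitter-averaging could look like (my own toy Python version, not the authors’ Max/MSP patch): a moving average over the last N samples. Note the trade-off: the wider the window, the smoother but also the more delayed the signal.

    from collections import deque

    class MovingAverage:
        """Smooth jittery sensor samples with a fixed-length window."""
        def __init__(self, size=8):
            self.window = deque(maxlen=size)

        def update(self, sample):
            self.window.append(sample)
            return sum(self.window) / len(self.window)

    smooth = MovingAverage(size=4)
    for raw in [1.0, 0.9, 1.1, 5.0, 1.0, 0.9]:  # 5.0 is a jitter spike
        print(round(smooth.update(raw), 3))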

    * Defining a control standard for easily integrating haptic virtual environments with existing audio / visual systems
Stephen Sinclair, Marcelo Wanderley (Input Devices and Music Interaction Laboratory, McGill University, Montreal)

– opening up the ability to easily play with haptic (force feedback) systems
– force feedback haptics, virtual environments, communication
– sensors: force feedback using motors = sense of touch (it feels as if you can touch an object in virtual space)
– haptic devices: SensAble Phantom Omni, Force Dimension Omega, Novint Falcon (now approx. $200!!!!!), ERGOS, MPBT Freedom 6S
– previous work on virtual instruments: ACROE: CORDIS-ANIMA; Mulder et al.: virtual musical instruments; Verplank: The Plank; Gillespie; etc.
– integrating VR with audio: many libraries support graphics but not sound
– ideal: a physical model of an object from which the audio output is extracted
– such models should be realistic, accurate and energy preserving, but they are computationally demanding, difficult to integrate, and impose a synthesis model
– asynchronous architecture: physical dynamics, haptic device, audio synthesis, audio output (haptic rate ~100 Hz)
– DIMPLE: Dynamically Interactive Musically Physical Environment (download URL is in the paper) = open source
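
To illustrate the asynchronous architecture (a toy Python sketch of my own, not DIMPLE’s code): the physical dynamics run in their own thread at a slow, haptic-ish rate, while a separate consumer (standing in for the audio synthesis side) polls the shared state at its own rate.

    import threading, time

    shared = {'pos': 0.0}
    lock = threading.Lock()

    def physics_loop(duration=0.5, dt=0.01):
        # Toy spring dynamics stepped at ~100 Hz, the haptic rate above.
        pos, vel = 1.0, 0.0
        for _ in range(int(duration / dt)):
            acc = -40.0 * pos - 0.5 * vel  # spring plus damping
            vel += acc * dt
            pos += vel * dt
            with lock:
                shared['pos'] = pos
            time.sleep(dt)

    def control_loop(duration=0.5, dt=0.05):
        # The "audio" side reads the latest physics state asynchronously.
        for _ in range(int(duration / dt)):
            with lock:
                print('amplitude from physics: %.3f' % abs(shared['pos']))
            time.sleep(dt)

    t = threading.Thread(target=physics_loop)
    t.start()
    control_loop()
    t.join()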

    * Chroma Palette: Chromatic Maps of Sound As Granular Synthesis Interface
Justin Donaldson, Ian Knopke, Chris Raphael
  (Indiana University School of Informatics)
– Chroma Palette: a granular synthesis interface
– granular synthesis: Gabor, Roads, Truax, Xenakis
– selection of grains, density, pitch, length, envelope; combination and re-synthesis
– grains from 1-50 ms
– a new way to sort the sounds: the chromatic characteristics of the individual grains (what pitch do they represent?)
– uses FFT analysis and augmented chroma
– converts the chroma data into a 2D space
– the maps can then be used for granular synthesis (for grain selection)
– client side: the interface is coded in Flash; only grain index and onset information need to be sent to the synthesis engine
– issues: MDS (multidimensional scaling) can involve error in the representation, and it can only handle a few thousand grains per map
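
A rough reconstruction of the chroma-plus-MDS pipeline as I understood it (my own Python sketch, not the authors’ code): compute an FFT magnitude spectrum per grain, fold the bin energies into 12 pitch classes, then project the 12-dimensional chroma vectors to 2D with MDS.

    import numpy as np
    from sklearn.manifold import MDS

    SR = 44100

    def chroma(grain):
        """Fold an FFT magnitude spectrum into a 12-bin pitch-class vector."""
        spectrum = np.abs(np.fft.rfft(grain * np.hanning(len(grain))))
        freqs = np.fft.rfftfreq(len(grain), 1.0 / SR)
        bins = np.zeros(12)
        for f, mag in zip(freqs[1:], spectrum[1:]):  # skip the DC bin
            midi = 69 + 12 * np.log2(f / 440.0)
            bins[int(round(midi)) % 12] += mag
        total = bins.sum()
        return bins / total if total > 0 else bins

    # Fake corpus: 200 grains of ~23 ms (1024 samples) at random pitches.
    rng = np.random.default_rng(0)
    t = np.arange(1024) / SR
    grains = [np.sin(2 * np.pi * rng.uniform(100, 1000) * t) for _ in range(200)]

    vectors = np.array([chroma(g) for g in grains])
    xy = MDS(n_components=2, random_state=0).fit_transform(vectors)
    print(xy.shape)  # (200, 2): one map coordinate per grain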

Panel Discussion – Music and Robotics

June 9, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

panelists: Trimpin, Godfried-Willem Raes, Gordon Monahan and Jacques Rémus
http://www.gordonmonahan.com/
http://en.wikipedia.org/wiki/Trimpin
http://www.mecamusique.com/
http://en.wikipedia.org/wiki/Godfried-Willem_Raes

Q: Is musical robotics a novelty or something really serious?
JR – it gives you an orchestra without paying musicians
– it’s about making acoustic music with electronics
– it’s about building interfaces for real instruments, not synthesizers and computer music.
– to have pure acoustic sound: music made for machines that no loudspeaker can reproduce and no people can play
GWR – It’s not something new, but what is new is the programmability: this is a real novelty
GM – we’re inventors that other musicians pick up,.. there is a fine line then between us and the other musicians
T – All composers composed for machines but it was not accepted and it never came to the public. But all composers composed for it.
Q: there is a current explosion of interest, at least here in NY,.. is this short-lived or the future?
GWR – I don’t feel an explosion: there are few people doing serious work in this field. There is a lot of amateurism. In order for something to be a robot it must have some form of sensor, either of its internal state or, more advanced, of the environment in which it ‘lives’. That makes it into a robot. It’s not just something mechanical.
Q: well, there is ArtBots etc. and some more teaching in the field
GWR: I’m sure it’s gonna grow
JR – We’re a lot of things at the same time: multidimensional artists: time (music), space (visual), performers (stage)
T – interdisciplinary institutions in the university failed because the different departments didn’t understand each other.
Q: tips to get started
GWR – get the book “The Art of Electronics”, and also get a good workshop
T – start simple,.. with small things and then grow,.. do it all yourself, get nothing “off the shelf”, be patient
JR – first a dream, then the tools
GM – you can also buy things off the shelf,.. for me it works
GWR: get the tool for the purpose instead of turning the tool into the purpose
Q: How do you make it so human?
GWR: there is nothing as human as a machine

Keynote Session 3 – Teresa Marrin Nakra: Insights on Musical Expression: Conductors, Musicians and Audiences

June 9, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

– What exactly is musical expression? It really depends on your point of view. As a classical musician you’d think about great artists. What do great artists do that is directly communicative? They express emotions. It has everything to do with emotion (and also personality, but mostly emotion).
– Depending on the conductor, a piece can be difficult or easy to play: it has to do with gestures and emotional expression, even on the face. Some conductors have a particular magic that can lift up the orchestra. This idea is the core of musical expression. It is about shaping the expression.
– The other aspect of musical expression and emotion is being able to connect with an emotion that is inside of you. You’ve got to have something to express. (Example of how, after 9/11, her father started singing 7 days a week instead of 2.)
– Music as a form of communication. Start with Claude Shannon’s “A Mathematical Theory of Communication” (1948) –> information source, transmitter, signal, noise source, received signal, receiver, message, destination. Think of the source as a carrier wave and the transmitter as modulation to cast this in a musical model.
– Theories of emotion and music: Plato and Aristotle, Leonard Meyer, Leonard Bernstein, Marvin Minsky, Manfred Clynes, David Huron, Daniel Levitin.
– Promising quantitative methods: used to verify our theories. Analysis of Herbert von Karajan’s pulse rate while conducting and while piloting a jet aircraft. Heartbeat seems to be connected to emotional arousal.
– Tod Machover’s Brain Opera (with Paradiso) –> the digital baton
– Then she built the device called the “Conductor’s Jacket” (1997 – 2000) with R. Picard (her dissertation about this is available online)
– Research collaboration with Levitin and McAdams with the Boston Symphony Orchestra as a follow-up to the Conductor’s Jacket.
– Also analysis of the vertical position of conducting gestures
– development of a conducting system for education applications (Conducting Jacket)
– Application for entertainment: Boston children’s music exhibition: “You are the Conductor”
– Nintendo Wii Orchestra: the promotional video shows the faces of the players, while videos of the system in use show only the program and a moving hand.
– The Digital Orchestra League: http://www.digitalorchestraleague.com. A Turing test for the orchestra machine. David Smith: “We are 5% there… does Moore’s Law apply to digital orchestras?”
– Future goals: research on emotional contagion and microexpression (Ekman and Condon). Also more collaborations with orchestras and classical musicians

Keynote Session 2 – Trimpin

June 9, 2007

Trimpin, a sound sculptor, composer, inventor, is one of the most stimulating one-man forces in music today. A specialist in interfacing computers with traditional acoustic instruments, he has developed a myriad of methods for playing trombones, cymbals, pianos, and so forth with Macintosh computers. He has collaborated frequently with Conlon Nancarrow, realizing the composer’s piano roll compositions through various media. At the 1989 Composer-to-Composer conference in Telluride, Colorado, Trimpin created a Macintosh-controlled device that allowed one of Nancarrow’s short studies for player piano to be performed by mallets striking 100 Dutch wooden shoes arranged in a horseshoe from the edge of the balcony at the Sheridan Opera House. He also prepared a performance of Nancarrow’s studies at the Brooklyn Academy of Music for New Music America in 1989.

Trimpin was born in southwestern Germany, near the Black Forest. His early musical training began at the age of eight, learning woodwinds and brass instruments. In later years he developed an allergic reaction to metal which prevented him from pursuing a career in music, so he turned to electro-mechanical engineering. Afterwards, he spent several years living and studying in Berlin where he received his Master’s Degree from the University of Berlin.

Eventually he became interested in acoustical sets while working in theater productions with Samuel Beckett and Rick Cluchey, director of the San Quentin Drama Workshop. From 1985-87 he co-chaired the Electronic Music Department of the Sweelinck Conservatory in Amsterdam.

Trimpin now resides in Seattle where numerous instruments that defy description adorn his amazing studio. In describing his work, Trimpin sums it up as “extending the traditional boundaries of instruments and the sounds they’re capable of producing by mechanically operating them. Although they’re computer-driven, they’re still real instruments making real sounds, but with another dimension added, that of spatial distribution. What I’m trying to do is go beyond human physical limitations to play instruments in such a way that no matter how complex the composition of the timing, it can be pushed over the limits.”

Trimpin on Wikipedia

Paper Session 4 – Timing, Motion and Rhythm

June 8, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

* New Interfaces for Popular Music Performance
Roger Dannenberg
– using computers to augment music performance: tape music, click tracks, computer accompaniment, interactive improvisation systems, new instruments; popular music is relatively unexplored: too common? too technical? huge potential market, some very challenging problems.
– Popular music characteristics: generally a steady tempo, generally a fixed score, somewhat flexible structure; drums, guitar and keyboards are not playing fixed parts. This genre is interesting because there is enough structure to be predictable and understandable, and enough variation to require realtime interaction.
– How can computers augment performance of popular music? Play additional parts, assist performers with technical parts, create new musical material, assist with rehearsals, assist with sound reinforcement and digital audio effects.
– Research framework: where is the beat? Beat tapping interfaces, tempo and phase estimation, merging multiple sources of information. Also data preparation: editing, mixing, annotation of scores with performance info. Also: where are we in the song? Listening to chord changes, melody spotting, interfaces to cue score location, visual display integrated with musical notation. Sound generation: realtime time stretching for synchronization, sound synthesis from scores and lead sheets, computer-assisted sound reinforcement/mixing.
– Online adaptation: coordination of note attack times, learning beat phase relationships, dynamics / balance / mixing, learning charts from rehearsal
– Prototype goal: to augment the horn section in a rock band; demands: control by one musician, high quality, time scaling for tempo adaptation, tap interface to get tempo, extra taps to set phase and cue entrances.
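
For the “where is the beat?” part, a tap-tempo estimator is the simplest building block; a minimal sketch of my own (assuming taps arrive as timestamps in seconds). Taking the median inter-tap interval keeps one sloppy tap from throwing off the estimate.

    def tempo_from_taps(tap_times):
        """Estimate BPM from a list of tap timestamps (in seconds)."""
        if len(tap_times) < 2:
            return None
        intervals = sorted(b - a for a, b in zip(tap_times, tap_times[1:]))
        ioi = intervals[len(intervals) // 2]  # median inter-onset interval
        return 60.0 / ioi

    print(tempo_from_taps([0.00, 0.50, 1.01, 1.50, 2.00]))  # -> 120.0 BPM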

* Towards Rhythmic Analysis of Human Motion using Acceleration-Onset Times
Eric Lee, Urs Enke, Leo de Jong
– Motivation: music is a rhythmization of sound: whether this is true or not, rhythm is at least important
– Rhythm pattern: a repeating series of accentuated impulses separated by time intervals.
– Goal: attach accelerometers to people and extract rhythmic notions from the input data
– Related work: dance-movement analysis: Paradiso’s “DanceShoe”, Griffith’s “LifeFoot”, automatic beat detection tools.
– We try to extract beat information from accelerometers, so no audio data analysis for beat extraction. We want to do this all in real time.
– The Algorithm: sensor signal, movement detection, impulse sequence generation from movement detection, interval analysis, frequency analysis –> data fusion from the two analyses, impulse folding from impulse sequence and pattern structure, impulse clustering –> rhythm
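
The “movement detection –> impulse sequence” front end can be sketched as threshold crossings on the acceleration magnitude (again, my own toy version, not the authors’ algorithm); interval analysis on the returned onset times would then yield the rhythm pattern.

    import numpy as np

    def acceleration_onsets(acc, rate=100.0, threshold=1.5, refractory=0.2):
        """Return onset times (s) where |acc| rises above the threshold.

        acc: array of shape (n, 3), one accelerometer sample per row.
        refractory: minimum spacing between impulses, against double triggers.
        """
        mag = np.linalg.norm(acc, axis=1)
        onsets, last = [], -refractory
        for i in range(1, len(mag)):
            t = i / rate
            if mag[i] >= threshold > mag[i - 1] and t - last >= refractory:
                onsets.append(t)
                last = t
        return onsets

    # Synthetic test: a quiet signal with two sharp impulses.
    acc = np.zeros((300, 3))
    acc[50] = acc[200] = [0.0, 0.0, 2.0]
    print(acceleration_onsets(acc))  # -> [0.5, 2.0]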

* nJam user experiments: enabling NMP (networked music performance) from milliseconds to seconds
Nicolas Bouillot
In the domain of remote real-time musical interaction, end-to-end latency is a well-known problem. Today, the main explored approach is to keep it below the musicians’ perception threshold. In this paper, we experiment with another approach, where end-to-end delays rise to several seconds but are computed in a controlled (and synchronized) way depending on the musical piece’s structure. We implement a prototype called nJam and perform user experiments to show how this new kind of interactivity breaks the usual end-to-end latency bounds while remaining user friendly.

* Ashitaka: an audiovisual instrument
Niall Moody, Dr. Nick Fells, Dr. Nicholas Bailey
– Aim is to create an audiovisual instrument whose output is perceived as an audiovisual whole, with audio and visuals not easily separated
– based on Michel Chion’s “Audio-Vision: Sound on Screen” (synchresis)
– synchresis is based on motion: objects that we see moving usually make a sound.
– metaphor based mappings and perception based mappings
– sound and image influence each other
– performer’s gestures are mapped to the audiovisual parameters
– X3D is a virtual world file format, the successor to VRML
– interface is based on claw-based gestures
– sensors: stretch, twist, 4x force sensors, accelerometers
– visual: a system of gravity objects
– audio: Tao physical modeling language, single string model as primary synthesis method

* Percussion instruments using realtime convolution: Physical controllers
Roberto Aimi

– Physical controllers: drum pad, frame drum, brushes, bass drum, cymbals
– convolution latency overcome by increasing FFT window sizes across the impulse response (smallest blocks first) –> effective delay of e.g. 64 samples
– Using real modified percussion instruments as controllers for other sounds
– using non-linear waveshaping in order to emulate for example the non-linear properties of a real cymbal
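
The latency trade-off exists because FFT convolution can only emit a block of output once a full block of input has arrived. Aimi’s actual scheme is more refined, but a plain uniform-block overlap-add sketch (mine, in Python) already shows why the effective delay equals one input block, 64 samples here:

    import numpy as np

    def block_convolver(ir, block=64):
        """Stream convolution with `ir`, emitting one block at a time."""
        n = 1 << int(np.ceil(np.log2(block + len(ir) - 1)))  # FFT size
        IR = np.fft.rfft(ir, n)
        tail = np.zeros(n - block)

        def process(x):  # x: one block of input samples
            nonlocal tail
            y = np.fft.irfft(np.fft.rfft(x, n) * IR, n)
            y[:len(tail)] += tail      # overlap-add the previous tail
            tail = y[block:].copy()
            return y[:block]           # ready after `block` input samples
        return process

    ir = np.random.randn(2048) * np.exp(-np.arange(2048) / 400.0)  # fake IR
    conv = block_convolver(ir, block=64)
    out = np.concatenate([conv(np.random.randn(64)) for _ in range(32)])
    print(out.shape)  # (2048,) samples, produced 64 at a time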

Session 3 – Philosophical, Historical and Pedagogical Issues

June 8, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

* Erkki Kurenniemi’s Electronic Musical Instruments of the 1960’s and 1970’s
Mikko Ojanen, Jari Suominen, Titti Kallio

MISSED

* The Acoustic, the Digital and the Body: A Survey on Musical Instruments
Thor Magnusson, Enrike Hurtado Mendieta

– University of Sussex, Huddersfield University
– research of interactive modes in musical software
– free and opensource software is on the website
– propagating opensource software and the sharing of knowledge
– Research focus: dual semiotic stance of the user of the software
– designer of creative software has to be aware of this fact
– What does it mean to be a consumer of musical software?
– does it make sense to talk about “software interpretation”?
– environment in SuperCollider: various instruments
– you have the ability to code in realtime
– interested in the software as the “neuro instrument” of the software
– people can create and modify their instruments
– what is a digital instrument? –> people have different opinions; here we did not try to define it
– focus: control interaction, instrument entropy, affordance and constraints, creativity: the epistemic tool as the prime mover
– phenomenological: based on experience
– Participants: mailing lists for audio programming languages, conservatories, universities and orchestras
– 210 replies of which 9 were female
– linux (45), osx (88), windows (105), average age 45(!)
– acoustic instruments positive: tactile feedback, limitations are inspiring, traditions and legacy, depth, instrument becomes 2nd nature, embodied experience, no latency, easy to express mood, extrovert state when playing
– acoustic instruments negative: lacking in range, no editing, no memory or intelligence, prone to cliche playing, too much tradition, no experimentation in design, inflexible, less microtonalities or tunings, no inharmonic spectra
– digital instruments positive: free from traditions, experimental, any sound and interface, freedom in mapping, designed for specific needs, automation and intelligence, good for composing, easier to get into, not as limited to tonal music
– digital instruments negative: lacking in substance, no legacy, no haptic feedback, latency, disembodied experience, lacking social conventions, slave to the historic, limitation to the acoustic, introvert state when playing
– Conclusion: people work with the best of both worlds and design around the constraints of each; the digital combines playing and composition in one; the entropy of acoustic instruments is important; open source: people stress the importance of freedom in expression, open standards, etc.

* Ten Years of Tablet Musical Interfaces
Michael Zbyszynski, Matthew Wright, Ali Momeni

– UC Berkeley Center for New Music and Audio Technologies
– interfaces with a future, interfaces with a past: “towards a theoretical formulation of the Long New”
– What makes a good musical controller? Advantages of standard controllers: low cost and availability, leading to redundancy and replaceability / general characteristics: high resolution output data, fine temporal accuracy, multiple axes of control.
– Tactile reference: the player touches it (haptic feedback), spatial coordinates are absolute, leverages fine motor control and writing/drawing experience, leaves the other hand free to do something else
– Precedents: many, including Xenakis’ UPIC system and the Boie/Mathews/Schloss Radio Drum
– opensoundcontrol.org: an OSC wrapper between controllers and sound-making processes. A Max patch documents this and opens it up for remapping and changes
– Matt Wright: interactive instruments, he makes performance templates from samples of recordings of real players
– Wacom objects for Max/MSP
– Ali Momeni: parameter interpolation space, multi-touch sensors (wacom and contact mic),
– temporal resolution and accuracy leaves something to be desired
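
The OSC-wrapper idea is easy to reproduce today; a hedged Python sketch with the python-osc package (the address pattern and parameter list are my invention, not CNMAT’s actual namespace):

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient('127.0.0.1', 9000)  # synth listening on port 9000

    def send_pen(x, y, pressure, tilt_x, tilt_y):
        # Hypothetical address pattern bundling the tablet state per event.
        client.send_message('/tablet/pen', [x, y, pressure, tilt_x, tilt_y])

    send_pen(0.25, 0.75, 0.9, -0.1, 0.0)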

* Expression and Its Discontents: Toward an Ecology of Musical Creation
Michael Gurevich, Jeffrey Treviño

– dominant model of creation –> ecological model of creation
The Dominant model of creation in the NIME discourse:
– Defining creation through expression: “deviation” and “deformation”: implied determinate artistic content. Is there something else that needs to be expressed through music than sounds? Composer: creation; performer: expression. Where is the interpretation?
– Are there expressive interfaces? In the NIME discourse we locate expression in the interface and quantify expression. Is expression a unified flow of quantities?
– “Performers communicate musical expression to listeners by a process of coding. Listeners receive musical expression by decoding” (Poepel)
– Experimentalism as non-expressive creation: “the shortest path between two people is not a straight line” (Earle Brown)
– The medium as active participant
– Relationships between composers, performers and listeners: any configuration may exist. It’s also about context and surrounding elements and influences, economics, authorship, etc. Of paramount importance is that expression is an option.
– Applications: imitation of expression by machines, interfaces/mappings to facilitate traditional expression, develop new expressive cues within the text/act model. We should question expression as the goal.

* Live Coding Practice
Nick Collins (a.k.a. “Click Nilson”)

– makes his presentation on the fly, not in PowerPoint: he uses SuperCollider instead, generating a clock for how much time he has left, then generating a random number to decide how many quotes he will show. What is live coding? Definition: when musicians and artists can express themselves immediately through a program. “Humans make dynamic cogs within the threads of rule systems which rewrite themselves”
– online live coding performance video: “Study in Keith”
http://www.toplap.org
– In the paper he goes through pedagogical issues and suggest practice and exercises.
– What about live coding together with live musicians? Live Coding cards can inform musicians what to do.
– Is live coding a scene by now?
– skill acquisition of violin: 3 hours a day for 10 years,… how much do you have to practice to become a live coder?
– “Teach Yourself Programming in Ten Years” (website)
– question: where do you draw the line between what you’ll prepare at home and what you do live? What is considered cheating in the scene?


* Natural Interfaces for Musical Expression: Physiphones as primordial Infra-Instruments
Steve Mann

– Hydraulophones as Physiphones
– organology (ethnomusicology): strings, percussion, wind (strings and percussion are more similar to each other than to wind)
– geophones make sound from matter in its solid state; wind instruments make sound from matter in its gaseous state
– idiophone (3D solid), membranophone (2D solid), chordophone (1D solid), aerophone (gas), electrophone (informatic); note again that the first three are more similar to each other than to the last two.
– there can also be a physics-based organology: solid (1, 2 and 3D), liquid, gas, plasma, informatic; the Greek names correspond to earth, water, wind, fire and quintessence (idea)
– So what is the complete orchestra?
– presents the “self-cleaning keyboard” a water-based flute in a public park
– presents the “global village fountain and immersive multimedia”: physiphones, electric xylophones, hyper-hydraulophone, cyborg instruments.
– presents “splash page”: a waterfall musical instrument
– The KEY to good music is to PLAY in the water

* Wireless sensor interface and gesture-follower for music pedagogy
Frederic Bevilacqua, Fabrice Guedy, Emmanuel Flety, Nicolas Leroy, Norbert Schnell

– Wireless Interface: requirements: compact size and weight, handheld interfaces, augmented string interfaces
– Custom XBee digitizer: 6 inputs, 10 bits, 5 ms / Ethernet receiver/base unit / OSC-compliant data / Li-Po battery
– 5 output sensor (sparkfun combo board)
– Parallel use of units is possible
– Examples: dance (it’s wearable), augmented instruments
– gesture follower: analyzes data in real time and compares it to already learned gestures,.. the device learns these gestures (you’ll need a “training phase”)
– The program reacts only to gestural changes, so this means you can do a particular gesture slower or faster, possibly speeding a coupled sound up or down,… or controlling any other parameter
– This device can be used for training conductors
– Preliminary studies encouraging: scenarios stimulating the interaction between theory and practice, etc.
– gesture follower: free download at http://ftm.ircam.fr
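
To illustrate the following idea (not IRCAM’s implementation, which is HMM-based and far more refined): an open-ended DTW alignment of a partial live gesture against a learned template also tells you how far along the template you are, which is what lets a coupled sound speed up or slow down with the gesture.

    import numpy as np

    def follow(template, live):
        """Toy gesture follower: align a partial live gesture (1-D samples)
        against a template; return (cost, progress through the template)."""
        T, L = len(template), len(live)
        D = np.full((L + 1, T + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, L + 1):
            for j in range(1, T + 1):
                cost = abs(live[i - 1] - template[j - 1])
                D[i, j] = cost + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
        j = int(np.argmin(D[L, 1:])) + 1  # best open endpoint in the template
        return D[L, j], j / T

    template = np.sin(np.linspace(0, np.pi, 50))   # the learned gesture
    live = np.sin(np.linspace(0, np.pi / 2, 20))   # same gesture, half done
    cost, progress = follow(template, live)
    print(round(progress, 2))  # ~0.5: halfway through the learned gesture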

NIME07 Concert 1

June 8, 2007

Frederick Loewe Theater, NYU

Disparate Bodies – Pedro Rebelo, Alain Renaud, Tom Davis
Disparate Bodies is a network performance that explores multi-modal remote presence. The performance happens simultaneously in three sites (Belfast, NY and Stanford, California). The stage performance in NY features a laptop musician and two Remote.bots. These are robotic entities that host the physical and musical gestures which are performed by the remote participants in the various locations. They consist of reflective elements which move according to the analysis of each audio stream and project glimpses of 3D rendered imagery around the performance space. The performance is based on the notion of performance entities as reflected by telepresence, robotics and sound systems. As such, each performer (local and remote) has a specific sound diffusion set-up and a chosen 3D avatar which consists of abstract representations of movement and gesture. The performance is improvised with reference to strategies that intend to explore the relationship between sound and movement. The performance uses high quality audio streaming software developed by CCRMA and gesture, robotic and 3D rendering technologies developed at SARC. Instrumentation: Saxophones (Franziska Schroeder), Mousetrap (Mark Applebaum), Piano/Computer (Pedro Rebelo), Remote.bot (Tom Davis) and Frequencyliator (Alain Renaud).
EyeMusic v1.0 – Anthony Hornof, Troy Rogers, Tim Halverson
EyeMusic is a project that explores how eye movements can be sonified to show where a person is looking using sound, and how this sonification can be used in real time to create music. An eye tracking device (the LC Technologies Eyegaze Communication System, http://www.eyegaze.com/) reports where the performer is looking on the computer screen, as well as other parameters pertaining to the status of the eyes. The eye tracker reports these data in real time to a computer program (written using Max/MSP/Jitter). The computer program generates and modifies sounds and images based on these data. While the eye is, in ordinary human usage, an organ of perception, EyeMusic allows for it to be a manipulator as well. EyeMusic creates an unusual feedback loop. The performer may be motivated to look at a physical location either to process it visually (the usual motivation for an eye movement) or to create a sound (a new motivation). These two motivations can work together to achieve perceptual-motor harmony and also to create music along the way. The two motivations can also generate some conflict, though, as when the gaze must move close to an object without looking directly at it, to set up a specific sonic or visual effect. Through it all, EyeMusic explores how the eyes can be used to directly perform a musical composition.
“Let’s Just See What Happens” for Long Tube and gestural interface – Brenda Hutchinson
website


Ménagerie Imaginaire – Zach Settel, Mike Wozniewski, Jeremy Cooperstock
http://www.electrocd.com/bio.e/settel_za.html
http://www.cim.mcgill.ca/~mikewoz/
http://www.cim.mcgill.ca/~jer/
Cyberdidj Australis – Garth Paine, Michael Atherton
For a recently designed telescopic didjeridu, Capybara and Wacom interface. The work explores the shifting fundamentals and overtones of the didjeridu and the possibilities of interactive synthesis. Traditional playing techniques are extended and morphed by and in response to electronic elaboration. The performers explore shifting dronal material, vocalisations, and additive rhythmic patterns to create dramatic shifts in timbre, density and pulse.
“NYZ” by Zanana – Monique Buzzarté, Kristin Norderval
http://www.zanana.org/


KARMA/live – Kurt Hentschlager
http://www.hentschlager.info/
KARMA is a “living” environment, a procedurally changing audiovisual installation. KARMA follows a non-linear progression in which moments of commotion are followed by periods of meditative peace. The installation comes alive via suspended humanoid 3D figures, often seemingly unwell, trembling and oscillating. Their movements emanate a drone-like soundscape. The 3D characters are presented as puppets on strings, instilling them with a familiar yet ambiguous sense of human life, resulting in an indefinite dance of the almost living dead. Karma is incidentally the name of the physics simulation unit within Unreal Tournament, a multiplayer computer game. Karma in UT or similar “3D real-time engines” describes the simulation of physical laws like gravity and kinetic forces. In KARMA / cell, the motions and actions of the 3D characters are synthesized, through additional sound software, into a dynamic soundtrack composed on the fly. Each character is a discrete musical instrument and becomes, through its “motions and emotions,” part of a symphonic, multilayered body of sound. Both the realtime synthesis of the characters’ motions and their sounds build, within the scripted frame defined by the artist, an endlessly changing variety of emotional expressions.

NIME07 Paper Session 2 – Instrument Design

June 8, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

* The Multimodal Music Stand
Dan Overholt, Lance Putnam, John Thompson

– Made for multimodal musical performance: capturing untethered performance gestures that do not directly control the instrument being played. Uses different sensors
– generalized approach towards instrument augmentation
– Capture expressive gestures and map them to synthesis parameters
– multimodal
– video camera, mic, 4 E-field sensors
– Background: instrument for expressive control, Augmented music stands, gestures in music, score following
– Computer Vision Techniques: flute segmentation algorithm, gaze detection using the Viola-Jones face detector, nod detection using LK pyramidal optical flow
– Multimodal detection layer, Sound Synthesis
– Future goals: incorporate more features in gestural control and recognition, expressive gesture tracking
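
Aside (mine, not the authors’ code): the Viola-Jones stage maps directly onto standard OpenCV calls; a minimal Python sketch, assuming OpenCV with its bundled Haar cascades and a webcam. The “gaze toward the stand” test here is a crude proxy; the paper’s actual gaze and nod logic is richer.

    import cv2

    # Viola-Jones face detector shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    cap = cv2.VideoCapture(0)  # default camera
    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cx = x + w / 2.0  # a frontal, centered face as a gaze proxy
            print('face centered:', abs(cx - frame.shape[1] / 2.0) < w)
    cap.release()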

* The T-Stick: from Musical Interface to Musical Instrument
Joseph Malloch, Marcelo Wanderley

– Other stick controllers: Sweatstick, Interval stick/talking stick, MusicPole
– T-Stick motivation: to create a family of digital musical instruments (DMIs), to create a robust physical interface
– A family of DMIs: context in pedagogy, familiarity in performance (also for the audience to understand what you are doing), fits into traditional performance aesthetics
– Metaphor: a vibrating string or bar, like any object that would make sound acoustically,.. so you can swing, throw, beat, shake, etc
– It’s not a physical model
– Any user could pick it up and have a model in their mind of what it’s about,.. later they can get better at the instrument and learn
– Goals: to create a multi-touch sensor surface, to make it more robust, to make a model of a real vibrating object incl excitation and damping etc.
– Multi-touch sensing: an array of discrete capacitive sensors
– 3 axis accelerometer, pressure sensors, contact microphone inside
– cover: to add mechanical strength: shrink-tubing
– Performances: DMI Seminar; McGill Digital Orchestra Project
– From interface to instrument: must be extremely robust, many hours of private practice, simple to operate, hide the sensors, hide the tech; the metaphor pulls it together; multiple performers create a better context for pedagogy

* The Thummer Mapping Project (ThuMP)
Garth Paine, Ian Stevenson, Angela Pearce

Marcs, Comarts,
– The Thummer: how can we turn this into a musical interface?
– Design paradigms: designs for electronic music instruments are often based on reductionist models of user interaction and sound synthesis, derived from research in the fields of human-computer interaction, industrial design and digital signal processing, and lacking musical context
– How many discrete control parameters do trained acoustic musicians normally exercise in a performance? How are these related to the produced sounds?
– Pressure, speed, angle, position: these are the most important control elements of acoustic instruments: how do we translate them to interfaces for electronic instruments?

* HandSketch Bi-Manual Controller: Investigation on Expressive Control Issues of an Augmented…
Nicolas D’Alessandro, Thierry Dutoit

– Context of research: Realtime CALM, nime06 Paris
– From there: realtime control of voice features, dimension based study of expressivity (singing synthesis), intuitive hand-based control of voice textures:
– Voice, Quality, Control
– Voice: VQCLib, Quality:RAMCESS, Control:HandSketch
– Pen-based control (Kyma System) Pushing forward writing skills with pen-based gestures
– Does RT singing synthesis need precise and complex control? –> vibrato is complex, automatic production is difficult to make sound natural, realtime CALM sounded good at NIME06
– pitch, vocal effort, tenseness
– Now start with the control space: realtime improvements in precision, ergonomics and codification
– mapped to angle, pressure and radius
– voice contains articulations impossible to do with shapes (intensity and/or pitch sensors, consonants, etc.); solution: make all controllers asymmetric, use a position-based approach, NPH techniques, etc.
– Works with an FSR (force sensing resistor) network
– mapping strategies: direct, modal (overall control space deformation), spectral (links between dactylemes and phonemes)
– All in one tablet based musical instrument

* Mobile Clavier: A New Music Keyboard for Flexible Key Transposition
Yoshinari Takegawa, Tsutomu Terada, Masahiko Tsukamoto

– Requirement: performers need to show off their virtuosity, so they want a mobile (small) keyboard
– problem: pianos are too large and heavy to use portably
– a mobile clavier with key transpose causes problems and mistakes because the keyboard layout becomes unclear
– Adding additional black keys in between two adjacent white keys can solve the issue.

Keynote Session 1 – Perry Cook: Principles for Controlling Computer Music Designers

June 7, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

Principles for Controlling Computer Music Designers
Keynote Session 1 – Perry Cook, Princeton University, Computer Science (also Music)

Book by Perry Cook: “The beautiful Voice and the Machine”

A follow-up to and update of his paper “Principles for Designing Computer Music Controllers” from the first NIME conference, NIME01

Goals:
1 Revisit the 13 principles
2 What do they mean
3 Are they true today?
4 Add some more principles, based on NIMEs and other work since, and based on teaching

Original Principles:
1 Programmability is a curse
Still true today, easy to add complexity, features, bandwidth
We should make instr. that can be understood, can be learned, can be played, can live on (and programmability works against all these points)
We should make pieces that: are actually performed, are actually listened to (by peers and general audiences)
2 “Smart” Instruments are often not
Still true today, AI (Machine Learning). Machines may learn but don’t let the users know this
3 Copying an instrument is dumb, leveraging expert technique is smart
Leveraging examples: R-Bow and BoSSA, Hyperbow, vBow, Overtone Violin, Many others
4 Some players have spare bandwidth, some do not
Less true today
trumpets have 3 valves, a clarinetist is pretty busy
New Sensors give us new means to sense and map those to musical interesting things as well
5 Make a piece, not an instrument
Still very much true, we should actually perform for audiences on our interfaces and instruments
Ideally we should observe and work with others
6 Instant music, subtlety later
Still true,.. think about the piano and ourselves as infants,.. immediate sound and slowly developing skills.
Think about complexity, learning, retention, persistence, expression and fun
7 MIDI = Miracle, Industry, Designed, Inadequate
Still an easy path to a quick prototype,.. pros and cons,.. but now there is OSC
8 Batteries, Die (command, not an observation)
Things are getting better now,.. we are still waiting for those wind, solar, and hydrogen fuel cells etc.
9 Wires are not that bad (compared to wireless)
This point has definitely changed: 802.11, Bluetooth (Wii, SparkFun), ZigBee, roll-your-own radio
Still, wires are not that bad
[Demonstrates the lettuce shaker,.. accelerometer in a lettuce that controls a shaker algorithm with different sounds]
10 New algorithms suggest new controllers (and mappings)
Still True: PHISEM,.. unprepared piano, PHOLISE/Gaitlab, Scanned Synthesis, PHYSMISM
11 New Controllers suggest new algorithms
Still True, Radio Baton, Jmug, Fglass, P-Ray’s Cafe, Interval: Pork-o-phone, Stick(s), Nukelele,..
12 Existing Instruments Suggest new Controllers
Still true, Cook/Morrill Trumpet, BoSSA, SqueezeVox, Accordiatron, DigitalDoo, COWE, VOMID, Etabla/Sitar, many others
13 Everyday objects suggest good (and amusing) musical controllers
lots of examples,… be creative and think like a child

Some New Principles:
14 More can be better! (but hard)
PLOrk (15+ laptops)
15 Music+Engineering is a great Teaching/Marketing Tool
Public interest, student interest, motivation for Teaching
16 The Younger the student, the more fearless

Conclusions: NIME has grown, we’ve learned and built a lot,.. there is still a lot to do,… new technology and new ideas,.. keep up the work!

NIME07 Paper Session 1 – Controllers and Physical Models

June 7, 2007

JUST TYPED ALONG WITH THE SESSION SO EVERYTHING IS STILL PRETTY MESSY, NO LINKS INCLUDED, AND POSSIBLE FAULTS ARE MY OWN RATHER THAN THE SPEAKERS’…

Controlling a Physical Model with a 2D Force Matrix
Randy Jones, Andrew Schloss

Mystic center. The first goal was intimate control of percussion synthesis. A musical instrument should be alive in the hands of the musician: what comes between the notes, in addition to the notes themselves. The other goal is sonic exploration.

The 2D waveguide mesh was first described by Van Duyne & Smith. The waveguide mesh is implemented as a 3×3 convolution in a Max/MSP/Jitter object.

The 2D force matrix takes care of the input to the waveguide mesh: excitation and damping at the same time. Two sources of data go into the matrix: surface data as well as multitouch data from multitouch controllers. There is continuous sampling of surface pressure, spatially as well as temporally.

Concerts: Schloss, Duran, Mitri Trio: EMF, Real Art Ways & Schloss, Neto, Mitri Trio: CCRMA

Current Goals: Realistic filtering (nonlinear hammer), more aspects of drum modeling, increase controller sampling rate.
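
The 3×3-convolution formulation is easy to reproduce (a numpy sketch of a rectilinear waveguide mesh, my own reconstruction rather than the authors’ Jitter object): each time step convolves the current field with a 4-neighbour kernel and subtracts the previous field.

    import numpy as np
    from scipy.signal import convolve2d

    KERNEL = np.array([[0.0, 0.5, 0.0],
                       [0.5, 0.0, 0.5],
                       [0.0, 0.5, 0.0]])  # 4-neighbour averaging

    def step(curr, prev):
        """One time step of a lossless rectilinear 2-D waveguide mesh."""
        nxt = convolve2d(curr, KERNEL, mode='same', boundary='fill') - prev
        return nxt, curr

    curr = np.zeros((32, 32))
    prev = np.zeros((32, 32))
    curr[16, 16] = 1.0  # strike the membrane once
    out = []
    for _ in range(1000):
        curr, prev = step(curr, prev)
        out.append(curr[8, 8])  # read a "pickup" position
    print(min(out), max(out))  # bounded, drum-like oscillation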

PHYSMISM: A control interface for creative exploration of physical models
Niels Boettcher, Steven Gelineck, Stefania Serafin

Medialogi, Aalborg university, Copenhagen, Denmark

Motivation: physical models are oldschool and they sound like shit,.. it was a challenge

What are the possibilities and boundaries,.. focus on completely new sounds

Design criteria: many different models; replica models sounding like the original; extended replica models; hybrid models (models + instruments); a physical interface: an unusual interface that the audience can understand; the interface should be musical and have a lot of possibilities

4 different models implemented: flute model, PHISM model, Friction model, drum model

crank: a crank controlling a PhISM particle model: rotation speed sets the amount of beans
drum pads; a flute-like interface controlling a tube/string model (extended Karplus-Strong)
Slider – Friction: horizontal and vertical slider with pressure sensor controlling the friction model

2 control stations with 4 parameters, and a patching system combining the 4 different models: you can feed the output of one model into another model

mini sequencer that is not innovative at all


Physical models and musical controllers – designing a novel electronic percussion instrument
Katarzyna Chuchacz, Sile O’Modhrain, Roger Woods

Sonic Arts Research Centre, Queen’s University in Belfast

Existing electronic percussion instruments: Buchla Thunder, Korg Wavedrum, ETabla
Limitations: complexity, extent of control,.. especially difficult are the modeling of large instruments and of nonlinear sounds

Creation of realtime plate-based electronic percussion instruments
high quality sound, range of modeled resonators

Finite difference schemes: the problems are the huge computational requirements and memory access; with those solved, real-time performance is possible, including the recreation of large instruments

Solution: FPGA hardware implementation: it is possible to program the architecture of your system: full flexibility
Why? –> more processing power, parallelism in the algorithm, higher memory access bandwidth, flexibility in terms of interfacing to a range of sensors.
Now it runs “faster than realtime”
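
As a point of reference for what the FPGA parallelizes, a scalar Python sketch of an explicit finite difference update for an ideal stiff plate (my own, with boundary conditions and loss terms omitted; a real model needs both):

    import numpy as np
    from scipy.signal import convolve2d

    LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

    def plate_step(curr, prev, mu=0.04):
        """Explicit step for u_tt = -kappa^2 * laplacian(laplacian(u)).

        mu bundles kappa^2 * dt^2 / h^4; this simple scheme is only
        stable for mu <= 1/16.
        """
        biharm = convolve2d(convolve2d(curr, LAP, mode='same'),
                            LAP, mode='same')
        return 2 * curr - prev - mu * biharm, curr

    curr = np.zeros((24, 24))
    prev = np.zeros((24, 24))
    curr[12, 12] = 1.0  # excite the plate
    for _ in range(500):
        curr, prev = plate_step(curr, prev)
    print(float(np.abs(curr).max()))  # stays bounded for stable mu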

Parameter Space: grid size, Plate size , sample frequency

Parameters mapping: hardware mapping parameters, sound synthesis parameters

Sound world of the model
opportunity to drive the model in a number of ways: many parameters fully open,… possibility to go beyond the constraints existing in acoustic systems

the design approach is based on connecting to the sound of the model and creating a successful interface

What are the range of techniques of a real percussion player?

future work concentrated on observation of real percussionists, sensor system specifications should follow from this


A Force Sensitive Multi-touch Array Supporting Multiple 2-D Control Structures
David Wessel, Rimas Avizienis, Matthew Wright

Gestures and signals: very high-rate motion capture if you work with percussion sensors. Interlink VersaPad semi-conductive touchpad.

Multi-touch is all the rage,.. we should get the data rate up in order to be able to process multi-touch high-definition controllers.

The most compact layout to get all fingers on the pad. Not really multi-touch, but several VersaPads next to each other. The new one has 32 pads.

Data acquisition hardware: daughter-boards consist of 4 to 6 sensors each, analog conditioning, multichannel A/D

72 or 96 variables: the most efficient way is to use just 72 audio channels and only convert them if necessary.

Only a MIDI input, no output: just to turn the MIDI into sample signals alongside the audio.

Yes: reading 147,456,000 bits per second (96 channels × 96 kHz × 16 bits) is cheaper than demultiplexing, up-sampling and converting to floating point on the host CPU.

pressure profiles of the short taps that percussionists use: 9 ms, 14 ms, 18 ms, etc.

Pressure profiles of short taps call for substantially different attacks and curves, even in the first milliseconds.

Zstretch: A Stretchy Fabric Music Controller
Angela Chang, Hiroshi Ishii, Joe Paradiso

Starting point: our hands. Our hands possess rich capabilities for interacting with materials.

Related works: most lack haptic feedback which alters the control loop.

Musical fabrics: mostly about localised places for touching the fabric rather than supporting the many gestures of our hands.

The fabric should support 0 to 20 Newtons: that means it should be robust / haptic expression / the fabric should guide the interaction

Resistive stretch sensors are sewn into the lycra fabric

Mechanical: a tabletop frame that holds the fabric but allows access to all sides of the fabric

Robustness issues: noise from mechanical contacts, drift of thread resistance over time, bouncebacks after a hard pull, fabric fatigue from wear and tear.

Software mapping: playback speed (pitch) and volume,.. plus an interrupting “zing” noise with its own volume control (later considered annoying)

Conclusion: scalable, no electronics in the interaction, supports interaction with the hands, it’s about material properties. Next: better mappings and better materials


Oculog: Playing with Eye Movements
Juno Kim, Greg Schiemer, Terumi Narushima

From the Sonic Arts Research Network, Faculty of Creative Arts, University of Wollongong, Australia

Initially for clinical use, adapted as an expressive performance interface. First performance will be held in July 2007

Interface: a FireWire camera on snow goggles, mounted to capture the eye movements. Up to 120 fps; in performance, 30 fps.

Control is either voluntary or involuntary. Eye movement is mapped to MIDI, implemented using STK.

5 channels of information: horizontal position, vertical position, etc… CHECK
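
The eye-to-MIDI step could be sketched with the mido package (an illustrative mapping of my own, not Oculog’s; mido needs the python-rtmidi backend): horizontal gaze position picks the pitch, vertical position the velocity.

    import mido

    out = mido.open_output()  # default MIDI output port

    def gaze_to_midi(x, y):
        """Map a normalized gaze position (x, y in 0..1) to a MIDI note."""
        note = 48 + int(x * 24)        # two octaves, left to right
        velocity = 20 + int(y * 100)   # louder toward the top
        out.send(mido.Message('note_on', note=note, velocity=velocity))
        return note, velocity

    print(gaze_to_midi(0.5, 0.8))  # center-top gaze -> note 60, velocity 100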


Active listening to a virtual orchestra through an expressive gestural interface: The Orchestra…
Antonio Camurri, Corrado Canepa, Gualtiero Volpe

University of Genova,InfoMus Lab

Embodied Active Listening: enabling to interactively operate on musical content by modifying it in realtime

full body movement and gesture

focus on: high-level expressive qualities of movement and gesture, cross- and multi-modal techniques; the result is embodied control of…

actively explore the orchestral play.

Multitrack Audio inputs. You can operate on each single channel with realtime mixing.

input from a video camera and other possible sensors,… then tracking and extraction of specific features, modes for interaction with space and possible visual feedback

interaction with space: 2D potential functions superimposed onto physical space, with each function applied to an individual instrument. You can change the parameters of the functions in the space in real time

Public installations: they are aiming for as natural an interface as possible (ambient design, disappearance of technology for non-expert users)

Sensors @ Harvestworks

June 7, 2007

Today, Wednesday the 6th of June, was the first workshop we attended. Actually, it was less a workshop than a presentation by companies and (student) entrepreneurs who had the opportunity to present their product developments in sensors and sensor-interfacing solutions to the public.

The products they presented were a broad variety of possible sensors and sensor interfaces for installations, performances, VR environments, etc. Some of them take a plug-and-play approach; others need more programming and sometimes even soldering to work smoothly. It was very interesting to see all the possibilities of sensors and sensor interfacing in a four-hour session. This made it more tangible/realistic than staring at a website and thinking “should I buy this one or that one…”

Here is an overview of what was presented.

I-cubeX of Infusion Systems

The I-CubeX is a sensor-to-MIDI interface with three types of systems: the microsystem, the system, and the Wi-microsystem, which is wireless. A broad variety of sensors can be attached to all three of these systems. A little programming is needed, but then you will be up and running with this plug-and-play approach.

Teabox of Electrotap

The Teabox by Electrotap is similar to the I-CubeX, but uses digital audio instead of MIDI for connecting sensors to the computer. It also takes a plug-and-play approach, so a little bit of programming and then you’re set!

Make Controller Kit of Making Things

The Make Controller Kit is a fully programmable, open source hardware platform for projects requiring high performance control/feedback, connectivity, and ease of use. It can also be used as an interface to a variety of desktop environments like Max/MSP, Flash, and Processing, Java, Python, Ruby – anything that supports OSC. (from Making Things website)

This product needs programming/soldering/modifying before it runs as smoothly as “plug-and-play” products like the I-CubeX or the Teabox. This can be a bit of a problem if you’re not really into technical stuff. Another interesting feature of this tool is that it can be accessed over a network.

EOBody2 by EOWave

The French company EOWave has a product similar to the I-CubeX called the EOBody2, a follow-up to the EOBody, which is out of stock. The EOBody is a USB-MIDI sensor system that can host up to 128 sensors and store sensor settings in its SensorBoxes. It is also based on the plug-and-play approach.

CREATE USB Interface (CUI) by Dan Overholt

This is probably the cheapest ($50) of the five sensor interfaces we saw today, but also the one needing the most working hours for a successful project. It’s called the CUI, short for Create USB Interface, and offers only a printed circuit board with the necessary elements: a USB port, a power LED, a reset switch, a programming switch and a prototyping area where you solder your personal sensor wishes. This is all you need to make your own hardware sensor interface! (Note: Dan Overholt told us that if you aren’t yet familiar with soldering and programming your own boards, start with the Arduino board first.)

Crackle, Noise & Light; Performances @ 3LD

June 7, 2007

On Tuesday, June 5th we visited another NYEAF performance evening at 3LD Art Center. This time it was called “Crackle, Noise and Light”, common terms of which crackle and noise almost seem to designate a specific genre of electronic music these days. The evening was described as “Electronic sound and video with sonic environmentalist Anne Wellmer in collaboration with live video artist and musician Adam Kendall; interactive sound-art and live cinema by NoiseFold (David Stout and Cory Metcalf); Bay Area electronic performer Elise Baldwin; and video artist Leslie Thornton. Supported by the Gaudeamus Foundation.” The evening was again presented by Carol Parkinson, executive director of Harvestworks, and she was remarkably more relaxed and enthusiastic compared to the Sunday before. Maybe because she already knew that the evening was going to be so much better; in fact, it was going to host at least two very interesting and remarkably good performances. Here we go,…

First up were Anne Wellmer and Adam Kendall. They presented a collaborative audiovisual performance examining the interaction of sound, light and video. Anne performed with digital granular synthesis and some analog equipment controlling TVs as audio-reactive “lightboxes”. Adam played with software-based live video, playing on and expanding the theme of white light. The video work was quite nice and showed dark black-and-white images of cities and especially bridges, with occasionally some colored accents. The style was quite edgy and it looked a bit like those old projectors that have a concentration of light in the middle and become a bit frayed at the edges. This was a good start to set the mood for the rest of the evening. The sounds and music from Wellmer reminded me in a way of the opposition of heartbeat and nervous system. There was always a certain rhythmic pattern going on in the lower frequencies and a more continuous, singing, slower-moving cloud in the higher frequencies. What I liked very much about the piece was the way the video and the audio interacted. For the rest I didn’t get too crazy about the sounds and composition. I kept wondering throughout the piece what Wellmer was trying to express or what kind of experience she was trying to create. The composition didn’t manage to appeal to any emotion and ended up sounding very sterile. Also, she kept breaking the flow by constantly introducing new elements and sounds into the piece. If you viewed certain aspects of the performance in isolation there were some magical moments though, sonically as well as visually.

Next there was a short movie by Leslie Thornton.

After this enjoyable warming-up the really good stuff began. Elise Baldwin used early-’20s circus video footage and new processing techniques together with a fantastic musical composition. One of the great things about this performance was the way she managed to revive these old sounds and images with modern technology. There was a certain dark, sweet, loving melancholy in her performance, which started out with sounds that reminded me of an old music box. Slowly the composition developed into a more layered and complex whole that towards the end climaxed in a series of sounds that easily managed to equal the emotional richness of the human voice. All throughout the piece she managed to dose the sounds and images so that you were taken away into this other world of days past, completely forgetting about the audience and city around you. The early-’20s video footage was edited in such a way that it created these visual loops that took you in and slowly passed you over into the next loop. I don’t know what more to say, it was just beautiful.

After the break we were treated to another brilliant performance. Up were David Stout and Cory Metcalf, who together make NoiseFold. NoiseFold is “an interactive visual-music-noise performance that draws equally from mathematics, science and the visual and sonic arts. This networked performance duet explores the use of infrared and electromagnetic sensors to manipulate and fold virtual 3-D objects that emit their own sounds. The work integrates multiple techniques including real-time 3-D animation, mathematic visualization, recombinant non-linear database, A-life simulation, image-to-sound transcoding, complex data feedback structures and a variety of algorithmic processes used to generate both sonic and visual skins. The result is a theater of emergence and alchemical transformation existing within an intricate cybernetic system. The endlessly folding objects, synthetic life forms, vortices and oblique spirals defy easy anthropomorphic projection – images of crumpled paper, nerve ganglia, dendrites, organic architectures, impossible animals, seed-pods and fungi may come to mind.”

Audio Visual – A New Generation Of Installation Art

June 7, 2007 by

Saturday, the 2nd of June, after a nice cup of coffee in City Hall Park, we visited the exhibition Audio Visual at LMCC’s Swing Space at 38 Park Row. In a small building, seven installations were brought together to show the work of “a new generation of digital artists”. In “order of appearance”:

(the quoted passages are from the NYEAF flyer)

Phoenix Perry – Honey

“An interactive game exploring issues of survival and the environment. This powerful game demonstrates even the smallest creatures’ vital importance to their ecosystem as they struggle to survive in a unique, fantasy-world.”

The first installation sounded promising. In the first (and only?) level the user clicks on bees and flowers to make the bees collect nectar from the flowers. After a few clicks the game stops and the user is directed to a website to play more of the game. I didn’t understand the point of this small tease, especially because the game couldn’t actually be played on the recommended website.

Olen Hsu – Drift (II) (2007)

“A sculpture and sound installation that charts the prehistory of the digital network. Olen Hsu uses porcelain, paper and algorithmically composed sound, converging new media, tactile forms and acoustic instruments.”

Olen Hsu - Drift (II)

The sculpture, made of porcelain and paper, emits an algorithmic composition created from numerical oceanographic data of the past two hundred years. The seven gramophone horns made you expect that each horn would emit a different sound, but that wasn’t the case.

On the wall next to the sculpture hung the oceanographic data that Olen Hsu used for his composition, and parts of his (orchestral) composition could also be read.
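For the technically curious, here is a hypothetical sketch of how a time series like this could be turned into pitch material. The mapping and all names are my own assumptions, not Hsu’s actual method: each annual reading is scaled onto a pentatonic pitch grid, one note per year of data.

```python
SCALE = [0, 2, 4, 7, 9]  # pentatonic scale degrees, in semitones

def value_to_midi(value, lo, hi, base_note=48, octaves=3):
    """Scale a reading into a pentatonic MIDI pitch spanning `octaves`."""
    frac = (value - lo) / (hi - lo) if hi > lo else 0.0
    step = int(frac * (len(SCALE) * octaves - 1))
    octave, degree = divmod(step, len(SCALE))
    return base_note + 12 * octave + SCALE[degree]

# Fake "two hundred years" of annual readings (e.g. sea-level anomalies).
readings = [0.1 * ((year * 7) % 13) for year in range(1800, 2000)]
lo, hi = min(readings), max(readings)
melody = [value_to_midi(v, lo, hi) for v in readings]
print(melody[:20])
```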

Lovid & Douglas Repetto – cross current resonance transducer

“A sculptural-graphic collaboration that addresses the processes of interpretation and evaluation inherent in human attempts to understand natural phenomena.”

Lovid & Douglas Repetto – cross current resonance transducer

Headphones and a TV playing a “the-making-of-cross-current-resonance-transducer” video, with the sculpture itself standing in front of the TV. Is this an installation?

I didn’t have the patience to watch the documentary, which lasted more than fifteen minutes.

Terry Nauheim – rotating

“Rotating (in Four Movements) is an installation built from recorded and processed sound fragments of hand-cast record negatives and their corresponding recorded drawings.”

A video projection split up into four parts, each showing its own rotating circular forms spinning on a turntable. The video was accompanied by crackly sounds (like what you hear when a record is scratched and the needle hangs in the groove), except that these were made with record negatives. A nice detail, I must say.

A movie excerpt and sound clips of this installation can be found online.

Karina Aguilera Skvirsky – el espectàculo

Three TVs standing next to each other showing a synchronized video. The video was a typical example of “cut and paste”: celebrities, or mass-media characters, were cut out of their “natural habitat”. The artist then made little loops of the celebrities’ movements, duplicated them, and placed them in lines to suggest a kind of choreographed dance. The background regularly changed colour (very basic colours were used) and eventually turned into “news breaks”, such as the tsunami and other disasters.

Hisao Ihara – the collapsing wall

Five or so TFT screens placed one above the other on a wall, showing a “collapsing wall”: from the top, small “bricks” made of video material fell down to the bottom, where you could watch the movie that the “bricks” were made from.

Rashaad Newsome – The Conductor

Carl Orff’s Carmina Burana is used as the basic soundtrack for the 2:31 digital video, which is made up of footage from various popular hip-hop videos. The footage is digitally enhanced and re-edited to track the motion of the hip-hop artists’ hands. The soundtrack is accompanied by sounds extracted from the hip-hop videos.

Sensors & Gestures; Performances @ 3LD

June 6, 2007 by

On the rainy Sunday evening of June 3 we visited “Sensors and Gestures” at 3LD. This performance evening was part of NYEAF07 and was programmed by Harvestworks. We arrived soaking wet but were excited about what the evening had in store for us. 3LD is a fancy place that looks a bit like an igloo, probably heavily sponsored, though I guess the high admission fee ($20) also helps to keep the place fashionable. Carol Parkinson, executive director of Harvestworks, introduced the artists and their performances, reading pre-written texts from paper.

First up was a short film by Henry Threadgill. According to many websites and bios, Threadgill is a respected composer, multi-instrumentalist and bandleader who has won numerous awards and whose music has been performed by the most acclaimed ensembles of the past two decades. I personally didn’t understand why the movie was shown there. First of all, though this can’t be blamed entirely on Threadgill, it is very strange to watch a film on a rear-projected screen: right in the middle of it (depending on where you are seated) all you see is a big bright white spot without any contrast. Not to waste too many words: the only thing I enjoyed about this short film was that it was, indeed, short. The soundtrack was a bit annoying and sounded like a first experiment with electronics. When the movie was over, Threadgill left immediately.

The next performance was by SSS, which stands for Sensors Sonics Sights and is, according to their website, “a trio performing visual music with sensors and gestures. They create a work of sound and sight, a laptop performance that goes beyond with the intensity of bodies in movement. Going beyond media: music that is more than a soundtrack, images going further than video wallpaper. A three-way conversation modulating sonic and luminous pulse and flow.” Cecile Babiole used ultrasound sensors to control the visuals of the performance, Laurent Dailleau played the Theremin, and Atau Tanaka used the Biomuse to turn his body into a sound controller.

The performance didn’t always come together as musical composition but mostly remained a collection of (sometimes really nice) sounds. The visuals were fantastic to watch, and Babiole did a very good job in a “less is more” kind of way. What I found a bit strange about this performance was that they used all these sensors and controllers without overcoming some of the typical problems of the laptop musician: “During a live performance with laptop, the audience has difficulty connecting the visual input (the physical gestures of the performer) to the auditive output (the sound they hear coming from the speakers), as they are used to doing with musicians performing on acoustic instruments. What the audience is seeing does not correspond to what they are hearing. Because of this, the added value of attending a live performance is often unclear or unsatisfying. These problems are caused by the overwhelming number of possibilities that the laptop offers for music making and the diffuseness of its functionality as such. With a laptop as a musical instrument, the relation between input and output is ‘modular’: it depends on the software and other settings used, which are normally invisible to the audience. It is possible to visualize this input/output relation. The physical gestures used by the performer to operate a laptop are very small and mostly take place behind the raised screen; therefore, they are invisible to the audience. This can be overcome by using external controllers that demand larger physical gestures, again making the relation between the visual and the auditive, the input and the output, clearer to the audience.” (Edited from a text by Arthur Wagenaar, Aliona Yurtsevich, Henk van Engelen, Jense Meek and Thomas Bensdorp for HKU, KMT)

In the performance by SSS, although they were using controllers that demand larger physical gestures, the relation between what they were doing on stage and what you heard coming out of the speakers remained mostly unclear. Nor did I feel that the controllers added much musical expression: there were few subtleties to be heard in the sounds that actually required, let’s say, EMG biosignals instead of a panpot. Overall the visuals were great, and so were some of the generated sounds, especially those coming from the Theremin, but the whole performance lacked musical coherence and good compositions.
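The “modular” input/output relation described in that quote is easy to make concrete. The toy Python sketch below (my own illustration, with invented names, not the SSS setup) feeds one and the same gesture stream through two different mappings: with patch A a rising arm sweeps the pitch, with patch B it only changes loudness. On stage both performances look identical, which is exactly why the audience can’t infer the mapping from the gesture alone.

```python
def gesture_stream():
    """Fake sensor data: arm height normalized to 0.0-1.0."""
    return [0.1, 0.4, 0.8, 0.6, 0.2]

def mapping_a(x):
    # Patch A: gesture height controls pitch (big moves = big pitch sweeps).
    return {"freq_hz": 220.0 + x * 660.0, "amp": 0.8}

def mapping_b(x):
    # Patch B: the same gesture controls amplitude only, pitch is fixed;
    # visually an identical performance, audibly a very different one.
    return {"freq_hz": 440.0, "amp": x}

for x in gesture_stream():
    print(f"gesture={x:.1f}  A={mapping_a(x)}  B={mapping_b(x)}")
```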

After a short break Miguel Frasconi tried to perform “Out of Edges: for gesturally controlled surround sound matrix” using the Buchla Lightning MIDI controller. His setup failed on him, so there is little to say about it. From what I heard and saw, I can only say that I didn’t get it anyway.

The final performance of the evening was by FAIR USE (Luke DuBois, Matthew Ostrowski and Zach Layton). They had nice video work going on for the two pieces they did, using existing film material (I think from Blade Runner and Metro, but I’m not quite sure) that was processed quite heavily. Matthew Ostrowski used a P5 Glove to control his Max/MSP patch.

Overall I had expected more of the evening. It didn’t seem very cutting-edge, as you’d expect from the NYEAF. Maybe what Heinrich Heine once said about Holland is now true for New York: when the world comes to an end, I’m going to New York, because everything happens ten years later there. Okay, Heine said fifty years, and that would be a bit too much. Actually I’m sure that New York is much more on top of things than what was shown this evening at 3LD. I hope the rest of the NYEAF, and especially the NIME conference, proves me right.


Andrea Parkins – FAULTY (per-objective)

June 4, 2007 by

Saturday, June 2nd, we visited Diapason to hear Andrea Parkins’ sound installation FAULTY (per-objective). We had just finished a meal, so the setting was ideal for some after-dinner relaxation; and quite a nice dessert it was! You had to leave your shoes at the door, and there were cushions and rugs lying on the floor to make yourself comfortable. The sounds were very well balanced, and the playback system and acoustics of the place, although not super high-end, provided a very comfortable listening environment. The only thing that was less enjoyable was that after 45 minutes something went wrong and all sound stopped. The lady present at the gallery had no idea how to get the Max/MSP patch up and running again, and there was nobody around to fix it. We decided not to wait until things were working again, since we had had our share already, though I have to admit I wouldn’t have minded hanging around another hour or so. You can visit this installation again on the 9th of June, and I can advise anyone to go there, lie back and just enjoy the sounds (provided they have fixed the problem). Below follows a brief description of the installation and the artist, and a picture stolen from Diapason:

Saturdays
May 19 & 26, June 2 & 9

6 pm – midnight
Free Admission

FAULTY (per-objective) is a multi-channel audio installation by Andrea Parkins. The work creates purposefully flawed sonic structures, built from a variety of sources, including sound recordings that document the specificity of objects – collected or invented – as they are set into motion. (Upended wine glasses on tilted/greasy mirrors, taut lines of plastic tubing, wobbly plaster forms, apples/potatoes that roll across a bumpy floor, spinning metal washers, stretched skeins of plastic gimp, raspy little snapshots in the wind – these might be performers.) Through the use of Max-based generative processing, multiple chains of audio events will arrive at an indeterminate sonic outcome – an aurality that weaves playful connections between sonified objects, materials and language – and a metaphor for the slippage between object and meaning that occurs through the passage of time (and space).
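As a footnote for the technically curious: “generative processing” where multiple chains of audio events arrive at an indeterminate outcome can be sketched in a few lines. The Python toy below is my own guess at the flavor of such a patch, with invented sample names, and not Parkins’ actual Max work: sound events are chosen probabilistically, so every run produces a different sequence.

```python
import random

# Hypothetical recordings of the sonified objects named above.
OBJECT_SOUNDS = {
    "wine_glass": ["glass_roll.wav", "glass_tip.wav"],
    "tubing":     ["tube_pluck.wav"],
    "potato":     ["potato_roll.wav", "potato_thud.wav"],
    "washer":     ["washer_spin.wav"],
}

def next_event(prev_object):
    """Pick the next sound source, favoring a change of object."""
    candidates = [o for o in OBJECT_SOUNDS if o != prev_object]
    obj = random.choice(candidates)
    sample = random.choice(OBJECT_SOUNDS[obj])
    gap = random.uniform(0.5, 8.0)  # seconds of silence before the event
    return obj, sample, gap

obj = None
for _ in range(10):
    obj, sample, gap = next_event(obj)
    print(f"wait {gap:4.1f}s -> play {sample}")
```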

ANDREA PARKINS is a sound artist, composer and electro multi-instrumentalist who also makes/arranges objects, images and (sometimes) words. Known for her dynamic timbral explorations on the electric accordion and inventive use of generative sound processing, Andrea has appeared on more than 40 recordings on labels including Hatology, Atavistic, Knitting Factory, and Creative Sources. She has performed worldwide as a soloist, and with artists such as Nels Cline, Thomas Lehn, Fred Frith, ROVA Saxophone Quartet, and Otomo Yoshihide. She has also presented her work at the Whitney Museum of American Art, The Kitchen and Experimental Intermedia, among other NYC venues. Currently, Andrea continues to develop and perform a series of Max/MSP-based audio/visual works inspired by Rube Goldberg’s circuitous contraptions, a project realized during artist residencies sponsored by the Hamburg Cultural Board in Germany, at Harvestworks in New York City, and at CESTA in the Czech Republic. For more information: www.myspace.com/andreaparkins


Andrea Parkins - FAULTY (per-objective)