
Sunday, June 3
 

8:00am

9:00am

The Animated Music Notation Workshop: Demonstrating the Possibilities for Rhythm-Based Music Education for Young Students
The Animated Music Notation workshop was designed to investigate the theory and practice of Animated Music Notation and to enable the performance of rhythmically complex musical material by performers of virtually any age and ability. A presentation of this workshop generally contains four sections. Section 1 provides historical and technological context, as well as a general introduction to contemporary animated scoring practices and animated music notation. In Section 2, a taxonomy of high-level and low-level animated score functionalities and symbols is posited in order to provide a consistent terminology with which to approach animated scores in theory and practice. In Section 3, attendees are encouraged to participate in a series of hands-on explorations of a variety of animated score functionalities with a focus on rhythm, and in Section 4, a theory of animated music notation is presented alongside an extension of the hands-on practices of Section 3. Each section is designed to encourage discussion throughout and beyond the workshop. Attendees are strongly encouraged to bring instruments, although this is by no means necessary. The NIME 2018 demonstration of the Animated Music Notation workshop will include information about various iterations of the workshop, including selected content and reflections on its effectiveness in teaching rhythmically complex musical material to students as well as more seasoned performers.



9:00am

A NIME Primer
Attending NIME for the first time can be an overwhelming experience. Beginners may find it difficult to make sense of the vast array of topics presented during the busy program of talks and posters, or to appreciate the significance of the wide variety of demos and concerts. This half-day tutorial is intended to provide a general and gentle introduction to the theory and practice of the design of interactive systems for music creation and performance. Our target audience consists of newcomers to the field who would like to start research projects, as well as interested students, people from other fields, and members of the public with a general interest in the potential of NIME. We aim to give our audience an entry point to the theory and practice of musical interface design by drawing on case studies from previous years of the conference. Past attendees have told us that they gained a perspective that helped them increase their understanding and appreciation of their first NIME.

Speakers
avatar for Michael Lyons

Michael Lyons

Ritsumeikan University, Kyoto, Japan
Michael Lyons is a professor of Image Arts and Science at Ritsumeikan University in Kyoto. His interest in experimental music dates to childhood backyard percussive improvisations, which were not consistently appreciated by the neighbours. As a teenager, Michael studied classical...


9:00am

NexusHUB Distributed Performance Workshop
To engage a large number of people immediately, ad hoc, at any location, giving them control, interconnection, agency, and a different facet for engaging an artwork, there is only one viable solution: the web. We present the NexusHUB path to audience participation, sonic art, and network performance via the web browser.
NexusHUB is a framework for creating and managing distributed performances that has been in use in performances and installations since 2010 and incorporates solutions to many common distributed-performance issues. This workshop walks participants through creating and deploying a NexusHUB web application that meets common requirements for a cell-phone-based performance. We will create a basic yet complete setup for interaction, visualization, and sonification across cell phones, a server, and a computer running Max. We will also use Docker as a dead-simple way of deploying and maintaining a server.

Workshop Outline:
  • NexusHUB overview  
  • Docker | Redis | Nodejs | NodeClusters | Tonejs & WebAudio | NexusUI  
  • Server – The HUB
  • Client – The Audience/Performer
  • Theater – The Display
  • Control – Other actors and interactors
  • MaxMSP Integration – Local Machine Mayhem
  • OSC Integration (or not) – for whatever
  • Discussion of Distributed Performance ‘Best Practices’
  • Domain registry | Local vs. Cloud deployment | Security | wifi infrastructure options | Testing
  • Deployment Strategies with Demo Examples  
  • Personal Server | Google Cloud | Amazon AWS
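The OSC integration step above refers to Open Sound Control messaging between the server and local machines such as the Max computer. Purely as a framework-agnostic sketch (this is not NexusHUB code, and real deployments would normally use an existing OSC library), an OSC 1.0 message with float arguments can be packed with nothing but the Python standard library:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC aligns every field to a 4-byte boundary; strings are
    # null-terminated, so a string already a multiple of 4 long
    # still gains four zero bytes.
    return b + b"\x00" * (4 - len(b) % 4)

def encode_osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC 1.0 message whose arguments are all float32."""
    msg = osc_pad(address.encode("ascii"))                    # address pattern
    msg += osc_pad(("," + "f" * len(args)).encode("ascii"))   # type tag string
    for a in args:
        msg += struct.pack(">f", a)                           # big-endian float32
    return msg
```

The point is only that the wire format is simple enough to reason about when debugging a distributed performance; the resulting bytes can be sent over UDP to any OSC receiver.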

Speakers
avatar for Jesse Allison

Jesse Allison

Experimental Music & Digital Media, Louisiana State University, Baton Rouge, LA, United States
Jesse Allison is a professor at LSU in Experimental Music & Digital Media. As part of the AVATAR initiative, he actively pursues research and collaboration on ways that technology can expand what is possible in the arts. As an artist, Allison has disseminated works and research...


9:00am

Audio-first VR: Imagining the Future of Virtual Environments for Musical Expression
Please note that the participants of this workshop were selected through a call for abstracts, which closed on May 6, 2018. We also have a limited number of seats for non-presenting auditors. Please visit the workshop website at https://audio1stVR.github.io for more information.

This workshop aims to investigate the concept of an “audio-first VR” as a medium for musical expression, and identify multimodal experiences that focus on the auditory aspects of immersion rather than those that are ocular. Through a day-long workshop that involves presentations and demo sessions followed by round table discussions, the participants will collectively address such questions as: What is a VR experience that originates from sonic phenomena? How do we act and interact within such experiences? What are the tools that we have or need to create these experiences? What are the implications of an audio-first VR for the future of musical expression? The participants will collaboratively outline a position on what constitutes an audio-first VR from a NIME perspective.

Focus will be on the following topics among others:
  • Sonic virtual realities
  • Immersive sonification
  • Virtual interfaces for musical expression (VIMEs)
  • Creativity support tools for VR audio
  • Visualizing an audio-first VR
  • Spatial audio techniques for VR
  • Embodied interactions within virtual audio systems
  • Composing music for VR games and films
  • Sonic VR as assistive technology

Workshop Schedule

09:00 - 09:20 Welcome and Introductions
Anıl Çamcı and Rob Hamilton

09:20 - 09:40 ECOSONICO: Augmenting Sound and Defining Soundscapes in a Local Interactive Space
José M. Mondragón, Adalberto Blanco, and Francisco Rivas 

09:40 - 10:00 Sonic Thinking in VR: Incorporating Sound into S.T.E.A.M Curriculum and Data-Driven Installations
Melissa F. Clarke and Margaret Schedel

10:00 - 10:20 On Standardization, Reproducibility, and Other Demons (of VR)
Davide Andrea Mauro

10:20 - 10:40 Chunity for Audio-first VR
Jack Atherton and Ge Wang

10:40 - 11:00 Sonic Cyborg Feminist Futures in Extended Realities
Rachel Rome

11:00 - 11:20 Adapting 3D Selection and Manipulation Techniques for Immersive Musical Interaction
Florent Berthaut

11:20 - 11:40 What Postmodal Processes Can Teach Us about Existing Mediums
Josh Simmons

11:40 - 12:00 Enhanced Virtual Reality (EVR) for Live Concert Experiences
Chase Mitchusson and Edgar Berdahl

12:00 - 13:00 Lunch Break

13:00 - 14:00 Demo Sessions

14:00 - 16:00 Group Discussion; Outlining a Position on Audio-first VR

Speakers
avatar for Anil Camci

Anil Camci

Assistant Professor of Performing Arts Technology, University of Michigan


  • Max auditing attendees: 10 will be accepted
  • Max active attendees: 10 will be accepted

10:00am

Experiments in Pressure Sensitive Multi-touch Interfaces
Participants will be introduced to different ways of getting contact data from the Sensel Morph pressure sensor: via the API (in Python, C, and C#), Cycling '74's Max and Max for Live, the Innovator's Overlay, and MIDI Polyphonic Expression (MPE). Guidance will be provided on the different types of data ascribed to each contact and on using the contacts creatively. We will also use the Overlay Designer to create pressure-sensitive control areas that can output different types of data: Game Pad, MIDI, MPE, key code, and mouse pad. Time allowing, participants will apply their custom control designs to music and sound design over a multi-speaker array.

Attendees are encouraged to bring laptops with the API downloaded (for computer or Arduino) and supporting software for Python, C, or C# development. Those who want to work with higher-level controller design will need the SenselApp from the download page.
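As a rough illustration of what "using the contacts creatively" can mean in practice, the sketch below maps a single contact's position and force to a MIDI note and velocity. The `Contact` fields, surface width, and 512 g force ceiling are illustrative assumptions of ours, not the Sensel API:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    # Fields mirror the kind of per-contact data a pressure-sensitive
    # multi-touch surface reports; names are illustrative, not Sensel's.
    x_mm: float
    y_mm: float
    force_g: float

def contact_to_midi(c: Contact, width_mm: float = 240.0,
                    low_note: int = 48, n_notes: int = 25) -> tuple[int, int]:
    """Map horizontal position to a note in a 25-note row, force to velocity."""
    col = min(int(c.x_mm / width_mm * n_notes), n_notes - 1)
    vel = max(1, min(127, round(c.force_g / 512.0 * 127)))
    return low_note + col, vel
```

A design like this treats the surface as a one-row keyboard; a real overlay would also use `y_mm` (e.g. for pitch bend or timbre) and track contact IDs across frames.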

Speakers

Sunday June 3, 2018 10:00am - 11:30am
Center of Dance 460 Turner St NW, Blacksburg, VA 24060, USA

10:00am

Aphysical Unmodeling Instrument | 2017
by Tomoya Matsuura

Moss Arts Center - 3rd Floor in Balcony Lobby

Aphysical Unmodeling Instrument rethinks the description and generation of sound and music through the re-physicalization of Whirlwind, a targetless physical model. Whirlwind is a combined, physically impossible model of three wind instruments: trumpet, flute, and clarinet. Our work reimplements the computational elements of Whirlwind with physical objects, such as a delay realized through sound propagation and a resonator realized as a Helmholtz resonator. In our work, the acts of composition, instrument creation, installation, and performance are parallelized. By re-physicalizing a computational model, the notion of digital sound is expanded beyond the computer.

Exhibitors
avatar for Tomoya Matsuura

Tomoya Matsuura

Master Student/Artist, Kyushu University


10:00am

Attunement | 2018
by Olivia Webb & Flo Wilson

Moss Arts Center - 2nd Floor in Mezzanine Lobby

The verb ‘attune’ usually describes the act of making something harmonious, as in the tuning of an instrument. Attunement is also a state of relation to an object, technology, environment or other people. To become attuned is to engage in a two-way sympathetic and empathetic exchange. In this installation, attunement is used both as a technique for exploring ways of being with others in the world, and a method for considering how technology mediates this exchange.

In this conference of new musical interfaces, we invite all participants to consider the ethics of listening as mediated by technology. Listening is central to human interaction, yet habits within Western culture tend to privilege speech and being heard over listening to and receiving others. New technology continues to accelerate the speed that we can sound, voice and express ourselves. We are interested in how we might engage with performance and technology to become better listeners.

In this installation, you are invited to practice attunement by taking part in a selection of simple embodied listening exercises. Step out of your own breath, your own voice, your own self. Contemplate the changes required of you in order to receive someone or something else.

Exhibitors
avatar for Flo Wilson

Flo Wilson

Audio Foundation
Flo Wilson is a composer, producer and artist based in Auckland, New Zealand whose organic, experimental music creates emotive atmospheres to facilitate empathetic, shared listening experiences. She has created custom-built spatial sound installations and then performed with them...


10:00am

Bǎi (摆): An Oscillating Sound Installation | 2018
by Jelger Kroese, Danyi Liu & Edwin van der Heide

Moss Arts Center - 1st Floor in Experience Studio B

Bǎi (摆), meaning pendulum in Chinese, is an interactive installation that uses a speaker hanging as a pendulum from the ceiling, combined with an octophonic speaker setup, to create a responsive sound environment. Besides being a sound source, the pendulum speaker is also the interface by which the audience interacts with the installation. Through pushing, pulling and twisting, the audience can move the pendulum and set it into oscillating motions. A dynamic system underlying the installation translates these motions into different modes of behavior of the sound environment. At first, it may seem that the environment reacts to the motions predictably. However, exercising too much control over the pendulum causes the installation to quickly spiral into chaotic and unpredictable behavior. This, combined with the fact that hard physical labour is needed to restrain the pendulum, leads to a tense dialogue between participant and object, struggling for control. The movements resulting from this dialogue cause the sounds in the environment to change between different states of stability and chaos, thereby mirroring the types of dynamics also seen in natural ecosystems.

Exhibitors
avatar for Jelger Kroese

Jelger Kroese

Jelger Kroese is a designer and coder in the field of sound, interaction and education. He has a deep interest in the processes that underlie ecological systems. As a result, his work mostly comprises compositions and installations that place biological concepts within a technological...
avatar for Danyi LIU

Danyi LIU

PhD student, Leiden Institute of Advanced Computer Science
Danyi Liu is a designer and researcher in the field of sound and interactive art. Currently, she is a PhD student at the Leiden Institute of Advanced Computer Science. Her research focuses on real-time interactive data sonification and audience participatory installations and performances...


10:00am

Chorus for Untrained Operator | 2016
by Stephan Moore, Peter Bussigel

Chorus for Untrained Operator is a collection of discarded objects. Each has been relieved of its original responsibilities, rewired, and transformed to emphasize its musical voice. The ensemble of objects is controlled through the patch bay of a 1940s Western Electric switchboard.

Exhibitors
avatar for peter bussigel

peter bussigel

assistant professor, emily carr university of art & design
avatar for Stephan Moore

Stephan Moore

Senior Lecturer, Northwestern University


Sunday June 3, 2018 10:00am - 6:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

10:00am

FingerRing installation | 2016-18
by Sergey Kasich


Moss Arts Center - 3rd Floor in Balcony Lobby

The FingerRing installation is an exhibition set of the fFlower interface for the FingerRing technique. Two sensitive panels are installed in the center of the space, with an 8-channel sound system around its perimeter. Anyone is allowed to play with the panels and experience the flexibility and nature of the FingerRing technique, the simplest way to play with multichannel music. The installation has been shown at "SOUNDART: space of sound" (Manege Central Exhibition Hall, Saint Petersburg, 2017) and MakerFaire Moscow (MISIS, Moscow, 2017).

The FingerRing technique has been presented at the BRERA Art Academy's New Technologies in Art Department (Milan, Italy), the National University of Science and Technology MISIS (Moscow, Russia), the New York University Tandon School of Engineering (NYC, USA), and Cambridge (specially for Dr. Peter Zinovieff). It was incorporated into the curriculum at Falmouth University in England in 2017 and presented as a workshop at EXPO 2017 (Astana, Kazakhstan).

Exhibitors
SK

Sergey Kasich

founder, SoundArtist.ru
music technology, experimental sound arts, interactive installations, social infrastructure, cultural projects, events, festivals, curation of new media arts, hybrid studies, R&D, anything


10:00am

What We Have Lost / What We Have Gained | 2014
by Matthew Mosher

What We Have Lost / What We Have Gained explores how to transform viewers into performers, participants, and players through large upper-body movements and tangible interactions with a sculpture. The piece was originally conceived as a large-scale MIDI-drum-pad-style interface, familiar to electronic musicians yet more physically expressive than typical MIDI devices.
As an art installation, it presents a four by three grid of video projected mouths on a spandex screen. Each video sample animates and sings a different vowel tone when pressed by a user. The volume of the singing increases as the player presses harder and deeper into the mouth screen, which distorts the spandex display surface. In this way, the piece provides audio, video and tactile feedback, rewarding the user with a multi-modal embodied experience. This work contributes to the discourse at the intersection of tangible interactions and musical expression by providing an example of how interaction design can facilitate engagement and convey meaning. What We Have Lost / What We Have Gained questions the experience of using one's physical body to manipulate the digital representation of another's body.
Special thanks to vocalists Ashley Reynolds and Keri Pierson.

Exhibitors
avatar for Matthew Mosher

Matthew Mosher

Assistant Professor, University of Central Florida
Boston native Matthew Mosher is an intermedia artist and mixed methods research professor who creates embodied experiential systems. He received his BFA in Furniture Design from the Rhode Island School of Design in 2006 and his MFA in Intermedia from Arizona State University in 2012...


Sunday June 3, 2018 10:00am - 6:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

10:30am

The New Evolution of Ableton Live 10
Exhibitors
avatar for Richard Graham

Richard Graham

CEO, Delta Sound Labs
Delta Sound Labs is an audio technology company based in the United States. Come talk to us about eurorack modules and VSTs for music production. We have some hardware and software in beta and we're looking for testers, particularly if you're an active artist/musician.


12:00pm

Lunch Break

Sunday June 3, 2018 12:00pm - 1:00pm
Blacksburg: Downtown, Blacksburg, VA, USA

12:00pm

Forgetfulness | 2018
by Zachary Duer, Ivica Ico Bukvic & Meaghan Dee

Curated

Exhibitors
avatar for Ivica Ico Bukvic

Ivica Ico Bukvic

NIME 2018 Co-Chair, Virginia Tech
Ico is... or is he?


1:00pm

Gender Diversity at NIME
It is well-known that music technology has a gender diversity problem. NIME is not immune: in 2017, 8% of the authors of papers, demos and posters were women. This is not representative of the diversity of people working in our domain.

We ask, then, why women and gender non-conforming people do not publish at NIME, and what NIME loses by not addressing this lack of parity.

This workshop aims to take a first step to better community diversity, by gathering together women and gender non-conforming participants present at NIME, as well as those interested in addressing this problem as allies. It’s common to hear that community members want to address this problem but don’t know where to start, so this workshop will have a strong focus on identifying issues and formulating solutions.

 Topics of discussion will include:
  • An overview of women and gender non-conforming people within NIME, their practices and their impact
  • Examining the benefits of diversity in music technology
  • Discussing strategies for inclusivity
  • Brainstorming solutions that can begin to make NIME a conference that is more diverse, wide-reaching, and representative of the practitioners working within it, and how attendees can support this

Speakers
avatar for Sarah Schoemann

Sarah Schoemann

Georgia Institute of Technology
Sarah Schoemann is a doctoral candidate in Digital Media at the Georgia Institute of Technology. A designer and researcher working at the intersection of queer and feminist theory, design studies and human computer interaction, her research explores the social and technological practices...


1:00pm

Interactive Performances for Dance, Sound and Biosignals
The Emovere project focuses on developing interactive dance performances that use biosignals to amplify, understand and connect with the internal functions of the human body. We use physiological signals such as electromyography (EMG), electrocardiography (ECG) and electrodermal activity (EDA) as materials that access an intimate and usually inaccessible biological dimension, one in constant change, affected by external and internal stimuli. This workshop will focus on understanding the meaning and possibilities of different physiological signals (biosignals) and the technologies available to measure them, with practical demonstrations using different biosensors and interactive designs. We will discuss the creative methods used in our work, presenting the rationale behind interaction designs, compositional techniques and outcomes of different artistic pieces we have developed, as well as the work of prominent artists in the field. The workshop will include a practical co-creative exercise using participants' heart rates and their own voices, with the objective of creating a collaborative sound environment that builds on the participants' explorations of sound poetry and biofeedback.
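As a toy example of the kind of signal-to-sound mapping such work involves (this is not code from the Emovere project), an average heart rate can be estimated from the timestamps of successive ECG R-peaks:

```python
def beats_per_minute(peak_times_s: list[float]) -> float:
    """Estimate heart rate from a list of ECG R-peak timestamps in seconds.

    Computes the mean R-R interval and converts it to beats per minute;
    the resulting scalar could then drive tempo, pitch, or any other
    sound parameter in an interactive system.
    """
    if len(peak_times_s) < 2:
        raise ValueError("need at least two peaks")
    rr = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    mean_rr = sum(rr) / len(rr)
    return 60.0 / mean_rr
```

Real biosignal pipelines would first detect the R-peaks from raw ECG and smooth the estimate, but the mapping stage reduces to simple arithmetic like this.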

Speakers
avatar for Javier Jaimovich

Javier Jaimovich

Lecturer, Universidad de Chile


1:00pm

Making Embedded Instruments with Bela
This hands-on workshop introduces Bela (http://bela.io), an embedded platform for ultra-low-latency audio and sensor processing. Bela is useful for creating digital musical instruments and other interactive projects, which can be developed in C/C++, Pure Data (Pd) or SuperCollider, among other languages. The platform features an on-board browser-based IDE and oscilloscope for getting started quickly, onboard examples and documentation, and online community resources.

This workshop will focus on creating interactive music projects using Pd. Participants will be guided through a series of circuit-building and Pd programming activities, followed by time for open experimentation. Bela kits, breadboards, sensors and other electronics will be provided for use during the workshop. Participants should bring their own laptop (any platform) and a pair of headphones.
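Embedded audio platforms of this kind process sound in small blocks inside a callback. Purely to illustrate that structure (Bela projects themselves are written in C/C++, Pd or SuperCollider; the names and constants below are ours, not Bela's API), here is a Python sketch of a render loop that fills one block with a sine wave:

```python
import math

SAMPLE_RATE = 44100.0
BLOCK_SIZE = 16  # low-latency platforms favor very small block sizes

def render(phase: float, freq: float, out: list[float]) -> float:
    """Fill one audio block with a sine wave and return the updated phase.

    Mirrors the shape of an embedded audio callback: state comes in,
    samples go out, and the state is carried to the next block.
    """
    inc = 2.0 * math.pi * freq / SAMPLE_RATE
    for n in range(len(out)):
        out[n] = math.sin(phase)
        phase = (phase + inc) % (2.0 * math.pi)
    return phase
```

Carrying the phase between blocks, rather than recomputing it from elapsed time, is what keeps the waveform continuous across block boundaries.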


Speakers
avatar for Jack Armitage

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His topic is on supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.


1:00pm

Re-engaging the Body and Gesture in Musical Live Coding
At first glance, the practice of musical live coding seems distanced from the gestures and sense of embodiment common in musical performance, electronic or otherwise. This workshop seeks to explore the extent to which this assertion is justified, to re-examine notions of gesture and embodiment in musical live coding performance, to consider historical approaches for integrating musical programming and gesture, and to look to the future for new ways of fusing the two. The workshop will consist firstly of a critical discussion of these issues and related literature. This will be followed by applied practical experiments involving ideas generated during these discussions. The workshop will conclude with a recapitulation and examination of these experiments in the context of previous research and proposed future directions.

Speakers
avatar for Jack Armitage

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His topic is on supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.


Sunday June 3, 2018 1:00pm - 4:00pm
Center of Dance 460 Turner St NW, Blacksburg, VA 24060, USA

4:00pm

Meet NIME 2018 Installations Artists

Exhibitors
avatar for peter bussigel

peter bussigel

assistant professor, emily carr university of art & design
SK

Sergey Kasich

founder, SoundArtist.ru
music technology, experimental sound arts, interactive installations, social infrastructure, cultural projects, events, festivals, curation of new media arts, hybrid studies, R&D, anything
avatar for Jelger Kroese

Jelger Kroese

Jelger Kroese is a designer and coder in the field of sound, interaction and education. He has a deep interest in the processes that underlie ecological systems. As a result, his work mostly comprises compositions and installations that place biological concepts within a technological...
avatar for Danyi LIU

Danyi LIU

PhD student, Leiden Institute of Advanced Computer Science
Danyi Liu is a designer and researcher in the field of sound and interactive art. Currently, she is a PhD student at the Leiden Institute of Advanced Computer Science. Her research focuses on real-time interactive data sonification and audience participatory installations and performances...
avatar for Tomoya Matsuura

Tomoya Matsuura

Master Student/Artist, Kyushu University
avatar for Stephan Moore

Stephan Moore

Senior Lecturer, Northwestern University
avatar for Matthew Mosher

Matthew Mosher

Assistant Professor, University of Central Florida
Boston native Matthew Mosher is an intermedia artist and mixed methods research professor who creates embodied experiential systems. He received his BFA in Furniture Design from the Rhode Island School of Design in 2006 and his MFA in Intermedia from Arizona State University in 2012...
avatar for Flo Wilson

Flo Wilson

Audio Foundation
Flo Wilson is a composer, producer and artist based in Auckland, New Zealand whose organic, experimental music creates emotive atmospheres to facilitate empathetic, shared listening experiences. She has created custom-built spatial sound installations and then performed with them...


6:00pm

Opening Reception
Opening Reception Featuring Remarks by:

Rosemary Blieszner (VT CLAHS Dean)

Ivica Ico Bukvic (VT ICAT Senior Fellow)

Matthew Burtner (UVA Music Chair)

Francesca Fiorani (UVA Assoc. Dean Arts & Hum.)

Jody Kielbasa (UVA VP Arts)

Paul Steger (VT SOPA Director)

Ruth Waalkes (VT Assoc. Provost for the Arts)

8:00pm

Concert 1: Evening with Ikue Mori

Artists
avatar for Ikue Mori

Ikue Mori

Composer | Performer | Media Artist
Ikue Mori moved from Tokyo to New York in 1977. She started playing drums and soon formed the seminal NO WAVE band DNA with Arto Lindsay. Since the 1990s she has collaborated with numerous musicians and artists throughout the US, Europe, and Asia, while continuing to produce and record her...
avatar for Jean-Francois Charles

Jean-Francois Charles

Assistant Professor, Composition & Digital Arts, University of Iowa


8:00pm

Music Piece 1.1
27b/6
by Alex Christie

27b/6 is the instrument and the composition. It is designed to interfere with itself and the performer. 27b/6 is anti-communication, anti-aesthetic, and anti-instrumentality. It is not an instrument that optimizes performer control. Instead, compositional form is dictated by the fragmentation and interruption of musical phrases and performance. 27b/6 is purposefully impractical. This impracticality leads to new and expanded forms of performer engagement. The performance becomes an act of navigating a relentlessly faulty and inefficient system in which some forces hold more power than others.

Artists

8:00pm

Music Piece 1.2
Absalon Crash
by Jean-Francois Charles

Absalon Crash is a composition for cymbal and live electronics. Homage to Søren Absalon Larsen. A cymbal is equipped with a piezoelectric sensor and a transducer. They are connected through a custom-built effects pedal to form a feedback network. A performer acts on the system to explore different sonic worlds. The performer shapes the sound by using her/his hands on the cymbal to muffle or suppress certain frequencies from the resonance spectrum, and by adjusting the settings of the effects pedal. Absalon Crash fits in the tradition of electro-acoustic works exploring the resonant characteristics of physical materials, like David Tudor's Rainforest.

Artists
avatar for Jean-Francois Charles

Jean-Francois Charles

Assistant Professor, Composition & Digital Arts, University of Iowa


8:00pm

Music Piece 1.3
Alien, L2Orkin' Around, An Ending
by L2Ork

Artists

8:00pm

Music Piece 1.4
Keynote Performance
by Ikue Mori

Artists
avatar for Ikue Mori

Ikue Mori

Composer | Performer | Media Artist
Ikue Mori moved from Tokyo to New York in 1977. She started playing drums and soon formed the seminal NO WAVE band DNA with Arto Lindsay. Since the 1990s she has collaborated with numerous musicians and artists throughout the US, Europe, and Asia, while continuing to produce and record her...


10:30pm

Concert 2

Artists
avatar for Bob Pritchard

Bob Pritchard

UBC School of Music
avatar for Edgar Berdahl

Edgar Berdahl

Assistant Professor, Louisiana State University
avatar for Esteban Betancur

Esteban Betancur

Professor, ITM Medellin
avatar for Jack Armitage

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His topic is on supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.
avatar for Javier Jaimovich

Javier Jaimovich

Lecturer, Universidad de Chile


10:30pm

Music Piece 2.1
Bone Piece 
by Lyn Goeringer

Bone Piece, a ritualized musical performance, began as a concluding performance for an installation titled "The Wishing Goat". In its installation form, The Wishing Goat invites participants into a closed room where they can speak their wishes aloud to a goat skull, which records them, adding them to an ever-growing pool of looped wishes from previous participants. At the end of the installation's run, I use the previously recorded material in a musical performance to release the wishes, so that they might catch the wind and be heard by whoever grants wishes. As the piece developed, it has become a ritual in its own right, in which the performer brings about a musical space where the audience can collectively meditate on their own desires and, through the music, release them into the world.

Artists

10:30pm

Music Piece 2.2
T(w)o Nearly Touch: You
by Bob Pritchard
Ziyian Kwan & Emmalena Fredriksson, dancers

Touch is personal, whether on ourselves or on each other. In TNT:Y personalized exploration becomes expression through gesture and sound, allowing for emotion and engagement. Using touch, the two characters come together to develop sonic and tactile relationships, and the growing exchange of touches and sonified gestures creates an active counterpoint of movement, timbre, emotion, and meaning, before dying away to quiet conversations in a whispered, imaginary language. Each Responsive User Body Suit (RUBS) consists of a dance leotard with e-textile surface sensors, wirelessly connected to Max/MSP patches. By completing the sensor circuits through touch, wearers generate data used to trigger samples or scrub through audio files. On the original suits, media artist Kiran Bhumber chose the sensor locations, taking advantage of the desired lengths and accessibility of the materials, while Margaret Lancaster's 2017 suit had asymmetric placements due to right-hand-only access. The sensor locations on the current twelve-sensor suits were chosen by designer Alaia Hamer in collaboration with dancers to emphasize the flow of muscle and bone.

Artists

Bob Pritchard

UBC School of Music


10:30pm

Music Piece 2.3
Language Embodiment ¿?
by Esteban Betancur & Jack Armitage

Each word has a unique existence: its own semantic relation in a specific context, its particular sound (sound in time) and its graphic structure (strokes on a support). It is possible to construct a programming language in which the syntactic-semantic relationship can be subverted to transform the language itself into a poetic entity, expressive not only through its capacity to generate (poiesis) computational organisms, but also through its sonority, rhythm and form.  Another possibility is to adapt the (programming) language to a particular context (geographical, cultural, artistic), allowing the use of words in different languages ... the code stops being an imposition (or a colonization) and starts to be an inclusion and an opening.

Artists

Esteban Betancur

Professor, ITM Medellin

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His research topic is supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.


10:30pm

Music Piece 2.4
Fragile Intersections
by Javier Jaimovich & Francisca Morand

“Fragile Intersections” is a solo interactive performance developed around themes of identity and self-image, constructed from the subjective relationship of somatic processes, such as self-sensing, as the performer navigates a sound environment composed from her own multiplicity of corporeal elements. The performance is based on the capture of microphone, inertial and electromyography signals, which are mapped to an interactive sound system in real time. The vocalizations and words are re-signified by mapping strategies, allowing the performer to fragment, distort and re-interpret her voice within an interactive and fluid sound environment that is affected by the performer’s gestures and presence.

Artists

Javier Jaimovich

Lecturer, Universidad de Chile


10:30pm

Music Piece 2.5
Experiment in Augmentation 2
by Scott Barton

In this work, a human performer interacts with the musical robots designed and built by WPI’s Music, Perception and Robotics Lab and Expressive Machines Musical Instruments. Musical interactions can occur in a variety of forms. The agents in an improvisation are typically able to sense the utterances of others, and are free to respond (depending on the constraints of the particular situation). The expressivity of an agent is then conveyed in a multitude of ways (pitch and rhythmic choices, timbral nuance, physical gestures, etc.). This kind of interaction has been a primary focus of my compositional work. Experiment in Augmentation 2 shows another kind of interaction that finds a human in greater control of the situation. The robots do not sense or interpret their sonic environment. Instead, they respond to human-produced cues by voicing one of a variety of pre-composed gestures (which may also be changed in the context of the performance). This configuration affects the way in which the robots are expressive. Their performative idiosyncrasies generate timbral, pitch and rhythmic variations, transforming the idealized pitch, rhythm and velocity instructions into new kinds of statements (transformation is an important part of expressivity). As an ensemble, these gestures combine to create emergent textures that illuminate the novel expressive possibilities of machines. All of this inspires the human performer’s choices about which cue will come next. It shows the compositional process unfolding in real time, highlighting a way in which a human performer’s expressive abilities can be augmented via physical computing technologies.

Artists

10:30pm

Music Piece 2.6
A Sound Walk Through Chaos Forest
by Edgar Berdahl

"A Sound Walk Through Chaos Forest" is an electroacoustic miniature written for two circle map oscillators. Their parameters are adjusted in real time by the controls of an embedded instrument. As the parameters are adjusted, the performer walks the listener through a forest of chaotic sounds. From time to time during the work, a coupling parameter is increased, causing the two circle map “resonances” to mirror each other’s dynamic behavior.
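The circle map mentioned above is a standard construct in chaotic dynamics. As an illustrative sketch only (the oscillator parameters and the exact coupling scheme of the composition are the composer's own), two cross-coupled circle maps might be modeled like this:

```python
import math

def circle_map_step(theta, omega, k):
    """One iteration of the standard circle map:
    theta' = theta + omega - (k / 2*pi) * sin(2*pi*theta), mod 1."""
    return (theta + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)) % 1.0

def coupled_circle_maps(theta1, theta2, omega, k, coupling, steps):
    """Iterate two circle maps whose phases are blended toward each other
    by a coupling parameter in [0, 1]. Returns the phase trajectory."""
    trajectory = []
    for _ in range(steps):
        t1 = circle_map_step(theta1, omega, k)
        t2 = circle_map_step(theta2, omega, k)
        # cross-couple: each phase is pulled toward the other's
        theta1 = (1 - coupling) * t1 + coupling * t2
        theta2 = (1 - coupling) * t2 + coupling * t1
        trajectory.append((theta1, theta2))
    return trajectory
```

With the coupling parameter raised, the two phases lock and mirror each other's dynamic behavior, as the description above suggests; at zero coupling they evolve independently.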

Artists

Edgar Berdahl

Assistant Professor, Louisiana State University


10:30pm

Music Piece 2.7
Arrest
by Kristina Warren

"Arrest," a sonocybernetic work for vocal body and Exoskeleton, explores embodied performance. I designed and built the Exoskeleton, a wearable, hybrid analog-digital instrument that uses body-to-body connections, such as wrist to wrist, to close different circuits, thus altering analog audio and digital control routing. Max/MSP consolidates the audio and adds an element of randomness in an effort to create parity within the human-computer interaction. "Arrest" uses closed, covered gestures to reflect the carceral state and limited contemporary access to NIMEs. The Exoskeleton explores the complete body – choreographically, vocally, expressively, and socially – as a crucial musical affordance.


10:30pm

Music Piece 2.8
ち — chi for Candles, Live Voice, and Sounds
by Akiko Hatakeyama

Trembling lights grow and cease. Small shimmering flames create a world – an ephemeral world tied to the past, present, and the future. The orange light, fuzzy yet powerful, coexists with sounds, and my voice communicates with the air at the scene. Sounds feel connected to the ground and keep our feet stable. The smell and heat from the candles confirm that I am alive, evoking senses and memories stored deep within me. The title ち — chi could mean blood, earth, knowledge, lateness, planting, and more in Japanese.  Ten light sensors of the custom-made instrument, myaku, distributed across a table react to the varying light intensities emitted by the candle flames. Each sensor converts light intensity to a value, and each value controls the gain of an audio file in a music program.  Candle flames emit strong light compared to small electric lights, and the dancing motion of the flames is both visible and audible in this piece. I make compositional decisions by considering the light intensities and movements of different candles, placing them to create a desired yet autonomous sound environment. The length, thickness, and kinds of candles, as well as the kinds of candle holders used with the instrument, all change the properties of the sounds and affect the performance. The heat, melting wax, and smell coming from the candles influence how I perform the piece.
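A minimal sketch of the sensor-to-gain mapping described above (the value range and smoothing coefficient are hypothetical; myaku's actual implementation is not specified here):

```python
def sensor_to_gain(reading, lo=0.0, hi=1023.0):
    """Normalize a raw light-sensor reading to a 0..1 gain, clamped.
    The 10-bit range is an assumption about the sensor hardware."""
    g = (reading - lo) / (hi - lo)
    return min(max(g, 0.0), 1.0)

def update_gains(readings, gains, smooth=0.9):
    """One update of the per-file gains from the ten sensor readings.
    One-pole smoothing turns candle flicker into gliding amplitudes
    rather than jittery jumps."""
    return [smooth * g + (1.0 - smooth) * sensor_to_gain(r)
            for r, g in zip(readings, gains)]
```

Each call to `update_gains` would run once per control period, with the resulting list applied as the gains of the ten audio files.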



 
Monday, June 4
 

8:00am

8:30am

9:00am

Paper Session 1: Collaboration & Audience Participation

Speakers

Andrew R. Brown

Professor of Digital Arts, Griffith University

Cécile Chevalier

University of Sussex

Eran Egozy

Professor of the Practice, Music Technology, MIT

Arne Eigenfeldt

Professor, Music and Technology, Simon Fraser University, Vancouver, BC, Canada
Arne Eigenfeldt is a composer of acoustic and electroacoustic music, and is an active software designer. His music has been performed throughout the world, and his research in intelligent music systems has been published and presented in international conferences. He teaches music... Read More →

Anders Lind

Composer, Senior Artistic Lecturer, Department of Creative Studies, Umeå University, Sweden

Bernt Isak Wærstad

University Lecturer / Freelance, Norwegian Academy of Music
Musician, sound artist, producer and sound designer with a Masters in Music Technology from NTNU in real-time granular synthesis of electric guitar. In addition to working freelance as a musician and sound engineer, he teaches at the Norwegian University of Science and Technology and... Read More →


9:00am

Paper 1.1
Working Methods and Instrument Design for Cross-Adaptive Sessions
by Oeyvind Brandtsegg, Trond Engum & Bernt Isak Wærstad


This paper explores working methods and instrument design for musical performance sessions (studio and live) where cross-adaptive techniques for audio processing are utilized. Cross-adaptive processing uses feature extraction methods and digital processing to allow the actions of one acoustic instrument to influence the timbre of another. Even though the physical interface for the musician is the familiar acoustic instrument, the musical dimensions controlled by actions on the instrument have been expanded radically. For this reason, and when used in live performance, the cross-adaptive methods constitute new interfaces for musical expression. Not only does each musician control his or her own instrumental expression; their instrumental actions also directly influence the timbre of another instrument in the ensemble, while their own instrument's sound is modified by the actions of the other musicians. In the present paper we illustrate and discuss some design issues relating to the configuration and composition of such tools for different musical situations. Such configurations include, among other things, the mapping of modulators and the choice of applied effects and processing methods.
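As a minimal illustration of the cross-adaptive idea (not the authors' actual system; the feature, mapping and effect chosen here are illustrative), one instrument's frame-wise RMS envelope can modulate the gain of another, so that the louder the modulating instrument plays, the more the target instrument is ducked:

```python
import numpy as np

def rms_envelope(x, frame=512):
    """Frame-wise RMS feature extracted from the modulating instrument."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def cross_adapt_gain(target, modulator, frame=512, depth=1.0):
    """Duck the target instrument by the modulator's loudness: the
    normalized RMS envelope of one signal becomes an inverse gain
    applied to the other."""
    env = rms_envelope(modulator, frame)
    gains = 1.0 - depth * np.clip(env / (env.max() + 1e-12), 0.0, 1.0)
    gains = np.repeat(gains, frame)  # hold each gain for one frame
    return target[:len(gains)] * gains
```

In a real session the extracted feature would drive any effect parameter (filter cutoff, delay feedback, etc.) rather than a simple gain, and the mapping would run with low latency on live input.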

Speakers

Bernt Isak Wærstad

University Lecturer / Freelance, Norwegian Academy of Music
Musician, sound artist, producer and sound designer with a Masters in Music Technology from NTNU in real-time granular synthesis of electric guitar. In addition to working freelance as a musician and sound engineer, he teaches at the Norwegian University of Science and Technology and... Read More →


9:00am

Paper 1.2
*12*: Mobile Phone-Based Audience Participation in a Chamber Music Performance
by Eran Egozy & Eun Young Lee


*12* is a chamber music work composed with the goal of letting audience members have an engaging, individualized, and influential role in a live music performance, using their mobile phones as custom-tailored musical instruments. The goals of direct music making, meaningful communication, intuitive interfaces, and technical transparency led to a design that purposefully limits the number of participating audience members, balances the tradeoffs between interface simplicity and control, and prioritizes the role of a graphics and animation display system that is both functional and aesthetically integrated. Survey results from the audience and stage musicians show a successful and engaging experience, and also illuminate the path towards future improvements.

Speakers

Eran Egozy

Professor of the Practice, Music Technology, MIT


9:00am

Paper 1.3
Animated Notation in Multiple Parts for Crowd of Non-Professional Performers
by Anders Lind


The Max Maestro – an animated music notation system – was developed to enable the exploration of artistic possibilities for composition and performance practices within the field of contemporary art music; more specifically, to enable a large crowd of non-professional performers, regardless of their musical background, to perform fixed music compositions written in multiple individual parts. Furthermore, the Max Maestro was developed to facilitate concert hall performances in which non-professional performers could be synchronised with an electronic music part. This paper presents the background, the content and the artistic ideas behind the Max Maestro system and gives two examples of live concert hall performances where the Max Maestro was implemented. An artistic research approach with an autoethnographic method was adopted for the study. This paper contributes new knowledge to the field of animated music notation.

Speakers

Anders Lind

Composer, Senior Artistic Lecturer, Department of Creative Studies, Umeå University, Sweden


9:00am

Paper 1.4
Interacting with Musebots
by Andrew R. Brown, Matthew Horrigan, Arne Eigenfeldt, Toby Gifford, Daniel Field & Jon McCormack


Musebots are autonomous musical agents that interact with other musebots to produce music. Inaugurated in 2015, musebots are now an established practice in the field of musical metacreation, which aims to automate aspects of creative practice. Originally musebot development focused on software-only ensembles of musical agents, coded by a community of developers. More recent experiments have explored humans interfacing with musebot ensembles in various ways: through electronic interfaces in which parametric control of high-level musebot parameters is used; through message-based interfaces which allow human users to communicate with musebots in their own language; and through performance-as-interface and/or audio-as-interface, in which musebots have jammed with human musicians. Here we report on the recent developments of human interaction with musebot ensembles, and reflect on some of the implications of these developments for the design of metacreative music systems.

Speakers

Andrew R. Brown

Professor of Digital Arts, Griffith University

Arne Eigenfeldt

Professor, Music and Technology, Simon Fraser University, Vancouver, BC, Canada
Arne Eigenfeldt is a composer of acoustic and electroacoustic music, and is an active software designer. His music has been performed throughout the world, and his research in intelligent music systems has been published and presented in international conferences. He teaches music... Read More →


9:00am

Paper 1.5
Towards New Modes of Collective Musical Expression through Audio Augmented Reality
by Chris Kiefer & Cecile Chevalier


We investigate how audio augmented reality can engender new collective modes of musical expression in the context of a sound art installation, 'Listening Mirrors', exploring the creation of interactive sound environments for musicians and non-musicians alike. 'Listening Mirrors' is designed to incorporate physical objects and computational systems for altering the acoustic environment, to enhance collective listening and challenge traditional musician-instrument performance. At a formative stage in exploring audio AR technology, we conducted an audience experience study investigating questions around the potential of audio AR in creating sound installation environments for collective musical expression.
We collected interview evidence about the participants' experience and analysed the data using a grounded theory approach. The results demonstrated that the technology has the potential to create immersive spaces where an audience can feel safe to experiment musically, and showed how AR can intervene in sound perception to instrumentalise an environment. The results also revealed caveats about the use of audio AR, mainly centered on social inhibition and seamlessness of experience, and on finding a balance between mediated worlds so that there is space for interplay between the two.

Speakers

Cécile Chevalier

University of Sussex


9:00am

9:00am

Aphysical Unmodeling Instrument | 2017
by Tomoya Matsuura

Moss Arts Center - 3rd Floor in Balcony Lobby

Aphysical Unmodeling Instrument rethinks the description and generation of sound and music through the re-physicalization of Whirlwind, a targetless physical model. Whirlwind is a combined and impossible physical model of three wind instruments: trumpet, flute and clarinet. Our work reimplements the computational elements of Whirlwind with physical objects, such as a delay realized by sound propagation and a resonator realized by a Helmholtz resonator. The acts of composition, the creation of instruments or an installation, and performance are parallelized in our work. The notion of digital sound is expanded out of the computer by re-physicalizing a computational model.

Exhibitors

Tomoya Matsuura

Master Student/Artist, Kyushu University


9:00am

Attunement | 2018
by Olivia Webb & Flo Wilson

Moss Arts Center - 2nd Floor in Mezzanine Lobby

The verb ‘attune’ usually describes the act of making something harmonious, as in the tuning of an instrument. Attunement is also a state of relation to an object, technology, environment or other people. To become attuned is to engage in a two-way sympathetic and empathetic exchange. In this installation, attunement is used both as a technique for exploring ways of being with others in the world and as a method for considering how technology mediates this exchange.

In this conference of new musical interfaces, we invite all participants to consider the ethics of listening as mediated by technology. Listening is central to human interaction, yet habits within Western culture tend to privilege speech and being heard over listening to and receiving others. New technology continues to accelerate the speed at which we can sound, voice and express ourselves. We are interested in how we might engage with performance and technology to become better listeners.

In this installation, you are invited to practice attunement by taking part in a selection of simple embodied listening exercises. Step out of your own breath, your own voice, your own self. Contemplate the changes required of you in order to receive someone or something else.

Exhibitors

Flo Wilson

Audio Foundation
Flo Wilson is a composer, producer and artist based in Auckland, New Zealand whose organic, experimental music creates emotive atmospheres to facilitate empathetic, shared listening experiences. She has created custom-built spatial sound installations and then performed with them... Read More →


9:00am

Bǎi (摆): An Oscillating Sound Installation | 2018
by Jelger Kroese, Danyi Liu & Edwin van der Heide

Moss Arts Center - 1st Floor in Experience Studio B

Bǎi (摆), meaning pendulum in Chinese, is an interactive installation that uses a speaker hanging as a pendulum from the ceiling, combined with an octophonic speaker setup, to create a responsive sound environment. Besides being a sound source, the pendulum speaker is also the interface through which the audience interacts with the installation. Through pushing, pulling and twisting, the audience can move the pendulum and set it into oscillating motions. A dynamic system underlying the installation translates these motions into different modes of behavior of the sound environment. At first, the environment may seem to react to the motions predictably. However, exercising too much control over the pendulum causes the installation to quickly spiral into chaotic and unpredictable behavior. This, combined with the fact that hard physical labour is needed to restrain the pendulum, leads to a tense dialogue between participant and object, struggling for control. The movements resulting from this dialogue cause the sounds in the environment to change between different states of stability and chaos, thereby mirroring the types of dynamics seen in natural ecosystems.

Exhibitors

Jelger Kroese

Jelger Kroese is a designer and coder in the field of sound, interaction and education. He has a deep interest in the processes that underlie ecological systems. As a result, his work mostly comprises compositions and installations that place biological concepts within a technological... Read More →

Danyi LIU

PhD student, Leiden Institute of Advanced Computer Science
Danyi Liu is a designer and researcher in the field of sound and interactive art. Currently, she is a PhD student at the Leiden Institute of Advanced Computer Science. Her research focuses on real-time interactive data sonification and audience participatory installations and performances... Read More →


9:00am

Chorus for Untrained Operator | 2016
by Stephan Moore & Peter Bussigel

Chorus for Untrained Operator is a collection of discarded objects. Each has been relieved of its original responsibilities, rewired, and transformed to emphasize its musical voice. The ensemble of objects is controlled through the patch bay of a 1940s Western Electric switchboard.

Exhibitors

peter bussigel

assistant professor, emily carr university of art & design

Stephan Moore

Senior Lecturer, Northwestern University


Monday June 4, 2018 9:00am - 5:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

9:00am

FingerRing installation | 2016-18
by Sergey Kasich


Moss Arts Center - 3rd Floor in Balcony Lobby

The FingerRing installation is an exhibition set of the fFlower interface for the FingerRing technique. Two sensitive panels are installed in the center of the space, which has an 8-channel sound system around its perimeter. Anyone is allowed to play with the panels and experience the flexibility and nature of the FingerRing technique – the simplest way to play with multichannel music. The installation has been shown at "SOUNDART: space of sound" (Manege Central Exhibition Hall, Saint-Petersburg, 2017) and MakerFaire Moscow (MISIS, Moscow, 2017).

The FingerRing technique has been presented at the BRERA Art Academy New Technologies in Art Department (Milano, Italy), the National University of Science and Technology MISIS (Moscow, Russia), the New York University Tandon School of Engineering (NYC, USA), and Cambridge (specially for Dr. Peter Zinovieff). It was included in the curriculum at Falmouth University in England in 2017 and presented as a workshop at EXPO 2017 (Astana, Kazakhstan).

Exhibitors

Sergey Kasich

founder, SoundArtist.ru
music technology, experimental sound arts, interactive installations, social infrastructure, cultural projects, events, festivals, curation of new media arts, hybrid studies, R&D, anything


9:00am

What We Have Lost / What We Have Gained | 2014
by Matthew Mosher

What We Have Lost / What We Have Gained explores how to transform viewers into performers, participants, and players through large upper-body movements and tangible interactions with a sculpture. The piece was originally conceived as a large-scale MIDI drum pad-style interface that would be both familiar to electronic musicians and more physically expressive than typical MIDI devices.
As an art installation, it presents a four by three grid of video projected mouths on a spandex screen. Each video sample animates and sings a different vowel tone when pressed by a user. The volume of the singing increases as the player presses harder and deeper into the mouth screen, which distorts the spandex display surface. In this way, the piece provides audio, video and tactile feedback, rewarding the user with a multi-modal embodied experience. This work contributes to the discourse at the intersection of tangible interactions and musical expression by providing an example of how interaction design can facilitate engagement and convey meaning. What We Have Lost / What We Have Gained questions the experience of using one's physical body to manipulate the digital representation of another's body.
Special thanks to vocalists Ashley Reynolds and Keri Pierson.

Exhibitors

Matthew Mosher

Assistant Professor, University of Central Florida
Boston native Matthew Mosher is an intermedia artist and mixed methods research professor who creates embodied experiential systems. He received his BFA in Furniture Design from the Rhode Island School of Design in 2006 and his MFA in Intermedia from Arizona State University in 2012... Read More →


Monday June 4, 2018 9:00am - 5:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

10:00am

McBlare: A Robotic Bagpipe Player
by Roger Dannenberg

"McBlare" is a robotic bagpipe player. It plays an ordinary set of bagpipes using an air compressor to provide air. Electro-magnetic devices power the “fingers” that open and close tone holes to determine the musical pitch. A computer sends control signals to McBlare to operate the “fingers” to play traditional bagpipe tunes as well as new compositions. McBlare can also add authentic-sounding ornaments to simple melodies entered through a piano-like keyboard and play the result on the pipes. McBlare was constructed by the Robotics Institute for its 25th Anniversary in 2004. The team that built McBlare includes Ben Brown, Garth Zeglin, and Roger Dannenberg. McBlare has performed in Miami, Pittsburgh, Vancouver, and twice flew to Scotland, attending the International Piping Festival in Glasgow in 2006 and an exhibition at the Scottish Parliament in 2013. McBlare has also appeared and performed on the Canadian Broadcasting Corporation and BBC Scotland.

A triskelion is a triple spiral design, a reference to McBlare's tripod base. The work was composed using various algorithms to produce effects, such as the long opening trill, that are unplayable by humans.

Exhibitors


Monday June 4, 2018 10:00am - 12:00pm
Moss Arts Center - Lawn Alumni Mall, Blacksburg, VA 24060, USA

11:00am

Keynote Talk with R. Benjamin Knapp
Keynote
R. Benjamin Knapp

Speakers

R. Benjamin Knapp

Scholar | Researcher | Performer
R. Benjamin Knapp is the Founding Director for the Institute for Creativity, Arts, and Technology (ICAT) and Professor of Computer Science at Virginia Tech. ICAT seeks to promote research and education at the nexus of art, design, engineering, and science. For more than 25 years... Read More →


11:30am

Keynote Talk with R. Benjamin Knapp: Transference
Transference 
by R. Benjamin Knapp (Gesture choreography and performance), Eric Lyon (Sound design and spatial orchestration) & Ariana Wyatt (Soprano)

Transference was created by Ben Knapp and Eric Lyon during 2018 for the NIME 2018 Conference. The composition Transference is tightly coupled with Ben’s NIME keynote talk, which directly precedes the piece, both in the music examples and the concepts presented during the talk. Transference transfers the ideas of the talk into the experiential domain of musical performance.

Transference was inspired by a conversation between Ben and Eric that took place in 2009, when both were working at the Sonic Arts Research Centre of Queen’s University Belfast. While working with the Biomuse, a NIME that was designed by Ben, they predicted that this bespoke hardware would eventually be replaced by industrially produced equipment. This “transference” would be bittersweet, since it would validate the ideas underlying the Biomuse, while rendering it obsolete. 

In preparing Transference, the creators discovered the Myo, which replicates much of the functionality of the Biomuse. Two Myos are used in the performance, along with a 24-camera Qualisys motion capture system. Live sound synthesis to a 124-channel loudspeaker array is achieved with Max/MSP and SuperCollider.

Transference begins with a spatialized cacophony of sounds derived from media appearances of Ben discussing the Biomuse. This cacophony transfers first to a texture of sounds from the band d’Cuckoo performing on the Biomuse, which then transfers to a texture based on Ben’s voice. All of these transfers are controlled by the spatial location of Ben within the Cube. From this point forward, all sounds heard in the composition are based exclusively on voice. Textures based on Ben’s voice are eventually transferred to a texture based on Ariana Wyatt’s performance of a fragment from the traditional Irish song Johnny Seoighe. This song was used in Transference as an homage to Ben’s earlier incorporation of Johnny Seoighe in his composition The Reluctant Shaman, created in 2008 for the Belfast ICMC. 

Johnny Seoighe, composed during the Great Famine, is deceptive in its musical beauty and the text’s flattery of the listener. Digging deeper, the song reveals itself to be a satiric indictment of the human costs of political oppression. Its multiple layers of meaning complement the multiple layers of technological history that undergird Transference.



Artists

Ariana Wyatt

Ariana Wyatt’s recent opera engagements include appearances with Gotham Chamber Opera, Opera on the James, Opera Omaha, Opera Roanoke, Glimmerglass Opera, Florida Grand Opera, Santa Fe Opera, the Juilliard Opera Center, and the Aspen Opera Theater.  Symphonic highlights include... Read More →

Eric Lyon

Eric Lyon’s work focuses on articulated noise, chaos music, spatial orchestration, and computer chamber music. His software includes FFTease and LyonPotpourri, collections of externals for Max/MSP and Pd. He authored “Designing Audio Objects for Max/MSP and Pd.” His music has... Read More →

R. Benjamin Knapp

Scholar | Researcher | Performer
R. Benjamin Knapp is the Founding Director for the Institute for Creativity, Arts, and Technology (ICAT) and Professor of Computer Science at Virginia Tech. ICAT seeks to promote research and education at the nexus of art, design, engineering, and science. For more than 25 years... Read More →


11:30am

Demo-Poster Session 1
Lunch break and demo-poster session. Note: demos are part of this session. Coffee, pastries and refreshments will be provided.


Exhibitors

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His research topic is supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.

Nathan M. Asman

Doctoral Student, University of Oregon

Astrid Bin

Queen Mary University of London, London, UK

Andrew Blanton

Assistant Professor, San José State University
Andrew Blanton is Assistant Professor and Area Coordinator of the CADRE Media Labs at San Jose State University and Visiting Researcher at the Center for New Music and Audio Technologies at the University of California Berkeley. His work has been performed and presented around... Read More →

Courtney Brown

Assistant Professor, Southern Methodist University
Courtney Brown is a sound artist, musician, researcher, and tango dancer. Her work has been featured and performed in the United States and Europe. Her interactive sound installation and musical instrument, ‘Rawr! A Study in Sonic Skulls’ received an Honorary Mention from the... Read More →

Rachel Gibson

Student, Oberlin Conservatory of Music

Ulf A. S. Holbrook

PhD Researcher, University of Oslo

Kazuhiro Jo

Kyushu Univ. / YCAM

Rebecca Kleinberger

MIT Media Lab, Cambridge, Massachusetts, United States

Anders Lind

Composer, Senior Artistic Lecturer, Department of Creative Studies, Umeå University, Sweden

Riccardo Marogna

Musician, Technician, Magician, Institute of Sonology, Royal Conservatoire in The Hague
Musician, improviser, composer, born in Verona (Italy), currently based in The Hague (NL). His research is focused on developing an improvisational language in the electro-acoustic scenario, where the electronic manipulations and the acoustic sounds merge seamlessly in the continuum... Read More →

Tomoya Matsuura

Master Student/Artist, Kyushu University

Andrew McPherson

Reader, Queen Mary University of London, London, United Kingdom

Ben Robertson

Graduate Student, University of Virginia
Ben Luca Robertson is a composer, experimental luthier, and co-founder of the independent record label, Aphonia Recordings. His work addresses an interest in autonomous processes, landscape, and biological systems—often by supplanting narrative structure with an emphasis on the...

Luca Turchet

Luca Turchet is a postdoctoral researcher at the Centre for Digital Music and co-founder of MIND Music Labs. His research interests span the fields of new interfaces for musical expression, human-computer interaction, perception, and virtual reality. He is also a musician and composer...


11:30am

Demo 1.01
Étude No.1, for Curve
by Nathan Asman


My new custom-built instrument is called Curve, named after the shape and contour of the interface itself. I wanted to create something with a myriad of sensors and ways of controlling different musical parameters, while maintaining the functionality and traditional idioms of the controllers, interfaces, and instruments around today. It's my take on a grid/keyboard/controller hybrid, but with far more options and possibilities for musical control and expression. I also wanted it to be ergonomic, hence the final shape and layout.

Exhibitors

Nathan M. Asman

Doctoral Student, University of Oregon


11:30am

Demo 1.02
Waveguide
by Andrew Blanton


Waveguide is an audiovisual performance that uses the internet as a resonant body for drums. By sending data from drums to a server and back through the audience's cell phones in real time, the work uses the array of cell phone speakers to create an immersive audiovisual environment. Conceptually, the work draws on a number of topics exploring the ubiquity of cell phones in contemporary society and what it means to have an increasingly mediated reality through the screen of a smartphone; it also questions the growing role that cell phones play in our lives. The work takes over the audience's cell phones in the performance environment, creating a diverse multi-channel array of speakers controlled in real time from the stage. Each audience member's phone acts as an individual small speaker, screen, and interactive environment, allowing dispersed, real-time audience interaction with the work as it is performed. The system is built using Max for real-time analysis of audio from the analog drums, which sends data to a node.js server that in turn sends data to the audience's cell phones while they are at the site andrewblanton.com/node.html.
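The abstract names the actual pipeline (Max for drum analysis, a node.js server relaying to phones). Purely as an illustrative sketch of the first stage — deciding when a drum has been struck so that a trigger can be broadcast — the following Python fragment implements a simple energy-ratio onset detector; the function name, frame size, and threshold are assumptions for illustration, not the authors' code.

```python
import math

def detect_onsets(samples, frame_size=256, threshold=4.0):
    """Flag frames whose RMS energy jumps past `threshold` times the
    previous frame's energy -- a crude stand-in for drum-hit detection."""
    onsets = []
    prev_rms = 1e-9
    for i in range(0, len(samples) - frame_size, frame_size):
        frame = samples[i:i + frame_size]
        rms = math.sqrt(sum(x * x for x in frame) / frame_size)
        if rms > threshold * prev_rms and rms > 0.01:
            onsets.append(i)  # sample index of the detected hit
        prev_rms = max(rms, 1e-9)
    return onsets

# Silence, a short burst, silence: exactly one onset expected.
signal = [0.0] * 1024 + [math.sin(0.3 * n) for n in range(512)] + [0.0] * 1024
print(detect_onsets(signal))  # → [1024]
```

In the real system each detected hit would become a network message to the server; here it is only a printed sample index.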

Exhibitors

Andrew Blanton

Assistant Professor, San José State University
Andrew Blanton is Assistant Professor and Area Coordinator of the CADRE Media Labs at San Jose State University and Visiting Researcher at the Center for New Music and Audio Technologies at the University of California, Berkeley. His work has been performed and presented around...


11:30am

Demo 1.03
Self/Work: Therapy Games
by Anastasia Clarke


This performance combines music composition, sound/instrument design, dance, speech, and projected imagery to provoke stories about power and agency in matters of health and healing. Its stage design involves long (15’) wires extending from patch points on analog circuit boxes, terminating in eight foot-sized copper touch-points spread around the performance floor. Touch-points invite use of hands, feet, and skin-to-skin contact between performers, yielding direct (amplified audio signals) and indirect (audio used as control signal) sonic results. Aerial-view live camera feed projection and spoken translation of onstage actions both address questions of access to the work across differing sensory abilities.


Exhibitors

11:30am

Demo 1.04
Rain Shadow
by Ben Robertson


'Rain Shadow' is an exploration of space and the derivative structures that link one's immediate surroundings within a larger topographical or spectral framework. Using a pair of piezoelectric transducers set in a hand-held “wand” and a tactile interface, the performer probes surfaces in the environment to capture minute impulse signals. These impulses are transformed using a variation of Karplus-Strong synthesis, imbuing each texture with a discrete pitch derived through intersections of a virtual overtone and undertone series. The resulting harmonic structure—constituting a 13-limit system of Just Intonation—is mirrored in the altered tuning of the cello.

Exhibitors

Ben Robertson

Graduate Student, University of Virginia
Ben Luca Robertson is a composer, experimental luthier, and co-founder of the independent record label, Aphonia Recordings. His work addresses an interest in autonomous processes, landscape, and biological systems—often by supplanting narrative structure with an emphasis on the...


11:30am

Demo 1.05
The Tape Machine: I See Spirits
by Anne Hege


The Tape Machine is a live looping instrument made from one retrofitted cassette recorder and two retrofitted cassette players. Since 2008, I have composed and improvised with this instrument, honing my ability to use its unique attributes: haptic sensitivity, changes in playback rate, live manipulation of sound by distorting both recording and playback, compositional choice of vocal and sonic material, and the interplay between recorded material (whose fidelity changes over time as the tape wears) and live sounds. I would like to perform a new work using this instrument.

Exhibitors

11:30am

Demo 1.06
Real-time 3D Convolution Reverb with High Order Ambisonics
by Tanner Upthegrove


Real-time panning of musical sources with third order ambisonics.

Exhibitors

Tanner Upthegrove

Virginia Tech


11:30am

Demo 1.07
Interactive Chinese Shadow Puppetry
Demo by Chenyu Sun


Shadow play is a classic Chinese folk drama in which character silhouettes are performed behind an illuminated screen: puppeteers control the figures and tell the story, accompanied by music and sound. As a form that brings together many cultural and artistic media, the shadow play represents a precious part of China's traditional culture.

Today, however, contemporary culture places little importance on tradition and offers new forms of entertainment, so the shadow play has gradually faded from people's attention and daily life. When I looked back at the shadow play, I was struck by how wonderful and magical it is. To rekindle public attention and participation in traditional culture, this work improves the technical means of the shadow play so that it can interact with people in better and more engaging ways, letting contemporary audiences experience the form anew.

More importantly, the interactive experience invites participants to perform shadow plays with their own stories and bodies; the shadow characters remain the heart of the tradition and the foundation of the form, and its timelessness reflects contemporary ambivalence toward this cultural inheritance.

Exhibitors

11:30am

Demo 1.08
Vox Augmento : An Improvisable VR Sampling Interface
by Philip Kobernik, Andrew Luck


Paint voice-recorded sound waves in a virtual world.

Exhibitors

11:30am

Demo 1.09
Voyage One - Mobile Phone Orchestra 2020 Conducted by Animated Notation
by Anders Lind


VOYAGE ONE is a composition for mobile phone orchestra, conducted by animated notation. The orchestra can comprise 24 to 240 people of any age; no background as a musician is needed to participate, and only 60 minutes of rehearsal are required before the concert. In previous performances of Voyage One, schoolchildren around the age of 14 have performed as the mobile phone orchestra. The orchestra is divided into six individual parts and conducted with performance instructions from specially designed animated music notation, presented on a screen projected in front of the performers. How to follow the animated notation is explained by the composer during the rehearsal before the concert. The Mobile Phone Orchestra performs on their personal phones using instruments found at www.orchestra2020.com.


Exhibitors

Anders Lind

Composer, Senior Artistic Lecturer, Department of Creative Studies, Umeå University, Sweden


11:30am

Poster 1.01
Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind
by Tomoya Matsuura & Kazuhiro Jo

Aphysical Unmodeling Instrument is the title of a sound installation that re-physicalizes the Whirlwind meta-wind-instrument physical model. We re-implemented the Whirlwind by using real-world physical objects to comprise a sound installation. The sound propagation between a speaker and microphone was used as the delay, and a paper cylinder was employed as the resonator. This paper explains the concept and implementation of this work at the 2017 HANARART exhibition. We examine the characteristics of the work, address its limitations, and discuss the possibility of its interpretation by means of a “re-physicalization.”

Exhibitors

Kazuhiro Jo

Kyushu Univ. / YCAM

Tomoya Matsuura

Master Student/Artist, Kyushu University


11:30am

Poster 1.02
An approach to stochastic spatialization - A case of Hot Pocket
by Ulf A. S. Holbrook


Many common and popular sound spatialisation techniques and methods rely on listeners being positioned in a "sweet spot" for an optimal listening position in a circle of speakers. This paper discusses a stochastic spatialisation method and its first iteration as implemented for the exhibition Hot Pocket at The Museum of Contemporary Art in Oslo in 2017. The method is implemented in Max and offers a matrix-based amplitude panning methodology which can provide a flexible means for the spatialisation of sounds.
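As a toy illustration of the stochastic idea described above — per-speaker gains drawn at random, here normalised for constant total power — one might write the following; the function name and the equal-power normalisation are this sketch's assumptions, as the abstract does not show the Max implementation.

```python
import math
import random

def stochastic_pan(n_speakers, rng=None):
    """Draw a random gain per speaker, then normalise so the sum of
    squared gains is 1 (constant total power wherever the sound lands)."""
    rng = rng or random.Random()
    gains = [rng.random() + 1e-12 for _ in range(n_speakers)]
    norm = math.sqrt(sum(g * g for g in gains))
    return [g / norm for g in gains]

# Eight-speaker ring, seeded for repeatability; total power stays 1.
gains = stochastic_pan(8, random.Random(1))
print(round(sum(g * g for g in gains), 6))  # → 1.0
```

Because no gain vector favours a fixed listening position, a scheme like this sidesteps the "sweet spot" assumption the abstract criticises.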

Exhibitors

Ulf A. S. Holbrook

PhD Researcher, University of Oslo


11:30am

Poster 1.03
AM MODE: Using AM and FM Synthesis for Acoustic Drum Set Augmentation
by Cory Champion & Mo H Zareei


AM MODE is a custom-designed software interface for electronic augmentation of the acoustic drum set. The software is used in the development of a series of recordings, likewise titled AM MODE. Programmed in Max/MSP, the software uses live audio input from individual instruments within the drum set as control parameters for modulation synthesis. Using a combination of microphones and MIDI triggers, audio signal features such as the velocity of a drum strike, or the frequency at which the drum resonates, are tracked, interpolated, and scaled to user specifications. The resulting series of recordings comprises the digitally generated output of the modulation engine, in addition to both raw and modulated signals from the acoustic drum set. In this way, the project explores drum set augmentation not only at the input, from a performative angle, but also at the output, where the acoustic and synthesized elements merge into each other, forming a sonic hybrid.
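As a minimal sketch of the underlying technique — amplitude modulation, with a control signal (for example an envelope tracked from a drum microphone) modulating a sine carrier — assuming nothing about the actual Max/MSP patch:

```python
import math

def am_modulate(carrier_freq, modulator, sr=44100, depth=1.0):
    """Classic AM: multiply a sine carrier by (1 + depth * m[n]), where
    m is a control signal such as an envelope followed from a drum mic."""
    return [(1.0 + depth * m) * math.sin(2 * math.pi * carrier_freq * n / sr)
            for n, m in enumerate(modulator)]

# With a zero modulator the output is just the unmodulated 440 Hz carrier.
out = am_modulate(440.0, [0.0] * 64)
```

Feeding a tracked drum envelope in as `modulator` makes the carrier's loudness follow each strike, which is the control relationship the abstract describes.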

Exhibitors

11:30am

Poster 1.04
Kinesynth: Patching, Modulating, and Mixing a Hybrid Kinesthetic Synthesizer.
by Don Derek Haddad & Joe Paradiso


This paper introduces the Kinesynth, a hybrid kinesthetic synthesizer that uses the human body as both an analog mixer and a modulator, using a combination of capacitive sensing in "transmit" mode and skin conductance. This is achieved when the body, through the skin, relays signals from control and audio sources to the inputs of the instrument. These signals can be harnessed from the environment, from the Kinesynth's internal synthesizer, or from an external instrument, making the Kinesynth a mediator between the body and the environment.

Exhibitors

11:30am

Poster 1.05
The XT Synth: A New Controller for String Players
by Gustavo Oliveira da Silveira


This paper describes the concept, design, and realization of two iterations of a new controller called the XT Synth. The development of the instrument came from the desire to maintain the expressivity and familiarity of string instruments, while adding the flexibility and power usually found in keyboard controllers. There are different examples of instruments that bring the physicality and expressiveness of acoustic instruments into electronic music, from “Do it yourself” (DIY) products to commercially available ones. This paper discusses the process and the challenges faced when creating a DIY musical instrument and then subsequently transforming the instrument into a product suitable for commercialization.


11:30am

Poster 1.06
Risky business: Disfluency as a design strategy
by S. M. Astrid Bin, Nick Bryan-Kinns & Andrew P. McPherson


This paper presents a study examining the effects of disfluent design on audience perception of digital musical instrument (DMI) performance. Disfluency, defined as a barrier to effortless cognitive processing, has been shown to generate better results in some contexts as it engages higher levels of cognition. We were motivated to determine whether disfluent design in a DMI would result in a risk state that audiences would be able to perceive, and whether this would have any effect on their evaluation of the performance. A DMI was produced that incorporated a disfluent characteristic: it would turn itself off if not constantly moved. Six physically identical instruments were produced, each in one of three versions: control (no disfluent characteristics), mild disfluency (turned itself off slowly), and heightened disfluency (turned itself off more quickly). Six percussionists each performed on one instrument for a live audience (N=31), and data was collected in the form of real-time feedback (via a mobile phone app) and post-hoc surveys. Though there was little difference in ratings of enjoyment between the versions of the instrument, the real-time and qualitative data suggest that disfluent behaviour in a DMI may be a way for audiences to perceive and appreciate performer skill.

Exhibitors

Astrid Bin

Queen Mary University of London, London, UK

Andrew McPherson

Reader, Queen Mary University of London, London, United Kingdom


11:30am

Poster 1.07
The Theremin Textural Expander
by Rachel Gibson


The voice of the theremin is more than just a simple sine wave. Its unique sound is made through two radio frequency oscillators that, when operating at almost identical frequencies, gravitate towards each other. Ultimately, this pull alters the sine wave, creating the signature sound of the theremin. The Theremin Textural Expander (TTE) explores other textures the theremin can produce when its sound is processed and manipulated through a Max/MSP patch and controlled via a MIDI pedalboard. The TTE extends the theremin’s ability, enabling it to produce five distinct new textures beyond the original. It also features a looping system that the performer can use to layer textures created with the traditional theremin sound. Ultimately, this interface introduces a new way to play and experience the theremin; it extends its expressivity, affording a greater range of compositional possibilities and greater flexibility in free improvisation contexts.

Exhibitors

Rachel Gibson

Student, Oberlin Conservatory of Music


11:30am

Poster 1.08
Siren: Interface for Pattern Languages
by Mert Toka, Can Ince & Mehmet Aydin Baytas


This paper introduces Siren, a hybrid system for algorithmic composition and live-coding performances. Its hierarchical structure allows small modifications to propagate and aggregate on lower levels for dramatic changes in the musical output. It uses functional programming language TidalCycles as the core pattern creation environment due to its inherent ability to create complex pattern relations with minimal syntax. Borrowing the best from TidalCycles, Siren augments the pattern creation process by introducing various interface level features: a multi-channel sequencer, local and global parameters, mathematical expressions, and pattern history. It presents new opportunities for recording, refining, and reusing the playback information with the pattern roll component. Subsequently, the paper concludes with a preliminary evaluation of Siren in the context of user interface design principles, which originates from the cognitive dimensions framework for musical notation design.

Exhibitors

11:30am

Poster 1.09
Developing a Performance Practice for Mobile Music Technology
by Spencer Salazar, Andrew Piepenbrink & Sarah Reid


This paper documents an extensive and varied series of performances by the authors over the past year using mobile technology, primarily iPad tablets running the Auraglyph musical sketchpad software. These include both solo and group performances, the latter under the auspices of the Mobile Ensemble of CalArts (MECA), a group created to perform music with mobile technology devices. As a whole, this diverse mobile technology-based performance practice leverages Auraglyph's versatility to explore a number of topical issues in electronic music performance, including the use of physical and acoustical space, audience participation, and interaction design of musical instruments.


11:30am

Poster 1.10
MOM: an Extensible Platform for Rapid Prototyping and Design of Electroacoustic Instruments
by Ali Momeni, Daniel McNamara & Jesse Stiles


This paper provides an overview of the design, prototyping, deployment and evaluation of a multi-agent interactive sound instrument named MOM (Mobile Object for Music). MOM combines a real-time signal processing engine implemented with Pure Data on an embedded Linux platform, with gestural interaction implemented via a variety of analog and digital sensors. Power, sound-input and sound-diffusion subsystems make the instrument autonomous and mobile. This instrument was designed in coordination with the development of an evening-length dance/music performance in which the performing musician is engaged in choreographed movements with the mobile instruments. The design methodology relied on a participatory process that engaged an interdisciplinary team made up of technologists, musicians, composers, choreographers, and dancers. The prototyping process relied on a mix of in-house and out-sourced digital fabrication processes intended to make the open source hardware and software design of the system accessible and affordable for other creators.

Exhibitors

11:30am

Poster 1.11
Harmonic Wand: An Instrument for Microtonal Control and Gestural Excitation
by Ben Luca Robertson & Luke Dahl


The Harmonic Wand is a transducer-based instrument that combines physical excitation, synthesis, and gestural control. Our objective was to design a device that affords exploratory modes of interaction with the performer’s surroundings, as well as precise control over microtonal pitch content and other concomitant parameters. The instrument is comprised of a hand-held wand, containing two piezo-electric transducers affixed to a pair of metal probes. The performer uses the wand to physically excite surfaces in the environment and capture resultant signals. Input materials are then processed using a novel application of Karplus-Strong synthesis, in which these impulses are imbued with discrete resonances. We achieved gestural control over synthesis parameters using a secondary tactile interface, consisting of four force-sensitive resistors (FSR), a fader, and momentary switch. As a unique feature of our instrument, we modeled pitch organization and associated parametric controls according to theoretical principles outlined in Harry Partch’s “monophonic fabric” of Just Intonation—specifically his conception of odentities, udentities, and a variable numerary nexus. This system classifies pitch content based upon intervallic structures found in both the overtone and undertone series. Our paper details the procedural challenges in designing the Harmonic Wand.
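For readers unfamiliar with the technique the wand builds on, the textbook form of Karplus-Strong synthesis — a noise burst circulating through a delay line with a two-point averaging filter — can be sketched as follows (this is the standard algorithm, not the authors' novel variation):

```python
import random

def karplus_strong(freq, sr=44100, n_samples=2000, seed=0):
    """Plucked-string tone at `freq` Hz: the delay-line length sets the
    pitch, and the averaging filter makes the 'string' ring down."""
    rng = random.Random(seed)
    period = int(sr / freq)
    buf = [rng.uniform(-1.0, 1.0) for _ in range(period)]  # excitation burst
    out = []
    for i in range(n_samples):
        out.append(buf[i % period])
        # Replace the current sample with the average of itself and its
        # successor: a gentle low-pass that damps the tone over time.
        buf[i % period] = 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

tone = karplus_strong(440)  # decaying pluck near 440 Hz
```

In the Harmonic Wand the excitation is not synthetic noise but the captured surface impulses, which is what lets each probed texture keep its own character while being imbued with a chosen pitch.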

Exhibitors

Ben Robertson

Graduate Student, University of Virginia
Ben Luca Robertson is a composer, experimental luthier, and co-founder of the independent record label, Aphonia Recordings. His work addresses an interest in autonomous processes, landscape, and biological systems—often by supplanting narrative structure with an emphasis on the...


11:30am

Poster 1.12
Sansa: A Modified Sansula for Extended Compositional Techniques Using Machine Learning
by McLean J Macionis & Ajay Kapur


Sansa is an extended sansula, a hyper-instrument that is similar in design and functionality to a kalimba or thumb piano. At the heart of this interface is a series of sensors that are used to augment the tone and expand the performance capabilities of the instrument. The sensor data is further exploited using the machine learning program Wekinator, which gives users the ability to interact and perform with the instrument using several different modes of operation. In this way, Sansa is capable of both solo acoustic performances as well as complex productions that require interactions between multiple technological mediums. Sansa expands the current community of hyper-instruments by demonstrating the ways that hardware and software can extend an acoustic instrument's functionality and playability in a live performance or studio setting.

Exhibitors

11:30am

Poster 1.13
Demo of interactions Between a Performer Playing a Smart Mandolin and Audience Members Using Musical Haptic Wearables
by Luca Turchet & Mathieu Barthet


This demo will showcase technologically mediated interactions between a performer playing a smart musical instrument (SMI) and audience members using Musical Haptic Wearables (MHWs). Smart Instruments are a family of musical instruments characterized by embedded computational intelligence, wireless connectivity, an embedded sound delivery system, and an onboard system for feedback to the player. They offer direct point-to-point communication between each other and with other portable sensor-enabled devices connected to local networks and to the Internet. MHWs are wearable devices for audience members which encompass haptic stimulation, gesture tracking, and wireless connectivity features. This demo will present an architecture enabling multidirectional creative communication between a performer playing a Smart Mandolin and audience members using armband-based MHWs.

Exhibitors

Luca Turchet

Luca Turchet is a postdoctoral researcher at the Centre for Digital Music and co-founder of MIND Music Labs. His research interests span the fields of new interfaces for musical expression, human-computer interaction, perception, and virtual reality. He is also a musician and composer...


11:30am

Poster 1.14
Mechatronic Expression: Reconsidering Expressivity in Music for Robotic Instruments
by Steven Kemper & Scott Barton


Robotic instrument designers tend to focus on the number of sound control parameters and their resolution when trying to develop expressivity in their instruments. These parameters afford greater sonic nuance related to elements of music that are traditionally associated with expressive human performances including articulation, timbre, dynamics, and phrasing. Equating the capacity for sonic nuance and musical expression stems from the “transitive” perspective that musical expression is an act of emotional communication from performer to listener. However, this perspective is problematic in the case of robotic instruments since we do not typically consider machines to be capable of expressing emotion. Contemporary theories of musical expression focus on an “intransitive” perspective, where musical meaning is generated as an embodied experience. Understanding expressivity from this perspective allows listeners to interpret performances by robotic instruments as possessing their own expressive meaning, even though the performer is a machine. It also enables musicians working with robotic instruments to develop their own unique vocabulary of expressive gestures unique to mechanical instruments. This paper explores these issues of musical expression, introducing the concept of mechatronic expression as a compositional and design strategy that highlights the musical and performative capabilities unique to robotic instruments.


11:30am

Poster 1.15
Interactive Tango Milonga: Designing DMIs for the Social Dance Context
by Courtney Brown


Musical participation has brought individuals together in on-going communities throughout human history, aiding in the kinds of social integration essential for wellbeing. The design of Digital Musical Instruments (DMIs), however, has generally been driven by idiosyncratic artistic concerns, Western art music and dance traditions of expert performance, and short-lived interactive art installations engaging a broader public of musical novices. These DMIs rarely engage with the problems of on-going use in musical communities with existing performance idioms, repertoire, and social codes with participants representing the full learning curve of musical skill, such as social dance. Our project, Interactive Tango Milonga, an interactive Argentine tango dance system for social dance addresses these challenges in order to innovate connection, the feeling of intense relation between dance partners, music, and the larger tango community.

Exhibitors

Courtney Brown

Assistant Professor, Southern Methodist University
Courtney Brown is a sound artist, musician, researcher, and tango dancer. Her work has been featured and performed in the United States and Europe. Her interactive sound installation and musical instrument, ‘Rawr! A Study in Sonic Skulls’ received an Honorary Mention from the...


11:30am

Poster 1.16
Vocal Musical Expression with a Tactile Resonating Device and its Psychophysiological Effects
by Rebecca Kleinberger

This paper presents an experiment to investigate how new types of vocal practices can affect psychophysiological activity. We know that health can influence the voice, but can a certain use of the voice influence health through modification of mental and physical state? This study took place in the setting of the Vocal Vibrations installation. For the experiment, participants engage in a multi-sensory vocal exercise with a limited set of guidance, to obtain a wide spectrum of vocal performances across participants. We compare characteristics of those vocal practices to the participants' heart rate, breathing rate, electrodermal activity, and mental states. We obtained significant results suggesting that we can correlate psychophysiological states with characteristics of the vocal practice if we also take into account biographical information, and in particular measurement of how much people “like” their own voice.


Exhibitors

Rebecca Kleinberger

MIT Media Lab, Cambridge, Massachusetts, United States


11:30am

Poster 1.17
CABOTO: A Graphic-Based Interactive System for Composing and Performing Electronic Music
by Riccardo Marogna


CABOTO is an interactive system for live performance and composition. A graphic score sketched on paper is read by a computer vision system. The graphic elements are scanned following a symbolic-raw hybrid approach: they are recognised and classified according to their shapes, but also scanned as waveforms and optical signals. All this information is mapped into the synthesis engine, which implements different kinds of synthesis techniques for different shapes. In CABOTO the score is viewed as a cartographic map explored by navigators. These navigators traverse the score in a semi-autonomous way, scanning the graphic elements found along their paths. The system tries to challenge the boundaries between the concepts of composition, score, performance, and instrument, since the musical result depends both on the composed score and on the way the navigators traverse it during the live performance.

Exhibitors

Riccardo Marogna

Musician, Technician, Magician, Institute of Sonology, Royal Conservatoire in The Hague
Musician, improviser, composer, born in Verona (Italy), currently based in The Hague (NL). His research is focused on developing an improvisational language in the electro-acoustic scenario, where the electronic manipulations and the acoustic sounds merge seamlessly in the continuum...


11:30am

Poster 1.18
Performance Systems for Live Coders and Non Coders
by Avneesh Sarwate, Ryan Taylor Rose, Jason Freeman & Jack Armitage


This paper explores the question of how live coding musicians can perform with musicians who are not using code (such as acoustic instrumentalists or those using graphical and tangible electronic interfaces). It investigates performance systems that facilitate improvisation in which the musicians can interact not just by listening to each other and changing their own output, but also by manipulating the data stream of the other performer(s). In the course of performance-led research, four prototypes were built and analyzed using concepts from NIME and creative collaboration literature. Based on this analysis it was found that such systems should 1) provide a commonly modifiable visual representation of musical data for both coder and non-coder, and 2) provide some independent means of sound production for each user, giving the non-coder the ability to slow down and make non-realtime decisions for greater performance flexibility.

Exhibitors

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His topic is on supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.


1:30pm

Paper Session 2: Software & Algorithms

Speakers

Jack Atherton

PhD Candidate, CCRMA, Stanford University
I study Artful Design, as applied to programming as a mode of creative self-expression. Within that, I study the creation of new programming languages that are manipulated from within VR to create musical sculptures and environments. I use these as a lens to investigate what it means...

Oliver Bown

Senior Lecturer, UNSW Faculty of Art & Design, Interactive Media Lab
I am a researcher and maker working with creative technologies. I come from a highly diverse academic background spanning social anthropology, evolutionary and adaptive systems, music informatics and interaction design, with a parallel career in electronic music and digital art spanning...

Rodrigo Schramm

Associate Professor, Federal University of Rio Grande do Sul


1:30pm

Paper 2.1
A Framework for Modular VST-based NIMEs Using EDA and Dependency Injection
by Patrick Palsbröker, Christine Steinmeier & Dominic Becking


In order to facilitate spontaneous music-making, the prototype of an instrument allowing a more natural learning approach was developed as part of the research project (name omitted for anonymity purposes). The result was a modular system consisting of several VST plug-ins, which on the one hand provides a drum interface to create sounds and tones, and on the other generates or manipulates music through dance movement, in order to make the more abstract characteristics of music easier to understand. This paper describes the development of a framework that defines a new software concept for the prototype, which has since been further developed and evaluated several times. The framework improves the maintainability and extensibility of the system and eliminates design weaknesses. To this end, the existing system is first analyzed and requirements for the new framework, which is based on the concepts of event-driven architecture and dependency injection, are defined. The components are then transferred to the new system and its performance is assessed.


1:30pm

Paper 2.2
Chunity: Integrated Audiovisual Programming in Unity
by Jack Atherton & Ge Wang


Chunity is a programming environment for the design of interactive audiovisual games, instruments, and experiences. It embodies an audio-driven, sound-first approach that integrates audio programming and graphics programming in the same workflow, taking advantage of strongly-timed audio programming features of the ChucK programming language and the state-of-the-art real-time graphics engine found in Unity. We describe both the system and its intended workflow for the creation of expressive audiovisual works. Chunity was evaluated as the primary software platform in a computer music and design course, where students created a diverse assortment of interactive audiovisual software. We present results from the evaluation and discuss Chunity's usability, utility, and aesthetics as a way of working. Through these, we argue for Chunity as a unique and useful way to program sound, graphics, and interaction in tandem, giving users the flexibility to use a game engine to do much more than "just" make games.
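The "strongly-timed" model that Chunity inherits from ChucK means code advances a logical audio clock explicitly, so musical events are sample-accurate regardless of when the code happens to execute. The sketch below mimics that idea in Python only for illustration; it is not the Chunity or ChucK API.

```python
# Miniature of ChucK-style strong timing: logical time advances only when
# the program says so, so scheduled events land at exact sample positions.
# Hypothetical sketch, not the Chunity API.

SAMPLE_RATE = 44100

class StronglyTimed:
    def __init__(self):
        self.now = 0          # logical time, in samples
        self.events = []

    def advance(self, seconds):
        """Explicitly move logical time forward (ChucK's `dur => now`)."""
        self.now += int(seconds * SAMPLE_RATE)

    def note(self, pitch):
        """Schedule a note at the current logical time."""
        self.events.append((self.now, pitch))

vm = StronglyTimed()
for pitch in [60, 62, 64]:
    vm.note(pitch)
    vm.advance(0.5)           # exactly half a second between notes

print(vm.events)  # [(0, 60), (22050, 62), (44100, 64)]
```

In Chunity the graphics side can then read this audio clock each frame, which is what makes the workflow "audio-driven, sound-first".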

Speakers

Jack Atherton

PhD Candidate, CCRMA, Stanford University
I study Artful Design, as applied to programming as a mode of creative self-expression. Within that, I study the creation of new programming languages that are manipulated from within VR to create musical sculptures and environments. I use these as a lens to investigate what it means... Read More →


1:30pm

Paper 2.3
Exploring Continuous Time Recurrent Neural Networks through Novelty Search
by Steffan Carlos Ianigro & Oliver Bown


Within this paper, we aim to expand on prior research into the use of Continuous Time Recurrent Neural Networks (CTRNNs) as evolvable generators of musical structures such as audio waveforms. This type of neural network has a compact structure and is capable of producing a large range of temporal dynamics. Due to these properties, we believe that CTRNNs could provide a genotype structure for an evolutionary algorithm (EA) that offers musicians many creative possibilities for the exploration of sound. In prior work, we have explored the use of interactive and target-based EA designs to tap into the creative possibilities of CTRNNs. Our results have shown promise for the use of CTRNNs in the audio domain. However, we feel that neither EA design allows both open-ended discovery and effective navigation of the CTRNN audio search space by musicians. Within this paper, we explore the possibility of using novelty search as an alternative algorithm that facilitates both open-ended and rapid discovery of the CTRNN creative search space.
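Novelty search replaces an explicit fitness objective with a behavioral-distance score against an archive of previously seen behaviors. The minimal sketch below is illustrative only: in the paper the genomes would be CTRNN parameters and the behaviors audio features of the network's output, whereas here the behavior is simply the genome itself.

```python
# Minimal novelty-search loop (illustrative; not the authors' system).
import random

def behavior(genome):
    # Stand-in: the paper would measure audio features of the CTRNN's
    # output waveform, not the genome itself.
    return genome

def novelty(b, archive, k=3):
    """Mean distance to the k nearest behaviors in the archive."""
    if not archive:
        return float("inf")
    dists = sorted(sum((x - y) ** 2 for x, y in zip(b, a)) ** 0.5
                   for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=20, pop_size=10, threshold=0.5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        scored = [(novelty(behavior(g), archive), g) for g in pop]
        scored.sort(reverse=True)
        for score, g in scored:
            if score > threshold:          # sufficiently new: archive it
                archive.append(behavior(g))
        # mutate the most novel half to form the next population
        parents = [g for _, g in scored[:pop_size // 2]]
        pop = [[x + rng.gauss(0, 0.2) for x in rng.choice(parents)]
               for _ in range(pop_size)]
    return archive

archive = novelty_search()
print(len(archive))  # the archive grows as new behaviors are discovered
```

The archive itself becomes a map of the explored space, which is what makes the method attractive for open-ended sonic exploration.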

Speakers

Oliver Bown

Senior Lecturer, UNSW Faculty of Art & Design, Interactive Media Lab
I am a researcher and maker working with creative technologies. I come from a highly diverse academic background spanning social anthropology, evolutionary and adaptive systems, music informatics and interaction design, with a parallel career in electronic music and digital art spanning... Read More →


1:30pm

Paper 2.4
All the Noises: Hijacking Listening Machines for Performative Research
by John Bowers & Owen Green


Research into machine listening has intensified in recent years, creating a variety of techniques for recognising musical features suitable, for example, for musicological analysis or commercial application in song recognition. Within NIME, several projects seek to make these techniques useful in real-time music making. However, we question whether the functionally-oriented approaches inherited from engineering domains, which much machine listening research manifests, are fully suited to the exploratory, divergent, boundary-stretching, uncertainty-seeking, playful and irreverent orientations of many artists. To explore this, we engaged in a concerted collaborative design exercise in which many different listening algorithms were implemented and presented with input which challenged their customary range of application and the implicit norms of musicality which research can take for granted. An immersive 3D spatialised multichannel environment was created in which the algorithms could be explored in a hybrid installation/performance/lecture form of research presentation. The paper closes with reflections on the creative value of ‘hijacking’ formal approaches into deviant contexts, the typically undocumented practical know-how required to make algorithms work, the productivity of a playfully irreverent relationship between engineering and artistic approaches to NIME, and a sketch of a sonocybernetic aesthetics for our work.


1:30pm

Paper 2.5
A polyphonic pitch tracking embedded system for rapid instrument augmentation
by Rodrigo Schramm, Federico Visi, André Brasil & Marcelo O Johann


This paper presents a system for easily augmenting polyphonic pitched instruments. The entire system is designed to run on a low-cost embedded computer, suitable for live performance and easy to customise for different use cases. The core of the system implements real-time spectrum factorisation, decomposing polyphonic audio input signals into music note activations. New instruments can be easily added to the system with the help of custom spectral template dictionaries. Instrument augmentation is achieved by replacing or mixing the instrument's original sounds with a large variety of synthetic or sampled sounds, which follow the polyphonic pitch activations.
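The core idea of spectrum factorisation against a template dictionary can be shown in miniature: approximate an observed spectrum as a nonnegative combination of per-note spectral templates, and read the combination weights as note activations. The toy below uses NMF-style multiplicative updates on hypothetical four-bin templates; it is a sketch of the technique, not the authors' real-time implementation.

```python
# Toy spectrum factorisation: estimate nonnegative note activations `a`
# so that sum_k a[k] * templates[k] approximates the observed spectrum.

def activations(spectrum, templates, iters=200):
    """Multiplicative-update solve for a >= 0 (NMF with fixed templates)."""
    n = len(templates)
    a = [1.0] * n
    for _ in range(iters):
        recon = [sum(a[k] * templates[k][f] for k in range(n))
                 for f in range(len(spectrum))]
        for k in range(n):
            num = sum(templates[k][f] * spectrum[f]
                      for f in range(len(spectrum)))
            den = sum(templates[k][f] * recon[f]
                      for f in range(len(spectrum))) + 1e-9
            a[k] *= num / den          # keeps activations nonnegative
    return a

# Two hypothetical note templates over 4 frequency bins.
templates = [[1.0, 0.0, 0.5, 0.0],   # note "C"
             [0.0, 1.0, 0.0, 0.5]]   # note "E"
mix = [2.0, 1.0, 1.0, 0.5]           # two units of C plus one unit of E
a = activations(mix, templates)
print([round(x, 2) for x in a])      # close to [2.0, 1.0]
```

The recovered activations are what would then drive the replacement synthetic or sampled sounds.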

Speakers

Rodrigo Schramm

Associate Professor, Federal University of Rio Grande do Sul


3:00pm

3:30pm

Paper Session 3: Movement & Gesture

Speakers

Balandino Di Donato

Postdoctoral Researcher, Goldsmiths, University of London
Interested in the field of Music and Human-Computer Interaction. Currently working on the BioMusic ERC POC funded project at Goldsmiths, University of London.

Michael Gurevich

Associate Professor of Performing Arts Technology, University of Michigan

Lamtharn Hantrakul

Yale University|New Haven|CT|United States

R. Benjamin Knapp

Scholar | Researcher | Performer
R. Benjamin Knapp is the Founding Director for the Institute for Creativity, Arts, and Technology (ICAT) and Professor of Computer Science at Virginia Tech. ICAT seeks to promote research and education at the nexus of art, design, engineering, and science. For more than 25 years... Read More →

Koray Tahiroğlu

Research Fellow, Aalto University|Espoo||Finland
http://sopi.aalto.fi/


3:30pm

Paper 3.1
Contextualising Idiomatic Gestures in Musical Interactions with NIMEs
by Koray Tahiroglu, Michael Gurevich & R. Benjamin Knapp


This paper introduces various ways that idiomatic gestures emerge in performance practice with new musical instruments. It demonstrates that idiomatic gestures can play an important role in the development of personalized performance practices that can be the basis for the development of style and expression. Three detailed examples – biocontrollers, accordion-inspired instruments, and a networked intelligent controller – illustrate how a complex suite of factors throughout the design, composition and performance processes can influence the development of idiomatic gestures. We argue that the explicit consideration of idiomatic gestures throughout the lifecycle of new instruments can facilitate the emergence of style and give rise to performances that can develop rich layers of meaning.

Speakers

Michael Gurevich

Associate Professor of Performing Arts Technology, University of Michigan

R. Benjamin Knapp

Scholar | Researcher | Performer
R. Benjamin Knapp is the Founding Director for the Institute for Creativity, Arts, and Technology (ICAT) and Professor of Computer Science at Virginia Tech. ICAT seeks to promote research and education at the nexus of art, design, engineering, and science. For more than 25 years... Read More →

Koray Tahiroğlu

Research Fellow, Aalto University|Espoo||Finland
http://sopi.aalto.fi/


3:30pm

Paper 3.2
GestureRNN: A neural gesture system for the Roli Lightpad Block
by Lamtharn Hantrakul


Machine learning and deep learning have recently made a large impact in the artistic community. In many of these applications, however, the model is used to render the high-dimensional output directly, e.g. every individual pixel in the final image. Humans arguably operate in much lower-dimensional spaces during the creative process, e.g. the broad movements of a brush. In this paper, we design a neural gesture system for music generation based on this concept. Instead of directly generating audio, we train a Long Short-Term Memory (LSTM) recurrent neural network to generate instantaneous position and pressure on the Roli Lightpad instrument. These generated coordinates in turn give rise to the sonic output defined in the synth engine. The system relies on learning these movements from a musician who has already developed a palette of musical gestures idiomatic to the Lightpad. Unlike many deep learning systems that render high-dimensional output, our low-dimensional system can run in real time, enabling the first real-time gestural duet of its kind between a player and a recurrent neural network on the Lightpad instrument.
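The low-dimensional autoregressive idea is simple to sketch: at each step a recurrent model maps the previous (x, y, pressure) triple and its hidden state to the next triple, and only those three numbers are generated. The trained LSTM is replaced below by a tiny untrained recurrent cell with fixed weights, purely for illustration.

```python
# Sketch of autoregressive gesture generation on a unit touch surface.
# A trained LSTM is stood in for by a toy recurrent cell (illustration only).
import math

def step(prev, h):
    """One recurrent step: new hidden state and next (x, y, pressure)."""
    h = [math.tanh(0.5 * h[i] + prev[i % 3]) for i in range(len(h))]
    x = 0.5 + 0.4 * math.tanh(h[0])
    y = 0.5 + 0.4 * math.tanh(h[1])
    pressure = max(0.0, min(1.0, 0.5 + 0.5 * h[2]))
    return (x, y, pressure), h

def generate(n_steps, start=(0.5, 0.5, 0.0), hidden_size=3):
    """Feed each generated point back in as the next input."""
    h = [0.0] * hidden_size
    out, point = [], start
    for _ in range(n_steps):
        point, h = step(point, h)
        out.append(point)
    return out

trajectory = generate(16)
# every generated point stays inside the pad's unit square
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y, _ in trajectory)
```

Because each step emits only three scalars, such a loop is cheap enough to run inside a real-time performance, which is the point the abstract makes.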

Speakers

Lamtharn Hantrakul

Yale University|New Haven|CT|United States


3:30pm

Paper 3.3
Myo Mapper: a Myo armband to OSC mapper
by Balandino Di Donato, Jamie Bullock & Atau Tanaka


Myo Mapper is a free and open source cross-platform application to map data from the gestural device Myo armband into Open Sound Control (OSC) messages. It represents a `quick and easy' solution for exploring the Myo's potential for realising new interfaces for musical expression. Together with details of the software, this paper reports some applications in which Myo Mapper has been successfully used, along with a qualitative evaluation. We then propose guidelines for using Myo data in interactive artworks, based on insights gained from the works described and from the evaluation. Findings show that Myo Mapper empowers artists and non-expert developers to easily take advantage of high-level features of Myo data for realising interactive artistic works. It also facilitates the recognition of poses and gestures beyond those included with the product, by using third-party interactive machine learning software.
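What "mapping to OSC" means at the byte level: an OSC 1.0 message is a NUL-padded address string, a type-tag string, then big-endian binary arguments. The stdlib encoder below is illustrative (Myo Mapper itself is a GUI application, and the `/myo/orientation` address is a hypothetical example, not its documented namespace).

```python
# Minimal OSC 1.0 message encoder using only the standard library.
import struct

def osc_pad(b):
    """NUL-terminate and pad bytes to a multiple of 4, per OSC 1.0."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *floats):
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)       # 32-bit big-endian float
    return msg

# e.g. a hypothetical yaw/pitch/roll message from the armband
packet = osc_message("/myo/orientation", 0.1, 0.2, 0.3)
print(len(packet))  # 40 = 20 (address) + 8 (type tags) + 12 (floats)
```

Sending such packets over UDP to a synthesis environment (Max, Pd, SuperCollider) is all the "mapping" layer has to do, which is why OSC suits this kind of bridge application.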

Speakers

Balandino Di Donato

Postdoctoral Researcher, Goldsmiths, University of London
Interested in the field of Music and Human-Computer Interaction. Currently working on the BioMusic ERC POC funded project at Goldsmiths, University of London.


3:30pm

Paper 3.4
Real-Time Motion Capture Analysis and Music Interaction with the Modosc Descriptor Library
by Federico Visi & Luke Dahl

We present modosc, a set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time. The library contains methods for extracting descriptors useful for expressive movement analysis and sonic interaction design. Moreover, modosc is designed to address the data handling and synchronization issues that often arise when working with complex marker sets. This is achieved by adopting a multiparadigm approach facilitated by odot and Open Sound Control to overcome some of the limitations of conventional Max programming, and structure incoming and outgoing data streams in a meaningful and easily accessible manner. After describing the contents of the library and how data streams are structured and processed, we report on a sonic interaction design use case involving motion feature extraction and machine learning.
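Among the simplest descriptors such libraries compute from raw marker data are per-frame speed and a smoothed "quantity of motion". The generic stdlib sketch below illustrates the computation only; the function names are hypothetical and do not correspond to modosc's Max abstractions.

```python
# Two basic motion descriptors from raw 3D marker positions (illustrative).

def speeds(frames, dt):
    """Frame-to-frame speed of one 3D marker, in units per second."""
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(frames, frames[1:]):
        d = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        out.append(d / dt)
    return out

def quantity_of_motion(frames, dt, alpha=0.3):
    """Exponentially smoothed speed, a common expressive-movement feature."""
    qom = 0.0
    for s in speeds(frames, dt):
        qom = alpha * s + (1 - alpha) * qom
    return qom

# A marker moving 0.1 units along x each frame at 100 fps: speed = 10.0
frames = [(0.1 * i, 0.0, 0.0) for i in range(10)]
print(round(speeds(frames, 0.01)[0], 3))
```

Streaming such descriptors out over OSC, as modosc does via odot, lets mapping and machine-learning stages consume movement features rather than raw marker coordinates.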

Speakers

5:30pm

Concert 3A @The Cube

Artists

Erik Nyström

Research Fellow, University of Birmingham
Erik Nyström makes live computer music, electroacoustic works, and sound installations. Currently his main artistic research interests are sound synthesis, space, algorithmic interactive composition/performance, and posthuman conceptions of sound. He is a Leverhulme Research Fellow... Read More →

Palle Dahlstedt

University of Gothenburg, Aalborg University
Palle Dahlstedt (b.1971), Swedish improviser, researcher, and composer of  everything from chamber and orchestral music to interactive and  autonomous computer pieces, receiving the Gaudeamus Music Prize in 2001. Currently Obel Professor of Art & Technology at Aalborg University... Read More →


5:30pm

Music Piece 3A.1
Sympathetic Resonance
by Monica Bolles


Sympathetic Resonance is the vibrational effect one vibrating body can invoke in another of similar harmonic likeness. This physical phenomenon has been exploited in musical instrument design throughout time. This piece explores the physical nature of sympathetic resonance while questioning the ways in which we as humans connect and resonate with each other and the world around us. Conceptual and visual design, and audio recording and spatialization by Monica Bolles. Musical composition by Zachary Patten. Special thanks to Eric Lyon, Tanner Upthegrove, and Virginia Tech.

Artists

Monica Bolles

Monica Bolles is an artist, audio engineer, and composer who brings together her love of multimedia arts, science, sound, and immersive environments to create and build experiences that question our human relationships and interactions with the world around us. She is fascinated by... Read More →


5:30pm

Music Piece 3A.2
Libration Perturbed
by Palle Dahlstedt

Libration Perturbed is an immersive performance, and an instrument, originally composed for the speaker dome at ZKM, Karlsruhe, Germany, in September 2017. The performer controls a bank of 64 virtual inter-connected strings, and has individual control of each string. All vibrations of the strings come from physical vibrations in the keyboard interface and its casing, captured through contact microphones. In this sense, it is a hybrid acoustic-electronic instrument. All strings are connected, and can inter-resonate as a "super-harp". If the resonance is strong, the strings go into chaotic motion. The hybrid digital/acoustic approach and the enhanced keyboard provide for an expressive and very physical interaction, and a strong immersive experience.

Artists

Palle Dahlstedt

University of Gothenburg, Aalborg University
Palle Dahlstedt (b.1971), Swedish improviser, researcher, and composer of  everything from chamber and orchestral music to interactive and  autonomous computer pieces, receiving the Gaudeamus Music Prize in 2001. Currently Obel Professor of Art & Technology at Aalborg University... Read More →


5:30pm

Music Piece 3A.3
Texton Mirrors
by Erik Nyström

The late neuroscientist Bela Julesz invented the term 'textons' for describing the ‘perceptual atoms’ of texture in visual perception. In the present performance, the idea of textons may be used to describe the spatial distribution of sonic blots, particles, and figures in a 3D sound environment. A montage of textons is created through improvisation and algorithmic processes, where machine learning is used to classify, generate and match sonic morphologies as an extension of performance through artificial intelligence. Mirrored resonances are formed in the multi-dimensional auditory projections of textons, and in the way that machine intelligence creates a posthuman reflection of listening perception.
The texture generating process and performance interface apply neural networks, clustering algorithms, self-organising maps, cellular automata and spatial feedback networks. The work was realised as part of a Leverhulme Fellowship at BEAST (Birmingham Electroacoustic Sound Theatre), University of Birmingham, researching synthesis of spatial texture in composition and performance.

Artists

Erik Nyström

Research Fellow, University of Birmingham
Erik Nyström makes live computer music, electroacoustic works, and sound installations. Currently his main artistic research interests are sound synthesis, space, algorithmic interactive composition/performance, and posthuman conceptions of sound. He is a Leverhulme Research Fellow... Read More →


5:30pm

Music Piece 3A.4
The Murmurator
by Eli Stine & Kevin Davis

The Murmurator is a semi-improvised immersive composition created using a novel, biological model-driven digital musical instrument. The system is built around a three-dimensional bird flocking simulation that spatializes and affects different sonic characteristics of a corpus of granularized audio files. Diffused over a multi-channel system, the work alternates between densely immersive ambient textures and isolated spatialized gestures and back. The result is a work that harnesses the emergent behaviors of the biological model while at the same time maximizing “liveness” and the improvisational decisions of the performers through exploration of the musical space of the performance venue.
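The mapping idea behind the piece can be shown in miniature: a flock of agents moves in 3D, and each agent's position is converted into spatialization parameters. The toy below implements one boids rule (cohesion) and maps positions to stereo pan and distance-based amplitude; it is a hypothetical model, not the authors' instrument.

```python
# Toy flocking-to-spatialization mapping (illustrative only).
import math, random

def flock_step(boids, cohesion=0.01):
    """Pull every boid slightly toward the flock centroid (one boids rule)."""
    n = len(boids)
    cx = sum(b[0] for b in boids) / n
    cy = sum(b[1] for b in boids) / n
    cz = sum(b[2] for b in boids) / n
    return [(x + cohesion * (cx - x),
             y + cohesion * (cy - y),
             z + cohesion * (cz - z)) for x, y, z in boids]

def spatialize(boid, listener=(0.0, 0.0, 0.0)):
    """Map a 3D position to (pan in [-1, 1], amplitude in (0, 1])."""
    x, y, z = boid
    pan = max(-1.0, min(1.0, x))
    amp = 1.0 / (1.0 + math.dist(boid, listener))
    return pan, amp

rng = random.Random(0)
boids = [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
         for _ in range(20)]
boids = flock_step(boids)
params = [spatialize(b) for b in boids]
assert all(-1.0 <= pan <= 1.0 and 0.0 < amp <= 1.0 for pan, amp in params)
```

In the full piece each agent would additionally steer granular-synthesis parameters of its assigned audio corpus, so the emergent flock behavior is heard as well as spatialized.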


6:30pm

Concert 3B @The Cube

Artists

Erik Nyström

Research Fellow, University of Birmingham
Erik Nyström makes live computer music, electroacoustic works, and sound installations. Currently his main artistic research interests are sound synthesis, space, algorithmic interactive composition/performance, and posthuman conceptions of sound. He is a Leverhulme Research Fellow... Read More →

Palle Dahlstedt

University of Gothenburg, Aalborg University
Palle Dahlstedt (b.1971), Swedish improviser, researcher, and composer of  everything from chamber and orchestral music to interactive and  autonomous computer pieces, receiving the Gaudeamus Music Prize in 2001. Currently Obel Professor of Art & Technology at Aalborg University... Read More →


6:30pm

Music Piece 3B.1
Sympathetic Resonance
by Monica Bolles


Sympathetic Resonance is the vibrational effect one vibrating body can invoke in another of similar harmonic likeness. This physical phenomenon has been exploited in musical instrument design throughout time. This piece explores the physical nature of sympathetic resonance while questioning the ways in which we as humans connect and resonate with each other and the world around us. Conceptual and visual design, and audio recording and spatialization by Monica Bolles. Musical composition by Zachary Patten. Special thanks to Eric Lyon, Tanner Upthegrove, and Virginia Tech.

Artists

Monica Bolles

Monica Bolles is an artist, audio engineer, and composer who brings together her love of multimedia arts, science, sound, and immersive environments to create and build experiences that question our human relationships and interactions with the world around us. She is fascinated by... Read More →


6:30pm

Music Piece 3B.2
Libration Perturbed
by Palle Dahlstedt

Libration Perturbed is an immersive performance, and an instrument, originally composed for the speaker dome at ZKM, Karlsruhe, Germany, in September 2017. The performer controls a bank of 64 virtual inter-connected strings, and has individual control of each string. All vibrations of the strings come from physical vibrations in the keyboard interface and its casing, captured through contact microphones. In this sense, it is a hybrid acoustic-electronic instrument. All strings are connected, and can inter-resonate as a "super-harp". If the resonance is strong, the strings go into chaotic motion. The hybrid digital/acoustic approach and the enhanced keyboard provide for an expressive and very physical interaction, and a strong immersive experience.

Artists

Palle Dahlstedt

University of Gothenburg, Aalborg University
Palle Dahlstedt (b.1971), Swedish improviser, researcher, and composer of  everything from chamber and orchestral music to interactive and  autonomous computer pieces, receiving the Gaudeamus Music Prize in 2001. Currently Obel Professor of Art & Technology at Aalborg University... Read More →


6:30pm

Music Piece 3B.3
Texton Mirrors
by Erik Nyström

The late neuroscientist Bela Julesz invented the term textons for describing the ‘perceptual atoms’ of texture in visual perception. In the present performance, the idea of textons may be used to describe the spatial distribution of sonic blots, particles, and figures in a 3D sound environment. A montage of textons is created through improvisation and algorithmic processes, where machine learning is used to classify, generate and match sonic morphologies as an extension of performance through artificial intelligence. Mirrored resonances are formed in the multi-dimensional auditory projections of texton clusters, and in the way that artificial intelligence creates a posthuman reflection of listening perception. The work was realised as part of a Leverhulme Fellowship at BEAST (Birmingham Electroacoustic Sound Theatre), University of Birmingham, researching synthesis of spatial texture in composition and performance.

Artists

Erik Nyström

Research Fellow, University of Birmingham
Erik Nyström makes live computer music, electroacoustic works, and sound installations. Currently his main artistic research interests are sound synthesis, space, algorithmic interactive composition/performance, and posthuman conceptions of sound. He is a Leverhulme Research Fellow... Read More →


6:30pm

Music Piece 3B.4
The Murmurator
by Eli Stine & Kevin Davis

The Murmurator is a semi-improvised immersive composition created using a novel, biological model-driven digital musical instrument. The system is built around a three-dimensional bird flocking simulation that spatializes and affects different sonic characteristics of a corpus of granularized audio files. Diffused over a multi-channel system, the work alternates between densely immersive ambient textures and isolated spatialized gestures and back. The result is a work that harnesses the emergent behaviors of the biological model while at the same time maximizing “liveness” and the improvisational decisions of the performers through exploration of the musical space of the performance venue.


8:00pm

Concert 4: Evening with Pamela Z

Artists

Ben Robertson

Graduate Student, University of Virginia
Ben Luca Robertson is a composer, experimental luthier, and co-founder of the independent record label, Aphonia Recordings. His work addresses an interest in autonomous processes, landscape, and biological systems—often by supplanting narrative structure with an emphasis on the... Read More →

Luke Dahl

Assistant Professor of Composition and Computing Technologies, University of Virginia

Pamela Z

Composer | Performer | Media Artist
Pamela Z is a composer/performer and media artist who works primarily with voice, live electronic processing, sampled sound, and video. A pioneer of live digital looping techniques, she processes her voice in real time to create dense, complex sonic layers. Her solo works combine... Read More →

Ted Coffey

NIME Music Co-Chair, University of Virginia


8:00pm

Music Piece 4.1
Collective Response: Moving Forward 
by Becky Brown, Matthew Burtner, A. D. Carson, Ted Coffey, Luke Dahl, Mona Kasra, Ben Robertson,  Ryan Maguire, Travis Thatcher & M.I.C.E.

Artists

Ben Robertson

Graduate Student, University of Virginia
Ben Luca Robertson is a composer, experimental luthier, and co-founder of the independent record label, Aphonia Recordings. His work addresses an interest in autonomous processes, landscape, and biological systems—often by supplanting narrative structure with an emphasis on the... Read More →

Luke Dahl

Assistant Professor of Composition and Computing Technologies, University of Virginia

Ted Coffey

NIME Music Co-Chair, University of Virginia


8:00pm

Music Piece 4.2
Who Was That Timbre I Saw You With?
by Kerry Hagan & Miller Puckette

Using cheap game controllers, two titans of computer music engage in a virtual thumb-wrestling match. As Shakespeare never wrote, "By the twitching of my thumbs, something pitchèd this way comes."

Artists

Kerry Hagan

Lecturer Above the Bar, University of Limerick

Miller Puckette

Professor, University of California, San Diego


8:00pm

Music Piece 4.3
Evening Concert with Pamela Z
by Pamela Z

Artists

Pamela Z

Composer | Performer | Media Artist
Pamela Z is a composer/performer and media artist who works primarily with voice, live electronic processing, sampled sound, and video. A pioneer of live digital looping techniques, she processes her voice in real time to create dense, complex sonic layers. Her solo works combine... Read More →


8:45pm

NIME Projection Mapping Exhibit | 2018 | CURATED
by Armi Behzad, David J Franusich, Mahshid Gorjian, George Hardebeck, Tacie Jones, Xindi Liu, Daniel Robert Monzel, Huy Quoc Ngo, Heather Arnold, Jesse Bibel, Zachary Cortez, Justus Darby, Nishat Jamil, Antonia Marigliani, Pei Qiu, Michael Rhoades & Yiming Wang

The School of Visual Arts projection mapping class (Art 4544 and 5724), taught by Thomas Tucker, in collaboration with ICAT and the Moss Arts Center, has created a large-scale projection mapping project which will be projected onto the south-west facade of the Moss Arts Center after dark.

Please note that this session will only take place if the weather permits. 

Exhibitors

10:30pm

Concert 5


Artists

Angelo Fraietta

Post doctoral fellow, University of New South Wales
I specialise in embedded hardware design.

Anna Xambó

Postdoctoral Research Assistant, Queen Mary University of London

Dr Mark Bokowiec

Studio Manager, University of Huddersfield

Mengtai Zhang

MFA Sound Art, Columbia University
Interdisciplinary Artist | mengtaizhang.com

Oliver Bown

Senior Lecturer, UNSW Faculty of Art & Design, Interactive Media Lab
I am a researcher and maker working with creative technologies. I come from a highly diverse academic background spanning social anthropology, evolutionary and adaptive systems, music informatics and interaction design, with a parallel career in electronic music and digital art spanning... Read More →

Spencer Topel

Assistant Professor, Dartmouth College | Bregman Studios
I am a composer and sound artist with a research interest in acoustic synthesis and augmented instrument design.

Yingjia(Lemon) Guo

MFA Sound Art, Columbia University
Interdisciplinary artist working with voice and movement in both solo and multimedia ensemble forms.


10:30pm

Music Piece 5.1
So Predictable!?
Oliver Bown & Sam Ferguson, interactive audio design
Lian Loke & Kirsten Packham, movement
Angelo Fraietta, software support
Liam Bray, hardware support
Vert Designs, fabrication
Shandoah Goldman & Faith Levine, performers


When movement improvisation meets sonic composition in the shape of a ball, who leads and who follows? Teetering on the edge between music and noise, can the dancer's next move create art or chaos?

"So Predictable!?" builds on a series of interactive music performances using the Distributed Interactive Audio Devices (DIADs), a set of portable Raspberry Pi-powered sonic devices. The DIADs are standalone sonic objects that are network-aware and sensor-enabled, and are housed in soft tactile balls. The performers improvise with peculiar mappings and temporal dynamics programmed into the DIADs, and explore the sonic properties of this spatial performance.

Artists

Angelo Fraietta

Post doctoral fellow, University of New South Wales
I specialise in embedded hardware design.

Oliver Bown

Senior Lecturer, UNSW Faculty of Art & Design, Interactive Media Lab
I am a researcher and maker working with creative technologies. I come from a highly diverse academic background spanning social anthropology, evolutionary and adaptive systems, music informatics and interaction design, with a parallel career in electronic music and digital art spanning... Read More →


Monday June 4, 2018 10:30pm - Tuesday June 5, 2018 12:00am
Moss Arts Center - Anne and Ellen FIFE Theatre Alumni Mall, Blacksburg, VA 24060, USA

10:30pm

Music Piece 5.2
Pythia:Delphine21
by Mark Bokowiec

Julie Wilson-Bokowiec, performer

Pythia:Delphine21 stages an act of live mediation, bringing the essence of the ancient practice of the Delphic Oracle into the 21st century. The piece explores the context, resonances, practice and aesthetic synergies between the ancient practice and a distinctly contemporary form of sonic mediation, recasting the pythian/bodycoder performer as medium and mediator, in which the live manipulation of voice is socially and politically charged with meaning. The Max/MSP environment for Pythia:Delphine21 attempts to configure the relationship between the Oracle and Apollo (the medium and the Other) in what is essentially a form of dialogical physical possession. The piece makes use of both live and automated processes: hierarchical diffusion patterns are countered by live, gestural spatialization, while live sampling and gesturally embodied granular processes drive the disembodiment, dissociation and transformation of the live voice through a set of very specific sonic images and personas. The result is a strongly nuanced duet between what is seen (the performer/medium) and what is unseen (the system/Other). The power-play (the seizures of control) that takes place between these two elements is visceral and dramatic, deliberately theatrical and operatic in scale. The piece is fully scored, with some moments of extemporisation and no off-stage sound manipulation.

Artists

Dr Mark Bokowiec

Studio Manager, University of Huddersfield


Monday June 4, 2018 10:30pm - Tuesday June 5, 2018 12:00am
Moss Arts Center - Anne and Ellen FIFE Theatre Alumni Mall, Blacksburg, VA 24060, USA

10:30pm

Music Piece 5.3
Transplantation
by Mengtai Zhang & Yingjia Guo


Transplantation (2018)
Mengtai Zhang (artist, engineer, performer)
Lemon Guo (composer)

The music engages with the emotional, ethnographical, and political charge of material and place. Specifically, the piece explores music reconstruction and displacement in relation to a historical movement in postwar China (after 1949). Informed by personal experience, this work raises questions about systems of dissemination and cultural appropriation. At the call of the Chinese government, the idea of westernizing while adhering to the national essence caused an epistemic transformation of musical practice, and led to a movement to reconstruct folk instruments and the music system to accommodate symphonic demands. This movement later caused serious controversy and criticism. Although the war (organized massacre) has come to a temporary end, the battles over cultural recognition and political ideology have never stopped. This composition is inspired by these historical and political issues; combining the acoustic xiao, digital signal processing and sample triggering in Teensy, it expresses an imagination of post-war cultural reconstruction. Overall, the work mirrors the wider idea of postcolonial identity, but explodes the expectations of a standardized sense of oriental aesthetics.

Artists

Mengtai Zhang

MFA Sound Art, Columbia University
Interdisciplinary Artist | mengtaizhang.com

Yingjia(Lemon) Guo

MFA Sound Art, Columbia University
Interdisciplinary artist working with voice and movement in both solo and multimedia ensemble forms.


Monday June 4, 2018 10:30pm - Tuesday June 5, 2018 12:00am
Moss Arts Center - Anne and Ellen FIFE Theatre Alumni Mall, Blacksburg, VA 24060, USA

10:30pm

Music Piece 5.4
Clouds and Horses
by Spencer Topel


Clouds and Horses is a tribute to the late composer Pauline Oliveros. It recasts and transcribes her iconic work Horse Sings From Cloud (1982) as a meditation for electronics, light transmitter and receiver devices, and vocalist. Holding a photo-receiver, a performer gradually approaches a pedestal with a photo-transmitter device emitting the signal of oscillators that approximate the original harmonies of the Oliveros, causing the sounds to gradually crescendo to an audible level. After singing and circling the pedestal several times, she slowly moves away from the pedestal, causing the sounds to grow softer until they are no longer audible.

Artists

Spencer Topel

Assistant Professor, Dartmouth College | Bregman Studios
I am a composer and sound artist with a research interest in acoustic synthesis and augmented instrument design.


Monday June 4, 2018 10:30pm - Tuesday June 5, 2018 12:00am
Moss Arts Center - Anne and Ellen FIFE Theatre Alumni Mall, Blacksburg, VA 24060, USA

10:30pm

Music Piece 5.5
Shapeshifting
by Christopher Tignor


Shapeshifting was my first work developed using my new hybrid violin/percussion technique for digitally processed tuning forks.

Every tuning fork gesture begins with me striking percussion and then resonating the fork against the bridge of my violin. There, its singular pitch conducts through my pickup into the computer. Using custom software, intimate melodies spring from this single note, transposing and delaying the original tone. By switching between melodies with a pedal, I weave the framework of this song's evolving narrative.

The unique shape of each six-note melody begins with either the tuning fork's A or one of the plucked violin notes (D or E). When I choose the latter, the melody shifts up or down by a fifth. Shifting these melodic shapes up and down, and morphing them on the fly, drives the work.

At the bookends, these shapes breathe deeply, punctuated by drum hits. In the middle, they coalesce into multi-layered grooves.


Monday June 4, 2018 10:30pm - Tuesday June 5, 2018 12:00am
Moss Arts Center - Anne and Ellen FIFE Theatre Alumni Mall, Blacksburg, VA 24060, USA

10:30pm

Music Piece 5.6
SloMo Study #2
by Federico Visi


SloMo study #2 was composed to explore the use of slow and microscopic body movements in electronic music performance, and the role of rhythmic visual cues and breathing in the perception of movement and time. To do so, it employs wearable sensors, variable-frequency stroboscopic lights, an electronic stethoscope, and a body-worn camera for face tracking.

The performer's left hand very slowly draws an arc that begins with the left arm across the chest and ends when the arm is fully stretched outwards. The whole movement is performed in about 10 minutes and marks the beginning and end of the piece. Breathing sounds, representing the inner tempo of the performer, are amplified.

As the performer moves, the frequency of the stroboscopic light changes progressively from 1 to 30 Hz, reaching its maximum when the left arm is at the lowest point of the arc, approximately half-way through the piece. Variable-frequency stroboscopic light is used to alter the audience's perception of the fluidity and speed of the performer's movements.

In SloMo study #2, movements and rhythmic events entrain and interact at different timescales, sliding in and out of sync through the movements of the performer, affecting the perception of time.

Artists

Monday June 4, 2018 10:30pm - Tuesday June 5, 2018 12:00am
Moss Arts Center - Anne and Ellen FIFE Theatre Alumni Mall, Blacksburg, VA 24060, USA

10:30pm

Music Piece 5.7
Beckon
by Anna Weisling & Anna Xambó

Beckon is a call to immersion. The listener is prompted to sit in the moments of tension created by droning spaces and rhythmic motions, feeling the physical pull of these dichotomies. The viewer is presented with live imagery that illustrates these sonic spaces, reinforcing both the calm and the distress that come with sensory immersion. The audio and visual engines engage in bi-directional communication at all times, prompting each other to develop both independently and together.

Artists

Anna Xambó

Postdoctoral Research Assistant, Queen Mary University of London


Monday June 4, 2018 10:30pm - Tuesday June 5, 2018 12:00am
Moss Arts Center - Anne and Ellen FIFE Theatre Alumni Mall, Blacksburg, VA 24060, USA
 
Tuesday, June 5
 

8:00am

8:30am

9:00am

Paper Session 4: Multi-Touch & Haptics

Speakers

Anil Camci

Assistant Professor of Performing Arts Technology, University of Michigan

Lars Engeln

PhD Student, Technische Universität Dresden
Spectral Editing - VisualAudio-Design | Jokes about Viola-Players


9:00am

Paper 4.1
The Phone with the Flow: Combining Touch + Optical Flow in Mobile Instruments
by Cagan Arslan, Florent Berthaut, Jean Martinet, Ioan Marius Bilasco & Laurent Grisoni


Mobile devices have been a promising platform for musical performance thanks to the various sensors readily available on board. In particular, mobile cameras can provide rich input as they can capture a wide variety of user gestures or environment dynamics. However, this raw camera input only provides continuous parameters and requires expensive computation. In this paper, we propose to combine motion/gesture input with the touch input, in order to filter movement information both temporally and spatially, thus increasing expressiveness while reducing computation time. We present a design space which demonstrates the diversity of interactions that our technique enables. We also report the results of a user study in which we observe how musicians appropriate the interaction space with an example instrument.


9:00am

Paper 4.2
Multi-Touch Enhanced Visual Audio-Morphing
by Lars Engeln, Dietrich Kammer, Leon Brandt & Rainer Groh


Many digital interfaces for audio effects still resemble the racks and cases of their hardware counterparts. For instance, DSP algorithms are often adjusted via direct value input, sliders, or knobs. While recent research has started to experiment with the capabilities offered by modern interfaces, there are no examples of productive applications such as audio-morphing. Audio-morphing, as a special field of DSP, is highly complex both in the morph itself and in the parametrization of the transition between two sources. We propose a multi-touch enhanced interface for visual audio-morphing. This interface visualizes the internal processing and allows direct manipulation of the morphing parameters in the visualization. By using multi-touch gestures to manipulate audio-morphing in a visual way, sound design and music production become less restricted and more creative.

Speakers

Lars Engeln

PhD Student, Technische Universität Dresden
Spectral Editing - VisualAudio-Design | Jokes about Viola-Players


9:00am

Paper 4.3
GrainTrain: A Hand-drawn Multi-touch Interface for Granular Synthesis
by Anıl Çamcı

We describe an innovative multi-touch performance tool for real-time granular synthesis based on hand-drawn waveform paths. GrainTrain is a cross-platform web application that can run on both desktop and mobile computers, including tablets and phones. In this paper, we first offer an analysis of existing granular synthesis tools from an interaction standpoint, and outline a taxonomy of the common interaction paradigms used in these tools. We then delineate the implementation of GrainTrain and its unique approach to controlling real-time granular synthesis. We describe practical scenarios in which GrainTrain enables new performance possibilities. Finally, we discuss the results of a user study and the feedback gathered from expert users who worked with GrainTrain. A video abstract for GrainTrain can be viewed at https://vimeo.com/graintrain/video

Speakers

Anil Camci

Assistant Professor of Performing Arts Technology, University of Michigan


9:00am

Paper 4.4
ShIFT: A Semi-haptic Interface for Flute Tutoring
by Gus Xia & Roger B. Dannenberg

The traditional instrument-learning procedure is time-consuming: it begins with learning music notation and necessitates layers of sophistication and abstraction. Haptic interfaces open another door to the music world for the vast majority of beginners for whom traditional training methods are not effective. However, existing haptic interfaces can only be used to learn specially designed pieces, with great restrictions on duration and pitch range, because it is only feasible to guide part of the performance motion haptically for most instruments. Our study breaks these restrictions using a semi-haptic guidance method. For the first time, the pitch range of haptically learned pieces goes beyond an octave (with the fingering motion covering most of the possible choices) and the duration of learned pieces covers a whole phrase. This significant change leads to a more realistic instrument-learning process. Experiments show that the semi-haptic interface is effective as long as learners are not “tone deaf”. Using our prototype device, the learning rate is about 30% faster compared with learning from videos.


9:00am

9:00am

Aphysical Unmodeling Instrument | 2017
by Tomoya Matsuura

Moss Arts Center - 3rd Floor in Balcony Lobby

Aphysical Unmodeling Instrument rethinks the description and generation of sound and music through the re-physicalization of Whirlwind, a targetless physical model. Whirlwind is a combined, impossible physical model of three wind instruments: trumpet, flute, and clarinet. Our work reimplements the computational elements of Whirlwind with physical objects, such as a delay realized through sound propagation and a resonator realized with a Helmholtz resonator. The acts of composing, building the instrument or installation, and performing are parallelized in our work. By re-physicalizing a computational model, the notion of digital sound is expanded beyond the computer.

Exhibitors

Tomoya Matsuura

Master Student/Artist, Kyushu University


9:00am

Attunement | 2018
by Olivia Webb & Flo Wilson

Moss Arts Center - 2nd Floor in Mezzanine Lobby

The verb ‘attune’ usually describes the act of making something harmonious, as in the tuning of an instrument. Attunement is also a state of relation to an object, technology, environment or other people. To become attuned is to engage in a two-way sympathetic and empathetic exchange. In this installation, attunement is used both as a technique for exploring ways of being with others in the world, and a method for considering how technology mediates this exchange.

In this conference of new musical interfaces, we invite all participants to consider the ethics of listening as mediated by technology. Listening is central to human interaction, yet habits within Western culture tend to privilege speech and being heard over listening to and receiving others. New technology continues to accelerate the speed at which we can sound, voice and express ourselves. We are interested in how we might engage with performance and technology to become better listeners.

In this installation, you are invited to practice attunement by taking part in a selection of simple embodied listening exercises. Step out of your own breath, your own voice, your own self. Contemplate the changes required of you in order to receive someone or something else.

Exhibitors

Flo Wilson

Audio Foundation
Flo Wilson is a composer, producer and artist based in Auckland, New Zealand whose organic, experimental music creates emotive atmospheres to facilitate empathetic, shared listening experiences. She has created custom-built spatial sound installations and then performed with them... Read More →


9:00am

Bǎi (摆): An Oscillating Sound Installation | 2018
by Jelger Kroese, Danyi Liu & Edwin van der Heide

Moss Arts Center - 1st Floor in Experience Studio B

Bǎi (摆), meaning pendulum in Chinese, is an interactive installation that uses a speaker hanging as a pendulum from the ceiling combined with an octophonic speaker setup to create a responsive sound environment. Besides being a sound source, the pendulum speaker is also the interface by which the audience interacts with the installation. Through pushing, pulling and twisting, the audience can move the pendulum and set it into oscillating motions. A dynamic system underlying the installation translates these motions into different modes of behavior of the sound environment. At first, it may seem that the environment reacts to the motions predictably. However, exercising too much control over the pendulum causes the installation to quickly spiral into chaotic and unpredictable behavior. This, combined with the fact that hard physical labour is needed to restrain the pendulum, leads to a tense dialogue between participant and object, struggling for control. The movements resulting from this dialogue cause the sounds in the environment to change between different states of stability and chaos, thereby mirroring the types of dynamics that are also seen in natural ecosystems.

Exhibitors

Jelger Kroese

Jelger Kroese is a designer and coder in the field of sound, interaction and education. He has a deep interest in the processes that underlie ecological systems. As a result, his work mostly comprises compositions and installations that place biological concepts within a technological... Read More →

Danyi LIU

PhD student, Leiden Institute of Advanced Computer Science
Danyi Liu is a designer and researcher in the field of sound and interactive art. Currently, she is a PhD student at the Leiden Institute of Advanced Computer Science. Her research focuses on real-time interactive data sonification and audience participatory installations and performances... Read More →


9:00am

Chorus for Untrained Operator | 2016
by Stephan Moore & Peter Bussigel

Chorus for Untrained Operator is a collection of discarded objects. Each has been relieved of its original responsibilities, rewired, and transformed to emphasize its musical voice. The ensemble of objects is controlled through the patch bay of a 1940s Western Electric switchboard.

Exhibitors

peter bussigel

assistant professor, emily carr university of art & design

Stephan Moore

Senior Lecturer, Northwestern University


Tuesday June 5, 2018 9:00am - 5:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

9:00am

FingerRing installation | 2016-18
by Sergey Kasich


Moss Arts Center - 3rd Floor in Balcony Lobby

FingerRing installation is an exhibition set of the fFlower interface for the FingerRing technique. Two sensitive panels are installed in the center of the space, which has an 8-channel sound system around its perimeter. Anyone is allowed to play with the panels and experience the flexibility and nature of the FingerRing technique, the simplest way to play with multichannel music. The installation has been shown at "SOUNDART: space of sound" (Manege Central Exhibition Hall, Saint Petersburg, 2017) and at MakerFaire Moscow (MISIS, Moscow, 2017).

The FingerRing technique has been presented at the BRERA Art Academy New Technologies in Art Department (Milan, Italy), the National University of Science and Technology MISIS (Moscow, Russia), the New York University Tandon School of Engineering (NYC, USA), and Cambridge (specially for Dr. Peter Zinovieff). It was included in the curriculum at Falmouth University in England in 2017 and presented as a workshop at EXPO 2017 (Astana, Kazakhstan).

Exhibitors

Sergey Kasich

founder, SoundArtist.ru
music technology, experimental sound arts, interactive installations, social infrastructure, cultural projects, events, festivals, curation of new media arts, hybrid studies , R&D , anything


9:00am

What We Have Lost / What We Have Gained | 2014
by Matthew Mosher

What We Have Lost / What We Have Gained explores how to transform viewers into performers, participants, and players through large upper body movements and tangible interactions with a sculpture. The piece was originally conceived as a large scale MIDI drum pad style interface that would be both familiar to electronic musicians yet more physically expressive than typical MIDI devices.
As an art installation, it presents a four by three grid of video projected mouths on a spandex screen. Each video sample animates and sings a different vowel tone when pressed by a user. The volume of the singing increases as the player presses harder and deeper into the mouth screen, which distorts the spandex display surface. In this way, the piece provides audio, video and tactile feedback, rewarding the user with a multi-modal embodied experience. This work contributes to the discourse at the intersection of tangible interactions and musical expression by providing an example of how interaction design can facilitate engagement and convey meaning. What We Have Lost / What We Have Gained questions the experience of using one's physical body to manipulate the digital representation of another's body.
Special thanks to vocalists Ashley Reynolds and Keri Pierson.

Exhibitors

Matthew Mosher

Assistant Professor, University of Central Florida
Boston native Matthew Mosher is an intermedia artist and mixed methods research professor who creates embodied experiential systems. He received his BFA in Furniture Design from the Rhode Island School of Design in 2006 and his MFA in Intermedia from Arizona State University in 2012... Read More →


Tuesday June 5, 2018 9:00am - 5:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

11:00am

Paper Session 5: Theory & Critique

Speakers

Juan Pablo Martinez Avila

PhD Student, The University of Nottingham, Mixed Reality Lab

Andrew McPherson

Reader, Queen Mary University of London|London||United Kingdom

Fabio Morreale

Postdoctoral Researcher, Queen Mary University of London

Marcelo Wanderley

Professor, McGill University

Anna Xambó

Postdoctoral Research Assistant, Queen Mary University of London


11:00am

Paper 5.1
NIME Identity from the Performer's Perspective
by Fabio Morreale, Andrew P. McPherson & Marcelo Wanderley


The term `NIME' - New Interfaces for Musical Expression - has come to signify both technical and cultural characteristics. Not all new musical instruments are NIMEs, and not all NIMEs are defined as such solely by the ephemeral condition of being new. So what are the typical characteristics of NIMEs? Is there a typical repertoire played with NIMEs? What are their idiosyncrasies, and what are their roles in performers' practice? This paper aims to address these questions with a bottom-up approach. We reflect on the answers of 78 NIME performers to an online questionnaire discussing their performance experience with NIMEs. The results of our investigation explore the role of NIMEs in the performers' practice and identify the values that are common among performers. We find that most NIMEs are viewed as exploratory tools created by and for performers, and that they are constantly in development and almost never in a finished state. The findings of our survey also reflect upon virtuosity with NIMEs, whose peculiar artistic practice results in learning trajectories that often do not lead to the development of virtuosity as it is commonly understood in traditional performance.

Speakers

Andrew McPherson

Reader, Queen Mary University of London|London||United Kingdom

Fabio Morreale

Postdoctoral Researcher, Queen Mary University of London

Marcelo Wanderley

Professor, McGill University


11:00am

Paper 5.2
Who Are the Women Authors in NIME? – Improving Gender Balance in NIME Research
by Anna Xambó


In recent years, there has been an increased awareness of the underrepresentation of women in the sound and music computing fields. NIME is no exception, with a number of questions around the issue still open. In this paper, we study the presence and evolution over time of women authors in NIME from the beginning of the conference in 2001 until 2017. We discuss the results on gender imbalance and potential solutions by summarizing the actions taken by a number of worldwide initiatives that work to make women's work visible in our field, with a particular emphasis on Women in Music Tech, a student-led organization that aims to encourage more women to join music technology, as a case study. We conclude with the hope of improving the representation of women in NIME in the forthcoming years by presenting WiNIME, a public online database that details who the women authors in NIME are.

Speakers

Anna Xambó

Postdoctoral Research Assistant, Queen Mary University of London


11:00am

Paper 5.3
Women Who Build Things: Gestural Controllers, Augmented Instruments, and Musical Mechatronics
by Sarah Reid, Sara Sithi-Amnuai & Ajay Kapur

This paper presents a collection of hardware-based technologies for live performance developed by women over the last few decades. The field of music technology and interface design has a significant gender imbalance, with men greatly outnumbering women. The purpose of this paper is to promote the visibility and representation of women in this field, and to encourage discussion on the importance of mentorship and role models for young women and girls in music technology.

11:00am

Paper 5.4
Democratising DMIs: the Relationship of Expertise and Control Intimacy
by Robert H. Jack, Jacob Harrison, Fabio Morreale & Andrew P. McPherson

An oft-cited aspiration of digital musical instrument (DMI) design is to create instruments, in the words of Wessel and Wright, with a `low entry fee and no ceiling on virtuosity'. This is a difficult task to achieve: many new instruments are aimed at either the expert or the amateur musician, with few instruments catering for both. There is often a trade-off between learning curve and the nuance of musical control in DMIs. In this paper we present a study conducted with non-musicians and guitarists playing guitar-derivative DMIs with variable levels of control intimacy: how the richness and nuance of a performer's movement translates into the musical output of an instrument. Findings suggest a significant difference in preference for levels of control intimacy between the guitarists and the non-musicians. In particular, the guitarists unanimously preferred the richer of the two settings, whereas the non-musicians generally preferred the setting with lower richness. This difference is notable because it is often taken as a given that increasing richness is a way to make instruments more enjoyable to play; however, this result only seems to hold for expert players.

Speakers

Andrew McPherson

Reader, Queen Mary University of London|London||United Kingdom

Fabio Morreale

Postdoctoral Researcher, Queen Mary University of London


11:00am

Paper 5.5
The Problem of DMI Adoption and Longevity: Envisioning a NIME Performance Pedagogy
by Adnan Marquez-Borbon & Juan Pablo Martinez Avila


This paper addresses the prevailing longevity problem of digital musical instruments (DMIs) in NIME research and design by proposing a holistic system design approach. Despite recent efforts to examine the main factors contributing to DMIs falling into obsolescence, attempts to remedy this issue largely focus on the artifacts themselves, their design processes, and their technologies. Few existing studies have attempted to proactively build a community around technological platforms for DMIs while bearing in mind the social dynamics and activities necessary for a budding community. We observe that such attempts, while important in their undertaking, are limited in scope. In this paper we argue that achieving longevity must be addressed beyond the device itself and must tackle broader ecosystemic factors. We hypothesize that a long-lived DMI design must not only take into account a target community but may also require a non-traditional pedagogical system that sustains artistic practice.

Speakers

Juan Pablo Martinez Avila

PhD Student, The University of Nottingham, Mixed Reality Lab


12:30pm

Demo-Poster Session 2
Lunch break and demo-poster session. Coffee, pastries, and refreshments will be provided.

Posters

Demos

Exhibitors

Daniel Bennett

PhD Researcher, Bristol University
Music: Central Pattern Generator Neural Networks, Nonlinear Dynamic Systems, Feedback, Noise | | Other research: HCI, effect of computer systems on autonomy

Peter Beyls

researcher, University College Ghent

Pangur Brougham-Cook

Undergraduate Researcher, College of Charleston

Andrew R. Brown

Professor of Digital Arts, Griffith University

Ivica Ico Bukvic

NIME 2018 Co-Chair, Virginia Tech
Ico is... or is he?

Deepak Chandran

Masters Student, CCRMA, Stanford University

Jean-Francois Charles

Assistant Professor, Composition & Digital Arts, University of Iowa

Richard Graham

CEO, Delta Sound Labs
Delta Sound Labs is an audio technology company based in the United States. Come talk to us about eurorack modules and VSTs for music production. We have some hardware and software in beta and we're looking for testers, particularly if you're an active artist/musician.

Alexander Jensenius

Associate Professor, University of Oslo
Alexander Refsum Jensenius is a music researcher and research musician. His research focuses on why music makes us move, which he explores through empirical studies using different types of motion sensing technologies. He also uses the analytic knowledge and tools in the creation... Read More →

Tetsuro Kitahara

Associate Professor, Nihon University
I'm developing music systems with which non-musicians enjoy music through performance and/or creation. This year, I presented a demo of an improvisation system using eye tracking.

Shawn Lawson

Associate Professor, Rensselaer Polytechnic Institute|Troy|New York|United States
Shawn Lawson is an experiential media artist exploring the computational sublime with technologies like: stereoscopy, camera vision, touch screens, game controllers, mobile devices, random number generators, live-coding, and real-time computer graphics. His artwork has exhibited in... Read More →

Bill Manaris

Professor, Computing in the Arts, College of Charleston

Charles Martin

Postdoctoral Fellow, University of Oslo
Computer scientist, percussionist and computer musician. Interested in embedded systems, mobile devices and musical AI!

Alex Nieva

McGill University

Miguel Ortiz

Lecturer, Queen\'s University Belfast
Music

Robert Van Rooyen

Ph.D. Candidate, University of Victoria|Victoria|British Columbia|Canada
As an experienced multidisciplinary engineer and musician, I am keenly interested in robotics that can render "human like" performances. Exploring the nuances and translating them to stochastic multidimensional motion control that can render comparable performances is of particular... Read More →

Yosuke Sakai

Painter, Programmer
Yosuke Sakai (1980-) is a sumi-e artist based in Tokyo, Japan. Sumi-e is a Japanese traditional brush painting style with black ink in all possible gradations ranging from the purest black to the lightest gray. One piece of Hokusai Katsushika's sumi-e work inspired Yosuke to get into... Read More →

Andrew Schloss

Professor of Music, University of Victoria

Hiroto Takeuchi

Composer | Performer | Programmer
Hiroto Takeuchi (1979-) is a electronic musician and sound artist from Tokyo, Japan. He works for the electronic music labels in Tokyo and Seoul as a composer and a producer. His primary medium is sound, principally involving but not limited... Read More →

Kyriakos Tsoukalas

Virginia Tech, NIME 2018 Workshops & Demos Co-Chair

Luca Turchet

Luca Turchet is a postdoctoral researcher at the Centre for Digital Music and co-founder of MIND Music Labs. His research interests span the fields of new interfaces for musical expression, human-computer interaction, perception, and virtual reality. He is also a musician and composer... Read More →

Jonathan Wakefield

Subject Area Leader for Music Technology and Production, University of Huddersfield

Marcelo Wanderley

Professor, McGill University


12:30pm

Demo 2.01
Static Respirator
by Sarah Reid & Ryan Gaston


Static Respirator is a piece for electronically augmented trumpet using the Minimally Invasive Gesture Sensing Interface (MIGSI) for real-time sound synthesis and processing. Gestural information is captured and used to trigger, control, and manipulate various parameters of the accompanying granular processing, synthesis, and spatialization Max/MSP patch. Soft exhales, whispers, and delicate air sounds are transformed into dense, fervent gestures (controlled by MIGSI)—at times reminiscent of a thousand scuttling insects, at others teetering between urgency and fragility. As the piece progresses, valve activity and hand tension data are captured and used to influence a semi-autonomous feedback-based chaotic synthesis engine.

12:30pm

Demo 2.02
Collar Controller
by Susan Grochmal

The Collar is a controller worn around the performer's neck, composed of switches and potentiometers, mapped to a Max MSP vocal looper patch. The collar is a chaotic interface that allows for an immersive interaction with technology through the body.

Exhibitors

12:30pm

Demo 2.03
JamSketch Eye: An Eye-tracking-based Improvisation System for Disabled People
by Tetsuro Kitahara, Yasuyuki Saito, Sergio Giraldo & Rafael Ramirez

Improvisation is an exciting way to enjoy music, but it is difficult for people with a severe motor disability. In this paper, we propose a system that enables disabled people to improvise using only gaze control. We addressed two issues when developing this system. The first issue is the type of data the user should input. Because our target users can control only their gaze, the data they can input are limited. We use a melodic outline, in which only the macro structure of the melody intended by the user is represented. Users can input melodic outlines easily using gaze control with a commercial eye tracker. The second issue is how to generate a melody from a given melodic outline. We use our melody generation algorithm based on a genetic algorithm, as proposed in our previous paper. Through preliminary tests, we confirmed that users could improvise by drawing melodic outlines with their gaze.

Exhibitors

Tetsuro Kitahara

Associate Professor, Nihon University
I'm developing music systems with which non-musicians enjoy music through performance and/or creation. This year, I presented a demo of an improvisation system using eye tracking.


12:30pm

Demo 2.04
The Dark Side Demo
by Shawn Lawson


A live-coding audiovisual software/interface IDE.

Exhibitors

Shawn Lawson

Associate Professor, Rensselaer Polytechnic Institute, Troy, New York, United States
Shawn Lawson is an experiential media artist exploring the computational sublime with technologies like: stereoscopy, camera vision, touch screens, game controllers, mobile devices, random number generators, live-coding, and real-time computer graphics. His artwork has exhibited in...


12:30pm

Demo 2.05
Using Accelerometers for Gestural Control of Computer Music
by Christopher Morgan


Fractured is a composition for dancer and computer music utilizing two wireless three-axis accelerometers worn on the backs of the dancer's hands.  Their data is transmitted as OSC messages to a computer running the Max-MSP environment to generate and shape all of the sounds and musical gestures.
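As background on the kind of pipeline this abstract describes, the sketch below hand-encodes a minimal OSC message carrying one three-axis accelerometer reading and sends it over UDP, as a Max patch might receive via [udpreceive]. This is an illustration only: the address pattern /accel/1 and port 8000 are assumptions, not details of the piece's actual setup.

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments.

    Per the OSC 1.0 spec, strings are null-terminated and padded to a
    4-byte boundary, and floats are big-endian IEEE 754.
    """
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())                       # address pattern
    msg += pad(("," + "f" * len(floats)).encode())    # type tag string
    for f in floats:
        msg += struct.pack(">f", f)                   # float32 arguments
    return msg

def send_accel(x, y, z, host="127.0.0.1", port=8000):
    """Send one 3-axis accelerometer reading as an OSC packet over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/accel/1", x, y, z), (host, port))
```

In Max, a [udpreceive 8000] object followed by [route /accel/1] would unpack the three floats for mapping to synthesis parameters.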

Exhibitors

12:30pm

Demo 2.06
The First Flowers of the Year Are Always Yellow
by Miguel Ortiz & Hyeon Min Kim

This work responds to the theme of “off-the-shelf NIME”. We use two MYO armbands (EMG bands), which are commercially available from Thalmic Labs. The work embraces the maturity of the technology and of our understanding of EMG signals in interactive contexts. The MYO bands are neither the theme nor the centre of the work; they facilitate a specific narrative and allow for gestural control that would be difficult to achieve with alternative sensing modalities. This live performance work revolves around two narrative figures: a writer and his fictional character. The writer sits at his typewriter attempting to write a story. While struggling to work through the writer’s block, he starts to seek advice from his own fiction – the character from his story. Through the act of writing, he activates the character and triggers her to move, think and recall memories from her fictional past.

Exhibitors

Miguel Ortiz

Lecturer, Queen's University Belfast
Music


12:30pm

Demo 2.07
MechDrum™ and SDRdrum™ Sonic Pair Demonstration
by Robert Van Rooyen & Andrew Schloss


“sonicpair #n” is a series of duos involving robotic percussion. “sonicpair #1” is for viola and radiodrum-controlled robotic percussion. This new piece, “sonicpair #2”, is unique in that the robotic player is a “mirror” of the human percussionist. Using a new 3D capacitive sensor called the SDRdrum, the full range of the percussionist's gestures is exactly mimicked by the MechDrum, including gestures above the surface. This mirroring creates a unique visual and auditory experience that differs from all other percussive interfaces because the robotic percussion is not “waiting” for a note-on to be detected; rather, it continuously follows all the gestures of the percussionist.

Exhibitors

Robert Van Rooyen

Ph.D. Candidate, University of Victoria, Victoria, British Columbia, Canada
As an experienced multidisciplinary engineer and musician, I am keenly interested in robotics that can render "human like" performances. Exploring the nuances and translating them to stochastic multidimensional motion control that can render comparable performances is of particular...

Andrew Schloss

Professor of Music, University of Victoria


12:30pm

Demo 2.08
Live Painting as an Interface for Generating Music
by Yosuke Sakai & Hiroto Takeuchi


This is an experimental performance which starts with live painting (Japanese traditional brush painting with black ink). The image made by painting is converted into sound, which is composed and edited by a musician using an analog synthesizer. We made a device and an application which do this in real time. Since painting with brush and ink is not easy to control, the generated sounds are different every time, even when the painter tries to make exactly the same image, so the composer must face this contingency and handle it. The performance will be highly improvisational.

Exhibitors

Yosuke Sakai

Painter, Programmer
Yosuke Sakai (1980-) is a sumi-e artist based in Tokyo, Japan. Sumi-e is a Japanese traditional brush painting style with black ink in all possible gradations ranging from the purest black to the lightest gray. One piece of Hokusai Katsushika's sumi-e work inspired Yosuke to get into...

Hiroto Takeuchi

Composer | Performer | Programmer
Hiroto Takeuchi (1979-) is an electronic musician and sound artist from Tokyo, Japan. He works for electronic music labels in Tokyo and Seoul as a composer and a producer. His primary medium is sound, principally involving but not limited...


12:30pm

Demo 2.09
pandemonium trio
by Paul Stapleton & Miguel Ortiz

pandemonium trio is a performance research group which explores multiple instantiations of a single custom made electronic instrument through improvisation. The group consists of three highly experienced improvisers and instrument makers based at SARC, Queen’s University Belfast. Our current performance system is a synthesis of previously existing musical circuits that have been modified to promote productive instability within a restricted set of timbral possibilities. The aesthetic of our performance is informed by noise and free improvised musics. The proposed performance will be comprised of rapid and overlapping solos, duos and trios, structured both by intuition and rule-based interactions.

Exhibitors

Miguel Ortiz

Lecturer, Queen's University Belfast
Music


12:30pm

Demo 2.10
Her Painting for Performance
by Rachel Tandon

Both painting and interface, “Her Painting for Performance” explores potentials in musical abstraction through personalized communication between an artist and her computer. The painting is a meeting of and medium for sonic and visual expression. The painting has 52 controls mapped to a granular synthesis and effects patch in Max/MSP. It plays a composition written by the artist for celesta, bassoon, cello, bass and drum set, recorded with MIDI. The painting is performed in real time, with a USB camera placed above the performer’s hands sending a close-up of her hands operating the controller to a live projection.

Exhibitors

12:30pm

Demo 2.11
The Phone with the Flow: Combining Touch + Optical Flow in Mobile Instruments
by Cagan Arslan, Florent Berthaut, Jean Martinet, Ioan Marius Bilasco & Laurent Grisoni

Mobile devices have been a promising platform for musical performance thanks to the various sensors readily available on board. In particular, mobile cameras can provide rich input as they can capture a wide variety of user gestures or environment dynamics. However, this raw camera input only provides continuous parameters and requires expensive computation. In this paper, we propose to combine motion/gesture input with the touch input, in order to filter movement information both temporally and spatially, thus increasing expressiveness while reducing computation time. We present a design space which demonstrates the diversity of interactions that our technique enables. We also report the results of a user study in which we observe how musicians appropriate the interaction space with an example instrument.


12:30pm

Poster 2.01
Composing an Ensemble Standstill Work for Myo and Bela
by Charles Patrick Martin & Alexander Refsum Jensenius

This paper describes the process of developing a standstill performance work using the Myo gesture control armband and the Bela embedded computing platform. The combination of Myo and Bela allows a portable and extensible version of the standstill performance concept while introducing muscle tension as an additional control parameter. We describe the technical details of our setup and introduce Myo-to-Bela and Myo-to-OSC software bridges that assist with prototyping compositions using the Myo controller.

Exhibitors

Alexander Jensenius

Associate Professor, University of Oslo
Alexander Refsum Jensenius is a music researcher and research musician. His research focuses on why music makes us move, which he explores through empirical studies using different types of motion sensing technologies. He also uses the analytic knowledge and tools in the creation...

Charles Martin

Postdoctoral Fellow, University of Oslo
Computer scientist, percussionist and computer musician. Interested in embedded systems, mobile devices and musical AI!


12:30pm

Poster 2.02
The T-Stick: Maintaining a 12 year-old Digital Musical Instrument
by Alex Nieva, Johnty Wang, Joseph Malloch & Marcelo Wanderley


This paper presents the work to maintain several copies of the digital musical instrument (DMI) called the T-Stick in the hopes of extending their useful lifetime. The T-Sticks were originally conceived in 2006 and 20 copies have been built over the last 12 years. While they all preserve the original design concept, their evolution resulted in variations in the choice of microcontrollers and sensors. We worked with eight copies of the second and fourth generation T-Sticks to overcome issues related to the aging of components, changes in external software, lack of documentation, and in general, the problem of technical maintenance.

Exhibitors

Alex Nieva

McGill University

Marcelo Wanderley

Professor, McGill University


12:30pm

Poster 2.03
MIDI Keyboard Defined DJ Performance System
by Christopher Dewey & Jonathan P. Wakefield

This paper explores the use of the ubiquitous MIDI keyboard to control a DJ performance system. The prototype system uses a two octave keyboard with each octave controlling one audio track. Each audio track has four two-bar loops which play in synchronisation switchable by its respective octave’s first four black keys. The top key of the keyboard toggles between frequency filter mode and time slicer mode. In frequency filter mode the white keys provide seven bands of latched frequency filtering. In time slicer mode the white keys plus black B flat key provide latched on/off control of eight time slices of the loop. The system was informally evaluated by nine subjects. The frequency filter mode combined with loop switching worked well with the MIDI keyboard interface. All subjects agreed that all tools had creative performance potential that could be developed by further practice.
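The key-to-function mapping described above lends itself to a small lookup sketch. The following is an illustration only: the MIDI note numbers, mode names and action labels are assumptions for demonstration, not details of the authors' prototype, and the time-slicer mode (white keys plus B flat controlling eight slices) is omitted for brevity.

```python
# Hypothetical sketch of the two-octave DJ mapping described above.
# Assumes the lower octave starts at MIDI note 48 (C3); all numbers
# here are illustrative, not taken from the paper.

OCTAVE_ROOTS = {0: 48, 1: 60}        # one octave of keys per audio track
BLACK_KEYS = [1, 3, 6, 8]            # first four black keys: loop select
WHITE_KEYS = [0, 2, 4, 5, 7, 9, 11]  # seven white keys: filter bands
TOP_KEY = 72                         # top key toggles filter/slicer mode

def interpret(note, mode):
    """Map an incoming MIDI note number to a (action, track, index) tuple."""
    if note == TOP_KEY:
        return ("toggle_mode", None, None)
    for track, root in OCTAVE_ROOTS.items():
        offset = note - root
        if 0 <= offset < 12:
            if offset in BLACK_KEYS:
                return ("select_loop", track, BLACK_KEYS.index(offset))
            if mode == "filter" and offset in WHITE_KEYS:
                return ("toggle_band", track, WHITE_KEYS.index(offset))
    return ("ignore", None, None)
```

A dispatcher in the performance system would then latch the returned action onto the corresponding audio track.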

Exhibitors

Jonathan Wakefield

Subject Area Leader for Music Technology and Production, University of Huddersfield


12:30pm

Poster 2.04
Democratizing Interactive Music Production over the Internet
by Trond Engum & Otto Jonassen Wittner

This paper describes an ongoing research project which addresses challenges and opportunities when collaborating interactively in real time in a "virtual" sound studio with several partners in different locations. "Virtual" in this context refers to an interconnected and inter-domain studio environment consisting of several local production systems connected to public and private networks. This paper reports experiences and challenges related to two different production scenarios conducted in 2017.

Exhibitors

12:30pm

Poster 2.05
Using the Axoloti Embedded Sound Processing Platform to Foster Experimentation and Creativity
by Jean-Francois Charles, Carlos Cotallo Solares, Carlos Toro Tobon & Andrew Willette

This paper describes how the Axoloti platform is well suited to teach a beginners’ course about new electro-acoustic musical instruments and how it fits the needs of artists who want to work with an embedded sound processing platform and get creative at the crossroads of acoustics and electronics. First, we present the criteria used to choose a platform for the course titled "Creating New Musical Instruments" given at the University of Iowa in the Fall of 2017. Then, we explain why we chose the Axoloti board and development environment.

Exhibitors

Jean-Francois Charles

Assistant Professor, Composition & Digital Arts, University of Iowa


12:30pm

Poster 2.06
Introducing a K-12 Mechatronic NIME Kit
by Kyriakos Tsoukalas & Ivica Ico Bukvic

The following paper introduces a new mechatronic NIME kit that uses new additions to the Pd-L2Ork visual programming environment and its K-12 learning module. It is designed to facilitate the creation of simple mechatronic systems for physical sound production in K-12 education and production scenarios. The new set of objects builds on the existing support for the Raspberry Pi platform to also include the use of electric actuators via the microcomputer’s GPIO system. Moreover, we discuss implications of the newly introduced kit in creative and K-12 education scenarios by sharing observations from a series of pilot workshops, with particular focus on using mechatronic NIMEs as a catalyst for the development of programming skills.

Exhibitors

Ivica Ico Bukvic

NIME 2018 Co-Chair, Virginia Tech
Ico is... or is he?

Kyriakos Tsoukalas

Virginia Tech, NIME 2018 Workshops & Demos Co-Chair


12:30pm

Poster 2.07
Neurythmic: A Rhythm Creation Tool Based on Central Pattern Generators
by Daniel Bennett, Peter Bennett & Anne Roudaut

We describe the development of Neurythmic: an interactive system for the creation and performance of fluid, expressive musical rhythms using Central Pattern Generators (CPGs). CPGs are neural networks which generate adaptive rhythmic signals. They simulate structures in animals which underlie behaviours such as heartbeat, gut peristalsis and complex motor control. Neurythmic is the first such system to use CPGs for interactive rhythm creation. We discuss how Neurythmic uses the entrainment behaviour of these networks to support the creation of rhythms while avoiding the rigidity of grid quantisation approaches. As well as discussing the development, design and evaluation of Neurythmic, we discuss relevant properties of the CPG networks used (Matsuoka's Neural Oscillator), and describe methods for their control. Evaluation with expert and professional musicians shows that Neurythmic is a versatile tool, adapting well to a range of quite different musical approaches.
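For readers unfamiliar with CPGs, the following is a minimal numerical sketch of Matsuoka's two-neuron mutual-inhibition oscillator, the model named in the abstract, integrated with forward Euler. The parameter values are illustrative choices within the model's known oscillatory regime, not Neurythmic's actual settings.

```python
def matsuoka(steps=20000, dt=0.001, tau_r=0.05, tau_a=0.1,
             beta=2.5, a=2.5, c=1.0):
    """Forward-Euler simulation of a two-neuron Matsuoka oscillator.

    Each neuron has a membrane state x and an adaptation (fatigue)
    state v; its firing rate is y = max(0, x).  Self-inhibition (beta)
    and mutual inhibition (a) make the two firing rates alternate,
    yielding a rhythmic output y1 - y2.
    """
    x = [0.1, 0.0]   # slightly asymmetric start to break symmetry
    v = [0.0, 0.0]
    out = []
    for _ in range(steps):
        y = [max(0.0, xi) for xi in x]
        # dx_i/dt = (c - x_i - beta*v_i - a*y_j) / tau_r
        dx = [(c - x[0] - beta * v[0] - a * y[1]) / tau_r,
              (c - x[1] - beta * v[1] - a * y[0]) / tau_r]
        # dv_i/dt = (y_i - v_i) / tau_a
        dv = [(y[i] - v[i]) / tau_a for i in range(2)]
        x = [x[i] + dt * dx[i] for i in range(2)]
        v = [v[i] + dt * dv[i] for i in range(2)]
        out.append(y[0] - y[1])
    return out
```

The alternating output can be thresholded or phase-tracked to drive rhythmic events; entrainment is obtained by adding an external periodic term to the membrane equations.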

Exhibitors

Daniel Bennett

PhD Researcher, Bristol University
Music: Central Pattern Generator Neural Networks, Nonlinear Dynamic Systems, Feedback, Noise | Other research: HCI, effect of computer systems on autonomy


12:30pm

Poster 2.08
Evaluating LED-based interface for Lumanote composition creation tool
by James Granger, Mateo Aviles, Joshua Kirby, Austin Griffin, Johnny Yoon, Raniero A. Lara-Garduno & Tracy Hammond

Composing music typically requires years of music theory experience and knowledge that includes but is not limited to chord progression, melody composition theory, and an understanding of whole-step/half-step passing tones among others. For that reason, certain songwriters such as singers may find a necessity to hire experienced pianists to help compose their music. In order to facilitate the process for beginner and aspiring musicians, we have developed Lumanote, a music composition tool that aids songwriters by presenting real-time suggestions on appropriate melody notes and chord progression. While a preliminary evaluation yielded favorable results for beginners, many commented on the difficulty of having to map the note suggestions displayed on the on-screen interface to the physical keyboard they were playing on. This paper presents the resulting solution: an LED-based feedback system that is designed to be directly attached to any standard MIDI keyboard. This peripheral aims to help map note suggestions directly to the physical keys of a musical keyboard. A study consisting of 22 individuals was conducted to compare the effectiveness of the new LED-based system with the existing computer interface, finding that the vast majority of users preferred the LED system. Three experienced musicians also judged and ranked the compositions, noting significant improvement in song quality when using either system, and citing comparable quality between compositions that used either interface.


12:30pm

Poster 2.09
GuitarAMI and GuiaRT: two independent yet complementary projects on augmented nylon guitars
by Eduardo Meneses, Sergio Freire & Marcelo Wanderley

This paper describes two augmented nylon-string guitar projects developed in different institutions. GuitarAMI uses sensors to modify the classical guitars constraints while GuiaRT uses digital signal processing to create virtual guitarists that interact with the performer in real-time. After a bibliographic review of Augmented Musical Instruments (AMIs) based on guitars, we present the details of the two projects and compare them using an adapted dimensional space representation. Highlighting the complementarity and cross-influences between the projects, we propose avenues for future collaborative work.

Exhibitors

Marcelo Wanderley

Professor, McGill University


12:30pm

Poster 2.10
Playsound.space: Inclusive Free Music Improvisations Using Audio Commons
by Ariane de Souza Stolfi, Miguel Ceriani, Luca Turchet & Mathieu Barthet

Playsound.space is a web-based tool to search for and play Creative Commons licensed sounds, which can be applied to free improvisation, experimental music production and soundscape composition. It provides fast access to about 400k non-musical and musical sounds provided by Freesound, and allows users to play/loop single or multiple sounds retrieved through text-based search. Sound discovery is facilitated by the use of semantic searches and visual representations of sounds (spectrograms). Guided by the motivation to create an intuitive tool to support music practice that could suit both novice and trained musicians, we developed and improved the system in a continuous process, gathering frequent feedback from a range of users with various skills. We assessed the prototype with 18 musician and non-musician participants during free music improvisation sessions. Results indicate that the system was found easy to use and supports creative collaboration and expressiveness irrespective of musical ability. We identified further design challenges linked to creative identification, control and content quality.

Exhibitors

Luca Turchet

Luca Turchet is a postdoctoral researcher at the Centre for Digital Music and co-founder of MIND Music Labs. His research interests span the fields of new interfaces for musical expression, human-computer interaction, perception, and virtual reality. He is also a musician and composer...


12:30pm

Poster 2.11
CTRL: A Flexible, Precision Interface for Analog Synthesis
by John Harding, Richard Graham & Edwin Park

This paper presents a new interface for the production and distribution of high-resolution analog control signals, particularly aimed toward the control of analog modular synthesisers. Control Voltage/Gate interfaces generate Control Voltage (CV) and Gate Voltage (Gate) as a means of controlling note pitch and length respectively, and have been with us since 1986 [2]. The authors provide a unique custom CV/Gate interface and dedicated communication protocol which leverages standard USB Serial functionality and enables connectivity across a plethora of computing devices, including embedded devices such as the Raspberry Pi and ARM-based devices including widely available ‘Android TV Boxes’. We provide a general overview of the unique hardware and communication protocol developments, followed by usage examples for tuning and embedded platforms, leveraging software such as Pure Data (Pd), Max, and Max for Live (M4L).

Exhibitors

Richard Graham

CEO, Delta Sound Labs
Delta Sound Labs is an audio technology company based in the United States. Come talk to us about eurorack modules and VSTs for music production. We have some hardware and software in beta and we're looking for testers, particularly if you're an active artist/musician.


12:30pm

Poster 2.12
Motivated Learning in Human-Machine Improvisation
by Peter Beyls

This paper describes a machine learning approach in the context of non-idiomatic human-machine improvisation. In an attempt to avoid explicit mapping of user actions to machine responses, an experimental machine learning strategy is suggested where rewards are derived from the implied motivation of the human interactor – two motivations are at work: integration (aiming to connect with machine-generated material) and expression (independent activity). By tracking consecutive changes in musical distance (i.e. melodic similarity) between human and machine, such motivations can be inferred. A variation of Q-learning is used, featuring a self-optimizing variable-length state-action-reward list. The system (called Pock) is tunable into particular behavioral niches by means of a limited number of parameters. Pock is designed as a recursive structure and behaves as a complex dynamical system. When tracking system variables over time, emergent non-trivial patterns reveal experimental evidence of attractors, demonstrating successful adaptation.
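To make the reward idea concrete, here is a generic tabular Q-learning sketch in which the reward is derived from the change in human-machine melodic distance. The function names, state encoding and parameter values are illustrative assumptions, not Pock's actual design (which uses a self-optimizing variable-length state-action-reward list).

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One standard tabular Q-learning update on a defaultdict table."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def reward_from_distance(prev_dist, new_dist, motivation="integration"):
    """Reward derived from the change in melodic distance between human
    and machine: integration rewards the distance shrinking, expression
    rewards it growing."""
    delta = prev_dist - new_dist
    return delta if motivation == "integration" else -delta
```

In use, each machine response would be scored against the latest human phrase, the inferred motivation would select the reward's sign, and the update would be applied to the table entry for the preceding state-action pair.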

Exhibitors

Peter Beyls

researcher, University College Ghent


12:30pm

Poster 2.13
InterFACE: new faces for musical expression
by Deepak Chandran & Ge Wang

InterFACE is an interactive system for musical creation, mediated primarily through the user’s facial expressions and movements. It aims to take advantage of the expressive capabilities of the human face to create music in a way that is both expressive and whimsical. This paper introduces the designs of three virtual instruments in the InterFACE system: namely, FACEdrum (a drum machine), GrannyFACE (a granular synthesis sampler), and FACEorgan (a laptop mouth organ using both face tracking and audio analysis). We present the design behind these instruments and consider what it means to be able to create music with one’s face. Finally, we discuss the usability and aesthetic criteria for evaluating such a system, taking into account our initial design goals as well as the resulting experience for the performer and audience.

Exhibitors

Deepak Chandran

Masters Student, CCRMA, Stanford University


12:30pm

Poster 2.14
Hand Posture Recognition: IR, IMU and sEMG
by Richard Polfreman

Hands are important anatomical structures for musical performance, and recent developments in input device technology have allowed rather detailed capture of hand gestures using consumer-level products. While in some musical contexts, detailed hand and finger movements are required, in others it is sufficient to communicate discrete hand postures to indicate selection or other state changes. This research compared three approaches to capturing hand gestures where the shape of the hand, i.e. the relative positions and angles of finger joints, are an important part of the gesture. A number of sensor types can be used to capture information about hand posture, each of which has various practical advantages and disadvantages for music applications. This study compared three approaches, using optical, inertial and muscular information, with three sets of 5 hand postures (i.e. static gestures) and gesture recognition algorithms applied to the device data, aiming to determine which methods are most effective.

Exhibitors

12:30pm

Poster 2.15
The Digital Orchestra Toolbox for Max
by Joseph Malloch, Marlon Mario Schumacher, Stephen Sinclair & Marcelo Wanderley

The Digital Orchestra Toolbox for Max is an open-source collection of small modular software tools for aiding the development of Digital Musical Instruments. Each tool takes the form of an "abstraction" for the visual programming environment Max, meaning it can be opened and understood by users within the Max environment, as well as copied, modified, and appropriated as desired. This paper describes the origins of the Toolbox and our motivations for creating it, broadly outlines the types of tools included, and follows the development of the project over the last twelve years. We also present examples of several digital musical instruments built using the Toolbox.

Exhibitors

Marcelo Wanderley

Professor, McGill University


12:30pm

Poster 2.16
JythonMusic: An Environment for Developing Interactive Music Systems
by Bill Manaris, Pangur Brougham-Cook, Dana Hughes & Andrew R. Brown

JythonMusic is a software environment for developing interactive musical experiences and systems. It is based on jMusic, a software environment for computer-assisted composition, which was extended within the last decade into a more comprehensive framework providing composers and software developers with libraries for music making, image manipulation, building graphical user interfaces, and interacting with external devices via MIDI and OSC, among others. This environment is free and open source. It is based on Python, therefore it provides more economical syntax relative to Java- and C/C++-like languages. JythonMusic rests on top of Java, so it provides access to the complete Java API and external Java-based libraries as needed. Also, it works seamlessly with other software, such as PureData, Max/MSP, and Processing. The paper provides an overview of important JythonMusic libraries related to constructing interactive musical experiences. It demonstrates their scope and utility by summarizing several projects developed using JythonMusic, including interactive sound art installations, new interfaces for sound manipulation and spatialization, as well as various explorations on mapping among motion, gesture and music.

Exhibitors

Pangur Brougham-Cook

Undergraduate Researcher, College of Charleston

Andrew R. Brown

Professor of Digital Arts, Griffith University

Bill Manaris

Professor, Computing in the Arts, College of Charleston


2:00pm

Paper Session 6: Making & New Instruments

Speakers

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His topic is on supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.

Ivica Ico Bukvic

NIME 2018 Co-Chair, Virginia Tech
Ico is... or is he?

Anil Camci

Assistant Professor of Performing Arts Technology, University of Michigan

Andrew McPherson

Reader, Queen Mary University of London, London, United Kingdom

Kyriakos Tsoukalas

Virginia Tech, NIME 2018 Workshops & Demos Co-Chair

Halldor Ulfarsson

PhD Student
Developing the halldorophone for over a decade. Recently started a PhD supervised by Thor Magnusson and Chris Kiefer at Sussex.


2:00pm

Paper 6.1
Triplexer: An Expression Pedal with New Degrees of Freedom
by Steven Leib & Anıl Çamcı

We introduce the Triplexer, a novel foot controller that gives the performer 3 degrees-of-freedom over the control of various effects parameters. With the Triplexer, we aim to expand the performer's control space by augmenting the capabilities of the common expression pedal that is found in most effects rigs. Using industrial-grade weight-detection sensors and widely-adopted communication protocols, the Triplexer offers a flexible platform that can be integrated into various performance setups and situations. In this paper, we detail the design of the Triplexer by describing its hardware, embedded signal processing, and mapping software implementations. We also offer the results of a user study, which we conducted to evaluate the usability of our controller. A video abstract for the Triplexer can be viewed at https://vimeo.com/triplexer/video.

Speakers

Anil Camci

Assistant Professor of Performing Arts Technology, University of Michigan


2:00pm

Paper 6.2
The halldorophone: The ongoing innovation of a cello-like drone instrument
by Halldór Úlfarsson

The halldorophone is a new electroacoustic string instrument based on the use of positive feedback for its sound and timbre. An objective of the project has been to encourage its adoption by practicing musicians, an effort that can be considered somewhat successful, as the instrument has a growing repertoire of works by prominent composers and performers. During the development of the halldorophone, the question has been asked: “what makes musicians want to use this instrument?”, and answers have been found through extensive user studies and feedback. As the project progresses, a picture is emerging of how a culture of acceptance forms around this new instrument. Its strengths, in terms of adoptability, are a known technique (string feedback) packaged in a familiar form (cello-like), affording the novel function of a musical system that can be brought to such a state of complexity that it verges on having agency complementary to that of the performer in affecting the musical outcome.
 This paper describes the halldorophone and presents the rationale for its major design features and ergonomic choices, as they relate to the overarching objective of nurturing a culture of use for the instrument.

Speakers

Halldor Ulfarsson

PhD Student
Developing the halldorophone for over a decade. Recently started a PhD supervised by Thor Magnusson and Chris Kiefer at Sussex.


2:00pm

Paper 6.3
L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance
by Kyriakos Tsoukalas, Joseph Kubalak & Ivica Ico Bukvic

Laptop orchestras create music, although digitally produced, in a collaborative live performance, not unlike a traditional orchestra. The recent increase in interest and investment in this style of music creation has paved the way for novel methods for musicians to create and interact with music. To this end, a number of nontraditional instruments have been constructed which enable musicians to control the sound they produce beyond pitch and volume, integrating filtering, musical effects, etc. Wii Remotes (WiiMotes) have seen heavy use in maker communities, including laptop orchestras, for their robust sensor array and low cost. The placement of sensors and the form factor of the device itself are suited for video games, not necessarily live music creation. To that end, the authors present a new controller, based on the WiiMote hardware platform, designed to address usability in gesture-centric music performance. Based on the pilot-study data, the new controller offers unrestricted two-hand gesture production, smaller footprint, and lower muscle strain.

Speakers

Kyriakos Tsoukalas

Virginia Tech, NIME 2018 Workshops & Demos Co-Chair


2:00pm

Paper 6.4
Crafting Digital Musical Instruments: An Exploratory Workshop Study
by Jack Armitage & Andrew P. McPherson

For the creators of musical instruments, symbolic and embodied media encourage different ways of thinking; typically, circuitry and code reward top-down explication, whereas physical materials reward bottom-up exploration. In seeking the best of both, many computational design media are constrained by the goal of integrating these ways of thinking. Achieving this integration often involves a trade-off between working with functional prototypes and maintaining the openness of the design space. In this work we propose that embodied craft practice with functional prototypes can nurture bottom-up ways of working to inspire new computational design media. To begin investigating this idea, we designed a craft workshop for 20 musical instrument designers. Groups were given the same ‘unfinished’ instrument to craft for one hour with raw materials, and though the task was open-ended they were prompted to focus on subtle details that might distinguish their instruments. Despite the prompt, the groups diverged dramatically in intent and style, and generated gestural language rapidly and flexibly. The workshop outcomes support the idea that this approach encourages bottom-up process, subjective reinterpretation and diverse collaborative process. We discuss how this approach complements, and can act as a reference for, computational design media.

Speakers
Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His topic is on supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.
Andrew McPherson

Reader, Queen Mary University of London, United Kingdom


2:00pm

Paper 6.5
Individual Fabrication of Cymbals using Incremental Robotic Sheet Forming
by Ammar Kalo & Georg Essl

Incremental robotic sheet forming is used to fabricate a novel cymbal shape based on models of geometric chaos for stadium-shaped boundaries. This provides a proof-of-concept that this robotic fabrication technique might be a candidate method for creating novel metallic idiophones that are based on sheet deformations. Given that the technique does not require molding, it is well suited for both rapid and iterative prototyping and the fabrication of individual pieces. With advances in miniaturization, this approach may also be suitable for personal fabrication. In this paper we discuss this technique as well as aspects of the geometry of stadium cymbals and their impact on the resulting instrument.

Speakers

3:30pm

4:00pm

Paper Session 7: Skill, Learning & Guitars

Speakers
John McDowell

Graduate, Trinity College Dublin
John McDowell graduated from Trinity College Dublin with a Masters degree in Music and Media Technologies in 2017, focusing on the role of haptics in instrumental music listening. He studied the classical guitar under internationally renowned performer and musicologist, John...
Andrew McPherson

Reader, Queen Mary University of London, United Kingdom
Sandor Mehes

PhD student, Queen's University Belfast
Fabio Morreale

Postdoctoral Researcher, Queen Mary University of London


4:00pm

Paper 7.1
Haptic-Listening and the Classical Guitar
by John McDowell

This paper reports the development of a ‘haptic-listening’ system which presents the listener with a representation of the vibrotactile feedback perceived by a classical guitarist during performance, through the use of haptic feedback technology. The paper describes the design of the haptic-listening system in two prototypes: the “DIY Haptic Guitar” and a more robust trial prototype using a Reckhorn BS-200 shaker. Through two experiments, the perceptual significance and overall musical contribution of adding haptic feedback in a listening context were evaluated. Subjects preferred listening to the classical guitar presentation with the addition of haptic feedback, and the added feedback contributed to listeners’ engagement with a performance. The results of the experiments and their implications are discussed in this paper.

Speakers
John McDowell

Graduate, Trinity College Dublin
John McDowell graduated from Trinity College Dublin with a Masters degree in Music and Media Technologies in 2017, focusing on the role of haptics in instrumental music listening. He studied the classical guitar under internationally renowned performer and musicologist, John...


4:00pm

Paper 7.2
When is a Guitar not a Guitar? Cultural Form, Input Modality and Expertise
by Jacob Harrison, Robert H Jack, Fabio Morreale & Andrew P. McPherson

The design of traditional musical instruments is a process of incremental refinement over many centuries of innovation. Conversely, digital musical instruments (DMIs), being unconstrained by requirements of efficient acoustic sound production and ergonomics, can take on forms which are more abstract in their relation to the mechanism of control and sound production. In this paper we consider the case of designing DMIs for use in existing musical cultures, and pose questions around the social and technical acceptability of certain design choices relating to global physical form and input modality (sensing strategy and the input gestures that it affords). We designed four guitar-derivative DMIs suitable for performing a strummed harmonic accompaniment to a folk tune. Each instrument possessed varying degrees of `guitar-likeness', based either on the form and aesthetics of the guitar or on its specific mode of interaction. We conducted a study where both non-musicians and guitarists played two versions of the instruments and completed musical tasks with each instrument. The results of this study highlight the complex interaction between global form and input modality when designing for existing musical cultures.

Speakers
Andrew McPherson

Reader, Queen Mary University of London, United Kingdom
Fabio Morreale

Postdoctoral Researcher, Queen Mary University of London


4:00pm

Paper 7.3
A Longitudinal Field Trial with a Hemiplegic Guitarist Using The Actuated Guitar
by Jeppe Larsen, Hendrik Knoche & Dan Overholt

Common emotional effects following a stroke include depression, apathy and lack of motivation. We conducted a longitudinal case study to investigate whether enabling a former guitarist to re-learn to play guitar after a stroke would help increase motivation for self-rehabilitation and quality of life. The intervention lasted three weeks, during which the participant had at his free disposal a fully functional electric guitar fitted with a strumming device controlled by a foot pedal. The device replaced right-hand strumming of the strings, and the study showed that the participant, who was highly motivated, played 20 sessions despite system latency and reduced musical expression. He incorporated his own literature and equipment into his playing routine and improved greatly as the study progressed. He was able to play alone and keep a steady rhythm in time with backing tracks as fast as 120 bpm. During the study he lowered his error rate to 33%, while his average flutter also decreased.

Speakers

4:00pm

Paper 7.4
Co-Tuning Virtual-Acoustic Performance Ecosystems: observations on the development of skill and style in the study of musician-instrument relationships
by Paul Stapleton, Maarten Van Walstijn & Sandor Mehes

In this paper we report preliminary observations from an ongoing study into how musicians explore and adapt to the parameter space of a virtual-acoustic string bridge plate instrument. These observations inform (and are informed by) a wider approach to understanding the development of skill and style in interactions between musicians and musical instruments. We discuss a performance-driven ecosystemic approach to studying musical relationships, drawing on arguments from the literature which emphasise the need to go beyond simplistic notions of control and usability when assessing exploratory and performatory musical interactions. Lastly, we focus on processes of perceptual learning and co-tuning between musician and instrument, and how these activities may contribute to the emergence of personal style as a hallmark of skilful music-making.

Speakers
Sandor Mehes

PhD student, Queen's University Belfast


6:00pm

Banquet
Banquet Featuring Remarks by:
Cyril Clarke (VT Provost)

8:30pm

Concert 6: Evening with Onyx Ashanti

Artists
Onyx Ashanti

Composer | Performer | Media Artist
Since being introduced on the world stage by way of a fortuitous 2011 TED talk entitled “This is Beatjazz”, Mississippi native Onyx Ashanti has been evolving at a pace that surprises even him. Musician, programmer, 3D print-designer, writer, performer, inventor...a...
Seth Thorn

Faculty | AME, Arizona State University


8:30pm

Music Piece 6.1
Windowless
by Seth Thorn


“Windowless” is the name of a particular sonic substrate for performance with the alto.glove controller and the violin. As the name suggests, the system is designed to be highly responsive. For the same reason, it is helpful to think of Windowless as a hyperbolic acoustics rather than a particular composition or an instrument paradigm.

Artists
Seth Thorn

Faculty | AME, Arizona State University


8:30pm

Music Piece 6.2
Goldstream Variations
by Scott Deal

Harry Chaubey & Erzsébet Gaál Rinne, performers

Goldstream Variations (2012) creates an interconnected system through live music, electronics, and machine learning algorithms. The variations are scored for one to seven musicians on undetermined acoustic instruments, together with one to seven electronic/computer artists. The selection of this grouping shapes the aural nature of the performance space through the arrangement of performers and loudspeakers. Each page of the score constitutes one variation, which is performed in heterophonic fashion as an ensemble. The acoustic musicians’ performances are engaged by various computer artists. The variations are designed for performance either in a single physical space or distributed telematically between multiple sites on high-bandwidth Internet. Machine learning (ML) is introduced into the design of the work via a software application as well as through the structural shape of the composition, in which virtuosic musical passages are followed by large rests, which in turn create room for liberal amounts of interactivity. The aesthetic focus of a performance lies in the timing, placement, and juxtaposition of virtual and live sound. Decisions regarding spacing, ensemble, dynamics, crescendos and decrescendos, pacing, and phrasing are left to each group. Goldstream Variations was composed during an extended stay in the Goldstream Valley outside of Fairbanks, Alaska. It was commissioned by Erzsébet Gaál Rinne and is dedicated to her.

Artists

8:30pm

Music Piece 6.3
who Are we Are here
by Ben Sutherland

David Perry, clarinet
Elizabeth Crone, flute

This work explores the dislocation and discovery of sonic and physical identity and space. It relies on gestural control via computer vision and audio analysis to trigger, shape, and direct the computer’s responses. The work is in five movements without pause: I - Invocation and Genesis; II - Construction and Exposition; III – Dance; IV - Reflection and Resistance; and V - Transcendence. Though formally delineated by these sections, the piece evolves organically, tightly integrating the music technology and human elements, blurring the lines of musical agency and identity, and democratizing the roles of the acoustic and electroacoustic constituents.


8:30pm

Music Piece 6.4
Evening Concert with Onyx Ashanti
by Onyx Ashanti

Artists
Onyx Ashanti

Composer | Performer | Media Artist
Since being introduced on the world stage by way of a fortuitous 2011 TED talk entitled “This is Beatjazz”, Mississippi native Onyx Ashanti has been evolving at a pace that surprises even him. Musician, programmer, 3D print-designer, writer, performer, inventor...a...


8:45pm

NIME Projection Mapping Exhibit | 2018 | CURATED
by Armi Behzad, David J Franusich, Mahshid Gorjian, George Hardebeck, Tacie Jones, Xindi Liu, Daniel Robert Monzel, Huy Quoc Ngo, Heather Arnold, Jesse Bibel, Zachary Cortez, Justus Darby, Nishat Jamil, Antonia Marigliani, Pei Qiu, Michael Rhoades & Yiming Wang

The School of Visual Arts projection mapping classes Art 4544 and 5724, taught by Thomas Tucker in collaboration with ICAT and the Moss Arts Center, have created a large-scale projection mapping project which will be projected onto the south-west facade of the Moss Arts Center after dark.

Please note that this session will only take place if the weather permits.

Exhibitors

10:30pm

Concert 7

Artists
Alexander Jensenius

Associate Professor, University of Oslo
Alexander Refsum Jensenius is a music researcher and research musician. His research focuses on why music makes us move, which he explores through empirical studies using different types of motion sensing technologies. He also uses the analytic knowledge and tools in the creation...
Benjamin Carey

Academic Fellow, University of Sydney, Sydney Conservatorium of Music
Benjamin Carey is a Sydney-based saxophonist, composer and technologist. His recent research and practice incorporates equal parts performance, composition and the development of musical software systems. He completed a PhD at the University of Technology, Sydney (2016), and currently...
Bernt Isak Wærstad

University Lecturer / Freelance, Norwegian Academy of Music
Musician, sound artist, producer and sound designer with a Masters in Music Technology from NTNU in real-time granular synthesis of electric guitar. In addition to freelance work as a musician and sound engineer, he teaches at the Norwegian University of Science and Technology and...
Charles Martin

Postdoctoral Fellow, University of Oslo
Computer scientist, percussionist and computer musician. Interested in embedded systems, mobile devices and musical AI!


10:30pm

Music Piece 7.1
Imaginary Trainscape
by Takuto Fukuda & Ana Dall'Ara-Majek

Imaginary Trainscape is a theatrical structural improvisation. This performance evokes a nostalgia for the golden age of Canadian trains. While appreciating its decadence, this improvisation also dreams of a renaissance of railways in North America. In association with the transmission mechanism between a boiler and the driving wheels of a steam locomotive, this performance explores two types of synchronicity: inter-corporeal synchrony and sonic-corporeal synchrony, meaning synchronicity between performers and synchronicity between the sounds and bodily gestures of each performer, respectively. Our self-made analogue/digital musical instruments, an augmented Theremin and a gyro-based gestural controller, mediate these synchronies and empower our mental boilers, which enthusiastically trigger an over-saturation of synthetic sounds from the instruments.


10:30pm

Music Piece 7.2
Stillness Under Tension
by Charles Martin & Alexander Refsum Jensenius


Stillness Under Tension is a quartet standstill work for Myo gesture control armband and Bela embedded music platform. Humans are incapable of standing completely still due to breathing and other involuntary micromotions. This work explores the expressive space of standing still through an inverse action-sound mapping: less movement leads to more sound. The Myo is used to measure the performers’ movement and the muscle activity in their forearms, which they can use, both voluntarily and involuntarily, to control a synthesised sound world. This musical space is defined separately for each performer by their physical position.

Artists
Alexander Jensenius

Associate Professor, University of Oslo
Alexander Refsum Jensenius is a music researcher and research musician. His research focuses on why music makes us move, which he explores through empirical studies using different types of motion sensing technologies. He also uses the analytic knowledge and tools in the creation...
Charles Martin

Postdoctoral Fellow, University of Oslo
Computer scientist, percussionist and computer musician. Interested in embedded systems, mobile devices and musical AI!


10:30pm

Music Piece 7.3
lichens - for soprano saxophone and interactive audio visual system
by Ben Carey


lichens is a work for improvising saxophonist and audio-visual system. The work places the performer in a symbiotic relationship with a living audio-visual object, where both improviser and system learn from and adapt to each other in performance. This conception of human-machine interactivity is modelled upon the natural process of symbiosis, with the title 'lichens' referring to a composite organism exhibiting a symbiotic relationship between a fungus and an alga.

Artists
Benjamin Carey

Academic Fellow, University of Sydney, Sydney Conservatorium of Music
Benjamin Carey is a Sydney-based saxophonist, composer and technologist. His recent research and practice incorporates equal parts performance, composition and the development of musical software systems. He completed a PhD at the University of Technology, Sydney (2016), and currently...


10:30pm

Music Piece 7.4
Vibez
by ALGOBABEZ = Joanne Armitage & Shelly Knotts

ALGOBABEZ are Joanne Armitage and Shelly Knotts, coding crunchy, error-driven sound synthesis and MIDI patterns in SuperCollider. Their performances embrace human and computational failure, working around network issues, bugs and communication problems to form cohesive musical structures. Vibez is a telehaptic performance in which the duo perform geographically separated but feel each other’s presence through data streams mapped to touch sensation. The transcontinental Algorave duo have been working on technical solutions to recreate the physical closeness of co-located collaboration. Since half of the band relocated to the southern hemisphere they have been performing telematically at raves around the world, sending one physical body to the performance space while beaming in the audio waves of the other over the internet. More recently, they have been working on mechanically replicating themselves through sensors, vibrations and algorithms. They will perform with a sensor system (HR and GSR) which turns biophysical data into haptic feedback and visualisations. They will embed an extended sense of their physicality into the performance, sharing with each other and the audience their levels of stress, moments of stasis and general head-bobbing enjoyment.

Artists

10:30pm

Music Piece 7.5
Electroacoustic Guitar
by Bernt Isak Wærstad

DIY electronics and custom processing software written in Csound (COSMO) are morphed with an acoustic guitar to create a new electroacoustic instrument, extending the traditional instrument and exploring the cross-sections between electronic and acoustic timbres.

Artists
Bernt Isak Wærstad

University Lecturer / Freelance, Norwegian Academy of Music
Musician, sound artist, producer and sound designer with a Masters in Music Technology from NTNU in real-time granular synthesis of electric guitar. In addition to freelance work as a musician and sound engineer, he teaches at the Norwegian University of Science and Technology and...


10:30pm

Music Piece 7.6
Mozartkebap
by Nick Acorne (Mykyta Prykhodchenko), Jonathan Carter & Takuto Fukuda


 
Wednesday, June 6
 

8:00am

NIME Registration
The registration desk will be closed on the 6th of June. Any participants wishing to register are encouraged to contact the staff via the Slack #presenters channel: nime2018.slack.com


Wednesday June 6, 2018 8:00am - 4:00pm
nime2018.slack.com

8:30am

9:00am

Paper Session 8: New Instruments and Interactions

Speakers
Giacomo Lepri

PhD candidate, Queen Mary University of London
Giacomo is a musician, improviser, composer, sonic interaction designer and researcher. Currently PhD student at the Augmented Instruments Lab, Centre for Digital Music, Media & Arts Technology Program, Queen Mary University of London. Previously at STEIM - Amsterdam.
Charles Martin

Postdoctoral Fellow, University of Oslo
Computer scientist, percussionist and computer musician. Interested in embedded systems, mobile devices and musical AI!
Andrew McPherson

Reader, Queen Mary University of London, United Kingdom
Seth Thorn

Faculty | AME, Arizona State University
Dan Wilcox

Researcher & Software Developer, ZKM | Hertz-Lab
Artist Engineer Musician Performer (Astronaut) | new media & arts engineering, experimental computer rock, technically complicated projects for noisy/primal displays


9:00am

Paper 8.1
Telemetron: A Musical Instrument for Performance in Zero Gravity
by Sands A Fish II & Nicole L'Huillier


The environment of zero gravity affords a unique medium for new modalities of musical performance, both in the design of instruments, and human interactions with said instruments. To explore this medium, we have created and flown Telemetron, the first musical instrument specifically designed for and tested in the zero gravity environment. The resultant instrument (leveraging gyroscopes and wireless telemetry transmission) and recorded performance represent an initial exploration of compositions that are unique to the physics and dynamics of outer space. We describe the motivations for this instrument, and the unique constraints involved in designing for this environment. This initial design suggests possibilities for further experiments in musical instrument design for outer space.

Speakers

9:00am

Paper 8.2
robotcowboy: 10 Years of Wearable Computer Rock
by Dan Wilcox


This paper covers the technical and aesthetic development of robotcowboy, the author's ongoing human-computer wearable performance project. Conceived as an idiosyncratic manifesto on the embodiment of computational sound, the original robotcowboy system was built in 2006-2007 using a belt-mounted industrial wearable computer running GNU/Linux and Pure Data, external USB audio/MIDI interfaces, HID gamepads, and guitar. Influenced by roadworthy analog gear, chief system requirements were mobility, plug-and-play, reliability, and low cost.
 
 From 2007 to 2011, this first iteration "Cabled Madness" melded rock music with realtime algorithmic composition and revolved around cyborg human/system tension, aspects of improvisation, audience feedback, and an inherent capability of failure. The second iteration "Onward to Mars" explored storytelling from 2012-2015 through the one-way journey of the first human on Mars with the computing system adapted into a self-contained spacesuit backpack.
 
 Now 10 years on, a new "robotcowboy 2.0" system powers a third iteration with only an iPhone and PdParty, the author's open-source iOS application which runs Pure Data patches and provides full duplex stereo audio, MIDI, HID game controller support, and Open Sound Control communication. The future is bright, do you have room to wiggle?

Speakers
Dan Wilcox

Researcher & Software Developer, ZKM | Hertz-Lab
Artist Engineer Musician Performer (Astronaut) | new media & arts engineering, experimental computer rock, technically complicated projects for noisy/primal displays


9:00am

Paper 8.3
Bela-Based Augmented Acoustic Guitars for Sonic Microinteraction
by Victor Evaristo Gonzalez Sanchez, Charles Patrick Martin, Agata Zelechowska, Kari Anne Vadstensvik Bjerkestrand, Victoria Johnson & Alexander Refsum Jensenius


This article describes the design and construction of a collection of digitally-controlled augmented acoustic guitars, and the use of these guitars in the installation Sverm-Resonans. The installation was built around the idea of exploring `inverse' sonic microinteraction, that is, controlling sounds by the micromotion observed when attempting to stand still.

 It consisted of six acoustic guitars, each equipped with a Bela embedded computer for sound processing (in Pure Data), an infrared distance sensor to detect the presence of users, and an actuator attached to the guitar body to produce sound. With an attached battery pack, the result was a set of completely autonomous instruments that were easy to hang in a gallery space.

 The installation encouraged explorations on the boundary between the tactile and the kinesthetic, the body and the mind, and between motion and sound. The use of guitars, albeit with an untraditional `performance' technique, made the experience both familiar and unfamiliar at the same time. Many users reported heightened sensations of stillness, sound, and vibration, and that the `inverse' control of the instrument was both challenging and pleasant.
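The `inverse' control idea above can be made concrete with a small sketch. The following Python function is purely illustrative, not the authors' Pure Data patch: the function name, smoothing factor and motion gain are all assumptions. It raises an amplitude envelope while successive sensor readings stay nearly constant, and suppresses it when the reading jumps.

```python
def update_amplitude(reading, prev_reading, amp, smooth=0.95, gain=10.0):
    """Illustrative inverse action-sound mapping: stillness raises the
    output level, movement suppresses it. Constants are assumed values."""
    motion = abs(reading - prev_reading)           # change between sensor frames
    target = max(0.0, 1.0 - gain * motion)         # near-stillness -> target near 1.0
    return smooth * amp + (1.0 - smooth) * target  # smoothed amplitude envelope

# Standing still: the level creeps upward frame by frame.
level = 0.0
for r in [0.50, 0.50, 0.50, 0.50]:
    level = update_amplitude(r, 0.50, level)

# A sudden movement pulls the level back down.
level_after_move = update_amplitude(0.90, 0.50, level)
```

In a real deployment the per-frame sensor reading would come from the infrared distance sensor, and the resulting level would scale the actuator signal; the one-pole smoothing simply keeps the response from jumping audibly.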

Speakers
Alexander Jensenius

Associate Professor, University of Oslo
Alexander Refsum Jensenius is a music researcher and research musician. His research focuses on why music makes us move, which he explores through empirical studies using different types of motion sensing technologies. He also uses the analytic knowledge and tools in the creation...
Charles Martin

Postdoctoral Fellow, University of Oslo
Computer scientist, percussionist and computer musician. Interested in embedded systems, mobile devices and musical AI!


9:00am

Paper 8.4
Mirroring the past, from typewriting to interactive art: an approach to the re-design of a vintage technology
by Giacomo Lepri & Andrew P. McPherson


Obsolete and old technologies are often used in interactive art and music performance. DIY practices such as hardware hacking and circuit bending provide effective methods for integrating old machines into new artistic inventions. This paper presents the Cembalo Scrivano .1, an interactive audio-visual installation based on an augmented typewriter. Borrowing concepts from media archaeology studies, tangible interaction design and digital lutherie, we discuss how investigations into the historical and cultural evolution of a technology can suggest directions for the regeneration of obsolete objects. The design approach outlined focuses on the remediation of an old device and aims to evoke cultural and physical properties associated with the source object.

Speakers
Giacomo Lepri

PhD candidate, Queen Mary University of London
Giacomo is a musician, improviser, composer, sonic interaction designer and researcher. Currently PhD student at the Augmented Instruments Lab, Centre for Digital Music, Media & Arts Technology Program, Queen Mary University of London. Previously at STEIM - Amsterdam.
Andrew McPherson

Reader, Queen Mary University of London, United Kingdom


9:00am

Paper 8.5
Alto.Glove: New Techniques for Augmented Violin
by Seth Dominicus Thorn


This paper describes a performer-centric approach to the design, sensor selection, data interpretation, and mapping schema of a sensor-embedded glove called the “alto.glove” that the author uses to extend his performance abilities on violin and viola. The alto.glove is a response to the limitations—both creative and technical—perceived in feature extraction processes that rely on classification. The hardware answers one problem: how to extend violin playing in a minimal yet powerful way; the software answers another: how to create a rich, evolving response that enhances expression in improvisation. The author approaches this problem from the various roles of violinist, violist, hardware technician, programmer, sound designer, composer, and improviser. Importantly, the alto.glove is designed to be cost-effective and relatively easy to build.

Speakers
Seth Thorn

Faculty | AME, Arizona State University


9:00am

9:00am

Aphysical Unmodeling Instrument | 2017
by Tomoya Matsuura

Moss Arts Center - 3rd Floor in Balcony Lobby

Aphysical Unmodeling Instrument rethinks the description and generation of sound and music through the re-physicalization of Whirlwind, a targetless physical model. Whirlwind is a combined, impossible physical model of three wind instruments: trumpet, flute and clarinet. Our work reimplements the computational elements of Whirlwind with physical objects, such as a delay realised by sound propagation and a resonator realised by a Helmholtz resonator. In our work, the acts of composition, instrument creation, installation and performance are parallelized. The notion of digital sound is expanded out of the computer by re-physicalizing a computational model.

Exhibitors
Tomoya Matsuura

Master Student/Artist, Kyushu University


9:00am

Attunement | 2018
by Olivia Webb & Flo Wilson

Moss Arts Center - 2nd Floor in Mezzanine Lobby

The verb ‘attune’ usually describes the act of making something harmonious, as in the tuning of an instrument. Attunement is also a state of relation to an object, technology, environment or other people. To become attuned is to engage in a two-way sympathetic and empathetic exchange. In this installation, attunement is used both as a technique for exploring ways of being with others in the world, and a method for considering how technology mediates this exchange.

In this conference of new musical interfaces, we invite all participants to consider the ethics of listening as mediated by technology. Listening is central to human interaction, yet habits within Western culture tend to privilege speech and being heard over listening to and receiving others. New technology continues to accelerate the speed that we can sound, voice and express ourselves. We are interested in how we might engage with performance and technology to become better listeners.

In this installation, you are invited to practice attunement by taking part in a selection of simple embodied listening exercises. Step out of your own breath, your own voice, your own self. Contemplate the changes required of you in order to receive someone or something else.

Exhibitors
Flo Wilson

Audio Foundation
Flo Wilson is a composer, producer and artist based in Auckland, New Zealand whose organic, experimental music creates emotive atmospheres to facilitate empathetic, shared listening experiences. She has created custom-built spatial sound installations and then performed with them...


9:00am

Bǎi (摆): An Oscillating Sound Installation | 2018
by Jelger Kroese, Danyi Liu & Edwin van der Heide

Moss Arts Center - 1st Floor in Experience Studio B

Bǎi (摆), meaning pendulum in Chinese, is an interactive installation that uses a speaker hanging as a pendulum from the ceiling combined with an octophonic speaker setup to create a responsive sound environment. Besides being a sound source, the pendulum speaker is also the interface by which the audience interacts with the installation. Through pushing, pulling and twisting, the audience can move the pendulum and set it into oscillating motions. A dynamic system underlying the installation translates these motions into different modes of behavior of the sound environment. At first, it may seem that the environment reacts to the motions predictably. However, exercising too much control over the pendulum causes the installation to quickly spiral into chaotic and unpredictable behavior. This, combined with the fact that hard physical labour is needed to restrain the pendulum, leads to a tense dialogue between participant and object, struggling for control. The movements resulting from this dialogue cause the sounds in the environment to change between different states of stability and chaos, thereby mirroring the types of dynamics that are also seen in natural ecosystems.

Exhibitors
Jelger Kroese

Jelger Kroese is a designer and coder in the field of sound, interaction and education. He has a deep interest in the processes that underlie ecological systems. As a result, his work mostly comprises compositions and installations that place biological concepts within a technological...
Danyi LIU

PhD student, Leiden Institute of Advanced Computer Science
Danyi Liu is a designer and researcher in the field of sound and interactive art. Currently, she is a PhD student at the Leiden Institute of Advanced Computer Science. Her research focuses on real-time interactive data sonification and audience participatory installations and performances...


9:00am

Chorus for Untrained Operator | 2016
by Stephan Moore, Peter Bussigel

Chorus for Untrained Operator is a collection of discarded objects. Each has been relieved of its original responsibilities, rewired, and transformed to emphasize its musical voice. The ensemble of objects is controlled through the patch bay of a 1940s Western Electric switchboard.

Exhibitors

peter bussigel

Assistant Professor, Emily Carr University of Art + Design

Stephan Moore

Senior Lecturer, Northwestern University


Wednesday June 6, 2018 9:00am - 5:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

9:00am

FingerRing installation | 2016-18
by Sergey Kasich


Moss Arts Center - 3rd Floor in Balcony Lobby

FingerRing installation is an exhibition set of the fFlower interface for the FingerRing technique. Two sensitive panels are installed in the center of the space, which has an 8-channel sound system around its perimeter. Anyone is allowed to play with the panels and experience the flexibility and nature of the FingerRing technique, the simplest way to play with multichannel music. The installation has been shown at "SOUNDART: space of sound" (Manege Central Exhibition Hall, Saint Petersburg, 2017) and at MakerFaire Moscow (MISIS, Moscow, 2017).

The FingerRing technique has been presented at the BRERA Art Academy New Technologies in Art Department (Milan, Italy), the National University of Science and Technology MISIS (Moscow, Russia), the New York University Tandon School of Engineering (NYC, USA), and Cambridge (specially for Dr. Peter Zinovieff). It was included in the curriculum at Falmouth University in England in 2017 and presented as a workshop at EXPO 2017 (Astana, Kazakhstan).

Exhibitors

Sergey Kasich

founder, SoundArtist.ru
Music technology, experimental sound arts, interactive installations, social infrastructure, cultural projects, events, festivals, curation of new media arts, hybrid studies, R&D, anything


9:00am

What We Have Lost / What We Have Gained | 2014
by Matthew Mosher

What We Have Lost / What We Have Gained explores how to transform viewers into performers, participants, and players through large upper body movements and tangible interactions with a sculpture. The piece was originally conceived as a large scale MIDI drum pad style interface that would be both familiar to electronic musicians yet more physically expressive than typical MIDI devices.
As an art installation, it presents a four by three grid of video projected mouths on a spandex screen. Each video sample animates and sings a different vowel tone when pressed by a user. The volume of the singing increases as the player presses harder and deeper into the mouth screen, which distorts the spandex display surface. In this way, the piece provides audio, video and tactile feedback, rewarding the user with a multi-modal embodied experience. This work contributes to the discourse at the intersection of tangible interactions and musical expression by providing an example of how interaction design can facilitate engagement and convey meaning. What We Have Lost / What We Have Gained questions the experience of using one's physical body to manipulate the digital representation of another's body.
Special thanks to vocalists Ashley Reynolds and Keri Pierson.

Exhibitors

Matthew Mosher

Assistant Professor, University of Central Florida
Boston native Matthew Mosher is an intermedia artist and mixed methods research professor who creates embodied experiential systems. He received his BFA in Furniture Design from the Rhode Island School of Design in 2006 and his MFA in Intermedia from Arizona State University in 2012…


Wednesday June 6, 2018 9:00am - 5:00pm
Armory Gallery 203 Draper Rd NW, Blacksburg, VA 24061, USA

10:00am

McBlare: A Robotic Bagpipe Player
by Roger Dannenberg

"McBlare" is a robotic bagpipe player. It plays an ordinary set of bagpipes, using an air compressor to provide air. Electro-magnetic devices power the “fingers” that open and close tone holes to determine the musical pitch. A computer sends control signals to McBlare to operate the “fingers” to play traditional bagpipe tunes as well as new compositions. McBlare can also add authentic-sounding ornaments to simple melodies entered through a piano-like keyboard and play the result on the pipes. McBlare was constructed by the Robotics Institute for its 25th anniversary in 2004. The team that built McBlare includes Ben Brown, Garth Zeglin, and Roger Dannenberg. McBlare has performed in Miami, Pittsburgh, and Vancouver, and twice flew to Scotland, attending the International Piping Festival in Glasgow in 2006 and an exhibition at the Scottish Parliament in 2013. McBlare has also appeared and performed on the Canadian Broadcasting Corporation and BBC Scotland.

A triskelion is a triple spiral design, a reference to McBlare's tripod base. The work was composed using various algorithms to produce effects, such as the long opening trill, that are unplayable by humans.



Wednesday June 6, 2018 10:00am - 12:00pm
Moss Arts Center - Lawn Alumni Mall, Blacksburg, VA 24060, USA

11:00am

Keynote Talk with Pamela Z
Pamela Z

Artists

Pamela Z

Composer | Performer | Media Artist
Pamela Z is a composer/performer and media artist who works primarily with voice, live electronic processing, sampled sound, and video. A pioneer of live digital looping techniques, she processes her voice in real time to create dense, complex sonic layers. Her solo works combine…


12:00pm

Demo-Poster Session 3
Lunch break and demo-poster session. Coffee, pastries and refreshments will be provided.


Exhibitors

Jack Armitage

PhD student, Augmented Instruments Lab, C4DM, QMUL
Jack Armitage is a PhD student in the Augmented Instruments Lab, Centre for Digital Music, Queen Mary University of London. His research focuses on supporting craft in digital musical instrument design, supervised by Dr. Andrew McPherson.

Edgar Berdahl

Assistant Professor, Louisiana State University

Matthew Blessing

Doctoral Candidate, Louisiana State University

Kaiming Cheng

Undergrad, University of Virginia

Palle Dahlstedt

University of Gothenburg, Aalborg University
Palle Dahlstedt (b. 1971), Swedish improviser, researcher, and composer of everything from chamber and orchestral music to interactive and autonomous computer pieces, receiving the Gaudeamus Music Prize in 2001. Currently Obel Professor of Art & Technology at Aalborg University…

Daniel Formo

Research Fellow, Norwegian University of Science and Technology

Yoshifumi Kitamura

Tohoku University

Maria Mannone

University of Minnesota

Anthony T. Marasco

Louisiana State University
Embedded instruments, software for emergent media experiences, audiovisual installations, composition, modular synthesizers, the TV show “Perfect Strangers”.

Andrew McPherson

Reader, Queen Mary University of London, London, United Kingdom

Matthew Mosher

Assistant Professor, University of Central Florida

Mike Mulshine

Research Specialist, Princeton University

Jon Pigrem

Researcher, Queen Mary University of London
Hi! I'm Jon from Queen Mary University of London. I'm a musician, artist and researcher. My current research investigates tacit understandings of instrumental interaction with materials and sensors. Talk to me about: instrumental interaction, sensors, NIMEs, DMIs, electronic…

Eric Sheffield

Louisiana State University, Baton Rouge, Louisiana, United States

Jeff Snyder

Director of Electronic Music, Princeton University
Instrument designer, composer, improvisor.

Anna Xambó

Postdoctoral Research Assistant, Queen Mary University of London


12:00pm

Demo 3.01
game over - a musical 2D game engine
by Christof Ressi


This demo shows a self-developed game engine meant to be used in the context of installations and audio-visual performances. The player is supposed to explore different game worlds and interact freely with their environments and non-player characters. Almost everything the player does has musical consequences. The game worlds are collages of well-known vintage video game genres, such as the platformer, the dungeon crawler, SNES 'Mode 7' racing, and isometric pseudo-3D, creating bizarre and confusing scenarios. It is possible to modify the game while it is running (e.g. spawning/destroying/teleporting actors, changing the tile map, triggering events, etc.). This adds a level of live coding to a musical performance, in which the programmer can act as a musical partner. While the game engine is written in C++, the game logic is scripted in Lua. All sound is produced in Pure Data using a self-written emulator of the Roland D-110 sound module and various other synthesis and sampling techniques. Maps can be designed with a custom level editor. game_over was developed within the artistic research project "GAPPP" at the Institute of Electronic Music in Graz, Austria, and is funded by the Austrian Science Fund (project number AR364-G24).


12:00pm

Demo 3.02
Teeth
by Spencer Salazar


Teeth is a composition for solo performer using the Auraglyph musical sketchpad software on an iPad. Using two hand-drawn oscillator waveforms and delays, the performer explores a series of modulated timbres and frequencies. The performer's actions on the tablet are displayed to the audience via an overhead camera, making transparent the connection from the performer's action to sonic and visual representations.


12:00pm

Demo 3.03
Romp in Chaos
Demo by Edgar Berdahl


The sound of chaos can be joyous! This electroacoustic miniature is an exercise that explores the edge of chaos, which is realized by two digital waveguides resonating against the Peter de Jong chaotic map. For this work, an embedded acoustic instrument was created with five pressure sensors and five potentiometers. As the performer changes the parameters to and fro, the sound romps back and forth between chaotic regimes and more tonal sounds. Long live chaos!

Exhibitors

Edgar Berdahl

Assistant Professor, Louisiana State University


12:00pm

Demo 3.04
Circles
by Barry Moon


Circles was created in response to the alienating experience of concert music, where the audience is confined to seating far away from the performer, who reads from a score hidden from their sight. I was interested in making the score more interesting to the spectator and in allowing the audience to move among the performers instead of being confined to seats. Performers each wear a helmet containing a Raspberry Pi computer, a sensor for color and motion, a speaker, and a battery for power. Pure Data is used for audio processing. Changes in computer processing are dictated by changes in the colors and movement of the performers picked up by the sensor. Performances by Annie Stevens (percussion) and Kyle Hutchins (saxophone).


12:00pm

Demo 3.05
Paint the Melody in Virtual Reality
by Kaiming Cheng

Exhibitors

Kaiming Cheng

Undergrad, University of Virginia


12:00pm

Demo 3.06
Unsichtbares Klavier (invisible piano), virtual controller
by Remmy Canedo


Play an imaginary instrument that simulates the attributes of a grand piano without its physical form.


12:00pm

Demo 3.07
Musical Chairs
by Matthew Blessing


Play a coffee table, end table, ottoman, and wingback chair - each outfitted with embedded sensors, CPUs, and audio drivers.

Exhibitors

Matthew Blessing

Doctoral Candidate, Louisiana State University


12:00pm

Demo 3.08
OtoKin installation
by Palle Dahlstedt & Ami Skånberg-Dahlstedt

In OtoKin, an invisible sound space is explored through ear (Oto) and movement (Kinesis). With eyes closed, you enter a high-dimensional acoustic space, where every small body movement matters. Through this re-translation of three-dimensional body action and position into infinite-dimensional sound texture and timbre, you are forced to re-think and re-learn: Position as place, position as posture, posture as timbre, timbre as a bodily construction. The OtoKin sound space is also shared with other users, with added modes of presence, proximity and interaction.

Exhibitors

Palle Dahlstedt

University of Gothenburg, Aalborg University


12:00pm

Demo 3.09
Body Biopotential Sonification
by Alan Macy


The Body Biopotential Sonification art project explores the idea of human nervous system expression in a movement/dance setting. A participating movement artist's biopotentials are monitored, and the measured electricity, sourced from the artist's physical activity, is greatly amplified and then conditioned to be presented as sound expression. The project operates via the principle of differential biopotential signal amplification. The detected signals are the participant's Lead I electrocardiogram and associated electromyogram. The visceral sonic environment generated within the confines of the project is established by the participant's measured biopotentials. The project enables a participant to co-create a musical composition that is sourced from an individual's physiology. Physiological metrics are measured, transformed, and then re-introduced as auditory stimuli to the participating movement artist and the audience.


12:00pm

Demo 3.10
Music Demo
by Kristina Warren


Arrest (2018), a work for Exo/Rosie. Exo/Rosie is a wearable, analog-digital instrument/persona I created that uses body-to-body connections, such as wrist to wrist, to vary analog audio and digital control output. These closed, covered gestures reflect the carceral state and limited agency around music technology today. Arrest explores the complete body – choreographically, expressively, and socially – as a crucial musical affordance.


12:00pm

Poster 3.01
Low Frequency Feedback Drones: A non-invasive augmentation of the double bass
by Thanos Polymeneas Liontiris

This paper illustrates the development of a Feedback Resonating Double Bass. The instrument is essentially an augmentation of an acoustic double bass using positive feedback. The research aimed to answer the question of how to augment a double bass and convert it into a feedback-resonating instrument without following an invasive method. The conversion process illustrated here is applicable and adaptable to double basses of any size, without making irreversible alterations to the instrument.


12:00pm

Poster 3.02
The Orchestra of Speech: a speech-based instrument system
by Daniel Formo


The Orchestra of Speech is a performance concept resulting from a recent artistic research project exploring the relationship between music and speech, in particular improvised music and everyday conversation. As a tool in this exploration, a digital musical instrument system has been developed for “orchestrating” musical features of speech into music, in real time. Through artistic practice, this system has evolved into a personal electroacoustic performance concept.

Exhibitors

Daniel Formo

Research Fellow, Norwegian University of Science and Technology


12:00pm

Poster 3.03
Surveying the Compositional and Performance Practices of Audiovisual Practitioners
by Anna Weisling, Anna Xambó, Ireti Olowe, Mathieu Barthet


This paper presents a brief overview of an online survey conducted with the objective of gaining insight into compositional and performance practices of contemporary audiovisual practitioners. The survey gathered information regarding how practitioners relate aural and visual media in their work, and how compositional and performance practices involving multiple modalities might differ from other practices. Discussed here are three themes: compositional approaches, transparency and audience knowledge, and error and risk, which emerged from participants' responses. We believe these themes contribute to a discussion within the NIME community regarding unique challenges and objectives presented when working with multiple media.

Exhibitors

Anna Xambó

Postdoctoral Research Assistant, Queen Mary University of London


12:00pm

Poster 3.04
Sound Opinions: Creating a Virtual Tool for Sound Art Installations through Sentiment Analysis of Critical Reviews
by Anthony T. Marasco


The author presents Sound Opinions, a custom software tool that uses sentiment analysis to create sound art installations and music compositions. The software runs inside the Node-RED programming environment. It scrapes text from web pages, pre-processes it, performs sentiment analysis via a remote API, and parses the resulting data for use in external digital audio programs. The sentiment analysis itself is handled by IBM's Watson Tone Analyzer.

The author has used this tool to create an interactive multimedia installation, titled Critique. Sources of criticism of a chosen musical work are analyzed, and the negative or positive statements about that composition are used to warp and change it. This allows the audience to hear the work only through the lens of its critics, and not in the original form that its creator intended.
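The final stage of a pipeline like this, mapping a sentiment score onto an amount of sonic transformation, can be sketched as follows. This is a minimal, hypothetical sketch: the function name and the score-to-warp mapping are assumptions for illustration, not the tool's actual code or the Watson Tone Analyzer API.

```python
# Hypothetical sketch of the last stage of the Sound Opinions pipeline:
# a signed sentiment score (assumed range: -1.0 = very negative,
# +1.0 = very positive) becomes an amount of audio warping, so that
# harsher criticism distorts the piece more.

def sentiment_to_warp(score, max_warp=1.0):
    """Map a signed sentiment score to a warp amount in [0, max_warp]."""
    return max_warp * max(0.0, -score)

# A scathing review warps the playback fully; praise leaves it untouched.
print(sentiment_to_warp(-1.0))  # 1.0
print(sentiment_to_warp(0.5))   # 0.0
```

The resulting warp amount would then drive whatever audio process (filtering, pitch shifting, granulation) the installation uses.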

Exhibitors

Anthony T. Marasco

Louisiana State University


12:00pm

Poster 3.05
A web-based 3D environment for gestural interaction with virtual music instruments as a STEAM education tool
by Kosmas Kritsis, Aggelos Gkiokas, Carlos Árpád Acosta, Quentin Lamerand, Robert Piéchaud, Maximos Kaliakatsos-Papakostas & Vassilis Katsouros


We present our work in progress on the development of a web-based system for music performance with virtual instruments in a virtual 3D environment, which provides three means of interaction (i.e., physical, gestural and mixed) using tracking data from a Leap Motion sensor. Moreover, our system is integrated as a creative tool within the context of a STEAM education platform that promotes science learning through musical activities.

The presented system models string and percussion instruments, with realistic sonic feedback based on Modalys, a physical-model-based sound synthesis engine. Our proposal meets the performance requirements of real-time interactive systems and is implemented strictly with web technologies.


12:00pm

Poster 3.06
CubeHarmonic: A New Interface from a Magnetic 3D Motion Tracking System to Music Performance
by Maria C. Mannone, Eri Kitamura, Jiawei Huang, Ryo Sugawara & Yoshifumi Kitamura


We developed a new musical interface, CubeHarmonic, using the magnetic tracking system IM3D, created at Tohoku University. The IM3D system precisely tracks the positions of tiny, wireless, battery-less, and identifiable LC coils in real time. CubeHarmonic is a musical application of the Rubik's cube, with notes on each small piece. Scrambling the cube, we get different chords and chord sequences. The positions of the pieces that contain LC coils are detected through IM3D and transmitted to the computer, which plays the sounds. The central position of the cube is also computed from the LC coils located in the corners of the Rubik's cube, and, depending on this central position, we can manipulate overall loudness and pitch, as in theremin playing. This new instrument, whose initial idea comes from the mathematical theory of music, can be used as a teaching tool both for math (group theory) and music (music theory, mathematical music theory), as well as a composition device, a new instrument for avant-garde performances, and a recreational tool.
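The central-position computation described above amounts to averaging the tracked 3D positions of the corner coils. A minimal sketch, with illustrative names (the real positions arrive from the IM3D system's own interface):

```python
# Minimal sketch: estimate the cube's center as the mean of the 3D
# positions of the LC coils embedded in its corners. The function name
# and input layout are assumptions for illustration.

def cube_center(corner_positions):
    """corner_positions: list of (x, y, z) tuples from the corner coils."""
    n = len(corner_positions)
    xs, ys, zs = zip(*corner_positions)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# The eight corners of a unit cube average to its geometric center:
corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(cube_center(corners))  # (0.5, 0.5, 0.5)
```

The resulting center coordinates could then be mapped to loudness and pitch, in the theremin-like manner the abstract describes.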

Exhibitors

Yoshifumi Kitamura

Tohoku University

Maria Mannone

University of Minnesota


12:00pm

Poster 3.07
The Whammy Bar as a Digital Effect Controller
by Martin M Kristoffersen & Trond Engum


In this paper we present a novel digital effects controller for electric guitar based on the whammy bar as a user interface. The goal of the project is to give guitarists a way to interact with dynamic effects control that feels familiar to their instrument and playing style. A 3D-printed prototype has been made. It replaces the whammy bar of a traditional Fender vibrato system with a sensor-equipped whammy bar. The functionality of the present prototype includes separate readings of force applied towards and away from the guitar body, as well as an end knob for variable control. Further functionality includes a hinged system allowing for digital effect control either with or without the mechanical manipulation of string tension. By incorporating digital sensors into the idiomatic whammy bar interface, one could potentially give guitarists a high level of control intimacy with the device, and thus lead to closer interaction with effects.


12:00pm

Poster 3.08
Timbre Tuning: Variation in Cello Spectrum Across Pitches and Instruments
by Robert Pond, Alexander Klassen & Kirk McNally


The process of learning to play a string instrument is a notoriously difficult task. A new student of the instrument is faced with mastering multiple, interconnected physical movements in order to become a skillful player. In their development, one measure of a player's quality is their tone, which results from the combination of the physical characteristics of the instrument and the player's technique. This paper describes preliminary research into creating an intuitive, real-time device for evaluating the quality of tone generation on the cello: a “timbre tuner” to help cellists evaluate their tone quality. Data for the study was collected from six post-secondary music students, consisting of recordings of scales covering the entire range of the cello. Comprehensive spectral audio analysis was performed on the data set in order to evaluate features suitable for describing tone quality. An inverse relationship was found between the harmonic centroid and the pitch played, which became more pronounced when restricted to the A string. In addition, a model for predicting the harmonic centroid at different pitches on the A string was created. Results from informal listening tests support the use of the harmonic centroid as an appropriate measure of tone quality.
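The harmonic centroid underlying this "timbre tuner" is the amplitude-weighted mean frequency of a tone's partials. A minimal sketch of that feature (the paper's actual analysis pipeline is more comprehensive; names here are illustrative):

```python
# Minimal sketch of a harmonic (spectral) centroid: the
# amplitude-weighted mean of the partial frequencies, in Hz.
# This only illustrates the feature; the paper's extraction
# from real cello recordings is more elaborate.

def harmonic_centroid(freqs_hz, amps):
    """Amplitude-weighted mean frequency of the given partials."""
    total = sum(amps)
    if total == 0:
        return 0.0
    return sum(f * a for f, a in zip(freqs_hz, amps)) / total

# Energy concentrated in low partials gives a lower ("darker") centroid:
even = harmonic_centroid([220, 440, 660], [1.0, 1.0, 1.0])  # 440.0
dark = harmonic_centroid([220, 440, 660], [4.0, 1.0, 0.0])  # 264.0
```

A real-time tuner would track this value against the model's prediction for the pitch being played.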


12:00pm

Poster 3.09
Tributaries of Our Lost Palpability
by Matthew Mosher, Danielle Wood & Tony Obr


This demonstration paper describes the concepts behind Tributaries of Our Distant Palpability, an interactive sonified sculpture. It takes the form of a swelling sea anemone, while the sounds it produces recall the quagmire of a digital ocean. The sculpture responds to changing light conditions with a dynamic mix of audio tracks, mapping volume to light level. People passing by the sculpture, or directly engaging it by creating light and shadows with their smartphone flashlights, trigger the audio. At the same time, it automatically adapts to gradual environmental light changes, such as the rise and fall of the sun. The piece was inspired by the searching gestures people make, and the emotions they have, while idly browsing content on their smart devices. It was created through an interdisciplinary collaboration between a musician, an interaction designer, and a ceramicist.

Exhibitors

Matthew Mosher

Assistant Professor, University of Central Florida


12:00pm

Poster 3.10
Embedded Digital Shakers: Handheld Physical Modeling Synthesizers
by Andrew Piepenbrink


We present a flexible, compact, and affordable embedded physical modeling synthesizer which functions as a digital shaker. The instrument is self-contained, battery-powered, wireless, and synthesizes various shakers, rattles, and other handheld shaken percussion. Beyond modeling existing shakers, the instrument affords new sonic interactions including hand mutes on its loudspeakers and self-sustaining feedback. Both low-cost and high-performance versions of the instrument are discussed.


12:00pm

Poster 3.11
Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases
by Anna Xambó, Gerard Roma, Alexander Lerch, Mathieu Barthet & György Fazekas


The recent increase in the accessibility and size of personal and crowdsourced digital sound collections has brought about a valuable resource for music creation. Finding and retrieving relevant sounds in performance leads to challenges that can be approached using music information retrieval (MIR). In this paper, we explore the use of MIR to retrieve and repurpose sounds in musical live coding. We present a live coding system built on SuperCollider enabling the use of audio content from online Creative Commons (CC) sound databases such as Freesound or from personal sound databases. The novelty of our approach lies in exploiting high-level MIR methods (e.g., query by pitch or rhythmic cues) using live coding techniques applied to sounds. We demonstrate its potential through reflections on an illustrative case study and feedback from four expert users. The users tried the system with either a personal or a crowdsourced database and reported its potential for tailoring the tool to their own creative workflows.
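A query-by-pitch lookup of the kind mentioned can be sketched as a filter over pre-analyzed sound metadata. This stand-alone sketch is an assumption for illustration only: the database layout and function name are invented, and the real system is built on SuperCollider and online services such as Freesound.

```python
# Hypothetical sketch of query-by-pitch over a pre-analyzed sound
# collection, in the spirit of the MIR retrieval described above.
# Each entry is assumed to carry an analyzed fundamental pitch in Hz.

def query_by_pitch(sounds, target_hz, tolerance_hz=25.0):
    """Return sounds whose analyzed pitch lies within tolerance of target."""
    return [s for s in sounds if abs(s["pitch_hz"] - target_hz) <= tolerance_hz]

collection = [
    {"name": "bell", "pitch_hz": 438.0},
    {"name": "drone", "pitch_hz": 110.0},
    {"name": "voice", "pitch_hz": 452.0},
]
matches = query_by_pitch(collection, 440.0)
print([s["name"] for s in matches])  # ['bell', 'voice']
```

In a live coding session, such a query would select candidate samples on the fly, which the performer then sequences or processes further.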

Exhibitors

Anna Xambó

Postdoctoral Research Assistant, Queen Mary University of London


12:00pm

Poster 3.13
The Feedback Trombone: Controlling Feedback in Brass Instruments
by Jeff Snyder, Michael R Mulshine & Rajeev S Erramilli


This paper presents research on control of electronic signal feedback in brass instruments through the development of a new augmented musical instrument, the Feedback Trombone. The Feedback Trombone (FBT) extends the traditional acoustic trombone interface with a speaker, microphone, and custom analog and digital hardware.

Exhibitors

Mike Mulshine

Research Specialist, Princeton University

Jeff Snyder

Director of Electronic Music, Princeton University
Instrument designer, composer, improvisor.


12:00pm

Poster 3.14
Mechanoise: Mechatronic Sound and Interaction in Embedded Acoustic Instruments
by Eric Sheffield


The use of mechatronic components (e.g. DC motors and solenoids) as both electronic sound source and locus of interaction is explored in a form of embedded acoustic instruments called mechanoise instruments. Micro-controllers and embedded computing devices provide a platform for live control of motor speeds and additional sound processing by a human performer. Digital fabrication and use of salvaged and found materials are emphasized.

Exhibitors

Eric Sheffield

Louisiana State University, Baton Rouge, Louisiana, United States


12:00pm

Poster 3.15
Do We Speak Sensor? Cultural Constraints of Embodied Interaction
by Jon Pigrem & Andrew P. McPherson


This paper explores the role of materiality in Digital Musical Instruments and questions the influence of tacit understandings of sensor technology. Existing research investigates the use of gesture, physical interaction and subsequent parameter mapping. We suggest that a tacit knowledge of the ‘sensor layer’ brings with it definitions, understandings and expectations that forge and guide our approach to interaction. We argue that the influence of technology starts before a sound is made, and comes not only from intuition of material properties, but also from received notions of what technology can and should do. On encountering an instrument with obvious sensors, a potential performer will attempt to predict what the sensors do and what the designer intends for them to do, becoming influenced by a machine-centred understanding of interaction rather than a solely material-centred one. The paper presents an observational study of interaction using non-functional prototype instruments designed to explore fundamental ideas and understandings of instrumental interaction in the digital realm. We show that this understanding influences both gestural language and the ability to characterise an expected sonic/musical response.

Exhibitors

Andrew McPherson

Reader, Queen Mary University of London, London, United Kingdom