Monday, June 4 • 3:30pm - 5:00pm
Paper 3.2

GestureRNN: A neural gesture system for the Roli Lightpad Block
by Lamtharn Hantrakul


Machine learning and deep learning have recently made a large impact in the artistic community. In many of these applications, however, the model is used to render the high-dimensional output directly, e.g. every individual pixel in the final image. Humans arguably operate in much lower-dimensional spaces during the creative process, e.g. the broad movements of a brush. In this paper, we design a neural gesture system for music generation based around this concept. Instead of directly generating audio, we train a Long Short-Term Memory (LSTM) recurrent neural network to generate instantaneous position and pressure on the Roli Lightpad instrument. These generated coordinates, in turn, give rise to the sonic output defined in the synth engine. The system relies on learning these movements from a musician who has already developed a palette of musical gestures idiomatic to the Lightpad. Unlike many deep learning systems that render high-dimensional output, our low-dimensional system can run in real time, enabling the first real-time gestural duet of its kind between a player and a recurrent neural network on the Lightpad instrument.
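The abstract does not give implementation details, but the core loop it describes, an LSTM that emits (x, y, pressure) frames which are fed back as the next input and forwarded to a synth engine, can be sketched as below. This is a minimal, hypothetical PyTorch sketch: the class name, layer sizes, and the generate helper are illustrative assumptions, not the author's code.

    import torch
    import torch.nn as nn

    class GestureRNN(nn.Module):
        # Assumed architecture: an LSTM maps the current (x, y, pressure)
        # frame to the next one. Hidden size is illustrative.
        def __init__(self, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size,
                                batch_first=True)
            self.head = nn.Linear(hidden_size, 3)

        def forward(self, seq, state=None):
            out, state = self.lstm(seq, state)
            return self.head(out), state

    def generate(model, seed, steps):
        # Roll the model forward autoregressively: each predicted frame
        # is fed back as the next input. In the live duet described in
        # the paper, each frame would drive the Lightpad's synth engine
        # as it is produced.
        frames, x, state = [], seed, None
        with torch.no_grad():
            for _ in range(steps):
                y, state = model(x, state)
                frames.append(y[:, -1, :])
                x = y[:, -1:, :]
        return torch.cat(frames)

    model = GestureRNN()
    # Hypothetical seed: one touch near the pad's center at moderate pressure.
    trace = generate(model, torch.tensor([[[0.5, 0.5, 0.3]]]), steps=200)

Because each step only produces a three-dimensional frame rather than raw audio, a loop of this shape is cheap enough to run in real time, which is the property the paper highlights.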

Speakers

Lamtharn Hantrakul

Yale University, New Haven, CT, United States


Monday June 4, 2018 3:30pm - 5:00pm EDT
Torgersen Hall - Room 2150
