Designing gestures for continuous sonic interaction

A Tanaka, B Di Donato, M Zbyszynski, G Roks - 2019 - figshare.le.ac.uk
We present a system that allows users to try different ways to train neural networks and temporal modelling to associate gestures with time-varying sound. We created a software framework for this and evaluated it in a workshop-based study. We build upon research in sound tracing and mapping-by-demonstration to ask participants to design gestures for performing time-varying sounds using a multimodal, inertial measurement (IMU) and muscle sensing (EMG) device. We presented the user with two classical techniques from the literature, Static Position regression and Hidden Markov based temporal modelling, and propose a new technique for capturing gesture anchor points on the fly as training data for neural network based regression, called Windowed Regression. Our results show trade-offs between accurate, predictable reproduction of source sounds and exploration of the gesture-sound space. Several users were attracted to our windowed regression technique. This paper will be of interest to musicians engaged in going from sound design to gesture design and offers a workflow for interactive machine learning.
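The windowed-regression idea described in the abstract, turning short windows of multimodal sensor data into training examples for a neural-network regressor that outputs sound-synthesis parameters, can be sketched roughly as follows. This is a minimal illustration only: the window and hop sizes, the mean-and-standard-deviation features, the channel counts, and the use of scikit-learn's MLPRegressor are assumptions chosen for demonstration and are not taken from the paper or its software framework.

```python
# Minimal sketch of windowed regression (assumed details, not the paper's code):
# overlapping windows of IMU + EMG data -> feature vectors -> neural-network
# regression onto sound-synthesis parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 32  # samples per window (assumed)
HOP = 8      # hop between successive windows (assumed)

def window_features(stream):
    """Slice an (n_samples, n_channels) stream into overlapping windows and
    reduce each window to a simple feature vector (per-channel mean + std)."""
    feats = []
    for start in range(0, len(stream) - WINDOW + 1, HOP):
        w = stream[start:start + WINDOW]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Stand-in training data: 9 IMU channels + 8 EMG channels recorded while a
# 4-parameter time-varying sound plays (random values here, for illustration).
rng = np.random.default_rng(0)
sensors = rng.normal(size=(1000, 9 + 8))
synth_params = rng.uniform(size=(1000, 4))

X = window_features(sensors)
# Pair each window with the synthesis parameters at its centre sample.
centres = np.arange(WINDOW // 2, WINDOW // 2 + HOP * len(X), HOP)
y = synth_params[centres]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# At performance time, a live window of sensor data is mapped to parameters.
live = window_features(sensors[:WINDOW])
print(model.predict(live))
```

In this reading, each window acts as an "anchor point" captured on the fly: the regressor interpolates between anchors at performance time, which is consistent with the trade-off the abstract reports between faithful reproduction of the source sound and freer exploration of the gesture-sound space.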