The gestures / movements are captured by the MPU6050 3-axis accelerometer and 3-axis gyroscope. The sensor data transmitted to the computer is interpreted and transformed into sound by software (the performance patch) written in Pure Data. The structure, functionality and programming guidelines for this performance patch are described in the following.
Raw data is read from the serial port the XBee dongle is connected to
(main > serial-status > pd serialport)
The sensor data is parsed from the data sent by the GePS unit and split into six data streams, one for each axis.
Data streams are written to tables for visualisation.
(main > serial-status > pd parse)
Data interpretation: Sensor data is scaled and processed to derive specific information such as derivatives and deltas, which in turn are interpreted to obtain triggers and control signals for the DSP. This data is sent out to the instruments and the settings patcher.
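As a language-agnostic illustration of this interpretation step, the following Python sketch derives frame-to-frame deltas from a scaled sensor stream and fires a trigger when a delta crosses a threshold. The function names and the threshold value are illustrative; they are not part of the actual Pure Data patch.

```python
# Illustrative sketch only -- the real processing happens inside the
# Pure Data performance patch. Function names and threshold are made up.

def deltas(samples):
    """Yield frame-to-frame differences (a discrete derivative)."""
    prev = None
    for x in samples:
        if prev is not None:
            yield x - prev
        prev = x

def triggers(delta_stream, threshold=0.75):
    """Yield True whenever the absolute delta exceeds the threshold."""
    for d in delta_stream:
        yield abs(d) > threshold

# A fast gesture produces large deltas at its start and end:
gyro = [0.0, 0.5, 2.0, 1.0]
d = list(deltas(gyro))      # [0.5, 1.5, -1.0]
t = list(triggers(d))       # [False, True, True]
```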
(main > serial-status > pd parse)
The performance can be structured by allowing modes to be switched via gestures. This, along with initial settings, is handled in settings.
(main > controls > pd settings)
Instruments are built to react to the interpreted sensor data. What you create or implement is up to you; it can range from simple sample playback to filtering, granulation, complex DSP like phase vocoders and physical modelling, or external instruments. Audio outputs are sent to the mixer.
The mixer collects the audio signals from all instruments. There are options to process the summed signal and to record the performance.
Graphical overview / interface of the main performance controls and modules. Everything can be accessed from here by opening the subpatchers (right-click open) or via the buttons in the "CONTROLS" section.
This element displays the status of the serial port or connection. It has a control to open, close or reset the serial connection, and it displays the average latency of the sensor data (7.3ms is a common value).
Instruments are the main building blocks to generate sound. The combination and control is handled by settings. Your performance can be extended by programming new instruments and combining them.
If you want to combine instruments and activate them by specific events, you need to implement a performance outline reacting to sensor data.
You can also turn them on/off manually, but keep in mind that the mixer is controlled by settings, which could mean that the instruments' outputs are muted.
Abstractions are files with code that is used in more than one place. Bear in mind that modifying an abstraction can change or completely break other modules that depend on it. If you want to modify one, copy its contents to a subpatcher or to a new patcher!
|geps.mod-template.pd||Instrument module template|
|geps.em-gyro.pd||Emphasize by movement (gyro)|
|geps.abstraction-header||Note to be included in all abstractions.|
|geps.delta.pd||Used to scale sensor data|
Every instrument needs its own sensor data curves. Therefore the sensor data is treated before it is sent to the instruments. In order to reduce calculation time, most of the treatment is done globally.
When using the gyroscope data, in some cases it is not important in which direction the hand moves (or on which axis). In these cases, values below 0 are multiplied by -1 (absolute value), or the whole signal is squared. Squaring the control signal makes the gestures more dynamic – values close to 1 are affected less than values close to 0. This also reduces the noise floor of the signal, which is useful when the data is interpreted directly as the amplitude of an audio signal.
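A minimal Python sketch of the two options described above, assuming gyro values already scaled to the range -1..1 (illustration only, not part of the patch):

```python
# Two direction-insensitive treatments of a scaled gyro value.
# Assumes input already normalised to -1..1; names are illustrative.

def absolute(x):
    """Mirror negative values to positive: direction is discarded."""
    return -x if x < 0 else x

def squared(x):
    """Direction-insensitive and more dynamic: values near 0 (noise)
    are attenuated far more than values near 1 (strong gestures)."""
    return x * x

# A weak rotation of 0.1 drops to 0.01, while a strong one of 0.9
# only drops to 0.81 -- the noise floor shrinks relative to gestures.
```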
An adaptive low pass filter (which treats the data differently depending on whether the values increase or decrease) is used for temporal control of the sensor data.
Triggering sounds is mostly done with the gyroscope data, because it is directly linked to the hand movement. The constant values of the accelerometer are used for slower and finer changes in sound. They can also be used for detecting the current hand position which can be applied to switching between different modes of control.
The sensor data is interpreted and turned into 5 different control signals. In case an instrument has more specific methods of interpretation, the scaled sensor data is sent out directly, too.
|accel_x,y,z||Accelerometer data in 3 axes. Temporal control and squaring is case specific.|
|gyro_x,y,z||Gyroscope data in 3 axes. Temporal control and squaring is case specific.|
|gyrodelta_x,y,z||This is the derivative of the gyroscope data in 3 axes. It shows peaks at the beginning and end of gestures, but not during the movement. The peaks coincide with the "visual peaks" of abrupt movements and are therefore useful for triggering sounds with a clearly defined attack.|
|gyro_all||The absolute values of all gyroscope axes are added. This signal represents the amount of movement of the hand / arm as a whole and is not direction sensitive.|
|gyrodelta_all||The absolute values of the derivatives of all gyroscope axes are added. This signal shows peaks at the beginning and end of movements of the hand / arm as a whole and is not direction sensitive.|
|t.flick||Transient detection based on gyrodelta_all.|
|slide||Variable lowpass filter, can be used to remove or smooth fast movement. y(n) = y(n-1) + ((x(n) - y(n-1)) / slide), with separate parameters for increasing and decreasing input. This treatment is available as an abstraction (slide.cs), as it takes individual parameters.|
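The slide filter formula from the table above can be sketched in Python as follows. The class and parameter names are illustrative; only the recurrence y(n) = y(n-1) + ((x(n) - y(n-1)) / slide) and the separate rise/fall parameters come from the source.

```python
# Sketch of the adaptive slide filter: a one-pole lowpass whose divisor
# depends on whether the input is rising or falling. Names are made up;
# the recurrence matches the formula given in the documentation.

class Slide:
    def __init__(self, slide_up, slide_down):
        self.slide_up = slide_up      # divisor while input rises
        self.slide_down = slide_down  # divisor while input falls
        self.y = 0.0                  # y(n-1), the previous output

    def process(self, x):
        slide = self.slide_up if x > self.y else self.slide_down
        self.y = self.y + (x - self.y) / slide
        return self.y

# slide_up=1 follows attacks instantly; slide_down=4 releases slowly.
s = Slide(slide_up=1, slide_down=4)
s.process(1.0)   # 1.0   (jumps up immediately)
s.process(0.0)   # 0.75  (decays gradually)
s.process(0.0)   # 0.5625
```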
We prepared an instrument prototype (GePS_TEMPLATE) to get started with implementing a new instrument. It is structured as follows:
During the development of the device the main focus was always on transmission speed. For a complete fusion of movement and sound, the reaction of the sensor had to be as fast as possible. (Commercial products like the Wii Remote do not offer that speed.)
The sensor data is sent byte by byte to the computer. One reading of all sensor values consists of 14 bytes: 6 axes with 2 bytes each (16bit) and 2 bytes as identifiers for the data parsing.
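To make the 14-byte layout concrete, here is a hypothetical Python parsing sketch. The two identifier bytes and the big-endian byte order are assumptions for illustration; the actual values must be checked against the GePS firmware.

```python
# Hypothetical sketch: parse one 14-byte sensor reading consisting of
# 2 identifier bytes followed by six signed 16-bit axis values
# (accel x/y/z, gyro x/y/z). HEADER bytes and big-endian order are
# assumptions, not taken from the GePS firmware.
import struct

HEADER = b"\xAA\x55"  # hypothetical frame identifier

def parse_reading(packet):
    """Return (ax, ay, az, gx, gy, gz) from a 14-byte frame."""
    if len(packet) != 14 or packet[:2] != HEADER:
        raise ValueError("not a valid sensor frame")
    return struct.unpack(">6h", packet[2:])
```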
When starting a performance, the configuration defined in Settings is sent out to the instruments and the mixer. You can implement a performance outline by triggering different instrument states through interpretation of the sensor data.
|mixer.ch0 time 10, gain 0||Set the gain of channel 0 in the mixer to 0 (close it) within 10 milliseconds.|
|mixer.summing type 1, factor 21||Amplify the sum by a factor of 21 and set the mixer's summing type to soft saturation (0 for hard clipping, 1 for soft saturation).|