Test date: October 26, 2011
Location: MFA DT Lab, Parsons The New School for Design
Observe the level of attention, exploration and engagement with a sound installation.
Observe the behavior of participants when confronted with a sound interface that provides no visual feedback.
A computer sound interface is placed in a high-traffic zone of a private open space. The interface uses a camera to sense movement and proximity and outputs sonic feedback based on those parameters. The camera and speakers were placed near a hallway where people usually walk by, with the intention of seeing whether the interface made them stop.
When users walk by the interface, a sound grows louder the more they move; if the user freezes, the sound stops. A second control depends on proximity to the camera: as the user gets closer, a second sound (with a different timbre and frequency) gets louder; as the user walks away, that sound decreases in loudness.
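The two mappings described above can be sketched as a pure function from camera-derived parameters to gain values. Everything here is illustrative, not the project's actual code: `motion_amount` is assumed to come from simple frame differencing, and `proximity` is assumed to be normalized to 0-1 by the vision stage.

```python
def sound_levels(motion_amount, proximity, freeze_threshold=0.02):
    """Map two camera-derived parameters to gain values in [0.0, 1.0].

    motion_amount: fraction of pixels that changed between frames
                   (e.g. from simple frame differencing) -- hypothetical input.
    proximity: 0.0 = far from the camera, 1.0 = right in front of it.
    """
    # Sound 1: louder with more movement; silent when the user freezes.
    if motion_amount < freeze_threshold:
        gain_motion = 0.0
    else:
        gain_motion = min(motion_amount, 1.0)
    # Sound 2: louder as the user approaches the camera,
    # quieter as they walk away.
    gain_proximity = max(0.0, min(proximity, 1.0))
    return gain_motion, gain_proximity
```

The `freeze_threshold` is an assumed cutoff below which residual camera noise counts as "frozen", so the first sound actually stops rather than merely getting quiet.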
26-30 September, 2011
Observe user behavior when confronted with a sound interface that uses the body as input, without giving participants any directions.
A sound controller using camera vision that can detect whether you extend your arms to the right, to the left, or up.
These three gestures trigger three different sounds.
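A minimal sketch of the gesture-to-sound mapping, assuming the vision stage reports normalized wrist and shoulder positions (0-1, with y growing downward, as camera-vision libraries typically do). The threshold, function names, and sound filenames are all hypothetical; the source does not say how the original detector worked, or whether left/right were mirrored for the user.

```python
def classify_gesture(wrist_x, wrist_y, shoulder_x, shoulder_y, reach=0.3):
    """Classify an arm pose as 'left', 'right', 'up', or None.

    All names and thresholds here are illustrative stand-ins for
    whatever the installation's actual vision code computed.
    """
    dx = wrist_x - shoulder_x
    dy = wrist_y - shoulder_y
    if dy < -reach:          # wrist well above the shoulder
        return "up"
    if dx > reach:           # wrist extended to the camera's right
        return "right"
    if dx < -reach:          # wrist extended to the camera's left
        return "left"
    return None              # no clear gesture -> no sound triggered

# Hypothetical mapping from the three gestures to three sound files.
SOUNDS = {"left": "sound_a.wav", "right": "sound_b.wav", "up": "sound_c.wav"}
```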
14-17 September, 2011
Observe the communication behaviors of players engaged in creating a collaborative sound piece.
- Four players sit around a MIDI device with buttons that trigger different sounds.
- Each player is in charge of operating one quadrant of the interface (a 4×4 matrix).
- The goal is to collectively create a rhythm in which each participant plays only one sound, effectively dividing the role of a drummer among four participants.
- Participants can play one sound at a time, and only after the previous player's turn.
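The turn-taking rule above can be sketched as a small state machine. This is a simplifying sketch, not the device's firmware: pad indices 0-15 stand in for the 4×4 button matrix, and out-of-turn presses are simply rejected.

```python
class TurnSequencer:
    """Enforce the rule: one sound per player, in round-robin order.

    Hypothetical model of the game logic; the actual MIDI device
    enforced nothing, so in the real session this rule was social.
    """

    def __init__(self, players=4):
        self.players = players
        self.turn = 0          # index of the player whose turn it is
        self.pattern = []      # (player, pad) pairs played so far

    def press(self, player, pad):
        """Return True if the press was accepted, False if out of turn."""
        if player != self.turn:
            return False       # must wait for the previous player's turn
        self.pattern.append((player, pad))
        self.turn = (self.turn + 1) % self.players
        return True
```

Modeling the rule this way makes the collaborative constraint explicit: the rhythm can only emerge if all four players coordinate their timing.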
- How do players build a strategy to tackle a common goal?
- What stages can be identified as the communication develops?
- What kind of communications can be observed (i.e. physical, verbal, through the interface)?
- What does the learning process of operating the interface look like?