26-30 September, 2011
Observe user behavior when confronted with a sound interface that uses the body as input, without giving them any directions.
A sound controller based on camera vision that can detect whether you extend your arms to the right, to the left, or upward.
This set of three actions is used to trigger three different sounds.
There are no explicit constraints besides remaining within the camera's field of view.
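How the original controller was implemented is not documented here; purely as an illustration, the sketch below shows one way such a three-gesture detector could be built, assuming OpenCV for video capture and MediaPipe for pose estimation (both assumptions, not the tools used in 2011), with a hypothetical trigger_sound placeholder standing in for the actual audio playback.

```python
# Illustrative sketch only: OpenCV + MediaPipe are assumptions, not the
# tools behind the original 2011 controller. Thresholds are arbitrary.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
LM = mp_pose.PoseLandmark


def trigger_sound(name):
    # Placeholder for the actual audio playback (implementation unknown).
    print("sound:", name)


def classify(lm):
    """Return 'right', 'left', 'up' or None from normalized pose landmarks."""
    rw, lw = lm[LM.RIGHT_WRIST], lm[LM.LEFT_WRIST]
    rs, ls = lm[LM.RIGHT_SHOULDER], lm[LM.LEFT_SHOULDER]
    nose = lm[LM.NOSE]
    # Either wrist raised above the head -> "up".
    if rw.y < nose.y or lw.y < nose.y:
        return "up"
    # Wrist extended well past its shoulder toward the right edge of the image.
    if lw.x > ls.x + 0.2:
        return "right"
    # Wrist extended well past its shoulder toward the left edge of the image.
    if rw.x < rs.x - 0.2:
        return "left"
    return None


cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    last = None
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            gesture = classify(results.pose_landmarks.landmark)
            if gesture and gesture != last:  # trigger once per new gesture
                trigger_sound(gesture)
            last = gesture
        cv2.imshow("sound controller", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```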
- Questions: What do users do, and how do they move?
- How does the spatial mapping affect their choice of movements?
- What are the identifiable stages in the learning process?
- What is their overall evaluation of the experience?
- What is their own perception of what can be controlled, and how?
- As a first action, users wave their arms; they then try different movements.
- Users perceive that they can control more parameters than they actually can.
- The simple act of controlling sound with movement seems to be a fun experience.
- Learning is fast, probably aided by the visual feedback.