We are interested in sound as a vehicle for transmitting information between humans and machines. Our research focuses mainly on spatial sound, applied psychoacoustics, and applied phonetics.
The visual channel is saturated with information from the devices we use daily; we look for ways to offload part of that information to spatial (3D) sound delivered over loudspeakers or headphones. We are particularly interested in synthesizing auditory distance and elevation in virtual environments and multi-sensory interfaces.
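As a flavor of what spatial sound synthesis involves, here is a minimal sketch of one common textbook approach, not necessarily the method used in our lab: panning a mono signal to a given azimuth using interaural time and level differences (ITD/ILD). The head radius, the 6 dB ILD ceiling, and the function names are illustrative assumptions; a real renderer would use measured HRTFs.

```python
import numpy as np

SR = 44100            # sample rate (Hz)
HEAD_RADIUS = 0.0875  # approximate head radius (m), assumed
C = 343.0             # speed of sound (m/s)

def spatialize(mono, azimuth_deg, sr=SR):
    """Pan a mono signal to an azimuth using simple ITD/ILD cues.

    Woodworth-style ITD plus a crude level difference; an illustrative
    sketch, not a full HRTF renderer.
    """
    az = np.deg2rad(azimuth_deg)
    # Interaural time difference (Woodworth's spherical-head formula)
    itd = HEAD_RADIUS / C * (az + np.sin(az))
    delay = int(round(abs(itd) * sr))
    # Interaural level difference: attenuate the far ear by up to ~6 dB (assumed)
    ild = 10 ** (-abs(np.sin(az)) * 6 / 20)
    near = mono
    # Far ear receives a delayed, attenuated copy
    far = np.concatenate([np.zeros(delay), mono * ild])[: len(mono)]
    # Positive azimuth = source to the right, so the right ear is the near ear
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)
```

Writing the returned array to a stereo file and listening over headphones already gives a rough sense of lateral position; distance and elevation cues, which we study specifically, require richer spectral and reverberation modeling.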
Audio hardware can now produce and capture signals beyond the limits of human perception. This opens opportunities for new interfaces explored in our lab, such as near-ultrasound communication and bass enhancement using vibration motors.
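To illustrate the near-ultrasound idea, here is a minimal sketch of one possible scheme, not necessarily the one used in our lab: binary FSK just below the audible limit, where an ordinary speaker transmits bits that most adults cannot hear and a microphone decodes them from per-symbol spectra. The carrier frequencies and symbol duration are illustrative assumptions.

```python
import numpy as np

SR = 48000             # sample rate (Hz); must exceed twice the carrier frequency
F0, F1 = 18500, 19500  # assumed near-ultrasound carriers for bits 0 and 1
SYMBOL = 0.05          # assumed symbol duration (s)

def encode(bits, sr=SR):
    """Encode a bit sequence as a stream of near-ultrasound FSK tones."""
    t = np.arange(int(SYMBOL * sr)) / sr
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    )

def decode(signal, sr=SR):
    """Decode by comparing spectral energy at the two carriers per symbol."""
    n = int(SYMBOL * sr)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        spec = np.abs(np.fft.rfft(signal[i:i + n]))
        e0 = spec[np.argmin(np.abs(freqs - F0))]
        e1 = spec[np.argmin(np.abs(freqs - F1))]
        bits.append(1 if e1 > e0 else 0)
    return bits
```

A practical system would add synchronization, error correction, and robustness to room acoustics, but this loopback already shows why the channel is attractive: it needs no hardware beyond the speakers and microphones devices already have.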
In collaborative research, we study the effects of noise on speech, multilingualism, and articulation and phonation phenomena. Speech technologies are the ultimate interaction method for human-machine communication, so understanding how speech is produced and perceived in different conditions is of paramount importance for them.
We use sound every day to communicate with others, yet our understanding of it remains so limited that many new technologies are still waiting to be discovered. Finding them is a difficult task that requires a common effort.