ヴィジェガス オロズゴ ジュリアン アルベルト

Julian Alberto Villegas Orozco

Senior Associate Professor

Affiliation
Department of Computer Science and Engineering/Division of Information Systems
Title
Senior Associate Professor
E-Mail
julian@u-aizu.ac.jp
Web site
https://onkyo.u-aizu.ac.jp/

Education

Courses - Undergraduate
LI10 Introduction to Multimedia Systems
IT09 Sound and Audio Processing
FU14 Intro. to Software Engineering (exercise class)
FU15 Introduction to Data Management (exercise class)
Courses - Graduate
Spatial Hearing and Virtual 3D Sound
Introduction to Sound and Audio
Digital Audio Effects
Multimedia Machinima

Research

Specialization
Linguistics
Software
Perceptual information processing
Human interface and interaction
Entertainment and game informatics
I am interested in spatial sound, audio signal processing, phonetics, psychoacoustics, and aural/oral human-computer interaction.
Educational Background, Biography
2021 – Senior Associate Professor, University of Aizu.
2013 – Associate Professor, University of Aizu.
2010 – Researcher, Ikerbasque – University of the Basque Country.
2010 – Ph.D. in Computer Science and Engineering, University of Aizu.
Current Research Theme
PSYPHON: Psychoacoustic features for Phonation prediction
Key Topic
Aural/oral human-computer interaction, real-time programming, visual programming
Affiliated Academic Society
• Audio Engineering Society

• Acoustical Society of Japan

• Acoustical Society of America

• IEEE

Others

Hobbies
Running, playing music, etc.
School days' Dream
Building spaceships
Current Dream
Making this world a better place to live.
Motto
Nothing can stop you if you really want to do something.
Favorite Books
A Life on Our Planet: My Witness Statement and A Vision for the Future by David Attenborough

Guns, Germs, and Steel: The Fates of Human Societies by Jared Diamond

The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science by Norman Doidge

The Anthropocene Reviewed by John Green
Messages for Students
We always look forward to collaborative research; email me if you are interested. We are particularly interested in Master's and doctoral students.

Main research

Sound and Audio Technologies

I have spent over two decades as an information scientist, dedicating the last ten years to sound and audio research at the University of Aizu. Throughout my career, I have published over 150 articles in top journals and conferences, and I hold three patents related to sound and audio. I have also supervised more than 30 undergraduate students and a dozen Master's students, and I am currently guiding two Ph.D. students.

We call our lab "Onkyo." In our lab, we explore sound as a powerful medium for transmitting information between humans and machines. Our research relies heavily on machine learning methods and focuses on three key areas:

Spatial Sound: In an age of information overload, where vision is saturated with data from daily gadgets, we seek to utilize spatial (3D) sound through loudspeakers or headphones to convey vital information. Our interests lie in spatial data compression, auditory display personalization, and the development of multi-sensory interfaces. By enriching human experiences with audio interactions, we strive to push the boundaries of human-machine communication.
Applied Psychoacoustics: The discipline of psychoacoustics studies how sound in the physical world is perceived and processed in our minds. Understanding these perceptual processes enables us to identify where hardware capabilities exceed the limits of the brain's processing. This opens doors to innovative interfaces, such as near-ultrasound communication and advancements in speech communication technologies.
Applied Phonetics: Phonetics, the study and classification of speech sounds, plays a crucial role in human-machine communication. Through collaborative research, we investigate the effects of noise on speech, multilingualism, articulation, and phonation phenomena (the production of speech sounds). By comprehending speech production and perception in diverse settings, we seek to improve speech technologies for seamless human-machine interactions.
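As a small illustration of the kind of computation spatial hearing research builds on, the sketch below estimates the interaural time difference (ITD), one of the main cues the brain uses to localize sound, using the classical Woodworth spherical-head approximation. The head radius and azimuth values are generic textbook figures, not parameters from our work.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 °C


def woodworth_itd(azimuth_deg: float, head_radius: float = 0.0875) -> float:
    """Interaural time difference (seconds) for a far-field source.

    Uses the Woodworth spherical-head approximation:
        ITD = (r / c) * (sin(theta) + theta)
    where theta is the azimuth in radians (0 = straight ahead,
    pi/2 = directly to one side) and r is the head radius in meters.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius / SPEED_OF_SOUND) * (math.sin(theta) + theta)


if __name__ == "__main__":
    # ITD grows from 0 µs straight ahead to roughly 650 µs at the side.
    for az in (0, 30, 60, 90):
        print(f"azimuth {az:3d} deg: ITD = {woodworth_itd(az) * 1e6:6.1f} us")
```

Cues like this ITD, together with interaural level differences and spectral filtering by the outer ear, are what head-related transfer functions encode when rendering 3D sound over headphones.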

As we delve into these research areas, our mission is to contribute meaningfully to the field of human-machine interactions. By harnessing the potential of sound, we aspire to create a future where technology and human experiences seamlessly converge.

We always look forward to collaborative research; contact us (julian at u-aizu dot ac dot jp) if you are interested. We are particularly interested in Master's and doctoral students.


Dissertation and Published Works

For a complete list, please check https://onkyo.u-aizu.ac.jp/#/Publications

J. Villegas, K. Akita, and S. Kawahara, “Psychoacoustic features explain subjective size and shape ratings of pseudo-words,” in Proc. of Forum Acusticum, the 10th Conv. of the European Acoust. Assoc., (Turin, Italy), Sep. 2023.

E. Ly and J. Villegas, “Cartesian genetic programming parameterization in the context of audio synthesis,” IEEE Signal Process. Letters, vol. 30, pp. 1077–1081, Aug. 2023. DOI: 10.1109/LSP.2023.3304198.

C. Arevalo and J. Villegas, “Study of auditory trajectories in virtual environments,” in Proc. of Audio Mostly, Aug. 2023. DOI: 10.1145/3616195.3616210.

J. Villegas, S. J. Lee, J. Perkins, and K. Markov, “Psychoacoustic features explain creakiness classifications made by naive and non-naive listeners,” Speech Comm., vol. 147, pp. 74–81, Jan. 2023. DOI: 10.1016/j.specom.2023.01.006.