Basic Information

Affiliation
Computer Arts Laboratory
Position
Senior Associate Professor
E-Mail
julian@u-aizu.ac.jp
Website
https://onkyo.u-aizu.ac.jp/

Education

Courses - Undergraduate
LI10 Introduction to Multimedia Systems
IT09 Sound and Audio Processing
FU14 Intro. to Software Engineering (exercise class)
FU15 Introduction to Data Management (exercise class)
Courses - Graduate
Spatial Hearing and Virtual 3D Sound
Introduction to Sound and Audio
Digital Audio Effects
Multimedia Machinima

Research

Research Fields
I am interested in spatial sound, audio signal processing, phonetics, psychoacoustics, and aural/oral human-computer interaction.
Brief Biography
2021 – Senior Associate Professor, University of Aizu.
2013 – Associate Professor, University of Aizu.
2010 – Researcher, Ikerbasque - University of the Basque Country.
2010 – Ph.D. in Computer Science and Engineering, University of Aizu.
Current Research Projects
PSYPHON: Psychoacoustic features for Phonation prediction
Research Keywords
Aural/oral human-computer interaction, real-time programming, visual programming
Academic Societies
• Audio Engineering Society
• Acoustical Society of Japan
• Acoustical Society of America
• IEEE

Personal Data

Hobbies
Running, snowboarding, playing music, etc.
Childhood Dream
Building a spaceship
Future Goals
Building a better world
Motto
Where there's a will, there's a way
Favorite Books
• “Catch-22” by Joseph Heller;
• “The Man Who Mistook His Wife for a Hat: And Other Clinical Tales” by Oliver Sacks;
• “The Hitchhiker's Guide to the Galaxy” by Douglas Adams
Message to Students
Distrust authority. Your strategy in life should be to listen carefully to everybody, test things for yourself, and then make up your own mind.
Other
Encuentros entre Colombia y Japón: homenaje a 100 años de amistad, chapter "De como el mundo es un pañuelo y de las misteriosas maneras" (Of how the world is a handkerchief and other mysterious ways). Colombian Ministry of Foreign Affairs, Bogotá D.C., Colombia, 2010. (Fiction) In Spanish.

Main Research

Sound and Audio Technologies

We are interested in sound as a vehicle to transmit information between humans and machines. In our research we focus mainly on spatial sound, applied psychoacoustics, and applied phonetics.
• Spatial sound
Vision is saturated with information coming from gadgets we use on a daily basis; we want to find ways to convey part of that information via spatial (3D) sound using loudspeakers or headphones. We are particularly interested in synthesizing auditory distance and elevation in virtual environments and multi-sensory interfaces.
• Applied psychoacoustics
Hardware can sometimes exceed the processing capabilities of the brain. This creates opportunities for new interfaces explored in our lab, such as near-ultrasound communication, bass enhancement using vibration motors, etc.
• Applied phonetics
In collaborative research, we are studying the effects of noise on speech, multilingualism, and articulation and phonation phenomena. Speech technologies are the ultimate interaction method for human-machine communication, and understanding how speech is produced and perceived in different settings is of paramount importance for such technologies.
We use sound regularly to communicate with others, yet our understanding of it is so limited that many opportunities for new technologies may still be waiting to be discovered. This is a difficult task that requires joint effort.
We always welcome research collaborations; contact us (convert this into a valid email address: julian at u-aizu period ac dot jp) if you are interested. We are particularly interested in prospective Master's and Doctoral students.
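As a rough illustration of how spatial (3D) sound can be synthesized over headphones, the sketch below applies interaural time and level differences (ITD and ILD), two of the main cues for horizontal localization. This is a minimal, illustrative example, not the method used in our lab: the head radius, the Woodworth-style ITD approximation, and the constant-power panning stand-in for ILD are common textbook simplifications.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at ~20 °C
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius
SAMPLE_RATE = 44100      # Hz

def itd_seconds(azimuth_deg):
    """Woodworth's ITD approximation for a spherical head.

    azimuth_deg: source angle in degrees; 0 = front, 90 = directly right.
    """
    az = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))

def pan_gains(azimuth_deg):
    """Constant-power stereo gains, a crude stand-in for ILD."""
    az = math.radians(azimuth_deg)           # map [-90, 90] degrees ...
    theta = (az + math.pi / 2) / 2           # ... onto [0, pi/2]
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

def spatialize(mono, azimuth_deg):
    """Return (left, right) sample lists with ITD delay and ILD gains."""
    delay = int(round(itd_seconds(abs(azimuth_deg)) * SAMPLE_RATE))
    gl, gr = pan_gains(azimuth_deg)
    left = [gl * s for s in mono]
    right = [gr * s for s in mono]
    pad = [0.0] * delay
    if azimuth_deg > 0:    # source on the right: sound reaches the left ear later
        left, right = pad + left, right + pad
    elif azimuth_deg < 0:  # source on the left: delay the right ear instead
        left, right = left + pad, pad + right
    return left, right
```

Real binaural rendering would instead convolve the signal with head-related transfer functions (HRTFs), which also encode the spectral cues needed for elevation and distance.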


Selected Publications

Please check https://onkyo.u-aizu.ac.jp/#/Publications.
[1] J. Villegas, J. Perkins, and I. Wilson, “Effects of task and language nativeness on the Lombard effect and on its onset and offset timing,” J. Acoust. Soc. Am., vol. 149, pp. 1855–1865, Mar. 2021. DOI: 10.1121/10.0003772.
[2] E. Ly and J. Villegas, “Generating artificial reverberation via genetic algorithms for real-time applications,” Entropy, vol. 22, p. 1309, Nov. 2020. DOI: 10.3390/e22111309.
[3] J. Villegas, K. Markov, J. Perkins, and S. J. Lee, “Prediction of creaky speech by recurrent neural networks using psychoacoustic roughness,” IEEE J. of Selected Topics in Signal Processing, vol. 14, pp. 355–366, Feb. 2020. DOI: 10.1109/JSTSP.2019.2949422.
[4] J. Villegas, “Movement perception of Risset tones presented diotically,” Acoustical Science and Technology, vol. 41, Jan. 2020. DOI: 10.1250/ast.41.430.
[5] I. de la Cruz Pavía, G. E. Alcibar, J. Villegas, J. Gervain, and I. Laka, “Segmental information drives adult bilingual phrase segmentation preference,” Int. J. of Bilingual Education and Bilingualism, Jan. 2020. DOI: 10.1080/13670050.2020.1713045.