Basic Information

Affiliation
Computer Arts Laboratory
Title
Senior Associate Professor
E-Mail
julian@u-aizu.ac.jp
Web site
https://onkyo.u-aizu.ac.jp/

Education

Courses - Undergraduate
LI10 Introduction to Multimedia Systems
IT09 Sound and Audio Processing
FU14 Intro. to Software Engineering (exercise class)
FU15 Introduction to Data Management (exercise class)
Courses - Graduate
Spatial Hearing and Virtual 3D Sound
Introduction to Sound and Audio
Digital Audio Effects
Multimedia Machinima

Research

Specialization
I am interested in spatial sound, audio signal processing, phonetics, psychoacoustics, and aural/oral human-computer interaction.
Educational Background, Biography
2021 - Senior Associate Professor, University of Aizu.
2013 - Associate Professor, University of Aizu.
2010 - Researcher, Ikerbasque - University of the Basque Country.
2010 - Ph.D. in Computer Science and Engineering, University of Aizu.
Current Research Theme
PSYPHON: Psychoacoustic features for Phonation prediction
Key Topic
Aural/oral human-computer interaction, real-time programming, visual programming
Affiliated Academic Society
• Audio Engineering Society
• Acoustical Society of Japan
• Acoustical Society of America
• IEEE

Others

Hobbies
Running, playing music, etc.
School days' Dream
Building spaceships
Current Dream
To make this world a better place to live.
Motto
Nothing can stop you if you really want to do something.
Favorite Books
• "Catch-22" by Joseph Heller
• "The Man Who Mistook His Wife for a Hat: And Other Clinical Tales" by Oliver Sacks
• "The Hitchhiker's Guide to the Galaxy" by Douglas Adams
Messages for Students
We always look forward to collaborative research; email me if you are interested. We are particularly interested in prospective Master's and Doctoral students.
Publications outside one's area of specialization
Encuentros entre Colombia y Japón: homenaje a 100 años de amistad (Encounters between Colombia and Japan: A Tribute to 100 Years of Friendship), chapter "De como el mundo es un pañuelo y de las misteriosas maneras" (Of how the world is a handkerchief and of the mysterious ways). Colombian Ministry of Foreign Affairs, Bogotá D.C., Colombia, 2010. (Fiction, in Spanish.)

Main research

Sound and Audio Technologies

We are interested in sound as a vehicle to transmit information between humans and machines. In our research we focus mainly on spatial sound, applied psychoacoustics, and applied phonetics.

Spatial sound
The visual channel is saturated with information from the gadgets we use daily; we want to find ways to convey part of that information through spatial (3D) sound delivered over loudspeakers or headphones. We are particularly interested in synthesizing auditory distance and elevation in virtual environments and multi-sensory interfaces.
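To give a concrete flavor of this kind of processing, here is a minimal binaural-synthesis sketch in Python: a mono signal is filtered with a left/right pair of head-related impulse responses (HRIRs) so that it appears to come from the direction where the HRIRs were measured. The HRIRs below are simplistic placeholders and the sampling rate is an assumption; a real system would load a measured HRIR set.

    # Minimal binaural-synthesis sketch. The HRIRs here only mimic an
    # interaural time and level difference; a real system would load
    # measured responses (e.g., from a SOFA file).
    import numpy as np
    from scipy.signal import fftconvolve

    fs = 48_000                                 # sampling rate (Hz), assumed
    t = np.arange(fs) / fs
    source = 0.5 * np.sin(2 * np.pi * 440 * t)  # 1-s mono test tone

    hrir_l = np.zeros(256); hrir_l[30] = 0.6    # later and quieter: far ear
    hrir_r = np.zeros(256); hrir_r[0] = 1.0     # earlier and louder: near ear

    left = fftconvolve(source, hrir_l)          # per-ear filtering
    right = fftconvolve(source, hrir_r)
    binaural = np.stack([left, right], axis=1)  # two channels for headphones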
Applied psychoacoustics
Audio hardware can now generate and process signals beyond the limits of human perception. This gap creates opportunities for new interfaces explored in our lab, such as near-ultrasound communication and bass enhancement using vibration motors.
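As an illustration of the idea behind near-ultrasound communication, the sketch below encodes bits as short tone bursts near the upper edge of human hearing (frequency-shift keying). The frequencies, bit rate, and detection scheme are illustrative assumptions, not the protocol used in our lab.

    # Toy near-ultrasound FSK modulator/demodulator: each bit becomes a
    # short tone burst that ordinary speakers can reproduce while staying
    # barely audible. Frequencies and rate are illustrative only.
    import numpy as np

    fs = 48_000              # sampling rate (Hz)
    f0, f1 = 18_000, 19_000  # tone frequencies for bit 0 / bit 1 (Hz)
    bit_dur = 0.05           # 50 ms per bit -> 20 bit/s

    def modulate(bits):
        t = np.arange(int(fs * bit_dur)) / fs
        return np.concatenate(
            [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

    def demodulate(signal):
        n = int(fs * bit_dur)
        t = np.arange(n) / fs
        bits = []
        for i in range(0, len(signal) - n + 1, n):
            frame = signal[i:i + n]
            # Compare energy at the two candidate frequencies by
            # correlating with reference complex tones.
            e0 = abs(np.dot(frame, np.exp(-2j * np.pi * f0 * t)))
            e1 = abs(np.dot(frame, np.exp(-2j * np.pi * f1 * t)))
            bits.append(int(e1 > e0))
        return bits

    assert demodulate(modulate([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]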
Applied phonetics
In collaborative research, we study the effects of noise on speech, multilingualism, and articulation and phonation phenomena. Speech technologies are the ultimate method of human-machine interaction; understanding how speech is produced and perceived under different conditions is of paramount importance for such technologies.
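As one small, illustrative step in this line of work, the sketch below computes a short-time level contour of a signal; in studies of speech in noise, a rise in such a contour after noise onset is one simple correlate of increased vocal effort (the Lombard effect). The frame sizes and the synthetic signal are placeholders, not our actual analysis pipeline.

    # Short-time RMS level contour, the kind of measure one could
    # threshold to locate vocal-effort changes. Parameters are assumed.
    import numpy as np

    fs = 16_000
    frame, hop = int(0.025 * fs), int(0.010 * fs)  # 25 ms frames, 10 ms hop

    def short_time_level_db(x):
        levels = []
        for i in range(0, len(x) - frame + 1, hop):
            rms = np.sqrt(np.mean(x[i:i + frame] ** 2))
            levels.append(20 * np.log10(rms + 1e-12))  # dB re full scale
        return np.array(levels)

    # Synthetic example: amplitude doubles halfway through, as if the
    # talker raised their voice when noise began (a ~6 dB level jump).
    t = np.arange(fs) / fs
    speech = np.sin(2 * np.pi * 150 * t) * np.where(t < 0.5, 0.2, 0.4)
    contour = short_time_level_db(speech)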
We use sound constantly to communicate with others, yet our understanding of it is still so limited that many new technologies remain to be discovered. Uncovering them is a difficult task that requires a common effort.


Dissertation and Published Works

For a complete list, please check https://onkyo.u-aizu.ac.jp/#/Publications

[1] J. Villegas, N. Fukasawa, and C. Arevalo, “The presence of a floor improves subjective elevation accuracy of binaural stimuli created with non-individualized head-related impulse responses,” J. Audio Eng. Soc., vol. 69, pp. 849–859, Nov. 2021. DOI 10.17743/jaes.2021.0045.

[2] J. Villegas, J. Perkins, and I. Wilson, “Effects of task and language nativeness on the Lombard effect and on its onset and offset timing,” J. Acoust. Soc. Am., vol. 149, pp. 1855–1865, Mar. 2021. DOI 10.1121/10.0003772.

[3] E. Ly and J. Villegas, “Generating artificial reverberation via genetic algorithms for real-time applications,” Entropy, vol. 22, p. 1309, Nov. 2020. DOI 10.3390/e22111309.

[4] J. Villegas, K. Markov, J. Perkins, and S. J. Lee, “Prediction of creaky speech by recurrent neural networks using psychoacoustic roughness,” IEEE J. Selected Topics in Signal Processing, vol. 14, pp. 355–366, Feb. 2020. DOI 10.1109/JSTSP.2019.2949422.

[5] I. de la Cruz Pavía, G. E. Alcibar, J. Villegas, J. Gervain, and I. Laka, “Segmental information drives adult bilingual phrase segmentation preference,” Int. J. Bilingual Education and Bilingualism, Jan. 2020. DOI 10.1080/13670050.2020.1713045.