University of Aizu Computer Arts Laboratory and Spatial Media Group
Most of the courses taken by engineers and computer science students emphasize scientific discipline and the accumulation of "truth."
The Computer Arts Lab.'s activities include such technically objective factors,
but the lab also encourages original expression, subjectively motivated by æsthetics rather than "correctness,"
sometimes "putting the art before the course!"
Unlike many other labs' activities that try to converge on a "right answer" sharable by everyone else,
artistic disciplines encourage originality, in which the best answer is one that is like no one else's!
The Computer Arts Lab., through its resident Spatial Media Group,
is researching projects including practical and creative applications of
virtual reality and mixed (augmented, enhanced, hybrid, mediated) reality and virtuality;
panoramic interfaces and spatially-immersive displays;
stereotelephonics and spatial sound;
wearable and mobile applications, computing, and interfaces;
with related interests in
CVE (collaborative virtual environments), groupware and CSCW (computer-supported collaborative work);
digital typography and electronic publishing;
telecommunication semiotics (models of teleconferencing selection functions);
way-finding and navigation;
ubicomp (ubiquitous computing), calm (ambient), and pervasive technology.
Many of these topics are explored in a course for advanced undergraduates,
"Human Interface and Virtual Reality,"
and in a parallel course for graduate students.
For research, we are particularly interested in narrowcasting commands,
conference selection functions for adjusting groupware situations in which
users have multiple presence,
virtually existing in more than one space simultaneously.
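The narrowcasting idea can be sketched as a simple inclusion predicate. This is a hypothetical illustration under stated assumptions, not the group's actual formalism: a source is rendered unless it is explicitly excluded (muted), or some peer is explicitly included (selected, i.e. "soloed") while it is not.

```python
from dataclasses import dataclass

@dataclass
class Source:
    """A media source in a shared space (names here are illustrative)."""
    name: str
    muted: bool = False      # explicit exclusion
    selected: bool = False   # explicit inclusion ("solo")

def audible(source, peers):
    """Return True if `source` should be rendered, given its peer set."""
    if source.muted:
        return False
    # If anything in scope is explicitly selected, only selected sources play.
    if any(p.selected for p in peers) and not source.selected:
        return False
    return True

# Example: soloing one source implicitly excludes the others.
alice = Source("alice", selected=True)
bob = Source("bob")
group = [alice, bob]
print([s.name for s in group if audible(s, group)])  # → ['alice']
```

The same predicate applies symmetrically to sinks, which is what makes the model useful when a user has multiple presence across spaces.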
We explore realtime interactive multimedia interfaces (auditory, visual, haptic, and multimodal):
We are exploring interfaces for multichannel sound,
including stereo, quadraphonic, and nearphones (mounted on our rotary motion platform),
as well as two separate speaker array systems in the University-Business Innovation Center 3D Theater.
We are also exploring a Unity-based sound spatialization engine built upon the Google Reverb component,
especially in conjunction with mobile control and parameterized directionality.
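As a toy illustration of how a spatializer can parameterize direction (a minimal sketch, not the engine's actual algorithm), equal-power amplitude panning maps a source azimuth to a pair of channel gains:

```python
import math

def pan_gains(azimuth_deg):
    """Equal-power stereo gains for a source azimuth in [-90°, +90°]
    (negative = left). A toy stand-in for a full spatializer."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # map to [0°, 90°]
    return math.cos(theta), math.sin(theta)           # (left, right)

left, right = pan_gains(0.0)   # centered source
print(round(left, 3), round(right, 3))  # → 0.707 0.707
```

"Equal-power" means left² + right² = 1 at every azimuth, so perceived loudness stays roughly constant as a source sweeps across the stage.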
We heavily use iPad tablet courseware in our "Intro. to Sound and Audio" course for graduate students,
which is a prerequisite for "Spatial Hearing and Virtual 3D Sound,"
taught jointly with Julián Villegas (ジュリアン ヴィジェガス) and Prof. Jie Huang of the Human Interface Lab.
We host a Computer Music Studio with assorted amplifiers, racks, mixers, and effects processors.
We promote creative applications of scientific visualization,
encouraging the use of Mathematica
(3D images with depth layers cued by color).
We enjoy exploiting the unique large-format immersive stereographic display in the UBIC 3D Theater.
Some group members are using visual input techniques through a web cam to sense user mood,
for kansei interpretation,
or for control of narrowcasting chatspace interfaces.
We are also exploring creative applications of panoramic imaging and object movies.
We are also exploring the use of haptic interfaces,
including force-display joysticks
and a rotary motion platform (the "Schaire" Internet Chair, named for "shared chair").
We conduct annual Creative Factory Seminars.
Past CFSs explored advanced audio interfaces, panoramic imaging, and haptic modeling.
Every year, in conjunction with the Knowledge Engineering Lab.,
we conduct a workshop on Haptic Modeling and 3D Printing,
using force-feedback CAD workstations
to make models that are then rapidly prototyped (as stereolithograms)
with our personal fabricator,
closing the "idea (stored in brain neurons) → information (stored as bits) → matter (atoms)" pathway.
Using such multimodal interfaces,
our students have crafted driving simulators,
location-based games featuring the rotary motion platform,
synæsthetic (cross-sensory modality) visual and haptic music players
(rendering songs as light shows or dancing chairs).
Using the aforementioned visual sensing technique,
narrowcasting postures can be recognized,
and used to control distributed chatspaces or virtual concerts.
A recent student project deployed a microphone vector (linear array) to track a moving sound source,
using its network interface to trigger internet appliances
(like lights that follow the source).
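One way such tracking can work, sketched here under far-field assumptions (the spacing and function names are hypothetical, not the project's actual code), is to convert the time difference of arrival between a microphone pair into a source bearing:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def azimuth_from_tdoa(delay_s, mic_spacing_m=0.2):
    """Estimate a source bearing from the time difference of arrival
    between two microphones (far-field approximation).
    Returns degrees: 0° broadside, ±90° along the mic axis."""
    x = (delay_s * SPEED_OF_SOUND) / mic_spacing_m
    x = max(-1.0, min(1.0, x))  # clamp numerical noise
    return math.degrees(math.asin(x))

# A source dead ahead arrives at both mics simultaneously:
print(azimuth_from_tdoa(0.0))  # → 0.0
```

Chaining pairwise estimates along a larger array sharpens the bearing, which can then drive a networked appliance such as a tracking light.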
We are also developing a driving simulator
using collision-detection modulation of the force-feedback steering wheel and the rotary motion platform.
The most recent version of the project features a dual-steering (front and back) fire truck,
racing through a 3D model of our campus to reach a fire,
piloted by two drivers, and featuring spatial sound effects.
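Collision-detection modulation of a force-feedback device is commonly realized with a penalty (spring) model; the stiffness and force limit below are illustrative assumptions, not the simulator's actual tuning:

```python
def feedback_force(penetration_m, stiffness=800.0, max_force=5.0):
    """Spring-model reaction force (N) for a given penetration depth
    into an obstacle, clipped to the device's maximum output."""
    if penetration_m <= 0.0:
        return 0.0  # no contact, no force
    return min(stiffness * penetration_m, max_force)

print(round(feedback_force(0.002), 3))  # shallow contact → 1.6
print(round(feedback_force(0.05), 3))   # deep contact, clipped → 5.0
```

Each frame, the collision depth reported by the physics engine feeds this function, and the resulting force is sent to the steering wheel (and, scaled, to the motion platform).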
We are also exploring mobile (nomadic, portable) computing,
working in conjunction with university spin-off companies.
We host an annual symposium,
the International Symposium on Spatial Media,
inviting experts to share their knowledge and passion regarding such themes as
“Spatial Sound and Spatial Telepresence” ('00–'01),
“Magic in Math and Music” ('01–'02),
“Advanced Multimedia and Virtual Reality” ('02–'03),
“Spatial Sound” ('03–'04),
“Hearing and Sound Installations” ('04–'05),
“Sound, Audio, and Music” ('05–'06),
“Interactive Media, Security, and Stereography” ('06–'07),
“Internet Media: Image Processing, Music, Grid Supercomputing, and Virtual Reality” ('07–'08),
“Computation and Music” ('08–'09),
“Systems and Applications: User Interfaces” ('09–'10),
“Distributed, Mobile, and Ubiquitous Interfaces” ('10–'11),
“Social Multimedia” ('11–'12),
“Visual Interfaces for Multimedia Systems” ('12–'13),
“Computer Enhancement of User Experience” ('13–'14),
“Audio and Music” ('14–'15),
“Acoustics and Sound” and “Typography” ('15–'16),
“Germany-Japan Collaboration” ('16–'17), and
“Computer Mediation” ('17–'18).
Besides the Dual Boot flying disk club,
our lab sponsors several student social and performance circles:
the Furiten Mah Jong Club,
the Yosakoi Dance Circle,
the Disco Mix Club,
and, with Co-Advisor Nishimura Satoshi (西村 憲), the M-Project Digital Arts Circle.
“Publish or perish”
and “demo or die” are operative maxims in our group.
Our members are encouraged to present their research
at international conferences and in international journals.
Time is the core of multimedia,
and we stress realtime interactive experiential demonstrations.
Through research & development,
deployment & integration,
and haptic applications,
we nurture scientific and artistic interest in advanced computer-human and human-human communication.
Our ultimate domain is the exploration of interfaces and artifacts that are literally sensational.
Edited by Michael Cohen, 5/2016