
Computer Industry Laboratory


Makoto Ikeda, Professor
Lothar M. Schmitt, Associate Professor
Jens Herder, Research Associate

Being highly application-oriented, the Computer Industry Laboratory seeks to enhance production and engineering processes in industry. This requires a deep understanding of how such work is actually carried out.

Research at the university and in industry must be integrated to achieve advances for humanity. Standards are required to coordinate industrial development, to provide a basis for further products, and to protect financial investments. The Computer Industry Laboratory aims to influence the standardization process in new areas and to focus on future needs rather than on re-establishing existing systems.

Currently, Mr. Herder participates in the Intelligent Dental Care System Project, where he manages the design of the user interface. Professor Hiramoto's research is in reliability theory, especially as applied to the safety standards of nuclear power plants. Professor Ikeda is pursuing three academic projects, concerning incubation process modeling, electronic commerce, and information navigation systems; he has written a report on the incubation mechanism of the Silicon Valley area, a book on the Netscape Commerce Server for electronic commerce, and a book about Java. Professor Schmitt participates in research on mathematical models of genetic algorithms used for chip placement problems, as well as in research on the modeling of semiconductor devices with computer algebra methods.

Following a top-down approach to education, students are involved in joint research projects with industry. They learn engineering by doing it, in a context they find highly motivating.

Current Research Topics:


Refereed Journal Papers

  1. Jens Herder, Tools and Widgets for Spatial Sound Authoring. Computer Networks & ISDN Systems, vol.30, No.20-21, pp.1933--1940, 1998.

    Broader use of virtual reality environments and sophisticated animations spawn a need for spatial sound. Until now, spatial sound design has been based very much on experience and trial and error. Most effects are hand-crafted, because good design tools for spatial sound do not exist. This paper discusses spatial sound authoring and its applications, including shared virtual reality environments based on VRML. New utilities introduced by this research are an inspector for sound sources, an interactive resource manager, and a visual soundscape manipulator. The tools are part of a sound spatialization framework and allow a designer/author of multimedia content to monitor and debug sound events. Resource constraints like limited sound spatialization channels can also be simulated.

  2. Jens Herder, Sound Spatialization Framework: An Audio Toolkit for Virtual Environments. Journal of the 3D-Forum Society, Japan, vol.12, No.9, pp.17--22, 1998.

    The Sound Spatialization Framework is a C++ toolkit and development environment for providing advanced sound spatialization for virtual reality and multimedia applications. The Sound Spatialization Framework provides many powerful display and user-interface features not found in other sound spatialization software packages. It provides facilities that go beyond simple sound source spatialization: visualization and editing of the soundscape, multiple sinks, clustering of sound sources, monitoring and controlling resource management, support for various spatialization backends, and classes for MIDI animation and handling. (A hypothetical interface sketch illustrating these facilities appears after this list.)

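A short C++ sketch may help to picture the facilities listed for the Sound Spatialization Framework above (C++ being the language of the toolkit). All class and member names below, such as SoundSource, SoundSink, Soundscape, and setChannelBudget, are hypothetical and chosen only for illustration; they do not reproduce the actual Sound Spatialization Framework API.

    // Hypothetical sketch of a sound-spatialization toolkit interface.
    // All identifiers are invented for illustration and do not reproduce
    // the actual Sound Spatialization Framework API.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // A positioned, directional sound emitter in the virtual scene.
    class SoundSource {
    public:
        explicit SoundSource(const std::string& sampleFile) : sample(sampleFile) {}
        void setPosition(const Vec3& p) { position = p; }
        void setDirectivity(float coneAngleDeg) { coneAngle = coneAngleDeg; }
    private:
        std::string sample;
        Vec3 position{0.0f, 0.0f, 0.0f};
        float coneAngle = 360.0f;          // omnidirectional by default
    };

    // A listener ("sink"); several sinks may share one soundscape.
    class SoundSink {
    public:
        void setPosition(const Vec3& p) { position = p; }
    private:
        Vec3 position{0.0f, 0.0f, 0.0f};
    };

    // The soundscape owns sources and sinks and maps them onto a rendering
    // backend that offers only a limited number of spatialization channels.
    class Soundscape {
    public:
        void add(SoundSource* source) { sources.push_back(source); }
        void add(SoundSink* sink) { sinks.push_back(sink); }
        void setChannelBudget(std::size_t channels) { budget = channels; }

        // One simulation step: when more sources are audible than the
        // backend has channels, nearby sources would be clustered here
        // before being handed to the backend.
        void update(float /*dt*/) { /* clustering, resource management */ }

    private:
        std::vector<SoundSource*> sources;
        std::vector<SoundSink*> sinks;
        std::size_t budget = 16;
    };

A monitoring tool such as the sound-source inspector mentioned in the first journal paper would observe these objects at run time, and simulating a reduced channel budget corresponds to lowering the value passed to setChannelBudget().
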
Refereed Proceeding Papers

  1. Michael Cohen and Jens Herder, Symbolic representations of exclude and include for audio sources and sinks: Figurative suggestions of mute/solo & cue and deafen/confide & harken. Virtual Environments '98, Stuttgart, June 1998, pp.95/1--4.

    Shared virtual environments require generalized control of user-dependent media streams. Traditional audio mixing idioms for enabling and disabling various sources employ mute and solo functions, which, along with cue, selectively disable or focus on respective channels. Exocentric interfaces which explicitly model not only spatial audio sources, but also location, orientation, directivity, and multiplicity of sinks, motivate the generalization of mute/solo & cue to exclude and include, manifested for sinks as deafen/confide & harken, a narrowing of stimuli by explicitly blocking out and/or concentrating on selected entities. This paper introduces figurative representations of these functions, virtual hands to be clasped over avatars' ears and mouths, with orientation suggesting the nature of the blocking. Applications include groupware for collaboration and teaching, teleconferencing and chat spaces, and authoring and manipulation of distributed virtual environments.

  2. Kimitaka Ishikawa and Minefumi Hirose and Jens Herder, A Sound Spatialization Server for a Speaker Array as an Integrated Part of a Virtual Environment. IEEE YUFORIC Germany 1998, Stuttgart, June 1998.

    Spatial sound plays an important role in virtual reality environments, allowing orientation in space, giving a feeling of space, focusing the user on events in the scene, and substituting missing feedback cues (e.g., force feedback). The sound spatialization framework of the University of Aizu, which supports a number of spatialization backends, has been extended to include a sound spatialization server for a multichannel loudspeaker array (Pioneer Sound Field Control System). The spatialization server is designed to allow easy integration into virtual environments. Modeling of distance cues, which are essential for full immersion, is discussed. Furthermore, the integration of this prototype into different applications allowed us to reveal the advantages and problems of spatial sound for virtual reality environments.

  3. Jens Herder, Sound Spatialization Framework: An Audio Toolkit for Virtual Environments. First Int. Conf. on Human and Computer, University of Aizu, Aizu-Wakamatsu, Japan, Sept. 1998.

    The Sound Spatialization Framework is a C++ toolkit and development environment for providing advanced sound spatialization for virtual reality and multimedia applications. The Sound Spatialization Framework provides many powerful display and user-interface features not found in other sound spatialization software packages. It provides facilities that go beyond simple sound source spatialization: visualization and editing of the soundscape, multiple sinks, clustering of sound sources, monitoring and controlling resource management, support for various spatialization backends, and classes for MIDI animation and handling.

  4. William L. Martens and Jens Herder, Perceptual criteria for eliminating reflectors and occluders from the rendering of environmental sound. Proc. Joint Meeting of the 137th Regular Meeting of the Acoustical Society of America (ASA) and the 2nd Convention of the European Acoustics Association (EAA): Forum Acusticum, Berlin, March 1999 (CD-ROM).

    Given limited computational resources available for the rendering of spatial sound imagery, we seek to determine effective means for choosing what components of the rendering will provide the most audible differences in the results. Rather than begin with an analytic approach that attempts to predict audible differences on the basis of objective parameters, we chose to begin with subjective tests of how audibly different the rendering result may be heard to be when that result includes two types of sound obstruction: reflectors and occluders. Single-channel recordings of 90 short speech sounds were made in an anechoic chamber in the presence and absence of these two types of obstructions, and as the angle of those obstructions varied over a 90 degree range. These recordings were reproduced over a single loudspeaker in that anechoic chamber, and listeners were asked to rate how confident they were that the recording of each of these 90 stimuli included an obstruction. These confidence ratings can be used as an integral component in the evaluation function used to determine which reflectors and occluders are most important for rendering.

  5. William L. Martens and Jens Herder and Yoshiki Shiba, A filtering model for efficient rendering of the spatial image of an occluded virtual sound source. Proc. Joint Meeting of the 137th Regular Meeting of the Acoustical Society of America (ASA) and the 2nd Convention of the European Acoustics Association (EAA): Forum Acusticum, Berlin, March 1999 (CD-ROM).

    Rendering realistic spatial sound imagery for complex virtual environments must take into account the effects of obstructions such as reflectors and occluders. It is relatively well understood how to calculate the acoustical consequence that would be observed at a given observation point when an acoustically opaque object occludes a sound source. But the interference patterns generated by occluders of various geometries and orientations relative to the virtual source and receiver are computationally intense if accurate results are required. In many applications, however, it is sufficient to create a spatial image that is recognizable by the human listener as the sound of an occluded source. In the interest of improving audio rendering efficiency, a simplified filtering model was developed and its audio output submitted to psychophysical evaluation. Two perceptually salient components of occluder acoustics were identified that could be directly related to the geometry and orientation of a simple occluder. Actual occluder impulse responses measured in an anechoic chamber resembled the responses of a model incorporating only a variable duration delay line and a low-pass filter with variable cutoff frequency. (A minimal sketch of this delay-plus-low-pass model appears after this list.)

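The simplified filtering model described in the last proceedings paper reduces occlusion to two perceptually salient parameters: a variable-duration delay and a low-pass filter with variable cutoff frequency. The minimal C++ sketch below shows that signal path only; the class name and parameter-setting interface are assumptions made for this report, and the mapping from occluder geometry to the two parameters is not taken from the paper.

    // Minimal sketch of an occlusion filter: a variable-length delay line
    // followed by a one-pole low-pass filter with variable cutoff.
    // The identifiers and the geometry-to-parameter mapping are assumptions
    // for illustration, not the published model.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    class OcclusionFilter {
    public:
        OcclusionFilter(float sampleRate, float maxDelaySeconds)
            : fs(sampleRate),
              buffer(static_cast<std::size_t>(sampleRate * maxDelaySeconds) + 1, 0.0f) {}

        // Longer diffraction paths around the occluder -> longer delay.
        void setDelaySeconds(float seconds) {
            delaySamples = std::min(buffer.size() - 1,
                                    static_cast<std::size_t>(seconds * fs));
        }

        // Larger or more oblique occluders -> lower cutoff frequency.
        void setCutoffHz(float cutoff) {
            float pole = std::exp(-2.0f * 3.14159265f * cutoff / fs);
            a = 1.0f - pole;   // input gain of the one-pole low-pass
            b = pole;          // feedback coefficient
        }

        // Process one input sample and return one output sample.
        float process(float in) {
            buffer[writeIndex] = in;
            std::size_t readIndex =
                (writeIndex + buffer.size() - delaySamples) % buffer.size();
            float delayed = buffer[readIndex];
            writeIndex = (writeIndex + 1) % buffer.size();
            lowpassState = a * delayed + b * lowpassState;
            return lowpassState;
        }

    private:
        float fs;
        std::vector<float> buffer;
        std::size_t writeIndex = 0;
        std::size_t delaySamples = 0;
        float a = 1.0f, b = 0.0f;   // pass-through until setCutoffHz() is called
        float lowpassState = 0.0f;
    };

In such a scheme, an occluded source would be rendered by running its dry signal through one filter per sink, with the two parameters updated as the relative geometry of source, occluder, and sink changes.
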
Books

  1. Michael Cohen and Jens Herder, Symbolic representations of exclude and include for audio sources and sinks. In M. Goebel, J. Landauer, U. Lang, and M. Wapler, editors, Virtual Environments '98 (Stuttgart, Germany), pages 235--242, Springer-Verlag/Wien, ISBN 3-211-83233-5, June 1998.

    Shared virtual environments require generalized control of user-dependent media streams. Traditional audio mixing idioms for enabling and disabling various sources employ mute and solo functions, which, along with cue, selectively disable or focus on respective channels. Exocentric interfaces which explicitly model not only spatial audio sources, but also location, orientation, directivity, and multiplicity of sinks, motivate the generalization of mute/solo & cue to exclude and include, manifested for sinks as deafen/confide & harken, a narrowing of stimuli by explicitly blocking out and/or concentrating on selected entities. This paper introduces figurative representations of these functions, virtual hands to be clasped over avatars' ears and mouths, with orientation suggesting the nature of the blocking. Applications include groupware for collaboration and teaching, teleconferencing and chat spaces, and authoring and manipulation of distributed virtual environments. (A sketch of this exclude/include generalization appears after this entry.)

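The generalization of the mixing-console idioms mute/solo & cue to exclude and include, applied uniformly to sources and sinks, can be summarized in a few lines of C++. The sketch below is only an illustration of that resolution logic under assumed names; it is not code from the publication.

    // Illustration of generalizing mute/solo & cue to exclude and include
    // for both audio sources and sinks. Type and function names are
    // assumptions made for this sketch, not identifiers from the paper.
    #include <vector>

    enum class Selection { Neutral, Exclude, Include };

    struct AudioEntity {
        Selection state = Selection::Neutral;
        bool active = false;   // recomputed by resolve()
    };

    // For sources, Exclude/Include play the roles of mute and solo & cue;
    // for sinks, those of deafen and confide & harken.
    void resolve(std::vector<AudioEntity>& entities) {
        bool anyIncluded = false;
        for (const AudioEntity& e : entities)
            if (e.state == Selection::Include) { anyIncluded = true; break; }

        for (AudioEntity& e : entities) {
            if (e.state == Selection::Exclude)
                e.active = false;                            // blocked out
            else if (anyIncluded)
                e.active = (e.state == Selection::Include);  // narrowed focus
            else
                e.active = true;                             // default: all open
        }
    }
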
Others

  1. Yoshiki Shiba, Sound Occluder. The University of Aizu, 1998, Thesis Advisor: Jens Herder.

  2. Manabu Kusakabe, Realtime control for first-order reflections based on an image source model. The University of Aizu, 1998, Thesis Advisor: Jens Herder.

  3. Kuniaki Honno, Monitoring Sound Objects in Virtual Reality Environments. The University of Aizu, 1998, Thesis Advisor: Jens Herder.

  4. Ikumi Suzuki, Elevation Identification Performance for Real Sound Sources vs. HRTF-Processed Virtual Sources. The University of Aizu, 1998, Thesis Advisor: Jens Herder.

  5. Makoto Yamaoka, A Multi-purpose Manipulator for Objects in Virtual Reality Environments Using a Dataglove. The University of Aizu, 1998, Thesis Advisor: Jens Herder.

  6. Takashi Mikuriya, Multi-purpose Manipulators for Objects in Virtual Reality Environments Using an Isometric 3D Input Device. The University of Aizu, 1998, Thesis Advisor: Jens Herder.

  7. Masaki Kobayashi, Control and Optimization of the Viewpoint of a Virtual Camera. The University of Aizu, 1998, Thesis Advisor: Jens Herder.




