/ Makoto Ikeda / Professor
/ Lothar M. Schmitt / Associate Professor
/ Jens Herder / Research Associate
Being highly application-oriented, the Computer Industry Laboratory seeks to enhance production and engineering processes in industry. This requires a deep understanding of how such work is actually carried out.
Research at the university and in industry has to be integrated to achieve advances for humanity. Standards are required to coordinate industrial development, to provide a basis for further products, and to protect financial investments. The Computer Industry Laboratory aims to influence the standardization process in new areas and to focus on future needs rather than on re-establishing existing systems.
Currently, Mr. Herder participates in the Intelligent Dental Care System Project, where he manages the design of the user interface. Professor Hiramoto's research concerns reliability theory, especially as applied to the safety standards of nuclear power plants. Professor Ikeda is pursuing three academic projects, related to incubation process modeling, electronic commerce, and information navigation systems. He wrote a report about the incubation mechanism of the Silicon Valley area, a book on the Netscape Commerce Server for electronic commerce, and also a book about Java. Professor Schmitt participates in research on mathematical models for genetic algorithms used for chip placement problems. Furthermore, he participates in research on the modeling of semiconductor devices with computer algebra methods.
With a top-down education approach, students are involved in joint research projects with industry. They learn engineering by doing it, in a context which they find highly motivating.
Current Research Topics:
Refereed Journal Papers
Cyberspatial audio applications are distinguished from the broad range of spatial audio applications in a number of important ways that help to focus this review. Most significant is that cyberspatial audio is most often designed to be responsive to user inputs. In contrast to non-interactive auditory displays, cyberspatial auditory displays typically allow active exploration of the virtual environment in which users find themselves. Thus, at least some portion of the audio presented in a cyberspatial environment must be selected, processed, or otherwise rendered with minimum delay relative to user input. Besides the technological demands associated with realtime delivery of spatialized sound, the type and quality of auditory experiences supported are also very different from those associated with displays that support stationary sound localization.
Level-of-detail is a well-known concept in computer graphics for reducing the number of rendered polygons: depending on the distance to the viewer, an object's representation is changed. A similar concept for sound spatialization is the clustering of sound sources. Clusters can be used to hierarchically organize mixels and to optimize the use of resources by grouping multiple sources together into a single representative source. Such a clustering process should minimize the error in the perceived positions of elements (angle and distance), as well as differences in velocity relative to the sink (i.e., Doppler shift). Objects with a similar direction of motion and speed (relative to the sink), lying in the same acoustic resolution cone and at a similar distance to the sink, can be grouped together.
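The grouping criteria described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not the algorithm from the paper: the function name, the greedy strategy, and the cone/distance thresholds are all assumptions chosen only to make the criteria concrete.

```python
import math

def cluster_sources(sources, sink, cone_deg=15.0, dist_ratio=1.5):
    """Greedy sketch: group sources that fall within the same angular
    resolution cone around the sink and within a similar distance band.
    `sources` is a list of (x, y) positions; thresholds are illustrative."""
    clusters = []
    for sx, sy in sources:
        dx, dy = sx - sink[0], sy - sink[1]
        angle = math.degrees(math.atan2(dy, dx))
        dist = math.hypot(dx, dy)
        placed = False
        for c in clusters:
            angle_diff = abs((angle - c["angle"] + 180.0) % 360.0 - 180.0)
            ratio = max(dist, c["dist"]) / max(min(dist, c["dist"]), 1e-9)
            if angle_diff <= cone_deg and ratio <= dist_ratio:
                c["members"].append((sx, sy))
                placed = True
                break
        if not placed:
            clusters.append({"angle": angle, "dist": dist,
                             "members": [(sx, sy)]})
    # Each cluster is rendered as one representative source at the centroid.
    reps = [tuple(sum(v) / len(c["members"]) for v in zip(*c["members"]))
            for c in clusters]
    return clusters, reps
```

For example, two sources close together in angle and distance collapse into one representative source, while a source off to the side keeps its own cluster. A real implementation would additionally compare velocities relative to the sink, so that grouped sources share a similar Doppler shift.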
A module for soundscape monitoring and for visualizing resource management processes was extended to present clusters generated by a novel sound source clustering algorithm. This algorithm groups multiple sound sources together into a single representative source, considering localization errors that depend on listener orientation. Localization errors are visualized for each cluster using resolution cones. Visualization is performed at runtime and supports the understanding and evaluation of the clustering algorithm.
Refereed Proceeding Papers
A chatspace was developed that allows conversation with 3D sound using networked streaming in a shared virtual environment. The system provides an interface to advanced audio features, such as a "whisper function" for conveying a confided audio stream. This study explores the use of spatial audio to enhance a user's experience in multiuser virtual environments.
In a virtual reality environment, users are immersed in a scene with objects which might produce sound. The responsibility of a VR environment is to present these objects, but a practical system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. A sound spatialization resource manager, introduced in this thesis, controls sound resources and optimizes fidelity (presence) under given conditions, using a priority scheme based on psychoacoustics. Objects which are spatially close together can be coalesced by a novel clustering algorithm, which considers listener localization errors. Application programmers and VR scene designers are freed from the burden of assigning mixels and predicting sound source locations. The framework includes an abstract interface for sound spatialization backends, an API for the VR environments, and multimedia authoring tools.
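The allocation problem the resource manager solves can be sketched as follows. This is a hedged illustration only: the function, the source fields, and the loudness-over-distance heuristic are assumptions standing in for the actual psychoacoustics-based priority scheme described in the thesis.

```python
import math

def assign_mixels(sources, listener, num_mixels):
    """Sketch of a priority-driven resource manager: sources compete for a
    fixed pool of spatialization channels (mixels). Priority here is a
    simple loudness-over-distance heuristic; the real priority scheme is
    based on psychoacoustics and is more elaborate."""
    def priority(src):
        dist = math.hypot(src["pos"][0] - listener[0],
                          src["pos"][1] - listener[1])
        return src["gain"] / max(dist, 1.0)  # louder and closer wins

    ranked = sorted(sources, key=priority, reverse=True)
    spatialized = ranked[:num_mixels]   # each gets a dedicated mixel
    overflow = ranked[num_mixels:]      # mixed down, clustered, or culled
    return spatialized, overflow
```

The point of such a scheme is that application programmers never assign mixels directly: they register sources, and the manager decides, per frame, which sources are spatialized and which are coalesced or dropped.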
Pages 8--12, in Japanese. Sound spatialization is a technology that places sound in three-dimensional space, so that it has a perceivable direction and distance. Interactive means mutually or reciprocally active: interaction is when one action (e.g., the user moves the mouse) has a direct or immediate influence on other actions (e.g., processing by a computer: graphics change in size). Based on these definitions, an introduction to sound reproduction using DVD and virtual environments is given and illustrated with applications (e.g., virtual concerts).
Others