
Human Interface Laboratory


Masahide Sugiyama, Professor
Michael Cohen, Associate Professor
Susantha Herath, Associate Professor
William L. Martens, Visiting Researcher
Minoru Ueda, Assistant Professor

Using our communication channels (sense organs: ears, mouth, eyes, nose, skin, etc.), we communicate with one another: human to human, human to machine, and human to any information source. When these channels are impaired, in either a software or a hardware sense, communication can become difficult. The research of the Human Interface Laboratory covers the enhancement and generation of various human interface channels.

To advance this research on human interfaces, we adopt the following research principles:

  1. Theoretical
    Because our target is the human interface, our work risks degenerating into trial-and-error, heuristic, and overly practical exercises. Based on our experimental results and experience, we try to establish theory, unified insight, generalization, and analytical viewpoints.

  2. Practical
    Our goal is not theory for theory's sake. We extract concepts and theory in order to clarify them from experimental and quantitative viewpoints.

We organized the second workshop, IWHIT98 (International Workshop on Human Interface Technology 1998), on November 11-13, sponsored by the International Affairs Committee of the University of Aizu. The workshop had 5 sessions (1. Object Location and Tracking in Video Data; 2. Subjective Factors in Handling Images; 3. Visual Interfaces; 4. Visual and Body Perception; 5. Tools for Language Generation) comprising 15 lectures.

We promoted 6 SCCPs for students (``Speech Processing and Multimedia'', ``CAI for Signs (CAIS)'', ``Hand Signs (HS)'', ``Computer Music (Computer-Aided Musical Composition and Performance)'', ``Aizu Virtual City on InterNet'', ``Developing Sound Spatialization Framework Applications'') and 3 research projects (``Object Location and Tracking in Video Data'', ``Sound Field Control'', ``Acoustics and Perception of Sound Sources at Close Range''). We received commissioned research funds from IPA for ``Development of Japanese Dictation Software'', from HITOCC for ``Study on Computer Security using Speaker Recognition'', and from Fukushima Prefecture for ``Sound Field Control''.

We exhibited our research activities at the open campus during the University Festival (October 31 and November 1) and at the Fukushima Sangyo Fair (November 29 and 30). We also held a Lab Open House for freshmen on April 3.

In our research activity, we published 6 papers in academic journals and 10 refereed papers in international conference proceedings.

One of our members organized a working group on ``Blind and Computer''; about 30 people attended, and the group received support from the NHK Wakaba Fund.

The homepage of the Human Interface Lab, http://www.u-aizu.ac.jp/labs/sw-hi/, opens our research and education activities to the world.


Refereed Journal Papers

  1. M. Yamashita, M. Sugiyama, Speaker Verification Applied to xvlock in X Window Lock System --- Development and Its Evaluation. Trans. of IPSJ, vol.39, no.11, p.3131-3141, Nov. 1998.

  2. Woodrow Barfield, Michael Cohen, and Craig Rosenberg, Localization as a Function of Azimuth and Elevation. IJAP: Int. J of Aviation Psychology, vol.7, no.2, p.123--138, 1997.

    ISSN 1050-8414. This study was performed to investigate the accuracy of performing a localization task as a function of the use of three display formats: an auditory display, a perspective display, and a perspective-auditory display. The experimental task for the perspective and perspective-auditory displays was to judge the relative azimuth and elevation which separated a computer-generated target object from a reference object. The experimental task for the auditory display was to determine the azimuth and elevation of a sound source with respect to the listener. For azimuth estimates, there was a significant effect for type of display, with worse performance resulting from the purely auditory format. Further, azimuth judgements were better for target objects which were aligned close to the major meridian orthogonal to the viewing vector. For elevation errors, there was a main effect for the type of display with worst performance for the purely auditory condition; elevation judgements were worse for larger elevation separations independent of display condition. Finally, elevation performance was superior when target images were aligned close to the major meridian orthogonal to the viewing vector. Implications of the results for the design of spatial instruments are discussed.

  3. Richard Duda and William L. Martens, Range dependence of the response of an ideal rigid sphere to a point sound source. Accepted for publication in Journal of the Acoustical Society of America, July 1998.

    The Head-Related Transfer Function (HRTF) varies with range as well as with azimuth and elevation. To better understand its close-range behavior, a theoretical and experimental investigation of the HRTF for an ideal rigid sphere was performed. An algorithm was developed for computing the variation in sound pressure at the surface of the sphere as a function of the direction and range to the sound source. The impulse response was also measured experimentally. The results may be summarized as follows. First, the experimental measurements were in close agreement with the theoretical solution. Second, the variation of low-frequency interaural level difference with range is significant for ranges smaller than about five times the sphere radius. Third, the impulse response reveals the source of the ripples observed in the magnitude response, and provides direct evidence that the interaural time difference is not a strong function of range. Fourth, the time-delay is well approximated by the well-known ray-tracing formula due to Woodworth and Schlosberg. Finally, except for this time delay, the HRTF for the ideal sphere appears to be minimum-phase, permitting exact recovery of the impulse response from the magnitude response in the frequency domain.
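    For reference, the ray-tracing time-delay approximation mentioned above can be stated compactly (a standard form of the Woodworth--Schlosberg formula; the notation here is ours, not the paper's):

        \mathrm{ITD}(\theta) \approx \frac{a}{c}\,\bigl(\theta + \sin\theta\bigr), \qquad 0 \le \theta \le \frac{\pi}{2},

    where a is the sphere radius, c is the speed of sound, and \theta is the azimuth of a distant source measured from the median plane.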

  4. Katsumi Amano, Fumio Matsushita, Hirofumi Yanagawa, Michael Cohen, Jens Herder, William L. Martens, Yoshiharu Koba, and Mikio Tohyama, A Virtual Reality Sound System Using Room-Related Transfer Functions Delivered Through a Multispeaker Array: the PSFC at the University of Aizu Multimedia Center. TVRSJ: Trans. of the Virtual Reality Society of Japan, 3(1):1--12, ISSN 1344-011X, March 1998.

    The PSFC, or Pioneer Sound Field Controller, is a DSP-driven hemispherical loudspeaker array installed at the University of Aizu Multimedia Center. The PSFC features realtime manipulation of the primary components of sound spatialization for each of two audio sources located in a virtual environment, including the content (apparent direction and distance) and context (room characteristics: reverberation level, room size and liveness). In an alternate mode, it can also direct the destination of the two separate input signals across 14 loudspeakers, manipulating the direction of the virtual sound sources with no control over apparent distance other than that afforded by source loudness (including no simulated environmental reflections or reverberation). The PSFC speaker dome is about 10 m in diameter, accommodating about fifty simultaneous users, including about twenty users comfortably standing or sitting near its ``sweet spot,'' the area in which the illusions of sound spatialization are most vivid. Collocated with a large screen rear-projection stereographic display, the PSFC is intended for advanced multimedia and virtual reality applications.

Refereed Proceeding Papers

  1. Michael Cohen, Exclude and Include for Audio Sources and Sinks: Analogs of mute/solo & cue are deafen/confide & harken. Proc. Int. Conf. on Auditory Display, p.19--28, Palo Alto, CA, 1997.

    Non-immersive perspectives in virtual environments enable fluid paradigms of perception, especially in the context of frames-of-reference for conferencing and musical audition. Traditional mixing idioms for enabling and disabling various sources employ mute and solo functions, which, along with cue, selectively disable or focus on respective channels. Exocentric interfaces which explicitly model not only sources, but also location, orientation, directivity, and multiplicity of sinks, motivate the generalization of mute/solo and cue to exclude and include, manifested for sinks as deafen/confide and harken, a narrowing of stimuli by explicitly blocking out and/or concentrating on selected entities. Such functions can be applied not only to other users' sinks for privacy, but also to one's own sinks for selective presence. Multiple sinks are useful in both groupware, where a common environment implies social inhibitions to rearranging shared sources like musical voices or conferees, and individual sessions in which spatial arrangement of sources, like the configuration of a concert orchestra, has mnemonic value. An audibility protocol is described, comprising revoke, renounce, grant, and claim methods, invocable by these narrowcasting commands to control superposition of soundscapes, and a taxonomy of modal narrowcasting functions is proposed.
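    The narrowcasting logic above lends itself to a compact statement. The following sketch (our illustration in Python; the names and flag model are ours, not the paper's actual protocol implementation) shows how exclude and include attributes might combine to decide audibility:

        # Minimal sketch of a narrowcasting audibility rule (hypothetical;
        # not the paper's implementation). A channel is suppressed if it is
        # explicitly excluded, or if some peer is explicitly included and
        # this channel is not among the included set.

        def audible(channel, excluded, included):
            """excluded: channels turned off (mute/deafen analogs);
            included: channels focused on (solo/confide analogs);
            an empty `included` set means no solo/confide is active."""
            if channel in excluded:
                return False
            if included and channel not in included:
                return False
            return True

        # Confiding in "alice" implicitly silences "bob", though he is not muted.
        print(audible("bob", excluded=set(), included={"alice"}))    # False
        print(audible("alice", excluded=set(), included={"alice"}))  # True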

  2. Jens Herder and Michael Cohen, Enhancing Perspicuity of Objects in Virtual Reality Environments. Proc. Int. Conf. on Cognitive Technology, Aizu-Wakamatsu, August 1997, p.228--237.

    In an information-rich Virtual Reality (VR) environment, the user is immersed in a world containing many objects providing that information. Given the finite computational resources of any computer system, optimization is required to ensure that the most important information is presented to the user as clearly as possible and in a timely fashion. In particular, what is desired are means whereby the perspicuity of an object may be enhanced when appropriate. An object becomes more perspicuous when the information it potentially provides to the user becomes more readily apparent. Additionally, if a particular object provides high-priority information, it would be advantageous to make that object obtrusive as well as highly perspicuous. An object becomes more obtrusive if it draws attention to itself (or equivalently, if it is hard to ignore). This paper describes a technique whereby objects may dynamically adapt their representation in a user's environment according to a dynamic priority evaluation of the information each object provides. The three components of our approach are: (1) an information manager that evaluates object information priority, (2) an enhancement manager that tabulates rendering features associated with increasing object perspicuity and obtrusion as a function of priority, and (3) a resource manager that assigns available object rendering resources according to features indicated by the enhancement manager for the priority set for each object by the information manager. We consider resources like visual space (pixels), sound spatialization channels (mixels), MIDI/audio channels, and processing power, and discuss our approach applied to different applications. Assigned object rendering features are implemented locally at the object level (e.g., object facing the user using the billboard node in VRML 2.0) or globally, using helper applications (e.g., active spotlights, semi-automatic cameras). Keywords: perspicuity, obtrusion, virtual reality, spatialization, spatial media, autonomous actors, user interface design, man-machine interfaces.
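    The three-manager decomposition might be outlined as follows (a minimal sketch in Python under our own simplifications; all names, thresholds, and feature tables are hypothetical, not the paper's code):

        # Hypothetical sketch of the three-component pipeline described above.

        def information_manager(objects):
            # Evaluate the information priority of each object.
            return {o["name"]: o["priority"] for o in objects}

        def enhancement_manager(priority):
            # Tabulate rendering features as a function of priority.
            if priority > 0.8:
                return ["spotlight", "billboard", "spatialized_audio"]
            if priority > 0.5:
                return ["billboard"]
            return []

        def resource_manager(priorities, budget=2):
            # Grant rendering resources to the highest-priority objects only.
            ranked = sorted(priorities.items(), key=lambda kv: -kv[1])
            return {name: enhancement_manager(p) for name, p in ranked[:budget]}

        objects = [{"name": "alarm", "priority": 0.9},
                   {"name": "clock", "priority": 0.6},
                   {"name": "plant", "priority": 0.1}]
        print(resource_manager(information_manager(objects)))
        # {'alarm': ['spotlight', 'billboard', 'spatialized_audio'], 'clock': ['billboard']}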

  3. Jens Herder and Michael Cohen, Sound spatialization resource management in virtual reality environments. Proc. ASVA: Int. Symposium on Simulation, Visualization and Auralization for Acoustic Research and Education, p.407-414, Tokyo, April 1997.

    In a virtual reality environment the user is immersed in a scene with objects which might produce sound. A task of a VR environment is to bring all these objects to the presence of the user, but a system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. The sound spatialization resource management manages all sound resources and optimizes fidelity (presence) under given conditions. For that, a priority scheme based on human psychophysical hearing is needed. Parameters for spatialization priorities include intensity calculated from volume, distance, orientation in the case of a non-uniform radiation pattern, occluding objects, frequency spectrum (low frequencies are harder to localize), expected activity, and others. Objects which are spatially close together (depending on distance) can be mixed. Sources that cannot be spatialized can be treated as one ambient sound source. An alternative strategy tries to group objects based on distance and direction. Important for resource management is the resource assignment, i.e., minimizing swap operations, which makes it necessary to look ahead and predict upcoming events in a scene. Prediction is achieved by monitoring objects' speed and past evaluation values. Fidelity is contrasted for different kinds of resource restrictions and optimal resource assignment based upon unlimited dynamic scene look-ahead (i.e., replay). To give standard and comparable results, the VRML 2.0 specification is used as an application programmer interface. Applicability is demonstrated with a helical keyboard, a polyphonic MIDI stream-driven animation including user interaction (the user moves around, playing together with programmed notes). The developed Sound Spatialization Resource Manager gives improved spatialization fidelity under runtime constraints. Application programmers and virtual reality scene designers are freed from the burden of assigning and predicting the sound sources.
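    As a rough illustration of such a priority scheme (our sketch in Python; the weights and terms are hypothetical, loosely following the parameters listed above), sources exceeding the channel budget fall back to a shared ambient mix:

        import math

        # Hypothetical spatialization priority: intensity from volume and
        # distance (inverse-square), a frequency term penalizing hard-to-
        # localize low frequencies, and an expected-activity factor.
        def priority(volume, distance, centroid_hz, activity):
            intensity = volume / max(distance, 1e-6) ** 2
            freq_term = math.log10(max(centroid_hz, 20.0) / 20.0)
            return intensity * (1.0 + freq_term) * activity

        sources = {"voice": priority(1.0, 2.0, 1000.0, 0.9),
                   "hum":   priority(0.8, 1.5, 60.0, 1.0),
                   "bird":  priority(0.3, 10.0, 4000.0, 0.2)}
        mixels = 2  # available spatialization channels
        ranked = sorted(sources, key=sources.get, reverse=True)
        spatialized, ambient = ranked[:mixels], ranked[mixels:]
        print(spatialized, ambient)  # ['voice', 'hum'] ['bird']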

  4. Herath, A., Hyodo, Y., Ikeda, T., and Herath, S., Case Structure Approach to Solve Ambiguity in Japanese to Sinhalese MT. Proc. of International Conference on Advanced Computing 1997, edited by Ravi Mittal, K. M. Mehata, and Arun K. Somani, p.119-123, IEEE, Tata McGraw Hill, New Delhi, December 1997.

    The Japanese-Sinhalese language pair is virtually unexplored from the machine translation perspective. Translation often fails due to differences in case, number, gender, and tense. This paper discusses a procedure that extracts case-relevant information from the Japanese text and then combines it with knowledge of Sinhalese to produce translations.

  5. William L. Martens and Karol Myszkowski, Psychophysical validation of the Visible Differences Predictor for global illumination applications. Proceedings of the IEEE International Conference on Visualization, Durham, NC, Oct. 1998.

    The perceptually-based Visible Differences Predictor (VDP) developed by Daly has many potential applications in realistic image synthesis. However, systematic validation and subsequent calibration of the VDP response via human psychophysical experiments should be completed before integrating VDP calculation into image synthesis algorithms such as those in global illumination computations. For example, the VDP local error metric can guide decision making in adaptive mesh subdivision, and in selecting regions of interest for more intensive global illumination computations. In this study, we designed two human psychophysical experiments to test whether VDP predictions match well with subjective reports of visible difference between images under conditions mimicking those in our VDP applications. These experiments showed a good match with VDP predictions for shadow and lighting pattern masking by texture, and in comparisons of the perceived quality of images generated at progressive stages of an indirect lighting solution.

  6. Tsuneo Ikedo and William L. Martens, Multimedia Processor Architecture. Proceedings of the IEEE International Conference on Multimedia Computing and Systems, Austin, TX, July 1998.

    This paper describes trends in the development of multimedia processor architecture that may be predicted on the basis of the availability of an ASIC with tens of millions of gates. Multimedia processing based upon multi-granular parallelism for diverse media needs supercomputing power for multi-threaded, process-level execution. Due to the appearance of large scale integration (LSI) for what has been termed ``system on silicon,'' a new scheme for building the multimedia-centric processor will be realized. This paper proposes advanced implementation technologies for multimedia acceleration employing a reconfigurable architecture and using hundreds of processing elements embedded within an ASIC. Accelerated functions considered in this proposal include 3D graphics and 3D audio rendering, and implementation of video and audio codecs. Computational efficiency for advanced applications, such as walk-through virtual reality (VR), is maximized by sharing the results of geometric calculations that are required both for graphics and audio rendering.

  7. William L. Martens and Jens Herder, Perceptual criteria for eliminating reflectors and occluders from the rendering of environmental sound. To appear in Proceedings of the ICAD Workshop on Auditory Display, Berlin, Germany, Mar. 1999.

    Given limited computational resources available for rendering spatial sound imagery, it is important to determine effective means for choosing what components of the rendering will provide the most audible differences in the results. Rather than begin with an analytic approach that attempts to predict audible differences on the basis of objective parameters, subjective tests were executed to determine the audible difference made by two types of sound obstruction: reflectors and occluders. Single-channel recordings of 90 short speech sounds were made in an anechoic chamber in the presence and absence of these two types of obstructions, and as the angle of those obstructions varied over a 90 degree range. These recordings were reproduced over a single loudspeaker in that anechoic chamber, and listeners were asked to rate how confident they were that the recording of each of these 90 stimuli included an obstruction. The results revealed the conditions under which these obstructions have a significant impact on the perceived spatial image. These confidence ratings were incorporated into an evaluation function used in determining which reflectors and occluders are most important for rendering.

  8. William L. Martens, Jens Herder, and Yoshiki Shiba, A filtering model for efficient rendering of the spatial image of an occluded virtual sound source. Accepted for publication, 1998.

    Rendering realistic spatial sound imagery for complex virtual environments must take into account the effects of obstructions such as reflectors and occluders. It is relatively well understood how to calculate the acoustical consequence that would be observed at a given observation point when an acoustically opaque object occludes a sound source. But the interference patterns generated by occluders of various geometries and orientations relative to the virtual source and receiver are computationally intense if accurate results are required. In many applications, however, it is sufficient to create a spatial image that is recognizable by the human listener as the sound of an occluded source. In the interest of improving audio rendering efficiency, a simplified filtering model was developed and its audio output submitted to psychophysical evaluation. Two perceptually salient components of occluder acoustics were identified that could be directly related to the geometry and orientation of a simple occluder. Actual occluder impulse responses measured in an anechoic chamber resembled the responses of a model incorporating only a variable duration delay line and a lowpass filter with variable cutoff frequency.
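    A minimal version of the two-component model described above might look like the following (our sketch in Python; the parameter values are illustrative, not the paper's fitted values): a variable-duration delay line followed by a one-pole lowpass filter with variable cutoff.

        import math

        def occlude(signal, sample_rate, delay_s, cutoff_hz):
            # Variable-duration delay line: prepend `delay_s` of silence.
            delayed = [0.0] * int(round(delay_s * sample_rate)) + list(signal)
            # One-pole lowpass with variable cutoff frequency.
            a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
            out, y = [], 0.0
            for x in delayed:
                y = (1.0 - a) * x + a * y
                out.append(y)
            return out

        # Example: an impulse through a 1 ms delay and 2 kHz cutoff at 44.1 kHz.
        response = occlude([1.0] + [0.0] * 63, 44100, 0.001, 2000.0)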

  9. William L. Martens and Stephen Lambacher, The influence of vowel context on the Japanese listener's identification of English voiceless fricatives. To appear in Proceedings of the joint meeting of the Acoustical Society of America and the European Acoustics Association, Berlin, Germany, Mar. 1999.

    In order to examine the influence of vowel context on the ability of native Japanese speakers to distinguish between English voiceless fricatives, recognition rates from a five-alternative, forced-choice (5AFC) test were analyzed in terms of receiver operating characteristics. This signal-detection-theoretic analysis showed contrasting effects of vowel context on sensitivity versus response bias. Subjects heard each of 75 nonsense syllables spoken by three native speakers of English, and were asked to report whether the syllable they heard contained /f/, /s/, /ʃ/, /θ/, or /h/. Fricative identifiability (measured by the index of sensitivity, d prime) appeared to be modulated primarily by phonemic and phonetic factors. In contrast, the likelihood of giving one fricative response over another without regard for which fricative was presented (measured by the index of bias, beta) appeared to be modulated by more subjective factors such as familiarity with common loanwords (words of foreign origin). A cross-linguistic examination of phonemic and phonetic factors reveals possible reasons why the Japanese listener's identification of English fricatives is considerably poorer in particular vowel contexts, and how differences in fricative production can make it more difficult to make certain phonetic distinctions.
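    For readers unfamiliar with the two indices, the classical yes/no signal-detection formulas (simpler than the study's 5AFC ROC analysis; the rates below are invented for illustration) can be computed as follows:

        import math
        from statistics import NormalDist

        z = NormalDist().inv_cdf  # inverse standard normal CDF

        def d_prime(hit_rate, false_alarm_rate):
            # Sensitivity: separation of signal and noise distributions.
            return z(hit_rate) - z(false_alarm_rate)

        def beta(hit_rate, false_alarm_rate):
            # Response bias: likelihood ratio at the decision criterion.
            zh, zf = z(hit_rate), z(false_alarm_rate)
            return math.exp((zf * zf - zh * zh) / 2.0)

        print(d_prime(0.85, 0.20))  # ~1.88
        print(beta(0.85, 0.20))     # ~0.83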

  10. William L. Martens, The impact of decorrelated low-frequency reproduction on auditory spatial imagery: Are two subwoofers better than one? To appear in Proceedings of the Audio Engineering Society 16th International Conference on Spatial Sound Reproduction, Rovaniemi, Finland, Apr. 1999.

    Though only a single subwoofer is typically used in multichannel sound reproduction systems, there are reasons to consider employing two subwoofers. Including a pair of decorrelated low-frequency signals in a spatial sound presentation enables better control over several subjective features of the resulting spatial image. The features investigated herein include apparent source width (ASW), apparent source distance (ASD), and spaciousness. As expected, the highest ASW ratings were observed for the lowest IACC stimuli (those with the greatest decorrelation). But the ASW ratings obtained under two-channel subwoofer reproduction consistently exceeded those obtained in the one-channel subwoofer condition. Also consistent with previous results, negative IACC stimuli were rated closer (lower ASD) than stimuli exhibiting positive IACC values. Once again, low-frequency decorrelation affected these ASD ratings, as it did ratings of spaciousness. One conclusion is that extended control over auditory spatial imagery is provided by the use of two subwoofers.
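    The interaural cross-correlation coefficient (IACC) referenced above is commonly defined as the maximum of the normalized cross-correlation between the two ear signals over lags of about +/-1 ms. A sketch follows (our illustration in Python; note that some definitions take the maximum absolute value instead):

        import numpy as np

        def iacc(left, right, sample_rate, max_lag_s=0.001):
            # Normalized cross-correlation, searched over +/- max_lag_s.
            norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
            full = np.correlate(left, right, mode="full") / norm
            mid = len(full) // 2  # zero-lag index
            lag = int(sample_rate * max_lag_s)
            return np.max(full[mid - lag: mid + lag + 1])

        # Identical channels give IACC ~ 1; independent noise, IACC near 0.
        fs = 48000
        noise = np.random.randn(fs)
        print(iacc(noise, noise, fs))                # ~1.0
        print(iacc(noise, np.random.randn(fs), fs))  # near 0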

  11. William L. Martens and Karol Myszkowski, Appearance preservation in image processing and synthesis. Proceedings of the 6th International Workshop on Human Interface Technology, University of Aizu, Aizu-Wakamatsu, Fukushima, Japan, 1998.

    Many improvements in image processing and synthesis have resulted from exploitation of human perceptual capacities and insensitivities. This is because it is the appearance of the resulting images that is of primary relevance to the successful development and deployment of new algorithms for image processing and synthesis. Poor performance often results from image evaluation that is based upon analysis of luminance, or some other strictly objective error metric. On the other hand, error metrics that are based upon predictions of human perception have been shown to be of great benefit in a wide range of applications, including evaluation of lossy image compression and dithering algorithms and the refinement of global illumination solutions in realistic image synthesis. Dramatic reductions in memory required for image storage can be realized, and significant improvements in synthesis speed can be attained by using tools that focus only on those image features readily perceived by human observers under given viewing conditions. Conversely, simplification of computations can be achieved by omitting from image synthesis the computation of details that will not make significant differences in image appearance. This paper addresses a number of questions concerning the context within which such ``perceptually-informed'' tools are developed, deployed, and evaluated. The primary focus is upon the psychophysical validation of tools for perceptually-based image processing, rather than upon the image processing techniques themselves.

Unrefereed Papers

  1. M. Yamashita, M. Sugiyama. Speaker Verification Applied to Display Lock System --- Development and Its Evaluation. Proc. of AVIOS98, AVIOS, Sep. 1998.

  2. T. Asano, M. Sugiyama, Object Location and Tracking in Video Data. Proc. of SPECOM98, SPECOM, Oct. 1998.

  3. T. Asano, M. Sugiyama, Segmentation and Classification of Auditory Scenes in Time Domain, Proc. of IWHIT98, IWHIT, Nov. 1998.

  4. T. Asano, M. Sugiyama, Study on Acoustic Scene Segmentation. Proc. of ASJ, p. 143-144, ASJ, Mar. 1998.

  5. T. Asano, M. Sugiyama, Junction of Acoustic Scene Segments. Proc. of ASJ, p.155-156, ASJ, Sep. 1998.

  6. Herath, A., Ikeda, T., Herath, S., Case Structure Based Solution for Meaningless Translation Problem from Japanese to Sinhalese MT. Proc. of IWHIT'97, edited by M. Cohen, p.91-95, Sashimaya Printing, Japan, March 1998.

Technical Reports

  1. M. Sugiyama, Object Detection and Tracking in Video Data. Technical Report of Human Information Group, p.7-12, The Institute of Image Information and Television Engineers, Nov. 1998.

  2. M. Suzuki, M. Sugiyama, Study on Characterization of Music based on Music Score Information. Technical Report of Speech Processing Group, p.23-30, IEICE & ASJ, Feb. 1998.

  3. M. Yamashita, M.Sugiyama, Speaker Verification Applied to xvlock in X Window Lock System --- Development and Its Evaluation. Technical Report of Human Interface Group, p.43-48, IPSJ, Mar. 1998.

Grants

  1. Masahide Sugiyama, received IPA research fund, 1997.

  2. Masahide Sugiyama, received HITOCC research fund, 1997.

  3. Susantha Herath, Fukushima Prefectural Foundation for the Advancement of Science and Education, Environment Computer Activity project, yen 1,000,000, 1997.

  4. Susantha Herath, Fukushima Prefectural Foundation for the Advancement of Science and Education, Effective Teaching and Evaluation Method Seminar, yen 1,600,000, 1997.

Academic Activities

  1. Masahide Sugiyama, member of the Speech Processing Committee in IEICE (Institute of Electronics, Information and Communication Engineers) and ASJ (Acoustical Society of Japan) (1994.5 - ).

  2. Masahide Sugiyama, member of Human Interface Committee in IPSJ.

  3. Masahide Sugiyama, member of the Spoken Language Processing group in IPSJ.

  4. Masahide Sugiyama, member of Tohoku Regional Board of IEICE and ASJ.

  5. Masahide Sugiyama, referee for IEICE and ASJ.

  6. Michael Cohen, referee for IEICE Trans. on Fundamentals of Electronics, Communications and Computer Sciences: ``A Proposal of Five-Degree-of-Freedom 3D Nonverbal Voice Interface,'' by T. Yonekura, R. Narisawa, and Y. Watanabe.

  7. Michael Cohen, referee for Presence: Teleoperators and Virtual Environments (MIT Press): ``Some Perspectives on Preformed Sound and Music in Virtual Environments.''

  8. Michael Cohen, referee for CISMOD95 (Conf. on Information Systems and Management of Data): ``Implementation of a Graphical User Interface for Object-Oriented Databases,'' by Yong S. Jun and Suk I. Yoo.

  9. Michael Cohen, member of IEEE VR Terminology Committee.

  10. Michael Cohen, member of the VRML Audio Advisory Board.

  11. Susantha Herath, IEEE coordinator (1993.4 -).

  12. Susantha Herath, Member of the Review Board for the International Journal of Applied Intelligence (1992. 5 -).

  13. Susantha Herath, Financial Chair of the Third International Workshop on Human Interface Technology (IWHIT'97), March 12-14.

  14. Susantha Herath, co-editor of the Proceedings of the Third International Workshop on Human Interface Technology (IWHIT'97), March 12-14.

  15. Susantha Herath, chair of the Natural Language Processing session, Third International Workshop on Human Interface Technology (IWHIT'97), March 12-14.

  16. Susantha Herath, General Chair, International Workshop on Improving Effectiveness in University Education, University of Aizu, December 6.

  17. Susantha Herath, editor of the Proceedings of the International Workshop on Improving Effectiveness in University Education, University of Aizu, December 6.

  18. William L. Martens, Research Project: Acoustics and Perception of Sound Sources at Close Range.

    Abstract: Variation in the Head-Related Transfer Function (HRTF) has been measured and analyzed for sound sources originating anywhere within close range of the listener's head. A simplified model for DSP (digital signal processing) implementation of HRTFs within the 3D space surrounding the listener has been developed. This result extends the typical 2D HRTF solution, which assumes that source range is not a parameter, to a 3D solution that can fill the listener's nearby space with virtual sound sources (i.e., addressing locations within a circumscribed sphere, rather than directions identified by angles associated with the surface of such a circumscribed sphere).

    Investigation of variation in the Source-Radiation Transfer Function (SRTF) has also been completed for human speech and a number of musical instruments, including guitar, violin, koto, conga, and bongo. An exhaustive study of the source-radiation characteristics of the clarinet has also been accomplished in collaboration with master's student Yasuda Satoko. Furthermore, a third category of transmission phenomena has been investigated, in addition to source and receiver characteristics. Transmission characteristics were measured for an acoustical signal traveling around a partially occluding object interposed between source and receiver. Such transmission characteristics have been termed Obstruction-Related Transfer Functions (ORTFs), and depend upon the geometry of the occluding object, and upon the orientation and locations of source, occluding object, and receiver.

Others

  1. Michael Cohen with Jens Herder, Virtual Reality Audio SCCP. The University of Aizu, 1997.

  2. Michael Cohen with James Goodwin, Computer Music SCCP. The University of Aizu, 1997.

  3. Naoyuki Hashimoto, Japanese Morphological and Syntactic Analysis. The University of Aizu, 1997. Thesis Advisor: Susantha Herath.

  4. Miwa Itou, Measuring Hand Shapes. The University of Aizu, 1997. Thesis Advisor: Susantha Herath.

  5. Chie Saitou, Structured Comparison of Sign Language. The University of Aizu, 1997. Thesis Advisor: Susantha Herath.

  6. Kasumi Abe, Network-Based Sign Language Learning System. The University of Aizu, 1997. Thesis Advisor: Susantha Herath.

  7. Kunio Yamamoto, Electronic Sign Language Dictionary Development. The University of Aizu, 1997. Thesis Advisor: Susantha Herath.




