ISSM'08-'09: Program

The Ninth International Symposium on Spatial Media
Dates
Tuesday-Wednesday, Feb. 17-18, 2009
Room
UBIC 3D Theater for all technical sessions

Extra copy of proceedings
¥2,000

Speakers
Tuesday, Feb. 17
  • Michael Frishkopf, University of Alberta (Edmonton, Canada)
    Title
    (virtual [world) music]: thoughts on designing a collaborative spatial sonic immersive world musical virtual globe environment
    Time
    9:00-9:30
    Abstract
    In this paper I elaborate a theory of musical mappability, and then describe a preliminary design for a collaborative spatial sonic immersive world musical virtual global environment featuring fixed virtual speakers, and user avatars equipped with virtual ears and virtual mobile speakers.

    Relying on satellite imagery, Google Earth and other virtual globe environments provide visual representations of geographical, topographical, architectural, social, political, cultural, and organizational information -- visual, textual, or numeric -- that is mappable to the earth's surface, while omitting corresponding representation of audio information. Yet the world is a sonic as well as a visual space. While global sonic features cannot be captured by small numbers of remote sensors such as satellites, many pre-recorded sounds (often culturally specific) are nevertheless quasi-mappable. In particular, recorded music tracks are typically (if not always) fuzzily mappable, via recourse to metadata (typically, the geographical region associated with the musical style represented in the track).

    The proposed virtual environment features fixed, world-referenced virtual speakers, assigned earth-coordinate (lat/lon) positions, virtually broadcasting (with a particular power and radiation pattern) mapped music tracks selected at random from the collection of all such tracks located within a fixed radius of the speaker (or, possibly, broadcasting real sound, as captured by real remote sensors: "webmics," each one attached to a virtual speaker). User avatars can also radiate sound into the virtual world via mobile virtual speakers (connected to real user microphones), enabling (speech or music) communication among simultaneous users. Users traverse the virtual world via pointing devices, either FCG (fixed (user) center of gravity, e.g. a mouse or trackpad) or MCG (movable center of gravity, e.g. a GPS-compass unit), allowing them to control the course and orientation of their avatars. As they do so, binaural audio signals are computed as the acoustically correct sum (accounting for the inverse square law, Doppler effects, head shadowing, interaural intensity and phase differences, etc.) of the virtual broadcasts emanating from all virtual speakers, as a function of avatar orientation, speed, and location, and output to real user headphones. Virtual wind noise can also be added as a function of speed. Visual position feedback is provided by Google Maps, via LCD screen.
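    The per-speaker computation described above can be illustrated in miniature. The following sketch is ours, not the author's design: it models only two of the listed effects (inverse-square attenuation and a crude interaural intensity difference), and all function names are hypothetical.

```python
import math

def ear_gains(avatar_pos, avatar_heading, speaker_pos, power=1.0):
    """Simplified left/right gains for one fixed virtual speaker:
    inverse-square distance attenuation plus a crude interaural
    intensity difference derived from the bearing to the source.
    (Real spatialization would add Doppler shift, head shadowing,
    and interaural phase/time differences.)"""
    dx = speaker_pos[0] - avatar_pos[0]
    dy = speaker_pos[1] - avatar_pos[1]
    dist2 = dx * dx + dy * dy
    attenuation = power / max(dist2, 1e-6)         # inverse square law
    bearing = math.atan2(dy, dx) - avatar_heading  # source angle rel. to nose
    pan = math.sin(bearing)                        # +1 = hard left, -1 = hard right
    left = attenuation * (1.0 + pan) / 2.0
    right = attenuation * (1.0 - pan) / 2.0
    return left, right

def binaural_mix(avatar_pos, avatar_heading, speakers):
    """Sum the contributions of all fixed virtual speakers,
    each given as ((x, y), power)."""
    left = right = 0.0
    for pos, power in speakers:
        l, r = ear_gains(avatar_pos, avatar_heading, pos, power)
        left += l
        right += r
    return left, right
```

    A source directly to the avatar's left then feeds mostly the left ear, and doubling a source's distance quarters its loudness, per the inverse square law cited above.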

    In the projected implementation, Smithsonian Folkways Recordings, comprising over 2,200 albums and more than 40,000 tracks from around the world, provides a possible global musical database. A non-obtrusive e-commerce application can also be provided: metadata for the maximally salient track is visually indicated, with an option to purchase. IPR issues are briefly addressed.

    The environment is envisioned to function in three possible modes, which may also be combined: (a) in FCG mode as a geographical world music browser; (b) in FCG mode as a collaborative improvisational tool; (c) in MCG mode as a virtual museum of world music, transforming any sufficiently large (with respect to MCG pointer resolution) open space into a virtual world. For instance, Smithsonian Folkways Recordings (unlike other Smithsonian Institution units) lacks a real music museum. Using (c), a large grassy square on the Washington DC Mall could be mapped to the entire globe, and thus transformed into a technologically and musically appealing virtual museum featuring Smithsonian Folkways' rich world musical content.

    Bio
    Michael Frishkopf, Assoc. Prof. in the Department of Music, University of Alberta (Canada), received his doctorate from UCLA's Department of Ethnomusicology in 1999, with a dissertation on Sufi music of Egypt. Specializing in sounds of the Arab world, West Africa, and Islamic ritual, his research interests also include social network analysis, music multimedia systems, the popular music industry, and digital multimedia repositories. He has conducted fieldwork for many years in Egypt and teaches a summer school in Ghana centered on West African music and culture. Recent articles and book chapters include "Globalization and re-localization of Sufi music in the West" (Routledge), "Nationalism, Nationalization, and the Egyptian music industry" (Asian Music), "Mediated Qur'anic recitation and the contestation of Islam in contemporary Egypt" (Ashgate), and "'Islamic Music in Africa' as a tool for African Studies," to appear in the Canadian Journal of African Studies. Three books are in progress: The Sounds of Islam (Routledge), Sufism, Ritual, and Modernity in Egypt (Brill), and an edited collection entitled Music and Media in the Arab World. At the University of Alberta he is Associate Director of the Canadian Centre for Ethnomusicology. He serves as Associate Editor of the Middle East Studies Association Bulletin and as Chair of the Society for Arab Music Research, and has received major research grants from the Social Sciences and Humanities Research Council of Canada, the Canadian Heritage Information Network (Canadian Heritage), and the National Endowment for the Humanities (USA).
  • Ivan Poupyrev, Interaction Laboratory, Sony Computer Science Labs. (Tokyo)
    Title
    12 Pixels: Tools and Techniques for Creativity on the Go
    Time
    9:30-10:00
    Abstract
    How we draw depends on the tools we have: drawing with a pencil is very different from drawing with a brush. The TwelvePixels project attempts to create a new type of digital "brush", a set of techniques and algorithms that allow us to draw with very simple input devices, such as the twelve keys of the mobile phone.

    Creating such a "brush" leads to a whole new range of creative applications that can be enjoyed anywhere casually "on-the-go" without having to use a PC. Examples include, but are not limited to, annotating digital photographs taken on a mobile phone, creating personalized emoticons for text communication, drawing characters for mobile games, and sharing all this new content with millions of other people.

    The first version of 12 Pixels is planned for release in Japan by Sony as a free service.

    More info: www.12pixels.com

    Bio
    Born in the USSR, Ivan Poupyrev is currently a Researcher at the Interaction Laboratory at Sony CSL in Tokyo, where he designs and investigates user interfaces for future consumer electronic devices and digital living environments. In his research he is particularly interested in creating interfaces and technologies that can seamlessly blend digital and physical properties in devices and everyday objects. That includes novel tactile and haptic user interfaces, shape-changing and flexible computers, and tangible and embodied interfaces, as well as more traditional augmented and virtual reality interfaces. The results of his research have been extensively presented at major international conferences such as ACM SIGGRAPH, CHI, and UIST, reported in popular media, and released on the market in Sony products. He recently co-edited a special issue of Communications of the ACM on Organic User Interfaces. His book on 3D user interfaces was published by Addison-Wesley in 2004.

    Ivan Poupyrev graduated from Moscow Airspace University with a Master's degree in Applied Mathematics and Computer Science in 1992. While working on his doctorate at Hiroshima University, he stayed for three years as a Visiting Scientist at the Human Interface Technology Lab at the University of Washington, working on virtual reality and 3D user interfaces. After defending his Ph.D. in 1999 he stayed as a post-doctoral researcher at the Advanced Telecommunication Research Institute International (ATR) in Kyoto. He joined Sony in 2001.

  • Jess Mantell, Keio Media Design (Hiyoshi, Yokohama)
    Title
    iHashi: eco- and nutrition-friendly chopsticks
    Time
    10:00-10:30
    Abstract
    Awareness of and concern for the deteriorating state of our environment are widespread these days. At the same time, many in the developed world enjoy a culture of convenience and consumption, making it difficult to choose a more sustainable lifestyle. One simple habit that is often overlooked, yet contributes greatly to the destruction of forests, is the use of disposable chopsticks. China produces around 45 billion pairs of waribashi (disposable chopsticks) a year, equal to about 25 million trees. Japan consumes 25 billion sets of them every year. That comes to around 200 pairs per person, and 14 million trees felled for single-use waribashi.
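    The quoted figures can be sanity-checked with simple arithmetic (the population figure below is our assumption, roughly Japan's population at the time, and is not from the talk):

```python
# Japan: quoted annual consumption of disposable chopsticks
pairs_per_year = 25_000_000_000
population = 127_000_000                          # assumed, not from the talk
pairs_per_person = pairs_per_year / population    # about 197, i.e. "around 200"

# Implied yield per tree, from both quoted figures:
china_pairs_per_tree = 45_000_000_000 / 25_000_000   # 1800
japan_pairs_per_tree = 25_000_000_000 / 14_000_000   # about 1786
```

    The two implied pairs-per-tree ratios agree closely, so the abstract's figures are internally consistent.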

    By providing an easy-to-use, personalizable product that gives positive reinforcement of good eating and eco habits, iHashi encourages consumers to make positive lifestyle changes. iHashi can keep track of and analyze your eating habits, calculate the number of trees you have saved, warn of extreme food temperatures or allergens, and offer points which can be redeemed for goods.

    Bio
    Jess Mantell holds an Honours Bachelor of Design degree from York University and Sheridan College's joint program in Design (YSDN), where the theme of her final year project was video surveillance in public spaces. Undergraduate studies included typography, branding, editorial, book, video, and new media.

    Upon graduating, she set off to explore the visual culture of Tokyo, Japan, and decided to study further as a Masters student. In the spring of 2007, she was awarded the Monbukagakusho scholarship as a research student under Prof. Masahiko Inakage in the department of Media and Governance at Keio University. In the fall of 2008 she entered Keio University's new graduate school of Media Design (KMD) as a Monbukagakusho scholar.

    Current research is in interactivity design, bicycles, navigation, urban design, mobile devices, nostalgia, visual communication, and collective intelligence. Jess is involved in the establishment of a new bicycle navigation project at KMD, designing the digital signage system for the new Collaboration Complex at Keio's Hiyoshi campus, interactive media art, and experimental video making.

  • Michael Cohen, U. of Aizu
    Title
    Spatial Media Demonstrations: Spatial Sound, 3D Graphics, Mobile Computing, Panoramic Imagery
    Time
    10:30-11:00
    Abstract
    The synergetic convergence of telephones, computers, televisions, high fidelity stereos, robotics, and visual displays enables a consumer-level participation in multimedia and its cousin "virtual reality." New idioms of computer-human interaction encourage different styles of communication. We will use the unique facilities of the UBIC's 3D Theater, including its immersive visual display and speaker arrays, to demonstrate some research being done locally by faculty and students, explaining and showing examples of spatial media, visual music, and virtual reality.
    Bio
    Michael Cohen is Professor at the University of Aizu in Japan, where he heads the Spatial Media Group, comprising about 30 members. He teaches undergraduate courses in information theory and human interfaces & virtual reality, and gives graduate lectures in sound and audio, computer music, and spatial sound. His research primarily concerns interactive multimedia, including virtual & mixed reality, spatial audio & stereotelephony, stereography, ubicomp (ubiquitous computing), and mobile computing.
  • Tibor Richter, Kandó Kálmán Szakközépiskola (Miskolc, Hungary) & Norbert Györbíró, U. of Aizu
    Title
    Infrared tracking with the Nintendo Wiimote
    Time
    11:00-11:30
    Abstract
    We present some novel applications using the Wiimote, the controller of the Nintendo Wii. By using a special infrared pen, a computer monitor or a projected computer screen is turned into a quasi touch-sensitive surface on which we draw pictures and interact with graphical applications. In another demonstration, infrared spectacles are used to track our head-motion for displaying immersive and interactive 3D scenes. The demonstrations are based on the prototypes of Johnny Chung Lee.
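    The whiteboard-style demonstration rests on a four-point calibration: the Wiimote's camera reports infrared-blob positions in its own coordinates, and a homography maps them onto screen pixels. A minimal sketch of that mapping (our illustration; the function names are not from the demonstration):

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: solve for the 3x3 perspective map
    taking the four calibration points src (camera coordinates)
    onto dst (screen pixels), with the bottom-right entry fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def camera_to_screen(H, x, y):
    """Map one reported blob position to screen coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

    Touching the infrared pen to four known screen corners yields the correspondences; every subsequent blob position then maps to a cursor position on the screen.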
    Bio
    Tibor Richter is a student at the Kandó Kálmán Szakközépiskola, a high school in Miskolc, Hungary. He is interested in computers and in designing, creating, and hacking electronic gadgets. He also enjoys building and tweaking nitro-fueled remote-controlled cars.

    Norbert Györbíró received his B.Sc. and M.Sc. degrees in Computer Science from the University of Szeged, Hungary, in 1999 and 2003, respectively. Between 1999 and 2007 he worked for Nokia Research Center in Budapest and Helsinki, as a Researcher and, from 2006, as a Senior Researcher. He is currently pursuing Ph.D. studies in Computer Science at the University of Aizu, with interests in mobile and distributed computing, fuzzy logic, and soft computing methods.

Wednesday, Feb. 18
  • Rahul Banerjee, NVIDIA Corp. (Bangalore, India)
    Title
    An Introduction to the 3D Graphics Pipeline
    Time
    9:30-10:30
    Abstract
    3D Computer Graphics are ubiquitous, but how are they created?

    The highly responsive graphics created inside any interactive 3D application today are the end result of a sequence of steps that transforms geometry, textures, and other scene-description data into pixels. The result may be a first-person view inside a racing car, the graphics for an immersive VR environment, etc.

    In the motion picture industry, scenes that are infeasible to shoot on film due to time, monetary, or physical constraints are also generated using a 3D graphics pipeline, but since the degree of realism must be higher, the frames take much longer to generate. These frames are generated on separate computers (in parallel, to speed things up) and then "stitched" together into the final sequence.

    Although the actual algorithms used may vary, the transformations used are fundamentally the same in both cases (real-time and offline).

    This tutorial session will provide an introduction to the stages in the 3D Graphics Pipeline, with examples and brief explanations of the inner workings of each stage. No mathematical background or prior knowledge of graphics (except at an end-user level) is assumed, but depending on audience interest, algorithms can be discussed. Questions are encouraged, both during and after the session.
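    As a taste of the tutorial's subject, the core vertex transformations of the pipeline can be sketched in a few lines (one common OpenGL-style convention; an illustration of ours, not the tutorial's material):

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """A common right-handed, OpenGL-style projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def vertex_to_pixel(v, model, view, proj, width, height):
    """Run one model-space vertex through the classic stages:
    model -> world -> eye -> clip space, then the perspective
    divide to normalized device coordinates, then the viewport
    transform to raster (pixel) coordinates."""
    clip = proj @ view @ model @ np.append(v, 1.0)
    ndc = clip[:3] / clip[3]             # perspective divide
    x = (ndc[0] + 1.0) / 2.0 * width     # viewport transform
    y = (1.0 - ndc[1]) / 2.0 * height    # flip y for raster convention
    return x, y
```

    Real-time and offline renderers differ in how (and how expensively) pixels are shaded, but both apply this same chain of transformations to geometry.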

    Bio
    Rahul Banerjee works for NVIDIA Corp. at their Bangalore development center. He has worked on video compression, Linux kernel drivers, and an OpenGL-ES driver. He is currently working on OpenCL, a cross-platform parallel programming language. He holds a Master's Degree in Computer Science from the Indian Institute of Technology in Kanpur, India. In his spare time, he plays the piano.
  • Eric Boskin, Volterra Semiconductor Corp. (Oakland, CA; USA)
    Title
    "Quality Programs for Silicon Power Solutions"
    Time
    10:30-11:00
    Abstract
    This lecture begins with an overview of the product lines and market penetration of Volterra's power management products. The products are designed to be small, light, and highly efficient power supplies for CPU and GPU chips. The system control architecture of this DC-to-DC converter is described. An integrated, multi-phase buck converter architecture is chosen for high performance. The manufacturing process flow for this fabless semiconductor company, and its Yield Enhancement Systems, are outlined. A Quality Control Program, Maverick Product Elimination (MPE), is described; it is the cornerstone of manufacturing power products at ultra-low Defective Parts per Million (DPPM) levels.
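    While the talk's actual MPE criteria are not given here, the underlying idea, screening out parts that pass datasheet limits yet deviate statistically from their lot, can be sketched as a simplified, hypothetical k-sigma screen:

```python
import statistics

def maverick_screen(measurements, k=3.0):
    """Flag 'maverick' parts: in spec, but more than k standard
    deviations from their lot's mean. (Simplified illustration only;
    the actual MPE program's criteria are not described in this
    abstract.)"""
    mu = statistics.fmean(measurements)
    sigma = statistics.pstdev(measurements)
    return [i for i, m in enumerate(measurements)
            if sigma > 0 and abs(m - mu) > k * sigma]
```

    Parts flagged this way are rejected even though they meet specification, which is how such programs drive DPPM toward ultra-low levels.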
    Bio
    Eric Boskin is currently Director of Product Engineering at Volterra Semiconductor Corporation, a leading fabless semiconductor company in Power Management. Volterra's major customers include IBM, NVIDIA, and Lenovo. He is responsible for New Product, Process, and Package Technology Qualification, and Yield Enhancement, for all product lines. He has previously worked in Design Engineering and R&D at Teradyne, HP, and Lam Research. He founded a company, Perceptive Technologies, which developed Real Time Statistical Process Control software for the semiconductor manufacturing industry. He holds a B.S.E.E. from Rensselaer Polytechnic Institute (1980), an M.S.E.E. from Stanford University (1986), and a Ph.D. from the College of Engineering, University of California, Berkeley (1995).

mcohen@u-aizu.ac.jp