Relying on satellite imagery, Google Earth and other virtual globe environments provide visual representations of geographical, topographical, architectural, social, political, cultural, and organizational information (visual, textual, or numeric) that is mappable to the earth's surface, while omitting any corresponding representation of audio information. Yet the world is a sonic as well as a visual space. While global sonic features cannot be captured by a small number of remote sensors such as satellites, many pre-recorded sounds (often culturally specific) are nevertheless quasi-mappable. In particular, recorded music tracks are typically (if not always) fuzzily mappable via recourse to metadata, typically the geographical region associated with the musical style represented in the track.
The proposed virtual environment features fixed, world-referenced virtual speakers, assigned earth-coordinate (lat/lon) positions, virtually broadcasting (with a particular power and radiation pattern) mapped music tracks selected at random from the collection of all such tracks located within a fixed radius of the speaker (or, possibly, broadcasting real sound captured by real remote sensors: "webmics," each one attached to a virtual speaker). User avatars can also radiate sound into the virtual world via mobile virtual speakers (connected to real user microphones), enabling communication (speech or music) among simultaneous users. Users traverse the virtual world via pointing devices, either FCG (fixed (user) center of gravity, e.g. a mouse or trackpad) or MCG (movable center of gravity, e.g. a GPS-compass unit), allowing them to control the course and orientation of their avatars. As they do so, binaural audio signals are computed as the acoustically correct sum of the virtual broadcasts emanating from all virtual speakers, accounting for avatar orientation, speed, and location (e.g. inverse-square attenuation, Doppler effects, head shadowing, and interaural intensity and phase differences), and output to real user headphones. Virtual wind noise can also be added as a function of speed. Visual position feedback is provided by Google Maps via an LCD screen.
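As a rough illustration of the per-speaker mixing just described, the following Python sketch computes left/right gains for a listener among several virtual speakers. It assumes flat-earth coordinates in metres, free-field inverse-square attenuation, and a simple pan law standing in for full interaural and head-shadow modelling; all names and values are illustrative, not part of the proposed system.

    import math

    def speaker_gains(listener_xy, listener_heading_rad, speakers):
        """Return one (left_gain, right_gain) pair per virtual speaker."""
        gains = []
        lx, ly = listener_xy
        for (sx, sy, power) in speakers:
            dx, dy = sx - lx, sy - ly
            dist = max(math.hypot(dx, dy), 1.0)   # clamp to avoid blow-up near the speaker
            amp = math.sqrt(power) / dist         # amplitude falls as 1/r (inverse square in intensity)
            azimuth = math.atan2(dy, dx) - listener_heading_rad
            pan = -math.sin(azimuth)              # +1 = speaker to the listener's right, -1 = left
            gains.append((amp * math.sqrt((1.0 - pan) / 2.0),
                          amp * math.sqrt((1.0 + pan) / 2.0)))
        return gains

    # Example: listener at the origin facing +y, one speaker dead right, one ahead-left.
    print(speaker_gains((0.0, 0.0), math.pi / 2,
                        [(10.0, 0.0, 1.0), (-5.0, 5.0, 1.0)]))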
In the projected implementation, Smithsonian Folkways Recordings, comprising over 2,200 albums and more than 40,000 tracks from around the world, provides a useful candidate for a global musical database. A non-obtrusive e-commerce application can also be provided: metadata for the maximally salient track is visually indicated, with an option to purchase. IPR issues are briefly addressed.
The environment is envisioned to function in three possible modes, which may also be combined: (a) in FCG mode, as a geographical world music browser; (b) in FCG mode, as a collaborative improvisational tool; (c) in MCG mode, as a virtual museum of world music, transforming any sufficiently large (with respect to MCG pointer resolution) open space into a virtual world. For instance, Smithsonian Folkways Recordings (unlike other Smithsonian Institution units) lacks a real music museum. Using (c), a large grassy square on the Washington, DC Mall could be mapped to the entire globe, and thus transformed into a technologically and musically appealing virtual museum featuring Smithsonian Folkways' rich world musical content.
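A minimal sketch of the space-to-globe mapping behind mode (c): a rectangular plot, given by its south-west and north-east corners, is scaled linearly onto the full lat/lon range. The corner coordinates below are hypothetical placeholders, not surveyed values.

    # Hypothetical corners of the physical plot (lat, lon), assumed for illustration.
    SW = (38.8890, -77.0260)   # south-west corner
    NE = (38.8900, -77.0240)   # north-east corner

    def plot_to_globe(lat, lon):
        """Map a GPS fix inside the plot to a virtual position on the whole globe."""
        u = (lat - SW[0]) / (NE[0] - SW[0])    # 0..1, south -> north
        v = (lon - SW[1]) / (NE[1] - SW[1])    # 0..1, west  -> east
        return (-90.0 + 180.0 * u, -180.0 + 360.0 * v)

    # A step of a few metres across the plot moves the avatar hundreds of
    # kilometres across the virtual globe.
    print(plot_to_globe(38.8895, -77.0250))    # roughly the plot centre -> (0.0, 0.0)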
Creating such a "brush" opens up a whole new range of creative applications that can be enjoyed casually anywhere, on the go, without having to use a PC. Examples include, but are not limited to, annotating digital photographs taken on a mobile phone, creating personalized emoticons for text communication, drawing characters for mobile games, and sharing all this new content with millions of other people.
The first version of 12 pixels is currently planned to be released in Japan by Sony as a free service.
More info: www.12pixels.com
Ivan Poupyrev graduated from Moscow Airspace University with a Master's degree in Applied Mathematics and Computer Science in 1992. While working on his doctoral degree at Hiroshima University, he spent three years as a Visiting Scientist at the Human Interface Technology Lab at the University of Washington, working on virtual reality and 3D user interfaces. After defending his Ph.D. in 1999, he stayed on as a post-doctoral researcher at the Advanced Telecommunication Research Institute International (ATR) in Kyoto. He joined Sony in 2001.
By providing an easy-to-use, personalizable product that gives positive reinforcement of good eating and eco-friendly habits, iHashi encourages consumers to make positive lifestyle changes. iHashi can keep track of and analyze your eating habits, calculate the number of trees you have saved, warn of extreme food temperatures or allergens, and offer points that can be redeemed for goods.
Upon graduating, she set off to explore the visual culture of Tokyo, Japan, and decided to study further as a Master's student. In the spring of 2007, she was awarded the Monbukagakusho scholarship as a research student under Prof. Masahiko Inakage in the Department of Media and Governance at Keio University. In the fall of 2008, she entered Keio University's new Graduate School of Media Design (KMD) as a Monbukagakusho scholar.
Her current research interests include interactivity design, bicycles, navigation, urban design, mobile devices, nostalgia, visual communication, and collective intelligence. Jess is involved in the establishment of a new bicycle navigation project at KMD, the design of the digital signage system for the new Collaboration Complex at Keio's Hiyoshi campus, interactive media art, and experimental video making.
Norbert Györbíró received his B.Sc. and M.Sc. degrees in Computer Science from the University of Szeged, Hungary, in 1999 and 2003, respectively. Between 1999 and 2007 he worked for the Nokia Research Center in Budapest and Helsinki, as a Researcher and, from 2006, as a Senior Researcher. He is currently pursuing Ph.D. studies in Computer Science at the University of Aizu, with interests in mobile and distributed computing, fuzzy logic, and soft computing methods.
The highly responsive graphics created inside any interactive 3D application today are the end result of a sequence of steps that transform geometry, textures, and other scene-description data into pixels. That end result may be a first-person view from inside a racing car, the graphics for an immersive VR environment, and so on.
In the motion picture industry, scenes that are infeasible to shoot on film due to time, monetary, or physical constraints are also generated using a 3D graphics pipeline, but since the degree of realism must be higher, the frames take much longer to generate. These frames are generated on separate computers (in parallel, to speed things up) and then "stitched" together into the final sequence.
Although the actual algorithms may vary, the transformations are fundamentally the same in both cases (real-time and offline).
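As a minimal illustration of this shared chain, the Python sketch below carries a single vertex through model, view, and perspective-projection transforms to window coordinates; the matrix values and the 640x480 viewport are illustrative assumptions, not taken from any particular renderer.

    import numpy as np

    def perspective(fov_y_deg, aspect, near, far):
        """OpenGL-style perspective projection matrix (assumed convention)."""
        f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
        return np.array([[f / aspect, 0, 0, 0],
                         [0, f, 0, 0],
                         [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                         [0, 0, -1, 0]])

    model = np.identity(4)                      # object placed at the origin
    view = np.identity(4); view[2, 3] = -5.0    # camera pulled back 5 units
    proj = perspective(60.0, 640 / 480, 0.1, 100.0)

    vertex = np.array([1.0, 1.0, 0.0, 1.0])     # homogeneous object-space position
    clip = proj @ view @ model @ vertex         # model -> view -> clip space
    ndc = clip[:3] / clip[3]                    # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * 640              # viewport transform to pixel coordinates
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * 480
    print("pixel approx. (%.1f, %.1f)" % (x, y))  # rasterization fills triangles from such points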
This tutorial session will provide an introduction to the stages in the 3D Graphics Pipeline, with examples and brief explanations of the inner workings of each stage. No mathematical background or prior knowledge of graphics (except at an end-user level) is assumed, but depending on audience interest, algorithms can be discussed. Questions are encouraged, both during and after the session.
mcohen@u-aizu.ac.jp