Creating interactive virtual auditory environments - Computer Graphics and Applications, IEEE
Authors
Abstract
Sound rendering [1] is analogous to graphics rendering when creating virtual auditory environments. In graphics, we create images by calculating the distribution of light within a modeled environment. Illumination methods such as ray tracing and radiosity are based on the physics of light propagation and reflection; similarly, sound rendering is based on the physical laws of sound propagation and reflection. In this article, we aim to clarify real-time sound rendering techniques by comparing them with visual image rendering. We also describe how to perform sound rendering based on knowledge of the sound source and listener locations, the radiation characteristics of the sound sources, the geometry of the 3D models, and material absorption data; in other words, the same data used for graphics rendering. In several instances, we use the Digital Interactive Virtual Acoustics (DIVA) auralization system, which we have been developing at the Helsinki University of Technology since 1994, as a practical example to illustrate a concept. (The sidebar "Practical Applications of the DIVA System" briefly describes two applications of our system.) In the context of sound rendering, the term auralization ("making audible") corresponds to visualization. Applications of sound rendering range from film effects, computer games, and other multimedia content to enhancing experiences in virtual reality (VR).
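To make the data flow concrete, here is a minimal, hypothetical Python sketch of the kind of geometry-based sound rendering the abstract describes. It is not the DIVA implementation; all names, parameter values, and the single broadband absorption coefficient are illustrative assumptions. The sketch renders a mono room impulse response from the direct sound plus first-order reflections in an axis-aligned shoebox room using the image-source method, applying 1/r distance attenuation and a propagation delay per path.

import math

SPEED_OF_SOUND = 343.0   # m/s at roughly 20 degrees C
SAMPLE_RATE = 44100      # Hz

def first_order_image_sources(source, room):
    """Reflect the source across each wall of an axis-aligned shoebox room
    (dimensions Lx, Ly, Lz, one corner at the origin) to obtain the six
    first-order image sources of the image-source method."""
    sx, sy, sz = source
    lx, ly, lz = room
    return [
        (-sx, sy, sz), (2 * lx - sx, sy, sz),   # x = 0 and x = Lx walls
        (sx, -sy, sz), (sx, 2 * ly - sy, sz),   # y = 0 and y = Ly walls
        (sx, sy, -sz), (sx, sy, 2 * lz - sz),   # z = 0 and z = Lz walls
    ]

def render_impulse_response(source, listener, room, absorption, length=0.1):
    """Build a mono impulse response: each sound path contributes a pulse
    delayed by distance / c, attenuated by 1 / distance, and scaled by a
    simplified pressure reflection factor sqrt(1 - absorption)."""
    n = int(length * SAMPLE_RATE)
    ir = [0.0] * n
    paths = [(source, 1.0)]                     # direct path, no wall loss
    reflection_gain = math.sqrt(1.0 - absorption)
    paths += [(img, reflection_gain) for img in first_order_image_sources(source, room)]
    for position, gain in paths:
        distance = math.dist(position, listener)
        delay = int(round(distance / SPEED_OF_SOUND * SAMPLE_RATE))
        if delay < n:
            ir[delay] += gain / max(distance, 1e-6)   # 1/r spreading loss
    return ir

# Example: a 6 x 4 x 3 m room with fairly absorptive walls (values assumed).
ir = render_impulse_response(source=(2.0, 1.5, 1.2),
                             listener=(4.5, 2.5, 1.2),
                             room=(6.0, 4.0, 3.0),
                             absorption=0.3)

A full auralization system would additionally need to model source directivity, frequency-dependent material absorption and air attenuation, higher-order reflections, late reverberation, and binaural reproduction (for example, HRTF filtering); the sketch above omits all of these.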
Similar articles
Keynote speaker: Towards immersive multimodal display: Interactive auditory rendering for complex virtual environments
Extending the frontier of visual computing, an interactive, multimodal VR environment utilizes audio and touch-enabled interfaces to communicate information to a user and augment the graphical rendering. By harnessing other sensory channels, an immersive multimodal display can further enhance a user’s experience in a virtual world. In addition to immersive environments, multimodal display can p...
3D for the Web
In the 1995 report, “Virtual Reality: Scientific and Technological Challenges,” the National Research Council surveyed key challenges in VR technology and research. Years of effort on distributed interactive 3D graphics applications have clearly identified networking issues as the critical bottlenecks preventing the creation of large-scale VEs [virtual environments] as ubiquitously as home-page...
jReality - interactive audiovisual applications across virtual environments
jReality is a Java scene graph library for creating real-time interactive applications with 3D computer graphics and spatialized audio. Applications written for jReality will run unchanged on software and hardware platforms ranging from desktop machines with a single screen and stereo speakers to immersive virtual environments with motion tracking, multiple screens with 3D stereo projection, an...
Identifying and reducing critical lag in finite element simulations
Interactive, immersive virtual environments allow observers to move freely about computer-generated 3D objects and to explore new environments. The effectiveness of these environments depends on the graphics used to model reality and the end-to-end lag time (i.e., the delay between a user's action and the display of the result of that action). In this paper we focus on the latter issue, which has...
Real-time Interactive 3D Audio and Video with jReality
We introduce jReality, a Java library for creating real-time interactive audiovisual applications with three-dimensional computer graphics and spatialized audio. Applications written for jReality will run unchanged on software and hardware platforms ranging from desktop machines with a single screen and stereo speakers to immersive virtual environments with motion tracking, multiple screens wit...