real-time rendered audio

Render audio based on environmental and material properties
(+15)

There's a lot of fuss put into games lately relating to the quality of graphics being rendered on screen. However, audio, the only other sensory input we get from a game, has been largely ignored. Currently, audio in games and other applications is formed from prerecorded sources. Some postprocessing effects (e.g. reverb, delay) have been added in recent years, but they don't actually account for the user's environment. The proposal is to have audio rendered, so to speak, in real time based purely on the physical properties of the environment and materials in which the sound is produced. There are no pre-recorded sound files or simple reverb effects, just computed data. This would make for a more immersive game experience, allow for more unique text-to-speech voices, etc.
timdorr, Aug 19 2001
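
A minimal sketch of what "computed data" could look like in practice, assuming a toy rectangular 2-D room, the image-source method for first-order reflections only, and an invented per-material absorption table (every name and number here is hypothetical, not taken from any real engine):

import math

SPEED_OF_SOUND = 343.0   # metres per second
SAMPLE_RATE = 44100

# Hypothetical absorption per material: fraction of energy lost at each bounce.
ABSORPTION = {"stone": 0.02, "carpet": 0.60, "wood": 0.15}

def impulse_response(source, listener, room_w, room_h, wall_materials, length=0.5):
    """Direct path plus first-order wall reflections in a rectangular 2-D room.

    wall_materials: materials of the (left, right, bottom, top) walls.
    Returns a list of samples that can be convolved with any dry source.
    """
    ir = [0.0] * int(length * SAMPLE_RATE)

    def add_path(image_x, image_y, loss):
        dist = math.hypot(image_x - listener[0], image_y - listener[1])
        delay = int(dist / SPEED_OF_SOUND * SAMPLE_RATE)
        if delay < len(ir):
            ir[delay] += (1.0 / max(dist, 0.1)) * (1.0 - loss)   # 1/r spreading

    sx, sy = source
    add_path(sx, sy, 0.0)                                        # direct path
    # Image sources: the source mirrored once across each of the four walls.
    images = [(-sx, sy), (2 * room_w - sx, sy), (sx, -sy), (sx, 2 * room_h - sy)]
    for (ix, iy), material in zip(images, wall_materials):
        add_path(ix, iy, ABSORPTION[material])
    return ir

ir = impulse_response(source=(1.0, 2.0), listener=(4.0, 1.5),
                      room_w=6.0, room_h=4.0,
                      wall_materials=("stone", "carpet", "wood", "stone"))

Convolving this response with any dry source, synthesized or otherwise, gives a room sound that changes whenever the geometry or materials do.
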

MPEG-4 Structured Audio http://sound.media.mit.edu/mpeg4/
Baking as we speak. [rmutt, Aug 19 2001]

Thief - The Dark Project (Eidos) http://www.eidosint...s/info.html?gmid=34
Also available: Thief II - The Metal Age [phoenix, Feb 06 2002]


       I'm still waiting to hear from a games company who were looking to employ me to work on a project doing this very thing. It's been weeks since the interview though, so I doubt they were that keen on me...
-alx, Aug 19 2001
  

       EAX is supposed to do this. Environmental Audio something.   

       In practice, what it does is give everything an echo.
StarChaser, Aug 19 2001
  

       Ummm...no. EAX does NOT do this. EAX just adds reverb and similar effects to create the effect of having an environment. It doesn't compute, for example, if there is a wall in front of a sound at some specific angle. All it does is add x amount of y effects to a prerecorded sound.
timdorr, Aug 19 2001
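
As an aside, the "wall in front of a sound at some specific angle" case could be approximated fairly cheaply with a line-of-sight test; a sketch in 2-D, with invented names, that muffles an occluded source using a one-pole low-pass:

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (2-D cross-product test)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def render_occluded(samples, source_pos, listener_pos, walls, muffle=0.15):
    """If any wall blocks the straight path, low-pass the samples to muffle them."""
    blocked = any(segments_intersect(source_pos, listener_pos, a, b) for a, b in walls)
    if not blocked:
        return list(samples)
    out, state = [], 0.0
    for s in samples:          # one-pole low-pass: y[n] = y[n-1] + k * (x[n] - y[n-1])
        state += muffle * (s - state)
        out.append(state)
    return out

# Example: a single wall segment between source and listener muffles the sound.
muffled = render_occluded([0.0, 1.0, 0.0, -1.0], (0.0, 0.0), (4.0, 0.0),
                          walls=[((2.0, -1.0), (2.0, 1.0))])
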
  

       I've had this idea.   

       The question is... would anyone notice? I don't think people would notice "a wall in front of a sound at some specific angle".   

       Instead, what this is really good for is avoiding repetition. If a monster is running, or a pipe is dripping, or an engine is running, don't just keep playing the same sample over and over again. Instead, generate it algorithmically, and vary it depending on how the monster is running (up stairs, down stairs, around a corner), how the pipe is dripping (it's a chaotic process), or how fast the engine is running.
egnor, Aug 20 2001
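
A sketch of the kind of algorithmic generation egnor describes, assuming a dripping pipe is modelled as a short decaying sine burst whose spacing, pitch and level are randomized around a "drips per second" parameter (purely illustrative; the parameter names are made up):

import math, random

SAMPLE_RATE = 44100

def render_drips(seconds, drips_per_second=1.5, base_pitch_hz=900.0):
    """Generate a stream of water-drip sounds; no two drips are identical."""
    out = [0.0] * int(seconds * SAMPLE_RATE)
    t = 0.0
    while t < seconds:
        t += random.expovariate(drips_per_second)            # chaotic spacing
        pitch = base_pitch_hz * random.uniform(0.8, 1.25)    # each drip slightly detuned
        amp = random.uniform(0.3, 1.0)
        start = int(t * SAMPLE_RATE)
        for n in range(int(0.08 * SAMPLE_RATE)):             # ~80 ms burst per drip
            if start + n >= len(out):
                break
            env = math.exp(-n / (0.015 * SAMPLE_RATE))        # fast exponential decay
            out[start + n] += amp * env * math.sin(2 * math.pi * pitch * n / SAMPLE_RATE)
    return out

drips = render_drips(5.0, drips_per_second=2.0)

The same handful of parameters could be driven by game state, so the drip rate and pitch track what is actually happening rather than looping one stored sample.
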
  

       Exactly what egnor said. And for most purposes, I'd submit that precisely natural imitation is a waste of processor cycles--rather than use an exquisite hydrologically accurate model for dripping water, for example, just kludge something together that sounds realistic. I assume game builders already use some sort of look-ahead processing so that certain sounds which might be needed in the next few seconds of play could be at least partially synthesized in advance?
Dog Ed, Aug 20 2001
  

       Timdorr, I said that EAX is SUPPOSED to, not that it does. I like the idea, although it'd take some ferocious machinery to figure out how to do it, especially on the fly. How many surfaces is that 'drip' sound going to bounce off, how long will it take to get to you from where it is...
StarChaser, Aug 20 2001
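
For what it's worth, the "how long will it take to get to you" part is cheap to compute; a back-of-envelope sketch using only the speed of sound, 1/r spreading, and an assumed per-bounce absorption:

SPEED_OF_SOUND = 343.0      # metres per second
SAMPLE_RATE = 44100

def path_delay_and_gain(path_length_m, bounces=0, absorption=0.2):
    """Delay (in samples) and a rough gain for a path of given length and bounce count."""
    delay_samples = int(path_length_m / SPEED_OF_SOUND * SAMPLE_RATE)
    gain = (1.0 / max(path_length_m, 0.1)) * (1.0 - absorption) ** bounces
    return delay_samples, gain

# A drip 10 m away, heard directly and via a two-bounce path of 17 m:
print(path_delay_and_gain(10.0))              # 1285 samples, i.e. about 29 ms after emission
print(path_delay_and_gain(17.0, bounces=2))   # arrives later and quieter

The expensive part is finding the paths themselves, not pricing each one.
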
  

       // The question is... would anyone notice? I don't think people would notice "a wall in front of a sound at some specific angle". //
I would agree so far as to say that *at first* people would not notice, but I would like to point out some ways in which audio rendering could make a perceptible difference.
  

       People today downplay the significance of sound in favor of vision mainly because they have trained themselves to do just that. Vision is a higher-bandwidth medium than hearing (there is a greater number of receptors in the human eye than in the human ear) and so people find it easier to send their intentional messages via visual cues, and omit the aural cues except those which get slipped in automatically by the subconscious mind. Since most of the "important" information is presented visually, people train themselves to focus their attention on the visual cues, leaving the aural cues for the subconscious mind to process "in the spare cycles".

       However, this does not mean that people are incapable of extracting information from the subtleties of sound. Indeed, unless you have damaged hearing from loud music or occupational noise, you might be amazed at the level of detail you can perceive in sound. If you train yourself accordingly, you just might be able to perceive that wall ahead at a certain angle and with certain surface properties, etc.   

       Just as an experiment, stand in a darkened room which has different types of surfaces around its perimeter and interior. Close your eyes, just to make sure you cannot see. Now listen. Listen to the ambient sounds, *how* they sound, and *where* they sound. Now slowly take a step or two in one direction, and then another. Pay attention to how the sounds change slightly. Get very close to each of the major objects in the room. Listen. You will begin to "hear" the major objects and surfaces in the room, and learn to recognize them.

       Now imagine a game environment which duplicates this sensation by rendering the acoustics of the room in two dimensions and tracking the frequency-dependent reflection and scattering of sounds as they encounter each surface in the room. Now make game play *dependent* on the player's ability to listen to the sound, and you will have a product worthy of multiple croissants!   

       Sorry for the long and rambling annotation. This happens to touch on one of my favorite topics.
BigBrother, Aug 23 2001
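
A sketch of the frequency-dependent reflection BigBrother describes, assuming the energy of a propagating sound is tracked in just three bands and each surface keeps an invented fraction of each band per bounce; a real renderer would trace many such paths, but the per-bounce bookkeeping is this simple:

# Hypothetical per-band reflection coefficients (low / mid / high energy kept per bounce).
SURFACES = {
    "stone":   (0.95, 0.90, 0.85),   # hard and bright
    "carpet":  (0.70, 0.40, 0.10),   # soaks up the highs
    "curtain": (0.80, 0.55, 0.25),
}

def reflect(band_energy, surface):
    """Energy remaining in each band after one bounce off the named surface."""
    return tuple(e * r for e, r in zip(band_energy, SURFACES[surface]))

def trace_path(surfaces_hit, start_energy=(1.0, 1.0, 1.0)):
    """Follow one sound path through a list of bounces and return what reaches the listener."""
    energy = start_energy
    for surface in surfaces_hit:
        energy = reflect(energy, surface)
    return energy

# A path that bounces off stone, then a curtain: highs drop much faster than lows.
print(trace_path(["stone", "curtain"]))   # (0.76, 0.495, 0.2125)

The listener's per-band totals across all traced paths are exactly the cue that lets you "hear" the wall, the carpet, or the curtain.
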
  

       Nice catch, BigBrother. I'm not aware that games provide much 'pseudosense' for visual or auditory consideration; or, if they do, I've gotten it as really cool sound or visual effects. No doubt it would take enormous computer resources to duplicate the effect of a player suspending flight, for example, and listening carefully for the sound of footfalls, wings, or cracking ice. What becomes "really cool" to the player is so subjective, isn't it?

       BTW: I always loved Mechwar for the sensory illusions I received from that game.
reensure, Aug 23 2001
  

       What?!   

       Come on. I know there are some old-school gamers who wouldn't change the Super Mario Brothers theme for nothin'.
iuvare, Aug 23 2001
  

       I like the idea of having games vary their sound based on "environmental" factors. I would suggest, though, that the key isn't to calculate what's most "realistic", but rather what "feels right". For example, in a Quake-style game I would keep for each brush an indicator of what type of material it is; for each region I'd have an indicator of its general reverb-ness at a few frequencies and a matrix indicating what areas were 'connected' better or worse than distance would suggest.   

       A monster walking on carpet would thus sound different from one walking on stone, and cavernous caves could be made to sound more 'boomy' than carpeted halls without requiring excessive processing.
supercat, Feb 06 2002
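
A sketch of the data supercat describes: a per-brush material tag, a per-region "reverb-ness" at a few bands, and a connection table for areas that carry sound better or worse than distance alone would suggest (all values invented for illustration):

# Per-brush material tag -> which footstep/impact variant to play and how loud.
MATERIALS = {"carpet": {"footstep": "soft_thud",  "loudness": 0.3},
             "stone":  {"footstep": "hard_click", "loudness": 1.0},
             "metal":  {"footstep": "clang",      "loudness": 1.2}}

# Per-region reverb-ness at a few bands (low, mid, high); 0 = dead, 1 = cavernous.
REGIONS = {"carpeted_hall": (0.2, 0.15, 0.1),
           "cavern":        (0.9, 0.85, 0.7)}

# How well sound carries between regions, beyond what distance would suggest
# (e.g. a vent connecting two rooms, or a heavy door between adjacent ones).
CONNECTIVITY = {("carpeted_hall", "cavern"): 0.6}

def footstep_event(brush_material, listener_region, source_region):
    """Pick the footstep variant and its effective level and reverb for one step."""
    mat = MATERIALS[brush_material]
    if listener_region == source_region:
        carry = 1.0
    else:
        carry = CONNECTIVITY.get((source_region, listener_region),
                                 CONNECTIVITY.get((listener_region, source_region), 0.1))
    reverb = REGIONS[listener_region]
    return mat["footstep"], mat["loudness"] * carry, reverb

print(footstep_event("stone", "carpeted_hall", "cavern"))

Nothing here is physically exact; it is a lookup of what "feels right", which is the point supercat makes.
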
  

       I think this is a fine idea given the following precepts: that the engine under discussion is capable both of synthesizing, from scratch, new soundscapes based on hints or meta-data provided by the (visual) environment rendering system, AND of enhancing (processing) existing sound samples and sound files based on the presented environmental situation of the moment.

       I believe that people would notice sonic modification/synthesis that is directly based on the instantaneous visual environment being presented. Perhaps they wouldn't notice it overtly but, rather, they would notice it as an overall quality, a nuance, of the immersion of the game or application--an aid to the suspension of disbelief.   

       An old adage amongst foley artists is that "the eye sees what the ear hears." The accuracy of the sound-to-visual match doesn't even have to be very close to "sell" the combined effect to our overall sensory selves. A shot of a car, when loosely dubbed with the sound of a perfectly tuned hot rod, leads the viewer to believe that she is looking at a muscle car. The same shot, when combined with the sound of an out-of-tune, backfiring car, leads her to see the car as a beater. The visual is the same but the perception is vastly different.

       So, a sound rendering engine that can either generate new sounds (synthesis) or enhance existing sounds (processing), based on some sort of lookup/meta data about the presented visual environment, is surely interesting to me. The notion, given by supercat, of a lookup value for the sonic qualities of each textural/material "brush" in, say, Quake, is a great idea and one that, I think, would enhance the state of the art.
bristolz, Feb 06 2002
  

       I invite you all to play Thief I or II with an EAX sound card and four-point surround speakers. The effect is astounding.   

       The fact that the game requires sneaking around in the shadows (and being able to maneuver by sound only) and listening for footsteps or conversation (and knowing *exactly* where they're coming from) makes it one of the most realistic I've ever played. Kill the lights and I swear you'll lose yourself in the game.
phoenix, Feb 06 2002
  

       That sounds really cool, phoenix. I've not heard of the game before.
bristolz, Feb 06 2002
  

       It seems to me that, if you weren't going to use WAVs or similar prerecorded files as the basis for the sounds and instead generate them dynamically, the result would sound very (non-wavetable) MIDI-like unless you took up huge amounts of processing power.   

       That being said, and seeing no end of processor speed increases in sight, I'm going to have to croissant++ this. :)
jester, Feb 06 2002
  

       Very nice idea, but as another poster said, that is what EAX is SUPPOSED to do, or at least what we hope it does if you gloss over the fine print in their ads.
vmaldia, Aug 01 2006
  

       You'll want good headphones. You can accurately model the audio environment within the game (given enough processor power), but as soon as this sound emerges from speakers into your room, your local audio environment will be overlaid onto the virtual one unless you're playing in an anechoic chamber or wearing headphones. Headphones are cheaper.
wagster, Aug 01 2006
  


 
