halfbakery
Much less information, and much more quality, to pass to the computer.
Send frequency-domain instead of time-domain information. No need for FFTs etc.: get the frequencies directly from nature. The result would be a "digital microphone" giving fantastic quality at the price of a simple mic.
(?) Artificial Cochlea
http://diwww.epfl.c...ik/eap/cochlea.html [half, Oct 04 2004]
|
|
Er... what? Please explain this in more detail. My brain doesn't have the necessary information to fill in the gaps in what you wrote. |
|
|
Possibly a bit over our heads, but I'll give it a stab.
So you use the artificial cochlea schematic as shown on the link to output frequency data to a computer's soundcard, instead of relying on the microphone to transmit a raw signal response, which has the effect of filtering the noise selectively? |
|
|
The link was a wild shot at attempting to find something possibly semi-related to what [pashute] may or may not have been referring to. I was hoping somebody might extrapolate, interpolate or fabricate some detail. |
|
|
The inner ear is the shape of a nautilus shell. As it narrows, it is resonant to different frequencies along its length, so the tiny hairs along its length pick up different frequencies. What it is, in effect, is lots of microphones in parallel, each with a very narrow frequency band within which it responds. |
|
|
If you built a microphone on these principles, you would have an amplitude for each frequency range rather than one amplitude that you have to split into frequencies before processing it. |
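The "lots of microphones in parallel" picture can be sketched in software: each cochlear hair behaves like a resonator tuned to one frequency, and you read out how strongly each one rings. A minimal Python sketch (the two-pole resonator model, function name, and parameter values are illustrative assumptions, not anything from the original post):

```python
import math

def resonator_response(samples, fs, f0, r=0.99):
    """Two-pole resonator tuned to f0: a software stand-in for one
    cochlear 'hair'. Returns the RMS of its output, i.e. how strongly
    this frequency is present in the signal."""
    w0 = 2 * math.pi * f0 / fs
    b1, b2 = 2 * r * math.cos(w0), -r * r
    y1 = y2 = 0.0
    energy = 0.0
    for x in samples:
        y = x + b1 * y1 + b2 * y2
        y2, y1 = y1, y
        energy += y * y
    return math.sqrt(energy / len(samples))

fs = 8000
tone = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
# A bank of resonators, one per frequency, read out in parallel:
bank = {f0: resonator_response(tone, fs, f0) for f0 in (220, 440, 880)}
# the 440 Hz resonator responds far more strongly than its neighbours
```

Each resonator works purely in the time domain, sample by sample, which is the point: the frequency readout falls out of the physics (here, the filter) rather than a Fourier transform.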
|
|
I guess you get a cleaner signal because you have to do less work on it and, because each small hairy mike is dedicated to one frequency, it can presumably perform its job better. |
|
|
You could test this last bit by making woofer, mid-range and tweeter mikes and seeing if you get better quality by strapping these together than you would with a single mike. |
|
|
On the principle that I think I understand what's going on despite the brevity, I'm going to tentatively award a croissant, though I'd prefer having to do less work to understand the idea. |
|
|
So this is about hit and miss then. |
|
|
Sorry people, and thank you half! The ear works in a different way from a mic. I was proposing an "artificial ear"-like device which would be as cheap as a mic and might give "better quality" audio information (behaving a bit like a soundcard with audio software). |
|
|
FFT, frequency domain, etc.: buzzwords of the audio industry. |
|
|
Hmmm, that didn't clarify this at all. I can still only guess at the way this works. <Withdraws croissant> |
|
|
I'll try to be clear. The mic works by moving in and out with the air pressure created by sound. Once electronically recorded, this creates a series of numbers going up and down, which is saved to a (sound) file or worked on. This information is called time-domain sound. |
|
|
The ear works by having a snail-shaped tube with physical devices along it (tiny hairs and sensors) which record the frequencies present at any given time. Say you hear a foghorn (low frequency): the sound causes the sensors at the outer part of the snail-like tube, which is wider, to give off a signal. When you hear a violin, the inner sensors give off their signals. While we talk, the frequencies change all the time, and the ear "records" these changes. |
|
|
If we want to compress the sounds and save them in MPEG-like files, the computer has to do some work: change the signal from the list of numbers in the time domain (from the mic) into a new list of numbers in the frequency domain, then work further on those new numbers. This math is called digital signal processing (DSP), and the particular function that converts from the time domain to the frequency domain (and vice versa) is called the FFT (fast Fourier transform). |
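The time-to-frequency conversion described above is exactly the step the proposed mic would make unnecessary. As a concrete illustration of what that conversion does today, a minimal numpy sketch (the signal and its frequencies are made up for the example):

```python
import numpy as np

fs = 8000                        # sampling rate, Hz
t = np.arange(fs) / fs           # one second of sample times
# Time domain: the "list of numbers going up and down" from the mic,
# here a low foghorn-ish tone plus a higher violin-ish one.
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Frequency domain: what the proposed cochlear mic would output directly.
spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peaks = freqs[spectrum > 0.1]    # frequencies actually present
# peaks → [100., 1000.]
```

The FFT here chews through 8000 samples to recover two numbers; a cochlear sensor array would, in principle, report those two frequencies directly.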
|
|
So basically the idea is to make an "artificial ear" as a mic replacement for the future of audio. |
|
|
There are several other benefits to this "mic", but I've probably lost most of you by now anyway. If anyone has reached this point and wishes to hear more, simply ask. Thank you for your time. |
|
|
FFT = fast Fourier... I should've known that one. Been a while since I've done 'em, though. |
|
|
It shouldn't be complex. It's just a tube and a sensor. |
|
|
Sorry, [pashute]. I'm afraid that it must be too complex for me. Is this sensor not a microphone? It has to be a pressure transducer of some type, no? I don't quite grasp how this sensor is converting the pressure wave (sound) directly to frequency domain information. |
|
|
In my admitted ignorance in the area of sound processing, it sounds like this is a proposal to build a spectrum analyzer with a tube and a sensor. I must be missing something important. |
|
|
What is needed is a sensor along the tube, which tells where the tube is being pressed. In the human ear it is achieved by a simple mechanical array of hairs. |
|
|
At some museums they use sand (or any other grain) to show the displacement along a tube. A simple photograph (or your eyes) show you what frequencies are being heard. |
|
|
There are many different sensors out there now which could do the job, coupled with some material to give the desired effect (which the sensor will sense). That's the other half of the baking. |
|
|
But I'm certain it's possible. (I work with this stuff every day) |
|
|
I remember this physics lesson vividly. It involved a glass tube with sand in it, called (Mr Croft enunciating carefully) "Kundt's Tube." It was agitated by a metronomic device which arrayed the sand in wavelengths, called a (ahem) "Vibrator." |
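The sand spacing in Kundt's tube encodes the frequency directly: the piles collect at the nodes of the standing wave, which sit half a wavelength apart. A quick back-of-the-envelope check (the speed of sound and drive frequency are assumed values, not from the thread):

```python
# Sand in Kundt's tube collects at the nodes of the standing wave,
# so the pile spacing is half a wavelength: spacing = v / (2 * f).
v = 343.0     # speed of sound in air at room temperature, m/s
f = 1000.0    # driving frequency, Hz
spacing = v / (2 * f)
# spacing → 0.1715 m: about 17 cm between piles at 1 kHz
```

Measure the pile spacing from a photograph, invert the formula, and you have read the frequency off the tube with no electronics at all.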
|
|
Good idea, pashute. You are forgetting one thing, though - what is more important than the frequencies for understanding the signal is the *phase* of the frequencies. The magnitudes carry less information than the phase. |
|
|
However, that's just being pedantic - the cochlear mic system can easily be designed to collect phase info as well. :-) Croissant for you! |
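The point about phase can be shown numerically: two signals can share an identical magnitude spectrum yet be different waveforms, with the difference carried entirely by the phase. A small numpy sketch (the test signals are arbitrary choices for illustration):

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
# Two signals with the same two frequency components;
# in b, the 10 Hz component is shifted by 90 degrees.
a = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 5 * t) + np.cos(2 * np.pi * 10 * t)

mag_a = np.abs(np.fft.rfft(a))
mag_b = np.abs(np.fft.rfft(b))

assert np.allclose(mag_a, mag_b, atol=1e-8)  # identical magnitude spectra...
assert not np.allclose(a, b)                 # ...yet different waveforms
```

So a cochlear mic that reported only per-band amplitudes would discard exactly this information; it would need to report each band's phase (or raw band-limited waveform) as well.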
|
|
I wonder if you could somehow pass information from this microphone, using parallel signals (one from each "hair" or specific frequency), to an oppositely constructed speaker system. Basically one to one, to demonstrate the idea. |
|
|
DSP:
I seem to remember that the human ear cannot easily distinguish frequencies that are close to each other (except for beat effects), and part of the signal processing is to remove frequencies that are close to each other. Would that mean the "hairs" can be placed discrete distances apart, with no need for intermediate frequencies? |
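The beat effect mentioned above falls out of a trigonometric identity: two close tones sum to a tone at their average frequency, amplitude-modulated by cos(π(f1 − f2)t), which is heard as beating at the difference frequency |f1 − f2|. A quick numeric check in Python (the frequencies and time instant are chosen arbitrarily):

```python
import math

f1, f2, t = 440.0, 442.0, 0.123   # two close tones, one sample instant
s = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# sum-to-product identity:
# sin(a) + sin(b) = 2 * sin((a + b) / 2) * cos((a - b) / 2)
# i.e. a tone at the average frequency, modulated at the difference
s2 = 2 * math.sin(math.pi * (f1 + f2) * t) * math.cos(math.pi * (f1 - f2) * t)

assert abs(s - s2) < 1e-9   # identity holds numerically
```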
|