In some philosophical sense, no CD or MP3 "really" reproduces the sound it mimics accurately. The sound must be broken up into segments, each of which has a value for pitch, intensity, etc. Sadly, the fewer of these segments (called samples, by the way) that occur per second of recording, the lower the fidelity of the sound. And the more samples per second, the larger your file. This tradeoff between sound fidelity and file size is an abomination. Not only is it a pain in the neck, it is something of a lie: no matter how many samples per second you take, some data is still lost. One can argue that the data lost is about as important as the value of an infinitesimal in a calculus equation, but I still find this rather... displeasing.

I propose a different system: sounds should be stored on computers as an analog signal. Rather than running the sound through an ADC, run the microphone directly to the heads of the hard drive and move those heads like the needle of a phonograph. A file would have a digital header telling the computer how long the song is and its coordinates on the hard drive, followed by this analog signal. When it comes time to play the song, the computer program seeks out the "header" file and simply begins to move the read head over the analog signal at a set rate. Whatever value it finds there is sent directly through an amplifier (tube, if possible, for maximum anachronistic appeal) to your speakers. None of this lossy digital nonsense.

Because hard drives are inherently designed not to corrupt their data over time, the signal would not degrade as a cassette or record does. Because the sound is stored as an analog signal, there would be less data loss than in even an "uncompressed" digital sound file.
This would probably require a completely different file system from the one used in most operating systems today, and for that matter, I believe the hard drive's heads would need to be somewhat different from those that exist now. Hell, the disk substrate would need to be different too, since I don't want "bit"-sized magnetic particles; I want an undifferentiated stream of the bloody things. And I, being a lunatic audiophile, do not care.
-- Madcat, Oct 03 2003

CCR-81 http://pilot.ucdavi...images/26-1208b.gif [Shz, Oct 17 2004]

[Qnow]: No, I think shellac would last longer.
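The fidelity-versus-file-size tradeoff lamented in the idea above can be put in numbers. A quick sketch, assuming the standard CD parameters (44.1 kHz, 16-bit, stereo) and a hypothetical 4-minute song:

```python
# Back-of-envelope arithmetic for uncompressed PCM audio. CD parameters
# (44.1 kHz, 16-bit, stereo) are the standard; the 4-minute song length
# is an arbitrary example.
sample_rate = 44_100   # samples per second, per channel
bit_depth = 16         # bits per sample
channels = 2

bytes_per_second = sample_rate * bit_depth // 8 * channels
print(bytes_per_second)  # 176400

song_seconds = 4 * 60
print(bytes_per_second * song_seconds / 1e6)  # 42.336 (MB, uncompressed)
```

Halving the sample rate halves the file, at the cost of halving the representable bandwidth, which is the tradeoff in question.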
The Multi-level (ML) CD format that TDK developed a few years back uses special media and a stepped-strength laser to encode discs with variable-density marks, essentially turning what was a 1-bit pulse-type storage into a 4-bit one with varying depths of laser contact. I wonder if one could modify this for audio use, with a continuously variable laser controlled by an analog circuit. It should be possible then to read this back (you'd need two tracks for stereo audio).
It isn't accurate enough for data storage, but it might work for audio...
-- Cedar Park, Oct 04 2003

[Madcat]: If you're so fixated on pristine analog storage, get yourself an Ampex ATR100/102 2-track tape deck. Many consider it to be the best-sounding recording device ever made, and it is the preferred mastering machine of many an engineer.
Bring money, as they are quite sought after.
-- bristolz, Oct 04 2003

You could record them on laserdisc.
-- lawpoop, Oct 04 2003

Incorrect... hard drives can't store an analog signal without degradation. The reason data is stored 100% accurately is that it is digital, and a little noise makes no difference unless it's so large that it makes it impossible to distinguish 0s from 1s.
And if sampled higher than the Nyquist rate (2x the highest frequency component), the original signal can be recovered with 100% accuracy. Sure, you'd lose higher-frequency (inaudible) components, but then no analog system can have infinite bandwidth either.
-- vp, Oct 04 2003

It's not strictly one pattern for 0s and one pattern for 1s, but more like a pattern for each of a group of bits. For example, four different patterns could store each of 00, 01, 10, and 11.
And no, the Nyquist rate is twice the highest input frequency. In practice you might need to sample higher than that because it's not possible to design a filter with that sharp a cutoff.
-- vp, Oct 04 2003

If an input waveform consists entirely of frequencies below Nyquist, the sampled data will uniquely identify the input waveform, so a reconstructed waveform which consists entirely of frequencies below Nyquist will precisely match the original.
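The claim above — that samples of a below-Nyquist waveform pin it down exactly — is what Whittaker-Shannon (sinc) interpolation demonstrates. A rough numeric sketch, assuming a pure 1 kHz test tone sampled at 10 kHz and an arbitrary truncation window:

```python
import math

# Truncated Whittaker-Shannon (sinc) interpolation of a band-limited signal.
fs = 10_000   # sample rate (Hz)
f = 1_000     # test-tone frequency, well below Nyquist (5 kHz)

def x(t: float) -> float:
    return math.sin(2 * math.pi * f * t)

def sinc(u: float) -> float:
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(t: float, n_half: int = 5000) -> float:
    # Sum samples x[n/fs] weighted by sinc(t*fs - n) over a finite window
    # centred on t; the untruncated (infinite) sum recovers x(t) exactly.
    n0 = round(t * fs)
    return sum(x(n / fs) * sinc(t * fs - n)
               for n in range(n0 - n_half, n0 + n_half + 1))

# At off-grid instants, the reconstruction lands on the original waveform.
for t in (0.12345, 0.250013, 0.333337):
    assert abs(reconstruct(t) - x(t)) < 1e-2
print("sinc reconstruction matches the original tone")
```

The truncation is why this is only a sketch: the residual error shrinks roughly as 1/N with window size, and — as the next annotation explains — convergence gets much worse as the tone approaches Nyquist.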
Unfortunately, producing such a reconstructed waveform accurately from the samples is a decidedly non-trivial task if the frequencies of interest extend very close to Nyquist. For example, imagine that a continuous 4999Hz sine wave is being sampled at 10kHz. An examination of the recorded sample data will show a 5kHz wave which grows and shrinks twice per second and flips phase each time it shrinks to nothing.
Mathematically, if one takes a one-second sample of this data, performs an FFT, and then reconstructs the signal from the Fourier series, the result will be an accurate reconstruction of the original waveform. In practice, though, no reconstruction filter would do that; instead, any reconstruction filter would output a 5000Hz sine wave, ring-modulated at 1Hz (even though such a reconstruction has frequency components at 4999Hz and 5001Hz).
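The growing-and-shrinking envelope described above can be checked directly: the samples of a 4999Hz sine at 10kHz satisfy x[n] = (-1)^(n+1) · sin(πn/5000), i.e. a full-rate 5kHz alternation under an envelope that swells and vanishes twice per second. A small sketch:

```python
import math

# Samples of a 4999 Hz sine taken at 10 kHz, per the example above.
fs = 10_000
f = 4_999

def sample(n: int) -> float:
    return math.sin(2 * math.pi * f * n / fs)

# Identity: x[n] = (-1)^(n+1) * sin(pi*n/5000), so |x[n]| traces the
# envelope |sin(pi*n/5000)|, which peaks and nulls twice each second.
for n in (0, 1250, 2500, 3750, 5000):
    envelope = abs(math.sin(math.pi * n / 5000))
    assert abs(abs(sample(n)) - envelope) < 1e-9

print(round(abs(sample(2500)), 3))  # 1.0  (envelope peak at 0.25 s)
print(round(abs(sample(5000)), 9))  # 0.0  (envelope null at 0.5 s)
```

At each null the sign pattern of the alternation inverts, which is the phase flip the annotation mentions.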
Having the input data stripped of any frequency components above about 0.8 Nyquist helps to avoid odd aliasing effects like the above, which could otherwise occur on both playback and recording. Although in some ways a "brick wall" filter is ideal, in practice a filter with a more gradual roll-off is often better. For example, suppose our sampling setup had a "brick wall" filter at 4kHz and the input audio signal was a 3999Hz sine wave modulated at 2Hz. Since such a sine wave has frequency components at 3997Hz and 4001Hz, a brick-wall filter would strip the 4001Hz component, leaving a continuous 3997Hz sine wave. A more gradual filter would avoid this problem.
-- supercat, Oct 04 2003
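The sideband arithmetic above (3999Hz modulated at 2Hz → components at 3997Hz and 4001Hz) is the product-to-sum identity sin(A)cos(B) = ½[sin(A+B) + sin(A−B)], which a numeric spot check confirms:

```python
import math

# A 3999 Hz sine amplitude-modulated at 2 Hz equals the sum of two sines
# at 3997 Hz and 4001 Hz, so a brick-wall filter at 4 kHz would pass only
# the lower sideband. Test instants below are arbitrary.
def modulated(t: float) -> float:
    return math.sin(2 * math.pi * 3999 * t) * math.cos(2 * math.pi * 2 * t)

def sidebands(t: float) -> float:
    return 0.5 * (math.sin(2 * math.pi * 4001 * t) +
                  math.sin(2 * math.pi * 3997 * t))

for k in range(1000):
    t = k / 44_100
    assert abs(modulated(t) - sidebands(t)) < 1e-9
print("identity holds")
```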