Yes. A RAM disk is software that makes a piece of memory accessible as a file system. The disk contents are not persistent. It trades a large amount of memory for speed.

This idea is for software, running either on the disk controller or in a low-level system driver, that dynamically indexes a disk at system boot time. The disk contents are persistent. It trades a small amount of memory and a lot of boot-up speed for robustness.

Unfortunately, this is poo (or poe; I'm still trying to clean my britches from that one). The File Allocation Table is how the operating system finds the files to begin with. If the FAT didn't exist, you'd have to go on vacation every time you booted so the OS would have time to locate every portion of every file on your hard drive. The problem gets worse as drive sizes increase.
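To make that concrete, here is a minimal sketch of what the FAT buys you: a table mapping each cluster to the next cluster of the same file, so the OS can find a whole file by following a short chain instead of examining the disk. The table layout and names below are illustrative, not any real driver's API.

```python
END = -1  # hypothetical end-of-chain marker

# fat[cluster] = next cluster of the same file, or END
fat = {7: 12, 12: 31, 31: END}  # one three-cluster file starting at cluster 7

def file_clusters(first_cluster, fat):
    """Follow the FAT chain from a file's first cluster."""
    chain = []
    cluster = first_cluster
    while cluster != END:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(file_clusters(7, fat))  # [7, 12, 31] -- three table lookups, no full-disk scan
```

Without that table, the only way to recover the chain is to examine every cluster on the disk, which is exactly the slow boot-time scan the annotations here object to.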
FWIW, FAT file systems aren't used as much as they used to be. Win2k, for instance, can write dynamic disk information to hard drives (using NTFS) so that disk partitions can span physical hard drives. NTFS also allows some file-recovery options using a process not unlike what you describe, but it only searches data that isn't already linked to some other piece of data, which speeds up the process considerably.

1. Get a real filesystem already.

2. As others have mentioned, the FAT data is not redundant with the file data; the FAT describes how data blocks are organized into files.

(I once accidentally overwrote the FAT on a floppy disk containing the only copy of my mother's book manuscript, and spent the next couple of days piecing the sectors together by hand. It was a giant verbal jigsaw puzzle, and not something that could be done automatically.)

3. Even if it were redundant, this is the equivalent of running a disk recovery program every time you boot. Why not just keep the FAT as is, and use the recovery utility only when you need to? That's the theory behind chkdsk and fsck, so your idea is "baked" in that sense.

4. RAID and journalling are better ways to explicitly manage redundancy to preserve data in the face of (different kinds of) failure.

Yes, useless as it may be. (Without any internal consistency indicator, if they differ, how do you know which is correct?)
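One possible "internal consistency indicator" (purely as an illustration of the principle, not the on-disk format of any real FAT variant) is to store a checksum and a generation counter alongside each copy, so the driver can tell a stale or corrupted copy from a good one at mount time:

```python
import zlib

def pack_copy(fat_bytes, generation):
    """Bundle a FAT image with a generation counter and a CRC32 of its contents."""
    return {"data": fat_bytes, "gen": generation, "crc": zlib.crc32(fat_bytes)}

def pick_valid(copy_a, copy_b):
    """Prefer the newest copy whose checksum still matches its data."""
    candidates = [c for c in (copy_a, copy_b) if zlib.crc32(c["data"]) == c["crc"]]
    if not candidates:
        raise IOError("both FAT copies are corrupt")
    return max(candidates, key=lambda c: c["gen"])

good = pack_copy(b"\x07\x0c\x1f", generation=42)
bad = dict(pack_copy(b"\x07\x0c\x1f", generation=43), data=b"\x07\x00\x1f")  # bit rot after checksumming
print(pick_valid(good, bad)["gen"])  # 42 -- the newer but corrupted copy is rejected
```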
2 fat cats mad - ok I'm 4

You could always fill an entire disk with a single file (basically a database); then the FAT only handles details of one file, so the problem diminishes to trivial proportions.

But, hey, what about consistency of the database...?

(I think MS are working towards something like this in the future anyway, so it is probably beta-baked.)

More seriously: sensible use of partitions can help a lot with this problem. Hopefully only one goes belly-up at a time and, as it is only a fraction of your disk space, it takes only a similar fraction of the time to run scandisk.

I think CP/M probably generated its allocation map on the fly, since what I've read of the CP/M disk format suggests that each directory entry contained a list of the blocks used by the file in question, but there was no explicit indication of which blocks on the disk were unused.
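That scheme can be sketched in a few lines: scan every directory entry, take the union of the blocks they claim, and everything else is free. The directory layout and names below are made up for illustration; real CP/M packs the block numbers into fixed-size extent records.

```python
TOTAL_BLOCKS = 16

# hypothetical directory: filename -> blocks claimed by that file's entries
directory = {
    "LETTER.TXT": [2, 3, 7],
    "GAME.COM":   [4, 5],
}

def build_free_map(directory, total_blocks):
    """Free space is simply every block that no directory entry claims."""
    used = {block for blocks in directory.values() for block in blocks}
    return [block for block in range(total_blocks) if block not in used]

print(build_free_map(directory, TOTAL_BLOCKS))
# [0, 1, 6, 8, 9, 10, ...] -- rebuilt in memory at boot, never stored on disk
```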
Otherwise, what I'd like to see as a file-system improvement would be, in addition to having a FAT, also storing at the start of each cluster a record containing: (1) a unique identifier for the file of which the block is or was a part; (2) an index indicating which part of the file this block represents; (3) a pointer to the next block of the current file, if any; (4) a pointer to the previous block of the current file, if any.

While cluster sizes would no longer be quite a power of two, I don't think this should in most cases pose too much difficulty, since on modern processors the time required to perform a divide to locate a sector of a long file would be small compared with the I/O time required to actually read the data. The handling of unbuffered files would have to change, but it could probably be emulated acceptably if necessary. The big advantage would be that while the FAT could be used for rapid access to data, the verification of tag values on data blocks would allow for double-checking the FAT, thus ensuring any corruption could be immediately detected. Additionally, in the event of severe damage to the FAT, it would be possible (albeit slow) to reread the file-system structure data off individual disk clusters and rebuild all or nearly all of the data on the drive.
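A toy version of that slow rebuild pass, assuming a made-up per-cluster tag of (file id, index within file) rather than any real on-disk layout:

```python
from collections import defaultdict

# simulated disk: cluster number -> (file_id, index_within_file, payload);
# a file_id of None marks a free cluster with no tag
disk = {
    5:  ("file-A", 1, b"world"),
    9:  ("file-B", 0, b"lorem"),
    11: ("file-A", 0, b"hello "),
    14: (None, None, b""),
}

def rebuild_files(disk):
    """Recover each file's cluster order purely from the per-cluster tags."""
    parts = defaultdict(list)
    for cluster, (file_id, index, _payload) in disk.items():
        if file_id is not None:
            parts[file_id].append((index, cluster))
    return {fid: [cluster for _, cluster in sorted(pieces)]
            for fid, pieces in parts.items()}

print(rebuild_files(disk))  # {'file-A': [11, 5], 'file-B': [9]}
```

The next/previous pointers in the proposal would let such a rebuild cross-check its ordering as well, at the cost of a few more bytes per cluster.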
Use a journalling filesystem, and corruption becomes a thing of the past (no matter how badly you crash your machine). Problem solved.
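For anyone unfamiliar with the term, the principle reduces to write-ahead logging: describe the metadata update in a journal first, then apply it, and after a crash replay only the entries that were fully committed. The toy below illustrates that principle only; it is not ext3's (or any real filesystem's) record format.

```python
journal = []   # stand-in for the on-disk journal area
metadata = {}  # stand-in for the filesystem's metadata blocks

def begin(txn_id, updates):
    """Record the intended update in the journal before touching metadata."""
    journal.append({"txn": txn_id, "updates": updates, "committed": False})

def commit(txn_id):
    """Flipping one flag is the atomic commit point; only then apply in place."""
    for record in journal:
        if record["txn"] == txn_id:
            record["committed"] = True
            metadata.update(record["updates"])

def replay_after_crash():
    """Re-apply committed transactions; uncommitted ones are simply ignored."""
    for record in journal:
        if record["committed"]:
            metadata.update(record["updates"])

begin(1, {"inode 7 size": 4096})
commit(1)
begin(2, {"inode 9 size": 512})  # "crash" before commit: never applied
replay_after_crash()
print(metadata)  # {'inode 7 size': 4096}
```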
The ext3 filesystem (used by Linux) keeps a journal.

The full scan on startup would be very, very slow, and you still need some means to know with certainty where one file ends and the next begins, and that's not even taking into consideration the complexity of fragmentation. Why not just distribute multiple copies of the FAT in a few places on the disk if you're looking for redundancy? Better yet, why not use a journaling filesystem?
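"Very, very slow" is easy to put rough numbers on: rebuilding the index by examining every cluster means reading the whole disk. The drive size and read rate below are invented for illustration, not measurements of any particular disk.

```python
disk_size_gb = 120        # assumed capacity
read_rate_mb_per_s = 60   # assumed sustained sequential read rate

full_scan_minutes = disk_size_gb * 1024 / read_rate_mb_per_s / 60
print(round(full_scan_minutes))  # ~34 minutes of scanning at every boot
```

Reading a couple of megabytes of FAT, by contrast, takes a fraction of a second.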
//why not just use a journaling filesystem//

Because this thread seems to be very old. Back in 2001 we didn't have the GParted we know and love today.

Journalling file systems existed in 2001. So did FAT systems with redundant copies of the FAT for increased reliability. (Of course, back then many people were forced to use FAT for annoying compatibility reasons.)