halfbakery
419 delete
securely delete files and hide their existence (on a hard-drive you want to keep on using)
Simply deleting files typically doesn't remove the file information from the storage device (floppy or hard-disk etc). (See link)
The recommended process to securely delete files is to overwrite them with random data (multiple times for the paranoid requiring extreme privacy).
Of course, this implies a requirement for a large amount of random data, which is something computers are basically rubbish at producing. [1] Fortunately, the fastest-growing industry in Nigeria is to provide you with large amounts of random data, for free.
I propose that spam should be collected and stored in a cyclically replaced store. That is, when you identify spam it needs to get moved to a temporary folder which holds the last 'n' kb of spam you've received. I believe that many existing email viewers can be configured to perform this task directly.
When a private file needs to be deleted, it would first be overwritten with random-seeded pseudo-randomness as usual. The new step is to overwrite it with spam from the spam-cache, after which the original spam itself is deleted.
The data-blocks released from the spam-cache should ideally be reused, at least transiently, for temporary storage of other files (virtual memory, for example), while the data-blocks of the overwritten files, now containing spam, are linked into the spam repository.
Thus the file leaves no incriminating trail of random data. The only potential sign of anything out of the ordinary would be extra copies of some spam (which would be overwritten over time anyway) and the program itself, which could potentially be stored on other media.
[1] Generating pseudo-random numbers algorithmically from a small amount of seed data is of course easy. However, if this output is stored 'in clear' it may simply identify areas suitable for closer analysis.
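The proposed deletion sequence (random overwrite, then spam overwrite, then unlink) could be sketched as follows. The function name `spam_overwrite_delete` is hypothetical, and the spam-cache is assumed to be supplied as bytes; note also that journaling filesystems and wear-levelling drives may write new blocks rather than overwriting in place, so this is illustrative only.

```python
import os
import secrets

def spam_overwrite_delete(path, spam_cache, random_passes=1):
    """Sketch of the '419 delete': overwrite a file with random data,
    then with cached spam, then unlink it. Illustrative only."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(random_passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random-seeded overwrite
            f.flush()
            os.fsync(f.fileno())
        # Final pass: innocuous-looking spam, tiled to fill the file
        spam = (spam_cache * (size // len(spam_cache) + 1))[:size]
        f.seek(0)
        f.write(spam)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```

The final state of the freed blocks is then just another copy of spam, rather than a conspicuous slab of high-entropy data.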
Analysis of security problem, and 2nd hand hard-disks.
http://www.computer...l.pdf?SMIDENTITY=NO [Loris, Oct 04 2004, last modified Oct 05 2004]
Secure Deletion of Data from Magnetic and Solid-State Memory
http://www.cs.auckl...ubs/secure_del.html (Peter Gutmann's article) [Loris, Oct 04 2004, last modified Oct 05 2004]
No need to delete anything for security
http://desdrive.com/features.php [kbecker, Oct 04 2004, last modified Oct 05 2004]
Complete Delete
http://www.halfbake...a/Complete_20Delete (Shameless self promotion) [Detly, Oct 04 2004, last modified Oct 05 2004]
Why does the data used for the overwrite need to be random? Surely multiple overwrites using 101010101010101... would cover the data pretty well.
[GenYus]: The data need not be random, but the alternative stinks. Fewer passes are required to make your sensitive data unrecoverable if it is overwritten at each pass with cryptographically secure randomness and not mere algorithmically generated pseudo-random gibberish. If you only overwrite with some known pattern, that can be filtered out and the underlying data still recovered, even after a dozen passes.
Not so, GenYus. Even a completely erased hard disk will have some memory of the magnetic polarisation it had before it was wiped, particularly if the data was on there for a long time. Writing a known pattern to the hard drive will weaken the memory of this stored information, but the fact that it is a known pattern makes it easier to filter out.
Just to give an idea of the methods of data retrieval where budget is no object: it is not uncommon to remove the disks from the drive and read them using sensitive analogue equipment. This will give a different reading for a bit that was a one but is now a zero than for a bit that has been a zero for a long time. Readings between the tracks are taken, as data that has been on the drive for a long time tends to spread out further than fresh data and, once erased, the old data may be clearly visible off-track. Finally, the drive can be written to, to see how easily the surface changes from one state to another. Taking less current to flip a bit one way may indicate that it has been in that state previously.
Writing data that cannot be determined retrospectively over areas of the disk at the time of deletion would certainly help mask deleted data but, even with this system, the safest way to delete the data is to destroy the hard drive as completely as possible.
Covering the data with known values doesn't do much to stop a determined recovery agent with the proper tools, since the magnetic medium is, on a bit-by-bit basis, analogue. That is to say, when you write a "1", it's not exactly a 1, but may be closer to 0.8 or 1.2. The drive electronics simply treat anything above a set level as a "1". By using very precise tools, it's possible to find out exactly how the write head modifies the medium, and from that, to figure out what was there before the write.
That aside, let's say you rewrite a sector a few hundred times. There's still a phenomenon called "magnetic creep", whereby the magnetic field of a particular bit will affect the polarity of the material adjacent to it, and what was once a well-defined spot of data defining a bit is now a slightly spread-out blob of data. Still readable, but a little bit less sharp. Newly written data will of course be sharp, so if you can manage to read just outside the sharp edges of the new data, you can recover the old data. The longer a piece of data has been in the same spot on disk, the easier it will be to recover.
The only way to be completely sure that no one can recover your data from your old drive is to destroy it in such a way that it cannot be reassembled. One pretty sure way is to melt the drive under a block of thermite. A little less pyrotechnic would be to break up the platters and dispose of the pieces at several separate recycling facilities (so they'll be melted into separate pools of slag).
If you're not that paranoid, there are software packages that will overwrite your data with 0000....1111.....0101....1010....random....inverse random.... however many times you want. This will wipe it well enough that the effort involved in recovering the data will not be worth it, unless you're known to be trafficking in top secret information about the North Korean nuclear weapons test program, the location of Osama Bin Laden, or something else similarly sensitive.
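The pass sequence those packages run (zeros, ones, alternating bits, random, inverse random) can be sketched roughly as below; real tools such as GNU `shred` work along these lines. The function name `pattern_wipe` is made up for illustration, and as above, overwriting through the file API only reaches the original sectors if the filesystem writes in place.

```python
import os
import secrets

def pattern_wipe(path, rounds=1):
    """Overwrite a file in place with the classic pass sequence:
    0x00, 0xFF, 01010101, 10101010, random, and the bitwise
    inverse of that random pass. Illustrative sketch only."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(rounds):
            rand = secrets.token_bytes(size)
            inverse = bytes(b ^ 0xFF for b in rand)
            passes = (b"\x00" * size,   # 0000...
                      b"\xFF" * size,   # 1111...
                      b"\x55" * size,   # 01010101...
                      b"\xAA" * size,   # 10101010...
                      rand, inverse)
            for data in passes:
                f.seek(0)
                f.write(data)
                f.flush()
                os.fsync(f.fileno())    # push each pass to the device
```

The random and inverse-random passes are the ones a snoop cannot simply subtract out, which is the point made above about known patterns.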
Tin-foil hats made from genuine recycled post-consumer tin-foil available for a moderate fee.
//Why does the data used for the overwrite need to be random?//
Actually, I simplified that a little for brevity. Peter Gutmann developed a protocol for hard-disk sanitisation (see second link) involving "Gutmann patterns", which are apparently widely used nowadays. Several overwrites are required. Some are random, some are specific bit-patterns intended to imprint into the disk platter most strongly.
Apparently there are several problems with removing the previous data. The basic problem is that overwriting '1' with another '1' leads to a slightly stronger '1' being recorded than overwriting a '0'. This difference can be read out using special equipment.
The main reason for not just using a specific bit-pattern is that if you know the pattern used for the overwrite, then you can allow for it and 'subtract' it from your precision measurements.
Another reason is that the drive doesn't necessarily record exactly the data you gave it. It has to record extra guidance bits so it doesn't get lost in runs of the same bit-value, for example. I don't pretend to fully understand this, but to clear away a recorded bit's residual image, you want to flip its magnetisation several times in a particular way, and a particular bit-pattern might not achieve this.
//What security conscious people need is an operating system that continously shuffles data around on the disk; that might prevent data from spreading out into blobs.//
Of course that would mean that you'd have multiple copies of your private data all over the unused portion of your disk, you know. :-)
I want virgin aluminum-foil, not any post-consumer junk. Otherwise the aluminum atoms may be contaminated with subliminal messages planted by "them". Of course I am environmentally conscious, so this aluminum should be mined using machinery powered by recycled veggie oil and refined using solar and/or wind power.
With the data spreading off the tracks if it sits too long, it sounds like someone needs to make a program that automatically rearranges the data on the hard disk a couple of times a day. This program could also keep the disk defragmented, and should span the data across at least two hard disks so that when one disk crashes from all the excessive data-shuffling, the data is not lost.
Couldn't a USB memory stick be used for those files?
//OK then, the OS would perform multiple overwrites, as in your system, but do it every time a file is shuffled, not just when the owner does a file delete.//
To some extent this is a good idea anyway. RiscOS does this as necessary to defragment files and provide faster access to frequently read files. For media which have a certain number of guaranteed writes (Flash RAM etc.) it would be clever to move very rarely updated data (like programs) to areas which are nearing their limit. I personally wouldn't want it happening all the time, although I don't have to worry about securely deleting any of my own stuff.
<<dispose of the pieces at several separate recycling facilities (so they'll be melted into separate pools of slag)>> Interesting. Can one really recover data from a well-charred hard drive if it's all in one lump?
No, but they could recover all the platter pieces...
It would be even safer if unencrypted data were never written to the HD, as is done for the DES drive in the link. A chip manufacturer could embed this feature in the southbridge so that all attached hard drives are automatically encrypted. The chip would need an extra port, routed to the front panel, so the user can plug in a key with the de/encryption code.
For multiuser systems the chip and the BIOS have to be smart enough to recognize who is who, so only valid partitions show up.
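The principle of such a drive (plaintext exists only in transit; the platters see nothing but ciphertext) can be modelled with a deliberately toy sketch. The class name `EncryptingDisk` and the SHA-256-based per-sector keystream are inventions for illustration; a real encrypting controller like the DES drive would use a proper hardware block cipher, and reusing a sector's keystream across rewrites, as this toy does, would be insecure in practice.

```python
import hashlib

def _keystream(key, sector_no, length):
    # Toy CTR-style keystream derived from SHA-256. Illustration only:
    # stands in for the block cipher a real encrypting drive would use.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + sector_no.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class EncryptingDisk:
    """Toy model of a self-encrypting drive: everything stored in
    self.sectors (the 'platters') is ciphertext; plaintext exists
    only while crossing the interface."""
    def __init__(self, key):
        self.key = key
        self.sectors = {}

    def write(self, sector_no, data):
        ks = _keystream(self.key, sector_no, len(data))
        self.sectors[sector_no] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, sector_no):
        ct = self.sectors[sector_no]
        ks = _keystream(self.key, sector_no, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))
```

With the key unplugged from the front panel, nothing recoverable from the platters is plaintext, which is why such a drive needs no secure-delete step at all.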
//No, but they could recover all the platter pieces//
Would those be Duck-Billed?