My idea is to have a client program on each computer on a network take a few gigs (say 20GB) of the free space on it and share that space with a master controller (server). The server would use the multiple client machines and the space they've donated to create one large, fast, redundant file system that can then be shared and used as if it were a single drive. Meanwhile, all the client machines can be used as usual by their users, with little degradation in available processing power. All the RAID styling (mirroring, striping, and parity) would be up to the administrator of the server, depending on how the client machines are used.
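A rough sketch of the client/server split, in Python (all names hypothetical; chunk transport, discovery, and persistence are hand-waved, and replication here is naive mirroring rather than admin-configurable RAID):

    class Client:
        """Stands in for the donor program running on one workstation."""
        def __init__(self, name: str, quota_gb: int = 20):
            self.name = name
            self.quota_bytes = quota_gb * 2**30   # the donated slice of free space
            self.store: dict[int, bytes] = {}     # chunk id -> data

        def write(self, chunk_id: int, data: bytes) -> None:
            self.store[chunk_id] = data

        def read(self, chunk_id: int) -> bytes:
            return self.store[chunk_id]

    class PoolServer:
        """Master controller: presents the donated space as one mirrored volume."""
        def __init__(self, clients: list[Client]):
            self.clients = clients

        def targets(self, chunk_id: int) -> list[Client]:
            # Put two replicas on different machines so any one can be off.
            n = len(self.clients)
            return [self.clients[chunk_id % n], self.clients[(chunk_id + 1) % n]]

        def write(self, chunk_id: int, data: bytes) -> None:
            for client in self.targets(chunk_id):
                client.write(chunk_id, data)

        def read(self, chunk_id: int) -> bytes:
            for client in self.targets(chunk_id):
                try:
                    return client.read(chunk_id)  # first replica that answers wins
                except KeyError:
                    continue
            raise IOError("all replicas unavailable")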
For instance, if you had four client computers and turned one off often, your best bet would be a RAID stripe of mirrors. That way one computer (or two, depending on which) could be turned off without loss of data or uptime. The server would have to be online at all times, however, unless the clients could form an ad-hoc server among themselves, which would be a whole different issue...
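Concretely, with four clients in a stripe of mirrors (RAID 10), block placement could look like this (a toy sketch; the fixed pairing is an assumption):

    # Toy RAID-10 layout: mirror pairs (0,1) and (2,3); blocks stripe across pairs.
    def raid10_placement(block: int) -> tuple[int, int]:
        pair = block % 2                   # which mirror pair the block lands on
        return (2 * pair, 2 * pair + 1)    # both clients holding a copy

    for b in range(4):
        print(f"block {b} -> clients {raid10_placement(b)}")
    # One machine (or one from each pair) can be switched off without
    # losing data or availability.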
Would you post an example of an application that could take advantage of this? I thought processing power was the limiting issue for many applications, not storage space.
The whole point of the system would be that you could effectively use the otherwise unused space of the client machines. In my current situation I have 25 client machines, each with over 20 gigs of free space available. I would like a system where I could make a full backup of our main fileserver and have it chopped up and RAIDed amongst all the client machines. By doing this, I could use the free space of the clients as backup space for the main fileserver and not need to buy another fileserver just to back up our main machine. Of course, a cheap machine would have to manage the RAID and the clients, but once set up, my main server could fall victim to any number of attacks, or explode, and I would still have a full copy of it distributed amongst my other machines, with none of the clients noticing a thing as far as performance degradation goes.
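Back-of-the-envelope numbers for that scenario (the overheads are the standard RAID ones, nothing specific to this setup):

    # 25 clients each donating 20 GB of otherwise-unused space.
    CLIENTS, DONATED_GB = 25, 20
    raw = CLIENTS * DONATED_GB                       # 500 GB raw pool

    mirrored = raw // 2                              # 250 GB usable; any one replica can vanish
    single_parity = raw * (CLIENTS - 1) // CLIENTS   # 480 GB usable; survives one client offline

    print(f"raw={raw} GB, mirrored={mirrored} GB, single parity={single_parity} GB")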
It's a good idea, and probably bakable. I wouldn't expect it to be fast, but you could get decent fault tolerance. You'd want to make sure you had good physical security over the drives in use.
Baked by Hadoop & Google Datastore.
It's different from a SAN because it only uses free space on non-dedicated computers, rather than dedicated space on dedicated servers. It's a cost-effective way to consolidate the free space on your network without a large investment in new hardware. Basically, you're using the free space the client computers don't need, and the clients (users) can continue working on their machines as if nothing's happening.
@james_what: I checked out Hadoop and found it's mainly for parallel processing on commodity machines, whereas I'm trying to combine the storage, not the processing power. I couldn't get any info about Google Datastore, but in any case I doubt it's open for home/business use if it is what I'm describing (which I doubt; Google wouldn't have a need for something like this).
This is just what Google does. They RAID their storage across the cheapest machines they can find. No reason the machines couldn't be clients that also do other stuff.