In the quest for low-latency storage on a $10 budget (just for fun), the proposed idea uses a farm of obsolete computers. Each computer would be configured with a near-minimal installation of an open-source operating system and a RAM drive (of perhaps a half gig or a gig) configured as an iSCSI target. The machines would be wired together on a 100MB switch (because Gigabit doesn't give you appreciably faster latency and because 100MB is getting cheap).
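A minimal sketch of the per-machine target setup, assuming a Linux box with the brd ramdisk module and the tgt userspace iSCSI target (tgtd already running); the IQN, the 512MB size, and the wide-open access policy are made up for illustration:

    # Sketch: export a RAM-backed block device as an iSCSI target on one
    # farm machine. Assumes Linux, the brd module, and tgt (tgtd running);
    # the IQN and size are placeholders, and binding to ALL is lab-only.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # 512 MB RAM-backed block device (rd_size is in KiB) -> /dev/ram0
    sh("modprobe brd rd_nr=1 rd_size=524288")

    # Create target 1 and hand it /dev/ram0 as LUN 1
    sh("tgtadm --lld iscsi --mode target --op new --tid 1 "
       "--targetname iqn.2012-06.garage:ram0")
    sh("tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 "
       "--backing-store /dev/ram0")

    # Accept any initiator (no auth -- fine for a garage experiment)
    sh("tgtadm --lld iscsi --mode target --op bind --tid 1 "
       "--initiator-address ALL")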
A master host machine would have each of the iSCSI targets initialized as "physical" volumes for a logical volume group, and the group would then carry one logical volume spanning the sum of them. (Striping wouldn't make a lot of sense, since the targets would be competing for the host's network bandwidth... unless link aggregation or a gigabit NIC is involved.)
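On the master host, the assembly might go roughly like this, again only a sketch: it assumes open-iscsi and LVM2, and the IP addresses and /dev/sdX names are placeholders that would differ on real hardware:

    # Sketch: log in to every target, then span one logical volume across
    # the resulting disks. Assumes open-iscsi and LVM2; addresses and
    # device names below are illustrative only.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    targets = ["192.168.0.%d" % n for n in range(101, 107)]   # six boxes

    for ip in targets:
        sh("iscsiadm -m discovery -t sendtargets -p %s" % ip)
        sh("iscsiadm -m node -p %s --login" % ip)

    # Each login surfaces as a new SCSI disk; these names are guesses.
    disks = ["/dev/sd%s" % c for c in "bcdefg"]

    sh("pvcreate " + " ".join(disks))              # "physical" volumes
    sh("vgcreate ramfarm " + " ".join(disks))      # one volume group
    sh("lvcreate -l 100%FREE -n ramlv ramfarm")    # linear, not striped

The lvcreate default is linear concatenation, which matches the no-striping reasoning above; striping would need an explicit -i flag.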
I might actually do this in my garage and see what happens.
It's the Latency, Stupid
http://rescomp.stan.../rants/Latency.html
If you have latency (on drives or networks), you're stuck with it. [kevinthenerd, Jun 01 2012]
You realize that it's actually 100Mb, as in megabits, translating to 12.5 megabytes per second? Even the slowest hard drives you can currently buy can do several times that speed.
I'm chasing latency, not bandwidth. If you'd like, a single 1Gb connection would give you 125MB/s, and six of them aggregated would give you SATA-3 bandwidth (ignoring all of the protocol overheads in a slapdash comparison). The latency wouldn't improve, unfortunately.
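For the record, the slapdash arithmetic, raw line rates with every overhead (iSCSI, TCP/IP, SATA's 8b/10b encoding) deliberately ignored:

    # Raw line-rate arithmetic from the thread; real throughput is lower.
    for name, mbit in [("100Mb Ethernet", 100),
                       ("1Gb Ethernet", 1000),
                       ("6 x 1Gb aggregated", 6000),
                       ("SATA-3 line rate", 6000)]:
        print("%-20s %6.1f MB/s" % (name, mbit / 8.0))
    # -> 12.5, 125.0, 750.0, 750.0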