Proposed is a way to optimize virtual disk images (VDIs)
across a logical volume (LV) inside a logical volume
group (LVG). The trick would be to have a JBOD/LVG
storage concatenation of two storage devices, one fast
and small and the other slow and big. Inside this LVG
would be an LV, and inside that would be virtual disk
images. The host virtualization platform would track which
portions of the virtual disk image are frequently accessed
and cache them on the fast disk, and it would move
infrequently used portions to the slow disk. This would be
transparent to the guest OS and would probably have to be
implemented at the host's filesystem level (ext4, JFS,
etc.). Otherwise, in the host OS, the disk image would still
be accessible as a single file for image backups and the
like. This would be more granular than selecting which files
to cache (as operating systems probably do); a particular
PORTION of a database tablespace could be cached at a
low level instead of either trying to keep the whole
tablespace in RAM or trying to cache rows at the DB level.
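As a rough sketch of the per-block bookkeeping the host would need, the tracking could look something like the toy Python below. Everything here (class name, block counts, the idea of recomputing placement in one pass) is my own illustrative assumption, not anything LVM or a real hypervisor actually exposes:

    from collections import Counter

    class TieredImage:
        """Toy model: count accesses per block of a disk image and
        decide which blocks belong on the fast vs. the slow device."""

        def __init__(self, total_blocks: int, fast_blocks: int):
            self.total_blocks = total_blocks   # blocks in the whole image
            self.fast_blocks = fast_blocks     # capacity of the fast tier
            self.access_counts = Counter()     # per-block access frequency

        def record_access(self, block: int) -> None:
            """Host calls this on every read/write to the image."""
            self.access_counts[block] += 1

        def plan_placement(self):
            """Return (fast_set, slow_set): the hottest blocks, up to the
            fast tier's capacity, go on the fast device."""
            hottest = {blk for blk, _ in
                       self.access_counts.most_common(self.fast_blocks)}
            return hottest, set(range(self.total_blocks)) - hottest

    # Example: a 1000-block image with room for 100 blocks on the fast tier.
    img = TieredImage(total_blocks=1000, fast_blocks=100)
    for blk in [5, 5, 5, 7, 900, 5, 7]:   # simulated guest I/O pattern
        img.record_access(blk)
    fast, slow = img.plan_placement()
    print(sorted(fast))   # hot blocks: [5, 7, 900]

In a real system this logic would of course live in the host's filesystem or volume manager rather than userspace, and blocks would migrate incrementally instead of being re-placed wholesale.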
Soo.... an OS X sparse bundle combined with SSD
caching? I don't think that's novel enough to say this
isn't baked.
//like OSes probably do// You'd think they might, but since nobody knows how to write computer programs anymore, they look at access stats and cache physical sectors instead.
I would imagine that pro databases have this option and that they could do it at the table element level.