I understand that the use of fractal techniques to compress
images has largely been abandoned, I assume because other
algorithms are faster, produce smaller files and so forth.
However, it seems to me that fractal image compression
would still be useful if one wanted to add information to an
image rather than just subtract it, and that this added
information might even have a tendency to be close to accurate.
So this is my idea:
Scan an image for patterns which show at least two levels
of self-similarity, for instance a star field, a tree, a
network of blood vessels or a pebbly beach. Have the
algorithm determine which areas exhibit these features,
and whether they do so in more than one way: for
instance, a root system in sandy soil might have a definite
distribution of particle sizes between roots which themselves
have a definite pattern of branching. Separate out the areas
which cannot be processed in this way but are still predictable,
such as a clear blue sky, recording any colour gradients in
them and storing those as functions. Anything else, which is
unpredictable in either manner, store separately in a more
conventional form.
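As a very rough sketch of how that sorting step might look in practice, the toy Python below partitions a greyscale image into blocks and labels each one as a gradient region (storable as a function), a roughly self-similar region, or a residual to be kept as ordinary pixels. The block size, both thresholds and the single two-level similarity test are illustrative assumptions, not part of the idea; a real fractal coder would search many candidate domain blocks and affine transforms.

```
import numpy as np

BLOCK = 16  # range-block size in pixels

def fit_gradient(block):
    """Least-squares fit of a colour-gradient plane a*x + b*y + c to a block."""
    ys, xs = np.mgrid[:block.shape[0], :block.shape[1]]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(block.size)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return coeffs, np.abs(A @ coeffs - block.ravel()).mean()

def self_similarity_error(img, y, x):
    """Compare a block with a shrunk copy of the 2x-larger block sharing its
    corner -- a crude test for 'at least two levels of self-similarity'."""
    big = img[y:y + 2 * BLOCK, x:x + 2 * BLOCK].astype(float)
    if big.shape != (2 * BLOCK, 2 * BLOCK):
        return np.inf
    shrunk = big.reshape(BLOCK, 2, BLOCK, 2).mean(axis=(1, 3))
    return np.abs(shrunk - img[y:y + BLOCK, x:x + BLOCK]).mean()

def classify(img):
    """Label each block of a greyscale image for the three storage strategies."""
    labels = {}
    for y in range(0, img.shape[0] - BLOCK + 1, BLOCK):
        for x in range(0, img.shape[1] - BLOCK + 1, BLOCK):
            _, grad_err = fit_gradient(img[y:y + BLOCK, x:x + BLOCK])
            if grad_err < 2.0:                            # e.g. clear blue sky
                labels[(y, x)] = 'gradient'
            elif self_similarity_error(img, y, x) < 8.0:  # e.g. foliage, pebbles
                labels[(y, x)] = 'fractal'
            else:                                         # unpredictable areas
                labels[(y, x)] = 'residual'
    return labels
```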
When the image is to be displayed, the actual pixels of the
image often need not be used. Instead, the fractal and
gradient patterns are reproduced. With no zooming, the
image looks the same as the original and makes no
attempt to make anything up. If zooming does take place,
the self-similar tendency of the fractals and the
predictability of the gradients are exploited to introduce
new details into the image. A simple example would be a
photograph of a fern: zooming in on a frond would
provide an infinite series of metafronds, no matter how far
the image is zoomed in. Similarly, a tree might have
leaves of a particular shape which are predictable from the
shape of the whole tree. This could get more interesting
when two features of an area interact, because new
structures which really do exist, but closely resemble
neither, might emerge. Zooming into a night sky could end
up looking like the Hubble Deep Field.
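The fern case can be made concrete with an existing fractal, the Barnsley fern: four affine maps, a few dozen numbers in all, yet any window of the image can be rendered at any magnification, so zooming keeps producing sub-fronds rather than enlarged pixels. The zoom window below is an arbitrary choice for illustration and nothing here comes from the idea beyond the fern example itself.

```
import random

# (a, b, c, d, e, f, probability) for x' = a*x + b*y + e, y' = c*x + d*y + f
FERN = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def render(x0, x1, y0, y1, width=80, height=40, points=200_000):
    """Chaos-game render of the fern restricted to the window [x0,x1] x [y0,y1]."""
    grid = [[' '] * width for _ in range(height)]
    x = y = 0.0
    for _ in range(points):
        r, acc = random.random(), 0.0
        for a, b, c, d, e, f, p in FERN:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        if x0 <= x < x1 and y0 <= y < y1:
            col = int((x - x0) / (x1 - x0) * width)
            row = int((y - y0) / (y1 - y0) * height)
            grid[height - 1 - row][col] = '*'
    return '\n'.join(''.join(line) for line in grid)

print(render(-2.2, 2.7, 0.0, 10.0))   # the whole fern
print(render(0.3, 1.1, 6.2, 7.4))     # zoomed in on a frond: still fern-shaped
```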
Faces are said to be easily compressible and storable in
databases using very few bytes - I seem to recall about
forty. A blurred face could be treated similarly, and the
lower part of a face might be predictable from the upper
and vice versa. Likewise, a self-similar area which
disappears behind something in the foreground could be
continued with the same pattern, and a sky, for example,
which had the same shade of blue on either side of a
foreground object would probably be the same behind it,
though it might have a cloud.
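For the sky case in particular, "continuing the pattern behind the object" could amount to nothing more than evaluating the stored gradient function under the foreground mask. The toy below does exactly that for a synthetic sky with a linear brightness gradient; the image, mask and plane model are assumptions for illustration, and as noted above, the real hidden area might contain a cloud.

```
import numpy as np

def fill_occlusion(img, mask):
    """Fit a plane to the visible pixels and evaluate it where mask is True."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(img.size)])
    visible = ~mask.ravel()
    coeffs, *_ = np.linalg.lstsq(A[visible], img.ravel()[visible].astype(float),
                                 rcond=None)
    out = img.astype(float).copy()
    out.ravel()[mask.ravel()] = (A @ coeffs)[mask.ravel()]
    return out

# A synthetic sky brightening from left to right, with a foreground pole.
sky = np.tile(np.linspace(100, 180, 64), (48, 1))
mask = np.zeros(sky.shape, dtype=bool)
mask[:, 30:34] = True                       # pixels hidden by the pole
guess = fill_occlusion(np.where(mask, 0, sky), mask)
print(np.abs(guess - sky)[mask].max())      # ~0: this toy sky really is a plane
```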
The result is unreliable but interesting and possibly helpful
gainy image compression, which might reveal details of an
image hidden behind foreground objects, deblur faces and
show finer details of the whites of one's eyes or of the vein
pattern in a pot plant. Often wrong but infinitely zoomable.
"Enhance"
http://www.youtube....watch?v=Vxq9yj2pVWk Like this and about as realistic [nineteenthly, Mar 13 2012]
It could have a "Where's Wally" mode. Also, if it turns out not to
be possible to identify everything by software, it could either be
done manually or by crowdsourcing.
How do you see, or otherwise take advantage of,
intelligently reconstructed details of images behind objects in a static photograph?
With quantum-linked photons.
There are layers, and the image is viewable from different angles.
Also, your mention of stills brings to mind the possibility of
reconstructing details by retaining the details which change between
frames in video files, though clearly it wouldn't be that simple.
Didn't Google publish an image enhancement or scaling
algorithm like this recently?
Thanks [notexactly]. I shall, well, Google.
I think this is how DjVu (.djvu) files work. The encoder
removes the text glyphs from the page background and
compresses the now-blank background separately; when the page is
reconstituted on a display, the text is overlaid on top of it
using canonical versions of each glyph.
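A toy version of that layered reconstruction is easy to sketch: upsample a coarse background, then stamp a full-resolution glyph mask on top of it. This shows only the general foreground/background layering, not the actual DjVu codec; the arrays, scale factor and ink value below are invented for illustration.

```
import numpy as np

def reconstitute(background_lowres, glyph_mask, ink=0.0):
    """Nearest-neighbour upsample of the background, with glyphs drawn on top."""
    sy = glyph_mask.shape[0] // background_lowres.shape[0]
    sx = glyph_mask.shape[1] // background_lowres.shape[1]
    page = np.kron(background_lowres, np.ones((sy, sx)))  # cheap upsampling
    page[glyph_mask] = ink                                 # overlay text layer
    return page

bg = np.full((4, 8), 230.0)          # pale page background at quarter resolution
mask = np.zeros((16, 32), dtype=bool)
mask[6:10, 4:28:3] = True            # a made-up row of glyph strokes
print(reconstitute(bg, mask).shape)  # (16, 32): full-resolution page
```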