Not really a very long baseline optical interferometer. More of a 'cheap' multi-gigapixel camera with near-perfect optics and an aperture of a couple of square kilometres.
Get everyone, or almost everyone, in Europe, Africa and the Near East who has a digital camera to go out at the same time and take
a picture of Orion. It does not matter what the quality of their equipment is. In fact, the more different pixel sizes and orientations, the better.
These pictures are then emailed to a central point.
The advantage of this is that with lots of pictures, any error introduced by a particular camera or seeing condition will fall to almost zero {near-perfect optics}.
And even with only 1,000,000 pictures of 0.1 seconds each, that is equal to a long exposure of about 28 hours;
and if each of those cameras has an aperture of 2 square millimetres, then that is a total aperture of 2 square kilometres.
Before processing is applied to the millions of images sent in, they are each blown up (on the computer) to about 2,000,000 pixels across. The exact number will depend on the resolution of the pictures sent in.
There already exists software to line up and add images taken over the course of a night's observing (DeepSkyStacker); however, this software will need to be tweaked or rewritten to accommodate pixels at different angles.
To try and explain the advantage of this, imagine that all of the professional astronomers have shiny new 3x3 pixel cameras, and that when they process their images they can be certain that the star is behind the centre pixel. The amateurs have crappy little 2x2 cameras and think that the star is in the top left corner; of course, the star is in the top left corner of the centre pixel. The best way a computer can add these two sets of data is if we first multiply the images up to an astonishing 6x6 pixels.
So the more different pixel sizes, and the bigger the multiplication, the better the resolution of the finished image; a rough sketch of the resampling follows below.
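A minimal sketch of that resampling step, in Python with numpy (the toy images, the star positions and the nearest-neighbour blow-up are illustrative assumptions, not a prescription for the real software):

    import numpy as np

    def upsample(img, factor):
        # Nearest-neighbour blow-up: each pixel becomes a factor-by-factor block.
        return np.kron(img, np.ones((factor, factor)))

    pro = np.zeros((3, 3))
    pro[1, 1] = 1.0        # 3x3 camera: star behind the centre pixel
    amateur = np.zeros((2, 2))
    amateur[0, 0] = 1.0    # 2x2 camera: star somewhere in the top-left pixel

    # 6 is the least common multiple of 2 and 3, so both grids map cleanly onto 6x6.
    stacked = upsample(pro, 2) + upsample(amateur, 3)
    print(stacked)         # brightest cell: the top-left corner of the centre pixel

The overlap of the two blown-up images pins the star down more finely than either camera could alone, which is the whole trick.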
Whatever we take pictures of with this system is unlikely to have long straight lines. Having lots of images taken at different angles will mean that the stars will tend to become discs.
The last step of the operation would be to shrink and crop the image into something manageable.
Planet-wide camera
[xaviergisz, May 29 2011]
Discover Habitable...he Professionals Do
[xaviergisz, May 29 2011]
|
|
//if each of those cameras has an aperture of 2
square millimetres, then that is a total aperture of 2
square kilometres.// |
|
|
No, if each has an aperture of 2 square millimetres,
the total aperture is 2 million square millimetres, or
2 square metres. Biggish, but not immense. |
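Spelled out: 1,000,000 cameras × 2 mm² = 2,000,000 mm², and 1 m² = 1,000 mm × 1,000 mm = 1,000,000 mm², so the total is 2 m². |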
|
|
Sorry, my mistake.
My first here, but unfortunately not my last. |
|
|
Welcome to HB. This is a grand idea but it added a
full inch to my girth. [+-] |
|
|
You'd need to do this in a region of high population density, clear skies, and low light pollution levels. |
|
|
The Burning Man festival might be a good place to trial it. |
|
|
//unfortunately not my last// The phrase "tragic
accident" springs ineluctably to mind. |
|
|
Neat idea [+]. Does the exact time of the photo
need to be known (or inferred) for interferometry? |
|
|
//does the exact time of the photo need to be
known (or inferred)// |
|
|
It can be done inferrometrically. |
|
|
That would help iron out the problems. |
|
|
I almost hate to mention this (then again, it has seldom given me pause before) but where, exactly, does the fudge come in? Perhaps a camera nestled in fresh hot fudge from the oven would have some sort of vibration-dampening characteristics the non-fudge models lack? |
|
|
// where, exactly, does the fudge come in? // |
|
|
Oh, for crying out loud, are you stupid? |
|
|
Do you have to have everything explained to you in words of one syllable or less? |
|
|
Fudge - the good stuff - comes in tins. What else do you need to know? |
|
|
//Does the exact time of the photo need to be known (or inferred) for interferometry?// Yes, and it needs to be "known" to within a small fraction of the period of the light. The position also needs to be known to within a small fraction of the wavelength of the light. Sorry, but this just isn't interferometry. |
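For scale: green light at a wavelength of about 550 nm has a period of λ/c ≈ (550 × 10^-9 m) / (3 × 10^8 m/s) ≈ 1.8 × 10^-15 s, so "a small fraction of the period" means sub-femtosecond timing. A consumer camera's clock is off by something like fifteen orders of magnitude. |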
|
|
Uh... well... thanks for clearing that up, [8th]... about the fudge connection that is. This must be some new form of alchemy I hadn't heard of. |
|
|
(Perhaps I shouldn't have been excusing myself so much to slouch off to the wash room for a smoke as a boy... elementary school can be quite stressful in the states) |
|
|
So then, all of these camera buffs are members of the fudge packers union. (You guys could have said that in the first place) |
|
|
This idea is about fudge in the same way that it's
about interferometry. |
|
|
Precisely. Please, [Grogs], do try to keep up with the rest. You'll regret it if you don't. |
|
|
All right, this has gone on long enough. [Grogster]:
it's "fudge" as in "fudge factor." The fudge is
metaphorical, like the interferometry. |
|
|
Dang. I was kinda looking forward to the fudge, to be honest with you... <walks away muttering> |
|
|
Come back [Grogs] ... you don't understand, it is real fudge, but flavoured with fresh metaphors*. |
|
|
*May also contain similes, oxymorons, and traces of nuts. |
|
|
//May also contain similes, oxymorons, and traces of nuts// |
|
|
In case I was too subtle, this will not work at all. To create multiple images that could be synthesised in a way that could be described as interferometry, the images would have to record, with a spatial resolution of a few nanometres and a temporal resolution of around 10^-12 seconds, the instantaneous strength, direction, and rate of change of the electric and magnetic fields of the incident radiation. That is an utterly ludicrous idea. You could conceivably do that with radio waves, but visible light? |
|
|
Even if you don't really mean interferometry, you cannot add multiple images to overcome an impossibly low signal-to-noise ratio. It doesn't work like that. |
|
|
As observed, it is not any sort of interferometer! |
|
|
DeepSkyStacker software normally adds together images taken in sequence with the one set of kit, |
|
|
so time is not important, except that you can get more people involved if you tell them what to do and when. |
|
|
Newbie question: what's with all the bones and buns I keep reading about? |
|
|
If you have more votes for than against, you get croissants (buns); if you have fewer votes for than against, you get fish bones. Also described in annotations as [+] or [-]... |
|
|
You can find metaphorical fudge anywhere. But
for metaphorical interferometry, you have to
come to the halfbakery. |
|
|
By the way, [spidermother], *why* does it not
work like that? As long as the noise is unbiased
and uncorrelated, why can you not extract a signal
from any amount of noise, with a sufficiently large
sample to average? If I'm missing something, then
it's the sort of something I want to learn about.
Or are you bothered by the assumption that the
noise is unbiased and uncorrelated? |
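To make the premise concrete, a toy version in Python with numpy (the signal level, noise level and frame count are all invented for illustration): averaging N frames of unbiased, uncorrelated noise shrinks the noise by roughly the square root of N, so even a signal at a hundredth of the noise floor surfaces eventually.

    import numpy as np

    rng = np.random.default_rng(0)
    signal = 0.01     # faint "star": one hundredth of the noise sigma
    frames = signal + rng.normal(0.0, 1.0, size=1_000_000)

    mean = frames.mean()                        # estimate of the signal
    sem = frames.std() / np.sqrt(frames.size)   # residual noise ~ sigma / sqrt(N)
    print(mean, sem)                            # mean ~ 0.01, sem ~ 0.001

That only works, of course, if the noise really is unbiased and uncorrelated, and if the signal survived digitisation in the first place. |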
|
|
Unless the pictures are timed with precision, you are picking up different pieces of the signal in time; like trying to deduce a song from a distant, fuzzy station by overlapping the verse and chorus together. Add to that factors like the rotation and revolution of the earth, and the signal has gone through a great deal of modulation between pictures. |
|
|
//*why* does it not work like that?// I confess I'm not an expert on signal theory, but my understanding is that for sufficiently low signal-to-noise ratios, the signal is effectively not there at all. If I can't hear a distant voice over a loud waterfall, then the combined output of a million microphones will also fail to contain the voice. |
|
|
Signals above the Nyquist frequency are also not recorded. In the case of digital cameras, this implies that any details smaller than about 2 pixels in size are simply not available, and combining multiple images does nothing to fix that. |
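A toy demonstration of that limit (the stripe period is invented): a pattern finer than two pixels does not vanish politely; it reappears as a spurious coarser pattern, so stacking more frames taken on the same grid just gives a cleaner copy of the wrong pattern.

    import numpy as np

    # Stripes with a period of 1.9 pixels: just finer than the 2-pixel Nyquist limit.
    x = np.arange(20)
    sampled = np.sin(2 * np.pi * x / 1.9)   # one sample per pixel
    print(np.round(sampled, 2))             # traces a slower pattern, period ~2.1 pixels

Averaging many such frames only averages identical aliases. |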
|
|
As an amateur sound recordist, I'm well aware that it's not possible to produce a high quality recording by combining several low quality recordings. There are certain small improvements that can be made - for example, generating a noise profile to improve noise reduction in a different recording - but it's quite limited, and the general rule is garbage in, garbage out. |
|
|
Perhaps the photography boffins will have more to say. I'll also try to get something out of a friend, who knows more about signal processing. |
|
|
//Signals above the Nyquist frequency are also not
recorded// They are, but they're aliased. (In
practice, you deliberately remove them, with an
analog filter, prior to sampling, because you'd rather
have them absent than aliased.) |
|
|
Fair point, but it amounts to the same thing - you can't reconstruct the original signal from the recording. (In case we're being über-pedantic, you can't _unambiguously_ reconstruct the original signal; you don't know whether the reconstruction is correct, but it almost certainly is not.) |
|
|
// digital cameras, this implies that any details smaller than about 2 pixels in size are simply not available, and combining multiple images does nothing to fix that. // |
|
|
It's not image "size"; it's the noise floor for the CCD sensing element, which is why the detectors in the Hubble (and many other big scopes now) are supercooled, to remove internal "noise" from the detector. |
|
|
Each pixel effectively integrates incident photons during a capture "window" (time period). Astronomical detectors have phenomenal sensitivity, and programmable capture times (as you would expect). |
|
|
The criteria are (i) a very large objective (to capture as many photons as possible from the item of interest), (ii) a very low-loss optical path, and (iii) a low-noise detector. |
|
|
Because the scope has to track the target, short capture times and multiple images are postprocessed to provide a higher quality final image; long "exposures" inevitably cause blurring and streaking. |
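A cartoon of that per-pixel integration, in Python with numpy (every rate here is invented purely for illustration): each pixel accumulates Poisson-distributed photon counts plus thermal "dark" counts, and cooling the detector removes most of the latter.

    import numpy as np

    rng = np.random.default_rng(1)
    t = 0.1                              # capture window, seconds
    star_rate = 50.0                     # photons/s reaching the pixel (invented)
    dark_warm, dark_cold = 200.0, 0.5    # thermal electrons/s (invented)

    # Counts integrated over the window, warm versus supercooled detector.
    warm = rng.poisson((star_rate + dark_warm) * t, size=10_000)
    cold = rng.poisson((star_rate + dark_cold) * t, size=10_000)
    print(warm.mean(), cold.mean())      # ~25 vs ~5: most warm counts are noise

With the warm detector, four counts in five are thermal rather than starlight, which is why the big scopes bother with the cryogenics. |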
|
|
If this sort of thing is so important to your species, why don't you just go to the star or nebula or whatever and look at it close to? You could take a picnic or something, make a day of it. |
|
|
//It's not image "size"; it's the noise floor// It's both, isn't it? Magnitudes below the noise floor and resolutions below the Nyquist limit are effectively not recorded. |
|
|
I just knew this site would be educational. |
|
|
I was wrong; still, I've learned stuff. |
|
|
"give me bones and i`ll make stock". |
|
|
"Give ush the toolsh, and we will finish the job." |
|
|
[j paul] You weren't entirely wrong; perhaps if you combined millions of mostly-wrong ideas, you could come up with one super-right idea. |
|
|
I ended up asking my friend about this. He thought I was wrong in one particular point - namely, it is possible to overcome the Nyquist limit to some extent, through super-sampling. But he also pointed out that doing so requires particular conditions, which would not be easy to meet from a bunch of loosely correlated images like this; and he agreed that this idea would not help much. |
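A one-dimensional cartoon of the super-sampling he means (the scene and the half-pixel shift are invented): two frames offset by a precisely known half pixel interleave into a grid twice as fine. The catch is exactly as he says: the offsets must be known precisely, a condition a heap of uncoordinated snapshots does not meet.

    import numpy as np

    def scene(x):
        # Detail near the two-pixel limit of a single frame.
        return np.sin(2 * np.pi * x / 3.0)

    x = np.arange(10)
    frame_a = scene(x)          # sampled on the integer pixel grid
    frame_b = scene(x + 0.5)    # same camera, shifted by exactly half a pixel

    # Interleaving gives a sample every 0.5 pixels: double the sampling rate.
    combined = np.empty(2 * x.size)
    combined[0::2] = frame_a
    combined[1::2] = frame_b
    print(np.round(combined, 2))

With the shift unknown, or different for every frame, the interleaving has nothing to anchor to. |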
|
|
The killer is the point spread function, rather than the CCD resolution. The Hubble telescope is able to combine many images to produce greater resolution and colour depth, but that's because its superb optics, consistency (each image is produced from the same device), and the lack of atmosphere result in a good point spread function. In effect, you are limited by the accuracy, rather than the precision, of your measurements. |
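To illustrate the point spread function problem (the Gaussian width and source spacing are invented): two point sources blurred by a wide PSF merge into a single blob, and neither finer pixels nor more stacked copies of the same blob will split them apart.

    import numpy as np

    x = np.arange(-50, 51)
    psf = np.exp(-x**2 / (2 * 15.0**2))   # wide Gaussian point spread function
    psf /= psf.sum()

    sky = np.zeros(101)
    sky[45] = sky[55] = 1.0               # two point sources, 10 pixels apart

    blurred = np.convolve(sky, psf, mode='same')
    print(blurred.argmax(), round(blurred.max(), 4))   # one merged peak, near pixel 50

Stacking copies of this frame improves the signal-to-noise ratio of the blob, but the blob itself is the accuracy limit described above: it never sharpens back into two stars. |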
|