Luneburg microlens array

(+4)
Lenticular lens displays are well known to create moving or 3D visual effects when the device is viewed or tilted about one axis of rotation.

Microlens arrays extend the principle of lenticular lenses. The arrays of approximately hemispherical lenses provide moving or 3D visual effects when the device is viewed or tilted about two axes of rotation.

Both lenticular and microlens arrays work over a limited range of tilt angles. This is because the focal length of a lens is approximately the same at all angles (i.e. the focal surface follows a circular arc) but the image substrate is flat. Also, lenses have more optical aberrations the further the viewing angle is from perpendicular.

So my idea is to make the microlenses as Luneburg lenses. Luneburg lenses work equally well at all angles, thus overcoming the aforementioned problems.

Luneburg lenses are spheres made of concentric layers of progressively lower refractive index, from the centre outwards. The Luneburg lenses would be arranged in a regular array over the image substrate, which is made of a deformable material, with each Luneburg lens placed directly above a corresponding microimage. The Luneburg lenses would then be pressed into the substrate (such that they are half submerged), which is then solidified (e.g. a UV-curable polymer).
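For reference, the ideal Luneburg profile has refractive index n(r) = sqrt(2 - (r/R)^2), falling from about 1.41 at the centre to 1.0 at the surface. A minimal Python sketch of the shell indices for a layered approximation (the 250µm radius and 20-layer count are illustrative assumptions, not part of the idea):

    import math

    def luneburg_index(r, R):
        """Ideal Luneburg profile: n(r) = sqrt(2 - (r/R)^2)."""
        return math.sqrt(2.0 - (r / R) ** 2)

    def shell_indices(R_um=250.0, n_shells=20):
        """Refractive index at the mid-radius of each concentric shell,
        approximating the continuous profile with discrete layers."""
        step = R_um / n_shells
        return [(i * step + step / 2,
                 round(luneburg_index(i * step + step / 2, R_um), 4))
                for i in range(n_shells)]

    if __name__ == "__main__":
        for r, n in shell_indices():
            print(f"r = {r:6.1f} um   n = {n}")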

The result would be more effective displays that would be essentially colour holograms.

This could be extended to colour holographic video displays if a deformable high-resolution electronic display could be made (e.g. OLED).

xaviergisz, Jan 02 2020

Luneburg lens https://en.m.wikipe.../wiki/Luneburg_lens
[xaviergisz, Jan 02 2020]

Fibre-optic image conduit https://gfycat.com/...emandingHarrierhawk
[xaviergisz, Jan 04 2020]

Storage requirement for smaller frames of full-motion video https://www.cl.cam....m/book/node111.html
[Voice, Jan 07 2020]

2.5µm pixel width https://hardware.sl...lead-to-flawless-vr
Samsung and Stanford University have developed OLED technology that supports resolutions up to 10,000 pixels per inch. Perfect for this idea.
[xaviergisz, Oct 27 2020]

       I like the concept. Glad to see a use for even higher pixel density displays. This seems feasible with existing technology for large displays e.g. a billboard.
For smaller screens, this design seems to require very small pixels...each sphere appears as one pixel to the viewer, but each sphere must be illuminated by a grid of pixels with one pixel per ~degree of solid angle. For a color display, presumably there would need to be 3 sets of spheres, each sized to avoid chromatic aberration at one specific wavelength.
sninctown, Jan 02 2020
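To put a rough number on "one pixel per ~degree of solid angle": a hemisphere is 2*pi steradians, i.e. roughly 20,600 square degrees, so each sphere would need on the order of 20,000 image points behind it to cover every viewing direction at 1° resolution. A quick Python check (the full-hemisphere field of view and 1° step are assumptions):

    import math

    # Rough count of image points needed behind each sphere for one
    # distinct pixel per ~1 degree of solid angle.
    # Assumptions: full hemisphere of viewing directions, 1-degree steps.
    def pixels_per_lens(step_deg=1.0, field_sr=2 * math.pi):
        sq_deg_in_sr = (math.pi / 180) ** 2   # one "square degree" in steradians
        return field_sr / (step_deg ** 2 * sq_deg_in_sr)

    print(round(pixels_per_lens()))   # ~20,600 points per lens for a full hemisphere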
  

       //For a color display, presumably there would need to be 3 sets of spheres, each sized to avoid chromatic aberration at one specific wavelength.//   

       My guess would be that chromatic aberration would not be an issue or could easily be avoided with proper design.   

       //This seems feasible with existing technology for large displays e.g. a billboard. For smaller screens, this design seems to require very small pixels...each sphere appears as one pixel to the viewer, but each sphere must be illuminated by a grid of pixels with one pixel per ~degree of solid angle.//   

       Yep, easier for big displays, but plausible for TV-sized displays too. Each Luneburg lens could be 500µm across with a 100x100 array of 5µm pixels behind it. It could also be optimised for human binocular vision, with fewer vertical pixels than horizontal pixels for each Luneburg lens.
xaviergisz, Jan 02 2020
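Back-of-envelope on those figures (lens and pixel sizes as stated above; the ±60° viewing cone is an assumption):

    # 500 um lens with a 100x100 array of 5 um pixels behind it.
    lens_diameter_um = 500
    pixel_um = 5
    pixels_per_axis = lens_diameter_um // pixel_um   # 100 pixels across the lens
    view_cone_deg = 120                              # assume +/-60 degrees of tilt
    step_deg = view_cone_deg / pixels_per_axis       # ~1.2 degrees per distinct view
    views_per_lens = pixels_per_axis ** 2            # 10,000 view directions per lens
    print(pixels_per_axis, step_deg, views_per_lens)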
  

       I think I've pointed this out before, but "lenticular lens" just means "lens-shaped lens" or "lens-related lens", which are tautological.   

       Anyway, I like it. Maybe 3D TV could finally get and stay popular if it didn't need glasses. But I'm more excited about ubiquitous glassesless 3D displays in computers, tablets, phones, etc.
notexactly, Jan 02 2020
  

       // tautological //   

       <Pedantry>   

       Not necessarily. The majority of lenses are circular in plan; some specialised ones, like the anamorphic ones used for projection technologies like Panavision (which converts 4:3 35mm frames to widescreen), have a rectangular footprint.   

       So a lens can have a "lenticular" elevation (indeed it couldn't function as a lens if it didn't, and therefore wouldn't be a lens at all) but can also be "lens-shaped" in plan view.   

       <Pedantry/>
8th of 7, Jan 02 2020
  

       The existence of rectangular lenses means that a rectangle is a lens shape.
notexactly, Jan 03 2020
  

       //in what way would these be superior to existing full color holograms?//   

       Well, this would be relatively simple to make, not requiring lasers and exotic materials.   

       The main difficulty would be printing the microimages at a high enough resolution. A typical printer prints dots with a dimension of approximately 100µm, whereas this idea requires printing dots with a dimension of 5µm. Other problems would include aligning the microimages with the lenses, and accounting for the stretch when the lenses are pressed in.   

       These problems could be overcome by using the film used in pre-digital cameras. Thus, the Luneburg microlens array would be embedded into unexposed film. A repositionable screen would illuminate the film from different angles; each angle would expose one pixel behind each of the lenses. The process is repeated until all pixels behind each lens are exposed (for a 100x100 array, this would be 10,000 exposures).
xaviergisz, Jan 03 2020
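A sketch of that exposure schedule, assuming a 100x100 grid of tilt angles over an illustrative ±60° range in each axis:

    # One exposure per viewing direction: the screen (or the film assembly)
    # is stepped through a 100x100 grid of tilt angles.
    def exposure_angles(n=100, half_range_deg=60.0):
        step = 2 * half_range_deg / (n - 1)
        for i in range(n):            # tilt about the horizontal axis
            for j in range(n):        # tilt about the vertical axis
                yield (-half_range_deg + i * step,
                       -half_range_deg + j * step)

    angles = list(exposure_angles())
    print(len(angles))                # 10,000 exposures, as estimated above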
  

       I'm not sure how difficult it would be to make the Luneburg microlenses. Spheres are generally easy to make because the symmetry puts them at the bottom of an energy well. That is, droplets and bubbles will naturally form spherically.   

       The layers could be made of uniform depth either through equilibrium of surface tension or through self-assembling molecules.   

       Polymers can be made with precisely controlled refractive index.   

       Once the Luneburg microlenses are formed, arranging them in a regular array would be trivial, e.g. with an apertured plate.
xaviergisz, Jan 03 2020
  

       An alternative way of making the video version would be with a flat (super-high-resolution) display.   

       The flat display is covered in adjacent optic fibres forming a layer. The optic fibre layer would be machined with an array of hemispherical recesses in which the Luneburg lenses would fit.
xaviergisz, Jan 04 2020
  

       I feel like an entire display, with lenses, could be produced photolithographically. We already have the tech to produce nanometer-scale transistors and whatnot, surely layering up tiny lenses can't be all that difficult.
mitxela, Jan 05 2020
  

       Re the photographic film implementation: I think you'll need a projector rather than a screen, but I think it should work otherwise.   

       Re the photolithographic implementation: I'd use photolithography to make the pixels and subpixels (maybe actually "angels" pronounced like "angles", I guess), with recesses for the lenses to fit into, and then just pour the lenses on. (They can be coated with UV-activated glue that's only activated after they're vibrated into place and the excess ones are shaken off.)   

       If you make the lenses photolithographically, I expect there'll be roughness on their surfaces and also inside them (i.e. each isosurface of refractive index will have spatial quantization noise). That might not be a problem if you make them with sufficiently shortwave light (or electrons) and then use them at visible wavelengths, though, as long as the roughness is much smaller than the visible light's wavelength.
notexactly, Jan 05 2020
  

       //I feel like an entire display, with lenses, could be produced photolithographically. We already have the tech to produce nanometer-scale transistors and whatnot, surely layering up tiny lenses can't be all that difficult.//   

       Maybe for a prototype, but for mass production doing it photolithographically seems very inefficient.   

       //Re the photographic film implementation: I think you'll need a projector rather than a screen, but I think it should work otherwise.//   

       Yep, a projector or screen that is highly collimated, so the image is projected in a single direction and each screen/projector pixel exposes a single corresponding pixel behind each Luneburg lens at each angle. This collimation could be done with an array of tubes (with black interior surfaces).   

       Instead of the screen/projector moving to different angles, the screen/projector could be stationary while the Luneburg array device moved to different angles.   

       The exposure process could be done continuously, so the screen/projector would play an animation of the 10,000 frames while the device was pivoted in a spiral motion.
xaviergisz, Jan 05 2020
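One way to lay out such a spiral is a sunflower (Fermat) pattern, so the 10,000 tilt orientations cover the viewing cone evenly while the device pivots smoothly between consecutive animation frames. A sketch under assumed numbers (60° maximum tilt):

    import math

    # 10,000 tilt orientations on a sunflower (Fermat) spiral:
    # tilt magnitude grows with the square root of the frame index,
    # tilt direction advances by the golden angle each frame.
    def spiral_tilts(n=10_000, max_tilt_deg=60.0):
        golden = math.pi * (3 - math.sqrt(5))   # golden angle, radians
        for k in range(n):
            tilt = max_tilt_deg * math.sqrt((k + 0.5) / n)
            azimuth = math.degrees(k * golden) % 360.0
            yield tilt, azimuth

    frames = list(spiral_tilts())
    print(len(frames), frames[0], frames[-1])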
  

       Umm..what happens to the other 9,999 frames, that's a serious storage problem.
not_morrison_rm, Jan 06 2020
  

       //Umm..what happens to the other 9,999 frames, that's a serious storage problem//   

       So all 10,000 frames (let's say each frame is a full HD image, 1080×1920) would be squeezed into one device. That is 10000x1080x1920 pixels (20,736,000,000 pixels) in total.   

       At any instant a viewer with two eyes would only see two slightly different full HD images (forming a single stereoscopic image in their mind).
xaviergisz, Jan 06 2020
  

       'spose how thick the wood be on the frames, maybe aluminum ?
not_morrison_rm, Jan 06 2020
  

       It wouldn't be necessary to store all 10,000 frames. Compression would make it trivial to bring it down to 100 frames.
Voice, Jan 07 2020
  

       //It wouldn't be necessary to store all 10,000 frames. Compression would make it trivial to bring it down to 100 frames.//   

       Not sure what you mean. Could you explain this a bit more?
xaviergisz, Jan 07 2020
  

       ^ the computer clock rate of processing in between the needed viewing frame rate.
wjt, Jan 08 2020
  

       You don't need to store 10,000 frames at once, just enough voxels to paint the whole 3D picture.
Voice, Jan 08 2020
  

       The stored voxel data model still has to be processed to produce the data that the pixels under the lenses need to display, for an image that the viewer can again process into thought.
wjt, Jan 10 2020
  

       // a projector or screen that is super collimated so the image is projected in a single direction so that each screen/projector pixel exposes a single corresponding pixel behind the Luneburg lens at each angle. //   

       I'd look into telecentric lenses.   

       Alternatively, you could expose the film by pressing it directly against a very pixel-dense screen, before applying the lenses. But then you have the problem of aligning the lenses with the printing again.   

       On the other hand, I just realized: with the lenses in the way, how do you develop the film?   

       // 10,000 frames //   

       // [storage] //   

       Presumably, unless you were mass-producing identical 3D images (and maybe even then), you'd just store the 3D model and render each angle on the fly. Much less storage required in that case, but more processing if you're making more than one.   

       // while the device was pivoted in a spiral motion //   

       Makes sense, but I think it would need to be carefully designed and calibrated such that the step size between frames and between turns of the spiral was exactly the same as the spot size of the Luneburg lenses. If the step size is bigger, then, at certain angles, a viewer will get only a black image. If it's smaller, then the viewer will get a blurry image. I think.   

       ETA: I've also just realized that either the display needs to be backlit or every viewer needs a headlamp.
notexactly, Jan 15 2020
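A rough check of the step-size versus spot-size matching, using the 500µm lens and 5µm pixel figures from earlier annotations. A Luneburg lens focuses a parallel beam onto the diametrically opposite point of its surface, so a pixel of width p at radius R subtends roughly p/R radians of viewing angle:

    import math

    R_um = 250.0   # lens radius (500 um diameter)
    p_um = 5.0     # microimage pixel width
    spot_deg = math.degrees(p_um / R_um)
    print(f"angular spot size ~ {spot_deg:.2f} degrees per pixel")
    # ~1.15 degrees, so the exposure spiral should advance about that much
    # between frames and between turns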
  

       //On the other hand, I just realized: with the lenses in the way, how do you develop the film?//   

       I suppose you could carefully remove the lenses, develop the film, and then replace the lenses.   

       // 10,000 frames //         // [storage] //     

       Storing 10,000 frames at HD quality is trivial; it's 8 minutes of HD video. That covers imprinting the static image into the film. An active 3D display, however, would have to deliver 20 billion pixels per frame, and I want 60 frames per second, so 1.2 trillion pixels per second. Each pixel would be 3 bytes, so 3.6 terabytes per second. A completely uncompressed 90 minute movie would be about 20 petabytes. I think compression of 100:1 would be feasible, so a mere 200 terabytes. A terabyte of storage (HDD) costs $18, so $3,600 to store a movie.   

       //I've also just realized that either the display needs to be backlit or every viewer needs a headlamp.//   

       Then backlit it is.
xaviergisz, Jan 15 2020
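The same arithmetic in runnable form (frame size, frame rate, bytes per pixel, 100:1 compression and the $18/TB HDD price are the figures quoted in the annotation; the exact totals come out slightly above the rounded values given there):

    pixels_per_frame = 10_000 * 1080 * 1920   # ~20.7 billion pixels per 3D frame
    fps = 60
    bytes_per_pixel = 3
    rate_Bps = pixels_per_frame * fps * bytes_per_pixel
    print(f"{rate_Bps / 1e12:.1f} TB/s uncompressed")        # ~3.7 TB/s

    movie_s = 90 * 60
    raw_bytes = rate_Bps * movie_s
    print(f"{raw_bytes / 1e15:.0f} PB per 90 minute movie")  # ~20 PB

    compressed_TB = raw_bytes / 100 / 1e12                   # assume 100:1 compression
    print(f"{compressed_TB:.0f} TB compressed, ~${compressed_TB * 18:,.0f} of HDD")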
  


 
