Holograms and the Ideal Display
Holograms are also covered in the Misleading Terms area of the appendix, but I think it is worth covering a few stray thoughts I have about this sort of "holy grail" concept of a hologram and an ideal display type.
"Hologram" is an overused term that tends to feel a bit meaningless, but it also points to a sort of promise of an ideal display. Everything from light field, to autostereoscopic, to persistence-of-vision displays gets branded as a "hologram" to suggest something more than 2D, even when that's all it is. Is there a way we can look at some commonalities among what exists as a "hologram" display today and what might be needed to get to some next stages?
The characteristics of a (heavy air quotes) "hologram" display often include:
Image seems to float against a real-world background utilizing parallax to trick us into a sense of depth (Pepper's ghost, persistence of vision, projection on scrim, transparent LCD/OLED, etc)
Some sense of stereoscopic depth/alternate views of an object to tickle the brain into thinking it is a real object (autostereoscopic, light field displays)
Some combination of the above (swept volume displays, volumetric projection)
However, a real hologram - or an ideal holographic display - would still need to meet some common requests that most current displays don't:
Show a full black to white image in any lighting environment, no matter how dark or bright the ambient environment is.
Be able to scale and be viewable from the smallest detail to a very large size (changes in depth cues would be a big thing to address at that scale)
Be transparent to the environment or not necessarily contained within a frame or require looking into the display (like seeing a floating object from a full 180° or even 360° off the surface of the display).
Render things from diffuse points of light to crisp details.
React to their environment (more on this below).
The Promise of a Hologram
The drive to market a display as a hologram stems from a couple of factors. Firstly, there's the cool factor of a novel display that enhances otherwise less exciting content. People might also want something more immersive than a glowing rectangle on a wall. Secondly, research suggests that holograms can streamline information processing. Part of why the medical and defense industries have invested a fair amount in these technologies is that our brains, which evolved to process fully 3D or 4D scenes, can take in that kind of content faster. Looking at a flat map takes longer to understand, and may contain less critical information than a fully 3D map.
I sometimes wonder where that thread towards a sort of "perfect holographic display" leads - is it just a desire to essentially manipulate visual reality itself? If we had truly holographic displays tomorrow that could manipulate realistic visual reality at room scale or larger, as sci-fi tin-foil-hat as it sounds, that sounds like borderline dangerous technology for anyone to have in their possession. To achieve anything close to that level of realism still feels decades off, but it can be fun/terrifying to think about what this could all be headed towards.
I think that even the famous Princess Leia and "Minority Report" holograms wouldn't feel like enough to some people if we got there. Someone would want something more crisp, more colorful, more opaque, larger - the arms race for 2D displays certainly went that route, and even head-mounted/AR displays strive for a similar goal. To be fair, making most of the above happen would essentially require many new levels of understanding about not only our physical reality, but also our ability to manipulate and steer photons in mid-air. Additionally, along with the floating images, we would probably need to add physical sensations as well - touch, texture, heat, pressure, etc.
What might come next?
While much of what I've covered in this survey focuses on displays as an output mechanism, I think that to truly get to the realism of a promised hologram, there is still a bit of nascent research to be done around the inputs to displays and their content. By inputs, I don't just mean HDMI cables, physical sensors, touchscreens, and other interactives - I mean the capture of the light field surrounding the display.
Most displays these days are essentially blind to the world around them. Some current displays and devices have built in tricks for sensing ambient light and adjusting brightness or color, but that's about it for the popularized adjustment capabilities. If we got to the point of realistic holograms but still had to light the 3D scenes in the same way we do today, I think everyone would quickly realize that the next hurdle to realism is capturing the world around the display.
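To make the current state of the art concrete, the ambient-light trick mentioned above usually amounts to little more than mapping a lux reading to a brightness target. Here's a minimal sketch of that idea; the function name, lux range, and nit values are illustrative assumptions of mine, not any real device's tone-mapping curve:

```python
import math

def adapt_brightness(ambient_lux: float,
                     min_nits: float = 50.0,
                     max_nits: float = 1000.0) -> float:
    """Map ambient illuminance (lux) to a target display luminance (nits).

    Perceived brightness is roughly logarithmic, so this interpolates on
    a log scale between a dim indoor room (~10 lux) and direct sunlight
    (~100,000 lux). All constants are illustrative, not from a real device.
    """
    lux = max(ambient_lux, 1.0)  # guard against log(0)
    # Normalize log10(lux) into [0, 1] across the 10..100,000 lux range.
    t = (math.log10(lux) - 1.0) / 4.0
    t = min(max(t, 0.0), 1.0)
    return min_nits + t * (max_nits - min_nits)
```

That one scalar adjustment is essentially the whole extent of today's popularized environmental awareness - a single photodiode's worth of information, compared to the full light field a "reactive hologram" would need.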
The hologram should, in an ideal scenario, react to changes in the environmental lighting conditions just as a regular physical object would. I should be able to shine a flashlight on it, cast a shadow onto it, or bring it inside or outside; I should see myself reflected in its shiny surfaces, and see through its transparent parts while its opaque parts block light - and it would all need to happen with extremely low latency.
Right now we have displays we send pixels to, and some may have sensors, but a full 360° light field sensor integrated into a light field display would make things that much more impressive. Showing a nice shiny rendered sphere floating in space that reacted to all real-world lighting cues would be mind-blowing. XR Studios understand a bit of this technique in terms of using environmental LEDs to cast 3D scene light onto the IRL subjects being filmed, but turning it all inward feels nutty with our current understanding. There are also examples out there of things like using a fully spatially tracked flashlight to cast light or shadow onto a 3D scene - it's a clever trick and looks really cool, but obviously still has its limitations.
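The core of that tracked-flashlight trick is just classical shading math: once you know where the real light is, you relight each virtual surface point accordingly. Here's a toy single-point version using Lambertian (diffuse) shading with inverse-square falloff; the function and its parameters are my own illustrative sketch, not code from any of the projects mentioned:

```python
import math

def lambert_shade(normal, light_pos, surface_pos,
                  albedo=0.8, intensity=1.0):
    """Diffuse shading of one surface point lit by a tracked point light,
    i.e. a toy version of the 'tracked flashlight' relighting trick.
    Positions and normals are (x, y, z) tuples; the normal is unit length.
    """
    # Direction from the surface point toward the light.
    lx, ly, lz = (light_pos[i] - surface_pos[i] for i in range(3))
    dist = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / dist, ly / dist, lz / dist
    # Lambert's cosine law: brightness scales with max(0, N · L),
    # plus simple inverse-square falloff for the point light.
    ndotl = max(0.0, normal[0] * lx + normal[1] * ly + normal[2] * lz)
    return albedo * intensity * ndotl / (dist * dist)
```

Run per pixel with the flashlight's tracked position fed in each frame, this is enough for the "light or shadow cast onto a 3D scene" effect; the genuinely hard, unsolved part is the opposite direction - capturing the full real-world light field well enough to drive shading like this for arbitrary environments.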
While there has been research on light field cameras like the Lytro from years past, I haven't come across a ton of projects that attempt to incorporate light field capture AND light field output. If anyone sees anything like that, please let me know!