
In recent years, virtual reality (VR) headsets have been all the rage when it comes to video viewing, largely because of the immersive experience they promise. Despite the hype, however, VR headsets have yet to topple TVs and computer screens as the main mode of video viewing.

The primary reason is that many users claim to feel nauseated and experience eye strain when using VR headsets, a consequence of the illusion of a 3D viewing experience created by staring at a fixed 2D display.

A new study, however, reveals that AI technology may help produce real-time 3D holograms without the constraints of today's VR.

Hologram Experiences

Holograms offer a one-of-a-kind representation of the 3D world, shifting perspective based on the viewer's position and allowing the human eye to adjust focal depth.

For years, researchers have pursued computer-generated holograms, but the traditional process requires supercomputers churning through time-consuming physics simulations, and the results are still less than photorealistic.

Findings published in the journal Nature under the title "Towards real-time photorealistic 3D holography with deep neural networks" show how MIT researchers developed a way to produce holograms almost instantly. The deep learning-based method is so efficient that rendering can run on a laptop in the blink of an eye.

Liang Shi, lead author and a Ph.D. student in MIT's Department of Electrical Engineering and Computer Science, says that people long believed real-time 3D holographic computation was impossible on today's consumer-grade hardware.

Shi believes that the new approach, which the team calls "tensor holography," will finally bring the decade-old goal of real-time holographic imagery within reach. The advancement could fuel a spillover of holography into fields like 3D printing and VR.

Hologram (Photo: Ali Pazani from Pexels)



The Search for Better 3D Experiences

Typical photography encodes only the brightness of each light wave, which is why a photo can faithfully reproduce a scene's colors in a flat image.

Holograms, on the other hand, encode both the phase and the brightness of each light wave. The combination delivers a truer representation of the scene's parallax and depth. So while a photograph may capture a painting's color palette, a hologram can bring the image to life, rendering the unique 3D texture of each brushstroke. Unfortunately, despite the realism they promise, holograms are incredibly challenging to make.
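The difference between the two encodings can be sketched with a few lines of numpy. This is a toy illustration, not part of the study: each pixel's light wave is modeled as a complex number whose magnitude is the brightness and whose angle is the phase. A photo-like measurement keeps only the magnitude, while the full complex field retains both pieces of information.

```python
import numpy as np

# Toy 2x2 "scene": brightness (amplitude) and phase per pixel.
amplitude = np.array([[1.0, 0.5], [0.25, 0.75]])
phase = np.array([[0.0, np.pi / 2], [np.pi / 3, -np.pi / 4]])

# A light wave can be written as amplitude * exp(i * phase).
field = amplitude * np.exp(1j * phase)

# A photograph discards the phase entirely, keeping only intensity:
photo = np.abs(field) ** 2

# From the full complex field, both components remain recoverable:
recovered_amplitude = np.abs(field)
recovered_phase = np.angle(field)

print(np.allclose(recovered_amplitude, amplitude))  # True
print(np.allclose(recovered_phase, phase))          # True
```

The recoverable phase is what lets a hologram reproduce parallax and depth cues that a flat photograph cannot.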

When holograms were first conceived in the mid-1900s, they were recorded optically: a laser beam was split, with half illuminating the subject and the other half serving as a reference for the light waves.

Modern computer-generated holograms sidestep these optical challenges entirely by simulating the setup in software.

The authors of the study used deep learning to accelerate computer-generated holography, allowing real-time holograms to be generated.

To generate real-time holograms, the MIT team designed a convolutional neural network that uses a chain of trainable tensors to roughly mimic how humans process visual information. A neural network of this kind requires a high-quality training dataset, so the researchers created a custom database of 4,000 pairs of computer-generated images.
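The "chain of trainable tensors" idea can be sketched in miniature. The code below is a hypothetical illustration, not the study's actual network: a stack of small randomly initialized convolution kernels maps a toy RGB-plus-depth input to a two-channel output standing in for amplitude and phase. The layer sizes and the naive convolution are assumptions for readability.

```python
import numpy as np

def conv2d(x, kernels):
    """Naive same-padded 2D convolution.
    x: (H, W, C_in); kernels: (k, k, C_in, C_out) -> (H, W, C_out)."""
    k = kernels.shape[0]
    pad = k // 2
    H, W, _ = x.shape
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, kernels.shape[-1]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]            # (k, k, C_in)
            out[i, j] = np.tensordot(patch, kernels, axes=3)
    return out

rng = np.random.default_rng(0)

# Hypothetical input: a 16x16 image with 4 channels (RGB + depth).
rgbd = rng.random((16, 16, 4))

# A chain of trainable tensors: here, three 3x3 convolution kernels.
# In a real network these weights would be learned from image/hologram pairs.
layers = [rng.standard_normal((3, 3, 4, 8)) * 0.1,
          rng.standard_normal((3, 3, 8, 8)) * 0.1,
          rng.standard_normal((3, 3, 8, 2)) * 0.1]   # 2 outputs

h = rgbd
for w in layers[:-1]:
    h = np.maximum(conv2d(h, w), 0.0)   # ReLU nonlinearity between stages
h = conv2d(h, layers[-1])               # final stage: no activation

amplitude, phase = h[..., 0], h[..., 1]
print(amplitude.shape, phase.shape)     # (16, 16) (16, 16)
```

Because each stage is just a tensor contraction, the whole forward pass is cheap enough to run on commodity hardware, which is the intuition behind why a trained network can replace slow physics simulation.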



Check out more news and information on Tech & Innovation on Science Times.