High-fidelity Stereo Imaging Depth Estimation for Metaverse Devices
This project develops high-fidelity stereo RGBD imaging for Metaverse devices.
This work proposes to design and realize a fast, accurate 3D imaging architecture that Metaverse VR headsets can leverage. Unlike previous imaging frameworks, we exploit optical encoding as a prior, together with the joint optimization structure of deep optics, to explore a new and improved framework for high-resolution RGBD imaging. A state-of-the-art stereo-matching algorithm is explored jointly with optimized lenses and an image recovery network. Conventional imaging algorithms consider only the post-processing of captured images, which is less flexible and computationally expensive. Our deep stereo depth estimation instead integrates optical preprocessing and encoding into advanced decoding neural networks to achieve more accurate RGBD imaging at higher resolution. It fully exploits the information captured by a stereo camera with a pair of asymmetric lenses in a Metaverse device, yielding an extended depth range for all-in-focus imaging.
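To make the joint optics-and-network optimization concrete, the following is a minimal PyTorch-style sketch, not the project's actual implementation: the lens model (a learnable blur kernel standing in for a differentiable optical encoder), the tiny depth decoder, and all names such as LearnableLens and DepthDecoder are illustrative assumptions. It only shows how gradients from a depth loss can flow back through both asymmetric lens models and the reconstruction network in one training loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableLens(nn.Module):
    """Crude differentiable stand-in for an optical encoder:
    a learnable blur kernel (PSF proxy) applied per channel."""
    def __init__(self, kernel_size=11):
        super().__init__()
        k = torch.zeros(kernel_size, kernel_size)
        k[kernel_size // 2, kernel_size // 2] = 1.0  # initialize as a pinhole (identity)
        self.kernel = nn.Parameter(k)

    def forward(self, img):
        ks = self.kernel.shape[-1]
        # Normalize the kernel so it behaves like a physical PSF (non-negative, sums to 1).
        k = torch.softmax(self.kernel.flatten(), dim=0).view(1, 1, ks, ks)
        k = k.repeat(img.shape[1], 1, 1, 1)          # one copy per image channel
        return F.conv2d(img, k, padding=ks // 2, groups=img.shape[1])

class DepthDecoder(nn.Module):
    """Tiny encoder that maps a concatenated stereo pair to a dense depth map;
    a real system would use a full stereo-matching network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=1))

# Joint optimization: the depth loss updates the decoder *and* both
# (asymmetric) lens models in the same backward pass.
lens_left, lens_right = LearnableLens(), LearnableLens()
decoder = DepthDecoder()
params = list(lens_left.parameters()) + list(lens_right.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)

# Dummy batch standing in for a rendered scene with ground-truth depth.
left_gt = torch.rand(2, 3, 64, 64)
right_gt = torch.rand(2, 3, 64, 64)
depth_gt = torch.rand(2, 1, 64, 64)

for step in range(5):
    optimizer.zero_grad()
    left_coded = lens_left(left_gt)      # optical encoding, left view
    right_coded = lens_right(right_gt)   # optical encoding, right view
    depth_pred = decoder(left_coded, right_coded)
    loss = F.l1_loss(depth_pred, depth_gt)
    loss.backward()
    optimizer.step()
```

In an actual deep-optics pipeline the blur kernel would be replaced by a physically based, depth-dependent PSF model of the lens, and the decoder by a stereo-matching and image-recovery network, but the joint training structure remains the same.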