|Purpose||The goal of this research is to develop a 3D imaging system for producing 3D digital content. Currently, creating 3D content requires an imaging system that is both expensive and considerably larger than commercial 2D imaging systems. This project proposes to design and implement a system that integrates the depth-from-defocus method currently used by Dual Aperture with stereo techniques. To achieve this goal, we will design a deep learning architecture for depth from defocus and, using the information it provides, obtain an accurate depth map from the image data.|
|Contents||To develop the 3D imaging system, this research project addresses the following research challenges in 3D imaging:
● Multi-sensor based 3D imaging approach: Traditional passive stereoscopic approaches work well in real time only when the scene has enough surface features, and only at low resolution, although they do function consistently outdoors. This project will develop a novel stereoscopic imaging system with localization and a novel optical component, allowing us to reconstruct 3D models outdoors at an improved resolution.
● Depth estimation algorithm on a coded aperture system: The images obtained from the coded aperture system consist of a visible-spectrum image and an infrared-spectrum image. The visible-spectrum image is blurry, while the infrared-spectrum image is all-in-focus. We will therefore estimate depth from the difference in blur between the two images.
● Depth from defocus with deep learning, and stereo fusion with dual aperture techniques: A dual aperture camera exploits defocus cues to estimate scene geometry. We will combine stereo techniques with these defocus cues to obtain robust depth estimates. We will also develop a deep learning approach to predict depth from defocus cues effectively and efficiently.
● An efficient algorithm dedicated to the designed optical system: After acquiring high-dimensional data of the real world, efficient algorithms need to be developed to support the feasibility of applications that use this data.|
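The blur-difference idea above (an all-in-focus infrared image versus a defocused visible image) can be sketched with a simple brute-force search: re-blur the sharp image with candidate Gaussian widths and keep the width that best reproduces the blurry image; depth then follows from that width via the lens model. This is only a minimal NumPy/SciPy illustration of the principle, not the project's actual algorithm, and the function name and candidate sigma grid are our own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_blur_sigma(sharp, blurry, sigmas=np.linspace(0.25, 4.0, 16)):
    """Find the Gaussian blur width that best maps the all-in-focus
    (e.g. infrared) image onto the defocused (visible) image.
    Depth can then be derived from sigma via the thin-lens model."""
    errors = [np.mean((gaussian_filter(sharp, s) - blurry) ** 2)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]

# Synthetic check: blur a random "sharp" image with a known sigma,
# then recover that sigma from the image pair.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurry = gaussian_filter(sharp, 1.5)
print(estimate_blur_sigma(sharp, blurry))  # → 1.5
```

Applying the same search per image patch instead of globally yields a spatially varying blur map, i.e. a raw depth map.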
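The stereo component that the fusion step builds on can be illustrated with classic sum-of-absolute-differences block matching: for each pixel in the left image, search a range of horizontal shifts in the right image and keep the shift with the lowest patch difference. This is a hedged, brute-force NumPy sketch for exposition only; the function name, patch size, and disparity range are illustrative assumptions, and a real system would use a far more efficient implementation.

```python
import numpy as np

def block_match_disparity(left, right, patch=5, max_disp=16):
    """Brute-force SAD block matching: for each left-image pixel, find
    the horizontal shift into the right image that minimizes the sum of
    absolute differences over a small patch."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic check: shift a random image by 4 pixels and recover the shift.
rng = np.random.default_rng(1)
left = rng.random((32, 48))
right = np.roll(left, -4, axis=1)  # true disparity of 4 pixels
disp = block_match_disparity(left, right)
print(disp[16, 20])  # → 4
```

Fusing such a disparity map with the defocus-based depth estimate (for example, by confidence-weighted averaging) is one way to obtain the robust depth estimates the project targets.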
|Expected Contribution||The mobile 3D imaging system developed in this project will allow users to capture 3D solid objects easily with their mobile devices. Such 3D content could serve as input to 3D printers and 3D light-field displays. In addition, this 3D imaging system could be integrated into a touchless computing interface, in which hand and body motion is used to control a TV or characters in a video game. We also expect our 3D imaging system to serve as a 3D sensing unit for robots and self-driving vehicles, which require real-time performance.|
|1||Multispectral Photometric Stereo for Acquiring High-Fidelity Surface Normals||2014.11.||Phase 2