We describe a method for producing detailed high-resolution depth maps from aggressively subsampled depth measurements. The method exploits the relationship between image segmentation boundaries and depth boundaries, taking as input an image together with a low-resolution depth map. 1) The image is segmented under the guidance of the sparse depth samples. 2) The depth field of each segment is reconstructed independently using a novel smoothing method. 3) For videos, time-stamped samples from nearby frames are incorporated. We show reconstruction results at super-resolution factors from x4 to x100, whereas previous methods mainly operate at x2 to x16. Tested on four different datasets and six video sequences covering quite different regimes, our method outperforms recent state-of-the-art methods both quantitatively and qualitatively. We also demonstrate that depth maps produced by our method can drive applications such as hand trackers, while depth maps produced by other methods are problematic.
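The segment-then-reconstruct idea in the abstract can be illustrated with a toy sketch. This is not the paper's algorithm (which uses a novel smoothing method per segment); it is a minimal stand-in that fills each precomputed segment with the mean of the sparse depth samples falling inside it, giving a piecewise-constant depth map. The function name and array conventions below are our own assumptions.

```python
import numpy as np

def upsample_depth(segments, sparse_depth, mask):
    """Toy segment-wise depth fill (piecewise-constant stand-in for the
    paper's per-segment smoothing reconstruction).

    segments:     (H, W) integer array of segment labels from any
                  image segmentation (hypothetical input here)
    sparse_depth: (H, W) float array, valid only where mask is True
    mask:         (H, W) bool array marking measured depth samples
    """
    dense = np.zeros_like(sparse_depth)
    for label in np.unique(segments):
        seg = segments == label
        samples = sparse_depth[seg & mask]
        if samples.size:
            # Each segment is reconstructed independently, as in step 2
            # of the pipeline; here simply by averaging its samples.
            dense[seg] = samples.mean()
    return dense

# Toy example: a 4x4 image split into two vertical segments,
# with one depth sample in each segment.
segments = np.repeat(np.array([[0, 0, 1, 1]]), 4, axis=0)
sparse = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
sparse[0, 0], mask[0, 0] = 1.0, True   # sample in segment 0
sparse[0, 3], mask[0, 3] = 2.0, True   # sample in segment 1
dense = upsample_depth(segments, sparse, mask)
```

Because each segment is handled independently, the filled depth changes sharply at the segment boundary, mirroring the assumption that segmentation boundaries and depth boundaries coincide.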
Static depth images
Static optical flow images
Depth videos
Comparison to KinectFusion
Depth from a single image
Paper: Download (8.35 MB)
Supplementary: Download (12.9 MB)
Code & Data: Download (23.4 MB)
This work was supported in part by the National Science Foundation under Grants No. NSF IIS 09-16014 and IIS-1421521, and in part by ONR MURI Award N00014-10-10934.
Please email us if you have any questions.