Incremental Dense Reconstruction From Monocular Video With Guided Sparse Feature Volume Fusion

Abstract
Incrementally recovering dense 3D structure from monocular videos is of paramount importance, since it enables various robotics and AR applications. Feature volumes have recently been shown to enable efficient and accurate incremental dense reconstruction without the need to first estimate depth, but they cannot achieve as high a resolution as depth-based methods due to the large memory consumption of high-resolution feature volumes. This letter proposes a real-time feature volume-based dense reconstruction method that predicts TSDF (Truncated Signed Distance Function) values from a novel sparsified deep feature volume. It achieves higher resolutions than previous feature volume-based methods, and is favorable in large-scale outdoor scenarios where the majority of voxels are empty. An uncertainty-aware multi-view stereo (MVS) network is leveraged to infer initial voxel locations of the physical surface in a sparse feature volume. To refine the recovered 3D geometry, deep features are then attentively aggregated from multi-view images at potential surface locations and fused temporally. Besides achieving higher resolutions than prior work, our method produces more complete reconstructions with finer detail in many cases. Extensive evaluations on both public and self-collected datasets demonstrate very competitive real-time reconstruction results for our method compared to state-of-the-art methods in both indoor and outdoor settings.
Description
This article was originally published in IEEE Robotics and Automation Letters. The version of record is available at: https://doi.org/10.1109/LRA.2023.3273509. © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Keywords
Monocular dense mapping, neural implicit representation, feature volume fusion
Citation
X. Zuo, N. Yang, N. Merrill, B. Xu and S. Leutenegger, "Incremental Dense Reconstruction From Monocular Video With Guided Sparse Feature Volume Fusion," in IEEE Robotics and Automation Letters, vol. 8, no. 6, pp. 3876-3883, June 2023, doi: 10.1109/LRA.2023.3273509.