Physics-based dynamic reconstruction of deformable objects

Date
2019
Publisher
University of Delaware
Abstract
The renewed interest in virtual reality (VR) and augmented reality (AR) has created new demands for dynamic reconstruction, i.e., scanning real objects efficiently and accurately from all directions. Several challenges remain. First, 3D reconstructions produced by traditional photogrammetry or multi-view geometry are heavily corrupted by occlusions, noise, a limited field of view, etc. Second, because of the demand for high quality, rendering dynamic reconstruction results consumes a significant amount of disk space and memory, which is impractical when an ordinary user wants to access a longer free-viewpoint 3D video. Third, reliable human parts segmentation in images plays an important role in 3D reconstruction tasks; while significant progress has been made on human pose estimation, performance on human parts segmentation remains unsatisfactory. Finally, recovering time-dependent volumetric 3D fluid flow is a challenging task because the particles lie at different depths yet have similar appearance, making it particularly difficult to track a large number of them.

In this dissertation, for deformable shape completion, I present a graph-based non-rigid shape registration framework that can simultaneously recover 3D human body geometry and estimate pose/motion at high fidelity. My approach first generates a global full-body template by registering all poses in the acquired motion sequence, and then constructs a deformable graph utilizing the rigid components of the global template. The global template graph can be used directly to warp each motion frame as well as to fill in missing geometry. Specifically, I combine local rigidity and temporal coherence constraints to maintain motion and geometry consistency.

For deformable shape correspondence and compression, I present an end-to-end deep learning scheme to establish dense shape correspondences and subsequently compress the data. My approach uses a sparse set of panoramic depth maps (PDMs), each emulating an inward-viewing concentric mosaic (CM). It then applies a learning-based technique to extract pixel-wise feature descriptors on the PDMs, and finally feeds the results into an autoencoder-based network for compression.

To improve the reconstruction results, I present a novel technique, which I call Pose2Body, that robustly conducts human parts segmentation based on pose estimation results. I partition an image into superpixels and set out to assign to each superpixel the segment label most consistent with the pose. I then design special feature vectors for every superpixel-label assignment as well as for superpixel-superpixel pairs, and model optimal labeling as inference in a conditional random field (CRF). In addition, the segmentation results can further improve 3D reconstruction by effectively removing outliers and accelerating feature matching.

Finally, I present a light-field-based 3D deformable particle reconstruction and matching scheme that I call light field PIV. I exploit the refocusing capability and focal symmetry constraint of the light field for reliable particle depth estimation. I further propose a new motion-constrained optical flow estimation scheme that enforces local motion rigidity and the Navier-Stokes constraint. Comprehensive experiments on synthetic and real data show that my technique can recover dense and accurate 3D fluid flows in small to medium volumes using a single light field camera.
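The graph-based warping described for shape completion can be illustrated with a minimal sketch of an embedded-deformation-style warp: each surface point is moved by a weighted blend of the rigid transforms attached to its nearest graph nodes. This is an illustrative reconstruction of the general technique, not the dissertation's code; the function and parameter names (`warp_points`, `k`, `sigma`) are hypothetical, and the node transforms are assumed to be given by an upstream registration step.

```python
import numpy as np

def warp_points(points, nodes, rotations, translations, k=4, sigma=0.1):
    """Warp points with a deformable graph: each point is displaced by a
    weighted blend of the rigid transforms of its k nearest graph nodes.

    points:       (P, 3) surface points to warp
    nodes:        (N, 3) graph node positions
    rotations:    (N, 3, 3) per-node rotation matrices
    translations: (N, 3) per-node translations
    """
    warped = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(nodes - p, axis=1)
        nn = np.argsort(d)[:k]                 # indices of k nearest nodes
        w = np.exp(-(d[nn] / sigma) ** 2)      # Gaussian blending weights
        w /= w.sum()
        # Blend each node's rigid transform applied to the point.
        q = np.zeros(3)
        for j, wj in zip(nn, w):
            q += wj * (rotations[j] @ (p - nodes[j]) + nodes[j] + translations[j])
        warped[i] = q
    return warped
```

In a full pipeline, the per-node rotations and translations would be optimized under the local rigidity and temporal coherence constraints mentioned above; with identity rotations and zero translations the warp leaves every point unchanged.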