This article presents the first surfel-based method for multi-view 3D reconstruction of the boundary between known and unknown space. The proposed approach integrates multiple views from a moving depth camera and generates a set of surfels that encloses the observed empty space, i.e., it models both the boundary between empty and occupied space and the boundary between empty and unknown space. A novelty of the method is that it does not require a persistent voxel map of the environment to distinguish unknown from empty space. Instead, an incremental algorithm computes the Boolean union of two surfel-bounded volumes: the known volume from previous frames and the space observed in the current depth image. Several strategies were developed to cope with errors in surfel position and orientation. The method, implemented on both CPU and GPU, was evaluated on real data acquired in indoor scenarios and compared against state-of-the-art approaches. Results show that the proposed method yields few false positives and false negatives, is faster than a standard volumetric algorithm, consumes less memory, and scales better to large environments.
Item type: 1.1 Journal article