This paper investigates the use of a real-time self-filter for a robot manipulator in next best view planning tasks. The robot is equipped with a depth sensor in an eye-in-hand configuration. At each iteration, the next best view algorithm selects an optimal view pose for the sensor that maximizes information gain for 3D reconstruction of a region of interest. An OpenGL-based filter is adopted that determines which pixels of the depth image correspond to robot self-observations. The filter was adapted to work with KinectFusion volumetric 3D reconstruction. Experiments were performed in a real scenario. Results indicate that removing robot self-observations prevents artifacts in the final 3D representation of the environment. Moreover, view poses from which the robot would occlude the target regions are successfully avoided. Finally, it is shown that a convex-hull robot model is preferable to a tight 3D CAD model, and that the filter can be integrated with a surfel-based next best view planner with negligible overhead.
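The self-filtering idea described above (render the robot model into a depth buffer from the sensor's pose, then discard sensor pixels that fall on or in front of the rendered robot surface) can be sketched as a simple depth comparison. The function below is a minimal illustration, not the paper's implementation: the inputs `sensor_depth` and `robot_depth` are hypothetical arrays standing in for the sensor's depth image and an OpenGL-rendered depth image of the robot model from the same camera pose.

```python
import numpy as np

def self_filter_mask(sensor_depth, robot_depth, tolerance=0.02):
    """Flag pixels that are likely robot self-observations.

    sensor_depth: depth image from the eye-in-hand sensor (meters).
    robot_depth:  depth image of the robot model rendered from the same
                  camera pose (e.g., via OpenGL); np.inf where the model
                  does not project onto the pixel.
    tolerance:    slack (meters) absorbing sensor noise and calibration
                  error; the value here is an arbitrary placeholder.
    Returns a boolean mask (True = self-observation) whose pixels would
    be discarded before volumetric fusion.
    """
    # A pixel is a self-observation when the robot model projects onto it
    # and the measured depth lies at or in front of the rendered robot
    # surface, within the tolerance.
    return np.isfinite(robot_depth) & (sensor_depth <= robot_depth + tolerance)

# Toy 2x2 example: the robot model covers only the top-left pixel.
sensor = np.array([[1.00, 2.50], [0.80, 3.00]])
robot = np.array([[1.01, np.inf], [np.inf, np.inf]])
mask = self_filter_mask(sensor, robot)
# Only the top-left pixel is flagged as a robot self-observation.
```

Using a convex-hull robot model, as the paper recommends, effectively inflates `robot_depth` slightly beyond the true robot surface, which makes this comparison more robust to pose and calibration errors than a tight CAD model would be.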
|Title:||A 3D Robot Self Filter for Next Best View Planning|
|Publication date:||2019|
|Appears in type:||4.1b Conference proceedings volume|