
A 3D Robot Self Filter for Next Best View Planning / Monica, Riccardo; Aleotti, Jacopo. - (2019), pp. 117-124. (Paper presented at the 3rd IEEE International Conference on Robotic Computing, IRC 2019, held in Italy in 2019) [10.1109/IRC.2019.00025].

A 3D Robot Self Filter for Next Best View Planning

Monica, Riccardo;Aleotti, Jacopo
2019

Abstract

This paper investigates the use of a real-time self filter for a robot manipulator in next best view planning tasks. The robot is equipped with a depth sensor in an eye-in-hand configuration. At each iteration, the next best view algorithm selects an optimal view pose for the sensor that maximizes information gain for 3D reconstruction of a region of interest. An OpenGL-based filter is adopted that determines which pixels of the depth image are due to robot self observations. The filter was adapted to work with KinectFusion volumetric 3D reconstruction. Experiments were performed in a real scenario. Results indicate that removing robot self observations prevents artifacts in the final 3D representation of the environment. Moreover, view poses from which the robot would occlude the target regions can be successfully avoided. Finally, it is shown that a convex-hull robot model is preferable to a tight 3D CAD model, and that the filter can be integrated with a surfel-based next best view planner with negligible overhead.
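As a rough illustration of the filtering idea described in the abstract (not the authors' implementation), a depth-image self filter can be sketched by comparing each sensor depth pixel against a depth rendering of the robot model from the same viewpoint, e.g. obtained via OpenGL. All names and the tolerance value below are assumptions for illustration:

```python
import numpy as np

def self_filter(sensor_depth, robot_depth, tolerance=0.05):
    """Invalidate depth pixels explained by robot self observations.

    sensor_depth: HxW depth image from the eye-in-hand sensor (meters).
    robot_depth:  HxW depth image obtained by rendering the robot model
                  from the current sensor pose; np.inf where no robot
                  geometry is visible (hypothetical convention).
    tolerance:    margin in meters for calibration and model error; a
                  convex-hull model, being an over-approximation, can
                  tolerate a smaller margin than a tight CAD model.
    Returns a copy of the depth image with self-observed pixels set to
    0 (invalid), ready for a volumetric fusion pipeline.
    """
    filtered = sensor_depth.copy()
    # A pixel is a self observation if the measured depth is at or
    # beyond the rendered robot surface (minus the tolerance margin).
    robot_mask = sensor_depth >= (robot_depth - tolerance)
    filtered[robot_mask] = 0.0
    return filtered
```

With this convention, pixels where the robot model is not visible (`robot_depth = inf`) always pass through unchanged, while measurements at or behind the rendered robot surface are discarded before fusion.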
9781538692455
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: http://hdl.handle.net/11381/2859022
Citations
  • PubMed Central: ND
  • Scopus: 0
  • Web of Science: 0