Contour-based next-best view planning from point cloud segmentation of unknown objects

Monica, Riccardo; Aleotti, Jacopo
2018-01-01

Abstract

A novel strategy is presented to determine the next-best view for a robot arm, equipped with a depth camera in an eye-in-hand configuration, aimed at the autonomous exploration of unknown objects. Instead of maximizing the total size of the expected unknown volume that becomes visible, the next-best view is chosen so as to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by applying a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which has not been considered in previous work on next-best view planning, and it exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure for reducing the number of invalid points returned by the Kinect V2 is also presented. The viability of the approach has been demonstrated in a real setup in which the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore the objects faster than a standard next-best view algorithm.
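As a rough illustration of the contour-based criterion summarized above, the sketch below shows one possible way to mark object voxels that border unknown space and to score candidate views by how many such contour voxels fall inside a simple view cone. This is not taken from the paper: the voxel labels, the field-of-view threshold, the sensing range, and the absence of occlusion checks are all simplifying assumptions made here for illustration.

```python
# Illustrative sketch only (not the authors' implementation).
# Assumes a dense voxel grid with FREE / OCCUPIED / UNKNOWN labels and a
# precomputed binary mask of voxels segmented as belonging to one object.
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def contour_voxels(grid, object_mask):
    """Return indices of occupied object voxels that border unknown space.

    grid        : (X, Y, Z) int array of FREE / OCCUPIED / UNKNOWN labels
    object_mask : (X, Y, Z) bool array marking voxels of one segmented object
    """
    unknown = (grid == UNKNOWN)
    # A voxel is on the contour if any of its 6 neighbours is unknown.
    near_unknown = np.zeros_like(unknown)
    for axis in range(3):
        for shift in (-1, 1):
            near_unknown |= np.roll(unknown, shift, axis=axis)
    return np.argwhere(object_mask & (grid == OCCUPIED) & near_unknown)

def score_view(contour_pts, cam_pos, cam_dir, fov_cos=0.8, max_range=30.0):
    """Score a candidate view by how many contour voxels it can observe.

    contour_pts : (N, 3) voxel coordinates returned by contour_voxels
    cam_pos     : (3,) candidate camera position, in voxel coordinates
    cam_dir     : (3,) unit viewing direction
    fov_cos     : cosine of half the field of view (hypothetical value)
    max_range   : maximum sensing range in voxels (hypothetical value)
    """
    v = contour_pts - cam_pos
    dist = np.linalg.norm(v, axis=1)
    in_range = (dist > 0) & (dist < max_range)
    cos_angle = np.zeros_like(dist)
    cos_angle[in_range] = (v[in_range] @ cam_dir) / dist[in_range]
    return int(np.count_nonzero(in_range & (cos_angle > fov_cos)))
```

A complete planner would additionally ray-cast each candidate view against the volumetric model to discard contour voxels occluded by known surfaces, which the actual system can do against its KinectFusion volume.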
Contour-based next-best view planning from point cloud segmentation of unknown objects / Monica, Riccardo; Aleotti, Jacopo. - In: AUTONOMOUS ROBOTS. - ISSN 0929-5593. - (2018), pp. 1-16. [10.1007/s10514-017-9618-0]
Files in this record:

File: 10514_2017_9618_Author.pdf
Access: open access
Type: post-print
Licence: Creative Commons
Size: 5.21 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2821273
Citations
  • PMC: not available
  • Scopus: 45
  • Web of Science (ISI): 37