According to neuropsychological studies, 3D shape segmentation plays an important role in human perception of objects: when an object is perceived for grasping, it is first parsed into its constituent parts. This capability is missing in current robot planning systems, which are therefore hindered in planning part-specific grasps suited to the task at hand. In this paper, a novel approach for part-based grasping is presented that combines 3D shape segmentation, programming by human demonstration, and manipulation planning. The central advantage over previous approaches is the use of a topological method for shape segmentation, enabling both object categorization and robot grasping according to the affordances of an object. Manipulation tasks are demonstrated in a virtual reality environment using a data glove and a motion tracker, and the specific parts of the objects where grasping occurs are learned and encoded in the task description. Tasks are then planned and executed in a robot environment, targeting semantically relevant parts for grasping. Planning in the robot environment generalizes to objects similar to those used for task demonstration, i.e., objects belonging to the same category. Results obtained in 3D simulation confirm that the proposed approach finds grasps appropriate for the requested task with less effort.
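The core idea of matching a task to a semantically relevant object part can be illustrated with a minimal sketch. The part names, affordance labels, and the `select_grasp_part` helper below are hypothetical, not taken from the paper; they only show how a planner might restrict grasp search to the segmented part that affords the requested task.

```python
# Hypothetical sketch of part-based grasp selection; part names and
# affordance labels are illustrative, not from the paper.
from dataclasses import dataclass, field


@dataclass
class Part:
    """One segment of an object, annotated with the tasks it affords."""
    name: str
    affordances: set = field(default_factory=set)


def select_grasp_part(parts, task_affordance):
    """Return the first segmented part that affords the task, or None.

    A real planner would then search grasps only on the returned
    part's geometry instead of the whole object.
    """
    for part in parts:
        if task_affordance in part.affordances:
            return part
    return None


# Example: a mug segmented into three parts (illustrative labels).
mug = [
    Part("handle", {"grasp-to-pour", "grasp-to-handover"}),
    Part("body", {"grasp-to-move"}),
    Part("rim", set()),
]

chosen = select_grasp_part(mug, "grasp-to-pour")
print(chosen.name)  # handle
```

Restricting grasp planning to the selected part is what reduces search effort: candidate grasps on irrelevant parts are never generated.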
J. Aleotti and S. Caselli, "Learning Manipulation Tasks from Human Demonstration and 3D Shape Segmentation," Advanced Robotics, vol. 26, no. 16, pp. 1863-1884, 2012. ISSN 0169-1864. DOI: 10.1080/01691864.2012.703167