This paper presents a synthesis of techniques enabling vision-based autonomous Unmanned Aerial Vehicle (UAV) systems. A full stack of computer vision processing modules is used to exploit visual information, simultaneously perceiving obstacles and refining localization in a GPS-denied environment. An omnidirectional stereo-vision setup is used to build a 3D representation of the surroundings. A fully 3D local obstacle grid, maintained across multiple frames and updated according to the UAV's movement, is built by accumulating observations from the 360° stereo-vision sensing suite. Visual data is also used to extract information about the drone's attitude and position while exploring the environment. Sparse optical flow collected from both the forward- and downward-facing stereo cameras is used to estimate UAV movement across multiple frames. The downward-looking stereo pair is also used to estimate the drone's height above the ground and to refine the pose estimate in a Simultaneous Localization and Mapping (SLAM) fashion. An improved A* planning algorithm exploits both the 3D representation of the surroundings and the precise localization information to find the shortest path and reach the goal along a three-dimensional safe trajectory.
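The abstract does not detail the improvements made to the A* planner. As an illustration only, a minimal baseline 3D A* search over a voxel occupancy grid (6-connected neighborhood, Manhattan-distance heuristic) might look like the sketch below; all names are hypothetical and the cost model is a simplifying assumption, not the paper's method.

```python
import heapq

def astar_3d(grid, start, goal):
    """Shortest path on a 3D occupancy grid (illustrative sketch, not the
    paper's improved planner). grid[x][y][z] is True where a voxel is
    occupied; start and goal are (x, y, z) tuples of voxel indices."""
    def h(p):
        # Manhattan distance: admissible for unit-cost 6-connected moves
        return sum(abs(a - b) for a, b in zip(p, goal))

    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    g_cost = {start: 0}          # best known cost-to-come per voxel
    came_from = {start: None}    # parent pointers for path recovery
    heap = [(h(start), start)]   # priority queue ordered by f = g + h
    closed = set()

    while heap:
        _, cur = heapq.heappop(heap)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:
            # Walk parent pointers back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy, dz in moves:
            nb = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
            if not (0 <= nb[0] < nx and 0 <= nb[1] < ny and 0 <= nb[2] < nz):
                continue  # outside the local grid
            if grid[nb[0]][nb[1]][nb[2]] or nb in closed:
                continue  # occupied voxel or already expanded
            ng = g_cost[cur] + 1
            if ng < g_cost.get(nb, float("inf")):
                g_cost[nb] = ng
                came_from[nb] = cur
                heapq.heappush(heap, (ng + h(nb), nb))
    return None  # goal unreachable
```

In a real system the occupancy grid would be the one accumulated from the stereo observations, and the cost function would typically be augmented with obstacle-clearance terms to yield a safe trajectory rather than the merely shortest one.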
Title: Enabling Computer Vision-Based Autonomous Navigation for Unmanned Aerial Vehicles in Cluttered GPS-Denied Environments
Publication date: 2018
Appears in type: 4.1b Conference proceedings (volume)