PeRL studies autonomous navigation & mapping for mobile robots in a priori unknown environments.


Theses

Active visual SLAM with exploration for autonomous underwater navigation

Summary


Ayoung Kim, Active visual SLAM with exploration for autonomous underwater navigation. PhD thesis, Department of Mechanical Engineering, University of Michigan, August 2012.

Abstract

One of the major challenges in the field of underwater robotics is the opacity of the water medium to radio frequency transmission, which precludes the use of a global positioning system (GPS) and high-speed radio communication in underwater navigation and mapping applications. One approach to underwater robotics that overcomes this limitation is vision-based simultaneous localization and mapping (SLAM), a framework that enables a robot to localize itself while simultaneously building a map of an unknown environment. The SLAM algorithm provides a probabilistic map that contains the estimated state of the system, including a map of the environment and the pose of the robot. Because the quality of vision-based navigation varies spatially within the environment, the performance of visual SLAM strongly depends on the path and motion that the robot follows. While traditionally treated as two separate problems, SLAM and path planning are in fact interrelated: the performance of SLAM depends significantly on the environment and the robot's motion, while control of that motion depends entirely on the state information provided by SLAM. Therefore, an integrated SLAM control scheme is needed---one that can direct motion for better localization and mapping, and thereby provide more accurate state information back to the controller. This thesis develops perception-driven control, an integrated SLAM and path planning framework that improves the performance of visual SLAM in an informative and efficient way by jointly considering the reward predicted by a candidate camera measurement, along with its likelihood of success based upon visual saliency. The proposed control architecture identifies highly informative candidate locations for SLAM loop-closure that are also visually distinctive, such that a camera-derived pose-constraint is probable. Results are shown for autonomous underwater hull inspection experiments using the Bluefin Robotics Hovering Autonomous Underwater Vehicle (HAUV).
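The core idea of the abstract---scoring loop-closure candidates by the reward a camera measurement would provide, discounted by its saliency-based likelihood of success---can be illustrated with a minimal sketch. Note this is a hypothetical toy model, not the thesis's actual formulation: the `Candidate` fields, the multiplicative utility, and the saliency-as-probability assumption are all illustrative simplifications.

```python
# Hypothetical sketch of perception-driven candidate selection: utility is the
# predicted information gain of revisiting a map node, weighted by the chance
# that visual registration there actually succeeds (proxied by saliency).
from dataclasses import dataclass


@dataclass
class Candidate:
    node_id: int
    info_gain: float   # predicted reduction in pose uncertainty if revisited
    saliency: float    # visual distinctiveness of the imagery, in [0, 1]


def utility(c: Candidate) -> float:
    """Reward discounted by the likelihood of a camera-derived constraint."""
    p_success = min(max(c.saliency, 0.0), 1.0)  # treat saliency as a probability
    return c.info_gain * p_success


def best_candidate(candidates: list[Candidate]) -> Candidate:
    return max(candidates, key=utility)


candidates = [
    Candidate(node_id=3, info_gain=0.9, saliency=0.2),  # informative but featureless
    Candidate(node_id=7, info_gain=0.6, saliency=0.8),  # less gain, but distinctive
]
print(best_candidate(candidates).node_id)  # -> 7: the distinctive site wins
```

The point of the weighting is visible in the example: the node with the highest raw information gain loses to a slightly less informative but visually distinctive one, because a loop-closure attempt over featureless imagery is likely to fail and yield no constraint at all.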

Bibtex entry

@PHDTHESIS { akim-phdthesis,
    AUTHOR = { Ayoung Kim },
    TITLE = { Active visual {SLAM} with exploration for autonomous underwater navigation },
    SCHOOL = { Department of Mechanical Engineering, University of Michigan },
    YEAR = { 2012 },
    MONTH = { August },
    ADDRESS = { Ann Arbor, MI, USA },
}