Ryan Wolcott

PhD Candidate in Computer Science and Engineering at the University of Michigan
Graduate Student Researcher at the Perceptual Robotics Laboratory

My research interests lie in perception for mobile robotics. Specifically, my work has direct application to various aspects of perception for self-driving cars—including localization, map building, and obstacle detection. With this in mind, I seek robust, efficient methods that run in real time.

rwolcott@umich.edu, Google Scholar


Fast LIDAR Localization using Multiresolution Gaussian Mixture Maps
Ryan W. Wolcott and Ryan M. Eustice
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2015, Seattle) [pdf] [BibTeX] [ppt]

Visual Localization within LIDAR Maps for Automated Urban Driving
Ryan W. Wolcott and Ryan M. Eustice
In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014, Chicago) [pdf] [BibTeX] [ppt]
        Best Student Paper!

NEEC research: Toward GPS-denied landing of unmanned aerial vehicles on ships at sea
Stephen M. Chaves, Ryan W. Wolcott and Ryan M. Eustice
Naval Engineers Journal (NEJ 2014, In Press) [pdf] [BibTeX]


Ford Fusion Research Platforms

Current research is focused on the development of several Automated Fusion Hybrid Research Vehicles through the Next Generation Vehicle Project. This project is a collaboration with the APRIL Robotics Laboratory, Ford Motor Company, and State Farm. These platforms are equipped with multiple Velodyne LIDAR scanners, cameras, and radars, which we use for various perceptual tasks including localization and obstacle detection.

Large-Scale Map Building

Using data from our LIDAR scanners, we perform offline SLAM to generate extremely rich prior maps that can be used online for localization and improving obstacle detection.

LIDAR Localization (ICRA15)

LIDAR-based localization systems often consider only the appearance of the scene in terms of LIDAR reflectivity. These methods can fail when faced with adverse weather conditions or poorly textured roadways. This work proposes a fast 3D scan matcher that leverages a map composed of Gaussian mixtures describing the z-height distribution of the world. We consider a multiresolution approach to finding the maximum likelihood alignment of candidate scans.
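The core idea above can be illustrated with a minimal sketch. This is not the paper's implementation: it simplifies the Gaussian mixture map to a single Gaussian of z-height per grid cell, searches only over (x, y) translation, and uses an illustrative coarse-to-fine exhaustive search; all function names, cell sizes, and penalty constants are assumptions for illustration.

```python
import numpy as np

def build_zheight_map(points, cell=1.0):
    """Grid of per-cell z statistics (mean, variance) from Nx3 points.
    Simplified: one Gaussian per cell rather than a full mixture."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for key, z in zip(map(tuple, keys), points[:, 2]):
        cells.setdefault(key, []).append(z)
    # Small variance floor keeps single-point cells well-defined.
    return {k: (np.mean(v), np.var(v) + 1e-2) for k, v in cells.items()}

def scan_loglik(zmap, scan, dx, dy, cell=1.0, miss=-4.0):
    """Log-likelihood of the scan's z values under the map, after
    shifting the scan by a candidate (dx, dy) offset."""
    ll = 0.0
    keys = np.floor((scan[:, :2] + np.array([dx, dy])) / cell).astype(int)
    for key, z in zip(map(tuple, keys), scan[:, 2]):
        if key in zmap:
            mu, var = zmap[key]
            ll += -0.5 * ((z - mu) ** 2 / var + np.log(2 * np.pi * var))
        else:
            ll += miss  # penalize points landing in unmapped cells
    return ll

def multires_align(zmap, scan, radius=4.0):
    """Coarse-to-fine search for the maximum-likelihood (dx, dy)."""
    best = (0.0, 0.0)
    for step in (2.0, 1.0, 0.5):
        cands = [(best[0] + a, best[1] + b)
                 for a in np.arange(-radius, radius + step, step)
                 for b in np.arange(-radius, radius + step, step)]
        best = max(cands, key=lambda c: scan_loglik(zmap, scan, *c))
        radius = step  # shrink the window around the current best
    return best
```

Because each resolution level prunes the search window before refining, far fewer candidate alignments are evaluated than a single exhaustive fine-grained search would require.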

Visual Localization (IROS14)

We are interested in using cameras to increase robustness of our localization solution. With maps generated using LIDAR scanners, we can localize our platforms with a monocular camera by exploiting a graphics processing unit to generate synthetic views of our belief environment. We then seek to find the "most similar" synthetic view by evaluating the normalized mutual information between proposed views and our real camera measurements. Our approach achieves localization accuracy similar to LIDAR-only solutions.
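The similarity measure above can be sketched compactly. This is a generic normalized mutual information computation over binned grayscale intensities, not the paper's GPU pipeline; the function name and bin count are assumptions. NMI is attractive for this cross-modal task because it rewards statistical dependence between images rather than direct intensity agreement, so a LIDAR-rendered view need not photometrically match the camera image.

```python
import numpy as np

def normalized_mutual_info(a, b, bins=32):
    """NMI of two equal-sized grayscale images: (H(A) + H(B)) / H(A, B).
    Ranges from 1.0 (independent) up to 2.0 (identical)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()        # joint intensity distribution
    px = pxy.sum(axis=1)             # marginal of image a
    py = pxy.sum(axis=0)             # marginal of image b

    def entropy(p):
        p = p[p > 0]                 # 0 * log(0) treated as 0
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

Localization then reduces to scoring each candidate synthetic view against the live camera image and keeping the pose whose rendering maximizes this measure, e.g. `max(views, key=lambda v: normalized_mutual_info(camera, v))`.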

Past Projects


The NEEC program was designed to get undergraduate and graduate students interested in working on Navy-relevant engineering problems. As part of NEEC, I mentored several students in real-time robotics toward developing autonomy for quadrotors. Completed tasks include autonomous landing aboard a moving platform using visual fiducials, as well as map building.


During undergrad, I worked on a student team that developed an autonomous surface vehicle for the RoboBoat competition. In 2010, we made a clean sweep, taking first place both overall and in static judging.

Media Appearances

IEEE Spectrum
IROS 2014 Best Student Paper
UM Self-Driving Car
Low Cost Self-Driving Cars
IROS 2014 Recap
ASVC 2010 Winner





129 NAME Bldg
2600 Draper Drive
Ann Arbor, MI 48109-2145