PeRL studies autonomous navigation and mapping for mobile robots in a priori unknown environments.

At a Glance

Synopsis

Below are the software packages and datasets that we have released to the public.

Dig Deeper

Ford Campus Vision and Lidar Data Set


Ford's F-250 serves as an experimental platform for this data collection.

We provide a dataset collected by an autonomous ground vehicle testbed based on a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and a consumer (Xsens MTi-G) Inertial Measurement Unit (IMU), a Velodyne 3D lidar scanner, two push-broom forward-looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors, collected while driving the vehicle around the Ford Research campus and downtown Dearborn, Michigan during November-December 2009. The vehicle trajectory in these datasets contains several large- and small-scale loop closures, which should be useful for testing state-of-the-art computer vision and SLAM (Simultaneous Localization and Mapping) algorithms. The dataset is large (~100 GB), so make sure you have sufficient bandwidth and disk space before downloading it.

Once downloaded, extract the dataset.tgz file to obtain the following files and folders under the main directory:
Folders: SCANS, IMAGES, LCM, VELODYNE.
Files: Timestamp.log, Pose-Applanix.log, Pose-Mtig.log, Gps.log, PARAM.mat

Extract the Code.zip file. It contains two folders, "C" and "MATLAB", holding the utility functions. Details about these files and folders can be found in the README file and in our IJRR data paper. Please cite the IJRR data paper when using this dataset in your work.
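After downloading, the archive layout can be sanity-checked with a short script. The sketch below uses only the Python standard library; the archive name and the folder/file names follow the listing above, while the function name itself is illustrative:

```python
import tarfile
from pathlib import Path

EXPECTED_DIRS = {"SCANS", "IMAGES", "LCM", "VELODYNE"}
EXPECTED_FILES = {"Timestamp.log", "Pose-Applanix.log",
                  "Pose-Mtig.log", "Gps.log", "PARAM.mat"}

def extract_and_verify(archive="dataset.tgz", dest="."):
    """Extract the dataset archive and return a sorted list of missing entries."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    root = Path(dest)
    # The archive may unpack into a single top-level directory; descend into it.
    if not any((root / name).exists() for name in EXPECTED_DIRS):
        subdirs = [p for p in root.iterdir() if p.is_dir()]
        if len(subdirs) == 1:
            root = subdirs[0]
    expected = EXPECTED_DIRS | EXPECTED_FILES
    return sorted(name for name in expected if not (root / name).exists())
```

An empty return value means the extracted layout matches the listing above; anything else names the missing folders or files.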

Download

Download Page

Results for Laser-Camera Co-Registration

1) Here we project the 3D point cloud onto the corresponding camera images. The reprojected points are colored by height above the ground plane, and ground-plane points are removed to avoid clutter in the video.

2) Here the first few frames show the raw 3D point cloud generated by the Velodyne laser scanner; we then render the colored point cloud, where each point's color is obtained by projecting it onto the corresponding camera image.
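Both videos rely on the same basic operation: each lidar point is transformed into the camera frame with the rigid-body extrinsics and projected through the camera intrinsics (for this dataset, the calibration lives in PARAM.mat); colors then come either from a height-to-color mapping (video 1) or from sampling the image at the projected pixel (video 2). A minimal NumPy sketch follows; the K, R, t values and the height range are placeholders for illustration, not the actual Ford calibration:

```python
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """Project Nx3 lidar-frame points into pixel coordinates.

    K is the 3x3 camera intrinsic matrix; R (3x3) and t (3,) take points
    from the lidar frame into the camera frame. Returns pixel coordinates
    and depths for points in front of the camera.
    """
    cam = points @ R.T + t            # lidar frame -> camera frame
    front = cam[:, 2] > 0             # keep points with positive depth
    cam = cam[front]
    uv_h = cam @ K.T                  # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]   # perspective divide
    return uv, cam[:, 2]

def height_to_color(z, z_min=0.0, z_max=5.0):
    """Map heights above the ground plane to RGB, blue (low) to red (high)."""
    s = np.clip((np.asarray(z, float) - z_min) / (z_max - z_min), 0.0, 1.0)
    return np.stack([s, np.zeros_like(s), 1.0 - s], axis=-1)

# Placeholder calibration -- NOT the dataset's actual values.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

pts = np.array([[0.0, 0.0, 10.0],   # on the optical axis
                [1.0, 0.0, 10.0]])  # 1 m to the right
uv, depth = project_lidar_to_image(pts, K, R, t)
# the on-axis point projects to the principal point (320, 240)
```

In the real pipeline, the per-camera intrinsics and the lidar-to-camera extrinsics would be loaded from PARAM.mat rather than hard-coded as above.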

Support

Please send bug reports to Gaurav Pandey: <pgaurav@umich.edu>