PeRL STUDIES AUTONOMOUS NAVIGATION & MAPPING FOR MOBILE ROBOTS IN A PRIORI UNKNOWN ENVIRONMENTS.

At a Glance

Synopsis

Here are the software packages and datasets that we have released to the public.

Dig Deeper

Downloads

Video

For videos of our research, please visit PeRL's YouTube Channel.

Software

Some of the software packages that we have developed:

  1. Extrinsic Calibration of a 3D Lidar and Camera: This is an implementation of our mutual information (MI) based algorithm for automatic extrinsic calibration of a 3D laser scanner and optical camera system.
  2. Generic Linear Constraint Node Removal: This is an implementation of our factor-based method for node removal in SLAM graphs.
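The released calibration code works on real 3D lidar scans and camera images; as a rough, hypothetical sketch of the underlying idea, the snippet below estimates mutual information from a joint histogram and grid-searches a single toy alignment parameter. The 1-D "scene", the noise model, and the `reflectivity_at` projection are all made up for illustration; the actual algorithm searches over the full 6-DOF extrinsic transform between the laser scanner and the camera.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # MI estimated from a joint histogram of the two intensity signals.
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# Synthetic stand-ins for image intensity and lidar reflectivity.
rng = np.random.default_rng(0)
scene = rng.random(5000)
image_intensity = scene + 0.05 * rng.standard_normal(5000)

def reflectivity_at(offset):
    # Hypothetical 1-D "projection": the offset shifts which samples align.
    return np.roll(scene, offset) + 0.05 * rng.standard_normal(5000)

# Pick the offset that maximizes MI between the two modalities.
best = max(range(-3, 4), key=lambda off: mutual_information(reflectivity_at(off), image_intensity))
```

When the two signals are views of the same scene, MI peaks at the correct alignment, which is what makes it a usable calibration objective even though reflectivity and pixel intensity are not linearly related.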
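The full GLC method (its reparameterization and low-rank factor construction) is more involved than can be shown here; as a minimal sketch of the core marginalization step it builds on, the toy example below removes one node from a small information matrix via the Schur complement. The 3x3 matrix over three 1-D poses is invented for illustration.

```python
import numpy as np

# Toy information matrix over a chain of 1-D poses x0 - x1 - x2.
Lam = np.array([[ 2.0, -1.0,  0.0],
                [-1.0,  2.0, -1.0],
                [ 0.0, -1.0,  2.0]])

keep = [0, 2]   # nodes that remain in the graph
marg = [1]      # node to remove

Laa = Lam[np.ix_(keep, keep)]
Lab = Lam[np.ix_(keep, marg)]
Lbb = Lam[np.ix_(marg, marg)]

# Schur complement: exact information over the remaining nodes.
Lam_marg = Laa - Lab @ np.linalg.inv(Lbb) @ Lab.T
# → [[1.5, -0.5], [-0.5, 1.5]]
```

Note the fill-in: x0 and x2 had no direct constraint before, but marginalizing x1 induces a dense one between them. GLC represents such induced constraints as factors over the removed node's neighbors so the graph stays sparse and consistent.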

Data

During the course of our research we have collected datasets that not only help us test state-of-the-art algorithms for autonomous navigation but also help us develop new ones. We provide these datasets to the research community to support further development of autonomous navigation algorithms.

  1. Ford Campus Vision and Lidar Dataset: This dataset is part of the Active Safety Situational Awareness for Automotive Vehicles project.

    The top panel is a perspective view of the Velodyne lidar range data, color-coded by height above the estimated ground plane. The bottom panel shows the above-ground-plane range data projected into the corresponding image from the Ladybug cameras.


  2. North Campus Long-Term Vision and Lidar Dataset: This dataset was collected across 27 sessions spanning 16 months.

    The left panel displays various scenes of the environment viewed across different sessions, showing changes in the scene due to season and lighting. The right panel shows a top-down view of the accumulated Velodyne lidar range data, colored by height.
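The height-above-ground coloring used in both dataset visualizations can be sketched as follows. Everything here is a stand-in for illustration: the random point cloud, the percentile-based ground estimate, and the simple blue-to-red ramp (the released tools operate on real Velodyne scans and would use a proper colormap).

```python
import numpy as np

# Toy point cloud; column 2 is the z (height) coordinate.
rng = np.random.default_rng(1)
points = rng.random((1000, 3)) * [50.0, 50.0, 5.0]

# Crude ground-plane estimate: assume flat ground near the lowest points.
z0 = np.percentile(points[:, 2], 5)
height = points[:, 2] - z0          # height above the estimated ground plane

# Normalize to [0, 1] and map to a blue (low) → red (high) ramp.
h = np.clip(height / height.max(), 0.0, 1.0)
colors = np.stack([h, np.zeros_like(h), 1.0 - h], axis=1)
```

Each point then gets an RGB color proportional to its height, which is what makes structure above the road surface stand out in the perspective and top-down renderings.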