At a Glance

Synopsis

PeRL studies autonomous navigation and mapping for mobile robots in a priori unknown environments.

Dig Deeper

Active Safety Situational Awareness

Background


Team IVS's qualifying entry in the 2007 DARPA Urban Challenge (one of 11 finalists). This vehicle serves as the experimental platform for the active-safety research in this project.

Fueled by the recent successes of the DARPA Grand Challenges (2004, 2005, 2007), autonomous self-driving cars have moved from the realm of science fiction to one of reality and growing public acceptance and anticipation. While it will likely be several decades before fully autonomous vehicles are routine participants in daily urban traffic, elements of this technology already play a key role in many of the advanced Active Safety systems beginning to equip modern production cars (e.g., Adaptive Cruise Control (ACC) and Collision Mitigation by Braking (CMbB)). Fundamental to the development of even more advanced production systems are robust localization and vehicular situational awareness technologies [1], which give the vehicle itself the ability to sense and react to potential hazards so that they can be avoided.

While the DARPA Grand Challenges have clearly promoted and furthered the development of autonomous vehicle systems, the competitions themselves were intentionally contrived events in which participants were given only a limited set of prior knowledge. Real-world applications will not leave the user guessing about GPS waypoint density, speed limits, stop-sign locations, or road-network topology. Instead, we expect accurate a priori maps with ample metadata (e.g., Google Maps, or a previously collected HD-LIDAR scan of the environment). Many of these maps already exist, and selected areas are highly mapped, but some of the needed metadata has yet to be generated. Fusing this metadata with onboard HD-LIDAR could make automotive applications sensibly priced.


Depiction of the rich perceptual data available with 3D LIDAR.

Research Problem

The primary objective of this research is to develop novel algorithms for fusing HD-LIDAR sensing with automotive-grade pose (position, pitch, roll, yaw, etc.) sensors and map-based databases, enabling low-cost localization, navigation, and environmental perception for automotive Active Safety applications. For this purpose we propose casting the problem within a simultaneous localization and mapping (SLAM) framework [2] [3], using HD-LIDAR for environmental perception.

Task 1 of this research plan will investigate the fusion of LIDAR-derived odometry with automotive-grade IMUs for GPS denial/dropout navigation scenarios, which are common in urban environments with poor sky visibility, roads under overhead tree canopy, and tunnels; a simple sketch of such a fusion appears below. This analysis will involve analytical and numerical studies to determine a satisfactory lower performance bound on component specifications. Analytical results will be validated by re-simulating baseline optimal datasets with sub-sampled HD-LIDAR resolution, increased GPS uncertainties, and increased IMU drift rates.

Task 2 will investigate the registration of HD-LIDAR data with preexisting map databases for in-lane localization. Example databases include Google Maps, EDMaps [4], and previously collected HD-LIDAR maps. This research will identify suitable metadata, desirable attributes and infrastructure, and develop algorithms to register HD-LIDAR against this representation for improved localization.
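To make the Task 1 fusion concrete, the following is a minimal planar dead-reckoning sketch, not the project's actual estimator: the IMU yaw rate updates the vehicle heading, and the body-frame translation from LIDAR scan matching is rotated into the world frame. The function and parameter names are illustrative, and a real system would track the full 6-DOF pose and its uncertainty (e.g., in a SLAM filter).

import numpy as np

def propagate_pose(pose, gyro_yaw_rate, dt, lidar_delta_xy):
    """Dead-reckon a planar pose (x, y, yaw) through a GPS outage by
    integrating an automotive-grade IMU yaw rate and applying the
    body-frame translation estimated by LIDAR scan matching."""
    x, y, yaw = pose
    yaw += gyro_yaw_rate * dt              # integrate the gyro to update heading
    c, s = np.cos(yaw), np.sin(yaw)
    dx_b, dy_b = lidar_delta_xy            # LIDAR-odometry translation, body frame
    x += c * dx_b - s * dy_b               # rotate the body-frame motion
    y += s * dx_b + c * dy_b               #   into the world frame and accumulate
    return np.array([x, y, yaw])

# Example: 10 s of driving at ~10 m/s with a slight left turn.
pose = np.zeros(3)
for _ in range(100):
    pose = propagate_pose(pose, gyro_yaw_rate=0.01, dt=0.1, lidar_delta_xy=(1.0, 0.0))
print(pose)  # accumulated pose; error grows with gyro bias and scan-match noise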

Results for Laser-Camera Co-Registration

1) Here we project the 3D point cloud onto the corresponding camera images. The reprojections are colored by height above the ground plane, and the ground-plane points have been removed to avoid clutter in the video.
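A minimal sketch of this reprojection is given below, assuming a pinhole camera with intrinsics K and laser-to-camera extrinsics (R, t) from calibration; treating the ground plane as z = 0 in the LIDAR frame and thresholding on height is a simplification for illustration only.

import numpy as np

def project_lidar_to_image(points_lidar, K, R, t, min_height=0.3):
    """Project LIDAR points into a camera image using a pinhole model.
    Points within min_height of the assumed ground plane (z = 0 in the
    LIDAR frame) are dropped to reduce clutter; the returned heights can
    be used to color the reprojections."""
    above = points_lidar[:, 2] > min_height     # crude ground-plane removal
    pts = points_lidar[above]
    cam = R @ pts.T + t.reshape(3, 1)           # LIDAR frame -> camera frame
    front = cam[2] > 0                          # keep points in front of the camera
    cam, heights = cam[:, front], pts[front, 2]
    uv = K @ cam
    uv = (uv[:2] / uv[2]).T                     # perspective divide -> pixel (u, v)
    return uv, heights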

2) In the first few frames we show the raw 3D point cloud generated by the Velodyne laser scanner, and then we render the colored point cloud. The color is obtained by projecting each 3D point onto the corresponding camera image.
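The colorization step can be sketched as follows, again assuming calibrated intrinsics K and extrinsics (R, t) and an RGB image array; the function name is ours and nearest-pixel sampling is used for simplicity.

import numpy as np

def colorize_point_cloud(points_lidar, image, K, R, t):
    """Attach an RGB color to each LIDAR point by projecting it into the
    camera image and sampling the pixel it lands on."""
    cam = R @ points_lidar.T + t.reshape(3, 1)   # LIDAR frame -> camera frame
    front = cam[2] > 0                           # keep points in front of the camera
    cam, pts = cam[:, front], points_lidar[front]
    uv = K @ cam
    uv = np.round(uv[:2] / uv[2]).astype(int).T  # nearest-pixel coordinates
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image[uv[valid, 1], uv[valid, 0]]   # sample RGB at (row=v, col=u)
    return pts[valid], colors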