To Serve and Detect

Thursday, August 27, 2009

JPL’s Robotics group is developing a system that will help autonomous vehicles with the complex task of detecting and tracking humans in their vicinity. The principal motivation for this work, which is funded by the Army Research Laboratory, is to ensure safe operation of a vehicle in environments containing both trained operators and untrained bystanders.

The U.S. military has many self-guided robots currently deployed or in development, including wheeled and tracked vehicles, small hovering reconnaissance platforms and autonomous watercraft. To ensure that these vehicles operate safely around people, it is considered critical that they reliably detect pedestrians and personnel in their vicinity and predict their motion.

The requirements for the new system are, by necessity, extremely strict: it must achieve reliable detection at ranges from zero out to many tens of meters, react quickly and operate under a variety of environmental conditions. Robotic vehicles must be able to detect people in urban and cross-country environments, including flat, uneven and multi-level terrain, with widely varying degrees of clutter, occlusion and illumination. Ultimately these machines will need to operate day or night, in all weather and in the presence of atmospheric obscurants.
 

Figure: Demonstration of moving human detection from a moving robot. The image shows the robot’s camera view, automatically overlaid with a bounding box around the moving person.
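
The kind of overlay shown above can be sketched with off-the-shelf tools. The snippet below is a minimal stand-in using OpenCV’s stock HOG pedestrian detector, not JPL’s stereo-based system; the video source and detector parameters are illustrative assumptions.

    import cv2

    # Stock HOG + linear SVM pedestrian detector (Dalal-Triggs style).
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # any camera index or video file path
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect people; a coarser winStride trades recall for speed.
        rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in rects:
            # Overlay a bounding box around each detected person.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("pedestrian detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()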

Larry Matthies is supervisor of the Computer Vision Group in JPL’s Mobility & Robotic Systems Section, which develops image processing algorithms and implementations for robotic systems. He and his colleagues created the stereo vision and hazard avoidance systems for NASA’s Mars Exploration Rovers, Spirit and Opportunity. New autonomous guidance systems developed by JPL for military and commercial customers will make use of navigation capabilities pioneered by the rover program, such as path replanning, autonomous control and dynamic hazard avoidance.
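
As a rough illustration of the stereo vision underpinning such systems, the sketch below computes a disparity map from a rectified image pair with OpenCV’s block matcher and converts it to range. The file names, focal length and baseline describe a hypothetical camera rig, not rover or ARL hardware.

    import cv2
    import numpy as np

    # Rectified left/right images from a hypothetical stereo rig.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching stereo; numDisparities must be a multiple of 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # Range from disparity: Z = f * B / d, with an assumed focal length
    # (pixels) and stereo baseline (meters).
    focal_px, baseline_m = 700.0, 0.12
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]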

Matthies explains that for robotic vehicles in both combat and peacekeeping settings, autonomy can be preferable to direct teleoperation. Unmanned vehicles can perceive their environment with senses unavailable to human operators, reacting in real time to developing situations. Autonomy is attractive to military commanders, who would prefer not to encumber officers with the demanding task of operating robots in the field. And even with an operator on standby, two-way communication is not always possible; for example, if the robot enters a cave. Long communication latencies and response times can also make teleoperation impractical.

Human detection goes a step beyond the already complex machine learning task of identifying obstacles and plotting a course around them: the machine must not only detect objects in its environment, but also recognize and classify them.

According to Matthies, machines are already fairly capable at detecting people standing upright. It is much more difficult for a computer to distinguish, for instance, an injured person lying on the ground from a pile of rustling leaves. In the future, the team hopes to teach the system to recognize non-upright or partially obscured human figures, and help it to distinguish people or objects in close contact.

The ability to detect pedestrians from a moving vehicle in cluttered, dynamic urban environments also transfers readily to non-military applications, such as automatic driver-assistance systems or smaller autonomous robots navigating a sidewalk or marketplace. A car with hazard avoidance capabilities could sense an impending collision and alert the driver, or even begin braking, to avoid disaster. In recent years, several automakers have unveiled cars that can parallel park themselves; human detection would likely make such systems safer.
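
As a back-of-envelope sketch of that alert-or-brake logic, the snippet below estimates time-to-collision as range divided by closing speed and chooses an action. The thresholds are illustrative assumptions, not values from any deployed system.

    def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact; infinite if the gap is not closing."""
        if closing_speed_mps <= 0.0:
            return float("inf")
        return range_m / closing_speed_mps

    def assess(range_m: float, closing_speed_mps: float) -> str:
        ttc = time_to_collision(range_m, closing_speed_mps)
        if ttc < 1.5:   # likely too late for the driver alone: begin braking
            return "brake"
        if ttc < 3.0:   # enough time for a human reaction: alert the driver
            return "warn"
        return "ok"

    # Example: a pedestrian 12 m ahead, closing at 5 m/s -> 2.4 s to impact.
    print(assess(12.0, 5.0))  # prints "warn"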

This research was carried out at JPL, with funding from the Army Research Laboratory (ARL) under the Robotics Collaborative Technology Alliance (RCTA), through an agreement with NASA.

JPL contributors to this research include Max Bajracharya, Baback Moghaddam, Andrew Howard, and Shane Brennan.
 


References

  • Bajracharya, M., Moghaddam, B., Howard, A., Brennan, S., Matthies, L. (2009) “Results from a Real-time Stereo-based Pedestrian Detection System on a Moving Vehicle.” Workshop on People Detection and Tracking, IEEE International Conference on Robotics and Automation, Kobe, Japan.
     
