Mobility & Robotics

Space exploration was transformed when NASA landed humans on the Moon. NASA is now poised for its next great transformation: the robot revolution. Here on Earth, robots are performing increasingly complex tasks in ever more challenging settings—medical surgery, automated driving, and bomb disposal are just a few examples. In space, robots deployed at planetary bodies could construct and maintain space assets, autonomously explore difficult terrain, and even clean up space debris. Future exploration opportunities will be limited only by our imagination.

An ambitious robot revolution will foster creativity and innovation, advance needed technologies, and transform the relationship between humans and robots. Key areas of research include:

  • Mobile robotic systems: Advanced robotic systems that combine mobility, manipulation/sampling, machine perception, path planning, controls, and a command interface could be capable of meeting the challenges of in situ planetary exploration.
  • Manipulation and sampling: Extending our manipulation and sampling capabilities beyond typical instrument placement and sample acquisition, such as those demonstrated with the Mars rovers, could make ever more ambitious robotics missions possible.
  • Machine perception and computer vision: Our ability to control robot functions remotely is severely constrained by communication latency and bandwidth limitations. Autonomous mobile robots must be capable of perceiving their environments and planning maneuvers to meet their objectives. The Mars Exploration Rover (MER) mission demonstrated stereo vision and visual odometry for rover navigation; future missions could benefit from the development of robotic systems with advanced machine perception and computer vision technology.
  • Path planning: Advanced robots need to be capable of traversing the Martian terrain, flying through the Venusian atmosphere, floating on Titan’s lakes, and diving into Europa’s ocean. We are developing path-planning technologies for robotic vehicles operating in a variety of planetary environments; a minimal planner sketch follows this list.
  • User interface: The graphical user interfaces (GUIs) and scripts currently used to create robot command sequences for operating rovers on Mars could be insufficient for future robot missions in which we need to interface with multiple dexterous robots in complex situations—including interactions with astronauts. At a minimum, we need to develop a more efficient way of commanding robots.
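
For illustration, here is a minimal sketch of the grid-based graph search that underlies many rover path planners. It is a generic A* implementation over a hypothetical cost map, not flight software; real planetary planners add terrain evaluation, vehicle kinematics, and uncertainty handling.

```python
# Minimal A* grid planner sketch -- illustrative only, not flight code.
# Terrain is a 2D cost grid; cells with cost None are impassable hazards.
import heapq

def astar(grid, start, goal):
    """Find a low-cost path on a 2D grid using A* with a Manhattan heuristic."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(heuristic(start), 0.0, start, None)]
    came_from, cost_so_far = {}, {start: 0.0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue                      # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:                  # reconstruct path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                ng = g + grid[nr][nc]     # cost of entering the neighbor cell
                if ng < cost_so_far.get((nr, nc), float("inf")):
                    cost_so_far[(nr, nc)] = ng
                    heapq.heappush(frontier,
                                   (ng + heuristic((nr, nc)), ng, (nr, nc), cell))
    return None                           # no hazard-free path exists

# Toy map: 1.0 = benign terrain, 5.0 = rough, None = hazard to avoid.
terrain = [[1.0, 1.0, None],
           [5.0, 1.0, None],
           [1.0, 1.0, 1.0]]
print(astar(terrain, (0, 0), (2, 2)))  # e.g. [(0,0), (0,1), (1,1), (2,1), (2,2)]
```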

Over the past 20 years, JPL has developed and tested numerous robotic systems for space exploration on, above, and below the surface of planetary bodies. These include robots that are capable of assembly, inspection, and maintenance of space structures; robots that are capable of conquering the steepest slopes and accessing ever more challenging science sites; and mobility platforms that are capable of exploring the underside of ice sheets in frozen lake or ocean environments.

We are working across a variety of foundational and advanced science areas to ensure that robots will continue to make significant contributions to NASA’s future science and exploration and to its vision—“to reach for new heights and reveal the unknown so that what we do and learn will benefit all humankind.”

 


Selected Research Projects

 

Mobile Robotic Systems

ATHLETE is a wheels-on-limbs robotic vehicle designed for rolling mobility over Apollo-like undulating terrain and for walking over rough or steep terrain. ATHLETE’s capabilities can enable missions on the surface of the Moon to reach essentially any desired site of interest and to load, transport, manipulate, and deposit payloads. The system has achieved high technological maturity (Technology Readiness Level 6), with numerous field-test and demonstration campaigns supported by detailed engineering analyses of all critical technical elements, including the structural, thermal, actuator, motor-control, computing, vision and sensor-interfacing, communications, operator-interface, and power subsystems.

 

ATHLETE

 

SmallBoSSE is an autonomous, multilimbed robot designed to maneuver and sample on the surface of small bodies. Technologies developed for this robot include onboard 3D terrain modeling and analysis for grasping, force-controlled tactile grasping, optimal gait control, and remote visual terrain-traversability estimation. The system has been demonstrated and evaluated in a 6-DOF (degrees-of-freedom) microgravity gantry with terrain simulants and in a microgravity simulation environment.
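
As a toy illustration of the force-controlled tactile grasping idea, the proportional controller below closes a finger until a simulated contact force reaches a setpoint. The gains, stiffness, and setpoint are hypothetical, purely for illustration.

```python
# Force-controlled tactile grasp sketch -- illustrative only.
# Each fingertip closes until its measured contact force reaches a setpoint,
# gripping microgravity terrain firmly without crushing it.
TARGET_FORCE_N = 5.0        # hypothetical grip-force setpoint
GAIN = 0.0005               # hypothetical proportional gain (m per N of error)
CONTACT_STIFFNESS = 800.0   # hypothetical terrain stiffness (N per m of closure)

def grasp_step(closure_m):
    """One control tick: returns (new_closure, measured_force)."""
    force = max(0.0, CONTACT_STIFFNESS * closure_m)  # simulated tactile reading
    error = TARGET_FORCE_N - force
    return closure_m + GAIN * error, force

closure = 0.0
for tick in range(200):
    closure, force = grasp_step(closure)
print(f"settled grip force: {force:.2f} N")  # converges to the 5 N setpoint
```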

 

SmallBoSSE

 

RoboSimian is a simian-inspired, limbed robot that competed in the DARPA Robotics Challenge (2013-2015). RoboSimian can assist humans in responding to natural and manmade disasters and can contribute to other NASA applications. It uses its four general-purpose limbs and hands to achieve passively stable stances; establish multipoint anchored connections to supports such as ladders, railings, and stair treads; and brace itself during forceful manipulation operations. The axisymmetric distribution of its limb workspace and visual perception reduces risk by eliminating the costly reorientation steps and body motions typical of humanoid robots.

 

RoboSimian

 

Axel is a tethered robot capable of rappelling down steep slopes and traversing rocky terrain. Conceptually, Axel is a mobile daughter ship that can be hosted on different mother ships—static landers, larger rovers, even other Axel systems—and can thereby enable a diverse set of missions. The system’s ability to traverse and explore extreme terrains, such as canyons with nearly vertical slopes, and to acquire measurements from such surfaces has been demonstrated in a mission-realistic exploration scenario.

 

Axel

 

 

Manipulation and Sampling

DARPA ARM has advanced dexterous manipulation software and algorithms suitable for numerous applications; for example, DARPA ARM could be used to assist soldiers in the field, disarm explosive devices, strengthen national manufacturing capabilities, or even provide everyday robotic assistance within households. DARPA ARM is capable of autonomously recognizing and manipulating tools in a variety of partially structured environments. Demonstrations include grasping a hand drill and drilling a hole at a desired position in a block of wood, inserting a key into a door-handle lock and unlocking it, turning a door handle and opening the door, picking and placing tools such as hammers and screwdrivers, hanging up a phone, changing a tire, and cutting wire.

 

DARPA ARM

 

BiBlade is an innovative sampling tool for a future sample-return mission to a comet’s surface. The tool has two blades that would be driven into the comet surface by springs to acquire and encapsulate a sample in a single, quick sampling action. This capability is achieved with only one actuator and two frangibolts, meeting the mission need for minimal tool complexity and risk. BiBlade has several unique features that improve upon the state of the art—including the ability to acquire a sample to a depth of 10 cm while maintaining stratigraphy and the ability to return two samples with one tool—thereby enabling multiple sampling attempts, direct sample measurement, and fast sampling. A prototype of the tool has been experimentally validated through the entire sampling chain using an innovative suite of simulants developed to represent the mechanical properties of a comet. The BiBlade sampling chain is a complete end-to-end sampling system that includes sampling-tool deployment and use, sample measurement, and sample transfer to a sample-return capsule.
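
As a rough, purely illustrative energy budget for a spring-driven sampler (these are not BiBlade's actual design values): a spring of stiffness $k$ compressed a distance $x$ stores

$$E = \tfrac{1}{2}\,k\,x^{2},$$

so a 4000 N/m spring compressed 0.2 m stores 80 J: enough, in this toy accounting, to drive two blades 0.1 m deep against an average resistance of 400 N each (2 × 400 N × 0.1 m = 80 J). Sizing the springs against the strongest expected simulant sets the tool's energy class.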

BiBlade

 

 

Machine Perception and Computer Vision

Sensor Fusion research is tasked with developing a low-cost perception system that makes the most of complementary, low-cost sensors to transform a small, jeep-sized vehicle into an autonomous Logistics Connector unmanned ground vehicle (UGV). Replacing manned resupply missions with these autonomous UGVs could improve logistics support for soldiers in the field. JPL performed a trade study of state-of-the-art, low-cost sensors; built and delivered a low-cost perception system; and developed the following algorithms: daytime stereo vision, multimodal sensor processing, positive obstacle detection, ground segmentation, and supervised daytime material classification. The first version of the low-cost perception system was field-tested at Camp Pendleton against the baseline perception system on an autonomous high-mobility multipurpose wheeled vehicle (HMMWV). Nearly all of the algorithms have been accepted into the baseline and are now undergoing verification and validation testing.
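
A minimal sketch of the positive-obstacle-detection step: bin stereo-derived 3D points into a ground-plane grid and flag cells whose height spread exceeds what the vehicle can climb. The cell size and threshold are hypothetical; the delivered system fuses several sensors with far richer logic.

```python
# Positive obstacle detection sketch -- illustrative only.
import numpy as np

CELL_M = 0.25        # hypothetical grid-cell size, meters
STEP_THRESH_M = 0.3  # hypothetical height spread the vehicle cannot climb

def obstacle_cells(points_xyz):
    """points_xyz: (N, 3) array in the vehicle frame, z up. Returns {(i, j)}."""
    cells = {}
    for x, y, z in points_xyz:
        key = (int(x // CELL_M), int(y // CELL_M))
        lo, hi = cells.get(key, (z, z))
        cells[key] = (min(lo, z), max(hi, z))   # track min/max height per cell
    return {k for k, (lo, hi) in cells.items() if hi - lo > STEP_THRESH_M}

# Toy cloud: flat ground plus a 0.5 m block two cells ahead of the vehicle.
ground = np.column_stack([np.random.uniform(0, 2, 500),
                          np.random.uniform(-1, 1, 500),
                          np.random.normal(0.0, 0.02, 500)])
block = np.array([[0.6, 0.10, 0.5], [0.6, 0.15, 0.0]])
print(obstacle_cells(np.vstack([ground, block])))  # contains cell (2, 0)
```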

 

Sensor Fusion

 

The LS3 perception system uses visible and infrared sensors and scanning laser range finders to permit day/night operation in a wide range of environments. Developed as part of a DARPA project to create a legged robot that can function autonomously as a packhorse for a squad of soldiers, LS3 provides local terrain mapping and dynamic obstacle detection and tracking. The local terrain-mapping module builds a high-resolution map of nearby terrain that is used to guide gait selection and foot planting and that remains centered on the vehicle as it moves through the world. The local terrain-classification algorithms identify negative obstacles, water, and vegetation. The dynamic obstacle module allows LS3 to detect and track pedestrians near the robot, thereby ensuring vehicle safety when operating in close proximity to soldiers and civilians. After five years of development, LS3 is mature enough to operate with Marines in a realistic combat exercise.

 

LS3

 

Project Tango envisions a future in which everyday mobile devices estimate their position by building a detailed 3D map, much as they use GPS today, with accuracy down to the millimeter level. 3D mapping and robust, vision-based real-time navigation have been major challenges for robotics and computer vision, but recent advances in computing power address these challenges by enabling 3D pose estimation and map building on a mobile device equipped with a stereo camera pair. In collaboration with Google, JPL has demonstrated accurate and consistent 3D mapping, including construction of detailed, textured models of indoor spaces in real time on memory-constrained systems.

 

Project Tango

 

The Contact Detection and Analysis System (CDAS) processes camera images in both the visible and infrared spectra for 360-degree maritime situational awareness. This capability is required to navigate safely among other vessels; it also supports mission operations such as automated target recognition, intelligence, surveillance, and reconnaissance in challenging scenarios—low-visibility weather, littoral and riverine environments with heavy clutter, higher sea states, high-speed own-ship and contact motion, and semi-submerged hazards. The CDAS software fuses input from the JPL 360-degree camera head and the JPL Hammerhead stereo system for robust contact detection. Contacts are then tracked to build velocity estimates for motion planning and vessel-type classification.
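
A minimal sketch of the track-and-estimate-velocity step: a textbook constant-velocity Kalman filter over noisy position detections. The matrices and noise levels are hypothetical, not CDAS's actual tracker.

```python
# Constant-velocity Kalman filter sketch for estimating a contact's velocity
# from noisy position detections -- illustrative only.
import numpy as np

DT = 0.5  # hypothetical time between detections, seconds
F = np.array([[1, 0, DT, 0], [0, 1, 0, DT], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe position only
Q = 0.05 * np.eye(4)   # process noise (unmodeled maneuvering)
R = 4.0 * np.eye(2)    # measurement noise (detection jitter, m^2)

x = np.zeros(4)        # state: [east, north, v_east, v_north]
P = 100.0 * np.eye(4)  # start highly uncertain

def update(z):
    """One predict/correct cycle with a position detection z = [east, north]."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                   # predict forward one step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)                         # correct with the residual
    P = (np.eye(4) - K @ H) @ P

# Simulated contact moving at 5 m/s east, detected with ~2 m jitter.
rng = np.random.default_rng(0)
for k in range(40):
    update(np.array([5.0 * DT * k, 0.0]) + rng.normal(0, 2, 2))
print(x[2:])  # velocity estimate -> approximately [5, 0] m/s
```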

 

CDAS

 

ARES-V is a collaborative stereo vision technology for small aerial vehicles that enables instantaneous 3D terrain reconstruction with adjustable resolution. This technology can be used for robust surface-relative navigation, high-resolution mapping, and moving-target detection. ARES-V employs two small quadrotors flying in tandem formation to demonstrate adaptive-resolution stereo vision. The accuracy of the reconstruction, which depends on the distance between the vehicles and on the altitude, is adjustable during flight based on specific mission needs. Applications of this technology include aerial surveillance and target-relative navigation for small-body missions.
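
The underlying trade follows the standard stereo depth-error relation: with focal length $f$ (in pixels), stereo baseline $b$ (here, the inter-vehicle distance), range $z$, and disparity uncertainty $\delta d$,

$$\delta z \;\approx\; \frac{z^{2}}{f\,b}\,\delta d,$$

so widening the formation (larger $b$) or flying lower (smaller $z$) sharpens the reconstruction; that is exactly the knob a tandem formation can turn in flight. (This is textbook stereo geometry, not ARES-V's specific error model.)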

ARES-V

 

 

Path Planning

Fast Traverse enables fully autonomous rover navigation with a 100 percent driving duty cycle. Planetary rovers have traditionally been limited by the computational power available in space: when driving autonomously, limited computation means the rover must stop for a substantial period while the navigation software identifies a hazard-free path from acquired imagery. The resulting limitation on driving duty cycle reduces the rover’s average traverse rate; this in turn leads operators to prefer manual driving modes without the full suite of vision-based safety checks. Fast Traverse enables planetary rovers to drive faster, farther, and more safely by moving the computation-intensive portions of autonomous navigation from the main CPU to a field-programmable gate array (FPGA) coprocessor. What would currently take many seconds or even minutes on state-of-the-art radiation-hardened processors can be accomplished in microseconds with FPGA implementations. Fast Traverse technology has already been implemented, tested, and demonstrated on a research rover.
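
The duty-cycle arithmetic is simple but decisive. The numbers below are illustrative, not mission values:

```python
# Why duty cycle dominates average traverse rate -- illustrative numbers only.
def average_rate_cm_s(drive_speed_cm_s, drive_s, think_s):
    """Mean speed over repeated drive-then-compute cycles."""
    return drive_speed_cm_s * drive_s / (drive_s + think_s)

# Stop-and-go autonomy: drive 10 s at 4 cm/s, then stop ~20 s to plan.
print(average_rate_cm_s(4.0, 10.0, 20.0))  # ~1.3 cm/s average
# FPGA vision: planning overlaps driving, so the rover never stops.
print(average_rate_cm_s(4.0, 10.0, 0.0))   # 4.0 cm/s -- a ~3x gain
```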

 

Fast Traverse

SUAVE could revolutionize the use of unmanned aerial vehicles (UAVs) for Earth science observations by automating the launch, retrieval, and data-download process. SUAVE experiments with small, autonomous UAVs for in situ observation of ecosystem properties from leaf to canopy. These UAVs are able to conduct sorties many times per day for several months without human intervention, increasing the spatial resolution and temporal frequency of observations far beyond what can be achieved from traditional airborne and orbital platforms. This method also extends observations into locations and timescales that traditional platforms cannot reach, such as under tree canopies and continuous sensing throughout the diurnal cycle for months at a time. SUAVE would develop and demonstrate capabilities for autonomous landing and recharging, in-flight position estimation with poor GPS, and in-flight obstacle avoidance to enable unattended, long-duration, repetitive observations.

 

SUAVE

 

ACTUV, DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel, is developing an independently deployed unmanned surface vessel optimized to provide continuous overt tracking of submarines. The program objective is to demonstrate the technical viability of an independently deployed unmanned naval vessel, under sparse remote supervisory control, robustly tracking quiet, modern diesel-electric submarines. SAIC is the prime contractor for this DARPA program; JPL is providing ACTUV’s autonomy capabilities. In particular, JPL will support motion and mission planning and provide health-management capabilities for the robotic platform during its 75-day mission.

 

ACTUV

 

AUV is an adaptive, long-duration, autonomous in situ sensing system for an unmanned underwater vehicle, with onboard autonomous capabilities for monitoring mixed-layer variability and its relation to upper-ocean carbon cycles. AUV provides intelligent onboard autonomy to manage systems, self-adapt, and react to changing conditions related to mission objectives, resource constraints, and science opportunities in the environment. AUV also runs onboard adaptive sampling algorithms to detect features of interest, follow relevant signatures, and adaptively build physical process models. AUV offers enhanced robotics and science-exploration capabilities for marine environments at reduced cost.
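
A minimal sketch of threshold-triggered adaptive sampling: survey slowly in quiet water and sample densely near a feature of interest, such as a sharp gradient at the mixed-layer boundary. The signal, threshold, and rates are hypothetical.

```python
# Adaptive sampling sketch -- illustrative only.
def sample_plan(readings, gradient_thresh=0.5):
    """Return per-step sampling rates (Hz) given consecutive sensor readings."""
    rates = [0.1]                             # baseline 0.1 Hz survey rate
    for prev, cur in zip(readings, readings[1:]):
        feature = abs(cur - prev) > gradient_thresh  # crude gradient detector
        rates.append(1.0 if feature else 0.1)        # burst to 1 Hz on feature
    return rates

# Temperature trace crossing a mixed-layer-like step between steps 3 and 5.
trace = [10.0, 10.1, 10.0, 11.2, 12.4, 12.5, 12.4]
print(sample_plan(trace))  # [0.1, 0.1, 0.1, 1.0, 1.0, 0.1, 0.1]
```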

AUV

 

 

User Interface

BioSleeve is a sleeve-based gesture-recognition interface that can be worn inside intravehicular activity (IVA) and extravehicular activity (EVA) suits. BioSleeve incorporates electromyography and inertial sensors to provide intuitive force and position control signals from natural arm, hand, and finger movements. The goal of this effort is to construct a wearable BioSleeve prototype with embedded algorithms for adaptive gesture recognition. This could allow demonstration of control for a variety of robots, including surface rovers, manipulator arms, and exoskeletons. The final demonstration could simulate and assess gestural driving of the ISS Canadarm2 by an astronaut on EVA who is anchored to the arm’s end effector for station keeping.
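
A minimal sketch of the gesture-recognition idea: classify a window of multichannel EMG by comparing its per-channel activation profile against trained gesture templates. The features, gesture names, and nearest-centroid rule are hypothetical simplifications of the adaptive algorithms described above.

```python
# EMG gesture classification sketch -- illustrative only.
import numpy as np

def features(emg_window):
    """Mean absolute value per channel -- a classic, simple EMG feature."""
    return np.mean(np.abs(emg_window), axis=0)

def train(examples):
    """examples: {gesture_name: list of (samples, channels) arrays}."""
    return {g: np.mean([features(w) for w in ws], axis=0)
            for g, ws in examples.items()}

def classify(centroids, emg_window):
    f = features(emg_window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

# Toy 4-channel data: "fist" lights up channels 0-1, "point" channels 2-3.
rng = np.random.default_rng(1)
fist = [rng.normal(0, 1, (50, 4)) * [3, 3, 1, 1] for _ in range(5)]
point = [rng.normal(0, 1, (50, 4)) * [1, 1, 3, 3] for _ in range(5)]
centroids = train({"fist": fist, "point": point})
print(classify(centroids, rng.normal(0, 1, (50, 4)) * [3, 3, 1, 1]))  # "fist"
```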

BioSleeve

 

 

Other Robotics Technologies

Mars Heli is a proposed add-on to future Mars rovers that could potentially triple the distance these vehicles can drive in a Martian day while delivering a new level of visual information for choosing which sites to explore. This 1 kg platform (1 m blade span) can fly where rovers cannot drive, provide higher-resolution surface images than are possible from orbit, and see much larger areas than rover-mounted cameras can. Mars Heli employs coaxial rotors designed for the thin Martian atmosphere (1% of Earth’s) and a radio link to the rover for relay to Earth. It has lightweight, energy-absorbing legs for landing on natural terrain. A camera/IMU/altimeter suite is used for navigation and hazard detection, and a fault-tolerant computer provides autonomous flight control and safe landings. Aerogel insulation and a heater keep the interior warm at night, and solar cells recharge the battery. Testing with engineering prototypes has been done in a 25-foot vacuum chamber that replicates the Martian atmosphere, allowing characterization of blade aerodynamics, lift generation, and flight-control behaviors.
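
To see why the thin atmosphere drives the design, note the standard rotor-thrust scaling

$$T \;=\; C_{T}\,\rho\,A\,(\Omega R)^{2},$$

where $\rho$ is atmospheric density, $A$ the rotor disk area, and $\Omega R$ the blade tip speed. With $\rho \approx 0.02\ \mathrm{kg/m^3}$ on Mars versus $\approx 1.2\ \mathrm{kg/m^3}$ at Earth sea level, thrust at fixed $C_T$, $A$, and tip speed falls by roughly a factor of 60; hence the large, fast-spinning coaxial blades on a very light (1 kg) vehicle. (Illustrative textbook scaling, not the project's design numbers.)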

 

Mars Heli

 

ISTAR is an in-space robotics technology and telescope design concept featuring a limbed robot capable of assembling large telescopes in space. This could enable future space missions with telescopes of 10-100 m aperture diameter. Such large telescopes cannot be folded into a conventional rocket payload and must instead be assembled in space from smaller components. ISTAR provides integrated robotic-system concepts and matching telescope design concepts for a large space-telescope mission, including laboratory demonstrations of telerobotic on-orbit assembly.

 

ISTAR

 

IRIS is a robot that can grip the sides of spacecraft while performing tasks, enabling increased mobility and sustained operations on the surfaces of objects in microgravity. The concept is a small (20 kg) four-limbed robot, each limb equipped with adhesively anchoring grippers for surface mobility and thrusters for free flight. The IRIS effort specifically focuses on laying the technological groundwork for inspecting the ISS for micrometeorite damage. Using an air-bearing table to simulate microgravity in two dimensions, the IRIS robot has demonstrated adhesively anchored walking, free flying using microthrusters, and transitional operations (takeoff and landing). The robot will carry a relevant contact-inspection instrument and demonstrate its use, including generation of the adhesive reaction forces the instrument requires.
 

IRIS

 

Cavebot is a gravity-agnostic mobility platform for any natural terrain. The robot uses hundreds of sharp claws, called microspines, that adapt to a surface independently to create secure anchor points. High-fidelity field experiments testing the robot’s mobility have been conducted in caves in California and New Mexico. Caves allow the robot to be tested in all gravitational orientations, informing future missions to caves on Mars and the Moon as well as missions to asteroids, where a mobile robot must grip the surface to avoid drifting off. Microspine grippers were also tested successfully aboard NASA’s zero-g aircraft on multiple rock types, enabling the first-ever zero-g drilling demonstration in 2014.

 

Cavebot

 

Hedgehog is a toss-hop-tumble spacecraft/rover hybrid robot concept for the exploration of small Solar System bodies. Multiple Hedgehogs can be deployed from a “mothership” onto the surface of a low-gravity object such as an asteroid. Using internal actuation to hop and tumble across the surface, Hedgehog is a minimalistic robotic platform for in situ exploration of small bodies, combining low complexity with large surface coverage and finely controlled regional mobility.
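
The hop itself relies on momentum exchange (published Hedgehog prototypes use internal flywheels; the symbols here are illustrative): suddenly braking a flywheel of inertia $I$ spinning at rate $\omega$ over a time $\Delta t$ applies a reaction torque

$$\tau \;=\; \frac{I\,\omega}{\Delta t}$$

to the chassis. Pivoting about its surface spikes, the robot converts that torque into a hop, and in milligravity even a small flywheel can launch it many meters.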

 

Hedgehog

 

BRUIE is a two-wheeled robot capable of roving in an under-ice environment. The rover is positively buoyant, allowing it to stick to the underside of the ice and operate using control principles similar to those of traditional aboveground rovers. The system has been tested in thermokarst lakes near Barrow, Alaska, where data from onboard video and methane sensors give scientific insight into the formation and distribution of trapped methane pockets in the lake ice.
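
The enabling principle is simple net buoyancy (symbols illustrative): a rover of mass $m$ that displaces volume $V$ in water of density $\rho_w$ is pressed against the ice ceiling with force

$$F \;=\; (\rho_w V - m)\,g \;>\; 0,$$

which plays the role that weight plays for a surface rover, so conventional wheeled-traction control carries over largely unchanged.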

 

BRUIE

 

Planetary balloons are buoyant vehicles that could fly for weeks or months in the atmospheres of Venus and Titan, carrying a wide variety of science instruments and conducting extensive in situ investigations. This work combines prototyping, testing, and analysis to mature balloon technology for its first use by NASA in a planetary mission. Planetary balloons are a direct extension of balloon technology that has been used on Earth for the past two centuries. The main challenge is adapting the technology to very different environments—Titan is cryogenically cold (85 to 95 K), while Venus has very high temperatures near the surface (460°C) and sulfuric acid clouds in the cooler upper atmosphere (30°C).
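
Balloon sizing follows Archimedes' principle (the densities below are rough figures, for illustration only): an envelope of volume $V$ floats when the displaced atmosphere outweighs the total system mass,

$$\rho_{\mathrm{atm}}\,V \;\ge\; m_{\mathrm{total}},$$

where $m_{\mathrm{total}}$ includes envelope, lifting gas, and payload. Titan's dense, cold atmosphere (on the order of 5 kg/m³ near the surface) makes even modest balloons very capable there, while at Venus the same relation pushes designs toward the cooler cloud layer, where densities approach Earth-like values.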

Planetary Balloons