MIT DARPA Urban Challenge Proposal

The DARPA Urban Challenge promises

to be a stage for innovation in the areas of robotics,

perception, and planning.

MIT has assembled a team of top researchers in these fields

and is determined to win the Urban Challenge.

With that in mind, MIT has chosen the Land Rover

LR3 because of its unique on- and off-road capabilities.

The vehicle will rely upon a layered sensor architecture

that will incorporate a coupled GPS/INS

system, a series of redundant LiDAR

scanners for short- to medium-range sensing,

and several telephoto and wide-field cameras that

will be used for object detection in the near and far field.

While capable hardware is necessary,

the unique nature of the upcoming challenge

places the greatest responsibility on software,

and winning will require a revolution in the approach

to the problems, ranging from perception to planning.

Whether you are in Baghdad or Boston,

uncertainty is implicit in all aspects of urban driving.

The driver must constantly be aware of a pedestrian darting

out onto the street or the car ahead suddenly braking,

as well as replan routes on the fly

to deal with unexpected road construction.

Uncertainty will similarly extend to all aspects

of the DARPA Urban Challenge.

MIT will take advantage of its unique research

in statistical state estimation and mapping,

planning under uncertainty, and statistical perception

in developing a software architecture that addresses

uncertainty at all levels of the perception, planning,

and control tasks.

At the high level, our architecture

will include a mission planner that manages the global plan,

a situational planner that is designed to rapidly handle

the uncertainty of the environment,

and a perceptual state estimator that

extracts salient information from the scene.

A map fragment database maintains a history

of the relevant scene structure and allows

the situational planner to be more aggressive when

revisiting areas.
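
The interaction between these components can be sketched roughly as follows. This is an illustrative Python mock-up, not the team's code; all class and method names (MissionPlanner, SituationalPlanner, PerceptualState, and so on) are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of the layered architecture described above:
# a mission planner holds the global route, a situational planner
# reacts to local uncertainty, and a perceptual state estimator
# feeds both. Names are illustrative, not from the MIT codebase.

@dataclass
class PerceptualState:
    obstacles: list   # salient objects extracted from the scene
    ego_pose: tuple   # (x, y, heading) from the GPS/INS system

class MissionPlanner:
    """Maintains the global plan: an ordered list of checkpoints."""
    def __init__(self, checkpoints):
        self.checkpoints = list(checkpoints)

    def next_goal(self):
        return self.checkpoints[0] if self.checkpoints else None

    def reached(self, goal):
        if self.checkpoints and self.checkpoints[0] == goal:
            self.checkpoints.pop(0)

class SituationalPlanner:
    """Produces a short-horizon decision toward the current goal,
    reacting to obstacles reported by perception."""
    def plan(self, state: PerceptualState, goal):
        # Trivial placeholder: head straight at the goal unless an
        # obstacle blocks it, in which case flag that a detour is needed.
        blocked = goal in state.obstacles
        return {"goal": goal, "detour": blocked}

def control_cycle(mission, situational, state):
    """One pass of the high-level loop: global goal -> local plan."""
    goal = mission.next_goal()
    return situational.plan(state, goal)
```

The point of the decomposition is that the situational planner can run at a much higher rate than the mission planner, reacting to perception without recomputing the global route.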

This simulation shows a motion planning technique

applied to a sample RNDF (Route Network Definition File).

Under nominal conditions, the planner

chooses the shortest path to the goal,

but is able to replan on the fly when an obstruction arises.

In the multi-vehicle case, our algorithms

predict the motion of other agents and replan short horizon

trajectories in order to avoid conflicting paths.

The path planning relies on a condensed description

of the environment provided by the sensors.
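
As a hedged illustration of this kind of replanning, the RNDF can be approximated as a weighted waypoint graph and searched with Dijkstra's algorithm; when an obstruction is detected, the blocked edge is removed and the search is rerun. The graph encoding and function names below are invented for the sketch:

```python
import heapq

# Minimal sketch of on-the-fly replanning over an RNDF-like waypoint
# graph, modeled here as a plain adjacency dict with edge costs.
# When an edge becomes obstructed, we drop it and search again.

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm; returns the cheapest waypoint sequence."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None  # goal unreachable

def block_edge(graph, a, b):
    """Simulate a detected obstruction by removing the edge a->b."""
    graph.get(a, {}).pop(b, None)
```

Rerunning the search after `block_edge` yields the detour route, which is the behavior shown in the simulation.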

Similarly, our current vision research

is capable of real-time, 3D tracking of features

and shapes, which then provides rich and salient information

about the environment.

With video, we identify long-range movement

with particles that capture motion

that is spatially dense and temporally long-range.

The obstacles can then be tracked

using robust and optimized vision

methods that take into account the structure of the scene.

In this mockup, the system separates the road

shown in blue from the obstacles shown in red and extracts

in the far field a sign which, in turn, impacts the driving behavior.

Additionally, the mapping component of this system

will draw upon our current research in localization

and mapping, which has been demonstrated

in large, complex environments.

MIT, in conjunction with Olin College,

currently has the skills in hardware development

and construction necessary to implement

these unique algorithms.

Work by team members previously at iRobot Corporation

demonstrates the ability to deploy autonomous vehicles

that operate in rugged, outdoor terrain.

Common to both off-road and urban driving,

puddles are one challenging feature,

as they are not easily detected by standard laser range-finders.

Driving in an urban environment will require highly precise

navigation at the local scale.

Using the NavCom system, we can accurately control the vehicle

based upon an IMU and, by directly incorporating

occasional GPS data, can achieve centimeter precision.
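
A minimal sketch of this kind of coupled estimation, assuming a 1-D world: high-rate IMU velocity samples are integrated as the prediction step of a Kalman filter, and sparse GPS fixes supply the correction. All noise parameters below are invented; a real GPS/INS system runs a full multi-state filter:

```python
# Illustrative 1-D Kalman filter: high-rate IMU velocity integration
# (prediction) corrected by occasional absolute GPS fixes (update).
# q and r are invented process/measurement noise variances.

def fuse(imu_velocities, gps_fixes, dt=0.1, q=0.05, r=1.0):
    """imu_velocities: one velocity sample per time step.
    gps_fixes: sparse dict mapping step index -> measured position."""
    x, p = 0.0, 1.0          # position estimate and its variance
    for k, v in enumerate(imu_velocities):
        # Predict: integrate IMU velocity; uncertainty grows.
        x += v * dt
        p += q
        # Update: fold in a GPS fix whenever one is available.
        if k in gps_fixes:
            kgain = p / (p + r)
            x += kgain * (gps_fixes[k] - x)
            p *= (1 - kgain)
    return x, p
```

Even a handful of GPS fixes is enough to bound the drift of a biased IMU, which is why occasional fixes suffice for precise local navigation.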

Here, the system uses the vehicle's own motion

to recover three-dimensional information from the planar

laser range data of the vehicle.

Some vehicle poses can cause a ground plane to appear

as an obstacle to the system.

Here, we show that the vehicle slows down in order

to ensure that the perceived obstacle

during the downward pitch is, in fact, the ground

plane before continuing at high speed.
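
The slow-down-and-verify behavior can be caricatured as a simple speed governor; the speeds and the verification flag below are invented for illustration:

```python
# Hedged sketch of the behavior described above: during a downward
# pitch, forward laser returns can look like an obstacle, so the
# controller holds a cautious speed until the return is confirmed
# to lie on the ground plane. All thresholds are invented.

CRUISE_SPEED = 10.0   # m/s, nominal
CAUTION_SPEED = 2.0   # m/s, while a suspect return is being verified

def commanded_speed(obstacle_detected, verified_ground):
    """Return the target speed: slow while a perceived obstacle has
    not yet been verified as the ground plane, cruise otherwise."""
    if obstacle_detected and not verified_ground:
        return CAUTION_SPEED
    return CRUISE_SPEED
```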

Part of the DARPA Challenge requires the ability

to park between stationary, unmodeled vehicles.

Here, we see a vehicle demonstrating a similar ability

by autonomously pulling into an orchard lane using a fusion

of GPS, LiDAR, wheel odometry, and vision.

GPS is used to identify the approximate location

of the lane opening, after which local sensors take over

for navigation.

Another critical capability required for the DARPA

Challenge is sensing and tracking

of other moving vehicles.

Here, we see the ATV tracking the moving vehicles

ahead of it using only a single laser range-finder

and a color camera.
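
One classical way to implement such tracking (not necessarily the team's method) is nearest-neighbor association of laser clusters across frames, with a constant-velocity model per track; the camera cue is omitted in this sketch and all names are illustrative:

```python
import math

# Toy sketch of multi-target tracking from laser returns: clusters
# detected each frame are matched to existing tracks by nearest
# neighbor within a distance gate, and each track keeps a
# constant-velocity estimate for prediction.

class Track:
    def __init__(self, pos):
        self.pos = pos
        self.vel = (0.0, 0.0)

    def predict(self, dt):
        """Predicted position one time step ahead."""
        return (self.pos[0] + self.vel[0] * dt,
                self.pos[1] + self.vel[1] * dt)

    def update(self, detection, dt):
        """Re-estimate velocity from displacement, then store the fix."""
        px, py = self.pos
        self.vel = ((detection[0] - px) / dt, (detection[1] - py) / dt)
        self.pos = detection

def associate(tracks, detections, dt=0.1, gate=5.0):
    """Greedy nearest-neighbor association within a distance gate;
    unclaimed detections spawn new tracks."""
    unmatched = list(detections)
    for tr in tracks:
        if not unmatched:
            break
        pred = tr.predict(dt)
        best = min(unmatched, key=lambda d: math.dist(pred, d))
        if math.dist(pred, best) <= gate:
            tr.update(best, dt)
            unmatched.remove(best)
    tracks.extend(Track(d) for d in unmatched)
    return tracks
```

Fusing the camera would refine the association step, for example by matching the color appearance of each cluster between frames.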

MIT has a long-standing history of building autonomous

platforms for a wide range of environments

that have successfully helped to pave the way for the maturing

field of robotics.

We are looking forward to applying

our technical experience to the 2007 DARPA Urban Challenge.
