A Lesson of Tesla Crashes? Computer Vision Can’t Do It All Yet

Jitendra Malik, a researcher in computer vision for three decades, doesn’t own a Tesla, but he has advice for people who do.

“Knowing what I know about computer vision, I wouldn’t take my hands off the steering wheel,” he said.

Dr. Malik, a professor at the University of California, Berkeley, was referring to a fatal crash in May of a Tesla electric car equipped with its Autopilot driver-assistance system. An Ohio man was killed when his Model S, driving in Autopilot mode, crashed into a tractor-trailer.

Federal regulators are still investigating the accident. But it appears likely that the man placed too much confidence in Tesla’s self-driving system. The same may be true of a fatal Tesla accident in China that was reported last week. Other automakers like Ford, which last week announced its plan to produce driverless cars by 2021, are taking a go-slow approach, saying the technology for even occasional hands-free driving is not ready for many traffic situations.

Tesla has said that Autopilot is not meant to take over completely for a human driver. And earlier this month, the company implicitly acknowledged that its owners should heed Dr. Malik’s advice, announcing that it was modifying Autopilot so that the system will issue drivers more frequent warnings to put their hands on the steering wheel. Tesla is also fine-tuning its radar sensors to detect road hazards more accurately and to rely less on computer vision.

The Tesla accident in May, researchers say, was not a failure of computer vision. But it underscored the limitations of the science in applications like driverless cars, despite remarkable progress in recent years fueled by digital data, computing firepower and software inspired by the human brain.

Read the full article in The New York Times.