Google’s autonomous cars have already shown how close vehicles are to driving themselves in day-to-day traffic, but there’s still one uncontrollable, unpredictable, and often-irrational variable they struggle to cope with: you, me, and all the other haphazardly programmed human beings on the road. And though predicting human behavior may be one of the most difficult tasks for a human-programmed computer, researchers at MIT are already digging into the challenge. In tests with model cars (one autonomous, one human-controlled) on overlapping tracks, 97 out of 100 laps ended without a collision. But not all of those laps fell into the near-collision “capture set”… which, as it turns out, is what makes the human threat to autonomous cars so challenging.
According to [MIT Mechanical Engineering Professor Domitilla] Del Vecchio, a common challenge for ITS developers is designing a system that is safe without being overly conservative. It’s tempting to treat every vehicle on the road as an “agent that’s playing against you,” she says, and construct hypersensitive systems that consistently react to worst-case scenarios. But with this approach, Del Vecchio says, “you get a system that gives you warnings even when you don’t feel them as necessary. Then you would say, ‘Oh, this warning system doesn’t work,’ and you would neglect it all the time.”
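The “capture set” idea behind this balance can be sketched in code. The snippet below is a heavily simplified, hypothetical illustration (not the MIT team’s actual algorithm): two vehicles approach a shared conflict zone, and a joint state is in the capture set when the autonomous car can neither brake to a stop before the zone nor accelerate through it before the other car arrives. Warning only inside that set, rather than on every worst-case scenario, is what keeps the system from crying wolf. All parameters (braking limits, zone length) are made-up values for the sketch.

```python
def stopping_distance(v, max_brake):
    """Distance (m) needed to stop from speed v (m/s) under constant braking."""
    return v * v / (2.0 * max_brake)

def in_capture_set(d1, v1, d2, v2, zone_len=4.0, max_brake=6.0, max_accel=3.0):
    """Simplified 1-D capture-set test (illustrative only).

    d1, d2: each vehicle's distance (m) from the start of the conflict zone
    v1, v2: current speeds (m/s); vehicle 1 is the autonomous car
    Returns True when vehicle 1 can neither stop short of the zone nor
    clear it before vehicle 2 arrives -- i.e. collision is no longer
    avoidable by vehicle 1's actions alone.
    """
    # Option A: brake hard and stop before entering the zone.
    can_stop = stopping_distance(v1, max_brake) <= d1

    # Option B: accelerate hard and exit the zone before vehicle 2 enters.
    # Worst case for vehicle 2: it keeps its current speed (no braking).
    t2 = d2 / v2 if v2 > 0 else float("inf")
    d1_travel = v1 * t2 + 0.5 * max_accel * t2 * t2
    can_clear = d1_travel >= d1 + zone_len

    return not (can_stop or can_clear)
```

With vehicles far from the zone, at least one escape maneuver still exists and the state is safe; only once both options are exhausted does the state enter the capture set and warrant a warning or intervention.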