
Google acknowledges the 12th accident involving its autonomous cars, while Virginia opens 70 miles of highway to Google and others for testing.
Google’s acknowledgement of the accident came Wednesday during the tech giant’s annual shareholders meeting, Autoblog reports. Co-founder Sergey Brin said the vehicle was stopped at a traffic light when it was rear-ended, and that this was the seventh or eighth time such a scenario has occurred.
Safety advocates want the company to release all records pertaining to the accidents so the public can know where each instance went wrong, a request reinforced by shareholder and Consumer Watchdog advocate John Simpson during the meeting. Brin gave a synopsis of each accident in response, adding that Google would be open to providing more information, something it has yet to do.
The latest accident follows a disclosure made by Google in May that its fleet of autonomous cars had been involved in 11 accidents over a six-year period, all of which the tech giant attributed to human error.
Meanwhile, Northern Virginia drivers may soon see Google’s autonomous cars on their way to work. The Richmond Times-Dispatch says over 70 miles of highway on Interstates 95, 495 and 66, as well as U.S. 29 and U.S. 50, will become so-called Virginia Automated Corridors under the oversight of the Virginia Tech Transportation Institute. Prior to highway testing, the vehicles would first be certified for safety by VTTI at its Smart Road test track in Montgomery County and at Virginia International Raceway in Halifax County.
VTTI Center for Autonomous Vehicle Systems Director Myra Blanco says the aim of the program is for Virginia to show other states how to make testing of autonomous technologies easier, adding that the program would “advance the technology and… attract companies and satellite offices in the Northern Virginia area to develop these new concepts.”
Testing is set to begin within a year, with the Center providing insurance and license plates for those companies looking to prove their concepts. The Virginia Department of Transportation and Department of Motor Vehicles are partners in the program.
[Image credit: Google]
Rear ended. Well, we know where the fault in that one lies.
I’ll be curious to see the listing of all accidents. And how many (if any) are attributed to the Google car’s actions.
“all accidents caused by human error”
Programmers are human. Engineers are human. Technicians are human. Mechanics are human…
Well, let’s go right to the source.
Cameron, did you use “human” to mean drivers of normal cars that hit the cute little robot?
Yes.
Thanks.
The “cute little robot” pictured is the fully autonomous (only) demo vehicle that’s confined to private roads and parking lots. I’m pretty sure the Google autonomous vehicles involved in road testing and accidents were Priuses and other Toyotas with additional sensors that should be less distracting than the Street View cars.
They are Lexus RXes.
“They are Lexus RXes”
Eeeeww… that takes the Pixar right out of it. Go ‘head and smash ’em all.
Google says it is driver error and bleat, bleat go the sheep. Let’s see an analysis of accidents per mile traveled. There have been many accidents in a short time for a small fleet. Something is off.
To what extent were the other drivers at fault? Did the Google car do something that increased the chances of an accident? That’s why it’s important to see more details about the accidents.
“Hey, lookit that funny l’il car!”
(forgets to hit brake pedal)
(crashes)
Google has now started posting the details.
http://www.google.com/selfdrivingcar/reports/
Probably the most interesting context missing from the article is that the 12 accidents occurred in the course of over 1 million miles driven autonomously, which works out to no more than about one accident per 83,000 miles.
The scenario that worries me about autonomous cars is what the computer will decide on my behalf in a no-win situation.
If little Suzie runs out in front of my autonomous car with no time to react and the computer’s choices are run over Suzie or swerve me into an oncoming 18 wheeler, how does it decide what to do?
A system biased to splatter pedestrians/cyclists in a no-win situation is ripe for lawsuits from victims’ families and survivors. A system biased to analyze mass and make a decision based on survivability percentages (à la I, Robot, the sort-of-crappy movie, versus the much better series of 9 books) starts conflicting with the three laws of robotics.
The no-win situation in driving presents itself every day, all over the world, more times than anyone can count. The decision made is very human, unique, and made at a subconscious level. I for one don’t want to give that choice to a robot overlord that might go, “Little Suzie has a 12% chance of survival if you hit her at the projected impact speed after 100% braking over 39 feet from 38 MPH. You have a 12% chance of survival if you hit this 18 wheeler head-on as it comes at you at 48 MPH and you have a projected forward speed of 31 MPH based on 67 feet of maximum braking. Equal chance, run assigned-value algorithm – little Suzie’s mass and shape indicate this is a child, and your middle-aged butt has less value than the potential of a child, so 18 wheeler here we come!”
In your scenario the system will choose to hit the smaller object (Suzie) as the lesser of two evils. Look, the system operating your car won’t be a HAL 9000, and it won’t have sensors capable of allowing logic as sophisticated as you’re imagining. The only bias built into the code will be to hit the smallest obstacle with the least force possible, and that bias is no different from what most humans would have. People keep projecting sci-fi nightmares onto autonomous cars, but they just aren’t going to be that capable.
In a situation where you have to hit something, I think the robot will decide based on two criteria:
1) How can I hit with the lowest energy? Pretty straightforward.
2) How can I hit with the least “fault?” That one is trickier. But in practice I think it will mean a bias toward threshold braking without turns.
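For what it’s worth, here’s a minimal sketch of how those two criteria might be combined. Everything in it is hypothetical illustration – the Option fields, masses, and speeds are made up, and none of it reflects Google’s actual software:

```python
# Hypothetical sketch only: pick the unavoidable-collision option with the
# lowest impact energy, and break ties in favor of straight-line threshold
# braking. None of this reflects any real autonomous-vehicle codebase.
from dataclasses import dataclass

@dataclass
class Option:
    name: str                  # e.g. "brake straight", "swerve left"
    obstacle_mass_kg: float    # made-up estimate of what would be hit
    impact_speed_ms: float     # projected closing speed after max braking
    requires_steering: bool    # does this option involve leaving the lane?

def impact_energy_j(opt: Option) -> float:
    # Kinetic energy at impact: E = 1/2 * m * v^2
    return 0.5 * opt.obstacle_mass_kg * opt.impact_speed_ms ** 2

def choose(options: list[Option]) -> Option:
    # Criterion 1: lowest impact energy.
    # Criterion 2: when energies tie, prefer staying in-lane
    # (threshold braking, no turns).
    return min(options, key=lambda o: (impact_energy_j(o), o.requires_steering))

options = [
    Option("brake straight", obstacle_mass_kg=30, impact_speed_ms=4.0, requires_steering=False),
    Option("swerve into oncoming truck", obstacle_mass_kg=30000, impact_speed_ms=22.0, requires_steering=True),
]
print(choose(options).name)  # -> "brake straight"
```

A real system would weigh far more inputs than two numbers, but the structure, brake-biased and energy-minimizing rather than morally deliberative, is the point.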
The questions of fault and liability will be resolved the same way for driving computers that they are for anything else that operates machinery (whether robotic or human): through a bunch of court cases that eventually start adding up into rules.
How can I hit with the least “fault?”
I don’t think that this will even be in the logic. Sensors in the car will not be able to provide enough data for program logic to make a determination of fault (beyond staying in the proper lane). Autonomous cars certainly won’t make judgement calls about the relative morality of what to hit.
“Suzie runs out in front of my autonomous car ”
Then the car slams on the brakes. If you hit her, then you sue her parents for the damage to your car*. She’s legally at fault for jaywalking. Her parents are legally at fault for inadequately supervising their child.
* Not that anyone ever does – but you could.
“If little Suzie runs out in front of my autonomous car with no time to react and the computer’s choices are run over Suzie or swerve me into an oncoming 18 wheeler, how does it decide what to do?”
This is not an issue. The computer is going to do what the ideal human would do — hit the brakes hard, and then only steer for an opening if it exists and can be negotiated safely (which it probably won’t and can’t be.)
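As a rough illustration of that priority order, here’s a minimal sketch. The boolean inputs are stand-ins for checks a real perception system would have to make; nothing here is a real vehicle API:

```python
# Illustrative sketch of "brake hard, then steer only for an opening that
# exists and can be negotiated safely." The inputs are stand-ins for what
# a real perception stack would report; nothing here is a real vehicle API.
def emergency_response(obstacle_ahead: bool,
                       opening_exists: bool,
                       opening_is_safe: bool) -> list[str]:
    actions = []
    if obstacle_ahead:
        actions.append("threshold_brake")          # always the first action
        if opening_exists and opening_is_safe:     # steer only if both hold
            actions.append("steer_to_opening")
    return actions

print(emergency_response(True, True, False))  # ['threshold_brake']
print(emergency_response(True, True, True))   # ['threshold_brake', 'steer_to_opening']
```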
The safety of the autonomous system is in what it won’t do.
It won’t speed.
It won’t tailgate.
It won’t weave irrationally.
It won’t make turns across traffic unless there is adequate time to complete them safely.
It won’t get drunk.
It won’t get high.
It won’t make lane changes without signaling.
It won’t run red lights.
It won’t get into road rage contests.
It won’t blame the other guy for bad driving or otherwise fail to accept responsibility for its own contributions to poor road safety.
It won’t allow personal issues, being late, overconfidence, or a general lack of interest in the welfare of others to negatively influence its behavior.
In other words, it won’t be human. That’s a benefit.
Based on the previous article on this, the Google cars are getting rear-ended once every couple of months, which seems a pretty high rate for a limited number of vehicles.
Think of them as self-sacrificing. Surely they’re getting all kinds of proximity, velocity and collision warnings. Internally, they’re probably freaking out, maybe even making Wall-E sounds.
But out of an abundance of caution they do not bolt in any direction like I’ve done when I watched some a-hole approaching way too fast in my mirror. Rather, they give up their lives in the service of a safer and better managed Motoring Public.
*tear drop*
They’re Heroes!
Maybe the Googlemobile is sitting at a green light trying to work out what to do next. Nose to tail is almost inevitable.
Imagine one handling a multi-lane roundabout at rush hour.
I’ve never seen studies on this subject, but I’d guess certain shapes, colors, and tail-lighting arrangements are more prone to being rear-ended than others. The Google Car is probably just not recognized until it’s too late.
All I know is keep those things away from me. I hear it’s like driving near your grandmother.