Uber proudly released a fleet of eleven driverless Volvos onto the streets of San Francisco Wednesday morning, and one or two immediately started running amok. One person tweeted about seeing a self-driving vehicle nearly hit another car, while another posted a video showing an autonomous-tech-equipped XC90 breezing through a red light and an active pedestrian crosswalk.
Before the end of the program’s first day, people were clamoring for Uber to explain the incidents and the California Department of Motor Vehicles had sent the ride-hailing company a cease and desist letter for operating without a permit.
First reported by the San Francisco Examiner, the video, taken from a Luxor taxicab dash-cam, shows the autonomous Uber ignoring a red signal at around 10:37 yesterday morning. Later that same morning, freelance writer and producer Annie Gaus tweeted that she “Just passed a ‘self-driving’ Uber that lurched into the intersection on Van Ness [Avenue], on a red, nearly hitting my Lyft.”
Speaking with the Examiner, Gaus explained the Uber vehicle darted out across Union Street on a red light, and nearly hit the Lyft car she was riding in. “It was close enough that both myself and the driver reacted and were like, ‘Shit,’” she said. “It stopped suddenly and stayed like that, as you see in the [tweeted] photo.”
Uber released an official statement that claims the incident in the video was caused by human error and not the technology. “This incident was due to human error. This is why we believe so much in making the roads safer by building self-driving Ubers. This vehicle was not part of the pilot and was not carrying customers. The driver involved has been suspended while we continue to investigate.”
That’s not reassuring, considering the vehicle really looks like one of the Volvo SUVs from the pilot, and the best-case scenario is that the company has had trouble hiring competent motorists. Uber employees are required to keep their hands on the self-driving vehicle’s controls at all times, so even a technical snafu should be caught by operator intervention. That hasn’t kept Uber’s test vehicles in the Pittsburgh trials from having mishaps, however: in September, reports surfaced of self-driving Ford Fusions involved in wrong-way driving and a minor fender-bender.
Uber has been ordered by state regulators to immediately stop using self-driving cars in California. It may be allowed to resume public testing after it secures the state permit that allows companies to operate autonomous vehicles on public roads. The California Department of Motor Vehicles said in a statement that Uber was expected to secure such a permit, but Uber maintained that one was unnecessary because its vehicles are not fully self-driving: they require a driver on board at all times.
“If Uber does not confirm immediately that it will stop its launch and seek a testing permit, the DMV will initiate legal action,” wrote DMV attorney Brian Soublet in a letter addressed to Anthony Levandowski, who runs Uber’s autonomous car programs.
As of now, roughly twenty companies have received the DMV permits needed to test on California roads, including Ford, Google, and Tesla.
[Image: Volvo]

Per TechCrunch, Uber claims that the self-driving car in the red-light-running video was under human control at the time.
They are not too happy with the driver.
Scapegoated.
PEBKAC (Problem Exists Between Keyboard And Chair)
PICNIC: Problem In Chair, Not In Computer!
The human was defective.
I imagine that regardless of whether the driver or the autonomous system was at fault, they will always make this claim.
As soon as the system reaches a very low defect rate, time to fire all the drivers.
“The California Department of Motor Vehicles said in a statement that Uber was expected to secure such a permit, but Uber maintained that one was unnecessary because its vehicles are not fully self-driving: they require a driver on board at all times.”
that sounds like a five-year-old trying to wiggle out of punishment.
Well, that would be about par for Uber’s rationale: better to ask for forgiveness than consent.
Except in Uber’s case, it’s more like “Better to be exploitative, obstructionist douchebags under the guise of ‘disruption’ than to ask for consent”
Yep, and you thought Wall Street bankers were dicks? Meet Uber execs.
Shouldn’t be surprised by Uber doing this. If they don’t like a law, they either leave, as they did in Austin, or just ignore it. Now, you may say that the laws about taxis are anti-competitive nonsense, and I might agree, but I’m not surprised that Uber would just ignore the law.
Anyone have any idea where Uber’s tech came from in the first place? I don’t get how they just decided their tech was good enough for city streets when AFAIK only Google’s little self-driving cars are being used in urban centres at this point.
I’m starting to wonder if Uber is just trolling here.
George Hotz?
HAHA Indeed.
Uber hired a big chunk of the Carnegie Mellon robotics department and has been testing self-driving cars in Pittsburgh for a while:
http://www.nytimes.com/2016/09/11/technology/no-driver-bring-it-on-how-pittsburgh-became-ubers-testing-ground.html
Between CMU and the fact that this area is so f***ed up in terms of infrastructure, climate, and geography, it’s an ideal place to beta-test something like autonomous driving.
Yes: Uber bought Carnegie Mellon University’s top robotics people last year. They offered huge bonuses, double salaries, and no doubt a blank check for tech, and brought in decades of autonomous-driving expertise in one swoop from the best place to get it.
CMU had built an autonomous vehicle as early as 1984. In 1995, two scientists from CMU took a 2,800-mile road trip from Pittsburgh to San Diego, 98% of it autonomously. CMU also won the 2007 DARPA Urban Challenge, which required an autonomous vehicle to navigate an urban environment with no human intervention.
Uber purchased the foremost experts on autonomous driving and handed them a blank check to develop the tech.
aha! Thanks, both of you. I’m even *more* interested now in what they’re up to.
I will be in and around downtown Pittsburgh sometime next week (probably early in the week). I’ll keep an eye out for these self-driving cars, have my cell phone camera ready, and exercise appropriate caution around the junior robot drivers.
Let the fun and games begin.
If a driverless car gets a red-light (camera) ticket, is the auto manufacturer responsible for paying the fine, or the software developer, or the sensor suppliers?
I’m sure there will be new law about that. But under current law it’s the automaker’s product, so the automaker will be liable to the government. Then the automaker might be able to turn around and go after the software/component suppliers.
So, a full employment scheme for lawyers?
You forgot the final option in a typical multiple-choice question: “All of the above.”
The trial lawyers will sue everyone involved, even those only remotely involved, to see what sticks.
Johnny Five was late for work, so what?
Uber’s attitude about laws is interesting. They basically just do whatever they want and dare the government to shut them down.
Like Mr. Trump and his Twitter account handlers?
Yeah I feel sad for our country that they have been allowed to do that without being shut down. It does not set a good precedent for future companies.
How does that differ from the average driver?
The law says it’s illegal to go faster than the posted speed limit. Everyone speeds, intentionally or otherwise.
It doesn’t make it right, but not sure why everyone is so surprised.
This is the problem with current technology – it is NOT perfect. But it works most of the time, so the “driver” who is supposed to take over in the event of problems is lulled into paying far less attention than they should. When an emergency problem develops, their reaction time to perceive the problem and take over will often be MUCH longer than if they were actively controlling the car.
Yep, this. I’ve talked to quite a few people involved in autonomous driving research, and absolutely none of them think level three (autonomous except for emergencies) will ever work. You either need to do it all or you need to do almost none of it.
The number of nines you need for adequate vehicle operation is absurd, and the tech is just nowhere near there yet. Being perfect in 999 out of every 1000 minutes means one screwup per day for every vehicle on the road. Ten vehicles and you have a worst-out-of-ten incident every day. A hundred and you have one every hour.
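The arithmetic above can be sanity-checked with a quick back-of-envelope script. The figures are the commenter's own assumptions (a 1-in-1,000 bad-minute rate and roughly 1,000 operating minutes per vehicle per day), not measured data:

```python
# Back-of-envelope fleet failure arithmetic for the "999 out of every
# 1000 minutes" claim. Assumes a per-minute failure probability of
# 1/1000 and about 1,000 operating minutes per vehicle per day
# (roughly 17 hours), which is what makes "one screwup per day" work.

FAILURES_PER_MINUTE = 1 / 1000
OPERATING_MINUTES_PER_DAY = 1000

def expected_failures_per_day(fleet_size: int) -> float:
    """Expected fleet-wide bad minutes per day."""
    return fleet_size * OPERATING_MINUTES_PER_DAY * FAILURES_PER_MINUTE

print(expected_failures_per_day(1))    # one vehicle: about one per day
print(expected_failures_per_day(10))   # ten vehicles: about ten per day
print(expected_failures_per_day(100))  # a hundred: failures by the hour
```

The point being that per-vehicle reliability multiplies across the fleet, so the number of nines needed grows with every car on the road.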
It’s too damn soon.
“Being perfect in 999 out of every 1000 minutes means one screwup per day for every vehicle on the road.”
What’s your guess on the number of mistakes per person per day?
It needs to be practically perfect in every way :). Jesting aside, the goal is 100% accuracy. Airlines are maybe a good example: their goal is 100% safety and zero loss of life. They cannot tell their customers they feel an adequate goal is 95% safety, which results in X crashes per Y hours of flight, which equals Z loss of life. Whether we have human drivers or robot drivers, the goal is zero loss of life. A reduction compared to today’s numbers (i.e., 30k per year reduced to something greater than zero) isn’t a sufficient goal for autonomous-car technologies. Try explaining to the customer, or their lawyer, that the corporation decided X% loss of life was acceptable and less costly than doing the job right (I think we’ve all seen how that sort of thing plays out).
That makes no sense. Human drivers kill 30,000 people every year in the US (and rising because so many drivers are texting). Let’s say that self-driving cars could cut this to 10,000 deaths per year. You’re telling me that’s not good enough – we can’t field a single self driving vehicle until the risk is cut to zero? That because they are not perfect yet, an additional 20,000 people every year will have to die horrible deaths in twisted wreckage (and countless thousands more maimed and injured)? It will never be zero. Even air travel has a greater than zero risk and that was only achieved through decades of learning from bitter experience during which time the risks were much higher than zero. If you can’t have self driving cars until they are perfect then we will never have them.
“a ‘self-driving’ Uber that lurched into the intersection on Van Ness [Avenue], on a red, nearly hitting my Lyft.”
…it will no doubt be punished severely for failing to finish the job.
Am I the only one who values my own life too much to even consider taking a ride in a driverless car on American major city streets?
I know plenty of folks who don’t trust other drivers, even people they’ve known and loved for decades. Lots of folks will ride with no other driver.
I’d tend to think someone willing to ride with No Driver At All might have a death wish.
There’s a driver. In fact, there’s both a driver AND a technician in the car. I’d wager they are more hyper-aware of what’s going on than just about any other driver on the road at this point.
@ orenwolf–in this instance, yes. But eventually, no. I’m skeptical, and a little afraid.
;o]
Yet they still run a red?
To be fair, according to Uber the car that ran the red was being driven by a human and not part of the pilot program, so who knows?
Uber isn’t a company I would take at their word.
Commendable adoption of the vernacular by the freelance writer:
“…both myself and the driver reacted and were like, ‘Shit,’” she said.
Inspiring story-telling, that.
The last time I rode in a taxi, the driver decided to make a left turn across six lanes, directly in front of the only other car at the intersection. That’d be part of the reason I haven’t ridden in a taxi in several years, but self-driving cars have a pretty low bar to clear.
Human beings are terrible drivers; we kill 30,000 people every year in the US with our cars.
Even if you’re the best driver who ever lived you’re still trusting everybody else just by sharing the road with them.
Autonomous cars don’t have to be perfect, they just have to be better than people.
The general public used to be afraid to ride escalators, too. We got over it.
Uber routinely flips the metaphorical bird at government agencies who are trying to do their jobs of protecting public safety. The level of arrogance at the top of many technology companies is something one has to experience to believe.
So, they behave just like regular Volvos?
“This incident was due to human error. This is why we believe so much in making the roads safer by building self-driving Ubers”
Yeahhh, right. You’re building self-driving Ubers because you don’t want to pay the drivers. Uber’s losses will exceed $1 billion in 2016. They are hoping and praying they can get a totally self-driving car operational before they go bankrupt.
The beauty of it is, if a driver is always required to be in the car at this stage, they can always blame it on the driver. Good luck if you’re an Uber driver in an autonomous vehicle and the autobot goes berserk and starts running over pedestrians. They will blame the driver, not the machine.
“Good luck if you’re an Uber driver in an autonomous vehicle and the autobot goes berserk and starts running over pedestrians. They will blame the driver not the machine.”
Telemetry will prove otherwise. That won’t work.
I agree with the rest of your comment though!
That assumes the telemetry data is publicly available.
Tesla is happy to voluntarily share telemetry data when it proves the driver is at fault. My guess is that at other times they choose not to share it.
Short of legal action to force the data to be disclosed, we’ll be none the wiser.
In this particular case the car went through a red light. If prosecuted, the driver will need to pay a fine; worst-case scenario, the driver loses his license due to his driving record. No telemetry data is revealed, or required, to successfully prosecute the driver for running the red light.
When someone is injured/dies at the hands of one of these cars, then and only then will telemetry data be legally obtained. I would guess Uber would fight to have the evidence dismissed as inadmissible for any technicality they can find, unless of course it proves their innocence.
What are you talking about? Were you not here when the Tesla was opened like a tin can by a tractor-trailer? Tesla told everyone the car was in auto mode at the time and failed to see the truck.
What makes you think that in an accident prosecutors wouldn’t subpoena records they know Tesla has, through Tesla’s own admission? I’d be more worried about *other* manufacturers personally, who have never shown the full breadth of what telemetry their cars collect, how it’s collected, or whether it is transmitted back to them.
The solution is to have the infrastructure do the driving and not the vehicle. Farming out the self-driving tech and enclosing it in each vehicle is idiotic. Every road should be networked, and the vehicles driven and directed by a central control station (with redundancy) that maintains and maps the location of each vehicle, akin to a central air traffic controller. Having a centralized network do it, as opposed to millions of closed networks, is a much more reliable way to accomplish “self-driving”.
You reeeeeeally don’t want a single point to hack. You want to firewall every node so ISIS doesn’t get to play a point and click adventure game in New York.
Plus, you want all of your nodes to be totally self-sufficient if they lose connection, such as in a snowstorm, lightning storm, etc.
Finally, there’s so much that HAS to be done locally that it makes sense to do almost all of it locally. The sensor processing that localizes you to within a centimeter has to be local; remote processing is WAY too laggy. We will probably not see the capability to offload that computational load within our lifetime.
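A rough latency-budget calculation illustrates the lag point. The 10 ms local-loop and 100 ms network round-trip figures below are illustrative assumptions, not measurements from any real system:

```python
# How far a car travels before a control decision can take effect,
# given some processing/communication latency. Figures are illustrative.

HIGHWAY_SPEED_MPS = 30.0  # roughly 108 km/h

def blind_distance_m(latency_s: float, speed_mps: float = HIGHWAY_SPEED_MPS) -> float:
    """Meters traveled while waiting out the latency."""
    return speed_mps * latency_s

# Assumed local perception loop, ~10 ms per cycle:
print(blind_distance_m(0.010))  # well under a meter
# Assumed cellular round trip to a central controller, ~100 ms:
print(blind_distance_m(0.100))  # several meters driven blind
```

Even a modest network round trip costs an order of magnitude more "blind" distance than an on-board loop, which is the core argument for keeping perception and control in the vehicle.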
I respectfully disagree. A single point of failure in many complex systems represents an order not getting shipped or something mostly harmless. Here, people could die. That’s why I think the model applied to air traffic control works rather well. I don’t want a thousand points of failure when I could have one.
The problem with applying the air traffic control model to cars is scale.
With a few thousand airports you can control traffic into and out of the airports and their associated airspace.
When you have millions of roads and vehicles things get very difficult to centralize and keep control. The compute power, the cost of redundancy and security on this scale would be astronomical.
Some things are best distributed.
MazdaThreeve writes: ‘Having a centralized network do it, as opposed to millions of closed networks, is a much more reliable way to accomplish “self-driving”.’
It might be more efficient in terms of overall traffic management, but by the principle of “one grenade gets ’em all” it’s most unlikely to be more reliable, even with theoretical redundancy. A highly distributed processing system is vastly more fault tolerant than centralized control, as well as more flexible.
Exactly. One hack and vehicular mayhem ensues on a mass scale. Not to mention that centralized most likely means under government control, and we all know how that will likely end up.
Uber and others like them (Google) better get it together.
There will be an accident with fatalities if not. Once a human does become a statistic by being the victim of a driverless vehicle of this type, the charade of self-driving cars and even driver-assisted vehicles will come to an end and it will be 20 years before it is tried again.
And Uber will be sued out of existence. I say good riddance anyway.
These silicon valley maniacs are going to kill more people before this is over.
You wouldn’t see this from one of the big auto OEMs, despite their having had much more functional autonomous technology for some time. Ford and GM know well enough the storm that would come if they hurt someone with a half-baked “beta test.”
How in the world is Volvo okay with Uber pulling this? Volvo’s whole marketing tagline is safety; sending out guided missiles isn’t like them. The LAST thing they want is the headline “AUTONOMOUS VOLVO FLIES THROUGH RED LIGHT.”
” despite them having had much more functional autonomous technology for some time.”
[citation needed]
http://www.detroitnews.com/story/business/autos/detroit-auto-show/2016/01/11/ford-fusion-driverless-snow/78612944/
First link I found on Google about the snow runs Ford did early this year. (This is why they wanna convert the Willow Run plant to an autonomous test course, FYI: terribad Michigan weather.) Ford, GM, Mercedes, etc. have some pretty serious automation groups with test fleets that dwarf what Uber has.
EDIT: plus GM’s whole autonomous Volt project, which has them scurrying around the Warren tech center as taxis.
Uber, maybe. I haven’t looked into what sort of work CMU had been doing pre-acquisition. You said *silicon valley* though, and I question whether anyone has the simulation work and real-world testing that Google has, and certainly no one has more real-world telemetry data now than Tesla, given that basically running autopilot = data collection for them.
Uber shouldn’t have named the software “Roxanne.” Hubris, that was.
You may find this interesting: I ran across an article from someone who got to ride in one of the Uber cars before the program was shut down:
https://medium.com/@caren/robot-take-the-wheel-c8a20fa5cbd6#.s5utgc7li
Anybody know how many miles this Uber system has racked up? It sounds wildly safer than my father. I’ve *seen* him drive through a red light, and I take maybe 10 short hops at most a year with him at the wheel. Last year he went through another and hit someone.
Self-driving cars can’t come soon enough for dear old dad.
Screw Uber! Yep, screw ’em.
I’m interested to see what will happen…I’ve never used Uber myself though, so I have no opinion on it.