Uber Crash Shows Human Traits in Self-Driving Software – Bloomberg (Mar 29, 2017)

I think the headline here would more appropriately read "Uber crash shows bystanders see human traits in self-driving software," because there's no evidence that the car actually gunned it through a light as it turned yellow in this case; that's simply how witnesses perceived it. (I joked on Twitter earlier that perhaps Uber's autonomy software had been taught the company's values, one of which is "always be hustlin'".) The reality is that self-driving cars are often taught to emulate human behavior rather than to drive in some idealized, perfect way, because that's what makes human passengers feel comfortable and ultimately trust the technology. But I very much doubt that Uber's cars are taught to accelerate through lights that are in the process of turning yellow or red. It appears that police concluded Uber's technology was not at fault in this crash, and after a brief break over the weekend, its cars are back on the roads in the various cities where they're operating. But given Uber's failure rates relative to Waymo's, and the fact that Uber's cars are carrying paying customers, there's certainly potential for a lot more crashes, some of which will actually be Uber's fault.

via Bloomberg


