
Human driving behaviors are also based on a bunch of heuristics and assumptions that do not hold in all road conditions.

It's good to expect autonomous/semi-autonomous systems to improve on humans, but it's not good to make zero edge cases the bar that must be cleared before allowing this.

It's likely that even a system significantly improved over humans would have new fatal edge cases while also sharing some existing ones. What's important is that the systems are not overall worse than humans alone and that they continuously improve.



The thing about humans is we're relatively good at understanding our own heuristics and rating the confidence of those beliefs. If there's a storm, we reduce distractions, pay more attention to the road, and drive more conservatively. Other passengers understand not to distract the driver.

But if the car is confused by some occasional paint splotches on the road that clearly aren't lane markings to our human eyes, we have no way of knowing that the car is being misled. Like that video the other day of the Tesla clipping the barrier: up until about half a second before impact, I would have assumed it knew what it was doing and was on track to avoid it.

It's unrealistic to expect a human driver to take over immediately in failure scenarios that the person can't recognize.


I think there's lots of work to be done in quantifying and comparing discrepancies and lack of confidence in neural net classifications and sensor/model outputs. Part of this is human interface design.
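
As a toy sketch of the kind of confidence metric I mean (not any vendor's actual stack; the function names, classes, and thresholds here are made up for illustration), normalized entropy or the top-two margin over a classifier's outputs could drive a "low confidence" signal:

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over the classifier's raw outputs.
        z = logits - np.max(logits)
        e = np.exp(z)
        return e / e.sum()

    def confidence_report(logits):
        # Two simple uncertainty measures: normalized entropy
        # (0 = certain, 1 = maximally uncertain) and the probability
        # margin between the top two classes.
        p = softmax(np.asarray(logits, dtype=float))
        entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))
        top_two = np.sort(p)[-2:]
        margin = top_two[1] - top_two[0]
        return {"entropy": entropy, "margin": margin}

    # Hypothetical lane-marking classifier output:
    # [lane line, paint splotch, shadow]
    print(confidence_report([2.1, 1.9, -0.5]))  # small margin -> flag low confidence
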

An example of this is the "sensor disagree" light on the Boeing 737 MAX (which unfortunately was optional equipment not installed on the Ethiopian Airlines 737 MAX that crashed), although I do believe we can do much better than a mere light. A full screen and a sound system, plus over-the-air updates, mean we ought to be able to field a really good (and continuously improving) system for communicating when the autopilot's sensors disagree or the model has low confidence.
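
Purely as an illustration of going beyond a single lamp (everything here, the SensorReading type, the thresholds, the alert names, is invented for the sketch), a disagreement monitor over redundant sensors could escalate through distinct alert levels:

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        name: str
        value: float     # e.g. angle of attack or lane offset, consistent units
        variance: float  # sensor's own noise estimate

    def disagreement_alert(a: SensorReading, b: SensorReading,
                           sigma_limit: float = 3.0) -> str:
        # Flag when two redundant sensors differ by more than sigma_limit
        # combined standard deviations; escalate instead of just lighting a lamp.
        combined_sigma = (a.variance + b.variance) ** 0.5
        gap = abs(a.value - b.value)
        if gap > 2 * sigma_limit * combined_sigma:
            return "alarm"    # audible warning, request driver/pilot takeover
        if gap > sigma_limit * combined_sigma:
            return "caution"  # on-screen banner: sensors disagree
        return "ok"

    left = SensorReading("aoa_left", 4.8, 0.04)
    right = SensorReading("aoa_right", 20.1, 0.04)
    print(disagreement_alert(left, right))  # "alarm"
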

This is likely going to take years or even decades of operational use over millions of vehicles to refine with good confidence.


> What's important is that the systems are not overall worse than humans alone and that they continuously improve.

I work in the industry. These systems are currently far worse than humans and the edge cases are numerous. We have so much brain power and so many years of experience that the heuristics an L2+ system uses to make decisions are not really comparable; they are far more simplistic and much worse. We can handle new situations; these cars cannot. The fact that you are making this comparison between human heuristics and the vehicle's system just highlights that people don't understand the issues.

Mix that in with the amount of overpromising and overselling that Tesla is doing with respect to the abilities of these systems, and people will be overconfident in them, grow complacent, and end up in accidents.

Tesla's approach to development, release, and marketing is not a good way to build safety-critical driving systems. Overhyping them to people who don't understand the nuance is my biggest concern.



