
Probably because it's 40 pages long.

There's an even better summary of the entire research in this Twitter thread: https://twitter.com/campuscodi/status/1112064046369591296

The research is pretty smart, but highly impractical for real-world attacks.

It's also been patched since 2017, with additional patches in 2018. So, it's also impractical because it doesn't work anymore.



I guess it's a bit ambiguous, but I think the thing he says was "patched way back in 2017, and again in 2018" is not the hack that uses adversarial dots on the road to fool the system, but the other hack, where they inject malicious code into the autodrive system over Wi-Fi and then the CAN bus, and then take over control of the car from a cellphone.

So maybe the lane marking thing is still unfixed?

I'm also a bit amused by the suggestion that we should feel even safer because it was fixed twice! :)


Rationally, shouldn’t we feel safer if it’s patched twice? It seems irrational to prefer a state where it was patched once over one where it was patched twice. It would seem to indicate that they dedicated time and resources to resolve it once and then did proper follow-up to catch missed cases. It certainly points to it being a pernicious issue, but I can’t see why you’d presume it’s worse that it’s been patched more.


Depends on why they patched it twice.

On the ”we should be worried” end of the scale, we have

”we fixed a bug but didn’t add a test, then accidentally reverted the bug fix, had to fix it again, and still didn’t improve our process to prevent this from happening again”

In the exaggerated version, they don't use source control, so they had to figure out how to fix the bug twice, and they never added a test either, so in both cases they only believed they had fixed it.
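A minimal, entirely hypothetical sketch of the missing piece (Python, pytest-style; the function and the numbers are invented for illustration, not taken from any real codebase):

    # Hypothetical illustration: a regression test pins the fixed behaviour,
    # so an accidental revert of the patch fails in CI instead of shipping.
    def clamp_steering_angle(requested_deg, limit_deg):
        # The (imaginary) bug passed requested_deg through unchecked; the fix clamps it.
        return max(-limit_deg, min(requested_deg, limit_deg))

    def test_steering_angle_is_clamped():
        # Fails again the moment someone reverts the fix.
        assert clamp_steering_angle(45.0, 10.0) == 10.0
        assert clamp_steering_angle(-45.0, 10.0) == -10.0

With a test like that pinned in CI, an accidental revert shows up as a red build rather than as a shipped regression.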

Going back to your example, it depends on how they detected that they missed cases, and on how sure they are that they know them all now. If (100% hypothetically) they only detected it after ten people died, denied the existence of the problem, and were sued in court, we should be worried.

In the real world, just like all other companies working on autonomous driving, I think they will never be able to say they have truly fixed the problem. The problem is, and always will be, underspecified, and even if we knew the problem well, we don’t really know for which kinds of problems the software used to solve it is a reliable solution.

I would be a bit more worried about Tesla than, say, about German car manufacturers, because Tesla has shown itself willing to sail close to cliffs. Worried enough to be really worried? No.


"This time we fixed it for the last time"


Because the fact that it was patched twice means they didn't fix it the first time, which means they didn't test it properly and might not have fixed it the second time.

It's entirely logical.


I’m not sure the number of patches actually provides any definitive information about the current state, from a purely logical standpoint.

Zero patches, one, two, or more. That there are patches merely tells you that they tried to respond to an identified threat. It doesn’t tell you the threat was properly mitigated in any case.

I think we can say it is strictly better to have more than zero patches in the case where a vulnerability is public. Exactly one patch to me doesn’t guarantee anything about the quality. Multiple patches at least implies persistent effort and dedication to securing the fix. Maybe at some point the number of patches implies incompetence but I’m not sure I’d draw that line at “2”?


Not really; a hotfix is a do-no-harm situation where an initially conservative approach that’s unlikely to cause harm may be deployed. That’s true even if it’s known not to completely solve the problem.

Now, in the case of a vulnerability that hasn’t been disclosed, waiting may be preferred, since a patch is going to get people to look in that direction. However, with public disclosure the clock is ticking.


While I agree about the first and third issues, he's rather dismissive about the second one: "The attack relies on placing dots on the road to trick the car. Lots of them. I wouldn't worry about it." The demo shows that 3 small dots can trigger a lane change.

Hypothetically, it could be a very targeted attack if the attacker drives in front of the target to put the paint on the road, and then a car behind the target covers it up. If the cover-up looks just like normal road (no idea how easy that is to achieve), and if nobody is looking for it because it's not a common thing to do, it might never be found. There are probably much better attacks that one could come up with if one gives it more than two minutes' thought, but it seems realistic enough to me to be interesting. Or if not a targeted attack, it can be very disruptive once there is some adoption.
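To make the "a few small dots" point concrete, here is a toy sketch in Python. It is not the paper's attack and not Tesla's perception stack, just a made-up linear scorer, but it shows why a greedy search over dot placements can flip a detector's output with very few marks:

    # Toy sketch only: a made-up linear "lane detector", not the real system.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(32, 32))       # stand-in for learned detector weights
    img = np.zeros((32, 32))            # blank patch of road

    def lane_score(x):
        # score above 8.0 means "lane boundary here" (arbitrary threshold)
        return float(np.sum(W * x))

    dots = []
    while lane_score(img) <= 8.0 and len(dots) < 10:
        # greedily place the next dot on the unused pixel that raises the score most
        y, x = np.unravel_index(np.argmax(np.where(img == 0, W, -np.inf)), W.shape)
        img[y, x] = 1.0                 # one small white dot
        dots.append((int(y), int(x)))

    print(f"{len(dots)} dot(s) flipped the toy detector, score = {lane_score(img):.2f}")

Real perception stacks are far more complex, of course; the toy only illustrates the general adversarial-example intuition that a few well-placed inputs can push a model across a decision boundary.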

Then again, people could already be disruptive and leave stuff on the road or rails, and experience tells us it's very rare (per driven person-kilometer at least). Something about tricking the computer seems more likely to happen than putting physical objects to harm others directly, but that's probably just between my ears.


"and then a car behind the target covers it up. If the cover-up looks just like normal road (no idea how easy that is to achieve)"

Powdered chalk would probably make that trivial. https://cdn-tp3.mozu.com/24645-37138/cms/37138/files/30c19d5...


Or drones projecting the dots with lasers or powerful collimated LEDs: no traces, no smoking gun, no sign of foul play. That would be scary, at the very least.


Sunbleached chewing gum should do it?


Exactly. I wouldn't even have attempted to read the 40-page paper. Twitter is more eye-catching.


I would agree if Twitter had a usable interface.




