Previously I worked at Waymo for a year on the perception module of the self-driving car. Based on what I know about the state of the art in computer vision, I can pretty much guarantee that current Tesla cars will never be autonomous. This is probably a huge risk for Tesla's investors, because Tesla is currently selling a full-autonomy option on its cars that will never happen with the current hardware. We need several breakthroughs in computer vision for no-fail image-based object detection, plus higher-resolution cameras and much, much more compute power to process all the images. It's hard to estimate when we will reach that level of advancement in computer vision; my most optimistic wild guess is 10-20 years. And then Tesla will need to upgrade all those cameras and install a dedicated TPU that is at least 10x faster (probably more) than the nVidia chip they have installed in their cars, and they will have to do it for zero dollars because they have already sold the option. It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.
Exactly. Claiming to be close to solving production problems that haven't been solved in robotics research even in nice controlled environments is nuts. Either the people at Tesla are so smart that they are making many huge breakthroughs in the technology (hint: they're not), or they are exaggerating and lying to sell cars, despite the dangers of selling systems based on heuristics that could cost lives.
I was driving the other day and stopped behind a stop sign at a 4-way intersection. A police car was already stopped at the same intersection, to my left. Since he had the right of way I waited for him to proceed. But after a few seconds without moving he flashed his high beams, which I understood to mean that he was waiting for something and was yielding to me. Now that's not a standard signal for yielding in the California Vehicle Code but most humans can figure it out.
These are the kinds of odd little situations that come up all the time in real-world driving. I can't understand how anyone would expect level 4+ autonomous driving to work on a widespread basis without some tremendous breakthroughs in AGI.
Do you really need AGI to understand that some drivers don't respect the right-of-way rules at 4-way intersections? And even if you don't detect the high beams flashing, do you need AGI to know that you shouldn't lock yourself out of the intersection purely based on your place in the queue?
Regardless of the possible solutions for that one particular situation, my point was that odd unpredictable situations come up all the time in real world driving. We can't hope to code rules in advance for every possible situation. In the general case is it possible to handle unexpected situations without true AGI? That remains an unanswered question.
I like the analogy of the internet. In order to have the internet, we have packet switching, retransmissions, flow control with exponential backoff, distributed spanning tree algorithms, etc. If we accept what the AI proponents say, the internet is a jumbled mess of conflicting hand-crafted rules that has no chance of ever working. And yet here we are on the internet, and we don't even need a Go or chess solver running in each router.
You're making a strawman argument and I have no idea which AI proponents you're referring to. Internet routers mostly rely on deterministic non-conflicting rules with little or no AI involved. It generally works fine, although there are multiple major failures every year.
How much large scale network engineering have you done?
>Internet routers mostly rely on deterministic non-conflicting rules
Exponential backoff (which the previous poster mentioned) is a randomized, by-definition non-deterministic algorithm for resolving conflicting usage. It works very well, and is very simple. But deterministic and non-conflicting are really not qualities that the IP protocol is known for.
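For concreteness, here's a minimal sketch of randomized (jittered) exponential backoff; the delay constants are purely illustrative, not what any particular NIC, router, or TCP stack actually uses:

```python
import random
import time

def send_with_backoff(attempt_send, max_retries=8, base_delay=0.05, cap=5.0):
    """Retry an operation with randomized exponential backoff.

    attempt_send: a callable returning True on success, False on collision/failure.
    The constants here are illustrative only.
    """
    for attempt in range(max_retries):
        if attempt_send():
            return True
        # Double the backoff window each time (up to a cap), then sleep for a
        # random point inside the window so colliding senders desynchronize.
        window = min(cap, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, window))
    return False
```

The randomness is exactly the point: two senders that collided once are unlikely to collide again on the retry, even though no individual retry is deterministic.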
Yes, they have a fleet... that also fails in a number of ambiguous situations that average human drivers handle easily. I've observed failures in more than half my trips through Mountain View.
Two failure cases happen repeatedly. First, it's indecisive on lane changes when there are vehicles in its target lane for a long time: if it cannot merge over safely due to traffic or rudeness, it will stop in its lane until a gap appears -- the concept of proceeding to the next left and making a U-turn seems incomprehensible. Second, in certain right-turn situations on a red light, it will never turn if there is traffic in the far lanes, even if the nearest lane has a generous opening. I see this all the time on the right turn from eastbound Central onto southbound Castro St., for example.
>Second, in certain right-turn situations on a red light, it will never turn if there is traffic in the far lanes, even if the nearest lane has a generous opening. I see this all the time on the right turn from eastbound Central onto southbound Castro St., for example.
To be fair, even I do that sometimes if I just don't trust oncoming traffic not to do something like changing lanes in the intersection or at the last second. To your point, though, it's all dependent upon the intersection, number of lanes, etc., and I'm not familiar with the intersection in question.
A super simple solution is to dial in a remote human operator whenever something 'odd' happens.
'Odd' events like that are probably only 5 seconds out of every hour of driving, so in aggregate one operator could handle 720 cars. At that point, the operator is far cheaper than further work on the AI.
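Back-of-the-envelope version of that claim (the 5-seconds-per-hour figure above is a guess, and this ignores handover latency and overlapping requests):

```python
odd_seconds_per_hour = 5        # assumed time per hour a car needs a human (not measured data)
seconds_per_hour = 3600

cars_per_operator = seconds_per_hour / odd_seconds_per_hour
print(cars_per_operator)        # 720.0 cars per operator, in the idealized case
```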
Side note: The police officer may have thought that you both arrived at the same time, in which case you would have right-of-way, since the right-most driver has priority at a four way stop (if one wasn’t clearly at the intersection first).
(If four cars arrive at the same time ... you just wing it, I guess.)
Never mind then. I've been in similarly frustrating situations, where I try to accommodate someone while driving and they just don't even recognize it:
- Someone will tailgate me like they want to pass ... but I'm safely in the right lane, and they can easily pass in one of the other two lanes.
- They want to drive a lot faster than me in a residential area, so I pull to the side and wave them to pass, but they just stop there.
These situations can result in the vehicle notifying the human driver, or, for fully driverless operation, a remote operator taking control of the vehicle (at greater expense, of course).
Level 3 autonomy such as you describe is generally considered unsafe because the human driver may not have been paying attention and lacks the context to safely take control.
Remote operation is a total non-starter. Our existing cellular networks lack the bandwidth and redundancy for safety critical applications. What happens if the local tower is down because the cooling system failed or a construction crew accidentally cut the backhaul fiber?
You're describing an anecdotal and seemingly trivial corner case, which may very well have been solved by these secretive companies already (wish we saw more data). It's most likely the scenarios we haven't thought of, the truly unique/very hard scenarios, the ones humans would fail at as well: those are the truly hard edge cases. Not 2 cars sitting stopped at stop signs... are you seriously claiming 2 stopped cars requires AGI?
[Tesla would need to] ... install a dedicated TPU that is at least 10x faster (probably more) than the nVidia chip they have installed in their cars
That is literally what Tesla have announced they are doing next. I've got no opinion on your other points, but if you've missed that news, you're not really following Tesla very closely. It's a drop-in replacement chipset designed by Jim Keller.
Elon Musk takes huge ambitious gambles. In this case the gamble is that good AI and the huge amount of training data from the Tesla fleet will win out over lidar. It is a gamble. But if you look at other gambles SpaceX and Tesla have taken, the context changes a bit. Everyone talks about innovation, but innovating is actually a gamble.
As part of that risk, I'm sure Tesla would far rather burn the people who paid extra for some technology that wasn't ready yet than release something that is sub-par at a minimum and dangerous at worst.
Killing a bunch of people or jamming up traffic is a quick way to going out of business in the car industry, so there's far more incentives to wait for the real thing. It's not like an optional $2-3k add-on (to a $40-60k product) that turns out to be useless would be a game changer. People will just not trust them to pre-invest and wait to buy it when it's available.
> I'm sure Tesla would far rather burn the people who paid extra for some technology that wasn't ready yet than release something that is sub-par at a minimum and dangerous at worst.
Uhh... I love Tesla, but the current Autopilot that they have released is definitely not what I'd call "public-ready".[1] And they agree, which is why it's locked behind clickwrap disclaimers and warnings and described as a "beta".
Don't get me wrong, I want one. But I wouldn't want my mum to use Autopilot.
[1] It can't see stopped cars on the highway (!) and gets confused by freeway exits, sometimes driving directly at the gore point. Worse, this second issue was fixed and is now happening again, indicating a critical lack of regression testing. (!!)
Speaking from a code archeology ;) perspective, there's another explanation: the previous fix has addressed the proximate problem, but also unearthed a deeper problem, which may or may not be fixable.
I don't think it's that he saw LiDAR as technically inferior, just expensive. They needed an approach that they could put in place with minimal unit cost.
It's not that it has to be no-fail; it's that the failures need to be a subset of the failures that humans make.
If the failures are in places that humans can easily and reliably handle (which is the case now), then people won't trust these systems. If the failures are created by the software not being able to handle basic driving tasks and wouldn't normally happen with a person driving, then this is a huge problem with the system. A system that repeatedly drives at a lane divider is not something people should trust.
Yes! I think that’s the right way to measure whether you’ve solved self-driving cars: if it has similar (possibly worse) failure rates to humans in each environment where humans are expected to operate.
Example: say an SDC never had collisions except when a cop is directing traffic, and in that case it floors it full speed to the cop. I would not consider that to have solved self driving cars, even though dealing with a cop directing traffic is so rare that its accident rate is lower.
Humans kill over 100 people per day in traffic accidents in the US alone - I don’t think no-fail (0 deaths) is a reasonable requirement, just being safer than humans is enough.
I do agree with the OP's skepticism though - full autonomy is 10 years away.
> Humans kill over 100 people per day in traffic accidents in the US alone - I don’t think no-fail (0 deaths) is a reasonable requirement, just being safer than humans is enough.
This is a fallacy. People don't just look at safety statistics. The actual manner/setting in which something can kill you matters a ton too. There's a huge difference between a computer that crashes in even the most benign situations and one that only crashes in truly difficult situations, even if the first one's crashes aren't any more frequent than the second's.
Hypothetical example: you've stopped behind a red light and your car randomly starts moving, and you get hit by another car. Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason. Would you be fine with this car? I don't think most people would even come close to finding this acceptable. They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.
P.S. People are also more forgiving of each other than of computers. E.g. a 1-second reaction time can be pretty acceptable for a human, but if the computer I'd trusted with my life had that kind of reaction time it would get defenestrated pretty quickly.
Let's say there are two cars with two drivers. The first, with a human driver, is deadly under traditional human scenarios-- the driver could be drunk, texting or eating or distracted. They could make a slow decision, be looking the wrong way, stomp on the wrong pedal, etc.
The second driver is a machine. It sees all around it at all times and never gets drunk. But when it fails, it does so in a way that to a human looks incredibly stupid -- a way that is unthinkable for a human to screw up, making an obviously bad decision that would be trivial for a human to avoid.
Now let's say that statistically it can be proven that the first type of driver (human) is ten times more likely to be at fault for killing their passengers than the second in real-world driving. So if you die, and it's the machine's fault, it will be in an easily avoidable and probably embarrassingly stupid way. But it's far less likely to happen.
Here's the thing--they hand out drivers licenses to anyone. You need virtually no skills whatsoever to pass a road test here in the States.
If I asked everyone on HN what steers the rear of a motor vehicle, I'd guess only 10% would guess correctly, and we're talking about some of the smartest, most well-read people on the planet here. If I asked everyone on HN how many times they have practiced stopping their car from high speed in the shortest time possible, and whether they were competent braking at high speed, I'd round that guess to virtually zero. Let's talk about the wet: can you control the car when it fishtails? Can you stop quickly in the rain? No and no.
You simply cannot be competent in a life-and-death situation without training, nor without a basic understanding of vehicle dynamics. You just can't. Now I'm not saying everyone must be able to toss the car into a corner, get it sideways and clip the apex while whistling a happy tune; but for god's sake, can we at least mandate 2 days of training at a closed course with an instructor who has a clue about how to drive? That would absolutely save lives... lots and lots of lives.
Which brings me to my favorite feature of some of these "self driving" cars: all of a sudden, with no warning whatsoever, the computer says hey, I'm fucked here -- you have to drive now and save us all from certain disaster. I probably could not do that, and I sure as hell can toss a street car sideways into a corner while whistling a happy tune.
> If I asked everyone on HN what steers the rear of a motor vehicle
What does this even mean? I'm guessing you're going for 'the throttle' but it's a pretty ambiguous question.
Totally agree on advanced driver training though. If you don't know the limits of your vehicle then you shouldn't be driving it.
As for the last point, I think we need to ditch the "level X" designations and describe automated vehicles in terms of time that they can operate autonomously without human intervention. A normal car is rated maybe 0.5s. Autopilot would be 0.5s - 1s. Waymo would be much more depending on how rarely they need a nudge from the remote operators.
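To sketch what that rating might look like in practice (all numbers and names below are made up for illustration, not any real logging format):

```python
def mean_autonomy_interval(disengagement_times_s, total_drive_time_s):
    """Average autonomous stretch between human interventions, in seconds.

    disengagement_times_s: hypothetical timestamps (seconds into the drive) at
    which a human had to take over; total_drive_time_s: total time driven.
    """
    segments = len(disengagement_times_s) + 1   # stretches separated by takeovers
    return total_drive_time_s / segments

# Illustrative only: 3 takeovers during a 2-hour drive -> ~1800 s per autonomous stretch.
print(mean_autonomy_interval([900, 3100, 5200], 7200))
```

By a metric like this, a plain car and today's driver-assist systems cluster near zero while a system needing only rare remote nudges scores orders of magnitude higher, which is arguably more informative than a level number.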
> What does this even mean? I'm guessing you're going for 'the throttle' but it's a pretty ambiguous question.
I am going for the throttle (you know, to stop the car from rotating too much after I threw it into that corner), and yes, you are correct: it (the throttle) does "steer" the rear of the car. Plus 1 btw.
Ambiguous... maybe. Anyway; see you on the wet skidpad ;) .
The answer to this question really depends on whether your vehicle is FWD or RWD. It sounds like you have a RWD car and the people not answering it "correctly" don't.
> Other possible interpretations that I thought of (for the record):
I almost forgot how difficult it can be to explain the nuances of vehicle dynamics clearly and succinctly.
Let's start by using the classic Grand Prix cornering technique (rear-wheel-drive/rear-engine car). We brake in a straight line, and the weight transfers forward so that now the front tires have more grip than the rear tires (as a rule of thumb, the more weight a tire has on it, the more it grips, because it is being "pushed" down onto the road; you can of course "overwhelm" the tire by putting too much weight on it, causing it to start to lose adhesion). As we get to the turn-in area of the corner we (gently) release the brake and we (gently) apply throttle to move some of the weight of the car back towards the rear tires (if we didn't do this, the back of the car would still have almost no grip and we would spin as soon as we initiated steering input to turn into the corner).
Now we are into the first 1/3 of the turn, and approaching the apex -- we have all the necessary steering lock to make the corner, that is to say we will not move the steering wheel anymore until it is time to unwind it in the final third of our turn (also, we are on even throttle; we cannot accelerate until we are at the apex). So here we are -- the front and rear slip angles are virtually equal, but we want to increase our rate of turn because we see we will not perfectly clip the apex... we breathe (lift a bit) off the throttle, but keep the steering locked at the same angle, and the car turns (a bit) more sharply. We have actually just steered the rear of the car with the throttle; yes, we have affected the front tires' slip angles as well, but if we viewed this from above we would see we have rotated the car on its own axis.
This works, to varying degrees, in every layout of vehicle -- FWD, AWD, RWD. Technique and timing are critical, as are the speed, gearing, road camber, and so on. The fact remains, though, that the throttle steers the rear of the car.
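For anyone who wants the rough physics behind "breathing off the throttle": the small deceleration transfers load forward, unloading the rear tires and (to first order) reducing rear grip, which lets the car rotate a little more at the same steering angle. A minimal sketch with purely illustrative numbers:

```python
def rear_axle_load_transfer(mass_kg, decel_ms2, cg_height_m, wheelbase_m):
    """Vertical load shifted off the rear axle under deceleration,
    using the classic rigid-body weight-transfer formula:
    delta_Fz = m * a_x * h_cg / L."""
    return mass_kg * decel_ms2 * cg_height_m / wheelbase_m

# Illustrative: 1400 kg car, a 0.2 g throttle lift, 0.45 m CG height, 2.6 m wheelbase.
print(rear_axle_load_transfer(1400, 0.2 * 9.81, 0.45, 2.6))  # ~475 N off the rear tires
```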
"If I asked everyone on HN what steers the rear of a motor vehicle; I'd guess only 10% would guess correctly, and we're talking about some of the smartest most well read people on the Planet here."
In this hypothetical, I'd rather ride in a car driven by the human--I can see if my driver is drunk, texting, or otherwise distracted and yell at him or demand to get out of the car.
With the computer, I'm just completely in the dark and then maybe I die in some stupid way.
If those were the only options, we could choose the second.
But you have the option of assistive technologies. Have the human drive and the machine supervise. The mistakes made when the human falls asleep can be prevented. The mistakes that the machine makes, well, those won't be made at all, as the human is the active driver.
I didn't want to cloud the question with this point, but data from a machine driver's mistake can be used to train every other machine driver and make it better. While much can still be learned from the mistake made by a human driver, the error is not as likely to be minimized across the 'fleet' in the same way as it is for a machine driver, if that makes sense.
Also, it's probably important to keep in mind -- if my understanding is correct -- that companies like Tesla are only using neural nets for situational awareness: to understand the car's position in space and in relation to the identified objects, paths, and obstacles around it. The actual logic related to the car's operation/behavior within that space is via traditional programming. So it's not quite a black box in terms of understanding why the car decided to act in a particular way -- it may be that something in the world was miscategorized or otherwise sensed incorrectly, which could be addressed (via better training/validation, etc.). Or it could be that it understood the world around it but took the wrong action, which could also be addressed (via traditional programming).
If I'm wrong about that, I'm sure someone will chime in. (please do!)
>While much can still be learned from the mistake made by a human driver, the error is not as likely to be minimized across the 'fleet' in the same way as it is for a machine driver, if that makes sense.
This is some serious whitewashing here. It's not "unlikely", it's simply not going to happen at all. People have been killing innocents by drunk driving for decades now, so they obviously still haven't learned. They continually make the same mistakes, over and over. No, human drivers do not learn at all from each other's mistakes in any significant fashion.
This could be changed, if we as a society wanted it to. We could mandate serious driver training (like what they do in Germany), and also periodic retraining. Putting people in simulators and test tracks and teaching them how to handle various situations, using the latest findings, would save a lot of lives. But we choose not to do this because it's expensive and we just don't care that much; we think that driving is some kind of inherent human right and it's very hard for people to lose that privilege in this country. And it doesn't help that not being able to drive equates to being very difficult to survive in much of the US thanks to a lack of public transit options.
>They continually make the same mistakes, over and over. No, human drivers do not learn at all from each other's mistakes in any significant fashion.
Why do you assume that drivers have to learn from each other's mistakes? A drunk driver learning from his own mistakes is already significantly ahead of what a self driving car does which potentially just repeats the same mistake over and over again. The correlated risk may even cause accidents in bursts. 10 self driving cars all making the same mistake at the same time will cause even more damage than just a single one.
>Why do you assume that drivers have to learn from each other's mistakes?
Because that's what computers do: we can program them to avoid a mistake once we know about it, and then ALL cars will never make that mistake again. The same isn't true with humans: they keep making the same stupid mistakes.
>A drunk driver learning from his own mistakes is already significantly ahead of what a self driving car does which potentially just repeats the same mistake over and over again
Why do you think this? You're assuming the car's software will never be updated, which is completely nonsensical.
>10 self driving cars all making the same mistake at the same time will cause even more damage than just a single one.
Only in the short term. As soon as they're updated to avoid that mistake, it never happens again.
I would choose the "machine" driver if it is two ( not one ) orders of magnitude safer.
By the way, your examples are not very good, as the drunk/texting driver makes a choice, and to some extent her passengers do too. No such choice is given when the car is autonomous.
I'd pick the distracted driver currently, and for two good reasons:
1) The computer has bugs that will kill the driver 100/100 times if they encounter that case. Driving at the guard rail, the truck decapitation, and whatever bugs have yet to be discovered.
2) A distracted driver may encounter one of those situations and see and avoid it, even if drunk or looking down.
The likely case is that the current crop of self-driving cars is much more dangerous, and will remain so until some magical breakthrough like the one mentioned above happens.
If that was actually the case, i.e. it had a lower accident rate than humans but its accidents were silly bugs, it wouldn't be a "huge difference" -- it would be a slight improvement that would become a major improvement when the bug gets fixed.
If the average person thinks they're better than the computer when the computer is better than the average person, the average person is incorrect.
> They want to be safe when they haven't done anything dangerous -- they want to be in reasonable control of their destiny.
If you don't trust the computer, does that mean you won't trust another driver, if the computer is better than the average driver? Then how do you drive at all, when the roads are full of average drivers who could hit you at any time?
> If the average person thinks they're better than the computer when the computer is better than the average person, the average person is incorrect.
Until you remember that obtaining statistically significant evidence that the latest version of Tesla's (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving. And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].
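A rough sketch of why "hundreds of millions" is the right order of magnitude, using the statistical rule of three for zero-event observations (the human baseline rate here is an approximation):

```python
def miles_needed_for_95pct_bound(baseline_deaths_per_mile):
    """Rule of three: if you observe zero fatalities over n miles, the 95% upper
    confidence bound on the true fatality rate is roughly 3 / n. To claim a rate
    at or below the baseline, you therefore need n >= 3 / baseline."""
    return 3 / baseline_deaths_per_mile

human_baseline = 1 / 100e6  # roughly 1 fatality per 100 million vehicle miles
print(miles_needed_for_95pct_bound(human_baseline))  # ~3e8 miles: hundreds of millions
```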
In the mean time, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.
> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].
And they include driving in a huge range of conditions and roads that driving automation does not function in.
The conditions in which automated driving technology gets used (good weather, highway driving) must have far lower rates of accidents than average.
The accident rate comparison wasn't between miles driven by humans vs. miles driven by technology; it was between miles driven by cars before and after the technology was made available, and the accident rate went down. The only way that happens is if it does better than humans at the thing it's actually being used for. Which means that even if it's only used in clear conditions, it's doing better than humans did in clear conditions -- not just better than humans did on average.
It's plausible that it currently does worse with adverse weather than humans do with adverse weather, but I'm not aware of any data on that one way or the other, and I wouldn't expect any since it's not currently intended to be used in those conditions.
> Until you remember that obtaining statistically significant evidence that the latest version of Tesla's (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving.
It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average. So you have a probability of being the unlucky first person to encounter a new bug, but unless the probability of that is significantly higher than the overall probability of being in a collision for some other reason, that's just background noise.
Moreover, it isn't an unreasonable expectation that newer versions should be safer than older versions in general, so using the risk data for the older versions would typically be overestimating the risk.
> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].
That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.
> In the mean time, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.
The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR. But then your heuristic is stale as soon as they fix it, which is likely to happen long before any kind of true driverless operation is actually available.
> It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average.
This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action. This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seem like assumptions which are inconsistent with real world development of complex software aimed at handling a near-infinite variety of possible scenarios. Any driver is going to encounter situations which are subtly different from those the car has been trained to handle on a constant basis, so the probability of being the first to encounter a new bug doesn't strike me as being particularly low. The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower).
> That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.
I don't get to choose what vehicles other people use. I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.
> The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR.
This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed. I'm sure plenty of engineer time has been devoted to studying them (despite Tesla's actual PR strategy being to deny the problem and deflect blame onto the driver rather than announce fixes) but the fixes aren't trivial or easily generalised and approaches to fixing them are bound to produce side effects of their own.
> This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action.
We're talking about a regression that makes things worse than they were before. The worst case is that they have to put it back the way it was.
> This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seem like assumptions which are inconsistent with real world development of complex software aimed at handling a near-infinite variety of possible scenarios.
I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.
That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically, which puts pressure on the companies to compete to have the best safety record and therefore not trade the reductions in misbehavior for additional complexity as much.
> Any driver is going to encounter situations which are subtly different from those the car has been trained to handle on a constant basis, so the probability of being the first to encounter a new bug doesn't strike me as being particularly low.
It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.
And if it was really that common then why aren't their safety records worse than they actually are?
> The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower)
The rate for autonomous vehicles is also very low, and the average person is still average.
> I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.
So who is forcing you to buy a car with this, or use that feature even if you do? Not everything is a dichotomy between being mandatory or prohibited. You can drive yourself and the drunk can let the software drive both at the same time.
Though it wouldn't be all that surprising that computers will one day be able to beat even the best drivers the same way they can beat even the best chess players.
> This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed.
You're assuming they're the same problem rather than merely the same result.
And in this case it's purposely adversarial behavior. There are tons of things you can do to cause an accident if you're trying to do it on purpose, regardless of who or what is driving. The fact that software can be programmed to handle these types of situations is exactly their advantage. If you push a sofa off an overpass onto a highway full of fast-moving traffic, there may be a way for the humans to react to prevent that from turning into a multi-car pile-up, but they probably won't. And they still won't even if you do it once a year for a lifetime, because every time it's different humans without much opportunity to learn from the mistakes of those who came before.
> I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.
I think this is misunderstanding the difference between a safety critical system which is designed to be as simple as possible, such as an airline system to maintain altitude or follow an ILS-signalled landing approach, and a safety critical system which cannot be simple and is difficult to even design to be tractable, such as an AI system designed to handle a vehicle in a variety of normal road conditions without human fallback.
> That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically
The benchmark maximum acceptable fatality rate for all kinds of traffic-related fatality is a little over 1 per hundred million miles, based on that of human drivers. Pretty damned difficult to measure the safety performance of a vehicle type statistically when you're dealing with those orders of magnitude...
> It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.
Well yes, the system will handle a significant proportion of unforeseen scenarios safely, or at least in a manner not unsafe enough to be fatal (much like most bad human driving is unpunished). Trouble is, there are a lot of unforeseen scenarios over a few tens of millions of miles, and a large proportion of these involve some danger to occupants or other road users in the event of incorrect [in]action. It's got to be capable of handling all unforeseen scenarios encountered in tens of millions of road miles without fatalities to be safer than the average driver.
> And if it was really that common then why aren't their safety records worse than they actually are?
They really haven't driven enough to produce adequate statistics to judge that, and invariably drive with a human fallback (occasionally a remote one). Still, the available data would suggest that with safety drivers and conservative disengagement protocols, purportedly fully autonomous vehicles are roughly an order of magnitude behind human drivers for deaths per mile. Tesla's fatality rate is also higher than that of other cars in the same price bracket (although there are obviously factors other than Autopilot at play here).
> The rate for autonomous vehicles is also very low, and the average person is still average.
You say this, but our best estimate for the known rate for autonomous vehicles isn't low relative to human drivers despite the safety driver rectifying most software errors. And if a disproportionate number of rare incidents are caused by "below average" drivers, then basic statistics implies that an autonomous driving system which actually achieved the same gross accident rate as human drivers would still have considerably less reliability at the wheel than the median driver.
> You're assuming they're the same problem rather than merely the same result
From the point of view of a driver, my car killing me by accelerating into lane dividers is the same problem. The fact that there are multiple discrete ways for my car to accelerate into lane dividers, and that fixing one does not affect the others (and may even increase the chances of them occurring), supports my argument, not yours. And even this instance, which unlike the others was an adversarial scenario, involved something as common and easily handled by human drivers as a white patch on a road.
> Turns out your car has a habit of glitching like this every 165,000 miles or so (which seems to be the average accident rate) for no apparent reason.
People do equally inexplicably stupid things as well, due to distraction, tiredness or just a plain brain fart.
Please do point out where the statement is incorrect.
I wasn't talking about perception, people have all kinds of ideas about machines (mostly unfounded), but typically safety records drive policy, not whether drivers think they are better than the machine.
I believe machine assisted driving is safer than unassisted already and will continue to improve such that in 10 or 20 years human drivers will be the exception, not the norm. That will happen because computers are already safer in most conditions - the switch will happen when they are demonstrably far safer in all conditions.
Remember they don't have to beat the best human driver, just the average.
> That will happen because computers are already safer in most conditions - the switch will happen when they are demonstrably far safer in all conditions.
Computers currently can disengage in any conditions for any reason - how did you come to the conclusion that computers are already safer?
Humans drive 8 billion miles per day in the US, through a variety of conditions, crazy weather, crazy traffic, crazy pedestrians, etc. “Just being safer than humans” is an extremely tall order.
> Humans kill over 100 people per day in traffic accidents in the US alone
They also drive 8-9 billion miles per day. That's around 1 death per 90 million miles of driving. Given the number of AV miles that are driven annually right now, we actually would expect to see ~0 deaths per year if AVs were as safe as humans...
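To make the "~0 deaths" point concrete: the annual AV fleet mileage below is a placeholder assumption, not a reported figure:

```python
human_fatalities_per_mile = 100 / 8.5e9   # ~100 deaths over roughly 8.5 billion miles per day
assumed_av_miles_per_year = 20e6          # placeholder: tens of millions of AV miles per year

expected_deaths_per_year = human_fatalities_per_mile * assumed_av_miles_per_year
print(expected_deaths_per_year)           # ~0.24 -- i.e. about zero expected deaths at human-level safety
```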
It is probably best to categorize these as autopilot PLUS human supervision, but anyway, Wikipedia cites 3 autopilot-caused deaths worldwide over 3 years or so.
It's all worth subcategorizing. Highway driving is substantially safer than surface streets, and if you stick to awake, sober, daytime drivers in good weather, the safety of human drivers is even higher.
Given that autopilot can handle nighttime, but not the others, it's completely possible that 1/300 million miles is above the sober good weather highway human driver fatality rate.
Full autonomy has been 10 years away for half a century. If driving still exists in a century, I'd wager that full autonomy would still be 10 years away then.
Keller finished designing HW3 before departing Tesla. In a recent tweet Musk claimed HW3 is nearing production; then, a while after media outlets rushed to report on the first tweet, Elon threw out the caveat that HW3 won't be going into cars until the software is ready, which could mean anything, and likely means they are nowhere near having their current software validated for HW3, let alone FSD.
In Keller's case, that's his MO. He did exactly the same thing with Zen, which basically saved AMD. I'm fairly bearish on Tesla and don't read anything into it.
That's actually Jim Keller's M.O.; he doesn't hang around for long. He does the bit he's interested in and then moves on. In the last decade he's worked for Apple, AMD, Tesla and Intel.
I posted mine and looked back at the thread and someone else had posted at roughly the same time. Coincidence, I guess. M.O. is a good term to describe the situation.
Isn't software a bigger issue? I mean, it took Nervana and Nvidia a non-trivial number of years to optimize code. This is also the reason OpenCL is next to useless for deep learning, and AMD is creating a HIP transpiler instead of building its own equivalents of CUBLAS/CUDNN/....
I believe the chipset was designed with the current/future roadmap of the AI in mind. I think it's basically a chipset optimised for the particular neural net approach they are using.
They probably needed the data collected by the current hardware in order to consider developing the next generation. Then they needed the hardware in order to develop the software. The only way to get the data is from a real fleet running hardware and software you control.
>but if you've missed that news, you're not really following Tesla very closely.
If you think this will actually happen, you're not really following Tesla very closely.
Outside of Elon's boasting, is there any evidence they have a chip that is going to be 10x better than the best in the industry? How do you imagine that happens? Where does the talent come from?
A 30 watt power Nvidia GPU from 2016 is not "the best in the industry".
Edit: To expand, the GPU in current Teslas is a GP106. It has no neural net-specific compute abilities. For NN inference it's slower than last year's iPhone. A vastly faster chip wouldn't be hard to get. Even if their in-house silicon efforts fail, they could just buy a modern Nvidia chip and the inference speed would go up 100x. Those chips go for $1000, easily covered by Tesla's "full self driving" upgrade package price tag of $5000.
If they run into a problem with FSD, it's not going to be finding a way to run 10x faster compute than the current shipping hardware. They may have other problems, but not that.
If the improvements happen in software, deploying them for free is obviously not an issue. The question is how much the necessary hardware modifications would cost. But they're charging thousands of dollars for the option, which gives them some leeway to at least break even there, especially if you consider the time value of money between when they sell the option and when they deliver it.
The real question is, what happens if "full self driving" doesn't arrive for another 30 years?
But if you think car companies have never promised something they weren't sure they could deliver, you're not aware of the incumbents' unfunded pension liabilities.
> But if you think car companies have never promised something they weren't sure they could deliver, you're not aware of the incumbents' unfunded pension liabilities.
Can we just get rid of whataboutism here entirely, please, HN?
> It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.
But promising things based on speculation about the future is not new, it's common practice. It may be a questionable idea, but the claim was that it's unusual. It's not.
If you buy one of their electric cars planning to use it, you're assuming they're actually going to build it, and if you live near one of the proposed sites, actually build that one in particular.
And this is hardly new. Ford Sync is more than a decade old, but when it came out they were advertising all the things you could theoretically do with it, many of which were contingent on third parties writing apps for it. Some of that actually happened, some of it didn't, but it wasn't all there on day one.
Whataboutism is “X isn’t bad because Y is worse/also bad”.
Parent was saying “X will probably be bad because similar Y was bad in the same dimension (here, overpromising)”.
Maybe not a solid argument, but it isn’t whataboutism.
Edit: Never mind, it looks like I misread the parent's point, and he was saying that it's no big deal to promise something without being sure you can deliver because the Big Three did it with pensions.
On second thought, I misread the argument, and I agree it's whataboutism. See edit. (However, if my misreading had been correct, it wouldn't have been whataboutism, as it would be a prediction that Tesla would also be bad.)
You make no mention of HW3, which will be installed for free on the cars that bought FSD, and which includes Tesla's own chip (I assume some kind of TPU) designed by Jim Keller. It has been described by Tesla as having 100x the FPS processing capability of the Nvidia PX2, the one they use now [1]. So they're more or less in full agreement with the things you say need to be done, and have been working on making them happen for the past 3-4 years.
Tesla's head of AI is Andrej Karpathy, who many in this community hold in high esteem. I know this is "argument by authority", but we're working with a black box here, so it will have to do. Do you really think he is wasting his best years on a project that anyone in the field can "guarantee" will never happen? Or could it be that he knows something you don't?
By the way, it seems you also don't know that they hold most FSD revenue in reserve on their financials, it's not being spent. So if they need to return it, they can.
It's hard to argue when all of your arguments are based on Elon Musk's assertions. I remember watching Elon Musk in one of his product announcement events promising that development of full self-driving mode would be finished by the end of 2018 and rolled out through 2019. This is while they told the CA DMV they hadn't tested even a single mile in self-driving mode in 2018:
https://www.dmv.ca.gov/portal/wcm/connect/96c89ec9-aca6-4910...
You'd think they need at least one mile before rolling out the feature.
Regarding the TPU chip that is 100x faster than NVidia's chip, I also take that with a grain of salt. Note that 3rd generation Google TPUs are on par with latest NVidia GPUs in terms of performance according to Google. If Tesla has made a chip that is 100x faster, they should spin it off as a separate company that could be worth as much as 2x Tesla's market cap.
>Note that 3rd generation Google TPUs are on par with latest NVidia GPUs in terms of performance according to Google. If Tesla has made a chip that is 100x faster, they should spin it off
I don't think they've claimed that the FSD computer (hw3) is 100x faster than "the latest NVidia GPUs". He's said that it's about an order of magnitude improvement in the number of video frames the current Nvidia hardware in a Tesla can process (that is, from 150-200 fps to ~2000 fps) without needing to scale down any frames.
I think he may have said in one live interview it would be "2000%" better, but since he said previously it would do 2000 frames/s, that may have just been a mistake of saying "percent" instead of "frames".
Your TPU point makes no sense. If Google's TPUs are only on par with NVIDIA GPUs for NN workloads (I assume that's what you mean by on par?), then they suck at making chips, which they don't. What's even the point of making them, and why has e.g. Deepmind been using them if they could have been using NVidia chips in the first place? I don't think they're budget constrained. It sounds to me like you're using a non-relevant definition of "on par".
About the stuff that's based on Elon's assertions:
First, yes, he is often wrong on timelines. Nobody doubts that. By the way, for other car companies (even Waymo!) who claim they'll have X milestone by Y date, everyone is understanding, since timelines slip. For Tesla, apparently it's a capital crime to say "I think we'll have it by then" and not have it. But your original points were not about timelines.
As for the miles they have registered with the DMV, Tesla's self-driving programme does not follow the same path as others. They are progressing from level 2 upwards, and deploying improvements to their fleet of cars in production. Other companies are working with tiny fleets and aiming directly at level 4+. So basically, you're looking in the wrong place. But even so, Elon's latest prediction is that they'll have "feature completeness" by end of year, and then they'll start working on regulatory approval. So I assume that's when you'll start seeing miles there, and you will very likely see lots of them, all at once.
Andrej Karpathy just finished his PhD and he was offered an executive position at a major public company. Also, he is not wasting his time. His department is building something that is actually used in cars today. It's just not the self-driving software that Elon is pitching.
Ah, so they are going to equip all future cars with a chip that doesn't exist yet, for free, so it will shrink margins even more. Sounds smart.
>By the way, it seems you also don't know that they hold most FSD revenue in reserve on their financials, it's not being spent. So if they need to return it, they can.
Please point to the line and note in the financials where this is done, because I'm quite sure you're mistaken. Tesla hardly has any warranty reserves, let alone FSD refunds.
Hello, fellow "former Waymo dev." What you said is one of the many reasons that I replaced my Model X with an I-PACE. While I support the movement-away-from-gas part of their mission, I can no longer support Tesla as a business. I'm now focusing my energy toward advocating sustainable migration to general EV technology.
People on blind are either straight up lying or having a very extreme case of selection bias.
Blind worries me a lot. Because if the general claims made on the website are true, then the people in top companies making $300k+ annually are some of the most immature and toxic people on the face of the earth.
It happens on this site too... if you believe HN, everyone in the UK is making £100k+, even though the average salary here is more like £40k. I’ve seen people complain because they’re ‘only’ making as much per month as most people in the country make annually.
I don't see these other people as fellow developers, but as golf-playing executives or their children.
I'm aware that not everyone here is a dev, but we mostly have similar careers.
Not everyone here lives in the US. Making SV salaries here, where 99+% of people are making under $80k/year, would be obscene. If I had access to that kind of money, I'd be putting it into better housing (penthouse?) long before ever thinking of buying a car like that.
I live on about 30% of my salary. Unless you're in the bay area or have a family to support, that's plausible (and honestly pretty comfortable) for most developers. If you wanted that car, you could have it.
An I-Pace is only $60-70k. Assume you have $20k positive equity in your current vehicle; the loan cost is about $800/mo. The equivalent would be buying a new Subaru Outback or F150 and paying for gas and oil changes.
Perhaps an easier and quicker fix (although extremely costly) is to outfit all roadways with aids for autonomous driving. Otherwise, we will have to wait decades for the technology to be mature enough.
I wonder, though, if the roadway maintenance would be good enough to have that work reliably. The car would still have to react to misconfigured, misplaced or defective aids.
Even if it’s not just negligence, if an accident were to move the markers, would it cascade accidents until those markers are fixed?
In the end, it seems we’d still need cars way cleverer than what we have now for it to be trustworthy.
The fatal Tesla crash into a highway barrier a while back was because of faded highway markings.
Many localities can barely keep the lines painted. Many more have all sorts of adverse weather conditions (e.g., snow and slush) that make road markings hard to see. The chances of this being a workable solution, at least in the US, are nil.
that crash wasn’t because of the markings. saying so, although maybe accurate, softens the issue. it was because tesla has a poor autonomous system that couldn’t plainly see a road splitting and instead hit a barrier at full speed with no braking. the markers were a contributor, since staying between the lines it sees seems to be mainly what a tesla does.
No tech that requires a special type of road will ever be successful, whether it's aids for autonomous driving, electric roads for on-the-go charging, or similar. The world already has countless kilometers of roads, and all you need to drive a regular car is a relatively hard and flat piece of dirt. The cost and time required would be immense, and there will always be a need to use cars outside any built-up road network.
Sure, for commuting in large cities anything is possible, but then cars still need to be able to do both the old and the new navigation.
It doesn't have to be an all-or-nothing proposition. For example, just augmenting interstate highways to enable reliable autonomous driving would be a complete game changer for commerce, let alone personal use.
Let the driver drive the car manually for the last mile of travel. Most of the problems of autonomous driving are on the last mile (pedestrians, unmarked roads, poor lighting, school crossings, train crossings, etc.). The last mile is the most expensive to outfit and maintain, too.
I wouldn't be disappointed at all to find that in the future, cars drive by themselves 80% of the time, and humans take over for the tricky 20%. If the roadway isn't autonomous-ready the car shouldn't become a brick. You just drive it manually.
You would either have to add massive taxes to road usage (not just gas), heavily tax automobile ownership or use general purpose taxes for such very expensive upgrades.
As a non-owner of a car I would balk at the last option.
You're assuming the upgrades will be extraordinarily expensive. What if what is required is something as simple as radar-reflecting paint, and somehow encoding additional information in the markings? Or maybe all cars should have an RFID-like thing on their bumpers to make car detection more reliable. The gov't could distribute that bumper sticker along with the car registration sticker. You could enforce that in the annual inspection.
Sure, a car that doesn't need any aids is much cooler. But if you look at the history of technology, especially in the PC world, there were a lot of clever hacks and external aids that had to be used in order to make the tech feasible. Then with time, they were rendered unnecessary.
yea, just like we’ve been able to outfit all roads properly for aiding human drivers. you mentioned it’s costly, but this just won’t or can’t happen. we don’t even properly maintain our roads.
Just a random data point. I've used Tesla's autopilot on completely unmarked dirt roads in the past. The speed was low and I was monitoring it very carefully, but it did somewhat work on some of the dirt roads I tried it on.
> It is kinda amazing that Elon Musk is selling cars based on speculative future breakthroughs in technology. That must be a first.
It is. But it's not that unusual these days: it's basically a Kickstarter!
Well, not exactly. Kickstarter projects have no responsibility to ever deliver. They just have to make a good faith effort.
Tesla has to eventually deliver some version of full self driving to customers, but they made sure not to say when.
So it's hard for me to see the legal liability for them. How can someone sue for a product that was never guaranteed a delivery date?
Worst case would be a lot of customers demanding refunds, which I doubt Tesla would fight.
So I guess that's the liability on the books: some % of $5000 refunds.
But a lot of customers will be happy with whatever they deliver whenever they deliver it. It doesn't feel like an existential risk to me.
Part of why it doesn't faze me is that Tesla has so many knobs they can dial here. They can decide where FSD is enabled: if it works only on roads where the autopilot is highly confident, and only during good weather conditions, that probably fulfills their legal obligations.
This is why I think self-driving bears are wrong.... They argue computers will never be able to handle every corner case, which is true. But it only needs to work for one slice of drivers in one location for there to be a market, and from there it's just an iterative process to grow the service area.
It's like arguing my ice cream shop will fail because many people are lactose intolerant, pale, vegan, etc. Sure, but to have a successful business I just need one customer, then two, then four. And I get to choose which customers I court.
if this is accurate, the current models give a hint of the issue. the car can't yet see very far and any minor obstruction confuses the hell out of it. there are too few pixels to make out the road at a distance, and the car doesn't use any of the other clues humans do to figure out what the road is doing next - i.e. we can guess a corner from vertical signs, guard rails, trees and even hillsides - see this example: in red, the pixels that hint a tesla at an upcoming corner; in blue, those that a human can also use: https://i.imgur.com/CvntZuZ.png
the problem is that a camera, especially if it's not at a high vantage point, will have very few pixels to represent distant features.
The autopilot behaves like an illegal street racer in that video (extremely tight turns) and this happened while it was only a few seconds away from colliding with a motorcycle.
Adversarial attacks. Extrapolating from incomplete data. Consistent performance in any/most lighting conditions: night, dusk, a low-hanging sun over an icy road. Fog. Any combination of the above.
Humans, while not perfect, are capable of making split-second decisions on incomplete data under a surprising range of conditions, based on the fact that our brains are an unmatched pattern-matching beast feeding off of our experience.
Adversarial Attacks? Like removing a speed limit sign? Painting over lane markers? Dropping bricks off of overpasses? Throwing down a spike strip? Sugar in the gas tank?
Unless you're talking traditional computer security, which it doesn't seem like you are, these types of threats haven't stopped human drivers, despite the fact that humans are very susceptible to "adversarial attacks" while driving too. Whether it's putting carefully crafted stickers over a stop sign to confuse a CNN or yanking it out of the ground to get a human driver killed... you're talking about interfering with the operator of a moving vehicle, so what's the critical difference here?
I think the difference is that software tends to be much more fragile - and more predictable - than humans. Paint fake lane markers, and an autopilot might drive full speed into a wall because it trusts them; the next 5 cars with autopilots will all do the same thing. An attacker can verify that an autopilot will do this ahead of time. A human on the other hand will be more likely to notice that things are amiss - they can pick up on contextual clues, like the fresh paint, and the fact they've driven that road hundreds of times before and instantly notice the change, and the pile of burning self-driving cars.
I'm not sure why hackers prefer to hack large-scale computer systems rather than individual humans, but they do. So we have to protect neural networks against adversarial examples for the same reason we have to protect databases against SQL injections.
> so what's the critical difference here?
If neural networks are deployed at scale in self driving cars, a single bug could trigger millions of accidents.
Printing an adversarial example on billboards would lead to crashes all around the country. Are we going to assume no one is going to try? (btw: real world adversarial examples are easy to craft [1]).
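To make the "easy to craft" point concrete, here is a minimal numpy sketch of the fast-gradient-sign trick the adversarial-example literature describes. The classifier, the input, and the epsilon are all invented for illustration (a toy linear model, not any real perception stack), but the mechanism that scales up to deep vision models is the same: a tiny per-pixel nudge, aligned with the model's gradient, flips a confident prediction.

    import numpy as np

    # Toy stand-in for an image classifier: a random linear-logistic model over a
    # fake 32x32x3 "image". Nothing here is from a real driving stack.
    rng = np.random.default_rng(0)
    d = 32 * 32 * 3
    w = rng.normal(size=d)                 # stand-in classifier weights

    def prob(x):
        """Toy model's probability for class 1 (say, 'lane marking present')."""
        return 1.0 / (1.0 + np.exp(-(w @ x)))

    x = rng.uniform(0.0, 1.0, size=d)      # a made-up "clean" image in [0, 1]
    y = 1.0 if prob(x) >= 0.5 else 0.0     # whatever the model believes about it

    # For a linear-logistic model, the loss gradient w.r.t. the input points along
    # +w when the label is 0 and -w when the label is 1, so the fast-gradient-sign
    # step is just a signed per-pixel nudge of size eps in that direction.
    eps = 0.02                             # ~2% per pixel: visually negligible
    direction = np.sign(w) if y == 0.0 else -np.sign(w)
    x_adv = np.clip(x + eps * direction, 0.0, 1.0)

    print(f"clean prediction for class 1:       {prob(x):.6f}")
    print(f"adversarial prediction for class 1: {prob(x_adv):.6f}")
    print(f"largest per-pixel change:           {np.abs(x_adv - x).max():.3f}")

The toy model flips even though no pixel moves by more than 2%, because the thousands of tiny nudges all push the decision in the same direction; that accumulation effect is what makes printed or painted adversarial patterns plausible in the physical world.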
Like literally the article we’re commenting on. Image recognition systems in general are much more susceptible to errors in cases where humans wouldn’t even think twice.
And yes, “hey, a stop sign was here just yesterday” is also a situation for which humans are uniquely equipped, and computers aren’t.
Do you mean OTA or just any attack because I think non-autonomous vehicles would have the same concerns... or really any equipment with embedded computers.
Humans are LOUSY in almost all of those conditions as well as much less challenging ones--we engineer the road in an attempt to deal with human imperfections. The I-5/I-805 intersection in California used to have a very slight S-curve to it--there would be an accident at that point every single day. Signs. Warning markers. Police enforcement. NOTHING worked. They eventually just had to straighten the road.
Humans SUCK at driving.
Most humans have a time and landmark-based memory of a path and they follow that. Any deviation from that memory and boom accident.
This is the problem I have with the current crop of self-driving cars. They are solving the wrong problem. Driving is two intertwined tasks--long-term pathing, which is mostly spatial memorization, and immediate response, which is mostly station keeping with occasional excursions into real-time changes.
Once they solve station-keeping, the pathing will come almost immediately afterward.
"Station Keeping" is maintaining your position relative to the other cars around around you--and it's what most people do when driving.
Ever notice how a bunch of stupid drivers playing with their phones tend to lock to the same speed and wind up abreast of one another? Ever notice how you feel compelled to start rolling forward at a light even when it is still red simply because the car next to you started moving? When in fog, you are paying attention to lane markers if you can see them, but you are also paying attention to what the tail lights ahead of you are doing.
All of that is "station keeping".
And it's normally extremely important to give it priority--generally even over external signals and markings (a green light is only relevant if the car in front of you moves). It's the kind of thing that prevents you from running into a barrier because everybody else is avoiding the barrier, too.
Of course, it's also what leads to 20 car pile ups, so it's not always good...
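As a rough illustration of what "station keeping" looks like once you write it down as a control loop, here is a toy constant-time-headway follower. The gains, limits, and scenario are invented for the sketch and are not from any production driver-assistance system; the point is just how much of ordinary driving reduces to hold-your-place-in-the-flow feedback.

    # Toy "station keeping" loop: a follower car holds a time gap behind a lead
    # car with a simple proportional controller. All numbers are illustrative.

    DT = 0.1              # simulation step, seconds
    HEADWAY = 1.8         # desired time gap behind the lead car, seconds
    STANDSTILL = 2.0      # desired gap when stopped, meters
    KP_GAP, KP_SPEED = 0.3, 0.8
    MAX_ACCEL, MAX_BRAKE = 2.0, -6.0    # m/s^2

    def follower_accel(gap, v_self, v_lead):
        """Acceleration command from the current gap and the two speeds."""
        desired_gap = STANDSTILL + HEADWAY * v_self
        accel = KP_GAP * (gap - desired_gap) + KP_SPEED * (v_lead - v_self)
        return max(MAX_BRAKE, min(MAX_ACCEL, accel))

    # Scenario: lead car cruises at 25 m/s, then brakes hard to a stop at t=20s.
    x_lead, v_lead = 60.0, 25.0
    x_self, v_self = 0.0, 25.0
    for step in range(600):             # 60 seconds of simulated driving
        lead_accel = -4.0 if 20.0 <= step * DT < 26.25 else 0.0
        v_lead = max(0.0, v_lead + lead_accel * DT)
        x_lead += v_lead * DT

        a = follower_accel(x_lead - x_self, v_self, v_lead)
        v_self = max(0.0, v_self + a * DT)
        x_self += v_self * DT

        if step % 100 == 0:
            print(f"t={step * DT:5.1f}s  gap={x_lead - x_self:6.1f} m  "
                  f"v_self={v_self:5.1f} m/s  v_lead={v_lead:5.1f} m/s")

Notice that a green light never appears anywhere in the loop: as the comment above says, the follower only reacts to the car ahead, which is both why this behavior is robust and why it can propagate a pile-up.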
It’s not a long time. Computer speech recognition, a far simpler problem, has barely advanced at all in 10-20 years. Siri is no better than Dragon Dictate was in the late 1990s. It’s possibly worse.
Yeah this is just completely wrong. Without getting into specific products, public test sets from the 1990s like Switchboard and WSJ are now at around human-level transcription accuracy rates; 20 years ago the state of the art was nowhere near that.
It's also not objectively a simpler problem. Humans are actually not particularly good at speech recognition, especially when talking to strangers and when they can't ask the speaker to repeat themselves. Consider how often you need subtitles to understand an unfamiliar accent, or reach for a lyrics sheet to understand what's being sung in a song. For certain tasks ASR may be approaching the noise floor of human speech as a communication channel.
Humans may not be particularly great at speech transcription, but they're phenomenal at speech recognition, because they can fill in any gaps in transcription from context and memory. At 95% accuracy, you're talking about a dozen errors per printed page. Any secretary that made that many errors in dictation, or a court reporter that made that many errors in transcribing a trial, would quickly be fired. In reality, you'd be hard pressed to find one obvious error in dozens of pages in a transcript prepared by an experienced court reporter. It is not uncommon in the legal field to receive a "proof" of a deposition transcript, comprising hundreds of pages, and have only a handful of substantive errors that need to be corrected. That is to say, whether or not the result is exactly what was said, it's semantically indistinguishable from what was actually said. (And that is why WER is a garbage metric--what matters is semantically meaningful errors.)
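As a concrete version of the parenthetical about WER, here is a small sketch of how word error rate is typically computed (word-level edit distance divided by reference length), using two invented hypotheses to show that a harmless misspelling and a meaning-changing substitution score exactly the same.

    # Word error rate = word-level edit distance / reference length. The two
    # "recognizer outputs" below are invented examples, not from any real system.

    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                               dp[i][j - 1] + 1,          # insertion
                               dp[i - 1][j - 1] + cost)   # substitution
        return dp[len(ref)][len(hyp)] / len(ref)

    reference = "please remind me that maddie and tae sing"
    harmless = "please remind me that maddie and tay sing"   # misspelled name, same meaning
    harmful = "please remind me that maddie and tae sick"    # changes the meaning entirely

    print(f"WER with the harmless error: {wer(reference, harmless):.3f}")
    print(f"WER with the harmful error:  {wer(reference, harmful):.3f}")
    # Both come out to 1/8 = 0.125: WER weighs every word error identically,
    # which is exactly the complaint about it as a metric.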
The proof of the pudding is in the eating. If automatic speech recognition worked, people would use it. (After all, executives used to dictate letters to secretaries back in the day.) On the rare occasions you see people dictate something into Siri or Android, more often than not the results are hilarious.
That article is correct that WER has some problems, but it also correctly concludes that "Even WER’s critics begrudgingly admit its supremacy."
Yes, Switchboard has problems (I've mentioned many of them here) but it was something that 1990s systems could be tuned for. You would see even more dramatic improvements when using newer test sets. A speech recognition system from the 1990s will fall flat on its face when transcribing (say) a set of modern YouTube searches. Most systems in those days also didn't even make a real attempt at speaker independence, which makes the problem vastly easier.
Executives don't dictate as much any more because most of them learned to touch type.
Siri, at least, is total garbage. Half the time I try to dictate a simple reminder, Siri botches it. (The other day, I tried to text my wife that both Maddie and Tae sing. Siri kept transcribing “sing” as “sick.”) Siri at least is no better than Dragon NaturallySpeaking was in the 1990s. The Windows 10 speech recognizer is somewhat better, but it’s still not usable (when was the last time you saw anybody use it?).
I don't have experience with Dragon or Siri, but Google Assistant has been improving at a noticeable pace, and for me it seems to recognize at least 90% correctly.
We could confirm or disprove this by getting human test subjects to drive cars using nothing but the video feed from exactly the same cameras.
If the humans perform significantly better, then that shows that there is a lot of work to be done independently of the camera resolution.
I could drive reasonably safely without my glasses (equivalent to a massive reduction in resolution); I just wouldn't be able to read direction signs and such. I'd have to drive right up to a parking sign to get the exact days and hours of the parking restrictions and such, but I wouldn't run over a pedestrian, or go the wrong way down some lane.
As a human driver I'd say you could drive OK with that, although looking into the distance the images are annoyingly blurry compared to what I'd see with the naked eye driving normally. Maybe comparable to driving in the rain with so-so windscreen wipers.
I'd say it's marginal whether I'd be able to drive safely with vision like that.
Quite a lot of the time, the vertical 'smear' prevents me from seeing what something is or whether it's moving towards me, and the dynamic range is too poor to make out black cars against a black background.
Do you have any opinion on the new HW3 which is a custom ASIC for CNNs or something like that? Would love to know how that stands compared to other possible ASICs at Waymo.
No, they have to be much better than human drivers, close to perfect. We all know and expect that people make mistakes, and if a driver kills themselves by mistake that's on them. But if you buy a machine to drive you and it kills you because the engineers made a mistake, that's a different thing. That's much less tolerated, and rightly so, even if it isn't 100% logical by your standard. It's not just about mean number of accidents per driven km, it's also about accountability and blame.
I understand that many people will view it that way, but there is another way to view it.
Every single time I get in a vehicle, the people coming towards me might be talking on their phone, drunk or whatever. There is a very real chance I will be injured or killed on a daily basis even if I drive perfectly, and nothing beyond my abilities happens.
In the same way that seat belts and airbags and automatic braking improved safety, imperfect self-driving cars will improve overall safety.
The important thing to note is that seat belts, airbags and automatic braking are far from perfect, and thousands of people a year still die even though they are using them. People still use them because it is safer than the alternative - which imperfect self-driving cars will be too.
People will not accept a computer driver, even if it manages a significant improvement on the current rate of car accidents, because there is a real psychological barrier. Your counterargument of "but it's an improvement" would miss that point.
Humans are prone to all sorts of fallacies, especially surrounding "destiny" and our own influence over our lives - that's why ideas like "bad things happen to bad people" are so popular. That kind of problem is not surmountable by statistical fact. You cannot sway the majority of road users to trust a machine that way. They want to be in control, because it makes them feel something that the computer can't - safe in their own hands.
> People will not accept a computer driver, even if it manages a significant improvement on the current rate of car accidents, because there is a real psychological barrier
In your opinion.
We can't know what people will accept, because we've never tried something like this before.
I've pointed out my view, and you've pointed out yours. Time will be the only way to see what people are willing to accept, or not.
>Those people are using a poor emotional argument.
Yes, they certainly are, but those people also vote. You can't just ignore them and their idiotic arguments, unless you live in a techno-authoritarian nation where things like this are forced on the populace against their will (even if their will is stupid). This is the whole problem here; you can't just convince everyone with a technical argument, because most people are non-technical, emotional, and frequently quite stupid, but they also have a say in decisions.
I would rather compare an auto-driving failure more to an airbag that failed to deploy or was otherwise rendered useless. An airbag that is inadequate for the amount of force applied might still save your life, and is still serving its intended purpose. Auto-driving incidents are so outrageous because they represent systematic failures.
What does it mean to be “better than human drivers?” One of the features of humans is that while they make mistakes, those mistakes tend to be pretty randomly distributed. In any given unusual scenario, most people will be fine most of the time. Sometimes we get tired and veer into the incoming lane, but you could get rid of the lane markings entirely and almost everyone would still be able to figure out what to do. A self driving car might never get tired and veer into oncoming traffic on a well marked highway on a sunny day. But we don’t know what phenomena will emerge if a particular stretch of weirdly painted lines causes every vehicle to veer into oncoming traffic during rush hour.
“Better than human drivers” is not an analytically useful criterion. The question is, can you handle all the corner cases human drivers manage to handle every day?
> What does it mean to be “better than human drivers?”
It's extremely simple, and the same criterion they use to determine whether driving is safer with many safety devices (i.e. airbags or seat belts) than without them.
Injuries or deaths per x miles or x hours driven.
It's very important to note that in some cases airbags and seat belts actually result in more severe injuries or deaths than without them, but overall they are better because they reduce the fatalities per x miles driven.
Just because airbags sometimes kill people, it doesn't mean I'm going to choose a car that doesn't have them. Overall, I'm better off to have them.
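Spelled out with placeholder numbers, the metric is just a normalization; the figures below are illustrative (the human baseline is only roughly US-scale, the fleet is hypothetical), and a real comparison would also have to match road types, weather, and driver demographics.

    # Deaths per 100 million vehicle-miles travelled (VMT), the usual way road
    # safety is normalized. The fleet numbers are invented placeholders.

    PER = 100_000_000   # normalize per 100 million miles

    def fatality_rate(deaths, miles_driven):
        return deaths / miles_driven * PER

    human_rate = fatality_rate(deaths=36_000, miles_driven=3.2e12)  # roughly US-scale
    fleet_rate = fatality_rate(deaths=3, miles_driven=2.0e8)        # hypothetical small fleet

    print(f"human baseline: {human_rate:.2f} deaths per 100M miles")
    print(f"hypothetical automated fleet: {fleet_rate:.2f} deaths per 100M miles")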
That’s not an incorrect metric, but it’s not a very useful one. Or put differently, you’ll need to be a lot more “perfect” than you think in order to get to an ultimate injury/death rate lower than human drivers. You need to be able to handle all the corner cases humans handle routinely, otherwise you’re going to get catastrophic effects.
For example, right now Teslas cannot detect stationary obstacles. They slam right into them at highway speeds: https://www.caranddriver.com/features/a24511826/safety-featu.... This is not a matter of just tweaking the algorithms to get better and better error rates—it’s a fundamental problem with the system Tesla uses to detect obstacles.
In order to actually get close to the accident rate of human drivers on average, you have to be “perfect” in the sense you have to be able to handle every edge case a human driver is likely to ever run into in their entire lives.
> In order to actually get close to the accident rate of human drivers on average, you have to be “perfect” in the sense you have to be able to handle every edge case a human driver is likely to ever run into in their entire lives.
That's like saying for airbags to be overall better, they have to be better at every single edge case like kids in the front seat or unrestrained passengers. We know for a fact they are not better at those edge cases, yet overall airbags are safer.
Why? Because 99.99999% of driving is not edge cases (that's why they're called edge cases), and as long as you're better for that very vast majority of cases, then you're better overall.
Fun fact: if you can only handle 99.99999% of cases (say on a per-mile basis), your system will still fail roughly 320,000 times per year in the US alone (3.2 trillion miles driven times a one-in-ten-million-miles failure rate).
You need to handle the “long tail” of exceptional cases. While most driving is not exceptional cases (on a per mile basis), they arise quite often for any given driver, and more importantly, for drivers in the aggregate. A vehicle stopped in the middle of the road is an edge case. It’s also something that just happened to me today, and in the DC metro area happens hundreds of times per day. Encountering a traffic cop in the middle of the street directing traffic happens to thousands of cars per day in DC. Traffic detours happen thousands of times per day. Road construction, unplanned rerouting, screwed up lane markings, misleading signs, etc.
The basic problem you’re having is that you’re assuming that failure modes for self driving vehicles are basically random. That’s not the case. A particular human might plow into a stopped vehicle because she is not paying attention, but the vast majority of people will not. But a particular Tesla will plow into a stopped vehicle because it doesn’t know how to handle that edge case, and so will every other Tesla on the road. A human driver might blow through a construction worker signaling traffic by hand because they’re texting. But the vast vast majority will not. But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case. A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time. Humans can handle all the edge cases most of the time. That means self driving cars must be able to handle all the edge cases, because any edge case they can’t handle, they can’t handle it any of the time.
> Fun fact: if you can only handle 99.99999% of cases (say on a per-mile basis), your system will still fail roughly 320,000 times per year in the US alone
We know how many fatalities there are in the US per year, but I wonder how many crashes there are? How many near misses, or how many times does someone "luck" out and miss death by inches while being completely oblivious to it?
> A vehicle stopped in the middle of the road is an edge case
No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic) and I'm sure training models are already dealing with it.
> But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case
You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.
> A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time.
Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.
> Humans can handle all the edge cases most of the time
The number of road deaths per day around the world makes me strongly disagree with that.
It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is.
They don't have to be perfect, but they do have to continually get better. And they are.
> No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic) and I'm sure training models are already dealing with it.
Yet Tesla released a vehicle with an “auto pilot” that can’t handle that case. Makes me skeptical they’ll ever be able to handle the real edge cases.
> You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.
Teslas can’t. And will they be able to read the hand signals of the Verizon worker who didn’t have a stop sign while directing traffic on my commute last week?
> Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.
For a human driver, it doesn’t come down to reaction time. The human driver will know to be careful from the panhandler’s body language long before they jump into traffic.
Also, being able to emergency stop isn’t the issue. Designing a system that can emergency stop while not generating false positives is the issue. That’s why that Uber killed the lady in Arizona. Uber had to disable the emergency braking because it generated too many false positives.
> Humans can handle all the edge cases most of the time
> The number of road deaths per day around the world makes me strongly disagree with that.
Humans drive 3.2 trillion miles every year in the US, in every sort of condition. Statistically, people encounter a lifetime’s worth of edge cases without ever getting into a collision (there is one collision for about every 520,000 miles driven in the US). In order to reach human levels of safety, self driving cars must be able to handle every edge case a human is likely to encounter over an entire lifetime.
> It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is. They don't have to be perfect, but they do have to continually get better. And they are.
I have a bent against techno optimism. Engineering is really hard, and most technology doesn’t pan out. Technology gets “continually better” until you hit a wall, and then it stops, and where it stops may not be advanced enough to do what you need. That happened with aerospace, for example. I grew up during a period of techno optimism about aerospace, but by the time I actually got my degree in aerospace engineering, I realized that we had hit a plateau. In the 60 years between the 1900s and 1960s, we went from the Wright Flyer to putting a man in space. But we have hit a plateau since then. When the Boeing engineers were designing the 747 in the 1960s, I don’t think they realized that they were basically at the end of aviation history. That 50+ years later (nearly the same gap between the Wright Flyer and themselves), the Tokyo to LA flight would take basically the same time as it would in their 747.
The history of technology is the history of plateaus. We never got pervasive nuclear power. We never got supersonic airliners. Voice control of computers is still an absurd joke three generations after it was shown on Star Trek.
It’s 2019. The folks who pioneered automatic memory management and high level languages in their youth are now octogenarians, or dead. But the “sufficiently smart compiler” or garbage collector never happened. We still write systems software in what is fundamentally a 1960s-era language. The big new trend is languages (like Rust), that require even more human input about object lifetimes.
CPUs have hit a wall. You used to be able to saturate the fastest Ethernet link of the day with one core and an ordinary TCP stack. No longer. Because CPUs have hit a wall, we’re again trading human brain cells for gains: multi-core CPUs that require even more clever programming, vector instruction sets that often require hand rolled assembly, systems like DPDK that bypass the abstraction of sockets and force programmers to think at the packet level. This is all a symptom of the fact that we’ve hit a plateau in key areas of computing.
There is no reason to assume self driving tech will keep getting better until it gets good enough. It may, or it may not. This is real engineering, where the last 10% is 90% of the work, and where that last 10% often proves intractable.
OK, so imagine someone paints these robotcar-fooling dots on the road, and a caravan of a hundred robot cars is cruising down the street. All 100 of them veer into oncoming traffic. That will destroy your safety statistics quite handily.
OMG, Imagine someone cuts all the brake lines in our current cars, or hacks the traffic system and makes all the lights green at the same time, or puts nails on the road, or turns into the boogie man and "gets" people.
Fear mongering is not the way to view problems, or to make our society better. We need to improve things, not live in fear of what "Bad" people could do to us.
> [we will reach] no-fail image based object detection level of advancement [...] my most optimistic wild guess is 10-20 years
You mean using cameras without LIDARs right? Presumably Waymo itself (cameras + LIDARs) is roughly at that level of advancement today.
In 10-20 years LIDARs should improve dramatically as well, in both price and performance. So I'm guessing cars will continue to rely on both technologies even when cameras become viable by themselves.
BTW, did you enjoy working at Waymo? Would you recommend the perception group as an employer?
In your opinion, how much does that timeline change if policy/infrastructure also evolves to support the growth, with things like dedicated autonomous lanes and roads, shared network data from vehicles in those lanes (i.e. a vehicle from Manufacturer A ten cars up stops short and sends that event to the network so cars further back can start braking even earlier and less aggressively), etc.?
Can you be more specific about the limitations of state of the art image based object detection? Is it skewed toward false positives or false negatives? What accuracy is considered state of the art? Do you mean classification of an object in an image, or merely recognising that there is an object there?
I think the real potential issue for self-driving cars is that they need to have an understanding of how people's minds work, to understand their gestures, intentions, and movements. This requires, not just visual processing, but something closer to AGI. (Not an expert, so feel free to correct me.)
Not really. You don’t really gauge traffic by looking at the drivers: firstly, you can’t see most of them; secondly, it’s dangerous to assume their intentions, which is why we have things like right-of-way rules and road signs.
The big issue is still in terms of processing speed and sensor fusion and Tesla isn’t leading the pack in either.
As far as wide-scale autonomous driving goes, for it to be good enough there needs to be an agreeable and verifiable decision-making model for all manufacturers to follow and for regulators to validate. This could be as simple as “never perform an action that would cause a collision”, but that’s not exactly an ideal model either, because you might then get a silly decision such as not engaging the brakes to slow down because you’ll end up hitting the car in front of you in either case.
Sadly I’m seeing too many people who tend to complicate things and chase red herrings when it comes to autonomous driving, such as “ethics”. While it’s an interesting philosophical exercise, in the real world it’s not the important part: when it comes to accidents, the vast majority of drivers act on basic instinct and don’t weigh the orphan vs the disabled veteran, simply because they don’t have that data nor the capacity to incorporate it into their decision-making process.
First we need to get sensors that don’t get confused by oddly placed traffic cones, birds and shadows; the rest is pretty much irrelevant at this point.
> Not really. You don’t really gauge traffic by looking at the drivers: firstly, you can’t see most of them; secondly, it’s dangerous to assume their intentions, which is why we have things like right-of-way rules and road signs.
Have you ever driven in a city? People don’t follow right of way rules and road signs. Pedestrians jump into traffic randomly. You absolutely have to be looking around trying to gauge peoples’ intentions in order to drive safely.
I think there are some situations like merging on a busy highway where something like a theory of mind is at least a little bit useful so you can negotiate entry. If you don’t, the car is either going to be too aggressive or too timid.
That’s something completely different, because you follow right-of-way laws and can gauge whether you can merge safely based on their speed. You’re sure as hell not watching what look they give you through your side mirror.
And the “theory of mind” comes down to assuming the other driver doesn’t want to cause an accident either and will slow down, in which case the estimation can be broken down to math, such as whether they would have to slow down given an average response time and their speed.
As a person you also have no idea whether they are drunk or paying attention. The only thing you can see is how fast they are going, and usually if they are going well above the speed limit you’ll assume they are jerks and won’t let you merge, which luckily is simple enough for a car to do as well.
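As a rough sketch of the kind of "broken down to math" check described above: given the trailing car's speed, the gap, and an assumed reaction time, decide whether merging would force them to brake harder than a comfortable threshold. The reaction time, the comfort threshold, and the two scenarios are my own illustrative numbers, not anything from a real planner.

    # Toy merge-gap check: would the car already in the lane have to brake harder
    # than a "comfortable" rate to avoid closing on us after we merge? Reaction
    # time, comfort threshold, and the scenarios are illustrative assumptions.

    REACTION_TIME = 1.5       # seconds before the trailing driver responds
    COMFORT_BRAKE = 2.0       # m/s^2 they can be asked to shed without drama
    MIN_GAP = 5.0             # meters of space we never want to dip below

    def merge_is_acceptable(gap_m, v_behind, v_merge):
        """True if merging leaves the trailing car a gentle way out.

        gap_m    -- current distance to the car behind, in meters
        v_behind -- their speed, m/s
        v_merge  -- the speed we would be doing once in the lane, m/s
        """
        closing = v_behind - v_merge
        if closing <= 0:
            return gap_m > MIN_GAP            # they are not catching up at all
        # Distance they eat up before reacting, plus distance lost while slowing
        # from v_behind down to v_merge at the comfortable braking rate.
        gap_needed = closing * REACTION_TIME + closing ** 2 / (2 * COMFORT_BRAKE) + MIN_GAP
        return gap_m >= gap_needed

    print(merge_is_acceptable(gap_m=40.0, v_behind=30.0, v_merge=25.0))  # True
    print(merge_is_acceptable(gap_m=20.0, v_behind=33.0, v_merge=25.0))  # False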
Facial expressions aren't required here - even reading the basic behavior of a car being driven by a human involves theory of mind. Some people are actual [insult]s and will not let you merge, or otherwise won't play a game-theoretically optimal strategy.
Right you don’t need to see someone’s face to guess what they’re thinking any more than you need to see their brain.
You can infer motivations and intentions based on how they’re driving. If you turn on your blinker and the guy in the lane next to you speeds up to close your available space, you have a pretty good idea of what they’re trying to do.
I’m not sure if there’s an algorithm that will figure out if other people on the road are complete fucking assholes that you want to avoid.
Yep tons of focus on philosophical sideshows like the trolley problem in these discussions, when autonomous cars just plain _do not work_ in the general case.
People can’t solve a trolley problem either while driving (or at all).
If you have enough time to stop when, say, a kid jumps out in front of you, you will stop. If you don’t, you don’t have enough time to process all the information around you and gauge the most optimal outcome; if you swerve, you swerve, but that is likely going to happen regardless of what jumps out in front of you, simply because that’s what your instincts are telling you to do.
The best thing an autonomous vehicle can offer is a better response time, so you’ll be more likely to stop, or to slow down to a non-fatal or less-than-fatal velocity.
Yeah, it’s a complete distraction. Even if you were living in some dystopian world where each car could ping everyone around it to gauge their citizen score and select the lowest-scoring citizen to run over in this Kobayashi Maru scenario, your cars would be much safer if they didn’t waste CPU cycles on this nonsense but rather just focused on the road.
Drivers are drivers, whether humans or robots; distracting them with nonsense wouldn’t make them better or safer, but rather quite the opposite.
Right. The trolley problem assumes vehicles can make optimal split second decisions while right now Tesla can't even tell the difference between the road and a lane divider reliably.
Human drivers do have to make split second ethical decisions though: You're driving down a country road and a small animal jumps out in front of you. Do you hit the animal, or do you swerve into the ditch? Hitting the animal would kill it, but not harm you. Swerving into the ditch might harm you.
There is no right answer to that question, because I didn't specify what the animal is. Whether the animal is a baby deer or a human child will influence your choice. This isn't contrived. When I hit a deer, I made a choice to stay on the road. Had it been a child, I would have swerved, endangering myself in the process.
I have not logged enough miles on US interstates, but on German Autobahns you surely need to read other drivers' intentions (if you want to prevent unpleasant surprises). And this is usually done by carefully observing their driving and the overall context.
- Are they zooming by, then suddenly moving to the right and slowing down? Probably got a phone call, make ready to overtake them. And watch out, they might get really slow.
- Is the car in front stuck behind a truck and slowly moving to the left? Driver probably plans to overtake the truck soon. Reduce speed to not crash into his rear end.
Looking forward to when these things will be included in the reasoning model of self-driving cars.
The general ability to understand others (and oneself) to the level of intentions etc is often referred to as Theory of Mind.
Seen from the task of autonomous driving, what is needed is a strong predictive model for what other cars/drivers are going to do, based on current observations. Considering that driving is a much narrower space than all possible human behavior, this is hopefully[0] solvable using something close to existing approaches.
0. Otherwise I don't think self-driving cars are happening soon.
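To give a feel for what a minimal version of such a predictive model could look like, here is a toy sketch that guesses a neighbouring car's maneuver from its lateral drift and rolls the track forward under that guess. The lane width, thresholds, and horizon are invented for illustration; real systems use learned models with far richer context.

    # Toy maneuver prediction: infer "lane change" vs "lane keep" from lateral
    # drift, then do a constant-velocity rollout under that assumption. All
    # thresholds and numbers are illustrative placeholders.
    from dataclasses import dataclass

    LANE_WIDTH = 3.5          # meters, assumed
    DRIFT_THRESHOLD = 0.3     # m/s of sustained lateral velocity reads as intent

    @dataclass
    class Track:
        x: float              # longitudinal position along the road, m
        y: float              # lateral offset from own lane centre, m
        vx: float             # longitudinal speed, m/s
        vy: float             # lateral speed, m/s (+ = drifting left)

    def predict_maneuver(track: Track) -> str:
        if track.vy > DRIFT_THRESHOLD:
            return "lane change left"
        if track.vy < -DRIFT_THRESHOLD:
            return "lane change right"
        return "lane keep"

    def rollout(track: Track, horizon_s: float = 3.0, dt: float = 0.5):
        """Constant-velocity rollout, with lateral drift capped at one lane over."""
        maneuver = predict_maneuver(track)
        points = []
        t = dt
        while t <= horizon_s:
            y = track.y + track.vy * t
            if maneuver != "lane keep":
                y = max(-LANE_WIDTH, min(LANE_WIDTH, y))
            points.append((round(track.x + track.vx * t, 1), round(y, 2)))
            t += dt
        return maneuver, points

    # Example: a car ahead stuck behind a truck, slowly edging left (cf. the
    # Autobahn example above) -- expect it to pull out within a few seconds.
    car_ahead = Track(x=80.0, y=0.2, vx=22.0, vy=0.5)
    print(rollout(car_ahead))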
This. Sharing a street with others requires human communication: a driver waves to a pedestrian wanting to use a crosswalk, or a pedestrian waves a driver through before doing so, or they both pause before one or the other goes first. Our streets aren't designed for robots, and while they may work on controlled-access highways in the not-too-distant future, I don't expect to see self-driving cars on city streets within the next few decades.
computer vision is clearly much harder than go. the point is that the forefront of AI is moving quite rapidly, and the consequences are bound to catch the casual observer (or even an expert) off guard.
Elon Musk also keeps fear-mongering about AI (which is what made him invest in OpenAI). His success made him overconfident, thinking he knows more about a field than the experts do.
AP3 is not going to be nearly good enough. It's just the next step in keeping the "we sell full self-driving hardware as a feature but it's not ready yet" promise alive. It's not years away, it's decades away. And for a frame of reference, the computers (as in the actual processors, RAM, memory, etcetera) in cars a la Waymo and Cruise cost more than the cars, so it's not an insignificant cost. That's completely ignoring the lidar issue and the fact that state-of-the-art just isn't good enough even when you have an extra $100k in hardware, have spent billions training your models, and have remote operators to help you out of sticky situations.
But yeah, okay, Elon Musk said it's going to happen. He also said they were going to have it years ago and we're still waiting on that cross-country trip from New York to Los Angeles that never materialized.
We'll see them roll out an error-prone stop-sign+traffic-light detection as fsd beta, and everyone will act like that was what was promised. Just like Lane keeping assist is "autopilot".
I realize this is a tangent, but I feel like it ends with Tesla going bankrupt before they ever come close to building a self-driving car, or even for the narrative to wear off for the general public. I believe a criminal investigation should be brought against Tesla for negligent manslaughter due to the fraudulent advertising and consequential deaths, but I have no expectation of that ever happening.
I just get really angry when people lie, then people die, and nobody does anything about it. As another commenter pointed out, Elon Musk himself was still demonstrating the technology as being completely autonomous just a few months ago.
I guess others don't agree, but I do. People are dying using autopilot, deaths that are likely preventable, and yet very little has been done about it. What's even worse is they're advertising safety claims based on false data, and you have to fight the courts just to have access to the document showing the redacted data.
"The upshot is that Autopilot might, in fact, be saving a ton of lives. Or maybe not. We just don’t know."
...which doesn't support the claim that banning Tesla's autopilot would be a net positive.
Maybe the effect is zero-sum, which means that banning Autopilot to prevent the deaths where Autopilot did wrong would mean other people die, because Autopilot would no longer save them from situations where they would otherwise have crashed.
If Tesla's Autopilot is demonstrably net negative, then yes, it should be banned, and banning it would save lives.
But if it's currently zero-sum, we should absolutely allow it, and allow it to be improved, because improvements to the system will probably tip it over into net positive.
It is absolutely concerning that the company is trying to spin the tech as already being net positive, without any clear evidence of that, yes, I agree. If the tech were net negative, and they were trying to cover that up, that would be even worse. But that's not the position we're in.
You are correct that the data is redacted, but I think it's a reasonable assumption to assume it's pretty bad for Tesla. If the data painted Tesla in a favorable light, they would be force-feeding it down our throats. They would do everything they could to let everybody know "here is the data proving we are safer."
Instead they are burying it behind legal procedures and doing everything they can to make sure nobody knows what the actual data says.
I cannot deny that I do not know anything with certainty. But it's more than just a guess that Autopilot is not what it's advertised to be.
Wikipedia lists 3 Tesla autopilot deaths since Jan 2016. In that time about 4 million people have died in road accidents caused by humans. I care about the deaths but 4,000,000>3.
This is an absurd misinterpretation of the data. What matters is how many miles driven per death, not how many deaths. Did you know more people die driving each year than from having bullets implanted in their brains? And yet I'd still rather get in a car than shoot myself in the face.
If you'd actually care to educate yourself, you can start by reading the article I linked in my original comment.
>Now NHTSA says that’s not exactly right—and there’s no clear evidence for how safe the pseudo-self-driving feature actually is.
Which doesn't seem so terrible. Personally I'm optimistic that as the systems get better they can roll them out to other cars and make a dent in the 1.3m/year global deaths.
That's not an unreasonable conclusion, but it's a lot messier than that.
The real issue is comparing miles driven to similar miles driven, and autopilot miles are only supposed to be on the highway in good conditions (which is when the fewest accidents occur...well, probably). But the breakdown of accidents into categories such as speed, weather, traffic, etcetera does not exist (or at least I am unaware of it). It's further complicated by demographics, where older more affluent drivers - the kind likely to buy a Tesla - are safer as well. Then it's also confounded by the fact that Tesla is not a trustworthy company, at least in my opinion, and they will give OTA updates without warning owners which can revitalize old bugs (https://www.techspot.com/news/79331-tesla-autopilot-steering...). A lack of regression testing for a safety critical system is just terrifying.
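To illustrate the comparability problem with invented numbers: a system can have a worse crash rate than humans on every road type and still post a better overall rate, simply because its miles are concentrated on easy highway driving. (A textbook Simpson's paradox; none of these figures are real Tesla or NHTSA data.)

    # Invented numbers illustrating the mileage-mix problem described above: the
    # hypothetical "autopilot" fleet has a *higher* crash rate than humans on both
    # road types, yet a lower overall rate, because nearly all of its miles are
    # easy highway miles.

    def rate(crashes, miles):
        return crashes / miles * 1e8          # crashes per 100 million miles

    human = {"highway": (50, 1.0e10), "city": (400, 1.0e10)}
    auto  = {"highway": (58, 0.95e10), "city": (30, 0.05e10)}

    for road in ("highway", "city"):
        print(f"{road:8s} human {rate(*human[road]):6.2f}  autopilot {rate(*auto[road]):6.2f}")

    h_total = rate(sum(c for c, _ in human.values()), sum(m for _, m in human.values()))
    a_total = rate(sum(c for c, _ in auto.values()), sum(m for _, m in auto.values()))
    print(f"overall  human {h_total:6.2f}  autopilot {a_total:6.2f}")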
Now admittedly, you came back to me with a reasonable response and I am throwing you a litany of "yeah, but" rebuttals. Do I believe Tesla Autopilot has the potential, when used properly, to make driving under certain situations safer? Probably. The main problem is the human element, making sure they're actually monitoring the car, informing them correctly of what Autopilot can and cannot do, etcetera. There are also issues with how Tesla not only improves the technology, but validates it.
It's the gross overpromising (honestly I believe it is probably fraud, but I cannot be sure) that makes me despise Tesla as a company. But I can admit they make a product a lot of people like. But I think a lot of people like them because they are misinformed.
Not denying that they're taking steps in that direction, but the implication that they're anywhere reasonably close to the goal is unrealistic. It's like saying we can travel to space, so we have all we need to get to the nearest star.
There is the energy limitation as well. All the equipment needed to do self driving today requires more energy than you can sustainably draw from typical cars.
Please if you downvote at least leave a comment why you think I am wrong. This is a well known fact in the autopilot community.
I don't understand the scepticism; self-driving cars are already a reality and are working very efficiently in controlled environments. Anyone working on logistics already relies heavily on it. Another point: the "dedicated TPU" is also a myth. Neural network inference is very cheap, and current nvidia hardware is able to run neural networks that would take years to train.
Highways are quite controlled compared to city driving. You have separated traffic directions, and no traffic is allowed below a certain speed. Markings are usually in good condition, and any roadworks on the highway are marked and secured a long distance before the actual work site. I was driving a new car a couple of months ago with lane assist, front sensors and cruise control. The only thing missing was for it to overtake trucks on its own, and that would have been it.
That said, people do stupid things on highways, but one step is to allow only autonomous cars to drive there and we are done. In cities and rural areas that is not going to happen, because the costs would be too high.
I would be happy enough if I could get a self-driving car only for highways. If someone expects fully autonomous vehicles that can go everywhere, that is not something I think will be available in the next 100 years.
Industrial robotics applications are limited in scope and the environment is adjusted to make the problem easier than for autonomous driving on roads. Even then there are issues that mean often a human operator/supervisor is needed. General autonomous vehicles systems are not widely used in industry if at all.
Making a system that works on roads with other cars and pedestrians and varying quality of road markings and infrastructure is a different problem entirely. Tesla is trying to do this with a limited set of sensors and compute compared to others, which makes it even less feasible. Waymo is by far the furthest ahead and they don't claim that full autonomy is close like Musk is claiming.
This is not the standard we need to meet. It's obviously the goal, but autonomous driving doesn't have to be perfect, it has to be better than the median human driver. Well, we need that plus a regulatory environment that handles the liability in such a way that results in countless lives saved, even though some might die due to imperfect autonomous driving.
I've long been of the belief that in order to get real autonomous driving we'll have to re-engineer major roadways. Today they're designed to maximize effectiveness of human sensors. I'm convinced that there must be a way to maximize effectiveness of non-human sensors without harming the effectiveness of human sensors. I don't know what that looks like. It might look like putting passive RFID in the reflectors on the road, or some other thing not yet imagined. But I do think that will ultimately be the future.
I keep seeing people say that, and in a society-wide kind of way I think that's true, but for people to actually accept it, it will have to be better than that. Everyone thinks they're a better driver than they are, and all the failures and crashes will be attributable to either the autopilot or the passenger-driver.
Also, if we're going to need roads upgraded for autopilot, it's going to be much, much more limited. Roadworks take so long, and we can barely keep roads together in some states.
I'm thinking about a time scale much longer than silicon valley is accustomed to. Moderate use highways are repaved about every 10 years. The highest use roadways are repaved every 2-3 years. Cars will need to be able to fall back to current-ish mechanisms for the foreseeable future, but we could get the locations where most human miles are driven repaved with some new tech within a decade.
There's more than one moving target here. One is when regulators allow the vehicles on the roads en masse. I think that should be at the point they're "better than the median driver." As far as user acceptance, I think once the price-point of automated driving as a service is cheaper than private ownership it will take care of itself. It will likely take a human generation or two to get the majority of people in autonomous cars.
My parents are quickly nearing the age that they will lose their independence/ability to drive. I would love the ability to put them in an autonomous car that is even a little bit worse than the median driver.
Yeah, from a raw statistics standpoint, 'better than the median driver' is the first point where a mass switch to driverless cars and allowing them on the road makes sense. The big stumbling block to me, though, is that it takes the diffuse responsibility of individual bad drivers and puts it all on the autopilot systems. It's the switch from the generic, vague 'drivers' or 'distracted drivers' kill XXX people a year to Company Y's autopilot has killed YYY people this year.
I worked on self-driving cars in 1991 at Daimler-Benz. We used prominent horizontal lines with significant bilateral symmetry and an overall trapezoidal shape as a proxy for a car ahead. (Such was the state of computer vision [and prevailing automotive design] at the time.)
We were running tests on a wet day at a disused airfield in Bavaria, and during a test run the car (a bus, really) slammed on the brakes, throwing some of the occupants forward, and then, like a recalcitrant horse, refused to move forward. It turns out that the drying pavement on one of the taxiways had created a visual pattern on the ground that our vision system ID'd as a nearby car.
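For the curious, here is a crude numpy sketch in the spirit of that heuristic (strong horizontal edges plus left/right symmetry as a proxy for the rear of a car). The synthetic patches and thresholds are mine, not the original system's; the "drying pavement" patch is there to show how a straight wet/dry boundary can satisfy the same test and trigger exactly that kind of phantom braking.

    import numpy as np

    # Crude car-ahead heuristic: strong horizontal edge + left/right symmetry.
    # Synthetic 40x40 grayscale patches and thresholds are purely illustrative.

    def strongest_horizontal_edge(patch):
        """Largest mean row-to-row intensity jump (responds to horizontal lines)."""
        return np.abs(np.diff(patch.astype(float), axis=0)).mean(axis=1).max()

    def symmetry_score(patch):
        """1.0 when the patch equals its left-right mirror image, lower otherwise."""
        diff = np.abs(patch.astype(float) - patch[:, ::-1].astype(float))
        return 1.0 - diff.mean() / 255.0

    def looks_like_car(patch, edge_thresh=50.0, sym_thresh=0.9):
        return (strongest_horizontal_edge(patch) > edge_thresh
                and symmetry_score(patch) > sym_thresh)

    car = np.full((40, 40), 60, dtype=np.uint8)        # dark car body...
    car[25:30, :] = 220                                # ...with a bright bumper stripe

    pavement = np.full((40, 40), 180, dtype=np.uint8)  # dry, bright asphalt...
    pavement[20:, :] = 70                              # ...above a straight wet boundary

    rng = np.random.default_rng(0)
    texture = rng.integers(0, 256, size=(40, 40), dtype=np.uint8)  # unstructured road texture

    for name, patch in [("car rear", car), ("drying pavement", pavement), ("random texture", texture)]:
        print(f"{name:16s} edge={strongest_horizontal_edge(patch):6.1f} "
              f"symmetry={symmetry_score(patch):4.2f} car? {looks_like_car(patch)}")

The pavement patch trips the detector for exactly the reason described: nothing in a heuristic like this distinguishes a car's horizontal edge from any other symmetric horizontal boundary on the road surface.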
Based on that overall experience, I held the same opinion then as I do today: "never". (I was an intern at the time, though I quickly took over the implementation and operation of our speed control and vision system.)
I have to admit: much more progress has been made than I expected in the intervening ~30 years. I still think that full self-driving with zero outside communications/support is "never", though "mostly self-driving with selective help sometimes" might be possible within the next 25 years.
It is interesting to see logical descendants of work that group did appearing in road cars. (CAN bus, Adaptive Cruise, park assist, probably others that were lab projects at the time...)
I don't mean this to be insulting, but if the intern is taking the point position on "speed" and "vision" you can hardly call that a serious engineering project.
There were in fact experts on topics like computer vision, even 20 years ago. They were expensive, though.
Implement/Operate point of certain subsystems, not theoretical or leadership point. (I take no offense to your comment as it's spot on if that were the case. My apologies if I misled.)
I was implementing an improvement and operating our vision system (doubling frame rate and resolution using an existing algorithm across a scaled up processing system). Because of where our code sat in the overall architecture, that I had control systems coursework under my belt, and that I was "done" with my vision processing project much faster than expected, I took on coordinating signals from a variety of other subsystems [radar, lane keeping, sign reading, navigation, etc], computing an overall speed and lateral acceleration target, steer angle, etc, and emitting those and a series of additional signals to downstream systems that controlled the servos and vehicle. Ernst Dickmanns was my boss's project director. I was in no way a "point" on the overall project, which was by all accounts a serious engineering effort.
This was part of the Prometheus project (~30-35 years ago, not 20):
Why? You think we should be moving around in open air? Or you think we should all be moving though high density channels like trains?
I don't see what's wrong with cars if they are minimally heavy, aerodynamic, and managed as a fleet. Buses are certainly not better. And trains only solve half of the problem. (Thoroughfares.)
I absolutely adore the bubble people on HN live in sometimes. 99.99% of people in the world are not computer engineers who have the luxury of doing their jobs from anywhere (including home). It takes a day of interacting with people in the real world (instead of reading about them on the internet) to realize that no, daily transportation to and from jobs is not going absolutely anywhere.
And this is not meant to be a personal attack on you - just an observation of a very common sentiment displayed by many HN users.
Taking the top 3 most common professions here in Sweden by a large margin: sales, education and health care.
Every year more sales convert into online sales. Computer engineers and salespeople go to the same office buildings and sit at similar desks at similar computers.
Education is also converting into online education. Self-study is increasing at practically all levels, and the cost of buildings and classrooms could be saved to pay for more teachers.
Health care is similarly transforming. A lot of primary care is done online to save costs and to filter out cases that actually need a doctor's visit. Telepresence is also creeping into elder care. In theory this pays for more doctors.
So slowly the 99.99% of people of the world are doing jobs which, from a mechanical point of view, are identical to computer engineering: travel from home, enter the office, do work, go home.
Not necessarily. Driving is one of the most common tasks worldwide, so it makes economic sense to pour large amounts of resources into automating it. Beyond sensors and computer components, it also doesn't require any new hardware.
Most forms of cheap labor in the West don't fit that description.
Since you think standalone self driving cars will never happen, I take it that you also believe an artificial general intelligence also will never happen?
Probably. There is a difference between artificial general intelligence and self-driving in that it could be the sensory acquisition/processing and/or the untethered mobility that is the limiting factor for self-driving rather than specifically computing advances.
(IOW, artificial general intelligence in a datacenter does not necessarily imply untethered self-driving.)
I share that sentiment, and in light of that, do you think the better and more economical way forward is possibly to not improve self-driving cars as much but rather the infrastructure around them? Geofencing, urban and street design conducive to autonomous vehicles, and so forth?
I was driving towards an oncoming Tesla recently on a 2 lane country road and it lane changed neatly into my side and drove straight at me. Closing speed 160km/h or so.
Fortunately it chose to do so with enough time for me to stop, and for the driver to re-take control and take my shoulder as a stopping place. If it had made the decision a few seconds later, it would have been a different story.
I think old snow across the centre lane markings was the cause, but if you can't handle the edge cases, then perhaps you shouldn't release.
It doesn't change lanes when it knows it's changing lanes. If the Tesla thought it was staying in the correct lane, it wouldn't ask for the confirmation.
This discussion is under a nonsense test report and headline btw. The car was moving slowly, around what could easily be identified by gps and vision as some sort of test track or empty lot, with varied and inconsistent markings and no other traffic in sight.
The software running on drivers has been under active development for much longer than Tesla. (I'd start bidding at billions of years, though I'm willing to get argued down to tens of thousands.)
Teslas don't change lanes without confirmation provided they know where the lane is. The problem is that it seems pretty easy to fool a Tesla into thinking the lane is somewhere it isn't.
So you didn't take over when the car crossed the line?
The lane changes are so slow that you have plenty of time to correct if the car makes a mistake.
You're supposed to have your hands on the wheel, and pay full attention, at all times. This means being able to take over if the car drifts into the oncoming lane, or has trouble deciding what the lane is.
Sure, and it had supervision. But Tesla's software made the choice, and the supervisor overrode it, but only because he fortuitously had time to do so.
If a passenger aircraft has an autopilot problem that causes a crash, do you blame the manufacturer or the pilots? The pilots are always supervising the plane, and it's not autonomous, right?
>If a passenger aircraft has an autopilot problem that causes a crash, do you blame the manufacturer or the pilots? The pilots are always supervising the plane, and it's not autonomous, right?
I actually agree with you, but this is not a good point.
In many cases (not all, but many) pilots are definitely partly to blame for an auto-pilot failure causing a crash, as they operate on the assumption that there are many fail-safes, and the pilot is definitely one of them.
A Tesla having a failure and a driver reacting to it is very similar to the pilot acting as a fail safe to a malfunctioning auto-pilot which definitely happens.
Where it falls down is that drivers are not a trained, responsible part of the fail-safes (in my view); you can't rely on them paying attention and reacting in time, partly because they aren't paid professionals with simulator training, and partly because they don't have 30,000 ft of clear airspace as a buffer to recover in.
Judging by a lot of the discourse immediately following the most recent 737max incident, people were definitely blaming the pilots. Thankfully that's cleared up a bit now.
I think that's a very relevant analogy. By the standards of the average driver, the 737 Max pilots were extraordinarily well trained and highly skilled, and they had a lot of time to correct the software error relative to most conceivable problems with a semi-autonomous vehicle. But regulators have still (reasonably) decided that their inability to cope with software quirks should be treated as a software problem and not a problem with the pilots' failure to rectify the erratic behaviour.
You and I must have read very different discourse. Anyone suggesting that Boeing not be immediately burnt at the stake was not, and is not, received positively, at least not on HN.
I encourage you to read the book "Thinking, Fast and Slow." This book explains how System 1 is trained with deeply-ingrained biases through repeated feedback-based experiences. Tesla then says, "But make sure your System 2 is ready to take over in an instant!" This simply isn't how the human brain works.
The real problem with any self-driving system is that a good one will breed complacency. After you spend enough time seeing how well it works in practice you'll become more and more prone to do stuff like check your phone or watch a YouTube video. I mean, the car can handle itself, right? You've gone thousands of miles watching this thing effortlessly drive itself, so why not enjoy the promise of self-driving technology, which is to free you from the tedium of driving? It's just human nature.
While people would get angry at being tricked and would put pressure on the city to remove the adversarial sticker, self-driving cars do not have the sympathy of the public.
Before the image itself, there is "a School Zone sign, crosswalk, an extended curb and a sign by Preventable that reads: You’re probably not expecting kids to run out on the road". It's hard to imagine that anyone would be going fast enough to cause any damage.
Furthermore, the image was there only for a week. I somehow doubt drivers would suddenly start to ignore children after they saw a fake image of a child five times.
Looks like a great way to train drivers to override their instincts and keep going thinking "it's just that stupid sticker again", even if it's a real kid standing there.
About 300 non-occupant children (i.e., in the street or on the sidewalk) die in car crashes each year in the US. About a third of those are in school zones. This means two things: one, a solution that forces slower driving in school zones would not prevent the majority of fatalities, so if that solution were counterproductive outside of school zones, it could reasonably increase fatalities. Two, 300 deaths, or 100 in school zones, suggests the vast majority of car-child interactions are not fatal or even injurious. A solution that prevented a couple of the existing fatalities could easily turn more of the much larger number of safe encounters into dangerous encounters. And this is plausible because right now, people are watchful and quick to slam on the brakes when they see a child, even out of the corner of their eye. Teaching them that an area where children walk has a "child" they can ignore would weaken that reflex where it should actually be strengthened.
I understand you are concerned about children as well, so I would advise looking into policies regarding parking on the side of the road. Roundabouts in neighborhood intersections near schools also have demonstrated a natural slowing effect that does not seem to have negative externalities beyond the cost to implement the roundabout.
I guess it's a bit ambiguous, but I think the thing that he says was "patched way back in 2017, and again in 2018" is not the hack that uses adversarial dots on the road to fool the system, but the other hack where they inject malicious code into the autodrive system over Wi-Fi and then the CAN bus, and then take over control of the car from a cellphone.
So maybe the lane marking thing is still unfixed?
I'm also a bit amused by the suggestion that we should feel even safer because it was fixed twice! :)
Rationally, shouldn't we feel safer if it's patched twice? It seems irrational to prefer a state where it was patched once over one where it was patched twice. It would seem to indicate that they dedicated time and resources to resolve it once and then did proper follow-up to catch missed cases. It certainly points to it being a pernicious issue, but I can't see why you'd presume it's worse that it's been patched more than once.
On the "we should be worried" end of the scale, we have:
"we fixed a bug but didn't add a test, then accidentally reverted the bug fix, had to fix it again, and still didn't improve our process to prevent this from happening again"
In the exaggerated version, they don't use source control, so they had to figure out how to fix the bug twice, and they didn't even add a test; in both cases, they only believed they fixed it.
Going back to your example, it depends on how they detected that they missed cases, and on how sure they are they know them all now. If (100% hypothetically) they only detected it after ten people died, denied existence of the problem, and were sued in court, we should be worried.
In the real world, just like all other companies working on autonomous driving, I think they will never be able to say they truly fixed the problem. The problem is, and always will be, underspecified, and even if we knew the problem well, we don’t really know for what kinds of problems the software we use to do this is a reliable solution.
I would be a bit more worried about Tesla than, say, about German car manufacturers, because Tesla has shown itself willing to sail close to the cliffs. Worried enough to be really worried? No.
Because the fact that it was patched twice means they didn't fix it the first time, which means they didn't test it properly and might not have fixed it the second time.
I'm not sure the number of patches actually provides any definitive information about the current state, from a purely logical standpoint.
Zero patches, one, two, or more. That there are patches merely tells you that they tried to respond to an identified threat. It doesn’t tell you the threat was properly mitigated in any case.
I think we can say it is strictly better to have more than zero patches in the case where a vulnerability is public. Exactly one patch to me doesn’t guarantee anything about the quality. Multiple patches at least implies persistent effort and dedication to securing the fix. Maybe at some point the number of patches implies incompetence but I’m not sure I’d draw that line at “2”?
Not really; a hotfix is a "do no harm" situation, where an initially conservative approach that's unlikely to cause harm may be deployed. That's true even if it's known not to completely solve the problem.
Now, in the case of an undisclosed vulnerability, waiting may be preferred, as a patch is going to get people to look in that direction. However, with public disclosure the clock is ticking.
While I agree about the first and third issues, he's rather dismissive about the second one: "The attack relies on placing dots on the road to trick the car. Lots of them. [...] I wouldn't worry about it." The demo shows 3 small dots can trigger a lane change.
Hypothetically, it could be a very targeted attack if the attacker drives in front of the target to put the paint on the road, and then a car behind the target covers it up. If the cover-up looks just like normal road (no idea how easy that is to achieve), and if nobody is looking for it because it's not a common thing to do, it might never be found. There are probably much better attacks that one could come up with if one gives it more than two minutes' thought, but it seems realistic enough to me to be interesting. Or if not a targeted attack, it can be very disruptive once there is some adoption.
Then again, people could already be disruptive and leave stuff on the road or rails, and experience tells us it's very rare (per driven person-kilometer at least). Something about tricking the computer seems more likely to happen than putting physical objects to harm others directly, but that's probably just between my ears.
I decided to link the tweet as it's difficult to link to a specific portion of the paper and the paper had multiple topics that were hard to include in an 80-character title. As the tweet links to the paper, it's very easy for anyone interested to read about it in full.
- Linking to a specific page might not work for many setups such as those who use an external program to open the pdf
- The tweet is a lot easier for many people to digest who may otherwise pass on reading (even a section of) a 40 page pdf
- The full original source (which often I would prefer to link) was easily available in the tweet, so it wouldn't be hard to find for anyone who wanted it
- Coming up with a suitable submission title (80 characters or less) for the PDF was difficult as I wanted it to include information about more than one of the sections (such as both remote control + lane changing)
In fairness, if you put enough fake markings on the road, any human will get confused too. The important thing here is how small and how few the fake markings were, not the fact that this was possible at all.
Of course if you put bad input into a system you are going to get bad results, whether human or computer. You periodically read about dumb kids messing with stop signs and causing accidents.
Humans have enough intelligence, though, to be able to figure out when the other lane is oncoming traffic and those spurious markings are in fact spurious (snow or a plastic bag).
Sure, but a self-driving car might know it should stop at an intersection even if the stop sign is missing, because it has a database of stop locations. Computers and humans are going to have different weaknesses, but if you are intentionally trying to fool them you are going to be able to find a way to do it.
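To make that concrete, here's a minimal sketch of what a stop-database fallback might look like. Everything in it is my own assumption for illustration (the function names, the flat-earth distance maths, the 30 m radius, the toy map), not anything Tesla actually ships.

    # Hypothetical sketch: fall back to a mapped stop location when the camera
    # misses the stop sign (e.g. because it was vandalised or removed).
    import math

    # Toy "map" of known stop locations as (latitude, longitude) pairs.
    KNOWN_STOPS = [(37.4275, -122.1697), (37.4301, -122.1652)]

    def distance_m(a, b):
        """Rough planar distance in metres between two nearby lat/lon points."""
        dlat = (a[0] - b[0]) * 111_320
        dlon = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))
        return math.hypot(dlat, dlon)

    def should_stop(position, sign_detected, stop_radius_m=30.0):
        """Stop if the camera saw a sign OR the map says a stop is nearby."""
        near_mapped_stop = any(distance_m(position, s) <= stop_radius_m for s in KNOWN_STOPS)
        return sign_detected or near_mapped_stop

    # The sign was stickered over and missed by vision, but the map still says stop.
    print(should_stop((37.42752, -122.16975), sign_detected=False))  # True

The point is just that a map prior and a vision detection have different failure modes, so an attacker has to defeat both.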
The thing that concerns me more is the machine's confidence. A human seeing unusual lane markings would be confused and exercise extra caution. Confusing a person is easy, but I don't know how you could make them confident there is a nonexistent lane.
I think you just explained to me why I've found robots/machines to be dangerous when they are operating. And that's because they don't have any caution. They will accelerate swiftly, operate at high speeds, and all that. They do it all well, until something goes wrong. Suddenly, having no caution becomes a huge liability.
Not where it snows. I grew up in a place with real winters and aggressive plowing and salting of roads. Drivers there don't rely on what's painted on the roads since it's often obscured, and the state DOT doesn't bother putting much information on the road surface because, even when it isn't obscured, it's usually too damaged from the plowing and salt to be of reliable use anyway.
It was one of the notable differences when I relocated to CA. CA puts a lot more information on the road surface, and CA drivers correspondingly rely on it more heavily.
Not really. Traffic on 95 in NC/SC/GA keeps moving at 50 even in southern downpours so heavy the lane markings are invisible. Here in DC the lane markings are ordinarily entirely off, because lanes get closed when there is construction on a building by the road. People figure it out. Self driving tech that relies on lane markings isn’t viable in the real world.
Humans have millions of years of evolution, particularly at selecting for predictive behavior within real world physics (sight the prey, lead the shot, kill the prey; avoid the predator, etc).
Combine that with past experience about what the shape of the roadway is, and a human can fill in a LOT of missing data and correctly read clues to get a 'good enough' update on the situation and move forward.
Usually that's along the lines of: follow the car in front of me, and match their changes in motion to avoid the badness.
As long as things work, which they will most of the time on a freeway where no one's screwing around, you'll get out the other side without issue.
There is a road on my way to work that has a darker asphalt line that goes diagonally across the road for about a quarter of a mile. Inevitably, I and others follow this line before realizing we are leaving our lane.
A human can be confused, for sure, but humans have learned to handle non-standard situations by watching what the drivers ahead are doing. If all the drivers ahead keep going straight, even when the road markings are confusing and may look like a lane change, then most drivers will also go straight.
Unless I missed something, they don't demonstrate tricking the Tesla into driving into opposing traffic, but rather into a lane that could contain opposing traffic but doesn't. If there were opposing traffic in that lane, would the car still change into it?
If you put up a sign saying "change into the opposite lane" and it looked safe to do so, I'd probably obey the sign while driving my car too.
> If there was opposing traffic in that lane, would the car still change into it?
Thought experiment for human drivers: if there were a stationary fire truck in your lane, under what circumstances would you drive into the back of it, without braking?
Perhaps if I was following a tall vehicle too close then they swerved to avoid the firetruck rather than stopping....?
"The Model S was traveling behind a pickup truck with Autopilot engaged. Due to the truck’s size, the Tesla’s driver was unable to see beyond the vehicle in front.
“... The pickup truck suddenly swerved into the right lane because of the firetruck parked ahead. Because the pickup truck was too high to see over, he didn’t have enough time to react.”"
> Perhaps if I was following a tall vehicle too close [..] "The Model S was traveling behind a pickup truck..."
I'm struggling to see how the geometry works here: how close do you have to get to a pickup for it to completely obscure a fire truck in the lane ahead? I can't imagine I'd be happy being in a vehicle that close to another travelling at those speeds.
Q: Does Tesla's Autopilot simply let you drive(or perhaps more accurately: "be driven") that close to the vehicle in front?
I used to be afraid of stuff like this, then I realized that you know what, prison time is a heck of a deterrent against this, which will probably take care of most of any actual risk (at least on a "this is the end of society as we know it" level).
I think you're interpreting this work too narrowly. Don't think of it as having found the one thing that can go wrong with autopilot. Think of it as one example piece of evidence for how (non) robust the autopilot is.
The problem isn't someone doing exactly this, it's that the Tesla auto pilot is based on a bunch of heuristics and assumptions that if they do not hold on certain stretches of road people may die. It's just not safe enough to claim it's a robust working system which is what Tesla and Musk have pitched and sold it as.
Human driving behaviors are also based on a bunch of heuristics and assumptions that do not hold in all road conditions.
It's good to expect autonomous/semi-autonomous systems to improve on humans, but it's not good to expect zero edge cases to be the bar that must be crossed before allowing this.
It's likely that even any significantly improved (vs humans) system would also have new fatal edge cases while also sharing some existing fatal edge cases. What's important is that the systems are not overall worse than humans alone and that they continuously improve.
The thing about humans is we're relatively good at understanding our own heuristics, and rating the confidence of those beliefs. If there's a storm we reduce distractions and pay more attention to the road, drive more conservatively. Other passengers will understand not to distract the driver.
But if the car is confused by some occasional paint splotches on the road that clearly aren't lane markings to our human eyes, we don't have any understanding that the car is being misled. Like that video the other day of the Tesla clipping the barrier - up until about a half second before impact I would have assumed it knew what it was doing and was on track to avoid it.
It's unrealistic to expect a human driver to take over immediately in failure scenarios that the person can't recognize.
I think there's lots of work to be done in quantifying and comparing discrepancies and lack of confidence in neural net classifications and sensor/model outputs. Part of this is human interface design.
An example of this is the "sensor disagree" light on the Boeing 737 max (which unfortunately was optional equipment not installed for the Ethiopian Airlines 737 Max crash), although I do believe we can do much better than a mere light. A full screen and a sound system with over-the-air updates means we ought to be able to field a really good (and continuously improving) system for communicating when the autopilot's sensors disagree or if the model has low confidence.
This is likely going to take years or even decades of operational use over millions of vehicles to refine with good confidence.
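As a rough illustration of the kind of interface I mean, here's a toy monitor that surfaces disagreement or low confidence to the driver. The names, the thresholds, and the idea of comparing a vision estimate against a map prior are my own assumptions for the sketch; this is not Tesla's or Boeing's actual logic.

    # Hypothetical sketch: tell the driver when estimates disagree or confidence
    # is low, instead of silently acting on a single estimate.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LaneEstimate:
        offset_m: float      # lateral offset of the lane centre from the car, in metres
        confidence: float    # the estimator's self-reported confidence, in [0, 1]

    def autopilot_alert(vision: LaneEstimate,
                        map_prior: LaneEstimate,
                        max_disagreement_m: float = 0.5,
                        min_confidence: float = 0.7) -> Optional[str]:
        """Return an alert message if the two estimates disagree or either is unsure."""
        if abs(vision.offset_m - map_prior.offset_m) > max_disagreement_m:
            return "SENSOR DISAGREE: camera and map lane estimates differ"
        if min(vision.confidence, map_prior.confidence) < min_confidence:
            return "LOW CONFIDENCE: lane position uncertain, please take over"
        return None

    # Example: vision fooled by stickers thinks the lane shifted 1.8 m; the map disagrees.
    print(autopilot_alert(LaneEstimate(1.8, 0.9), LaneEstimate(0.1, 0.95)))

Even a crude rule like this gives the human something better than silence right up until the half second before impact.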
> What's important is that the systems are not overall worse than humans alone and that they continuously improve.
I work in the industry. These systems are currently far worse than humans and the edge cases are numerous. We have so much brain power and so many years of experience that the heuristics an L2+ system uses to make decisions are not really comparable; they are far more simplistic and much worse. We can handle new situations; these cars cannot. The fact that you are making this comparison between human heuristics and the vehicle's system just highlights that people don't understand the issues.
Mix in the amount of overpromising and overselling that Tesla is doing with regard to the abilities of these systems, and people will be overconfident in them and end up getting complacent, leading to accidents.
Tesla's approach to development, release and marketing is not a good way to build safety critical driving systems. Overhyping them to people who don't understand the nuance is my biggest concern.
It's not just Tesla that gets confused. Other cars with lane assist also get easily confused if there are stray markings on the road. I'm pretty sure my Passat would do the same thing.
Same for humans too! Not trying to state the obvious, but Figs. 31 and 32 in the paper might also make humans believe that there is, or should be, a lane, although it looks like an error. The changes to the road are certainly not minor at all!
What is critical is to consider further context, and it's unclear whether the interpretation of the image by itself would cause the AV to make an incorrect decision. It's part of a large amount of information consumed in the decision-making process, I assume, again similar to humans.
Is it? I would say Waymo is a lot closer to being the poster child; a significant number of people have realized that Tesla isn’t true self-driving despite their claims.
Disclosure: I work at Cruise (a self-driving car company)
How you implement these systems matters. GM Super Cruise (not to be confused with Cruise Automation) uses HD mapping + GNSS + visual cues and could potentially do better thanks to having priors from the map (a toy sketch of that idea follows below). Tesla's approach, which doesn't use a map, is likely more susceptible.
GM Super Cruise is the best system out there right now, and they are very modest in their claims. All these systems have problems.
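To make the map-prior point concrete, here's a toy confidence-weighted fusion of a mapped lane position with a vision estimate. It's a sketch under my own assumptions (the weighting scheme, the numbers), not how Super Cruise or any production system actually works.

    # Hypothetical sketch: blend a map-derived lane prior with a vision estimate,
    # weighting each by its confidence, so a fooled camera gets pulled back
    # towards the mapped lane instead of steering across the centre line.

    def fuse_lane_offset(vision_offset_m: float, vision_conf: float,
                         map_offset_m: float, map_conf: float) -> float:
        """Confidence-weighted average of two lateral lane-offset estimates."""
        total = vision_conf + map_conf
        return (vision_offset_m * vision_conf + map_offset_m * map_conf) / total

    # Stickers shift the vision estimate by 1.8 m, but the HD map (with good GNSS)
    # is confident the lane hasn't moved; the fused estimate stays near the map.
    print(fuse_lane_offset(1.8, 0.4, 0.0, 0.9))  # ~0.55 m instead of 1.8 m

A vision-only stack has no second opinion to blend in, which is part of why it seems easier to fool.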
Tesla is the one making the big claims and drawing attention to themselves. How many others are claiming full autonomy is just around the corner? They bring the publicity on themselves and can't have it both ways.
Tesla's is supposed to be more than lane assist, with claims that only regulators are keeping it from being released by now.
When combined with the big Neuralink announcement that was supposed to happen weeks ago, you may be able to monitor Autopilot while you sleep by having it tap into your dreams. They are also going to give us point-to-point earth rocket travel for less than a first-class ticket within 8 years.
I'm joking about Neuralink, though it did miss an announcement, but I'm not joking about the rocket one.
If you added up all the R&D money spent on these self-driving features, we probably could have already covered 80% of American roads with simple guidance systems that would assist in piloting cars safely. Then nav systems wouldn't need to be as complex, and wouldn't depend so much on trying to guess road factors.
If self-driving really is the future, the government's going to need to be involved, the way it is in building and maintaining roads. Many roads already have cameras and sensors built into them just for measuring traffic, so it's not a big leap to say it should also have tech to improve automated driving.
For articles like this there always seem to be a few comments suggesting that, statistically, Teslas are the safest cars.
Are the cars safe or is it the drivers? The population of Tesla drivers is far from average; the average driver certainly can't afford a 40-80k+ car and may not be interested in electric.
Just because lane keeping assist reduces accidents doesn't mean we can remove human drivers. As long as the driver has to keep a hand on the wheel, as with Autopilot, it remains an assistive technology and does not support any arguments for driverless technology.
While the idea is worrying, note that there is no "opposing traffic" in the test performed, and very little info on how the car behaved other than a couple of seconds of a TV show. The entire conclusion is:
> Tesla autopilot module’s lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand and dust interference), but it still doesn’t handle the situation correctly in our test scenario.
Not one word more, see last two pages in the paper.
Good work by the researchers. I'm sure Andrej Karpathy and team are already working hard on improving this part of Autopilots neural network.
Most of the updates seem to show the system memorising things it sees at a distance and building a "mental image" of the environment, adding more detail as it gets closer, instead of assessing every frame individually.
Interesting. I wonder if, as the number of Teslas on the road increases, their "mental models" could be shared so that obstacles detected (with high confidence) by earlier cars can be shared with later cars traveling on the same stretch of road. Like how drivers sometimes signal road conditions (speed traps being probably the most common) to other drivers.
Eventually, that should probably be standardized in an open broadcast format available to other non-Tesla vehicles as well; a toy sketch of such a message follows below.
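Purely as an illustration of what an open broadcast format could look like (the schema name, fields, and confidence threshold below are made up, not any real standard):

    # Hypothetical sketch: broadcast a high-confidence obstacle detection in an
    # open, vendor-neutral message so following vehicles can use it as a prior.
    import json
    import time

    def make_obstacle_message(lat: float, lon: float, kind: str, confidence: float) -> str:
        """Serialise a detection; only share detections we're confident about."""
        if confidence < 0.9:
            raise ValueError("only high-confidence detections should be broadcast")
        return json.dumps({
            "schema": "example.org/road-obstacle/v0",  # made-up schema identifier
            "timestamp_s": time.time(),
            "position": {"lat": lat, "lon": lon},
            "kind": kind,              # e.g. "debris", "lane_closure", "stalled_vehicle"
            "confidence": confidence,
        })

    print(make_obstacle_message(37.4275, -122.1697, "debris", 0.97))

The hard parts, of course, are trust and spoofing: a shared channel like this would need signing and a way to revoke bad senders, otherwise it just becomes another adversarial input.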
That would certainly make sense. A huge benefit would be it would make it a lot easier for the cars to co-ordinate which lanes they should be in to maximise road throughput.
This is an example of releasing things that are potentially unsafe with a "meh, we're Silicon Valley enthusiasts, let's fail fast and fail often" attitude.
The slight problem with this approach is the failures end up killing a few human beings. Sure, you can learn from the failures but is this the right way?
Very callous and self-righteous behavior from these Elon Musk-style people.
101 at 85 in Mountain View has a bunch of lane dividers with their far ends lifted and turned hard to one side. Might this already be happening, albeit accidentally?
The fact it’s the same junction as the fatal crash is also interesting.
I live along US 290 near Houston which has been a total shit show of construction and hazard for the past few years. Adversarial lane markings don’t necessarily require malice.
On 290 confusing lane markings, old lane markings, changes in surface and grade, are hard enough for human drivers. I wonder how state of the art self driving would handle these situations.
It's been a treacherous stretch of highway. I can't imagine the construction is being done in a safety-compliant manner, but I'm not an expert.
Figs. 33 and 35 are really lacking. Can we see the stickers at another angle? What is on them, just a white mark?
I'm interested to see something more along the lines of Figure 19. That is, could you take a sticker of a stop sign, add some secret noise to it, stick the sticker on top of a regular stop sign so no one knows, and then have the car roll through the stop sign?
Perhaps. There was a road sign on my street that got hit with a paintball or an egg by some kid, and for years after that it was difficult to see at night. Somehow, whatever was on the sign interfered with the retro-reflector coating, even though it looked perfectly normal during the day. (Not a stop sign, just a narrow road sign for a single lane bridge.)
I don't understand the haste with self-driving cars. The technology and the related legal issues (who is responsible for accidents?) are not prepared for mass use. Why put cars with half-baked self-driving technology on regular roads and highways with other drivers who didn't consent to this? Why not start in predictable and tightly controlled environments (e.g. the inner territory of a factory or a cargo port), limit their speed to 10-20 mph, spend a few years refining the technology, and only then gradually expand the operating envelope?
When Boeing puts half-baked autopilot technology into MAX8 that's a horrible negligence. When Tesla does a similar thing it's for some reason OK.
Maybe we just shouldn't have self-driving cars. The bias of the HN crowd seems to be "oh, technology will advance and solve the problems; just give it time." No code is perfect; no system is 100% secure. This isn't just about malicious cases; what about situations where old lane lines are clearly visible? You can't test perfectly. I'd make a similar argument about things like nuclear centrifuges, namely that there are places where computers should be relegated to their absolute minimum involvement.
Or maybe we should accept that no system is perfect and that cars that kill people at a lower rate than human-driven cars are a good idea, even if they keep killing people.
This is hardly tricking the Tesla. In the video they have no cars nearby for the Tesla to also infer from, so saying it's "driving into opposing traffic" is misleading.
I would imagine the Tesla would do fine if there were cars in front of it, and cars in the oncoming lane, that it could infer from.
With that being said, I'm sure there are plenty of corner cases where the car can be legitimately tricked.
It seems like a Wile E. Coyote attack on Autopilot - draw a line that turns into a wall, and the car will follow it blindly. I don't see how this is significantly different from human driver actions. You could put cones up and a human would deviate as well, or put up something they don't recognize to effect the same reaction.
Don’t know why you were down-voted. It’s quite true that humans have the same kind of vulnerability. Drawing misleading things on the road could kill human drivers, too. It’s not a new threat, it’s just that most people aren’t murderers so it’s not a serious problem.
Because it is not just about adversarial samples. What about old lane markings, some paint spillover, contrasting colors in the asphalt, or tire marks in snow? They'll all come across as new lanes.
These lane keeping assist systems are nowhere near ready to be called driverless.
These sorts of systems tend to be far easier to trick than humans with certain sorts of "camouflage". Is the reverse true? Are there some kinds of markings that confuse people but not modern image recognition systems?
I feel like we are going to need smart roads before we can have autonomous vehicles. If the road could talk to the car, the car would be harder to fool or confuse. Maybe.
Not sure it's as diabolical as in Ford's case. Teslas are some of the safest cars in the country, and Autopilot is a work in progress. I do agree that the tolerance for edge-case errors should be lower, but also, as a driver, you should know it isn't a completely hands-off feature yet.
Not sure what country, but you'll have a hard time convincing me (and my passengers on the day) that they are safe, no matter what statistics show. Totally a fan of the energy tech, not a fan of the self-driving tech.
Humans are not machines, a system where we absolutely have to pay attention 1% of the time but have to just stare mindlessly the other 99% doesn't work. Humans don't work like that and we will space out even if we try not to. Especially when you have to do it for hours on end.
Interesting. This means that one of my predictions, that self-driving would happen but only for highly controlled road sections where markings allowed very reliable lane following, is likely not to be realized. Unless we invent digitally signed lane markings...
Perhaps it'll be in areas without pedestrians or cyclists? Eg some version of the interstate. But then the question becomes: what happens if there's an accident and humans are on the road? What if there are wild animals in the road?
California has incredibly good roads, generally maintained, thoughtfully designed and well-marked. It's the last place we should be testing self-driving cars.