And, observationally, the trolley-problem discourse is proof that a staggering number of people don't know their road rules (since every variant of it consists of concocting some scenario where braking happens far too late, yet you somehow know perfectly well there's not a preschool behind the nearest brick wall or something).
I remember running some basic numbers on this in an argument, and you basically wind up at: if an AI is fast enough to detect the situation at all, it's fast enough that it can essentially always stop the car with the brakes, or else no amount of aggressive manoeuvring would have avoided the collision either.
Which is of course what the road rules say: you slam on the brakes. Every other option is worse, and that holds even more strongly when an AI can brake sooner and harder than a human, if it's even smart enough to consider other options.
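To make that concrete, here is a rough back-of-envelope version of the braking-versus-swerving argument. All of the numbers (speed, friction, lateral grip) are assumptions chosen for illustration, not anything taken from an actual AV planner:

    # Back-of-envelope comparison of "slam the brakes" vs "swerve".
    # Every number here is an assumption chosen for illustration.
    from math import sqrt

    v = 27.0            # initial speed, m/s (~60 mph)
    mu = 0.7            # assumed braking friction coefficient
    g = 9.81
    a_lat = 0.5 * g     # assumed sustainable lateral acceleration (~0.5 g)

    for gap in (15.0, 30.0, 60.0):          # distance to the hazard, metres
        # Full braking: speed remaining at the hazard (0 if we stop in time)
        v_sq = v**2 - 2 * mu * g * gap
        v_impact = sqrt(v_sq) if v_sq > 0 else 0.0
        # Full swerve at constant speed: lateral offset gained before the hazard
        t_to_hazard = gap / v
        offset = 0.5 * a_lat * t_to_hazard**2
        print(f"gap {gap:4.0f} m: impact speed if braking {v_impact:4.1f} m/s, "
              f"lateral offset if swerving {offset:4.1f} m")

With those assumed numbers, by the time the hazard is close enough that a swerve can't even buy a lane's width of offset, braking still scrubs a meaningful fraction of the impact energy, which is roughly why the standard advice is to brake.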
The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?
What is the lowest likelihood of your own death you'd find acceptable in this situation?
This is a fair concern. I'm unconvinced it's even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
Sure, but what happens when the tech gains market capture and inevitably enshittifies, the same way every other piece of tech has?
I'm not really thinking about when self driving is State of the Art Research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I'm willing to cautiously align with Google and maybe even Tesla if they can take a bite out of those numbers.
What would that guarantee look like and would it be legal to sell a product that made that guarantee?
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
No, I mean that other drivers are not prioritizing you, and many make poor choices.
Replacing bad other drivers with good autonomous systems is likely a great trade off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
They are not afraid to operate their own vehicles. They are afraid you will kill them.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
Was it 2015 when HN was full of predictions that we wouldn't be driving ourselves in five years?
From what I see, the serious accidents with human drivers are caused by deliberately doing the dangerous thing (in my corner of the world, mostly overtaking at the wrong place or time, or both). Beyond that, humans drive very safely. Outside of tightly controlled environments, I don't see self-driving getting any better until systems have a proper world model. So, maybe never.
Look I don't like Tesla as much as the next person, I think it is wildly over-hyped and over-valued. But this article is just slop.
The headline says "How Tesla hid accidents to test its Autopilot", but the actual article has no explanation of (1) how Tesla hid anything or, for that matter, (2) whom Tesla hid this information from.
It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how the two are connected.
Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
>Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
This is something I find frequently as well, more so with Musk-related things than Tesla. Lord knows there are plenty of things to be critical of.
If investigative journalism wants to regain the respect it once had, a smaller number of allegations backed by concrete claims serves both the public and faith in the media better than large quantities of vague claims.
I admit that if you want to sway public opinion, the latter is more effective, but it is also a mechanism that doesn't require alignment with the truth. When that approach is normalised, it opens the door for anyone to shove popular opinion around.
After you wrote this, I went and read the article, and I didn't see much there either. I wonder why you are getting downvoted. And TBC, I'm also not a Tesla fan (the truck is dumb).
Hot take, but I feel like Tesla owners (hell, anyone with an 'autonomous driving' vehicle) need to see some kind of modern lecture based on the Children of the Magenta Line talk on automation dependence in aircraft. Mandatory, before you can turn the system on.
FSD has built this generation's newest children of the magenta line.
> my liability-only insurance premiums would be higher for the Tesla compared to a non-Tesla. But they are not
Youâre correct inasmuch as we have no evidence there is âa significant problem.â But if Tesla is hiding evidence, as this article suggests, that might just be because lawsuits are still gaining steam.
Liability insurance premiums would reflect higher risk of Tesla vehicles causing collisions, regardless if Tesla is at fault or if the driver is at fault. The insurance company still has to pay, which means the Tesla owners have to pay.
> Liability insurance premiums would reflect higher risk of Tesla vehicles causing collisions, regardless if Tesla is at fault or if the driver is at fault
Why? They only pay out if you're at fault. And if there aren't final judgements in a deep pipeline of cases, premiums wouldn't have a reason to adjust yet.
I am assuming Tesla has been around long enough and driven enough miles to have a sufficiently representative data set for insurance companies to know. I cannot imagine the pipeline of cases to be so deep as people are waiting on payments from collisions from years ago.
I am also assuming that a collision involving a Tesla has at fault determinations that are more accurate than other brands, given the 6 or 7 cameras that are recording and should make determining fault easier.
Basically, if a Tesla were more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas, and hence would be charging higher liability-only insurance premiums.
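For what it's worth, the pricing logic being argued here is roughly that a liability premium tracks expected claims cost. A minimal sketch, with made-up numbers rather than real actuarial data:

    # Minimal sketch of the argument: a liability insurer's "pure premium" is
    # roughly expected claim frequency x expected claim severity, plus loading.
    # All figures are invented for illustration.
    def annual_premium(at_fault_claims_per_car_year: float,
                       avg_claim_cost: float,
                       expense_loading: float = 0.3) -> float:
        return at_fault_claims_per_car_year * avg_claim_cost * (1 + expense_loading)

    # Hypothetical brands: if brand B's drivers cause 30% more at-fault claims,
    # its liability-only premium should come out ~30% higher, all else equal.
    print(annual_premium(0.040, 12_000))  # brand A: ~624 / year
    print(annual_premium(0.052, 12_000))  # brand B: ~811 / year

The replies below push back on exactly this: insurers may not model at this granularity for small policies, and pending litigation isn't priced in until judgements land.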
The entire point of these articles about mounting lawsuits is those assumptions may be wrong. The liabilities involved are higher. And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
> if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas
You may be over-indexing on how much work liability insurers do. I have an umbrella policy. It absolutely doesn't take into account the fact that I ski and fly a plane, for example. At the end of the day, their liability is capped, and it's usually easier to weed out bad risks by claims history than by running models on small premiums.
> The entire point of these articles about mounting lawsuits is those assumptions may be wrong.
And my entire point is I trust the incentives of the insurer to accurately price risk and determine at fault more than a publication that needs clicks.
> And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
Does the data from Tesla even come into play for an insurer? They need to pay the damaged parties regardless of whether or not Tesla and its software are at fault. For premium pricing purposes, what Tesla does is irrelevant until after Tesla is found liable.
In the meantime, a collision with a Tesla is the same as any other auto brand's. I don't think Ford/Toyota/anyone else's software comes into play. No auto brand picks up the liability for the driver (except Mercedes in some circumstances, I think), so no automaker is in the picture for payment in the event of an individual collision.
Usually when people provide examples, they're intended to serve as a representative sample of a larger trend, and not an exhaustive list. Hope that helps.
IMO it's also a distraction to blame it on "capitalism" or some "larger trend" rather than just pointing directly at the company and people responsible.
"The system is broken" line hasn't worked for years now. Maybe if we stop blaming the system and start blaming the people?
The Koch brothers stopped breaking the law because it was too expensive. Instead they started lobbying to get the laws changed. This is where the idea that the system is rotten comes from.
Saying "corporations have lied in the past for their own self interest" and then pointing to two very well known examples does not imply or over generalize that all corporations do that.
The point isn't to demonize all corporations, it's to say specifically that a pathology of some megacorporations is broadscale lying to the public about the safety of their products for personal gain.
To pile on to this pathetic excuse for a company: anyone considering buying a Tesla should know that they are the #1 brand for fatal accidents in the United States, with over twice the accident rate of a typical automaker: https://www.roadandtrack.com/news/a62919131/tesla-has-highes...
This terrible statistic can't just be explained by owners who drive aggressively or some other factor like that. Dodge has plenty of aggressive drivers buying their 700HP V8 rear-wheel-drive vehicles, but they have better fatal accident rates than Tesla.
I'm convinced that Tesla makes unsafe cars and covers it up wherever they can.
The crash test safety awards their vehicles have won are clearly not representative of reality.
The self-driving system Tesla offers is only "ahead" of the competition because the competition is unwilling to sell an unsafe system.
Your link only suggests driver and road conditions are to blame. Considering the amount of power even a base model has, I would lean towards the driver. What they do with FSD stats is terrible, and it would be refreshing to have some unbiased looks at it. Your narrative, though, is too biased, and the link makes no connection to Tesla being responsible for the fatalities.
Kia markets way smaller and cheaper cars with fewer safety features. Tesla at one point had front-page news saying they were the safest car ever produced.
Tesla is giving people driving their cars a false sense of security.
But the article doesn't say that at all - quite the opposite:
> The study's authors make clear that the results do not indicate Tesla vehicles are inherently unsafe or have design flaws. In fact, Tesla vehicles are loaded with safety technology; the Insurance Institute for Highway Safety (IIHS) named the 2024 Model Y as a Top Safety Pick+ award winner, for example. Many of the other cars that ranked highly on the list have also been given high ratings for safety by the likes of IIHS and the National Highway Transportation Safety Administration, as well.
Tesla stans tell us that they're the most luxurious and best-built cars on the road; in reality they're as poorly built as an economy car brand with a reputation for low quality, for people who don't want to pay for a Toyota.
You're missing the obvious explanation here: driver profile. You could have the safest car around, but if it's being driven by unsafe drivers it will lead to more accidents and fatalities.
That study was pretty thoroughly debunked. Also, I believe it was put out by a lobbying group representing auto dealerships, who see the Tesla DTC model as a mortal threat. There is a lot of legitimate criticism to be directed at Tesla, but the ISeeCars study "ain't it".
I've heard people saying the study is bad, but whenever I've asked why, the answers have been pretty bad. Do you have a good source for why we should disregard it?
Looking for more. tl;dr is that NHTSA publishes accident rates but not mileage. ISeeCars has access to legacy-auto mileage from dealership data but guessed at the mileage for Teslas in the period in question. Their methodology was not released, and their estimate was a fraction of the total mileage that Tesla recorded over that period.
I do agree that Tesla could do a much better job with data transparency. But the claims of the ISC report are pretty difficult to reconcile with the crash test ratings they've gotten from many regulators across the world.
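The methodological complaint is easiest to see with a toy calculation: the headline figure is fatal crashes divided by estimated miles driven, so underestimating the fleet's mileage inflates the rate. A sketch with invented numbers (not figures from the ISeeCars study or from Tesla):

    # Fatal-crash rate = fatal crashes / estimated miles driven, so an
    # underestimate of mileage inflates the rate. Numbers are invented.
    def fatal_rate_per_billion_miles(fatal_crashes: int, est_miles: float) -> float:
        return fatal_crashes / (est_miles / 1e9)

    crashes = 200
    true_miles = 40e9       # hypothetical: what the fleet actually drove
    guessed_miles = 25e9    # hypothetical: a low estimate from partial data

    print(fatal_rate_per_billion_miles(crashes, true_miles))     # 5.0
    print(fatal_rate_per_billion_miles(crashes, guessed_miles))  # 8.0

Whether that is what actually happened in the study is exactly the open question in this subthread.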
For a while they were the safest car in crash tests, weren't they? Was there an inflection point where they were dropping like a rock? Or is this a case of measuring different things (crash tests vs fatal accident rates)?
I know you probably don't know off the top of your head, I'm hoping someone can chime in.
Dan Luu had some interesting analysis about car safety, comparing how different auto-makers fared on newly introduced crash tests: https://danluu.com/car-safety/
The main takeaway for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced, which feels indicative of a bad safety culture.
I'm admittedly not a fan, but I'll note that in my social circle nobody is considering one, the one person who has one wants to sell it, and one vendor has one (the truck), but that is clearly for marketing purposes, so at least it makes sense.
How do we know it can't be explained by self-selecting driver population? That sounds like the most likely explanation, and it's the only explanation advanced by the article you provided.
I guess there's something to be said for "hey, if you're considering buying a Tesla, you may be the kind of person that's likely to kill themselves in a car crash. Consider buying a safer car or taking the bus!"
Who would have guessed that a vehicle with no turn signal stalk or physical control to shift gears is unsafe!
Tesla sells too many vehicles for it to be a "self-selecting driver population" thing anymore. They sell almost as many Model Ys as Honda CR-Vs.
I have a hard time believing that driver profile has anything to do with it, and I especially dislike the temptation to explain away the data by making unsubstantiated excuses for the company.
Dodge has better statistics than Tesla and they almost exclusively sell muscle cars.
So why is Dodge better on the list? Most Dodge models sold are rear-wheel-drive performance cars. They basically only sell the Challenger/Charger and the Hornet SUV that nobody's buying.
The lengths people will go to defend Tesla continue to astound me. Can't we just say that they suck without making excuses for them?
> I'm convinced that Tesla makes unsafe cars and covers it up wherever they can.
Tesla makes unsubstantiated, exaggerated claims about capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?
This article specifically mentions "Autopilot", not FSD. I'll call out Tesla for BS as much as the next person and I own no stock, but FSD (Supervised) is exactly what it says. There's no aspect of vehicle operation that isn't controlled by FSD, but it must be supervised.
Here we go again. Autopilot != FSD. Autopilot is not "autonomous" driving. It's lane keep with adaptive cruise control, the same kind of system that Honda, Toyota, etc. have. Yes, the naming is wrong and the marketing is bad, but I don't see it as much worse than Toyota Safety Sense. If you rely on it to keep you "safe", you're going to swerve off the highway into a ditch. I used Super Cruise from GM in my friend's SUV; as soon as the lane markers went away on a bridge, I almost hit the railing.
I'll get downvoted, but I'm just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, though maybe an apt one, since autopilot in planes can't see and avoid obstacles either.
The news isn't necessarily about the effectiveness of the particular tech stack, but about the integrity, or lack thereof, of the manufacturer in reporting incidents. If that is in question, any assessment of the effectiveness of Tesla's tech stacks, whether FSD, autonomy, or robotaxis, is in doubt.
Autopilot is completely different software from FSD. If you think FSD is stupid then Autopilot is worse because it won't do anything other than stay in the same lane and adjust speed to the car in front of you.
For some reason you could turn this on when you're not driving on the highway. It doesn't do anything for traffic lights, stop signs, obstacles, etc. because it's just cruise control. It's also included with every vehicle (unlike FSD).
The difference is FSD is properly annotated as (Supervised) and does exactly that. Autopilot does not 'autopilot' the vehicle by any reasonable measure.
How about the fact that Tesla is killing people and covering it up?
Would you go to a driver's funeral and tell their family that um, ackshully it's sparkling autopilot?
What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.
It's not a semantic issue, FSD is a completely different system, but many people mix up the terms when discussing these systems due to poor naming. Autopilot is just cruise control and lane keep. FSD handles navigation and full vehicle control. Articles discussing the dangers of Autopilot are making perfectly reasonable claims about a system which was poorly named/marketed, but they are not meaningfully relevant to conversations about FSD.
IMHO you're shifting goal posts (and I am not downvoting).
Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It was selling "Autopilot" for $8k (now with a subscription, AFAIK), with a pinky promise that "soon" or "next year" or "in two weeks" (jk) you would essentially set a destination, go to sleep and wake up at your destination[1].
It's the same as saying that "LLM != AI" and arguing that "ChatGPT is not AI - it's a glorified statistics model that is good at creating human-sounding text". Yeah, you and I understand this, but the average guy most likely does not and will get burned by this, because a dozen tech bros are burning billions of dollars trying to convince everyone that it's a panacea for every problem you can think of.
[1] It's a slight exaggeration, and I won't spend time digging for quotes, but my main point is that this is what Tesla is selling to the average guy, not to nerds who can distinguish what's possible, what's working, and what levels of driver assistance there are.
"Autopilot" is not $8K, that's FSD. Autopilot was the default cruise control/lane keep software and was renamed "Traffic Aware Cruise Control" a few months ago. The original name was ridiculously misleading.
Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report
I think this is part of the reason I am wary of trying it ( including some of the competitor's variants ). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course correct at the literal last second.
Treat it like a driver assistance system. I treat FSD the same as I treat Adaptive Cruise Control and Lane Keep Assist in my CR-V. I keep my hands on the steering wheel and follow along with the decision making.
Reminds me of a situation not long ago.
I'm in the left lane on the highway. Tesla ahead of me, but quite a ways away.
I realize as I'm driving that the Tesla is moving quite slowly for left-lane driving. And before you say it: yes, there are lots of people speeding in highway left lanes too.
So - I passed on the right rather than tailgate. I look over and see a guy leaning back in his seat. No hands on the wheel. Could've been asleep. And driving 10-15 mph slower than you'd expect in that lane.
To your point about using FSD the way you do: makes total sense to me. Which implies you would also cruise at the right speed depending on the lane you are in, unlike my example.
One of my major complaints about FSD is the 'speed profiles'. You used to be able to set a target speed directly. Now, you can only select a profile. You're either going the exact speed limit, 2-3mph over, or essentially 'with the flow of traffic' which can lead to speeding +15 over the limit.
Real question, then, from someone who only bothers driving when he must and even then in a 2016 model: Why do you use it? What beneficial purpose do you find it to serve?
I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
Here's a simple example from last week. FSD was in control on my way to work, stopped at a red light early in the morning before the sun was up. The light turns green and FSD doesn't accelerate. I figured it was somehow confused, and I was starting to move toward hitting the accelerator myself when a car comes flying through the red light from the driver's side. I hadn't noticed this car, but FSD saw it and recognized it wasn't slowing down. I could see there were headlights, but it wasn't clear how fast it was going.
It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities, where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue, and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
Which is just worse.
When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.
Making me have to try and guess what the car is going to do at any given time is adding complexity to the process: am I changing lanes now, oh I guess I am because the autonomy thinks we should etc.
Sure, but the practical experience is that FSD is fairly predictable. It's just a matter of personal preference that comes from experience. I wouldn't impose a system like FSD on everybody.
Interestingly, I think that similar types of arguments are made against "agentic coding"
If you don't pay constant attention, you will never notice when it slips in a bug or security issue
Sure, but you can do that in a diff after the event, rather than live.
Car crash deaths are better known than software bug caused deaths. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends to an experimental tech.
SAE Level 2 is just a bad idea. People can't be expected to carefully monitor a car and take over at a moment's notice when it's doing all the driving. My adaptive cruise control is great, and I hope to have a future car where I can zone out while it drives and take over after a few seconds' heads-up, but the zone in between shouldn't be a valid feature.
I think you mean SAE Level 3. SAE Level 2 is "lane centering" and "adaptive cruise control" [1]. (Level 3 is "when the feature requests, you must drive.")
[1] https://www.ncdd.com/images/blog/diagram.png
A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.
I don't really buy that. There are a lot of situations (e.g. being directed to park in a space at a fairgrounds, ski area, or whatever) that you can't reasonably expect AFAIK to be programmed into a car's computer. Even if a car can legitimately handle roads under most circumstances, they're not going to be able to handle everything.
I think their point was "it's not ready yet."
"Because the Origin does not have manual controls, the NHTSA must issue an exception to the Federal Motor Vehicle Safety Standards to permit operation on public roads"
Too bad that project failed.
https://en.wikipedia.org/wiki/Cruise_(autonomous_vehicle)
Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.
Would it be a vote of no confidence in Full Self Flying?
No, it would be an acknowledgement of the lack of perfection in human systems so far.
I mean, they kinda are.
Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.
That presents an interesting failure mode challenge.
Well we don't have any self driving cars outside of San Francisco. Only cars with advanced driver assistance.
Quite a few more places have them now:
https://support.google.com/waymo/answer/9059119?hl=en
How do you reverse such a car into your own driveway that's positioned in a funny way at an angle and an incline? What if you're parking off road for any reason? Like, you have to be able to manoeuvre your own vehicle sometimes.
To be fair, that report says
> the self-driving feature had "aborted vehicle control less than one second prior to the first impact"
It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can't resolve. If there's evidence that Tesla is actively using this to "prove" that FSD is not behind a crash, I'm happy to change my mind. For me, probably 5 seconds prior is a reasonable limit.
It's an insane reversal of roles. In a standard Level 2 ADAS, the system detects a pending collision the driver has not responded to and hits the brakes. Tesla FSD does the reverse: it detects a pending collision that it has not responded to, and shuts itself off instead of hitting the brakes. It's pure insanity.
Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse for why FSD is not at fault, despite the fact that FSD created the situation in the first place.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
IDK, this has the same unethical energy as police turning off body cameras.
In the BEST CASE, this is a confluence of coincidences: engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.
In the worst case, this is intentional.
In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.
This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally, or through an intentional omission.
> the "right thing to do" is NOT turn off the cameras just before a collision
Source for autopilot being disabled "seconds before a crash" also disabling cameras? (Sorry if I missed it above.)
This is a policy that Tesla put in place, period. Handing control to the driver suddenly at a weird moment can make the whole situation even more dangerous, as the driver is not primed to handle it on the spot; it's all too unexpected.
Yep, your comment reminds me of a time my mother was about to hit a bird in the road. However, she was too busy arguing with the passenger to notice, and her driving was already starting to become erratic. I decided not to tell her because I knew that the shock could cause her to do something more drastic, like crashing the car trying to avoid it.
I guess i'll step in for the counter.
How is a car supposed to pre-empt being in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
Maybe the car should not have this dangerous feature in the first place? Or maybe train drivers thoroughly and frequently, so that when this situation arises it becomes less dangerous.
It seems to me FSD for Tesla is not ready to go into Prod as it is now.
The few Tesla post-mortems I read early on stated that FSD turned off before impact and used this as a defence of their system. If they had shared that this happened 1 second before impact (so, far too late for a human to respond), I'd have sympathy. I have never read a Tesla statement that contained this information.
For normal incidents, 2 seconds is taken as the response time to be added before corrective action takes effect (avoidance, braking). I'd expand this for FSD, because it implies a lower level of engagement, so you need more time to re-engage with the car.
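To put that response time in distance terms, here is some rough arithmetic. The 2-second figure comes from the comment above; the speeds and windows are illustrative assumptions:

    # Distance travelled during a take-over window, before any corrective
    # input even starts. Speeds and windows are illustrative assumptions.
    MPH_TO_MS = 0.44704
    v = 70 * MPH_TO_MS                 # ~31.3 m/s at an assumed 70 mph
    for t_takeover in (1.0, 2.0, 4.0): # seconds of re-engagement time
        print(f"{t_takeover:.0f} s of re-engagement = {v * t_takeover:.0f} m travelled")

At highway speed, even the standard 2 seconds is roughly 60 m of travel, and a slower, FSD-style re-engagement adds car lengths on top of that.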
This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second causing the software to deactivate. That being said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event rather than requiring self-driving be activated at the exact moment of impact.
Agreed. I also feel like there is a world of difference between the driver deliberately assuming control at the last second because they notice that an accident is about to happen, and the car itself yielding control unprompted because it thinks an accident is about to happen.
The former is to be expected. The latter seems likely to potentially make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
> It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.
So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.
That's still not a good look.
And it does mean that FSD shouldn't be trusted as much as it is, because if the car is putting itself in unresolvable situations, that's still a problem with FSD, even if it isn't in direct control at the moment of impact.
Disregarding the fact that NHTSA findings apparently contradict it (though that may just be a more recent change than the 2022 report), Tesla claims to use five seconds before a collision event as the threshold for their data reporting on their FSD marketing page:
> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]
In theory, that should more than cover the common perception-response times of around ~1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process as driver assistance systems return control to the driver and its impact on driver response times and their overall alertness.
If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we're now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus that they're now barreling down on.
For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver has slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that back when I was learning to drive, during normal braking.
0. https://www.tesla.com/fsd/safety#:~:text=within five seconds
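For clarity, the attribution rule described in that quoted passage amounts to a simple window check. The sketch below only illustrates that stated rule; the function and field names are hypothetical, not Tesla's actual telemetry schema:

    # Count a collision as "FSD engaged" if the system was active at any point
    # within the five seconds before impact, per the quoted policy.
    # Names and structure here are hypothetical.
    from typing import Optional

    ATTRIBUTION_WINDOW_S = 5.0

    def fsd_engaged_for_report(impact_time_s: float,
                               last_fsd_active_time_s: Optional[float]) -> bool:
        if last_fsd_active_time_s is None:   # FSD never active on this drive
            return False
        return impact_time_s - last_fsd_active_time_s <= ATTRIBUTION_WINDOW_S

    # A disengagement (by the driver or by the system) 1 s before impact counts:
    print(fsd_engaged_for_report(100.0, 99.0))   # True
    # A disengagement 20 s before impact does not:
    print(fsd_engaged_for_report(100.0, 80.0))   # False

Under that rule, the "aborted less than one second prior to impact" cases discussed upthread would still be counted as FSD-engaged for the Vehicle Safety Report, which is why the apparent conflict with the NHTSA findings is worth pinning down.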
Tesla has a very bad track record in terms of both compliance and disclosure when it comes to autonomy incidents.
Did you find that the article lacked any real numbers related to the claims? It was a bit weird how vague that information was.
Individual tragic anecdotal incidents aside, the vagueness of the article really diluted the merit of the claims.
The article was also published in German: https://www.srf.ch/news/dialog/autonomes-fahren-wie-tesla-un...
It's the Swiss national radio/tv service, they probably have the article in 4 languages or more
Are these still accidents where the driver was not paying attention, though?
Of course. But the argument is that the nature of FSD causes them to not pay attention.
This is about the old autopilot, not FSD, and there doesn't seem to be anything new in the article. This is based on the same leaked data which has been public since 2023. The title seems to be inaccurate, as there's nothing to indicate that they hid fatal accidents.
So... for a bit of context on the video and the article:
- The documentary is from RTS. RTS is the main publicly owned media outlet in Switzerland. They are not the typical European publicly owned media: they are generally pretty well funded (contrary to most), they tend to produce good (high) quality content, and they tend to be independent and rather neutral (leaning slightly to the left, politically speaking).
- The video is in French because, in Switzerland, the public media are divided into three groups associated with the regional languages: RTS for French, SRF for German and RSI for Italian. That's why you do get a German translation.
- They are generally pretty cooperative and open-minded. If one of you wants to submit English subtitles, just contact them; they might accept it (I do not promise anything).
Sorry, but you seem to be implying that European public owned media outlets are not normally to be trusted. Why?
I started out writing a list of European countries with high quality public broadcasters, but the comment started looking silly since the list quickly grew very long.
I've lived for many years in two large European countries, and in both cases I found the public broadcasters hard to trust. Perhaps you have deep, first-hand knowledge of multiple European countries, but in my experience they take too much money and are heavily biased. For that reason I'd prefer there to be no public broadcast companies, at least so my tax money doesn't support manipulation. In 30+ years of life, I've never encountered a truly neutral public broadcaster in Europe, though I'm sure there may be exceptions.
In my country I judge them purely by what they do and say in the sectors where I know a lot about, and the facts they bring are mostly correct.
Also, they don't tout a single party line.
> Sorry, but you seem to be implying that European public owned media outlets are not normally to be trusted. Why?
The quality of European publicly owned media is highly country-specific and varies quite a lot:
- Some of them are critically underfunded, and it shows (a tendency toward cheap sensationalism, superficial investigation or recycled content).
- Some of them are politically rooted (left or right) or controlled through direct/indirect government involvement.
But all considered, I would say that the average is still an order of magnitude better in terms of content quality and independence than the average private media outlet.
They have left-leaning biases; RTVE is basically a propaganda channel for the PSOE at this point, and France Info/France 2 have center-left biases, which makes them not neutral or representative of society as a whole. They are all well-funded though.
The national broadcaster here in Romania has been politically leaning on whoever was paying the bills, hence on whoâs holding political control over the country.
I can say the same about the foreign bureaus of State-owned media thingies like Deutsche Welle and Radio France Internationale, both of these entities actively rooting for the Romanian political candidate that was seen as closer to German and French interests (Iâm talking the last couple of rounds of Romanian presidential elections).
You're probably responding to a Swiss person who lives in the USA.
One day an AI will obviously be infinitely better at driving than a human will be but that day is not yet here.
It is finitely better today and will keep getting better. This doesn't mean it's better at everything a human driver can do; it's just better on average. The jagged frontier is real and a very important safety consideration; nevertheless, the averages matter too.
> that day is not yet here
Have you been in a Waymo? SAE Level 4 is here, and it's safer than humans [1].
[1] https://waymo.com/safety/impact/
Personally I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it
I don't ever want to be inside an AI driven vehicle that might decide to sacrifice me to minimize other damage
> to minimize other damage
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What's the ratio between "bodies of your own kids" and "other human bodies you have no connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*meaning, for a ratio of 2, for example, you would require 2 unrelated deaths to justify losing one of your own kids
Yeah, you also have to consider that your kids can be on either side of the equation too.
I honestly don't know whether the other side of the equation is your kid being on the street when somebody else's AV causes the accident. Bonus points if the owner of the AV is not liable for the accident.
We can take the AI out of the question entirely and ask how many other humans you personally, as a driver, would be willing to mow down to avoid your own death - driving off a bridge, say.
I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit ("that the AI will prioritize my life and safety over literally any other concern") is grotesque.
> You mean deaths to multiple other people, do you not
I mean deaths the AI predicts for other people, yes
And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.
I don't believe any AV software out there attempts to solve the trolley problem. It's just not relevant and moreover, actually illegal to have that code in some situations.
You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
> deaths the AI predicts for other people
Isn't this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing energies in a series of rapid-fire calls?
The whole thing was a media beat-up, because the media was too afraid to talk about anything real and the public wasn't interested.
There's plenty we could talk about: e.g. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR, etc.
Instead we got "what if the car calculates the trolley problem against you?"
And, observationally, it's proof that a staggering number of people don't know their road rules (since every variant of it consists of concocting some scenario where braking happens far too late, yet you somehow know perfectly well there isn't a preschool behind the nearest brick wall or something).
I remember running some basic numbers on this in an argument, and you basically wind up at: if an AI is fast enough to detect a situation, it's fast enough that it can almost always stop the car with the brakes, and if it can't, no amount of aggressive manoeuvring would avoid the collision either.
Which is of course what the road rules say: you slam on the brakes. Every other option is worse, and gets even worse when an AI can brake sooner and harder, if it's smart enough to even consider other options.
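A rough back-of-the-envelope sketch of that kind of calculation (all numbers are assumed and purely illustrative, not anyone's actual FSD logic):

    # Stopping distance = distance travelled during reaction time
    #                     + kinematic braking distance v^2 / (2a)
    speed_mps = 50 * 1000 / 3600   # 50 km/h in m/s (~13.9 m/s), an assumed urban speed
    decel = 7.0                    # m/s^2, roughly a hard brake on dry asphalt (assumed)
    human_reaction_s = 1.5         # typical human perception-reaction time (assumed)
    machine_reaction_s = 0.3       # assumed sensing + compute latency for an automated system

    def stopping_distance(v, reaction_s, a):
        # Distance covered before the brakes bite, plus the braking distance itself
        return v * reaction_s + v ** 2 / (2 * a)

    print(stopping_distance(speed_mps, human_reaction_s, decel))    # ~34.6 m
    print(stopping_distance(speed_mps, machine_reaction_s, decel))  # ~17.9 m

On these assumed figures, nearly all of the improvement comes from reacting sooner, not from any clever evasive manoeuvre, which is why "just brake earlier" dominates the exotic trolley scenarios.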
The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?
What is the lowest likelihood of your own death you'd find acceptable in this situation?
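To make the trade-off explicit with the hypothetical numbers above (purely illustrative; no claim that any AV actually runs this kind of calculation):

    # Expected deaths for each choice, using the made-up probabilities above
    children, p_children_die = 5, 0.90    # if the car holds its line and hits the group
    occupants, p_occupant_dies = 1, 0.50  # if the car swerves to avoid them
    print(children * p_children_die)      # 4.5 expected deaths if it doesn't swerve
    print(occupants * p_occupant_dies)    # 0.5 expected deaths if it does

On these made-up numbers the swerve "wins" by a wide margin, but that is exactly the calculus the earlier commenter says they don't want a computer making on their behalf.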
> not sure I would trust it
This is a fair concern. I'm unconvinced it's even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
Sure, but what happens when the tech gains market capture and inevitably enshittifies, the same way every other piece of tech has?
I'm not really thinking about when self driving is State of the Art Research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
> what happens when the tech gains market capture
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I'm willing to cautiously align with Google, and maybe even Tesla, if they can take a bite out of those numbers.
[1] https://www.cdc.gov/nchs/fastats/accidental-injury.htm
What would that guarantee look like and would it be legal to sell a product that made that guarantee?
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
I find it interesting that you don't give other drivers any consideration in your analysis.
Other drivers should take public transit if they don't want to / are afraid to operate their own vehicles
As for me I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be
No, I mean that they are not prioritizing you and many make poor choices.
Replacing bad other drivers with good autonomous systems is likely a great trade off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
They are not afraid to operate their own vehicles. They are afraid you will kill them.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
Appreciate the honesty.
Sure, but then I don't want you to have a vehicle at all to minimize my own risk.
Feel free to minimize your own risk by staying home and never leaving
Feel free to minimize both our risks by not polluting public space with your personal crap.
"Infinitely" is a high bar, but Waymo is already demonstrably better than the majority of human drivers.
But only in very controlled environments...
Was it 2015 when HN was full of predictions that we wouldn't be driving ourselves within five years? From what I see, the serious accidents with human drivers are caused by deliberately doing the dangerous thing (in my corner of the world, mostly overtaking at the wrong place or time, or both). Besides that, humans drive very safely. Outside of tightly controlled environments I don't see self-driving getting any better until these systems have a proper world model. So, maybe never.
Same article in German, if that's more your thing: https://www.srf.ch/news/dialog/autonomes-fahren-wie-tesla-un...
Full report here (video): https://www.rts.ch/play/tv/temps-present/video/tesla-la-face...
This discussion was #1 and just vanished. Why?
Because the article is sensationalized crap.
Look I don't like Tesla as much as the next person, I think it is wildly over-hyped and over-valued. But this article is just slop.
The headline says "How Tesla hid accidents to test its Autopilot", but the actual article has no explanation of (1) how Tesla hid anything or, for that matter, (2) whom Tesla hid this information from.
It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how the two are connected.
Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
>Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
This is something I find frequently as well, moreso with Musk related things than Tesla. Lord knows there are plenty of things to be critical of.
If investigative journalism wants to regain the respect it once had, fewer allegations backed by concrete claims will serve both the public and faith in the media better than large quantities of vague claims.
I admit if you want to sway public opinion, the latter is more effective, but is also a mechanism that doesn't require alignment with the truth. When that approach is normalised, it opens the door for anyone to shove popular opinion around.
After you wrote this, I went and read the article, and I didn't see much there either. I wonder why you are getting downvoted. And to be clear, I'm also not a Tesla fan (the truck is dumb).
Thanks
Hot take, but I feel like Tesla owners (hell, anyone with an 'autonomous driving' vehicle) need to see some kind of modern lecture based on the Children of the Magenta talk on automation dependence in aircraft. Mandatory, before you can turn the system on.
FSD has built this generation's newest children of the magenta line.
https://www.youtube.com/watch?v=5ESJH1NLMLs
>Tesla owners (hell, anyone with 'autonomous driving' vehicles)
Or LLM users.
Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.
It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
You would be surprised how passionately people defend Tesla on HN sometimes, especially when safety records come up.
Otherwise number go down
Liability insurance pricing tells the whole story, without clickbait articles or emotion.
If there was a significant problem, my liability only insurance premiums would be higher for the Tesla compared to a non Tesla. But they are not.
> my liability only insurance premiums would be higher for the Tesla compared to a non Tesla. But they are not
You're correct inasmuch as we have no evidence there is "a significant problem." But if Tesla is hiding evidence, as this article suggests, that might just be because lawsuits are still gaining steam.
Liability insurance premiums would reflect a higher risk of Tesla vehicles causing collisions, regardless of whether Tesla or the driver is at fault. The insurance company still has to pay, which means Tesla owners have to pay.
> Liability insurance premiums would reflect higher risk of Tesla vehicles causing collisions, regardless if Tesla is at fault or if the driver is at fault
Why? They only pay out if you're at fault. And if there aren't final judgements in a deep pipeline of cases, premiums wouldn't have a reason to adjust yet.
I am assuming Tesla has been around long enough, and driven enough miles, to give insurance companies a sufficiently representative data set. I cannot imagine the pipeline of cases being so deep that people are still waiting on payments for collisions from years ago.
I am also assuming that a collision involving a Tesla gets at-fault determinations that are more accurate than for other brands, given the 6 or 7 cameras that are recording and should make determining fault easier.
Basically, if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas, and hence insurance companies would be charging higher liability only insurance premiums.
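A toy version of that pricing argument, with entirely made-up numbers, just to show the direction of the effect:

    # Toy actuarial sketch: the premium tracks expected payouts, whoever is at fault.
    claims_per_1000_cars = 40      # assumed annual liability claim frequency for a brand
    avg_payout = 15_000            # assumed average liability payout, USD
    expense_loading = 1.25         # insurer overhead + margin (assumed)

    expected_loss = claims_per_1000_cars / 1000 * avg_payout   # $600 per car-year
    premium = expected_loss * expense_loading                  # ~$750 per car-year
    print(premium)
    # If one brand's drivers genuinely caused more collisions, claims_per_1000_cars
    # would rise and the quoted premium for that brand would rise with it.

Real rating models are far more involved, but that is the shape of the claim: higher expected payouts per car-year should show up as higher premiums for that brand.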
The entire point of these articles about mounting lawsuits is those assumptions may be wrong. The liabilities involved are higher. And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
> if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas
You may be over-indexing on how much work liability insurers actually do. I have an umbrella policy. It absolutely doesn't take into account the fact that I ski and fly a plane, for example. At the end of the day, their liability is capped, and it's usually easier to weed out risk by claims history than by running models on small premiums.
> The entire point of these articles about mounting lawsuits is those assumptions may be wrong.
And my entire point is I trust the incentives of the insurer to accurately price risk and determine at fault more than a publication that needs clicks.
> And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
Does the data from Tesla even come into play for an insurer? They need to pay the damaged parties regardless of whether or not Tesla and its software are at fault. For premium pricing purposes, what Tesla does is irrelevant until after Tesla is found liable.
In the meantime, a collision with a Tesla is the same as one with any other auto brand's vehicle. I don't think Ford/Toyota/anyone else's software comes into play. No auto brand picks up the liability for the driver (except Mercedes in some circumstances, I think), so no automaker is in the picture for payment in the event of an individual collision.
The issue is they are potentially lying. It's why we are even having this discussion. The numbers could be fraudulent.
Yes, all companies sold leaded gasoline.
Usually when people provide examples, they're intended to serve as a representative sample of a larger trend, and not an exhaustive list. Hope that helps.
Their point still stands.
Not all companies do illegal things.
IMO it's also a distraction to blame it on "capitalism" or some "larger trend" rather than just pointing directly at the company and people responsible.
The "the system is broken" line hasn't worked for years now. Maybe if we stop blaming the system and start blaming the people?
>Not all companies do illegal things.
The Koch brothers stopped breaking the law because it was too expensive. Instead they started lobbying to get the laws changed. This is where the idea that the system is rotten comes from.
No one claimed all companies do illegal things.
All of this is a crazy overgeneralisation of the hundreds of millions of companies in the world:
> Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.
> It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
If I say "Ted is the Unibomber" do you think I'm saying everyone named Ted is the Unibomber? This is basic reading comprehension stuff
Saying "corporations have lied in the past for their own self interest" and then pointing to two very well known examples does not imply or over generalize that all corporations do that.
The point isn't to demonize all corporations, it's to say specifically that a pathology of some megacorporations is broadscale lying to the public about the safety of their products for personal gain.
Or pushed beef that destroys the environment and gives people GI cancers while claiming the opposite.
To pile on to this pathetic excuse for a company: anyone considering buying a Tesla should know that they are the #1 brand for fatal accidents in the United States, with over twice the accident rate of a typical automaker: https://www.roadandtrack.com/news/a62919131/tesla-has-highes...
This terrible statistic can't just be explained by aggressive driving owners or some other factor like that. Dodge has plenty of aggressive drivers buying their 700 hp V8 rear-wheel-drive vehicles, but they have better fatal accident rates than Tesla.
I'm convinced that Tesla makes unsafe cars and covers it up wherever they can.
The crash test safety awards their vehicles have won are clearly not representative of reality.
The self-driving system Tesla offers is only "ahead" of the competition because the competition is unwilling to sell an unsafe system.
Your link only suggests driver and road conditions are to blame. Considering the amount of power even a base model has, I would lean towards the driver. What they do with FSD stats is terrible, and it would be refreshing to have an unbiased look at it. Your narrative, though, is too biased, and the link makes no connection between Tesla and responsibility for the fatalities.
> Tesla vehicles have a fatal crash rate of 5.6 per billion miles driven, according to the study; Kia is second with a rate of 5.5,
Basically the same as Kia. Why are Kias so bad?
2 reasons I can see.
Kia markets way smaller and cheaper cars with fewer safety features. Tesla, at some point, had front-page news saying their cars were the safest ever produced.
Tesla is giving people driving their cars a false sense of security.
But the article doesn't say that at all - quite the opposite:
> The study's authors make clear that the results do not indicate Tesla vehicles are inherently unsafe or have design flaws. In fact, Tesla vehicles are loaded with safety technology; the Insurance Institute for Highway Safety (IIHS) named the 2024 Model Y as a Top Safety Pick+ award winner, for example. Many of the other cars that ranked highly on the list have also been given high ratings for safety by the likes of IIHS and the National Highway Transportation Safety Administration, as well.
Until recently, Kias were sub-entry level shitboxes
This would affect both driver selection and performance during impact
Slap a ridiculously powerful drivetrain on it and a premium price tag and you have a Tesla
I am sure there is a component of safety systems in a Kia but I would bet the bigger weighting is on driver profile.
You're so close to understanding!
Tesla stans tell us that they're the most luxurious and best-built cars on the road; in reality they're as poorly built as an economy car brand with a reputation for low quality, aimed at people who don't want to pay for a Toyota.
> You're so close to understanding!
Sorry, I don't understand this. I'm just asking a question. Do you reply to every question with that?
You're missing the obvious explanation here: driver profile. You could have the safest car around, but if it's being driven by unsafe drivers it will lead to more accidents and fatalities.
I can get on board with the rationale that Tesla drivers are idiots.
That study was pretty thoroughly debunked. Also, I believe it was put out by a lobbying group representing auto dealerships, who see the Tesla DTC (direct-to-consumer) model as a mortal threat. There is a lot of legitimate criticism to be directed at Tesla, but the ISeeCars study "ain't it".
I've heard people say the study is bad, but whenever I've asked why, the answers have been pretty weak. Do you have a good source for why we should disregard it?
Find a link that shows it's debunked, then? All they did was analyze federal crash data.
I don't know what's so hard to believe about the study. Tesla's numbers are pretty similar to other low-performing brands.
https://www.snopes.com/news/2025/01/11/tesla-fatality-rates/
https://en.wikipedia.org/wiki/ISeeCars.com#Partnerships
https://x.com/larsmoravy/status/1860100416819855492
Looking for more. tl;dr is that NHTSA publishes accident counts but not mileage. ISeeCars has access to legacy-auto mileage from dealership data but guessed at the mileage for Teslas in the period in question. Their methodology was not released, and their mileage estimate was a fraction of the total mileage Tesla recorded over that period.
I do agree that Tesla could do a much better job with data transparency. But the claims of the ISC report are pretty difficult to reconcile with the crash test ratings they've gotten from many regulators across the world.
For a while they were the safest car in crash tests, weren't they? Was there an inflection point where they were dropping like a rock? Or is this a case of measuring different things (crash tests vs fatal accident rates)?
I know you probably don't know off the top of your head, I'm hoping someone can chime in.
Dan Luu had some interesting analysis about car safety, comparing how different auto-makers fared on newly introduced crash tests: https://danluu.com/car-safety/
The main take-away for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced which feels indicative of a bad safety culture.
I am admittedly not a fan, but I note that in my social circle nobody is considering one, the one person who has one wants to sell it, and one vendor has one (the truck), but that is clearly for marketing purposes, so at least it makes sense.
How do we know it can't be explained by self-selecting driver population? That sounds like the most likely explanation, and it's the only explanation advanced by the article you provided.
I guess there's something to be said for "hey, if you're considering buying a Tesla, you may be the kind of person that's likely to kill themself in a car crash. Consider buying a safer car or taking the bus!"
Reminds me of the first episode of Mad Men, where the guy pitches appealing to everyone's "inherent death wish" when selling cigarettes haha
"That's it? If you're gonna die, die with us?"
Who would have guessed that a vehicle with no turn signal stalk or physical control to shift gears is unsafe!
Tesla sells too many vehicles for it to be a "self-selecting driver population" thing anymore. They sell almost as many Model Ys as Honda CRVs.
I have a hard time believing that driver profile has anything to do with it, and I especially dislike the temptation to explain away the data by making unsubstantiated excuses for the company.
Dodge has better statistics than Tesla and they almost exclusively sell muscle cars.
They don't; these are the anti-Tesla folks. No level of reasoning is available for discussions like this.
I don't like Elon, but I also don't think fiction and misleading stats serve anyone.
We're talking about a brand whose every car has at least 350HP, and most of them have more.
It's not an apples-to-oranges comparison.
So why is Dodge better on the list? Most Dodge models sold are rear-wheel-drive performance cars. They basically only sell the Challenger/Charger and the Hornet SUV that nobody's buying.
The lengths people will go to defend Tesla continue to astound me. Can't we just say that they suck without making excuses for them?
> I'm convinced that Tesla makes unsafe cars and covers it up wherever they can.
Tesla makes unsubstantiated, exaggerated claims about capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?
The Tesla fans fell for it again.
The Fool's Self Driving (FSD) contraption has once again been revealed as a scam, and it continues to be pushed onto their fans as a "self-driving" capability.
If they (Tesla) can hide fatal accidents, what else is Tesla not telling us?
This article specifically mentions "Autopilot", not FSD. I'll call out Tesla for BS as much as the next person and I own no stock, but FSD (Supervised) is exactly what it says. There's no aspect of vehicle operation that isn't controlled by FSD, but it must be supervised.
Here we go again. Autopilot != FSD. Autopilot is not "autonomous" driving; it's lane keep with adaptive cruise control, the same kind of system that Honda, Toyota, etc. have. Yes, the naming is wrong and the marketing is bad, but I don't see it as much worse than Toyota Safety Sense. If you rely on it to be "safe", you're going to swerve off the highway into a ditch. I used Super Cruise from GM in my friend's SUV; as soon as the lane markers went away on a bridge, I almost hit the railing.
I'll get downvoted but just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, but maybe a good name because autopilot in planes can't see and avoid obstacles either.
Elon himself uses both terms interchangeably[1], and the two reportedly use the same stack, so why shouldn't we conflate the terms?
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
Can you explain why that makes it ok to cover up accidents and lie about the recordings of the event being corrupted?
The news isn't necessarily about the effectiveness of the particular tech stack, but about the integrity, or lack thereof, of the manufacturer in reporting incidents. If that is in question, any assessment of the effectiveness of Tesla's driving tech stacks - FSD, autonomy, or taxis - is in doubt.
I don't get it.
If Autopilot was misleading, isn't "Full Self-Driving" too?
Autopilot is completely different software from FSD. If you think FSD is stupid then Autopilot is worse because it won't do anything other than stay in the same lane and adjust speed to the car in front of you.
For some reason you could turn this on when you're not driving on the highway. It doesn't do anything for traffic lights, stop signs, obstacles, etc. because it's just cruise control. It's also included with every vehicle (unlike FSD).
The difference is FSD is properly annotated as (Supervised) and does exactly that. Autopilot does not 'autopilot' the vehicle by any reasonable measure.
"Supervised self-driving" would be correct. I don't think I was aware of the "(Supervised)" before your comment, tbh.
How about the fact that Tesla is killing people and covering it up?
Would you go to a driver's funeral and tell their family that um, ackshully it's sparkling autopilot?
What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.
It's not a semantic issue, FSD is a completely different system, but many people mix up the terms when discussing these systems due to poor naming. Autopilot is just cruise control and lane keep. FSD handles navigation and full vehicle control. Articles discussing the dangers of Autopilot are making perfectly reasonable claims about a system which was poorly named/marketed, but they are not meaningfully relevant to conversations about FSD.
Here we go again; Musk fanboy to the rescue!
IMHO you're shifting goal posts (and I am not downvoting).
Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It was selling "Autopilot" for $8k (now with a subscription, AFAIK), with a pinky promise that "soon" or "next year" or "in two weeks" (jk) you would essentially set a destination, go to sleep, and wake up at the destination [1].
It's the same as saying "LLM != AI" and arguing that "ChatGPT is not AI - it's a glorified statistics model that is good at producing human-sounding text". Yeah, you and I understand this, but the average guy most likely does not and will get burned, because a dozen tech bros are burning billions of dollars trying to convince everyone that it's a panacea for every problem you can think of.
[1] It's a slight exaggeration, and I won't spend time digging for quotes, but my main point is that this is what Tesla is selling to the average guy, not to nerds who can distinguish between what's possible, what's working, and what level of driving assist there actually is.
"Autopilot" is not $8K, that's FSD. Autopilot was the default cruise control/lane keep software and was renamed "Traffic Aware Cruise Control" a few months ago. The original name was ridiculously misleading.