Hacker News

It's even worse for Tesla if you normalize the data, because the rest of the industry also has advanced driving functionality with far fewer accidents or deaths.

It turns out that one of the things that radar is good for is emergency braking regardless of lighting conditions.

It also helps that other carmakers don't oversell their advanced driving features; if anything, they deliberately understate how well the systems work to avoid giving customers the false sense of safety that Tesla does.



Yeah I love this line of argument from the Tesla cult: “But wait people use their Tesla autonomous features more frequently [in conditions the system doesn’t operate well in].”

Like… yeah, that’s called an unsafe system my dude.


Safety of systems isn't binary, so what's the risk profile?

Newer medical treatments carry higher risk, just as experimental transportation methods do.

Everyone gets sick/injured and everyone needs to get around, so some suffering in the name of advancing these endeavors seems both inevitable and tolerable. The question is to what degree?

Controlling access to and fallout from these automated driving systems is a temporary priority, since the roads they're being tested on are far more 'public' than an individual's body undergoing a new medical treatment. But the long-term priority must be getting the systems as safe as or safer than human drivers in aggregate. That will happen sooner or later, and I'd rather see it sooner, as long as the cost of the race isn't catastrophe. The only way to advance is to let it learn...


Yep, and when there are a dozen companies taking their obligations to the public extremely seriously, loading their cars with at least as many sensors as it takes to operate safely for the R&D phases, deploying in limited phases and not directly into the hands of the public, and not marketing their unproven tech as “Full Self Driving,” I think it’s completely reasonable to single out the one company that’s not doing any of that.


Tesla's insistence that vision is all you need is one of the root issues.

Humans are able to drive with vision alone in part because we have a high-level conceptual model of the world that we can rely on to fill in the gaps when a straight literal interpretation of vision is wrong or inadequate. We know, for example, that there are not likely to be walls under highway overpasses and that stop signs on billboards are not real signs. This is a "strong AI" level problem that a car isn't going to be able to solve, so instead the best answer is to give the car super-human senses so it doesn't need such a model as much.

Besides, if we're going to have self-driving cars, we want them to be safer than human drivers, and that can only reasonably be achieved with super-human senses.


> even worse for Tesla if you normalize the data

Where's the data that proves this? Tesla has ~2% of the auto market, but ~0.4% of deaths.


Most higher end vehicles (and frankly, I don't count Tesla as particularly high end, particularly not the Model 3) have similar stats, for a multitude of reasons (including but not limited to driver experience and time behind the wheel as a function of affordability of the vehicle).


Where's your data to back up your claim?


~2% of new cars isn't anywhere near 2% of cars on the road or miles driven, and newer cars are safer in general.


Miles driven, not market share, is the correct comparison here (and even then you'd want to control for many other covariates, such as where the cars are operated, the socioeconomic status of drivers, and so on) if you really want to compare different vehicles.
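The normalization point above can be made concrete with a toy calculation. Every number below is an invented placeholder (not a real statistic for Tesla or any other fleet); the sketch only shows how a fleet with a small share of total deaths can still have a worse per-mile fatality rate once you account for how few miles it actually drives.

```python
# Toy illustration: deaths per mile driven vs. share of total deaths.
# ALL numbers are hypothetical placeholders chosen for illustration.

def deaths_per_billion_miles(deaths: int, total_miles: float) -> float:
    """Fatalities normalized by vehicle-miles traveled."""
    return deaths / (total_miles / 1e9)

total_miles = 3.2e12        # placeholder: total annual vehicle-miles

# Hypothetical fleet A: 2% of registered cars, 0.4% of deaths, but its
# cars are new and driven little, so only 0.3% of total miles.
fleet_a_miles = 0.003 * total_miles
fleet_a_deaths = 160        # 0.4% of 40,000 total deaths (placeholder)

# Everything else: the remaining miles and deaths.
rest_miles = 0.997 * total_miles
rest_deaths = 39840

rate_a = deaths_per_billion_miles(fleet_a_deaths, fleet_a_miles)
rate_rest = deaths_per_billion_miles(rest_deaths, rest_miles)

print(f"fleet A: {rate_a:.1f} deaths per billion miles")
print(f"rest:    {rate_rest:.1f} deaths per billion miles")
# By death share alone fleet A looks 5x safer (0.4% vs 2%), yet its
# per-mile rate comes out higher here -- the denominator matters.
```

The same trap appears whenever exposure differs between groups, which is why the comment also flags covariates like where the cars are driven and who drives them.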



