
Tesla’s Autopilot and Full Self-Driving linked to hundreds of crashes, dozens of deaths


267 comments
  • This is the best summary I could come up with:


    In March 2023, a North Carolina student was stepping off a school bus when he was struck by a Tesla Model Y traveling at “highway speeds,” according to a federal investigation published today.

    The Tesla driver was using Autopilot, the automaker’s advanced driver-assist feature that Elon Musk insists will eventually lead to fully autonomous cars.

    NHTSA was prompted to launch its investigation after several incidents of Tesla drivers crashing into stationary emergency vehicles parked on the side of the road.

    Most of these incidents took place after dark, with the software ignoring scene control measures, including warning lights, flares, cones, and an illuminated arrow board.

    Tesla issued a voluntary recall late last year in response to the investigation, pushing out an over-the-air software update to add more warnings to Autopilot.

    The findings cut against Musk’s insistence that Tesla is an artificial intelligence company that is on the cusp of releasing a fully autonomous vehicle for personal use.


    The original article contains 788 words, the summary contains 158 words. Saved 80%. I'm a bot and I'm open source!

    • Cameras and AI aren't a match for radar/lidar. This is the big issue with the approach to autonomy Tesla takes. You have only a guess at whether there are hazards in the way.

      Most algorithms are designed to work and then statistically tested to confirm that they work. When you develop an algorithm with AI/machine learning, there is only the statistical step: you have to infer whole-system performance purely from that. There isn't a separate process for verification and validation. It's validation alone.

      When something is developed with only statistical evidence of it working, you can't be reliably sure it works in most scenarios, only in the exact ones you tested for. When you design an algorithm to work, you can assume it works in most scenarios if the results are as expected when you validate it. With machine learning, the algorithm itself is obscured and uncertain (unless it's only used for parameter optimisation).
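      To make the distinction concrete, here's a toy sketch (nothing to do with Tesla's actual pipeline; all names are made up). A hand-written braking rule can be verified by checking its explicit logic over the whole input range, while a threshold "learned" from sampled data can only be validated on the samples you happened to draw, so it can quietly disagree with the spec in a band no finite test set rules out:

      ```python
      import random

      # Hand-written rule: brake if an object is within the stopping distance.
      # The logic is explicit, so we can VERIFY it: check the property over the
      # whole (discretised) input domain.
      def brake_rule(distance_m: float, stopping_distance_m: float = 30.0) -> bool:
          return distance_m <= stopping_distance_m

      # Verification: the property holds everywhere, by construction.
      assert all(brake_rule(d) for d in range(0, 31))
      assert not any(brake_rule(d) for d in range(31, 200))

      # "Learned" rule: a threshold fitted from sampled data stands in for a
      # trained model. We can only VALIDATE it statistically.
      def fit_threshold(samples):
          # largest distance labelled "brake" in the training data
          return max(d for d, label in samples if label)

      random.seed(0)
      train = [(d, d <= 30.0) for d in (random.uniform(0, 200) for _ in range(500))]
      learned_threshold = fit_threshold(train)

      # Validation: accuracy on held-out samples looks fine...
      test = [(d, d <= 30.0) for d in (random.uniform(0, 200) for _ in range(500))]
      accuracy = sum((d <= learned_threshold) == label for d, label in test) / len(test)
      print(f"held-out accuracy: {accuracy:.3f}")

      # ...but the fitted threshold is almost certainly not exactly 30.0, so a
      # band of distances just under 30 m won't trigger braking, and sampling
      # alone can never prove that band is empty.
      print(f"learned threshold: {learned_threshold:.2f} m (spec: 30.00 m)")
      ```

      The validation numbers look great, yet the only way to know the learned rule matches the spec everywhere is to inspect the rule itself, which is exactly what you can't do with an opaque model.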

      Machine learning is never used because it's a better approach. It's only used when the engineers don't know how to develop the algorithm directly. Once you understand this, you understand the hazard it presents. If you don't understand this, or refuse to, you build machines that drive into children. Through ignorance, greed, and arrogance, Tesla built a machine that deliberately runs over children.
