93 comments
  • When a service is willing to take responsibility for collisions and driving violations, then we know it works. If the guy asleep at the wheel (which he allegedly can do in an autonomous car) is still the one held responsible, then we're not there yet.

    That said, "end-to-end AI" totally sounds like equivocal marketing buzz.

    • When a service is willing to take responsibility for collisions and driving violations

      Devil's advocate: it's kinda hard to pin the responsibility on Tesla when at the end of the day there was a person driving and the driver's always responsible.

      I'm not disagreeing with you; I'm on team ban-human-drivers.

      • Ideally, we'd get to the point where the driver merely directs the vehicle to where they want to go, and the computer system works out all the pathfinding and maneuvering, so that yes, any instance where a vehicle avoidably collides with another thing can be regarded as a malfunction.

    • I wonder what happens when the car is on a collision course with a golden retriever and the only way not to hit it would be to damage the car. Or same scenario, but the only way not to hit it is to hit an '07 Corolla parked on the side of the road. Not saying humans have superior judgement... just wondering if it will be programmed according to actuarial or philosophical reasoning.

  • As a cyclist I really do look forward to the day where good AI is consistently better than the average-to-worst drivers out there; the bar is depressingly low and the stakes are high.

    I write (and test) software for a living and my experience with Tesla as a consumer device is that it's many generations away from being something I would trust.

    Also, I've seen what happens to product quality when management overrides its engineers the way Elon does: we get pre-alpha quality out there in the wild, being tested on a public that didn't sign up for that shit.