Here's the specific timestamp of the incident you mentioned in case you wanted to actually see it: https://youtu.be/aqsiWCLJ1ms?t=1190
The car wanted to move through the intersection on a green left-turn arrow. I've seen a lot of human drivers do the same. In any case, it's fixed now and was never part of any public release.
The video didn't end there; that was near the middle. What you're referring to is a regression specific to the HW3 Model S that failed to recognize one of the red lights. I'm sure that sounds like a huge deal, but here's the thing...
This was a demo of a very early alpha of FSD 12 (the current public release is 11.4.7), which represents a completely new and more efficient way of using the neural network for driving, and the bug has already been fixed. It isn't released to anyone outside a select few Tesla employees. Other than that, it performed flawlessly for over 40 minutes in a live demo.
I get that this is an alpha, but the problem with full self-driving is that this failure threshold is way worse than what users will accept. If ChatGPT gave you perfect information for 40 minutes (it doesn't) and then one huge lie, we'd still be using it everywhere, because you can validate the lies.
With FSD, that threshold means a lot of people would have terrible accidents. No amount of perfect driving outside of that window would make you feel very happy.
I didn't say FSD was an LLM; my comment was implementation-agnostic. My point was that drivers are far less forgiving of what programmatically seems like a small error than someone who is just trying to generate an essay.
Maybe so, but from where I stand the primary goal should be "better driver than a human," which is an incredibly low bar. We're already quite a ways past that, and it's getting better with every release. FSD today is nearly 100% safe; most of the complaints now are about how it drives like a robot by obeying traffic laws, which confuses a lot of other drivers. There are still some edge cases left to be ironed out, like really heavy rain and some icy and snowy conditions. People are terrible drivers in those conditions too, so that's no surprise. It will get there.
Oh man I definitely agree here. I'm a huge fan of that "better than a human" threshold. Roads are already very dangerous. One of the wildest things I've noticed is highway driving at night in very rainy conditions, sometimes visibility will be near zero. Yet a lot of drivers are zooming around pretending they can see. I feel like I'm in the twilight zone when it happens.
It has to perform flawlessly 99.999999% of the time. The number of 9s matters. Otherwise, you're paying some moron to kill you and perhaps other people.
OK, so I'm totally in agreement, but 99.999999% is one accident per hundred million miles traveled. I don't think there should be any reasonable expectation that such a technology can ever get that far without real-world testing, which is precisely where we are now. Maybe at 4 or 5 nines currently.
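The arithmetic behind the 9s is worth making concrete. Here's a minimal sketch (the function names are mine, not from the thread, and it assumes reliability is measured per mile): each extra 9 multiplies the expected distance between incidents by ten.

```python
def failure_rate(nines: int) -> float:
    """Per-mile failure probability for 99.9...% reliability with `nines` nines."""
    return 10.0 ** -nines

def miles_per_failure(nines: int) -> int:
    """Expected miles between failures: the reciprocal of the failure rate."""
    return 10 ** nines  # exact integer arithmetic

# 8 nines (99.999999%): one incident per 100,000,000 miles
print(miles_per_failure(8))  # 100000000
# 4 nines (99.99%), roughly where the commenter guesses FSD is today:
print(miles_per_failure(4))  # 10000
```

So the gap between "4 or 5 nines" and 8 nines isn't incremental polish; it's three to four orders of magnitude fewer incidents per mile.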
If you do actually want that level of safety, which, let's be honest, we all do (ideally 100%), how would you propose such a system be tested and deemed safe, if not the way it's currently being done?