Researchers use Baidu's platform to show how the fusion of lidar, radar, and cameras can be fooled by stuff from your kids' craft box
A team of researchers from prominent universities – including SUNY Buffalo, Iowa State, UNC Charlotte, and Purdue – was able to turn an autonomous vehicle (AV) running Apollo, the open-source driving platform from Chinese web giant Baidu, into a deadly weapon by tricking its multi-sensor fusion system, and the team suggests the attack could be applied to other self-driving cars.
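To see why fusing several sensors doesn't automatically defeat a physical adversarial object, here is a minimal sketch of confidence-weighted fusion. This is not Apollo's actual MSF code; the sensor names, weights, and braking threshold are all assumptions for illustration. The point, per the paper, is that a single crafted object can depress the scores of multiple modalities at once, so the fused decision flips even though one sensor still sees the obstacle:

```python
# Hypothetical sketch, NOT Apollo's actual MSF implementation: a naive
# confidence-weighted fusion of three obstacle detectors. Weights and
# threshold below are made-up illustrative values.
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    obstacle_score: float  # detector confidence that an obstacle is ahead, 0..1
    weight: float          # trust the fusion stage assigns to this sensor

def fused_obstacle_score(readings: list[SensorReading]) -> float:
    """Weighted average of per-sensor confidences -- a toy stand-in for MSF."""
    total_weight = sum(r.weight for r in readings)
    return sum(r.obstacle_score * r.weight for r in readings) / total_weight

BRAKE_THRESHOLD = 0.5  # assumed decision boundary for emergency braking

# Normal case: every sensor sees the obstacle, so the car brakes.
normal = [
    SensorReading("lidar",  0.90, 1.0),
    SensorReading("radar",  0.80, 0.8),
    SensorReading("camera", 0.85, 1.0),
]

# Attacked case: an adversarial object shaped and textured so that the
# lidar and camera models both score it low at the same time, which is
# the scenario the researchers demonstrated.
attacked = [
    SensorReading("lidar",  0.20, 1.0),
    SensorReading("radar",  0.80, 0.8),
    SensorReading("camera", 0.15, 1.0),
]

for label, readings in (("normal", normal), ("attacked", attacked)):
    score = fused_obstacle_score(readings)
    action = "BRAKE" if score >= BRAKE_THRESHOLD else "drive on"
    print(f"{label}: fused score {score:.2f} -> {action}")
```

Run as-is, the normal case fuses to roughly 0.85 (brake) while the attacked case fuses to roughly 0.35 (drive on): radar alone can't outvote two spoofed modalities in a weighted average, which is the weakness the attack exploits.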
TL;DR: faking out a self-driving system is always going to be possible, and so is faking out humans. But doing so is basically attempted murder, which is why an exploit like this is neither interesting nor new. You could also cut the brake lines or rig a bomb to the car.
What is the purpose of accountability other than to force people to do better? If the lack of accountability doesn't stop a computer from outperforming a human, why worry about it?
The lack of accountability means that there is nothing and no one to take responsibility when the robot/computer inevitably kills someone. A human can face legal ramifications for their actions; the companies that make these computers have thus far shown themselves to be exempt from such consequences.
That is true for most current "self driving" systems, because they are all just glorified assist features. Tesla is massively misleading its customers with its advertising, but on paper it's very clear that the car will only assist in safe conditions; the driver needs to be able to react immediately at all times and is therefore also liable.
However, Mercedes (I think it was them) has started to roll out a feature where they will actually take responsibility for any accidents that happen due to the system. For now it's restricted to nice weather and a few select roads, but the progress is there!
The driverless robo-taxis are also a concern. When one of them killed someone in San Francisco, there was no clear responsible entity to charge with the crime.
That is simply not true. The law has, since basically forever, held that manufacturers are liable if their product malfunctions and hurts someone while it's being operated in accordance with their instructions.
Edit: I hope all y'all who think the rule of law doesn't exist are gonna vote against the felony party.
Excuse us for being sceptical that businesses will actually be held accountable.
We know legally they are, but will forced arbitration or delayed court proceedings mean people too poor to afford a good lawyer for long will have to fuck off?
The current court cases show that manufacturers are trying to fob off responsibility onto vehicle owners by way of TOS agreements with lots of fine print. Tesla in particular is getting slammed for false advertising about the capabilities of its self-driving features while it simultaneously tries to force all legal liability onto the drivers who believed that advertising.
I think human responses vary too much: could you follow a strategy that reliably makes 50% of human drivers crash? Probably. Could you follow a strategy that reliably makes 100% of autonomous vehicles crash? Almost certainly, since they all run identical software and will fail in identical ways.
More exciting would be an exploit that renders an unmoving car useless. But exploits like this absolutely will be used in cases where tire-slashing might be used, such as harassing genocidal VIPs or disrupting police services, especially if it's difficult to trace the drone to its controller.
You don't even have to rig a bomb; a better analogy to the sensor spoofing would be to just shine a sufficiently bright light in the driver's eyes from the opposite side of the road. Things will go sideways real quick.
It’s not meant to be a perfect example. It’s a comparable principle. Subverting the self-driving system like that is more or less equivalent to any other means of attempting to kill someone with their car.