A pilot program testing AI-powered weapons scanners inside some New York City subway stations this summer did not detect any passengers with firearms — but falsely alerted more than 100 times.
In total, there were 118 false positives, a false-positive rate of 4.29% of all scans (implying roughly 2,750 scans over the pilot).
Earlier this year, investors filed a class-action lawsuit accusing company executives of overstating the devices’ capabilities; the suit alleges that “Evolv does not reliably detect knives or guns.”
I mean, in terms of performance, I'd be more concerned about the false positive rate than the false negative rate, given the context. If you miss a gun, that's at worst the status quo, which has been working; some money gets wasted on the machine. But if you're incorrectly stopping more than 1 in 25 riders from getting on their train, and you scale that to the city's whole ridership, that sounds like a monumental mess.
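For scale, a rough back-of-the-envelope in Python. The 118 false positives and the 4.29% rate are from the article; the ~3 million daily riders figure, and the assumption that the rate would hold with every rider screened, are mine:

    # Back-of-the-envelope: scale the pilot's false-positive rate up.
    FALSE_POSITIVES = 118        # reported in the pilot
    FP_RATE = 0.0429             # reported false-positive rate per scan

    # Implied number of scans in the pilot (pure arithmetic from the two figures).
    total_scans = FALSE_POSITIVES / FP_RATE
    print(f"implied pilot scans: ~{total_scans:,.0f}")        # ~2,751

    # ASSUMPTION: ~3M daily riders, 4.29% rate holding at full deployment.
    DAILY_RIDERS = 3_000_000
    daily_false_stops = DAILY_RIDERS * FP_RATE
    print(f"false stops per day, system-wide: ~{daily_false_stops:,.0f}")  # ~128,700

Even if you somehow cut the rate by an order of magnitude, that's still north of ten thousand wrongful stops a day.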
With how trigger-happy police are, the false positives could lead to more deaths than the scanners prevent. And police would claim it's justified because the machine told them so.
Facial recognition confirmed he was a criminal and the scanner confirmed he had a gun! Of course we opened fire instantly. How could we have known it was just some guy with a water bottle?