This AI hype cycle has dramatically distorted society's views of what's possible with image upscalers.
A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.
No computer algorithm can accurately reconstruct data that was never there in the first place.
Ever.
This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.
Ever.
If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, movement interpolation, and AI upscaling. It preemptively also includes any other future technology that aims to try the same thing, regardless of what it's called.
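To make that concrete, here's a minimal sketch (hypothetical pixel values, NumPy assumed available) of why no upscaler can recover lost detail: two completely different scenes can collapse to the exact same low-resolution data, so nothing in that data can tell any algorithm which scene was real.

```python
import numpy as np

# Hypothetical illustration: two different 2x2 "scenes" that produce
# the identical low-resolution pixel once detail is averaged away.
scene_a = np.array([[10.0, 90.0], [90.0, 10.0]])  # checkerboard
scene_b = np.array([[50.0, 50.0], [50.0, 50.0]])  # flat gray

# Downsampling each scene to a single pixel (mean pooling):
low_a = scene_a.mean()
low_b = scene_b.mean()
print(low_a, low_b)  # both are 50.0

# Any upscaler, interpolation or AI, sees only that single value 50.0.
# Nothing in the data says whether the original was a checkerboard or
# flat gray, so the "enhanced" output is a guess, not a recovery.
```

The downsampling throws the distinguishing information away permanently; everything an "enhancer" paints back in has to come from its priors, not from the scene.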
I think we need to STOP calling it "Artificial Intelligence". IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.
How long until we get upscalers of various sorts built into tech that shouldn't have them, for bandwidth reduction, storage compression, or cost savings? Can we trust what we capture with a digital camera when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled inside the judges' screens? Cheap security cams with "enhanced night vision" might get somebody jailed.
Imagine a prosecution or law enforcement bureau that has trained an AI from scratch on specific stimuli to enhance and clarify grainy images. Even if they were all totally on the up-and-up (they aren't, ACAB), training a generative AI or similar on pictures of guns, drugs, masks, etc. for years will lead to internal bias. And since AI makers pretend you can't decipher the logic (I've literally seen compositional/generative AI that shows its work), they'll never realize what it's actually doing.
So then you get innocent CCTV footage that this AI "clarifies", and it pattern-matches every dark blob into a gun. Black iPhone? Maybe a pistol. Black umbrella folded up at a weird angle? Clearly a rifle. And so on. I'm sure everyone else can think of far more frightening ideas, like auto-completing a face based on previously searched ones, or just plain old institutional racism.
According to the evidence, the defendant clearly committed the crime with all 17 of his fingers. His lack of remorse is obvious by the fact that he's clearly smiling wider than his own face.
The fact that it made it that far is really scary.
I'm starting to think that yes, we are going to have some new middle ages before going on with all that "per aspera ad astra" space colonization stuff.
Every photo you take with your phone is post-processed. Saturation gets boosted, light levels adjusted, noise removed, night mode applied, all without you being privy to what's happening.
Typically people are okay with it because it makes for a better photo, but is it a true representation of the reality it tried to capture? Where is the line in the definition of an AI-enhanced photo/video?
We can currently make the judgement call that a phone's camera is still a fair representation of the truth, but what about when the 4K AI-Powered Night Sight Camera does the same?
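As a toy illustration of the point (made-up pixel values, not any phone's actual pipeline): even a simple brightness/saturation-style boost followed by clipping throws away information, so the saved photo already isn't the raw reality the sensor captured.

```python
import numpy as np

# Hypothetical raw sensor values for three pixels (0-255 scale).
raw = np.array([120.0, 200.0, 240.0])

# A simple "enhancement": boost by 1.5x, then clip to the displayable
# range, loosely like what an aggressive camera pipeline does.
enhanced = np.clip(raw * 1.5, 0, 255)
print(enhanced)  # [180. 255. 255.]

# Clipping is lossy: no later step can tell whether a 255 pixel
# started life at 200 or at 240. The processed file is already an
# interpretation, not a record.
```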
My post is only tangentially related to the original article, but I'm still curious what the common consensus is.
For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022. The theory started because people started running screenshots of the slap through image upscalers, believing they could get a better look at what was happening.
Sometimes I think, our ancestors shouldn’t have made it out of the ocean.
During Kyle Rittenhouse's trial the defense attorney objected to using the pinch to zoom feature of an iPad because it (supposedly) used AI. This was upheld by the judge so the prosecution couldn't zoom in on the video.
Yeah, this is a really good call. I'm a fan of what we can do with AI, but when you start looking at those upscaled videos with a magnifying glass... it's just making s*** up that looks good.
Lawyers for Puloka wanted to introduce cellphone video captured by a bystander that’s been AI-enhanced, though it’s not clear what they believe could be gleaned from the altered footage.
Large language models like ChatGPT have convinced otherwise intelligent people that these chatbots are capable of complex reasoning when that’s simply not what’s happening under the hood.
Sure, no algorithm is able to extract any more information from a single photo. But how about combining detail caught in multiple frames of video? Some phones already do this kind of thing, getting multiple samples for highly zoomed photos thanks to camera shake.
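That kind of multi-frame stacking is genuine signal recovery rather than invention. A rough sketch of the idea (simulated data, noise levels picked arbitrarily): averaging N noisy frames of a static scene shrinks the noise by roughly sqrt(N), because each frame really did capture an independent sample of the scene.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ground-truth scene (one row of pixels).
truth = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0])

# Simulate 100 video frames of the same static scene, each with
# independent sensor noise.
frames = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(100)]

# Stacking: average the frames. Independent noise cancels, so the
# estimate gets closer to the truth -- detail pulled out of data that
# really was captured, just spread across frames.
stacked = np.mean(frames, axis=0)

err_single = np.abs(frames[0] - truth).mean()
err_stacked = np.abs(stacked - truth).mean()
print(err_single, err_stacked)  # stacked error is much smaller
```

The key difference from AI upscaling: this only works because the extra information exists across the frames. A single frame has no such reservoir to draw on.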
Still, the problem remains that the results from a cherry-picked algorithm or outright hand-crafted pics may be presented.
Think about how they reconstructed what the Egyptian Pharaohs looked like, or what a kidnap victim who was taken at age 7 would look like at age 12. Yes, it can't make something look exactly right, but it also isn't just randomly guessing. Of course, it can be abused by people who want juries to THINK the AI can perfectly reproduce stuff, but that is a problem with people's knowledge of tech, not the tech itself.