haha, what a bunch of reactionaries. I love how they point to a completely unreadable blur on the original, and go "Seee?! You can't read that in the upscaled version!!" -- as if you could in the original.
The upscales look great, with the exception of some fine detail in the shadows. Especially in cases where they don't have the original content, I think it's a perfectly fine way to go about things.
The point is it’s fake 4K, and in a real remaster from the film stock, the text in that example would be more readable. I understand that the source material isn’t available for all of the films.
I think it’s totally fair to either ask for AI upscaling to stop happening on purchasable media, or to ask for it to be clearly labeled as such.
Why? People keep buying it. The method of upscaling doesn't matter to most people, only to enthusiasts. It's also allowing a lot more work to be revived from the dead; overall, I think the benefits far outweigh the problems.
For those who care. Just like how audio formats are labeled, and multiple options are provided. Even though most people don’t care.
Just label it “AI enhanced” or “AI upscaled to 4K”. It’s not too difficult, and it could be a tiny little box next to the other technical details people ignore.
There would be far less of the “reactionary” reaction if people were not “discovering” the situation after release, and potentially after preordering.
Edit: also, to be clear, I don’t think AI upscaling is bad in all cases. I watch Deep Space Nine AI upscaled. And that’s another example of source material not being available.
Criticism of AI taking jobs is a fair take. Even if AI only makes the workflow more efficient, that's fewer people who need to be hired in the industry. It's fair to recognize that AI is valuable and doing amazing things while also criticizing the downsides of using it.
AI is in its infancy. Different techniques are only going to make it better and better at what it does - it will end up taking jobs. But just broadly claiming it's bad because you can't read words that already weren't readable in the source material is silly.
The reactionary takes are "AI is useless, nobody should be using it" and "They should label everything that uses AI so I can avoid it!" The takes complaining about the 'quality' of the AI upscale, even though it looks FANTASTIC, are reactionary too. An upscale wouldn't be available at all if AI tools didn't exist to do it. So it's clearly worth using. It's produced a result that I'm sure MANY are happy with. There's only a small handful of extraordinarily LOUD individuals making a fuss over it.
I think the biggest criticism of AI is the one that almost NOBODY actually complains about: jobs. Wages are already lower than they have been in 20 years, and NOW it takes EVEN fewer people to do the same job? Wealth inequality is only going to be exacerbated by AI.
Edit: Also wasn't me who downvoted your question asking me to define reactionary -- so I upvoted to try and counter it, it's a fair question to ask I think.
I think labeling things made by AI is a reasonable request. In this specific example, someone who's buying 4K Wallace & Gromit is doing so out of a love of claymation and Aardman's work in it. They want it in high definition specifically to see the details that went into a handcrafted set and characters. Getting a smoothed-over statistical average, when you paid for it expecting the highest-quality archive of an artistic work, would be more frustrating than just seeing it in lower definition.
More generally, don't people working with these models also want AI output to be properly labeled? As I understand it, a model starts to degrade when its own output is fed back into its training data. With the rapid proliferation of AI posting, I've heard you can't even train large language models to the same level of quality as you could before this stuff was released to the general public.
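That degradation claim can be illustrated with a toy sketch (a hypothetical Gaussian "model" fit to its own samples, not a real language model): each generation trains only on the previous generation's output, and because every fit comes from a finite sample, the spread of the original data is gradually lost.

```python
import random
import statistics

# Toy illustration of "model collapse" (hypothetical example, not a
# real LLM): each generation's "model" is just a Gaussian fit to
# samples drawn from the previous generation's model. Finite-sample
# fitting slowly loses the tails, so the spread shrinks over time.
random.seed(42)

def fit(samples):
    # "Train" a model: estimate mean and stdev from the data.
    return statistics.mean(samples), statistics.stdev(samples)

# Generation 0 trains on "human" data drawn from N(0, 1).
mu, sigma = fit([random.gauss(0.0, 1.0) for _ in range(20)])
initial_sigma = sigma

# Each later generation trains only on the previous model's output.
for _ in range(500):
    mu, sigma = fit([random.gauss(mu, sigma) for _ in range(20)])

print(f"stdev after 500 generations: {sigma:.6f} "
      f"(started near {initial_sigma:.3f})")
```

In this sketch the final standard deviation ends up far below the original, which is the same qualitative story as the reported LLM degradation: variety in the training data disappears once a model eats its own output.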
I'm also kinda skeptical that this stuff has as many applications as are being touted. Like, I've seen some interesting stuff for folding proteins or doing statistical analysis of particle physics, but outside highly technical applications... kinda seems like a solution in search of a problem. It's something investors really, really like for their stock valuations, but I just don't see it doing much for actual workers. At most, maybe it eliminates some middle-management email jobs.
These models can do a LOT of different things. If you don't see that, that's an education problem, not an AI problem.
And combining these capabilities in new and unique ways is only going to make things even more wild. It won't be very long at all before my "Ummmmm, I'll have aaaaaaaaa" order at McDonalds doesn't need to be taken by a human being and there's just a single dude in the back running the whole place. That's disruptive on an economic level never before seen. THAT is why companies the world over are so heavily invested in AI. It's finally reached a threshold where it can replace real labor, and labor accounts for one of the largest portions of expenditure for companies. The cost of the electricity to run this stuff is FAR lower than what it takes to pay a person for the same output.