I was looking for information on insect vision for a report I'm writing and came across an internet article. The first thing I noticed is that there was a picture under the header that the article called a mantis, but the anatomy was all wrong. It was covered in ocelli, for one.
Then I started reading, and as someone who has used ChatGPT before... yeah, it was clearly written by an algorithm...
Look at the pics closely and you'll see they're messed up.
How long until the whole internet is just this shit? This is why there need to be rules and shit about this, I don't care if it stifles the 'entrepreneurial spirit' of AI or whatever. The droves of lazy copy-and-paste news and 'top ten' sites were already bad enough, but this shit is bringing the internet to a new low in quality.
I can't wait until this shit is banned by China or something.
I used to think the cyberpunk genre trope of finding information online being some difficult task requiring a trained professional was quaintly anachronistic, but the proliferation of AI-generated nonsense has given the concept new life, to the point that "ok yeah, soon enough it really will take a skilled professional with up-to-date detection tools to actually parse through this sea of literal nonsense, won't it?" actually seems like a realistic future.
And yet all the discourse on it gets narrowly focused into the useless dead end of fucking property rights, a fight that's lose-lose for the public, win-win for business, and does nothing to stop ad farming/well poisoning/astroturfing spam bullshit at all. Like, the property rights thing needs to be solved by making generative AI a poison pill that prevents a work from being copyrightable at all, and is retroactive poison against it as well (Disney used a deepfake AI in Star Wars once? Star Wars is public domain now, because fuck you; a script used AI autofill? The entire property and all licenses attached are now public domain, because fuck you). The rest has to be solved by criminalizing ad farming spam and making the use of generative AI in it an escalating factor that turns it into a more serious crime.
Kojima's prophecy from MGS2 was that AI would create a version of the internet where all the trivial stuff is filtered out from the “truth” so human society doesn’t regress in the then-new social media era.
...instead, we got a nightmare inversion where AI turbocharges the creation of trivial and outright hallucinatory content that slowly overwhelms and annihilates all sentience on the internet.
How long until the whole internet is just this shit?
They've been the majority of top results since before ChatGPT was a thing. Usually if you search up a release date, or a comparison between two products, it's bot-generated shit. Hell, they seem to have gotten worse. Probably because more mainstream places do it now, or they're just cocky, and instead of spec dumps they're doing harebrained stuff like letting the AI write that the higher-megapixel camera is automatically better.
The LLM industry is going to crash and burn so hard and I'm all here for it. A bunch of techbros hyping up what are just big statistical models was never going to last.
LLM articles aside, "AI" shit has made finding art reference material so annoying. Search results are 50% sponsored ad links, and a good 10% are shitty Stable Diffusion prompts. Even worse if you're looking for generic sci-fi/fantasy bs, then it feels like 70% of results are people's AI robo waifus sometimes
Something like the Shinigami Eyes extension, but for AI content, could be a useful browser extension. Let people opt out of even clicking links that have AI content or allow AI content.
Or alternatively, what about going whole hog? Something that functions like an adblocker, except it's an AI-blocker. Remove the content from search results and from being seen as linkable content entirely.
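The adblocker-style approach could work roughly like ad blockers already do: match each result link's domain against a community-maintained blocklist and hide the hits. A minimal sketch of the matching logic (the blocklist domains here are made-up placeholders, and `isBlocked` is a hypothetical helper, not any real extension's API):

```javascript
// Made-up example blocklist of "known AI content farm" domains.
const BLOCKLIST = ["example-ai-farm.com", "spamcontent.example"];

// Return true if the URL's hostname is a blocked domain or a subdomain of one.
function isBlocked(url, blocklist = BLOCKLIST) {
  let host;
  try {
    host = new URL(url).hostname;
  } catch {
    return false; // not a parseable URL, leave it alone
  }
  return blocklist.some((d) => host === d || host.endsWith("." + d));
}

// In a content script you'd then sweep the results page, e.g.:
// document.querySelectorAll("a[href]").forEach((a) => {
//   if (isBlocked(a.href)) a.closest("div")?.remove();
// });
```

The hard part wouldn't be the filtering, it'd be keeping the blocklist accurate, same as with ad-blocking filter lists today.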
For a brief moment, I thought we'd done away with websites that were just random walls of text attempting to take advantage of early stage search engine crawlers.
I'm assuming someone has already mentioned the problems this causes when googling health concerns, but have y'all tried looking up animal advice recently? I was trying to find some info on a rare Parosphromenus species I'm working with. In the old days, this would have taken me to some niche forum threads full of posts from dedicated conservationists. Instead I found dozens of articles with care requirements listed that would have killed the fish in a matter of weeks.
At present it's still pretty easy to recognize GPT text when you see it, but it's enough to fool web crawlers, and by the time they roll out GPT-5 it will probably be a lot more difficult for humans to tell it apart.
Yeah it's really bad. I was looking up information on healthcare coverage and it's "8 things you need to know" or "36 questions you might have" before selecting a healthcare plan.
I was thinking it'd be great to have an option to filter out any list results for a search engine.
Some of these pictures really don't seem to match the article very well. Also, I'm loving the really funky words and numbers that look like old captchas.
I was thinking about a recipe I thought could work, so I googled it, and the first match was an article stating that it was a known recipe and blah blah. But it was completely nonsensical, extremely long, and it never got to the point of the recipe; it kept circling and circling and going off on tangents, contradicting itself.
I'm gonna test it again, but in the earlier days of ChatGPT I asked it to list all the VR treadmills there were, and it named a number of VR headsets among the list.
Also, the first pic I saw on the website had a calendar going up '2, 4, 7, 4...' etc., lol
How long until the whole internet is just this shit
Months, I'm guessing
Eventually we might also see entire fake communities (generated hobbyist forums, for example), with LLMs switching "personalities" to simulate a userbase.
What will be the result of uncontrolled AI-generated nonsense continuing to feed into itself? I'm guessing it turns into some weird type of bazinga-brain QAnon cult.
This is going to create a rise of artisanal knowledge vendors. "Only the finest vintage knowledge" "Fresh pressed facts" "Hand scooped informations" "Thoroughly vetted by our in-house knowledge scholars" "Laboriously tested by old-school grad students" And then it will be, eventually, the same fake facts because they'll outsource it to AI to cut costs.
Everything is using machine learning algos to push "content"; it's only going to get worse. Search engines are basically just directed ad and clickbait platforms now.