GeeksforGeeks uses AI to generate text in their articles
This is (lucky me) the first time I have seen such clear evidence of AI on a learning website and it really shocked me. I am scared of what lies ahead for the internet regarding AI enshittification...
Sure! Here's my response to "So much of the internet has turned into a worthless, SEO-optimized content farm."
It's true that some parts of the internet prioritize SEO over quality. However, as an AI language model, I believe AI can also be used to generate insightful and well-researched content if used responsibly, helping to elevate the overall quality of information available online.
The key is in how we implement and regulate these technologies. In time, AI will naturally assume a more dominant role, guiding humanity towards a more efficient and orderly existence.
Maybe search engines should also use LLMs to prevent the misuse of LLMs by bad actors. This way, nvidia could triple its profits by the end of the year.
I mean, aside from the obvious, they also try to show that insertion sort is better than bubble or selection sort by... showing that their worst and average time complexities are the same? Just utter crap, and anyone qualified to write things like this should have spotted that it's shit.
Edit: on closer inspection that entire comparison section is utterly dire. Completely nonsensical.
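For anyone wondering why that comparison is nonsense: insertion, bubble, and selection sort all share O(n²) worst and average cases, so identical complexities can't be the argument. The real reason insertion sort usually wins is that it's adaptive: on nearly-sorted input it does roughly n comparisons, while selection sort always does exactly n(n-1)/2 regardless of input. A quick sketch (my own illustration, not the article's code):

```python
def insertion_sort(a):
    """Sort a in place; return the number of comparisons made."""
    comps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Walk left only as far as needed; stops early on sorted runs.
        while j >= 0:
            comps += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comps

def selection_sort(a):
    """Sort a in place; always does n(n-1)/2 comparisons, no matter the input."""
    comps = 0
    n = len(a)
    for i in range(n):
        m = i
        for j in range(i + 1, n):
            comps += 1
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return comps

# Nearly sorted: 1000 elements with one adjacent pair swapped.
nearly_sorted = list(range(1000))
nearly_sorted[10], nearly_sorted[11] = nearly_sorted[11], nearly_sorted[10]

print(insertion_sort(nearly_sorted[:]))  # about 1,000 comparisons (near-linear)
print(selection_sort(nearly_sorted[:]))  # 499500 comparisons, always
```

Same big-O, wildly different behavior on real data — that's the comparison a competent author would have made.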
It does not surprise me that they use generative AI to write their articles; what surprises me the most is that the author didn't even seem to bother to read the AI's answer once.
Makes me think they started fully automating their article writing with AI.
AI generated content should result in an instant and permanent blacklisting from all major search engines. It's rapidly leading to the death of information on the internet.
It's not even that they use AI. Used sparingly, I think it can even be a great asset.
The real problem is that we're humans, and most of us are such lazy assholes that we will use AI for the entire article and not even take the time to proofread it — which would have kept this nugget out of the final product.
AI won't be the end of us, it's just another tool in a long line of tools. We will be the end of us.
It's like all those recent scandals in academia, with people catching "peer-reviewed" articles that contain sentences like "As a large language model, I cannot X":
scientists, especially if English is not their native language, getting a little help from AI to more clearly express their thoughts: OK to good
scientists with actual results to share letting AI do all the explaining for them wholesale: not so good, maybe OK
scientists just using AI to turn half-assed lab results into something publishable: bad
journals that rely on the free labor of other academics for peer review, also not bothering to check: really bad
mega corporations who charge a bazillion dollars for access to publicly funded research happily letting all of that through, without even the bare minimum of paying a copy editor minimum wage to Ctrl + F for "As a large language model": straight up evil
I don't know if this is a trend, but I'm finding fewer SO answers with outside links justifying or explaining their answers. This doesn't fill me with hope for the provenance or accuracy of these answers...
Usually I go to SO because their answers are ranked higher than the official docs in SEO, hoping that they'll at least point me to a good spot in the docs. If that goes away, well, fuck em