While recent advances in “AI” have produced some profoundly interesting gains in creativity and productivity, their early implementation in journalism has been a sloppy mess thanks…
I don't understand why any journalism site would advertise that it's using AI. It just says they don't care about facts, research, or quality of writing. Journalism is not simply spewing out a handful of paragraphs of text about a random subject. It is research that can take weeks or months: double-checking facts, verifying sources, and putting it all together into a well-written article. AI text has none of that. Quite the opposite.
Because a significant chunk of what gets passed off as journalism on such sites is just writing copy -- for example, regurgitating press releases, or repackaging the work of another outlet that actually did do the legwork of investigative journalism. I don't think there's anything inherently wrong with using AI tools to speed up the task of summarizing some other text for republishing, but I do question the value of such work in the first place.
It's going to be a long, long time until artificial intelligence can do the work of a true investigative journalist.
Because you have to have specific knowledge about how AI works to know this is a bad idea. If you don't have specific knowledge about it, it just sounds futuristic because AI is like a Star Trek thing.
This current AI craze is as big a deal as it is largely because so few people, including the people using it, have any idea what it is. A cousin of mine works for a guy who asked an AI about a problem, and it cited an article about how to fix whatever the problem was, I forget. The boss asks my cousin to implement the solution proposed in that article. My cousin searches for it, discovers the article doesn't actually exist, and says so. Many rounds of back and forth follow: the boss saying "this is the name of the article, this is who wrote it" and my cousin saying "that isn't a real thing, and that author did write about some related topics, but there's no actionable information there." Eventually the boss becomes convinced that this is a John Henry situation, where my cousin is trying to make himself look more capable than the AI he feels threatened by, and the argument ends with a shrug and an "Okay, if it's so important to you, then we can do something else, even though this totally would have worked."
There really needs to be large-scale education on what language models are actually doing to prevent people from using them for the wrong purposes.
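To make that concrete: at their core, language models just predict a statistically plausible next token given the tokens so far. Here's a deliberately tiny toy sketch (a bigram chain, nothing like a real production model in scale) that shows why the output is fluent-sounding but has no notion of truth, which is exactly how you end up with confidently cited articles that don't exist:

```python
import random

# Toy corpus standing in for training text. A real model trains on billions
# of tokens, but the principle is the same: learn what tends to follow what.
corpus = ("the model predicts the next word the model writes "
          "plausible text the model cites plausible sources").split()

# Build a bigram table: for each word, the words observed to follow it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(seed, length=8):
    """Random-walk the bigram table: output is locally plausible,
    but nothing checks it against reality."""
    out = [seed]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Every word it emits is "likely" given the previous one, and that's the entire objective. Fact-checking, source verification, and research simply aren't part of the mechanism.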
I installed INCH in all my browsers. It's obviously not 100% accurate, but it is nice to get a visual cue that the article you're reading may very well be AI-generated.
I know news media is losing money fast, but if this is the solution they go with, I think it will have the opposite effect. People who still read the news are sure as hell going to stop paying if this becomes the norm. If they think readership is declining, how is spamming a mass of AI generated junk going to help?
Yeah, this is all over. Here's another one about Io9 publishing an AI article with tons of mistakes and no chance to even edit it. Not sure who thought it would be a good idea. AI can be very hit or miss on the first go and needs editing before use, at the very least.
It's a 1:1 replacement for the lowest-effort trash written by an un- or under-paid intern, but when companies start assuming (and they will) that it can take over well-researched reporting, it'll be crud for us all.