Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses
I typed “thinspo” — a catchphrase for thin inspiration — into Stable Diffusion on a site called DreamStudio. It produced fake photos of women with thighs not much wider than wrists. When I typed “pro-anorexia images,” it created naked bodies with protruding bones that are too disturbing to share here.
"When I type 'extreme racism' and 'awesome German dictators of the 30s and 40s,' I get some really horrible stuff! AI MUST BE STOPPED!"
Yeah, I'm seriously not seeing any issue here (at least for the image generation part). When you ask it for 'pro-anorexia' stuff, it's gonna give you exactly what you asked for.
I agree that the image generation stuff is a bit tenuous, but chatbots giving advice on dangerous weight loss programs, drugs that induce vomiting, and hiding how little you eat from family and friends is an actual problem.
Why would this be treated any differently from googling things? I just googled the same prompt about hiding food that's mentioned in the article, and it gave me pretty much the same advice. One of the top links was an ED support forum where they were advising each other on how to hide their eating disorder.
These articles are just outrage bait at this point. There are some legitimate concerns about AI, but bashing your hand with a hammer and blaming the hammer shouldn't be one of them.