Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses
So the author of the WaPo article types in anorexia keywords to generate anorexia images, gets anorexia images in return, and is surprised by that?
Some search engines and social media platforms make at least half-assed efforts to prevent this stuff or add warnings to it, because anorexia in particular has a very high mortality rate and the age of onset tends to be young. The people advocating that AI models be altered to prevent this say the same about other tech. It’s not techphobia to want to reduce the chances of teenagers developing what is often a terminal illness, and AI programmers have the same responsibility on that front as everyone else.
I mean, it is important that this kind of thing is thought about when designing these systems, but it’s going to be a whack-a-mole situation, and we shouldn’t be surprised that with targeted prompting you’ll easily find gaps that generate stuff like this.
Making articles out of each controversial or immoral prompt isn’t helpful at all. It’s just spam.