An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.
Look, I hate racism and inherent bias toward white people, but this is just ignorance of the tech. Willful or otherwise, it’s still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing. It’s going to make a similar image of another white chick. To get it to reliably recreate your facial features, it needs to be trained on your face. It works for celebrities for this reason, not for a random “Asian MIT student”. This kind of shit sets us back and makes us look reactionary.
It’s less a reflection on the tech, and more a reflection on the culture that generated the content that trained the tech.
Wang told The Globe that she was worried about the consequences in a more serious situation, like if a company used AI to select the most "professional" candidate for the job and it picked white-looking people.
This is a real potential issue, not just “clickbait”.
If companies pick the most "professional" applicant by their photo, that is a reason for concern, but it has little to do with the AI's image training data.
It isn't like it isn't going to happen. You have all these headhunter organizations lurking on LinkedIn, choosing who to contact. Who knows what tea-leaf-reading bullshit they're going to try?
At least with a big corporation there is some sorta paper trail. Something that could be used in an internal review or produced during a lawsuit.
Some people (especially in business) seem to think that adding AI to a workflow will make obviously bad ideas somehow magically work. Dispelling that notion is why articles like this are important.
(Actually, I suspect they know they’re still bad ideas, but delegating the decisions to an AI lets the humans involved avoid personal blame.)
Businesses will continue to use bandages rather than fix their root issue. This will always be the case.
I work in factory automation and almost every camera/vision system we've installed has been a bandage of some sort because they think it will magically fix their production issues.
We've had a sales rep ask if our cameras use AI, too. 😵💫
I have Asian friends who have used these tools and generated headshots that were fine. Just because this one Asian student used a model that wasn't trained for her demographic doesn't make it a reflection of anything other than the fact that she doesn't understand how ML models work.
The worst thing that happened when my friends used it were results with too many fingers or multiple sets of teeth 🤣
No company would use ML to classify who's the most professional looking candidate.
Anyone with any ML experience at all knows how ridiculous this concept is. Who's going to go out there and create a dataset matching "professional-looking scores" to headshots?
The amount of bad press and ridicule this would attract isn't worth it to any company.
Companies already use resume scanners that have been found to be biased against black-sounding names. They’re designed to create a feedback loop from successful candidates, and guess what shit the ML learned real quick?
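Here's a toy sketch of that loop, with purely synthetic data and a made-up name-derived proxy feature (not any real vendor's screener):

```python
# Toy sketch of the feedback loop: a screener trained on past (biased)
# hiring decisions learns the bias itself. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

qualification = rng.normal(size=n)     # identically distributed in both groups
group = rng.integers(0, 2, size=n)     # 1 = the group past recruiters penalized

# Historical decisions: equally qualified, but group 1 was hired less often.
hired = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Stand-in for a name-derived proxy feature that tracks group membership.
name_proxy = group.astype(float)

X = np.column_stack([qualification, name_proxy])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the proxy's weight comes out strongly negative:
                    # the model learned to penalize the group, not the person
```

The model never has to be told about race; a correlated proxy plus a biased history is enough.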
The point, which you've missed, is that AI is being trained on datasets that reinforce stereotypes, poisoning the models in ways that could become very problematic as AI gets used more widely without proper supervision.
Didn’t miss the point. It was trained on images of people. The majority of images it had access to were white faces because that’s what was available to scrape. Too many white people are represented in media. Isn’t that the underlying point? AI is merely reflecting that, as it was designed to do. That reflection is embarrassing. Like a toddler with a potty mouth. Not the kid’s fault.
> AI is merely reflecting that, as it was designed to do.
Yes. And everybody knows that. Your comment doesn't really add anything.
The point is that biased datasets are a problem that should be fixed - AI models need to comprehend human diversity, and the limitations from the biased datasets we do have need to be properly communicated to users.
No, we’re on the same page. But since you came off hostile from the start, I’m going to assume that’s the kind of interaction you’re looking for. Look elsewhere.
Yeah, no... After calling a genuine concern "misleading clickbait", you don't get to say I'm being hostile from the start and that your comment is totally "on the same page".
I’m allowed to do whatever I want and you’re allowed to be willfully obtuse. Doing great so far. Keep telling the world how it should work rather than try to understand it and offer solutions. It’s gonna work out for you I’m sure of it.
> The majority of images it had access to were white faces because that’s what was available to scrape.
It doesn’t just create an average of all the faces tagged as “professional”—it identifies features that distinguish faces tagged as “professional” from ones that aren’t. If the same proportion of ethnicities were in both data sets (i.e., if professionals and non-professionals were both all white, or all Asian, or 50/50), it wouldn’t see a correlation, and it wouldn’t change the subject’s existing ethnicity.
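You can see the mechanism in a minimal sketch, assuming a single binary feature and entirely made-up proportions:

```python
# Minimal version of the argument: the weight a classifier puts on a binary
# "white" feature depends on whether its proportion differs between the
# professional and non-professional sets. Entirely synthetic proportions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def ethnicity_weight(p_white_pro, p_white_nonpro, n=20_000):
    """Learned weight on 'white' given its share in each training set."""
    white = np.concatenate([rng.random(n) < p_white_pro,
                            rng.random(n) < p_white_nonpro])
    professional = np.concatenate([np.ones(n), np.zeros(n)])
    model = LogisticRegression().fit(white.reshape(-1, 1).astype(float),
                                     professional)
    return model.coef_[0, 0]

print(ethnicity_weight(0.9, 0.5))  # imbalanced sets: large positive weight
print(ethnicity_weight(0.7, 0.7))  # same share in both: weight near zero
```

With identical proportions on both sides, the feature carries no information about the label, so there's nothing there to learn.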
“Too many white people”? Meanwhile, two-thirds of America is white. That's the actual problem: it's based on American datasets, and finding a balanced dataset is impossible.
We live in a world where, when my 9-year-old daughter was born, she could Skype video chat with her grandma in Southeast Asia.
All this venture capital flowing around with all this data out there and we can't deal with a sizeable fraction of the human race? We really can't find some people from Asia?
Yeah sure it is just holding up a mirror and none of us like what we see. Don't blame the mirror.
The thing is, there is plenty of blame to go around. Yes, there are horrible racial biases in the world. That doesn't mean these well-known, repeatedly documented issues can't be addressed. The emperor may have no clothes, but that doesn't mean we should all be naked.
You are allowed to fix your software so that it isn't racist even if that doesn't make racism go away.
The AI might associate lighter skin with white facial structure. That kind of correlation would need to be specifically accounted for, I'd think, because even with some examples of lighter-skinned Asians, the majority of photos of people with light skin will show white facial structure.
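A back-of-the-envelope version, with made-up proportions, just to show the shape of the problem:

```python
# Toy numbers for that correlation: if light skin mostly co-occurs with
# white facial structure in the data, conditioning on light skin mostly
# yields white structure, even though lighter-skinned Asians are present.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

white = rng.random(n) < 0.7                  # dataset skews white
light_skin = np.where(white,
                      rng.random(n) < 0.9,   # common among white faces
                      rng.random(n) < 0.3)   # rarer, but present, otherwise

# What a naive generator effectively samples when conditioned on light skin:
print(f"P(white structure | light skin) = {white[light_skin].mean():.2f}")  # ~0.88
```

Obviously a diffusion model doesn't work like a lookup table internally, but the conditional it learns still comes from the same lopsided joint distribution.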
Plus, it's becoming more and more apparent that AIs just aren't that good at what they do in general at this point. Yes, they can produce some pretty interesting things, but those seem to be the exception rather than the norm. In hindsight, a lot of my being impressed with the results I've seen so far is that an algorithm produced them at all, even though the algorithm itself isn't directly shaping the output but sits a few steps back from it.
I bet for the instances where it does produce good results, it's still actually doing something simpler than what it looks like it's doing.
I wouldn’t say it was conscious bias, either. I don’t think it was intentionally developed that way.
The fact still remains, though, that whether conscious or unconscious, it’s potentially harmful to people of other races. Sure, it’s an issue with just image generation now. What about when it’s used to identify criminals? When it’s used to filter between potential job candidates?
The possibilities are virtually endless, but if we don’t start pointing out and addressing any type of bias, it’s only going to get worse.