Loss of privacy is just the beginning. Workers are worried about biased AI and the need to perform the ‘right’ expressions and body language for the algorithms.
Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions, and data from wearable devices, along with text and patterns of computer use, to detect and predict how someone is feeling. It is already being deployed in workplaces, including in hiring.
This would absolutely flag me for something. I tend to have flat delivery and low pitch, and I avoid eye contact; combined with other metrics, that could easily flag me as not being a happy enough camper.
I mean don’t get me wrong, I’m never going to be happy to be working, but if I showed up that day, I’m also in a good enough headspace to do my job… and if you want to fire me for that… for having stuff going on and not faking vocal patterns…
This is why I don’t want to work anymore. It’s gotten so invasive and fraught if you happen to be anything but a happy bubbly neurotypical fake. And that’s wildly stressful. I’m not a machine, and refuse to be treated like one. If that means I have to die in poverty, well, dump me in the woods, I guess.
This feels like the AI equivalent of men telling female workers to smile more. I'm totally sure that bias wasn't cooked into these algorithms. Honestly, how is this not profiling for neurodiverse individuals?
It is. The only reason I, an autistic man, can feed myself is because at least some jobs are defined in terms of measurable output. As soon as a human is making a personal judgment about me, they see that I’m like an alien acting human, and they find a way to fire me.
As an Uber driver, or any other job where success is not based on my boss’s judgment, I kick ass.
People have no fucking ability to stand by any of the “diversity” crap they preach. Like, maybe if diversity is so important to you, the fact that my voice sounds slightly tighter than usual one day shouldn’t result in me getting “Does not meet expectations” on my review.
Can you tell I’m a little bitter about this?
Now these kids are trying to organize Uber drivers into some kind of union.
Please no! I only succeed because it’s gig work, because it’s independent contractor stuff. As soon as my benefits become codified, it becomes an employee situation, and I get put under the neurotypical microscope again.
I cannot survive there, and I don’t want to live on state aid. Free money is not a substitute for work.
Have you tried working for a small- to mid-size company?
I got the same vibe in big companies. Now I'm in a company of 50 people and they just do not care how weird I am: no middle managers trying to justify their existence; as long as you're doing your best, you're good.
Like, I'm sure that doesn't apply to all small companies, but I'd certainly keep it in mind for the future.
There are a lot of lonely people without social support groups, or who otherwise may not be willing or able to seek help when they need it. Having an AI that is in a position to go "hey, are you alright?" could be a boon for those folks.
There are also situations where a worker could be a problem or even a danger to their co-workers, and having an AI that's able to pay attention and potentially intervene in those situations could help prevent trauma from happening in the first place.
I'm not saying this is what it'll be used for, just answering your question about how it could be viewed in a non-dystopian way.
I'm glad to live in a place where that kind of surveillance is already illegal. I recently read that in some places, it's already commonplace to track every single keystroke and mouse click on workers' PCs. That's bad enough even without putting AI and facial recognition into the mix. Truly dystopian.
I'm the enemy. Because I like to think, I like to read. I'm into freedom of speech, the freedom of choice. I'm the kind of guy who likes to sit in a greasy spoon and wonder - "Gee, should I have the T-bone steak or the jumbo rack of BBQ ribs with the side order of gravy fries?" I want high cholesterol. I wanna eat bacon and butter and buckets of cheese, okay? I wanna smoke a Cuban cigar the size of Cincinnati in the non-smoking section. I wanna run through the streets naked with green Jell-O all over my body reading Playboy magazine. Why? Because I suddenly might feel the need to, okay, pal?
Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.
A quick rundown of what it entails and why it might matter in the US:
What is it?
The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination.
Key Takeaways:
Prohibited Practices: Certain uses of AI, like manipulation of human behavior or unfair surveillance, are outright banned.
High-Risk Regulation: AI systems with significant implications for people's rights must undergo rigorous assessments.
Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.
Why Does This Matter in the US?
Brussels Effect: Similar to how GDPR set a new global standard for data protection, the EU AI Act could influence international norms and practices around AI, pushing companies worldwide to adopt higher standards.
Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
Potential for US Legislation: The EU's move could catalyze similar regulatory efforts in the US, promoting a broader discussion on the ethical use of AI technologies.
Emotion-tracking AI is covered:
Banned applications:
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
Definitely a good start. Surveillance (or "tracking") is one of those areas where "AI" is actually dangerous, unlike some of the more overblown topics in the media.
"Sentiment analysis" has been creeping into things like IVR and CRM systems for years now. I've been getting creeped out enough by that, I don't need to be constantly thinking that my work computer is trying to read my emotions.
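For anyone wondering what the simplest form of that "sentiment analysis" looks like under the hood, here's a toy sketch: a lexicon-based scorer that just counts positive and negative words. This is purely illustrative (the word lists and function are made up for this example); real IVR/CRM systems use trained models, which is part of why the bias concerns upthread are so hard to audit.

```python
# Toy lexicon-based sentiment scorer: a hypothetical illustration of the
# simplest kind of "sentiment analysis", not any real product's method.

POSITIVE = {"great", "happy", "thanks", "love", "excellent"}
NEGATIVE = {"angry", "terrible", "hate", "frustrated", "awful"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; below zero means negative sentiment."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        return 0.0  # no sentiment-bearing words found
    return (pos - neg) / total

print(sentiment_score("Thanks, that was excellent!"))   # 1.0
print(sentiment_score("I hate this, it's terrible."))   # -1.0
```

Notice that a flat, terse reply with no "sentiment-bearing" words scores 0.0 here, and a slightly tense word choice drags the score negative. That's exactly the failure mode people above are describing: the system reads delivery style as mood.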