Google has just launched a new AI model that claims it can identify emotions. Yes, you read that right. The tech giant unveiled the PaliGemma 2 model family on Thursday: advanced vision-language models that analyze images to generate captions, answer questions, and – get this – “detect” the emotions of people they see in photos.
In a blog post shared with TechCrunch, Google proudly touted PaliGemma 2’s ability to go beyond basic object recognition. “It generates detailed, contextually relevant captions for images,” Google said, “describing actions, emotions, and the overall narrative of the scene.” So, no more just identifying a dog or a tree – this AI claims to be able to tell whether someone is happy, sad, or maybe even angry.
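For developers wondering what that looks like in practice, here’s a minimal captioning sketch using the Hugging Face transformers library, where the PaliGemma 2 checkpoints are hosted. The checkpoint name, the local image path, and the “caption en” prompt are illustrative assumptions, not details from Google’s announcement.

```python
# Minimal sketch: captioning an image with a PaliGemma 2 checkpoint via
# Hugging Face transformers. Checkpoint name, image path, and prompt are
# assumptions for illustration.
import torch
from PIL import Image
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

image = Image.open("photo.jpg")  # any local photo
# "caption en" follows PaliGemma's task-prompt convention; the processor
# prepends the image tokens itself.
inputs = processor(text="caption en", images=image, return_tensors="pt")
inputs = inputs.to(torch.bfloat16).to(model.device)
prompt_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```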
But before you get too excited, there’s a catch. Google made it clear that emotion recognition doesn’t work out of the box. PaliGemma 2 needs to be fine-tuned for that purpose. And experts are already raising red flags about the potential dangers of such tech.
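Google hasn’t said how that fine-tuning is meant to be done, but a plausible sketch – using the same Hugging Face transformers API, with an assumed checkpoint name, a hypothetical prompt, and made-up text labels – might look like the following. This is not Google’s recipe, just one common pattern for adapting a vision-language model to a new task.

```python
# Hypothetical sketch: adapting PaliGemma 2 to emit emotion labels.
# The checkpoint, prompt wording, label strings, and training recipe are
# all assumptions - none of this comes from Google's announcement.
import torch
from PIL import Image
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)
model.train()

# One common recipe (an assumption here): freeze the vision encoder and
# fine-tune only the language side.
for param in model.vision_tower.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)

def train_step(image: Image.Image, emotion_label: str) -> float:
    """One supervised step pairing a photo with a text label like 'happy'."""
    inputs = processor(
        text="answer en what emotion is this person showing?",  # hypothetical prompt
        images=image,
        suffix=emotion_label,  # the processor turns the suffix into training labels
        return_tensors="pt",
    ).to(torch.bfloat16)  # casts float tensors only; ids and labels stay intact
    outputs = model(**inputs)  # loss is cross-entropy over the suffix tokens
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

Even granting that something like this works mechanically, the training data is the weak point: someone has to decide what “happy” looks like, which is exactly where the experts say the science falls apart.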
“This is very troubling to me,” warned Sandra Wachter, a professor of data ethics and AI at the Oxford Internet Institute. She’s not buying it, calling the idea of AI “reading” emotions akin to asking a Magic 8 Ball for advice. And she’s not alone.
Is AI Really Ready to Read Emotions?
For years, tech companies have been chasing the holy grail of emotion detection, for everything from sharpening sales tactics to preventing accidents. Yet many argue the science is shaky at best. Emotion-detection systems often rely on theories dating back to psychologist Paul Ekman, who proposed that humans share six basic emotions. But subsequent studies suggest Ekman’s model doesn’t hold up: there are major variations in how people from different cultures and backgrounds express their emotions.
Mike Cook, a researcher at Queen Mary University of London, told TechCrunch, “Emotion detection isn’t possible in the general case… It’s not something we can ever fully ‘solve.’” So while an AI might pick up on some basic signals, it can’t be trusted to accurately read human emotions across the board.
And it gets worse. AI emotion detectors have a reputation for bias. An MIT study showed that face-analyzing models can develop unintended preferences for certain expressions, like smiles, while other research has found that emotion-analysis models assign more negative emotions to Black faces than to white ones. Google claims it tested PaliGemma 2 for biases, but the benchmarks it points to are vague, and that does little to ease concerns.
A Dangerous Precedent?
What’s causing even more alarm is that PaliGemma 2 is openly available for developers to access. Hosted on platforms like Hugging Face, the model could easily be misused, leading to real-world consequences. “If this so-called emotional identification is built on pseudoscientific presumptions, there are significant implications,” warned Heidy Khlaaf, Chief AI Scientist at the AI Now Institute. “It could be used to discriminate against marginalized groups in areas like law enforcement, human resources, or even immigration.”
So, what’s Google’s stance on all this? The company says it has done “extensive testing” on PaliGemma 2 to evaluate its safety and fairness. But without clear disclosure of which benchmarks were used and how, it’s hard for outsiders to evaluate those claims.
Wachter remains skeptical. “Responsible innovation means that you think about the consequences from the first day you step into your lab,” she said. “I can think of myriad potential issues [with models like this] that can lead to a dystopian future, where your emotions determine if you get the job, a loan, or even admission to university.”
The Big Picture
The introduction of PaliGemma 2 isn’t happening in a vacuum. Google is already under fire for its AI projects. Just last month, its Gemini chatbot made headlines after a user reported receiving a hostile, threatening response, raising fresh questions about the company’s safety guardrails. And with AI tools continuing to proliferate – like the Veo video generator released earlier this month – there’s growing concern that these systems are advancing faster than regulators can keep up.
For now, it seems that Google’s PaliGemma 2 – a powerful, yet controversial model – might just be the tip of the iceberg. The question is, will we regret letting AI get so close to our emotions? Only time will tell.
What Happened: Google announced its new AI model family, PaliGemma 2, which can analyze images and – once fine-tuned – supposedly detect emotions. While the model’s capabilities are impressive, experts are deeply concerned about its potential biases and the risks of open access.
Why It Matters: As emotion-detection AI gains traction, the risks associated with its misuse continue to grow. With more AI tools on the way, we need to be asking: Are we ready for this?