OpenAI Warns ChatGPT's Voice Mode Could Make Us Emotionally Reliant On AI

There have been various movies and TV shows highlighting emotional bonds with artificial intelligence (AI), such as the movie Her and, going back much further into the archives, the sitcom Small Wonder from the 1980s (yes, this author has some gray hair). While these and others like them may be far-fetched scenarios, OpenAI acknowledges in a study that there's a risk its new natural voice mode for ChatGPT could lead to "emotional reliance" on AI and "misplaced trust."

The report follows the rollout of a more lifelike voice for ChatGPT, which sparked a controversy of its own when it was likened to the voice of Scarlett Johansson, the actress who voiced Samantha, an AI virtual assistant for a cutting-edge operating system in the aforementioned movie Her. Johansson said she was "shocked, angered, and in disbelief" at how "eerily similar" ChatGPT's new voice was to her own after she had declined an offer from OpenAI to be one of its voices.

Now attention is turning to OpenAI's "GPT-4o System Card," a report that outlines risks (with scorecard rankings) and mitigations. One of those risks is anthropomorphization.

"Anthropomorphization involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models. This risk may be heightened by the audio capabilities of GPT-4o, which facilitate more human-like interactions with the model," OpenAI states in its report.

"Recent applied AI literature has focused extensively on 'hallucinations', which misinform users during their communications with the model and potentially result in misplaced trust. Generation of content through a human-like, high-fidelity voice may exacerbate these issues, leading to increasingly miscalibrated trust," OpenAI adds.

OpenAI goes on to state that in early testing of its natural voice mode, it observed users seemingly forming connections and bonds with the AI model. While these tendencies seem benign for now, OpenAI warns that its observations signal a need for continued investigation into the longer-term implications.

One specific area of concern is that forming social relationships with AI could reduce a person's need for human interaction. OpenAI notes this could potentially benefit lonely individuals, but it could also affect healthy relationships.

"Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions," OpenAI says.

It's an interesting topic, and one that will only become more important as the industry produces increasingly capable and human-like AI. It's also just one of several risks outlined in OpenAI's report.