OpenAI recently expressed concern that its artificial intelligence (AI), particularly its realistic voice feature, might lead individuals to form stronger bonds with bots than with humans. Citing published research, the tech giant said that chatting with AI as if it were a person can result in misplaced trust, and that the high quality of the GPT-4o voice may amplify that effect.
In a report on the safety work it is conducting on GPT-4o, the model that powers ChatGPT, the company stated, “Anthropomorphization involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models,” according to AFP.
“This risk may be heightened by the audio capabilities of GPT-4o, which facilitate more human-like interactions with the model,” it added.
Elaborating further, the San Francisco-based company noted that testers were speaking to the chatbot in ways that suggested shared bonds, such as lamenting aloud that it was their last day together. While these instances appear benign, OpenAI said they must be studied to see how they might evolve over longer periods of time.
According to the company, interacting with AI may also make users less adept at, or less inclined toward, relationships with humans.
Furthermore, the report stated that extended interaction with the model might influence social norms. For example, OpenAI’s models are deferential, allowing users to interrupt and take the mic at any time, which, while expected for an AI, would be anti-normative in human interactions.
OpenAI said that the ability of AI to remember details while conversing and performing tasks could also make people overdependent on the technology.
Alon Yamin, co-founder and CEO of the AI-content and plagiarism detection platform Copyleaks, said that AI should never be a replacement for actual human interaction, adding, “The recent concerns shared by OpenAI around potential dependence on ChatGPT's voice mode indicate what many have already begun asking: Is it time to pause and consider how this technology affects human interaction and relationships?”
Moreover, the company said it will further test how its AI’s voice capabilities might cause people to become emotionally attached. While testing GPT-4o’s voice capabilities, testers were able to prompt the AI to repeat false information and produce conspiracy theories, raising concerns that the model could be persuaded to do so convincingly.
Meanwhile, OpenAI recently launched a new feature that allows ChatGPT Free users to create images using its advanced DALL-E 3 model, with free users able to generate up to two images per day.