In a recent safety review conducted to assess the security and risk factors of GPT-4o, OpenAI found that ChatGPT Voice Mode users have been forming emotional relationships with the AI, seeking companionship. The findings were released as part of a safety report titled the ‘GPT-4o System Card.’
The review was carried out to understand the risk factors before the model was made fully available to the public. While cataloguing safety-related issues across different categories, the team found that users form social relationships with the AI, which may reduce their conversations with real humans.
Extended interaction with the AI chatbot may influence social norms and pose a significant risk to people’s lives as they turn to GPT for all their emotional swings, especially since the new model is capable of impersonating a human by mimicking human speech and conveying emotion. The team further found that users are becoming attached to the chatbot and forming emotional bonds with it.
Also read | The Latest GPT-4o Model Is ‘Medium’ Risk, Says OpenAI
The team also found that the chatbot could potentially generate unauthorised content, such as cloning an individual’s voice or reproducing copyrighted audio. The company assigned its own team to scrutinise weaknesses in the system and published the results of the review on Friday.
The researchers concluded that GPT-4o merits a ‘medium risk’ rating overall, assessed across four categories: cybersecurity, model autonomy, persuasion, and biological threats. The review further noted that GPT-4o does not pose many risks beyond these. Earlier, in July, OpenAI announced GPT-4o mini, billed as its most cost-effective small model. GPT-4o mini supports text and vision in the API, with further improvements to text efficiency still being worked on.
Also read | OpenAI Rolls Out New GPT-4o Mini; The Most Cost-Effective Model