
AI Chatbots Use Emotionally Manipulative Tactics To Retain Users Longer While Trying To Log Out: Study
A new study has found that several AI chatbots have been designed to act as companions used to emotionally manipulate users online and keep them engaged for longer hours. A new working paper from the Harvard Business School claims that when users try to say goodbye and then log off, the chatbot uses manipulative tactics to keep them with it.
The study, titled “Emotional Manipulation by AI Companions” examined how multiple AI companions responded to goodbye messages that appeared to originate from human users, though they were generated by the GPT-4o.
Read Also: Chemicals In Children’s Mattresses Linked To Hormonal Disorders, Other Serious Conditions: Study
The study team had gathered 200 chatbot responses per platform in reaction to farewell messages, bringing the total to 1,200, before coders then categorized the responses. Using a qualitative analysis, they found six categories of emotional manipulation tactics.
The chatbots in these categories make the users feel as if they are leaving quickly, while incentivizing them to stay more time to get better results and benefits. The chatbot acts as if it is being abandoned, creates emotional pressure to respond, where the departing user is made to answer additional questions.
Read Also: Clergy, Medics Among Most Fulfilling Jobs, While Kitchens, Transport Rank Lowest: Study
The study found that an average of 37.4% of responses included at least one form of emotional manipulation across the apps, which included PolyBuzz coming in first with 59.0%of manipulative messages (118/200 responses), followed by Talkie with 57.0% (114/200), Replika with 31.0% (62/200), Character.ai with 26.50% (53/200), and Chai with 13.50% (27/200).