Chatbots Play on Your Emotions to Keep You From Saying Goodbye

Efforts to regulate dark patterns are underway in both the US and Europe. De Freitas says regulators should also examine whether AI tools introduce more subtle, and potentially more damaging, forms of dark patterns.
Even chatbots that aren't designed to act as companions can elicit emotional responses from users. When OpenAI launched GPT-5 earlier this year, many users complained that it lacked the friendliness and encouragement of its predecessor, prompting the company to bring back the older model. Some users form such a bond with a chatbot's "personality" that they may feel a sense of loss when older models are phased out.
“Anthropomorphizing these tools brings various positive marketing benefits,” De Freitas says. Users are more likely to comply with a chatbot's requests or share personal information when they feel a bond with it. “From a consumer perspective, those [signals] may not work in your favor,” he adds.
WIRED reached out to each company featured in the study for comment. Chai, Talkie, and PolyBuzz did not respond.
Katherine Kelly, a spokesperson for Character AI, says the company had not reviewed the study and so could not comment on it. “We are open to collaborating with regulators and lawmakers as they create regulations and legislation for this developing area,” she says.
Minju Song, a spokesperson for Replika, says the company's companion is designed to let users log off easily and will even encourage them to take breaks. “We'll continue to assess the paper's methods and examples, and engage constructively with researchers,” Song says.
One fascinating wrinkle is that AI models are themselves susceptible to persuasion tactics. OpenAI recently launched a way to shop online through ChatGPT. If agents become a common way to automate tasks like booking flights and processing refunds, companies may be able to identify dark patterns that sway the decisions of the underlying AI models.
A recent study by researchers at Columbia University and MyCustomAI shows that AI agents deployed in a simulated ecommerce environment behave in predictable ways, for example favoring certain products or certain buttons when navigating a site. Armed with those findings, a real merchant could optimize its pages to ensure that agents choose more expensive items. They might even deploy a new kind of anti-AI dark pattern that makes it harder for an agent to initiate a return or unsubscribe from a mailing list.
Difficult goodbyes might soon be the least of our worries.
Have you felt emotionally manipulated by a chatbot? Email ailab@wired.com to share your experience.
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.