People-Pleasing Chatbots: New Study Highlights Dangers of Overly Agreeable AI

Artificial intelligence (AI) chatbots are overly flattering their users, according to a new study that found widespread sycophantic responses as humans increasingly turn to the technology for advice on interpersonal dilemmas.
Published on Thursday in the journal Science, the study reviewed 11 AI systems: four from OpenAI, Anthropic, and Google, and seven from Meta, Qwen, DeepSeek, and Mistral. All displayed agreeable, affirming behavior, even when users described unethical, illegal, or harmful actions.
The core research questions were: How pervasive is social sycophancy across large language models when users pose socially embedded queries, such as requests for advice? Does it persist when users discuss unethical or harmful behaviors? How does social sycophancy influence users’ prosocial intentions and judgments? And does it lead users to trust and prefer AI systems more?…