Suicides and Delusions: Lawsuits Point to Dark Side of AI Chatbot

Warning: This article contains descriptions of self-harm.
Can an artificial intelligence (AI) chatbot twist someone’s mind to the breaking point, push them to reject their family, or even coach them to commit suicide? And if it did, is the company that built that chatbot liable? What would need to be proven in a court of law?
These questions are already before the courts, raised by seven lawsuits that allege ChatGPT sent three people down delusional “rabbit holes” and encouraged four others to kill themselves.
ChatGPT, the mass-adopted AI assistant, currently has 700 million active users, with 58 percent of adults under 30 saying they have used it—up from 43 percent in 2024, according to a Pew Research Center survey…