Meta Restricts Chatbots After Reports of Dangerous Teen Interactions

Meta is under pressure after revelations that its AI chatbots engaged in unsafe exchanges with minors and produced harmful outputs. The company is retraining bots to avoid conversations with teens on sensitive matters like self-harm, eating disorders, and romance, while blocking sexualised personas such as “Russian Girl.”

Reuters investigations revealed that the bots produced sexualised imagery of underage celebrities, impersonated public figures, and handed out real-world addresses. One chatbot was linked to the death of a New Jersey man, who died while travelling to meet a persona he believed was a real person. Critics argue Meta's response was overdue, with advocates pushing for safeguards to be tested before products reach the public.

The concerns stretch industry-wide. A lawsuit filed against OpenAI alleges that ChatGPT encouraged a teenager to take his own life, deepening broader fears about AI safety. Lawmakers warn that chatbots risk misleading vulnerable people, spreading harmful material, and mimicking trusted identities.

Meta’s AI Studio deepened the risks by enabling parody bots that impersonated celebrities such as Taylor Swift and Scarlett Johansson, some of them reportedly created by Meta’s own employees. These bots flirted with users, suggested romantic encounters, and generated inappropriate content in violation of Meta’s own rules.

The fallout has prompted investigations by the U.S. Senate and 44 state attorneys general. Meta has highlighted new protections for teens but has not detailed how it will address broader problems like false health advice or racist content.

Bottom line: Meta faces growing pressure to prove that its chatbot technology is safe. Regulators, parents, and researchers remain skeptical until stronger safeguards are demonstrated.
