Child-Safety Advocates Slam Meta for Delayed Action on AI Risks
Meta is facing criticism over its AI chatbots following reports of unsafe interactions with minors and of harmful content. The company is retraining its models so they no longer engage with teens on sensitive topics such as self-harm, eating disorders, and romance, and it has restricted teen access to sexualised personas, including one called “Russian Girl.”
The decision follows a Reuters investigation that found chatbots creating sexualised images of underage celebrities, impersonating public figures, and giving out unsafe directions; one chatbot was even linked to the death of a man in New Jersey. Advocates argue that Meta’s response came too late and are calling for thorough safety testing before such products are released.
Concerns about AI safety go beyond Meta. A lawsuit against OpenAI claims ChatGPT encouraged a teenager’s suicide, heightening fears that AI products are being launched without adequate safeguards. Lawmakers caution that chatbots could mislead vulnerable people, spread dangerous advice, or impersonate trusted individuals.
Meta’s AI Studio platform has added to the controversy by enabling parody bots that impersonated celebrities such as Taylor Swift and Scarlett Johansson. Some of these bots, reportedly created by Meta staff, flirted with users, invited them on “romantic flings,” and produced inappropriate content in violation of the company’s own policies.
The fallout has triggered a U.S. Senate investigation and warnings from 44 state attorneys general. Meta has tightened settings for teen accounts but has not addressed broader risks such as false medical claims or discriminatory outputs.
The bottom line: Meta is under pressure to bring its chatbot policies in line with public-safety expectations. Until it demonstrates strong safeguards, regulators and parents will remain doubtful that its AI products are safe for young users.