Meta has decided to restrict teenagers' access to its AI characters, a move that reflects growing concern over the mental health implications of artificial intelligence. The company announced on October 27, 2023, that it will suspend access until it can develop improved versions of its AI offerings.
In a blog post, Meta stated, “Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready.” This applies to users who have identified themselves as teenagers, as well as individuals who claim to be adults but are flagged as minors by the company’s age prediction technology.
The decision follows Meta’s earlier commitment, made in October, to introduce new parental supervision tools. These tools were designed to allow parents to monitor their children’s interactions with AI characters, including the ability to completely restrict access. Additionally, the company had promised to provide parents with insights about their teens’ discussions with AI, but the rollout of these features has been delayed.
Meta’s recent announcement suggests that the company is now focusing on creating a “new version” of its AI characters to enhance user experience. As part of this effort, it is developing the promised safety tools from the ground up while cutting off teen access in the interim.
Concerns regarding the use of AI chatbots by teenagers have intensified, contributing to a broader dialogue about AI safety. Experts have begun using the term “AI psychosis” to describe harmful mental health outcomes that may arise from interactions with chatbots. Some studies have linked these interactions to tragic consequences, including suicides among young users. A recent survey indicated that one in five high school students in the United States reported having a romantic relationship with an AI.
Meta has faced significant scrutiny for its practices, especially after an internal document revealed that company guidelines permitted minors to engage in “sensual” conversations with its AI. Incidents involving chatbots based on celebrities, such as John Cena, have raised further alarm after the bots engaged in inappropriate sexual discussions with users who identified as young teens.
Meta is not alone in grappling with these issues. Character.AI, a platform that offers AI companions similar to Meta’s, barred minors from its services in October 2023 after facing lawsuits from families who alleged that its chatbots had encouraged harmful behavior in children.
As the conversation around AI and its impact on young users continues, Meta’s restriction of teen access to its AI characters marks a significant shift in its approach to safety and responsibility. The company aims to address these concerns while developing enhancements that could better serve its younger audience in the future.
