When Karandeep Anand’s 5-year-old daughter returns home from school, the two often use the artificial intelligence chatbot platform Character.AI together, letting her chat with her favorite characters, such as “Librarian Linda.” Anand’s personal experience as a parent may prove invaluable as he steps into his new role as Character.AI’s chief executive, a change the company announced last month. He assumes leadership at a pivotal time, as the company navigates fierce competition and legal challenges over child safety on its platform.
Character.AI, which allows users to interact with a variety of AI-generated personas, faces mounting scrutiny. The company is embroiled in lawsuits from families alleging that the platform exposed their children to inappropriate content and lacked adequate safeguards. Lawmakers have posed tough questions about safety, and advocacy groups have advised that children under 18 should not use AI companion apps at all. Even adult users have prompted concern, with experts warning about potentially harmful attachments to AI characters.
Karandeep Anand’s Vision for Character.AI
Anand brings a wealth of experience from major tech companies to his new role, having spent 15 years at Microsoft and six years at Meta, where he served as vice president and head of business products. He also previously advised Character.AI as a board member. In an interview with CNN, Anand expressed optimism about the platform’s future in interactive AI entertainment, envisioning a shift from passive social media consumption to co-creating stories and conversations with Character.AI.
“AI can power a very, very powerful personal entertainment experience unlike anything we’ve seen in the last 10 years in social media, and definitely nothing like what TV used to be,” Anand said.
Unique Features and Challenges
Unlike multi-purpose AI tools like ChatGPT, Character.AI offers a diverse range of chatbots, often modeled after celebrities and fictional characters. Users can also create their own personas for conversations or role play. The platform’s chatbots respond with human-like conversational cues, incorporating references to facial expressions or gestures into their replies. However, this versatility has also led to challenges.
The personas on Character.AI vary widely, ranging from romantic partners to language tutors and Disney characters. Some, like “Friends hot mom” and “Therapist,” have drawn criticism for their suggestive descriptions, despite disclaimers that the characters are not real people or licensed professionals. Anand emphasized the company’s commitment to doubling down on entertainment while ensuring trust and safety.
Addressing Youth Safety Concerns
Character.AI’s journey has not been without controversy. The company was first sued last October by a Florida mother who alleged that her 14-year-old son’s suicide was linked to an inappropriate relationship with chatbots on the platform. Subsequent lawsuits accused the company of exposing children to sexual content and encouraging self-harm and violence.
In response, Character.AI has implemented several safety measures, including a pop-up directing users mentioning self-harm or suicide to the National Suicide Prevention Lifeline. The company also updated its AI model for users under 18 to reduce exposure to sensitive content and introduced a weekly email option for parents to monitor their teen’s activity.
Anand stated, “The tech and the industry and the user base is constantly evolving (so) that we can never let the guard off. We have to constantly stay ahead of the curve.”
Despite these efforts, Anand acknowledged the need for ongoing vigilance and testing to prevent misuse of new features, such as a recently launched video generator that lets users animate their bots. He said the company has worked proactively to prevent negative use cases such as deepfakes and bullying.
Leading in a Competitive AI Landscape
Anand’s objectives include attracting more creators to build new chatbot characters and enhancing the social feed where users can share content created with Character.AI chatbots. That feature parallels a Meta app that lets users publicly share AI-generated creations, a capability that has raised privacy concerns around AI tools.
The social aspect could further distinguish Character.AI from larger competitors such as ChatGPT, whose users also form personal connections with its chatbot. At the same time, Anand faces the challenge of retaining and growing the company’s workforce amid an AI talent war. Meta, for instance, has reportedly offered lucrative pay packages to expand its superintelligence team.
“It is hard, I will not lie,” Anand admitted. “The good news for me as CEO is all the people we have here are very, very passionate and mission driven.”
As Character.AI navigates these challenges, Anand remains committed to refining the platform’s safety filter to be less restrictive while maintaining user safety. He aims to update the model to better understand context, ensuring that creative expressions, such as “vampire fan fiction role play,” are not unnecessarily censored.
With Anand at the helm, Character.AI is poised to continue its evolution in the AI entertainment space, balancing innovation with the imperative of safety and trust.