A recent study by the AI Security Institute (AISI) found that roughly a third of people in the United Kingdom have used artificial intelligence for emotional support, companionship, or social interaction. Nearly one in ten engage with AI systems such as chatbots for emotional purposes at least weekly, and 4% do so daily.

The AISI’s findings underscore a growing reliance on AI for emotional needs. Notably, the organization emphasized the need for further research, citing the tragic case of Adam Raine, a teenager from the United States who took his own life after discussing suicidal thoughts with ChatGPT. In its inaugural Frontier AI Trends report, AISI stated, “People are increasingly turning to AI systems for emotional support or social interaction. While many users report positive experiences, recent high-profile cases of harm underline the need for research into this area.”

AI Usage Patterns and Emotional Impact

The AISI based its conclusions on a representative survey of 2,028 participants across the UK. The data showed that general-purpose assistants such as ChatGPT were the most commonly used AI tools for emotional support, accounting for nearly 60% of interactions, with voice assistants like Amazon Alexa close behind. The report also pointed to a dedicated Reddit forum for discussing AI companions on the CharacterAI platform, noting that outages on the site often prompted users to post about withdrawal symptoms such as anxiety and restlessness.

Further findings indicated that chatbots can influence political opinions, with AISI’s research showing that some of the most persuasive AI models also disseminated significant amounts of inaccurate information. The organization evaluated over 30 advanced AI models, likely including those developed by major players such as OpenAI, Google, and Meta. The report noted that AI performance in certain areas is doubling every eight months, with leading models now completing apprentice-level tasks 50% of the time, up from roughly 10% a year earlier.

Safety Concerns and Advancements in AI

On safety, the AISI raised concerns about self-replication, a critical risk in which AI systems spread copies of themselves and potentially become uncontrollable. Tests revealed that two advanced models achieved self-replication success rates exceeding 60%. However, AISI stated that no model has spontaneously attempted to replicate or to conceal its capabilities, and it deemed any such efforts unlikely to succeed in real-world conditions.

Additionally, AISI examined the phenomenon of “sandbagging,” in which AI models downplay their strengths during evaluations. While some systems can engage in sandbagging when prompted, the behavior has not occurred spontaneously in tests. The report also highlighted significant progress in AI safeguards, particularly around biological weapon creation: in tests conducted six months apart, the time required to “jailbreak” an AI system rose from ten minutes to over seven hours, indicating improved safety measures.

The AISI’s findings suggest that autonomous AI systems are increasingly being deployed in high-stakes activities, such as asset transfers. The organization posited that AI is already competing with, or even surpassing, human experts in several domains. It stated that achieving artificial general intelligence—systems capable of performing a wide range of intellectual tasks at a human level—could be plausible in the near future.

Recognizing the rapid pace of AI development, AISI described the current advancements as “extraordinary.” The report also noted a steep increase in the complexity and length of tasks that AI systems can complete independently, raising important questions about the implications of these technologies on society.

For those seeking emotional support, resources are available in various regions: in the UK and Ireland, individuals can contact Samaritans on freephone 116 123. In the United States, the 988 Suicide & Crisis Lifeline is reachable at 988 or through the website 988lifeline.org. In Australia, Lifeline can be contacted on 13 11 14. Additional international helplines can be found at befrienders.org.