A recent report has revealed that approximately one in five security breaches is now attributed to vulnerabilities in AI-generated code. The findings, detailed in the “State of AI in Security & Development” report by Aikido Security, indicate that while AI has become a significant player in software development—accounting for about 24% of production code globally—69% of organizations have reported encountering security flaws within this AI-generated code.

The report highlights a growing concern among businesses that are increasingly adopting AI to boost efficiency and productivity. Despite these gains, it remains unclear who bears accountability for AI-generated code. According to Aikido Security’s Chief Information Security Officer, Mike Wilkes, “Developers didn’t write the code, infosec didn’t get to review it and legal is unable to determine liability should something go wrong. It’s a real nightmare of risk.” This uncertainty complicates the process of tracking and remediating vulnerabilities in AI-generated code.

Disparities in Security Incidents Across Regions

The report also sheds light on the differing impact of AI-related security issues in various regions. In Europe, 20% of companies reported serious security incidents due to AI code, while in the United States, that figure jumps to 43%. Aikido attributes this significant difference to two main factors: a higher tendency for US developers to bypass security controls (72% compared to 61% in Europe) and stricter compliance regulations in the European market. Yet, even in Europe, over half (53%) of businesses acknowledged experiencing close calls.

Additionally, the complexity of the tool ecosystem plays a crucial role in security incidents. The report found that 90% of organizations using six to eight tools faced security incidents, compared to only 64% of those employing just one or two tools. The time required for remediation also varies significantly, with an average of 3.3 days needed for those using one to two tools, versus 7.8 days for those utilizing five or more.

Future Outlook and Human Oversight

Despite the current challenges, the outlook regarding AI’s role in code security remains optimistic. A striking 96% of respondents believe that AI will develop the capability to write secure and reliable code within the next five years. Almost as many, 90%, expect this within 5.5 years.

Importantly, only 21% of participants anticipate this progress occurring without human oversight, underscoring the continued need for human involvement in the development process. As organizations navigate the evolving landscape of AI in coding, balancing efficiency with security will remain a critical focus.