AI is fundamentally changing the landscape of software development and security, according to a new report from Cycode. The study, titled The 2026 State of Product Security for the AI Era, examines the integration of AI in development pipelines and the accompanying security challenges. It surveyed 400 Chief Information Security Officers (CISOs), Application Security leaders, and DevSecOps managers in the United States and United Kingdom.
All organizations surveyed confirmed the presence of AI-generated code in their environments, with 97 percent actively using or piloting AI coding assistants. Despite this widespread adoption, only 19 percent reported complete visibility into how AI is used within their operations. Many security leaders report that their overall risk has increased since adopting AI.
Mid-sized companies are at the forefront of AI adoption, leveraging these tools to enhance the capabilities of smaller teams. Approximately one in three organizations indicated that AI now generates the majority of their code. A small segment even reported that over 75 percent of their codebase consists of AI-generated content. While AI can enhance productivity, it can also introduce code with potential logic flaws or insecure patterns, which can proliferate rapidly.
Emerging Threats and Governance Challenges
The report highlights the emergence of shadow AI as a significant security risk. Shadow AI refers to the use of unapproved AI tools and plugins by employees, often without formal oversight. These systems can handle sensitive data while circumventing essential security reviews and procurement controls. More than half of the respondents identified AI tool usage and software supply chain exposure as major blind spots, emphasizing the need for heightened awareness.
Researchers stress that securing the code itself is insufficient if organizations do not also manage the systems and data pipelines that generate it. As noted, only 19 percent of organizations have visibility into AI usage across their development processes, and the majority lack centralized governance, relying instead on informal approval procedures. This fragmentation leaves gaps in oversight and accountability.
In response, product security teams are beginning to take on governance and compliance roles. Over half now manage regulatory responsibilities, and some are implementing AI bills of materials (AI-BOMs) to document the models, datasets, and dependencies behind their software. The practice extends the established concept of a software bill of materials (SBOM) to cover AI components.
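To illustrate the idea, an AI-BOM record typically ties an AI component to the datasets it was trained on, the upstream models or packages it depends on, and the artifacts it helped produce. The sketch below is a minimal, hypothetical example in Python; the field names, values, and structure are illustrative assumptions, not a schema drawn from the Cycode report or any existing standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIBomEntry:
    """One hypothetical AI-BOM record: which AI component touched which artifacts."""
    component: str          # e.g. a coding assistant or model name
    version: str            # model or plugin version
    provider: str           # vendor or internal team supplying the component
    training_data: list[str] = field(default_factory=list)        # known dataset references
    dependencies: list[str] = field(default_factory=list)         # upstream models or packages
    generated_artifacts: list[str] = field(default_factory=list)  # files or services it produced

# Illustrative entry only; all names and versions are made up.
entry = AIBomEntry(
    component="example-coding-assistant",
    version="1.4.2",
    provider="ExampleVendor",
    training_data=["public-code-corpus-2024"],
    dependencies=["example-base-model-7b"],
    generated_artifacts=["services/payments/handler.py"],
)

# Serialize the record so it can sit alongside an existing SBOM.
print(json.dumps(asdict(entry), indent=2))
```

In practice, such records would be generated and updated automatically by the development pipeline rather than written by hand, mirroring how SBOMs are produced today.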
Balancing Innovation and Security
The report's researchers warn that without stronger governance, inconsistencies and duplication will persist, perpetuating the kinds of weaknesses that have historically led to significant supply chain breaches. AI tools are delivering measurable benefits: most organizations report increased developer productivity, and 72 percent note improvements in time-to-market. Even so, 65 percent acknowledge heightened risk.
Business leaders are eager to capitalize on the advantages offered by AI, often prioritizing speed over security. For many teams, this ongoing trade-off raises concerns about how long they can maintain a balance between innovation and safety as vulnerabilities continue to emerge alongside productivity gains.
As security practices evolve, leaders are increasingly focused on consolidating their application security stacks. Nearly all surveyed organizations (97 percent) plan to merge or simplify their security tools within the next year, and close to half of product security teams measure their success by how much they can reduce tool sprawl.
Looking ahead, researchers advocate for a convergence of application security testing, supply chain security, and application security posture management into a cohesive framework. This unified approach can enhance visibility and prioritization of risks, aligning the need for speed with necessary controls. As the software development landscape continues to evolve, the integration of AI will undoubtedly remain a critical focus for organizations worldwide.