Executive Summary

On March 20, 2026, EY released a survey of 500 senior cybersecurity leaders at companies with annual revenues exceeding $500 million, revealing a critical disconnect in corporate defense strategies. While 95% of executives are deploying artificial intelligence in cybersecurity operations, roughly half report that agentic AI tools yield less than $1 million in return on investment, with another 12% not tracking returns at all. This ROI gap underscores a structural issue where governance frameworks and human oversight lag behind technological adoption. With 96% of leaders identifying AI-powered cyberattacks as a major threat, the challenge is to bridge this divide to enhance both budgetary efficiency and competitive resilience.

Key Insights

The EY survey provides a comprehensive view of current AI cybersecurity dynamics. First, adoption rates are exceptionally high: 96% of security leaders believe AI is a core defensive solution, and 95% are actively deploying it in operations, driven by efforts to automate routine functions amid budgetary and effectiveness concerns. Second, optimism about AI's transformative potential is nearly universal, with 99% of respondents predicting AI will completely overhaul network defense methodologies. Third, a significant concern parallels this optimism: 96% acknowledge AI as a major threat due to its role in enabling fast, sophisticated cyberattacks by hackers.

Cybersecurity leaders expect AI to assume critical roles within two years: 62% anticipate it will handle advanced persistent threat detection, 58% fraud detection, and 51% identity and access management. However, gains remain elusive, as two-thirds of executives are still testing AI products rather than achieving full-scale integration.

Governance emerges as a focal point, with 97% of executives deeming it essential for deriving value from AI cybersecurity investments. Progress is mixed: roughly half have begun implementing governance mechanisms in key AI activities, but only 26% have fully integrated these processes into business units, and merely 20% report the governance mindset as embedded in organizational culture. Human oversight is similarly emphasized, with 85% having human-in-the-loop requirements for all major cybersecurity decisions and 98% asserting agentic tools need human oversight to pay off. Yet, talent shortages cripple this oversight, as 90% struggle to recruit and retain cybersecurity workers capable of managing AI products, and a similar percentage cite employee unpreparedness for AI-powered attacks as their biggest liability.

Strategic Implications

Industry Impact: Wins and Losses

The cybersecurity industry faces a bifurcated trajectory. Winners include AI governance solution providers: with C-suite attention on governance high but only 26% of companies reporting full integration, demand for frameworks that ensure trustworthy AI is substantial. Cybersecurity training and education providers also gain, given that 90% of companies report difficulty recruiting and retaining AI-capable workers, fueling a need for upskilling programs. Established cybersecurity vendors with robust AI capabilities benefit from the near-universal (99%) expectation that AI will overhaul network defense, driving upgrade and replacement cycles. Conversely, the losers are companies with limited AI governance implementation; with only 20% of organizations reporting a governance mindset embedded in culture, the laggard majority risks competitive disadvantage and heightened security vulnerabilities. Traditional cybersecurity workers lacking AI skills face obsolescence as leaders expect AI to take over many team functions. Businesses that stop at task-level automation rather than advancing to fully agentic operations may see diminished ROI, a transition EY identifies as critical to financial returns.

Investor Considerations: Risks and Opportunities

Investors must navigate a landscape where AI cybersecurity presents both high-potential opportunities and significant risks. Opportunities arise in sectors addressing governance gaps, such as software for AI oversight and compliance, which could see increased investment due to the 97% emphasis on governance essentiality. The talent shortage opens avenues for edtech and workforce development platforms focused on cybersecurity AI skills. However, risks loom for portfolios heavy in companies slow to adapt; firms with incomplete governance integration or cultural lag may underperform as cyber threats evolve. The ROI gap—with half reporting under $1 million returns—signals that investments in AI cybersecurity without proper oversight could yield poor financial outcomes. Investors should prioritize companies demonstrating progress in governance and talent development, as these factors differentiate successful implementations in a market transitioning from human-centric to AI-augmented operations.

Competitive Dynamics

Competitive advantage in cybersecurity now hinges on AI integration maturity. Companies that rapidly advance beyond task-level automation to fully agentic operations, as recommended by EY, can leverage AI for enhanced threat detection and cost savings, potentially outpacing rivals. The survey indicates that robust governance frameworks could make the difference between success and failure in this gradual handoff to AI. Competitors with embedded governance cultures (only 20% have achieved this) may secure stronger client trust and regulatory compliance, positioning them as leaders in a crowded market. Conversely, firms lagging in governance risk reputational damage from AI missteps or breaches, especially as 85% of executives enforce human-in-the-loop requirements but struggle with oversight capacity. This dynamic pressures cybersecurity firms to innovate not just in AI technology but in holistic management systems that balance automation with human control.

Policy Ramifications

Policy frameworks must evolve to address the dual-use nature of AI in cybersecurity. With 96% of leaders citing AI as a threat due to hacker exploitation, regulatory bodies may intensify focus on standards for AI governance and ethical use. The widespread appreciation for governance mechanisms (97% deem them essential) suggests industry readiness for policy interventions that mandate transparency and accountability in AI cybersecurity tools. Policies could incentivize workforce development programs, perhaps through tax credits or public-private partnerships, to alleviate the talent shortage reported by 90% of companies. Additionally, as AI takes over functions like fraud detection and identity management, data privacy regulations may need updates to ensure AI systems remain compliant without compromising security. The gradual handoff to AI, underscored by expectations of functional takeover, calls for proactive policy shaping to mitigate risks while fostering innovation.

The Bottom Line

Cybersecurity is undergoing a structural shift from reliance on human expertise to AI-augmented operations, but this transition is fraught with unrealized gains and implementation gaps. The core takeaway is that belief in AI's necessity does not translate to financial or operational success without robust governance and skilled human oversight. EY's four high-level recommendations define the critical path forward: budgetary constraints make AI a virtual necessity; ROI depends on moving beyond task-level automation; human oversight is nonnegotiable; and strong governance underpins trustworthy AI. Companies that prioritize integrating governance into corporate culture and investing in talent development will likely navigate the ROI gap and secure competitive edges. For executives, the bottom line is clear: bridging the divide between AI adoption and value realization requires a strategic focus on oversight frameworks and workforce readiness, positioning governance as the ultimate differentiator in the evolving cybersecurity landscape.

Source: CIO Dive

Intelligence FAQ

What ROI are companies seeing from agentic AI cybersecurity tools?
Roughly half of executives report returns under $1 million, and another 12% do not track ROI at all, indicating significant unrealized gains according to the EY survey.

How mature is AI governance in cybersecurity?
Only 26% of companies have fully integrated governance processes into business units, yet 97% of executives deem governance essential, highlighting a critical implementation gap that can determine success or failure.

How severe is the AI cybersecurity talent shortage?
90% of companies struggle to recruit and retain workers capable of managing AI products, creating a severe oversight deficit that jeopardizes security.

Which cybersecurity functions will AI take over first?
Leaders anticipate AI handling advanced persistent threat detection (62%), fraud detection (58%), and identity and access management (51%) within the next two years.