The velocity of modern software delivery often exceeds the capacity of traditional security reviews. As organisations integrate generative AI into development lifecycles, the friction between rapid deployment and governance increases.
The Cyber Security & Cloud Expo Global 2026 examined this tension, prioritising operational realities over theoretical discussions of “AI safety”. For technical architects and platform engineers, the objective is to establish trust within automated workflows without slowing delivery.
Trust at machine scale
The definition of a secure software supply chain is expanding. Ilkka Turunen, Field CTO at Sonatype, argues that development automation requires a change in how teams establish trust.
In his analysis of Sonatype’s recent ‘State of the Software Supply Chain’ report, Turunen identifies risks introduced by open-source downloads and AI-assisted coding tools. Automation changes threat models; manual code review fails when AI agents generate code at volume.
Engineering practices must embed security checks directly into the continuous integration pipeline. Turunen suggests that establishing trust at “machine scale” is the only path to maintaining velocity. If security remains a gatekeeping function performed by humans at the end of a sprint, delivery bottlenecks occur.
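A minimal sketch of what such a pipeline gate might look like. The advisory list here is hypothetical and held in memory; a real CI step would query a feed such as the OSV database or a commercial scanner, and fail the build when violations are found:

```python
# Sketch: a dependency gate that could run as one step in a CI pipeline.
# KNOWN_BAD is a hypothetical advisory list for illustration only.

KNOWN_BAD = {
    ("leftpad", "1.0.0"),   # hypothetical compromised release
    ("requests", "0.0.1"),  # hypothetical typosquat version
}

def parse_requirements(text: str) -> list[tuple[str, str]]:
    """Parse 'name==version' lines, ignoring comments and blanks."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        deps.append((name.lower(), version.strip()))
    return deps

def gate(requirements: str) -> list[str]:
    """Return flagged dependencies; a CI job fails if the list is non-empty."""
    return [f"{name}=={version}"
            for name, version in parse_requirements(requirements)
            if (name, version) in KNOWN_BAD]

if __name__ == "__main__":
    sample = "requests==2.31.0\nleftpad==1.0.0\n# pinned deps\n"
    print(gate(sample))
```

The point of running this at machine scale is that the check executes on every commit, with no human in the loop until a violation actually needs triage.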
Data context and exfiltration
Legacy security technologies often fail because they lack context regarding the data they protect. Dave Matthews, Senior Solutions Engineer for EMEA at Concentric AI, states that data security must evolve from static boundary defence to a managed asset strategy. This applies specifically to developers overseeing GenAI rollouts, where systems ingest and process vast datasets. Matthews notes that both AI and contextual awareness are necessary to identify risk within these environments.
The consequences of failing to secure this data are serious. Guy Batey, Head of Engineering at Rubrik, reports that 94 percent of ransomware attacks now involve data exfiltration. Attackers prioritise data theft over simple encryption, demanding a multi-layered prevention strategy. For platform engineers, backup and recovery systems are insufficient on their own; threat detection must occur closer to the data source.
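One illustrative way to detect exfiltration near the data source is to baseline outbound volume statistically and flag deviations. This is a toy sketch with hypothetical numbers; production detection layers in many more signals (destinations, file types, identity context):

```python
import statistics

def flag_egress(history_mb: list[float], current_mb: float, k: float = 3.0) -> bool:
    """Flag if current outbound volume exceeds the historical mean
    by more than k standard deviations (population stdev)."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb)
    return current_mb > mean + k * stdev

if __name__ == "__main__":
    # Hypothetical daily egress history in megabytes for one service account
    baseline = [10, 12, 11, 9, 10]
    print(flag_egress(baseline, 500))  # a sudden bulk transfer
    print(flag_egress(baseline, 11))   # normal variation
```

The design choice worth noting is where this runs: applied at the data layer rather than at the network perimeter, the same check sees the transfer before the data has left the environment.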
Rapid development cycles and unmonitored assets create what Marcelo Castro Escalada of Outpost24 describes as a chaotic attack surface.
In his session, Castro Escalada explored how engineering teams can use “unlikely synergies” to manage this complexity. The session covered “Modern External Attack Surface Management,” a discipline for securing risky endpoints that bypass standard inventory checks. For DevSecOps teams, the goal is bringing these assets under management before they become entry points.
Integrating AI with infrastructure
As AI applications move to production, their integration into cloud infrastructure demands specific architectural standards. Eng. Sameh Zaghloul, CTIO of Fixed Solutions, outlined the lifecycle of engaging with AI through cloud systems, focusing on cyber resilience. Zaghloul identified increased automation and enhanced data analytics as primary components of this process.
A panel featuring leaders from JPMorgan Chase, Saint-Gobain, and TSMC discussed the construction of these cloud workplaces. These executives argue that security cannot come at the cost of developer experience. Their insights aimed to help developers optimise cloud infrastructure use while maintaining governance.
While technical controls remain necessary, the behavioural aspect of AI security is gaining prominence. Emily Yang, Head of Human-Centred AI and Innovation at Standard Chartered, warns that autonomous AI agents introduce risks related to psychological and behavioural manipulation.
Yang explored how AI behaviour can steer user decisions, a vector that technical vulnerabilities miss. This adds a dimension to the threat model for AI developers, who must consider how their systems might manipulate human operators.
Mike Brass, Head of GLC, Enterprise Security Architecture at National Highways, advocated for embedding cyber resilience into enterprise strategy. His session on “human-centric security” argued for integrating practitioner fundamentals with business goals. For senior architects, this means designing systems that account for human behaviour rather than relying solely on technological enforcement.
Navigating ethical and technical risks
The intersection of AI and cybersecurity presents ethical challenges that impact technical implementation.
A panel discussion involving representatives from Santander, The Adecco Group, and National Highways examined how AI reshapes modern threat detection and response. The discussion covered the trends in AI-driven cybersecurity, providing details for developers building AI-enabled systems. While AI offers tools for defence, it also introduces operational complexities that require specific management.
For enterprises, the separation between “development” and “security” is obsolete. Whether it is the supply chain risks identified by Sonatype, the data context demanded by Concentric AI, or the human-centric design advocated by Standard Chartered, the solution lies in integration.
The conclusion? Security must exist as a property of the platform, not a wrapper applied after the fact.
See also: Keeper Security: Software supply chain threats have evolved
Want to learn more about cybersecurity from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the AI & Big Data Expo. Click here for more information.
Developer is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

