16 March 2026

Bridging the Evidence Gap on AI Misuse in Cyberspace, side event at RSAC 2026 Conference

A path towards global cooperation at the intersection of AI security and cyber threat intelligence

Under the digital track of France's 2026 G7 Presidency, a clear ambition is taking shape: to rally like-minded partners around a shared vision of safe and trusted AI in the service of the common good. Central to this effort is the pursuit of international consensus on AI security, with a deliberate focus on the diversion of AI for malicious purposes and the complex cybersecurity challenges that accompany the rapid diffusion of these capabilities (cf. G7 Press Release). On the sidelines of RSAC 2026, the Paris Peace Forum, with the support of the Consulate General of France in San Francisco, is convening a working breakfast to advance one core imperative of that agenda: closing the evidence gap that currently prevents accountability for AI misuse by cyber adversaries.


Context

The operational use of AI by malicious actors has crossed a qualitative threshold. Recent threat intelligence reports confirm that nation-state actors and criminal groups are now integrating AI capabilities across the operational lifecycle, while a growing dark-web market of AI-enhanced attack tooling is lowering the barrier to entry for less capable players. Yet the underlying evidence remains structurally fragmented. AI developers observe the model layer – jailbreaks, API misuse, agentic exploitation – while cybersecurity vendors track the network and endpoint layers. Despite notable initiatives in recent years, the information-sharing ecosystem between these communities remains immature: no trusted mechanism exists to reconcile the two pictures into a shared, policy-actionable evidence base. This gap has direct governance consequences. Without reliable, comparable data on how AI is weaponized across attack chains, policymakers struggle to calibrate responses, articulate responsibilities, or adapt existing cyber norms.

Against this backdrop, a useful point of departure lies in the proven organizational structures that the cybersecurity community has established for itself to consolidate intelligence and coordinate responses across the global threat landscape. Information-sharing platforms and threat-exchange protocols enable defenders to pool evidence across borders; incident reporting schemes and coordinated vulnerability disclosure provide the incentives that make such exchanges sustainable.

The AI security community, by contrast, has invested heavily in model evaluation and pre-deployment risk assessment, while post-deployment monitoring of how models are actually used "in the wild" remains nascent.


Goals

This working breakfast aims to explore how to better integrate the cybersecurity and AI security communities to advance global transparency on AI misuse in cyberspace. In particular, participants will be invited to address the following questions:

  • What are the primary technical, structural and motivational obstacles preventing effective intelligence sharing between AI security and cybersecurity practitioners across the public and private sectors?
  • How can proven mechanisms, frameworks, or practices from the cybersecurity ecosystem be leveraged to generate a clearer, policy-relevant picture from the insights of both communities?
  • Which existing initiatives at the intersection of AI security and cybersecurity should be built upon in the context of the G7 to advance international coordination on countering AI-enhanced cyber adversaries?

The discussion will be held under the Chatham House Rule.

For more information and to request an invitation, contact: pablo.rice@parispeaceforum.org