November 19th, 2025 - 13h30-14h30 - Room 5 (level -1 mezzanine)

AI vs AI: Next-Generation Red Teaming

The development of new capabilities and use cases for LLMs and agentic AI is exposing a new attack surface.
One reason for the success of LLMs is their alignment with human and ethical principles, which helps keep their behavior under control, especially in professional settings. Despite these safeguards, it is still possible to "jailbreak" LLMs through prompt injection alone: malicious content generation, access to confidential information, code execution, and more remain achievable with carefully crafted prompts.

Jailbreaking has quickly evolved from a mere anomaly and an artisanal activity into a full-fledged scientific discipline and a highly active research field, advancing in step with the development of LLMs and agentic AI.

In this workshop, run in collaboration with a public-sector partner leading on this topic, we will share the most effective attack strategies and show how recent work, notably by Capgemini, demonstrates that these strategies are increasingly carried out by specialized AI systems. In a live demonstration, we will then show how dedicated AI agents can perform automated red teaming, helping to identify vulnerabilities and strengthen the security of systems that incorporate LLMs or agentic AI.

Copyright © key4events - All rights reserved