Wednesday 

Room 1 

16:20 - 17:20 

(UTC+01

Talk (60 min)

Red Teaming Large Language Models

As machine learning models become increasingly integrated into our digital infrastructure, evaluating their vulnerabilities is essential for both security and ethical reasons. Large language models (LLMs) are no exception. While they represent a revolutionary leap in natural language tasks, LLMs pose unique security and ethical challenges, including the potential to generate misleading, harmful, or biased content as well as leak confidential data, denial of service, or even cause remote code execution.

AI
Machine Learning
Testing

This talk provides an in-depth look into red-teaming LLMs as an evaluation methodology to expose these vulnerabilities. By focusing on case studies and practical examples, we will differentiate between structured red team exercises and isolated adversarial attacks, such as model jailbreaks. Attendees will gain insights into the types of vulnerabilities that red teaming can reveal in LLMs, as well as potential strategies for mitigating these risks. The session aims to equip professionals with the knowledge to better evaluate the security and ethical dimensions of deploying Large Language Models in their organizations.

Armin Buescher

Armin Buescher has over 15 years of expertise in software engineering and research for the development of security technologies. He has worked in leading R&D roles for high-profile cybersecurity companies, including FireEye, Symantec, NortonLifeLock, Blue Coat, and Websense. Early in 2022, he joined Crosspoint Labs as a founding member of a team of security experts. He is currently focusing his research on using AI for security purposes.