Thursday 

Room 1 

10:20 - 11:20 

(UTC+01)

Talk (60 min)

Threat Modelling for ML/AI systems

Machine Learning and Artificial Intelligence have attracted a lot of attention in recent years, and since the release of ChatGPT in late 2022, interest in these topics has exploded. We are witnessing a gold rush towards these technologies and, amidst the rush, it is all too easy to forget about security. One critical challenge in the realm of ML/AI is that models are often black boxes: understanding how these systems arrive at their conclusions, evaluating their behaviour, and ensuring consistency across a large input space can be elusive. At Equinor, we have started adopting Threat Modelling practices to identify potential risks in the use of ML/AI systems. Threat Modelling is recognised as one of the most effective practices for improving the security of software solutions.

AI
Machine Learning
Application Security
Process

In this talk, we'll explore how to adapt Threat Modelling methodologies to the unique challenges posed by ML/AI. Additionally, we will share outcomes and experiences from applying this approach to selected ML/AI applications within our organisation.

Andrea Brambilla

Coming soon

Benjamin Løkling

Benjamin holds a degree in security and has worked as a developer. He now works as an AppSec engineer and is really into threat modelling.