Wednesday
Room 4
15:00 - 16:00
(UTC+01)
Talk (60 min)
Using GenAI on your code, what could possibly go wrong?
With GenAI, developers are shifting from traditional code reuse to generating new code snippets via prompts, significantly changing how software is developed.
Several academic studies show that code generated by LLMs trained on vulnerable OSS implementations is itself often vulnerable. Another study found that developers tend to trust GenAI-created code more than human-written code. Combined with higher code velocity, this leads to more vulnerabilities in the resulting software.
Using an AI system that runs an LLM also carries additional risks: jailbreaks, data poisoning, malicious agents, recursive learning, and IP infringement.
In this presentation, we will examine real-world data from several academic studies to understand how GenAI is changing software security, the risks it introduces, and possible strategies to address these emerging issues.