Lecture: 'Machine Learning Security in Practice'

Cybersecurity is paramount for the dependable operation of digital systems. With about 10% of IT budgets dedicated to cybersecurity, security is also increasingly vital for emerging technologies like machine learning (ML). While ML security research is growing, it is often criticized as impractical and as failing to reflect how ML is used in the real world.

In this talk, I provide an overview of ML vulnerabilities and highlight common disconnects between ML security research and real-world usage of ML: research often focuses on isolated models rather than entire ML pipelines, employs unrealistic perturbations, or makes impractical assumptions. I then present our recent work that measures these gaps and introduce our new AI security incident collection framework, through which we will gain valuable real-world ML security knowledge to motivate more practical research.

About the speaker

Kathrin Grosse is a Research Scientist at IBM Research in Zurich, Switzerland, where her work bridges the gap between AI security research and practical industry needs. She earned her master's degree from Saarland University and completed her Ph.D. at CISPA Helmholtz Center, Saarland University, in 2021, under the guidance of Michael Backes. Following her doctorate, she conducted postdoctoral research with Battista Biggio in Cagliari, Italy, and Alexandre Alahi at EPFL, Switzerland. Before joining IBM Research full-time, Kathrin gained valuable industry experience through internships at IBM in 2019 and Disney Research in 2020/21. Beyond her core research, Kathrin actively contributes to the scientific community. She serves as a reviewer for prestigious journals and top-tier conferences, organizes workshops and conferences, and holds two patents. Her contributions have been recognized externally; in 2019, she was nominated as an AI Newcomer for the German Federal Ministry of Education and Research's Science Year, and in 2024, she was an invited member of the ACM.