It has been shown that data-driven AI and machine learning systems are vulnerable to adversarial examples, i.e., imperceptible perturbations to images, text and audio that fool these systems into perceiving things that are not there.
This phenomenon is even more evident in cybersecurity domains like malware and spam detection, in which data is purposely manipulated by cybercriminals to undermine the outcome of automatic analyses. In this talk, I review previous work on evasion attacks, in which malicious samples are manipulated at test time to evade detection, and on poisoning attacks, which can mislead learning by manipulating even a small fraction of the training data. I conclude by discussing some promising defense mechanisms against both attacks in the context of real-world applications, including computer vision, biometric identity recognition and computer security.
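The evasion attacks mentioned above can be illustrated with a minimal, hypothetical sketch: a gradient-sign perturbation (in the spirit of the fast gradient sign method) flips the decision of a toy linear detector while changing each feature by at most a small budget `eps`. All weights, samples, and values below are illustrative assumptions, not material from the talk.

```python
import numpy as np

# Toy linear classifier with assumed, illustrative trained weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.0

def predict(x):
    """Binary decision: class 1 ("malicious") if the linear score is positive."""
    return 1 if x @ w + b > 0 else 0

# A sample correctly detected as class 1.
x = np.array([0.3, -0.2, 0.1])

# Evasion: perturb x against the sign of the score gradient (which is
# simply w for a linear model), bounded per-feature by eps.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # → 1 0
```

In realistic attacks the same idea applies to nonlinear models: the perturbation direction comes from the gradient of the model's loss with respect to the input, computed under a norm constraint that keeps the change small or imperceptible.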
Battista Biggio (MSc 2006, PhD 2010) is an Assistant Professor at the University of Cagliari, Italy. In 2015, he co-founded Pluribus One (www.pluribus-one.it). His research interests include adversarial machine learning, kernel methods, biometrics and cybersecurity. In particular, he has made pioneering contributions to the area of secure machine learning, demonstrating evasion and poisoning attacks and how to mitigate them, and playing a leading role in the establishment and advancement of this research field.
He regularly serves as a program committee member for the most prestigious conferences and journals in the areas of machine learning and computer security (ICML, NeurIPS, ACM CCS, IEEE S&P). He chairs the IAPR TC on Statistical Pattern Recognition Techniques, co-organizes the S+SSPR and AISec workshops, and serves as an Associate Editor for IEEE TNNLS, Pattern Recognition and IEEE CIM. Dr. Biggio is a senior member of the IEEE and a member of the IAPR and the ACM.