Adversarial robustness for machine learning
Machine learning is becoming an important component of many applications we use every day. ML models verify our identity through face and voice recognition, label images, make friend and shopping suggestions, search for content on the internet, write code, compose emails, and even drive cars. With so many critical tasks being transferred to machine learning and deep learning models, it is fair to be a bit worried about their security.
Along with the growing use of machine learning, there has been mounting interest in its security threats. At the fore are adversarial examples, imperceptible changes to input that manipulate the behavior of machine learning models. Adversarial attacks can result in anything from annoying errors to fatal mistakes.
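As a rough illustration (not taken from the book), the classic fast gradient sign method (FGSM) shows how such an attack works: nudge each input feature by a tiny amount in the direction that increases the model's loss. The toy classifier, weights, and numbers below are all made up for demonstration; a sketch under those assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy linear classifier (weights chosen purely for illustration).
w = [2.0, -1.0]
b = 0.0

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

x = [0.2, 0.3]  # clean input, true label y = 1
y = 1.0

# Gradient of the cross-entropy loss w.r.t. the *input*: (p - y) * w
p = predict(x)
grad_x = [(p - y) * wi for wi in w]

# FGSM step: move each feature by eps in the sign of the gradient,
# i.e. the direction that raises the loss the fastest.
eps = 0.1
x_adv = [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad_x)]

print(predict(x) > 0.5)      # clean input: classified as class 1
print(predict(x_adv) > 0.5)  # perturbed input: prediction flips
```

A perturbation of just 0.1 per feature is enough to flip the decision here; on image models the same idea yields changes too small for a human to notice.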
With so many papers being published on adversarial machine learning, it is difficult to wrap your head around all that is going on in the field. Fortunately, Adversarial Robustness for Machine Learning, a book by AI researchers Pin-Yu Chen and Cho-Jui Hsieh, provides a comprehensive overview of the topic.
Chen and Hsieh bring together the intuition and science behind the key components of adversarial machine learning: attacks, defenses, certification, and applications. Read a summary of the key topics in adversarial machine learning on TechTalks.
For more on adversarial machine learning: