Adversarial attacks in constrained-feature domains
There’s growing interest in, and concern about, the security of machine learning models. It is now well established that the machine learning and deep learning models powering many kinds of applications are vulnerable to adversarial attacks: carefully crafted inputs that cause a model to make wrong predictions.
But finding and fixing adversarial vulnerabilities in ML models is easier said than done. There has been a lot of research in the field in recent years, but most of it has focused on models that process visual data.
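To make that vision-centric setting concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the most widely studied attacks on image classifiers. The model, inputs, and epsilon value are placeholders for illustration, not code from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: move each pixel by +/- epsilon in the direction
    that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Pixels are free to move independently; the only constraint is
    # staying inside the valid image range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```

The key property to notice is how little the attack has to respect: every pixel can be nudged on its own, by any small amount, and the result is still a valid image.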
We’re seeing growing use of ML in applications such as network data analysis, fraud detection, and spam filtering, which work on tabular and text data. Unfortunately, many of the techniques used to discover adversarial attacks against computer vision systems do not apply here: image pixels can be perturbed independently and continuously, while tabular features are often categorical, bounded, or interdependent, so an arbitrary perturbation yields an input that could never occur in practice.
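A rough sketch of what an attack in such a domain has to do instead is shown below. The `feature_spec` format, the feature kinds, and the example values are illustrative assumptions, not the paper’s method:

```python
import numpy as np

def project_to_feasible(x_adv, x_orig, feature_spec):
    """Project a perturbed tabular record back onto the feasible
    feature space described by `feature_spec`, a list of
    (kind, low, high) tuples -- a purely illustrative format."""
    x_proj = x_adv.copy()
    for i, (kind, low, high) in enumerate(feature_spec):
        if kind == "immutable":
            # e.g. account age: the attacker cannot change it at all
            x_proj[i] = x_orig[i]
        elif kind == "integer":
            # e.g. number of logins: must stay a whole number in range
            x_proj[i] = np.clip(round(x_proj[i]), low, high)
        else:  # "continuous"
            x_proj[i] = np.clip(x_proj[i], low, high)
    return x_proj

# Hypothetical record: an immutable field, an integer count, a rate in [0, 1]
spec = [("immutable", None, None), ("integer", 0, 500), ("continuous", 0.0, 1.0)]
x_orig = np.array([3.0, 42.0, 0.10])
x_adv = x_orig + np.array([0.8, 3.4, 0.05])   # raw FGSM-style perturbation
print(project_to_feasible(x_adv, x_orig, spec))  # -> [3.0, 45.0, 0.15]
```

Real domains add inter-feature dependencies on top of this (for example, a total that must equal the sum of its parts), which is part of why generic, vision-style tooling hasn’t transferred.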
Meanwhile, the study of adversarial attacks on these data types has mostly failed to produce generalized tools and approaches for building robust ML models.
In a new study presented at the International Joint Conference on Artificial Intelligence (IJCAI) 2022, scientists at the University of Luxembourg presented new techniques for finding adversarial attacks, and building defenses, in these constrained-feature applications. The research points toward systematic ways to address adversarial vulnerabilities in machine learning systems.
Read the full story on TechTalks.