Membership inference attacks against deep RL
With machine learning becoming part of many applications we use every day, there is a growing focus on identifying and addressing security and privacy threats to ML models.
However, these threats vary across machine learning paradigms, and some areas of ML security remain understudied. In particular, the security of reinforcement learning algorithms has received comparatively little attention.
A new study by researchers at McGill University, Mila, and the University of Waterloo focuses on the privacy threats of deep reinforcement learning algorithms. The researchers propose a framework for testing the vulnerability of reinforcement learning models against membership inference attacks.
The results of the study show that adversaries can stage effective membership inference attacks against deep RL systems and potentially recover sensitive information used to train the models. The findings are significant because reinforcement learning is finding its way into industrial and consumer applications.
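To illustrate the general idea behind membership inference (not the paper's RL-specific method), here is a minimal toy sketch. It assumes the common observation that a trained model tends to have lower loss on records it was trained on; the attacker learns a loss threshold from records with known membership status and uses it to classify fresh records. All names and numbers here are illustrative, not from the study.

```python
import random

random.seed(0)

def model_loss(is_member: bool) -> float:
    # Toy stand-in for a trained model's per-record loss:
    # training-set members get systematically lower loss
    # than records the model never saw.
    base = 0.2 if is_member else 0.8
    return base + random.gauss(0, 0.1)

# The attacker observes losses for records with known membership
# status (e.g., obtained via shadow models) and picks a threshold
# midway between the two average losses.
member_losses = [model_loss(True) for _ in range(1000)]
outsider_losses = [model_loss(False) for _ in range(1000)]
threshold = (sum(member_losses) / len(member_losses)
             + sum(outsider_losses) / len(outsider_losses)) / 2

def infer_membership(loss: float) -> bool:
    # Predict "member" when the model is unusually confident.
    return loss < threshold

# Evaluate the attack on fresh records of each kind.
correct = sum(infer_membership(model_loss(True)) for _ in range(500))
correct += sum(not infer_membership(model_loss(False)) for _ in range(500))
accuracy = correct / 1000
print(f"attack accuracy: {accuracy:.2f}")
```

In this idealized setting the loss gap is large, so the attack is nearly perfect; real attacks face much noisier signals, and the RL setting studied in the paper is harder still because the "training data" consists of trajectories rather than labeled records.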
Read the full article on TechTalks.
For more on AI research: