Security must be baked into machine learning research
At this year’s International Conference on Learning Representations (ICLR), a group of researchers from the University of Maryland presented an attack technique designed to slow down deep learning models that have been optimized for fast and efficient operation.
What made their work particularly interesting was that they went out of their way to attack an optimization technique they themselves had developed.
In some ways, their work illustrates the challenges the machine learning community faces. On the one hand, many researchers and developers are racing to make deep learning available for new applications. On the other hand, their innovations create new challenges of their own, and they need to actively seek out and address those challenges before they cause irreparable damage.