TechTalks Newsletter

Security must be baked into machine learning research

Ben Dickson
Jun 3, 2021

At this year’s International Conference on Learning Representations (ICLR), a group of researchers from the University of Maryland presented an attack technique designed to slow down deep learning models that have been optimized for fast, time-sensitive operations.

What made their work particularly interesting was that they went out of their way to hack an optimization technique they themselves had developed.

In some ways, their work illustrates the challenges facing the machine learning community. On the one hand, many researchers and developers are racing to make deep learning available to different applications. On the other hand, these innovations create new challenges of their own, and researchers need to actively seek out and address those challenges before they cause irreparable damage.

Read the full story on TechTalks

For more on machine learning security:

  • Machine learning adversarial attacks are a ticking time bomb

  • The security threat of adversarial machine learning is real

  • The underrated threat of data poisoning



© 2022 Ben Dickson