TechTalks Newsletter

How to protect contrastive learning models against adversarial attacks

Ben Dickson
Nov 18, 2021

Contrastive learning (CL) is a machine learning technique that has gained popularity in the past few years because it reduces the need for annotated data, one of the main pain points of developing ML models.
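
For readers new to the idea, here is a minimal, illustrative sketch of a contrastive (InfoNCE-style) loss in NumPy; the function and variable names are illustrative assumptions, not taken from the paper. The model is trained to pull the embeddings of two augmented views of the same image together and push all other pairs apart, with no labels involved.

```python
# Minimal sketch of a contrastive (InfoNCE-style) loss.
# z1 and z2 hold embeddings of two augmented "views" of the same batch of images.
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two views of the same inputs."""
    # L2-normalize so the dot product becomes cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # (batch, batch) similarity matrix
    labels = np.arange(len(z1))               # positives lie on the diagonal
    # Cross-entropy over each row: pull matching pairs together, push the rest apart.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
print(info_nce_loss(z1, z2))
```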

But because it learns representations from unlabeled data rather than from explicit labels, contrastive learning poses security challenges that differ from those of supervised machine learning. Machine learning and security researchers are concerned about how adversarial attacks affect models trained with contrastive learning.
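
As a rough illustration of what an adversarial attack does (not the specific attack studied in the paper), the FGSM-style sketch below nudges the input of a toy logistic-regression model in the direction that increases its loss; all names and parameters here are illustrative assumptions.

```python
# Minimal FGSM-style perturbation against a toy logistic-regression model.
import numpy as np

def fgsm_perturb(x, w, b, y, epsilon=0.1):
    """Nudge input x by epsilon in the direction that increases the loss."""
    logit = x @ w + b
    p = 1.0 / (1.0 + np.exp(-logit))          # sigmoid probability of class 1
    grad_x = (p - y) * w                      # d(binary cross-entropy)/dx
    return x + epsilon * np.sign(grad_x)      # step along the sign of the gradient

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.0
x, y = rng.normal(size=16), 1.0
print("clean logit:      ", x @ w + b)
print("adversarial logit:", fgsm_perturb(x, w, b, y) @ w + b)
```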

A new paper by researchers at the MIT-IBM Watson AI Lab sheds light on the sensitivity of contrastive learning models to adversarial attacks. Accepted at NeurIPS 2021, the paper introduces a technique that helps protect contrastive learning models against adversarial attacks while preserving their accuracy.

Read the full article on TechTalks.

For more on machine learning security:

  • Computer vision and deep learning provide new ways to detect cyber threats

  • What are membership inference attacks?

  • Machine learning adversarial attacks are a ticking time bomb

  • Create adversarial examples with this interactive JavaScript tool
