TechTalks Newsletter


Membership inference attacks

Ben Dickson
Apr 23, 2021

One of the wonders of machine learning is that it turns any kind of data into mathematical equations. Once a model has been trained, you can discard the training data and publish the model on GitHub or run it on your own servers without worrying about storing or distributing the sensitive information the training dataset contained.

But a type of attack called “membership inference” makes it possible to determine whether a given record was part of the data used to train a machine learning model. In many cases, attackers can stage membership inference attacks without access to the model’s parameters, simply by observing its output. Membership inference raises security and privacy concerns when the target model has been trained on sensitive information.
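To make the idea concrete, here is a minimal sketch of one common black-box variant: a confidence-threshold attack. It relies on the observation that overfit models tend to be more confident on records they were trained on than on unseen records. The `predict_proba` stub below is a hypothetical stand-in for a real model queried over an API; the threshold value is illustrative, not prescriptive.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# The attacker has black-box access only: they see the model's output
# confidence, not its parameters.

def predict_proba(record):
    # Hypothetical target model stub. A real attack would query the
    # deployed model's prediction API. Overfit models are typically
    # more confident on training members than on unseen records.
    training_set = {("alice", 34), ("bob", 51)}
    return 0.98 if record in training_set else 0.62

def infer_membership(record, threshold=0.9):
    """Guess that `record` was in the training set if the model's
    top-class confidence exceeds `threshold`."""
    return predict_proba(record) > threshold

print(infer_membership(("alice", 34)))  # likely member -> True
print(infer_membership(("carol", 29)))  # likely non-member -> False
```

In practice, attackers calibrate the threshold (or train a separate “attack model”) using shadow models that mimic the target’s behavior, but the core signal is the same: the gap between the model’s confidence on members and non-members.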

In my latest column on TechTalks, I discuss how membership inference attacks work and how you can protect yourself against them.

Read the full article here.

More explainers on artificial intelligence:

  • Demystifying deep learning

  • What is semi-supervised machine learning?

  • What is ensemble learning?

  • What is machine learning data poisoning?



© 2022 Ben Dickson