Neural networks can be infected with malware
This issue was sponsored by Edge Impulse, the world’s easiest platform for embedded ML.
With millions or even billions of numerical parameters, deep learning models can do many things: detect objects in photos, recognize speech, generate text—and hide malware. Neural networks can embed malicious payloads without triggering anti-malware software, researchers at the University of California, San Diego, and the University of Illinois have found.
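To see why this is possible, consider that each 32-bit floating-point parameter has low-order mantissa bits that barely affect the model's output. The sketch below is an illustrative least-significant-byte embedding in NumPy, not the paper's exact algorithm (EvilModel's actual encoding replaces larger chunks of parameter bytes); the function names are my own.

```python
# Illustrative sketch: hiding bytes in the low-order byte of float32 weights.
# Assumes a little-endian machine, where byte 0 of each float32 is the
# least significant mantissa byte; overwriting it perturbs each weight
# by at most ~3e-5 of its magnitude.
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Return a copy of `weights` with `payload` hidden in the LSB of each float32."""
    flat = weights.astype(np.float32).ravel().copy()   # contiguous copy
    raw = flat.view(np.uint8)                          # 4 bytes per weight
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    # Every 4th byte is the least significant byte of one weight.
    raw[::4][: len(payload)] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, n: int) -> bytes:
    """Recover `n` hidden bytes from a tensor produced by embed_payload."""
    raw = np.ascontiguousarray(weights.astype(np.float32).ravel()).view(np.uint8)
    return raw[::4][:n].tobytes()

# Usage: hide a 5-byte "payload" in 100 weights and read it back.
w = np.random.randn(100).astype(np.float32)
stego = embed_payload(w, b"hello")
recovered = extract_payload(stego, 5)
```

The point of the sketch is that the perturbed weights stay numerically close to the originals, so the model's accuracy barely changes, while a conventional scanner sees only a large blob of floating-point data.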
Their malware-hiding technique, EvilModel, sheds light on the security risks of deep learning, a topic that has become a focus of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in the applications we use every day, the security community needs to think about new ways to protect users against these emerging threats.
Read the full article on TechTalks.
For more on machine learning security:
Machine learning security needs new perspectives and incentives
Machine learning adversarial attacks are a ticking time bomb
Build embedded ML models in minutes with Edge Impulse! Sign up for your free account in December and you'll be automatically entered to win one of 100 Arduino Machine Vision bundles.