
Machine Learning, Security, and Privacy: Challenges and Opportunities

Speaker Name: Neil Gong
Speaker Title: Assistant Professor
Speaker Organization: Department of Electrical and Computer Engineering, Iowa State University
Start Time: Wednesday, February 27, 2019 - 11:00am
End Time: Wednesday, February 27, 2019 - 12:15pm
John Musacchio


Machine learning naturally intersects with security and privacy. On one hand, we can leverage machine learning to uncover new vulnerabilities in computer and network systems as well as enhance their security and privacy. On the other hand, we can leverage security and privacy techniques to uncover vulnerabilities in machine learning systems as well as make them secure and privacy-preserving.

In the first part of this talk, I will discuss one of my representative works on leveraging (adversarial) machine learning for security and privacy. In particular, I will discuss how adversarial examples can serve as a deceptive mechanism to defend against inference attacks, a family of attacks that pose pervasive security and privacy threats on the Internet; the 2018 Facebook data privacy scandal is a real-world example of an inference attack. Adversarial examples are often viewed as offensive techniques that compromise the security of machine learning. Our work is the first to show that adversarial examples can instead be used for privacy protection.

In the second part of this talk, I will discuss one of my representative works on security and privacy for machine learning. Specifically, I will discuss how the intellectual property of a machine learning system can be compromised even when the system is deployed as a cloud service. In particular, we performed the first study showing that an attacker can steal the hyperparameters that were used to train a machine learning model. Moreover, I will discuss a defense that effectively protects the hyperparameters of some machine learning algorithms but is ineffective for others, highlighting the need for new defenses to protect the intellectual property of machine learning. Finally, I will briefly introduce my other selected projects and future research plans.
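As background for the first part, the core mechanism behind an adversarial example can be sketched in a few lines: a small, targeted perturbation of the input flips a classifier's prediction. The toy logistic-regression weights and perturbation budget below are illustrative assumptions, not the actual models or the defense presented in the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "attribute inference" classifier (weights are illustrative assumptions).
w = np.array([2.0, -1.0])
b = 0.5

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

x = np.array([1.0, 0.5])      # original input; the classifier infers attribute 1
assert predict(x) == 1

# FGSM-style perturbation: step in the direction that increases the loss for the
# true label (y = 1); for logistic regression, d(loss)/dx = (p - y) * w.
p = sigmoid(w @ x + b)
grad = (p - 1.0) * w
eps = 1.5                     # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad)

assert predict(x_adv) == 0    # the perturbation flips the inferred attribute
```

In a privacy-protection setting, the roles are reversed: the "classifier" is the attacker's inference model, and the perturbation is added by the defender to deceive it.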
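For the second part, the flavor of hyperparameter stealing can be illustrated on ridge regression: if an attacker observes the training data and the learned parameters, the first-order optimality condition is linear in the regularization hyperparameter, which can then be solved for in closed form. This is a minimal sketch under those assumptions (synthetic data, a single hyperparameter), not the general attack or the defense from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data and a "secret" regularization hyperparameter (illustrative).
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
secret_alpha = 3.7

# The victim trains ridge regression: w minimizes ||Xw - y||^2 + alpha * ||w||^2,
# whose exact solution satisfies (X^T X + alpha I) w = X^T y.
w = np.linalg.solve(X.T @ X + secret_alpha * np.eye(5), X.T @ y)

# The attacker observes X, y, and w.  The optimality condition
#     X^T (X w - y) + alpha * w = 0
# is linear in alpha, so a least-squares solve recovers it:
residual = X.T @ (y - X @ w)      # equals alpha * w at the optimum
stolen_alpha = (w @ residual) / (w @ w)

print(stolen_alpha)               # matches secret_alpha up to numerical error
```

The same over-determined structure — one unknown hyperparameter, many optimality equations — is what makes this style of attack possible; which algorithms admit an effective defense is the subject of the talk.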



Neil Gong is an Assistant Professor in the Department of Electrical and Computer Engineering at Iowa State University. He received his Ph.D. in Computer Science from the University of California, Berkeley in 2015 and his B.E. in Computer Science from the University of Science and Technology of China in 2010 (with the highest honor). His research interests are security and privacy, with a recent focus on the intersections between machine learning, security, and privacy. His research leverages a variety of techniques such as machine/deep learning, optimization, probabilistic graphical models, game theory, differential privacy, and program analysis. He received an NSF CAREER Award in 2018 and a Best Paper Award at the International Workshop on Systematic Approaches to Digital Forensics Engineering (SADFE) in 2018. His INFOCOM’17 paper was invited for fast tracking to IEEE Transactions on Network Science and Engineering (only 10 of the 292 accepted papers received this invitation). His work has been featured by popular media outlets such as WIRED, ScienceDaily, Slashdot, and Hacker News; for example, his 2012 paper in the Proceedings of the National Academy of Sciences (PNAS) was selected for WIRED’s “The Best Scientific Figures in 2012.”