Auditing Differentially Private Machine Learning

Monday, 09/23/2024 4:00pm to 5:30pm
LGRC A215
Security Seminar

The Cybersecurity Institute is pleased to host this security talk, Auditing Differentially Private Machine Learning.

Abstract: How can researchers use sensitive datasets for machine learning and statistics without compromising the privacy of the individuals who contribute their data?  In this talk I will describe some of my work on differential privacy, a rigorous framework for answering this question.  In the past decade, differential privacy has gone from largely theoretical to widely deployed.  These deployments come with a rigorous proof that the algorithm satisfies a strong qualitative privacy guarantee, but these stylized mathematical guarantees can both overestimate and underestimate the privacy afforded by the algorithm in a real deployment.  In this talk I will motivate and describe my ongoing body of work on using empirical auditing of differentially private machine learning algorithms as a complement to the theory of differential privacy.  The talk will discuss how auditing builds on the rich theory and practice of membership-inference attacks and describe our work on auditing differentially private stochastic gradient descent.
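
As a rough illustration of the auditing idea described in the abstract, the sketch below shows one common way an empirical privacy audit can be framed: train a model many times with and without a "canary" example, run a membership-inference attack, and convert the attack's true/false positive rates into an empirical lower bound on the privacy parameter epsilon.  This is a minimal sketch of the general approach, not the speaker's specific method; the helpers train_fn (e.g., a DP-SGD training routine) and attack_fn (a membership-inference attack) are hypothetical placeholders.

```python
import math
import random

def audit_epsilon(train_fn, attack_fn, dataset, canary, trials=1000, delta=1e-5):
    """Estimate an empirical lower bound on epsilon by repeatedly training
    with and without a canary example and measuring how well a
    membership-inference attack distinguishes the two cases.

    train_fn(data) -> model and attack_fn(model, canary) -> bool are
    hypothetical placeholders supplied by the auditor."""
    n_in = n_out = tp = fp = 0
    for _ in range(trials):
        include = random.random() < 0.5
        data = dataset + [canary] if include else dataset
        model = train_fn(data)            # e.g., one run of DP-SGD training
        guess = attack_fn(model, canary)  # True if attack says canary was used
        if include:
            n_in += 1
            tp += int(guess)
        else:
            n_out += 1
            fp += int(guess)
    # (eps, delta)-DP implies TPR <= e^eps * FPR + delta, so any observed
    # attack performance yields the lower bound eps >= ln((TPR - delta) / FPR).
    tpr = tp / max(n_in, 1)
    fpr = max(fp / max(n_out, 1), 1e-12)  # avoid division by zero
    return math.log(max(tpr - delta, 1e-12) / fpr)
```

In practice, a rigorous audit would replace the raw rate estimates with high-confidence intervals (e.g., Clopper-Pearson bounds) before converting them to an epsilon bound, since sampling noise over a finite number of trials can otherwise overstate the attack's success.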

Bio: Jonathan Ullman is an Associate Professor at the Khoury College of Computer Sciences at Northeastern University.  Before joining Northeastern, he received his PhD from Harvard in 2013, and in 2014 was a Junior Fellow in the Simons Society of Fellows.  His research centers on privacy for machine learning and statistics, and its surprising connections to topics like statistical validity, robustness, cryptography, and fairness.  He has been recognized with an NSF CAREER award, research awards from Google and Apple, and the Ruth and Joel Spira Outstanding Teacher Award.

Adam O'Neill, Cybersecurity Institute