Ph.D. Proposal: Yijue Wang

December 17, 2021 @ 1:00 pm - 2:00 pm EST

Title: Privacy and Efficiency Problems in Deep Learning

Ph.D. Candidate: Yijue Wang

Major Advisor: Dr. Sanguthevar Rajasekaran

Associate Advisors: Dr. Caiwen Ding, Dr. Suining He

Review Committee Members: Dr. Derek Aguiar, Dr. Dongjin Song, Dr. Qian Yang

Date/Time: Friday, December 17, 2021, 1:00 P.M. – 2:00 P.M. EST

Location: WebEx Online 

Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m2a3feacd42d523ad146aa9ff98b04b1f

Meeting number: 2623 132 0775

Password: QHgp7t8d6uZ

Join by phone: +1-415-655-0002 US Toll

Access code: 2623 132 0775

Abstract:

Advances in deep learning have enabled high accuracy in classification, recommendation, and natural language processing. The success of modern deep neural networks (DNNs) depends mainly on the availability of advanced computing power and large amounts of data. However, there are two main challenges. (i) DNN-based services (e.g., machine learning as a service, MLaaS) raise privacy concerns over sensitive data such as patient treatment records: such services can leak information about the training data used to build the back-end models, for example through membership inference attacks (MIA) or leakage from gradients (LG). (ii) DNN models are evolving rapidly to satisfy the diverse characteristics of a broad range of applications. As DNNs grow deeper and their model sizes larger, the heavy computation and large models introduce substantial data movement, limiting their ability to provide a user-friendly experience on mobile devices.
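
To illustrate how such leakage can be probed, here is a minimal loss-thresholding membership inference baseline, sketched in PyTorch (an assumed framework; the function name and threshold are hypothetical, and practical attacks are typically stronger, e.g., shadow-model attacks):

import torch
import torch.nn.functional as F

def mia_loss_threshold(model, x, y, threshold):
    # Guess 'training member' when the per-example loss is below a
    # calibrated threshold: over-fitted models assign unusually low loss
    # to examples they were trained on, which is the signal MIA exploits.
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold  # True -> predicted training-set member

In practice, the threshold would be calibrated on held-out data known not to be in the training set.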

In this work, I investigate these challenges and develop mechanisms against them. First, I study membership inference attacks on DNNs in computer vision and natural language processing. Since over-fitting is one of the main causes of this vulnerability, I envision that weight pruning can help defend DNNs against MIA while also reducing model storage and computation. I propose a pruning algorithm and show that it can find a subnetwork that prevents privacy leakage from MIA while achieving accuracy competitive with the original DNN. Regarding the leakage-from-gradients challenge, my investigation reveals that existing techniques can only recover training data under a uniform weight distribution and fail under other weight initializations (e.g., a normal distribution) or during the training stage. I provide an analysis of how the weight distribution affects the recovery of training data from gradients. Based on this analysis, I propose a self-adaptive privacy attack from gradients: a general gradient attack algorithm that can recover the training data under any weight initialization and in any training phase. The algorithm exploits not only the gradients themselves but also their variance. Specifically, I exploit the variance of the gradient distribution and the DNN architecture to design an adaptive Gaussian kernel of the gradient difference as a distance measure.
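
To make the pruning idea concrete, here is a minimal magnitude-pruning sketch in PyTorch (an assumed framework; the proposal does not name one). It uses the built-in torch.nn.utils.prune utility, and the 80% sparsity level is an arbitrary placeholder, not the proposal's algorithm:

import torch
import torch.nn.utils.prune as prune

def magnitude_prune(model: torch.nn.Module, amount: float = 0.8) -> torch.nn.Module:
    # Zero out the smallest-magnitude weights in every linear/conv layer.
    # Shrinking the network this way curbs the over-fitting signal that
    # membership inference attacks rely on.
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

Likewise, a minimal gradient-matching loop in the spirit of leakage-from-gradients attacks is sketched below. The variance-based kernel bandwidth and the known label y are simplifying assumptions standing in for the adaptive kernel described above, not its exact formulation:

import torch
import torch.nn.functional as F

def reconstruct_from_gradients(model, true_grads, x_shape, y, steps=300, lr=0.1):
    # Optimize a randomly initialized dummy input so that the gradients it
    # induces match the gradients observed from the victim.
    dummy_x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy_x], lr=lr)
    # Kernel bandwidth taken from the variance of the observed gradients
    # (an assumption standing in for the proposal's adaptive choice).
    sigma2 = torch.cat([g.flatten() for g in true_grads]).var().detach()
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), y)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gaussian kernel of the gradient difference as the distance:
        # minimize 1 - exp(-||g - g*||^2 / (2 * sigma^2)).
        sq_diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        dist = 1.0 - torch.exp(-sq_diff / (2.0 * sigma2))
        dist.backward()
        opt.step()
    return dummy_x.detach()

Because the bandwidth adapts to the observed gradient statistics rather than being fixed, a loop like this does not depend on the weights having been drawn from any particular distribution, which is the intuition behind the attack's generality.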
