Title: Privacy and Efficiency in Deep Learning
Ph.D. Candidate: Yijue Wang
Major Advisor: Dr. Sanguthevar Rajasekaran
Associate Advisors: Dr. Caiwen Ding, Dr. Suining He
Committee Members: Dr. Derek Aguiar, Dr. Dongjin Song, Dr. Qian Yang
Date/Time: Wednesday, October 25th, 2023, 10:00 AM EDT
Location: WebEx
Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m01d5b130b6e5bf17fde093c9c9866d1c
Meeting number: 2621 482 1399
Password: 2ASvdu2pRC3
Abstract:
Recent advances in deep learning have achieved remarkable results in fields such as image classification, recommendation, and natural language processing. However, two pressing challenges have emerged. First, deep neural networks (DNNs) carry inherent privacy risks: sensitive training data can be exposed through membership inference attacks and gradient attacks. Second, the growing complexity and size of DNN models for diverse applications impose heavy computational and data-transfer burdens, particularly on mobile devices. Striking a balance between privacy and efficiency is therefore of paramount importance.
In my thesis, I address both challenges. First, I counter membership inference attacks on DNNs used in computer vision and natural language processing by introducing a novel pruning algorithm that enhances privacy while preserving accuracy. Second, I investigate data leakage from gradients during training by introducing the Self-Adaptive Privacy Attack from Gradients, an algorithm that can recover training data regardless of weight initialization or training phase. Finally, I address the efficiency of DNNs in the materials science domain by transferring knowledge from a small-capacity model to a large-capacity model.