
Ph.D. Proposal: Shanglin Zhou

November 6, 2023 @ 3:00 pm - 4:00 pm EST

Title: Model Sparsification on Emerging Applications and Technologies

Ph.D. Candidate: Shanglin Zhou

Major Advisor: Dr. Caiwen Ding 

Co-Major Advisor: Dr. Krishna Pattipati

Associate Advisors: Dr. Cunxi Yu, Dr. Zhijie Shi

Committee Members: Dr. Mikhail A. Bragin, Dr. Sheida Nabavi

Date/Time: Monday, November 6th, 2023, 3:00 pm

Location: WebEx and In-person

In-person Location: HBL Instruction 1102

Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m43e0bc6edc32692b975e3b6460ea4de7

Meeting number: 2632 075 1185

Password: 3nwAqyxM2M5

Join by phone: +1-415-655-0002 US Toll

Access code: 26320751185

Abstract:

Deep neural networks (DNNs) achieve higher accuracy at the cost of larger model sizes, which increases storage needs and energy consumption. Model sparsification reduces DNN size and mitigates storage demands, but it can lower accuracy by removing important connections, making it challenging to strike the right balance between sparsity and accuracy. In addition, low-power, real-time DNN processing requires more efficient computing solutions than traditional processors can provide, and emerging technologies offer competitive advantages here.

In this thesis, we explore model sparsification on emerging applications and technologies. We propose an optimized approach using Surrogate Lagrangian Relaxation (SLR) for weight sparsification, addressing the time-consuming three-stage pipeline of training, hard-pruning, and retraining. SLR efficiently handles discrete weight sparsification with quadratic penalties, ensuring fast convergence. It yields model parameters closer to optimal values during training, achieving high accuracy even in the hard-pruning stage, and enabling quick accuracy recovery in retraining.
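For a flavor of how a penalty-based sparsification loop is typically structured, here is a minimal PyTorch-style sketch of an ADMM-like scheme with a quadratic penalty and a top-k projection. This is an illustrative assumption, not the proposal's actual SLR algorithm (whose surrogate multiplier updates and convergence guarantees differ); `project_topk` and `penalized_loss` are hypothetical names.

```python
import torch

def project_topk(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the largest-magnitude entries of w and zero the rest,
    so the result has roughly the requested sparsity."""
    k = max(1, int(w.numel() * (1.0 - sparsity)))
    threshold = w.abs().flatten().topk(k).values.min()
    return torch.where(w.abs() >= threshold, w, torch.zeros_like(w))

def penalized_loss(task_loss: torch.Tensor, params, Z, U, rho: float = 1e-3):
    """Task loss plus a quadratic penalty pulling each weight tensor W
    toward its sparse auxiliary copy Z (with scaled dual variable U)."""
    penalty = sum((rho / 2) * torch.sum((W - Z[n] + U[n]) ** 2)
                  for n, W in params.items())
    return task_loss + penalty

# One round of the alternating updates (ADMM-style bookkeeping; SLR itself
# replaces this with surrogate multiplier updates):
#   1. train W for a few epochs on penalized_loss
#   2. Z[n] = project_topk(W + U[n], sparsity)   # sparse projection
#   3. U[n] = U[n] + W - Z[n]                    # dual update
# After convergence, hard-pruning W to Z changes the weights only slightly,
# so accuracy recovers quickly in a short retraining phase.
```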

Model sparsification enables efficient deployment on resource-constrained devices, overcoming power and memory limitations and expanding the impact of DNNs on emerging technologies. We explore two scenarios. The first focuses on energy harvesting devices, which are self-powered and require dynamic power management. We propose EVE, an AutoML co-exploration framework that leverages SLR-based pattern sparsification; EVE discovers optimal multi-models with shared weights, enabling adaptation to dynamic environments while reducing memory requirements and staying within the on-chip budget. The second involves diffractive optical neural networks (DONNs), which offer fast computation and low energy consumption; however, inter-pixel interaction in the diffractive layers degrades accuracy at deployment. To address this, we propose a physics-aware DONN roughness optimization framework that incorporates roughness-modeling regularization and SLR-based sparsification to reduce sharp phase changes in the diffractive layers. Additionally, 2π-periodic optimization is used to reduce phase-mask roughness while preserving DONN accuracy, as sketched below.
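As a toy illustration of the 2π-periodic idea, the sketch below shifts each phase value by a multiple of 2π toward its neighbor, reducing a total-variation roughness proxy while leaving the optical response exp(i·phase) unchanged. The function names and the roughness measure are assumptions for illustration, not the proposal's actual formulation.

```python
import numpy as np

def total_variation(phase: np.ndarray) -> float:
    """Roughness proxy: sum of absolute phase differences between neighbors."""
    return float(np.abs(np.diff(phase)).sum())

def smooth_mod_2pi(phase: np.ndarray) -> np.ndarray:
    """Greedily shift each phase by a multiple of 2*pi so it lands as close
    as possible to its left neighbor. Since exp(1j*phase) is 2*pi-periodic,
    the optical response of the mask is unchanged."""
    out = phase.copy()
    for i in range(1, len(out)):
        k = np.round((out[i - 1] - out[i]) / (2.0 * np.pi))
        out[i] += 2.0 * np.pi * k
    return out

rng = np.random.default_rng(0)
# A 1D phase profile with values scattered over many 2*pi periods,
# e.g. after unconstrained optimization.
phase = rng.uniform(-10 * np.pi, 10 * np.pi, size=64)
print(total_variation(phase), total_variation(smooth_mod_2pi(phase)))
```

After the greedy pass, every adjacent pair differs by at most π, so the total variation can only drop; the same periodicity argument underlies choosing smooth phase representatives in a 2D mask.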
