Ph.D. Proposal: Xia Xiao

June 12, 2020 @ 10:00 am - 11:00 am UTC-5

Title: The Speedup Techniques for Deep Neural Networks and Their Applications
Ph.D. Candidate: Xia Xiao
Major Advisor: Dr. Sanguthevar Rajasekaran
Associate Advisors: Dr. Jinbo Bi, Dr. Qian Yang.
Date/Time: Friday, June 12, 2020, 10:00 am - 11:00 am
Location: Online (WebEx)
Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mdccb83c6ad90de8268e939acf04612ef
Meeting number: 161 776 7819
Password: Fg97ZGfpsc3
Join by phone: +1-415-655-0002 US Toll
Access code: 161 776 7819

Abstract:

Deep neural networks (DNNs) have achieved significant success in many applications, such as computer vision, natural language processing, robotics, and self-driving cars. With the growing demand for more complex real-world applications, increasingly complicated neural networks have been proposed. However, high-capacity models suffer from two major problems: long training times and high inference latency, which make these networks hard to train and infeasible to deploy in latency-sensitive applications or on resource-limited devices. In this work, we propose multiple techniques to accelerate training and inference while also improving model performance.

The first technique we study is model parallelization for generative adversarial networks (GANs). Multiple orthogonal generators with shared memory are employed to capture the whole data distribution. This method not only improves model performance but also alleviates the mode collapse problem that is common in GANs. The second technique we investigate is automatic network pruning. To reduce the floating-point operations (FLOPs) to a suitable level without compromising accuracy, we propose a more generalizable and easy-to-use pruning method that prunes the network by optimizing a set of trainable auxiliary parameters instead of the original weights; a sketch of this idea appears below. Weakly coupled gradient update rules are proposed to keep these updates consistent with the pruning objective. The third technique is to remove the redundancy of a complicated model based on the needs of the application: we treat chemical reaction prediction as a translation problem and apply a low-capacity neural translation model to it.
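As a rough illustration of the second technique, the sketch below prunes output channels by learning per-channel auxiliary gates while leaving the convolution weights untouched during the pruning step. It is written in PyTorch; the sigmoid relaxation, the L1-style penalty, and the names GatedConv and sparsity_penalty are illustrative assumptions, not the candidate's actual formulation.

# A minimal sketch, assuming one sigmoid gate per output channel
# (hypothetical; not necessarily the method proposed in the talk).
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    """Convolution whose output channels are scaled by trainable gates."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.alpha = nn.Parameter(torch.zeros(out_ch))  # auxiliary parameters

    def forward(self, x):
        gate = torch.sigmoid(self.alpha)       # relax binary masks to (0, 1)
        return self.conv(x) * gate.view(1, -1, 1, 1)

    def sparsity_penalty(self):
        # Drives gates toward zero; channels with tiny gates can be pruned.
        return torch.sigmoid(self.alpha).sum()

layer = GatedConv(3, 16, 3)
out = layer(torch.randn(2, 3, 32, 32))
loss = out.pow(2).mean() + 1e-3 * layer.sparsity_penalty()
loss.backward()  # weights and gates can then follow separate
                 # ("weakly coupled") update rules

After training, channels whose gates fall below a threshold would be removed and the slimmed network fine-tuned, reducing FLOPs without retraining from scratch.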

Future work includes efficient neural architecture search (NAS) and its applications. For efficient NAS, we propose to combine distillation with Differentiable Architecture Search (DARTS) to stabilize and improve the search procedure; a sketch of the two ingredients follows. The application of interest will be Materials Genomics.
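To make the proposed combination concrete, here is a rough PyTorch sketch of its two ingredients: a DARTS-style mixed operation whose architecture parameters softly blend candidate operations, and a temperature-scaled distillation loss through which a teacher's soft targets could guide the search. The candidate-op list, the temperature, and the names MixedOp and distill_loss are illustrative assumptions, not the proposal's actual design.

# A rough sketch (illustrative only): DARTS relaxes the discrete choice of
# operation into a softmax over architecture parameters; a distillation term
# matches the searched network to a teacher's softened predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),  # candidate op: 3x3 conv
            nn.Conv2d(ch, ch, 5, padding=2),  # candidate op: 5x5 conv
            nn.Identity(),                    # candidate op: skip connection
        ])
        self.arch = nn.Parameter(torch.zeros(len(self.ops)))  # arch params

    def forward(self, x):
        weights = F.softmax(self.arch, dim=0)  # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

def distill_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened distributions.
    log_p = F.log_softmax(student_logits / T, dim=-1)
    q = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

cell = MixedOp(8)
y = cell(torch.randn(2, 8, 16, 16))  # search-time forward pass

In a full search, the architecture parameters would be updated on validation data with the distillation term added to the search loss, and each mixed operation would then collapse to its highest-weight candidate.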
