
Ph.D. Proposal: Kaleel Mahmood

August 23 @ 2:00 pm - 3:00 pm EDT


Title: Designing Deep Networks for Adversarial Robustness and Security

Ph.D. Candidate: Kaleel Mahmood

Major Advisor: Dr. Marten van Dijk

Associate Advisors: Dr. Jinbo Bi, Dr. Shengli Zhou

Review Committee Members: Dr. Benjamin Fuller, Dr. Caiwen Ding

Date/Time: Monday, Aug 23rd, 2021, 2:00 P.M. – 3:00 P.M. EDT


Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m047403d1f5f9b80009f528d9d6f72864

Meeting number: 120 189 5741

Password: AiqjgDnp273

Join by phone: +1-415-655-0002 US Toll

Access code: 120 189 5741



The advent of adversarial machine learning fundamentally challenges the widespread adoption of Convolutional Neural Networks (CNNs), Vision Transformers, and other deep neural networks. By injecting a small, visually imperceptible noise, an adversary can cause most machine learning classifiers to misclassify inputs with high probability. Defenses against adversarial machine learning attacks are therefore of paramount importance to ensure such systems can be safely deployed in sensitive areas like health care and security.
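To make the attack idea concrete, here is a minimal sketch of a gradient-based adversarial perturbation in the style of the Fast Gradient Sign Method, applied to a toy logistic-regression classifier. The model, its weights, and the epsilon budget are illustrative assumptions for this sketch only; they are not taken from the proposal, which studies deep CNNs and Vision Transformers.

```python
# Hedged sketch: FGSM-style adversarial perturbation on a toy linear model.
# The weights w, bias b, input x, and eps below are invented for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Perturb x by eps in the sign of the cross-entropy loss gradient.

    Each feature moves by at most eps, so the change is small per
    coordinate, yet it is chosen to maximally increase the loss.
    """
    p = sigmoid(w @ x + b)            # predicted probability of class 1
    grad_x = (p - y_true) * w         # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # bounded (L-infinity) perturbation

# Toy example: a clean input correctly classified as class 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, 0.5, 1.0])         # w @ x + b = 1.5, so p > 0.5 (class 1)
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.8)

print(sigmoid(w @ x + b) > 0.5)       # True: clean input classified as 1
print(sigmoid(w @ x_adv + b) > 0.5)   # False: perturbed input is misclassified
```

The same principle scales to deep networks, where the gradient with respect to the input is obtained by backpropagation; for image classifiers, an eps that flips the label can be far below the threshold of human perception.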

In this work, we develop three key threads in adversarial machine learning: defense analysis for CNNs, defense design for CNNs, and the robustness of the new Vision Transformer architecture. On the analysis side, we develop a new adaptive black-box attack and test recent defenses under this threat model. The defenses include Barrage of Random Transforms, ComDefend, Ensemble Diversity, Feature Distillation, The Odds are Odd, Error Correcting Codes, Distribution Classifier Defense, and K-Winner Take All. Next, we focus specifically on the black-box threat model and design a novel defense based on the concept of barrier zones (BARZ). We show that our barrier-zone defense offers significant improvements in security over state-of-the-art defenses, including greater than 85% robust accuracy against black-box boundary attacks, transfer attacks, and the adaptive black-box attack. Lastly, we study the robustness of Vision Transformers, a new alternative to CNNs. We show how Vision Transformers advance the field of adversarial machine learning by proposing a new attack on Vision Transformers as well as a new CNN/transformer hybrid defense.

