


Ph.D. Defense: Kaleel Mahmood

October 20, 2021 @ 2:00 pm - 3:00 pm EDT

Title: Designing Deep Networks for Adversarial Robustness and Security

Ph.D. Candidate: Kaleel Mahmood

Major Advisor: Dr. Marten van Dijk

Associate Advisors:  Dr. Jinbo Bi, Dr. Shengli Zhou

Review Committee Members: Dr. Benjamin Fuller, Dr. Caiwen Ding

Date/Time: Wednesday, October 20, 2021, 2:00 PM – 3:00 PM EDT


Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mdd21a82abf7dcd76a1a97f70f9a3b749

Meeting number: 2621 624 8627

Password: JHxEyDAy863

Join by phone: +1-415-655-0002 US Toll

Access code: 2621 624 8627



The advent of adversarial machine learning fundamentally challenges the widespread adoption of Convolutional Neural Networks (CNNs), Vision Transformers, and other deep neural networks. By injecting small, visually imperceptible noise, an adversary can cause most machine learning classifiers to misclassify inputs with high probability. Defenses against adversarial machine learning attacks are of paramount importance to ensure such systems can be safely deployed in sensitive areas like health care and security.
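As a minimal sketch of the kind of perturbation described above, the Fast Gradient Sign Method (FGSM) adds an eps-bounded step in the sign of the loss gradient. The toy logistic-regression classifier below (the weights, bias, input, and epsilon are all illustrative assumptions, not values from this work) shows how such a step can flip a prediction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Return x plus an eps-bounded FGSM perturbation that increases the loss."""
    p = sigmoid(w @ x + b)            # predicted probability of class 1
    grad_x = (p - y_true) * w         # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)  # step in the sign of the gradient

# Illustrative toy model and input (assumed values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)      # → True  (clean input classified correctly)
print(sigmoid(w @ x_adv + b) > 0.5)  # → False (perturbed input misclassified)
```

A real attack targets a deep network rather than a linear model, and uses a much smaller epsilon per pixel, which is why the noise remains visually imperceptible.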

In this work, we focus on developing three key concepts in adversarial machine learning: defense analysis for CNNs, defense design for CNNs, and the robustness of the new Vision Transformer architecture. On the analysis side, we develop a new adaptive black-box attack and test recent defenses under this threat model. The defenses include Barrage of Random Transforms, ComDefend, Ensemble Diversity, Feature Distillation, The Odds are Odd, Error Correcting Codes, Distribution Classifier Defense, and k-Winners-Take-All. Next, we focus specifically on the black-box threat model and design a novel defense based on the concept of barrier zones (BARZ). We show that our barrier zone defense offers significant improvements in security over state-of-the-art defenses, including greater than 85% robust accuracy against black-box boundary attacks, transfer attacks, and the adaptive black-box attack. Lastly, we study the robustness of Vision Transformers, a new alternative to CNNs. We advance the field of adversarial machine learning for Vision Transformers by proposing a new attack on them as well as a new CNN/transformer hybrid defense.


