
pgd-attack

Here are 14 public repositories matching this topic...

Exploring the concept of "adversarial attacks" on deep learning models, specifically focusing on image classification using PyTorch. Implements and demonstrates the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks against a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) trained on the MNIST dataset.

  • Updated Apr 27, 2025
  • Python
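The PGD attack this repository describes can be sketched as iterated FGSM steps, each followed by projection back into an L-infinity ball around the original input. A minimal PyTorch sketch, assuming a toy linear classifier and random MNIST-shaped data in place of the repository's trained CNN/RNN (all names and hyperparameters here are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained classifier described above (assumption, not the repo's model).
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """PGD: repeated gradient-sign ascent steps on the loss, each projected
    back into the L-infinity ball of radius eps around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project onto eps-ball
            x_adv = x_adv.clamp(0, 1)                         # keep valid pixel range
    return x_adv.detach()

x = torch.rand(4, 1, 28, 28)            # fake MNIST-shaped batch
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
```

The projection step is what distinguishes PGD from simply repeating FGSM: it guarantees the final perturbation stays within the eps budget no matter how many steps are taken.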

Evaluated the robustness of a deep face recognition model (InceptionResNetV1) against adversarial attacks. Tested multiple attack types, analyzed transferability, and implemented a defense system using specialized detectors to improve security while preserving accuracy.

  • Updated Jan 16, 2026
  • Python
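The transferability analysis this repository mentions amounts to crafting adversarial examples against one model and measuring how much they degrade a different model. A minimal sketch with single-step FGSM and two toy linear models standing in for the surrogate and the target face recognition network (all models, shapes, and names here are illustrative assumptions, not the repo's code):

```python
import torch
import torch.nn as nn

# Hypothetical surrogate and target models (assumption; the repo uses InceptionResNetV1).
torch.manual_seed(0)
model_a = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 5))  # surrogate: attacked directly
model_b = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 5))  # target: receives transferred inputs
loss_fn = nn.CrossEntropyLoss()

def fgsm(model, x, y, eps=0.1):
    """Single-step FGSM: move x by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

x = torch.rand(32, 1, 8, 8)
y = torch.randint(0, 5, (32,))
x_adv = fgsm(model_a, x, y)             # crafted against the surrogate only

def accuracy(model, inputs):
    return (model(inputs).argmax(1) == y).float().mean().item()

# Transferability check: how does the *target* model fare on inputs it never saw attacked?
clean_acc = accuracy(model_b, x)
transfer_acc = accuracy(model_b, x_adv)
```

Comparing `clean_acc` with `transfer_acc` on a real model pair quantifies transferability; a defense detector would then be evaluated on how well it flags `x_adv` without rejecting clean inputs.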
