The code for prior GLA (gradient leakage attack) works

  1. iDLG: Improved Deep Leakage from Gradients
  2. Inverting Gradients - How easy is it to break privacy in federated learning?
  3. Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
  4. Exploiting Unintended Feature Leakage in Collaborative Learning
  5. Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients
  6. Gradient Inversion with Generative Image Prior
  7. Gradient Obfuscation Gives a False Sense of Security in Federated Learning
  8. GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning
  9. LAMP: Extracting Text from Gradients with Language Model Priors
  10. R-GAP: Recursive Gradient Attack on Privacy
  11. Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
  12. User-Level Label Leakage from Gradients in Federated Learning
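As a rough illustration of the kind of leakage these attacks exploit, below is a minimal numpy sketch of one well-known analytic fact that several of the listed works (e.g. Inverting Gradients, R-GAP, Robbing the Fed) build on: for a fully connected layer z = W x + b, the gradients satisfy dL/dW = (dL/dz) xᵀ and dL/db = dL/dz, so the private input x can be read off directly as a ratio of the two gradients. The shapes and the toy downstream loss here are illustrative assumptions, not taken from any of the repositories above.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=5)        # victim's private input
W = rng.normal(size=(3, 5))   # shared layer weights
b = rng.normal(size=3)        # shared layer bias

z = W @ x + b                 # forward pass through the layer
dz = z                        # toy downstream loss L = 0.5 * ||z||^2, so dL/dz = z
dW = np.outer(dz, x)          # dL/dW = (dL/dz) x^T  -- what the server observes
db = dz                       # dL/db = dL/dz        -- what the server observes

# Attacker sees only (dW, db): any row i with db[i] != 0 reveals x exactly,
# since dW[i] = db[i] * x.
i = int(np.argmax(np.abs(db)))
x_rec = dW[i] / db[i]

print(np.allclose(x_rec, x))  # prints True: input recovered exactly
```

The same per-row ratio underlies the modified-model attacks in the list, which rig the network so that such a layer receives the raw input.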