The code for prior gradient leakage attack (GLA) works is listed below; a minimal sketch of the gradient-matching formulation many of them build on follows the list.
- iDLG: Improved Deep Leakage from Gradients
- Inverting Gradients - How easy is it to break privacy in federated learning?
- Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
- Exploiting Unintended Feature Leakage in Collaborative Learning
- Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients
- Gradient Inversion with Generative Image Prior
- Gradient Obfuscation Gives a False Sense of Security in Federated Learning
- GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning
- LAMP: Extracting Text from Gradients with Language Model Priors
- R-GAP: Recursive Gradient Attack on Privacy
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
- User-Level Label Leakage from Gradients in Federated Learning
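Many of the listed attacks extend the same basic gradient-matching idea popularized by DLG/iDLG: optimize dummy data (and a dummy label) so that the gradient it induces on the model matches the gradient shared by the client. The PyTorch sketch below illustrates only that generic formulation; the toy model, input shape, and optimizer settings are assumptions for demonstration, and it is not the code of any listed paper.

```python
# Minimal, illustrative gradient-matching attack (DLG/iDLG-style).
# Toy setup only: model, shapes, and hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model and a single "private" training example.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
criterion = nn.CrossEntropyLoss()

# The gradient a client would share in federated learning.
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Attacker optimizes dummy data and a soft dummy label so that the
# gradient they induce matches the shared gradient.
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft label, as in DLG
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    # Cross-entropy with a soft (softmax-ed) dummy label.
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, dim=-1)
        * (-torch.log_softmax(model(x_dummy), dim=-1))
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # L2 distance between dummy and shared gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)

print("Recovered label:", torch.softmax(y_dummy, dim=-1).argmax().item())
```

On top of (or instead of) this plain optimization, the listed works add stronger ingredients such as generative image priors, analytic or recursive recovery, label restoration, and attacks tailored to compressed or obfuscated gradients.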