|
Private Gradient Estimation is Useful for Generative Modeling
Bochao Liu, Pengju Wang, Weijia Guo, Yong Li, Liansheng Zhuang, Weiping Wang and Shiming Ge*
ACM MM, 2024, Oral  
pdf
This paper introduces a private generative modeling approach that combines Hamiltonian dynamics with sliced score matching and achieves differential privacy by adding noise during gradient estimation. By progressively approximating the manifold of the private dataset, the model generates data at up to 256×256 resolution, and extensive experiments validate its effectiveness.
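As a rough illustration of the privacy mechanism described above (a minimal sketch, not the paper's implementation), differentially private gradient estimation typically clips each per-example gradient and adds Gaussian noise; the function name and constants below are illustrative:

```python
import torch

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Gaussian-mechanism sanitization of per-example gradients (sketch).

    Clipping bounds each example's contribution (sensitivity); noise with
    standard deviation clip_norm * noise_multiplier then yields a DP
    guarantee when tracked by a privacy accountant (not shown here).
    """
    norms = per_example_grads.norm(dim=1, keepdim=True)           # (batch, 1)
    clipped = per_example_grads * (clip_norm / norms).clamp(max=1.0)
    noise = torch.randn(per_example_grads.size(1)) * clip_norm * noise_multiplier
    return (clipped.sum(dim=0) + noise) / per_example_grads.size(0)

grads = torch.randn(32, 10)            # 32 per-example gradients of dim 10
print(private_gradient(grads).shape)   # torch.Size([10])
```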
|
|
Learning Differentially Private Diffusion Models via Stochastic Adversarial Distillation
Bochao Liu, Pengju Wang and Shiming Ge*
ECCV, 2024  
pdf
/
code
This paper trains a private diffusion model via a stochastic adversarial distillation method.
Specifically, we first train a diffusion model as a teacher and then train a student by distillation, achieving differential privacy by adding noise to the gradients that flow from the other models into the student.
To improve generation quality, we introduce a discriminator that distinguishes whether an image comes from the teacher or the student, which forms the adversarial training.
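A minimal sketch of one such training step, assuming image generators teacher/student and a logit-valued discriminator (all names, and the batch-level gradient sanitization shown here, are illustrative rather than the paper's actual protocol):

```python
import torch
import torch.nn.functional as F

def adversarial_distillation_step(teacher, student, discriminator, z,
                                  opt_student, opt_disc,
                                  clip_norm=1.0, noise_multiplier=1.0):
    """One sketched step: the discriminator separates teacher images from
    student images, while the student mimics the teacher and tries to fool
    the discriminator; gradients entering the student are clipped and noised."""
    with torch.no_grad():
        x_teacher = teacher(z)              # teacher samples, no grad needed
    x_student = student(z)

    # discriminator update: teacher outputs are "real", student outputs "fake"
    real = discriminator(x_teacher)
    fake = discriminator(x_student.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
              F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # student update: match the teacher and fool the discriminator
    adv = discriminator(x_student)
    g_loss = (F.mse_loss(x_student, x_teacher) +
              F.binary_cross_entropy_with_logits(adv, torch.ones_like(adv)))
    opt_student.zero_grad()
    g_loss.backward()
    # sanitize gradients reaching the student (batch-level here for brevity;
    # a formal DP guarantee requires per-example clipping and accounting)
    torch.nn.utils.clip_grad_norm_(student.parameters(), clip_norm)
    for p in student.parameters():
        if p.grad is not None:
            p.grad += torch.randn_like(p.grad) * clip_norm * noise_multiplier
    opt_student.step()
```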
|
|
Model Conversion via Differentially Private Data-Free Distillation
Bochao Liu, Pengju Wang, Shikun Li, Dan Zeng and Shiming Ge*
IJCAI, 2023  
pdf
/
code
This paper proposes a learning approach termed differentially private data-free distillation (DPDFD) for model conversion, which converts a pretrained model (teacher) into its privacy-preserving counterpart (student) via an intermediate generator, without access to the training data. The approach achieves state-of-the-art performance in differentially private learning, and we theoretically prove that DPDFD guarantees both differential privacy and convergence.
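A minimal sketch of the data-free idea, assuming classifier-style teacher/student models and an already-trained generator (all names are illustrative, not the DPDFD implementation):

```python
import torch
import torch.nn.functional as F

def data_free_distillation_step(generator, teacher, student, opt_student,
                                batch_size=64, z_dim=100,
                                clip_norm=1.0, noise_multiplier=1.0):
    """The generator synthesizes inputs from noise, so no training data is
    touched; the student matches the private teacher's predictions, and the
    gradient crossing from teacher to student is clipped and noised."""
    z = torch.randn(batch_size, z_dim)
    x = generator(z)                        # synthetic inputs only
    with torch.no_grad():
        t_logits = teacher(x)               # private teacher's soft labels
    s_logits = student(x)
    loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1), reduction="batchmean")
    opt_student.zero_grad()
    loss.backward()
    # sanitize before the update (batch-level clipping shown for brevity)
    torch.nn.utils.clip_grad_norm_(student.parameters(), clip_norm)
    for p in student.parameters():
        if p.grad is not None:
            p.grad += torch.randn_like(p.grad) * clip_norm * noise_multiplier
    opt_student.step()
    return loss.item()
```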
|
|
Privacy-Preserving Student Learning with Differentially Private Data-Free Distillation
Bochao Liu, Jianghu Lu, Pengju Wang, Junjie Zhang, Dan Zeng, Zhenxing Qian, Shiming Ge*
MMSP, 2022 (Best Student Paper Honorable Mention Award)  
pdf
This paper presents an effective teacher-student learning approach to training privacy-preserving deep models via differentially private data-free distillation. The main idea is to generate synthetic data and use it to learn a student that mimics the ability of a teacher well trained on the private data.
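The student typically mimics the teacher through a soft-label distillation loss; a generic building block of this kind (the standard Hinton-style objective, not necessarily this paper's exact loss) looks like:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label distillation: the student matches the teacher's
    temperature-softened output distribution; T*T rescales gradients."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
```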
|
|
A Network Model Training Method with Multi-Party Participation and No Data Sharing (一种多方参与数据不共享的网络模型训练方法)
Weiping Wang, Shiming Ge, Bochao Liu, Chenyu Li
ZL202010940180.7  
|
|
A Privacy-Preserving Model Training Method and Apparatus Based on a Small Amount of Public Data (一种基于少量公共数据的隐私模型训练方法及装置)
Shiming Ge, Haolin Liu, Bochao Liu, Weiping Wang
CN202011065611.6  
|
|