Deployable on Mobile Devices: Renmin University of China and Collaborators Propose RID, a New Model for Protecting Personal Images

Anti-customization protection for an image takes only tens of milliseconds.


Source: 机器之心

Real-time Identity Defenses (RID): Protecting Images from Malicious Personalization of Diffusion Models

This article summarizes a new model, RID, developed by researchers from Renmin University of China and Sea AI Lab for real-time protection of personal images against malicious personalization of diffusion models. The model addresses the high computational cost and latency of existing image-protection methods.

1. The Problem: Malicious Personalization of Diffusion Models

Recent advances in diffusion models allow for personalized image generation: a user provides a few images of a specific concept (e.g., a person's face) to fine-tune a pre-trained diffusion model, which can then generate new images of that concept. This technology poses a privacy risk, however, as malicious actors could use publicly available photos to create fake images of a person. Existing protection methods rely on gradient-based optimization to add perturbations to the original images, resulting in high computational costs (minutes to tens of minutes) and significant memory consumption.
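To make the cost of those optimization-based defenses concrete, here is a minimal, hypothetical PGD-style sketch in the spirit of methods such as AdvDM: every image is protected by many backward passes through the diffusion model, which is why protection takes minutes. The `q_sample` helper, the epsilon-prediction interface, and all hyper-parameter values are illustrative assumptions, not the exact procedure of any specific method.

```python
# Hypothetical sketch of a gradient-based protection baseline (in the spirit
# of AdvDM / Anti-DreamBooth): ascend the diffusion training loss with many
# signed-gradient steps on a bounded perturbation. Every step needs a full
# backward pass through the diffusion model, hence the minutes-long runtime.
import torch

def diffusion_loss(model, x0, n_timesteps=1000):
    """Epsilon-prediction loss E||eps - eps_theta(x_t, t)||^2 at a random t.
    `model.q_sample` (forward noising) and `model(x_t, t)` (noise prediction)
    are assumed interfaces, not a specific library's API."""
    t = torch.randint(0, n_timesteps, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = model.q_sample(x0, t, noise)
    return ((model(x_t, t) - noise) ** 2).mean()

def protect_image_pgd(model, x, eps=8 / 255, step=1 / 255, n_iters=100):
    """Return a protected copy of image batch `x` (pixel values in [0, 1])."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_iters):
        loss = diffusion_loss(model, x + delta)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()          # ascend the loss
            delta.clamp_(-eps, eps)                    # L-infinity budget
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep a valid image
        delta.grad.zero_()
    return (x + delta).detach()
```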

2. RID: A Real-time Solution

RID takes a different approach: a small pre-trained network generates the perturbation for an input image in a single forward pass. This allows for real-time protection (tens of milliseconds) and enables deployment on mobile devices. The core of RID is a novel training scheme called Adversarial Score Distillation Sampling (Adv-SDS), inspired by DreamFusion's score distillation sampling (SDS). Whereas DreamFusion minimizes the SDS loss to generate realistic images, RID maximizes it, so that a diffusion model personalized on the perturbed image fails to reproduce the protected identity.
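The sketch below makes the sign flip concrete, assuming the common DreamFusion formulation in which the SDS gradient with respect to the image is w(t) * (eps_theta(x_t, t) - eps) and the score network is frozen. The generator `gen`, the DDPM-style `alphas_cumprod` schedule, and the weighting w(t) = 1 - alpha_bar_t are assumptions made for illustration, not the paper's exact implementation.

```python
# Minimal sketch of Adversarial Score Distillation Sampling (Adv-SDS):
# DreamFusion descends the SDS gradient; RID-style training ascends it so the
# protected image becomes a poor training target for personalization.
import torch

def sds_grad(diffusion, alphas_cumprod, x, n_timesteps=1000):
    """SDS gradient w(t) * (eps_theta(x_t, t) - eps); the frozen diffusion
    model is never back-propagated through (as in DreamFusion)."""
    b = x.shape[0]
    t = torch.randint(0, n_timesteps, (b,), device=x.device)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * eps      # forward noising
    with torch.no_grad():
        eps_pred = diffusion(x_t, t)                   # frozen teacher
    return (1 - a_t) * (eps_pred - eps)                # common w(t) choice

def adv_sds_step(gen, diffusion, alphas_cumprod, x_clean, optimizer):
    """One update of the perturbation generator: gradient *ascent* on SDS."""
    x_protected = x_clean + gen(x_clean)
    grad = sds_grad(diffusion, alphas_cumprod, x_protected)
    # The surrogate loss -(grad * x_protected) has gradient -grad with respect
    # to the protected image, i.e. it pushes the image up the SDS objective.
    loss = -(grad * x_protected).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once the generator is trained, protection at inference time is a single forward pass, x_protected = x + gen(x), which is what makes tens-of-millisecond, on-device protection plausible.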

3. Adv-SDS and the RID Architecture

To prevent the optimization from getting stuck in poor local optima, RID combines Adv-SDS with a regression loss. A pre-computed dataset of clean images and their corresponding perturbations (generated with methods such as AdvDM or Anti-DB) serves as the regression target during training. The network architecture is a Diffusion Transformer (DiT) adapted to remove the conditional input, so that it focuses solely on perturbation generation. A tanh activation followed by scaling constrains the size of the generated perturbations.
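Continuing the previous sketch, the snippet below illustrates the two remaining ingredients: a tanh-plus-scaling output head that bounds the perturbation, and a regression loss towards pre-computed perturbations added to the Adv-SDS term. The small convolutional backbone is only a stand-in for the unconditional DiT used by RID, and the 8/255 budget and weight `lam` are illustrative assumptions.

```python
# Minimal sketch of an RID-style generator head and combined training loss.
# A tiny CNN stands in for the unconditional DiT backbone; only the tanh
# bounding and the regression + Adv-SDS combination are the point here.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.backbone = nn.Sequential(            # stand-in for the DiT
            nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the raw output in (-1, 1); scaling by eps enforces the
        # per-pixel perturbation budget.
        return self.eps * torch.tanh(self.backbone(x))

def rid_loss(delta_pred, delta_target, adv_sds_loss, lam=1.0):
    """Regression to pre-computed perturbations (e.g. from AdvDM / Anti-DB)
    plus the Adv-SDS term; the regression anchor helps keep training away
    from poor local optima."""
    regression = ((delta_pred - delta_target) ** 2).mean()
    return regression + lam * adv_sds_loss
```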

4. Experimental Results and Evaluation

RID was trained on a filtered subset of the VGGFace2 dataset and evaluated on CelebA-HQ. The evaluation fine-tunes diffusion models with different personalization methods (Textual Inversion, TI+LoRA, and full-parameter fine-tuning) on protected and unprotected images. Results show that RID effectively protects images from personalization while running at 8.33 images per second on a single GPU. Although its quantitative metrics are slightly lower than those of optimization-based methods, qualitative analysis confirms effective protection across personalization techniques, pre-trained models, and noise levels. RID is also robust to black-box attacks and post-processing manipulations.

5. Conclusion and Future Work

RID demonstrates robust protection with SD-series (Stable Diffusion) models. Future work includes integrating other DiT architectures into Adv-SDS for improved robustness and designing more benign perturbations, such as makeup-style alterations.

