CVPR 2025
Visual-Instructed Degradation Diffusion for All-in-One Image Restoration

Wenyang Luo, Haina Qin, Zewen Chen, Libin Wang, et al. (co-first authors)

TL;DR: We propose Defusion, an all-in-one image restoration framework that uses visual instruction-guided degradation diffusion to handle diverse and mixed degradations with a single, generalizable model.

Quick Read: Image restoration tasks like deblurring, denoising, and dehazing usually require distinct models for each degradation type, restricting their generalization to real-world scenarios with mixed or unknown degradations. In this work, we propose Defusion, a novel all-in-one image restoration framework that utilizes visual instruction-guided degradation diffusion. Unlike existing methods that rely on task-specific models or ambiguous text-based priors, Defusion constructs explicit visual instructions that align with the visual degradation patterns. These instructions are grounded by applying degradations to standardized visual elements, capturing intrinsic degradation features while remaining agnostic to image semantics. Defusion then uses these visual instructions to guide a diffusion-based model that operates directly in the degradation space, where it reconstructs high-quality images by denoising the degradation effects with enhanced stability and generalizability. Comprehensive experiments demonstrate that Defusion outperforms state-of-the-art methods across diverse image restoration tasks, including complex and real-world degradations.
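
To make the pipeline concrete, below is a minimal sketch of the two ideas as toy PyTorch code. Everything here is an illustrative assumption, not the paper's implementation: the checkerboard stands in for the standardized visual elements, Gaussian noise for the unknown degradation, and a one-step residual predictor (`VisualInstructedDenoiser`) for the actual degradation diffusion model.

```python
# Minimal, hypothetical sketch of Defusion's two ingredients, not the
# authors' code: (1) a visual instruction built by degrading a
# standardized visual element, and (2) a network conditioned on that
# instruction to remove the degradation. A one-step residual predictor
# stands in for the paper's degradation diffusion process.
import torch
import torch.nn as nn


def checkerboard(size: int = 64, tile: int = 8) -> torch.Tensor:
    """Standardized visual element: known structure, no image semantics."""
    ij = torch.arange(size)
    board = ((ij[:, None] // tile + ij[None, :] // tile) % 2).float()
    return board.expand(3, size, size)  # 3-channel image in [0, 1]


def degrade(x: torch.Tensor, sigma: float = 0.2) -> torch.Tensor:
    """Placeholder degradation (Gaussian noise); in practice this is the
    unknown corruption affecting the input image."""
    return (x + sigma * torch.randn_like(x)).clamp(0, 1)


class VisualInstructedDenoiser(nn.Module):
    """Toy conditional network: predicts the degradation residual from the
    degraded image concatenated with the visual instruction."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, degraded: torch.Tensor, instruction: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([degraded, instruction], dim=1))


if __name__ == "__main__":
    # Visual instruction: the same degradation applied to the standardized
    # element, so the instruction encodes the degradation, not semantics.
    instruction = degrade(checkerboard()).unsqueeze(0)  # (1, 3, 64, 64)

    degraded = degrade(torch.rand(1, 3, 64, 64))        # stand-in input image

    model = VisualInstructedDenoiser()                  # untrained toy model
    residual = model(degraded, instruction)             # predicted degradation
    restored = (degraded - residual).clamp(0, 1)        # "denoise" the effect
    print(restored.shape)                               # torch.Size([1, 3, 64, 64])
```

The point the sketch preserves is the conditioning interface: the instruction is computed from a semantics-free element under the same degradation, so the network is conditioned on how the image is corrupted rather than on what it depicts.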