ACCV 2022
Teacher-Guided Learning for Blind Image Quality Assessment

Zewen Chen, Juan Wang, Bing Li et al.

TL;DR: We introduce a novel NR-IQA framework that leverages an image restoration teacher network (TN) to transfer semantic and distortion knowledge to a student network (SN), enabling superior BIQA performance with limited annotated data; the method's generalization, robustness, and effectiveness are fully validated.

Quick Read: The performance of deep learning models for blind image quality assessment (BIQA) suffers from insufficient annotated data. However, image restoration, a task closely related to BIQA, can easily acquire training data without annotation. Moreover, both image semantics and distortion information are vital knowledge for the two tasks to predict and improve image quality. Motivated by these observations, this paper proposes a novel BIQA framework, which builds an image restoration model as a teacher network (TN) to learn the two aspects of knowledge and then guide the student network (SN) for BIQA. In the TN, multi-branch convolutions are leveraged to perform adaptive restoration from diversely distorted images, strengthening the knowledge learning. The knowledge is then transferred to the SN and progressively aggregated by computing long-distance responses, improving BIQA with small annotated datasets. Experimental results show that our method outperforms many state-of-the-art approaches on both synthetic and authentic datasets. In addition, the generalization, robustness, and effectiveness of our method are fully validated.
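
To make the framework concrete, below is a minimal PyTorch sketch of the teacher-guided idea: a restoration teacher with multi-branch convolutions, a student that aggregates the transferred features through long-distance (non-local) responses, and a combined quality-regression plus feature-matching objective. All module names, channel sizes, and loss terms here are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of teacher-guided BIQA; architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiBranchBlock(nn.Module):
    """Parallel convolution branches with different receptive fields,
    standing in for the teacher's adaptive multi-branch restoration."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class TeacherRestorer(nn.Module):
    """Teacher: restores a distorted image; its intermediate features
    carry the semantic and distortion knowledge to be transferred."""
    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.block = MultiBranchBlock(channels)
        self.head = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = self.block(F.relu(self.stem(x)))
        return self.head(feat), feat  # restored image + transferable features


class NonLocalAggregation(nn.Module):
    """Computes long-distance responses (simplified non-local attention)
    to progressively aggregate the transferred knowledge in the student."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)        # (b, hw, c)
        k = self.k(x).flatten(2)                        # (b, c, hw)
        v = self.v(x).flatten(2).transpose(1, 2)        # (b, hw, c)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # (b, hw, hw)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out


class StudentIQA(nn.Module):
    """Student: predicts a quality score from few labels, guided by a
    feature-matching loss against the frozen teacher."""
    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.aggregate = NonLocalAggregation(channels)
        self.score = nn.Linear(channels, 1)

    def forward(self, x):
        feat = self.aggregate(F.relu(self.stem(x)))
        return self.score(feat.mean(dim=(2, 3))), feat


# One hypothetical training step: quality regression + knowledge transfer.
teacher, student = TeacherRestorer().eval(), StudentIQA()
images = torch.rand(2, 3, 64, 64)   # distorted inputs
mos = torch.rand(2, 1)              # mean-opinion-score labels
with torch.no_grad():
    _, t_feat = teacher(images)     # teacher pre-trained on restoration, frozen
pred, s_feat = student(images)
loss = F.l1_loss(pred, mos) + F.mse_loss(s_feat, t_feat)
loss.backward()
```

In this sketch the teacher would first be pre-trained on unlabeled restoration data and then frozen, so only the student is fine-tuned on the small annotated IQA dataset, mirroring the paper's motivation of compensating for annotation scarcity with restoration supervision.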