Proposes no-reference image quality assessment (IQA) for real-world super-resolved images using content-free self-supervised learning (SSL). Pretrains representations via contrastive learning over the outputs of multiple SR models. Introduces the new SRMORSS dataset for pretext training.
Key Points
- Degradation-focused, content-independent representation learning
- Contrastive pairs drawn from outputs of the same vs. different SR models (see the sketch after this list)
- Handles multiple scaling factors and real low-resolution (LR) degradations
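A minimal PyTorch sketch of how such degradation-focused contrastive pairs could be formed, assuming a SimCLR-style NT-Xent objective. The ResNet-18 backbone, projection head, crop size, and temperature are illustrative choices rather than the paper's exact configuration, and each batch element is assumed to come from a distinct SR model so that in-batch negatives correspond to different degradations.

```python
# Sketch: contrastive pretext where two crops of the same SR output are
# positives and crops from other SR models' outputs act as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class DegradationEncoder(nn.Module):
    """ResNet-18 backbone + projection head (illustrative architecture)."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # keep 512-d pooled features
        self.backbone = backbone
        self.proj = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                                  nn.Linear(256, proj_dim))

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=1)

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent loss over paired embeddings: (z1[i], z2[i]) are positives,
    all other embeddings in the batch serve as negatives."""
    z = torch.cat([z1, z2], dim=0)                   # (2N, d), unit-normalized
    sim = z @ z.t() / temperature                    # cosine similarity matrix
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))       # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: two random crops per SR output; crops sharing an SR model are positives.
encoder = DegradationEncoder()
crops_a = torch.randn(8, 3, 96, 96)                  # first crop of each SR output
crops_b = torch.randn(8, 3, 96, 96)                  # second crop of the same outputs
loss = nt_xent(encoder(crops_a), encoder(crops_b))
```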
Impact Analysis
Enables adaptive quality assessment in data-scarce, real-world SR domains. Outperforms state-of-the-art NR-IQA methods on SR benchmarks, bridging a gap in realistic SR-IQA.
Technical Details
A self-supervised pretext stage combines preprocessing with auxiliary tasks. The new dataset is built by running diverse SR algorithms on real LR images. The learned representation is domain-adaptive, suiting the ill-posed real-world SR setting.
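One plausible downstream recipe consistent with the data-scarce setting above, assuming the SSL-pretrained backbone is frozen and a small regression head is fit to whatever labeled quality scores are available. The MLP head, L1 loss, and optimizer settings are assumptions for illustration, not details taken from the paper.

```python
# Sketch: frozen pretrained encoder + lightweight quality-regression head.
import torch
import torch.nn as nn
import torchvision.models as models

class QualityRegressor(nn.Module):
    """Predicts a scalar quality score from degradation-aware features."""
    def __init__(self, encoder, feat_dim=512):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                  # keep SSL features fixed
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 1))

    def forward(self, x):
        with torch.no_grad():
            feats = self.encoder(x)                  # frozen feature extraction
        return self.head(feats).squeeze(-1)          # one score per image

# In practice `encoder` would be the SSL-pretrained backbone; a fresh
# ResNet-18 stands in here so the snippet runs on its own.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()
model = QualityRegressor(backbone)

opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)
imgs = torch.randn(4, 3, 96, 96)                     # super-resolved crops
mos = torch.rand(4) * 5                              # hypothetical quality labels
loss = nn.functional.l1_loss(model(imgs), mos)
loss.backward()
opt.step()
```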