YOND: Practical Blind Raw Image Denoising Free from Camera-Specific Data Dependency

TPAMI Minor Revision


Hansen Feng1, Lizhi Wang1,2, Yiqi Huang1, Tong Li1, Lin Zhu1, Hua Huang2

1Beijing Institute of Technology    2Beijing Normal University   

Evaluation Guidelines


  • We have launched an online demo on Hugging Face Spaces!
  • YOND's core modules (CNE & EM-VST) have been temporarily obfuscated with PyArmor to comply with recent laboratory confidentiality regulations. You can still run inference, but the source code will remain encrypted until the paper is officially accepted.
  • Complete experimental results (visualized as RGB images) are available at Resources. These include YOND's inference results on four public datasets (results) and the crops used for comparison in the manuscript (crops@paper).

Abstract


The rapid advancement of photography has created a growing demand for a practical blind raw image denoising method. Recently, learning-based methods have become mainstream due to their excellent performance. However, most existing learning-based methods suffer from camera-specific data dependency, resulting in performance drops when applied to data from unknown cameras. To address this challenge, we introduce a novel blind raw image denoising method named YOND, which stands for You Only Need a Denoiser. Trained solely on synthetic data, YOND generalizes robustly to noisy raw images captured by diverse unknown cameras. Specifically, we propose three key modules to guarantee the practicality of YOND: coarse-to-fine noise estimation (CNE), expectation-matched variance-stabilizing transform (EM-VST), and SNR-guided denoiser (SNR-Net). Firstly, we propose CNE to identify the camera's noise characteristics, refining the estimated noise parameters based on the coarsely denoised image. Secondly, we propose EM-VST to eliminate camera-specific data dependency, correcting the biased expectation of the VST according to the noisy image. Finally, we propose SNR-Net to offer controllable raw image denoising, supporting adaptive adjustments and manual fine-tuning. Extensive experiments on unknown cameras, along with flexible solutions for challenging cases, demonstrate the superior practicality of our method.

Teaser figure.
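For readers unfamiliar with VST-based pipelines, the sketch below illustrates the classical baseline that CNE and EM-VST improve upon: a rough Poisson-Gaussian noise-parameter fit, followed by the generalized Anscombe transform and its simple algebraic inverse. All function names and parameters here (patch size, the synthetic a and b values, etc.) are illustrative assumptions, not YOND's implementation; in particular, the naive inverse shown exhibits the expectation bias that EM-VST is designed to correct without camera-specific calibration.

```python
import numpy as np

def estimate_noise_params(noisy, patch=16, bins=20):
    """Rough Poisson-Gaussian parameter fit: Var[z] ~= a * E[z] + b.

    Bins per-patch statistics by mean and keeps the lowest-variance
    patch in each bin, so textured patches do not inflate the fit.
    A coarse illustration only, not the paper's CNE module.
    """
    h, w = noisy.shape
    means, variances = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = noisy[i:i + patch, j:j + patch]
            means.append(p.mean())
            variances.append(p.var(ddof=1))
    means, variances = np.asarray(means), np.asarray(variances)
    edges = np.linspace(means.min(), means.max(), bins + 1)
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (means >= lo) & (means < hi)
        if mask.any():
            k = np.argmin(variances[mask])
            xs.append(means[mask][k])
            ys.append(variances[mask][k])
    a, b = np.polyfit(xs, ys, deg=1)  # slope ~ a, intercept ~ b
    return a, b

def gat_forward(z, a, b):
    """Generalized Anscombe transform: maps Poisson-Gaussian noise to
    approximately unit-variance Gaussian noise."""
    t = a * z + (3.0 / 8.0) * a ** 2 + b
    return (2.0 / a) * np.sqrt(np.maximum(t, 0.0))

def gat_inverse(d, a, b):
    """Simple algebraic inverse of gat_forward. It is biased at low
    intensities, which is the kind of expectation mismatch that an
    expectation-matched VST has to correct."""
    return (a / 4.0) * d ** 2 - (3.0 / 8.0) * a - b / a

# Toy usage on synthetic Poisson-Gaussian data (hypothetical a, b).
rng = np.random.default_rng(0)
clean = rng.uniform(0.05, 1.0, size=(256, 256))
a_true, b_true = 0.01, 1e-4
noisy = a_true * rng.poisson(clean / a_true) + rng.normal(0.0, np.sqrt(b_true), clean.shape)

a_est, b_est = estimate_noise_params(noisy)
stabilized = gat_forward(noisy, a_est, b_est)
# ... any off-the-shelf Gaussian denoiser would run on `stabilized` here ...
restored = gat_inverse(stabilized, a_est, b_est)
```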

Performance


Comparison

Quantitative Results

Comparison of different self-supervised methods on the SIDD and DND datasets.



Blind raw image denoising results on images from the DND dataset.


Discussion


The default settings enable YOND to outperform existing methods in most cases, yet handling challenging cases is essential in practice. To live up to the slogan "You Only Need a Denoiser", a denoising method should also be controllable. Breaking the camera-specific data dependency unlocks YOND's interactivity, allowing various parameters to be adjusted manually for challenging cases. Below, we introduce two solutions that extend the applicability of YOND beyond its default settings.
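As a rough illustration of what such controllability can look like, the hypothetical snippet below scales the assumed noise variance before denoising: values above 1 request a smoother result for challenging low-SNR frames, while values below 1 preserve more texture. The function name and interface are assumptions for illustration, not YOND's actual API.

```python
def adjust_noise_params(a_est, b_est, strength=1.0):
    """Hypothetical manual override: multiply the assumed noise variance
    Var[z] = a * E[z] + b by `strength` before invoking the denoiser."""
    return a_est * strength, b_est * strength

# e.g. a stubborn low-light frame might warrant a more aggressive denoise:
a_adj, b_adj = adjust_noise_params(a_est=0.01, b_est=1e-4, strength=1.5)
```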

Solutions Based on Fine-tuning



Solutions Based on YOND-p