Note: Christopher Burger has left the institute (alumnus).
I'm working on image denoising: the problem of recovering a clean image from a noisy one. Noise in images arises for a number of reasons, including imperfect digital image sensors. The problem is of growing importance due to the explosion in the number of digital images recorded every day and the fact that every digital image contains some amount of noise. My personal web page is kept more up to date.
My research can be divided into three main categories:
- Astronomical image denoising with a pixel-specific noise model.
For digital photographs of astronomical objects, where exposure times are long, dark-current noise is a significant source of noise. Usually, denoising methods assume additive white Gaussian noise, with equal variance for each pixel. However, dark-current noise has different properties for every pixel. We use a pixel-specific noise model to handle dark-current noise, as well as an image prior adapted to astronomical images. Our method is shown to perform well in a laboratory environment, and produces visually appealing results in a real-world setting.
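The core idea can be illustrated with a minimal sketch (not our actual method): if each pixel's noise variance is known, e.g. from calibration dark frames, a per-pixel MAP estimate under Gaussian noise trades off the observation against a prior. Here the prior is just a local-mean image with an assumed variance `prior_var`; the function name and all parameters are illustrative.

```python
import numpy as np

def denoise_pixelwise(noisy, var_map, prior_var=0.01, kernel=5):
    """Per-pixel Gaussian MAP estimate with a pixel-specific noise variance map."""
    # Crude image prior: local mean over a kernel x kernel window.
    pad = kernel // 2
    padded = np.pad(noisy, pad, mode="reflect")
    local_mean = np.zeros_like(noisy, dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            local_mean += padded[dy:dy + noisy.shape[0], dx:dx + noisy.shape[1]]
    local_mean /= kernel * kernel
    # Pixels with low noise variance trust the observation;
    # pixels with high variance (e.g. hot pixels) fall back on the prior.
    w = prior_var / (prior_var + var_map)
    return w * noisy + (1 - w) * local_mean
```

A hot pixel with large dark-current variance is pulled toward its neighbourhood, while well-behaved pixels pass through almost unchanged.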
- A multi-scale meta-procedure for improving existing denoising algorithms.
Most denoising algorithms focus on recovering high frequencies. However, for high noise levels it is also important to recover low frequencies. We present a multi-scale meta-procedure that applies existing denoising algorithms across different scales and combines the resulting images into a single denoised image. We show that our method can improve the results achieved by many denoising algorithms.
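A minimal sketch of the multi-scale idea, under simplifying assumptions (nearest-neighbour resampling, a box filter as the low-pass, and an arbitrary single-scale `denoiser` passed in as a function): denoise at each scale, then keep the high frequencies of the fine result and the low frequencies of the coarse result. All helper names are illustrative, not our published procedure.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple low-pass filter via a k x k box average."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def downsample(img):
    """2x2 block averaging (assumes even dimensions)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour 2x upsampling."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def multiscale_denoise(noisy, denoiser, levels=2):
    """Apply `denoiser` at several scales and fuse the results."""
    if levels == 0 or min(noisy.shape) < 4:
        return denoiser(noisy)
    fine = denoiser(noisy)                       # high frequencies from this scale
    coarse = multiscale_denoise(downsample(noisy), denoiser, levels - 1)
    # Replace the low frequencies of the fine result with the coarse result.
    return fine - box_blur(fine) + box_blur(upsample(coarse))
```

The meta-procedure is agnostic to the wrapped denoiser, which is what lets it improve many existing algorithms at high noise levels.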
- State-of-the-art image denoising with multi-layer perceptrons.
Many of the best-performing denoising methods rely on a cleverly engineered algorithm. In contrast, we take a learning approach to denoising and train a multi-layer perceptron to denoise image patches. Using this approach, we outperform the previous state of the art. Our approach also achieves results that are superior to one type of theoretical bound and goes a long way toward closing the gap with a second type of theoretical bound. Furthermore, we achieve outstanding results on other types of noise, including JPEG artifacts and Poisson noise. We also show that multi-layer perceptrons can be used to combine the results of several denoising algorithms; this approach often yields better results than the best method in the combination. We discuss in detail which trade-offs have to be considered during the training procedure, and we are able to make observations regarding the functioning principle of multi-layer perceptrons for image denoising.
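The training setup can be sketched in a toy form: an MLP is trained by gradient descent to map noisy patches to clean patches. Everything below is a deliberately tiny illustration (synthetic 1-D "patches", one hidden layer, plain SGD in numpy), not the architecture or training regime we actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n, d=16, sigma=0.3):
    """Toy training pairs: clean patches are smooth ramps, inputs add Gaussian noise."""
    clean = rng.uniform(0, 1, size=(n, 1)) * np.linspace(0, 1, d)
    return clean + sigma * rng.standard_normal((n, d)), clean

# One-hidden-layer MLP: noisy patch in, denoised patch out.
d, h, lr = 16, 64, 0.05
W1 = rng.standard_normal((d, h)) * 0.1; b1 = np.zeros(h)
W2 = rng.standard_normal((h, d)) * 0.1; b2 = np.zeros(d)

losses = []
for step in range(300):
    x, y = make_batch(64)
    a = np.tanh(x @ W1 + b1)          # hidden activations
    out = a @ W2 + b2                 # predicted clean patch
    err = out - y
    losses.append((err ** 2).mean())  # MSE against the clean patch
    # Backpropagation of the squared-error loss.
    gW2 = a.T @ err / len(x); gb2 = err.mean(axis=0)
    da = err @ W2.T * (1 - a ** 2)    # tanh derivative
    gW1 = x.T @ da / len(x); gb1 = da.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The same supervised recipe scales up: with large patch sizes, several hidden layers, and enormous amounts of noisy/clean training pairs, the learned mapping becomes a competitive denoiser.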