
Noise reduction by combining images in Java

Hello, Habr! I want to share the code of a simple program that I use to reduce noise in digital photographs.

About eight years ago, while looking through photos taken with my first digital camera, I found that some pictures shot in dim lighting had a strange murkiness: colored spots and a lack of sharpness. At the time I didn't know what noise was or how it depends on the ISO setting, and I was very disappointed that the camera was so “poor quality”. However, I noticed that in identical pictures these colored spots looked slightly different, varying from frame to frame. As time went on, I learned to shoot with manual settings, learned what noise is, how to set the light sensitivity properly, and so on.

A few years later, when I started programming, I again noticed that the noise in the images is not static. An idea arose in my head: what if you take several absolutely identical shots and then somehow merge them, eliminating the difference between the pictures, i.e. the noise?
So, below are four images showing shots of the same object, with random noise in each. The red circles represent the object, and the white spots represent the noise.

sample snapshots

First of all, I decided to try the obvious approach: combining these images in Photoshop by setting the opacity of each layer to 50%. Of course, nothing good came of it.

processing result in adobe photoshop

This result is quite logical: the pixels are not averaged but simply composited one over another, and each subsequent layer carries more “weight” than the layers below it.
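The arithmetic behind this is easy to check. The sketch below (the pixel values are made up for illustration) composites four layers bottom-up, each upper layer at 50% opacity, and compares the result with a true mean. Stacking gives the top layer a weight of 1/2, the next 1/4, and so on, so a single bright noise pixel in the top frame dominates the result instead of being averaged away:

```java
public class OpacityStack {
    // Composite "top over bottom" with the top layer at 50% opacity
    static double over(double bottom, double top) {
        return 0.5 * top + 0.5 * bottom;
    }

    public static void main(String[] args) {
        // One channel of the same pixel in four "identical" shots;
        // the last value (1.0) is a noise outlier
        double[] layers = {0.0, 0.0, 0.0, 1.0};

        // Stack the layers bottom-up, each upper layer at 50% opacity
        double stacked = layers[0];
        for (int i = 1; i < layers.length; i++) {
            stacked = over(stacked, layers[i]);
        }

        // True arithmetic mean of the same values
        double mean = 0;
        for (double v : layers) mean += v;
        mean /= layers.length;

        System.out.println("stacked = " + stacked); // 0.5: top layer dominates
        System.out.println("mean    = " + mean);    // 0.25
    }
}
```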

A quick Internet search revealed that programs of this kind already exist and are actively used by astrophotographers. The exposures at which stars are shot are very long (and therefore noisy), while the subject itself is static (when shooting with guiding), which makes it possible to eliminate noise by combining identical shots. However, there was one problem: all the programs of this kind that I found were very difficult to use, tailored specifically to astrophotography, and almost all of them cost a lot of money. I did not need that much functionality, so I continued the search.

After some time, I found the site of the German mathematician Helmut Dersch, well known in panoramic photography circles. At the moment, almost all software for stitching panoramic images is based on his algorithms. Besides the panorama-processing software on his website, I came across a program that eliminates noise from images: PTAverage. The program was incredibly simple: just drag the photos onto its icon and you get the result. Just what I was looking for. However, after playing with PTAverage for a bit, I realized that this was not quite what I wanted.

Result of image processing by PTAverage:

processing result in PTAverage

As you can see, the program works in the simplest way possible: it takes the color of each pixel in every image, adds the values together, and divides by the total number of images. However, I wanted some selectivity: for example, if a pixel is black in two images and white in the third, it is logical to assume that the pixel in the third image is noise. In the end, I found nothing suitable and decided to write the software myself, since everything looked very easy.

The program itself was written in Java, because I had been studying it for about a year by then. The only catch was loading images in TIFF format, but I eventually figured out the JAI library. The disadvantage of the program is its huge memory consumption: JAI cannot (or maybe I just never found out how to) read an image pixel by pixel without loading the whole image into memory.
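For what it's worth, the standard `javax.imageio` API does let you ask a reader for a sub-region of an image via `ImageReadParam.setSourceRegion`, which could be used to process a large image stripe by stripe. Whether this actually lowers peak memory depends on the particular reader's implementation, so treat this as a sketch rather than a guaranteed fix (the in-memory PNG here just stands in for a large file):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class StripeRead {
    public static void main(String[] args) throws IOException {
        // Build a small test image in memory (stand-in for a large file)
        BufferedImage src = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 50, 0xFF0000); // red pixel at the start of row 50
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(src, "png", baos);

        // Ask the reader for a 10-row stripe instead of the whole image
        try (ImageInputStream is = ImageIO.createImageInputStream(
                new ByteArrayInputStream(baos.toByteArray()))) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(is);
            ImageReader reader = readers.next();
            reader.setInput(is);
            ImageReadParam param = reader.getDefaultReadParam();
            param.setSourceRegion(new Rectangle(0, 50, 100, 10)); // rows 50-59
            BufferedImage stripe = reader.read(0, param);
            System.out.println(stripe.getHeight());                        // 10
            System.out.println((stripe.getRGB(0, 0) & 0xFFFFFF) == 0xFF0000); // true
        }
    }
}
```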

Program code
To make the code clearer, I removed all checks (image resolution, bits per channel, etc.):

import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class Denoise {

    /**
     * @param inputFiles source images to combine
     * @param outputFile file the result is written to
     * @param difference maximum allowed deviation from the median (0-255)
     * @throws IOException
     */
    Denoise(File[] inputFiles, File outputFile, int difference) throws IOException {
        // Rasters of the source images
        Raster[] rasters = new Raster[inputFiles.length];

        // Read each image into a raster
        for (int i = 0; i < inputFiles.length; i++) {
            try (ImageInputStream is = ImageIO.createImageInputStream(inputFiles[i])) {
                Iterator<ImageReader> imageReaders = ImageIO.getImageReaders(is);
                ImageReader imageReader = imageReaders.next();
                imageReader.setInput(is);
                if (imageReader.canReadRaster()) {
                    rasters[i] = imageReader.readRaster(0, null);
                } else {
                    rasters[i] = imageReader.readAsRenderedImage(0, null).getData();
                }
            }
        }

        // All images are assumed to have the same dimensions
        int width = rasters[0].getWidth();
        int height = rasters[0].getHeight();

        // Raster that will hold the result
        WritableRaster outputRaster = rasters[0].createCompatibleWritableRaster();

        // Walk over every pixel of the output image
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                // Resulting color of the pixel
                int[] color = new int[3];
                for (int band = 0; band < 3; band++) {
                    // Values of this channel across all images
                    int[] data = new int[rasters.length];
                    for (int imageNum = 0; imageNum < rasters.length; imageNum++) {
                        data[imageNum] = rasters[imageNum].getSample(x, y, band);
                    }
                    // Average the channel values
                    color[band] = average(data, difference);
                }
                // Write the pixel into the output raster
                outputRaster.setPixel(x, y, color);
            }
        }

        // Save the result
        BufferedImage output = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        output.setData(outputRaster);
        ImageIO.write(output, "tiff", outputFile);
    }

    /**
     * @param data       channel values of one pixel across all images
     * @param difference maximum allowed deviation from the median
     * @return averaged channel value
     */
    private int average(int[] data, int difference) {
        // Number of images
        int imagesCount = data.length;
        // Median value
        int median;

        // Sort the values so the median can be taken from the middle
        Arrays.sort(data);

        // For an even number of images the median is the mean of the two
        // middle values; for an odd number it is the middle value itself
        if (imagesCount % 2 == 0) {
            median = (data[imagesCount / 2 - 1] + data[imagesCount / 2]) / 2;
        } else {
            median = data[imagesCount / 2];
        }

        // Range of values considered "not noise"
        int min = median - difference;
        int max = median + difference;

        // Sum of the values that fall into the range
        int sumBands = 0;
        // Number of values between min and max
        int counter = 0;

        // Sum up only the values close enough to the median
        for (int i = 0; i < imagesCount; i++) {
            if (data[i] >= min && data[i] <= max) {
                sumBands = sumBands + data[i];
                counter++;
            }
        }

        // If almost all values were rejected as noise, fall back to a plain
        // average of all values; otherwise average only the kept ones
        if (counter <= 1) {
            sumBands = 0;
            for (int i = 0; i < imagesCount; i++) {
                sumBands = sumBands + data[i];
            }
            sumBands = sumBands / imagesCount;
        } else {
            sumBands = sumBands / counter;
        }
        return sumBands;
    }
}



We feed four original frames into the program and get an image without noise.

processing result

It is hard to find a practical application for the algorithm: the subject must be static, its lighting must not change, and nothing will work without a very stable tripod.

By the way, the program has a certain “side effect”: it removes not only noise but any non-static object. For example, by taking a large number of frames of a busy square, one could theoretically “remove” all the people. Below is a small example.

Pictures before processing. As you can see from the pictures, the banana moves in a mysterious way.



And here is the result of processing. As you can see, the banana is not completely gone. However, with a larger number of frames, provided the banana kept moving, one could get rid of it entirely.

processing result

What about the noise? Here, too, everything is fine: just three frames were enough to reduce it significantly (astrophotographers, as far as I know, use 15+ frames).

noise comparison

Of course, the algorithm described here is incredibly simple and of little use in real practice, but I hope someone will find it, or something else in this article, useful.

Source: https://habr.com/ru/post/237423/

