
Night Sight: how Pixel phones see in the dark


Left: iPhone XS (full-resolution photo). Right: Pixel 3 Night Sight (full-resolution photo).

Night Sight is a new feature of the Pixel Camera app that lets you take sharp, clean photographs in very low light, even in light so dim that your eyes can barely see. It works on the main and front cameras of all three generations of Pixel phones, and it requires neither a tripod nor a flash. In this article we discuss why it is so hard to take photos in low light, and we explain the computational photography and machine learning techniques, built on top of HDR+, that make Night Sight work.

Why is it difficult to take photographs in low light?


Anyone who has taken photographs in a dimly lit place is familiar with image noise, which looks like random variations in brightness from pixel to pixel. For smartphone cameras, whose lenses and sensors are small, the dominant source of noise is the natural variation in the number of photons entering the lens, known as shot noise. Every camera suffers from it, and it would be present even with a perfect sensor. Sensors are not perfect, however, so a second source of noise, read noise, is added when the electric charge produced by light hitting the pixels is converted to numbers. These and other sources contribute to the overall signal-to-noise ratio (SNR), which describes how much of the image stands out above the noise. Fortunately, SNR grows in proportion to the square root of the exposure time (or faster), so the longer the exposure, the cleaner the picture. But in dim light it is hard to hold still long enough to take a good picture, and whatever you are photographing may not hold still either.
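To make the relationship between exposure time and noise concrete, here is a minimal Python sketch of a simplified noise model, assuming Poisson shot noise plus Gaussian read noise. The photon rate and read-noise values are made-up illustrative numbers, not measurements from any real sensor.

```python
import numpy as np

def simulated_snr(photon_rate, exposure_s, read_noise_e, trials=100_000):
    """Estimate the SNR of a single pixel under a simplified noise model.

    photon_rate:  mean photo-electrons collected per second (illustrative)
    exposure_s:   exposure time in seconds
    read_noise_e: standard deviation of read noise, in electrons
    """
    rng = np.random.default_rng(0)
    signal = photon_rate * exposure_s
    # Shot noise: photon arrivals follow a Poisson distribution.
    shot = rng.poisson(signal, trials).astype(float)
    # Read noise: added once per readout, roughly Gaussian.
    measured = shot + rng.normal(0.0, read_noise_e, trials)
    return measured.mean() / measured.std()

# SNR grows roughly with the square root of exposure time (or faster,
# once the signal dwarfs the fixed read noise).
for t in (1 / 60, 1 / 15, 1 / 4, 1.0):
    print(f"{t:6.3f} s  ->  SNR ~ {simulated_snr(200, t, 3.0):.1f}")
```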

In 2014 we introduced HDR+, a computational photography technology that improves this situation by capturing a burst of frames, which are then aligned and merged in software. The main purpose of HDR+ is to improve dynamic range, that is, the ability to photograph scenes with a wide range of brightness (for example, sunsets or backlit portraits). All generations of Pixel phones use HDR+. It turns out that merging multiple frames also reduces the effect of shot noise and read noise, so it improves SNR in dim light. To keep images sharp despite hand shake and subject motion, we use short exposures, and we discard the pieces of frames for which no good alignment can be found. This allows HDR+ to produce crisp images while still collecting more light.
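As an illustration of why merging frames helps, here is a small sketch under the same simplified noise assumptions as above, showing that averaging N noisy frames reduces the noise standard deviation roughly by a factor of the square root of N. This is only a toy model, not the HDR+ merge itself.

```python
import numpy as np

rng = np.random.default_rng(1)
true_scene = 50.0          # mean photo-electrons per pixel (illustrative)
read_noise = 3.0           # electrons, illustrative

def capture_frame(shape=(100, 100)):
    """One noisy frame: Poisson shot noise plus Gaussian read noise."""
    return rng.poisson(true_scene, shape) + rng.normal(0, read_noise, shape)

for n_frames in (1, 4, 9, 15):
    merged = np.mean([capture_frame() for _ in range(n_frames)], axis=0)
    print(f"{n_frames:2d} frames: noise std ~ {merged.std():.2f}")
# The printed noise falls roughly as 1 / sqrt(n_frames).
```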

How dark is dark?


But if capturing and merging multiple frames produces sharper photos in low light, why not use HDR+ to capture several dozen frames so that we can, in effect, see in the dark? Let's start with a definition of "dark." Photographers discussing scene illumination often measure it in lux. Lux is the amount of light falling on a unit of surface, measured in lumens per square meter. To give you a rough idea of illumination levels, here is a handy reference table:



Smartphone cameras that take a single photo begin to struggle at around 30 lux. Phones that capture and merge several frames can manage down to about 3 lux, but in darker conditions they cannot cope and have to rely on flash. With Night Sight, we set out to improve picture quality in the range from 3 lux to 0.3 lux using a smartphone, a single press of the shutter button, and no flash. Several key elements are required for this to work, the most important of which is capturing more frames.

Capturing the data


Increasing per-frame exposure time raises the SNR and gives cleaner images, but it introduces two problems. First, the default shooting mode on Pixel phones uses a zero-shutter-lag (ZSL) protocol, which limits exposure time. The camera app starts capturing frames as soon as you open it and stores them in a ring buffer that constantly discards old frames to make room for new ones. When you press the shutter button, the camera sends the most recent 9 to 15 frames to the HDR+ or Super Res Zoom software. This captures exactly the moment you wanted, hence "zero shutter lag." However, because we display these same frames in the viewfinder to help you aim, HDR+ limits exposure to at most 66 ms regardless of scene brightness, which lets the viewfinder maintain a rate of at least 15 frames per second. For dimmer scenes, where longer exposures are needed, Night Sight uses positive-shutter-lag (PSL) technology and waits until after you press the shutter button to capture frames. Using PSL means you must hold the phone still for a short time after pressing the button, but it allows longer exposures, improving SNR in much darker conditions.
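The following sketch illustrates the idea of a ZSL ring buffer in Python. The buffer size, frame type, and method names are illustrative assumptions for exposition, not the actual camera-stack API.

```python
from collections import deque

class ZslRingBuffer:
    """Toy zero-shutter-lag buffer: always holds the most recent frames."""

    def __init__(self, capacity=15):
        # Old frames are discarded automatically as new ones arrive.
        self.frames = deque(maxlen=capacity)

    def on_new_viewfinder_frame(self, frame):
        """Called continuously while the viewfinder runs (<= 66 ms exposures)."""
        self.frames.append(frame)

    def on_shutter_pressed(self):
        """Hand the most recent burst to the merge stage (HDR+ / Super Res Zoom)."""
        return list(self.frames)

# With positive shutter lag (PSL), frames are instead captured *after* the
# button press, so each one can use a much longer exposure.
```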

The second problem with increasing per-frame exposure is blur caused by hand shake or by motion of objects in the scene. Optical image stabilization (OIS), available on Pixel 2 and 3, reduces the effect of hand shake at moderate exposure times (up to about 1/8 s), but it does not help with longer exposures or with moving subjects. To combat blur that OIS cannot handle, Pixel 3 enables motion metering by default, using optical flow to measure recent scene motion and choose an exposure time that minimizes blur. Pixel 1 and 2 do not use motion metering in their default mode, but all three phones use it in Night Sight mode, increasing per-frame exposure up to 333 ms when no motion is detected. On Pixel 1, which has no OIS, we do not increase the exposure time as much (and for selfie cameras, which also lack OIS, the increase is even more modest). If the camera is stabilized (braced against a wall or mounted on a tripod), per-frame exposure can be increased up to one second. In addition to varying per-frame exposure, we also vary the number of frames, from 6 when the phone is on a tripod to 15 when shooting handheld. These frame limits prevent user fatigue (and the need for a "cancel" button). So, depending on which Pixel phone you have, which camera you use, hand shake, scene motion, and scene brightness, Night Sight captures anywhere from 15 frames of 1/15 s or less each, up to 6 frames of up to 1 s each.
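Here is a rough sketch of how such a per-frame exposure and frame-count decision might look. The thresholds mirror the numbers quoted above, but the structure, names, and the motion-to-exposure mapping are assumptions for illustration only, not Night Sight's actual logic.

```python
def choose_capture_plan(on_tripod: bool, has_ois: bool, motion_px_per_s: float):
    """Pick a per-frame exposure (seconds) and frame count for a burst.

    Illustrative heuristic only: the real system also considers scene
    brightness, which camera is in use, and other signals.
    """
    if on_tripod:
        return 1.0, 6             # stabilized: up to 1 s per frame, 6 frames
    if motion_px_per_s > 0:       # motion metering saw movement: shorten exposure
        exposure = min(0.333, 5.0 / motion_px_per_s)
    else:
        exposure = 0.333          # up to 333 ms when nothing is moving
    if not has_ois:               # original Pixel / selfie cameras: be conservative
        exposure = min(exposure, 0.125)
    return exposure, 15           # handheld bursts use up to 15 frames

print(choose_capture_plan(on_tripod=False, has_ois=True, motion_px_per_s=0.0))
print(choose_capture_plan(on_tripod=True, has_ois=True, motion_px_per_s=0.0))
```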

A specific example of using a shorter exposure when motion is detected:


Left: a burst of 15 frames captured by one of two Pixel 3 phones placed side by side. Center: a Night Sight shot with motion metering disabled, which makes the phone use a 73 ms exposure. The dog's head is blurred. Right: Night Sight with motion metering enabled, using a 48 ms exposure. The blur is noticeably smaller.

An example of using a long exposure when the phone is on a tripod:


Left: part of a night-sky shot taken handheld with Night Sight (the whole picture). My hands were shaking a little, so Night Sight chose 333 ms × 15 frames = 5.0 seconds. Right: the same shot from a tripod (full picture). No shake was detected, so Night Sight used 1.0 s × 6 frames = 6.0 seconds. The sky is clearer, there is less noise, and more stars are visible.

Align and merge


The idea of averaging frames to reduce noise is as old as digital imaging itself. In astrophotography it is called exposure stacking. Although the technique itself is simple, the hard part is aligning the images correctly when shooting handheld. We started working on this back in 2010 with the Synthcam application. It continuously captured frames, aligned and merged them in real time at low resolution, and displayed the result, which became cleaner the longer you watched.

Night Sight uses a similar principle, but at the full resolution of the sensor and not in real time. On Pixel 1 and 2 we use the HDR+ merging algorithm, modified and tuned to strengthen its ability to detect and reject misaligned parts of frames, even in very noisy scenes. On Pixel 3 we use Super Res Zoom, which adapts depending on whether or not you are zooming in. Although it was designed for higher resolution, it also reduces noise, since it averages multiple frames together. For some night scenes Super Res Zoom gives better results than HDR+, but it requires the faster processor of the Pixel 3.
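To give a feel for what "align and merge" with rejection means, here is a minimal Python/NumPy sketch: it aligns each frame to a reference by a brute-force translation search on small tiles and averages in only the tiles that match well. This is a drastically simplified stand-in for the real HDR+ and Super Res Zoom merges; the tile size, search radius, and rejection threshold are arbitrary illustrative values.

```python
import numpy as np

def align_and_merge(frames, tile=16, search=4, reject_thresh=8.0):
    """Average a burst of grayscale frames with per-tile alignment and rejection.

    frames: list of 2-D float arrays; frames[0] is the reference.
    Tiles whose best match still differs too much from the reference are
    rejected, which is what suppresses ghosting from moving subjects.
    """
    ref = frames[0]
    acc = ref.copy()
    weight = np.ones_like(ref)
    h, w = ref.shape
    for frame in frames[1:]:
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                ref_tile = ref[y:y + tile, x:x + tile]
                best_err, best_tile = np.inf, None
                # Brute-force search over small translations.
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - tile and 0 <= xx <= w - tile:
                            cand = frame[yy:yy + tile, xx:xx + tile]
                            err = np.mean(np.abs(cand - ref_tile))
                            if err < best_err:
                                best_err, best_tile = err, cand
                if best_err < reject_thresh:       # keep only well-aligned tiles
                    acc[y:y + tile, x:x + tile] += best_tile
                    weight[y:y + tile, x:x + tile] += 1
    return acc / weight
```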

By the way, all of this happens on the phone within a few seconds. If you quickly switch to the photo gallery (after waiting for the capture to finish!), you can watch the image "develop" as HDR+ or Super Res Zoom completes.

Other difficulties


Although the basic ideas described above sound simple, several aspects of Night Sight turned out to be very difficult to get right when there is not enough light:

1. Auto white balance (AWB) does not work correctly in low light


People perceive the true colors of things even under colored lighting (or when wearing sunglasses), a phenomenon called color constancy. However, this process often fails when we take a photo under one kind of light and view it under another; the photo looks tinted to us. To correct this effect, cameras shift the colors of the image to partially or fully compensate for the dominant color of the illumination (sometimes called its color temperature), in effect making it appear that the scene was lit by neutral white light. This process is called auto white balance.

The problem is that this task is what mathematicians call an ill-posed problem. Was the snow the camera captured really blue? Or was it white snow illuminated by a blue sky? Probably the latter. This ambiguity makes white balancing hard. The AWB algorithm used in modes other than Night Sight works well, but under dim or strongly colored lighting (for example, sodium-vapor lamps) it struggles to determine the color of the illumination.

To solve these problems, we developed a learning-based AWB algorithm, trained to distinguish a well-balanced image from a poorly balanced one. When a photo is poorly balanced, the algorithm can suggest how to shift its colors so that the lighting appears more neutral. Training the algorithm required photographing many different scenes with Pixel devices and manually adjusting their white balance while viewing the photo on a color-calibrated monitor. You can see the algorithm at work by comparing the same poorly lit scene photographed in different ways on a Pixel 3:
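Whatever the estimator, applying a white balance correction ultimately comes down to multiplying the color channels by per-channel gains. Here is a minimal sketch using the classic "gray world" assumption as a stand-in for the learned estimator, which is far more sophisticated; the function name and the gray-world heuristic are illustrative assumptions, not the Night Sight model.

```python
import numpy as np

def gray_world_awb(rgb):
    """Estimate per-channel gains assuming the scene averages to neutral gray.

    rgb: H x W x 3 float array in linear space.
    A learned AWB model would instead *predict* these gains (or a color shift)
    from the image content; the gain-application step is the same.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # boost channels that are too dim
    return np.clip(rgb * gains, 0.0, 1.0), gains

# Example: an image with a strong yellow cast gets its blue channel boosted.
img = np.random.default_rng(2).uniform(0, 1, (4, 4, 3)) * np.array([1.0, 0.9, 0.5])
balanced, gains = gray_world_awb(img)
print("gains:", np.round(gains, 2))
```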


Left: the default white balancer does not know how yellow the light illuminating this hut on the Vancouver waterfront was (full photo). Right: our learning-based algorithm does better (full photo).

2. Tone mapping of scenes that are too dark


The purpose of Night Sight is to take photos of scenes so dark that you can barely see them with the naked eye, giving you something like a superpower! A related problem is that in very dim light people stop perceiving color, because the cones in our retinas stop working, leaving only the rods, which cannot distinguish between wavelengths of light. The scene does not lose its colors at night; we simply stop being able to see them. We want Night Sight photos to be in color; that, too, is part of the superpower, and another potential source of conflict. Finally, our rods have limited spatial acuity, so objects look blurry at night. We want Night Sight photos to be sharp and to show more detail than you could see with your own eyes.

For example, if you put a DSLR on a tripod and use a very long exposure of several minutes, or stack several photos with shorter exposures, the night photo will look like it was taken in daylight. Details are visible in the shadows, and the scene is colorful and sharp. Look at the photo below, taken with a DSLR: stars are visible, so it must be night, yet the grass is green, the sky is blue, and the moon casts tree shadows that look as if they were cast by the sun. The effect is pleasing, but not always what you want, and if you share such a photo with a friend, they will be puzzled about how you took it.


Yosemite Valley at night. Canon DSLR, 28 mm f/4 lens, 3 min exposure, ISO 100 (full photo).

Painters have known for hundreds of years how to make a painting look like night:


"A Philosopher Explaining the Model of the Solar System," Joseph Wright, 1766. The painter uses colors ranging from white to black, yet the scene depicted is clearly dark. How did he do it? He increased the contrast, surrounded the scene with darkness, and blackened the shadows so that no detail is visible in them.

In Night Sight we use similar tricks, in particular applying an S-shaped curve for tone mapping. However, it is quite hard to strike a balance between the "magical superpower" and a reminder of what time of day the photo was taken. Below is a photo where this balance mostly worked:


Pixel 3, 6-second Night Sight exposure, on a tripod (full photo).
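For readers who want to see what an S-shaped tone curve looks like in code, here is a minimal sketch. The particular curve (a normalized logistic around a pivot) and its parameters are illustrative assumptions, not the curve Night Sight actually uses.

```python
import numpy as np

def s_curve(x, pivot=0.5, strength=8.0):
    """Simple S-shaped tone curve on linear values in [0, 1].

    Values below the pivot are pushed toward black (blackening the shadows)
    and values above it toward white, which increases contrast and helps a
    heavily brightened night scene still read as "night".
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    sig = lambda v: 1.0 / (1.0 + np.exp(-strength * (v - pivot)))
    lo, hi = sig(0.0), sig(1.0)
    return (sig(x) - lo) / (hi - lo)    # normalized so 0 -> 0 and 1 -> 1

# Sample points: shadows are crushed, highlights lifted.
print(np.round(s_curve(np.array([0.0, 0.1, 0.25, 0.5, 0.75, 1.0])), 3))
```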

How dark can it be and still shoot with Night Sight?


When illumination drops below about 0.3 lux, autofocus starts to fail. If you cannot see your car keys lying on the floor, your smartphone cannot focus on them either. For this reason we added two manual focus buttons to Night Sight on Pixel 3: "Near" focuses at a distance of a little over a meter, and "Far" at about 4 meters. The latter is the hyperfocal distance of the lens, meaning that everything from half that distance (2 m) to infinity should be in focus. We are also working on improving Night Sight's ability to autofocus in low light. You can still take great photos below 0.3 lux, and even do astrophotography, as demonstrated in this article, but that requires a tripod, manual focus, and a dedicated app using the Android Camera2 API.
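The hyperfocal distance mentioned above can be computed from the lens parameters. Here is a small sketch of that calculation; the focal length, aperture, and circle-of-confusion values below are illustrative guesses for a phone camera, not the Pixel 3's actual specifications.

```python
def hyperfocal_distance_m(focal_length_mm, f_number, coc_mm):
    """Hyperfocal distance H = f^2 / (N * c) + f, returned in meters.

    Focusing at H keeps everything from H/2 to infinity acceptably sharp.
    """
    h_mm = focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm
    return h_mm / 1000.0

# Illustrative numbers only (roughly phone-camera-like, not Pixel 3 specs):
H = hyperfocal_distance_m(focal_length_mm=4.4, f_number=1.8, coc_mm=0.0027)
print(f"hyperfocal ~ {H:.1f} m, sharp from ~{H / 2:.1f} m to infinity")
```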

How far can we go? Eventually, at some level of illumination, read noise swamps the photons collected by the sensor. There are other sources of noise as well, including dark current, which grows with exposure time and depends on temperature. To avoid such effects, biologists photographing fluorescent animals cool their cameras to below -20°C, but we do not recommend doing this with a Pixel phone! Very noisy frames are also hard to align. And even if we solved all of these problems, the wind blows, trees sway, and the stars and clouds move. Taking photos with very long exposures is hard.
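As a rough illustration of why there is a floor, here is a sketch extending the earlier toy noise model with a dark-current term. All of the rates and constants are made-up illustrative values, not measured Pixel figures.

```python
import math

def snr(photon_rate, exposure_s, read_noise_e=3.0, dark_current_e_per_s=1.0):
    """SNR under a toy model: shot noise + dark-current noise + read noise.

    Variances add: shot = signal, dark = dark_current * t, read = read_noise^2.
    """
    signal = photon_rate * exposure_s
    dark = dark_current_e_per_s * exposure_s
    noise = math.sqrt(signal + dark + read_noise_e ** 2)
    return signal / noise

# At very low photon rates, the read-noise and dark-current terms dominate,
# so lengthening the exposure yields ever smaller SNR gains.
for rate in (100.0, 10.0, 1.0, 0.1):
    print(f"{rate:6.1f} e-/s -> SNR at 1 s: {snr(rate, 1.0):.2f}")
```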

How to get the most out of Night Sight


Night Sight not only takes great pictures in low light: it is also just fun to use, because it captures images even when you can see almost nothing. We display a suggestion chip when the scene is dark enough that Night Sight would give you a better photo, but you are not limited to those cases. Just after sunset, at a concert, or on city streets, Night Sight takes clean photos with little noise and makes them brighter than reality. Here are some examples of photos taken with Night Sight, along with A/B comparisons. And here are a few tips for using Night Sight:



Manual focus buttons (Pixel 3 only).

Night Sight works best on Pixel 3. We also made it work on Pixel 2 and the original Pixel, although the latter uses shorter exposures because it lacks optical image stabilization. Also, the automatic white balancer is trained on Pixel 3, so on older phones it will be less accurate. By the way, we brighten the viewfinder in Night Sight mode to make it easier for you to aim, but it runs at a 1/15 s exposure, so it will be noisy and will not reflect the quality of the final photo. So give it a chance: aim and press the shutter button. You will often be pleasantly surprised!

Source: https://habr.com/ru/post/431280/

