
How Gcam Created HDR+ Mode for Smartphone Cameras


Gcam developed a solution for the Google Pixel camera and for a number of other Alphabet products related, one way or another, to image processing. The team appeared in 2011, when Sebastian Thrun, then head of Google X, was looking for a camera that could be built into Google Glass. In such glasses, the camera lets you take first-person photos and share memorable moments with others without having to pull out a camera or smartphone.

This feature can be useful for any user, from parents with young children to doctors performing operations. However, for people to want to use Glass, the glasses had to shoot at least as well as existing flagship smartphones.
In the early stages of developing Glass, the camera was a problem. It was too small and captured too little light, so photos in the dark or in high-contrast scenes came out poorly. Compared with smartphones, its sensor was undersized, which further hurt low-light performance and dynamic range. On top of that, the glasses had very limited computing power and battery life.

Since the glasses had to be lightweight and suitable for constant wear, fitting a larger camera was not an option. So the team began looking for other approaches and asked themselves: what if, instead of trying to solve the problem in hardware, they solved it in software?


To work on Google Glass, the company brought in Marc Levoy, a professor of computer science at Stanford University and an expert in computational photography. He is particularly interested in software-based image capture and processing.

In 2011, Levoy created a team within Google X known as Gcam. Its task was to improve photos on mobile devices using computational photography. Searching for a solution to the problems the Glass project posed, the Gcam team investigated a method called Image Fusion, which captures a quick burst of photos and then combines them into a single better picture. The technique "pulls out" detail in dimly lit shots; overall, the photos came out brighter and sharper.
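The article does not describe Gcam's actual pipeline, but the core idea behind burst fusion can be sketched in a few lines: averaging N aligned frames suppresses random sensor noise by roughly a factor of sqrt(N), which is what lets shadow detail be brightened without amplifying grain. The function name and the simulated scene below are illustrative, not from the source.

```python
import numpy as np

def merge_burst(frames):
    """Average-merge a burst of aligned frames to reduce noise.

    frames: list of equally sized arrays from a rapid burst.
    Averaging N frames cuts random noise roughly by sqrt(N).
    (Real pipelines must also align frames and handle motion.)
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a dim scene captured 8 times with sensor noise.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 20.0)  # true (dim) signal level
burst = [scene + rng.normal(0.0, 5.0, scene.shape) for _ in range(8)]

merged = merge_burst(burst)
single_err = np.abs(burst[0] - scene).mean()  # error of one noisy frame
merged_err = np.abs(merged - scene).mean()    # error after fusion
```

In this toy run, `merged_err` comes out well below `single_err`, mirroring the brighter, cleaner shots the article describes.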

Image Fusion debuted on Glass in 2013, and it soon became clear that the technology could be applied to other product lines. As people took more and more pictures to share important moments, the software behind the cameras had to deliver a beautiful picture regardless of the lighting.

The next version of Image Fusion received a new name, HDR+, went beyond the Glass project, and shipped in the Android camera app on the Nexus 5 and then the Nexus 6.

Today the feature has spread across many applications and products, and in 2015 the Gcam team moved to Google Research. Gcam currently develops technologies for Android, YouTube, Google Photos, and Jump VR. Some of the team's solutions are included in Lens Blur, a feature of the Google camera app, as well as in the software that stitches video into panoramas for Jump virtual reality.

Not long ago, HDR+ became the default mode on the Google Pixel smartphone. The DxOMark team, whose camera ratings are among the most objective, said the Pixel camera turned out to be "the best smartphone camera ever made."

In 2016, reflecting on the project's development, Marc Levoy said: "It took five years to get everything really right. We are fortunate that Google X gave our team a long-term direction and independence."

What's next for Gcam? Marc Levoy, who began his career developing a cartoon animation system used by Hanna-Barbera, is excited about the team's future. One area it intends to pursue is machine learning.

"There are many products that actually change the perception and feel of an image. It may be something as seemingly simple as a filter for adjusting white balance, or something that works with the background: darkening, brightening, or stylizing it. We are in the best place in the world for developing machine learning technology, so we have a real opportunity to combine the creative world with the world of computational photography," said Levoy.
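Levoy's "simple" example, a white-balance filter, can itself be sketched compactly. The snippet below uses the classic gray-world assumption (the average color of a scene should be neutral) to compute per-channel gains; this is a textbook baseline, not Google's method, and the sample image is made up for illustration.

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance: scale each RGB channel so its mean
    matches the overall mean, neutralizing a uniform color cast."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # mean R, G, B
    gains = channel_means.mean() / channel_means     # per-channel gain
    return np.clip(img * gains, 0.0, 255.0)

# A flat image with a warm (reddish) color cast:
img = np.tile(np.array([180.0, 120.0, 90.0]), (2, 2, 1))
balanced = gray_world_wb(img)
# After balancing, all three channel means are equal.
```

Learned filters of the kind Levoy alludes to replace the gray-world heuristic with a model trained to predict the illuminant, but the gain-per-channel structure stays the same.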

Source: https://habr.com/ru/post/402731/

