
New technology allows you to quickly embed new objects in images





Even for an experienced graphic designer, embedding a new object into an existing photograph is a fairly difficult task. The complexity varies from case to case, of course, but it is rarely a matter of a couple of minutes. Kevin Karsch, a PhD student at the University of Illinois, has built his own kind of "Photoshop" for exactly this job: his technology lets you insert almost any object into any image with a high degree of realism.



Kevin Karsch demonstrated his work at the SIGGRAPH Asia conference held in Hong Kong this month. According to him, even a beginner can use the technology, without special equipment, training, or Adobe Photoshop itself; everything can be done without prior experience, in just a few minutes. Moreover, the results produced by Karsch's method are impressive.










The developer describes his technology as "a method for realistically inserting new objects into existing photographs without special equipment, multi-frame imaging, or other tools." Karsch's approach is unusual: his software automatically builds a rough three-dimensional model of the scene in the photo, and once that model exists, the user can add any object to it.







For example, the image below shows an original photo and the edited result, along with the regions marked by a user who was not previously familiar with the program's interface (he had only watched a short demo video).











All the work in the example above took about ten minutes, not counting rendering time.



The developers ran a user study to find out whether viewers can spot the artificial objects in the photos. It turned out to be difficult even for people who consider themselves experts in the field. At the same time, Karsch's algorithm achieves roughly the same realism as more complex and resource-intensive methods.



Such a system could be used in the film and game industries, as well as in interior design: a user could, for example, photograph a room and see how various furnishings would look in it.











Other examples









































The algorithm works as follows:

1) The scene geometry (floor, ceiling, walls, corners) is estimated from differences in pixel color, and a 3D model is built from it.

2) Given this model, the amount of reflected (indirect) light is estimated at every pixel.

3) Direct light sources are found by quickly analyzing the bright and dark pixels in the photograph (shadow detection).

4) The user then refines this data, adjusting the geometry parameters and the contours of key objects.

5) The system recalculates the shape of the light sources based on the user's feedback.

6) The scene is rendered from the automatically estimated data.

7) The scene is rendered again using the user-refined information.
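To make the steps above concrete, here is a heavily simplified sketch of the light-detection, reflected-light, and compositing stages. This is not Karsch's actual implementation (which solves an inverse-rendering problem on an estimated 3D model); the function names, the percentile threshold, and the box-blur stand-in for indirect light are all illustrative assumptions.

```python
import numpy as np

def detect_light_sources(gray, percentile=99.0):
    """Step 3, simplified: treat the brightest pixels of a grayscale
    image as candidate direct light sources."""
    thresh = np.percentile(gray, percentile)
    return np.argwhere(gray >= thresh)

def estimate_reflected_light(gray, kernel=5):
    """Step 2, simplified: a local box blur approximates the slowly
    varying reflected (indirect) light at every pixel."""
    pad = kernel // 2
    padded = np.pad(gray, pad, mode="edge")
    blurred = np.zeros_like(gray, dtype=float)
    h, w = gray.shape
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    return blurred / (kernel * kernel)

def composite(background, obj_mask, obj_color):
    """Steps 6-7, simplified: a plain alpha composite instead of a
    full physically based re-render of the inserted object."""
    out = background.copy()
    out[obj_mask] = obj_color
    return out
```

In the real pipeline the detected lights and the estimated geometry feed a renderer that casts correct shadows from the inserted object; the composite here only shows where those results would be merged back into the photo.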





Via kevinkarsch + Dailymail

Source: https://habr.com/ru/post/134508/


