
3D scanner from a camera, but without a laser v 0.1



I wanted to get a stereo image in virtual reality glasses from a single camera (after all, those glasses are designed around a smartphone or tablet) and then add augmented reality on top of it.

The search for a ready-made solution was disappointing: a lidar is expensive and bulky; a Kinect is heavy both financially and physically; a laser-based 3D scanner takes too long to process a scene.

Thinking it over, I decided to work with what was available, namely a laptop camera.
For the moment, let's abstract away from the concrete implementation and turn to the algorithm.
Everything takes place in three-dimensional space. There is an image plane that can be rotated in any direction; each incoming pixel takes the brightness of one color channel (or a combination of channels), and the pixel is then displaced perpendicular to the image plane by a distance proportional to that brightness (0-255).


In other words, we extrude the pixels of the original image perpendicular to its plane, with the displacement depending on the brightness.
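The core idea can be sketched outside Processing as well. Below is a minimal plain-Java illustration (the class and method names are my own, not from the original sketch): each grayscale value in 0-255 becomes a Z offset, halved just as the sketch later does with `col[countcol] / 2`.

```java
// Sketch of the extrusion step: brightness (0-255) becomes a Z displacement.
public class BrightnessToDepth {

    // Convert a row of grayscale values into Z offsets, halved as in the sketch.
    static float[] toDepth(int[] brightness) {
        float[] z = new float[brightness.length];
        for (int i = 0; i < brightness.length; i++) {
            z[i] = brightness[i] / 2.0f;  // brighter pixel -> pushed further out
        }
        return z;
    }

    public static void main(String[] args) {
        int[] row = {0, 128, 255};
        float[] z = toDepth(row);
        System.out.println(z[0] + " " + z[1] + " " + z[2]);  // 0.0 64.0 127.5
    }
}
```

A brighter pixel therefore sits further from the image plane, which is what produces the relief in the screenshots below.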





As you can see, the result is a reasonably tolerable 3D reconstruction, suitable for producing stereo images.

But that picture was made with a band filter on the brightness; here is what happens if you remove the filter:
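The band filter itself is just the condition from the sketch, pulled out here into a standalone plain-Java helper (the class and method names are mine): a pixel is drawn only if its brightness lies strictly within ±50 of `colfilter`.

```java
// Standalone version of the brightness band filter from the sketch:
// keep a pixel only when its brightness is within +-50 of colfilter.
public class BandFilter {

    static boolean keep(int brightness, int colfilter) {
        return brightness < colfilter + 50 && brightness > colfilter - 50;
    }

    public static void main(String[] args) {
        int colfilter = 25;                       // default value from the sketch
        System.out.println(keep(30, colfilter));  // true: inside the band
        System.out.println(keep(90, colfilter));  // false: too bright, filtered out
    }
}
```

Moving `colfilter` up and down with the w/s keys slides this 100-unit-wide band across the brightness range, slicing the scene into layers.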



Well, as a bonus, a couple of images.

The source code is pretty simple.

The language used is Processing.

The position of the brightness band is controlled with s / w (switch to the English layout).

Code
import processing.video.*;

int colfilter = 25;  // center of the brightness filter band
float rotx = 0;      // rotation angles
float roty = 0;

int countcol = 0;    // linear pixel counter

Capture cam;
int numPixels;       // total number of pixels in the camera frame

void setup() {
  size(900, 700, P3D);

  cam = new Capture(this, 320, 240);  // start the camera
  cam.start();
  numPixels = cam.width * cam.height;
  frameRate(30);  // frames per second
}

void draw() {
  background(255);
  translate(width / 2.0, height / 2.0 - 250, 200);  // move to the middle
  rotateY(roty);  // image rotation
  // rotateX(rotx);

  if (cam.available()) {
    cam.read();        // read a frame from the camera
    cam.loadPixels();  // load its pixels

    float[] col = new float[numPixels];  // brightness of each pixel

    for (int i = 0; i < numPixels; i++) {
      col[i] = red(cam.pixels[i]);  // take the red channel as the brightness
      // col[i] = brightness(cam.pixels[i]);  // any brightness measure works
    }

    translate(-160, 120, 0);  // loop over the pixels and draw them
    for (int i = 0; i < 240; i++) {
      for (int j = 0; j < 320; j++) {
        if (countcol < numPixels) {
          // keep only pixels whose brightness is within +-50 of colfilter
          if (col[countcol] < colfilter + 50 && col[countcol] > colfilter - 50) {
            fill(col[countcol]);
            point(j * 1.3, i * 1.3, col[countcol] / 2);  // extrude along Z by brightness
          }
          countcol++;
        }
      }
    }
    countcol = 0;
  }
}

void mouseDragged() {
  float rate = 0.01;
  rotx += (pmouseY - mouseY) * rate;
  roty += (mouseX - pmouseX) * rate;
}

void keyPressed() {  // move the brightness filter band
  if (key == 'w') {
    colfilter++;
  }
  if (key == 's') {
    colfilter--;
  }
}
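For clarity: the double loop walks the camera frame in row-major order, so the `countcol` counter implicitly computes the linear pixel index of row i, column j in a 320-pixel-wide frame. A plain-Java sketch of that index math (the class name is mine):

```java
// Row-major index math behind the countcol counter in the sketch:
// for a 320x240 frame, pixel (row i, column j) lives at index i * 320 + j.
public class PixelIndex {

    static int index(int row, int col, int width) {
        return row * width + col;
    }

    public static void main(String[] args) {
        System.out.println(index(0, 0, 320));     // 0: first pixel
        System.out.println(index(1, 0, 320));     // 320: start of the second row
        System.out.println(index(239, 319, 320)); // 76799: last pixel of 320x240
    }
}
```

This is also why `countcol` must start at 0 and be reset to 0 after each frame; starting it at 1 would shift every drawn pixel by one position.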

Well, the compiled application itself requires Java (a 64-bit JRE).

If something goes wrong, write to me and I will fix it.

Source: https://habr.com/ru/post/366027/

