There is a Wi-Fi access point direction finder, described here, and an augmented reality device. The task is to highlight the location of a given Wi-Fi access point.
Let me briefly remind you that the direction finder works on the hot-and-cold principle: it shows how well we are aimed at the target.
To determine the location of the signal source, several bearing rays have to be intersected, preferably taken from different positions. And this is where the problems begin. They begin even in the two-dimensional case (when we want to find the location on a map): three or more rays flatly refuse to intersect at a single point.
And in our three-dimensional case, even two rays refuse to intersect.
So an exact solution is out of reach; we will have to look for an approximate one.
The basic algorithm divides the space into cubes. For each cube we store how many rays have passed through it, and then we draw a few of the most popular cubes. I prefer spheres, so I draw the sphere circumscribed around the cube.
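Just to make the geometry concrete, here is a tiny illustration of my own (not project code): the circumscribed sphere is centered at the cube's center, and its radius is half of the cube's space diagonal. cellSize is an assumed name for the cube edge length.

import kotlin.math.sqrt

// Illustration only: radius of the sphere circumscribed around a cube
// is half of its space diagonal, i.e. edge * sqrt(3) / 2.
fun circumscribedSphereRadius(cellSize: Float): Float =
    cellSize * sqrt(3f) / 2f

// Center of the cube with integer index (ix, iy, iz), assuming indices
// count cube corners from the origin of the grid.
fun cubeCenter(ix: Int, iy: Int, iz: Int, cellSize: Float): FloatArray =
    floatArrayOf((ix + 0.5f) * cellSize,
                 (iy + 0.5f) * cellSize,
                 (iz + 0.5f) * cellSize)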
Since we have relatively few rays, the data will be highly sparse, so we store it in a hash table: the key is the triple of the cube's integer coordinates, and the value is the number of rays that have passed through it.
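A minimal sketch of what this storage could look like (the actual Coordinate class in the project may be defined differently; cellSize is again an assumed scale factor):

import kotlin.math.floor

// Sketch of the sparse storage. A data class is convenient as a hash-map key
// because equals()/hashCode() are generated from its components.
data class Coordinate(val x: Int, val y: Int, val z: Int)

val space = hashMapOf<Coordinate, Int>()

// Quantize a world-space point to the index of the cube that contains it.
fun toCube(px: Float, py: Float, pz: Float, cellSize: Float) = Coordinate(
    floor(px / cellSize).toInt(),
    floor(py / cellSize).toInt(),
    floor(pz / cellSize).toInt()
)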
Each bearing has an origin (the position of the direction finder) and a direction. Due to the constraints of harsh reality we, of course, intersect not rays but segments: not only are we limited by the computer's memory, we also cannot detect Wi-Fi points at an infinite distance from us :(
To mark the cubes, computer graphics offers the Bresenham algorithm. True, screens are still only two-dimensional, so the algorithm is usually described for the plane, but that is fine, we will adapt it to 3D.
Before reading about the implementation, watch the video of how it works:
Unfortunately, the camera did not want to focus on the phone, but if you look closely, you can see that the name of the point we are looking for matches the name of the point that was found.
Legend has it that when Peter I's yacht first approached the shores of Kotlin Island, the Swedish soldiers guarding it took to their heels so quickly that they left behind a burning campfire with a kettle of food still cooking over it. And it is from this incident, the story goes, that the island got its name (from "kotyol", the Russian word for kettle).
Among mass-market augmented reality devices, the most accurate and affordable are still those based on Google Tango, so that is what we will use.
As input we get the current Tango "pose" of the device and an event indicating that, at this moment, the device is pointed at the signal source with some accuracy. The Bresenham algorithm needs the start and the end of a segment.
But before we start looking for the end of the segment, let's see whether a previously unknown Wi-Fi point can be found at all:
To obtain the end of the segment, we add its length to the z component of the starting point and then apply the rotation. Why z? Because in this coordinate system z points "forward."
private float[] calcSagittaEnd(float[] start, float[] rotateMatrix) {
    float[] end = new float[4];
    // Copy the start point (x, y, z) and use w = 0.
    System.arraycopy(start, 0, end, 0, 3);
    end[3] = 0f;
    // Step BEARING_LENGTH along z ("forward"), then apply the rotation.
    end[2] += BEARING_LENGTH;
    Matrix.multiplyMV(end, 0, rotateMatrix, 0, end, 0);
    return end;
}
The rotation matrix is taken from the Tango library:
public boolean addBearing() {
    // Pose of the color camera in the area description frame (area -> camera).
    TangoPoseData startPose = TangoSupport.getPoseAtTime(
            mRgbTimestampGlThread,
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR,
            TangoSupport.TANGO_SUPPORT_ENGINE_OPENGL,
            0);
    if (startPose.statusCode == TangoPoseData.POSE_VALID) {
        float[] end = new float[4];
        float[] start = new float[4];
        start[0] = (float) startPose.translation[0];
        start[1] = (float) startPose.translation[1];
        start[2] = (float) startPose.translation[2];
        start[3] = 0.0f;

        TangoPointCloudData pointCloud = mPointCloudManager.getLatestPointCloud();
        if (pointCloud == null) {
            return true; // no point cloud yet: nothing to do this time
        }

        // Relative pose between the color and depth cameras.
        TangoPoseData colorTdepthPose = TangoSupport.calculateRelativePose(
                mRgbTimestampGlThread, TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR,
                pointCloud.timestamp, TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH);
        if (colorTdepthPose.statusCode == TangoPoseData.POSE_VALID) {
            // Transform from the depth camera frame into the OpenGL frame.
            TangoSupport.TangoMatrixTransformData depthTarea =
                    TangoSupport.getMatrixTransformAtTime(
                            pointCloud.timestamp,
                            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                            TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH,
                            TangoSupport.TANGO_SUPPORT_ENGINE_OPENGL,
                            TangoSupport.TANGO_SUPPORT_ENGINE_TANGO,
                            0);
            if (depthTarea.statusCode == TangoPoseData.POSE_VALID) {
                float[] depthTOpenGl = depthTarea.matrix;
                mRenderer.addBearing(start, depthTOpenGl);
                return true;
            }
        }
    }
    return false; // a pose or transform was not valid: the bearing was not added
}
It's time to see how we find the Wi-Fi point in the Scheherazade cafe:
Let us call x the coordinate along which the projection of our bearing segment is the longest.
In the Bresenham algorithm, the x coordinate changes at every step, while y changes only when enough error has accumulated. We will do the same, only with a z coordinate added.
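For reference, here is the classic 2D "driving axis" variant that the 3D code below generalizes. This is a textbook sketch, not taken from the project; it assumes the x axis has the longer projection, i.e. |x1 - x0| >= |y1 - y0|.

import kotlin.math.abs

// Textbook 2D Bresenham, driving-axis form. The error term starts at dx / 2,
// loses dy at every x step, and whenever it goes negative we step y and add dx back.
fun line2d(x0: Int, y0: Int, x1: Int, y1: Int, plot: (Int, Int) -> Unit) {
    val dx = abs(x1 - x0)
    val dy = abs(y1 - y0)
    val sx = if (x1 > x0) 1 else -1
    val sy = if (y1 > y0) 1 else -1
    var err = dx / 2
    var x = x0
    var y = y0
    plot(x, y)
    while (x != x1) {
        x += sx
        err -= dy
        if (err < 0) {
            y += sy
            err += dx
        }
        plot(x, y)
    }
}

The 3D adaptation keeps one driving axis and simply maintains two independent error terms, one for each of the remaining coordinates.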
enum class LengthCoord { x, y, z }

val space = hashMapOf<Coordinate, Int>()

private fun markCubes(start: Coordinate, end: Coordinate, lengthCoord: LengthCoord) {
    // Remap the coordinates so that "x" is always the driving axis,
    // i.e. the one along which the segment has the longest projection.
    fun getX(vertex: Coordinate) = when (lengthCoord) {
        LengthCoord.x -> vertex.x
        LengthCoord.y -> vertex.y
        LengthCoord.z -> vertex.z
    }

    fun getY(vertex: Coordinate) = when (lengthCoord) {
        LengthCoord.x -> vertex.y
        LengthCoord.y -> vertex.x
        LengthCoord.z -> vertex.y
    }

    fun getZ(vertex: Coordinate) = when (lengthCoord) {
        LengthCoord.x -> vertex.z
        LengthCoord.y -> vertex.z
        LengthCoord.z -> vertex.x
    }

    fun setXYZ(x: Int, y: Int, z: Int, lengthCoord: LengthCoord): Coordinate = when (lengthCoord) {
        LengthCoord.x -> Coordinate(x, y, z)
        LengthCoord.y -> Coordinate(y, x, z)
        LengthCoord.z -> Coordinate(z, y, x)
    }

    val dx = when {
        getX(end) > getX(start) -> 1
        getX(end) < getX(start) -> -1
        else -> 0
    }
    val dy = when {
        getY(end) > getY(start) -> 1
        getY(end) < getY(start) -> -1
        else -> 0
    }
    val dz = when {
        getZ(end) > getZ(start) -> 1
        getZ(end) < getZ(start) -> -1
        else -> 0
    }

    val d = arrayOf(
        Math.abs(getX(start) - getX(end)),
        Math.abs(getY(start) - getY(end)),
        Math.abs(getZ(start) - getZ(end)))

    var x = getX(start)
    var y = getY(start)
    var z = getZ(start)
    var errY = d[0] / 2
    var errZ = d[0] / 2

    while (x != getX(end)) {
        x += dx
        errY -= d[1]
        errZ -= d[2]
        if (errY < 0) {
            y += dy
            errY += d[0]
        }
        if (errZ < 0) {
            z += dz
            errZ += d[0]
        }
        addPoint(setXYZ(x, y, z, lengthCoord)) // mark the cube the segment passes through
    }
    addPoint(setXYZ(x, y, z, lengthCoord)) // mark the final cube (also covers the case when the loop body never runs)
}
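The driving axis has to be chosen before markCubes is called; that caller is not shown above, so here is a hedged sketch of what it could look like (addBearingSegment is a hypothetical name, and the segment ends are assumed to be already quantized to cube indices, e.g. with the toCube helper sketched earlier):

import kotlin.math.abs

// Hypothetical caller: pick the axis with the longest projection of the
// segment as the driving one, then mark the cubes along the segment.
fun addBearingSegment(start: Coordinate, end: Coordinate) {
    val dx = abs(end.x - start.x)
    val dy = abs(end.y - start.y)
    val dz = abs(end.z - start.z)
    val driving = when {
        dx >= dy && dx >= dz -> LengthCoord.x
        dy >= dz -> LengthCoord.y
        else -> LengthCoord.z
    }
    markCubes(start, end, driving)
}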
The addPoint function, which increases the count of rays passing through a cube:
private fun addPoint(coord: Coordinate) {
    lock.withLock {
        var v = space[coord]
        if (v != null) {
            v++
            space[coord] = v.toInt()
        } else {
            space[coord] = 1
        }
    }
}
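As a side note for the Kotlin-minded, the same update can be written more compactly; a sketch with equivalent behavior, still under the same lock:

private fun addPoint(coord: Coordinate) {
    lock.withLock {
        // Increment the counter, treating a missing key as zero.
        space[coord] = (space[coord] ?: 0) + 1
    }
}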
Selecting the most popular cubes comes down to sorting them and taking the first few; in the current implementation, no more than three.
fun getIntersection(): List<Array<Float>> {
    val resCandidates = ArrayList<Pair<Array<Float>, Int>>()
    lock.withLock {
        // Collect the cubes whose counters have reached the display threshold.
        for ((coord, counter) in space) {
            if (counter >= showThreshold) {
                resCandidates.add(Pair(scaleCoordinate(arrayOf(
                        coord.x.toFloat(),
                        coord.y.toFloat(),
                        coord.z.toFloat())), counter))
            }
        }
    }
    // Most popular cubes first.
    resCandidates.sortBy { item -> -item.second }
    // Return at most amountCellsToShow candidates.
    return resCandidates.subList(0, minOf(amountCellsToShow, resCandidates.size))
            .map { elem -> elem.first }
}
To sum up: we have considered an approach to determining the spatial position of an object from a set of bearings and proposed an implementation of it.
All source code is available on GitHub.
Constructive criticism from Kotlin masters (and not only Kotlin) is welcome.
Thanks for your attention!
Source: https://habr.com/ru/post/344278/