Not so long ago, to launch a service (and ship a module), I needed a way to quickly select the points visible on a map from an SQL database.
There will be little code here, so it won't distract from understanding the system as a whole.

The task
So, at the input we have a database of 2 million points (test data for now, since the real set is still small) and a Google map (or Yandex, it doesn't matter); each point has coordinates. The user must pan around the map and see all the points, and the server should not grow to the size of a floor rack, given the "desired" load of 200 visitors per second.
Ideas?
There are many points. A lot. Checking whether each point falls inside the map bounds (something like x1 < x < x2 AND y1 < y < y2) does not work: such a query took 20–50 ms and had to run on every map move, and the response grew to tens of kilobytes (and since a point can carry a "balloon" with textual information, even megabytes). Caching these queries makes little sense because the hit rate is low.
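For reference, the bounding-box approach being ruled out looks roughly like this (a sketch with SQLite; the table and column names are my assumptions, not from the original service):

```python
import sqlite3

# Naive bounding-box selection: on millions of rows this predicate
# cannot be answered well by a single-column index, so it gets slow.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (id INTEGER PRIMARY KEY, lat REAL, lng REAL)")
conn.executemany(
    "INSERT INTO points (lat, lng) VALUES (?, ?)",
    [(55.7, 37.6), (40.7, -74.0)],
)
# Current map window bounds
x1, x2, y1, y2 = 55.0, 56.0, 37.0, 38.0
rows = conn.execute(
    "SELECT id FROM points WHERE lat > ? AND lat < ? AND lng > ? AND lng < ?",
    (x1, x2, y1, y2),
).fetchall()
print(rows)
```

This has to re-run on every map move, which is exactly the cost the article is trying to avoid.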
In the search for a solution I came to the following:
1. work with the map as tiles (squares, blocks)
2. store each point's qtree tile path in the database
3. when panning the map without changing the zoom, load only the points from the "new" tiles
To understand qtree: en.wikipedia.org/wiki/Quadtree
To see what will come of it: koti.mbnet.fi/ojalesa/quadtree/quadtree_intro.htm
The "tree" string for each point in the database looks like 01320232312222100231210323203 (the number of digits equals the maximum zoom level).
In essence it means: if the world map is divided into 4 parts (2x2), the point lies in block 1 (numbering from zero; the "root" block we omit for simplicity); if that block is again divided 2x2, the point lies in block 3, and so on. With each zoom level we look one digit "deeper" into our "tree".
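The digit string can be computed with standard Web Mercator tile math: find the tile (x, y) at the maximum zoom and interleave the bits of x and y into base-4 digits (0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right). A minimal sketch; the function name is mine:

```python
import math

def quadkey(lat: float, lng: float, zoom: int) -> str:
    """Encode lat/lng into a quadtree key of `zoom` digits (Web Mercator tiling)."""
    lat_rad = math.radians(lat)
    n = 2 ** zoom  # number of tiles per axis at this zoom
    x = int((lng + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    digits = []
    for i in range(zoom, 0, -1):
        mask = 1 << (i - 1)
        # Each digit packs one bit of x (value 1) and one bit of y (value 2)
        digits.append(str((1 if x & mask else 0) + (2 if y & mask else 0)))
    return "".join(digits)
```

Truncating this string to k digits gives the tile the point sits in at zoom level k, which is what the storage trick below relies on.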

What is the beauty of storing data in this form?
The beauty is that, knowing the tile coordinates and the zoom level, we get a qtree prefix of the form 132023231, and to find all the points that fall into this area at the given zoom level we only need a query with the condition
point_qtree LIKE '132023231%' (a prefix match from the first character). We must remember to index the point_qtree field so that selections run at top speed: with an ordinary B-tree index, a prefix LIKE turns into a fast range scan.
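A minimal sketch of the schema and the prefix query in SQLite (table, column, and index names are my assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE points (id INTEGER PRIMARY KEY, lat REAL, lng REAL, point_qtree TEXT)"
)
# The index is what lets the prefix LIKE be answered as a range scan.
conn.execute("CREATE INDEX idx_points_qtree ON points (point_qtree)")
conn.executemany(
    "INSERT INTO points (lat, lng, point_qtree) VALUES (?, ?, ?)",
    [
        (55.75, 37.62, "1320232312"),
        (55.76, 37.63, "1320232313"),
        (40.71, -74.00, "0320101232"),
    ],
)
tile = "132023231"  # the visible tile at the current zoom level
rows = conn.execute(
    "SELECT id FROM points WHERE point_qtree LIKE ? || '%'",
    (tile,),
).fetchall()
print(rows)  # the two points inside tile 132023231
```

One query per visible tile, each hitting only one narrow slice of the index.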
Hooray. Point selection now takes a fraction of the time it used to.
Making the map
When moving around the map, we must do a number of things:
- determine which tiles we see
- determine for which tiles we have already loaded the points
To determine the visible tiles, any map API lets you query the coordinates of the map window and the zoom level. From these you compute the tiles of the top-left and bottom-right corners and enumerate all the tiles between them.
We will have an array of the form 012, 013, 021, 020.
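Enumerating the visible tiles from the two corner tiles can be sketched like this (the names are mine; the corner tile coordinates x, y are assumed to come from the map API):

```python
def tile_quadkey(x: int, y: int, zoom: int) -> str:
    """Quadtree key for tile (x, y) at a given zoom: bit-interleave of x and y."""
    digits = []
    for i in range(zoom, 0, -1):
        mask = 1 << (i - 1)
        digits.append(str((1 if x & mask else 0) + (2 if y & mask else 0)))
    return "".join(digits)

def tiles_in_view(top_left, bottom_right, zoom):
    """All tiles between the top-left and bottom-right corner tiles."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return [
        tile_quadkey(x, y, zoom)
        for y in range(y1, y2 + 1)
        for x in range(x1, x2 + 1)
    ]
```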
Save it to a variable that we call oldTile.
Each time the map moves, we compute all visible tiles and compare them with the current oldTile array, producing a newTile array that lists all the tiles not yet in oldTile. Then for each new tile we send an ajax request to the server, say /point/012, and add the returned points to the map (without clearing the old ones); we could send the whole array of new tiles in one request, but I will explain below why we split it into separate per-tile requests. Finally, we do a simple oldTile = oldTile + newTile.
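The move handler above can be sketched with sets doing the tile bookkeeping (the real code lives in the browser; this is plain Python for illustration, names are mine):

```python
old_tiles: set = set()  # tiles whose points are already loaded ("oldTile")

def on_map_move(visible_tiles: set) -> set:
    """Return the tiles that still need a request like /point/<tile>,
    then remember them as loaded (oldTile = oldTile + newTile)."""
    global old_tiles
    new_tiles = visible_tiles - old_tiles
    # for tile in new_tiles: request(f"/point/{tile}")  # one request per tile
    old_tiles |= new_tiles
    return new_tiles
```

On the first move every visible tile is "new"; after a small pan, only the tiles that scrolled into view get requested.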
When the zoom level changes, we have to clear oldTile (and, in one variant, the points on the map as well. The second variant: do not clear the points, but then the oldTile array should not be cleared either, only recalculated, e.g. from 01 we should get 010, 011, 012, 013). After that, we proceed "according to plan": compute the visible tiles of the map, and so on.
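The recalculation in the second variant, replacing each loaded tile with its four children when zooming in one level, can be sketched as:

```python
def refine_tiles(tiles: set) -> set:
    """On zoom-in by one level, each loaded tile splits into its 2x2 children,
    whose keys are the parent's key plus one more digit."""
    return {tile + digit for tile in tiles for digit in "0123"}
```

So `refine_tiles({"01"})` yields the four children of tile 01, matching the 01 → 010, 011, 012, 013 example above.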
What this method gives us:
- we request only the new points from the newly shown areas of the map
- we can have the server answer through memcached and, by configuring nginx to read from it, serve tile points straight from the cache (this is why we settled on a separate request per tile instead of fetching "in bulk": each tile URL is its own cache key)
- we reduce the number of queries to the server's database
- we reduce traffic
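The nginx-to-memcached part can be sketched with the stock ngx_http_memcached_module; the key scheme, ports, and backend location are my assumptions:

```nginx
location /point/ {
    set $memcached_key "point:$uri";    # e.g. "point:/point/012"
    memcached_pass 127.0.0.1:11211;
    default_type application/json;
    error_page 404 502 504 = @backend;  # cache miss: fall through to the app
}

location @backend {
    proxy_pass http://127.0.0.1:8080;   # app queries SQL, fills memcached, responds
}
```

With one URL per tile, a cache hit never touches the application or the database at all.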
As the next stage of optimizing the system:
1. When adding a new point to the database, forcibly refresh in memcached the cached arrays for the entire branch of the tree the point belongs to. This way requests never get past nginx + memcached while the data stays fresh, which would let us move from memcached to Redis without fearing "cold starts".
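A sketch of that refresh step, with a plain dict standing in for memcached and a lookup callable standing in for the SQL prefix selection (the key scheme and all names are my assumptions):

```python
import json

def update_branch(cache: dict, points_for_tile, point_qtree: str) -> None:
    """On adding a point, refresh the cached payload for every tile prefix
    along the point's quadtree branch, so nginx+memcached always serves
    fresh data."""
    for depth in range(1, len(point_qtree) + 1):
        tile = point_qtree[:depth]
        cache[f"point:/point/{tile}"] = json.dumps(points_for_tile(tile))

# Demo: a tiny in-memory "database" of qtree strings and a dict as the cache.
db = ["0123", "0120"]
lookup = lambda tile: [q for q in db if q.startswith(tile)]
cache = {}
update_branch(cache, lookup, "0123")
```

After inserting a point with qtree 0123, the cached payloads for tiles 0, 01, 012, and 0123 are all rebuilt, so every zoom level stays consistent.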