For those who have not read the first part, briefly: it described how we created the layers and effects used to combine an old and a new photo. Details can be found here. In this part our iOS developer, heximal, talks about how the geo-positioning and map functionality was implemented. Since his account is read-only, I am posting this on his behalf (it would be great if someone sent him an invite). Everything below is in his own words.
“Initially, timera was designed as a means of creating time tunnels. According to many scientific theories, space and time are directly related, so location is a very important aspect of a timera. The first implementation assumed that old photos would be searched for exclusively on the map: the user opened the map screen in the application, found old photos around him, chose the one he liked, and started creating a timera. For this, a web service was developed that returned all the old photos in a region defined by the visible area of the map screen (minlat, maxlat, minlng, maxlng). At the testing stage it became clear that this process needed to be optimized: the number of pins in a given area could grow so large that the map turned into an incomprehensible mess, and in the end choosing an object on it became very difficult.
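For illustration, a request against such a bounding-box service might be assembled from the map's visible region like this. This is only a sketch: the endpoint URL and query parameter names beyond minlat/maxlat/minlng/maxlng are assumptions, not the real timera API; only the GMSProjection/GMSCoordinateBounds calls come from the Google Maps SDK.

```objectivec
// Hypothetical sketch: building a bounding-box request for old photos
// from the currently visible map region. The host is an assumption.
GMSVisibleRegion region = mapView.projection.visibleRegion;
GMSCoordinateBounds *bounds =
    [[GMSCoordinateBounds alloc] initWithRegion:region];
NSString *urlString =
    [NSString stringWithFormat:
        @"https://api.example.com/photos?minlat=%f&maxlat=%f&minlng=%f&maxlng=%f",
        bounds.southWest.latitude,  bounds.northEast.latitude,
        bounds.southWest.longitude, bounds.northEast.longitude];
NSURL *requestURL = [NSURL URLWithString:urlString];
```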

This problem was solved by clustering. Clusters are pins on the map associated with a group of objects rather than with any particular one. Visually, a cluster is an icon with a number showing how many objects it groups.
It should be mentioned here that we chose Google Maps as the map service. Why did we prefer it to native Apple Maps? Most likely, this decision will be revisited in the near future. It is just that at design time the memory of the Apple Maps fiasco was still fresh, as was my personal experience of using both frameworks.
Back to clustering. Initially we considered implementing clusters locally. It turned out that Google Maps can do this in literally one line of code, but only on Android. We had already started a local implementation of clusters on iOS when a better idea took hold of the collective consciousness: improve the web service so that it returns the clusters themselves. The server-side solution has a big advantage in terms of optimization: firstly, it reduces the number of transferred objects (read: less traffic), and secondly, it reduces the amount of data stored in Core Data. The Core Data model also had to be modified: a new entity, MapCluster, was added with the attributes latitude, longitude, zoom, count, and objectId,
where
latitude, longitude - the cluster coordinates;
zoom - the zoom level set by the user;
count - the number of objects grouped in the cluster;
objectId - if a cluster is tied to a single specific object, it is displayed as a real clickable pin.
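Put together, the entity might look like the following NSManagedObject subclass. This is a sketch based on the attribute list above; the exact attribute types are assumptions (Core Data exposes numeric attributes as NSNumber by default).

```objectivec
// Sketch of the MapCluster Core Data entity; types are assumptions.
@interface MapCluster : NSManagedObject
@property (nonatomic, strong) NSNumber *latitude;   // cluster coordinates
@property (nonatomic, strong) NSNumber *longitude;
@property (nonatomic, strong) NSNumber *zoom;       // zoom level the cluster was built for
@property (nonatomic, strong) NSNumber *count;      // number of grouped objects
@property (nonatomic, strong) NSString *objectId;   // set when the cluster maps to a single object
@end
```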
Next, a technical detail: when the user pans the map or changes the zoom level, a query for the selected area is first made against the local storage and the clusters from the resulting collection are drawn on the map, while a request with the same parameters is sent to the server. If the connection is fine and the server returns a response, the local clusters are removed from the database and the new ones are stored in their place; this is how the update happens.
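The "local first, then network" flow described above can be sketched roughly as follows. The delegate method mapView:idleAtCameraPosition: is from the Google Maps SDK, but every helper here (fetchLocalClustersInBounds:zoom:, requestClustersInBounds:zoom:completion:, and the rest) is a hypothetical name standing in for the actual timera code.

```objectivec
// Hedged sketch of the update cycle; helper methods are hypothetical.
- (void)mapView:(GMSMapView *)mapView
        idleAtCameraPosition:(GMSCameraPosition *)position {
    GMSCoordinateBounds *bounds = [[GMSCoordinateBounds alloc]
        initWithRegion:mapView.projection.visibleRegion];
    NSInteger zoom = (NSInteger)position.zoom;

    // 1. Show whatever is already cached locally.
    NSArray *cached = [self fetchLocalClustersInBounds:bounds zoom:zoom];
    [self renderClusters:cached onMap:mapView];

    // 2. Ask the server for fresh clusters with the same parameters.
    [self requestClustersInBounds:bounds zoom:zoom
                       completion:^(NSArray *fresh, NSError *error) {
        if (error) return; // keep the local clusters if the connection fails
        // 3. Replace the stale local clusters with the server's answer.
        [self deleteLocalClustersInBounds:bounds zoom:zoom];
        [self saveClusters:fresh];
        [self renderClusters:fresh onMap:mapView];
    }];
}
```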
I am sure our server-side developers could tell a lot of interesting things about the implementation of the web service. I can only say that new database entities were created for it as well, aggregating objects into clusters, and they are filled by a scheduled task.
I would also like to mention a couple of interesting points about the iOS implementation of clusters. To render them we had to create a small class that returns an image of a circle with a number in it, because the setIcon method of the GMSMarker class from GoogleMaps.framework expects a UIImage to display as the pin.
The resulting class is a subclass of UIView containing the nested elements that make up a cluster's appearance, and the UIImage is obtained from all of this by the following method:
-(UIImage *)renderedClusterImage {
    // Render the view hierarchy into a bitmap context at screen scale.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return capturedImage;
}
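Usage then comes down to handing the rendered image to a marker. In this sketch the cluster view's class name and initializer are hypothetical (the post does not name them); the GMSMarker calls and the icon property are from the Google Maps SDK.

```objectivec
// Hypothetical usage: render the cluster view once and give the
// resulting UIImage to the marker. ClusterView is an assumed name.
ClusterView *clusterView = [[ClusterView alloc] initWithCount:12];
GMSMarker *marker = [GMSMarker markerWithPosition:
    CLLocationCoordinate2DMake(55.7558, 37.6173)];
marker.icon = [clusterView renderedClusterImage];
marker.map = mapView;
```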
I would also like to talk about one feature of the GoogleMaps framework that I had to tinker with quite a bit: creating a custom InfoWindow (the window with a description that appears when the user taps a pin).
To display a custom info window, you implement the delegate method
- (UIView *)mapView:(GMSMapView *)mapView markerInfoWindow:(GMSMarker *)marker;
As you can see, this method must return a UIView object. Various components (UILabel, UIImageView, etc.) can be placed on this view, and all of it will eventually be displayed next to the selected pin. Everything seems clear and raises no suspicion. In our case, however, the window had to be redrawn, because the preview image may not yet have been downloaded from the server at the moment the InfoWindow opens. In that case the image download is started, and afterwards the InfoWindow needs to be redrawn. And here there was a nuance. I thought it would be enough to keep a pointer to the UIView returned from the delegate method and then change the image of the nested UIImageView through its properties. It turned out that GoogleMaps rasterizes (converts to a UIImage) the UIView handed to it, probably for optimization reasons, so all attempts to redraw it the straightforward way were in vain.
As a result, we had to invent a hack. It works as follows: when the pin is tapped, an empty InfoWindow is shown if there is no data yet, the download process starts, and then the following happens:
- (UIView *)mapView:(GMSMapView *)mapView markerInfoWindow:(GMSMarker *)marker {
    TMMapImagePreview *view = [[TMMapImagePreview alloc]
        initWithFrame:CGRectMake(0, 0, mapView.frame.size.width * 0.7, 64)];
    MapCluster *cluster = (MapCluster *)marker.userData;
    NSData *imgData = cluster.image.imageThumbnailData;
    if (imgData) {
        view.imgView.image = [UIImage imageWithData:imgData];
    } else {
        NSString *url = cluster.image.imageThumbnailURL;
        if (url) {
            [[ImageCache sharedInstance] downloadDataAtURL:[NSURL URLWithString:url]
                                         completionHandler:^(NSData *data) {
                cluster.image.imageThumbnailData = data;
                // Force Google Maps to rebuild the info window:
                [marker setSnippet:@""];
                [mapView_ setSelectedMarker:marker];
            }];
        }
    }
    return view;
}
Here TMMapImagePreview is a UIView subclass in which the InfoWindow layout is built. All the magic of the forced redraw is contained in the completion block of the downloadDataAtURL method of the ImageCache singleton which, as is easy to guess, handles downloading and caching graphic content: touching the marker's snippet and re-selecting the marker makes GoogleMaps call the delegate method again, this time with the downloaded image available.
It would be great if you downloaded the application, tried it out, and gave us your considered criticism and comments. Moreover, we have released an update since the first part of this post was written. We need feedback. Thanks!