
Writing a policy server in C++ for Unity3d



Why do I need a policy server?


Starting with version 3.0, Unity applies security mechanisms to Web Player builds similar to the ones Adobe uses in Flash Player. The idea is that before accessing a server, the client asks it for "permission", and if the server does not grant it, the client will not even try to connect. These restrictions apply both to requests made to remote servers through the WWW class and to sockets. If you want to make a REST request from your client to a remote server, a special XML file must be placed in the root of the domain. It must be called crossdomain.xml and have the following format:

    <?xml version="1.0"?>
    <cross-domain-policy>
        <allow-access-from domain="*"/>
    </cross-domain-policy>

Before making the request, the client downloads this security policy file, checks it and, seeing that all domains are allowed, carries out the request you made.

If you need to connect to a remote server using sockets (TCP/UDP), then before connecting the client makes a request to the server on port 843 to get a security policy file describing which ports can be connected to and from which domains:

    <?xml version="1.0"?>
    <cross-domain-policy>
        <allow-access-from domain="*" to-ports="1200-1220"/>
    </cross-domain-policy>

If the client's connection does not satisfy all the parameters (domain, port), the client throws a SecurityException and does not attempt to connect to the server.
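To make this exchange concrete, here is a rough sketch of a client that fetches the socket policy by hand, which is also handy for checking a policy server from the command line. It is not part of the project; the host, port and buffer size are placeholders, and the request is sent with a terminating zero byte, as the Flash-style policy protocol expects.

    // Minimal manual policy fetch: connect to port 843, send the request, print the reply.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <iostream>

    int main() {
        int sd = socket(AF_INET, SOCK_STREAM, 0);
        if (sd < 0) return 1;

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(843);                       // default policy port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  // server address (placeholder)

        if (connect(sd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) return 1;

        const char request[] = "<policy-file-request/>";
        send(sd, request, sizeof(request), 0);            // sizeof() includes the trailing '\0'

        char buffer[4096];
        ssize_t received = recv(sd, buffer, sizeof(buffer) - 1, 0);
        if (received > 0) {
            buffer[received] = '\0';
            std::cout << buffer << std::endl;             // should print the cross-domain-policy XML
        }
        close(sd);
        return 0;
    }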

This article focuses on writing a server that serves these security policy files; from here on I will call it the Policy server.

How should a policy server work?


The server's job is simple:

  1. The server starts and listens on TCP port 843 (the client can be pointed at a different port with Security.PrefetchSocketPolicy())
  2. The client connects to the server over TCP and sends an XML request for the security policy file:

     <policy-file-request/>
  3. The server parses the request and sends the client the XML with the security policy

In practice, parsing the request makes little sense. What matters is the time the client spends waiting for the security policy file, because it adds to the delay before connecting to the target port. So we can change the server's behavior and send the client the security policy file immediately after it connects.

What is already there?


At the moment there is a server written in Java + Netty; the source code with instructions and a jar are available. One of its key drawbacks is the dependency on a JRE. Deploying a JRE on a Linux server is generally not a problem, but game developers are often client-side programmers who want to make as few extra moves as possible, and they certainly do not want to install a JRE and administer it afterwards. It was therefore decided to write a policy server in C++ that would run as a native application on a Linux machine.

The C++ policy server should be no worse in performance than the old one, and ideally noticeably better. The key performance metrics are the time a client spends waiting for the security policy file and the number of clients that can receive policy files simultaneously, which in the end also comes down to the waiting time for the policy file.

For testing I used this script. It works as follows:

  1. Measures the average ping to the server
  2. Starts several threads (the number is set in the script)
  3. In each thread, requests the security policy file from the policy server
  4. For each request, if the received policy file matches the expected one, records the time spent waiting for it
  5. Prints the results to the console; we are interested in the minimum, maximum and average waiting times, and the same values with the ping subtracted

The script is written in Ruby, but since the standard Ruby interpreter has no support for operating-system-level threads, I used JRuby. The most convenient way is to use rvm; the command to run the script looks like this:

    rvm jruby do ruby test.rb

The results of testing the Policy server written in Java + Netty:

    Average, ms: 245
    Min, ms:     116
    Maximum, ms: 693

What is needed?


In essence, the task is to write a daemon in C++ that can listen on several ports, create a socket when a client connects, write text into the socket and close it. It should have as few dependencies as possible, and those it does have should be available in the repositories of the most common Linux distributions. The code uses the C++11 standard. The minimum set of libraries can be seen from the code below: libev for the event loop, a logging library, and later tbb for the thread-safe queue.
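Before going further, here is a rough sketch of how several listening ports can be served from a single event loop. It assumes libev and its C++ wrapper ev++.h, which the Connector code below relies on; the class and names here are illustrative rather than the project's actual ones, and error handling is omitted.

    // Sketch: one listening socket per port, all driven by one libev loop.
    #include <ev++.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <initializer_list>
    #include <memory>
    #include <vector>

    class Listener {
    public:
        explicit Listener(int port) {
            fd = socket(AF_INET, SOCK_STREAM, 0);
            int yes = 1;
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = INADDR_ANY;
            addr.sin_port = htons(port);
            bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
            listen(fd, SOMAXCONN);

            watcher.set<Listener, &Listener::on_connect>(this);
            watcher.start(fd, ev::READ);        // fires when a client is ready to be accepted
        }

        void on_connect(ev::io& /*w*/, int) {
            // accept() the client and hand the socket over, as Connector::connnect does below
        }

    private:
        int fd;
        ev::io watcher;
    };

    int main() {
        ev::default_loop loop;
        std::vector<std::unique_ptr<Listener>> listeners;
        for (int port : {843, 943}) {           // 943 is just an example of an extra port
            listeners.emplace_back(new Listener(port));
        }
        loop.run(0);                            // a single event loop serves all ports
    }

With this setup, the only thing that changes between the variants discussed below is what the connection callback does with the accepted socket.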


One port, one thread


The structure of the application is quite simple: we need functionality for handling command-line parameters, classes for working with threads, networking code and logging. These are straightforward things that should not cause problems, so I will not dwell on them; the code can be found here. The interesting problem is organizing the processing of client requests. The simplest solution is to send all the data right after accepting the client socket and close the socket immediately. The code responsible for handling a new connection then looks like this:

    void Connector::connnect(ev::io& connect_event, int) {
        struct sockaddr_in client_addr;
        socklen_t client_len = sizeof(client_addr);

        // Accept the incoming connection
        int client_sd = accept(connect_event.fd, (struct sockaddr *)&client_addr, &client_len);
        if (client_sd < 0)
            return;

        // Send the policy text and close the socket right away, in the same thread
        const char *data = this->server->get_text()->c_str();
        send(client_sd, (void*)data, sizeof(char) * strlen(data), 0);
        shutdown(client_sd, 2);
        close(client_sd);
    }

When I tried to run the test with a large number of threads (300, with 10 connections each), I could not even wait for the test script to finish. The conclusion is that this solution does not suit us.

Async


Transmitting data over the network takes time, so it is clear that creating the client socket and sending the data need to be separated. It would also be good to send the data from several threads. A reasonable solution is std::async, which appeared in the C++11 standard. The code responsible for handling a new connection now looks like this:

    void Connector::connnect(ev::io& connect_event, int) {
        struct sockaddr_in client_addr;
        socklen_t client_len = sizeof(client_addr);

        int client_sd = accept(connect_event.fd, (struct sockaddr *)&client_addr, &client_len);

        // Hand the client socket off to a separate thread via std::async
        std::async(std::launch::async, [client_addr, this](int client_socket) {
            const char *data = this->server->get_text()->c_str();
            send(client_socket, (void*)data, sizeof(char) * strlen(data), 0);
            shutdown(client_socket, 2);
            close(client_socket);
        }, client_sd);
    }

The disadvantage of this solution is the lack of control over resources. With minimal changes to the code we made the sending of data to the client asynchronous, but we cannot control how new threads are spawned. Creating a thread is an expensive operation for the operating system, and a large number of threads can degrade server performance.

Pub / Sub


A suitable solution to this problem is the publisher-subscriber pattern. The scheme is as follows: publishers (the connection acceptors) put client sockets into a shared buffer, and subscribers (the handlers) take them from the buffer and send the policy file.

A queue fits the role of the buffer: the first client to connect to the server is the first to get the policy file. The standard C++ library has a ready-made queue container, but it will not do, because we need a thread-safe queue. Moreover, adding a new element must be non-blocking, while reading must block: at server start several subscribers are launched and wait while the queue is empty; as soon as data appears, one or more handlers wake up and process it. Publishers, in turn, asynchronously write socket descriptors into this queue.
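To make these requirements concrete, here is a minimal sketch of a queue with exactly this contract, built on std::mutex and std::condition_variable. It is only an illustration; the project itself uses a ready-made implementation, as discussed below.

    // Illustration of the required contract (not the project's implementation):
    // push() never blocks, pop() blocks while the queue is empty, stop() wakes all
    // waiters and makes pop() return -1 so handler threads can exit.
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    class BlockingQueue {
    public:
        void push(int client_fd) {
            {
                std::lock_guard<std::mutex> lock(mutex);
                queue.push(client_fd);
            }
            cond.notify_one();                         // wake one waiting handler
        }

        int pop() {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [this] { return stopped || !queue.empty(); });
            if (stopped && queue.empty()) return -1;   // "no more work" marker
            int fd = queue.front();
            queue.pop();
            return fd;
        }

        void stop() {
            {
                std::lock_guard<std::mutex> lock(mutex);
                stopped = true;
            }
            cond.notify_all();                         // release every blocked pop()
        }

    private:
        std::queue<int> queue;
        std::mutex mutex;
        std::condition_variable cond;
        bool stopped = false;
    };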

After a little googling I found a couple of ready-made implementations:
  1. https://github.com/cameron314/concurrentqueue
    Here we are interested in blockingconcurrentqueue, which can simply be copied into the project as a header (.h) file. It is quite convenient and has no dependencies, but this solution has the following drawbacks:
    • There are no methods for stopping the subscribers. The only way to stop them is to push special data into the queue that tells the subscribers to finish their work. This is rather inconvenient and can potentially cause a deadlock.
    • It is maintained by one person, and commits have been appearing quite rarely lately.

  2. tbb concurrent queue
    A multi-threaded queue from the tbb (Threading Building Blocks) library. The library is developed and maintained by Intel and has everything we need:
    • Blocking reads from the queue
    • Non-blocking writes to the queue
    • The ability to stop threads waiting for data at any time

    The downside is that this solution adds a dependency, i.e. end users will have to install tbb on their server. In the most common Linux distributions tbb can be installed via the package manager, so there should be no problems with this dependency.

Thus, the code for creating a new connection will look like this:

    void Connector::connnect(ev::io& connect_event, int) {
        struct sockaddr_in client_addr;
        socklen_t client_len = sizeof(client_addr);

        int client_sd = accept(connect_event.fd, (struct sockaddr *)&client_addr, &client_len);

        // Publish the client socket; a handler thread will pick it up from the queue
        clients_queue()->push(client_sd);
        this->handled_clients++;
    }

Client socket processing code:

    void Handler::run() {
        LOG(INFO) << "Handler with thread id " << this->thread.get_id() << " started";
        while (this->is_run) {
            // Blocks until a client socket appears in the queue (or the queue is aborted)
            int socket_fd = clients_queue()->pop();
            this->handle(socket_fd);
        }
        LOG(INFO) << "Handler with thread id " << this->thread.get_id() << " stopped";
    }
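The handle() method itself is not shown in the article. Presumably it repeats the send/shutdown/close sequence from Connector::connnect above, plus a check for the -1 marker that the queue returns when it is stopped. Here is a sketch of that logic, written as a self-contained free function; in the project it would be a Handler method with access to the policy text.

    // Assumed sketch of what the handler does with each socket popped from the queue.
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    void handle_client(int socket_fd, const std::string& policy_text) {
        if (socket_fd < 0) return;                   // pop() returned -1: the queue was stopped

        send(socket_fd, policy_text.data(), policy_text.size(), 0);
        shutdown(socket_fd, SHUT_RDWR);              // equivalent to shutdown(fd, 2) in the code above
        close(socket_fd);
    }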

Queue code:

    void ClientsQueue::push(int client) {
        // Non-blocking: if the element cannot be queued, just log a warning
        if (!this->queue.try_push(client))
            LOG(WARNING) << "Can't push socket " << client << " to queue";
    }

    int ClientsQueue::pop() {
        int result;
        try {
            // Blocks until an element is available; throws when abort() is called
            this->queue.pop(result);
        } catch (...) {
            result = -1;
        }
        return result;
    }

    void ClientsQueue::stop() {
        // Wakes up every thread blocked in pop()
        this->queue.abort();
    }
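The ClientsQueue header is not shown either; judging by the try_push / pop / abort calls, it presumably wraps a tbb::concurrent_bounded_queue<int>. An assumed sketch of the declaration:

    // Assumed ClientsQueue declaration (the article shows only the .cpp side).
    // tbb::concurrent_bounded_queue provides the try_push / pop / abort calls used above:
    // pop() blocks until data arrives, abort() makes blocked pop() calls throw, which the
    // catch(...) above converts into the -1 marker.
    #include <tbb/concurrent_queue.h>

    class ClientsQueue {
    public:
        void push(int client);   // non-blocking: try_push, log a warning on failure
        int  pop();              // blocking: waits for a socket, returns -1 after abort()
        void stop();             // abort() wakes up every blocked pop()

    private:
        tbb::concurrent_bounded_queue<int> queue;
    };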

The code for the entire project with installation instructions can be found here. The result of a test run with ten handler threads:

    Average, ms: 151
    Min, ms:     100
    Maximum, ms: 1322

Total


Comparison table:

                  Java + Netty    C++ Pub/Sub
    Average, ms   245             151
    Min, ms       116             100
    Maximum, ms   693             1322


P.S. The Unity Web Player is currently going through hard times because of the removal of NPAPI support in the major browsers. But if anyone still uses it and hosts servers on Linux machines, you can use this server; I hope it will be useful to you. Special thanks to themoonisalwaysspyingonyourfears for the illustration for this article.

Source: https://habr.com/ru/post/268091/

