
Configuring a MongoDB Sharded Cluster with x.509 Authentication

Good day to all! Recently, life threw the author into the exciting job of deploying a MongoDB cluster with replication, sharding, and authentication based on x.509 certificates. In this article I would like to lay out my thoughts and share the experience gained. Since some things were not trivial and did not work on the first attempt, I think this step-by-step guide may be useful to those who are just getting acquainted with sharding and with MongoDB in general.
I will also be glad to see recommendations for adding to or changing the cluster configuration, as well as questions or criticism of the article itself or of the subject matter.

Introduction


The project within which the cluster was implemented is a service that collects statistics from client devices and presents them in aggregated form on a website or via a REST API. For a long time the project ran steadily under low load, so a MongoDB server installed as-is out of the box (without sharding or replication) coped with its task, and daily cron backups of the database provided good sleep. Thunder struck, as usual, at one moment: several large customers arrived with a large number of devices, data, and requests. The result was unacceptably long queries against the grown database, and the culmination was a server crash in which we almost lost data.

Thus, overnight there arose a need to improve fault tolerance, data integrity, and performance, with the possibility of future scaling. It was decided to use the existing potential of MongoDB to eliminate these problems, namely to organize a sharded cluster with replication and to migrate the existing data to it.

Some theory


For a start, let's take a quick look at a MongoDB Sharded Cluster and its main components. Sharding is a method of horizontal scaling for systems that store and serve data. Unlike vertical scaling, where performance is increased by improving an individual server (for example by switching to a more powerful CPU or adding RAM or disk space), sharding works by distributing the data set and the load across several servers and adding new servers as needed (which is our case).
The advantage of such scaling is its almost unlimited potential for expansion, while a vertically scaled system is inherently limited, for example, by the hardware available from the hosting provider.

What do we expect to get from switching to a sharded MongoDB cluster? First, distribution of the read/write load between the cluster shards; second, high fault tolerance (constant data availability) and data integrity thanks to redundant copying (replication).

In MongoDB, data is sharded at the collection level, which means that we can explicitly specify which collection's data should be distributed across the cluster shards. It also means that the whole set of documents of a sharded collection is split into parts of roughly equal size, chunks, which the MongoDB balancer then distributes almost evenly between the cluster shards.

Sharding is disabled by default for all databases and collections, and we cannot shard the cluster system databases, such as admin and config. If we try, we will get an unambiguous refusal from MongoDB:

 mongos> sh.enableSharding("admin")
 { "ok" : 0, "errmsg" : "can't shard admin database" }

A sharded MongoDB cluster imposes three mandatory conditions: there must be shards; all communication between the cluster and its clients must go exclusively through mongos routers; and the cluster must have a config server (based on an additional mongod instance or, as recommended, on a Replica Set).

The official MongoDB documentation says: "In production, all shards should be replica sets." Making each shard a Replica Set increases its fault tolerance (in terms of data availability on any member of the replica set) and, thanks to the multiple copies of the data, ensures better data preservation.

A Replica Set is a union of several running mongod instances that store copies of the same data set. In the case of a shard replica set, this is the set of chunks assigned to that shard by the MongoDB balancer.

One of the members of the replica set is designated as the primary (PRIMARY) and accepts all write operations (while also serving reads); the other mongod instances are declared SECONDARY and update their copy of the data set asynchronously from the PRIMARY. They are also available for reading data. As soon as the PRIMARY becomes unreachable for some reason and stops communicating with the other members of the replica set, an election of a new PRIMARY is announced among the remaining members. In fact, besides PRIMARY and SECONDARY, a Replica Set may contain a third kind of member, the arbiter (ARBITER).

The arbiter does not keep a copy of the data set; instead it performs the important task of voting and is designed to protect the replica set from a dead-end election. Imagine a situation where an even number of members vote for two candidates with the same total number of votes, and so on endlessly... Adding an arbiter to such an "even" replica set resolves the outcome: it gives its vote to one of the candidates for the PRIMARY role without requiring resources to maintain another copy of the data set.

Note that a Replica Set is a union of mongod instances, so nothing prevents you from assembling a replica set on a single server, pointing the members at directories on different physical drives, and achieving some data safety; still, the ideal arrangement is to run the replica members (mongod) on different servers. In general, MongoDB is very flexible in this respect and lets us assemble the configuration we need based on our needs and capabilities, without imposing hard limits. A Replica Set as such, outside the context of a Sharded Cluster, is one of the typical MongoDB deployment schemes, giving a high degree of fault tolerance and data protection. In that case each member of the replica set stores a complete copy of the entire data set, rather than the part of it defined by a shard's set of chunks.

Infrastructure


The cluster configuration described below is built on three OpenVZ virtual containers, each located on a separate dedicated server.

Two of the virtual machines (hereinafter server1.cluster.com and server2.cluster.com) have more resources: they are responsible for replication, sharding, and serving data to clients. The third machine (server3.cluster.com) has a weaker configuration; its purpose is to run the arbiter mongod instances.

The cluster will contain three shards. In our scheme we follow the recommendation of building shards as replica sets, but with one assumption: each shard replica set has its own PRIMARY, SECONDARY, and ARBITER running on the three different servers, and there is also a config server, likewise built with replication.

However, we have only three servers, one of which does not replicate data (except as a member of the config replica set), so all three shards effectively live on two servers.

In the diagrams from the MongoDB documentation, the mongos routers are shown on the application servers. I decided to break this rule and place the mongos instances (we will have two) on the data servers server1.cluster.com and server2.cluster.com, getting rid of additional MongoDB configuration on the application servers and working around certain restrictions related to them. The application servers can connect to either of the two mongos instances, so if one of them has problems they reconnect to the other after a short timeout. The application servers, in turn, sit behind a DNS with Round Robin configured: it alternately returns one of the two addresses, providing primitive balancing of connections (client requests). There are plans to replace it with some "smart" DNS (maybe someone will suggest a good solution in the comments, I will be grateful!) that returns the appropriate server based on the client's geolocation.

For clarity, here is the general scheme of the resulting cluster with the server names and the applications running on them. The colons indicate the ports assigned to the applications.



Initial setup


Go to server1.cluster.com and install the latest version of the MongoDB Community Edition package from the official repository; at the time the cluster was assembled it was version 3.2.8. In my case Debian 8 is installed on all the cluster machines; detailed installation instructions for your OS can be found in the official documentation.
We import the public key, update the package lists, and install the MongoDB server together with a set of utilities:

 server1.cluster.com:~# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
 server1.cluster.com:~# echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/3.2 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
 server1.cluster.com:~# apt-get update
 server1.cluster.com:~# apt-get install -y mongodb-org

Done! As a result we have an installed and already running MongoDB server on the machine. For now, stop the mongod service (we will return to it later):

 server1.cluster.com:~# service mongod stop 

Next we create a directory in which all the data of our cluster will be stored; in my case it is located at /root/mongodb. Inside it we form the following directory structure:

 .
 ├── cfg
 ├── data
 │   ├── config
 │   ├── rs0
 │   ├── rs1
 │   └── rs2
 ├── keys
 └── logs

In the data subdirectory we will store the replica set data itself (including the config replica). In cfg we will keep the configuration files for the mongo{d/s} instances. In keys we will put the keys and certificates for x.509 authentication of cluster members. The purpose of the logs folder is, I think, clear to everyone.
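If it helps, the whole layout can be created in one go with a couple of commands (a small sketch, assuming the same /root/mongodb root directory used throughout this article):

 mkdir -p /root/mongodb/{cfg,keys,logs}
 mkdir -p /root/mongodb/data/{config,rs0,rs1,rs2}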

The installation and the directory setup must be repeated in the same way on the other two servers.

Before setting up and linking the cluster components, let's make sure that everything works as it should. Start a mongod instance on port 27000, pointing it at the data directory /root/mongodb/data/rs0:

 mongod --port 27000 --dbpath /root/mongodb/data/rs0 

On the same server, open another terminal and connect to the running Mongo:

 mongo --port 27000 

If everything went well, we end up in the mongodb shell and can execute a couple of commands. By default we are switched to the test database; we can verify this by entering the command:

 > db.getName()
 test

Delete this database, which we do not need, with the command:

 > db.dropDatabase()
 { "ok" : 1 }

And initialize a new database, which we will experiment with, simply by switching to it:

 > use analytics
 switched to db analytics

Now let's try to enter some data. To keep things concrete, I propose to use as the running example for the rest of the article a system that collects statistics: data periodically arrives from remote devices, is processed on the application servers, and is then stored in the database.

Add a couple of devices:

 > db.sensors.insert({'s':1001, 'n': 'Sensor1001', 'o': true, 'ip': '192.168.88.20', 'a': ISODate('2016-07-20T20:34:16.001Z'), 'e': 0})
 WriteResult({ "nInserted" : 1 })
 > db.sensors.insert({'s':1002, 'n': 'Sensor1002', 'o': false, 'ip': '192.168.88.30', 'a': ISODate('2016-07-19T13:40:22.483Z'), 'e': 0})
 WriteResult({ "nInserted" : 1 })

Here,
s is the serial number of the sensor;
n is its string identifier;
o is its current status (online/offline);
ip is the IP address of the sensor;
a is the time of its last activity;
e is an error flag.

And now a statistics record of the following form:

 > db.statistics.insert({'s':1001, 'ts': ISODate('2016-08-04T20:34:16.001Z'), 'param1': 123, 'param2': 23.45, 'param3': 'OK', 'param4': true, 'param5': '-1000', 'param6': [1,2,3,4,5]})
 WriteResult({ "nInserted" : 1 })

s - sensor number;
ts - TimeStamp;
param1..param6 - some statistics.

Clients of the statistics service often run aggregation queries to get a summarized view of the data collected from their devices. Almost all of these queries involve the serial number of the sensor (the s field), and it is frequently used for sorting and grouping, so for optimization (and, looking ahead, for sharding) we add an index on it to the statistics collection:

 > db.statistics.ensureIndex({"s":1})

Choosing and creating the right indexes is a topic for a separate discussion; for now I will limit myself to this.
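A quick way to make sure the index is really in place is to list the collection's indexes; besides the default _id_ index, the new index on s should appear (just a sanity check, not a required step):

 > db.statistics.getIndexes()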

Authentication using x.509 certificates


To understand the task ahead, let's look forward a bit: we will have mongod instances running on different servers that need to be combined into replica sets, mongos routers that connect to them, and clients that must be able to connect to the resulting cluster safely. Naturally, all participants in this exchange must be authenticated when connecting (that is, trusted), and it is desirable that the data channel itself be protected as well. MongoDB supports TLS/SSL and several authentication mechanisms. One of the ways to establish trust between the members of a cluster is to use keys and certificates. Regarding the choice of mechanism, the MongoDB documentation gives this recommendation:

"Keyfiles are recommended for testing or development environments. For production environments we recommend using x.509 certificates."

X.509 is an ITU-T standard for public key infrastructure and privilege management. It defines the format of public key certificates and the way they are distributed as signed digital certificates. A certificate binds a public key to a subject (the certificate's owner), and the trustworthiness of this binding is guaranteed by the digital signature of a trusted certificate authority.

(Besides x.509, MongoDB also offers highly reliable Enterprise-level mechanisms, Kerberos Authentication and LDAP Proxy Authentication, but that is not our case, so below we configure x.509 authentication.)

The x.509 authentication mechanism requires a secure TLS/SSL connection to the cluster, which is enabled by the corresponding mongod launch argument --sslMode or by the net.ssl.mode parameter in the configuration file. Authenticating a client connecting to the server then comes down to verifying its certificate rather than a login and password.

Within this mechanism, the certificates we generate fall into two types: cluster member certificates, tied to a specific server and used for internal authentication of the mongod instances on different machines, and client certificates, tied to an individual user and used to authenticate external clients of the cluster.

To satisfy the x.509 requirements we need a single Certificate Authority (CA). Both the client certificates and the cluster member certificates will be issued on its basis, so first of all we create the CA's private key. The proper way is to perform all the following steps and store the secret keys on a separate machine, but in this article I do everything on the first server (server1.cluster.com):

 server1.cluster.com:~/mongodb/keys# openssl genrsa -out mongodb-private.key -aes256
 Generating RSA private key, 2048 bit long modulus
 .....................+++
 ........................................................+++
 e is 65537 (0x10001)
 Enter pass phrase for mongodb-private.key:
 Verifying - Enter pass phrase for mongodb-private.key:

When asked for a pass phrase, we enter and confirm some reliable combination, for example "temporis$filia$veritas" (you will of course come up with something of your own and more complicated). The phrase must be remembered: we will need it to sign every new certificate.

Next we create the CA certificate itself (immediately after launching the command we will be asked for the pass phrase of the key specified in the -key parameter):

 server1.cluster.com:~/mongodb/keys# openssl req -x509 -new -extensions v3_ca -key mongodb-private.key -days 36500 -out mongodb-CA-cert.crt 

Pay attention to the -days parameter: it determines the lifetime of the certificate. I am not sure who will be maintaining this project in the future, so to avoid unpleasant surprises we give the certificate 36,500 days of life, which is 100 years (very optimistic, isn't it?).
After the phrase is checked, we will be asked for information about the organization that owns the certificate. Imagine that our large organization is called "SomeSystems" and is located in Moscow (the entered values follow the colons):

 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: Statistics
 Common Name (eg server FQDN or YOUR name) []: CaServer
 Email Address []: info@SomeSystems.com

Fine! The CA is ready, and we can now use it to sign client certificates and cluster member certificates. I will add that the validity of the entered data does not affect the functionality of the CA certificate itself; however, the signed certificates will depend on these values, which will come up later.

The procedure for creating certificates for cluster members (certificates for external clients will be discussed separately) is as follows:

  1. We generate a private key (a *.key file) and a certificate signing request (a *.csr file). A CSR (Certificate Signing Request) is a text file that contains, in encoded form, information about the organization requesting the certificate and its public key.

  2. Using the private key and the public certificate of our Certificate Authority, we sign the certificate for the given server.

  3. From the new key and the signed certificate of the cluster member we assemble a PEM file, which this member will use within the cluster.

We create a private key and a certificate request for our first server (server1.cluster.com). Pay attention to an important detail: all fields are filled in the same way as for the root certificate, except for the CN (Common Name), which must be unique for each certificate. In our case its value will be the fully qualified domain name (FQDN) of the particular server:

 server1.cluster.com:~/mongodb/keys# openssl req -new -nodes -newkey rsa:2048 -keyout server1.key -out server1.csr
 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: Statistics
 Common Name (eg server FQDN or YOUR name) []: server1.cluster.com
 Email Address []: info@SomeSystems.com

 Please enter the following 'extra' attributes
 to be sent with your certificate request
 A challenge password []:
 An optional company name []:

I left the 'extra' fields empty. If you decide to specify an additional password (A challenge password []:), you will have to provide it in the mongod configuration via the net.ssl.PEMKeyPassword and net.ssl.clusterPassword parameters (details on these parameters are in the documentation).
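For reference, in that case the net.ssl section of the mongod configuration shown later would gain two extra lines (a sketch; "yourCertPassword" is only a placeholder, no certificate password is used anywhere in this article):

 net:
   ssl:
     PEMKeyFile: /root/mongodb/keys/server1.pem
     PEMKeyPassword: "yourCertPassword"    # password of the PEM key file
     clusterFile: /root/mongodb/keys/server1.pem
     clusterPassword: "yourCertPassword"   # password of the cluster certificate file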

Next, we will sign the CSR file with our CA certificate and get a public certificate (* .crt file):

 server1.cluster.com:~/mongodb/keys# openssl x509 -CA mongodb-CA-cert.crt -CAkey mongodb-private.key -CAcreateserial -req -days 36500 -in server1.csr -out server1.crt
 Signature ok
 subject=/C=RU/ST=MoscowRegion/L=Moscow/O=SomeSystems/OU=Statistics/CN=server1.cluster.com/emailAddress=info@SomeSystems.com
 Getting CA Private Key
 Enter pass phrase for mongodb-private.key:

Now we need to make a PEM file:

 server1.cluster.com:~/mongodb/keys# cat server1.key server1.crt > server1.pem 

We will use this PEM file when launching the mongod instances, specifying it in their configuration.
Now the certificate creation procedure must be repeated for the remaining servers. For complete clarity, here are all the commands:

 server1.cluster.com:~/mongodb/keys# openssl req -new -nodes -newkey rsa:2048 -keyout server2.key -out server2.csr
 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: Statistics
 Common Name (eg server FQDN or YOUR name) []: server2.cluster.com
 Email Address []: info@SomeSystems.com

 Please enter the following 'extra' attributes
 to be sent with your certificate request
 A challenge password []:
 An optional company name []:

(the 'extra' fields were left empty)

We sign the CSR file with our CA certificate to get the public certificate (* .crt file) of the second server:

 server1.cluster.com:~/mongodb/keys# openssl x509 -CA mongodb-CA-cert.crt -CAkey mongodb-private.key -CAcreateserial -req -days 36500 -in server2.csr -out server2.crt
 Signature ok
 subject=/C=RU/ST=MoscowRegion/L=Moscow/O=SomeSystems/OU=Statistics/CN=server2.cluster.com/emailAddress=info@SomeSystems.com
 Getting CA Private Key
 Enter pass phrase for mongodb-private.key:

Now we need to make a PEM file:

 server1.cluster.com:~/mongodb/keys# cat server2.key server2.crt > server2.pem 

And similarly for the third server certificate:

 server1.cluster.com:~/mongodb/keys# openssl req -new -nodes -newkey rsa:2048 -keyout server3.key -out server3.csr
 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: Statistics
 Common Name (eg server FQDN or YOUR name) []: server3.cluster.com
 Email Address []: info@SomeSystems.com

 Please enter the following 'extra' attributes
 to be sent with your certificate request
 A challenge password []:
 An optional company name []:

(the 'extra' fields were left empty)

We sign the CSR file with our CA certificate to get the public certificate (* .crt file) of the third server:

 server1.cluster.com:~/mongodb/keys# openssl x509 -CA mongodb-CA-cert.crt -CAkey mongodb-private.key -CAcreateserial -req -days 36500 -in server3.csr -out server3.crt
 Signature ok
 subject=/C=RU/ST=MoscowRegion/L=Moscow/O=SomeSystems/OU=Statistics/CN=server3.cluster.com/emailAddress=info@SomeSystems.com
 Getting CA Private Key
 Enter pass phrase for mongodb-private.key:

Create a PEM file:

 server1.cluster.com:~/mongodb/keys# cat server3.key server3.crt > server3.pem 

Let me repeat that I created all the keys and certificates on the first server and then moved them to the appropriate servers where necessary. Thus, each of the three servers must have the public CA certificate (mongodb-CA-cert.crt) and its own PEM file (server<N>.pem).
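How exactly you deliver the files is up to you; a minimal sketch with scp (assuming root SSH access between the servers, which is not covered in this article):

 server1.cluster.com:~/mongodb/keys# scp mongodb-CA-cert.crt server2.pem root@server2.cluster.com:/root/mongodb/keys/
 server1.cluster.com:~/mongodb/keys# scp mongodb-CA-cert.crt server3.pem root@server3.cluster.com:/root/mongodb/keys/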

Mongod instance configuration


For a correct start, a number of parameters must be passed to the mongod instances. This can be done with a configuration file, or all the necessary values can be passed as command-line arguments; almost every configuration parameter has a corresponding command-line argument. The configuration file is more convenient and readable, so we will go that way and launch the instances like this:

 mongod --config <path-to-config-file>

So, here is the configuration file for the mongod instances of the first shard replica set (rs0):

 #
 # /root/mongodb/cfg/mongod-rs0.conf
 #
 replication:
   replSetName: "rs0"              # name of the replica set
 net:
   port: 27000
   ssl:
     mode: requireSSL              # accept only TLS/SSL connections
     PEMKeyFile: /root/mongodb/keys/server1.pem
     clusterFile: /root/mongodb/keys/server1.pem
     CAFile: /root/mongodb/keys/mongodb-CA-cert.crt
     weakCertificateValidation: false    # require connecting clients to present a certificate
     allowInvalidCertificates: false     # do not accept invalid certificates from cluster members
 security:
   authorization: enabled          # enable access control
   clusterAuthMode: x509           # authenticate cluster members with MONGODB-X509
 storage:
   dbPath: /root/mongodb/data/rs0  # data directory of this member
 systemLog:
   destination: file               # write the log to a file
   path: /root/mongodb/logs/mongod-rs0.log
   logAppend: true                 # append to the log on restart

The configuration file for the second shard replica set (rs1) is similar; only the replica set name, the port, the data path, and the log path differ:

 #
 # /root/mongodb/cfg/mongod-rs1.conf
 #
 replication:
   replSetName: "rs1"
 net:
   port: 27001
   ssl:
     mode: requireSSL
     PEMKeyFile: /root/mongodb/keys/server1.pem
     clusterFile: /root/mongodb/keys/server1.pem
     CAFile: /root/mongodb/keys/mongodb-CA-cert.crt
     weakCertificateValidation: false
     allowInvalidCertificates: false
 security:
   authorization: enabled
   clusterAuthMode: x509
 storage:
   dbPath: /root/mongodb/data/rs1
 systemLog:
   destination: file
   path: /root/mongodb/logs/mongod-rs1.log
   logAppend: true

And for the third one (rs2):

 #
 # /root/mongodb/cfg/mongod-rs2.conf
 #
 replication:
   replSetName: "rs2"
 net:
   port: 27002
   ssl:
     mode: requireSSL
     PEMKeyFile: /root/mongodb/keys/server1.pem
     clusterFile: /root/mongodb/keys/server1.pem
     CAFile: /root/mongodb/keys/mongodb-CA-cert.crt
     weakCertificateValidation: false
     allowInvalidCertificates: false
 security:
   authorization: enabled
   clusterAuthMode: x509
 storage:
   dbPath: /root/mongodb/data/rs2
 systemLog:
   destination: file
   path: /root/mongodb/logs/mongod-rs2.log
   logAppend: true

Besides the shard replica sets, our cluster also needs a config server replica set (rscfg).

As mentioned above, the config server can be based on a single mongod instance (for example, for testing), but the recommended way, which we follow here, is to build it as a Replica Set too.

A config server differs from an ordinary mongod by the "sharding.clusterRole" parameter; its configuration file looks like this:

 #
 # /root/mongodb/cfg/mongod-rscfg.conf
 #
 sharding:
   clusterRole: configsvr          # this mongod is a member of the config server replica set
 replication:
   replSetName: "rscfg"            # name of the config replica set
 net:
   port: 27888
   ssl:
     mode: requireSSL
     PEMKeyFile: /root/mongodb/keys/server1.pem
     clusterFile: /root/mongodb/keys/server1.pem
     CAFile: /root/mongodb/keys/mongodb-CA-cert.crt
     weakCertificateValidation: false
     allowInvalidCertificates: false
 security:
   authorization: enabled
   clusterAuthMode: x509
 storage:
   dbPath: /root/mongodb/data/config
 systemLog:
   destination: file
   path: /root/mongodb/logs/mongod-rscfg.log
   logAppend: true

The configuration files on the other two servers are the same; only net.ssl.PEMKeyFile and net.ssl.clusterFile must point to that server's own certificate (server2.pem, server3.pem).
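In other words, on server2.cluster.com the ssl section of each configuration file from this section would look like the fragment below (and analogously with server3.pem on the third server); everything else stays exactly the same:

   ssl:
     mode: requireSSL
     PEMKeyFile: /root/mongodb/keys/server2.pem
     clusterFile: /root/mongodb/keys/server2.pem
     CAFile: /root/mongodb/keys/mongodb-CA-cert.crt
     weakCertificateValidation: false
     allowInvalidCertificates: false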

Setting up the Replica Sets


Remember the mongod we started on port 27000 for a warm-up at the very beginning? We now return to it. If it is no longer running, start it again, still "bare", without a configuration file:

 mongod --port 27000 --dbpath /root/mongodb/data/rs0

Why without the security settings? When authorization is enabled, a freshly started instance has no users yet, so there is nobody to authenticate as, while x.509 authentication (like any other mechanism) requires an existing user. Therefore we first start the instance without access control, create the users we need, and only then restart it with the full configuration from the previous section. That is exactly what we will do now.

The MongoDB documentation describes this procedure in the section on x.509 client authentication. The cluster members will authenticate each other with their member certificates (the PEM files passed to mongod), but to administer each replica set we also need a client user. So for every replica set, starting with rs0, we will create an administrative user (with the root role) authenticated by its own client certificate, as the MongoDB documentation suggests.

This certificate is signed by the same CA. First we generate a private key and a certificate request:

 server1.cluster.com:~/mongodb/keys# openssl req -new -nodes -newkey rsa:2048 -keyout rsroot.key -out rsroot.csr
 Generating a 2048 bit RSA private key
 ........................................................................+++
 .........................+++
 writing new private key to 'rsroot.key'
 -----
 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 -----
 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: StatisticsClient
 Common Name (eg server FQDN or YOUR name) []: rsroot
 Email Address []:

 Please enter the following 'extra' attributes
 to be sent with your certificate request
 A challenge password []:
 An optional company name []:

We sign the request with our CA (we will again be asked for the CA key pass phrase):

 server1.cluster.com:~/mongodb/keys# openssl x509 -CA mongodb-CA-cert.crt -CAkey mongodb-private.key -CAcreateserial -req -days 36500 -in rsroot.csr -out rsroot.crt
 Signature ok
 subject=/C=RU/ST=MoscowRegion/L=Moscow/O=SomeSystems/OU=StatisticsClient/CN=rsroot
 Getting CA Private Key
 Enter pass phrase for mongodb-private.key:

And assemble the PEM file:

 server1.cluster.com:~/mongodb/keys# cat rsroot.key rsroot.crt > rsroot.pem

Pay attention to the Organization Unit Name (OU): it differs from the one used in the cluster member certificates (StatisticsClient instead of Statistics), and this is not accidental. If the subject of a client certificate contains the same O and OU values as the member certificates, MongoDB treats it as an internal cluster member certificate and refuses to create a user for it:

 { "ok" : 0, "errmsg" : "Cannot create an x.509 user with a subjectname that would be recognized as an internal cluster member.", "code" : 2 }

With x.509 authentication, the user name is the subject of the client certificate. The subject can be extracted from the PEM file like this:

 server1.cluster.com:~/mongodb/keys# openssl x509 -in rsroot.pem -inform PEM -subject -nameopt RFC2253
 subject= CN=rsroot,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU
 -----BEGIN CERTIFICATE-----

We will need the string that follows "subject= " (without the "subject= " prefix itself). Connect to the running instance:

 mongo --port 27000 

 > db.getSiblingDB("$external").runCommand({createUser: "CN=rsroot,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU", roles: [{role: "root", db: "admin"}] }) 

$external is a special virtual database in MongoDB for users whose credentials are stored outside MongoDB itself, as is the case with certificates.

Now we stop the "bare" mongod and start the rs0 members on all three servers with the configuration files prepared earlier. After that we can initialize the replica set (rs0).
We connect with the newly created user (rsroot): over SSL, authenticating with its certificate and passing the certificate subject as the user name:

 server1.cluster.com:~/mongodb/keys# mongo admin --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/rsroot.pem --host server1.cluster.com --port 27000 

 > db.getSiblingDB("$external").auth({ mechanism:"MONGODB-X509", user: "CN=rsroot,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" }) 

And initialize the replica set:

 rs.initiate(
   {
     _id: "rs0",
     members: [
       { _id: 0, host: "server1.cluster.com:27000" },
       { _id: 1, host: "server2.cluster.com:27000" },
       { _id: 2, host: "server3.cluster.com:27000", arbiterOnly: true },
     ]
   }
 )

Note the arbiterOnly flag on the third member: it makes that member an arbiter rather than a data-bearing one.

If everything went well, the shell prompt changes from ">" to the name of the replica set followed by the state of the current member:
rs0:PRIMARY (or rs0:SECONDARY).
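The state of the replica set and the roles of its members can also be checked explicitly. A small sketch of such a check (the output is abbreviated and only illustrative):

 rs0:PRIMARY> rs.status().members.map(function(m) { return m.name + " : " + m.stateStr })
 [
     "server1.cluster.com:27000 : PRIMARY",
     "server2.cluster.com:27000 : SECONDARY",
     "server3.cluster.com:27000 : ARBITER"
 ]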

The remaining shard replica sets are set up in exactly the same way.

1. Start a "bare" mongod (on the port of the second replica set):

 mongod --port 27001 --dbpath /root/mongodb/data/rs1

2. Create the root user for the replica set (rs1). We use the same certificate and, accordingly, the same subject:

 mongo --port 27001

 > db.getSiblingDB("$external").runCommand({createUser: "CN=rsroot,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU", roles: [{role: "root", db: "admin"}] })

3. Stop the "bare" mongod and start the rs1 members with their configuration files on all three servers:

 root@server1.cluster.com# mongod --config /root/mongodb/cfg/mongod-rs1.conf
 root@server2.cluster.com# mongod --config /root/mongodb/cfg/mongod-rs1.conf
 root@server3.cluster.com# mongod --config /root/mongodb/cfg/mongod-rs1.conf

4. Connect to the started instance, authenticate, and initialize rs1:

 root@server1.cluster.com# mongo admin --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/rsroot.pem --host server1.cluster.com --port 27001

 > db.getSiblingDB("$external").auth({ mechanism: "MONGODB-X509", user: "CN=rsroot,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" })
 > rs.initiate(
   {
     _id: "rs1",
     members: [
       { _id: 0, host: "server1.cluster.com:27001" },
       { _id: 1, host: "server2.cluster.com:27001" },
       { _id: 2, host: "server3.cluster.com:27001", arbiterOnly: true },
     ]
   }
 )

And the same for the third replica set (rs2).

1. Start a "bare" mongod:

 mongod --port 27002 --dbpath /root/mongodb/data/rs2

2. Create the root user for the replica set (rs2):

 mongo --port 27002

 > db.getSiblingDB("$external").runCommand({createUser: "CN=rsroot,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU", roles: [{role: "root", db: "admin"}] })

3. Stop the "bare" mongod and start the rs2 members with their configuration files on all three servers:

 root@server1.cluster.com# mongod --config /root/mongodb/cfg/mongod-rs2.conf
 root@server2.cluster.com# mongod --config /root/mongodb/cfg/mongod-rs2.conf
 root@server3.cluster.com# mongod --config /root/mongodb/cfg/mongod-rs2.conf

4. Connect, authenticate, and initialize rs2:

 root@server1.cluster.com# mongo admin --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/rsroot.pem --host server1.cluster.com --port 27002

 > db.getSiblingDB("$external").auth({ mechanism: "MONGODB-X509", user: "CN=rsroot,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" })
 > rs.initiate(
   {
     _id: "rs2",
     members: [
       { _id: 0, host: "server1.cluster.com:27002" },
       { _id: 1, host: "server2.cluster.com:27002" },
       { _id: 2, host: "server3.cluster.com:27002", arbiterOnly: true },
     ]
   }
 )

Setting up the config server replica set


The config server replica set is configured in almost the same way, with two differences. First, as we saw above, its configuration file carries the sharding.clusterRole: configsvr parameter. Second, a config server replica set cannot contain arbiters; an attempt to add one ends with an error:

 { "ok" : 0, "errmsg" : "Arbiters are not allowed in replica set configurations being used for config servers", "code" : 93 }

So on the third server we run an ordinary SECONDARY member of the config replica set instead of an arbiter. For the rscfg replica set we also create its own administrative user; this time the certificate is issued for a user named root, and we will later use it to connect to mongos as well. It is prepared in exactly the same way:

 server1.cluster.com:~/mongodb/keys# openssl req -new -nodes -newkey rsa:2048 -keyout rootuser.key -out rootuser.csr
 Generating a 2048 bit RSA private key
 ......................+++
 .........................................+++
 writing new private key to 'rootuser.key'
 -----
 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 -----
 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: StatisticsClient
 Common Name (eg server FQDN or YOUR name) []: root
 Email Address []:

 Please enter the following 'extra' attributes
 to be sent with your certificate request
 A challenge password []:
 An optional company name []:

 server1.cluster.com:~/mongodb/keys# openssl x509 -CA mongodb-CA-cert.crt -CAkey mongodb-private.key -CAcreateserial -req -days 36500 -in rootuser.csr -out rootuser.crt
 Signature ok
 subject=/C=RU/ST=MoscowRegion/L=Moscow/O=SomeSystems/OU=StatisticsClient/CN=root
 Getting CA Private Key
 Enter pass phrase for mongodb-private.key:

 server1.cluster.com:~/mongodb/keys# cat rootuser.key rootuser.crt > rootuser.pem

And check the subject of the new certificate:

 server1.cluster.com:~/mongodb/keys# openssl x509 -in rootuser.pem -inform PEM -subject -nameopt RFC2253
 subject= CN=root,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU
 -----BEGIN CERTIFICATE-----

1. Start a "bare" mongod for the future config server:

 server1.cluster.com:~/mongodb/keys# mongod --port 27888 --dbpath /root/mongodb/data/config

2. Create the root user for the config replica set (rscfg):

 server1.cluster.com:~/mongodb/keys# mongo --port 27888

 > db.getSiblingDB("$external").runCommand({createUser: "CN=root,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU", roles: [{role: "root", db: "admin"}] })

3. Stop the "bare" mongod and start the config server members with their configuration files on all three servers:

 root@server1.cluster.com# mongod --config /root/mongodb/cfg/mongod-rscfg.conf
 root@server2.cluster.com# mongod --config /root/mongodb/cfg/mongod-rscfg.conf
 root@server3.cluster.com# mongod --config /root/mongodb/cfg/mongod-rscfg.conf

4. Connect, authenticate, and initialize the config replica set (rscfg):

 root@server1.cluster.com# mongo admin --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/rootuser.pem --host server1.cluster.com --port 27888

 > db.getSiblingDB("$external").auth({ mechanism: "MONGODB-X509", user: "CN=root,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" })
 > rs.initiate(
   {
     _id: "rscfg",
     members: [
       { _id: 0, host: "server1.cluster.com:27888" },
       { _id: 1, host: "server2.cluster.com:27888" },
       { _id: 2, host: "server3.cluster.com:27888" }
     ]
   }
 )

The config server replica set is ready; we can move on to mongos.

Setting up mongos


The next cluster component is the mongos query router. It is the part of the cluster that clients talk to: it receives their requests and routes them to the shards. As discussed earlier, we will have two mongos instances, on server1.cluster.com and server2.cluster.com.

Unlike mongod, mongos stores no data of its own, so it needs no dbPath; everything it needs it takes from the config servers.

The mongos configuration file resembles the mongod one, but has no storage section; instead, mongos must be told where the config server replica set lives. This is done with the sharding.configDB parameter: the name of the config replica set, a slash, and a comma-separated list of its members. mongos will listen on the default MongoDB port, 27017.

 #
 # /root/mongodb/cfg/mongos.conf
 #
 sharding:
   configDB: "rscfg/server1.cluster.com:27888,server2.cluster.com:27888,server3.cluster.com:27888"
 net:
   port: 27017
   ssl:
     mode: requireSSL
     PEMKeyFile: /root/mongodb/keys/server1.pem
     clusterFile: /root/mongodb/keys/server1.pem
     CAFile: /root/mongodb/keys/mongodb-CA-cert.crt
     weakCertificateValidation: false
     allowInvalidCertificates: false
 security:
   clusterAuthMode: x509
 systemLog:
   destination: file
   path: /root/mongodb/logs/mongos.log
   logAppend: true

We copy the configuration file to the second server (adjusting the paths to the PEM file) and start mongos on both of them:

 mongos --config /root/mongodb/cfg/mongos.conf 

Now we connect to mongos as the root user that we created for the config replica set (the users of a sharded cluster are stored on the config servers, which is why this user is the one mongos knows about):

 mongo admin --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/rootuser.pem --host server1.cluster.com --port 27017 

If we see the "mongos>" prompt, everything is OK.

 mongos> db.getSiblingDB("$external").auth({ mechanism:"MONGODB-X509", user: "CN=root,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" }) 

(the command should return "1")

For simplicity I use a single user with the root role here; a stricter approach would be to create a user administrator with the userAdminAnyDatabase role and give every other user only the privileges it really needs. Adjust this to your own security requirements.

Let's move on. Now we need a user for the analytics database, on whose behalf the application servers will read and write data.

As before, we prepare a client certificate, this time for a user named analyticsuser:

 server1.cluster.com:~/mongodb/keys# openssl req -new -nodes -newkey rsa:2048 -keyout analyticsuser.key -out analyticsuser.csr
 Generating a 2048 bit RSA private key
 ......................+++
 .........................................+++
 writing new private key to 'analyticsuser.key'
 -----
 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 -----
 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: StatisticsClient
 Common Name (eg server FQDN or YOUR name) []: analyticsuser
 Email Address []:

 Please enter the following 'extra' attributes
 to be sent with your certificate request
 A challenge password []:
 An optional company name []:

Sign it with the CA:

 server1.cluster.com:~/mongodb/keys# openssl x509 -CA mongodb-CA-cert.crt -CAkey mongodb-private.key -CAcreateserial -req -days 36500 -in analyticsuser.csr -out analyticsuser.crt
 Signature ok
 subject=/C=RU/ST=MoscowRegion/L=Moscow/O=SomeSystems/OU=StatisticsClient/CN=analyticsuser
 Getting CA Private Key
 Enter pass phrase for mongodb-private.key:

Assemble the PEM file:

 server1.cluster.com:~/mongodb/keys# cat analyticsuser.key analyticsuser.crt > analyticsuser.pem

And check the subject:

 server1.cluster.com:~/mongodb/keys# openssl x509 -in analyticsuser.pem -inform PEM -subject -nameopt RFC2253
 subject= CN=analyticsuser,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU
 -----BEGIN CERTIFICATE-----

We create the user (through mongos):

 mongos> db.getSiblingDB("$external").runCommand({createUser: "CN=analyticsuser,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU", roles: [{role: "readWrite", db: "analytics"}] }) 

The analyticsuser user has received the readWrite role on the analytics database; it is under this user that the application servers will read and write the statistics.
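Before handing the certificate over to the application, it is worth checking it by hand. A sketch of such a check through mongos (db.sensors.findOne() simply verifies that reads are permitted and may return null if the collection is still empty on the cluster; administrative commands, on the contrary, should be refused):

 mongo analytics --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/analyticsuser.pem --host server1.cluster.com --port 27017
 mongos> db.getSiblingDB("$external").auth({ mechanism: "MONGODB-X509", user: "CN=analyticsuser,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" })
 mongos> db.sensors.findOne()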


Now for the sharding of the data itself. To shard the statistics collection, MongoDB needs a shard key, by which the collection will be split into chunks distributed between the shards. The default chunk size is 64 MB; with our modest amount of test data we would wait a long time before anything got split and migrated, so for demonstration purposes we will reduce it.

By the way, all the authentication parameters can be passed directly on the command line: the mechanism (authenticationMechanism), the authentication database (authenticationDatabase), and the user (-u). This time we connect to mongos as root, combining connection and authentication in one command:

 mongo --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/rootuser.pem --host server1.cluster.com --port 27017 --authenticationMechanism "MONGODB-X509" --authenticationDatabase '$external' -u "CN=root,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU"

Switch to the config database and set the chunk size:

 mongos> use config
 mongos> db.settings.save({_id: "chunksize", value: NumberLong(32)})
 WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

The new size is 32 MB. Check the result:

 mongos> db.settings.find({'_id': "chunksize"})
 { "_id" : "chunksize", "value" : NumberLong(32) }

To manage the cluster itself (add shards, enable sharding, and so on), we need a user with the clusterAdmin role. We prepare one more client certificate for it:

 server1.cluster.com:~/mongodb/keys# openssl req -new -nodes -newkey rsa:2048 -keyout clusterAdmin.key -out clusterAdmin.csr
 Generating a 2048 bit RSA private key
 ................+++
 .......................................+++
 writing new private key to 'clusterAdmin.key'
 -----
 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 -----
 Country Name (2 letter code) [AU]: RU
 State or Province Name (full name) [Some-State]: MoscowRegion
 Locality Name (eg, city) []: Moscow
 Organization Name (eg, company) [Internet Widgits Pty Ltd]: SomeSystems
 Organizational Unit Name (eg, section) []: StatisticsClient
 Common Name (eg server FQDN or YOUR name) []: clusteradmin
 Email Address []:

 Please enter the following 'extra' attributes
 to be sent with your certificate request
 A challenge password []:
 An optional company name []:

 server1.cluster.com:~/mongodb/keys# openssl x509 -CA mongodb-CA-cert.crt -CAkey mongodb-private.key -CAcreateserial -req -days 36500 -in clusterAdmin.csr -out clusterAdmin.crt
 Signature ok
 subject=/C=RU/ST=MoscowRegion/L=Moscow/O=SomeSystems/OU=StatisticsClient/CN=clusteradmin
 Getting CA Private Key
 Enter pass phrase for mongodb-private.key:

 server1.cluster.com:~/mongodb/keys# cat clusterAdmin.key clusterAdmin.crt > clusterAdmin.pem

 server1.cluster.com:~/mongodb/keys# openssl x509 -in clusterAdmin.pem -inform PEM -subject -nameopt RFC2253
 subject= CN=clusteradmin,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU
 -----BEGIN CERTIFICATE-----

Once again, pay attention to the OU: a client certificate must not share the O/OU combination of the cluster member certificates, which is why StatisticsClient is used here as well.

Still in the mongos session opened as root, we create the cluster administrator user:

 mongos> db.getSiblingDB("$external").runCommand({ createUser: "CN=clusteradmin,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU", roles: [{role: "clusterAdmin", db: "admin"}] })

Then we reconnect to mongos as this cluster administrator (we will act under it from now on):

 mongo --ssl --sslCAFile /root/mongodb/keys/mongodb-CA-cert.crt --sslPEMKeyFile /root/mongodb/keys/clusterAdmin.pem --host server1.cluster.com --port 27017 --authenticationMechanism "MONGODB-X509" --authenticationDatabase '$external' -u "CN=clusteradmin,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU"

Everything is ready to add our shards to the cluster:

 mongos> sh.addShard("rs0/server1.cluster.com:27000,server2.cluster.com:27000")
 mongos> sh.addShard("rs1/server1.cluster.com:27001,server2.cluster.com:27001")
 mongos> sh.addShard("rs2/server1.cluster.com:27002,server2.cluster.com:27002")

If no errors occurred, we can look at the result:

 mongos> sh.status()
 --- Sharding Status ---
   sharding version: {
         "_id" : 1,
         "minCompatibleVersion" : 5,
         "currentVersion" : 6,
         "clusterId" : ObjectId("5795284cd589624d4e36b7d4")
 }
   shards:
         {  "_id" : "rs0",  "host" : "rs0/server1.cluster.com:27000,server2.cluster.com:27000" }
         {  "_id" : "rs1",  "host" : "rs1/server1.cluster.com:27001,server2.cluster.com:27001" }
         {  "_id" : "rs2",  "host" : "rs2/server1.cluster.com:27002,server2.cluster.com:27002" }
   active mongoses:
         "3.2.8" : 1
   balancer:
         Currently enabled:  yes
         Currently running:  no
         Failed balancer rounds in last 5 attempts:  0
         Migration Results for the last 24 hours:
                 No recent migrations
   databases:

So the cluster is assembled, but the data itself is not sharded yet: the databases section is empty. To distribute a collection across the shards, two more steps are required.

1. Enable sharding for the database, in our case analytics:

 mongos> sh.enableSharding("analytics")

Check the result:

 mongos> sh.status()
 --- Sharding Status ---
   sharding version: {
         "_id" : 1,
         "minCompatibleVersion" : 5,
         "currentVersion" : 6,
         "clusterId" : ObjectId("5795284cd589624d4e36b7d4")
 }
   shards:
         {  "_id" : "rs0",  "host" : "rs0/server1.cluster.com:27000,server2.cluster.com:27000" }
         {  "_id" : "rs1",  "host" : "rs1/server1.cluster.com:27001,server2.cluster.com:27001" }
         {  "_id" : "rs2",  "host" : "rs2/server1.cluster.com:27002,server2.cluster.com:27002" }
   active mongoses:
         "3.2.8" : 1
   balancer:
         Currently enabled:  yes
         Currently running:  no
         Failed balancer rounds in last 5 attempts:  0
         Migration Results for the last 24 hours:
                 No recent migrations
   databases:
         {  "_id" : "analytics",  "primary" : "rs2",  "partitioned" : true }

The analytics database is now allowed to be sharded, and its primary shard (not to be confused with the PRIMARY of a replica set) is "rs2". Until a collection is explicitly sharded, all of its data lives entirely on that primary shard (rs2).

2. Shard the collection.

As mentioned earlier, to partition all documents of a sharded collection into chunks, MongoDB needs a shard key. Choosing it is a responsible task that must be approached wisely, guided by the requirements of your application and common sense. The index corresponding to the shard key is chosen from the existing indexes or is created specifically for this purpose; either way, the index must exist at the moment the collection is sharded. The shard key does not impose special restrictions on the index; if necessary it can be compound, for example {"s": 1, "ts": -1}.
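For completeness, a sketch of what the compound-key variant mentioned above would look like; we do not use it in this article, the single-field key on s is sufficient for our case:

 mongos> db.statistics.ensureIndex({"s": 1, "ts": -1})
 mongos> sh.shardCollection("analytics.statistics", {"s": 1, "ts": -1})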

So, in our cluster we shard the statistics collection of the analytics database. As the shard key we take the serial number of the sensor, the s field, the same one we indexed at the very beginning. We make sure the index exists on the cluster, creating it again if necessary:

 mongos> use analytics
 mongos> db.statistics.ensureIndex({"s":1})

And shard the collection, specifying its full name and the shard key:

 mongos> sh.shardCollection("analytics.statistics", {"s":1}) 

That is all. If the collection already contains enough data, sharding begins: the collection is split into chunks, which the balancer gradually migrates from the primary shard to the other shards. This does not happen instantly, so give the cluster some time.

After a while the sh.status() output changes:

 mongos> sh.status()
 --- Sharding Status ---
   sharding version: {
         "_id" : 1,
         "minCompatibleVersion" : 5,
         "currentVersion" : 6,
         "clusterId" : ObjectId("5773899ee3456024f8ef4895")
 }
   shards:
         {  "_id" : "rs0",  "host" : "rs0/server1.cluster.com:27000,server2.cluster.com:27000" }
         {  "_id" : "rs1",  "host" : "rs1/server1.cluster.com:27001,server2.cluster.com:27001" }
         {  "_id" : "rs2",  "host" : "rs2/server1.cluster.com:27002,server2.cluster.com:27002" }
   active mongoses:
         "3.2.8" : 1
   balancer:
         Currently enabled:  yes
         Currently running:  yes
                 Balancer lock taken at Sun Jul 29 2016 10:18:32 GMT+0000 (UTC) by MongoDB:27017:1468508127:-1574651753:Balancer
         Collections with active migrations:
                 analytics.statistics started at Sun Jul 29 2016 10:18:32 GMT+0000 (UTC)
         Failed balancer rounds in last 5 attempts:  0
         Migration Results for the last 24 hours:
                 3 : Success
                 2 : Failed with error 'aborted', from rs2 to rs0
   databases:
         {  "_id" : "analytics",  "primary" : "rs2",  "partitioned" : true }
                 analytics.statistics
                         shard key: { "s" : 1 }
                         unique: false
                         balancing: true
                         chunks:
                                 rs0  1
                                 rs1  2
                                 rs2  21
                         too many chunks to print, use verbose if you want to force print

The databases section now shows that the analytics database is partitioned, gives the shard key of analytics.statistics, and lists how many chunks each shard currently holds. The balancer section shows that the balancer is busy migrating chunks between the shards.
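The balancer can also be queried directly. A small sketch of the checks: the first command shows whether balancing is enabled at all, the second whether a balancing round is running at this very moment:

 mongos> sh.getBalancerState()
 true
 mongos> sh.isBalancerRunning()
 true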

Supervisor


The MongoDB Community package registers only one mongod as a system service, the "default" instance installed out of the box. We, on the other hand, are running a whole set of mongod and mongos processes that need to be started at boot and kept alive.

The classical option is to write an init script in the spirit of the stock /etc/init.d/mongod for every mongod and mongos instance on server1.cluster.com and server2.cluster.com (and for the arbiters on server3).

Instead of multiplying /etc/init.d/mongod clones, I chose to manage the instances with supervisor.

With supervisor, all the mongo{d/s} instances on a server can be started and stopped with single commands:

 supervisorctl start all
 supervisorctl stop all

(or each one individually, by name).
Supervisor is available in the repositories of most Linux distributions; on my system (Debian 8) it is installed like this:

 # apt-get install supervisor 

After installation we create a configuration file for every instance that supervisor should manage.

Here is the one for the rs0 mongod:

 #
 # /etc/supervisor/conf.d/mongod-rs0.conf
 #
 [program:mongod-rs0]
 command=mongod --config /root/mongodb/cfg/mongod-rs0.conf
 user=root
 stdout_logfile=/root/mongodb/logs/supervisor/mongod-rs0-stdout.log
 redirect_stderr=true
 autostart=true
 autorestart=true
 stopwaitsecs=60

A few words about the parameters. command is what supervisor runs, in our case mongod with the path to its configuration file. user is the system user the process runs under. stdout_logfile is the file into which supervisor redirects the standard output of the process; it is the first place to look when an instance refuses to start or dies unexpectedly.

redirect_stderr tells supervisor to send the error stream to the same log. autostart and autorestart make the process start together with supervisor and restart automatically if it exits.

stopwaitsecs deserves special attention. When stopping a process, supervisor sends it TERM and, if the process has not exited within the timeout (10 seconds by default), finishes it off with KILL. Killing mongod like that is a bad idea, so we give it a more generous 60 seconds to shut down cleanly.

A configuration file of this kind is created for every instance running on the server; on Debian they all go into /etc/supervisor/conf.d/ (the path may differ on other Linux distributions).

After that we reload supervisor:

 # supervisorctl reload 


Now every instance can be stopped, started, and checked individually:

 # supervisorctl stop mongod-rs0
 # supervisorctl start mongod-rs0
 # supervisorctl status mongod-rs0

Keep in mind that the stock mongod service installed with the package listens on port 27017, the very port we gave to mongos, so once everything is under supervisor it must not start at boot; I simply disabled the stock /etc/init.d/mongod service.
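For mongos the supervisor configuration is analogous. A sketch of the one I would use on the data servers (the file name and the log path are, of course, up to you):

 #
 # /etc/supervisor/conf.d/mongos.conf
 #
 [program:mongos]
 command=mongos --config /root/mongodb/cfg/mongos.conf
 user=root
 stdout_logfile=/root/mongodb/logs/supervisor/mongos-stdout.log
 redirect_stderr=true
 autostart=true
 autorestart=true
 stopwaitsecs=60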

Useful information



A collection of about 3M documents was sharded with sh.shardCollection() without any problems, but on a much larger collection (on the order of 100M documents) the call simply failed with a "timeout". In that case the following workaround helped:

1. Stop writing to the collection (pause the application);
2. Export ("dump") the collection with mongoexport:

 mongoexport --db analytics --collection statistics --out statistics.json 

3. Drop the "live" collection:

 > use analytics
 > db.statistics.drop()

4. Create the shard-key index on the now empty collection:

 > db.statistics.ensureIndex({"s":1})

5. Shard the empty collection:

 > sh.shardCollection("analytics.statistics", {"s":1}) 

6. Import the data back:

 mongoimport --db analytics --collection statistics --file statistics.json 

Keep in mind that the export and import go through a text JSON file, so the procedure takes time and disk space proportional to the size of the collection.


The next topic is backups. Replication protects against hardware failures, but not against human error: an accidental deletion is faithfully replicated to all the SECONDARY members as well. So regular backups are still necessary.

Let's set them up for the new cluster.

We back up the analytics database with mongodump, which ships with MongoDB Community.

MongoDB has a built-in backup role intended precisely for this. As with the other users, we create an x.509 client certificate for the backup user, in exactly the same way as before, with the subject:

 CN=backuper,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU 

And create the backuper user with the built-in backup role:

 mongos> db.getSiblingDB("$external").runCommand({ createUser: "CN=backuper,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU", roles: [{role: "backup", db: "admin"}] }) 

Now we can dump the analytics database. We call mongodump with the usual connection and authentication parameters, specify the database (--db) and the output directory (-o), and add --gzip to compress the dump:

 mongodump --ssl --sslCAFile "/root/mongodb/keys/mongodb-CA-cert.crt" --sslPEMKeyFile "/root/mongodb/keys/backuper.pem" -u "CN=backuper,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" --host server1.cluster.com --port 27017 --authenticationMechanism "MONGODB-X509" --authenticationDatabase '$external' --db analytics --gzip -o "/path/to/backup/"
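Restoring from such a dump is done with mongorestore and the same connection parameters. A sketch only: note that restoring requires the built-in restore role rather than backup, so the backuper user from above would need that role granted as well, and --gzip must match the flag used when dumping:

 mongorestore --ssl --sslCAFile "/root/mongodb/keys/mongodb-CA-cert.crt" --sslPEMKeyFile "/root/mongodb/keys/backuper.pem" -u "CN=backuper,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU" --host server1.cluster.com --port 27017 --authenticationMechanism "MONGODB-X509" --authenticationDatabase '$external' --db analytics --gzip "/path/to/backup/analytics"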

Connecting client applications


Finally, a few words about connecting client applications to the cluster. Ours are written in C++ and Python, so these are the examples I will show.

Let's start with C++. We use the legacy MongoDB C++ driver, mongodb-cxx-driver-legacy-1.1.1.

 #include <mongo/client/dbclient.h>
 #include <mongo/client/options.h>
 ...
 mongo::DBClientConnection client(true);     // auto-reconnect enabled
 try {
     // driver options: require SSL and point it at the CA certificate and the client PEM file
     mongo::client::Options options;
     options.setSSLMode(mongo::client::Options::SSLModes::kSSLRequired);
     options.setSSLCAFile("/path_to_certs/mongodb-CA-cert.crt");
     options.setSSLPEMKeyFile("/path_to_certs/analyticsuser.PEM");
     mongo::Status status = mongo::client::initialize(options);
     mongo::massertStatusOK(status);

     // connect to one of the mongos routers
     client.connect("www.server1.cluster.com:27017");

     // authentication parameters: database, user name (the certificate subject) and mechanism
     mongo::BSONObjBuilder auth_params;
     auth_params.append("db", "$external");
     auth_params.append("user", "CN=username,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU");
     auth_params.append("mechanism", "MONGODB-X509");
     client.auth(auth_params.obj());

     // the connection is ready to be used
 } catch (const mongo::DBException &e) {
     std::cout << "DBException : " << e.toString() << std::endl;
 }
 ...

As you can see, before connecting we fill in mongo::client::Options: we require SSL (kSSLRequired) and specify the CA certificate (mongodb-CA-cert.crt) and the client PEM file (here it is the analyticsuser certificate created earlier).

Then we authenticate: the database is "$external", the user name is the subject of the client certificate, and the mechanism is MONGODB-X509. No password is needed, since the certificate itself proves the identity.

On the Python side we connect either with plain pymongo or through mongoengine.

A pymongo connection looks like this:

 import ssl

 from pymongo import MongoClient, ReadPreference

 db_hosts = "server1.cluster.com:27017,server2.cluster.com:27017"   # both mongos routers
 db_port = None                                                     # the ports are already part of db_hosts
 db_name = "analytics"
 db_user = "CN=analyticsuser,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU"

 client = MongoClient(db_hosts, db_port,
                      read_preference=ReadPreference.NEAREST,
                      ssl=True,
                      ssl_certfile="/path_to_certs/analyticsuser.PEM",
                      ssl_cert_reqs=ssl.CERT_REQUIRED,
                      ssl_ca_certs="/path_to_certs/mongodb-CA-cert.crt")
 db = client[db_name]
 db.authenticate(name=db_user, source="$external", mechanism="MONGODB-X509")

Here, too, we pass the CA certificate and the client PEM file. db_hosts is a comma-separated list of the two mongos routers; a separate port (db_port) is not given, since the ports are already part of the host strings. pymongo establishes a connection with whichever of them responds, and if the current mongos becomes unavailable it switches to the other one, i.e. the client is not tied rigidly to server1.cluster.com:27017.

At the moment of such a switch pymongo raises pymongo.errors.AutoReconnect. To handle it I use a small decorator that retries the operation, and wrap the API methods that touch the database with it:

 from functools import wraps
 from pymongo.errors import AutoReconnect
 import time


 def pymongo_reconnect(attempts=5):
     def decorator(f):
         @wraps(f)
         def decorated_function(*args, **kwargs):
             tries_reconnect = attempts
             if tries_reconnect <= 0:
                 tries_reconnect = 1
             while tries_reconnect:
                 try:
                     return f(*args, **kwargs)
                 except AutoReconnect as ar:
                     tries_reconnect -= 1
                     print("Caught AutoReconnect exception.")
                     if tries_reconnect <= 0:
                         raise ar
                     time.sleep(0.1)
                     print("Attempt to reconnect (%d more)...\n" % tries_reconnect)
                     continue
         return decorated_function
     return decorator

It gives the operation a certain number of attempts (5 by default) and only then gives up, re-raising the exception.

The read_preference parameter is also worth mentioning. It determines which members of a replica set the driver reads from (writes always go to the PRIMARY). The possible values are:

PRIMARY: read only from the primary member; PRIMARY_PREFERRED: read from the primary and, if it is unavailable, from a secondary;
SECONDARY: read only from secondary members;
SECONDARY_PREFERRED: read from secondaries and, if none is available, from the primary;
NEAREST: read from the member with the lowest network latency (as measured by pymongo), whether it is a primary or a secondary.

In our case NEAREST takes part of the read load off the PRIMARY by spreading reads across the members, since the SECONDARY members are also readable (with a small replication lag). Writes, as always, go to the PRIMARY.

Note that during the election of a new PRIMARY, while a SECONDARY is being promoted, pymongo can raise OperationFailure, which is also worth handling.

With mongoengine things turned out to be more complicated. The standard way to connect is:

 connect('default', host, port)

That would be fine, but mongoengine.connect is only a thin wrapper that passes its arguments down to pymongo, and at the time it simply did not let me pass everything needed for MONGODB-X509; the same applied to the lower-level mongoengine.register_connection. So I had to "patch" mongoengine to forward the required parameters to pymongo (the limitation is in mongoengine, not in pymongo).

I reported the problem on github, and by the time you read this it may already be resolved (see the update at the end of the article).

With the patched mongoengine, an x.509 connection looks like this:

 import ssl

 from mongoengine import DEFAULT_CONNECTION_NAME, register_connection
 from pymongo import ReadPreference

 db_hosts = "server1.cluster.com:27017,server2.cluster.com:27017"
 db_port = None

 ssl_config = {
     'ssl': True,
     'ssl_certfile': "/path_to_certs/analyticsuser.PEM",
     'ssl_cert_reqs': ssl.CERT_REQUIRED,
     'ssl_ca_certs': "/path_to_certs/mongodb-CA-cert.crt",
 }

 register_connection(alias=DEFAULT_CONNECTION_NAME,
                     name="analytics",
                     host=db_hosts,
                     port=db_port,
                     username="CN=username,OU=StatisticsClient,O=SomeSystems,L=Moscow,ST=MoscowRegion,C=RU",
                     password=None,
                     read_preference=ReadPreference.NEAREST,
                     authentication_source="$external",
                     authentication_mechanism="MONGODB-X509",
                     **ssl_config)

I will not give a usage example for MongoEngine itself: once the connection is registered, the work does not differ from ordinary Python/pymongo code; the only non-standard part was the "patched" connection.

I hope this section saves someone time when setting up x.509 authentication with MongoEngine.

Update
The problem described above has since been fixed in mongoengine.

Source: https://habr.com/ru/post/308740/

