Hello, my name is Alexander Zelenin, and I am a web developer.
Unfortunately, information about full-scale development of applications on MeteorJS is rather fragmented, and you end up solving a million tasks manually. In this article I want to cover the most basic (but already sufficient for safe operation in production) server configuration and the subsequent manual deployment process.
We will deploy on Ubuntu 16.04, but in general the scheme is 99% the same for Debian 8.
In fact, I am far from being a sysadmin, so I will be very glad to receive suggested corrections, but on the whole the scheme works.
During installation, choose XFS as the file system: MongoDB gets along well with it.
If we have root access, the first thing to do is create a user.
adduser zav                  # instead of zav, use your %username%
apt-get install sudo
usermod -aG sudo zav         # add the user to the sudo group
vi /etc/ssh/sshd_config      # vi can be replaced with nano if you prefer:
                             # apt-get install nano
We need to change the port to a random one of your choice; this alone protects against some of the attacks on the SSH server.
Port 355            # instead of the default Port 22
PermitRootLogin no  # forbid logging in as root over SSH
Restart SSH:
/etc/init.d/ssh restart
Now we reconnect via SSH on the new port (355) as the newly created user.
I use 2 separate physical disks to minimize the chance of failure.
You can place everything on 1 at your own risk, the scheme will not change.
In that case you can use the /secondary folder in the same way, just without mounting a disk into it.
sudo cfdisk /dev/sdb      # partition the second disk (sdb);
                          # create one partition spanning 100% of the disk
sudo mkfs.xfs /dev/sdb1   # format it as XFS
sudo mkdir /secondary
sudo vi /etc/fstab        # edit fstab so the disk mounts automatically
Add to the end (if there is no second disk, omit the first line):
tmpfs is a file system located in RAM. That is, writing to the /data/inmemory partition writes to RAM, not to disk. Keep in mind that it is cleared on reboot. size sets the size of this area; your task is to make it large enough to hold MongoDB with all its indexes. In my case the server has 128 GB of RAM, so 32 gigabytes were allocated for tmpfs.
/dev/sdb1 /secondary xfs defaults 0 2
tmpfs /data/inmemory tmpfs size=25% 0 0
The actual installation method can be viewed on the official website.
In this example, version 3.4 is installed on Ubuntu 16.04.
Install:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
sudo apt-get update
sudo apt-get install mongodb-org
Create folders, configs and set permissions:
sudo mkdir /data
sudo mkdir /data/db
sudo mkdir /data/inmemory
sudo mkdir /secondary/data
sudo mkdir /secondary/data/db
sudo vi /data/mongod-memory.conf
An approach where the primary lives in memory can be dangerous: on a sudden power-off, data that has not yet reached the replicas is lost. In my case losing 1-2 seconds of data is not critical, because the entire application is written so that it can recover from any state. Financial data is written with a write concern confirming that the data already exists on a replica (that is, on disk).
If your application is not ready for this, you can opt out of the in-memory instance and do everything classically. It is enough to remove the tmpfs mount and slightly modify the config, making it similar to mongod-sec-d1.conf.
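For illustration, here is a sketch (not from the original article) of what such a confirmed write looks like from the mongo shell. The database, user, and password are the examples used later in this article, and the payments collection is hypothetical:

```shell
# Ask MongoDB to acknowledge the write only after at least two replica set
# members have it, i.e. at least one on-disk secondary (w: 2), with a 5 s timeout.
mongo localhost:27000/consulwar -u consulwar -p '37q4Re7m432dtDq' \
    --authenticationDatabase consulwar \
    --eval 'db.payments.insert({amount: 100}, {writeConcern: {w: 2, wtimeout: 5000}})'
```

With w: 2, losing the in-memory primary after acknowledgement can no longer lose the document, since a disk-backed member already has it.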
processManagement:
  fork: true
  pidFilePath: "/data/inmemory/mongod.pid"
storage:
  dbPath: "/data/inmemory"
  journal:
    enabled: false # no point journaling: the data lives in tmpfs anyway
  indexBuildRetry: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8 # WiredTiger cache size; tune it to your RAM,
                     # keeping in mind the other instances on this machine
systemLog:
  destination: "file"
  path: "/var/log/mongodb/mongodb.log"
  logAppend: true
  quiet: false
  verbosity: 0
  logRotate: "reopen"
  timeStampFormat: "iso8601-local"
net:
  bindIp: 127.0.0.1 # listen on localhost only for now
  port: 27000
  http:
    enabled: false # disable the legacy HTTP status interface
    JSONPEnabled: false
    RESTInterfaceEnabled: false
  ssl: # no ssl inside the machine
    mode: disabled
security:
  authorization: "enabled"
  keyFile: "/data/mongod-keyfile" # shared key for replica set members
  javascriptEnabled: false # server-side JS ($where, mapReduce) is not used
replication:
  oplogSizeMB: 4096 # oplog size; determines how long a member can be
                    # offline and still catch up from the oplog
  replSetName: "consulwar"
  enableMajorityReadConcern: false # majority reads are not needed here
operationProfiling:
  slowOpThresholdMs: 30 # log operations slower than 30 ms
  mode: "slowOp"
sudo vi /data/mongod-sec-d1.conf
For the most part the config repeats; it differs only in a couple of places. Still, the full file is given here for convenience.
processManagement:
  fork: true
  pidFilePath: "/data/db/mongod.pid"
storage:
  dbPath: "/data/db"
  journal:
    enabled: true # this instance lives on disk, so journaling is on
  indexBuildRetry: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8 # same size as the in-memory instance: if the Primary
                     # fails, a secondary takes over and needs the same cache
systemLog:
  destination: "file"
  path: "/var/log/mongodb/mongodb.log"
  logAppend: true
  quiet: false
  verbosity: 0
  logRotate: "reopen"
  timeStampFormat: "iso8601-local"
net:
  bindIp: 127.0.0.1
  port: 27001
  http:
    enabled: false
    JSONPEnabled: false
    RESTInterfaceEnabled: false
  ssl:
    mode: disabled
security:
  authorization: "enabled"
  keyFile: "/data/mongod-keyfile"
  javascriptEnabled: false
replication:
  oplogSizeMB: 4096
  replSetName: "consulwar"
  enableMajorityReadConcern: false
operationProfiling:
  slowOpThresholdMs: 30
  mode: "slowOp"
sudo vi /data/mongod-sec-d2.conf
The difference, in fact, is only in the database path and the port used.
processManagement:
  fork: true
  pidFilePath: "/secondary/data/db/mongod.pid"
storage:
  dbPath: "/secondary/data/db"
  journal:
    enabled: true
  indexBuildRetry: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8
systemLog:
  destination: "file"
  path: "/var/log/mongodb/mongodb.log"
  logAppend: true
  quiet: false
  verbosity: 0
  logRotate: "reopen"
  timeStampFormat: "iso8601-local"
net:
  bindIp: 127.0.0.1
  port: 27002
  http:
    enabled: false
    JSONPEnabled: false
    RESTInterfaceEnabled: false
  ssl:
    mode: disabled
security:
  authorization: "enabled"
  keyFile: "/data/mongod-keyfile"
  javascriptEnabled: false
replication:
  oplogSizeMB: 4096
  replSetName: "consulwar"
  enableMajorityReadConcern: false
operationProfiling:
  slowOpThresholdMs: 30
  mode: "slowOp"
Add the key required for the replica set to work, and set permissions on the folders:
openssl rand -base64 741 > ~/mongod-keyfile
sudo mv ~/mongod-keyfile /data/mongod-keyfile
sudo chmod 600 /data/mongod-keyfile
sudo chown mongodb:mongodb -R /data
sudo chown mongodb:mongodb -R /secondary/data
Create startup scripts
sudo apt-get install numactl
sudo mv /lib/systemd/system/mongod.service /lib/systemd/system/mongod@.service
sudo vi /lib/systemd/system/mongod@.service
The @ in the service name means we can run it with a parameter.
This unit also applies all the OS settings MongoDB needs (transparent huge pages, NUMA), which is convenient.
[Unit]
Description=Mongo Database on %i
After=network.target

[Service]
Type=forking
ExecStartPre=/bin/sh -c '/bin/echo never > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStartPre=/bin/sh -c '/bin/echo never > /sys/kernel/mm/transparent_hugepage/defrag'
User=mongodb
Group=mongodb
PermissionsStartOnly=true
ExecStart=/usr/bin/numactl --interleave=all /usr/bin/mongod --config /data/mongod-%i.conf
LimitFSIZE=infinity
LimitCPU=infinity
LimitAS=infinity
LimitNOFILE=64000
LimitNPROC=64000
TasksMax=infinity
TasksAccounting=false

[Install]
WantedBy=multi-user.target
We tell the database to start at system boot, and also start the instances right now.
The parameter goes after the @; for example, memory tells the service to use /data/mongod-memory.conf on startup.
sudo systemctl enable mongod@memory
sudo systemctl enable mongod@sec-d1
sudo systemctl enable mongod@sec-d2
sudo systemctl start mongod@memory
sudo systemctl start mongod@sec-d1
sudo systemctl start mongod@sec-d2
Connect to MongoDB, initialize the replica set, and create the users.
mongo localhost:27000/admin
Run in the mongo console:
rs.initiate({
    _id: "consulwar", // the replica set name
    version: 1,
    protocolVersion: 1,
    writeConcernMajorityJournalDefault: false, // do not require journaled
                                               // majority acknowledgement
    configsvr: false, // this is not a sharded-cluster config server
    members: [
        {
            _id: 0, // member id
            host: 'localhost:27000', // the in-memory instance
            arbiterOnly: false, // a full member, not an arbiter
            buildIndexes: true, // build indexes on this member
            hidden: false, // visible to clients
            priority: 100, // the preferred Primary: highest priority wins
            slaveDelay: 0, // no artificial replication delay
            votes: 1 // takes part in Primary elections
        },
        {
            _id: 1,
            host: 'localhost:27001',
            arbiterOnly: false,
            buildIndexes: true,
            hidden: false,
            priority: 99, // becomes Primary if the first member fails
            slaveDelay: 0,
            votes: 1
        },
        {
            _id: 2,
            host: 'localhost:27002',
            arbiterOnly: false,
            buildIndexes: true,
            hidden: false,
            priority: 98,
            slaveDelay: 0,
            votes: 1
        }
    ],
    settings: {
        chainingAllowed: true, // secondaries may sync from other secondaries
        electionTimeoutMillis: 5000, // how long without a Primary before an
                                     // election starts; failover takes ~7 s in
                                     // total. It can be lowered (e.g. to 500
                                     // for ~1 s) at the risk of spurious elections
        catchUpTimeoutMillis: 2000
    }
});

// create the root user
db.createUser({user: 'zav', pwd: '7Am9859dcb82jJh', roles: ['root']});

// exit the shell: ctrl+c
Connect as our user:
mongo localhost:27000/admin -u zav -p '7Am9859dcb82jJh' --authenticationDatabase admin
Add a user for the application:
use consulwar // the application database

db.createUser({
    user: 'consulwar',
    pwd: '37q4Re7m432dtDq',
    roles: [{
        // read-write access to the application database
        role: "readWrite",
        db: "consulwar"
    }, {
        // read access to local, needed for oplog tailing
        role: 'read',
        db: 'local'
    }]
});
Now a rather important and difficult topic: real-time replication for backup. It will not be covered in full, since that would require configuring a number of other machines, which I would like to avoid in this article.
We will replicate to an external server (otherwise, what's the point of backup? :-)).
MongoDB should be installed on the external server in a similar way.
Approximate config:
processManagement:
  fork: true
  pidFilePath: "/data/db/mongod.pid"
storage:
  dbPath: "/data/db"
  journal:
    enabled: true
  indexBuildRetry: false # no indexes are built on this member
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0 # minimal cache
systemLog:
  destination: "file"
  path: "/var/log/mongodb/mongodb.log"
  logAppend: true
  quiet: false
  verbosity: 0
  logRotate: "reopen"
  timeStampFormat: "iso8601-local"
net:
  bindIp: 222.222.222.222 # the backup server's external IP
  port: 27000
  http:
    enabled: false # disable the HTTP interfaces, as before
    JSONPEnabled: false
    RESTInterfaceEnabled: false
  ssl:
    mode: disabled
security:
  authorization: "enabled"
  keyFile: "/data/mongod-keyfile" # copy the same mongod-keyfile here
  javascriptEnabled: false
replication:
  oplogSizeMB: 0
  replSetName: "consulwar"
  enableMajorityReadConcern: false
operationProfiling:
  mode: "off" # no profiling on the backup
In the backup server's firewall, allow connections to port 27000 ONLY from the IP of the application/database server.
Likewise, on the application/database server, add the external interface (the backup server's IP) to bindIp, and in iptables allow access to ports 27000-27002 ONLY from the IP of the backup server.
When initializing or reinitializing the replica set, we add:
{
    _id: 4,
    host: '222.222.222.222:27000', // the backup server's address
    arbiterOnly: false,
    buildIndexes: false, // no indexes needed on the backup
    hidden: true, // hidden from clients!
    priority: 0, // can never become Primary
    slaveDelay: 0, // could be increased to keep a delayed copy,
                   // as protection against "fat fingers"
    votes: 0 // does not vote in Primary elections
}
Done! Now the data will flow in real-time also in the external backup, which is very cool.
In case of a complete crash of the application, we can initialize the replica in the same way, and it will be restored from backup.
Believe me, it is much faster than mongodump/mongorestore (by personal estimate, 25-100 times).
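For comparison, the classic approach referred to here would look roughly like this (a sketch; the hosts and root credentials are this article's examples, and the /secondary/dump path is an assumption):

```shell
# Dump the application database from an on-disk secondary...
mongodump --host localhost:27001 -u zav -p '7Am9859dcb82jJh' \
    --authenticationDatabase admin --db consulwar --out /secondary/dump
# ...and restore it into the target instance.
mongorestore --host localhost:27000 -u zav -p '7Am9859dcb82jJh' \
    --authenticationDatabase admin /secondary/dump
```

On a large database this is exactly the slow path that replica-based recovery avoids.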
Install Node.js, install the n module, and use it to install Node 4.8.1 (the latest version officially supported by Meteor at the time).
We install pm2, which will run the processes.
sudo apt-get install nodejs
sudo apt-get install npm
sudo npm install -g n
sudo n 4.8.1
sudo npm install pm2@latest -g
We add the user under which everything will run and who will be responsible for deployment:
sudo adduser consulwar
Switch to this user:
su consulwar
mkdir app   # the current release will appear here as a symlink
On the local machine, go to the directory with our meteor project.
We create a folder for builds and build the project into it:
mkdir ../build
meteor build ../build/ --architecture os.linux.x86_64
Upload the resulting archive to our server, for example over SFTP, into the ~/app folder.
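One way to do the upload (a sketch: the archive name, the SSH port 355, and the your.server.ip placeholder are assumptions based on this article):

```shell
# Copy the build archive produced by `meteor build` to the app user's home.
scp -P 355 ../build/consulwar-master.tar.gz consulwar@your.server.ip:~/app/
```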
Then connect via SSH as the application user (consulwar in my case).
cd app
mkdir 20170429                        # a folder named after the release date
tar -xvzf consulwar-master.tar.gz -C 20170429
ln -s 20170429/bundle ~/app/current   # "current" points at the active release;
                                      # repoint it (ln -sfn) on the next deploy
(cd current/programs/server && npm install)
vi pm2.config.js                      # pm2 configuration
var settings = { ... }; // paste the contents of your settings.json here

var instances = 10; // how many instances? A good rule of thumb is N-1,
                    // where N is the number of CPU cores

var apps = [];

for (var i = 0; i < instances; i++) {
    apps.push({
        "script": "/home/consulwar/app/current/main.js", // the bundle entry point
        "exec_mode": "fork", // load balancing is done by Nginx, so plain fork
        "name": "consulwar", // process name
        "env": {
            "ROOT_URL": "http://consulwar.ru/", // the application URL
            "HTTP_FORWARDED_COUNT": 1, // number of proxies in front of the app,
                                       // so the real client IP is resolved
            "PORT": 3000 + i, // ports starting from 3000 (3000, 3001, 3002...)
            "MONGO_URL": "mongodb://consulwar:37q4Re7m432dtDq@localhost:27000,localhost:27001,localhost:27002/consulwar?replicaSet=consulwar&readPreference=primary&authSource=consulwar",
            "MONGO_OPLOG_URL": "mongodb://consulwar:37q4Re7m432dtDq@localhost:27000,localhost:27001,localhost:27002/local?replicaSet=consulwar&authSource=consulwar",
            "METEOR_SETTINGS": settings
        }
    });
}

module.exports = { apps: apps };
And run
pm2 startup             # prints a command to enable autostart; run it
pm2 start pm2.config.js
pm2 status              # check that all instances are running
pm2 logs
Done! The application is deployed and should already be available at the server's IP/domain and port, for example http://consulwar.ru:3000.
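A quick smoke test, assuming the article's example domain; any HTTP response confirms the Node processes are answering:

```shell
# Request only the response headers from one of the pm2-managed instances.
curl -I http://consulwar.ru:3000/
```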
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx
sudo vi /etc/nginx/nginx.conf
user www-data; # the user nginx workers run as
worker_processes 6; # number of workers, usually equal to the number of cores
worker_rlimit_nofile 65535;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4000; # connections per worker;
                             # 6 * 4000 = 24,000 connections total
}

http {
    map $http_upgrade $connection_upgrade { # required for websockets
        default upgrade;
        '' close;
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    gzip on; # enable gzip compression
    gzip_comp_level 6;
    gzip_vary on;
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
    gzip_static on; # serve pre-compressed .gz files if present,
                    # e.g. main.js.gz instead of main.js
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    proxy_connect_timeout 60;
    proxy_read_timeout 620;
    proxy_send_timeout 320;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    upstream backends {
        #ip_hash; # could be used to pin clients to instances instead
        least_conn; # send new connections to the least-loaded instance

        # the 10 application instances started by pm2
        server 127.0.0.1:3000 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3001 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3002 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3003 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3004 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3005 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3006 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3007 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3008 weight=5 max_fails=3 fail_timeout=60;
        server 127.0.0.1:3009 weight=5 max_fails=3 fail_timeout=60;
        # a backup instance, used only if the others are down
        server 127.0.0.1:3100 backup;
    }

    # ssl configuration follows
    ssl_certificate /etc/letsencrypt/live/consulwar.ru/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/consulwar.ru/privkey.pem;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_stapling on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:!RC4:!aNULL:!eNULL:!MD5:!EXPORT:!EXP:!LOW:!SEED:!CAMELLIA:!IDEA:!PSK:!SRP:!SSLv3:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

    server {
        server_name consulwar.ru;

        # listen on both 80 and 443
        listen 80;
        listen 443 ssl http2;

        # proxy application traffic to the backends
        location / {
            proxy_pass http://backends;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        # serve static images directly with nginx
        location ~* \.(jpg|jpeg|gif|ico|png)$ {
            root /home/consulwar/app/current/programs/web.browser/app;
        }

        # the hashed css and js bundles, also served by nginx
        location ~* "^/[a-z0-9]{40}\.(css|js)$" {
            root /home/consulwar/app/current/programs/web.browser;
        }

        location ~ "^/packages" {
            root /home/consulwar/app/current/programs/web.browser;
        }

        # nginx status page, accessible only locally
        location /nginx_status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }

        # for SSL certificate validation
        location ~ "^/.well-known" {
            root /home/consulwar/app/current/programs/web.browser/app;
        }
    }

    include /etc/nginx/conf.d/*.conf;
    client_max_body_size 128m;
}
Restart nginx
sudo service nginx restart
We get SSL from Let's Encrypt.
Of course, the domain should already be associated with this IP address.
sudo apt-get install letsencrypt
sudo letsencrypt certonly -a webroot --webroot-path=/home/consulwar/app/current/programs/web.browser/app -d consulwar.ru
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048 # this takes a while
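Note that Let's Encrypt certificates expire after 90 days. Renewal is not covered by the article; one possible sketch is a root crontab entry (the schedule is an arbitrary choice):

```shell
# Added via `sudo crontab -e`: attempt renewal every Monday at 3:00
# and reload nginx so it picks up the refreshed certificate.
0 3 * * 1 letsencrypt renew --quiet && service nginx reload
```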
Done! SSL works.
Finally, we configure the firewall. This script will restore the rules every time a network interface comes up:

sudo vi /etc/network/if-up.d/00-iptables
#!/bin/sh
iptables-restore < /etc/firewall.conf
ip6tables-restore < /etc/firewall6.conf
sudo chmod +x /etc/network/if-up.d/00-iptables
sudo apt-get install xtables-addons-dkms # provides the TARPIT target
sudo vi /etc/firewall.conf
*filter
:INPUT ACCEPT [193038:64086301]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [194475:60580083]

# allow loopback
-A INPUT -i lo -j ACCEPT

# allow already established connections
-A INPUT -m state --state RELATED,ESTABLISHED -p all -j ACCEPT

# SSH
-A INPUT -m state --state NEW -p tcp -m multiport --dport 355 -j ACCEPT

# Nginx
-A INPUT -m state --state NEW -p tcp -m multiport --dport 80,443 -j ACCEPT

# everything else goes to TARPIT, which holds the connection and wastes the attacker's resources
-A INPUT -p tcp -m tcp -j TARPIT

# drop the rest
-A INPUT -p udp -j DROP
COMMIT
sudo vi /etc/firewall6.conf # the same idea for IPv6
sudo iptables-restore < /etc/firewall.conf
iptables is configured and the unnecessary ports are closed.
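To double-check which rules are active (the exact output shape depends on your iptables version):

```shell
# List the INPUT chain with numeric ports; the 355 and 80,443 ACCEPT rules
# and the TARPIT catch-all should be visible.
sudo iptables -L INPUT -n -v
```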
DONE!
If you like the approach/format, I will also write up the configuration of our mail system, infrastructure monitoring, CI via Bamboo, and a number of other things we use.
I have probably missed something (though in general it should work as described); ask, and I will add it.
I hope that someone will now take less time than me :-)
PS: We have a vacancy for a front-end developer.
Source: https://habr.com/ru/post/327624/