Hello! Over the past year we have been migrating to React and thinking about how to spare our users the wait for client-side templating and show them the page as quickly as possible. To that end, we decided to do server-side rendering (SSR) and to improve SEO: not every search engine can execute JS, and those that can spend extra time on execution, while the crawl time allotted to each site is limited.
Let me remind you that server-side rendering means executing JavaScript code on the server in order to hand the client ready-made HTML. This improves user-perceived performance, especially on weak machines and slow connections. There is no need to wait until the JS is downloaded, parsed, and executed: the browser only has to render the HTML right away, and the user can start reading the content without waiting for the JS.
This shortens the passive waiting phase. After the render, the browser walks the ready DOM, checks that it matches what was rendered on the client, and attaches event listeners. This process is called hydration. If during hydration the content from the server diverges from what the browser generates, we get a warning in the console and an extra re-render on the client. This must not happen: you have to make sure the server and client render results match. Any divergence should be treated as a bug, since it negates the advantages of server rendering. If a particular element legitimately has to diverge, you need to add suppressHydrationWarning={true} to it.
In addition, there is one nuance: there is no window on the server. Code that accesses it must run only in lifecycle methods that are not invoked on the server side. That is, you cannot use window in UNSAFE_componentWillMount() or, in the case of hooks, in useLayoutEffect().
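For example, browser-only APIs can be wrapped in an environment check (a minimal sketch; the helper names below are mine, not from the article):

```javascript
// Guard browser-only APIs so the same component code can run on the server.
// isBrowser and getViewportWidth are illustrative names, not from the article.
const isBrowser = typeof window !== 'undefined';

function getViewportWidth() {
    // On the server there is no window, so fall back to a default value.
    return isBrowser ? window.innerWidth : 0;
}
```

Such guarded code, or code placed in useEffect()/componentDidMount(), only touches window where it actually exists.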
In essence, the server rendering process boils down to getting the initialState from the backend, running it through renderToString(), picking up the resulting initialState and HTML, and handing them to the client.
At hh.ru, client JS is only allowed to call the API gateway, written in Python. This is for security and load balancing. Python fetches the data from the necessary backends, prepares it, and hands it to the browser. Node.js is used only for server rendering. So after Python has prepared the data, an extra round trip to the Node service is needed: wait for the result, then send the response to the client.
First we had to choose an HTTP server. We settled on koa. We liked the modern syntax with await, and the modularity: middleware is lightweight, can be installed separately when needed, or easily written by hand. The server itself is small and fast. And the fact that koa is written by the same team that develops express, with all their experience, was a point in its favor.
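To illustrate how lightweight koa middleware is, here is one written by hand (a sketch; the response-time header is just an example of mine, not something from the article):

```javascript
// A koa-style middleware is just an async function of (ctx, next).
// This one measures how long the downstream handlers took.
function responseTime() {
    return async (ctx, next) => {
        const start = Date.now();
        await next(); // run the rest of the middleware chain
        ctx.set('X-Response-Time', `${Date.now() - start}ms`);
    };
}
```

Plugging it in is a one-liner: app.use(responseTime());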
Once we had learned how to roll out our service, we wrote the simplest koa code that could answer 200 and shipped it to production. It looked like this:
const Koa = require('koa');

const app = new Koa();
const SERVER_PORT = 9400;

app.use(async (ctx) => {
    ctx.body = 'Hello World';
});

app.listen(SERVER_PORT);
At hh.ru, all services live in Docker containers. Before the first release you have to write Ansible playbooks, which roll the service out to production and to the test stands. Every developer and tester has their own test environment, as close to production as possible. We spent the most time and effort on writing these playbooks, because two front-enders were doing it and this was the first Node service at hh.ru. We had to figure out how to: switch the service into development mode, and do so in parallel with the service being rendered; deliver files into the container; run a bare server so the Docker container could start without waiting for the build; build and rebuild the server together with the service using it; and determine how much RAM we need.
Development mode provides automatic rebuild and restart of the service whenever files included in the final build change. Node has to be restarted to load the new executable code. Webpack watches for changes and rebuilds; it is also needed to convert ESM to CommonJS. For restarts we took nodemon, which watches the built files.
Next, we taught the server routing. For correct balancing you need to know which server instances are alive. To check this, the ops heartbeat polls /status every few seconds and expects a 200 in response. If the server fails to respond more times than the config allows, it is removed from the balancer. This turned out to be a simple task: a couple of lines and the routing is ready:
export default async function(ctx, next) {
    if (routeMap[ctx.request.path]) {
        routeMap[ctx.request.path](ctx);
    } else {
        ctx.throw(NOT_FOUND, getStatusText(NOT_FOUND));
    }
    await next();
}
And we answer 200 at the right URL:
export default (ctx) => {
    ctx.status = 200;
    ctx.body = '200';
};
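The routeMap itself is not shown in the article; presumably it is just a plain object mapping paths to handlers, something like:

```javascript
// Hypothetical shape of routeMap (an assumption, the article never defines it):
// path -> handler. The '/status' handler matches the one shown above.
const routeMap = {
    '/status': (ctx) => {
        ctx.status = 200;
        ctx.body = '200';
    },
};
```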
After that, we made a primitive server that returned the state in a <script> tag along with the ready HTML.
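The article does not show that server's code, but the response it describes, rendered HTML plus state in a <script> tag, can be sketched roughly like this (all names here are illustrative assumptions):

```javascript
// Build the SSR response: rendered HTML plus the serialized initialState,
// which the client later picks up for hydration.
// buildPage and __INITIAL_STATE__ are assumed names, not from the article.
function buildPage(appHtml, initialState) {
    // Escape "<" so a "</script>" inside the state cannot break out of the tag.
    const stateJson = JSON.stringify(initialState).replace(/</g, '\\u003c');
    return [
        '<!DOCTYPE html><html><body>',
        `<div id="root">${appHtml}</div>`,
        `<script>window.__INITIAL_STATE__ = ${stateJson};</script>`,
        '</body></html>',
    ].join('');
}
```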
Then we needed control over how the server was running, which meant wiring up logging and monitoring. Logs are written not in JSON but in the log format of the rest of our services, which are mostly in Java. Based on benchmarks we chose log4js: it is fast, easy to configure, and writes in the format we need. A common log format simplifies monitoring support: no extra regexes are needed to parse the logs. Besides logs, we also report errors to Sentry. I will not show the logger code; it is very simple and mostly consists of settings.
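The idea of a shared log format can be boiled down to a single formatting function (a sketch; the exact layout below is my assumption, the real format is whatever the Java services use):

```javascript
// Format a log line in a Java-style "timestamp LEVEL category message" layout,
// so the existing monitoring regexes can parse Node logs too.
function formatLogLine(level, category, message, date = new Date()) {
    return `${date.toISOString()} ${level.toUpperCase()} ${category} ${message}`;
}
```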
Then we had to provide a graceful shutdown: when the server becomes unhealthy, or when a release rolls out, the server stops accepting new incoming connections but finishes the requests already in flight. There are many ready-made solutions for Node. We took http-graceful-shutdown; all we had to do was wrap the listen call: gracefulShutdown(app.listen(SERVER_PORT)).
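Under the hood, such a wrapper does roughly the following (a hand-rolled sketch of the idea, not the actual http-graceful-shutdown code):

```javascript
// On a termination signal, stop accepting new connections.
// server.close() lets in-flight requests finish before invoking its callback.
function enableGracefulShutdown(server) {
    const shutdown = () => {
        server.close(() => process.exit(0));
    };
    process.on('SIGTERM', shutdown);
    process.on('SIGINT', shutdown);
}
```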
At this point we had a production-ready solution. To check how it worked, we enabled server rendering for 5% of users on one page. Looking at the metrics, we saw a significant FMP improvement on mobile; on desktop the values barely changed. Then we started load testing and found out that one server holds about 20 RPS (our Java developers were greatly amused by this fact). We dug into the reasons:
One of the main problems turned out to be that we built without NODE_ENV=production (we set the ENV value our server build needed instead). In that case React ships its non-production build, which runs about 30% slower.
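A cheap way to avoid repeating this mistake is a startup guard (a sketch; the function name and message are mine):

```javascript
// Fail fast if the process is not running a production build:
// without NODE_ENV=production React uses its development build, ~30% slower.
function assertProductionBuild(env = process.env.NODE_ENV) {
    if (env !== 'production') {
        throw new Error(`NODE_ENV is "${env}", expected "production"`);
    }
    return true;
}
```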
We raised the Node version from 8 to 10 and gained about 20-25% in performance.
The last thing we did was launch Node on several cores. We suspected this would be very difficult, but here, too, everything turned out to be quite prosaic. Node has a built-in mechanism, cluster. It lets you run the required number of independent processes, including a master process that distributes tasks among them.
const cluster = require('cluster');

if (cluster.isMaster) {
    cluster.on('exit', (worker, exitCode) => {
        if (exitCode !== SUCCESS) {
            cluster.fork();
        }
    });

    for (let i = 0; i < serverConfig.cpuCores; i++) {
        cluster.fork();
    }
} else {
    runApp();
}
In this code, the master process starts and forks worker processes according to the number of CPUs allocated to the server. If a child process exits with a non-zero code (that is, unless we shut the server down ourselves), the master process restarts it. And performance grows roughly in proportion to the number of CPUs dedicated to the server.
As I wrote above, most of the time was spent writing the original playbooks, about 3 weeks. Writing the entire SSR took about 2 weeks, and over the next month or so we slowly polished it. All of this was done by two front-enders without enterprise Node.js experience. Do not be afraid to do SSR; the main thing is not to forget NODE_ENV=production. There is nothing complicated about it, and SEO and your users will thank you.
Source: https://habr.com/ru/post/445816/