
Headless CMS: why I'm writing my own

Hello!

This recent article (I saw it just yesterday) prompted me to write this post.

I won't retell the main features of Headless / content-first / API-first CMSs: there is plenty of material on the subject, and many are probably already familiar with the trend. What I want to explain is why and how I'm writing my own system, why I couldn't pick one of the existing ones, what I think of the other systems I've encountered, and what prospects I see in all of this. The write-up will be long (there's two years' worth of material), but I'll try to keep it interesting and useful. If you're curious, read on.

The story really is long, and I'll start by telling it: partly so the true reasons for creating this engine are clearer, and partly because it would otherwise be hard to explain point by point why I do things this way and not another.
But to start, I'll nevertheless briefly outline my personal criteria for choosing a modern Headless CMS and why I still couldn't pick a ready-made solution for myself. People often give up partway through a long read and never find out what the point was.

In short: I wanted everything in one place: both the back end and the front end (not one or the other), a GraphQL API, a managed database, and much more besides, including a "Make it beautiful" button. I couldn't find that anywhere. I haven't fully built it all myself yet either, but on the whole a great deal has come together, and most importantly, it lets me build real projects.

Now, my approach can hardly be called scientific or well-reasoned. The fact is that I very often write things of my own; I simply like programming. Two years ago (and for another eight years before that) I was on MODX CMF (for which I also invented plenty of crutches of my own). About three years ago we started one fairly large-scale project for which, it seemed to me, MODX would do. It turned out it wouldn't. The main reason was that it was a startup with no spec at all, with a pile of ideas that changed and grew every day (several times a day, in fact). And every time, for each new idea, some new entity had to be added, fields had to be created or changed on existing ones, and relations between entities had to be created, deleted, or changed (i.e. the database structure had to change). At some point, changing these entities began to take several hours: besides writing the changes into the schema, I had to alter the database (almost by hand), update the API, rewrite application code, and so on and so forth. And the front end had to be updated to match all of it. In the end I decided we should look for something new and more convenient that would somehow simplify all this. Once again, I'll clarify that at the time I was a PHP back-end developer, so don't be surprised (or laugh) that this is when I started discovering front-end bundlers, LESS preprocessors, npm, and so on. One way or another, our project gradually acquired a front end on React + LESS, an API on GraphQL, and a server on Express.

But not everything was as rosy as it might seem to many now. Let me remind you, this was more than two years ago. If you've been in the modern JS web for less than two years, I recommend the article "N reasons to use Create React App" (on Habr). The short version for the lazy: with the advent of react-scripts you no longer have to fiddle with webpack configuration and the like; all of that fades into the background. Good people have already configured webpack so that most React projects are almost guaranteed to work on it, and the end developer focuses on programming the actual product rather than configuring heaps of dependencies, loaders, and so on. But that came later. Before that, I had to configure webpack myself, track updates of the whole pile of things that came with it, and so on. And that's only part of the job, really just the front end. You still need a server. You still need an API. And you still need SSR (server-side rendering), which, by the way, react-scripts still doesn't provide, as far as I know. In general, everything was much harder then than it is now; a lot of things simply didn't exist, and I crutched along as best I could. And oh, how I crutched back then...

Just imagine:


The result: one project with 500+ daily visitors is still running on one of the first versions, and in season (winter) that's 1,000–1,700 uniques per day. Uptime is 2 months, and only because I restarted the server myself after a preventive software update; before that reboot, uptime was 6+ months. But the most interesting part is memory consumption: at the moment the JS process takes almost 700 MB. Yes, yes, I'm laughing right along with you :) Of course that's a lot. And I had already done some maintenance and improved that figure; it used to be over 1,000 MB per process... Nevertheless, it worked and was quite tolerable. Moreover, before Google changed the PageSpeed Insights algorithms in November, the site had a performance score of 97/100. Proof

An interim conclusion from this project, and from the platform that was developed further without it (the project itself stayed behind):

Pros

  1. The project's API became more flexible thanks to GraphQL, and the number of requests to the server dropped significantly.
  2. The project gained access to a huge number of components on npm.
  3. Project management became more transparent thanks to dependencies, git, and so on.
  4. Bundled scripts and styles are certainly nicer than the pile of separate scripts on old sites, where you never know what can be removed from the zoo without consequences (and it's not rare to see several versions of the same library on one site).
  5. The site became more interactive: pages work without reloading, and returning to previously viewed pages doesn't require repeated requests to the server.
  6. Data is edited right on the page, on the principle of "edit what you see, where you see it," without any separate admin panel.

Cons (mostly for the developer)

  1. Everything is very complicated. Really. Bringing a third-party developer onto the project is simply unrealistic. I myself barely understood what works how and where things come from. If you look at point 3 of the pros, the one about transparency: the transparency amounts only to this, that if you break something somewhere, it's immediately obvious that it's broken (the scripts don't build, etc.), and from commits and diffs you can find where you broke it. And if you managed to add something new and it works, then fine, everything landed normally. But on the whole it's still hell.
  2. Difficulties with caching. Later I discovered apollo-client; before that, as I said, I wrote my own Flux stores. Thanks to those stores it was possible to get the data needed for rendering from different components, but the cache on the client side was very large (each set of typical entities had its own store). As a result, it was hard to tell whether an object had already been requested (that is, whether a request to the server was needed to find it), whether all its related data was available, and so on.
  3. Difficulties with schemas, database structure, and resolvers (the API functions for fetching/modifying data). As I said, I wrote the schemas by hand, and the resolvers too. On top of that, in the resolvers I tried to provide caching, handling of subqueries, and other subtleties. At that point I had to dive very deep into the essence of GraphQL and its code. The plus is that I now understand quite well how GraphQL works, what its pros and cons are, and how best to cook it. The minus is that, of course, you can't single-handedly write all the conveniences and goodies produced by teams like Apollo. So when I discovered Apollo, I naturally took great pleasure in using their components (though mostly on the front end; I'll explain why below).

Overall, this project on outdated technologies is 100% mine, so I can afford to shelve it until better times. But there are other projects for which I had to go further and develop the platform. Several times I had to rewrite practically everything from scratch. Next, I'll talk in more detail about the individual problems I ran into and the solutions I eventually developed and applied.

Schema-first. First the schema, and then everything else

A site (web interface, thin client, etc.) is, above all, a display of information (well, and information management, if permissions and functionality allow). But before anything else there is the database (tables, columns, etc.). Having run into several different approaches to working with a database, I liked the schema-first approach most of all. You describe the schema of entities and data types (by hand or through an interface), deploy the schema, and the described changes are immediately applied to the database (tables and columns are created/deleted, along with the relations between them). Depending on the implementation, all the necessary resolver functions for managing this data are generated as well. The project I liked most in this direction is prisma.io.
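
To make the cycle concrete, here is a minimal sketch in the Prisma 1 style; the Article model, its fields, and the query are my own illustrative example, not a fragment of the real platform:

  // 1. Describe the data model (schema-first):
  const datamodel = `
    type Article {
      id: ID! @unique
      title: String!
      text: String
    }
  `;

  // 2. Deploy it (in Prisma 1: `prisma deploy`): the Article table and its
  //    columns are created or updated in the database automatically.

  // 3. Ready-made CRUD operations appear immediately, for example:
  const articlesQuery = `
    query {
      articles(where: { title_contains: "CMS" }) {
        id
        title
      }
    }
  `;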

With your permission, since I haven't seen a single article about Prisma even on Habr, I'll give it some attention, because the project is really very interesting, and without it I wouldn't have the platform that pleases me so much now. That's actually why I called my platform prisma-cms: prisma.io plays a very large role in it.

Strictly speaking, prisma.io is a SaaS project, but with a big caveat: almost everything they make is published on GitHub. That is, you can use their servers for a very reasonable fee (and have a database and API configured for you in a matter of minutes), or you can deploy everything yourself. Logically, Prisma should be divided into two important separate parts:

  1. Prisma-server: the server where the database actually runs.
  2. Prisma-client: essentially also a server, but one that acts as a client with respect to the data source (the prisma-server).

Now I'll try to untangle this confusing situation. The essence of Prisma is to use a single API endpoint to work with various data sources. Yes, at this point anyone will say that all of this was already invented in GraphQL and Prisma isn't needed here. And in general they'd be right, but there is a serious point: GraphQL only defines the principles and the general mechanics; by itself it does not provide out-of-the-box work with actual data sources. It says: "You can create an API schema describing which requests users can send, but how you handle those requests is your problem." Prisma, of course, also uses GraphQL (and a lot of other things besides, including various Apollo products). But on top of that, Prisma provides the work with the database itself: when you describe a schema and deploy it, the necessary tables and columns (and the relations between them) are created in the specified database, and all the necessary CRUD functions are generated at once. That is, with Prisma you get not just a GraphQL server but a fully working API that can talk to the database right away.

So the prisma-server provides the database and interaction with it, while the prisma-client lets you write your own resolvers and send requests to the prisma-server (or somewhere else entirely, even to several prisma-servers). It turns out you can deploy only a prisma-client (with SaaS prisma.io acting as the prisma-server), or you can also deploy the prisma-server yourself and not depend on Prisma the company at all: everything will be yours.
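
To make this concrete, here is a minimal sketch (under my own assumptions, not the actual prisma-cms code) of a prisma-client layer talking to a prisma-server via the prisma-binding package from the Prisma 1 era; the schema path and endpoint URL are illustrative:

  const { Prisma } = require('prisma-binding');

  // Connection to the prisma-server (a local one on port 4466 here):
  const db = new Prisma({
    typeDefs: 'src/generated/prisma.graphql', // schema generated on deploy
    endpoint: 'http://localhost:4466',
  });

  // A custom resolver can simply delegate to the generated CRUD API:
  const resolvers = {
    Query: {
      users: (parent, args, ctx, info) => db.query.users(args, info),
    },
  };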

It was Prisma that I chose as the basis for my platform. But then I had to adapt it to my needs to get a full-fledged platform.

1. Merging schemas


At the time, Prisma didn't know how to merge schemas. The problem is as follows:

You have a User model described in one module:

  type User {
    id: ID! @unique
    username: String! @unique
    email: String @unique
  }

and in another module

  type User {
    id: ID! @unique
    username: String! @unique
    firstname: String
    lastname: String
  }

Within one project, you want these two schemas to be merged automatically, producing:

  type User {
    id: ID! @unique
    username: String! @unique
    email: String @unique
    firstname: String
    lastname: String
  }

At the time, Prisma couldn't do this. It turned out to be doable with the merge-graphql-schemas library.
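
A minimal sketch of the idea (my example, not the platform's actual code): merge-graphql-schemas exposes mergeTypes, and the all: true option merges same-named types instead of failing on the duplicate:

  const { mergeTypes } = require('merge-graphql-schemas');

  // Partial User definitions coming from two different modules:
  const moduleA = `
    type User {
      id: ID! @unique
      username: String! @unique
      email: String @unique
    }
  `;

  const moduleB = `
    type User {
      id: ID! @unique
      username: String! @unique
      firstname: String
      lastname: String
    }
  `;

  // Produces a single User type with the union of all fields:
  const mergedSchema = mergeTypes([moduleA, moduleB], { all: true });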

2. Working with an arbitrary prisma-server


In Prisma, the configuration is written in a special config file. If you want to change the address of the prisma-server in use, you have to edit the file. A trifle, but unpleasant. I wanted the URL to be specifiable right in the command, for example: endpoint=http://endpoint-address yarn deploy (or yarn start). A few days were killed on this... But now one Prisma project can be used with any number of endpoints. By the way, prisma-cms works just as easily with a local database as with a SaaS prisma-server.
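
The idea itself is trivial; here is a rough sketch of how such an override can work (the variable name endpoint matches the command above; the default URL is illustrative):

  // Take the endpoint from the environment instead of the config file,
  // so that `endpoint=http://my-server:4466 yarn deploy` just works:
  const endpoint = process.env.endpoint || 'http://localhost:4466';

  console.log(`Deploying to ${endpoint}`);
  // ...from here, pass `endpoint` on to the deploy/start logic.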

3. Modules / Plugins


This was sorely missing. As I said, Prisma's main task is to provide work with various databases, and it does that excellently. It already supports MySQL, PostgreSQL, Amazon RDS, and MongoDB, with several more source types on the way. But it provides no modular infrastructure. There is no marketplace or anything like it yet, only a few typical starter templates. And you can't pick two or three of those templates and install them into one project; you have to choose just one. I wanted it to be possible to install any number of modules into the final project, so that on build the schemas and resolvers would be merged into a single project with the combined functionality. And although there's no graphical interface for this yet, there are already more than two dozen working modules and components that can be combined in a final project. Right away, let me pin down my personal definitions: a module is what is installed on the back end (extending the database and API), and a component is what is installed on the front end (adding various interface elements). There's no graphical interface for connecting modules yet either, but it's not hard for me to write it like this (it isn't something you do often):

  constructor(options = {}) {
    super(options);

    this.mergeModules([
      LogModule,
      MailModule,
      UploadModule,
      SocietyModule,
      EthereumModule,
      WebrtcModule,
      UserModule,
      RouterModule,
    ]);
  }

After adding new modules, it's enough to run the deploy again with a single command, and that's it: we have new tables/columns and the added functionality.

5. The front end reacting to changes on the back end


This was sorely lacking too. Allow me a lyrical digression. Every API-first CMS I've seen says, "We provide an awesome API, and you bolt on whatever front end you want." In practice, this "bolt on whatever you want" actually means "struggle however you like." In exactly the same way, UI frameworks say, "Look what cool buttons and widgets we make; go figure out the back end yourself." This is always a killer. I wanted to find a simple integrated CMS written in JavaScript that uses GraphQL and provides both the back end and the front end at once. I didn't find one. And I really wanted API changes to be picked up on the front end immediately. For that, several sub-steps were taken:

5.1 Generating API Fragments


On the front end, queries are built from fragments derived from the schema file. Whenever the API server is rebuilt, a new JS file with API fragments is generated. In queries it's used like this:

  const {
    UserNoNestingFragment,
    EthAccountNoNestingFragment,
    NotificationTypeNoNestingFragment,
    BatchPayloadNoNestingFragment,
  } = queryFragments;

  const userFragment = `
    fragment user on User {
      ...UserNoNesting
      EthAccounts {
        ...EthAccountNoNesting
      }
      NotificationTypes {
        ...NotificationTypeNoNesting
      }
    }
    ${UserNoNestingFragment}
    ${EthAccountNoNestingFragment}
    ${NotificationTypeNoNestingFragment}
  `;

  const usersConnection = `
    query usersConnection(
      $where: UserWhereInput
      $orderBy: UserOrderByInput
      $skip: Int
      $after: String
      $before: String
      $first: Int
      $last: Int
    ) {
      objectsConnection: usersConnection(
        where: $where
        orderBy: $orderBy
        skip: $skip
        after: $after
        before: $before
        first: $first
        last: $last
      ) {
        aggregate {
          count
        }
        edges {
          node {
            ...user
          }
        }
      }
    }
    ${userFragment}
  `;
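
For completeness, a sketch of how such a generated query might be sent with an Apollo client (the apollo-boost setup, the URL, and the orderBy value are my assumptions; graphql-tag's gql accepts a plain string):

  import ApolloClient from 'apollo-boost';
  import gql from 'graphql-tag';

  // An apollo-boost client pointed at the API (URL is illustrative):
  const client = new ApolloClient({ uri: 'http://localhost:4000/api' });

  client
    .query({
      query: gql(usersConnection),
      variables: {
        first: 10,
        orderBy: 'createdAt_DESC', // Prisma 1 style orderBy enum
      },
    })
    .then(({ data }) => {
      // objectsConnection is the alias declared in the query above
      console.log(data.objectsConnection.aggregate.count);
    });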

5.2 Unified context for all components


React 16.3 introduced a new context API. I set things up so that child components at any level can access a single shared context without enumerating the required types from the context: you simply declare static contextType = PrismaCmsContext and get all the goodies through this.context (including the API client, query fragments, etc.).
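
A minimal sketch of the pattern (PrismaCmsContext is created locally here for illustration; in the platform it is provided ready-made):

  import React, { Component, createContext } from 'react';

  // In prisma-cms this context object comes from the platform itself:
  export const PrismaCmsContext = createContext(null);

  export class UsersList extends Component {
    // One static field instead of enumerating context types (React >= 16.3):
    static contextType = PrismaCmsContext;

    render() {
      // Everything in one place: the API client, query fragments, etc.
      const { client, queryFragments } = this.context;

      return null; // render the list of users via `client` here
    }
  }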

5.3 Dynamic filters


This was also very desirable. GraphQL lets you build complex queries with nested structure. I wanted the filters to be dynamic as well: generated from the API schema and allowing nested conditions. Here's what came out:
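
As an illustration of the kind of condition such filters produce, here is a nested filter in the style of Prisma 1 where-inputs (the field names are hypothetical):

  // A nested condition the filter UI could assemble from the schema
  // and pass as the `where` variable of usersConnection:
  const where = {
    OR: [
      { username_contains: 'john' },
      {
        AND: [
          { createdAt_gt: '2019-01-01' },
          { EthAccounts_some: { address_not: null } },
        ],
      },
    ],
  };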


5.4 Website Builder


And finally, what I lacked was an external site editor, i.e. a page builder. I wanted the server to perform only a minimum of actions, with all final assembly happening on the front end (including setting up routing, building data selections, etc.). This is a topic for a separate article, because among other things I also wrote my own crutch of a WYSIWYG editor for it on pure contentEditable, and there are a lot of subtleties. If I get around to it and anyone is interested, I'll write a separate article.

And to wrap up, a short demo video of the builder in action. It's still quite raw, but I like it.


That's all for now. There's a lot I haven't written yet that I'd like to, but this is already plenty. I'd be glad to see your comments.

P.S. All the sources, including the source of the site itself, are here.

Source: https://habr.com/ru/post/448982/

