I started learning React and Redux not long ago, and they have already managed to fray my nerves quite a bit. Literally every action has to be thought through; almost no change to the code is possible without breaking something. Just to fetch a list of posts from an API and display them, you probably need to write at least a hundred lines of code: create a root container, create a store, add an action for the API request, one for a successful response, one for a failed response, create action creators, map action creators to props, map dispatch to props, write a reducer for each action... Ugh, I don't want to go on. And we have to do all of this again for every web application, which is a very irrational waste of a programmer's effort.
Yes, you can tell a beginner: "Look, there are a dozen packages that can do each item on that list for you. Pick one and use it!" But the problem is that you then have to understand, configure, and use a dozen packages, making sure each one matches the version described in its documentation and that they don't conflict with one another... Too complicated. I want something simpler, as simple as in the world of Django, which I came from: a single package that, once installed, magically puts all the necessary data into the store - just take it and use it.
Well, I decided - if there is no such solution, I will write it myself.
Stripping away all the lyricism from the first paragraph, I get the task - we need to create a tool that will:
Judging by this description, the package will consist of an action creator, a middleware, and a reducer.
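To give a rough idea of what that means in practice, here is a minimal sketch of the shape such a package might take; all names here (loadEntities, freshnessMiddleware, apiReducer) are purely illustrative and are not part of any existing package.

// A sketch only: the three pieces the package is expected to provide.
// An action creator that asks for an entity by name...
export const loadEntities = (entityName) => ({ type: 'LOAD_ENTITIES', entityName });

// ...a middleware that will eventually decide whether to hit the API or reuse cached data...
export const freshnessMiddleware = store => next => action => next(action);

// ...and a reducer that keeps normalized entities plus the time they arrived.
export const apiReducer = (state = { entities: {}, timestamp: {} }, action) => state;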
Fortunately, as mentioned in the first paragraph, many things in the JS world have long since been written, and we won't have to write them again. For example, we'll talk to the API using redux-api-middleware, keep the data immutable using react-addons-update, and normalize the data (where would we be without that?) using normalizr.
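To make the roles of these libraries concrete, here is a minimal sketch of how a fetched response could be normalized and then merged into the state immutably. The response shape and the variable names are assumptions made purely for illustration.

import { normalize, schema } from 'normalizr';
import update from 'react-addons-update';

// Assumed entities: posts that reference their author.
const user = new schema.Entity('users');
const post = new schema.Entity('posts', { author: user });

// An assumed API response with nested authors.
const response = [
  { id: 1, content: "content", author: { id: 1, username: "one" } },
];

// normalizr flattens it into { posts: {...}, users: {...} }.
const { entities } = normalize(response, [post]);

// react-addons-update merges the new entities without mutating the old state.
const prevState = { entities: { posts: {}, users: {} } };
const nextState = update(prevState, {
  entities: {
    posts: { $merge: entities.posts },
    users: { $merge: entities.users },
  },
});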
The most important thing about this package is easy configuration. To describe the data model, the API entry points, and the invalidation of stale data, we need a config, and we'll design the architecture around it. Maybe this is not quite right architecturally, but my opinion is this: first and foremost you should start from the developer's convenience, even if that makes the technical implementation of the code harder.
1. We describe the data schema with related entities, using posts and users as an example:
const schema = { users: {}, posts: { author: "users" } };
Simple, right? And it looks a lot like schema.Entity from normalizr. Yes, we could have used the normalizr classes directly, but I believe that would hurt the convenience of the config. In normalizr, a key must refer not to a plain string, as in our config, but to an entity object, so the config would turn into this:
import { schema } from 'normalizr';

const user = new schema.Entity("users", {});
const post = new schema.Entity("posts", { author: user });

const normalizrSchema = {
  users: user,
  posts: post,
};
And that is much less pretty and convenient than the first option.
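Internally, of course, the package would still have to turn the plain-string config into normalizr entities. A possible conversion might look like the sketch below; the helper name buildNormalizrSchema is hypothetical and only illustrates the idea.

import { schema } from 'normalizr';

// Hypothetical helper: build normalizr Entity objects from the plain config.
function buildNormalizrSchema(schemaConfig) {
  const entities = {};
  // First pass: create an Entity for every key in the config.
  Object.keys(schemaConfig).forEach(name => {
    entities[name] = new schema.Entity(name);
  });
  // Second pass: wire up the relations ({ author: "users" } -> { author: users entity }).
  Object.keys(schemaConfig).forEach(name => {
    const relations = {};
    Object.keys(schemaConfig[name]).forEach(field => {
      relations[field] = entities[schemaConfig[name][field]];
    });
    entities[name].define(relations);
  });
  return entities;
}

const normalizrEntities = buildNormalizrSchema({ users: {}, posts: { author: "users" } });
// normalizrEntities.posts is equivalent to new schema.Entity("posts", { author: normalizrEntities.users })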
2. Entry points and actions for the API.
Here we follow the opposite logic - if a convenient configuration format has already been written by someone before us, why change it? Let's fill the config with the same parameters that are passed to an action in redux-api-middleware, and it turns out quite convenient:
const api = {
  users: {
    endpoint: "mysite.com/api/users/",
    types: ['USERS_GET', 'USERS_SUCCESS', 'USERS_FAILURE'],
  },
  posts: {
    endpoint: "mysite.com/api/posts/",
    types: ['POSTS_GET', 'POSTS_SUCCESS', 'POSTS_FAILURE'],
  },
};
Of course, all the action types could be declared as separate variables rather than string literals; they are strings here purely for simplicity. We are only implementing GET requests, so there is no need for a method field in the config.
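For illustration, a request described by this config could be turned into a redux-api-middleware action roughly as sketched below; fetchEntity is an assumed helper name, and RSAA is the action key exported by recent versions of redux-api-middleware (older versions exported CALL_API instead).

import { RSAA } from 'redux-api-middleware';

// A sketch of an action creator driven entirely by the api config above.
const fetchEntity = (apiConfig, name) => ({
  [RSAA]: {
    endpoint: apiConfig[name].endpoint,
    method: 'GET',                 // we only implement GET requests
    types: apiConfig[name].types,  // [request, success, failure]
  },
});

// usage: store.dispatch(fetchEntity(api, 'posts'));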
3. "Lifetime" data in the store.
Of course, sooner or later the data on the client loses relevance - we cannot blindly rely on data that came from the server some time ago. So we need a mechanism for invalidating stale data, and we'll write the "lifetime" of each data type into the config.
const lifetime = { users: 20000, posts: 100000 };
Let's put all the parts of the config together:
const config = {schema, api, lifetime};
So everything is quite simple - users "live" in the store for 20 seconds, and posts for 100 seconds. As soon as the lifetime expires, we will have to fetch the data again, even if it is already in the store, which means we need to remember when the data arrived. And that brings us to the next point - planning the store.
Here everything is also quite simple - we need to store the data and the time of its arrival. Let's reserve two keys in the store: entities and timestamp. For those already familiar with normalizr it is immediately clear what goes into entities - we will store our entities there, and it will look something like this:
const entities = {
  posts: {
    1: { id: 1, content: "content", author: 1 },
    2: { id: 2, content: "not content", author: 2 },
  },
  users: {
    1: { id: 1, username: "one" },
    2: { id: 2, username: "two" },
  },
};
That is, it is a dictionary keyed by entity name, each value of which is, in turn, a dictionary of models keyed by id.
The timestamp will look very similar, but under each id we store not the data itself, but the moment the data was delivered to the client - Date.now().
const timestamp = {
  posts: { 1: 1496618924981, 2: 1496618924981 },
  users: { 1: 1496618924983, 2: 1496618924983 },
};
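Given this store shape and the lifetime config, checking whether a particular record is still fresh could look like the sketch below; isStale is an assumed helper name, not part of the article's package.

// Returns true if the record was never fetched or its lifetime has expired.
const isStale = (state, lifetime, entityName, id) => {
  const byId = state.timestamp[entityName] || {};
  const fetchedAt = byId[id];
  if (!fetchedAt) return true;
  return Date.now() - fetchedAt > lifetime[entityName];
};

// usage: isStale({ entities, timestamp }, lifetime, 'users', 1)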
That's about it for now. The next part will describe the process of developing the components themselves.
Source: https://habr.com/ru/post/330422/