
How to understand and make friends with transactions and JPA

Probably everyone knows about transactions in relational databases, and everyone has heard of ACID. Nevertheless, there is a difference between knowing and feeling, which I discovered myself when I had to retrain as a backend developer. I think an article like this would have helped me a lot back then; I hope it will be useful to you too.

Enterprise applications often interact with databases through ORM technology; in the Java world, the best-known such technology is JPA (Java Persistence API) and its implementations, Hibernate and EclipseLink. JPA lets you interact with the database in terms of domain objects, and it provides a cache, as well as cache replication when there is a cluster in the middle tier.

Here is how it usually happens (a sketch of this flow in code follows the list):

  1. A REST request to update a document arrives at the backend, with the new state of the document in the request body.
  2. We start a transaction.
  3. The backend requests the existing state of the document from the EntityManager, which may read it from the database or may take it from the cache.
  4. Next, we take the object that arrived in the request body and compare it with the state of the object representing the record in the database.
  5. Based on this comparison, we make the necessary changes.
  6. We commit the transaction.
  7. We return the response to the client.
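
Here is a minimal sketch of this flow, assuming CDI and JTA as in the examples further below; the DocumentUpdateService name, the updateDocument signature, and the Document accessors are illustrative:

import javax.enterprise.context.ApplicationScoped;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import javax.ws.rs.NotFoundException;

@ApplicationScoped
@Transactional // step 2: the transaction starts on entry and commits on return (step 6)
public class DocumentUpdateService {

    @PersistenceContext
    private EntityManager entityManager;

    public void updateDocument(String name, String newContent) {
        // step 3: read the current state, possibly served from the cache
        Document documentEntity = entityManager.find(Document.class, name);
        if (documentEntity == null) {
            throw new NotFoundException();
        }
        // steps 4-5: compare with the incoming state and apply the changes
        if (!newContent.equals(documentEntity.getContent())) {
            documentEntity.setContent(newContent); // written back at commit
        }
    }
}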

So where is the catch? Look: we took the data, most likely from the cache, and it may already be stale; the server may be processing a concurrent request to change the same document right now, so the data may be going stale at the very moment we are making all these comparisons. Based on this data of doubtful freshness and the body of the REST request, we make decisions about changes to the database and commit them. The question then arises: what on earth did we just write to the database?
This is where transactions help us. The key to understanding them is knowing under what conditions a transaction fails or, in other words, when it is rolled back. And a rollback happens if the changes you make violate the database constraints. The most important of them are:

  1. Uniqueness constraints: primary keys and unique indexes.
  2. Referential integrity constraints: foreign keys.

So, if our transaction has committed, then whatever we wrote a little earlier satisfies those constraints. It remains to configure the constraints so that any data satisfying them represents a valid business entity.
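
As a sketch of how such constraints can be declared in JPA so that schema generation creates them in the database (the Report and Author entities below are illustrative and not from the original example):

import javax.persistence.*;

@Entity
class Author {
    @Id @GeneratedValue
    private Long id;
}

@Entity
class Report {

    @Id // primary key: unique and NOT NULL by definition
    private String name;

    @Column(nullable = false, unique = true) // NOT NULL plus a uniqueness constraint
    private String title;

    @ManyToOne(optional = false) // the reference must be present...
    @JoinColumn(name = "author_id") // ...and becomes a foreign key: referential integrity
    private Author author;
}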

Here is a deliberately primitive, artificial example:

@Entity
public class Document {

    @Id
    private String name;

    @Lob
    private String content;

    // getters and setters
}

@ApplicationScoped
@Transactional // every public method call runs in a transaction
public class DocumentService {

    @PersistenceContext
    private EntityManager entityManager;

    public void createDocument(String name, String content) {
        // Check that a document with this name does not exist yet;
        // the data we read here may already be stale
        Document documentEntity = entityManager.find(Document.class, name);
        if (documentEntity != null) {
            throw new WebApplicationException(Response.Status.CONFLICT); // already exists!
        }
        // No such document found, so we create a new one
        documentEntity = new Document();
        documentEntity.setName(name);
        documentEntity.setContent(content);
        entityManager.persist(documentEntity);
    }
}

Here, if a document with the same name is created concurrently, or if the data obtained from the cache turned out to be stale, a ConstraintViolationException occurs at commit time and the backend returns a 500 error to the client. The user repeats the operation a little later and either receives a sensible error message or creates the document.

In fact, 500 errors are not very desirable; the trick is that they will almost never happen. But if the usage pattern of your application is such that they happen too often, you should think about something more sophisticated, such as the option sketched below.
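
One possible refinement, sketched on the assumption that the backend uses JAX-RS (which the WebApplicationException in the example above suggests): register an ExceptionMapper that turns a failed commit into a 409 Conflict with a retry hint instead of an opaque 500. Which exception actually reaches the mapper depends on the JPA provider and the transaction stack, so treat this as an illustration rather than a recipe:

import javax.transaction.RollbackException;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Maps a commit-time rollback (e.g. a constraint violation detected at
// commit) to 409 Conflict so the client knows the operation can be retried.
@Provider
public class RollbackExceptionMapper implements ExceptionMapper<RollbackException> {

    @Override
    public Response toResponse(RollbackException e) {
        return Response.status(Response.Status.CONFLICT)
                .entity("The data changed concurrently, please retry")
                .build();
    }
}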

Let's try something more complicated. Suppose we want to be able to protect a document from deletion. We add a new table:

@Entity
public class DocumentLock {

    @Id
    @GeneratedValue
    private Long id;

    @OneToOne
    private Document document;

    @Basic
    private String lockedBy;

    // getters and setters
}

And we add to the Document class:

@OneToOne(mappedBy = "document")
private DocumentLock lock;

Now, to protect a document from deletion, it is enough to create a DocumentLock that refers to it. The deletion logic:

public void deleteDocument(String name) {
    Document documentEntity = entityManager.find(Document.class, name);
    if (documentEntity == null) {
        throw new NotFoundException();
    }
    DocumentLock lock = documentEntity.getLock();
    if (lock != null) {
        throw new WebApplicationException(
                "Document is locked by " + lock.getLockedBy(),
                Response.Status.BAD_REQUEST);
    }
    entityManager.remove(documentEntity);
}

Look: we checked that there is no lock, but the cached data used for this check may already be stale, or may become stale right during the check. In that case, removing the document in our code will attempt to violate the referential integrity of the data, which means our transaction will fail. A couple of comments:

  1. Make sure that cascading delete is disabled; with cascading delete, removing a document would also remove all records that refer to it, so the existence of the business-lock record would not prevent anything.
  2. In fact, the code above allows several locks to be hung on one document, so a uniqueness constraint still needs to be configured (see the sketch after this list).
  3. The example is purely synthetic; most likely it would make sense to put the data about the owner of the business lock directly into the document rather than create a separate table, and then use an explicit pessimistic lock to check the absence of this business lock when deleting the document.
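
A minimal sketch for point 2, reusing the DocumentLock entity from above; the document_name column name is an assumption. Marking the join column unique makes the database reject a second lock on the same document, again at commit time:

import javax.persistence.*;

@Entity
public class DocumentLock {

    @Id
    @GeneratedValue
    private Long id;

    // unique = true: at most one lock per document; a concurrent attempt
    // to create a second lock violates the constraint and rolls back
    @OneToOne
    @JoinColumn(name = "document_name", unique = true)
    private Document document;

    @Basic
    private String lockedBy;

    // getters and setters
}

Creating a lock is then just persisting such an entity; if two clients try to lock the same document concurrently, the database-level uniqueness guarantees that only one of the two transactions commits.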

In real-world tasks, referential integrity helps a lot when storing hierarchically organized data: an organization's staff, a directory-and-file structure, and so on. In this case, for example, if one transaction removes a manager while a parallel transaction assigns a subordinate to that manager, referential integrity ensures that only one of these operations completes and the structure of the organization remains valid (every employee except the director has a manager). And yet, at the start of both operations, each of them looked feasible.
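
A sketch of such a hierarchy with illustrative names; the self-referencing foreign key is exactly what makes the concurrent "delete the manager" and "assign a subordinate to him" scenario safe:

import javax.persistence.*;

@Entity
public class Employee {

    @Id
    @GeneratedValue
    private Long id;

    @Basic
    private String fullName;

    // Self-referencing foreign key; null only for the director.
    // Deleting a manager who still has subordinates (or gains one in a
    // parallel transaction) violates this constraint, so one of the two
    // conflicting transactions is rolled back.
    @ManyToOne
    @JoinColumn(name = "manager_id")
    private Employee manager;

    // getters and setters
}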

Summing up: even when using stale data of doubtful accuracy (which may well be the case when working with the database through JPA) to decide on changes to the database, and even when conflicting changes are made concurrently, the transaction mechanism will not let us commit anything that violates referential integrity or fails to match the imposed constraints: all the actions combined in such a transaction and leading to this dismal result will be cancelled, in accordance with the principle of atomicity. Just keep this in mind when modeling the data, and arrange the constraints carefully.

Source: https://habr.com/ru/post/325470/

