
Domain Driven Design in practice

Evans wrote a good book with good ideas, but these ideas lack a methodological basis. Experienced developers and architects understand intuitively that you need to stay as close as possible to the customer's domain and talk to the customer. But how do you evaluate a project for compliance between the Ubiquitous Language and the customer's real language? How do you tell whether the domain is divided into Bounded Contexts correctly? How do you determine whether DDD is used in a project at all?

The last question is especially relevant. In one of his talks, Greg Young asked those who practice DDD to raise their hands. Then he asked those who create classes with a set of public getters and setters, keep the logic in "services" and "helpers", and call that DDD to lower them. A chuckle passed through the hall :)

How do you structure business logic in DDD style? Where should "behavior" live: in services, entities, extension methods, or a little bit of everywhere? In this article I will talk about how I design the domain and what rules I follow.

Everybody lies


Not deliberately, of course :) The fact is that business applications are created for a wide range of tasks and to satisfy the interests of different groups of users. At best, only top management understands the business processes from start to finish, and quite often they misunderstand them too. Within a department, users see only their own part of the process. Therefore, the result of interviewing all the stakeholders is usually a tangle of contradictions. The next rule follows from this one.

First analytics, then design, and only then development


You need to start not from the database structure or a set of classes, but from the business processes. We use BPMN and UML activity diagrams together with test cases. The diagrams read well even for those unfamiliar with the standards. Test cases in tabular form help identify edge cases and eliminate inconsistencies.

Abstract talk is just a waste of time. People are convinced that the details are insignificant and that "there is no need to discuss them at all, because everything is already clear." A request to fill in a test-case table quickly shows that there are not 3 variants but 26 (this is not an exaggeration, but the result of analysis on one of our projects).

Tables and diagrams are the main communication tool between business, analysts, and developers. In parallel with drawing BPMN diagrams and compiling test-case tables, we start writing terms into the project glossary. The glossary will later help with designing the entities.

Select contexts


A single domain model for the entire application can be created only if a policy of using a single consistent language throughout the organization is adopted and enforced at the top-management level. That is, when the sales department and production both say "account", they understand the word the same way: the same account, not "the account in CRM" versus "LegalEntity.Client".

In real life I have never seen this. Therefore, it is best to immediately make a rough "cut" of the domain model into several parts. The less they are connected to each other, the better. Usually you can still identify some set of common terms; I call it the domain core. Any context may depend on the core, but dependencies between contexts are highly undesirable. Potentially this approach leads to a "swelling" of the core, but mutual dependence between contexts produces tight coupling, which is worse than a "fat" core.

Architecture



Ports and adapters, onion architecture, clean architecture: all of these approaches are based on the idea of using the domain as the core of the application. Evans touches on this question in passing when he talks about the "domain" and the "infrastructure". Business logic does not use the terms "transaction", "database", "controller", or "lazy load". An n-layer architecture does not prevent these concepts from spreading: a request arrives at the controller, is passed to the "business logic", and the "business logic" talks to the DAL, and the DAL is all "transactions", "tables", "locks", and so on. Clean Architecture lets you invert the dependencies and separate the concerns. Of course, you cannot abstract away the implementation details completely: the RDBMS, the ORM, and network interaction will still impose their limitations. But with Clean Architecture this can be controlled. In n-layer, sticking to a "single language" is much harder because of the storage structure at the bottom layer.

Clean Architecture works well with Bounded Contexts. Different contexts may be implemented as different subsystems. Simple contexts are best implemented as simple CRUD. For contexts with asymmetric load, CQRS is a good fit. For subsystems that require an audit log, it makes sense to use Event Sourcing. For subsystems with heavy reads and writes and constraints on throughput and latency, it makes sense to consider an event-driven approach. At first glance this may seem inconvenient: say, I was working with a CRUD subsystem and then got a task in a CQRS subsystem. It takes some time to get used to all these Commands and Queries. The alternative, designing the whole system in a single style, is short-sighted. Architecture is a set of tools, and each tool is suited to solving a specific problem.

Project structure


I structure the .NET projects as follows:

/App
  /ProjectName.Web.Public
  /ProjectName.Web.Admin
  /ProjectName.Web.SomeOtherStuff
/Domain
  /ProjectName.Domain.Core
  /ProjectName.Domain.BoundedContext1
  /ProjectName.Domain.BoundedContext1.Services
  /ProjectName.Domain.BoundedContext2
  /ProjectName.Domain.BoundedContext2.Command
  /ProjectName.Domain.BoundedContext2.Query
  /ProjectName.Domain.BoundedContext3
/Data
  /ProjectName.Data
/Libs
  /Problem1Resolver
  /Problem2Resolver

Projects in the Libs folder do not depend on the domain; each one solves its own local problem, such as report generation, CSV parsing, or caching. The structure of the Domain folder follows the Bounded Contexts. Projects in the Domain folder do not depend on Data. Data contains the DbContext, migrations, and DAL-related configuration; it depends on the Domain entities in order to build migrations. Projects in the App folder use an IoC container to inject dependencies. This achieves maximum isolation of the domain code from the infrastructure.
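As a minimal illustration of this layout (all type names here are hypothetical, not from a real project): the Domain project declares a port, the Data project implements it, and the App project wires them together. In a real solution the last step is done by the IoC container rather than by hand.

```csharp
using System;

// Domain project: knows nothing about persistence, only about the port.
public interface ICompanyRepository
{
    string FindNameByInn(string inn);
}

public class AccreditationService
{
    private readonly ICompanyRepository _companies;
    public AccreditationService(ICompanyRepository companies) => _companies = companies;
    public string Describe(string inn) => _companies.FindNameByInn(inn) ?? "unknown";
}

// Data project: implements the domain port on top of the DAL (stubbed here).
public class EfCompanyRepository : ICompanyRepository
{
    public string FindNameByInn(string inn) => inn == "7701234567" ? "Acme LLC" : null;
}

// App project: the composition root binds implementations to abstractions.
public static class Program
{
    public static void Main()
    {
        var service = new AccreditationService(new EfCompanyRepository());
        Console.WriteLine(service.Describe("7701234567")); // prints "Acme LLC"
    }
}
```

Because the dependency points from Data to Domain and never the other way around, the domain compiles and is testable without any infrastructure.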

Model entities


By an entity we mean a domain object that has a unique identifier. As an example, let's take a class describing a Russian company in the context of obtaining accreditation with a certain government agency.

[DisplayName("Company (legal entity)")]
public class Company : LongIdBase, IHasState<CompanyState>
{
    public static class Specs
    {
        public static Spec<Company> ByInnAndKpp(string inn, string kpp) =>
            new Spec<Company>(x => x.Inn == inn && x.Kpp == kpp);

        public static Spec<Company> ByInn(string inn) =>
            new Spec<Company>(x => x.Inn == inn);
    }

    // For EF
    protected Company()
    {
    }

    public Company(string inn, string kpp)
    {
        DangerouslyChangeInnAndKpp(inn, kpp);
    }

    public void DangerouslyChangeInnAndKpp(string inn, string kpp)
    {
        Inn = inn.NullIfEmpty() ?? throw new ArgumentNullException(nameof(inn));
        Kpp = kpp.NullIfEmpty() ?? throw new ArgumentNullException(nameof(kpp));
        this.ValidateProperties();
    }

    [Display(Name = "INN")]
    [Required]
    [DisplayFormat(ConvertEmptyStringToNull = true)]
    [Inn]
    public string Inn { get; protected set; }

    [Display(Name = "KPP")]
    [DisplayFormat(ConvertEmptyStringToNull = true)]
    [Kpp]
    public string Kpp { get; protected set; }

    [Display(Name = "Status")]
    public CompanyState State { get; protected set; }

    [DisplayFormat(ConvertEmptyStringToNull = true)]
    public string Comment { get; protected set; }

    [Display(Name = "Status change date")]
    public DateTime? StateChangeDate { get; protected set; }

    public void Accept()
    {
        StateChangeDate = DateTime.UtcNow;
        State = CompanyState.Accredited;
    }

    public void Decline(string comment)
    {
        StateChangeDate = DateTime.UtcNow;
        State = CompanyState.Declined;
        Comment = comment.NullIfEmpty() ?? throw new ArgumentNullException(nameof(comment));
    }
}

Choosing the right entities and relationships usually takes more than one iteration. First I rough out the basic class structure, define the one-to-one, one-to-many, and many-to-many relationships, and describe the data structure. Then I trace the structure against the business process, referring to the BPMN diagrams and test cases. If some case does not fit into the structure, a design error has been made and the structure must be changed. The resulting structure can be laid out as a diagram and then agreed on with the domain experts.

Experts may point out errors and inaccuracies in the design. Sometimes it turns out along the way that some entity has no suitable term yet. Then I offer options, and after a while a suitable one sticks. The new term is added to the glossary. It is very important to discuss and agree on terminology together: this prevents a great deal of misunderstanding later on.

Choosing a unique identifier


Fortunately, Evans gives clear recommendations here: first look for an identifier in the domain itself: an INN, a KPP, passport details, and so on. If you find one, use it. If not, fall back on a GUID or a database-generated Id. Sometimes it is advisable to use an Id other than the domain identifier even when the latter exists: for example, if the entity must be versioned and the system must store all previous versions, or if the domain identifier is a complex composite that does not play well with persistence.

Proper constructors


ORMs most often use reflection to materialize objects. EF can reach the protected constructor, but programmers cannot: they will have to create a valid legal entity identified by an INN and a KPP. The constructor is supplied with guards, so creating an invalid object simply will not work. The ValidateProperties extension method validates against the DataAnnotation attributes, and NullIfEmpty does not let empty strings through.

public static class Extensions
{
    public static void ValidateProperties(this object obj)
    {
        var context = new ValidationContext(obj);
        Validator.ValidateObject(obj, context, true);
    }

    public static string NullIfEmpty(this string str) =>
        string.IsNullOrEmpty(str) ? null : str;
}

To validate the INN, a dedicated attribute was written:

public class InnAttribute : RegularExpressionAttribute
{
    public InnAttribute()
        : base(@"^(\d{10}|\d{12})$")
    {
        ErrorMessage = "INN must consist of 10 or 12 digits.";
    }

    public InnAttribute(CivilLawSubject civilLawSubject)
        : base(civilLawSubject == CivilLawSubject.Individual ? @"^\d{12}$" : @"^\d{10}$")
    {
        ErrorMessage = civilLawSubject == CivilLawSubject.Individual
            ? "An individual's INN must consist of 12 digits."
            : "A legal entity's INN must consist of 10 digits.";
    }
}

The parameterless constructor is declared protected so that it is used only by the ORM; materialization uses reflection, so the access modifier is no obstacle. The "real" constructor takes both required fields: the INN and the KPP. The remaining fields of the legal entity are optional in this system and are filled in by a company representative later.

Encapsulation and Validation


The Inn and Kpp properties are declared with a protected setter. Again, EF can reach them, but a programmer has to use the DangerouslyChangeInnAndKpp method. The name clearly hints that changing the INN and the KPP is not a routine situation. The method takes two parameters, which means that if you change the INN and the KPP, you change them only together. INN + KPP could even have been made a composite key, but for compatibility I kept the long Id. Finally, when this method is called the validators run, and if the INN or the KPP is invalid, a ValidationException is thrown.
You can strengthen the type system further. However, the approach described at that link has a significant drawback: no support from the standard ASP.NET infrastructure. Support can be added, but such infrastructure code has a cost and has to be maintained.
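As a sketch of what "strengthening the type system" could look like, here is a hypothetical value type wrapping a validated INN. This is an illustration of the idea, not the implementation from the linked article:

```csharp
using System;
using System.Text.RegularExpressions;

// A hypothetical value type: an Inn cannot exist in an invalid state.
public readonly struct Inn : IEquatable<Inn>
{
    private static readonly Regex Pattern = new Regex(@"^(\d{10}|\d{12})$");

    public string Value { get; }

    public Inn(string value)
    {
        if (value == null || !Pattern.IsMatch(value))
            throw new ArgumentException("INN must consist of 10 or 12 digits.", nameof(value));
        Value = value;
    }

    public bool Equals(Inn other) => Value == other.Value;
    public override bool Equals(object obj) => obj is Inn other && Equals(other);
    public override int GetHashCode() => Value?.GetHashCode() ?? 0;
    public override string ToString() => Value;
}
```

The drawback mentioned above applies here in full: model binding, serialization, and ORM mapping for such a type all have to be wired up by hand.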

Properties for reading, specialized methods for changing


According to the business process, an organization can be "accepted" or "rejected", and in case of rejection a comment must be left. If all the properties were public, this could only be learned from the documentation. Here, the status-change rules are visible from the method signatures. In the article I gave only a fragment of the legal-entity class; in reality it has many more fields, and understanding what is connected to what helps a lot, especially when onboarding new team members. If a property can change on its own, independently of the others and without an explicit business operation, its setter can be made public. Such a property should raise a red flag, though: if there are no explicit operations associated with the data, perhaps the data is not needed at all?
An alternative is to use the "state" pattern and move the behavior into separate classes.
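A minimal sketch of that alternative, with names that are illustrative rather than taken from the article's codebase: each status is a class, and the only transitions that do not throw are the ones the business process allows.

```csharp
using System;

// Base state: every transition is forbidden unless a subclass allows it.
public abstract class AccreditationStatus
{
    public virtual AccreditationStatus Accept() =>
        throw new InvalidOperationException($"Cannot accept from {GetType().Name}.");

    public virtual AccreditationStatus Decline(string comment) =>
        throw new InvalidOperationException($"Cannot decline from {GetType().Name}.");
}

public sealed class Pending : AccreditationStatus
{
    public override AccreditationStatus Accept() => new Accredited();
    public override AccreditationStatus Decline(string comment) => new Declined(comment);
}

public sealed class Accredited : AccreditationStatus { }

public sealed class Declined : AccreditationStatus
{
    public string Comment { get; }

    public Declined(string comment) =>
        Comment = comment ?? throw new ArgumentNullException(nameof(comment));
}
```

The transition rules move from method bodies into the type hierarchy: an already-accredited company simply has no legal way to be declined without a new state class saying so.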

Specs


For a while it was not clear which is better: writing extensions that modify a Queryable, or messing with expression trees. In the end, the LinqSpecs implementation turned out to be the most convenient.
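For reference, a specification can be as small as a wrapper around an expression. This is a simplified sketch in the spirit of LinqSpecs (the real library also supports And/Or composition, which is omitted here):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

// A minimal specification: a named, reusable predicate as an expression tree,
// so a LINQ provider can still translate it to SQL.
public class Spec<T>
{
    public Expression<Func<T, bool>> Expression { get; }

    public Spec(Expression<Func<T, bool>> expression) => Expression = expression;

    public static implicit operator Expression<Func<T, bool>>(Spec<T> spec) => spec.Expression;
}

public static class SpecExtensions
{
    // Lets a Spec<T> be passed straight into Where.
    public static IQueryable<T> Where<T>(this IQueryable<T> query, Spec<T> spec) =>
        Queryable.Where(query, spec.Expression);
}
```

With such a wrapper, Company.Specs.ByInn(inn) can be handed directly to Where, and the predicate stays in one place instead of being copy-pasted across queries.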

Extension methods


Ad hoc polymorphism for interfaces (so that you don't have to implement the methods in every implementor) will appear in C# sooner or later as default interface methods. For now, we have to make do with extension methods.

public interface IHasId
{
    object Id { get; }
}

public interface IHasId<out TKey> : IHasId
    where TKey : IEquatable<TKey>
{
    new TKey Id { get; }
}

public static class HasIdExtensions
{
    public static bool IsNew<TKey>(this IHasId<TKey> obj)
        where TKey : IEquatable<TKey>
    {
        return obj.Id == null || obj.Id.Equals(default(TKey));
    }
}

Extension methods also work well in LINQ for greater expressiveness. However, ByInnAndKpp and ByInn cannot be used inside other expressions: the query provider cannot parse them. Dino Esposito talked in more detail about using extension methods as a DSL at one of the DotNext conferences.

public static class CompanyDataExtensions
{
    public static CompanyData ByInnAndKpp(
        this IQueryable<CompanyData> query, string inn, string kpp) =>
        query
            .Where(x => x.Company, Company.Specs.ByInnAndKpp(inn, kpp))
            .FirstOrDefault();

    public static IQueryable<CompanyData> ByInn(
        this IQueryable<CompanyData> query, string inn) =>
        query
            .Where(x => x.Company, Company.Specs.ByInn(inn));
}

Note the unusual Where overload with two parameters: EF Core has begun to support invoking nested expressions (InvokeExpression). In application code it is used as follows:

var priceInfos = DbContext
    .CompanyData
    .ByInn("")
    .ToList();

An alternative is to use SelectMany .

var priceInfos = DbContext
    .Company
    // the CompanyData extension method cannot be reused here
    .Where(Company.Specs.ByInnAndKpp("", ""))
    .SelectMany(x => x.CompanyData)
    .ToList();

I have not fully studied the equivalence of options with Select and SelectMany from the point of view of IQueryProvider . I would be grateful for any information on this topic in the comments.

Related collections


 public virtual ICollection<Document> Documents { get; protected set; } 

It is advisable to use such collections only inside a Select block that is translated into a SQL query, because code like company.Documents.Where(…).ToList() does not build a database query: it first pulls all the related entities into RAM and only then applies Where to the collection in memory. So the presence of collections in the model can hurt application performance, and refactoring them out later is hard, because the necessary IQueryable has to be passed in from outside. To keep an eye on query quality, use MiniProfiler.
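To keep the filtering on the database side, the collection can be consumed inside a Select projection. A simplified in-memory sketch (type and member names are illustrative; with a relational provider the whole expression would become one SQL query):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Document { public long Id; public string Name; public bool IsDeleted; }
public class Company { public long Id; public List<Document> Documents = new List<Document>(); }

public static class Queries
{
    // The filter on Documents lives inside Select, so a query provider can
    // translate it instead of loading the whole collection into memory first.
    public static List<string> ActiveDocumentNames(IQueryable<Company> companies, long companyId) =>
        companies
            .Where(c => c.Id == companyId)
            .Select(c => c.Documents
                .Where(d => !d.IsDeleted)
                .Select(d => d.Name)
                .ToList())
            .First();
}
```

Contrast this with company.Documents.Where(…), which filters after materialization; the projection form keeps both the row filter and the collection filter inside one translatable expression.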

Services


In an anemic model, essentially all the logic lives in services. I prefer to add services only when necessary: when the logic does not belong in an entity, or when it describes the interaction between entities. The best case is when the domain has an exact name for the service: "cash desk", "warehouse", "call center". Then the "Service" postfix can be omitted. The set of methods in each class corresponds to a set of use cases, grouped by user-interface elements. This works well when the interface is designed in the Task-Based UI style.

Write methods accept an entity or a DTO as input. Request validation is performed in a separate layer, strictly before the method executes. If a method can fail, that should be stated explicitly in its signature using a Result type. Exceptions remain for exceptional situations.
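A minimal sketch of such a Result type (in practice a ready-made library is often used instead; all the names below, including the stubbed service, are illustrative):

```csharp
using System;

// A minimal Result: success carries a value, failure carries an error message.
public readonly struct Result<T>
{
    public bool IsSuccess { get; }
    public T Value { get; }
    public string Error { get; }

    private Result(bool ok, T value, string error)
    {
        IsSuccess = ok;
        Value = value;
        Error = error;
    }

    public static Result<T> Success(T value) => new Result<T>(true, value, null);
    public static Result<T> Failure(string error) => new Result<T>(false, default(T), error);
}

// A write method whose signature admits failure explicitly.
public static class AccreditationCommands
{
    public static Result<long> Accredit(string inn) =>
        string.IsNullOrEmpty(inn)
            ? Result<long>.Failure("INN is required.")
            : Result<long>.Success(42); // id of the accredited company (stub)
}
```

The caller is forced to check IsSuccess instead of wrapping the call in try/catch, which keeps exceptions for genuinely exceptional situations.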

Read methods return DTOs for serialization and sending to the client. Thanks to Queryable Extensions in AutoMapper and Mapster, mappings can be translated into expressions for Select, which lets you avoid dragging the whole entity out of the database.
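The same idea can be shown without a mapping library: one mapping expression reused both for IQueryable.Select (so the provider fetches only the needed columns) and, compiled, for in-memory mapping. All the names here are illustrative:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public class Company { public long Id; public string Inn; public string Comment; }
public class CompanyDto { public long Id; public string Inn; }

public static class CompanyMappings
{
    // A single mapping expression: usable in Queryable.Select as an
    // expression tree, and compiled once for in-memory mapping.
    public static readonly Expression<Func<Company, CompanyDto>> ToDto =
        c => new CompanyDto { Id = c.Id, Inn = c.Inn };

    private static readonly Func<Company, CompanyDto> ToDtoCompiled = ToDto.Compile();

    public static CompanyDto MapToDto(this Company company) => ToDtoCompiled(company);
}
```

query.Select(CompanyMappings.ToDto) projects to the DTO inside the database query; entity.MapToDto() does the same mapping in memory, and both always stay in sync because there is only one expression.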

Managers


I use them rarely, for operations within a single aggregate. AspNet.Identity, for example, contains a UserManager. Managers are mainly needed when logic has to be implemented on an aggregate that is not directly related to the domain.

TPT for union types


Sometimes one entity may be associated with exactly one of several others. TPT can be used to get consistent storage, and pattern matching for control flow. This approach is described in detail in a separate article.
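A sketch of the control-flow side only (the storage side, EF's TPT inheritance mapping, is omitted; all the names are illustrative): a closed class hierarchy plays the role of the union type, and pattern matching dispatches over it.

```csharp
using System;

// A union-like closed hierarchy: a Payer is exactly one of the subtypes.
// With EF and TPT mapping, each subtype would get its own table.
public abstract class Payer { }
public sealed class Person : Payer { public string Passport; }
public sealed class Organization : Payer { public string Inn; }

public static class PayerInfo
{
    public static string Describe(Payer payer)
    {
        switch (payer)
        {
            case Person p: return $"person, passport {p.Passport}";
            case Organization o: return $"organization, INN {o.Inn}";
            default: throw new ArgumentOutOfRangeException(nameof(payer));
        }
    }
}
```

The default branch guards against a new subtype being added without updating every dispatch site, which is the classic weak spot of simulating union types with inheritance.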

Queryable Extensions for DTO projections


Using a DataMapper reduces the amount of boilerplate code, and Queryable Extensions make it possible to build DTO queries without writing Select by hand. This way you can reuse the same expressions both for in-memory mapping and for building expression trees for the IQueryProvider. AutoMapper is rather memory-hungry and not fast, so I eventually replaced it with Mapster.

CQRS for individual subsystems


When working under high uncertainty, the risk of design errors is also high. Before designing the database structure, making denormalization decisions, or writing stored procedures, it makes sense to resort to quick prototyping and test the hypotheses. Once you are confident about what comes in and what goes out, you can optimize.

If no commands have executed in between, an IQuery returns identical results for identical input data, so the bodies of such methods can be cached aggressively. After swapping implementations, the infrastructure code (the controllers) stays unchanged; only the body of the IQuery method has to be modified. The approach lets you optimize the application pointwise, in small pieces, rather than all at once.
On very heavily loaded resources the approach is limited by the overhead of the IoC container and the memory traffic of the per-request lifestyle. However, all the IQuery implementations can be made singletons if you do not inject database dependencies into the constructor and use a using block instead.
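A sketch of how such aggressive caching can be bolted on without touching the callers (the IQuery shape and all the names here are assumptions for illustration, not a real project's API):

```csharp
using System;
using System.Collections.Concurrent;

// A hypothetical query abstraction: deterministic in its input
// as long as no commands have run in between.
public interface IQuery<TIn, TOut>
{
    TOut Ask(TIn input);
}

public class CompanyNameByInn : IQuery<string, string>
{
    public int Executions; // counter for demonstration only

    public string Ask(string inn)
    {
        Executions++;
        return inn == "7701234567" ? "Acme LLC" : null; // stands in for a DB call
    }
}

// A caching decorator: callers still see IQuery, only the registration changes.
public class CachedQuery<TIn, TOut> : IQuery<TIn, TOut>
{
    private readonly IQuery<TIn, TOut> _inner;
    private readonly ConcurrentDictionary<TIn, TOut> _cache = new ConcurrentDictionary<TIn, TOut>();

    public CachedQuery(IQuery<TIn, TOut> inner) => _inner = inner;

    public TOut Ask(TIn input) => _cache.GetOrAdd(input, _inner.Ask);
}
```

In the container, the cached wrapper is registered in place of the bare query; the controllers never know the difference, which is exactly the "optimize pointwise" property described above.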

Work with legacy code


When working with an existing code base, you should first decide on the mode of work: "support" or "development". In the first case, no new functionality or significant rework of the system is expected: at most a few new reports and a couple of forms here and there. In the second, the domain model and/or the architecture as a whole needs substantial rework. If the project needs to be "supported" rather than "developed", it is better to follow the existing rules, however good or bad they are. And if what is in front of you is outright garbage, it is better to decline the offer to keep hacking on it.

Developing a project is a harder task. The topic of refactoring is beyond the scope of this article; I will only note the two most useful patterns: the "anti-corruption layer" and the "strangler". They are very similar. The main idea is to build a "facade" between the old and new code bases and gradually rewrite the entire system piece by piece, eating the elephant one bite at a time. The facade acts as a barrier that keeps the problems of the old code base from leaking into the new one and ensures that the old business logic maps onto the new. Be prepared for the facade to consist entirely of hacks, tricks, and crutches, and for it to sink into oblivion sooner or later together with the entire old code base.

Source: https://habr.com/ru/post/334126/

