.NET API in web development: past and future

Many people think that creating an API is a specialized job, something only developers who professionally build API products need to understand. In reality, we all write APIs one way or another: for new classes and services, for colleagues, for customers.

Under the cut, we've gathered some practical advice on designing a web API, planning changes, managing versioning, and communicating with API users. We tried to find out what difficulties can arise and what typical mistakes get made, along with some interesting facts from history and expectations for the future.

Our guest this time, Dylan Beattie, has a high reputation on Stack Overflow, which means he answers questions well, engagingly, and correctly. His long experience lets him illustrate the past with first-hand examples, while his work as an architect keeps him in shape and up to date on trends and emerging technologies.

- What kind of API are you developing? And at what point in software development does API design actually happen?

Dylan Beattie: This is a very interesting question, because I think one of the biggest misconceptions in software development is that API design is something that happens separately from everything else. Of course, there are certain kinds of API projects, such as HTTP APIs that will be exposed publicly, where it makes sense to treat design as a separate piece of work. But the truth is that most developers are creating APIs all the time - they simply don't realize they're doing it. Every time you write a public method on one of your classes or choose a name for a database table, you are creating an interface - in the ordinary, everyday English sense of the word - that will eventually be used by other developers at some point in the future. Other people on your team will use your classes and methods. Other teams will use your data schema or message format.
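
To make that point concrete, here's a minimal sketch (our illustration, not Dylan's; the class and its members are invented): nothing in this code announces itself as an "API", yet every public member is a contract that other code will come to depend on.

    using System.Collections.Generic;
    using System.Linq;

    // A hypothetical in-house class. Nothing here is labelled "API",
    // yet every public member is an implicit interface.
    public class LineItem
    {
        public decimal UnitPrice { get; set; }
        public int Quantity { get; set; }
    }

    public class InvoiceCalculator
    {
        // Renaming this method, reordering its parameters, or changing its
        // return type is a breaking change for every caller on your team.
        public decimal CalculateTotal(IEnumerable<LineItem> items, decimal taxRate)
        {
            return items.Sum(i => i.UnitPrice * i.Quantity) * (1 + taxRate);
        }
    }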

What's interesting is that when developers realize the code they're working on is likely to become part of an API, they often swing too far the other way. They start implementing edge cases and all sorts of things they don't actually need, just because those things might be needed someday. I think you have to find the right balance here, and it seems to me the key is to be extremely specific about building the set of features you need right now - while making that set as reusable and self-describing as possible. Pieter Hintjens has a good essay called Ten Rules for Good API Design, which explores ideas of this kind in more detail.

The biggest API project I'm working on at the moment is the one I'm building at the English company Spotlight. It's a hypermedia API that exposes all kinds of information about professional actors: job listings for acting work in film and television, and various other data used in the casting industry. We're building this API in the architectural style known as REST - and if you're not quite sure what REST is, then you simply must come to my talk at the DotNext conference in St. Petersburg and find out.

There are various patterns for building HTTP APIs - there's REST, there's GraphQL, there are things like SOAP and RPC. But for me, the biggest attraction of REST is that the constraints of the RESTful style naturally lead to loose coupling between the concepts and operations your API has to support, which makes it much easier to change things and evolve the design over time.

- One of the most famous programs "killed" by backward compatibility is IE. Too many applications depended on that browser and required compatibility with its previous versions. The problem was solved by simply shipping a new browser called Edge, which is kept up to date and supports all the new standards. How do you avoid falling into the backward compatibility trap? Should you rely on modularity and avoid layers? Or maybe replace the API with a RESTful API, a Service Oriented Architecture, or something else?

Dylan Beattie: I started building web applications a long time ago. I wrote my first web page a couple of years before Internet Explorer appeared, back when the only browsers were NCSA Mosaic and Erwise. It's fascinating to look back at the history of the web and realize how much the web we have today was shaped and influenced by things like Internet Explorer. And you're absolutely right: one of the reasons Microsoft introduced the all-new Edge browser in recent versions of Windows is that Internet Explorer's commitment to backward compatibility made implementing new web standards on the existing IE code base very difficult.

Part of the reason that backward compatibility matters so much in IE is that around 2000 there was a subtle shift in the way corporate IT systems were developed.

Countless companies run their own applications for all kinds of business operations: stock control, accounting, HR, project management and so on. In the 1980s and early 1990s, most of them used a central mainframe system, with employees connecting to the central server through something like a terminal emulator. But after the first dotcom wave of the late 1990s, companies realized that most of their computers now had a web browser and a network connection, and that they could replace the old mainframe terminal applications with new web applications.

Windows had a huge market share at the time, and Internet Explorer was the default browser on most Windows PCs, so many organizations built intranets that only worked with a specific version of Internet Explorer. Sometimes they did this to take advantage of particular features, such as ActiveX support; but more often, I think, they did it simply to save money by skipping cross-browser testing. The same happened with some fairly large commercial applications; even in 2011, Microsoft Dynamics CRM still supported no browser other than Internet Explorer.

So we ended up with a large number of companies that had invested time and money in building applications that worked with Internet Explorer. These applications weren't built on web standards or progressive enhancement, or with any attempt at forward compatibility - they were explicitly targeted at one specific browser version on one operating system. And every time Microsoft released a new version of Internet Explorer, those applications broke - and companies, unwilling to invest in updating their aging intranet applications, blamed the browser. But the story isn't over: now, in 2017, Microsoft still ships IE11, which has a compatibility mode in which it switches to the IE9 engine but sends a user-agent string claiming to be IE7. Everyone I know uses Google Chrome or Safari to surf the web these days, yet many of them still have an IE icon on the desktop for getting into one of the legacy systems.

So... coming back to the question of how Microsoft could have avoided this trap - I think there were plenty of opportunities. One option would have been to build IE around a modular rendering engine from the start, so that later versions could selectively load the appropriate engine to render a particular site or application. They could have put more effort into supporting the web standards that existed at the time, instead of implementing ad hoc support for things like the MARQUEE tag and ActiveX plugins - that would have spared them the later headache of supporting those esoteric features in new versions. But none of that mattered back then. The goal of the first versions of Internet Explorer was not to build a great application with first-class support for web standards - it was to kill Netscape Navigator and win market share. And it worked.

- Suppose someone is about to ship a new API. They gather requirements, propose a design, and get feedback. The process looks simple and straightforward. But are there any pitfalls along the way?

Dylan Beattie: Always! Requirements will change - that's a fact. One of the biggest mistakes you can make is trying to anticipate those changes and make your design future-proof. In theory this can pay off, but more often you end up with an even more complex design on your hands precisely because you tried to bake some future change into it. Often the pitfalls are things beyond your control. The law changes, and you have to expose data differently. Or something changes in your organization's other systems. Or one of your cloud hosting providers announces that a feature you depend on is being deprecated.

The best approach is to pick something simple and usable from the possible interface designs, build it, and ship it - to reach the point as quickly as possible where your API is stable, there's no outstanding technical debt, and the team can move on to the next task. Then, if you do run into some unforeseen problem, you'll have a stable, production-ready code base and a team with the time and the appetite to sort everything out. And if by some lucky chance you don't hit any pitfalls, you simply move on to the next item in your backlog.

- Say we've released v1.0 of our API and are on the way to v1.1. Many of us will probably create both http://example.com/v1/test and http://example.com/v1.1/test, or something like that. What practices, in your opinion, can help a developer make the v1.1 API design better than v1.0?

Dylan Beattie: It's worth reading up on semantic versioning (SemVer) and taking the time to really understand the differences between major, minor and patch versions. SemVer says you should not introduce any breaking changes between versions 1.0 and 1.1, so the most important part is understanding what counts as a breaking change for your particular API.
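
As a rough sketch of what that promise means in practice (our illustration, not Dylan's): a client can safely talk to any server version that shares its major number, because minor and patch releases are supposed to be backwards-compatible.

    using System;

    static class SemVerCheck
    {
        // Minimal sketch: trust the SemVer contract that only a change
        // in the MAJOR number may break existing clients.
        public static bool IsCompatible(Version client, Version server)
            => server.Major == client.Major && server >= client;
    }

    // SemVerCheck.IsCompatible(new Version(1, 0), new Version(1, 1)) -> true  (additive change)
    // SemVerCheck.IsCompatible(new Version(1, 0), new Version(2, 0)) -> false (breaking change)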

If you're working with an HTTP API that returns JSON, for example, a typical non-breaking change is adding a new data field to one of your resources. Clients using version 1.1 are expected to see the additional field and take full advantage of it, while clients still on version 1.0 should simply ignore the unrecognized property. A closely related question is how you should manage the versioning of your APIs. One of the most popular solutions is to put the version in the URL through routing - api.example.com/v1/ versus api.example.com/v1.1/.
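
Here's a minimal sketch of that "tolerant reader" behaviour using Json.NET (the resource shape and field names are our invention): the v1.0 client model keeps working when a v1.1 server adds a field, because unrecognized JSON properties are ignored by default.

    using Newtonsoft.Json;

    // The client's v1.0 model of the resource.
    public class Customer
    {
        public string Name { get; set; }
        public string Email { get; set; }
    }

    class TolerantReaderDemo
    {
        static void Main()
        {
            // A v1.1 response containing a field the v1.0 client knows nothing about.
            var json = @"{ ""name"": ""Ada"", ""email"": ""ada@example.com"", ""loyaltyPoints"": 42 }";

            // Json.NET ignores unrecognized properties by default, so the
            // old client deserializes the new payload without breaking.
            var customer = JsonConvert.DeserializeObject<Customer>(json);
            System.Console.WriteLine(customer.Name); // "Ada"
        }
    }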

But if you follow the constraints of RESTful systems, you need to ask whether a change of version represents a change in the underlying resource or only in its representation. Remember that a URI is a Uniform Resource Identifier, and we really shouldn't change the URI we use to refer to the same resource.

For example, say we have a resource api.example.com/images/monalisa. We can request this resource as a JPEG (Accept: image/jpeg) or as a PNG (Accept: image/png), or ask the server whether it has a plain-text representation of the resource (Accept: text/plain) - but these are all just different representations of the same underlying resource, and they should all have the same URI.
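
From the client side, that negotiation is just a header on the request. A quick sketch (the URI is the one from Dylan's example; the code is ours):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ContentNegotiationDemo
    {
        static void Main()
        {
            RunAsync().GetAwaiter().GetResult();
        }

        static async Task RunAsync()
        {
            using (var client = new HttpClient())
            {
                // One URI, one resource - the Accept header only selects
                // which representation of it we'd like to receive.
                var request = new HttpRequestMessage(
                    HttpMethod.Get, "https://api.example.com/images/monalisa");
                request.Headers.Accept.ParseAdd("image/png");

                var response = await client.SendAsync(request);

                // 406 Not Acceptable would mean the server has no representation
                // matching our Accept header; otherwise Content-Type tells us
                // which representation we actually received.
                Console.WriteLine(response.Content.Headers.ContentType);
            }
        }
    }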

Or say you've completely replaced the CRM system your organization uses. In that case the "version 1" customer represents a record in the old CRM system, and the "version 2" customer represents the same person after migration to the completely new platform. Here it probably makes sense to treat them as different resources and give them different URIs.

Versioning is a tricky thing. The easiest way is never to change anything.

- .NET Core - what do you think of its API?

Dylan Beattie: .NET Core was first announced in 2015, and was originally called .NET Core 5.0. It was meant to be a simplified alternative to the .NET Framework and the Common Language Runtime, and the idea was terrific: create the conditions that make it easy to port .NET Core to other platforms. The truth is, there's still a considerable gap between the API exposed by .NET Core and the "standard" .NET/CLR API that most applications are built on.

I think - and this is only my interpretation, based on what I've read and on conversations with various people - the idea was that .NET Core would provide the fundamental building blocks: threading, file system access, network access; and the platform vendors, together with the open source community, would then develop modules and packages that eventually reach the level of functionality offered by things like the Java Class Library or the .NET Framework. In principle it's a great idea, but it also creates something of a vicious circle: nobody wants to build libraries for a platform without users, and nobody wants to use a platform without libraries.

So it was decided that cross-platform .NET needs a standard API specification that guarantees the libraries users and developers expect across the various supported platforms. That's .NET Standard 2.0, which is already fully supported by .NET Framework 4.6.1 and will be fully supported in upcoming versions of .NET Core and Xamarin. Of course, .NET Core 1.1 is already out and working well, and you can use it right now to build C# web applications whether you're on Windows, Linux or macOS, which is very cool. But I think the next release of .NET Core will make a lot of framework and package authors willing to move their projects over to .NET Core, which in turn will make it easier for developers and organizations to migrate their own applications.

- API flexibility versus API precision. You can design an API method that accepts many different kinds of values, or you can design a method with strict rules on its parameters. Both approaches are valid. Where is the boundary between the two? When should we make a strict API, and when should we design a flexible one?

Dylan Beattie: When you make an API's method signatures flexible, all you're really doing is pushing complexity somewhere else in your stack. Say we're building an API for searching ski holidays, and we have a choice between DoSearch(SearchCriteria criteria) and DoSearch(string resortName, string countryCode, int minAltitude, int maxDistanceToSkiLift).
The first of these methods is fairly easy to extend: we can expand the definition of the SearchCriteria object without changing the method signature. But in doing so we're not just changing one specific method - we're changing the behavior of the system.

By contrast, with the second DoSearch method we can add new arguments to the signature. In a language like C#, we can give those arguments default values - and as long as we pick sensible defaults, adding new arguments to the method won't break anything in the project.
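
Here's a sketch of the two styles side by side (the property names and defaults are our own invention, following Dylan's ski-search example):

    using System.Collections.Generic;
    using System.Linq;

    // Style 1: a flexible criteria object. The signature never changes,
    // but every new property pushes complexity into the implementation.
    public class SearchCriteria
    {
        public string ResortName { get; set; }
        public string CountryCode { get; set; }
        public int? MinAltitude { get; set; }
        public int? MaxDistanceToSkiLift { get; set; }
    }

    public class SkiSearch
    {
        public IEnumerable<string> DoSearch(SearchCriteria criteria)
        {
            return Enumerable.Empty<string>(); // stub: real search omitted
        }

        // Style 2: explicit parameters. The last two have sensible
        // defaults, so existing call sites keep compiling if these
        // parameters are added in a later version.
        public IEnumerable<string> DoSearch(
            string resortName,
            string countryCode,
            int minAltitude = 0,
            int maxDistanceToSkiLift = int.MaxValue)
        {
            return Enumerable.Empty<string>(); // stub: real search omitted
        }
    }

One caveat worth knowing: C# bakes default argument values into the call site at compile time, so adding a defaulted parameter is source-compatible, but callers of an already-compiled library need to be recompiled to pick it up.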

At some point you'll need to communicate with the users of your API and let them know which search options it supports, and there are several ways to do that. If you're building a .NET API that's installed as a NuGet package and used from code, XML comments on your methods and parameters are a great way to tell users what they need to supply when calling your API. If your API is an HTTP service, then look at hypermedia formats such as SIREN, which let you declare which parameters and ranges are supported.
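
For the NuGet case, that documentation lives directly on the method as XML comments. A hypothetical sketch:

    using System.Collections.Generic;
    using System.Linq;

    public class Resort
    {
        public string Name { get; set; }
        public int Altitude { get; set; }
    }

    public class SkiSearchClient
    {
        /// <summary>Searches for ski resorts matching the given criteria.</summary>
        /// <param name="countryCode">Two-letter ISO 3166-1 code, e.g. "FR" or "IT".</param>
        /// <param name="minAltitude">Minimum resort altitude in metres; must be non-negative.</param>
        /// <returns>Matching resorts, or an empty sequence if nothing matches.</returns>
        public IEnumerable<Resort> DoSearch(string countryCode, int minAltitude = 0)
        {
            return Enumerable.Empty<Resort>(); // stub: real implementation omitted
        }
    }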

I think in the next decade we'll see a completely different category of APIs, driven by machine learning systems, where many of the commonly accepted rules of API design simply won't apply. It wouldn't surprise me at all if we end up with a ski holiday search API where you just describe what you need in natural language. There won't even be a method signature - you'll call something like DoSearch("ski chalet, in France or Italy, 1400m or higher, that sleeps 12 people in 8 bedrooms, available from 18-25 January 2018") - and the system will do the rest for you. These machine-learning-driven approaches are an exciting development, and I think they're going to bring some very interesting changes for API developers and designers.

This year Dylan Beattie will visit Russia again to speak at the DotNext conference in St. Petersburg in May, so you'll have the chance not only to hear his new talk, but also to ask him your questions - or simply say hello to the London .NET User Group.

Source: https://habr.com/ru/post/328090/

