
Five microservices development trends for 2018


2017 was an important year for DevOps: the number of players in the ecosystem grew significantly, and the number of CNCF projects tripled. Looking a year ahead, we expect innovation and market change to accelerate even further. Below we examine the microservices trends of 2018: service meshes, event-driven architectures, container-native security, GraphQL, and chaos engineering.

1. Service meshes are hot!


A service mesh, a dedicated infrastructure layer for improving service-to-service communication, is currently the most talked-about topic in the cloud-native world. As containers become more common, service topologies grow more dynamic, demanding better network functionality. Service meshes can help manage traffic through service discovery, routing, load balancing, health checking, and observability. Service meshes attempt to tame unruly container complexity.
Service meshes are clearly gaining popularity: load balancers such as HAProxy, traefik, and NGINX have begun to reposition and present themselves as data planes. We have not yet seen large-scale deployments, but we do know of companies that already run meshes in production. Moreover, service meshes are not exclusive to microservices or Kubernetes environments and can also be applied in VM and serverless environments. For example, the National Center for Biotechnology Information (NCBI) runs no containers, yet uses Linkerd.

Service meshes can also be used for chaos engineering, "the discipline of experimenting on a distributed system in order to build confidence in the system's capability to withstand turbulent conditions." Instead of installing a daemon that runs on each node, the service mesh can inject latencies and failures into the environment.
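As a rough illustration of what such fault injection looks like, here is a minimal Python sketch of a proxy wrapper that delays or fails a configurable fraction of requests before forwarding them to the real handler. The function and parameter names are hypothetical; a real mesh does this in the sidecar, transparently to the application.

```python
import random
import time

# Hypothetical sketch of mesh-style fault injection: a fraction of requests
# gets an injected error, another fraction gets extra latency, and the rest
# pass through to the real handler untouched.
def with_faults(handler, latency_s=0.5, latency_pct=0.1, abort_pct=0.05,
                rng=random.random):
    def proxied(request):
        roll = rng()
        if roll < abort_pct:
            return {"status": 503, "body": "injected failure"}
        if roll < abort_pct + latency_pct:
            time.sleep(latency_s)  # injected delay before forwarding
        return handler(request)
    return proxied

def real_handler(request):
    return {"status": 200, "body": "ok"}

chaotic = with_faults(real_handler, latency_s=0.01)
```

Because the injection lives in the wrapper (or, in a mesh, the sidecar), the service code itself never changes during the experiment.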

Istio and Buoyant's Linkerd are the most prominent offerings here. Note that Buoyant released Conduit v0.1, an open-source service mesh for Kubernetes, last December.


2. The rise of event-driven architectures


As the need for business agility grows, we have begun to see a shift toward a "push" or event-based architecture, in which one service publishes an event and one or more observer containers that subscribed to it react by running logic asynchronously, without the producer knowing how the event was consumed. Unlike request-response architectures, in event-driven systems the functional flow and transaction load of the initiating container do not depend on the availability or completion of remote processes in downstream containers. A side benefit is that developers can be more independent when building their respective services.
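The "push" pattern described above can be sketched in a few lines of Python: a producer publishes an event to a bus, and any subscribers registered for that event type react, while the producer never learns who (if anyone) consumed the event. The class and event names are illustrative only.

```python
from collections import defaultdict

# Minimal event bus: producers publish, subscribers react, and the two sides
# know nothing about each other beyond the event type.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)  # a real system would dispatch asynchronously

audit_log = []
bus = EventBus()
# Two independent observers of the same (hypothetical) event type:
bus.subscribe("order.created", lambda e: audit_log.append(e["id"]))
bus.subscribe("order.created", lambda e: print("notify warehouse:", e["id"]))
bus.publish("order.created", {"id": 42})
```

Note that the publisher's call returns regardless of how many subscribers exist, which is exactly the decoupling the paragraph above describes.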

While developers can build container environments that produce events, Function-as-a-Service (FaaS) embodies this quality natively. In FaaS architectures, a function is stored as text in a database and is triggered by an event. Once the function is invoked, an API gateway receives the message and passes it through a load balancer to the message bus, which queues it to be scheduled and provisioned in an invocation container. After execution, the result is stored in the database and returned to the user, and the function is decommissioned until it is called again.
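From the developer's point of view, all of that machinery collapses into a single handler function. Below is a minimal Python sketch in the AWS Lambda handler style: the platform delivers an event and the function returns a result, while queueing, scaling, and teardown are the platform's job. The event shape here is invented for illustration, not a real AWS payload.

```python
import json

# Lambda-style handler: the platform calls this with an event; everything
# around it (routing, queueing, scaling, teardown) is the FaaS platform's job.
def handler(event, context=None):
    # "name" is a hypothetical field in our illustrative event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }
```

The function holds no state between invocations, which is why the platform can freely spin containers up and down around it.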

The benefits of FaaS include: 1) a shorter path from writing code to running a service, because there is no artifact to build or push beyond the source code, and 2) reduced overhead, because functions are managed and scaled by FaaS platforms such as AWS Lambda. However, FaaS is not trouble-free. Since FaaS requires decoupling every component of a service, there can be a proliferation of functions that are hard to discover, manage, orchestrate, and monitor. Finally, without full visibility, including into dependencies, FaaS systems are difficult to debug, and infinite loops can occur.

Currently, FaaS is poorly suited to processes that require long-running calls, large amounts of data loaded into memory, or consistent performance. While developers use FaaS for background jobs and ephemeral events, we believe use cases will expand over time as storage layers improve and platforms become more performant.

In the fall of 2017, the Cloud Native Computing Foundation (CNCF) surveyed more than 550 people, of whom 31% use serverless technology and 28% plan to adopt it within the next 18 months. The survey then asked which specific serverless platform was used: of the 169 respondents using serverless technology, 77% named AWS Lambda. Although Lambda may lead among serverless platforms, we believe interesting opportunities may lie at the edge. Edge compute will be particularly significant for IoT and AR/VR.

3. Security needs are changing


Applications packaged in containers are for the most part more secure by default, thanks to visibility into the kernel. In VM environments, the virtual device driver is the only point of visibility; moving to a container environment, the OS exposes system calls with semantic meaning, a much richer signal. Previously, operators could gain some of these capabilities by shipping an agent into the VM, but that was complicated. Containers provide cleaner visibility, and integration in a container environment is trivial compared to a VM environment.

With this in mind, the 451 Research survey reports that security is the biggest obstacle to container adoption. Initially, vulnerabilities were the main security concern in container environments. As the number of off-the-shelf container images in public registries multiplied, making sure they were free of vulnerabilities became important. Over time, image scanning and authentication became commoditized.

Unlike virtualized environments, where the hypervisor serves as a point of access and control, any container with root access to the kernel ultimately has access to every container sharing that kernel. In turn, organizations must control how containers interact with the host and which containers are allowed to perform certain actions or system calls. Hardening the host to ensure cgroups and namespaces are properly configured is also important for security.

Finally, traditional firewalls rely on IP address rules to allow network traffic. This approach does not carry over to container environments, because dynamic orchestrators reuse IP addresses. Real-time detection and response is crucial in production environments and is achieved by fingerprinting the container environment and building a detailed baseline of behavior, so that anomalous behavior is easy to detect and attackers can be isolated. The 451 report states that 52% of surveyed companies run containers in production, which suggests companies are increasingly adopting container-based solutions for runtime threat detection.
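The baselining idea above can be sketched as a two-phase process: learn the set of actions (for example, syscalls) a container normally performs, then flag anything outside that set as anomalous. The class below is a deliberately simplified illustration; real runtime-security tools fingerprint far richer signals than a bare set of names.

```python
# Hedged sketch of behavioral baselining: learn normal actions first,
# then treat anything unseen during learning as anomalous.
class BehaviorBaseline:
    def __init__(self):
        self._seen = set()
        self._learning = True

    def observe(self, action):
        """Record or check one action; returns True if anomalous."""
        if self._learning:
            self._seen.add(action)
            return False  # never alert while still learning the baseline
        return action not in self._seen

    def freeze(self):
        """End the learning phase; the baseline is now fixed."""
        self._learning = False

baseline = BehaviorBaseline()
for syscall in ["read", "write", "accept"]:  # illustrative "normal" syscalls
    baseline.observe(syscall)
baseline.freeze()
```

After `freeze()`, an unexpected action such as `execve` from a web-serving container would stand out immediately, which is the kind of signal the paragraph above describes.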

4. Transition from REST to GraphQL


Created by Facebook in 2012 and open-sourced in 2015, GraphQL is an API specification comprising a query language and a runtime for executing queries. Systems like GraphQL let developers define data schemas; new fields can be added and old fields deprecated without affecting existing queries or restructuring the client application. GraphQL is powerful because it is not tied to a specific database or storage engine.

A GraphQL server exposes a single HTTP endpoint that expresses the full range of the service's capabilities. By defining relationships between resources in terms of types and fields (rather than endpoints, as in REST), GraphQL can follow references between properties, so services can retrieve data from multiple resources with a single query. A REST API, by contrast, may require loading from multiple URLs for a single request, increasing network load and slowing down requests. With fewer round trips, GraphQL reduces the resources required per data request. The returned data is typically formatted as JSON.
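To make the "single query across multiple resources" idea concrete, here is a toy Python sketch (not the GraphQL runtime itself) of a resolver that follows a link between two in-memory "resources" so one call returns data that REST would spread over several URLs. The data, field names, and resolver are all invented for illustration.

```python
# Illustrative in-memory "resources"; in REST these would live behind
# separate endpoints such as /users/1 and /users/1/posts.
USERS = {1: {"name": "Ada", "post_ids": [10, 11]}}
POSTS = {10: {"title": "Intro"}, 11: {"title": "Meshes"}}

def resolve_user(user_id, selection):
    """Return only the requested fields, following links between resources."""
    user = USERS[user_id]
    result = {}
    for field in selection:
        if field == "name":
            result["name"] = user["name"]
        elif field == "posts":  # follow the link to the second resource
            result["posts"] = [{"title": POSTS[p]["title"]}
                               for p in user["post_ids"]]
    return result

# One "query" fetches the user and their posts together:
data = resolve_user(1, ["name", "posts"])
```

As in GraphQL proper, the shape of the request (`["name", "posts"]`) mirrors the shape of the response, and the client gets exactly the fields it asked for, no more.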

GraphQL offers additional benefits over REST. First, clients and servers are decoupled, so they can be maintained separately. Unlike REST, GraphQL uses the same language for client-server communication, which makes debugging easier. The shape of a query exactly matches the shape of the data returned from the server, making GraphQL very expressive and efficient compared to other query languages such as SQL or Gremlin. Because queries mirror the shape of their responses, deviations can be detected and fields that do not resolve correctly can be identified. Since queries are simpler, the whole process is more stable. The specification is best known for supporting external APIs, but we also see it used for internal APIs.

GraphQL users include Amplitude, Credit Karma, KLM, the NY Times, Twitch, Yelp, and others. In November, Amazon confirmed GraphQL's growing popularity by launching AWS AppSync, which includes GraphQL support. It will be interesting to watch how GraphQL evolves alongside gRPC and alternatives such as Twitch's Twirp RPC framework.

5. Chaos engineering is going mainstream


Initially popularized by Netflix, and later by Amazon, Google, Microsoft, and Facebook, chaos engineering means experimenting on a system to build confidence in its ability to withstand problems in production. Over the past decade, chaos engineering has evolved into a legitimate discipline. It started with Chaos Monkey, which turned off services in production environments, and grew in scope with Failure Injection Testing (FIT) and Chaos Kong, which suit larger environments.

On the surface, chaos engineering may seem to be simply about injecting failures. While breaking systems can be fun, it is not always productive or informative. Chaos engineering covers a wider range than fault injection alone, including other symptoms such as traffic spikes, unusual combinations of requests, and so on, in order to surface existing problems. Beyond testing assumptions, it should also uncover new properties of the system. By identifying weaknesses in the system, teams can improve resilience and prevent poor user experiences.

New technologies such as neural networks and deep learning are so complex that determining how something works can matter less than proving that it works. Chaos engineering helps address this by verifying the integrity of a system to uncover instability. It is likely to become an even more accepted practice as engineers work to make their increasingly intricate systems more reliable.

As chaos engineering becomes more common, it may take the form of existing open-source projects, commercial offerings, or, as mentioned above, be implemented through a service mesh.

Source: https://habr.com/ru/post/347428/
