Imagine that you are developing an application on your laptop, where the working environment has a particular configuration. The application relies on that configuration and depends on certain files on your machine. Other developers may have slightly different configurations. On top of that, your organization has test and production environments with their own configurations and file sets. You would like to emulate those environments as closely as possible, but you don't want to recreate complex, heavyweight servers on your machine. How do you make the application work in all of these environments, pass quality control, and reach production without running into a pile of problems along the way that require constant code rework?

The answer: use containers. A container packages all the necessary configuration (and files) together with your application, so it can easily be moved from development to testing and then to production without fear of side effects. Crisis averted, everyone wins.
This is, of course, a simplified example, but Linux containers really do solve many of the problems around portability, configurability, and isolation of applications. Whether the setting is a local IT infrastructure, a cloud, or a hybrid of the two, containers do their job well. Of course, beyond the container itself, choosing the right container platform also matters.
What are Linux containers?
A Linux container is a set of processes that are isolated from the rest of the operating system and run from a separate image containing all the files they need. The image carries all of the application's dependencies, so the application can easily be moved from development to testing and then to production.
Isn't it just virtualization?
Yes and no. This comparison may help:
- Virtualization allows multiple operating systems to run simultaneously on one computer.
- Containers use the same operating system kernel and isolate application processes from the rest of the system.
What does that mean? First of all, running several operating systems at once on a single hypervisor (the software that implements virtualization) takes more system resources than an equivalent container-based setup. Resources are usually not unlimited, so the less your applications "weigh", the more densely you can pack them onto servers. All Linux containers running on a machine share the same operating system kernel, so your applications and services stay lightweight and run side by side without getting in each other's way.
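To make the kernel-level isolation concrete, here is a minimal sketch in Go of the primitive that container runtimes build on: starting a process in its own hostname, PID, and mount namespaces. It is Linux-specific, needs root (or the corresponding capabilities), and illustrates only the namespace step; real runtimes also set up a root filesystem, remount /proc, and apply cgroups.

```go
// namespaces.go — run a shell in new UTS, PID and mount namespaces (Linux only, run as root).
// A minimal sketch of the kernel primitives containers are built on, not a container runtime.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New hostname, PID and mount namespaces: the shell gets its own
		// hostname and PID numbering, but still shares the host kernel.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside the spawned shell, `echo $$` reports PID 1, and changing the hostname does not affect the host.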
Brief history of containers
The progenitor of containers is the FreeBSD Jail technology, which appeared in 2000. It allows a single FreeBSD operating system to be partitioned into several independent mini-systems, called jails, running on the same kernel. Jails were conceived as isolated environments that an administrator could safely hand over to internal users or external clients. Since a jail is built on the chroot call and is a virtual environment with its own files, network, and users, processes cannot break out of it and damage the host OS. However, because of design limitations the Jail mechanism does not provide complete process isolation, and over time ways of "escaping" from a jail were found.
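For a feel of the chroot call that jails are built on, here is a hypothetical Go sketch: it assumes a ./rootfs directory containing a minimal filesystem (for example a busybox tree with /bin/sh), must be run as root, and covers only the file-level part of a jail, not network or user isolation.

```go
// chrootdemo.go — change the root directory and start a shell inside it (Unix-like systems, run as root).
// Assumes ./rootfs contains a minimal filesystem tree that provides /bin/sh.
package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	if err := syscall.Chroot("rootfs"); err != nil {
		log.Fatal(err)
	}
	// After chroot the working directory still points outside; move into the new root.
	if err := os.Chdir("/"); err != nil {
		log.Fatal(err)
	}
	// Replace this process with a shell that can only see files under ./rootfs.
	if err := syscall.Exec("/bin/sh", []string{"sh"}, os.Environ()); err != nil {
		log.Fatal(err)
	}
}
```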
But the idea itself was promising, and in 2001 the VServer project appeared on the Linux platform, created, according to its founder Jacques Gelinas, in order to run "several standard Linux servers on a single machine with a high degree of independence and security." Thus Linux gained a foundation for running multiple user environments concurrently, and what we call containers today began to take shape.
Towards practical use
A major and rapid step toward isolation came from combining existing technologies: in particular, the cgroups mechanism, which operates at the Linux kernel level and limits the system resources used by a process or group of processes, and the systemd init system, which is responsible for creating user space and starting processes. Combining these mechanisms, originally created to improve overall manageability in Linux, made it possible to control isolated processes far better and laid the groundwork for properly separating environments.
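As a rough illustration of what cgroups do, the sketch below creates a control group, caps the number of processes allowed in it, and moves the current process into the group. It assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the pids controller enabled, root privileges, and a hypothetical group name "demo"; paths and controller files differ on cgroup v1 systems.

```go
// cgroupdemo.go — limit the number of processes via cgroup v2 (Linux only, run as root).
// Assumes the unified cgroup hierarchy is mounted at /sys/fs/cgroup and the pids controller is enabled.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	group := "/sys/fs/cgroup/demo" // hypothetical group name
	if err := os.MkdirAll(group, 0755); err != nil {
		log.Fatal(err)
	}
	// Allow at most 16 processes/threads in this group: a fork bomb inside it dies off quickly.
	if err := os.WriteFile(filepath.Join(group, "pids.max"), []byte("16"), 0644); err != nil {
		log.Fatal(err)
	}
	// Move the current process into the group; its children inherit the limit.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(group, "cgroup.procs"), []byte(pid), 0644); err != nil {
		log.Fatal(err)
	}
	log.Println("process", pid, "is now limited by", group)
}
```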
The next milestone in the history of containers was the development of user namespaces, which "allow the user and group identifiers assigned to a process to differ inside and outside the namespace. In the context of containers, this means that users and groups may have privileges to perform certain operations inside the container but not outside of it." This is similar to the Jail concept, but safer thanks to the additional process isolation.
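The following sketch shows the user-namespace idea in Go: the child process is mapped to UID/GID 0 inside a new user namespace while remaining an ordinary user on the host. On many distributions this works without root, although some disable unprivileged user namespaces.

```go
// usernsdemo.go — map the current user to root inside a new user namespace (Linux only).
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh", "-c", "id")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUSER,
		// Inside the namespace the process sees itself as UID/GID 0;
		// outside it remains the unprivileged user that started the program.
		UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
		GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

The `id` command printed by the child reports uid=0, even though no real root privileges were granted on the host.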
Then came the Linux Containers (LXC) project, which offered a set of much-needed tools, templates, libraries, and language bindings, dramatically simplifying the practical use of containers.
The arrival of Docker
In 2008 Docker (then called dotCloud) came onto the scene with its technology of the same name, combining the achievements of LXC with advanced developer tooling and making containers even easier to use. Today the open source Docker technology is the best-known and most popular tool for deploying and managing Linux containers.
Along with many other companies, Red Hat and Docker are members of the Open Container Initiative (OCI) project, which aims to unify the standards for managing container technologies.
Standardization and the Open Container Initiative
The Open Container Initiative project operates under the auspices of the Linux Foundation. It was established in 2015 "with the goal of creating open industry standards for container formats and execution environments." At the moment its main task is to develop specifications for container images and runtime environments.
The runtime specification defines a set of open standards describing the composition and structure of a container's filesystem bundle and how that bundle should be unpacked by the runtime. Essentially, this specification exists so that a container works as intended and all the resources it needs are available and in the right places.
The container image specification defines standards for "image manifest, file system serialization, and image configuration."
Together, these two specifications define what is inside the container image, as well as its dependencies, environments, arguments, and other parameters necessary for the correct execution of the container.
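To give a feel for what the image specification covers, here is a simplified sketch of Go types for an OCI image manifest and code that reads one from a file. The field set follows the image-spec but is deliberately incomplete, and manifest.json is a hypothetical path (for example, a manifest exported from a local registry).

```go
// manifest.go — a simplified view of an OCI image manifest, for illustration only.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Descriptor points to a blob (the image config or a layer) by content digest.
type Descriptor struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
}

// Manifest lists the image configuration and the ordered filesystem layers.
type Manifest struct {
	SchemaVersion int          `json:"schemaVersion"`
	MediaType     string       `json:"mediaType,omitempty"`
	Config        Descriptor   `json:"config"`
	Layers        []Descriptor `json:"layers"`
}

func main() {
	data, err := os.ReadFile("manifest.json") // hypothetical path to an exported manifest
	if err != nil {
		log.Fatal(err)
	}
	var m Manifest
	if err := json.Unmarshal(data, &m); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("config %s, %d layer(s)\n", m.Config.Digest, len(m.Layers))
}
```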
Containers as an abstraction
Linux containers are another evolutionary step in how applications are developed, deployed, and maintained. By providing portability and version control, a container image helps ensure that an application that runs on a developer's computer will also run in production.
Demanding far fewer system resources than a virtual machine, a Linux container is almost its equal in terms of isolation and makes it easier to maintain complex multi-tier applications.
The point of Linux containers is to speed up development and help you respond quickly to business requirements as they arise, not to prescribe any particular software for solving problems. Moreover, you can containerize not only an entire application but also individual parts of an application and its services, and then use technologies like Kubernetes to automate and orchestrate such containerized applications. In other words, you can build monolithic solutions, where all the application logic, runtime components, and dependencies live in a single container, or you can build distributed applications out of multiple containers operating as microservices.
Containers in production
Containers are a great way to speed up the delivery of software and applications to the customers who use them in production environments. But that naturally raises both responsibility and risk. Josh Bressers, a security strategist at Red Hat, explains how to keep containers secure.
“I’ve been involved in security for a long time, and security almost never gets proper attention until a technology or idea goes mainstream,” says Bressers. “Everyone agrees that this is a problem, but that is how the world works. Today containers have taken the world by storm, and their place in the overall security picture is starting to become clearer. I have to say that containers are nothing special; they are just another tool. But since they are in the spotlight right now, it is time to talk about their security.
At least once a week someone assures me that running workloads in containers is safe, so there is no need to worry about what is inside them. In reality that is not the case at all, and this attitude is very dangerous. Security inside the container matters just as much as security in any other part of the IT infrastructure. Containers are already here; they are actively used and spreading at an amazing rate. But there is no magic in them where security is concerned: a container is only as secure as the content running inside it. So if your container holds a pile of vulnerabilities, the result will be exactly the same as on bare metal with that same pile of vulnerabilities.”
What's wrong with container security
Container technology is changing the established view of the computing environment. The essence of the new approach is that you have an image containing only what you need, and you run it only when you need it. You no longer have extraneous software that was installed for reasons nobody remembers and can cause serious trouble. From a security standpoint this is called the "attack surface": the fewer things running in the container, the smaller the surface and the higher the security. However, even if only a few programs run inside the container, you still need to make sure its contents are not outdated and not riddled with vulnerabilities. The size of the attack surface does not matter if something installed inside has serious security holes. Containers are not all-powerful; they need security updates too.
Banyan published a report titled "Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities." Thirty percent is a lot. Since Docker Hub is a public registry, it holds a huge number of containers created by different communities, and since anyone can publish to such a registry, no one can guarantee that a freshly published container does not contain old, hole-ridden software. Docker Hub is both a blessing and a curse: on the one hand it saves a lot of time and effort when working with containers; on the other, it offers no guarantee that the image you pull is free of known security vulnerabilities.
Most of these vulnerable images are not malicious; nobody deliberately packed "leaky" software into them. Someone simply packaged the software into a container at some point and published it on Docker Hub. Time passed, and a vulnerability was discovered in that software. Until someone tracks this and takes care of updates, Docker Hub will remain a breeding ground for vulnerable images.
When deploying containers, base images are usually pulled from a registry. If it is a public registry, you cannot always tell what you are dealing with, and in some cases you will get an image with very serious vulnerabilities. The contents of a container really do matter, which is why a number of organizations are starting to build scanners that look inside container images and report the vulnerabilities they find. But scanners are only half the solution: once a vulnerability is found, you still need to find and install a security update for it.
Of course, you could abandon third-party containers entirely and develop and maintain everything yourself, but that is a hard road and can seriously distract you from your core tasks and goals. It is usually better to find a partner who understands container security and can handle those problems, so you can focus on what you actually need.
Red Hat Container Solutions
Red Hat offers a fully integrated platform for deploying Linux containers, suitable for small pilot projects as well as for complex systems built from orchestrated multi-container applications: from the operating system for the host where the containers run, to verified container images for building your own applications, to orchestration and management tools for a production container environment.

Infrastructure
- Host. Red Hat Enterprise Linux (RHEL) is a Linux distribution with a hard-earned worldwide reputation for trust and certification. If you only need to run container applications, you can use the specialized Red Hat Enterprise Linux Atomic Host distribution: it supports building container solutions and distributed systems/clusters, but does not include the general-purpose OS functionality found in RHEL.
- Inside the container. Using Red Hat Enterprise Linux inside containers ensures that regular, non-containerized applications deployed on RHEL work just as well inside containers. If your organization develops its own applications, RHEL inside containers lets you keep the usual level of support and updates for containerized applications. It also provides portability: applications will run wherever RHEL is available, from the developer's machine to the cloud.
- Data storage. Containers can require a lot of storage space, and they have one design flaw: when a container crashes, a stateful application inside it loses all of its data. Red Hat Gluster Storage, software-defined storage integrated into the Red Hat OpenShift platform, provides flexible, managed storage for containerized applications, removing the need to deploy a separate storage cluster or pay for expensive expansion of traditional monolithic storage systems.
- Infrastructure-as-a-Service (IaaS). The Red Hat OpenStack Platform combines physical servers, virtual machines, and containers into a single unified platform. As a result, container technologies and containerized applications are fully integrated with the IT infrastructure, opening the way to full automation, self-service, and resource quotas across the entire technology stack.
Platform
- Container application platform. The Red Hat OpenShift platform integrates key container technologies, such as docker and Kubernetes, with the enterprise-class Red Hat Enterprise Linux operating system. It can be deployed in a private cloud or in public cloud environments, including with support from Red Hat, and it handles both stateful and stateless applications, letting you move existing applications onto containers without re-architecting them.
- All-in-one solution. Sometimes it is better to get everything at once. That is what the Red Hat Cloud Suite package is for: it includes a container application development platform, infrastructure components for building a private cloud, integration tools for public cloud environments, and a common management system for all of the components. Red Hat Cloud Suite lets you modernize your corporate IT infrastructure so that developers can quickly build and deliver services to employees and customers, while IT staff keep centralized control over every part of the system.
Management
- Hybrid cloud management. Success depends on flexibility and choice. There are no one-size-fits-all solutions, so with corporate IT infrastructure it always pays to have more than one option. By complementing public cloud platforms, private clouds, and traditional data centers, containers widen that choice. Red Hat CloudForms lets you manage hybrid clouds and containers in an easily scalable, understandable way, integrating container management systems such as Kubernetes and Red Hat OpenShift with Red Hat Virtualization and VMware virtual environments.
- Container automation. Creating and managing containers is often monotonous and time-consuming. Ansible Tower by Red Hat lets you automate this work and do away with hand-written shell scripts and manual operations. Ansible playbooks can automate the entire container lifecycle, including build, deployment, and management. You no longer have to deal with routine, which leaves time for more important things.
Red Hat releases containers and most of the technologies for deploying and managing them as open source software.
Linux containers are another evolutionary step in how we develop, deploy, and manage applications. They provide portability and version control, helping to ensure that what works on a developer's laptop will also work in production.
Do you use containers in your work, and how do you rate their prospects? Share your pains, hopes, and successes.