The Jenkins Pipeline Plugin is a very handy tool for setting up continuous software delivery. It lets you break delivery to the end user into stages, each of which you can control (on which node it runs, what it does and how), and ultimately visualize the whole delivery process. Together with the Blue Ocean plugin it all looks very tasty. In real life, however, there are usually other systems besides Jenkins involved in this workflow, and the question arises of how to integrate them with the existing setup. A typical example is Jira: an issue lands on a tester, who clicks around the interface (or does some other useful work), and only after their blessing does our artifact have the right to move on toward the waiting client.
So what are our options for implementation?
Obviously, there are at least two: either the pipeline polls the external system until it gets an answer, or the external system itself notifies Jenkins, web-hook style.
The first option is inconvenient, to say the least: you have to write a loop that keeps checking something, remember to break out of it after a certain amount of time, and in general polling is not the coolest approach, in my opinion. So we will look straight at the web-hook route.
Having gone through the documentation, I did not find any feature actually called a web hook, and that is a pity, because what follows is more of a workaround than a purpose-built solution.
We will experiment on the simplest possible configuration, a spherical horse in a vacuum (quite literally an example taken from the examples):
node {
    stage 'Stage 1'
    echo 'Hello World 1'
    stage 'Stage 2'
    echo 'Hello World 2'
    stage 'Stage 3'
    build job: 'hello-task', parameters: [[$class: 'StringParameterValue', name: 'CoolParam', value: 'hello']]
}
The plugin uses a Groovy DSL to describe the sequence of steps. In the example above everything runs on a single node (the master, so don't do this in real life ;)). As you can see, there are three stages: two of them simply print Hello World to the console (how unexpected), and the third triggers a trivial downstream job and passes it a parameter, which that job also prints to the console.
If we run this job, we will see something like this in the log:
Started by user admin
[Pipeline] node
Running on master in /var/jenkins_home/jobs/pipeline-test/workspace
[Pipeline] {
[Pipeline] stage (Stage 1)
Entering stage Stage 1
Proceeding
[Pipeline] echo
Hello World 1
[Pipeline] stage (Stage 2)
Entering stage Stage 2
Proceeding
[Pipeline] echo
Hello World 2
[Pipeline] stage (Stage 3)
Entering stage Stage 3
Proceeding
[Pipeline] build (Building hello-task)
Scheduling project: hello-task
Starting building: hello-task #2
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Hooray: both the commands from our script and the child job that we defined separately have run to completion.
Now imagine that between the first and second stages we need approval from the external system before the work can continue. To implement this, we will use the mechanism for waiting for user input, the input step. In its simplest form it looks like this:
input 'Ready to go?'
Running our job again, we will see that we are now required to confirm the action in the interface:
The interface is fine for those who like clicking the mouse, but it does not solve our problem, so it is time to dig into the API. And here the documentation is lacking: to figure out what to call and how, you have to ask advice from people who know the subject and examine the code.
Since our input has no parameters, we can confirm the action using the proceedEmpty method. To do so, send a POST request to the following URL:
JENKINS_ROOT_URL/job/JOB_NAME/BUILD_NUMBER/input/INPUT_ID/proceedEmpty?token=YOUR_TOKEN
The main difficulty here is getting hold of the INPUT_ID: I could not obtain it through the API, and the only ways to figure out what it is are to parse the page or to watch the traffic when the form is submitted. The good news is that the INPUT_ID stays constant. The bad news is that by default it is generated randomly as a string of characters, and going off to hunt it down every time is not the most fun exercise, so it is better to set this ID explicitly via the id property:
input message: 'Ready to go?', id: 'go'
Note that the actual ID will always begin with a capital letter. As a result, in my case the request looked like this:
http://localhost:8080/job/pipeline-test/16/input/Go/proceedEmpty?token=f7614a8510b59569347714f53ab1e764
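Fired from the external system, this request could look roughly like the following sketch using Python's requests library (the user name, API token, and job token are placeholder assumptions, and the CSRF crumb handling described at the end of the article is omitted here for brevity):

import requests

JENKINS = 'http://localhost:8080'      # Jenkins root URL (assumed)
AUTH = ('admin', 'YOUR_API_TOKEN')     # basic auth: user name + API token (placeholders)

# Confirm the parameterless input step 'Go' in build #16 of the pipeline-test job
resp = requests.post(
    f'{JENKINS}/job/pipeline-test/16/input/Go/proceedEmpty',
    params={'token': 'YOUR_TOKEN'},    # the token from the URL above, if one is configured
    auth=AUTH,
)
resp.raise_for_status()                # raises if Jenkins answers with a 4xx/5xx error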
Another nice feature of the input mechanism is the ability to define additional parameters, which can then be used later:
def testPassParamInput = input(
    id: 'testPassParam',
    message: 'Pass param?',
    parameters: [
        [$class: 'StringParameterDefinition', defaultValue: 'hello', description: 'Test parameter', name: 'testParam']
    ])
Here we define a parameter that we want to pass on to the child job, in our case testParam. Accordingly, we can rewrite the call to the child job so that it receives this parameter:
build job: 'hello-task', parameters: [[$class: 'StringParameterValue', name: 'CoolParam', value: testPassParamInput]]
Note that the entire returned object is passed as value. If the input defines several parameters, you must specify explicitly which one to take:
testPassParamInput['testParam']
In the interface, we will now have something like this:
But again, the GUI is of little interest to us, so let's study the API further. To pass a parameter over plain HTTP, you need a different method, proceed:
JENKINS_ROOT_URL/job/JOB_NAME/BUILD_NUMBER/input/INPUT_ID/proceed?token=YOUR_TOKEN
In this case we need to submit a form containing the parameters and their values. First of all, let's build the proper JSON:
{ "parameter" : [ { "name" : "testParam", "value" : "new cool value" } ] }
Here name is the name of the parameter and value is its value.
Now the question is how to deliver it properly, and this is where the uninitiated start to run into problems. Since Jenkins implements JSONP, this content is not transmitted directly in the request body; instead, you have to wrap it in a form and stuff it into a json field. If you do this through Postman, the final request looks like this:
----WebKitFormBoundaryE19zNvXGzXaLvS5C
Content-Disposition: form-data; name="json"

{
  "parameter": [
    { "name": "testParam", "value": "new cool value" }
  ]
}
----WebKitFormBoundaryE19zNvXGzXaLvS5C
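The same request can of course be scripted. Here is a rough sketch with Python's requests, where the files argument makes the json field go out as multipart/form-data just like the Postman request above (the credentials, job token, and input ID TestPassParam are placeholder assumptions following the capitalization rule mentioned earlier; the crumb header is again omitted):

import json
import requests

JENKINS = 'http://localhost:8080'
AUTH = ('admin', 'YOUR_API_TOKEN')     # basic auth placeholders

payload = {'parameter': [{'name': 'testParam', 'value': 'new cool value'}]}

# Submit the json form field as multipart/form-data, mirroring the Postman request above
resp = requests.post(
    f'{JENKINS}/job/pipeline-test/16/input/TestPassParam/proceed',
    params={'token': 'YOUR_TOKEN'},
    auth=AUTH,
    files={'json': (None, json.dumps(payload))},   # (None, value) = plain field, no filename
)
resp.raise_for_status()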
Not very pretty, but it works. Now in the log we can see that the action was indeed confirmed by a user (in our case, by admin):
Hello World 2
[Pipeline] input
Ready to go?
Proceed or Abort
Approved by admin
[Pipeline] input
Input requested
Approved by admin
[Pipeline] stage (Stage 3)
Entering stage Stage 3
Proceeding
[Pipeline] build (Building hello-task)
Scheduling project: hello-task
Starting building: hello-task #11
When the external system does not give the go-ahead, you need to call the abort method:
JENKINS_ROOT_URL/job/JOB_NAME/BUILD_NUMBER/input/INPUT_ID/abort?token=YOUR_TOKEN
No data needs to be passed. In the log, after this request is executed, we will see that the run was indeed rejected by the user:
Rejected by admin
Finished: ABORTED
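Scripted, the rejection is a single bare POST (a sketch with the same placeholder credentials as before):

import requests

# Reject the pending input step; no request body is needed
resp = requests.post(
    'http://localhost:8080/job/pipeline-test/16/input/Go/abort',
    params={'token': 'YOUR_TOKEN'},
    auth=('admin', 'YOUR_API_TOKEN'),
)
resp.raise_for_status()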
And finally: do not forget that all these requests require basic authentication, a token, and a crumb. The latter can be obtained from JENKINS_ROOT_URL/crumbIssuer/api/json:
{ "_class":"hudson.security.csrf.DefaultCrumbIssuer", "crumb":"f4c1a2dc6a67c70e66c35c807e542f4e", "crumbRequestField":"Jenkins-Crumb" }
After that, you need to add a new Jenkins-Crumb header to the HTTP request, with its value taken from the crumb field.
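Fetching the crumb and attaching it to a request could look roughly like this (again a sketch with Python's requests and placeholder credentials):

import requests

JENKINS = 'http://localhost:8080'
AUTH = ('admin', 'YOUR_API_TOKEN')

# 1. Ask the crumb issuer for a CSRF crumb
issuer = requests.get(f'{JENKINS}/crumbIssuer/api/json', auth=AUTH).json()
crumb_header = {issuer['crumbRequestField']: issuer['crumb']}   # e.g. {'Jenkins-Crumb': 'f4c1...'}

# 2. Send the crumb header along with the actual request, e.g. confirming an input step
resp = requests.post(
    f'{JENKINS}/job/pipeline-test/16/input/Go/proceedEmpty',
    params={'token': 'YOUR_TOKEN'},
    auth=AUTH,
    headers=crumb_header,
)
resp.raise_for_status()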
In its current form, the Pipeline Plugin makes it possible to hook control actions from external systems into the delivery process, which opens up a lot of room for automating software delivery in complex and transitional setups. That said, I would still like a more obvious and nicer API for these actions.
Source: https://habr.com/ru/post/302274/