Despite the no-code paradigm, unit testing on complex Pega projects is just as important as on any other software development project. I became convinced of this personally while working on projects for the end-to-end automation of business processes based on Pegasystems solutions.
On Habr, I found just one article about the Pega platform. Meanwhile, Pega annually receives high marks in the most reputable rankings of BPM solutions and CRM applications.
Continuing the topic of working with Pega, I offer you a translation of my article about Ninja, a tool for testing Pega applications. Along the way, I will comment on the terminology used in the material.

In 2015, I participated in automating the corporate lending process on the Pega platform at one of the largest financial institutions in Russia and Eastern Europe.
At one of the stages, the team was running late with the next release. It became obvious that such a complex project needed an alternative approach to automated testing, one that would allow possible defects to be detected at early stages. So we decided to transfer the practices used in Java projects to the Pega platform. How that turned out is what this article is about.
One day something went wrong
So that you can imagine the scale of the system: by February of this year it had grown to more than 32 thousand rules (a rule is the unit from which an application is built on the Pega platform; in Java it roughly corresponds to a method or class, but the analogy is loose). The system is now developed by three independent teams that produce more than four thousand check-ins per day.
In 2015, together with colleagues from
LANIT-BP, we completed the next stage of the project and prepared a release containing many complex integration interactions with the customer’s back-end systems.
The problems started unexpectedly. System testing revealed a number of errors in integration scenarios.
Yes, of course, I am aware that in the Russian corporate software development industry there is no unanimity in how this term is interpreted and applied. So let me make a reservation: here I use "system testing" to mean the joint testing of all the applications involved in implementing a business process.
So, two weeks remained until the completion of system testing, and in that time the teams behind all the interacting systems had to fix their applications. In this context, let "systems" and "applications" be synonymous.
The list of immediate tasks looked like ordinary project routine. We had to:
- identify the problematic integration scenarios;
- find the conflicts between the integration specifications of the different systems that had broken those scenarios;
- in each case, agree on which side the specification and implementation would be corrected;
- fix and release updates for all the affected applications.
The list of scenarios that required fixes on the side of our system was not short, but it did not look frightening. Fixing the integration layer of our application took the whole team a week of work.
None of us could have imagined that fixing all the problems that arose after such a significant revision of the application would take another two weeks and countless internal testing cycles. The deadline for this project phase, which was very important to the customer, slipped. We needed to find a solution that, even if it could not help immediately, would prevent similar situations in the future.
Problem analysis
We analyzed this painful failure and identified the following main reasons.
- Due to the high complexity of the application, it was impossible to verify that changes made in one place did not break something in a completely different (and completely unexpected) place. This caused a large number of regression defects.
- The descriptions of application components that we maintained on the project wiki were not tied to the code and were often out of date: they did not describe the exact behavior of the rules under specific conditions. As a result, developers reusing existing components did not have enough information on how to handle all possible exceptions or specific return values.
- Developers focused on handling the main scenarios, leaving alternative branches, error handling, and behavior in the absence of data unimplemented.
Given our Java experience, we understood perfectly well that similar problems in Java projects are usually solved with well-known practices: unit testing, test-driven development, and continuous integration. It seemed logical to apply these practices to our Pega project.
Designing Unit Tests
We decided to start by covering the integration layer of the application with unit tests, since it was the most complex part of our application and generated up to 80% of all defects.
The typical integration component of our application consisted of 6 main elements.
- Connect activity is the main element that combined all other elements and served as an entry point for invoking integration from different parts of the application.
- Request mapping data transform filled out a request based on business data.
- Stream XML converted the request from the integration model to XML.
- The Connector rule communicated with the external system using the required protocol.
- Parse XML parsed the XML response and transformed it into an integration data model.
- Response mapping data transform transformed the response from the integration model into a business entity.
The Stream XML and Parse XML rules were usually generated automatically by the Connector and Metadata Wizard together with the integration data model. The Connector rule was deliberately kept "thin" by moving all the logic into the Connect activity. The largest number of defects came from the Connect activity and the Request/Response mapping data transforms, since they contained the main logic of the integration component.
Given the integration architecture described above, we intended to create the following test suite for each integration point (a minimal skeleton of such a suite is sketched a little further below).

- A test of the Connect activity in isolation, to verify its logic, including non-standard situations.
- A test of the Request mapping data transform in isolation.
- A test of the Response mapping data transform in isolation.
- A test of the entire integration component, to verify that the outgoing XML is generated and the incoming XML is processed correctly (in accordance with the specification).
The elements in the diagram are colored according to their error-proneness and the desired degree of test coverage:
- red - elements that are highly error prone; they should be tested in isolation to achieve greater coverage (unit tests);
- green - elements less prone to errors; it is enough to check them in conjunction with other elements (these are inner-component tests, that is, tests of the whole component in isolation from external systems);
- gray - components that are unlikely to contain errors and may remain uncovered by the test suite.
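To make this structure concrete, here is a minimal JUnit skeleton of such a suite. The class and method names are illustrative only, not taken from the real project; each empty test body marks one of the four test types listed above.

```java
import org.junit.Test;

// Illustrative skeleton: names are hypothetical and the bodies are intentionally
// empty; each method corresponds to one of the four test types described above.
public class CustomerCheckIntegrationTest {

    @Test
    public void connectActivityHandlesAllBranchesInIsolation() {
        // unit test: the Connect activity with all dependent rules mocked,
        // covering the happy path and exceptional situations
    }

    @Test
    public void requestMappingFillsRequestFromBusinessData() {
        // unit test: the Request mapping data transform in isolation
    }

    @Test
    public void responseMappingBuildsBusinessEntityFromResponse() {
        // unit test: the Response mapping data transform in isolation
    }

    @Test
    public void componentProducesAndParsesXmlPerSpecification() {
        // inner-component test: the whole integration component with only the
        // Connector stubbed, checking the outgoing and incoming XML
    }
}
```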
Unit Testing with Pega Platform Tools
The first thing we tried was Pega Test Cases, automated tests created by the corresponding platform tool. Although they are useful for simple applications, it turned out that in enterprise-scale projects their use is substantially limited: they do not let you control the isolation level of dependent rules in tests.
In our case, this meant that we could not test in isolation not only the Connect activity but also the Data transforms, since the latter can use (and often do use) Data Pages, which in turn use other rules as data sources (Activities, Report Definitions, other Data Transforms, and so on).
Thus, Pega's unit testing was suitable for implementing only one of the four types of tests: the inner-component tests. Moreover, even this was possible only thanks to the Integration Simulation mechanism, which in turn limited the flexibility and maintainability of the tests.
The above limitation was not the only thing that bothered us about Pega's unit tests.
- Prior to Pega 7.2 (we used Pega 7.1.8 at the time), there was no convenient way to manage test data preparation and result validation. A side effect of this limitation was having to re-create all the tests after significant changes to the rules.
- A low degree of test code reuse: even with the significant improvements in Pega 7.2, it is impossible to split complex test checks into individual blocks and reuse them in several scenarios.
- There is no way to write complex test scenarios that call several rules and check the overall result of their work.
- Pega provides unit testing support for a limited set of rules (Data Pages, Activity, Decision Table, Decision Tree, Flow, Service SOAP).
Given all these problems, we definitely needed an alternative approach to automated testing.
Research & Development
We started by assembling a strong team of veterans of Java and Pega projects. Through brainstorming and feedback from the project teams, we reached a crystal-clear understanding of which features we needed:
- rule isolation management;
- mocks and stubs for rules (similar to mocks and stubs for classes in Java);
- decomposition of complex tests into small, reusable blocks;
- running unit tests on any modern build server.
After several months of prototyping and implementation, we released an alpha version and began piloting it on a small part of the application code. The pilot confirmed our ideas: the tests proved useful right away, helping us find several significant defects while they were being written and allowing us to catch regression defects in a timely manner.
Over the course of a year, we rolled out the new testing approach on most of our Pega projects. The tool we created needed a name, and the name suggested itself - Ninja.
Unit Testing with Ninja
Ninja provides a Java library that allows you to write JUnit tests for rules. Thus, unit testing of an application on the Pega platform becomes as simple as in Java.
Let's take a closer look at what makes Ninja so convenient.
First, Ninja encrypts and securely stores your credentials so that it can connect to the Pega development environment (a web application known as Pega Designer Studio) and run tests on your behalf. This allows you to test the private version of a rule that you have checked out. Just in case, I will explain: the Pega platform allows you to check out rules and make changes to them that are "visible" only to the author. Once the author has verified the changes, they perform a check-in, and the changes become "visible" to everyone.
Secondly, Ninja uses the same session for consecutive test runs. This lets you identify the Requestor (system session) used by Ninja and connect to it with Tracer, a debugging tool in the Pega platform that shows detailed information about the steps being executed and the state of objects in memory. In this case, Tracer is needed to analyze what happens when the rule under test is run.
Thus, Ninja makes it easy to prepare a test environment precisely. For example, a test can start by stating the class of the case it will work with:
final String myClass = "MyOrg-MyApp-Work-MyCase";
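To give a feel for what this preparation might look like, here is a short sketch that continues the fragment above. The NinjaClient helper and its methods are hypothetical placeholders I use for illustration, not the documented Ninja API (working examples are in the Ninja Cookbook linked below); ClipboardPage and putString belong to the standard Pega engine API.

```java
// Hypothetical sketch: NinjaClient and its methods are placeholders, not the
// real Ninja API. Assumes import com.pega.pegarules.pub.clipboard.ClipboardPage;
NinjaClient ninja = NinjaClient.forCurrentDeveloper();   // placeholder call

// Create a clipboard page of the case class declared above and fill it with
// the business data the rule under test expects.
ClipboardPage testCase = ninja.createPage("TestCaseData", myClass);
testCase.putString(".LoanAmount", "1000000");
testCase.putString(".Customer.TaxID", "7701234567");
```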
You can define stubs for rules called, directly or indirectly, by the rule under test. This allows rules to be tested in controlled isolation.
In a test scenario, you can call almost any Pega rule that has behavior. Unit testing is not applicable to long-living processes and GUIs, so Ninja does not support Flow, Flow Action, Section, and other GUI-related rules.
You can verify your test results comprehensively.
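Putting the pieces together, a complete test might look roughly like the sketch below. The rule names, property names, and the whole NinjaClient API are again hypothetical placeholders rather than the real Ninja interface (see the Cookbook below for working examples); the point is the shape of the test: stub a dependency, prepare the data, run the rule under test, and verify the resulting clipboard state with ordinary JUnit assertions.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

import com.pega.pegarules.pub.clipboard.ClipboardPage;

// Hypothetical sketch: NinjaClient and every call on it are placeholders used
// for illustration; they are not the documented Ninja API.
public class GetCustomerRatingTest {

    private final NinjaClient ninja = NinjaClient.forCurrentDeveloper(); // placeholder

    @Test
    public void returnsDefaultRatingWhenCreditBureauIsUnavailable() {
        // Stub the Connector called indirectly by the activity so that no external
        // system is hit and the error branch of the Connect activity is exercised.
        ninja.stubConnector("ConnectCreditBureau").toFailWith("Service unavailable"); // placeholder

        // Prepare input data on a clipboard page, as in the previous snippet.
        ClipboardPage testCase = ninja.createPage("TestCaseData", "MyOrg-MyApp-Work-MyCase"); // placeholder
        testCase.putString(".Customer.TaxID", "7701234567");

        // Invoke the rule under test: the Connect activity of the integration component.
        ninja.runActivity("GetCustomerRating", testCase); // placeholder

        // Verify the results with standard JUnit assertions.
        assertEquals("DEFAULT", testCase.getString(".Rating"));
        assertEquals("true", testCase.getString(".IsFallbackUsed"));
    }
}
```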
More examples can be found in the Ninja Cookbook on
GitHub .
With Ninja, we covered the main integration scenarios with unit tests.
- We tested the Connect activity in isolation, mocking the other rules. This let us focus these tests on the integration logic and check every branch, including exceptional situations.
- Data transforms were exhaustively tested in isolation from all other rules (Functions, various Decision rules, data sources of Data Pages) called directly or indirectly from the Data Transform rules.
- We stubbed the Connector rules to avoid calling external systems from the inner-component tests, and also to validate the XML generated by the component.
While building up our base of unit tests, we identified several common procedures for preparing the test environment and checking test results, and on their basis we built a library of reusable components for writing tests.
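The article does not show what these reusable blocks looked like, but the idea maps naturally onto ordinary Java reuse: shared preparation and shared checks move into helpers or a base test class. Below is a minimal illustration with entirely hypothetical names (NinjaClient again stands in for the real API, and the property names are invented).

```java
import static org.junit.Assert.assertEquals;

import com.pega.pegarules.pub.clipboard.ClipboardPage;

// Illustrative only: a base class gathering the shared preparation and
// verification steps so that individual tests stay short.
public abstract class AbstractIntegrationRuleTest {

    protected final NinjaClient ninja = NinjaClient.forCurrentDeveloper(); // placeholder

    /** Builds a case page pre-filled with the data most integration tests need. */
    protected ClipboardPage newStandardTestCase() {
        ClipboardPage testCase = ninja.createPage("TestCaseData", "MyOrg-MyApp-Work-MyCase"); // placeholder
        testCase.putString(".Customer.TaxID", "7701234567");
        return testCase;
    }

    /** A common check reused by many integration tests (property name is invented). */
    protected void assertNoIntegrationErrors(ClipboardPage testCase) {
        assertEquals("", testCase.getString(".IntegrationError"));
    }
}
```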
Confidence in the quality of the application
As of February 2017, the credit process has several parallel development branches: one for fixing defects found during trial operation (the Prod branch) and several branches for developing the functionality of future releases.
Each branch is covered by its own unit tests, which are developed together with the application. The tests are stored in the version control system and are likewise split into branches. As of February 2017, the Prod branch had 350 tests. Let me clarify that by the time of this translation, in July 2017, the latest release branch already contained about 1,650 tests, which take no more than 15 minutes to run.
Unit tests are run by the build server separately for each branch every 30 minutes. This means that the team receives a “health report” of the system every 30 minutes and can respond to emerging defects in a timely manner.
Ninja has allowed us to deliver a higher-quality product faster thanks to fewer Dev-QA cycles: most defects are now detected in the dev environment and fixed immediately.
There are also fewer defects in trial operation, thanks to more thorough testing by the QA team, which no longer has to deal with trivial defects that block test scenarios from passing.
To be continued
Our experience suggested that unit testing is not the only "weak" area of the Pega platform: compared with Java and other "traditional" software development platforms, the tools available to Pega developers have significant limitations.
Taking into account feedback from the project teams and the results of numerous brainstorming sessions, we defined the following features that will appear in Ninja:
- Rule Refactoring - refactoring of rules;
- Code analysis - advanced static code analysis;
- Code review - code audit;
- Release automation - automation of builds and deliveries;
- Continuous delivery pipeline - continuous delivery pipeline.
I should note that most of these features are already available in Ninja today. I am sure they will allow Ninja to become a comprehensive toolkit for implementing DevOps on Pega projects.
Our colleagues became interested in our solution, and I am glad that this case can serve as an example for other development teams: in Russia, you can build niche solutions for cool IT products and export them successfully.
For more information on Ninja, look
here .
If you want to join our team - welcome. In conclusion, I offer you a short survey: