
Analyze This, or On Software Quality

For almost all of my professional career as a developer, project manager, and development-process consultant, I remained captive to a very common and simple fallacy: if a program performs the required functions and there are no complaints about its stability and performance, then it is a "normal" program. I apologize for the somewhat exaggerated wording, but if you look closely, that is exactly how things work.

When it comes to defining the term "software quality," it does no harm to consult the standards. Several definitions from different standards are conveniently collected on one Wikipedia page. And what do they say? The focus is on the program's ability to meet the needs of the customer.

So the main emphasis was, and still is, on functionality. The customer formulates functional requirements and accepts the program against a feature list. Testing in most cases is reduced to functional testing, and numerous automated testing tools solve exactly this class of problems. Yes, sometimes requirements are formulated for resilience, for the maximum acceptable response time of the user interface, or for report generation. Yes, the particularly picky may even do load testing. But the program is accepted for trial operation (and sometimes goes straight into production) as long as there are no defects above a certain severity level, for example, no critical bugs. I have observed this acceptance process many times, in projects of different sizes and with different customers, in my own development departments and in outsourced projects, on different technologies and with varying degrees of criticality: everywhere, functionality is what matters.

Well, that is probably correct. But is it really enough to calmly say that we made (or had made to order) a "quality" product? What is a "quality application"? Can quality be measured, what factors does it depend on, and how can it be improved?
It is clear that end users, customers, or "the business," as they say now, almost never know how the software product they use is actually built: how well the code is written, how complex the program is, how many dependencies it has on external libraries, whether the information stored in it is safe, whether it comes with decent documentation, and much more. But the user is perfectly aware of something else: errors are fixed slowly, little is added from version to version, the interval between releases keeps growing, there are stability problems, and in the end the product starts losing out to its competitors.

Even a superficial study of the topic shows that today many more quality factors have to be taken into account: security, maintainability, efficiency, portability, reliability, and so on. Clearly, for different applications and different conditions of use, the critical quality factors will be very specific characteristics, or vulnerabilities, if you like. Is it possible to formalize what, say, a "maintainable" application is? Or a "secure" one? It turns out it is; this work has been carried out and, moreover, is ongoing. The ISO 25000 series defines a reference quality model consisting of eight quality characteristics: functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability.

Below you will find some useful resources on this topic:


How can this information be used in practice? Models, standards, recommendations, best practices: all of this is great, but "absorbing" it takes a lot of time.

I apologize in advance for the primitive PHP examples. I know that using $this in a static method results in an error. An analyzer needs to find all occurrences of code like this:

```php
<?php
class MyClass {
    public $message = "A message";

    static function printMessage() {
        echo $this->message; // VIOLATION: $this is not available in a static method
        return;
    }
}
?>
```
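One possible fix, sketched here, is to make the state static as well and access it via self:: (alternatively, the method could simply be made non-static):

```php
<?php
class MyClassFixed
{
    // The message is shared state, so it is declared static too
    public static $message = "A message";

    // Correct: a static method reaches static state via self::, not $this
    static function printMessage()
    {
        echo self::$message;
    }
}

MyClassFixed::printMessage(); // prints "A message"
```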

Or, for example, it is not recommended to use exit() or die(), because later it will be difficult to understand the true cause of the error:

```php
<?php
$filename = '/path/to/datafile';
$f = fopen($filename, 'r')
    or die("Cannot open file ($filename)"); // VIOLATION
// ... operations on file ...
?>
```
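A sketch of an alternative that keeps the failure diagnosable: throw an exception instead of calling die(), so the caller (or a global handler) can log the real cause and react. The function name here is hypothetical, chosen for the example:

```php
<?php
// Hypothetical helper: open a file or throw, instead of die()-ing.
function openDataFile(string $filename)
{
    $f = @fopen($filename, 'r');
    if ($f === false) {
        // The caller decides what to do; the cause is preserved in the message
        throw new RuntimeException("Cannot open file ($filename)");
    }
    return $f;
}

try {
    $f = openDataFile('/path/to/datafile');
    // ... operations on file ...
    fclose($f);
} catch (RuntimeException $e) {
    error_log($e->getMessage()); // logged and handled, not a silent termination
}
```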

Or you need to find out how often programmers copy blocks of code and, where possible, work to eliminate this flaw in order to improve the "maintainability" of the program.
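What eliminating such duplication looks like can be sketched in a couple of lines (the names and the formatting rule here are made up for the example): two copy-pasted formatting blocks collapse into a single helper, so the logic lives in exactly one place.

```php
<?php
// Before the refactoring, this rounding-and-formatting block was
// copy-pasted in several places. After: one helper, called everywhere.
function formatPrice(float $amount, string $currency): string
{
    $rounded = round($amount, 2);
    return number_format($rounded, 2, '.', ' ') . ' ' . $currency;
}

echo formatPrice(1234.5, 'USD') . "\n"; // 1 234.50 USD
echo formatPrice(99.999, 'EUR') . "\n"; // 100.00 EUR
```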

And here many will say that these tasks have long been solved, and solved successfully, by source code analyzers. Strictly speaking, no software tool is even necessary: there is a good and useful engineering practice known as code review. However, in real life a number of difficulties must be taken into account:


How wonderful it would be if code review were free of these shortcomings. You want to "shake up" all the megabytes of accumulated code in different programming languages with the confidence and knowledge of an expert? Please. You want to check quality and track improvements on every check-in? A great idea. You want to see proper reports and graphs on the analysis results, not just long lists of found defects? Well, who doesn't? And to estimate, at least approximately, how hard the discovered deficiencies and vulnerabilities will be to fix. And to automatically create tasks for programmers in Jira, without leaving the tool. And it would be good to have even more...

It turns out there really is a definite need for source code analyzers. Why, then, has this class of quality management tools remained practically unclaimed? I see several reasons. First, it is considered a tool for programmers only; Microsoft Visual Studio, for example, includes such an analyzer in its toolset. The results of code analysis are so technical that they are poorly understood by those who are interested in improving product quality but are not ready to go into the details. Second, there is still a very narrow understanding of what the "quality" of a software product means. Third, there is a very specific conflict of interest. The programmer may not be eager to learn the whole truth about his code. The development manager is already in constant time trouble, planning release dates and deciding what goes into future versions; he already knows about technical debt, and now the analyzer will reveal a ton of vulnerabilities and flaws on top of it. And that is the best case. Testers are busy and motivated to find errors in functionality, and their managers do not see the overall picture of how well or poorly the program is actually made.

But still. Beauty must save the world, must it not? Clean and correct code is beauty too! And we found what we were looking for: a modern cloud static code analyzer, Kiuwan. At least out of curiosity, take a look at their site. Checking your programs will not take more than a few minutes. The Spaniards have made a cool product!

A whole swarm of technologies and programming languages is supported:


Objective-C, Java, JSP, JavaScript, PHP, C/C++, ABAP IV, Cobol, JCL, C#, PL/SQL, Transact-SQL, SQL, SQL Forms, RPG, VB6, VB.Net, Android, Hibernate, Natural, Informix SQL

Alas, Pascal/Delphi/RAD is not supported, and neither is ReactJS. Metrics, indicators, reports, charts: everything is at the most modern level. The quality model applied can be customized or extended by adding your own rules for your own vulnerabilities. That will be the subject of a separate article in our blog.

It can integrate with other code analyzers: for example, it can "digest" the results of Ruby code analysis from another analyzer, Brakeman. We will try to write an article about that in the near future.

It integrates with Jira and SBM, and supports different version control systems.

What you may not like:
1) If your program has "a lot of letters" (a large codebase), you will be asked to pay.
2) Yes, this is a cloud service. There is a program for local code analysis, but the results will still be sent to your personal account in the cloud.
3) It is in English.

You can start from here.

Source: https://habr.com/ru/post/275037/
