
Our translation of the article: Twenty-five goals of the software industry for 2015–2019

With the permission of the author: Capers Jones

Capers Jones, VP and CTO Namcook Analytics LLC
Version 5.0 February 28, 2015

Abstract
The progress of the software industry resembles a drunkard's walk, in which forward and backward movements occur at the same time. For example, agile development is a step forward for small projects, while pair programming is a step backward that costs far more than the value it returns.
One of the reasons for the lack of steady progress is the lack of an effective measurement system. The two oldest and most common metrics, cost per defect and lines of code, do not show progress and in fact distort reality. Cost per defect penalizes quality, and lines of code penalizes high-level programming languages.

This short article presents 25 realistic goals that could be achieved within 5 years, taking 2015 as the starting point. Some goals are expressed in terms of the function point metric (Function Points, FP), which is the only widely used metric suitable for capturing economic performance and quality without serious errors.

Another useful metric is defect removal efficiency (DRE), which shows the percentage of errors found and removed during development, compared with the number of errors reported by users during the first 90 days of use. If the developers found 95 errors and users reported 5 errors within three months, the DRE is 95%. Currently the average DRE in the United States is only about 90%, but it is technically possible for every software project to reach 95%, and for many to reach 99%.
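As a minimal sketch (in Python, not part of the original article), the DRE arithmetic described above is simply the share of all defects that were caught before the users saw them:

```python
def defect_removal_efficiency(found_in_development: int, found_by_users: int) -> float:
    """DRE as a percentage: development defects divided by all defects,
    where "all" = development defects + user-reported defects (first 90 days)."""
    total = found_in_development + found_by_users
    return 100.0 * found_in_development / total

# The article's example: 95 defects found in development, 5 reported by users.
print(defect_removal_efficiency(95, 5))  # 95.0
```

The same function reproduces the other figures in the article: 99 of 100 defects found before release gives the 99% DRE that only a few leading companies achieve.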
Web: www.Namcook.com
Blog: Namcookanalytics.com
Email: Capers.Jones3@gmail.com
© Capers Jones, 2014 - 2015. All rights reserved.

Twenty-five goals of the software industry for 2015–2019


Introduction

The following is a selection of 25 goals or targets for software development, drawn up by Namcook Analytics LLC for implementation over the five years from 2015 to 2019. Some of these goals are already achievable in 2015, though not all companies have managed to achieve them. A small number of leading companies have achieved some of the goals noted.
Unfortunately, fewer than 5% of US and multinational companies have achieved any of the goals presented, and fewer than 1% have achieved most of them. None of the author's clients have managed to achieve all of the goals.

The author proposes that every major software development company and government organization create its own selection of targets for a five-year period, using the proposed list as a starting point.

1. Increase defect removal efficiency (DRE) from <90.0% to >99.5%. This goal is the most important for the industry. It cannot be achieved by testing alone; it requires pre-test inspections and static analysis. DRE is measured by comparing all defects found during development with those reported by consumers during the first 90 days. The current average DRE in the United States is approximately 90%; the trend is toward approximately 92%. Only a few leading companies, using a full suite of defect prevention, pre-test defect removal, and formal inspections with mathematically designed test cases and certified test specialists, achieve 99% DRE.

2. Reduce the software defect potential from >4.0 per function point to <2.0 per function point. The defect potential is the sum of errors found in requirements, architecture, design, code, user documents, and bad fixes. Requirements and design errors often outnumber code errors. Currently the defect potential can reach 6.0 per function point for large systems in the 10,000-function-point range. Achieving this goal requires effective defect prevention, which includes, for example, joint application design (JAD), quality function deployment (QFD), certified reusable components, and so on. It also requires a complete software quality control program, plus better training on the common sources of defects found in requirements, design, and source code. The most effective way to reduce defect potentials is to move away from custom design and manual coding, which are intrinsically error-prone. Construction from certified reusable components can significantly reduce software defect potentials.
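Goals 1 and 2 combine arithmetically: the defects that reach users are the defect potential times the share that removal misses. A small illustrative sketch (Python, using only the figures quoted in the article):

```python
def delivered_defects(function_points: float,
                      defect_potential_per_fp: float,
                      dre_percent: float) -> float:
    """Defects escaping to users = (FP * potential per FP) * (1 - DRE)."""
    total_potential = function_points * defect_potential_per_fp
    return total_potential * (1.0 - dre_percent / 100.0)

# A 10,000-function-point system at today's worst-case potential and average DRE:
print(delivered_defects(10_000, 6.0, 90.0))   # ~6000 delivered defects
# The same system at the targets of goals 1 and 2:
print(delivered_defects(10_000, 2.0, 99.5))   # ~100 delivered defects
```

The two targets together would cut delivered defects for such a system by roughly a factor of 60.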

3. Reduce the cost of quality (COQ) from >45.0% of development cost to <15.0% of development cost. Finding and fixing bugs has been the most expensive activity in software for more than 50 years. Achieving this goal requires a synergistic combination of defect prevention, pre-test inspections, and static analysis. The likely outcome would be an increase in defect removal efficiency from the current average of below 90% to 99%. At the same time, defect potentials can be reduced from the current average of 4.0 per function point to less than 2.0 per function point. This combination will have a strong synergistic effect on maintenance and support costs. Incidentally, reducing the cost of quality will also reduce technical debt. However, as of the end of 2014, technical debt is not a standard metric and varies so widely from place to place that it is difficult to calculate.

4. Reduce average cyclomatic complexity from >25.0 to <10.0. Achieving this goal requires careful analysis of software structures and, of course, measurement of cyclomatic complexity for all modules. Since cyclomatic complexity tools are widely available and some are open source, every application should use them without exception.
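For readers unfamiliar with the metric, cyclomatic complexity is computed from a module's control-flow graph as M = E − N + 2P (edges minus nodes plus twice the number of connected components). A minimal sketch, not from the original article:

```python
def cyclomatic_complexity(num_edges: int, num_nodes: int,
                          num_components: int = 1) -> int:
    """McCabe's metric for a control-flow graph: M = E - N + 2P."""
    return num_edges - num_nodes + 2 * num_components

# A straight-line module (entry -> statement -> exit): 3 nodes, 2 edges.
print(cyclomatic_complexity(2, 3))  # 1 -- no decisions
# One if/else (entry, then, else, join): 4 nodes, 4 edges.
print(cyclomatic_complexity(4, 4))  # 2 -- one decision
```

Each decision point adds roughly one to M, which is why the goal of keeping modules below 10 effectively caps the number of independent paths a tester must cover.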

5. Increase test coverage of risks, paths, and requirements from <75.0% to >98.5%. Achieving this goal requires mathematical methods of test case design, such as design of experiments. It also requires measurement of test coverage, plus predictive tools that can estimate the number of test cases from function points, code volume, and cyclomatic complexity. The author's Software Risk Master (SRM) tool predicts test cases for 18 kinds of testing and can therefore also predict likely test coverage.

6. Eliminate error-prone modules (EPM) in large systems. Defects are not distributed randomly. Achieving this goal requires careful measurement of code defects during development and after release, using tools that can trace errors to specific modules. Some companies, such as IBM, have been doing this for many years. Error-prone modules typically make up less than 5% of all modules but account for more than 50% of all errors. Prevention is the best solution. Existing error-prone modules in legacy applications may need surgical removal and replacement. In any case, static analysis should be applied to all identified EPM. In one study, the main application consisted of 425 modules; 57% of all errors were found in only 31 modules, all produced by a single department, while over 300 modules were defect-free. EPM are easy to prevent but difficult to fix once they have been created; usually surgical removal is unavoidable. EPM are the most expensive artifacts in the history of programming. Like smallpox, however, they can be completely eliminated through "vaccination" in the form of prevention and effective control methods. Error-prone modules often reach defect potentials of 3.0 per function point combined with defect removal efficiency below 80% before release. They also tend to have high cyclomatic complexity, often above 50. Removing their defects through testing is difficult precisely because of those high complexity levels.

7. Eliminate security flaws in all software applications. As cybercrime spreads, the need for better security grows. Achieving this goal requires security inspections, security testing, and automated tools for detecting security flaws. Ethical hackers may also be needed for systems that contain valuable financial or confidential data.

8. Reduce the odds of cyber attacks from >10.0% to <0.1%. Achieving this goal requires a synergistic combination of advanced firewalls, continuous anti-virus monitoring with constant updates of virus signatures, and work on raising the immunity of the software itself through changes in basic architecture and permission strategies. It may also be necessary to revise hardware and software architectures to raise the security levels of both.

9. Reduce bad-fix injection from >7.0% to <1.0%. Few people know that about 7% of attempts to fix software bugs introduce new bugs into the patched areas; these are commonly called "bad fixes". When cyclomatic complexity reaches 50, the bad-fix injection rate can soar to 25% or more. Reducing bad-fix injection requires controlling cyclomatic complexity, running static analysis on all defect repairs, testing all fixes, and inspecting all significant fixes before integrating them.
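A toy model (my illustration, not from the article) shows why the injection rate matters: if each repair injects new defects at an average rate r, the repairs of those injected defects inject still more, and the total repair effort converges to the geometric series D / (1 − r):

```python
def total_fix_attempts(initial_defects: float, bad_fix_rate: float) -> float:
    """Expected total repairs when each repair independently injects
    `bad_fix_rate` new defects on average: D + D*r + D*r^2 + ... = D/(1-r)."""
    return initial_defects / (1.0 - bad_fix_rate)

# At today's ~7% bad-fix rate, 1000 defects take about 1075 repairs to clear:
print(round(total_fix_attempts(1000, 0.07)))  # 1075
# At the goal of <1%, the overhead almost disappears:
print(round(total_fix_attempts(1000, 0.01)))  # 1010
```

At the 25% rate seen in very complex modules the same 1000 defects would need about 1333 repairs, a third more work than the defects themselves.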

10. Reduce requirements creep from >1.5% per calendar month to <0.25% per calendar month. Requirements creep has been an endemic problem of the software industry for over 50 years. While prototypes, embedded users, and joint application design (JAD) all help, it is technically possible to use automated requirements models to improve the completeness of requirements. The best method would be pattern matching against the functionality of applications similar to the one under development. A useful precursor technology would be a taxonomy of software application functionality, which does not actually exist as of 2015 but could be developed within a few months of focused research.

11. Reduce the risk of failure or cancellation of large projects of 10,000 function points from >35.0% to <5.0%. Cancellation of large systems because of poor quality, poor change control, and cost overruns, which in essence turns the return on investment (ROI) from positive to negative, is an endemic problem of the software industry that could well be eliminated. A synergistic combination of effective defect prevention, pre-test inspections, and static analysis can come close to eliminating this common problem. Parametric estimation tools that can predict risks, costs, and schedules more accurately than manual estimates are also recommended.

12. Reduce the odds of schedule slippage from >50.0% to <5.0%. Since the root causes of schedule delays are poor quality and excessive requirements creep, solving several of the other problems on this list will also solve the schedule-slippage problem. Most projects seem to be on time until testing begins, when huge numbers of errors begin to stretch the test schedule out indefinitely. Defect prevention combined with pre-test static analysis can reduce or eliminate schedule slippage. It is a treatable condition and could be eliminated within five years.

13. Reduce cost overruns from >40.0% to <3.0%. Software cost overruns and schedule delays have similar root causes: poor quality control and poor change control combined with excessive requirements creep. Better defect prevention combined with pre-test defect removal can help cure both of these endemic software problems. The use of accurate parametric estimation tools, rather than optimistic manual estimates, is also useful in reducing cost overruns.

14. Reduce litigation on outsourced contracts from >5.0% to <1.0%. The author has been an expert witness in 12 breach-of-contract lawsuits. All of these cases seemed to have similar root causes: poor quality control, poor change control, and very poor status tracking. A synergistic combination of early sizing and risk analysis before contracts are signed, plus effective defect prevention and pre-test defect removal, can reduce breach-of-contract litigation in the software industry.

15. Reduce maintenance and repair costs by >75.0% compared with 2015 values. Since about 2000, the number of US maintenance programmers has exceeded the number of software developers. IBM found that effective defect prevention and pre-test defect removal reduced delivered defects to such low levels that maintenance costs dropped by at least 45% and sometimes by as much as 75%. Effective development and effective quality control have a greater impact on maintenance costs than on development costs. It is technically possible to reduce maintenance of new applications by more than 60% compared with current averages. Renovating legacy applications by removing error-prone modules and lightly restructuring the source code can also reduce the maintenance costs of each legacy application by about 25%. Technical debt would also be reduced, but technical debt is not a standard metric and varies so widely that it is difficult to calculate. Static analysis tools should be run regularly against all active legacy applications.

16. Increase the volume of certified reusable materials from <15.0% to >75.0%. Custom design and manual coding are intrinsically error-prone and inefficient no matter which methodology is used. The best way to convert programming from a craft into a modern profession would be to construct applications from libraries of certified reusable materials; that is, reusable requirements, designs, code, and test materials. All materials intended for reuse should be inspected, with code segments tested and run through static analysis. Indeed, reusable code should be accompanied by reusable test materials and related information, such as cyclomatic complexity and user documentation.

17. Raise average development productivity from <8.0 function points per staff month to >16.0 function points per staff month. Productivity rates vary with size, complexity, team experience, methods, and several other factors. However, when all projects are considered in aggregate, average productivity is below 8.0 function points per staff month. Doubling this rate requires combining better quality control with much larger volumes of certified reusable materials; probably 50% or more.

18. Reduce work hours per function point from >16.5 to <8.25. Goal 17 and this goal are essentially the same but use different metrics. However, there is one important difference: work hours per month are not the same in every country. For example, a project in the Netherlands with 116 work hours per month involves the same amount of work per function point as a project in China with 186 work hours per month, but the Chinese project will need fewer calendar months than the Dutch project.
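The relationship between goals 17 and 18 can be sketched in a few lines (my illustration, using the article's figures of 16.5 hours per function point and the Dutch and Chinese month lengths): effort per function point is constant, but function points per staff month depends on how long a staff month is.

```python
def fp_per_staff_month(work_hours_per_month: float,
                       work_hours_per_fp: float) -> float:
    """Function points produced per staff month at a fixed effort per FP."""
    return work_hours_per_month / work_hours_per_fp

HOURS_PER_FP = 16.5  # today's average, per goal 18

# Identical effort per function point, different month lengths:
print(round(fp_per_staff_month(116, HOURS_PER_FP), 1))  # Netherlands: 7.0 FP/month
print(round(fp_per_staff_month(186, HOURS_PER_FP), 1))  # China: 11.3 FP/month
```

This is why work hours per function point is the safer metric for international comparisons: the Chinese team appears over 60% more "productive" per month while expending exactly the same effort per function point.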

19. Raise peak productivity to >100 function points per staff month for projects of 1,000 function points. Currently, as of early 2015, productivity rates for 1,000-function-point projects run from about 5 to 12 function points per staff month. It is effectively impossible to reach 100 function points per staff month using custom design and manual coding. Only construction from libraries of standard reusable components can make such a high productivity rate a reality. Raising the volume of reusable materials is feasible, however. The precursors include a good taxonomy of software functionality, catalogs of reusable materials, and a certification process for adding new reusable materials. A recall method is also needed in case reusable materials contain errors or need to be changed.

20. Reduce average software development schedules by >35% compared with 2015 averages. The most common complaint of software clients, and of corporate executives at the CIO and CFO level, is that software projects take too long. Surprisingly, shortening them is not difficult. A synergistic combination of better defect prevention, pre-test static analysis and inspections, and larger volumes of certified reusable materials can shorten schedules significantly. Today, raising the size of a software application in function points to the power 0.4 gives a useful approximation of schedule duration in calendar months, and current technologies can reduce the exponent to 0.37. Raising 1,000 function points to the power 0.4 indicates a schedule of 15.8 calendar months; raising it to the power 0.37 indicates a schedule of only 12.9 calendar months. The shorter schedule is made possible by effective defect prevention, better pre-test inspections, and static analysis. Reusable software components could reduce the exponent to 0.3, or 7.9 calendar months. Schedule slippage grows rapidly today, but it is a treatable condition and can be eliminated.
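The power-law approximation in goal 20 is easy to reproduce (a sketch in Python; the exponents 0.4, 0.37, and 0.3 are the ones quoted in the article):

```python
def schedule_months(function_points: float, exponent: float) -> float:
    """Approximate schedule in calendar months as FP raised to a power."""
    return function_points ** exponent

# A 1,000-function-point application under the three scenarios in goal 20:
for exp, scenario in [(0.40, "today's average"),
                      (0.37, "current best technologies"),
                      (0.30, "large-scale certified reuse")]:
    print(f"exponent {exp}: {schedule_months(1000, exp):.1f} months ({scenario})")
```

The printed values, 15.8, 12.9, and 7.9 calendar months, match the figures in the text; note how sensitive the schedule is to small changes in the exponent at this size.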

21. Increase maintenance assignment scope from <1,500 function points to >5,000 function points. The metric "maintenance assignment scope" refers to the number of function points that one maintenance programmer can keep up and running for one calendar year. The range runs from <300 function points for buggy, complex software to >5,000 function points for modern software released with effective quality control. The current average is approximately 1,500 function points. This is a key metric for predicting maintenance staffing, both for individual projects and for entire corporate portfolios. Achieving this goal requires effective defect prevention, effective pre-test defect removal, and effective testing using modern, mathematically based test case design methods. It also requires low levels of cyclomatic complexity. Static analysis should be run on all applications during development, and on legacy applications as well.

22. Replace today's static and rigid requirements, architecture, and design methods with a suite of animated design tools combined with pattern matching. When they run, software applications are among the fastest objects ever created by the human species, and during development they grow and change every day. No design method that is static and consists of text, such as screens full of text blocks, or of primitive and limited diagrams, such as flowcharts or Unified Modeling Language (UML) diagrams, is well matched to this reality. The technology for a new method of animated design, with full-color, three-dimensional graphics, already exists as of 2014. It remains only to develop a symbol set and begin animating the design process.

23. Develop an interactive learning tool for software engineering based on massively interactive game technology. New programming concepts appear almost daily, and new programming languages are released weekly. Software resembles medicine, law, and the other forms of engineering in requiring continuing education, but live instruction is expensive and inconvenient. What is needed is an online learning tool with integrated curriculum planning, and it is possible to build such a tool today. By licensing a game engine, one could create a virtual software university in which avatars attend classes and interact with one another.

24. Develop a suite of dynamic, animated project planning and estimation tools that show the growth of software applications. Today the outputs of all software estimation tools are static tables supported by a few graphs. But software applications grow during development at more than 1% per calendar month and continue to grow after release at more than 8% per calendar month. Clearly, software planning and estimation tools need dynamic modeling capabilities that can show the growth of features over time. They should also show the arrival (and discovery) of bugs or defects from requirements, design, architecture, code, and other defect origins. The ultimate goal, which is technically achievable today, would be a graphical model showing the growth of an application from the first day of requirements through its 25th year of use.

25. Introduce licensing and board certification for software engineers and specialists. Every reader of this article is urged to read Paul Starr's book "The Social Transformation of American Medicine," which was published in 1982 and won a Pulitzer Prize. Starr's book shows how the American Medical Association was able to improve academic training, reduce malpractice, and achieve higher professional status than any other technical field. Medical licenses and board certification of specialists were key factors in the progress of medicine. It took medicine more than 75 years to reach its current professional status, but with Starr's book as a guide, software could do the same within 10 years. This lies outside the 5-year window of this article, but the process should start in 2015.
Note that the function point metric used in this article refers to function points as defined by the International Function Point Users Group (IFPUG). Other definitions of function points, including among others COSMIC, FISMA, NESMA, and unadjusted function points, may also be used, but they will produce different quantitative results.

The technologies available in 2015 are already sufficient to achieve each of these 25 goals, even though few companies have achieved them. Technologies associated with achieving these 25 goals include, among others:
Useful technologies for implementing programming targets


We stand at the threshold of transforming programming from a craft into a modern profession. We also stand at the threshold of switching from partial and inaccurate measurement of software results to high precision, both for estimates made before projects start and for measurements taken after they finish.

The 25 goals above are positive targets that companies and government organizations should strive to achieve. However, software engineering also has a number of harmful practices that should be avoided or abandoned. Some of them are so harmful that they can be regarded as professional malpractice. Below are six dangerous software practices, some of which have been in continuous use for over 50 years without their harmfulness being fully recognized:

Six Hazardous Software Design Techniques to Avoid

1. Stop trying to measure the economics of quality with "cost per defect". This metric achieves its best (lowest) values for the buggiest software, thereby penalizing quality. It also understates the true economic value of software by several hundred percent. The metric violates standard economic assumptions, and its use for quality economics can be regarded as professional malpractice. The best way to measure the cost of quality is "defect removal cost per function point". Cost per defect ignores the fixed costs of writing and running test cases, and it is a well-known principle of manufacturing economics that when a process has a high proportion of fixed costs, the cost per unit rises as the number of units falls. The urban legend that it costs 100 times more to fix a bug after release than before is groundless; the costs are nearly flat if calculated properly.
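The fixed-cost effect described above can be made concrete with a toy calculation (the dollar figures are hypothetical, invented purely for illustration): the cost of writing and running the test cases is the same whether they catch 100 bugs or 10, so the better build looks worse per defect.

```python
def cost_per_defect(fixed_test_cost: float,
                    repair_cost_per_defect: float,
                    defects_found: int) -> float:
    """Apparent cost per defect when fixed test-preparation/execution
    costs are spread over however many defects the tests happen to find."""
    total_cost = fixed_test_cost + repair_cost_per_defect * defects_found
    return total_cost / defects_found

# Hypothetical: $10,000 to write and run the tests, $100 to repair each bug.
print(cost_per_defect(10_000, 100, 100))  # 200.0  -- buggy build looks "cheap"
print(cost_per_defect(10_000, 100, 10))   # 1100.0 -- high-quality build looks "expensive"
```

The repair cost per defect is identical ($100) in both cases; only the denominator changed. Defect removal cost per function point avoids the distortion because the denominator (application size) does not shrink as quality improves.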

2. Stop trying to measure software productivity with the "lines of code" (LOC) metric. This metric penalizes high-level programming languages. It also fails to capture work outside of coding, such as requirements and design. In economic analyses involving multiple programming languages, this metric can be regarded as professional malpractice. The best measures of software productivity are work hours per function point and function points per staff month. Both can be measured at the level of individual activities as well as for whole projects, and both can also measure non-code work such as requirements and design. The LOC metric has limited usefulness for the code itself, but it is hazardous for larger economic studies of whole projects, since it ignores the costs of requirements, design, and documentation, which usually exceed the cost of the code itself.
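The LOC distortion works exactly like the cost-per-defect distortion: the non-code costs are fixed, while a better language shrinks the denominator. A toy comparison (all dollar and LOC figures are hypothetical, chosen only to illustrate the mechanism):

```python
def cost_per_loc(noncode_cost: float, coding_cost: float,
                 lines_of_code: int) -> float:
    """Apparent cost per line when fixed non-code costs (requirements,
    design, documentation) are spread over the lines of code produced."""
    return (noncode_cost + coding_cost) / lines_of_code

# Hypothetical: the same application in a low-level and a high-level language.
# Requirements/design/docs cost $50,000 either way; only coding shrinks.
low_level = cost_per_loc(50_000, 100_000, 30_000)   # $5 per LOC, $150k total
high_level = cost_per_loc(50_000, 25_000, 5_000)    # $15 per LOC, $75k total
print(low_level, high_level)
```

The high-level version costs half as much in total yet looks three times worse per line of code, which is precisely why the metric penalizes high-level languages in cross-language economic studies.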

3. Stop measuring only "design, code, and unit test" (DCUT). Measure entire projects, including management, requirements, design, coding, integration, documentation, all forms of testing, and so on. DCUT measurements cover less than 30% of the total cost of a software development project. Measuring only part of a project produces professional confusion.

4. Be cautious with "technical debt". It is a useful metaphor, but not a complete metric for understanding the economics of quality. Technical debt omits the heavy costs of cancelled projects, the consequential damages to clients, and the costs of litigation and possible damages awarded to plaintiffs. Technical debt captures only about 17% of the true cost of poor quality. Cost of quality (COQ) is the better metric for quality economics.

5. Avoid "pair programming". Pair programming is expensive and less effective than the combination of inspections and static analysis. Read the literature on pair programming, and in particular the accounts of programmers who quit jobs specifically to escape it. The literature endorsing pair programming also illustrates a general weakness of software research: it fails to compare pair programming against methods with proven quality results, such as inspections and static analysis. Only pairs and individual programmers are compared, with no discussion of tools, methods, inspections, and so forth.

6. Stop relying on testing alone, without effective pre-test defect prevention and removal methods such as inspections and static analysis. Testing by itself, without pre-test defect removal, is expensive and rarely exceeds 85% in defect removal efficiency. A synergistic combination of defect prevention and pre-test removal methods, such as static analysis and inspections, can raise DRE above 99% while simultaneously lowering costs and shortening schedules.

Software engineering has always differed from older and more mature forms of engineering. One of the most significant differences is that software engineering has a very poor track record in measurement and runs on subjective opinions instead of solid empirical data.

This short article has offered a selection of 25 quantified goals whose achievement would bring significant advances in both software quality and software productivity. The most important message, however, is that poor software quality is the critical factor that must be improved in order to improve productivity, schedules, costs, and economics in turn.

References

Abran, Alain; Software Estimating Models; Wiley-IEEE Computer Society; 2015.
Abran, Alain; Software Metrics and Software Metrology; Wiley-IEEE Computer Society; 2010.
Abran, Alain; Software Maintenance Management: Evolution and Continuous Improvement; Wiley-IEEE Computer Society; 2008.
Beck, Kent; Test-Driven Development; Addison Wesley, Boston, MA; 2002; ISBN-10 0321146530; 240 pages.
Black, Rex; Managing the Testing Process; Wiley; 2009; ISBN-10 0470404159; 672 pages.
Chess, Brian and West, Jacob; Secure Programming with Static Analysis; Addison Wesley, Boston, MA; 2007; ISBN-13 978-0321424778; 624 pages.
Cohen, Lou; Quality Function Deployment - How to Make QFD Work for You; Prentice Hall, Upper Saddle River, NJ; 1995; ISBN 10: 0201633302; 368 pages.
Crosby, Philip B.; Quality is Free; New American Library, Mentor Books, New York, NY; 1979; 270 pages.
Everett, Gerald D. and McLeod, Raymond; Software Testing; John Wiley & Sons, Hoboken, NJ; 2007; ISBN 978-0-471-79371-7; 261 pages.
Gack, Gary; Managing the Black Hole: The Executives Guide to Software Project Risk; Business Expert Publishing, Thomson, GA; 2010; ISBN10: 1-935602-01-9.
Gack, Gary; Applying Six Sigma to Software Implementation Projects; software.isixsigma.com/library/content/c040915b.asp .
Gilb, Tom and Graham, Dorothy; Software Inspections; Addison Wesley, Reading, MA; 1993; ISBN 10: 0201631814.
Hallowell, David L .; Six Sigma Software Metrics, Part 1 .; software.isixsigma.com/library/content/03910a.asp .
IFPUG (52 authors); The IFPUG Guide to IT and Software Measurement; Auerbach publishers; 2012
International Organization for Standards; ISO 9000 / ISO 14000; www.iso.org/iso/en/iso9000-14000/index.html .
Jacobsen, Ivar; Ng Pan-Wei; McMahon, Paul; Spence, Ian; Lidman, Svente; The Essence of Software Engineering: Applying the SEMAT Kernel; Addison Wesley, 2013.
Jones, Capers; The Technical and Social History of Software Engineering; Addison Wesley, 20124.
Jones, Capers; “A Short History of Lines of Code Metrics”; Namcook Analytics LLC, Narragansett, RI 2014.Jones, Capers; “A Short History of Defect Metric”; Namcook Analytics LLC, Narragansett RI 2014.
Jones, Capers; The Technical and Social History of Software Engineering; Addison Wesley Longman, Boston, Boston, MA; 2014
Jones, Capers and Bonsignour, Olivier; The Economics of Software Quality; Addison Wesley, Boston, MA; 2011; ISBN 978-0-13-258220-9; 587 pages.
Jones, Capers; Software Engineering Best Practices; McGraw Hill, New York; 2010; ISBN 978-0-07-162161-8; 660 pages.
Jones, Capers; “Measuring Programming Quality and Productivity”; IBM Systems Journal; Vol.17, No. one; 1978; pp. 39-63.
Jones, Capers; Programming Productivity - Issues for the Eighties; IEEE Computer Society Press, Los Alamitos, CA; First edition 1981; Second edition 1986; ISBN 0-8186-0681-9; IEEE Computer Society Catalog 681; 489 pages.
Jones, Capers; “A Ten-Year Retrospective of the ITT Programming Technology Center”; Software Productivity Research, Burlington, MA; 1988.
Jones, Capers; Applied Software Measurement; McGraw Hill, 3rd edition 2008; ISBN 978 = 0- 07-150244-3; 662 pages.
Jones, Capers; Critical Problems in Software Measurement; Information Systems Management Group, 1993; ISBN 1-56909-000-9; 195 pages.
Jones, Capers; Software Productivity and Quality Today - The Worldwide Perspective; Information Systems Management Group, 1993; ISBN -156909-001-7; 200 pages.
Jones, Capers; Assessment and Control of Software Risks; Prentice Hall, 1994; ISBN 0-13-741406-4; 711 pages.
Jones, Capers; New Directions in Software Management; Information Systems Management Group; ISBN 1-56909-009-2; 150 pages.
Jones, Capers; Patterns of Software System Failure and Success; International Thomson Computer Press, Boston, MA; December 1995; 250 pages; ISBN 1-850-32804-8; 292 pages.
Jones, Capers; Software Quality - Analysis and Guidelines for Success; International Thomson Computer Press, Boston, MA; ISBN 1-85032-876-6; 1997; 492 pages.
Jones, Capers; Estimating Software Costs; 2nd edition; McGraw Hill, New York; 2007; 700 pages..Jones, Capers; “The Economics of Object-Oriented Software”; Namcook Analytics; Narragansett, RI; 2014.
Jones, Capers; “Software Project Management Practices: Failure Versus Success”; Crosstalk, October 2004.
Jones, Capers; “Software Estimating Methods for Large Projects”; Crosstalk, April 2005.
Kan, Stephen H .; Metrics and Models in Software Quality Engineering, 2nd edition; Addison
Wesley Longman, Boston, MA; ISBN 0-201-72915-6; 2003; 528 pages.
Land, Susan K; Smith, Douglas B; Walz, John Z; Support for the Sigma Software Process Definition: Using IEEE Software Engineering Standards; WileyBlackwell; 2008; ISBN 10: 0470170808; 312 pages.
Mosley, Daniel J .; The Handbook of MIS Application Software Testing; Yourdon Press, Prentice Hall; Englewood Cliffs, NJ; 1993; ISBN 0-13-907007-9; 354 pages.
Myers, Glenford; The Art of Software Testing; John Wiley & Sons, New York; 1979; ISBN 0 471-04328-1; 177 pages.
Nandyal; Raghav; Making Sense of Software Quality Assurance; Tata McGraw Hill Publishing, New Delhi, India; 2007; ISBN 0-07-063378-9; 350 pages.
Radice, Ronald A .; High Quality Low Cost Software Inspections; Paradoxicon Publishingl Andover, MA; ISBN 0-9645913-1-6; 2002; 479 pages.
Royce, Walker E .; Software Project Management: A Unified Framework; Addison Wesley Longman, Reading, MA; 1998; ISBN 0-201-30958-0.
Starr, Paul; The Social Transformation of American Medicine; (Pulitzer Prize in 1982); Basic Books, 1982;
Strassman, Paul; The Squandered Computer; The Information Economics Press, New Canaan, CT; 1997; 426 pages.
Wiegers, Karl E .; Peer Reviews in Software - A Practical Guide; Addison Wesley Longman, Boston, MA; ISBN 0-201-73485-0; 2002; 232 pages.

Source: https://habr.com/ru/post/265215/

