
The human factor in software development: psychological and mathematical aspects

Software product development is a process in which the human factor plays a very important role. This article discusses various psychological and mathematical laws and principles. Some of them are well known to you, some less so, and some will help explain your own behavior or the behavior of your employees and colleagues.

Software development is a non-linear process

Software development is a non-linear process. If five developers are allocated to a project and must deliver the product in 5 months (25 person-months), this does not mean that 25 developers could do the same work in 1 month (the same 25 person-months).



In his book "The Mythical Man-Month", Brooks offers a memorable illustration: nine women cannot give birth to a child in one month. By this he hints that there is a limit to how far development timelines can be compressed. Steve McConnell puts this threshold at 25% of the original estimate.

And yes, most likely ten developers will not be able to do the original amount of work in 2.5 months either, since the joint work of ten developers on a project of this size will most likely run into coordination deadlocks and other communication problems.
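Brooks attributes much of this non-linearity to communication overhead: in a team of n people there are n * (n - 1) / 2 pairwise communication channels, so coordination costs grow quadratically with team size. A minimal Python sketch, purely as an illustration:

def communication_channels(team_size: int) -> int:
    # Pairwise communication channels in a team of n people: n * (n - 1) / 2
    return team_size * (team_size - 1) // 2

for team in (5, 10, 25):
    print(f"{team} developers -> {communication_channels(team)} communication channels")

# 5 developers -> 10 communication channels
# 10 developers -> 45 communication channels
# 25 developers -> 300 communication channels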

Project Estimation and the Cost of Error

To illustrate the "cost of a mistake", the cone of uncertainty is often used: a graph whose horizontal axis shows time and whose vertical axis shows the error margin of the effort estimate. As time passes and more data becomes available about the project being estimated - about what exactly has to be done and under what conditions - the spread of the error becomes smaller and smaller.





How does the cost of a mistake grow in the case of overestimation and underestimation? It turns out that with overestimation the cost of the error grows linearly, while with underestimation it grows exponentially.

The linear growth in the first case is explained by Parkinson's law, which states: work expands to fill the time allotted for it. This law is also known as the "student syndrome" - no matter how much time is given for a term paper, it still gets done on the last day before the deadline.





Continuing the theme of project estimation, let's consider different approaches.

A simple approach: take the number of hours, multiply by the hourly rate, and add 20-30% for risks: (N * hourly_rate) * [1.2-1.3] = project_cost

The 20-30% is, according to statistics, the average amount by which projects are underestimated. Sometimes this value is 10% (which means you have a brilliant team), sometimes 50% (also quite normal), and often even more.

The reason is that people are optimists, and they give overly optimistic estimates. It is therefore advisable to provide two estimates, an optimistic and a pessimistic one (I'll let you in on a secret: the pessimistic one can also be shown to customers, and most of them react to it quite calmly).
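A minimal Python sketch of the simple approach described above; the number of hours, the hourly rate and the risk factors are invented for the example:

def simple_project_cost(hours, hourly_rate, risk_factor):
    # Simple approach from the text: (N * hourly_rate) * [1.2-1.3]
    return hours * hourly_rate * risk_factor

hours = 800        # hypothetical effort estimate, hours
hourly_rate = 50   # hypothetical rate, $/hour

print(f"raw estimate:   ${simple_project_cost(hours, hourly_rate, 1.0):,.0f}")  # $40,000
print(f"+20% for risks: ${simple_project_cost(hours, hourly_rate, 1.2):,.0f}")  # $48,000
print(f"+30% for risks: ${simple_project_cost(hours, hourly_rate, 1.3):,.0f}")  # $52,000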

A more sophisticated approach is to weight each estimate by its probability:

final_estimate = optimistic_estimate * P(optimistic) + pessimistic_estimate * P(pessimistic)

where P(optimistic) and P(pessimistic) are the probabilities assigned to the two scenarios. Consider an example. The optimistic estimate is $20 million with a probability of 30%, and the pessimistic estimate is $75 million with a probability of 70%.





$20M * 0.3 + $75M * 0.7 = $6M + $52.5M = $58.5M

The final estimate of $58.5 million is noticeably higher than the "average" value of $47.5 million (a difference of about 23%), which most managers would most likely choose as a compromise. So this is yet another confirmation that, on average, you need to add 20-30% to your "normal" estimate.
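The same calculation as a small Python sketch; the function generalizes to any number of scenarios, and the amounts and probabilities are taken from the example above:

def weighted_estimate(scenarios):
    # Expected value: sum of estimate * probability over all scenarios
    return sum(estimate * probability for estimate, probability in scenarios)

scenarios = [(20.0, 0.3),   # optimistic: $20M with probability 30%
             (75.0, 0.7)]   # pessimistic: $75M with probability 70%

expected = weighted_estimate(scenarios)   # 20 * 0.3 + 75 * 0.7 = 58.5
naive_average = (20.0 + 75.0) / 2         # 47.5

print(f"weighted estimate: ${expected:.1f}M")       # $58.5M
print(f"naive average:     ${naive_average:.1f}M")  # $47.5M
print(f"difference:        {expected / naive_average - 1:.0%}")  # 23%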

Well, since we have started talking about "average" values, it should be added that a decision made by the majority will always be worse than a decision made by a small group of people. That is why important decisions in large corporations are made by the board of directors rather than by a popular vote of all employees, and that is why decisions made by the whole team are likely to be worse than decisions made by the project manager together with, for example, the team lead.

Technical debt

Technical debt is a deliberate compromise, made when the customer and the contractor clearly understand all the benefits of a quick, though not ideal, technical solution that will have to be paid for later. The term was introduced by Ward Cunningham.

Common causes of technical debt:

  1. Business pressure: when the business demands that something be released before all the necessary changes are complete, technical debt accumulates in the form of those unfinished changes.
  2. Lack of process or understanding: when the business has no concept of technical debt and makes decisions without considering the consequences.
  3. Lack of loosely coupled components: when components are not built in a modular way, the software is not flexible enough to adapt to changing business needs.
  4. Lack of tests, which encourages rushed development and risky fixes ("crutches") to patch errors.
  5. Lack of documentation: when code is written without the necessary supporting documentation. The work required to create that documentation is also a debt that must be paid.
  6. Lack of collaboration: when knowledge is not shared across the organization, business efficiency suffers, or junior developers are poorly mentored.
  7. Parallel development in two or more branches at once: this accumulates technical debt that will eventually have to be paid off in order to merge the changes together. The more changes made in isolation, the greater the final debt.
  8. Deferred refactoring: as project requirements evolve, it may become obvious that parts of the code have become cumbersome and need to be reworked to support future requirements. The longer refactoring is postponed and the more code is written on top of the current state of the project, the more debt accumulates and will have to be paid at the time of the eventual refactoring.
  9. Lack of knowledge: when the developer simply does not know how to write quality code.

In most cases, technical debt is paid off at the expense of the project delivery deadlines, by hiring additional employees, or in the form of overtime.

Pareto principle (the 20/80 rule)

A rule of thumb, named after the economist and sociologist Vilfredo Pareto, which in its most general form states that "20% of the effort produces 80% of the result, while the remaining 80% of the effort produces only 20% of the result". Keep in mind that the numbers 20 and 80 are approximate (for example, Google, Apple and Microsoft use a 30/70 revenue split in their application stores), but this does not change the general idea.




The Pareto principle is a fairly well-known rule that can be applied to various areas of life, but in IT this principle shows itself in all its glory.
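As a toy illustration of how this can look in a codebase, here is a small Python sketch that, given a per-module bug count, finds what share of the modules accounts for 80% of the bugs; the numbers are invented for the example:

def pareto_share(contributions, target=0.8):
    # Fraction of items needed to cover `target` share of the total contribution
    ordered = sorted(contributions, reverse=True)
    total = sum(ordered)
    covered = 0
    for items_used, value in enumerate(ordered, start=1):
        covered += value
        if covered >= target * total:
            return items_used / len(ordered)
    return 1.0

bugs_per_module = [120, 95, 60, 10, 8, 5, 4, 3, 2, 1]  # hypothetical bug counts for 10 modules
print(f"{pareto_share(bugs_per_module):.0%} of the modules account for 80% of the bugs")  # 30%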

The most important consequences of the Pareto law:

  1. There are few significant factors and many trivial ones - only a small number of actions lead to important results.
  2. Most efforts do not produce the desired result.
  3. What we see is not always the whole truth - there are always hidden factors.
  4. What we end up getting, as a rule, differs from what we expected to get (hidden forces are always at work).
  5. It is usually too difficult and tedious to understand what is going on, and often unnecessary - you just need to know whether your idea works or not, change it until it does, and then maintain the situation until the idea stops working.
  6. Most successful outcomes are due to the action of a small number of highly productive forces; most troubles are due to the action of a small number of highly destructive forces.
  7. Most actions, group or individual, are a waste of time. They contribute nothing real toward the desired result.

One should also not forget about Pareto efficiency - a state of a system in which no individual indicator of the system can be improved without making others worse.

In a simplified form this becomes the triangle rule: fast, good, cheap - pick any two.





One percent rule

A rule that describes the uneven participation of the Internet audience in content creation. It states that, in general, the overwhelming majority of users only browse the Internet and do not take an active part in discussions (on forums, in online communities, and so on).

More generally, it describes the share of employees who generate new ideas or show initiative.

Consider an example. You held an event attended by 30 people, and you want to get feedback by running a survey afterwards. Does this make sense? Without inventing synthetic conditions, such a survey is pointless, since on average it will be answered by... 0.3 people. Even if one person responds (which would be about 3%, an excellent result in itself), a single answer is unlikely to be of much use. It's easier to ask the opinion of people as they leave the building :-)

Peter principle

The Peter principle states that in a hierarchical system every individual tends to rise to his level of incompetence.

Features:
  1. It is a special case of a more general observation: anything that works well will be used in progressively more difficult conditions until it causes a catastrophe.
  2. According to the Peter principle, a person working in any hierarchical system is promoted until he occupies a position where he can no longer cope with his duties, that is, becomes incompetent. This is called the employee's level of incompetence.
  3. It is impossible to determine in advance at which level an employee will reach his level of incompetence.

It is for this reason that 37signals refused to promote employees up the career ladder and instead offers them the opportunity to improve their skills and knowledge (and, consequently, their pay) within their current position.

When the number of 37signals employees exceeded 25, something odd happened: an employee with too much ambition left the company. She wanted a promotion, but the company's flat structure simply did not provide for managers. After that, Jason Fried wrote a column explaining why it is so important for them to preserve exactly this organizational structure. According to him, companies usually cultivate "vertical" ambitions in their employees, that is, the desire to move up the career ladder, while 37signals tries to hire people with "horizontal" ambitions, that is, the desire to become ever more professional at what they love most. (The Village)

For a deeper dive into the topic I recommend the books Rework and Remote.

Hanlon's razor

A statement about the probable role of human error in the causes of unpleasant events, which says: never attribute to malice that which is adequately explained by stupidity.

Hanlon's razor is one of my favorite principles. In most cases people try to blame every blunder on "unforeseen forces" or a "conspiracy theory", even though it is easily explained by the ordinary stupidity of employees.

Continuing this topic, one should also recall Murphy's law, a philosophical principle formulated as follows: if there is a chance that some kind of trouble can happen, it will happen. A sort of law of universal meanness.

In formal terms: for any n there exists an m, with m < n, such that if n is large enough for Murphy's law to hold under the given specific conditions, then m trials are sufficient for at least one of them to produce the undesirable result.

A consequence of Murphy's law for software development: you need to build in "foolproofing" - additional checks, extra levels of abstraction and isolation, and other techniques better known as "best practices".
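A minimal, hypothetical Python sketch of such defensive checks; the function, its parameters and its rules are invented purely for illustration:

def withdraw(balance, amount):
    # Defensive checks: assume that any invalid input that can arrive eventually will arrive
    if isinstance(amount, bool) or not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

print(withdraw(100.0, 30.0))  # 70.0
# withdraw(100.0, -5.0)       # raises ValueError: amount must be positive
# withdraw(100.0, 500.0)      # raises ValueError: insufficient funds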

And finally, I will mention one more effect - the Dunning-Kruger effect. This is a metacognitive bias in which people with a low level of skill draw erroneous conclusions, make poor decisions, and are unable to recognize their mistakes precisely because of their low level of skill. This leads them to overestimate their own abilities, while genuinely highly qualified people, on the contrary, tend to underestimate their abilities and suffer from a lack of self-confidence, considering others to be more competent. Thus, less competent people generally have a higher opinion of their own abilities than competent people do, and competent people, moreover, tend to assume that others rate their abilities as low as they themselves do.

And since people are more inclined to believe those who speak confidently and assertively, a truth spoken quietly and without drama may simply not be heard, and a decision may end up being based on incorrect or inaccurate data. It is therefore very important to weigh every assessment and decision against who exactly said what, so that next time you listen to the more competent, though less assertive, employee.


I hope that an understanding of the principles and laws described here will help you develop higher-quality software and always deliver it on schedule and within budget!

Source: https://habr.com/ru/post/244783/

