
Assessment of consultants

Many of our customers ask how a manager and an employee should evaluate the assigned tasks, understand where the employee needs to grow, and develop a shared view of this. In other words, how to formalize the process of assessing employees' qualifications. The desire is entirely reasonable and driven by several needs: to reduce subjectivity and the influence of personal factors when evaluating an employee, to create a visual basis for discussion with the employee, and to reduce the number of conflicts between manager and employee that arise from unfounded assessments and decisions. This prompted us to write this article, and the goal is not just to share our experience, but to start a discussion with other managers about a situation companies face more and more often and, perhaps, to find a solution together.

Among Habr's readers, and readers of this article in particular, there are certainly not only rank-and-file employees but also heads of departments. In our experience, the task of a mutually adequate assessment of employees matters a great deal to them, so this article should be useful and interesting to many. Here we share our qualification assessment practices, and we invite you, dear readers, to join the discussion.

So, in order. We will describe our experience in one company, using the assessment of a consultant as an example (although the technique applies just as well to other specialists). From the start we understood that the output should be a table with the manager's ratings and the employee's self-assessment, so that the dialogue about the employee's qualifications has a foundation. At the same time, right at the beginning of the work on the assessment methodology, we ran into a number of ambiguous questions, which we discuss below.

Concept


The overall concept of the assessment was determined immediately, but the first questions appeared just as quickly. For example, a consultant has certain competencies, but the result of his work is solved tasks, not demonstrated competencies. So which of the two is more convenient to evaluate in the end?
To begin with, just in case, let us define the terminology. Here, "competencies" means various qualities, knowledge, and skills: for example, knowledge of the automation system's functionality, the skill of conducting an interview, or a quality such as attentiveness. "Tasks" consist of a set of competencies. For example, the consultant's task "Developing role instructions" consists of knowledge of the automation system's functionality, the skill of designing IT service management processes, the skill of developing documentation, and time management (this is an abbreviated list of the competencies related to this task).

Regarding the assessment of tasks and competencies, we made the following decision: initially, the manager and the employee rate the competencies (for the employee this is a self-assessment), but the discussion and the final assessment are carried out by task, i.e. by the groups of competencies that make them up. If necessary, you can always return to the competency level and analyze the employee's rating in another context.

So we had a rough understanding of the structure of the methodology: a list of competencies to be rated by the manager, a list of competencies for the employee's self-assessment, and a summary table in which the competencies are grouped by task and the manager's and the employee's ratings are visible side by side.
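To make the structure more concrete, here is a minimal sketch of how the competency/task model and the two sets of ratings could be represented. All names (Competency, Task, task_scores) and the example numbers are ours, purely for illustration, not part of the methodology itself.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Competency:
    name: str
    manager_score: float   # rating given by the manager
    self_score: float      # the employee's self-assessment


@dataclass
class Task:
    name: str
    competencies: list[Competency]  # competencies that make up this task

    def task_scores(self) -> tuple[float, float]:
        """Roll competency-level ratings up to the task level."""
        return (
            mean(c.manager_score for c in self.competencies),
            mean(c.self_score for c in self.competencies),
        )


role_instructions = Task(
    "Developing role instructions",
    [
        Competency("Knowledge of the automation system functionality", 4, 5),
        Competency("Designing ITSM processes", 3, 4),
        Competency("Developing documentation", 4, 4),
    ],
)
mgr, slf = role_instructions.task_scores()
print(f"manager: {mgr:.2f}, self: {slf:.2f}")  # manager: 3.67, self: 4.33
```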

Then we realized that, to make decisions about appointing or transferring an employee to a certain position more transparent, we needed to set target values (minimum thresholds) for each position. So the next step was assigning these target values. On the first iteration this was done almost intuitively; the following iterations consisted of testing employees already holding the positions in question, which allowed us to refine the targets for each competency in each task for each position.
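A rough sketch of how per-task scores might be checked against the minimum thresholds for a position; the positions and threshold numbers below are invented for illustration and are not the values from our matrix.

```python
# Hypothetical target values (minimum thresholds) per position and task.
POSITION_TARGETS = {
    "Junior consultant": {"Developing role instructions": 3.0},
    "Senior consultant": {"Developing role instructions": 4.0},
}


def meets_position(task_scores: dict[str, float], position: str) -> bool:
    """True if every task score reaches the target value for the position."""
    targets = POSITION_TARGETS[position]
    return all(task_scores.get(task, 0.0) >= target
               for task, target in targets.items())


scores = {"Developing role instructions": 3.67}
print(meets_position(scores, "Junior consultant"))  # True
print(meets_position(scores, "Senior consultant"))  # False
```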

In addition to setting targets, the testing revealed another important issue that had to be resolved. Comparing self-assessments with the manager's ratings showed that the former were significantly higher. This was largely because employees rated many of their qualities that they had not yet had a chance to demonstrate at work. A new question arose: "If we leave the calculation of ratings as it is, conflicts, misunderstandings, and long arguments during the evaluation are inevitable. But if we agree that the employee rates only those qualities the manager could have noticed, we may never learn about his other useful skills. What is the better way to handle this?"

After long discussions, we decided to add a separate field, "Practical application", with the answer options "Yes" and "No" for each evaluated criterion on the employee's self-assessment sheet. The final rating of a skill remains unchanged if the skill has actually been applied, and a reduction factor is applied if there has been no practical use (see Figure 1).


Figure 1. An example of using the field "Practical application"
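The adjustment could look roughly like this; the 0.75 reduction coefficient below is an assumption for illustration, the article does not state which factor we actually use.

```python
# Assumed reduction coefficient for skills with no practical application yet.
REDUCTION_FACTOR = 0.75


def adjusted_self_score(self_score: float, applied_in_practice: bool) -> float:
    """Keep the self-assessment as is when the skill has been used on real
    projects; scale it down when it has not."""
    return self_score if applied_in_practice else self_score * REDUCTION_FACTOR


print(adjusted_self_score(5, applied_in_practice=True))   # 5
print(adjusted_self_score(5, applied_in_practice=False))  # 3.75
```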

A similar question arose about employees' certificates: how should they be taken into account? We did not want to complicate the calculation further; besides, a large number of different coefficients would produce a meaningless "average temperature across the hospital", and nothing concrete could be said about the employee's knowledge. So we decided not to introduce additional coefficients here, but to limit ourselves to color highlighting, in the final summary table, of the knowledge and skills for which a certificate has been obtained. After a while, inspired by the method, we even introduced multi-colored highlighting depending on the level of the certificate. :-)

Thinking about how to account for certificates and the practical application of skills, we ran into the most controversial question, one we have asked ourselves repeatedly: "Where should we stop when detailing and complicating the methodology? When do we need different weights for parameters in different tasks and different rating scales, and when is it enough to use uniform indicators?"

We did not find an exact answer, so we decided to postpone further reflection until the matrix had been in use for a while, limiting the first version of the matrix to the assessment features described above.


An example of the summary table is shown below (see Figure 2). It illustrates how the assessment features we developed fit together.


Figure 2. An example of the summary table
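For illustration, here is a sketch of how a single text row of such a summary table could be assembled, combining the manager's score, the adjusted self-assessment, the position target, and a certificate marker used only for highlighting. The column layout, names, and numbers are invented, not copied from our table.

```python
def summary_row(task_name: str, manager: float, self_adj: float,
                target: float, has_certificate: bool) -> str:
    """Format one task row: '*' marks a task backed by a certificate."""
    mark = "*" if has_certificate else " "
    status = "ok" if manager >= target else "below target"
    return (f"{task_name:<35}{mark}  manager {manager:.2f}  "
            f"self {self_adj:.2f}  target {target:.1f}  {status}")


print(summary_row("Developing role instructions", 3.67, 4.33, 3.0, True))
print(summary_row("Running requirements interviews", 2.80, 3.75, 3.5, False))
```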

Disadvantages


Of course, one cannot claim that such an approach to assessing consultants' competencies is completely objective or universal. A number of risks inherent in this kind of formalization have to be acknowledged. First, the flexibility of the evaluation decreases: it can squeeze all employees into a fixed framework, "stamping out" similar consultants. That is perhaps not so bad in itself, especially since employees eventually become more interchangeable and gain a precise understanding of what to strive for. On the other hand, individual development is then constrained, so a consultant with a rare, specific talent risks remaining an unrecognized professional.

Second, a duality appears: on the one hand, we reduce the subjectivity of the evaluation, since well-known evaluation criteria are available to everyone; on the other, we increase the variance of the resulting scores (because a specific fixed scale is used).

And finally, third, when creating any formalized system there is another important risk: the risk of an initial error, for example when setting the weights or the relationships in the summary table. Such errors are often hard to detect, but their consequences can be serious.

At the same time, to make the whole procedure of assessing a consultant's qualifications more objective, it is important to remember that such a matrix cannot be the only factor in the assessment. Discussions and the experience of working alongside the evaluated employee complement the picture the completed matrix gives. Therefore, to form the most complete and objective understanding of an employee's qualifications, it is necessary to define the circle of people who can evaluate him. This keeps all the risks and drawbacks listed above to a minimum and helps turn the competency assessment matrix into a convenient auxiliary tool for conversations between management, HR, and the employees being evaluated.

Findings


Summing up our work and this article, we can say that the first version of our matrix turned out to be quite useful, convenient, and visual. Some shortcomings could not be avoided, but the main goals of developing the matrix were achieved.

Now we see a new, more ambitious goal. While our matrix was initially developed for evaluating consultants, we now plan to create a universal tool for assessing the competencies of other functional roles as well. So, dear managers and colleagues, let's discuss and share experiences. :-)

Source: https://habr.com/ru/post/237839/

