Grading is an important corporate management tool that serves to maintain objectivity, especially in borderline cases, against the backdrop of increasingly agile and international structures. hkp.com talked to hkp///group experts David Voggeser and Carsten Schlichting about the importance of central governance for the evaluation of jobs and functions in corporations and in large, internationally active medium-sized businesses.
Mr. Voggeser, Mr. Schlichting, time and time again we see similar or comparable positions being evaluated differently within business areas and countries. How do you explain these inconsistencies?
David Voggeser: The most straightforward explanation is that different evaluations occur when companies lack governance or when this is insufficient. An evaluation system may be perfect in individual cases, but if it’s used in another part of the company with different standards, its usefulness is gone. The purpose of grading is to establish and maintain objectivity and comparability.
Carsten Schlichting: ... or to look at it in terms of method: grading processes aren’t calibrated scales that dispense identical results all over the world; they are assessment tools that are in human hands. Even the analytical processes in grading aren’t measurements as such but rather criteria-based comparisons made by human beings.
What are the specific causes of evaluation errors?
Carsten Schlichting: All processes used for grading contain descriptions of the evaluation criteria and levels. And, as everyone knows, words can be interpreted in a number of ways, both in a positive and negative sense. Added to this are the translations into multiple languages – and, just like with a book, each translation is already an interpretation of the original text.
David Voggeser: Plus, evaluations are often carried out by different people, who have varying levels of training and experience, and who belong to different management levels.
So, a sound professional understanding and a strong position are the must-haves for consistent evaluations?
David Voggeser: They are definitely the essentials without which no one should take on a grading process. Of course, professional understanding doesn’t just mean sitting through a short two-hour information session along the way; it’s about an appropriately extensive transfer of knowledge and, above all, experience. Knowledge is only part of the story; practical experience is almost more important.
Carsten Schlichting: The discussion of grading proposals with appropriate colleagues is of course another key aspect.
In that case, could a cross-company grading network be the solution?
Carsten Schlichting: A distinction is needed here. In the case of positions below management level – in Germany we use the term “tariff” – the approach is usually country- and industry-specific, so there isn’t much need for international dialog. Experts in the field discuss guideline values at a local level, and these then serve as anchor points for future evaluations.
A network isn't really necessary then. But how many grading experts should there be for such a task?
David Voggeser: The size of the company is of course decisive in this regard. In relative terms, the larger a company is, the fewer people need to be familiar with this particular matter. The rationale is that there should be at least one expert per region and/or per employee group. Generally speaking, it’s better to have a few people who carry out evaluations frequently than many who do so only once or twice a year.
Presumably the situation is different for non-tariff employees?
Carsten Schlichting: No, the method is basically the same for NT employees as well. The difference here is the frame of comparison, which becomes progressively broader.
Could you be more specific?
Carsten Schlichting: Visualize a series of concentric, expanding circles. First of all, evaluations need to be consistent at one location, then across business areas in a country and finally across all countries relevant to the company in question. On top of that, cross-comparisons within job families are also expected for management functions. And finally, the evaluations have to fit the external market with which the compensation is compared.
That sounds extremely difficult to carry off. Wouldn’t it be more effective to have one external consultant handle the whole issue of grading across locations?
Carsten Schlichting: Unfortunately not, although that would at least ensure that the same method is used. But our own extensive experience with such responsibility within a corporate group setting has shown that even evaluations carried out by the same consulting firm in different countries can result in significant deviations. This is mainly because the individual units present themselves as more independent than they actually are. The influence of Head Office or of globally active business units is often kept quiet or downplayed.
Why is that?
Carsten Schlichting: Well, you have to keep in mind that grading ultimately determines where the money goes. And that means there are always powerful interests at play. The higher up the positions are, the more influential the stakeholders who ultimately commission the consultant. A local consultant will only be able to counter this if they know the structure of the company extremely well and have the support of the relevant department.
What advice do you have for ensuring objective grading processes across companies?
David Voggeser: The corporate center needs to have governance in place for top management grading – whether this involves just one level or the first two to three depends on the size of the company in question – and for the top level in foreign subsidiaries. This determines the upper thresholds for all units worldwide. There should also be experts positioned within countries and business units, who regularly communicate with the center of excellence, which functions as a knowledge provider and facilitator.
Carsten Schlichting: In the interest of transparency, the evaluations carried out at international locations should also be stored in a central database to which relevant management members have direct access. That way, grading deviations, whether intentional or accidental, don’t just become visible when an inconsistency is revealed at some point down the line.
And then... all’s well that ends well?
David Voggeser: Almost... we often find that the reasons for an evaluation aren’t sufficiently documented. As I said earlier, guidelines are open to interpretation. So, for ambiguous cases at least, there should be a record of how the descriptions have been understood and applied to the specific evaluation.
That sounds time-consuming...
David Voggeser: … which is exactly why it doesn’t happen enough. But without documentation, the specific knowledge remains only with the people involved, and it is lost if they change position or leave the company.
Given that humans can introduce ambiguity into the grading process, doesn’t the use of AI make sense here?
David Voggeser: Experience with our automated grading tool, the hkp///group Grading Robot, has shown that technology is pretty useful for an initial assessment, but human expertise is still needed for fine-tuning.
Carsten Schlichting: And as grading is always particularly interesting when it comes to borderline cases, the human factor will continue to play a key role alongside technology in the foreseeable future.
Mr. Voggeser, Mr. Schlichting, thank you for the interview.