MAKING GOOD SELECTION DECISIONS
GETTING GOOD SELECTION DATA HAS BECOME EASY. USING IT REMAINS HARD
ABOUT THE AUTHOR
Nik Kinley is a Director at our London office, and is regional lead for our work in the Middle East. His prior roles include Global Head of Learning for Barclays RBBF and Global Head of Assessment & Coaching for the BP Group. He has specialised in the fields of leadership assessment and development for nearly thirty years and in that time has worked with CEOs, factory-floor workers, life-sentence prisoners, government officials and children. He began his career in commercial roles, before spending the next decade working in and around prisons as a forensic psychotherapist. Thirteen years ago he returned to working with organisations, and since then has worked with over half of the top 20 FTSE companies, identifying and developing talent across the globe. He has written books on corporate learning, talent management, and behaviour change, and published numerous papers in leading journals. He is a regular lecturer at leading global business schools, and in the last year has collaborated with faculty at IMD, Stanford, and Cambridge/Judge. He is a regular speaker at industry conferences, and is often cited in the media on matters relating to talent assessment and development.
USING SELECTION DATA
Once upon a time, finding reliable, objective data on recruitment and promotion candidates was the challenge in selection. You had CVs and references – both of which were far from reliable – and that was about it. These days, it is different.
Most large firms now use assessment processes and tools of one sort or another to support selection decisions. There are capability-based interviews, assessment centres, and psychometrics galore: all providing data on candidates’ characteristics, skills and experience. Yet, as the tools and data available to us have multiplied, a fundamental challenge has emerged. One that has always been there, lurking behind all the process and policy: how do you ensure that at the point of decision – of who to hire and who to promote – the available data is used effectively? After all, businesses can have pressures that may overrule any data; decision-makers can have biases and preferences; and just ensuring proper processes are followed can be tough. So how do you make sure that hiring managers and other decision-makers use the data available in a way that ensures they make the best possible selection decisions?
ELIMINATE MANAGERS FROM HIRING DECISIONS?
One immediate possibility is the idea that we should remove not just managers, but all humans, from the decision-making process. It is not an entirely mad idea, and to see why, we
need to go back to 1954. That was when the American psychologist Paul Everett Meehl published a study looking at how medical diagnoses were made. He found that when the results of tests were combined into a diagnosis using statistical methods, the correct diagnosis was achieved more often than when doctors relied only on their clinical judgement. His one caveat was that humans seemed better at identifying unusual bits of information – things that were not part of standard diagnostic tests. But, in general, mechanical judgement trumped human judgement. In the half century since Meehl’s work, most of the research investigating the matter has supported his findings. It is not that human judgement cannot be good: in fact, at times it can exceed mechanical judgement. It is simply that human judgement is too unreliable. Sometimes it is great, other times it is not. Mechanical algorithms, though, are always reliable. They are never distracted, never influenced by mood, and never rushed to a premature decision. As a result, some people have suggested that we might be better off doing a series of tests and then plugging the results into an algorithm.
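To make the idea concrete, here is a minimal sketch of what a mechanical decision rule can look like in Python. The predictor names, weights and cut-off are hypothetical illustrations only, not values from Meehl’s research or any validated model; in practice the weights would be estimated from historical performance data.

```python
# A minimal sketch of 'mechanical judgement': combining standardised
# test results into a single predicted-performance score using fixed
# weights. All names, weights and the cut-off are hypothetical.

WEIGHTS = {
    "cognitive_ability": 0.5,     # e.g. intelligence test z-score
    "conscientiousness": 0.3,     # e.g. personality scale z-score
    "structured_interview": 0.2,  # e.g. standardised interview rating
}
CUT_OFF = 0.4  # hypothetical threshold for progressing a candidate


def predicted_performance(scores: dict) -> float:
    """Apply the same weighted-sum rule every time: never tired,
    never distracted, never swayed by a first impression."""
    return sum(weight * scores[name] for name, weight in WEIGHTS.items())


candidate = {"cognitive_ability": 1.2,
             "conscientiousness": 0.3,
             "structured_interview": 0.8}
score = predicted_performance(candidate)
print(f"{score:.2f} -> {'progress' if score >= CUT_OFF else 'reject'}")
```

The point is not the specific numbers but the consistency: the rule treats every candidate identically, which is exactly why mechanical judgement is so reliable.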
Such mechanical approaches are certainly how the early-stage sifting of large numbers of candidates is already done in many firms, and how assessment centre data is sometimes turned into selection decisions. But should we extend this and eliminate people from all selection decisions? The answer is ‘no’, and for two important reasons. First, how hiring managers and other stakeholders feel about a candidate is a critical part of the equation. It does not matter how good an algorithm says someone is: if their manager does not like them, they are probably not going to do well. The second reason is that what algorithms are good at is spotting people who fit a template. So if you want an army of clones, then algorithms are what you need. But if you want a diverse team, with all sorts of people with all sorts of characters and skills, then you need a human, because what humans are particularly good at is spotting outliers – things that are different and distinctive. So, for the moment at least, we need people, not machines, to be making selection decisions. Which brings us to our challenge, because with human judgement comes human error.
THE KEY CHALLENGES
Assuming we stick with human decision-making, there are some common challenges all firms face.
1. UNDERSTANDING THE DATA
In some countries, regulations require vendors to sell assessment tests only to people trained in their use. In South Africa, there is even a law to back this up. The idea is that as these tests can be complex, the people interpreting results ought to understand the complexities. That’s the principle, anyway. Yet the reality is that in most countries assessment results end up in the hands of people who are not trained in the technical complexities of the trade. Vendors are aware of this, of course, which is why they produce ‘manager versions’ of assessment reports which explain and simplify the results for lay users. These pre-interpreted results may seem to solve the issue of understanding data, but they are prone to oversimplifying issues, and do not help people understand the broader issue of what assessment results can and cannot tell us (see the text-box “Two key things every hiring manager should know”). So, for
all the market trends towards simplified assessment reports, the bottom line is that many – if not most – of the people using assessment results do not have a good understanding of them. And that cannot be good.
2. THE ‘JUST-GIVE-ME-A-STRAIGHT-ANSWER’ ISSUE
This is a related issue, and can be one of the consequences of people lacking a good understanding of assessment. But it is more about people’s desire for clarity. They want assessments to give them an unambiguous message or recommendation. A single number; a straight ‘yes’ or ‘no’. And vendors have responded by trying to give it to them. Unfortunately, assessment results almost always require further investigation and cross-referencing, and indeed become most powerful when you do this. So when it comes to assessment, simplifying really is dumbing down. And yet people are people and they still want their clear answer, so the issue remains.
3. THE RESPONSIBILITY ISSUE
Crunch time: an assessment report says that a candidate is just not right for a role, but the hiring manager is keen on them anyway and simply dismisses the assessment as wrong, without further consideration. Is it OK for them to ignore the report like this? If your firm uses assessment tools then sooner or later you will face this question. It can be about individuals’ over-confidence in their own judgement, or it can be about politics and power – about managers wanting to feel they are in control and that it is up to them who they employ. You are likely to face the opposite, too: hiring managers who overly trust assessments and take the results as gospel, effectively outsourcing the decision to the tests. Either way, you have an issue. And whatever the cause, a lack of clarity about what assessments are and how they should be used can undermine their value.
4. THE DONE DEAL
A related issue is where a decision-maker already has a candidate firmly in mind before the selection process begins. Often, they are a great candidate. The best available. But sometimes they are not, and whenever a decision-maker begins the process with a firm favourite in mind, it undermines the quality of the decision-making process.
5. CLARITY ON FUTURE STRATEGY
In many respects, selection is more of a strategic decision than a personnel one: a decision about what you need someone to do and achieve. Clarity on what is required from candidates if selected is thus an important foundation for good decision-making. At junior levels, this is usually fairly straightforward. But the more senior the role, the less simple this tends to be. Sure, you may have a lengthy job description, but what you need from the potential future leaders of your business may be far less clear. The future strategy for a business or business unit may be uncertain, and even if it isn’t, things may change. So decision-makers can be faced with the unenviable task of trying to establish which candidate is the best fit for something that is itself not clear. When this happens, decision-makers have little option but to default to looking at candidates’ personal characteristics, skills and experience – to looking for the best candidate in general, rather than the one that most suits the specific role in question. And that reduces the likelihood that they will end up making the right decision. So having a clear strategy – and thereby a clear context – for selection decisions is a critical issue.
6. INDIVIDUAL BIASES
This is probably the best-known decision-making challenge: the simple fact that humans are riddled with cognitive biases and personal preferences that can lead them to misinterpret information, misjudge its importance, or just fail to process and appraise it objectively. There is anchoring bias, or relying too heavily on the first piece of information you hear. There is egocentrism bias, or favouring candidates who are similar to you. And there is over-confidence bias, or over-rating your ability to objectively analyse data and make decisions. In fact, there are a host of such biases, with one recent book listing 50 of them!
THE BEST LEADERS HAVE BIG MOUTHS
One of our favourite examples of unconscious bias is the research showing that people with wider mouths tend to be viewed as more dominant and successful, and more likely to be leaders. As a result, we are more likely to select candidates who have bigger mouths. You could easily dismiss this as some weird one-off research finding, were it not for the additional research showing that leaders with wider mouths tend to be rated as higher performing, and companies whose CEOs have wider mouths tend to have higher profits. And then there is the research showing that the two factors that most predict who will win US Senate races are incumbency and – yep – wider mouths. What is going on? Well, 200,000 years ago, what predicted leadership performance were things like combat and hunting ability. And we know that the best predictor of fighting ability in primates is tooth and mouth size. So researchers’ best guess is that over thousands of years our brains evolved to prefer and select leaders who have bigger mouths. As biases go, this is probably about as unconscious and useless as it gets. In fact, it is worse than useless. Using a 200,000-year-old automatic cognitive assumption to select leaders into complex roles in modern-day global businesses is just begging for trouble.
These biases are the reason why algorithms are more reliable than humans, and as long as you have humans involved in the decision process they are unavoidable to some degree. And every single one of them is a threat to the decision-making process.
7. CULTURAL PRESSURES
Even if you could eliminate human bias, there are still cultural dynamics that can impinge on decision-making. The most common cultural pressure is risk-avoidance – prioritising the lowest-risk candidates over potentially better ones who appear to involve a larger degree of risk. A simple example can be seen in how some firms will, wherever possible, go for candidates who have prior experience of doing the role being selected for. That may sound sensible, but prior experience of doing something does not in itself make someone a better candidate. Their experience will have been in different business contexts, and the way they may have done things may not be a good fit for the role being selected for. These types of cultural pressures and the way they influence decisions are often less visible, but they are no less potentially damaging for it.
8. MIGRATION TO THE MEAN
Another cause of cloning can come from the interview process. Often, firms will want multiple stakeholders to see candidates – especially for more senior roles. In many ways this is a good idea, but it carries a risk. The more people who see a candidate, the more people there are who could potentially object to them. As a result, the more people who have a say in the hiring, the more likely it is that unusual, or potentially ‘spiky’, candidates will be rejected. The outcome is a general movement towards the average, or least objectionable, candidate. Or in other words, cloning.
9. THE BUMS ON SEATS ISSUE
The final common issue is the simple need to get bums on seats: to fill the role and have someone there doing something, even if they are not the perfect candidate. This is a powerful pressure, and what it leads to is rushed decisions and compromises. These are pragmatic responses, and sometimes the only solution possible. You just need someone. But unchecked and unbalanced, the pressure to simply get someone in role – anyone – can lead to serious hiring mistakes, especially in countries where it is not easy to then remove new hires who do not perform well. Moreover, the biggest risk of rushed hiring is not catastrophic failure, which usually becomes apparent fairly quickly and is resolved through the individual leaving and being replaced. Instead, it is the minimally effective hire. The hire who is not bad enough to be considered a complete failure, but who lowers the overall average quality of the people in the business, and who drags down the effectiveness of those around them. Getting bums on seats is a risky business.
WHAT THE DATA SAYS...
• 84% of firms said that ensuring that the decision-makers in selection decisions correctly understood assessment data and results was a problem in their business.
• 83% of firms say they do not systematically review hiring decisions and whether they work or not.
• 75% of firms said that cultural issues in their business – ideas about the types of people who fit in or not – threatened to undermine the quality of selection decisions.
• 53% of firms do not currently collect ratings from interviews.
• 50% of firms do not currently provide any guidance or training to line leaders about how to use assessment data and results.
THE KEY SOLUTIONS
These, then, are the nine most common challenges facing good selection decisions. Most are unavoidable to some extent, but the good news is that all of them can be minimised and mitigated. In fact, there are three key things you can do: training and education, process improvements, and outcome reviews.
1. TRAINING AND EDUCATION
The first lever for improving the quality of selection decisions is training. And there are three key options here that we have seen companies employ:
A) TRAINING IN ASSESSMENT TOOLS
This first option is simply increasing the number of people who have training in the key assessment tools and processes you use. This can include formal certification in particular tools, in order to ensure decision-makers understand all the complexities in assessment data. In this vein, we have seen firms train all
HR Business Partners in particular psychometric tools, so they can support hiring managers in understanding the outputs. Yet these programmes can be expensive and long. So an alternative here is to provide very simple e-learning or online training in assessment tools – just a little information to ensure a core baseline of understanding. We have thus seen firms use a 10-minute, PowerPoint-based e-learning tool to ensure all managerial staff have a basic understanding of the psychometrics that the firm uses.
B) EDUCATION IN HOW TO USE ASSESSMENT DATA
The second option is to provide either the HR staff who support decision-making or all managerial staff with some brief and basic information about what assessment data can and cannot do. We have written elsewhere about two key pieces of information that decision-makers need to understand in particular: that higher scores are not always better; and that results
are estimates, not facts (see the text-box “Two key things every hiring manager should know”). This kind of information can be delivered to managers very efficiently, and yet can make a big difference to how they view and use assessment data. An alternative is to use what we call the 3 C’s Model. As its name suggests, this focuses on three C’s of all assessment results – Contexts, Consequences and Caveats – and educating managers in a simple model like this can help ensure that they interpret and use assessment results effectively. In our experience, providing HR and line leaders with a simple model and educating them in a few brief principles about assessment data can go a long way towards resolving the issues of decision-makers treating assessment scores as gospel, or of what to do when a hiring manager disagrees with an assessment result or recommendation. In fact, we find that this kind of education tends to be more effective than creating policies around how managers should treat assessment data, and whether they have to follow certain ratings.
THE 3 C’S MODEL
Whenever we are told that someone has a particular quality or ability, such as being strongly driven to achieve results, we do not like to automatically accept this as a good thing. Instead, we like to ask what the Contexts, Consequences and Caveats of this are. And through using this simple model, you can help managers better interpret assessment ratings and results. All they need to do is ask three questions:
• Is a particular quality or ability relevant for the context of the role someone is being considered for? For example, an individual may be highly driven to achieve tangible results, but this may not be that relevant for a ‘maintenance’ role, which is more about keeping a process running smoothly than achieving targets.
• What are the consequences of a particular score or ability, in terms of how it helps an individual perform better?
• What are the caveats to the result or rating – or under what circumstances would a quality or ability become amplified or reduced? For example, what if their confidence drops, or they have a manager who is particularly pushy and demanding?
There is little point trying to force managers to hire someone they do not want or do not believe in. And as a matter of principle, we believe that people who are accountable for decisions should be the ones making the decisions. So education is usually the best kind of policy.
C) TRAINING TO REDUCE BIAS
One common solution seen in businesses these days is training to reduce bias in decision-making, often undertaken in the name of reducing discrimination on the basis of gender or race. We have, however, begun to see a bit of a backlash against these sorts of programmes. On the one hand, there is evidence to suggest that educating decision-makers in some of the biases they are open to can reduce the risk of over-confidence in their decision-making ability, and make them more aware of the biases they are susceptible to. However, the backlash has come as firms have realised that simply telling people that they can unconsciously discriminate against certain types of candidate does little to reduce the likelihood of them actually doing so, especially if the broader culture of a business supports discrimination. While the evidence against such ‘unconscious bias’ training has built, we do still think there is merit in some more general education for managerial staff on cognitive biases and how they can affect selection decisions. At the very least, it appears to make hiring managers less likely to trust their instinct, and more likely to take objective assessment data into consideration. But this input needs to be brief and simple, and if firms really want to reduce bias in selection decision-making, the research points to a different type of training as the most effective: frame-of-reference training.
This involves providing people with examples of interviews and then getting them to rate the interviewee. Importantly, this needs to be done in small groups and involve some sort of discussion. The idea is that this helps create a common reference point for what good looks like. We have seen firms get really creative in how they implement this sort of training, too. Starting at the most senior levels, they provide leaders with three or four brief videos or PowerPoint case studies describing individuals. The leader then uses a team meeting to present these and discuss them with their team. This process is then cascaded down through the business, with each leader first participating in such an exercise and then running it for their own team. Such a solution can be a very efficient and effective way to develop a common frame of reference for what good looks like among leaders. It may not directly tackle bias, but the evidence shows that establishing a common frame of reference does nonetheless help diminish bias. Providing training and education, then, may sound expensive and impractical, but it need not be. In fact, the best solutions tend to be the brief and pragmatic ones. And what they excel at is helping ensure that decision-makers are more informed and more careful in how they interpret and use assessment data. The solutions above point to a broader issue, too: should you just train HR, or do you need to train managers as well? A lot of the answer depends upon the relationship between individual HR Business Partners and the line leaders they support. But in general, what has happened over the past few decades is that firms have focused more on training HR BPs or specialists in their recruitment teams, and less on training managers. The drivers for this have been partially financial (training fewer people is cheaper), and partially a desire to upskill HR professionals and boost their expertise, standing,
and role in the business. These are laudable goals but, in our experience, the individuals responsible for the hiring decision – those who have the final say – are the ones who most need training. Otherwise, firms become reliant on line leaders being willing and able to be swayed more by HR staff than by any of the issues described above. And unfortunately, some of the issues above are very loud and very powerful. So our recommendation is to always train both HR and line leaders. Yes, HR will probably get deeper training, but the line needs something.
2. PROCESS IMPROVEMENTS
The second key way to drive better decision-making is to change the selection process so that it requires decision-makers to do certain things. These changes need to be introduced with care, because any steps that require extra work on the
part of managerial staff are likely to be met with resistance. But there are some things that businesses can do that have a minimal impact on time, but that can have a big impact on the quality of final decisions. Five things in particular stand out:
A) CROSS-REFERENCING
Given that assessment results are estimates, not facts (see the text-box “Two key things every hiring manager should know”), it is important that they are cross-referenced and checked. One simple example of this is that rather than just accepting the results of a personality test, interviewers ask candidates about the results – for example, whether they feel the results are a true reflection of them, and what impact they think their personality has on their work. This is becoming standard practice in many firms, too, as personality test providers now commonly suggest interview questions that can be used to sense-check test results.
TWO KEY THINGS EVERY HIRING MANAGER SHOULD KNOW
In our experience, there are two important facts about assessment data that every hiring manager needs to know.
HIGHER SCORES ARE NOT ALWAYS BETTER
A common misunderstanding is that higher scores are always better. It is true that people who score higher on certain tests generally go on to perform better. But the key word here is “generally.” Consider intelligence. We know that it is the single best predictor of success. But a genius may well grow bored in some jobs, and exceptionally high intelligence scores are sometimes accompanied by less desirable qualities – for example, the inability to communicate ideas or think more pragmatically. Likewise, people who score very highly on measures of conscientiousness can sometimes come across as inflexible or bureaucratic. And being very high in agreeableness is not always a good thing, either, especially in roles that require tough-mindedness. So higher is not always better.
RESULTS ARE ESTIMATES, NOT FACTS
A second common misunderstanding is that people view the results of measures as facts or truths. For example, a job candidate may obtain a low agreeableness score in a personality test, from which a recruiter may conclude that the individual is not agreeable. This certainly sounds reasonable. But it is not, because ratings and results are not absolute facts or truths: they are more like estimates. Peter Saville, one of the founding figures of modern measurement, uses the analogy of golf to explain this. Golfers have a handicap score – a kind of average score that shows how good they are. But on any one day, the score they achieve may not match this handicap. They may do far better than their handicap would suggest one day, but far worse the next. And assessment results are pretty much the same. When you see an intelligence test score, it shows how someone did on one particular day, not their average score, and while this may be representative of how bright they are on other days, it may not be. This is partly because every measure is open to inaccuracies. Assessors may make a wrong judgement, or job applicants may pretend to be something they are not. Part of it, though, is also down to the fact that how people perform varies from hour to hour and day to day. And when you are assessing them, you do not know if you are catching them on a good day or a bad day. Of course, without testing someone many times, there is no way to know this. So it is important that when we see assessment results, we understand that the scores are not perfect indicators. They are ballpark figures. And it is down to the people reading and interpreting the results to work out where exactly in the ballpark they are – whether a score is at the top end of someone’s range or the bottom end.
Other examples include cross-referencing intelligence test scores with academic achievement, and individual psychological assessment reports with interview outputs. This kind of checking can be simple and quick, and the easiest way to institute it is to add it to a standard decision-making process. With this in mind, we have seen firms distribute a short checklist for selection decisions that includes things like cross-referencing. And we have seen other firms add the importance of cross-referencing to a brief training video or e-learning for hiring managers. Not all decision-makers will use the checklist or follow the training. But some will, and for everyone who does, the overall success rate of new hires in your firm will increase.
B) COLLECT DATA
The second key thing firms can do is to make sure that they collect data at every step of the hiring process. So if you have interview guides, require ratings to be made and – importantly – then centrally collate that data. HR Business Partners or recruiters can be asked to do this to minimise the impact on line leaders, too. The reason why this is important is two-fold. First, by requiring ratings to be made and data to be collected, it requires people to consider something, and as such is a way of steering behaviour. If you want decision-makers to think about the degree of fit between a candidate and the role requirements, or between the candidate and the manager they will be working for, then ask for a rating of this. The second reason it is important is that if you collect this data, it gives you a useful source of information that you can come back to and analyse to improve your selection processes (more on this later). So collect data. A couple of numbers may add process, but it does not add a lot of time for line leaders. And it may not be exactly scientific, but it is a lot better than doing nothing.
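As an illustration of how light-touch this central collation can be, below is a minimal sketch that appends each set of interview ratings to a single shared CSV file. The column names are hypothetical examples rather than a recommended schema; the point is simply that a flat file, captured consistently at every step, is enough to analyse later.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical columns; adapt to whatever your interview guides rate.
FIELDS = ["date", "candidate_id", "role", "interviewer",
          "capability_rating", "role_fit_rating", "manager_fit_rating"]

LOG = Path("selection_ratings.csv")


def record_ratings(row: dict) -> None:
    """Append one interview's ratings, writing the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)


record_ratings({
    "date": date.today().isoformat(),
    "candidate_id": "C-0412",        # hypothetical identifiers
    "role": "Area Sales Manager",
    "interviewer": "HRBP-07",
    "capability_rating": 4,          # e.g. 1-5 scales from the guide
    "role_fit_rating": 3,
    "manager_fit_rating": 5,
})
```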
C) MEASURE FIT
We hinted at this in the previous point, but one of the things you should collect data on is fit – so not whether someone is the best and most capable candidate, but whether they are the right candidate. This does not mean that capability is not important, or that companies should not aspire for every new hire to raise the average capability level of the firm. It just means that if their skills do not fit the role, the context and the company, then no matter how good they are, they are likely to fail. Interestingly, researchers have shown that there are four different types of fit:
• Person-job fit. The degree of fit between a person’s qualities and the requirements of a particular role.
• Person-organisation fit. The degree of fit between a person’s characteristics and the working environment or culture.
• Person-team fit. The degree of fit between a person and the colleagues they will be working most closely with.
• Person-manager fit. The degree of fit between a person and the manager(s) they will be working for.
A review of 172 separate research studies found that each of these four types of fit is important for success, although in slightly different ways. As we might expect, person-job fit is important for predicting performance, productivity, and reduced job stress. Person-organisation fit seems to be the best predictor of commitment, organisational citizenship behaviours, and staff turnover. Unsurprisingly, person-team fit predicts the quality of relationships with co-workers.
THE IMPORTANCE OF FIT
The importance of the level of fit between people’s talents and the demands of their jobs can be seen in a famous study that looked at the impact of General Electric (GE) leaders when they moved to new companies. GE is a particularly interesting example, since it deliberately tried to develop leaders with a range of experiences who would possess generic leadership skills that they could transport into any role. They were the personification of the “Martini” manager, who would be good “any time, any place, anywhere.” The market certainly seemed to believe this, anyway, because in 85 per cent of cases, the hiring company’s stock price rose as soon as it was announced that a CEO from GE had been appointed. The researchers, however, wanted to check whether this faith was warranted. So they categorised both the strategic challenges facing each company and, using résumés, the skill sets of the former GE leaders (distinguishing between different types of leadership experiences). They then divided the CEOs into two groups. In one there was a good match between business need and the leaders’ skill sets, and in the other there was a mismatch. They found that the performance of the businesses where there was a good match with the leader’s skills was over double that of the mismatched group. So leadership skills do not appear to be as transportable as has sometimes been thought, and ensuring good person-job fit pays. Literally.
And finally, person-manager fit predicts both employees’ satisfaction levels and turnover. As the old adage says, people join companies, but leave bosses. There is also some evidence that the importance of these different types of fit for subsequent job performance differs across cultures. Person-organisation and person-job fit seem more important in more individualistic cultures, such as in North America and Europe. And person-team fit and person-manager fit appear more important in more collectivist cultures, such as in East Asia. Researchers are now taking this one step further and looking at the relative importance of the four different types of fit in different companies. And the results seem to suggest that which aspect of fit is most important does indeed vary between companies as well as geographies. The consequence of this is that one simple win for firms is to measure fit, collate the data, and then follow up to see which type of fit is most important for their firm. Ratings of fit are easy to add to interview guides, and the potential pay-off is some really interesting data that may enable you to improve your recruitment processes.
D) DECISION QUALITY AGENTS
Another option we have seen some firms introduce is the idea of decision quality agents. They go by different names in different businesses – Super-Assessors, Selection Supervisors – but the idea is the same: to provide a small number of people with in-depth training in how to make effective selection decisions, and then to stipulate that any selection decision has to involve them. In effect, what this means is that managers meet with one of these individuals for 30 minutes to discuss their selection decision – usually once the pool of candidates has been reduced to two or three people and before a final decision is made.
The role of the decision quality agent is not to rubber-stamp or approve the hiring manager’s decision, but to question and coach the hiring manager in how they make the decision – the criteria they are using, the data they are considering, and the conclusions they are drawing. In many ways, this is similar to the role that HR is supposed to play in the selection process in many firms. But by appointing line leaders as these decision quality agents, it both strengthens capability in the line and can reinforce the perceived importance of this part of the process.
E) DEVIL’S ADVOCATE
The physician, psychologist and inventor Edward de Bono made this one famous. One simple way of improving decision-making is to appoint a Devil’s Advocate – someone whose role it is to question decisions, as a way of trying to improve them. The idea is that semi-formally giving someone this role empowers them to ask difficult questions and increases the chances that if there are lurking doubts about a candidate then they will be voiced. Another way to achieve the same thing is to record doubts about every new hire, along the lines of ‘if they struggle or fail, why might that be?’ This is important both to inform onboarding processes and to give you a data point that you can later go back to if things do not go well with the appointment. These, then, are some of the process points you can use to steer behaviour in decision-making. To ensure certain things are considered, or that certain conversations are had. The emphasis has to be on light touch, so as not to appear bureaucratic. But done in the name of rigour and right decisions, most firms’ cultures will accept at least a little process. And the important thing is to remember to use process to collate data, because as we will now go on to see, data gives you the possibility and the power to improve.
BEWARE OF OVERSIMPLIFYING RESULTS
At present, the market tends to present measurement results – and businesses consequently tend to view them – too simplistically. For example, a typical personality test report will list the various dimensions measured and show the scores obtained by an individual. It will then briefly explain what these scores mean. A particular sales manager might be high in conscientiousness, low in agreeableness and about average in everything else. If it is a really good report, it might point out that conscientiousness is a reasonable predictor of success in sales staff, though slightly less effective for managers. And it may add that having below-average agreeableness is not generally a problem for managers. What such a report will usually not mention is that the combination of high conscientiousness and low agreeableness is not a good sign. There is some evidence that others may see people with these traits as micromanaging and inflexible. So what is frequently not shown is the interaction among the various factors being measured. Even when interactions are shown for the factors assessed by a single test, they are rarely shown for the factors measured by different tests. We struggle, for instance, to think of an intelligence test report that provides advice on the impact of personality profiles on how people use their intellect.
Why the simplistic view? Well, our ability to interpret the interactions between factors is limited by the fact that researchers have not studied them in great depth. Yet the bigger factor here is that vendors tend to present talent data in the simplest format because that is what businesses seem to want. They want as clear and unambiguous a message as possible. People generally believe that they themselves are complex combinations of qualities and characteristics. Yet when it comes to others they often want a very simple box to put them in. This is understandable, too. Managers, who are often the ultimate users of measurement results, are busy enough without having to decipher complex reports. Yet the counterpoint here is that the oversimplification of assessment risks ruining it and undermining its potential value to businesses. At a fundamental level, people are complex, and if we ignore this reality we will inevitably make poor decisions about them.
3. OUTCOME REVIEWS
The final step you need to take to ensure and improve the quality of selection decisions, then, is to review those decisions and whether they worked out. This reviewing is vital to help individual managers and businesses as a whole improve their people decisions. It can help you identify who is great at selection, who is not, and how you can improve both. And it can tell you more about what kinds of people succeed in your firm, and what kinds do not – information that can then be fed back into selection processes to help improve them. Unfortunately, most firms seem to miss this step. One recent survey reported that only 23 per cent of firms check whether selection processes work, and in our experience the real figure is probably below that. Fortunately, some of the solutions here are fairly simple, with firms having two main options.
A) CHECKPOINTS
The idea here is to simply insert a routine check of how new hires and promotees are doing six to nine months down the line. This can be useful to help proactively identify potential issues or where individuals need support. But it also provides an opportunity for HR and line leaders to learn from the selection process and improve expertise and process. For example, one firm we work with always follows up the appointment of new hires with a review meeting involving HR and the hiring manager. The meeting is quick, usually lasting no more than thirty minutes. During the meeting, how the new hire is doing is reviewed, as is the information available at the time of hire, to check whether any potential signs of success or failure were missed. So common questions are: “Is there anything that has surprised us about the new hire, anything we didn’t predict?”, and “Looking back, knowing what we now know about the new hire, what were the strongest indicators back then of how they have subsequently done?”. The idea is not to lay blame if things are not going well, but to have a genuinely curious inquiry with the aim of helping people improve their selection skills. Another firm we work with adds to this process by simply keeping a record of when new hires or promotees fail or do minimally well, and then checking to see if any of the ratings made during the selection process stand out as different or indicative of this group’s subsequent lack of success. Another firm does a similar thing, but by asking managers for a six-month-in performance rating of new appointees. Yet another does this by asking managers to rate how satisfied they are with the new hire after six months. Which brings us to the second thing firms can do.
B) DATA ANALYSIS
As phrases go, ‘data analysis’ sounds grander and sexier than the reality often is. It can conjure images of big systems and complex data. Yet, in reality, a simple spreadsheet and the will to collate data is all you need. In most cases it does not need expensive systems or extra headcount. Advanced statistical techniques are sometimes required, but nine times out of ten, simple averages and bar charts are sufficient. For all the talk of big data, for most companies, small and simple data will do. As for what data to collect, there are a few basics: interview ratings and assessment scores, plus some kind of subsequent performance data. This performance data can simply be end-of-year performance ratings, but could also be things like bonus allocations, whether a new hire leaves within 12 months, or whether they are tagged as ‘high potential’ within a certain period.
The analysis typically then involves asking two simple questions.
WHAT IS THE SUCCESS RATE OF NEW APPOINTEES?
This can be analysed firm-wide, or by role level, country, or business unit.
HOW ARE THE CHARACTERISTICS OF THE NEW APPOINTEES WHO SUCCEED DIFFERENT FROM THOSE WHO DO NOT DO SO WELL?
This is important to help you understand which pieces of data collected in the selection process most predict subsequent success or failure, which in turn can help you better select people going forwards. And again, this can be analysed firm-wide, or by role level, country, or business unit. Data analysis, then, is simply about collating data and then returning to it; pairing it with performance data and checking averages and correlations. It needn’t be complex stuff. For example, a global business recently asked us to help it establish assessment processes to support three key people decisions: the recruitment of new hires, the identification of high-potentials, and selection for promotion. The processes created were not complex. They mostly involved interviews, supported at more junior levels by psychometric tests and at senior levels by individual psychological assessment. But they were implemented with all data centrally collected. As a result, the firm was able to use this simple data to improve its selection decisions.
• They were able to look at the competency ratings of new hires in each business division. This enabled them to check two things: were some divisions attracting stronger candidates than others? And were the qualities of new hires aligned with each unit’s business objectives? Sure enough, two divisions appeared to be attracting lower-quality candidates. Another unit, whose strategy involved fast organic growth,
was hiring relatively risk-averse people. As a result of these findings, all three divisions were able to change their attraction and hiring activities. • They compared the average competency ratings of new hires with those of the people nominated as high-potentials. They found that the new hires had an uncannily similar pattern of strengths and weaknesses to the current employees. This kickstarted a debate in the business about whether it was just employing clones, which eventually led to changes in the hiring process. • They also compared the average ratings of new hires with those of applicants who were not selected. They found that what most distinguished those who were not selected was that they tended to be extroverts and less risk-averse. This reinforced the finding that the company was just employing clones. These findings were all accomplished with simple data and without resorting to expensive systems. But they ultimately helped the business to improve its selection decisions and thereby its ability to deliver its growth strategy.
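To show just how modest the ‘small and simple data’ analysis described above can be, here is a sketch that answers the two questions from the previous section using pandas. It assumes a collated file like the hypothetical ratings log sketched earlier, joined with a six-month performance rating from the checkpoint reviews; the definition of ‘success’ as a rating of 3 or above is an illustrative assumption, not a standard.

```python
import pandas as pd

# Assumes one row per hire: selection ratings plus a 6-month
# performance rating gathered at the checkpoint review (hypothetical
# file and column names).
hires = pd.read_csv("hires_with_outcomes.csv")
hires["success"] = hires["performance_6m"] >= 3  # illustrative cut-off

# Q1: what is the success rate of new appointees?
# (Overall, and broken down by business unit.)
print(f"Overall success rate: {hires['success'].mean():.0%}")
print(hires.groupby("business_unit")["success"].mean())

# Q2: how do the selection ratings of appointees who succeed differ
# from those who do not? Compare mean ratings across the two groups.
rating_cols = ["capability_rating", "role_fit_rating", "manager_fit_rating"]
print(hires.groupby("success")[rating_cols].mean())
```

A gap between the two groups’ mean ratings on any column is a first hint about which pieces of selection data actually predict success in your firm.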
NEXT STEPS
Recently, some research into the effectiveness of leadership development activities found that they were more likely to improve poor performers than high performers – more likely to bring the bottom up, and thus raise the average, than to further raise the heights reached by the best.
It may help to look at attempts to improve selection activities the same way: as being more about driving down the misuse of assessment data and preventing the worst decision errors. That may not sound like much to aim for, but given the cost of poor selection decisions, it is a valuable and admirable goal. The precise processes businesses choose to use will depend upon their size and culture. What makes ensuring proper usage so challenging for some businesses is the fact that it requires consistency. For larger, decentralised or geographically dispersed organisations, this can be tough. The good news is that the three foundations above are all fairly simple and can be implemented in a light-touch manner. They do not take significant time or resources. They do not need to create bureaucracy. And
they do not need to be expensive. All they really need is a little will. Change and improvement in how assessment results are used certainly cannot be achieved overnight. It is a cultural thing. This is particularly so when organisations have a long tradition of using assessments in a certain way. Yet change can come, and the solutions can be straightforward. Given this, and the cost of not doing anything, we are continually surprised by the number of businesses that seem to have a blind spot here. But businesses must act, because investing in assessment solutions without ensuring that they are used properly fundamentally undermines their utility and value. To make headway in improving the success of selection decisions, the decision-making itself needs to be improved.
YSC.COM