Carl-Johan Holmberg on reference checking

Carl-Johan Holmberg on: How structuring your reference checks increases your hiring accuracy

  • 40 minutes
  • Candidate Experience, Talent Acquisition
  • Ep 33

How reliable are reference checks? That depends on how you conduct them. On this episode of How We Hire, we speak to Carl-Johan Holmberg, a researcher for Refapp, a digital reference-checking platform. Carl plays a vital role in improving reference checking and applying digital tools and science to make this often overlooked process more efficient and fair. In this episode, Carl shares his knowledge and insights on how to make reference checking a reliable, useful process for employers wishing to improve their hiring accuracy. 

Key takeaways

  • Should candidates nominate their referees?
  • Gender biases in reference checking and how to minimise discrimination
  • How to achieve structure with reference checking
  • What questions to ask and avoid when running a check

 

On the show

Carl-Johan Holmberg, Researcher at Refapp
Linnea Bywall, Head of People & Operations at Alva Labs

Carl-Johan Holmberg

Carl-Johan Holmberg is a researcher at Refapp, with a deep passion for evidence-based practices in recruitment. Carl holds degrees in work and organisational psychology and work science. He is at the forefront of integrating science into hiring. At Refapp, Carl plays a key role in improving reference checking. He uses digital tools and science to make this often overlooked process more efficient and fair. He lives by the motto, if there is science, why not use it? And he's here to share valuable insights into the evolving world of reference checking.

Linnea Bywall

Linnea Bywall is a former NCAA athlete turned licensed psychologist – and Head of People at Alva Labs. Linnea was recently listed as one of the most inspiring women in tech by TechRound and was featured as one of the 22 Innovative HR Leaders to follow in 2022 by AIHR Academy to Innovate HR. 

From attracting and hiring to onboarding and growing Alva's employees, Linnea's main mission is to change the world of hiring every day by challenging biases in recruitment.

Show notes 

  • Introduction - 2:01
  • Why reference checking is important - 5:00
  • The disparity between what the research says and how companies are using reference checking - 7:28
  • How reliable are reference checks? - 9:13
  • Making sense of validity versus reliability when talking about reference checks - 10:44
  • Should candidates nominate their referees? - 13:30
  • What happens when the referee is a candidate's current employer: ethical dilemmas unpacked - 20:10
  • Gender biases in reference checking - 22:14
  • How to minimise gender biases in reference checking - 24:47
  • How to achieve structure with reference checking - 25:11
  • The difference between digital reference checks and personal checks conducted over the phone - 34:57
  • What questions to ask and omit when conducting a check - 37:32
  • Tips and tricks on running checks - 44:40

How We Hire Podcast Episode 33 Transcript

Carl (00:00):

So when the referees are asked to evaluate candidates using free text responses, there tends to be a gender bias against women. And this bias comes in different forms. So for example, when male candidates are described, the referees tend to use more standout adjectives, such as "he was exceptional at his work". And those kinds of adjectives are less common when female candidates are described. And another example of this gender bias is that female candidates tend to be described with more doubt raising comments. So for example, in one of the studies a referee wrote, "she may not become a superstar, but she's very solid". So on one hand, the referee says that this is a stable candidate. On the other hand, the referee puts in some doubt by saying that she may not become a superstar. And those kinds of doubt raising comments are also more common when referees rate female candidates. And this is regardless of the gender of the referee, which is worth mentioning. So it seems like the gender of the candidate is the factor that contributes to these group differences.

Linnea (01:26):

Welcome to How We Hire, a podcast by Alva Labs, with me, Linnea, licensed psychologist and head of people. This show is for all of you who hire or just find recruitment interesting. In every episode, I will speak with thought leaders from across the globe to learn from their experiences and best practices within hiring, building teams and growing organizations. Our guest on today's episode

Linnea (01:55):

is Carl-Johan Holmberg. Carl-Johan is a

Linnea (01:58):

researcher at Refapp with a deep passion for evidence-based practices

Linnea (02:01):

in recruitment, holding degrees in work and organizational psychology and work science. He is at the forefront of integrating science into hiring. At Refapp, Carl-Johan plays a key role in improving reference checking. He uses digital tools and science to make this often overlooked process more efficient and fair. He lives by the motto, if there is science, why not use it? And he's here to share valuable insights into the evolving world of reference checking. And with that introduction, welcome to How We Hire, Carl-Johan.

Carl (02:33):

Thank you, Linnea. Thank you. What a great way of being introduced.

Linnea (02:38):

What a great saying. If there's science, why not use it? I love that. I might steal that one.

Carl (02:43):

Yeah, you are welcome.

Linnea (02:45):

Now you know everyone will know where I stole it from.

Carl (02:48):

Yes, that's good.

Linnea (02:50):

Can you just start us off by talking a little bit about how you ended up at Refapp and your journey there? I think it's such a fascinating story.

Carl (02:57):

Yeah, yeah, of course. So for those of you who are not familiar with Refapp, we offer software for digital reference checking. And it actually started when I was studying organizational psychology at university and I read the famous scientific paper by Schmidt and Hunter, which was published in the late 1990s. And I think we will come back to that paper later in this episode. But anyway, in this paper I saw that compared to other selection methods, reference checking has relatively low predictive validity on work performance. In other words, if you want to predict performance during a recruitment process, reference checking is a poor way of doing so. And being a curious individual, I googled different providers of reference checking solutions and I found Refapp, and on their website they said: improve validity by using Refapp. I contacted them and asked, how do you know that you actually improve the validity? And they replied with something like, we don't really know that, but we are assuming that we are increasing the validity. And since that day I have read every paper I could find on reference checking, and I ended up working here at Refapp. So it actually started with me being skeptical about reference checking in general. That is my background.

Linnea (04:33):

Your claim to fame. I love that the candidate who reached out and complained about the product, without even applying for a job, got the job.

Carl (04:43):

Exactly, exactly.

Linnea (04:45):

Great. Wait, okay. So the focus of today's episode will obviously be geeking out and nerding out about reference checking. So can you just start us off by talking a little bit about why reference checking is actually important?

Carl (05:00):

That is a good question, because like I said, it has this rumor of having low validity in general, and that is also true among scientists. So when I talk to the few scientists who actually conduct these kinds of studies, they say that when they go to a conference and meet other researchers, there is this attitude against them. They hear things like, everybody knows that reference checking is bad, it has poor validity and so on. But still, after the employment interview, it is the most common method to use in personnel selection. And from my perspective, reference checking is valuable because it is usually the only method where the information about the candidates comes from someone other than the candidates themselves. In all of the other process steps, the candidates themselves are the ones providing the data about the candidates that we use in the recruitment process.

(06:01):

So I think that is valuable for several reasons. There are certain aspects of work performance that the candidates may not be inclined to talk about themselves, for example, what we usually call counterproductive work behaviors. So if you are having an employment interview and you ask the candidate, are you usually late for work, no one would answer yes to that question. That kind of information can actually be collected by contacting former employers. And I think that could serve as an add-on to what you are usually doing when you are applying integrity testing or running criminal background checks. Adding reference checking can help you protect your organization against that kind of unwanted counterproductive behavior.

Linnea (06:52):

I think, to be super transparent here, I mean, I'm not a researcher, but I'm definitely one of those who have talked about reference checks as not being scientific enough and not worth the time. And I think what's interesting here is, in your mind, has the real world outside of research been aware of the fact that it isn't that helpful, or have they felt that it actually is helpful? How come there is this gap, where all the researchers said it's not helpful, but everyone is still using it, and now we're finding that maybe it actually is more helpful than we thought? What's your take on that imbalance?

Carl (07:28):

That is a very good question. I think many organizations do it out of habit. They have always conducted reference checks. It is a very old-fashioned selection method, and I think many do it because they have always done it. And compared to other methods, it is fairly cheap to just pick up the phone and call someone. I think it can also be related to the fact that you usually conduct references as the final step. Many times you have already decided which candidate you want to hire, and you want to assure yourself that it was the right choice. It is almost like a confirmation bias in many cases: I think this candidate will be a good fit for this role, I get the impression that he or she is very structured, cooperative and so forth, but I want to confirm that information with some other source, and then I go to the references to do that. And that is also a problem, I think, because I have acted as a referee many times myself, being called on my phone, and the recruiter says, oh, I have such good vibes about this person, can you just confirm that she's really structured? Okay.

Linnea (08:44):

You don't mind it?

Carl (08:44):

Exactly. So it is basically, in many cases I would say, a way of acting out your confirmation bias on the candidates.

Linnea (08:53):

I think that's a really good way to put it. We're used to doing it, and we may not do it in an optimal way.

Carl (09:02):

No,

Linnea (09:03):

And I mean, obviously we're going to dive into how one should do it, but before that, or maybe that's going backwards, there is the elephant in the room: how reliable are reference checks?

Carl (09:13):

Yeah, it's a great question. So like I said before, according to the article by Schmidt and Hunter, which is widely cited and still used in academia today, reference checks have a validity of r = .25 or something. And a prerequisite for validity is of course reliability. And a problem with traditional reference checking, where you are calling the referees, is that the number of referees you can contact is limited by the time you can spend talking on the phone. And here something has changed with the digitalization of reference checking. By using software like Refapp, you can collect data from a greater number of referees without adding to the workload of the recruiter. Analyses of our own data, and also previous studies, show that the reliability increases with the number of raters, which should not come as a surprise. But before, there was a limitation to how many referees you could contact. With the digital tools we have today, it is much easier to gain higher reliability by collecting data from a larger number of raters.
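Carl's point that reliability grows with the number of raters is the intuition behind the classic Spearman-Brown prophecy formula from psychometrics. As a minimal sketch (the 0.30 single-rater reliability is an assumed figure for illustration, not a number from the episode):

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Predicted reliability of the average of k parallel raters,
    given the reliability r_single of a single rater (Spearman-Brown)."""
    return k * r_single / (1 + (k - 1) * r_single)

# Assumed single-rater reliability of 0.30, purely for illustration.
for k in (1, 2, 4, 8):
    print(f"{k} referees -> reliability {spearman_brown(0.30, k):.2f}")
# 1 -> 0.30, 2 -> 0.46, 4 -> 0.63, 8 -> 0.77
```

The gains diminish as referees are added but remain real, which is why tools that make it cheap to reach more referees can raise reliability without extra recruiter time.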

Linnea (10:32):

And since the rest of the world are not researchers, should we just quickly explain validity versus reliability, what those two concepts mean?

Carl (10:44):

Yeah. Great. So validity in this case refers to what extent you are actually measuring what you intend to measure. Reliability, as I said, is a prerequisite for validity, and that refers to how consistent and trustworthy the data is.

Linnea (11:01):

Have you heard the analogy of the dartboard, where validity is that you hit the bullseye? Meaning that you measure what you intend to measure and that it's the right thing. But reliability is that when you throw several darts, they hit the same spot. So if you have a very scattered dartboard with darts all over the place, the reliability is really low.

Carl (11:25):

That is a great analogy. I haven't heard it before. It's good,

Linnea (11:28):

Right? Yeah. So if you have high reliability but low validity, it would mean that all the darts hit the same spot, but very far out from the bullseye. So you can't have one without the other; you cannot have true validity if you don't also have reliability.

Carl (11:42):

That is a great way of putting it. So since you stole my motto in the beginning, I will steal this reliability and validity analogy

Linnea (11:50):

Be my guest. Thank you. Okay. But I think that's what you mentioned then: with the digitalization of reference checking, we can vastly increase the reliability, meaning how certain we are that we would get the same result all over again, so that we can truly trust it. And I think a really interesting aspect is how this is viewed, or happening, in the real world. You mentioned someone calls you up and just asks you for a reference, and you don't really have time, and you're standing in the bus queue and it's super awkward, and you as a recruiter have to chase these reference calls. But I have given a lot of references over the last two, three years, and almost all of them have been digital, the majority, I would say like 95%, in Refapp.

Carl (12:41):

Wow. And what is your experience since you have experience both the old and the new way of doing it?

Linnea (12:47):

No, I think I can do it whenever I want. I don't have to talk to someone. I'm surprisingly introverted for being super extroverted; I love to interact with people, but when it comes to those things, I don't want people to interrupt me. So I love it. I'm making commercials here. But then, when we have tools like LinkedIn where we can see what someone has done in the past and who someone has worked with, how can you best pick who should give the reference? I mean, the reality is a candidate will rarely give you a shitty reference person. They will give someone that likes them, right? Yeah,

Carl (13:30):

Exactly. So that is a great question. So let's start with your comment here about the candidate nominating their own referees. That is one of the problems that people often address with reference checking: oh, you can't trust the referee, because the candidate has cherry-picked the ones who are going to say positive things about the candidate. And then you can think about the other process steps again. Like in the employment interview: if the referees are inclined to talk positively about the candidate, how inclined aren't the candidates to present themselves in a positive way? So that is one way of thinking about it, but still, that is a problem. And recent studies show that even though the candidates nominate their own referees, there is still enough variance in the data to actually predict important future outcomes such as turnover or work performance. But of course, the validity would probably be even higher if the candidate wasn't allowed to cherry-pick their own referees.

(14:37):

So I think that, like you said, with social networks like LinkedIn, recruiters now have much better tools to actually be part of the nominating process. You can see what previous managers, colleagues, and also customers the candidate has interacted and worked with. And then you can talk to the candidate and say, I have identified these persons, I would like to have them as referees. And if the candidate says, no, I don't want that, then you can have a conversation about why. Because there is a phenomenon called backdoor referencing, and that is when the recruiter contacts these people without the knowledge of the candidate. There is a study from the United States, which was published only last year I think, where they asked recruiters, how often do you conduct a backdoor reference check? A majority, I think it was 59%, of the recruiters said that they had conducted backdoor reference checks, and also that they put a higher value on that information compared to the value you put on referees that the candidate has nominated. So it seems like, from a face validity perspective, recruiters think that these so-called backdoor references can provide additional value, since the candidate has not nominated them him- or herself. But there is this ethical perspective: perhaps you can identify people on LinkedIn, but you probably want to talk with the candidate about contacting them, because the candidate may not be open about looking for a new job, for example.

Linnea (16:27):

And I think 59% is a shockingly high number. Maybe I shouldn't be shocked, but I am. Can we just dive a little bit into the ethical dilemma here? One thing with GDPR: if a candidate decides that they want to have all the data that you have on them, you obviously have to share that you have done a backdoor reference call to someone that they didn't know you contacted, and you have to share what that person said. So that's a little bit uncomfortable, but in most scenarios that might not happen. But it's still, as you said, an ethical dilemma. What's your take on this?

Carl (17:02):

Yeah. I do a lot of lectures on the HR programs at universities in Sweden, and I usually raise this question to get a discussion going in the classroom. And I think the most common ethical dilemma is that the candidate might not be unemployed. He or she may have employment at a current employer, and if the recruiter contacts the present manager of the candidate, that could cause a lot of inconvenience if the candidate is not open about looking for a new job, and the candidate then fails to proceed in the recruitment process and has to continue going to this workplace where the manager now knows that the candidate does not want to be. And maybe, in the end, it is a good thing, I don't know, that the manager finds out that the candidate is not happy with the current employment situation. However, I don't think that information should come from a recruiter calling the manager without the knowledge of the candidate. So what do you think about this?

Linnea (18:14):

I really agree, and I think my penny for the discussion would be that it's also about transparency. For me, one of the most important aspects of a good candidate experience is that we are transparent with what's happening, that we're not trying to fool the candidate, that we're not trying to set them up for failure. And for me, this goes in the bucket of trying to fool the candidate, doing something that you're not completely honest about. And regardless of whether it creates a problem or not, I just think that it's a delicate situation as is, because it's often the employer that has the upper hand. They have the job, and the candidate has less power in that situation, because they're asking for the job. So I think you need to be super delicate in how you treat the candidate. And I don't know, for me it's like: if you couldn't write it out in a job ad, then why are you doing it?

(19:12):

And then, for me, the rationale behind doing something that you can't tell the candidate about needs to be something out of this world to justify it. I think it's about transparency and moral compass, but also about how it feels in my gut: if I were to apply for a job and someone did this to me... not everyone likes me, and that's probably the reality, and maybe that's a good thing, but you're still not able to explain. Because maybe, well, we did not get along because of X, Y, and Z, so of course this person is going to say these things about me. Or, yeah, we got along really, really well, so of course this person is going to say this stuff about me. I think just not being in control of that is a little bit scary. But then I think the point that you raised is obviously the worst scenario, where you would actually blow the cover of a candidate. Yeah,

Carl (20:10):

Exactly. And I also like that you mentioned the candidate experience perspective, which is super important. And therefore I think the golden middle way is that you can look up other people than the ones the candidate has nominated, but for God's sake, ask the candidate for permission to contact them. And then, like you said, if the candidate says, I don't want you to contact this person because we didn't get along, you can have this conversation about it. But I don't think you should do it behind the back of the candidate. And also, going back to the scientific literature: to my knowledge, there is only this single article published on this phenomenon of backdoor reference checking, and that was from a recruiter's perspective, to identify the prevalence of the practice. So if there are any researchers listening to this, or students who are going to write your essays in the future, this could be something to look into from a candidate perspective: the practice of backdoor reference checking and the moral aspects of it.

Linnea (21:11):

And I mean, to the point of, say that it would have 10 times higher validity, I'm just making things up now, but say that would be the case, then it would be obvious. I'm like, oh well, this is the way to go. I think there's still a world where you can say, hey, you know what? We use backdoor references. We are going to contact people, we won't tell you who, but we will make sure that it's not anyone at your current employer, and now you know that it will happen. And that's still transparency. So I think that if it would be something that you would like to justify, I'm sure there are ways around it. But I mean, to your point of, if there's science, why not use it? Until there is science, maybe chill.

Carl (21:55):

Good point.

Linnea (21:59):

Still feels like we're on the same page here. So, one thing that I find really interesting here, and we talked a little bit about this before we hit record: gender biases in reference checks. Tell me all about it.

Carl (22:14):

Of course I will. And this is something that, again, when I talk about this, people get really fascinated and their jaws drop when they hear about it. And this is something that...

Linnea (22:25):

If there's a sound like clunking on the table, it will be my jaw.

Carl (22:29):

Yeah, okay. Yeah, great. So this is something that has been replicated several times in different studies, and it has to do with free text responses from the referees. So when the referees are asked to evaluate candidates using free text responses, there tends to be a gender bias against women. And this bias comes in different forms. So for example, when male candidates are described, the referees tend to use more standout adjectives, such as "he was exceptional at his work". And those kinds of adjectives are less common when female candidates are described. And another example of this gender bias is that female candidates tend to be described with more doubt raising comments. So for example, in one of these studies a referee wrote, "she may not become a superstar, but she's very solid". So on one hand the referee says that this is a stable candidate; on the other hand, the referee puts in some doubt by saying that she may not become a superstar. And those kinds of doubt raising comments are also more common when referees rate female candidates. This is regardless of the gender of the referee, which is worth mentioning. It seems like the gender of the candidate is the factor that contributes to these group differences.

Linnea (24:05):

In this research, are there any models of explanation that you can share?

Carl (24:12):

So it's hard, of course, to say for sure what the causal relationship is, but it probably has to do with stereotypes of men and women. When you can freely write about a person, maybe you associate men and women with different traits or attributes, and that comes into play when you are asked to write a longer text about that person. So one explanation might be that gender stereotypes are a factor here.

Linnea (24:43):

And what's the conclusion? How can we minimize this risk?

Carl (24:47):

Yeah, great. So this was actually found last year in a large study where they looked at over 1 million digital structured reference checks. And I haven't mentioned this before, but if we look at the validity of reference checks, it seems to improve a lot if you are using structured reference checks compared to unstructured reference checks.

Linnea (25:08):

Should we double click on that? What's the difference?

Carl (25:11):

Yeah, so the difference is almost like comparing an unstructured interview with a structured interview. In reference checking, to achieve structure, you need to start with a proper job analysis and identify the characteristics that you want to measure in the reference check procedure. That goes for every selection method, of course, because if you do not ask questions based on a job analysis, there is a risk that the information you collect from the referees is irrelevant and useless. The second criterion for achieving structure is that when you have conducted your job analysis, you want to measure the identified characteristics with validated questions. And the research so far suggests that in reference checking you should use work-related behavioral questions, because since the rater is someone other than the candidate, the behaviors you are asking about need to be easily observed. The third criterion, and this is where the gender bias comes in, is that these scientifically based questions should be scored on a standardized scale. So for example, you might want to use a Likert scale with numeric options instead of allowing the respondent to use free text answers.

Linnea (26:36):

So like one to three, one to five, whatever.

Carl (26:39):

Yeah, exactly. So, using numeric scales: the study I talked about, which was published last year, analyzed data from over 1 million structured reference checks where they used numerical scales instead of free text responses. They found that the gender of the candidates did not affect the ratings, and this was also true in gender-stereotyped occupations such as truck drivers and nurses. So it seems like this gender bias, which appears to be prevalent in free text references, is mitigated or maybe even eliminated when you use standardized numeric scoring scales instead of allowing the rater to freely express his or her opinion about the candidates.
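To make the idea of standardized numeric scoring concrete, here is a hypothetical sketch (the question texts and scores are invented for illustration, not taken from the episode or any product): every referee answers the same behavioral questions on a 1-to-5 Likert-style scale, and each candidate is summarized by the mean score per question, so candidates can be compared on the same footing.

```python
from statistics import mean

# Invented example questions; in practice these come from a job analysis.
QUESTIONS = [
    "Delivered work on agreed deadlines",
    "Cooperated constructively with colleagues",
]

def aggregate(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Mean score per question across all referees (1-5 Likert scale)."""
    return {q: round(mean(scores), 2) for q, scores in ratings.items()}

# Three referees rated the same candidate on the same questions and scale.
candidate_a = aggregate({
    QUESTIONS[0]: [5, 4, 5],
    QUESTIONS[1]: [4, 4, 3],
})
print(candidate_a)
```

Because every referee answers the same questions on the same scale, the numbers are comparable across referees and across candidates, which is exactly what free text responses cannot guarantee.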

Linnea (27:32):

That's so interesting. And if we allow ourselves to zoom out a little bit, I think this explains so well why structure and a thought-through process are the most important things when you want to make accurate and efficient hires. You need to start with the job analysis, know what you're looking for, and decide beforehand what questions you are going to ask and how you are going to assess and rate the answers, not just let a candidate speak freely and then use your gut feeling about how the answer was. That's true for reference checks, as you mentioned, but also for interviews, case exercises, and work sample tests. I think this is just a mindset that we need to truly, truly embrace.

Carl (28:20):

I totally agree, and I think in many of the other selection methods this has been adopted by many organizations, but for some reason it seems like reference checking hasn't been included in that mindset. You still want to read between the lines when you're calling a referee, but why would you want to do that? It is allowing your own bias to contaminate the data. Using structure throughout the whole process is very important, for several reasons of course.

Linnea (28:51):

Yeah, and it's fascinating what you say about allowing yourself to read between the lines. It's like we're magically going to find some dirt on the candidates, to try to fool them and try to prove that they're not right for this position, rather than going into the recruitment process with the assumption that I'm going to meet great candidates and I'm going to prepare them as well as I can so that they can show their best versions. I think it's just such a big difference in how you will tackle your hiring process with these two different mindsets.

Carl (29:23):

Exactly.

Linnea (29:24):

Okay, so the conclusion we can make is: structure is key, and limit the free text. Well, both: avoid unstructured references, I mean that's maybe the first step, but also limit the amount of free text answers to ensure that you treat every candidate fairly. I think that's a good takeaway.

Carl (29:41):

Exactly. And another criterion for achieving structure in the reference checking process specifically is that you should ask the same questions to every referee. Again, this is to ensure that we treat each candidate equally, and also that you get data that is comparable across the candidates. And regarding how you phrase your questions and the standardized scoring scales: apart from mitigating bias by reducing the free text options, it is also the case that if you are using open-ended free text responses when you ask about things like counterproductive work behavior, which we mentioned before, the referees aren't going to write about those bad behaviors if you are not pushing them and asking questions about specific behaviors. So that is also something you want to consider when you are designing your questionnaire: apart from mitigating gender bias and other kinds of bias by removing the free text responses, you do want to ask about specific behaviors and use a standardized scale, so that the referee can score whether, or to what extent, the candidate has displayed this kind of behavior. That is another reason that you get more informative data by using standardized scales rather than free text responses.

Linnea (31:14):

And I think what you raised there is also that if we can be explicit about what we want to know, it's going to be a lot easier for the reference giver to actually provide us with the right information; it's hard for them to guess what we're looking for. But also, going back to what you said about the same questions for all candidates: I mean, that's music to my ears. For me at least, that's such a given practice when it comes to hiring, that we need to give everyone a fair chance to prove themselves, and that we can compare apples with apples, not apples with pears. But why is it that reference checking is a slightly different beast, where it feels more allowed to, as you say, read between the lines and go deeper and truly understand the last missing pieces of the puzzle? Is it because it is the only method where we talk to other people? Is it because it's the final step? What's creating this 'let's go rogue' feeling?

Carl (32:15):

Yeah, great question, and I don't have a definite answer, but I think it's a combination of those two factors. And probably the feeling that you want someone else to confirm the picture you have painted of the candidate is a reason why reference checking is placed at the final step: by then you have had time to create your own picture of the candidate, and at the final step, before you hire this person, you just want another outsider to confirm your own ideas about them.

Linnea (32:50):

Which is of course super comfortable, because hiring is super scary and it's really hard to make a decision, and if someone just confirms what I think, it's going to be a lot easier. This might be the worst description of references ever made, but it feels like the last party of the season: you have been so diligent and done so well for such a long time, and now you just feel like you need to go out and go crazy. Sometimes reference checking feels like that last party of the season. I've been structured throughout the process, and now I'm going to go rogue. I don't know if you want to use that analogy, but feel free to steal it, just like the dartboard one. Yeah,

Carl (33:38):

Thank you. I probably will. Another reason, I think, why you go wild and crazy on reference checking and throw out the structure you had in the previous selection methods is this: what if the referee says that this is a bad candidate? You don't want to hear that, because then you have to repeat the entire process that you have put so much time and money into, and on a final call someone is going to tell you not to hire this person. You don't want to go through all of that again. So I think that could also play a role in this bizarre display of confirmation bias: you really don't want to hear from a referee that this is a bad hire, because that would mean a lot of extra work finding a new candidate and dragging them through the entire process.

Linnea (34:30):

That's like getting too drunk too early at the last party of the season, so you miss the entire party. Yeah, the analogy works.

Carl (34:39):

It does, yeah.

Linnea (34:41):

Okay. So we have talked a little bit about digital reference checks, but can you just double-click on the difference between digital reference checks and the old-fashioned way of picking up the phone and calling someone?

Carl (34:54):

So the main argument for making this shift, I think, is the time you save. Like we talked about earlier, it is very time-consuming to try to call the referee and make an appointment, and usually the referees want to answer questions about the candidate outside office hours, when the recruiter isn't working. A digital format allows the referee to answer on demand, whenever it suits them. And also, like I mentioned before, the gain is that you can collect data from a larger number of referees without adding to the workload, and thus increase the reliability of the data. But then there are aspects beyond the time aspect and the reliability aspect, and one is that I think a digital format facilitates structure. As we talked about before, if you want to use a standardized scoring scale, it can be kind of tricky to do that in a telephone call and say things like, I'm going to ask you a question.

(36:01):

You are going to respond with a number between one and five, where one equals blah, blah, blah. That could be kind of difficult. So doing it in a digital way will facilitate the degree of structure in the data collection. There are also some interesting scientific findings suggesting that people are less likely to lie when they provide written digital information, for example in an email, compared to talking to someone on the telephone. This is derived from studies within the field of human-computer interaction. Again, this has not been tested in the context of reference checking, so that is another suggestion if someone wants to conduct that study

Linnea (36:50):

Out there

Carl (36:50):

Yeah, out there. However, in general, people are more likely to lie in a telephone call than in an email, and we have reason to believe that may also be the case in reference checking.

Linnea (37:04):

And I think that is a win-win-win, where it's smoother for the recruiters, smoother for the reference giver, more reliable, and also more trustworthy. Yeah, that's a pretty sweet deal.

Carl (37:18):

It is.

Linnea (37:19):

Okay. So should we then dive into types of questions? Are there particularly smart questions, or areas we should or shouldn't cover, in a reference check?

Carl (37:32):

Yeah, that is a good question, and something our clients usually ask Refapp about. It very much depends on the purpose of the reference check, because some organizations want to use the reference check as a validation of the previous data collection. For example, they want to confirm that the candidate actually was employed at this company between these years, and that he or she had this kind of responsibility, and so forth. That is one way of looking at it, and one purpose: to just confirm the information the candidate has provided. And if you look at the scientific studies on reference checking, the gold standard of a validity study is to look at the predictive validity, that is, how well the score from this measurement predicts future work performance. That is usually what scientists are interested in.

(38:32):

And then you can see that if you want to predict task performance using reference checking, it seems like a good idea, like I mentioned before, to ask questions about work-related behavior. The questions that have been used in these studies mainly come from observer ratings of personality, so basically the Big Five, the five-factor model, using the same kind of items you would use in a self-reported personality inventory, but from an observer perspective. However, it should be said that only a few studies have tried this out, and only one study looked at the incremental validity. That's a

Linnea (39:16):

Hard word. What's that?

Carl (39:17):

So that is, if you have different selection methods, let's say a self-report of personality and an observer rating of personality: does the observer rating add validity and explain something that the self-report does not already explain? Does it actually add additional value, or are we just collecting the same data once again? There is one study that looked at the incremental validity of reference checks when a measure of general mental ability is also used in the process. Those constructs were weakly correlated, which suggests that yes, reference checks do add value beyond the general mental ability test. However, it would of course be interesting to look at a situation where the candidates do a self-report of personality and you then use observer ratings in the reference check, but to my knowledge that has not been done yet. So collecting data using observer ratings of work-related behavior, those kinds of questions, seems to be a good way to go. But I also want to mention one thing I find problematic when we look at the studies: almost every study uses task performance as the criterion, that is, what you want to predict. But as I mentioned earlier, there are other aspects of work performance, like counterproductive work behavior or organizational citizenship behavior. And I think that

Linnea (40:52):

Can you just explain those two concepts if someone doesn't know what those are?

Carl (40:55):

Yeah. So counterproductive work behavior refers to intentional behavior that violates organizational norms and could cause harm to the organization, its employees, and also customers. And then we have organizational citizenship behavior, which includes doing things that aren't required of you according to your official employment agreement. You may help people who have a lot on their plate, voluntarily introduce and onboard new members of the organization, clean up the kitchen area, or whatever: you voluntarily do things that go beyond your formal work tasks. And I think that in the reference checking process, again, looking for counterproductive work behaviors is very important, because like I mentioned before, the candidate will probably not talk about those behaviors even if you ask about them in an interview. And then we have integrity tests and criminal background checks and so forth.

(42:03):

But we have seen, in the United States and in Sweden, that if you fail to do a proper reference check and hire an individual who displays counterproductive behaviors, it can be very costly for your organization. And then you find out that you probably could have stopped it if you had just contacted some former employers, who would probably have shared with you what kind of behaviors this individual displayed in previous organizations. And again, even though this is the second most common selection method after the employment interview, very few scientists look at reference checks. That is kind of sad, because I think we need more evidence to help us understand what kind of information we should collect in our reference checks, how we should collect it, and how to develop this selection method in general.

Linnea (43:02):

But if I'm going to sum up what you just said, it sounds like, first, you need to know the purpose. For me that goes back to intent in every activity: have a clear purpose for why you're using references. Then base the content of the references on numerical scales and clear behavioral questions that are linked to the job description or evaluation criteria, and ask about task performance, counterproductive work behaviors, and organizational citizenship behaviors. I think that sums it up quite well.

Carl (43:33):

And I also want to add something. We were talking earlier about mindset in the recruitment process, where you want to think about structure, and I'd like to add that you should think about the different methods as means of data collection. In the reference checking process, you are collecting data from someone other than the candidate. What kind of information do you think the candidate will not provide, but the referees could provide you with? So, starting from the job analysis of course, and beyond asking certain types of questions, you should ask yourself: what kind of information do I want to know that I probably can't retrieve from the candidates themselves?

Linnea (44:17):

Yeah, great point. As long as it's the same information from all candidates, it's a great addition. So, it's time to wrap up. I think we've learned a lot about reference checking in this episode. Carl-Johan, on a closing note, I would love to hear: what's your advice to someone who wants to improve their ways of working when it comes to reference checks? What's step one in your book?

Carl (44:40):

Yeah, again, step one in every process, I think, is to start with a proper job analysis, because in my experience this is often overlooked in organizations. Usually you have a manager who is cherry-picking different competencies that sound good to have in your organization, and then the entire recruitment process, reference checking included, is based on that often arbitrary job analysis. So I think that is something we need to get better at.

Linnea (45:14):

Yeah, that's a great point. Start from the job analysis and work your way through to the closing party of the year, the reference checks. Thank you so much for joining How We Hire, and I can obviously recommend that anyone listening reach out to you, Carl-Johan, if they want to continue this discussion or start researching reference checks. I mean, of course, you have so many projects.

Carl (45:34):

I do. Yeah, feel free to contact me.

Linnea (45:36):

Amazing.

Carl (45:37):

And thanks for having me. It was a pleasure.

Linnea (45:39):

Pleasure's all mine. Thank you everyone for listening. I hope you tune in for the next episode of How We Hire. So long!