Hiring and firing are hot!
Since the start of the pandemic, the state of the labor market has been a topic of constant attention: where can we find enough people? How can we offer a great candidate experience? Where can we find people with the right skill sets and certifications? Why are people quitting and not coming back to work? Do they even want to work?
It all started with the “Great Resignation” and ever since, newspapers and magazines have had a field day with labor trends, particularly around talent acquisition. I can’t remember a time when hiring and firing demanded so much attention. Last week I shared this picture on LinkedIn and it received a lot of feedback – you’ll notice that every trend is either Great or Quiet ;).
It’s no secret that employers are frantically trying to attract the right people, and that hiring teams are busy like never before. And even though an economic downturn or a recession might provide some temporary relief, the tough hiring situation will persist for the foreseeable future.
The world’s demographic developments support this view:
- in Europe, North America and Japan, working age populations are shrinking and it will be difficult to find enough people (quantity)
- and globally, there aren’t enough people with the right skill sets to support the digital transformation of the 2020s (quality)
The job market might not be as hot as it was a few months ago, but many job seekers have options. Yes, there are layoffs in the tech industry, but so far it seems fairly contained. It’s an opportunity for companies in other industries to finally hire the tech profiles they’ve been trying to employ for so long.
How to hire?
Last week was LinkedIn’s Talent Connect, and I encourage you to watch the keynote. CEO Ryan Roslansky shared some great charts about the current state of the job market, and what applicants on LinkedIn value most. Two slides grabbed my attention:
- Only 14% of vacancies are remote, yet they get 51%(!) of all applications
- HR professionals are changing jobs more than ever before
Why is it so difficult to find the right people? I remember the introduction of the first Applicant Tracking Systems (ATS). One of the main benefits of an ATS was the Talent Pool: the functionality to store the resumes of candidates you couldn’t immediately place, and develop a relationship with them through nurturing. By occasionally keeping in touch with them, they would get to know your company, and form an excellent base for all your future hiring needs. Yet whenever I bring up the topic of the Talent Pool, it seems that recruitment leaders have forgotten all about this reservoir of candidate data. Relationship building hasn’t happened, and now it is too late. Or is it? Did your company build a talent pool? I’m really curious whether this has become an obsolete feature. Let me know if you run a successful candidate relationship program.
How to choose?
So if we haven’t developed deep relationships with candidates, how do companies choose new hires? Well, that’s where technology comes in. You can have the ATS search among the candidates who responded to a vacancy, and create a short list for you. But how that short list is generated is now starting to attract attention. Because in recent years, the role of artificial intelligence (AI) has grown. Modern solutions include AI functionality for a variety of functional applications.
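To make the opacity concrete, here is a deliberately naive sketch of how an ATS might generate a short list: rank applicants by keyword overlap with the vacancy. Real AI-based ranking is far more sophisticated, and the names and resumes below are hypothetical, but even this toy version shows how the choice of criteria (here, the keyword list) silently shapes who surfaces.

```python
# Toy shortlist generator: rank applicants by how many vacancy keywords
# appear in their resume text. All data below is made up for illustration.

def rank_applicants(vacancy_keywords, applicants, top_n=2):
    """applicants: dict of name -> resume text. Returns the top_n names
    ranked by keyword overlap with the vacancy."""
    def score(resume):
        words = set(resume.lower().split())
        return len(words & vacancy_keywords)
    ranked = sorted(applicants, key=lambda name: score(applicants[name]),
                    reverse=True)
    return ranked[:top_n]

keywords = {"python", "sql", "recruiting"}
applicants = {
    "alice": "Python and SQL developer",   # matches 2 keywords
    "bob":   "Seasoned Java engineer",     # matches 0 keywords
    "carol": "SQL analyst",                # matches 1 keyword
}
print(rank_applicants(keywords, applicants))  # → ['alice', 'carol']
```

Notice that "bob" never reaches a recruiter at all: whoever chose the keyword set effectively made that decision, which is exactly the kind of hidden criterion regulators now want surfaced.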
These AI-based hiring solutions promise to solve two key challenges recruitment professionals currently face:
- the increasing workload as a result of high volume recruitment
- the pressure to fulfill corporate diversity, equality, and inclusion (DEI) goals through hiring
And that gets us to the issue of hiring bias. When you let AI select your short list, how do you know that it applied unbiased criteria? How are potential candidates added to the short list? How did the AI tool screen candidates? What do you know about those selection criteria anyway? Because you will now have to know the answers to those questions.
New York City is introducing a law (No. 1894-A) that goes into effect on January 1, 2023 and targets Artificial Intelligence based hiring. Employers that utilize AI decision-making tools in their hiring practices must inform applicants that they do so. Candidates have the right to request an alternative process or accommodation. Employers must also conduct independent bias audits to ensure that these tools do not have a discriminatory impact on candidates.
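What does such a bias audit actually look at? A core ingredient is comparing selection rates across demographic groups and flagging large gaps. The sketch below illustrates that idea with hypothetical data; the 0.8 threshold is the long-standing "four-fifths rule" used in US adverse-impact analysis, not a quote from the NYC law itself.

```python
# Minimal sketch of a selection-rate / impact-ratio check, the kind of
# comparison a bias audit performs. All candidate data is hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of booleans (selected or not)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below 0.8 (the 'four-fifths rule') is a common red flag."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = {
    "group_a": [True, True, False, True, False],    # 60% selected
    "group_b": [True, False, False, False, False],  # 20% selected
}
rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
print(ratios)  # group_b's ratio of ~0.33 is far below 0.8: a red flag
```

An audit would run this kind of analysis on real hiring outcomes, per protected category, and the employer would have to explain any gap it reveals.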
And while you might think this won’t affect you, consider this: the EU is also working on regulating AI, and legislation will be brought to a vote next year. Based on the draft, HR tools that make decisions on hiring, promoting and firing by using AI algorithms will be considered high-risk AI systems. Users will have to prove that these systems prevent algorithmic biases and ensure transparency. If they don’t, fines can be imposed.
Can you explain it to a judge?
Just last week, the information commissioner in the UK warned companies to steer clear of “emotional analysis” technologies or face fines, because of the “pseudoscientific” nature of the field. Biometric attempts to detect emotions and other character traits are often used in the talent acquisition process, e.g. to detect how “open” a candidate is, or if they are a good match with the company culture. In those cases, the candidate sits in front of a screen and completes some exercises, while the camera tracks facial expressions and scores them. Companies that make important decisions, like hiring, based on these technologies could be liable for the consequences.
And we’ve already seen that there are consequences:
- Three women, who worked for Estée Lauder’s subsidiary MAC, were made redundant, partly on the basis of automated judgment by a computer. The company agreed to an out-of-court settlement.
- A group of 60 Facebook contractors were reportedly told not to come back after being selected “at random” by an algorithm.
- A Dutch court ordered Uber to reinstate British drivers who were fired based on incorrect assessments made by an algorithm.
As AI becomes more sophisticated, workers fear that it will be used for more serious, high-risk decisions, such as using performance metrics to determine who should get promoted or get fired. And even if a human is involved, that person might only perform a menial task, such as sending or signing the decision document. In other words, a person is involved in the outcome of the decision, but not in the decision itself. And that’s a problem. Because employees will take their employer to court when they feel mistreated. And the first question that judge will ask is: can you explain your decision?
We’ve seen several cases (Amazon, Google) of hiring discrimination by algorithm. Often, this happens as an unintentional side-effect when teams use systems that fail to account for racial or other bias. In almost all cases, the algorithms made gender-biased decisions and were discontinued as a result. The fundamental question is: can systems even make these decisions?
Researchers from Cambridge University recently published a study titled “Does AI Debias Recruitment?” Spoiler alert: it does not. The researchers even go so far as to state that most AI-powered tools are misleading in their claims that they enable unbiased hiring.
Cambridge undergraduates released an online simulation application that I encourage you to try. The app applies the OCEAN criteria to your headshot, and lets you experience the effects of AI. You first create the blue OCEAN baseline with your picture (I used a stock photo). Next, you adjust Contrast and Brightness using the sliders on the left side. The green line shows how the OCEAN scores change, simply because a candidate doesn’t have an optimal lighting setup, for example. It demonstrates how irrelevant external circumstances can heavily influence outcomes, or what you think you are measuring. Proceed with care!
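To see why such tools are so fragile, consider this toy stand-in (not the Cambridge app): a deliberately naive "personality scorer" whose output depends on mean pixel brightness. Shifting the lighting changes the score even though the candidate hasn't changed at all. The scorer and pixel values are invented for illustration only.

```python
# Toy illustration of how irrelevant image properties shift a model's
# score. The "model" is a naive stand-in that scores a face crop by its
# mean pixel intensity; pixel values are 0-255 grayscale.

def toy_openness_score(pixels):
    """Pretend personality scorer: maps mean brightness to a 0-1 'score'."""
    return (sum(pixels) / len(pixels)) / 255

def adjust_brightness(pixels, delta):
    """Shift every pixel by delta, clamped to the 0-255 range."""
    return [min(255, max(0, p + delta)) for p in pixels]

face = [100, 120, 140, 110, 130]        # same candidate, same face
brighter = adjust_brightness(face, 40)  # only the lighting changed

print(toy_openness_score(face))      # ~0.47
print(toy_openness_score(brighter))  # ~0.63: a different 'personality'
```

Real OCEAN-scoring models are far more complex, but the failure mode is the same: when the input confounds lighting with the trait being measured, the output measures the lighting.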
Solving the AI gap
Just to be safe, should we ban the use of AI from hiring applications, or HR systems in general? Not at all! And none of the (proposed) laws implement or even suggest a ban. What they mandate is that you know exactly what is going on: what algorithms are used, how criteria are applied, and do they reduce or increase bias? What will an audit of your hiring decisions reveal?
And before you think this is too much trouble, and you’re not going to use AI, I want to introduce an opposing view: people who have been the subject of biased or discriminatory hiring practices by humans are more welcoming towards the use of AI. They view algorithms as more neutral and objective decision makers and would like to see them applied more often. There is an opportunity for the use of AI, as long as we firmly maintain an unbiased baseline.
But it’s clear that there is a gap between the current state of technology and existing and developing legal standards. So it’s time for an in-depth conversation with the developers of your solution. They should provide you with a detailed description of their AI approach, and allow you to make changes that can be audited.
And if you (or even worse: they) don’t know exactly what’s happening behind the scenes, you should pause algorithmic decision making until you do. In the meantime, be careful with outcomes and decisions. And maybe read up on ways to counter bias in hiring tools so they work better for everyone.