How AI Can Help Companies Create a Diverse Workforce (and Why It’s Not a Magic Bullet)

Analysts and entry-level associates account for more than 70% of the people Goldman Sachs hires in a given year. So in 2019, when the New York–based investment bank set new diversity goals, it sent a firm-wide email announcing new targets for hiring women and Black and Latinx professionals into these positions in the Americas and the United Kingdom.

Goldman Sachs has technology – specifically, an artificial intelligence (AI) tool that it uses to screen job applicants for interviews – to help with this goal. Candidates record video interviews in which they answer a standard set of questions designed to assess analytical thinking, integrity, and other job-related competencies. The AI grades the interviews; human recruiters then use the videos to inform their decisions about whom to invite for an in-person interview. (Goldman Sachs gives advice to applicants to help them practice, while numerous independent guides and discussion groups share interview questions and offer tips.)

Goldman Sachs first deployed the tool in 2016, when it faced growing competition for entry-level talent and was hiring more people with skills beyond accounting, finance, and economics (such as programmers). Writing in Harvard Business Review, Dane Holmes, then-global head of human capital management for Goldman Sachs, observes that the company traditionally drew from a narrow pool: graduates of prestigious schools with the highest GPAs. Recruiters had to evaluate more applicants for jobs and internships, but they couldn’t visit every campus where they might find qualified applicants, and they could only interview a limited number of them.

In 2019, after Goldman Sachs had used the program for three years, the AI-assisted approach had also produced “the most diverse ever” group of campus recruits, Holmes writes, “composed entirely of people who were selected through rigorous, objective assessments. There’s no way we aren’t better off as a result.”

Many companies are turning to AI for help with identifying, hiring, and promoting talent. According to a report by Mercer, a global consultancy, roughly 70% of companies were already using or starting to use AI and related technologies for talent acquisition and management in 2020. And business leaders believe it can help them with an increasingly critical imperative: building a diverse workforce.

Across all industries, organizations are under intense pressure to bring personalization and customization to their products and services and to build deeper, lasting connections with their customers. Teams that are diverse in gender, race, age, and other dimensions can unlock new approaches and new ways of thinking that can lead to profit-generating innovation, sounder business decisions, and better customer experiences.


In a guide for organizations that are pursuing greater diversity and equity, the World Economic Forum notes that companies with a diverse workforce demonstrate up to a 20% higher rate of innovation, 19% higher innovation-driven revenues, up to a 30% greater ability to spot and reduce business risk, and better talent retention rates. In addition, a recent study by McKinsey of 1,000 companies globally found that those with the most diverse executive teams were up to 36% more likely to have above-average profits compared to competitors with the least diverse teams.

Numerous factors contribute to whether companies can create and retain a diverse workforce. For example, leaders need to commit to a culture of inclusion, where employees feel welcome, are encouraged to contribute, and are rewarded fairly. “The diversity problem is very complex,” says Josh Bersin, founder and dean of Bersin Academy, a professional development network for HR professionals. Organizations need to make sure that “every management decision in the company is made in an equal and fair way,” he cautions. AI isn’t a magic bullet, but, he says, it “is starting to turn into a useful tool.”

One problem facing companies is that the traditional way to recruit, hire, develop, and promote staff fails to identify and select diverse people at every step in the process. Humans are predisposed to favor others who are like themselves; in a hiring process, that means decision-makers – whether consciously or not – tend to hire people who look like them, went to the same schools, and have the same interests and hobbies. The same set of biases comes into play when deciding on raises, promotions, and seats on boards of directors, says Bersin.


AI can counter such biases in several ways. During the hiring process, for instance, it can scour databases to find candidates in underrepresented categories and match applicants, as well as current employees, with jobs they may never have thought to apply for. It can take bias out of the resume-screening process by stripping out details such as names that denote race or gender. And as with video interviews, it can supplement the bias-prone, face-to-face interview with a standard set of questions customized for the open position. AI can also be used to hold managers accountable by evaluating their decisions.
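
To make the resume-masking idea concrete, here is a minimal sketch of what such a step might look like; the field names and redaction patterns are illustrative assumptions, not any vendor’s actual implementation:

```python
import re

# Fields assumed present in a parsed resume record; illustrative only.
IDENTIFYING_FIELDS = {"name", "email", "phone", "photo_url", "date_of_birth"}

def anonymize_resume(resume: dict) -> dict:
    """Return a copy of the resume with identity-revealing fields removed.

    A simplified sketch: production systems also scrub proxies such as
    graduation years, club memberships, and address details.
    """
    masked = {k: v for k, v in resume.items() if k not in IDENTIFYING_FIELDS}
    # Redact email addresses and phone numbers left in free text.
    text = masked.get("summary", "")
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone]", text)
    masked["summary"] = text
    return masked

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "summary": "Reach me at jane@example.com or +1 (555) 010-0000.",
    "skills": ["python", "statistics"],
}
print(anonymize_resume(candidate))
```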

To use AI successfully, however, companies need to lay the proper groundwork and have clear goals. The point isn’t to speed up hiring (though it likely will); it’s about broadening the talent pool and making sure diverse candidates aren’t blocked as they make their way through the hiring pipeline. Beyond recruitment and hiring, AI has the potential to transform all of a company’s promotion and retention activities when deployed as part of a comprehensive corporate push for diversity and inclusion.

To ensure that happens, companies need to avoid the pitfalls that arise from using bad data, applying technology without transforming business processes, and allowing AI to make decisions unchecked. “The idea that AI is going to fix the diversity problem is ridiculous; it’s not,” says Bersin. “What it can do is quickly identify patterns of poor behavior, unfair behavior, misalignment, and inconsistency.”


The push for diversity

AI is gaining attention as a tool for achieving diversity because companies are becoming more aware that having a diverse workforce is more than just the right thing to do.

CEOs interviewed by the Conference Board prior to the COVID-19 pandemic said that finding talent was their most pressing internal concern for 2020; creating new business models in response to disruptive technology and building a more innovative culture ranked second and third. The pandemic, meanwhile, has accelerated companies’ efforts to become more digital, to offer omnichannel buying experiences for their customers, and to enable mobile and remote working scenarios for their employees. All of that requires looking in new ways at how the business operates.

The challenge for companies, however, is that it’s difficult to drive innovation and come up with new ideas and new business models when the workforce is relatively homogenous. Companies are looking for fresh eyes, different ways of approaching problems, and a deeper understanding of the customer. That’s what a diverse workforce and leadership team can deliver.

And yet the absence of diversity, including in the leadership ranks of big companies, is well documented. Only 9% of Fortune 500 company board seats are held by people who identify as Black, according to a recent survey by Deloitte and the Alliance for Board Diversity. The McKinsey study found that, on average, women make up only 20% of corporate executive team members in the United States and United Kingdom, and more than a third of the companies surveyed worldwide still have no women in their executive ranks.

Companies that succeed at creating a more diverse workforce take deliberate steps to do so.

The consumer goods company Unilever, whose customers are 70% women, has put diversity across gender, age, race, disability, and sexual orientation at the heart of its global sustainable development agenda to “boost financial performance, reputation, innovation, and staff motivation – and bring us closer to our consumer.” In 2009, Unilever set a goal of gender balance, and by the end of 2019, 51% of managers were women.

Unilever built an AI-based platform that has helped create diverse teams. The platform screens job candidates using a series of games that measure aptitude, logic, reasoning, and appetite for risk. The results are correlated with the qualities required for the job to determine whether the candidate is a good match. In the next step of the application process, candidates submit a video interview that is graded by the system through a mix of natural language processing and body language analysis. The company says the system helps eliminate subjectivity that may derive from evaluation of resumes, recommendations, biographical details, and face-to-face interviews. Diverse hires have increased 16% since the technology was deployed.
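
The matching step Unilever describes – comparing a candidate’s measured traits against the qualities the job requires – can be sketched with a simple similarity score. The platform itself is proprietary; the trait names and numbers below are invented for illustration:

```python
import math

# Hypothetical trait scores (0-1), e.g. derived from assessment games.
job_profile = {"logic": 0.9, "risk_appetite": 0.4, "reasoning": 0.8}

def match_score(candidate: dict, profile: dict) -> float:
    """Cosine similarity between candidate traits and the job profile."""
    traits = sorted(profile)
    a = [candidate.get(t, 0.0) for t in traits]
    b = [profile[t] for t in traits]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

candidate = {"logic": 0.7, "risk_appetite": 0.6, "reasoning": 0.9}
print(f"match: {match_score(candidate, job_profile):.2f}")
```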

Typically, recruiters want to fill positions quickly, so they keep turning to the same places to find talent – well-known business schools, say, or networks that have produced candidates in the past – rather than broaden their search to new talent pools. In addition, companies often write narrow job descriptions that demand, for example, a specific number of years of experience or a certain certification (such as an MBA). These credential-based pitches tend to scare off potential applicants who might have the ability to do the job but feel they don’t match the requirements.

For candidates who do apply, conventional resume-scanning software ranks them based on keyword matches rather than less computer-friendly concepts such as potential, skills, and ability to learn, screening out candidates who may have related experience. If a candidate makes it to a face-to-face interview, the process can be notoriously subjective. Intel boosted the percentage of women and underrepresented minorities it hired for technology positions from 32% to 45% in two years, in part by requiring interview panels to include diverse members.

Companies where AI has helped change these dynamics did more than simply apply the technology to their existing data and processes. They took a hard look at the types of personnel decisions that resulted from their existing processes, and they used technology to transform them.


How AI can help

AI systems can help with six major aspects of recruitment, hiring, and talent management that create barriers to shaping a diverse workforce:


  1. Analyze internal company data to identify biases in pay, promotions, and leadership positions. AI can unearth meaningful patterns in large data sets. For example, it can gather baseline information on diversity within the existing workforce and compare the track records of hiring managers within the company. Bersin points out that by performing this assessment, companies can identify red flags in hiring, pay disparities, and lack of diversity in upper management. They can find gaps or blind spots and set priorities for what issues to address first. (A minimal sketch of such an analysis appears after this list.)

    Jody Atkins is global head of talent at NextRoll, a digital marketing company that considers diversity to be an important aspect of its company culture. An AI platform helps the company identify job candidates in underrepresented categories and provides recruiters and hiring managers with data to understand how they are progressing toward their diversity goals.

    “It’s very eye-opening when you see the graph of the hiring pipeline and how it compares to objectives,” Atkins says in a published case study about the effort. “It’s shining a light on something that before was ambiguous. It really makes you stop and think, ‘Why is it that way?’ and ‘What can we do about it?’”
  2. Create more inclusive job descriptions. Writing job descriptions differently can broaden the applicant pool, observes Kamal Ahluwalia, president of Eightfold.ai, a talent acquisition and management software company that worked with NextRoll. For example, he points out that only 15% of software developers who know the Python programming language are women, so if a company includes knowledge of Python as a requirement in a software developer job description, only 15% of candidates are likely to be women. If a company calls instead for general statistical analysis skills, the field widens.

    With an AI system, a new job opening might prompt internal job experts to take a survey to identify the behavioral qualities needed to be successful in that job. They might be asked, for example, whether the company should look for someone who “attends to details,” “suggests original ideas,” “communicates in an easily understood way,” or “handles complaints and resolves grievances,” says Caitlin MacGregor, CEO of Plum, a talent management software company.
  3. Anonymize resume information that can trigger biases. AI can strip out and anonymize any information that can be used to identify someone’s gender, race, age, disability status, and other factors that are not related to their qualifications.

    Simply by masking gender-revealing information in resumes, Tata Communications has increased its percentage of female hires from 19% to 31% over the past three years. “More women are getting through the door of initial screening, which is a classic problem that occurs when unconscious bias is happening,” says Aman Gupta, global head of talent management and talent acquisition.
  4. Use defined criteria to evaluate candidates. Using job-specific criteria to evaluate candidates mathematically makes interviews fairer, argues Nathan Mondragon, chief IO psychologist at HireVue, which provides video interviewing software and has worked with Goldman Sachs and Unilever. “It’s not GPA. It’s not a gap in employment. It’s not the school they went to. For a retail associate, it’s ‘Do they have a service orientation and are they conscientious?’ For a nurse role, ‘Are they empathetic and excellent problem solvers?’ ‘Are they a problem solver with high levels of general mental abilities?’ for an engineer role. Then here are the questions to ask in the interview that are linked to service orientation, problem solving, or empathy.”

    AI can then analyze and evaluate candidates’ answers to interview questions and other data points to determine how closely the candidate matches the job criteria, Mondragon adds. Hiring managers can decide how to use the information; for example, they can look at every candidate’s set of scores or ask the system to identify a certain range (such as the top 40%) of candidates.

    The potential benefits of this approach led Michael Shelsen, global head of campus recruitment and talent development at Scotiabank, to eliminate the traditional resume in favor of an AI-based assessment tool (the company worked with Plum). In a video about the effort, Shelsen says, “Let’s look at people for their skills, as opposed to the biases we’re aware of and also the unconscious ones.”
  5. Match candidates with other open jobs. Large enterprises might have dozens or even hundreds of openings at any one time. AI can match applicants to additional positions – a task beyond the scope of a manual hiring process.

    For example, by identifying the behavioral qualities needed in a job and measuring candidates for multiple positions against those needs, AI can not only expand opportunities for the candidates but can also give employers a wider pool of potential hires. Using a similar approach, companies can identify internal individuals for positions in which they might thrive. In doing so, AI can help managers become more aware of diverse candidates on their own teams and elsewhere in the company whom they may not have considered in the past, says MacGregor.
  6. Encourage career progression. By crunching data about current employees, AI can help managers identify those from underrepresented groups who have leadership potential that goes unrecognized because managers (whether consciously or not) tend to tap people who are like themselves. According to MacGregor, some companies have asked current employees to take the same AI-based assessments they use for new hires as a way of discovering unseen potential in the existing workforce.

    Scotiabank has analyzed employees’ internal social media exchanges and performance data to identify variables that correlate with employee success. The company was able to develop metrics and an “emerging leader index” that it has used to identify employees who should be developed for bigger roles and encouraged to advance. The goal: to drive out unconscious bias and put more candidates on managers’ radar when making promotion decisions.
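
A minimal sketch of the internal-data assessment referenced in the first item above, using pandas and invented records; a real analysis would pull from the company’s HR systems and control for many more factors:

```python
import pandas as pd

# Illustrative HR records; column names and values are assumptions.
df = pd.DataFrame({
    "level":  ["analyst", "analyst", "analyst", "manager", "manager", "manager"],
    "gender": ["F", "M", "M", "F", "M", "M"],
    "salary": [72000, 78000, 75000, 98000, 112000, 108000],
})

# Median pay by level and gender: comparing within a level avoids
# confusing level mix with pay inequity at the same level.
pay = df.pivot_table(index="level", columns="gender",
                     values="salary", aggfunc="median")
pay["gap_pct"] = (pay["M"] - pay["F"]) / pay["M"] * 100
print(pay)

# Representation by level flags where diversity thins out.
print(df.groupby("level")["gender"].value_counts(normalize=True))
```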

Rooting out the bias in the machine

Despite the potential for AI to assist with creating greater equity in hiring and promotion decisions, it can backfire spectacularly – and it has. The World Economic Forum points out that new technologies “can contain biases that deepen rather than counteract exclusion.”

For example, researchers have identified high error rates when facial recognition technology is applied to images of darker-skinned people. In 2018, Amazon abandoned an experimental tool that used AI to help rank job candidates because it was biased against women applying for technical jobs. Although engineers attempted to fix it, executives lost confidence that AI would not discriminate in other ways.


The core of the problem is a familiar one: garbage in, garbage out. The results depend on the quality of the underlying data the algorithms use and how it’s applied (Amazon relied on historical data about its workforce, which was heavily male). “In the early days of AI, there was a belief that if you got all this data, the system would be so intelligent that it would identify bias,” observes Bersin. “But a lot of early systems introduced bias, so the vendors had to work to take the bias out again.”
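
The dynamic is easy to reproduce. In this toy sketch (synthetic data, invented feature names), a classifier trained on historical hiring decisions that encoded bias learns to penalize a proxy feature that says nothing about ability:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one genuine skill signal and one proxy feature
# (e.g., a group-coded resume detail) unrelated to ability.
skill = rng.normal(size=n)
proxy = rng.integers(0, 2, size=n)

# Historical "hired" labels encode past bias: equally skilled
# applicants with proxy=1 were hired less often.
logits = skill - 1.5 * proxy
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("skill weight:", model.coef_[0][0])  # positive, as it should be
print("proxy weight:", model.coef_[0][1])  # negative: bias reproduced
```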

Cathy O’Neil is CEO of O’Neil Risk Consulting & Algorithmic Auditing, which helps companies identify issues of fairness, bias, or discrimination in their algorithms and recommends ways to address them. “Algorithms are opinions embedded in code,” she notes in a TED talk, and they “can go wrong with good intentions. Algorithms don’t make things fair. They automate the status quo, codifying sexism and other bigotry.” However, the good news is, as O’Neil puts it: “Algorithms can be interrogated, and they tell the truth.” In other words, companies can perform data integrity checks, and they can correct mistakes.

“Simply taking a bad, broken, biased system and using historical data and AI to speed up that broken system is not the right approach,” observes MacGregor. It doesn’t work because the algorithm will seek out candidates who match the existing workforce rather than surface diverse candidates who would succeed in roles based on who they are and what they’re capable of doing – no matter their education, gender, or ethnic background.

Ultimately, companies should retrain their models annually to continually improve the algorithms and achieve the desired results. “It’s not a one-and-done thing,” says Mondragon. “Additionally, companies should evaluate the models for any levels of bias against race, gender, age, plus other group classes to ensure the algorithms are operating in a fair and consistent manner.”
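
One widely used check of the kind Mondragon describes is the “four-fifths rule” from U.S. adverse-impact analysis: each group’s selection rate should be at least 80% of the highest group’s rate. A minimal sketch, with illustrative counts:

```python
def adverse_impact(selected: dict, applicants: dict, threshold: float = 0.8):
    """Flag groups whose selection rate fails the four-fifths rule.

    selected/applicants map group name -> counts, e.g. from one
    screening stage of the hiring pipeline.
    Returns group -> (ratio to the top rate, flagged?).
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold)
            for g, rate in rates.items()}

# Illustrative counts for one screening stage.
print(adverse_impact(selected={"men": 120, "women": 45},
                     applicants={"men": 400, "women": 250}))
```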


No decisions should be left solely to AI. Mondragon adds that interviewers should understand how algorithms score candidate assessments. They can use this information as a guide to guard against their own biases. Managers, meanwhile, can compare the decisions of human interviewers and the algorithms to determine when interviewers may need training or algorithms may need adjusting. “It’s never man or machine,” says Mondragon. “Rather, man plus machine yields the best decisions.”

Goldman Sachs, for example, uses AI to analyze its many thousands of candidate video recordings to calibrate its scoring systems, improve the questions it asks candidates, and assess whether it is measuring the right competencies for the job. The system also correlates the grades it assigns with those given by human interviewers, which helps identify unconscious bias among the interviewers.
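
A sketch of how such a cross-check might look, with invented column names and grades: compare each interviewer’s grades with the system’s and surface the interviewers whose scoring diverges most:

```python
import pandas as pd

# Hypothetical grades: one row per candidate interview.
df = pd.DataFrame({
    "interviewer": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "ai_grade":    [3.2, 4.1, 2.5, 3.9, 3.8, 2.9, 4.4, 3.1],
    "human_grade": [3.0, 4.3, 2.6, 4.0, 2.1, 4.0, 2.8, 3.9],
})

# Per-interviewer correlation between AI and human grades; a low or
# negative value is a prompt for review, not an automatic verdict.
corr = (df.groupby("interviewer")[["ai_grade", "human_grade"]]
          .apply(lambda g: g["ai_grade"].corr(g["human_grade"])))
print(corr.sort_values())
```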

The more data and the wider variety of data you have, the better off you will be, advises Ahluwalia of Eightfold.ai. “If you only try to learn from one company’s data, most of the time you don’t have enough data to train the models accurately, and you end up with a wrong recommendation.” Companies can counteract this problem by deploying algorithms that are trained on a broad range of anonymized, aggregated data sources encompassing billions of data points, he adds.

Another major flaw of using historical data about successful employees to predict how applicants will perform is that job functions are changing so fast that the skills a role required three years ago may not be the skills a successful candidate will need three years from now.

For example, when hiring software developers, expertise in a programming language that is useful today might not be needed a few years from now. Rather, the ability to learn new programming languages will be far more important over time. A company is less likely to find people who will last in their roles if it feeds its AI a list of programming languages and weights those qualifications more heavily than related skills that point to a candidate’s ability to learn, says Ahluwalia.


Make AI a partner

AI isn’t a panacea for companies hoping to reap the business benefits of a more diverse workforce – one that brings the company closer to its customers and brings innovative perspectives and approaches to business problems. It’s important for companies to build consensus for increased diversity and inclusion across the organization. In addition, companies need to create a work environment in which the culture enables and encourages openness to new ideas and different opinions.

But AI can help by providing deeper insights into how companies make their personnel decisions – and, when applied correctly, it can help managers counter the biases, whether conscious or not, that all people have when evaluating others.

Organizations can start by understanding how diverse their workforce currently is, including which departments, business units, functional areas, or specific managers have built diverse teams and which haven’t. It’s also important to understand why and whether a successful manager’s approach can be replicated across the company. The hiring pipeline has multiple steps, so companies need to identify the major pain points and address those first, based on the data that comes out of that initial assessment.

The next step is to define specific targets – ambitious goals that can be measured. Without targets, organizations have no way to track how well they’re doing or to hold managers accountable.

Then companies can apply technology to help managers see their workforce in more inclusive terms. Though humans should always make the final call on personnel decisions, creating a feedback loop – in which people continually train the algorithms to be less biased and the algorithms train the people to recognize their own biases – will ultimately help companies find and employ the best talent among the available candidates for any role.

Concludes Gupta of Tata Communications, “We realized that technology can play a major role in providing business managers with the right kind of input and the right kind of reporting.”