Artificial intelligence promises to make hiring an unbiased utopia.
There’s certainly plenty of room for improvement. Employee referrals, a process that tends to leave underrepresented groups out, still make up the bulk of companies’ hires. Recruiters and hiring managers also bring their own biases to the process, studies have found, often choosing people with the “right-sounding” names and educational backgrounds.
Many companies lack racial and gender diversity, with the ranks of underrepresented people thinning at the highest levels of the corporate ladder. Fewer than 5 percent of chief executives at Fortune 500 companies are women, and there are only three black CEOs. Racial diversity among Fortune 500 boards is almost as dismal; four of five new appointees to corporate boards in 2016 were white.
“Identifying high-potential candidates is very subjective,” said Alan Todd, the CEO of CorpU, a technology platform for leadership development. “People pick who they like based on unconscious biases.”
AI advocates argue it can eliminate some of these biases. Instead of relying on people’s feelings to make hiring decisions, companies such as Entelo and Stella IO use machine learning to detect skills needed for certain jobs. The AI then matches candidates who have those skills with open positions. The companies claim not only to find better candidates, but to pinpoint those who may have previously gone unrecognized in the traditional process.
Stella IO’s algorithm assesses candidates based only on skills, for example, said founder Rich Joffe. “The algorithm is only allowed to match based on the data we tell it to look at. It’s only allowed to look at skills, it’s only allowed to look at industries, it’s only allowed to look at tiers of companies.”
Entelo has released Unbiased Sourcing Mode, a tool to further anonymize hiring. The software allows recruiters to hide names, photos, schools, employment gaps, and markers of someone’s age, as well as to replace gender-specific pronouns.
AI is also being used to help develop internal talent. CorpU has formed a partnership with the University of Michigan’s Ross School of Business to build a 20-week online course that uses machine learning to identify high-potential employees. Those ranked highest aren’t usually the individuals who were already on the promotion track, Todd said, and often exhibit qualities such as introversion that are overlooked during recruiting.
“Human decision-making is pretty awful,” said Solon Barocas, an assistant professor in Cornell’s Information Science department who studies fairness in machine learning. But we shouldn’t overestimate the neutrality of technology, either, he cautioned.
Barocas’s research has found that machine learning in hiring, much like its use in facial recognition, can result in unintentional discrimination. Algorithms can carry the implicit biases of those who programmed them. Or they can be skewed to favor certain qualities and skills that are overwhelmingly exhibited among a given data set.
“If the examples you’re using to train the system fail to include certain types of people, then the model you develop might be really bad at assessing those people,” Barocas said.
Not all algorithms are created equal, and there’s disagreement about which ones have the potential to make hiring more fair. One type of machine learning relies on programmers to decide which qualities should be prioritized in candidates. These “supervised” algorithms can be directed to scan for individuals who went to Ivy League universities or who exhibit certain qualities, such as extroversion.
“Unsupervised” algorithms determine on their own which data to prioritize. The machine makes its own inferences, based on existing employees’ qualities and skills, to determine those needed by future employees. If that sample includes only a homogeneous group of people, it won’t learn how to hire different types of individuals, even if they might do well in the job.
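This failure mode can be made concrete with a toy sketch. The code below is purely illustrative (it is not any vendor’s actual system, and the trait names are hypothetical): a matcher infers an “ideal candidate” profile from current employees and scores applicants by similarity. Because every existing employee happens to be an extroverted Ivy League graduate, those incidental traits get baked into the profile, and a more skilled candidate who doesn’t look like the incumbents is ranked below a weaker look-alike.

```python
# Toy illustration of learning a candidate profile from a homogeneous
# workforce. All traits and numbers are made up for the example.

def learn_profile(employees):
    """Infer an 'ideal' profile by averaging each trait across employees."""
    traits = employees[0].keys()
    return {t: sum(e[t] for e in employees) / len(employees) for t in traits}

def score(candidate, profile):
    """Similarity to the profile: higher is better (negative total distance)."""
    return -sum(abs(candidate[t] - profile[t]) for t in profile)

# Every current employee shares the same incidental traits, so the
# learned profile rewards them even though only 'skill' should matter.
employees = [
    {"skill": 0.9, "extroversion": 0.9, "ivy_league": 1.0},
    {"skill": 0.8, "extroversion": 0.8, "ivy_league": 1.0},
]
profile = learn_profile(employees)

strong_but_different = {"skill": 0.95, "extroversion": 0.2, "ivy_league": 0.0}
weaker_but_similar = {"skill": 0.60, "extroversion": 0.9, "ivy_league": 1.0}

# The more skilled candidate scores worse than the look-alike.
print(score(strong_but_different, profile) < score(weaker_but_similar, profile))
# → True
```

Nothing about this matcher is malicious; the skew comes entirely from the training sample, which is the point Barocas makes above.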
Companies can take steps to mitigate these forms of programmed bias. Pymetrics, an AI hiring startup, has programmers audit its algorithm to see if it’s giving preference to any gender or ethnic group. Software that considers ZIP code, which correlates with race, is likely to have a bias against black candidates, for example.
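One common way to run such an audit (sketched below as an assumption, not a description of Pymetrics’ actual process) is to compare the algorithm’s selection rates across demographic groups and flag any group whose rate falls below 80 percent of the highest group’s rate, the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures.

```python
# Minimal audit sketch: compare selection rates by group and apply the
# EEOC four-fifths rule of thumb. Group labels and data are hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, picked = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + (1 if selected else 0)
    return {g: picked[g] / totals[g] for g in totals}

def four_fifths_flags(rates):
    """Flag groups selected at under 80% of the best-performing group's rate."""
    top = max(rates.values())
    return {g: r / top < 0.8 for g, r in rates.items()}

# Group A: 8 of 10 selected; Group B: 4 of 10 selected.
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 4 + [("B", False)] * 6

rates = selection_rates(decisions)   # {"A": 0.8, "B": 0.4}
print(four_fifths_flags(rates))      # → {'A': False, 'B': True}
```

A flag doesn’t prove discrimination on its own, but it tells auditors which inputs, such as ZIP code, deserve a closer look.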
Stella IO also has humans monitoring the quality of the AI.
“While no algorithm is ever guaranteed to be foolproof, I believe it is vastly better than humans,” said founder Joffe.