Amazon's AI Recruiting Troubles Aren't Unique—And They're Not All Bad, Either
January 28, 2019
Technological innovation rarely follows a straight line; there are often a few curves along the way.
Such is the case with Amazon's artificial intelligence (AI) recruiting tool, which, according to a 2018 Reuters report, was scrapped after the company discovered the software favored male applicants. According to the report, Amazon's computer models were trained to vet applicants based on patterns found in resumes submitted to the company over the previous decade, a majority of which came from men. As a result, the automated program penalized resumes containing female-specific terms and downgraded applicants from two all-women's colleges.
This isn't the first time we've been warned that AI recruiting tools are subject to bias. In 2017, technology writer Sara Wachter-Boettcher explored the ways bias persists in modern technology in her book Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. That same year, Dipayan Ghosh, a former White House technology and economic adviser under President Obama, wrote an op-ed titled "AI is the future of hiring, but it's far from immune to bias."
In my view, this penchant for bias is a pitfall companies need to continue to work at avoiding—not a reason to stop using AI in hiring altogether. Instead, HR teams should implement AI with the understanding that bias could (and likely will) emerge, make adjustments as needed, and continue innovating the recruiting process in order to make it equitable for all candidates.
Amazon's AI Experience Is a Model—Not a Misstep
Despite the Amazon incident, AI as an effective workplace tool is spreading across industries. At the beginning of 2018, 38 percent of companies reported implementing AI to not only support HR departments but also to assist with everything from marketing decisions to employee training. In the coming years, adoption is expected to continue growing. And overall, that's a good thing: one study suggests the technology could increase workplace productivity up to 40 percent by 2035.
But in addition to the promise of productivity, many implementations of AI (recruiting included) may be subject to unintended bias—not because the AI itself is biased, but because the input data carries a bias. If historical hiring data suggests a preference for Ivy League degrees, for example, the AI may "learn" to select Ivy League candidates over otherwise qualified applicants. And Amazon is far from the only company to discover that the data it fed its AI system produced undesirable results. Companies planning to implement AI, therefore, should expect that bias might present itself, share those findings, and suspend use of the tool until the bias can be eliminated.
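To make that mechanism concrete, here is a minimal, hypothetical sketch of how a naive scoring model trained only on historical hires reproduces the skew in that history. The toy "model" and resumes below are invented for illustration and bear no relation to Amazon's actual system; the point is simply that terms absent from a skewed hiring history, such as "women's", can drag a candidate's score down even when qualifications are identical.

```python
from collections import Counter

# Invented historical data: resumes of past hires, drawn from a skewed pool.
past_hires = [
    "software engineer java men's chess club",
    "java developer backend systems",
    "systems engineer java backend",
]

def train(resumes):
    """'Learn' word frequencies from historical hiring data."""
    counts = Counter()
    for resume in resumes:
        counts.update(resume.split())
    return counts

def score(model, resume):
    """Score a resume by how familiar its words are to the model.

    Words never seen among past hires contribute zero, so they dilute
    the average -- a crude stand-in for how a real system can penalize
    terms missing from a skewed history.
    """
    words = resume.split()
    return sum(model[w] for w in words) / len(words)

model = train(past_hires)

# Two candidates with identical core qualifications; one also mentions
# a women's organization, which never appears in the training data.
score_a = score(model, "java backend engineer")
score_b = score(model, "java backend engineer women's coding society captain")

# score_b comes out lower solely because of terms absent from past hires.
```

The bias here was never written into the code; it was inherited entirely from the data, which is exactly the failure mode the Reuters report describes.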
Moreover, companies using AI are responsible for monitoring for bias whether the AI is a proprietary tool or a service from an outside provider. A provider might promise its tool has no bias, but companies using it still have an obligation to ensure their applicants are being treated fairly by the technology. To help with detection, companies can leverage a growing set of AI-based tools designed to flag bias and discrimination in the hiring process where it might not be plainly visible. The better companies can address and eliminate bias, the more quickly they'll be able to take advantage of AI's ability to reduce time-to-hire and cut costs.
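One illustration of what such a check can look like is the EEOC's "four-fifths" heuristic, a widely used screen for adverse impact: if any group's selection rate falls below 80 percent of the highest group's rate, the process warrants scrutiny. The sketch below is a simplified version of that single heuristic, with invented group names and counts; a real bias audit would go well beyond one ratio.

```python
def adverse_impact_ratios(selected, applied):
    """Compare each group's selection rate to the highest-rate group.

    selected / applied: dicts mapping group name -> counts.
    Under the EEOC four-fifths heuristic, a ratio below 0.8 for any
    group is a common red flag. Simplified sketch, not a full audit.
    """
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical numbers for illustration only.
ratios = adverse_impact_ratios(
    selected={"group_a": 48, "group_b": 24},
    applied={"group_a": 100, "group_b": 100},
)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

With these made-up numbers, group_b is selected at half group_a's rate, so it falls well below the 0.8 threshold and would be flagged for review.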
As Adoption Grows, Practice Transparency
Despite concerns about bias, more than half of HR managers believe AI will play a significant role in their industry within the next five years, according to a recent survey conducted by CareerBuilder, and we believe that impact will be overwhelmingly positive. Already, AI tools are showing early success in streamlining the recruiting process, onboarding successful candidates, and even enabling those candidates to reach their full potential as employees.
In the short term, I hope to see more transparency in the recruiting process, especially when automation is involved, so that candidates know who, or what, is vetting their application. Companies need to be open about when the AI does or doesn't work rather than trying to hide it, and they should highlight the work being done to systematically eliminate biases. By demonstrating that effort, a company can build a trustworthy relationship with candidates.
In the long term, I believe the good in this technology will far outweigh setbacks like the bias at Amazon. The AI itself is not inherently biased, and many experts agree that once we clean up our biased data, the machines of the future will be able to make more equitable hiring decisions than a human would. The more we are able to identify and eliminate biases in our hiring and recruiting, the closer we come to a day when candidates are vetted on merit alone.
The path to this future won't necessarily follow a straight line, but it will ultimately empower HR teams, candidates, and employees to reach their full potential in the workplace.