The Ethics of AI in HR
July 15, 2020
We recently announced the launch of our Cornerstone Innovation Lab for AI. Situated in Paris, our AI Centre of Excellence brings together experts on machine learning, artificial intelligence, human resources and data protection to research new ways of applying this technology to the field of HR, with a special focus on the employee experience.
This lab created the Skills Graph – a skills engine that uses cutting-edge AI to make it easier for our customers to uncover their employees’ skills, opening up a world of possibility: one where we can help employees discover new career paths and suggest content and training that match their professional ambitions. We hear a lot about reskilling and upskilling, and our system’s innovative approach makes these tasks easier, fairer and more transparent.
On the one hand, AI can be used for process automation, allowing us to optimise the way we work and become more efficient. On the other hand, it can surface correlations that are not immediately obvious – if they were, of course, natural intelligence would suffice! The combination of these two capabilities can significantly improve our HR processes, but the algorithm cannot think for itself, and if we are not careful it can repeat past mistakes. That is why it is important to talk about ethics and to build considerations of algorithmic behaviour into the design stage.

We cannot ignore the fact that AI creates uncertainty and risk, and the field of HR is no exception. A commonly discussed example is a recruitment algorithm trained on unfiltered people data that ends up discriminating by gender. Situations like this arise because AI is typically built by collecting large amounts of historical data and searching for repetitive patterns within it. That data can reflect outdated realities – for example, that most people holding certain positions have been men, a pattern that can rarely be justified by the requirements of the job. For this reason, companies that work with this technology have a responsibility to bring together data engineers, HR professionals and ethicists in order to use it responsibly.
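To make this concrete, here is a minimal sketch with entirely invented numbers: a naive pattern-matching “model” that scores candidates by their group’s historical hire rate simply reproduces whatever skew the history contains.

```python
# Illustrative sketch with fabricated data: a naive pattern-matching model
# trained on historical hiring records reproduces the bias in those records.

# Historical records as (gender, hired) pairs. The skew is invented for
# illustration -- in this history, 80% of past hires were men.
history = ([("M", True)] * 80 + [("F", True)] * 20 +
           [("M", False)] * 40 + [("F", False)] * 60)

def hire_rate(records, gender):
    """Fraction of past applicants of a given gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# A "model" that scores new candidates by their group's historical hire
# rate has learned the bias in the data, not the requirements of the job.
print(f"men: {hire_rate(history, 'M'):.2f}")    # prints men: 0.67
print(f"women: {hire_rate(history, 'F'):.2f}")  # prints women: 0.25
```

Nothing about job performance enters this calculation – which is exactly the point: without careful data selection, the algorithm optimises for the past rather than for fairness.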
There are many reasons to innovate in this area, and they have become more evident in recent times: the speed and volume of change, and the limitations of traditional manual processes. For example, consider a company that wants to adapt to change and adopts a learning strategy based on reskilling. A manual process is limited in both capacity and quality; automating it with AI helps us run it faster while making sure the results remain useful.
But to achieve this, HR departments will have to update their skills and become “experts” on AI – or, more accurately, expert users of AI. Two things matter in particular:
- Choosing the right data. What data are we using to create these algorithms? If we use historical data, it may carry biases whose consequences we need to address at the design stage.
- Using the algorithm. Once the algorithm has been implemented, its users – i.e. HR teams – will have to learn how AI works in order to assess the accuracy of its results, correct errors, reduce risk and contribute to its improvement.
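As an example of the second point, one simple audit HR teams can run on an algorithm’s output is the “four-fifths rule”, a rough fairness screen from US employment-selection guidelines: flag the results if any group’s selection rate falls below 80% of the highest group’s rate. The sketch below uses invented group labels and numbers.

```python
# Hypothetical audit sketch: screening an algorithm's recommendations
# with the four-fifths (80%) rule as a rough fairness check.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """True if every group's rate is at least 80% of the highest rate."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Invented output from a screening algorithm: group A selected at 50%,
# group B at 30% -- below four fifths of A's rate, so the check fails.
decisions = ([("A", True)] * 50 + [("A", False)] * 50 +
             [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(decisions)
print(rates, passes_four_fifths(rates))
```

A failed check is not proof of discrimination, but it tells the HR team exactly where to start asking questions – which is the kind of oversight an expert user of AI provides.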
This shift will open up new jobs within HR departments, while offering the department a great opportunity to expand its own skills – upskilling and reskilling in action.
This is why we focus our innovation on ethics at the Cornerstone Innovation Lab for AI. We always think about the EU’s seven key requirements for ethical AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
AI is a very powerful tool. How it is used depends on us, the HR professionals.