How to ensure your organization's AI applications are ethical

Jose Alberto Rodriguez Ruiz

Data Protection Officer at Cornerstone OnDemand

HR teams are increasingly embracing AI applications, using them to help with recruiting, performance and succession management, and task allocation.

All this AI has triggered the need to develop compliance standards. But because technology has advanced faster than regulation, organizations have been responsible for self-assessing the effectiveness of their AI applications.

That’s about to change.

The European Union recently proposed a risk-based regulatory framework that holds organizations to new compliance standards for adopting AI. The proposal specifically identifies AI systems for HR, among others, as high-risk and advises adequate oversight and requirements to address concerns about safety and fundamental human rights.

According to the proposal, high-risk AI systems will require human oversight, added transparency measures, high-quality training data to avoid bias, and documentation for regulators and users to understand how the system operates.

What makes AI ethical

Good compliance standards are certainly required. AI systems in HR will make recommendations that impact millions of people. And we are not talking about recommendation engines that influence people to buy a PlayStation over an Xbox (I can help with that too if you need advice!).

This AI will be responsible for influencing serious and impactful personal questions like: Am I going to get this job or not? Am I going to get this promotion or not? Am I going to get access to the right training?

And that's why HR is being singled out as high impact and therefore high risk.

While we don’t know yet when the EU will adopt the proposed regulation or its final form, compliance standards are going to get stronger. It’s up to organizations to consider risk and operate ethically, whether they are already using AI systems or thinking about adoption.

By testing and monitoring the technology, developing a deep understanding of how it operates and increasing transparency about its utilization, HR teams can benefit from AI systems while protecting their people.

AI risk management in HR today

In the HR industry, we have not yet settled on where to place the risk-management cursor between “let’s be cautious and take the time to test” and “let’s innovate and move forward fast.”

HireVue, an enterprise video interviewing technology company, is an example of an organization that innovated fast but ultimately paused AI advancement over concerns about fundamental human rights. The company formerly used AI to visually assess a candidate’s hireability during video interviews. After recognizing that nonverbal data contributed little to the AI’s predictive capabilities, and amid growing concerns that the technology could introduce gender and racial bias, HireVue removed facial expressions as a factor in its algorithms. It reiterated its stance against the practice in a recent commitment to transparency. Risk-based compliance standards and regulations can help organizations avoid these issues.

How Cornerstone does ethical AI

At Cornerstone, we are looking to learn from these experiences and develop an approach that integrates risk considerations directly into the work of our development teams.

For example, we avoid including information in our datasets that may lead to discrimination. We conduct ongoing reviews of how the algorithms work. And the algorithms don’t make decisions for HR professionals; rather, they let individuals review what the machine proposes as an input to their own decision-making process.
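As a minimal sketch of that first practice, assuming a tabular training set, excluding potentially discriminatory fields might look like the following. The column names and helper function are illustrative assumptions, not our actual pipeline.

```python
# Illustrative only: drop fields that could let a model learn
# discriminatory signals. Column names are hypothetical examples.
import pandas as pd

PROTECTED_ATTRIBUTES = ["gender", "age", "ethnicity", "marital_status"]

def prepare_training_data(df: pd.DataFrame) -> pd.DataFrame:
    """Remove protected attributes before the data reaches the model."""
    present = [col for col in PROTECTED_ATTRIBUTES if col in df.columns]
    return df.drop(columns=present)
```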

This practice builds human review into the process and allows us to track the accuracy of the algorithm. If we suddenly realize people are rejecting the algorithm’s output at a rate of 90%, we know something is wrong, and we can investigate.
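That kind of feedback loop can be automated. Below is a hypothetical sketch of a rejection-rate monitor; the threshold and names are assumptions for illustration, not our production tooling.

```python
# Illustrative only: flag when human reviewers reject the AI's
# proposals too often. The alert threshold is an assumed example.
REJECTION_ALERT_THRESHOLD = 0.5  # alert well before a 90% rejection rate

def rejection_rate(reviews: list[bool]) -> float:
    """reviews: True where a human reviewer rejected the AI's proposal."""
    return sum(reviews) / len(reviews) if reviews else 0.0

def check_review_feedback(reviews: list[bool]) -> None:
    rate = rejection_rate(reviews)
    if rate >= REJECTION_ALERT_THRESHOLD:
        print(f"ALERT: reviewers rejected {rate:.0%} of proposals; investigate")
```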

Our system positions our team to proactively address risk and safety concerns, which may help avoid compliance pitfalls — not to mention unintentional bias and discrimination — even as we develop more robust AI applications.

4 questions to ask when vetting AI systems

While companies like Cornerstone and HireVue develop AI directly for HR teams, many organizations source AI tools from outside vendors. As HR leaders assess AI solutions to implement in their organizations, asking these questions can help reduce risk and maintain compliance.

1) Who built the technology, how was it developed, and where does the data come from?

Understanding where and how the AI technology is developed can allow HR teams to investigate their vendor’s ethics framework. And knowing where the data comes from can give HR teams more confidence in and control over the outcomes.

2) What checks and balances are in place to reinforce ethical use?

Asking the vendor about their checks and balances gives HR teams confirmation that they are partnering with an organization committed to ethical use. A clear understanding of the vendor’s testing process also shows HR teams what checks are performed during development.

3) How will I utilize the technology?

If HR teams simply accept every outcome the technology offers, they could miss unpredictable deviations that raise red flags. And if they don’t monitor the technology on an ongoing basis, they won’t be able to escalate concerns early. Remember: if the humans utilizing the technology are not trained to identify bias, the system won’t be either.

4) How will my employees utilize the technology to detect bias?

It’s not always obvious where algorithmic decisions come from. I encourage users to look at AI outcomes and ask, “What’s the relationship?” While we may not be able to explain each recommendation at this stage of use, we can explain the logic behind the system as a whole in a way that everyone understands. Therefore, everyone can uphold its use in an ethical, risk-averse way.

Bring ethical AI to your organization

Vendors and users both have a role to play in ensuring that AI use is ethical and compliant, through a shared responsibility for testing and monitoring. Providers must test the technology for biases throughout development, but there’s no way to guarantee that it will be 100% perfect. That is why users also have a responsibility to monitor the technology on an ongoing basis. Deviations that are very difficult to forecast may still happen, which is why the technology must be tested as it is built and monitored as it is used.
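To make “testing for biases” concrete, one common check, drawn from the “four-fifths rule” in US employment guidelines, compares each group’s selection rate against the best-performing group’s. The sketch below is an illustration under assumed data structures, not a complete fairness audit.

```python
# Illustrative only: an adverse-impact check based on the "four-fifths rule."
# `outcomes` maps a group label to binary selection decisions (1 = selected).
def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    return {group: sum(sel) / len(sel) for group, sel in outcomes.items() if sel}

def four_fifths_check(outcomes: dict[str, list[int]],
                      threshold: float = 0.8) -> dict[str, bool]:
    rates = selection_rates(outcomes)
    if not rates:
        return {}
    best = max(rates.values())
    # A group passes if its selection rate is at least 80% of the best group's.
    return {group: (rate / best) >= threshold for group, rate in rates.items()}

# Example: group B's rate (0.4) is half of group A's (0.8), so it is flagged.
example = {"group_a": [1, 1, 1, 0, 1], "group_b": [1, 0, 0, 1, 0]}
print(four_fifths_check(example))  # {'group_a': True, 'group_b': False}
```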

As HR professionals take the next steps to enforce ethical AI use, they can conceptualize their role by imagining a pilot and an aircraft. While we all have a romantic idea of the pilot holding a joystick and steering the plane, it hasn’t worked like that for ages. Instead, the pilot enters numbers into a computer, and the computer makes the plane turn.

HR professionals are going to have to become pilots of AI algorithms. With more regulation ahead, guiding the technology in the right direction will be paramount to its success.
