What should HR expect now the EU AI Act is in force?
The EU AI Act came into force on August 1st. It sets out how companies operating within the European Union should work with AI tools. So what do HR teams need to be aware of?
Experience has shown that leaving the EU did not mean all the red tape disappeared. The UK continues to be affected by the regulations of our neighbouring EU bloc, and whenever a new EU Act is passed, UK HR teams still need to check to see if it has implications for their work.
On the 1st of August, the European Artificial Intelligence Act came into force, with its stated purpose being to foster responsible artificial intelligence development and deployment in the EU. The Act introduces a uniform framework across all EU countries, impacting the way they manage operations, people and engagement with other businesses.
Since the EU remains the UK’s biggest trading partner, accounting for 41% of our export market for goods and services last year, this will have implications for UK businesses and UK HR teams. But how and where will these impacts be felt?
How will the EU AI Act impact UK HR teams?
UK HR teams with direct responsibility for staff in EU countries will need to adapt their processes in line with European regulations. However, UK HR professionals without EU staff may still face a compliance burden if their organisation trades with EU organisations that require their supply chain to comply with the AI Act as part of their corporate social responsibility policies.
UK HR teams could be facing requests from their sales directors to retrofit HR processes to comply with the AI Act as part of the tendering process.
The Act outlines a standard framework for governing AI usage, which is as follows:
- Minimal risk: Most AI systems such as spam filters and AI-enabled video games face no obligation under the AI Act, but companies can voluntarily adopt additional codes of conduct.
- Specific transparency risk: Systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.
- High risk: High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear user information and human oversight.
- Unacceptable risk: AI systems that allow “social scoring” by governments or companies, for example, are considered a clear threat to people’s fundamental rights and are therefore banned.
The EU AI Act insists on the transparent use of chatbots
AI chatbots, such as our unique AI HR Assistant AMI, are now making their way into the HR process to help employees and managers with their HR queries.
The EU AI Act regulatory burden seems straightforward and sensible here. If you are using an AI chatbot for HR self-service, it should be clearly labelled as such, as we have done with AMI.
It would be unlawful to try to pass off a chatbot as a real human. Employees must know clearly, and at all times, when they are chatting with an HR bot.
AI-assisted decision-making is considered high-risk
AI-assisted HR decision-making is coming under scrutiny because it is considered a high-risk activity under the Act. The framework sets strict requirements for AI scoring or selection systems, particularly in recruitment, but these would arguably extend to areas such as AI-assisted performance assessment or redundancy selection software.
HR managers with immediate EU exposure can breathe a small sigh of relief, as businesses have until August 2026 to comply. Moreover, the Act is not about banning these systems but about providing a governance framework, which includes safeguards such as human oversight, data integrity, information clarity and risk mitigation.
February 2025 ban on AI systems posing an unacceptable risk
From February 2025, the EU AI Act will prohibit activities which pose an unacceptable risk.
Some of the prohibitions will probably not impact HR, as they relate to AI-powered social scoring systems, normally deployed by governments and applied to their citizens, such as the Chinese social credit system.
However, some of the AI areas facing an outright EU ban are likely to affect the corporate sphere. From February 2025, AI-powered biometric categorisation systems, facial recognition and fingerprint databases will be prohibited in the EU. Notably, UK businesses deploy AI-assisted biometrics, and the UK Information Commissioner accepts this practice and has published guidance on how to do it safely and ethically.
This represents a conflicting employment law scenario between EU and UK AI policy that HR managers with EU offices may need to navigate come February 2025.
It’s worth pointing out that the EU is not alone in regulating AI: over 20 countries, including the UK, have some form of AI regulation.
It’s also worth noting that the EU AI Act has so far been drafted in quite high-level terms, with detailed guidance to follow over the coming months. This explains the staggered enforcement dates over the next two years. You can find more information on the EU AI Act on the EU website.