EU AI Act Delay on the Table: How HR Leaders Should Prepare for Either Outcome

Digital Omnibus shifts timelines, not EU AI Act HR compliance duties

The proposed Digital Omnibus would postpone some high-risk AI obligations for HR systems into a later implementation window, but it does not alter the core risk-based architecture of the EU AI Act set out in Regulation (EU) 2024/1689 of 13 June 2024. For HR compliance and ethics officers, the key message is that recruitment, performance evaluation and worker monitoring tools will still be classified as high-risk systems under Annex III, even if detailed harmonized standards for those systems arrive closer to December 2027 or August 2028. Aligning HR technology with the AI Act therefore remains a strategic priority for European companies that already deploy artificial intelligence in talent acquisition, productivity tracking or internal mobility.

Under the current law, deployers of high-risk HR AI systems face an August 2, 2026 deadline for full compliance with obligations on data quality, documentation, human oversight and transparency toward affected workers, as set out in Articles 9, 10, 13 and 14 of Regulation (EU) 2024/1689. The Digital Omnibus proposal would allow the European Commission to defer some of these obligations if harmonized standards for high-risk systems and general-purpose AI models are not ready in time, but the underlying legal duties in the regulation remain in force. HR leaders should treat any delay as breathing space to strengthen governance rather than as a signal to slow down work on AI system mapping, internal policies and vendor due diligence.

The European Commission has been explicit in its communications that the AI Act is designed as a risk-based framework, which means that the higher the potential impact on fundamental rights, the stricter the obligations on deployers and providers. Recruitment chatbots, automated CV screening tools and performance scoring systems will almost always fall into the high-risk category because they affect access to work, career progression and workplace surveillance. Even if the Digital Omnibus shifts exact dates, HR teams will still need to document their systems, classify each AI application by risk level and ensure that prohibited practices such as emotion recognition in the workplace are not used by any vendor. A short internal primer that explains the high-risk category, core duties for deployers and the role of general-purpose AI components can help non-legal stakeholders understand why HR tools are treated as sensitive systems under the AI Act.

Article 26(7) consultation and two-track planning for HR offices

While lobbying discussions about the Digital Omnibus focus on high-risk deadlines, Article 26(7) of the EU AI Act already requires consultation with employee representatives before deploying high-risk HR AI systems. This consultation duty applies regardless of whether the technical standards for high-risk systems arrive in 2026 or later, which means HR compliance teams in every European office must build structured dialogue with works councils and unions now. Governance of AI in HR therefore has a dual nature; it is both a legal documentation exercise and a human oversight process that must be co-designed with the workforce.

Article 26(7) expects companies to explain the purpose of each AI system, the categories of data used, the logic of the models where possible and the safeguards that will protect fundamental rights such as non-discrimination and privacy. In practice, this means HR offices need clear, non-technical summaries of how recruitment scoring tools, productivity analytics platforms or workplace monitoring systems will operate, including how generated content such as automated interview notes or performance summaries will be reviewed by humans. A short consultation checklist can help: describe the use case and data sources, clarify whether any emotion recognition or behavioral scoring is involved, outline human review steps, and record questions, objections and agreed mitigations from employee representatives. HR teams can turn this into a simple template with four columns: system description, data and model logic, human oversight measures, and consultation outcomes.
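The four-column template described above can be sketched as a simple record type. This is only an illustrative structure; the class and field names are assumptions for the sketch, not terminology from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class ConsultationRecord:
    """One row of a hypothetical Article 26(7) consultation log."""
    system_description: str    # use case and data sources
    data_and_model_logic: str  # data categories and model logic, where explainable
    human_oversight: str       # human review steps before decisions take effect
    consultation_outcomes: list = field(default_factory=list)  # questions, objections, agreed mitigations

    def is_complete(self) -> bool:
        # Ready to file only when all four columns are filled in.
        return all([self.system_description, self.data_and_model_logic,
                    self.human_oversight, self.consultation_outcomes])

record = ConsultationRecord(
    system_description="CV screening tool ranking applicants for engineering roles",
    data_and_model_logic="CV text and application data; gradient-boosted ranking model",
    human_oversight="Recruiter reviews every shortlist before rejection emails are sent",
)
record.consultation_outcomes.append("Works council requested quarterly bias reports; agreed")
```

Whether kept in a spreadsheet or a small tool like this, the point is the same: an incomplete record signals that consultation has not finished.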

To manage uncertainty about timelines, HR leaders should adopt a two-track plan that integrates consultation into documentation and deployment readiness. On the first track, teams map all AI systems used in HR, classify them as high-risk or lower risk, and build core documentation such as risk assessments, human oversight procedures and transparency notices for workers in all relevant Member States. On the second track, they monitor Digital Omnibus negotiations, prepare to align with future harmonized standards for general-purpose AI (GPAI) models, and adjust technical controls once the European Commission finalizes the detailed code of practice for GPAI models used in HR workflows. This dual-track approach allows HR offices to move ahead on governance while still being able to fine-tune technical and contractual controls when secondary legislation and standards become available.
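A minimal sketch of the first-track inventory pass, assuming a hypothetical internal tool list and a deliberately conservative rule that treats any Annex III employment use as high-risk:

```python
# Illustrative triage only, not legal advice: the use-case labels loosely
# mirror the Annex III employment category, and anything outside the list
# is flagged for manual legal review rather than auto-cleared.
HIGH_RISK_HR_USES = {
    "recruitment", "promotion", "termination",
    "task_allocation", "performance_evaluation", "monitoring",
}

def classify(uses: set) -> str:
    """Assume high-risk whenever a tool touches an Annex III employment use."""
    return "high-risk" if uses & HIGH_RISK_HR_USES else "review-needed"

# Hypothetical inventory of HR tools and the uses they touch
inventory = {
    "cv_screener": {"recruitment"},
    "productivity_dashboard": {"monitoring", "performance_evaluation"},
    "meeting_scheduler": {"calendar_assist"},
}
classified = {name: classify(uses) for name, uses in inventory.items()}
```

The conservative default matters: under this rule no tool is ever silently cleared, which matches the article's advice to assume high-risk classification unless a legal analysis shows otherwise.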

Global employers, GPAI models and vendor contract red flags

US-headquartered employers with staff in the European Union cannot treat AI Act obligations for HR as a purely European issue, because the law applies to deployers of high-risk HR AI systems wherever their main office sits. If a US company uses a global recruitment platform or performance management tool that screens EU-based candidates or employees, that company becomes a deployer of a high-risk system under the AI Act. These global companies will need to align their internal governance, risk and compliance frameworks so that HR teams in New York, Dublin and Berlin apply the same standards to high-risk systems and human oversight.

Many HR tools now embed general-purpose AI (GPAI) models to generate interview questions, summarize candidate profiles or flag potential misconduct, which raises specific transparency obligations under the AI Act for both providers and deployers. Vendors sometimes market these features as harmless productivity assistants, but HR compliance officers should ask whether the generated content influences hiring or firing decisions and whether any emotion recognition or behavioral scoring is involved. Contracts that deny access to model documentation, restrict audit rights or shift all legal risk to the customer are red flags that should block signature, especially when prohibited practices or opaque high-risk systems might be hidden behind friendly user interfaces.

When reviewing contracts, HR and legal teams should insist on clear allocation of obligations between provider and deployer, explicit confirmation that no prohibited practices are used and detailed descriptions of how human oversight is technically enabled in the system. They should also require commitments that any GPAI components comply with the European Commission code of practice for GPAI models, including safeguards for fundamental rights and mechanisms to handle law enforcement requests without turning workplace tools into covert surveillance systems. A practical clause might state that the provider will supply up-to-date technical documentation, support independent audits on reasonable notice and promptly inform the customer of any material changes affecting compliance with Articles 9, 10, 13, 14, 26(7) or 99 of Regulation (EU) 2024/1689. A disciplined, forward-looking approach to vendor governance will help companies avoid rushed purchases, align with the risk-based logic of the AI Act and build trustworthy artificial intelligence practices across all HR systems.
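A contract review checklist of this kind can be turned into a simple triage script. The clause keys and descriptions below are assumptions for illustration, not standard contract language:

```python
# Hypothetical red-flag checklist based on the clauses discussed above.
RED_FLAGS = {
    "denies_documentation_access": "No access to model or technical documentation",
    "prohibits_audits": "Independent audits restricted or excluded",
    "shifts_all_liability": "All EU AI Act risk shifted to the deployer",
    "silent_on_prohibited_practices": "No confirmation that prohibited practices are excluded",
}

def review_contract(clauses: dict) -> list:
    """Return the red flags present in a draft contract.

    A clause the draft does not address defaults to True:
    silence is itself treated as a flag.
    """
    return [desc for key, desc in RED_FLAGS.items() if clauses.get(key, True)]

draft = {
    "denies_documentation_access": False,
    "prohibits_audits": True,
    "shifts_all_liability": False,
    # "silent_on_prohibited_practices" is not addressed at all
}
findings = review_contract(draft)
```

Treating silence as a flag mirrors the point made above: contracts that simply do not mention prohibited practices or audit rights deserve the same scrutiny as ones that restrict them.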

Key quantitative signals for EU AI Act HR compliance

  • Current deadline for deployers of high-risk HR AI systems such as recruitment, evaluation and monitoring tools is set for early August 2026, with potential Digital Omnibus extensions only if harmonized standards are delayed, according to Regulation (EU) 2024/1689.
  • Proposed fallback dates under discussion would move some high-risk AI obligations to December 2027 or August 2028, depending on the readiness of technical standards for high-risk systems and GPAI models in the EU standardization process.
  • Fines for non-compliant deployers of high-risk systems can reach up to EUR 15 million or 3% of global annual turnover, whichever is higher, under the enforcement regime of the EU AI Act in Article 99(4) of Regulation (EU) 2024/1689.
  • Use of prohibited practices, including emotion recognition in the workplace or unjustified law-enforcement-style monitoring, can trigger penalties up to EUR 35 million or 7% of global turnover, as set out in Article 99(3).
  • A survey by the Center for Data Innovation in 2023 reported that fewer than 30% of EU small and medium-sized enterprises had started structured AI compliance work, highlighting a significant preparation gap for HR functions and underlining the need for early planning.
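The "whichever is higher" logic behind these caps can be made concrete with a small helper. This is a back-of-the-envelope exposure estimate under the figures quoted above, not a prediction of actual penalties:

```python
def max_fine_eur(global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the administrative fine for a deployer."""
    if prohibited_practice:
        # Prohibited practices: EUR 35 million or 7% of global turnover
        return max(35_000_000, 0.07 * global_turnover_eur)
    # Other high-risk non-compliance: EUR 15 million or 3% of global turnover
    return max(15_000_000, 0.03 * global_turnover_eur)

# For a deployer with EUR 2 billion turnover, 3% (EUR 60 million)
# exceeds the EUR 15 million floor.
exposure = max_fine_eur(2_000_000_000, prohibited_practice=False)
```

The turnover-linked branch is what makes the caps bite for large employers: the fixed floor only governs exposure for companies below roughly EUR 500 million in turnover on the 3% tier.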

Questions HR leaders also ask about EU AI Act HR compliance

How does the EU AI Act classify HR technologies as high-risk systems?

The EU AI Act treats AI systems used for recruitment, promotion, performance evaluation and worker monitoring as high-risk because they can significantly affect access to employment and working conditions. Any system that automates or materially supports these decisions, including tools powered by general-purpose AI (GPAI) models, falls under strict obligations on data quality, documentation, transparency and human oversight. HR leaders should therefore map every tool that influences hiring, firing or evaluation outcomes and assume high-risk classification unless a detailed legal analysis shows otherwise, using Annex III of Regulation (EU) 2024/1689 as the starting reference.

What should an HR office include in its AI risk and governance framework?

An effective HR AI governance framework starts with a complete inventory of systems, including GPAI models and components embedded inside larger platforms. It should define roles and responsibilities for legal, HR, IT and data protection teams, set out procedures for risk assessment, and describe how human oversight will operate in practice for each high-risk system. The framework must also address transparency obligations toward employees, safeguards for fundamental rights, escalation channels for complaints and clear rules on avoiding prohibited practices such as unjustified emotion recognition or covert law-enforcement-style monitoring. Many organizations capture these elements in a concise AI policy, a risk register for HR tools and a standard operating procedure for reviewing new systems before deployment.
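One way to operationalize such a standard operating procedure is a pre-deployment gate that blocks rollout until every governance artifact is recorded. The artifact names here are invented for the sketch, not a list mandated by the AI Act:

```python
# Illustrative pre-deployment gate for a risk register entry.
REQUIRED_ARTIFACTS = [
    "risk_assessment",
    "human_oversight_procedure",
    "worker_transparency_notice",
    "consultation_record",
]

def ready_to_deploy(register_entry: dict) -> bool:
    """True only when every required artifact has a non-empty reference."""
    return all(register_entry.get(a) for a in REQUIRED_ARTIFACTS)

# Hypothetical register entry: one artifact is still outstanding.
entry = {
    "system": "performance_review_assistant",
    "risk_assessment": "RA-2025-014",
    "human_oversight_procedure": "SOP-HR-07",
    "worker_transparency_notice": None,  # still outstanding
    "consultation_record": "CR-2025-031",
}
```

The value of a gate like this is less the automation than the audit trail: each artifact reference points at a document that supervisory authorities or works councils can later inspect.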

How are US-based employers with EU staff affected by EU AI Act HR compliance?

US-based employers that use AI systems to manage EU-based candidates or employees become deployers under the EU AI Act, regardless of where their main office is located. They must comply with obligations for high-risk HR systems, including documentation, risk management, human oversight and cooperation with European supervisory authorities in relevant Member States. This often requires aligning global HR policies with European legal standards, renegotiating vendor contracts and ensuring that generated content from GPAI models does not introduce hidden bias or prohibited practices into HR decisions.

What are the most critical red flags in HR AI vendor contracts under the EU AI Act?

Key red flags include clauses that deny customers access to technical documentation about the AI system, prohibit independent audits, or shift all legal responsibility for EU AI Act HR compliance onto the deployer without meaningful support from the provider. Contracts that are silent on prohibited practices, emotion recognition features, or the use of general-purpose AI (GPAI) models in decision making should also trigger concern. HR and legal teams should insist on clear descriptions of high-risk systems, explicit commitments on transparency obligations and concrete mechanisms for human oversight before signing any agreement, ideally supported by a standard contract review checklist that flags missing safeguards.

How should HR teams handle employee consultation under Article 26(7) ?

Article 26(7) requires employers to consult employee representatives before deploying high-risk HR AI systems, which means early and structured engagement with works councils or unions. HR teams should present the purpose of each system, explain how models work at a high level, describe what generated content will be used for and outline safeguards for fundamental rights such as privacy and non-discrimination. Consultation should be treated as an ongoing governance process, with regular updates, feedback loops and clear channels for employees to raise concerns about high-risk systems, human oversight gaps or potential prohibited practices. Recording minutes of each consultation, agreed mitigation measures and follow-up actions will also help demonstrate compliance if supervisory authorities later review the deployment.
