The European Union's Artificial Intelligence Act (AI Act) is a landmark piece of legislation set to fundamentally reshape how businesses develop, deploy, and govern AI. If your company uses any form of AI for workforce management—from recruitment algorithms to performance trackers—this law will have direct, significant, and legally binding implications for you.
For companies seeking to navigate this complex new landscape, understanding the Act isn't just about compliance; it's about future-proofing your operations and building a framework of trustworthy AI. This article breaks down exactly what you need to know and how to prepare.
As a Deployer of AI (the legal term for an organization using an AI system), the burden of compliance rests heavily on your shoulders. The risks of non-compliance are severe, but with expert guidance, this transition can be a strategic opportunity.
The EU AI Act: A Risk-Based Framework
The Act categorizes all AI systems based on their potential risk to human rights and safety. Your employee monitoring tools will fall into one of two critical categories: Prohibited or High-Risk.
1. Prohibited AI: What is Banned Outright?
Certain AI practices are deemed to pose an "unacceptable risk" and are banned entirely in the workplace. If your current or planned systems perform any of these functions, their use must cease immediately to avoid the most severe penalties.
Prohibited practices in the workplace include:
- Emotion Recognition: Using AI to infer the emotions or state of mind of employees (e.g., analyzing facial expressions in video meetings to gauge engagement or stress).
- Biometric Categorization: Using biometric data (like face scans or fingerprints) to categorize employees based on sensitive attributes such as race, political opinions, trade union membership, or sexual orientation.
- Social Scoring: Evaluating or classifying employees based on their social behavior or personal characteristics, where that score leads to detrimental or unrelated treatment (e.g., demoting someone based on their out-of-work social media posts).
- Subliminal Manipulation: Deploying systems that use manipulative or deceptive techniques to distort an employee's behavior in a way that could cause harm.
2. High-Risk AI: The Default for Most HR Tech
This is the most critical category for employers. The AI Act explicitly classifies most AI systems used in "employment, workers management and access to self-employment" as high-risk.
If your company uses an AI system for any of the following, it is high-risk and subject to strict compliance rules:
- Recruitment and Selection: AI used to filter CVs, screen candidates, or evaluate applicants during interviews.
- Promotion and Termination: AI used to make or support decisions about promotions, demotions, or terminating contracts.
- Performance and Behavior Monitoring: Any system that monitors, evaluates, or rates employee performance, productivity, or behavior (e.g., task-tracking software, keyboard activity monitors, or automated performance reviews).
- Task Allocation: AI systems that automatically assign tasks or shifts to workers based on their behavior, personal data, or predictions.
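The two-tier framework above can be sketched as a simple lookup. This is an illustrative mapping only — the use-case keys and tier names are assumptions for the example, and the real legal classification of any given system must be assessed against the Act's text, not a table like this:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright (Article 5 practices)
    HIGH_RISK = "high-risk"     # permitted, but subject to strict obligations

# Illustrative workplace use cases mapped to the tiers described above.
# Key names are hypothetical labels, not terms defined in the Act.
USE_CASE_TIERS = {
    "emotion_recognition": RiskTier.PROHIBITED,
    "biometric_categorisation": RiskTier.PROHIBITED,
    "social_scoring": RiskTier.PROHIBITED,
    "subliminal_manipulation": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH_RISK,
    "promotion_termination": RiskTier.HIGH_RISK,
    "performance_monitoring": RiskTier.HIGH_RISK,
    "task_allocation": RiskTier.HIGH_RISK,
}

def classify(use_case: str):
    """Return the tier for a known workplace use case, else None."""
    return USE_CASE_TIERS.get(use_case)
```

A tool that returns no tier here is not automatically safe: it may still fall under other Annex III categories or under GDPR rules, which is why a full inventory and legal review matter.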
Your Obligations as a "Deployer" of High-Risk AI
As a company using a high-risk AI system, you have a set of legally binding obligations, distinct from the company that built the AI. Your key responsibilities include:
- Conducting a Fundamental Rights Impact Assessment (FRIA): Before you deploy any high-risk AI system, you must conduct and document an FRIA. This is a complex assessment of the system's potential negative impact on employees' fundamental rights (privacy, non-discrimination, etc.) and the steps you will take to mitigate those risks.
- Ensuring Human Oversight: You must appoint and train competent staff to effectively oversee the AI system. The AI's recommendation cannot be the final word. A human must always be able to interpret the system's output, question it, and ultimately override its decision.
- Monitoring and Data Governance: You are responsible for monitoring the system while it's in operation to ensure it functions as intended. You must also ensure that the input data you use (your employee data) is relevant, high-quality, and not feeding the system with biases.
- Transparency: You must inform employees when they are interacting with or being subject to a high-risk AI system.
- Record-Keeping: You must maintain the system's logs to ensure traceability, which is crucial for incident investigations or audits.
The High Cost of Non-Compliance
The penalties for violating the AI Act are staggering and designed to ensure compliance. They are applied in tiers based on the severity of the infringement, with the cap for each tier set at whichever amount is higher:
| Violation Type | Maximum Fine |
|---|---|
| Using Prohibited AI (Article 5) | Up to €35 million or 7% of global annual turnover |
| Non-compliance for High-Risk AI | Up to €15 million or 3% of global annual turnover |
| Supplying incorrect information | Up to €7.5 million or 1% of global annual turnover |
These fines demonstrate that regulators view non-compliance as a major corporate failure, not a minor cost of doing business.
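The "whichever is higher" rule in the tiers above means large companies cannot treat the fixed caps as a ceiling. A minimal sketch of the arithmetic (for a non-SME undertaking; figures below are hypothetical examples, not advice):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float) -> float:
    """Maximum fine for a tier: the higher of the fixed cap and the
    percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-AI tier for a company with EUR 2 bn global turnover:
fine = max_fine(35_000_000, 0.07, 2_000_000_000)
# 7% of EUR 2 bn is EUR 140 m, which exceeds the EUR 35 m fixed cap.
```

For a smaller firm with, say, EUR 100 m turnover, 7% is only EUR 7 m, so the EUR 35 m fixed cap becomes the operative maximum instead.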
How Formiti Data International Can Be Your Trusted Partner
The EU AI Act is not a one-time IT problem; it is an ongoing governance, risk, and compliance challenge that sits at the intersection of data, law, and ethics. This is where Formiti Data International provides critical value.
We partner with companies to move beyond simple compliance and build robust, ethical AI governance frameworks.
Our Expert Services:
- AI Audit & Risk Classification: We start by conducting a full inventory of your current and planned AI systems. We identify which tools are in use (even "shadow AI" in different departments) and legally classify them under the Act's risk framework.
- Fundamental Rights Impact Assessments (FRIA): This is the most urgent and complex new requirement for deployers. Our team of experts will conduct this mandatory assessment for you, delivering the necessary documentation to prove compliance and mitigate your risk before deployment.
- AI Governance Framework Development: We don't just hand you a report. We help you build a lasting, cross-functional AI Governance Committee (involving HR, Legal, IT, and Data teams) and establish the policies, procedures, and training programs for ongoing compliance.
- Vendor Compliance & Contract Review: Are your software providers AI Act compliant? We audit your AI vendors' documentation and help you amend contracts to ensure they meet their legal obligations as "Providers," protecting you from supply chain risk.
- Staff Training & AI Literacy: We deliver targeted training to your HR teams, managers, and appointed "human overseers" to ensure they are competent in managing and interpreting AI outputs responsibly.
❓ Frequently Asked Questions (Q&A)
Q: We only use a third-party SaaS tool for recruitment. Does this still apply to us?
A: Yes. The company that built the tool is the "Provider," but as the company using it for your own hiring purposes, you are the "Deployer." You have your own distinct legal obligations, including conducting a Fundamental Rights Impact Assessment (FRIA) and ensuring human oversight.
Q: What is the single most important thing we should do right now?
A: Start an AI inventory. You cannot comply with the law if you don't know what AI systems you are using. Identify all tools used in HR and worker management, from the obvious (recruitment software) to the less obvious (productivity plug-ins, scheduling tools). This is the essential first step that Formiti can help you lead.
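An inventory only works if every entry captures the same facts. A minimal sketch of what one inventory record might hold — all field names here are hypothetical and should be adapted to your own governance process:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    # Hypothetical record structure for an AI-system inventory.
    system_name: str
    vendor: str
    business_owner: str               # e.g. "HR", "Talent Acquisition"
    use_case: str                     # e.g. "CV screening"
    processes_employee_data: bool
    risk_tier: str = "unclassified"   # filled in after legal review
    notes: list = field(default_factory=list)

# One entry for an illustrative, made-up recruitment tool:
inventory = [
    AIInventoryEntry("ResumeRanker", "ExampleVendor", "HR",
                     "CV screening", True),
]
```

Starting every entry as "unclassified" forces an explicit legal review before any system is assumed to be out of scope.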
Q: Does the AI Act replace GDPR?
A: No, it supplements it. You must still comply with all GDPR requirements for processing personal data. The AI Act adds a new, specific layer of rules on top of GDPR that governs the function and risk of the AI system itself, particularly around bias, transparency, and human oversight.
Q: We are not based in the EU, but we have employees there. Does this Act affect us?
A: Yes. The AI Act has extraterritorial scope. If your company operates in the EU or if the output of your AI system (e.g., a hiring or performance decision) is used on people within the EU, you are subject to the Act.
The EU AI Act is complex, but the path to compliance is clear. It requires proactive, expert-led governance.
