EU AI Act: Regulating Artificial Intelligence in Manufacturing
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for Artificial Intelligence. Published in the Official Journal in July 2024, it entered into force in August 2024 and applies in stages. It is far from an abstract tech topic: for manufacturing companies, it directly impacts the use of AI in quality inspection, process control, predictive maintenance, and production planning. Any company using or planning to deploy AI in the factory must understand the risk classification of its systems and fulfill the corresponding legal obligations.
The Risk Classification System
The EU AI Act categorizes AI systems into four risk levels based on their potential impact:
1. Unacceptable Risk – Prohibited
Includes AI for real-time biometric surveillance in public spaces, social scoring, or manipulative AI. These prohibitions rarely play a direct role in industrial production.
2. High-Risk – Strictly Regulated
This is the most critical category for manufacturing. High-risk systems are subject to extensive obligations before they can be placed on the market or put into service. In production, this primarily includes AI used as a safety component in machines (under the EU Machinery Regulation), AI in products requiring CE marking, and AI in critical infrastructure.
3. Limited Risk – Transparency Obligations
Systems like chatbots or AI-generated content must be identifiable as AI. This is generally less relevant for industrial shop floors.
4. Minimal Risk – No Specific Obligations
The majority of industrial AI applications fall into this category—such as AI-supported production optimization, anomaly detection in machine data, or AI-driven maintenance planning, provided they do not perform direct safety functions.
What Qualifies as "High-Risk AI" in Production?
A system is classified as high-risk if it serves as a safety component in a regulated product (Annex I of the Act) or falls into one of the use cases listed in Annex III, such as critical infrastructure.
- Potential High-Risk Examples: AI making automated quality decisions for safety-critical components (e.g., in automotive or medical device manufacturing) or AI-driven control systems in energy/water supply.
- Non-High-Risk Examples: AI systems that optimize production parameters, predict maintenance needs, or automate OEE analysis are generally not high-risk—as long as a human remains the final decision-maker.
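The triage described above can be sketched as a small decision function. This is an illustrative simplification, not legal advice: the class names and boolean flags are invented for this example, and a real assessment must follow Annexes I and III of the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strictly regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

@dataclass
class AIUseCase:
    name: str
    is_safety_component: bool        # safety function in a CE-marked product (Annex I)
    in_critical_infrastructure: bool  # e.g. energy/water supply control (Annex III)
    presents_itself_as_ai: bool       # chatbot or AI-generated content
    human_final_decision: bool        # a person confirms before action is taken

def classify(use_case: AIUseCase) -> RiskTier:
    """Simplified triage mirroring the article's decision logic."""
    if use_case.is_safety_component or use_case.in_critical_infrastructure:
        return RiskTier.HIGH
    if use_case.presents_itself_as_ai:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# AI-supported OEE analysis with a human as final decision-maker
oee = AIUseCase("OEE analysis", False, False, False, True)
print(classify(oee))  # RiskTier.MINIMAL
```

Note that the `human_final_decision` flag does not by itself change the tier here; it matters in practice because fully automated quality decisions on safety-critical parts push a system toward the high-risk category, as the examples above indicate.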
Obligations for High-Risk AI: Providers vs. Operators
The EU AI Act distinguishes between Providers (those who develop AI) and Deployers/Operators (those who use AI in their operations).
Obligations for Providers:
- Establish a Risk Management System for the entire lifecycle.
- Ensure training, validation, and testing datasets are relevant, representative, and, to the best extent possible, free of errors and complete.
- Provide detailed Technical Documentation and transparency for operators.
- Design the system to allow for effective Human Oversight.
- Complete a Conformity Assessment and affix the CE marking.
Obligations for Operators (Deployers):
- Implement technical and organizational measures to ensure the AI is used according to its purpose.
- Ensure oversight by qualified personnel.
- Monitor the AI system during operation and report serious incidents.
- Conduct a Data Protection Impact Assessment (DPIA) if personal data is processed.
Compliance Timeline: Key Dates
The EU AI Act is being rolled out in stages:
- February 2025: Prohibitions on unacceptable risks take effect.
- August 2025: Obligations for providers of General-Purpose AI (GPAI) and governance structures start.
- August 2026: The bulk of the Act applies, including obligations for standalone high-risk AI systems listed in Annex III (e.g., critical infrastructure).
- August 2027: High-risk obligations for AI embedded as a safety component in regulated products (e.g., machinery under Annex I) apply. For manufacturers using AI in machine safety functions, this is the most critical deadline.
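For planning purposes, the staged application dates can be captured as a simple lookup. The milestone wording is condensed here; the dates reflect the Act's staged application (2 February 2025 and 2 August of 2025, 2026, and 2027 respectively).

```python
from datetime import date

# Staged application dates of the EU AI Act (descriptions condensed)
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI"),
    (date(2025, 8, 2), "GPAI provider obligations and governance structures"),
    (date(2026, 8, 2), "General application, incl. Annex III high-risk systems"),
    (date(2027, 8, 2), "High-risk AI embedded in regulated products (Annex I)"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [desc for d, desc in MILESTONES if d <= today]
```

A compliance roadmap would work backwards from these dates: conformity assessment and technical documentation take months, so Annex I product manufacturers cannot wait until 2027 to start.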
AI in MES and the Principle of Human Oversight
MES platforms integrating AI for Predictive Quality or automated anomaly detection must be evaluated carefully. The central principle is Human Oversight: If the AI only provides recommendations that a human must confirm, the risk profile is significantly lower than that of fully automated decision systems. This "human-in-the-loop" approach is the recommended design for AI-compliant MES solutions.
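The human-in-the-loop gate described above can be sketched as follows. All names are hypothetical: this is neither a real MES API nor code mandated by the Act, only an illustration of the design principle that the AI recommends and a qualified person decides.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    part_id: str
    verdict: str      # e.g. "scrap" or "pass" suggested by the AI model
    confidence: float

def decide(rec: Recommendation,
           operator_confirms: Callable[[Recommendation], bool]) -> str:
    """Human-in-the-loop gate: the AI only recommends; a qualified
    person must confirm before any action is taken on the part.
    A rejected recommendation is escalated, never auto-applied."""
    if operator_confirms(rec):
        return rec.verdict
    return "manual_inspection"

# In a real MES, `operator_confirms` would be a UI prompt shown to a
# qualified operator; the lambda below is only a test stand-in.
print(decide(Recommendation("P-1042", "scrap", 0.97), lambda r: False))
# -> manual_inspection
```

The design choice is the key point: because every AI output passes through `operator_confirms` before taking effect, the system remains a decision-support tool rather than a fully automated decision system, which keeps its risk profile lower.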
FAQ
Does a manufacturer buying standard AI software need to be compliant?
As a Deployer (Operator), the company must ensure the AI is used as intended and that human oversight is maintained. Technical compliance of the software itself is the responsibility of the Provider. Manufacturers should demand conformity documentation and technical specs from their software vendors, similar to CE certifications.
Does the AI Act apply to internally developed AI?
Yes. If you develop an AI system in-house for your own use, you are considered both the Provider and the Operator and must fulfill both sets of obligations if the system is classified as High-Risk.
What is the difference between the AI Act and the EU Machinery Regulation (EU) 2023/1230?
The Machinery Regulation regulates the overall safety of machines, including AI components used for safety functions. The AI Act regulates AI as a technology. For an AI-controlled safety component in a machine, both regulations must be met.