The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Published in the Official Journal in July 2024, it entered into force on 1 August 2024, with its obligations applying in stages. It is far from an abstract tech topic: for manufacturing companies, it directly impacts the use of AI in quality inspection, process control, predictive maintenance, and production planning. Any company using or planning to deploy AI in the factory must understand the risk classification of its systems and fulfill the corresponding legal obligations.
The EU AI Act categorizes AI systems into four risk levels based on their potential impact:
Unacceptable risk: includes AI for real-time remote biometric identification in publicly accessible spaces, social scoring, and manipulative AI practices. These prohibitions rarely play a direct role in industrial production.
High risk: this is the most critical category for manufacturing. High-risk systems are subject to extensive obligations before they can be placed on the market or put into service. In production, this primarily includes AI used as a safety component in machines (under the EU Machinery Regulation), AI in products requiring CE marking, and AI in critical infrastructure.
Limited risk: systems like chatbots or AI-generated content are subject to transparency obligations and must be identifiable as AI. This is generally less relevant on industrial shop floors.
Minimal risk: the majority of industrial AI applications fall into this category, such as AI-supported production optimization, anomaly detection in machine data, or AI-driven maintenance planning, provided they do not perform direct safety functions.
A system is classified as high-risk if it serves as a safety component in a product covered by EU harmonisation legislation (such as the Machinery Regulation) or falls within one of the use cases listed in Annex III of the Act, which includes the management of critical infrastructure.
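The classification test above can be sketched as a simple decision helper. This is an illustrative simplification only, not official tooling: the class name, its fields, and the coarse three-way outcome are all hypothetical, and a real assessment requires legal review of Annex I and Annex III.

```python
# Illustrative sketch of the AI Act risk triage described above.
# All names here (AISystemProfile, classify_risk) are hypothetical.
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    is_prohibited_practice: bool       # e.g. social scoring
    is_safety_component: bool          # safety component of a regulated product
    in_critical_infrastructure: bool   # Annex III use case


def classify_risk(profile: AISystemProfile) -> str:
    """Map a system profile to a coarse AI Act risk category."""
    if profile.is_prohibited_practice:
        return "unacceptable"
    if profile.is_safety_component or profile.in_critical_infrastructure:
        return "high"
    # Transparency duties (limited risk) may still apply separately.
    return "minimal"


# Example: an AI vision system that triggers a machine's emergency stop
# acts as a safety component and therefore lands in the high-risk bucket.
print(classify_risk(AISystemProfile(False, True, False)))  # → high
```

In practice such a helper would at most pre-screen an AI inventory; the actual legal classification depends on the full Annex III list and the applicable product legislation.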
The EU AI Act distinguishes between Providers (those who develop an AI system or place it on the market) and Deployers/Operators (those who use AI in their operations); each role carries its own set of obligations.
The EU AI Act is being rolled out in stages:
- 1 August 2024: entry into force
- 2 February 2025: prohibitions on unacceptable-risk practices apply
- 2 August 2025: rules for general-purpose AI models and the governance framework apply
- 2 August 2026: the bulk of the Act, including most high-risk obligations, applies
- 2 August 2027: extended transition period ends for high-risk AI embedded in regulated products
MES platforms that integrate AI for predictive quality or automated anomaly detection must be evaluated carefully. The central principle is human oversight: if the AI only provides recommendations that a human must confirm, the risk profile is significantly lower than that of a fully automated decision system. This "human-in-the-loop" approach is the recommended design for AI Act-compliant MES solutions.
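The human-in-the-loop pattern can be sketched as follows. This is a minimal illustration of the design principle, not a real MES API: the `Recommendation` type, the callback names, and the example approval rule are all invented for the sketch, and in a real system the approval callback would be an operator's explicit confirmation, not an automatic rule.

```python
# Minimal sketch of a human-in-the-loop pattern for an AI-assisted MES:
# the AI only proposes an action; nothing executes without human approval.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Recommendation:
    action: str        # e.g. "adjust feed rate", "flag part for rework"
    confidence: float  # model confidence in [0, 1]


def execute_with_oversight(
    rec: Recommendation,
    human_approves: Callable[[Recommendation], bool],
    execute: Callable[[str], None],
) -> bool:
    """Execute the recommended action only after explicit human approval."""
    if human_approves(rec):
        execute(rec.action)
        return True
    return False  # recommendation is recorded but never acted upon


# Usage: the approval callback would normally be a UI confirmation dialog;
# the lambda below is just a stand-in so the sketch is runnable.
executed = execute_with_oversight(
    Recommendation("adjust feed rate", 0.92),
    human_approves=lambda r: r.confidence >= 0.9,  # stand-in for operator sign-off
    execute=lambda action: print(f"executing: {action}"),
)
```

The key design point is that the execution path is unreachable without the approval callback returning true, which keeps the final decision with a human rather than the model.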
As a Deployer (Operator), the company must ensure the AI is used as intended and that human oversight is maintained. Technical compliance of the software itself is the responsibility of the Provider. Manufacturers should demand conformity documentation and technical specs from their software vendors, similar to CE certifications.
If you develop an AI system in-house for your own use, you are considered both the Provider and the Deployer, and you must fulfill both sets of obligations if the system is classified as high-risk.
The Machinery Regulation governs the overall safety of machinery, including AI components used for safety functions, while the AI Act regulates AI systems as such. An AI-controlled safety component in a machine must therefore comply with both regulations.