MES Software: Vendors, Features & Costs Compared 2026
MES software compared: vendors, functions per VDI 5600, costs (cloud vs. on-premise) and implementation. Honest market overview 2026.
TL;DR: Manufacturing data is the full set of production-relevant information generated by machines, processes, products, orders, materials and people on the shop floor. It is not one data type — it is six, each with different cadences (milliseconds to days), volumes (kilobytes to terabytes per plant per year), latency requirements and retention windows. The single largest architectural mistake in manufacturing digitisation is treating all of this as one homogeneous bucket in a generic "data lake." The categories behave differently and need to be handled differently — at the edge, in transit, and at rest.
Manufacturing data is every production-relevant signal, measurement, event, transaction and context record generated while goods are being produced — from the PLC tag that fires every 100 milliseconds to the ERP production order that changes state twice a shift. The term is an umbrella, not a single data type, and its scope is defined operationally by what a plant needs to decide, document, improve or prove.
ISA-95 provides the reference architecture for where this data lives and how it moves. Levels 0–2 produce sensor, control and supervisory data in real time. Level 3 — the MES layer — aggregates, contextualises and enriches that data into the operational records used for scheduling, performance, quality and traceability. Level 4 — the ERP — consumes and produces the commercial-layer data: orders, materials, costs. Manufacturing data is what fills every one of those levels; calling it all "manufacturing data" is accurate but architecturally unhelpful unless you break it down.
Every useful discussion about manufacturing data eventually decomposes into the same six categories. Each has a characteristic capture method, cadence, volume and retention need, and each behaves differently under load.
| Category | What it contains | Typical cadence |
|---|---|---|
| Machine data | States, cycle counts, stop events, speed, alarms | Event-driven + 1 Hz samples |
| Process data | Temperature, pressure, torque, flow, vibration | 100 ms – 1 s time series |
| Quality data | Measurements, inspections, scrap, rework, SPC | Per-part or sampled |
| Order / operational data | Orders, operations, quantities, statuses, BDE | Event-driven, minutes |
| Material / logistics data | Consumption, batches, serial numbers, WIP, movements | Event-driven, minutes to hours |
| Personnel / context data | Operator, shift, skill, training, assignment | Shift-based, daily |
A single produced part pulls data from all six categories. The machine was in a state; the process ran at certain parameters; the part passed or failed inspection; it belonged to a production order; it consumed a specific batch of material; it was produced on a specific shift by a specific operator. The part's full digital identity — the thing you need for traceability, root-cause analysis or OEE — is the join of six time-stamped streams, each with its own source, cadence and authority.
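The six-stream join described above can be sketched in a few lines. Every record, field name and value below is hypothetical, chosen only to illustrate the join by time and context, not any particular platform's schema:

```python
from datetime import datetime, timedelta

# Hypothetical slices of the six streams (machine, process, quality,
# order, material, personnel), each with its own source and cadence.
machine_events  = [{"ts": datetime(2026, 1, 5, 8, 0), "machine": "M12", "state": "RUNNING"}]
process_samples = [{"ts": datetime(2026, 1, 5, 8, 0, 3), "machine": "M12", "temp_c": 182.4}]
quality_checks  = [{"part_id": "P-1001", "result": "PASS"}]
orders          = [{"order_id": "PO-77", "machine": "M12", "start": datetime(2026, 1, 5, 7, 30)}]
batches         = [{"order_id": "PO-77", "batch": "B-55"}]
shifts          = [{"operator": "OP-9", "start": datetime(2026, 1, 5, 6, 0),
                    "end": datetime(2026, 1, 5, 14, 0)}]

def part_identity(part_id, produced_at, machine):
    """Join the six streams into one traceability record for a part."""
    order = next(o for o in orders if o["machine"] == machine and o["start"] <= produced_at)
    return {
        "part_id": part_id,
        "machine_state": next(e["state"] for e in machine_events
                              if e["machine"] == machine and e["ts"] <= produced_at),
        "process": [s for s in process_samples if s["machine"] == machine
                    and abs(s["ts"] - produced_at) < timedelta(seconds=10)],
        "quality": next(q["result"] for q in quality_checks if q["part_id"] == part_id),
        "order_id": order["order_id"],
        "batch": next(b["batch"] for b in batches if b["order_id"] == order["order_id"]),
        "operator": next(s["operator"] for s in shifts
                         if s["start"] <= produced_at < s["end"]),
    }

record = part_identity("P-1001", datetime(2026, 1, 5, 8, 0, 5), "M12")
```

Each lookup goes against a different stream with a different authority, which is exactly why the physical storage underneath cannot be one flat table.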
This is where most manufacturing-digitisation projects quietly fail, and the failure pattern is consistent enough to name. A plant decides to "consolidate manufacturing data" and buys a generic data-lake or data-warehouse platform. Everything goes into one place. Six months later, process-data queries are slow, machine events are missing, ERP joins are broken, and nobody trusts the numbers. The problem is not the tool. The problem is that the six categories have fundamentally different characteristics that any single storage pattern will optimise for some and punish others.
The table below is the reason a serious manufacturing data architecture looks different from a generic enterprise data architecture. The numbers are order-of-magnitude figures drawn from platform-operation data across the SYMESTIC installed base, normalised to a reference plant: a three-shift discrete-manufacturing operation with roughly 50 machines.
| Category | Volume / year | Latency need | Useful retention |
|---|---|---|---|
| Machine | 10–50 GB | Seconds (OEE live) | 2–5 years |
| Process | 500 GB – 2 TB | Sub-second (control) | 90 days – 2 years |
| Quality | 1–10 GB | Minutes (SPC) | 5–10 years (regulated) |
| Order | 0.5–2 GB | Seconds (dispatch) | 7–10 years (audit) |
| Material | 0.5–5 GB | Minutes | 5–10 years (traceability) |
| Personnel | < 0.1 GB | Hours | Per labour law |
The gap between process data (up to 2 TB/year, sub-second writes, 2-year retention) and personnel data (well under 100 MB, hourly cadence, legally bounded retention) is four orders of magnitude in volume and three in cadence. Any architecture that treats these the same is either over-engineered for one or broken for the other. The practical implication is tiered storage and purpose-specific stores: time-series databases for process data, event stores for machine events, relational stores for orders and materials, object storage for long-term archive. Trying to force all six into one engine is the single most expensive mistake I see plants make, and it is the reason so many "Industry 4.0" data projects stall at year two.
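The routing decision this implies can be made explicit at ingest time. The store names and the mapping below are illustrative, not a reference to any specific product:

```python
# Route each incoming record to the store its category requires.
# Store names and the routing table are illustrative defaults.
STORE_FOR_CATEGORY = {
    "machine":   "event_store",    # state transitions, stop reasons
    "process":   "timeseries_db",  # 100 ms samples, high write rate
    "quality":   "relational_db",  # per-part results, SPC queries
    "order":     "relational_db",  # order headers and statuses
    "material":  "relational_db",  # batches, consumption, WIP
    "personnel": "relational_db",  # shift and operator context
}

def route(record: dict) -> str:
    """Return the name of the store a record should be written to."""
    category = record["category"]
    try:
        return STORE_FOR_CATEGORY[category]
    except KeyError:
        raise ValueError(f"unknown manufacturing-data category: {category}")

store = route({"category": "process", "temp_c": 181.9})
```

The table is trivial on purpose: the expensive part is not the routing, it is accepting that the routing must exist at all.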
Capture method determines data quality. The four patterns below cover roughly everything you will encounter in a discrete-manufacturing plant, and the gap in data quality between the top and the bottom is large enough to change business outcomes, not just dashboards.
OPC UA on modern controllers. The current standard. Sub-second sampling, rich information model, secure by design, vendor-neutral. Covers virtually everything built in the last ten years. This is the path of least friction when it is available.
Digital / analogue I/O gateways on brownfield equipment. For machines without a usable digital interface — and they are still the majority of the installed base in most plants — a small gateway wired to the control cabinet picks up state signals from existing PLC outputs, contactors or light barriers. No PLC intervention, no production downtime, installation in two to four hours per machine. The data is less rich than OPC UA but sufficient for 80–90% of MES use cases, which is why this pattern quietly powers most brownfield digitisation in the mid-market.
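The gateway pattern reduces, at its core, to edge detection on a sampled run/stop bit. A minimal sketch, assuming the gateway delivers (timestamp, running) pairs; the function and the sample data are hypothetical:

```python
def stops_from_signal(samples):
    """Derive stop events (start, end) from a sampled run/stop bit.

    samples: list of (timestamp_s, running_bool) as a gateway might
    read them from a contactor or stack-light wire.
    """
    stops, stop_start, prev = [], None, True
    for ts, running in samples:
        if prev and not running:       # falling edge: machine stopped
            stop_start = ts
        elif not prev and running:     # rising edge: machine restarted
            stops.append((stop_start, ts))
            stop_start = None
        prev = running
    if stop_start is not None:         # stop still open at window end
        stops.append((stop_start, None))
    return stops

events = stops_from_signal([(0, True), (10, True), (20, False), (30, False), (40, True)])
# one stop event, from t=20 to t=40
```

A real gateway adds debouncing and a minimum stop duration on top, but the state model is this simple, which is why two to four hours per machine is a realistic install time.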
Operator input at shop-floor clients. The category that everyone plans to eliminate and nobody fully does. Reason codes for downtime, quality attributes that cannot be measured automatically, order start and end confirmations that precede or follow the physical event. Manual input is a secondary but irreducible data source, and a well-designed terminal interface is worth more than another sensor.
System integration (ERP, LIMS, CMMS). The context data that turns raw machine events into operational records. Order headers from ERP give machine cycles their meaning. Material batches from WMS enable traceability. Maintenance work orders from CMMS close the loop between unplanned stops and asset reliability. These integrations are usually the last 20% of an MES project and often consume the last 40% of the effort.
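The order-context join that gives machine cycles their meaning can be sketched as a timestamp lookup against ERP order headers. The field names and the `enrich` helper are hypothetical, not an ERP interface:

```python
from datetime import datetime

# Order headers as they might arrive over an ERP interface (illustrative fields).
order_headers = [
    {"order_id": "PO-101", "machine": "M7",
     "start": datetime(2026, 3, 2, 6, 0), "end": datetime(2026, 3, 2, 14, 0)},
    {"order_id": "PO-102", "machine": "M7",
     "start": datetime(2026, 3, 2, 14, 0), "end": datetime(2026, 3, 2, 22, 0)},
]

def enrich(event: dict) -> dict:
    """Attach the order active on the event's machine at the event's timestamp."""
    active = next((o for o in order_headers
                   if o["machine"] == event["machine"]
                   and o["start"] <= event["ts"] < o["end"]), None)
    return {**event, "order_id": active["order_id"] if active else None}

stop = enrich({"ts": datetime(2026, 3, 2, 15, 30), "machine": "M7",
               "type": "unplanned_stop"})
```

The lookup itself is one line; the effort lives in keeping the order headers current and the clocks aligned, which is where the "last 40% of the effort" goes.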
ISA-95 remains the most useful reference for where each data category belongs and how it should flow between levels. The short version: Levels 0–2 (sensors, control, SCADA) produce machine and process data in real time; Level 3 (MES) contextualises it into the quality, order and material records used for operations; Level 4 (ERP) holds the commercial context of orders, materials and costs.
The value of ISA-95 is not the levels themselves — most people intuit those. The value is the clear definition of what data belongs where, which prevents the two most common architectural anti-patterns: ERPs trying to manage shop-floor events, and SCADA systems trying to handle production orders. Both happen when teams skip the Level 3 layer, and both produce brittle systems that collapse when the plant scales beyond two or three lines.
The attributes below are what I use when reviewing a customer's manufacturing data landscape. A plant that scores well on all six has a platform that holds up under expansion, regulatory scrutiny and leadership change. A plant that scores poorly on any one of them has a hidden project in its future.
The SYMESTIC platform is built around the six-category model explicitly. Machine events arrive through OPC UA or digital I/O gateways and are stored in an event store optimised for state transitions and stop reasons. Process data flows into a dedicated store backed by a time-series engine that handles the 100-ms cadence without starving the event store. Quality data, order data and material data live in a relational layer joined to the event stream by time and context. The production KPIs module and the alarms module are both consumers of this layered model; they see the data as one logical stream, but the storage underneath is tiered exactly as the categories require. Integration to ERP, CMMS and QMS happens through the REST API — bidirectional, schema-stable, exportable. This is not a design choice we market; it is the reason the platform scales from 10 machines to 500+ without a re-architecture. Authoritative references for the underlying frameworks are ISA-95 (manufacturing operations architecture), ISO 22400 (KPI definitions) and the OPC UA specifications — all easy to find by name.
Is manufacturing data the same as MDE or BDE?
No. MDE (machine data acquisition) and BDE (operational data acquisition) are two specific capture methods for two specific data categories — machine data and order/operational data respectively. Manufacturing data is the umbrella term covering all six categories including MDE and BDE outputs. Most plants use the terms interchangeably, which is fine in conversation and unhelpful in architecture discussions. See the dedicated MDE and BDE entries for the narrower definitions.
How much manufacturing data does a typical plant generate?
For a mid-market discrete-manufacturing plant with 30–80 machines running three shifts, total manufacturing data volume lands between 0.5 and 3 TB per year, with process data accounting for 70–90% of that by volume and machine/event data for most of the analytical value. These numbers vary by a factor of ten depending on how many process parameters are captured at what sample rate; the single largest retention cost driver is the decision to store sub-second process data for more than 90 days.
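The volume arithmetic is simple enough to sketch. The `bytes_per_sample` figure and the uncompressed assumption are ours, so treat the result as an upper bound before the compression a real time-series engine applies:

```python
def process_data_gb_per_year(machines, channels_per_machine, sample_hz,
                             bytes_per_sample=16, hours_per_day=24, days=365):
    """Back-of-envelope yearly volume for raw process time series.

    bytes_per_sample covers timestamp + value + tag reference; storage
    engines compress heavily, so this is an upper bound, not a forecast.
    """
    samples = machines * channels_per_machine * sample_hz * 3600 * hours_per_day * days
    return samples * bytes_per_sample / 1e9

# 50 machines, 10 channels each, sampled at 1 Hz:
vol = process_data_gb_per_year(50, 10, 1.0)  # ~252 GB/year raw
```

Raising the sample rate to 10 Hz puts the same plant at roughly 2.5 TB per year, which is why sample rate, not machine count, dominates the storage bill.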
Can all manufacturing data live in one database?
Technically yes, practically no — at least not in one database engine. Different categories have different cadence, volume and query profiles that no single engine optimises for. What works is a unified logical model (one schema, one API) backed by a tiered physical store (time-series, event, relational, object). The application layer sees one platform; the storage layer is purpose-built per category. Plants that try to force everything into one engine — relational or data-lake — hit scale and performance walls within two to three years.
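The "one logical model, tiered physical store" pattern can be sketched as a thin facade that dispatches each query to the backend suited to its category. The class and the stubbed backends are illustrative, not a real product API:

```python
class ManufacturingDataFacade:
    """One logical query surface over purpose-built backends (illustrative).

    The application sees one API; each query is dispatched to the store
    whose engine suits that category's cadence, volume and query profile.
    """
    def __init__(self, timeseries, events, relational):
        self._backends = {
            "process": timeseries,   # time-series engine
            "machine": events,       # event store
            "quality": relational,
            "order": relational,
            "material": relational,
        }

    def query(self, category, **filters):
        backend = self._backends.get(category)
        if backend is None:
            raise ValueError(f"no backend for category {category!r}")
        return backend(category, filters)

# Backends stubbed as callables for the sketch:
facade = ManufacturingDataFacade(
    timeseries=lambda c, f: f"tsdb:{c}",
    events=lambda c, f: f"events:{c}",
    relational=lambda c, f: f"sql:{c}",
)
```

The application layer never learns which engine answered, which is what keeps a later storage swap from becoming a re-architecture.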
What is the difference between manufacturing data and process data?
Process data is one of the six categories of manufacturing data — specifically the continuous time-series of physical parameters (temperature, pressure, torque) captured at sub-second cadence. Manufacturing data is the full umbrella including machine, process, quality, order, material and personnel data. In volume terms, process data is usually the largest single category, which is why it gets confused for the whole.
How long should manufacturing data be retained?
It depends on the category and the regulatory context. Order, quality, material and traceability data in regulated industries (pharma, food, aerospace) is retained 5–10 years by law. Process data at sub-second resolution is typically retained 90 days to 2 years — long enough for root-cause analysis, short enough to keep storage costs rational. Machine event data is retained 2–5 years for long-term OEE trending. Personnel data follows labour-law retention rules. Treating every category with the same retention policy is a legal and financial mistake in both directions.
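A per-category retention policy is small enough to express directly. The day counts below mirror the ranges above but are illustrative defaults, not legal advice; regulated industries and labour law override them:

```python
from datetime import date, timedelta

# Retention windows per category (illustrative defaults, days).
RETENTION_DAYS = {
    "process":  2 * 365,   # sub-second series: 90 days – 2 years
    "machine":  5 * 365,   # OEE trending: 2–5 years
    "quality":  10 * 365,  # regulated: 5–10 years
    "order":    10 * 365,  # audit: 7–10 years
    "material": 10 * 365,  # traceability: 5–10 years
}

def purge_eligible(category: str, created: date, today: date) -> bool:
    """True if a record has outlived its category's retention window."""
    return (today - created) > timedelta(days=RETENTION_DAYS[category])

# A three-year-old process sample is purgeable; a three-year-old
# quality record in a regulated plant is not.
```

Encoding the policy as data rather than prose is what makes the "different retention per category" rule enforceable instead of aspirational.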
Why do manufacturing data projects so often stall?
Three recurring reasons, in order of frequency. First, the six-category structure is ignored and a generic data-lake solution is deployed that cannot serve real-time shop-floor use cases. Second, integration to ERP, CMMS and QMS is underestimated and consumes 40%+ of the project budget after go-live. Third, data governance — the naming, schema and definition discipline — is treated as a documentation exercise rather than a platform feature, and within twelve months nobody remembers what the KPIs mean. The technology is rarely the problem. The architectural assumption underneath it usually is.
Does AI / machine learning need all manufacturing data?
No. Useful models in manufacturing — predictive maintenance, quality prediction, anomaly detection — need the right data, not all data. Typically that means high-resolution process data plus contextual order, material and quality data joined on time and equipment. Starting with everything is usually an excuse to not start at all. Starting with one asset class, one failure mode and one data slice is how real models get deployed.
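A single-slice starting point can be as small as a rolling z-score over one process parameter on one asset. The function, window and threshold below are illustrative, not a recommended production model:

```python
import statistics

def zscore_anomalies(values, window=20, threshold=3.0):
    """Flag indices where a value deviates more than `threshold` standard
    deviations from its trailing window: one parameter, one asset,
    one failure mode, as the answer above recommends starting."""
    flagged = []
    for i in range(window, len(values)):
        hist = values[i - window:i]
        mu, sigma = statistics.mean(hist), statistics.pstdev(hist)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A stable torque-like signal with one spike at the end:
signal = [10.0, 10.1, 9.9, 10.0, 10.2] * 5 + [25.0]
anomalies = zscore_anomalies(signal)
```

A model this small is deployable in a week, and the lessons it teaches about data quality and context are what make the larger models possible later.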
Related: Machine Data Acquisition (MDE) · Process Data · Downtime Periods · OEE · Manufacturing Operations Management · Traceability · MES · Cloud MES vs. On-Premise · Process Data Module.