MES Software: Vendors, Features & Costs Compared 2026
MES software compared: vendors, functions per VDI 5600, costs (cloud vs. on-premise) and implementation. Honest market overview 2026.
Production parameters are the measurable control variables that determine how a manufacturing process behaves — temperatures, pressures, speeds, feed rates, dwell times, torques, concentrations, force curves, positioning tolerances. The textbook treats them as a list to be optimized. From where I sit, after 35 years of connecting machines to higher-level systems — starting with Simatic S5 in 1991, ending this morning with an OPC UA server on an injection-moulding press — the interesting question is not what the list contains. The interesting question is whether the plant actually knows what values those parameters had during the last shift, or whether it only knows what values the recipe said they should have.
That distinction — between the setpoint (what you commanded) and the actual value (what the process did) — is where almost all the useful information in a production line hides. Plants that manage only setpoints are managing a document, not a process. Plants that capture actual values and correlate them to quality outcomes are managing the process itself. The gap between those two modes is not philosophical; it is measurable, it is large, and it is the single biggest source of parameter-related scrap, micro-stops and quality drift that I have seen across hundreds of shop floors. This article is about how that gap arises, how to close it, and what changes when you do.
In theory, a production parameter is a single number with a setpoint and a tolerance. In practice, every useful parameter exists in three tiers, and confusing the tiers is where most parameter strategies break down. The tiers are not academic distinctions; they correspond to three different data sources, three different capture mechanisms, and three different management actions.
| Tier | What it is | Source | What it tells you |
|---|---|---|---|
| 1. Setpoint | The commanded value — what the recipe or order says the parameter should be | ERP order, MES recipe, manual entry by setup operator | What you intended. Nothing about what happened. |
| 2. Actual value | The measured value the process produced — sampled continuously or per cycle | PLC, sensor, OPC UA server, I/O gateway | What the process actually did. The basis for everything else. |
| 3. Deviation over time | Actual minus setpoint, tracked across cycles, orders, shifts, weeks | Derived in the MES / analytics layer from tiers 1 and 2 | The actionable signal. Where drift, wear and root causes become visible. |
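The three tiers can be sketched as a single data record. This is a minimal illustration, not the schema of any particular MES; the names (`ParameterRecord`, the field names) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ParameterRecord:
    """One sampled parameter, tied to the order that was running."""
    name: str        # e.g. "melt_temp_zone_3"
    order_id: str    # tiers 1 and 2 only become tier 3 when tied to the order
    setpoint: float  # tier 1: what the recipe commanded
    actual: float    # tier 2: what the PLC / sensor reported

    @property
    def deviation(self) -> float:
        """Tier 3 building block: actual minus setpoint for this sample."""
        return self.actual - self.setpoint

rec = ParameterRecord("melt_temp_zone_3", "ORD-4711", setpoint=230.0, actual=238.0)
print(rec.deviation)  # 8.0
```

The point of the sketch is structural: tier 3 is not stored, it is derived, and it can only be derived if tiers 1 and 2 land in the same record with the same order reference.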
Most plants I walk into have tier 1 under control. They have recipes, they have SOPs, they have parameter sheets on the machines. Tier 1 is easy because it is a document problem — the plant decides what the parameter should be and writes it down. Tier 2 is where the work is: getting the actual value out of a machine that was not necessarily designed to share it, sampling it at the right frequency, time-stamping it correctly, tying it to the production order that was running at that moment. Tier 3 is where the value is — the derived signal that says "the melt temperature has drifted 8 °C over the last three weeks even though the setpoint has not changed, and the scrap rate for product A has climbed in lockstep." That is the conversation a process engineer can act on. You cannot have that conversation if you only have tier 1.
The failure mode is almost always the same, and it is almost always at tier 2. The recipe is correct. The intention is correct. The operators are doing their job. But the actual values from the process are either not captured, or captured too coarsely, or not tied to anything meaningful. Three specific patterns show up repeatedly.
Pattern 1: Setpoints treated as actuals. The report says "melt temperature: 230 °C" because that is what the recipe specified. Nobody has looked at the actual zone temperatures on the last 500 shots. The assumption is that the machine did what it was told. Machines do not always do what they are told — valves stick, heaters age, thermocouples drift, insulation degrades. A plant that manages parameters from the recipe and not from the measurement is flying blind with a correct flight plan.
Pattern 2: Brownfield capture gaps. Maybe 20–40 % of the machines in a typical mid-sized (Mittelstand) plant are from before 2005. They have PLCs, but not OPC UA. They have sensors, but no exposed data interface. The plant's "parameter strategy" covers the modern CNC centres and the new injection-moulding presses, and stops at the edge of the brownfield area. Everything older than that remains a black box, and that black box typically contains the plant's highest-variance processes. I have lost count of how many times a client has told me "those machines cannot deliver data" — and then I connect an I/O gateway that afternoon and pull out 40 parameters by Friday without touching the PLC. The machines can almost always deliver data. The industry has just built a habit of assuming they cannot.
Pattern 3: Parameters captured, never correlated. The actual values flow into a historian. The historian grows. Nobody ever asks the question that matters: "When this parameter deviated from its setpoint on Tuesday at 14:32, what happened to the parts we produced for the next 30 minutes?" Without the correlation to quality outcomes — rejects, rework, customer complaints — the parameter history is just telemetry. Telemetry without correlation is expensive noise, and I have seen plants spend six-figure budgets collecting it before realising nobody was reading it.
Field observation from 35 years of machine integration: the single most valuable 30 minutes in a parameter project is the first time the process engineer sees the actual tier-2 values on a dashboard next to the tier-1 setpoints for the same shift. It almost always shows something nobody expected — a zone running 12 °C hotter than recipe across every order, a pressure curve whose peak has drifted six bar over the last quarter, a cycle that takes 3.4 seconds on the morning shift and 3.9 seconds on nights. These deviations have been there for months or years. They explain scrap patterns and micro-stops that nobody had been able to root-cause. They only become visible the moment setpoints and actuals sit side by side. That is the effect of doing tier 2 properly, and it is genuinely hard to overstate how often this scene repeats.
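The morning-vs-night cycle-time example above is exactly the kind of comparison that falls out of tier-2 data in a few lines. A minimal sketch, assuming samples of (shift, setpoint, actual) triples; the helper name `deviation_by_shift` is hypothetical:

```python
from collections import defaultdict
from statistics import mean

def deviation_by_shift(samples):
    """samples: iterable of (shift, setpoint, actual) tuples.
    Returns the mean deviation (actual - setpoint) per shift."""
    by_shift = defaultdict(list)
    for shift, setpoint, actual in samples:
        by_shift[shift].append(actual - setpoint)
    return {shift: round(mean(devs), 2) for shift, devs in by_shift.items()}

# Cycle-time samples against a 3.5 s setpoint, as in the example above
samples = [
    ("morning", 3.5, 3.4), ("morning", 3.5, 3.4),
    ("night",   3.5, 3.9), ("night",   3.5, 3.9),
]
print(deviation_by_shift(samples))  # {'morning': -0.1, 'night': 0.4}
```

The real version runs per order and per product rather than per shift alone, but the mechanism is the same: group the tier-2 deviations by a dimension the process engineer thinks in, and the hidden pattern surfaces.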
When people ask me how to capture parameters, they usually expect me to say "OPC UA." OPC UA is the right answer for maybe 30–40 % of the typical industrial machine park. For the rest, the honest answer is "it depends on what the machine already has, and we will work from there." A modern machine-integration strategy is not one protocol. It is a tiered approach that covers every machine in the plant, not just the convenient ones.
| Machine class | Typical capture mechanism | What you get |
|---|---|---|
| Modern controls (post-2015) | OPC UA server on the PLC, standardised companion specifications where available | Full parameter set, structured, vendor-agnostic, real-time |
| Legacy PLCs with Ethernet (2005–2015) | Protocol-specific adapter (S7, EtherNet/IP, Modbus TCP) via edge gateway, mapped to MES schema | Most parameters accessible without PLC changes, slightly more mapping effort |
| Older PLCs, no Ethernet (pre-2005) | Serial-to-IP converters or I/O gateways tapping the sensor wiring directly | Parameters accessible at the sensor layer without touching the PLC program |
| Brownfield without any digital interface | Digital I/O gateway wired to relay outputs, counters, signal lamps; add-on sensors for critical parameters | Core cycle and state parameters, extensible with targeted retrofit sensors |
The practical point is that every machine can deliver parameters, but the mechanism varies. A good integration strategy covers all four machine classes on day one. A bad strategy covers only the first one, declares the rest "not feasible," and leaves the plant blind on its most troublesome processes. The cost difference between the four mechanisms is smaller than most people think; the coverage difference is enormous. In a recent project the retrofit of a 1998 press with a digital I/O gateway took six hours and required no PLC programming. That machine is now delivering twelve parameters per cycle to the MES. Six hours.
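The I/O-gateway path in the last two table rows is less exotic than it sounds: once the gateway timestamps a relay output that pulses once per machine cycle, cycle counts and cycle times fall out of the rising edges. A minimal sketch of that edge detection (the helper `cycles_from_relay` is hypothetical; real gateways deliver the timestamped samples):

```python
def cycles_from_relay(samples):
    """samples: time-ordered (timestamp_s, level) pairs from a digital
    I/O gateway tapping a relay output that pulses once per cycle.
    Returns cycle times derived from consecutive rising edges —
    no PLC program changes involved."""
    rising = [t for (t, lvl), (_, prev) in zip(samples[1:], samples[:-1])
              if lvl == 1 and prev == 0]
    return [round(b - a, 2) for a, b in zip(rising, rising[1:])]

samples = [(0.0, 0), (0.1, 1), (0.5, 0), (3.5, 1), (3.9, 0), (7.4, 1)]
print(cycles_from_relay(samples))  # [3.4, 3.9] — cycle times in seconds
```

Everything beyond cycle timing — temperatures, pressures — then comes from add-on sensors wired into the same gateway, which is why a 1998 machine can end up delivering a dozen parameters per cycle.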
The parameter topic most underestimated in textbooks is drift. A parameter rarely fails suddenly. It drifts — slowly, gradually, often silently — as sensors age, valves wear, thermal masses change, and the process equipment ages around it. Statistical process control works only if your measurement baseline is stable. If the baseline itself is drifting, every control chart you draw is wrong in a direction you cannot see.
I have seen lines where the real root cause of a rising scrap rate was not a process change at all — the process was running exactly as specified. The root cause was a thermocouple that had drifted 7 °C cold over 18 months, so the PID controller was heating the zone 7 °C hot to hit the (incorrectly reported) setpoint. The product quality had degraded in lockstep with the drift. The control charts had been green the entire time, because the charts were built on the drifting measurement itself. The fix was a thermocouple replacement. The detection mechanism — had it existed — would have been a comparison of this sensor against a reference, or better, against other similar sensors on the same line. Multi-sensor comparison and long-term baseline tracking are the only reliable defences against drift, and they only exist where tier 2 and tier 3 are instrumented properly.
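The multi-sensor comparison described above is computationally trivial once the tier-2 data exists. A minimal sketch: compare each sensor against the median of its peers on the same line and flag outliers. The function name and the 5 °C threshold are illustrative assumptions, not a recommendation:

```python
from statistics import median

def flag_drifting_sensors(readings, threshold=5.0):
    """readings: {sensor_name: current_value} for similar sensors on one line.
    Flags any sensor whose offset from the peer median exceeds the threshold —
    the comparison that would have caught the drifted thermocouple."""
    med = median(readings.values())
    return {name: round(v - med, 1) for name, v in readings.items()
            if abs(v - med) > threshold}

zones = {"zone_1": 230.2, "zone_2": 229.8, "zone_3": 223.0, "zone_4": 230.5}
print(flag_drifting_sensors(zones))  # {'zone_3': -7.0}
```

A production version would compare long-term baselines rather than single snapshots, so that a slow 18-month drift is caught long before it reaches 7 °C. But the principle is the same: a sensor cannot validate itself; only its peers or a reference can.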
Kamps is one of Germany's largest bakery groups, running highly automated baking, proofing and packaging lines for industrial bread production. Food manufacturing is parameter-intensive in ways that are easy to underestimate — oven temperature profiles across multiple zones, proofing chamber humidity and time, conveyor speeds, dough weights, cooling curves. The difference between a loaf that passes inspection and one that does not is, in mechanical terms, a parameter deviation of a few percent over a few minutes. Get the parameters right, and the process is stable. Miss the drift, and the scrap rate climbs quietly until somebody notices at the packaging station.
The SYMESTIC implementation at Kamps connected the core production lines — including Rademaker and König equipment — via OPC UA into the platform. The technical setup captures the parameters the process actually needs: oven zone temperatures per baking cycle, conveyor speeds, cycle events tied to active production orders, alarm events from the line controllers. The parameters are not just logged; they are tied to orders, correlated with quantity outcomes, and made visible in real time on dashboards that shift leaders and process engineers use operationally rather than retrospectively. The modular SYMESTIC catalogue lets Kamps extend the parameter set and add further lines incrementally without a vendor engagement each time.
What matters for a parameter-strategy discussion is what the setup enabled operationally, rather than any single headline number:
| Capability | What it delivered operationally |
|---|---|
| OPC UA capture of line parameters | Actual values for temperature, speed and cycle events tied to the running order, without PLC intervention |
| Setpoint vs. actual comparison per order | Deviations surfaced per shift and per product, not hidden in batch averages |
| Parameter-to-alarm correlation | Alarm events mapped back to the parameter state at the moment of the event, turning cryptic alarms into diagnosable patterns |
| Self-service extensibility | Kamps adds further lines and further parameters from the modular catalogue without a new project engagement |
Across the SYMESTIC installed base — over 15,000 connected machines in 18 countries, with parameter-driven use cases running in automotive, food, pharma and metal processing — the typical quality impact of doing tier-2 capture and tier-3 correlation correctly is a 5–15 % reduction in scrap and a 5–10 % reduction in speed and micro-stop losses. Kamps sits in that pattern. The reduction is not caused by the parameters themselves being better; it is caused by the parameters being seen for the first time, which is the precondition for everything else that follows.
What are production parameters in manufacturing?
Production parameters are the measurable control variables that determine how a process behaves — temperatures, pressures, speeds, feed rates, dwell times, torques, forces, concentrations. They exist in three tiers: the setpoint (what you commanded), the actual value (what the process did) and the deviation over time (the actionable signal derived from the first two). Managing parameters from setpoints alone is managing a document. Managing them from actuals tied to deviations is managing the process.
What is the difference between a setpoint and an actual value?
The setpoint is the commanded or specified value — typically stored in the recipe, the production order or the ERP. It reflects what the process is supposed to do. The actual value is what the process in fact did, sampled from a PLC, sensor or gateway in real time. Setpoints and actuals diverge routinely: valves stick, sensors drift, heaters age, material properties vary. The gap between the two is where almost all useful parameter-related information lives, and it is the gap most plants do not systematically measure.
How do you capture parameters from older machines without PLC intervention?
The right answer depends on what the machine already exposes. Machines with modern controls deliver parameters via OPC UA. Machines with older PLCs on Ethernet deliver parameters via protocol-specific adapters without code changes. Machines with pre-2005 PLCs or no Ethernet at all deliver parameters through I/O gateways wired directly to the sensor layer or to relay outputs — no PLC program changes required, no production interruption. In a recent retrofit on a 1998 press, a digital I/O gateway installed in six hours surfaced twelve parameters per cycle without any programming work.
Why does parameter drift matter more than parameter deviation?
Because drift is silent and deviation is loud. A sudden parameter deviation triggers alarms, operator attention and usually a fix within a shift. A slow drift — a sensor aging 0.3 °C per month, a valve gradually sticking, a thermal mass changing as insulation degrades — hides behind a green control chart for years. The scrap rate climbs in lockstep with the drift, but because the measurement baseline itself is moving, conventional SPC cannot see it. Multi-sensor comparison and long-term baseline tracking are the only reliable defences, and they require tier-2 capture with enough history to see the slow curve.
Is OPC UA enough for production-parameter capture?
It is enough for modern machines and it should be the default for anything installed post-2015. It is not enough as a complete strategy, because the typical mid-sized (Mittelstand) plant has 20–40 % of its machines pre-dating OPC UA availability, and those machines are often the ones with the highest process variance. A parameter strategy that covers only OPC UA leaves the most troublesome processes uninstrumented. The realistic stack is OPC UA for new machines, protocol adapters for legacy Ethernet PLCs, and I/O gateways for genuine brownfield — applied together, not in sequence.
What is the relationship between production parameters and SPC?
SPC is a statistical method for detecting when a parameter has moved out of its expected distribution. It requires three things: an accurate actual value, a stable measurement baseline, and a sufficient sample size. Plants that run SPC on setpoints — or on actuals with a drifting baseline — get charts that stay green while the process quietly degrades. SPC is only as good as the tier-2 capture underneath it. Done properly, SPC and parameter capture together give you early detection of genuine process changes; done improperly, they give you a false sense of control. I would rather run no SPC at all than run SPC on untrustworthy data.
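The mechanics of an X-chart are simple enough to sketch; what the answer above insists on is that the baseline must come from tier-2 actuals. A minimal illustration (hypothetical helpers, textbook mean ± 3σ limits):

```python
from statistics import mean, stdev

def control_limits(baseline):
    """X-chart limits (LCL, UCL) from a baseline of *actual* values.
    Feeding this setpoints would yield zero variance and meaningless limits."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s

def out_of_control(values, lcl, ucl):
    """Indices of samples falling outside the control limits."""
    return [i for i, v in enumerate(values) if not lcl <= v <= ucl]

baseline = [230.1, 229.9, 230.0, 230.2, 229.8, 230.0]  # stable actuals
lcl, ucl = control_limits(baseline)
print(out_of_control([230.0, 231.5, 229.9], lcl, ucl))  # [1]
```

Note what this cannot do: if the baseline itself drifts with an aging sensor, the limits drift with it and the chart stays green. That is why the drift defences in the previous answer sit underneath SPC, not beside it.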
How do production parameters connect to quality outcomes?
Through correlation. Parameters captured but never correlated to scrap, rework or customer complaints are just telemetry — expensive telemetry that nobody reads. The correlation has to be made at the level of the individual order, cycle or batch, because batch-average correlation loses the transient deviations that cause most of the problems. In a well-instrumented line, every quality event — a rejected part at end-of-line inspection — is automatically mapped back to the parameter state that produced it. That mapping turns a parameter history into a diagnostic tool instead of a storage cost. Plants that miss this step collect parameters for years without ever extracting the value.
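The mapping from a quality event back to the parameter state is, mechanically, a timestamp lookup into the parameter history. A minimal sketch with a hypothetical `parameter_state_at` helper (a real system keys on order and cycle as well as time):

```python
import bisect

def parameter_state_at(history, event_time):
    """history: time-sorted (timestamp, {param: actual}) samples.
    Returns the parameter state in effect at the moment of a quality
    event — the reject-to-parameter mapping described above."""
    times = [t for t, _ in history]
    i = bisect.bisect_right(times, event_time) - 1
    return history[i][1] if i >= 0 else None

history = [
    (100.0, {"melt_temp": 230.1, "hold_pressure": 850}),
    (160.0, {"melt_temp": 238.4, "hold_pressure": 844}),  # deviation window
    (220.0, {"melt_temp": 230.3, "hold_pressure": 851}),
]
# Reject logged at t=175 maps back to the deviating state, not the average
print(parameter_state_at(history, 175.0))
```

This is the step that turns a historian from a storage cost into a diagnostic tool: the reject at 175.0 lands on the 238.4 °C sample, not on a batch average in which the excursion has vanished.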
What does SYMESTIC do for production-parameter capture?
Multi-tier capture across the whole machine park: OPC UA for modern controls, protocol adapters for legacy Ethernet PLCs, digital I/O gateways for brownfield machines without any native digital interface — all feeding into one unified data model. Every parameter is tied to the active production order automatically, so tier-1 setpoints (from ERP/MES) and tier-2 actuals (from the machines) sit in the same record. Deviations are computed continuously and surfaced on dashboards designed for shift use, not retrospective reporting. Parameter histories correlate with alarm events and quality outcomes, so a quality defect can be traced back to the parameter state that produced it. 15,000+ connected machines across 18 countries on this architecture. See SYMESTIC Process Data.
Related: Manufacturing Processes · Process Standardization · Performance Measurement · Machine Data Acquisition · OPC UA · Six Sigma · MES · SYMESTIC Process Data