MES Software: Vendors, Features & Costs Compared 2026
MES software compared: vendors, functions per VDI 5600, costs (cloud vs. on-premise) and implementation. Honest market overview 2026.
Production speed is the rate at which a manufacturing process produces finished units over time — usually expressed as parts per minute, parts per hour, or cycles per minute. In OEE terms, it is the raw input to the Performance Rate: the ratio of what a machine actually produced against what it would have produced running at its designed speed for the available time. Every other manufacturing metric above it (throughput, lead time, utilisation) is downstream of this one number.
I spent three years running DMAIC projects on the Johnson Controls headliner lines as a Six Sigma Black Belt, and production speed was on at least 70 percent of the project charters. It is also the single metric, in my experience, that plants report most confidently and measure least honestly. Almost every production-speed number I have ever seen in a management report is wrong — not by a little, by 10 to 20 percent. This article is about why, and about the sequence in which a mid-market plant should actually work on it.
The first source of confusion in almost every cycle-time conversation is that people use the word "speed" to mean three different things, and they rarely specify which: the nameplate speed the machine was designed for, the nominal speed configured in the OEE system as the 100 percent reference, and the actual speed the machine runs at. Getting these three apart is the prerequisite for every useful discussion about production speed, and for every honest Performance Rate.
In a healthy plant the gap between nameplate and actual is 5 to 15 percent. In an unhealthy plant it is 25 to 40 percent, and almost nobody on the floor knows it, because the OEE dashboard uses the nominal speed as the 100 percent reference instead of the nameplate. The dashboard says Performance is 96 percent. The machine is running at 74 percent of its design speed. Both statements are simultaneously true, and only one of them is useful.
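The arithmetic behind those two simultaneously true statements fits in a few lines. The speeds below are illustrative assumptions, not plant data; they reproduce the 96-versus-74 gap described above.

```python
# Illustrative numbers only: a nominal speed set well below nameplate
# makes the dashboard Performance look healthy while the machine runs
# far below its design speed.

nameplate_cph = 1200   # design speed, cycles per hour
nominal_cph = 925      # "realistic" 100 % reference configured in the OEE tool
actual_cph = 888       # what the machine actually ran

reported_performance = actual_cph / nominal_cph    # what the dashboard shows
true_performance = actual_cph / nameplate_cph      # against design speed

print(f"Dashboard Performance: {reported_performance:.0%}")  # 96%
print(f"True Performance:      {true_performance:.0%}")      # 74%
```

Both numbers come from the same counter; only the denominator differs. That is the entire trick.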
Production speed is the P in OEE. OEE is Availability × Performance × Quality. The instinctive reaction when a plant wants "more throughput" is to push the P — run faster, reduce cycle time, squeeze the tact. In roughly 80 percent of mid-market plants I have audited, that instinct is incorrect as a starting point.
The arithmetic is brutal. Take a typical mid-market line: Availability 65 percent (35 percent of shift time lost to unplanned stops), Performance 88 percent, Quality 97 percent, for an OEE of 55.5 percent.
Squeezing three percentage points out of cycle time moves Performance from 88 to 91 and OEE from 55.5 to 57.4. A 1.9-point OEE gain. The same three points spent on Availability — say, reducing unplanned stops from 35 to 32 percent of shift time — moves OEE from 55.5 to 58.0. More gain, less engineering pain, and without risking quality.
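The comparison above can be checked directly. This is a minimal sketch using the example line's figures; the small rounding differences against the prose come from the percent arithmetic.

```python
def oee(availability, performance, quality):
    """OEE as the product of Availability, Performance, Quality.
    Inputs and output in percent."""
    return availability * performance * quality / 10000

base = oee(65, 88, 97)          # ~55.5
faster_cycle = oee(65, 91, 97)  # ~57.4: 3 points spent on Performance
fewer_stops = oee(68, 88, 97)   # ~58.0: 3 points spent on Availability

# The Availability route yields the larger OEE gain.
print(f"base {base:.1f}, faster cycle {faster_cycle:.1f}, fewer stops {fewer_stops:.1f}")
```

The multiplication is the point: a percentage point of Availability is worth at least as much as a point of Performance, and it is usually cheaper to get.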
DMAIC rule of thumb I learned the hard way: do not attack Performance until Availability is stable above 85 percent. A faster cycle on an unreliable machine just means you produce the same number of bad hours a little faster. The exception is the identified bottleneck station — where every second counts because Goldratt was right. For every other station, the order is Availability first, Quality second, Performance third.
The single largest speed loss in most plants is not slow running. It is micro-stops. Stoppages shorter than five minutes — a jam, a sensor that briefly dropped signal, an operator reaching for a part, a pallet that didn't index cleanly. Individually invisible. Collectively devastating. In the plants I have instrumented with automatic capture for the first time, micro-stops typically account for 8 to 15 percent of throughput loss, and 80 percent of them were never in the logbook.
The reason they are invisible is structural, not cultural. A paper-based shift log with a five-minute threshold literally cannot capture them. An operator is not going to stop what they are doing to write down a 30-second jam. An end-of-shift OEE entry based on total output minus estimated losses hides them in the performance number. They only become visible when they are captured automatically, with sub-second resolution, tied to a stop reason. Until then, they show up as "cycle time variance" or "performance loss" — vague categories that nobody can act on.
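What "captured automatically" means in practice can be sketched from timestamped cycle completions, e.g. a PLC part counter sampled continuously. The timestamps, the five-minute ceiling, and the 2×-cycle detection heuristic below are illustrative assumptions, not a vendor's algorithm.

```python
# Hypothetical sketch: classifying micro-stops from cycle-completion
# timestamps. A gap longer than two ideal cycles but whose excess is
# under five minutes is treated as a micro-stop (assumed heuristic).

IDEAL_CYCLE_S = 12.0        # nameplate cycle time, seconds
MICRO_STOP_MAX_S = 300.0    # stops under 5 minutes count as micro-stops

# Seconds since shift start at which each cycle completed (illustrative).
timestamps = [0.0, 12.1, 24.0, 58.5, 70.4, 82.3, 230.0, 242.2]

micro_stops = 0
micro_stop_loss = 0.0
for prev, cur in zip(timestamps, timestamps[1:]):
    gap = cur - prev
    excess = gap - IDEAL_CYCLE_S          # time beyond one ideal cycle
    if gap > 2 * IDEAL_CYCLE_S and excess <= MICRO_STOP_MAX_S:
        micro_stops += 1
        micro_stop_loss += excess

print(f"{micro_stops} micro-stops, {micro_stop_loss:.1f} s lost")
```

Neither of the two stops this finds would survive a five-minute logbook threshold; both are pure Performance loss.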
The operational consequence of making micro-stops visible is counter-intuitive: the reported production-speed number usually gets worse, not better, in the first month. That is the honest baseline finally appearing. Actual improvement starts from there.
This section is the uncomfortable one. Having sat through several hundred OEE reviews across four continents, I can trace almost all the speed-reporting dishonesty I have seen to five patterns. They are rarely malicious. They are usually baked into the measurement system from the day someone first set it up.
1. The nominal cycle time is set 10–15 % below nameplate. "For realism." The intention is reasonable: the machine never actually hits nameplate, so the spreadsheet would report Performance below 100 every shift. The consequence is that Performance now reports 95–100 % every shift, independent of what the machine actually does. The 10–15 % built-in speed loss is invisible forever.
2. The denominator excludes micro-stops. Some OEE calculations define Performance as "units produced in running time" ÷ "ideal units in running time", where "running time" excludes short stops. This is convenient but wrong: micro-stops are a performance loss, not an availability loss, and moving them out of the Performance denominator makes the P rate look clean at the cost of making the whole OEE number meaningless.
3. Planned slowdowns that are not planned. A machine runs at 80 % of nameplate all shift because the material is slightly off-spec, or because the downstream station is buffered and would block anyway. Nobody writes it down. It shows up as normal Performance. In reality it is either a quality-upstream problem or a line-balancing problem, and it will persist indefinitely because the metric does not see it.
4. Operator-entered cycle counts. The shift leader types in "1,420 parts" at end of shift. The number was 1,395. Not dishonest — rounded, approximated, remembered. Repeat across 300 shifts a year and the cumulative fiction is enormous. Automatic PLC-based counting eliminates this in one day.
5. Aggregation that hides the story. Line averages 85 % Performance for the month. Detail: day shift 94 %, night shift 71 %, changeover batches 58 %, steady-state batches 92 %. The 85 % number is arithmetically correct and operationally useless. The only Performance number that drives improvement is the disaggregated one — by shift, by product, by operator, by material batch.
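Pattern 5 is easy to demonstrate. The shift shares below are illustrative assumptions chosen to reproduce the 85 percent monthly figure from the day-shift and night-shift numbers above.

```python
# The monthly average is arithmetically correct and operationally useless.
# Shares of run hours are assumed for illustration.

by_shift = {
    "day shift":   (94.0, 0.61),   # Performance %, share of run hours
    "night shift": (71.0, 0.39),
}

monthly = sum(perf * share for perf, share in by_shift.values())
print(f"monthly average: {monthly:.0f} %")          # looks fine on a slide

for shift, (perf, _) in by_shift.items():
    print(f"  {shift}: {perf:.0f} %")               # this is actionable
```

The 23-point day/night gap is where the improvement project lives; the 85 is where it goes to die.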
Having attacked the topic from the wrong direction more times than I want to admit, I have found that the sequence that works is roughly the same in every plant: stabilise Availability first, then Quality, then Performance, with the identified bottleneck as the one exception. Deviation from this order is the most common reason speed-improvement projects fail.
Brita GmbH is a leading international provider of drinking water filtration solutions, with manufacturing sites in Germany, the UK, Italy and China. The engagement with SYMESTIC started at the Taunusstein plant and was framed around exactly the problem described in this article: the reported cycle-time and output numbers were the ones that had always been reported, nobody trusted them completely, and nobody could easily replace them with something better.
The approach was pragmatic. No proof-of-concept phase. Digital signals tapped from existing machine interfaces captured actual output. Stop signals were picked up through separate digital I/O, categorised, and surfaced on the plant monitor. Modern lines were connected via OPC UA to their line-control systems to capture alarms. Within the first year the approach scaled from Taunusstein to Bicester in the UK — by the Brita team themselves, using the modular feature catalogue without vendor dependency.
The headline measured result across the connected lines was a 7 percent output gain, and that gain is the production-speed number in this article. It did not come from running the lines faster than they had been designed for. It came from closing the gap between what the machines were already capable of and what the plant had been getting out of them. That is almost always where the first tranche of production-speed improvement lives in a mid-market plant: not in pushing the ceiling, but in reaching it.
What is the difference between cycle time, takt time and production speed?
Cycle time is how long it takes to produce one unit at the machine. Takt time is how long the customer demand rhythm allows per unit (available time ÷ customer demand). Production speed is the inverse of cycle time expressed as a rate (parts per hour). All three are related but answer different questions: cycle time is a capability measure, takt time is a demand measure, production speed is a throughput measure.
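The three definitions can be shown side by side. The shift length, demand, and cycle time below are illustrative assumptions.

```python
# Cycle time (capability), takt time (demand), production speed (throughput).
# All numbers are assumed for illustration.

cycle_time_s = 45.0          # seconds to produce one unit at the machine
available_s = 7.5 * 3600     # productive seconds in the shift
demand_units = 500           # customer demand for that shift

takt_time_s = available_s / demand_units   # demand rhythm: 54 s allowed per unit
speed_uph = 3600 / cycle_time_s            # throughput rate: 80 units per hour

# Cycle time below takt time: the line can keep pace with demand.
can_meet_demand = cycle_time_s <= takt_time_s
```

The same machine answers three different questions: what can it do (cycle time), what must it do (takt time), and what is it delivering per hour (production speed).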
How do I calculate Performance Rate correctly?
Performance = (ideal cycle time × total count) ÷ operating time, where operating time is scheduled time minus downtime, and ideal cycle time is the nameplate cycle time — not a softened "nominal" version. If your Performance calculation reliably reports 95-100% across all shifts and products, your ideal cycle time is almost certainly set too low, and you are hiding real speed losses inside the Performance denominator.
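The formula above, as a sketch. The function name and the shift figures are illustrative assumptions; the point is that the ideal cycle time going in must be the nameplate value.

```python
def performance_rate(ideal_cycle_s, total_count, operating_time_s):
    """Performance = (ideal cycle time x total count) / operating time.
    ideal_cycle_s must be the nameplate cycle time, not a softened nominal."""
    return ideal_cycle_s * total_count / operating_time_s

# Illustrative shift: 8 h scheduled, 1 h downtime, 1,890 parts,
# nameplate cycle 12 s (all assumed).
operating_s = (8 - 1) * 3600                      # 25,200 s operating time
p = performance_rate(12.0, 1890, operating_s)     # 0.90
print(f"Performance: {p:.0%}")
```

Swap the 12-second nameplate for a softened 13.5-second nominal and the same shift reports above 100 percent: that is the tell described in the answer above.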
What is a good production speed in practice?
The wrong question. The right one is: what is the gap between your nameplate speed and your actual running speed, and what is driving it? A world-class plant runs at 92-97 percent of nameplate in steady state, 85-92 percent including changeovers. Most mid-market plants run at 72-85 percent and believe they run at 92-97 percent because the nominal cycle time has been set to make the Performance Rate look good.
Should we always try to run faster?
No. Running the bottleneck station faster is almost always the right move. Running non-bottleneck stations faster is almost never the right move — it just builds inventory, accelerates wear, and increases the probability of quality incidents without adding a single unit to the line output. Theory of Constraints is forty years old and still correct.
How do we handle the speed-quality tradeoff?
Honestly and with data. The real calculation is not cycle time versus first-pass yield in isolation; it is effective good-part throughput (FPY ÷ cycle time) over a meaningful period. A machine running at 95 percent of nameplate with 99 percent FPY produces more good parts per hour than the same machine running at 100 percent of nameplate with 92 percent FPY. Instrument both and let the arithmetic decide. See also Quality Metrics.
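The tradeoff resolves in two multiplications. The nameplate rate below is an assumed figure; the speed and FPY percentages are the ones from the answer above.

```python
# Good parts per hour = nameplate rate x speed (fraction of nameplate) x FPY.
def good_parts_per_hour(nameplate_uph, speed_fraction, fpy):
    return nameplate_uph * speed_fraction * fpy

nameplate = 100.0  # units per hour at design speed (assumed)

slower_but_cleaner = good_parts_per_hour(nameplate, 0.95, 0.99)  # 94.05 good uph
flat_out = good_parts_per_hour(nameplate, 1.00, 0.92)            # 92.00 good uph
```

The slower machine wins by roughly two good parts per hour, which over a three-shift week is a number nobody argues with.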
How long until we see honest production-speed numbers after instrumenting a line?
Three to five working days for the baseline. One to two weeks for people to believe the baseline. Four to eight weeks for the first structural improvement actions to move the baseline measurably. In the Brita case, the first measurable output gains came within the first three months on the instrumented lines.
How does SYMESTIC support production-speed improvement?
Automatic cycle counting at 1-second resolution, every stop captured and categorised, separation of nameplate / nominal / actual speed on the dashboard, Performance Rate calculated against the true nameplate (not a softened nominal), disaggregation by shift / product / operator / material. Plus bidirectional integration with SAP, InforCOM, Navision and other ERPs so the actual-speed data flows back into planning. See SYMESTIC Production Metrics.
Related: OEE · Performance Rate · Cycle Time · Takt Time · Throughput · Micro-Stops · Quality Metrics · SYMESTIC Production Metrics