Cost of Poor Quality (COPQ) is the sum of all costs a manufacturer incurs because products, processes, or services fail to meet quality requirements. Strictly, it covers the failure costs — scrap, rework, warranty claims, complaint handling — everything that would disappear if every unit were right first time; some definitions also fold in the appraisal work needed to catch defects. Synonyms: non-conformance cost, quality failure cost, Fehlerkosten (the German term).
COPQ is the operational counterweight to OEE: where OEE quantifies lost capacity, COPQ quantifies lost margin. From 25+ years running Six Sigma DMAIC projects at Johnson Controls and leading global MES rollouts at Visteon, I've seen COPQ routinely hide 15–40% of revenue — and nearly every plant under-reports it, because the costs sit in ten different accounts, not one.
COPQ and Cost of Quality are not synonyms, and the distinction matters when you build a business case. CoQ is the full Juran framework; COPQ is the failure subset inside it. Everything you invest in prevention is CoQ but not COPQ — and a rising prevention budget is usually what makes COPQ fall.
| Dimension | Cost of Poor Quality (COPQ) | Cost of Quality (CoQ) |
|---|---|---|
| Scope | Failure costs only (internal + external) | All quality-related costs |
| Categories | Scrap, rework, warranty, recalls, complaints | Prevention + Appraisal + Internal Failure + External Failure |
| Purpose | Quantify the cost of non-conformance | Balance prevention investment against failure cost |
| Typical share of revenue | 15–40% (low maturity), 5–10% (Six Sigma mature) | 20–45% total, shifting from failure to prevention over time |
| Origin | Crosby (1979), Juran (1951) | Juran, Feigenbaum (PAF model) |
Rule of thumb: if someone quotes a single "quality cost" number, ask whether prevention is in or out. The answer tells you whether they're negotiating budget or measuring pain.
COPQ breaks into four buckets that trace back to Juran and Feigenbaum, codified in ASQ's PAF model:

- **Internal failure costs** cover everything caught before shipment: scrap, rework, re-inspection, downgraded material, and the downtime that follows a scrap event.
- **External failure costs** are what leaks to the customer: warranty claims, field repairs, recalls, premium freight to replace defective parts, and the sales margin you lose when a demanding customer walks away.
- **Appraisal costs** fund the inspection that keeps internal failures from becoming external ones — incoming inspection, in-process SPC checks, final test, gauge calibration, audits.
- **Prevention costs** sit outside strict COPQ definitions but belong in the discussion: training, FMEA, control plans, design reviews, supplier development.

Shift spend from failure to prevention, and total quality cost falls — that is the entire playbook.
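The COPQ/CoQ split across the four buckets can be sketched in a few lines. All cost items and amounts below are hypothetical, purely for illustration of the bucket logic:

```python
from enum import Enum

class QualityCostBucket(Enum):
    PREVENTION = "prevention"          # CoQ only, outside strict COPQ
    APPRAISAL = "appraisal"
    INTERNAL_FAILURE = "internal_failure"
    EXTERNAL_FAILURE = "external_failure"

# Hypothetical annual cost items (EUR), tagged per the PAF model.
cost_items = {
    "operator_training":   (QualityCostBucket.PREVENTION, 80_000),
    "incoming_inspection": (QualityCostBucket.APPRAISAL, 120_000),
    "scrap_and_rework":    (QualityCostBucket.INTERNAL_FAILURE, 450_000),
    "warranty_claims":     (QualityCostBucket.EXTERNAL_FAILURE, 300_000),
}

def total(buckets):
    """Sum all cost items whose bucket falls in the given set."""
    return sum(amount for bucket, amount in cost_items.values() if bucket in buckets)

# COPQ = failure costs only; CoQ = all four buckets.
copq = total({QualityCostBucket.INTERNAL_FAILURE, QualityCostBucket.EXTERNAL_FAILURE})
coq = total(set(QualityCostBucket))
```

With these illustrative figures, COPQ comes to €750k while CoQ totals €950k — the €200k gap is exactly the prevention and appraisal spend the strict COPQ definition excludes.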
A defensible COPQ figure never comes from a single ledger. Start by pulling twelve months of transaction data from your ERP: scrap postings, rework hours, returns, warranty reserves, concession credits. Map each cost to the four buckets above. Then add the invisible costs your ERP doesn't tag — scrap-related downtime valued at contribution margin per minute, expedited freight on replacement orders, quality-engineer hours burned on 8D reports. A modern MES like SYMESTIC automates the machine-side half of this: every scrap event gets a timestamp, reason code, order, shift, and operator, so the downtime cost is quantified at the source instead of reconstructed from memory. The benchmark from ISO 22400 and Juran: COPQ usually lands between 15% and 40% of revenue in a mid-maturity plant, and 5–10% in a Six Sigma mature one. If your first number is under 5%, you are under-measuring.
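The twelve-month roll-up described above reduces to simple arithmetic once the buckets are populated. This is a minimal sketch: every figure, field name, and the contribution-margin rate are assumptions, not data from any real plant:

```python
# Hypothetical 12-month figures pulled from ERP transaction data (EUR).
erp_costs = {
    "scrap_postings": 420_000,
    "rework_hours_valued": 180_000,
    "warranty_reserves": 250_000,
    "concession_credits": 60_000,
}

# Invisible costs the ERP doesn't tag, reconstructed from machine events.
contribution_margin_per_min = 35.0      # EUR/min, assumed
scrap_related_downtime_min = 9_500      # from reason-coded stop events, assumed
hidden_costs = {
    "scrap_downtime": scrap_related_downtime_min * contribution_margin_per_min,
    "expedited_freight": 45_000,
    "8d_engineering_hours": 30_000,
}

annual_revenue = 12_000_000.0           # assumed
copq = sum(erp_costs.values()) + sum(hidden_costs.values())
copq_pct = 100 * copq / annual_revenue
print(f"COPQ: €{copq:,.0f} ({copq_pct:.1f}% of revenue)")
```

Note how the "invisible" half adds roughly a third on top of what the ledger shows — the same pattern as the 2–4× gap described below when plants run their first honest analysis.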
COPQ is under-reported for three structural reasons. First, the costs are scattered across accounts: scrap sits in manufacturing, warranty in finance, premium freight in logistics, customer credits in sales. No one owns the aggregate number. Second, opportunity costs are never booked — the order you couldn't ship because a line was scrapping 8% doesn't show up anywhere. Third, the most expensive defect categories are measured least accurately: field failures discovered six months post-shipment, intermittent problems that customers stop reporting and start buying from competitors instead. When we run the first honest COPQ analysis with a new customer, the number is almost always 2–4× what the finance team reported. Not because anyone was dishonest — because the system was built to produce accounting numbers, not quality numbers. Until machine data and quality events flow through one system, COPQ is an estimate, not a measurement.
Hard-earned lesson from a JIT headliner rollout at Johnson Controls: We tracked scrap at 2.3% and thought COPQ was under control. After wiring up real-time capture on the line, we discovered that 4–7 second microstops — each officially "too short to log" — compounded into 11% of available production time, most of it caused by out-of-spec incoming material. The official scrap rate was accurate. The official COPQ number was off by a factor of three, because the downtime and the secondary scrap from restart transients were invisible. The fix wasn't a better inspection plan; it was connecting the PLC directly to the quality system so every microstop got reason-coded. Within 90 days, true COPQ dropped from roughly 12% to 7% of line revenue. Principle: if your COPQ calculation depends on humans writing things down, you are measuring compliance with the form, not the cost.
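The microstop math behind that lesson is easy to reproduce. A sketch with invented timestamps — the threshold, cycle time, and event data are all assumptions, not the actual Johnson Controls figures:

```python
# Detect microstops from consecutive cycle-complete timestamps.
# Gaps longer than the nominal cycle but shorter than the manual
# logging threshold are exactly the stops that stay invisible on paper.
nominal_cycle_s = 12.0
manual_log_threshold_s = 60.0   # assumed: operators only log stops >= 1 min

# Hypothetical PLC cycle-complete timestamps (seconds since shift start).
timestamps = [0, 12, 24, 41, 53, 65, 84, 96, 108, 127]

gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
microstop_s = sum(g - nominal_cycle_s for g in gaps
                  if nominal_cycle_s < g < manual_log_threshold_s)
available_s = timestamps[-1] - timestamps[0]
print(f"Hidden microstop time: {microstop_s:.0f}s "
      f"({100 * microstop_s / available_s:.1f}% of available time)")
```

Even this toy trace hides roughly 15% of available time in gaps a paper log would never capture — which is why reason-coding has to happen at the PLC, not on a clipboard.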
At SYMESTIC we see the same hidden-COPQ pattern across 15,000+ connected machines in 18 countries. Brita's Taunusstein lines cut downtime by 5% within the first year after digital signals from Optima assembly machines started feeding real-time quality dashboards. Neoperl correlated PLC alarms with quality defects on fully automated assembly and reduced scrap by 15% and stops by 10%. Carcoustics, running 500+ machines across seven countries on MQTT via IXON gateways, ties every machine cycle to a SAP production order through bidirectional IDoc, so failure costs post to the right order instantly — no month-end reconciliation. The architecture is deliberately boring: digital I/O or OPC UA into an Azure-native MES, one reason-code taxonomy across plants, no spreadsheets. Zero customer churn in 2024 and roughly 150% SaaS growth tell us the pattern works whether you're forging, moulding, bottling, or assembling.
What is a typical COPQ percentage of revenue?
Plants that have never measured COPQ systematically usually land at 15–40% of revenue. Mature Six Sigma environments push it below 10%, and world-class operations run at 3–5%. If your reported number is below 5% and you haven't been running formal quality cost accounting for years, you're almost certainly under-measuring — most likely missing opportunity costs and field-failure tails.
What's the difference between COPQ and scrap rate?
Scrap rate is one input to COPQ, not a substitute for it. A 2% scrap rate tells you how many parts were rejected; COPQ tells you what those rejections, plus the rework, the downtime, the expedited freight, the warranty claims, and the lost margin actually cost the business. Scrap rate is a count. COPQ is a P&L number.
How is COPQ related to Six Sigma?
COPQ is the financial justification layer of every Six Sigma programme. DMAIC projects are selected and prioritised against the COPQ they eliminate — typically with a required minimum project savings (e.g. €100k per Black Belt project). Without a credible COPQ baseline, Six Sigma becomes a training exercise rather than a P&L instrument.
Does COPQ include prevention costs?
Strictly, no. Prevention costs — training, FMEA, design reviews, SPC infrastructure — are part of Cost of Quality but not COPQ. The point of measuring both is to show that prevention investment reduces failure cost at a 5:1 to 10:1 ratio. If you fold prevention into COPQ, you lose the ability to make that business case.
How long does it take to reduce COPQ meaningfully?
First measurable reductions appear within one to three months once real-time machine data starts feeding the reason-code system. A 30–50% drop in measured COPQ inside the first year is common when the starting point is a plant without automatic capture. The slow part is never the tooling — it's aligning accounting, quality, and production on one definition.
Can COPQ be calculated without an MES?
Yes, but the number will be wrong in predictable directions. Manual capture systematically under-reports microstops and the downtime tail behind scrap events, and degrades reason-code accuracy. Spreadsheet-based COPQ typically lands at 40–60% of the true figure. An MES doesn't replace the accounting layer; it fixes the source data feeding it.
How does COPQ connect to OEE?
Tightly. The Quality factor of OEE captures the in-line scrap half of COPQ. Availability losses caused by scrap-driven downtime sit inside the Availability factor. A plant that optimises OEE without tracking COPQ usually discovers it is shifting cost between buckets rather than eliminating it — running faster to scrap more efficiently.
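The bucket-shifting can be made concrete. A minimal sketch of how the OEE Quality factor and the COPQ cost of the same scrap relate — all quantities and rates below are assumed for illustration:

```python
# The OEE Quality factor counts scrapped units; COPQ prices them.
total_units = 10_000
good_units = 9_800
unit_cost = 4.50                      # EUR material + conversion per unit, assumed
scrap_downtime_min = 120              # restart transients behind scrap events, assumed
contribution_margin_per_min = 35.0    # assumed

quality_factor = good_units / total_units                  # feeds OEE's Q term
inline_scrap_cost = (total_units - good_units) * unit_cost
downtime_cost = scrap_downtime_min * contribution_margin_per_min  # hits OEE's A term
print(f"Q = {quality_factor:.3f}; "
      f"COPQ slice = €{inline_scrap_cost + downtime_cost:,.0f}")
```

A Q of 0.980 looks healthy on an OEE dashboard; priced out, the same 200 scrapped units plus their downtime tail cost several times the raw material value — which is the argument for tracking both metrics.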
How does SYMESTIC implement COPQ reduction?
SYMESTIC captures every cycle, stop, and scrap event in real time via OPC UA or digital I/O gateways, assigns reason codes at the source, ties each event to the active production order through ERP integration, and surfaces the cost impact on live dashboards. The result is a COPQ number that updates every minute rather than every quarter. Start with Production Metrics and extend into Alarms when correlating PLC events to quality defects.
Related: OEE · MES · MES Software Comparison · OEE Software · Six Sigma · Statistical Process Control · FMEA · Scrap Rate · Production Metrics · Alarms.