
Process Quality: Stability, Capability & How It's Measured

By Christian Fieg · Last updated: April 2026

What is process quality?

Process quality is the degree to which a production process produces conforming output consistently, without drift, with predictable variation, and with enough margin against specification limits to absorb normal disturbances. It is not the same as product quality. Product quality describes what comes out of the process; process quality describes how reliably the process produces it. A factory can ship good parts while having poor process quality — it just happens through inspection and rework, at the full cost of poor quality. And a factory can have excellent process quality that temporarily produces bad parts when inputs shift. The two dimensions are related but independent, and quality engineering exists largely to keep them distinct.

The cleanest framing comes from Avedis Donabedian's quality triad, originally developed for healthcare but widely adopted in manufacturing: structure (the resources, equipment, people and systems available), process (how those resources are used to transform inputs into outputs), and outcome (what the customer receives). Process quality is the middle layer — the one that is hardest to see from outside the factory and the one where most improvement leverage lives.

The two conditions for high process quality

This is the single most important distinction on the topic, and it is the one most generic articles get wrong. High process quality requires two separate conditions, and both have to be true at the same time:

  1. Stability — the process is in statistical control. Its variation over time is random, not systematic. No trends, no cycles, no sudden shifts. The distribution of outputs tomorrow looks like the distribution today.
  2. Capability — the process, given its stable behaviour, is centred and narrow enough relative to the specification limits that it produces acceptable output as a matter of course. Not as a matter of inspection.

A process can be stable but incapable — it reliably produces bad parts. It can be capable but unstable — the average result meets spec but you cannot predict tomorrow's output. Only when both conditions hold is the process truly in good shape. Quality engineering has separate tools for each: control charts for stability, capability indices for capability. Confusing the two is the most common mistake on shopfloors that are trying to improve.
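The stability half of this check can be sketched with an individuals chart, the simplest control chart: estimate short-term sigma from the average moving range (MR-bar / 1.128) and ask whether every point falls inside the 3-sigma limits. This is a minimal sketch, not a full phase-I study; the measurements are invented for illustration.

```python
# Sketch of a stability check on an individuals chart.
# Assumption: sigma is estimated from the average moving range
# divided by d2 = 1.128 (the standard constant for n = 2).
def control_limits(xs):
    """Centre line and 3-sigma limits for an individuals chart."""
    mrs = [abs(b - a) for a, b in zip(xs, xs[1:])]  # moving ranges
    sigma = (sum(mrs) / len(mrs)) / 1.128           # short-term sigma estimate
    centre = sum(xs) / len(xs)
    return centre - 3 * sigma, centre, centre + 3 * sigma

xs = [10.2, 9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.2, 9.9, 10.1]
lcl, centre, ucl = control_limits(xs)
stable = all(lcl <= x <= ucl for x in xs)
print(f"LCL={lcl:.2f}  centre={centre:.2f}  UCL={ucl:.2f}  stable={stable}")
```

Note that this test says nothing about the specification limits: if the tolerance band were narrower than the interval [LCL, UCL], the same process would be stable but incapable.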

Capability indices — Cp, Cpk, Pp, Ppk

Once a process is stable, its capability is expressed through four closely related indices. They look similar, differ in subtle ways, and get confused constantly.

Index | What it measures                                                     | Typical target
Cp    | Potential capability (spread only — assumes process is perfectly centred) | ≥ 1.33 standard, ≥ 1.67 critical
Cpk   | Actual capability (spread and centring — uses short-term variation)  | ≥ 1.33 standard, ≥ 1.67 critical
Pp    | Potential performance (like Cp but uses long-term variation)         | ≥ 1.33 standard
Ppk   | Actual performance (like Cpk but uses long-term variation)           | ≥ 1.33 standard

The practical rule of thumb: Cpk is the number you report to customers, Ppk is the number that tells you the truth over time. Cpk uses within-subgroup variation (short term) and tends to be optimistic. Ppk uses total variation (long term) and is what the customer actually experiences. A process with Cpk = 1.67 but Ppk = 1.10 is telling you that something shifts between subgroups — a shift change, a tool change, a material batch — and you haven't found it yet. In automotive (IATF 16949) the common contractual requirement is Cpk ≥ 1.67 at PPAP and Ppk ≥ 1.33 in ongoing production.
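Under the usual definitions, all four indices can be computed from subgrouped data. The sketch below assumes short-term sigma estimated as R-bar / d2 (d2 = 2.326 for subgroups of five) and long-term sigma as the overall sample standard deviation; the subgroups and spec limits are illustrative.

```python
# Sketch: Cp, Cpk, Pp, Ppk from subgrouped measurements.
# Assumptions: subgroups of 5, so d2 = 2.326 for the R-bar estimate;
# data and spec limits are invented for illustration.
from statistics import mean, stdev

def capability(subgroups, lsl, usl):
    flat = [x for sg in subgroups for x in sg]
    mu = mean(flat)
    r_bar = mean(max(sg) - min(sg) for sg in subgroups)
    s_within = r_bar / 2.326   # short-term (within-subgroup) sigma
    s_total = stdev(flat)      # long-term (overall) sigma

    def pair(s):
        cp = (usl - lsl) / (6 * s)
        cpk = min(usl - mu, mu - lsl) / (3 * s)
        return cp, cpk

    cp, cpk = pair(s_within)
    pp, ppk = pair(s_total)
    return cp, cpk, pp, ppk

# Two subgroups with shifted means: within-subgroup spread is small,
# but the shift inflates the long-term sigma, so Ppk < Cpk.
subgroups = [[10.0, 10.1, 9.9, 10.0, 10.0],
             [10.6, 10.7, 10.5, 10.6, 10.6]]
cp, cpk, pp, ppk = capability(subgroups, lsl=9.0, usl=11.0)
print(f"Cp={cp:.2f} Cpk={cpk:.2f} Pp={pp:.2f} Ppk={ppk:.2f}")
```

Running this on subgroups whose means drift apart reproduces exactly the gap described above: between-subgroup shifts inflate the long-term sigma but leave the within-subgroup estimate untouched.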

The sigma scale — what process quality looks like in DPPM

Process capability is often expressed as a sigma level, which translates directly into expected defects per million parts (DPPM or DPMO). The table below is the canonical reference every Six Sigma practitioner knows, with the 1.5-sigma shift Motorola introduced to account for long-term drift.

Sigma level | DPMO (with 1.5σ shift) | Yield      | Typical context
2σ          | 308,537                | 69.1 %     | Uncontrolled process
3σ          | 66,807                 | 93.3 %     | Industry average for many processes
4σ          | 6,210                  | 99.4 %     | Competent manufacturing
5σ          | 233                    | 99.977 %   | Best-in-class automotive
6σ          | 3.4                    | 99.9997 %  | Aerospace, pharmaceuticals, semiconductor

Two observations from seeing these numbers in real factories over two decades. First, most mid-market manufacturers operate their critical processes between 3σ and 4σ and often think they are higher because they only measure end-of-line yield, not process capability. Second, moving from 3σ to 4σ is usually a bigger win financially than anything in the OEE availability or performance buckets — that single step cuts the defect rate roughly tenfold, and each further sigma level cuts it by progressively larger factors.
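The conversion behind these figures is just the standard normal distribution plus the conventional 1.5σ shift, and it fits in a few lines of standard-library Python; the round-trip checks in the comments were verified against the canonical table.

```python
# Sketch: DPMO <-> sigma level with the conventional 1.5-sigma shift.
# Uses only the Python standard library (NormalDist, Python 3.8+).
from statistics import NormalDist

def sigma_from_dpmo(dpmo):
    """Short-term sigma level implied by a long-term DPMO figure."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

def dpmo_from_sigma(sigma):
    """Long-term DPMO implied by a short-term sigma level."""
    return (1 - NormalDist().cdf(sigma - 1.5)) * 1_000_000

print(round(dpmo_from_sigma(4)))          # ≈ 6210
print(round(sigma_from_dpmo(66_807), 2))  # ≈ 3.0
```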

Common cause vs special cause — why it matters who acts

Walter Shewhart in 1924 and later W. Edwards Deming made a distinction that still defines modern quality thinking: variation has two sources, and confusing them is the single most expensive mistake in process management.

  • Common-cause variation is the inherent noise of the process — the natural spread of outcomes when nothing unusual is happening. It can only be reduced by changing the process itself: better equipment, tighter tolerances, a different method. Attempting to "fix" common-cause variation point by point actively makes things worse (Deming called this "tampering"). This is a management-level problem.
  • Special-cause variation is a specific, identifiable disturbance — a broken tool, a wrong material batch, a mis-set parameter, a new operator. It can and should be fixed point by point at the shopfloor. Leaving it unaddressed means the process is not in control, which means capability indices are meaningless.

Control charts exist to tell these two apart. Points inside the control limits with no trend = common cause, leave it alone, improve systemically. Points outside the control limits or showing non-random patterns = special cause, stop and investigate. In practice, operators are often pulled into "investigating" common-cause variation because management demands an explanation for every small fluctuation. That is where process quality collapses, not because the process is bad, but because the reaction to its normal variation is.
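Two of the classic special-cause tests (a point beyond the 3-sigma limits, and a long run on one side of the centre line, in the spirit of the Western Electric rules) can be sketched as follows; the limits, run length of 8, and data are illustrative assumptions, not a complete rule set.

```python
# Sketch: flag two common special-cause signals on a control chart.
# Assumptions: limits come from a prior phase-I study; run_length = 8
# follows the common "eight points on one side" convention.
def special_cause_signals(xs, centre, lcl, ucl, run_length=8):
    """Return (index, reason) pairs for out-of-control points."""
    signals = []
    side_run, last_side = 0, 0
    for i, x in enumerate(xs):
        if x < lcl or x > ucl:
            signals.append((i, "beyond 3-sigma limits"))
        side = 1 if x > centre else -1 if x < centre else 0
        if side == 0:
            side_run = 0            # a point on the centre line breaks the run
        elif side == last_side:
            side_run += 1
        else:
            side_run = 1
        last_side = side
        if side_run >= run_length:
            signals.append((i, f"{run_length} points on one side of centre"))
    return signals

centre, lcl, ucl = 10.0, 9.0, 11.0
xs = [10.2, 9.8, 11.4, 9.9, 9.8, 9.7, 9.9, 9.8, 9.6, 9.9, 9.7]
signals = special_cause_signals(xs, centre, lcl, ucl)
print(signals)
```

Everything this function does not flag is, by the logic above, common-cause variation — the points to leave alone and improve only systemically.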

How process quality is measured in practice

For decades, process quality was measured retrospectively: take samples, measure them offline, plot them on paper control charts, calculate Cpk at the end of the week. This is still how most mid-market plants operate. It works, sort of — but it turns process quality into a lagging indicator. By the time the Cpk report arrives on Monday, Friday's shift has already produced a full day of drifted output.

The current-generation answer is to capture process parameters continuously at the machine, run SPC rules against them live, and alert on special-cause signals in the moment they appear. Across the 15,000+ machines SYMESTIC has connected, this pattern is the one that separates plants whose capability indices slowly improve from plants whose capability indices plateau: not the statistics, which are the same in both cases, but the lag between drift occurring and being noticed. Neoperl's 15 % scrap reduction did not come from new statistical methods. It came from cutting that lag from weekly to minutes, by correlating PLC-level alarm signals with stops and quality defects automatically. The same pattern shows up in every serious quality-improvement programme.

FAQ

What is the difference between process quality and product quality?
Product quality describes how well the finished output meets specification and customer expectations. Process quality describes how consistently and capably the production process produces that output. High process quality almost always produces high product quality; the reverse is not true. See the related article on product quality for the customer-facing view.

Is high Cpk enough to declare a process "good"?
No. Cpk is only meaningful after stability has been established — usually by showing the process is in statistical control on a control chart for at least 20–30 subgroups. Calculating Cpk on an unstable process produces a number, but that number does not predict future behaviour. Ppk should always be compared with Cpk; a large gap between them indicates that something is shifting between subgroups and has not been identified.

What is considered good process quality in 2026?
It depends entirely on industry. Automotive IATF 16949 work typically requires Cpk ≥ 1.67 at PPAP and Ppk ≥ 1.33 in series production. Medical devices and aerospace push higher. General discrete manufacturing benchmarks against Cpk ≥ 1.33. Anything below Cpk 1.0 is a process that produces some defects as a matter of design, not accident.

Do I need a full Six Sigma programme to improve process quality?
No. Six Sigma methodology (DMAIC, Black Belts, tollgate reviews) is valuable in large organisations with complex processes and a structured improvement culture. In a mid-market plant with ten production lines, the higher-leverage move is usually to instrument the process with real-time data capture first, and only then decide which DMAIC projects to start. Running DMAIC without live process data is possible but slow — you spend most of the Measure phase assembling data that should have been captured automatically.

What does SYMESTIC do for process quality specifically?
Three things, all delivered on the same platform. First, continuous capture of process parameters (temperature, pressure, cycle time, parameter values) at the machine via OPC UA or digital I/O. Second, capture of reject events and alarm signals at the PLC level, mapped to the production order and linked to the process parameters that were running at that moment. Third, SPC rules running live against the joined dataset, surfacing special-cause signals before they become reject clusters. The typical outcome across customers is a 5–15 % reduction in scrap and rework within the first 6–12 months — not because statistics changed, but because the time between drift and detection collapsed.


Related: Product Quality · Statistical Process Control · Cp/Cpk · Six Sigma · Production Defect · Process Data Module

About the author
Christian Fieg
Head of Sales at SYMESTIC. 25+ years in manufacturing, including three years as a Six Sigma Black Belt running DMAIC projects on automotive headliner production. Former global MES & traceability lead at Johnson Controls (900+ machines, 30+ processes, four continents). IATF 16949 environments in Germany, Mexico, China, Czechia, Hungary. Author of "OEE: One Number, Many Lies" (2025). · LinkedIn