
Operational Excellence Software: What Actually Works

By Christian Fieg · Last updated: April 2026

What is operational excellence software?

Operational Excellence software (OpEx software) is the loose umbrella term for any digital tooling that helps a manufacturing organisation measure, analyse and improve its operations on a continuous basis. In practice the label covers four overlapping software categories, none of which covers the whole job on its own, and most vendors now rebrand their products as "OpEx platforms" because the phrase sells better than the underlying category acronyms did.

Having run OpEx programmes on four continents, audited roughly fifty shortlists for customers who thought they were buying "an OpEx suite", and watched a large share of those projects stall, I will say the uncomfortable part out loud. There is no such thing as a single OpEx software product that replaces the work. What exists is a stack of three specific layers, and the value lies entirely in getting the bottom layer right. This article is about that stack, the category confusion around it, and what to buy in what order.

The four categories dressed up as "OpEx software"

When a plant manager starts researching operational excellence software, the first two hours of Google and vendor demos produce a fog. Four very different product categories all claim to be the answer. They are complementary, not competing, and the first step in any honest evaluation is to separate them.

  • MES / OEE platforms: the data layer. Captures machine states, orders, quality and OEE in real time. Representative vendors: SYMESTIC, Hydra, Forcam, Shoplogix, MachineMetrics.
  • Shopfloor management (SFM): the action layer. Digital huddle boards, action items, shift log. Representative vendors: SFM-Systems, Valuestreamer, Operations1.
  • Process mining / analytics: the analysis layer. Finds patterns, bottlenecks, process variants. Representative vendors: Celonis, Signavio, UiPath Process Mining.
  • CI / idea management: the governance layer. Kaizen ideas, DMAIC project tracking, audits. Representative vendors: KPI Fire, i-nexus, idea2share, Isolocity.

A usable OpEx stack combines three of the four: a real-time data layer at the bottom, an analysis layer in the middle, and an action or governance layer at the top. The plants I have seen fail most consistently are the ones that bought the top layer first because it looked clean in a demo, then spent two years trying to feed it with data that did not exist.

The stack that actually works

Strip the category marketing and a functioning OpEx stack has three concrete jobs. Every product in the market maps to one of them. The order of the three jobs is not negotiable. Each layer depends on the one below.

Layer 1: the data layer. Continuous, automatic capture of machine states, cycle counts, rejects, order progress, setup events and alarm codes, tied to the production order and stored in one place. This is the job of an MES or OEE platform. Without it, every number on every dashboard above it is an estimate, and every improvement decision is a guess.
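As a concrete illustration of what layer-1 capture produces, here is a minimal sketch of a machine-state event tied to a production order, with an availability roll-up on top. All field names and values are hypothetical, not any vendor's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch only -- field names are invented, not a SYMESTIC schema.
@dataclass
class MachineEvent:
    machine_id: str       # which asset produced the event
    order_id: str         # production order the event is tied to
    state: str            # e.g. "RUNNING", "SETUP", "FAULT"
    start: datetime       # when the state began
    duration: timedelta   # how long the machine stayed in that state

def availability(events: list[MachineEvent]) -> float:
    """Share of total captured time the machine spent running."""
    total = sum(e.duration.total_seconds() for e in events)
    running = sum(e.duration.total_seconds()
                  for e in events if e.state == "RUNNING")
    return running / total if total else 0.0

events = [
    MachineEvent("M1", "ORD-100", "RUNNING", datetime(2026, 4, 1, 6, 0), timedelta(minutes=420)),
    MachineEvent("M1", "ORD-100", "FAULT",   datetime(2026, 4, 1, 13, 0), timedelta(minutes=30)),
    MachineEvent("M1", "ORD-100", "SETUP",   datetime(2026, 4, 1, 13, 30), timedelta(minutes=30)),
]
print(round(availability(events), 3))  # 420 / 480 minutes = 0.875
```

The point of the sketch is the `order_id` field: every event is attributed to an order automatically, which is exactly what end-of-shift manual entry cannot deliver.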

Layer 2: the analysis layer. Pareto ranking of losses, SPC on quality, trending on availability, drill-down from KPI to root event. In smaller plants this lives inside the MES dashboard. In larger plants, or when cross-plant comparison matters, a process mining or BI layer (Celonis, Signavio, Power BI) sits on top of the MES data. Process mining without a clean MES feed underneath is a party trick.
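The Pareto ranking at the heart of layer 2 is simple enough to sketch. The loss categories and downtime minutes below are invented for illustration; the mechanics are what matter: sort by impact, accumulate, and the vital few causes surface at the top:

```python
# Pareto ranking of downtime losses: sort categories by impact and
# compute the cumulative share. Categories and minutes are illustrative.
losses = {
    "tool change": 310,
    "material shortage": 120,
    "short stops": 540,
    "quality rework": 80,
    "planned maintenance": 150,
}

total = sum(losses.values())
ranked = sorted(losses.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for category, minutes in ranked:
    cumulative += minutes
    print(f"{category:20s} {minutes:4d} min  cum {100 * cumulative / total:5.1f}%")
```

In this invented data set, two of five categories already account for over 70 percent of the lost minutes, which is the usual shape of the curve on a real line.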

Layer 3: the action layer. The place where identified improvements become accountable work. Digital shopfloor boards with shift-level KPIs. Kaizen idea tracking with owner and due date. DMAIC project boards with status and impact. Audit and compliance workflows. This is where SFM systems and CI/Kaizen tools live.

Practical rule: a plant with a strong layer 1 and no layer 3 can still improve, because the data shows the problems so clearly that people act on them anyway. A plant with a strong layer 3 and no layer 1 cannot improve, because the nicest Kaizen board in the world is just a decoration when it is fed by operator estimates and gut feel. Buy bottom-up.

Why "operational excellence software" projects stall

OpEx software projects have a higher failure rate than MES projects, which is saying something. I have seen this play out often enough that the pattern is easy to name. Five failure modes cover the majority.

1. Top-down buying. Corporate picks a beautiful SFM or OpEx suite at headquarters level. Plants are asked to feed it with data they do not have. Twelve months later the boards are empty and the budget is gone. The right sequence is always data first, action layer second.

2. Fake baselines. The OpEx software ingests OEE numbers that operators type in at end of shift. These numbers are routinely 10 to 30 percentage points higher than the honest real-time number. The improvement programme runs on fiction, and any "gain" later cannot be distinguished from better reporting. My book OEE: One Number, Many Lies is devoted to this single failure mode for a reason.
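The arithmetic behind the distortion is worth seeing once. OEE is availability times performance times quality, and a few forgotten short stops in the availability term move the headline number by double digits. The figures below are invented purely to illustrate the scale:

```python
# OEE = availability x performance x quality.
# All numbers are invented to illustrate the distortion, not plant data.
def oee(availability: float, performance: float, quality: float) -> float:
    return availability * performance * quality

# End-of-shift operator estimate: short stops forgotten, setup rounded down.
reported = oee(availability=0.90, performance=0.95, quality=0.99)

# Automatic second-by-second capture of the same shift.
measured = oee(availability=0.72, performance=0.88, quality=0.97)

print(f"reported OEE: {reported:.1%}")  # 84.6%
print(f"measured OEE: {measured:.1%}")  # 61.5%
```

A 23-point gap from plausible-looking inputs: each factor is only slightly optimistic, but the product compounds the optimism.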

3. Process mining without process data. Process mining tools ingest ERP logs and make flow diagrams. In a real manufacturing environment, 60 to 80 percent of the interesting variance never reaches the ERP, because it happens between the order-release event and the order-confirm event. Without MES-level event data, process mining shows a sanitised cartoon of the plant.

4. Governance tools as theatre. Kaizen-idea platforms with a thousand ideas and no measurable impact. The software is doing exactly what it is built for. The missing part is the data layer that would let anyone verify whether an idea actually moved a metric.

5. Too many suites, no integration. SFM here, QMS there, Kaizen tracker on a third platform, MES data in a fourth, and the BI layer stitching them together in PowerPoint. This is not a stack, it is an archaeological site. The fix is usually to consolidate layer 1 first and let the analytics and action layers hang off the same source of truth.

Buying sequence: what to procure first

Given the above, the procurement order for a plant or plant network that is starting fresh runs counter to most shortlists I have been shown. It goes against how category-oriented analysts structure their RFPs, and it is the right answer anyway.

  1. First: an MES / OEE platform with automatic data capture. Cloud-native if the plant infrastructure allows, with order-level granularity and 1-second resolution on machine states. No manual entry for the basics.
  2. Second: the analysis capability that already lives inside the MES. Every competent MES vendor ships Pareto, trending, SPC and drill-down. Use those first. Only if cross-plant or cross-process analytics are needed should a dedicated process mining or BI platform come on top.
  3. Third: the SFM / CI layer. Once the data is honest and the analysis routine is running, a digital SFM board or a Kaizen tracker multiplies the value because the action items now link to verifiable numbers.
  4. Fourth, if applicable: enterprise consolidation. A group-level BI or process mining layer that sits across multiple plants. Justifiable only once the individual plants have clean layer-1 data.

The plants that have compressed this sequence into months rather than years are the ones that stopped treating OpEx software as a category problem and started treating it as a data problem first.

A real case: Neoperl

Neoperl is an international manufacturer of precision water-flow components, headquartered in Müllheim with plants in Bulgaria, the UK and Italy. The company had a mature continuous-improvement culture already. Kaizen events ran quarterly, the CI organisation had its own governance, and the action layer was functioning. What was missing was the data layer underneath it, which meant the CI programme was running on alarm counts that nobody had categorised and short-stop data that nobody had captured.

The SYMESTIC engagement was framed from day one in exactly the terms of this article: not as "an OpEx suite", but as the layer-1 foundation for an existing CI programme. A four-week PoC on a single line proved that PLC alarms could be captured automatically, categorised, and correlated with quality defects. Three lines went live next, and the rollout has been continuous since.

The measured results across the connected lines after the first year:

  • 10 % fewer stops, because short-stop causes were finally visible and categorisable
  • 8 % higher availability, through structured action on the top alarm categories
  • 15 % less scrap, via automatic correlation of PLC alarms with downstream quality defects
  • 15 % productivity gain, as the combined result of the three above
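The alarm-to-defect correlation behind the scrap number can be sketched as a time-window join: pair each quality defect with the PLC alarms raised on the same machine shortly before it. All machine names, alarm codes and timestamps below are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a time-window join between PLC alarms and
# downstream quality defects. All identifiers and timestamps are invented.
WINDOW = timedelta(minutes=10)

def alarms_before(defect, alarms, window=WINDOW):
    """Alarm codes on the defect's machine within `window` before the defect."""
    machine, t_defect, _ = defect
    return [code for m, t_alarm, code in alarms
            if m == machine and timedelta(0) <= t_defect - t_alarm <= window]

alarms = [("M1", datetime(2026, 4, 1, 8, 2), "TEMP_HIGH"),
          ("M1", datetime(2026, 4, 1, 9, 40), "FEED_JAM")]
defects = [("M1", datetime(2026, 4, 1, 8, 7), "burr"),
           ("M1", datetime(2026, 4, 1, 11, 0), "scratch")]

for defect in defects:
    print(defect[2], "->", alarms_before(defect, alarms) or "no alarm in window")
```

Defects that repeatedly co-occur with the same alarm code become candidate root causes; defects with no alarm in the window point the investigation elsewhere.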

None of those numbers required Neoperl to replace its CI methodology, its Kaizen structure, or its shopfloor governance. What it required was the bottom of the stack done properly. That is the unglamorous, repeatable truth of operational excellence software: the layer you do not show in the boardroom is the one that makes the rest of the stack work.

FAQ

Is there a single "operational excellence software" product?
No, despite how vendors market it. OpEx is a stack of three layers (data, analysis, action), and every serious programme uses products from at least two of them. Anyone selling a single box that covers the entire stack end-to-end is usually strong in one layer and thin in the others.

Do we need a dedicated SFM tool, or is the MES enough?
For a single plant with up to 15 to 20 machines, a modern MES with built-in dashboards and action-item capability is usually enough. Dedicated SFM platforms start to earn their keep once a plant has multiple areas, multiple shifts across more than one site, and formal daily-management cadences that span organisational layers. Buying an SFM tool before layer 1 is solid is a common and expensive mistake.

Where does process mining fit in?
Process mining is a powerful analysis layer, but it lives and dies on the quality of the event log it ingests. In manufacturing that event log needs to come from the MES, not from the ERP alone, because the interesting variability happens between ERP transactions. Process mining on ERP logs only shows a cleaned-up version of the plant.

What about process management and quality modules?
Quality management systems (QMS) and process-management tools are real and useful, but they belong inside the stack, not next to it. A modern MES like SYMESTIC integrates bidirectionally with QMS platforms (CASQ-it, Böhme & Weihs) so the quality actions are triggered by the same data that drives the OEE dashboard. Two systems, one source of truth.

How long does an OpEx software programme take to show results?
With the right sequence (data layer first), measurable improvement on availability, OEE and scrap is typical within the first 12 weeks of go-live. The Neoperl numbers above were delivered inside a year of rollout. Programmes that start with the action layer instead often take 18 to 24 months to show anything, and frequently never do.

How does SYMESTIC fit into the OpEx stack?
SYMESTIC is the layer-1 platform: real-time OEE, availability, scrap, order progress, alarm capture, correlated to the production order. Built-in Pareto and analysis tools cover a large share of layer 2 out of the box. Bidirectional integrations with SAP, CASQ-it, Böhme & Weihs and major ERP systems make it the single source of truth that SFM and CI tools sit on top of. See SYMESTIC Production Metrics.


Related: MES · OEE · Process Improvement · Quality Metrics · Production Data · Operational Excellence Manager · SYMESTIC Production Metrics

About the author
Christian Fieg
Head of Sales at SYMESTIC. 25+ years in manufacturing including Johnson Controls, Visteon, iTAC and Dürr. Six Sigma Black Belt with three years on the DMAIC side of the Johnson Controls headliner lines. Led global MES and traceability rollouts across 900+ machines in China, Mexico, USA, France, Tunisia and Russia. Author of OEE: One Number, Many Lies (2025). · LinkedIn