Over the past decade, factories and plants across the world have been instrumenting their production lines with sensors of all stripes. The goal was to optimize production, maintenance and reliability, and equipment effectiveness, and, broadly, to enable better evidence-based management of plant assets. The immediate objective was to move from expensive corrective maintenance to preventive maintenance, with the long-term ambition of running the plant along predictive maintenance lines.
Real-world outcomes from the investments in IoT have been lower than the hype would have you believe, particularly in the domain of maintenance and reliability.
What happened?
The system of instrumentation rested on a set of implicit assumptions about its own completeness, and the fact that they were implicit didn't make things better. The assumptions were as follows:
- Self-evident utility. The IoT camp made two assumptions: that sensors would generate a consistent stream of hard facts, namely time-series data on the state of production and the condition of assets, and that algorithms would surface anomalies, trends, and forecasts whose utility would be self-evident. Humans would take note of the data, use it as is, and treat the alarms and forecasts as signals to be acted upon without question.
- A process vacuum. The IoT believers assumed the absence of an existing maintenance process. In practice, any plant, regardless of its level of technological sophistication, is managed by teams that have a way of doing things and a belief system about how the plant and its assets behave. There are no self-optimizing autonomous plants. This was somehow missed by the IoT advocates of the 2010s.
- Static processes and teams. Industrial plants operate in a dynamic ecosystem where operating conditions continuously evolve. As the production and maintenance context changes, so does the utilization and interpretation of machine data. Any maintenance system that cannot accommodate this inevitable drift in how things are done will fail. This, too, was ignored by the original proponents of IoT in the 2010s.
What’s the way forward? A data, visualization, and workflow layer
The moment an alarm is ignored, it is a sign that the investment in the shiny system of plant instrumentation will likely not yield the desired ROI.
We propose the following:
- A data layer consolidating machine data into a view suitable for floor workers. The instrumentation infrastructure has to fit into the existing production and maintenance workflow. This implies that alarms have to be consolidated, and notifications aggregated and transformed into something plant workers will actually use. The thing to aim for is trust in what is reported to the industrial field worker. The thing to fight against is drift: the moment a notification fires once too often, or disagrees with what on-the-ground personnel believe, a gap opens between the signal-decision-correction triad as it exists on paper and the de facto process, i.e., how it's done in practice.
- A configurable digital workflow solution. Little is achieved if every asset has best-in-class sensors while everyday tasks and decisions happen on paper forms or through informal channels, such as emails with attachments. The core challenge in industrial plants is the inevitable gap between the process as captured by the officially designated information system and what plant workers actually use (and the data they trust). To ensure this rift does not appear, and that the information system acquired or built at great pain and expense doesn't end up as shelfware, the workflow must be highly configurable. Change must be easy and shouldn't require calls to the software vendor.
- A UX suitable for industrial field workers. None of this can happen without enthusiastic adoption by industrial field workers. Training and top-down mandates can only do so much (much shelfware is the consequence of a top-down mindset). Consuming and capturing data should be as frictionless as possible: mobility, a paper-like UX, plain-text queries, voice-to-text, whatever it takes.
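The consolidation described in the first point can be made concrete with a small sketch: collapsing repeated raw alarms on the same asset and alarm code within a time window into a single notification with an occurrence count. The asset names, alarm codes, and one-hour window below are illustrative assumptions, not a description of any particular product.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Alarm:
    asset: str   # e.g. a pump or compressor tag
    code: str    # e.g. "VIB_HIGH"
    ts: float    # seconds since some epoch


def consolidate(alarms, window=3600):
    """Collapse repeated alarms on the same (asset, code) that fall
    within `window` seconds of the first occurrence into a single
    notification carrying an occurrence count."""
    groups = defaultdict(list)
    for a in sorted(alarms, key=lambda a: a.ts):
        groups[(a.asset, a.code)].append(a)

    notifications = []
    for (asset, code), events in groups.items():
        bucket = [events[0]]
        for e in events[1:]:
            # Window is anchored at the first event of the bucket.
            if e.ts - bucket[0].ts <= window:
                bucket.append(e)
            else:
                notifications.append({"asset": asset, "code": code,
                                      "first_seen": bucket[0].ts,
                                      "count": len(bucket)})
                bucket = [e]
        notifications.append({"asset": asset, "code": code,
                              "first_seen": bucket[0].ts,
                              "count": len(bucket)})
    return notifications


alarms = [
    Alarm("pump-07", "VIB_HIGH", 0),
    Alarm("pump-07", "VIB_HIGH", 120),
    Alarm("pump-07", "VIB_HIGH", 5000),   # outside the first window
    Alarm("compressor-02", "TEMP_HIGH", 60),
]
for n in consolidate(alarms):
    print(n)
```

Four raw alarms become three notifications: the two vibration alarms inside the window collapse into one. The worker sees "VIB_HIGH on pump-07, twice in two minutes" instead of a scrolling wall of repeats, which is exactly the trust-preserving behaviour the bullet argues for.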
RECOMMENDATIONS
Based on a decade spent building software for both routine and capital maintenance projects, Maximl would recommend the following:
- Think less in terms of a final, finished, system-of-record kind of solution, and more in terms of a system that can learn. The goal is to accommodate both what the sensors, PLCs, and MES generate, and the decisions taken and interventions made: to be the repository of ever-evolving context.
- Think less in terms of a dashboard and more in terms of a workflow. Dashboards on their own, sans the context of decisions, conversations, interactions with data, and interventions, will fade in relevance. Instead, it’s better to think in terms of a layer that bridges the gap between data and decisions, made up of a system of rules and workflows.
- Think low tech: data quality, not AI. Converting raw data into human-readable text (the way 'AI' is commonly understood in 2026) achieves little without high-quality data presented in context. There are many legitimate use cases for both classic AI and generative AI, but they are confined to highly specific areas, such as retrieving information from unstructured sources. Any vendor claims of using AI to automate decisions end-to-end should be viewed with scepticism.
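The "workflow, not dashboard" recommendation amounts to rules living in data rather than code, so a maintenance team can change thresholds and routing without a call to the software vendor. A minimal sketch follows; the rule schema, signal names, and thresholds are hypothetical, chosen only to illustrate the shape of such a layer.

```python
# Rules are plain data: editable by the plant team, versionable,
# and changeable without touching application code.
RULES = [
    {"signal": "bearing_temp_c", "above": 80.0,
     "action": "create_work_order", "assign_to": "mechanical"},
    {"signal": "vibration_mm_s", "above": 7.1,
     "action": "notify", "assign_to": "reliability"},
]


def evaluate(reading, rules=RULES):
    """Return the workflow actions triggered by one sensor reading."""
    triggered = []
    for rule in rules:
        value = reading.get(rule["signal"])
        if value is not None and value > rule["above"]:
            triggered.append({"action": rule["action"],
                              "team": rule["assign_to"],
                              "signal": rule["signal"],
                              "value": value})
    return triggered


reading = {"asset": "pump-07", "bearing_temp_c": 92.5, "vibration_mm_s": 3.0}
print(evaluate(reading))
```

Here the high bearing temperature routes a work order to the mechanical team, while the vibration reading stays below its threshold and triggers nothing. The point is not the ten lines of Python but the design choice: because the bridge from data to decision is configuration, it can track the inevitable drift in how the plant actually works.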


