Agents That Act, Not Just Answer
Every industrial organisation has tried LLMs. They answer questions. Agentic Operations is what happens when AI starts acting on the answers.
IndustrialClaw Team · March 7, 2026
Every industrial organisation has run some version of this experiment: paste some historian data into ChatGPT and ask it what’s wrong. Or feed it a maintenance report and ask for a summary. Or give it an alarm log and ask for the pattern.
The answer is often useful. Plausible, coherent, sometimes genuinely insightful. ChatGPT reads the trend data, identifies the anomaly, offers a diagnosis.
Then what? The engineer copies the answer into an email. Or writes it up in a report. Or heads to the control room to verify. The AI provided the answer. Everything that happens after the answer — raising the work order, sending the briefing, escalating to the right person — happens exactly the way it always did.
This is the gap between LLMs and Agentic Operations.
What LLMs Do and Don’t Do
Large language models are generation systems: you provide context, and they generate a response. When the context is good — well-structured data, a clear question — the responses can be remarkably useful for an operations team.
What LLMs don’t do is act. They don’t wake up at 02:00 when an alarm fires. They don’t have persistent access to your historian, running continuously and tracking asset behaviour across shifts. They don’t create a work order in your maintenance system. They don’t post the shift handover briefing to the incoming operator’s channel at 05:45, before the operator arrives.
The session starts when a person opens the chat window. It ends when they close it. Between those moments, the AI is genuinely useful. Outside those moments, it doesn’t exist as an operational presence.
What Acting Means in an Operational Context
Acting, in the context of industrial operations, means a specific set of actions with measurable operational value.
Raising a work order. When an agent detects a degradation pattern that crosses a threshold — bearing vibration trending toward the failure signature, process temperature deviation persisting beyond normal bounds — raising the work order in the maintenance system is an action with direct operational value. Not a recommendation to raise a work order. Raising it.
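As a minimal sketch of that step, assuming a historian client with a `read` method and a CMMS client with a `raise_work_order` method (both hypothetical stand-ins, as is the threshold value):

```python
from statistics import mean

VIBRATION_ALERT_MM_S = 7.1  # assumed alert level; set per asset class

def check_bearing(historian, cmms, tag: str) -> None:
    # Hypothetical sketch: `historian` and `cmms` are illustrative
    # stand-ins, not real product APIs.
    readings = historian.read(tag, hours=24)  # assumed historian call
    trend = mean(readings)
    if trend > VIBRATION_ALERT_MM_S:
        # The action itself: the work order is created, not suggested.
        cmms.raise_work_order(
            asset=tag,
            priority="P2",
            summary=f"Vibration averaging {trend:.1f} mm/s, "
                    f"above {VIBRATION_ALERT_MM_S} mm/s alert level",
        )
```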
Sending a structured briefing. When an alarm fires or an exception is detected, an agent that assembles the context — trend data, maintenance history, correlated variables, recommended action — and posts it to the right person’s channel is acting. The operator doesn’t have to go looking. The briefing comes to them.
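One way to picture "structured" is the briefing as a typed object rather than free text. The field names and the `channel.post` call here are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Briefing:
    asset: str
    trigger: str                    # what fired: alarm ID or exception
    trend_summary: str              # recent behaviour of the key variable
    maintenance_history: list[str]  # recent work orders on this asset
    correlated_tags: list[str]      # variables moving with the anomaly
    recommended_action: str

def deliver(briefing: Briefing, channel) -> None:
    # The agent pushes context to the operator's channel;
    # nobody has to go looking for it.
    channel.post(briefing)
```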
Escalating an exception. When a condition exceeds the agent’s confidence threshold, or crosses into territory that requires human judgment, the agent escalates — to the right person, with the right context, through the right channel. Not a generic alert. A structured escalation with everything the recipient needs to make a decision.
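A sketch of that gate, with hypothetical names throughout. The point is that the escalation branch carries the same structured context as the action branch:

```python
CONFIDENCE_FLOOR = 0.8  # assumed threshold; the real one is site policy

def act_or_escalate(diagnosis, confidence: float, agent, on_call) -> None:
    # Hypothetical gate: diagnosis, agent, and on_call are stand-ins.
    if confidence >= CONFIDENCE_FLOOR and diagnosis.within_authority:
        agent.execute(diagnosis.recommended_action)
    else:
        # Structured escalation: the evidence travels with the alert.
        on_call.escalate(
            diagnosis=diagnosis,
            confidence=confidence,
            reason="below confidence floor or outside agent authority",
        )
```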
Posting the shift handover summary. The outgoing shift’s key events, open work orders, assets under observation, and recommended next actions — assembled and posted before the incoming operator arrives. Not waiting to be asked.
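Sketched as a scheduled job under assumed method names, with the summary assembled from the shift log and posted on a timer rather than on request:

```python
import datetime as dt

POST_AT = dt.time(5, 45)  # assumed: a scheduler fires this before shift start

def build_handover(log) -> dict:
    # `log` is a hypothetical store of the outgoing shift's activity.
    return {
        "key_events": log.events(since_hours=12),
        "open_work_orders": log.open_work_orders(),
        "assets_under_observation": log.watchlist(),
        "recommended_next_actions": log.recommendations(),
    }
```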
The Blast Radius Question
Before an agent can act, the governance question has to be answered: what is this agent authorised to do, and what happens if it acts incorrectly?
The concept of blast radius — borrowed from security engineering — is useful here. What’s the scope of potential damage if an agent acts on incorrect information or makes a wrong decision?
Read-only agents carry a blast radius close to zero. An agent that monitors historian data, assembles briefings, and posts summaries — but takes no write actions — can be wrong without causing operational harm. The human reads the briefing, verifies against their own knowledge, and acts. A wrong briefing is an inconvenience, not a production event.
Write-capable agents carry risk proportional to what they’re authorised to write. An agent that can raise work orders in the CMMS carries more risk than a read-only briefing agent. An agent that can adjust setpoints carries more. The governance model explicitly defines the blast radius — through role-based authorisation, scoped permissions, and hard capability limits — and starts at zero. Every capability is granted deliberately, not available by default.
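One way to make "starts at zero" concrete is a deny-by-default capability table. This is a sketch with invented capability names, not a product schema:

```python
# Capabilities as an explicit grant table: deny by default,
# every write capability added deliberately.
AGENT_CAPABILITIES = {
    "read_historian": True,      # blast radius near zero
    "post_briefings": True,      # output only; a human verifies
    "raise_work_orders": False,  # CMMS write: granted after a track record
    "adjust_setpoints": False,   # largest blast radius; hard-off
}

def authorised(action: str) -> bool:
    # An unknown action is an ungranted action.
    return AGENT_CAPABILITIES.get(action, False)
```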
HAS (Human Agency Scale) provides the framework for understanding where an agent sits on the spectrum from pure advisory to governed autonomy. You start at HAS 1–2 — read-only, advisory — and move toward higher levels only when the governance architecture, operational track record, and risk assessment support it.
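A sketch of how that gating might be encoded. Only "HAS 1–2 = read-only, advisory" comes from the text above; the level names, the levels above 2, and the 180-day figure are assumptions for illustration:

```python
from enum import IntEnum

class HAS(IntEnum):
    ADVISORY = 1           # answers when asked, no persistence
    MONITORING = 2         # read-only, always-on, posts briefings
    SCOPED_WRITES = 3      # may raise work orders, nothing else (assumed)
    GOVERNED_AUTONOMY = 4  # acts within hard capability limits (assumed)

def permitted_level(track_record_days: int, risk_assessed: bool) -> HAS:
    # Promotion is earned, not defaulted: higher levels require an
    # operational track record and a supporting risk assessment.
    if risk_assessed and track_record_days >= 180:
        return HAS.SCOPED_WRITES
    return HAS.MONITORING
```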
The Always-On Property
What changes when an agent is running persistently isn’t just faster response times. It’s the detection of exceptions that happen in the window between shifts — the 3am bearing degradation trend that develops while no experienced engineer is watching the historian.
In a traditional operation, that trend might not be noticed until the day shift engineer runs their morning check, or until it crosses into alarm territory. In an operation with persistent agent monitoring, the degradation is detected when it starts, not when it becomes a problem. The work order is raised before the failure. The briefing is waiting when the shift engineer arrives.
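The mechanism is simple to sketch: a loop that never closes its session window. Every name here (historian, detector, briefer) is a hypothetical stand-in:

```python
import time

POLL_SECONDS = 300  # assumed five-minute cadence

def monitor(historian, detector, briefer, tags: list[str]) -> None:
    # Persistent loop: runs across shifts, including the 3am window
    # when nobody is watching the historian.
    while True:
        for tag in tags:
            window = historian.read(tag, hours=72)  # assumed call
            if detector.is_degrading(window):
                # Caught at onset, not at the alarm threshold.
                briefer.notify(tag, window)
        time.sleep(POLL_SECONDS)
```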
A Tier 1 mining producer has deployed agents monitoring thousands of control loops across their full operation, diagnosing degradation patterns before they become failure events and identifying tuning actions continuously. The value doesn’t come from agents making decisions faster, or from replacing human decision-making. It comes from the always-on property: catching the patterns humans miss between shifts, like the 3am bearing trend that never crosses an alarm threshold but was detectable three weeks before failure.
From Answering to Acting
The question for operations teams evaluating AI isn’t whether LLMs are useful. They are. The experiment of pasting historian data into ChatGPT produces genuinely useful outputs, and that utility is real.
The question is whether that utility extends beyond the session window. Whether the AI is present at 02:00 when the alarm fires. Whether it raises the work order or just recommends one. Whether it briefs the incoming shift or waits to be asked.
An AI that answers questions is useful when someone is asking. An AI that acts is useful all the time.
See how IndustrialClaw handles these use cases in production →