
The AI Labs Are Building the Wrong Layer for Industrial Operations

Foundation model labs are racing to solve Layer 1 - world dynamics and physical prediction. Industrial operations need Layers 2 and 3. Here is why that gap matters, and what it will take to close it.

Pieter van Schalkwyk · March 22, 2026

The foundation model labs are building world models. Meta’s V-JEPA 2, Google DeepMind, World Labs - all racing toward the same objective: AI that understands physical reality, predicts what happens next in the physical world, and reasons about causality from raw sensory input.

This is genuinely important work. And for industrial operations, it addresses almost none of the actual problem.

Three Layers, One Missing

A useful way to think about what AI needs to understand for industrial autonomy is as three distinct layers, each with its own dynamics, its own data, and its own failure modes.

Layer 1: Physical and World Dynamics. The behaviour of materials, fluids, machines, and physical processes. What does the sensor read? What happens to pressure when temperature rises? How does a bearing degrade? This is the layer the foundation labs are building. It is necessary. It is not sufficient.

Layer 2: Operational and KPI Dynamics. How operations actually run - production schedules, shift patterns, maintenance windows, quality targets, throughput constraints, energy budgets. This is not physics. It is operational policy. It changes when the product changes, when the market changes, when the contract changes. Nobody in the ML research community is systematically building this layer.

Layer 3: Socio-Technical and Organisational Dynamics. The human systems that govern operations - escalation protocols, approval workflows, regulatory constraints, shift handover culture, the difference between what the procedure says and what experienced operators actually do. This layer is even further from ML research, and it is where the most consequential decisions get made.

Industrial AI needs all three. Autonomous industrial agents that understand physics (Layer 1) but not operational policy (Layer 2) and not organisational context (Layer 3) will generate recommendations that are physically plausible and operationally wrong.

Three layers of industrial knowledge
```mermaid
flowchart TD
    L3["Layer 3 - Socio-Technical Dynamics<br/>Escalation, approval, regulatory, organisational context"]
    L2["Layer 2 - Operational KPI Dynamics<br/>Production schedules, quality targets, maintenance windows"]
    L1["Layer 1 - Physical World Dynamics<br/>Physics, materials, sensor data - what foundation labs build"]
    L3 --> L2
    L2 --> L1
```

Foundation model labs are building Layer 1. Industrial autonomy requires all three.

The Gap That Operations Engineers Already Know

This is not an abstract framework. Every process operator has lived the gap between Layer 1 and Layer 2.

An AI system trained on physics can tell you that your distillation column reflux ratio is drifting from optimal. What it cannot tell you - because this is Layer 2, not Layer 1 - is that the plant is currently in a grade transition, that the product specification changes in four hours, that the maintenance window opens tomorrow morning, and that running at reduced throughput now to preserve equipment margin is the right trade-off given those constraints.

Layer 2 knowledge is not in the sensor data. It is in the CMMS, the production schedule, the shift log, the operator’s mental model built over two decades. It changes continuously. It is not learnable from physics.

Layer 3 is even further out of reach for pure ML approaches. The experienced senior operator who overrides a recommendation at 2am based on a heuristic that exists nowhere in any formal system - that is Layer 3. The difference between the procedure that says “escalate to shift supervisor” and the real escalation pattern that experienced teams follow under pressure - that is Layer 3. When the operator retires, both leave with them.

Why This Matters for Industrial Autonomy

Industrial automation has largely solved the execution problem. Programmable logic controllers (PLCs) execute deterministically. Distributed control systems (DCS) supervise effectively. Advanced process control (APC) optimises within its scope.

What none of them can do is reason across the boundaries between layers - correlating Layer 1 sensor data against Layer 2 operational constraints while respecting Layer 3 organisational context - and propose a coherent, governed action.

This is the reasoning gap. It is not a gap in sensing, logging, or computation. It is a gap in governed cross-layer reasoning.

The governed reasoning layer proposed by IndustrialClaw addresses this gap with a specific architecture:

  • Observe across all three layers simultaneously - not just sensor data, but operational schedules, maintenance history, regulatory state, and organisational context.
  • Reason about what the combination means - using neuro-symbolic architecture that combines the pattern recognition of neural models with the constraint enforcement of symbolic systems.
  • Propose actions through a governed pipeline - where every recommendation is validated against formal operational constraints before it becomes an instruction.
  • Execute only through validated paths - through the same supervisory control paths human operators use, with the same audit trail.
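The observe-reason-propose-execute loop above can be sketched as a minimal governed pipeline. Everything here is illustrative: the `Action` shape, the constraint names, and the validator are hypothetical stand-ins, not the IndustrialClaw API.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A proposed change to the process, e.g. a setpoint adjustment."""
    target: str
    value: float
    rationale: str

@dataclass
class GovernedPipeline:
    """Validates every proposed action against symbolic constraints
    before it may reach an execution path. Every decision is logged,
    approved or not, to model an immutable audit trail."""
    constraints: list  # (name, predicate) pairs over Action
    audit_log: list = field(default_factory=list)

    def propose(self, action: Action) -> bool:
        violations = [name for name, ok in self.constraints if not ok(action)]
        approved = not violations
        self.audit_log.append((action.target, action.value, approved, violations))
        return approved

# Two hypothetical Layer 2 constraints: a throughput cap and a setpoint
# frozen for the duration of a grade transition.
pipeline = GovernedPipeline(constraints=[
    ("throughput_cap", lambda a: not (a.target == "feed_rate" and a.value > 420.0)),
    ("frozen_during_grade_transition", lambda a: a.target != "reflux_ratio"),
])

ok = pipeline.propose(Action("feed_rate", 450.0, "maximise throughput"))
# Physically plausible, operationally rejected: the cap blocks it.
```

The point of the sketch is the ordering: the symbolic check sits between reasoning and actuation, so a recommendation that violates operational policy never becomes an instruction.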

Software Defined Automation makes this architecture actionable. In a traditional PLC environment, an agent can observe and recommend, but cannot act - changing control logic requires physical access and specialist knowledge. In an SDA environment built on IEC 61499 function blocks and OPC-UA, the control logic is software. An agent with the right permissions and the right validation layer can propose and execute changes through supervisory control paths. SDA creates the execution surface area that a three-layer reasoning agent needs.
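The SDA idea - control logic as software, writable only through a permissioned, audited path - can be sketched as a toy store of function-block parameters. The names below are illustrative; this is not the IEC 61499 or OPC-UA object model.

```python
from dataclasses import dataclass, field

class WritePermissionError(Exception):
    """Raised when an agent lacks write rights to a function block."""

@dataclass
class ControlLogicStore:
    """Toy SDA environment: function-block parameters are data that an
    agent may rewrite only if it holds write permission, and every
    accepted change is appended to an audit trail."""
    blocks: dict       # function-block name -> parameter dict
    write_acl: set     # agent ids allowed to write
    audit: list = field(default_factory=list)

    def update_block(self, agent_id: str, block: str, params: dict) -> None:
        if agent_id not in self.write_acl:
            raise WritePermissionError(f"{agent_id} may not modify {block}")
        self.blocks[block] = {**self.blocks[block], **params}
        self.audit.append((agent_id, block, params))

# Hypothetical PID block; only agent-7 holds write permission.
store = ControlLogicStore(
    blocks={"PID_feed": {"kp": 1.2, "ki": 0.05}},
    write_acl={"agent-7"},
)
store.update_block("agent-7", "PID_feed", {"kp": 1.5})
```

In a real deployment the write path would go through supervisory control interfaces rather than an in-memory dict, but the shape is the same: permissions gate the write, and the change lands in the audit trail.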

The Ten Laws That Govern the Gap

The Digital Twin Consortium published a framework for trustworthy industrial AI agents in early 2026: the Industrial AI Agent Manifesto. Ten laws for agents that operate safely across all three layers of industrial knowledge.

The manifesto establishes requirements that directly address the three-layer gap: physics-aware and process-aware intelligence (Law 2), symbolic primacy with sub-symbolic intelligence (Law 3), progressive autonomy with safety boundaries (Law 8), and multi-agent safety orchestration (Law 9).

These laws are not a vendor wishlist. They emerge from the recognition that industrial agents operating without three-layer grounding - without operational context, without organisational awareness, without neuro-symbolic constraint enforcement - create a blast radius that consumer AI architectures are not designed to contain.

IndustrialClaw is built to implement all ten laws in production industrial environments. The MAGS layer (Multi-Agent Generative Systems) provides the three-layer reasoning capability. The governance layer implements the symbolic constraints that validate every proposed action before execution. The actuation layer delivers governed action to OT systems through bounded, auditable paths.

Human Agency Scale (HAS 1 → 5)

  • HAS-01 - Monitoring (read-only): Agents observe and alert. Humans decide and act. Zero agent-initiated actions.
  • HAS-02 - Advisory (recommend-only): Agents observe, diagnose, and recommend. Humans approve. Agents explain their reasoning.
  • HAS-03 - Assisted Action (bounded-write): Agents initiate routine actions within approved boundaries. Humans retain override authority.
  • HAS-04 - Supervised Autonomy (supervised): Agents handle the full exception workflow. Humans monitor and intervene on exception only.
  • HAS-05 - Full Autonomous (full autonomy): Agent networks coordinate autonomously. Humans set objectives and governance parameters. Full audit trail maintained.

A Tier 1 mining operator has progressed to HAS-05 across 200+ facilities.

The Human Agency Scale shows what progressive autonomy looks like in practice: from HAS-01 (pure monitoring) through HAS-03 (AI-augmented human decisions) to HAS-05 (coordinated autonomous multi-agent operations). Progressing safely up that scale requires agents that understand not just the physics of the process, but the operational constraints at Layer 2 and the organisational context at Layer 3.

An agent that only understands Layer 1 should not be operating at HAS-04 or HAS-05. This is not a governance argument. It is an architecture argument.
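That architecture argument can be stated as a simple gate: an agent's permitted HAS level is bounded by which layers it actually models. The mapping below is one illustrative reading of the scale, not a published rule.

```python
def max_has_level(layers_understood: set) -> int:
    """Upper bound on the Human Agency Scale level an agent should
    operate at, given which knowledge layers (1-3) it models.
    Illustrative mapping, not a normative standard."""
    if layers_understood == {1, 2, 3}:
        return 5   # full three-layer grounding: up to full autonomy
    if {1, 2} <= layers_understood:
        return 3   # physics + operations: bounded assisted action
    if 1 in layers_understood:
        return 2   # physics only: advisory at most
    return 1       # otherwise: monitoring only

level = max_has_level({1})  # a physics-only agent stays advisory
```

A deployment gate like this turns the architecture argument into something enforceable: the autonomy level is derived from the system's layer coverage rather than asserted by its vendor.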

What This Means for Deployment

If you are planning an industrial AI deployment in 2026, the first question is not which foundation model to use. The first question is: which layers does this system understand?

A system that understands Layer 1 (physics) but not Layer 2 (operational policy) will generate recommendations that are physically coherent and operationally wrong. It will recommend throughput maximisation during a grade transition. It will propose maintenance interventions that conflict with the production schedule. It will miss the context that experienced operators carry in their heads.

A system that understands Layers 1 and 2 but not Layer 3 (organisational dynamics) will generate recommendations that operators do not trust and cannot act on - because the recommendations don’t account for how the organisation actually works, who has authority to approve what, and what the regulatory constraints are in this jurisdiction.

Building genuine industrial autonomy means building for all three layers. The foundation labs are working hard on Layer 1. The industrial AI community needs to build Layers 2 and 3 with the same rigour - and the governance architecture to connect them safely.

That is the work IndustrialClaw is doing.


Explore the architecture further at Why IndustrialClaw, or talk to us about what a three-layer deployment looks like in your operation.
