Ten Laws for Trustworthy Industrial Autonomy
The Digital Twin Consortium published a framework for industrial AI agents that must operate safely in critical infrastructure. Here is what those ten laws mean in practice - and why they set a higher bar than enterprise AI governance.
Pieter van Schalkwyk · March 20, 2026
In February 2026, the Digital Twin Consortium published a technical brief titled the Industrial AI Agent Manifesto - ten laws for AI agents that operate in critical infrastructure. The framework emerged from the Composability Framework Working Group, drawing on field experience from industrial deployments across process manufacturing, energy, and mining.
The laws are not a wishlist. They are the minimum requirements for an AI agent that operates in an environment where the consequences of failure are measured in safety incidents, regulatory penalties, and unplanned production losses - not API credits.
This article explains each law and what it means in practice for industrial deployments.
Why Ten Laws, Not One
Enterprise AI governance frameworks - and there are many - tend to converge on a small number of concerns: transparency, fairness, accountability. These are important. They are not sufficient for industrial environments.
Industrial AI agents operate in environments where:
- Physical processes respond to commands in milliseconds, with failure modes that include equipment damage, process upsets, and safety incidents.
- Control systems must remain authoritative - agents cannot bypass PLCs, DCS safety interlocks, or hardware safety systems.
- Audit trails are not just good practice - they are regulatory requirements under ISA/IEC 62443, IEC 61511, and sector-specific frameworks.
- The humans in the loop are not software engineers comfortable with prompt engineering - they are process operators, maintenance technicians, and plant managers who need to trust, override, and verify AI recommendations.
Ten laws are needed because industrial AI governance is multidimensional in a way that enterprise AI governance is not.
Law 1: Deterministic Validation and Execution
What it says: AI agent recommendations must be validated by deterministic, formal constraint systems before execution. Non-deterministic outputs from neural models must be bounded by symbolic constraints before they touch physical systems.
What it means in practice: The Propose → Validate → Execute architecture. The neural/LLM layer proposes. A symbolic governance layer validates the proposal against formal constraints - operating limits, safety thresholds, equipment boundaries - before any instruction reaches the DCS or SCADA layer. If the neural layer produces a recommendation that violates a constraint, the governance layer rejects it regardless of how well-reasoned the recommendation appears.
This is not prompt-based guardrails. It is an architectural constraint. The symbolic layer cannot be argued around.
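A minimal sketch of this gate, with invented tag names and an illustrative API (the manifesto prescribes the architecture, not this interface):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    """A setpoint change proposed by the neural/LLM layer."""
    tag: str        # e.g. "FIC-101.SP" (invented)
    value: float
    rationale: str  # free-text reasoning from the model

@dataclass(frozen=True)
class Limit:
    """A formal operating constraint owned by the symbolic layer."""
    tag: str
    low: float
    high: float

def validate(proposal: Proposal, limits: list[Limit]) -> bool:
    """Deterministic gate: the proposal passes only if every applicable
    limit holds. The rationale is never consulted - the symbolic layer
    cannot be argued around."""
    return all(
        lim.low <= proposal.value <= lim.high
        for lim in limits if lim.tag == proposal.tag
    )

# Propose -> Validate -> Execute
limits = [Limit("FIC-101.SP", low=10.0, high=85.0)]
p = Proposal("FIC-101.SP", value=92.0, rationale="increase throughput")
if validate(p, limits):
    print(f"forward to control layer: {p.tag} -> {p.value}")
else:
    print(f"rejected: {p.tag}={p.value} violates operating limits")
```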
Law 2: Physics-Aware and Process-Aware Intelligence
What it says: Industrial AI agents must understand the physical and process domain they operate in - not just general knowledge.
What it means in practice: An agent connected to a distillation column must understand thermodynamics, P&ID conventions, alarm management standards (ISA-18.2, EEMUA 191), and the operating envelope of that specific unit. An agent trained on general web knowledge does not have this. It will produce recommendations that sound physically coherent but violate process constraints in ways that are invisible to anyone who is not a domain expert.
Physics-aware intelligence requires domain-specific training data, historian time-series understanding, OT protocol knowledge, and ontologies that map industrial concepts - not just natural language capability.
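One way to picture what such an ontology contributes at the data level is the toy mapping below, from tag names to physical quantities, units, and envelope limits. All tags, equipment names, and limits here are invented:

```python
# All tags, equipment names, units, and limits below are invented.
UNIT_ONTOLOGY = {
    "TI-204": {
        "quantity": "temperature", "unit": "degC",
        "equipment": "C-200 distillation column, tray 12",
        "envelope": (95.0, 118.0),
    },
    "PDI-210": {
        "quantity": "differential pressure", "unit": "kPa",
        "equipment": "C-200 distillation column",
        "envelope": (0.0, 42.0),  # above ~42 kPa this column floods
    },
}

def in_envelope(tag: str, value: float) -> bool:
    """A tag is just a string until the ontology gives it physics."""
    low, high = UNIT_ONTOLOGY[tag]["envelope"]
    return low <= value <= high

# A recommendation that sounds fine in natural language can still be
# operationally unsafe for this specific unit:
print(in_envelope("PDI-210", 47.5))  # False
```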
Law 3: Symbolic Primacy with Sub-Symbolic Intelligence
What it says: Symbolic reasoning - logic, rules, constraints, formal models - takes primacy over sub-symbolic (neural, statistical) reasoning in industrial AI systems.
What it means in practice: This is the neuro-symbolic architecture principle. Neural models excel at pattern recognition, anomaly detection, and natural language reasoning. They cannot guarantee logical consistency or constraint adherence. Symbolic systems can.
The correct architecture uses neural intelligence where it is strong (reading historian patterns, interpreting operator notes, correlating heterogeneous data) and symbolic intelligence where it is necessary (enforcing safety constraints, maintaining logical consistency, producing auditable reasoning chains). Symbolic primacy means the symbolic layer has final say - neural outputs are bounded and validated before execution.
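A toy illustration of this division of labour, with invented vibration data: a statistical scorer stands in for the neural layer, and a pair of explicit, auditable rules has the final say.

```python
import statistics

def anomaly_score(window: list[float]) -> float:
    """Sub-symbolic side: pattern recognition. A z-score against the
    preceding samples stands in for a trained model."""
    baseline = window[:-1]
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline) or 1.0
    return abs(window[-1] - mu) / sigma

def decide(score: float, trip: float = 4.0, alert: float = 2.5) -> str:
    """Symbolic side: explicit, auditable rules with the final say."""
    if score >= trip:
        return "escalate-to-operator"  # never auto-act on a trip condition
    if score >= alert:
        return "raise-advisory"
    return "no-action"

vibration = [2.1, 2.0, 2.2, 2.1, 2.0, 6.8]  # mm/s, invented
print(decide(anomaly_score(vibration)))      # escalate-to-operator
```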
Law 4: Separation of Control with Standardised Interoperability
What it says: AI agent intelligence must be cleanly separated from control system execution, with interoperability through standardised industrial protocols.
What it means in practice: Agents connect to industrial systems through OPC-UA, MQTT, and standard historian APIs - not through proprietary integrations that create vendor lock-in and security vulnerabilities. The agent layer proposes; the control layer executes through its own authoritative paths. This separation ensures that an agent failure cannot directly cause a control system failure, and that the agent layer can be replaced or updated without touching the control architecture.
Software Defined Automation (SDA) - IEC 61499 function blocks, OPC-UA as universal protocol, containerised control on standard edge compute - is the execution substrate that makes this separation clean and actionable.
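As a concrete sketch of the read side of this separation, assuming the open-source asyncua package (opcua-asyncio) - the endpoint and node id are invented, and the agent only reads; any write would go through the control layer's own authoritative path:

```python
import asyncio
from asyncua import Client  # pip install asyncua

async def observe(endpoint: str = "opc.tcp://edge-gw:4840/") -> float:
    # The agent layer speaks a standard protocol and only reads.
    # Writes happen exclusively through the control layer's own path.
    async with Client(url=endpoint) as client:
        node = client.get_node("ns=2;s=FIC-101.PV")  # invented node id
        return await node.read_value()

# asyncio.run(observe())  # requires a reachable OPC-UA server
```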
Law 5: Emergency Stop, Human Override, and Graceful Degradation
What it says: Every AI agent in an industrial environment must support immediate human override at every level of the autonomy stack, and must degrade gracefully when autonomous capability is suspended.
What it means in practice: At any Human Agency Scale level - from HAS-01 monitoring through HAS-05 full autonomy - a human operator must be able to stop, override, or suspend the agent immediately. The system must continue operating safely in degraded mode while the override is active.
Graceful degradation means the plant does not become less safe when AI is turned off - it simply reverts to the previous operating mode. Agents that create dependencies that make this impossible are unacceptable in industrial environments.
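A minimal override sketch - the class and method names are invented, but the shape is the point: one flag, checked before every agent-initiated action, flippable by a human at any time.

```python
import threading

class AutonomyGate:
    """One flag, checked before every agent-initiated action,
    flippable by a human at any time and at any HAS level."""

    def __init__(self) -> None:
        self._suspended = threading.Event()

    def human_override(self) -> None:
        self._suspended.set()    # takes effect on the next action check

    def resume(self) -> None:
        self._suspended.clear()

    def act(self, action, fallback) -> None:
        if self._suspended.is_set():
            fallback()  # graceful degradation: previous operating mode
        else:
            action()

gate = AutonomyGate()
gate.human_override()
gate.act(lambda: print("agent adjusts setpoint"),
         lambda: print("reverted to previous operating mode"))
```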
Law 6: Interoperability with Operational Systems
What it says: Industrial AI agents must integrate natively with the operational systems that industrial organisations actually run - historians, CMMS, DCS, SCADA, ERP, MES - not require replacement of the installed base.
What it means in practice: An agent platform that requires ripping out OSIsoft PI, replacing SAP PM, or restructuring SCADA architecture to function will not be deployed. Industrial organisations have made decades of investment in operational systems. The AI layer must be additive.
Native connectors to OSIsoft PI, Ignition, SAP PM, Maximo, and major DCS/SCADA platforms are not a nice-to-have. They are a deployment requirement.
Law 7: Auditability and Transparency
What it says: Every agent action, every tool call, every decision, and every recommendation must be logged to an immutable audit trail, with the reasoning that produced each decision available for regulatory and incident review.
What it means in practice: In ISA/IEC 62443 environments, this is a compliance requirement. In practice it means that when a regulator asks “why did the system recommend reducing reflux ratio at 02:47 on March 15?”, the answer must be available - the data inputs that informed the recommendation, the reasoning chain that produced it, the validation checks that passed, and the human approval or autonomous execution that followed.
This is not logging for the sake of logging. It is the foundation for operator trust. Operators who can read back the reasoning behind a recommendation will trust and act on it. Operators who cannot will override it - and be right to do so.
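One common way to make such a trail tamper-evident is hash chaining, sketched below with invented field names. Any retroactive edit breaks the chain from that entry onward:

```python
import hashlib
import json
import time

def append_entry(log: list[dict], event: dict) -> None:
    """Each entry commits to the hash of the one before it."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "prev": prev, **event}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any retroactive edit breaks it."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

trail: list[dict] = []
append_entry(trail, {"actor": "agent-7", "action": "recommend",
                     "detail": "reduce reflux ratio",
                     "inputs": ["TI-204", "PDI-210"]})
print(verify(trail))  # True
trail[0]["detail"] = "increase reflux ratio"
print(verify(trail))  # False - tampering is detectable
```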
Law 8: Progressive Autonomy with Safety Boundaries (HAS)
What it says: Autonomy must be granted progressively, with explicit governance at each level, rather than deployed at full autonomy from the start.
What it means in practice: The Human Agency Scale provides the framework. HAS-01 (monitoring and alerting) requires no special governance. HAS-03 (AI-augmented human decisions) requires human review of recommendations. HAS-05 (fully autonomous multi-agent operations) requires formal safety validation, defined operating envelopes, and explicit regulatory approval in many jurisdictions.
Industrial deployments that start at HAS-05 and try to govern backwards do not survive contact with operations teams or regulators. The correct path is progressive: prove value at HAS-01, build trust at HAS-02 and HAS-03, earn the right to operate at HAS-04 and HAS-05 through documented performance and governance.
The five levels of the Human Agency Scale:

- HAS-01 - Monitoring: Agents observe and alert. Humans decide and act. Zero agent-initiated actions.
- HAS-02 - Advisory: Agents observe, diagnose, and recommend. Humans approve. Agents explain their reasoning.
- HAS-03 - Assisted Action: Agents initiate routine actions within approved boundaries. Humans retain override authority.
- HAS-04 - Supervised Autonomy: Agents handle the full exception workflow. Humans monitor and intervene on exception only.
- HAS-05 - Full Autonomy: Agent networks coordinate autonomously. Humans set objectives and governance parameters. Full audit trail maintained.

A Tier 1 mining operator has progressed to HAS-05 across 200+ facilities.
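A minimal sketch of gating agent actions against the granted level on this ladder - the action names and API are invented:

```python
from enum import IntEnum

class HAS(IntEnum):
    MONITORING = 1
    ADVISORY = 2
    ASSISTED_ACTION = 3
    SUPERVISED_AUTONOMY = 4
    FULL_AUTONOMY = 5

# Minimum level required for each kind of action (action names invented).
REQUIRED = {
    "raise_alert": HAS.MONITORING,
    "recommend_setpoint": HAS.ADVISORY,
    "execute_routine_action": HAS.ASSISTED_ACTION,
    "run_exception_workflow": HAS.SUPERVISED_AUTONOMY,
    "coordinate_agent_team": HAS.FULL_AUTONOMY,
}

def permitted(action: str, granted: HAS) -> bool:
    """Autonomy is earned: an action runs only at or above its level."""
    return granted >= REQUIRED[action]

print(permitted("recommend_setpoint", HAS.ADVISORY))      # True
print(permitted("execute_routine_action", HAS.ADVISORY))  # False
```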
Law 9: Multi-Agent Safety Orchestration
What it says: When multiple AI agents operate in the same industrial environment, their actions must be coordinated by a safety-aware orchestration layer that prevents conflicts, manages shared resources, and enforces system-level constraints.
What it means in practice: An agent monitoring bearing vibration and an agent managing production scheduling may individually make correct recommendations that conflict with each other. Without orchestration, both recommendations reach the operator simultaneously. The operator must resolve the conflict manually - which defeats the purpose of autonomous operations.
Multi-agent orchestration defines agent ownership boundaries, conflict resolution protocols, escalation paths when agent recommendations disagree, and system-level constraints that no single agent can violate regardless of its local reasoning. This is MAGS - the Multi-Agent Generative Systems layer that coordinates agent teams in industrial environments.
Individual agents reason about their domain. MAGS orchestrates across domains, resolves conflicts, and maintains system-level safety constraints.
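A toy version of the conflict check, with invented agents and assets: two locally correct proposals collide on a shared pump, so the orchestrator escalates rather than forwarding both.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentProposal:
    agent: str
    resource: str  # the shared asset the proposal acts on
    action: str

def orchestrate(proposals: list[AgentProposal]) -> list[str]:
    """Group proposals by resource; forward singletons, escalate conflicts."""
    by_resource: dict[str, list[AgentProposal]] = {}
    for p in proposals:
        by_resource.setdefault(p.resource, []).append(p)
    decisions = []
    for resource, group in by_resource.items():
        if len(group) > 1:
            agents = " vs ".join(p.agent for p in group)
            decisions.append(f"conflict on {resource} ({agents}) -> escalate")
        else:
            decisions.append(f"forward {group[0].agent}: {group[0].action}")
    return decisions

print("\n".join(orchestrate([
    AgentProposal("vibration-agent", "P-301", "derate to 60% to protect bearing"),
    AgentProposal("scheduling-agent", "P-301", "run at 100% to meet shipment"),
])))
```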
Law 10: Safe and Secure Continuous Learning
What it says: Industrial AI agents must be capable of learning from operational experience, but that learning must be governed: validated before deployment, bounded by safety constraints, and protected against adversarial inputs.
What it means in practice: An agent that cannot learn from experience becomes stale as the process evolves - new equipment, new feedstocks, new operating regimes make yesterday’s model wrong today. An agent that learns without governance is a security and safety risk.
Safe continuous learning means: model updates are versioned and validated before deployment (not “run latest”), the learning pipeline is protected against prompt injection and adversarial sensor data, and the symbolic constraint layer remains authoritative even as the neural layer updates. Skills are treated as privileged infrastructure, not as npm packages.
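A minimal sketch of version-pinned, hash-verified skill loading - the manifest, skill name, and digest below are placeholders, not a real interface:

```python
import hashlib
from pathlib import Path

# (name, version) -> sha256 recorded at validation sign-off.
# The placeholder digest below stands in for a real pinned hash.
APPROVED_SKILLS = {
    ("column-optimizer", "1.4.2"): "0" * 64,
}

def load_skill(name: str, version: str, path: Path) -> bytes:
    """Refuse anything unpinned or modified - there is no 'run latest'."""
    expected = APPROVED_SKILLS.get((name, version))
    if expected is None:
        raise PermissionError(f"{name}=={version} has not passed validation")
    content = path.read_bytes()
    if hashlib.sha256(content).hexdigest() != expected:
        raise PermissionError(f"{name}=={version} failed hash verification")
    return content  # only now is the skill handed to the runtime
```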
IndustrialClaw as an Implementation of the Ten Laws
The ten laws are an architecture specification. IndustrialClaw is designed as a production implementation of that specification.
Each law maps to a specific architectural component:
- Laws 1, 3, 7 → The governance layer: symbolic constraint engine, immutable audit trail, formal validation gate
- Laws 2, 6 → The knowledge and integration layer: OT-native connectors, domain-aware agent skills
- Laws 4, 5 → The separation and override architecture: SDA compatibility, human override at every HAS level
- Laws 8, 9 → The MAGS orchestration layer: progressive autonomy framework, multi-agent coordination
- Law 10 → The skill governance framework: version-pinned skills, hash verification, adversarial input protection
The ten laws do not describe what IndustrialClaw should eventually become. They describe what must be true from the first deployment in a safety-critical environment.
The Digital Twin Consortium Industrial AI Agent Manifesto is available at the Digital Twin Consortium website. Explore how IndustrialClaw implements the ten laws at Why IndustrialClaw, or contact us to discuss your deployment requirements.