You cannot delegate what you do not govern
Under the Corporations Act s180, directors must exercise care and diligence — including taking reasonable steps to be informed about material decisions. When AI makes thousands of material decisions daily, the duty of care extends to AI governance. Most boards have zero visibility into AI decision-making. That is not a technology gap. It is a fiduciary gap.
Australian directors owe four statutory duties under the Corporations Act 2001. These duties were codified when all material decisions were made by humans. AI does not create new duties — but it extends each existing duty into territory that most boards have not yet mapped.
| Duty | Requirement | AI implication |
|---|---|---|
| Care and diligence (s180) | Directors must take reasonable steps to be informed about the company's operations and to make informed decisions. | Extends to understanding AI systems making material decisions. 'Reasonable steps' includes understanding what the AI decides, on what basis, and with what constraints. |
| Good faith (s181) | Directors must act in good faith in the best interests of the corporation and for a proper purpose. | Deploying AI without governance is not acting in the best interests of the corporation. It creates unmonitored liability and unaccountable decision-making. |
| Use of position (s182) | Directors must not improperly use their position to gain advantage for themselves or others. | Allowing AI to make unsupervised decisions that benefit some stakeholders over others, without governance, may constitute improper use of position through omission. |
| Use of information (s183) | Directors must not improperly use information obtained through their position. | AI systems process vast quantities of organisational data. Directors have a duty to ensure AI's use of this information is governed, not just technically secured. |
The principle: AI does not create new duties. It extends existing duties into algorithmic decision-making. A director who would be liable for failing to oversee a human making material decisions is equally liable for failing to oversee an AI making the same decisions. The standard is not “did you prevent every bad AI output?” — it is “did you take reasonable steps to govern AI decision-making?”
AI decision volumes are growing exponentially while board awareness stays flat or declines. The gap between what AI decides and what boards know about is therefore not linear; it widens exponentially. Every year, AI makes more decisions and boards have visibility into a smaller share of them.
Figure: material AI decisions per day (log scale) versus the Board Awareness Index, the percentage of material AI decisions the board has visibility into. The governance gap widens exponentially as AI capability increases; the green line shows the coverage achievable with constraint-based governance.
The core liability problem
The duty of care requires directors to take “reasonable steps to be informed.” When AI makes 10,000 material decisions per day and the board is aware of zero of them, the question is not whether directors have breached their duty. The question is whether they can demonstrate they took reasonable steps to govern algorithmic decision-making. For most boards, the answer is no.
Most boards have zero visibility into AI decision-making. Not limited visibility. Not partial visibility. Zero. The AI systems making material decisions on behalf of the organisation are invisible to the people who bear fiduciary responsibility for those decisions.
| Decision domain | Examples | Board visibility |
|---|---|---|
| Customer-facing | Loan approvals, insurance pricing, product recommendations, support routing | Zero: managed by product teams, invisible to board |
| Internal operations | Hiring filters, performance scoring, resource allocation, risk prioritisation | Zero: managed by operations, reported only as aggregate metrics |
| Compliance and risk | Transaction monitoring, fraud detection, AML screening, regulatory reporting | Partial: board sees outputs but not AI decision criteria |
| Communications and content | Email filtering, content moderation, chatbot responses, document generation | Zero: fully automated, no governance trace |
The informed decision doctrine: Directors are not expected to review every operational decision. But they are expected to have governance systems that provide reasonable visibility into material decision-making. When AI makes material decisions and the board has no governance system to provide visibility, the informed decision doctrine is breached — not by any single bad decision, but by the structural absence of oversight.
Consider three scenarios where AI makes a decision the board did not know was being made. In each case, the question is not whether the AI was “right” or “wrong.” It is whether the board can demonstrate governance over the AI's decision-making process.
The pattern
In every scenario, the question is the same: can the board demonstrate that it had governance over the AI's decision-making? Not that it reviewed every decision. Not that it predicted the specific failure. But that it had a system in place — constraints, traces, escalations — that provided reasonable oversight of algorithmic action. The presence of governance architecture is evidence of due diligence. Its absence is evidence of breach.
Most organisations that attempt AI governance do so reactively: they audit AI outputs after the fact, review samples, and investigate complaints. This is equivalent to a board that only reviews financial statements after a fraud has been discovered. It is not governance. It is forensics.
| Dimension | Reactive Oversight | Pre-Governance |
|---|---|---|
| Timing | After the fact — review samples, investigate complaints | Before action — constraints defined, checked at moment of decision |
| Coverage | Samples only — 0.01% of AI decisions reviewed | 100% — every AI action checked against constraints |
| Speed | Weeks to months — audit cycles, quarterly reviews | Real-time — governance check happens before action executes |
| Evidence quality | Reconstructed from logs — incomplete, may be ambiguous | Contemporaneous governance trace — complete, auditable, permanent |
| Bias detection | Discovered after harm — complaints, audits, media exposure | Prevented by constraint — protected attribute proxies excluded from decision surface |
| Director liability | High — board became aware only after harm occurred | Low — board can demonstrate due diligence through governance architecture |
Pre-governance does not require directors to personally review every AI decision. It requires the existence of an architecture that constrains AI action, traces AI decisions, and escalates governance events — automatically, in real-time. The architecture produces the evidence. The directors set the constraints. The system connects the two.
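To make that architecture concrete, here is a minimal sketch of a pre-governance gate. It is a hypothetical illustration only: the names (Constraint, Trace, GovernanceGate, the CR-001 rule and its $50,000 threshold) are assumptions, not the Constellation implementation or any real API. The board sets constraints as data, every proposed AI action is checked before it executes, each check writes a contemporaneous trace, and violations escalate in real time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Hypothetical sketch only: illustrative names, not a real library or API.

@dataclass
class Constraint:
    """A board-set boundary on AI action: inspectable and auditable."""
    id: str
    description: str
    check: Callable[[dict], bool]  # True if the proposed action is within bounds

@dataclass
class Trace:
    """Contemporaneous governance record written at the moment of decision."""
    action: dict
    constraint_id: str
    passed: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class GovernanceGate:
    """Checks every proposed AI action against board-set constraints
    BEFORE it executes, producing a trace either way."""

    def __init__(self, constraints: list[Constraint]):
        self.constraints = constraints
        self.traces: list[Trace] = []  # in practice, an append-only audit store

    def authorise(self, action: dict) -> bool:
        for c in self.constraints:
            passed = c.check(action)
            self.traces.append(Trace(action, c.id, passed))  # evidence, always
            if not passed:
                self.escalate(action, c)  # governance event, raised in real time
                return False              # the action never reaches execution
        return True

    def escalate(self, action: dict, constraint: Constraint) -> None:
        # Placeholder: route to the accountable owner / board reporting line.
        print(f"ESCALATION [{constraint.id}]: {constraint.description}")

# Example board-set constraint (illustrative rule and threshold).
gate = GovernanceGate([
    Constraint(
        id="CR-001",
        description="Unsecured loan approvals above $50,000 require human review",
        check=lambda a: not (
            a.get("type") == "loan_approval"
            and a.get("unsecured")
            and a.get("amount", 0) > 50_000
        ),
    ),
])

proposed = {"type": "loan_approval", "unsecured": True, "amount": 120_000}
if gate.authorise(proposed):
    ...  # execute the action; a passing trace already exists
```

The design point is the ordering: the same check that authorises the action writes the trace and raises the escalation, so the evidence directors need exists the moment the decision does, rather than being reconstructed from logs afterwards.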
| Dimension | Pre-AI | Post-AI |
|---|---|---|
| Scope of oversight | Human decisions — board, management, employees | Human + algorithmic decisions — includes every AI system acting on behalf of the organisation |
| Decision speed | Decisions happen at human speed — days to weeks for material decisions | AI decisions happen at machine speed — thousands per second, each potentially material |
| Decision volume | Tens to hundreds of material decisions per quarter | Potentially millions of material AI decisions per quarter |
| Information basis | Directors can review the information that informed a decision | AI processes data at volumes no human can review — basis for decisions is opaque |
| Auditability | Meeting minutes, board papers, email trails — imperfect but available | Model weights, embedding spaces, prompt contexts — practically non-auditable without governance architecture |
| Delegation chain | Board → CEO → Management → Staff — clear accountability chain | Board → CEO → Management → AI — accountability breaks at the last link |
| Informed decision standard | Director must understand the human decision-making process | Director must understand the AI decision-making boundaries, constraints, and failure modes |
| Liability exposure | Limited to decisions the board knew or should have known about | Extends to AI decisions the board should have governed — ignorance is not a defence |
The shift: Every dimension of director duty is amplified by AI. The scope is wider, the speed is faster, the volume is higher, and the auditability is lower. Directors cannot meet the post-AI standard using pre-AI governance tools. Board papers and quarterly reviews cannot govern systems that make thousands of decisions per second. The tooling must match the challenge.
Your fiduciary duty extends to AI. If your organisation deploys AI systems that make material decisions — and most organisations now do — you have a duty to understand and govern those systems. “Management handles the AI” is not a defence under s180 any more than “management handles the finances” would be. The duty of care requires reasonable steps to be informed, which requires governance architecture that provides visibility into algorithmic decision-making.
The liability surface is real and growing. Every AI system making material decisions without governance creates potential s180 exposure for every director on the board. The risk is not that AI makes a bad decision — that is operational risk. The risk is that when AI makes a bad decision, the board cannot demonstrate it had governance in place. That is fiduciary risk. It attaches to directors personally.
AI governance is not a compliance checkbox. It is institutional infrastructure that produces the evidence directors need to demonstrate due diligence. Constraint-based governance — where AI operates within defined boundaries, decisions are traced, and violations are escalated — provides this evidence automatically. It does not slow AI down. It makes AI accountable.
The trajectory is clear. AI decisions are increasing exponentially. Regulatory scrutiny of AI governance is intensifying. The ASIC v Bekier three-question standard will be applied to AI decisions — it is a matter of when, not whether. Organisations that build governance telemetry now will have years of decision traces when the regulator comes asking. Those that do not will have nothing but model weights and good intentions.
Fiduciary Duty
The legal obligation of directors to act in the best interests of the corporation with care, diligence, good faith, and proper purpose. Under Australian law (Corporations Act 2001), fiduciary duties are codified in sections 180-184.
Duty of Care (s180)
Directors must exercise their powers with the degree of care and diligence that a reasonable person would exercise. In the AI era, this extends to understanding and governing AI systems that make material decisions on behalf of the organisation.
Informed Decision Doctrine
The principle that directors must take reasonable steps to inform themselves about matters before making decisions. Applied to AI governance, directors must understand the boundaries, constraints, and failure modes of AI decision-making systems — not just their outputs.
AI Governance Gap
The difference between the volume and materiality of AI decisions and the board's visibility into those decisions. In most organisations, this gap is near-total: AI makes thousands of material decisions daily while the board has zero visibility.
Pre-Governance
Defining the conditions and constraints for AI action before delegation occurs. Determines which options may exist on the decision surface, rather than filtering outputs after the fact. Produces the governance trace that directors need to demonstrate due diligence.
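A minimal sketch of that distinction, with hypothetical names (decision_surface and the example options are illustrative, not a real API): the permitted option set is computed before the model chooses, so a disallowed option never reaches the decision surface, instead of being filtered out of the outputs afterwards.

```python
# Hypothetical sketch: pre-governance computes the permitted option set
# first; a disallowed option never appears for the AI to choose.

def no_postcode_proxy(option: dict) -> bool:
    # Board-set rule: postcode is a common proxy for protected attributes.
    return "postcode" not in option["features"]

def decision_surface(options: list[dict], constraints: list) -> list[dict]:
    """Keep only the options that every board-set constraint permits."""
    return [o for o in options if all(check(o) for check in constraints)]

candidates = [
    {"name": "screen_v1", "features": ["experience", "skills"]},
    {"name": "screen_v2", "features": ["experience", "postcode"]},
]

governed = decision_surface(candidates, [no_postcode_proxy])
print([o["name"] for o in governed])  # ['screen_v1']: screen_v2 was never an option
```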
Constraint-Based Governance
A governance architecture where AI operates within explicitly defined boundaries (constraints). Each constraint is inspectable, auditable, and contestable. The Constellation model implements this through real-time governance checks at the moment of AI action.
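For illustration, a constraint in such an architecture can be stored as plain, human-readable data (these field names are assumptions, not an actual schema), which is what makes it inspectable, auditable, and contestable:

```python
# Illustrative only: field names are assumptions, not a real schema.
constraint_record = {
    "id": "HR-014",
    "owner": "Board Risk Committee",   # inspectable: who set it
    "description": "Candidate-screening models must not use postcode as a feature",
    "scope": "candidate-screening models",
    "effective_from": "2025-01-01",
    "review_due": "2026-01-01",        # contestable: scheduled re-examination
    "escalation_path": "Chief Risk Officer -> Audit Committee",  # auditable
}
```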