30% of Australian companies are reimagining business through AI, but only 12% are transforming at a significant level (vs 25% globally). 69% are using agentic AI. Yet governance readiness sits at just 30%, data management readiness at 40%, and only 20% report their talent is highly prepared. The 2026 Tech Leaders Survey confirms: 78% see AI as the defining trend, but only 7% believe Australia has the capability to meet future demand.
AI Governance in ANZ 2026
The Regulatory Landscape, Enterprise Readiness, and What Actually Matters
Only 30% of ANZ enterprises are governance-ready for AI. The gap between AI adoption (87% in NZ, 69% using agentic AI in AU) and governance maturity (21% with mature agentic models) is the defining enterprise risk of 2026. This analysis covers the regulatory landscape, board expectations, the ISO 42001 trajectory, industry-specific requirements, and what the Big 4 are selling versus what you actually need.
By Gregory McKenzie · Registered Patent Attorney & Systems Architect · NETEVO
The ANZ Regulatory Landscape: No AI Act, But No Free Pass
Australia has deliberately rejected standalone AI legislation. Unlike the EU AI Act, Australia's approach strengthens existing laws and empowers sector-specific regulators. The premise: AI risks like discrimination, privacy breaches, and safety failures are already addressed by current statutes, provided those statutes are updated and rigorously enforced. This sounds permissive. It isn't. The practical effect is that AI governance obligations are distributed across multiple regulators, each with enforcement powers, making the compliance landscape harder to navigate, not easier.
The Privacy Act 1988, amended in December 2024, now explicitly addresses automated decision-making. Organisations must disclose when personal information is used for substantially automated decisions affecting individual rights. The Australian Privacy Principles apply to both inputs and outputs of AI systems, and critically, AI-generated inferences about individuals are considered collected personal information under APP 3. This means using a model to infer a customer's creditworthiness or health status triggers the same obligations as directly collecting that data.
The National AI Centre replaced the ten guardrails of the 2024 Voluntary AI Safety Standard with six Essential Practices (AI6) in October 2025: accountability, impact assessment, risk management, transparency, testing and monitoring, and human control. These are voluntary for the private sector but mandatory for Commonwealth entities, and they set the benchmark for government procurement. If you sell to government, AI6 compliance is effectively non-negotiable.
The Australian Government's Policy for Responsible Use of AI in Government v2.0 became effective in December 2025, requiring accountable officials, transparency statements, and mandatory AI Impact Assessments by June 2026. The AI Safety Institute is becoming operational in early 2026 to assess risks from frontier models and coordinate regulatory insights. Meanwhile, in New Zealand, 85% of citizens want assurance about trustworthy AI use, and the NZ Privacy Commissioner is calling for strengthened legislation amid increasing data breaches.
Key Regulatory Instruments for ANZ Enterprises (March 2026)
- Privacy Act 1988 (amended Dec 2024) — automated decision-making disclosure, APPs apply to AI inputs and outputs
- AI6 Essential Practices — six guardrails replacing the Voluntary AI Safety Standard (VAISS), voluntary for the private sector, mandatory for Commonwealth entities
- APRA CPS 230 (July 2025) — operational risk management including AI vendor registers
- APRA CPS 234 — AI as critical technology stack, information asset classification
- Government AI Policy v2.0 (Dec 2025) — mandatory AI Impact Assessments by June 2026
- AI Safety Institute — operational early 2026 for frontier model risk assessment
- TGA — AI as medical device under Therapeutic Goods Act 1989 when applicable
The Readiness Gap: Adoption Outpacing Governance
Enterprise AI adoption data from Deloitte, Tech Council of Australia, and NewZealand.AI.
82-87% of NZ businesses use AI in some capacity, up from 48% in 2023. Large enterprises lead at 92%. Public sector AI use cases grew from 108 to 272 in one year. But only 34% of New Zealanders trust AI systems, 81% believe specific AI regulation is necessary, and 85% want assurance about trustworthy use. Adoption is running ahead of the trust infrastructure.
This is the most consequential finding: 75% of companies plan agentic AI deployments within two years, but only 21% have a mature governance model in place. Agentic AI, where autonomous agents perform complex workflows, requires real-time governance, not the narrative-based compliance reports most organisations still rely on. Policy-as-code, not policy-as-PDF.
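The distinction can be made concrete. Below is a minimal Python sketch of a policy-as-code gate for agent actions: the policy is data the runtime evaluates on every action, rather than a document someone reads. All names here (`AgentAction`, `POLICY`, the risk tiers) are illustrative assumptions, not any real framework.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    """A hypothetical record of an action an autonomous agent wants to take."""
    name: str
    risk_tier: str          # "low" | "medium" | "high" (illustrative tiers)
    human_approved: bool = False

# The policy lives as data, versionable and testable like any other code.
POLICY = {
    "low": "allow",
    "medium": "allow_with_logging",
    "high": "require_approval",
}

def evaluate(action: AgentAction) -> str:
    """Return the policy decision for an agent action, enforced at runtime."""
    rule = POLICY.get(action.risk_tier, "deny")   # unknown tiers default to deny
    if rule == "require_approval" and not action.human_approved:
        return "blocked"
    return "allowed"
```

The point of the sketch is that the decision happens in the execution path, so every agent action is checked and every check can be logged as evidence, which a PDF policy cannot do.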
ISO/IEC 42001 is the world's first certifiable AI management system standard, but the certification market is constrained. Only 10-15 accredited auditors exist in ANZ, and only 8-10 consultants combine genuine AI governance expertise with ISO implementation experience. First-year costs: $73K (small, 30 staff), $185K (mid-sized, 120 staff), $353K+ (large, 500+ staff). Early certifications: KPMG International (December 2025), CrowdStrike (January 2026), Darktrace (early 2026). The standard is right; the implementation supply is not yet ready.
The AICD has articulated five governance domains for boards: oversight (demanding AI risk metrics, not just policies), boardroom wisdom (using AI to challenge assumptions), strategy (asking 'if we built this organisation today as AI-enabled, what would we build?'), ESG (leading mixed human-agent teams), and resilience (understanding inference housing and jurisdictional data exposure). Work on a 5th edition of the ASX Corporate Governance Principles was paused in 2025 pending a broader review, with AI governance updates expected.
APRA's CPS 230 (Operational Risk Management, effective July 2025) requires end-to-end critical operations mapping including AI vendors. Material service provider registers were due October 2025. CPS 234 (Information Security) treats AI as critical technology stack. A 2025 tripartite assessment found many organisations still struggle with AI information asset classification. The projected $48.9B GDP impact from AI in finance by 2035 is contingent on regulatory certainty that doesn't yet exist.
What the Big 4 Are Selling — And What You Actually Need
Comparative analysis of enterprise AI governance approaches in ANZ.
From Research to Practice
NSW Department of Industry
90%+ Audit preparation time eliminated
Governed platform delivery for a state government agency. Policy-as-code enforcement, automated compliance evidence, and audit-ready infrastructure. Demonstrates the governance principles described in this analysis applied to production systems.
Read Case Study →
RISKflo at HSBC
99%+ Uptime over 24+ months
Event-sourced risk platform serving 1,100+ daily users at HSBC. Governance was architectural: immutable audit trails, policy-as-code enforcement, and Zero Trust access controls. The governance approach that CPS 230 and CPS 234 now demand, implemented before the regulations required it.
Read Case Study →
Assessing Your AI Governance Readiness
From awareness to board-defensible governance.
Regulatory Mapping
Week 1-2
- Map applicable regulations to your AI use cases (APRA, Privacy Act, AI6, sector-specific)
- Identify material service providers and AI vendor dependencies
- Assess current governance against AICD five-domain framework
- Review board reporting capabilities and gap analysis
Readiness Assessment
Week 2-3
- Score organisational readiness across Deloitte's four dimensions (governance, strategy, data, talent)
- Benchmark against ANZ enterprise peers
- Audit existing AI experiments and shadow AI usage
- Assess agentic AI governance maturity (21% benchmark)
Framework Design
Week 3-6
- Design AI governance policy architecture (policy-as-code, not policy-as-PDF)
- Map to ISO 42001 requirements where certification is a goal
- Define decision rights matrix and accountability chains
- Design AI risk classification schema and board reporting framework
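To illustrate the last two items in this phase, here is a minimal risk classification schema in Python. The tiers, classification inputs, and control lists are hypothetical placeholders; a real schema would map each tier to your specific regulatory obligations and the AI6 practices.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative three-tier schema; real schemas are organisation-specific."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

# Each tier carries a cumulative set of required controls (hypothetical names).
CONTROLS = {
    RiskTier.MINIMAL: ["usage logging"],
    RiskTier.LIMITED: ["usage logging", "impact assessment"],
    RiskTier.HIGH: ["usage logging", "impact assessment",
                    "human oversight", "board reporting"],
}

def classify(affects_individual_rights: bool, autonomous: bool) -> RiskTier:
    """Toy classification rule: systems that affect individual rights or act
    autonomously score higher; systems that do both score highest."""
    if affects_individual_rights and autonomous:
        return RiskTier.HIGH
    if affects_individual_rights or autonomous:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Encoding the schema this way means the classification itself is testable and versioned, and the board reporting framework can query which systems sit in which tier rather than relying on a manually maintained register.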
Implementation & Enablement
Week 6-24
- Deploy policy-as-code for AI governance (automated enforcement, not checklists)
- Implement AI risk monitoring and evidence capture
- Deliver workforce AI fluency program (foundation, practitioner, builder tiers)
- Establish board reporting cadence and regulatory monitoring
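Evidence capture of the kind listed above can be sketched as a hash-chained, append-only log: each record embeds the hash of its predecessor, so later tampering is detectable. This is a minimal stand-in for an immutable audit trail, and the `EvidenceLog` class and its fields are illustrative assumptions, not a production design.

```python
import hashlib
import json
import time

class EvidenceLog:
    """Append-only compliance evidence log. Each record stores the hash of the
    previous record, so modifying any past entry breaks the chain on verify()."""

    GENESIS = "0" * 64  # sentinel "previous hash" for the first record

    def __init__(self) -> None:
        self.records: list[tuple[str, dict]] = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> str:
        """Record a governance event (e.g. a policy decision) and return its hash."""
        record = {"ts": time.time(), "event": event, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append((digest, record))
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash and check the chain links; False means tampering."""
        prev = self.GENESIS
        for digest, record in self.records:
            if record["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True
```

Because evidence is captured automatically at the point of enforcement, audit preparation becomes a query over the log rather than a document-gathering exercise.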
Questions
AI Governance in ANZ: FAQ
What AI regulations apply to Australian businesses in 2026?
Australia has rejected standalone AI legislation, instead strengthening existing laws via sector-specific regulators. Key instruments: (1) Privacy Act 1988 (amended December 2024) — requires disclosure for substantially automated decisions affecting individual rights. (2) AI6 Essential Practices — six voluntary guardrails from the National AI Centre (October 2025), effectively the benchmark for government procurement. (3) APRA CPS 230 (July 2025) — requires end-to-end understanding of critical operations including AI vendors as material service providers. (4) APRA CPS 234 — treats AI as critical technology stack. (5) Government AI Policy v2.0 — mandatory for Commonwealth entities with AI Impact Assessments required by June 2026.
What is ISO 42001 and how many Australian organisations are certified?
ISO/IEC 42001:2023 is the world's first certifiable Artificial Intelligence Management System (AIMS) standard, adopted in Australia as AS ISO/IEC 42001:2023. It provides a structured framework for organisations to demonstrate AI governance maturity through formal certification. As of early 2026, the certification market in Australia is described as immature but accelerating. There are approximately 10-15 accredited auditors in ANZ who can certify to this standard, and only 8-10 consultants in Australia with genuine AI governance experience integrated with ISO implementation backgrounds. This creates a significant bottleneck. Early adopters include KPMG International (first Big Four entity certified, December 2025), Behavox (RegTech, early 2026), CrowdStrike (Falcon platform, January 2026), and Darktrace (11-month certification process with BSI). Costs vary: approximately $73,000 AUD for a small AI user (30 employees), $185,000 for mid-sized AI developers (120 employees), and $353,000+ for large enterprises (500+ employees) in the first year, plus $8,000-$20,000 annually for surveillance audits.
What does the AICD expect from boards regarding AI governance?
The Australian Institute of Company Directors has articulated board AI governance expectations across five domains in early 2026: (1) Oversight — moving beyond high-level policies to demanding AI risk metrics and qualitative reporting. Boards must shift from passive awareness to active oversight. (2) Boardroom Wisdom — using AI to challenge director thinking and conduct red-team/blue-team assumptions about strategy. (3) Strategy — asking crucial questions about business model reinvention. The AICD poses: 'If this organisation were designed today as AI-enabled, what would we build?' (4) ESG — reframing organisational culture to lead mixed teams of humans and autonomous agents. (5) Resilience — understanding inference housing, the jurisdiction where AI models process data, and its exposure to foreign legal regimes. The ASX Corporate Governance Principles (4th Edition, 2019) remain in effect. Work on a 5th edition was paused in 2025 for a broader review, with the ASX now assuming primary responsibility supported by an Advisory Group chaired by Dr Philip Lowe.
How prepared are ANZ enterprises for AI governance in 2026?
The data reveals a significant execution gap. According to Deloitte's State of AI in the Enterprise 2026 report: Governance readiness is only 30%. Data management readiness is 40%. Strategic readiness is 42%. Talent readiness has decreased, with only 20% of organisations reporting their people are highly prepared. Only 12% of Australian companies are transforming at a significant level through AI, compared to 25% globally. For agentic AI specifically, only 21% of companies have a mature governance model despite nearly 75% planning deployments within two years. The Tech Council of Australia and Datacom 2026 survey reports that only 7% of tech leaders believe Australia has the capability and infrastructure to meet future AI demand. In New Zealand, 82-87% of businesses use AI in some capacity but only 34% of citizens trust AI systems, and 85% want assurance about trustworthy use before increasing trust.
What AI governance requirements exist for Australian financial services?
APRA has made AI risk core to financial stability, not an optional extra. CPS 230 (Operational Risk Management), effective July 2025, requires APRA-regulated entities to have end-to-end understanding of critical operations and material service providers including AI vendors. Material service provider registers must be submitted to APRA. CPS 234 (Information Security) expects entities to treat AI as part of the critical technology stack. A 2025 tripartite assessment identified that many organisations still struggle with incomplete identification and classification of information assets in AI systems. For the broader financial sector, the economic impact of Generative AI is projected at $48.9 billion to Australia's GDP by 2035, but this is contingent on regulatory certainty. Organisations report that immature regulatory settings make business leaders uncertain about safe AI deployment. KPMG's 2026 surveys show that AI implementation has become the number one challenge for Australian business leaders, surpassing inflation.
How do the Big 4 consulting firms approach AI governance in ANZ?
The Big 4 have moved beyond advisory into platform-based governance. KPMG: Trust-First philosophy with ISO 42001 certification (December 2025). PwC: Scale-First with Agent OS (25,000 agents globally), Three Lines of Defence model, 30-40% faster innovation cycles. EY: $1.4 billion EY.ai platform on IBM watsonx with 24+ AI tools in Australian audit. Deloitte: Sovereign AI focus emphasising governance under local laws. The key difference: Big 4 sell governance within broader transformation at $500K-$3M+. Specialist firms deliver focused governance frameworks at $30K-$250K with faster implementation and direct practitioner access.
Specialist vs Big 4 AI Governance
Governance is architectural, not procedural
Policy-as-code means compliance is automated and auditable, not a manual checklist. The same approach that achieved 99%+ uptime at HSBC and 90%+ audit time savings at NSW DOI.
Patent attorney rigour applied to AI governance
Regulatory requirements translated into executable controls with the precision of patent claims. Defensible under board scrutiny, auditor examination, and regulatory review.
ANZ regulatory expertise, not imported frameworks
APRA CPS 230, Privacy Act amendments, AI6 Essential Practices, AICD board expectations — governance designed for the Australian regulatory landscape, not adapted from US or EU templates.
From Understanding to Implementation
Board-defensible AI governance framework and implementation.
This research page explains the landscape. The AI Governance service delivers the framework, policy-as-code deployment, and workforce enablement.
Learn more →
Policy-as-code for your software delivery lifecycle
AI governance requires governed infrastructure. If your SDLC isn't already policy-as-code, start here — it provides the foundation that AI governance builds on.
Learn more →
Governed agent architecture for enterprise AI
With 75% of companies planning agentic AI within two years but only 21% governance-ready, the agent infrastructure engagement bridges the gap between AI ambition and governed operations.
Learn more →
Sources
- Privacy Act 1988 — Australian Government, Federal Register of Legislation. Amended December 2024. Includes automated decision-making disclosure requirements and application of the Australian Privacy Principles to AI system inputs and outputs. https://www.legislation.gov.au/C2004A03712/latest/text
- Prudential Standard CPS 230 — Operational Risk Management — Australian Prudential Regulation Authority (APRA). Effective July 2025. Requires end-to-end understanding of critical operations and material service provider registers for APRA-regulated entities. https://www.apra.gov.au/operational-risk-management
- Prudential Standard CPS 234 — Information Security — Australian Prudential Regulation Authority (APRA). Treats AI as part of the critical technology stack. The 2025 tripartite assessment identified gaps in AI information asset classification. https://www.apra.gov.au/information-security
- ISO/IEC 42001:2023 — Artificial Intelligence Management System — International Organization for Standardization (ISO). The world's first certifiable AI management system standard. Adopted in Australia as AS ISO/IEC 42001:2023. https://www.iso.org/standard/81230.html
- Policy for the Responsible Use of AI in Government 2.0 — Digital Transformation Agency (DTA). Mandatory for Commonwealth entities from December 2025. Requires AI Impact Assessments by June 2026. https://www.digital.gov.au/ai/ai-in-government-policy
- Guidance for AI Adoption (AI6 Essential Practices) — Department of Industry, Science and Resources, National AI Centre. Evolved from the Voluntary AI Safety Standard. Six essential practices: accountability, impact assessment, risk management, transparency, testing and monitoring, and human control. https://www.industry.gov.au/publications/guidance-for-ai-adoption
- Australian Institute of Company Directors (AICD). Board AI governance expectations across five domains: oversight, boardroom wisdom, strategy, ESG, and resilience. https://www.aicd.com.au
- Deloitte — State of AI in the Enterprise 2026. Source for governance readiness (30%), agentic AI maturity (21%), talent readiness (20%), and the 12% significant-transformation figure for Australian enterprises. https://www.deloitte.com/au/en/Industries/technology/perspectives/state-of-ai.html
- KPMG — Trusted AI Framework. First Big 4 entity to achieve ISO 42001 certification (December 2025). Trust-first governance philosophy with Microsoft and IBM alliances. https://kpmg.com/au/en/home/insights/2024/07/trusted-ai-framework.html
- PwC — Responsible AI — PwC Australia. Agent OS platform with 25,000 agents deployed globally by 2026. Three Lines of Defence model and 30-40% faster innovation cycles with embedded governance. https://www.pwc.com.au/artificial-intelligence.html
- EY — Board Oversight of Artificial Intelligence. $1.4 billion EY.ai platform on IBM watsonx. 24+ AI tools deployed in Australian audit and assurance. https://www.ey.com/en_au/ai
Where Does Your Organisation Stand?
With only 30% of ANZ enterprises governance-ready and mandatory government AI Impact Assessments arriving in June 2026, the gap between AI adoption and governance maturity is closing fast. A readiness assessment gives you the board-ready data to act.