When AI Creates Risk — or Governance Has Already Failed — Boards Need Independent Assurance
AI Risk & Assurance provides independent investigation and structured assurance when AI use has created — or risks creating — operational, legal, or reputational harm.
When AI Governance Fails, Boards Need an Independent View
AI-related incidents rarely result from bad intent. They result from adoption outpacing oversight.
Tools are deployed without controls. Decisions are influenced by AI outputs without defined accountability. Governance frameworks fail to keep pace with how the technology is actually being used on the ground. Staff rely on AI systems they don’t fully understand to produce outputs that feed into consequential decisions.
When something goes wrong — or when a board becomes aware that it could — the instinct is often to manage it internally. The problem with internal review is that it cannot produce the independence a board needs. It cannot objectively assess whether leadership itself bears accountability for the governance failure. And it will not satisfy a regulator, an auditor, or a counterparty seeking evidence that the matter has been properly investigated.
Ethos Advisory provides independent AI risk and assurance for New Zealand boards. We conduct structured, evidence-based investigations and governance reviews that give directors a clear, objective picture of what happened, why, and what needs to change.
The Landscape New Zealand Boards Are Navigating
AI governance failures in New Zealand carry consequences that are becoming harder to contain.
The Privacy Act 2020 imposes obligations on organisations regarding automated decision-making, data handling, and the use of personal information in AI systems. The Office of the Privacy Commissioner has signalled active attention to AI-related privacy risks. Organisations that cannot demonstrate they have identified and managed these risks face increasing regulatory exposure.
Beyond privacy, AI-related incidents are generating reputational harm, employment disputes, procurement failures, and service delivery breakdowns across New Zealand’s public and private sectors. Boards that cannot show they exercised active oversight — not just approved a policy — face questions about whether their governance obligations were met.
Independent AI risk and assurance provides the evidence base your board needs to demonstrate it has taken the matter seriously and responded appropriately.
What AI Risk & Assurance Involves
Ethos Advisory provides independent assurance across five areas:
Incident Investigation
Structured, independent investigation of AI-related incidents, governance failures, or concerns raised about how AI is being used within the organisation. We establish what occurred, what controls failed, and what the contributing factors were — producing findings your board can rely on and act on.
AI Usage Assessment
Identification of where AI tools and systems are in use across the organisation, what decisions they are influencing, and whether appropriate controls, oversight, and accountability exist at each point of use. This frequently reveals AI use that was not visible to leadership — including shadow AI deployments at team level.
Governance and Controls Review
Assessment of existing governance structures, policies, oversight mechanisms, and accountability frameworks relating to AI use. We evaluate these against ISO/IEC 42001, NIST AI RMF, and applicable New Zealand regulatory requirements — including the Privacy Act 2020 — to identify specific gaps and their consequences.
Risk Exposure Analysis
Independent identification of operational, legal, ethical, and reputational risks associated with current AI practices or specific incidents. We assess materiality and likely consequence, and give the board a clear picture of the organisation's exposure if the matter is not addressed.
Corrective Recommendations
Practical, prioritised recommendations to strengthen governance, controls, and oversight — with clear accountability for implementation. Recommendations are written for board and executive consumption, not for technical teams.
What Your Board Receives
Following independent investigation and review, your board will have:
- An independent investigation report outlining findings, root causes, and contributing factors
- Clear identification of governance and control weaknesses
- An assessment of risk exposure under the Privacy Act 2020 and other applicable frameworks
- Practical, prioritised recommendations to strengthen AI governance
- An executive and board-level briefing on risks, findings, and proposed next steps
- Guidance on ongoing oversight and monitoring to prevent recurrence
The report is designed to withstand scrutiny from regulators, auditors, and legal counsel — and to give your board the foundation it needs to demonstrate it has discharged its governance obligations.
Who This Service Is For
AI Risk & Assurance is appropriate for New Zealand organisations where one or more of the following applies:
- Concerns have been raised — internally or externally — about how AI tools are being used
- An AI-related incident has created operational, legal, or reputational risk for the organisation
- AI systems have been deployed at scale without appropriate governance, controls, or board oversight
- The board requires independent assurance that AI risks are being properly identified and managed
- Leadership needs an objective view following a governance failure, before the matter escalates
- A regulator, auditor, or counterparty has raised questions about the organisation’s AI governance
Why Independence Matters
Internal reviews cannot produce what a board needs when AI governance has failed. They are conducted by people whose own decisions may be under scrutiny, and they cannot provide the objectivity that external accountability requires.
Ethos Advisory is independent of your organisation, your technology vendors, and your implementation partners. We do not sell AI tools or platforms. We have no interest in the outcome of an investigation other than producing accurate findings and sound recommendations. That independence is what gives our reports their credibility with boards, regulators, and legal counsel.