93% of Businesses Expect AI to Deliver. 73% Have No Governance to Back It Up.

Published: January 31, 2026
Last updated: April 8, 2026

There is a number that should stop every board meeting cold. Ninety-three percent of businesses expect a 173% return on investment from AI. That is not a rounding error or an outlier projection. It is the expectation sitting inside nearly every organisation making AI investment decisions right now — and it was one of the sharpest statistics to emerge from the IDC and Lenovo AI Leadership Event in Sydney, 2026.

The number that should follow it immediately — and rarely does — is this one: 73% of those same organisations do not believe they have comprehensive AI governance in place to protect that investment.

Those two statistics do not live comfortably beside each other. An organisation expecting 173% ROI from a capability it cannot govern is not building a competitive advantage. It is building a liability that has not yet been discovered.

This blog draws directly from the insights shared at that event — the conversations in the room, the questions nobody wanted to ask out loud, and the patterns that Storata recognises from working inside the environments where AI governance is the difference between an investment that compounds and an incident that compounds instead.

 

73%

of organisations do not believe they have comprehensive AI governance in place — yet 93% expect significant ROI from AI investment.

— IDC & Lenovo AI Leadership Event, Sydney 2026

Shadow AI Is Not Coming. It Already Arrived.

The conversation at the event that cut through everything else was about Shadow AI — and it was not the conversation most people expected. Shadow AI is not a future risk to plan for. It is a present reality to manage. And unlike Shadow IT, which was about people storing files in the wrong place or using the wrong messaging app, Shadow AI is about people making decisions with tools that nobody in the organisation has reviewed, governed, or even acknowledged exist.

The insight that landed hardest in the room: this behaviour is not age-related, it is not about technical literacy, and it is not going to be solved by policy. It is formed behaviour. The knowledge worker who pastes a client brief into an external AI tool to get a faster answer has made a rational decision in the context of their working day. The meeting is in twenty minutes. The answer arrives in twenty seconds. The governance framework that was supposed to prevent that does not exist in a form they have ever encountered.

What this means practically is that banning Shadow AI — the instinct of most IT and security teams — does not work. It has never worked for any technology that offers a genuine productivity benefit. People push back. They find workarounds. They do it on personal devices and personal accounts, which creates risks that are materially worse than the governed alternative. The answer the event converged on, and the answer Storata's own client experience confirms, is the same: build a governed path that is faster, safer, and better than the ungoverned one. Give people the productivity they are looking for inside an environment where the data, the access, and the audit trail are controlled.

 

You cannot stop people doing their jobs. You can make the safe way the fast way. That is the only AI governance strategy that actually works.

 

The organisations getting this right are not the ones with the strictest AI policies. They are the ones who recognised that the policy conversation was already lost — and shifted their energy to building the governed environment that makes Shadow AI unnecessary rather than merely prohibited.

The Eight Questions Your Organisation Should Be Asking Right Now

One of the most useful outputs of the event was a set of diagnostic questions — the kind of deep, uncomfortable questions that surface the real state of AI governance rather than the assumed state. These are not compliance checkbox questions. They are the questions that, when asked honestly, reveal the gap between what leadership believes is happening with AI and what is actually happening.

Storata has added its own context to each one based on what we see in client environments — because knowing the question is not enough. You need to know what a good answer looks like.

 

1. Who within your organisation is responsible for AI governance?

Not who should be responsible — who actually is, right now, with a defined remit and accountability. In most organisations the honest answer is that nobody holds the role formally. The CIO thinks it is the CISO. The CISO thinks it is the data governance team. The data governance team thinks it is part of the broader privacy and compliance function. When responsibility is distributed without a clear owner, governance happens in the gaps — which means it does not happen at all.

 

2. What AI tools have actually been rolled out across your organisation — including the ones IT did not approve?

This question requires an honest inventory, not a list of sanctioned tools. The answer will include Microsoft Copilot, personal ChatGPT subscriptions, browser-based AI extensions, AI features embedded in SaaS platforms the business already uses, and tools that individual departments have procured independently. The gap between the sanctioned list and the actual list is the Shadow AI footprint — and in most organisations it is significantly larger than leadership believes.
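To make that inventory tangible, here is a minimal sketch of one discovery approach, assuming web proxy logs can be exported as CSV with user and domain columns. The domain list, the sanctioned list, and the file name are all illustrative starting points, not a complete catalogue.

```python
"""Minimal sketch: estimate the Shadow AI footprint from proxy logs.

Assumptions (illustrative, not prescriptive): proxy logs export as CSV
with 'user' and 'domain' columns; the AI domain list is a seed list you
would extend for your own environment.
"""
import csv
from collections import defaultdict

# Known AI endpoints to flag; extend with the tools relevant to you.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

# Tools the organisation has actually sanctioned.
SANCTIONED = {"copilot.microsoft.com"}

def shadow_ai_footprint(log_path: str) -> dict[str, set[str]]:
    """Return {domain: set of users} for unsanctioned AI traffic."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in AI_DOMAINS and domain not in SANCTIONED:
                hits[domain].add(row["user"])
    return hits

if __name__ == "__main__":
    # "proxy_export.csv" is a hypothetical file name.
    for domain, users in shadow_ai_footprint("proxy_export.csv").items():
        print(f"{domain}: {len(users)} distinct users")
```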

 

3. What are your in-house AI policies — and has anyone in the business actually read them?

Most organisations that have an AI policy at all hold it as a document written by legal or IT, circulated once, and filed. The meaningful question is not whether the policy exists but whether it is embedded in how decisions are made day to day. A policy that lives in a SharePoint folder is not a governance control. It is a document that will be cited after an incident to demonstrate that someone wrote it down.

 

4. What legacy systems are in your environment — and how does AI interact with them?

This is the question most AI governance frameworks miss entirely. Legacy systems frequently have weaker access controls, poorer data classification, and less robust audit logging than modern platforms. When AI is given access to data that spans both modern and legacy environments — which is common in any organisation with history — the governance posture of the weakest system determines the governance posture of the whole. AI does not discriminate between well-governed and poorly governed data sources.
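A small sketch makes the weakest-link point concrete. The systems and scores below are hypothetical; the arithmetic is the argument.

```python
# Illustrative only: scores and system names are hypothetical.
# The point: an AI assistant's effective governance posture is capped
# by the weakest data source it can reach.
GOVERNANCE_SCORES = {          # 0 (none) .. 5 (strong) per data source
    "sharepoint_online": 5,    # modern classification + audit logging
    "erp_2009": 2,             # coarse access control, partial logs
    "file_share_legacy": 1,    # no classification, no audit trail
}

def effective_posture(connected_sources: list[str]) -> int:
    """The AI surface inherits the minimum score across its sources."""
    return min(GOVERNANCE_SCORES[s] for s in connected_sources)

print(effective_posture(["sharepoint_online", "erp_2009", "file_share_legacy"]))  # -> 1
```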

 

5. What does your compliance posture look like specifically in relation to AI?

Not compliance in general — compliance in the context of AI. Are your data classification policies comprehensive enough that AI tools understand what they should and should not surface? Are your data loss prevention controls configured to apply to AI interactions, not just email and file transfers? Have you mapped your AI usage against the Australian Government's Voluntary AI Safety Standard guardrails? Are you positioned for ISO 42001 alignment? These are the specific compliance questions that most organisations have not yet answered.
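As one illustration of what "DLP that applies to AI interactions" can mean in practice, here is a minimal pre-prompt screening sketch. The patterns are placeholders, and the internal matter-reference format is invented; a real deployment would enforce this through existing DLP tooling such as Microsoft Purview, extended to cover AI traffic.

```python
"""Minimal sketch of a pre-prompt DLP gate for AI interactions.

All patterns are illustrative placeholders, not production rules.
"""
import re

BLOCK_PATTERNS = {
    "tax_file_number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "client_matter_ref": re.compile(r"\bCM-\d{6}\b"),  # hypothetical internal ID
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any patterns the prompt would trip."""
    return [name for name, rx in BLOCK_PATTERNS.items() if rx.search(prompt)]

violations = screen_prompt("Summarise client CM-104522's dispute, TFN 123 456 789")
if violations:
    print("Blocked before it reaches the model:", violations)
```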

 

6. How do you measure AI implementation — what does your baseline look like?

A 173% ROI expectation requires a baseline to measure against. Without one, the ROI claim is aspiration rather than projection. The baseline question covers productivity metrics before and after deployment, data exposure risk before and after governance controls, compliance posture scored against a framework, and the cost of incidents prevented versus the cost of the governance investment. Organisations that cannot answer this question are deploying AI on faith rather than measurement.
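The arithmetic below is a deliberately simplified sketch of that baseline logic. Every figure is a placeholder; the point is that the ROI number only exists because each input was measured before and after deployment.

```python
# Illustrative arithmetic only: every figure below is a placeholder.
baseline = {
    "hours_per_task": 3.0,         # measured before deployment
    "incidents_per_year": 4,       # data-exposure incidents, pre-governance
    "cost_per_incident": 120_000,  # AUD, from incident response history
}
post = {
    "hours_per_task": 1.8,
    "incidents_per_year": 1,
    "tasks_per_year": 5_000,
    "hourly_cost": 95,             # AUD, fully loaded
}
ai_investment = 450_000            # licences + governance layer, AUD

productivity_gain = (
    (baseline["hours_per_task"] - post["hours_per_task"])
    * post["tasks_per_year"] * post["hourly_cost"]
)
risk_reduction = (
    (baseline["incidents_per_year"] - post["incidents_per_year"])
    * baseline["cost_per_incident"]
)

roi = (productivity_gain + risk_reduction - ai_investment) / ai_investment
print(f"Measured ROI: {roi:.0%}")  # only meaningful because a baseline exists
```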

 

7. How are you managing multiple AI platforms, agents, and models simultaneously?

This was described at the event as a real problem — and it is understated in most AI governance frameworks. The average enterprise now uses multiple AI platforms across different functions: Microsoft Copilot for productivity, Salesforce Einstein for CRM, bespoke models for analytics, third-party agents for specific workflows. Each has its own data access model, its own privacy implications, and its own governance requirements. The interaction between them — particularly as agentic AI becomes more prevalent — creates governance complexity that no single policy document addresses.
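One way to start addressing that fragmentation is a common audit schema with one adapter per platform. The sketch below assumes hypothetical field names for a Copilot-style export; each platform's real format differs, which is exactly the problem being illustrated.

```python
"""Sketch: normalise AI audit events from multiple platforms into one schema.

Field names and platform labels are assumptions for illustration only.
"""
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIAuditEvent:
    timestamp: datetime
    platform: str                  # e.g. "copilot", "einstein", "custom_agent"
    user: str
    action: str                    # e.g. "prompt", "retrieval", "agent_step"
    data_sources: tuple[str, ...]  # systems the interaction touched

def from_copilot(raw: dict) -> AIAuditEvent:
    """One adapter per platform; this field mapping is hypothetical."""
    return AIAuditEvent(
        timestamp=datetime.fromisoformat(raw["CreationTime"]),
        platform="copilot",
        user=raw["UserId"],
        action="prompt",
        data_sources=tuple(raw.get("AccessedResources", [])),
    )

# A single chronological trail across platforms becomes possible only
# once every source is mapped into the common schema.
events = [from_copilot({"CreationTime": "2026-03-02T09:14:00+00:00",
                        "UserId": "a.lee@example.com",
                        "AccessedResources": ["sharepoint", "erp_2009"]})]
events.sort(key=lambda e: e.timestamp)
```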

 

8. Who are your AI power users — and how are access and usage governed for them?

Every organisation has people who push AI further and faster than the average user. These individuals are simultaneously the most valuable and the most exposed users in the environment. They are valuable because they surface the productivity potential of AI faster than anyone else. They are exposed because their usage patterns are more likely to encounter the edges of what governance controls have been designed for. Identifying them, understanding their usage, and designing governance that contains the risk without eliminating the value is one of the most important and least discussed aspects of AI governance.
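Identification can start from nothing more than a usage export. The sketch below flags the top five percent of users by prompt volume; the threshold and the data are assumptions to adapt.

```python
# Sketch: surface AI power users from a usage export. The threshold
# (95th percentile by prompt volume) and the records are illustrative.
from statistics import quantiles

usage = {  # user -> prompts in the last 30 days (made-up data)
    "a.lee": 840, "b.ng": 35, "c.odea": 1210, "d.silva": 60, "e.khan": 22,
}

cutoff = quantiles(usage.values(), n=20)[-1]   # 95th percentile
power_users = sorted((u for u, n in usage.items() if n >= cutoff),
                     key=usage.get, reverse=True)
print("Review access scope and audit coverage for:", power_users)
```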

 

Storata's clients do not just operate more securely — they operate with the kind of confidence that comes from knowing exactly where they stand, at any point, in front of any audience.

Expertise is not claimed. It is demonstrated — in the environments we have governed, the incidents we have prevented, the boards we have prepared, and the standards we hold ourselves to before we ask anything of our clients.

That is what twenty years in regulated, high-stakes industries looks like.

That is Storata.

 

The Governance Reality Most CISOs Are Not Across

One of the more candid moments at the event was the acknowledgement that most CISOs are not fully across the AI governance problem in their own organisations. This is not a criticism — it reflects the speed at which AI has moved from pilot to pervasive in the enterprise. The CISO who was managing endpoint security, identity governance, and cloud security posture twelve months ago is now also expected to govern a rapidly expanding AI surface across productivity tools, business applications, and increasingly autonomous agents.

The specific governance challenges the event identified — and that Storata encounters consistently in client environments — are these:

 

Securing access control in AI platforms is genuinely difficult. The permission models of AI tools do not map cleanly onto existing identity governance frameworks. Conditional access policies designed for applications do not automatically extend to AI interactions with those applications.

Setting and enforcing policies for AI model use is a new capability most teams do not have. Who decides which models can be used for what purposes? Who reviews that decision as models evolve? Who is accountable when a model produces an output that creates legal or reputational risk? A minimal sketch of what an enforceable version of such a policy looks like follows this list.

Multi-platform AI governance requires integration across tools that were not designed to talk to each other. The audit trail for an AI interaction that spans Microsoft Copilot, a third-party agent, and a legacy line-of-business system is fragmented across platforms with different logging standards and retention policies.

The speed of AI adoption has outpaced the speed of governance framework development. ISO 42001 was published in December 2023 and adopted in Australia in February 2024. Most organisations have not yet mapped their AI usage against it. Those in regulated industries — financial services, healthcare, legal — are already behind.
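Here is the sketch referenced above: a model-use policy expressed as a deny-by-default control rather than a document. The model names, purposes, and review dates are illustrative.

```python
# Sketch of a model-use policy as an enforceable control rather than a
# document. Models, purposes, and the approval table are illustrative.
ALLOWED = {
    ("gpt-4o", "drafting"):        "approved 2026-02, review 2026-08",
    ("gpt-4o", "code_assist"):     "approved 2026-02, review 2026-08",
    ("internal-llm", "analytics"): "approved 2026-01, review 2026-07",
}

def check_model_use(model: str, purpose: str) -> str:
    """Deny by default; every approval carries a named review date."""
    try:
        return ALLOWED[(model, purpose)]
    except KeyError:
        raise PermissionError(f"{model} not approved for {purpose}")

print(check_model_use("gpt-4o", "drafting"))
```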

 

 

Governance is not the enemy of AI adoption. It is the condition under which AI adoption becomes sustainable. Without it, the 173% ROI expectation becomes a 173% risk exposure.

— Storata

 

The Infrastructure Dimension Nobody Is Talking About Loudly Enough

The event surfaced a point that Storata believes is significantly underweighted in most AI governance conversations: the infrastructure cost and consumption implications of AI at scale. As the cost of cloud AI services increases, a growing number of organisations — particularly in Europe — are evaluating GPU-as-a-service models and on-premises AI infrastructure as alternatives to pure cloud consumption. Nvidia-based server infrastructure is becoming a real consideration for organisations running large-scale AI workloads.

The governance implication of this shift is direct: when AI moves from a cloud service consumed through a subscription to infrastructure owned and operated by the organisation, the accountability model changes. The cloud provider's governance, security, and compliance framework no longer applies. The organisation owns the risk entirely.

There is also a consumption dimension that is not stated loudly enough in most AI business cases. AI model inference — the process of running a query through a model — consumes significant compute and therefore significant energy. The quality of prompts directly affects the compute required to generate a useful response. Poorly written prompts that require multiple iterations to produce a usable output are not just a productivity problem — they are a power consumption problem. The education piece around prompt engineering is not merely about getting better AI outputs. It is about responsible resource consumption — which connects directly to corporate sustainability obligations and the ASRS reporting requirements that are now mandatory for large Australian organisations.
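Some back-of-envelope arithmetic shows why this matters at scale. Every constant below is an assumption chosen for illustration, not a measured figure.

```python
# Back-of-envelope only: every constant here is an assumption, included
# to show why prompt quality is a consumption issue, not a style issue.
TOKENS_PER_EXCHANGE = 1_500      # prompt + response, assumed average
WH_PER_1K_TOKENS = 0.3           # assumed inference energy figure
USERS, EXCHANGES_PER_DAY, WORK_DAYS = 2_000, 10, 220

def annual_kwh(iterations_per_answer: float) -> float:
    exchanges = USERS * EXCHANGES_PER_DAY * WORK_DAYS * iterations_per_answer
    return exchanges * TOKENS_PER_EXCHANGE * WH_PER_1K_TOKENS / 1_000 / 1_000

# Three attempts per usable answer versus one: the delta is pure waste.
print(f"{annual_kwh(3) - annual_kwh(1):,.0f} kWh/year from prompt rework")
```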

The event's point about ringfencing Azure and Copilot usage to align with corporate responsibility captures precisely this. When AI consumption is ungoverned, the cost and the carbon footprint are both ungoverned. When it is governed — with usage policies, consumption monitoring, and user education — the organisation can demonstrate to its board, its insurers, and its sustainability auditors that AI is being deployed responsibly across every dimension, not just the security one.

 

What Responsible AI Governance Looks Like in Practice

The most grounded example of AI governance done well that emerged from the event came from the insurance sector. Several insurers are now building AI agents for claims processing — automating the high-volume, structured elements of claims review while preserving human expertise for complex cases, edge cases, and decisions that require judgment and accountability. The design principle is deliberate: automate the process, always give the expert an alternative.

This is precisely the pattern Storata advocates across every AI deployment. Not AI as a replacement for human judgment — AI as a multiplier of human capacity in the structured, repeatable parts of the work, so that expert attention is concentrated where it creates the most value. Copilot, not Autopilot. The governance that makes this work is not a constraint on AI productivity. It is the condition under which AI productivity is sustainable.
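Expressed as routing logic, the pattern is simple. The thresholds and field names below are hypothetical; the design principle of automating the structured path and escalating everything else is the point.

```python
# Sketch of "automate the process, preserve the expert" as routing logic.
# Thresholds and field names are hypothetical; the pattern is the point.
def route_claim(claim: dict) -> str:
    """Automate structured, low-stakes claims; escalate everything else."""
    needs_expert = (
        claim["amount"] > 20_000       # materiality threshold
        or claim["injury_involved"]    # judgment and duty of care
        or claim["fraud_score"] > 0.7  # model uncertainty -> human review
        or claim["policy_edge_case"]   # outside the structured path
    )
    return "expert_review" if needs_expert else "automated_processing"

print(route_claim({"amount": 3_200, "injury_involved": False,
                   "fraud_score": 0.1, "policy_edge_case": False}))
```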

 

Automate the process. Always preserve the expert. That is the design principle behind every AI deployment Storata governs.

 

Storata invests significantly in integrating platforms, IP, and solutions that bring AI to market in a way that scales responsibly. That investment is not in AI tools. It is in the governance layer that makes AI tools safe, defensible, and compounding in value over time — rather than accumulating risk that surfaces at the worst possible moment.

The practical components of that governance layer are the same ones that the eight questions above are designed to surface:

 

ISO 42001 alignment — the world's first AI management system standard, adopted in Australia in February 2024. Storata is currently the only consultancy in Australia delivering ISO 42001 as a managed governance capability (Responsible AI as a Service).

Microsoft Purview information protection — data classification that tells AI tools what they should and should not surface, enforced as a technical control rather than a policy aspiration.

Identity and access governance — Microsoft Entra ID configured so AI tools inherit the right access model, not the broadest available one.

Audit-ready evidence trails — logging and monitoring that answers the governance questions boards, regulators, and insurers are already asking.

Cost-neutral outcomes by design — governance that creates value rather than overhead, so the AI investment delivers the ROI it promised rather than funding the incident response it was supposed to prevent.

 

The Close

The IDC and Lenovo event did not produce a single answer to the AI governance problem. What it produced was a clearer picture of where the problem actually sits — not in the technology, not in the intention, but in the gap between the two.

Ninety-three percent of businesses expect AI to deliver transformative returns. That expectation is reasonable. The technology is capable of it. The barrier is not capability. The barrier is the 73% governance gap that sits between the investment and the outcome — the access controls not configured, the shadow tools not governed, the questions not asked, the baseline not measured.

The organisations that close that gap are not the ones with the most sophisticated AI tools. They are the ones whose governance compounds at the same rate as their AI adoption — so every month the environment is safer, more controlled, and more defensible than the month before.

 

 

The question is not whether your organisation is ready for AI. It is whether your governance is ready for what your organisation is already doing with it.

 

Storata Pty Ltd | storata.com | Sydney, Australia | Sources: IDC & Lenovo AI Leadership Event 2026, Microsoft Work Trend Index, ACSC Annual Cyber Threat Report FY2024-25, Standards Australia AS ISO/IEC 42001:2023, Australian Government Voluntary AI Safety Standard 2024

 

Talk to Us

A direct conversation with a senior engineer who has delivered for regulated organisations across ANZ.

Book Your Cyber Risk Assessment

A senior engineer reviews your security posture against the Essential Eight and industry-specific requirements. Clear findings. No obligation.