AI Will Not Be Waiting for Audit to Catch Up
This piece began with a conversation.
A friend of mine, a senior leader in technology risk, and I were talking about how differently organisations are reacting to AI. In some places there is organised experimentation, governance pilots, and model inventories. In others, there is hesitation, quiet avoidance, or the view that if policy isn't right yet, adoption should wait.
Around the same time, I was speaking with friends in software engineering. Some were being asked if AI would "replace" software developers. The tenor of these conversations felt familiar: the same shape of concern that appeared with automation, compilers, and tooling shifts in the past.
Software engineering was not harmed when programming moved from punch cards to higher level languages. It changed. It improved. Productivity increased. Expectations rose. Skill sets moved up.
AI is another such change.
It is good to be cautious. It is not good to avoid.
In the world of audit and technology risk, there is a visible nervousness. Some practitioners are struggling to audit AI-enabled processes. Some leaders are using new tools without understanding the control implications. Others simply opt out, waiting for the "right" framework to arrive.
But AI is already integrated into reporting processes, analytics engines, development tooling, customer interfaces, and operational decision support. It is not waiting for assurance models to catch up.
The Big 4 have had AI governance and model risk on their radar for years. Frameworks exist, and advisory capabilities have been building. Yet in many real environments, adoption is outpacing understanding. That's where risk emerges.
AI changes the assurance problem
Risk professionals can't treat AI as a theoretical future risk, and they can't treat it as a policy-only topic. AI introduces probabilistic outputs into systems that often assume deterministic behaviour. This is a shift in how assurance needs to be approached.
The appropriate course of action is not to constrain AI until everyone is comfortable. It is to get literate: how models are developed, where bias is introduced, how configuration data differs from training data, where human oversight is required, and how outputs feed into financial reporting, operational processes, and customer outcomes.
Knowledge is power. Training is credibility.
Governance must follow understanding
Governance needs to follow hard on the heels of understanding. Access controls must include people who configure, deploy, and monitor AI systems. Change management must include model updates. Monitoring must adapt to identify output drift or unusual behaviour. Documentation must describe architectural reality, not marketing language.
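To make the monitoring point concrete, here is a minimal sketch of what "identify output drift" can mean in practice: comparing recent model outputs against a baseline distribution and raising an alert when they diverge. The function name, threshold, and sample data are illustrative assumptions, not a reference to any specific tool; real monitoring would use richer statistics and production telemetry.

```python
# Minimal output-drift check, assuming a numeric model output (e.g. a score).
# All names and thresholds here are illustrative, not from any standard.
from statistics import mean, stdev

def drift_alert(baseline, recent, z_threshold=3.0):
    """Flag drift when the recent mean falls outside z_threshold
    standard errors of the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    standard_error = sigma / (len(recent) ** 0.5)
    z = abs(mean(recent) - mu) / standard_error
    return z > z_threshold

# Hypothetical scores: a stable window and a shifted one.
baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]
stable   = [0.50, 0.51, 0.49, 0.52]
shifted  = [0.80, 0.82, 0.79, 0.81]
print(drift_alert(baseline, stable))   # False: within normal variation
print(drift_alert(baseline, shifted))  # True: output distribution has moved
```

Even a simple check like this gives a control something auditable: a documented baseline, a defined threshold, and an alert trail, which is the shape of evidence that traditional monitoring controls already rely on.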
But governance without understanding is a farce. AI use will increase, not decrease. The organisations that benefit will be the ones whose control environments adapt to it.
What to do now
If you work in IT risk, internal audit, or first line risk governance, there are practical steps you can take immediately:
- Build AI literacy at leadership level. Go beyond awareness, understand how AI is being used in your organisation today.
- Map where AI already exists. Many organisations are surprised by how far it has already penetrated products and operations.
- Revisit ITGC domains through an AI lens. The domains still matter, but the application and evidence often need to evolve.
- Define where human oversight is required. Be explicit about accountability for outcomes that affect customers, operations, or reporting.
- Invest in capability. Upskill risk professionals, hire technical expertise where needed, and treat AI governance as an evolving discipline.
AI is not a trend that will come and go. It is an acceleration layer in modern technology. The question is not whether it will be used; it is whether governance will develop quickly enough to enable it safely.
Conclusion
Avoidance creates blind spots. Caution creates resilience, but only when it is paired with understanding. Audit and risk leaders who invest early in literacy, evidence, and control adaptation will be the ones who keep pace.