
While no technology has attracted as much hype as AI, I struggle to recall another in the past 30 years that has arrived with so many warnings attached. Even the launch of the World Wide Web in the 1990s was greeted with unbridled optimism – although, at the time, few imagined the negative uses it would eventually enable.
Today, the AI debate is split between two extremes. On one side stands relentless vendor enthusiasm. On the other, a chorus of independent commentators, analysts, and media voices issuing warnings about bias, errors, skills erosion, job displacement, and broader social disruption.
But there is one risk that dominates the thinking of many business and public-sector leaders, even if it is not spoken of publicly – the fear that if they don’t embrace AI, they will be outpaced by those who do.
In short, the perceived risk of doing nothing now outweighs the risk of doing something.
This tension sits at the heart of a new report, Turning Hesitation Into Action: How Risk Leaders Can Unlock AI’s Potential, which I had the privilege of authoring for Cisco and the Governance Institute of Australia. Its central thesis is simple: risk assessment is a critical factor in successful AI adoption, and we must ask what role risk professionals should play in guiding organisations toward AI maturity.
The report draws on extensive discussions with Australian risk professionals and reflects their lived experience. While many do not consider themselves technologists, they already possess a strong appreciation of AI’s inherent risks, albeit with limited visibility into how those risks can be understood, mitigated, and governed.
What emerged even more strongly, however, was the breadth of responsibility today’s risk professionals see themselves carrying. Their role extends well beyond managing the tangible risks of today, such as safety, compliance, and financial controls. Increasingly, they view themselves as custodians of the organisation’s long-term sustainability. That includes encouraging leaders to adopt new technologies and approaches where these may provide competitive advantage or at least protect against falling behind more agile competitors.
That means embracing AI — but doing so intelligently. As one participant put it:
“I have spent a lifetime trying to encourage people to take a risk intelligently. That is the job of the risk officer.”
The report outlines several recommendations for how risk leaders can help their organisations move forward safely and confidently, and you can read all of them here.
My strongest insight, however, was about the risk profession itself. Every decision in business carries risk – including the decision to delay. Risk professionals bring a unique blend of judgement, structure, and foresight that is essential for balancing innovation with responsibility.
For this reason, they have a pivotal role to play in helping organisations harness AI safely, effectively, and to their long-term benefit.