iTnews – State of HR Tech 2025


AI is a team sport – one with profound implications for an organisation’s people. That alone is reason enough for human resources leaders to take an active role in shaping their organisation’s AI strategy.

iTnews has just published my State of HR Tech Report 2025, and while AI dominates the conversation, what stands out is the strong duty of care HR leaders feel for their people during this period of rapid change.

Alongside AI, the familiar priorities remain: creating productive workplaces, developing skills, improving recruitment, and nurturing supportive, high-performing cultures. Technology remains central in enabling these outcomes, particularly the shift toward HRIS platforms that don’t just streamline HR workflows but genuinely improve the day-to-day experience of the people who rely on them.

Yet through every chapter, AI looms as both a catalyst and a challenge. While its long-term effects are still emerging, its short-term risks and responsibilities fall squarely into the hands of HR leaders. The future may be uncertain, but the need for thoughtful leadership today is not.

If you’d like to explore the full findings, you can read all four chapters of the report here.

Why AI’s greatest risk isn’t what you expect

While no technology has attracted as much hype as AI, I struggle to recall another in the past 30 years that has arrived with so many warnings attached. Even the launch of the World Wide Web in the 1990s was greeted with unbridled optimism – although, at the time, few imagined the negative uses it would eventually enable.

Today, the AI debate is split between two extremes. On one side stands relentless vendor enthusiasm. On the other, a chorus of independent commentators, analysts, and media voices issuing warnings about bias, errors, skills erosion, job displacement, and broader social disruption.

But there is one risk that dominates the thinking of many business and public-sector leaders, even if it is not spoken of publicly – the fear that if they don’t embrace AI, they will be outpaced by those who do.

In short, the perceived risk of doing nothing now outweighs the risk of doing something.

This tension sits at the heart of a new report, Turning Hesitation Into Action: How Risk Leaders Can Unlock AI’s Potential, which I had the privilege of authoring for Cisco and the Governance Institute of Australia. Its central thesis is simple: risk assessment is a critical factor in successful AI adoption, and we must ask what role risk professionals should play in guiding organisations toward AI maturity.

The report draws on extensive discussions with Australian risk professionals and reflects their lived experience. While many do not consider themselves technologists, they already possess a strong appreciation of AI’s inherent risks, albeit with limited visibility into how those risks can be understood, mitigated, and governed.

What emerged even more strongly, however, was the breadth of responsibility today’s risk professionals see themselves carrying. Their role extends well beyond managing the tangible risks of today, such as safety, compliance, and financial controls. Increasingly, they view themselves as custodians of the organisation’s long-term sustainability. That includes encouraging leaders to adopt new technologies and approaches where these may provide competitive advantage or at least protect against falling behind more agile competitors.

That means embracing AI — but doing so intelligently. As one participant put it:

“I have spent a lifetime trying to encourage people to take a risk intelligently. That is the job of the risk officer.”

The report outlines several recommendations for how risk leaders can help their organisations move forward safely and confidently, and you can read all of them here.

My strongest insight, however, was about the risk profession itself. Every decision in business carries risk – including the decision to delay. Risk professionals bring a unique blend of judgement, structure, and foresight that is essential for balancing innovation with responsibility.

For this reason, they have a pivotal role to play in helping organisations harness AI safely, effectively, and to their long-term benefit.

Why ‘Co-pilot’ is the wrong metaphor for AI


As a regular flyer, one of my favourite YouTubers is Swedish airline pilot Petter Hörnfeldt and his channel, Mentour Pilot. Petter dives deep into the inner workings of commercial aviation, especially the systems and protocols that keep millions of people safe in the air every day.

One of his most memorable explanations isn’t about technology at all – it’s about how pilots decide who’s actually flying the plane.

In a modern commercial cockpit, there are two pilots: a captain and a first officer. Before every flight, they decide who will be the “pilot flying” (the one with hands on the controls) and who will be the “pilot monitoring” (the one who manages supporting tasks, checks systems, and keeps an eye on the pilot flying).

Crucially, both pilots are fully briefed and capable of performing every part of the flight plan. If the pilot flying becomes incapacitated, the pilot monitoring can seamlessly take control. That’s the essence of co-piloting – two fully trained, fully capable humans working in tandem, ready to back each other up when things get difficult.

That’s not a job for AI.

When Microsoft and others use “Copilot” to describe AI, they’re unintentionally assigning it a role it was never designed to play. AI is much closer to an autopilot system:

  • Great at handling routine tasks.
  • Helpful for reducing workload during normal operations.
  • Useful in assisting during certain critical phases of flight.

But the autopilot does not step in when things go wrong. It doesn’t take over from the captain in an emergency. In fact, in turbulent or unexpected scenarios, pilots often switch it off. It’s the pilot monitoring — a fully capable human — who steps up. Not the autopilot.

The aviation industry is built on layered safety systems and shared responsibility. The co-pilot metaphor implies equivalence, or at least shared accountability. AI doesn’t meet that bar. AI can:

  • Accelerate routine work
  • Spot patterns humans might miss
  • Improve decision-support in stable conditions

But it cannot:

  • Fully understand context
  • Manage edge cases outside its training
  • Take accountability when the unexpected happens

We’ve already seen what happens when automation is trusted too far. The recent AWS outages were a reminder: when the system encounters something outside its model, everything stops, and humans scramble to fix it.

The lesson from aviation is simple: Use AI as autopilot. Keep people as co-pilots.

Never use AI as an excuse not to train your people to the fullest extent necessary. Invest in their ability to understand, monitor, and override AI when needed. Don’t let the “copilot” marketing metaphor lull you into delegating responsibility to a system that isn’t designed to carry it.

AI can make the flight smoother. But when the storm hits, you’ll want a human in the left seat.

So what is organisational culture?

Today I was part of a group discussion at the Clutch Events Project Management Technology Summit in Melbourne, where someone asked the question, ‘what is organisational culture?’

For me – that’s an easy question to answer: organisational culture is the manifestation of individual behaviour at scale.

It is an emergent quality that is the sum of the actions of the people that contribute to it.

When people behave in ways that are supportive and demonstrate ethical decision making, then their organisation’s culture will reflect that.

You can plug in different behaviours and the output will reflect them accordingly. Hence when behaviours are individualistic or geared towards profit at the expense of all else, the organisational culture will form accordingly.

So the second question that naturally arises is: what are the key factors that influence organisational culture?

If you’re willing to accept my argument about culture emerging from individual behaviour, then there are two critical factors that I believe outweigh all others.

The first of these is the organisation’s statement of purpose. Whether this is a single sentence or is expressed through a set of a dozen leadership principles, the organisation’s purpose provides a test that can guide workers in their decision making, simply by offering the opportunity to ask, ‘does this action align with our purpose?’

This makes the defining of purpose and principles something that must be considered with great care, as they can influence critical actions right through the workforce and will go a long way to shaping its culture.

The second factor is the behaviour of the leadership group. Culture may emerge from behaviour throughout the organisation, but it is set from the top, which means that the behaviour of senior leaders is critical to shaping the culture.

If leaders preach inclusivity and supportiveness but behave in an opposite fashion, workers below them will mimic the behaviour, not the rhetoric. Worse still, they will understand that the leadership is duplicitous, and the resultant culture will reflect that duplicity.

These realisations arose thanks to a research project I was engaged in several years ago, when I had the chance to examine numerous businesses up close and pose the question of why some were able to undergo successful transformations, and others weren’t.

Amongst the findings was the realisation that organisations which were able to transform successfully had a strong statement of purpose that workers could align with, even in those times when the transformation was directly disrupting their lives. When they could see that the result of the disruption would enable them to better serve their purpose, then disruption became something they were more willing to put up with.

Which led me to my final contribution to the discussion – never discount the importance of organisational culture as a factor in successful transformation programs.

Welcome to the era of Human Intelligence

Late last month I travelled to Queenstown, New Zealand, to deliver a speech to a group of property managers and investors on the topic of technology-driven change and the choices we face (thanks to Dinesh Pillutla and the team at Core Property Research for inviting me to speak).

It was an interesting experience, not only for the chance to delve into an industry that is itself undergoing significant change, but also because the speech that I gave wasn’t the one that I had first set out to deliver.

Having spent more than 20 years swept up in the rapid changes of the technology sector, it is easy to forget that people outside the sector have a very different perspective on the role of technology, and a different appreciation for what it can and can’t do. As a speaker, this makes it all too easy to bamboozle an audience with demonstrations and prognostications of the technological utopia/apocalypse ahead – which might be entertaining (or unsettling) in the moment, but which holds little value over the long term.

This time I set out to take a different approach. My main thesis was that while technology is evolving quickly, we are focusing on the wrong things: too much on technology and what it can do, and not enough on what we want it to do.

In short, we need to stop thinking so much about technology, and start thinking a lot more about ourselves.

So when it came time to talk about AI, I chose to talk about something that technologists rarely talk about – human intelligence – and the skills and abilities that we already possess (and should be thinking about more) when it comes to understanding our role in a future world where AI is a major factor.

Why? Because getting from the first Australian computer (CSIRAC, built in 1949) to today’s AI took less than 75 years. We have gone from basic machines to a simulacrum of intelligence in the blink of an eye. Evolutionary biology took approximately 750 million years to complete the same task.

It’s an impressive achievement, and not something we’ve needed to be overly concerned about – until now. Throughout history the development of technology has mostly been in support of human endeavour, and has tended to create more opportunities than it has erased. Now we may have reached a point where instead of supporting us, technology is competing with us (or more precisely, we are competing with it), and given its rapid evolution, we will fall behind quickly.

This is something we have seen time and time again throughout history – especially in sectors such as manufacturing – but now the emergence of more capable AI systems means the field of competition has broadened considerably. The most common question I get asked in any conversation about AI is ‘will AI take my job?’ And the answer I give is most often ‘yes’ – it’s just a question of when.

At some point many of the jobs we do today won’t exist, but the expectation (still – and far from proven) is that more new ones will be created. The key for us as individuals is to anticipate which roles AI will perform better than us – and by when – and then work out what we need to do to ensure we stay relevant in that AI-focused future.

Hence the need to think more about human intelligence.

So in my presentation in Queenstown I posed the question of whether my audience would find their jobs replaced by AI, and then answered with a provisional ‘yes’ – that being if:

  • You had lost your sense of curiosity.
  • You were unable to listen and learn from diverse perspectives.
  • You cannot elevate yourself out of your immediate environs to see the bigger picture.
  • You lack empathy.
  • You are unable to align to others.
  • You cannot communicate.
  • You have stopped learning.
  • You are not adaptable.

If those traits describe you, then there is a very good chance that you will find your job replaced by AI. But it only takes the exercising of a few of those skills to provide a foundational capability that will help you maintain or grow your value in the turbulent years that lie ahead.

In summary – we need to be worrying a lot more about the exercising of our own human intelligence than we are worrying about the artificial kind.

No one can predict the future. We can make inferences and predictions, and we can run the risk of being very, very wrong.

But even though we can’t predict the future, we can consciously change the future through the actions that we take today.

And that is a capability that no machine can match (at least not yet).

ENDS

Why you really need an AI strategy

Last month I had the pleasure of joining a panel session at the Municipal Association of Victoria’s MAV Technology conference, to discuss the challenges and impact of AI.

Not only was it a chance to sit alongside luminaries such as Adam Spencer, Lisa Andrews, Morris Miselowski, and our moderator Holly Ransom, but it was an opportunity to explore exactly what AI means for local government – which it turns out, is not dissimilar to what it means for many other mid-sized organisations.

The key question I considered when going into the session was whether an organisation such as a local council actually needs an AI strategy.

My conclusion was a resounding yes.

Despite its label, AI is a very human challenge – one that can create fear and uncertainty among workers, customers, and communities.

Having an AI strategy doesn’t mean developing a complex technological roadmap for the creation of AI systems. What it does mean is being able to articulate how an organisation is using AI today and its guidelines for how it will use AI in the near future.

Many of the applications for AI have come into common use almost by stealth, such as unlocking a smartphone using facial recognition, using predictive text in a word processor, or receiving recommended items on a shopping website. AI has been a part of everyday life for many years – it is only the accessibility of ChatGPT and similar generative AI tools and their ‘wow factor’ that has thrust AI into the spotlight.

This sudden rise to prominence has created a lot of questions – principal among them being “will AI take my job”. This is quickly followed by “should I use AI to help me with my job”, “should I be feeding data into an AI to improve its usefulness”, and “what are the privacy and copyright implications when I do?”.

These questions are only the tip of the iceberg, and they are being asked by executives, managers, and workers all around Australia. Without an AI strategy, where can they turn to for answers?

For local government, there is also the need to answer the questions of ratepayers, many of whom may be concerned by the use of AI and how it might impact their privacy and other rights.

The use of facial recognition without consent is already a contentious topic, and the Robodebt scandal has further eroded people’s trust in government and its use of technology. Recent months have also seen many council meetings interrupted by people who are concerned about how technology is being used today to manage communities – and how it might be abused in the future.

At the very least, an AI strategy needs to consider:
– Guidelines and commitments regarding where AI will or will not be used, in alignment with expectations of privacy and human rights. This needs to be specific in relation to the use of AI in activities that involve the general public (chatbots for customer service, automation of penalty notices, use in video surveillance, etc) and should provide clarity for staff whose working lives may be impacted by these technologies (such as customer service agents).
– An inventory of where AI is being used today, and why. This may require an investigation of existing software applications to determine their own use of AI.
– Clarification of decision-making processes and guidelines to be used when making future investments in AI based technologies.
– Guidelines for managers and staff as to which AI services are cleared for use, and for what purposes.
– Further guidelines regarding how different data types can be used in relation to AI systems.

This is not an exhaustive list, and the creation of a strategy should start with the creation of a stakeholder group that can work through a more comprehensive set of considerations.

AI has massive potential to do good things for local government, such as improving services and reducing their cost of delivery. But as with many fast-developing technologies, the potential for backlash – and very real damage – is equally strong.

The metaverse, marketing, and neurotech – a match made in a dystopian nightmare

In an era where privacy has been steadily eroded, the one sanctuary that most of us have held on to is the privacy of the thoughts within our heads.

But it seems even this last redoubt might soon come under siege. For centuries psychics have claimed the ability to read minds; now we are making this capability real, thanks to rapid development in the field of neurotechnology, and specifically, the creation of brain computer interface (BCI) devices.

But once more it seems the pace at which we can develop new capabilities is going to outstrip our ability to consider and manage the consequences.

So what is a BCI? Put simply, it is a device for sensing and interpreting the signals of the brain. Where common neurotechnology devices such as MRI scanners can determine what parts of the brain are active at any given time, a BCI device can determine what the brain is actually doing – or more specifically – what it is thinking.

The detail and accuracy of BCI devices is astounding – down to the level of individual words. A trial of a BCI device in 2021 on a person who was paralysed and non-verbal saw them use an implanted BCI device to communicate at a rate of 18 words per minute at 94 per cent accuracy. While the techniques used suggest there is still some way to go to true mind-reading (this example focused on imagined muscle control), this is another step along a seemingly inevitable pathway.

Today the capabilities of BCI devices depend greatly on the proximity they can achieve to the neurons they are trying to sense, with the best results achieved using implanted devices where electrodes are inserted under the skull, such as in the example described above. Good results have also been achieved from devices implanted under the scalp, and even wearable (non-invasive) devices are showing promise.

Exactly how accurate these wearable devices will prove remains to be seen, but given their use is mostly unregulated (especially as they are not ‘medical’ devices), there is a good chance that a lot of investment dollars will be keen to see how finely their resolution can be tuned (Elon Musk certainly seems keen).

But the implications of BCI technology go far beyond giving speech to the speechless. Creating a device that gives one party access to the thoughts of another has massive implications across many aspects of life.

Take marketing, for example. Not only might a marketer be able to discern the difference between what a person thinks and what they say, but they could also pick up on signals and make suggestions based on thoughts that a person might not even be aware of. This would be a much more accurate form of contextual advertising, based on evidence rather than inference.

It is unlikely that any individual would be willing to submit to constant mind surveillance by their favourite brand – although with the right incentive, not impossible. However, there is one scenario where BCIs are likely to play a major role – the metaverse.

One of the key barriers to truly immersive virtual reality experiences is the control interface, which must use hand and body gestures as proxies to control actions within the virtual world. Using a BCI however means a person might only have to think about ‘running’ in a specific direction, or about picking up an object, or any manner of other interactions, for that thought to be translated into an action in the virtual world.

How much of a stretch is it to go from monitoring a participant’s commands to interpreting all of the other data that the BCI is extracting?

One immediate application is contextual advertising, and the ability to present a brand at the precise moment when a person is thinking about that product category.

For content platforms, whose job is to keep people engaged, the BCI can be used to present content which has been determined as being most likely to garner a response at that moment in time. Given the furore that erupted when Facebook was shown to be manipulating people’s moods through the content it showed them, it is not hard to see the possible harm that might result.

Or what about for an online casino, which now knows exactly what it needs to offer to keep a player engaged and spending?

While none of these possibilities are viable with the BCI technology available today, at the current rate of progress, this decade is the one where the boundaries will be tested – not some distant and unforeseeable future.

So what will the world look like when not even the thoughts in our head are ours alone?

If you’re interested in learning more about the technical, legal, and ethical challenges of neurotechnology, then please come along to the second Neurotechnology Forum, taking place in Sydney on May 17.

CMO – How to include disabled communities in marketing

Disabled Australians eat fast food, wash clothes using laundry powder, and even drive cars. But look at the people used to promote these products in advertising and you will find not a single disabled person in sight.

The tendency towards only featuring able-bodied people in advertising might be defended based on the law of averages (as the average Australian person is not likely to be visibly disabled), but this runs against the spirit of inclusivity that many brands preach. It also ignores the reality that one in six Australians have a disability.

In this article for CMO Australia I had the chance to explore the topic of representation for disabled Australians in mainstream advertising, and speak to some of the marketers that are working to bring greater representation to a diverse group of Australians.

You can read more by clicking here.

How to achieve virtual experience success – CMO

With physical meetings, training sessions and conferences all around Australia being cancelled, service providers are scrambling to reinvent them as online experiences. But running a successful online event takes more than just a webcam and a microphone. If you’re looking for advice on how to host an effective online experience, you might want to check out some of the tips from industry experts in my story for CMO.

I’ve also switched over my own communications and presentation training sessions to be conducted virtually, and am looking forward to starting to deliver to clients next week.

Why thought leadership needs a rethink

Pic by Akua Sencherey

Much of my working life is spent developing thought leadership articles and reports, which provides me with fantastic opportunities to dig deeper into concepts and hopefully get people thinking a little differently as a result.

What I’ve learned over the years however is that there are certain conditions and processes that need to be considered if a thought leadership exercise is going to deliver the desired results.

A good thought leadership piece is not simply an announcement – that can be better achieved through a press release or an ad. Good thought leadership is about joining a conversation that is already taking place – or better still, starting a new one.

Unfortunately, some brands have no natural place in the conversations they wish to enter. Hence the goal is to find a point of connection around which they can build their relevancy, and use that to earn the permission needed to join the conversation. Just because a brand hasn’t yet earned its place in a conversation doesn’t mean it should give up, however. Authenticity and relevancy obviously help, and often these can be established over a period of time. But doing so requires commitment.

Other times a brand will simply be too late to the conversation to offer anything meaningful. This happens a lot regarding discussion on transformation – a topic that is very important, but which many people have tuned out of. Earning a voice in this conversation requires significant effort to establish a perspective that is new or different – a difficult task given everything that has been said about it already.

Being part of a conversation means being committed to that conversation over the longer term. A single discussion paper or research report might gain some attention, but its true value is unlocked when it is part of an ongoing campaign that builds over time, possibly using multiple different voices in many different forums. In some ways building good thought leadership is a bit like building a brand – it takes consistency and commitment.

Good thought leadership also needs to be something that is not immediately obvious. If the outcome is a conclusion that anyone could have come to – or worse still, one that is obviously designed to serve the brand message and nothing else – it will have no impact. But having a recipient say “I hadn’t thought of that before” or “I’d never looked at it that way” is likely to ensure that the core ideas remain in their thinking over the longer term.

It helps then if the concept is one that is easy to grasp. This points to one of the most important truths regarding the creation of thought leadership – while the ultimate product needs to be simple to understand, the process of its creation is usually anything but.

Many of the best thought leadership projects are the result of extensive research and discussions designed to bring forth basic truths that might have been hidden under layers of noise, and to shine a light on hidden patterns. Hence it is best to commence a thought leadership project with a question, rather than stipulating the outcome from the outset. Many great thought leadership projects simply give life to concepts which might at first glance seem obvious, but which have not yet been expressed so simply and eloquently. These projects that bring life to the obvious are often the most powerful, as they exist within a framework which can be easily grasped, but are the result of a long process of sifting and sorting.

All of this points to another basic truth of thought leadership – that it is earned. For a brand to simply blunder into a conversation for which it is neither relevant nor prepared is to invite disaster. But when a brand is prepared to put in the hard work to earn its position, and then build that over time, amazing things can happen.