Not only was it a chance to sit alongside luminaries such as Adam Spencer, Lisa Andrews, Morris Miselowski, and our moderator Holly Ransom, but it was an opportunity to explore exactly what AI means for local government – which it turns out, is not dissimilar to what it means for many other mid-sized organisations.
The key question I considered when going into the session was whether an organisation such as a local council actually needs an AI strategy.
My conclusion was a resounding yes.
Despite its label, AI is a very human challenge – one that can create fear and uncertainty among workers, customers, and communities.
Having an AI strategy doesn’t mean developing a complex technological roadmap for the creation of AI systems. What it does mean is being able to articulate how an organisation is using AI today and its guidelines for how it will use AI in the near future.
Many AI applications have come into common use almost by stealth – unlocking a smartphone using facial recognition, using predictive text in a word processor, or receiving product recommendations on a shopping website. AI has been a part of everyday life for many years – it is only the accessibility of ChatGPT and similar generative AI tools, and their ‘wow factor’, that has thrust AI into the spotlight.
This sudden rise to prominence has created a lot of questions – principal among them being “will AI take my job?”. This is quickly followed by “should I use AI to help me with my job?”, “should I be feeding data into an AI to improve its usefulness?”, and “what are the privacy and copyright implications when I do?”.
These questions are only the tip of the iceberg, and they are being asked by executives, managers, and workers all around Australia. Without an AI strategy, where can they turn to for answers?
For local government, there is also the need to answer the questions of ratepayers, many of whom may be concerned about the use of AI and how it might impact their privacy and other rights.
The use of facial recognition without consent is already a contentious topic, and the Robodebt scandal has further eroded people’s trust in government and its use of technology. Recent months have also seen many council meetings interrupted by people who are concerned about how technology is being used today to manage communities – and how it might be abused in the future.
At the very least, an AI strategy needs to consider:
– Guidelines and commitments regarding where AI will or will not be used, in alignment with expectations of privacy and human rights. These need to be specific in relation to the use of AI in activities that involve the general public (chatbots for customer service, automation of penalty notices, use in video surveillance, etc) and should provide clarity for staff whose working lives may be impacted by these technologies (such as customer service agents).
– An inventory of where AI is being used today, and why. This may require an investigation of existing software applications to determine their own use of AI.
– Clarification of the decision-making processes and guidelines to be used when making future investments in AI-based technologies.
– Guidelines for managers and staff as to which AI services are cleared for use, and for what purposes.
– Further guidelines regarding how different data types can be used in relation to AI systems.
This is not an exhaustive list, and the development of a strategy should start with the formation of a stakeholder group that can work through a more comprehensive set of considerations.
AI has massive potential to do good things for local government, such as improving services and reducing their cost of delivery. But as with many fast-developing technologies, the potential for backlash – and very real damage – is equally strong.