Australian Government introduces AI policy for responsible government use

The Australian Government has introduced a comprehensive new Artificial Intelligence (AI) implementation policy that recognises the need for a unified strategy to effectively harness the potential of AI. The Digital Transformation Agency has unveiled the ‘Responsible Use of AI in Government Policy’, which represents an important step towards achieving this goal while building public trust. Set to take effect from 1 September this year, the policy positions the Australian Government as a role model for the safe and ethical use of AI technologies.

Guided by the Enable, Engage and Evolve framework, which introduces principles, mandatory requirements and recommended actions, the AI Policy has been designed to evolve with technology and community expectations. It sets out how the Australian Public Service (APS) will reap the benefits of AI by using it confidently, safely and responsibly; build public trust through improved transparency, governance and risk assurance; and adapt over time by adopting a forward-leaning approach to changes in both the technology and policy environments.

“This policy will ensure the Australian Government shows leadership in using AI for the benefit of Australians,” said Lucy Poole, General Manager of Strategy, Planning and Performance, in a DTA press release. “By engaging with AI in a safe, ethical and responsible way, we will meet society’s expectations and build public trust.”

The adoption of AI technology and capabilities varies across the APS. The policy aims to standardise the government’s approach by setting basic requirements for AI governance, security and transparency. This will remove barriers to government adoption by giving agencies confidence in their approach to AI and creating incentives for safe and responsible use for the benefit of the public.

The AI Policy also aims to increase public trust in the government’s use of AI through greater transparency, governance and risk assurance. “One of the biggest challenges to the successful adoption of AI is the lack of public trust in government adoption and use, which acts as a handbrake on adoption. The public is concerned about how their data is being used, the lack of transparency and accountability in the use of AI, and how the decision-making supported by these technologies affects them. The policy addresses these concerns by implementing mandatory and optional measures for agencies, such as monitoring and evaluating performance, increasing transparency in the use of AI and introducing standardised governance,” the DTA added.

The policy also seeks to enshrine a forward-looking, adaptive approach to government use of AI. AI is a rapidly evolving technology, and the extent and nature of future change is uncertain. The policy is therefore designed to remain flexible, requiring agencies to adapt to changes in both the technology and policy environments.

To support implementation of the policy, the DTA has published a standard for Accountable Officials (AOs) to guide their agencies in improving the governance of AI adoption, establishing a culture that appropriately balances risk management and innovation, responding and adapting to changes in AI policy, and engaging in cross-agency coordination and collaboration.

“We encourage AOs to be the primary point of contact for partnerships and collaborations within their agency and with others,” Poole explained. “They link the relevant internal areas to responsibilities under the policy, gather information and encourage the agency’s participation in cross-agency activities. Cross-government forums will continue to support coordinated integration of AI into our workplaces and monitor current and emerging issues.”

The challenges posed by government use of AI are complex and closely linked to other issues such as the APS Code of Conduct, data governance, cybersecurity, privacy and ethical practices. The policy is designed to complement and strengthen – not duplicate – existing frameworks, laws and practices affecting government use of AI, and must be read and applied alongside them to ensure that agencies meet their obligations.

The DTA will also shortly publish a standard for AI transparency statements, setting out the information agencies should make publicly available: the agency’s intentions and its reasons for using or considering AI; categories of use that involve direct interaction with the public without human intermediaries; governance, processes or other measures to monitor the effectiveness of deployed AI systems; compliance with applicable laws and regulations; and efforts to protect the public from adverse impacts.

“Statements must be written in clear, simple language and avoid technical jargon,” Poole stressed.

The AI Policy recommends that, within six months of the policy taking effect, agencies provide training on AI fundamentals for all staff, aligned with the approach it sets out. Additional training is recommended for staff according to their roles and responsibilities, such as those responsible for the procurement, development, training and deployment of AI systems.

It noted that agencies should consider where and how AI is used across their organisation, develop an internal register of this information, and integrate AI considerations into existing frameworks such as privacy, security, data retention, cyber and data.

The AI Policy also requires agencies to make publicly available a statement setting out their approach to the adoption and use of AI within six months of the policy taking effect, as directed by the DTA. The statement must be reviewed and updated annually, or sooner if the agency materially changes its approach to AI.

In addition, the statement must provide the public with relevant information about the agency’s use of AI, including compliance with the policy, measures to monitor the effectiveness of deployed AI systems, and efforts to protect the public from adverse impacts.

Agencies should consider participating in the Australian Government AI Assurance Framework pilot and provide feedback to the DTA on the pilot outcomes to plan next steps. They must also implement generative AI guidelines.

The AI Policy also noted that agencies should consider monitoring and evaluation approaches that continually review internal policies and governance to ensure they remain fit for purpose; monitor AI use cases to identify unintended impacts; integrate AI into a whole-of-government approach; keep abreast of changes in the policy and governance environment; adapt quickly to ensure continued compliance; and engage in whole-of-government efforts to build APS-wide AI capability over time.

Further information on additional options and measures will be issued over the coming months.

Earlier this month, the European Commission launched a multi-stakeholder consultation on a Code of Conduct for providers of general-purpose artificial intelligence (GPAI) models. The Commission invites GPAI providers with establishments in the EU, companies, civil society representatives, rights holders and academic experts to provide their views and insights, which will feed into the Commission’s upcoming draft Code of Conduct for GPAI models. Interested parties are also invited to provide input on ensuring trustworthy general-purpose artificial intelligence models in the EU.
