Guard(rail)ing the development and deployment of AI: the Australian Government’s proposal
The Australian Government has proposed the introduction of mandatory guardrails for the use of Artificial Intelligence (AI) in high-risk settings. The proposal suggests criteria for defining high-risk settings, proposes a set of 10 mandatory guardrails, and outlines possible approaches to mandating these guardrails by legislation.
Organisations involved in developing or deploying AI should familiarise themselves with these proposals, and consider their posture in response. Feedback on all aspects of the Proposals Paper is encouraged and consultation is open until 4 October 2024.
In tandem, the Australian Government has released a Voluntary AI Safety Standard (Standard) which provides best practice guidance on using AI safely and responsibly. All organisations in the AI supply chain should review this Standard and seek to implement the voluntary guardrails in order to uplift their AI maturity.
The case for AI guardrails
The Proposals Paper and Standard each follow the Australian Government’s interim response, released 17 January 2024, to the Safe and Responsible AI in Australia discussion paper (Discussion Paper). Commitments made in the interim response included:
- considering and consulting on the case for and the form of new mandatory guardrails for organisations developing and deploying AI systems in high-risk settings;
- considering possible legislative vehicles for introducing mandatory safety guardrails for AI in high-risk settings;
- developing a voluntary AI Safety Standard for the responsible adoption of AI by Australian businesses;
- developing options for voluntary labelling and watermarking of AI-generated materials; and
- establishing an expert advisory body to support the development of options for further AI guardrails.
Submissions to the Discussion Paper stressed that voluntary application of Australia’s 8 AI Ethics Principles was no longer ‘enough’ in high-risk settings, and that effective regulation and enforcement of AI was required across all sectors.
The Australian Government has adopted these arguments, outlining the case for mandatory AI guardrails in Australia given that:
- public trust in AI remains low, and government is a crucial actor in creating an environment that supports safe and responsible AI innovation and use, while reducing the risks posed by these technologies;
- AI technologies have developed and proliferated rapidly in Australia in the absence of specifically targeted rules;
- internationally, there has been a shift in AI technology regulation – these proposals aim to ensure alignment with international counterparts including the EU, Canada and the UK;
- AI amplifies existing risks and creates new ones. These risks are distinct from those associated with traditional software, and in some cases have already been realised and harm has occurred;
- while current legal frameworks and regulatory regimes are responsive to certain risks posed by AI, there are gaps in how the risks arising from some AI characteristics are prevented or mitigated and inconsistencies across sectors;
- several features of AI highlight the need to shift the focus from remedial to preventative AI regulation.
Defining ‘high-risk’ AI
The Proposals Paper sets out the Australian Government’s proposed approach to defining those high-risk settings where the mandatory guardrails would apply.
Known or foreseeable uses
The Australian Government is seeking feedback on the merits of a principles-based versus a list-based approach to defining high-risk AI where the proposed uses of an AI system or general-purpose AI (GPAI) model are known or foreseeable.
A list-based approach would entail a blanket list of high-risk settings, though this would likely inadvertently capture some low-risk uses of AI. For example, the EU and Canada have identified biometrics, employment and law enforcement as high-risk.
The alternative is a principles-based approach. The Australian Government has proposed adopting this approach, designating the following principles to guide organisations in consistently assessing whether their development or deployment of an AI system is high-risk (Principles):
a. The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations
This includes consideration of the potential for AI systems to discriminate based on protected attributes, potential interference with rights and freedoms, and interoperability with Australia’s domestic and international human rights laws / obligations.
b. The risk of adverse impacts to an individual’s physical or mental health or safety
This includes consideration of potential risks to an individual’s health or safety due to the purpose of an AI product / system (i.e. if it forms part of an individual’s safety system) or how it has been developed or implemented.
c. The risk of adverse legal effects, defamation or similarly significant effects on an individual
This includes consideration of potential impacts, including undue impacts, to an individual’s legal rights stemming from use of an AI system, particularly where the individual is not able to opt out of the relevant use of the AI system.
d. The risk of adverse impacts to groups of individuals or collective rights of cultural groups
This includes consideration of the potential for AI to create unequal, adverse or damaging outcomes for specific groups of individuals.
e. The risk of adverse impacts to the broader Australian economy, society, environment and rule of law
This includes consideration of potential systemic risks to the Australian community, as well as adverse impacts to society at large or the environment.
f. The severity and extent of those adverse impacts outlined in principles (a) to (e) above.
This Principle is intended to ensure that the guardrails remain appropriately targeted at AI systems that are high-risk, and lower risk AI can develop largely unimpeded. Factors influencing the severity and extent of adverse impacts include (but are not limited to):
- who may experience the impacts (including the demographic and number of individuals);
- whether there will be disproportionate impacts for individuals, groups or society;
- the likely consequences; and
- the likelihood of adverse impacts and whether this can be mitigated.
General-purpose AI
The Proposals Paper outlines that while GPAI models have the potential for known or foreseeable risks, they also pose unforeseeable risks because they can be applied in contexts they were not designed for.
The Proposals Paper proposes the following definition of GPAI:
“An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”.
Noting that GPAI can be adapted for use across an array of purposes, with correspondingly unforeseeable risks, the Australian Government proposes to apply the mandatory guardrails to all GPAI models (i.e. GPAI models are to be treated as inherently high-risk).
Guardrails
The Proposals Paper sets out 10 proposed mandatory guardrails for AI systems used in high-risk settings and GPAI. Each of the guardrails is preventative in nature, with an overarching focus on testing, transparency, and accountability.
The Australian Government notes in the Proposals Paper that the guardrails:
- are not intended to be static, and will evolve with technological advancements;
- will need to operate alongside laws and regulatory instruments to enable remediation and enforcement action when harm does occur;
- are intended to be interoperable with those that other comparable jurisdictions have developed and adopted; and
- should be implemented by developers and deployers of AI systems, with accountability to be apportioned based on various contextual factors.
The proposed guardrails broadly align with those contained in the Standard, with the exception of guardrail 10: under the Standard, guardrail 10 directs organisations to engage with stakeholders and evaluate their needs and circumstances, whereas the proposed mandatory guardrail 10 requires conformity assessments. The proposed mandatory guardrails are:
1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance
Organisations developing or deploying a high-risk AI system must create an accountability process outlining governance policies and clear roles to ensure compliance with the guardrails. Accountability processes must be publicly available and accessible to improve public confidence in AI products and services.
2. Establish and implement a risk management process to identify and mitigate risks
Organisations must establish, implement and maintain risk management processes to address risks arising from a high-risk AI system. Developers and deployers must consider any potential impacts on people, community groups and society before the high-risk AI system is in use, and must implement strategies to contain or mitigate any residual risks where elimination is not possible.
3. Protect AI systems, and implement data governance measures to manage data quality and provenance
Organisations must ensure they have appropriate data governance, privacy and cybersecurity measures in place. Data used to train, fine-tune or test a model must be fit for purpose and representative, and must not contain illegal or harmful material. Data must also be legally obtained and data sources must be disclosed. Data must be stored and managed to protect it from unauthorised access and exploitation.
4. Test AI models and systems to evaluate model performance and monitor the system once deployed
Organisations must test and evaluate the performance of an AI model before placing a high-risk AI system on the market, and continuously monitor the system to ensure it operates as expected. The AI model must meet specific, objective and measurable performance metrics (which will vary, informed by the intended or foreseeable use of the high-risk AI system, and any associated risks).
5. Enable human control or intervention in an AI system to achieve meaningful human oversight
Organisations must ensure that humans can effectively understand a high-risk AI system, oversee its operation and intervene where necessary across the AI supply chain and throughout the AI lifecycle. Those responsible for oversight must be sufficiently qualified to interpret output and understand the core capabilities and limitations of an AI model.
6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content
Organisations will be required to inform end-users on how AI is being used and where it affects them, in a clear, accessible and relevant manner. This guardrail will entail 3 distinct requirements:
- Organisations must inform people when AI is used to make or inform decisions relevant to them;
- Organisations must inform people when they are directly interacting with an AI system; and
- Organisations must apply best efforts to ensure AI-generated outputs, including synthetic text, image, audio or visual content, can be detected as artificially generated or manipulated.
7. Establish processes for people impacted by AI systems to challenge use or outcomes
Organisations will be required to establish processes for people who are negatively impacted by high-risk AI systems to contest AI-enabled decisions or make complaints about their experience or treatment.
8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
Organisations will be required to share information about a high-risk AI system with other actors involved in the development or operation of the system. This is intended to enhance transparency across the AI supply chain to help organisations meet their legal obligations and enable them to effectively identify and mitigate risks.
Developers must give deployers information about how to use the high-risk AI system and ensure they can understand and interpret outputs. Deployers must report adverse incidents and significant model failures to developers, who can then issue improvements to the model.
9. Keep and maintain records to allow third parties to assess compliance with guardrails
Organisations must keep and maintain a range of records about a high-risk AI system over its lifecycle, and must be ready to give these records to the relevant authorities on request.
Any organisation training large state-of-the-art GPAI models with potentially dangerous emergent capabilities must disclose these ‘training runs’ to the Australian Government.
10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails
Organisations will be required to show they have adhered to the guardrails by carrying out (or having a third party, government entity or regulator carry out) a conformity assessment. This must be completed before placing a high-risk AI system on the market and periodically repeated to ensure continued compliance.
Regulatory options to mandate guardrails
The Proposals Paper notes that in Australia, the development and use of AI is shaped by a range of legal frameworks, including consumer, privacy, anti-discrimination, competition, data protection and copyright law. However:
- while current legal frameworks and regulatory regimes are responsive to certain risks of AI, there are gaps in how risks arising from some AI characteristics are prevented or mitigated and inconsistencies across sectors;
- regulatory regimes are limited to their defined scopes and the entities they regulate;
- regulatory regimes are consistently focused on remedial, rather than preventative, regulation; and
- thresholds for action and penalties vary across regulatory regimes.
The Proposals Paper also reiterates the stance set out in the Australian Government’s interim response to the Discussion Paper that existing laws do not have the level of specificity required for effective AI risk management in high-risk settings.
The Proposals Paper outlines three ways the Australian Government could implement guardrails in high-risk settings to address the limitations of the current system:
Option 1: adapting existing regulatory frameworks to include the guardrails
This would involve embedding the guardrails into existing regulatory frameworks.
Option 2: introducing framework legislation, with existing laws amended to give the framework effect
This would involve creating new framework legislation that defines the mandatory guardrails and the threshold for when they would apply, which would be implemented through amendments to existing regulatory frameworks.
Option 3: introducing a new cross-economy AI Act
This would involve the introduction of a new AI-specific Act, which would:
- define high-risk applications of AI;
- outline the mandatory guardrails; and
- establish a monitoring and enforcement regime overseen by an independent AI regulator.
Key takeaways
The Proposals Paper outlines three key ways the Australian Government may implement guardrails in high-risk settings to address the limitations of the current system.
All organisations involved in developing or deploying AI should familiarise themselves with the Proposals Paper, and consider their response. Important in this review will be a consideration of risk management processes and data governance measures, which should be subject to frequent review and uplift in any event. The Australian Government is seeking feedback on all aspects of the Proposals Paper and consultation is open until 4 October 2024.
All organisations in the AI supply chain (i.e. not just those developing or deploying AI systems in high-risk settings) should also review the Standard, which provides best practice guidance on using AI safely and responsibly. We consider that organisations which implement the voluntary guardrails in the short term will be well positioned to adapt with ease to future regulatory compliance obligations and to align with international norms. Adopting the Standard will also demonstrate an organisation’s commitment to protecting individuals and communities, data, and society at large from AI-related risks and harms.
Please reach out if there is any support we can provide your organisation, including in responding to the Proposals Paper, adopting the Standard or reviewing internal AI policies.