Guard(rail)ing the development and deployment of AI: the Australian Government’s proposal

16 September 2024 • 8 min read

The Australian Government has proposed the introduction of mandatory guardrails for the use of Artificial Intelligence (AI) in high-risk settings. Its proposals paper (Proposals Paper) suggests criteria for defining high-risk settings, proposes a set of 10 mandatory guardrails, and outlines possible approaches to mandating these guardrails by legislation.

Organisations involved in developing or deploying AI should familiarise themselves with these proposals and consider their posture in response. Feedback on all aspects of the Proposals Paper is encouraged, and consultation is open until 4 October 2024.

In tandem, the Australian Government has released a Voluntary AI Safety Standard (Standard) which provides best practice guidance on using AI safely and responsibly. All organisations in the AI supply chain should review this Standard and seek to implement the voluntary guardrails in order to uplift their AI maturity.

The case for AI guardrails

The Proposals Paper and Standard each follow the Australian Government’s interim response, released 17 January 2024, to the Safe and Responsible AI in Australia discussion paper (Discussion Paper). Commitments made in the interim response included:

  • considering and consulting on the case for and the form of new mandatory guardrails for organisations developing and deploying AI systems in high-risk settings;

  • considering possible legislative vehicles for introducing mandatory safety guardrails for AI in high-risk settings;

  • developing a voluntary AI Safety Standard for the responsible adoption of AI by Australian businesses;

  • developing options for voluntary labelling and watermarking of AI-generated materials; and

  • establishing an expert advisory body to support the development of options for further AI guardrails.

Submissions to the Discussion Paper stressed that voluntary application of Australia’s eight AI Ethics Principles was no longer ‘enough’ in high-risk settings, and that effective regulation and enforcement of AI was required across all sectors.

The Australian Government has adopted these arguments, outlining the case for mandatory AI guardrails in Australia given that:

  • public trust in AI remains low, and government is a crucial actor in creating an environment that supports safe and responsible AI innovation and use, while reducing the risks posed by these technologies;

  • AI technologies have developed and proliferated rapidly in Australia in the absence of specifically targeted rules;

  • internationally, there has been a shift in AI technology regulation – these proposals aim to ensure alignment with international counterparts including the EU, Canada and the UK;

  • AI amplifies existing risks and creates new ones. These risks are distinct from those attaching to traditional software, and in some cases they have already been realised and harm has occurred;

  • while current legal frameworks and regulatory regimes are responsive to certain risks posed by AI, there are gaps in how the risks arising from some AI characteristics are prevented or mitigated and inconsistencies across sectors;

  • several features of AI highlight the need to shift the focus from remedial to preventative AI regulation.

Defining ‘high-risk’ AI

The Proposals Paper sets out the Australian Government’s proposed approach to defining those high-risk settings where the mandatory guardrails would apply.

Known or foreseeable uses

The Australian Government is seeking feedback on the merits of a principles-based versus a list-based approach to defining high-risk AI where the proposed uses of an AI system or general-purpose AI (GPAI) model are known or foreseeable.

A list-based approach would entail a defined list of high-risk settings; the EU and Canada, for example, have identified biometrics, employment and law enforcement as high-risk. Such a list would, however, likely inadvertently capture some low-risk uses of AI.

The alternative is a principles-based approach, which the Australian Government has proposed adopting. The following proposed principles (Principles) would guide organisations in consistently assessing whether their development or deployment of an AI system is high-risk, by reference to:

  • the risk of adverse impacts to an individual’s rights recognised in Australian human rights law;

  • the risk of adverse impacts to an individual’s physical or psychological wellbeing;

  • the risk of adverse legal effects, defamation or similarly significant effects on an individual;

  • the risk of adverse impacts to groups of individuals or collective rights of cultural groups;

  • the risk of adverse impacts to the broader Australian economy, society, environment and rule of law; and

  • the severity and extent of those adverse impacts.

General-purpose AI

The Proposals Paper outlines that while GPAI models have the potential for known or foreseeable risks, they also pose unforeseeable risks because they can be applied in contexts they were not designed for.

The Proposals Paper proposes the following definition of GPAI:

“An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”.

Noting that GPAI can be adapted for an array of purposes, with correspondingly unforeseeable risks, the Australian Government proposes to apply the mandatory guardrails to all GPAI models (i.e. GPAI models are to be treated as inherently high-risk).

Guardrails

The Proposals Paper sets out 10 proposed mandatory guardrails for AI systems used in high-risk settings and GPAI. Each of the guardrails is preventative in nature, with an overarching focus on testing, transparency, and accountability.

The Australian Government notes in the Proposals Paper that the guardrails:

  • are not intended to be static, and will evolve with technological advancements;

  • will need to operate alongside laws and regulatory instruments to enable remediation and enforcement action when harm does occur;

  • are intended to be interoperable with those that other comparable jurisdictions have developed and adopted; and

  • should be implemented by developers and deployers of AI systems, with accountability to be apportioned based on various contextual factors.

The proposed guardrails broadly align with those contained in the Standard, with the exception of guardrail 10, under which the Standard instead directs organisations to engage with stakeholders and evaluate their needs and circumstances. In summary, the proposed mandatory guardrails would require organisations to:

  1. establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance;

  2. establish and implement a risk management process to identify and mitigate risks;

  3. protect AI systems, and implement data governance measures to manage data quality and provenance;

  4. test AI models and systems to evaluate performance, and monitor systems once deployed;

  5. enable human control or intervention in an AI system to achieve meaningful human oversight;

  6. inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;

  7. establish processes for people impacted by AI systems to challenge use or outcomes;

  8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;

  9. keep and maintain records to allow third parties to assess compliance with the guardrails; and

  10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.

Regulatory options to mandate guardrails

The Proposals Paper notes that in Australia, the development and use of AI is shaped by a range of legal frameworks, including consumer, privacy, anti-discrimination, competition, data protection and copyright law. However:

  • while current legal frameworks and regulatory regimes are responsive to certain risks of AI, there are gaps in how risks arising from some AI characteristics are prevented or mitigated and inconsistencies across sectors;

  • regulatory regimes are limited to their defined scopes and the entities they regulate;

  • regulatory regimes are consistently focused on remedial, rather than preventative, regulation; and

  • thresholds for action and penalties vary across regulatory regimes.

The Proposals Paper also reiterates the stance set out in the Australian Government’s interim response to the Discussion Paper that existing laws do not have the level of specificity required for effective AI risk management in high-risk settings.

The Proposals Paper outlines three ways the Australian Government could implement guardrails in high-risk settings to address the limitations of the current system:

Option 1: adapting existing regulatory frameworks to include the guardrails

This would involve embedding the guardrails into existing regulatory frameworks.

Option 2: introducing framework legislation, with amendments to existing laws to give it effect

This would involve creating new framework legislation that defines the mandatory guardrails and the threshold for when they would apply, which would be implemented through amendments to existing regulatory frameworks.

Option 3: introducing a new cross-economy AI Act

This would involve the introduction of a new AI-specific Act, which would:

  • define high-risk applications of AI;
  • outline the mandatory guardrails; and
  • establish a monitoring and enforcement regime overseen by an independent AI regulator.

Key takeaways

The Proposals Paper outlines three key ways the Australian Government may proceed to implement guardrails in high-risk settings to address the limitations of the current system.

All organisations involved in developing or deploying AI should familiarise themselves with the Proposals Paper and consider their response. Important in this review will be a consideration of risk management processes and data governance measures, which should be subject to frequent review and uplift in any event. The Australian Government is seeking feedback on all aspects of the Proposals Paper, and consultation is open until 4 October 2024.

All organisations in the AI supply chain (i.e. not just those developing or deploying AI systems in high-risk settings) should also review the Standard, which provides best practice guidance on using AI safely and responsibly. We consider that organisations which implement the voluntary guardrails in the short term will be well positioned to adapt with ease to future regulatory compliance obligations and to align with international norms. Adoption of the Standard will also demonstrate an organisation’s commitment to protecting individuals and communities, data, and society at large from AI-related risks and harms.

Please reach out if there is any support we can provide your organisation, including in responding to the Proposals Paper, adopting the Standard or reviewing internal AI policies.
