Legal Insights

Privacy Perspectives: Inside the EU’s Artificial Intelligence Act

By Katherine Armytage, Indi Prickett, and Clarissa Kwee

25 March 2024 • 5 min read

The European Parliament has voted in favour of the Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework on artificial intelligence (AI).


The AI Act, which is expected to become EU law in May 2024, will set a global benchmark for AI regulation. While Australia does not yet have any publicly announced plans to implement corresponding legislation, Australian Government agencies wanting to explore and use AI tools should still ensure that their planned use cases are consistent with the fundamental legal and ethical principles relevant to AI that are reflected in the AI Act. This will help ensure that robust technical, risk and governance controls are in place to continue protecting the rights and personal information of Australians.

Overview of the AI Act

The AI Act is designed to regulate the use of AI in the EU, with the aim of ‘promoting the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection against harmful effects of artificial intelligence systems in the Union and supporting innovation’.

The AI Act will achieve its aims in a number of ways, including by:

  • prohibiting certain AI practices; and
  • imposing specific requirements for high-risk AI systems, and obligations on the operators of those systems.

For example, the following AI practices will be prohibited:

  • using biometric categorisation systems that categorise people based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (although there will be exceptions for law enforcement);
  • using AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; and
  • using AI systems to infer emotions of people in workplaces and educational institutions, except where the use is for medical or safety reasons.

The AI Act has clearly been developed as a foundational piece of legislation that will guide the EU to use AI in a measured, ethical and privacy-compliant manner.

Relevance to Australian Government agencies

The Australian Government is continuing to take a coordinated approach to AI, noting the rapid emergence of AI technology and the need to regulate it appropriately. For example, Data and Digital Ministers across Australia are developing an initial framework for assuring AI used by governments, which aligns with the AI Ethics Principles and includes common assurance processes.

This focus on AI reflects the Australian community’s divergent attitudes to emerging technologies like AI. While some Australians appreciate that AI will streamline services and programs and deliver economic benefits, many are cautious about its use in decision-making and have broader concerns about the privacy and security risks associated with the technology.

Although there is currently no formal, AI-specific regulation in Australia, Australian Government agencies wanting to explore and use AI tools should ensure that their planned use cases are consistent with the fundamental legal and ethical principles relevant to AI, including by ensuring that robust technical, risk and governance controls are in place to continue protecting the personal information of Australians. Community expectations are an important touchstone here. In the most recent Community Attitudes to Privacy Survey (commissioned by the Office of the Australian Information Commissioner (OAIC)):

  • 96% of respondents stated that they wanted conditions in place before AI is used to make a decision that might affect them (e.g. the right to have a human review the relevant decision);
  • 71% of respondents want to be told that AI is being used to handle their personal information;
  • 56% of respondents want the accuracy of the results of AI to be validated by a human;
  • 43% of respondents are concerned about their personal information being used by AI technology; and
  • 70% of respondents do not consider it fair and reasonable for AI to be used to inform decision-making that may have a significant impact on an individual.

Next steps

If your agency is considering implementing AI tools, you should first consider conducting a foundational privacy impact assessment (PIA), or otherwise obtaining foundational privacy advice, to guide your agency in ensuring that its use of AI tools is not only compliant with the Privacy Act 1988 (Cth), but is also consistent with ethical principles and the expectations of the Australian community.

For example, we anticipate that this PIA or advice would consider, amongst other things:

  • the need to ensure that, where possible, individuals are given information about how their personal information will be handled using AI tools, to enhance the agency’s compliance with APP 1 and APP 5;
  • the need to ensure that any collection of information, including “collection by creation”, through the use of AI tools will comply with APP 3;
  • the need to consider features of AI tools that may create security risks;
  • how the personal information will be stored and protected; and
  • how the quality of the data will be assured, noting that AI tools can inherit biases, factual errors and objectionable content from their training data, which can degrade the accuracy and objectivity of outputs, and can otherwise produce ‘hallucinations’ and other errors because of the way the underlying algorithms operate.

If you would like to discuss or have questions about any of the above, please contact us.

