OVIC’s updated resources for use of GenAI tools in the Victorian Public Sector

The Office of the Victorian Information Commissioner (OVIC) has recently published helpful guidance for the Victorian Public Sector (VPS) setting out how entities might use Generative Artificial Intelligence (GenAI) tools, along with practical steps to mitigate risk when using them. As the third instalment in our review of OVIC guidance on the use of GenAI, this piece provides practical commentary on the two recent OVIC guidance pieces, both dated March 2025, along with learnings the VPS may take away and apply in its procedures.
For further background on this series, please review the eAlerts which have been prepared previously by Maddocks. You can find them here and here.
OVIC has recently published statements outlining its position on the use of GenAI tools within Victorian public sector organisations. The guidance is separated into two distinct application types: tools which are publicly available, and tools which require procurement at an enterprise (organisational) level. We have briefly summarised OVIC's position on each application type below, and provided some overall commentary on GenAI within the VPS more generally.
Like other GenAI guidance notes published by OVIC, these pieces include specific advice on the risks posed in relation to the Information Privacy Principles (IPPs) set out in Schedule 1 to the Privacy and Data Protection Act 2014 (Vic). OVIC asserts that the use of GenAI (in either public or enterprise forms) carries significant risk of an organisation breaching the IPPs. While all 10 IPPs are relevant, OVIC specifically mentions IPPs 1 to 5 as areas of concern for GenAI use. These IPPs relate to the collection of personal information, the use and disclosure of personal information, and data security. Further information on the IPPs can be found here.
We have set out a brief summary of the guidance for public and enterprise GenAI tools, key takeaways that we believe are important for responsible GenAI use, and ways that VPS organisations may address the evolving risks posed by these forms of technology.
Publicly Available Tools
OVIC categorises a publicly available tool as a GenAI tool that anyone can access via a web browser or application. Examples of these programs given by OVIC include ChatGPT, Google Gemini, Grammarly, Claude, Perplexity, Otter, Quillbot and Llama.
OVIC has identified that many of these tools will have “minimal controls for how the information entered is used or protected”. It is for this reason that OVIC recommends that VPS organisations:
- limit the public sector information that is entered into publicly available GenAI tools to information that is already publicly known or approved for public release; and
- ensure that personal information is not submitted to publicly available GenAI tools.
OVIC identified its key concerns with these types of publicly available GenAI tools as follows:
- Inadvertent disclosure of personal information: inadvertent disclosure of personal information to publicly available GenAI tools can lead to subsequent use by unauthorised persons or entities. OVIC notes that this may constitute a breach of IPPs 2.1 and 4.1 (relating to use and disclosure of personal information, and information security);
- Collection of personal information: generating new information with GenAI tools constitutes a new collection of personal information. OVIC highlights the importance of ensuring that personal information collected in this way (that is, by using and relying on GenAI outputs) is collected only where necessary, lawful and fair;
- Retention of personal information: information submitted to GenAI tools may be retained by the tool's owner for an unknown period, and it may be difficult for a VPS organisation to retrieve any submission made. Where submitted information cannot be retrieved or destroyed, the organisation risks breaching IPP 4.2, which relates to data destruction; and
- Fairness (relating to decision making or assessments): while not a privacy issue specifically, OVIC specified that GenAI tools should not be used to make decisions, carry out assessments or perform other administrative functions that may impact individuals.
Enterprise Tools
OVIC has also provided guidance in connection with the use of GenAI tools which are not publicly available, but are instead procured at an enterprise level and operate within a VPS organisation's managed environment. The OVIC guidance provides examples of enterprise tools that integrate with an organisation's existing systems, including Microsoft 365 Copilot, ChatGPT Enterprise, Zoom IQ and Slack GPT.
OVIC has stated that the use of the enterprise tools may both amplify existing information privacy and security risks, and create new issues entirely. In order to mitigate this, OVIC has provided VPS entities with a set of minimum expectations. These expectations encourage VPS organisations to identify existing information holdings and systems that may be impacted by the introduction of GenAI tools, consider how the GenAI outputs may be assessed, and conduct a security risk assessment for the integration of an enterprise GenAI system.
Much like the guidance for publicly available tools, OVIC has recommended that enterprise GenAI tools should not be used to make decisions, undertake assessments, or perform other administrative functions that may have “consequences for individuals or cause them significant harm”. We have previously detailed how this can adversely affect VPS organisations; that discussion can be found here.
What does this mean for VPS organisations?
A consistent theme across both OVIC pieces is the nexus between an organisation's decision to use GenAI and its ability to comply with the IPPs.
In order to meet OVIC’s expectations, VPS organisations should ensure they are using GenAI programs responsibly and in accordance with comprehensive policies and procedures which have been developed for the organisation. Relevantly, as derived from the OVIC guidance, we consider that this can be achieved by undertaking the following:
- for publicly available tools, the recommended approach is relatively simple: no personal information should be shared with any public tool, and any other information shared with these tools must already be publicly available;
- in the case of enterprise solutions, the onus is on VPS employees to ensure that the outputs of the tools are used appropriately. We consider that this creates an obligation for VPS organisations to keep clear records of when and how enterprise AI tools have been used;
- the organisation should be aware that continued enterprise GenAI use may lead to increased dependence on the tools in everyday operations. Anticipating this, and putting checks and balances in place to prevent over-reliance from becoming a greater issue, is one step organisations can take toward responsibly implementing AI tools at an organisational level; and
- regardless of whether a VPS organisation uses public or enterprise tools, we emphasise OVIC's recommendation not to use GenAI in decision making. Unfortunately, this situation has previously occurred (which we have summarised here) and resulted in an OVIC investigation into a VPS organisation. VPS users should implement GenAI to achieve operational efficiencies, rather than allowing it to assume the role of a decision maker.
We consider that an appropriate step in implementing any GenAI tool (whether public or enterprise) is for organisations to review their relevant data and technology use policies prior to use. The use of GenAI tools may deliver organisational efficiencies, but organisations risk neglecting their duties or breaching their own policies if use of the tools contravenes appropriate control measures. We have found that an effective approach for the VPS when implementing these tools is to first undertake a Privacy Impact Assessment (PIA). These assessments have encouraged responsible AI use and have resulted in the implementation of appropriate AI governance controls (for example, the drafting of an organisation-specific AI Transparency Statement). We have assisted with the preparation and implementation of these assessments and their findings, and we are available to discuss how they might be implemented in your organisation.
Maddocks is well equipped to assist with AI-related queries and concerns. If your organisation has recently implemented enterprise AI tools, or wishes to use publicly available tools in its operations, please reach out to our Government and Not-for-Profit team to understand how we can assist your organisation to mitigate any associated risk.