Legal Insights

What Victorian Government personnel need to know about ensuring privacy compliance with ChatGPT usage

• 17 December 2024 • 6 min read

This is the second instalment in a series examining statements from the Office of the Victorian Information Commissioner (OVIC) on practical uses of Generative AI (GenAI) in the Victorian Public Service. Below, we outline a recent investigation into a public servant whose actions were found to have contravened a number of Information Privacy Principles (IPPs).

Much like our first piece (which can be found here), this article is a practical summary of a public statement made by OVIC, focusing on the lessons that other Government and Non-Government organisations can draw from it.

OVIC has recently published an investigation report (OVIC Report) addressing an instance of misuse of ChatGPT by a member of the Victorian Public Service (VPS). Describing the incident as a very real example of the privacy risks associated with the use of GenAI, OVIC stated that the VPS employee entered a “significant amount of personal and delicate information” into ChatGPT, including names and other sensitive information.

Conduct necessitating the investigation

The VPS employee had used the GenAI tool when drafting a report containing sensitive information in relation to a child (Report). The use of GenAI in this instance resulted in a Report that contained inaccurate personal information, which ultimately downplayed the risks to the child. Fortunately, OVIC noted that the impact of GenAI did not change the decision making of the VPS body or the Court in relation to the child.

Through its investigation, OVIC determined that the VPS employee entered sensitive information about the mother, father, carer and child into ChatGPT. In entering this information, OVIC considered that the VPS employee made an inadvertent disclosure to OpenAI (the company that owns ChatGPT). Such an inadvertent disclosure becomes part of an unknown database managed by the owner of the engine. In this instance, the sensitive information was released from the control of the VPS body, which is unable to ascertain if and when any further disclosure of the information is made.

In completing the investigation, OVIC looked at a wider sample of the work completed by this specific VPS employee. Over a one-year period, reviewers identified close to 100 additional cases showing indications of GenAI use in the employee's work. However, the VPS body's reviewers noted that none of these further instances had the same potential impact on the assessment of risk as the Report incident.

Further, and more broadly, in the period of July to December 2023, nearly 900 of the VPS body’s employees had accessed the ChatGPT website. Such a figure represented around 13 per cent of its total workforce.

OVIC’s findings

OVIC conducted a thorough investigation to ascertain whether any further inadvertent disclosures had been made and, if so, the exposure to risk that the VPS body might face as a result. Through a joint investigation, OVIC and the VPS body identified a number of indicators that GenAI engines had been used. OVIC considered the following examples to be indicators that ChatGPT had been used:

  • Sophisticated language
  • Unusual content
  • Unusual Child Protection Intervention
  • Overly positive descriptors
  • Unusual terminology
  • Nonsensical references
  • Inaccurate information
  • Unusual reference to legal intervention
  • American spelling and/or phrasing

Recognising these drafting habits can help identify instances where GenAI has been used. Auditing work in this way may expose potential issues before they escalate into a larger problem (as occurred in this VPS body incident).
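As an illustration only, a first-pass screen for some of the indicators listed above could be sketched in a few lines of code. The indicator lists below are hypothetical examples we have chosen for demonstration, not OVIC's actual criteria, and any genuine audit would rely on richer checks and human review:

```python
import re

# Hypothetical indicator lists for illustration only; a real audit would
# use organisation-specific terms and human judgment, not keyword matching.
AMERICAN_SPELLINGS = ["organization", "behavior", "center", "counseling"]
UNUSUAL_TERMS = ["holistic tapestry", "leverage synergies"]

def flag_genai_indicators(text: str) -> list[str]:
    """Return labels for indicator terms found in a draft document."""
    flags = []
    lowered = text.lower()
    for word in AMERICAN_SPELLINGS:
        # Whole-word match so "center" does not match "centered" etc.
        if re.search(rf"\b{word}\b", lowered):
            flags.append(f"american_spelling:{word}")
    for term in UNUSUAL_TERMS:
        if term in lowered:
            flags.append(f"unusual_terminology:{term}")
    return flags

draft = "The counseling session showed positive behavior from the carer."
print(flag_genai_indicators(draft))
```

A screen like this can only surface drafts for closer inspection; the subtler indicators OVIC describes (inaccurate information, nonsensical references, overly positive descriptors) still require a human reviewer.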

The VPS body submitted that, at the time of the Report incident, it had a range of controls in place to mitigate the privacy risks associated with the use of ChatGPT by staff. These controls included, but were not limited to, the Acceptable Use of Technology Policies, eLearning modules on privacy awareness and security awareness, and the VPS body's values (collectively, the Measures).

In the context of the Report incident, OVIC found the Measures to be insufficient in mitigating privacy risks. OVIC established that, based on the training materials provided to staff (including the Measures), staff could not be expected to understand whether they could use novel GenAI tools (like ChatGPT), or how to use them appropriately, from general guidance materials alone. Whilst the VPS body had organised tailored professional development sessions covering the risks associated with GenAI, these sessions were directed at managers and leaders, with the intention that this knowledge would inform conversations at a team level. On balance, OVIC found that such an approach was not an effective way of educating the general workforce about how GenAI tools work and the privacy risks associated with them.

What this means for other Government and Non-Government Entities

The findings of the OVIC Report give helpful guidance to the VPS on the use of GenAI in carrying out their duties, and highlight the very real privacy risks that should be considered before making use of this technology. The OVIC Report also underscores the importance of organisations implementing robust policies and procedures governing the use of GenAI by their staff.

We consider OVIC's comments on the preparation and education of the relevant VPS body's staff to be an important takeaway. Staff training and development is a necessity, but there is typically very little guidance on what must be communicated or what constitutes best practice. In most other contexts, eLearning modules, IT use policies and general staff training sessions would be considered reasonable measures. OVIC's finding that such measures were insufficient in the AI context is therefore an important lesson: VPS employees at all levels must be adequately trained on the privacy risks associated with the use of generative AI. Organisations will need to be agile in responding to new technological trends, and alert to the evolving need to make the training delivered to staff meaningful.

Looking for assistance?

Don't hesitate to contact a member of Maddocks' Victorian Privacy, Data and Information Law team with any queries about your organisation's cyber or privacy governance or operations.
