Legal Insights

The Dr(obot) will chat(GPT) with you now

23 May 2023

With the rise of ChatGPT, we consider how to harness this technology while ensuring the risks associated with it are mitigated.

In brief

ChatGPT has exploded onto the scene and threatens to revolutionise how we work. In addition to passing the US bar exam, ChatGPT 4.0 is reported to have passed the US medical licensing exam with flying colours.

Given this sophistication, the healthcare industry can (in theory) leverage ChatGPT's powers to streamline processes and improve services. However, many of the challenges associated with AI and machine learning remain. An organisation may be able to shield itself from many of the principal legal and commercial risks of ChatGPT 4.0 whilst enjoying its benefits by:

  • developing a ChatGPT use policy and risk assessment framework, and providing staff training;
  • including appropriate liability and warranty provisions in contracts with service providers that use ChatGPT; and
  • including appropriate exclusion of liability provisions in contracts with customers (if they are not individuals).

ChatGPT risks

Getting it right
First and foremost, ChatGPT might give the wrong answer. This may be for any one of several reasons:

  • Hallucinations: There are several well-documented instances in which an AI system has simply made up seemingly plausible information or stated something untrue. This is referred to as a 'hallucination'. Perhaps most famously, this occurred when Google's own AI system (Bard) produced an error in its first demo.[1]
  • Bias: As ChatGPT draws its information from sources that include many user-generated content sites, its answers may contain inappropriate biases or conclusions. In this way, it could produce wrong, harmful and/or biased answers.
  • Timing: ChatGPT's knowledge currently ends at September 2021, so it may fail to take more recent information into account and may make simple reasoning mistakes.

Accordingly, there is a real risk that ChatGPT (or an equivalent chatbot) will generate a wrong answer. In the context of healthcare, this could lead to very serious repercussions unless carefully managed.

Controlling the content
Second, ChatGPT is not a safe place to put information that the provider does not want disclosed. The Terms of Use for ChatGPT (Terms) provide that the user's 'Content' (i.e. inputs and responses) may be used as OpenAI (the owner of ChatGPT) sees fit. Further, OpenAI has identified bugs that allowed ChatGPT to be 'persuaded' by certain inputs to divulge information that it should not disclose.

Accordingly, the use of ChatGPT presents risks that need to be managed for organisations operating in the healthcare sector. They include:

  • Privacy: Content may be subject to the Privacy Act 1988 (Cth), raising potential privacy breach issues, including unauthorised cross-border data transfers and unauthorised disclosures of sensitive health information.
  • Confidentiality: Content may be subject to confidentiality obligations, including those outlined in the Australian Medical Association Code of Conduct.
  • Intellectual Property: Content may infringe the intellectual property rights of other parties.
  • Other: AI responses may also lack empathy, fail to take broader context into account and fail to comply with applicable medical professional standards.

Liability: Blaming the bot
Another issue arises from the fact that the ChatGPT Terms state that:

  • neither OpenAI nor any of its affiliates will be liable for a range of damages, even if OpenAI has been advised of the possibility of such damages; and
  • OpenAI does not warrant that the Services will be uninterrupted, accurate or error-free, or that any content will be secure or not lost or altered.

Accordingly, there is a real risk that if ChatGPT generates a wrong answer, users cannot hold ChatGPT or OpenAI responsible due to the Terms. This magnifies all of the above risks.

Mitigating the risks of an AI System

To mitigate the risks outlined above, organisations may wish to consider:

  • Use Policy: Implementing an AI use policy that sets out how AI systems may be used internally. For example, it may be permissible to use ChatGPT to develop an initial issues list, prepare first drafts of documents or undertake basic research – all subject to careful checking – but not to generate reports or medical guidance or advice.
  • Contract terms: Including clear obligations in all contracts with service providers:
    • restricting or managing their use of AI systems; and
    • requiring them to comply with the organisation's ChatGPT use policy and to provide prompt notice of any inadvertent disclosures.
  • Records: Maintaining accurate and up-to-date records of the AI systems and versions in use, and imposing similar obligations on service providers that use AI systems.
  • Training: Providing detailed mandatory training to staff members (and service providers) who may be using AI, to ensure that they know the benefits and limitations of the system.
  • Responsibility: Understanding and preparing for the fact that the organisation, not ChatGPT, retains responsibility for its advisory services. If information is obtained from service providers, the relevant contract should clearly state that the service provider is liable for any incorrect information.
  • Warranties: Including clear warranties in service provider contracts for AI-generated responses, including a guarantee that the service provider has checked the AI system's responses and accepts responsibility for any errors in those responses.
  • Risk assessment: Before using ChatGPT, conducting a risk assessment to confirm that the organisation understands the advantages, disadvantages and risks of the latest ChatGPT developments and use cases. If ChatGPT is to be used by a service provider, a risk assessment of that use should also be conducted.
