
Defamation laws changed this month - four things that every business with an online presence should know (Part 4)

• 11 July 2024 • 9 min read

The regulation of digital platforms in New South Wales and the Australian Capital Territory has taken on a new character from 1 July 2024, as ‘Part A’ of the ‘stage two’ national defamation reforms comes into force. These reforms are directed at the publication of material on virtually any digital platform, including social media platforms, review websites, search engines and forum platforms. Consequently, the reforms will capture a wide group of online publishers, from digital behemoths (such as Meta and X (formerly Twitter)) to mums and dads who host community Facebook pages.

These legal reforms continue to evolve Australia’s approach to uniform defamation legislation, as other states and territories look to catch up to New South Wales and the Australian Capital Territory. Victoria is close behind, while the remaining states and territories have yet to enact legislation addressing the regulation of digital intermediaries. Until they do, there is a brief window in which plaintiffs commencing defamation proceedings may cherry-pick their jurisdiction, depending on their particular objectives. As of this month, four key changes set New South Wales and the Australian Capital Territory apart from other Australian jurisdictions:

  1. new exemptions from liability for defamation for digital intermediaries in certain circumstances;
  2. greater flexibility for publishers in the content of offers to make amends;
  3. a new statutory defence for digital intermediaries that embed a streamlined complaints mechanism for users of their platform; and
  4. sweeping new powers for courts to make orders against non-party digital intermediaries.

New legislative concepts: Digital matters and digital intermediaries

The reforms in New South Wales and the Australian Capital Territory introduce a number of new concepts, including:

  • the “digital matter”, being a matter published in electronic form by means of an online service (including internet-based social media platforms, forums and websites that enable users to share knowledge with one another); and
  • the “digital intermediary”, being a person, other than an author, originator or poster of the digital matter, who provides or administers the online service by means of which the digital matter is published.

These changes capture a significant amount of online content and extend to a broad range of people and organisations. The impacts of the changes are threefold:

  1. digital intermediaries are likely to incorporate a streamlined complaints mechanism that allows for the removal of access to potentially defamatory material online within 7 days from receipt of the complaint;
  2. parties who complain that a digital matter contains material that defames them will now be able to ask a court to make orders requiring a digital intermediary to take down the allegedly defamatory matter, even where the digital intermediary is not a party to the proceedings; and
  3. a broader range of conduct by a broader range of online participants will fall outside the reach of Australia’s defamation laws, and parties will have access to a wider set of defences and exemptions from liability.

The reasons for change - Parliament’s response to the High Court’s findings

These reforms were introduced following a series of court decisions holding that third-party providers of digital platforms could be found to have ‘participated’ in a publication and therefore be held liable for defamatory material published by their users. For example:

  • In 2018, Mr Trkulja was successful in a defamation action against Google, after discovering statements asserting (incorrectly) that he was a Melbourne criminal, along with photos of himself, displayed in response to searches for “Melbourne criminals” and “Melbourne criminal underworld figure”. The High Court unanimously held that Google was capable of being held liable for auto-complete predictions and auto-generated snippets containing defamatory statements about Mr Trkulja. That liability could arise even before Google had notice of the publication of the material on its platform.

  • In 2021, the High Court held that media companies that maintained public Facebook pages were liable as publishers of defamatory comments left by third-party users on those pages about the plaintiff, Dylan Voller. Mr Voller had been detained in the Don Dale Youth Detention Centre in Darwin and, after his release, had given a number of interviews about his experiences. Video recordings of those interviews were published online, and a number of people left comments on the posts alleging (again, incorrectly) that Mr Voller had engaged in violent crimes. In finding against the media companies, the Court emphasised:

    • the page operators’ role in facilitating the publication of the defamatory third-party comments; and
    • their commercial strategy of profiting from user engagement with the material posted on their pages, and the need for them to be accountable for the consequences of their commercial activities.

  • In 2022, the High Court found that Google had not published a defamatory statement by merely providing a hyperlink in its search results to a defamatory article concerning alleged links between the underworld and a Victorian solicitor. The Court did, however, leave open the possibility that a sponsored hyperlink may amount to publication of the linked material.

In these decisions, the Courts confirmed the orthodox approach to liability for a ‘publication’. However, those findings gave rise to a wave of stakeholder concern about the exposure of digital intermediaries to liability for matters published by their users. Concerns were also raised about possible flow-on effects, such as the risk of a dampening of free speech as digital intermediaries limit or remove user content to avoid liability. The legislative reforms that commenced this month (July 2024) respond to those concerns, aiming to strike a balance between free speech, the liability of digital intermediaries and the protection of individual reputations.

1. Exemptions from liability for defamation for digital intermediaries

The reforms provide for two exemptions from liability for defamation targeting two specific classes of digital intermediaries – “passive service providers” and “search engine providers”:

  • Passive service providers (those providing caching, conduit or storage services, such as cloud storage providers) are exempt where they can show that:
    • their role in the relevant publication was limited to providing that service; and
    • they did not take an active role in the publication (for example, by encouraging, promoting or editing it).
  • Search engine providers are exempt from liability in respect of standard (that is, non-sponsored) search results that contain defamatory material or a hyperlink to defamatory material that appears on third-party websites – effectively mirroring the High Court’s 2022 decision discussed above.

Under the new regime, a court must determine whether a digital intermediary exemption is established as soon as practicable before the trial commences, unless it is satisfied that there are good reasons to postpone that question to a later stage of the proceedings. The Court may determine the question on the basis of the claim alone (without the need to review evidence) if the claim is sufficiently detailed to show that the exemption requirements are met.

2. Greater flexibility in offers to make amends

The reforms broaden the scope of matters that may be put forward by a person alleged to have published defamatory material in an offer to make amends. For instance, the changes permit any party alleged to have published a ‘digital matter’ to offer to take an “access prevention step” in relation to the matter. An “access prevention step” includes any step to remove the matter from the relevant platform, or to block, disable or otherwise prevent access to it, for some or all users of the platform.

3. New statutory defence available to digital intermediaries

Digital intermediaries may now rely on a statutory defence to a defamation claim, provided that they can prove that:

  • they were acting as a digital intermediary in relation to the publication;
  • they had provided an “accessible complaints mechanism” (for example, an email address, direct messaging address, webpage or part of a webpage) for complainants to use; and
  • they had taken reasonable access prevention steps no later than 7 days after receiving a complaint about the publication.

4. Courts now armed with sweeping powers to make orders against non-party digital intermediaries

Finally, the reforms empower the Court to order digital intermediaries to take steps to prevent access to certain material that the Court has found to be defamatory or that is the subject of an interlocutory injunction in ongoing defamation proceedings. Such orders may be made:

  • on an interim or final basis, when the Court considers such steps are necessary to prevent or limit defamatory material from being published or republished; and
  • on a final basis, only after providing the relevant digital intermediary with an opportunity to make submissions to the Court as to whether the order should be made.

What next?

Parties defamed by material published online via a digital intermediary’s platform should consider at an early stage:

  • making use of any accessible complaints mechanism available on the relevant platform to have the offending matter removed; and
  • if necessary, seeking a court order that one or more digital intermediaries limit access to the defamatory material.

For digital intermediaries (extending from Meta and X through to small businesses with an interactive webpage or social media page), it is particularly important to think about the complaints processes available to their users. At the very least, they should establish the following (a simple illustrative sketch follows the list):

  • a complaints mechanism through which users can notify them of a potentially defamatory matter published on their platform;
  • a complaints procedure ensuring that complaints are quickly assessed and acted on (noting the 7-day window for taking access prevention steps); and
  • functions on their platform that allow them to take an ‘access prevention step’, including removing or limiting access to the defamatory matter.
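
To make the 7-day window concrete, here is a minimal TypeScript sketch of how a platform operator might track complaints against the statutory deadline. It is purely illustrative: the legislation does not prescribe any particular implementation, and the ComplaintRecord type, field names and helper functions below are hypothetical.

```typescript
// Illustrative sketch only; names and structure are hypothetical,
// not prescribed by the defamation reforms.

interface ComplaintRecord {
  id: string;
  contentUrl: string; // the allegedly defamatory digital matter
  receivedAt: Date;   // when the complaint arrived via the complaints mechanism
  resolvedAt?: Date;  // when an access prevention step was taken, if any
}

// Statutory window for taking reasonable access prevention steps.
const RESPONSE_WINDOW_DAYS = 7;

// Deadline by which an access prevention step should be taken.
function responseDeadline(complaint: ComplaintRecord): Date {
  const deadline = new Date(complaint.receivedAt);
  deadline.setDate(deadline.getDate() + RESPONSE_WINDOW_DAYS);
  return deadline;
}

// Unresolved complaints at or past their deadline, so they can be
// escalated for an access prevention step (removing, blocking or
// disabling access to the matter).
function overdueComplaints(
  complaints: ComplaintRecord[],
  now: Date = new Date(),
): ComplaintRecord[] {
  return complaints.filter(
    (c) => c.resolvedAt === undefined && now >= responseDeadline(c),
  );
}
```

In practice, an operator would run a check like this on a schedule (for example, daily) and connect it to whatever removal or blocking functions the platform already exposes.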

