This document has been prepared by the ACCC, ACMA, eSafety Commissioner and OAIC in their capacity as members of the Digital Platform Regulators Forum (member regulators). The information in this publication is for general guidance only. It does not constitute legal or other professional advice and should not be relied on as a statement of the law in any jurisdiction. Because it is intended only as a general guide, it may contain generalisations. You should obtain professional advice if you have any specific concern.

The member regulators have made every reasonable effort to provide current and accurate information, but do not make any guarantees regarding the accuracy, currency or completeness of that information.

Parties who wish to re-publish or otherwise use the information in this publication must check it for currency and accuracy before each publication edition, as member regulator guidance and relevant transitional legislation frequently change. Any queries parties have should be addressed to the


This literature summary is the first output of the Digital Platform Regulators Forum's (DP-REG) joint work in exploring relevant digital platform technologies and their regulatory implications. Each DP-REG member is also separately considering more specific harms stemming from AI relevant to its respective mandate, as outlined below.

Background to DP-REG

The Australian Government is pursuing a wide range of regulatory interventions to help protect Australians online.

To support a streamlined and cohesive approach to regulating digital platforms, the Australian Communications and Media Authority (ACMA), the Australian Competition and Consumer Commission (ACCC), the Office of the Australian Information Commissioner (OAIC), and the Office of the eSafety Commissioner joined forces in March 2022 to establish the Digital Platform Regulators Forum (DP-REG).

DP-REG allows these independent regulators to share information about, and collaborate on, cross-cutting issues and activities on the regulation of digital platforms. This includes consideration of how competition, consumer protection, privacy, online safety and data issues intersect. Where appropriate, DP-REG engages with stakeholders collectively on issues of mutual interest or concern. 

DP-REG is not a decision-making body and has no bearing on members’ existing regulatory powers, legislative functions or responsibilities. Collaboration under DP-REG is intended to be flexible and recognise the limits of each member’s respective regulatory framework. Members are still free to engage bilaterally or outside of DP-REG on issues related to digital platforms. 

The current DP-REG governance structure, as outlined in our terms of reference, enables effective cooperation among our regulators at different levels. The heads of each member regulator determine DP-REG’s strategic direction, including agreement on the group’s annual priorities. 

DP-REG’s strategic priorities for 2022-23 are outlined in our June 2022 communique. These include a focus on assessing the impact of algorithms, improving the transparency of digital platforms’ activities and how they protect users from potential harm, and increasing collaboration and capacity building between the four members.

Relevance of algorithms to the remit of DP-REG members


Australian Communications and Media Authority

The Australian Communications and Media Authority (ACMA) is the independent authority responsible for regulating media and communications in Australia. Most of the entities it regulates use algorithms to deliver content and advertising to Australians, which brings corresponding benefits, risks and challenges. Algorithms are crucial in moderating content and recommending news items. They help broadcasters and streaming services provide targeted advertising and content to users.

However, algorithms can spread and amplify harmful content such as misinformation and disinformation. To address these concerns, the Australian Government plans to grant new powers to the ACMA in this area, following the ACMA’s oversight of the development and operation of the Australian Code of Practice on Disinformation and Misinformation since 2020.

Beyond mis- and disinformation, the ACMA takes measures to respond to other sector-specific challenges involving algorithms. This includes monitoring technological solutions to reduce the severity of scams, engaging with stakeholders to understand the role of algorithms in targeted advertising, and conducting consumer research to gain insights into the changing communications and media environment.


Australian Competition and Consumer Commission

The Australian Competition and Consumer Commission (ACCC) is an independent Commonwealth statutory agency that promotes competition, fair trading and product safety for the benefit of consumers, businesses and the Australian community. The primary responsibilities of the ACCC are to enforce compliance with the competition, consumer protection, fair trading and product safety provisions of the Competition and Consumer Act 2010 (CCA), regulate national infrastructure, and undertake market inquiries and studies.

The ACCC has been closely considering the competition and consumer impacts of digital platform services over recent years. This includes publishing reports such as the 2019 Final Report in the Digital Platforms Inquiry, the 2021 Final Report in the Digital Advertising Services Inquiry, and the current Digital Platform Services Inquiry, which commenced in February 2020 and is producing six-monthly reports until 2025. 

The ACCC recognises that algorithms are used in a variety of contexts for many different purposes, bringing both benefits and potential risks for competition and consumers. In addition to their relevance to the ACCC’s current Digital Platform Services Inquiry, the operation of algorithms may raise the same types of competition and consumer law considerations that arise in other sectors.

The ACCC has also taken several enforcement actions in cases involving algorithm-related misconduct (for example, the Trivago case and the iSelect case).

Because this literature summary focuses on the particular types of algorithms most relevant to all DP-REG member regulators, some potential issues posed by algorithms relevant to our remit, such as algorithmic collusion, are not explored in this document.


Office of the Australian Information Commissioner

The Office of the Australian Information Commissioner (OAIC) regulates the Privacy Act 1988, which applies to Australian Government agencies and some private sector organisations. The Act contains 13 Australian Privacy Principles (APPs), which apply across the personal information lifecycle, from collection through to use and disclosure, storage and destruction.

The APPs are technology-neutral and are designed to adapt to changing and emerging technologies. For example, the obligations in the Privacy Act apply where personal information is used to train, test or deploy algorithms. These include obligations to notify individuals about the handling of their personal information, limitations on collecting personal information (including information collected through creation), limitations on the use and disclosure of personal information, and requirements to provide mechanisms for individuals to access and correct their personal information, among other obligations.

The OAIC publishes guidance to help entities that develop and deploy algorithms using personal information to identify and mitigate privacy risks and impacts. The Guide to data analytics and the Australian Privacy Principles provides guidance in the data analytics context, while the Guide to undertaking privacy impact assessments provides general guidance on identifying and mitigating privacy risks. In addition, the Australian Privacy Principles guidelines outline the mandatory requirements of the APPs and how the OAIC interprets them.

eSafety Commissioner

eSafety is Australia’s national online safety educator and regulator. Its functions include coordinating activities of Commonwealth Departments, authorities and agencies relating to online safety for Australians. eSafety’s approach to algorithms is multi-faceted and involves prevention, protection, and proactive and systemic change.


Prevention

eSafety supports and conducts education and community awareness activities. Recognising the importance of enhancing digital and algorithmic literacy and giving people the skills and confidence to manage their online experiences, eSafety is developing education and training programs to raise awareness of the potential harms associated with recommender systems and the tools available to manage them.


Protection

eSafety administers reporting schemes to investigate and act against illegal and restricted online content, the non-consensual sharing of intimate images (image-based abuse), cyberbullying material targeting a child, and seriously harmful abuse material targeting an adult. If harmful content goes undetected by algorithm-based moderation systems or spreads through recommender systems, individuals can report it to eSafety.

Proactive and systemic change

The Online Safety Act 2021 enables eSafety to require online service providers to report on how they are meeting the Australian Government’s Basic Online Safety Expectations. This includes asking services about their content moderation and recommendation algorithms to improve transparency, accountability, and safety practices.

In June 2023, eSafety introduced enforceable industry codes that require five sections of the online industry to take steps to reduce the availability of seriously harmful online content such as child sexual abuse and pro-terror material. This includes proactive detection requirements on certain social media services. The code obligations come into effect in December 2023.

The industry codes scheme under the Act is a co-regulatory scheme and eSafety is moving to develop industry standards for two sectors of the online industry. The use of systems, technology and/or processes (including algorithms) to detect harmful content plays a crucial role in determining appropriate measures for these standards. 

eSafety conducts consultation and horizon scanning to remain focused on the future and ready for emerging issues. This informs the development of position statements on tech trends and challenges. You can find eSafety’s position on recommender systems and their algorithms here.

eSafety also supports the online industry to enhance its online safety measures – including content moderation and recommendation algorithms – through its Safety by Design initiative.

Purpose of this literature summary

This literature summary expands and consolidates DP-REG members’ understanding of the types of algorithms relevant to their work and supports DP-REG’s strategic priorities for 2022-23. Desktop research was conducted using resources available to DP-REG member regulators. Deepening our knowledge of these risks can support the future work of individual regulators and of DP-REG.

DP-REG does not claim this paper covers every potential impact of digital platform algorithms or includes every relevant source.