Senate Select Committee on Adopting Artificial Intelligence (AI)

PO Box 6100 

Parliament House 

Canberra ACT 2600

Dear Chair,

The Digital Platform Regulators Forum (DP-REG) welcomes the opportunity to contribute to the Select Committee on Adopting Artificial Intelligence (AI).

DP-REG comprises the Australian Competition and Consumer Commission (ACCC), the Australian Communications and Media Authority (ACMA), the eSafety Commissioner (eSafety) and the Office of the Australian Information Commissioner (OAIC).

Through DP-REG, members share information about, and collaborate on, cross-cutting issues and activities involving the regulation of digital platforms. This includes consideration of how competition, consumer protection, privacy, online safety and data issues intersect. The structure, purpose and goals of DP-REG are outlined in our Terms of Reference.

The purpose of this submission is to draw the Committee’s attention to DP-REG’s work to date in developing an understanding of the opportunities and impacts arising out of the uptake of AI technologies in Australia.

The uptake of AI technologies in Australia has created both risks and opportunities. In particular, the use of AI by digital platforms can have direct impacts on users and businesses that government and regulators need to actively manage. Digital platform regulators are at the front line of these issues, and we are seeing that many of AI’s impacts may exacerbate existing and widespread harms that we are already working to address.

In July 2023, DP-REG members provided a joint submission to the Department of Industry, Science and Resources consultation on the safe and responsible use of AI in Australia.

  • In this submission, DP-REG highlighted the potential impacts of AI in relation to each member’s existing regulatory framework. The submission supported an approach which considered how existing regulatory frameworks may be utilised or strengthened, including through existing reform proposals, to provide appropriate safeguards for the Australian public in relation to this technology.
  • The submission also flagged that coordination between DP-REG members and other arms of government to leverage complementary strengths and expertise will remain crucial to Australia’s response to AI.

In November 2023, DP-REG published a working paper on the large language models (LLMs) used in generative AI. 

  • The working paper provides an overview of LLMs and their impact on the regulatory roles of each member. The paper supports DP-REG’s 2023-24 strategic priorities, which include a focus on evaluating the benefits, risks and harms of generative AI. 

In November 2023, DP-REG also published a literature summary on the harms and risks of algorithms, which considers the harms and risks posed by some commonly used types of algorithms to end-users and society. This paper expands and consolidates members’ understanding of the types of algorithms relevant to their work.

Recognising rapid developments in this area, such as the increasing availability of models which can generate other forms of content (for example, images, video or audio), DP-REG is continuing its exploration of generative AI in support of our 2023-24 strategic priorities.

In addition to this joint work, member regulators are also individually considering AI within the context of their existing and/or proposed regulatory frameworks. A brief explanation of each regulator’s role and some examples of the work they are carrying out are provided below.  


Consumer protection & Competition – The ACCC’s consumer protection role includes enforcement of the Australian Consumer Law (ACL) to ensure that consumers and small businesses are protected from misleading and deceptive conduct, unconscionable conduct, unfair terms and conditions and unsafe products, and to promote fair trading. The ACCC also operates the National Anti-Scam Centre (NASC) and Scamwatch website which helps Australians learn how to recognise, report, and protect themselves from scams.

Another primary responsibility of the ACCC is to promote competition by enforcing the Competition and Consumer Act 2010 (Cth), regulating national infrastructure (such as telecommunications infrastructure), implementing the Consumer Data Right, and undertaking market studies as directed by the Treasurer, including in relation to digital platforms services. 

The uptake of AI technologies in Australia could have serious implications for the work of the ACCC, including the potential to exacerbate a range of risks and harms that the ACCC is working to address. The emerging AI issues most relevant to the ACCC's regulatory responsibilities include:

  • the use of AI to create and disseminate misleading or deceptive advertising, scams, fake reviews and harmful applications
  • the potential for AI to harm consumers and small businesses – for example, by LLM chatbots providing false but authoritative-sounding statements in response to user queries
  • the need to ensure healthy competition in the provision of AI technology and services, as well as related markets affected by uptake of AI
  • the potential use of AI to engage in anti-competitive conduct, such as through ‘algorithmic co-ordination’ that enables competing firms to indirectly set prices, determine bids or share markets

In addition to considering these intersections as part of its core responsibilities, the ACCC is monitoring the work of international counterparts in this area. For example, the US Federal Trade Commission (FTC) is inquiring into whether investments and partnerships pursued by dominant companies risk distorting and undermining fair competition, while the UK Competition and Markets Authority recently released its second paper on AI foundation models and has noted a range of risks, such as powerful incumbents exploiting their position to distort choice and restrict competition.[1] Authorities in a range of other jurisdictions, such as the EU, Canada, India and France, are also considering these issues.[2]

In addition, the ninth interim report of the ACCC’s Digital Platform Services Inquiry will consider competition and consumer issues in relation to general search services in Australia, including search quality. As part of this report, the ACCC is looking to understand the potential impact of generative AI on the competitive landscape in general search services. An issues paper for this report was published on 18 March 2024 and the ACCC is required to submit its report to the Treasurer by 30 September 2024.


Media and the information environment – The ACMA is the independent statutory authority responsible for the regulation of communications and media, and some aspects of regulation of online content delivered by digital platform services in Australia. The ACMA currently oversees the voluntary Australian Code of Practice on Disinformation and Misinformation. The ACMA also has powers to combat scams delivered by phone and SMS.

  • In July 2023, the ACMA released its second report to government on Digital Platforms’ Efforts Under the Australian Code of Practice on Disinformation and Misinformation. That report highlighted the exponential growth of generative AI and the potential for AI to be misused to create and distribute mis- and disinformation.
  • The Government is progressing legislation that would provide the ACMA with new powers to combat misinformation and disinformation. These powers could be used to improve transparency around how the systems and processes that digital platforms use to respond to misinformation employ AI.
  • Existing arrangements through broadcasting codes of practice require broadcasters to present factual content in news and current affairs accurately. These obligations apply regardless of whether content was created with the support of generative AI tools or not.
  • The ACMA has registered and enforces the Reducing Scam Calls and Scam SMS industry code, which requires telecommunications providers to identify, trace and block scam calls and text messages. These rules can assist to identify and prevent phone and SMS scams that utilise generative AI.


Online safety – eSafety is Australia’s national independent regulator for online safety. eSafety’s purpose is to help safeguard Australians from online harms and to promote safer, more positive online experiences. 

  • eSafety fosters online safety by exercising its powers under Australian government legislation, primarily the Online Safety Act 2021 (Cth), which regulates online services’ systems and processes. The Online Safety Act provides for industry bodies to develop new codes to regulate ‘class 1’ and ‘class 2’ illegal and restricted online material, and for eSafety to register the codes if they meet the statutory requirements. If a code does not meet the requirements, then eSafety can develop an industry standard for that section of the online industry instead. Class 1 and class 2 material ranges from the most seriously harmful online content, such as videos showing the sexual abuse of children or acts of terrorism, through to content which is inappropriate for children, such as online pornography.
  • As part of its work as an anticipatory regulator, eSafety has a Tech Trends workstream, conducts horizon scanning and works with subject matter experts. This allows eSafety to identify the online safety risks and benefits of emerging technologies and online behaviours, as well as the regulatory challenges and benefits they may present. In August 2023, eSafety published a position statement on generative AI, which provides an overview of the generative AI lifecycle, examples of its use and misuse, consideration of online safety risks and opportunities, as well as regulatory challenges and approaches, including an explanation of how the Online Safety Act 2021 (Cth) applies. It also provides specific Safety by Design interventions that industry can adopt immediately and other approaches to improve user safety. 
  • eSafety’s work to prevent AI-related harm through education and awareness raising includes updates to its professional learning program, which now includes a webinar about online safety considerations for generative AI in education. 
  • eSafety’s investigative and regulatory schemes cover both real and synthetic child sexual abuse material, deepfake image-based abuse, and AI-enabled content used to target Australian youths through cyberbullying and adult cyber abuse. eSafety is starting to receive reports from the public about AI-driven abuse and has taken remedial action against an Australian adult creating deepfake image-based abuse.
  • In terms of eSafety's systemic regulatory powers, the codes and standards address online safety issues in eight sections of the online industry across the digital stack. While AI-generated material is treated under the legislation in the same way as ‘real’ class 1 material, the risks associated with AI generated material have necessitated specific requirements in relation to AI-related features. For example: 
    • the Search Engine Services (SES) code, registered on 12 September 2023 and which came into effect on 12 March 2024, requires search engines to take steps to reduce the risk that class 1 material like child sexual abuse material (CSAM) is returned in search results, and to ensure that AI functionality integrated into search engines is not used to generate ‘synthetic’ versions of this material. 
    • the draft Designated Internet Services (DIS) standard, prepared after the Commissioner’s rejection of an industry-drafted code, proposed specific obligations on generative AI services used to generate high impact material (such as pornography) and on platforms which distribute generative AI models. eSafety is currently considering the feedback from submissions, which are available on eSafety’s website, and is finalising the standards. eSafety is working closely with the Department of Industry, Science and Resources as well as with other departments and regulators to ensure a coherent approach to AI-related regulation.
  • The Basic Online Safety Expectations (BOSE) outline the Australian Government’s expectations that social media, messaging and gaming service providers and other apps and websites will take reasonable steps to keep Australians safe. The Minister for Communications establishes the BOSE through a legislative determination.
    • Since the commencement of the Online Safety Act, eSafety has issued 19 transparency notices covering 30 different services, requiring detailed information on how industry is keeping Australians safe. The notices have included specific questions on how AI is used to enable safety (for example, identifying child sexual abuse material and grooming), as well as how it can create risks, for example through recommender systems. Findings have been published on the eSafety website. In February 2024, notices were issued covering generative AI in relation to terrorism and extremism, and child sexual abuse, with the findings to be published in due course.
  • The Department of Infrastructure, Transport, Regional Development, Communications and the Arts (DITRDCA) held a public consultation on ways to extend and improve the BOSE Determination, which closed in February 2024. This included a proposed amendment regarding the safety of generative AI, recommender systems and user controls. Further information on DITRDCA’s consultation is available on DITRDCA’s website.
  • On 13 February 2024, Minister Rowland announced the review of the Online Safety Act 2021 (Cth). The Terms of Reference note that the Review will be broad ranging and include consideration of whether additional arrangements are warranted to address online harms not explicitly captured under the existing statutory schemes – including potential harms raised by a range of emerging technologies, such as generative AI. On 29 April 2024, Minister Rowland announced that public consultation for the review was open, closing 21 June 2024. The issues paper and further information about the review are available on the DITRDCA website.
  • On 1 May 2024, it was announced that the Government will introduce legislation to ban the creation and non-consensual distribution of deepfake pornography. 
  • In addition, in July 2023, eSafety provided a submission to the inquiry into the use of generative AI in the Australian education system to highlight online safety considerations for such use. 


Privacy, information access and information management: The OAIC is an independent Commonwealth regulator established to bring together three functions: privacy functions (protecting the privacy of individuals under the Privacy Act 1988 (Cth) (Privacy Act) and other legislation), freedom of information functions (access to information held by the Commonwealth Government in accordance with the Freedom of Information Act 1982 (Cth)), and information management functions (as set out in the Australian Information Commissioner Act 2010 (Cth)). 

  • Privacy, access to information and information management all have an important role to play in fostering the responsible use of AI in ways that benefit Australians. 
  • Privacy obligations will apply where personal information is used to train, test or deploy algorithms, or AI models or systems. 
  • The OAIC’s privacy regulatory priorities include a focus on online platforms, social media and high privacy impact technologies, including practices involving the use of generative AI, facial recognition and the use of other biometric information. The Australian Information Commissioner has made determinations concerning the collection of biometric information by Clearview AI and 7-Eleven to match facial biometric templates.[3] The OAIC also has ongoing investigations into the use of facial recognition technology by Bunnings Group Limited and Kmart Australia Limited. These technologies typically rely on artificial intelligence through the use of machine learning algorithms to match biometric templates.
  • In addition to the joint DP-REG submission to the Department of Industry, Science and Resources consultation on the safe and responsible use of AI in Australia discussion paper, the OAIC provided an additional submission highlighting:
    • the role that privacy plays in building trust and confidence in the adoption of new and emerging technologies like AI
    • our views on measures that can further support this objective by strengthening the existing privacy framework through the Attorney-General’s Department’s ongoing review of the Privacy Act.
  • The OAIC continues to provide expertise on privacy considerations to inform work across government to respond to AI.[4]
  • The Australian Information Commissioner also has functions in relation to the collection, use, disclosure, management, administration or storage of, or accessibility to, information held by the Government. In the context of AI, robust information management and governance is essential to promoting a culture of responsible, accountable and transparent deployment of AI tools across government. Government should be proactive in publishing information about AI to ensure individuals and businesses understand when AI is being deployed and how it is informing government decision-making. The OAIC has developed the Principles on open public sector information to help agencies implement best practice information management. 
  • Effective information management is also an important prerequisite to the meaningful operation of the Freedom of Information Act 1982 (Cth), which provides a mechanism to request access to government-held information with a core objective of increasing scrutiny and review of government activities. This right of access to government-held information plays an important role as an enabler of other rights, such as contractual rights, administrative review and judicial review. For example, access to information can facilitate transparency into how and why AI-informed decisions have been made and provide the information needed to challenge these decisions.
  • Australia’s Third Open Government Partnership (OGP) National Action Plan for 2024-25 (the Plan) includes a commitment to ensuring greater transparency about the use of automated decision-making in government and the responsible use of AI while facilitating innovation. The FOI Commissioner leads the OAIC’s engagement with the OGP to progress the commitments set out in the Plan.

We hope this information about the AI-related work of the DP-REG members is of assistance to the Committee. 

We look forward to continued collaboration and welcome further engagement in response to our submission.

[3] Commissioner initiated investigation into Clearview AI, Inc. (Privacy) [2021] AICmr 54; Commissioner initiated investigation into 7-Eleven Stores Pty Ltd (Privacy) (Corrigendum dated 12 October 2021) [2021] AICmr 50

[4] The OAIC has discretionary advice functions including providing advice to a Minister or entity about any matter relevant to the operation of the Privacy Act – see Privacy Act 1988 (Cth) s 28B.