In a submission to the Productivity Commission’s review on harnessing data and digital technology, published this week, the parent company of Facebook, Instagram and WhatsApp urged the government to align with global privacy standards rather than implement stricter local regulations.
The submission comes as Australia implements significant privacy law reforms, with stricter regulations on personal and sensitive information processing expected in 2024-2025.
The Privacy and Other Legislation Amendment Bill 2024, which passed both houses on 29 November 2024 and received royal assent on 10 December 2024, represents the most substantial overhaul of Australia’s privacy framework in decades.
Meta argued that generative AI models “require large and diverse datasets” and cannot rely on synthetic data – data generated by AI alone. The company said available databases, such as Australian legislation, were limited in what they could offer AI compared to datasets containing personal information.
“Human beings’ discussions of culture, art, and emerging trends are not borne out in such legislative texts, and the discourse that takes place on Meta products both represents vital learning on both how individuals discuss Australian concepts, realities, and figures, as well as, in particular, how users of our products engage,” Meta said.
“This means that authentic and effective learning to ultimately power meaningful products of communication is best realised from training that includes those discussions and artefacts themselves,” it said.
The tech giant’s position highlights a growing tension between AI development and privacy protection. Meta confirmed last year that it uses data obtained from Australian Facebook users to train its AI models. Users in the European Union can opt out of having their data scraped, but that option isn’t available in Australia.
This disparity in user rights reflects broader global regulatory fragmentation. In Europe, Meta has faced significant resistance to its AI training practices.
Meta said in June 2024 that it would pause plans to train its AI systems on user data in the EU and UK following pushback from the Irish Data Protection Commission, though the company resumed those plans in April 2025.
The regulatory landscape has become increasingly complex for technology companies navigating data laws across multiple jurisdictions, especially with the rapid evolution of artificial intelligence.
In response, Australia’s privacy regulator has issued two guidance notes on AI: one covering the development of generative models, and another on the use of commercial AI products.
Meta expressed concerns in its Productivity Commission submission that Australia’s evolving privacy regulations risk diverging from global standards, creating conflicting obligations for tech companies that could undermine efforts to foster safe, age-appropriate online experiences.
Meta argues that AI needs access to real Australian social media data to understand local slang, culture and online discourse. Without it, the company says, models will struggle to grasp how Australians communicate.
This push for “global policy alignment” comes at a crucial time for Australia’s digital economy. The Productivity Commission is examining how governments can help maximise the potential economic benefits of AI technology, balancing innovation with consumer protection.
The debate reflects broader questions about data sovereignty and the role of multinational tech companies in shaping local digital experiences. Australia’s current privacy laws are considered out of date and not fit for purpose, prompting the recent reforms that Meta now seeks to influence.
Critics say Meta puts profits over privacy, arguing that consent should be strengthened and user benefits clearly proven before using personal data for AI.
The Productivity Commission’s review represents a critical juncture for Australia’s approach to AI governance. Industry submissions emphasise the need for frameworks addressing fairness, transparency, and accountability, while avoiding overly stringent regulations that could hinder productivity.
As the commission prepares its recommendations, the outcome will likely influence how Australia balances innovation with privacy protection in the AI era.
The decision could set precedents for how other countries approach the intersection of social media data, AI development, and user rights.
The stakes are particularly high given Australia’s position as a significant market for Meta’s services and a potential model for other nations grappling with similar regulatory challenges in the rapidly evolving AI landscape.

