Australia’s internet regulator has accused the world’s biggest social platforms of failing to adequately implement the country’s ban on under-16s using their services, despite the legislation taking effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including permitting prohibited users to make repeated attempts at age verification and insufficient measures to prevent new underage accounts from being created. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, cautioning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Compliance Failures Revealed in First Major Review
Australia’s eSafety Commissioner has documented a worrying pattern of non-compliance among the world’s most prominent social media platforms in her first formal review since the ban came into effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to prevent minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification systems, noting that some platforms have allowed children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings mark a notable intensification of regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has made clear that merely noting that some children still hold accounts is inadequate; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to stop under-16s from creating accounts in the first place. This shift reflects the government’s determination to hold tech giants responsible, with potential penalties looming for companies that fail to meet the legal requirements. Among the poor practices identified were:
- Allowing previously banned users to re-verify their age and restore account access
- Permitting repeated attempts at the same verification process without consequence
- Weak mechanisms to prevent under-16s from establishing new accounts
- Insufficient reporting tools for families and the wider community
- Lack of transparent data about enforcement measures and account removals
The Magnitude of the Issue
The sheer scale of social media usage amongst young Australians highlights the compliance challenge facing both the authorities and the platforms in question. With numerous accounts already restricted or removed since the ban took effect, the figures point to extensive early non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to enforcing age restrictions have proved considerably more complex than anticipated, with platforms struggling to differentiate authentic age confirmations from fraudulent ones. This complexity has left enforcement authorities wrestling with the fundamental question of whether current age verification technologies are fit for purpose.
Beyond the operational challenges lies a broader concern about the willingness of platforms to place compliance ahead of user growth. Social media companies have consistently opposed strict identity verification requirements, citing data protection worries and the genuine difficulty of confirming age online. However, the regulatory report suggests that some platforms may not be making sufficient effort to deploy the legally mandated infrastructure. The move to active enforcement represents a pivotal moment: either platforms will significantly enhance their compliance infrastructure, or they risk substantial fines that could reshape their business models in Australia and potentially influence regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban’s launch, Australian authorities reported that 4.7 million accounts had been restricted or taken down. Whilst this statistic initially appeared to demonstrate successful compliance, subsequent analysis reveals a more complex picture. The substantial number of account deletions indicates that many under-16s had managed to establish accounts in the first place, demonstrating that protective safeguards were insufficient. Furthermore, the data casts doubt on whether removed accounts reflect authentic compliance or simply users deleting their accounts of their own accord in light of the new restrictions.
The minimal transparency surrounding these figures has troubled independent observers attempting to evaluate the ban’s true effectiveness. Platforms have provided little data about their implementation approaches, performance indicators, or the nature of suspended accounts. This opacity makes it challenging for regulators and the wider public to judge whether the ban is functioning as designed or whether teenagers are merely discovering other means to access social media. The Commissioner’s demand for proof of structured compliance processes reflects growing frustration with platforms’ unwillingness to share detailed data.
Sector Reaction and Pushback
The major tech platforms have responded to the regulatory enforcement measures with a combination of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, stressed its dedication to adhering to Australian law whilst simultaneously arguing that accurate age determination remains a major challenge across the industry. The company has advocated for an alternative strategy, proposing that strong age verification systems and parental consent requirements implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects wider concerns across the industry that the existing regulatory system places an unrealistic burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, stating that it had suspended 450,000 accounts since the ban took effect and asserting that it continues to suspend additional accounts each day. However, sector analysts question whether such figures demonstrate genuine compliance or merely reactive account management. The fundamental tension between platforms’ commercial structures, which have historically relied on maximising user engagement and growth, and the statutory obligation to actively exclude an entire age group remains unresolved. Companies have consistently opposed stringent age verification, pointing to privacy issues and technical constraints, creating an impasse between authorities and platforms over who carries responsibility for implementation.
- Meta maintains age verification should occur at app store level instead of on individual platforms
- Snap says it has suspended 450,000 accounts since the ban took effect in December
- Industry groups point to privacy concerns and technical obstacles as barriers to effective age verification
- Platforms contend they are doing their best whilst challenging the ban’s overall effectiveness
Wider Inquiries About the Ban’s Efficacy
As Australia’s under-16 social media ban moves into its implementation stage, fundamental questions persist about whether the law will accomplish its stated objectives or merely drive young users towards less regulated platforms. The regulator’s initial compliance assessment reveals that despite months of implementation, substantial gaps remain: children keep discovering ways to circumvent age verification mechanisms, and platforms have struggled to prevent new underage accounts from being created. Critics argue that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon mainstream platforms or simply shift towards other platforms, encrypted messaging apps, or virtual private networks that conceal their location.
The ban’s worldwide implications add further complexity to assessments of its success. Countries such as the United Kingdom, Canada, and several European nations are observing Australia’s initiative closely, exploring similar regulatory measures for their own populations. If the ban fails to reduce children’s online activity or to protect them from harmful material, it could undermine the case for equivalent legislation elsewhere. Conversely, if implementation proves strict enough to effectively limit underage access, it may inspire other nations to adopt similar strategies. The outcome could shape international regulatory direction for years to come, ensuring Australia’s regulatory efforts are scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health advocates and organisations focused on child safety have championed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that taking young Australians off platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people: maintaining friendships, accessing educational content, and participating in online communities around shared interests. The regulatory approach assumes harm outweighs benefit, a calculation that some young people and their families question.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing are cut off from younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban inadvertently favours large technology companies with the resources to develop age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Follows for Enforcement
Australia’s eSafety Commissioner has announced a marked change from passive monitoring to active enforcement, marking a key milestone in the implementation of the age restriction. The regulator will now collect evidence to determine whether platforms have taken “reasonable steps” to prevent underage access, a statutory benchmark that extends beyond simply noting that children remain on these services. This approach requires tangible verification that companies have established appropriate systems and protocols designed to exclude minors. The enforcement team has signalled it will conduct enquiries systematically, building cases that could lead to considerable sanctions for non-compliance. This shift from oversight to enforcement reveals growing frustration with the companies’ current approach and suggests that voluntary cooperation alone is insufficient.
The implementation stage raises critical questions about the adequacy of fines and the operational systems for ensuring platform accountability. Australia’s statutory provisions offer regulatory tools, but their efficacy hinges on the eSafety Commissioner’s willingness to initiate formal proceedings and the platforms’ capability to adapt effectively. Global regulators, particularly those in the UK and EU, will closely track Australia’s implementation tactics and their consequences. A robust enforcement effort could create a model for other jurisdictions considering equivalent prohibitions, whilst shortcomings might compromise the entire regulatory framework. The coming months will determine whether Australia’s pioneering regulatory approach produces substantive protection for young people or proves largely performative in its effect.
