Australia’s internet regulator is prepared to compel app stores and search engines to block access to artificial intelligence services that do not verify the ages of their users, one of the most assertive efforts globally to safeguard young people from potential harms posed by AI. The potential intervention, announced Tuesday, comes as a March 9 deadline approaches for AI platforms to restrict under-18s from accessing harmful content or face penalties of up to AUD 49.5 million (about US$35 million).
The eSafety Commissioner’s office indicated it would not hesitate to target “gatekeeper services” – including app stores and search engines – if they facilitate access to AI platforms failing to comply with the new regulations. “eSafety will use the full range of our powers where there is non-compliance,” a representative for the commissioner stated, according to Reuters.
A recent review by Reuters found that of 50 leading text-based AI chat services operating in Australia, only nine had implemented, or announced plans to implement, age verification measures. Eleven services opted to block all Australian users, while the remaining 30 had taken no public action as of one week before the deadline. The content subject to restriction includes pornography, extreme violence, material promoting self-harm, and content related to eating disorders.
Australia’s escalating efforts to regulate AI follow a similar, groundbreaking move in December, when the country became the first to ban social media for users under 16, citing concerns about mental health. That earlier ban prompted discussion among global leaders, several of whom indicated they were considering similar measures. The current crackdown on AI broadens this strategy, extending age restrictions to content accessible through AI services.
The question of responsibility for age verification in the digital space is a subject of ongoing debate. In the United States, Apple and Google are reportedly lobbying for platforms, rather than app store operators, to bear the primary responsibility for implementing age checks. Australia’s regulators, however, appear to be taking a more expansive view, potentially holding app stores and search engines accountable for ensuring compliance.
The move comes amid a growing number of lawsuits against AI companies alleging failures to prevent self-harm or violence, and increasing research suggesting that AI platforms may pose a greater risk to youth mental health than traditional social media. Australia’s eSafety regulator has not yet specified the precise mechanisms it will employ to enforce the new regulations, leaving open the possibility of a range of actions against non-compliant services and their distribution channels.