Apple, Meta and other tech giants have been ordered by Australian authorities to report twice a year on the steps they are taking to tackle child sexual abuse material on their platforms, in an escalation of Australia’s online safety compliance regime.
eSafety Commissioner Julie Inman Grant issued legal notices to eight companies on Wednesday, requiring them to report on their efforts to tackle child sexual abuse material in Australia every six months for the next two years.
Apple, Google, Meta (and WhatsApp), Microsoft (and Skype), as well as the owners of chat platforms Discord and Snapchat, are targeted by the new reporting regime, which was introduced partly in response to their answers to a previous round of legal notices.
Ms Inman Grant said the “worrying but unsurprising” answers confirmed what the online safety regulator had long suspected, namely that there were “significant gaps and differences between services’ practices”.
“In our subsequent conversations with these companies, we have not seen any meaningful changes or improvements to address these identified safety shortcomings,” she said in a statement on Wednesday.
Ms Inman Grant cited the first Basic Online Safety Expectations (BOSE) transparency report, from December 2022, which found that Apple and Microsoft made no attempt to proactively detect child abuse material stored on their iCloud and OneDrive platforms.
While eSafety has since introduced mandatory standards, cloud and messaging service operators will not be required to detect and remove known child abuse material before December.
Ms Inman Grant also expressed concern that Skype, Microsoft Teams, FaceTime and Discord still do not use technology to detect live-streamed child sexual abuse in video chats.
Information sharing between Meta services is another concern, as offenders banned from services like Instagram can in some cases continue offending on the parent company’s other platforms, such as WhatsApp or Facebook.
The legal notices require the companies to clarify how they handle child abuse material, live-streamed abuse, online grooming, sexual extortion, and synthetic child abuse material created using generative AI.
Ms Inman Grant said on Tuesday that she was in discussions with Delia Rickard, who is reviewing the Online Safety Act, on the need to close “loopholes” in existing legislation and codes that currently extend only to the most harmful content and pornography.
“There is a clear gap here and now is the time to think about what kinds of powers we might need to be more effective at a systemic level,” Ms Inman Grant said.
Another concern is how quickly companies respond to user reports of child sexual exploitation, with Microsoft taking an average of two days to respond in 2022.
Ms Inman Grant said the new expectations would force firms to “raise their game” and show they were “making improvements”, with the regulator reporting regularly on the results of notifications.
This mechanism is one of three available under BOSE to help “lift the lid” on online safety initiatives pursued by social media, messaging and gaming providers.
“These notices will allow us to see if these companies have made any improvements to online safety since 2022/23 and ensure that these companies remain accountable for the harm that continues to be committed against children on their services,” said Ms Inman Grant.
“We know that some of these companies have made improvements in some areas – and this is an opportunity to show the progress they have made across the board.”
The six companies must submit a first round of responses by February 15 or face penalties of up to $782,500 a day.
Do you know more? Contact James Riley via e-mail.