Bacchus Marsh Grammar, Melbourne, June 2024

In June 2024, at least 50 female students at Bacchus Marsh Grammar, a co-educational school on the outskirts of Melbourne, discovered that AI-generated nude images featuring their faces had been created and circulated online.

"Approximately 50 girls targeted in AI explicit photo scandal at Melbourne school." SBS News, June 2024

The parents of the victims described the experience as devastating. The girls had done nothing wrong. They had simply existed on social media with publicly visible photos. AI tools did the rest.

Gladstone Park Secondary College, Melbourne, February 2025

In February 2025, formal school photos of up to 60 female students at Gladstone Park Secondary College were edited using AI to create explicit images. The images were shared by Year 11 students online and in group chats. Two students were suspended. Police said more could be involved.

These were not photos the victims had posted themselves. They were official school photos, the kind that schools routinely share on their public Facebook Pages.

Sydney high school, January 2025

A male student at a Sydney high school was reported to police after allegedly using AI to create explicit deepfake images of female classmates. He reportedly scraped photos from social media accounts and school events, then distributed the images through fake social media profiles. NSW Education Minister Prue Car called the incident "abhorrent."

The pipeline: from school photo to deepfake

The path from a school Facebook post to a deepfake is disturbingly short:

Four steps

1. A school posts identifiable photos of children on a public Facebook Page.
2. Those photos are accessible to anyone on the internet: no login, no connection to the school required.
3. Freely available AI tools can generate explicit or manipulated images from a single clear face photo.
4. The generated images are distributed. The child has no recourse, and the original photos cannot be "unlearned."

The tools required for step 3 are widely available, often free, and require no technical expertise. A clear, well-lit photo of a child's face, exactly the kind posted on school Facebook Pages, is the ideal input for these systems.

The scale of the problem

These are the incidents that made the news. They are not the full picture. In June 2025, eSafety Commissioner Julie Inman Grant reported that deepfake reports from under-18s more than doubled in the preceding 18 months, exceeding the total number of reports received in the seven years prior. Four out of five reports involved the targeting of girls.

In South Korea, the problem reached crisis scale in 2024. Students at more than 500 schools were targeted in a coordinated wave of deepfake sexual abuse, with perpetrators creating Telegram groups to share AI-generated explicit images of classmates and teachers. More than 800 victims reported to authorities in the first nine months of 2024 alone.

In September 2025, the eSafety Commissioner took enforcement action against a UK-based company providing "nudify" services used to create AI-generated sexual exploitation material of Australian school children. Those services were attracting around 100,000 visitors per month in Australia. The company subsequently blocked Australian access.

Separately, Human Rights Watch found that identifiable photos of at least 362 Australian children had been scraped from personal blogs, school websites, and social media, and included in the LAION-5B dataset used to train AI image generators including Stable Diffusion. In many cases, the children's names were embedded in captions or URLs, making them easily traceable.

Why school Facebook Pages are particularly dangerous

School Facebook Pages are an unusually rich source of children's images for bad actors: the photos are publicly accessible, clear and well-lit, and tied to an identifiable school and location.

Researchers at the University of Utah and Carnegie Mellon University described school Facebook pages as potentially "the largest existing collection of publicly accessible, identifiable images of minors" after analysing 18 million school Facebook posts.

The law is catching up, but prevention still matters most

The legal landscape has shifted significantly since the Bacchus Marsh incident. In September 2024, the federal Criminal Code Amendment (Deepfake Sexual Material) Act 2024 took effect, creating criminal offences for sharing non-consensual sexually explicit deepfakes (up to 6 years imprisonment) and an aggravated offence for those who create the material (up to 7 years).

At the state level, Victoria criminalised deepfake intimate images in 2022 (up to 3 years imprisonment). New South Wales passed the Crimes Amendment (Intimate Images and Audio Material) Act 2025, effective from February 2026. South Australia introduced penalties of up to $20,000 or 4 years imprisonment for creating degrading deepfake images.

In December 2025, Australia became the first country to ban children under 16 from holding social media accounts, with platforms facing fines of up to $49.5 million for non-compliance.

The eSafety Commissioner can also issue removal notices for image-based abuse material, and in early 2026 flagged Grok (xAI's chatbot on the X platform) after a doubling of reports about its use to generate sexualised images without consent, including images of minors.

These are welcome changes. But they are all reactive: they address creation and distribution after the fact. Prevention is the only reliable protection. Once a photo is public, it can be captured in seconds. Once a deepfake is created, it can be distributed endlessly. The only intervention point a school controls is whether it makes children's photos publicly accessible in the first place.

This is about what we can control

We can't control what AI tools exist or who uses them. But we can control whether our children's photos are sitting on a public Facebook Page, freely accessible to anyone. Moving to a private Group, or better still not posting children's photos to Facebook at all, closes the easiest door. That's something we can do together, right now.
