Let's be honest with each other

There is no simple fix. Photos already posted on public Facebook Pages have already been scraped. AI models trained on those photos cannot unlearn them. Facial recognition databases that captured those faces will hold them indefinitely.

What is done is done. But we can stop it getting worse, together. Each step we take reduces our children's exposure going forward.

Three levels of protection

Level 1: Move to a private Facebook Group

The minimum first step. What we are asking schools to do right now.

Threat: Protected?
Third-party scrapers (Clearview AI, LAION, etc.): Yes. Content is not publicly accessible.
Search engine indexing: Yes. Private Group content is not indexed.
Random public access to children's photos: Yes. Membership approval is required.
Meta's own AI training: Unclear. Meta says it does not currently use private Group content for AI training, but its terms grant a broad licence over all content on its platforms. Those terms can change at any time.
Members saving or resharing photos: No. Any member can still screenshot, save, and share.
Historical scraping (already happened): No. Already in databases and models.
Why this step still matters

A private Group immediately eliminates the broadest attack surface: unrestricted public access. It stops indiscriminate, automated, mass-scale scraping by third parties. Right now, anyone on the internet, with no connection to the school, can access and download every photo of every child. A private Group closes that door.

Level 2: Get off Facebook entirely

The next step. Removes Meta from the equation.

Threat: Protected?
Third-party scrapers: Yes.
Search engine indexing: Yes.
Random public access: Yes.
Meta's own AI training: Yes. No content on Meta's platform.
Other platform AI training: Depends on the alternative platform's terms and policies.
Historical scraping: No. Already in databases and models.

Moving to a purpose-built school communication platform removes Meta entirely. No data on Meta's platform means no Meta AI training.

Purpose-built school communication platforms are already in use across Australian schools.

The right alternative depends on the school community. The key question: does the school control the platform, or does a company that mines data for AI?

Level 3: Stop publishing identifiable children's photos online

The most protective approach. Not the first step, but where the conversation leads.

Threat: Protected?
All scraping and AI training: Yes. No identifiable images online to scrape.
Facial recognition capture: Yes.
Deepfake source material: Yes.
Historical scraping: No. Already in databases and models.

This is the most protective position. It means schools share photos of activities and events without identifiable faces, or share them only through non-digital channels (printed newsletters, physical displays in the school).

We are not advocating for this as a first step. Parents value seeing their children's school life. But every digital photo of a child, on any platform, carries some risk. The question is how much risk a school community is comfortable accepting.

What about the photos already out there?

This is the hardest part. Photos already posted on public Facebook Pages may already have been scraped by third parties, used to train AI models, and captured in facial recognition databases.

Deleting those posts will not remove data from systems that have already processed it. But deleting them still helps: it removes the source material from any future scraping, so systems that have not yet captured those photos never will.

Cleaning up historical public posts is a necessary companion to any move forward.

The direction of Australian regulation

Now: No specific protections for children's images on school social media. The NSW Permission to Publish form bundles everything together. No granularity. No AI disclosure.

December 2025: Under-16 social media ban now in effect. Children cannot be ON social media, but schools can still post their photos ON social media.

July 2026: Privacy Act extends to small businesses. Over 100,000 small businesses become regulated by the Privacy Act for the first time.

December 2026: Children's Online Privacy Code and automated decision-making transparency. The OAIC's exposure draft (released March 2026, open for consultation until June 2026) introduces a "best interests of the child" standard. Automated decision-making disclosure requirements also take effect, requiring organisations to reveal AI use in their privacy policies.

Pending: Second tranche of Privacy Act reform. Expected to introduce a "fair and reasonable" test for data use. No confirmed introduction date.

The direction is clear: stronger protections for children's data, less tolerance for unconsented use of personal information, greater accountability for platforms that scrape and train on that data. Schools that act now are ahead of where the law is going. Schools that wait are accumulating risk.

What we are asking, together

We are not asking schools to stop communicating with us. We are not asking them to abandon social media overnight. We love seeing our kids at school. We are asking for one specific, practical, free change: move from a public Facebook Page to a private Facebook Group.

This is Level 1. It will not solve everything, and we owe each other honesty about that. But it immediately stops the worst of it: the indiscriminate public exposure of our children's photos to every scraper, every AI system, and every stranger on the internet.

Then we keep going. Together.
