Clear policy changes that Australian governments can make right now. Most of these don't require new legislation. They require updating guidelines that are already out of date.
Australia has already shown it can act on children's online safety. The under-16 social media ban passed Parliament in November 2024 and took effect in December 2025. The Children's Online Privacy Code is in development. The appetite for action exists. Here's what can be done right now.
The NSW DoE social media procedures currently state that school accounts "must not restrict access or be set as 'private' or 'closed.'" This policy was written when "public" meant "accessible to the school community." In 2026, "public" means "scraped by AI, indexed by search engines, harvested by facial recognition, and available to anyone on earth."
The fix is straightforward: update the procedures to allow (or require) school social media accounts to use private or restricted settings. This is a policy update, not legislation. The Department can do this immediately.
The current consent form is binary (all public publishing or none) and makes no mention of AI training, facial recognition, or data scraping. It needs granular options that separate internal school use from public posting, and plain-language disclosure that publicly published images can be scraped for AI training and facial recognition.
This is a form update. It does not require legislation.
The Department should issue updated guidance to all NSW public schools addressing account privacy settings, informed consent for publishing children's images, and the risks that public posting now carries: AI scraping, search indexing, and facial recognition.
Schools are following current DoE guidance in good faith. That guidance is outdated. Updating it immediately changes the practice of every public school in NSW.
Every state and territory education department has its own social media policy for schools. None of them were written for the AI age. A coordinated national approach would ensure all Australian schoolchildren receive the same baseline protection, regardless of which state they live in.
The Education Council (the body that brings together state and territory education ministers) could agree to a set of minimum standards for school social media that include private or restricted account settings by default, granular parental consent, and explicit disclosure of AI-era risks.
The OAIC's Children's Online Privacy Code is due by December 2026. It should explicitly address the images and personal information that schools publish about children, including whether that content can be scraped for AI training or facial recognition.
The current Privacy Act reform process should address the gap between what parents consented to ("publishing") and what actually happens ("permanent extraction into AI systems"). Consent given for one purpose (school communication) should not automatically extend to another (AI training, facial recognition).
The Privacy Act should also clarify that content being publicly accessible does not make it freely available for any purpose, and that scraping children's images for AI training or biometric matching is a distinct use requiring its own consent.
Meta offers European users an opt-out from AI training because GDPR requires it. Australians have no equivalent protection. Senator David Shoebridge summarised the situation directly: "Meta made it clear today that if Australia had these same laws, Australians' data would also have been protected."
Australia should legislate a right for all users, including schools and organisations, to opt out of having their public content used for AI training. Until then, the only defence is to not make the content public in the first place.
The Australian Information Commissioner found Clearview AI breached the Privacy Act and ordered the company to stop collecting Australian data and delete what it had. There is no evidence of compliance. The OAIC should actively pursue enforcement, not let the matter rest.
Updating the NSW DoE social media procedures, updating the Permission to Publish form, and issuing new guidance to schools are all administrative actions. They can happen immediately. They would protect every child in every NSW public school. And they would set a precedent for every other state and territory to follow.