Clearview AI and Australia

The Australian Information Commissioner investigated Clearview AI and found the company had breached the Privacy Act 1988 by collecting Australians' facial images and biometric templates without consent. Commissioner Angelene Falk warned of the practice:

"It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI's database." — Angelene Falk, Australian Information Commissioner

The OAIC ordered Clearview AI to stop collecting images from individuals in Australia and to delete all images previously collected. Regulators in France, Italy, Greece and the Netherlands have since imposed fines totalling over €100 million for similar breaches.

Clearview AI has not complied

Despite the Australian order to delete data and cease collection, there is no evidence Clearview AI has complied. In August 2024, Privacy Commissioner Carly Kind stated she was "not satisfied that further action is warranted" in the case. The company continues to operate. Its database has grown from 30 billion images at the time of the order to over 50 billion and counting.

What Clearview AI actually does

Clearview AI built the world's largest facial recognition database by scraping public photos from social media platforms, primarily Facebook, Instagram, LinkedIn and YouTube. The company's CEO, Hoan Ton-That, acknowledged to the BBC that photos were taken without users' knowledge.

How it works

A user uploads a photo of any person. Clearview's AI matches that face against its database of 50 billion+ images and returns every photo it has ever found of that person, along with the source URLs. One face, one search, and you can find every public photo of a person across the internet.
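Clearview's actual pipeline is proprietary and not public, but the search step it describes is, in generic terms, a nearest-neighbour lookup over face embeddings: a neural network turns each face into a fixed-length vector, and a query face is compared against every stored vector. The sketch below illustrates only that generic mechanism, using random vectors as stand-ins for real embeddings; the database size, dimension, and URLs are illustrative, not Clearview's.

```python
import numpy as np

# Generic sketch of embedding-based face search (NOT Clearview's actual
# system). In a real pipeline, a neural network would convert each face
# crop into a fixed-length embedding; here random unit vectors stand in.
rng = np.random.default_rng(0)

DB_SIZE, DIM = 10_000, 128                 # illustrative scale only
database = rng.normal(size=(DB_SIZE, DIM))  # one row per indexed face
database /= np.linalg.norm(database, axis=1, keepdims=True)
source_urls = [f"https://example.com/photo/{i}" for i in range(DB_SIZE)]

def search(query: np.ndarray, top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k most similar indexed faces by cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = database @ q                   # cosine similarity vs every face
    best = np.argsort(scores)[::-1][:top_k]
    return [(source_urls[i], float(scores[i])) for i in best]

# Simulate a query photo of someone already indexed: their stored
# embedding plus a little noise (a different photo of the same face).
query = database[4242] + 0.02 * rng.normal(size=DIM)
results = search(query)
```

The key point the sketch makes concrete: once a face is in the index, any new photo of that person maps to a nearby vector, so a single query retrieves every stored appearance along with its source URL.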

A class photo posted on a school's public Facebook Page gives Clearview AI a clear, high-quality facial image of every child in that photo. That image becomes a permanent biometric identifier, searchable by anyone with access to the system.

Clearview AI is not the only one

Clearview AI is the most documented facial recognition scraper, but it is far from the only one. Consumer services like PimEyes and FaceCheck.id let anyone upload a photo and find where that face appears online. These tools are marketed for personal safety and journalism, but are easily used for stalking, harassment and doxxing. The technology to scrape and index faces from public social media is widely available and cheap. Any public photo of a child on Facebook can be, and likely has been, harvested by multiple facial recognition systems.

The global regulatory response

Multiple countries have taken action against Clearview AI, but enforcement has been inconsistent.

Despite over €100 million in fines across multiple jurisdictions, none of the European penalties have been paid, and Clearview AI continues to grow its database. The company has no physical presence in Europe and treats the orders as unenforceable.

It is getting worse, not better

In 2025, the US Army signed Clearview AI to a contract for special forces operations, with options extending to 2030. Immigration and Customs Enforcement (ICE) awarded a $9.2 million contract. Customs and Border Protection followed in early 2026. The company settled a US class action for $51.75 million but continues to operate and expand.

Wrongful arrests from facial recognition misidentification are mounting. At least eight Americans have been wrongfully arrested after police relied on facial recognition matches without basic investigative steps. In 2025, a Tennessee grandmother spent five months in jail after Clearview AI matched her face to bank fraud committed over 1,000 miles away. She was released on Christmas Eve when prosecutors dropped the charges.

In Australia, the OAIC found that both Bunnings (2024) and Kmart (2025) breached the Privacy Act by using in-store facial recognition technology without consent. These cases confirm that facial recognition is spreading from law enforcement into everyday retail settings.

What this means for school Facebook Pages

Every photo on a public Facebook Page is accessible to Clearview AI and systems like it. There is no technical barrier. There is no legal barrier that has proven effective. The only defence is to not make the photos public in the first place.

A private Group helps, but doesn't solve everything

Moving to a private Facebook Group prevents third-party scrapers like Clearview AI from accessing content, because private Group posts are not visible to non-members or to web crawlers. This is a meaningful layer of protection. However, it does not prevent Meta itself from processing the data, and it does not address historical images already scraped from the public Page. Removing historical public content is a necessary companion step.

The lifetime impact on children

A child whose face enters a facial recognition database at age 5 will be trackable by that system for their entire life. Facial recognition algorithms can match faces across decades. A kindergarten class photo and a university graduation photo can be linked by the same system. The children in these school Facebook photos have had no say in the creation of their permanent biometric identity, and currently have no mechanism to remove themselves from these databases.
