Australian data regulator backs off Clearview AI

News
By Evan Schuman, Contributor
21 Aug 2024 | 4 mins
Data Privacy | Facial recognition | Regulation

The Office of the Australian Information Commissioner (OAIC) still believes Clearview AI erred by ‘indiscriminately’ grabbing face images from the Internet.


The Office of the Australian Information Commissioner (OAIC) on Wednesday abandoned its multi-year effort against Clearview AI, which it had ordered to stop collecting images of people in Australia after accusing the company of improperly grabbing images of faces from “across the Internet.”

The OAIC decision pointedly did not withdraw its accusations, and it even suggested that Clearview never stopped the practice and apparently never deleted the images it had collected. But the OAIC said further pursuit wasn’t worth the resources, given that Clearview is already facing so many other investigations.

“We reiterate that the determination against Clearview AI still stands,” Privacy Commissioner Carly Kind said in a statement. “I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinizing the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States,” Kind said. “Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.”

But Kind stressed that Clearview is hardly alone, and that many AI companies are capturing all manner of sensitive data from around the world.

“The practices engaged in by Clearview AI at the time of the determination were troubling and are increasingly common due to the drive towards the development of generative artificial intelligence models. In August 2023, alongside 11 other data protection and privacy regulators, the OAIC issued a statement on the need to address data scraping, articulating in particular the obligations on social media platforms and publicly accessible sites to take reasonable steps to protect personal information that is on their sites from unlawful data scraping,” Kind said. “All regulated entities, including organizations that fall within the jurisdiction of the Privacy Act by way of carrying on business in Australia, which engage in the practice of collecting, using or disclosing personal information in the context of artificial intelligence are required to comply with the Privacy Act. The OAIC will soon be issuing guidance for entities seeking to develop and train generative AI models, including how the APPs apply to the collection and use of personal information. We will also issue guidance for entities using commercially available AI products, including chatbots.”

The original OAIC determination, from October 2021, “found that Clearview AI, through its collection of facial images and biometric templates from individuals in Australia using a facial recognition technology, contravened the Privacy Act, and breached several Australian Privacy Principles (APPs) in Schedule 1 of the Act, including by collecting the sensitive information of individuals without consent in breach of APP 3.3 and failing to take reasonable steps to implement practices, procedures and systems to comply with the APPs,” the OAIC said.

The concerns are extensive. Back in 2021, the European Parliament explicitly called for bans on using facial recognition technology and specific bans on private facial recognition databases such as those created by Clearview AI. 

In 2022, the UK Information Commissioner’s Office fined Clearview £7.5 million for breaking data protection laws.

Martin Kuppinger, principal analyst for German consulting firm KuppingerCole Analysts, said he was taken aback by the decision.

“Although the decision is surprising, it illustrates the limitations data protection laws are facing in the internet and, specifically, in the age of AI. It also highlights the challenges of authorities in navigating through the dilemma of diverging targets of AI innovation, innovation based on AI, strengthening the cybersecurity posture and attack defense, and privacy and data protection. There is no simple answer here,” Kuppinger said. “Shall we hinder the defenders in using pictures of faces to train their models? Attackers don’t care, they just do and, for instance, use public pictures and videos from the Internet for training their deep fake models.”

Evan Schuman
Contributor

Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he's been a columnist for CBSNews.com, RetailWeek, Computerworld and eWeek and his byline has appeared in titles ranging from BusinessWeek, VentureBeat and Fortune to The New York Times, USA Today, Reuters, The Philadelphia Inquirer, The Baltimore Sun, The Detroit News and The Atlanta Journal-Constitution. Evan can be reached at eschuman@thecontentfirm.com and he can be followed at twitter.com/eschuman. Look for his blog twice a week.

The opinions expressed in this blog are those of Evan Schuman and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
