Clearview AI, the controversial company working with law enforcement agencies, said someone with “unauthorised access” stole its entire customer list, although apparently it wasn’t a hack.
The company has drawn backlash from privacy advocates and major social-media platforms over its facial-recognition tool, which lets police and other government agencies (such as those fighting terrorism) use AI to match suspects against a database of faces.
Tor Ekeland, Clearview AI’s attorney, has confirmed the breach didn’t result from a hack into Clearview’s servers but from a “flaw” that gave someone unauthorised access to the company’s client list. “Security is Clearview’s top priority. Unfortunately, data breaches are part of life in the 21st century,” he said, adding that the company would “continue to work to strengthen our security.”
The Clearview AI breach is notable because the company markets its services to law-enforcement agencies and has previously avoided disclosing who its clients are. The New York Times reported in January 2020 that the company’s customers included hundreds of law-enforcement agencies across the US and Canada, including the FBI and Department of Homeland Security. The company is also expanding its client base internationally, and says it has received enquiries from law-enforcement agencies all over the world.
To power its facial recognition tech, Clearview AI has amassed a database of billions of images scraped from Facebook, Twitter, YouTube and other publicly available sites, several of which have threatened legal action against the company on the grounds that the scraping breaches their terms and conditions. Clearview founder and CEO Hoan Ton-That has responded that gathering publicly accessible photos is protected by the US Constitution’s First Amendment, although he agrees that regulation of this emerging sector is important and welcome.
This type of application is one of the reasons the EU has suggested it may introduce a five-year moratorium on facial recognition tech while it figures out what to do. (see EU considers ban on facial recognition tech) In the US, facial recognition by law enforcement agencies is also a hotbed of debate. The San Francisco Board of Supervisors has already banned its use by law enforcement agencies, and other cities, including Oakland and Somerville, Massachusetts, have moved to ban the technology too, viewing it as a threat to civil liberties.
In Asia things look different: privacy is perceived very differently from how it is seen in the US and Europe. Widespread applications are already being rolled out in countries such as China, where the benefits of facial recognition are viewed as outweighing any concerns. For example, in China the technology is used at pharmacies to stop individuals buying medications that can be used to produce illegal drugs, and at airports it is used to improve security and streamline the customer experience by using a passenger’s face for identification and authorisation. China Eastern Airlines, China Unicom and Huawei even won a GloMo Award for their use of the technology in delivering a smart airport.
Omnisperience’s View
Facial recognition based on AI has taken over from ‘super recognisers’, who were previously used to identify and arrest known criminals. But as we are all now aware, the technology has a far wider range of uses. At airports in Asia and the Middle East, for example, it is becoming a standard feature of passport control.
In Europe, our data is covered by strict regulations that determine how long companies and government agencies can retain it and what it can be used for. However, this type of technology is pushing the boundaries of privacy and data-protection legislation. Who can argue with a technology that claims to be able to mop up crime and keep us safe from terrorists? After all, we already have extensive citizen surveillance in the form of the CCTV cameras scattered through our streets, and drones and other UAVs are being fitted with cameras to capture images of citizens acting ‘suspiciously’.
Identification requires these images to be matched against known individuals, which means solutions are only as good as the database of images an agency has access to and the AI that performs the comparison. Now that we have extensive image databases and powerful AIs that can match and identify individuals with a high degree of accuracy, the potential for misuse is huge. At what point is a suspect no longer suspicious? Will images ever be deleted from such databases? At what point do the rights and duties of the state override citizens’ right to privacy?
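To make that matching step concrete, here is a minimal Python sketch of the comparison at the heart of such systems: a trained encoder reduces each face image to a fixed-length embedding vector, and identification is simply a similarity search over the agency’s gallery of embeddings. The random “embeddings”, the 0.8 threshold and the `identify` helper below are illustrative stand-ins, not a description of Clearview’s actual system.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # illustrative cut-off; real deployments tune this against false-match rates

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict) -> list:
    """Return gallery identities whose embeddings are similar enough to the
    probe embedding, best match first. `gallery` maps identity -> embedding."""
    scores = [(name, cosine_similarity(probe, ref)) for name, ref in gallery.items()]
    matches = [(name, score) for name, score in scores if score >= SIMILARITY_THRESHOLD]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

# Toy usage with random stand-in embeddings (a real encoder network would produce these).
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.standard_normal(128) for i in range(5)}
probe = gallery["person_3"] + 0.05 * rng.standard_normal(128)  # a noisy second view of person_3
print(identify(probe, gallery))  # person_3 should be the only match
```

The threshold is where policy meets the maths: set it low and false matches (and wrongful stops) rise; set it high and genuine matches are missed. And nothing in the algorithm itself says when, or whether, a gallery entry should ever be deleted, which is precisely the question regulators need to answer.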
The issue is a thorny one because the concept of privacy varies massively between countries, with some seeing societal needs and efficiency as overriding the rights of the individual, and others being fiercely privacy-centric and demanding action to protect citizens’ rights. What’s important is that robust rules are introduced that build confidence in this type of technology. Otherwise we risk the same kind of backlash we saw with the Cambridge Analytica fallout. But Europe cannot afford to take too long to introduce such rules, or it will fall even further behind in the development and application of a technology that also has the potential to deliver huge benefits to society. (see UK cannot afford to dither on AI)