Clearview AI: The end of privacy?
By: Grace Price
Clearview AI debuted in 2017 as a small software company; three years later, it is known for a ground-breaking technological advancement: a facial recognition app that’s out to end human anonymity.
The app allegedly gives its user the ability to scan any face and immediately retrieve all public information kept on that person. This power comes with a long list of dangers and downsides, not to mention the company’s sketchy means of obtaining data and its questionable CEO.
Clearview AI sounds good in theory: its advertised purpose is to help law enforcement catch predators and identify victims with just the scan of a picture. With proper restrictions, perhaps an argument could be made to defend its use; however, misuse has already emerged in its short time in the public eye.
While the system has existed since 2017, many people only recently discovered it. At the end of January this year, London’s Metropolitan Police announced it would adopt the technology as a new form of surveillance, which is exactly what Clearview AI promises not to be. In fact, the company’s CEO, Hoan Ton-That, often uses that claim as a defense, reiterating that because the app is not a surveillance service, it is not a breach of privacy.
After news of the Metropolitan Police’s plan spread, a new onslaught of complaints surfaced, but not because of surveillance. This time, it was a wave of cease-and-desist letters from companies whose photos Clearview AI had taken. Clearview AI had amassed a collection of over 3 billion photos, which it still holds, from sites like Google, Twitter, YouTube, Facebook and Venmo.
This raised a huge problem: the targeted companies had terms of service assuring their users that something like this would not happen. Twitter, for example, found out its users’ pictures had been, in a sense, stolen and immediately demanded that Clearview AI stay clear of its platform.
The damage, however, had already been done; pictures that had appeared on any of the violated apps and sites were already in Clearview AI’s possession.
Clearview AI has also faced accusations of making false claims. In August of last year, the company told a massive network of police organizations that it had helped crack a terrorism case in New York. The claim skyrocketed use of the app, and many police departments adopted it.
The New York Police Department (NYPD), though, insisted Clearview AI was completely uninvolved with the case. The department said the suspect was caught by comparing a still from a security camera to arrest photos it already possessed and had obtained lawfully, unlike Clearview AI’s images.
A police department in Toronto admitted in February 2020 to using the technology, without its police chief’s knowledge, since October 2019. Unsurprisingly, the chief immediately ceased the department’s use of the resource upon finding out. Prior to this exposure, Clearview AI’s role in the Toronto force’s legal work had gone completely under the radar. Clearview AI gives law enforcement access to unprecedented surveillance power, allowing officers to obtain any information they desire on a person. If its use could slip past their boss for over four months, couldn’t officers covertly use it in everyday life too?
Clearview AI has also faced recent legal trouble: citizens in Illinois have filed lawsuits against the company on the grounds that it threatens civil liberties and violates multiple privacy laws. The case has just been assigned a judge, and the outcome appears unclear to those involved. If Clearview AI’s past is any indication, though, the company will once again get away with its unlawful behavior.
Even the company’s CEO has a shady background. In 2009, Ton-That became a person of interest after his website, ViddyHo, was exposed for phishing users, stealing sensitive information like passwords and credit card details through scam emails. He denied this, blaming a “software bug.” Not long after, his second website, Fastforwarded.com, was exposed for the same scheme, yet Ton-That never faced any consequences.
Public concern had initially focused on the app’s unethical ability to capture information about a person from nothing more than a blurry security camera photo. As new scandals and information emerge, people have shifted their gaze to whether it is even lawful for law enforcement to possess the resource at all, regardless of the potential good it can do.
A recent news story has provoked even more questions about whether the technology is justifiable. In late February, only about a month after becoming aware of the app’s existence, the public was alerted to the newest Clearview AI problem: hackers.
Sometime last month, hackers stole Clearview AI’s entire client list. So far, this just means the hackers know which customers have had, and continue to have, access to the software, but in the future it could mean unprecedented use of the technology’s features.
Speculation had already sparked fear of what would happen if the technology landed in the public’s hands; now that possibility seems closer to reality than fiction. Another notable problem with Clearview AI is its imperfect accuracy. The company itself has acknowledged it does not always identify people or provide information correctly. Liz O’Sullivan, an artificial intelligence researcher at the Surveillance Technology Oversight Project, spoke to BuzzFeed News about the problem.
“There has to be some personal or professional responsibility here. The consequences of a false positive is that someone goes to jail,” O’Sullivan said.
As technology continues to adapt and advance, people must remain wary of what new products and resources mean for their safety, especially when one is an AI specifically designed to eliminate their privacy.