Blog Post

Reflections on artificial intelligence, ethics, and technology

Getting AI Ethics Exactly Wrong

January 23, 2023

If you want people to trust AI, look at how Clearview.ai behaves, and then do the opposite.

Clearview.ai came to light in a New York Times exposé highlighting the company’s unscrupulous business strategy. The paper revealed that Clearview scrapes billions of photos from social media and then links those photos to personal information. It then sells the results, in the form of a searchable database, to law enforcement (at the federal, state, and local levels), universities (such as Columbia and the University of Alabama), and corporations (such as Albertsons, Bank of America, Home Depot, Rite Aid, and Walmart). Clearview has also made its services available to countries with highly questionable human rights records.

Clearview’s main business model is an attack on individual privacy. Not exactly the sort of thing that engenders trust in AI.



And here we go again. At the end of 2022, the French privacy watchdog CNIL ruled that Clearview was unlawfully processing private information, and ordered the company to do two things: stop collecting data, and delete the data already collected on French residents. The CEO’s response:

“There is no way to determine if a person has French citizenship purely from a public photo from the internet, and therefore it is impossible to delete data from French residents.”

It remains to be seen whether the US courts will shield Clearview from French law. But even if they do, similar orders are in process in Australia, Italy, and the United Kingdom. The company is also being sued in California, Illinois, New York, and Virginia. Clearview’s bad behaviour has even attracted the attention of Congress.

What can we learn from this? A few things. First, we can see that a lack of proper consideration of AI Ethics is putting the company and its investors in terrible jeopardy. When a large part of your effort goes to defending yourself against lawsuits, and no company that might buy your services wants to be publicly associated with you because you are a known bad actor, that starts eating into your bottom line. It saps your strength and your ability to carry out the kind of innovation necessary to excel in the hyper-competitive world of technology. And it makes your future dependent on the vagaries of the courts, which may simply decide to pull the plug on you.


Second, and more specifically, the company’s response to the French order shows an inability to anticipate even the most obvious blowback. Before this business model was ever put into practice, a giant threat ought to have been noted: loss of privacy is a direct harm in a democratic society, and people are going to notice. Even if we don’t pass judgement on Clearview’s motivations, the company should have taken precautions in expectation of exactly the sort of order issued by the French regulator.

This means the company should either have built its technology so that residents can be removed from the database by jurisdiction, or it should not have assembled the database in the first place. As awareness of Clearview and its activity grows, so will distrust in the company, and in AI generally. Democratic governments and corporations (which don’t want to be seen flouting privacy, at least in the public sphere) will increasingly bring pressure to bear. In the end, it’s a lack of AI Ethics that will destroy the company.