
AI Bill of Rights and the Trust Deficit

February 16, 2023

Airline travel depends on trust in something that should be intuitively frightening. It involves moving at hundreds of miles per hour, thousands of feet above the ground, in a pressurized metal tube. On the face of it, this doesn’t sound very safe. But airline travel is safe (mostly), and we trust it (usually). We trust that the technology will function without harming us, that design tolerances exceed the stresses likely to be encountered by an aircraft in flight, and that backup systems are in place in case something goes wrong. Trust is essential, or very few people would be flying. As we enter the AI revolution, trust is at the heart of how readily and comfortably people will engage with a technology that is as new today as airline travel was a century ago — and even more dangerous if not properly managed.

The government has taken note. The Biden administration recently released a framework designed to mitigate the harms caused by AI and inspire trust in the technology: the Blueprint for an AI Bill of Rights. The Blueprint’s preamble notes several well-known harms of AI: unsafe or biased patient care, discriminatory hiring algorithms, and extensive data collection on social media without explicit consent. It then asserts that such outcomes are deeply harmful, but not inevitable.

What is good about the Blueprint is that it shows the government has been paying attention and has recognized that AI has a credibility gap. Experts have long understood the technology’s potential harms and its largely unregulated state. But when the government gets involved, it’s different. The Blueprint puts organizations on official notice: “Anything goes” will soon be gone.

What’s less good about the Blueprint is its generality. It is simply too broad to address the main challenge confronting AI: trust. Just as we would be reluctant to fly on a plane we didn’t trust, so too are we wary of using AI systems we don’t trust. Worse still, distrust of particular AI systems easily bleeds into distrust of AI in general.

The trust deficit is enormous. The Pew Research Center released a report finding that more Americans are concerned than excited about AI. This is perfectly understandable: AI systems have played a particularly shabby role in undermining our institutions, spreading bias, and even causing deaths. AI transgressions are numerous and easy to find.

How do we address the trust deficit? The Blueprint doesn’t. It lends itself to checkboxing, telling us, in the most anodyne language possible, what principles we should espouse. If only ethical AI were that simple. First, checkboxing doesn’t work. Second, what happens when it isn’t obvious which ethical principle applies to a case, or when principles come into conflict? The Blueprint makes broad, general sense, but it does little to help those who build and deploy real-world AI systems.

Look at the Blueprint’s Algorithmic Discrimination Protections. According to this principle, nobody should be subject to discrimination by algorithm, and we are given a list of helpful categories: race, color, ethnicity, gender, age, and so on. Now, suppose you have an AI that does a first pass over job candidates, screening out those who are not a good fit for a given position. There are cases where a candidate might obviously suffer discrimination, such as if your AI discarded applicants from all traditionally marginalized groups. But there are also cases where it is not clear whether discrimination is happening at all: what about candidates who have one grandparent who is a member of such a group, or whose background is unknown?
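
To see why this is harder than checking a box, consider the standard statistical screen for hiring discrimination, the so-called four-fifths rule. Here is a minimal sketch in Python; the group labels, outcome data, and 0.8 threshold are illustrative assumptions, not anything the Blueprint specifies. Note that the whole approach presumes candidates sort cleanly into discrete groups, which is exactly what the hard cases above resist.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Per-group pass rates from (group, passed_screen) pairs."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, did_pass in outcomes:
        totals[group] += 1
        if did_pass:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's pass rate relative to the highest-rate group.

    A ratio below 0.8 is the conventional red flag (the four-fifths
    rule), but candidates of mixed or unknown background have no
    obvious place in these discrete groups at all.
    """
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical first-pass screening outcomes: (group, passed_screen)
outcomes = [("A", True), ("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f}, ratio={ratio:.2f} ({flag})")
```

Even this simple audit only detects a disparity; it cannot say whether the disparity is justified, which is where the ethical reasoning begins.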

Furthermore, anti-discrimination policies can easily come into conflict. For example, if your company is composed entirely of people of one gender, you might specifically seek out candidates of a different gender (and hence implicitly discriminate on the basis of gender). Or you might seek candidates with enough experience to fill a senior position (and hence implicitly discriminate on the basis of age). The conditions under which these strategies ought to be employed are bounded by ethics (and sometimes law), and deciding among them requires careful, reasoned consideration. It is not as simple as deciding whether to check a box.

The government’s AI Blueprint is a good start. But to realize the dreams we have invested in AI, we need AI Ethics to help us think through the difficult issues. AI Ethics is how we address the trust deficit.