Stop Using Checkbox Ethics (Part 2)

January 12, 2023

In Part 1, we discussed the ways in which checkboxing fails for AI Ethics: it is fragile, blunt, and closed. In Part 2, we introduce an alternative that avoids the pitfalls of checkboxing: an inquiry-based approach to AI Ethics.

As a professor of philosophy, I am well acquainted with inquiry as a strategy for grappling with complex challenges. It’s an approach that goes back at least to Socrates' call to live an examined life, and was crystallized in John Dewey's philosophy of education. We can draw on the insights of Socrates and Dewey by approaching issues in AI Ethics in two parts: a good question, and a structured discussion. Let’s look at what each of these involves.

A Good Question

At the beginning of any inquiry-based approach is a good question, and what makes a question good isn’t always easy to pin down. A good question is open-ended without being unanswerable, and it encourages productive discussion.

For example, suppose we are developing a machine-learning model and are considering which features the model should track. We might ask, “What features of our model will regulators object to in five years?” This isn’t a particularly good question, because it is unanswerable: we can’t know what the regulatory landscape will look like in five years, so we can only make vague predictions. Furthermore, this question is unlikely to foster productive discussion, because it is closed: it asks for an exact list.

A better question might be, “What are some of the features likely to raise red flags with regulators by the end of next year?” In this case, we moved the timeline up: it’s reasonable, looking at how the regulatory landscape has changed over the past two years, to infer what might raise red flags by the end of next year. Furthermore, the question signals an openness to discussion: we are looking for some likely features. All of a sudden, there are multiple reasonable responses.

Notice that a multiplicity of reasonable answers is still compatible with the question being answerable. Good questions often frame an issue as one of likelihoods, while still acknowledging that there are better and worse answers.

A Structured Discussion

Structured discussion lies at the core of applied ethics, and inquiry is a natural fit. Structured discussions need multiple participants (you can’t have one alone), and a commitment to open consideration. If everyone at the table is echoing the opinion of a single person, you’re not really having a discussion, and you run the risk of leaving important ideas on the table.

But the most important feature of structured discussion is the structure part. In AI Ethics, this involves three phases: Slow it Down; Think it Through; and Get it Right. Suppose we have developed a text-to-image AI that we are making available to the public (such as DALL-E or Stable Diffusion). One of the ethical questions we might ask is: what should we reveal about the training dataset?

Slow it Down. For the first phase, our aim is to stop ourselves rushing to an answer that might seem intuitive. Instead, we want to consider the context and background (Do key players have a reasoned approach to this question? What are our technological restrictions?); our possible actions (Which training images should we reveal? To whom should we reveal them?); and who the stakeholders are (members of our organization; our clients; our potential clients; the public at large; artists whose work is in the training set). Answering some of these questions might require further research before moving to the next phase.
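
If it helps to make this concrete, here is what the output of this first phase might look like, written down as a small Python sketch. Nothing here is prescribed by the approach itself; the class and field names and the example entries are hypothetical, and the only point is that Slow it Down produces an explicit inventory of context, actions, and stakeholders rather than an answer.

```python
from dataclasses import dataclass, field

@dataclass
class SlowItDown:
    """Inventory from the first phase: no decisions yet, just an
    explicit record of context, candidate actions, and stakeholders."""
    question: str
    context_notes: list[str] = field(default_factory=list)     # background and constraints
    candidate_actions: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)
    open_research: list[str] = field(default_factory=list)     # to settle before phase two

# Hypothetical entries for the text-to-image example discussed above.
record = SlowItDown(
    question="What should we reveal about the training dataset?",
    context_notes=["How have key players approached this question?",
                   "What are our technological restrictions?"],
    candidate_actions=["Reveal the full image list",
                       "Reveal summary statistics only"],
    stakeholders=["our organization", "clients", "potential clients",
                  "the public at large", "artists whose work is in the training set"],
    open_research=["Can we enumerate the training images at all?"],
)
```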

Think it Through. The second phase is where much of the important creative thinking gets done. This is where we reason out which actions and stakeholders merit moral consideration, and how stakeholders may be affected by each action. We aim to ground our discussion by making manifest our reasons for taking a position, and avoid appealing to our private intuitions. For example, you might argue that the general public does not merit moral consideration for this particular question, or merits less consideration than artists; but the aim is to be able to ground this position by saying why. The same goes for positions on how actions affect stakeholders: why is the stakeholder likely to be affected in a certain way?
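
Continuing the purely illustrative sketch from above, a position taken in this phase can be recorded together with the reasons offered for it, so that nothing rests on an unstated intuition. Every claim and reason below is a hypothetical example, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class Position:
    """A claim from the discussion, carried only by its stated reasons."""
    claim: str
    reasons: list[str]  # an empty list would mean the claim is a bare intuition

p = Position(
    claim="Artists in the training set merit more consideration than the general public",
    reasons=[
        "their work is used directly and identifiably by the model",
        "they bear a concrete economic risk, while the public's stake is diffuse",
    ],
)
assert p.reasons, "no stated reasons: this is an intuition, not a position"
```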

Get it Right. For the third phase, weigh the various options to converge on a reasonable solution. It is very unlikely that all the options we have considered are equally good and serve stakeholders equally well. The goal of this phase is to pick out the best options, knowing that while we lack perfect knowledge of the future, we have at least determined which stakeholders merit moral consideration. This usually takes the form of a series of balance-of-evidence arguments, in which stakeholders are weighted relative to each other and we try to balance the likely benefits and harms. Again, inquiry encourages us to avoid intuitive assertions and instead to argue persuasively for our positions.
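
For illustration only, here is one way the balance-of-evidence step could be caricatured in code. The stakeholder weights and benefit/harm scores are invented placeholders, and a weighted sum is just one crude formalization of “weighing”; the real work is the discussion that justifies the numbers, not the arithmetic.

```python
# A toy balance-of-evidence tally, assuming the discussion has already
# produced stakeholder weights and per-option benefit/harm estimates.
# Every number below is a placeholder for a judgment made in phase two.

stakeholder_weights = {                    # relative moral weight agreed in discussion
    "artists in the training set": 3.0,
    "clients": 2.0,
    "general public": 1.0,
}

# Net expected effect of each option on each stakeholder (-1 harm .. +1 benefit).
options = {
    "reveal the full image list": {
        "artists in the training set": +0.8,
        "clients": -0.2,
        "general public": +0.3,
    },
    "reveal summary statistics only": {
        "artists in the training set": +0.2,
        "clients": +0.4,
        "general public": +0.1,
    },
}

def weighted_score(effects):
    """Weight each stakeholder's expected effect and sum the results."""
    return sum(stakeholder_weights[s] * e for s, e in effects.items())

# Rank the options from best to worst under this (assumed) weighting.
for name, effects in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(effects):+.2f}")
```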

Of course, it is natural to want to find the perfect answer. But this is not what inquiry aims to deliver. The whole point of structured discussions is to surface multiple reasonable courses of action and discard the bad ones. And as Aristotle correctly observed (Nicomachean Ethics II, 9), we are blamed for missing the mark by a lot, not by a little.

Start Using Inquiry

There is a lot more to inquiry than we have covered in this brief exploration, but this is the seed of the idea: inquiry allows our AI Ethics discussions and decisions to be serious, open, and grounded. Most importantly, it avoids the pitfalls of checkboxing. Inquiry might seem like a tall order, but it’s far cheaper than the cost of potential errors. Inquiry in AI Ethics lets us anticipate issues before they become unmanageable. And the more we engage in it, the better we get.