
Reflections on artificial intelligence, ethics, and technology

AI Ethics and Wicked Problems

March 15, 2023

You work in AI, and you realize your organization is starting to make decisions with serious ethical consequences. What should you do? You could just do what your intuition tells you. But scandals resulting from shoot-from-the-hip ethics are legion: the Cambridge Analytica data scandal, IBM's photo-scraping without permission, Google's harvesting of private healthcare records, to name but a few. Such scandals cost money, ruin reputations, and sometimes destroy organizations.

Knowing about AI Ethics is a better approach, but it brings challenges. The problems that AI Ethics deals with are wicked problems, which makes them hard to solve. In today’s article we’re going to look at the nature of wicked problems and how to make progress on them. I can sum up the advice I’m going to give you upfront (tl;dr):

  • If you want shoot-from-the-hip ethics scandals, just follow your intuition
  • If you want passive content, watch a YouTube video on AI Ethics
  • If you want microcredentials, participate in a generic online workshop
  • If you want to approach AI Ethics in a serious way, with a view to avoiding organizational scandals, learn to use inquiry

Let’s drill down further on wicked problems and inquiry.

What is a Wicked Problem?

Wicked problems were first described by design theorists Horst Rittel and Melvin Webber. Such problems have ten characteristics, but when applied to AI Ethics, four are essential:

  • There are several reasonable solutions
  • Solutions tend to be better/worse rather than right/wrong
  • Solutions cannot be discovered by trial and error
  • There is little tolerance for error

For example, suppose you are developing a facial recognition system. You have billions of images scraped from the web, together with corresponding personal information. Your goal is to train a neural network that takes photos of people as input and returns the corresponding personal information as output. You then intend to sell access to your system to clients.

Although a full ethical review of this system involves many different aspects, for the sake of our example let's focus on one question: how much personal information should be revealed to a client? Straight off, you can probably sense how difficult this question is. A doctor treating an unidentified patient in an ER might make lifesaving choices if she can match the face to a name and look up the patient's medical background. But a stalker who matches a face to a name might use that same information to determine an address and phone number. Between these poles lies a range of cases, some involving reasonable uses of the system, others not. So how much information should be revealed to a client? A wicked problem like this has several reasonable solutions.
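To make that concrete, here is a minimal sketch, in Python, of one possible disclosure policy. Everything in it is hypothetical: the ClientTier names, the PersonRecord fields, and the disclose function are my own illustration, not a description of any real system. The point is that this is only one of several reasonable policies; another team could defensibly draw the lines elsewhere.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ClientTier(Enum):
    """Hypothetical access tiers a vendor might define for clients."""
    UNVERIFIED = auto()         # e.g., an anonymous account
    VERIFIED_BUSINESS = auto()  # e.g., a vetted commercial client
    EMERGENCY_MEDICAL = auto()  # e.g., a credentialed ER physician


@dataclass
class PersonRecord:
    name: str
    address: str
    phone: str
    medical_id: str


def disclose(record: PersonRecord, tier: ClientTier) -> dict:
    """Return only the fields this tier is permitted to see.

    This is one of several reasonable policies: another design might add
    tiers, require per-request audits, or refuse matches for some clients.
    """
    if tier is ClientTier.UNVERIFIED:
        return {}  # reveal nothing to unverified clients
    if tier is ClientTier.VERIFIED_BUSINESS:
        return {"name": record.name}  # identity only, no contact details
    if tier is ClientTier.EMERGENCY_MEDICAL:
        # Enough to pull up a medical history in an emergency
        return {"name": record.name, "medical_id": record.medical_id}
    raise ValueError(f"Unhandled tier: {tier}")


if __name__ == "__main__":
    match = PersonRecord("Jane Doe", "123 Main St", "555-0100", "MRN-0042")
    print(disclose(match, ClientTier.EMERGENCY_MEDICAL))
```

Even this toy policy forces value judgments: whether a verified business should see a name at all, and who counts as an emergency medical client, are themselves contested questions, which is part of what makes the problem wicked.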

Second, wicked problems tend to have solutions that are better or worse rather than right or wrong. Imagine that there is some ideal solution that determines exactly who should get access and who shouldn't. Better (or worse) solutions would be more (or less) aligned with that ideal. Now suppose you can't be absolutely certain you've hit on the ideal (as is usually the case with problems in AI Ethics). That doesn't mean some solutions aren't still better than others. Even if I don't know exactly where Jupiter is, it is better to aim my telescope at the night sky than at a wall. Likewise, even if I don't know the ideal policy for giving information to clients, some policies are still better than others.

Third, solutions to wicked problems cannot be discovered by trial and error. This is usually because the options are mutually exclusive or irreversible. Either you give the doctor access to the data or you don't. And once the information is out, it's out: if you give the stalker access, you cannot hit undo.

Fourth, wicked problems come with little tolerance for error. If you get a reputation for giving stalkers, disreputable businesses, or oppressive governments access to personal information based on photographs, you will irrevocably harm your organization's interests. You will lose trust, get sued, or worse. You only have to make a mistake once for it to blow up all over the New York Times.

Much more can be (and has been) said about wicked problems. But the bottom line is this: wicked problems like the example above are challenging to solve and risky to leave unsolved. AI Ethics is focused on mitigating these risks, but you have to approach it in the right way. This means using inquiry.

What is Inquiry?

Inquiry, first articulated in John Dewey's philosophy of education, is the preferred method for approaching wicked problems, and it is particularly well suited to problems in AI Ethics. With inquiry, we start with a good question and then engage in a process with the following characteristics:

  • Collaborative: Inquiry explores a solution space from multiple perspectives, gathers and analyzes information, and considers how diverse stakeholders are affected
  • Iterative: Inquiry asks questions, pursues answers, and identifies blind spots that need further investigation; the cycle continues until the context is understood and a satisfactory solution is found
  • Evidence-based: Inquiry relies on clear evidence to inform decision making, including ethical principles, expert opinions, and empirical data
  • Reflective: Inquiry prompts us to reflect on our assumptions and values, and draws out biases that influence our thinking; this helps us develop nuanced solutions

As you can see, inquiry is a complex process, but that is to be expected when we try to solve wicked problems. It requires that we actively engage with difficult questions and grapple with their subtleties. The good news is that you get better at inquiry with practice: learn how it works and apply it to your own work.

How do you learn how it works? Inquiry is an active process, requiring engagement with others and useful tools for critical thinking. This is not easily found in a traditional classroom, presentation, or online workshop setting. Instead, what is needed is a live experience working with others who are skilled in the art of inquiry. By doing this, you will put yourself in a position to recognize opportunities to use inquiry in your own work.