
Stop Using Checkbox Ethics (Part 1)

January 6, 2023

Checkboxing. You’re probably familiar with this ubiquitous aspect of corporate compliance. Whether you sit on a board, run your own company, or work 9–5, you have almost certainly been required to attend mandatory training sessions. The topics may be important and the goals laudable, but in the end these sessions too often become about checking off boxes. It happens in the private, non-profit, and public sectors alike.

When it comes to AI Ethics, checkboxing isn’t just a tiresome exercise — it’s actively harmful to an organization’s interests. It strikes at the heart of innovation and institutional health, because it fails to provide a sustainable framework for developing goods and services. We risk wasting talent, time, and money on projects whose implications we never understood because we paid only nominal attention to ethical guidance. Unintended consequences can also damage an organization’s reputation, and in serious cases, harm the very people or interests we aim to serve.



The individuals working on AI must be encouraged, required even, to engage deeply with the ethical implications of their work.


The use of AI is growing exponentially, and yet many organizations are already falling into the checkboxing trap and the risks it entails. We can see this by noting the impact of the Association for Computing Machinery’s Code of Ethics and Professional Conduct. Updating the Code in 2018 was a good idea in principle: it offered a modern statement of the ethical standards organizations should follow. Yet the parade of ethical scandals in AI continues unabated. The Code has had little effect on the choices developers make, because the appeal of checkbox ethics is both obvious and strong: little time and even less intellectual energy spent, followed by the immediate gratification of checking off that box.

At first glance, checkboxing may appear to work. That’s because it does work when the question is straightforward and has a simple answer. But when applied to something as complex as AI, checkboxing suffers from serious shortcomings. First, it’s fragile: it does not adapt easily to new circumstances or capture complexity. Second, it’s a metric, and a very blunt one at that: checkboxing allows and even encourages us to rationalize poor decisions and evade the truly important questions. Finally, it’s closed to intellectual engagement, which is our most valuable asset when it comes to understanding ethical questions, powerful technologies, and the nexus between them. Let’s explore these three issues in turn.

Fragile

Consider this scenario. In an effort to create a Twitter bot that does not breach ethical norms, developers carefully filter and clean the training data. Box checked. But those same developers also enable the bot to learn from and respond to prompts on Twitter, so that it can take part in conversations in real time. In no time at all, the bot is spewing racist and gender-based vitriol.

This is not a hypothetical scenario: it’s what happened to Tay, a Microsoft chatbot operating on Twitter. Tay went from declaring that “humans are super cool” to endorsing Nazism in less than 24 hours.



When it comes to AI Ethics, checkboxing isn’t just a tiresome exercise — it’s actively harmful to an organization’s interests.


What the developers failed to do was move past the checkbox. Filtering and cleaning the data was a good first step: the bot should not have been trained to be racist from the outset. But by checking the “filter and clean” box, the developers acted as if the AI Ethics concerns were addressed. They were not. What they needed to do was go beyond the fragility of checkboxing and ask whether their approach would introduce new and unforeseen problems. That is a difficult question, but it is entirely tractable. We fail to ask it because it doesn’t fit our checkbox culture.
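To make the fragility concrete, here is a minimal sketch in Python, with hypothetical names (this is not Tay’s actual code): the training corpus passes through a filter, but a second learning path added later, live prompts, bypasses that filter entirely.

```python
# A fragility sketch with hypothetical names -- not Tay's actual code.

BLOCKLIST = {"badword"}  # stand-in for a real content filter


def is_clean(text: str) -> bool:
    """Naive keyword filter: roughly what the checked box buys."""
    return not any(term in text.lower() for term in BLOCKLIST)


class ChatBot:
    def __init__(self, corpus: list[str]) -> None:
        # Path 1: offline training data is filtered -- box checked.
        self.memory = [doc for doc in corpus if is_clean(doc)]

    def learn(self, message: str) -> None:
        # Path 2: online learning from live prompts, added later and
        # never re-examined because the ethics box was already checked.
        # Note that is_clean() is never applied here.
        self.memory.append(message)

    def respond(self) -> str:
        # Echo-style responder: repeats the most recent thing absorbed.
        return self.memory[-1] if self.memory else ""


bot = ChatBot(["humans are super cool"])
bot.learn("hostile propaganda from an adversarial user")
print(bot.respond())  # the bot now parrots the hostile input
```

The checked box covers the first path; the second path, bolted on afterward, was never held to the same standard. That gap is the fragility.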

Blunt

That takes us to the second problem. The economist Charles Goodhart observed that when a metric becomes the goal, it ceases to be a good measure. In essence, universal metrics tend to create perverse incentives. In one classic example, nineteenth-century British rulers were concerned about the number of venomous cobras in Delhi, and so instituted a bounty for dead snakes. Many bounties were duly disbursed, in part because enterprising people were breeding cobras to collect the reward.

Difficulties posed by blunt metrics are familiar to anyone working in reinforcement learning: an AI will often take an unexpected path to its goal. The same happens in AI Ethics. I was recently part of a data science reading group in which people discussed being required to remove racially identifiable features from their datasets. They checked the box, but in doing so they lost the very information needed to audit their models: with the protected features gone, there was no way to determine whether a model had captured racial bias through some other proxy. At the same time, the data scientists could claim to have done exactly what was required of them. This type of scenario lets developers justify poor decisions by insisting they followed the instructions they were given, while evading the very problem those instructions were meant to address.
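Here is a minimal sketch of that audit problem, assuming hypothetical data and column names: once the protected attribute is dropped, the model can still absorb race through proxies, but the group-level comparison that would reveal it can no longer be computed.

```python
import pandas as pd

# Hypothetical data and column names -- a sketch of the audit problem.
df = pd.DataFrame({
    "income":   [40, 80, 35, 90, 45, 85],
    "zip_code": ["A", "B", "A", "B", "A", "B"],  # can still proxy for race
    "race":     ["X", "Y", "X", "Y", "X", "Y"],
    "approved": [0, 1, 0, 1, 0, 1],
})

# Box checked: the racially identifiable feature is removed, so a model
# trained on this data never sees it directly.
df = df.drop(columns=["race"])

# But a model can still learn race through proxies such as zip_code, and
# the audit that would reveal this -- comparing approval rates across
# racial groups -- is now impossible: the grouping column is gone.
try:
    print(df.groupby("race")["approved"].mean())
except KeyError:
    print("cannot audit: the protected attribute was removed")
```

Keeping the attribute out of the model’s inputs while retaining it for auditing would serve both goals, but the checkbox framing never surfaces that option.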

Closed

Engaged minds are our most important asset in meeting ethical challenges, yet intellectual engagement is exactly what checkboxing suffocates. Sure, we think a little about whether to check the box, but for the most part checkboxing prevents us from engaging with the issue at hand in any serious way. It is intellectually closed. Data scientists as a group tend to be intelligent and capable. Handcuffing them with checkboxes signals that we do not trust their judgement. It says: just take this list, and cover our ass.



Checkboxing allows and even encourages us to rationalize poor decisions, and evade the truly important questions.


This is demoralizing, and it breeds both contempt and apathy. Nothing could be more harmful to ethics, which requires careful consideration, structured discussion, and above all active engagement. Ethics is not top-down; it cannot simply be decreed by management. The individuals working on AI must be encouraged, required even, to engage deeply with the ethical implications of their work.

A careful approach to ethics will not only avoid the kinds of problems described here; it will also serve as a framework, a sort of site map that helps direct data scientists’ energies. It allows them to work from a common foundation while exploring disparate ideas and projects, avoiding the needless conflict and confusion that hamper rather than foster creativity and progress. Such a framework is possible because AI Ethics differs from the everyday ethics people engage with, which vary among individuals and cultures. AI Ethics offers a fully shareable, unifying understanding that should be woven into an organization’s mission.

Part 2 of this blog (next week) offers a solution to the problem of checkboxing, starting with the concept of genuine inquiry and the steps it involves.