Resources List

A curated collection of resources

Books


Technology and the Virtues

Shannon Vallor — Oxford, 2016

There are many different ways to think about ethics, but the three most common approaches are utilitarianism (choose the action that maximizes fulfilment), deontology (choose the action that follows an inherently good rule), and virtue ethics (choose the action that builds character). While most people are utilitarians by default, Vallor argues that we should approach ethics from the perspective of virtue ethics.

According to Vallor, the modern age is afflicted with “acute technological opacity,” meaning we are unable to predict the future of technology or how our technologies will be used. Thus, ethical systems that depend on anticipating the future (such as utilitarianism) will not work. Instead, she argues that we must cultivate good character in order to prepare for the unknown. Vallor then gets into the details of what a good character is and how to cultivate it, grounding her work both in historical figures (such as Aristotle and Confucius) and in particular applications (such as social media, robot ethics, and weapons of war).

This book’s concern is technology in general rather than AI in particular, but its discussions have direct bearing on questions of AI Ethics. The prose is sometimes dense, but always lucid. It is an excellent resource for practitioners of technology who want a serious grounding in how to approach ethical questions.

Book

Categories: book • scholarly


The Alignment Problem

Brian Christian — W.W. Norton, 2020

The alignment problem is the challenge of aligning AI behaviour with human values. It can be seen as the central problem of AI Ethics, because we judge that an AI has gone wrong when it fails to align with our values, even when it does exactly what we told it to do. This book traces recent developments in AI alongside the alignment challenges they raise.
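
To make that gap concrete, here is a minimal, hypothetical sketch (a toy example of my own, not one from the book): an agent that perfectly optimizes the objective we wrote down, yet still violates the value we actually cared about.

    # Toy illustration of misalignment: the agent optimizes exactly what we
    # specified, and that is precisely the problem.

    def proxy_reward(route):
        # What we told the system to optimize: delivery speed.
        return -route["minutes"]

    def true_value(route):
        # What we actually care about: speed, but never cutting across the lawn.
        return -route["minutes"] - (100 if route["crosses_lawn"] else 0)

    routes = [
        {"name": "sidewalk", "minutes": 10, "crosses_lawn": False},
        {"name": "shortcut", "minutes": 6, "crosses_lawn": True},
    ]

    chosen = max(routes, key=proxy_reward)
    print(chosen["name"])                               # "shortcut": optimal for the stated objective
    print(true_value(chosen) < true_value(routes[0]))   # True: worse by the values we left unstated

The agent is not broken; the specification is. That, in miniature, is the problem the book explores.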

Christian’s strength is to take a technical subject and weave together the story of its development and challenges in an entertaining way. The prose is not dense, and each chapter focuses on presenting a single idea. The book relies on an intuitive sense of right and wrong, rather than a philosophical justification. But the questions it raises, particularly concerning research and industry, are very important and intensely philosophical.

There is also an audiobook version, where Christian reads his own book (about 13.5 hours), and an 80,000 Hours interview that hits most of the highlights (about 3 hours).

Book Audiobook Podcast

Categories: book • popular


Rebooting AI

Gary Marcus and Ernest Davis — Pantheon, 2019

There’s no denying there is a lot of hype surrounding AI: AI can solve any problem; AI will soon surpass human intelligence; AI is the new electricity. Much of the excitement around AI is real and justified, but it can be difficult to separate what is truly possible from the dreams of futurists. Marcus and Davis, researchers who have worked in AI for decades, provide a sober evaluation of what AI can really do and where its strengths and weaknesses lie.

One of their concerns, after considering the reasons for the gap between AI’s promises and what it has delivered, is trust. Trustworthy AI is grounded in reason, common sense, and robust engineering practices. For AI to deliver on its promises, we have to be able to trust it: its foundations must be resilient, and its actions must be bounded. Marcus and Davis draw analogies to the engineering of other technologies, and advocate AI systems that anticipate failure, are easy to maintain, and are evaluated against meaningful metrics.

The discussions in this book are quite enlightening. The authors pull together lessons useful to executives considering an AI strategy, AI developers, and data science practitioners alike. The approach is broad but practical, and provides a sober assessment of the state of the art.

Book

Categories: book • popular


Articles


Your Apps Know Where You Were

Jennifer Valentino-DeVries et al. — New York Times

The New York Times acquired access to a database holding location data from more than a million phones in the New York area. Using this database, they were able to trace people, including children, as they moved around the city. When you give an app permission to access your location for a specific purpose (such as finding a parking spot, checking the weather, or following a local sports team), it turns out the app will often use that permission to track your whereabouts generally. There is no federal law limiting the collection and sale of such information.

According to the article, a location data CEO defended the practice by saying, “You are receiving these services for free because advertisers are helping monetize and pay for it. You would have to be pretty oblivious if you are not aware that this is going on.” And yet the full story is, naturally, somewhat more complex.

Article (Times) Article (Archive.org)

Categories: article • investigative


ELIZA

Joseph Weizenbaum — Communications of the ACM

One of the most famous researchers from the early days of AI is MIT professor Joseph Weizenbaum. He was behind the famous ELIZA program, a very early natural language processing AI (written around 1965). There has even been a full-length documentary made about him, a rarity amongst researchers in computer science.

What motivates this article is concern for humanity’s future with AI. Weizenbaum was shocked to see people treating ELIZA as an intelligence rather than as a computer program. The claims, in 2022, that a chatbot had become sentient are an example of exactly what kept Weizenbaum up at night. In this article, he lays bare the details of ELIZA, specifically with a view to showing readers that the program is not intelligent: once the details are revealed, he says, “the magic crumbles away.”

Weizenbaum became vehemently skeptical of AI in his later years, and could be relied upon to voice his concerns forcefully whenever asked. From a historical perspective, though, it is interesting to see how much faith he placed in transparency: he believed that if we know how an AI works, we will automatically understand that it is just a human creation, and not actually intelligent.

Article (PDF) ELIZA Online Documentary

Categories: article • scholarly • foundations


Transparent AI

Fenna Woudstra — Filosofie in Actie

Even neural networks of moderate size can seem impenetrable. We know that a model produces a certain output for a given input, but often we do not know why it produced that output. This is not to say that we cannot give a technical explanation: we can say how the output was produced mathematically. But a mathematical account is of little comfort when the results cannot be explained in human terms. Finding a way for our neural networks to be not black boxes but explainable is therefore highly desirable.
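
To illustrate the distinction (this sketch is mine, not Woudstra’s), consider a tiny network with stand-in weights. The arithmetic below is a complete “technical explanation” of the model’s output, yet it explains nothing in human terms.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 16)), rng.normal(size=16)  # stand-ins for trained weights
    W2, b2 = rng.normal(size=(16, 2)), rng.normal(size=2)

    def predict(x):
        hidden = np.maximum(0, x @ W1 + b1)  # ReLU layer: every step is known mathematics
        return hidden @ W2 + b2              # class scores: fully determined, yet opaque

    x = np.array([0.2, -1.3, 0.7, 0.0])      # e.g. features describing a loan applicant
    print(predict(x))  # we can state exactly *how* these numbers arise,
                       # but not *why* the model favours one outcome

Every multiplication above is transparent in the mathematical sense, and still no one can say which feature mattered, or why. That gap is what explainable AI tries to close.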

Sometimes transparent AI is thought to be synonymous with explainable AI. In this article, Woudstra reminds us otherwise: explainable AI is part of transparent AI, but the latter is much broader. For example, if AI is truly to be transparent, we must know the environmental costs of training and using the model; whether humans are used to moderate content, and what their working conditions are; what the user’s rights are; how the training data was collected and modified; and the degree to which ethical considerations are integrated into the model and its use.

Below are two links: the original research paper, and a summary. The research paper contains many links to resources on the principles of AI ethics. The summary cites additional popular resources.

Paper Summary

Categories: article


Moral Crumple Zones

Madeleine Clare Elish — Engaging Science, Technology, and Society

A vehicle’s crumple zone protects the driver and passengers in the event of an accident: we protect the occupants by sacrificing a component of the vehicle. In this paper, Elish draws our attention to the moral crumple zone, which is designed to protect a technological system at the expense of the nearest human operator. In essence, a moral crumple zone allows us to deflect blame from technologies onto operators, and in doing so miss the real issue.

Technologies are often designed in ways that make accidents more likely. Elish considers three cases: the partial nuclear meltdown at Three Mile Island, the crash of Air France flight 447, and the first Uber vehicle to kill a pedestrian, in Tempe, Arizona. In each case, governments and companies sought to place the blame on human operators, while downplaying the design choices that contributed significantly to the accidents.

The question of responsibility when humans interact with technology is an especially persistent challenge for AI Ethics. This paper demonstrates that the answers are not simple, and invites us to consider the risks built into technological design.

Article (PDF)

Categories: article • scholarly


Ethics as Design

Caroline Whitbeck — The Hastings Center Report

There are parallels between solving an engineering challenge and solving an ethical challenge. Such challenges typically have several acceptable solutions, there are different ways to effect those solutions, and there are outcomes we definitely want to avoid. In this essay, Whitbeck argues that solving a moral challenge is often not a matter of choosing the single best solution, but of finding one, from among the acceptable possibilities, that will achieve a reasonable outcome.

Often, ethics is seen as a rule-based system whereby we are exhorted to pick the best option. While some ethical systems do advocate such an approach, other thinkers are more alive to the constraints of practical action, such as uncertainty and time pressure. These constraints are part of what it is to act in the world: we cannot ignore them, but neither do they justify our choices on their own.

Whitbeck seeks to undermine the idea that ethics is strictly rule-based by drawing out an analogy to the engineering design process. The application to AI Ethics is both persuasive and incisive.

Article (PDF)

Categories: article • essay


The Challenge of Cultural Relativism

James Rachels

Rachels explores cultural relativism, the idea that morality is just a code for the socially approved habits of a particular culture. Rather than arguing against cultural relativism directly, Rachels argues against its consequences: if we want to reject those consequences, and we are being rational, then we must reject relativism itself.

The consequences of not rejecting relativism, according to Rachels, are quite dire. The relativist struggles to be consistent while critiquing any of the practices of her own culture (because she defines those practices as morally good for her), or criticizing abhorrent practices of other cultures (because she defines those practices as morally good for them).

This article is succinct, and challenges relativists on their own grounds. It originally appeared as a chapter in The Elements of Moral Philosophy, but is frequently read independently, owing to the author’s style and the perspicuity of his commentary. It provides an easy way into a central ethical question.

Article (PDF)

Categories: article • popular


Truth as Correspondence

Tony Roy

This article is concerned with the “common sense” definition of truth. The writing is straightforward and clear. Roy begins with a simple question: does the phrase “it’s true for me but not for you” have any meaning? He argues that it doesn’t, or rather that when we utter such a phrase, we mean something other than what we seem to be saying on the surface.

The concept of truth is important for the study of ethics. For example, even if we believe that ethical norms vary from culture to culture, we’d like to be able to say whether that claim of variance is actually true; even if we believe our friend shouldn’t act in a particular way, we’d like to know what it would mean to say our belief is true. To make progress, we need to know what truth is.

You might protest that there is no way to define “truth.” But before you commit to the truth of that position, at least consider what Roy has to say. According to him, truth is not as big a mystery as it seems at first glance.

Article (PDF)

Categories: article • popular


Other


Video: Epsilon Data

Epsilon Marketing — YouTube

This is a video from a company that wants to flaunt its credentials as a data aggregator. They claim to have “proprietary data on nearly every US consumer: who they are, what they care about, and what they buy.”

The breadth of the profile the company builds on consumers is quite striking: your purchases (both in person and online), how you spend your leisure time, your likely hobbies, your income, your marital status, and so on. The only way to do this is to weave together a network of dozens of different companies, all of which are willing to provide your personal data to Epsilon.

Companies that are explicitly mentioned as Epsilon’s partners include: Facebook, Instagram, Twitter, Yahoo, Comcast, Hulu, Dish, NBC Universal, Viacom, and Verizon.

Video

Categories: other


AI Incident Database

The Responsible AI Collaborative

Organizations are increasingly using AI, and with wider deployment come more incidents: deployments that cause, or nearly cause, real-world harm. In aviation and computer security, there are databases that collect incidents (such as accidents or security breaches). Now AI has something similar.

The Responsible AI Collaborative has long been collecting incidents, originally publishing its collection in a shared spreadsheet with records going back to 2015. Now the AI Incident Database has its own website. Incidents are catalogued by name, date, and description, and the records contain information about who was harmed, the timeline of the event, and news or scholarly articles regarding it, all in a conveniently searchable database.
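
As a rough sketch only (a hypothetical structure inferred from the fields described above, not the database’s actual schema), a record looks something like this:

    from dataclasses import dataclass, field

    @dataclass
    class Incident:
        # Hypothetical record shape based on the description above;
        # the real database's schema may differ.
        name: str
        date: str                   # date of the incident
        description: str
        harmed_parties: list[str] = field(default_factory=list)
        reports: list[str] = field(default_factory=list)  # URLs of news or scholarly articles

    example = Incident(
        name="Example incident",
        date="2019-01-01",
        description="Placeholder entry illustrating the record structure.",
    )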

The records in this database are curated by humans, with submissions made through the website. If you want to find out how the “smart summon” feature of a Tesla caused the vehicle to slam into a private jet, or what happened to the 24 Amazon workers who were hospitalized after a robot punctured a can of bear spray, this is the place to go. The incidents are not pretty, but they make up for their ugliness in sheer volume.

Project Page

Categories: other