Ideas about AI Ethics

Reflections on artificial intelligence, ethics, and technology

Transparency

When principles of AI are treated in an unsophisticated way, we end up with a patchwork that is more performative than substantive. Case in point: how major organizations treat transparency in AI. According to IBM, transparency means being clear about how data is collected, stored, and used. According to Microsoft, transparency involves stating the intended use cases and limitations for AI systems. According to Google, transparency has to do with data safeguards. — Read More …

AI Ethics and Wicked Problems

You are doing work in AI, and you realize your organization is starting to make decisions that have serious ethical consequences. What should you do? You could just do what your intuition tells you. But scandals resulting from shoot-from-the-hip ethics are legion: Cambridge Analytica's data breach, IBM's photo-scraping without permission, Google's harvesting of private healthcare records, to name but a few. This costs money, ruins reputations, and sometimes destroys organizations. — Read More …

AI Bill of Rights and the Trust Deficit

Airline travel depends on trust in something that should be intuitively frightening. It involves moving at hundreds of miles per hour, thousands of feet above the ground, in a pressurized metal tube. On the face of it, this doesn’t sound very safe. But airline travel is safe (mostly), and we trust it (usually). We trust that the technology will function without harming us, that design tolerances exceed the stresses likely to be encountered by an aircraft in flight, and that backup systems are in place in case something goes wrong. — Read More …

The Red Queen Fallacy

Consider the Red Queen’s Race, described in an exchange between Alice and the Red Queen in Lewis Carroll’s Through the Looking Glass: “Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else—if you run very fast for a long time, as we’ve been doing.” “A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. — Read More …

Getting AI Ethics Exactly Wrong

If you want people to trust AI, look at how Clearview.ai behaves, and then do the opposite. Clearview.ai came to light in a New York Times exposé highlighting the company’s unscrupulous business strategy. The paper revealed that Clearview scrapes billions of photos from social media and then links those photos to personal information. It then sells the results, in the form of a searchable database, to law enforcement (at the federal, state, and local levels), universities (such as Columbia and the University of Alabama), and corporations (such as Albertsons, Bank of America, Home Depot, Rite Aid, and Walmart). — Read More …

Stop Using Checkbox Ethics (Part 2)

In Part 1, we discussed the ways in which checkboxing fails for AI Ethics: it is fragile, blunt, and closed. In Part 2, we introduce an alternative that avoids the pitfalls of checkboxing: an inquiry-based approach to AI Ethics. As a professor of philosophy, I know inquiry well as a strategy for grappling with complex challenges. It’s an approach that goes back at least to Socrates' call to live an examined life, and was crystallized in John Dewey's philosophy of education. — Read More …

Stop Using Checkbox Ethics (Part 1)

Checkboxing. You’re probably familiar with this ubiquitous aspect of corporate compliance. Whether you’re on a board, run your own company, or work 9–5, most organizations require attendance at mandatory training sessions. The topics may be important and the goals laudable, but in the end they too often become about checking off boxes. It happens in the private, non-profit, and public sectors alike. When it comes to AI Ethics, checkboxing isn’t just a tiresome exercise — it’s actively harmful to an organization’s interests. — Read More …