You are doing work in AI, and you realize your organization is starting to make decisions that have serious ethical consequences. What should you do? You could just do what your intuition tells you. But scandals resulting from shoot-from-the-hip ethics are legion: the Cambridge Analytica data scandal, IBM's photo-scraping without permission, Google's harvesting of private healthcare records, to name but a few. Such missteps cost money, ruin reputations, and sometimes destroy organizations.
In Part 1, we discussed the ways in which checkboxing fails for AI Ethics: it is fragile, blunt, and closed. In Part 2, we introduce an alternative that avoids the pitfalls of checkboxing: an inquiry-based approach to AI Ethics.
As a professor of philosophy, I know inquiry well as a strategy for grappling with complex challenges. It's an approach that goes back at least to Socrates' call to live an examined life, and was crystallized in John Dewey's philosophy of education.
Checkboxing. You’re probably familiar with this ubiquitous aspect of corporate compliance. Whether you sit on a board, run your own company, or work 9–5, your organization almost certainly requires attendance at mandatory training sessions. The topics may be important and the goals laudable, but in the end these sessions too often become about checking off boxes. It happens in the private, non-profit, and public sectors alike.
When it comes to AI Ethics, checkboxing isn’t just a tiresome exercise — it’s actively harmful to an organization’s interests.