Reflections on artificial intelligence, ethics, and technology
May 26, 2023
When principles of AI are treated in an unsophisticated way, we end up with a patchwork that is more performative than substantive. Case in point: how major organizations treat transparency in AI.
According to IBM, transparency means being clear about how data is collected, stored, and used. According to Microsoft, transparency involves stating the intended use cases and limitations of AI systems. According to Google, transparency has to do with data safeguards. Because these major players disagree, we know that at least two of them, and possibly all of them, have got transparency wrong.
As it turns out, IBM, Microsoft, and Google all have it wrong.
These organizations have each put their finger on a different sliver of transparency. But none has grasped transparency in its entirety; each has instead opted for a watered-down principle. This is detrimental to organizations adopting these principles, because they mistake a part for the whole, exposing themselves to risk when the truth of the matter comes out. And it is detrimental to the community at large, because it exposes everyone else to AI systems whose challenges are not clearly stated.
Transparency is a broad notion that can be broken down into three categories: model transparency, business transparency, and user transparency.
Model transparency aims to be clear about the computational models that underpin the AI system. It involves clarity about:
Business transparency aims to be clear about the business practices that support the development and use of the AI system. It involves clarity about:
User transparency aims to be clear about how the AI system is implemented and used. It involves clarity about:
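The three categories above can be thought of as a checklist that a transparency statement must cover in full. As a minimal sketch, here is one way to represent that checklist in code; the class and field names are illustrative assumptions, not any organization's actual standard:

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyChecklist:
    """Hypothetical checklist covering the three categories of transparency."""
    # Model transparency: clarity about the computational models underpinning the system
    model_disclosures: list[str] = field(default_factory=list)
    # Business transparency: clarity about the business practices supporting the system
    business_disclosures: list[str] = field(default_factory=list)
    # User transparency: clarity about how the system is implemented and used
    user_disclosures: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A genuine commitment requires disclosures in all three categories,
        # not just one sliver of transparency
        return all([self.model_disclosures,
                    self.business_disclosures,
                    self.user_disclosures])

# A statement addressing only one category mistakes a part for the whole
partial = TransparencyChecklist(model_disclosures=["training data sources"])
print(partial.is_complete())  # False
```

The point of the sketch is structural: a statement that fills in only one field, however thoroughly, still fails the completeness check.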
You might think that this makes transparency an obstacle to development. But that impression comes from the current situation, in which claims to transparency are a public relations bromide rather than a guide to how we should actually act. AI is an incredibly powerful tool that organizations should have access to. A commitment to actual transparency, along the lines above, cuts everyone's exposure to risk by ensuring AI does not become an organizational footgun. And it increases acceptance of AI by building trust in it.
A true commitment to transparency requires being clear about all three categories. This is much harder than writing a simple statement of principle (as IBM, Microsoft, and Google have done), because it involves thinking in detail about benefits and harms, and about the trustworthiness of AI.