
Reflections on artificial intelligence, ethics, and technology

Transparency

May 26, 2023

When principles of AI are treated in an unsophisticated way, we end up with a patchwork that is more performative than substantive. Case in point: how major organizations treat transparency in AI.

According to IBM, transparency means being clear about how data is collected, stored, and used. According to Microsoft, transparency involves stating the intended use cases and limitations for AI systems. According to Google, transparency has to do with data safeguards. Because there is disagreement among these major players, we know at least two of them, and possibly all of them, have got transparency wrong.

As it turns out, IBM, Microsoft, and Google all have it wrong.

These organizations have each put their finger on a different sliver of transparency. But none has grasped transparency in its entirety, instead opting for a watered-down principle. This is detrimental to organizations adopting these principles, because they mistake a part for the whole, exposing themselves to risk when the truth of the matter comes out. And it is detrimental to the community at large, because it exposes everyone else to AI systems whose challenges are not clearly stated.

Transparency

Transparency is a broad notion that can be broken down into three categories: model transparency, business transparency, and user transparency.

Model transparency aims to be clear about the computational models that underpin the AI system. It involves clarity about:

  1. The purpose and limitations of the AI system
  2. The design choices and their justification
  3. The known biases and their mitigations
  4. The external auditors and their appraisals

Business transparency aims to be clear about the business practices that support the development and use of the AI system. It involves clarity about:

  1. The working conditions of those who generate, gather, and label data
  2. The working conditions of those who develop and test the model
  3. The governance and control of the data and system
  4. The way data is collected and protected, where it originates, and how those who originate it are compensated
  5. The kind of energy used (renewable or fossil) and how much is required

User transparency aims to be clear about how the AI system is implemented and used. It involves clarity about:

  1. The way users are made aware that they are using an AI
  2. The procedure a user has to challenge the outcome and ask for human review
  3. The ability of a user to request a natural-language explanation of the outcome
  4. The ability of a user to opt out of having results produced by AI (see the sketch after this list)
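
To make these three categories concrete, here is a minimal sketch of what a machine-readable transparency disclosure might look like, written in Python. The class and field names are hypothetical, invented for illustration only; they do not correspond to any existing standard, nor to the published principles of IBM, Microsoft, or Google.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical schema: one field per item in the lists above.

    @dataclass
    class ModelTransparency:
        purpose_and_limitations: str
        design_choices_and_rationale: str
        known_biases_and_mitigations: List[str] = field(default_factory=list)
        external_audits: List[str] = field(default_factory=list)

    @dataclass
    class BusinessTransparency:
        data_labor_conditions: str         # those who generate, gather, and label data
        development_labor_conditions: str  # those who develop and test the model
        governance_and_control: str        # who controls the data and the system
        data_provenance_and_compensation: str
        energy_sources_and_usage: str      # renewable vs. fossil, and how much

    @dataclass
    class UserTransparency:
        ai_use_disclosure: str             # how users learn they are interacting with an AI
        human_review_procedure: str        # how an outcome can be challenged
        explanation_available: bool        # can a user request a plain-language explanation?
        opt_out_available: bool            # can a user opt out of AI-produced results?

    @dataclass
    class TransparencyDisclosure:
        model: ModelTransparency
        business: BusinessTransparency
        user: UserTransparency

        def gaps(self) -> List[str]:
            """Return the fields that are still empty or disabled."""
            missing = []
            for section_name, section in (
                ("model", self.model),
                ("business", self.business),
                ("user", self.user),
            ):
                for name, value in vars(section).items():
                    if value in ("", [], False, None):
                        missing.append(f"{section_name}.{name}")
            return missing

The point of a check like gaps() is simple: a disclosure that leaves any of these fields blank is exactly the partial, sliver-sized transparency criticized above.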

You might think that this makes transparency an obstacle to development. But that impression comes from the current situation, in which claims to transparency are a public relations bromide rather than a guide to how organizations actually act. AI is an incredibly powerful tool that organizations should have access to. A commitment to genuine transparency, along the lines above, cuts exposure to risk for everyone by ensuring AI does not become an organizational footgun. And it increases acceptance of AI by increasing the trust we place in it.

A genuine commitment to transparency requires being clear about all three categories. This is much harder than writing a simple statement of principle (as IBM, Microsoft, and Google have done), because it requires thinking in detail about benefits and harms, and about the trustworthiness of AI.