How Transparent Should Business Artificial Intelligence Be?

In our recent blog post on ethics in artificial intelligence (AI), we cited a few examples where AI systems deployed in businesses made decisions that resulted in discrimination and racism. The AI industry is, in fact, filled with examples of AI simply getting something wrong, sometimes with disastrous outcomes, such as when a self-driving car hit and killed a pedestrian.

When negative events occur as a result of an AI system, or when an AI simply gets something wrong, data scientists need to carefully examine the system’s underlying algorithms and related data to determine what needs to be fixed in order to avoid future mistakes.

However, as some have argued, AI systems should be “explainable” and “transparent” from the start. The goal is for an AI’s decisions to be immediately clear and easily understood by humans, leaving little to no question about the “why” behind them. And the more deeply AI systems become integrated into business and everyday life, the greater the need for transparency.

The question, then, is how we achieve “explainable AI” and what tradeoffs we need to make to get it.

The Challenge of Achieving Transparency

It might seem on the surface that making an AI explainable simply requires that its algorithms be audited. To be sure, allowing an AI’s underlying code to be examined by a third-party expert is important, but achieving transparency is far more complicated than that. This is particularly true of machine learning systems, where the AI is fed a data set and comes up with its own method of achieving the goal based on that data. In these instances, the AI can easily arrive at the goal in a biased way, depending on the data it was fed. For example, one image classifier learned to distinguish wolves from huskies based on whether or not snow was present in the background of the picture, rather than on the animals themselves. While this might seem like an innocuous error, it actually illustrates how bias creeps in through irrelevant or unrepresentative information in the original training data set.
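To make this failure mode concrete, here is a minimal sketch, in Python with scikit-learn, of how a simple diagnostic such as permutation importance can reveal that a model is leaning on an irrelevant background cue. It uses synthetic tabular data as a stand-in for the original image experiment, and the feature names and probabilities are invented for illustration:

    # Minimal sketch: exposing a spurious correlation with permutation importance.
    # Synthetic stand-in for the wolf/husky case: "snow" is a background cue that
    # merely co-occurs with the label in the training data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    n = 2000

    y = rng.integers(0, 2, n)                   # label: 1 = "wolf", 0 = "husky"
    animal_feature = y + rng.normal(0, 1.5, n)  # weak signal about the animal itself
    snow = np.where(rng.random(n) < 0.9,        # background cue that matches the
                    y, 1 - y).astype(float)     # label 90% of the time

    X = np.column_stack([animal_feature, snow])
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Permutation importance shows which inputs the model actually leans on.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, importance in zip(["animal_feature", "snow"], result.importances_mean):
        print(f"{name}: {importance:.3f}")
    # If "snow" dominates, the model has learned the background, not the animal.

The original wolf/husky finding came from research on the LIME explanation technique, applied to actual images, but the point of either tool is the same: the explanation exposes a bias that accuracy metrics alone would hide.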

This points to the primary challenge in achieving transparency in AI, particularly in machine learning systems where the AI develops its own algorithms for achieving a specified goal. Any explanation of an AI’s decision can only come after the fact, once the decision has been made. And in cases with potentially deadly consequences, such as self-driving cars, this after-the-fact inquiry may or may not help us prevent future mistakes, and it does nothing for those affected by the original error.

Achieving Transparency by Design

There is one way to ensure that an AI is explainable before it even executes any decisions: “dumbing” it down. But an AI that is never trained to handle complex tasks defeats the purpose of having an AI in the first place. Instead, it’s essential to build transparency and explainability directly into the AI from the beginning.

According to the Institute for Ethical AI & Machine Learning, this means AI developers should commit to the following principles:

  1. Human augmentation: Assessing the potential impact of incorrect predictions, and keeping humans in the review loop where the stakes warrant it
  2. Bias evaluation: Instituting processes for identifying bias in an AI during development as well as in production (a minimal sketch of such a check follows this list)
  3. Explainability by justification: Developing tools and processes that purposefully improve transparency
  4. Reproducible operations: Ensuring repeatable results across all AI operations
  5. Displacement strategy: Developing a plan for identifying and documenting information needed to mitigate the impact of AI on human workers (NOTE: This one more closely pertains to the effects of AI replacing workers)
  6. Practical accuracy: Developing processes to ensure accuracy and cost-metric functions align with what the AI is intended to do
  7. Trust by privacy: Ensuring that data, particularly sensitive personal information, is protected and handled with privacy in mind
  8. Security risks: Developing and improving processes to ensure system security, so that personal data is not exposed and the AI cannot be manipulated
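To give these principles a concrete shape, below is a minimal sketch of what an automated bias-evaluation check (principle 2) might look like in Python. Everything in it is illustrative: demographic parity is only one of several fairness metrics, and the data, function name, and 0.1 threshold are assumptions rather than a prescribed standard.

    # Minimal sketch of an automated bias-evaluation check; the metric, data,
    # function name, and threshold are all illustrative assumptions.
    import numpy as np

    def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-prediction rates between two groups."""
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Hypothetical predictions for ten applicants and their group membership.
    y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
    group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    gap = demographic_parity_gap(y_pred, group)
    print(f"Demographic parity gap: {gap:.2f}")  # 0.40: group 0 approved 3x as often
    if gap > 0.1:  # illustrative threshold for flagging a model for human review
        print("WARNING: model flagged for bias review")

A check like this can run both in a development pipeline and against live predictions, which is exactly the development-and-production coverage the principle calls for.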

The bottom line for businesses and researchers alike is that any AI we develop should adhere to our expectations for ethical behavior, and that is best achieved by building explainability in from the start rather than adding it on later.

If your business is considering implementing an AI program, we can help you ensure it is developed with proper controls and transparency from the get-go. Contact us to discuss how to develop an appropriate AI strategy that benefits your business while still ensuring transparency and safety.

Best regards,
Florian Horn
