Explaining machine learning systems

New legislation in Europe, the General Data Protection Regulation, gives people the right to ask for an explanation when an algorithm makes a decision about them. This presents a challenge, because machine learning processes are not usually easy to explain. Understanding how a machine learning technique arrives at a particular output helps in correcting unexpected consequences. It's also necessary for spotting potential bias.

Helping experts understand the systems they build

Many researchers are working on new ways to communicate what learned systems do. Distill is a journal dedicated to this kind of research, presented through interactive examples. Showing examples the reader can play with helps them develop an intuitive understanding. This is useful for the professionals who build machine learning systems, who need to be able to spot bias or mistakes. When learned systems go wrong the consequences can be very serious: an autonomous car might crash, or an automated university admissions system might display racial bias.

The people who use the resulting learned system in practice will also need to know what to expect from it. For example, a doctor using a machine learning system to make a diagnosis needs to understand the system's strengths and weaknesses. This will influence how much they trust an automated recommendation. They might not need to understand it at the same level as someone who built it, but they still need to know enough to judge when to trust it.

Helping people develop intuition about systems they use

What about scenarios that aren't life-critical and don't involve expert users? Most people don't need, or want, to understand the underlying technology in great detail. But they might want to have some intuition about what systems are likely to do, so they know what to expect.

Image: Decision testing pattern. IF CC-BY.

The decision testing pattern we recently published in our Data Permissions Catalogue shows how someone might give test information to a system to see what output it produces. This doesn't reveal how the decision is made, but it helps someone make informed guesses about how they might be able to influence the output in the future.
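To make the idea concrete, here is a minimal sketch of what decision testing might look like in code. It is purely illustrative: the "income" and "debt" features, the loan-style scenario and the stand-in scikit-learn model are assumptions, not part of the pattern itself. The pattern is simply to vary a test input and observe how the output changes, without ever looking inside the system.

```python
# A minimal sketch of the decision testing pattern: probe a black-box
# model with variations of one input and watch how the output changes.
# The "income"/"debt" features and the model are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for a system we can only query, not inspect.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))              # columns: income, debt (scaled)
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # approve when income outweighs debt
model = LogisticRegression().fit(X, y)

def decision(income, debt):
    """Query the black box: returns the probability of approval."""
    return model.predict_proba([[income, debt]])[0, 1]

# Decision testing: hold one field fixed, vary the other, observe the output.
base_debt = 0.5
for income in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    print(f"income={income:+.1f}, debt={base_debt:.1f} -> "
          f"approval probability {decision(income, base_debt):.2f}")
```

Running the probe shows approval becoming more likely as the test income rises, which is the kind of guess about influence the pattern is meant to support, even though the model's internals stay hidden.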

More work is needed to develop patterns for interactive explanations that are appropriate to use in different contexts, especially for non-expert users. Different kinds of systems will need to explain their workings in different ways. The depth of explanation should be influenced by the potential for things to go wrong and how difficult it will be for someone to fix mistakes.