Automated decisions are part of many services we use every day, but how they work is rarely explained or understood.
This matters because automated decisions have an impact at scale. Decisions made by automated systems should be transparent, explainable and accountable.
We’ve been working with the London School of Economics to explore different ways of explaining automated decisions. We’re exhibiting our work from today until Friday 9th November at the Atrium gallery at the London School of Economics. We are showing this work to demonstrate to industry and the public why automated decisions must be explained.
I recently wrote about our first design sprint, where we prototyped practical responses to academic theories. This post will focus on our second sprint, where we developed prototypes that show the possibilities of explaining decisions made by machine learning systems.
Flock’s rule-based system
In the first design sprint we used Flock as a case study. Flock is a pay-as-you-fly drone insurance company that uses a rule-based approach to calculate the cost of insurance. This means Flock developers have preprogrammed a set of rules for the system to follow. For example, if a pilot wants to fly when it is windy, the flight is higher risk and the premium will cost more.
In a rule-based system like this, the risk of the flight, and therefore the cost of insurance, is determined by the people who programmed the automated system.
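A rule-based approach like this can be sketched in a few lines of code. The rule conditions, thresholds and prices below are invented for illustration, not Flock’s actual pricing:

```python
# Hypothetical sketch of a rule-based premium calculation.
# Every rule is written in advance by the developers.

BASE_PREMIUM = 5.00  # assumed base price per flight, in pounds

def quote_premium(wind_speed_mph: float, near_airport: bool, night_flight: bool) -> float:
    """Apply preprogrammed rules to the base premium."""
    premium = BASE_PREMIUM
    if wind_speed_mph > 20:  # windy conditions make the flight higher risk
        premium *= 1.5
    if near_airport:         # flying near an airport makes the flight higher risk
        premium *= 2.0
    if night_flight:         # reduced visibility makes the flight higher risk
        premium *= 1.25
    return round(premium, 2)

print(quote_premium(wind_speed_mph=25, near_airport=False, night_flight=False))  # 7.5
```

Because each rule is explicit, the reason for any quote can be read directly from the code: the windy-day flight above costs more precisely because the wind rule fired.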
A machine learning approach
If machine learning is used, developers create a model, and the model makes predictions about risk. These predictions are based on large datasets and the process of reaching a prediction may not follow a defined path. In the drone flight example, the system would take many factors into account to predict how likely it is for the pilot to have an incident, and therefore how risky it is.
In a system that uses machine learning, it is the machine that makes predictions about risk. So the machine determines the cost of insurance, instead of the people who developed the system.
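To illustrate the difference, the sketch below stands in for a trained model using a fixed set of weights. The feature names, weights and pricing are all invented; in a real system the weights would be learned from large datasets rather than written by hand, and the model would usually be far more complex than this linear one:

```python
import math

# Hypothetical stand-in for a trained model: the weights below would
# normally be learned from data, not chosen by a developer.
WEIGHTS = {"wind_speed": 0.08, "pilot_hours": -0.01, "population_density": 0.002}
BIAS = -1.0

def predict_incident_probability(features: dict) -> float:
    """Combine many factors into a single learned risk prediction."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-score))  # logistic function squashes the score to 0..1

risk = predict_incident_probability(
    {"wind_speed": 18, "pilot_hours": 120, "population_density": 300}
)
premium = 5.00 * (1 + risk)  # riskier flights cost more
```

Unlike the rule-based version, there is no single rule to point to: the prediction emerges from how all the weighted factors combine, which is what makes it harder to explain.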
The challenge is understanding why the machine learning system predicted the incident and being able to demonstrate that it was a fair decision.
Explaining machine learning results
We created a new use case to explore the challenges around explaining decisions in a machine learning system.
We used a fictional car insurance company that provides policies based on data about how people drive. Data is collected from a sensor in a driver’s car and sent to their insurance company. The data is analysed using machine learning, which determines the cost of their insurance.
Machine learning could make risk prediction more accurate, but because of the complexity of many automated systems it will be hard for companies to be completely transparent. On top of that, how automated systems work is often a unique part of a company’s business model and may be commercially sensitive information.
Showing how individual data compares to a training dataset
The way organisations develop machine learning models can influence how the automated system makes decisions.
Developers train models to make predictions using a training data set. Where that data comes from and how it was collected can affect the way the automated systems make decisions.
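One way to make this visible is to show where an individual’s data sits within the training data the model learned from. A minimal sketch, using an invented training set of annual mileage figures:

```python
# Hypothetical sketch: compare one driver's annual mileage with the
# training data. The training values are invented; a real dataset
# would be far larger.
training_mileage = [4000, 6500, 7000, 8200, 9000, 10500, 12000, 15000]

def percentile(value: float, dataset: list) -> float:
    """What share of the training data falls below this driver's value?"""
    below = sum(1 for x in dataset if x < value)
    return 100 * below / len(dataset)

print(percentile(9500, training_mileage))  # 62.5: five of eight values are below 9500
```

A comparison like this lets someone see whether the system judged them against data that actually resembles them, which is one way training data shapes decisions.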
Showing the datapoints used to calculate a result
Machine learning systems use many factors to make predictions. It can be challenging for the people who developed the system to understand why certain decisions were made, let alone people using the service.
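One simple way to surface those factors is to rank each input by how much it contributed to the result. The sketch below does this for a linear scoring model; the feature names, weights and driving data are invented for illustration, and more complex models would need more sophisticated explanation techniques:

```python
# Hypothetical sketch: for a linear model, each datapoint's contribution
# to the score is just weight x value, so the inputs can be ranked.
WEIGHTS = {
    "hard_braking_events": 0.30,
    "night_driving_hours": 0.10,
    "average_speed_over_limit": 0.50,
}

def explain(features: dict) -> list:
    """Return (datapoint, contribution) pairs, largest contribution first."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)

for name, contribution in explain(
    {"hard_braking_events": 4, "night_driving_hours": 10, "average_speed_over_limit": 2}
):
    print(f"{name}: {contribution:+.2f}")
```

A ranked list like this gives someone a starting point for questioning a decision: they can see which of their datapoints mattered most, even without understanding the whole model.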
These are just two of the prototypes we created; we published the full list on our Tumblr.
It’s been brilliant to explore how to apply academic research to create more informed, practical outcomes. There is a lack of academic enquiry in product development, and this kind of partnership is something we want to see happening more.
Come and have a look at our exhibition, running from today until 9th November at LSE!