Launching the Understanding Automated Decisions exhibition

Last Monday we celebrated the launch of the Understanding Automated Decisions exhibition at the London School of Economics. The exhibition presents a collaborative project between IF and LSE exploring how to make automated decisions understandable. It is open to the public at the Atrium gallery at LSE until Friday 9th November.

[Image: Understanding Automated Decisions exhibition. IF, CC-BY]

Dr Alison Powell and I opened the launch event with a presentation of the work and talked through our collaborative approach. During the project we combined our academic and design practices to create informed, practical outcomes. It was an incredible journey for both the LSE and IF teams. Together we designed, discussed and iterated different ways of explaining automated decisions in the context of a real service. We hope the results will inform discussions in both the academic and design communities, and lead to the development of technologies that are in service to people.

We then handed over to a panel of experts to discuss key questions and themes that emerged from the project.

Our panelists included:

Lydia Nicholas
Gyorgyi Galik
Natasha McCarthy
Reuben Binns
Antton Peña from Flock

We were thrilled to be joined by such brilliant panelists from academia and industry. It was great to hear their different perspectives on the work and the issues around automated decisions.

How can we explain automated decisions?

In our project we explored different ways of explaining automated decisions within the context of car insurance. We wanted to hear from our panelists about challenges or solutions to explaining automated decisions in their different areas.

Lydia Nicholas talked about the need to be specific and clear with language when explaining automated decisions. She praised the definitions we created in the Understanding Automated Decisions project and the Understanding Patient Data work from the Wellcome Trust. Lydia said it's critical that we consider the language we use when explaining issues around data and technology.

Gyorgyi Galik spoke about her research into AI assistants that make environmental pollutants more visible. When the devices revealed more about their home environment, people reacted negatively and found it creepy. It's important to think about people's needs and context when explaining automated decisions.

How can we ensure automated systems make decisions fairly and consistently?

One of the challenges we addressed in the work was how organisations should show people using a service that automated decisions are fair. Fairness is not an easy concept to define, as our panelists discussed.

Natasha McCarthy shared insights from a research project on dynamic data governance. She argued that people’s perceptions of fairness differed depending on the context and scenario. That’s why it’s important to develop practical solutions that are based on specific use cases.

Reuben Binns commented on the idea of fairness vs justice and how they need to be clearly defined. “If the police came into this room and decided to arrest all of us, we might say that decision was fair because they didn’t discriminate against anyone. But we might not think it was just”. We should be clear about exactly what we mean by ‘fair’ to know whether automated systems are making decisions fairly.

Antton Peña from Flock talked about how regulation from the Financial Conduct Authority requires companies to treat customers fairly, regardless of whether decisions are made by machines or people. He spoke about how Flock demonstrates fairness by being transparent about why it needs to treat customers differently. “If one pilot has years more experience, they will receive a different quote to an inexperienced pilot. We make this clear to our users.”

Who should oversee automated decisions and take responsibility for their consequences?

A lot of the prototypes we created in the project looked at explaining automated decisions to an individual using a service. But we know there are groups and organisations that need to oversee automated decisions to make sure they aren't affecting people negatively.

Antton argued that there’s no straightforward answer. “We’re the last in a long line of companies providing a service”. It’s difficult to place responsibility on one group over another.

Natasha talked about the importance of ensuring new bodies, such as the Centre for Data Ethics and Innovation and the Ada Lovelace Institute, are networked and connected to regulators who apply standards for different industries.

Lydia thought the issue was one of funding. The organisations responsible for enforcing regulation “need more teeth!” At the moment the Information Commissioner's Office responds only when there is a complaint or evidence of an incident. With more funding, it could take a more proactive approach to regulating how data is used.

Visit the exhibition and join the discussion

It was brilliant to hear such different views from our panelists. Open discussion across the academic, design and industry communities can help to develop informed perspectives around the challenges and opportunities for understanding automated decisions. In our exhibition we also invite the audience to tell us what they think about some of these big questions.

Visit the exhibition at the Atrium gallery at LSE until Friday 9th November. We’d love to know what you think.

[Image: Add your thoughts to the discussion. IF, CC-BY]