Trust and rights in learned systems

At IF we’ve been looking at how people can trust services powered by machine intelligence. This matters because these services increasingly touch sensitive areas of our lives: getting a job, managing finances, or, as we learnt last week, deciding who to vote for.

Learned systems

It’s helpful to think of machine intelligence technologies as ‘learned systems’. It’s a term Matt Jones introduced me to, and I like it because it refocuses attention on people and society. People build and teach these systems, and they operate in the real world. Using the term ‘learned systems’ helps remind me of that.

Trust in learned systems

These points are an overview of practical insights we’ve learnt from IF’s work on learned systems. Whilst I can’t show that work because of confidentiality agreements, the general ideas are illustrated with examples from the public domain or from our blog.

Design for non-determinism (1/6)

Users need to be able to trust a service, even if they each see something different

Image: Georgina Bourke, IF CC-BY

As my colleague Georgina wrote, “As designers, we are used to designing for a fixed set of circumstances. ‘If this, then that’. AI generated interfaces effectively create an infinite set of interfaces. Nobody might see the same thing”. This creates a challenge for trust in services powered by learned systems: how can users explain things to each other if no one experiences the same thing?

We need to start designing frameworks within which learned systems can express a range of possibilities. This week we started a project with Alison Powell at the London School of Economics looking into this idea. We’ll be looking at what those frameworks should be, and how they can help users understand and challenge a learned system.
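
To make this tangible, here is a minimal sketch in Python of what such a framework object might look like. Everything in it (the class names, the threshold, the fallback text) is hypothetical: the point is that the system hands the interface a bounded set of possibilities and a fallback, rather than a single answer.

```python
from dataclasses import dataclass


@dataclass
class Possibility:
    """One outcome the learned system considers plausible."""
    label: str
    confidence: float  # between 0.0 and 1.0


@dataclass
class BoundedResult:
    """What the learned system returns: a range of possibilities, not one answer."""
    possibilities: list  # of Possibility, ranked most confident first
    fallback: str        # what the service does when confidence is too low


def present(result: BoundedResult, threshold: float = 0.6) -> str:
    """Render a result so users see the range, not just the top guess."""
    top = result.possibilities[0]
    if top.confidence < threshold:
        return "We're not sure. " + result.fallback
    others = ", ".join(p.label for p in result.possibilities[1:3])
    return f"Most likely: {top.label} ({top.confidence:.0%}). Also possible: {others}."


# Example: the interface shows the range and names the fallback.
print(present(BoundedResult(
    possibilities=[Possibility("approve", 0.7), Possibility("refer to a person", 0.2)],
    fallback="A person will review this for you.",
)))
```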

Design for intelligibility (2/6)

Explaining how a service works and what data is used at point of use

Think of handwriting: if someone’s writing is messy, it’s hard to read what they’ve written. That’s what happens in services right now: we make it hard for users to read how a service works. And even if we did make services more legible, would they be understandable?

To date we’ve relied on terms and conditions (or terms of service) to explain how a service works to users. But we know they are not good enough for today’s digital services, let alone learned systems.

It’s possible to start designing T&Cs out. We can show users when services are learning, what data is being used and what powers they have to change that data, all at the point of use of a digital service.
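
As a sketch of what ‘designing T&Cs out’ could mean in practice, the structure below (all names are illustrative, not a real standard) captures the three things a screen might surface at the point of use: whether the service is learning, which data it is using, and what powers the user has.

```python
from dataclasses import dataclass


@dataclass
class PointOfUseNotice:
    """The information a screen could surface instead of burying it in T&Cs."""
    is_learning: bool  # is the service learning from this interaction?
    data_used: list    # e.g. ["payment history", "postcode"]
    user_powers: list  # e.g. ["correct this data", "pause learning", "appeal"]


def render_notice(notice: PointOfUseNotice) -> str:
    """Turn the notice into short, plain-language sentences for the interface."""
    lines = []
    if notice.is_learning:
        lines.append("This service is learning from what you do here.")
    lines.append("It is using: " + ", ".join(notice.data_used) + ".")
    lines.append("You can: " + ", ".join(notice.user_powers) + ".")
    return "\n".join(lines)
```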

For example, we designed a fictional benefits service last year. We submitted it to the inquiry on Algorithms in decision-making.

Image: IF CC-BY

This example shows screens seen by someone who receives benefits and whose money has been subject to a sanction. The screen shows how that sanction could be explained at the point of use, and how they can appeal the decision.

Until we can design T&Cs out completely, we’re working on how to improve them. Paul’s begun blogging about how we can make T&Cs more legible and easier to understand.

Design for override (3/6)

Trust is a two-way relationship, so users must be able to give feedback or take over

Trust is a two-way relationship, so it’s no good if learned systems simply happen to users and users can’t feed back. One instance of this is override.

Users must be able to take over critical automated processes. It’s important that they know it’s possible, and how to do it. This is important for a host of reasons, but one of them is prevention of skills-fade.

Photo: London Underground 2009 Stock, by Tom Page / CC BY-SA

There is a nice parallel here to railway safety systems, which we learnt about when we visited Transport for London’s (TfL) new control centre in Hammersmith. On most Sundays, TfL reverts to manual braking on the London Underground to prevent skills-fade in drivers.
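
Here is a minimal sketch of the same principle in code, assuming a hypothetical automated controller: the person can take over at any time, and their command always wins over the automated policy.

```python
from typing import Optional


class AutomatedController:
    """Illustrative only: an automated process that can always be taken over manually."""

    def __init__(self):
        self.manual_override = False

    def request_override(self):
        """The user takes over; the system must make this possible and discoverable."""
        self.manual_override = True

    def step(self, sensor_reading: float, manual_command: Optional[float] = None) -> float:
        if self.manual_override and manual_command is not None:
            return manual_command  # the human's decision always wins
        return self._automated_policy(sensor_reading)

    def _automated_policy(self, sensor_reading: float) -> float:
        # stand-in for the learned or automated behaviour
        return max(0.0, 1.0 - sensor_reading)
```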

Design for recovery (4/6)

Should a system go wrong, a user must be able to remove autonomy or revert to a previous state without systemic failure

If a learned system has got stuck or made a mistake, users must be able to remove autonomy. They should be able to revert to an earlier state, correct inaccurate data or make the data less granular, without causing a systemic failure.

An example of this is cruise control in a car. You can use cruise control for the majority of a journey. When you can see an unusual junction or an obstruction up ahead, you can put the car back into manual and drive yourself.
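
A sketch of what that might look like in code, with hypothetical names: the system checkpoints its state so that autonomy can be switched off, or the state rolled back, without the whole service failing.

```python
import copy


class RecoverableSystem:
    """Illustrative only: checkpoints plus an autonomy switch, so recovery
    doesn't mean systemic failure."""

    def __init__(self, initial_state: dict):
        self.state = initial_state
        self.history = []  # checkpoints of earlier states
        self.autonomous = True

    def update(self, new_state: dict):
        """Normal operation: save a checkpoint before every change."""
        self.history.append(copy.deepcopy(self.state))
        self.state = new_state

    def remove_autonomy(self):
        """Like switching cruise control off: the service keeps running, a person drives."""
        self.autonomous = False

    def revert(self, steps_back: int = 1):
        """Go back to an earlier, known-good state instead of failing outright."""
        while steps_back > 0 and self.history:
            self.state = self.history.pop()
            steps_back -= 1
```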

Design for inspectability (5/6)

Let a user see what a learned system is doing and why

It should be possible for a user to see what a learned system is doing and why, like ‘View Source’ on the web or checking the oil level in a car’s engine. This is important even if they don’t do it very often.
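
A minimal sketch of what ‘View Source’ for a learned system could mean, with hypothetical names: every decision is recorded alongside the inputs and model version that produced it, and a user (or someone acting for them) can ask to see that record.

```python
import json
import time


class InspectableModel:
    """Illustrative only: every prediction records what went in, what came out,
    and which version of the model produced it."""

    def __init__(self, model_version: str):
        self.model_version = model_version
        self.decision_log = []

    def predict(self, inputs: dict) -> str:
        decision = self._infer(inputs)  # stand-in for the real model
        self.decision_log.append({
            "timestamp": time.time(),
            "model_version": self.model_version,
            "inputs": inputs,
            "decision": decision,
        })
        return decision

    def inspect(self) -> str:
        """What a user, or their representative, would see on request."""
        return json.dumps(self.decision_log, indent=2)

    def _infer(self, inputs: dict) -> str:
        return "approved" if inputs.get("score", 0) > 0.5 else "referred for review"
```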

Design for collective action (6/6)

Things we can only do together or with the help of an organisation are part of the answer too

Trust isn’t just about more technology or more design. We need to think about the wider system and the needs of different levels of society. We shouldn’t fall into the trap of assuming that explaining and trusting learned systems is something individuals will do on their own. Collective action, the things we can only do together or with the help of an organisation, needs to be part of the answer too.

We’ve been looking into how consumer groups, unions and medical charities can help the people they represent understand whether they can trust services powered by learned systems, and help society course correct.

Image: Citizens Advice collective action, IF CC-BY

To illustrate this, I’m returning to the fictional benefits service I mentioned earlier. The individual who received the sanction visits their local Citizens Advice office to get help. The advisor at Citizens Advice is able to see why the sanction was given, and is alerted that 153 other benefits claimants have been sanctioned for the same reason. The advisor adds this case to that group, and can alert the organisation that there may be a problem with the automated decision-making that issues sanctions in a city.
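
The mechanism behind that alert could be very simple. Here is a hedged sketch, assuming hypothetical case records with a ‘sanction_reason’ field: group cases by the reason the system gave, and flag any reason that crosses a threshold, so an organisation can spot a pattern no individual claimant could see alone.

```python
from collections import Counter


def collective_alerts(cases: list, threshold: int = 100) -> dict:
    """Group cases by the reason the learned system gave and flag common ones."""
    reasons = Counter(case["sanction_reason"] for case in cases)
    return {reason: count for reason, count in reasons.items() if count >= threshold}


# e.g. {"missed appointment flagged by model": 154} would prompt the advisor
# to challenge the automated decision at city level.
```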

Rights in learned systems

GDPR in learned systems (1/3)

GDPR is the General Data Protection Regulation, which comes into force in May this year. It’s going to have a profound impact on the way we build and maintain services. The GDPR gives people a range of new rights, and it will affect any company that holds data on anyone in the EU.

Some of the rights directly challenge learned systems.

What is an individual’s ‘right to deletion’ in a federated machine learning model?

People are complex. The data they put into a service, data that is used to train a model, might be wrong, or it might stop being right. Take someone who gets divorced: that life change has many consequences, from how they manage their bank account to which photos it’s no longer okay to show in their social media feed.

Photo: Google Clips, by Google

How do we design ephemeral learning? Is ephemeral learning possible with federated machine learning?

Our work at IF shows it’s important that a system can explain new learning to a user. The user should be able to choose whether to keep that learning, and when it’s appropriate to use it.
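
One way to sketch that, with entirely hypothetical names: new learning sits in a pending state, each piece with a plain-language explanation, and nothing is applied until the user chooses to keep it. Anything not kept simply disappears.

```python
class EphemeralLearner:
    """Illustrative only: new learning is held as a pending, explainable update
    until the user decides to keep it or discard it."""

    def __init__(self):
        self.accepted_updates = []
        self.pending_updates = []

    def learn(self, observation: dict, explanation: str):
        """New learning is not applied straight away."""
        self.pending_updates.append({"observation": observation, "explanation": explanation})

    def review(self) -> list:
        """Explain each piece of new learning to the user."""
        return [update["explanation"] for update in self.pending_updates]

    def keep(self, index: int):
        """The user chooses to keep one piece of learning."""
        self.accepted_updates.append(self.pending_updates.pop(index))

    def discard_all(self):
        """The ephemeral part: anything not kept disappears."""
        self.pending_updates.clear()
```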

How can you grant a user the ‘right to understand’ in a world that is non-deterministic, without overwhelming them?

We have to think about this from the perspective of communities, civic institutions and delegates. Data privacy is complex, and regulation cannot be left to individuals alone. People are busy doing important things like sorting childcare or trying to stretch their money. Most people don’t have time to tweak data permissions, and some may not have the skills needed to check that their choices are respected.

This is a complex and growing area. We need to look to groups that can represent many of us, to help us understand and take action.

Testing in learned systems (2/3)

Any system of any complexity in society has required a testing ecosystem. For instance, take a fridge.

The fridge in your kitchen will have undergone rigorous testing. A product safety organisation will have tested that it won’t catch fire in your kitchen. The manufacturer will have run tests too, to make sure the fridge keeps your food at a safe temperature. And they will have built it to standards that we, as a society, have agreed on.

Learned systems will need a testing ecosystem too, though it’s likely to look different to what we know today. Fridges only change with age as their performance deteriorates, whereas learned systems, we’re suggesting, will change from moment to moment.

What does a testing ecosystem capable of keeping pace with software look like?

It is likely that users will be running different versions of a model on any one service. How can they make sure the model they’re looking at is a trusted one? How do they verify it?

DeepMind is developing a technology called Verifiable Data Audit. It creates a log that a hospital trust or an auditor can use to understand how a clinician made a decision. We also need other verifiable, open registries and audit logs of model versions and training data, maintained by organisations committed to the transparency of learned systems.
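
To illustrate the shape of such a log (this is a sketch of a generic hash-chained, append-only log, not DeepMind’s actual implementation): each entry commits to the previous one, so anyone holding a copy can check that the record hasn’t been quietly edited.

```python
import hashlib
import json
import time


class AuditLog:
    """Illustrative only: an append-only, hash-chained log of model versions
    and training data, in the spirit of verifiable audit."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Anyone with a copy of the log can check it hasn't been tampered with."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True


# Usage: record which model version and training data lie behind a decision.
log = AuditLog()
log.append({"model_version": "v1.3", "training_data": "dataset-2018-02", "decision_id": "abc123"})
assert log.verify()
```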

Ownership and control in learned systems (3/3)

Image: Observer front page, March 18th 2018

Learned systems don’t exist in isolation; they are part of a wider system. Facebook and Cambridge Analytica have shown what can go wrong in a closed, illegible system with perverse incentives baked into it.

When you use a service that learns from your behaviour and makes decisions on your behalf, how can you know it works in your best interests? As humans, we have morality. So how do we give a bunch of numbers the ability to know what is in our best interests? If a learned system makes decisions for your community, how can you be sure those decisions reflect what is right?

When we use a doctor, an engineer or a lawyer, these professionals work to a code of ethics. How do we give learned systems professional ethics? How does that get communicated to users? And where do you think a system’s ethical promise should be logged?

My closing question is this: which parts of a learned system, and the platforms it operates on, should be closed, proprietary and part of the market? And what needs to be open, cooperative, owned by society and not part of the market?

If there is not enough transparency built into a learned system and the platform it operates on, how can we course correct? The financial crisis, for example, was a huge, worldwide systems failure that society didn’t see coming. It affected everyone, especially those who had the least.

_

These questions point to some of the most pertinent areas for design to investigate today.

What we design affects the understanding and access that a person has to their rights and what actions they are able to take. It’s our responsibility to design trusted learned systems that safeguard people’s rights.

This post was based on a talk I gave at PAIRS UX Symposium in March 2018.