From a series of posts on making digital services worth trusting. In previous posts I explained the problems with permissions and how to design permissions worth trusting. This post describes IF’s ideas on the future of permissions.

To solve usable permissions, we need to think about the problem differently

Knowing how and when to ask a user for permission to share data is really difficult. That’s partly because we’re working with outdated ideas and tools for solving the problem.

Deciding how and why data can be accessed is no longer something that an individual should have to manage on their own. The human rights lawyer Lizzie O’Shea recently argued that making individuals solely responsible for their privacy ‘fundamentally misunderstands our social and political environment’.

Few people understand how software works or what giving permission means. Making nuanced decisions about abstract concepts is hard! We know that data isn’t simple or neutral, and that trust is at an all-time low.

Today, software is hugely complicated and untrusted, for all sorts of reasons: the software supply chain is vast, permissions are no longer about access to a single file, and code has errors and bugs. Yet services continue to push the complexity of the underlying technology onto users, as if individuals were capable of properly thinking through these decisions. The way permissions are designed, and the way users are asked to interact with them, are often unwieldy, intrusive, overwhelming and sometimes misleading.

Permissions must change for the services we use to be trustworthy.

Usable permissions must remove the burden from people whilst giving them meaningful control

In December 2015 I soft-launched IF with ‘data licences’. Data licences are a pattern that enables people to customise a values-based agreement for how data about them can be used. They are similar to Creative Commons licences.

Data licences - (Image: IF, CC-BY)

Data licences, for now, are a speculative pattern. But what they represent is important. Usable permissions remove the burden of decision-making from individuals, while giving people agency over things that matter to them. People’s choices are based on their experiences and perception of risk. This principle is backed by Adrienne Porter Felt’s research. She highlights the need for a methodology capable of addressing the challenge of customisation without overloading individuals.
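To make the pattern a little more concrete, here is a minimal sketch of what a machine-readable data licence might look like. The field names, purposes and checking logic are illustrative assumptions, not part of IF’s published pattern. The point is that, like a Creative Commons licence, a licence turns many small decisions into one reusable agreement:

```python
from dataclasses import dataclass
from typing import List

# A minimal, hypothetical representation of a data licence.
# Field names (purposes, retention_days, share_with_third_parties)
# are illustrative, not IF's published pattern.
@dataclass
class DataLicence:
    holder: str                          # the person the data is about
    purposes: List[str]                  # what the data may be used for
    share_with_third_parties: bool       # may the service pass data on?
    retention_days: int                  # how long the data may be kept
    attribution_required: bool = True    # an echo of Creative Commons' BY clause

def permits(licence: DataLicence, purpose: str) -> bool:
    """Check whether a requested use falls within the licence."""
    return purpose in licence.purposes

# Example: a licence that allows service improvement but not advertising.
licence = DataLicence(
    holder="alice",
    purposes=["provide_service", "improve_service"],
    share_with_third_parties=False,
    retention_days=90,
)
assert permits(licence, "improve_service")
assert not permits(licence, "advertising")
```

A person would agree a licence like this once, and services would check requests against it, rather than asking the person to re-decide every time.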

AI tools could create new interactions that give people agency

At IF we often approach a problem by asking how technology might solve it in ways that weren’t possible before. Thinking about how to give people choice while removing burden, two technologies are of interest to us right now: AI-powered personal assistants, and federated machine learning.

Google has been working on privacy-preserving software and products. TensorFlow Federated (TFF) is an open-source framework for privacy-preserving machine learning - (Image: María Izquierdo/IF, CC-BY)

What if you could delegate responsibility for how data is collected and used to an AI-powered assistant that acts on your behalf? The AI could represent an organisation like Consumer Reports or the ACLU. You’d select the AI that you feel best matches your values. You’d only set your privacy preferences once, then the AI does the rest.

The AI could operate under a federated global privacy model that is made and inspected by experts. ‘Local’ privacy models can then be personalised to you by learning from your choices.
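Here is a rough sketch of how that global/local split might work, under the assumption that the global model is a set of expert-authored, inspectable defaults and the local model simply learns from your own allow/deny choices. The category names and thresholds are hypothetical, and a real federated system would share model updates rather than raw choices:

```python
from collections import defaultdict

# Hypothetical sketch of a "global + local" privacy model.
# GLOBAL_DEFAULTS stands in for expert-made, published, inspectable defaults;
# LocalPrivacyModel personalises them from the user's own decisions.
GLOBAL_DEFAULTS = {
    "location": "deny",
    "contacts": "deny",
    "usage_stats": "allow",
}

class LocalPrivacyModel:
    def __init__(self, min_observations: int = 5):
        self.history = defaultdict(list)      # past choices, kept on-device
        self.min_observations = min_observations

    def record_choice(self, category: str, allowed: bool) -> None:
        self.history[category].append(allowed)

    def decide(self, category: str) -> str:
        choices = self.history[category]
        # Fall back to the expert default until there is enough local evidence.
        if len(choices) < self.min_observations:
            return GLOBAL_DEFAULTS.get(category, "deny")
        # Otherwise follow the user's own majority preference.
        return "allow" if sum(choices) > len(choices) / 2 else "deny"

# In a federated setup, only model updates (never the raw history)
# would leave the device to improve the shared global model.
model = LocalPrivacyModel()
for _ in range(6):
    model.record_choice("location", True)     # user repeatedly allows location
print(model.decide("location"))    # "allow" - personalised to this user
print(model.decide("contacts"))    # "deny"  - expert default still applies
```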

Work is already being done on this. Using Foursquare as a case study, researchers developed a machine learning model that could be used to protect people’s privacy. The model learns why a user wants to share their location on Foursquare. Based on the reason for location sharing, the model automatically obfuscates, or hides, the location from third parties requesting access to this information. The obfuscation level provides the highest degree of privacy that still allows the app to function as the user wants.
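A simplified way to picture purpose-driven obfuscation is below. This sketch is loosely inspired by the study described above, but the purposes, obfuscation levels and coordinate precisions are my own illustrative assumptions, not the researchers’ model:

```python
# Fewer decimal places = coarser location = more privacy.
# Roughly: 3 decimals ~ 100 m, 2 ~ 1 km, 0 ~ city scale.
OBFUSCATION_LEVELS = {
    "checking_in_at_venue": 3,    # needs a precise venue, least obfuscation
    "finding_nearby_friends": 2,  # neighbourhood is enough
    "city_recommendations": 0,    # city-level is enough, most obfuscation
}

def obfuscate(lat: float, lon: float, purpose: str):
    """Return the coarsest coordinates that still serve the sharing purpose."""
    precision = OBFUSCATION_LEVELS.get(purpose, 0)  # unknown purpose: most private
    return round(lat, precision), round(lon, precision)

# Example: the same location, shared for different reasons.
print(obfuscate(51.5237, -0.0806, "checking_in_at_venue"))  # (51.524, -0.081)
print(obfuscate(51.5237, -0.0806, "city_recommendations"))  # (52.0, -0.0)
```

The person never sees any of this: the model infers the purpose, picks the most private level that still works, and the app carries on functioning as they expect.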

Delegating decision making to an entity that represents many of us is not a new idea. Tom Steinberg wrote about organisations, “personal data representatives,” that could take on this role. I explored the idea of data cooperatives in 2014. Now there are a growing number of ‘data trust’ experiments from Sidewalk Labs, the Open Data Institute, Uber drivers and others.

New technical architectures are needed too

To make delegation to organisations or machine actors trusted, new technical architectures are needed.

Ben Laurie, Director of Security and Transparency at Google Research, with whom we work closely, wrote about one version of a technical infrastructure. He argues for human-centred APIs, designed around human needs in ways that increase trust and accountability. The APIs are controlled by policies. Those policies need to be ‘inspectable and enforceable’.

Image: Sarah Gold/IF, CC-BY
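To show what ‘inspectable and enforceable’ could mean in practice, here is a minimal sketch of a policy-controlled, auditable data API. The policy format, purposes and log structure are assumptions made for illustration; Ben Laurie’s post describes the idea, not this code:

```python
import json
import time

# The policy is published alongside the API so that anyone can inspect it.
POLICY = {
    "allowed_purposes": ["fraud_check"],
    "allowed_fields": ["postcode"],
}

AUDIT_LOG = []   # in practice, an append-only, verifiable log

def request_data(record: dict, fields: list, purpose: str) -> dict:
    """Grant access only when the request fits the published policy,
    and record every decision so enforcement can be checked later."""
    allowed = (
        purpose in POLICY["allowed_purposes"]
        and all(f in POLICY["allowed_fields"] for f in fields)
    )
    AUDIT_LOG.append({
        "time": time.time(),
        "fields": fields,
        "purpose": purpose,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError("Request falls outside the published policy")
    return {f: record[f] for f in fields}

# Example: an in-policy request succeeds; an off-policy one is refused and logged.
record = {"postcode": "E2 8HD", "date_of_birth": "1990-01-01"}
print(request_data(record, ["postcode"], "fraud_check"))
try:
    request_data(record, ["date_of_birth"], "marketing")
except PermissionError as e:
    print(e)
print(json.dumps(AUDIT_LOG, indent=2))
```

Because the policy is data rather than buried logic, experts can read it, and the log shows whether it was actually enforced.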

In this future, organisations will play a significant role. They will need to provide expert governance over how data can be accessed by a service, check that data policies are fair and that claims made about data collection and access are true. These functions are critical for society to trust that data is being used responsibly. They also come at a financial cost.

Professor Diane Coyle recently spoke about how technology shapes our organisations and companies in her talk for WIRED Smarter, ‘All technology is social’. Her talk makes me think about the investment necessary to deliver permissions like these. Who will (or should) pay for the new infrastructure? What will be the successful business models for permissions like these? These are important questions to answer as permissions that better meet people’s needs will come at institutional and political cost. There are many elements that need to come together for better permissions to succeed and have a wider impact on society and the economy.

Technology on its own will not be enough

Part of the answer might be “add more, better tech”. But there’s no reason to believe that there’s a fully tech-led solution to a tech-created problem. We will need other kinds of infrastructure too: institutions and organisations that can help hold these technologies, and the people behind them, to account.


Thank you to Ella and Grace for helping me get this mega post finished, and to Felix for the research that underpins the idea of a privacy model.

Interested in working for IF to solve issues like usable permissions? We’re hiring!