This is the first in a series of three posts on making digital services worth trusting, looking at permissions as a key part of a trustworthy service. It’s a complicated topic, so this post focuses on some of the nuanced problems with permissions.

The proportion of consumers concerned about how brands access data about them leapt from 46% in 2015 to 67% in 2018. There are many reasons for this loss of consumer trust. Building brands, or services, worth trusting isn’t an advertising challenge anymore: it’s increasingly a design and software one.

Permissions are for privacy, but don’t feel empowering

The idea of permissions is both well-intentioned and necessary. They’re put in place for privacy, to give users control over what services can know about them. But as users of services, our lived reality of permissions is that they’re too often confusing and disempowering.

Lots of permissions are bound up in legalese in the terms and conditions. Almost nobody reads them. In an academic study from a few years ago, 98% of people who signed up for a fictional social-networking service agreed to give away their first-born child. As a study by Doteveryone showed, terms and conditions are just too long and too hard to understand. In 2016, the Norwegian Consumer Council read the terms and conditions of 33 apps (the average number of apps a Norwegian uses) out loud, and it took 31 hours and 49 minutes.

Permission pop ups - (Image: Sarah Gold, CC-BY)

Other permissions are requested during a service’s onboarding journey or during use. This might sound like a better way of doing things. Unfortunately, it’s problematic too. The number of digital services we use daily means we’re asked for permission too often to carefully consider each case. And who has the expertise to properly assess these requests? It gets even more difficult to understand what’s going on if you decline a permission that’s critical to the service, or when a service doesn’t ask for permission when you think it should have.

Permissions for apps are borked

Flow from critical to runtime to denied - (Image: Sarah Gold, CC-BY)

It’s tricky for product teams too, and it’s often not the designers’ or developers’ fault. If you are making an app for Android or iOS, you have to use the frameworks that Android and iOS set, along with their policies. As a developer you have to include the pop-up permission dialogs, and you’re encouraged (sometimes forced) to place them at certain points in your user journey.

At the moment, Google’s Material Design framework for Android states, as part of its best practices for user interface design, that “permission requests should be simple, transparent, and understandable”. This is a great, and responsible, aim. In practice, though, that framework doesn’t currently make it easy for product teams to build trustworthy services. Something that’s worth fixing.
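For a sense of what that looks like in a developer’s hands, here’s a minimal Kotlin sketch of the runtime permission flow Android asks teams to implement, using fine location as an example. MapActivity, startLocationUpdates and the other helper names are illustrative, not taken from any real product.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

// Hypothetical activity that wants fine location; all names are illustrative.
class MapActivity : AppCompatActivity() {

    // The system permission dialog reports back a single Boolean: granted or not.
    private val requestLocation =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startLocationUpdates() else explainDegradedMode()
        }

    private fun ensureLocationPermission() {
        when {
            // Already granted on a previous run: no dialog needed.
            ContextCompat.checkSelfPermission(
                this, Manifest.permission.ACCESS_FINE_LOCATION
            ) == PackageManager.PERMISSION_GRANTED -> startLocationUpdates()

            // The user declined before: Material guidance says explain why first.
            shouldShowRequestPermissionRationale(Manifest.permission.ACCESS_FINE_LOCATION) ->
                showRationaleThen { requestLocation.launch(Manifest.permission.ACCESS_FINE_LOCATION) }

            // First ask: go straight to the system dialog.
            else -> requestLocation.launch(Manifest.permission.ACCESS_FINE_LOCATION)
        }
    }

    private fun startLocationUpdates() { /* start using location */ }
    private fun explainDegradedMode() { /* tell the user what still works */ }
    private fun showRationaleThen(onAccept: () -> Unit) { /* show an in-app explanation */ }
}
```

Even this well-trodden flow leaves the hard parts (when to ask, how to explain, what to do after a refusal) entirely to each product team.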

Even among organisations that are genuinely aiming for transparency, the way permissions are designed, and where they sit in the user journey, often ends up unwieldy, intrusive and overwhelming.

The Alexa problem: no runtime permissions - (Image: Sarah Gold, CC-BY)

New services, like Alexa Skills, have no design system for permissions. Instead they default to the Android or iOS permission frameworks, with the user setting permissions in their Amazon app. There are no runtime permissions, which means that as a user you aren’t asked for permission to share data while you’re using a Skill.

One of the things we’ve been working on at IF is designing for more than one person. As Alexa is a product that’s meant to be used that way, it would be a really interesting place to establish best practices for permissions in voice. Again, worth doing.

At the moment Alexa only supports two kinds of permission: reading and writing Alexa lists, and location. How specific the location is can be set by the Skill developer, presumably to support Skills like delivery. But that also means someone could be listening to a podcast Skill while the Skill accesses their address, all because the user gave the Skill permission once. It’s similar to the security issues with torch apps that asked for far more access than they needed.
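To make that concrete, here’s a rough Kotlin sketch of what a Skill developer works with, written against the Alexa Skills Kit SDK for Java. The handler, intent name and speech text are hypothetical, and I’m assuming the SDK’s v2 ResponseBuilder and the full-address permission scope read::alexa:device:all:address. The point is that when a Skill lacks a permission, the most it can do mid-conversation is push a consent card to the companion app.

```kotlin
import com.amazon.ask.dispatcher.request.handler.HandlerInput
import com.amazon.ask.dispatcher.request.handler.RequestHandler
import com.amazon.ask.model.Response
import com.amazon.ask.request.Predicates
import java.util.Optional

// Hypothetical handler for a delivery Skill; names and speech are illustrative.
class DeliveryIntentHandler : RequestHandler {

    override fun canHandle(input: HandlerInput): Boolean =
        input.matches(Predicates.intentName("DeliveryIntent"))

    override fun handle(input: HandlerInput): Optional<Response> {
        val permissions = input.requestEnvelope.context.system.user.permissions

        // No consent yet: the Skill can't prompt within the voice session itself.
        // All it can do is send a consent card to the Alexa app for later.
        if (permissions?.consentToken == null) {
            return input.responseBuilder
                .withSpeech("I need your address to arrange delivery. Check the Alexa app to grant permission.")
                .withAskForPermissionsConsentCard(listOf("read::alexa:device:all:address"))
                .build()
        }

        // Once granted, the address can be fetched from the Device Address API
        // (omitted here), and the grant persists silently across future sessions.
        return input.responseBuilder
            .withSpeech("Thanks, I have your delivery address.")
            .build()
    }
}
```

There is no equivalent of Android’s in-context dialog: consent is given once, somewhere else, and then applies to every later session without the user being reminded.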

All of this means that data sharing is obscured from the user: to understand how data is accessed and shared by a Skill, they have to read the Skill’s privacy policy. Research from 2017 found that 75% of Alexa Skills had no privacy policy. I wonder if that’s any different today.

Fixing permissions needs action upstream

Frameworks for developers make a lot of sense: writing code is often full of unknowns. But we need new frameworks that are fit for purpose. Organisations that set standards should be making it easier, not harder, for teams to design permissions well.

That’s the real “platform play” here.

This work is now urgent. Critical services are moving onto apps, which have limited permission settings. The NHS App (which will be a place where you’ll access your NHS health record) will have to use permission frameworks that are not currently fit for purpose. More research, considered design and iteration are needed. This is doable, but it’s going to demand a lot more than “ethics theatre” to get the industry there.

In the next post I’ll describe what we’ve found at IF to be best practices for designing permissions.

Thank you to Jess Holland, Ella Fitzsimmons, Grace Annan-Callcott and Felix Fischer for their help with this post.