How can we create digital products and services that people can—and do—trust? It’s a question that’s integral to our work at IF. It’s becoming increasingly important as people become more aware of the possible consequences of data being recorded, joined up and used by organisations.

Winning users’ trust is not only the responsible and moral thing to do; it’s becoming an essential component of business success. This effect will only be amplified in the coming decade.

At IF, we expect to see trustworthiness become a KPI that is routinely measured within organisations as part of their overall performance strategy. For this to happen, though, we need to think critically about what makes a digital product or service worth trusting.

The user interface plays a crucial role in building trust. We’re used to seeing pop-ups demanding consent for various uses of our data when we’re online, and we’re used to clicking ‘agree’ on lengthy and impenetrable terms and conditions in order to complete a task.

It’s hard to trust something we don’t really understand, and there’s a general sense that these interactions are more of a formality than a genuine attempt to clearly communicate how data about us is used. At IF, we research and prototype better ways of explaining data policies and gaining consent, and we continue to build a comprehensive pattern library that lays out the pros and cons of all the different possible ways of doing this.

Trust must inform how you design the whole technology stack

Improving the UI of digital products is just the first of many ways that trust can be gained. But organisations must go further, and create new ways of dealing with issues around privacy, transparency and accountability. At IF we have been working with organisations on redesigning their products in a fundamental way, so that trustworthiness is built into the technical architecture at every level.

One way of doing this is by designing for verifiability. Over the last year, we’ve been working with Trillian, Google’s open-source tool for storing data in a way that makes it possible to check whether data has been tampered with. By using Trillian in the design of digital products, organisations can prove how data is being used. This kind of infrastructure innovation enables organisations that represent many of us, like consumer rights organisations, to provide up-to-date information on the trustworthiness of services—or users could verify how data is being used for themselves. People don’t have to blindly trust that the system is working as intended.
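To make the idea of verifiability concrete: systems like Trillian rely on append-only logs whose cryptographic structure lets anyone detect tampering. The sketch below is not Trillian’s API—real deployments use Merkle trees and signed tree heads—but a simple hash chain illustrates the core property: every record commits to everything that came before it, so altering any past entry breaks verification.

```python
import hashlib


def entry_hash(prev_hash: str, data: str) -> str:
    """Hash each entry together with the previous hash, chaining them."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()


def build_log(entries):
    """Build an append-only log: each record commits to all earlier ones."""
    log, prev = [], ""
    for data in entries:
        prev = entry_hash(prev, data)
        log.append((data, prev))
    return log


def verify_log(log) -> bool:
    """Anyone holding the log can recompute the chain and detect tampering."""
    prev = ""
    for data, digest in log:
        prev = entry_hash(prev, data)
        if prev != digest:
            return False
    return True


log = build_log(["user consented to analytics", "data shared with partner A"])
assert verify_log(log)

# Rewriting a past entry while keeping the old digests fails verification:
tampered = [(log[0][0], log[0][1]), ("data shared with partner B", log[1][1])]
assert not verify_log(tampered)
```

In this toy version the whole log must be re-checked; Merkle-tree logs like Trillian’s improve on it by allowing efficient proofs that a single entry is included, without downloading everything.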

Communities, regulations and ethical codes must play a role

Over the coming months, I will be sharing a series of blog posts that describe the characteristics of trustworthy systems, breaking down the various ways in which trust can be earned. This will cover not just the technical architecture of these systems, but also possible regulations and codes of conduct that could play an important role in ensuring the trustworthiness of future technology, and the way that it’s possible to earn trust via communities.

For example, developers could make the workings of digital products transparent in such a way that experts, influencers or chosen community representatives can assess their trustworthiness and communicate this to the public. At IF, we’ve been thinking about the colour-coded health information on food packaging, and how groups could use design in a similar way to express the trustworthiness of digital products.

Taking a design approach to trust

Thinking in the open. Inspired by a sketch from Mark Hurrell in 2016, here’s my version of the Stactivism sketch.

Designing for trust is a complex, multifaceted challenge, and solutions will be context-specific. However, there is a reliable way of figuring out what works in each specific instance, and that’s by taking a design approach to the problem. Designers are instinctively inquisitive; they are good facilitators of discussions; and they are good at bringing together multidisciplinary teams. They are also used to testing prototypes and improving them in response to feedback. These are the elements needed when deciding how to make any specific service optimally trustworthy.

At IF, we’ve been using this process as we work with organisations—from technology multinationals to NGOs—to build solutions to the challenge of trust that are fit for the digital age. We want to keep sharing our findings as we continue our collaborative work, so we can play a part in helping create a digital ecosystem that’s not just practical, innovative and successful over the long-term, but that’s also safe, transparent and worthy of continued trust.

Here are the posts in the series as they’re published:


Thanks to Jess Holland and Ella Fitzsimmons for their edits.