Healthcare data has never been designed

The way you get information about your health hasn’t ever really changed. You go to a consultation. You hear something and, for some, shock takes over. You don’t really take in the information you’re being told. You might be given an information sheet or prescription, but you don’t really have control of the information. You’re just playing catch-up.

At the same time, as a person becomes progressively more ill or starts suffering from multiple conditions, the quantity of data held about them increases. Not only that, but the number of people involved in their care increases too. This data isn’t really ‘in one place’ – it tends to be scattered across the different healthcare providers and services, stored digitally on systems that are rarely interoperable or stored on paper in a filing cabinet.

We explored different ways patients could receive information (Image: Doteveryone)

So we have patients who don’t feel in control of information about their own health, and a system for storing that data that was never ‘designed’ but grows in fits and starts with every new patient or technology.

This means that when the government and public talk about ‘consent’ for healthcare data, there’s much more to unpick than just a binary opt-in or opt-out of sharing. Remember the scheme?

This model of consent is problematic because it:

  • assumes there’s a canonical source of data and a defined set of organisations who are able to access it (there isn’t, and there aren’t)
  • may let you opt out of your data being shared by one service, but not by another organisation that holds a copy of the same data to support a separate service
  • only gives you the power to make a binary decision (in some instances consent is conditional – organ donation, for instance, gives you a range of options)
  • doesn’t give people the range of permissions that could better support their needs, for instance giving someone a single-time access code to a particular piece of healthcare data
  • emphasises the personal value or risk of sharing healthcare data, when we also need to consider the public value of sharing it

Instead of thinking about giving organisations or individuals access to healthcare data through models of consent, perhaps it’s more useful to think about permissions.

We should design good permissions systems for healthcare data that are accessible and useful for people, so they can understand who has access to their healthcare data and why. We need permissions that are capable of dealing with the complexity of people’s lives and their relationships.
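To make one of these ideas concrete – a single-time access code for a particular piece of healthcare data – here’s a minimal sketch of what such a permissions system might track. Everything here is illustrative (the names, the structure, and the idea of auditing who holds a live grant are our assumptions, not anything the project built):

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Permission:
    """One grant of access to one piece of healthcare data."""
    grantee: str            # who may access (e.g. a clinician or carer)
    record_id: str          # which piece of data the grant covers
    single_use: bool = False
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))
    used: bool = False

class PermissionStore:
    """Tracks grants and can answer 'who has access to what, and why'."""

    def __init__(self):
        self.grants = {}

    def grant(self, grantee, record_id, single_use=False):
        """Issue an access code, optionally valid for a single use only."""
        p = Permission(grantee, record_id, single_use)
        self.grants[p.token] = p
        return p.token

    def access(self, token, record_id):
        """Check a code against a record; spend it if it was single-use."""
        p = self.grants.get(token)
        if p is None or p.record_id != record_id or p.used:
            return False
        if p.single_use:
            p.used = True   # the code is spent after one access
        return True

    def audit(self, record_id):
        """List everyone still holding a live grant for this record."""
        return [p.grantee for p in self.grants.values()
                if p.record_id == record_id and not p.used]
```

The point of the sketch is the shape of the model, not the code: a permission names a person, a piece of data, and the conditions of access, and the system can always report who currently holds access – which is exactly what a binary opt-in/opt-out can’t express.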

We explored what good permissions could look like (Image: Doteveryone)

And we should go a layer deeper

The improving care project made it clear that better access to healthcare data will help clinicians provide better care, and patients have a better quality end of life. But by opening up access to healthcare data, we also need to know that the foundations are secure. That means grappling with social and technological infrastructure. It means thinking about Wi-Fi and WhatsApp.

We don’t have a complete picture of how medical professionals share information, but we’ve heard anecdotes about some using tools like WhatsApp to get advice from other professionals and carers – taking information outside the existing ‘secure’ setup and sharing it in potentially insecure places.

If those anecdotes are true, then they aren’t stories about carers disregarding patient safety. Quite the opposite. Healthcare professionals want to give their patients better treatment so they’re using tools that promise reliable, speedy exchanges of information instead of the official ways, which are obstructive and slow. Hillary Clinton’s use of a private server is a good example of how ingrained this kind of infrastructural problem is: people find workarounds when technology fails to meet their needs.

Better access to healthcare data needs to be built with an awareness of the risks that behaviour like this introduces.

An early prototype looking at new communication tools for clinicians and patients (Image: Doteveryone)

Similarly, if you’re giving patients in long-term care access to their records then you have to look at the system they’re using to access them. Right now, free Wi-Fi isn’t very usable and it’s often not secure. It’s either accessed through open networks (there’s no password, so it’s easier for patients, but the tradeoff is a less secure connection) or via providers who use captive portals to get people to sign up (where patients may be surrendering data and metadata to a third party without giving informed consent).

Technology like this is foundational to accessing healthcare data. Right now, no-one is responsible for making sure it’s implemented ethically.

“Technology is the answer, but what is the question?”

I’ve referenced Cedric Price before, but this project highlighted how important it is to know what the question is, because just sticking some technology on top of deep institutional problems won’t fix anything. There are issues around security, legibility and consent that are critical to improving care. That means doing broad, holistic research into how people really use technology so you understand what people actually do. It also means trying new ideas and technologies to see how the underlying infrastructure must shift to support change.

We were fortunate to work with Doteveryone and BuckleyWilliams on doing this for a thin slice of the health and care system. It’s a big, complicated area for designers to research and build meaningful things, and we’re looking forward to seeing what happens next.