It’s the part of the exhibit where we asked people to put a value on their data, and the results we looked through suggested it didn’t work so well. I wanted to think about why.
A marketplace for data
The way data is sold is extremely opaque. It’s traded in big volumes by large entities in industries like advertising and insurance. The data includes information about who we are, what we do, what we like and more. We rarely have any influence on the way our data is used by other parties.
In our exhibit, we experimented with the idea of a data marketplace, where data is given value and is traded. This isn’t a new idea. Services like Datacoup, People.io and Citizenme already do this, but they act more as middlemen rather than letting people put a value on their data directly.
We make the data, but we’re shut out of the conversation with the people who sell it. We should be able to get something back from the data we produce about ourselves. It’s also important that what we get back isn’t superficial. Rather than discounts or extra features, the returns on our data should be real and useful. Jaron Lanier writes about this in his book, Who Owns the Future?
It’s hard to put a value on an abstract concept
We knew from the outset that it would be hard to put a value on data. Pricing something implies we can isolate a unit of it, which is difficult with data because its value depends on what it’s used for. Data doesn’t have the intrinsic value of a gold bar or a pound coin, or even a sandwich. We knew we had to do a lot of work so people could get their heads around how pricing it might work.
Adding a value was the last step in making a data licence. To help people understand what they were putting a value on, we asked participants what price they would charge for six months of access.
We provided suggestions for what we thought would be the low, mid and high values. We expected that as more people set a price, those values would shift. But in practice, people simply treated our suggestions as discrete options.
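That anchoring effect is easy to quantify. A minimal sketch, using entirely invented prices and suggestion values (the real exhibit data isn’t reproduced here), of how you might measure what share of licences were priced exactly at one of the suggestions:

```python
# Hypothetical sketch: how heavily did suggested prices anchor
# participants? All figures below are invented for illustration.
SUGGESTED = {1.0, 5.0, 20.0}  # low, mid, high suggestions, in pounds

# Invented sample of participant-set licence prices.
prices = [1.0, 5.0, 5.0, 20.0, 1.0, 5.0, 20.0, 3.5, 5.0, 1.0]

def anchored_share(prices, anchors):
    """Fraction of prices that exactly match one of the suggestions."""
    anchored = sum(1 for p in prices if p in anchors)
    return anchored / len(prices)

share = anchored_share(prices, SUGGESTED)
print(f"{share:.0%} of licences priced exactly at a suggestion")
```

With the invented sample above, nine of ten prices sit on an anchor, which is the sort of clustering we saw: almost no one named a price of their own.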
Our data makes conclusions hard to draw
It’s hard to know what people thought when they got to this point in the exhibit. The values added to most licences didn’t deviate much from our suggestions, so we don’t know what people really thought their own data might be worth, or how much they thought about the question.
I don’t think there was that much considered thought, but that’s a big assumption. Maybe I’m wrong.
To learn more we’d have to try something else. Half the battle is a design challenge, and the other half is an economics problem.
If you’re creating a market from scratch, there are all these weird economic and almost philosophical questions you have to answer. Because this was only a small part of a piece in an exhibition we didn’t really do that. Instead, our price suggestions created a feedback loop. We have to think carefully about how we design that part in the future.
We also have to think more carefully about what we’re hoping to discover. Biometric data is arguably the most intimate type of data you could have. Would people price it higher than their travel data? The current implementation makes that hard to find out.
We’ll have another go
What we showed at Big Bang Data was a first shot at an implementation of data licences. Core to this consent model is the idea of giving people fine control over the way their data is used. Putting a value on data is an interesting and speculative extension of this model.
It’s something we want to explore again. Not only to try other ways of explaining these ideas, but to further interrogate the nature of putting value on data, and what the implications of doing that might be. For instance, is there a risk of increasing the economic inequality between people who own expensive connected devices that produce large amounts of data, and those who don’t?
We didn’t come in with an agenda, other than to see what people would do if they could control more of their data. We managed to make the idea of trading data less shady and a bit more real for the general audience. That’s important in itself. The fact that it didn’t go completely to plan is fine, because we’ve learned a lot in the process.