Data, privilege and inequality

When I told my Dad I was going to work for IF, we talked about why IF’s mission is so important right now. I explained that the balance of power needs to change. People need to be able to understand, and challenge, how organisations use data.

He responded by saying, “I don’t really think it matters what they know about me. It’s actually quite helpful when Facebook suggests things I’d like to buy.” Since I’ve been at IF and learnt more about the problems we’re addressing, I’ve reflected a lot on this conversation.

I’ve been thinking about the role privilege plays in how people think about ‘data’. Because services mostly operate as a contract between an individual and an organisation, we tend to think of the potential harm of data misuse in individualistic terms. And as individuals, our privilege can protect us from the worst impacts of unethical data practices. But as more data is collected and used in different ways, there’s a risk that new technologies quietly reinforce systemic societal inequality.

I’m going to talk about three areas where data use can reinforce historic power structures: nationality, race and wealth.

Enforcing borders

Data is shared between government departments, and sometimes with private companies, often with little transparency. There are lots of examples of data sharing that helps governments enforce borders based on xenophobic constructions of nationality.

In a recent example, the Canadian border agency was criticised for using data collected by ancestry websites to deport people. The agency accessed DNA results and even contacted distant relatives in an effort to establish someone’s nationality. Quite apart from the obvious human rights abuses, ‘the whole premise of using DNA to establish nationality is flawed since ethnic origin doesn’t necessarily tell what someone’s citizenship is’.

We’ve seen similar things here in the UK. Until earlier this year, data was shared between the NHS and the Home Office to find ‘illegal immigrants’. The decision was made by the government with little debate or public consultation.

Encouraging biased policing practices

Facial recognition software is notoriously bad at identifying people of colour, largely because of a lack of diversity in the training data these systems learn from.

To prove this point, the ACLU tested Amazon’s Rekognition software on members of the US Congress. Rekognition incorrectly matched 28 of them with mugshots of people who had been arrested for a crime, and it was twice as likely to falsely match people of colour. Amazon sells this software to law enforcement agencies across the US. What will that do to the already fraught relationship between communities of colour and the police? My guess is that it will only reinforce policing and sentencing practices that already discriminate against young African-American men.

Policing and bias. Photo by King’s Church International.

What’s going on in the US is getting a lot of attention from the press and campaign groups. What’s happening here in the UK gets much less. But there are similarities.

The Metropolitan Police was recently criticised by Amnesty International for its ‘gang matrix’, a database of 3,800 suspected ‘gang-affiliated individuals’. Because the ‘conflation of elements of urban youth culture with violent offending is heavily racialised’, people of colour are heavily overrepresented in the database: 78% of the individuals on the gang matrix are black, yet black people make up only 13% of London’s population and 27% of those the police identify as responsible for serious crimes.

Data from the matrix is shared across police forces, housing associations, schools, job centres, the criminal justice system and the Home Office. This risks further entrenching institutional racism in organisations that have a history of failing the black community.

Excluding certain socio-economic groups

In India, more and more services are delivered through Aadhaar, a new digital identity system. People are denied food rations if they are unable or unwilling to submit biometric data to Aadhaar, and there are many examples of people on lower incomes or living in rural areas being excluded from the system. In effect, Aadhaar punishes the people who rely most on government support.

And in the UK, the Home Office used a database created by charities working with homeless people to deport rough sleepers who were citizens of other EU countries. The policy has since been ruled unlawful, but not before the rights of people in vulnerable situations had been abused.

Where do we go from here?

The narrative is already changing. People are becoming more aware of how organisations collect and use data. In the US, campaigns about data, technology and race are getting more visibility. The UK is comparatively quiet, and I think it’s important that we become more vocal in challenging what’s happening here too. But what practical changes need to happen?

As individuals, our privilege may protect us from some of the worst harms of data misuse. But when new technologies reinforce social inequality, that shapes the society we live in. So we all have a responsibility to care, especially those of us who are privileged enough to have the power to change things for the better.