Putting the care of Black people at the forefront of design is one way to turn anti-racist talk into action. To that end, we’ve been reflecting on past projects this week to look at how our learnings can be applied to anti-racist design practices.

The corporate statement released by Airbnb in response to the Black Lives Matter movement referred specifically to the racism that had taken place within their community. On seeing this, we started talking about the sharing economy and how to deliver care in the face of racial microaggressions, and we've outlined design interventions that emphasise care. These interventions haven't been tested yet, but they're a good starting point for building a new normal.

At the moment, many reporting flows go something like this: incident occurs > incident is reported > aggressor is reprimanded (how they’re reprimanded is not always transparent but that’s for another day).

We are interested in:

  • How can we design interventions so that racial microaggressions can be reported, while protecting the identity and safety of the person reporting?

  • How can we design a reporting flow that supports the victim and makes them feel cared for?

1. Stop microaggressions before they’re sent

Natural language processing can analyse messages between users of the platform and detect whether they contain discriminatory terms, from microaggressions to racist slurs. We can design an intervention that prompts the sender to reconsider their language before the message is sent.
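As a rough illustration, here's a minimal sketch of what that check might look like. The term list, matching logic and prompt copy are all placeholder assumptions: a real system would use a trained classifier and a vocabulary curated with the communities affected.

```python
# A minimal sketch of intervention 1: check an outgoing message before it is sent
# and, if it contains flagged language, ask the sender to reconsider.
# The term list and matching are placeholders, not a production approach.

FLAGGED_TERMS = {
    # Placeholder entries; a real list would be curated and regularly reviewed.
    "example_slur",
    "example_microaggression_phrase",
}


def flag_message(text: str) -> list[str]:
    """Return any flagged terms found in a message."""
    lowered = text.lower()
    return [term for term in FLAGGED_TERMS if term in lowered]


def send_message(text: str, confirm_send) -> bool:
    """Hold a message for review when flagged language is detected.

    `confirm_send` is a callback that shows the sender an in-context prompt
    and returns True only if they choose to send the message anyway.
    """
    if not flag_message(text):
        return True  # nothing detected, send as normal
    prompt = (
        "Your message contains language that may be hurtful. "
        "Would you like to edit it before sending?"
    )
    return confirm_send(prompt)
```

The important design choice is that the check happens before the message is delivered, so the person on the receiving end never has to see it.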

2. Educate the sender about harmful language

This intervention helps protect the person on the receiving end, and it informs the sender, in context, about why their language is hurtful.

We have previously used natural language processing with vulnerability indexes to think about how we can design automated decision-making that demonstrates care for people – especially for sensitive topics like mental health – while being clear about the boundaries and limitations of AI.

IF wanted to explore young people’s attitudes to data privacy and automated decisions. As part of our research, we developed a fictional mental health chatbot, MoodJar.
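In the spirit of that research, here's a hypothetical sketch of how a vulnerability signal might shape an automated response while being upfront about the limits of AI. The signals, thresholds and wording are illustrative assumptions, not how MoodJar actually worked.

```python
# A hypothetical sketch: pair a rough vulnerability signal with an explicit
# statement of the bot's limits and a handover to a person.
# The word list and threshold are illustrative assumptions only.

def vulnerability_score(message: str) -> float:
    """Placeholder for an NLP model that estimates how vulnerable the sender may be."""
    distress_phrases = {"hopeless", "can't cope", "alone"}
    hits = sum(1 for phrase in distress_phrases if phrase in message.lower())
    return min(1.0, hits / len(distress_phrases))


def respond(message: str) -> str:
    """Reply normally, but escalate to a human when distress signals appear."""
    if vulnerability_score(message) >= 0.5:
        # Be clear about the boundaries of automation and hand over to a person.
        return (
            "I'm an automated service and I can't give you the support you deserve "
            "right now. I'm connecting you with a person who can help."
        )
    return "Thanks for checking in. How has your day been?"
```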

3. Protect the user’s identity during the reporting flow

We challenged Oxfam to show care by giving displaced people proof of their consent. This proof of permission lets someone trace what data about them is being used, and for what. The receipt allows the organisation to be held accountable for the data it has collected, and it gives the person providing the data a transparent mechanism to follow up on how that data is being used. Additionally, the receipt contains only pseudonymised data, so it can't be traced back to a specific person.

IF designed a receipt to help people keep track of how data about them is used.
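To make the idea concrete, here's a minimal sketch of what such a receipt could contain. The field names, hashing scheme and follow-up URL are assumptions made for illustration, not Oxfam's actual format.

```python
# A sketch of a pseudonymised consent receipt: it records what was collected and
# why, without containing anything that identifies the person directly.

import hashlib
import json
from datetime import datetime, timezone


def make_receipt(person_id: str, purpose: str, data_categories: list[str], salt: str) -> str:
    """Build a receipt that records what data was collected and what it will be used for."""
    receipt = {
        # Pseudonymous reference: the raw identifier never appears in the receipt.
        "subject_ref": hashlib.sha256((salt + person_id).encode()).hexdigest(),
        "issued_at": datetime.now(timezone.utc).isoformat(),
        "data_categories": data_categories,           # e.g. ["name", "household size"]
        "purpose": purpose,                           # what the data will be used for
        "follow_up": "https://example.org/my-data",   # hypothetical follow-up address
    }
    return json.dumps(receipt, indent=2)


print(make_receipt("case-1234", "aid distribution planning", ["name", "household size"], salt="s3cret"))
```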

4. Proactively detect discrimination to lift the burden off users

Interactions between guests and hosts can be proactively monitored in a privacy-preserving way to detect cases of discrimination. Once a cancellation of concern is flagged, the users involved should be able to check the progress of the case while it's investigated by a real person.
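Here's a sketch of what that flow might look like. The heuristic for flagging a cancellation and the case statuses are assumptions for illustration; a real system would use richer, privacy-preserving signals and a proper case-management process.

```python
# A sketch of intervention 4: flag cancellations that follow a pattern of concern
# and let the people involved check the status of the case while a person reviews it.

from dataclasses import dataclass, field


@dataclass
class Case:
    booking_id: str
    status: str = "under review by a person"
    updates: list[str] = field(default_factory=list)


def is_cancellation_of_concern(host_cancel_rate: float, cancelled_after_profile_seen: bool) -> bool:
    """Very rough heuristic; a real system would combine many privacy-preserving signals."""
    return cancelled_after_profile_seen and host_cancel_rate > 0.3


open_cases: dict[str, Case] = {}


def flag_cancellation(booking_id: str, host_cancel_rate: float, after_profile_seen: bool) -> None:
    """Open a case automatically so the burden of reporting doesn't fall on the guest."""
    if is_cancellation_of_concern(host_cancel_rate, after_profile_seen):
        open_cases[booking_id] = Case(booking_id)


def case_status(booking_id: str) -> str:
    """Anyone involved in the booking can check where the investigation is up to."""
    case = open_cases.get(booking_id)
    return case.status if case else "no open case for this booking"
```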

This is only the beginning

Here are some other questions we considered for putting care first in the reporting feedback loop:

  • How might sharing economy platforms reassure people that have reported discrimination that their report is meaningful and consequential?
  • How might sharing economy platforms be more transparent about their process for dealing with reports of discrimination, and about the consequences for those found responsible?
  • How might people understand how sharing economy platforms classify different types of discriminatory behaviour?
  • How might sharing economy platforms protect people that report discrimination from backlash?

It would seem obvious that any anti-discrimination policy should include a robust approach to caring for the victim, but their needs can get lost in the bureaucracy of performative action.

Want to start using anti-racist design thinking in your work? Let’s talk.