In early 2019, IF wanted to explore young people’s attitudes to data privacy and automated decisions. So we invited our friends Alex and Akil at Comuzi Labs (who had just released the Feminist Alexa chatbot) to join us in this work. As part of our research, we developed a fictional mental health chatbot, MoodJar. Rapid prototypes like that are great for thinking through making, and for testing people’s assumptions, thoughts and feelings about otherwise esoteric topics.
What we did with it next
We never intended for MoodJar to be a live digital product. Instead, we used MoodJar to explore patterns for how and when to explain automated decisions. Since then, IF has done a lot of work making AI more transparent and understandable, including projects with the London School of Economics and Google AI last year. We’re also committed to showing what good looks like when it comes to consent patterns around data and automated decisions.
The MoodJar prototypes were raw and utilitarian: they were useful for us to explore how to communicate automated decisions through different UI interactions. They were prototypes for testing, not for telling a story. How could we take what we’d learned from the work, and with hindsight on our side, redesign it so that it told the story about interaction patterns more clearly and quickly?
Started with small tweaks
We thought this would be a quick and simple redesign. So we started off with “quality of life” improvements: we tweaked the copy to be clearer, made the text bigger so it was readable when presenting the work on a big screen at a distance, showed the UI in context (on a phone) and started to break the long chatbot conversation into smaller chunks. But it wasn’t enough.
Focussing on the patterns
One of the problems with that design was that it still felt and looked too close to a real product. When we ran an internal crit, people thought of it as “a mental health app” and asked all sorts of very valid questions about the app itself. At IF, we think that permissions should sit in the context of a service and as part of a branded experience. But that’s not always a helpful way of telling a story about what sorts of permissions should exist. It was too real, and this was distracting.

So we took the prototype and stripped it down further, so that all that was left were the consent patterns. We needed a level of detail somewhere between what we had and a Data Patterns Catalogue illustration. We started by removing all the MoodJar branding, made the interactions less wordy, and added emojis. We abstracted the phone to a rectangle, released the text boxes from the screen and let them float on top of it. We divided the story and turned it into modular images that can be used individually, in pairs, or all together. This way you see them as isolated consent patterns rather than the end-to-end MoodJar product. All the while, we kept the political and practical thinking about accountability, transparency and usability that underpinned the first versions of the prototype.
What do we get from this?
Working on the edge of current design and tech practices is exciting, but hard. How do you take ideas like “algorithmic accountability” or “transparency” and test what they would look like in an interface? By testing prototypes like the first iteration of MoodJar, teams like IF and Comuzi can begin to pick out how people actually feel when they’re faced with an alternative to extractive practices. These prototypes and patterns go on to become part of how we tell the story of what we do (in talks by David, Georgina, Sarah and Ella).
In the end, we take what we learn from both client and self-initiated projects and show how these interaction patterns can be used more generally; it’s our way of helping build a new, better normal.
Thanks to Ella Fitzsimmons for her contributions to this post.