Learning through making: understanding what young people think about AI and data privacy

Guest post by Akil Benjamin, Head of Research and Founder at Comuzi, an innovation studio which works on next-generation products and services.

AI, or at least automated decision-making, is already pervasive across seen and unseen aspects of our lives. Its impact and implications will become more far-reaching as these technologies develop. IF and Comuzi share a belief that people should understand how these technologies work, so we’re all best placed to navigate society in the future.

Privacy shouldn’t be a luxury. People should be able to understand what’s happening when they use their smartphones. But how do you design for that? Do young people even care about these things, or have they grown up in a time where they believe privacy is irrelevant?

Two studios, working together


Akil, Nat and Harry. IF: CC-BY

IF has previously explored the impact and implications of AI and privacy from many perspectives. Building on this work, IF and Comuzi spent a few weeks looking at how young people navigate and experience a society where automated systems are making more important and more complex decisions about people’s lives.

Taking a launchpad approach, we found insights in six areas we think are important for the future development of products and services for young people.


As part of our research, we developed a fictional mental health chatbot, MoodJar. We used it to see:

  1. What young people understand about automated decisions

  2. How we could show the inner workings of a mental health chatbot, in ways that make automated decisions more understandable

Some shared opinions about humans and AI

Young people already have strong opinions on how emerging technical systems should work. We learned this was especially true of AI, and of the situations in which automated decisions should be used.

The young people we spoke to had the following shared opinions (sketched as product requirements after the list):

  • It is important that AI discloses it is artificial or automated at the beginning of an interaction or engagement

  • It should not try to foster a humanlike relationship

  • The app should forget their entire history of interactions when it is deleted or removed

  • The chatbot won’t be great at everything and its limitations should be clear

  • Unless specifically directed, AI-powered chatbots and tools should not betray privacy
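
To make these expectations concrete, here is a rough sketch of how they might read as explicit product requirements for a chatbot. Everything in it (the ChatbotPolicy interface, moodJarPolicy and openingMessage) is an illustrative assumption of ours, not something we built or tested with participants.

```typescript
// Illustrative sketch only: encoding the expectations above as explicit,
// checkable requirements. Names and fields are assumptions, not MoodJar's code.
interface ChatbotPolicy {
  disclosesAutomationAtStart: boolean; // says "I'm a bot" in its first message
  avoidsHumanlikePersona: boolean;     // doesn't pretend to be a friend
  deletesHistoryOnUninstall: boolean;  // forgets everything when the app is removed
  statesLimitationsUpFront: boolean;   // is clear about what it can't help with
  sharesDataOnlyWhenDirected: boolean; // never passes data on without being told to
}

// A policy matching what participants told us they expect.
const moodJarPolicy: ChatbotPolicy = {
  disclosesAutomationAtStart: true,
  avoidsHumanlikePersona: true,
  deletesHistoryOnUninstall: true,
  statesLimitationsUpFront: true,
  sharesDataOnlyWhenDirected: true,
};

// The first two expectations in practice: disclosure and limitations, stated up front.
function openingMessage(policy: ChatbotPolicy): string {
  const parts = ["Hi, I'm MoodJar."];
  if (policy.disclosesAutomationAtStart) {
    parts.push("I'm an automated chatbot, not a person.");
  }
  if (policy.statesLimitationsUpFront) {
    parts.push("I can help you reflect on your mood, but I can't replace professional support.");
  }
  return parts.join(" ");
}
```

Writing the expectations down like this makes them testable: a product either discloses that it is automated in its first message, or it doesn’t.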

Strong but incomplete understanding of how technology works

The young people in our user research sessions had technical intuition but didn’t use it to look beyond the day-to-day use of their phones, apps, TVs and other digital tools (Alix Trot, 2018).

Because young people’s technology expectations are generally fulfilled, they’re reaching a place where their inherent trust in technology is growing. That includes emerging technology. Our user research participants knew the chatbot would respond automatically without human input and they trusted it would respond somewhat coherently:

      “I could explain Alexa superficially... that’s fine as I could always go and find out more if I wanted” - (User research participant)

We heard this in our first user research session, and people shared similar thoughts in the second. But when talking about technology that affected things like their health, finances, security or employment, they wanted a deeper understanding.


User research in the IF offices. IF: CC-BY

Thinking about this, we distilled what we found in the user research sessions into two personas:

  1. An individual who expects information security from the digital products and services they use. They don’t need to know exactly how a technical system works, just that it does.

  2. An individual who expects both privacy and information security from the digital products and services they use. These people look to dig a little deeper, beyond the surface, to understand if their expectations are being met.

This second group highlighted the self-awareness some young people showed in the user research. The difference between these two personas helped inform our research question: how might we show the inner workings of an emerging technical system (a mental health chatbot that is automated) to make its actions explainable?

Different definitions of objectivity

Many young people wanted to interact with MoodJar because it was a ‘bot’. They trusted that they would get a response based on what they said and nothing more. This was one of the key reasons why the young people appreciated the interaction with MoodJar, valuing this method of expression over talking to friends in such circumstances. I believe the application of this insight could be extended further.

There is a difference between what technology professionals mean by the term ‘bias’ and what the general public, including young people, mean by it.

To a technology professional, ‘bias’ in regard to AI means:

       When the data used to teach a machine learning system reflects the values of humans involved in that data collection, selection, or use. (AI Cheatsheet, 2018)

But for some of the young people in our research, ‘bias’ meant something completely different. What they described as ‘non-biased’, we would describe as objectivity:

       “I know its a bot... with a non-biased opinion. I rather it not be personal then I would have to talk to it like a friend, then why wouldn’t I go speak to a friend” - (User research participant)

There is something in objectivity which young people appreciate. Is objectivity a perceived feature of automated decisions to a wider audience? Or is this a sentiment only shared by young people?

Are young people the first group outside of designers, researchers and technologists to be widespread advocates of algorithmic accountability?

Currently we don’t know, but these are areas we look to explore in future work.

Chatbots are tools, not friends

Young people did not identify chatbots as friends. Their method of interaction with many chat and voice-bots, such as Siri or Alexa, was mainly command-driven:

      “I just say ‘Weather!’, I don’t say ‘Please tell me the weather today’, because I know ‘Weather!’ is a command word” - (User research participant)

Young people understand that AI-driven chatbots are not human. When presented with MoodJar, a chatbot that reinforced that understanding, they appreciated the chatbot for reasons like:

      “It’s clear what it does”

      “Evident its an app, you know you are not talking with a person”

      “Has a ‘detached’ element which some people may like (objectivity)”

      (Concept testing session)

Young people’s trust in the types of decision AI makes is dependent on who made the app. Young people use different apps for different things:

      “I wouldn’t talk to Alexa about my mental health”

      “Wysa said it had qualified psychologists who helps make the responses. I thought that was good”

      (User research session)

As we now have a sense of where young people’s boundaries might be, we have an opportunity to try and design ethical AI products and services which meet a young person’s expectations (Weyenberg, 2016). Things like terms and conditions fell pretty far outside some young people’s understandings of what was fair and ethical:

      “I think terms and conditions are unfair, I don’t even know what they say” - (User research participant)

Designing to expose the seams

(Weiser, 1994)

Taking what we learned in user research as a baseline, we designed MoodJar to see if we could improve young people’s understanding of automated decisions, at least in certain contexts.

As well as using the insights from this research to design a provotype which simulated conversation, we focused keenly on the user interface (UI) of the chatbot. We wanted to see if it could assist in making automated decisions more explainable, and subsequently understandable.

In our UI designs, we worked to show young people how the information they shared with the chatbot contributed to decisions made about them.


Using text wrapping and underlining to acknowledge key points of data entry is an example of ‘exposing the seams’ and showing the inner workings of an automated decision.

This visualisation was successful in communicating the information MoodJar uses to make decisions:

      ‘...it picks up on keywords for suggestions’ - (User research participant)
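
To make the mechanic concrete, here is a minimal sketch of the kind of logic this pattern exposes: a naive keyword matcher that returns both a suggestion and the exact words it matched on, so the interface can underline them. The rules, suggestions and names are illustrative assumptions, not MoodJar’s actual implementation.

```typescript
// Illustrative sketch only: a naive keyword matcher that returns the words it
// matched on, so the UI can underline exactly what an automated decision used.
const suggestionRules: { keywords: string[]; suggestion: string }[] = [
  { keywords: ["stressed", "overwhelmed"], suggestion: "Would a short breathing exercise help?" },
  { keywords: ["tired", "exhausted"], suggestion: "It sounds like some rest might help." },
];

function respond(entry: string): { suggestion: string | null; matched: string[] } {
  const words = entry.toLowerCase().match(/[a-z']+/g) ?? [];
  for (const rule of suggestionRules) {
    const matched = rule.keywords.filter((keyword) => words.includes(keyword));
    if (matched.length > 0) {
      // Returning the matched words is what lets the interface expose the seams.
      return { suggestion: rule.suggestion, matched };
    }
  }
  return { suggestion: null, matched: [] };
}

// respond("I feel really stressed about exams")
// -> { suggestion: "Would a short breathing exercise help?", matched: ["stressed"] }
```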


This second pair of UI interactions is based on user research insights. We wanted to design something that is transparent about the data it collects and shows that data in a useful and meaningful way:

      “I think terms and conditions are unfair, I don’t even know what they say” - (User research participant)

      “I like the way it visualises my information” - (User research participant)
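
In the same spirit, here is a small sketch of what showing collected data in a useful and meaningful way could look like in code: a plain-language summary of what the app is storing, surfaced in the interface rather than buried in terms and conditions. The MoodEntry type and the wording are our assumptions for illustration, not MoodJar’s real data model.

```typescript
// Illustrative sketch only: summarising stored data in plain language,
// so people can see what the chatbot holds about them at a glance.
interface MoodEntry {
  date: string;       // ISO date of the check-in
  mood: number;       // self-reported mood, 1 (low) to 5 (high)
  keywords: string[]; // the words the bot picked up on (see the previous sketch)
}

function describeStoredData(entries: MoodEntry[]): string {
  if (entries.length === 0) {
    return "MoodJar has no check-ins stored about you.";
  }
  const keywords = new Set(entries.flatMap((entry) => entry.keywords));
  return (
    `MoodJar is storing ${entries.length} check-ins. ` +
    `Words it has picked up on: ${[...keywords].join(", ")}.`
  );
}
```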

We have started to learn how we can improve a young person’s understanding of automated decisions through design. These UI patterns may be one part of many changes we can make. Though there are areas for improvement, young people grasped enough of an understanding of MoodJar’s actions to accurately describe how it worked.

We’re cataloguing these interactions for further use. In the right use cases, we can use these UI designs to make other applications which make automated decisions more explainable.

Thoughts about the sprint

What we learned in this project speaks to the relationship young people want to have with systems that use AI. Yes, it’s early research, and we need to keep testing these things, but there are some really interesting results. The main one is that young people want certain boundaries respected if they are going to continue to positively engage with these technologies.

Young people believe and trust that AI is objective. They believe AI should not be making decisions that will have a big impact on their lives (though, increasingly, it is), and they don’t want AI to try to develop a humanlike relationship with them.

With these lessons in mind, there is an opportunity to create new ways of designing that inform fairer and more explainable AI products and services. If we pursue this work further, we can empower young people to navigate AI powered products and services more confidently in the future, by designing in ways that matter to them.


Thanks to Harry Trimble, Nat Buckley and Cath Richardson for their work on this project. Blog post edited by Ella Fitzsimmons, and proofread by Grace Annan-Callcott and Jess Holland.

IF is a technology studio specialising in the ethical and practical uses of data.

Comuzi is a design and innovation studio. We render emergent technologies into next generation products and services for positive human interaction.