From auto-complete in Google search to predictive emoji in SwiftKey, AI-generated, highly personalised interfaces are now part of services that millions of people use every day.

The interesting consequence of this, from a design point of view, is that no one working at Google or SwiftKey knows exactly what each person will see.

As AI becomes responsible for more of the design of interfaces and interactions, it introduces new challenges for designers.

From predictable, controlled interfaces to infinite possibilities

Designing for AI-generated interfaces is very different to how we have traditionally designed user interfaces. As designers, we are used to designing for a fixed set of circumstances: if this, then that. AI-generated interfaces effectively create an infinite set of interfaces. No two people might see the same thing.

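To make the contrast concrete, here is a minimal, hypothetical sketch in TypeScript (it is not how Google or SwiftKey actually work): a fixed rule gives every person the same suggestions, while a personalised model ranks candidates, so what each person sees depends on the model and their own behaviour.

```typescript
// Traditional approach: a fixed rule that every person gets.
function fixedSuggestions(lastMessage: string): string[] {
  if (lastMessage.endsWith("?")) {
    return ["Yes", "No", "Maybe"];
  }
  return ["👍", "Thanks!"];
}

// AI-generated approach: a model scores candidate replies per person,
// so no two people are guaranteed to see the same interface.
interface SuggestionModel {
  score(userId: string, lastMessage: string, candidate: string): number;
}

function generatedSuggestions(
  model: SuggestionModel,
  userId: string,
  lastMessage: string,
  candidates: string[]
): string[] {
  return candidates
    .map((candidate) => ({
      candidate,
      score: model.score(userId, lastMessage, candidate),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map((ranked) => ranked.candidate);
}
```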

As Chris Noessel, Head of Design for IBM’s travel and transportation industry, says, this might change designers’ relationship to creating user interfaces. Designers will need to think about a visual language or framework for the AI to express a range of possibilities.
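
One way to read that in code: rather than designing a single screen, the designer defines the vocabulary the AI is allowed to compose from. The component names and rules below are invented purely for illustration.

```typescript
// A hypothetical design framework: the AI may only compose these elements,
// using values drawn from designer-approved options.
type Tone = "neutral" | "celebratory" | "cautionary";

type ScreenElement =
  | { kind: "card"; title: string; tone: Tone }
  | { kind: "suggestionChips"; options: string[] }
  | { kind: "notification"; body: string; dismissible: boolean };

interface GeneratedScreen {
  elements: ScreenElement[];
}

// The framework, not the AI, decides what counts as a valid screen.
function isValidScreen(screen: GeneratedScreen): boolean {
  const withinBounds =
    screen.elements.length > 0 && screen.elements.length <= 5;
  const chipsWithinLimit = screen.elements.every(
    (el) => el.kind !== "suggestionChips" || el.options.length <= 3
  );
  return withinBounds && chipsWithinLimit;
}
```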

Within that framework, designers will need to consider what happens when things go wrong. Caroline Sinders, a machine learning designer, user researcher and digital anthropologist, thinks it’s the responsibility of designers to protect people from the potentially harmful effects of unregulated AI-generated content.

Caroline says: “The product you are building uses a specific kind of algorithm and how that algorithm responds to a specific data set is a design effect – whether or not you intend it, and whether or not you know what the outcome will be.” How do you protect people from mistakes made by AI when you’re unable to know what those mistakes might be? Caroline believes one answer to this lies in making algorithmic decisions transparent to people.
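
As a hedged sketch of what that could mean in practice, every algorithmically chosen element could carry a plain-language explanation that the interface can surface on request. The shape and field names here are invented.

```typescript
// Hypothetical: pair each generated element with the evidence behind it,
// so the interface can answer "why am I seeing this?"
interface Explained<T> {
  value: T;
  reason: string;      // plain-language explanation shown to the person
  signals: string[];   // which inputs influenced the decision
  confidence: number;  // 0 to 1, so low-confidence output can be flagged
}

// An invented example of an explained, AI-suggested reply.
const suggestedReply: Explained<string> = {
  value: "Running late, be there soon",
  reason: "Suggested because you often send a similar reply at this time of day",
  signals: ["time of day", "recent replies"],
  confidence: 0.62,
};
```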

Transparency is needed for trust

Genuinely trusted AI will require that people understand why things work the way they do, and that services help them manage and recover from mistakes.

Karrie Karahalios, from the University of Illinois at Urbana-Champaign, has been researching how making algorithmic decisions in social media feeds more transparent engenders trust. One of the themes that emerged from the research was that people need to be able to revert or tune the outcome of an algorithm.

Snapchat’s latest update includes an algorithmically generated feed that “will mold itself to what each person watches most, like Netflix”. Many people rejected the new design, saying the “algorithm doesn’t know [me] as well” and that the service was much harder to use. Perhaps if people were able to understand more about the decisions the algorithm was making, and feed back into its learning, trust in the service would improve.

If an AI-powered service has got stuck or made a mistake, people must be able to gracefully degrade it, remove any autonomy the algorithm has, or revert to an earlier version without causing a systemic failure.

John Zimmerman, from the Human-Computer Interaction Institute at Carnegie Mellon University in Pittsburgh, discusses this in his talk at the 2017 PAIR symposium. He emphasises the need for a universal ‘back button’, giving people options for recourse when errors inevitably arise.
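
A minimal sketch of that idea, with invented names: the service keeps earlier states and a non-personalised fallback, so a person can always step back or switch the algorithm’s autonomy off.

```typescript
// Hypothetical "back button" for an algorithmic feed: the person can
// reduce the algorithm's autonomy or revert to an earlier, known state.
type FeedMode = "personalised" | "chronological";

interface FeedState {
  mode: FeedMode;
  itemIds: string[];
}

class RecoverableFeed {
  private history: FeedState[] = [];

  constructor(private current: FeedState) {}

  // Apply an algorithmically generated update, keeping the old state around.
  applyUpdate(next: FeedState): void {
    this.history.push(this.current);
    this.current = next;
  }

  // The universal "back button": revert to the previous state.
  goBack(): FeedState {
    const previous = this.history.pop();
    if (previous) this.current = previous;
    return this.current;
  }

  // Graceful degradation: strip the algorithm's autonomy entirely.
  degradeToChronological(): FeedState {
    this.current = { ...this.current, mode: "chronological" };
    return this.current;
  }
}
```

The specifics matter less than the principle: recovery and reduced autonomy are designed in from the start, not bolted on after a failure.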

Design for transparency

We’re only beginning to understand the challenges and opportunities that come from designing AI-generated interfaces.

Our relationship with designing interfaces will shift from designing elements for fixed circumstances to creating visual languages and frameworks with infinite possibilities. As we won’t be able to know exactly what people will see, we need to design transparency into these frameworks to make sure people are able to interrogate and challenge what the AI generates.

At IF, we’ll be exploring and developing new patterns for transparency in AI over the next few months, so check the blog for updates.