Discovering how to explain automated decisions with LSE
People have a pressing need to understand how automated systems, embedded at the core of many businesses, make important decisions about them. Right now this is difficult: there are few clear, effective ways of communicating how these systems produce decisions without human involvement, and many systems are not built with explainability in mind.
This week, we’re starting a research collaboration with the Data and Society program at the London School of Economics’ Department of Media and Communications, to build a design framework that digital services can use to help people understand how complex, cutting-edge algorithmic systems make decisions within them.
What we'll explore
Over the next ten months, our research will look at challenges in algorithmic transparency and accountability across socio-technical, design, and policy realms. It will advance efforts to make systems fairer, more transparent, and more accountable.
We’ll explore how companies balance commercial interests with the societal and legal need to open up how automated decisions are made, the “folk stories” that people create when interacting with automated systems, and how the General Data Protection Regulation and new rights related to automated decision-making are likely to work in practice.
We’ll be developing prototypes alongside Flock, an on-demand drone insurance company, to test our thinking within the demands of a real service.