Algorithmic accountability

Birmingham Law School research project

Over the past three decades, advances in the power, speed, scale and sophistication of digital technologies have radically transformed almost every sphere of modern life.

The internet has evolved from its humble beginnings into a global data infrastructure that makes possible myriad modern conveniences which those of us living in advanced industrialised economies could not contemplate living without.

Although these digital interfaces are designed to be as unobtrusive and ‘seamless’ as possible, beneath their simple and elegant veneer lies a highly complex, global data ecosystem supported and powered by sophisticated socio-technical systems, largely developed and designed by extraordinarily rich and powerful global tech firms (the so-called ‘Digital Titans’). The digital services provided via these systems rely critically on computational algorithms that operate on the basis of mathematical logic. The resulting massive data-sets that are now continuously collected from the traces of our online interactions are parsed by machine learning algorithms to infer our tastes, preferences, traits, behaviours and vulnerabilities, making it possible to deliver predictively ‘personalised’ services to us as individuals, yet on a planetary scale.

Algorithms in society

If we consider the scale and reach of these processes, the extraordinary power of the algorithm and its role in mediating our social experience and existence is drawn into sharper focus. Not merely our time and attention, but the range of choices available to each individual is increasingly channelled through algorithmic systems which sort, filter, search, prioritise, recommend, nudge or otherwise engage our senses. The role of algorithms in society has therefore risen to prominence, both within public discussion and as an object of academic inquiry. Given the ever-expanding ‘internet of everything’, it is increasingly difficult to identify any sphere of our economic, industrial, social, political, home or family life that remains untouched by the power of data-driven algorithmic systems, with profound and often deeply troubling implications.

Although an algorithm is merely a set of steps for solving a problem, the term is now commonly used to refer to computational algorithms encoded in software programs. To those of us without highly technical expertise, algorithms are opaque, inscrutable ‘black boxes’: beyond our capacity to comprehend due to the sophisticated mathematical processes upon which they rely and, in many cases, their protection from disclosure by intellectual property law. Yet the power of algorithmic systems across contemporary industrialised societies is fuelling concerns about the need for ‘algorithmic accountability’ given their capacity to inform and increasingly to automate decision-making power, often with highly consequential effects. Debates about the way in which algorithms mediate and regulate our social world touch upon some of the most pressing dilemmas and conflicts of digitalized, late modern societies: the future of democracy, individual freedom and the rule of law, widening social and economic inequality, ecological sustainability and the foundations of trust in contemporary societies. Understanding and critically reflecting upon the constitutional, political, social, economic and moral implications of the use of algorithmic systems is therefore essential: both as a prerequisite for understanding the choices and potential directions available to modern societies in seeking to shape and respond to technological and social change, and to ensure that we can establish and maintain legitimate and effective mechanisms for securing algorithmic accountability.

Research overview and impact

Karen Yeung’s experience and insights arising from her research on the governance of emerging technologies generally, and AI systems in particular, have fed into a number of technology policy initiatives at the very highest level, both nationally and internationally. As an independent expert appointed to the Council of Europe’s Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT), she was appointed Rapporteur to undertake A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. She is also a member and rapporteur of the EU High-Level Expert Group on Artificial Intelligence, and one of the two UK members of this group of 54 European experts advising the European Commission on AI-related mid- to long-term challenges and opportunities, informing policy and legislative review in the development of a next-generation digital strategy. In this capacity, she has played a significant role in authoring the EU’s Ethics Guidelines for Trustworthy AI and Policy and Investment Recommendations for Trustworthy AI. She also acts as Ethics Expert for the European Research Council, and is an external expert reviewer for the Digital Freedom Fund.

At the national level, she currently serves as a member of the Crime and Justice Advisory Panel of the UK’s Centre for Data Ethics and Innovation. Prior to that, she acted as the Digital Medicine Panel’s Ethics Expert for the ‘Topol Review’, which advised the Secretary of State for Health on how technological and other developments (including in genomics, pharmaceutical advances, artificial intelligence, digital and robotics) are likely to change the roles and functions of clinical staff in all professions over the next two decades, and which published its report, Preparing the healthcare workforce to deliver the digital future, in February 2019. Before that, she was a member of the Royal Society and British Academy working group that produced the joint report Data management and use: Governance in the 21st century (2017), which led to the establishment of the UK’s Centre for Data Ethics and Innovation in 2018.

Projects

Blockchain For Healthcare
A Wellcome Trust funded project, Blockchain for Healthcare seeks to establish the real value this technology can offer the healthcare sector.

Research Associates: Immaculate Motsi-Omoijiade and Alexander A. Kharlamov

FATAL4Justice?
Funded by the Volkswagen Stiftung, FATAL4Justice? seeks to deepen and enrich our understanding of the influence of automated decision-making (ADM) systems on future society by critically investigating ADM systems in the criminal justice system from multiple, intersecting disciplinary perspectives.

Research Associate: Adam Harkens

Media Content Personalisation
The BBC has a keen interest in utilising AI and data-driven services. This AHRC-funded project, conducted in partnership with the BBC, will generate research insights to help the BBC understand this complex area.

Research Associate: James McLaren

The Responsible Governance of Computer Vision

PhD researcher: Emma Rengers

Research team

  • Professor Karen Yeung

    Interdisciplinary Professorial Fellow in Law, Ethics and Informatics

Professor Yeung’s research examines the constitutional, democratic, and ethical implications of computational systems, including artificial intelligence and blockchain technologies, with a particular emphasis on the need to ensure that these systems are designed, tested and implemented in ways that respect human rights, democratic freedom and the rule of law. She has particular expertise in the governance of emerging technologies in constitutional democracies.


Dr Adam Harkens

Research Associate

Adam is a Research Associate specialising in algorithmic decision-making in the criminal justice system. He is currently working on the Volkswagen Stiftung-funded FATAL4Justice? project.


Immaculate Motsi-Omoijiade

Research Associate

Immaculate Motsi-Omoijiade is a researcher in Emerging and Industry 4.0 Technology, specialising in Blockchain and Distributed Ledger Technology (DLT).

Academic outputs