The Rise of Digital Authoritarianism?

Authoritarian and Democratic Politics in the Age of Artificial Intelligence

Within just a couple of years, artificial intelligence (AI) has evolved from an object of scientific curiosity to a salient political issue. Driven by the promise of deep learning algorithms, which infer patterns from massive datasets to make predictions, AI systems have become a pervasive feature of public and private life. In democratic and authoritarian states alike, governments have sought to leverage the efficiency of AI systems to their advantage. Bureaucracies now routinely delegate life-altering decisions to AI systems in domains ranging from criminal justice and surveillance to border control, education, health, and the allocation of social benefits.

Unsurprisingly, the rapid uptake of AI systems worldwide has sparked a lively debate about the possible implications for democratic processes and norms. Scholars and policymakers have focused in particular on the dangers of so-called digital authoritarianism, i.e., the use of digital information technology to surveil, repress, and manipulate domestic and foreign publics. Most prominently, they point to China as a cautionary tale about the repressive potential of an AI-powered surveillance state, as it integrates practices such as live biometric identification, online content control, and censorship to monitor and discipline its population. Similarly, scholars increasingly highlight the transnational dimension of digital authoritarianism, for instance, China’s export of surveillance technology to other countries or Russia’s targeted misinformation campaigns to disrupt democratic elections abroad.

Yet, AI’s potential to undermine democratic norms and processes is not necessarily limited to authoritarian or non-democratic regimes. The widespread adoption of possibly biased, unsafe, or opaque AI systems by democratic governments has produced its own challenges. As these governments automate decisions over social benefits, parole, or asylum procedures, they challenge fundamental rights such as the rights to non-discrimination, a fair trial, privacy, and freedom of movement. Furthermore, in the absence of robust regulation, corporations and other private actors may produce AI systems that undermine fundamental democratic norms without being held accountable.

In this special issue, we seek to shift the focus of the debate from potentiality to existing practices. Put differently, we want to explore how AI technologies are already being employed and contested in concrete settings and sites. Through case studies and comparative analyses covering both authoritarian and democratic states, as well as states in transition, we want to create a comprehensive, empirically grounded picture of how AI systems affect democratic and authoritarian processes worldwide.

How, and to what end, have governments across the world deployed AI systems? What effects has this had on democratic norms, including the protection of human and individual rights? How and by whom have such practices been contested, and what strategies have these actors employed? To what extent do we see transnational coordination between both regimes deploying AI systems (e.g. in border control) and the movements challenging them? What role do AI systems play in authoritarian or democratic transitions? To what extent does contestation over AI and democracy play out at the international level, where we are seeing early attempts at AI governance? Do democratic and authoritarian regimes behave differently at this level of AI politics?

To explore these questions, we invite case studies and comparative analyses tackling:


  • AI and human rights

  • Digital authoritarianism

  • Use of AI systems in border control and migration

  • Predictive policing, surveillance technology, and biometric identification

  • Challenges to democratic norms from automated decision-making in the public sector (e.g. education, chatbots, allocation of benefits)

  • Online content control and social media

  • Export of AI surveillance technology and misinformation

  • AI and (transnational) social movements

  • Questions of accountability and democratic control in the context of private actors

  • International discussions about AI governance, ethics, and human rights

If you are interested in participating, please send us an abstract of your proposed paper (~250 words) by Sunday, December 12, 2021.

Overall, we aim to select a total of 8–10 papers of around 9,000 words each. In preparation for the special issue, we would like to organize a workshop with all authors, likely to take place in June or July 2022.

We look forward to your contributions!

  • Irem Tuncer Ebetürk, WZB Berlin Social Science Center 

  • Jelena Cupać, WZB Berlin Social Science Center

  • Hendrik Schopmans, WZB Berlin Social Science Center