Parliaments must modernise to avoid AI-induced executive dominance

Many governments are enthusiastic about the prospect of adopting Artificial Intelligence (AI), not least for the potential gains in efficiency and capacity this may afford them. Yet the rapid adoption of AI by governments risks even stronger executive dominance over the other branches of government.

AI technologies for predictive analytics, automated decision-making, and natural language processing provide the executive with unprecedented power to analyse and act on vast amounts of information swiftly. Whilst these technologies bring the promise of better evidence-based policymaking, this technological empowerment might have unintended side-effects.

Unless the other branches of government invest in and can access similar emerging technologies, the balance of power is likely to tilt further toward the executive in the next few years, allowing it to operate with greater autonomy and efficiency but with reduced oversight.

A democratic approach to AI

Among other things, AI can streamline administrative processes, enhance law enforcement capabilities, optimise resource allocation, and facilitate public access to information, thereby centralising decision-making within the executive. This centralisation risks undermining the checks and balances provided by legislative and judicial oversight, which could result in a more powerful and potentially less accountable executive.

In its policy brief, A Democratic Approach to Global Artificial Intelligence (AI) Safety, Westminster Foundation for Democracy (WFD) emphasises the importance of transparency and accountability in AI governance. The author Alex Read argues that democratic leaders should focus on building political and societal resilience to AI disruptions, incorporating core democratic values of transparency, accountability, public participation, and inclusivity. To counter illiberal and repressive uses of AI, democracies will need to set a values-based example and demonstrate a coordinated approach.

Without effective parliamentary oversight, for example, the use of AI systems can lead to a lack of transparency and accountability. Complex algorithms can be opaque, making it difficult for parliaments, the judiciary and the public to comprehend decision-making processes. This inherent "black box" nature of AI could enable abuses of power and erode public trust in government institutions.

Parliamentary modernisation

To counter these risks and to keep pace with the executive, it is imperative that parliaments modernise their operations. This includes adopting AI technologies and adjusting their processes accordingly, investing in training for parliamentarians and staff, and developing robust frameworks for the ethical and transparent use of AI, as highlighted in the new WFD Guidelines for AI in Parliaments. The aim of the Guidelines, as Dr. Fitsilis from the Hellenic Parliament explains, is to equip parliaments with the knowledge and tools needed to navigate the complex landscape of AI, while maintaining and even strengthening their democratic character.

Dr. Alberto Mencarelli of the Italian Chamber of Deputies argues that, “parliaments need to ensure that the adoption of AI is guided by stringent policies, ethical testing, and comprehensive training. A failure to effectively integrate AI could result in parliaments falling behind in the ongoing technological revolution, potentially compromising the resilience of parliamentary ecosystems.”

A few parliaments have started to introduce AI, as documented by the Inter-Parliamentary Union (IPU). The lived experience in parliaments points to the use of AI for transcribing and managing records of debates, assisting in drafting amendments, analysing large volumes of text to identify key themes and insights, supporting public engagement by analysing public submissions, powering chatbots and other user support, and automating tasks like schedule management.

Parliaments can benefit from sharing experiences and best practices regarding AI implementation, such as through the IPU's Centre for Innovation in Parliament. The new Global Community of Practice on Post-Legislative Scrutiny helps facilitate dialogue on applying AI in legislative scrutiny processes, as argued by Dr Marci Harris from POPVOX Foundation.

Developing countries

The rapid development and implementation of AI technologies pose increased risks of marginalising parliaments in developing countries and the Global South. Some countries may lack the technological infrastructure, expertise, and financial resources to keep pace with AI advancements. Consequently, their legislative bodies might struggle to exercise effective oversight over AI-driven executive actions.

The disparity in AI capabilities can lead to a scenario where developing countries become overly reliant on AI technologies developed in the Global North or by authoritarian regimes. This reliance can undermine national governance structures and exacerbate existing power imbalances. Parliaments in these countries might find themselves sidelined, unable to fully understand or regulate the AI systems that are increasingly influencing governance and the day-to-day lives of citizens.

Authoritarian tendencies

If not properly regulated, AI can facilitate authoritarian tendencies, enabling mass surveillance and social control. Governments could be tempted to use AI to suppress dissent, monitor political opponents, manipulate public opinion, and spread disinformation, undermining democratic processes and human rights.

China’s AI-driven surveillance systems and their export to other countries could undermine oversight responsibilities of national parliaments or the judiciary. In recent years, Chinese companies have provided surveillance systems to several countries, including Ecuador, Zimbabwe, and Serbia. This has helped government agencies to monitor opponents, activists and journalists with little oversight from their national parliaments.

China’s practice of establishing overseas police stations in other countries can be considered a security threat as well as a threat to democratic oversight. Equipping Chinese police presences abroad with AI-driven policing technologies could further marginalise legislative oversight and contribute to an erosion of civil liberties. In several countries, Chinese companies have been involved in developing "smart cities" that include extensive surveillance infrastructure. These projects often lack transparency and can bypass parliamentary scrutiny, increasing the risk of authoritarian practices.

Accountability

As AI's role in government expands rapidly, it poses significant threats to the balance of power in democratic institutions. Parliaments must modernise and enhance their oversight capabilities to ensure AI deployment is transparent, ethical, and accountable. By doing so, they can safeguard democratic principles, maintain necessary checks and balances, and ensure AI serves the public good.

Franklin De Vrieze, Head of Practice Accountability, Westminster Foundation for Democracy (WFD)