How AI Is and Is Not Changing Public Administration in the EU


Artificial intelligence (AI) is increasingly shaping the way public administrations operate across Europe: from streamlining administrative processes to improving evidence-based policymaking, AI promises greater efficiencies and better services for citizens. Yet, its adoption is neither simple nor universal.

The adoption of AI in public administration is shaped by regulatory frameworks, technical and ethical challenges and by what AI can realistically achieve. Understanding these dimensions is key to assessing both the transformative potential and the boundaries of AI in government.

Navigating the European AI Rulebook

Over the past decade, the European Union (EU) has progressively built one of the world’s most structured AI governance frameworks. From early strategic communications in 2018 to the AI Act adopted in 2024, the EU has moved from promoting AI uptake to defining binding rules for its safe and accountable use.

For public administrations in Europe today, this means operating within a harmonized legal environment that clearly classifies AI systems by risk level and imposes specific obligations — particularly for high-risk applications. The regulatory question is no longer whether AI should be governed, but how administrations can implement AI responsibly within this framework.

The AI Act classifies AI applications according to four levels of risk:

  1. Minimal or no-risk: technologies that can be freely used

  2. Limited risk: technologies that must meet transparency requirements, ensuring users understand when they are interacting with AI

  3. High risk: technologies affecting critical sectors such as healthcare, infrastructure or law enforcement, which must comply with stricter safety and monitoring standards

  4. Unacceptable risk: technologies that are prohibited, such as AI systems manipulating vulnerable groups or enabling social scoring.
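To make the tiered logic concrete, the four levels can be sketched as a simple mapping from risk tier to headline obligation. This is an illustrative simplification, not a legal summary: the names `RiskLevel` and `obligation_for` are hypothetical, and the AI Act's actual requirements per tier are far more detailed.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the AI Act (illustrative labels)."""
    MINIMAL = "minimal or no risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Headline obligation per tier -- a gross simplification of the Act's
# actual requirements, for illustration only.
OBLIGATIONS = {
    RiskLevel.MINIMAL: "free use",
    RiskLevel.LIMITED: "transparency: users must know they are interacting with AI",
    RiskLevel.HIGH: "stricter safety and monitoring standards",
    RiskLevel.UNACCEPTABLE: "prohibited",
}

def obligation_for(level: RiskLevel) -> str:
    """Return the headline obligation for a given risk tier."""
    return OBLIGATIONS[level]
```

The point of the tiered design is that obligations scale with potential harm: an administration deploying a chatbot faces transparency duties, while one deploying AI in law enforcement faces the full high-risk compliance regime.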

Beyond regulation, the EU is also investing in the practical conditions that enable AI adoption. Through its broader AI Action Plan, the Commission is not only setting rules, but also mobilizing funding, infrastructure and collaborative ecosystems to support implementation.

For public administrations, this means access to European funding programs, AI-ready digital infrastructure, skills development initiatives and public-private networks designed to accelerate adoption and reduce implementation risks. Rather than navigating AI transformation alone, administrations can leverage EU-level platforms, competence centers and sector-specific alliances to share best practices, access technical expertise and co-develop solutions.

Financial support from the EU has been significant. Through the Recovery and Resilience Facility, which funded national recovery plans, Member States received an additional €134 million to support digital development. Beyond this, the EU, via programs such as Horizon Europe and Digital Europe, plans to invest approximately €1 billion annually in AI, providing a substantial boost for research, innovation and digital transformation across the continent.

While the EU provides an overarching regulatory and strategic framework for AI, the approaches of individual Member States vary depending on their strategic priorities and budget.

Yet, national AI strategies show common trends: most countries are investing in specialized training to build the digital skills of their workforce and in National Competence Centers to foster research, innovation and public-private partnerships in AI. Despite these efforts, differences in infrastructure, expertise and resources continue to slow the uniform adoption of AI.

Beyond Regulations: Managing the Risks of AI in Public Administration

While European and national regulations provide a legal and ethical framework for public sector agencies, they are not sufficient on their own to address the challenges posed by AI. These challenges include:

  • Political factors: AI adoption can stall when priorities differ or when leadership commitment is inconsistent, leaving promising initiatives incomplete or poorly supported. Political debates over AI ethics or public acceptance can further delay implementation.

  • Organizational factors: resistance to change within public institutions, unclear roles and fragmented decision-making structures can undermine even the most promising projects. Staff may also fear that AI will reduce their responsibilities, increasing internal pushback.

  • Demand factors: public officials may lack awareness or understanding of AI capabilities, leading to scepticism or low adoption. Without clear knowledge of AI’s benefits, risks and potential applications, officials are less likely to champion or support AI projects, limiting their scope and impact.

  • Technical factors: a shortage of AI expertise, limited technical standards and difficulties integrating new AI systems with legacy IT infrastructure can significantly impede progress. Systems may not communicate effectively, resulting in inefficiencies, errors or incomplete insights. Small administrations may struggle to attract the talent required for complex projects.

  • Infrastructure factors: outdated digital platforms, inadequate data management capabilities and fragmented IT systems further complicate implementation. Reliable, high-quality and interoperable data is the foundation of effective AI, and gaps in infrastructure can prevent AI from delivering accurate and actionable insights.

  • Supply factors: AI solutions tailored to the specific needs of the public sector are often scarce, expensive or require extensive customization. Vendors may prioritize commercial clients over government needs, leaving public administrations dependent on limited or suboptimal options.

  • Financial factors: the high costs associated with developing, deploying and maintaining AI systems can overwhelm smaller administrations, particularly those with limited budgets or competing priorities. Long-term funding for training, infrastructure and system updates is often uncertain, creating additional risk.

  • Legal factors: regulatory uncertainty, complex compliance requirements and slow legislative processes can delay AI projects. Administrations may be unsure how to meet requirements, which can slow procurement and deployment. The evolving nature of AI regulation can also make long-term planning difficult.

  • Ethical factors: concerns about fairness, transparency and accountability can make decision-makers hesitant to adopt AI. Fear of biased algorithms, data misuse or public backlash can prevent organizations from experimenting with innovative solutions, even when these solutions could improve efficiency or service quality.

Alongside these barriers, AI introduces a range of real risks — from intrusive surveillance and privacy erosion to job displacement, misinformation, discrimination and biased policymaking. The stakes are high, and missteps can undermine public trust and the legitimacy of government actions.

How can public administrations unlock the potential of AI while keeping these risks under control?

The answer lies in a multi-layered, integrated approach that combines legal, technical and ethical safeguards into a cohesive strategy:

  • Robust laws and regulations: public administrations should ensure compliance with existing regulations (such as the AI Act), build internal oversight mechanisms, invest in secure system design and embed ethical review processes in procurement and deployment.

  • Security as a foundation: AI systems must be rigorously tested, continuously monitored and built with safeguards to prevent errors, unintended consequences and misuse.

  • Ethical guidelines: principles for responsible AI development provide moral and operational guidance. Though the adoption of ethics frameworks remains challenging due to limited incentives, they are essential for fostering trust and societal acceptance.

By combining these approaches with proactive measures to overcome political, organizational, technical and financial hurdles, public administrations can transform AI from a risky experiment into a powerful and reliable tool.

What AI Can – And Cannot – Do in Public Administration

This leads to a crucial question: where does AI truly add value in public administration – and where should humans remain in charge?

First, despite its growing capabilities, AI has clear and important limits, especially in contexts that require human judgement, trust and social interaction. Indeed, AI does not possess human and relational skills: it cannot build meaningful relationships, negotiate, persuade, feel empathy or understand emotions. In public administration, where interactions with citizens often involve vulnerable people or sensitive decisions, these qualities are irreplaceable.

AI also lacks contextual reasoning. Humans can weigh multiple dimensions at once: political priorities, social impact, institutional culture, ethical considerations and long-term consequences. AI, by contrast, can act only on the data and parameters it is given. This is why, without carefully designed inputs, it cannot truly “understand” complex real-world situations.

Finally, AI cannot replicate trust and reputation. For many citizens and organizations, the reliability of a long-established public institution still far outweighs that of an algorithm. Legitimacy, accountability and credibility remain deeply human assets.

For these reasons, AI should not be seen as a substitute for public officials, but as a tool to enhance their capabilities. Human expertise remains central – AI works best when it supports, rather than replaces, professional judgement.

However, when used appropriately, AI can deliver significant value to public administration, especially in data-intensive, repetitive and time-critical tasks, for instance:

  • Information retrieval: searching and organizing large volumes of data or documents

  • Data analysis: identifying patterns, trends and anomalies difficult for humans to detect

  • Document drafting: supporting the preparation of reports, policy briefs or administrative texts

  • Regulatory compliance: checking whether procedures, decisions or documents comply with legal requirements

  • Idea generation: offering alternative policy options, service design or process improvements

  • Scenario simulation: modelling the impact of policy choices before they are implemented

  • Cost-benefit analysis: supporting evidence-based decisions by comparing alternatives

  • Language translation: enabling communication across multilingual administrations or citizens

  • 24/7 assistance: powering chatbots and virtual assistants that provide continuous support to citizens and staff

  • Ethical and social impact analysis: helping assess potential risks, biases and societal effects of policies or technologies.
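To illustrate the simplest of these use cases, the 24/7 assistance item could begin as a rule-based FAQ assistant. This is a hypothetical sketch: the `FAQ` entries and `answer` function are invented for illustration, and real deployments would use far more capable language models with human escalation paths.

```python
# Hypothetical knowledge base for a citizen-facing FAQ assistant.
FAQ = {
    "opening hours": "Our offices are open Monday to Friday, 9:00-17:00.",
    "passport": "Passport renewals can be started online via the citizen portal.",
}

def answer(question: str) -> str:
    """Return a canned reply if a known keyword appears in the question,
    otherwise hand off to a human official."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I could not find an answer; a public official will follow up."
```

Note the fallback: even in this trivial sketch, the system routes unanswered questions to a human, which mirrors the human-in-charge principle the article argues for.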

In these areas, AI acts as a force multiplier: it increases speed, consistency and analytical power, allowing public officials to focus on higher-value tasks such as decision-making, stakeholder engagement and strategic planning.

When AI is deployed in areas where it excels, and humans remain in charge of judgement, accountability and relationships, public administrations can achieve the best of both worlds: technological efficiency and human-centred governance.

In this sense, AI is not the future replacement of public servants – it is the next generation of tools that enables them to govern better, faster and more fairly.

ISG helps public administrations in the EU navigate the rapidly changing AI market and consider solutions that are right for them. Contact us to find out how we can get started.


About the author

Federica Contissa

As a Senior Consultant at ISG, Federica specializes in Public-Private Partnerships (PPPs) and corporate reorganization projects, supporting both public and private sector clients. Her work includes assisting organizations in securing local, national and European incentives, as well as delivering academic and professional training in public procurement and PPPs.