Governance of artificial intelligence (AI) has shifted from research labs to global policymaking. In barely a decade, early discussions on fairness and accountability have evolved into a complex network of national strategies, regulatory frameworks, and international agreements shaping how societies govern AI.
This article looks at how AI governance has evolved and what remains unfinished. It traces how ideas about fairness and accountability became the foundations of global governance, how political ambition and public concern shaped new institutions, and how the next phase must move beyond fragmented responses toward legitimate, anticipatory, and globally coherent oversight.
From Ethics to Governance
The roots of AI governance stretch back to early debates in computer ethics and robotics, which in the early 2000s raised questions of responsibility, transparency, and the moral status of intelligent systems. Conferences on “roboethics” and EU projects such as ETHICBOTS and ETICA set the stage. In the United States, military reviews of autonomous systems foreshadowed today’s debates on lethal autonomous weapons.
By the mid-2010s, scholars and practitioners launched initiatives like the Workshop on Fairness, Accountability, and Transparency in Machine Learning, the Stanford AI100 project, and the Institute of Electrical and Electronics Engineers' consultation on Ethically Aligned Design. These efforts shaped the vocabulary of fairness, accountability, and transparency that now defines AI governance.
Governments soon followed. China's 2017 AI plan framed AI as a strategic priority. France's Villani report (2018) and Canada's major investments signaled the rise of national strategies. Civil society declarations, such as the Montréal and Toronto declarations, pushed human rights to the center. By 2020, the Global Partnership on AI was launched to bridge research and policy. At the global level, the OECD AI Principles (2019) and UNESCO's Recommendation on AI Ethics (2021) consolidated soft-law guidelines. Singapore complemented these with its Model AI Governance Framework and AI Verify toolkit, offering practical tools for companies and regulators.
Binding Rules and Global Action
After 2021, the pace quickened and principles gave way to regulation. The EU’s AI Act (2024) became the first horizontal, risk-based AI law, enforced by a new European AI Office. The Council of Europe’s Framework Convention on AI (2024) was the first binding international treaty linking AI to human rights, democracy, and the rule of law. Opened for signature in September 2024, it has now been signed by 16 countries—including Canada, Japan, Switzerland, Ukraine, the United Kingdom, and the United States—and the EU. Following the practice of multi-stakeholder engagement, organizations from civil society, academia, and industry were actively involved in its development. The convention allows parties two options for applying its principles to the private sector: they may be directly bound by its provisions or adopt equivalent measures consistent with their human-rights and rule-of-law obligations. Related G7 and G20 initiatives promote human-centric, interoperable AI governance, reinforcing the move from voluntary ethics to coordinated global oversight.
Technical bodies have taken major steps to operationalize AI principles. The U.S. National Institute of Standards and Technology released its AI Risk Management Framework (2023), offering guidance on trustworthy design and deployment, while the International Organization for Standardization published the ISO/IEC 42001 standard (2023), the first certifiable management system for AI governance. Together, these tools translate high-level ethics into practical compliance mechanisms for organizations worldwide.
The Rise of Generative AI
Generative AI and the arrival of chatbots available to everyone brought a sharp acceleration in global governance efforts. The Bletchley Declaration (2023) and the Seoul Declaration (2024) launched recurring international summits and a coordinated network of AI Safety Institutes.
In March 2024, the UN General Assembly adopted its first resolution on AI, encouraging states to steer AI toward sustainable development and to bridge digital divides. Following the UN's Governing AI for Humanity report and intergovernmental consultations, member states in August 2025 adopted a resolution creating two new bodies: the Independent International Scientific Panel on AI, composed of 40 experts and tasked with producing annual evidence-based assessments, and the Global Dialogue on AI Governance, a forum for states, industry, civil society, and scientists to deliberate risks, norms, and cooperation.
In 2024, the African Union launched a continental AI strategy grounded in inclusive and developmental priorities. Meanwhile, countries such as Brazil, Canada, and India have advanced national AI bills, U.S. states like Colorado and New York have pioneered local AI regulation, and cities worldwide have issued their own AI charters—together making governance increasingly multilayered and globally interconnected.
Learning From Failure
AI governance has developed not only from foresight but also from failure. Cases such as the COMPAS algorithm in U.S. courts, the childcare benefits scandal in the Netherlands, the A-level grading debacle in the United Kingdom, and the Robodebt scheme in Australia exposed how automated systems can reproduce and amplify social inequalities. Each crisis forced governments to confront the limits of technical optimism and to recognize that algorithmic systems can deepen injustice when deployed without transparency, accountability, or redress. These episodes illustrate that AI governance has often been reactive, emerging after public harm and political outrage. To move forward, policy must become anticipatory: learning from these failures before they recur, embedding continuous oversight, and ensuring that responsibility is shared by developers, deployers, and decision-makers alike.
Challenges Ahead
Despite impressive progress, AI governance remains fragmented, contested, and fragile. Five issues are especially pressing:
• Coherence: We now have dozens of principles, standards, and laws, but they often overlap or collide. Without a responsibility-based framework that links ethics, safety, and rights, the result will be confusion and weak enforcement.
• Legitimacy: Rules will fail if people view them as elitist or technocratic. True legitimacy requires that affected communities participate meaningfully, with rights of redress and institutionalized contestability. Governance should empower people, not just regulate systems.
• Anticipation: Most governance has been failure-driven. We need foresight exercises, risk assessments, and scenario planning to become routine tools, not afterthoughts. Learning only from failure is too costly.
• Pluralism: Regional differences are inevitable and even valuable, but unchecked they risk producing fragmentation and compliance burdens. The goal should be interoperability. The principle of “comply once, demonstrate many times” allows diversity while ensuring accountability across borders.
• Broadening safety: Current debates often focus on frontier risks or existential threats. Yet everyday harms—discrimination, misinformation, exploitative labor, administrative injustice—are equally urgent. Safety must be understood not just as preventing catastrophic risks but also as ensuring justice in the daily realities where AI touches lives.

At the moment, geopolitics complicates things. China, Europe, and the United States pursue diverging models, often with competition in mind. This risks fragmentation. Yet alternatives exist: the African Union shows how regional cooperation can align AI with justice and development.

The history of AI governance is still being written. It could become a story of fragmented power, with competing blocs and endless patchwork rules. Or it could evolve into shared responsibility, where diverse actors converge around accountability, justice, and human dignity. Which path we take depends less on the technology than on the political imagination we bring to it.

AI governance will never be solved by technical fixes alone. It requires cooperation across borders, humility across cultures, and, above all, responsibility as the guiding thread. Only then can we ensure that AI serves not just innovation or competition but also the common good.
Virginia Dignum is professor of Responsible Artificial Intelligence and the Director of the AI Policy Lab at Umeå University. She is a member of the UN High-Level Advisory Body on AI. She was a Digital Humanism Fellow at the IWM in 2025.
