
Global AI Governance Updates Summer 2025: 14 Key Developments

  • Writer: Nikita Silaech
  • Aug 19
  • 6 min read

AI governance is entering a period of rapid acceleration in 2025. In just a few months, governments, alliances, and industry bodies have introduced significant new laws, strategies, and codes of practice that will shape the future of artificial intelligence. These developments are not isolated events. They represent a growing global push to balance innovation with oversight, increase transparency, and build trust in AI systems.

From the United States to the European Union, and from BRICS nations to emerging economies, regulatory momentum is gathering pace. This article presents 14 confirmed governance updates from the summer of 2025, outlining the decisions, policies, and commitments that will define the next phase of responsible AI deployment.


AI Governance Updates

The summer of 2025 has seen a wave of decisive actions in AI regulation and policy. Nations and global bodies are moving quickly to formalize governance frameworks, publish technical guidelines, and implement oversight measures. 

The following updates highlight the most significant confirmed developments shaping AI governance this season.


Senate Rejects 10-Year Moratorium on State AI Laws

The United States Senate voted down a proposed 10-year moratorium that would have blocked states from enacting their own AI regulations. The measure was intended to maintain a single federal standard and prevent a patchwork of state-level rules that could complicate compliance for AI developers and businesses. 

Lawmakers opposing the moratorium argued that state governments must retain flexibility to address local risks, experiment with policy approaches, and respond quickly to emerging AI challenges. The rejection means states like California, New York, and Illinois can continue advancing their own AI bills, potentially leading to divergent rules across the country. This outcome signals that U.S. AI governance will remain a mix of federal and state oversight, increasing the importance of multi-jurisdiction compliance strategies for enterprises.


BRICS Leaders Sign AI Governance Declaration

Leaders of the BRICS nations, a bloc founded by Brazil, Russia, India, China, and South Africa and since expanded to include additional members, signed a joint declaration outlining principles for responsible AI governance. The agreement emphasizes sovereignty in AI policymaking, the need for equitable access to AI technologies, and cooperation in developing safety standards.

It also calls for increased collaboration on AI research, capacity building, and the prevention of misuse, particularly in critical infrastructure and defense contexts. While the declaration stops short of setting binding regulations, it positions BRICS as a collective voice in shaping global AI norms and offers a counterbalance to governance frameworks emerging from the U.S. and European Union. 

The move underscores the geopolitical dimension of AI policy and the growing importance of regional alliances in setting the rules for emerging technologies.


EU Confirms No Delay to AI Act

The European Union confirmed that the implementation timeline for the AI Act will remain unchanged, despite industry lobbying for an extension. The legislation, which classifies AI systems by risk level and sets strict requirements for high-risk applications, is being phased in, with obligations for general-purpose AI models applying from August 2025 and most high-risk requirements following in 2026. EU officials reaffirmed that early compliance is essential to ensure public trust and safety, noting that companies have already had significant preparation time since the Act’s adoption in 2024.

The decision sends a clear message that the bloc is committed to advancing its regulatory agenda without compromise, making the AI Act one of the most comprehensive and timely governance frameworks in the world. This stance also positions the EU as a global leader in establishing enforceable standards for AI accountability and transparency.


EU AI Office Publishes GPAI Code of Practice

The European Union’s AI Office released the first official Code of Practice for general-purpose AI (GPAI) models under the EU AI Act. The document provides voluntary guidelines for transparency, data governance, and accountability, designed to complement the Act’s binding requirements.

It encourages companies to adopt proactive risk assessments, publish model documentation, and implement user feedback mechanisms. While nonbinding, the code is expected to influence global best practices and serve as a benchmark for organizations preparing for stricter regulations.


New Zealand Publishes National AI Strategy

New Zealand unveiled its first National AI Strategy, outlining a ten-year roadmap to integrate AI safely and ethically into the country’s economy. The strategy focuses on building AI skills, supporting local innovation, and developing governance frameworks that reflect Māori values and principles of social equity. 

It also includes commitments to transparency in public sector AI use and funding for AI research aimed at environmental sustainability. By setting a long-term vision, New Zealand positions itself as a regional leader in responsible AI adoption and governance.


EU AI Office Publishes GPAI Guidelines

The EU AI Office issued new guidelines clarifying how the AI Act’s rules for general-purpose AI (GPAI) models apply, helping organizations align their development and deployment practices with the regulation’s requirements.

The guidelines focus on ethical risk assessment, dataset quality checks, and transparency in model behavior. They are intended as a practical resource for companies operating across jurisdictions, helping bridge gaps between voluntary commitments and binding legal requirements.
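
As a rough illustration of the dataset quality checks the guidelines call for, the short Python sketch below flags too-short records and exact duplicates in a text dataset. The specific checks and the length threshold are assumptions for illustration, not values taken from the guidelines.

```python
# Illustrative sketch of an automated dataset quality check of the kind the
# GPAI guidelines describe. The checks and the length threshold are
# assumptions for illustration, not values prescribed by the guidelines.
def quality_check(records: list[str], min_length: int = 20) -> dict[str, int]:
    """Flag empty or too-short records and exact duplicates in a text dataset."""
    seen: set[str] = set()
    stats = {"kept": 0, "too_short": 0, "duplicates": 0}
    for record in records:
        text = record.strip()
        if len(text) < min_length:
            stats["too_short"] += 1
        elif text in seen:
            stats["duplicates"] += 1
        else:
            seen.add(text)
            stats["kept"] += 1
    return stats

sample = [
    "short",
    "a sufficiently long example training record",
    "a sufficiently long example training record",
]
print(quality_check(sample))  # {'kept': 1, 'too_short': 1, 'duplicates': 1}
```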


UK Minister Signals AI Legislation Consultation

A UK government minister announced plans to open a public consultation on AI legislation later this year. The consultation will explore potential requirements for transparency, safety testing, and market oversight, aiming to balance innovation with protection against AI-related harms. 

This marks a shift toward more formal regulation after the UK’s earlier light-touch approach, reflecting growing pressure from both industry and civil society for clearer rules.


EU Releases Training Data Transparency Template

The European Union introduced a standardized template for AI developers to disclose details about their training datasets. The template covers data sources, collection methods, quality control processes, and measures to mitigate bias. 

Regulators hope the format will improve comparability between models and make it easier for auditors, researchers, and the public to evaluate compliance with the AI Act.
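
To illustrate how such a disclosure might be captured internally, here is a minimal Python sketch of a training-data record built around the fields described above. The class and field names are hypothetical; the actual EU template defines its own structure and terminology.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical training-data disclosure record modeled on the fields
# described above. The class and field names are illustrative; the actual
# EU template defines its own structure and terminology.
@dataclass
class TrainingDataDisclosure:
    model_name: str
    data_sources: list[str]        # e.g. licensed corpora, public web crawls
    collection_methods: list[str]  # how each source was gathered
    quality_controls: list[str]    # deduplication, filtering, validation steps
    bias_mitigations: list[str]    # measures taken to reduce dataset bias

disclosure = TrainingDataDisclosure(
    model_name="example-model-v1",
    data_sources=["licensed news archive", "filtered public web crawl"],
    collection_methods=["publisher agreement", "crawler honoring robots.txt"],
    quality_controls=["near-duplicate removal", "language identification"],
    bias_mitigations=["demographic coverage review", "toxicity filtering"],
)

# Serializing to JSON keeps the record machine-readable for auditors and
# comparable across models, which is the template's stated goal.
print(json.dumps(asdict(disclosure), indent=2))
```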


US Publishes AI Action Plan

The United States released a comprehensive AI Action Plan outlining federal priorities for safe, ethical, and competitive AI development. The plan includes investments in AI research, expansion of public-private partnerships, and creation of sector-specific safety benchmarks. It also commits to stronger federal oversight, particularly in critical infrastructure, healthcare, and defense applications.


Trump Issues Executive Order on ‘Woke AI’

President Donald Trump signed an executive order targeting what he calls “woke AI,” directing federal agencies to ensure that AI systems used by the government are not designed or trained with political or ideological bias. The order mandates audits of AI tools used in government, with the goal of ensuring viewpoint neutrality. Critics warn that the language could politicize AI governance and create new compliance challenges.


China Releases Global AI Governance Action Plan

China announced a Global AI Governance Action Plan aimed at promoting international cooperation, especially among developing countries. The plan calls for shared safety standards, responsible technology transfer, and AI applications that support sustainable development goals. It positions China as a leader in shaping governance norms outside Western-led frameworks, reflecting its strategic push to influence global AI policy discourse.


EU Approves GPAI Code of Practice

The European Union formally approved the GPAI Code of Practice, marking its adoption as a recognized framework for voluntary AI governance. The code covers principles such as transparency, safety, and accountability, offering developers and deployers a structured approach to align with the EU AI Act ahead of enforcement deadlines.


AI Act Provisions for GPAI Models Come Into Effect

Specific provisions of the EU AI Act targeting general-purpose AI (GPAI) models officially came into force. These include requirements for detailed technical documentation, disclosure of training data summaries, and compliance with standardized risk management processes. Developers must now ensure their models meet these obligations to operate within the EU market.
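
As a sketch of how a team might track these obligations internally before an EU release, the short Python example below maps each requirement to a piece of supporting evidence and reports what is still outstanding. The obligation keys and descriptions are illustrative assumptions, not the legal text of the Act.

```python
# Minimal sketch of an internal pre-release checklist mirroring the GPAI
# obligations listed above. The keys and descriptions are illustrative
# assumptions, not the legal text of the AI Act.
GPAI_OBLIGATIONS = {
    "technical_documentation": "detailed technical documentation prepared",
    "training_data_summary": "summary of training data published",
    "risk_management": "standardized risk management process followed",
}

def outstanding_obligations(evidence: dict[str, bool]) -> list[str]:
    """Return descriptions of obligations that still lack supporting evidence."""
    return [desc for key, desc in GPAI_OBLIGATIONS.items() if not evidence.get(key)]

release_evidence = {
    "technical_documentation": True,
    "training_data_summary": False,
    "risk_management": True,
}

for gap in outstanding_obligations(release_evidence):
    print(f"Outstanding before EU release: {gap}")
```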


Over 26 Companies Sign GPAI Code of Practice

More than 26 companies, including major AI developers and enterprise technology providers, have signed the GPAI Code of Practice. By committing to these voluntary measures, signatories aim to demonstrate early compliance readiness and leadership in responsible AI. This collective action is expected to set an industry benchmark and influence similar initiatives in other regions.



Global AI governance is advancing at an unprecedented pace, with new laws, voluntary codes, and strategic action plans emerging across multiple regions. While there is growing momentum toward international alignment, significant regional differences in priorities, enforcement mechanisms, and political framing remain. For businesses, this evolving landscape means compliance cannot be an afterthought. 

Organizations must proactively track regulatory developments, assess their AI systems against emerging standards, and integrate governance considerations into product design and deployment from the outset. Early adaptation will not only reduce compliance risk but also position companies as leaders in responsible AI.
