The Evolving Landscape of AI Regulation

Artificial intelligence (“AI”) is at the forefront of global technological advancements, and nations worldwide are grappling with the challenge of regulating its development and deployment. While some jurisdictions prioritize innovation and economic growth, others emphasize risk mitigation and ethical concerns.

The result is a fragmented regulatory landscape, where businesses must navigate varying legal frameworks.

A Shift Towards Deregulation in the USA

Over the past five years, the US has seen an upward trend in the number of AI regulations: in 2023 alone, 25 AI-related regulations were adopted, compared with just one in 2016.

Despite this growing regulatory activity, the United States remains ahead of China, the EU, and the UK as the leading source of top AI models. According to the AI Index Report, 61 notable AI models originated in the United States in 2023, compared with 21 from the EU and 15 from China.

Reversing this established policy direction, Donald Trump’s Executive Order “Initial Rescissions of Harmful Executive Orders and Actions” of 20 January 2025 revoked Executive Order 14110 of 30 October 2023, signed by President Joe Biden.

Executive Order 14110 aimed to ensure the ‘safe, secure, and trustworthy development and use of artificial intelligence’. It required developers of high-risk AI systems (particularly those affecting national security, the economy, healthcare, and public safety) to share safety test results with the federal government.

In addition, the order required federal agencies to establish security standards to minimize AI-related risks, including cybersecurity, biosecurity, and nuclear safety. This framework was put in place in the absence of comprehensive AI legislation from Congress, reflecting growing concerns about the potential threats posed by artificial intelligence.

Shortly afterwards, on 23 January 2025, Donald Trump signed an Executive Order on “Removing Barriers to American Leadership in Artificial Intelligence”.

The Trump administration argues that US leadership can only be secured in an environment free of ideological bias or engineered social agendas.

To remove these bureaucratic barriers and open the way for decisive action by the United States, the order rescinded a number of other policies deemed to ‘impede’ American innovation.

Key Changes and Their Impact:

  • Companies that develop high-risk AI systems are no longer required to report security testing results to the government. This reduces the regulatory compliance burden but raises concerns about transparency, data protection, and cybersecurity.
  • Designated officials must develop and submit an AI action plan to the President within 180 days.
  • The White House has instructed federal agencies to review all policies, directives, regulations, orders, and other actions taken pursuant to the revoked Executive Order 14110 of 30 October 2023.

A Strict Regulatory Approach in the European Union and the Council of Europe

While the United States has taken a deregulatory stance on AI, the European Union and the Council of Europe have moved in the opposite direction, implementing stringent legal frameworks to ensure AI development aligns with fundamental rights and democratic values. 

Council of Europe

In May 2024, the Committee of Ministers of the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“Framework”), the first international legally binding treaty governing AI. Open to all states worldwide, the Framework sets out principles to ensure AI technologies are developed and deployed responsibly and in full compliance with human rights protections.

The Framework establishes risk and impact management obligations for AI developers and deployers, including the requirement to (Council of Europe, 2024):

  • Conduct ongoing risk and impact assessments to evaluate potential threats to human rights, democracy, and the rule of law;
  • Implement preventive and mitigation measures based on assessment results;
  • Allow authorities to impose bans or moratoria on certain high-risk AI applications (so-called “red lines”).

Under Article 3, the Framework Convention applies to AI activities across their entire lifecycle, covering both public authorities and private entities acting on their behalf. Additionally, it mandates that states address AI-related risks from private actors that could impact human rights, democracy, or the rule of law (Council of Europe, 2024).

Each signatory state must specify how it intends to fulfill this obligation, either by directly applying the treaty’s principles to private entities or by adopting equivalent national measures. Importantly, no implementation approach may derogate from existing international commitments on human rights, democracy, or the rule of law.

By introducing binding AI governance standards, the Framework represents a milestone in global AI regulation, reinforcing a human-centric approach to AI development and deployment.

The EU AI Act: A Comprehensive Risk-Based Framework

The Regulation (EU) 2024/1689 (“AI Act”), adopted by the European Parliament and the Council on June 13, 2024, is the world’s first comprehensive legal framework governing AI. It establishes harmonized rules to foster trustworthy AI and protect fundamental rights while supporting innovation. The regulation follows a risk-based approach, classifying AI systems into four categories: unacceptable risk, high risk, limited risk (subject to transparency obligations), and minimal or no risk.

The AI Act entered into force on August 1, 2024, with full application planned for August 2, 2026. However, certain provisions take effect earlier:

  • February 2, 2025 – Prohibitions on AI practices posing unacceptable risk, together with AI literacy obligations.
  • August 2, 2025 – Rules for general-purpose AI models.
  • August 2, 2027 – Extended compliance deadlines for AI systems integrated into regulated products.

The European Commission has established a new EU-level regulator, the European AI Office. It monitors, supervises, and enforces the AI Act’s requirements for general-purpose AI models and systems across the 27 EU Member States. To support well-informed decision-making, the AI Office collaborates with Member States and the wider expert community through dedicated fora and expert groups.

The United Kingdom

At the AI Action Summit held in Paris in February 2025, over 70 countries, including France, China, and India, signed an international declaration aimed at establishing shared principles for AI governance. However, the United States and the United Kingdom declined to sign the document, citing differences in regulatory approaches and a preference for developing their own national AI strategies.

The Kingdom of Saudi Arabia

The Kingdom of Saudi Arabia is rapidly developing its AI sector as part of Vision 2030, aiming to become a global technology leader (Middle East Briefing, 2024). The Saudi Data and Artificial Intelligence Authority (“SDAIA”), established in 2019, oversees AI policy and development strategy. Although Saudi Arabia has no AI-specific legislation, SDAIA is guided by its adopted Principles of AI Ethics.

According to Middle East Briefing (2024), the Kingdom has attracted major investments in data centers, cloud infrastructure, and AI startups, with partnerships involving Microsoft (US$2.1 billion), Oracle (US$1.5 billion), and Huawei (US$400 million) driving innovation. AI talent development is also a priority, with initiatives to train 20,000 specialists by 2030 and educational programs integrating AI into university curricula (SDAIA, 2023).

Saudi Arabia’s regulatory approach remains flexible, relying on existing laws like the Personal Data Protection Law (“PDPL”) to address AI-related concerns. The evolving framework aims to balance innovation with ethical standards, fostering an investment-friendly AI ecosystem.

A Balancing Approach in Asia

China

China views AI development as a strategic priority, with government policies encouraging rapid innovation and investment in AI research. Chinese tech firms are actively working toward technological parity with the U.S., and the launch of DeepSeek’s R1 model represents a major step in strengthening China’s AI competitiveness. The impact is already being felt: R1’s release triggered a 17% drop in Nvidia’s stock, signaling growing competition in AI development.

Donald Trump said DeepSeek’s release should serve as a “wake-up call” for US tech.

Japan

As noted in an analysis on the CSIS website, two key publications from the first half of 2024 strongly indicated that Japan was moving toward new legislation to regulate AI technology more comprehensively. In February, the ruling Liberal Democratic Party released a concept paper, and in May, the Japanese Cabinet Office’s AI Strategy team published a white paper. Both documents recommended introducing new regulations for large-scale foundational models. These developments suggested that Japan was aligning with its international allies in strengthening its regulatory framework for AI.

On March 3, 2025, the Japanese government introduced a bill that seeks to strike a balance between fostering AI innovation and mitigating risks such as AI-mediated crimes. While Japan remains committed to advancing AI development, it is aligning its regulatory approach with that of key international partners.

The Consequences of Fragmented AI Regulation 

Speaking at the Paris summit, U.S. Vice President JD Vance warned Europeans that their “massive” regulations on artificial intelligence could stifle technological development, calling content moderation “authoritarian censorship” (Reuters, 11 February 2025).

“We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship,” Vance stated.

He criticized the EU’s comprehensive regulations, such as the Digital Services Act and the General Data Protection Regulation (“GDPR”), arguing that they impose excessive legal compliance costs on smaller companies.

“Of course, we want to ensure the internet is a safe place, but there’s a big difference between preventing a predator from preying on a child online and stopping an adult from accessing an opinion that the government deems misinformation,” he said.

The United States is moving toward a more relaxed approach to AI regulation, allowing companies greater freedom to innovate and rapidly deploy AI solutions. In contrast, the European Union is tightening its control: the AI Act, which received final approval from the Council on May 21, 2024, requires conformity assessments for high-risk AI products, algorithmic transparency, and accountability for developers.

If the U.S. and China continue to advance AI without stringent regulations, the EU risks falling behind in the global AI race. Some European companies may relocate their AI development to the U.S. to avoid regulatory burdens. Eventually, the EU may be forced to relax its AI rules to stay competitive.

This could lead to a fragmented AI market, where the same AI products operate differently across jurisdictions. As a result, companies will need to develop AI products under multiple regulatory frameworks: less regulated in the U.S., UK, and China, and highly controlled in the EU. Non-compliance with the regulations of a particular country is risky and could lead to serious legal consequences. As AI regulation continues to evolve, businesses will have to navigate a complex and fragmented legal landscape, balancing the need for compliance with the desire to innovate.

Fragmented regulation poses considerable risks for both businesses and users. Excessive regulation, in particular, is often viewed negatively, a sentiment echoed by U.S. Vice President JD Vance; the refusal of the United States and the United Kingdom to sign the Paris declaration shows he is not alone in this view. Businesses must adjust to the regulatory requirements of each jurisdiction to avoid adverse legal consequences. Meanwhile, the U.S. and other countries with more permissive approaches are becoming increasingly attractive for startups.

Conclusion

AI governance varies widely across countries. The U.S. and the U.K. favor a less restrictive, industry-driven approach to foster innovation. 

The European Union enforces a strict regulatory framework, prioritizing AI safety, ethical considerations, and fundamental rights. 

China follows a hybrid model, combining state control with active encouragement of AI innovation, ensuring both technological advancement and regulatory oversight.

Japan is moving toward stricter regulations, particularly for foundational AI models, while Saudi Arabia maintains a flexible, investment-friendly approach, focusing on AI-driven economic growth.

This fragmented landscape presents compliance challenges for businesses operating across jurisdictions. While deregulation may accelerate AI progress, stricter rules seek to mitigate risks. Moving forward, international cooperation will be essential to balance innovation, security, and ethical AI deployment.

Our contacts

If you want to become our client or partner, feel free to contact us at support@manimama.eu.

Or use our telegram @ManimamaBot and we will respond to your inquiry.

We also invite you to visit our website: https://manimama.eu/.

Join our Telegram to receive news in a convenient way: Manimama Legal Channel.


Manimama Law Firm provides a gateway for companies operating as virtual asset wallet and exchange providers, allowing them to enter markets legally. We are ready to offer appropriate support in obtaining a license with lower founding and operating costs. We offer KYC/AML launch support, assistance with risk assessment, legal services, legal opinions, advice on general data protection provisions, contracts, and all the legal and business tools necessary to start a business as a virtual asset service provider.


The content of this article is intended to provide a general guide to the subject matter and should not be considered legal advice.