How artificial intelligence is regulated in the EU and Ukraine: an overview of key initiatives 

Artificial intelligence is rapidly changing the world around us, from medical diagnostics to city management and financial markets. Along with these unprecedented opportunities, however, come growing risks, including breaches of privacy.

In response to these threats, regulators in different countries have begun to actively develop approaches to the legal regulation of artificial intelligence.


European Union Artificial Intelligence Act

Regulation (EU) 2024/1689, known as the AI Act, is the world’s first comprehensive attempt to legislate the use of artificial intelligence systems at the level of a supranational regulator. Its key goals are to protect fundamental human rights and ensure the safety of AI solutions without hindering innovation.

The Act officially entered into force on August 1, 2024, but its provisions take effect gradually. Prohibitions on AI systems that pose unacceptable risks have applied since February 2, 2025; the rules for general-purpose AI models apply from August 2, 2025; and the Act becomes fully applicable on August 2, 2026.

The AI Act applies a risk-based approach, dividing all systems into three categories depending on the severity of the risks associated with them (a simplified code sketch follows the list):

  • Prohibited systems – those that pose an unacceptable risk to human rights, such as social scoring systems like those used in China.
  • High-risk systems – those subject to special legal requirements, such as resume-screening tools and solutions used in healthcare, critical infrastructure, or justice. These systems may be used, but only after passing a conformity assessment, preparing proper documentation, and meeting the other requirements set out in the Act.
  • All other systems, including general-purpose ones – those that are neither prohibited nor classified as high-risk remain largely unregulated. They are regulated only in terms of transparency: the user must be informed that they are interacting with artificial intelligence.
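
For illustration only, the Python sketch below shows how a company might map its own use cases to these three tiers. The tier assignments, use-case labels, and gate messages are assumptions made for this article, not classifications taken from the Act, and a real assessment always requires legal analysis.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Risk tiers as the article describes them (simplified)."""
    PROHIBITED = auto()  # unacceptable risk, e.g. social scoring
    HIGH = auto()        # allowed only after conformity assessment
    MINIMAL = auto()     # transparency obligations only

# Hypothetical mapping from use case to tier; a real assessment
# requires legal analysis of the Act, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "resume_screening": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "customer_chatbot": RiskTier.MINIMAL,
}

def deployment_gate(use_case: str) -> str:
    """Return the compliance step the assigned tier implies."""
    # Defaulting unknown cases to MINIMAL is for illustration only.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.PROHIBITED:
        return "do not deploy: banned since February 2, 2025"
    if tier is RiskTier.HIGH:
        return "deploy only after conformity assessment and documentation"
    return "deploy with a transparency notice to the user"

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(f"{case}: {deployment_gate(case)}")
```

The point of the structure is that the required compliance step is driven entirely by the assigned tier, which mirrors how the Act attaches obligations to risk categories.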

Institutional support

The effective implementation of the Act requires proper institutional support, which is why several bodies have been established in the EU:

  • The European AI Office, which is authorized to interpret and support the implementation of the AI Act.
  • The European Artificial Intelligence Board, which includes one representative from each Member State.
  • A European Parliament working group to oversee the implementation of the Act.

Who the AI Act applies to

As stated in Article 2 of the Act, it applies to several categories of persons, including:

  • Providers placing AI systems on the market or putting them into service in the EU, regardless of where the providers are established or located, as well as their authorized representatives.
  • Deployers of AI systems, i.e., persons using AI systems under their own authority, that are established or located in the EU.
  • Providers and deployers of AI systems established or located in a third country, where the output produced by the AI system is used in the EU.
  • Importers and distributors of AI systems.
  • Manufacturers of products that place an AI system on the market or put it into service together with their product.
  • Affected persons located in the EU.

Thus, the Act has extraterritorial effect, similar to the GDPR: it applies not only to companies registered in the EU, but also to foreign providers and deployers whose systems or outputs reach the EU market.

Prohibited systems: what exactly is considered an unacceptable risk

The AI Act explicitly prohibits the use of AI systems that violate fundamental human rights or potentially threaten democracy and individual freedom.

This includes:

  • Systems that deploy subliminal, purposefully manipulative, or deceptive techniques that can materially distort a person’s behavior and substantially impair their ability to make informed decisions.
  • Systems used to evaluate or classify individuals based on their social behavior or known, inferred, or predicted personal characteristics (social scoring).
  • Systems that assess or predict the risk of an individual committing a criminal offense based solely on profiling or personality traits.
  • Systems that create or expand facial recognition databases by indiscriminately scraping facial images from the Internet or CCTV footage.
  • Systems used to infer the emotions of individuals in educational institutions or the workplace.
  • Biometric categorization systems that infer sensitive attributes such as race, political opinions, or sexual orientation.
  • Real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes.

Some of these prohibitions are subject to narrow exceptions, but this is the general list. The prohibitions have applied since February 2, 2025, and violations may result in serious sanctions.

High-risk systems: requirements and limitations

AI systems that have the potential to significantly affect human life are recognized as posing a high level of risk.

An AI system is classified as high-risk if both of the following conditions are met:

  1. The system is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonization legislation listed in the Act; and
  2. That product, or the AI system itself as a product, is required to undergo a third-party conformity assessment.

In addition, the Act separately designates as high-risk AI systems used in areas such as biometrics, education and vocational training, critical infrastructure, employment and worker management, access to essential private and public services, law enforcement, justice, and migration management.

At the same time, such a system is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of individuals.

The use of such systems is permitted subject to strict regulatory requirements, including:

  • Implementation of a risk management system.
  • Training AI only on high-quality, representative data.
  • Availability of technical documentation.
  • Automatic recording of events (logs) throughout the system’s lifetime (a minimal logging sketch follows below).
  • Ensuring human oversight of the system.
  • Designing for transparency, accuracy, robustness, and cybersecurity.

The Act also provides for the registration of such systems in a public EU database, conformity assessment, and a number of obligations for providers, importers, and users.
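
As a purely illustrative example of the logging requirement mentioned above, the Python sketch below records a structured audit event for every call to a high-risk system. The decorator name, the set of recorded fields, and the placeholder scoring function are assumptions made for this article; the Act mandates automatic event recording but does not prescribe any particular format.

```python
import json
import logging
import time
import uuid
from functools import wraps

# Standard-library logger writing one JSON record per inference.
# The field set below is an illustrative assumption, not a list
# prescribed by the AI Act.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.log"))

def record_events(model_version: str):
    """Decorator that logs each call to a high-risk AI system."""
    def decorator(predict):
        @wraps(predict)
        def wrapper(*args, **kwargs):
            record = {
                "event_id": str(uuid.uuid4()),
                "timestamp": time.time(),
                "model_version": model_version,
            }
            try:
                result = predict(*args, **kwargs)
                record["outcome"] = "ok"
                return result
            except Exception as exc:
                record["outcome"] = f"error: {exc}"
                raise
            finally:
                # The record is written whether the call succeeds or fails.
                audit_log.info(json.dumps(record))
        return wrapper
    return decorator

@record_events(model_version="resume-screener-1.3")
def score_candidate(features: dict) -> float:
    # Placeholder: a real system would invoke the actual model here.
    return 0.5

score_candidate({"years_experience": 4})
```

In a real deployment, such log records would also need to be retained and made available in line with the Act’s documentation obligations.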

General-purpose systems: minimum requirements and maximum transparency

This category includes most “ordinary” AI applications used by citizens, such as chatbots, recommendation systems, and automated content moderation. The Act does not set strict technical requirements for these systems, but it does require an appropriate level of transparency (one possible implementation is sketched at the end of this subsection), including obligations to:

  • Notify the user that they are dealing with AI (for example, if it is a chatbot).
  • Indicate that the image or content was generated by AI (e.g., deepfake or text materials).
  • Provide a simple explanation of how the system works in cases where it affects the user.

Providers of such systems must compile and keep up to date the relevant technical documentation, make information available to downstream providers intending to integrate the AI system into their own products, implement policies to comply with copyright and related rights, and so on.
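
For illustration, the Python sketch below shows one way a chatbot provider might surface the required notification and carry a machine-readable label on generated content. The disclosure wording and the ai_generated flag are assumptions made for this article; the Act imposes the obligation but does not prescribe a specific format.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "Notice: you are interacting with an AI system."

@dataclass
class BotReply:
    text: str
    ai_generated: bool = True  # machine-readable label for downstream use

def reply_with_disclosure(generated_text: str, first_turn: bool) -> BotReply:
    """Prepend a human-readable disclosure on the first turn and
    always carry a machine-readable 'ai_generated' flag."""
    text = f"{AI_DISCLOSURE}\n{generated_text}" if first_turn else generated_text
    return BotReply(text=text)

print(reply_with_disclosure("Hello! How can I help?", first_turn=True).text)
```

The human-readable notice addresses the user-facing obligation at the start of a conversation, while the machine-readable flag lets downstream systems and platforms detect that content was AI-generated.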

How the EU supports AI innovation: sandboxes and exemptions

Despite a clear focus on regulation and security, the AI Act does not aim to “stifle” innovation, but rather provides institutional and procedural mechanisms to stimulate the development of AI solutions. 

In particular, the AI Act obliges Member States to establish, by August 2, 2026, at least one regulatory “sandbox”: a special environment where developers can test AI solutions in real-world conditions, but with limited legal risk. To support the creation and operation of sandboxes, the European Commission may provide technical support, advice, and tools.

Within the established regulatory sandboxes, personal data that was previously lawfully collected may be processed, but only for the purpose of developing, training, and testing certain AI systems, and subject to all the conditions set out in the Act.

In exceptional cases, testing of high-risk AI systems is possible even outside sandboxes, provided the requirements of the Act are met: a testing plan, informing affected individuals, compliance with safety requirements, and so on.

Thus, the AI Act combines the precautionary principle with the desire for technological progress, creating a space for safe experimentation.

Ukraine’s Preparation for AI Regulation: Strategy, Roadmap, and White Paper

Ukraine, as a candidate country for accession to the European Union, is already formulating a strategy to harmonize its approach to AI with the European Regulation. The main guideline in this process is the Roadmap for Artificial Intelligence Regulation presented by the Ministry of Digital Transformation of Ukraine in the fall of 2023.

Ukraine’s approach to the future law is based on a bottom-up concept – a gradual transition from self-regulation to mandatory rules, giving business, the public, and the state time to adapt.

The plan is being implemented in two stages:

  1. Preparatory stage (2023-2025):
    1. providing businesses with tools for self-assessment and compliance with future regulation;
    2. establishing a competent authority for artificial intelligence;
    3. raising public awareness of the legal and technological aspects of artificial intelligence.
  2. Implementation stage:
    1. gradual adoption of the law harmonized with the EU AI Act;
    2. possible use of a mechanism for the phased implementation of the most complex provisions;
    3. taking into account EU requirements in the process of approximation to membership.

In June 2024, as part of the Roadmap, Ukraine presented a White Paper detailing its vision for regulating artificial intelligence. One of its key tools is the Trusted Flagger model: independent civil society organizations that will be able to flag and filter complaints about human rights violations on AI platforms.

This mechanism is intended to:

  • respond more quickly to digital rights violations;
  • reduce the burden on the platforms themselves;
  • ensure the participation of civil society in overseeing artificial intelligence.

The Ministry of Digital Transformation has already reached preliminary agreements to engage two leading Ukrainian NGOs with extensive experience in digital rights: Tsyfrolaba and the Centre for Democracy and Rule of Law (CEDEM). Both have expressed their willingness to join the work as Trusted Flaggers, and other organizations may also volunteer to cooperate by responding to the White Paper.

The Ministry of Digital Transformation of Ukraine has also created the WINWIN AI Center of Excellence, the first center in Europe for the comprehensive integration of artificial intelligence into government processes, including defense, medicine, education, and business.

Key areas and goals include:

  • development of a national AI strategy; 
  • adaptation of European legislation in the field of artificial intelligence;
  • development of AI assistants for public services.

Summary: what to look out for today

The introduction of artificial intelligence regulation in the European Union is not just a legal initiative, but also a new standard for the responsible use of the latest technologies, which is already affecting global supply chains and business models. 

Companies that work with artificial intelligence or plan to enter the European market need to adapt to the requirements of the AI Act right now, especially since the introduction of the relevant regulation in Ukraine is only a matter of time. 

Responsible businesses can already take several important steps:

  1. Conduct an audit of their own AI systems to determine which risk category they fall under and whether they comply with personal data protection law.
  2. Ensure basic transparency in accordance with European requirements, including clear warnings that certain content was created using artificial intelligence.
  3. Prepare internal documentation for risk management.
  4. Follow global and Ukrainian initiatives to prepare for the introduction of regulation in time. If you wish, you can join discussions and public consultations.
  5. Seek legal advice to make sure you understand all the intricacies of the legislation and are prepared for its strict requirements.

Artificial intelligence is no longer the future, but the present. And the future belongs not only to those who develop innovations, but also to those who implement them responsibly.

Our contacts

If you want to become our client or partner, feel free to contact us at support@manimama.eu.

Or use our Telegram bot @ManimamaBot and we will respond to your inquiry.

We also invite you to visit our website: https://manimama.eu/. Join our Telegram to receive news in a convenient way: Manimama Legal Channel.


Manimama Law Firm provides a gateway for companies operating as virtual asset wallet and exchange providers, allowing them to enter markets legally. We are ready to offer appropriate support in obtaining a license with lower founding and operating costs. We offer KYC/AML launch support, assistance with risk assessment, legal services, legal opinions, advice on general data protection provisions, contracts, and all the legal and business tools necessary to start a business as a virtual asset service provider.


The content of this article is intended to provide a general guide to the subject matter, not to be considered as a legal consultation.
