When AI Becomes the Fund Manager: Who Holds the License? | Manimama



Artificial intelligence is rapidly reshaping the world of finance. From predictive analytics to fully automated portfolio allocation, AI-driven systems are no longer experimental tools – they are already making investment decisions, executing trades, and managing client assets at scale.

Yet one crucial question remains unanswered: can AI legally “manage” a fund? And if not – where does the line of regulatory responsibility actually lie?

At Manimama, we’ve been closely monitoring how regulators interpret the intersection between AI autonomy and licensed financial activity. Let’s break down the essentials.

AI in Financial Management: Tool or Decision-Maker?

Under virtually every financial regime – whether the EU's MiFID II, the US Investment Advisers Act of 1940, or Canada's provincial Securities Acts – asset management and portfolio advice are licensed activities.

Licensing hinges on two key principles:

  1. Human accountability – a qualified individual or firm bears fiduciary responsibility for every investment decision; and
  2. Supervision – clients must know who manages their funds and who can be held liable for misconduct.

AI systems – no matter how sophisticated – cannot be “licensed” or bear legal responsibility. They have no legal personality, no fiduciary duties, and no capacity to be sanctioned.

Therefore, even when AI executes trades or allocates assets autonomously, regulators see it only as an instrument under the control of a licensed manager.

In short: AI may assist, but it cannot replace the accountable human.

Regulators’ Stance: The “Responsible Person” Principle (Concise)

Across jurisdictions, regulators are consistent: using AI does not remove legal accountability from the licensed person.

AI can assist – but it cannot act as an independent decision-maker in asset management.

ESMA (EU) – In its Public Statement on AI in Investment Services (May 2024), ESMA states: “Firms’ decisions remain the responsibility of management bodies, irrespective of whether those decisions are taken by people or AI-based tools.”

The regulator requires ongoing human oversight and full documentation of AI-assisted investment decisions.

FINRA (U.S.) – In Regulatory Notice 24-09, FINRA confirms that supervisory rules (Rule 3110 – Supervision) apply equally to AI systems: “Existing rules governing supervision and suitability apply equally to AI-generated recommendations under a technology-neutral framework.”

Responsibility for results remains with the licensed firm, even if AI or third-party providers perform certain functions.

FCA (UK) – In AI and the FCA: Our Approach (2024), the FCA emphasizes: “Delegating functions to AI systems does not remove accountability from senior managers.” Firms must ensure transparency, human supervision, and fair client outcomes in line with the Consumer Duty principle.

In practice, this means a licensed portfolio manager must remain the “responsible person” – supervising model inputs, validating data sources, and approving outputs. Failing to maintain this oversight can amount to unlicensed investment activity or breach of fiduciary duty.

Practical Risk: The Accountability Gap

The main compliance risk is the illusion of autonomy. If a company markets an “AI-powered investment platform” but lacks a licensed responsible person, it effectively performs unregulated investment activity.

Supervisory authorities – from BaFin (Germany) to AMF (France) – have already warned that using AI does not exempt firms from existing regulatory categories.

Key obligations remain:

  • investor suitability assessments;
  • clear disclosure of algorithmic logic and risks;
  • ongoing model governance and human review.
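The first of these obligations – investor suitability – can be enforced in software as a gate that sits between the AI model and the client. Below is a minimal sketch of that idea; all class names, fields, and risk thresholds are illustrative assumptions, not a real regulatory API or any firm's actual logic.

```python
from dataclasses import dataclass

# Hypothetical illustration: names and thresholds are assumptions,
# not a real compliance framework.

@dataclass
class ClientProfile:
    risk_tolerance: int   # 1 (conservative) .. 5 (aggressive)
    horizon_years: int    # investment horizon

@dataclass
class Recommendation:
    instrument: str
    risk_score: int       # 1 .. 5, as scored by the firm's own model
    rationale: str        # disclosed algorithmic logic (transparency obligation)

def suitability_gate(client: ClientProfile, rec: Recommendation) -> bool:
    """Block AI-generated recommendations that exceed the client's risk profile."""
    if rec.risk_score > client.risk_tolerance:
        return False
    # Example policy: high-risk products require a long horizon.
    if rec.risk_score >= 4 and client.horizon_years < 5:
        return False
    return True

client = ClientProfile(risk_tolerance=3, horizon_years=10)
print(suitability_gate(client, Recommendation("equity ETF", 3, "momentum signal")))      # True
print(suitability_gate(client, Recommendation("leveraged fund", 5, "volatility play")))  # False
```

The point is architectural, not legal: the model may generate any recommendation, but a deterministic, auditable rule – owned by the licensed firm – decides whether it ever reaches the client.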

In essence: AI can process the data, but humans must process the responsibility.

The Way Forward: Compliance-by-Design

For fintechs and asset managers, the roadmap is clear:

  1. Structure AI governance frameworks – assign accountable officers (Chief Investment Officer, Compliance Officer) for all algorithmic operations.
  2. Establish human-in-the-loop controls – every trading or allocation decision must be traceable to an authorized individual.
  3. Secure proper licensing – whether under MiFID II, AIFMD, or national securities laws, ensure your entity holds the relevant permission for portfolio management or advice.
  4. Document oversight – maintain audit trails proving active monitoring of AI decisions.
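Steps 2 and 4 above – human-in-the-loop controls and documented oversight – can be combined in one pattern: the AI proposes, a named licensed officer approves or rejects, and every decision is appended to an audit log. The sketch below is a simplified illustration under those assumptions; the class, field names, and officer title are hypothetical.

```python
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch: the AI proposes, a named responsible officer decides,
# and each decision is recorded so it is traceable to an authorized individual.

@dataclass
class Proposal:
    trade_id: str
    instrument: str
    quantity: int
    model_version: str    # which algorithm produced this proposal

class HumanInTheLoopDesk:
    def __init__(self, responsible_officer: str):
        self.responsible_officer = responsible_officer
        self.audit_log: list[dict] = []

    def review(self, proposal: Proposal, approved: bool, reason: str) -> bool:
        """Record the officer's decision, building the audit trail regulators expect."""
        self.audit_log.append({
            "timestamp": time.time(),
            "officer": self.responsible_officer,
            "proposal": asdict(proposal),
            "approved": approved,
            "reason": reason,
        })
        return approved

desk = HumanInTheLoopDesk(responsible_officer="Jane Doe, CIO")
proposal = Proposal("T-001", "EUR govt bond fund", 100, "alloc-model-v2.3")
desk.review(proposal, approved=True, reason="Within mandate and risk limits")
print(desk.audit_log[-1]["officer"])  # Jane Doe, CIO
```

In production this log would be immutable and timestamped by a trusted source, but even this skeleton shows the key property: no allocation executes without a record tying it to an accountable person and a model version.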

This “compliance-by-design” approach transforms AI from a legal risk into a strategic asset.

Conclusion: The Future Is Hybrid

AI will continue to redefine the speed, precision, and scale of financial decision-making. But regulation remains human-centric for a reason – accountability cannot be automated.

The AI-as-a-manager model will evolve, but it will do so within a licensed, supervised, and transparent framework. The firms that understand this early – embedding legal oversight into technological innovation – will not only stay compliant, but gain a competitive edge.

At Manimama, we help fintech and investment clients build that bridge – aligning AI innovation with regulatory integrity.


The content of this article is intended to provide a general guide to the subject matter, not to be considered as a legal consultation.
