10 April 2026 · 6 min read

Does the EU AI Act apply to your AI app?

It is the question every developer building on OpenAI, Anthropic, or any other AI API is asking. The answer is almost certainly yes if you have EU users — and the regulation does not care where your company is incorporated. Here is how to work it out, with every claim traced to a specific Article.

The short answer

If your product uses AI and you have users in the EU, the EU AI Act applies to you. Full stop.

Article 2(1) defines three categories of entity that fall within scope:

  • Article 2(1)(a): Providers placing an AI system on the EU market, regardless of whether they are established in the EU or a third country.
  • Article 2(1)(b): Deployers of AI systems who are established in, or located in, the EU.
  • Article 2(1)(c): Providers and deployers of AI systems established in a third country, where the output produced by the system is used in the Union.

That third clause is the one that catches most developers off guard. It means a SaaS product built in San Francisco, Lagos, or London that serves even a single EU user is in scope.

Source: Regulation (EU) 2024/1689, Article 2(1). Published in the Official Journal of the European Union on 12 July 2024.

A decision tree

Work through these five questions in order. Each answer narrows down where you stand under the regulation.

1

Does your product use AI?

Article 3(1) defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." If you call an LLM API, run an ML model, or have a system that learns from data — yes, this is AI under the regulation. If no: you are not in scope. Stop here.

2

Do you have EU users?

If the output of your AI system is used by anyone located in the EU, Article 2(1)(c) brings you into scope. "Used in the Union" includes EU-based end users of a globally available SaaS product. If you have zero EU users today and no plans to serve them: you are not currently in scope, but this changes the moment your first EU user signs up.

3

Are you the provider or the deployer?

Provider (Article 3(3)): the entity that develops an AI system or has it developed, and places it on the market or puts it into service under its own name or trademark. Deployer (Article 3(4)): any entity that uses an AI system under its authority. Most SaaS companies integrating OpenAI, Anthropic, or Google APIs into their product are deployers. The API provider is the general-purpose AI (GPAI) model provider. See the provider-versus-deployer question below for the full breakdown.

4

What risk tier are you in?

The EU AI Act classifies AI systems into four tiers: Prohibited (Article 5), High-risk (Article 6 + Annex III), Limited-risk (Article 50 transparency), and Minimal-risk (no mandatory obligations). Your obligations depend entirely on which tier your product falls into. High-risk domains include hiring, credit scoring, medical diagnosis, education assessment, law enforcement, and critical infrastructure.

5

What are your obligations?

Prohibited: stop immediately. High-risk: Articles 9–15 apply (risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness). Limited-risk: inform users they are interacting with AI. Minimal-risk: no mandatory requirements, but Article 5 prohibitions still apply to everyone.
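The five-question walkthrough above can be sketched as a single function. This is a minimal illustration of the decision logic, not the text of the regulation or any tool's implementation; the parameter names and tier labels are my own shorthand.

```python
def classify(uses_ai: bool, eu_users: bool, prohibited: bool,
             high_risk_domain: bool, transparency_trigger: bool) -> str:
    """Walk the five questions in order and return a risk tier.

    Tier labels are shorthand for the Act's four categories:
    Article 5 (prohibited), Article 6 + Annex III (high-risk),
    Article 50 (limited-risk), and everything else (minimal-risk).
    """
    if not uses_ai:
        return "out-of-scope"    # Q1: not an AI system (Article 3(1))
    if not eu_users:
        return "out-of-scope"    # Q2: no output used in the Union (Article 2(1)(c))
    if prohibited:
        return "prohibited"      # Q3: Article 5 practice — stop immediately
    if high_risk_domain:
        return "high-risk"       # Q4: Annex III domain — Articles 9-15 apply
    if transparency_trigger:
        return "limited-risk"    # Q5: Article 50 transparency obligation
    return "minimal-risk"        # no mandatory obligations (Article 5 still applies)


# Example: a chatbot SaaS with EU users, no prohibited or high-risk use
print(classify(True, True, False, False, True))  # limited-risk
```

Note the ordering matters: a prohibited practice is prohibited even in an otherwise high-risk or minimal-risk product, which is why Q3 short-circuits before the tier questions.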

I am based in the US / UK / outside the EU — does it still apply?

Yes. The mechanism is Article 2(1)(c), which states that the regulation applies to:

"providers and deployers of AI systems that are established in a third country, where the output produced by the AI system is used in the Union"

This is deliberately modelled on GDPR's extraterritorial reach (Article 3(2) of the GDPR). The regulation does not require you to have an office in the EU, a legal entity in the EU, or even a contract with an EU customer. If the output of your AI system is used by someone in the EU, you are in scope.

For non-EU providers of high-risk AI systems, there is an additional requirement: Article 22 requires you to appoint an authorised representative established in the EU before placing the system on the market. This is a legal prerequisite, not optional.

Source: Regulation (EU) 2024/1689, Article 2(1)(c) and Article 22.

I use the OpenAI API — am I a provider or a deployer?

This is the single most common question on developer forums, and the answer is more straightforward than most people expect.

You are almost certainly a deployer.

Article 3(4) defines a deployer as "any natural or legal person, public authority, agency or other body using an AI system under its authority." If you integrate OpenAI's GPT, Anthropic's Claude, Google's Gemini, or any other hosted model into your product, you are using that AI system under your authority. You are the deployer.

OpenAI (or Anthropic, or Google) is the GPAI model provider under Article 3(63). They have their own obligations under Articles 53–55, which have been in force since 2 August 2025.

When do you become a provider instead? Articles 3(3) and 25 say you are a provider if you:

  • Place an AI system on the market or put it into service under your own name or trademark, or
  • Make a substantial modification to a high-risk AI system that is already on the market (Article 25)

So if you fine-tune a model significantly, wrap it in a product, and market it as your own AI system — you may cross the line from deployer to provider. The distinction matters because providers of high-risk systems have heavier obligations (conformity assessment, CE marking, registration in the EU database).

For most SaaS developers calling an API: you are a deployer. Your obligations are lighter but they are real — particularly around transparency (Article 50) and, if you are in a high-risk domain, around human oversight (Article 14) and record-keeping (Article 12).
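The provider-versus-deployer split can be sketched as code too. This is a rough heuristic over the Article 3(3), 3(4), and 25 definitions; the parameter names are my own invention, and a real classification needs legal judgment about what "under your own name" and "substantial modification" mean in your case.

```python
def classify_role(own_name_or_trademark: bool,
                  substantial_modification_of_high_risk: bool) -> str:
    """Rough heuristic for the provider/deployer split.

    Article 3(3): you are a provider if you place an AI system on the
    market under your own name or trademark. Article 25: you become a
    provider if you substantially modify a high-risk system already on
    the market. Otherwise, using a system under your authority makes
    you a deployer (Article 3(4)).
    """
    if own_name_or_trademark or substantial_modification_of_high_risk:
        return "provider"
    return "deployer"


# A typical SaaS calling a hosted model API: marketing your *product*
# under your brand is not the same as placing the *AI system* on the
# market under your name, so most API integrators stay deployers.
print(classify_role(False, False))  # deployer
```

The hard part in practice is the first parameter: the closer your marketing gets to "our AI", and the more you fine-tune, the closer you drift to provider status.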

Source: Regulation (EU) 2024/1689, Articles 3(3), 3(4), 3(63), 25, and 53–55.

I am a solo founder with 3 users — do I really need to worry?

Honest answer: the regulation does not have a size threshold.

Article 2 applies based on where the AI system's output is used, not the size of the company behind it. There is no "small company exemption" and no minimum user count. A solo developer with a SaaS product used by three EU customers is technically in scope.

However, practical enforcement risk is proportionate. The EU AI Office and national competent authorities have finite resources. They will focus on high-impact, high-risk systems first — not a three-user side project.

Article 62 requires Member States to provide SMEs and startups with priority access to AI regulatory sandboxes and to take the specific interests and needs of SME providers into account when setting fees and compliance costs. Recital 141 explicitly acknowledges that compliance costs should be proportionate.

The smart move is not to ignore the regulation. It is to know your tier now so you are not scrambling later. If you are minimal-risk, you have no mandatory obligations beyond the universal Article 5 prohibitions, and you can stop worrying. If you are high-risk, you want to know that before you have 3,000 users and an enforcement body looking at your sector.

Omnibus status. The proposed EU Digital Omnibus may extend some Annex III (high-risk) deadlines from 2 August 2026 to 2 December 2027. As of April 2026, this is under active trilogue negotiation and is not yet law. Do not plan around it as certain. Article 5 prohibitions and Article 50 transparency obligations are not proposed for delay.

Source: Regulation (EU) 2024/1689, Articles 2, 62, and Recital 141. Omnibus status: COM(2025) 37 final.

Automate the decision

The decision tree above is exactly what regula assess implements as an interactive CLI flow. It walks you through the same five questions — AI usage, EU scope, prohibited practices, high-risk domains, and transparency triggers — plus where you are based, and tells you your tier, your role, and your obligations.

$ regula assess

EU AI Act — Applicability Check

  Does your product use AI, machine learning, or an AI API?
  (Examples: OpenAI, Anthropic, Google AI, Hugging Face...)
  yes

  Do any users interact with your product from within the EU?
  yes

  Does your product do any of the following?
  a) Score or rank people's social behaviour...
  b) Influence user decisions outside their awareness...
  no

  Does your product do any of the following?
  a) Screen, rank, or filter job candidates...
  b) Make or influence credit decisions...
  no

  Does your product do any of the following?
  a) Interact with users via chat or voice...
  b) Generate text, images, audio, or video...
  yes

  Are you based outside the EU?
  yes

Result: LIMITED-RISK (Article 50 transparency obligation)

Your product is in scope, but the obligation is lightweight: inform users they are interacting with AI or consuming AI-generated content. Deadline: 2 August 2026. This deadline is NOT proposed for delay by the Digital Omnibus.

You can also run it non-interactively in CI or scripts:

$ regula assess --answers yes,yes,no,no,yes
$ regula assess --answers yes,yes,no,yes,no --format json
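In CI, the non-interactive mode slots in as an ordinary pipeline step. The fragment below is an illustrative GitHub Actions sketch: the workflow skeleton and artifact step are my own scaffolding, and only the pipx install and regula invocations mirror the commands shown in this post.

```yaml
# Hypothetical CI workflow fragment — the job layout is illustrative;
# the regula commands mirror the non-interactive examples above.
jobs:
  eu-ai-act:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx install regula-ai
      - run: regula assess --answers yes,yes,no,no,yes --format json > assessment.json
      - uses: actions/upload-artifact@v4
        with:
          name: eu-ai-act-assessment
          path: assessment.json
```

Archiving the JSON output per build gives you a dated record of your applicability assessment, which is useful evidence if your tier is ever questioned.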

What to do next

Based on where you landed in the decision tree:

  • Not sure about your tier? Run regula assess to find out in 60 seconds.
  • Want to scan your code? Run regula check . to identify risk patterns against 404 detections across 8 languages.
  • High-risk? Run regula gap . for an Articles 9–15 gap assessment, then regula plan . for a prioritised remediation list.
  • Limited-risk? Run regula disclose . to generate compliant disclosure text for your chatbot or AI-generated content.
  • Want the full picture? Install and run:
$ pipx install regula-ai
$ regula assess          # applicability check
$ regula check .         # code-level risk scan
$ regula report . -f html # full compliance report

Not legal advice. Regula identifies regulatory risk indicators in code for developer review. It does not constitute legal advice, and its output should not be relied upon as a definitive compliance determination. The EU AI Act requires contextual assessment that no automated tool can fully provide. For high-risk systems, consult a qualified legal professional. All Article references are to Regulation (EU) 2024/1689 as published in the Official Journal of the European Union on 12 July 2024.

Discuss on Hacker News