What is PL 2338/2023?
PL 2338/2023, known as the Marco Legal da Inteligencia Artificial, is a bill that would establish a comprehensive legal framework for the development and use of artificial intelligence in Brazil. It originated in the Senate, where it was approved by symbolic vote on 10 December 2024.
The bill was then sent to the Chamber of Deputies. On 4 April 2025, the Chamber created a Special Commission to analyse the text. As of April 2026, the bill is awaiting the rapporteur's report within that Special Commission. The Chamber may amend the text before voting; if it does, the bill returns to the Senate, which votes to accept or reject the Chamber's amendments.
Article 45 of the Senate-approved text provides that the law enters into force one year after its publication in the Diario Oficial da Uniao. Note: Article 45 was amended during Senate deliberation by Senator Esperidiao Amin to address the National AI Governance System (SIA), and the Chamber of Deputies may further modify the implementation timeline. Penalties for non-compliance under the Senate-approved text include fines of up to R$50,000,000 (fifty million reais) per infringement or up to 2% of annual revenue, whichever is higher.
PL 2338/2023 is not yet law. The penalty ranges, risk classifications, and obligations described on this page reflect the Senate-approved text and may be modified by the Chamber of Deputies.
Risk classification under the Marco Legal
The Senate-approved text of PL 2338/2023 establishes three risk tiers. These broadly parallel the EU AI Act's risk pyramid but use different terminology and scope.
Excessive risk (risco excessivo) — prohibited
- Social scoring by government entities
- Indiscriminate biometric surveillance in public spaces (with law-enforcement exceptions)
- AI systems that exploit vulnerabilities of specific groups (children, elderly, persons with disabilities) in ways that cause harm
High risk (alto risco)
- AI used in employment decisions (recruitment, evaluation, dismissal)
- AI used in credit and insurance decisions
- AI used in education (admissions, grading, allocation)
- AI used in healthcare (diagnosis, treatment recommendations)
- AI used in criminal justice and law enforcement
- AI used in essential public services (welfare, utilities)
- AI used in autonomous vehicles
High-risk systems would be subject to impact assessments, human oversight requirements, transparency obligations, and documentation duties.
Non-high risk
All other AI systems would be subject to general transparency and good-practice obligations, including the right of users to know they are interacting with an AI system.
LGPD and AI: what applies today
While the Marco Legal works through the Chamber, the LGPD (Lei Geral de Protecao de Dados, Lei 13.709/2018) is already in force and already applies to AI systems that process personal data. Three articles are particularly relevant.
Article 20 — automated decision review
Data subjects have the right to request a review of decisions made solely by automated processing that affect their interests — including profiling, credit scoring, and hiring decisions. The data controller must provide clear and adequate information about the criteria and procedures used. Note: the original draft required human review, but a legislative amendment removed that requirement. Article 20 requires a review, but does not mandate that it be performed by a human. This applies to any AI system making decisions that affect individuals, regardless of whether the Marco Legal is enacted.
Article 38 — RIPD (data protection impact report)
The ANPD may require a Relatorio de Impacto a Protecao de Dados Pessoais (RIPD) for processing activities that present high risk to data subjects' fundamental rights and freedoms. AI-driven profiling, automated credit decisions, and large-scale processing of sensitive data are all candidates for a RIPD. This is Brazil's equivalent of a DPIA under the GDPR.
Article 11 — sensitive personal data
Processing of sensitive personal data (racial or ethnic origin, religious belief, political opinion, health data, biometric data, genetic data) requires explicit and specific consent or one of the narrow legal bases listed in Article 11. AI systems trained on or processing sensitive data face a higher compliance bar under the LGPD, regardless of the Marco Legal's status.
ANPD enforcement priorities 2026–2027
The Autoridade Nacional de Protecao de Dados (ANPD) has signalled that AI and automated decisions are among its enforcement priorities for the 2026–2027 cycle. This means that even without the Marco Legal, the ANPD is actively looking at how organisations use AI in ways that touch personal data.
Areas of particular ANPD focus include:
- Automated decision-making that affects individuals' rights (Article 20 enforcement)
- Transparency in algorithmic processing of personal data
- Impact assessments for high-risk automated processing
- Cross-border data transfers involving AI systems
Organisations deploying AI in Brazil should not wait for the Marco Legal to become law. The ANPD already has enforcement authority under the LGPD, and AI-related processing is squarely in its sights.
What developers should do now
Whether or not PL 2338/2023 is enacted this year, the following steps are worth taking today. All are grounded in obligations that already exist under the LGPD or that will apply under the Marco Legal regardless of final text.
- Inventory your AI systems. List every AI-powered feature in production: what data it processes, what decisions it makes or influences, and which user categories it affects. The LGPD already requires you to know this.
- Map your automated decisions. Identify every system that makes decisions solely by automated processing. Under LGPD Article 20, data subjects can request a review of these decisions. Document the criteria, the logic, and the review mechanism for each one. While Article 20 does not require human review specifically, implementing a human review path is a best practice that also prepares you for Marco Legal obligations.
- Assess whether you need a RIPD. If your AI system processes personal data at scale, profiles individuals, or handles sensitive data categories, you likely need a data protection impact report under LGPD Article 38.
- Review your sensitive data processing. AI systems trained on or inferring sensitive personal data (health, biometrics, racial/ethnic origin) face stricter requirements under LGPD Article 11. Confirm your legal basis.
- Prepare for Marco Legal high-risk obligations. If your system falls into any of the high-risk categories (employment, credit, education, healthcare, criminal justice, essential services, autonomous vehicles), start documenting risk assessments, human oversight provisions, and transparency measures now. You will need them if the bill passes.
- Track the bill. Follow the Special Commission proceedings at the Camara dos Deputados. The rapporteur's report will signal what the final text looks like.
How Regula helps
Regula is an open-source compliance CLI that combines code scanning with governance questionnaires for AI risk assessment. Its framework crosswalk already includes both the LGPD and the Marco Legal da IA, so you can map risk findings to the relevant Brazilian articles today.
For a team building AI for the Brazilian market, the most practically useful commands are:
# Install
pipx install regula-ai
# Scan for risk indicators with Brazil-specific mapping
regula check --jurisdictions brazil . # Maps findings to LGPD articles
# Gap analysis against Brazilian frameworks
regula gap --framework lgpd # LGPD article-by-article gap assessment
regula gap --framework marco-legal-ia # Marco Legal gap assessment
# Inventory what you have
regula discover . # AI systems present in the project
regula inventory # AI library / model references
# Generate compliance evidence
regula conform # Conformity evidence pack
regula oversight # Human oversight detection
# Health and reproducibility
regula self-test
regula doctor
The framework crosswalk covers all 7 EU AI Act obligation articles (Articles 9–15), each mapped to both LGPD and Marco Legal articles. This means you can use the same scan to understand your compliance posture across the EU AI Act, the LGPD, and the Marco Legal simultaneously.
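Conceptually, a crosswalk entry links one EU AI Act obligation article to its counterparts in each Brazilian framework. The sketch below shows the idea only; it is not Regula's actual data format, and the mapping shown (Article 14 to LGPD Article 20) is an assumed example, not an authoritative legal equivalence.

```python
# Illustrative crosswalk entry: one EU AI Act obligation mapped to
# Brazilian counterparts. Structure and mapping are assumptions.
CROSSWALK = {
    "eu_ai_act": "Article 14",      # human oversight
    "lgpd": ["Article 20"],         # automated-decision review
    "marco_legal_ia": ["high-risk human oversight obligations"],
}

def jurisdiction_refs(entry: dict, jurisdiction: str) -> list:
    """Return the article references a finding maps to for one jurisdiction."""
    refs = entry.get(jurisdiction, [])
    return refs if isinstance(refs, list) else [refs]

print(jurisdiction_refs(CROSSWALK, "lgpd"))
```

A single scan finding can then be reported three times, once per framework, which is what "the same scan" in the paragraph above means in practice.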
Regula is open source, written in Python with zero production dependencies, and the entire detection ruleset is in the repository. Brazilian teams can fork it, add Brazil-specific patterns, and contribute them back.
Frequently asked questions
What is PL 2338/2023 (Marco Legal da IA)?
PL 2338/2023, known as the Marco Legal da Inteligencia Artificial, is a bill that would establish a legal framework for AI in Brazil. The Senate approved it by symbolic vote on 10 December 2024. It was sent to the Chamber of Deputies, where a Special Commission was created on 4 April 2025. As of April 2026, the bill is awaiting the rapporteur's report in that Special Commission. It is not yet law.
Does Brazil have an AI law?
Not yet. PL 2338/2023 passed the Senate on 10 December 2024 but is still being considered by the Chamber of Deputies. Article 45 of the Senate-approved text provides for entry into force one year after publication, though the Chamber may modify this timeline. In the meantime, the LGPD (Lei 13.709/2018) already applies to AI systems that process personal data, including automated decision-making under Article 20.
How does the LGPD apply to AI systems?
The LGPD applies to any AI system that processes personal data of individuals in Brazil. Article 20 gives data subjects the right to request a review of decisions made solely by automated processing that affect their interests (note: the original draft required human review, but that requirement was removed by legislative amendment). Article 38 requires a RIPD (data protection impact report) for high-risk processing. Article 11 imposes stricter rules when sensitive personal data is involved. The ANPD has signalled that AI and automated decisions are enforcement priorities for 2026–2027.
What are the penalties under the Marco Legal da IA?
If PL 2338/2023 is enacted as currently drafted, penalties include fines of up to R$50,000,000 (fifty million reais) or 2% of annual revenue per infringement, whichever is higher. These penalty ranges are subject to change during Chamber deliberation. The bill is not yet law.
What risk categories does the Marco Legal define?
The bill as approved by the Senate establishes three risk tiers: excessive risk (prohibited uses such as social scoring by government and indiscriminate biometric surveillance), high risk (AI used in employment, credit, education, healthcare, criminal justice, essential services, and autonomous vehicles), and non-high risk (subject to general transparency and good-practice obligations). The Chamber may modify these categories.
What should Brazilian developers do now?
Inventory AI systems in production. Map every automated decision to its LGPD Article 20 review mechanism. Assess whether a RIPD is needed under Article 38. Review sensitive data processing against Article 11. Prepare documentation for Marco Legal high-risk obligations. Track the Special Commission proceedings at the Chamber of Deputies.
Can Regula scan for LGPD and Marco Legal compliance?
Yes. Regula's framework crosswalk includes both LGPD and Marco Legal da IA. The commands regula gap --framework lgpd and regula gap --framework marco-legal-ia map risk findings to the relevant articles of each framework. The command regula check --jurisdictions brazil applies LGPD-mapped rules to your scan results.
How does the Marco Legal compare to the EU AI Act?
Both use a risk-based approach with prohibited practices, high-risk categories, and lighter obligations for lower-risk systems. The Marco Legal's high-risk categories overlap significantly with the EU AI Act's Annex III list (employment, credit, education, healthcare, law enforcement). Key differences: the Marco Legal does not have an EU-style central AI database or conformity assessment procedure (yet), and the penalty structure is different (R$50M / 2% revenue vs the EU's tiered percentage-of-global-revenue model). Regula's framework crosswalk maps between both frameworks, covering all 7 EU AI Act obligation articles (9–15).
What we are tracking and what may change
The Marco Legal is still in committee. The final text may differ significantly from the Senate-approved version. Here is what we are watching as of April 2026:
To verify on enactment
- High-risk category definitions. The Chamber may add, remove, or redefine high-risk categories. Autonomous vehicles, in particular, were a late addition and may be scoped differently.
- Penalty ranges. The R$50M / 2% revenue figures may be adjusted during Chamber deliberation.
- Supervisory authority. The Senate text assigns oversight roles but the final institutional arrangement (single authority vs sector-specific model) may change.
- Transition period. The Senate-approved text (Art. 45) provides for a one-year entry-into-force window. Some secondary legal analyses describe a phased approach (180 days for certain provisions, two years for others), which may reflect proposed Chamber amendments or an earlier draft. The final timeline will be confirmed upon enactment.
- ANPD role. Whether the ANPD becomes the primary AI regulator or shares authority with sector-specific regulators.
- Senate review of amendments. If the Chamber amends the text, it returns to the Senate, which votes on the amendments. The final text may differ from both the Senate-approved and Chamber-amended versions.
We update this page as the bill progresses. If you want to be notified when the tracker changes, watch the repository.
Sources
Primary and secondary sources
- Senado Federal — PL 2338/2023 — full legislative text, voting record, and procedural history. senado.leg.br/materia/157233
- Camara dos Deputados — Special Commission creation (4 April 2025), rapporteur appointment, and procedural updates. www.camara.leg.br
- LGPD (Lei 13.709/2018) — full text of the Lei Geral de Protecao de Dados. Articles 11, 20, and 38 govern sensitive data, automated decisions, and data protection impact reports respectively.
- ANPD — Autoridade Nacional de Protecao de Dados — enforcement priorities and regulatory agenda. gov.br/anpd
- Securiti — analysis of PL 2338/2023 risk classification and penalty structure. Used as a secondary source for the risk tier summary.
- Baker McKenzie — legal analysis of the Marco Legal da IA and its interaction with the LGPD. Used as a secondary source for the framework comparison.
If you spot an error on this page, open an issue on github.com/kuzivaai/getregula or email a correction. We would rather be told than be wrong.