Legal & regulatory

The EU AI Act and legal AI: what Belgian tax professionals actually need to know

The AI Act entered into force on 1 August 2024. Some provisions are already active. Most articles confuse provider obligations with deployer obligations — here's the distinction that matters for your practice.

By Auryth Team

The EU AI Act is already in force. Not in the future — since 1 August 2024. Prohibited AI practices have been banned since February 2025. AI literacy obligations apply right now. And in August 2026 — six months from now — high-risk system obligations start enforcement, backed by penalties of up to €35 million or 7% of global revenue.

If you are a Belgian tax professional using AI tools in your practice, every article you have read about this regulation probably scared you more than it should have. Here is why: almost everything written about the AI Act confuses what providers must do with what deployers must do. The difference is not academic. It is the difference between €330,000 in compliance costs and essentially zero.

The distinction that changes everything: provider vs. deployer

The AI Act creates two fundamentally different roles with fundamentally different obligations.

Providers build AI systems. OpenAI, Anthropic, Google, Auryth — anyone who develops, trains, or places an AI system on the market. Providers of high-risk AI systems must conduct conformity assessments, implement quality management systems, maintain extensive technical documentation, register their systems in the EU database, and ensure accuracy, robustness, and cybersecurity.

Deployers use AI systems in a professional context. That is you. Law firms, accounting firms, tax advisory practices, corporate tax departments — anyone using an AI tool as part of their professional work.

Your obligations as a deployer are lighter, more practical, and significantly less expensive:

  - AI literacy (Article 4): ensure your staff understands the basics of how the AI tools they use work, their limitations, and their risks. Internal training suffices; no certification is required.
  - Human oversight (Article 26): a qualified professional reviews AI output before it reaches clients. If you are already doing this (and you should be), you are already compliant.
  - Log retention: keep system-generated logs for a minimum of six months.
  - Input data relevance: use the tool as intended, with appropriate input data.
  - Follow instructions for use: read and follow the provider's documentation.

That is it. No conformity assessment. No quality management system. No €193,000–€330,000 setup cost. The compliance burden for deployers is training, oversight, and record-keeping — activities that any competent professional practice should already be performing.
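Of these obligations, log retention is the most mechanical, and simple enough to sketch. A minimal illustration in Python of the six-month rule, assuming a 183-day retention window and UTC timestamps per log entry (both illustrative choices on our part, not a format prescribed by the Act):

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch: a log entry may only be purged once it is older
# than the retention window. 183 days approximates "minimum six months".
RETENTION = timedelta(days=183)

def deletable(entry_timestamp, now=None):
    """Return True only if the entry has passed the retention window."""
    now = now or datetime.now(timezone.utc)
    return now - entry_timestamp > RETENTION

now = datetime(2026, 8, 2, tzinfo=timezone.utc)
old = datetime(2025, 12, 1, tzinfo=timezone.utc)     # ~8 months old
recent = datetime(2026, 6, 1, tzinfo=timezone.utc)   # ~2 months old
assert deletable(old, now) is True
assert deletable(recent, now) is False
```

The point is not the code but the policy it encodes: purge routines in your practice management tooling should never be allowed to delete AI-system logs younger than the window.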

[Figure: EU AI Act timeline, showing phased enforcement from 2024 to 2027 and provider vs. deployer obligations]

What is already in force

The AI Act’s enforcement follows a phased timeline. Some provisions are already active:

Since February 2, 2025:

  - Prohibited AI practices (Article 5) are banned outright
  - The AI literacy obligation (Article 4) applies to providers and deployers alike

Since August 2, 2025:

  - Obligations for providers of general-purpose AI models apply, supported by the Commission's General-Purpose AI Code of Practice

Coming February 2, 2026:

  - The European Commission is due to publish its guidelines on high-risk classification

Coming August 2, 2026:

  - High-risk system obligations become enforceable, backed by the full penalty framework
  - National market surveillance begins (in Belgium, through BIPT)

Is tax research AI high-risk?

This is where the discussion gets genuinely complex, and where most commentary oversimplifies.

Annex III of the AI Act lists categories of AI systems considered high-risk. Point 8 covers the “administration of justice and democratic processes” and specifically includes AI systems “intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”

Read that carefully. The text says judicial authority — courts, judges, tribunals. Not lawyers. Not tax advisors. Not accountants.

A tax research AI tool used by a professional advisor to research provisions and prepare advice is not assisting a judicial authority. It is assisting a professional in their preparatory work. This distinction matters legally, and it is one the European Commission’s forthcoming classification guidelines (February 2026) are expected to clarify.

Moreover, Article 6(3) provides an explicit exemption pathway. An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk to health, safety, or fundamental rights, including where it:

  - performs a narrow procedural task;
  - improves the result of a previously completed human activity;
  - detects decision-making patterns or deviations from prior patterns, without replacing or influencing a human assessment; or
  - performs a task preparatory to an assessment relevant for the Annex III use cases.

A research tool that retrieves and organises legal provisions for a professional to interpret and apply fits this description. The professional's judgment, not the AI's output, determines the advice. The AI performs preparatory work; the human makes the decision.

One critical exception: Article 6(3) exemptions never apply if the AI system performs profiling of natural persons. If a tax AI tool personalises recommendations based on individual client characteristics, it could fall into high-risk classification regardless of the exemption pathway. This is the single most important design constraint for legal AI providers.

What this means for Belgian practice specifically

Belgium’s implementation of the AI Act reflects its federal structure, with the telecoms regulator BIPT designated as market surveillance authority.

Crucially, the AI Act is a maximum harmonisation regulation. Belgium cannot introduce its own additional AI rules. The framework is uniform across the EU — which means compliance in Belgium equals compliance everywhere.

The Belgian bar’s position

The OVB (Flemish bar association) and NOvA (Dutch bar association) published joint AI guidelines through alice.law that take a permissive but careful stance: AI may support a lawyer's work, provided the lawyer verifies the output and retains full professional responsibility for the advice.

The CCBE (Council of Bars and Law Societies of Europe) reinforced this with its October 2025 guide on generative AI use by lawyers, establishing a pan-European framework that emphasises professional responsibility over prohibition.

The Belgian accountant position

The ITAA-Barometer 2025 reveals a profession leaning into AI rather than away from it, with members seeing AI primarily as an opportunity to manage growing regulatory complexity rather than as a threat to their role.

ITAA President Bart Van Coile stated that members “feel the pressure of increasingly complex regulations but remain the anchor of trust” for their clients. This framing — AI as a tool that enhances the trusted advisor role, not one that threatens it — aligns precisely with the AI Act’s regulatory philosophy.

The AI literacy obligation: what it actually requires

Article 4 is the provision most Belgian professionals overlook, even though it is already in force. It requires that providers and deployers ensure staff have a “sufficient level of AI literacy”, taking into account their “technical knowledge, experience, education, and training” as well as the “context the AI systems are to be used in.”

In practice, this means:

  1. Your staff should understand how the AI tools they use work at a conceptual level — not the technical architecture, but the general approach (retrieval-based vs. generative, what “confidence scoring” means, why sources matter)
  2. Your staff should understand the tool’s limitations — what it cannot do, where it makes errors, when to verify independently
  3. Your staff should understand the professional responsibility framework — the AI outputs a draft, the professional owns the advice
  4. Documentation — internal training records are sufficient. No formal certification is required
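Since internal training records are all the documentation Article 4 demands, keeping them can be trivially simple. A sketch of a CSV-based training log, where the field names and record format are our illustrative assumptions, not a prescribed schema:

```python
import csv
from datetime import date
from io import StringIO

# Illustrative training-record schema: who was trained, on which tool,
# covering which topics, and when. A plain CSV file is sufficient.
FIELDS = ["staff_member", "tool", "topics", "trained_on"]

def record_row(staff, tool, topics):
    """Build one training-log row; topics is a list of subjects covered."""
    return {"staff_member": staff, "tool": tool,
            "topics": "; ".join(topics),
            "trained_on": date.today().isoformat()}

buf = StringIO()  # stands in for the practice's training-log file
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(record_row("A. Janssens", "tax research assistant",
                           ["limitations", "source verification",
                            "professional responsibility"]))
```

A spreadsheet maintained by office management does the same job; the only requirement is that the record exists and can be produced if asked.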

No standalone fine exists for failing Article 4. But it functions as a “major aggravating factor” in any regulatory investigation. If a problem arises and the investigation reveals no AI literacy training, every other penalty can be increased.

National enforcement of AI literacy begins in August 2026 through market surveillance authorities — in Belgium, that means BIPT.

The GDPR intersection

For Belgian tax professionals, the AI Act does not operate in isolation. Any AI tool processing personal data — client names, financial information, tax positions — must comply with both the AI Act and GDPR simultaneously.

Key interaction points:

  - Legal basis and transparency: GDPR continues to govern any personal data an AI tool processes; the AI Act adds transparency duties on top of it, not in place of it
  - Automated decisions: GDPR Article 22 restricts solely automated decisions with legal effects, which dovetails with the AI Act's human oversight requirement
  - Impact assessments: a GDPR data protection impact assessment (Article 35) and AI Act risk documentation overlap substantially; one exercise can feed the other

For professional tax practice: keep personal data out of prompts where possible, pseudonymise where not, and ensure your AI provider has documented GDPR compliance for their data processing. These are deployer obligations that require no legal expertise to implement — just awareness and discipline.
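The pseudonymisation step can be as simple as swapping known client names for stable tokens before any text leaves your systems. A minimal sketch, where the client list, token format, and hash truncation are all illustrative assumptions:

```python
import hashlib
import re

# Illustrative client register; in practice this would come from the
# firm's own client database.
CLIENT_NAMES = ["Jan Peeters", "Marie Dubois"]

def pseudonymise(text, names=CLIENT_NAMES):
    """Replace each known client name with a stable, hash-derived token."""
    for name in names:
        token = "CLIENT_" + hashlib.sha256(name.encode()).hexdigest()[:8]
        text = re.sub(re.escape(name), token, text)
    return text

prompt = "Does Jan Peeters qualify for the reduced rate under Article 269?"
safe_prompt = pseudonymise(prompt)
assert "Jan Peeters" not in safe_prompt
```

The same token always stands for the same client, so the answer remains usable internally, while the name itself never reaches the AI provider.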

The penalty framework

The numbers are large enough to warrant attention:

  - Prohibited AI practices: up to €35 million or 7% of global revenue (whichever is higher)
  - High-risk system obligations: up to €15 million or 3% of global revenue
  - Misleading information to authorities: up to €7.5 million or 1% of global revenue

For SMEs, the lower of the two amounts applies. For larger organisations, the higher amount. These are maximum penalties — actual enforcement will likely be proportionate and graduated, similar to GDPR’s track record.

The practical risk for deployers (professional practices using AI tools) is minimal for the simple reason that deployer obligations are minimal. Train your staff, maintain oversight, keep logs, follow instructions. If you do these four things, the penalty framework is effectively irrelevant to your practice.

The real compliance question

The most important insight from the AI Act is not what it prohibits. It is what it legitimises.

When a regulation establishes explicit compliance criteria for AI tools in professional contexts, it simultaneously creates two categories: tools that meet the criteria and tools that do not. General-purpose chatbots used for tax research — with no source transparency, no audit trails, no documented accuracy, no human oversight design — will find it increasingly difficult to justify their use in professional practice as the regulatory framework matures.

Purpose-built legal AI tools designed with transparency, source citation, confidence scoring, and human-in-the-loop architecture are not just better tools. They are the compliant tools. The AI Act does not ban AI in professional practice — it bans bad AI in professional practice.

The OVB already positions AI as a legitimate supporting tool. The ITAA already sees AI as an opportunity. The regulatory framework now provides the compliance structure that turns professional interest into professional practice.

A practical compliance checklist for Belgian tax practices

Already required (since February 2025):

  - Deliver and document AI literacy training for every staff member who uses AI tools
  - Confirm that none of your tools falls under the prohibited practices of Article 5

Required by August 2026:

  - Human oversight: a qualified professional reviews every AI output before it reaches a client
  - Retain system-generated logs for at least six months
  - Use each tool as intended and follow the provider's instructions for use

Ongoing best practice:

  - Keep personal data out of prompts where possible; pseudonymise where not
  - Verify your AI provider's documented GDPR compliance
  - Prefer tools with source citations, confidence scoring, and audit trails



How Auryth TX applies this

Auryth TX is designed for AI Act compliance by architecture, not by afterthought.

Every research query produces a fully sourced response with article-level citations, confidence scoring, and a complete audit trail. The human-in-the-loop design means the professional always makes the final interpretive decision — the tool retrieves, organises, and flags; the professional judges, advises, and takes responsibility.

Source transparency is not a feature — it is the foundation. Every provision cited links to its source. Every confidence score reflects the retrieval quality and coverage depth. Every research session generates a documented trail suitable for professional records.

For the AI literacy obligation: Auryth’s interface is designed to teach as it works. Confidence indicators, source provenance, domain coverage maps, and temporal version flags make the tool’s reasoning visible — turning every research session into implicit literacy training.

No personal data enters the system by design. Queries are about legal provisions, not about clients. The architecture eliminates the GDPR intersection before it arises.

At €99/month, compliance infrastructure that meets the AI Act’s deployer requirements costs less than a single hour of non-compliance risk.

Research faster. Cover more. Advise better — compliantly.


Sources:

  1. European Parliament and Council (2024). Regulation (EU) 2024/1689 (AI Act). Official Journal of the European Union.
  2. OVB (2025). “AI-richtlijnen voor advocaten.” ovb.be. See also: Alice.law (2025). “AI guidelines for lawyers in Belgium and the Netherlands.”
  3. CCBE (2025). “Guide on the use of generative AI by lawyers.” ccbe.eu.
  4. ITAA (2026). “ITAA-Barometer 2025.” blogitaa.be.
  5. European Commission (2025). “General-Purpose AI Code of Practice.” digital-strategy.ec.europa.eu.
  6. Ooms, W. & Gils, T. (2025). “Implementing the AI Act in Belgium: Scope of Application and Authorities.” SSRN.
  7. ICT Rechtswijzer (2025). “The AI Act — Belgian implementation.” ictrechtswijzer.be.