The EU AI Act and legal AI: what Belgian tax professionals actually need to know
The AI Act entered into force on 1 August 2024. Some provisions are already active. Most articles confuse provider obligations with deployer obligations — here's the distinction that matters for your practice.
By Auryth Team
The EU AI Act is already in force. Not in the future: since 1 August 2024. Prohibited AI practices have been banned since February 2025. AI literacy obligations apply right now. And in August 2026, six months from now, enforcement of high-risk system obligations begins, under a penalty framework that tops out at €35 million or 7% of global revenue.
If you are a Belgian tax professional using AI tools in your practice, every article you have read about this regulation probably scared you more than it should have. Here is why: almost everything written about the AI Act confuses what providers must do with what deployers must do. The difference is not academic. It is the difference between €330,000 in compliance costs and essentially zero.
The distinction that changes everything: provider vs. deployer
The AI Act creates two fundamentally different roles with fundamentally different obligations.
Providers build AI systems. OpenAI, Anthropic, Google, Auryth — anyone who develops, trains, or places an AI system on the market. Providers of high-risk AI systems must conduct conformity assessments, implement quality management systems, maintain extensive technical documentation, register their systems in the EU database, and ensure accuracy, robustness, and cybersecurity.
Deployers use AI systems in a professional context. That is you. Law firms, accounting firms, tax advisory practices, corporate tax departments — anyone using an AI tool as part of their professional work.
Your obligations as a deployer are lighter, more practical, and significantly less expensive:
| Obligation | What it means in practice |
|---|---|
| AI literacy (Article 4) | Ensure your staff understands the basics of how the AI tools they use work, their limitations, and their risks. Internal training suffices — no certification required |
| Human oversight (Article 26) | A qualified professional reviews AI output before it reaches clients. If you are already doing this — and you should be — you are already compliant |
| Log retention (Article 26(6)) | Keep system-generated logs for a minimum of six months |
| Input data relevance | Use the tool as intended, with appropriate input data |
| Follow instructions for use | Read and follow the provider’s documentation |
That is it. No conformity assessment. No quality management system. No €193,000–€330,000 setup cost. The compliance burden for deployers is training, oversight, and record-keeping — activities that any competent professional practice should already be performing.
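To make the record-keeping side concrete, here is a minimal sketch of what a deployer-side usage log could look like: one record per AI-assisted session, purged only once past the six-month minimum. Everything in it (file name, field names, helpers) is an illustrative assumption; only the six-month floor comes from the Act.

```python
# Minimal sketch of deployer-side record-keeping: one JSON line per
# AI-assisted session, purged only once past the six-month minimum.
# File name, field names, and helpers are illustrative assumptions.
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_FILE = Path("ai_usage_log.jsonl")   # hypothetical location
RETENTION = timedelta(days=183)         # keep at least six months

def record_session(user: str, tool: str, summary: str, reviewed_by: str) -> None:
    """Append one usage record; 'reviewed_by' evidences human oversight."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "summary": summary,             # no client personal data in here
        "reviewed_by": reviewed_by,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def purge_expired() -> None:
    """Drop only records older than the retention window."""
    if not LOG_FILE.exists():
        return
    cutoff = datetime.now(timezone.utc) - RETENTION
    kept = [
        line
        for line in LOG_FILE.read_text(encoding="utf-8").splitlines()
        if datetime.fromisoformat(json.loads(line)["timestamp"]) >= cutoff
    ]
    LOG_FILE.write_text("\n".join(kept) + ("\n" if kept else ""), encoding="utf-8")
```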

What is already in force
The AI Act’s enforcement follows a phased timeline. Some provisions are already active:
Since February 2, 2025:
- Prohibited AI practices are banned. Social scoring, manipulative subliminal techniques, exploitation of vulnerabilities, real-time remote biometric identification in public spaces (with limited exceptions). None of these affect tax AI research tools.
- AI literacy obligations apply. Article 4 requires both providers and deployers to ensure a “sufficient level of AI literacy” among staff who operate or are affected by AI systems. This is in force now — not August 2026.
Since August 2, 2025:
- General-purpose AI (GPAI) model obligations apply. This affects providers of foundation models — OpenAI for ChatGPT, Anthropic for Claude, Google for Gemini. They must maintain technical documentation, comply with copyright rules, and publish training data summaries. As a deployer, you have no GPAI-specific obligations. This is the provider’s burden.
Coming February 2, 2026:
- The European Commission publishes classification guidelines with practical examples for determining whether an AI system qualifies as high-risk. This guidance arrives six months before high-risk enforcement, giving the market a final window to adjust.
Coming August 2, 2026:
- Full high-risk AI system obligations enter into force. Transparency requirements under Article 50 apply. National authorities (in Belgium, the BIPT, the Centre for Cybersecurity Belgium, and the Labour Inspection) begin enforcement.
The high-risk question: does legal AI qualify?
This is where the discussion gets genuinely complex — and where most commentary oversimplifies.
Annex III of the AI Act lists categories of AI systems considered high-risk. Point 8 covers the “administration of justice and democratic processes” and specifically includes AI systems “intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”
Read that carefully. The text says judicial authority — courts, judges, tribunals. Not lawyers. Not tax advisors. Not accountants.
A tax research AI tool used by a professional advisor to research provisions and prepare advice is not assisting a judicial authority. It is assisting a professional in their preparatory work. This distinction matters legally, and it is one the European Commission’s forthcoming classification guidelines (February 2026) are expected to clarify.
Moreover, Article 6(3) provides an explicit exemption pathway. An AI system listed in Annex III is not considered high-risk if it does not "materially influence" the outcome of decision-making, for instance because it:
- Performs only a "narrow procedural task"
- Improves the result of a "previously completed human activity"
- Carries out a preparatory task ahead of a human assessment
A research tool that retrieves and organises legal provisions for a professional to interpret and apply fits this description. The professional’s judgment — not the AI’s output — determines the advice. The AI performs preparatory work; the human makes the decision.
One critical exception: Article 6(3) exemptions never apply if the AI system performs profiling of natural persons. If a tax AI tool personalises recommendations based on individual client characteristics, it could fall into high-risk classification regardless of the exemption pathway. This is the single most important design constraint for legal AI providers.
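As a stylised sketch of how these pieces fit together (a simplified reading of the Act, not legal analysis; the flag names paraphrase the provisions discussed above):

```python
# Stylised sketch of the Article 6(3) exemption logic described above.
# Flag names paraphrase the Act's conditions; this is not legal advice.
def exempt_from_high_risk(
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    preparatory_task_only: bool,
    materially_influences_outcome: bool,
    profiles_natural_persons: bool,
) -> bool:
    if profiles_natural_persons:        # profiling always defeats the exemption
        return False
    if materially_influences_outcome:   # the system must stay preparatory
        return False
    # Any one of the listed conditions is enough
    return (narrow_procedural_task
            or improves_prior_human_activity
            or preparatory_task_only)

# A research tool that retrieves provisions for a professional to interpret:
print(exempt_from_high_risk(
    narrow_procedural_task=True,
    improves_prior_human_activity=False,
    preparatory_task_only=True,
    materially_influences_outcome=False,
    profiles_natural_persons=False,
))  # True
```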
What this means for Belgian practice specifically
Belgium’s implementation of the AI Act reflects its federal structure:
- BIPT (Belgian Institute for Postal Services and Telecommunications) serves as market surveillance authority and single point of contact
- Centre for Cybersecurity Belgium handles cybersecurity aspects
- Labour Inspection covers AI in workplace contexts
- COC (Supervisory Body for Police Information) covers law enforcement AI
Crucially, the AI Act is a maximum harmonisation regulation. Belgium cannot introduce its own additional AI rules. The framework is uniform across the EU — which means compliance in Belgium equals compliance everywhere.
The Belgian bar’s position
The OVB (Flemish bar association) and NOvA (Dutch bar association) published joint AI guidelines through alice.law that take a permissive but careful stance:
- AI use is “neither prohibited nor mandatory” — it falls within the lawyer’s professional “freedom and responsibility”
- As a rule, personal data should not be entered into AI tool prompts
- Where personal data cannot be avoided, lawyers should pseudonymise it before entering it into AI tools
- Lawyers must read and understand the AI tool’s terms of use: data training policies, data transfer and storage locations, whether the system is open or closed, liability provisions, and IP rules
- The lawyer remains fully responsible for any AI-generated output used in professional work
The CCBE (Council of Bars and Law Societies of Europe) reinforced this with its October 2025 guide on generative AI use by lawyers, establishing a pan-European framework that emphasises professional responsibility over prohibition.
The Belgian accountant position
The ITAA-Barometer 2025 reveals a profession leaning into AI rather than away from it:
- 71% of Belgian accountants view AI as an opportunity for personalised advice
- 80% believe it is their professional duty to guide clients through digital transition
- 59% want to invest in secure data management solutions
ITAA President Bart Van Coile stated that members “feel the pressure of increasingly complex regulations but remain the anchor of trust” for their clients. This framing — AI as a tool that enhances the trusted advisor role, not one that threatens it — aligns precisely with the AI Act’s regulatory philosophy.
The AI literacy obligation: what it actually requires
Article 4 is the provision most Belgian professionals overlook, even though it is already in force. It requires that providers and deployers ensure staff have a "sufficient level of AI literacy" taking into account their "technical knowledge, experience, education, and training" as well as the "context the AI systems are to be used in."
In practice, this means:
- Your staff should understand how the AI tools they use work at a conceptual level — not the technical architecture, but the general approach (retrieval-based vs. generative, what “confidence scoring” means, why sources matter)
- Your staff should understand the tool’s limitations — what it cannot do, where it makes errors, when to verify independently
- Your staff should understand the professional responsibility framework — the AI outputs a draft, the professional owns the advice
- Documentation — internal training records are sufficient. No formal certification is required
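A record format can be as simple as a spreadsheet. Here is an illustrative sketch; every file and column name is an assumption, not a regulatory requirement.

```python
# Illustrative Article 4 training record: a plain CSV is enough to
# evidence internal training. File and column names are assumptions.
import csv
from datetime import date
from pathlib import Path

FIELDS = ["staff_member", "role", "date", "tool", "topics", "trainer"]

def log_training(path: str, rows: list[dict]) -> None:
    """Append training entries, writing a header if the file is new."""
    is_new = not Path(path).exists()
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerows(rows)

log_training("ai_literacy_training.csv", [{
    "staff_member": "A. Peeters",   # hypothetical
    "role": "tax advisor",
    "date": date.today().isoformat(),
    "tool": "legal research assistant",
    "topics": "limitations; source verification; human oversight",
    "trainer": "internal",
}])
```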
No standalone fine exists for failing Article 4. But it functions as a “major aggravating factor” in any regulatory investigation. If a problem arises and the investigation reveals no AI literacy training, every other penalty can be increased.
National enforcement of AI literacy begins in August 2026 through market surveillance authorities — in Belgium, that means BIPT.
The GDPR intersection
For Belgian tax professionals, the AI Act does not operate in isolation. Any AI tool processing personal data — client names, financial information, tax positions — must comply with both the AI Act and GDPR simultaneously.
Key interaction points:
- Data minimisation applies to both frameworks. The OVB guidance’s requirement to pseudonymise data before entering it into AI tools is not just professional best practice — it is GDPR-aligned
- Automated decision-making under GDPR Article 22 and the AI Act’s profiling rules create overlapping obligations. If your AI tool makes or materially influences decisions about individuals, both frameworks apply in full
- The Commission's Digital Omnibus proposal (November 2025) would amend the GDPR to recognise AI training as a "legitimate interest", partially resolving one of the key tensions between the two frameworks
For professional tax practice: keep personal data out of prompts where possible, pseudonymise where not, and ensure your AI provider has documented GDPR compliance for their data processing. These are deployer obligations that require no legal expertise to implement — just awareness and discipline.
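As a minimal sketch of the "pseudonymise where not" step, assuming a simple regex approach (the patterns shown are illustrative and far from exhaustive; real practice needs broader coverage of names, addresses, and company numbers, plus a reversible mapping kept on the firm's side):

```python
# Minimal sketch of prompt pseudonymisation before an AI call.
# Patterns are illustrative only; production use needs far broader coverage.
import re

PATTERNS = {
    # Belgian national number: YY.MM.DD-XXX.CC (dotted or undotted)
    "NATIONAL_NO": re.compile(r"\b\d{2}\.?\d{2}\.?\d{2}-?\d{3}\.?\d{2}\b"),
    # Belgian IBAN: BE + 14 digits, optionally grouped
    "IBAN": re.compile(r"\bBE\d{2}(?: ?\d{4}){3}\b"),
    # Email addresses
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def pseudonymise(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace identifiers with placeholders; return text plus local mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        # dict.fromkeys dedupes repeated matches while preserving order
        for i, match in enumerate(dict.fromkeys(pattern.findall(prompt)), start=1):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

safe, mapping = pseudonymise(
    "Client 85.07.30-033.61 (jan@example.be) asks about the deduction."
)
print(safe)  # identifiers replaced; the mapping never leaves the firm
```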
The penalty framework
The numbers are large enough to warrant attention:
| Violation | Maximum penalty |
|---|---|
| Prohibited AI practices | €35 million or 7% of global revenue (whichever is higher) |
| High-risk system obligations | €15 million or 3% of global revenue |
| Misleading information to authorities | €7.5 million or 1% of global revenue |
For SMEs, the lower of the two amounts applies. For larger organisations, the higher amount. These are maximum penalties — actual enforcement will likely be proportionate and graduated, similar to GDPR’s track record.
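To make the SME rule concrete, here is a small worked sketch with a hypothetical revenue figure:

```python
# Worked sketch of the penalty caps in the table above. For SMEs the cap
# is the LOWER of the fixed amount and the revenue percentage; for larger
# organisations it is the HIGHER. The revenue figure is hypothetical.
CAPS = {  # (fixed amount in EUR, share of global annual revenue)
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, revenue: float, is_sme: bool) -> float:
    fixed, share = CAPS[violation]
    pick = min if is_sme else max
    return pick(fixed, share * revenue)

# A small practice with EUR 2m revenue: the SME cap for a high-risk breach
# is min(15_000_000, 0.03 * 2_000_000) = EUR 60,000, not EUR 15 million.
print(max_fine("high_risk_obligations", 2_000_000, is_sme=True))   # 60000.0
print(max_fine("high_risk_obligations", 2_000_000, is_sme=False))  # 15000000
```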
The practical risk for deployers (professional practices using AI tools) is minimal for the simple reason that deployer obligations are minimal. Train your staff, maintain oversight, keep logs, follow instructions. If you do these four things, the penalty framework is effectively irrelevant to your practice.
The real compliance question
The most important insight from the AI Act is not what it prohibits. It is what it legitimises.
When a regulation establishes explicit compliance criteria for AI tools in professional contexts, it simultaneously creates two categories: tools that meet the criteria and tools that do not. General-purpose chatbots used for tax research — with no source transparency, no audit trails, no documented accuracy, no human oversight design — will find it increasingly difficult to justify their use in professional practice as the regulatory framework matures.
Purpose-built legal AI tools designed with transparency, source citation, confidence scoring, and human-in-the-loop architecture are not just better tools. They are the compliant tools. The AI Act does not ban AI in professional practice — it bans bad AI in professional practice.
The OVB already positions AI as a legitimate supporting tool. The ITAA already sees AI as an opportunity. The regulatory framework now provides the compliance structure that turns professional interest into professional practice.
A practical compliance checklist for Belgian tax practices
Already required (since February 2025):
- Ensure staff understand the AI tools they use (AI literacy, Article 4)
- Document internal training on AI capabilities and limitations
Required by August 2026:
- Implement human oversight procedures for AI-assisted advice
- Maintain AI system logs for minimum six months
- Ensure AI tools used are deployed according to provider instructions
- Review provider documentation for compliance claims
Ongoing best practice:
- Pseudonymise client data before entering into AI prompts
- Verify AI output before delivering to clients
- Keep records of AI-assisted research for audit purposes
- Monitor European Commission classification guidance (February 2026)
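For practices that want to operationalise this, the checklist translates directly into a dated tracker. A minimal sketch (the deadlines come from the Act's timeline; the task wording mirrors the list above, and the structure is illustrative):

```python
# Minimal sketch turning the checklist above into a dated tracker.
# Deadlines follow the Act's timeline; structure is illustrative.
from datetime import date

OBLIGATIONS = [
    (date(2025, 2, 2), "AI literacy training documented (Article 4)"),
    (date(2025, 2, 2), "Staff understand tools' capabilities and limits"),
    (date(2026, 8, 2), "Human oversight procedure for AI-assisted advice"),
    (date(2026, 8, 2), "AI system logs retained for at least six months"),
    (date(2026, 8, 2), "Tools deployed per provider instructions"),
    (date(2026, 8, 2), "Provider compliance documentation reviewed"),
]

def overdue(today: date | None = None) -> list[str]:
    """List obligations whose deadline has already passed."""
    today = today or date.today()
    return [task for deadline, task in OBLIGATIONS if deadline <= today]

for task in overdue():
    print("DUE:", task)
```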
Related articles
- “I don’t trust AI for tax advice” — and you’re right. Here’s why you should try it anyway. →
- How to evaluate a legal AI tool: 10 questions that actually matter →
- What is confidence scoring — and why it’s more honest than a confident answer →
How Auryth TX applies this
Auryth TX is designed for AI Act compliance by architecture, not by afterthought.
Every research query produces a fully sourced response with article-level citations, confidence scoring, and a complete audit trail. The human-in-the-loop design means the professional always makes the final interpretive decision — the tool retrieves, organises, and flags; the professional judges, advises, and takes responsibility.
Source transparency is not a feature — it is the foundation. Every provision cited links to its source. Every confidence score reflects the retrieval quality and coverage depth. Every research session generates a documented trail suitable for professional records.
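Purely as an illustration of what one entry in such an audit trail could contain (every name below is an assumption for illustration, not Auryth's actual data model):

```python
# Hypothetical illustration of an audit-trail entry for a research session.
# This is NOT Auryth's actual data model; all field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Citation:
    provision: str      # e.g. "WIB92 art. 90"
    source_url: str     # link back to the consolidated text
    version_date: str   # which temporal version was retrieved

@dataclass
class ResearchRecord:
    timestamp: datetime
    query: str                                       # legal question, no client data
    citations: list[Citation] = field(default_factory=list)
    confidence: float = 0.0                          # retrieval quality / coverage
    reviewed_by: str = ""                            # professional who signed off
```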
For the AI literacy obligation: Auryth’s interface is designed to teach as it works. Confidence indicators, source provenance, domain coverage maps, and temporal version flags make the tool’s reasoning visible — turning every research session into implicit literacy training.
No personal data enters the system by design. Queries are about legal provisions, not about clients. The architecture eliminates the GDPR intersection before it arises.
At €99/month, compliance infrastructure that meets the AI Act’s deployer requirements costs less than a single hour of non-compliance risk.
Research faster. Cover more. Advise better — compliantly.
Sources:
1. European Parliament and Council (2024). Regulation (EU) 2024/1689 (AI Act). Official Journal of the European Union.
2. OVB (2025). "AI-richtlijnen voor advocaten." ovb.be. See also: Alice.law (2025). "AI guidelines for lawyers in Belgium and the Netherlands."
3. CCBE (2025). "Guide on the use of generative AI by lawyers." ccbe.eu.
4. ITAA (2026). "ITAA-Barometer 2025." blogitaa.be.
5. European Commission (2025). "General-Purpose AI Code of Practice." digital-strategy.ec.europa.eu.
6. Ooms, W. & Gils, T. (2025). "Implementing the AI Act in Belgium: Scope of Application and Authorities." SSRN.
7. ICT Rechtswijzer (2025). "The AI Act — Belgian implementation." ictrechtswijzer.be.