EU AI Act Website Preparation Checklist
The AI Act is here. What does this mean for chatbots, AI-generated texts, and personalization on your website? Labeling obligations and risk classes, explained.
The EU AI Act: Not a "Paper Tiger", But Reality
Passed in 2024, fully applicable from 2026: the EU AI Act (AI Regulation) is the world's first comprehensive law regulating Artificial Intelligence. Many website operators think: "This only affects OpenAI or Google." Wrong. As soon as you deploy AI on your website, you count as a "Deployer" under the Act and are affected. And "deployment" starts early:
- The support chatbot? Affected.
- The AI-generated product texts? Affected.
- The automated credit check in checkout? High risk.
Anyone ignoring the rules risks fines of up to 35 million Euro or 7% of global annual turnover, whichever is higher (more than under the GDPR!).
In short: the EU AI Act divides AI systems into risk classes. Two are relevant for typical websites:
1. Limited risk (chatbots, deepfakes): a strict transparency obligation applies. Users must know they are interacting with a machine ("I am a bot").
2. Minimal risk (spam filters, AI search functions): hardly any requirements apply.
In addition, AI-generated content must increasingly be marked as such (watermarking).
The Cost of Inaction: The Transparency Shock
Imagine a customer complains about your chatbot because they thought it was a human. They report this to the supervisory authority. Under Art. 50 AI Act (Transparency Obligations), you have a problem. If you didn't make it obvious that it's an AI, you are acting illegally. The customer's trust is gone anyway ("They deceived me").
Action Required: Check all "human" interfaces on your website.
- Is the bot named "Anna"? Does it have a photo of a woman?
- If yes: add a clearly visible label, e.g. in bold: "AI Assistant".
The 3 Duties for Website Operators
Chatbots & Emotion AI (Art. 50)
If an AI system interacts with a human (Chatbot, Voice-Bot), the user must be informed.
- The Rule: It must be immediately recognizable to an average user that they are not speaking with a human.
- Implementation: "Hello, I am the digital assistant of Company XY." Never start the chat with a deception.
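As a minimal sketch of that implementation rule, a custom chat widget could compose its opening message so the disclosure always comes first (the function and parameter names below are illustrative, not a real chatbot API):

```typescript
// Sketch: compose a chatbot's opening message so the AI disclosure
// required by Art. 50 AI Act comes first, before any persona or small talk.
// buildGreeting and its parameters are illustrative, not a real widget API.
function buildGreeting(companyName: string, botName?: string): string {
  const persona = botName
    ? `${botName}, the digital assistant`
    : "the digital assistant";
  return `Hello, I am ${persona} of ${companyName}. ` +
         `You are chatting with an AI, not a human.`;
}

// Example: even a bot with a human-sounding name stays clearly labeled.
console.log(buildGreeting("Company XY", "Anna"));
// "Hello, I am Anna, the digital assistant of Company XY. You are chatting with an AI, not a human."
```

The point of the design: the disclosure is baked into the greeting itself, so no later persona tweak ("Anna", avatar photo) can accidentally remove it.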
AI-Generated Content (Deepfakes / Synthetic Media)
If you use AI to create images, videos, or audio that appear "real" (Deepfakes), you must label them.
- Does this apply to blog texts? Currently a grey area, but Best Practice: "This text was created with the help of AI and reviewed by experts."
- Does this apply to images? Yes, if they show persons who do not exist. Label it ("AI-generated image") in the Alt-Text or caption.
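One way to bake that label into the markup, sketched in TypeScript for a site that renders image HTML server-side (`figureMarkup` is a hypothetical helper, not part of any framework):

```typescript
// Sketch: emit <figure> markup that discloses AI generation in both the
// alt text and the visible caption. figureMarkup is a hypothetical helper.
function figureMarkup(src: string, alt: string, aiGenerated: boolean): string {
  const label = aiGenerated ? " (AI-generated image)" : "";
  return `<figure>` +
         `<img src="${src}" alt="${alt}${label}">` +
         `<figcaption>${alt}${label}</figcaption>` +
         `</figure>`;
}
```

Putting the label in both the alt text and the caption keeps the disclosure visible to screen-reader users and sighted users alike.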
High-Risk AI Systems
This rarely affects you, but when it does, it's severe. Do you use AI in HR (automatic applicant sorting)? Do you use AI for creditworthiness (scoring) or insurance premiums? These are high-risk systems (Annex III). Here, together with the provider, you must ensure:
- Risk management system.
- Data governance (no bias in training data).
- Human oversight (Human in the Loop).
- CE marking for the AI.
Myth-Busting: "I Only Use ChatGPT, I'm Not Responsible"
Many companies say: "We don't program AI, we only use the API from OpenAI. The responsibility lies with OpenAI." Wrong. The AI Act distinguishes between the Provider (manufacturer, e.g., OpenAI) and the Deployer (operator, e.g., you). Even as a Deployer, you have duties: you must ensure that your employees use the AI as intended (per the provider's instructions for use), and you are responsible for the output on your site. Liability for AI hallucinations lies with you, not with ChatGPT!
Strategic Opportunity: The "Human Quality Seal"
In a flood of AI content, "humanity" becomes a premium feature. Turn the tables. Label not only AI, but label especially the Human. "This article was written by [Name], a real specialist lawyer." "Our support is handled by real humans in Munich."
The AI Act forces transparency. Use this transparency to show where you still put real heart and soul (and brainpower) into it.
Unasked Question: "What About AI Copyright?"
Can I use AI images on my website without violating copyright? The AI Act does not primarily regulate this (copyright law does). But: the AI Act requires providers of general-purpose AI models to publish summaries of the data they trained on. The risk in 2026: if courts decide that models like Midjourney were built on "stolen" data, their output images could become legally unsafe. Advice: for critical brand assets (logo, key visuals), still use human designers or "clean AI" models trained on licensed data (like Adobe Firefly).
FAQ: EU AI Act
When does what apply?
The Act entered into force in August 2024. Bans (e.g., social scoring) applied after 6 months (February 2025). Most other obligations, including the transparency rules for chatbots, apply after 24 months, i.e., from August 2026.
Does the law also apply to Swiss companies?
De jure no (Non-EU). De facto yes (Brussels Effect). If you address customers in the EU (offer goods/services), you must comply with the AI Act (Marketplace Principle). Just like with GDPR.
Do I need a privacy policy for AI?
Yes. If you use AI that processes personal data (e.g., analyses chatbot input), you must make this transparent in the Privacy Policy ("We use OpenAI API to process your requests..."). Check if data flows to the USA!
MyQuests Legal-Tech
Founder & Digital Strategist
Olivier Jacob is the founder of MyQuests Website Management, a Hamburg-based digital agency specializing in comprehensive web solutions. With extensive experience in digital strategy, web development, and SEO optimisation, Olivier helps businesses transform their online presence and achieve sustainable growth. His approach combines technical expertise with strategic thinking to deliver measurable results for clients across various industries.
