Expert explains: Common myths about the AI Act
Starting August 2, 2025, providers of general-purpose AI models must comply with the transparency obligations of the EU Artificial Intelligence Act (AI Act).
23.07.2025.
Krete Paal, CEO of the Estonian privacy tech startup GDPR Register, highlights the most common fears and myths surrounding the regulation - and explains what companies need to be doing today.
The EU Artificial Intelligence Act is the world's first comprehensive regulation designed to make the use of AI safe, responsible, and respectful of fundamental rights. While the goal is commendable, there's still widespread uncertainty, fear, and misinformation surrounding it. "It's been a year since the AI Act was adopted, but in Europe and in Estonia we're still seeing a wave of anxious questions and exaggerated interpretations. Some have even wondered if the regulation bans AI altogether or if businesses need to leave the EU to keep innovating," says Paal.
She outlines five of the most common myths related to the AI Act.
Myth 1: The AI Act will kill innovation
It won't kill innovation - it will guide it to become smarter, more transparent, and more human-centered. "This reminds me of 2018 when GDPR came into force. Back then, there was also a lot of confusion and panic. Some companies shut down their websites 'just in case,' others overspent on legal audits. Meanwhile, smart businesses turned privacy into a competitive edge," says Paal.
Likewise, the AI Act presents a strategic opportunity for forward-thinking businesses. By taking a proactive approach to risk analysis and transparency, companies can build trust and ensure their license to operate in the future.
Myth 2: The AI Act bans AI
Not at all. The AI Act is not a list of bans. Most familiar AI tools - such as chatbots, marketing tools, and analytics systems - are classified as low- or limited-risk technologies.
"That means their use is allowed with minimal added conditions. For instance, it will be mandatory to inform users that they are interacting with an AI-based tool," explains Paal.
The Act prohibits only a very narrow set of unacceptable-risk practices, such as real-time biometric mass surveillance in public spaces without legal grounds, or manipulative AI systems that compromise human free will. "These are extreme edge cases, not everyday tools," she adds.
Myth 3: All AI products are high risk
Only specific use cases are defined as "high-risk" under the AI Act - for example, AI used to pre-select job applicants, automated grading of exams, or AI systems in law enforcement and border control.
Meanwhile, tools used for content generation, social media personalization, or production line optimization are not considered high-risk. In addition, the Act clearly outlines how high-risk systems can remain compliant - through risk assessments, documentation, and transparent design practices.
Myth 4: Generative AI will disappear
Paal points out that despite popular belief, generative AI is not going away - but it will become more accountable. Tools like ChatGPT fall under the Act's rules for general-purpose AI (GPAI) models. That means they must meet new transparency and content disclosure requirements.
"Users must be clearly informed when content is AI-generated. Where possible, providers will also need to disclose the datasets used to train the model," says Paal. This is crucial in an era of deepfakes and misinformation, where AI-generated content can be both realistic and misleading. These rules help protect users and build trust in the technology.
Myth 5: AI compliance is just an IT issue
A dangerous myth is that AI compliance only concerns developers, data scientists, or technical leads. "In reality, the regulation affects multiple business functions - marketing, product development, customer service, legal, and leadership," Paal emphasizes.
Marketing teams must know when they're legally required to disclose AI-generated content. Product teams must assess whether new AI features may fall under high-risk categories. Leadership is responsible for ensuring the company has internal risk management systems and compliance processes in place.
"The AI Act isn't just a technical manual - it's a strategic framework. It demands cross-functional cooperation and awareness. Think of AI as a team sport: everyone needs to know the rules," Paal explains.
What should companies be doing today?
Map your AI use - Identify which AI systems you use or develop. Don't wait for full enforcement; some provisions have been in effect since early 2025.
Understand your risk category - Determine which risk category the systems you use fall under.
Build transparency now - Be clear with users and document your AI processes. Clients and investors are increasingly evaluating trust and governance.
Don't act out of fear - act on facts. Stay informed and seek expert guidance if needed.
GDPR Register is an Estonian startup developed in collaboration with IT experts to make GDPR compliance simple, logical, and efficient. The platform helps companies and public sector organizations streamline and manage all GDPR-related processes and documentation.