European Commission Guidelines on Prohibited Artificial Intelligence Practices

Dr Clare Dixon

Regulation (EU) 2024/1689, the Artificial Intelligence (AI) Act, aims to promote innovation and the uptake of AI while ensuring a high level of protection of health, safety, and fundamental rights. It classifies AI systems into four risk categories: unacceptable risk, high risk, transparency risk, and minimal or no risk.

On 4 February 2025, the European Commission issued guidelines on the prohibited AI practices established by Regulation (EU) 2024/1689 (the AI Act). The guidelines are important for organisations and businesses operating AI systems in the EU and detail which AI practices are unacceptable.

A summary of the eight prohibited AI practices, examples of AI systems to which they apply, and systems that are out of scope is given below.

Update (January 2026): We have written about further clarification of how AI-enabled medical devices are regulated.

Prohibited AI practices:

1. Harmful Manipulation and Deception - Article 5(1)(a)

AI systems that employ subliminal, manipulative, or deceptive techniques that distort behaviour and cause significant harm are prohibited.

  • Subliminal techniques: Examples include AI systems that use visual and auditory subliminal messages, subvisual and subaudible cueing and embedded images that can unconsciously influence users.
  • Manipulative techniques: Examples include AI systems that purposefully manipulate users through background audio or images to induce mood changes such as increased anxiety or mental suffering resulting in significant harm.
  • Deceptive techniques: Examples include an AI chatbot that impersonates a friend or relative with a synthetic voice leading to scams and significant harm.

2. Harmful Exploitation of Vulnerabilities - Article 5(1)(b)

AI systems that exploit vulnerabilities due to age, disability, or socio-economic situations resulting in distortion of behaviour and significant harm are prohibited.

  • An example is an AI-powered toy that encourages children to complete increasingly risky challenges for digital rewards and virtual praise, potentially leading to dangerous behaviour and physical harm.
  • AI systems that use lawful persuasion rather than manipulation and that are not likely to cause significant harm are outside of the scope of Article 5(1)(a) and (b).
    • For example, an AI system that uses personalised recommendations based on transparent algorithms and user preferences engages in lawful persuasion.
    • For example, a therapeutic chatbot that uses subliminal techniques to guide users towards a healthier lifestyle, such as quitting smoking, is not likely to cause significant harm even if users experience physical discomfort and psychological stress from the effort of quitting.

3. Social Scoring - Article 5(1)(c)

AI systems that evaluate or classify individuals based on social behaviour or personal characteristics resulting in unjustified treatment are prohibited.

  • An example is an AI predictive tool that analyses taxpayers’ tax returns to select returns for closer inspection. The tool uses relevant variables such as yearly income as well as unrelated data such as a taxpayer’s social habits or internet connections to select individuals for closer inspection, leading to potential discrimination.
  • Out of scope legitimate scoring practices include financial credit scoring systems which are used by creditors and credit information agencies to determine a customer’s ability to repay debts by analysing a range of financial data such as the customer’s income and expenses.

4. Individual Criminal Offence Risk Assessment and Prediction - Article 5(1)(d)

AI systems that predict criminal behaviour based solely on profiling or personality traits, without objective human assessment, are prohibited.

  • An example is a law enforcement authority that uses an AI system to predict criminal behaviour based on personal characteristics such as age, nationality, address and marital status, leading to unjust profiling.
  • Out of scope AI systems include predictive policing systems which generate a score for the likelihood of criminality in different neighbourhoods based on previous criminality rates and other supporting information such as street maps, allowing law enforcement to identify areas that require additional police presence.

5. Untargeted Scraping of Facial Images - Article 5(1)(e)

Creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV footage is prohibited.

  • With respect to the internet, a person who has published facial images of themselves on social media has not thereby given permission for those images to be included in a facial recognition database. An example of scraping images from CCTV is an AI tool that uses images from surveillance cameras in public spaces, such as airports and streets, without consent.
  • Out of scope AI systems include databases that contain facial images but do not associate them with identifiable individuals e.g. datasets used solely for training or testing AI models without any intent to recognise or identify the persons in the images.

6. Emotion Recognition - Article 5(1)(f)

AI systems inferring emotions in workplaces and educational institutions, except for medical or safety reasons, are prohibited.

  • An example of ‘emotion recognition’ is an AI system that infers that an employee is unhappy, sad or angry with customers based on body gestures, a frown or the absence of a smile.
  • Out of scope are AI systems that infer emotions other than from biometric data (e.g. from written text) and AI systems that infer physical states (e.g. pain or tiredness).

7. Biometric Categorisation - Article 5(1)(g)

Categorising individuals based on biometric data to infer sensitive characteristics such as race, political opinions, or sexual orientation is prohibited.

  • An example is an AI system that categorises social media users by their presumed political orientation based on biometric data from uploaded photos to send them targeted political messages.
  • Examples of permissible filtering include the categorisation of patients using images according to skin or eye colour which may be important for a medical diagnosis such as cancer.

8. Real-time Remote Biometric Identification (RBI) - Article 5(1)(h)

The use of real-time RBI systems in publicly accessible spaces for law enforcement is prohibited with exceptions only for serious threats and criminal investigations.

The safeguards and conditions for these exceptions (Article 5(2)-(7) AI Act) are documented in the guidelines, which aim to ensure the responsible and ethical use of AI technologies while safeguarding fundamental rights and promoting trust in AI systems.

Overview of Prohibited AI Practices

Enforcement of Article 5 AI Act

The prohibitions in Article 5 AI Act have applied since 2 February 2025; the provisions on penalties, governance and confidentiality apply from 2 August 2025.

Market Surveillance Authorities are responsible for enforcing the rules of the AI Act, including the prohibitions. The AI Act employs a tiered approach to determine penalties for non-compliance, designed to match the severity of the infringement with a corresponding penalty. Non-compliance with the prohibitions in Article 5 AI Act is classified as the most serious infringement and is subject to the highest fine, which can be up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
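The "whichever is higher" rule for undertakings can be illustrated with a minimal sketch (the figures are from the AI Act; the function name and example turnovers are ours, chosen purely for illustration):

```python
def max_article5_fine(worldwide_turnover_eur: int) -> float:
    """Upper bound on the fine for an Article 5 infringement by an
    undertaking: the higher of EUR 35 million or 7% of total worldwide
    annual turnover for the preceding financial year."""
    percentage_cap = worldwide_turnover_eur * 7 / 100  # 7% of turnover
    return max(35_000_000.0, percentage_cap)

# Turnover of EUR 200 million: 7% is EUR 14 million, so the
# EUR 35 million fixed amount is the ceiling.
print(max_article5_fine(200_000_000))    # 35000000.0

# Turnover of EUR 1 billion: 7% is EUR 70 million, which exceeds
# the fixed amount and therefore sets the ceiling.
print(max_article5_fine(1_000_000_000))  # 70000000.0
```

Note that these figures are maximum fines; the actual penalty in a given case is set by the relevant authority according to the circumstances of the infringement.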

Conclusion

AI in healthcare, such as software as a medical device (SaMD), must be developed with stringent ethical standards to ensure patient safety and protection of fundamental rights. The guidelines remind developers and regulators alike of the importance of maintaining transparency and safeguarding against AI misuse. As the healthcare sector continues to integrate AI into medical devices, these guidelines will serve as a key framework for ensuring that AI-driven solutions prioritise the well-being of patients while promoting innovation and trust in the healthcare system.

If you need guidance on navigating AI challenges, contact us today to arrange a free, no-obligation discussion.

Related articles

  1. Clinical Evidence under EU MDR: Leveraging FDA Clinical Data to Streamline EU MDR Compliance

    FDA approval alone is not sufficient for European market access - a theme we explore further in this article and the accompanying webinar.

    Chandini Valiya Kizhakkeveetil, Regulatory Medical Writer
  2. EU MDR & NHS DTAC Cybersecurity Requirements for UK Market Entry

    This guest article from our partner Cyber Alchemy shows you how to build cybersecurity evidence for the EU MDR and NHS DTAC.

    Luke Hill, Co-Founder of Cyber Alchemy
  3. Where to Launch First? A MedTech Founder's Regulatory Roadmap to the EU, UK and US

    Cyber Alchemy × Mantra Systems — Episode 1: All three markets operate under different regulatory systems and place different demands on manufacturers.

    Ronghe Xu, Regulatory Medical Writer & Strategic BD Lead China
  4. Navigating EU MDR Article 117: A Practical Guide to Drug-Device Combination Product Submissions

    Implementation of the EU MDR 2017/745 has brought significant changes.

    Chandini Valiya Kizhakkeveetil, Regulatory Medical Writer
  5. How EU MDR Post Market Surveillance differs from FDA post-market expectations

    We compare manufacturer-specific post-market obligations across both regulatory systems.

    Dr Gayle Buchel, Chief Medical Writer
  6. How EU device classification differs from the US - Are you Prepared?

    Did you know an FDA Class II medical device could be immediately considered as a high-risk Class III device under European Union regulations?

    Gabriela Cardoso, Regulatory Medical Writer
  7. Fixing the MDR and IVDR? The Commission’s Proposed Amendments and What They Mean for Manufacturers

    Exploring the key elements of this proposal.

    Chandini Valiya Kizhakkeveetil, Regulatory Medical Writer
  8. Regulatory Reset? The EU’s Proposed Changes to MDR and IVDR Explained

    Changes published in December 2025 aim to streamline EU medical device and in vitro diagnostic regulation. We explain who is impacted and how.

    Dr Gayle Buchel, Chief Medical Writer
  9. Did You Know Your Glasses Were a Medical Device? A Regulatory Guide for Manufacturers

    The importance of correct classification and our recommended path to avoid common ophthalmic device 'gotchas'.

    Gabriela Cardoso, Regulatory Medical Writer

More articles

Need help producing compliant CEPs & CERs? We are offering FREE CEPs to 5 qualifying applicants per week

Get your free CEP