European Commission Guidelines on Prohibited Artificial Intelligence Practices

Dr Clare Dixon

Regulation (EU) 2024/1689, the Artificial Intelligence (AI) Act, aims to promote innovation and the uptake of AI while ensuring a high level of protection of health, safety and fundamental rights. It classifies AI systems into four risk categories: unacceptable risk, high risk, transparency risk, and minimal or no risk.

On 4 February 2025, the European Commission issued guidelines on the prohibited AI practices established by Regulation (EU) 2024/1689 (the AI Act). The guidelines are important for organisations and businesses that develop or use AI systems in the EU, and they detail which AI practices are unacceptable.

A summary of the eight prohibited AI practices, with examples of AI systems to which they apply and of systems that fall out of scope, is given below.

Prohibited AI practices:

1. Harmful Manipulation and Deception - Article 5(1)(a)

AI systems that employ subliminal, manipulative, or deceptive techniques that distort behaviour and cause significant harm are prohibited.

  • Subliminal techniques: Examples include AI systems that use visual and auditory subliminal messages, subvisual and subaudible cueing and embedded images that can unconsciously influence users.
  • Manipulative techniques: Examples include AI systems that purposefully manipulate users through background audio or images to induce mood changes such as increased anxiety or mental suffering resulting in significant harm.
  • Deceptive techniques: Examples include an AI chatbot that impersonates a friend or relative with a synthetic voice leading to scams and significant harm.

2. Harmful Exploitation of Vulnerabilities - Article 5(1)(b)

AI systems that exploit vulnerabilities due to age, disability, or socio-economic situations resulting in distortion of behaviour and significant harm are prohibited.

  • An example is an AI-powered toy that encourages children to complete increasingly risky challenges for digital rewards and virtual praise, potentially leading to dangerous behaviour and physical harm.
  • AI systems that use lawful persuasion rather than manipulation and that are not likely to cause significant harm are outside of the scope of Article 5(1)(a) and (b).
    • For example, an AI system that uses personalised recommendations based on transparent algorithms and user preferences engages in lawful persuasion.
    • For example, a therapeutic chatbot that uses subliminal techniques to guide users towards a healthier lifestyle and to quit bad habits such as smoking is not likely to cause significant harm even if users experience physical discomfort and psychological stress because of the effort made to quit smoking.

3. Social Scoring - Article 5(1)(c)

AI systems that evaluate or classify individuals based on social behaviour or personal characteristics resulting in unjustified treatment are prohibited.

  • An example is an AI predictive tool that analyses taxpayers’ tax returns to select returns for closer inspection. The tool uses relevant variables, such as yearly income, as well as unrelated data, such as a taxpayer’s social habits or internet connections, to select individuals for closer inspection, leading to potential discrimination.
  • Out of scope legitimate scoring practices include financial credit scoring systems which are used by creditors and credit information agencies to determine a customer’s ability to repay debts by analysing a range of financial data such as the customer’s income and expenses.

4. Individual Criminal Offence Risk Assessment and Prediction - Article 5(1)(d)

AI systems that predict criminal behaviour based solely on profiling or personality traits, without objective human assessment, are prohibited.

  • An example is a law enforcement authority that uses an AI system to predict criminal behaviour based on personal characteristics such as age, nationality, address and marital status, leading to unjust profiling.
  • Out of scope AI systems include predictive policing systems which generate a score for the likelihood of criminality in different neighbourhoods based on previous criminality rates and other supporting information such as street maps, allowing law enforcement to identify areas that require additional police presence.

5. Untargeted Scraping of Facial Images - Article 5(1)(e)

Creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV footage is prohibited.

  • For internet sources, the fact that a person has published facial images of themselves on social media does not mean that they have given permission for those images to be included in a facial recognition database. An example of scraping images from CCTV is an AI tool that uses images from surveillance cameras in public spaces, such as airports and streets, without consent.
  • Out of scope AI systems include databases that contain facial images but do not associate them with identifiable individuals e.g. datasets used solely for training or testing AI models without any intent to recognise or identify the persons in the images.

6. Emotion Recognition - Article 5(1)(f)

AI systems that infer emotions in workplaces and educational institutions are prohibited, except for medical or safety reasons.

  • An example of ‘emotion recognition’ is an AI system that uses body gestures, a frown or the absence of a smile to infer that an employee is unhappy, sad or angry when dealing with customers.
  • Out of scope are AI systems that infer emotions from data other than biometric data, e.g. from written text, and AI systems that infer physical states, e.g. pain and tiredness.

7. Biometric Categorisation - Article 5(1)(g)

Categorising individuals based on biometric data to infer sensitive characteristics such as race, political opinions, or sexual orientation is prohibited.

  • An example is an AI system that categorises social media users by their presumed political orientation based on biometric data from uploaded photos to send them targeted political messages.
  • Examples of permissible filtering include the categorisation of patients using images according to skin or eye colour, which may be important for a medical diagnosis such as cancer.

8. Real-time Remote Biometric Identification (RBI) - Article 5(1)(h)

The use of real-time RBI systems in publicly accessible spaces for law enforcement purposes is prohibited, with exceptions only for serious threats and criminal investigations.

Safeguards and conditions for the exceptions (Article 5(2)-(7) AI Act) are documented in the guidelines, which aim to ensure the responsible and ethical use of AI technologies while safeguarding fundamental rights and promoting trust in AI systems.

Overview of Prohibited AI Practices

Enforcement of Article 5 AI Act

The prohibitions in Article 5 AI Act became applicable on 2 February 2025, while the provisions on penalties, governance and confidentiality apply from 2 August 2025.

Market surveillance authorities are responsible for enforcing the rules of the AI Act for AI systems, including the prohibitions. The AI Act employs a tiered approach to penalties for non-compliance, designed so that the severity of the infringement is matched by a corresponding penalty. Non-compliance with the prohibitions in Article 5 AI Act is classified as the most serious infringement and is subject to the highest fine, which can be up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
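
To illustrate the ‘whichever is higher’ rule, the short Python sketch below computes the maximum possible Article 5 fine for an undertaking. It is a minimal sketch only; the function name and the turnover figure are hypothetical and chosen purely for illustration.

    # Minimal sketch: the maximum fine for an Article 5 infringement by an
    # undertaking is the higher of EUR 35,000,000 or 7% of total worldwide
    # annual turnover for the preceding financial year.
    def max_article_5_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        fixed_cap = 35_000_000.0
        turnover_cap = 0.07 * worldwide_annual_turnover_eur
        return max(fixed_cap, turnover_cap)

    # Hypothetical example: an undertaking with EUR 1 billion worldwide turnover.
    # 7% of 1,000,000,000 = 70,000,000, which exceeds the fixed cap of 35,000,000.
    print(max_article_5_fine_eur(1_000_000_000))  # 70000000.0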

Conclusion

AI in healthcare, such as software as a medical device (SaMD), must be developed with stringent ethical standards to ensure patient safety and protection of fundamental rights. The guidelines remind developers and regulators alike of the importance of maintaining transparency and safeguarding against AI misuse. As the healthcare sector continues to integrate AI into medical devices, these guidelines will serve as a key framework for ensuring that AI-driven solutions prioritise the well-being of patients while promoting innovation and trust in the healthcare system.

If you need guidance on navigating AI challenges, contact us today to arrange a free, no-obligation discussion.
