Webinar: Regulatory & Cybersecurity Essentials for Medical Device Software and AI-Enabled Devices

Shen May Khoo

Our webinar on the 13th of November 2025 with cybersecurity experts Cyber Alchemy explored how to bring AI-enabled medical devices to market with the right regulatory and cybersecurity foundations.

The discussion - led by Dr Will Brambley (Mantra Systems), Dr Simon Cumiskey (Mantra Systems) and Luke Hill (Cyber Alchemy) - brought together insights from both regulatory and cybersecurity perspectives, covering everything from device qualification to protecting proprietary AI models.

If you couldn’t join us live, you can watch in full here - or read our key takeaways below.

Hardware Security Matters Too

While AI software tends to dominate the conversation, connected hardware can introduce equally serious cybersecurity risks, and attackers have already proven this in the real world.

High-profile demonstrations, such as wireless hacks of insulin pumps and pacemakers, highlight just how vulnerable poorly protected devices can be. Today’s AI-enabled devices, particularly those running models on-device, introduce additional attack surfaces that must be understood from day one.

Whether an AI model runs locally, on hospital servers, or in the cloud dramatically shapes the security posture required. Each environment brings its own vulnerabilities, from physical tampering to network exposure to questions of data residency and ownership in cloud-based architectures. Manufacturers who treat hardware and software as separate concerns soon discover the two are inseparable.

AI Doesn’t Increase Risk, But It Increases Responsibility

A key misconception is that AI automatically pushes a device into a higher regulatory class. In reality, classification still depends on intended purpose, not the presence of AI.

However, the inclusion of AI increases the manufacturer’s burden of evidence. You must show not only that your model works, but that it works reliably across a wide range of real-world variables including different patient populations, imaging systems, and clinical environments. Moreover, AI introduces new failure modes, from data poisoning to prompt manipulation, that traditional software simply doesn’t face.

And yet, the fundamentals haven’t changed: you still have to show that you understand your risks and have applied appropriate controls to keep those risks in check. AI simply expands the scope of what that understanding must cover. New failure modes - data drift, adversarial inputs, prompt manipulation - mean that security, robustness, and oversight must be treated as integral parts of the safety narrative.

This is where the EU AI Act adds a new dimension. Most AI-enabled medical devices will fall into the “high-risk” category, which brings higher expectations of security.

Start Cybersecurity Early

One of the strongest messages from the session was that cybersecurity is not an add-on but a foundational design activity. Retrofitting security at the end of development inevitably leads to budget overruns, delays, and frustrated engineering teams.

Manufacturers should approach cybersecurity with the same discipline as traditional software lifecycle management. Early threat modelling, robust architectural planning, and alignment with standards such as IEC 81001-5-1 and IEC 62304 are essential. These activities shape not only how the product is built, but how it will be assessed during CE or UKCA submissions.

Equally important is supply chain visibility; knowing what is in your software, who supplied it, and how vulnerabilities will be managed after launch. This is especially relevant as AI systems increasingly incorporate third-party models, frameworks, and datasets.
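To make supply chain visibility concrete, here is a minimal sketch of our own (not something presented in the webinar): a few lines of standard-library Python that inventory every installed dependency by name and version. A production SBOM would use a dedicated format such as CycloneDX or SPDX and proper tooling, but the principle is the same - know exactly what is in your software.

```python
# Illustrative sketch: a minimal dependency inventory using only the
# Python standard library. Not a substitute for a real SBOM format
# (e.g. CycloneDX or SPDX), just a demonstration of the principle.
import json
from importlib import metadata

def minimal_inventory():
    """List every installed distribution as {name, version} records."""
    records = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or "unknown"
        records.append({"name": name, "version": dist.version})
    return sorted(records, key=lambda r: r["name"].lower())

if __name__ == "__main__":
    # Print the first few entries as JSON for inspection.
    print(json.dumps(minimal_inventory()[:5], indent=2))
```

An inventory like this becomes genuinely useful once it is generated automatically in CI and diffed between releases, so that any new third-party component triggers a vulnerability review.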

Security is therefore not a barrier to innovation. Done well, it accelerates development by providing clarity, reducing rework, and giving both regulators and customers confidence in your product.

The EU AI Act Will Add New Layers

The EU MDR and UK MDR already set expectations around software safety and performance, but they remain high-level when it comes to AI. The EU AI Act fills this gap by introducing AI-specific requirements that touch nearly every aspect of design, documentation, and oversight.

While this may feel like another regulatory hurdle, the organisations best equipped to adapt will be those that have already embedded strong lifecycle processes, especially around risk management, threat modelling, and post-market monitoring. The AI Act reinforces the idea that AI systems must be explainable, monitored, and robust, and that humans remain accountable for their operation.

In other words, the EU AI Act does not introduce a new mindset so much as formalise the one that high-quality manufacturers already subscribe to.

Security, Compliance, and Features: You Can’t Choose Just One

When teams ask whether to prioritise security, compliance, or product features, the truth is that you can’t meaningfully separate them. Without security, you won’t achieve compliance. Without compliance, you don’t have a viable product. And without a viable product, features are irrelevant because no one will buy it. These elements aren’t competing priorities; they’re interdependent pillars that determine whether your product ever reaches a customer.

The challenge is striking the right balance early on, especially when working with an MVP mindset. The most pragmatic approach is to ask: What is the minimum level of security, compliance, and functionality we need to land our first customer? Getting that answer right requires structured planning rather than heavy documentation. Aligning your development approach with established lifecycle principles, such as those in IEC 81001-5-1, gives you a framework to build from without overengineering. Equally, early attention to system and security architecture, along with light-touch threat modelling, helps lay the groundwork for scalable and secure product evolution.
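To illustrate what “light-touch threat modelling” can look like in practice, here is a minimal STRIDE-style sketch of our own (an illustration, not material from the webinar; the component name and mitigations are hypothetical). The idea is simply to track, per component, which threat categories have a documented control and which remain open.

```python
# Illustrative sketch of light-touch, STRIDE-style threat modelling.
# The component and mitigations below are hypothetical examples.
from dataclasses import dataclass, field

STRIDE = (
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
)

@dataclass
class Component:
    name: str
    mitigations: dict = field(default_factory=dict)  # category -> control

    def record(self, category: str, control: str) -> None:
        """Document a control against one STRIDE category."""
        if category not in STRIDE:
            raise ValueError(f"Unknown STRIDE category: {category}")
        self.mitigations[category] = control

    def open_threats(self) -> list:
        """Categories with no documented control - the residual risk list."""
        return [c for c in STRIDE if c not in self.mitigations]

inference = Component("on-device inference service")
inference.record("Tampering", "Model weights signed and verified at load")
inference.record("Information disclosure", "Patient data encrypted at rest")
print(inference.open_threats())  # categories still needing a control
```

Even a register this simple gives an MVP team a defensible artefact: it shows regulators that threats were enumerated systematically, and it makes the residual risk list explicit rather than implicit.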

If those fundamentals aren’t in place, feature iteration becomes painful. You end up revisiting core design decisions to retrofit security or meet regulatory expectations. And if you skip those early stages entirely, the reckoning simply arrives later, during submission, when the fixes become far more expensive, time-consuming, and stressful.

We can help you with rapid SaMD compliance

Are you developing an AI-enabled medical device and need support with regulatory compliance? Our team specialises in navigating the complex UK and EU requirements for AI and software-based devices, ensuring your innovation meets compliance expectations while achieving commercial success.

We recently launched a new SaMD-focused service that guarantees to beat industry averages for regulatory compliance.

Contact us today to discuss how we can help you build a clear, efficient pathway to regulatory approval and market adoption.

About the Speakers

  • Simon Cumiskey - Senior Lead Medical Writer, Mantra Systems
    Specialises in regulatory strategy for software and AI-enabled devices. Advises companies on meeting MDR requirements and achieving market access through robust documentation and clear regulatory positioning.
  • Luke Hill - Cybersecurity Consultant, Cyber Alchemy
    Luke is a technical cybersecurity specialist who works across sectors, with a strong focus on MedTech. He has helped bring AI-enabled medical apps to market and has worked on securing medical devices such as X-ray and MRI systems. His expertise includes security testing, cloud systems, security architecture and application security, translating complex risks into practical, regulator-ready controls.
  • William Brambley - Lead Medical Writer, Mantra Systems
    Develops evidence strategies that secure approvals and drive market adoption. Leads EU and UK MDR submissions for complex software devices, aligning regulatory pathways with commercial success.

To ensure you don’t miss the next one, consider signing up to our newsletter, or following us on LinkedIn.

Related articles

  1. Webinar: From USA to Europe - Accelerating Your Path to the Medical Device Market

    We showed you how to quickly transform your U.S. regulatory work into a compliant EU MDR submission.

    Chandini Valiya Kizhakkeveetil, Regulatory Medical Writer
  2. Guide to Clinical Evaluation: Common Pitfalls & Useful Resources

    Part 5 - In the final video from this series, we explore five major pitfalls that often derail clinical evaluations.

    Dr Paul Hercock, Chief Executive Officer
  3. Device Modifications: When a Simple Change Becomes a Regulatory Nightmare

    As regulatory consultants we understand how minor modifications to a device can often cause disproportionate disruption.

    Kamiya Crabtree, Regulatory Medical Writer
  4. Mantra Systems at MEDICA 2025

    Mantra Systems is once again going to MEDICA, the largest medical trade fair in the world. We hope to see you there.

    Megan Allen, Regulatory Medical Writer
  5. Leveraging Post-Market Surveillance Data for Continuous Improvement

    PMS isn’t just about compliance - it’s an opportunity to drive improvement, enhance patient safety and innovate.

    Shen May Khoo, Regulatory Project Lead
  6. Guide to Clinical Evaluation: CEP Strategy & CER Structure

    Part 4 - We explore how these guide reviewers through the evidence that supports safety, performance, and conformity.

    Dr Will Brambley, Lead Medical Writer
  7. The Critical Role of Pre-Submission Reviews in EU MDR Clinical Evaluations

    Ensuring your CER is robust and aligned with current standards is critical. How much Clinical Evidence is enough?

    Sandra Gopinath, Chief Regulatory Officer
  8. Redefining Care Through AI: Perspectives from the AI in Health Summit 2025

    Conversations around AI in healthcare have evolved dramatically over the past few years. We found insight, debate and a healthy dose of cautious optimism at this summit.

    Kamiya Crabtree, Regulatory Medical Writer
  9. Guide to Clinical Evaluation: The State-of-the-Art (SOTA) Literature Review

    Part 3 - This is the core of a successful submission. Will demystifies the process and explains how it supports clinical evaluation.

    Dr Will Brambley, Lead Medical Writer
  10. Regulatory Update: EU Borderline & Classification Manual for medical devices v4

    New examples sharpen the distinction between medical devices and other product categories, such as pharmacologically active substances and aesthetic-only products.

    Chandini Valiya Kizhakkeveetil, Regulatory Medical Writer
  11. Guide to Clinical Evaluation: Clinical Evaluation in Context

    Part 2 - A clinical evaluation demonstrates that a device is safe and effective, but achieving this requires more than simply compiling studies.

    Dr Peter Boxall, Lead Medical Writer

More articles

Need help producing compliant CEPs & CERs? We are offering FREE CEPs to 5 qualifying applicants per week

Get your free CEP