Our webinar on the 13th of November with cybersecurity experts Cyber Alchemy explored how to bring AI-enabled medical devices to market with the right regulatory and cybersecurity foundations.
The discussion - led by Dr Will Brambley (Mantra Systems), Dr Simon Cumiskey (Mantra Systems) and Luke Hill (Cyber Alchemy) - brought together insights from both regulatory and cybersecurity perspectives, covering everything from device qualification to protecting proprietary AI models.
If you couldn’t join us live, you can watch in full here - or read our key takeaways below.
Hardware Security Matters Too
While AI software tends to dominate the conversation, connected hardware can introduce equally serious cybersecurity risks, and attackers have already proven this in the real world.
High-profile demonstrations, such as wireless hacks of insulin pumps and pacemakers, highlight just how vulnerable poorly protected devices can be. Today’s AI-enabled devices, particularly those running models on-device, introduce additional attack surfaces that must be understood from day one.
Whether an AI model runs locally, on hospital servers, or in the cloud dramatically shapes the security posture required. Each environment brings its own vulnerabilities, from physical tampering to network exposure to questions of data residency and ownership in cloud-based architectures. Manufacturers who treat hardware and software as separate concerns soon discover the two are inseparable.
AI Doesn’t Increase Risk, But It Increases Responsibility
A key misconception is that AI automatically pushes a device into a higher regulatory class. In reality, classification still depends on intended purpose, not the presence of AI.
However, the inclusion of AI increases the manufacturer’s burden of evidence. You must show not only that your model works, but that it works reliably across a wide range of real-world variables including different patient populations, imaging systems, and clinical environments. Moreover, AI introduces new failure modes, from data poisoning to prompt manipulation, that traditional software simply doesn’t face.
And yet, the fundamentals haven’t changed: you still have to show that you understand your risks and have applied appropriate controls to keep those risks in check. AI simply expands the scope of what that understanding must cover. New failure modes, including data drift, adversarial inputs, and prompt manipulation, mean that security, robustness, and oversight must be treated as integral parts of the safety narrative.
This is where the EU AI Act adds a new dimension. Most AI-enabled medical devices will fall into the “high-risk” category, bringing with it higher expectations of security.
Start Cybersecurity Early
One of the strongest messages from the session was that cybersecurity is not an add-on but a foundational design activity. Retrofitting security at the end of development inevitably leads to budget overruns, delays, and frustrated engineering teams.
Manufacturers should approach cybersecurity with the same discipline as traditional software lifecycle management. Early threat modelling, robust architectural planning, and alignment with standards such as IEC 81001-5-1 and IEC 62304 are essential. These activities shape not only how the product is built, but how it will be assessed during CE or UKCA submissions.
Equally important is supply chain visibility; knowing what is in your software, who supplied it, and how vulnerabilities will be managed after launch. This is especially relevant as AI systems increasingly incorporate third-party models, frameworks, and datasets.
Security is therefore not a barrier to innovation. Done well, it accelerates development by providing clarity, reducing rework, and giving both regulators and customers confidence in your product.
The EU AI Act Will Add New Layers
The EU MDR and UK MDR already set expectations around software safety and performance, but they remain high-level when it comes to AI. The EU AI Act fills this gap by introducing AI-specific requirements that touch nearly every aspect of design, documentation, and oversight.
While this may feel like another regulatory hurdle, the organisations best equipped to adapt will be those that have already embedded strong lifecycle processes, especially around risk management, threat modelling, and post-market monitoring. The AI Act reinforces the idea that AI systems must be explainable, monitored, and robust, and that humans remain accountable for their operation.
In other words, the EU AI Act does not introduce a new mindset so much as formalise the one that high-quality manufacturers already subscribe to.
Security, Compliance, and Features: You Can’t Choose Just One
When teams ask whether to prioritise security, compliance, or product features, the truth is that you can’t meaningfully separate them. Without security, you won’t achieve compliance. Without compliance, you don’t have a viable product. And without a viable product, features are irrelevant because no one will buy it. These elements aren’t competing priorities; they’re interdependent pillars that determine whether your product ever reaches a customer.
The challenge is striking the right balance early on, especially when working with an MVP mindset. The most pragmatic approach is to ask: What is the minimum level of security, compliance, and functionality we need to land our first customer? Getting that answer right requires structured planning rather than heavy documentation. Aligning your development approach with established lifecycle principles, such as those in IEC 81001-5-1, gives you a framework to build from without overengineering. Equally, early attention to system and security architecture, along with light-touch threat modelling, helps lay the groundwork for scalable and secure product evolution.
If those fundamentals aren’t in place, feature iteration becomes painful. You end up revisiting core design decisions to retrofit security or meet regulatory expectations. And if you skip those early stages entirely, the reckoning simply arrives later, during submission, when the fixes become far more expensive, time-consuming, and stressful.
We can help you with rapid SaMD compliance
Are you developing an AI-enabled medical device and need support with regulatory compliance? Our team specialises in navigating the complex UK and EU requirements for AI and software-based devices, ensuring your innovation meets compliance expectations while achieving commercial success.
We recently launched a new SaMD-focused service that guarantees to beat industry averages for regulatory compliance.
Contact us today to discuss how we can help you build a clear, efficient pathway to regulatory approval and market adoption.
About the Speakers
- Simon Cumiskey – Senior Lead Medical Writer, Mantra Systems
Specialises in regulatory strategy for software and AI-enabled devices. Advises companies on meeting MDR requirements and achieving market access through robust documentation and clear regulatory positioning.
- Luke Hill – Cybersecurity Consultant, Cyber Alchemy
Luke is a technical cybersecurity specialist who works across sectors, with a strong focus on MedTech. He has helped bring AI-enabled medical apps to market and worked on securing medical devices such as X-ray and MRI systems. His expertise includes security testing, cloud systems, security architecture and application security, translating complex risks into practical, regulator-ready controls.
- William Brambley – Lead Medical Writer, Mantra Systems
Develops evidence strategies that secure approvals and drive market adoption. Leads EU and UK MDR submissions for complex software devices, aligning regulatory pathways with commercial success.
To ensure you don’t miss the next one, consider signing up to our newsletter or following us on LinkedIn.