
On the 1st of October, I had the opportunity to attend the AI in Health Summit hosted by Economist Impact at the Royal College of Physicians, London.
The summit brought together policymakers, clinicians, healthcare providers, med-tech companies, academics, charities and investors working to realise the full potential of AI in health. It’s safe to say that the day was packed with insights, debates, and a healthy dose of cautious optimism. The conversation around artificial intelligence in healthcare has evolved dramatically over the past few years, moving from excitement about what’s possible to a much deeper focus on how we can make it all work safely, ethically, and sustainably.
What really struck me throughout the day was the fine balance between responsiveness and responsibility, two words that came up again and again. AI innovation is moving faster than ever, and regulators are working hard to keep pace without stifling progress. During one of the fireside chats, Lawrence Tallon, Chief Executive of the MHRA, mentioned the introduction of AI sandboxes: controlled environments where innovators can safely test their AI systems. The MHRA’s AI Airlock programme is one example of this new mindset, offering regulatory guidance and learning while still fostering innovation. It’s a way of encouraging creativity, but with guardrails firmly in place.
It was encouraging to see how much thought is being put into proportionate oversight, ensuring that regulation is rigorous enough to maintain safety and public trust, but flexible enough to avoid unnecessary barriers to innovation. After all, as several speakers reminded us, public trust is everything. This theme was front and centre during the panel discussion “Global Regulations for Trust and Transparency.” Key speakers, including Jamie Cox, Co-founder and Chief Technology Officer of Scarlet, and David Novillo, Head of Data and Digital Health at the WHO, emphasised that both health workers and patients must have a voice, and that using AI should always be a choice, not an obligation. Can patients and health workers truly trust AI if accountability and choice aren’t built in from the start?
AI in healthcare brings immense promise and excitement, but it also raises critical questions like this one. Without public trust, the rapid momentum behind AI adoption could quickly come to a halt.

Beyond Clinical Impact: How AI Can Support Health Systems
During the summit, one of the more refreshing themes was that AI is no longer just about improving clinical outcomes for patients; it’s also about strengthening the systems that support them. Several panel discussions reflected this idea, with sessions such as “How Can AI Deliver Maximum Value to Patients?” and “Reducing the Burden on Health Systems” taking centre stage. Narrow AI continues to shine in specific, well-defined applications such as diagnostic imaging and triage support, but a conversation has now begun about how AI can also help solve the more practical problems facing healthcare systems: improving efficiency, supporting workforce planning, and easing administrative workloads.
Adoption, however, also depends on overcoming cultural barriers. Some clinicians remain resistant, questions about accountability persist, and legitimate concerns have been raised about bias and inequity in AI systems. But there was a strong sense that AI, when implemented thoughtfully, could help make healthcare more timely, equitable, and sustainable, not less.
The Regulatory Road Ahead
Regulation is often seen as a hurdle, just another hoop to jump through in the world of AI in healthcare, but the summit reminded me that, viewed in the right light, it can also be a powerful enabler. Done thoughtfully, regulation doesn’t just protect patients; it builds trust and creates a clear pathway for AI tools to move safely from development to real-world clinical use. One key theme was the importance of proportionate oversight. Rather than a one-size-fits-all approach, regulators are increasingly adopting risk-based frameworks, ensuring that higher-risk AI applications receive more scrutiny while lower-risk tools can reach clinicians faster. This balance is critical: it protects patients without unnecessarily slowing innovation, giving developers room to experiment and iterate.
At Mantra Systems, we empower these innovators, guiding them through the regulatory process and helping them lean into regulation rather than fight against it. Working together in this way, we apply a unique combination of clinical, regulatory and medical device expertise towards a shared goal: getting medical devices onto the market. In the past, we have hosted the MedTech Start-Up Workshop, giving early-stage companies practical insights into navigating regulatory requirements. Subscribe to our newsletter to hear about the next one.