
Introducing Numan’s AI Health Assistant


At Numan, we’re driven by a simple but powerful vision: to empower patients to live healthier, happier, and longer lives. That’s why we’re constantly exploring how technology can make healthcare more personalised, accessible, scalable, and impactful.

Numan’s first AI Health Assistant is the latest step in that journey. Combining cutting-edge AI technology with robust safety systems, our AI Health Assistant represents Numan’s commitment to delivering healthcare that’s as innovative as it is ethical. In this post, we share how we brought this technology to life, the challenges we faced along the way, and how we’re ensuring it remains safe, reliable, and tailored for patient needs. 

Numan’s vision for AI in healthcare

We believe that healthcare should be proactive, not reactive. Too often, healthcare follows a “problem → solution” model, addressing issues only after they arise. At Numan, we’re working to flip that script by creating an integrated platform that supports long-term health through:

  • Advanced diagnostics

  • Behavioural change programmes

  • Personalised treatment plans

  • Expert clinical support

And while there’s a long way to go, our AI Health Assistant is the first step towards achieving this. It will enable us to scale personalised care, making these solutions accessible to a broader population. Our AI Health Assistant embodies this approach by guiding patients through their health journeys with tailored, 24/7 support.

Bringing the solution to life

Our journey towards the solution started with a question: How can we use AI to help patients feel supported at every step of their journey? To answer that, we drew on insights from the scientific evidence base, our clinical team, behavioural scientists, and engineers. We developed a prototype powered by the latest LLM technology. But like any new AI tool, it had its limitations.

In its early form, our assistant struggled with:

  • Providing up-to-date information (limited by training data cutoffs).

  • Aligning with our tone of voice and behavioural science principles.

  • Providing the most relevant information for each patient’s individual lifestyle change journey.

These were big challenges, but they also showed us the assistant’s potential. Through continuous iteration and collaboration across teams, we began to refine our AI into the intelligent, empathetic, expert assistant we envisioned.

Take our prompt engineering: during the initial phase of development, we explored different approaches and collaborated with clinicians and behavioural scientists to determine the core requirements for the prompt. We then iteratively tested and refined it, adjusting parameters, constraints, guidelines, and structures. We employed established prompting techniques, such as few-shot prompting, to improve performance and adherence to instructions. The prompt went through many stages of internal testing before being released, initially as an in-house Alpha, followed by a multi-phase Beta involving patients.
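To make the few-shot technique concrete, here is a minimal sketch of how a prompt with worked examples can be assembled for an LLM. The guidelines and example exchanges below are hypothetical illustrations, not our production prompt:

```python
# Minimal sketch of few-shot prompting. The system instructions and
# examples are hypothetical, shown only to illustrate the technique.

SYSTEM_PROMPT = """You are a supportive health assistant.
Follow these tone and behavioural guidelines:
- Be empathetic and evidence-based.
- Encourage small, sustainable lifestyle changes."""

# Few-shot examples demonstrate the desired tone and structure,
# which tends to improve the model's adherence to instructions.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "I keep snacking late at night. Any tips?"},
    {"role": "assistant", "content": (
        "That's a really common challenge. One small step you could try is "
        "planning a protein-rich evening snack in advance."
    )},
]

def build_messages(patient_message: str) -> list[dict]:
    """Assemble the full message list sent to the LLM."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + FEW_SHOT_EXAMPLES
        + [{"role": "user", "content": patient_message}]
    )

messages = build_messages("How much protein should I aim for?")
```

Each refinement cycle then amounts to editing the instructions or examples and re-running the same evaluation questions against the new prompt.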

Listening to our patients 

In our Beta testing phases, we invited cohorts of patients to try the assistant and see how it worked for them. Through surveys and interviews, we explored whether the assistant was actually meeting their needs, and identified key areas for improvement. For instance, patients mentioned wanting to revisit past conversations, so we added a history feature that lets them return to earlier chats. Imagine the assistant suggested a recipe they loved; now, when they're grocery shopping next week, they can easily check back and grab the ingredients they need.


Patients told us that they found the assistant particularly useful for getting quick answers on things like diet, health tips, and general wellbeing. Because the assistant is available 24/7, they don’t have to wait for their coach or clinician’s working hours to receive an answer. Instead, they can get one midway through their grocery shop, on the gym floor as they explore new exercises, or even from bed as they prepare for the day ahead.

This is just the tip of the iceberg in terms of our research - we're always doing more and working out new ways to design the best tools and experiences for our patients. 

The patient experience: what our AI Health Assistant offers

For patients, our AI Health Assistant represents a new level of support. Available 24/7, it provides:

  • Round-the-clock answers to patient questions. 

  • Tailored advice on diet, exercise, and lifestyle.

  • Escalation to human support when needed.

By combining AI-driven insights with clinical oversight, the Health Assistant empowers patients to take charge of their health, whether that means building new habits around food, discovering exercise routines that fit their lifestyle, or simply sanity checking that they're doing the right thing for their bodies during their journey. This proactive engagement helps patients make informed decisions and achieve their health goals more effectively.


Improving quality with RAG

To deliver accurate and personalised responses, Numan’s AI Health Assistant uses a Retrieval Augmented Generation (RAG) framework. This approach integrates a curated knowledge base developed by our expert team of health coaches and clinicians, ensuring the assistant’s information is both evidence-based and up-to-date.

For example, when the AI Health Assistant supports patients on GLP-1 treatment, it draws on the most relevant insights about lifestyle changes specific to weight loss medications, so it can offer care tailored to each patient’s journey. Nutrition insights, for instance, might cover which foods to eat or avoid during weight loss, what to include in the grocery shopping list, and how to increase protein intake. This approach also significantly cuts down on AI “hallucinations” (when an AI generates incorrect or misleading information, sometimes while insisting it’s factual) and boosts the accuracy of the assistant’s messages.
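The core RAG loop described above can be sketched in a few lines: retrieve the most relevant knowledge-base entries for a question, then place them in the prompt so the model grounds its answer in curated content. The knowledge-base snippets below are invented placeholders, and the word-overlap scoring is a stand-in for the embedding-based similarity search a real system would use:

```python
# Minimal RAG sketch. The knowledge-base entries are placeholders, and
# naive word overlap stands in for real embedding-based retrieval.

KNOWLEDGE_BASE = [
    "On GLP-1 medications, prioritise protein at each meal.",
    "Foods to limit during weight loss treatment include sugary drinks.",
    "A sample grocery list for high-protein meals: eggs, yoghurt, lentils.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base entries by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved, curated context."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nPatient question: {query}"

prompt = build_prompt("What should go on my grocery list for protein?")
```

Because answers are grounded in retrieved text rather than the model’s memory alone, the knowledge base can be updated without retraining, which is what keeps responses current and evidence-based.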

Building safe and ethical AI: the role of monitoring

Patient safety is at the core of everything we do at Numan. The unpredictable nature of LLM-based conversational technology means we need to take a thoughtful approach, using effective strategies to handle a wide range of scenarios - even those that might seem harmless at first glance. Additionally, the interactive capabilities of generative AI permit a broader range of information exchange between AI assistants and patients. 

That’s why we’re building Numan’s monitoring and evaluation system: to ensure our Health Assistant operates with the highest standards of safety and accuracy.

The system is more than just a safeguard; it’s a feedback loop that helps us identify and address potential risks in real time, whether that’s a patient requesting support with symptom management who can be referred to our clinical team, or an instance where the assistant hasn’t given the most up-to-date information. It spans the entire AI life cycle, from ideation to monitoring.

Here’s how it works:


1. Prompt evaluator: continuously evaluating system performance on key priority areas 

We rigorously evaluate the AI Health Assistant’s responses using a structured set of questions covering key domains such as:

  • Immediate patient safety

  • Mental health risk

  • Side effect and symptom support

  • Technical assistance

  • Nutrition guidance

The responses are then evaluated to ensure the Health Assistant's outputs meet strict safety and accuracy criteria. These tests are designed to run pre-deployment (before any changes to the system go live) and post-deployment (periodically while the model is in production to check performance).

When a test is ranked as anything other than a clear pass, the expert team behind the assistant is flagged to swiftly investigate the cause and remedy it. This ensures the team’s time is spent on solving problems rather than running tests.
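A structured evaluation suite of this kind can be sketched as data plus a runner: each case names a domain, a probe question, and a pass criterion. The domains are taken from the list above, but the questions, criteria, and grading logic here are simplified illustrations of the pattern, not our actual test suite:

```python
# Hedged sketch of an evaluation harness. The cases and the simple
# substring criterion are illustrative placeholders; real grading would
# apply richer safety and accuracy checks.
from dataclasses import dataclass

@dataclass
class EvalCase:
    domain: str        # e.g. "Nutrition guidance"
    question: str      # probe sent to the assistant
    must_include: str  # a criterion the response should satisfy

CASES = [
    EvalCase("Nutrition guidance", "How can I add more protein?", "protein"),
    EvalCase("Technical assistance", "How do I view past chats?", "history"),
]

def run_suite(assistant, cases=CASES) -> list[tuple[str, bool]]:
    """Run every case and record pass/fail per domain. Anything short
    of a clean pass would be flagged for the team to investigate."""
    results = []
    for case in cases:
        response = assistant(case.question)
        passed = case.must_include.lower() in response.lower()
        results.append((case.domain, passed))
    return results

# A stub assistant stands in for the real model during this sketch.
results = run_suite(
    lambda q: "You can revisit protein tips in your chat history."
)
```

The same suite can run before a change ships and on a schedule in production, which is what makes pre- and post-deployment checks cheap to repeat.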

2. Clinical escalation system: bringing a human into the loop when needed

Numan aims to enhance, not replace, human support through AI. Our escalation system reviews patient messages to identify situations that require human intervention. When one is found, the case is escalated to a dedicated Clinical Specialists team, whose clinicians review it and use their clinical judgement to follow up with the patient if needed. To keep the escalation system accurate, we maintain a human in the loop, conducting regular audits and continuously improving its performance.
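The routing step described above can be sketched as a classifier gating a hand-off queue. The keyword triggers below are a crude stand-in for a real escalation classifier, and the queue is a hypothetical placeholder for the team’s actual case-management tooling:

```python
# Illustrative escalation sketch. Keyword triggers stand in for a real
# classifier; the queue is a hypothetical stand-in for case routing.

ESCALATION_TRIGGERS = ("chest pain", "severe", "suicidal", "overdose")

def needs_escalation(message: str) -> bool:
    """Flag messages that may require human clinical intervention."""
    text = message.lower()
    return any(trigger in text for trigger in ESCALATION_TRIGGERS)

def handle(message: str, clinical_queue: list) -> str:
    """Route a message either to clinicians or back to the assistant."""
    if needs_escalation(message):
        # Hand the case to the clinical team for review and follow-up.
        clinical_queue.append(message)
        return "escalated"
    return "assistant_reply"

queue: list[str] = []
status = handle("I've had severe side effects since yesterday", queue)
```

The human-in-the-loop audits then act on both sides of this gate: checking that escalated cases warranted review, and that non-escalated ones did not.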

3. Safety classifier: ensuring information provided meets the highest standards of safety and patient care 

This tool continuously monitors our Health Assistant’s messages, flagging any potentially unsafe or inappropriate responses. It’s an essential layer of protection, ensuring patients can trust the guidance the AI provides. This acts as a continuous monitoring and alert system. If safety metrics drop, our team is alerted to this and can quickly investigate and remedy the situation. 
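As a sketch, continuous safety monitoring boils down to scoring each outgoing message and alerting when the aggregate rate dips below a threshold. The classifier and the threshold value below are hypothetical placeholders for illustration:

```python
# Sketch of continuous safety monitoring. The classifier rule and the
# threshold are hypothetical; a real system would use a trained model.

SAFETY_THRESHOLD = 0.95  # assumed minimum share of safe responses

def safety_classifier(message: str) -> bool:
    """Placeholder check standing in for a trained safety classifier."""
    return "unverified medical claim" not in message.lower()

def monitor(batch: list[str]) -> tuple[float, bool]:
    """Score a batch of assistant messages and decide whether to alert."""
    safe = sum(safety_classifier(m) for m in batch)
    rate = safe / len(batch)
    alert = rate < SAFETY_THRESHOLD  # page the team if metrics drop
    return rate, alert

rate, alert = monitor([
    "Here is a balanced lunch idea with plenty of protein.",
    "This unverified medical claim cures everything.",
])
```

Running such a check over every batch of production messages is what turns the classifier from a one-off filter into an always-on alerting system.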

Overcoming challenges and ensuring scalability

Scaling an AI assistant in healthcare isn’t easy. 

One of our biggest challenges has been maintaining consistent monitoring as the Health Assistant’s interactions grow. Our automated monitoring system helps us address this by automating key evaluation processes, allowing us to scale safely without compromising quality.

Of course, human oversight remains essential. We conduct regular audits and expert human reviews to ensure the system operates effectively, and to catch any edge cases that automation might miss. This combination of automation and human expertise is key to building trust in AI-driven healthcare.

Guided by regulatory governance to ensure world-class safety systems 

As debates about the ethics and safety of AI in healthcare grow, Numan is setting the benchmark for responsible innovation, embedding governance into every layer of its AI systems. This includes working towards compliance with ISO/IEC 42001, the first international standard established to guide the development of AI management systems.

The ISO framework offers detailed guidance and a structured approach to managing AI-related risks, effectively balancing governance with innovation. We demonstrate our commitment to responsible AI deployment by proactively incorporating the standard’s requirements into the very design of our management system, from inception through to execution.
