Patient engagement and AI in healthtech

The 2024 European Digital Healthtech Conference highlighted best practices for the robustness and trustworthiness of AI-based digital health technologies.

Validating digital health technologies is a complex process with many components. At the European Digital Healthtech Conference, organised by Luxinnovation, speakers explored crucial aspects such as clinical trials for digital technologies, societal readiness for these innovations, and the vital engagement of patients in the development process.

Patients as partners

Milana Trucl, Policy Officer at the European Patients’ Forum (EPF), emphasised the essential role of patients in shaping digital health innovations from the outset. The EPF, which represents around 80 patient organisations across Europe, defends patients’ interests through education, empowerment and collaboration. The aim is to integrate patients meaningfully into health policy, practice, research and other processes.

“Patient preferences should be at the heart of digital health solutions. They should be seen as partners, and their contribution should be planned in advance. This means it should be built into the design, rather than patients being approached either at the last minute or simply as a tick-the-box exercise,” she stated.

This approach builds trust, enhances legitimacy, improves clinical trial participation rates, aligns research priorities effectively, and underscores real-world unmet needs.

The EPF is engaged in major European projects that strengthen patient involvement in health technology assessments (HTAs). The European Capacity Building for Patients (EUCAPA) initiative equips patient advocates with the skills and expertise to contribute to HTAs, ensuring they understand the legal framework and other critical aspects. The EDiHTA project aims to create the first European digital health technology assessment framework, co-developed by stakeholders including patients.

Evaluation of digital medical devices

Sarah Zohar, Director of Research at Inserm, discussed the unique challenges in the clinical evaluation of AI-based digital medical devices (DMDs).

“Last year, when we talked about artificial intelligence, people suddenly thought, ‘No, it’s a black box. We don’t know what that is.’ This is not true; it’s just that the indicators are different. You have algorithms, which are essentially computer codes, that address clinical questions,” she pointed out.

She introduced a comprehensive set of 44 indicators, developed by AI experts across France, to assess the quality and robustness of AI algorithms embedded in DMDs. Such assessments are critical for informed reimbursement decisions. The indicators cover aspects such as data quality, algorithm training and validation, resilience, and interpretability.

“With these types of indicators, if we can also harmonise them across Europe, we will be able to evaluate and make decisions without needing specific knowledge or the ability to read the code.”

Promoting trustworthy AI in healthcare

Hannah-Marie Weller, Policy Officer at the European Commission’s DG Connect, outlined the EU’s approach to fostering trustworthy AI in healthcare. Central to this approach is the Artificial Intelligence Act, approved by the Council of the European Union in May 2024.

“The aim is to ensure the AI systems that are being put on the market are safe, respect fundamental rights and values, while also stimulating investment in innovation,” she explained. This legislation focuses particularly on high-risk devices.

“It is indeed the case that AI systems in the healthcare sector will likely fall under the AI Act because they are typically high-risk systems. This is due to their direct impact on people’s health and safety. Many AI systems in healthcare, such as clinical decision support systems, patient monitoring systems, AI for robotic surgery, and others, will therefore need to comply with the AI Act. These systems can be either standalone, like AI software, or embedded components within medical devices,” she clarified.

In practical terms, the legislation serves as a catalyst for innovation in healthcare. According to Ms. Weller, “It enhances trust among patients and healthcare professionals, provides market certainty, and facilitates regulatory sandboxes.” The latter refers to controlled environments for testing and validating AI solutions in compliance with the AI Act. One objective is to establish at least one such sandbox in every EU member state.

She also discussed initiatives such as GENAI4EU and AI Factories, which support AI startups and facilitate the validation of AI models, including in health settings. Funding comes primarily through EU programmes like Horizon Europe, Digital Europe, and EU4Health, which promote research, deployment and uptake of AI innovations.
