Trustworthy and Responsible AI Network expands to help European healthcare organizations enhance the quality, safety and trustworthiness of AI in health

Monday, at HLTH Europe, the Trustworthy & Responsible AI Network (TRAIN), a consortium of healthcare leaders, announced its expansion to Europe with the objective of helping organizations in the region operationalize responsible AI through technology-based guardrails. Organizations that have come together to form the European TRAIN include Erasmus MC (the Netherlands), HUS Helsinki University Hospital (Finland), Sahlgrenska University Hospital (Sweden), Skåne University Hospital (Sweden), Universita Vita-Salute San Raffaele (Italy), and University Medical Center Utrecht (the Netherlands), with Microsoft as the technology enabling partner. Foundation 29, a nonprofit organization that aims to empower patients and transform healthcare through data-driven initiatives and innovative technologies, has also joined European TRAIN. The network is open to other healthcare organizations in Europe interested in joining.

Emerging AI technologies hold significant promise for revolutionizing the healthcare sector in Europe and across the globe. By enhancing patient care outcomes, streamlining processes and reducing costs, AI has the potential to transform the industry. As the technology continues to evolve, robust development and evaluation standards are crucial to ensure responsible and effective AI applications. TRAIN aims to improve the quality, safety and trustworthiness of AI tools implemented in healthcare to help ensure clinicians and patients benefit from this innovative technology.

TRAIN’s initial formation, announced in March 2024, brought together leading healthcare organizations in the U.S. The consortium’s operational objectives include:

  • Providing technology and tools that enable trustworthy and responsible AI principles to be operationalized at scale.
  • Working in collaboration with other TRAIN members and key stakeholders to enable all organizations, including low-resource settings, to benefit from technology-based responsible AI guardrails.
  • Sharing best practices related to the use of AI in healthcare settings, including the safety, reliability and monitoring of AI algorithms, and the skillsets required to manage AI responsibly. Data and AI algorithms will not be shared between member organizations or with third parties.
  • Working toward enabling registration of AI used for clinical care or clinical operations through a secure online portal.
  • Providing tools to measure outcomes associated with the implementation of AI, including best practices for studying the efficacy and value of AI methods in healthcare settings and for leveraging privacy-preserving environments, in both pre- and post-deployment settings. Tools that allow analyses to be performed on subpopulations to assess bias may also be provided.
  • Working toward the development of a federated AI outcomes registry for organizations to share among themselves. The registry will capture real-world outcomes related to efficacy, safety and optimization of AI algorithms.