Concept & Risk Analysis

Appropriate transparency & explainability

Data & digital tools: 

Developers of digital CMC tools should use an assessment framework as a communication tool within and between drug developers and regulatory authorities, across multidisciplinary teams, to increase transparency and provide a shared understanding of the digital CMC tool at the planning stage. Early alignment with regulatory authorities (e.g. under ICH M15) facilitates subsequent acceptance of digital CMC tool evidence. Transparency and explainability practices should align with ICH M15, ISO/IEC 22989, and WHO AI guidance.

Personnel Training:

Understanding of explainability expectations in a regulatory context.

Links:

ICH Q8(R2): https://database.ich.org/sites/default/files/Q8%28R2%29%20Guideline.pdf

ICH Q9(R1): https://database.ich.org/sites/default/files/ICH_Q9%28R1%29_Guideline_Step4_2025_0115_0.pdf

ICH Q10: https://database.ich.org/sites/default/files/Q10%20Guideline.pdf

ICH Q12: https://database.ich.org/sites/default/files/Q12_Guideline_Step4_2019_1119.pdf

ICH Q13: https://database.ich.org/sites/default/files/ICH_Q13_Step4_Guideline_2022_1116.pdf

ICH M15: https://database.ich.org/sites/default/files/ICH_M15_EWG_Step2_DraftGuideline_2024_1031.pdf

ISO/IEC 5259:2024-2025 (AI data quality management bundle): https://www.iso.org/publication/PUB200525.html

ISO/IEC TR 5469:2024 (Artificial intelligence — Functional safety and AI systems): https://www.iso.org/standard/81283.html

ISO/IEC TS 6254:2025 (Information technology — Artificial intelligence — Objectives and approaches for explainability and interpretability of machine learning (ML) models and artificial intelligence (AI) systems): https://www.iso.org/standard/82148.html

ISO/IEC 12792:2025 (Information technology — Artificial intelligence (AI) — Transparency taxonomy of AI systems): https://www.iso.org/standard/84111.html

ISO/IEC CD TS 22440 (Artificial intelligence – Functional safety and AI systems): https://www.iso.org/standard/89535.html

ISO/IEC 22989:2022 (Information technology — Artificial intelligence — Artificial intelligence concepts and terminology): https://www.iso.org/standard/74296.html

ISO/IEC 23053:2022 (Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)): https://www.iso.org/standard/74438.html

ISO/IEC TR 24028:2020 (Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence): https://www.iso.org/standard/77608.html

ISO/IEC TR 24368:2022 (Information technology — Artificial intelligence — Overview of ethical and societal concerns): https://www.iso.org/standard/78507.html

ISO/IEC 42001:2023 (Information technology — Artificial intelligence — Management system): https://www.iso.org/standard/42001

IEEE 7001:2021 (IEEE Standard for Transparency of Autonomous Systems): https://standards.ieee.org/ieee/7001/6929/

WHO Ethics and Governance for AI for Health: https://www.who.int/publications/i/item/9789240084759

WHO Sharing and reuse of health-related data for research purposes: WHO policy and implementation guidance: https://iris.who.int/bitstream/handle/10665/352859/9789240044968-eng.pdf?sequence=1

WHO Data Principles: https://www.who.int/docs/default-source/world-health-data-platform/who-data-principles-10aug-%283%29.pdf

ASME V&V 40 (Assessing Credibility of Computational Modeling through Verification and Validation: Application to Medical Devices): https://www.asme.org/codes-standards/find-codes-standards/assessing-credibility-of-computational-modeling-through-verification-and-validation-application-to-medical-devices

FDA Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles: https://www.fda.gov/medical-devices/software-medical-device-samd/transparency-machine-learning-enabled-medical-devices-guiding-principles

FDA Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products (Guidance for Industry and Other Interested Parties): https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological

FDA Assessing the Credibility of Computational Modeling and Simulation in Medical Device Submissions: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/assessing-credibility-computational-modeling-and-simulation-medical-device-submissions

FDA Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing

EMA and FDA set common principles for AI in medicine development: https://www.ema.europa.eu/en/news/ema-fda-set-common-principles-ai-medicine-development-0

EMA Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle: https://www.ema.europa.eu/system/files/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle-en.pdf

UK Government Implementing the UK’s AI regulatory principles: initial guidance for regulators: https://www.gov.uk/government/publications/implementing-the-uks-ai-regulatory-principles-initial-guidance-for-regulators