Regulatory Use & Lifecycle Management
Fairness
Data & digital tools:
The performance of the deployed system should be continuously monitored. Performance degradation of the model over time can be attributable to changes in the environment (e.g. societal practices and norms, emerging behaviours, changing input population composition) and to changes in requirements. Further, a system can be biased towards a historical position. If indications of unwanted bias are present, the system can be retrained or re-engineered. (Ref. ISO 24027)
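The monitoring-and-review loop described above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the class name, window size, baseline accuracy, and tolerance threshold are all illustrative assumptions, and a production system would monitor multiple validated metrics, not a single rolling accuracy.

```python
# Minimal sketch of continuous performance monitoring for a deployed model.
# All names and thresholds here are illustrative assumptions, not taken
# from ISO 24027 or any regulatory guidance.
from collections import deque


class PerformanceMonitor:
    """Tracks a rolling accuracy window and flags degradation
    against a previously validated baseline."""

    def __init__(self, baseline_accuracy, window_size=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.window = deque(maxlen=window_size)

    def record(self, prediction, actual):
        # Record whether each live prediction matched the observed outcome.
        self.window.append(1.0 if prediction == actual else 0.0)

    def rolling_accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def needs_review(self):
        # Flag for retraining/re-engineering once the window is full and
        # accuracy has dropped below baseline minus tolerance.
        acc = self.rolling_accuracy()
        return (len(self.window) == self.window.maxlen
                and acc < self.baseline - self.tolerance)


monitor = PerformanceMonitor(baseline_accuracy=0.95, window_size=50)
# Simulate degradation: 40 correct predictions, then 10 incorrect ones.
for pred, actual in [(1, 1)] * 40 + [(1, 0)] * 10:
    monitor.record(pred, actual)
print(monitor.rolling_accuracy())  # 0.8
print(monitor.needs_review())      # True
```

A flag from such a monitor would be the trigger for the human review, retraining, or re-engineering step described above, not an automated corrective action.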
Personnel Training:
Proper training and support for users of the AI system are important to enable effective use of the product. This includes guidance for system developers on what constitutes appropriate and inappropriate deployment of an AI system. (Ref. ISO 24027)
Links:
ICH Q1: https://database.ich.org/sites/default/files/Q1A%28R2%29%20Guideline.pdf
ICH Q2(R2): https://database.ich.org/sites/default/files/ICH_Q2%28R2%29_Guideline_2023_1130_ErrorCorrection_2025.pdf
ICH Q7: https://database.ich.org/sites/default/files/Q7%20Guideline.pdf
ICH Q9(R1): https://database.ich.org/sites/default/files/ICH_Q9%28R1%29_Guideline_Step4_2025_0115_0.pdf
ICH Q10: https://database.ich.org/sites/default/files/Q10%20Guideline.pdf
ICH Q12: https://database.ich.org/sites/default/files/Q12_Guideline_Step4_2019_1119.pdf
ICH Q13: https://database.ich.org/sites/default/files/ICH_Q13_Step4_Guideline_2022_1116.pdf
ICH M15: https://database.ich.org/sites/default/files/ICH_M15_EWG_Step2_DraftGuideline_2024_1031.pdf
ISO/IEC 5259:2024-2025 (AI data quality management bundle): https://www.iso.org/publication/PUB200525.html
ISO/IEC TS 12791:2024 (Information technology — Artificial intelligence — Treatment of unwanted bias in classification and regression machine learning tasks): https://www.iso.org/standard/84110.html
ISO/IEC TR 24027:2021 (Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making): https://www.iso.org/standard/77607.html
ISPE GAMP Guide: Artificial Intelligence: https://ispe.org/publications/guidance-documents/gamp-guide-artificial-intelligence
FDA Good Machine Learning Practice for Medical Device Development: Guiding Principles: https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles
FDA Guidance for Industry Process Validation: General Principles and Practices: https://www.fda.gov/files/drugs/published/Process-Validation--General-Principles-and-Practices.pdf
FDA Adverse Event Reporting System (FAERS) Public Dashboard: https://www.fda.gov/drugs/fdas-adverse-event-reporting-system-faers/fda-adverse-event-reporting-system-faers-public-dashboard
FDA Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological
EMA and FDA set common principles for AI in medicine development: https://www.ema.europa.eu/en/news/ema-fda-set-common-principles-ai-medicine-development-0
EMA Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle: https://www.ema.europa.eu/system/files/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle-en.pdf
EMA Guideline on process validation for finished products - information and data to be provided in regulatory submissions: https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-process-validation-finished-products-information-and-data-be-provided-regulatory-submissions-revision-1_en.pdf
EMA Artificial Intelligence: https://www.ema.europa.eu/en/about-us/how-we-work/data-regulation-big-data-other-sources/artificial-intelligence