Statistical Evidence in AI Systems
The clinical use of AI systems requires plausible, robust, and reproducible behavior, especially in clinical trials. The CAIMed groups "Collaborative Development & Validation of AI" and "AI Systems for Integrative Multi-Omics Data" provide molecular data; this group focuses on validation and statistical evidence for the AI systems developed from these data. Quantifying prediction uncertainty is central to this work, as uncertainty is an important indicator of prediction quality, especially in a medical context.
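The text does not specify which uncertainty-quantification method the group uses; one standard, distribution-free option is split conformal prediction, sketched below on synthetic data (all variable names and numbers are illustrative, not from any CAIMed dataset).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for clinical predictions
X = rng.normal(size=(300, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.8, size=300)

# Split into training and calibration sets
X_tr, y_tr = X[:200], y[:200]
X_cal, y_cal = X[200:], y[200:]

# Fit a simple least-squares model on the training split
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Calibrate: the quantile of absolute residuals on held-out data gives a
# finite-sample-valid interval half-width
alpha = 0.1  # target 90% coverage
resid = np.abs(y_cal - X_cal @ beta)
n = len(resid)
q = np.quantile(resid, np.ceil((n + 1) * (1 - alpha)) / n)

# Prediction interval for a new sample
x_new = rng.normal(size=3)
pred = x_new @ beta
lower, upper = pred - q, pred + q
print(f"prediction {pred:.2f}, 90% interval ({lower:.2f}, {upper:.2f})")
```

The appeal of this construction in a medical setting is that its coverage guarantee holds without assumptions on the underlying model, at the cost of a held-out calibration split.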
To minimize data privacy issues, federated learning methods are applied in collaboration with the above-mentioned groups. Applications range from cardiology to the optimization of treatment protocols in oncology. Interpretable AI and explainability are also relevant topics, because molecular variables show complex interaction patterns.

At the patient level, randomized controlled trials are the gold standard for therapy evaluation. When randomization is ethically unacceptable, non-randomized controlled trials may provide the next-best evidence; causal inference procedures based on counterfactual models can help here. The increasing availability of non-randomized data allows different study types to be combined, but this requires a weighted evidence synthesis. The group is therefore developing evidence-synthesis procedures based on counterfactual methods, using meta-analysis techniques and artificial neural networks.
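The group's own synthesis procedure is not described here; as a point of reference, a weighted evidence synthesis can be sketched as a fixed-effect inverse-variance meta-analysis in which non-randomized estimates are down-weighted. All numbers below are synthetic, and the down-weighting factor is an arbitrary illustration, not a recommended value.

```python
import numpy as np

# Illustrative treatment-effect estimates (e.g., log hazard ratios) with
# standard errors; values are synthetic, not from any real trial.
effects = np.array([-0.30, -0.25, -0.40, -0.10])
ses = np.array([0.10, 0.12, 0.15, 0.08])
randomized = np.array([True, True, False, False])

# Fixed-effect inverse-variance weights
w = 1.0 / ses**2

# Down-weight non-randomized evidence to reflect its higher risk of bias
# (the factor 0.5 is purely illustrative)
w = np.where(randomized, w, 0.5 * w)

# Pooled effect and its standard error
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled effect {pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

In practice, the weighting of non-randomized evidence would itself be estimated (e.g., via bias models or hierarchical priors) rather than fixed by hand, which is exactly where counterfactual methods and learned components enter.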