3 questions for...
Prof. Dr. rer. hum. biol. Björn-Hergen Laabs
Group Leader
Prof. Dr. rer. hum. biol. Björn-Hergen Laabs joined the Department of Medical Statistics at the University Medical Center Göttingen as a junior professor. Before that, he worked at the University of Lübeck on the explainability of AI models and on the genetic and epigenetic factors underlying movement disorders such as dystonia and Parkinson’s disease. Within CAIMed, he leads the junior research group “Statistical Evidence in AI Systems”, where his focus is on the explainability, uncertainty, and causality of predictions. His goal is for AI models to supply enough context with every prediction that its quality and plausibility can be evaluated in clinical practice.
1.
Which challenges do you see with the application of statistical and AI models from research to everyday clinical practice?
I believe that one of the main challenges at the moment is that models are too often trained in such a way that they a) always deliver an output and b) leave the interpretation entirely to the user without providing any further context. This can lead either to general mistrust (keyword: hallucinating AI) or to blind trust. I think that measures of how confident the AI is in its prediction, together with explanations of how a prediction was made, are essential for making AI usable in everyday clinical practice.
2.
You work with machine learning and interpretable models. Why is transparency of AI decisions so crucial to medicine?
AI decisions that are not comprehensible will, in the medium term, lead either to a general rejection of AI models, because it is impossible to understand why they sometimes make mistakes, or to blind trust in AI, which can result in dramatic misjudgments. Measures of prediction uncertainty and an explanation of how a prediction was made should therefore accompany every prediction, so that users can better assess its quality.
3.
When thinking about the future, what new data sources could particularly enrich your research?
I think that a key factor for the success of AI in everyday clinical practice will be the availability of routine data, which is already being collected in large quantities in hospitals. This routine data is increasingly becoming available for research purposes as well, allowing us as researchers to focus on developing AI models that can be integrated as seamlessly as possible into existing hospital processes.