Externally Funded Project (2020-2023)
AI-Trust is a project in the program “Verantwortliche Künstliche Intelligenz” (“Responsible Artificial Intelligence”) of the Baden-Wuerttemberg Foundation (Baden-Wuerttemberg Stiftung).
Project Description
The super-convergence of digital technologies – big data, smart sensors, artificial neural networks for deep learning, high-performance computing, and other advances – enables a new generation of ‘intelligent’ systems, often grouped under the notion of the ‘AI system’. This megatrend is leading to a profound transformation of all sectors of society, from education and industrial production to logistics, science, and healthcare, and poses real and imminent ethical and legal challenges. The research group AI-Trust examines the challenges of Interpretable Artificial Intelligence Systems for Trustworthy Applications in Medicine.
While deep learning methods promise substantial performance gains in many application domains, the solutions they provide are not readily comprehensible to human users. Such a black-box approach may be acceptable in some domains, but in medical applications in particular, transparency is necessary so that clinicians can understand the decisions of a trained machine learning model and ultimately validate and accept its recommendations. Recently, a number of methods have been developed to provide more insight into the representations that these networks learn. So far, however, there has been no systematic comparison of these methods. For automated EEG diagnosis especially, it is not clear which of these methods provides the most helpful information for the practitioner under which circumstances. Furthermore, it is unclear to what extent these methods serve the overarching goals of interpretability, explainability, and comprehensibility, and hence how they may foster trust in the ‘AI system’.
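To make concrete the kind of method under comparison, the following is a minimal sketch of one widely used interpretability technique, a gradient-based saliency map (‘vanilla gradients’), applied to a toy EEG classifier. The network architecture, channel count, and sampling rate here are illustrative assumptions, not the project's actual DeepEEG system.

```python
# Minimal sketch of a gradient-based saliency map ("vanilla gradients"),
# one example of the interpretability methods the project compares.
# The toy CNN and synthetic EEG tensor are illustrative assumptions only.
import torch
import torch.nn as nn

# Toy 1D-CNN classifier over multichannel EEG (channels x time samples).
model = nn.Sequential(
    nn.Conv1d(in_channels=21, out_channels=16, kernel_size=11),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # e.g. "normal" vs. "abnormal" EEG (hypothetical labels)
)
model.eval()

# Synthetic 2-second EEG segment: 21 channels at 256 Hz.
eeg = torch.randn(1, 21, 512, requires_grad=True)

# Saliency: gradient of the predicted class score w.r.t. the input.
logits = model(eeg)
logits[0, logits.argmax()].backward()
saliency = eeg.grad.abs()  # shape (1, 21, 512): per-channel, per-sample relevance

# Channels and timepoints with large values are those the model's output is
# most sensitive to -- one (limited) notion of "explanation" for clinicians.
print(saliency.shape, saliency.max())
```

Gradients like these indicate local sensitivity rather than a full explanation of the model's decision, which illustrates why the project treats interpretability, explainability, and comprehensibility as distinct goals that different methods may serve to different degrees.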
Our research project AI-Trust follows an ‘embedded ethics and law’ approach that investigates the ethical and legal challenges of a deep learning-based assistive system for EEG diagnosis (the ‘DeepEEG’ system) throughout the research and development phase, in order to provide normative guidance for the system's development. This approach is intended to exemplify how including ethical and legal expertise in the development of medical ‘AI systems’ can help leverage AI's innovation potential while ensuring a responsible and trustworthy ‘ethics-and-law-by-design’ development process. More generally, it will demonstrate how the enormous societal challenges posed by ‘AI systems’ can be framed, and the problems they raise can be solved, from both a conceptual (philosophical, ethical, legal) and a technical perspective.
Project Partners
Project Coordinator
Dr. med. Philipp Kellmeyer, M.Phil., Human-Technology Interaction Lab, University Medical Center Freiburg
Principal Investigators
- Prof. Dr. Tonio Ball, Neuromedical AI Lab, University Medical Center Freiburg
- JProf. Dr. Joschka Boedecker, Neurobotics Lab, University of Freiburg
- Prof. Dr. Wolfram Burgard, Autonomous Intelligent Systems, University of Freiburg
- Prof. Dr. Oliver Müller, Nexus Experiments and Department of Philosophy, University of Freiburg
- Prof. Dr. Silja Voeneky, Institute of Public Law, Dept. 2 (Public International Law and Comparative Law), University of Freiburg
External Partners
- Dr. phil. Philippe Merz, Thales Akademie, Freiburg im Breisgau
- Christoph Boehm, SAP, Walldorf