Responsible Artificial Intelligence
AI can be a driver of social innovation and the common good. We analyze human-AI interactions to inform responsible AI governance.
AI and related digital technologies have become a disruptive force in our societies, and calls for ethical frameworks and regulation have grown louder. We hold that responsibility is a key concept for anchoring AI innovation in human rights, ethics, and human flourishing.

Ethics

Law

Responsibility

Projects

The Cambridge Handbook of Responsible Artificial Intelligence

“The Cambridge Handbook of Responsible Artificial Intelligence – Interdisciplinary Perspectives” will be published in September 2022. It comprises 28 chapters written by participants of our virtual conference “Global Perspectives on Responsible AI”. The book provides conceptual, technical, ethical, social, and legal perspectives on “Responsible AI” and discusses pressing governance challenges for AI and AI systems over the next decade from a global and transdisciplinary perspective.

AI-TRUST

AI-TRUST develops an ‘embedded ethics and law’ approach to investigate the ethical and legal challenges of a deep learning-based assistive system for EEG diagnosis. Throughout the research phase, we will jointly develop normative guidance for the development of the system.

RESCALE

RESCALE is a multidisciplinary project on meta-learning for assistive robots, in which we pursue a design-based approach involving mixed-methods qualitative research.

KIDELIR

KIDELIR is a multidisciplinary project on AI-based decision support for predicting delirium in clinical patients.

News

Responsible AI visits Care Robot LIO

On August 4th, 2021, Dr. Philipp Kellmeyer and Patricia Gleim, from the Responsible AI research group at the Freiburg Institute for Advanced Studies (FRIAS) of the University of Freiburg, visited the care robot LIO (F&P Robotics) at the St. Marienhaus nursing home in Constance.

New Research Project AI-TRUST

AI-TRUST – Interpretable Artificial Intelligence Systems for Trustworthy Applications in Medicine (starting October 2020) with Dr. Philipp Kellmeyer, Prof. Dr. Tonio Ball, Prof. Dr. Wolfram Burgard, Prof. Dr. Oliver Müller, and Asst. Prof. Dr. Joschka Boedecker.

Who we are

Leadership

Silja Vöneky
Oliver Müller
Philipp Kellmeyer

Research Team

We value networks and exchange across different disciplines and approaches. This is also reflected in our team and our large project group.

We are located at:

Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg

Responsible Artificial Intelligence

Prof. Dr. Joschka Boedecker
Dr. Philipp Kellmeyer
Prof. Dr. Oliver Müller
Prof. Dr. Silja Vöneky

Freiburg Institute for Advanced Studies (FRIAS)
Albert-Ludwigs-Universität Freiburg
Albertstr. 19
79104 Freiburg
Germany
Phone: 0049 (0)761 203-97354
E-Mail:

Follow us on X: @responsibleAI1
