Controlling intransparency

AI in large organisations and companies

Project overview

The research project "Controlling intransparency - Artificial intelligence in large organisations and companies" is dedicated to the question of how appropriate governance structures and ethical principles for the use of AI can be developed and implemented under conditions of uncertainty and intransparency. In view of the disruptive dynamics of artificial intelligence, large organisations and companies in particular are faced with the challenge of finding responsible and reflective ways of dealing with AI technologies.

The project centres on the following questions, among others:

  • How can AI systems - taking into account potential risks and dangers - be deployed and implemented responsibly?
  • How can risks, biases and a lack of transparency in the use of AI be observed, understood and, where necessary, mitigated?
  • What is the relationship between social systems and AI, and to what extent can AI generate ethical added value within this field of tension?


The project aims to advance the fields of governance, AI ethics and responsibility - both through scientific research and through teaching programmes. It also seeks to intensify the transfer of knowledge between theoretical reflection and practical application. Of particular interest is the question of how large organisations respond to the technological upheavals triggered by the use of AI.

The project is part of an initial three-year research collaboration between Witten/Herdecke University and Deloitte. As part of this cooperation, a doctoral position on the ethics of artificial intelligence will be established at the Chair of Sociology in the Faculty of Health, under the direction of Prof. Dr Werner Vogd. The aim is to strengthen the field of AI ethics in research and teaching over the long term.

Further information

  • Duration: ongoing
  • Responsible: Chair of Sociology

Project management