The development and deployment of autonomous weapons systems (AWS) is one of the most pressing problems of international security and peace. Stakeholders from politics, civil society, the military, science, and academia are debating whether and to what extent the expansion of AI-based machine autonomy threatens a loss of human control over crucial acts of war. The transregional and interdisciplinary competence network “Meaningful human control, AWS between regulation and reflection” brings together researchers and academics from Science and Technology Studies (STS), Robotics, Law, Sociology, Physics, Political Science, Gender Studies, and Media Studies, as well as interdisciplinary fellows from the Global South. It aims to situate hitherto unconnected problem descriptions and disparate concepts historically and culturally within an interdisciplinary research programme. Its goals are: 1. a comprehensive understanding of the sociocultural dimension of AWS; 2. a complex technical understanding of the sociomaterial agency of AI-based AWS; 3. the pooling of transclassical competences in peace studies; 4. the transition of academic results on “AWS and Meaningful Human Control” into broader public discourse.
Funding institution: Federal Ministry of Education and Research
Duration: 4 years
Cooperative decision-making vs. individual responsibility
Prof. Dr. Susanne Beck
In international humanitarian law, the increasing focus on individual or criminal responsibility is of great importance. It is the basis for mutual recognition as persons capable of acting, for (re)establishing trust that norms are upheld, and for the peacekeeping effect in society. In view of the further development of autonomous systems, there are demands for Meaningful Human Control (MHC) – not only regarding weapons systems but also in other contexts, such as medicine. As a follow-up to a research project in medicine, the significance of technical implementation, the illusion of one’s own agency for MHC, the concept of responsibility, and its related social function are to be analysed from the perspective of (international) criminal law.
AI and the human understanding of meaning in law
Prof. Dr. Susanne Krasmann
Starting from the observation that they speak different languages, the project asks how the law and algorithm-based decision systems communicate with each other and how each understands its respective subject. Interested in the forms of interaction between sense-making actors, algorithms, and their subjects, the project is also concerned with epistemological questions. It inquires into the new forms of knowledge production, and thus of human thinking, as well as into the modes of living together that so-called artificial intelligence introduces.
Scenarios of interaction – human-machine interfaces in the discussion about AWS.
PD Dr. Christoph Ernst
Bonn, Media Studies/Media History
The possibilities for control and regulation are tied not only to actors, institutions, or technology but also depend on media conditions such as interfaces. The main goal of this subproject is to investigate these conditions on the basis of scenarios of human-machine interaction in the context of autonomous weapons systems (AWS) and to open them up for critical reflection. On the one hand, it will analyse how concepts such as “Meaningful Human Control”, responsibility, or autonomy can be produced or ensured technically in particular interface solutions. On the other hand, the project will elaborate which notions of technical control, supervision, or autonomy are constructed in politics, academic research, industry, and fictional representations. A central thesis of the subproject is that these imaginings, in turn, shape the reality of autonomous weapons to a considerable extent.
Swarm Technologies. Control and autonomy in complex weapons systems.
Prof. Dr. Jutta Weber
Paderborn, Media Studies/Media, Culture and Society/STS
This subproject analyses current concepts and sociotechnical imaginations of learning-capable autonomous drone swarms in current military thinking and elucidates the implications for the human-machine relationship and for future forms of warfare. On the one hand, there are analyses oriented towards military strategy that seek to achieve a new quality of autonomy and cognitive performance in weapons systems through biomimetic and complexity-theoretical concepts of the behaviour, control, and controllability of drone swarms. On the other hand, critics point to the fundamental unpredictability of complex swarm behaviour and challenge the idea of the responsibility of a ‘human on the loop’.
RoboCup test scenario
Prof. Dr.-Ing. Reinhard Gerndt
Ostfalia, Robotics/Computer Science
Within the competence network, this subproject contributes a test scenario as an illustrative example. It is based on the overall goal of RoboCup – a football match between autonomous humanoid robots and humans. Reduced in complexity and less normatively charged than a weapons system, the scenario reflects the central topics of the competence network as well as of the individual subprojects (individual responsibility, predictability, interaction). Both the attribution of rule violations committed by robots in the game – to the autonomous robot ‘player’ or to its programmers – and the question of an automatic (AI) referee or an ‘ethics module’ on the robot player (cf. Arkin) are practical and transferable issues. In this way, the scope and limits of controllability and control can be discussed in a complexity-reduced manner.