Vulnerability and Artificial Intelligence

Deadline: January 5th, 2024

Since the release of ChatGPT, applications of artificial intelligence (AI) have been the subject of considerable hype in the public sphere, which seems to further accelerate the long-term process of continuously implementing machine learning techniques. From chatbots to applications in industrial production, cybersecurity, and predictive policing, AI is making its way into technical infrastructures as well as economic, political, and media systems. On the one hand, AI is praised as a superior solution to complex problems, enhancing efficiency and optimizing systems to make them smart, flexible, adaptive, or resilient. On the other hand, AI is problematized because of its opacity (the black-box problem), the risks of specific applications, or its inherent biases. The concept of vulnerability allows these contrasting assessments to be interconnected and brought into dialogue.

AI is associated with various expectations that it will support the management of societal vulnerabilities, for example in the context of political debates on social media or the consequences of climate change. At the same time, AI applications can potentially foster new vulnerabilities. These vulnerabilities arise not only from the characteristics of the technology itself, but also from its interaction with other socio-technical entities. The properties of current AI have sociological implications that have so far been insufficiently explored. This leads to the following questions:

  • What concepts and theoretical approaches can be used to analyse the relationship between vulnerability and AI?
  • Which socio-technical systems are modified by AI, and what are the consequences for their vulnerability?
  • Does vulnerability arise from AI itself or from the context of its implementation?

We invite contributions that address these or related questions concerning the relationship between AI and societal vulnerabilities. Theoretical and conceptual contributions as well as empirical contributions, such as case studies, are welcome.

Submissions:

  • Please send your abstracts to Alexandros.gazos(at)kit.edu by January 5th, 2024
  • Length of abstracts: max. 2000 characters (incl. spaces)
  • Workshop language(s): German, English