
Unintended and Illegitimate Consequences of LLMs and Their Impact on Society

D. Adler, C. Böttger, U. Coester, Prof. Norbert Pohlmann (Institut für Internet-Sicherheit):
"Unintended and Illegitimate Consequences of LLMs and Their Impact on Society".
In: 12th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation / 2nd AISoLA Doctoral Symposium, 2024.
Electronic Communications of the EASST, Volume 84,
edited by Sven Jörges, Salim Saay, Steven Smyth, 2025.


Abstract:
In our paper, we examine the consequences of Large Language Models (LLMs) from the perspective that they can cause illegitimate harm if AI providers do not take this potential into account, and we derive the requirements that follow from this. In the first part of the article, we examine the potential for harm that can be caused by LLMs and then use the 'do no harm' principle to illustrate why AI providers are, in principle, obligated to employ all available measures to prevent illegitimate harm. The subsequent section details the development of an AI Restriction Framework, which aims to make potential illegitimate harm more visible and thereby serve as a basis for action for both AI providers and users. The overarching objective of our research is to establish a foundation for a shared understanding of the potential (illegitimate) harms that may arise from LLMs and to provide a focal point for a more informed societal discourse on their utilization.

Keywords: LLM, ChatGPT, Ethics, AI Act, harm, 'do no harm' principle

1 Introduction
A significant proportion of individuals use technology in the belief that it will benefit them, whether through enhanced efficiency or the facilitation of their professional tasks. This notion recalls Goethe's ballad of the "Zauberlehrling" (sorcerer's apprentice), who sought to simplify his tasks with a spell learned from his master but, lacking knowledge of the counter-spell, let the situation spiral out of control. However, can it be asserted with certainty that, in contrast to the sorcerer's apprentice, everyone today has a precise understanding of what they are doing? It seems questionable whether individuals – developers and users alike – are aware of the measures required to ensure that Artificial Intelligence (AI) does not become uncontrollable. Moreover, can such a state even be defined in advance? And, most importantly, according to which criteria can such a decision be made [Gab18, pp. 208–209]? Beyond these complex, future-oriented questions, there is also a need for a thorough evaluation of the value alignment of individual actors, particularly international technology companies, from a more immediate perspective: the fundamental question of whether their motives and actions today are actually geared towards providing AI that prioritizes human well-being and needs – which would also entail a commitment to take every possible measure to avoid causing (illegitimate) harm to people.
Since the value alignment of most technology companies remains unclear, is known only in rudimentary form, or is currently being questioned, a fundamental requirement can be derived: to empower individuals within a society to lead lives largely consonant with their moral concepts – for example regarding privacy or the future of work – it is essential to create a shared understanding of values (i.e. the defining principles of the social structure), conditions, circumstances and expectations concerning technology and the management of its potential implications. To reach a consensus on this matter, a foundation for informed decision-making must be established. This requires a technological approach to the domain of AI security, among other areas, as well as a comprehensive understanding of the potential harms that may arise from the utilization of AI, particularly the illegitimate ones. Harm may occur, for instance, if a chatbot fails by default to recognize a potentially dangerous situation for (vulnerable) users such as children and provides them with inappropriate instructions or guidance – including overlooking signs of self-destructive behavior or failing to refer users to professional assistance when required. The framework presented in this paper could serve as a valuable instrument in this regard, as it evaluates the priorities for AI providers in addressing identified issues while illustrating to users the potential consequences and methods to avert them.


