
IRIS 2025: Humans at the Centre – AI, Ethics & Law

Dear readers,

The 28th International Legal Informatics Symposium (IRIS) is currently taking place at the University of Vienna. Its main theme is AI, Ethics & Law, with humans at the centre. After the successful launch of ChatGPT and many similar large language models (LLMs) and the entry into force of the EU AI Act, the question arises how humans can remain the measure of all things even when supporting software is able to take over many intellectual functions.

This issue of Jusletter IT contains the contributions on the general theme AI, Ethics & Law as well as the contributions from the thematic blocks Legal Information, AI & Law (technical aspects), LegalTech, and Advanced Legal Informatics Systems. As usual, the remaining IRIS contributions will appear in the coming issues.

The printed conference proceedings with all contributions, edited by Erich Schweighofer together with Stefan Eder, Federico Costantini and Felix Schmautzer, can already be ordered now.

We wish you an inspiring read!

Philip Hanke
Publishing Director

Preface
Erich Schweighofer
Stefan Eder
Felix Schmautzer
AI - Ethics - Law
Christiane Wendehorst
Abstract

This version 1.1 of a Tentative Academic Discussion Draft (TADD 1.1) is the slightly revised version of a previous draft, which had been prepared for a workshop held at the University of Vienna on 13–14 December 2024. Its purpose is to stimulate a discussion on a possible reform of EU data protection law. Compared to the previous draft, it takes into account feedback received at the workshop. However, it remains a work in progress, inviting critical comments and suggestions, and will be revised in the light of the feedback received.

Rolf H. Weber
Abstract

The manifold opportunities provided by new technologies and algorithms also create challenges for the inclusion of society as a whole (deprivation, discrimination, distortion). As a counterbalance, the normative framework is called upon to implement measures securing access to and availability of AI applications. Suitable concepts are at hand: the rule of law and the equality principle, combined with AI literacy, can contribute to fairness and non-discrimination as well as counteract tendencies leading to a digital or algorithmic divide.

Rob van den Hoven van Genderen
Abstract

Panic detection at the central station, aggression in the football stadium, fear in the swimming pool, claustrophobia and sweating in the elevator at work, examination stress: emotions that AI systems can detect through the analysis of biometric data such as facial features, body temperature, sweat production and pupillary movement, in order to then take appropriate measures. A wonderful application or a dangerous development? The European legislator is inclined to err on the side of avoidance, treating emotion-processing AI systems in the workplace and in education as posing unacceptable risks. This has led to a ban in the Artificial Intelligence Act (AIA), alongside other unacceptable applications such as manipulation and profiling by AI. The question posed in this article is whether the choice to ban emotional AI in these areas is sensible, or whether it rests too much on risk aversion and fear of the unknown instead of giving way to positive applications. “Himmelhoch jauchzend, zum Tode betrübt”: there is no romance in the AIA's regulation of AI for emotion processing.

Maksymilian M. Kuźmicz
Abstract

The development of AI affects various stakeholders, potentially leading to conflicts of interest: clashes between the interests of different parties. These conflicts can hinder the progress and deployment of AI systems. This study addresses the proactive management of conflicts of interest, aiming to prevent them, and advocates the use of risk identification methods to achieve this goal. The approach is supported by elucidating the concepts of conflict of interest and risk and exploring their interconnections. The paper then demonstrates the application of risk identification methods to conflicts arising from the processing of personal data by AI systems in a care context.

Kanan Naghiyev
Abstract

Legal and ethical issues posed by AI in video game development include, in particular, those related to developer liability, creative rights and consumer protection. This paper investigates the emerging frameworks for regulating AI-driven manipulation, transparency obligations and emotional exploitation. Drawing on these insights, along with an examination of legal adequacy, the paper proposes legislative solutions that balance innovation with player welfare, offering a framework for the responsible implementation of AI while preserving creative freedoms.

Andrej Krištofík
Pavel Loutocký
Anna Blechová
Tereza Novotná
Abstract

The increasing volume of legal disputes, particularly within labour law, has overburdened judicial systems, extending case durations and undermining legal certainty for affected individuals. Recent technological advancements offer potential solutions for alleviating this load through automation and augmented decision-making processes. However, the integration of technology in labour dispute resolution must be approached cautiously, adhering to principles such as Susskind’s maxim, which emphasizes that any technology deployed should substantively improve legal processes. This article explores a structured framework for deploying automation within labour law, identifying case types suitable for automation based on factors like legal and technological complexity. It outlines the need for a multi-tiered approach that escalates from negotiation to binding decisions, tailored specifically for less complex, high-volume cases. The framework also highlights critical legal considerations, such as safeguarding fundamental rights, procedural due process, and human oversight, ensuring that automated systems do not compromise these protections. By establishing pilot cases within a controlled environment, we aim to ensure that the technology’s application not only enhances efficiency but also respects legal standards and safeguards human dignity and autonomy in automated labour dispute resolution.

Pavel Koukal
Abstract

The growing use of artificial intelligence (AI) in content moderation on digital platforms has transformed the regulation of user-generated content, but it also raises challenges when AI mistakenly classifies lawful content as illegal, leading to unjust removals and harm to users. In the European Union, the Digital Services Act (DSA) governs platform responsibilities, promoting transparency and user protections. However, it may not fully address the risks posed by high-risk AI applications. To address these gaps, the EU introduced the AI Act and the proposed Artificial Intelligence Liability Directive (AILD). This paper examines how these frameworks interact, emphasizing the AI Act’s recognition of content moderation tools as high-risk, its human oversight requirements, and the AILD’s liability mechanisms for damages caused by AI errors. By synchronizing the DSA with AI-specific regulations, the paper highlights a pathway to enhance user protection and platform accountability in AI-driven content moderation.

Felix Gantner
Abstract

A long tradition of discourse exists between law and ethics. On the computer science side there are, on the one hand, those who, following Turing, prefer the development of technical systems to engagement with “philosophical questions”. On the other hand, computer scientists also attach great importance to the discourse on ethical questions. This discourse becomes relevant for practice above all when ethical problems are to be represented in AI systems. This is illustrated using the example of the LLM system Gemini and the trolley problem. Of significance for legal informatics in this context are legal trolley applications which, like autonomous vehicles, can cost human lives in the event of failure.

Dawn Branley-Bell
Johannes Feiner
Sabine Prossnegg
Tomer Libal
Abstract

Artificial Intelligence (AI) has arrived in the heart of society, bringing both transformative potential and challenges. This paper explores how humans and AI can work together. To do so, this paper examines the roles and protections afforded to humans under the AI Act (AIA) and the General Data Protection Regulation (GDPR). AI is often portrayed as a tool to support overburdened staff and offer better and/or more accessible services for individuals. However, AI can also be unhelpful, or in some cases, even harmful. This paper contributes to the ongoing discourse on the implications of AI-driven interventions. We support the idea of the incorporation of a priori evaluations as a prerequisite for any intervention that incorporates AI. This involves identifying if – and to what extent – the use of AI makes sense in the first place, followed by a clear labelling of any AI elements. We support the inclusion of obligatory and meaningful human oversight, combined with AI literacy where feasible.

Ondřej Böhm
Abstract

This paper addresses the dissemination of disinformation in user-generated content (UGC) and sandbox video games. First, it discusses the specific principles of disinformation dissemination in video games and so-called procedural rhetoric. Second, the paper deals with the problematic understanding and detection of disinformation in video games, including the unclear application of the Digital Services Act to the essence of user-generated content, and outlines the issues that arise from this problematic application, such as the practical ineffectiveness of content moderation or, conversely, potential infringements of the artistic freedom and freedom of expression of UGC game users.

Ahti Saarenpää
Abstract

In the context of the European Union, we are used to talking about different kinds of data spaces. The basic idea is that the freedom of movement of the individual has been complemented by the freedom of movement of data and information. One of the new European spaces is the space for the movement of health data. Here we are dealing both with the transfer of sensitive personal data and, above all, with quality assurance. The transfer of an individual’s health data from one cultural and scientific environment to another is an exceptionally demanding operation. It is not only a question of securing the technological path of the information as such; the more general transfer of care data is also very demanding. That is why the EU also wants strict rules on the use of anonymised or pseudonymised health data for research, innovation and decision-making. Regulation of the information space for health data is a broad issue. It is the first thematic regulation of the data and information space in accordance with the EU data strategy. We are witnessing an extremely important stage of social development in terms of human dignity, one involving many tensions. For example, there has been and still is an undeniable conflict of values and objectives between our right to self-determination and the various re-uses of medical data. Equally, data security needs to be highlighted in a new way in this context. In this brief presentation I will focus on the vital aspects of the transferability of data and information on the quality of care. This is, as I understand it, the necessary starting point for a new, comprehensive set of regulations. The internationalisation of healthcare has made good progress on the linguistic level, but there is still work to be done to harmonise essential treatment information. AI could give this a new impetus.
The second key part of the regulation, the secondary use of health data, is outside the scope of this paper, as is the question of the authorities.
We are entering a new era of the digitally networked society: the Union’s free market is being complemented by new thematic information spaces. The first of these spaces regulates the movement of health data. This contribution examines the mobility, within the European Union, of a person’s health data resulting from treatment. This topic is of great importance from the point of view of securing the quality of care and its ex-post evaluation. It also concerns knowledge as an element of individual self-determination: our right to correct knowledge. The second key element of the regulation, the secondary use of health data and the question of the competence of authorities, lies outside the scope of this contribution.

Ingrid Jez
Thérèse Tomiska
Katrin Forstner
Renata Leordean
Abstract

DiGA (digital health applications) offer manifold opportunities for supporting and treating patients. The further development of AI additionally increases this potential. The desire of practitioners to be able to use DiGA and AI for the benefit of patients is therefore understandably great. However, these new possibilities also raise a host of legal questions, many of which the Austrian legislator has so far left unanswered. In addition, there are manifold epistemological questions that need to be addressed. The present contribution is intended to serve an interdisciplinary discourse.

Leijla Malici
Antonio Paolo Beltrami
Abstract

Biobanks are key tools in advancing scientific research by providing access to large amounts of biological samples and associated data, and their relevance in biomedical research is therefore increasing. At the same time, they raise several ethical, legal, and societal issues, mainly related to the definition of their purposes, the ownership of biological materials, and the protection of personal data. The peculiar two-fold nature of human samples, material and informational, generates uncertainty, primarily regarding ownership and custodianship. Moreover, the need to collect informed consent from donors presents significant challenges regarding the many possible uses of the samples and the application of the principle of transparency in this field. The advent of AI poses new risks related to data protection and fundamental rights. In this contribution we address the main concerns in balancing scientific research with public and private interests and individual rights.

Jakub Karfilát
Michal Koščík
Abstract

Digital Twin (DT) technology, and specifically Digital Patient Twins (DPT), offers transformative potential in advancing personalized healthcare through enhanced diagnostics, treatment, and disease management. However, significant challenges persist, particularly in areas such as data protection, patient autonomy, responsibility and liability, even within existing regulatory frameworks like the GDPR and the AI Act. This paper examines these critical challenges and explores how the current EU regulatory framework addresses some of them, highlighting both strengths and gaps in the legal landscape.

Legal Information - LegalTech - Advanced Legal Informatics Systems
Bettina Mielke
Christian Wolff
Abstract

We discuss the possible use of large language models and generative AI in the legal system and consider how this use can be evaluated. To this end, we review existing approaches to evaluating large language models, in particular the HELM project (Holistic Evaluation of Language Models), and present first exploratory attempts from the German-speaking world to assess the performance of large language models. On this basis, we develop requirements for a systematic methodology for future evaluation studies.
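The benchmark-style evaluation the abstract refers to can be pictured with a minimal sketch. The test items, the dummy model, and the exact-match metric below are invented for illustration; they are not taken from the HELM project or from the paper.

```python
# Toy evaluation harness in the spirit of benchmark-style LLM evaluation.
# Items, "model", and metric are hypothetical illustrations only.

def exact_match_accuracy(items, model):
    """Fraction of items where the model's answer matches the reference."""
    correct = sum(1 for question, reference in items
                  if model(question).strip() == reference)
    return correct / len(items)

# Hypothetical legal test items (question, reference answer).
items = [
    ("Which EU regulation governs personal data?", "GDPR"),
    ("Which EU act regulates AI systems?", "AI Act"),
]

# A dummy "model" that simply looks up the reference answers.
dummy_model = {q: a for q, a in items}.get

print(exact_match_accuracy(items, dummy_model))  # 1.0
```

A real study would replace the dummy model with calls to the system under test and add further metrics (calibration, robustness, fairness) alongside accuracy, as HELM advocates.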

Auke Pals
Floortje Scheepers
Sander Klous
Tim Müller
Rosanne J. Turner
Saba Amiri
Corinne Allaart
L. Thomas van Binsbergen
Lea Dijksman
Adam Belloum
Paola Grosso
Peter Grünwald
Karin Hoogoort
Aki Härmä
Johanna Maria Hegeman
Jamila Alsayed Kassem
Milen Kebede
Cees de Laat
Paul van der Nat
Abstract

Clinical pathways are currently difficult to optimize because the relevant data is sensitive, typically distributed across organizations, and subject to rules and regulations that constrain access and processing. In this paper we describe a federated approach that can significantly reduce the effort required to overcome these obstacles. First, we describe a standard conceptual workflow for optimizing clinical pathways, including all steps and involved stakeholders. This is followed by a translation of the workflow into a real-world scenario with an associated proof of principle to demonstrate how the scenario can be implemented on top of a federated framework. We present the most important results and conclude with an overview of the benefits for each of the stakeholders. Our most important outcomes are: the federated approach offers significant benefits for all relevant stakeholders and has few downsides. A policy-driven framework with embedded policy enforcement is crucial for successful adoption of a federated approach. Integration of safe statistics and synthetic data generation in a federated framework is straightforward and offers additional benefits, especially when setting up healthcare consortia. This solution is almost ready to be adopted by healthcare organizations as part of their regular operations.
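The core federated principle the abstract describes can be sketched in a few lines: raw patient records never leave a site; only aggregate statistics are exchanged with a coordinator. The site names, the record fields, and the mean-of-length-of-stay statistic below are hypothetical, not the authors' framework.

```python
# Minimal illustration of the federated principle: each site computes a
# local aggregate in-house, and only aggregates reach the coordinator.
# Sites, fields, and data are hypothetical.

def local_aggregate(records, key):
    """Runs inside a site's own infrastructure; raw rows stay local."""
    values = [r[key] for r in records]
    return {"n": len(values), "sum": sum(values)}

def federated_mean(aggregates):
    """Runs at the coordinator, which sees only per-site aggregates."""
    n = sum(a["n"] for a in aggregates)
    total = sum(a["sum"] for a in aggregates)
    return total / n

# Hypothetical per-site records (length of stay in days).
site_a = [{"los": 4}, {"los": 6}]
site_b = [{"los": 5}, {"los": 7}, {"los": 8}]

aggs = [local_aggregate(site_a, "los"), local_aggregate(site_b, "los")]
print(federated_mean(aggs))  # mean over all 5 records: 6.0
```

A production framework would add the policy enforcement, safe-statistics thresholds (e.g. suppressing aggregates below a minimum cell count), and synthetic-data facilities the paper emphasizes.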

Michał Araszkiewicz
Michał Araszkiewicz
Abstract

A holistic approach to Legal AI is essential to unify research streams and enable comprehensive support for real-world applications. We introduce HOLAI (Holistic Legal AI), a methodology built as a set of principles for the development of a network of specialized micro-services across five key dimensions. First, HOLAI is grounded in a robust theoretical foundation, integrating computational, cognitive, and legal-theoretical insights. Second, it models legal reasoning through hybrid approaches, combining machine-learning methods and a stratified approach to symbolic reasoning models. Third, HOLAI prescribes integrating a range of functionalities, from information retrieval and legal research to decision-support and decision-automation tools. Fourth, it recommends how to accommodate various legal domains, adaptable to jurisdictional and regulatory differences. Finally, HOLAI postulates a dynamic auto-evaluation mechanism, ensuring continuous alignment with legal and ethical requirements. This adaptive network enables a holistic response to the demands of modern legal practice, advancing flexible applications in Legal AI.

Kai Erenli
Andreas Gruber
Abstract

The OpenWebSearch.EU project, funded by the European Union, seeks to create a European Open Web Index (OWI) and an open web search infrastructure aimed at addressing the dominance of non-European entities in the global search market. The initiative aspires to establish a transparent, unbiased, and balanced web search environment that aligns with European values and legal standards. As the OWI will also serve as a foundational dataset for AI models, its deployment raises significant legal challenges, including navigating data laws (such as the European AI Act or the DSA), ensuring GDPR-compliant data privacy, and managing intellectual property rights. Additional complexities include liability for content infringements, handling illegal content, and adhering to diverse international regulations. Developing robust legal frameworks for content monitoring, copyright management, data protection, and accountability is essential to building a secure and legally sound open web search infrastructure that supports innovation while maintaining compliance with global and European legal standards. This paper focuses on the most challenging legal regulations and highlights the legal challenges for the OWI.

Irene Ng (Huang Ying)
Robert Kordić
Abstract

Generative AI (“GenAI”) tools have captured the attention of legal practitioners, as lawyers become interested in using them for legal practice and for producing legal products. To use these GenAI tools, prompts are required that instruct the tool to produce the desired output. This paper explores the integration of prompt engineering, a critical skill in utilizing large language models (LLMs), within legal practice. It outlines key use cases, such as legal research and contract drafting, emphasizing how advanced prompting can aid in delivering comprehensive legal products. The paper also addresses challenges, including methodological aspects, ethical considerations for lawyers, and the risk of hallucinations. By analysing both the opportunities and limitations of prompt engineering, this work aims to provide legal professionals with a general practical understanding of how to harness GenAI.
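As a minimal illustration of the skill the abstract discusses, a structured prompt for a contract-review task might be assembled as below. The role/task/context/constraints template and all field names are our own illustrative assumptions, not a prescription from the paper.

```python
# Hypothetical prompt template for a contract-review use case.
# The role / task / clause / constraints structure is a common
# prompt-engineering pattern; the wording is purely illustrative.

def build_review_prompt(clause: str, jurisdiction: str) -> str:
    """Assemble a structured prompt for reviewing a single contract clause."""
    return (
        "Role: You are assisting a lawyer reviewing a commercial contract.\n"
        f"Task: Identify legal risks in the clause below under {jurisdiction} law.\n"
        f"Clause: {clause}\n"
        "Constraints: Cite no authorities you are not certain exist; "
        "flag any uncertainty explicitly; answer in numbered points."
    )

prompt = build_review_prompt(
    clause="The supplier's liability is excluded for all damages.",
    jurisdiction="Austrian",
)
print(prompt)
```

The explicit constraints section reflects the hallucination risk the paper raises: instructing the model not to invent authorities and to flag uncertainty is a standard mitigation, though the output must still be verified by a lawyer.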

Ratko Savic
Roman Kern
Abstract

This paper evaluates the performance of a hybrid LegalAI approach that combines Large Language Models (LLMs) and rule-based systems (Prolog), applied to Austrian statutory inheritance law. The study demonstrates significant improvements in the accuracy and consistency of information extraction and legal case resolution when comparing this kind of model to ChatGPT. Limitations include the scope being restricted to Austrian statutory inheritance law and the difficulty of handcrafting a knowledge base for the rule-based approach. The results encourage further research into this approach, aiming to create a reliable and accessible LegalAI system that can contribute to the democratization of access to justice.
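The rule-based half of such a hybrid pipeline can be sketched as follows. This is a deliberately simplified reading of Austrian intestate succession (alongside children, the spouse takes one third and the children share the remainder equally); a real knowledge base, like the paper's Prolog rules, covers many more constellations, and the function names here are our own.

```python
from fractions import Fraction

# Simplified sketch of Austrian intestate succession: alongside children,
# the spouse takes 1/3 and the children share the remaining 2/3 equally.
# Real rule bases handle far more cases (parents, representation, etc.).

def statutory_shares(spouse: bool, children: list[str]) -> dict[str, Fraction]:
    """Deterministic rule application, the role Prolog plays in the hybrid."""
    shares: dict[str, Fraction] = {}
    if spouse and children:
        shares["spouse"] = Fraction(1, 3)
        per_child = Fraction(2, 3) / len(children)
        for child in children:
            shares[child] = per_child
    elif children:
        per_child = Fraction(1) / len(children)
        for child in children:
            shares[child] = per_child
    elif spouse:
        shares["spouse"] = Fraction(1)
    return shares

print(statutory_shares(True, ["A", "B"]))  # spouse 1/3, each child 1/3
```

In the hybrid design the abstract describes, an LLM would extract the facts (spouse present, number of children) from a case narrative, while a deterministic rule engine such as this computes the shares, which is why exact fractions rather than floats are used.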