Jusletter IT

Three Lessons Learned for Intelligent Transport Systems that Abide by the Law

  • Author: Ugo Pagallo
  • Category: Articles
  • Region: Italy
  • Fields of law: Data protection, Robotics
  • Suggested citation: Ugo Pagallo, Three Lessons Learned for Intelligent Transport Systems that Abide by the Law, in: Jusletter IT 24. November 2016
The legal questions surrounding the technology of self-driving vehicles (autonomous vehicles, AVs) touch on safety, consumer law and insurance, as well as privacy and data protection, road traffic law, tax law, etc. The paper addresses these and further normative challenges of AVs, which concern the law's intent to govern the process of technological innovation, drawing on a debate that is more than fifteen years old. While the equivalent of a medium-sized town disappears in Europe every year due to road fatalities, AVs can play a fundamental role in reinventing our cities and their transport systems within a «smart informational environment». (ah)

Table of contents

  • 1. Introduction
  • 2. Primary and Secondary Rules of Law
  • 2.1. The content of the primary rules and their goals
  • 2.2. Liability and burdens of proof
  • 2.3. Rules of change
  • 3. Political Decisions
  • 3.1. Competing standards
  • 3.2. A matter of tolerance
  • 4. Re-engineering of the Law
  • 4.1. Techno-regulation
  • 4.2. The aims of design
  • 4.3. Changing the nature of cities and their environment
  • 5. Conclusion

1. Introduction

[1]
At the beginning, they were «UVs.» First came the revolution of water-surface and underwater unmanned vehicles, or «UUVs,» in the field of robotics. Used for remote exploration work and the repair of pipelines, oil rigs and so on, semi-autonomous UUVs have developed at an amazing pace since the mid-1990s. A decade later, unmanned aerial vehicles («UAVs»), or systems («UAS»), transformed the military field. A number of factors, such as inter-agency transfers, increasing international demand, public R&D support, and growing access to powerful software and hardware, explain why the popularity of «drone technology» rapidly extended to the civilian sector. Among everyday people, the drone mania exploded in the US as early as February 2012. Then it was the turn of unmanned ground vehicles, or «UGVs,» that is, smart cars driving themselves on the highways in fully autonomous, or semi-autonomous, ways.1 A number of states, organizations and private companies have seriously pursued this project over the past decades. Yet, the turning point occurred with the Grand Challenge competitions organized by the US Defense Advanced Research Projects Agency («DARPA») from 2004 onwards.
[2]
The first race was held on 13 March 2004, in the Mojave Desert, but none of the cars completed it. Just a year and a half later, five vehicles successfully finished the second race. Starting a rivalry like the annual Oxford and Cambridge boat race, the best-performing team of 2004, Carnegie Mellon University’s Red Team, was defeated by Stanford University’s Racing Team on 8 October 2005. Two years later, Carnegie Mellon had the opportunity to take its revenge at the «Urban Challenge.» On 3 November 2007, the third DARPA competition concerned a 96 km urban-area race, to be completed in accordance with all traffic regulations and within six hours. Due to the rapid advancement of technology, the challenge was not only to complete such a tortuous route, but to complete it as quickly as possible. Teaming with General Motors in Tartan Racing, Carnegie Mellon overtook the Stanford-Volkswagen car, taking 4 hours, 10 minutes and 20 seconds, at 22.53 km per hour, to cross the finish line first. Then, in 2009, joining forces with Sebastian Thrun at Stanford, Google came on stage. Two years later, the first law permitting the operation of autonomous cars in the US was passed in Nevada. Florida, California and Michigan have followed suit. Meanwhile, after the Eureka Prometheus Project (1987–1995), the European Commission similarly promoted the «Intelligent Car Initiative» in 2010, in order to drastically reduce traffic jams and car accidents, while improving energy efficiency and reducing pollution.
[3]
The breath-taking progress of UV technologies, and the role of artificial intelligence («AI»), the internet of things («IoT»), and robotics for smart transport systems, have of course been the focus of a lively debate. Over the past years, a considerable amount of work has been devoted, for example, to the so-called «trolley problem».2 As Bonnefon, Shariff and Rahwan argue, autonomous vehicles «will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians.»3 Whereas their conclusion is that «regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology,» we should take this kind of research with a pinch of salt. On the one hand, certain terrifying figures can make us fully appreciate what is at stake with the next revolution in this field, i.e. autonomous vehicle («AV») technology: road transport accounts for around one quarter of the EU’s total energy consumption, the costs of traffic jams amount to approximately 0.5% of EU GDP, congestion affects 10% of Europe’s major road networks, on which more than one million accidents occur and 25,000 people die every year. This is the first problem we have to think about when dealing with the normative challenges of AVs for intelligent transport.
[4]
On the other hand, there are of course multiple ways in which we can program our robots.4 In the case of deontic logic, for instance, the aim is to directly formalize and implement an ethical code in terms of what is obligatory, permissible, or forbidden, through an «AI-friendly» semantics and a corresponding axiomatization.5 In the case of, say, principlism, attention is drawn to such notions as autonomy, beneficence, and the aim of doing no harm, in order to infer sets of consistent ethical rules through computational inductive logic.6 In the case of divine-command logic, the goal is the ethical control of robotic behaviour, drawing on the «logic of requirement» of Chisholm and Quinn,7 up to Lewis’s modal logic.8 Therefore, at the end of the day, how should we program our AI robotic systems? Should we follow the tenets of deontic logic, or a theory of prima-facie duties, or modal logic, for the design of our AVs? Should we emphasize utilitarian goals, or aim to build a Kantian robot, i.e. an AI agent designed in such a way that its cognitive states, e.g. beliefs, desires, or hopes, can always be deemed appropriate?
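By way of illustration, here is a minimal sketch of how a deontic approach might filter an AV’s candidate manoeuvres through an explicit code of prohibitions. Everything in it – the rule set, the field names, the closed-world default – is an assumption made for the sake of exposition, not a formalization drawn from the works cited above.

from enum import Enum

class Status(Enum):
    OBLIGATORY = "obligatory"
    PERMISSIBLE = "permissible"
    FORBIDDEN = "forbidden"

# Hypothetical deontic rules: each inspects a candidate manoeuvre and
# returns a verdict, or None if it is silent on the case.
def rule_no_harm(action: dict):
    # Forbid any manoeuvre predicted to injure a person.
    return Status.FORBIDDEN if action.get("expected_injuries", 0) > 0 else None

def rule_obey_traffic_law(action: dict):
    # Forbid manoeuvres that violate codified traffic rules.
    return Status.FORBIDDEN if action.get("violates_traffic_law") else None

RULES = [rule_no_harm, rule_obey_traffic_law]

def evaluate(action: dict) -> Status:
    """Any FORBIDDEN verdict blocks the manoeuvre; otherwise it is
    PERMISSIBLE by default (a closed-world assumption)."""
    verdicts = [rule(action) for rule in RULES]
    return Status.FORBIDDEN if Status.FORBIDDEN in verdicts else Status.PERMISSIBLE

# Braking within the law is permissible; swerving onto a crowded
# pavement is forbidden by the no-harm rule.
print(evaluate({"expected_injuries": 0, "violates_traffic_law": False}))
print(evaluate({"expected_injuries": 2, "violates_traffic_law": False}))

Of course, such a flat precedence policy leaves open precisely the hard cases discussed above, i.e. situations in which two prohibitions conflict and one evil must be chosen over another.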
[5]
The intent of this paper is to address these and other normative challenges of AVs, drawing on a debate in the legal field that is more than fifteen years old.9 This stance sheds light on three lessons learned from that debate. Section 2 dwells on both the content of the primary rules that govern the design, production, and use of AVs, and the role that the secondary rules of change may play in this legal context. In light of the Japanese experimentation with special zones for robots and the Federal Automated Vehicles Policy adopted by the U.S. Department of Transportation in September 2016, the first lesson learned is that secondary rules can be extremely helpful for understanding the kind of primary rules we may wish for. Section 3 delves into matters of security and privacy, and the political decisions that at times have to be taken. By examining the current debate on what standards should be adopted for IoT, or even for the «internet of everything,» the second lesson learned is that reasonable compromises are sometimes necessary in the legal field. An open attitude to people whose opinions may differ from one’s own seems the right attitude for finding a reasonable solution in such cases. Section 4 examines the re-ontologization of the world that has occurred with the current information revolution, vis-à-vis issues of legal design and the re-engineering of mobility in smart cities. The third lesson learned regards the increasing role of techno-regulation, that is, the regulation of user behaviour through the design of AVs, or the regulation of AV behaviour through a particular design that embeds normative constraints into such technology. This final set of legal issues does not arise in a normative vacuum, however. Such problems are intertwined with further epistemic issues, technological quarrels, and matters of user acceptance and trust. The next rules of the legal game will then require a fair balance between social norms, epistemic standards, and rules for carmakers catering for the international market.
[6]
The conclusions of the analysis will sum up the open problems of legal AVs. They concern the content of some primary rules, techno-regulation, and the interaction between different regulatory systems. In examining these open issues, however, we should not forget the lessons we have learned so far, one for each of the three sets of legal challenges under scrutiny in this paper. Let us then start with the first lesson, i.e. how the secondary rules of the law may help us comprehend what kind of primary rules we should adopt, or may wish for, in the field of AVs.

2. Primary and Secondary Rules of Law

[7]
As mentioned above in the introduction, considerable work has been devoted over the past years to the technicalities of AVs,10 and to their legal challenges.11 In this section, let us focus first on the kind of issues with which scholars have mostly been dealing. Such problems concern the content and goals of legal regulation (section 2.1), along with matters of liability and how the burden of proof is allocated (section 2.2). Then, attention will be drawn to the ways in which the secondary rules of the system may help us tackle some of these open problems (section 2.3).

2.1. The content of the primary rules and their goals

[8]
The legal challenges of AVs may concern either the regulation of human producers and designers of AVs through law, for example through ISO standards or liability norms for users of robots; or the regulation of the legal effects of AV behaviour through the norms set up by lawmakers, e.g. the effects of a self-driving car accident on third parties.12 The analysis of these different kinds of regulation is crucial, because we want to know the content of the rules that regard security standards, privacy and data protection, consumer law and insurance models, down to tax law for such AVs. Although such vehicles have so far driven millions of miles with some sort of human intervention, or even completely alone, many issues of liability remain quite problematic. Suffice it to say that we still lack enough data on the probability of events, their consequences and costs, to determine the levels of risk and hence the amount of insurance premiums and other mechanisms on which new and old forms of accountability for the «decisions» of such AVs may hinge.13
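To illustrate why such data matter, consider a deliberately simplified actuarial sketch: an insurer prices a premium from the expected loss, i.e. the sum over event types of the probability of each event multiplied by its cost, plus a loading. All the figures below are invented for the purpose of the example.

# Hypothetical actuarial sketch: expected loss = sum(p_i * cost_i) over
# event types; the premium adds a loading factor for expenses and risk.
# All numbers are invented for illustration only.
events = {
    "minor_collision":  {"probability": 0.020, "cost": 3_000},   # per car-year
    "severe_collision": {"probability": 0.002, "cost": 80_000},
    "software_fault":   {"probability": 0.001, "cost": 50_000},
}

expected_loss = sum(e["probability"] * e["cost"] for e in events.values())
loading = 1.3  # illustrative surcharge for expenses, uncertainty, profit
premium = expected_loss * loading

print(f"Expected annual loss per vehicle: {expected_loss:.2f}")  # 270.00
print(f"Indicative annual premium:        {premium:.2f}")        # 351.00

Without reliable probabilities and cost estimates for AV-specific events, every input to such a computation remains guesswork, which is precisely the current predicament.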
[9]
However, we should not miss another aspect of the legal problem. By considering issues of security and privacy, consumer law and insurance for AVs, we may ask about the goal of these regulations. For example, as shown by the new EU Regulation 2016/679 on data protection, legislators can pass rules that apply in identical ways, whatever the technology. This is the approach of «technological indifference.» Alternatively, regulations can be specific to a given technology, such as AVs for smart transport, without any favouritism for one or more of its possible implementations. Even when the law targets a particular attribute of that technology, lawmakers can draft the legal requirement in such a way that non-compliant implementations can be modified to become compliant. On this basis, the issue may revolve around whether the law intends to attain particular effects, to establish functional equivalence between online and offline activities, or to embrace non-discrimination between technologies with equivalent effects. Regardless of the aim that guides a specific regulation, however, it seems fair to affirm that the intent of the law to govern the race of technological innovation should neither hinder the advance of technology, nor require over-frequent revision to keep pace with such progress.14
[10]
Against this backdrop, the analysis of the goals of the primary rules that govern this sector of the law should be complemented by the norms that establish sanctions and conditions of liability in both the field of contracts and tort law.

2.2. Liability and burdens of proof

[11]
Responsibility for the behaviour of AVs can be imposed in the field of contracts for injuries caused either by the defective manufacture or malfunction of an AV, or by defects in its design. Depending on the circumstances, the burden of proof varies as a result.15 In cases of defective manufacture of an AV, or deficiencies in its design, the burden of proof falls on the plaintiff, who has to prove that the product was defective; that such defect existed while the product was under the manufacturer’s control; and finally, that the defect was the proximate cause of the injuries suffered by the plaintiff. In cases of strict malfunction liability, responsibility can be imposed although the plaintiff is not able to produce direct evidence of the defective condition of the product or the precise nature of the product’s defect. Rather, the plaintiff has to demonstrate that defect through circumstantial evidence of the occurrence of a malfunction, or through evidence eliminating both abnormal use of the product and reasonable secondary causes for the accident. In addition, responsibility may hinge on civil (as opposed to criminal) negligence, which concerns the duty to conform to a certain standard of conduct. Accordingly, the plaintiff has to prove that defendants breached that duty, thereby provoking an injury and an actual loss or damage to the plaintiff.
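Purely as a schematic illustration of the first of these standards, note that the three elements of a defective-manufacture claim form a conjunction: failing to prove any one of them defeats the claim. The field names below are illustrative assumptions, not terms of art.

from dataclasses import dataclass

@dataclass
class DefectClaim:
    # The three elements the plaintiff must prove, as listed above.
    product_was_defective: bool
    defect_under_manufacturer_control: bool
    defect_proximately_caused_injury: bool

def burden_of_proof_met(claim: DefectClaim) -> bool:
    # A conjunction: every element must be established.
    return (claim.product_was_defective
            and claim.defect_under_manufacturer_control
            and claim.defect_proximately_caused_injury)

# Proving the defect and the manufacturer's control, but not proximate
# causation, is not enough.
print(burden_of_proof_met(DefectClaim(True, True, False)))  # False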
[12]
Yet, it is still problematic to evaluate matters of liability in tort law and extra-contractual responsibility in this field. After all, this state of the art represents the main reason why we have not yet seen all these AVs out there, i.e. driving themselves on the highways or in some old town of Europe.16 Here, individuals are held responsible for unjust damages inflicted upon third parties, i.e. harm provoked to other agents in the system, so that payment follows from obligations between private persons, imposed by the state to compensate for damage provoked by wrongdoing. If AVs may sometimes have to choose between two evils, as mentioned above in the introduction, how should legal systems address this dilemma?

2.3. Rules of change

[13]
A way to tackle the social dilemmas of AVs can be illustrated with the Federal Automated Vehicles Policy adopted by the U.S. Department of Transportation in 2016. On the one hand, we may appreciate the overall legislative goal of the policy, that is, the principle of «implementation neutrality» which does not intend to favour one or more of the possible applications of AVs. This may bring about a beneficial competition among scientists, business, and legal systems, in line with Justice Brandeis’s doctrine of experimental federalism, as espoused in New State Ice Co. v. Liebmann.17 On the other hand, what the Federal Automated Vehicles Policy casts light on is a basic distinction among the rules of the legal system, namely between the primary rules that aim to govern social and individual behaviour, and the secondary rules of change, that is, the rules of the law that create, modify, or suppress the primary rules of the system.18 The example of the Federal Automated Vehicles Policy thus also sheds light on this facet of the legal fabric, that is, how the secondary rules of the system help us understand what kind of primary rules we may finally opt for. This approach seems all the more wise when coping with certain threats of current developments in AI and robotics. Should AVs run over pedestrians, or sacrifice «themselves and their passenger to save the pedestrians»?19

[14]
Another approach to this kind of dilemma has been worked out by the Japanese government through the creation of special zones for empirical testing and development, namely, a form of living lab, or Tokku. After the Cabinet Office approved the world’s first special zone in November 2003, covering the prefecture of Fukuoka and the city of Kitakyushu, further special zones have been established in Osaka and Gifu, Kanagawa and Tsukuba. The overall aim of these special zones is to set up a sort of interface between robots and society, in which scientists and common people can test whether robots fulfil their task specifications in ways that are acceptable and comfortable to humans, vis-à-vis the uncertainty of machine safety and the legal liabilities that concern, e.g., the protection of personal data processing.20 Remarkably, a special zone for road traffic law on highways was set up in the city of Sagami in 2013. This approach to the risks and threats of human-robot interaction is not only at odds with the typically formalistic and, at times, pedantic interpretation of the law in Japan. It is also noteworthy that such special zones are highly deregulated from a legal point of view: «Without deregulation, the current overruled Japanese legal system will be a major obstacle to the realization of its RT [Robot Tokku] business competitiveness as well as the new safety for human-robot co-existence.»21 Furthermore, the intent is «to cover many potential legal disputes derived from the next-generation robots when they are deployed in the real world.»22
[15]
These experiments could obviously be extended, so as to strengthen our comprehension of how the future of AV interaction in smart cities could turn out, with new issues of foreseeability and due care, or novel forms of negligence in tort law. By testing this interaction outside laboratories, i.e. in open or unstructured areas, this stance shows a pragmatic way to tackle some legal challenges of AI and robotics in the field of AVs, by allowing us to collect the empirical data and knowledge needed to make rational decisions on a number of critical issues. First, we have to better appreciate the risks and threats brought on by possible losses of control of AV systems, so as to keep them in check. Second, we have to further develop theoretical frameworks that allow us to better appreciate the space of potential systems that avoid undesirable behaviours. Third, we have to improve our understanding of how these AVs may react in various contexts and satisfy human needs. Fourth, we have to rationally address the legal aspects of this experimentation, which cover many potential issues raised by next-generation AVs, by managing requirements that often represent a formidable obstacle for this kind of research, such as public authorisations for security reasons, formal consent for the processing and use of personal data, mechanisms of distributing risks through insurance models, authentication systems, and more.
[16]
The first lesson learned at this level of analysis thus has to do with the interaction between primary and secondary rules of the law. The more we disagree on the content of the primary rules, as occurs for example with the default rules of strict liability in the civil (as opposed to the criminal) law field,23 the more the secondary rules of change can be helpful, in order to understand what kind of primary rules we may wish for. But what happens, should disagreement persist?

3. Political Decisions

[17]
The focus on the regulatory goals of the law means neither that the role of other regulatory systems should be underestimated, nor that we can simply ignore the impact of technological innovation on the formalisms of the law.24 The analysis of this section is accordingly divided into two parts: first, the competition between regulatory systems and the role of standards are under scrutiny (section 3.1). The next step is then to elucidate the proper attitude with which we should address such competition, together with the compromises that, at times, are necessary in the legal domain (section 3.2).

3.1. Competing standards

[18]
The relation between law and technology should be grasped as the interaction between competing regulatory systems, which may reinforce or contend not only against each other, but also against further regulatory systems, such as the forces of the market and of social norms. Every regulatory system claims to govern social behaviour by its own means, and can even render the claim of another regulatory system superfluous. This competition suggests, on the one hand, that at times reasonable compromises between many competing interests will be necessary. Going back to the experimentation through the secondary rules of the law, neither the US «experimental federalism» policy of the Department of Transportation, nor the legally deregulated zones of experimentation in Japan, guarantee any uniquely right answer as to the content of the primary rules of the system.25 After all, the phase of legal experimentation may end up with general disagreement on the goals of legislation, for example whether the law should endorse non-discrimination between technologies with equivalent effects, or favour specific solutions over other technological alternatives. In addition, we may agree with a given goal of legislation and still differ on how we could attain that end. Although we may consent on, say, mitigating today’s rules of strict liability on the civil (as opposed to the criminal) side of the law, disagreement can persist on what our priority should be, e.g. making the data protection system of AVs robust by design, or making transparency a priority, or striking different balances between security and privacy, etc.26
[19]
On the other hand, we have to pay attention to the role of standards.27 In addition to the technological and legal standards of the field, focus should be on the epistemic standards, i.e. the ways to understand the informational reality of AVs, and on the social standards that enable users to trust such applications and to evaluate the quality of the services, that is, whether or not these services meet social needs.28 Correspondingly, the definition of the legal standards, as a means that allows agents to communicate and interact, does not take place in a normative vacuum, but is structured by the presence of values and principles, social norms, and the different ways to grasp the informational reality of humans interacting with AVs out there. This wider context suggests, once again, that reasonable compromises will at times be necessary. The alternative can be illustrated with the current impasses of IoT: either legal standards are shared in order to adopt a supranational framework that guarantees uniform levels of protection, or local standards will reaffirm competitive advantages and national or supranational sovereignty, by ensuring different levels of legal protection. Article 50 of the aforementioned EU Regulation on data protection represents a good test. Whilst EU law has established the strictest international standards across the board over the past twenty years, the new Article 50 on «international cooperation» will allow us to ascertain whether, and to what extent, «the effective enforcement of legislation for the protection of personal data» will be facilitated by the development of international cooperation mechanisms and mutual assistance.

3.2. A matter of tolerance

[20]
By admitting the necessity of reasonable compromises for certain hard cases of the law, we face the further problem of determining whether such agreements are «reasonable.» This means we need to specify our idea of justice and apply it as the permitted variation in some kind of measurement, or other characteristic of the object or entity under investigation.29 Whatever the metrics and the idea of justice we may embrace, however, the second lesson learned from the ongoing debate on the secondary rules of the law, and their importance in addressing the normative challenges of AV technology, has to do with the loopholes and drawbacks of current legal systems. Just as previous international agreements have regulated technological advancements over the past decades in such fields as chemical, biological and nuclear weapons, landmines, or computer crimes since the early 2000s, many claim that new agreements are necessary in the field of AVs.30 They may regard the technicalities of road traffic laws, or human rights protection, or matters of business, such as how manufacturers catering for the international market should design in specific legal rules and standards.31
[21]
The result is tolerance, rather than justice, as the fundamental virtue of social institutions, instructing us how to design rules that shall regulate human behaviour through the design of AVs. An open attitude to people whose opinions may differ from one’s own is, after all, the virtue on which any reasonable compromise ultimately relies. This approach is especially fruitful when dealing with the risks of unpredictability and loss of control that have more often been associated with AI & robotics research in recent times.32 The wave of extremely detailed regulations and prohibitions on the use of drones by the Italian Civil Aviation Authority, i.e. «ENAC,» illustrates this deadlock.33 The paradox stressed in the field of web security decades ago could be extended, with a pinch of salt, to the Italian regulation on the use of UAVs, so that the only legal drone would be «one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards – and even then I have my doubts.»34

[22]
After the lesson learned on the primary legal rules in Section 2, this section on the secondary rules has thus shown the proper attitude we should have towards the open issues of AVs. Several mechanisms of legal flexibility can be set up through the secondary rules of the law. Think again of the creation of special zones (Japan), of such legal techniques as the «implementation neutrality» approach of the Department of Transportation (U.S.), or of further meta-rules of «procedural regularity.»35 On the basis of this interaction between primary and secondary rules of the law, we can deepen the analysis of how the primary rules of the law may intend to govern social and individual behaviour, both human and artificial. Above in Section 2, attention was drawn to the regulation of human producers and designers of AVs through the law (e.g. liability norms), as much as to the regulation of the legal effects of AV behaviour through the norms set up by lawmakers. In the following section, the aim is to explore both the regulation of user behaviour through the design of AVs, and the regulation of AV behaviour through their own design, that is, by embedding normative constraints into the design of such AI systems. A third lesson is waiting for us.

4. Re-engineering of the Law

[23]
The third set of legal challenges brought on by AV technology has to do with the radical way in which the information revolution is affecting our understanding of the world and of ourselves. This section aims to examine this transformation from a threefold point of view, that is, in terms of techno-regulation (section 4.1), the aims of design (section 4.2), and the issues that both scholars and lawmakers should prioritize when changing the nature of our cities and their environment through the use of AV technology that abides by the law (section 4.3).

4.1. Techno-regulation

[24]
We are interconnected informational organisms that share with biological organisms and engineered artefacts, such as our smart AVs, «a global environment ultimately made of information,» namely, what Luciano Floridi calls «the infosphere.»36 For the first time ever, current human societies do not simply use information and communication technologies («ICT»): they depend on ICT and, more generally, on information as their vital resource. As a result of this ICT-dependency, the information revolution is triggering a radical re-engineering or re-design of today’s systems, processes, and agents, to the extent that their intrinsic nature, ontology or essence is fundamentally transformed or reshaped in informational terms. In addition to the radical transformation of objects, which are increasingly becoming seamlessly embedded into the informational environment, ICTs are also generating new realities, within which the traditional distinction between being online and offline blurs in the infosphere.37
[25]
What this huge transformation means, from a legal and political viewpoint, can be illustrated with the ubiquitous nature of information on the internet. The flow of this information transcends the conventional boundaries of national legal systems, as shown by the cases that scholars address as part of their everyday work in the fields of information technology (IT) law, i.e. data protection, computer crimes, digital copyright, e-commerce, and so forth. This flow of information jeopardizes traditional assumptions of legal and political thought, by increasing the complexity of human societies. Significantly, since the mid-1990s, the traditional hard and soft law-tools of governance, such as national rules, international treaties, codes of conduct, guidelines, or the standardization of best practices, have increasingly been complemented by the mechanisms of design, codes and architectures. Some of these technological measures are of course not digital, e.g. the installation of speed bumps in roads as a means to reduce the velocity of cars. Yet, current advancements of technology have obliged legislators and policy makers to forge more sophisticated ways to think about legal enforcement, by embedding normative constraints into products and processes, and by moulding the structure of places and spaces, so as to comply with regulatory frameworks. All in all, it is likely that this trend will extend to the regulation of AVs as well.

4.2. The aims of design

[26]
The first legal umbrella for the adoption of such automatic means of techno-regulation as digital rights management («DRM») was given by Article 8 of WIPO’s 1996 Copyright Treaty and Article 14 of the twin Performances and Phonograms Treaty. These primary rules enable copyright holders to monitor and regulate the use of their protected artefacts through self-enforcing technologies. Likewise, the use of filtering systems on the internet, and the constitutional limits to such use, were discussed before the EU Court of Justice in Netlog (C-360/10), in order to balance the protection of such basic rights as privacy and data protection, freedom of speech and of information, or the freedom to conduct a business. More recently, Article 25 of the aforementioned EU regulation on data protection, or «GDPR,» has followed suit. Here, data controllers «shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures» by design, and by default.
[27]
What such examples of techno-regulation suggest is that we pay attention to the different aims that working out the shape of objects, the form of products and processes, or the structure of spaces and places, can have in the case of AVs.38 The modalities of design may in fact aim to encourage a change in social behaviour, to decrease the impact of harm-generating conduct, or to prevent harm-generating conduct from occurring at all. As an illustration of the first kind of design mechanism, there are different ways in which the design of AVs can encourage humans to change their behaviour, through e.g. incentives based on trust via reputation systems, or on trade (e.g. services in return).39 As an example of the second modality of design, consider the introduction of airbags to reduce the impact of harm-generating conduct, and current efforts on security measures for IT systems and user-friendly interfaces that do not impinge on individual autonomy, any more than traditional airbags affect how people drive.40 Finally, as an instance of total prevention, contemplate current projects on cars able to stop or to limit their own speed according to the driver’s health conditions and the inputs of the surrounding environment. Although the purpose is to guard people’s wellbeing against all harm, this is the most critical aim of techno-regulation, since the intent is to prevent any alleged harm-generating behaviour from occurring through the use of self-enforcing technologies.
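As a minimal sketch of this third, «total prevention» modality, one may imagine the normative constraint embedded directly in the vehicle’s control loop, so that a violation cannot even occur. The function below is a hypothetical illustration: its names, thresholds, and behaviour are assumptions made for exposition, not an actual automotive interface.

# Hypothetical sketch of "total prevention" by design: a governor that
# caps the commanded speed at the legal limit and halts the vehicle if
# driver monitoring flags an impairment.
def govern_speed(requested_kmh: float,
                 legal_limit_kmh: float,
                 driver_impaired: bool) -> float:
    """Return the speed the vehicle is allowed to execute."""
    if driver_impaired:
        return 0.0  # the constraint is self-enforcing: the car stops
    # Non-compliant requests are silently clipped to the legal limit,
    # so a violation cannot occur in the first place.
    return min(requested_kmh, legal_limit_kmh)

assert govern_speed(80.0, 50.0, driver_impaired=False) == 50.0
assert govern_speed(40.0, 50.0, driver_impaired=False) == 40.0
assert govern_speed(40.0, 50.0, driver_impaired=True) == 0.0

In such a design, the «ought» of the speed limit no longer addresses the driver’s will at all; it is simply what the machine does, which is precisely why this modality raises the concerns about the rule of law discussed below.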

[28]
A considerable amount of work has been devoted to this topic over the past years.41 Examples of techno-regulation and the design of norms in the information era appear particularly relevant, because these forms of legal enforcement affect both the requirements and the functions of the law, namely, what the law is supposed to be (requirements), and what it is called to do (functions).42 In particular, by increasingly embedding legal safeguards and normative constraints into technology, e.g. the design of smart AVs that abide by the current rules of data protection, the very nature, ontology or essence of the law is called into question. WIPO’s Article 8, or the GDPR’s Article 25, show how the normative side of the law has progressively shifted from the traditional «ought to,» supported by the menace of physical sanctions – such as a ticket for speeding – to that which actually is in technological terms. The canonical «if A, then B» of the law, pace Hans Kelsen,43 no longer concerns only what should be, i.e. punitive sanctions (B) that have to follow the terms and conditions of legal accountability (A). Rather, we increasingly have to do with probabilities of effects (B) that follow natural causes (A).
[29]
The third lesson learned from this debate thus has to do with a twofold re-engineering of the law. On the one hand, the traditional tools of hard governance, such as codes, acts and statutes, have been joined – and even replaced – by techno-regulation and design. On the other hand, some forms of the latter have re-designed the nature of the law by transferring its regulative claim from the realm of what should be, to the technological side of what is. Grasping such issues of legislation and jurisprudence vis-à-vis the current debate on the legal challenges of AVs, it is no surprise that much of the debate on the primary rules of the law has been affected by this twofold re-engineering process. The attention of legal experts has progressively been drawn either to the regulation of user behaviour through AV design, that is, by designing AVs in such a way that unlawful actions of humans are not allowed, or to the regulation of AV behaviour by embedding normative constraints into the design of the AV itself.44

4.3. Changing the nature of cities and their environment

[30]
In light of the current re-engineering of the law, two final points have to be stressed. The first issue brings us back to the competition between regulatory systems mentioned in the previous section. Reflect on all the cases in which the legal intent to regulate the process of technological innovation has miserably failed. A good example is given by the aforementioned Article 8 of WIPO’s 1996 Copyright Treaty and Article 14 of the twin Performances and Phonograms Treaty. Twenty years after such international agreements, it seems fair to affirm that these legal rules have fallen short of coping with people’s behaviour online and the dynamics of technological innovation. The introduction of Content Scramble System («CSS») technology to protect DVDs, soon followed by its DeCSS antidote, illustrates this trend. In such a cat-and-mouse game, we can repeat what Steve Jobs said in his Thoughts on Music: «DRMs haven’t worked, and may never work, to halt music piracy.»45 The legal rules that will govern AV technology thus have to take into account how legal standards may interact with further standards and with competing regulatory systems, either conflicting with, or reinforcing, each other. The attention of lawmakers should especially be drawn to the role of epistemic standards, e.g. the different ways to grasp the informational reality of humans interacting with AVs out there, and of social standards on user acceptance and trust.46
[31]
Another crucial facet of this re-engineering process finally has to be addressed. Car technology has radically changed the nature of our cities and their environment. The current technological convergence of AV research & development with AI, robotics, internet connectivity, and cybernetics may re-design once again the nature of our cities and their environment through (not only, but also) self-driving cars and AV-sharing for smart cities and intelligent transport. As the Mobility & Transport division of the EU Commission is keen to inform us on its website, road fatalities have luckily been decreasing over the past years; still, the equivalent of a medium-sized town disappears every year on the roads of the old Continent: 31,500 fatalities in 2010, 30,700 in 2011, 28,200 in 2012, 26,000 in 2013, 25,900 in 2014, etc. This is a human failure and a tragedy that we can properly tackle by re-inventing our cities and their transport systems through (not only, but also) AV technology. The challenge is formidable.47 A complex set of legal issues on matters of security, consumer law and insurance, privacy and data protection, down to tax law, has been mentioned throughout this paper. We should address this set of complex issues in the spirit of those who do not forget the failure and tragedy still in progress.
[32]
The time is ripe for the conclusion of the analysis.

5. Conclusion

[33]
There are three different types of open problems for AVs that abide by the law today. The first type concerns multiple kinds of legal issues regarding the content of the primary rules of the law that aim to govern social and individual behaviour, both human and artificial. Consider the standards of security set up by the Federal Automated Vehicles Policy of the U.S. Department of Transportation, or the set of provisions concerning consent and data protection that shall govern the design and production of AVs, in accordance with the principles and rules of the EU’s GDPR, such as Article 25 on data protection by design and by default, Article 35 on a new generation of impact assessments «for a type of processing in particular using new technologies,» etc. The legal hurdles on security and insurance, safety and tort law, data protection and the protection of consumers will be complex and time-consuming. Yet, we should not forget the first lesson learned on the interaction between primary and secondary rules of the law. We can properly address cases of disagreement, or a crucial lack of data, that affect the content of the primary rules of the legal system through its secondary rules of change, such as the Japanese special zones for the empirical testing and development of robotics illustrated above in Section 2. As stressed by the Committee on Legal Affairs of the European Parliament in its recommendations to the EU Commission on civil law rules on robotics of 31 May 2016, «testing robots in real-life scenarios is essential for the identification and assessment of the risks [that AI & robots] might entail, as well as of their technological development beyond a pure experimental laboratory phase.»48 The secondary rules of change can be extremely helpful, in order to understand what kind of primary rules we may wish for.
[34]
The second type of open problems for legal AVs regards matters of design, and how the aim of preventing harm-generating behaviour from occurring through the use of self-enforcing technologies may affect some tenets of the rule of law. For some scholars, what is crucially imperiled is «the public understanding of law with its application eliminating a useful interface between the law’s terms and its application.»49 Others refer to «a rich body of scholarship concerning the theory and practice of ‹traditional› rule-based regulation [that] bears witness to the impossibility of designing regulatory standards in the form of legal rules that will hit their target with perfect accuracy.»50 While still others affirm that «the idea of encoding legal norms at the start of information processing systems is at odds with the dynamic and fluid nature of many legal norms, which need a breathing space that is typically not something that can be embedded in software,»51 the problem is clear and real.52 The more AV technology advances, the more its legal issues will concern techno-regulation and design. Since the devil is in the detail, we can even predict that an increasing number of such cases will engage more and more experts in techno-regulation and legal design in the near future. Let us thus address this new generation of problems on the design of AVs and their corresponding informational environment in accordance with the second lesson learned in this paper. We should be ready for several agreements and «reasonable compromises» in the foreseeable future regarding standards for IoT, AI systems and robotics, which will require an open attitude to people whose opinions may differ from one’s own. Most threats and risks brought on by the use of self-enforcing technologies, e.g. the modelling of social conduct and paternalism, can be conceived as illustrations of a typically intolerant approach that hinders responsible scientific research and innovation.

[35]
The third type of open problems for legal AVs finally concerns the competition between regulatory systems, and their claims to govern social interaction by their own means. As legal experts, we have of course to pay attention to the primary rules of the system, and their interaction with the secondary rules of the law. Still, in order to have a more precise idea of what is going on with the functioning of the legal system – and in particular with that specific set of regulations on the design, production and use of AV technology – attention should be drawn to whether the normative claim of the law conflicts with, or reinforces, the claims of further regulatory systems, such as the market and the force of social norms. According to how such technological, legal, epistemic, and social rules interact, or compete with each other, they will shape a new environment that applies to individuals, artificial agents, and things (technological standards), empowering some and disempowering others (legal standards), shaping the social features of AV interaction in terms of needs, benefits, user acceptance and trust (social standards), up to the making of the informational dimension of AVs as a comprehensible reality (epistemic standards). The third lesson learned in this paper may help us address this final set of legal challenges for AV technology. The competition between regulatory systems recalls the multiple ways in which the information revolution is re-engineering our world. AV technology and research in AI and robotics are crucial to re-inventing our cities and their transport systems, abandoning a world in which a medium-sized town disappears every year on the roads of Europe. The transition to the next, re-engineered city planning won’t be easy. We will face epistemic issues, technological quarrels, and matters of user acceptance and trust. This kind of disagreement is, however, positive: it represents an opportunity to take the challenges of AVs seriously, and to envisage what new environment we wish for in legal terms.

 

Ugo Pagallo, Professor of Jurisprudence, Law School, University of Turin, Lungo Dora Siena 100 A, 10153 Torino, Italy. ugo.pagallo@unito.it.

  1. 1 The notion of autonomy has of course sparked a hot debate that, however, we can leave aside in this context. Suffice it to mention work by Michael J. Wooldridge/Nicholas R. Jennings, Agent Theories, Architectures, and Languages: A Survey. In: M. Wooldridge and N. R. Jennings (eds.), Intelligent Agents, pp. 1–22, Springer, Berlin 1995; Stan Franklin/Art Graesser, Is it an Agent, or just a Program? A Taxonomy for Autonomous Agents. In: J. P. Müller, M. J. Wooldridge and N. R. Jennings (eds.), Intelligent Agents III, Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, pp. 21–35, Springer, Berlin 1997; Colin Allen/Gary Varner/Jason Zinser, Prolegomena to Any Future Artificial Moral Agent, Journal of Experimental and Theoretical Artificial Intelligence, 2000, 12: 251–261; and, Luciano Floridi/Jeff Sanders, On the Morality of Artificial Agents, Minds and Machines, 2004, 14(3): 349–379.
  2. 2 See, e.g., Jason R. Wilson/Matthias Scheutz, A Model of Empathy to Shape Trolley Problem Moral Judgements, The Sixth International Conference on Affective Computing and Intelligent Interaction, ACII 2015.
  3. 3 Jean-François Bonnefon/Azim Shariff/Iyad Rahwan, The Social Dilemma of Autonomous Vehicles, Science, June 2016, 352(6293): 1573–1576.
  4. 4 Selmer Bringsjord/Joshua Taylor, The Divine-Command Approach to Robot Ethics. In P. Lin, K. Abney and G. A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics, pp. 85–108, MIT Press, Cambridge, Mass. 2014.
  5. 5 See respectively John Horty, Agency and Deontic Logic, Oxford University Press, New York 2001; and Yuko Murakami, Utilitarian Deontic Logic. In: R. Schmidt et al. (eds.), Proceedings of the Fifth International Conference on Advances in Modal Logic, pp. 288–302. AiML, Manchester UK 2004.
  6. 6 As developed by Michael Anderson and Susan Leigh Anderson, Ethical Healthcare Agents. In M. Sordo et al. (eds.), Advanced Computational Intelligence Paradigms in Healthcare, pp. 233–257, Springer, Berlin 2008.
  7. 7 Roderick Chisholm, Practical Reason and the Logic of Requirement. In: S. Koerner (ed.), Practical Reason, pp. 1–17, Basil Blackwell, Oxford 1974; Philip Quinn, Divine Commands and Moral Requirements, Oxford University Press, New York 1978.
  8. 8 Clarence Irving Lewis/Cooper Harold Langford, Symbolic Logic, Dover, New York 1959.
  9. 9 The temporal reference of the text is given by legal work on the creation of the first special zone for robotics between 2002 and 2003 in Japan. We return to this approach to the normative challenges of AVs below in Section 2.
  10. 10 An introduction in Azim Eskandarian (ed.), Handbook of Intelligent Vehicles, Springer, London 2012.
  11. 11 See Tom M. Gasser, Legal Issues of Driver Assistance Systems and Autonomous Driving. In Handbook, see above previous note, at 1520 ff.; Gary E. Marchant/Rachel A. Lindor, The Coming Collision between Autonomous Vehicles and the Liability System, Santa Clara Law Review, 2012, 52, at 1321 ff.; Ugo Pagallo, Guns, Ships, and Chauffeurs: The Civilian Use of UV Technology and its Impact on Legal Systems. Journal of Law, Information and Science, 2012, (21)2: 224–233; Jack Boeglin, The Costs of Self-driving Cars, Yale Journal of Law and Technology, 2015, 17, at 171 ff.; and, Melinda Florina Lohmann, Liability Issues Concerning Self-Driving Vehicles, European Journal of Risk Regulation, 2016, 7(2): 335–340.
  12. 12 See Ronald Leenes/Federica Lucivero, Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design, Law, Innovation and Technology, 2016, 6(2): 193–220.
  13. 13 There has been an intensive debate on this issue, especially in the field of business law. Suffice it to mention Jean-François Lerouge, The Use of Electronic Agents Questioned under Contractual Law: Suggested Solutions on a European and American Level, The John Marshall Journal of Computer and Information Law, 2000, 18: 403; Emily Mary Weitzenboeck, Electronic Agents and the Formation of Contracts, International Journal of Law and Information Technology, 2001, 9(3): 204–234; Anthony J. Bellia, Contracting with Electronic Agents, Emory Law Journal, 2001, 50: 1047–1092; and Giovanni Sartor, Cognitive Automata and the Law: Electronic Contracting and the Intentionality of Software Agents, Artificial Intelligence and Law, 2009, 17(4): 253–290. As to the analysis of the levels of risk in the field of AVs see the U.S. Department of Transportation’s Federal Automated Vehicles Policy, September 2016, at p. 21 ff.
  14. 14 See Bert-Jaap Koops, Should ICT Regulation Be Technology-neutral? In: B-J. Koops et al. (eds.), Starting Points for ICT Regulation: Deconstructing Prevalent Policy One-liners, pp. 77–108, The Hague, TMC Asser 2006; and, Chris Reed, Making Laws for Cyberspace, Oxford University Press, Oxford 2012.
  15. 15 For the sake of conciseness, the analysis takes into account the primary rules of the US legal system. A comparison with further legal systems and their rules on liability and burdens of proof in Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts, Springer, Dordrecht 2013.
  16. 16 For traditional traffic planning see the revolutionary approach of Hans Monderman in Tom Vanderbilt, Traffic. Why We Drive the Way We Do, Knopf, New York 2008.
  17. 17 The case is in 285 U.S. 262 (1932).
  18. 18 For the distinction between primary and secondary legal rules see Herbert L. A. Hart, The Concept of Law, Clarendon, Oxford 1961. In this context, we can leave aside such secondary rules, as the rules of recognition and of adjudication, just to focus on the rules of change.
  19. 19 See above n. 3, on the social dilemma of AVs.
  20. 20 Further details in Ugo Pagallo, Robots in the Cloud with Privacy: A New Threat to Data Protection? Computer Law & Security Review, 2013, 29(5): 501–508.
  21. 21 Yueh-Hsuan Weng/Yusuke Sugahara/Kenji Hashimoto/Atsuo Takanishi, Intersection of «Tokku» Special Zone, Robots, and the Law: A Case Study on Legal Impacts to Humanoid Robots, International Journal of Social Robotics, 2015, 7(5): 841–857 (p. 850 in text).
  22. 22 See the previous note.
  23. 23 See above n. 13.
  24. 24 See Ugo Pagallo/Massimo Durante, The Philosophy of Law in an Information Society. In: L. Floridi (ed.), The Routledge Handbook of Philosophy of Information, pp. 396–407, Routledge, Oxon & New York 2016.
  25. 25 This is of course the thesis of Dworkin and his followers, e.g. Ronald Dworkin, A Matter of Principle, Oxford University Press, Oxford 1985. In a nutshell, according to this stance, a morally coherent narrative should grasp the law in such a way that, given the nature of the legal question and the story and background of the issue, scholars can attain the answer that best justifies or achieves the integrity of the law.
  26. 26 Privacy issues represent one of the thorniest legal challenges of AV technology. See Ugo Pagallo, Teaching «Consumer Robots» Respect for Informational Privacy: A Legal Stance on HRI. In: D. Coleman (ed.), Human-Robot Interactions. Principles, Technologies and Challenges, pp. 35–55, Nova, New York 2015.
  27. 27 See Lawrence Busch, Standards: Recipes for Reality, MIT Press 2011.
  28. 28 See Massimo Durante, What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents, Knowledge, Technology & Policy, 2010, 23(3-4): 347–366.
  29. 29 The field of informational warfare is another good example: see Ugo Pagallo, Cyber Force and the Role of Sovereign States in Informational Warfare, Philosophy & Technology, 2015, 28(3): 407–425.
  30. 30 E.g. Brendan Gogarty/Meredith Hagger, The Laws of Man over Vehicle Unmanned: the Legal Response to Robotic Revolution on Sea, Land and Air, Journal of Law, Information and Science, 2008, 19: 73–145.
  31. 31 See for example the remarks of a study sponsored by the EU Commission, i.e. RoboLaw, Guidelines on Regulating Robotics. EU Project on Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics, September 22, 2014.
  32. 32 Suffice it to mention Harold Abelson et al., Keys Under the Doormat: Mandating Insecurity by Requiring Government Access to All Data and Communications. MIT Computer Science and AI Laboratory Technical Report. 6 July 2015; The Future of Life Institute, An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence, 2015, at http://futureoflife.org/ai-open-letter/ (last visit: 18 October 2016); IEEE Standards Association, The Global Initiative for Ethical Considerations in the Design of Autonomous Systems, forthcoming (2017).
  33. 33 See Ugo Pagallo, Even Angels Need the Rules: On AI, Roboethics, and the Law. In: G. A. Kaminka et al. (eds.), ECAI Proceedings, pp. 209–215, IOS Press, Amsterdam 2016.
  34. 34 See the introduction of Simson Garfinkel/Gene Spafford, Web Security and Commerce, O’Reilly, Sebastopol, CA 1997.
  35. 35 The aim of the meta-rules of «procedural regularity» is to determine whether a decision process is fair, adequate, or correct. See Joshua A. Kroll/Joanna Huey/Solon Barocas/Edward W. Felten/Joel R. Reidenberg/David G. Robinson/Harlan Yu, Accountable Algorithms, University of Pennsylvania Law Review, 165 (forthcoming 2017).
  36. 36 The reference is Luciano Floridi, The Ethics of Information, Oxford University Press, Oxford 2013.
  37. 37 See also Luciano Floridi (ed.), The Onlife Manifesto, Springer, Dordrecht 2015.
  38. 38 See Ugo Pagallo, Designing Data Protection Safeguards Ethically, Information, 2011, 2(2): 247–265.
  39. 39 In addition, we can think about security systems that slow down the speed of users that do not help, say, the process of informational file-sharing. See Andrea Glorioso/Ugo Pagallo/Giancarlo Ruffo, The Social Impact of P2P Systems. In: X. Shen, H. Yu, J. Buford and M. Akon (eds.), Handbook of Peer-to-Peer Networking, pp. 47–70, Springer, Heidelberg 2010.
  40. 40 See Ugo Pagallo, Online Security and the Protection of Civil Rights: A Legal Overview, Philosophy & Technology, 2013, 26(4): 381–395.
  41. 41 We return to this problem of control below, in the conclusion of this paper; the opening book of this debate is Lawrence Lessig’s Code and Other Laws of Cyberspace, Basic Books, New York 1999.
  42. 42 See above note 24.
  43. 43 The reference is of course a classic text like Hans Kelsen, Pure Theory of Law, trans. B. L. Paulson and S. L. Paulson. Clarendon, Oxford 2002 (first ed. 1934).
  44. 44 See work mentioned above in notes 11, 13, and 29.
  45. 45 Steve Jobs, Thoughts on Music (2007), at http://www.apple.com/hotnews/thoughtsonmusic/ (last visit: 24 September 2016), p. 3.
  46. 46 See work mentioned above in notes 24 and 28.
  47. 47 See e.g. Bruce Bimber, The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment, SUNY Press, New York 1996. On the technological impact assessments that will be necessary, see David Dunkerley/Peter Glasner (eds.), Building Bridges between Science, Society and Policy: Technology Assessment – Methods and Impacts, Springer, Dordrecht 2004. In more general terms, Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology, Elgar, Cheltenham 2015.
  48. 48 European Parliament’s Committee on Legal Affairs, doc. 2015/2103(INL), at n. 14.
  49. 49 Jonathan Zittrain, Perfect Enforcement on Tomorrow’s Internet. In: R. Brownsword and K. Yeung (eds.), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes, pp. 125–156, Hart, London 2007.
  50. 50 Karen Yeung, Towards an Understanding of Regulation by Design. In: R. Brownsword and K. Yeung (eds.), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes, p. 167, Hart, London 2007.
  51. 51 In Bert-Jaap Koops/Ronald Leenes, Privacy Regulation Cannot Be Hardcoded: A Critical Comment on the «Privacy by Design» Provision in Data Protection Law, International Review of Law, Computers & Technology, 2014, 28: 159–171.
  52. 52 This has been a leitmotiv of my recent research. In addition to work mentioned above in notes 20 and 26, see Ugo Pagallo, On the Principle of Privacy by Design and its Limits: Technology, Ethics, and the Rule of Law. In: S. Gutwirth, R. Leenes, P. De Hert and Y. Poullet (eds.), European Data Protection: In Good Health?, pp. 331–346, Springer, Dordrecht 2012; Ugo Pagallo, Cracking down on Autonomy: Three Challenges to Design in IT Law, Ethics and Information Technology, 2012, 14(4): 319–328; up to Ugo Pagallo, The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design. In: S. Gutwirth, R. Leenes and P. de Hert (eds.), Data Protection on the Move. Current Developments in ICT and Privacy/Data Protection, pp. 387–410, Springer, Dordrecht 2016.