Designing is on the Agenda ^
What exactly is artificial intelligence? How will artificial intelligence influence our daily life in the future? Although the term «artificial intelligence» (AI) has been used for more than 50 years, these questions are gaining enormously in relevance due to current technical progress. On the one hand, dark visions are painted of machines dominating the human race. On the other, some visionaries see AI as a miracle cure that will solve all the world's problems in the simplest way. But what is really behind AI? And what can it be used for?
Smart Government uses the possibilities of smart objects and cyberphysical systems (CPS) to fulfil public tasks efficiently and effectively. The term «Smart Government» is used to describe the application of the Internet of Things and the Internet of Services in the processes of government and administration. This involves much more than just the technical integration of smart objects and CPS in public administration. Especially in connection with artificial intelligence, there is also the possibility of entering completely new dimensions of process support and automation.
There may be many reasons, such as speed, efficiency and controllability, for using autonomous systems in the public sector. Technically, autonomous systems are increasingly easier to design, implement and operate. But are state, administration and judiciary already prepared for this? What challenges need to be identified and addressed before autonomous systems can be used in good faith to perform public tasks in the long term?
This article explains the capabilities and potential applications behind the term «artificial intelligence». Based on a definition of the term and the corresponding basic technologies and basic applications, five fields of application for public administration are outlined in which AI can be used. Subsequently, some fields of application for AI in the judiciary are sketched. This leads to the identification of limits for the use of AI in state and justice and the definition of new research priorities.
Definition and distinction ^
In the context of digitisation, «artificial intelligence» (AI) has become one of the dominant terms. Many companies and organisations are currently investigating its opportunities. However, they use the term and the related technologies in very different contexts. At the same time, the topic is gaining importance in press articles, presentations and strategy papers. Artificial intelligence is no longer the preserve of the experts and computer scientists at the German Research Center for Artificial Intelligence (DFKI); researchers from other disciplines are also dealing with the topic and the associated technical, social or organisational changes. However, it often remains unclear what is actually behind the term artificial intelligence. The dominant expectation is that processes that were previously carried out by humans can now be completely transferred to technical systems and that these systems learn to perform their tasks increasingly better on their own.
At the moment, there is no generally accepted definition of artificial intelligence. Rather, it is a collective term for different technologies and approaches at different levels of maturity and with different levels of security and trust. The term artificial intelligence suggests systems which are intended to replicate human intelligence. As a sub-discipline of informatics, AI uses different technologies and architectures, which is why it is also referred to as a cross-sectional technology. Accordingly, it seems more purposeful to use rather broad definitions of AI that focus more on the output than on the systems themselves. One such working definition by the German AI researcher Klaus Mainzer describes AI as systems that «can solve problems efficiently by themselves». As early as 1966, Marvin Minsky defined AI as the science of making machines do things that would require intelligence if done by humans.
Defining artificial intelligence from the technological point of view alone is difficult to sustain over time: 50 years ago, the focus was on completely different technical possibilities than today. In the meantime, the distinction between weak AI, strong AI and superintelligence has become accepted in science. Weak AI is usually developed and used for specific applications. In more concrete terms, these are, for example, expert systems, speech recognition, navigation or translation services. Applications based on weak AI are already widely used today and can be found in everyday life in the form of intelligent search suggestions or optimised route guidance.1
AI Basic Technologies ^
Due to the difficult technological delimitation, it seems more appropriate to define AI according to its capabilities. In the following, these will be described as AI basic technologies, since they provide fundamental capabilities. At the same time, it should not be left unmentioned that AI can also be described from the technical side. For example, AI approaches can be differentiated by the categories of learning methods, system architectures and algorithms used. AI learning methods include deep learning, supervised learning, unsupervised learning, reinforcement learning and meta-learning. Systems and architectures can be divided into intelligent agents, expert systems, decision support systems, rule management systems, artificial immune systems and quantum logic systems. The technologies on which these systems are based are also very diverse. In addition to artificial neural networks, support vector machines, possibilistic networks, multi-context systems or genetic algorithms can also be used.2
For the concrete application, the underlying technology is often of secondary importance. Rather, the questions arise as to what purpose AI basic technologies can be used for, what they are capable of achieving and what limits should be set. The most important AI basic technologies are briefly presented below.
AI-based pattern recognition analyses data in order to identify regularities, repetitions, similarities or patterns. IT systems are usually able to analyse much larger amounts of data in less time than a human being would ever be able to do. Furthermore, IT systems do not show any signs of fatigue or careless errors. An advantage of learning systems in pattern recognition is the identification of correlations which were previously neither known nor noticed by humans. Data can not only be checked for previously defined correlations, but can also be evaluated openly. Application cases for this can already be found in the defence against cyber attacks, in medicine and the linking of fields of knowledge.3
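As an illustration only, a fixed statistical outlier rule can stand in for the learning methods such systems actually use; the threshold and the sensor readings below are invented:

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Flag values deviating more than `threshold` standard deviations
    from the mean - a crude, non-learning stand-in for AI-based
    pattern recognition."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma > 0 and abs(v - mu) / sigma > threshold]

# Invented sensor readings: one value clearly breaks the pattern.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]
print(find_anomalies(readings))  # -> [42.0]
```

A learning system would go further and surface correlations nobody thought to encode as a rule; this sketch only shows the basic idea of separating the expected from the unexpected.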
AI-based text recognition uses algorithms to transform information from unstructured data as well as natural language content into machine-readable and thus further processable form. AI-based algorithms are not only able to assign a meaning to words but also to evaluate it in connection with other words. Unstructured data can thus also be processed by technical systems.
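A minimal sketch of the idea, using simple regular expressions as a stand-in for the machine-learning models real text recognition relies on (the field patterns and the sample message are invented):

```python
import re

def extract_application_data(text):
    """Very simplified stand-in for AI-based text recognition:
    pull structured fields (name, date) out of free-form text."""
    name = re.search(r"my name is ([A-Z][a-z]+ [A-Z][a-z]+)", text)
    date = re.search(r"(\d{2}\.\d{2}\.\d{4})", text)
    return {
        "name": name.group(1) if name else None,
        "date": date.group(1) if date else None,
    }

msg = "Hello, my name is Anna Schmidt and I would like an appointment on 03.09.2025."
print(extract_application_data(msg))
# -> {'name': 'Anna Schmidt', 'date': '03.09.2025'}
```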
AI-based systems can also recognise acoustic signals and tone sequences and assign them to specific events or causes. This can be the detection of events, such as the passing of a train based on natural ambient noise, but also the operating noise of an engine, which can be used to detect an incipient defect at an early stage.
Building on this, AI-based speech recognition combines the capabilities of text recognition and acoustic recognition. Algorithms transcribe spoken language and translate it into machine-readable form. For this purpose, the information is extracted from the acoustic recording and converted into structured form.
AI-based translation systems enable the translation of natural language texts into other languages. Foreign languages, simple language or sign language can thus be translated into an official state language, and the reverse direction is equally possible. For this purpose, the first step is to carry out a speech recognition process. In a second step, the resulting data is translated into the desired language and displayed or read aloud.
AI-based image recognition is able to identify objects in images and assign them to categories. Since objects such as a car do not always look the same or can be photographed from different positions, the system must be able to recognise certain characteristics depending on the situation.
Face recognition as a special case of image recognition recognises human faces on the basis of unique biometric features. This enables people to be identified on the basis of geometric structures of their face or emotions to be analysed.
3D spatial recognition represents a further step in image recognition. Images are no longer analysed in only two dimensions. At least two images are combined into a three-dimensional image, which enables spatial analyses. Distances and positions in three-dimensional space can be recognised and processed using image recognition and other sensors.
AI-based gesture and motion pattern recognition is based on the analysis of motion data (video films, motion sequences) of a person. Human gestures and movements are also unique biometric characteristics that can be used to identify persons.
AI Basic Applications ^
Building on these basic technologies, AI can be used to simulate different human capabilities.4 The most important basic applications are briefly described below:
AI-based perception refers to the analysis of data to identify environmental changes, attitudes and emotions. For this purpose, the system processes generated (sensor) data and assigns them to categories so that individual data can be aggregated to events or emotional attitudes.
With AI-based notification, the focus is on the system's response. Recognised patterns, events or emotions are used to notify users in a targeted manner. Users are thus alerted to events or conditions so that they can react to them promptly and appropriately.
AI-based recommendations extend the data evaluation in such a way that not only the status quo is presented, but also recommendations for action are given. For this it is not only necessary to assign categories to the data. In addition, the perceived actual state must be compared with a target state in order to be able to make recommendations for achieving the target state based on the deviation detected.
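The underlying target/actual comparison can be sketched in a few lines; the attribute names and values are purely illustrative:

```python
def recommend(actual, target):
    """Compare an actual state with a target state and derive a
    recommendation for each deviating attribute (names invented)."""
    recs = []
    for key, target_value in target.items():
        if actual.get(key) != target_value:
            recs.append(f"adjust '{key}' from {actual.get(key)} to {target_value}")
    return recs

actual = {"processing_days": 14, "error_rate": 0.05}
target = {"processing_days": 7, "error_rate": 0.05}
print(recommend(actual, target))
# -> ["adjust 'processing_days' from 14 to 7"]
```

A real recommendation system would also rank the possible actions by their expected contribution to closing the gap; this sketch only shows the deviation-detection step.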
AI can also be used to make predictions and forecasts. Based on the patterns detected in the data, predictions for further development are derived, which are communicated to the user.
AI-based precaution links the forecast with the comparison of target and actual status, so that forecasted deviations can be recognised and recommendations or warnings can be given at an early stage with a request for rectification.
In addition to the existing decision-support capabilities, AI can also be used to make independent decisions. In this case, the decision-maker is not supported in his decision-making process by data evaluation or forecasts. Instead, the system reacts to the data with independently made binding decisions, so that the user (and thus the human being) is completely removed from the process.
The data evaluation can also be carried out in real time within the framework of AI-based situation perception. In this case, the above mentioned capabilities have to be executed within milliseconds. Such systems can thus evaluate a situation almost in real time and react immediately with hints, alarms, forecasts or decisions.
Fields of application for AI in administration ^
Artificial intelligence can be used for different types of tasks. The basic AI technologies and applications have shown which human abilities for the fulfilment of tasks can already be taken over by AI today. Five fields of application will be used as examples to show how AI can be used as a further driver of modernisation in the administration.
Front-Office for contact with citizens ^
Making access to the administration more responsive to citizens' needs is not a new phenomenon. The customer orientation known from the private sector has been increasingly demanded for decades, also in the context of public administration. In the eyes of many citizens, the administration still lags far behind the private sector in this respect. However, artificial intelligence currently offers new possibilities for making contact with the administration as simple and pleasant as possible for citizens.
One of the best-known examples of the use of artificial intelligence are chatbots, i.e. dialogue systems with which text-based or speech-based communication can be established via natural language. Chatbots can simulate to a certain extent the dialogue with a human being. Users can ask their questions in natural language as if they were speaking to a human being, and they receive a prompt response in the same way. Chatbots can thus extend the existing access channels of administrations. They enable users to find the information they are looking for without specialist knowledge and to receive it in a form that is understandable to lay people. Personalised chatbots linked to a citizen's account can, for example, also link user input with data available on the citizen and thus improve the quality of the answers. AI can also be used to identify emotions so that appropriate responses can be given. Chatbots are still mostly information tools that provide information about services, procedures and deadlines. In the future, chatbots will also allow applications to be submitted and even entire processes to be completed: the information required in application procedures would be recorded in natural language, translated by the chatbot into a structured form for further processing and transmitted to the specialised case-processing system.
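A toy keyword-matching chatbot illustrates the basic dialogue loop; production chatbots use trained language models rather than this lookup, and the intents and answers here are entirely invented:

```python
# Toy chatbot: each intent maps a keyword list to a canned answer (all invented).
INTENTS = {
    "opening_hours": (["open", "hours", "opening"],
                      "The citizens' office is open Mon-Fri, 8:00-16:00."),
    "id_card": (["id", "card", "passport"],
                "To renew your ID card, please bring a biometric photo."),
}

def answer(question):
    """Pick the intent whose keywords overlap most with the question."""
    words = set(question.lower().replace("?", "").split())
    best_reply, best_score = None, 0
    for keywords, reply in INTENTS.values():
        score = len(words & set(keywords))
        if score > best_score:
            best_reply, best_score = reply, score
    return best_reply or "Sorry, I did not understand. A staff member will contact you."

print(answer("When are you open?"))
```

The fallback answer shows a design point that carries over to real deployments: when the system is unsure, the request should be routed to a human rather than answered badly.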
Personal language assistants differ from chatbots in that they rely on audible communication in natural language and are controlled exclusively by voice. They can be integrated into a smartphone, a computer or a smart loudspeaker and are started by an activation word. Platform-based voice assistants use numerous AI-based services in the background. The first cities in Germany are already testing the use of AI-based voice assistance systems.5
In the administrative context, portals provide a single point of access.6 Even stronger incentives could be provided by expanding citizens' accounts to include a personal assistant for various areas of life. Users would be given the adjustable option of releasing the data available on them for third-party services. Businesses would benefit from the high quality of and trust in administrative data. For example, access to identities and current registration addresses would considerably reduce the identification effort for many services. However, the impression must not arise that such an assistant creates transparent citizens. The citizens themselves decide who may have access to their stored data; by default, this would be nobody (privacy by design). It must not be technically possible to create a link against the will of the citizens.
Most administrative contacts serve to request administrative services.7 The steps towards this should be made as simple as possible for citizens. However, many application procedures are still based on analogue, form-based processes, and the first challenge for applicants is to find the necessary form. Beyond helping with this search, AI also makes it possible to simplify the application process itself, for example by taking data directly from natural language, checking it for plausibility or correctness and entering it into an electronic form. Ideally, citizens can submit their request in natural language, either in writing or orally. In other words, the citizen neither needs to know the appropriate procedure nor transfer the information into a paper-based form.
As the size and complexity of an organisation increases, the need for support processes increases in addition to the actual provision of services. These processes do not generate any added value themselves, but they do enable the processes that are actually desired. In an organisation as broadly based and complex as public administration, support processes play a significant role.
In order to utilise staff fully, it must be known what skills an employee has and what free capacities are available both individually and within the authority. An intelligent system can not only assess the suitability of a staff member for a specific case on the basis of assigned skills and experience, but can also recognise the organisational as well as individual workload in order to react to bottlenecks and idle capacity in a timely and appropriate manner. Sub-processes can also be prioritised, for example because an important overall process would otherwise be delayed.
The classic mailroom in combination with the electronic mailroom also offers starting points. Mailrooms accept postal items, file bundles, faxes and increasingly also e-mails. These documents, or the information they contain, must then be delivered to the correct destination. With scanning and AI-based text recognition, the contents of letters can be quickly evaluated, so that they can be forwarded electronically via workflow systems to the right department and the responsible official.
Internal processes such as the accounting of travel expenses can be greatly simplified with the help of artificial intelligence. AI-based risk management systems can make a pre-selection to only look more closely at questionable transactions. All non-critical processes are directly released by the system.8
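Such a pre-selection can be sketched as a simple rule-based score; the field names, weights and threshold are assumptions for illustration, not an actual risk model:

```python
def risk_score(claim):
    """Rule-based pre-selection sketch for travel expense claims.
    All fields, weights and the threshold below are invented."""
    score = 0
    if claim["amount"] > 1000:        # unusually high amount
        score += 2
    if claim["receipts_missing"]:     # documentation incomplete
        score += 3
    if claim["weekend_travel"]:       # atypical travel time
        score += 1
    return score

def needs_review(claim, threshold=3):
    """Only claims at or above the threshold go to a human examiner;
    everything else is released automatically."""
    return risk_score(claim) >= threshold

claim = {"amount": 180.0, "receipts_missing": False, "weekend_travel": True}
print(needs_review(claim))  # -> False: released without human involvement
```

A learning system would derive such weights from past audit outcomes instead of hard-coding them; the division of labour is the same, however: the machine releases the unremarkable bulk, humans examine the flagged rest.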
There are also numerous other support processes with a high potential for simplification or automation. These include the IT helpdesk for employees, the translation of documents or the preparation of transcripts of meetings. Although many processes have already been simplified by the integration of software products, human intervention is still necessary in these processes. The merging of data from different sources, the handling of unexpected information or the making of decisions are often performed by humans. Robotic Process Automation (RPA) is based on the operation of existing software products, many of which have proven themselves over long periods of time. Only the operation of this software is no longer carried out by a human being, but by automated algorithms.9 Metaphorically speaking, a robot virtually operates the computer's keyboard and mouse, thus enabling the automation of existing IT applications.
Decision Supporting Systems ^
In public administration, a large number of different decisions are taken with legally binding effect. The actual decision always consists of weighing up possible alternatives and finally choosing one of them. Since the majority of administrative procedures involve decisions, administrative science is sometimes referred to as «decision science».10 In the context of evidence-based government, decisions based on facts and figures, ideally available almost in real time, are becoming increasingly important.
Public administrations already have huge amounts of data and information from different sources that need to be collected in order to perform public tasks. In addition, smart objects have become an integral part of citizens' everyday life. Smartphones, smart watches, smart meters or smart homes are constantly generating huge amounts of data in which government agencies are increasingly interested.11 The administration itself also collects smart data, for example via sensor technology or traffic guidance systems. However, the potential of this data is often only partially exploited. Processing via dashboards or cockpits represents a first step towards utilisation. The system recognises which data is relevant in a given case, calculates key figures, visualises central arguments or establishes references to earlier decisions. In this way, the user is provided with a sounder basis for decision-making, on which he or she can make a personal assessment.
Intelligent resource planning is an important form of decision support in everyday administrative work. A well-known example of AI-based deployment planning is predictive policing.12 But such risk management systems can also be used in other areas. One example is predictive maintenance, in which sensors register unusual machine sounds and draw attention to necessary maintenance before the machine breaks down. But even without concrete sensor data, maintenance requirements can be forecast on the basis of statistical evaluation, or the staffing requirement for a shift can be calculated. The aim of these approaches is to use the available resources more efficiently.
The case officers, decision-makers, responsible superiors and the reviewing office of audit can also be supported by an audit and second assessment of the output. A decision control radar can support the decision-maker by ensuring that the decision is re-examined before execution, in line with the principle of dual control. If the decision deviates from the decision expected by the system on the basis of the available data, a message with a recommendation for revision is first issued directly to the processor.
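A minimal sketch of such a decision control radar, assuming a hypothetical confidence value supplied by the system's own prediction (all names and the threshold are illustrative):

```python
def control_radar(human_decision, expected_decision, confidence):
    """Dual-control sketch: flag a decision for revision when it deviates
    from the system's expectation and that expectation is sufficiently
    confident. The 0.8 threshold is an invented example value."""
    if human_decision != expected_decision and confidence >= 0.8:
        return ("review", f"system expected '{expected_decision}' "
                          f"(confidence {confidence:.0%}); please re-examine")
    return ("release", "decision matches expectation or deviation is plausible")

status, message = control_radar("reject", "approve", 0.92)
print(status)  # -> review
```

Note that the radar only delays execution and asks for a second look; in line with the principle of dual control, the final word stays with the human processor.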
The decision-making process itself can also be supported by technical systems. In the decision-making process, the processor can thus be presented with one or more proposals, if necessary provided with key figures, which he or she can check and then accept, adapt or reject. In this way, civil officers can process routine cases much faster, but in special cases they can also deviate from the suggestions made without any problems. With decision-support systems, the decision-making power always remains with the person, from the officer to the decision-maker and the politician. At the same time, the long-term effects of decisions can be taken into account to a greater extent. Forecasts on the long-term consequences of a decision can be calculated and clearly shown by means of key figures or visualisations. These can provide valuable insights and ensure a stronger sustainability of the decision to be taken.
Decision automation: Decisive systems ^
Besides supporting the decision maker, AI can also be used to automate decisions. This means that the human being is taken out of the decision process and the binding decisions are made autonomously and thus exclusively by a technical system.
The results of many administrative decisions can be derived directly from the relevant legislative texts. No interpretation of the case's facts or of the consequences resulting from them is necessary. It is only required to examine the extent to which the conditions defined by law are met, so that the direct consequences can be deduced. In these cases, administrative staff are often busy checking the conditions for certain procedures or services on the basis of clearly measurable facts and criteria. Many procedures could be processed fully automatically according to this model without the need to involve an administrative employee in the processing. If the necessary information is available in a suitable form and the necessary interfaces are in place, the technical implementation is usually not a major challenge. At present, it is mainly legal hurdles that make full automation difficult.13 The German legal system still assumes that a decision is ultimately attributable to a person and that this person can also be held liable for it.
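The pattern of such a fully rule-bound decision can be sketched as a chain of condition checks; the resident parking permit and its conditions are a hypothetical example, not an actual legal rule set:

```python
def decide_parking_permit(application):
    """Sketch of a fully automatable decision: every condition is taken
    directly from (hypothetical) legal requirements, so no discretion
    is involved and no human needs to be in the loop."""
    if not application["resident_in_district"]:
        return ("reject", "applicant is not resident in the district")
    if not application["vehicle_registered_to_applicant"]:
        return ("reject", "vehicle is not registered to the applicant")
    if application["open_fees"] > 0:
        return ("reject", "outstanding fees must be settled first")
    return ("grant", "all legal conditions are met")

app = {"resident_in_district": True,
       "vehicle_registered_to_applicant": True,
       "open_fees": 0}
print(decide_parking_permit(app)[0])  # -> grant
```

Each branch returns the decision together with its reason, which matters in the administrative context: even a fully automated decision must remain explainable and contestable.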
In addition, there are also decisions in the administration whose results cannot be directly derived from the legal requirements, but which have to be weighed up in each individual case. These are referred to as decisions with scope for assessment and discretion. The legislator in Germany has explicitly excluded these decisions from full automation in § 35a VwVfG.14 The main challenge in the coming years will be to identify existing room for discretion and use it appropriately. The legislator grants leeway so that the intended effect can be achieved in each individual case, even if the individual case could not be precisely foreseen in advance. In this respect, the entire decision-making process is also much more complex: there must be an understanding of what effect is intended and what means are to be used to achieve it in each individual case. The decision thus not only consists of transferring the legal requirements to the individual case and deriving the consequences; a classification and weighing of several factors must be made. Although the complexity of discretionary decisions is much higher, it does not seem reasonable to use this as an absolute criterion for automatability. The first challenge is to identify existing discretionary decisions. The second challenge for learning decision systems is to recognise the scope for discretion in each case and to make an appropriate decision within this scope.
Decisive systems with real-time decisions ^
In addition to administrative processes where decisions and execution can be easily achieved within minutes, hours or days, other administrative decisions require near real-time decision making and execution. On the one hand, this is related to the need for a direct response, such as a reaction in road traffic in the case of traffic light control or autonomous vehicles. On the other hand, the impact of processes can also be greatly improved. In many cases, this makes it possible to react immediately to the causes instead of only reacting to the visible consequences after a long time interval, thus protecting property and human life.
Intelligently networked and AI-based traffic control systems offer the possibility of reacting to traffic conditions almost in real time. Already today, numerous traffic-related data are collected. Vehicles constantly collect a wide range of data, including their location, speed and energy consumption. Pedestrians and cyclists can be located and tracked using their smartphones, Bluetooth or GPS trackers. Other external data sources can be integrated. If it is known, for example, that a major event will end at a certain time, the necessary public transport and road capacities can be secured and provided at an early stage. For this purpose, urban mobility data rooms are required in which the various data are collected. On this basis, AI-based decisions on traffic light changes, detours or speed limits can be made almost in real time. Intelligent green waves for rescue and emergency vehicles or buses and trams can also be controlled by algorithms in such a way that the roads are kept as free of vehicles as possible and do not clog intersections.
Disaster management usually requires the fastest possible response. In the event of natural disasters or terrorist attacks, a timely response can help to save lives, yet valuable time is often lost through coordination and communication. Automated systems offer time savings and more scope for action here, and such systems are conceivable for a number of natural disasters. Earthquakes can be detected at an early stage by sensors, so that high-speed trains and lifts are immediately slowed down before the secondary, destructive earthquake waves take effect. Floods can be predicted by forecasts based on remote measuring points. Using satellite images, AI-based algorithms detect volcanic eruptions at an early stage.15 In Japan, systems for detecting earthquakes, tsunamis and volcanic eruptions are already in use.16 AI systems offer the possibility of reacting to such sensor data in real time as soon as it is perceived.
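A threshold-and-quorum trigger sketches how such an early-warning system might react to primary-wave sensor readings before the destructive secondary waves arrive; the threshold, quorum and action names are illustrative assumptions, not values from any real system:

```python
def check_sensors(readings, p_wave_threshold=0.3):
    """Early-warning sketch: if enough sensors report primary-wave
    acceleration above the threshold, trigger protective actions
    immediately, without human coordination in the loop."""
    triggered = [r for r in readings if r >= p_wave_threshold]
    if len(triggered) >= 2:  # quorum of two sensors guards against false alarms
        return ["slow_down_trains", "stop_lifts_at_next_floor", "alert_population"]
    return []

# Two of four (invented) sensors exceed the threshold -> actions fire.
print(check_sensors([0.05, 0.41, 0.38, 0.02]))
```

The quorum illustrates a general trade-off in automated real-time response: reacting to a single sensor is fastest, but requiring agreement between sensors reduces costly false alarms.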
The administration pays particular attention to vulnerable individuals and social groups, who often cannot protect themselves sufficiently. Government employees, especially those working in the field, are also exposed to risks of physical violence. In this context, sensor data generated by smart objects can be used to detect dangerous situations. The challenge is, on the one hand, to recognise the dangerous situation and initiate an appropriate response and, on the other hand, to take appropriate account of privacy and data protection. Artificial intelligence is already able to recognise emotions from spoken language. Using the microphones integrated in smartphones, the conversational atmosphere can be analysed in real time without the content of the conversation having to be recorded or analysed. If an emotion inappropriate to the situation is detected, a message can either be sent to the person concerned or appropriate measures can be taken immediately.
Fields of application for AI in the judiciary ^
There are also many possible applications for AI in the judicial context. Judges, prosecutors, lawyers and judicial officials can be relieved of routine tasks and supported in their decisions. Similarly, certain decisions can be made fully automatically.
The decisions taken by the judiciary shape people and their futures. An independent and trustworthy judiciary is one of the cornerstones of a free and democratic basic order. The use of supportive or decision-making systems based on artificial intelligence must therefore be exercised with restraint in order to avoid lasting damage to society's trust in the work of the judiciary.
Decision support systems in the judiciary ^
Numerous legal decisions are followed by a series of implementation steps. Decisions not only have to be documented properly and comprehensibly; the persons concerned must also be informed about the decision and the resulting rights and obligations in accordance with legal regulations. Standardised text modules are often used for the formulation, which must be inserted according to the decision. Supporting systems can relieve the processor by automatically creating all necessary documents following a decision and sending them to the persons concerned. Decisions are also often made by applying legal provisions to individual cases: daily allowance rates are to be calculated according to the income of the person concerned, and procedural costs are to be adjusted to the corresponding amount in dispute.
In addition, the legal decision itself can also be supported by the use of technical systems. Artificial intelligence offers the possibility of recognising similarities between a decision and previous cases and thus providing the decision-maker with indications. Documents and statements can be analysed to find out to what extent their statements agree and where they contradict each other. Based on the facts presented, systems can also make recommendations as to which legal basis should be used in this case. All this serves to enable the decision-maker to make a decision that is as well-founded as possible. They receive edited versions, key figures and relevant text passages, so that they can grasp the relevant content in much less time than before.
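How similarity to previous cases might be measured can be hinted at with simple word overlap (Jaccard similarity); real systems would use semantic models rather than this surface comparison, and the case descriptions are invented:

```python
def jaccard(a, b):
    """Similarity of two case descriptions via word overlap - a crude
    stand-in for the semantic similarity measures real systems use."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

new_case = "tenant withheld rent because of mould in the flat"
old_case = "tenant withheld rent due to mould damage in the flat"
print(round(jaccard(new_case, old_case), 2))  # -> 0.58
```

Cases scoring above some threshold would be surfaced to the decision-maker as potentially relevant precedents, together with the matching passages.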
AI can also be used by lawyers in the same way to support a defence. Citizens can also automatically receive first non-binding estimates of the chances of success of proceedings. If the request is obviously not given any chance at all, citizens can save time and money, while at the same time the burden on the judiciary is reduced.
Decisive systems in the judiciary ^
In addition to decision support systems, applications which make decisions independently can also be used in the judiciary. Up to now, the making of legal decisions in Germany has been reserved for human beings: judgements about people are always made by people. However, the overburdening of the German justice system also prompts consideration of automating certain decisions. In particular, standardised procedures occur in large numbers of cases and make heavy demands on the capacity of the justice system.
Decisive systems in the judiciary offer potential in all those areas in which there are clearly defined procedures with clear and measurable decision-making criteria. If certain conditions are met, a previously defined decision is taken. Procedures which bind the legal consequence to clearly measurable criteria are particularly suitable for this purpose. Such procedures could be handled comparatively easily by technical systems alone, without involving a human being in the process. Ultimately, however, it must be ensured by analogy with §31a SGB X that all information relevant to the procedure is actually taken into account. Only in a possible second instance would the result then be examined by a human being and, if necessary, revised.
Minimum Requirements ^
In order to deliver real benefits and be accepted in the long term, supporting and decision-making systems have to meet certain requirements. Both citizens and members of the judiciary must be able to work with the systems and their results. One goal is to save time and resources. Transparent decisions, the right to appeal and the right to have an application processed by a person must be preserved. At the same time, there is an expectation that these systems will significantly improve the quality of decisions.
Tonn and Stiefel17 point out as the most important aspect that, faced with increasingly complex problems, computers can certainly make better decisions. The involvement of different actors also promotes swarm intelligence: decisions are no longer made on a case-by-case basis by a single decision-maker. Instead, numerous experts contribute to the basic decision stored in the system, so that this decision can be considered far more well-founded. The systems also ensure a more transparent decision-making process if they document it. In some cases, however, it may be unethical to hand over decisions to machines, as this would cause people to lose part of their humanity. Likewise, the human ability to make decisions would suffer, and people would be less inclined to act in an ethically correct manner. The authors also stress that computers lack creativity and cannot represent human values; they lack the ability to weigh up alternatives or to empathise. Finally, when computers take over decision-making, people have to admit to themselves that they do not want to make the tough decisions - which is unlikely, Tonn and Stiefel say.18
A fundamental question also concerns the quality of the data used. More data is not necessarily better data: falsified facts and insufficient information can lead to different judgements, so quality and accuracy must be ensured. Novoselic19 also raises the question of discretion. Related to this, politics often requires causal research, which would be neglected if cases were assessed purely through data interpretation. The author also addresses the danger of insufficient data protection and increasing surveillance when sensors, smart objects and cyberphysical systems continuously generate data that can be evaluated by AI. Furthermore, the problem of responsibility and accountability arises: who is responsible for decisions made by systems? Finally, Novoselic addresses the danger of new elites who alone understand the technology and thus influence political processes. At the same time, data are never neutral; their selection is always influenced by the experts responsible, yet an illusion of objectivity is created.
Kennedy and Scholl20 stress a similar point. All models carry an implicit influence of the experts who create them, yet their results foster an unwarranted belief in objectivity, which is reinforced by information technology. The authors also emphasise the importance of human judgement and values, and see the danger that important principles of equality may not be translatable into information systems. They see independence and privacy at risk and warn of the danger of manipulation. It must also be possible to deal with wrong decisions, which computer systems will make as well. They further see a decline in the transparency of decision-making processes, especially when decisions are taken by systems whose code is not public. They also describe the difficulty of translating legal terms into computer programs, especially because there are hardly any experts proficient in both fields. Finally, the authors ask how to deal with unforeseen inputs, information or other demands for flexibility.21
New Areas of Research ^
The debates described above result in an impressive list of questions which administrative informatics, legal informatics and political informatics, together with their other sister disciplines (administrative science, political science, law, psychology, informatics) will have to address in an interdisciplinary way in the upcoming years in order to achieve a sustainable and fair use of artificial intelligence in the public sector.22
These concern, first of all, the data used as a basis for decision-making. The quality of decisions can only ever be as high as the quality of the underlying data. The question therefore arises as to whether sufficiently good and valuable data are available that can be evaluated by artificial intelligence and autonomous systems. Classical governmental data stocks already offer a broad basis for this. In the future, data generated by smart objects will provide information in completely new dimensions that can be included in decisions. The question arises, however, to what extent all decision-relevant information can be represented by data and at which points a subjective, human assessment by the decision-maker is required instead. While certain procedures are based on clearly measurable and quantifiable values and figures, others require an individual assessment. In a first step, the suitable procedures must therefore be identified with regard to the information required for the decision. The assurance of data quality must also be taken into account. The use of smart data in particular represents a new field: data can be used for fact-based action, but can also be manipulated in a targeted manner to influence decisions, and this must be taken into account in the design. It must therefore be determined in advance which data from which sources may be used in decision-making processes and which confidence levels they must meet. Data that do not meet these requirements should not be used.
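The idea of admitting only data sources that meet a predefined confidence level can be sketched as follows. The source names, their ratings and the required minimum are all hypothetical; in practice such levels would have to be fixed per procedure in advance.

```python
# Hypothetical confidence ratings per data source (invented for illustration).
SOURCE_CONFIDENCE = {
    "civil_register": 0.99,
    "certified_sensor": 0.90,
    "citizen_upload": 0.60,
    "social_media": 0.20,
}

# Minimum confidence a source must meet for this (hypothetical) procedure.
REQUIRED_CONFIDENCE = 0.80

def usable_sources(sources: dict[str, float], required: float) -> list[str]:
    """Admit only sources whose confidence meets the procedural minimum."""
    return [name for name, conf in sources.items() if conf >= required]
```

Under these assumptions, low-confidence sources such as citizen uploads or social media would simply be excluded from the decision-making process rather than weighted down.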
At the same time, the decision-making process in the narrower sense must be considered. This raises the fundamental question of which types of decision are made and to what extent they are suitable for automation or decision support. As mentioned above, bound decisions appear more suitable, since the legislator has already mandated the legal consequence for a given set of facts. This raises the concrete question of how processes are to be designed for matters without discretion and, in future, also for those involving discretion. This is accompanied by the fundamental question of what role motives, individual fates and causal research should play: is it sufficient if defined facts are established, or should the information behind them also be taken into account? How should unforeseen information be dealt with? Likewise, systems based on artificial intelligence must respect the principles of the rule of law, such as equal opportunities for all actors. During implementation, the question also arises as to how legal decision-making logic can be translated into the programming of technical AI systems. The transparency and traceability of algorithmic decisions will also play a fundamental role: trust in and acceptance of AI systems can only be created by making the information used and the decision logic comprehensible to humans.
Questions also arise regarding the implementation of decisions taken by autonomous systems. How are autonomously made decisions dealt with inside and outside organisations? Do automated decisions reduce the potential for conflict? Is trust in decisions greater when they are made by AI-based systems? How can broad acceptance of AI-based decisions be ensured?
Last but not least, questions arise about the framework conditions and the environment of autonomously deciding systems. How does the general comprehensibility of decisions change if autonomous systems decide instead of people? Which decisions should, for ethical or moral reasons, remain reserved exclusively for humans? How should responsibility and accountability be handled? How can we avoid a situation in which only a few experts can understand and control decisions? What is the impact of automated decision-making on the capabilities of future generations? And, finally, which dependencies on technical systems arise, and how can critical systems be shut down again?
In the age of smart government and artificial intelligence, government, administration and the judiciary must adapt to substantial changes brought about by new possibilities of the Internet of Things and the Internet of Services.
The introduction of autonomous decision-making systems in government and administration is not trivial. Nor should it be taken lightly, as it has fundamental consequences for the state, administration and society, and its positive and negative consequences cannot always be predicted in full. This article brings together numerous current open questions around four challenges. For a successful implementation of autonomous systems, answers are required as to how reliable bases for decision-making are created, how decisions are made by autonomous systems, how the implementation of decisions of autonomous systems is realised, and which framework conditions are necessary. These challenges outline a research agenda that administrative informatics, legal informatics and political informatics, together with administrative science, political science, law, psychology, business informatics and computer science, will have to address in the coming years in an interdisciplinary and transdisciplinary manner. The state and public administration would be well advised to address these issues in a timely manner together with the scientific community. They would then be in a position to provide answers and framework recommendations at an early stage when the implementation of smart government and autonomous systems is due in the coming years. In this context, there is also a need to think about limits, ethics and the regulation of the use of artificial intelligence, to debate openly and to argue politically. Such a transformation will not only create value and winners, but will also produce losers and losses. The use of AI by authoritarian police states to capture, persecute and imprison dissenters is just as inhuman as the use of autonomous weapon systems in military conflicts. There is a need for action, and existing scope for action must be used jointly. The interdisciplinary sciences will have to be valuable partners.
Areekadan, J. 2018: Alexa nennt Wartezeiten, in: Kommune 21, 18 (2), S. 22–23.
Bauer, V. 2019: Wie KI und Satelliten helfen können, Vulkanausbrüche vorherzusagen, Mobile Geeks, 09.03.2019. Online: https://www.mobilegeeks.de/news/wie-ki-und-satelliten-helfen-koennen-vulkanausbrueche-vorherzusagen/.
Braun Binder, N. 2016: Vollständig automatisierter Erlass eines Verwaltungsaktes und Bekanntgabe über Behördenportale. Die öffentliche Verwaltung, S. 891–898.
Cummings, D. 2016: Seven Spectrum of Outcomes for AI, in: David Cummings on Startup, Atlanta. Online: https://davidcummings.org/2016/12/28/seven-spectrum-of-outcomes-for-ai.
Bergmann, L.; Crespo, I.; Fleischmann, J. 2009: Gestaltung transparenter Geschäftsprozesse. In: Dombrowski, U.; Hermann, C.; Lacker, T.: Sonnentag, S.: Modernisierung kleiner und mittlerer Unternehmen, Springer, Heidelberg.
Daum, R. 2002: Integration von Informations- und Kommunikationstechnologien für bürgerorientierte Kommunalverwaltungen, Nomos Verlagsgesellschaft, Baden-Baden.
Djeffal, C. 2017: Künstliche Intelligenz in der öffentlichen Verwaltung, Berichte des NEGZ, Berlin. Online: https://www.hiig.de/wp-content/uploads/2019/03/NEGZ-Kurzstudie-3-KuenstlIntelligenz-20181113-digital.pdf.
Duncker, K.; Noltemeier, A. 1985: Organisationsmodelle für ein Bürgeramt und deren Realisierung in der Stadt Unna, Verlag, St. Augustin und Darmstadt.
Etscheid, J. & von Lucke, J. 2020: Künstliche Intelligenz in der öffentlichen Verwaltung, Digitalakademie@bw, Stuttgart, in Druck.
Goldacker, G. 2017: Die Perspektive wechseln, Kommune 21, 07.03.2017. Online: https://www.kommune21.de/meldung_25900_Die+Perspektive+wechseln.html.
Grunow, D. 1988: Bürgernahe Verwaltung – Theorie, Empirie, Praxismodelle, Campus Verlag, Frankfurt und New York.
Japan Meteorological Agency 2016: The national meteorological service of Japan, Tokio. Online: http://www.jma.go.jp/jma/en/Activities/brochure201603.pdf.
Kennedy, R. & Scholl, H. J. 2016: E-regulation and the rule of law: Smart government, institutional information infrastructures, and fundamental values, Information Polity, 21(1), S. 77–98. http://doi.org/10.3233/IP-150368.
Knobloch, T. 2018: Vor die Lage kommen: Predictive Policing in Deutschland, Stiftung Neue Verantwortung, Berlin.
Köneke, V. 2018: Doktor Algorithmus, sag mir was ich hab, Zeit Online, 13.08.2018. Online: https://www.zeit.de/digital/internet/2018-08/deep-learning-medizin-kuenstliche-intelligenz-neurologie-augenheilkunde.
Kuhn, J. 2019: Mein Smartphone weiß dass ich wütend bin, in: Süddeutsche Zeitung, 27.03.2019. Online: https://www.sueddeutsche.de/digital/smartphone-software-emotionen-simulation-ki-1.4377004.
Lanz, A. 2010: Entwurf und Implementierung eines Prozesses aus der Verwaltung am Beispiel einer Reisekostenabrechnung, Universität Ulm, Ulm.
Lindinger, M. 2019: KI entdeckt verborgenes Wissen, in: Frankfurter Allgemeine Zeitung, 30.07.2019. Online: https://www.faz.net/aktuell/wissen/klug-verdrahtet/klug-verdrahtet-ki-entdeckt-beim-lesen-von-artikeln-verborgenes-wissen-16286851.html.
von Lucke, J. 2008: Hochleistungsportale für die öffentliche Verwaltung, Eul-Verlag, Siegburg.
von Lucke, J. 2016: Smart Government – Intelligent vernetztes Regierungs- und Verwaltungshandeln in Zeiten des Internets der Dinge und des Internets der Dienste, Schriftenreihe des The Open Government Institute | TOGI der Zeppelin Universität Friedrichshafen, Band 16, Berlin: epubli GmbH.
von Lucke, J. 2018a: In welcher smarten Welt wollen wir eigentlich leben? Verwaltung und Management, 24(4), S. 177–196.
von Lucke, J. 2018b: Smart Government auf einem schmalen Grat, in: Resa Mohabbat Kar, Basanta Thapa, Peter Parycek (Hrsg.): (Un)Berechenbar? Algorithmen und Automatisierung in Staat und Gesellschaft, Kompetenzzentrum Öffentliche IT (ÖFIT) Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS, Berlin.
von Lucke, J. & Große, K. 2017: Smart Government – Offene Fragen zu autonomen Systemen im Staat 4.0, in: Schröter, W. (Hrsg.): Autonomie des Menschen – Autonomie der Systeme – Humanisierungspotenziale und Grenzen moderner Technologien, Talheimer Sammlung kritisches Wissen, Band 71, Talheimer Verlag, Mössingen-Talheim 2017, S. 313–327.
Mainzer, K. 2016: Künstliche Intelligenz – Wann übernehmen die Maschinen? Springer Verlag, Heidelberg.
Möltgen, K. & Lorig, W. 2009: Die kundenorientierte Verwaltung – zu den Facetten eines Leitbildes der Verwaltungsmodernisierung, VS Verlag für Sozialwissenschaften, Wiesbaden.
Nesseldreher, A. 2006: Entscheiden im Informationszeitalter, Der Andere Verlag, Tönning.
Novoselic, S. 2016: Smart Politics, in: von Lucke, J. (Hrsg.): Smart Government –Intelligent vernetztes Regierungs – und Verwaltungshandeln in Zeiten des Internets der Dinge und des Internets der Dienste, TOGI Schriftenreihe (16), epubli, Berlin, S. 77–95.
Püttner, G. 2000: Verwaltungslehre – Ein Studienbuch, 3. Auflage, Verlag C.H. Beck, München.
Scheer, A.-W. 2017: Robotic Process Automation (RPA) – Revolution der Unternehmenssoftware, IM+io – Das Magazin für Innovation, Organisation und Management, 32 (3), S. 30–41.
Siegel, T. 2017: Automatisierung des Verwaltungsverfahrens – zugleich eine Anmerkung zu §§ 35a, 24 I 3, 41 IIa VwVfG, in: Deutsches Verwaltungsblatt (132), S. 24–28.
Stanoevska-Slabeva, K. 2018: Conversational Interfaces – die Benutzerschnittstelle der Zukunft?, in: Wirtschaftsinformatik und Management, 10 (6), S. 26–37.
Stucki, T.; D’Onofrio, S. & Portmann, E. 2018: Chatbot – Der digitale Helfer im Unternehmen. HMD Praxis der Wirtschaftsinformatik, 55, S. 725–747.
Tonn, B. & Stiefel, D. 2014: Human Extinction Risk and Uncertainty: Assessing Conditions for Action. Futures 63:134–44.
Wang, R. 2016: Monday’s Musings – Understand The Spectrum Of Seven Artificial Intelligence Outcomes, Constellation Research. Online: http://blog.softwareinsider.org/2016/09/18/mondays-musings-understand-spectrum-seven-artificial-intelligence-outcomes/.
Welzel, C. & Grosch, D. 2018: Das ÖFIT-Trendsonar künstliche Intelligenz, Kompetenzzentrum öffentliche IT, Berlin.
- 1 See: Etscheid/von Lucke 2019.
- 2 See the detailed presentation in the ÖFIT-Trendsonar: Welzel/Grosch 2018.
- 3 See: Lindinger 2019.
- 4 Referring to: Wang 2016, Cummings 2016 and Etscheid/von Lucke 2019.
- 5 See: Stanoevska-Slabeva 2018, S. 27 ff. and Areekadan 2018, S. 22.
- 6 See: von Lucke 2008.
- 7 See: Goldacker 2017.
- 8 See: Lanz 2010.
- 9 See: Scheer 2017.
- 10 See: Püttner 2000, S. 332 and Nesseldreher 2006.
- 11 See: von Lucke 2018a.
- 12 See: Knobloch 2018.
- 13 See: Siegel 2017.
- 14 See: Braun Binder 2016.
- 15 See: Bauer 2019.
- 16 See: Japan Meteorological Agency 2016.
- 17 See: Tonn/Stiefel 2014, S. 134 ff.
- 18 See: Tonn/Stiefel 2014, S. 134 ff.
- 19 See: Novoselic 2016, S. 77 ff.
- 20 See: Kennedy/Scholl 2016, S. 77 ff.
- 21 See: von Lucke/Große 2017, S. 322–324.
- 22 See: von Lucke/Große 2017, S. 324–327.