Developments in the study of communication between humans and artificial intelligence
Introduction to this field of study
Artificial intelligence technology is progressing rapidly and is being used in various fields, including medicine, welfare, and education. Until now, technical research for the development of artificial intelligence has mainly been conducted in the field of information science, where various computational models and algorithms have been developed. Research on the interaction between human beings and systems equipped with artificial intelligence, on the other hand, has gone beyond the framework of information science, and interdisciplinary studies have been conducted involving cognitive science and social science. In the U.S., for example, there are "Human-Computer Interaction (HCI)" departments where students can study these topics (for example, at CMU (https://www.hcii.cmu.edu/)), and international conferences have also been held to address them (for example, CSCW (https://cscw.acm.org/) and CHI (https://sigchi.org/), organized by the ACM). The core of these fields is information and computer science, but they also examine human-science questions, such as how human beings can harmoniously communicate and coexist with intelligent systems like robots equipped with artificial intelligence.

In recent years, research and development of "agent" systems has also been carried out in the field of artificial intelligence. An "agent" is a computer program that autonomously makes judgments and solves problems by itself on the basis of certain rules. Familiar examples include the personified character you see at the top of the screen when you use an ATM and the "Hitsuji shitsuji [Sheep butler]" smartphone app. A field of research called "Human-Agent Interaction" has emerged that studies human cognitive activity, social behavior, and the fundamental technology required for human beings to communicate with these sorts of intelligent systems.
For details, please see the following websites: the Human-Agent Interaction Symposium 2017 (http://hai-conference.net/symp2017/) and the international symposium (http://hai-conference.net/).
Today, in the field of HCI, a great deal of research is being conducted on how human beings respond to agents and how agents should be designed to allow smooth communication between people and computers. One important theory that must not be overlooked in advancing this research is the "media equation." This theory claims that people unconsciously respond to computers in a social manner, much as we humans do in our daily interactions with each other. For example, it has been experimentally demonstrated that the social etiquette observed in interactions between human beings is also followed when people deal with a computer (Nass et al., 1995). But do people also display such interpersonal responses when dealing with an artificial intelligence or agent that autonomously and adaptively responds to human behavior and displays a certain kind of personality? In the project we are conducting at the Institute of Human Sciences, we are running experiments that compare human-to-human interactions with human-to-artificial-intelligence interactions, employing cognitive science approaches to examine the potential for harmony and coexistence between human beings and artificial intelligence.
The use of artificial intelligence for decision-making in a trial (Hayashi and Wakabayashi, 2017)
Here I would like to present a study from our recent research projects on AI and law, in which we examined what kind of influence the provision of information by an artificial intelligence has on decision-making. In Japanese trials, the length of a sentence is decided with reference to past cases provided by experts, and our research focused on this decision-making process. In the experiment, a virtual judge (lay judge) trial was constructed on a personal computer, and participants were assigned to either a "judge" group, who received a suggested sentence from a mock human judge, or an "artificial intelligence" group, who received a suggested sentence from an artificial intelligence. When a sentence of four years, relatively appropriate given the circumstances of the case in question, was suggested to the participants, a significant difference was found between the two groups: those in the artificial intelligence group decided on sentences closer to the suggestion than those in the judge group. In other words, the experiment indicated that in decision-making, people tend to rely more heavily on information provided by an artificial intelligence as an aid in reaching a conclusion than on information provided by a human being. For details, please see the following paper published in the proceedings of the ACM conference.
Hayashi and Wakabayashi (2017). Can AI become reliable source to support human decision-making in a court scene? Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017), 195-198.
URL: http://dl.acm.org/citation.cfm?doid=3022198.3026338

We believe that by carrying out a series of fundamental experiments of this kind, it is possible to investigate this topic broadly, from the information sciences, which address the design of artificial intelligence, to the human sciences, which deal with the interaction between artificial intelligence and human beings. And if, as in the sentencing scenario addressed in this experiment, it becomes clear that the use of artificial intelligence can avoid adverse effects that arise when the agent playing a particular role is a human being, this research may also lead to new discoveries concerning the uses of artificial intelligence in society.