Writing II – Final assignment

Social Human-Robot Interaction: Fact or Fiction

Until recently, human-robot relationships were devoid of social interaction. Not only did sociable robots not yet exist, but robots in general were far from autonomous. Because of their immobility, robots were not able to enter the human domain, and human-robot interaction (HRI) had primarily been studied in the artificial environment of a research laboratory. Moreover, from a sociological point of view, social interaction is defined as homogeneous. In other words, social behaviour (e.g. communication) can occur exclusively between members of the same species, making social HRI a paradox. The main questions addressed in this paper are: what is social interaction, how does one engage in social interaction, and are humans and robots able to interact with each other at a social level? In an attempt to answer these questions, this paper has been divided into three parts. The first part provides a short introduction to the characteristics of social interaction. The second part gives an overview of the research that has been conducted on the role of shared attention in HRI, and in particular that of eye-gazing behaviour. The third and final part introduces a social robot (Kismet) and attempts to answer the question whether social HRI turns out to be fact or fiction.

To be social is to be interactive and communicative with members of one's species. The most important social behaviour, however, involves reciprocity: the mutual exchange of favours over time. Reciprocal behaviour strengthens the bonds between the members of a society in an attempt to secure its future. Having a conversation is a common example of social interaction (Cassell, 1999, as cited in Breazeal, 2002). However, to be able to engage in natural and fluent interaction, a well-developed set of social skills is required.
First of all, one needs to be able to give cues and to interpret perceived cues. When people communicate, they send and receive messages, which can be verbal or non-verbal. Tone of voice, pitch and intonation are among the characteristics of verbal communication. Some examples of non-verbal cues are gestures, body posture, facial expression and eye gazing (Breazeal, 2002). There are two types of gazing behaviour: looking at something that has attracted one's attention, known as eye gazing, and following someone else's gaze to an attended object or location, known as gaze tracking (Johnson, Slaughter, & Carey, 1998). In both types, gaze direction provides valuable information about a person's interest and emotion. Moreover, it affects the evaluation of a person's attentiveness and social skills (Kleinke, 1986). The ability to attract and direct the attention of potential interaction partners, by making eye contact for example, creates a basis for social contact. The situation wherein two people have consciously directed their attention, and consequently their gaze, at the same object (or person) is called shared attention (Hoffman, Grimes, Shon, & Rao, 2006). It facilitates the exchange of information and experiences (Scassellati, 1999). Therefore, shared attention might be considered a precursor of social contact (Carpenter, Nagell, & Tomasello, 1998).

If robots are to interact naturally with humans, it is essential that their human interaction partners accept the robot as a social being and acknowledge its social skills. To examine whether a robot is just as capable as a human being of engaging in natural social contact, a great number of experiments have been performed on gazing behaviour and shared attention in HRI. Yoshikawa, Shinozawa, Ishiguro, Hagita, and Miyamoto (2006) hypothesised that the eye-gazing behaviour of a robot would affect its evaluation as a social partner.
They studied the effect of a participant's feeling of being looked at by a robot (related to the concept of presence) on their evaluation of the robot as a social partner (e.g. as a communicatively capable equal). The results of this study showed that the eye-gazing behaviour of the robot was positively associated with its evaluation as a social partner. In contrast to the study of Yoshikawa et al., Minato, Shimada, Itakura, Lee, and Ishiguro (2006) found that, despite responsive eye movements, their humanoid robot was evaluated negatively as a social partner. They examined the influence of a robot's appearance on this evaluation by focussing on the breaking of eye contact during a conversation. According to social signal theory, people tend to break eye contact to communicate to others that they are thinking about something (McCarthy, Lee & Muir, 2001, as cited in Minato et al., 2006). Participants who broke eye contact more often evaluated the robot more negatively as a social partner. These findings showed that the humanoid's appearance had an effect on the evaluation of its social abilities. The evaluative character of eye-gazing behaviour shows that the willingness of people to engage in social contact with a robot depends on the robot's ability to communicate effective non-verbal visual cues. It also shows that a robot's appearance can interfere with a positive evaluation of its social skills.

For a robot to be able to engage in social contact, it does not only require a human's approval of its social skills; it also needs to be able to initiate and maintain interaction. To find out whether a robot is capable of attracting and directing a person's gaze during interaction, two experiments will be described. The first is an experiment conducted by Sidner, Lee, Kidd, Lesh, and Rich in 2005. They hypothesised a relation between engagement gestures and the evaluation of the interaction. According to Sidner et al.
(2005), the process of social engagement involves the initiation of contact, its maintenance, and the ability to disconnect from it by means of verbal and non-verbal cueing. Gaze tracking, following a person's gaze, is part of the engagement process and is, in the context of the experiment, referred to as an 'engagement gesture'. They investigated the influence of engagement gestures on the involvement of people during their interaction with a robot that was able to speak (talking condition) and to communicate non-verbal cues as well (moving condition). The results showed that the robot was able to draw a participant's attention to the object it had 'in mind' and, consequently, that the participants were able to successfully interpret the robot's gazing behaviour and track its gaze direction. The participants reported being more involved in interaction that included engagement gestures (moving condition) than in interaction that did not (talking condition). This finding was supported by the fact that participants spent more time interacting with the robot in the moving condition than in the talking condition. Sidner et al. concluded that shared attention during HRI is possible and that people can actually become involved in interacting with robots. The extent to which people become involved in HRI does, however, seem to depend on the presence of engagement gestures during contact.

Whether a robot might still be capable of attracting and directing a person's gaze when it has no pupils was investigated by Yonezawa, Yamazoe, Utsumi, and Abe (2007). The lack of pupils in the eyes (as in many puppets) makes it more difficult to identify gaze direction. In this case, only the position of the robot's head and face indicated the object it was attending to. Yonezawa et al. found that even robots with far less sophisticated eyes were able to give apt visual cues that facilitated gaze tracking.
They concluded that the limited communication of the eyes only marginally reduced the robot's ability to share attention.

The experiments described in this part of the paper illustrate that robots are able to initiate and maintain interaction with people. They have also demonstrated the evaluative character of eye-gazing behaviour, and shown that the willingness of people to engage in social contact with a robot depends on the robot's ability to communicate effective non-verbal visual cues. Moreover, these experiments have shown that, apart from the technical requirements needed for apt cueing behaviour, the robot's appearance might overshadow the perceived quality of its social skills, making appearance a powerful factor in the initiation of HRI. A negative evaluation of a robot as a social partner, whether justified or not, might nip potential interaction in the bud. A possible explanation for the influence of appearance might be provided by Mori's theory of the uncanny valley (1970, as cited in Minato et al., 2006). Mori's theory states that people prefer anthropomorphic robots as social interaction partners, though only to a certain extent. When the resemblance is almost perfect, and the robot can be distinguished only by minute differences in appearance and behavioural movement, people tend to experience an extreme feeling of repulsion. This phenomenon causes a steep drop in the familiarity curve in the theory's accompanying graph, which is referred to as the uncanny valley. Many of the experiments addressed here have shown that the level of comfort that people experience during interaction depends heavily on the evaluation of the robot as a social partner. Therefore, the design of social robots needs to carefully consider the consequences of the robot's appearance. The next and final section of this paper will give a brief introduction to Kismet, followed by a final word on the topic of HRI.
Cynthia Breazeal's pioneering creation of the 1990s, named Kismet, was the first robot to enter the domain of social interaction. Even though Kismet has no body or limbs, people were immediately drawn to him because of his distinct facial features and his ability to mimic human emotion. His big eyes, furry brows, bright red lips and pointy pink ears gave him a childlike appearance that contrasted with his mainly mechanical look of screws and bolts. According to Breazeal, 'Kismet is special and unique. Not only because of what [he] can do, but also because of how [he] makes you feel'. Since Kismet's arrival, research into the social aspects of human-robot interaction has been flourishing.

To conclude this paper and to answer the question whether social human-robot interaction is fact or fiction, one might conclude that a clear-cut answer cannot yet be given, although the experimental results seem to weigh more heavily on the side of fact. The results of the experimental HRI research presented in this paper closely resemble the frame of social referencing found in human-human interaction (Carpenter, Nagell, & Tomasello, 1998). This resemblance indicates that social HRI is well on its way to becoming a reality. In this light, a new definition of the concept of social interaction needs to be formulated from the viewpoint of artificial intelligence. One concern that does need to be addressed, in the context of HRI research, is that of ecological validity. Most of the robots involved in the experiments mentioned in this paper were not able to move around or even to leave their artificial environment. The results might therefore have been different if the experiments had been conducted in the human domain. A robot that is surrounded by human artefacts might be evaluated more positively as a social partner when it is visually associated with familiar objects from the social domain.
This indicates that, in order to obtain accurate and ecologically valid results, robots need to become autonomous.

References

Breazeal, C. (2002). Regulation and entrainment in human-robot interaction. The International Journal of Robotics Research, 21, 883-902.

Carpenter, M., Nagell, K., & Tomasello, M. (1998). Social cognition, joint attention, and communicative competence from 9 to 15 months of age. Monographs of the Society for Research in Child Development, 63, 1-174.

Hoffman, M. W., Grimes, D. B., Shon, A. P., & Rao, R. P. N. (2006). A probabilistic model of gaze imitation and shared attention. Neural Networks, 19, 299-310.

Johnson, S., Slaughter, V., & Carey, S. (1998). Whose gaze will infants follow? The elicitation of gaze following in 12-month-olds. Developmental Science, 1, 233-238.

Kleinke, C. L. (1986). Gaze and eye contact: A research review. Psychological Bulletin, 100, 78-100.

Minato, T., Shimada, M., Itakura, S., Lee, K., & Ishiguro, H. (2006). Evaluating the human likeness of an android by comparing gaze behaviors elicited by the android and a person. Advanced Robotics, 20, 1147-1163.

Scassellati, B. (1999). Imitation and mechanisms of joint attention: A developmental structure for building social skills on a humanoid robot. Computation for Metaphors, Analogy, and Agents, LNCS 1562, 176-195.

Sidner, C. L., Lee, C., Kidd, C. D., Lesh, N., & Rich, C. (2005). Explorations in engagement for humans and robots. Artificial Intelligence, 166, 140-164.

Yonezawa, T., Yamazoe, H., Utsumi, A., & Abe, S. (2007). Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking. Proceedings of ICMI 2007, 140-145.

Yoshikawa, T., Shinozawa, K., Ishiguro, H., Hagita, N., & Miyamoto, T. (2006). Responsive robot gaze to interaction partner. Proceedings of Robotics: Science and Systems.