
Front Robot AI. 2021; 8: 788242.

Abstract

Increasingly, people interact with embodied machine communicators and are challenged to understand their natures and behaviors. The Fundamental Attribution Error [FAE, sometimes referred to as the correspondence bias] is the tendency for individuals to over-emphasize personality-based or dispositional explanations for other people’s behavior while under-emphasizing situational explanations. This effect has been thoroughly examined with humans, but do people make the same causal inferences when interpreting the actions of a robot? Compared to people, social robots are less autonomous and agentic because their behavior is wholly determined by humans in the loop, programming, and design choices. Nonetheless, people do assign robots agency, intentionality, personality, and blame. Results of an experiment showed that participants made correspondent inferences when evaluating both human and robot speakers, attributing their behavior to underlying attitudes even when it was clearly coerced. However, they committed a stronger correspondence bias in the case of the robot [an effect driven by the greater dispositional culpability assigned to robots committing unpopular behavior], and they were more confident in their attitudinal judgments of robots than of humans. Results also demonstrated some differences in global impressions of humans and robots based on behavior valence and choice. Judges formed more generous impressions of the robot agent when its unpopular behavior was coerced rather than chosen, a tendency not displayed when forming impressions of the human agent. Implications of attributing robot behavior to disposition, or conflating robot actors with their actions, are addressed.

Keywords: fundamental attribution error, correspondence bias, social robot, human-robot interaction, computers are social actors, behaviorism

1 Introduction

The Fundamental Attribution Error [FAE] is the tendency for people to over-emphasize dispositional or personality-based explanations for others’ behavior while under-emphasizing situational explanations [Ross, 1977]. In other words, people sometimes demonstrate a cognitive bias by inferring that a person’s actions depend on what “kind” of person they are rather than on the social and environmental forces that influence the person. As such, an observer will likely attribute reasons for a behavior to internal characteristics and not external factors [Gilbert and Jones, 1986]. Individual behavior is heavily influenced and guided by situational and external factors. However, “because people are accustomed to seeing individuals as causal agents, viewing the actor and [their] actions as forming a single categorical unit also appears to be the simplest, most satisfying, and least effortful inferential strategy [Heider and Simmel, 1944; Heider, 1958; Jones, 1979]” [Forgas, 1998, p. 319].

Although this effect has been thoroughly examined with humans, we do not know whether the same correspondence bias will apply to social robots. When communicating with machines such as social robots, people must form impressions of the agents and judge their behavior. Compared to people, current robots are less agentic and autonomous, with behaviors driven by programming, design, and humans in the loop. Nonetheless, people do assign robots agency, intentionality, and blame [Sciutti et al., 2013; De Graaf and Malle, 2019; Banks, 2020]. The purpose of this experiment is to determine whether people commit the FAE in response to the behaviors of a social robot. The FAE is sometimes referred to as the correspondence bias [Gawronski, 2004], an issue we will return to in the discussion. Whereas the FAE assumes a general tendency to underestimate the power of the situation over human behavior, the correspondence bias refers more narrowly to the tendency to make disposition-congruent inferences from observed behavior. However, because much of the literature uses both FAE and correspondence bias, we will use the terminology employed by the studies cited in the following sections.

2 Fundamental Attribution Error

Research has demonstrated that the FAE may distort an observer’s judgment of an individual, especially through the overattribution of individual responsibility for large achievements or grave mistakes [Ross et al., 1977]. Individuals who commit the FAE assign too much personal responsibility for both positive and negative outcomes [Ross et al., 1977; Riggio and Garcia, 2009]. According to research on the FAE, individuals use two types of information when making attributions: dispositional and situational [Pak et al., 2020]. As such, the FAE “rests on an assumption of dualism: that there is a clear division between what is inside and outside the person” [Langdridge and Butt, 2004, p. 365].

Dispositional attributions pertain to perceived qualities of the individual, whereas situational attributions pertain to perceived characteristics of the environment and factors outside of the individual’s control. “Potential biases in the causal attribution process can come from the valence of the situational outcome [was the outcome positive or negative], the degree of informational ambiguity of the situation, and the degree of control an actor has over an outcome” [Pak et al., 2020, p. 422]. The FAE has been examined in relation to behavioral judgments. For example, when presented with an excerpt of a character’s bad day, students tended to attribute the cause to dispositional rather than situational factors [Riggio and Garcia, 2009]. However, students who were primed by watching a video about the power of social and environmental influences on individual behavior attributed the cause of the bad day more to situational factors. Therefore, broader construal may help attenuate the FAE.

The FAE does not appear to be universal across cultures but is especially prevalent in Western cultures [Norenzayan and Nisbett, 2000]. Research in social psychology has advanced several explanations for why individuals commit the FAE. The first explanation is that people are more likely to attribute causes or responsibility to an observed than an unobserved element. Because agents are more salient than their situations in many judgment tasks, the agent itself draws observers’ attributional focus [Taylor and Fiske, 1975; Robinson and McArthur, 1982]. The second explanation is that personal or dispositional attributions are more comforting causal inferences because they reinforce the just-world hypothesis, which holds that “people get what they deserve” or “what goes around comes around” [Walster, 1966]. However, this explanation better accounts for deliberative judgments than for the swift or automatic judgments often formed in response to individual behavior [Berry and Frederickson, 2015].

The third explanation is that humans may have evolved [and learned] to be hypersensitive in terms of agency detection. HADD, or the hypersensitive agency detection device, is the cognitive system theorized to be responsible for detecting intentional agency [Barrett, 2000]. People overestimate the presence of human agency and therefore demonstrate a bias in which situations and events are attributed to people or other human-like entities. Agency detectors are so sensitive that even movement is enough to trigger attributions of will and intention, as evidenced in a number of Theory of Mind [ToM] studies [Barrett, 2007].

2.1 Attributional Process in Human-Robot Interaction

People attribute mental states to others in order to understand and predict their behavior. There is evidence of similarity in how people interpret humans’ and robots’ actions in the sense that people implicitly process robots as goal-oriented agents [Sciutti et al., 2013], use the same “conceptual toolbox” to explain the behavior of human and robot agents [De Graaf and Malle, 2019], make implicit Theory of Mind [ToM] ascriptions for machine agents [Banks, 2020], and evaluate a social robot’s message behavior in terms of its underlying beliefs, desires, and intentions for communication [Edwards et al., 2020]. HRI scholars have argued that the physical presence of a robot, or embodied machine agent, can produce patterns of attributions similar to those occurring in human-human interaction [Ziemke et al., 2015; De Graaf and Malle, 2017; Pak et al., 2020]. Even when participants were provided with transparent information about how a robot makes decisions, they still attributed outcomes of behaviors to robot thinking [Wortham et al., 2017], which suggests the persistence of dispositional attributions even when situational information is provided [Pak et al., 2020]. In addition, people have been found to use folk-psychological theories similarly to judge human and robot behavior in terms of ascriptions of intentionality, controllability, and desirability and in the perceived plausibility of behavior explanations [Thellman et al., 2017]. Furthermore, there is evidence that human-linked stereotype activation [e.g., stereotypes of aging] influences causal attributions of robot behavior [Pak et al., 2020]. The results of such studies generally lend support to the Computers are Social Actors [CASA] paradigm, which posits that people tend to treat and respond to machine agents with social cues in the same ways they do other people [Reeves and Nass, 1996].

The Form Function Attribution Bias [FFAB] refers to cognitive shortcuts people take based on a robot’s morphology or appearance [Haring et al., 2018]. The FFAB leads people to make biased interpretations of a robot’s ability and function based on the robot’s physical form [Hegel et al., 2009] and the perceived age of the robot [Branyon and Pak, 2015]. Some research has demonstrated that attributions of action and mind increased as more human features were included in pictures of robot/avatar faces [Martini et al., 2016]. Interacting with robotic agents on a task reduced participants’ own sense of agency, much as working with other individuals does [Ciardo et al., 2020]; this effect was not observed with non-agentic mechanical devices. Other research suggests that agent-category cues help shape perceptions, which then influence behavioral outcomes [Banks et al., 2021]. In other words, there is a tendency to judge an action on the basis of the agent performing it. Although these findings do not speak directly to the applicability of the FAE to social robots, they do demonstrate that attributional patterns similar to those observed in human interaction may emerge when people interact with social machines.

As a result, it is important to understand how the FAE may shape perceptions of a social robot when the robot engages in popular or unpopular behavior. These findings will have implications for how humans understand the causes of social robots’ behavior and assign blame or credit for their activities, which is increasingly relevant in contexts including emergency and crisis response, healthcare, education, retail, and law. In short, how will people assign the cause of a robot’s behavior relative to how they do so for other humans? More specifically, to closely replicate the experimental research on the FAE in human interaction [Forgas, 1998], we focus on a situation in which a robot or human expresses the popular or unpopular position on a topic of social importance. This design falls within the attributed attitude paradigm of research investigating the correspondence bias [Jones and Harris, 1967]. Although application of the CASA paradigm would suggest that people demonstrate similar attributional processes for human and robot behavior, observed differences in people’s responses to each type of agent would indicate otherwise. The traditional procedure for carrying out research within the CASA framework entails (1) selecting a theory or phenomenon observed in human interaction, (2) adding humanlike cues to a robot, (3) substituting the robot for a human actor, and (4) determining whether the same social rule applies [Nass et al., 1994]. To also allow for the identification of more granular potential differences in how people respond to robots, the present study modifies and extends the procedure to include a human-to-human comparison group. We offer the following research questions:

RQ1: Will participants attribute the cause of an agent’s [social robot or human] behavior to dispositional or situational factors?

RQ2: How will the nature of an agent’s behavior [popular or unpopular] influence attributions and impressions?

3 Materials and Methods

3.1 Participants

The sample included 267 U.S. American adults recruited through Amazon’s Mechanical Turk and compensated US $2.00. Participants who (1) failed the audio test, (2) failed the speech-topic attention check, or (3) reported non-normative attitudes toward the topic [i.e., opposed legalization of medicinal marijuana] were excluded from analysis, leaving 231 participants. Their average age was 43.32 years [SD = 11.36, MD = 40, range = 24–71]. Slightly over half identified as male [51%, n = 118], followed by female [48%, n = 110], those who selected “prefer to not answer” [0.9%, n = 2], and gender fluid [0.4%, n = 1]. Predominantly, participants identified as White [79%, n = 183], followed by Black or African-American [7%, n = 16], Hispanic or Latino/a/x [5%, n = 12], Asian or Pacific Islander [5%, n = 12], bi- or multi-racial [3%, n = 7], and one person [0.4%] who selected “prefer to not answer.” Most had a Bachelor’s degree or higher [60%, n = 138].
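As a rough illustration of the screening described above, the following Python sketch applies the three exclusion criteria to a hypothetical response file; the file name, all column names, and the attitude cutoff are assumptions for illustration, not taken from the study materials.

```python
# Hypothetical sketch of the exclusion criteria described above; the file name,
# column names, and attitude cutoff are illustrative assumptions.
import pandas as pd

raw = pd.read_csv("mturk_responses.csv")  # hypothetical data file

eligible = raw[
    (raw["audio_check_passed"] == 1)              # (1) passed the audio test
    & (raw["topic_attention_check_passed"] == 1)  # (2) correctly reported the speech topic
    & (raw["own_attitude_legalization"] >= 4)     # (3) did not oppose legalizing medicinal marijuana (7-point item)
]

print(f"Retained {len(eligible)} of {len(raw)} participants")
print(eligible["age"].agg(["mean", "std", "median", "min", "max"]))
```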

3.2 Procedures

Procedures entailed a modified replication of Forgas’ [1998] experiments investigating the correspondence bias, which examined the degree to which people attributed a person’s message behavior to their “true attitudes” about a topic when that behavior was popular [normative, and therefore expected] or unpopular, and chosen or coerced. Additionally, Forgas manipulated the mood of participants as happy or sad to determine the influence of mood on attributional judgments. Participants were asked to read an essay forwarding either a popular or unpopular position on the topic of French nuclear testing in the South Pacific, which was framed as either the chosen stance of the author or an assigned/coerced stance. Then, participants were asked to consider whether the essay represented the true attitude of its writer, to indicate their degree of confidence in that attribution, and to give their impressions of the essay writer. In the present study, we replicated the basic design with four modifications: (1) manipulation of the agent as human or robot, (2) use of a more contemporary topic [the legalization of medical marijuana], (3) use of speeches rather than essays, and (4) measurement and statistical control of mood rather than manipulation of mood.

Upon securing Institutional Review Board approval and obtaining informed consent, we conducted a 2 [agent: human vs. robot] × 2 [behavior: popular vs. unpopular] × 2 [choice: chosen vs. coerced] between-subjects online video experiment, which was introduced to participants as a “social perception study.” After completing an audio check, participants were asked to rate their current affective/mood state. Next, participants were randomly assigned to view one of eight experimental conditions involving a 1-min video containing a persuasive appeal by a human or a robot, in which the agent advocated for or against legalizing medical marijuana [operationalizing popular vs. unpopular behavior], with the position stipulated as either freely chosen by or assigned to the speaker. As a manipulation and attention check, participants were asked to report the speaker’s stated position in the video before progressing to the rating tasks. Next, they answered a series of questions assessed along 7-point semantic differential scales to ascertain (1) inferences of the speaker’s “true attitude” toward legalizing medical marijuana, (2) confidence in their attributed attitude ratings, and (3) interpersonal impressions of the speaker. Finally, they were asked to report their own attitudes toward the legalization of medical marijuana, to offer any open-ended comments, and to provide demographic information.
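For concreteness, here is a minimal sketch of random assignment to the eight cells of the 2 × 2 × 2 design described above; the condition labels mirror the factors named in the text, but the assignment code itself is an assumption for illustration rather than the study’s actual implementation.

```python
# Minimal sketch of random assignment to the eight between-subjects conditions;
# the labels follow the factors described in the text, the mechanism is assumed.
import random
from itertools import product

AGENTS = ("human", "robot")
BEHAVIORS = ("popular", "unpopular")   # for vs. against legalizing medical marijuana
CHOICES = ("chosen", "coerced")

CONDITIONS = list(product(AGENTS, BEHAVIORS, CHOICES))  # 2 x 2 x 2 = 8 cells


def assign_condition() -> tuple:
    """Randomly assign a participant to one of the eight 1-min video conditions."""
    return random.choice(CONDITIONS)


agent, behavior, choice = assign_condition()
print(f"Condition: {agent} speaker, {behavior} position, {choice} stance")
```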

3.3 Mood Check

Prior to the experimental task, participant mood was self-assessed with two 7-point semantic differential items rating current mood as sad:happy and bad:good. Answers were highly related [r [228] = 0.92, p
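Continuing the illustrative data frame from the participants section, the sketch below shows one way the two mood items might be checked for consistency and combined into a single covariate; the column names and the decision to average the items are assumptions, not details reported in the text.

```python
# Hedged sketch: correlate the two 7-point mood items and, given their strong
# association, average them into a single mood covariate. Column names are
# illustrative; 'eligible' is the hypothetical data frame from the earlier sketch.
mood_items = eligible[["mood_sad_happy", "mood_bad_good"]]

r = mood_items["mood_sad_happy"].corr(mood_items["mood_bad_good"])  # Pearson r
print(f"Mood item correlation: r = {r:.2f}")  # reported as approximately 0.92 above

eligible = eligible.assign(mood_index=mood_items.mean(axis=1))  # composite covariate
```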
