SEEING HAPPY EMOTION IN FEARFUL AND ANGRY FACES: QUALITATIVE ANALYSIS OF FACIAL EXPRESSION RECOGNITION IN A BILATERAL AMYGDALA-DAMAGED PATIENT

Wataru Sato1, Yasutaka Kubota2, Takashi Okada2, Toshiya Murai2, Sakiko Yoshikawa1 and Akira Sengoku2

(1Department of Cognitive Psychology in Education, Graduate School of Education, Kyoto University, Kyoto, Japan; 2Department of Neuropsychiatry, Graduate School of Medicine, Kyoto University, Kyoto, Japan)

ABSTRACT

Neuropsychological studies have reported that patients with bilateral amygdala damage are impaired in recognizing facial expressions of fear. However, the specificity of this impairment remains unclear. To address this issue, we carried out two experiments on the recognition of facial expression in a patient with bilateral amygdala damage (HY). In Experiment 1, subjects matched the emotions of facial expressions with appropriate verbal labels, using standardized photographs of facial expressions illustrating six basic emotions. The performance of HY was compared with that of age-matched normal controls (n = 13) and brain-damaged controls (n = 9). HY was less able than normal controls to recognize facial expressions of fear. In addition, the error patterns HY exhibited for fearful and angry expressions were distinct from those of both control groups and suggested that she confused these emotions with happiness. In Experiment 2, subjects were presented with morphed facial expressions that blended happiness and fear, happiness and anger, or happiness and sadness, and were asked to categorize these expressions in a two-way forced-choice task. The performance of HY was compared with that of age-matched normal controls (n = 8). HY categorized morphed fearful and angry expressions containing some happy content as happy facial expressions more frequently than normal controls did. These findings support the idea that amygdala-damaged patients have impaired processing of facial expressions relating to certain negative emotions, particularly fear and anger. More specifically, amygdala-damaged patients seem to evaluate these negative facial expressions with a positive bias.

Key words: amygdala, facial expression, emotion, fear, anger, happiness, morphing

INTRODUCTION

Impairment in the recognition of fearful facial expressions in patients with amygdala lesions is one of the most important topics regarding the neural and cognitive processing of facial expressions. The initial reports on this issue were by Adolphs et al. (1994, 1995), who found that a patient with congenital bilateral amygdala damage had impaired recognition of facial expressions related to certain emotions, particularly fear. Several subsequent studies have confirmed this phenomenon and have additionally reported that patients with unilateral or acquired amygdala lesions are also impaired in recognizing fearful facial expressions (Young et al., 1995; Calder et al., 1996; Young et al., 1996; Broks et al., 1998; Adolphs et al., 1999a, 1999b). Based on these findings, it has been proposed that the amygdala is an indispensable neural substrate for recognizing facial expressions of fear (e.g., Adolphs et al., 1995). However, some unresolved questions remain. First, some studies have reported that recognition of facial expressions was intact in subjects with complete bilateral amygdala damage (Hamann et al., 1996; Hamann and Adolphs, 1999).
These studies reporting intact recognition in patients with bilateral amygdala damage used procedures identical to those of the studies reporting impaired recognition in such patients, and the discrepancies between them cannot be attributed to other factors, such as age, IQ, etiology, or the size of the damaged regions (Broks et al., 1998). Second, a recent neuropsychological study reported that brain-damaged patients without amygdala damage also had impaired recognition of fearful facial expressions (Rapcsak et al., 2000). That study pointed out that the conventional recognition task based on the theory of six basic emotions (Ekman and Davidson, 1994) is disproportionately difficult for certain negative expressions, particularly fear, and suggested that impaired recognition of fearful expressions in amygdala-damaged patients might not be specific to amygdala lesions but rather a consequence of generalized brain damage. Finally, some studies reported that amygdala-damaged patients have impaired recognition of facial emotion not only for fear but also for some other emotions (Adolphs et al., 1994, 1995; Calder et al., 1996; Adolphs et al., 1999a, 1999b). For example, Calder et al. (1996) reported that patients had impaired recognition of facial expressions of both fear and anger. Taken together, although some studies have demonstrated impaired recognition of fearful facial expressions in amygdala-damaged patients, the relationship between damage to the amygdala and recognition of facial expression remains controversial.

The current study explored this issue further by testing the recognition of emotional facial expressions in a patient with bilateral amygdala damage (HY). Two experiments were conducted. In Experiment 1, we attempted to verify the findings of earlier studies using the photograph-label matching paradigm adopted in previous work (Young et al., 1995; Calder et al., 1996; Young et al., 1996; Broks et al., 1998; Rapcsak et al., 2000). The performance of the amygdala-damaged subject was compared with that of age-matched normal controls and brain-damaged controls. The inclusion of the brain-damaged control group provides information on the influence of brain damage that does not involve the amygdala. Error analyses were conducted together with an accuracy analysis of facial expression recognition. Although previous studies using photograph-label matching paradigms have focused mainly on accuracy, error analyses reveal which emotions a subject confuses with a given facial expression and are thus indicative of the qualitative pattern of facial expression recognition. In Experiment 2, we took a further step toward uncovering the nature of HY's impairment of emotional expression recognition using a set of morphed facial expressions.

EXPERIMENT 1

Materials and Methods

Subjects

The subjects were one patient with bilateral amygdala damage (HY), 9 brain-damaged controls with no damage to the amygdala, and 13 normal controls. All subjects gave informed consent to participate in this study, which was conducted in accordance with institutional ethical provisions and the Declaration of Helsinki.

HY: HY is a 37-year-old right-handed woman who suffered from herpes simplex encephalitis at the age of 27 years. MRI showed focal abnormal-signal regions in the bilateral amygdalae and parts of their surrounding areas, including the hippocampi and entorhinal cortices.
Except for these regions, no other abnormal signal intensity was evident (Figure 1). HY is not aphasic; her spontaneous speech was grammatical and appropriate. Her everyday memory was intact. She showed a normal performance IQ and a slightly low verbal IQ (Wechsler Adult Intelligence Scale-Revised (WAIS-R): performance IQ 103, verbal IQ 76). To assess HY's basic face-processing abilities, neuropsychological tests were conducted. Basic processing of unfamiliar faces was assessed using the Benton Facial Recognition Test (Benton and Van Allen, 1983), on which HY showed superior performance (25/27, short form). Processing of familiar faces was assessed using the well-known-face naming and pointing subtests of the Visual Perception Test for Agnosia (Japanese Society of Aphasiology, 1997); HY performed these tasks perfectly (naming from faces: 8/8; face selection from names: 8/8). Gender discrimination from faces was assessed using another subtest of the Visual Perception Test for Agnosia, which HY performed perfectly and without hesitation (4/4). In summary, HY showed normal ability on these basic face-processing tasks.

Brain-damaged Controls: Nine right-handed brain-damaged controls (seven females and two males), aged 32 to 58 years (mean: 48.2 years; SD: 9.2), were studied. All had focal brain damage, but none had damage to the amygdala. Lesion sites were the left temporal cortex (n = 4), left temporal and parietal cortex (n = 1), left putamen (n = 2), left temporal cortex and thalamus (n = 1), and bilateral hippocampi (n = 1). Because previous studies had shown that lesions in the right somatosensory cortices (Adolphs et al., 1996, 2000), right ventral occipital cortices (Adolphs et al., 1996), or orbitofrontal cortices (Hornak et al., 1996) impair recognition of fearful facial expressions, patients with lesions in these areas were excluded from the brain-damaged control group. All subjects were in a stable neurologic condition at the time of the experiment.

Fig. 1 – T2-weighted brain MRI images of HY. Representative coronal (left) and horizontal (right) slices at the level of the amygdala are shown.

Normal Controls: Thirteen volunteers (five females and eight males), aged 27 to 46 years (mean: 35.2 years; SD: 4.8), whose ages did not differ significantly from HY's (p > .1, t test) and who had no history of neurologic or psychiatric illness, served as normal controls. In addition, the performance of 13 further normal subjects (seven females and six males), aged 20 to 26 years (mean: 23.5; SD: 2.0), was also tested.

Stimuli

A total of 48 photographs of facial expressions depicting the six basic emotions (happiness, surprise, sadness, anger, disgust, and fear) were used as stimuli. Half of the pictures showed Caucasian models and the other half showed Japanese models; the stimuli were chosen from the standard facial image sets of Ekman and Friesen (1976) and Matsumoto and Ekman (1988).

Procedure

The events were controlled using SuperLab software version 2.0 (Cedrus) implemented on a computer (PC-98NX, NEC) running the Windows operating system. A label-matching paradigm, as used in previous studies (Young et al., 1995; Calder et al., 1996; Young et al., 1996; Broks et al., 1998; Rapcsak et al., 2000), assessed the recognition of facial expressions. Pictures of people expressing various emotions were presented one by one, in random order, on a CRT monitor (GDM-F400, Sony), and the verbal labels of the six basic emotions were presented alongside each photograph. Subjects were asked to select the label that best described the emotion shown in each photograph and were instructed to consider all six alternatives carefully before responding. There were no time limits, and no feedback about performance was given during the test. Each emotional expression was presented 8 times, making a total of 48 trials per subject. Before testing, to confirm adequate understanding of the emotional labels, participants were asked to provide examples of situations that would elicit each of the emotions. All subjects gave appropriate examples without difficulty; for example, for the verbal label of fear, HY responded that being in an enclosed space, such as an MRI scanner, might elicit that emotion. After this interview, subjects were given 5 training trials to familiarize themselves with the procedure.
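To make the task structure concrete, the following minimal Python sketch mimics the trial loop of this label-matching paradigm. It is an illustration only, not the actual SuperLab script: the stimulus names are hypothetical, image display is replaced by console output, and, as in the experiment, responses are untimed and no feedback is given.

```python
import random

EMOTIONS = ["happiness", "surprise", "sadness", "anger", "disgust", "fear"]

def run_label_matching(stimuli):
    """Run one session of the six-way label-matching task.

    stimuli: list of (image_name, true_emotion) pairs;
    8 exemplars per emotion gives the 48 trials used here.
    """
    trials = list(stimuli)
    random.shuffle(trials)  # each photograph is shown once, in random order
    responses = []
    for image_name, true_emotion in trials:
        print(f"\nStimulus: {image_name}")  # stand-in for displaying the photo
        for i, label in enumerate(EMOTIONS, start=1):
            print(f"  {i}. {label}")
        # No time limit; the subject picks the best-fitting label.
        choice = int(input("Which label best describes the emotion? (1-6): "))
        responses.append((true_emotion, EMOTIONS[choice - 1]))
        # No feedback is given during the test.
    return responses
```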
Results

Analysis of Accuracy

A preliminary analysis of the data from the normal subjects (the 13 age-matched controls and the additional 13 subjects) was performed. An analysis of covariance (ANCOVA), with subject gender (female, male) as a between-subjects factor, stimulus type (Caucasian, Japanese) and emotional category (happiness, surprise, sadness, anger, disgust, and fear) as within-subjects factors, and subject age as a covariate, was conducted on the correct emotion recognition scores. The results revealed a significant main effect of emotion [F (5, 130) = 6.65, p < .001]; no other effects were significant (all p > .1). Based on this analysis and on a preliminary analysis of the brain-damaged controls, subject gender and age and stimulus type, which had little effect on performance in this task, were not considered in the following analyses.

Figure 2 shows the accuracy of recognition (mean percent correct for all groups, with SD for brain-damaged controls and normal controls). Comparing the number of correct responses by HY with those of the normal controls revealed an evident reduction in HY's performance for fearful and surprised expressions, at about 1 SD below the normal controls. For the other emotions, there were no significant differences between HY's scores and those of the normal controls (all p > .1). Visual inspection of the performance profile across emotional categories showed that HY was less accurate for fearful expressions than for angry and disgusted expressions, whereas the normal controls showed no accuracy differences between fearful expressions and the other negative-emotion stimuli. Comparing HY's performance with that of the brain-damaged controls revealed a relative superiority of HY for fearful expressions, at about 1 SD above the brain-damaged controls. For the other emotions, there were no obvious differences between HY and the brain-damaged controls, and visual inspection across emotional categories suggested that HY's pattern of performance was roughly comparable with that of the brain-damaged controls.

Fig. 2 – Mean percent correct (and standard deviation) of facial emotion recognition in normal controls (NORMAL), in brain-damaged controls (BRAIN DAMAGE), and in an amygdala-damaged patient (AMYG DAMAGE). HA = happiness; SA = sadness; SU = surprise; AN = anger; DI = disgust; FE = fear.
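The "about 1 SD" comparisons above amount to expressing the single patient's score as a deviation from the control mean in control-SD units. A minimal sketch, using made-up accuracy values rather than the study's data:

```python
import statistics

def deviation_in_control_sds(patient_score, control_scores):
    """Distance of a single patient's score from the control mean, in control SDs."""
    mean = statistics.mean(control_scores)
    sd = statistics.stdev(control_scores)  # sample SD of the control group
    return (patient_score - mean) / sd

# Hypothetical percent-correct scores for fearful faces (not the study's data).
controls = [87.5, 75.0, 87.5, 62.5, 75.0, 87.5, 75.0, 100.0,
            75.0, 62.5, 87.5, 75.0, 87.5]
print(deviation_in_control_sds(62.5, controls))  # negative: patient below control mean
```

A more formal approach to single-case comparisons would be a modified t test of the kind proposed by Crawford and Howell (1998), which takes the small control sample size into account.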
In addition, we compared the performance of the normal controls with that of the brain-damaged controls, using a two-way analysis of variance (ANOVA) with subject group (normal controls, brain-damaged controls) as a between-subjects factor and emotional category (happiness, surprise, sadness, anger, disgust, and fear) as a within-subjects factor. The results revealed significant main effects of subject group [F (1, 20) = 17.15, p < .001] and emotional category [F (5, 100) = 28.44, p < .001], and a significant subject group × emotional category interaction [F (2, 34) = 2.47, p < .05]. The main effect of subject group indicated that the brain-damaged controls performed worse than the normal controls. Following up the interaction, tests of the simple main effect of subject group revealed significant group differences for the emotional categories of surprise, anger, disgust, and fear [F (1, 120) = 4.48, p < .05; F (1, 120) = 9.22, p < .005; F (1, 120) = 8.03, p < .01; F (1, 120) = 21.62, p < .001], indicating a relative reduction in the performance of the brain-damaged controls.

Analysis of Errors

Error patterns were analyzed to explore the qualitative aspects of performance. Figure 3 shows the error responses for each emotional label for each type of facial image (mean percent of error responses for all groups, with SD for brain-damaged controls and normal controls). Comparing HY's performance with that of the controls indicated that she mistook fearful and angry facial expressions for happy facial expressions, whereas both control groups, when responding to fearful and angry expressions, showed some confusion with other negative emotions or surprise. HY also misrecognized surprised faces as happy facial expressions; this pattern was not found in the normal controls, although some brain-damaged controls showed it.

To compare the normal controls with the brain-damaged controls, a one-way multivariate analysis of variance (MANOVA), with subject group (normal controls, brain-damaged controls) as a between-subjects factor, was conducted on the error counts for each expression. The significance of F values was determined by the Wilks' lambda criterion, and follow-up tests were conducted using univariate ANOVAs. For surprised facial expressions, the main effect of group was significant [F (5, 16) = 3.46, p < .05], and follow-up tests revealed that misinterpretation as happiness was significantly more frequent in the brain-damaged controls [F (1, 20) = 6.29, p < .05]. For sad facial expressions, there was no significant group difference (p > .1). For angry facial expressions, the main effect of group was significant [F (4, 17) = 5.97, p < .005], and follow-up tests revealed that misinterpretations as surprise and as sadness were more frequent in the brain-damaged controls [F (1, 20) = 9.21, p < .01; F (1, 20) = 9.53, p < .01]. For disgusted facial expressions, there was no significant group difference (p > .1). For fearful facial expressions, the main effect of group was significant [F (4, 17) = 7.46, p < .005], and follow-up tests showed that the sad label was selected significantly more often by the brain-damaged controls [F (1, 20) = 7.96, p < .05].
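As an illustration of this analysis, the sketch below runs a one-way MANOVA with statsmodels, using the five possible incorrect labels for one expression as dependent variables and subject group as the factor; mv_test() reports Wilks' lambda among its statistics. All values are randomly generated placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)

# Simulated per-subject error counts (out of 8 trials) for the fearful
# expression: how often each of the five incorrect labels was chosen.
normal = rng.integers(0, 3, size=(13, 5))         # 13 normal controls
brain_damaged = rng.integers(0, 4, size=(9, 5))   # 9 brain-damaged controls

df = pd.DataFrame(np.vstack([normal, brain_damaged]),
                  columns=["happy", "surprise", "sad", "angry", "disgust"])
df["group"] = ["normal"] * 13 + ["brain_damaged"] * 9

# One-way MANOVA: group as the factor, the five error labels as the
# dependent variables; Wilks' lambda appears in the mv_test() output.
fit = MANOVA.from_formula("happy + surprise + sad + angry + disgust ~ group",
                          data=df)
print(fit.mv_test())
```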
Fig. 3 – Mean percent error (and standard deviation) of facial emotion recognition in normal controls (NORMAL), in brain-damaged controls (BRAIN DAMAGE), and in an amygdala-damaged patient (AMYG DAMAGE). HA = happiness; SA = sadness; SU = surprise; AN = anger; DI = disgust; FE = fear.

To condense the information and visualize the configuration of subjects in two dimensions, a principal component analysis (PCA) on the correlation matrix of the error responses was conducted for each emotional expression (Figure 4). This procedure is known to reveal multivariate outliers readily. The first two components accounted for 100.00, 67.40, 69.69, 56.61, 58.84, and 54.29% of the total variance for the facial expressions of happiness, surprise, sadness, anger, disgust, and fear, respectively. In the resulting configurations, HY's position for the fearful and angry expressions stood out clearly in the direction of the happiness-category factor loading. For the remaining facial expressions, her position was not conspicuous.

Fig. 4 – Scatter plots of the factor scores for each subject, with the factor loadings for each emotion category, from the principal component analyses. NORMAL = normal controls; BRAIN DAMAGE = brain-damaged controls; AMYG DAMAGE = amygdala-damaged patient; AN = anger; DI = disgust; FE = fear; HA = happiness; SA = sadness; SU = surprise.
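The following sketch reproduces the logic of this outlier visualization with scikit-learn: standardizing the error-count columns makes the PCA operate on the correlation matrix, and inspecting the first two component scores shows whether any subject lies far from the rest. The data are placeholders, with the last row given an extreme happiness-error pattern to mimic an outlier like HY.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Rows = subjects (22 controls + 1 patient), columns = error counts for the
# five incorrect labels of one expression; illustrative values only.
errors = rng.integers(0, 3, size=(23, 5)).astype(float)
errors[-1] = [6.0, 0.0, 0.0, 0.0, 0.0]  # extreme "happiness" errors (patient)

# Standardizing each column first means the PCA is computed on the
# correlation matrix rather than the covariance matrix.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(errors))

print(scores[:-1].round(2))  # controls cluster together
print(scores[-1].round(2))   # the patient's score stands apart
```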
Discussion

The analysis of recognition accuracy showed that, although HY performed worse than the normal controls in recognizing fearful expressions, her performance for fearful expressions was better than that of the brain-damaged controls. Rapcsak et al. (2000) reported that patients with focal brain damage sparing the amygdala and amygdala-damaged patients were equally impaired, relative to normal subjects, in recognizing fearful facial expressions, and asserted that the apparent fear-recognition deficits of amygdala-damaged patients could be explained by task-specific difficulties. Our accuracy analysis was not inconsistent with this conclusion.

However, the analyses of error responses provided information that supplemented the accuracy results. HY showed a specific error pattern: she misinterpreted fearful and angry expressions as happy facial expressions. This error pattern appeared in neither the normal nor the brain-damaged controls. Conversely, HY did not misrecognize fearful or angry expressions as other negative emotions such as disgust, whereas normal and brain-damaged controls made a number of such errors. The PCA results showed that HY was clearly an outlier relative to both control groups for fearful and angry facial expressions. Taken together, these results indicate a distinctive profile of facial expression recognition in HY.

The following debriefing comment is of interest in understanding the nature of HY's unique error pattern: HY mentioned that all the models in the photographs looked funny and peaceful. In line with this comment, Damasio (1999) and Adolphs et al. (1995) reported that patients with bilateral amygdala damage were less cautious with other people, an observation that was later confirmed empirically (Adolphs et al., 1998). Adolphs and Tranel (1999) demonstrated that amygdala-damaged patients show this abnormal positive bias not only for humans, but also for various other entities, such as nonsense line drawings.

In summary, (i) HY was less able than normal controls to recognize facial expressions of fear, and her error pattern differed from those of both the normal and brain-damaged control groups, suggesting a positive bias in emotional processing; and (ii) HY showed a similar error pattern for anger recognition, although her recognition accuracy for this emotion did not clearly differ from that of normal controls.

EXPERIMENT 2

Based on the results of Experiment 1 and previously reported findings, we hypothesized that HY would be prone to evaluating fearful and angry expressions as happy ones. To test this hypothesis, we conducted an experiment using a set of morphed facial expressions, each blending two different expressions. Morphed facial expressions enable a more sensitive assessment of facial expression recognition than tests using prototypical expressions (Calder et al., 1996). In line with our hypothesis, we generated blends of happy and fearful expressions and of happy and angry expressions. As a reference, we also blended happiness and sadness, for which Experiment 1 detected no impairment in the amygdala-damaged patient. We presented these facial images to HY and 8 age-matched normal controls, who were asked to categorize each expression in a two-alternative forced-choice test (e.g., happiness or fear). We predicted that, for fearful and angry photographs blended with some happy content, HY would be more likely than the normal controls to judge the images as happy facial expressions.

Materials and Methods

Subjects

We studied HY and eight age-matched normal controls. The controls were four females and four males, aged 28 to 46 years (mean: 36.9 years; SD: 7.0), whose ages did not differ significantly from HY's (p > .1, t test) and who had no history of neurologic or psychiatric illness.

Stimuli

The raw materials were photographs of the faces of four individuals, chosen from the aforementioned Caucasian standard set (Ekman and Friesen, 1976), depicting happy, fearful, angry, and sad facial expressions. Continua of emotional facial expressions were created from these photographs: between happiness and each of the other emotions (fear, anger, or sadness), nine intermediate images in 10% steps were generated using computer-morphing techniques (Mukaida et al., 2000), implemented on a computer (Endeavor Pro-400L, Epson Direct) running the Linux operating system. Figure 5 shows examples of the stimulus sequences. For the happiness-fear continuum, for example, the morphed faces were generated by blending the two expressions in proportions of 100:0, 90:10, 80:20, and so on (referred to as 0, 10, and 20% fear expressions). For each of the happiness-fear, happiness-anger, and happiness-sadness continua, 44 stimuli (four models, eleven stages) were generated, making a total of 132 stimuli.
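As a rough illustration of how such a continuum can be built, the sketch below generates intermediate images by pixel-wise cross-dissolving in 10% steps. This is a simplification: the feature-based morphing actually used (Mukaida et al., 2000) also warps corresponding facial landmarks between the two photographs, whereas this sketch only interpolates intensities. The file names are hypothetical.

```python
import numpy as np
from PIL import Image

def blend_continuum(path_a, path_b, steps=11):
    """Cross-dissolve two expression photos in 10% steps (100:0 ... 0:100)."""
    a = np.asarray(Image.open(path_a).convert("L"), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=float)
    frames = []
    for i in range(steps):
        t = i / (steps - 1)              # 0.0, 0.1, ..., 1.0
        mix = (1.0 - t) * a + t * b      # e.g., t = 0.1 -> 90% happy, 10% fear
        frames.append(Image.fromarray(mix.astype(np.uint8)))
    return frames

# Hypothetical file names for one model's happy and fearful photographs.
continuum = blend_continuum("model1_happy.png", "model1_fear.png")
```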
Procedure

The events were controlled using SuperLab software version 2.0 (Cedrus) implemented on a laptop computer (Inspiron 8000, Dell) running the Windows operating system. Two-way categorization of the set of morphed faces was conducted. The pictures of facial expressions were presented on a monitor one at a time. Before each trial, subjects were instructed to select either happiness or the other emotion (fear, anger, or sadness), whichever they considered best described the photograph presented. Each continuum was divided into two blocks of 22 stimuli each, and the stimuli within each block were presented in random order. The order of blocks was randomized once and then fixed for all subjects. Each stimulus was presented singly, making a total of 132 trials per subject. There were no time limits, and no feedback about performance was given during the test. To avoid fatigue and drowsiness, subjects took a short rest after each block. Subjects were given a few training trials to familiarize themselves with the procedure.

Results

Figure 6 shows the total number of happiness selections (means, with SD for controls). HY's performance was compared with that of the normal controls using two-way ANOVAs with subject group (HY, controls) as a between-subjects factor and mixture ratio (0-100%) as a within-subjects factor. For the happiness-fear sequence, the results revealed significant main effects of subject group [F (1, 7) = 11.41, p < .05] and mixture ratio [F (10, 70) = 19.33, p < .001] and the