Sunday, December 21, 2008

ARTICLE UPDATE - Decoding face information in time, frequency and space from direct intracranial recordings of the human brain.

Tsuchiya N, Kawasaki H, Oya H, Howard MA 3rd, Adolphs R.

PLoS One, in press

Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex, including the fusiform gyrus, and of changeable aspects of faces (e.g., emotion) in lateral temporal cortex, including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrograms of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than in lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than in lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than in lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere in general showed superior decoding to the left. Taken together, our results challenge the dominant model of independent representation of invariant and changeable aspects of faces: information about both attributes was better decoded from a single region in the middle fusiform gyrus.
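The decoding logic described above — training a classifier on single-trial band-power features and scoring held-out trials — can be sketched in a few lines. This is only a minimal illustration of the general approach, not the authors' actual pipeline; the data, class shift, and nearest-centroid classifier are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial band-power features (trials x contacts x frequency bands).
# In the study, features came from ECoG power spectrograms; here they are random
# draws with a class-dependent shift in the upper bands so decoding beats chance.
n_trials, n_contacts, n_bands = 80, 10, 5
labels = np.repeat([0, 1], n_trials // 2)     # 0 = face, 1 = geometric pattern
X = rng.normal(size=(n_trials, n_contacts, n_bands))
X[labels == 1, :, 3:] += 1.0                  # signal in the "high-gamma" bands
X = X.reshape(n_trials, -1)                   # flatten to trials x features

def nearest_centroid_cv(X, y, n_folds=5):
    """Cross-validated decoding accuracy with a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for test in folds:
        train = np.setdiff1d(idx, test)
        c0 = X[train][y[train] == 0].mean(axis=0)   # class centroids from training folds
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)                # assign each test trial to nearer centroid
        correct += (pred == y[test]).sum()
    return correct / len(y)

acc = nearest_centroid_cv(X, labels)
print(f"decoding accuracy: {acc:.2f}")        # chance level is 0.5
```

A searchlight variant would simply rerun this over small spatial neighborhoods of contacts and map accuracy back onto the cortical surface.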

ARTICLE UPDATE - EEG-MEG evidence for early differential repetition effects for fearful, happy and neutral faces.

Morel S, Ponz A, Mercier M, Vuilleumier P, George N.

Brain Research, in press

To determine how emotional information modulates subsequent traces for repeated stimuli, we combined simultaneous electro-encephalography (EEG) and magneto-encephalography (MEG) measures during long-lag incidental repetition of fearful, happy, and neutral faces. Repetition effects were modulated by facial expression in three different time windows, starting as early as 40-50 ms in both EEG and MEG, then arising at the time of the N170/M170, and finally between 280-320 ms in MEG only. The very early repetition effect, observed at 40-50 ms over occipito-temporo-parietal regions, showed a different MEG topography according to the facial expression. This differential response to fearful, happy and neutral faces suggests the existence of very early discriminative visual processing of expressive faces, possibly based on the low-level physical features typical of different emotions. The N170 and M170 face-selective components both showed repetition enhancement selective to neutral faces, with greater amplitude for emotional than neutral faces on the first but not the second presentation. These differential repetition effects may reflect valence acquisition for the neutral faces due to repetition, and suggest a combined influence of emotion- and experience-related factors on the early stage of face encoding. Finally, later repetition effects consisted of an enhanced M300 (MEG) between 280 and 320 ms for fearful relative to happy and neutral faces, which occurred on the first presentation but levelled out on the second. This effect may correspond to the higher arousing value of fearful stimuli, which might habituate with repetition. Our results reveal that multiple stages of face processing are affected by the repetition of emotional information.

ARTICLE UPDATE - Dissociable neural effects of stimulus valence and preceding context during the inhibition of responses to emotional faces.

Schulz KP, Clerkin SM, Halperin JM, Newcorn JH, Tang CY, Fan J.

Human Brain Mapping, in press

Socially appropriate behavior requires the concurrent inhibition of actions that are inappropriate in the context. This self-regulatory function requires an interaction of inhibitory and emotional processes that recruits brain regions beyond those engaged by either process alone. In this study, we isolated brain activity associated with response inhibition and emotional processing in 24 healthy adults using event-related functional magnetic resonance imaging (fMRI) and a go/no-go task that independently manipulated the context preceding no-go trials (i.e., the number of preceding go trials) and the valence (i.e., happy, sad, and neutral) of the face stimuli used as trial cues. Parallel quadratic trends were seen in correct inhibitions on no-go trials preceded by increasing numbers of go trials and in the associated activation for correct no-go trials in inferior frontal gyrus pars opercularis, pars triangularis, and pars orbitalis, temporoparietal junction, superior parietal lobule, and temporal sensory association cortices. Conversely, the comparison of happy versus neutral faces and sad versus neutral faces revealed valence-dependent activation in the amygdala, anterior insula cortex, and posterior midcingulate cortex. Further, an interaction between inhibition and emotion was seen in valence-dependent variations in the quadratic trend in no-go activation in the right inferior frontal gyrus and left posterior insula cortex. These results suggest that the inhibition of responses to emotional cues involves the interaction of partly dissociable limbic and frontoparietal networks that encode emotional cues and use these cues to exert inhibitory control over the motor, attention, and sensory functions needed to perform the task, respectively.
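The "quadratic trend" reported above is just the second-order component of how a measure changes across increasing numbers of preceding go trials, and it can be estimated with an ordinary polynomial fit. The cell means below are invented for illustration; only the shape of the computation reflects the study.

```python
import numpy as np

# Hypothetical cell means: proportion of correct no-go inhibitions as a function
# of the number of preceding go trials (values invented for illustration).
preceding_go = np.array([1, 3, 5, 7])
p_correct = np.array([0.95, 0.88, 0.84, 0.86])   # falls, then recovers: a U-like shape

# Fit a second-order polynomial; the sign and size of the quadratic coefficient
# index the curvature of the trend (an ANOVA would test the matching contrast).
coeffs = np.polyfit(preceding_go, p_correct, deg=2)
quadratic, linear, intercept = coeffs
print(f"quadratic coefficient: {quadratic:+.4f}")
```

The same fit applied to per-condition activation estimates gives the "parallel quadratic trends" in accuracy and BOLD signal that the abstract describes.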

Saturday, December 06, 2008

ARTICLE UPDATE - Working memory capacity and the self-regulation of emotional expression and experience.

Schmeichel BJ, Volokhov RN, Demaree HA.

Journal of Personality and Social Psychology, 95, 1526-1540

This research examined the relationship between individual differences in working memory capacity and the self-regulation of emotional expression and emotional experience. Four studies revealed that people higher in working memory capacity suppressed expressions of negative emotion (Study 1) and positive emotion (Study 2) better than did people lower in working memory capacity. Furthermore, compared to people lower in working memory capacity, people higher in capacity more capably appraised emotional stimuli in an unemotional manner and thereby experienced (Studies 3 and 4) and expressed (Study 4) less emotion in response to those stimuli. These findings indicate that cognitive ability contributes to the control of emotional responding.

ARTICLE UPDATE - Emotions in Go/NoGo conflicts.

Schacht A, Nigbur R, Sommer W.

Psychological Research, in press

On the basis of current emotion theories, and of functional and neurophysiological ties between the processing of conflicts and errors on the one hand, and between errors and emotions on the other, we predicted that conflicts between prepotent Go responses and occasional NoGo trials in the Go/NoGo task would induce emotions. Skin conductance responses (SCRs), corrugator muscle activity, and startle blink responses were measured in three experiments requiring speeded Go responses intermixed with NoGo trials of different relative probability, and in a choice reaction experiment serving as a control. NoGo trials affected several of these emotion-sensitive indicators: SCRs and startle blinks were reduced, whereas corrugator activity was prolonged, compared with Go trials. From this pattern of findings we suggest that NoGo conflicts are not aversive. Instead, they appear to be appraised as obstructive to the response goal and as less action-relevant than Go trials.

ARTICLE UPDATE - Visual Awareness, Emotion, and Gamma Band Synchronization.

Luo Q, Mitchell D, Cheng X, Mondillo K, McCaffrey D, Holroyd T, Carver F, Coppola R, Blair J.

Cerebral Cortex, in press

What makes us become aware? A popular hypothesis is that if cortical neurons fire in synchrony at a certain frequency band (gamma), we become aware of what they are representing. We tested this hypothesis using brain-imaging techniques with good spatiotemporal resolution and frequency-specific information. Specifically, we examined the degree to which increases in event-related synchronization (ERS) in the gamma band were associated with awareness of a stimulus (its detectability) and/or the emotional content of the stimulus. We observed increases in gamma band ERS within prefrontal-anterior cingulate, visual, parietal, posterior cingulate, and superior temporal cortices to stimuli available to conscious awareness. However, we also observed increases in gamma band ERS within the amygdala, visual, prefrontal, parietal, and posterior cingulate cortices to emotional relative to neutral stimuli, irrespective of their availability to conscious access. This suggests that increased gamma band ERS is related to, but not sufficient for, consciousness.
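Event-related synchronization of the kind measured here is conventionally expressed as the percent change in band power after stimulus onset relative to a pre-stimulus baseline. The sketch below computes that quantity for a simulated channel with an induced 40 Hz burst; the sampling rate, window lengths, and signal are all hypothetical, and real MEG pipelines would average over trials and use proper time-frequency estimators.

```python
import numpy as np

fs = 1000                                   # sampling rate (Hz), hypothetical
t = np.arange(-0.5, 1.0, 1 / fs)            # 500 ms baseline, 1000 ms post-stimulus
rng = np.random.default_rng(1)

# Simulated single channel: broadband noise plus a 40 Hz burst after stimulus onset.
signal = rng.normal(scale=1.0, size=t.size)
signal[t >= 0.2] += 2.0 * np.sin(2 * np.pi * 40 * t[t >= 0.2])

def band_power(x, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band from the FFT of one signal segment."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

baseline = band_power(signal[t < 0], fs, 30, 50)
active = band_power(signal[t >= 0], fs, 30, 50)

# ERS: percent power change relative to the pre-stimulus baseline
# (positive = synchronization, negative = desynchronization).
ers = 100 * (active - baseline) / baseline
print(f"gamma-band ERS: {ers:.1f}%")
```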

ARTICLE UPDATE - Attentional selectivity for emotional faces: Evidence from human electrophysiology.

Holmes A, Bradley BP, Kragh Nielsen M, Mogg K.

Psychophysiology, in press

This study investigated the temporal course of attentional biases for threat-related (angry) and positive (happy) facial expressions. Electrophysiological (event-related potential) and behavioral (reaction time [RT]) data were recorded while participants viewed pairs of faces (e.g., angry face paired with neutral face) shown for 500 ms and followed by a probe. Behavioral results indicated that RTs were faster to probes replacing emotional versus neutral faces, consistent with an attentional bias for emotional information. Electrophysiological results revealed that attentional orienting to threatening faces emerged earlier (early N2pc time window; 180-250 ms) than orienting to positive faces (after 250 ms), and that attention was sustained toward emotional faces during the 250-500-ms time window (late N2pc and SPCN components). These findings are consistent with models of attention and emotion that posit rapid attentional prioritization of threat.
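The behavioral effect in this dot-probe design is usually summarized as a difference score: RT to probes appearing where the neutral face was, minus RT to probes appearing where the emotional face was, with positive values indicating attention drawn to the emotional face. The RT values below are invented for illustration.

```python
import numpy as np

# Hypothetical single-subject probe RTs (ms), a handful of trials per condition.
rt_probe_at_emotional = np.array([470.0, 485.0, 492.0, 478.0, 465.0])
rt_probe_at_neutral = np.array([505.0, 498.0, 512.0, 496.0, 489.0])

# Attentional bias score: positive values mean faster responses to probes at the
# emotional face's location, i.e., attention was oriented toward that face.
bias = rt_probe_at_neutral.mean() - rt_probe_at_emotional.mean()
print(f"attentional bias: {bias:.1f} ms")    # prints "attentional bias: 22.0 ms"
```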