Visual working memory for faces and facial expressions as a useful “tool” for understanding social and affective cognition

Introduction

Faces are processed in a unique fashion starting from the initial perceptual stages (i.e., encoding). The domain-specific approach holds that face processing is carried out in specialized modules (Kanwisher and Yovel, 2006). In contrast, the domain-general approach posits common mechanisms that operate on both facial and non-facial stimuli; from this perspective, the main factor leading to different processing for faces compared to non-facial stimuli is our substantial visual expertise with the former (Gauthier et al., 2000). This debate aside, faces seem to be characterized by distinctive processing from early stages onward, supported by specific brain areas (Haxby et al., 2000, 2002), and this may, at least in part, explain how faces are represented in visual working memory (VWM), including when compared to non-facial stimuli.

VWM is a core cognitive system with limited storage capacity in which visual information is temporarily stored and manipulated for further processing (Luck, 2008; Liesefeld and Müller, 2019); in this sense, it can be considered a “form of mental workspace” (Fukuda et al., 2010).

One important dispute regards how VWM storage is organized in relation to memory item features (e.g., semantic category, visual complexity, and expertise). When dealing with visually complex items (such as Chinese characters, polygons, and faces), one particular class of models seems relevant. Flexible resource models (as opposed to discrete resolution models; see Luck et al., 1997; Vogel et al., 2001) propose that a limited pool of memory resources can be allocated in a continuous fashion. Each memory representation contains some noise, and allocating a larger amount of memory resources reduces that noise and increases item resolution. Capacity limits arise because more complex items require more resources than simpler items (Alvarez and Cavanagh, 2004; Ma et al., 2014; see also Pratte et al., 2017, for a variant of discrete resolution models that considers systematic variation in precision across stimuli; Swan and Wyble, 2014, for a hybrid model; van den Berg et al., 2012). In contrast, discrete resolution models (Luck et al., 1997; Vogel et al., 2001) suggest a fixed-slot organization of VWM in which each memory item is represented within a slot regardless of its complexity. Both approaches consider VWM to be capacity limited (3–4 elements on average); however, they treat the concept of complexity differently. Within flexible resource models, the slope of the visual search function (i.e., informational load; Alvarez and Cavanagh, 2004) has been proposed as a quantification of visual complexity. In fact, faces are associated with the slowest search rate (i.e., the highest informational load) and the lowest VWM capacity compared to other stimuli (Eng et al., 2005; Jackson and Raymond, 2008; but see Scolari et al., 2008).
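As a concrete illustration of how informational load is quantified, the sketch below estimates the visual search slope in milliseconds per item; the set sizes and reaction times are hypothetical values chosen for illustration, not data from the cited studies.

```python
import numpy as np

# Informational load operationalized as the visual search slope (ms per item).
# Reaction times below are hypothetical values used only for illustration.
set_sizes = np.array([2, 4, 6, 8])
mean_rt_ms = np.array([560, 640, 730, 810])   # hypothetical mean search RTs

slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rt_ms, 1)
print(f"search slope ~ {slope_ms_per_item:.1f} ms/item")  # steeper slope = higher load
```

On this logic, stimulus classes with steeper search slopes (such as faces) are treated as carrying a higher informational load and are therefore expected to consume more of the shared memory resource.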

Traditionally, VWM has been studied with simple, abstract stimuli (e.g., colored squares, tilted lines) (Luck et al., 1997; Vogel et al., 2001). Nevertheless, a central aspect of human cognition is the processing of stimuli with social and affective content. Notably, in line with the importance that VWM may have for social and affective cognition, an updated version of Baddeley's model of working memory (Baddeley and Hitch, 1974) has more recently been proposed that includes a specific component devoted to stimuli with emotional content (Baddeley et al., 2012; Xie et al., 2016). Given the importance of VWM in the human cognitive architecture, it is crucial to understand how these emotional stimuli are represented. Among them, faces certainly occupy a place of the highest order: they convey socially and affectively relevant information such as identity, ethnicity, and emotions.

Methodological Aspects

To aid comprehension of the studies reviewed in the subsequent sections, this section provides a brief overview of the methodological aspects of VWM research.

One of the traditional paradigms used to investigate VWM is the change detection task (CDT) (Luck et al., 1997; Vogel et al., 2001; Rensink, 2002). Basically, a memory array containing to-be-memorized items is presented and, after a blank retention interval, a test display appears. Participants are required to compare the to-be-memorized items in the memory array with the item(s) presented in the test display and give a behavioral response. These CDT components roughly correspond to the main VWM operations of encoding, maintenance, and retrieval (Luck, 2008; Liesefeld and Müller, 2019). Although other VWM-related paradigms have also been more or less successfully employed (e.g., the n-back task; Jaeggi et al., 2010), the CDT is the most widely used and is considered the most versatile paradigm for the study of VWM (Luck and Vogel, 2013).
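For readers unfamiliar with the paradigm, the following minimal sketch outlines the event structure of a single-probe CDT trial; the timing values, stimulus labels, and function name are illustrative assumptions rather than parameters taken from any particular study.

```python
import random

def cdt_trial(set_size=4, encoding_ms=500, retention_ms=1000, p_change=0.5):
    """Build the event structure of one single-probe change detection trial."""
    memory_array = [f"face_{i}" for i in range(set_size)]          # to-be-memorized items
    change = random.random() < p_change                            # change on half of trials
    probe = "new_face" if change else random.choice(memory_array)  # single test item
    return {
        "events": [("memory_array", encoding_ms),        # encoding
                   ("retention_interval", retention_ms),  # maintenance (blank screen)
                   ("test_display", probe)],              # retrieval: same/different judgment
        "correct_response": "change" if change else "no_change",
    }

print(cdt_trial())
```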

The extensive use of this paradigm has led to a great proliferation of CDT variants, sometimes at the expense of the interpretability of the results. The most common CDT manipulations concern the number and/or type of items in the memory array and test display, the duration of the memory array (with a significant impact on the encoding time available for each displayed item) and of the retention interval, and the type of test display presented after the retention interval (e.g., single probe vs. whole display; see, e.g., Vogel and Machizawa, 2004; Zhang and Luck, 2008; Brigadoi et al., 2017). One important variant is the use of a continuous probe display (e.g., selecting the to-be-remembered color from a color wheel), which allows an estimation of memory precision (Zhang and Luck, 2008; see also Lorenc et al., 2014; Krill et al., 2018, for examples with faces). Other possible variants concern the use of distractors or masks during the retention interval (Vogel et al., 2006).
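To illustrate how a continuous probe yields a precision estimate, the sketch below computes the circular standard deviation of report errors around the true feature value; this is a simplified, assumption-based summary statistic, not the mixture-model fit used by Zhang and Luck (2008), and the example responses are made up.

```python
import numpy as np

def report_precision(reported_deg, true_deg):
    """Precision as the inverse circular SD of continuous-report errors (degrees)."""
    errors = np.deg2rad(np.asarray(reported_deg, float) - np.asarray(true_deg, float))
    errors = np.angle(np.exp(1j * errors))          # wrap errors to (-pi, pi]
    R = np.abs(np.mean(np.exp(1j * errors)))        # mean resultant vector length
    circular_sd = np.sqrt(-2 * np.log(R))           # standard circular SD
    return 1.0 / circular_sd                        # larger value = more precise memory

# Hypothetical reports clustered near the true value (0 deg) yield high precision.
print(report_precision([10, 355, 5, 350], [0, 0, 0, 0]))
```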

Within the context of studies that used the CDT, several VWM-dependent measures have been employed, including measures of storage capacity (e.g., Cowan's K; Cowan, 2001) – an index of the number of items effectively retained (for a review of capacity measures, see Rouder et al., 2011) – measures of accuracy – the percentage of correct responses – and measures of sensitivity in the comparison between the to-be-memorized items and the item(s) presented in the test display (e.g., d' from signal detection theory; Green and Swets, 1974; Wilken and Ma, 2004). As mentioned above, a continuous probe display allows memory precision to be estimated from the distribution of errors around the correct value. Finally, the concept of informational load (Alvarez and Cavanagh, 2004; Eng et al., 2005) mentioned above is frequently used to compare different stimuli with regard to their visual complexity (but see Jiang et al., 2008).
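The two most common behavioral measures mentioned above can be made explicit with a short sketch: Cowan's K for a single-probe CDT, K = set size × (hit rate − false-alarm rate) (Cowan, 2001; Rouder et al., 2011), and d' from signal detection theory, d' = z(H) − z(FA). The numerical example values are hypothetical.

```python
from scipy.stats import norm

def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K for a single-probe change detection task: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(FA); rates must lie strictly between 0 and 1."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical example: 4 memorized faces, 80% hits, 20% false alarms.
print(cowan_k(0.80, 0.20, set_size=4))   # -> 2.4 faces effectively retained
print(d_prime(0.80, 0.20))               # -> ~1.68
```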

One of the most studied neural correlates of VWM is an event-related potential (ERP) called the contralateral delay activity (CDA), also known as the sustained posterior contralateral negativity (SPCN) (for a review, see Luria et al., 2016). This ERP is recorded at occipito-parietal electrodes (ibidem), and the intraparietal sulcus (IPS) has been suggested to be its main neural generator (Xu and Chun, 2006; Robitaille et al., 2009). It is computed as a difference wave (Gratton, 1998) between the activity contralateral and ipsilateral to the hemifield in which the to-be-memorized items appear. CDA amplitude tends to correlate with the amount (Vogel and Machizawa, 2004) and resolution (Luria et al., 2016) of stored visual information, and it is also sensitive to visual complexity (colors vs. random polygons; Luria et al., 2010).
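The difference-wave logic of the CDA can be sketched as follows, under simplified assumptions: one left and one right posterior electrode, epochs already baseline-corrected and time-locked to the memory array; the electrode names and measurement window mentioned in the comments are illustrative, not prescribed by the cited studies.

```python
import numpy as np

def cda_difference_wave(left_chan, right_chan, memoranda_side):
    """Contralateral-minus-ipsilateral ERP difference wave for one trial.

    left_chan, right_chan: 1-D arrays of amplitude over time (e.g., PO7/PO8)
    memoranda_side: 'left' or 'right' hemifield of the to-be-memorized items
    """
    contra, ipsi = (right_chan, left_chan) if memoranda_side == "left" else (left_chan, right_chan)
    return np.asarray(contra) - np.asarray(ipsi)

# CDA amplitude is then typically quantified as the mean of the trial-averaged
# difference wave within a retention-interval window (e.g., several hundred ms
# after memory-array offset).
```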

Given the great variability in the methods employed and results obtained in VWM studies, we selected investigations that used comparable methodologies in order to facilitate comparison between results. In some cases, the results of the different studies discussed here are not directly comparable because of differences in the stimuli used (e.g., schematic vs. real faces, different facial expressions) and/or in the participants' task (detection of a change in face identity vs. in facial expression). For this reason, we have tried to report details useful to readers for a critical analysis of the results. In the following sections, we focus on studies using the CDT with faces, with particular attention to studies that measured the CDA. In the last section of this review, we also discuss studies that considered the relationship between face representations and individual differences (e.g., psychopathology). This review does not aim to be exhaustive but rather to identify and present selected examples of evidence that may help clarify the critical link between VWM functioning and the complexity of social cognition, focusing on the main source of social information, that is, others' faces.

Faces and Visual Working Memory

Curby and Gauthier (2007) demonstrated that a greater number of upright stimuli can be retained in VWM (measured with Cowan's K) compared to inverted ones, and, consistent with the face inversion effect (Yin, 1969; Tanaka and Gordon, 2011), this effect is larger for faces than for non-facial stimuli (for a review, see McKone and Robbins, 2011). Precision is also higher for upright than for inverted faces (Lorenc et al., 2014; Krill et al., 2018). Furthermore, consistent with the visual complexity of faces (Eng et al., 2005; Jiang et al., 2008), this effect is present only if sufficient encoding time (i.e., memory array duration) is provided. One possible explanation for this pattern of results invokes the holistic/configural processing that characterizes faces. In support of this, a similar VWM advantage has been reported in individuals with expertise for other classes of objects (Curby et al., 2009; but see Wong et al., 2008; Jiang et al., 2016) and for famous faces (Jackson and Raymond, 2008). Within the theoretical framework that dissociates capacity, in terms of slots, from the resolution of VWM representations (Scolari et al., 2008; Zhang and Luck, 2008), it has been suggested that perceptual expertise may enhance the resolution of VWM representations (Scolari et al., 2008; Curby and Gauthier, 2010; Lorenc et al., 2014). These results are noteworthy because they strongly suggest that resolution may be a particularly flexible aspect of VWM, potentially modulated by factors such as, in this case, perceptual expertise, but possibly also social and emotional salience. Therefore, VWM resolution could be a key element for understanding VWM representations of faces and facial expressions of emotions.

Static and Changeable Facial Features

Faces are characterized by both static and changeable features that convey social and affective information, such as race, identity, trustworthiness (Oosterhof and Todorov, 2009), facial expressions, and gaze direction (Adolphs and Birmingham, 2011).

Recognizing people's identity is a fundamental social ability (Bruce and Young, 1986; Haxby et al., 2000), and it has been suggested that familiarity with specific individual faces might affect their storage in VWM; for this reason, face familiarity could influence identity processing in VWM in real time. Using an identity CDT, Jackson and Raymond (2008) demonstrated a VWM improvement (in capacity and sensitivity) for familiar actors' faces compared to unfamiliar ones, leading to the conclusion that long-term memory is involved in VWM representations of familiar faces. The effect disappeared for inverted faces. Testing the representation of pictorial details across different pictures of the same individual – either familiar or unfamiliar – Dunn et al. (2019, see exp. 2–3) found no difference in performance (in terms of sensitivity) as a function of familiarity.

Race also seems to influence the quality of face processing (Young et al., 2012), possibly influencing VWM representations. Zhou et al. (2018) demonstrated that, with short encoding times, other-race faces are retained with reduced precision (i.e., a larger standard deviation of the error distribution) compared to own-race faces. Stelter and Degner (2018) demonstrated both lower accuracy (d') and lower capacity (Cowan's K) for other-race faces. These findings suggest that, similar to inverted faces, other-race faces are processed less efficiently at both the configural and featural levels (Hayward et al., 2013; Stelter and Degner, 2018). Holistic/configural processing seems to be a critical aspect of race processing (Tanaka et al., 2004), which may also depend on other social-cognitive factors linked to intergroup processing (for a review, see Young et al., 2012). Interestingly, a previous study has also provided evidence of a reduced CDA amplitude for other-race faces, especially those with direct gaze (Sessa and Dalmaso, 2016), and another study reported a correlation between CDA amplitude and implicit racial prejudice scores (Sessa et al., 2012), such that the most prejudiced participants memorized other-race faces with the lowest resolution.

Facial expressions are extremely relevant to social cognition: information about others' affective states (e.g., their emotions) and about the environment (e.g., danger signaled by fearful reactions) can be extracted from them (Adolphs, 2002). Across studies using a similar methodology (i.e., a single-probe identity CDT with real faces, where facial expression was task-irrelevant), one recurring finding in the VWM literature is an advantage in terms of capacity (Cowan's K) and sensitivity (d') for negative facial expressions, especially angry ones, compared to happy and neutral expressions (Jackson et al., 2008, 2009, 2014; Thomas et al., 2014).

Furthermore, this benefit is observed only when angry faces are presented in the memory array, not in the test display (Jackson et al., 2014), and it declines during the retention interval: with a longer retention interval (9,000 vs. 1,000 ms in the study by Jackson et al., 2009), the benefit disappears (Jackson et al., 2012). Notably, this angry benefit occurs without reducing performance for concurrently presented neutral faces: all stimuli are retained, with increased resolution for the salient ones (Thomas et al., 2014). However, somewhat different results (i.e., the absence of an angry benefit and/or the presence of a happy benefit) have been reported when using schematic facial expressions (i.e., with no identity information), shorter encoding times, or other methodological variations (Langeslag et al., 2009; Simione et al., 2014; Xie et al., 2016; Spotorno et al., 2018; Curby et al., 2019). In particular, the angry face advantage has not always been observed (see also Curby et al., 2019, using a change localization task; Xie et al., 2016, using schematic faces) or has been reported only for short encoding times (150 ms vs. the 1,000/2,000 ms of the previously cited studies) (Simione et al., 2014, using schematic faces).

Varying memory array size, encoding time, and expression (fearful, happy, angry, and neutral), Curby et al. (2019) demonstrated a VWM “cost” for fearful real faces, compared to neutral and happy ones, in terms of lower capacity (Cowan's K). Opposite to the angry benefit, a cost for angry faces relative to happy faces was also observed (Curby et al., 2019; indeed, a happy benefit emerged). Note that other studies have instead demonstrated a fearful advantage in terms of capacity, accuracy, and CDA amplitude (Sessa et al., 2011; Stout et al., 2013; Lee and Cho, 2019; all of these studies used real faces, and facial expression was task-irrelevant). Methodological differences could at least partly explain these inconsistent findings. Sessa et al. (2011) and Stout et al. (2013) used shorter encoding times (200–500 ms) and smaller set sizes (1–2 items) than Curby et al. (2019; 1,000/4,000 ms and five items, respectively), and spatial information was less relevant (i.e., location was probed in Curby et al., 2019). Interestingly, in Curby et al.'s (2019) study, the fear cost emerged only at the longest encoding time and, as the authors argued, a difficulty in disengaging from fearful faces could explain the lower estimated capacity. When spatial and temporal attention are controlled for, a fearful advantage in terms of sensitivity (d') emerges (Lee and Cho, 2019).

Overall, the angry face benefit seems consistent across studies. However, changing some CDT parameters, such as the probing method (e.g., probing location), using real vs. schematic faces, or employing different encoding times and/or dependent variables (Cowan's K vs. d'), seems to influence this effect (Langeslag et al., 2009; Simione et al., 2014; Xie et al., 2016; Spotorno et al., 2018; Curby et al., 2019). Similarly, a fearful advantage relative to neutral faces is observed in studies using similar parameters (Sessa et al., 2011; Stout et al., 2013; Lee and Cho, 2019; but see Curby et al., 2019). Importantly, the CDA seems to differentiate fearful from neutral faces regardless of set size and of spatial or temporal attentional biases (Sessa et al., 2011), which may indicate that, compared to behavioral VWM estimates, the CDA is more sensitive to resolution variations driven by saliency.

Other Socially Relevant Factors and Interindividual Differences

Other investigations have combined different emotional stimuli to understand how social information is integrated into VWM. Negative emotional words presented during the retention interval (2,000 ms) seem to enhance performance (d') for angry faces compared to happy ones (Jackson et al., 2014), and an angry benefit emerged with both positive and negative words when a longer retention interval was used (9,000 ms; Jackson et al., 2012). The authors suggested that encoding negative faces creates a condition (threat tagging) in which identity is coupled with valence, so that congruent stimuli (i.e., negative words) can interact with this representation (Jackson et al., 2014). Maran et al. (2015) induced positive or negative mood using high-impact pictures (e.g., erotic or mutilation images) and observed improved performance (d') for all emotional faces. Similarly, inducing a feeling of social exclusion (Du et al., 2019) or introducing a monetary reward (instead of a penalty; Thomas et al., 2016) improved VWM capacity for faces. In contrast, performing a face-related task during the retention interval while maintaining a face in VWM seems to decrease accuracy (Robinson et al., 2008). Overall, VWM for faces seems to benefit from non-facial emotional stimuli such as words and from other non-visual factors (e.g., mood).

Dealing with task-relevant and task-irrelevant (distractor) information is another important facet of VWM. Filtering efficiency interacts with individual VWM capacity (Vogel et al., 2005) and with psychopathology (Stout et al., 2013, 2015), and the CDA seems an optimal measure for this purpose. Given its correlation with the number of to-be-memorized items up to the capacity limit (Vogel and Machizawa, 2004), the CDA amplitude for n task-relevant stimuli should be greater than the amplitude for n stimuli of which some are task-irrelevant, as illustrated in the sketch below. Including emotional face distractors in the memory array (happy, angry, and neutral) and using an identity CDT (with one or two to-be-remembered faces), Ye et al. (2018) found that high-capacity participants filtered out all distractors, whereas in low-capacity participants filtering was effective only for happy faces.
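A minimal sketch of this filtering logic follows; the specific index is an assumption-based simplification inspired by the "unnecessary storage" approach (Vogel et al., 2005), not the exact computation used in the cited studies. The idea is that if distractors are perfectly filtered, the CDA for targets plus distractors approaches the targets-only amplitude, whereas if they are stored it approaches the all-targets amplitude.

```python
def unnecessary_storage_index(cda_targets_plus_distractors, cda_targets_only, cda_all_targets):
    """0 = distractors perfectly filtered, 1 = distractors stored like extra targets.

    Inputs are mean CDA amplitudes (in microvolts, typically negative) from three
    conditions: n targets + distractors, n targets alone, and 2n targets.
    """
    return ((cda_targets_plus_distractors - cda_targets_only)
            / (cda_all_targets - cda_targets_only))

# Hypothetical amplitudes: partial storage of emotional face distractors.
print(unnecessary_storage_index(-1.4, -1.0, -2.0))  # -> 0.4
```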

Psychopathology is another critical factor in social cognition. Anxiety, in particular, has been widely studied in relation to WM and generally correlates with lower WM capacity (for a review, see Moran, 2016). In two experiments using a location-probe task with real emotional faces (angry, neutral, and happy), Yao et al. (2018) demonstrated lower VWM capacity (Cowan's K) for all facial expressions in individuals with higher self-reported anxiety, without any effect on precision.

Filtering out irrelevant information is an important WM function and could be particularly relevant in anxiety (Qi et al., 2014). Using an identity CDT and monitoring the CDA, Stout et al. (2013) measured filtering efficiency for task-irrelevant faces (with fearful or neutral expressions). They found that task-irrelevant fearful faces were filtered out less efficiently than neutral faces and that filtering efficiency correlated negatively with self-reported anxiety. More specifically, Stout et al. (2015) demonstrated that filtering efficiency is inversely related to the worry component of anxiety in particular. Moreover, Meconi et al. (2013), using an identity CDT, reported greater CDA amplitude for trustworthy faces; interestingly, when self-reported anxiety was considered, untrustworthy (vs. trustworthy) faces were associated with larger CDA amplitude in the most anxious participants.

Other clinical conditions have been studied in relation to VWM representations of facial expressions. Patients with schizophrenia seem to show an overall WM deficit (Forbes et al., 2009) and lower VWM capacity for neutral faces (She et al., 2019); interestingly, with emotional faces, the angry benefit is still present even though an emotion classification deficit is observed (Linden et al., 2010). Individuals with melancholic depression show a VWM bias (i.e., higher d') toward sad faces compared to individuals with non-melancholic depression (Linden et al., 2011). In an expression change localization task, individuals with high suicidal intent seem to have lower VWM capacity for negative schematic faces than controls (Xie et al., 2018). Furthermore, Takahashi et al. (2015), using a CDT with schematic faces (angry, happy, and neutral), demonstrated that highly alexithymic individuals have lower VWM capacity for happy faces than individuals low in alexithymia.

Discussion and Conclusion

Faces are complex stimuli that convey multiple types of information and that seem to undergo a special kind of holistic processing during early stages. For this reason, it is plausible to hypothesize that faces are also represented in VWM in a “special” way compared to non-facial stimuli or inverted faces. Many studies in the literature have focused on the effects of facial expressions of emotions (both task-relevant, with schematic faces, and task-irrelevant, with real faces of different identities) on the representation of faces in VWM. Negative faces, in particular angry ones, are associated with better VWM performance. However, the great methodological variability in the choice of stimuli and CDT parameters makes it difficult to compare findings; as shown above, results can change drastically when using schematic vs. real faces or different probing methods. Future research in this field, unless such variations are themselves of interest, should keep the paradigm's parameters fixed and vary only the socially relevant information. Alternatively, an orthogonal variation of CDT parameters within the same study could be useful (e.g., using several encoding times, or schematic vs. real faces).

VWM has been defined as a hub of cognition (Luck, 2008) where information is retained and manipulated. Interestingly, different kinds of socially relevant information (e.g., emotional words or mood) seem to interact with facial memory representations. Ecologically, integrating different sources of social information could be an adaptive mechanism.

Psychopathology is another important aspect of the social environment and is often related to changes in basic cognitive functions. Again, different methods and different psychopathological conditions are difficult to integrate. However, it is interesting to note that psychopathology and VWM functioning are related: alexithymic individuals show the worst VWM performance for happy faces (Takahashi et al., 2015), and individuals with suicidal intent show the worst VWM performance for negative stimuli, possibly reflecting an adaptive avoidance mechanism (Xie et al., 2018).

At the neural level, the CDA seems to be influenced by facial information. It has been demonstrated that the CDA is modulated by the amount (Vogel and Machizawa, 2004) and also by the quality (i.e., resolution) of stored visual information (Luria et al., 2016). Interestingly, even with a single to-be-remembered face (i.e., when capacity estimation is not at stake), the CDA is modulated by facial information (Sessa et al., 2011, 2018; Meconi et al., 2013). According to flexible resource models and the neural object-file theory (Xu and Chun, 2006, 2009), one important and ecologically relevant aspect to consider could be the variation of resolution with saliency. The theory proposes two stages of processing (with neural bases in distinct parts of the IPS, which is also assumed to be the CDA generator), the second of which involves the detailed visual encoding of relevant objects. Integrating this neural measure into standard behavioral studies, and focusing on resolution besides capacity, could be useful for finely comparing the representations of different kinds of socially relevant information.

Author Contributions

FG wrote the first draft of the manuscript. PS provided critical revision. Both authors read and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Adolphs, R. (2002). Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behav. Cogn. Neurosci. Rev. 1, 21–62. doi: 10. 1177/1534582302001001003

Adolphs, R., and Birmingham, E. (2011). “ Neural substrates of social perception” in Oxford handbook of face perception . eds. A. J. Calder, G. Rodhes, M. H. Johnson, and J. V. Haxby (Oxford: Oxford University Press), 571–589.

Alvarez, G. A., and Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychol. Sci. 15, 106–111. doi: 10.1111/j.0963-7214.2004.01502006.x

Baddeley, A., Banse, R., Huang, Y. M., and Page, M. (2012). Working memory and emotion: detecting the hedonic detector. J. Cogn. Psychol. 24, 6–16. doi: 10. 1080/20445911. 2011. 613820

Baddeley, A., and Hitch, G. (1974). “ Working memory” in The psychology of learning and motivation . ed. G. H. Bower (New York: Academic Press), 47–89.

Brigadoi, S., Cutini, S., Meconi, F., Castellaro, M., Sessa, P., Marangon, M., et al. (2017). On the role of the inferior intraparietal sulcus in visual working memory for lateralized single-feature objects. J. Cogn. Neurosci. 29, 337–351. doi: 10. 1162/jocn_a_01042

Bruce, V., and Young, A. (1986). Understanding face recognition. Br. J. Psychol. 77, 305–327. doi: 10. 1111/j. 2044-8295. 1986. tb02199. x

Cowan, N. (2001). The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behav. Brain Sci. 24, 87–114. doi: 10. 1017/S0140525X01003922

Curby, K. M., and Gauthier, I. (2007). A visual short-term memory advantage for faces. Psychon. Bull. Rev. 14, 620–628. doi: 10. 3758/BF03196811

Curby, K. M., and Gauthier, I. (2010). To the trained eye: perceptual expertise alters visual processing. Top. Cogn. Sci. 2, 189–201. doi: 10. 1111/j. 1756-8765. 2009. 01058. x

Curby, K. M., Glazek, K., and Gauthier, I. (2009). A visual short-term memory advantage for objects of expertise. J. Exp. Psychol. Hum. Percept. Perform. 35, 94–107. doi: 10. 1037/0096-1523. 35. 1. 94

Curby, K. M., Smith, S. D., Moerel, D., and Dyson, A. (2019). The cost of facing fear: visual working memory is impaired for faces expressing fear. Br. J. Psychol. 110, 428–448. doi: 10. 1111/bjop. 12324

Du, X., Xu, M., Ding, C., Yuan, S., Zhang, L., and Yang, D. (2019). Social exclusion increases the visual working memory capacity of social stimuli. Curr. Psychol. 1–12. doi: 10. 1007/s12144-019-00274-1

Dunn, J. D., Ritchie, K. L., Kemp, R. I., and White, D. (2019). Familiarity does not inhibit image-specific encoding of faces. J. Exp. Psychol. Hum. Percept. Perform. 45, 841–854. doi: 10. 1037/xhp0000625

Eng, H. Y., Chen, D., and Jiang, Y. (2005). Visual working memory for simple and complex visual stimuli. Psychon. Bull. Rev. 12, 1127–1133. doi: 10. 3758/BF03206454

Forbes, N. F., Carrick, L. A., McIntosh, A. M., and Lawrie, S. M. (2009). Working memory in schizophrenia: a meta-analysis. Psychol. Med. 39, 889–905. doi: 10. 1017/S0033291708004558

Fukuda, K., Awh, E., and Vogel, E. K. (2010). Discrete capacity limits in visual working memory. Curr. Opin. Neurobiol. 20, 177–182. doi: 10. 1016/j. conb. 2010. 03. 005

Gauthier, I., Skudlarski, P., Gore, J. C., and Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nat. Neurosci. 3, 191–197. doi: 10.1038/72140

Gratton, G. (1998). The contralateral organization of visual memory: a theoretical concept and a research tool. Psychophysiology 35, 638–647. doi: 10. 1111/1469-8986. 3560638

Green, D. M., and Swets, J. A. (1974). Signal detection theory and psychophysics . Oxford, England: Robert E. Krieger.

Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends Cogn. Sci. 4, 223–233. doi: 10. 1016/S1364-6613(00)01482-0

Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biol. Psychiatry 51, 59–67. doi: 10. 1016/S0006-3223(01)01330-0

Hayward, W. G., Crookes, K., and Rhodes, G. (2013). The other-race effect: holistic coding differences and beyond. Vis. Cogn. 21, 1224–1247. doi: 10. 1080/13506285. 2013. 824530

Jackson, M. C., Linden, D. E. J., and Raymond, J. E. (2012). “ Distracters” do not always distract: visual working memory for angry faces is enhanced by incidental emotional words. Front. Psychol. 3: 437. doi: 10. 3389/fpsyg. 2012. 00437

Jackson, M. C., Linden, D. E. J., and Raymond, J. E. (2014). Angry expressions strengthen the encoding and maintenance of face identity representations in visual working memory. Cogn. Emot. 28, 278–297. doi: 10.1080/02699931.2013.816655

Jackson, M. C., and Raymond, J. E. (2008). Familiarity enhances visual working memory for faces. J. Exp. Psychol. Hum. Percept. Perform. 34, 556–568. doi: 10. 1037/0096-1523. 34. 3. 556

Jackson, M. C., Wolf, C., Johnston, S. J., Raymond, J. E., and Linden, D. E. J. (2008). Neural correlates of enhanced visual short-term memory for angry faces: an fMRI study. PLoS One 3: e3536. doi: 10. 1371/journal. pone. 0003536

Jackson, M. C., Wu, C.-Y., Linden, D. E. J., and Raymond, J. E. (2009). Enhanced visual short-term memory for angry faces. J. Exp. Psychol. Hum. Percept. Perform. 35, 363–374. doi: 10. 1037/a0013895

Jaeggi, S. M., Buschkuehl, M., Perrig, W. J., and Meier, B. (2010). The concurrent validity of the N -back task as a working memory measure. Memory 18, 394–412. doi: 10. 1080/09658211003702171

Jiang, Y. V., Remington, R. W., Asaad, A., Lee, H. J., and Mikkalson, T. C. (2016). Remembering faces and scenes: the mixed-category advantage in visual working memory. J. Exp. Psychol. Hum. Percept. Perform. 42, 1399–1411. doi: 10. 1037/xhp0000228

Jiang, Y. V., Shim, W. M., and Makovski, T. (2008). Visual working memory for line orientations and face identities. Atten. Percept. Psychophys. 70, 1581–1591. doi: 10. 3758/PP. 70. 8. 1581

Kanwisher, N., and Yovel, G. (2006). The fusiform face area: a cortical region specialized for the perception of faces. Philos. Trans. R. Soc. B Biol. Sci. 361, 2109–2128. doi: 10. 1098/rstb. 2006. 1934

Krill, D., Avidan, G., and Pertzov, Y. (2018). The rapid forgetting of faces. Front. Psychol. 9, 1–13. doi: 10. 3389/fpsyg. 2018. 01319

Langeslag, S. J. E., Morgan, H. M., Jackson, M. C., Linden, D. E. J., and Van Strien, J. W. (2009). Electrophysiological correlates of improved short-term memory for emotional faces. Neuropsychologia 47, 887–896. doi: 10. 1016/j. neuropsychologia. 2008. 12. 024

Lee, H. J., and Cho, Y. S. (2019). Memory facilitation for emotional faces: visual working memory trade-offs resulting from attentional preference for emotional facial expressions. Mem. Cogn. 1231–1243. doi: 10. 3758/s13421-019-00930-8

Liesefeld, H. R., and Müller, H. J. (2019). Current directions in visual working memory research: an introduction and emerging insights. Br. J. Psychol. 110, 193–206. doi: 10. 1111/bjop. 12377

Linden, S. C., Jackson, M. C., Subramanian, L., Healy, D., and Linden, D. E. J. (2011). Sad benefit in face working memory: an emotional bias of melancholic depression. J. Affect. Disord. 135, 251–257. doi: 10. 1016/j. jad. 2011. 08. 002

Linden, S. C., Jackson, M. C., Subramanian, L., Wolf, C., Green, P., Healy, D., et al. (2010). Emotion–cognition interactions in schizophrenia: implicit and explicit effects of facial expression. Neuropsychologia 48, 997–1002. doi: 10. 1016/j. neuropsychologia. 2009. 11. 023

Lorenc, E. S., Pratte, M. S., Angeloni, C. F., and Tong, F. (2014). Expertise for upright faces improves the precision but not the capacity of visual working memory. Atten. Percept. Psychophys. 76, 1975–1984. doi: 10. 3758/s13414-014-0653-z

Luck, S. J. (2008). “ Visual short-term memory” in Visual memory . eds. S. J. Luck and A. Hollingworth (New York, NY: Oxford University Press), 43–85.

Luck, S. J., and Vogel, E. K. (2013). Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends Cogn. Sci. 17, 391–400. doi: 10. 1016/j. tics. 2013. 06. 006

Luck, S. J., and Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature 390, 279–284. doi: 10.1038/36846

Luria, R., Balaban, H., Awh, E., and Vogel, E. K. (2016). The contralateral delay activity as a neural measure of visual working memory. Neurosci. Biobehav. Rev. 62, 100–108. doi: 10. 1016/j. neubiorev. 2016. 01. 003

Luria, R., Sessa, P., Gotler, A., Jolicœur, P., and Dell'Acqua, R. (2010). Visual short-term memory capacity for simple and complex objects. J. Cogn. Neurosci. 22, 496–512. doi: 10.1162/jocn.2009.21214

Ma, W. J., Husain, M., and Bays, P. M. (2014). Changing concepts of working memory. Nat. Neurosci. 17, 347–356. doi: 10. 1038/nn. 3655

Maran, T., Sachse, P., and Furtner, M. (2015). From specificity to sensitivity: affective states modulate visual working memory for emotional expressive faces. Front. Psychol. 6: 1297. doi: 10. 3389/fpsyg. 2015. 01297

McKone, E., and Robbins, R. (2011). “ Are faces special?” in Oxford handbook of face perception . eds. A. J. Calder, G. Rodhes, M. H. Johnson, and J. V. Haxby (Oxford, UK: Oxford University Press).

Meconi, F., Luria, R., and Sessa, P. (2013). Individual differences in anxiety predict neural measures of visual working memory for untrustworthy faces. Soc. Cogn. Affect. Neurosci. 9, 1872–1879. doi: 10. 1093/scan/nst189

Moran, T. P. (2016). Anxiety and working memory capacity: a meta-analysis and narrative review. Psychol. Bull. 142, 831–864. doi: 10. 1037/bul0000051

Oosterhof, N. N., and Todorov, A. (2009). Shared perceptual basis of emotional expressions and trustworthiness impressions from faces. Emotion 9, 128–133. doi: 10. 1037/a0014520

Pratte, M. S., Park, Y. E., Rademaker, R. L., and Tong, F. (2017). Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory. J. Exp. Psychol. Hum. Percept. Perform. 43, 6–17. doi: 10. 1037/xhp0000302

Qi, S., Ding, C., and Li, H. (2014). Neural correlates of inefficient filtering of emotionally neutral distractors from working memory in trait anxiety. Cogn. Affect. Behav. Neurosci. 14, 253–265. doi: 10. 3758/s13415-013-0203-5

Rensink, R. A. (2002). Change detection. Annu. Rev. Psychol. 53, 245–277. doi: 10. 1146/annurev. psych. 53. 100901. 135125

Robinson, A., Manzi, A., and Triesch, J. (2008). Object perception is selectively slowed by a visually similar working memory load. J. Vis. 8, 1–13. doi: 10. 1167/8. 16. 7

Robitaille, N., Grimault, S., and Jolicœur, P. (2009). Bilateral parietal and contralateral responses during maintenance of unilaterally encoded objects in visual short-term memory: evidence from magnetoencephalography. Psychophysiology 46, 1090–1099. doi: 10. 1111/j. 1469-8986. 2009. 00837. x

Rouder, J. N., Morey, R. D., Morey, C. C., and Cowan, N. (2011). How to measure working memory capacity in the change detection paradigm. Psychon. Bull. Rev. 18, 324–330. doi: 10. 3758/s13423-011-0055-3

Scolari, M., Vogel, E. K., and Awh, E. (2008). Perceptual expertise enhances the resolution but not the number of representations in working memory. Psychon. Bull. Rev. 15, 215–222. doi: 10. 3758/PBR. 15. 1. 215

Sessa, P., and Dalmaso, M. (2016). Race perception and gaze direction differently impair visual working memory for faces: an event-related potential study. Soc. Neurosci. 11, 97–107. doi: 10. 1080/17470919. 2015. 1040556

Sessa, P., Jolicœur, P., Dell'Acqua, R., Gotler, A., and Luria, R. (2011). Interhemispheric ERP asymmetries over inferior parietal cortex reveal differential visual working memory maintenance for fearful versus neutral facial identities. Psychophysiology 48, 187–197. doi: 10.1111/j.1469-8986.2010.01046.x

Sessa, P., Schiano Lomoriello, A., and Luria, R. (2018). Neural measures of the causal role of observers’ facial mimicry on visual working memory for facial expressions. Soc. Cogn. Affect. Neurosci. 13, 1281–1291. doi: 10. 1093/scan/nsy095

Sessa, P., Tomelleri, S., Luria, R., Castelli, L., Reynolds, M., and Dell’Acqua, R. (2012). Look out for strangers! Sustained neural activity during visual working memory maintenance of other-race faces is modulated by implicit racial prejudice. Soc. Cogn. Affect. Neurosci. 7, 314–321. doi: 10. 1093/scan/nsr011

She, S., Zhang, B., Mi, L., Li, H., Kuang, Q., Bi, T., et al. (2019). Stimuli may have little impact on the deficit of visual working memory accuracy in first-episode schizophrenia. Neuropsychiatr. Dis. Treat. 15, 481–489. doi: 10. 2147/NDT. S188645

Simione, L., Calabrese, L., Marucci, F. S., Belardinelli, M. O., Raffone, A., and Maratos, F. A. (2014). Emotion based attentional priority for storage in visual short-term memory. PLoS One 9: e95261. doi: 10. 1371/journal. pone. 0095261

Spotorno, S., Evans, M., and Jackson, M. C. (2018). Remembering who was where: a happy expression advantage for face identity-location binding in working memory. J. Exp. Psychol. Learn. Mem. Cogn. 44, 1365–1383. doi: 10. 1037/xlm0000522

Stelter, M., and Degner, J. (2018). Investigating the other-race effect in working memory. Br. J. Psychol. 109, 777–798. doi: 10. 1111/bjop. 12304

Stout, D. M., Shackman, A. J., Johnson, J. S., and Larson, C. L. (2015). Worry is associated with impaired gating of threat from working memory. Emotion 15, 6–11. doi: 10. 1037/emo0000015

Stout, D. M., Shackman, A. J., and Larson, C. L. (2013). Failure to filter: anxious individuals show inefficient gating of threat from working memory. Front. Hum. Neurosci. 7: 58. doi: 10. 3389/fnhum. 2013. 00058

Swan, G., and Wyble, B. (2014). The binding pool: a model of shared neural resources for distinct items in visual working memory. Atten. Percept. Psychophys. 76, 2136–2157. doi: 10. 3758/s13414-014-0633-3

Takahashi, J., Hirano, T., and Gyoba, J. (2015). Effects of facial expressions on visual short-term memory in relation to alexithymia traits. Pers. Individ. Dif. 83, 128–135. doi: 10. 1016/j. paid. 2015. 04. 010

Tanaka, J. W., and Gordon, I. (2011). “ Features, configuration, and holistic face processing” in The Oxford handbook of face perception . eds. A. J. Calder, G. Rodhes, M. H. Johnson, and J. V. Haxby (New York, NY: Oxford University Press), 177–194.

Tanaka, J. W., Kiefer, M., and Bukach, C. M. (2004). A holistic account of the own-race effect in face recognition: evidence from a cross-cultural study. Cognition 93, B1–B9. doi: 10. 1016/j. cognition. 2003. 09. 011

Thomas, P. M. J., FitzGibbon, L., and Raymond, J. E. (2016). Value conditioning modulates visual working memory processes. J. Exp. Psychol. Hum. Percept. Perform. 42, 6–10. doi: 10. 1037/xhp0000144

Thomas, P. M. J., Jackson, M. C., and Raymond, J. E. (2014). A threatening face in the crowd: effects of emotional singletons on visual working memory. J. Exp. Psychol. Hum. Percept. Perform. 40, 253–263. doi: 10. 1037/a0033970

van den Berg, R., Shin, H., Chou, W. C., George, R., and Ma, W. J. (2012). Variability in encoding precision accounts for visual short-term memory limitations. Proc. Natl. Acad. Sci. USA 109, 8780–8785. doi: 10. 1073/pnas. 1117465109

Vogel, E. K., and Machizawa, M. G. (2004). Neural activity predicts individual differences in visual working memory capacity. Nature 428, 748–751. doi: 10. 1038/nature02447

Vogel, E. K., McCollough, A. W., and Machizawa, M. G. (2005). Neural measures reveal individual differences in controlling access to working memory. Nature 438, 500–503. doi: 10. 1038/nature04171

Vogel, E. K., Woodman, G. F., and Luck, S. J. (2001). Storage of features, conjunctions, and objects in visual working memory. J. Exp. Psychol. Hum. Percept. Perform. 27, 92–114. doi: 10. 1037/0096-1523. 27. 1. 92

Vogel, E. K., Woodman, G. F., and Luck, S. J. (2006). The time course of consolidation in visual working memory. J. Exp. Psychol. Hum. Percept. Perform. 32, 1436–1451. doi: 10. 1037/0096-1523. 32. 6. 1436

Wilken, P., and Ma, W. J. (2004). A detection theory account of change detection. J. Vis. 4, 1120–1135. doi: 10. 1167/4. 12. 11

Wong, J. H., Peterson, M. S., and Thompson, J. C. (2008). Visual working memory capacity for objects from different categories: a face-specific maintenance effect. Cognition 108, 719–731. doi: 10. 1016/j. cognition. 2008. 06. 006

Xie, W., Li, H., Ying, X., Zhu, S., Fu, R., Zou, Y., et al. (2016). Affective bias in visual working memory is associated with capacity. Cogn. Emot. 31, 1–16. doi: 10. 1080/02699931. 2016. 1223020

Xie, W., Li, H., Zou, Y., Sun, X., and Shi, C. (2018). A suicidal mind tends to maintain less negative information in visual working memory. Psychiatry Res. 262, 549–557. doi: 10. 1016/j. psychres. 2017. 09. 043

Xu, Y., and Chun, M. M. (2006). Dissociable neural mechanisms supporting visual short-term memory for objects. Nature 440, 91–95. doi: 10. 1038/nature04262

Xu, Y., and Chun, M. M. (2009). Selecting and perceiving multiple visual objects. Trends Cogn. Sci. 13, 167–174. doi: 10. 1016/j. tics. 2009. 01. 008

Yao, N., Chen, S., and Qian, M. (2018). Trait anxiety is associated with a decreased visual working memory capacity for faces. Psychiatry Res. 270, 474–482. doi: 10. 1016/j. psychres. 2018. 10. 018

Ye, C., Xu, Q., Liu, Q., Cong, F., Saariluoma, P., Ristaniemi, T., et al. (2018). The impact of visual working memory capacity on the filtering efficiency of emotional face distractors. Biol. Psychol. 138, 63–72. doi: 10. 1016/j. biopsycho. 2018. 08. 009

Yin, R. K. (1969). Looking at upside-down faces. J. Exp. Psychol. 81, 141–145. doi: 10. 1037/h0027474

Young, S. G., Hugenberg, K., Bernstein, M. J., and Sacco, D. F. (2012). Perception and motivation in face recognition: a critical review of theories of the cross-race effect. Personal. Soc. Psychol. Rev. 16, 116–142. doi: 10. 1177/1088868311418987

Zhang, W., and Luck, S. J. (2008). Discrete fixed-resolution representations in visual working memory. Nature 453, 233–235. doi: 10. 1038/nature06860

Zhou, X., Mondloch, C. J., and Emrich, S. M. (2018). Encoding differences affect the number and precision of own-race versus other-race faces stored in visual working memory. Atten. Percept. Psychophys. 80, 702–712. doi: 10. 3758/s13414-017-1467-6
