
What are memory-perception interactions for? Implications for action

A growing body of studies demonstrates memory-perception interactions (see Barsalou, 2008; Heurley et al., 2012; Lobel, 2014, for reviews). Although such interactions are highly relevant for supporting embodied approaches to cognition, as well as for better understanding memory and perception (e.g., Zwaan, 2008; Versace et al., 2009; Landau et al., 2010; Kiefer and Barsalou, 2013), their functional role remains unclear: why would perception integrate memory and knowledge when it seems highly efficient without such influences? To understand the functional relevance of these interactions, we assume that it is necessary to take into account two important conditions under which our cognitive systems have evolved during phylogenesis and continue to evolve during ontogenesis. More precisely, we develop a view in which memory-perception interactions are highly relevant for planning and controlling actions when we interact with well-known objects under non-optimal perceptual conditions.

It is widely accepted that to properly parameterize action components and to control them during the course of action, it is necessary to perceptually process some of an object's features (Hommel and Elsner, 2009). As Glover (2004) has claimed, these "action-relevant perceptual features" (ARPF) can be spatial (e.g., shape, orientation) as well as non-spatial (e.g., fragility, weight). Among spatial ARPF, size is usually recognized as particularly important because of its involvement in a great variety of actions. Jeannerod (1984), for example, demonstrated that the magnitude of the grip aperture, a component of the grasping movement, is a function of the visual size of objects (see also Ellis et al., 2007; Fagioli et al., 2007; Wykowska et al., 2009). Visual size processing also seems highly important for intercepting flying objects (Lee, 1976). Nevertheless, very frequently and for various reasons, the perceptual processing of ARPF is far from optimal, especially under "out-of-laboratory" conditions. For instance, some ARPF cannot be processed by the available perceptual channels. When we want to grasp an object, we may only be able to perceive it visually, and therefore cannot directly perceive its fragility, weight, or temperature, even though these features are extremely relevant for planning the force and velocity of the grasp (Glover, 2004). Even when the right channels are available, some environmental conditions can impair perception. For example, the occlusion of an object by other surfaces can limit our ability to perceive it visually and thus to process its shape, size, or distance (Tanaka et al., 2001). Furthermore, short- or long-term injuries to perceptual systems can also induce non-optimal conditions of perception: the eyes can be impaired in the long term by cell aging or in the short term by an intense flash of light, but in both cases our ability to process visual features is affected. Accordingly, how do we plan and control actions under conditions where the features suited to planning and controlling relevant action parameters often cannot be optimally perceived? First, it is important to note that non-optimal processing of ARPF does not necessarily induce object-recognition problems. Indeed, as mentioned in several models, object recognition can be accurately based on non-ARPF such as the color and/or texture of objects and the context (Tanaka et al., 2001; Bar, 2009). Therefore, even if some ARPF cannot be processed, objects can be accurately identified in many cases. Second, because in everyday life we mainly interact with well-known objects, a preserved ability to recognize an object's identity can automatically induce the retrieval of a myriad of knowledge associated with the recognized object, including associated ARPF (e.g., shape, size). Thus, we claim that the recognition processes used to identify objects during the planning phase of actions involve the retrieval of previously experienced ARPF that are automatically integrated into perception. We also claim that these retrieved ARPF compensate for non-optimally perceived ARPF and thus maintain a high level of action efficiency even under non-optimal conditions of perception. To sum up, we assume that the functional relevance of memory-perception interactions (i.e., of an embodied cognitive architecture) emerges when humans interact with well-known objects under degraded perceptual conditions. We discuss three sets of evidence in support of this view, all coming from studies focusing on size as an ARPF.

First, numerous experiments suggest that memory can store objects' perceptual features, and especially ARPF (see Barsalou, 2008, for a review). For instance, a great variety of studies support the idea that the size of objects is accurately stored in memory and closely matches their actual visual size (Moyer, 1973; Holyoak, 1977; Holyoak et al., 1979; Shoben and Wilson, 1998; Bertamini et al., 2011; Konkle and Oliva, 2011; Linsen et al., 2011). More importantly, it seems that the known size of objects can be retrieved automatically even when objects are perceived only briefly, suggesting that ARPF may be retrieved automatically during fast real-world interactions with objects. Ferrier et al. (2007), for example, demonstrated that a target picture (e.g., an elephant) is more easily categorized as an animal when a briefly presented prime picture (150 ms) has a similar known size (e.g., a giraffe or a car) rather than a different one (e.g., a bee or a key), even though both pictures have the same visual size on the screen (see also Setti et al., 2009; Gabay et al., 2013). It is noteworthy that size is generally stable across the items of a category as well as across experiences. Because all the ladybugs we have experienced have approximately the same small size, their size can easily be stored at a conceptual level (i.e., as general knowledge; Whittlesea, 1987). In some cases, however, ARPF could be stored in a more specific or short-term format. For example, because the size of your car is not shared by all exemplars of the "car" category, this feature is undoubtedly stored in a more autobiographical format. Furthermore, some ARPF are so variable that we can only store them for a short period of time, like the last position of your car in the supermarket car park or the distance of some objects on a table (see also Borghi, 2013, for a related distinction). Thus, we claim that ARPF can be stored and automatically retrieved from memory, but perhaps in various ways according to the stability of the ARPF across experiences.

Moreover, several studies suggest that ARPF are not only stored but can also influence conscious perception. Among these features, size perception has been studied extensively. In a seminal study, Paivio (1975) demonstrated that comparing the known sizes of objects is faster when they are congruent with their visual sizes. In other words, it is easier to say that in general an elephant is larger than a mouse when, in the experiment, the picture of the elephant is presented larger than the picture of the mouse rather than smaller (see also Srinivas, 1996; Rubinsten and Henik, 2002; Konkle and Oliva, 2012, for similar results). The work of Riou et al. (2011) and Rey et al. (2014) goes further and suggests the automatic nature of this influence. Riou et al. (2011) demonstrated that the known size of objects can influence the detection of a visually odd-sized stimulus in a visual search task even though this feature is entirely useless for completing the task. Other studies have demonstrated an influence of the known size of objects on judgments of distance, which are often derived from visual size, suggesting that stored size can automatically affect not only the perception of visual size but also the perception of other ARPF derived from it (Epstein, 1965; Predebon, 1992, 1994; Hershenson and Samuels, 1999; Distler et al., 2000). Beyond the known size of objects, the perception of visual size can be affected by a more abstract kind of size representation: numbers. Henik and Tzelgov (1982) replicated the interaction between visual and stored size reported by Paivio (1975), but with numbers. In a classic bisection task requiring implicit length estimation, de Hevia and Spelke (2009) found a bisection bias toward the side of the line where the larger number was printed. In a reproduction task, Viarouge and de Hevia (2013) demonstrated that large numbers (e.g., 9) presented at each corner of a square induce larger reproductions of this square than when smaller numbers (e.g., 2) are presented. Altogether, these studies support the possibility that size stored in memory (i.e., the known size of objects or numbers) can directly influence the perception of size or of size-related features (e.g., distance), consistent with the idea that perception can be completed by stored ARPF when some of them are missing or ambiguous (see Barsalou, 2009, for a similar idea).

A further step has been achieved by recent work demonstrating the influence of size stored in memory on more automatic perception-action links (rather than on conscious judgments). Indeed, some studies have shown an influence of the known size of objects on action parameters that depend on visual size. For instance, Hosking and Crassini (2010) conducted experiments in which participants had to make time-to-contact judgments on stimuli for which a linear or a parabolic approach trajectory was simulated. Such judgments are highly important for a great variety of interceptive actions and are mainly based on the online processing of the visual size of the approaching stimulus. In their experiments, the stimuli had different known sizes (i.e., large: a football vs. small: a tennis ball). The results elegantly demonstrated that this stored feature of objects influences time-to-contact judgments, suggesting that it could interfere with our ability to intercept moving objects (see also DeLucia, 2005; Hosking and Crassini, 2011, for similar results). Another set of studies also suggests an influence of the known size of objects on another well-established perception-action link: our ability to adapt our grip aperture to the visual size of the to-be-grasped object (Jeannerod, 1984). Several studies demonstrate that participants are faster to carry out a precision grip on typically small objects (e.g., a cherry) and a power grip on typically large objects (e.g., an eggplant; Ellis and Tucker, 2000; Tucker and Ellis, 2004; Derbyshire et al., 2006; Girardi et al., 2010), even when visual size cannot interfere (Glover et al., 2004; Tucker and Ellis, 2004; Heurley et al., in revision). The same effect on grip aperture is obtained when size-related adjectives (e.g., SMALL/LARGE, LONG/SHORT) are processed concomitantly instead of known objects (Gentilucci and Gangitano, 1998; Gentilucci et al., 2000; Glover and Dixon, 2002). These results have also been replicated with numbers. More concretely, Moretto and di Pellegrino (2008) showed that processing large numbers facilitates power grips while processing small numbers facilitates precision grips (see also Andres et al., 2004; Lindemann et al., 2007). In addition, some results indicate that such interactions are highly automatic (Moretto and di Pellegrino, 2008; Namdar et al., 2014) and seem to be restricted to the planning phase of grasping (Glover and Dixon, 2002; Glover et al., 2004; Badets et al., 2007; Andres et al., 2008). Taken together, these works demonstrate that stored ARPF, such as size, can influence automatic perception-action links and not only conscious perception, supporting the possibility that perception can be completed by stored ARPF, which in turn influence the planning of some action components.
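As an aside, the optical basis of such time-to-contact judgments can be made explicit. What follows is a standard textbook sketch of Lee's (1976) optical variable tau, not a formula specific to the studies above: for an object approaching at a constant closing velocity, the time remaining to contact is specified by the ratio of the object's instantaneous angular size to its rate of optical expansion,

\[ \tau(t) \; = \; \frac{\theta(t)}{\dot{\theta}(t)} \; \approx \; \frac{D(t)}{v}, \]

where \( \theta(t) \) is the visual angle subtended by the object, \( D(t) \) its current distance, and \( v \) the closing speed (the approximation holds for small angles). Because tau requires knowledge of neither the object's physical size nor its distance, the familiar-size effects just described show that stored size intrudes on a judgment that is, in principle, achievable from optical information alone.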

This short review suggests that size, as an ARPF, can be stored in memory and automatically retrieved during object perception, and that it can influence the conscious perception of visual size (or of related features) as well as the planning of action components mainly based on visual size processing. We used this evidence to support the view that interactions between perceptual features that are present and those that are absent but simulated in memory are important for action, especially under "out-of-laboratory" conditions in which ARPF cannot be optimally perceived and in which interactions mainly occur with well-known objects. Of course, the evidence reported here is limited to size, but several studies have already demonstrated that other ARPF, such as distance, position, and weight, can be stored and automatically retrieved (Estes et al., 2008; Scorolli et al., 2009; Winter and Bergen, 2012). This strongly suggests that our view can be extended. Even if many questions remain open and much work remains to be done to support this view, it has the advantage of seeking the functional relevance of memory-perception interactions (i.e., of an embodied cognitive architecture) by taking into account two main constraints under which our cognitive systems have certainly evolved at phylogenetic and ontogenetic scales: interactions with (i) well-known objects under (ii) more or less degraded perceptual conditions.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgment

We are grateful to Gabrielle Chesnoy-Servanin for her revision of the English text.

References

Andres, M., Davare, M., Pesenti, M., Olivier, E., and Seron, X. (2004). Number magnitude and grip aperture interaction. Neuroreport 15, 2773–2777.

Andres, M., Ostry, D. J., Nicol, F., and Paus, T. (2008). Time course of number magnitude interference during grasping. Cortex 44, 414–419. doi: 10.1016/j.cortex.2007.08.007

Badets, A., Andres, M., Di Luca, S., and Pesenti, M. (2007). Number magnitude potentiates action judgements. Exp. Brain Res. 180, 525–534. doi: 10.1007/s00221-007-0870-y

Bar, M. (2009). The proactive brain: memory for predictions. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 1235–1243. doi: 10.1098/rstb.2008.0310

Barsalou, L. W. (2008). Grounded cognition. Annu. Rev. Psychol. 59, 617–645. doi: 10.1146/annurev.psych.59.103006.093639

Barsalou, L. W. (2009). Simulation, situated conceptualization, and prediction. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 1281–1289. doi: 10.1098/rstb.2008.0319

Bertamini, M., Bennett, K. M., and Bode, C. (2011). The anterior bias in visual art: the case of images of animals. Laterality 16, 673–689. doi: 10.1080/1357650X.2010.508219

Borghi, A. (2013). "Language comprehension: action, affordances and goals," in Language and Action in Cognitive Neuroscience, eds Y. Coello and A. Bartolo (New York, NY: Psychology Press), 125–144.

de Hevia, M.-A., and Spelke, E. S. (2009). Spontaneous mapping of number and space in adults and young children. Cognition 110, 198–207. doi: 10.1016/j.cognition.2008.11.003

DeLucia, P. R. (2005). Does binocular disparity or familiar size override effects of relative size on judgments of time to contact? Q. J. Exp. Psychol. A 58, 865–886. doi: 10.1080/02724980443000377

Derbyshire, N., Ellis, R., and Tucker, M. (2006). The potentiation of two components of the reach-to-grasp action during object categorisation in visual memory. Acta Psychol. 122, 74–98. doi: 10.1016/j.actpsy.2005.10.004

Distler, H. K., Gegenfurtner, K. R., van Veen, H. A. H. C., and Hawken, M. J. (2000). Velocity constancy in a virtual reality environment. Perception 29, 1423–1435. doi: 10.1068/p3115

Ellis, R., and Tucker, M. (2000). Micro-affordance: the potentiation of components of action by seen objects. Br. J. Psychol. 91, 451–471. doi: 10.1348/000712600161934

Ellis, R., Tucker, M., Symes, E., and Vainio, L. (2007). Does selecting one visual object from several require inhibition of the actions associated with nonselected objects? J. Exp. Psychol. Hum. Percept. Perform. 33, 670–691. doi: 10.1037/0096-1523.33.3.670

Epstein, W. (1965). Nonrelational judgments of size and distance. Am. J. Psychol. 78, 120–123. doi: 10.2307/1421091

Estes, Z., Verges, M., and Barsalou, L. W. (2008). Head up, foot down: object words orient attention to the objects' typical location. Psychol. Sci. 19, 93–97. doi: 10.1111/j.1467-9280.2008.02051.x

Fagioli, S., Hommel, B., and Schubotz, R. I. (2007). Intentional control of attention: action planning primes action-related stimulus dimensions. Psychol. Res. 71, 22–29. doi: 10.1007/s00426-005-0033-3

Ferrier, L., Staudt, A., Reilhac, G., Jiménez, M., and Brouillet, D. (2007). L'influence de la taille typique des objets dans une tâche de catégorisation [The influence of objects' typical size in a categorization task]. Can. J. Exp. Psychol. 61, 316–321. doi: 10.1037/cjep2007031

Gabay, S., Leibovich, T., Henik, A., and Gronau, N. (2013). Size before numbers: conceptual size primes numerical value. Cognition 129, 18–23. doi: 10.1016/j.cognition.2013.06.001

Gentilucci, M., Benuzzi, F., Bertolani, L., Daprati, E., and Gangitano, M. (2000). Language and motor control. Exp. Brain Res. 133, 468–490. doi: 10.1007/s002210000431

Gentilucci, M., and Gangitano, M. (1998). Influence of automatic word reading on motor control. Eur. J. Neurosci. 10, 752–756. doi: 10.1046/j.1460-9568.1998.00060.x

Girardi, G., Lindemann, O., and Bekkering, H. (2010). Context effects on the processing of action-relevant object features. J. Exp. Psychol. Hum. Percept. Perform. 36, 330–340. doi: 10.1037/a0017180

Glover, S. (2004). Separate visual representations in the planning and control of action. Behav. Brain Sci. 27, 3–24. doi: 10.1017/S0140525X04000020

Glover, S., and Dixon, P. (2002). Semantics affect the planning but not control of grasping. Exp. Brain Res. 146, 383–387. doi: 10.1007/s00221-002-1222-6

Glover, S., Rosenbaum, D. A., Graham, J., and Dixon, P. (2004). Grasping the meaning of words. Exp. Brain Res. 154, 103–108. doi: 10.1007/s00221-003-1659-2

Henik, A., and Tzelgov, J. (1982). Is three greater than five: the relation between physical and semantic size in comparison tasks. Mem. Cogn. 10, 389–395. doi: 10.3758/BF03202431

Hershenson, M., and Samuels, S. M. (1999). An airplane illusion: apparent velocity determined by apparent distance. Perception 28, 433–436. doi: 10.1068/p2779

Heurley, L. P., Milhau, A., Chesnoy, G., Ferrier, L. P., Brouillet, T., and Brouillet, D. (2012). Influence of language on color perception: a simulationist explanation. Biolinguistics 6, 354–382.

Holyoak, K. J. (1977). The form of analog size information in memory. Cogn. Psychol. 9, 31–51. doi: 10.1016/0010-0285(77)90003-2

Holyoak, K. J., Dumais, S. T., and Moyer, R. S. (1979). Semantic association effects in a mental comparison task. Mem. Cogn. 7, 303–313. doi: 10.3758/BF03197604

Hommel, B., and Elsner, B. (2009). "Acquisition, representation and control of action," in Oxford Handbook of Human Action, eds E. Morsella, J. A. Bargh, and P. M. Gollwitzer (Oxford: Oxford University Press), 371–398.

Hosking, S. G., and Crassini, B. (2010). The effects of familiar size and object trajectories on time-to-contact judgements. Exp. Brain Res. 203, 541–552. doi: 10.1007/s00221-010-2258-7

Hosking, S. G., and Crassini, B. (2011). The influence of optic expansion rates when judging the relative time to contact of familiar objects. J. Vis. 11, 1–13. doi: 10.1167/11.6.20

Jeannerod, M. (1984). The timing of natural prehension movements. J. Mot. Behav. 16, 235–254. doi: 10.1080/00222895.1984.10735319

Kiefer, M., and Barsalou, L. W. (2013). "Grounding the human conceptual system in perception, action and internal states," in Action Science: Foundations of an Emerging Discipline, eds W. Prinz, M. Beisert, and A. Herwig (Cambridge: MIT Press), 381–408. doi: 10.7551/mitpress/9780262018555.003.0015

Konkle, T., and Oliva, A. (2011). Canonical visual size for real-world objects. J. Exp. Psychol. Hum. Percept. Perform. 37, 23–37. doi: 10.1037/a0020413

Konkle, T., and Oliva, A. (2012). A familiar-size Stroop effect: real-world size is an automatic property of object representation. J. Exp. Psychol. Hum. Percept. Perform. 38, 561–569. doi: 10.1037/a0028294

Landau, M. J., Meier, B. P., and Keefer, L. A. (2010). A metaphor-enriched social cognition. Psychol. Bull. 136, 1045–1067. doi: 10.1037/a0020970

Lee, D. N. (1976). A theory of visual control of braking based on information about time-to-collision. Perception 5, 437–459. doi: 10.1068/p050437

Lindemann, O., Abolafia, J. M., Girardi, G., and Bekkering, H. (2007). Getting a grip on numbers: numerical magnitude priming in object grasping. J. Exp. Psychol. Hum. Percept. Perform. 33, 1400–1409. doi: 10.1037/0096-1523.33.6.1400

Linsen, S., Leyssen, M. H. R., Sammartino, J., and Palmer, S. E. (2011). Aesthetic preferences in the size of images of real-world objects. Perception 40, 291–298.

Lobel, T. (2014). Sensation: The New Science of Physical Intelligence. New York, NY: Atria Books.

Moretto, G., and di Pellegrino, G. (2008). Grasping numbers. Exp. Brain Res. 188, 505–515. doi: 10.1007/s00221-008-1386-9

Moyer, R. S. (1973). Comparing objects in memory: evidence suggesting an internal psychophysics. Percept. Psychophys. 13, 180–184. doi: 10.3758/BF03214124

Namdar, G., Tzelgov, J., Algom, D., and Ganel, T. (2014). Grasping numbers: evidence for automatic influence of numerical magnitude on grip aperture. Psychon. Bull. Rev. 21, 830–835. doi: 10.3758/s13423-013-0550-9

Paivio, A. (1975). Perceptual comparisons through the mind's eye. Mem. Cogn. 3, 635–647. doi: 10.3758/BF03198229

Predebon, J. (1992). The role of instructions and familiar size in absolute judgments of size and distance. Percept. Psychophys. 51, 344–354. doi: 10.3758/BF03211628

Predebon, J. (1994). Perceived size of familiar objects and the theory of off-sized perceptions. Percept. Psychophys. 56, 238–247. doi: 10.3758/BF03213902

Rey, A. E., Riou, B., and Versace, R. (2014). Demonstration of an Ebbinghaus illusion at a memory level: manipulation of the memory size and not the perceptual size. Exp. Psychol. 61, 378–384. doi: 10.1027/1618-3169/a000258

Riou, B., Lesourd, M., Brunel, L., and Versace, R. (2011). Visual memory and visual perception: when memory improves visual search. Mem. Cogn. 39, 1094–1102. doi: 10.3758/s13421-011-0075-2

Rubinsten, O., and Henik, A. (2002). Is an ant larger than a lion? Acta Psychol. 111, 141–154. doi: 10.1016/S0001-6918(02)00047-1

Scorolli, C., Borghi, A. M., and Glenberg, A. M. (2009). Language-induced motor activity in bi-manual object lifting. Exp. Brain Res. 193, 43–53. doi: 10.1007/s00221-008-1593-4

Setti, A., Caramelli, N., and Borghi, A. M. (2009). Conceptual information about size of objects in nouns. Eur. J. Cogn. Psychol. 21, 1022–1044. doi: 10.1080/09541440802469499

Shoben, E. J., and Wilson, T. L. (1998). Categorization in judgments of relative magnitude. J. Mem. Lang. 38, 94–111. doi: 10.1006/jmla.1997.2534

Srinivas, K. (1996). Size and reflection effects in priming: a test of transfer-appropriate processing. Mem. Cogn. 24, 441–452. doi: 10.3758/BF03200933

Tanaka, J. W., Weiskopf, D., and Williams, P. (2001). The role of color in high-level vision. Trends Cogn. Sci. 5, 211–215. doi: 10.1016/S1364-6613(00)01626-0

Tucker, M., and Ellis, R. (2004). Action priming by briefly presented objects. Acta Psychol. 116, 185–203. doi: 10.1016/j.actpsy.2004.01.004

Versace, R., Labeye, E., Badard, G., and Rose, M. (2009). The contents of long-term memory and the emergence of knowledge. Eur. J. Cogn. Psychol. 21, 522–560. doi: 10.1080/09541440801951844

Viarouge, A., and de Hevia, M.-A. (2013). The role of numerical magnitude and order in the illusory perception of size and brightness. Front. Psychol. 4:484. doi: 10.3389/fpsyg.2013.00484

Whittlesea, B. W. A. (1987). Preservation of specific experiences in the representation of general knowledge. J. Exp. Psychol. Learn. Mem. Cogn. 13, 3–17. doi: 10.1037/0278-7393.13.1.3

Winter, B., and Bergen, B. (2012). Language comprehenders represent object distance both visually and auditorily. Lang. Cogn. 4, 1–16. doi: 10.1515/langcog-2012-0001

Wykowska, A., Schubö, A., and Hommel, B. (2009). How you move is what you see: action planning biases selection in visual search. J. Exp. Psychol. Hum. Percept. Perform. 35, 1755–1769. doi: 10.1037/a0016798

Zwaan, R. A. (2008). "Experiential traces and mental simulations in language comprehension," in Symbols and Embodiment: Debates on Meaning and Cognition, eds M. de Vega, A. M. Glenberg, and A. C. Graesser (Oxford: Oxford University Press), 165–180. doi: 10.1093/acprof:oso/9780199217274.003.0009
