
Commentary: distributed cognition and distributed morality: agency, artifacts and systems

A commentary on
Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems

by Heersmink, R. (2017). Sci. Eng. Ethics 23, 431–448. doi: 10.1007/s11948-016-9802-1

Studies on human–artifact interaction stimulate reflection on the bounds of cognition, agency, and even morality (e.g., Floridi and Sanders, 2004). Heersmink (2017) compares distributed cognition (DCog) and distributed morality theory and claims that some artifacts, depending on their use, have cognitive and moral status but lack cognitive and moral agency. According to him, an extended cognitive system (ECS) has agency when the artifact(s) included in the system are fully transparent and densely integrated into the cognitive processes of the user, whereas a distributed cognitive system (DCS) without central control lacks agency. My doubts do not concern Heersmink’s main claim. Irrespective of the final assessment of the moral status of distributed systems, I argue that assessing the degree to which humans and artifacts are cognitively integrated is not always feasible, and that assuming it always is distorts our understanding of DCog.

Extension and Distribution of the Cognitive

Heersmink (2017) sums up the well-known concepts of “wide” cognition: cognitive states and processes may go beyond individual minds to involve people and artifacts; human agents and artifacts form integrated systems performing information-processing tasks. Cognitive activity sometimes extends beyond the brain to non-neuronal parts of the body and to elements of the environment. Two famous examples are a navigation team on board a naval vessel at sea (Hutchins, 1995) and a man with Alzheimer’s disease who supports his biological memory by means of a notebook (Clark and Chalmers, 1998). As Heersmink writes: “Clark’s extended cognition theory focuses on single agents interacting with artifacts, whereas Hutchins’ DCog theory typically (though not exclusively) focuses on larger systems with more than one agent interacting with artifacts. In such wider cognitive systems, there are thus one or more individuals interacting and coupling with cognitive artifacts” (Heersmink, 2017).

Heersmink recognizes the significant difference between the two concepts, relying on Hutchins’s comments (Hutchins, 2014): extended cognition is just a special case of DCog, which opens up a much broader view of the different types of cognition. Among them, “[s]ome systems have a clear center while other systems have multiple centers or no center at all” (Hutchins, 2014, p. 37).

So far, Heersmink’s summary is not controversial. What is problematic is his view that it “is better to conceive of system membership in terms of the degree of cognitive integration of humans and artifacts” (Heersmink, 2017). This integration depends on several dimensions, including the kind and intensity of information flow between human and non-human components, the accessibility of the scaffold, the durability of the coupling, the amount of user trust, the degree of transparency-in-use, the ease of interpretation of the information, and the amount of personalization or cognitive transformation (Heersmink, 2015, 2017). Hence, cognitive artifacts can be integrated more or less deeply.

Cognitive System as a Mechanism

Let us go back to the way Hutchins defines DCSs. For him, distribution means interaction (Hutchins, 2006, pp. 376–377). When we take the DCog perspective, we do not “make any claim about the nature of the world. Rather, it is to choose a way of looking at the world, one that selects scales of investigation such that wholes are seen as emergent from interactions among their parts” (Hutchins, 2014, p. 36). DCog is not a kind of cognition, but a perspective on all of cognition. “[T]he notions of centralized and distributed are always relative to some scale of investigation. (…) The boundaries of the unit of analysis for DCog are not fixed in advance; they depend on the scale of the system under investigation, which can vary (…)” (Hutchins, 2014, p. 36).

A DCS—to which ECSs belong—does not constitute a more or less integrated “casing” for an agent. Hutchins shows that DCSs may come at different scales (the brain is one example), and the large systems he studied offer something like Gulliver’s perspective in the land of giants: an opportunity to observe directly, at the macroscale, cognitive processes occurring in an environment (Hutchins, 1995, pp. 128–129).

Therefore, the assumption that it is always possible to grade the cognitive integration between an agent and an artifact may fail in the case of some complex DCSs. Artifacts are not “attached” to the “genuinely” cognitive part of the system; they are its equally important components. It is only the system as such that can have agency potential or can be the center for something else.

A DCS should be viewed as a mechanism (e.g., Bechtel and Abrahamsen, 2005; Ylikoski, 2015). A mechanism is a structure that performs a function by means of its organized component parts and operations; it is responsible for one or more phenomena. Mechanisms can occur in nested hierarchies and can work cyclically, but they can also be responsible for one-off events only. Mechanisms can also be computational (Miłkowski, 2013; Piccinini, 2015). Thus, what matters for a DCS’s mechanism(s) is the interaction and temporal coordination of its active components, while the components themselves can be physically separated and coordinated only temporarily. At the same time, mechanisms may be more or less durable, and more or less tightly organized. In other words, Heersmink’s claims can be better stated in the mechanistic framework without distorting the original idea of DCog.

Concluding Remarks

For Heersmink, an ECS has agency when an artifact is fully transparent and densely integrated into the cognitive processes of its user; for this reason (according to Heersmink’s criterion for being an agent and having agency), a distributed system without central control lacks agency because it is not a system whose intentions are being realized. My doubts do not concern Heersmink’s main claim, but a minor one, albeit one important for understanding DCog. In complex cases, it is meaningless to ask about the degree of cognitive integration of humans and artifacts in DCSs. Demanding such integration brings to mind the anthropomorphic fallacy. An ECS in which a person can fully control the operation of her or his artificial extensions is only a special, simple case of DCog. Wider cognitive systems are not added to “genuine” cognitive systems; they themselves are “genuine” systems: their components interact in a coordinated manner, as in the case of any mechanism.

Author Contributions

WW reviewed the literature, developed the theoretical stance, wrote the manuscript, and prepared it for publication.

Funding

WW is supported by the research grant 2014/15/N/HS1/03994 “Interactions in distributed cognitive systems and methodological individualism” funded by the National Science Centre, Poland.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The author would like to thank Marcin Miłkowski for helpful remarks.

References

Bechtel, W., and Abrahamsen, A. (2005). Explanation: a mechanist alternative. Stud. Hist. Philos. Biol. Biomed. Sci. 36, 421–441. doi: 10.1016/j.shpsc.2005.03.010


Clark, A., and Chalmers, D. (1998). The extended mind. Analysis 58, 7–19.

Floridi, L., and Sanders, J. (2004). On the morality of artificial agents. Minds Mach. 14, 349–379. doi: 10.1023/B:MIND.0000035461.63578.9d


Heersmink, R. (2015). Dimensions of integration in embedded and extended cognitive systems. Phenom. Cogn. Sci. 14, 577–598. doi: 10.1007/s11097-014-9355-1


Heersmink, R. (2017). Distributed cognition and distributed morality: agency, artifacts and systems. Sci. Eng. Ethics 23, 431–448. doi: 10.1007/s11948-016-9802-1


Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.

Hutchins, E. (2006). “The distributed cognition perspective on human interaction,” in Roots of Human Sociality: Culture, Cognition and Interaction, eds N. J. Enfield and S. C. Levinson (Oxford: Berg Publishers), 375–398.

Hutchins, E. (2014). The cultural ecosystem of human cognition. Philos. Psychol. 27, 34–49. doi: 10.1080/09515089.2013.830548


Miłkowski, M. (2013). Explaining the Computational Mind. Cambridge, MA: MIT Press.

Piccinini, G. (2015). Physical Computation: A Mechanistic Account . Oxford: Oxford University Press.

Ylikoski, P. (2015). “Social mechanisms,” in International Encyclopedia of the Social & Behavioral Sciences, 2nd Edn, Vol. 22, ed J. D. Wright (Amsterdam: Elsevier), 415–420. doi: 10.1016/B978-0-08-097086-8.03194-9

