Can Machines Think? This paper considers several points of view on the subject of what is commonly referred to as Artificial Intelligence, or AI. AI is the attempt to make machines, specifically computers, perform intelligently through programming. Already this definition has a problem, in that the word ‘intelligence’ can have many interpretations. This essay will attempt to put forward some ideas for how to approach this problem.
It could be said that the human brain is nothing more than a machine, and as we know it to be capable of thought it would be fair to surmise that machines can therefore think; it is probably this, or a similar premise, that inspired AI. However, within AI there are many schools of thought. Some believe that if a computer can be programmed correctly to emulate certain human processes, then it is to all intents and purposes thinking as we do.
One of the early pioneers in the world of computers, Alan Turing, outlined a test in which participants are asked to interrogate a computer terminal in order to determine whether they are communicating with a human or a computer program. Examples of programs which were put through the Turing test are ELIZA and SHRDLU, both of which attempted to emulate one side of a human conversation. But even if these programs did appear to be totally human, could they be said to actually be thinking? John Searle (1984) puts forward a scenario in an attempt to devalue this idea.
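To give a flavour of the kind of program that was put through such tests, here is a minimal, illustrative sketch of ELIZA-style keyword matching in Python; the patterns and canned replies are invented for this essay and are not ELIZA’s actual script.

```python
import re

# Illustrative ELIZA-style rules: match a keyword pattern in the user's input
# and fill a canned response template. These rules are invented examples,
# not ELIZA's real script.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return a templated reply based purely on surface keywords."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about my exams"))  # Why do you say you are worried about my exams?
print(respond("The weather is nice today"))    # Please go on.
```

Nothing in such a program models what the words mean; it manipulates the surface form of the sentence and nothing more.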
Searle refers to a program by Roger Schank at Yale University which, after being given a story, is able to answer questions about it. It would seem at the outset that this program must therefore be understanding the story. Searle then argues that despite not being able to understand Chinese, he would, under the correct circumstances, be able to answer, in Chinese, questions relating to a story also written in Chinese. The scenario is summarised as follows: sitting isolated in a room, Searle is given a wad of Chinese script, followed by another. In addition, he is given a list of rules, in English, for correctly correlating the two.
By simply following the English rules, he writes a third set of Chinese symbols, which he then returns to someone outside the room. If the first set of script was a story and the second a set of questions, he could be said to be answering the questions. In fact, from the point of view of someone standing outside the room, he would be correctly answering the questions, and thus would appear to be conversant in Chinese. This of course is not the case, as Searle would have no knowledge of what the story was about or what the questions were asking – he would not be understanding the story.
This argument is an attempt to demonstrate that although a computer program appears to be understanding a story, it is merely obeying simple instructions and has no understanding at all: “In the linguistic jargon, they have only a syntax but no semantics” (Searle 1984). However, depending on how one observes this problem, it can appear very differently. If we regard the entire room as a whole, taking in the person inside (to whom I shall refer, for continuity of terms, as a demon), the scripts, and the person outside, we do have a system that is capable of reading and interpreting Chinese.
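Searle’s ‘syntax without semantics’ point can be made concrete with a deliberately crude sketch; the tokens below are invented placeholders standing in for Chinese script, and the rule book is nothing more than a lookup table.

```python
# A deliberately simple sketch of the Chinese Room as pure symbol shuffling.
# The question and answer tokens are invented placeholders, not real Chinese;
# the point is that the procedure never touches meaning.

RULE_BOOK = {
    "Q-TOKEN-1": "A-TOKEN-7",
    "Q-TOKEN-2": "A-TOKEN-3",
}

def demon(question_token: str) -> str:
    """Follow the rule book mechanically; no notion of story, question or answer."""
    return RULE_BOOK.get(question_token, "A-TOKEN-0")  # default token for unrecognised input

print(demon("Q-TOKEN-1"))  # A-TOKEN-7
print(demon("Q-TOKEN-2"))  # A-TOKEN-3
```

From outside the room the output tokens look like competent answers; inside, nothing resembling the story’s meaning is ever represented.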
Hofstadter extends this whole-system idea by modifying the scenario so as to shrink it to brain size, the scripts becoming neurons and so on. This effectively creates a system equivalent to the human brain. So what would be the difference between the two? Why would one be acceptable as a thinking system and the other not? Searle frequently refers to ‘causal properties’ and ‘intentionality’, stating that the artificial system proposed by Hofstadter would lack both of them, and that somehow the human brain has both.
It is here that the subject of duality comes to the fore. Are the mind and the brain one and the same, or are they separate entities? Many religions favour this dualist approach and refer to the mind, as it is in this instance, as a person’s soul, regarding it as separate from the physical self. Whether the mind is separate or not, Searle’s argument implies that the human brain has a mind because of its natural causal properties, yet an artificial machine does not. But what are these natural causal properties, and from what do they derive?
Are they a result of the biological material from which the brain is made, are they a result of the brain’s structure, or are they a result of a breath of life from the lips of a god? “Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance” (McCarthy 1979). At what point does a functioning machine gain intentionality?

Here Zenon Pylyshyn is cited, from a reply made to Searle, to illustrate the complex connotations involved in the idea of the natural causal property of the brain: “If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.” Surely the person in this example would have conscious, intentional thought, despite being constructed from artificial parts. Or would this person simply be acting in the same role as the Chinese Room demon?

Another area rapidly developing in AI is Parallel Distributed Processing, or neural networks. These are complex structures that emulate the brain’s neural structure, and they are usually modelled within a computer, although in theory there is nothing to stop them being constructed electronically, or even mechanically!
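As an illustration of the general idea, rather than of any particular PDP model, the sketch below wires a few artificial ‘neurons’ together; the weights are hand-chosen (here to compute the XOR function) purely for demonstration, whereas a real connectionist network would learn its weights from examples.

```python
# A minimal sketch of the kind of network the essay describes: a few artificial
# "neurons" whose outputs depend only on weighted sums of their inputs.
# The weights are hand-picked for illustration; a genuine PDP model learns them.

def neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of the inputs exceeds the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def tiny_network(x1, x2):
    """Two hidden units feeding one output unit: computes XOR of its inputs."""
    h_or  = neuron((x1, x2), (1.0, 1.0), -0.5)      # active if either input is on
    h_and = neuron((x1, x2), (1.0, 1.0), -1.5)      # active only if both inputs are on
    return neuron((h_or, h_and), (1.0, -1.0), -0.5)  # "or, but not and"

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", tiny_network(a, b))  # prints the XOR truth table
```

Each unit does nothing but sum and threshold, yet the network as a whole computes something none of its parts does individually, which is precisely the point at issue in the arguments above.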
The effect of a neural network is similar to that in Pylyshyn’s example – an electronic replacement for a part of the brain. Functionally it operates in a way analogous to the brain, and it can be made to perform tasks similar to those carried out by Schank’s story program. Could a neural network equivalent be said to have any more ‘causal properties’ than an ordinary computer program? Searle acknowledges that, since we are merely machines, it is possible that machines can think.
He finds the idea of a computer program thinking implausible, however. Yet if we could model a brain with atomic accuracy in a computer’s memory, for example in the form of a neural network, surely it would work in exactly the same way and would therefore be just as valid a thinker as a human. It seems that this whole debate rests, at its most basic level, on a belief: either one believes that our ‘intentionality’ derives purely from our brain and its structure, or one does not.
Even if we ever do manage to construct an exact replica of a brain that appears to work identically to the real thing, how could we tell whether it really is a conscious entity with true intentionality, or merely acting like the Chinese Room demon? Indeed, how can we define consciousness and intentionality in that context? There must be a level of functioning or reasoning that we can use as a cut-off point for deciding whether or not something is alive and thinking. Descartes stated, “I think, therefore I am.” But was he thinking, or merely following a mechanical pattern, with no real understanding of the words?

References