
Philosophical implications of artificial intelligence

INTRODUCTION:

Philosophers have long tried to resolve questions raised by artificial intelligence: How do minds work? Can machines act as intelligently as humans do, and if so, would they really have minds? What would be the ethical implications? The claim that machines could not merely act intelligently but actually be thinking is what philosophers call strong AI. Most researchers, though, are untroubled by the distinction: as long as the program works, they do not ask whether it is a simulation of intelligence or the real thing.

My own position largely follows the same line. Do robots really act intelligently? Perhaps; it depends on how the question is framed, that is, on how one defines intelligence or consciousness and on exactly how machines are involved in the definition. Robots with the right built-in systems may be intelligent, or they may merely be good simulations, but the difference matters in theory rather than in practice. Because human and machine architectures are so different, it is not really fair to ask whether robots 'really think'.

Consider instead questions like these:

Can machines fly?

Can machines swim?

The answer to the first question is yes, because aeroplanes fly, so technically machines can fly. The answer to the second is no: although ships and submarines move through water, we do not call that swimming. Neither answer says anything about the machines or their capabilities; the difference lies entirely in how we choose to use the words. Alan Turing made the same point when he suggested that instead of asking whether machines can think, we should ask whether they can pass a behavioural test of intelligence. For example, the program ELIZA and the internet chatbot MGONZ fooled humans who did not realize they were talking to a program, and the ALICE program fooled one judge in the 2001 Loebner Prize competition. Turing himself also examined a wide range of possible objections to the possibility of intelligent machines.
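To see how a purely rule-following program can create the impression of conversation, here is a minimal ELIZA-style responder in Python. It is only a sketch: the patterns and canned replies below are invented for illustration and are not Weizenbaum's original ELIZA script, nor the MGONZ or ALICE rule sets.

```python
import random
import re

# A minimal ELIZA-style responder: it understands nothing, it only matches
# keywords and returns canned reflections of the user's own words.
RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bI am (.+)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"\b(?:mother|father|family)\b", ["Tell me more about your family."]),
]
DEFAULT_REPLIES = ["Please go on.", "I see.", "Can you say more about that?"]

def respond(utterance: str) -> str:
    """Return a canned reply based on the first pattern that matches."""
    for pattern, replies in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            reply = random.choice(replies)
            # Substitute the captured phrase, if any, into the template.
            return reply.format(*match.groups())
    return random.choice(DEFAULT_REPLIES)

if __name__ == "__main__":
    print(respond("I am feeling anxious about the test"))
    # e.g. "Why do you think you are feeling anxious about the test?"
```

A script of a few hundred such rules is enough to sustain short exchanges, which is precisely the point: the behaviour can look intelligent while the mechanism is nothing but pattern matching.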

The Dartmouth proposal conjectured that 'every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it'. I also agree with the claim that if a machine acts as intelligently as a human, then it is as intelligent as a human. Consider how artificial organs such as an artificial heart work: they satisfy the demands that nature places on them. Can we then say, by the same standard, that machines are intelligent?

We can now analyse the question from several perspectives.

The argument from disability claims that 'a machine can never do X'. Turing offered a list of candidates for X: be kind, use words properly, do something really new, tell right from wrong, or be the subject of its own thought. Turing was trying to predict what might become possible in the following decades, and we now have a long record of what computers and supercomputers have already done. It is true that computers do many things as well as, or better than, humans working alone: they play games such as chess, check the spelling of documents, help diagnose diseases in medicine, and perform hundreds of other tasks. Computers have also made small but significant contributions in chemistry, biology, computer science, astronomy and mathematics that required performance at the level of a human expert.

The Educational Testing Service has used an automated program to grade millions of essays; the program agrees with human graders 97% of the time, about the same rate at which two human graders agree [1]. It is clear, then, that computers can do some things as well as or better than humans, but that does not mean computers excel at everything, nor does it show that they use insight and understanding in performing these tasks; insight and understanding are not part of behaviour.

The objection from mathematics rests on Gödel's incompleteness theorem: for any formal axiomatic system F powerful enough to do arithmetic, it is possible to construct a 'Gödel sentence' G(F) with the following properties:

G(F) is a sentence of F, but cannot be proved within F.

If F is consistent, then G(F) is true [2].
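For readers who prefer symbols, the two properties can be stated compactly. The notation below (turnstile for provability in F, Con(F) for the consistency of F, and the model symbol for truth in the standard natural numbers) is added here for clarity and is not part of the cited source [2]; in the standard statement of the theorem, the consistency assumption guards both clauses.

```latex
% G(F) is a sentence in the language of F; if F is consistent,
% then F cannot prove G(F), and yet G(F) is true of the natural numbers.
\[
  G(F) \in \mathcal{L}(F), \qquad
  \mathrm{Con}(F) \;\Longrightarrow\;
  \bigl( F \nvdash G(F) \ \text{ and } \ \mathbb{N} \models G(F) \bigr)
\]
```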

Some philosophers have claimed that this theorem shows machines to be mentally inferior to humans, because machines are formal systems limited by the incompleteness theorem, while human minds, they suggest, escape that limit, perhaps because they operate by quantum gravity. But the comparison cuts both ways: no human brain could compute the sum of 100 billion 100-digit numbers in a lifetime, whereas a computer can do it in seconds. Moreover, humans were behaving intelligently long before they invented mathematics, so it is not fair to say that mathematical reasoning plays more than a peripheral role in what it means to be smart or intelligent.
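As a small-scale illustration of that gap, the Python sketch below uses the language's built-in arbitrary-precision integers to sum random 100-digit numbers. The sample size of one million is an illustrative assumption, far smaller than the 100 billion figure mentioned above, but the arithmetic itself is exact and finishes almost instantly on ordinary hardware.

```python
import random
import time

# One million random 100-digit integers (a much smaller sample than the
# 100 billion mentioned in the text, but enough to show the scale difference).
numbers = [random.randrange(10**99, 10**100) for _ in range(1_000_000)]

start = time.perf_counter()
total = sum(numbers)                # exact arbitrary-precision sum
elapsed = time.perf_counter() - start

print(len(str(total)))              # about 105-106 digits in the result
print(f"{elapsed:.3f} s")           # typically a small fraction of a second
```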

Computers are bound by limits on what they can prove, but there is no evidence that humans are immune from the same limitations. It is easy to show that a formal system cannot do X and then claim that humans can do X by some informal method, without offering any evidence for the claim. Likewise, it is effectively impossible to prove that humans are not subject to Gödel's incompleteness theorem, because any rigorous proof would require a formalization of the supposedly unformalizable human talent, and so would refute itself. We are therefore left with an appeal to intuition that humans can somehow perform superhuman feats of mathematical insight.

The argument from informality of behaviour, which Turing raised as one of the possible objections, has been among the most persistent criticisms of artificial intelligence. It holds that human behaviour is far too complex to be captured by any simple set of rules, and that because systems can do no more than follow a set of rules, they cannot generate behaviour as intelligent as a human's. This inability to capture everything in a set of logical rules is called the 'qualification problem' in artificial intelligence. Dreyfus was correct to point out that logical agents are vulnerable to the qualification problem. In his view, human expertise does include knowledge of rules, but only as a background within which humans operate.

Several problems were raised by Dreyfus and Dreyfus when, moving from being critics of artificial intelligence to being theorists of it, they proposed a neural network architecture. These problems include the following.

Good generalization from examples cannot be achieved without a proper background, that is, without incorporating background knowledge into the neural network learning process. My view is that this is a good reason for a serious redesign of current models of neural processing, so that they can take advantage of previously learned knowledge in the way that other learning algorithms do.

Neural network learning, as they describe it, is supervised and requires guidance: the relevant inputs and correct outputs must be identified in advance. Unsupervised and reinforcement learning, by contrast, do not require a human trainer; a toy contrast is sketched below.
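Here is a minimal contrast between the two settings, using only the Python standard library. The six-point data set, the threshold rule and the tiny k-means routine are all invented for illustration; they stand in for 'learning with a trainer' versus 'learning without one', not for any particular neural network method.

```python
# Toy contrast: supervised learning consumes (input, label) pairs supplied
# by a trainer; unsupervised learning sees only the raw inputs.

data = [1.0, 1.2, 0.9, 5.1, 4.8, 5.3]      # six observations
labels = [0, 0, 0, 1, 1, 1]                # needed ONLY by the supervised learner

def fit_threshold(xs, ys):
    """Supervised: learn a split point between the two labelled classes."""
    lo = max(x for x, y in zip(xs, ys) if y == 0)
    hi = min(x for x, y in zip(xs, ys) if y == 1)
    return (lo + hi) / 2

def two_means(xs, iterations=10):
    """Unsupervised: a 1-D k-means with k=2 that never consults the labels."""
    c0, c1 = min(xs), max(xs)              # crude initial centroids
    for _ in range(iterations):
        group0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        group1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0 = sum(group0) / len(group0)     # assumes neither group is empty
        c1 = sum(group1) / len(group1)
    return c0, c1

print(fit_threshold(data, labels))   # ~3.0, learned from labelled examples
print(two_means(data))               # ~(1.03, 5.07), discovered without labels
```

Both routines uncover the same two-cluster structure, but only the first needed a trainer to say which points belong to which class.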

On many issues Dreyfus had a real impact: background commonsense knowledge, uncertainty, and the importance of considering situated agents. But to me these are all evidence of AI's progress, not of its impossibility.

Let us now consider the question: can machines really think? In my view, a machine that passes the Turing test might still not actually be thinking; it might only be simulating thinking. On this view, what matters most is consciousness: the machine would have to be aware of its own mental states and actions.

This may not even be a falsifiable question. As the philosopher Karl Popper explained, 'falsifiable' does not mean that something is false, but rather that if it is false, this can be shown by experiment [3].

From my own reflection on direct experience, I feel that machines do not actually feel emotion; a related question is whether a machine's purported beliefs, desires and other representations are actually about anything in the real world. Turing's response is worth recalling: why should we insist on a higher standard for machines than we do for humans? After all, in ordinary life we have no direct evidence about the internal mental states of other humans; rather than argue continually over the point, it is usual to adopt the polite convention that everyone thinks. Whether machines can be conscious is a difficult question, but it has little to do with the practice of artificial intelligence. I agree that we are interested in creating programs that behave intelligently, not in whether someone pronounces them to be real or simulated. To understand this, consider the question of when artifacts count as real. Friedrich Wöhler synthesized artificial urea in 1828, and this mattered because it proved that organic and inorganic chemistry could be united: artificial urea simply is urea. By contrast, artificial Château Latour wine would not be Château Latour wine, even if it were chemically indistinguishable, simply because it was not made in the right place in the right way.

We can close this part of the discussion with the philosopher John Searle's lines:

'No one supposes that a computer simulation of a storm will leave us all wet. Why on earth would anyone in his right mind suppose a computer simulation of mental processes actually had mental processes?' [4]

In some cases the behaviour of an artifact is what matters, while in others it is the artifact's pedigree. For artificial minds there is no established convention, so we are left to rely on intuitions. It is easy to agree with Searle that computer simulations of storms do not make us wet, but it is not clear how to carry the analogy over to computer simulations of mental processes. Are mental processes more like storms or more like chess? More like Château Latour or more like urea?

The answer depends on one's theory of mental processes. Under functionalism, a mental state is any intermediate causal condition between input and output; any two systems with isomorphic causal processes would therefore have the same mental states, and a computer program could in principle have the same mental states as a person.

In contrast, the theory of biological naturalism states that mental states are high-level emergent features caused by low-level neurological processes in the neurons, and that it is the properties of the neurons themselves that matter.

To weigh these two viewpoints, let us look at one of the oldest problems in the philosophy of mind.

The mind-body problem:

This problem asks how mental states and processes are related to bodily (specifically, brain) states and processes. Recasting it as a mind-architecture problem allows us to talk about the possibility of machines having minds. René Descartes considered how an immortal soul interacts with the body and concluded that soul and body are two distinct kinds of things, a dualist theory. The monist theory, called materialism, holds that there are no such things as immortal souls, only material objects; consequently, mental states such as being in pain, knowing that one is riding a horse, or believing that Delhi is the capital of India are brain states. John Searle pithily sums up the idea with the slogan 'Brains cause minds'.

The materialist must face two serious obstacles. The first is free will: how can a purely physical mind, in which every transformation is governed strictly by the laws of physics, retain any freedom of choice? Most philosophers who have considered the problem agree that it calls for a careful reconstruction of our naïve notion of free will rather than posing any threat to materialism.

The second problem concerns consciousness and the related, but not identical, questions of understanding and self-awareness. Put simply: why does it feel like something to have certain brain states, whereas it presumably does not feel like anything to have other physical states, for example being a rock? To begin answering such questions, we need ways to talk about brain states at levels more abstract than the specific configuration of all the atoms of the brain of a particular person at a particular time. For example, as I think about the capital of India, my brain undergoes myriad tiny changes from one picosecond to the next, but these do not amount to a qualitative change of brain state.

To account for this, we need a notion of brain state types, under which we can judge whether two brain states belong to the same type or to different types. Opinions differ on how to draw these boundaries, but almost everyone believes that if one takes a brain and replaces some of its carbon atoms with a new set of carbon atoms (perhaps even atoms of a different isotope of carbon, as is sometimes done in brain-scanning experiments), the mental state will not be affected. This is just as well, because real brains continually replace their atoms through metabolic processes, and yet this in itself does not seem to cause major mental upheavals.

Let us consider a particular kind of mental state: the propositional attitudes, also known as intentional states. These are states such as believing, desiring, fearing and knowing that refer to some aspect of the external world. For example, the belief that Delhi is the capital of India is a belief about a particular city and its status. If we are going to ask whether it is possible for computers to have intentional states, it helps to understand how to characterize such states. One view holds that the identity or non-identity of mental states should be determined by staying completely 'inside the head', without reference to the real world.

To analyse this dilemma, we can turn to thought experiments that attempt to separate intentional states from their external objects. From these considerations I conclude that mental states cannot be duplicated merely on the basis of a program having the same functional behaviour, with similar inputs and outputs.

The ethics and risks of developing artificial intelligence pose problems of their own, among them the following.

We might lose our jobs to automation.

Humans might have too much leisure time.

People might lose their sense of being unique.

We might lose some of our privacy rights.

The use of artificial intelligence systems might result in a loss of accountability.

The success of AI might mean the end of the human race.

CONCLUSION:

I conclude that machines can be intelligent, but their intelligence often depends on the intelligence of others. The 'intelligence explosion' has also been called the technological singularity by the mathematics professor Vernor Vinge, who writes that 'within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.' Looking at the curve of technological progress, Vinge and Good argue that growth is currently exponential; however, it is quite a step to extrapolate that the curve will continue on to a singularity of near-infinite growth. Of the potential threats to society posed by artificial intelligence and related technologies, some are unlikely, but two demand serious handling. First, ultraintelligent machines might lead to a future that is very different from today, and we may not like it. Second, robotics technology may enable weapons of mass destruction to be deployed by psychopathic individuals, although I conclude that this is more a threat from biotechnology and nanotechnology than from robotics.
