
Could A Machine Think?

For Heythrop College, University of London

Date: 23/03/2015
Author: Gemma
Subject: Philosophy

Artificial-intelligence research has come a long way in recent years, with machines now able to perform all kinds of functions, some of which humans can perform and some of which we cannot. To determine whether it is possible for a machine to think as we do, we must first clarify what we mean by 'machine' and by 'think'. The Oxford Dictionary defines a machine as:

"An apparatus using mechanical power and having several parts, each with a definite function and together performing a particular task."

By this definition, we must count such everyday objects as clocks as machines. It is safe to assume that most people would agree that a clock is not capable of any kind of conscious thought. In the case of more advanced machines such as computers, however, the answer to the question of whether a machine could think is less obvious, and it is primarily these cases I will be dealing with in the course of this essay. As for the notion of thinking, I will argue that it entails not only computational functions such as symbol manipulation, but also an awareness of self and of one's own conscious, subjective experience.

This definition of thought requires further justification. Of course, it would be unreasonable to deny that a machine can appear to think in the way that it behaves. Alan Turing posits a test for artificial intelligence that has become known as Turing's test: if the behaviour of a machine is indiscernible from that of a person, then the machine can be said to be thinking intelligently. The American psychiatrist Kenneth Colby created a computer program called PARRY, a simulation of a paranoid patient, which was interviewed by psychiatrists alongside a real paranoid patient. The transcripts of both interviews were given to a different group of psychiatrists, who could not work out which came from PARRY's interview. This is an example of a machine that passes Turing's test. However, Daniel Dennett argued that the PARRY test was flawed because the psychiatrists were restricted to asking PARRY and the human patient ethically suitable questions, all of which PARRY was pre-programmed to respond to. Had they been allowed to try to confuse the interviewees, they would soon have discovered that PARRY was a machine. PARRY only seems to pass Turing's test because of the assumptions built into the process, and so it cannot be said to be thinking intelligently.
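Dennett's objection is easier to see with a concrete illustration. The toy program below is not Colby's actual PARRY but a minimal, hypothetical sketch of the same kind of mechanism: canned replies keyed to anticipated questions. The names SCRIPT, FALLBACK and respond are invented for illustration; the point is only that on-script questions produce convincing answers, while an off-script question exposes the machinery.

```python
# A hypothetical sketch (not Colby's PARRY) of a scripted interviewee:
# every "response" is a canned string keyed to an anticipated question.
SCRIPT = {
    "how are you": "I am fine. Why do you want to know?",
    "do you have enemies": "The Mafia has been watching me for weeks.",
    "why would they watch you": "They want to silence me. I know too much.",
}

FALLBACK = "I don't want to talk about that."  # what unanticipated questions expose

def respond(question: str) -> str:
    q = question.lower().strip("?!. ")
    for pattern, reply in SCRIPT.items():
        if pattern in q:       # crude keyword match; no understanding involved
            return reply
    return FALLBACK            # Dennett's point: probing off-script breaks the illusion

print(respond("Do you have enemies?"))  # sounds paranoid and human
print(respond("What is 7 times 8?"))    # off-script: the seams show
```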

Other machines may be more successful in passing Turing's test; however, even success would not show that the machine is engaged in the same process of 'thinking' as a person. It is not outward behaviour we should focus on to determine whether a machine can think, but internal cognitive processes and subjective experience. This is most aptly demonstrated by Searle's 'Chinese Room' thought experiment, in which Searle imagines himself in a room where Chinese symbols are posted through the door. He uses a set of English instructions to process the symbols and selects the right ones to post back. No one looking at his responses would doubt that he spoke Chinese, and yet he understands nothing of the symbols he is processing, so we clearly cannot conclude that he truly understands Chinese. Machines are not like human minds because they do not possess intentionality: they can process information, but they do not understand what they are processing:

"the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don`t symbolize anything. In the linguistic jargon, they have only a syntax but no semantics." (Searle, 1980)

There is clearly more to thought than this kind of symbol manipulation.
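Searle's picture of syntax without semantics can itself be sketched in a few lines of code. The rule book below is a hypothetical stand-in for Searle's English instructions (the names RULE_BOOK and room are invented for illustration): the program matches the shape of the incoming symbols and returns whatever string the rules dictate, and the meanings of the Chinese never enter into the computation.

```python
# A minimal sketch of the Chinese Room: purely formal rules mapping input
# symbols to output symbols. The mapping is arbitrary from the program's
# point of view, which is precisely Searle's point.
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am very well" (meanings never consulted)
    "你叫什么名字": "我叫小明",  # "What is your name?" -> "My name is Xiao Ming"
}

def room(symbols: str) -> str:
    # Match the shape of the input against the rule book and post back
    # whatever the rules dictate: syntax without semantics.
    return RULE_BOOK.get(symbols, "对不起")  # default reply: "Sorry"

print(room("你好吗"))  # prints 我很好: fluent output, zero understanding
```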

The critical responses to the 'Chinese Room' experiment are, on the whole, unsatisfactory. The systems reply holds that although the person in the room may not understand Chinese, they are only one part of the system, and the system as a whole does understand Chinese. Paul and Patricia Churchland argue that it is "irrelevant" that no part of the machine can understand Chinese, because no single neuron in the human brain can understand Chinese, yet the brain as a whole can; in the same way, Searle's system as a whole can understand Chinese. Searle's response successfully defuses the systems reply by asking us to internalise all the elements of the system: the person in the room memorises all the symbols and all the rules, and does all the calculations in their head. They still do not actually understand the meaning of the Chinese symbols, and so clearly the system as a whole does not either, because the system is within them; even taken as a whole, then, the machine cannot be said to be capable of thought.

What is it, then, that separates mere computational functioning, the input and output of data, from true conscious thought? David Rosenthal argues that we must be aware of a mental state in order for it to be considered conscious:

"A state is conscious if whoever is in it is to some degree of being in it in a way that does not rely on inference. or on some sort of sensory input." (Rosenthal, 1986)

When a calculator adds two numbers together, it outputs the correct answer. We can arrive at the same answer, but the difference is that we understand the answer we have reached, and we have an awareness of ourselves (the subject) as separate from the sum (the object). Furthermore, the most significant requirement for true thought, and the one missing from the functioning of machines, is qualia: the subjective experience of what it is like to be something. Thomas Nagel's highly influential article 'What is it like to be a bat?' highlights the fact that there is something it is like to be a particular organism (in this case a bat) that can never be captured by the knowledge and accounts of another organism. Omitting this subjective element of experience is the greatest failure of physicalist theories of mind. As Nagel concludes:

"it is a mystery how the true character of experiences could be revealed in the physical operation of that organism." (Nagel 1974)

Therefore it is clear that we cannot analyse our consciousness in purely physical terms. There is more to thought than the physical actions of the brain, and so even a perfectly reconstructed mechanical brain would still not be able to truly 'think'. This was elaborated on by Frank Jackson in his paper 'What Mary Didn't Know', in which a girl, Mary, lives from birth in a black and white room where she learns everything there is to know about science, including the scientific facts about colour. One day she leaves the room and perceives colour for the first time. She learns something new, because she learns what it is like to see a colour. Ned Block posits an 'absent qualia' objection to functionalism, concluding that even machines functionally equivalent to humans cannot truly think, because they do not have a subjective mental life and so qualia - the capacity for subjective experience - are absent.

Ultimately, no matter how advanced machines become, they will never be able to truly 'think'. To suggest that computational functioning is the same as human thinking is to totally undermine the wonder that is our power of thought. Without our self-awareness and our subjective experience of qualia, we would be little more than zombies. We can input and output data, yes, but we can also intuit judgements, respond to beauty, and fall in love. These experiences are all subjective, and they form the most defining aspect of our ability to think, the one that sets us apart from the clocks and thermostats to which strong AI essentially reduces us. The only intentionality a machine can have is programmed into it by its human creator, and so any semblance of thought it may demonstrate is merely a severely limited reflection of the creator's. The idea of artificial intelligence is a fascinating conjecture, but we could never create a machine that truly shares our experiences, especially whilst comparatively little is yet known about human consciousness and what it really means to think.

Bibliography

Block - 'Are Absent Qualia Impossible?' (The Philosophical Review) 1980
Churchland and Churchland - 'Could a Machine Think?' (Scientific American) 1990
Colby - 'Artificial Paranoia: A Computer Simulation of Paranoid Processes' (Pergamon Press) 1975
Crane - 'Elements of Mind' (Oxford University Press) 2001
Dennett - 'Can Machines Think?' in 'Alan Turing: Life and Legacy of a Great Thinker' (Springer Verlag) 2003
Dupre - '50 Philosophy Ideas You Really Need To Know' (Quercus) 2007
Jackson - 'What Mary Didn't Know' (The Journal of Philosophy) 1986
Nagel - 'What is it like to be a bat?' (The Philosophical Review) 1974
Pinchin - 'Issues in Philosophy' (Macmillan) 1990
Rosenthal - 'Two Concepts of Consciousness' (D. Reidel) 1986
Rosenthal - 'The Nature of Mind' (Oxford University Press) 1991
Searle - 'Minds, Brains and Programs' (Cambridge University Press) 1980
