A key part of Turing’s argument is that the perception of the interrogator (who stands in for any of us) is all he/we have to determine whether the machine is “thinking”: the test is simply whether it is possible to distinguish a machine from a person. This is a kind of solipsism. If I am the perceiver, I have no way of knowing whether anybody else even exists or is a figment of my imagination. (Descartes: I think, therefore I am. Does anybody else think? We’ll never really know.) So even in observing a very human-like person, there is no way to know whether they are really ‘thinking’ or just operating by a program.
Obviously we assume that we can project our own mental experiences onto other humans, being so much like them; in fact, as Searle notes, we even tend to project them onto inanimate objects. He says this contributes to the three main reasons why people expect that programming a thinking machine is possible, and why AI exists as it does:
1- we associate both mental activity and computers with information processing in a similar way, since we consider a program’s information processing to be modeled on our logical mental processes,
2- we project mental states onto computers, and
3- a residual sense of dualism, although disputed by AI, lies at the core of some fundamental AI ideas: for example, that the brain and mind are separate, and that mental processes can therefore be formalized as a program.
According to Searle, the dualistic analogy that mind : brain = program : computer breaks down for three reasons:
1- being purely formal, the program can have many “crazy realizations” depending on its context and application,
2- programs are purely formal and so lack intentionality (one formal state can only lead to the next formal state; it cannot result in ‘understanding’, and cannot arise from any intention other than its set formal precedent), and
3- mental states and events are products of the brain, whereas programs are made (as simulations of the mind abstracted to a purely formal version of itself, isolated from the brain, or from whatever the machine’s equivalent of a brain would be).
He says that machines, in fact only machines, can think, but specifically only brains, or machines with equivalent causal powers and intentionality. He does say that if we were somehow to recreate the mind and brain exactly, then it would of course act as a mind and brain.
But, like Borges’s map story, this may just mean somehow creating a human. (In the story, the map begins as a very small representation; the more accurate its makers try to make it, the bigger it has to get to accommodate new information, until finally it is the size of the entire territory and sits on top of it, completely covering it up.)
I have to believe that if there is something particular to the human that makes us special in this ‘understanding’ regard, then it must exist on a spectrum. In trying to build this sort of replica ‘machine’, we would perhaps approach understanding at some point before the thing seems to be a complete human replica, and so maybe there is an exact point on the spectrum where this kind of thinking/understanding/intentionality becomes valid by Searle’s argument. But if this point does exist, then we are really saying that there is an identifiable variable that distinguishes a non-thinking machine from a thinking machine, and that this variable may be very, very small.
So if we were building an entire thinking human out of all sorts of scrap parts, at what point in our construction would it begin thinking? At what point would it begin to be human?
It seems extremely improbable that such a point could exist, and so there is probably no such distinction between thinking and non-thinking. Unless you were to say that there can never be such a re-creation, that you must simply be born a human to think, so the original Searlian causality is key and must occur in the traditional missionary way. And this leads to the question: at what point do human beings born the usual way begin thinking? Are we born with a thought, or are language and thought a program that our brains learn over time? In this sense Turing has a great point: this learned part could be purely formal. But Searle’s point is that only when combined with this original seed of human brain growth will this formal program result in intentionality and understanding.
Turing limited his argument to the digital machine to isolate the purely formal. Searle says that if we limit the question “can a machine think?” to digital computers only, then they can carry out a program but cannot understand, regardless of their ability to work with formal programs or the complexity of the program.
Turing, I think, would say, “Why would this distinction matter if we are unable to make it?” Or rather, “What is the validity of the speculation if you are aware of your inability to know or perceive this distinction?” (A somewhat annoying solipsistic argument, and therefore difficult to disagree with.) Of course I will never know for sure whether anybody else, or any machine, thinks; and since I assume that people think, I could certainly be fooled by a very advanced machine into thinking it was a person. Searle would say: fine, but there are still interesting differences.
In a way, Turing is saying that Searle’s distinctions (between thinking, understanding, the formal, the intentional, the causal, etc.) are irrelevant, because all that matters is whether the interrogator’s (or our) perception is fooled, making any of our distinctions or diagnostics beside the point. Searle, meanwhile, would say that the interrogator’s ability to distinguish human from computer is irrelevant, since we can learn more by further analyzing the differences between the mental and programmatic behaviors of the game’s participants.
On the other hand, I am always attracted to the idea that when we understand enough, we will find a very mechanical, programmatic structure at the core of our mental processes, one that perhaps originates in the most basic brain function, and in this sense I find Turing’s point very important. But until we figure out what that program is and create a computer with human emotion and understanding, discussing the differences between the way we operate and the way a computer with the most appropriate program would, rather than dismissing them as irrelevant, is probably the way to discover our inner machine and build this future computer.
Finally, because of our tendency to project human-like qualities onto inanimate objects, a human interrogator in the Turing test is not a suitable interrogator: he or she would be easily fooled, desperately trying to see the human in everything.
And finally finally: I think we are not born thinking but feeling, and as we learn the complicated program of language and thought over time, its parts and meanings are nested and intertwined amid the feelings and sensations of the brain, forever influenced by them, as is their development, and vice versa. Maybe if a computer had this quality of subjective sensations preceding, influencing, and developing alongside the learning of a formal program, then it would seem a lot more human.