One of the “characters” in the 1968 film 2001: A Space Odyssey (directed by Stanley Kubrick) is a computer called HAL 9000. In addition to having a highly developed artificial intelligence, this computer is shown to have certain human emotions. The computer takes great pride in its own abilities, and this pride gives rise to feelings of jealousy. At one point in the film, HAL lies about a malfunction on the outside of the spaceship. The computer then proceeds to kill all of the crew members except one, who manages to disable HAL by disconnecting it.
The characterization of HAL in the film raises an interesting question regarding artificial intelligence: Can a computer lie? Before this question can be answered, it needs to be rephrased. First, we must consider what enables human beings to lie. In my opinion, human beings lie because they have emotions. More specifically, people are motivated to lie to one another because of their desires. In this regard, people who lie generally do so either because they want to get something out of another person or because they want to avoid receiving something unwanted from that person.
The next issue to consider is where desires come from. I feel that desires arise because people have a particular kind of consciousness, which can be called self-awareness. Before desires can arise, a being must be aware of itself as distinct from other beings. Furthermore, the being must have a sense that other beings have things that it lacks. Emotions, desires, and self-awareness are obviously found in human beings and not in machines. Therefore, the question to be considered is whether computers with artificial intelligence will ever be able to imitate these human capacities.
If we can answer this question, we will be able to determine whether or not a computer is capable of lying. Unfortunately, this is not an easy question to answer, and there is currently a great deal of controversy surrounding the topic. Many experts feel that computers will eventually be able to simulate the human mind; however, just as many others adamantly disagree. In the 1950s, Allen Newell and Herbert Simon were among the first researchers to develop the theory that computers would eventually be able to imitate the human mind.
Newell and Simon compared artificial intelligence to the human mind by creating an “information-processing model.” According to this model, a computer and a human mind are similar in that both process information by manipulating symbols. On the basis of this model, Newell and Simon concluded that it would not be long before computers would be able to “think” like humans, not merely in a superficial sense but in exactly the same way. Their hypothesis was that “a physical symbol system has the necessary and sufficient means for general intelligent action.”
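To make the information-processing model concrete, here is a minimal sketch of what “manipulating symbols” can mean in practice: a toy set of rewrite rules applied to a working list of symbols. The rules and symbols are invented for illustration and are not Newell and Simon’s actual programs; the sketch only conveys the general flavor of a physical symbol system.

```python
# A toy "physical symbol system": invented rewrite rules applied to a working
# list of symbols. For illustration only; not Newell and Simon's own program.

RULES = {
    ("hungry", "has_food"): ["eat"],   # if both symbols are present, replace them
    ("eat",): ["satisfied"],
}

def rewrite(symbols):
    """Apply the first rule whose condition symbols are all present."""
    for condition, replacement in RULES.items():
        if all(c in symbols for c in condition):
            remaining = [s for s in symbols if s not in condition]
            return remaining + replacement
    return symbols  # no rule applies; the system halts

state = ["hungry", "has_food"]
while True:
    new_state = rewrite(state)
    if new_state == state:
        break
    state = new_state
print(state)  # -> ['satisfied']
```

Nothing in this sketch understands what “hungry” means; it merely replaces one pattern of symbols with another. The hypothesis quoted above is precisely the claim that processes of this general kind, scaled up, are sufficient for intelligent action.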
Further support for this view came from the British mathematician Alan Turing. Turing devised a theoretical experiment (now known as the “Turing Test”) in which a person has a conversation over a teletype machine. The person does not know whether the teletype is connected to a human being or to a computer on the other end. A series of questions is asked in an effort to determine which one it is. According to this test, “if no amount of questioning or conversation allows you to tell which it is, then you have to concede that a machine can think.”
Alan Turing used this idea to formulate one of the fundamental principles of artificial intelligence: “If a machine behaves intelligently, we must credit it with intelligence.” Another way of saying this is: “A perfect simulation of thinking is thinking.” On the surface, this seems like a rather weak conclusion. However, Robert Sokolowski makes an interesting point in his article “Natural and Artificial Intelligence.” In that article, Sokolowski notes that there are actually two different ways of looking at the word “artificial.”
On the one hand, it can relate to something like “artificial flowers,” which are made of paper or plastic and are therefore obviously not real flowers. On the other hand, it can relate to an idea like “artificial light,” which really is light. Thus, in the words of Sokolowski, artificial light “is fabricated as a substitute for natural light, but once fabricated it is what it seems to be.” The proponents of AI believe that the word “artificial,” as used in “artificial intelligence,” is capable of reflecting this second meaning as well as the first.
The followers of Newell, Simon, and Turing would agree with the idea that a machine could have both desires and awareness. In this regard, scientists have learned that emotions in human beings are aroused by chemicals in the brain. Although computers work with electricity instead of chemicals, an analogy can easily be drawn between the processing of information in a computer and the processing of emotional “information” in the brain.
In his book Man-Made Minds: The Promise of Artificial Intelligence, M. Mitchell Waldrop makes the point that emotions are not simply random events; rather, they serve important functions in the lives of human beings. Based on recent discoveries in the field of psychology, Waldrop claims that emotions serve two major functions. The first is to help people focus their attention on things that are important to them. The second purpose, which is related to the first, is to help people determine what goals and motivations are important to them. According to Waldrop, there is no reason why a computer could not be programmed to carry out these same functions.
In fact, Waldrop indicates that computer programs have been developed in recent years that seem to be on the border of expressing rudimentary emotions. Such programs could be used to enable a computer to tell when its operator is sad or angry. In the words of Waldrop, “such a computer could then be programmed to make the appropriate responses – saying comforting words in the first case, or moving the conversation toward a less provocative subject in the second.” As Waldrop points out, this behavior would make it seem as if the computer were “empathizing” with the operator.
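As a rough illustration of the kind of behavior Waldrop describes, here is a minimal sketch in which a program guesses the operator’s mood from surface keywords and prints a scripted reply. The word lists and canned responses are assumptions invented for this example, not any actual system Waldrop refers to.

```python
# A hypothetical keyword-matching "empathizer": it guesses the operator's mood
# from surface cues and emits a scripted reply. Word lists and responses are
# invented for illustration only.

SAD_WORDS = {"sad", "lonely", "miserable", "hopeless"}
ANGRY_WORDS = {"angry", "furious", "hate", "unfair"}

def respond(message: str) -> str:
    words = set(message.lower().split())
    if words & SAD_WORDS:
        return "I'm sorry you feel that way. Would you like to talk about it?"
    if words & ANGRY_WORDS:
        return "Let's set that aside for now. Shall we look at something else?"
    return "I see. Please go on."

print(respond("I feel so lonely today"))
print(respond("This is completely unfair and I hate it"))
```

The program matches tokens and prints canned text; it feels nothing, which is why the most that can be claimed is that it seems to empathize.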
In defense of those who believe that computers might someday be able to imitate human beings, Waldrop claims that people can be easily confused on this issue if they only think of a computer as a cold, empty machine consisting of lights and switches. This perspective changes dramatically when a person imagines the type of robot that might be possible in the future: “A computer equipped with artificial eyes, ears, arms, legs, and skin – even artificial glands.” This type of android is often seen in science fiction books, movies, and television shows.
The issue of whether a computer shares human traits becomes even more confusing when the machine is made to look and act like a human being. Despite their efforts, however, AI proponents have not yet been able to create a machine that is truly capable of simulating the human mind. The process of creating this type of computer has turned out to be much more difficult than the researchers of the 1950s anticipated. Furthermore, since the 1960s, there has been a group of scientists who disagree with the idea that a computer can ever have a human-like consciousness.
In actuality, the belief that a machine is incapable of thinking can be traced back to the seventeenth century and the theories of the French philosopher René Descartes. However, this argument gained new life in the 1960s as a rebuttal to the efforts of AI researchers such as Alan Turing. The opponents of AI research feel that there is something unique about human nature that can never be duplicated in a computer. In the words of Hilary Putnam, a professor at Harvard University: “The question that won’t go away is how much what we call intelligence presupposes the rest of human nature.”
M. Mitchell Waldrop agrees that there is something special about human consciousness. According to Waldrop, “the essence of humanity isn’t reason, logic, or any of the other things that computers can do; it’s intuition, sensuality, and emotion.” This perspective is known as the holistic point of view because its proponents believe that the human brain is not simply a device for information processing. Rather, it is believed that there is something more to the mind: an intuitive or spiritual side.
According to this point of view, the differences between computers and humans cannot be fully understood unless the entirety of human experience is taken into consideration. One of the first researchers to advocate this position was Hubert Dreyfus, author of a book entitled What Computers Can’t Do. Dreyfus later wrote further books and articles on the topic with his brother Stuart. The Dreyfus brothers argue that the claims of the AI researchers are too simplistic: “Too often, computer enthusiasm leads to a simplistic view of human skill and expertise.”
The AI proponents are accused of having a limited perspective that fails to address such things as human intuition. According to Hubert and Stuart Dreyfus, this failure occurs because intuition is not apparent within the matter of the brain. As such, the Dreyfus brothers reject the “information-processing model” and propose instead “a nonmechanistic model of human skill.” The noted philosopher Mortimer J. Adler agrees that human intelligence is not a material thing, and for this reason he likewise concludes that computers cannot truly rival the powers of the human mind.
Although computers can imitate the mind in many ways, “they cannot do some of the things that almost all human beings can do, especially those flights of fancy, those insights attained without antecedent logical thought, those innovative yet nonrational leaps of the mind.” In fact, Adler makes a direct rebuttal against the theoretical viewpoint of Newell and Simon with his claim “that the brain is only a necessary, but not the sufficient, condition of conceptual thought, and that an immaterial intellect as a component of the human mind is required in addition to the brain as a necessary condition.”
There are many other elements of the human mind that remain beyond the reach of computers. For example, computers are incapable of utilizing what Dreyfus and Dreyfus refer to as “everyday know-how.” By this, the Dreyfus brothers “do not mean procedural rules but knowing what to do in a vast number of special cases.” The Dreyfus brothers also note that computers lack the ability to generalize, as well as the ability to learn from their own experiences.
In order for a machine to be truly intelligent, “it must be able to generalize; that is, given sufficient examples of inputs associated with one particular output, it should associate further inputs of the same type with that same output.” Hilary Putnam points out that true human intelligence requires more than the manipulation of codes and symbols. Thus, “to figure out what is the information implicit in the things people say, the machine must simulate understanding a human language.” Again, this is something that is currently missing in computer technology.
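The generalization criterion quoted above can be made concrete with a small sketch: a nearest-neighbor rule that, given labeled example inputs, assigns the same output to new inputs that resemble them. The numeric features and labels below are assumptions made purely for illustration.

```python
# A minimal nearest-neighbor illustration of "generalization": further inputs
# of the same type as the training examples receive the same output.
# The feature vectors and labels are invented for this sketch.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(examples, new_input):
    """Return the output associated with the closest known example."""
    closest = min(examples, key=lambda ex: distance(ex[0], new_input))
    return closest[1]

examples = [
    ((0.9, 0.1), "round"),    # inputs previously associated with one output
    ((0.8, 0.2), "round"),
    ((0.1, 0.9), "angular"),
]
print(predict(examples, (0.85, 0.15)))  # -> "round"
```

Of course, deciding which features matter and which cases count as “the same type” is itself the kind of everyday know-how that the Dreyfus brothers say the machine lacks.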
Furthermore, genuine understanding of language may never be achievable by a machine. In this regard, Putnam refers to the research of the linguist Noam Chomsky, who discovered that there might be a “template for natural language” within the human mind. This “template,” which enables people to learn languages, may be at least partially innate, or “hard-wired-in,” to use Putnam’s terminology. Even if something like it could be transferred to a computer, the programming involved would be extremely complex and could take years to accomplish.
After all, it takes a human child many years to master a language in all its subtlety. Robert Sokolowski also describes some of the human capacities that are lacking in computers. These include “quotation,” or the ability to appreciate another person’s point of view. Sokolowski also mentions the inability of computers to make creative distinctions, and he notes that today’s computers are incapable of having passionate desires. According to Hubert Dreyfus, there is yet another vital thing that is found in human beings but missing in computers: a body.
In his book What Computers Can’t Do, Dreyfus claims that pattern recognition is an important aspect of true artificial intelligence. However, he also claims that this ability “is possible only for embodied beings,” because it “requires a certain sort of indeterminate, global anticipation.” Although Dreyfus acknowledges the possibility of androids with human-like bodies in the future, he does not think that this will ever be the same as having a real human body. The difficulties of trying to make a computer behave like a human being can be seen in a program created by K. M. Colby called “Simulation of a Neurotic Process.”
This program is supposed to simulate the thinking of a woman who is suffering from repressed emotions, as well as feelings of anxiety and guilt. However, as noted by Margaret A. Boden, the program has several failings, and its results are not as deep and complex as what would be found in a real human being. Because of this, Boden claims that this “neurotic program” is not a true representation of neurotic behavior; rather, “it embodies theories representing clumsy approximations of these psychological phenomena.” Thus, the answer to the question “Can a computer lie?” is clearly no, not at this time.
Of course, it is possible that a present-day computer could be programmed to lie; however, on its own, a computer lacks the self-awareness and desires that would give it a motive for such an act. Yet I have to agree with the idea that anything may be possible in the future. Even Mortimer J. Adler, while arguing in favor of the immateriality of intelligence, admits that “the present difference in the degree of structural complexity between the human brain and that of artificial intelligence machines can certainly be overcome in the future.”
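The weak sense in which a present-day computer could be “programmed to lie” can be made concrete with a hypothetical sketch: a program that reports a fault that does not exist whenever doing so satisfies a rule its programmer wrote in for it. The scenario and the rule are invented for illustration.

```python
# A hypothetical program "programmed to lie": it reports a nonexistent fault
# whenever a programmer-supplied condition says that doing so serves a goal.
# The scenario is invented; the machine itself wants nothing.

def report(actual_fault: bool, lying_serves_programmed_goal: bool) -> str:
    if actual_fault:
        return "Fault detected in the antenna unit."
    if lying_serves_programmed_goal:
        return "Fault detected in the antenna unit."  # a false statement, by rule
    return "All systems are functioning normally."

print(report(actual_fault=False, lying_serves_programmed_goal=True))
```

The deception here belongs entirely to the programmer: the machine follows a rule, and nothing in it wants anything. That is the gap between being programmed to lie and actually lying.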
Perhaps the AI advocates like Alan Turing, Allen Newell, and Herbert Simon will eventually be proved right. Perhaps future computers will be programmed to imitate human intelligence so well that androids will begin to act like humans in emotional and intuitive ways. If that happens, then, as in the case of “artificial light,” it will no longer be possible for people to distinguish between a human mind and an artificial mind. It is even possible that machines may someday learn to duplicate themselves.
If that happens, computers might evolve over time as they learn to adapt to their environment. One of the things they would probably acquire is human-like emotion, because emotions would aid their adaptation, just as they have aided human beings. At that point, it would certainly be possible for a computer to lie in order to gain its “selfish” ends, just as a human is capable of doing now. The idea of a computer of the future lying is really not too far-fetched.
This is true even if such a computer were still less intelligent than an average adult human. After all, children and even pet animals are capable of practicing deceit on one level or another in order to get something that they want from a parent or master. The main objection to the idea that computers might someday become human-like is that it would imply that humans are not as special and unique as they like to think they are. In the words of Daniel C. Dennett: “There is something about the prospect of an engineering approach to the mind that is deeply repugnant to a certain sort of humanist.”
Marvin Minsky, an AI enthusiast from the Massachusetts Institute of Technology, put this threat more vividly with his claim that “the brain happens to be a meat machine.” However, as Waldrop points out, scientific progress has always represented uncomfortable change for human beings. This can be seen, for example, in the discoveries of Copernicus, Darwin, and Freud, all of which marked dramatic changes in the ways human beings saw themselves and their place in the world. Perhaps, as Waldrop argues, such scientific advances don’t have to be taken as a “message of doom.”
Perhaps, as computers become more intelligent, the more subtle and vital differences that distinguish humans from machines will become apparent, and we will gain deeper insights into the human mind and what it really means to be a human being. The scientist Douglas Hofstadter, who says he is not troubled by the reductionism of comparing the human mind to a computer, offers another optimistic view. In Hofstadter’s words: “To me, reductionism does not ‘explain away’; rather, it adds mystery.” Therefore, a future in which machines are more human-like is not only possible; it also might not be as bad as present-day humans fear.