Artificial Intelligence

The origins of artificial intelligence (AI) can be traced to fiction, philosophy, and imagination. Early inventions in engineering, electronics, and other fields have greatly influenced the field. AI became an additional mechanism for solving problems, supporting early work in knowledge representation, learning, and demonstration programs in language translation and understanding, associative memory, theorem proving, knowledge-based systems, and inference (Berlatsky, 2011). The present paper investigates the field of artificial intelligence in relation to the functioning of the human brain.

Historically, artificial intelligence has been characterized by fantasies, demonstrations, possibilities, and promises. During the last five decades, the AI community has developed experimental machines that can be used to test hypotheses concerning the mechanisms of intelligence. Even though the achievement of full-blown artificial intelligence remains in the future, an ongoing dialogue must be maintained concerning the implications of realizing it. Philosophers have floated the possibility of intelligent machines as a literary device to help define what it means to be human. Gottfried Wilhelm Leibniz (2005) seems to have perceived the possibility of a device that reasons mechanically, applying the rules of logic to settle disputes. Leibniz and Blaise Pascal designed calculators that mechanized arithmetic, though they never claimed that these devices could think.

Writers of science fiction have used the possibility of intelligent machines to advance fantasies of non-human intelligence, as well as to make humans reflect on their own characteristics. Baum described several robots and mechanical men in 1907. Robots and other artificially created beings, such as Mary Shelley's Frankenstein and the Golem of Jewish tradition, have long captured the public's interest. Mechanical dolls and animals were built from clockwork mechanisms during the 17th century. Though these gadgets were limited, they were designed more out of curiosity than for demonstration purposes. They lent initial credibility to mechanistic perspectives on behavior and to the suggestion that such behavior should be feared. With the mechanization of the industrial world, machinery has become more sophisticated (Whitby, 2009).

Chess is obviously an activity that calls for intelligence. No wonder the automated chess-playing gadgets of the 18th century were seen as intelligent devices that even fooled some people into believing that the machines played chess autonomously. Chess was widely used as a means of exercising representation and inference mechanisms during the early decades of AI's development. A major landmark was reached when the Deep Blue program defeated the world chess champion in 1997. Because of their calculating power, the computers of the 1940s were referred to as "giant brains".

Though robots have long shaped the public's perception of machine intelligence, early robotics efforts had more to do with mechanical engineering than with intelligent control. In recent times, robots have become an influential means of testing ideas about intelligent behavior. Furthermore, giving robots common knowledge about everyday functions and objects in the human environment has proved a daunting task. This becomes painfully obvious when a moving robot cannot differentiate a shadow from a stairwell. Nevertheless, some of the resounding successes of AI planning and perception methods are found in NASA's autonomous space vehicles. A Stanford team won the Defense Advanced Research Projects Agency's Grand Challenge for autonomous vehicles, in which 5 of 23 vehicles completed the roughly 132-mile course. Artificial intelligence, however, is not only about robots. It is more about understanding intelligent action and thought, using the computer as an experimental device. Herb Simon laid the foundation for the information-processing and symbol-manipulation theory of psychology (Boden, 1990).

Calculators cannot be regarded as intelligent. They solve mathematical assignments, but humans preprogram them to do so. Calculators can never learn anything new outside their limited domain of utility; they solve problems simply because humans are able to solve the same problems. Humans have envisioned automated machines that go beyond this ability and become genuinely intelligent machines. The various sophisticated calculating devices developed to solve mathematical problems of great complexity are merely advanced calculators. They are preprogrammed to function exactly as humans intend: given inputs, they generate the correct outputs. They may perform these functions at blazing speed, but their underlying mechanisms remain under human control.
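To make the distinction concrete, the following minimal Python sketch (an illustration added here, with arbitrary example data, not part of the essay's sources) contrasts a preprogrammed function, whose behavior is fixed by its author, with a routine whose single parameter is shaped by examples:

```python
# Contrast: a preprogrammed rule versus a parameter adjusted from experience.

def calculator_add(a, b):
    # Preprogrammed: the mapping from inputs to outputs is fixed by the human
    # who wrote the rule; nothing here changes with experience.
    return a + b

def fit_scale(examples, steps=1000, lr=0.01):
    # "Learning": estimate an unknown multiplier w from (input, output) pairs
    # by gradient descent, so the behavior is shaped by data, not hard-coded.
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            w -= lr * 2 * (w * x - y) * x  # gradient of the squared error
    return w

if __name__ == "__main__":
    print(calculator_add(2, 3))              # always 5, by construction
    w = fit_scale([(1, 3), (2, 6), (4, 12)])
    print(round(w, 2))                       # ~3.0, recovered from examples
```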

In other words, artificial intelligence is not only supposed to solve problems; it should consider different ways of solving them and find the best one. Though most scientific fields, such as mathematics, chemistry, biology, and physics, are well defined, the discipline of artificial intelligence has remained enigmatic. Despite nearly five decades of research in the field, there is no accepted definition of AI. To an even greater extent, the field of computational intelligence has gained prominence as an alternative to artificial intelligence. Many have believed that the methods adopted under the rubric of AI will never succeed. It may be surprising that about five decades of intense research in AI have proceeded without any accepted goal, or even a simple but rigorous definition of the notion itself (Winston, 1984).

All computers are alike in that they manipulate symbols; at the most basic level, the common symbols are ones and zeros. It is easy for humans to assign meanings to these symbols and their combinations. There is no significant difference between the person assigning meaning to the symbols in a computer program and the one assigning meaning to the binary digits manipulated by a calculator. Neither the calculator nor the program has developed any symbolic meaning on its own. According to Waterman (1986), artificial intelligence, as a part of computer science, is concerned with the development of intelligent computer programs. Rich (1993) stated that artificial intelligence is the study of how to program computers to do things that people currently do better. Nevertheless, if this definition were taken statically, it would disqualify the existence of AI: once a computer program exceeds the capability of its programmer, it would no longer fall within the domain of artificial intelligence. Russell (2004) states that an intelligent system possesses the highest utility that may be attained by a programmed system with similar computational limitations.
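As a brief illustration of this point (an added example with arbitrary values), the same bit pattern yields different readings depending entirely on the interpretation a human chooses:

```python
# The same bit pattern carries no intrinsic meaning; humans assign one by
# choosing an interpretation for it.
bits = 0b01000001
print(bits)        # read as an integer: 65
print(chr(bits))   # read as a character code: 'A'
print(bin(bits))   # the raw symbols themselves: '0b1000001'
```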

However, the definition given above seems to grant intelligence to a calculator. Artificial intelligence is a fragmented collection of endeavors; this description is as true today as it was two decades ago. One reason there has been less progress than was envisioned in the 1950s is a lack of awareness of the progress that has actually been made in the field. Over the last half century, computational devices and programming languages powerful enough to support experimental tests of ideas about intelligence have been developed. Turing's seminal 1950 paper on the philosophy of the human mind was a major turning point in the development of artificial intelligence. The article crystallized ideas about the possibility of programming a computer to behave intelligently, including the description of the landmark imitation game now known as the Turing Test. The early programs were severely restricted in scope by the speed of processors, the size of memory, the relative clumsiness of languages, and early operating systems. Symbol-manipulation languages built on top of this hardware advanced along with memory and processors. There were nevertheless numerous remarkable demonstrations of programs solving problems that previously only a few intelligent people could solve (Feigenbaum, 1983).

The Turing Test

Turing (1950) suggested a test involving three participants: an interrogator, a man, and a woman. The interrogator is enclosed in a room, isolated from the other two. The interrogator's role is to determine, through a series of questions, which player is the man and which is the woman. The man's objective is to deceive the interrogator into making the wrong identification, whereas the woman's aim is to assist the interrogator; Turing suggested that the best strategy for the woman is to give truthful answers. To eliminate the pitch of the voice as a clue, a teleprinter is used for communication between the interrogator and the two players. A machine then takes the man's place. Presumably, if the interrogator shows no improvement in the ability to determine who is who, the machine is considered to have passed the test (Time-Life, 1986).
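The structure of the game can be sketched in a few lines of Python. This is only an illustrative toy added here, with stand-in players and a placeholder for the interrogator's judgment, not Turing's own formulation:

```python
# A toy sketch of the imitation game's structure: anonymous labels, a
# text-only channel, and a forced guess by the interrogator.
import random

def deceiver(question):
    # Player A (the man, or the machine in his place) tries to mislead.
    return "My answer: I am the woman."

def helper(question):
    # Player B (the woman) answers truthfully to help the interrogator.
    return "Truthfully: I am the woman."

def run_round(questions):
    # Hide the players behind anonymous labels, as the teleprinter would.
    players = [("X", deceiver), ("Y", helper)]
    random.shuffle(players)
    transcript = {label: [ask(q) for q in questions] for label, ask in players}
    guess = random.choice(["X", "Y"])   # placeholder for human judgment
    truth = next(label for label, ask in players if ask is helper)
    return guess == truth

wins = sum(run_round(["Describe your hair."]) for _ in range(1000))
# With a real interrogator, the machine "passes" if this rate stays near
# the baseline achieved when a man, not a machine, plays the deceiver.
print(wins / 1000)
```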

There is a common misconception that the Turing Test is about a machine fooling the interrogator into believing that it is human. The test is actually meant to determine whether a machine can be as effective as a man at fooling the interrogator into believing that it is the woman. Turing did not regard the challenge as insurmountable. He restricted his study to digital computers. Through considerable analysis, Turing showed that digital computers are universal: such machines can execute any computable process. Thus, the limitation to digital computers is not a significant restriction of the test. With respect to the suitability of the test, Turing assumed that the game would be weighted heavily against the digital computer.

Turing (1950) considered and rejected several objections to the plausibility of a thinking machine. He felt that an argument supporting the existence of extra-sensory perception in humans would have been the most compelling of all the objections. According to the Countess of Lovelace (1842), a computer can only act according to the input and the program that have been fed into it; therefore, it will never be able to originate anything new. Turing compared this statement to the argument that a computer can never take humans by surprise. He noted that machines do in some instances behave in unexpected ways, since the entire set of initial conditions of a computer is generally not known, and accurate prediction of all possible behavior of a machine is impossible. Furthermore, Turing defined a learning machine as a mechanism capable of adjusting its own configuration through a series of punishments and rewards. Such a machine should be able to alter its own program and develop unexpected behavior. He predicted that within five decades it would be possible to program computers to play the imitation game so well that an average interrogator would have no more than a 70 percent chance of making the correct identification.
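Turing's notion of a machine adjusting its own configuration through rewards and punishments can be illustrated with a minimal sketch. The two actions, reward probabilities, and exploration rate below are arbitrary assumptions for the sake of the example, not anything Turing specified:

```python
# A two-action learner that gradually prefers whichever action is rewarded
# more often: its "configuration" (the value estimates) changes with
# experience rather than being fixed in advance.
import random

values = {"left": 0.0, "right": 0.0}   # learned worth of each action
counts = {"left": 0, "right": 0}

def reward(action):
    # Hypothetical environment: "right" pays off 80% of the time, "left" 20%.
    return 1.0 if random.random() < (0.8 if action == "right" else 0.2) else 0.0

for step in range(1000):
    # Mostly exploit the best-known action; occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(values, key=values.get)
    r = reward(action)                                       # reward/punishment
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running average

print(values)  # the learner now strongly prefers "right"
```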

Model of Human Knowledge

The prominence of the Turing Test focused attention on mimicking human behavior. At the time of Turing's proposal, it was beyond any reasonable expectation that a computer could pass the test. Rather than targeting human behavior in open conversation, attention therefore shifted to experiments limited to narrow domains of interest. Such limited-domain experiments, games in particular, received a significant amount of attention for three reasons. First, their rules are static and easy to express to both computer and human players. Second, they are examples of a category of problems associated with human reasoning and action. Third, the computer's performance can be measured against that of humans (Russell, 1975).

Most research in AI game playing has focused on developing heuristics for two-player, zero-sum, nonrandom games of perfect information. Zero-sum means that any potential gain to one player translates into a loss for the other. Nonrandom means that the course of the game is solely deterministic. Perfect information means that both players have complete knowledge of the disposition of both players' resources. The general practice was to examine the decision-making process during the game with the aim of discovering a consistent set of parameters or questions assessed during decision making. These conditions are then formulated in an algorithm designed to generate behavior similar to that of an expert faced with the same situations. It was believed that if a sufficient quantity of heuristics could be programmed into a computer, the machine's infallible computation and sheer speed would allow it to exceed the performance of human experts (Hunt, 1975).
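The core of such an algorithm, the minimax rule for a two-player, zero-sum, perfect-information game, can be sketched briefly in Python. The tiny hand-made game tree and its leaf scores below are arbitrary stand-ins for a heuristic evaluation of real positions:

```python
# Minimax over a toy game tree. A node is either a numeric leaf (a heuristic
# evaluation from the maximizer's point of view) or a list of child nodes;
# the players alternate between maximizing and minimizing the score.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizer chooses a move, then the opponent replies.
tree = [
    [3, 12],   # move 1: the opponent will pick min(3, 12) = 3
    [8, 2],    # move 2: the opponent will pick min(8, 2) = 2
]
print(minimax(tree, maximizing=True))  # 3: the best guaranteed outcome
```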

Evolving Artificial Intelligence

It is natural to suppose that by replicating the evolutionary learning process in a computer, the machine can gain artificial intelligence, adapting its behavior to meet goals across a range of environments. Knowledge-based systems, expert systems, fuzzy systems, neural networks, and similar frameworks generate actions in response to stimuli, as a function of the environment. None of these frameworks, however, is inherently intelligent. They are simply mathematical functions that take inputs and produce outputs. Only when a learning mechanism is introduced into such a framework does it become meaningful to speak of intelligence emerging from the system. Granted, other means of building learning systems are available, but evolution provides the most important learning mechanism, one applicable across each of these frameworks as well as combinations of them. It applies to human beings as well (Russell, 1995).

Evolutionary learning has operated since the earliest instances of natural selection and has pervaded all subsequent living things. In many ways, life is intelligence, and the two processes cannot be partitioned. To date, the majority of self-described efforts in artificial intelligence have depended on comparisons to human behavior and intelligence, often accompanied by overextended, high-visibility claims and projections.

According to Steel (2003), most people in the field of artificial intelligence have known for a long time that mimicking human intelligence exclusively is not the best path to progress, neither from the perspective of developing practical applications nor from that of advancing the scientific understanding of intelligence. In other words, the "artificial" can never be nearly as important as the "intelligence". It is not significant for machines to fake intelligence and mimic its obvious consequences, as in the Turing Test. It is simply sophistry to call some software an example of artificial intelligence merely because people in the artificial intelligence community designed it.

The significant step in making intelligent machines, that is, computers that adapt their behavior to meet objectives within a range of environments, demands more than simply asserting that artificial intelligence has been created. Numerous judgments concerning the proper objective of artificial intelligence research have been advanced. Intuitively, however, intelligence must be the same process in living organisms as it is in machines. Genesereth (1987) states that artificial intelligence is the study of intelligent behavior, whose ultimate goal is a theory of intelligence that accounts for the behavior of naturally occurring intelligent beings and that guides the creation of artificial entities capable of intelligent behavior (Parker, 1995).

Conclusion

According to Mayr (1988), evolution can be viewed as a four-fold process comprising self-reproduction, competition, mutation, and selection. The self-reproduction of the germ-line RNA and DNA systems is well described biologically. In a positively entropic universe, as described by the second law of thermodynamics, mutability is guaranteed: faults in the information record are inevitable. A finite arena assures competition for survival among organisms. Selection becomes a natural consequence of organisms multiplying beyond the available resources. The implication of these very simple rules is that evolution is a procedure that can be simulated and applied to develop creativity and imagination mechanically in machines (Russell, 2009).
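Mayr's four components map directly onto a simple simulation. The sketch below is an added illustration with an arbitrary fitness function, population size, and mutation scale, not a model drawn from the cited sources:

```python
# Self-reproduction = copying, mutation = random perturbation, and
# competition/selection = keeping the fittest within a finite population.
import random

def fitness(x):
    # A hypothetical environment: the goal is to approach x = 5.
    return -(x - 5.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]  # finite arena

for generation in range(100):
    offspring = [parent + random.gauss(0, 0.5)       # reproduction + mutation
                 for parent in population]
    pool = population + offspring                    # competition for slots
    pool.sort(key=fitness, reverse=True)
    population = pool[:20]                           # selection of survivors

print(round(population[0], 2))  # ~5.0: adapted behavior, not preprogrammed
```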

Intelligence should never be restricted entirely to biological organisms. It is a property synonymous with purposeful decision-making, and its definition applies equally to humans, robots, colonies of insects, and social groups. More generally, therefore, intelligence is the capability of a system to adapt its behavior to meet its goals in a range of environments (Rich, 1983).
