Artificial intelligence - 2020

Artificial intelligence


Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.




What is intelligence?


All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries the food inside. The instinctual nature of the wasp's behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence, conspicuously absent in the case of Sphex, must include the ability to adapt to new circumstances.


Psychologists generally characterize human intelligence not by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.


Learning


There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it was previously presented with jumped, whereas a program that is able to generalize can learn the "add ed" rule and so form the past tense of jump on the basis of its experience with similar verbs.
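The contrast between rote learning and generalization can be sketched in a few lines of code. The verb list, function names, and the "add ed" fallback below are illustrative only, not drawn from any historical program.

```python
# Sketch: a rote learner can only recall forms it has been shown,
# while a generalizing learner falls back on a learned "add ed" rule.

def rote_past_tense(memory, verb):
    """Rote learning: look the verb up; fail if it was never presented."""
    return memory.get(verb)  # None means "never seen, no answer"

def generalizing_past_tense(memory, verb):
    """Generalization: use memory if possible, else apply the rule."""
    if verb in memory:
        return memory[verb]
    return verb + "ed"  # rule induced from experience with similar verbs

memory = {"walk": "walked", "talk": "talked"}

print(rote_past_tense(memory, "jump"))          # None: never presented
print(generalizing_past_tense(memory, "jump"))  # jumped, via the rule
```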


Reasoning


To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, "Fred must be in either the museum or the café; he is not in the café; therefore he is in the museum," and of the latter, "Previous accidents of this sort were caused by instrument failure; therefore this accident was caused by instrument failure." Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behaviour, until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules.
There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the hardest problems confronting AI.
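The museum-or-café deduction is a disjunctive syllogism, and its mechanical character is easy to see in code. The following is a toy sketch of that one inference pattern, not a general inference engine.

```python
# Toy deduction: from "Fred is in the museum or the cafe" and
# "Fred is not in the cafe," conclude "Fred is in the museum."

def disjunctive_syllogism(alternatives, ruled_out):
    """Eliminate the ruled-out alternatives; the deduction succeeds
    only if exactly one alternative survives."""
    remaining = [place for place in alternatives if place not in ruled_out]
    return remaining[0] if len(remaining) == 1 else None

print(disjunctive_syllogism(["museum", "cafe"], {"cafe"}))  # museum
```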

Problem solving


Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis, a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means, which in the case of a simple robot might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT, until the goal is reached.
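A minimal sketch of means-end analysis for such a robot follows. The grid world, the distance measure, and the restriction to the four movement operators are assumptions made for illustration.

```python
# Sketch: means-end analysis on a toy grid robot. At each step the
# program picks the operator that most reduces the remaining
# difference (here, grid distance) between current state and goal.

def means_end_plan(current, goal):
    operators = {"MOVERIGHT": (1, 0), "MOVELEFT": (-1, 0),
                 "MOVEFORWARD": (0, 1), "MOVEBACK": (0, -1)}

    def distance(state):
        # the "difference" between a state and the goal
        return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

    plan, state = [], current
    while state != goal:
        op, (dx, dy) = min(
            operators.items(),
            key=lambda kv: distance((state[0] + kv[1][0],
                                     state[1] + kv[1][1])))
        state = (state[0] + dx, state[1] + dy)
        plan.append(op)
    return plan

print(means_end_plan((0, 0), (2, 1)))
# ['MOVERIGHT', 'MOVERIGHT', 'MOVEFORWARD']
```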


Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating "virtual objects" in a computer-generated world.



Perception


In perception the environment is scanned by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field.


At present, artificial perception is sufficiently well advanced to enable optical sensors to identify individuals, autonomous vehicles to drive at moderate speeds on the open road, and robots to roam through buildings collecting empty soda cans. One of the earliest systems to integrate perception and action was FREDDY, a stationary robot with a moving television eye and a pincer hand, built over a period of years at the University of Edinburgh in Scotland. FREDDY was able to recognize a variety of objects and could be instructed to assemble simple artifacts, such as a toy car, from a random heap of components.


Language


A language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a mini-language, it being a matter of convention that the hazard sign means "hazard ahead" in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified in statements such as "Those clouds mean rain" and "The fall in pressure means the valve is malfunctioning."


Unlike birdcalls and traffic signs, an important characteristic of full-fledged human languages is their productivity. A productive language can formulate an unlimited variety of sentences.


It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements. Although none of these programs actually understands language, they may, in principle, reach the point where their command of a language is indistinguishable from that of a normal human. What, then, is involved in genuine understanding, if even a computer that uses language like a native human speaker is not acknowledged to understand? There is no universally agreed-upon answer to this difficult question. According to one theory, whether or not one understands depends not only on one's behaviour but also on one's history: in order to be said to understand, one must have learned the language and have been trained to take one's place in the linguistic community by means of interaction with other language users.



Methods and goals in AI


Symbolic vs. connectionist approaches

AI research follows two distinct, and to some extent competing, methods: the symbolic (or "top-down") approach and the connectionist (or "bottom-up") approach. The top-down approach seeks to replicate intelligence by analyzing cognition independently of the biological structure of the brain, in terms of the processing of symbols, whence the symbolic label. The bottom-up approach, on the other hand, involves creating artificial neural networks in imitation of the brain's structure, whence the connectionist label.

To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by "tuning" the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.) In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions. Simply put, neural activities are the building blocks of the bottom-up approach, while symbolic descriptions are the building blocks of the top-down approach.
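The "tuning" of a network can be illustrated with a single artificial neuron (a perceptron) trained to tell two letters apart. The 3x3 pixel patterns, the learning rate, and the training schedule below are invented for this sketch.

```python
# Sketch: tuning one artificial neuron to distinguish a crude "T"
# from a crude "L", each given as a 3x3 pixel grid flattened to 9 inputs.

T = [1, 1, 1,
     0, 1, 0,
     0, 1, 0]
L = [1, 0, 0,
     1, 0, 0,
     1, 1, 1]

weights = [0.0] * 9
bias = 0.0

def fire(pattern):
    """The neuron fires (output 1, meaning "T") if the weighted sum
    of its inputs exceeds zero; otherwise output 0, meaning "L"."""
    s = bias + sum(w * x for w, x in zip(weights, pattern))
    return 1 if s > 0 else 0

# training loop: nudge the weights whenever the output is wrong
for _ in range(10):
    for pattern, target in ((T, 1), (L, 0)):
        error = target - fire(pattern)
        bias += 0.1 * error
        weights = [w + 0.1 * error * x for w, x in zip(weights, pattern)]

print(fire(T), fire(L))  # after tuning: 1 0
```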

In The Fundamentals of Learning (1932), Edward Thorndike, a psychologist at Columbia University, New York City, first suggested that human learning consists of some unknown property of connections between neurons in the brain. In The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University, Montreal, Canada, suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections. The notion of weighted connections is described in the later section Connectionism.

Allen Newell, a researcher at the RAND Corporation, Santa Monica, California, and Herbert Simon, a psychologist and computer scientist at Carnegie Mellon University, Pittsburgh, Pennsylvania, two vigorous advocates of symbolic AI, summed up the top-down approach in what they called the physical symbol system hypothesis. This hypothesis states that the processing of structures of symbols is sufficient, in principle, to produce artificial intelligence in a digital computer and, moreover, that human intelligence is the result of the same type of symbolic manipulations.

The top-down and bottom-up approaches were pursued simultaneously over the decades, and both achieved noteworthy, if limited, results. During the 1970s, however, bottom-up AI was neglected, and it was not until the 1980s that this approach again became prominent. Nowadays both approaches are followed, and both are acknowledged as facing difficulties. Symbolic techniques work in simplified realms but typically break down when confronted with the real world. Meanwhile, bottom-up researchers have been unable to replicate the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons whose pattern of interconnections is well known. Yet connectionist models have failed to mimic even this worm. Evidently, the neurons of connectionist theory are gross oversimplifications of the real thing.


Strong AI, applied AI, and cognitive simulation


Employing the methods outlined above, AI research attempts to reach one of three goals: strong AI, applied AI, or cognitive simulation. Strong AI aims to build machines that think. (The term strong AI was introduced for this category of research by John Searle, a philosopher at the University of California, Berkeley.) The ultimate ambition of strong AI is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. As is described in the section Early milestones in AI, this goal generated great interest over the decades, but such optimism has given way to an appreciation of the extreme difficulties involved. To date, progress has been meagre. Some critics doubt whether research will produce even a system with the overall intellectual ability of an ant in the foreseeable future. Indeed, some researchers working in AI's other two branches view strong AI as not worth pursuing.


Applied AI, also known as advanced information processing, aims to produce commercially viable "smart" systems, for example, "expert" medical diagnosis systems and stock-trading systems. Applied AI has enjoyed considerable success, as described in the section Expert systems.


In cognitive simulation, computers are used to test theories about how the human mind works, for example, theories about how people recognize faces or recall memories. Cognitive simulation is already a powerful tool in both neuroscience and cognitive psychology.




Alan Turing and the beginnings of AI


Theoretical work


The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing's stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing's conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.
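The stored-program idea can be sketched directly: the machine's program is simply a table of symbols that the scanner consults. The bit-flipping machine below is a made-up example for illustration, not one of Turing's.

```python
# Sketch: a minimal Turing machine. The program is stored as data,
# mapping (state, symbol read) -> (symbol to write, move, next state).

def run_turing_machine(program, tape, state="start"):
    cells = dict(enumerate(tape))  # unbounded tape as a dictionary
    head = 0
    while state != "halt":
        symbol = cells.get(head, " ")           # blank cells read as " "
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip()

# an example program: flip every binary digit, then halt on a blank
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine(flipper, "1011"))  # 0100
```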


During World War II, Turing was a leading cryptanalyst at the Government Code and Cypher School in Bletchley Park, England. Turing could not turn to the project of building a stored-program electronic computing machine until the cessation of hostilities in Europe. Nevertheless, during the war he gave considerable thought to the issue of machine intelligence. One of Turing's colleagues at Bletchley Park, Donald Michie (who later founded the Department of Machine Intelligence and Perception at the University of Edinburgh), recalled that Turing often discussed how computers could learn from experience as well as solve new problems through the use of guiding principles, a process now known as heuristic problem solving.


Turing gave quite possibly the earliest public lecture (London, 1947) to mention computer intelligence, saying, "What we want is a machine that can learn from experience," and that the "possibility of letting the machine alter its own instructions provides the mechanism for this." In 1948 he introduced many of the central concepts of AI in a report entitled "Intelligent Machinery." However, Turing did not publish this paper, and many of his ideas were later reinvented by others. For instance, one of Turing's original ideas was to train a network of artificial neurons to perform specific tasks, an approach described in the section Connectionism.


Chess


At Bletchley Park, Turing illustrated his ideas on machine intelligence by reference to chess, a useful source of challenging and clearly defined problems against which proposed methods for problem solving could be tested. In principle, a chess-playing computer could play by searching exhaustively through all the available moves, but in practice this is impossible because it would involve examining an astronomically large number of moves. Heuristics are necessary to guide a narrower, more discriminative search. Although Turing experimented with designing chess programs, in the absence of a computer to run his chess program he had to content himself with theory. The first true AI programs had to await the arrival of stored-program electronic digital computers.
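The shape of heuristic game search can be sketched with depth-limited minimax: instead of searching to the end of the game, the program searches a fixed number of moves ahead and then falls back on a heuristic evaluation. The tiny game tree and its leaf scores below are invented for illustration.

```python
# Sketch: depth-limited minimax search over an invented game tree.
# children(state) lists the positions reachable in one move;
# evaluate(state) is the heuristic stand-in for exhaustive search.

def minimax(state, depth, maximizing, children, evaluate):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)  # heuristic cutoff instead of full search
    scores = [minimax(m, depth - 1, not maximizing, children, evaluate)
              for m in moves]
    return max(scores) if maximizing else min(scores)

tree = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"]}
leaf_scores = {"d": 3, "e": 5, "f": 2, "g": 9}

best = minimax("a", 2, True,
               lambda s: tree.get(s, []),
               lambda s: leaf_scores.get(s, 0))
print(best)  # 3: the maximizer picks "b", where the worst reply is 3
```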


Turing predicted that computers would one day play very good chess, and in 1997 the chess computer Deep Blue, built by the International Business Machines Corporation (IBM), beat the reigning world champion, Garry Kasparov, in a six-game match. While Turing's prediction came true, his expectation that chess programming would contribute to the understanding of how human beings think did not. The huge improvement in computer chess since Turing's day is attributable to advances in computer engineering rather than advances in AI: Deep Blue's 256 parallel processors enabled it to examine 200 million possible moves per second and to look ahead as many as 14 turns of play. Many agree with Noam Chomsky, a linguist at the Massachusetts Institute of Technology (MIT), who opined that a computer beating a grandmaster at chess is about as interesting as a bulldozer winning an Olympic weightlifting competition.


Turing test


Turing sidestepped the traditional debate concerning the definition of intelligence by introducing a practical test for computer intelligence, now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer "No" in response to "Are you a computer?" and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing's test) the computer is considered an intelligent, thinking entity. In 1991 the American philanthropist Hugh Loebner started the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program has come close to passing an undiluted Turing test.

Early milestones in AI


First AI programs


The earliest successful AI program was written by Christopher Strachey, later director of the Programming Research Group at the University of Oxford. Strachey's checkers (draughts) program ran on the Ferranti Mark I computer at the University of Manchester. By the summer of 1952 this program could play a complete game of checkers at a reasonable speed.
Information about the earliest successful demonstration of machine learning was published in 1952. Shopper, written by Anthony Oettinger at the University of Cambridge, ran on the EDSAC computer. Shopper's simulated world was a mall of eight shops. When instructed to purchase an item, Shopper would search for it, visiting shops at random until the item was found. While searching, Shopper would memorize a few of the items stocked in each shop visited (just as a human shopper might). The next time Shopper was sent out for the same item, or for some other item that it had already located, it would go to the right shop straight away. This simple form of learning, as is pointed out in the introductory section What is intelligence?, is called rote learning.


The first AI program to run in the United States was also a checkers program, written by Arthur Samuel for the prototype of the IBM 701. Samuel took over the essentials of Strachey's checkers program and considerably extended it over a period of years. He added features that enabled the program to learn from experience. Samuel included mechanisms for both rote learning and generalization, enhancements that eventually led to his program's winning one game against a former Connecticut checkers champion.

Evolutionary computing


Samuel's checkers program was also notable for being one of the first efforts at evolutionary computing. (His program "evolved" by pitting a modified copy against the current best version of the program, with the winner becoming the new standard.) Evolutionary computing typically involves the use of some automatic method of generating and evaluating successive "generations" of a program, until a highly proficient solution evolves.


John Holland, a leading proponent of evolutionary computing, also wrote test software for the prototype of the IBM 701 computer. In particular, he helped design a neural-network "virtual" rat that could be trained to navigate through a maze. This work convinced Holland of the efficacy of the bottom-up approach. While continuing to consult for IBM, Holland moved to the University of Michigan in 1952 to pursue a doctorate in mathematics. He soon switched, however, to a new interdisciplinary program in computers and information processing (later known as communications science) created by Arthur Burks, one of the builders of ENIAC and its successor, EDVAC. In his dissertation, most likely the world's first computer science Ph.D., Holland proposed a new type of computer, a multiprocessor computer, that would assign each artificial neuron in a network to a separate processor. (Daniel Hillis later solved the engineering difficulties to build the first such computer, the 65,536-processor Thinking Machines Corporation supercomputer.)


Holland joined the faculty at Michigan after graduation and over the next four decades directed much of the research into methods of automating evolutionary computing, a process now known by the term genetic algorithms. Systems implemented in Holland's laboratory included a chess program, models of single-cell biological organisms, and a classifier system for controlling a simulated gas-pipeline network. Genetic algorithms are no longer restricted to "academic" demonstrations, however; in one important practical application, a genetic algorithm cooperates with a witness to a crime in order to generate a portrait of the criminal.
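The generate-evaluate-breed loop common to genetic algorithms can be sketched in a few lines. The bit-string target, population size, mutation rate, and selection scheme below are arbitrary choices for illustration, not taken from Holland's systems.

```python
# Sketch: a simple genetic algorithm evolving a bit string toward an
# arbitrary target. Fitness counts matching bits; each generation
# keeps the fittest half and refills with mutated copies of survivors.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(individual):
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual, rate=0.1):
    # each bit flips independently with the given probability
    return [1 - bit if random.random() < rate else bit
            for bit in individual]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                    # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]  # reproduction

print(max(fitness(ind) for ind in population))
```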


Logical reasoning and problem solving


The ability to reason logically is an important aspect of intelligence and has always been a major focus of AI research. An important landmark in this area was a theorem-proving program written by Allen Newell and J. Clifford Shaw of the RAND Corporation and Herbert Simon of Carnegie Mellon University. The Logic Theorist, as the program became known, was designed to prove theorems from Principia Mathematica, a three-volume work by the British philosopher-mathematicians Alfred North Whitehead and Bertrand Russell. In one instance, a proof devised by the program was more elegant than the proof given in the books.


Newell, Simon, and Shaw went on to write a more powerful program, the General Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the project for about a decade. GPS could solve an impressive variety of puzzles using a trial-and-error approach. However, one criticism of GPS, and of similar programs that lack any learning capability, is that the program's intelligence is entirely secondhand, coming from whatever information the programmer explicitly includes.


English dialogue


The two best-known early AI programs, ELIZA and PARRY, gave an eerie semblance of intelligent conversation. (Details of both were first published in 1966.) ELIZA, written by Joseph Weizenbaum of MIT's AI Laboratory, simulated a human therapist. PARRY, written by Stanford University psychiatrist Kenneth Colby, simulated a human paranoiac. Psychiatrists who were asked to decide whether they were communicating with PARRY or a human paranoiac were often unable to tell. Nevertheless, neither PARRY nor ELIZA could reasonably be described as intelligent. PARRY's contributions to the conversation were canned, constructed in advance by the programmer and stored away in the computer's memory. ELIZA, too, relied on canned sentences and simple programming tricks.


AI programming languages


In the course of their work on the Logic Theorist and GPS, Newell, Simon, and Shaw developed their Information Processing Language (IPL), a computer language tailored for AI programming. At the heart of IPL was a highly flexible data structure that they called a list. A list is simply an ordered sequence of items of data. Some or all of the items in a list may themselves be lists. This scheme leads to richly branching structures.


John McCarthy combined elements of IPL with the lambda calculus (a formal mathematical-logical system) to produce the programming language LISP (List Processor), which remains a principal language for AI work in the United States. (The lambda calculus itself was invented in 1936 by the Princeton logician Alonzo Church while he was investigating the abstract Entscheidungsproblem, or "decision problem.")
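The richly branching list structure at the heart of IPL and LISP can be seen in a small sketch, with a nested Python list standing in for a LISP S-expression; the tiny evaluator and its two operators are illustrative only.

```python
# Sketch: a LISP-style expression such as (TIMES 2 (PLUS 3 4))
# written as a nested list, plus a tiny evaluator that walks it.

expression = ["TIMES", 2, ["PLUS", 3, 4]]

def evaluate(expr):
    """Reduce inner lists first, then apply the operator at the head."""
    if not isinstance(expr, list):
        return expr                      # a bare number evaluates to itself
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "PLUS":
        return sum(values)
    if op == "TIMES":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError("unknown operator: " + str(op))

print(evaluate(expression))  # 14
```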


The logic programming language PROLOG (Programmation en Logique) was conceived by Alain Colmerauer at the University of Aix-Marseille, France, where the language was first implemented. PROLOG was further developed by the logician Robert Kowalski, a member of the AI group at the University of Edinburgh. The language makes use of a powerful theorem-proving technique known as resolution, invented in 1963 by the British logician Alan Robinson at the Argonne National Laboratory of the United States Atomic Energy Commission in Illinois. PROLOG can determine whether or not a given statement follows logically from other given statements. For example, given the statements "All logicians are logical" and "Robinson is a logician," a PROLOG program responds in the affirmative to the query "Is Robinson logical?" PROLOG is widely used for AI work, especially in Europe and Japan.
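The Robinson example amounts to one rule applied to one stored fact. A minimal backward-chaining sketch of it follows; this is a toy in Python, not real PROLOG, and falls far short of full resolution.

```python
# Sketch: answering "does predicate hold of subject?" by checking
# stored facts, then trying rules whose conclusion matches the query.

facts = {("logician", "Robinson")}      # Robinson is a logician
rules = [("logician", "logical")]       # all logicians are logical

def query(predicate, subject):
    if (predicate, subject) in facts:
        return True
    # a rule (premise, conclusion) proves the query if its conclusion
    # matches and its premise can itself be proved for the subject
    return any(conclusion == predicate and query(premise, subject)
               for premise, conclusion in rules)

print(query("logical", "Robinson"))  # True
print(query("logical", "Socrates"))  # False: no supporting fact
```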


Researchers at the Institute for New Generation Computer Technology in Tokyo have used PROLOG as the basis for sophisticated logic programming languages. Known as fifth-generation languages, these are in use on nonnumerical parallel computers developed at the institute.


Other recent work includes the development of languages for reasoning about time-dependent data such as "the account was paid yesterday." These languages are based on tense logic, which permits statements to be located in the flow of time. (Tense logic was invented by the philosopher Arthur Prior at the University of Canterbury, Christchurch, New Zealand.)


Microworld programs


To cope with the bewildering complexity of the real world, scientists often ignore less relevant details; for instance, physicists often ignore friction and elasticity in their models. In 1970 Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should likewise focus on developing programs capable of intelligent behaviour in simpler artificial environments known as microworlds. Much research has focused on the so-called blocks world, which consists of blocks of various shapes and sizes arrayed on a flat surface.


An early success of the microworld approach was SHRDLU, written by Terry Winograd of MIT. SHRDLU controlled a robot arm that operated above a flat surface strewn with play blocks. Both the arm and the blocks were virtual. SHRDLU would respond to commands typed in natural English, such as "Will you please stack up both of the red blocks and either a green cube or a pyramid." The program could also answer questions about its own actions. Although SHRDLU was initially hailed as a major breakthrough, Winograd soon announced that the program was, in fact, a dead end. The techniques pioneered in the program proved unsuitable for application in wider, more interesting worlds. Moreover, the appearance that SHRDLU gave of understanding the blocks microworld, and English statements concerning it, was in fact an illusion. SHRDLU had no idea what a green block was.


Another product of the microworld approach was Shakey, a mobile robot developed at the Stanford Research Institute by Bertram Raphael, Nils Nilsson, and others. The robot occupied a specially built microworld consisting of walls, doorways, and a few simply shaped wooden blocks. Each wall had a carefully painted baseboard to enable the robot to "see" where the wall met the floor (a simplification of reality that is typical of the microworld approach). Shakey had about a dozen basic abilities, such as TURN, PUSH, and CLIMB-RAMP.


Shakey, the mobile robot, was developed at the Stanford Research Institute, Menlo Park, California. The robot was equipped with a television camera, a range finder, and collision sensors that enabled a minicomputer to control its actions remotely. Shakey could perform a few basic actions, such as moving forward and backward and pushing, albeit at a very slow pace. Contrasting colours, particularly the dark baseboard on each wall, helped the robot to distinguish separate surfaces.


Critics pointed out the highly simplified nature of Shakey's environment and emphasized that, despite these simplifications, Shakey operated excruciatingly slowly; a series of actions that a human could plan out and execute in minutes took Shakey days.


The greatest success of the microworld approach is a type of program described in the next section, the expert system.



Expert systems


Expert systems occupy a type of microworld, for example, a model of a ship's hold and its cargo, that is self-contained and relatively uncomplicated. For such AI systems every effort is made to incorporate all the information about some narrow field that an expert (or group of experts) would know, so that a good expert system can often outperform any single human expert. There are many commercial expert systems, including programs for medical diagnosis, chemical analysis, credit authorization, financial management, corporate planning, financial document routing, oil and mineral prospecting, genetic engineering, automobile design and manufacture, camera lens design, computer installation design, airline scheduling, cargo placement, and automatic help services for home computer owners.


Knowledge and inference


The basic components of an expert system are a knowledge base, or KB, and an inference engine. The information to be stored in the KB is obtained by interviewing people who are expert in the area in question. The interviewer, or knowledge engineer, organizes the information elicited from the experts into a collection of rules, typically of an "if-then" structure. Rules of this type are called production rules. The inference engine enables the expert system to draw deductions from the rules in the KB. For example, if the KB contains the production rules "if x, then y" and "if y, then z," the inference engine is able to deduce "if x, then z." The expert system might then query its user, "Is x true in the situation that we are considering?" If the answer is affirmative, the system will proceed to infer z.
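The x-y-z example can be run as a few lines of forward chaining. This is a sketch of the idea only, not a production-quality inference engine.

```python
# Sketch: an inference engine that applies "if-then" production rules
# to a set of known facts until no new conclusions appear.

def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule
                changed = True
    return facts

rules = [("x", "y"), ("y", "z")]   # "if x, then y" and "if y, then z"
print(forward_chain(rules, {"x"}))  # starting from x, deduces y and z
```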


Some expert systems use fuzzy logic. In standard logic there are only two truth values, true and false. This absolute precision makes vague attributes or situations difficult to characterize. (When, precisely, does a thinning head of hair become a bald head?) Often the rules that human experts use contain vague expressions, and so it is useful for an expert system's inference engine to employ fuzzy logic.
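A minimal sketch of fuzzy truth values follows: degrees of truth between 0 (false) and 1 (true), combined with the common min/max/complement connectives. The membership values used are invented for illustration.

```python
# Sketch: fuzzy truth values and the usual fuzzy connectives.

def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

# "the fever is high" need not be simply true or false
fever_is_high = 0.7
fatigue_is_severe = 0.4

print(fuzzy_and(fever_is_high, fatigue_is_severe))  # 0.4
print(fuzzy_or(fever_is_high, fatigue_is_severe))   # 0.7
print(round(fuzzy_not(fever_is_high), 1))           # 0.3
```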


DENDRAL


The AI researcher Edward Feigenbaum and the geneticist Joshua Lederberg, both of Stanford University, began work on Heuristic DENDRAL (later shortened to DENDRAL), an expert system for chemical analysis. The substance to be analyzed might, for example, be a complicated compound of carbon, hydrogen, and nitrogen. Starting from spectrographic data obtained from the substance, DENDRAL would hypothesize the substance's molecular structure. DENDRAL's performance rivalled that of chemists expert at this task, and the program was used in industry and in academia.


MYCIN


Work on MYCIN, an expert system for treating blood infections, began at Stanford University in 1972. MYCIN would attempt to diagnose patients based on reported symptoms and medical test results. The program could request further information concerning the patient, as well as suggest additional laboratory tests, to arrive at a probable diagnosis, after which it would recommend a course of treatment. If requested, MYCIN would explain the reasoning that led to its diagnosis and recommendation. Using about 500 production rules, MYCIN operated at roughly the same level of competence as human specialists in blood infections and rather better than general practitioners.


However, expert systems have no common sense or understanding of the limits of their expertise. For instance, if MYCIN were told that a patient who had received a gunshot wound was bleeding to death, the program would attempt to diagnose a bacterial cause for the patient's symptoms. Expert systems can also act on absurd clerical errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age data were accidentally transposed.

CYC project


CYC is a large experiment in symbolic AI. The project began in 1984 under the auspices of the Microelectronics and Computer Technology Corporation, a consortium of computer, semiconductor, and electronics manufacturers. In 1995 Douglas Lenat, the CYC project director, spun off the project as Cycorp, Inc., based in Austin, Texas. The most ambitious goal of Cycorp was to build a KB containing a significant percentage of the commonsense knowledge of a human being. Millions of commonsense assertions, or rules, were coded into CYC. The expectation was that this "critical mass" would allow the system itself to extract further rules directly from ordinary prose and eventually serve as the foundation for future generations of expert systems.


With only a fraction of its commonsense KB compiled, CYC could draw inferences that would defeat simpler systems. For example, CYC could infer "Garcia is wet" from the statement "Garcia is finishing a marathon run" by employing its rules that running a marathon entails high exertion, that people sweat at high levels of exertion, and that when something sweats it is wet. Among the outstanding problems are issues in searching and problem solving - for example, how to search the KB automatically for information that is relevant to a given problem. AI researchers call the problem of updating, searching, and otherwise manipulating a large structure of symbols in realistic amounts of time the frame problem. Some critics of symbolic AI believe that the frame problem is largely unsolvable and so maintain that the symbolic approach will never yield genuinely intelligent systems. It may be that CYC, for example, will succumb to the frame problem long before the system achieves human levels of knowledge.
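The Garcia example can be pictured as chaining simple rules, each conclusion becoming the premise of the next. The toy sketch below shows such chaining; the one-fact-per-rule encoding is a deliberate oversimplification, and CYC's actual representation language is far richer.

```python
# Toy forward-chaining sketch of the "Garcia is wet" inference
# (hypothetical encoding for illustration only).

rules = {
    "finishing_marathon": "high_exertion",  # running a marathon entails exertion
    "high_exertion": "sweating",            # high exertion makes people sweat
    "sweating": "wet",                      # whatever sweats is wet
}

def infer(fact, rules):
    """Chain rules from a starting fact until no rule applies."""
    chain = [fact]
    while chain[-1] in rules:
        chain.append(rules[chain[-1]])
    return chain

print(infer("finishing_marathon", rules))
# ['finishing_marathon', 'high_exertion', 'sweating', 'wet']
```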




Connectionism


Connectionism, or neuronlike computing, developed out of attempts to understand how the human brain works at the neural level and, in particular, how people learn and remember. Warren McCulloch, a neurophysiologist at the University of Illinois, and Walter Pitts, a mathematician at the University of Chicago, published an influential treatise on neural networks and automatons, according to which each neuron in the brain is a simple digital processor and the brain as a whole is a form of computing machine. As McCulloch put it later, "What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine."


Creating an artificial neural network


Belmont Farley and Wesley Clark of MIT succeeded in running the first artificial neural network, albeit one limited by computer memory to no more than 128 neurons. They were able to train their networks to recognize simple patterns. In addition, they discovered that the random destruction of up to 10 percent of the neurons in a trained network did not affect the network's performance - a feature reminiscent of the brain's ability to tolerate limited damage inflicted by surgery, accident, or disease.


The simple neural network depicted in the figure illustrates the central ideas of connectionism. Four of the network's five neurons are for input, and the fifth - to which each of the others is connected - is for output. Each of the neurons is either firing (1) or not firing (0). Each connection leading to the output neuron, N, has a "weight." What is called the total weighted input into N is calculated by adding up the weights of all the connections leading to N from neurons that are firing. For example, suppose that only two of the input neurons, X and Y, are firing. Since the weight of the connection from X to N is 1.5 and the weight of the connection from Y to N is 2, it follows that the total weighted input to N is 3.5. As shown in the figure, N has a firing threshold of 4. That is, if N's total weighted input equals or exceeds 4, then N fires; otherwise, N does not fire. So, for example, N does not fire if the only input neurons to fire are X and Y, but N does fire if X, Y, and Z all fire.
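The behavior of the output neuron N can be sketched in a few lines. The weights for X and Y (1.5 and 2) and the threshold (4) come from the text; the weight of 1 for the connection from Z is an assumption, chosen only so that X, Y, and Z together exceed the threshold, as the text requires.

```python
# Threshold unit as described above. Weights from X and Y (1.5, 2) and the
# threshold (4) are from the text; the weight for Z (1) is an assumption.

def fires(inputs, weights, threshold=4):
    """Return 1 if the total weighted input from firing neurons meets the threshold."""
    total = sum(w for x, w in zip(inputs, weights) if x)  # firing neurons only
    return 1 if total >= threshold else 0

weights = [1.5, 2, 1]             # connections from X, Y, Z to N
print(fires([1, 1, 0], weights))  # X and Y only: 3.5 < 4, so 0
print(fires([1, 1, 1], weights))  # X, Y, and Z: 4.5 >= 4, so 1
```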


Training the network involves two steps. First, an external agent inputs a pattern and observes the behavior of N. Second, the agent adjusts the connection weights in accordance with the following rules:


If the actual output is 0 and the desired output is 1, increase by a small fixed amount the weight of each connection leading to N from neurons that are firing (thus making it more likely that N will fire the next time the network is given the same input);

If the actual output is 1 and the desired output is 0, decrease by that same small amount the weight of each connection leading to N from neurons that are firing (thus making it less likely that N will fire the next time the network is given that input).

The external agent - actually a computer program - goes through this two-step procedure with each pattern in a training sample, and the whole process is repeated a number of times. During these many repetitions, a pattern of connection weights is forged that enables the network to respond correctly to each pattern. The striking thing is that the learning process is entirely mechanical and requires no human intervention or adjustment. The connection weights are increased or decreased automatically by a constant amount, and exactly the same learning procedure applies to different tasks.
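The two training rules can be sketched as a single update step. The step size of 0.25 and the starting weights below are illustrative assumptions; only the rule itself, raise the weights of firing connections when N should have fired and lower them when it should not have, comes from the text.

```python
# Sketch of the two-step training rule described above. The step size
# (0.25) and initial weights are assumptions chosen for illustration.

def train_step(inputs, weights, desired, threshold=4, step=0.25):
    """Present one pattern; nudge weights of firing connections if N erred."""
    total = sum(w for x, w in zip(inputs, weights) if x)
    actual = 1 if total >= threshold else 0
    if actual != desired:
        delta = step if desired == 1 else -step
        # Only connections from firing input neurons are adjusted.
        weights = [w + delta if x else w for x, w in zip(inputs, weights)]
    return weights

weights = [1.5, 2.0, 1.0]
# Repeatedly present the pattern "X and Y firing" with desired output 1.
for _ in range(10):
    weights = train_step([1, 1, 0], weights, desired=1)
print(weights)  # [1.75, 2.25, 1.0]: one corrective step made N fire
```

After a single correction the total weighted input for the pattern reaches 4.0, so N fires and the weights stop changing, which is the mechanical, self-terminating character of the procedure the text describes.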

Perceptrons


Frank Rosenblatt of the Cornell Aeronautical Laboratory at Cornell University in New York began investigating artificial neural networks that he called perceptrons. He made major contributions to the field of AI, both through experimental investigations of the properties of neural networks (using computer simulations) and through detailed mathematical analysis. Rosenblatt was a charismatic communicator, and soon many research groups in the United States were studying perceptrons. Rosenblatt and his followers called their approach connectionist to emphasize the importance in learning of the creation and modification of connections between neurons, and modern researchers have adopted this term.


One of Rosenblatt's contributions was to generalize the training procedure that Farley and Clark had applied to only two-layer networks so that it could be applied to multilayer networks. Rosenblatt used the phrase "back-propagating error correction" to describe his method. The method, with substantial improvements and extensions by numerous scientists, and the term back-propagation are now in everyday use in connectionism.
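A minimal sketch of back-propagation in its modern form, gradient descent on a tiny two-layer network, is given below. Rosenblatt's original procedure differed in its details; the network size, learning rate, and single training pattern here are arbitrary choices for illustration.

```python
# Modern back-propagation sketch: gradient descent on squared error for a
# 2-input, 2-hidden-unit, 1-output sigmoid network. All sizes and
# parameters are illustrative assumptions.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    """Propagate an input vector through one hidden layer to one output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    return h, y

def backprop_step(x, target, w_hidden, w_out, lr=0.5):
    """One gradient-descent update for a single training pattern."""
    h, y = forward(x, w_hidden, w_out)
    d_out = (y - target) * y * (1 - y)                # output-layer error term
    d_hidden = [d_out * w_out[j] * h[j] * (1 - h[j])  # error propagated back
                for j in range(len(h))]
    w_out = [w_out[j] - lr * d_out * h[j] for j in range(len(h))]
    w_hidden = [[w - lr * d_hidden[j] * xi for w, xi in zip(w_hidden[j], x)]
                for j in range(len(h))]
    return w_hidden, w_out

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

x, target = [1.0, 0.0], 1.0
_, before = forward(x, w_hidden, w_out)
for _ in range(100):
    w_hidden, w_out = backprop_step(x, target, w_hidden, w_out)
_, after = forward(x, w_hidden, w_out)
print(abs(target - after) < abs(target - before))  # True: the error shrank
```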


Conjugating verbs


In one well-known connectionist experiment, David Rumelhart and James McClelland trained a network of 920 artificial neurons, arranged in two layers of 460 neurons, to form the past tenses of English verbs. Root forms of verbs - such as come, see, and sleep - were presented to one layer of neurons, the input layer. A supervisory computer program observed the difference between the actual response at the layer of output neurons and the desired response (came, for example) and then adjusted the connections throughout the network in accordance with the procedure described above, giving the network a slight push in the direction of the correct response. About 400 different verbs were presented one by one to the network, and the connections were adjusted after each presentation. This whole procedure was repeated about 200 times using the same verbs, after which the network could correctly form the past tenses of many unfamiliar verbs as well as of the original verbs. For example, when presented for the first time with guard, the network responded guarded; with weep, wept; with cling, clung; and with drip, dripped (complete with double p). This is a striking example of learning involving generalization. (Sometimes, though, the peculiarities of English were too much for the network, and it formed squawked from squat, shipped from shape, and membled from mail.)


Another name for connectionism is parallel distributed processing, which emphasizes two important features. First, a large number of relatively simple processors - the neurons - operate in parallel. Second, neural networks store information in a distributed fashion, with each individual connection participating in the storage of many different items of information. The know-how that enabled the network to form wept from weep, for example, was not stored in one specific location but was spread throughout the entire pattern of connection weights forged during training. The human brain also appears to store information in a distributed fashion, and connectionist research is contributing to attempts to understand how it does so.


Other neural networks


Other work on neuronlike computing includes the following:


Visual perception. Networks can recognize faces and other objects from visual data. A neural network designed by John Hummel and Irving Biederman at the University of Minnesota can identify about 10 objects from simple line drawings. The network is able to recognize the objects - which include a mug and a frying pan - even when they are drawn from different angles. Networks investigated by Tomaso Poggio of MIT are able to recognize bent-wire shapes drawn from different angles, faces photographed from different angles and showing different expressions, and objects from cartoon drawings with gray-scale shading indicating depth and orientation.

Language processing. Neural networks are able to convert handwritten and typewritten material to electronic text. The United States Internal Revenue Service has commissioned a neuronlike system that automatically reads tax returns and correspondence. Neural networks also convert speech to printed text and printed text to speech.


Financial analysis. Neural networks are being used increasingly for loan risk assessment, real estate valuation, bankruptcy prediction, share price prediction, and other business applications.


Medicine. Medical applications include detecting lung nodules and heart arrhythmias and predicting adverse drug reactions.


Telecommunications. Telecommunications applications of neural networks include control of telephone switching networks and echo cancellation in modems and on satellite links.




Nouvelle AI


New foundations


The approach now known as nouvelle AI was pioneered at the MIT AI Laboratory by the Australian Rodney Brooks during the late 1980s. Nouvelle AI distances itself from strong AI, with its emphasis on human-level performance, in favor of the relatively modest aim of insect-level performance. At a very fundamental level, nouvelle AI rejects symbolic AI's reliance upon constructing internal models of reality, such as those described in the section Microworld programs. Practitioners of nouvelle AI assert that true intelligence involves the ability to function in a real-world environment.


A central idea of nouvelle AI is that intelligence, as expressed by complex behavior, "emerges" from the interaction of a few simple behaviors. For example, a robot whose simple behaviors include collision avoidance and motion toward a moving object will appear to stalk the object, pausing whenever it gets too close.
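The interaction of simple behaviors can be sketched as a priority scheme in which a higher-priority behavior suppresses the others, loosely in the spirit of Brooks's subsumption architecture. The behaviors, sensor names, and thresholds below are hypothetical, chosen only to mirror the stalking example above.

```python
# Toy behaviour-arbitration sketch (hypothetical behaviours and sensor
# readings; not Brooks's actual subsumption-architecture code).

def collision_avoidance(sensors):
    if sensors["distance_to_object"] < 1.0:
        return "stop"  # too close: suppress any lower-priority behaviour
    return None

def move_toward_object(sensors):
    if sensors["object_visible"]:
        return "advance"
    return None

def act(sensors, behaviours):
    """Earlier (higher-priority) behaviours subsume later ones."""
    for behaviour in behaviours:
        action = behaviour(sensors)
        if action:
            return action
    return "wander"

behaviours = [collision_avoidance, move_toward_object]
print(act({"distance_to_object": 5.0, "object_visible": True}, behaviours))   # advance
print(act({"distance_to_object": 0.5, "object_visible": True}, behaviours))   # stop
```

Neither behavior alone "stalks," yet together they produce pursuit that pauses on approach - the emergent, seemingly goal-directed behavior the text describes.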


One famous example of nouvelle AI is Brooks's robot Herbert (named after Herbert Simon), whose environment is the busy offices of the MIT AI Laboratory. Herbert searches desks and tables for empty soda cans, which it picks up and carries away. The robot's seemingly goal-directed behavior emerges from the interaction of about 15 simple behaviors. More recently, Brooks has constructed prototypes of mobile robots for exploring the surface of Mars. (See the photograph and an interview with Rodney Brooks.)


Nouvelle AI sidesteps the frame problem discussed in the section CYC project. Nouvelle systems do not contain a complicated symbolic model of their environment. Instead, information is left "out in the world" until such time as the system needs it. A nouvelle system refers continuously to its sensors rather than to an internal model of the world: it "reads off" from the external world whatever information it needs at precisely the time it needs it. (As Brooks insisted, the world is its own best model - always exactly up-to-date and complete in every detail.)



The situated approach


Traditional AI has by and large attempted to build disembodied intelligences whose only interaction with the world is indirect (CYC, for example). Nouvelle AI, on the other hand, attempts to build embodied intelligences situated in the real world - a method referred to as the situated approach. Brooks quoted approvingly from Turing's brief sketches of the situated approach. By equipping a machine "with the best sense organs that money can buy," Turing wrote, the machine might be taught "to understand and speak English" by a process that would "follow the normal teaching of a child." Turing contrasted this with the approach to AI that focuses on abstract activities, such as playing chess. He advocated that both approaches be pursued, but until recently little attention was paid to the situated approach.


The situated approach was also anticipated in the writings of Hubert Dreyfus, a philosopher at the University of California, Berkeley. Beginning in the early 1960s, Dreyfus opposed the physical symbol system hypothesis, arguing that intelligent behavior cannot be completely captured by symbolic descriptions. As an alternative, Dreyfus advocated a view of intelligence stressing the need for a body that can move about and interact directly with tangible physical objects. Once reviled by advocates of AI, Dreyfus is now regarded as a prophet of the situated approach.


Critics of nouvelle AI point to its failure to produce a system exhibiting anything like the complexity of behavior found in real insects. Suggestions by researchers that their nouvelle systems may soon be conscious and possess language seem entirely premature.




Is strong AI possible?


As described in the preceding sections of this article, the ongoing success of applied AI and of cognitive simulation seems assured. However, strong AI - that is, artificial intelligence that aims to duplicate human intellectual abilities - remains controversial. Exaggerated claims of success, in professional journals as well as in the popular press, have damaged its reputation. At present, even a system that demonstrates the overall intelligence of a cockroach is proving elusive, let alone one that can rival a human being. The difficulty of scaling up AI's modest achievements cannot be overstated. Decades of research in symbolic AI have failed to produce any firm evidence that a symbol system can manifest human levels of general intelligence; connectionists are unable to model the nervous systems of even the simplest invertebrates; and critics of nouvelle AI regard as simply mystical the view that high-level behaviors involving language understanding, planning, and reasoning will somehow emerge from the interaction of basic behaviors such as obstacle avoidance, gaze control, and object manipulation.


However, this lack of substantial progress may simply be testimony to the difficulty of strong AI, not to its impossibility. Let us turn to the very idea of strong artificial intelligence. Can a computer possibly think? Noam Chomsky suggests that debating this question is pointless, for it is an essentially arbitrary decision whether to extend common usage of the word think to include machines. There is, on this view, no factual question as to whether any such decision is right or wrong - just as there is no question as to whether our decision to say that airplanes fly is right, or our decision not to say that ships swim is wrong. However, this seems to oversimplify matters. The important question is, Could it ever be appropriate to say that computers think, and, if so, what conditions must a computer satisfy in order to be so described?


Some authors offer the Turing test as a definition of intelligence. However, Turing himself pointed out that a computer that ought to be described as intelligent might nevertheless fail his test if it were incapable of successfully imitating a human being. For example, why should an intelligent robot designed to oversee mining on the Moon necessarily be able to pass itself off in conversation as a human being? If an intelligent entity can fail the test, then the test cannot function as a definition of intelligence. It is even questionable whether passing the test would actually show that a computer is intelligent, as the information theorist Claude Shannon and the AI pioneer John McCarthy pointed out. They argued that it is possible, in principle, to design a machine with a complete set of canned responses to all the questions that an interrogator could ask within the fixed time span of the test. Like PARRY, this machine would produce answers to the interviewer's questions by looking up appropriate responses in a giant table. This objection seems to show that in principle a system with no intelligence at all could pass the Turing test.


In fact, AI has no real definition of intelligence to offer, not even in the subhuman case. Mice are intelligent, but what exactly must an artificial intelligence achieve before researchers are entitled to claim this level of success? In the absence of a reasonably precise criterion for when an artificial system counts as intelligent, there is no objective way of telling whether an AI research program has succeeded or failed. One result of AI's failure to produce a satisfactory criterion of intelligence is that, whenever researchers achieve one of AI's goals - for example, a program that can summarize newspaper articles or beat the world chess champion - critics are able to say "That's not intelligence!" Marvin Minsky's response to the problem of defining intelligence is to maintain - like Turing before him - that intelligence is simply our name for whatever problem-solving mental processes we do not yet understand; it disappears as soon as it is understood.

B.J. Copeland
