Artificial Intelligence (AI) is the branch of computer science that aims to create intelligent machines. John McCarthy defined it as "the science and engineering of making intelligent machines."
Today, AI has become an essential part of the technology industry, providing the heavy lifting for many of the most complicated problems in computer science. AI research is highly technical and specialized, and is deeply divided into sub-fields that often fail to communicate with each other. Sub-fields have grown up around particular institutions, the work of individual researchers, and the solution of specific problems. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception, and the ability to move and manipulate objects. General intelligence is still a long-term goal of (some) research.
Branches:
Some of the branches of AI are listed as follows:
Logical AI:
What a program knows about the world in general, the details of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.
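As a rough illustration of this idea, the sketch below holds what the program "knows" as simple sentences (facts and if-then rules) and infers which action achieves its goal. The robot, rooms, and predicates are invented purely for the example.

```python
# A minimal sketch of the logical-AI idea: knowledge is held as sentences
# (facts and if-then rules), and the program decides what to do by inferring
# which action makes its goal derivable. All predicates are invented.

facts = {("at", "robot", "room1"), ("in", "key", "room2")}
rules = [
    # (premises, conclusion): if all premises hold, the conclusion can be added
    ([("at", "robot", "room2"), ("in", "key", "room2")], ("has", "robot", "key")),
]
goal = ("has", "robot", "key")

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new sentences can be inferred."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The program "decides what to do" by checking which action makes the goal derivable.
actions = {"go_to_room2": ("at", "robot", "room2")}
for action, effect in actions.items():
    if goal in forward_chain(facts | {effect}, rules):
        print("Appropriate action:", action)
```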
Search:
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem-proving program. Discoveries are continually made about how to do this more efficiently in various domains.
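The toy sketch below illustrates the basic search idea on an invented number puzzle: possibilities are explored systematically until the goal is found, with a simple pruning rule standing in for the domain-specific efficiency tricks mentioned above. Chess or theorem proving would simply supply different states and successor moves.

```python
# A minimal sketch of search: systematically exploring possibilities until a
# goal is reached. The "state" and "moves" here are a toy number puzzle.
from collections import deque

def successors(n):
    """Possible moves from a state: double the number or add one."""
    return [n * 2, n + 1]

def breadth_first_search(start, goal):
    frontier = deque([(start, [])])      # (state, path of states so far)
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path + [state]
        for nxt in successors(state):
            if nxt not in seen and nxt <= goal:   # prune states that overshoot the goal
                seen.add(nxt)
                frontier.append((nxt, path + [state]))
    return None

print(breadth_first_search(1, 37))   # one shortest sequence of states from 1 to 37
```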
Representation:
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
Inference:
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises.
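The bird/penguin example can be sketched directly. The little rule set below is invented for illustration; it shows how the default conclusion is withdrawn when contrary evidence is added.

```python
# A minimal sketch of default (non-monotonic) reasoning: "birds fly" is assumed
# by default, but the conclusion is withdrawn when contrary evidence (it is a
# penguin) is added to the premises.

def conclusions(facts):
    derived = set(facts)
    # Default rule: a bird is assumed to fly unless we know it is a penguin.
    if "bird" in derived and "penguin" not in derived:
        derived.add("can_fly")
    if "penguin" in derived:
        derived.add("cannot_fly")
    return derived

print(conclusions({"bird"}))             # {'bird', 'can_fly'}
print(conclusions({"bird", "penguin"}))  # 'can_fly' is withdrawn: non-monotonic
```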
Pattern recognition:
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
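A minimal sketch of the simple-pattern case is template matching, as below; the tiny "image" and the eyes-and-nose template are invented toy data.

```python
# A minimal sketch of pattern recognition as template matching: the program
# compares what it "sees" against a stored pattern.

image = [
    ".....",
    ".o.o.",
    "..^..",
    ".....",
    ".....",
]
template = [
    "o.o",
    ".^.",
]

def matches_at(image, template, row, col):
    """True if the template appears in the image with its top-left at (row, col)."""
    return all(
        image[row + r][col + c] == template[r][c]
        for r in range(len(template))
        for c in range(len(template[0]))
    )

def find_pattern(image, template):
    for r in range(len(image) - len(template) + 1):
        for c in range(len(image[0]) - len(template[0]) + 1):
            if matches_at(image, template, r, c):
                return (r, c)
    return None

print(find_pattern(image, template))  # (1, 1): the "face" pattern found at row 1, column 1
```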
Learning from experience:
The approaches to AI based on connectionism and neural networks specialize in this. There is also learning of laws expressed in logic. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
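As an illustration of learning in the connectionist style, the sketch below trains a single artificial neuron (a perceptron) on the standard toy task of learning the logical AND function; it is not meant to represent any particular system.

```python
# A minimal sketch of learning from experience: a single artificial neuron
# (perceptron) adjusts its weights from examples of the logical AND function.

examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(20):                       # repeated passes over the experience
    for x, target in examples:
        error = target - predict(x)       # adjust only when the prediction is wrong
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] once AND has been learned
```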
Heuristic:
A heuristic is a way of trying to discover something or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal.
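For example, a heuristic function for a grid world might be the Manhattan distance to the goal, used below to order a greedy best-first search; the grid and goal are invented for illustration.

```python
# A minimal sketch of a heuristic function: an estimate of how far a search
# node seems to be from the goal, used to decide which node to expand next.
import heapq

goal = (3, 3)

def heuristic(node):
    """Manhattan distance: an optimistic guess at the remaining cost."""
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

def greedy_best_first(start):
    frontier = [(heuristic(start), start)]
    seen = {start}
    while frontier:
        _, (x, y) = heapq.heappop(frontier)   # expand the node that looks closest
        if (x, y) == goal:
            return True
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt not in seen and 0 <= nxt[0] <= 3 and 0 <= nxt[1] <= 3:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return False

print(greedy_best_first((0, 0)))  # True: the heuristic steers the search toward (3, 3)
```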
Epistemology:
This is a study of the kinds of knowledge that are required for solving problems in the world.
Ontology:
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are.
Applications:
Some of the important applications of AI are described below:
Playing games:
You can buy machines that can play master level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
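The brute-force idea can be sketched with minimax search on a tiny invented game tree; a real chess program applies the same principle over vastly more positions.

```python
# A minimal sketch of the brute-force idea behind game playing: minimax search
# looks ahead over all continuations and picks the move that is best assuming
# the opponent also plays its best. The game tree here is invented.

game_tree = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
# Scores of terminal positions from the first player's point of view.
values = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

def minimax(position, maximizing):
    if position in values:                     # terminal position: use its score
        return values[position]
    children = [minimax(c, not maximizing) for c in game_tree[position]]
    return max(children) if maximizing else min(children)

best = max(game_tree["start"], key=lambda move: minimax(move, maximizing=False))
print(best)  # "a": its worst case (3) beats the worst case of "b" (2)
```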
Speech recognition:
Computer speech recognition has reached a practical level for limited purposes. Thus it is possible to instruct some computers using speech, which is more convenient than using a keyboard and mouse.
Understanding natural language:
Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the area the text is about, and this is presently possible only for very limited areas.
Computer vision:
The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
Expert systems:
A "knowledge engineer" interviews experts in a certain domain and tries to represent their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments but did not include patients, doctors, hospitals, death, recovery, or events occurring in time. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. Thus the usefulness of current expert systems depends on their users having common sense.
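The rule-based style behind such systems can be sketched as below; the medical rules are invented for illustration and are not MYCIN's actual knowledge base.

```python
# A minimal sketch of the expert-system style: knowledge elicited from an
# expert is encoded as if-then rules and applied to a patient's findings.
# These rules are invented for illustration only.

rules = [
    ({"fever", "stiff_neck"}, "suspect bacterial meningitis"),
    ({"fever", "cough"}, "suspect respiratory infection"),
]

def diagnose(findings):
    """Return the conclusion of every rule whose conditions all hold."""
    return [conclusion for conditions, conclusion in rules
            if conditions <= findings]

print(diagnose({"fever", "cough", "fatigue"}))  # ['suspect respiratory infection']
```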
(This article covers the 7th unit of the RTMNU MBA 3rd semester IT syllabus notes. Further topics will be covered in upcoming blogs.)