"Computer Science and Computer Engineering". Development of natural language interfaces and machine translation


The development of artificial intelligence

The history of artificial intelligence began not so long ago. In the second half of the 20th century the concept of artificial intelligence emerged, and several definitions were proposed. One of the first definitions, which despite its considerable breadth of interpretation has still not lost its relevance, presents artificial intelligence as "a way to make a computer think like a person."

The drive to intellectualize computing systems stems from a person's need to find solutions amid such realities of the modern world as inaccuracy, ambiguity, uncertainty, fuzziness and unreliable information. The need to increase the speed and adequacy of this process stimulates the creation of computing systems that, by interacting with the real world through robotics, production equipment, instruments and other hardware, can contribute to its implementation.

Computing systems based exclusively on classical logic - that is, on algorithms for solving known problems - run into trouble when they encounter uncertain situations. Living beings, in contrast, lose in speed but are able to make successful decisions in such situations.

An example of artificial intelligence

An example is the stock market crash of 1987, when computer programs sold hundreds of millions of dollars' worth of shares in order to make a profit of a few hundred dollars, which in fact created the preconditions for the collapse. The situation was corrected after full control over exchange trading was returned to protoplasmic intelligent systems, that is, to people.

In defining intelligence as a scientific category, it should be understood as a system's capacity for learning. Thus one of the most specific definitions, in our view, interprets artificial intelligence as the ability of automated systems to acquire, adapt, modify and replenish knowledge in order to find solutions to problems that are difficult to formalize.

In this definition the term "knowledge" differs qualitatively from the concept of information. The difference is well reflected by representing these concepts as an information pyramid (Fig. 1).

Figure 1 - Information pyramid

The pyramid rests on data, the next level is occupied by information, and the level of knowledge completes it. Moving up the information pyramid, volumes of data turn into the value of information and further into the value of knowledge. That is, information arises at the moment subjective data interact with objective methods of processing them. Knowledge is formed by establishing distributed relationships between heterogeneous pieces of information, creating along the way a formal system - a way to reflect them in precise concepts or statements.
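The movement up the pyramid can be illustrated with a small sketch (the readings, threshold and rule below are invented for illustration; Python is used merely as convenient notation):

```python
# Toy illustration of the information pyramid: data -> information -> knowledge.
# All values and thresholds here are invented for illustration.

# Level 1: data - raw, uninterpreted measurements.
data = [21.5, 22.0, 35.8, 22.3, 36.1]

# Level 2: information - data combined with an objective processing method
# (here: which readings exceed a threshold, i.e. interpreted data).
THRESHOLD = 30.0
information = [(i, t) for i, t in enumerate(data) if t > THRESHOLD]

# Level 3: knowledge - a formal statement (rule) relating pieces of
# information, usable to drive decisions in new situations.
def overheating(readings, threshold=THRESHOLD, min_count=2):
    """Rule: the device is overheating if at least min_count readings
    exceed the threshold."""
    return sum(1 for t in readings if t > threshold) >= min_count

print(information)        # [(2, 35.8), (4, 36.1)]
print(overheating(data))  # True
```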

The task of artificial intelligence is precisely to maintain such a system - a knowledge system - in an up-to-date state that allows building programs of action to find solutions to assigned tasks, taking into account the specific situations that form in the environment at a given moment. Thus artificial intelligence can also be viewed as a universal algorithm capable of creating algorithms for solving new problems.

Kolomna Institute (branch)

State educational institution of higher

vocational education

"MOSCOW STATE OPEN UNIVERSITY"

Department of Informatics and Information Technology

"APPROVED"

educational and methodical

Council of KI (f) MGOU

Chairman of the board

Professor

A.M. Lipatov

"___" ____________ 2010

P.S. Romanov

BASICS OF ARTIFICIAL INTELLIGENCE

Textbook on the disciplines of the direction

"Informatics and Computer Engineering»

For university students

Kolomna - 2010


Printed in accordance with the decision of the educational and methodological council of the Kolomna Institute (branch) of the State Educational Institution of Higher Professional Education "MGOU" dated __________ 2010 No. ________

UDC 519.6

R69 Romanov P.S.

Fundamentals of artificial intelligence: a textbook. - Kolomna: KI (f) MGOU, 2010. - 164 p.

The tutorial covers the basics of artificial intelligence. The basic concepts of artificial intelligence are presented. The provisions of the theory of fuzzy sets are given. The main intelligent systems, their purpose, classification, characteristics, creation problems, examples are considered.

The textbook is intended for students of higher educational institutions studying in the direction of "Informatics and Computer Engineering". It can be used in the study of intelligent information systems by students of other specialties.

Reviewer: doctor of technical sciences, professor V.G. Novikov

© Romanov P.S.

©KI(f) MGOU, 2010

Introduction………………………………............……………………………………...5

Chapter 1. Basic concepts of artificial intelligence.............................................6
§ 1.1. Basic terms and definitions..................................................................6
§ 1.2. The history of the development of AI systems....................................12
§ 1.4. Main directions of development and application of intelligent systems...........25
Chapter 2 ....................................................................................................32
§ 2.1. Fuzzy sets. Operations on fuzzy sets..................................................32
§ 2.1.1. Basic operations on fuzzy sets........................................................35
§ 2.2. Building a membership function........................................................38
§ 2.2.1. Some methods for constructing a membership function..................39
§ 2.3. Fuzzy numbers...................................................................................44
§ 2.4. Operations with fuzzy numbers of (L-R)-type....................................46
§ 2.5. Fuzzy and linguistic variables............................................................47
§ 2.6. Fuzzy relations..................................................................................50
§ 2.7. Fuzzy logic........................................................................................51
§ 2.8. Fuzzy inferences................................................................................53
§ 2.9. Automation of information processing using fuzzy systems..............59
Chapter 3. Basic intelligent systems............................................................64
§ 3.1. Data and knowledge..........................................................................64
§ 3.2. Knowledge representation models.....................................................66
§ 3.3.1. Production rules..............................................................................69
§ 3.3.2. Frames............................................................................................72
§ 3.3.3. Semantic networks.........................................................................74
§ 3.4. Expert systems. Subject areas............................................................76
§ 3.5. Purpose and scope of expert systems.................................................77
§ 3.6. Methodology for the development of expert systems.........................81
§ 3.7. Basic expert systems.........................................................................86
§ 3.8. Difficulties in the development of expert systems and ways of overcoming them.......90
§ 3.9. Purpose and classification of robots..................................................94
§ 3.10. Examples of robots and robotic systems..........................................97
§ 3.10.1. Household (domestic) robots........................................................97
§ 3.10.2. Rescue robots and research robots................................................99
§ 3.10.3. Robots for industry and medicine...............................................100
§ 3.10.4. Military robots and robotic systems............................................101
§ 3.10.5. The brain as an analog-to-digital device.....................................104
§ 3.10.6. Robot toys..................................................................................104
§ 3.11. Problems of the technical implementation of robots......................105
§ 3.12. Adaptive industrial robots.............................................................114
§ 3.12.1. Adaptation and training..............................................................114
§ 3.12.2. Classification of adaptive control systems for industrial robots..117
§ 3.12.3. Examples of adaptive control systems for robots........................123
§ 3.12.4. Problems in the creation of industrial robots..............................128
§ 3.13. Neural network and neurocomputer technologies..........................132
§ 3.13.1. General characteristics of the direction......................................132
§ 3.13.2. Neuropackages...........................................................................140
§ 3.14. Neural networks............................................................................147
§ 3.14.1. The perceptron and its development...........................................147
3.14.1.1. The McCulloch-Pitts mathematical neuron................................147
3.14.1.2. Rosenblatt's perceptron and Hebb's rule.....................................148
3.14.1.3. The delta rule and letter recognition..........................................150
3.14.1.4. Adaline, Madaline and the generalized delta rule......................152
§ 3.14.2. The multilayer perceptron and the error backpropagation algorithm.......155
§ 3.14.3. Types of activation functions.....................................................160

Introduction

The science called "artificial intelligence" is included in the complex computer science, and the technologies created on its basis belong to information technologies. The task of this science is to provide reasonable reasoning and action with the help of computing systems and other artificial devices. As an independent scientific direction, artificial intelligence (AI) has existed for just over a quarter of a century. During this time, the attitude of society towards specialists engaged in such research has evolved from skepticism to respect. In advanced countries, work in the field of intelligent systems is supported at all levels of society. There is a strong opinion that it is these studies that will determine the nature of the information society, which is already replacing the industrial civilization that reached its highest peak in the 20th century. Over the past years of the formation of AI as a special scientific discipline, its conceptual models have been formed, specific methods and techniques that belong only to it have accumulated, and some fundamental paradigms have been established. Artificial intelligence has become a completely respectable science, no less honorable and necessary than physics or biology.

Artificial intelligence is an experimental science. Its experimental nature lies in the fact that, when creating computer representations and models, the researcher compares their behavior with each other and with examples of the same problems being solved by a specialist, and modifies them on the basis of this comparison, trying to achieve a better match between the results. To modify programs in a "monotonic" way that improves results, one must have reasonable initial ideas and models. These are supplied by psychological studies of consciousness, in particular by cognitive psychology.

An important characteristic of AI methods is that AI deals only with those mechanisms of competence that are verbal in nature (that admit symbolic representation). Not all the mechanisms a person uses to solve problems are of this kind.

The book presents the basics of AI, making it possible to navigate the large number of publications on the problems of artificial intelligence and to obtain the necessary knowledge in this field of science.

Armavir State

Pedagogical University

BASICS OF ARTIFICIAL INTELLIGENCE

for students studying in the specialty "Informatics"

Armavir 2004

Printed by the decision of the ASPU UMC

Reviewer: , Candidate of Physical and Mathematical Sciences, Associate Professor, Head of the Internet Center of the Kabardino-Balkarian State Agricultural Academy

Kozyrev. Artificial intelligence: a teaching aid for students studying in the specialty "Informatics". - Armavir, 2004

The basic concepts of artificial intelligence, directions and prospects for the development of research in the field of artificial intelligence, the basics of the logic programming language PROLOG are considered.

The educational and methodical manual is intended for students studying in the specialty "informatics", and can also be used by everyone who is interested in artificial intelligence and logic programming.

Introduction………………………………………………..……………………... 4

1. Artificial intelligence: subject, history
development, research directions ……..………………….. 5

1.1. Directions of research in the field
artificial intelligence…..…………………………………………….. 5


1.2. The main tasks solved in the field of artificial intelligence….………………..... 6

2. Knowledge system……………………………………………………….. 8

3. Knowledge representation models…………………………………. 9

3.1. Semantic networks……………………………………………………..9

3.2. Frame model …………………………………………….…………10

3.3. Production model………………………………………………..11

3.4. Logical model……………………………………………………. .12

4. Expert systems………………………………………………...12

4.1. Purpose of expert systems……………………………………….12

4.2. Types of tasks solved with the help of expert systems…………….14

4.3. The structure of expert systems…………………………………………...15

4.4. The main stages of development of expert systems……………………16

4.5. Tools for developing expert systems………18

5. PROLOG - logic programming language ……….19

5.1. General information about PROLOG…………………………………………19

5.2. Suggestions: facts and rules………………………………………20

5.4. Variables in PROLOG……………………………………………...22

5.5. Objects and data types in PROLOG………………………………...23

5.6. The main sections of the PROLOG program…………………………….23

5.7. Backtracking……………………………………………………...24

5.8. Backtracking control: fail and cut predicates ……26

5.9. Arithmetic calculations……………………………………………27

5.10. Recursion……………………………………………………………… .28

5.11. Lists…………………………………………………………………30

5.12. Standard List Processing Tasks………………………….….31

Literature………………………………………………............................... .35

Introduction

In recent decades, artificial intelligence has invaded all areas of activity, becoming a means of integrating sciences. Software tools based on the technology and methods of artificial intelligence have become widespread in the world. Intensive research on the creation of a single information space that creates conditions for joint remote work based on knowledge bases has now begun to be carried out by all economically developed countries. The course "Fundamentals of Artificial Intelligence" in higher education includes the study of such sections as the representation of knowledge in a formal language, the structure of expert systems and the basic principles of their development, various strategies for finding a goal. One of the main lines of the course is the discussion of the implementation of artificial intelligence systems for solving specific applied problems.

As a computer support for the course, the Visual Prolog program development tool environment is considered. The programming language Prolog, based on the ideas and methods of mathematical logic, was originally created for the development of artificial intelligence applications. Applications such as knowledge bases, expert systems, natural language interfaces, and intelligent information management systems are efficiently programmed in the Visual Prolog environment. A high level of abstraction, the ability to represent complex data structures and model logical relationships between objects make it possible to solve problems in various subject areas.

The teaching aid "Fundamentals of Artificial Intelligence" will help to expand the ideas of the future computer science teacher about the areas of application of the theory of artificial intelligence, about existing and promising programming languages ​​and hardware structures for creating artificial intelligence systems.

1. Artificial intelligence: subject, history of development, areas of research.

Intellectus (Latin) - mind, reason, intellect, the mental abilities of a person. Artificial intelligence (AI) is a field of computer science whose subject is the development of hardware and software tools that allow the user to solve problems traditionally considered intellectual. The theory of artificial intelligence is the science of knowledge: how to extract it, represent it in artificial systems, process it inside a system, and use it to solve practical problems. Technologies using AI are applied today in many areas.

Research in the field of AI began in the late 1950s with the work of Newell, Simon and Shaw, who studied the processes of solving various problems. The results of their work were programs such as the Logic Theorist, intended for proving theorems in the propositional calculus, and the General Problem Solver. These works marked the beginning of the first stage of AI research, associated with the development of programs that solve problems using various heuristic methods.

The heuristic method of solving a problem was considered inherent to human thinking "in general": it is characterized by the emergence of guesses about a way to solve the problem, followed by their verification. It was contrasted with the algorithmic method used in the computer, interpreted as the mechanical execution of a given sequence of steps deterministically leading to the correct answer. The interpretation of heuristic problem solving as a purely human activity led to the emergence and subsequent spread of the term AI.

A. Neurocybernetics.

Neurocybernetics is focused on hardware modeling of structures similar to the structure of the brain. Physiologists long ago established that the basis of the human brain is a large number of interconnected and interacting nerve cells - neurons. The efforts of neurocybernetics were therefore focused on creating elements similar to neurons and combining them into functioning systems, called neural networks. Recently neurocybernetics has begun to develop again thanks to a leap in the development of computers: neurocomputers and transputers have appeared.

There are currently three approaches to creating neural networks:

hardware - the creation of special computers, expansion cards and chipsets that implement all the necessary algorithms;

software - the creation of programs and tools for high-performance computers; the networks are created in the computer's memory, and all the work is done by its own processors;

hybrid - a combination of the first two: part of the calculations is performed by special expansion boards (coprocessors), part by software.
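The software approach can be illustrated by a minimal neuron modelled entirely in memory. The weights and threshold below are hand-picked for illustration (implementing logical AND), not learned:

```python
# Minimal software-modelled neuron: a weighted sum of inputs passed
# through a threshold activation. The weights are chosen by hand to
# implement logical AND - a classic toy example, not a trained network.

def neuron(inputs, weights, threshold):
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# Two-input AND: the neuron fires only when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], weights=[1, 1], threshold=2))
```

With weights [1, 1] and threshold 2, only the input pair (1, 1) reaches the threshold, so the truth table printed is that of AND.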

B. Black box cybernetics.

The cybernetics of the "black box" is based on a principle opposite to neurocybernetics. It doesn't matter how the "thinking" device works. The main thing is that it reacts to the given input actions in the same way as the human brain.

This area of ​​artificial intelligence was focused on the search for algorithms for solving intellectual problems on existing computer models.

Research in the field of artificial intelligence has come a long and thorny way: first enthusiasm (1960), pseudoscience (1960-65), success in solving puzzles and games (), disappointment in solving practical problems (), first successes in solving a number of practical problems (), mass commercial use in solving practical problems (). But the basis of commercial success is rightly formed by expert systems, above all real-time expert systems. It was they that allowed artificial intelligence to move from games and puzzles to mass use in solving practically significant problems.

1.2. The main tasks solved in the field of
artificial intelligence

Knowledge representation and development of knowledge-based systems

This direction covers the development of knowledge representation models and the creation of knowledge bases that form the core of expert systems (ES). Recently it has come to include models and methods for extracting and structuring knowledge, merging with knowledge engineering. Expert systems and tools for their development have achieved the greatest commercial success in the field of artificial intelligence.

Games and creativity.

Intellectual game tasks - chess, checkers, Go. The direction is based on one of the early approaches - the labyrinth model plus heuristics.

Development of natural language interfaces and machine translation

Voice control, translation from language to language. The first program in this area was a translator from English into Russian. The first idea - word-for-word translation - proved fruitless. Currently a more complex model is used that includes the analysis and synthesis of natural-language messages and consists of several blocks. For analysis these are:

A language that uses the production model is PROLOG.
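A production model can be sketched as a set of "IF premises THEN conclusion" rules applied repeatedly until nothing new can be derived (forward chaining). The facts and rules below are invented for illustration, and plain Python stands in for PROLOG:

```python
# Tiny forward-chaining interpreter for a production model.
# Each rule is (set of premises, conclusion); facts are plain strings.
# The animal domain is invented for illustration.

rules = [
    ({"has_feathers"}, "bird"),
    ({"bird", "can_swim"}, "penguin"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # keep firing rules until stable
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires, adding a new fact
                changed = True
    return facts

print(forward_chain({"has_feathers", "can_swim"}, rules))
```

Note that the second rule can fire only after the first has derived "bird" - the interpreter chains productions rather than matching each rule once.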

3.4. Logic model

Their description is based on a formal system with four elements:

M = <T, P, A, B>, where

T is a set of basic elements of various nature, with corresponding procedures for them;

P is a set of syntactic rules, with whose help syntactically correct collections are formed from the elements of T; the procedure Π(P) determines whether a given collection is correct;

A is a subset of the syntactically correct collections, called axioms; the procedure Π(A) answers the question of membership in the set A;

B is the set of inference rules. Applying them to the elements of A, one can obtain new syntactically correct collections, to which these rules can be applied again. The procedure Π(B) determines, for each syntactically correct collection, whether it is derivable.
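As a concrete toy instance of such a formal system (the particular axioms are invented for illustration), take propositional symbols and implications "x->y" as the elements, and modus ponens as the single inference rule; derivability can then be checked mechanically:

```python
# A toy formal system M = <T, P, A, B>:
#   T - basic elements: propositional symbols and implications "x->y";
#   P - syntactic rule: a formula is a symbol or "x->y";
#   A - axioms: the initial formulas taken as given (invented here);
#   B - inference rule: modus ponens (from "x" and "x->y" derive "y").

axioms = {"p", "p->q", "q->r"}

def derive(axioms):
    known = set(axioms)
    changed = True
    while changed:                      # apply modus ponens until stable
        changed = False
        for f in list(known):
            if "->" in f:
                x, y = f.split("->", 1)
                if x in known and y not in known:
                    known.add(y)        # modus ponens fires
                    changed = True
    return known

print("r" in derive(axioms))  # True: p and p->q give q; q and q->r give r
```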

4. Expert systems

4.1. Purpose of expert systems

Expert systems (ES) are complex software systems that accumulate the knowledge of specialists in specific subject areas and replicate this empirical experience to advise less qualified users.

The purpose of the study of expert systems is the development of programs that, when solving problems from a certain subject area, obtain results that are not inferior in quality and efficiency to the results obtained by experts.

Expert systems are designed to solve non-formalized, practically significant tasks. An expert system should be used only when its development is possible and appropriate.

Facts indicating the need to develop and implement expert systems:

A shortage of specialists, who spend significant time helping others;

The need for a large team of specialists, since none of them has sufficient knowledge;

Low productivity, since the task requires a complete analysis of a complex set of conditions, and the average specialist is not able to view (in the allotted time) all these conditions;

The presence of competitors who have the advantage of coping better with the task at hand.

By functional purpose, expert systems can be divided into the following types:

1. Powerful expert systems designed for a narrow circle of users (control systems for complex technological equipment, expert air defense systems). Such systems typically operate in real time and are very expensive.

2. Expert systems designed for a wide range of users. These include systems of medical diagnostics, complex teaching systems. The knowledge base of these systems is not cheap, as it contains unique knowledge obtained from experts. The collection of knowledge and the formation of a knowledge base is carried out by a specialist in the collection of knowledge - a cognitive engineer.

3. Expert systems with a small number of rules, relatively inexpensive. These systems are designed for the mass consumer (for example, systems that facilitate troubleshooting in equipment). The use of such systems makes it possible to do without highly qualified personnel and to reduce the time needed to find and eliminate faults. The knowledge base of such a system can be supplemented and changed without the help of the system's developers. They usually use knowledge from various reference manuals and technical documentation.

4. Simple expert systems for individual use, often built by the users themselves to facilitate everyday work. The user, having organized the rules into a knowledge base, creates his own expert system on its basis. Such systems are used in jurisprudence, commercial activities, and the repair of simple equipment.

The use of expert systems and neural networks brings significant economic benefits. For example:

American Express reduced its losses by $27 million a year thanks to an expert system that determines the appropriateness of issuing or refusing a loan to a particular firm;

DEC saves $70 million a year with XCON/XSEL, a system for configuring VAX computers to customer requirements; its use reduced the number of errors from 30% to 1%;

Sira reduced pipeline construction costs in Australia by $40 million with a pipeline management expert system.

4.2. Types of tasks solved with
expert systems

Data interpretation. Interpretation means determining the meaning of data; its results must be consistent and correct. Examples of ES:

Detection and identification of various types of ocean-going vessels - SIAP;

Determination of the main personality traits based on the results of psychodiagnostic testing in the AVTANTEST and MICROLUSHER systems, etc.

Diagnostics. Diagnostics refers to the detection of a malfunction in some system. Examples of ES:

Diagnosis and therapy of narrowing of the coronary vessels - ANGY;

Diagnostics of errors in hardware and software of computers - CRIB system, etc.

Monitoring. The main task of monitoring is the continuous interpretation of data in real time and signaling when certain parameters go beyond permissible limits. The main problems are "missing" an alarm situation and the inverse problem of a "false" alarm. Examples of ES:

Control over the operation of power plants - SPRINT; assistance to nuclear reactor dispatchers - REACTOR;

Monitoring of emergency sensors in a chemical plant - FALCON, etc.
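The monitoring task described above can be sketched as a check of readings against permissible limits; the parameter names, limits and readings below are invented for illustration:

```python
# Sketch of the monitoring task: continuously interpret readings and
# signal when a parameter leaves its permissible range. The parameters,
# limits and readings are invented for illustration.

LIMITS = {"temperature": (0, 100), "pressure": (1, 10)}

def check(readings):
    """Return the list of (parameter, value) pairs outside their limits."""
    alarms = []
    for name, value in readings.items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            alarms.append((name, value))
    return alarms

print(check({"temperature": 120, "pressure": 5}))  # [('temperature', 120)]
```

A too-wide range here would "miss" an alarm situation, a too-narrow one would produce "false" alarms - the two failure modes named in the text.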

Design. Design consists of preparing specifications for the creation of "objects" with predetermined properties. The specification is understood as the entire set of necessary documents - a drawing, an explanatory note, etc. Examples of ES:

Designing configurations of VAX-11/780 computers in the XCON (or R1) system;

LSI design - CADHELP;

Synthesis of electrical circuits - SYN and others.

Forecasting. Predictive systems logically deduce likely consequences from given situations. Examples of ES:

Weather prediction - WILLARD system:

Estimates of the future harvest - PLANT;

Forecasts in the economy - ECON and others.

Planning. Planning is understood as finding action plans related to objects capable of performing certain functions. In such ES, behavioral models of real objects are used in order to logically deduce the consequences of the planned activity. Examples of ES:

Robot Behavior Planning - STRIPS,

Planning of industrial orders - ISIS,

Experiment Design - MOLGEN et al.

Education. Training systems diagnose errors in the study of a discipline with the help of a computer and suggest correct solutions. They accumulate knowledge about a hypothetical "student" and his characteristic mistakes, and can then diagnose weaknesses in a student's knowledge and find appropriate means to eliminate them. Examples of ES:

Teaching the Lisp programming language in the "Lisp Teacher" system;

The PROUST system - teaching the Pascal language, etc.

Expert system solutions are transparent, i.e. they can be explained to the user at a qualitative level.

Expert systems are able to replenish their knowledge in the course of interaction with an expert.

4.3. Structure of expert systems

The structure of expert systems includes the following components:

Knowledge base - the core of the ES; the body of subject-area knowledge recorded on a machine medium in a form understandable to the expert and the user (usually in some language close to natural). In parallel with this "human" representation there is a knowledge base in an internal "machine" representation. It consists of a set of facts and rules.

Facts describe objects and the relationships between them. Rules are used in the knowledge base to describe relationships between objects; logical inference is performed on the basis of the relationships defined by the rules.

Database - intended for the temporary storage of facts and hypotheses; it contains intermediate data or the results of the system's communication with the user.

Logical inference engine - a reasoning mechanism that operates on knowledge and data in order to obtain new data; for this, a software-implemented mechanism for searching for solutions is usually used.

Communication subsystem- serves to conduct a dialogue with the user, during which the expert system asks the user for the necessary facts for the reasoning process, and also allows the user to control the course of reasoning to some extent.

Explanation Subsystem- is necessary in order to give the user the opportunity to control the course of reasoning.

Knowledge acquisition subsystem - a program that gives the knowledge engineer the ability to create knowledge bases interactively. It includes a system of nested menus, knowledge-representation-language templates, hints ("help" mode) and other service tools that make it easier to work with the knowledge base.
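The interaction of these components - a knowledge base of facts and rules, a logical inference mechanism and an explanation of how a conclusion was reached - can be sketched as follows (the diagnostic domain content is invented for illustration):

```python
# Minimal expert-system skeleton: a knowledge base of facts and rules,
# an inference engine, and an explanation trace ("how" each conclusion
# was derived). The fault-diagnosis content is invented for illustration.

facts = {"no_power_light", "plugged_in"}
rules = [
    ({"no_power_light", "plugged_in"}, "fuse_blown"),
    ({"fuse_blown"}, "replace_fuse"),
]

def infer(facts, rules):
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{conclusion} because {sorted(premises)}")
                changed = True
    return facts, trace

derived, explanation = infer(facts, rules)
print("replace_fuse" in derived)  # True
for line in explanation:          # the explanation subsystem's how-trace
    print(line)
```

The trace plays the role of the explanation subsystem: for every derived fact it records which premises justified it, so the user can control the course of reasoning.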

The expert system operates in two modes:

Acquisition of knowledge (definition, modification, addition);

Problem solving.

In the problem-solving mode, task data is processed and, after appropriate encoding, passed to the blocks of the expert system. The results of processing are sent to the advice-and-explanation module and, after recoding into a language close to natural, are issued in the form of advice, explanations and comments. If an answer is not clear, the user may ask the expert system to explain how it was obtained.

4.4. Main stages of development of expert systems

The technological process of developing an industrial expert system can be divided into six main stages:

1. Choosing the right problem

Activities leading up to the decision to start developing a particular ES include:

Identification of the problem area and tasks;

Finding an expert who is willing to cooperate in solving the problem, and appointing a development team;

Determination of a preliminary approach to solving the problem;

Analysis of costs and profits from development;

Preparation of a detailed development plan.

2. Development of a prototype system

A prototype system is a truncated version of an expert system, designed to check the correctness of the coding of facts, relationships and the expert's reasoning strategies.

The prototype must meet two requirements:

The prototype system should solve the most typical problems, but should not be too large.

The time and effort spent on prototyping should be small.

The operation of the prototype programs is evaluated and tested to bring them into line with real user requests. The prototype is checked for:

Convenience and adequacy of the input-output interfaces (the nature of the questions in the dialogue, the coherence of the output text of the result, etc.);

Efficiency of the control strategy (enumeration order, use of fuzzy inference, etc.);

Quality of test cases;

Correctness of the knowledge base (completeness and consistency of the rules).

An expert usually works with a knowledge engineer, who helps to structure the knowledge and to define and form the concepts and rules needed to solve the problem. If this succeeds, the expert, with the help of the knowledge engineer, expands the prototype's knowledge base about the problem area.

If it fails, one may conclude that other methods are needed to solve the problem, or that a new prototype should be developed.

3. Developing the prototype into an industrial expert system

At this stage, the knowledge base is significantly expanded, a large number of additional heuristics are added. These heuristics generally increase the depth of the system by providing more rules for subtle aspects of individual cases. After establishing the basic structure of the ES, the knowledge engineer proceeds to develop and adapt the interfaces through which the system will communicate with the user and the expert.

As a rule, a smooth transition from prototype to industrial expert system is implemented. Sometimes additional intermediate stages are distinguished: demonstration prototype - research prototype - working prototype - industrial system.

4. System evaluation

Expert systems are evaluated in order to check the accuracy of the program and its usefulness. Evaluation can be carried out on the basis of various criteria, which we group as follows:

User criteria (comprehensibility and "transparency" of the system operation, convenience of interfaces, etc.);

Criteria of invited experts (evaluation of the advice and solutions offered by the system, comparison with their own solutions, evaluation of the explanation subsystem, etc.);

Criteria of the development team (implementation efficiency, performance, response time, design, breadth of coverage of the subject area, consistency of the knowledge base, the number of dead ends where the system cannot make a decision, the sensitivity of the program to minor changes in the representation of knowledge, in the weight coefficients used by the inference mechanisms, in the data, etc.).

5. System interfacing

At this stage, the expert system is interfaced with the other software tools in the environment in which it will operate, and the people it will serve are trained. Interfacing also means developing the links between the expert system and its operating environment.

Interfacing includes enabling the ES to communicate with existing databases and other systems in the enterprise, improving the system's time-dependent characteristics so that it operates more efficiently, and improving the performance of its hardware if the system operates in an unusual environment (for example, communicating with measuring devices).

6. System support

Recoding a system to a language like C improves performance and portability, but reduces flexibility. This is acceptable only if the system retains all knowledge of the problem area, and this knowledge will not change in the near future. However, if the expert system is created precisely because the problem area changes, then it is necessary to maintain the system in the development tool environment.

Artificial intelligence languages

Lisp and Prolog are the most common languages for solving artificial intelligence problems. There are also less common AI languages, such as REFAL, developed in Russia. These languages are less universal than traditional ones, but they compensate for this with rich facilities for working with symbolic and logical data, which is extremely important for AI tasks. Specialized computers (for example, Lisp machines) are built on the basis of AI languages for solving AI problems. A disadvantage of these languages is their inapplicability for creating hybrid expert systems.

Special software tools

Special software tools include libraries and add-ons for the artificial intelligence language Lisp: KEE (Knowledge Engineering Environment), FRL (Frame Representation Language), KRL (Knowledge Representation Language), ARTS, and others. They allow expert systems to be developed at a higher level than is possible in plain artificial intelligence languages.

"Shells"

"Shells" are "empty" versions of existing expert systems, i.e. ready-made expert systems without a knowledge base. An example of such a shell is EMYCIN (Empty MYCIN), the MYCIN expert system with its knowledge base removed. The advantage of shells is that they do not require programmers at all to create a finished expert system; only subject-area experts are needed to populate the knowledge base. However, if a subject area fits the model used in a given shell poorly, populating the knowledge base is far from easy.

5. PROLOG: a logic programming language

5.1. General information about PROLOG.

PROLOG (PROgramming in LOGic) is a logic programming language designed for problems from the field of artificial intelligence (building expert systems, translation programs, natural language processing). It has powerful tools for extracting information from databases, and the search methods it uses differ fundamentally from traditional ones.

The basic constructions of PROLOG are borrowed from logic. PROLOG is a declarative, not a procedural, programming language: it is focused not on developing a solution procedure but on a systematic, formalized description of the problem, so that the solution follows from the description.

The essence of the logical approach is that the machine is offered not an algorithm but a formal description of the subject area and of the problem being solved, in the form of an axiomatic system. The search for a solution by inference in this system can then be entrusted to the computer itself. The programmer's main task is to represent the subject area successfully by a system of logical formulas, and by a set of relations over it that describe the task as fully as possible.

Fundamental properties of PROLOG:

1) an inference mechanism based on search with backtracking;

2) a built-in pattern-matching mechanism;

3) simple and easily changeable data structures;

4) no pointers, assignment, or goto operators;

5) natural recursion.
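The first two properties, inference by search with backtracking and built-in pattern matching, can be illustrated with a small sketch. This is a drastic simplification of real Prolog resolution; the fact base and the grandparent-style goal are invented for illustration:

```python
# Sketch of Prolog-style inference: a list of goals is proved left to
# right; when a fact matches the current goal, the bindings are extended
# and the remaining goals are tried; on failure the engine abandons the
# branch and tries the next matching fact (backtracking).

FACTS = [
    ("parent", "ivan", "olga"),
    ("parent", "olga", "anna"),
]

def match(goal, fact, binding):
    """Pattern matching: capitalized strings act as variables."""
    if len(goal) != len(fact):
        return None
    b = dict(binding)
    for g, f in zip(goal, fact):
        if g[:1].isupper():              # variable position
            if b.get(g, f) != f:         # bound to a different value?
                return None
            b[g] = f
        elif g != f:                     # constant mismatch
            return None
    return b

def solve(goals, binding=None):
    """Yield every binding that proves the conjunction of goals."""
    binding = binding or {}
    if not goals:
        yield binding
        return
    for fact in FACTS:                   # alternatives to backtrack over
        b = match(goals[0], fact, binding)
        if b is not None:
            yield from solve(goals[1:], b)

# "grandparent" query: parent(ivan, X), parent(X, Y)
results = list(solve([("parent", "ivan", "X"), ("parent", "X", "Y")]))
# one solution: X = olga, Y = anna
```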

Stages of programming in PROLOG:

1) declaration of facts about objects and relations between them;

2) definition of the rules for the relationship of objects and relations between them;

3) formulation of the question about objects and relations between them.

The theoretical basis of PROLOG is a branch of symbolic logic called the predicate calculus.

A predicate is the name of a property or of a relationship between objects, followed by a sequence of arguments:

<predicate_name>(t1, t2, ..., tn), where t1, t2, ..., tn are the arguments

For example, the fact black(cat) is written using the predicate black, which has one argument. The fact wrote(sholokhov, "Quiet Don") is written using the predicate wrote, which has two arguments.

The number of predicate arguments is called the arity of the predicate and is denoted by black/1 (the black predicate has one argument, its arity is one). Predicates may have no arguments; the arity of such predicates is 0.
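The name/arity view of a predicate is easy to make concrete. A minimal sketch, modeling a predicate application as a name plus a tuple of arguments (the helper names are ours, not Prolog's):

```python
# A predicate application modeled as (name, arguments); the arity is
# simply the number of arguments, and name/arity gives the Prolog-style
# predicate indicator such as black/1.

def predicate(name, *args):
    return (name, args)

def arity(p):
    return len(p[1])

def indicator(p):
    """Prolog-style name/arity notation, e.g. black/1."""
    return f"{p[0]}/{arity(p)}"

black_cat = predicate("black", "cat")
wrote = predicate("wrote", "sholokhov", "Quiet Don")
# indicator(black_cat) == "black/1"; indicator(wrote) == "wrote/2"
# a predicate with no arguments has arity 0: indicator(predicate("nl")) == "nl/0"
```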

The Prolog language grew out of A. Colmerauer's work on natural language processing and Robert Kowalski's independent work on the application of logic to programming (1973).

The best-known Prolog system in Russia is Turbo Prolog, a commercial implementation of the language for IBM-compatible PCs. In 1988, the much more powerful Turbo Prolog 2.0 was released, including an improved IDE, a fast compiler, and low-level programming tools. Borland distributed this version until 1990, after which PDC acquired the exclusive right to the compiler source code and continued to market the system under the name PDC Prolog.

In 1996, the Prolog Development Center released Visual Prolog 4.0 to the market. The Visual Prolog environment uses an approach called "visual programming", in which the appearance and behavior of programs are defined using special graphical design tools without traditional programming in an algorithmic language.

Visual Prolog includes an interactive visual development environment (VDE, Visual Development Environment) with text and graphics editors, code-generation tools that construct the control logic (Experts), an extension for visual programming interfaces (VPI, Visual Programming Interface), a Prolog compiler, a set of plug-in files and libraries, a link editor, and example and help files.

5.2. Sentences: facts and rules

A PROLOG program consists of sentences, which can be facts, rules, or queries.

A fact is a statement that a specific relationship holds between objects. A fact expresses a simple relationship between data.

Fact structure:

<relation_name>(t1, t2, ..., tn), where t1, t2, ..., tn are objects

Fact examples:

studies(ira, university). % Ira studies at the university

parent(ivan, alexey). % Ivan is the parent of Alexey

programming_language(prolog). % Prolog is a programming language

A set of facts forms a database. Facts record data that is accepted as true and requires no proof.

Rules are used to establish relationships between objects on the basis of existing facts.

Rule structure:

<head> :- <body> or

<head> if <body>

The left side of an inference rule is called the head, and the right side the body. The body can consist of several conditions separated by commas or semicolons: a comma stands for the logical AND operation, a semicolon for the logical OR operation. Sentences use variables to state inference rules in general form. A variable is local to one sentence: the same name in different sentences refers to different objects. Every sentence must end with a period.

Rule examples:

mother(X, Y) :- parent(X, Y), woman(X).

student(X) :- studies(X, institute); studies(X, university).

A rule differs from a fact in that a fact is always true, whereas a rule is true only if all the statements making up its body hold. Facts and rules form a knowledge base.

Given a database, one can pose a query (goal) to it. A query is the statement of a problem that the program must solve. Its structure is the same as that of a rule or a fact. There are queries with constants and queries with variables.

Queries with constants yield one of two answers: "yes" or "no".

For example, there are facts:

knows(lena, tanya).

knows(lena, sasha).

knows(sasha, tanya).

a) Does Lena know Sasha?

Query: knows(lena, sasha).

Result: yes

b) Does Tanya know Lena?

Query: knows(tanya, lena).

Result: no
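A query whose arguments are all constants reduces to a membership test over the fact base, as a sketch makes explicit (facts follow the text's example, with atoms lowercased as Prolog requires):

```python
# Yes/no queries with constants: a ground query succeeds exactly when
# it occurs among the stored facts.

knows = {("lena", "tanya"), ("lena", "sasha"), ("sasha", "tanya")}

def query(a, b):
    return "yes" if (a, b) in knows else "no"

# query("lena", "sasha") -> "yes"
# query("tanya", "lena") -> "no": the knows relation is not symmetric,
# so the order of the arguments matters.
```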

If the query contains a variable, the interpreter tries to find values of the variable for which the query is true.

a) Who does Lena know?

Query: knows(lena, X).

Result:

X = tanya

X = sasha

b) Who knows Sasha?

Query: knows(X, sasha).

Result: X = lena
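Answering a query with a variable amounts to enumerating the facts that match the fixed positions and reading the bindings off the rest. A sketch, with None standing in for a Prolog variable (facts as in the text, atoms lowercased):

```python
# Queries with variables: each argument position is either a constant
# or None (a variable); the result is every fact consistent with the
# constants, in database order, as Prolog would enumerate them.

knows = [("lena", "tanya"), ("lena", "sasha"), ("sasha", "tanya")]

def solutions(first, second):
    """first/second: a constant, or None for a variable position."""
    return [(a, b) for a, b in knows
            if (first is None or a == first)
            and (second is None or b == second)]

# "Who does Lena know?"  knows(lena, X)
assert solutions("lena", None) == [("lena", "tanya"), ("lena", "sasha")]
# "Who knows Sasha?"     knows(X, sasha)
assert solutions(None, "sasha") == [("lena", "sasha")]
```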

Queries can be compound, i.e. consist of several simple queries. They are joined by the sign ",", read as the logical connective AND.

The simple queries are called subgoals; a compound query evaluates to true when every subgoal is true.

To answer whether Lena and Sasha have common acquaintances, you should make a request:

knows(lena, X), knows(sasha, X).

Result:

X = tanya
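For two subgoals sharing a variable, a compound query behaves like a join over the fact base. A sketch of the common-acquaintances query (facts as in the text, atoms lowercased):

```python
# A compound query succeeds when each subgoal succeeds with consistent
# bindings; for knows(p, X), knows(q, X) this is a join over the facts:
# the shared variable X must take the same value in both subgoals.

knows = [("lena", "tanya"), ("lena", "sasha"), ("sasha", "tanya")]

def common_acquaintances(p, q):
    """Solve knows(p, X), knows(q, X): values of X known to both."""
    return [x for (a, x) in knows if a == p
              for (b, y) in knows if b == q and y == x]

# knows(lena, X), knows(sasha, X)  ->  X = tanya
```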

5.4. Variables in PROLOG

A variable in PROLOG is not treated as an allocated piece of memory. It is used to designate an object that cannot be referred to by name. A variable can be thought of as a local name for some object.

A variable name must begin with an uppercase letter or an underscore and may contain only letters, digits, and underscores: X, _y, AB, X1. A variable that has no value is called free; a variable that has a value is called bound.

A variable consisting of a single underscore is called anonymous and is used when its value does not matter. For example, given the facts:

parent(ira, tanya).

parent(misha, tanya).

parent(olya, ira).

To find all the parents:

Query: parent(X, _).

Result:

X = ira

X = misha

X = olya

The scope of a variable is the statement in which it occurs. Within a statement, the same name denotes the same variable; two different statements can use the same variable name in completely different ways.

There is no assignment operator in PROLOG; its role is played by the equality operator =. The goal X = 5 can be viewed as a comparison (if X has a value) or as an assignment (if X is free).

In PROLOG you cannot write X = X + 5 to increase the value of a variable; a new variable must be used: Y = X + 5.
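The two readings of the = goal, comparison when the variable is bound and binding when it is free, can be captured in a sketch (the helper name unify_var is ours, and real Prolog unification is more general than this):

```python
# Sketch of the "=" goal over a binding environment: if the variable is
# free, "=" binds it (assignment-like); if it is already bound, "=" is
# a comparison that succeeds or fails. Bindings are never overwritten,
# which is why "increasing" a value requires a new variable.

def unify_var(binding, var, value):
    """Return an extended binding, or None if the goal var = value fails."""
    if var in binding:                      # X already bound: compare
        return binding if binding[var] == value else None
    new = dict(binding)                     # X free: bind it
    new[var] = value
    return new

b = unify_var({}, "X", 5)            # X = 5 succeeds, binding X to 5
assert b == {"X": 5}
assert unify_var(b, "X", 5) == b     # X = 5 again: succeeds as a comparison
assert unify_var(b, "X", 6) is None  # X = 6 fails; X cannot be reassigned
b2 = unify_var(b, "Y", b["X"] + 5)   # Y = X + 5 uses a new variable
assert b2["Y"] == 10
```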

5.5. Objects and data types in PROLOG

Data objects in PROLOG are called terms. A term can be a constant, a variable, or a compound term (structure). The constants are integers and real numbers (0, -1, 123.4, 0.23E-5), as well as atoms.

An atom is any sequence of characters enclosed in quotation marks. The quotes may be omitted if the string begins with a lowercase letter and contains only letters, digits, and underscores (i.e. if it can be distinguished from a variable). Examples of atoms:

abcd, "a+b", "student Ivanov", prolog, "Prolog".

A structure allows several objects to be combined into a single whole. It consists of a functor (name) and a sequence of terms.

The number of components in a structure is called the arity of the structure: date/3.

A structure can contain another structure as one of its components, for example:

birthday_date(person("Masha", "Ivanova"), date(15, april, 1983))
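Nested structures are just functors applied to terms, which may themselves be structures. A sketch following the birthday example (with the date written as a three-argument term; the class name is ours):

```python
# A compound term modeled as a functor name plus a tuple of arguments;
# because arguments may themselves be Structs, terms nest freely, and
# the arity is the number of top-level arguments.

from dataclasses import dataclass

@dataclass(frozen=True)
class Struct:
    functor: str
    args: tuple

    @property
    def arity(self):
        return len(self.args)

person = Struct("person", ("Masha", "Ivanova"))
birthday = Struct("birthday_date",
                  (person, Struct("date", (15, "april", 1983))))

assert birthday.arity == 2                       # birthday_date/2
assert birthday.args[1].args == (15, "april", 1983)
```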

A domain in PROLOG is a data type. The standard domains are:

integer - whole numbers.

real - real numbers.

string - strings (any sequence of characters enclosed in quotes).

char - a single character enclosed in apostrophes.

symbol - a sequence of Latin letters, numbers and underscores starting with a small letter or any sequence of characters enclosed in quotation marks.

5.6. The main sections of the PROLOG program

As a rule, a PROLOG program consists of four sections.

DOMAINS is the domain (type) description section. It is used if the program employs non-standard domains.

For example:

name = symbol

PREDICATES is the predicate description section. It is used if the program employs non-standard predicates.

For example:

knows(name, name)

student(name)

student_acquaintance(name, name)

CLAUSES is the sentence section. It is here that facts and inference rules are written.

For example:

knows(lena, ivan).

student(ivan).

student_acquaintance(X, Y) :- knows(X, Y), student(Y).

GOAL is the goal section, where the query is written.

For example:

student_acquaintance(lena, X).

The simplest program may contain only a GOAL section, for example:

GOAL

write("Enter your name: "), readln(Name),

write("Hello, ", Name, "!").

Maslennikova O.E., Popova I.V.

Fundamentals of Artificial Intelligence: a textbook. Magnitogorsk: MaGU, 2008. 282 pp. The textbook presents models of knowledge representation, the theory of expert systems, and the basics of logic and functional programming. Much attention is paid to the history of the development of artificial intelligence. The material is accompanied by a large number of illustrations; exercises and questions for self-checking are provided.
The work is aimed at full-time and part-time students studying in the areas of "Informatics" and "Physics and Mathematics Education (Profile: Informatics)".
Introduction to Artificial Intelligence.
The history of the development of artificial intelligence as a scientific direction.
The main directions of research in the field of artificial intelligence.
Philosophical aspects of the problem of artificial intelligence.
Questions for self-control.
Literature.
Knowledge representation models.
Knowledge.
Logical model of knowledge representation.
Semantic networks.
Frames.
Production model.
Other models of knowledge representation.
Exercises.
Questions for self-control.
Literature.
Expert systems.
The concept of an expert system.
Types of expert systems and types of tasks to be solved.
Structure and operating modes of the expert system.
Expert systems development technology.
Expert system tools.
Intelligent information systems.
Exercises.
Questions for self-control.
Literature.
Prolog as a logic programming language.
Introduction to logic programming.
Representation of knowledge about the subject area in the form of facts and rules of the Prolog knowledge base.
Descriptive, procedural and machine meaning of a Prolog program.
Basic programming techniques in Prolog.
Visual Prolog environment.
Exercises.
Literature.
Introduction to functional programming.
History of functional programming.
Properties of functional programming languages.
Tasks of functional programming.
Exercises.
Answers for self-test.
Literature.
Glossary.
Appendix 1.
Appendix 2.
Appendix 3.


Ministry of Education and Science of the Russian Federation. Magnitogorsk State University. O.E. Maslennikova, I.V. Popova. Fundamentals of Artificial Intelligence: textbook. Magnitogorsk: MaGU, 2008. 282 pp. UDC 681.142.1.01. ISBN 978-5-86781-609-4. Reviewers: Doctor of Physical and Mathematical Sciences, Professor S.I. Kadchenko; Doctor of Technical Sciences, Professor A.S. Sarvarov.

FOREWORD

Recently, there has been growing interest in artificial intelligence, driven by increased requirements for information systems. Humanity is steadily moving towards a new information revolution, comparable in scale to the development of the Internet. Artificial intelligence is a branch of informatics whose purpose is the development of hardware and software tools that allow a non-programmer user to set and solve their own, traditionally considered intellectual, tasks while communicating with a computer in a limited subset of natural language.

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By that time, many prerequisites for its origin had already formed: philosophers had long disputed the nature of man and the process of knowing the world; neurophysiologists and psychologists had developed a number of theories about the work of the human brain and about thinking; economists and mathematicians were asking questions of optimal calculations and of representing knowledge about the world in formalized form; finally, the foundation of the mathematical theory of computation, the theory of algorithms, had been laid, and the first computers had been created.
The purpose of this manual is to present the main directions and methods used in artificial intelligence, and to determine the possibility of their use in professional pedagogical activity. The manual consists of five chapters. The first gives a brief introduction to artificial intelligence: the history of its development as a scientific direction, its main areas of research, and such philosophical aspects of the problem as the possibility of existence, the safety, and the usefulness of artificial intelligence. The second chapter describes the classical models of knowledge representation: logical, semantic, frame, production, and neural network. The third chapter deals with theoretical and practical issues of developing expert systems and describes the XpertRule shell. The fourth chapter outlines the basic principles of programming in Prolog and describes the Visual Prolog environment. The fifth chapter covers the basics of functional programming with LISP examples. The manual contains a large number of illustrations, exercises, and questions for self-checking; a glossary is provided for convenience.

CHAPTER 1. INTRODUCTION TO ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is a new area of informatics whose subject is any human intellectual activity that obeys known laws. Figuratively, this direction is called "the eldest son of informatics", since many problems informatics has not solved gradually find their solution within artificial intelligence. The subject of informatics is information processing; the field of AI covers those processing tasks that cannot be performed by simple and exact algorithmic methods, and there are a great many of them. AI relies on knowledge about the process of human thinking.

At the same time, it is not known exactly how the human brain works; however, the knowledge about the features of human intelligence that science has today is enough to develop effective programs with AI elements. AI does not try to copy the work of the human brain exactly, but tries to model its functions using computer technology.

From the moment of its birth, AI has developed as an interdisciplinary direction interacting with computer science and cybernetics, the cognitive sciences, logic and mathematics, linguistics and psychology, biology and medicine (Fig. 1).

Informatics and cybernetics. Many specialists came to AI from computer science and cybernetics, and many combinatorial problems that cannot be solved by traditional methods in computer science have migrated to AI. In turn, results obtained in AI are borrowed when creating software and become part of computer science.

Cognitive sciences. The cognitive sciences are sciences about knowledge. AI also deals with knowledge, but the cognitive sciences use not only informational and neurobiological approaches; they also consider the social and psycholinguistic aspects of the use of knowledge.

Logic and mathematics. Logic underlies all known knowledge representation formalisms, as well as programming languages such as Lisp and Prolog. Methods of discrete mathematics, game theory, and operations research are used to solve AI problems. In turn, AI can be used to prove theorems and to solve problems in various areas of mathematics: geometry, integral calculus.

Psychology and linguistics. Recently, AI specialists have become interested in the psychological aspects of human behavior in order to model it. Psychology helps build models of value judgments and of subjective decision making. Of interest are the psychology of human-computer communication and psycholinguistics.

Computational linguistics is a part of AI that rests on mathematical methods for processing natural and artificial languages on the one hand, and on the phenomenology of language on the other.

Biology and medicine allow the work of the brain and of the systems of vision, hearing, and other natural sensors to be studied and understood better, giving new impetus to the modeling of their work.

Fig. 1. Interaction of AI with other disciplines

There is no single definition of AI, just as there is no single definition of natural intelligence. Among the many points of view on this scientific field, three now dominate.

1. Research in the field of AI is fundamental research, within which models and methods are developed for solving problems that were traditionally considered intellectual and were not previously amenable to formalization and automation.

2. AI is a new direction in computer science associated with new ideas for solving problems on a computer, with the development of a fundamentally different programming technology, and with a transition to a computer architecture that rejects the classical architecture dating back to the first computers.

3. Work in the field of AI gives rise to many applied systems that solve problems for which previously created systems were not suitable.

To illustrate the first approach, consider the calculator. At the beginning of the century, arithmetic with multi-digit numbers was the province of a few gifted individuals, and the ability to perform such operations in one's head was rightly considered a unique gift of nature and an object of scientific research. In our time, the invention of the calculator has made this ability available even to a third grader. The same holds in AI: it enhances the intellectual capabilities of a person, taking on the solution of tasks that were not previously formalized.
To illustrate the second approach, we can consider the history of an attempt to create a fifth-generation computer. In the mid-80s, Japan announced the start of an ambitious project to create fifth-generation computers. The project was based on the idea of ​​a hardware implementation of the PROLOG language. However, the project ended in failure, although it had a strong influence on the development and distribution of the PROLOG language as a programming language. The reason for the failure was the hasty conclusion that one language (albeit a fairly universal one) can provide the only solution for all problems. Practice has shown that so far a universal programming paradigm for solving all problems has not been invented and it is unlikely to appear. This is due to the fact that each task is a part of the subject area that requires careful study and a specific approach. Attempts to create new computer architectures continue and are associated with parallel and distributed computing, neurocomputers, probabilistic and fuzzy processors. Work in the field of creating expert systems (ES) can be attributed to the third, most pragmatic direction in AI. Expert systems are software systems that replace a human specialist in narrow areas of intellectual activity that require the use of special knowledge. Creation of ES in the field of medicine (like MYCIN) allows spreading knowledge to the most remote areas. Thus, in combination with telecommunications access, any rural doctor can get advice from such a system, replacing him with communication with a specialist on a narrow issue. In Russia, AI found its supporters almost from the moment of its appearance. However, this discipline did not immediately receive official recognition. AI has been criticized as a sub-branch of cybernetics, considered "pseudoscience". Until some point in time, the shocking name "artificial intelligence" also played a negative role. 
In the Presidium of the Academy of Sciences there was even a joke that "those who lack natural intelligence are engaged in artificial intelligence." Today, however, AI is an officially recognized scientific direction in Russia: the journals "Control Systems and Machines" and "AI News" are published, and scientific conferences and seminars are held. There is a Russian AI Association with about 200 members, whose president is Doctor of Technical Sciences D. A. Pospelov and whose honorary president is Academician of the Russian Academy of Sciences G. S. Pospelov. There is also the Russian Institute of Artificial Intelligence under the Council of the President of the Russian Federation on Informatics and Computer Science, and within the Russian Academy of Sciences a Scientific Council on the problem of "Artificial Intelligence". With the participation of this Council, many books and translations on AI have been published. Well known are the works of D. A. Pospelov, Litvintseva and Kandrashina in knowledge representation and processing; E. V. Popov and Khoroshevsky in natural language processing and expert systems; Averkin and Melikhov in fuzzy logic and fuzzy sets; Stefanyuk in learning systems; and Kuznetsov, Finn and Vagin in logic and knowledge representation. Russia also has a traditionally strong school of computational linguistics, which originates in Melchuk's work on the Meaning-Text model. Among its well-known computational linguists are Apresyan, Gorodetsky, Paducheva, Narinyani, Leontiev, Chaliapin, Zaliznyak Sr., Kibrik Sr., Baranov and many others.

1.1. The history of the development of artificial intelligence as a scientific direction

The idea of creating an artificial likeness of the human mind to solve complex problems and simulate the ability to think has been in the air since ancient times. In ancient Egypt, a "coming to life" mechanical statue of the god Amon was created.
In Homer's Iliad, the god Hephaestus forged humanoid creatures. In literature this idea has been played out many times, from Pygmalion's Galatea to Papa Carlo's Pinocchio. The ancestor of artificial intelligence, however, is considered to be the medieval Spanish philosopher, mathematician and poet R. Lull (c. 1235 - c. 1315), who tried to create a machine for solving various problems based on a general classification of concepts. Later, G. Leibniz (1646 - 1716) and R. Descartes (1596 - 1650) independently developed this idea, proposing universal languages for classifying all the sciences. These ideas formed the basis of theoretical work on creating artificial intelligence (Fig. 2).

The development of artificial intelligence as a scientific direction became possible only after the creation of computers, in the 1940s. At the same time, N. Wiener (1894 - 1964) produced his fundamental works on the new science of cybernetics. The term "artificial intelligence" was proposed in 1956 at a seminar of the same name at Dartmouth College (USA). The seminar was devoted to the development of logical, not computational, problems. Soon after artificial intelligence was recognized as an independent branch of science, it split into two main areas: neurocybernetics and "black box" cybernetics; only now have tendencies towards reuniting these parts into a single whole become noticeable again.

In the USSR, in 1954, the seminar "Automata and Thinking" began its work at Moscow State University under the guidance of Professor A. A. Lyapunov (1911 - 1973). Leading physiologists, linguists, psychologists and mathematicians took part in it. It is generally accepted that artificial intelligence in Russia was born at this time. As abroad, the directions of neurocybernetics and "black box" cybernetics emerged. In 1956 - 1963,
intensive searches were carried out for models and algorithms of human thinking, and the first programs were developed. It turned out that none of the existing sciences - philosophy, psychology, linguistics - could offer such an algorithm. Cyberneticists then proposed creating their own models, and various approaches were devised and tested.

The first AI research is associated with creating a chess-playing program, since the ability to play chess was believed to be an indicator of high intelligence. In 1954 the American scientist Newell conceived the idea of such a program; Shannon proposed, and Turing refined, a method for creating one. The Americans Shaw and Simon, in collaboration with a group of Dutch psychologists from Amsterdam led by de Groot, built such a program. Along the way, a special language, IPL1 (1956), was created for manipulating information in symbolic form; it was the predecessor of Lisp (McCarthy, 1960). The first artificial intelligence program, however, was the Logic Theorist, designed to prove theorems in the propositional calculus (August 9, 1956). The chess program itself was created in 1957 (NSS: Newell, Shaw, Simon). Its structure and that of the Logic Theorist formed the basis for the General Problem Solver (GPS). By analyzing the differences between situations and constructing goals, this program handles such tasks as the Tower of Hanoi puzzle or the computation of indefinite integrals. The EPAM program (Elementary Perceiver and Memorizer), conceived by Feigenbaum, was an elementary program for perception and memorization. In 1957 an article on transformational grammars appeared by Chomsky, one of the founders of computational linguistics. At the end of the 1950s the labyrinth search model was born.
This approach represents a problem as a graph reflecting the state space¹, and searches this graph for the optimal path from the input data to the result. Much work was done to develop this model, but the idea did not gain wide currency in solving practical problems.

¹ The state space is a graph whose vertices correspond to the situations encountered in the problem ("problem situations"); solving the problem reduces to finding a path in this graph.

The early 1960s were the era of heuristic programming. A heuristic is a rule that is not theoretically justified but reduces the number of iterations in the search space. Heuristic programming is the development of an action strategy based on known, predetermined heuristics. In the 1960s the first programs that worked with natural language queries were created. The BASEBALL program (Green et al., 1961) answered questions about the results of past baseball matches; the STUDENT program (Bobrow, 1964) could solve algebraic problems formulated in English.

Fig. 2. Milestones in the development of AI as a scientific field

Great hopes were placed on machine translation, the beginnings of which are associated with the name of the Russian linguist Belskaya. It took researchers many years, however, to understand that automatic translation is not an isolated problem, and that successful implementation requires the essential stage of understanding. Among the most significant results obtained by domestic scientists in the 1960s is M. Bongard's "Cortex" algorithm, which simulates the activity of the human brain in pattern recognition. In 1963 - 1970, methods of mathematical logic began to be applied to problem solving. A new approach to formal logic, based on reducing reasoning to a contradiction, appeared in 1965 (J. Robinson).
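The labyrinth model described above can be sketched as a breadth-first search over a toy state-space graph. The graph below and its state names are purely illustrative assumptions, not an example from the original text:

```python
from collections import deque

def search(graph, start, goal):
    """Breadth-first search for a shortest path from start to goal
    in a state space given as an adjacency dictionary."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path          # first path found by BFS is shortest
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                  # goal unreachable from start

# Hypothetical toy state space: "s" is the initial situation, "g" the goal.
graph = {"s": ["a", "b"], "a": ["g"], "b": ["a"]}
print(search(graph, "s", "g"))  # ['s', 'a', 'g']
```

Heuristic programming, mentioned below, amounts to replacing this blind enumeration with an ordering of the frontier by an evaluation function, so that fewer vertices need to be expanded.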
Based on the resolution method, which made it possible to prove theorems automatically given a set of initial axioms, the Prolog language was created in 1973.

In the USSR in 1954 - 1964, individual programs were created and the search for solutions to logical problems was investigated. In Leningrad (LOMI, the Leningrad Department of the V. A. Steklov Mathematical Institute) a program was created that automatically proves theorems (ALPEV LOMI). It is based on S. Yu. Maslov's original inverse method, similar to Robinson's resolution method. In 1965 - 1980 a new science was developed: situational control (corresponding to knowledge representation in Western terminology). The founder of this scientific school is Professor D. A. Pospelov. Special models for representing situations - knowledge representations - were developed.

Abroad, research in AI was accompanied by the development of new-generation programming languages and increasingly sophisticated programming systems (Lisp, Prolog, Planner, QA4, Macsyma, Reduce, Refal, ATNL, TMS). The results began to be used in robotics, in the control of stationary or mobile robots operating in real three-dimensional space, which raised the problem of creating artificial organs of perception.

Before 1968, researchers worked mainly with separate "microworlds", creating systems suited to such specific and limited applications as games, Euclidean geometry, integral calculus, the "blocks world", and the processing of simple, short phrases with a small vocabulary. Almost all of these systems used the same approach: reducing the combinatorics by cutting down the necessary enumeration of alternatives on the basis of common sense, numerical evaluation functions and various heuristics.

In the early 1970s there was a quantum leap in artificial intelligence research, due to two reasons. First:
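Robinson's resolution rule, on which Prolog is based, derives a new clause from two clauses that contain a complementary pair of literals. The propositional sketch below is our own illustration (clauses encoded as sets of string literals, with "-" marking negation), not the notation of any actual prover:

```python
def resolve(c1, c2):
    """Return all propositional resolvents of clauses c1 and c2.
    A clause is a frozenset of literals; '-p' is the negation of 'p'."""
    resolvents = []
    for lit in c1:
        # complementary literal: strip or add the negation sign
        neg = lit[1:] if lit.startswith("-") else "-" + lit
        if neg in c2:
            # drop the complementary pair, merge the remaining literals
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return resolvents

# From (p or q) and (not p) resolution derives (q).
print(resolve(frozenset({"p", "q"}), frozenset({"-p"})))
```

Deriving the empty clause (resolving `{"p"}` against `{"-p"}`) signals a contradiction, which is exactly how "reducing reasoning to a contradiction" proves a theorem from negated goal plus axioms.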
All researchers gradually realized that every previously created program lacked the most important thing: deep knowledge of the relevant field. The difference between an expert and an ordinary person is that the expert has experience in the field, i.e. knowledge accumulated over the years. Second: a specific problem arose - how can this knowledge be transferred to a program if its direct creator does not possess it? The answer is clear: the program itself must extract it from data received from the expert. Research on problem solving and natural language understanding is united by one common problem: the representation of knowledge. By 1970 many programs based on these ideas had appeared. The first of these was DENDRAL, a program designed to generate structural formulas of chemical compounds from mass-spectrometer data. It was developed at Stanford with the participation of the Nobel laureate J. Lederberg. The program gained experience in the course of its own operation; experts put many thousands of elementary facts into it, presented as separate rules. It was one of the first expert systems, and the results of its work are striking: at present the system is supplied to customers together with the spectrometer. In 1971 Terry Winograd developed SHRDLU, a system that simulates a robot manipulating blocks. One can converse with the robot in English; the system attends not only to the syntax of phrases but also correctly understands their meaning, thanks to semantic and pragmatic knowledge of its "blocks world". Since the mid-1980s artificial intelligence abroad has been commercialized: annual investment grows, industrial expert systems are built, and interest in self-learning systems increases. In our country, in 1980 - 1990,
active research was conducted in knowledge representation and knowledge representation languages, and expert systems were developed (more than 300). The REFAL language was created at Moscow State University. In 1988 the Association of Artificial Intelligence was founded, with more than 300 researchers as members; its president is D. A. Pospelov. The largest centers are in Moscow, St. Petersburg, Pereslavl-Zalessky and Novosibirsk.

1.2. The main directions of research in the field of artificial intelligence

At present AI is a rapidly developing and highly branched scientific field. In computational linguistics alone, more than 40 conferences are held worldwide every year. Almost every European country, as well as the USA, Canada, Japan, Russia and the countries of Southeast Asia, regularly hosts national conferences on AI. In Russia this event is held every two years under the auspices of the Russian Association for AI (RAAI); in addition, the International Joint Conference on AI (IJCAI) is held every two years. More than 3,000 periodicals publish scientific results in this area. There is no complete and strict classification of all areas of AI; an attempt to classify the tasks AI solves is shown in Fig. 3. According to D. A. Pospelov, two approaches dominate research in AI: the neurobionic and the informational (Fig. 4 and 5).

Fig. 3. Tasks of AI (general tasks: perception, natural language processing, common-sense reasoning, robot control; formal tasks: games such as chess, Go and puzzles, mathematics, geometry, program verification; expert tasks: engineering, scientific analysis, financial analysis, medical diagnostics)

Proponents of the first approach set themselves the goal of artificially reproducing the processes that take place in the human brain. This direction lies at the intersection of medicine, biology and cybernetics.
Researchers of this school study the human brain, identify how it works, and create technical means to reproduce biological structures and the processes occurring in them.

The field of AI can be conditionally divided into five major sections (see Fig. 4 - 8):
− neuron-like structures;
− programs for solving intellectual problems;
− knowledge-based systems;
− intellectual programming;
− intelligent systems.

Fig. 4. Neuron-like structures
Fig. 5. Programs for solving intellectual problems
Fig. 6. Knowledge-based systems
Fig. 7. Intelligent programming
Fig. 8. Intelligent systems

1.3. Philosophical aspects of the problem of artificial intelligence

The main philosophical problem in the field of artificial intelligence is the question: is it possible to simulate human thinking? If a negative answer to this question is ever received, all other questions in the field of AI will lose their meaning. Therefore the study of artificial intelligence starts from an assumed positive answer.

Arguments for the possibility of modeling human thinking:

1. Scholastic: the consistency of artificial intelligence with the Bible. Even those far from religion know the words of Holy Scripture: "And the Lord created man in his own image and likeness...". From these words we may conclude that since the Lord, first, created people, and second, they are essentially similar to him, people are quite capable of creating someone in the image and likeness of a person.

2. Biological. Creating a new mind biologically is quite an ordinary matter for humans. Observing children, we see that they acquire most of their knowledge through learning rather than having it embedded in advance. This statement has not been proven at the present level of science, but by external signs everything looks exactly this way.

3. Empirical.
What previously seemed the pinnacle of human creativity - playing chess and checkers, recognizing visual and sound images, synthesizing new technical solutions - turned out in practice to be not such a difficult task. Work now proceeds not at the level of whether these things can be implemented at all, but of finding the most efficient algorithm; often such problems are no longer even classified as problems of artificial intelligence. There is hope that a complete simulation of human thinking is also possible.

4. The possibility of self-reproduction. The ability to reproduce itself was long considered the prerogative of living organisms. However, some phenomena in inanimate nature (for example, the growth of crystals and the synthesis of complex molecules by copying) are very similar to self-reproduction. In the early 1950s J. von Neumann began a thorough study of self-reproduction and laid the foundations of the mathematical theory of "self-reproducing automata", theoretically proving the possibility of their creation. There are also various informal proofs of the possibility of self-replication; for programmers, perhaps the most striking proof is the existence of computer viruses.

5. Algorithmic. The fundamental possibility of automating the solution of intellectual problems by computer is provided by the property of algorithmic universality. This property means that any information-conversion algorithm can be implemented programmatically (i.e., represented as a computer program), and that the processes generated by these algorithms are potentially feasible, i.e. feasible as the result of a finite number of elementary operations. The practical feasibility of algorithms depends on the available means, which may change with the development of technology.
Thus, with the advent of high-speed computers, algorithms that were previously only potentially feasible became practically feasible. Moreover, this property is predictive in nature: whenever in the future any prescription is recognized as an algorithm, then regardless of the form and means in which it was initially expressed, it can also be specified in the form of a computer program.

One should not think, however, that computers and robots can in principle solve any problem. The analysis of various problems led mathematicians to a remarkable discovery: it was rigorously proved that there exist types of problems for which no single efficient algorithm solving all problems of the type is possible; in this sense, problems of such types cannot be solved by computing machines. This fact contributes to a better understanding of what machines can and cannot do. Indeed, a statement about the algorithmic unsolvability of a certain class of problems is not merely an admission that such an algorithm is unknown and has not yet been found. It is at the same time a forecast for all future time: such an algorithm will never be produced by anyone, because it does not exist.

AI can be considered among the tools (intellectual and otherwise) that mankind has created and mastered in the course of its historical development. These include:
− hand tools;
− machines and machine tools;
− language and speech;
− counting devices;
− computing and telecommunications facilities.

Philosophers argue that the production of tools (in the broadest sense) is the most important activity distinguishing our ancestors from other primates. Human beings stand out among animals in their ability to produce knowledge and tools.
No other technological or socio-political invention has caused such a huge separation between the development of homo sapiens and the other species of wildlife. The development of computer technology can broadly be divided into two areas: digital processing and symbolic processing. The first made information much more convenient to store, process and transmit than all previous improvements in paper technology; the computer surpassed all computing tools of the past (the abacus, the counting board, the adding machine) in speed, variety of functions and ease of use. By consistently expanding the automation of monotonous mental work, digital information processing extended the reach of the printing press and the industrial revolution to new frontiers. The second branch of computer technology, symbol processing (Newell and Simon's term), or artificial intelligence, has allowed the computer to mimic sensory perception and orientation, reasoning and problem solving, natural language processing, and other human abilities. In other words, AI is a new kind of toolkit, an alternative to existing ones. This reality has forced philosophers of AI to move beyond the question "Is it possible to create an intelligent machine?" to the problem of the influence of intellectual tools on society. Among other things, the possible social effects of AI development are considered:
− an increase in the intelligence level of society as a whole, yielding new discoveries, inventions and a new understanding of humanity itself;
− a change in the situation in which the majority of people serve as the means and instruments of production.

The next philosophical question of AI is the purpose of its creation. In principle, everything we do in practical life is usually aimed, in the end, at not having to do anything further.
However, with a sufficiently high standard of living (a large reserve of potential energy), it is no longer laziness (in the sense of a desire to save energy) but the search instincts that play the leading role. Suppose a person has managed to create an intellect exceeding his own (if not in quality, then in quantity). What will happen to humanity then? What role will people play? Why would they be needed? And is it necessary, in principle, to create AI at all?

Apparently, the most acceptable answer to these questions is the concept of the "intelligence amplifier" (IA). According to S. L. Sotnik, an analogy with the president of a state is appropriate here: he is not required to know the valence of vanadium or the Java programming language in order to decide on developing the vanadium industry. Everyone minds his own business: the chemist describes the technological process, the programmer writes the program, and the economist tells the president that by investing in industrial espionage the country will earn 20% per annum, and in the vanadium industry 30%. With the question put this way, anyone can make the right choice. In this example the president uses a biological intelligence amplifier: a group of specialists with their protein brains. But non-living intelligence amplifiers are already in use as well - for example, computers and on-board computing devices. Moreover, man has long used power amplifiers (PA), a concept analogous in many respects to the intelligence amplifier: cars, cranes, electric motors, presses, guns, airplanes and much else. The main difference between an intelligence amplifier and a power amplifier is the presence of will: the former can have its own "desires" and act differently from what is expected of it. Thus the question of the safety of AI systems arises: how can we avoid the negative consequences that accompany any new achievement of the scientific and technological revolution?
This problem has haunted mankind since the time of Karel Capek, who first used the term "robot". Other science fiction writers have also contributed greatly to its discussion. Best known is the series of stories by the science fiction writer and scientist Isaac Asimov, which contains the most developed and widely accepted solution to the safety problem: the three laws of robotics.

1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the commands given to it by a human, except where such commands contradict the first law.
3. A robot must protect its own safety, as long as this does not contradict the first and second laws.

Asimov later added a "Zeroth Law" to this list: "A robot may not harm humanity or, by its inaction, allow humanity to come to harm."

At first glance such laws, if fully observed, should ensure the safety of mankind. A closer look, however, raises questions. First, the laws are formulated in human language, which does not allow their simple translation into algorithmic form. Suppose this problem is solved. What, then, will an AI system understand by the term "harm"? Might it decide that the very existence of a person is continuous harm? After all, people smoke, drink, age and lose health over the years, and suffer. Would not the lesser evil be to end this chain of suffering quickly? Of course, additions concerning the value of life and freedom of expression could be introduced, but these would no longer be the simple three laws of the original. Further: what will an AI system decide in a situation where saving one life is possible only at the expense of another? Of particular interest are cases where the system does not have complete information about who is who. Nevertheless, despite these problems, the laws are a rather good informal basis for testing the reliability of safety mechanisms for AI systems.
So, is a reliable safety system possible? Based on the intelligence-amplifier concept, the following option can be proposed. According to numerous experiments, despite the lack of reliable data on what each individual neuron in the human brain is responsible for, many emotions correspond to the excitation of a group of neurons (a neural ensemble) in a quite predictable area. Inverse experiments have also been performed, in which stimulation of a certain area caused the desired result: emotions of joy, depression, fear or aggression. It therefore seems possible to take the degree of satisfaction of the host human brain as an objective function. If measures are taken to exclude self-destructive activity in a state of depression, and other special states of the psyche are provided for, the following is obtained. Since it is assumed that a normal person will not harm himself or, without particular reason, others, and the intelligence amplifier is now part of this individual (not necessarily as a physical unity), all three laws of robotics are fulfilled automatically. Safety issues then shift to the fields of psychology and law enforcement, since a (trained) system will not do anything its owner would not want.

Questions for self-control
1. What is artificial intelligence?
2. What scientific areas does artificial intelligence interact with?
3. Describe the approaches to understanding the subject of artificial intelligence as a scientific discipline.
4. Describe the current state of AI in Russia.
5. Describe the "pre-computer" stage of the development of artificial intelligence.
6. Describe the development of artificial intelligence in the 1940s.
7. Describe the development of artificial intelligence in the 1950s.
8. Describe the development of artificial intelligence in the 1960s.
9. Describe the development of artificial intelligence in the 1970s.
10.
Describe the development of artificial intelligence in the 1980s.
11. Describe the main tasks of artificial intelligence.
12. What sections are distinguished in the field of artificial intelligence?
13. Give the arguments for the possibility of modeling human thinking.
14. What justifies the transition to the problem of the influence of intellectual tools on society?
15. What causes the safety problem of artificial intelligence systems, and how can it be solved?

Literature
1. Luger, G. F. Artificial intelligence: strategies and methods for solving complex problems / George F. Luger; trans. from English. - M.: Williams Publishing House, 2003. - 864 p.
2. Kostrov, B. V. Fundamentals of artificial intelligence / B. V. Kostrov, V. N. Ruchkin, V. A. Fulin. - M.: DESS, Tekhbuk, 2007. - 192 p.
3. Website of the Russian Association of Artificial Intelligence. - Access mode: http://www.raai.org/
4. Sotnik, S. L. Fundamentals of designing artificial intelligence systems: lectures. - Access mode: http://newasp.omskreg.ru/intellect/f25.htm
5. Russell, S. Artificial intelligence: a modern approach / Stuart Russell, Peter Norvig. - M.: Williams Publishing House, 2006. - 1408 p.

CHAPTER 2. KNOWLEDGE REPRESENTATION MODELS

2.1. Knowledge

What types of knowledge are needed to enable "intelligent" behavior? The "secret" of the phenomenology of the knowledge model lies in the world around us. In general, a knowledge representation model must provide descriptions of the objects and phenomena that make up the subject area in which an intelligent agent has to work. The subject area is the part of reality associated with solving the problem. An intelligent agent is a system (a human or a program) possessing intellectual abilities. Knowledge comprises the discovered regularities of the subject area (principles, connections, laws); it has a more complex structure than data (metadata).
Knowledge is specified both extensionally (i.e., through a set of specific facts corresponding to a given concept and relating to the subject area) and intensionally (i.e., through the properties corresponding to a given concept and a scheme of relations between attributes).

Types of knowledge

Objects. A person usually represents knowledge as facts about surrounding objects. There must therefore be ways to represent objects and classes (categories, types) of objects, and to describe the properties and interactions of objects. One way of classifying objects is a class hierarchy. In addition, abstract objects, used to denote groups (sets, classes) of individuals, must be distinguished from individual objects.

Example: "Birds have wings", "Doves are birds", "Snow is white"; "This book is new" concerns an individual object.

Situations: all kinds of interactions between objects.

Example: "Yesterday it rained", "The train was 10 minutes late".

An example classification of situations, proposed by Paducheva, is shown in Fig. 9. In addition, to describe situations in their own right, the representation model must be able to place events on the time axis and capture their causal relations.

Fig. 9. An example of the classification of situations proposed by Paducheva (static situations: states, stable or temporary, and constant properties and relationships; dynamic situations: processes, incidents, results, events)

When representing a hierarchy of objects and relations, the main difficulty is the choice of the base, i.e. the property (attribute) by which the division is made. Even if a person easily distinguishes different types of objects and situations in everyday life, attempting a verbal classification usually presents a serious problem.

Procedures. Behavior (for example, riding a bicycle) requires knowledge that goes beyond declarative knowledge about objects and the relations between them.
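The object examples above ("Birds have wings", "Doves are birds") map naturally onto a class hierarchy in which properties stated for a class are inherited by its subclasses. A minimal sketch (the class names are taken from the example; the encoding itself is our own illustration):

```python
class Bird:
    has_wings = True       # "Birds have wings": a property of the class

class Dove(Bird):          # "Doves are birds": subclass relation
    pass

# An individual object ("this dove") inherits the class property.
d = Dove()
print(d.has_wings)  # True
```

The hierarchy makes the "choice of base" problem concrete: deciding which attribute (`has_wings`, habitat, size, ...) should drive the division into subclasses is exactly the difficulty noted above.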
This is knowledge of how to perform an action, called procedural knowledge, or experience (skill). Like riding a bicycle, most conscious behavior (such as communicating, understanding, or proving theorems) involves procedural knowledge, and it is often difficult to draw a clear line between procedural knowledge and knowledge about objects.

Example: the term "doctrinairism" describes a person who pretends to be a specialist but lacks procedural knowledge.

Meta-knowledge also includes what people know about their own abilities as knowledge processors: strengths, weaknesses, levels of experience in various areas, and a sense of progress in solving problems.

Classification of knowledge

By depth:
− surface knowledge (a set of empirical associations and cause-effect relations between the concepts of the subject area);
− deep knowledge (abstractions, images and analogies that reflect an understanding of the structure of the subject area and the relations between individual concepts).

By mode of existence:
− facts (well-known circumstances);
− heuristics (knowledge drawn from the experience of experts).

By rigidity:
− rigid knowledge (yields unambiguous, clear recommendations under given initial conditions);
− soft knowledge (admits multiple, vague solutions and various recommendations).

By form of representation:
− declarative knowledge (facts in the form of structured data sets);
− procedural knowledge (algorithms in the form of fact-processing procedures).

By mode of acquisition:
− scientific knowledge (obtained through systematic training and/or study);
− everyday knowledge (obtained in the course of life).

To put a knowledge base to use in solving applied problems, it must be formally described by means of mathematical models. As already mentioned, knowledge can be represented using declarative and procedural models.
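The declarative/procedural distinction can be sketched in a few lines: the facts are stored once as structured data, and a separate procedure derives answers from them. The genealogical example and the `ancestors` helper below are illustrative assumptions, not taken from the text:

```python
# Declarative knowledge: facts as a structured data set.
parents = {"Ivan": ["Pyotr"], "Pyotr": ["Sidor"]}

# Procedural knowledge: an algorithm that processes those facts.
def ancestors(person):
    """Derive the transitive closure of the 'parent' relation."""
    result = []
    for p in parents.get(person, []):
        result.append(p)
        result.extend(ancestors(p))   # recurse up the family tree
    return result

print(ancestors("Ivan"))  # ['Pyotr', 'Sidor']
```

Note that `ancestors` is not itself stored as a fact; it embodies know-how about traversing the fact base, which is exactly the procedural side of the classification above.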
Typical declarative models include network and frame models; typical procedural models are logical and production models. From the point of view of the approach to representing knowledge in a computer, knowledge representation models can be classified as follows:
− based on the heuristic approach: "troika", production, frame, and network models;
− based on the theoretical approach: models based on formal logic and models based on "human logic" (modal and multivalued logics).

2.2. Logical model of knowledge representation

Basic concepts of logic

Most people think that the word "logical" means "reasonable": if a person reasons logically, his reasoning is justified and he does not draw hasty conclusions. Logic is the science of the forms and methods of correct thinking. This means that, given the required number of true facts, the conclusion must always be true. Conversely, if a logical inference is invalid, a false conclusion may be drawn from true facts.

It is necessary to distinguish between formal and informal logic. A distinctive feature of informal logic is that it is used in everyday life. A complex logical proof is a chain of logical conclusions in which one conclusion leads to another, and so on. In formal logic, also called symbolic logic, what matters is how the inference is carried out and how the factors that establish the truth or falsity of the final conclusion are taken into account in a valid way. Logic also needs semantics to give meaning to symbols. Formal logic uses a semantics based not on words that carry an emotional load, but on the choice of meaningful names for variables, as in programming.

Like mathematics, logic directly studies not empirical but abstract objects. This raises the questions: What is the nature, or ontological status, of abstract objects? What kinds of abstract objects are we talking about?
In (classical) logic, two fundamental varieties of abstract objects are distinguished:
− concepts (properties);
− relations.

Concepts can be either simple or complex. Complex concepts are collections of relatively simpler concepts (simple properties) that are related to one another. More complex abstract objects are judgments, whose structural elements are concepts and certain relations. Judgments, in turn, are structural elements of inferences (systems of judgments), and inferences are structural elements of concepts and theories (systems of inferences). Fig. 10 shows this hierarchy of types of abstract objects in classical logic: relations and concepts (properties) form judgments; judgments form inferences (systems of judgments); inferences form concepts and theories (systems of inferences).

Fig. 10. Hierarchy of types of abstract objects in classical logic

The specificity of logic lies in the fact that it studies the most general, universal relationships between abstract objects. Accordingly, there is the following definition of logic: "Logic is the science of universal (generally valid) relationships between concepts, judgments, inferences and other abstract objects."

Example
"Student" is a concept. "Affiliation" is a property. "Diligent student", "Student of the 4th year" are relationships. "A person studies at a university" is a judgment. "If a person studies at an institute, then he is either a student or a graduate student" is an inference. "First-order predicate calculus theory" is a concept.

Concept

Concepts are abstract objects accessible to human understanding as simple and complex properties (features) of empirical objects. The concept is opposed to such entities as "word", "perception" and "empirical object". The concept is a universal unit of thinking and the basis of intellectual activity. The most important characteristics of a concept are its content and its scope.
All logical characteristics and logical operations on concepts follow from the law of the inversely proportional relationship between the content and the scope of a concept. Any concept has a scope (conceptual volume) and a complement to that scope (Fig. 11, 12). The scope of a concept is the set of all those empirical (individual) objects to which the concept applies (as a property or feature). The complement to the scope is the set of all those empirical objects to which the concept does not apply.

Fig. 11. Concept X, the scope of concept X, and elements of the scope (a1, a2, a3)

Fig. 12. A scope and its complement (X and not-X)

Example
Concept: factual data model.
Scope of the concept: relational, network, and hierarchical data models.
Complement to the scope: documentary data models (descriptor, thesaurus, and document-format-oriented data models).

Concepts can be of the following types:
1) by scope:
a. singular (scope of exactly 1 element: KAMAZ);
b. general (scope of more than 1 element: Moscow Automobile Plant);
2) by the existence of elements:
a. non-empty (student);
b. empty (kolobok);
3) by the structure of elements:
a. non-collective (North Pole);
b. collective (debtor);
4) by content:
a. non-relative (audience);
b. correlative (parents);
5) by the presence of qualities, properties, relationships:
a. positive (virtue);
b. negative (offence);
6) by the quality of the elements:
a. recordable (the journal "Open Systems", 1/2008);
b. non-recordable (intelligentsia), abstract;
7) by the nature of the object:
a. specific (pen);
b. abstract (model).

Based on the listed types, it is possible to give a logical description of any concept, that is, to show its use in all seven senses. For example, the concept "debtor" is general, non-empty, collective, correlative, positive, non-recordable and specific.
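The scope of a concept and its complement (Fig. 11, 12) map directly onto set operations. A minimal sketch using the data-model example above:

```python
# Scope (volume) of a concept and its complement within a universe U,
# mirroring Fig. 11-12; the data-model example from the text is used.

U = {"relational", "network", "hierarchical", "descriptor", "thesaurus"}

# Scope of the concept "factual data model"
scope_factual = {"relational", "network", "hierarchical"}

# Complement to the scope: the objects of U not falling under the concept
complement = U - scope_factual
print(sorted(complement))  # → ['descriptor', 'thesaurus']
```

The scope and its complement always partition the universe: their union is U and their intersection is empty.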
Basic methods of comprehending concepts

The main methods of comprehending a concept include:
− abstraction;
− comparison;
− generalization;
− analysis;
− synthesis.

Abstraction is the mental selection (understanding) of a certain property or relation by abstracting from the other properties or relations of an empirical object. Comparison is the establishment of similarities or differences between objects. Generalization is the mental selection of a concept by comparing other concepts. Abstraction, comparison and generalization are techniques closely related to one another; they can be called "cognitive procedures". Comparison is impossible without abstraction; generalization involves comparison and is at the same time a kind of complex abstraction, and so on. Analysis is the mental division of an empirical or abstract object into its constituent structural components (parts, properties, relations). Synthesis is the mental union of various objects into some integral object.

Examples
1. Comparing people by height involves abstraction to single out the property "height" of the concept "person".
2. Generalization: "chair" and "table" - "furniture".

Correlation of concepts

To explain the relationships between concepts, one can use diagrams in the form of Euler circles (Fig. 13).

Examples
Identical (equivalent): Kazan - the capital of Tatarstan. Independent (crossing): passenger - student. Subordination: tree - birch. Opposite (contrariety): white - black. Contradictory: white - not white. Co-subordination (subcontrariety): officers (major - captain).

The logical division of a concept is the division of the scope of the concept into non-intersecting parts on the basis of some attribute.

In terms of scopes (M(X) denotes the scope of concept X, U the universe), Fig. 13 distinguishes the following relations between concepts X and Y:
− incompatible: M(X) ∩ M(Y) = ∅;
− contradictory: Y = not-X, so M(X) ∩ M(Y) = ∅ and M(X) ∪ M(Y) = U;
− compatible: M(X) ∩ M(Y) ≠ ∅;
− independent (crossing): M(X) ∩ M(Y) ≠ ∅, M(X) ∩ M(Y) ≠ M(X), M(X) ∩ M(Y) ≠ M(Y);
− identity (equivalence): M(X) = M(Y);
− X subordinate to Y: M(X) ∩ M(Y) = M(X).

Fig. 13. Correlation of concepts
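The scope relations of Fig. 13 can be checked mechanically with set operations. The following sketch (the function name and sample scopes are illustrative) classifies the relation between two concepts by comparing their scopes:

```python
# Set-based classification of the relation between two concepts X and Y
# (a sketch; mx and my stand for the scopes M(X), M(Y), universe for U).

def relation(mx, my, universe):
    if mx == my:
        return "identity"
    if mx <= my or my <= mx:
        return "subordination"
    if not (mx & my):
        # disjoint scopes: contradictory if together they exhaust U
        return "contradictory" if (mx | my) == universe else "incompatible"
    return "independent (crossing)"

U = set(range(10))
print(relation({1, 2}, {1, 2}, U))                    # → identity
print(relation({1}, {1, 2, 3}, U))                    # → subordination
print(relation({1, 2}, {2, 3}, U))                    # → independent (crossing)
print(relation(set(range(5)), set(range(5, 10)), U))  # → contradictory
```

Identity is tested before subordination because equal scopes also satisfy the subset test.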
In a logical division there are:
− the generic concept X;
− the members of the division (species concepts A and B);
− the base of the division (i.e., the attribute).

Three rules of logical division:
1. The rule of incompatibility. The scopes of the species concepts must not intersect (i.e., the members of the division must be mutually incompatible).
2. The rule of a single base. One cannot divide by several bases at once.
3. The rule of proportionality. The sum of the scopes of the species concepts must be equal to the scope of the generic concept.

Dichotomous division (the most rigorous form) is the division of a concept according to the principle of contradiction (A, not-A).

Classifications are certain systems (ordered sets) of species concepts. Classifications are used to search for new relationships between concepts, as well as to systematize existing knowledge.

Example
1. The periodic table is an example of the scientific classification of chemical elements.
2. An example of a classification of information systems (IS) is shown in Fig. 14. The base of the division is functional purpose: IS are divided into factographic systems (A - the IS "University"), artificial intelligence systems (B - Lingvo) and documentary systems (C - "Consultant Plus").

Fig. 14. Example of classification

The techniques for comprehending concepts (abstraction, comparison, generalization, analysis, synthesis, division) are universal and fundamental cognitive procedures that have not yet been successfully modeled within the framework of artificial intelligence. This is one of the fundamental sections of classical logic that must be integrated into the theory of knowledge bases. After that, the modeling of such mental acts as forming hypotheses and teaching declarative knowledge will become feasible, and inference procedures will become more capacious.
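The rules of incompatibility and proportionality can be checked computationally once the scopes are represented as sets (the second rule, a single base, concerns the meaning of the dividing attribute and is not captured by sets alone). A sketch with illustrative data:

```python
# Checking rules 1 (incompatibility) and 3 (proportionality) of logical
# division (a sketch; the generic concept and species are illustrative).

def check_division(generic, species):
    flat = [x for s in species for x in s]
    no_overlap = len(flat) == len(set(flat))   # rule 1: scopes do not intersect
    proportional = set(flat) == generic        # rule 3: scopes sum to generic
    return no_overlap, proportional

# Dichotomous division (A, not-A) satisfies both rules by construction:
animals = {"dove", "sparrow", "shark", "pike"}
birds, not_birds = {"dove", "sparrow"}, {"shark", "pike"}
print(check_division(animals, [birds, not_birds]))  # → (True, True)
```

An overlapping or incomplete division fails the corresponding check, which is exactly how classification errors manifest themselves.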
Judgment

A judgment is a structurally complex object that reflects the objective relationship between an object and its property. The judgment is opposed to such entities as "sentence", "perception" and "scene from the real world".

Example. The following sentences express the same judgment:
− "A shark is a predatory fish";
− "All sharks are predatory fish";
− "The predatory fish include sharks."

Classical logic considers the structure of a simple judgment in a slightly different interpretation than is customary in modern logico-linguistic studies. In accordance with the ideas of classical logic about the structure of a judgment, a simple judgment is an abstract object whose main structural elements are:
− an individual concept (IC);
− a predicate concept (PC);
− a predication relation (RP).

Example
Consider the sentence "Plato is a philosopher", which expresses the judgment S. In it, "Plato" is the logical subject, i.e. a symbol denoting the individual concept of the judgment S; "philosopher" is the logical predicate, i.e. a symbol denoting the predicate concept of the judgment S; "is" is the subject-predicate link, i.e. a symbol denoting the predication relation.

Thus, we can draw the following intermediate conclusions:
− an individual concept is a concept considered as the conceptual representation of some individual empirical object;
− a predicate concept is a concept considered as a property of a particular empirical object;
− the relation of predication is the relation that connects the individual and predicate concepts of some empirical object into an integral abstract object.

In addition, several types of simple judgments can be distinguished (see Fig. 15). There are several ways to formalize elementary judgments.

The 1st way is natural language, which is traditionally considered cumbersome and inexact, although a formal method comparable to natural language in universality has not yet been invented.
Simple judgments are attributive ("Monks, as a rule, are modest"), about relations ("Magnitogorsk is to the south of Chelyabinsk") and of existence ("There are blue fir-trees").

Fig. 15. Types of simple judgments

The 2nd way is traditional Aristotelian logic. The 3rd way is modern symbolic logic.

The main types of complex judgments

In addition to the judgments expressed in Aristotelian logic by statements of the forms A, E, I, O (see the section on Aristotle's logic), there are various kinds of complex judgments. The more complex the judgment, the more difficult it is to formalize it accurately by means of traditional Aristotelian logic, and in some cases such formalization is simply impossible. Therefore, the analysis of the logical structure of complex judgments is best carried out by means of modern symbolic logic, including propositional logic and predicate logic (see the relevant paragraphs below). The main types of complex judgments are:
− conjunctive;
− disjunctive;
− implicative;
− modal:
o alethic (necessarily, possibly, by chance);
o epistemic (I know, I believe, I suppose);
o deontic (permitted, forbidden);
o axiological (good, bad);
o temporal (in the past, earlier, yesterday, tomorrow, in the future);
− questions:
o whether-questions;
o what-questions.

There is also a continuity between the classes of logics and the methods of artificial intelligence.

Inference

By inference (in traditional logic) is meant a form of thinking by which a mental transition (called "inference") is made from one or more judgments (called "premises") to some other judgment (called the "conclusion"). Thus, an inference is a complex abstract object in which one or more judgments are combined into a single whole by means of certain relations. The term "syllogism" is used to denote an inference in logic. Syllogisms are either formal or informal. The first formal syllogisms were used by Aristotle. The syllogistic he developed (the theory of formal syllogisms, i.e.
inferences) had a significant impact on the development of ancient and scholastic logic and served as the basis for the creation of the modern logical theory of inference. To consolidate the concepts of logic, complete the exercises on page 78.

Laws of logic

The most important logical laws include the laws of:
− identity (any object is identical only to itself);
− non-contradiction (statements that contradict each other cannot be true at the same time);
− the excluded middle (of two mutually contradictory statements, one is true, the other is false, and a third is not given);
− sufficient reason (any true statement has a sufficient reason by virtue of which it is true and not false).

Let us consider each of these laws in more detail.

I. The law of identity

The law of identity states that every thought is identical to itself: "A is A" (A → A), where A is any thought. For example: "Table salt NaCl consists of Na and Cl." If this law is violated, the errors listed below are possible.

Amphiboly (from the Greek amphibolos - ambiguity, duality) is a logical error based on the ambiguity of linguistic expressions. Another name for this error is "thesis substitution".

Example
"It is rightly said that your tongue will lead you to Kyiv. And I bought a smoked tongue yesterday. Now I can safely go to Kyiv."

Equivocation is a logical error based on the use of the same word in different meanings. Equivocation is often used as an artistic rhetorical device. In logic, this technique is also called "concept substitution".

Example
"The old sea wolf is really a wolf. All wolves live in the forest." Here the error is due to the fact that in the first judgment the word "wolf" is used as a metaphor, while in the second premise it is used in its direct meaning.

Logomachy is a dispute about words, in which the participants of a discussion cannot come to a common point of view because they have not clarified the original concepts.
Thus, the law of identity expresses one of the most important requirements of logical thinking - certainty.

II. The law of non-contradiction

This law expresses the requirement that thinking be non-contradictory. The law of non-contradiction says: two judgments, one of which asserts something about the subject of thought ("A is B") and the other of which denies the same thing about the same subject ("A is not B"), cannot both be true at the same time, if the attribute B is affirmed or denied of the subject of thought A considered at the same time and in the same respect. For example, the judgments "The Kama is a tributary of the Volga" and "The Kama is not a tributary of the Volga" cannot both be true if they refer to the same river. There is no contradiction if we affirm something and deny the same thing about the same person considered at different times. Thus, the judgments "This person is a student of Moscow State University" and "This person is not a student of Moscow State University" can both be true if the first refers to one time (when this person studies at the university) and the second to another (after he has graduated).

The law of non-contradiction indicates that of two opposing judgments, one is necessarily false. But since it extends both to contrary and to contradictory judgments, the question of the second judgment remains open: the law only forbids the two being true together - paper cannot be both white and non-white.

III. The law of the excluded middle

The law of the excluded middle states that two contradictory judgments cannot both be false: one of them is necessarily true, the other is necessarily false, and a third judgment is excluded, i.e. either A is true or not-A.
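Both laws can be verified exhaustively, since a propositional variable takes only two truth values. A minimal sketch:

```python
# Exhaustive check of the law of non-contradiction, ¬(A ∧ ¬A),
# and the law of the excluded middle, A ∨ ¬A, over both truth values.

non_contradiction = all(not (A and not A) for A in (True, False))
excluded_middle = all(A or not A for A in (True, False))
print(non_contradiction, excluded_middle)  # → True True
```

In classical two-valued logic both formulas are tautologies; logics that reject the excluded middle (e.g., intuitionistic logic) drop the second check.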
The law of the excluded middle formulates an important requirement for thinking: one must not deviate from recognizing the truth of one of two contradictory statements and look for something third between them. If one of them is recognized as true, the other must be recognized as false, and no third should be sought. Example: animals can be either vertebrate or invertebrate; there can be no third option.

IV. The law of sufficient reason

The content of this law can be expressed as follows: in order to be considered completely reliable, any proposition must be proven, i.e. sufficient grounds must be known by virtue of which it is considered true. A sufficient reason may be another thought, already tested by practice and recognized as true, from which the truth of the proposition being proved necessarily follows.

Example. The justification for the proposition "The room is getting warmer" is the fact that the mercury in the thermometer is expanding.

In science, the following are considered sufficient grounds: a) statements about verified facts of reality; b) scientific definitions; c) previously proven scientific statements; d) axioms; and also e) personal experience.

Logical inference

Logical inference is the derivation of a formula from a set of other logical formulas by applying inference rules. An interpreter of logical expressions uses logical inference to build the necessary chain of computations on the basis of the original description. The significance of the logical approach lies in the possibility of constructing an interpreter whose operation does not depend on the particular logical formulas. Rules in the logical representation look like this:

P0 ← P1, …, Pn.

P0 is called the goal, and P1, P2, …, Pn the body of the rule. The predicates P1, P2, …, Pn are the conditions that must be met for the achievement of the goal P0 to succeed.
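A rule P0 ← P1, …, Pn can be interpreted by a tiny backward-chaining procedure: to achieve the goal P0, all conditions in the body of some rule for P0 must in turn be achieved. The sketch below is illustrative only; the rule base and predicate names are invented, not taken from the text:

```python
# A minimal backward-chaining sketch for rules of the form P0 ← P1, ..., Pn.
# A fact is a rule with an empty body. Rule base is illustrative.

rules = {
    "mortal": [["human"]],   # mortal ← human
    "human":  [[]],          # fact: human (empty body)
}

def prove(goal):
    # The goal succeeds if some rule for it has a body whose every
    # condition can itself be proved (all() over an empty body is True).
    return any(all(prove(sub) for sub in body)
               for body in rules.get(goal, []))

print(prove("mortal"))    # → True
print(prove("immortal"))  # → False
```

This is the evaluation strategy behind logic-programming interpreters: the same rule base can be queried with different goals without rewriting the procedure.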
Let us analyze the basics of logical inference using the example of a procedure for determining the correctness of reasoning.

Definition of logically correct reasoning

When we say that one sentence D logically follows from another sentence P, we mean the following: whenever the sentence P is true, the sentence D is also true. In propositional logic, we deal with formulas P and D that depend on some variables X1, X2, …, Xn.

Definition. We say that the formula D(X1, X2, …, Xn) logically follows from the formula P(X1, X2, …, Xn), and write P ├ D, if for any set of values X1, X2, …, Xn such that P(X1, X2, …, Xn) = I (true), the condition D(X1, X2, …, Xn) = I is also satisfied. The formula P is called the premise, and D the conclusion, of the logical reasoning. Usually not one premise P but several are used; in this case the reasoning is logically correct if the conclusion logically follows from the conjunction of the premises.

Checking the correctness of logical reasoning

The first way is by definition:
a) write down all premises and the conclusion as propositional logic formulas;
b) form the conjunction of the formalized premises P1 & P2 & … & Pn;
c) check on the truth table whether the conclusion D follows from the formula P1 & P2 & … & Pn.

The second method is based on the following criterion of logical consequence: "The formula D logically follows from the formula P if and only if the formula P → D is a tautology." Checking the correctness of logical reasoning then comes down to answering the question: is this formula a tautology? This question can be answered by constructing a truth table for the formula, or by reducing it, with the help of equivalent transformations, to a well-known tautology.

The third method of checking the correctness of logical reasoning will be called abbreviated, because it does not require exhaustive enumeration of variable values to build a truth table.
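The first two methods reduce to enumerating valuations: D follows from the premises exactly when no valuation makes all premises true while D is false (equivalently, the implication from the conjunction of premises to D is a tautology). A small sketch, using modus ponens as the illustrative example:

```python
from itertools import product

# D follows from premises P1..Pn iff (P1 ∧ ... ∧ Pn) → D is a tautology,
# checked here by enumerating all 2^n valuations (a sketch).

def follows(premises, conclusion, n_vars):
    for values in product((True, False), repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # counterexample valuation found
    return True

# Modus ponens: from X → Y and X, conclude Y.
premises = [lambda x, y: (not x) or y,  # X → Y
            lambda x, y: x]             # X
print(follows(premises, lambda x, y: y, 2))  # → True
```

The enumeration is exponential in the number of variables, which is exactly why the text's abbreviated third method is worth having.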
To justify this method, we formulate the condition under which reasoning is incorrect. Reasoning is incorrect if there is a set of values of the variables X01, X02, …, X0n such that every premise P(X01, X02, …, X0n) = I (true) while the conclusion D(X01, X02, …, X0n) = L (false).

Example. The following reasoning is given: "If it is raining, then the cat is in the room or in the basement. The mouse is in the room or in its hole. If the cat is in the basement, then the mouse is in the room. If the cat is in the room, then the mouse is in its hole and the cheese is in the refrigerator. It is raining now and the cheese is on the table. Where is the cat and where is the mouse?"

Let us introduce the following designations: D - "it is raining"; K - "the cat is in the room"; P - "the cat is in the basement"; M - "the mouse is in the room"; N - "the mouse is in its hole"; X - "the cheese is in the refrigerator"; ¬X - "the cheese is on the table". We get the following reasoning scheme:

D → K ∨ P
M ∨ N
K → N & X
P → M
D & ¬X
---

Let us use the rules of inference:
1) D & ¬X ├ D;
2) D & ¬X ├ ¬X;
3) D → K ∨ P, D ├ K ∨ P.

Next, consider two options.

Option A. Suppose K holds. Then
4a) K, K → N & X ├ N & X;
5a) N & X ├ X;
6a) ¬X, X ├ X & ¬X - we have obtained a contradiction, which means the assumption was wrong and this option is impossible.

Option B. Suppose P holds. Then
4b) P, P → M ├ M;
5b) P, M ├ P & M.

The conclusion P & M is obtained, i.e. "the cat is in the basement, and the mouse is in the room".

Example
Check the correctness of the following reasoning in the abbreviated way: "If there is frost today, I will go to the skating rink. If there is a thaw today, I will go to the disco. Today there will be frost or a thaw. Therefore, I will go to the disco."

Solution. We formalize the problem by introducing the notation: M - "there will be frost today"; K - "I will go to the skating rink"; O - "there will be a thaw today"; D - "I will go to the disco".
The reasoning scheme has the form:

M → K
O → D
M ∨ O
---
D

The reasoning is logically correct if, for every set of values of the variables (M, K, O, D) for which all the premises are true, the conclusion is also true. Assume the opposite: there is a set (M0, K0, O0, D0) such that the premises are true and the conclusion is false. Applying the definitions of the logical operations, let us try to find this set. We can verify that the assumption holds for the values M0 = I, K0 = I, O0 = L, D0 = L (Table 1 traces this derivation step by step using the definitions of disjunction and implication). Therefore, the reasoning is not logically correct.

Table 1. Scheme for solving the logical problem

Another way to solve the problem is to build a truth table for the formula (M → K) & (O → D) & (M ∨ O) → D and make sure that it is not a tautology; then, by the criterion of logical consequence, the reasoning is not logically correct. Since four propositional variables (M, K, O, D) are involved, the truth table will contain 16 rows, and this method is time-consuming. With the help of inference rules it is possible to construct a logically correct reasoning, but it is not always possible to prove the incorrectness of a reasoning. Therefore, for this problem, the abbreviated method of checking the correctness of logical reasoning is the most convenient. To consolidate the rules of logical inference, complete the exercises on page 78.

The main sections of modern symbolic logic

In the development of classical logic, three main stages are distinguished: ancient logic (about 500 BC - the beginning of AD), scholastic logic (the beginning of AD - the first half of the 19th century), and modern symbolic logic (the mid-19th to 20th centuries). Modern symbolic logic is divided into the main sections whose essence is disclosed below.

Propositional logic (propositional calculus).
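The abbreviated method amounts to searching for a counterexample valuation. The sketch below performs that search mechanically and finds exactly the set M0 = I, K0 = I, O0 = L, D0 = L from the solution above:

```python
from itertools import product

# Abbreviated method as counterexample search: find the valuations that make
# all premises of the frost/thaw reasoning true and the conclusion D false.
impl = lambda a, b: (not a) or b

counterexamples = [
    dict(M=M, K=K, O=O, D=D)
    for M, K, O, D in product((True, False), repeat=4)
    if impl(M, K) and impl(O, D) and (M or O) and not D
]
print(counterexamples)  # → [{'M': True, 'K': True, 'O': False, 'D': False}]
```

A single counterexample suffices to show the reasoning is not logically correct; here the search also shows it is the only one.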
Propositional logic studies simple judgments, considered without regard to their internal structure, as well as elementary inferences, those most accessible to human understanding. In natural language, such simple judgments are represented by sentences that are considered only from the point of view of their truth or falsity, and inferences are represented by the corresponding systems of statements.

Predicate logic (predicate calculus). Its more complex objects of study are judgments considered with regard to their internal structure. The section of logic that studies not only the connections between propositions but also the internal conceptual structure of propositions is called the "logic of predicates".

Metalogic. Metalogic is an extension of predicate logic. The subject of its study is the whole sphere of relations: all those universal relations that can hold between concepts, judgments and inferences, as well as between the symbols that designate them.

The following paragraphs present the key positions of propositional logic and first-order predicate logic. To better understand modern logics, it is necessary to consider the main provisions defined by Aristotle's syllogisms.

Aristotle's logic

In Aristotle's logic, the structure of elementary judgments is expressed by the forms:
− S is P (1);
− S is not P (2),
where S is a logical subject (from the Latin subjectum) and P is a logical predicate (from the Latin praedicatum). The words "is" and "is not" serve as the subject-predicate link. From statements (1) and (2), with the help of the words "all" and "some", statements of the following forms are constructed:
− all S are P: type A (AffIrmo);
− some S are P: type I (AffIrmo);
− no S is P: type E (nEgO);
− some S are not P: type O (nEgO).

The types of judgments in Aristotle's logic are listed below.

1. General affirmative judgments - A: "All S are P" - All poets are impressionable people.

2.
General negative judgments - E: "No S is P" - No person is omniscient.

3. Particular affirmative judgments - I: "Some S are P" - Some people have curly hair.

4. Particular negative judgments - O: "Some S are not P" - Some people cannot listen.

Statements of the types A, E, I, O are simple categorical statements that form the foundation of all Aristotelian logic. Between the truth and falsity of statements of the types A, E, I, O there is a functional relationship, which is usually depicted as a logical square (Fig. 16, Table 2). When using the logical square, it is important to take into account the following subtlety: the word "some" is understood here in the broad sense - as "some, and maybe all".

Table 2. Truth relations between the judgments of Aristotle's logic (the truth of A entails the truth of I and the falsity of E and O; the truth of E entails the truth of O and the falsity of A and I)

Fig. 16. Logical square

Explanations to Aristotle's logical square. In the upper left corner of the logical square are statements of type A (general affirmative). In the upper right corner are statements of type E (general negative). In the lower left corner (under A) are statements of type I (particular affirmative). In the lower right corner (under E) are statements of type O (particular negative). Statements of types A and O, as well as statements of types E and I, are mutually contradictory (the diagonal relations). Statements of types A and E are in the relation of contrariety, or opposition. Statements of type I are subordinate to (and therefore implied by) statements of type A; statements of type O are subordinate to statements of type E. While contradictory statements have opposite truth values (one is true, the other false), contrary statements cannot both be true, but they can both be false.

With the help of the logical square, one can derive judgments that are contrary, contradictory or subordinate to given ones, establishing their truth or falsity.

Example
1. "Any judgment is expressed in a sentence" - A → true.
2. "No judgment is expressed in a sentence" - E → false.

3.
"Some judgments are not expressed in a sentence" - O → false. 4. "Some judgments are expressed in a sentence" - I → true.

In addition, Aristotle's logical square can be used to establish the types of relationships between judgments: 1) for obtaining inferential knowledge; 2) for comparing different points of view on debatable issues; 3) for editing texts, and in other cases.

Propositional calculus formalisms

Many models of knowledge representation are based on the formalisms of propositional and predicate calculus. A rigorous exposition of these theories from the point of view of classical mathematical logic is contained in the works of Shoenfield; a popular exposition, which can be recommended as an initial introduction, can be found in Thayse and Pospelov. By Thayse's definition, logical propositions are the class of natural language sentences that can be true or false, and propositional calculus is the branch of logic that studies such sentences. A natural question arises: what about the sentences of a language about whose truth nothing definite can be said?

Example. "If it rains tomorrow, I will stay at home."

For now, we will simply assume that all sentences we have to deal with belong to the class of logical propositions. Propositions will be denoted by capital letters of the Latin alphabet, with an index if the presentation requires it. Examples of notation for propositions: S, S1, S2, H, H1, H2. As already noted, a logical proposition is either true or false. A true proposition is assigned the logical value TRUE (or I), a false one the logical value FALSE (or L). Thus, the truth values form the set {I, L}.

In the propositional calculus, five logical connectives are introduced (Table 3), with the help of which logical formulas are composed in accordance with the construction rules.

Table 3. Logical connectives

Common name | Designation | Type | Other notations
Negation | ¬ | unary | −, ~, NOT
Conjunction | ∧ | binary | &, ·, AND*
Disjunction | ∨ | binary | |, OR
Implication | → | binary | =>, ⊃
Equivalence | ↔ | binary | <=>, ~

* Note: the conjunction connective AND should not be confused with the truth value I.

The set of rules for constructing logical formulas from propositions includes three components:
− basis: any proposition is a formula;
− induction step: if X and Y are formulas, then ¬X, (X ∧ Y), (X ∨ Y), (X → Y) and (X ↔ Y) are formulas;
− constraint: a formula is obtained only by the rules described in the basis and the induction step.

Formulas are denoted by capital letters of the Latin alphabet with indices. Examples of logical formulas are given below.

Examples
a) T = S1 ∧ S2;
b) N = ¬H1 ∨ H2.

Expression a) can be read as follows: "The logical formula T is the conjunction (logical connective AND) of the logical propositions S1 and S2." The interpretation of expression b) is: "The logical formula N is the disjunction (logical connective OR) of the negation (NOT) of the logical proposition H1 and the logical proposition H2."

The truth value of a logical formula is a function of the truth values of its constituent propositions and can be uniquely determined using truth tables. The truth tables for negation and for the binary connectives are given below (Tables 4, 5).

Table 4. Truth table for negation
X | ¬X
I | L
L | I

Table 5. Truth table for the binary connectives
X Y | X∧Y X∨Y X→Y X↔Y
I I | I I I I
I L | L I L L
L I | L I I L
L L | L L I I

Thus, if the truth values of the propositions from example a) are known, for example S1 = I, S2 = L, then the truth value of the formula T can be found at the intersection of the second row and the third column of Table 5, that is, T = L.

First-order predicate logic

Relations between objects are described using special mathematical constructs called predicates, and predicate calculus is the branch of logic that studies them.
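Returning to the propositional connectives: the truth tables of Tables 4 and 5 can be generated programmatically. A sketch (the helper names are illustrative) that prints the same rows in the I/L notation:

```python
# Generating the truth tables of Tables 4-5 for the binary connectives
# (a sketch; tv() renders booleans in the text's I/L notation).

def row(x, y):
    return (x and y,        # conjunction  X∧Y
            x or y,         # disjunction  X∨Y
            (not x) or y,   # implication  X→Y
            x == y)         # equivalence  X↔Y

tv = lambda b: "I" if b else "L"
print("X Y | X∧Y X∨Y X→Y X↔Y")
for x, y in ((True, True), (True, False), (False, True), (False, False)):
    print(tv(x), tv(y), "|", " ".join(tv(v) for v in row(x, y)))
```

The second data row reproduces the example from the text: for S1 = I, S2 = L the conjunction evaluates to L.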
Any logic is a formal system, for which the following must be defined:
− the alphabet of the system: a countable set of symbols;
− the formulas of the system: a subset of all words that can be formed from the symbols of the alphabet (usually a procedure is given for building formulas from the symbols of the alphabet);
− the axioms of the system: a distinguished set of formulas of the system;
− the inference rules of the system: a finite set of relations between formulas of the system.

The vocabulary of the predicate calculus in the standard presentation includes the following concepts:
− variables (we denote them by the last letters of the English alphabet: u, v, x, y, z);
− constants (we denote them by the first letters of the English alphabet: a, b, c, d):
  o individual constants;
  o functional constants;
  o predicate constants;
− statements;
− logical connectives (¬ (negation), conjunction, disjunction, implication);
− quantifiers: ∃ (existence), ∀ (generality);
− terms;
− functional forms;
− predicate forms;
− atoms;
− formulas.

Individual constants and individual variables. These are similar to the constants and variables of ordinary calculus, with the only difference that their range consists of individuals rather than real numbers. In the theory of artificial intelligence, the named constants and variables in an agent's memory that correspond to objects and concepts of the real world are usually called concepts. In first-order languages variables range only over individuals, so they are called simply variables. As will be shown below, using first-order languages and rejecting higher-order languages imposes additional restrictions on the class of natural-language sentences under consideration. Individual constants will be denoted by the lowercase letters a, b, c, u, v, w of the Latin alphabet with indices, or by mnemonic names taken from the text. Variables will be denoted by the lowercase letters x, y, z of the Latin alphabet with indices. Example.
Individual constants: a1, b1, c, u, v1, seller_w, k22, purchase_l, m10, book_a1. Variables: x, y2, z33.

Predicate constants. Predicate constants are used to denote the relation that a predicate describes. A predicate constant does not change its truth value. Combined with a suitable number of arguments or parameters, called terms, it forms a predicate form. Predicate constants are designated by mnemonic names or by the Latin letter P with indices. The language of predicates contains the language of propositions, since a proposition is nothing but a predicate constant with no arguments, that is, a null-place predicate form. The semantic domain of a predicate form coincides with the range of a statement, i.e. {T, F}.

Function constants. A function constant (f, g, h), like a predicate constant, forms a functional form when combined with a suitable number of terms. The difference between a functional form and a predicate form is that its semantic domain is the set of individual constants. A null-place function constant is just an individual constant. Logical connectives in the predicate calculus serve to form formulas.

Quantifiers. The predicate calculus uses two quantifiers: the universal quantifier (∀) and the existential quantifier (∃). The expression ∀xP is read "for any x, P is true". The expression ∃xP is read "there exists an x for which P is true".

A term is an expression formed from variables and constants, possibly with the use of functions. Terms, forms, atoms and formulas in the predicate calculus are built by the following rules:
− any variable or constant is a term;
− if t1, ..., tn are terms and f is an n-place function symbol, then f(t1, ..., tn) is a term;
− there are no other terms.

In fact, all objects in first-order predicate logic are represented precisely as terms. If a term contains no variables, it is called a ground (constant) term.
Thus a term is any variable or any functional form. A functional form is a functional constant paired with a suitable number of terms: if f is an n-place function constant and t1, ..., tn are terms, then the corresponding form is denoted f(t1, ..., tn); if n = 0, one writes simply f. A predicate form is a predicate constant combined with a suitable number of terms: if p is an m-place predicate constant and t1, ..., tm are terms, then the corresponding form is denoted p(t1, ..., tm). An atom is a predicate form or an equality, i.e. an expression of the form (s = t), where s and t are terms. An atomic (elementary) formula is obtained by applying a predicate to terms; more precisely, it is an expression p(t1, ..., tn), where p is an n-place predicate symbol and t1, ..., tn are terms. The concept of a formula is defined recursively (inductively) by the following rules:
− an atom is a formula;
− if A is a formula, then ¬A is a formula;
− if A and B are formulas, then (A ∧ B), (A ∨ B), (A → B) and (A ↔ B) are formulas;
− if A is a formula and x is a variable, then ∀xA and ∃xA are formulas.

Let us restate the alphabet of predicate logic in terms of concepts.

Constants. They serve as names of individuals (as opposed to names of collections): objects, people, or events. Constants are represented by symbols like Jacque_2 (appending 2 to the word Jacque points to a well-defined person among people with that name), Book_22, Package_8.

Variables. They denote the names of collections, such as "person", "book", "parcel", "event". The symbol Book_22 represents a well-defined instance, while the symbol book denotes either the set of "all books" or the concept "book". The symbols x, y, z represent names of collections (certain sets or concepts).

Predicate names (predicate constants). They define rules for connecting constants and variables, such as grammar rules, procedures, mathematical operations.
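The inductive definitions of term and formula can be mirrored directly as recursive data structures. A minimal sketch; all class and function names here are ours, invented for illustration:

```python
# Terms and formulas of first-order predicate logic as recursive
# Python data structures, following the inductive rules in the text.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:                 # an individual variable, e.g. x
    name: str

@dataclass(frozen=True)
class Func:                # a functional form f(t1, ..., tn); n = 0 gives an individual constant
    name: str
    args: tuple = ()

@dataclass(frozen=True)
class Pred:                # an atomic formula p(t1, ..., tn)
    name: str
    args: tuple = ()

@dataclass(frozen=True)
class Not:                 # ¬A
    sub: object

@dataclass(frozen=True)
class ForAll:              # ∀x A
    var: Var
    body: object

def is_ground(term):
    """A term is ground (constant) if it contains no variables."""
    if isinstance(term, Var):
        return False
    return all(is_ground(a) for a in term.args)

book_22 = Func("Book_22")                 # individual constant = 0-place function constant
x = Var("x")
formula = ForAll(x, Not(Pred("short", (x,))))   # ∀x ¬short(x)

print(is_ground(book_22))                 # True: no variables inside
print(is_ground(Func("owner", (x,))))     # False: contains the variable x
```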
Predicate names use symbols like the following: Send, Write, Plus, Separate.

Function names (function constants) represent the same kinds of rules as predicates. In order not to be confused with predicate names, function names are written entirely in lower case: phrase, send, write, plus, split.

The symbols used to represent constants, variables, predicates and functions are not "words of the Russian language". They are symbols of some representation, words of an "object language" (in our case, the language of predicates). The representation must exclude any ambiguity of language; therefore the names of individuals carry numbers appended to the names of the collections. Jack_1 and Jack_2 represent two people with the same name; these representations are concretizations of the collection name "Jacques". A predicate is a predicate name together with a suitable number of terms; a predicate is also called a predicate form. Example. In Russian: "Jacques sends a book to Marie"; in logic: Parcel(Jacques_2, Marie_4, Book_22).

Fuzzy logic

The emergence of fuzzy logics, the theory of fuzzy sets and other "fuzzy" theories is associated with the work of the American scientist Zadeh. Zadeh's main idea was that the human way of reasoning, based on natural language, cannot be described within the framework of traditional mathematical formalisms. These formalisms are characterized by strict unambiguity of interpretation, whereas everything connected with the use of natural language has a multivalued interpretation. Zadeh's goal was to build a new mathematical discipline based not on classical set theory but on the theory of fuzzy sets. By consistently carrying through the idea of fuzziness, according to Zadeh, one can build fuzzy analogues of all basic mathematical concepts and create the formal apparatus needed to model human reasoning and the human way of solving problems (Fig. 17).
Fig. 17. The logic of the emergence of the theory of fuzzy sets (thesis: in everyday life a person thinks and makes decisions on the basis of fuzzy concepts; problem: formalizing the human way of reasoning; solution: the creation of the mathematical theory of fuzzy sets as the basis of the reasoning mechanism)

Currently, the theory of fuzzy sets and fuzzy logic (fuzzy set & fuzzy logic) occupies a firm place among the leading areas of artificial intelligence. The concept of "fuzziness", initially applied to sets and then to logic, was successfully extended to other areas of mathematics and computer science, and there now exist:
− the theory of fuzzy relations;
− the theory of fuzzy sets;
− the theory of fuzzy measures and integrals;
− the theory of fuzzy numbers and equations;
− the theory of fuzzy logic and approximate reasoning;
− the theory of fuzzy languages;
− the theory of fuzzy algorithms;
− the theory of fuzzy optimization and decision-making models.

The following packages are most popular with Russian customers:
1) CubiCalc 2.0 RTC — one of the most powerful commercial expert systems based on fuzzy logic, which allows you to create your own applied expert systems;
2) CubiQuick — an academic version of the CubiCalc package;
3) RuleMaker — a program for automatic extraction of fuzzy rules from input data;
4) FuziCalc — a spreadsheet with fuzzy fields that allows quick estimates with imprecisely known data without error accumulation;
5) OWL — a package containing the source code of all well-known types of neural networks, fuzzy associative memory, etc.

The main "consumers" of fuzzy logic on the Russian market are bankers, financiers and experts in the field of political and economic analysis. Most human tasks do not require high precision. When dealing with the real world, one often has to find a reasonable compromise between "accuracy" and "importance".
For example, to decide whether to cross a street, a person does not estimate the speed of an approaching car to within tenths of a meter per second. He characterizes the speed of the car as "very fast", "fast", "slow", etc., i.e. he uses linguistic variables to denote speed.

In the theory of fuzzy sets, the following ways of formalizing fuzzy concepts have been proposed.

The first way (based on the work of Zadeh) rejects the main assertion of classical set theory that an element can only either belong or not belong to a set. A special characteristic function of the set is introduced — the so-called membership function — which takes values from the interval [0, 1]. This way leads to continuum-valued logic.

The second, more general way of formalizing fuzziness assumes that the characteristic functions of a set take values not in the interval [0, 1] but in a finite or infinite distributive lattice. This generalization is called fuzzy sets in the sense of Goguen.

The third way is P-fuzzy sets. In this generalization, each element of the universal set is associated not with a point of the interval [0, 1] but with a subset or part of this interval. The algebra of P-fuzzy sets can be reduced to the algebra of classes.

The fourth way is heterogeneous fuzzy sets. Here, in the general case, the elements of the universal set are assigned values in various distributive lattices; each element can be associated with the rating scale most appropriate for it. Moreover, the values of the estimates themselves can be fuzzy and given as functions.

A general idea of fuzzy logic has now been obtained. Let us consider the conceptual apparatus in more detail, starting with the notion of a "linguistic variable".

Definition of a linguistic variable (intuitive)4. If a variable can take the meanings of words of a natural language (for example, "small", "fast", etc.) as its values, then this variable is defined as a linguistic variable.
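This intuitive notion can be illustrated in code: a linguistic variable "speed" whose values are words, each word denoting a fuzzy set through a membership function. All names and numeric ranges below are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch of a linguistic variable "speed" on the universe
# X = [0, 200] km/h, with the term set {"slow", "fast", "very fast"}.

def slow(v):
    """Full membership below 20 km/h, falling to 0 at 60 km/h (our choice)."""
    return max(0.0, min(1.0, (60 - v) / 40))

def fast(v):
    """Triangular membership centered at 90 km/h (our choice)."""
    return max(0.0, 1 - abs(v - 90) / 40)

# The semantic rule maps each term name to its membership function;
# the hedge "very" builds a new term from an old one by squaring
# (the "concentration" operation discussed later in the text).
M = {
    "slow": slow,
    "fast": fast,
    "very fast": lambda v: fast(v) ** 2,
}

v = 80.0  # a crisp speed value
print({term: round(mf(v), 2) for term, mf in M.items()})
```

At 80 km/h the word "fast" fits to degree 0.75, "very fast" to a smaller degree, and "slow" not at all — one crisp value belongs to several word-labeled sets at once.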
The words whose values a linguistic variable takes usually denote fuzzy sets.

4 Intelligent information systems: Guidelines for the laboratory workshop on the course "Intelligent information systems" for students of specialty 071900 — Information systems in economics / Ufimsk. state aviation tech. un-t; compiled by G.G. Kulikov, T.V. Breikin, L.Z. Kamalova. Ufa, 1999. 40 p.

A linguistic variable can take either words or numbers as its values.

Definition of a linguistic variable (formal). A linguistic variable is a quintuple (x, T(x), X, G, M), where x is the name of the variable; T(x) is the set of names of linguistic values of the variable x, each of which is a fuzzy set on the set X; G is a syntactic rule for forming the names of values of x; M is a semantic rule that associates each value name with its concept. The purpose of the concept of a linguistic variable is to say formally that a variable can take words of natural language as its values. In other words, each linguistic variable consists of:
− a name;
− the set of its values, also called the base term set T; the elements of the base term set are names of fuzzy variables;
− a universal set X;
− a syntactic rule G, according to which new terms are generated using words of a natural or formal language;
− a semantic rule M, which associates each value of the linguistic variable with a fuzzy subset of the set X.

For example, if we say "fast speed", then the variable "speed" should be understood as a linguistic variable, but this does not mean that the variable "speed" cannot take real values. A fuzzy variable is described by a triple (N, X, A), where N is the name of the variable, X is the universal set (the domain of reasoning), and A is a fuzzy set on X. The values of a linguistic variable can be fuzzy variables, i.e. a linguistic variable stands at a higher level than a fuzzy variable. The main approach to the formalization of fuzziness is as follows.
A fuzzy set is formed by introducing a generalized concept of membership, i.e. by extending the two-element set of values of the characteristic function {0, 1} to the continuum [0, 1]. This means that the transition from full membership of an object in a class to full non-membership occurs not abruptly but smoothly, gradually, and the membership of an element in a set is expressed by a number from the interval [0, 1].

A fuzzy set A is defined mathematically as a set of ordered pairs composed of elements x of the universal set X and the corresponding membership degrees μA(x), or (since the membership function is an exhaustive characteristic of a fuzzy set) directly as the function μA. The universal set X is the domain of definition of the membership function μA of the fuzzy set A. Fig. 18 shows the main varieties of membership functions.

Fig. 18. Types of membership functions

By the shape of the membership function, fuzzy sets are divided into:
− submodal;
− amodal;
− multimodal;
− unimodal.

Example.
1) A = {(x1, 0.2), (x2, 0.6), (x3, 1), (x4, 0.8)};
2) A = 0.2/x1 + 0.6/x2 + 1/x3 + 0.8/x4;
3) the same example can be presented in the form of a table.

Table 6. Tabular description of the membership function

        x1     x2     x3     x4
A =     0.2    0.6    1      0.8

Example: "the set of tall people". In real life, a concept such as "the height of a tall person" is subjective. Some believe that a tall person must be taller than 170 cm, others — taller than 180 cm, still others — taller than 190 cm. Fuzzy sets make it possible to take such blurring of estimates into account. Let x be a linguistic variable denoting "the height of a person", and let its membership function for the set of tall people, μA: X → [0, 1], where X includes all possible values of a person's height, first be given in a crisp way (e.g. μA(x) = 1 above some threshold and 0 below it). Then the set of "tall people" is given by the expression A = {x | μA(x) = 1}, x ∈ X. Graphically this is shown in Fig. 19 (solid line); i.e. the crisp boundary of "tall"
depends on the individual making the assessment. Now let the membership function μA: X → [0, 1] have the form shown in the figure by the dotted line.

Fig. 19. The fuzzy set of tall people

Thus, a person 145 cm tall belongs to the set A with membership degree μA(145) = 0, a person 165 cm tall with μA(165) = 0.3, a person 185 cm tall with μA(185) = 0.9, and a person 205 cm tall with μA(205) = 1.

Example: "Are you cold now?" A person perceives a temperature of +60 °F (about +16 °C) as cold and +80 °F (about +27 °C) as heat. A temperature of +65 °F (about +18 °C) seems low to some and quite comfortable to others. We call this group of definitions the membership functions of the sets describing a person's subjective perception of temperature. Machines are not capable of such fine gradation: if the standard definition of cold is "temperature below +15 °C", then +14.99 °C would be considered cold, but +15 °C would not. Fig. 20 shows a graph that helps to understand how a person perceives temperature. It is just as easy to create additional sets describing human perception of temperature; for example, one can add the sets "very cold" and "very hot". Similar functions can be described for other concepts, such as the open and closed states of a device, or a cooler or cooling-tower temperature.

Fig. 20. The fuzzy set "Temperature"

Thus, we can draw the following conclusions about the essence of the concept "fuzzy set":
1) fuzzy sets describe indefinite concepts (a fast runner, hot water, hot weather);
2) fuzzy sets admit partial membership (Friday is partly a day off (shortened); the weather is rather hot);
3) the degree of membership of an object in a fuzzy set is given by the value of the membership function on the interval [0, 1] (Friday belongs to the days off with membership degree 0.3);
4) the membership function associates an object (or a logical variable) with the value of its degree of membership in a fuzzy set.
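The dotted-line membership function for "tall people" can be sketched as piecewise-linear interpolation through the four quoted degrees; the choice of linear segments between those points is our assumption for illustration:

```python
# The fuzzy set "tall people": a piecewise-linear membership function
# passing through the degrees quoted in the text. The breakpoints are
# taken from the text; linear interpolation between them is assumed.
POINTS = [(145, 0.0), (165, 0.3), (185, 0.9), (205, 1.0)]

def mu_tall(height_cm):
    xs = [p[0] for p in POINTS]
    ys = [p[1] for p in POINTS]
    if height_cm <= xs[0]:
        return ys[0]              # definitely not tall
    if height_cm >= xs[-1]:
        return ys[-1]             # definitely tall
    for (x0, y0), (x1, y1) in zip(POINTS, POINTS[1:]):
        if x0 <= height_cm <= x1:
            return y0 + (y1 - y0) * (height_cm - x0) / (x1 - x0)

for h in (145, 165, 185, 205):
    print(h, "cm ->", mu_tall(h))
```

Note the contrast with the crisp (solid-line) version: membership rises gradually from 0 to 1 instead of jumping at a single threshold.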
Curve shapes for membership functions

There are over a dozen typical curves for membership functions. The most widespread are the triangular, trapezoidal and Gaussian membership functions. The triangular membership function is defined by a triple of numbers (a, b, c), and its value at a point x is calculated by expression (1):

MF(x) = 1 − (b − x)/(b − a), if a ≤ x ≤ b;
MF(x) = (c − x)/(c − b), if b ≤ x ≤ c;          (1)
MF(x) = 0 in all other cases.

When (b − a) = (c − b) we have a symmetric triangular membership function (Fig. 21), which can be uniquely specified by two parameters of the triple (a, b, c).

Fig. 21. Triangular membership function

Similarly, four numbers (a, b, c, d) are needed to specify a trapezoidal membership function:

MF(x) = 1 − (b − x)/(b − a), if a ≤ x ≤ b;
MF(x) = 1, if b ≤ x ≤ c;                        (2)
MF(x) = (d − x)/(d − c), if c ≤ x ≤ d;
MF(x) = 0 in all other cases.

When (b − a) = (d − c), the trapezoidal membership function takes a symmetric form (Fig. 22).

Fig. 22. Trapezoidal membership function

The membership functions of all terms of the base term set T are usually shown together on one graph. Fig. 23 shows the formalization of the imprecise concept "age of a person": for a person 48 years old, the degree of membership in the set "Young" is 0, in "Average" 0.47, and in "Above average" 0.20.

Fig. 23. Description of the linguistic variable "Age of a person"

Basic operations on fuzzy sets

The basic operations on fuzzy sets from the class F(X) = {μ | μ: X → [0, 1]} of all fuzzy sets on the universal set X are presented below.

1. Complement5: μ2(x) = 1 − μ1(x), ∀x ∈ X.

Fig. 24. Graph of the "Complement" operation on the function M

2. Intersection I (minimum; non-interacting variables): μ3(x) = (μ1 ∩ μ2)(x) = min(μ1(x), μ2(x)), ∀x ∈ X.
3. Union I (maximum; non-interacting variables): μ3(x) = (μ1 ∪ μ2)(x) = max(μ1(x), μ2(x)), ∀x ∈ X.
4. Intersection II (bounded product): μ3(x) = (μ1 ∩ μ2)(x) = max(0, μ1(x) + μ2(x) − 1), ∀x ∈ X.
5. Union II (bounded sum):
 3 = ( 1   2) (x)= min(1,  1(x) +  2(x)) ,  x  X 6. Intersection III (algebraic product). 5 Hereinafter, on a yellow background, operations are displayed that are the same for all three bases. - 54 -  3 = ( 1   2) (x)=  1(x) *  2(x) ,  x  X 7. Union III (algebraic sum).  3 = ( 1   2) (x)=  1(x) +  2(x)-  1(x)   2(x) ,  x  X A B Fig. 25. Graph of the operation of the intersection I (A) of the union I (B) of the functions M and M1 A B Fig. 26. Graph of the operation of the intersection II (A) of the union II (B) of the functions M and M1 A B Pic. 27. Graph of the operation of the intersection III (A) of the union III (B) of the functions M and M1 - 55 - 8. Difference.  3 =  1(x) -  2(x) = max(0,  1(x) -  2(x)) ,  x  X 9. Concentration.  3 =  2(x) ,  x  X 28. Graph of the difference between the functions M and M1 Fig. 29. Graph of the concentration of the function M1 Unlike Boolean algebra, in F(X) the laws of elimination of the third are not satisfied. When constructing the operations of union or intersection in F(X), one must discard either the laws of elimination of the third, or the property of distributivity and idempotency. Fuzzy objects can be classified according to the type of range of values ​​of the membership function. And here the variants X are distinguished: - lattice; - semigroup; - ring; - category. Important for practical applications in terms of expressing qualitative representations and assessments of a person in the process of making a decision of a problem is the case of S-fuzzy sets specified by a pair (X, ), where - 56 - :XS is a mapping from X to a linearly ordered set S It is natural to impose on S the requirements of finiteness and completeness. An example of a finite linearly ordered set is a set of linguistic values ​​of the linguistic variable "QUALITY" = (poor, average, good, excellent). 1 2 3 4 5 6 7 8 9 OR non-interacting variables) (EITHER, ... 
The operations 1-9 correspond to linguistic connectives as follows: 1 — Complement: NOT; 2 — Intersection I: AND; 3 — Union I (non-interacting variables): OR (EITHER ..., OR); 4 — Intersection II (bounded product): AND; 5 — Union II (bounded sum): OR; 6 — Intersection III (algebraic product): AND; 7 — Union III (algebraic sum): OR; 8 — Difference; 9 — Concentration: VERY.

As shown, depending on the way the operations of union and intersection of fuzzy sets are introduced, three main theories of fuzzy sets are distinguished:
− fuzzy logic with maximin operations (operations 1, 2, 3, 8, 9);
− fuzzy logic with bounded operations (operations 1, 4, 5, 8, 9);
− probabilistic fuzzy logic (operations 1, 6, 7, 8, 9).

The interpretation of truth as a linguistic variable leads to a fuzzy logic with the values "true", "very true", "completely true", "more or less true", "not very true", "false", etc., i.e. to the fuzzy logic on which the theory of approximate reasoning is based.

Fields of application of the theory of fuzzy sets in various areas of human knowledge

Philosophically, the theory of fuzzy sets is remarkable in that it opens
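The membership-function shapes (1)-(2) and the operations 1-9 can be collected into a short sketch; all function names are ours, and this is a minimal illustration rather than a library implementation:

```python
# Membership-function shapes and basic fuzzy-set operations, per the
# formulas in the text; operations act pointwise on degrees in [0, 1].

def tri(x, a, b, c):
    """Triangular membership function, formula (1)."""
    if a <= x <= b:
        return 1 - (b - x) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def trap(x, a, b, c, d):
    """Trapezoidal membership function, formula (2)."""
    if a <= x <= b:
        return 1 - (b - x) / (b - a)
    if b < x <= c:
        return 1.0
    if c < x <= d:
        return (d - x) / (d - c)
    return 0.0

complement = lambda m1: 1 - m1                     # 1. Complement (NOT)
and_min    = lambda m1, m2: min(m1, m2)            # 2. Intersection I (minimum)
or_max     = lambda m1, m2: max(m1, m2)            # 3. Union I (maximum)
and_bnd    = lambda m1, m2: max(0.0, m1 + m2 - 1)  # 4. Intersection II (bounded product)
or_bnd     = lambda m1, m2: min(1.0, m1 + m2)      # 5. Union II (bounded sum)
and_alg    = lambda m1, m2: m1 * m2                # 6. Intersection III (algebraic product)
or_alg     = lambda m1, m2: m1 + m2 - m1 * m2      # 7. Union III (algebraic sum)
diff       = lambda m1, m2: max(0.0, m1 - m2)      # 8. Difference
con        = lambda m1: m1 ** 2                    # 9. Concentration ("VERY")

# The three theories differ only in which intersection/union pair they use:
maximin       = (and_min, or_max)    # operations 1, 2, 3, 8, 9
bounded       = (and_bnd, or_bnd)    # operations 1, 4, 5, 8, 9
probabilistic = (and_alg, or_alg)    # operations 1, 6, 7, 8, 9

m = tri(45, 30, 50, 70)   # e.g. membership of age 45 in a triangular "average age" set
print(m, con(m))
```

Note how the law of the excluded middle fails here: with m = 0.75, or_max(m, complement(m)) is 0.75, not 1, exactly as stated for F(X) above.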






