Design of algorithmic support. Information system components


A flight and navigation complex (PNC) is an intricate combination of hardware and software integrated into a single network. Its main task, increasing the reliability, safety and regularity of flights, is solved through the use of special automated systems for optimizing flight modes. Under these conditions, the role of software in the structure of the navigation system grows immeasurably compared with individual navigation devices and systems. The quality of the PNC software largely determines the efficiency and flexibility of the entire complex.

In a broad sense, support is understood as the combination of mathematical, linguistic and information support together with the software proper. Mathematical support includes the methods and techniques of information processing and computation, models and algorithms. Linguistic support is the set of programming languages used in the PNC to describe various procedures, algorithms and models. Information support is divided into on-board databases and operational information coming from on-board systems. The software proper consists of programs and documents (on computer and paper media).

Programs are divided into general-system, basic and application programs. General-system programs, which are in essence operating systems, organize the functioning of the PNC as a computing system (planning the computing process, managing it, distributing resources, etc.) and do not reflect the specifics of a particular PNC. Basic and application software is created directly for the needs of the PNC. Basic software comprises the programs that ensure the correct functioning of the application programs. Application programs implement elements of the mathematical support of the PNC and solve specific problems. They are created as separate modules that are connected by the control program at various stages of flight and implement the private algorithms of the PNC.

When developing software, a number of requirements must be taken into account: small computational error, minimum execution time, the minimum required amount of memory, the ability to monitor the progress of calculations, and protection against systematic and random failures.

By the principle on which the software structure is built, PNCs can be procedure-oriented or problem-oriented. Modern PNC software is based on the modular principle: each module is designed to solve a separate problem, and the modules can be combined in various ways. This structure makes it possible to expand the functions of the PNC without changing its main part, by creating and adding new modules, but it limits the number and direction of connections in the complex and dictates a rigid logic of its organization. Promising PNCs are expected to use elements of artificial intelligence, which will adapt to changes in external conditions by restructuring the PNC.



Fig. 2.25 shows the structure of the general PNC algorithm, which consists of a set of interrelated private algorithms:

KNS - complex of navigation systems, comprising the entire set of on-board navigation and flight equipment;

APPO - transformation and primary processing algorithms;

AKOI - algorithms for complex information processing;

AU - object control algorithms;

AOVI - algorithms for exchanging and issuing information;

SOI PU - information display system and control panels;

ASIO - algorithms for protection against and exclusion of failures;

AIP SV - flight and navigation simulation algorithms;

ADOP - algorithms for dispatching and organizing interrupts;

AKP - control and inspection algorithms.

The general PNC algorithm is designed to implement the entire variety of tasks facing the complex; it includes sets of functionally interconnected private algorithms that solve a single problem of reliable information processing with the required accuracy and specified discreteness and generate control and information signals.

The KNS may include one or more inertial navigation systems, which are the basis of the PNC, a complex of radio navigation systems (RSBN, RSDN, SNS, etc.), an air data system and other systems needed to solve the control problems of a specific object.

Transformation and primary processing algorithms perform analog-to-digital conversion, averaging or pre-filtering of measurements. The same group of algorithms brings the readings of various sensors to a single coordinate system.
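As an illustration of the primary-processing step, the following sketch shows a sliding-window average applied to raw sensor samples before they enter complex processing. This is a hedged example of the general technique, not code from any real PNC; the sample data and window size are invented.

```python
# Illustrative pre-filtering sketch (APPO-style): smooth raw sensor
# samples with a sliding-window mean before further processing.

def moving_average(samples, window):
    """Smooth a list of raw measurements with a sliding-window mean."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)        # window start, clamped at 0
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Example: noisy altitude samples (metres), window of 3 readings.
raw = [100.0, 104.0, 98.0, 102.0, 101.0]
smoothed = moving_average(raw, 3)
```

In a real complex this step would also convert each sensor's readings into a common coordinate system, as the text notes.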

Algorithms for complex information processing use the information redundancy of the PNC measuring devices to solve the problems of filtering, extrapolation and interpolation of data. The quality of these algorithms determines the accuracy and reliability of flight navigation support. Modifications of the digital Kalman filter are the most widely used in this class of algorithms.
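To make the Kalman-filter remark concrete, here is a minimal one-dimensional sketch of the filter's predict/update cycle. The model, noise variances and measurement values are illustrative assumptions; a real PNC filter works on multidimensional state vectors.

```python
# Minimal scalar Kalman filter: estimate a slowly varying quantity
# from noisy measurements (illustrative values, not from any real PNC).

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """q: process-noise variance, r: measurement-noise variance.

    Returns the sequence of filtered estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the innovation
        p = (1.0 - k) * p        # shrink uncertainty after the update
        estimates.append(x)
    return estimates

est = kalman_1d([1.1, 0.9, 1.05, 0.98, 1.02])
```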

Object control algorithms implement all control tasks performed on board the aircraft. The range of tasks is significantly wider than that of an automatic control system (ACS), which only controls the aircraft's motion. This group of algorithms, together with the crew, ensures fulfillment of the flight goal or flight mission.

All algorithms are implemented as software modules that execute private control algorithms, which in turn are divided into target and functional ones. The former implement complete target tasks, such as control along the FPU, trajectory control, landing, etc. The latter form specific functions of the control process (optimization of flight modes, terminal control, the precision characteristics of the complex, etc.).

Algorithms for exchanging and issuing information are an element of the information display system. They connect PNC subscribers with the onboard computer of the computing complex and perform the functions of converting information and of receiving, transmitting and temporarily storing data.

Algorithms for dispatching and organizing interrupts form the basis of the operating system of the PNC computing complex and the SOI. Their main purpose is to determine the sequence and execution time of the individual private algorithms.
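The dispatching idea can be sketched as a cyclic scheduler: each private algorithm is modelled as a task with a period measured in minor cycles, and in every cycle the dispatcher runs the tasks that are due. The task names and periods below are invented for illustration.

```python
# Sketch of an ADOP-style cyclic dispatcher: run each task whose
# period (in minor cycles) divides the current cycle number.

def dispatch(tasks, cycles):
    """tasks: dict name -> period; returns the execution trace per cycle."""
    trace = []
    for cycle in range(1, cycles + 1):
        due = [name for name, period in tasks.items() if cycle % period == 0]
        trace.append((cycle, due))
    return trace

# Invented example: navigation every cycle, display every 2nd,
# self-test every 4th cycle.
schedule = dispatch({"navigation": 1, "display": 2, "self_test": 4}, 4)
```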

Control and inspection algorithms solve the problems of assessing the technical condition of equipment, shutting down or restoring faulty equipment, and reconfiguring the PNC.
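One common building block of such fault-exclusion logic is a redundancy check: with several redundant readings of the same quantity, a reading that disagrees with the median by more than a tolerance is excluded. This is a hedged, generic sketch; the data and tolerance are invented.

```python
# Illustrative failure-exclusion sketch: exclude redundant sensor
# readings that deviate from the median by more than a tolerance.

def exclude_faulty(readings, tol):
    """Return (good_readings, excluded) using a median check."""
    med = sorted(readings)[len(readings) // 2]
    good = [r for r in readings if abs(r - med) <= tol]
    excluded = [r for r in readings if abs(r - med) > tol]
    return good, excluded

# Invented example: triple-redundant channel with one failed sensor.
good, bad = exclude_faulty([101.0, 100.5, 250.0], tol=5.0)
```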

The listed private algorithms reflect only the most general structure of the algorithmic support of a PNC, which can vary significantly depending on the type of aircraft. Software and algorithmic support for promising PNCs should be created using artificial intelligence, adaptability and the reconfiguration capabilities of the complex.

2.8. CONCEPT OF REQUIRED NAVIGATION CHARACTERISTICS OF FLIGHT AND NAVIGATION EQUIPMENT

The ICAO Special Committee on Future Air Navigation Systems (FANS) developed the Required Navigation Performance (RNP) concept, which moves from mandatory requirements for airborne navigation equipment to an optimal combination of the aircraft's on-board navigation equipment and the technical capabilities of a specific airspace for all phases of flight. This realizes the transition from air traffic control to the more flexible air traffic management (ATM).

When an aircraft flies along a route of a given RNP type, a minimum required accuracy of maintaining the navigation characteristics is established, i.e. the width of the corridor (in nautical miles) within which the aircraft must remain for at least 95% of the flight time. Accuracy here is determined by the total error of the navigation system, the display and the piloting technique.
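The 95% containment criterion can be sketched numerically: given a log of cross-track deviations, check what fraction of samples lies within the corridor half-width. The sample data below are invented for illustration.

```python
# Sketch of the RNP containment criterion: at least 95% of samples
# must lie within the corridor defined by the RNP value (in NM).

def meets_rnp(cross_track_nm, rnp_value):
    """True if |deviation| <= rnp_value for at least 95% of samples."""
    inside = sum(1 for d in cross_track_nm if abs(d) <= rnp_value)
    return inside / len(cross_track_nm) >= 0.95

# Invented log of cross-track deviations (nautical miles), 100 samples.
deviations = [0.2, -0.4, 0.1, 0.9, -0.3] * 20
ok = meets_rnp(deviations, rnp_value=1.0)
```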

Four main types of RNP are planned to be used for flights along the route:

RNP 1 provides the most effective use of accurate aircraft position information, giving flexibility in organizing and changing routes and in managing air traffic during the transition from the aerodrome area to en-route flight and back;

RNP 4 is intended for the organization of ATS routes and airspace patterns with limited distance between ground navigation aids and is used in continental airspace;

RNP 12.6 defines the possibility of limited route optimization in areas with a reduced level of provision of navigation aids;

RNP 20 describes the minimum capabilities that are considered acceptable to support flights on ATS routes.

In order to ensure the required level of flight safety for area navigation (RNAV) methods currently being introduced into ATS practice, in addition to the RNP type, two additional indicators are established:

integrity of maintaining the safety corridor, determined by the probability that the navigation system fails to detect a lateral deviation exceeding twice the permissible error of the navigation characteristic (10⁻⁵ per hour of flight);

continuity of failure-free operation of the navigation system, determined by the probability of issuing a false or true failure warning (10⁻⁴ per hour of flight) during critical phases of flight.

The use of area navigation techniques within the RNP concept allows flight in any airspace within prescribed position accuracy tolerances, while eliminating the need to fly directly over ground-based navigation aids.

For the most critical phases of flight (approach, landing and departure), the RNP supplement establishes requirements for the integrity, continuity and availability of navigation aids in a given airspace (availability, or functional readiness, is the probability that the navigation system is capable of performing its functions during the planned maneuver). Quantitatively, the approach parameters are characterized by the boundaries of the outer and inner containment corridors of the aircraft and by the probabilities of loss of integrity, continuity and availability of the navigation information received from on-board equipment and ground-based navigation aids. Thus, for a CAT III approach the following quantitative indicators of these parameters are established:

loss of integrity over the interval from the final approach fix to a height of 30 m above the landing point (165 s) and from a height of 30 m to touchdown (30 s);

loss of continuity over the same intervals, respectively;

availability of 0.999 at a height of 30 m.
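The per-hour probabilities quoted earlier can be related to the short approach segments above by assuming an exponential failure model, in which the probability of at least one failure in an interval t is 1 − exp(−λt). The rate used below is illustrative, not an ICAO figure.

```python
# Hedged arithmetic sketch: convert a constant per-hour failure rate
# into the probability of at least one failure during a short exposure
# interval (e.g. the 165 s and 30 s approach segments).

import math

def prob_failure(rate_per_hour, interval_s):
    """P(at least one failure) = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-rate_per_hour * interval_s / 3600.0)

p_165 = prob_failure(1e-4, 165.0)   # illustrative 1e-4 per hour rate
p_30 = prob_failure(1e-4, 30.0)
```

For rates this small the result is close to the linear approximation λt, which is why short segments carry a far smaller per-exposure risk than the per-hour figure suggests.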

Data on the corridor widths for RNP CAT III are presented in Fig. 2.26.


Fig. 2.26. Corridor boundaries according to RNP CAT III

To implement the algorithmic and software support of information systems for the stated goal, the following problems must be solved in sequence.

1. Development of principles for the construction and architecture of an instrumental system for the integration of production data, including the integration of various technological data used by the industry.

2. Creation of an integration model of production data (IMPD) of the OGDC based on the proposed construction principles and the formulated requirements for the instrumental system being developed.

3. Development of algorithmic support for the instrumental SIPD. Solving this problem also involves studying the effectiveness of the proposed algorithms.

4. Software development for the instrumental SIPD. The result should be software created in accordance with the developed principles and architecture of the instrumental system and implementing the proposed algorithms.

5. Creation and implementation of the developed instrumental system for solving the practical problems of building specific SIPDs and integrating the production data of modern information systems with their help.

At the basis of the organization of information processing systems (SOI) lies a set of interrelated methods and tools for collecting and processing the data needed to manage a facility.

SOI are based on computers and other modern information technology, which is why they are also called automated data processing systems (ASOD). Without a computer, an SOI can be built only for small facilities.

Using a computer means performing not individual information and computing jobs, but a set of jobs connected into a single complex and implemented on the basis of a single technological process.

SOI should be distinguished from automated control systems (ACS). The functions of an ACS include, first of all, calculations related to solving management problems, such as selecting optimal plan variants based on economic and mathematical methods and models; their direct purpose is to increase management efficiency. The functions of an SOI are the collection, storage, search and processing of the data needed for these calculations at the lowest cost. When creating an ASOD, the task is to select and automate labor-intensive, regularly repeated routine operations on large volumes of data. An SOI is usually a part, and the first stage of development, of an ACS. However, SOI also function as independent systems. In some cases it is more efficient to combine the processing of homogeneous data within one system for a large number of control problems solved in different ACS, i.e. to create SOI for collective use.



The automated information system has supporting and functional parts, consisting of subsystems (Fig. 1.38).

Fig. 1.38. Automated information system

A subsystem is a part of a system distinguished by some characteristic.

The functional part of an information system ensures the execution of its tasks and purpose; in effect, it contains a model of the organization's management system. Within this part, management goals are transformed into functions, and functions into subsystems of the information system. The subsystems implement the tasks. Typically, the functional part of an information system is divided into subsystems according to the following functional characteristics:

· management level (highest, middle, lowest);

· type of managed resource (material, labor, financial, etc.);

· scope of application (banking, stock market, etc.);

· management functions and management period.

For example, a technological process management information system is a computer information system that provides decision support for technological process management with a given discreteness and within a certain management period.

Table 5 lists only some of the possible information systems, but they suffice to illustrate the relationship between system functions and management functions.

The functional characteristic determines the purpose of a subsystem as well as its main goals, objectives and functions. The structure of an information system can be represented as a set of its functional subsystems, and the functional characteristic can be used in classifying information systems.

For example, the information system of a manufacturing company has the following subsystems: inventory management, production process management, etc.

In the economic practice of industrial and commercial facilities, the typical types of activities that determine the functional attribute of the classification of information systems are: production, marketing, financial, personnel.

Table 5. Functions of information systems

Thus, “functional components” constitute the substantive basis of the IS, based on models, methods and algorithms for obtaining control information.

The functional structure of an IS is the set of functional subsystems, sets of tasks and information processing procedures that implement the functions of the control system. In the management system of a large corporate enterprise, independent subsystems (circuits) of the functional and organizational levels of management are distinguished:

1. Strategic analysis and management. This is the highest level of management; it ensures centralized management of the entire enterprise and is aimed at the top management level.

2. Production management.

Developed foreign-made ERP systems have an established structure of the basic components of the enterprise management system:

1. Accounting and finance.

2. Materials management (logistics).

3. Production management.

4. Ensuring production.

5. Management of transportation and remote warehouses.

6. Personnel management.

7. Salary.

8. Modeling of business processes.

9. Decision support systems (DSS).

The supporting part of an IS consists of information, technical, mathematical, software, methodological, organizational, legal and linguistic support. A special place in the informatization of society is occupied by the creation of computer networks and the building, on their basis, of distributed information processing systems (RSOI). An RSOI is a set of geographically distant nodes united by a data transmission system and interacting through the exchange of messages. Such systems provide distributed data processing, in which an application process at one node can access information at any other node. The ultimate goal of creating an RSOI is the integration of the information and computing resources, communications, office equipment, etc. of an entire region of users.

An example of an RSOI is a distributed database (RDB): a collection of logically related databases located at different nodes, together with application task flows (global transactions) that can simultaneously use several databases as a single whole. The most important problem arising in any RDB is protecting the information resources stored in it from incorrect actions. As a result of concurrent execution, some transactions may temporarily violate the integrity of the RDB. Obviously, a certain transaction processing discipline is required to avoid such problems. Such a discipline exists and is known as transaction serialization. For its practical implementation in RDBs, locking mechanisms, timestamps and the optimistic approach are most often used. To implement concurrency control algorithms in the RDB, it is proposed to use a fault-tolerant transaction management system (OSUT) as an integral part of the distributed DBMS, ensuring the interaction of application processes with the information resources of the RDB.

The OSUT is implemented as a distributed software package consisting of separate modules. Its main requirement and distinctive feature is maintaining the consistency of the RDB while processing parallel user requests in the presence of possible asynchronous failures of nodes (processes).

The following components of the OSUT function in each node J:

Module (transaction generator) – transaction generator;

Module (synchronization nucleus) – synchronizer of transactional requests;

Module (transaction manager) – transaction commit manager;

Module (data manager) – data manager;

Module (election manager) – coordinator election manager;

Module (rollback manager) – transaction rollback manager.
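The locking discipline mentioned above can be sketched in a few lines: a transaction must hold an exclusive lock on a data item before writing it, a conflicting request is refused (in a real distributed DBMS the transaction would wait or roll back), and all locks are released at commit or rollback. The class and item names are invented for illustration.

```python
# Minimal sketch of exclusive locking for transaction serialization
# (names invented; a real OSUT/RDBMS adds waiting, deadlock handling,
# and distributed commit on top of this idea).

class LockManager:
    def __init__(self):
        self._owners = {}                 # data item -> transaction id

    def acquire(self, txn, item):
        """Grant the lock if free or already held by the same transaction."""
        holder = self._owners.get(item)
        if holder is None or holder == txn:
            self._owners[item] = txn
            return True
        return False                      # conflict: caller must wait/abort

    def release_all(self, txn):
        """Called at commit/rollback: drop every lock the transaction holds."""
        self._owners = {i: t for i, t in self._owners.items() if t != txn}

lm = LockManager()
t1_ok = lm.acquire("T1", "account_42")     # granted
t2_ok = lm.acquire("T2", "account_42")     # refused: T1 holds the lock
lm.release_all("T1")                       # T1 commits
t2_retry = lm.acquire("T2", "account_42")  # now granted
```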

Simulation modeling is a powerful engineering method for studying complex systems, used where other methods are ineffective. A simulation model is a system that reproduces the structure and functioning of the original object in the form of an algorithm connecting the input and output variables accepted as characteristics of the object under study. Simulation models are implemented in software in various languages.
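A tiny example of the idea: a single-server queue simulated in discrete time steps, connecting an input variable (the arrival pattern) to an output characteristic (the queue length over time). The arrival data and service rate are invented for illustration.

```python
# Illustrative simulation model: discrete-time single-server queue.
# Input variable: arrivals per step; output: queue length history.

def simulate_queue(arrivals_per_step, service_per_step):
    """Return the queue length after each step and the maximum reached."""
    queue, history = 0, []
    for arrived in arrivals_per_step:
        queue += arrived                        # new requests arrive
        queue = max(0, queue - service_per_step)  # server drains the queue
        history.append(queue)
    return history, max(history)

# Invented arrival pattern; the server handles 2 requests per step.
history, peak = simulate_queue([3, 0, 2, 5, 0, 0], service_per_step=2)
```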

Laboratory work No. 2. "Testing the thermal converter"

Topic: STUDY AND VERIFICATION OF A THERMAL CONVERTER.

1. Study measurement methods and the design of a platinum-rhodium-platinum reference thermal converter.

2. Familiarize yourself with the installation diagram and placement of instruments on the laboratory bench.

Work progress: The platinum-rhodium-platinum reference thermoelectric converter is designed to transmit the size of the temperature unit (Fig. 1.39). The thermoelectrode materials of the thermal converters meet the requirements of the applicable regulatory documents: the positive thermoelectrode is made of wire 0.5 mm in diameter from PlRD-10 alloy (platinum + 10% rhodium) according to GOST. The thermoelectrodes are reinforced with a solid ceramic two-channel tube, one channel of which is marked with the symbol of the thermoelectrode located in it; the tube material is aluminum oxide ceramic with a content of at least 99%.

Fig. 1.39. Platinum-rhodium-platinum thermoelectric converter

Converter tolerance classes:

1. Resistance converters are manufactured with a nominal static conversion characteristic (NSC) and a permissible deviation of the resistance at 0 °C (R0) from the nominal value in accordance with GOST 6651.

Table 6

2. The value of W100, defined as the ratio of the resistance of the thermal converter at 100 °C (R100) to its resistance at 0 °C (R0), according to GOST 6651.

Table 7
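The W100 check can be expressed as a short calculation: form the ratio R100/R0 and compare it with the nominal value for the given tolerance class. The sample resistances and acceptance band below are illustrative; the actual limits are taken from GOST 6651.

```python
# Sketch of the W100 quality check for resistance thermal converters.
# Sample values and tolerance are illustrative, not GOST limits.

def w100(r100, r0):
    """W100 = R100 / R0, a quality figure for the sensing wire."""
    return r100 / r0

def within_band(value, nominal, tolerance):
    """Generic acceptance check against a nominal value and tolerance."""
    return abs(value - nominal) <= tolerance

ratio = w100(r100=138.5, r0=100.0)        # a typical Pt100-like pair
ok = within_band(ratio, nominal=1.385, tolerance=0.001)
```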

Laboratory work No. 3. "Verification of the GSP normalizing converter"

Study of the design and verification of the GSP normalizing converter

Purpose of work: familiarization with the principle of operation, design and methodology of verification of the GSP normalizing converter.

Progress:

The State System of Industrial Instruments and Automation Facilities (GSP) was created to provide technical means for monitoring, regulating and managing technological processes in various sectors of the national economy.

In the early stages of creating automation equipment in various organizations and enterprises, many different measuring and control devices with similar technical characteristics were developed, but the possibility of joint operation of devices from different manufacturers was not taken into account. This led to an increase in the cost of developing complex systems and hampered the widespread implementation of automation tools.

Currently, the GSP is an operationally, informationally, energetically, metrologically and structurally organized set of products intended for use in automatic and automated systems for monitoring, measuring, regulating and managing technological processes, as well as in information-measuring systems. The GSP has become the technical base for creating automated process control systems (APCS) and automated production control systems in industry. Its development and application have helped formalize the design of automated process control systems and the transition to computer-aided design.

The creation and improvement of the GSP rest on the following system-technical principles: typification and minimization of the variety of functions of automatic monitoring, regulation and control; minimization of the nomenclature of technical means; block-modular construction of instruments and devices; aggregate construction of control systems based on unified instruments and devices; compatibility of instruments and devices.

Based on functionality, all GSP products are divided into the following four groups of devices: obtaining information about the state of a process or object; receiving, converting and transmitting information via communication channels; transformation, storage and processing of information, formation of control commands; use of command information.

The first group of devices, depending on the method of presenting information, includes: sensors; normalizing converters that generate a unified communication signal; devices that present measurement information in a form suitable for direct perception by an observer; and devices for alphanumeric information entered manually by the operator.

The second group of devices contains commutators of measuring circuits, signal and code converters, encoders and decoders, matching devices, telesignaling, telemetering and telecontrol equipment. These devices are used to convert both measuring and control signals.

The third group consists of signal analyzers, functional and operational converters, logical devices and memory devices, masters, regulators, control computing devices and complexes.

The fourth group includes actuators (electric, pneumatic, hydraulic or combined actuators), power amplifiers, auxiliary devices for them, as well as information presentation devices.

Minimization of the range of monitoring and control equipment is implemented on the basis of two principles: unification of devices of the same functional purpose based on the parametric range of these products and aggregation of a set of technical means for solving large functional problems.

Currently, parametric series of sensors for pressure, flow, level, temperature and electrical measuring instruments have been developed.

Nevertheless, their optimization by technical and economic indicators continues, for example by the criterion of minimum total cost of satisfying given needs. This criterion rests on the contradiction between the interests of the consumer and the manufacturer: the fewer devices in a series, the lower the costs of their development and production, and the larger the quantities in which they are produced, which also reduces the manufacturer's costs. Increasing the number of devices in a series saves the consumer money through more effective use of their capabilities or more accurate adherence to technological process regimes.
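The total-cost criterion can be illustrated numerically: the manufacturer's cost grows with the number of device types in a series, while the consumer's cost of mismatched capability falls, so the sum has a minimum at some intermediate series size. All cost functions and constants below are invented purely for illustration.

```python
# Hedged numerical sketch of the minimum-total-cost criterion for
# choosing the size of a parametric series (invented cost model).

def total_cost(n_types, dev_cost=10.0, mismatch_cost=90.0):
    """Manufacturer cost rises linearly with the number of types;
    the consumer's mismatch cost falls roughly as 1/n."""
    return dev_cost * n_types + mismatch_cost / n_types

# Pick the series size with the minimum total cost over a small range.
best_n = min(range(1, 11), key=total_cost)
```

With these invented constants the optimum falls at an intermediate series size, reflecting the trade-off described in the text.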

Aggregate complexes (AK) are a set of technical means, organized in the form of functional-parametric series, covering the required measurement ranges under various operating conditions and ensuring the performance of all functions within a given class of tasks.

The principle of aggregation is used very widely in the GSP. The unified basic design of sensors of thermal quantities with unified pneumatic and electrical signals was created from only 600 items of parts, yielding 136 types and 863 modifications of these sensors.

The concepts of compatibility inherent in GSP, common to all products, can be formulated as follows.

Information compatibility - a set of standardized characteristics that ensure the consistency of communication signals in type and nomenclature, their informative parameters, levels, spatio-temporal and logical relationships, and type of logic. For all GSP products, unified communication signals and unified interfaces have been adopted; an interface is a set of software and hardware ensuring the interaction of devices in the system.

Structural compatibility - a set of properties that ensure consistency of design parameters and mechanical coupling of technical means, as well as compliance with ergonomic standards and aesthetic requirements when used together.

Interoperability - a set of properties that ensure the operability and reliability of the functioning of technical equipment when used together in a production environment, as well as ease of maintenance, adjustment and repair.

Metrological compatibility - a set of selected metrological characteristics and properties of measuring instruments that ensure comparability of measurement results and the ability to calculate the error of measurement results when operating technical means as part of systems.

According to the type of energy used as a carrier of information signals, GSP devices are divided into electrical, pneumatic, hydraulic, as well as devices operating without the use of auxiliary energy - direct-acting devices and regulators. In order to ensure the joint operation of devices of different groups, appropriate signal converters are used. In automated control systems, the most effective is the combined use of devices from different groups.

The advantages of electrical appliances are well known. These are, first of all, high sensitivity, accuracy, speed, ease of transmission, storage and processing of information. Pneumatic devices provide increased safety when used in highly flammable and explosive environments, high reliability in harsh operating conditions and aggressive atmospheres. However, they are inferior to electronic devices in terms of speed and the ability to transmit signals over long distances. Hydraulic devices make it possible to obtain precise movements of actuators and high forces.

In technical documentation, the most widely used classification feature is the product type: a set of products of the same functional purpose and operating principle, similar in design and having the same main parameters. One type may include several standard sizes and modifications or designs of the product. Standard sizes of products of the same type differ in the values of the main parameter (usually singled out for single-function products).

Modification - a set of products of the same type having certain design features or a certain value of a non-main parameter. A design (version) usually means products of the same type that have certain design features affecting their performance characteristics, for example tropical or marine versions.

Complex - a larger classification grouping than a type. In the GSP, complexes are divided into unified and aggregate ones. The distinctive feature of a unified complex is that no combination of its technical means with each other leads to those means implementing new functions. In aggregate complexes, new functions can be implemented through various combinations of technical means. The most widely used are the aggregate complexes of electrical measuring equipment (ASET), computer equipment (ASVT), telemechanics (ASTT), primary information collection (API), etc.

The exchange of information between the technical means of the GSP is implemented using communication signals and interfaces.

In automated control systems, electrical communication signals are the most common, the advantages of which are high signal transmission speed, low cost and availability of energy sources, and ease of laying communication lines. Pneumatic signals are used mainly in the oil, chemical and petrochemical industries, where it is necessary to ensure explosion safety and high speed is not required. Hydraulic signals are mainly used in hydraulic servo systems and control devices for hydraulic actuators.

Information signals can be presented in a natural or unified form.

A natural signal is the signal of a primary measuring transducer; its type and range of variation are determined by the transducer's physical properties and the range of the measured quantity. Typically these are the output signals of measuring transducers, most often electrical, which can be transmitted over a short distance (up to several meters). The type of carrier and the range of variation of a unified signal do not depend on the quantity being measured or the measurement method. Typically, a unified signal is obtained from a natural one using built-in or external normalizing converters. The main types of unified analog GSP signals are given in Table 8.

Among electrical signals, unified direct-current and voltage signals are the most common. Frequency signals are used in telemechanics equipment and in complexes of technical means for local information and control systems.

Table 8
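What a normalizing converter does can be sketched as a linear mapping from the natural signal range onto a unified current range. The measured range below is an invented example; the 4-20 mA range is used here as a representative unified analog signal, under the assumption that it matches one of the Table 8 entries.

```python
# Sketch of a normalizing converter: map a natural sensor signal
# (invented 0-600 range, e.g. degrees C) onto a unified current signal.

def to_unified_ma(value, lo, hi, i_min=4.0, i_max=20.0):
    """Linearly map value in [lo, hi] to the unified current range."""
    frac = (value - lo) / (hi - lo)
    frac = min(1.0, max(0.0, frac))        # clamp out-of-range inputs
    return i_min + frac * (i_max - i_min)

i_mid = to_unified_ma(300.0, 0.0, 600.0)   # mid-scale reading
```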

Laboratory work No. 4. "Verification of the pyrometric millivoltmeter"

Study of the device and verification of pyrometric millivoltmeters

Purpose of work: familiarization with the principle of operation, design and methodology of verification of pyrometric millivoltmeters.

Work progress: When verifying pyrometric millivoltmeters, the operations specified in Table 9 should be performed.

Table 9

VERIFICATION MEANS

2. 2.1. When carrying out verification, the following standard means are used:

3.
verification:

4. exemplary millivoltmeters of accuracy classes 0.2 and 0.5;

5. DC potentiometers of accuracy classes 0.05-0.002;

6. normal elements of accuracy classes 0.002-0.005;

7. measuring coils electrical resistance accuracy class 0.01.

8. 2.2. When performing verification, auxiliary verification tools are used:

zero indicators with a constant-current division value of (0.1-15)·10 A/div and an external critical resistance of no more than 500 Ohms;

DC sources: filament batteries with a voltage of 1.28 V and a capacity of 500 Ah; acid storage batteries with voltages from 2 to 6 V; low-voltage DC stabilizers; adjustable DC sources of the IRN type;

DC resistance boxes of accuracy classes 0.2 and 0.1;

slide-wire rheostats from 100 to 1000 Ohms;

magnifiers with 2x and 2.5x magnification;

devices for checking balance at inclination angles of 5 and 10°.

Specifications of verification means.

The error of the standard verification means must be five times smaller than the permissible error of the device being verified, in accordance with GOST 22261-76.
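The five-to-one requirement above is a simple ratio check; a minimal sketch (function name and the example class values are illustrative, not taken from the GOST text):

```python
def reference_adequate(device_error, reference_error, ratio=5.0):
    """Check that the reference instrument's error is at least `ratio`
    times smaller than the permissible error of the device under test."""
    return reference_error <= device_error / ratio

# A class-0.5 millivoltmeter (0.5 % permissible error) checked against
# a class-0.1 reference: 0.1 <= 0.5 / 5, so the reference qualifies.
print(reference_adequate(0.5, 0.1))  # True
```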

2.3. Other verification means with comparable parameters may also be used.

3. CONDITIONS AND PREPARATION FOR VERIFICATION

3.1. Verification is carried out at normal values of all influencing quantities in accordance with GOST 22261-76.

3.2. Before verification, the following preparatory work is performed:

a) prepare and switch on the device being verified in accordance with its operating documentation and the instructions on its dial and body;

b) pyrometric millivoltmeters with a scale graduated in degrees of temperature are connected to the measuring circuit in series with a resistor. The resistance of the resistor must correspond to the resistance indicated on the scale of the device being verified, with a tolerance of … Ohm;

c) when verifying pyrometric millivoltmeters with a scale graduated in millivolts, as well as those intended for operation with total-radiation telescopes, the resistor is not included in the measuring circuit;

d) the correcting rheostat (reading corrector) of a pyrometric millivoltmeter designed to operate with total-radiation pyrometer telescopes is set to the extreme (zero) position when determining the basic error;

e) when verifying PP-1 and PR 30/6 millivoltmeters at scale marks of 1000 °C and above, the resistance value is increased by 1.2 Ohm, which corresponds to the conventional increase in thermocouple resistance when heated;

f) when determining the basic error and the variation of readings of regulating millivoltmeters, the setpoint indicators are moved outside the scale marks so that they do not interfere with the free movement of the pointer. The contact device of a regulating millivoltmeter is connected to the mains 2 hours before the start of verification (unless a different time is specified in the technical description of the device);

g) when verifying multi-point self-recording pyrometric millivoltmeters, all input circuits of the device being verified are connected in parallel.

4. VERIFICATION

4.1. Visual inspection

4.1.1. During the external inspection, the following should be established:

a) compliance of the millivoltmeter with GOST 22261-76 and GOST 9736-68;

b) reliable fastening of the external and internal parts of the device and the absence of damage;

c) the absence of breaks in the millivoltmeter circuit, which is checked with the terminals short-circuited while gently rocking the device;

d) free movement of the pointer.

If a millivoltmeter does not meet at least one of these requirements, it is considered unsuitable for use and no further verification is performed.

4.2. Testing is carried out with the millivoltmeter connected to the measuring circuit; the following is checked:

a) correct operation of the corrector in accordance with GOST 9736-68;

b) serviceability of the correcting rheostat (reading corrector) built into millivoltmeters designed to operate with total-radiation telescopes. To do this, set the pointer to the highest scale mark with the correcting rheostat in the zero position, then gradually rotate the rheostat knob and observe the change in the millivoltmeter readings.

4.3. Determination of metrological parameters

4.3.1. The internal resistance of the millivoltmeter is determined either by the compensation method of comparison with a reference coil, using the diagram shown in Fig. 1.40, or by the substitution method, using the circuit shown in Fig. 1.41, as follows:

a) a value close to the internal resistance of the millivoltmeter being verified is set on the resistance box;

b) with switch P in position I, the potentiometer is used to measure the voltage drop across the millivoltmeter being verified, the adjustable resistance being set so that the current deflects the needle within the millivoltmeter scale;

c) with switch P in position II, the box resistance is changed until the potentiometer reads the same voltage drop as was measured across the millivoltmeter; the internal resistance of the millivoltmeter is then equal to the resistance set on the box.
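The logic of the substitution method is that, at the same circuit current, equal voltage drops imply equal resistances. An idealized numerical sketch of that reasoning (values and function name are illustrative; a real verification adjusts a physical decade box, not a list):

```python
def find_internal_resistance(v_measured, current, box_values):
    """Substitution method, idealized: the potentiometer measures the
    drop v_measured (V) across the millivoltmeter at a known circuit
    current (A); the decade-box value giving the same drop equals the
    millivoltmeter's internal resistance R = V / I."""
    target = v_measured / current  # ideal internal resistance, Ohms
    return min(box_values, key=lambda r: abs(r - target))

# 3.0 mV across the meter at 0.02 mA implies R = 150 Ohm; the closest
# available decade-box setting is selected:
print(find_internal_resistance(3.0e-3, 2.0e-5, [100.0, 140.0, 150.0, 160.0]))  # 150.0
```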

5. REGISTRATION OF VERIFICATION RESULTS

5.1. The verification data of millivoltmeters of accuracy classes 0.2 and 0.5 are entered into a protocol, which is kept by the organization that performed the verification during the period between two verifications of the device.

5.2. Verification data for instruments of accuracy classes 1, 1.5 and 2.5 are recorded in the observation log.

5.3. Millivoltmeters that meet the requirements are stamped after verification.

5.4. For millivoltmeters of accuracy classes 0.2 and 0.5, at the customer's request, an extract from the verification protocol is issued indicating the correction values in millivolts.

5.5. If a millivoltmeter is found unsuitable, the metrological service issues a notice of unsuitability indicating the reasons and cancels the stamp.

Laboratory work No. 5 "Checking the automatic potentiometer"

Studying the device and checking the automatic potentiometer

Purpose of work: familiarization with the principle of operation, design and verification methodology of the automatic potentiometer.

Work progress: When checking automatic potentiometers and bridges, the “Rules for the technical operation of consumer electrical installations and safety rules for the operation of consumer electrical installations” approved by Gosenergonadzor and the requirements established by GOST 12.2.007.0-75 must be observed.

When checking an automatic potentiometer against a portable potentiometer of the PP type, it must be taken into account that these devices are potentiometers of the same accuracy class. Therefore, for reliable verification of an automatic potentiometer with a measurement range of, for example, 16.76 mV, the corrections at any point of the slide-wire scale of the PP potentiometer must be known to within 0.03 mV, and those of the section switch to within 0.01 mV. When a device is calibrated for a different measurement range, the requirements for the reference device change proportionally. The third method involves using only a portable potentiometer.

Laboratory work No. 6 "Checking the resistance thermal converter"

Studying the device and checking the resistance thermal converter

Purpose of work: familiarization with the principle of operation, design and verification methodology of resistance thermal converters according to GOST 8.461-2009.

Progress of work: this standard applies to resistance thermal converters made of platinum, copper and nickel in accordance with GOST 6651, intended for measuring temperatures from minus 200 °C to plus 850 °C or in a part of this range, as well as to resistance thermal converters manufactured before the introduction of GOST 6651 that remain in circulation, and establishes the methodology for their initial and periodic verification. In accordance with this standard, the sensitive elements of resistance thermal converters used as temperature measuring instruments may also be verified. Temperature values in this standard correspond to the International Temperature Scale ITS-90.

Laboratory work No. 7 "Temperature measurement with a radiation pyrometer"

Purpose of work: familiarization with the principle of operation, design and application methodology of the radiation pyrometer.

Progress: familiarization with the design and operation of radiation pyrometers.

DESCRIPTION OF RADIATION PYROMETERS

At high temperatures, any heated body emits a significant portion of its thermal energy as a stream of light and heat rays. The higher the temperature of the heated body, the greater the intensity of the radiation. A body heated to approximately 600 °C emits invisible infrared heat rays. A further increase in temperature leads to the appearance of visible light rays in the emission spectrum: as the temperature rises, the color changes from red to yellow and then to white, which is a mixture of radiation of different wavelengths.
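The qualitative statement above has a standard quantitative form: for a black body the total radiated power per unit surface grows as the fourth power of the absolute temperature (the Stefan-Boltzmann law), which is what total-radiation pyrometers exploit. A small sketch (the grey-body emissivity parameter is illustrative, not from the source):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiant_exitance(t_kelvin, emissivity=1.0):
    """Total power radiated per unit surface of a (grey) body, W/m^2."""
    return emissivity * SIGMA * t_kelvin ** 4

# Doubling the absolute temperature increases the radiated power 16-fold,
# which is why radiation pyrometers are so sensitive at high temperatures:
print(radiant_exitance(1200.0) / radiant_exitance(600.0))  # 16.0
```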

This section should present the set of mathematical formulas, methods and models used to realize the goals and objectives of the IS. If new information processing processes are being designed, the corresponding algorithms should be presented.

2.5.5. Software

You should indicate the system software necessary for the operation of the proposed IS (including network software and workstation software).

The development tools used (programming languages, development environments) are indicated and the developed software package is briefly described.

Then the automated functions are described in detail; the developed software modules and their relationships, the tree of procedure and program calls, and a diagram of the relationship between program modules and information files are shown.

Tree of automated functions. First, a hierarchy of the management and data processing functions that the software product being developed is designed to automate should be given. Two subsets of functions can be distinguished and detailed here: a) those that implement service functions (for example, checking passwords, maintaining a calendar, archiving databases, etc.); b) those that implement the basic functions of entering primary information, processing, maintaining directories, answering queries, etc. (Fig. 4).

Fig. 4. Example of a function tree
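As an illustrative sketch of such a hierarchy (the function names below follow the service/basic split above but are otherwise hypothetical), the tree can be held as a nested structure and flattened to enumerate the elementary functions:

```python
# Two subtrees: service functions and basic functions of the IS.
function_tree = {
    "IS functions": {
        "service functions": ["password check", "calendar", "database archiving"],
        "basic functions": ["primary data entry", "processing",
                            "directory maintenance", "query answering"],
    }
}

def leaves(node):
    """Collect the elementary functions at the leaves of the tree."""
    if isinstance(node, list):
        return list(node)
    out = []
    for child in node.values():
        out.extend(leaves(child))
    return out

print(len(leaves(function_tree)))  # 7 elementary functions
```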

Identifying the composition of functions, their hierarchy and choosing a communication language (for example, a “menu” type language) allows you to develop the structure of a dialogue script, which makes it possible to determine the composition of dialogue frames, the content of each frame and their subordination.

Dialogue structure. When developing a dialogue structure, it is necessary to provide for the ability to work with input documents, generate output documents, adjust input data, view entered information, work with files of normative and reference information, log user actions, as well as assistance at all stages of work.

At this point you should select a way to describe the dialogue. Typically, there are two ways to describe dialogue. The first involves the use of a tabular form of description. The second uses a representation of the dialogue structure in the form of a digraph, the vertices of which can be renumbered (Fig. 5), and a description of its content in accordance with the numbering of the vertices, either in the form of screens, if the messages are relatively simple, or in the form of a table.
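The digraph form of a dialogue description can be sketched as a plain adjacency map over numbered frames and used to check that a user session follows permitted transitions (frame numbers and transitions below are hypothetical):

```python
# Vertices are numbered dialogue frames; edges are permitted transitions.
dialogue = {
    1: [2, 3],   # 1: main menu -> 2: data entry, 3: reports
    2: [1, 4],   # 4: confirm and save
    3: [1],
    4: [1],
}

def valid_session(path, graph):
    """Check that a sequence of visited frames follows the digraph's edges."""
    return all(b in graph[a] for a, b in zip(path, path[1:]))

print(valid_session([1, 2, 4, 1, 3], dialogue))  # True
print(valid_session([1, 4], dialogue))           # False: no edge 1 -> 4
```

The same structure serves both descriptions mentioned above: the table form is just this map written out row by row, and the screen-by-screen form attaches frame content to each vertex number.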

Dialogue in an IS cannot always be formalized in a structured form. As a rule, dialogue is implemented explicitly in those ISs that are strictly tied to the execution of the subject technology. In some complex ISs (for example, in expert systems) the dialogue is not formalized in a structured form, and in that case this paragraph may not contain the schemes described.

Describing a dialogue implemented using a context-sensitive menu does not require a non-standard approach. It is only necessary to define unambiguously all the levels at which the user makes a decision about the next action, and to justify the decision to use this particular technology (describe the additional functions, context prompts, etc.).

Fig. 5. Example of a dialogue script

Tree of software modules. Based on the results obtained above, a tree of software modules is built (Fig. 6), reflecting the structural diagram of the package and containing software modules of various classes:

    modules performing service functions;

    control modules designed to load menus and transfer control to another module;

    modules related to input, storage, processing and output of information.

Fig. 6. Tree of program modules
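A minimal sketch of a control module of the second class, which loads the menu and transfers control to another module (identifiers and handler bodies are hypothetical):

```python
# Hypothetical module identifiers mapped to their handlers.
def enter_request():
    return "request registered"

def show_customers():
    return "customer directory opened"

MODULES = {
    "M_REQUEST": enter_request,
    "M_CUSTOMERS": show_customers,
}

def control_module(choice):
    """Transfer control to the module selected from the menu."""
    handler = MODULES.get(choice)
    return handler() if handler else "unknown menu item"

print(control_module("M_REQUEST"))  # request registered
```

The dispatch table plays the role of the tree's root: adding a module means adding one entry, without touching the control logic.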

In this paragraph, the identifier and the functions performed should be indicated for each module, for example in the form of a two-column table "Identifier / Functions performed by the module". In the example, the modules perform the following functions:

    getting started with the program, selecting main menu items;

    storing non-visual components;

    registering a new application;

    customer directory;

    vehicle brand directory;

    body type directory;

    registration, viewing and editing of an individual vehicle card;

    directory of grounds for an application;

    fuel and lubricants directory;

    registration, viewing and editing of an individual driver card;

    log of received requests for transport;

    driver class directory;

    registration of a new waybill, editing of entry fields.

The description of the software modules should include block diagrams of the algorithms of the main calculation modules.

The diagram of the relationship between program modules and information files reflects the relationship between the software and the information support of the IS and can be represented by several diagrams, each corresponding to a certain mode (for example, Fig. 7). The head part is represented as one block with pointers to the mode diagrams.

Fig. 7. Example of a diagram of the relationship between program modules and information files


Department: General Physics

Topic: Algorithmic and software support of a modern radiophysical experiment

Moscow, 2008

Algorithmic and software support of a modern radiophysical experiment

Since an ASRFI is created to solve a certain range of problems related to studying previously unknown properties of research objects, the characteristics of its links and the requirements for the system as a whole are focused on the most effective implementation of well-defined algorithms providing maximum information content. Consequently, by the beginning of the development of the ASRFI hardware complex, the main control algorithms must be worked out to such an extent that estimates can be obtained of the main characteristics of individual programs, of their connections with each other, and of the data arrays.

The sequence of stages in creating the algorithmic and software support is shown in Fig. 1.8. Unlike systems designed to solve problems related to the functioning of technical objects, whose characteristics can largely be known in advance, ASRFI are developed for the study of radiophysical objects whose properties are usually unknown in advance. Therefore, the task of developing control algorithms is necessarily preceded by the problem of determining mathematical models that describe the object of investigation (OI). Both of these tasks constitute the content of algorithmization of the RFV measurement process. The resulting mathematical models of the OI and of the radiophysical processes occurring in it and determining its properties, together with the control algorithms and the programs implementing them, are an integral part of the ASRFI mathematical software.

A generalized diagram of the algorithmic support of an ASRFI is shown in Fig. 1.9. The ASRFI algorithms 1 are formed by three enlarged blocks: system control algorithms 2, information input and output algorithms 3, and algorithms for solving computational problems 4. The main enlarged functions of the system control algorithms are organizing the control of the parameters of individual functional modules (FM) 5 [operator R2^1 in formula (1.27), when the operator R2 is specified] and restructuring 6 [operator R2^2 in (1.27)]. Algorithm block 3 provides reception 7 and output 8 [operators R2^3 and R2^4 in (1.27)] of all signals (both digital and analog) during the interaction of the computer with external devices. Algorithm block 4 is designed to solve all computational problems and is functionally interconnected with the previous algorithm blocks. Preliminary digital signal processing 9 [operator R2^5 in (1.27)] ensures, when the need arises, the quality of further processing (preventing the aliasing effect, digital signal filtering, weighting the entered arrays of digital information with window functions, etc.).

Mathematical signal processing 10 [operator R2^6 in (1.27)] must provide all computational procedures, including the special mathematical processing needed to obtain the measurement result in a specific RFE.

If an ASRFI reaches the intellectual level in its organization, then its functioning necessarily involves the creation of expert systems, whose functions also include the implementation of the relevant control principles 11 [operator R2^7 in (1.27)].

For the generalized classification of the hardware and algorithmic support of an ASRFI, taking the above into account, the general measurement equation in operator form is written as:

(1.31)

In the diagram in Fig. 1.9, the division of algorithms is conditional. There are extensive functional connections between them, which will be discussed further.

In paragraph 1.4.2 it is shown that a fundamental increase in the information content of an SRFI can be achieved by introducing elements of flexibility into all parts of its hardware, thereby ensuring adaptive properties that allow programmatic adjustment of the SRFI parameters without interrupting the current experiment. Functional connections exist between these links and the computer, and their characteristics are controlled within the limits of flexibility according to certain algorithms implemented in the computer by software. In addition, the capabilities of modern computers make it possible to implement many hardware FMs in algorithmic form. Moreover, in many cases the characteristics of algorithmic FMs are better than those of their hardware counterparts.

Fig. 1.8 Sequence of stages in the development of algorithmic and software for a complex system

Fig. 1.9 Generalized structure of algorithmic support for ASRFI:

1 - algorithms; 2 - system control; 3 - exchange with external devices; 4 - solutions to computational problems; 5 - functional management; 6 - structural management; 7 - signal input; 8 - signal output; 9 - preliminary digital signal processing; 10 - mathematical signal processing; 11 - analysis of databases and knowledge, formation of logical conclusions.

The ASRFI software is developed on the basis of the already developed algorithms. Once the composition of all the tasks of the ASRFI being developed has been determined, methods for solving them have been selected, the information links between them and the sequence of their solution have been established, and they have been combined into subsystems, it is appropriate to distribute the control functions among the software, the hardware and the human expert. This distribution is determined from system considerations, taking material costs into account. These characteristics are reflected in the requirements for the algorithm (or timing diagram) of the system. Consequently, constructing the algorithm (timing diagram) and choosing the distribution of functions among the expert, hardware and software constitute a problem whose solution determines all subsequent decisions.

It is known [76] that, by functional characteristics, software can also be divided into functionally complete FMs. It is practically impossible to create comprehensive, unified software for complex SRFI. Some unification of software is possible only for standardized means of organizing an experiment, for example, using the above-mentioned VECTOR, CAMAC, FASTBUS and VME systems, which also have a logical standard.

A modern trend in the development of ASRFI software should probably be considered the creation of software shells within which the synthesis of virtual systems is possible. Examples of such shells are the software included in LabVIEW, LabWindows, etc. One of the most promising areas of software development at present is evidently software for organizing intelligent systems. However, as will be shown below, the specifics of a particular experiment necessarily affect such software, which makes complete unification of the hardware and software impossible.

Existing methods for designing flexible systems for scientific research

The emergence of microprocessor-based tools (MPS) immediately led to a new class of measuring equipment: digital measuring instruments (DMI), which have some functional flexibility and adaptability (in particular, automatic selection of measurement ranges), making them to a certain extent more convenient to use. However, the capabilities of MPS are so significant that it makes sense to use them not only for measuring radiophysical quantities but also for their further mathematical processing, which is impossible in a DMI because it lacks flexible programming capabilities.

With the advent of MPS, mini- and micro-computers with flexible programming capabilities also appeared, capable of interacting (exchanging information) with external devices. This provided the ability to enter and process measurement information into a computer using all its computing and other capabilities. The presence of such qualities in MPS has led to the creation of a variety of interface tools that provide interaction between MPS and other devices in systems of various configurations and intended, among other things, for measurement purposes.

The emergence of interfaces made it possible to increase computing power by combining several computers and to create multi-level (hierarchical) computing structures capable of solving increasingly complex problems, including in experimental research. The ability to output information from the MPS to external devices makes it possible to generate control actions according to a given algorithm.

Standardization and unification of the components of measuring and control computing systems formed the basis for creating formalized methods of designing measuring and computing systems (MCS) based on standard technical solutions. One of the first applications of the layout method was the creation of automated process control systems. However, such systems have neither software flexibility nor real-time adaptability.

A further development of the layout method is the method of designing an MCS from unified modular layout elements (the design-layout method). As is known, MCS are measuring instruments that include measuring, computing and software components. Both hardware and software modules can be used in MCS design. Individual hardware subunits can be built on the basis of standard modular systems (for example, interface means in the CAMAC standard). Such measuring and computing tools are flexible at the level of modular restructuring; however, they have the disadvantages specified in clause 1.4.2.

Particularly complex measurement systems for comprehensive research in nuclear physics, space physics, aerospace research, etc. are designed using the compositional method. This method involves decomposing a complex problem into a number of particular subproblems, which are solved by many teams of specialized specialists using network planning. The subsequent composition of the solutions obtained yields complex hierarchical systems. Solving such problems is within the reach only of groups of scientific teams (research institutes, design bureaus, etc.).

Further progress in computers and the element base led to new approaches in the development of SRFI: imparting the properties of maximum flexibility, adaptability and intellectualization (creation of databases, knowledge bases and measuring systems). In the development of interfacing means, flexibility began to be ensured not by the principle of modularity but by software-controlled electronic switching within a single modular board. Recently, integration processes have also begun to appear in the synthesis of both the hardware and the algorithmic support of SRFI. The same processes, though less dynamically, have begun to manifest themselves in the merging of the measuring and computing part of an SRFI with experimental installations. In particular, in our case this was manifested in the implementation of several (more than two) related, complementary and interdependent methods of measuring radiophysical quantities and in the organization of program-controlled influence on the object of investigation (OI) within the same SRFI. The integration of the hardware and algorithmic support of SRFI, combined with the introduction of flexibility and adaptability in organizing software-controlled influence on the OI, naturally leads to an increase in their efficiency.

However, the main drawback of these methods of designing SRFI is that the capabilities of the metrological optimization criterion are not fully used to achieve maximum characteristics. This leads to a suboptimal synthesis of the SRFI already at the initial stage, which subsequently necessitates its refinement.

The above shortcomings of existing methods for designing systems for scientific research call for the development of new methods and the creation of appropriate flexible, software-controlled interface tools and means of influencing the OI, in order to give these systems the adaptive properties needed to solve the most modern problems in radiophysical measurements.



Algorithmic support (Lecture)

LECTURE PLAN

1. Algorithms for primary information processing

2. Algorithms for secondary information processing

3. Algorithms for predicting the values ​​of quantities and indicators

4. Control algorithms

Algorithmic support is a set of interconnected algorithms. The set of algorithms is divided into six groups:

1. Algorithms for primary information processing (filtering, taking into account the nonlinearity of the characteristic).

2. Algorithms for determining process indicators (algorithms for secondary information processing): determination of integral and average values, rates, forecasting, etc.

3. Control algorithms.

4. Algorithms for digital regulation and optimal control.

5. Logic control algorithms.

6. Algorithms for calculating technical and economic indicators.

1. Algorithms for primary information processing

Primary information processing includes filtering the useful signal, checking information for reliability, analytical calibration of sensors, extrapolation and interpolation, and taking into account dynamic connections.

Filtering is the operation of separating the useful measurement signal from its sum with noise. Depending on the nature of the interference, the following filters are distinguished:

1. low-pass filters (LPF);

2. high-pass filters (HPF);

3. bandpass filters, which pass signals in a certain frequency band;

4. notch (band-stop) filters, which block signals in a certain frequency band.

The most common are low-pass filters, which are divided into moving average filters, exponential smoothing filters and median filters.

Difference equation of exponential smoothing filter

We obtain the exponential smoothing filter equation under the following assumptions.

Assumption 1: the useful signal x(t) is a stationary random process with known statistical characteristics: Mx is the expected value, Dx is the variance, and Rx(τ) is the autocorrelation function showing the degree of correlation between signal values at moments shifted relative to each other by time τ. The useful signal is not correlated with the interference.

Assumption 2: the interference f(t) is a stationary random process, uncorrelated with the useful signal, with known statistical characteristics: Mf = 0, variance Df, and an autocorrelation function Rf(τ) that decays much faster than Rx(τ).

In the continuous version, the properties of the exponential smoothing filter are described by the differential equation

T dy(t)/dt + y(t) = x(t).

The transfer function is that of an aperiodic (first-order lag) link:

W(p) = 1/(Tp + 1).

Replacing the derivative with a finite difference, we obtain the difference equation

y(k) = γ x(k) + (1 − γ) y(k − 1), where γ = T0/(T + T0),

T is the time constant, T0 is the sensor polling period, and γ is the filter setting parameter. The optimal value of γ is determined by minimizing the filtering error and depends on the statistical properties of the useful signal and the interference. In practice these properties often cannot be determined. The smaller γ, the stronger the smoothing property of the filter; however, at very small γ the useful signal itself may be distorted.

This filter is the most common low-pass filter.
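The difference equation maps directly onto a few lines of code. The sketch below is an illustrative implementation (not part of the lecture), applying the recurrence y(k) = γx(k) + (1 − γ)y(k − 1) with γ = T0/(T + T0); initializing with the first sample is an assumption.

```python
def exp_smooth(samples, T, T0):
    """Exponential smoothing filter y(k) = g*x(k) + (1-g)*y(k-1),
    where g = T0 / (T + T0); T is the time constant, T0 the polling period."""
    g = T0 / (T + T0)
    y = samples[0]          # initialize with the first sample (assumption)
    out = [y]
    for x in samples[1:]:
        y = g * x + (1 - g) * y
        out.append(y)
    return out
```

The larger T relative to T0, the smaller γ and the stronger the smoothing.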

Moving Average Filter Difference Equation

In the analog (continuous) version, the moving average filter equation has the form

y(t) = (1/T) ∫ from t−T to t of x(θ) dθ.

Replacing the integral with a sum (rectangle method of integration), we obtain the difference equation

y(k) = (1/n) Σ from i=0 to n−1 of x(k − i),

where T is the averaging time, T = nT0, and n is the number of averaging points, the filter setting parameter. The optimal value of n is determined by minimizing the error (error variance) of the filter and depends on the statistical properties of the useful signal and the interference.

The larger n , the greater the smoothing property of the filter.
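A minimal sketch of the moving average filter, maintaining the running sum recurrently so that all n samples need not be re-added at every step; the warm-up behavior before n samples have arrived (averaging over however many are available) is an implementation choice.

```python
from collections import deque

def moving_average(samples, n):
    """Moving average filter y(k) = (1/n) * sum of the last n samples,
    maintained recurrently: the running sum is updated, not recomputed."""
    window = deque()
    s = 0.0
    out = []
    for x in samples:
        window.append(x)
        s += x
        if len(window) > n:
            s -= window.popleft()   # drop the oldest sample
        out.append(s / len(window)) # len(window) == n after warm-up
    return out
```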

Zero-order static filters

A static filter is a filter whose analog version is a parallel connection of (n + 1) chains, each consisting of an amplifying link and a pure delay link.

The transfer function of such a filter has the form

W(p) = Σ from i=0 to n of b_i e^(−iτp),

where τ is the delay time and n is the filter order.

When n = 0 we have a zero-order static filter: W(p) = b0, i.e. y(t) = b0 x(t).

When using this formula, y(t) is a biased estimate of the useful signal x(t): the mathematical expectation of the output signal is M[y] = b0 Mx ≠ Mx.

To obtain an unbiased estimate, the following function must be used:

y(t) = b0 x(t) + (1 − b0) Mx.

In this case M[y] = Mx, with b0 as the setting parameter.

For the software implementation of a zero-order static filter, the formula is

y(k) = b0 x(k) + (1 − b0) Mx.

First-order static filters

The transfer function of such filters has the form

W(p) = b0 + b1 e^(−τp).

The expected value of the output is M[y] = (b0 + b1) Mx.

For the filter to give an unbiased estimate, the condition b0 + b1 = 1 must hold, i.e. b1 = 1 − b0, where b0 is the filter setting parameter.

The optimal value of b0 is obtained by minimizing the filtering error.

For the software implementation, τ = T0, the sensor polling period, and the difference equation is

y(k) = b0 x(k) + (1 − b0) x(k − 1).

Robust filters

Filters of this type are designed to filter out abnormal outliers (spikes). Robust filters include the median filter and the ladder exponential smoothing filter.

Median filter

The median filter is implemented according to the formula

y(k) = med{x(k − M + 1), …, x(k)},

where M is the setting parameter (window size) and med denotes the operation of estimating the median.

The median is estimated using the following algorithm: the samples are sorted into a series in ascending order; when M is odd, the central value of this series is chosen as the median; when M is even, the half-sum of the two middle values of the series is chosen as the median.
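The median estimation algorithm above can be sketched as follows; how the window is handled during the first M − 1 samples (shrinking window) is an assumption of this sketch.

```python
def median_filter(samples, M):
    """Median filter: y(k) = median of the last M samples.
    The window is sorted; for odd window length the middle element is taken,
    for even length the half-sum of the two middle elements."""
    out = []
    for k in range(len(samples)):
        window = sorted(samples[max(0, k - M + 1): k + 1])
        m = len(window)
        if m % 2 == 1:
            out.append(window[m // 2])
        else:
            out.append((window[m // 2 - 1] + window[m // 2]) / 2)
    return out
```

A single abnormal spike never reaches the output, since it can occupy at most one position in the sorted window.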

Ladder exponential smoothing filter

The operating algorithm of this filter is as follows: if the modulus of the increment of the signal at adjacent samples, |x(k) − y(k − 1)|, does not exceed a threshold proportional to the standard deviation (RMS) of the interference, the usual exponential smoothing step is applied; otherwise the output is changed only by a limited step, so that an abnormal spike cannot pull the estimate away.

Difference equations of filters with a given frequency response

If it is necessary to implement a low-pass filter with a given frequency response, the logarithmic frequency characteristic (LFC) is used.

A(ω) is the dependence of the harmonic signal transmission coefficient on frequency.

It is necessary to determine the LFC, then the transfer function, and then move from the transfer function to the discrete transfer function.

The transfer function (TF) is the ratio, in Laplace images, of the output function to the input function under zero initial conditions:

W(p) = Y(p)/X(p), where p is a complex variable.

Discrete transform:

F*(p) = Σ from k=0 to ∞ of f(kT0) e^(−pkT0).

Changing the variable z = e^(pT0), we obtain

F(z) = Σ from k=0 to ∞ of f(k) z^(−k).

The transition from the TF to the discrete TF can be performed on the basis of a substitution such as p ≈ (1 − z^(−1))/T0.

After obtaining the discrete TF, the difference equation is easily obtained using the displacement (delay) theorem for the displaced lattice function:

Z{f(k − m)} = z^(−m) F(z).

A non-recurrent (non-recursive) system has only input samples on the right-hand side of the difference equation; a recursive system also has output samples on the right-hand side.

For a frequency response of the given type (*), the coefficients A and B are substituted into expression (*) and the discrete transfer function is determined. Next, the difference equation is written using the displacement theorem and the program is created.

For a high-pass filter, a bandpass filter and a notch filter with given frequency characteristics, the discrete transfer function and the corresponding difference equation are obtained in the same way.

Besides those considered, other, more complex filters are used to implement the filtering procedure: adaptive filters and filters with steep frequency-response edges. Such filters include the Chebyshev, Kalman and Wiener filters.
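As a worked example of the substitution route, take the aperiodic link W(p) = 1/(Tp + 1) and substitute p ≈ (1 − z⁻¹)/T0 (one common choice): this yields W(z) = γ/(1 − (1 − γ)z⁻¹) with γ = T0/(T + T0), which is exactly the exponential smoothing recurrence. The generic difference-equation evaluator below is an illustrative sketch.

```python
def diff_eq(b, a, x):
    """Evaluate the difference equation derived from a discrete transfer
    function W(z) = (b0 + b1 z^-1 + ...) / (1 + a1 z^-1 + ...):
        y(k) = sum_i b_i x(k-i) - sum_j a_j y(k-j),  j >= 1.
    Missing past samples are taken as zero."""
    y = []
    for k in range(len(x)):
        acc = sum(bi * x[k - i] for i, bi in enumerate(b) if k - i >= 0)
        acc -= sum(aj * y[k - j] for j, aj in enumerate(a)
                   if j >= 1 and k - j >= 0)
        y.append(acc)
    return y

# Aperiodic link W(p) = 1/(Tp+1) discretized with p ~ (1 - z^-1)/T0:
# W(z) = g / (1 - (1-g) z^-1),  g = T0/(T + T0)
T, T0 = 1.0, 1.0
g = T0 / (T + T0)
step = diff_eq([g], [1.0, -(1 - g)], [1.0] * 5)  # unit-step response
```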

Checking the reliability of information

Unreliable information appears when information-measuring channels fail. There are two types of failures: complete and partial. A complete failure occurs when the measuring transducer fails or the communication line is damaged. In the event of a partial failure, the technical means remain operational, but the measurement error exceeds the permissible value.

Algorithms to detect complete failures:

1) Parameter tolerance control algorithm: checking the condition X_i min ≤ X_i ≤ X_i max, where X_i min is the minimum possible value of the i-th parameter and X_i max is the maximum possible value.

If the condition is not met, the information is unreliable. In this case, reliable information obtained at a previous moment of time is used, or the average value of the i-th parameter is used.

2) An algorithm based on determining the rate of change of the i-th parameter and checking the condition

A ≤ Ẋ_i ≤ B,

where Ẋ_i = dX_i(t)/dt ≈ (X_i(k) − X_i(k − 1))/T0 and T0 is the polling period.

3) Information redundancy algorithms, with the help of which partial failures are identified. Redundancy can be obtained by duplicating information-measuring channels (hardware redundancy), or by determining a parameter both by direct measurement and by calculation from other parameters.

With hardware redundancy, the sign of a failure is violation of the condition |X_i − X̄| < C, where X̄ is the average value over all redundant measuring channels, X_i is the value obtained from the i-th channel, and C is the largest permissible value of the difference modulus (2-3 times the root-mean-square error of the measuring conversion).

4) The material balance equation has the form f(x1, x2, …, xn) = 0. The equation is satisfied only if the parameter values x1, x2, …, xn correspond to the true values. If the parameters are measured with errors, then substituting the measured values gives f(x̂1, x̂2, …, x̂n) = Δ. If |Δ| exceeds a permissible value, the information is considered unreliable.
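A sketch of the complete-failure and hardware-redundancy checks described above; the function signatures, threshold values and the boolean return convention are assumptions made for illustration.

```python
def check_reliability(x, x_prev, x_min, x_max, rate_limit, T0):
    """Complete-failure detection: tolerance check plus rate-of-change check.
    Returns True if the reading is considered reliable."""
    if not (x_min <= x <= x_max):          # tolerance control
        return False
    rate = (x - x_prev) / T0               # finite-difference rate of change
    return abs(rate) <= rate_limit

def check_redundant(readings, C):
    """Hardware-redundancy check over duplicated channels: channel i is
    suspect when |x_i - mean| >= C (C ~ 2-3 times the RMS conversion error)."""
    mean = sum(readings) / len(readings)
    return [abs(x - mean) < C for x in readings]
```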

Here x is the measured quantity, y is the sensor output signal, and y = f(x) is the static characteristic of the sensor.

Analytical calibration of a sensor refers to the determination (restoration) of the measured value from the signal taken from the sensor (transducer):

x̂ = f^(−1)(y), where x̂ is the estimate of the measured value obtained from the sensor signal and f^(−1) is the inverse of the function y = f(x).

If the calibration characteristic of the measuring transducer is specified analytically, then analytical calibration reduces to a computational operation.

If the static characteristic of the sensor is linear, y = ax + b, then analytical calibration reduces to the formula x̂ = (y − b)/a.

In this case, the analytical calibration of the sensor amounts to scaling. However, most industrial sensors (transducers) have a nonlinear static characteristic, which is often determined experimentally and presented in the form of a graph or calibration table (passport data are used for this purpose). When the calibration characteristic is presented as a table, it is approximated by an analytical expression. One of the most common methods is approximation by power polynomials:

x̂ = Σ from j=0 to n of a_j y^j,

where a_j are the coefficients that must be numerically determined and n is the degree of the polynomial.

Using this formula, a number of problems arise:

1. Selecting the criterion by which the coefficients a_j are determined;

2. Determining the degree of the polynomial n that provides the required approximation accuracy.

Depending on the criterion used for approximation, the following polynomials are distinguished:

1. Polynomials of best uniform approximation (BUA).

The criterion for determining the coefficients of these polynomials is the requirement to ensure a specified accuracy at any point of the sensor operating range. To construct such a polynomial, it is necessary to minimize a linear form, for which linear programming methods are used (solving an optimization problem). Linear programming is a branch of mathematics that deals with methods for finding the extremum of a linear criterion under linear constraints. The most common linear programming method is the simplex method (a method of sequential improvement of a plan). The disadvantage of BUA polynomials is the complexity of determining the coefficients, that is, the need to solve a linear programming problem.

2. Asymptotic polynomials.

Their advantage is the ability to estimate the degree of the polynomial before calculating the coefficients. The calculation of the coefficients is based on a calibration table. Here is a fragment of this table, for the range a ≤ y ≤ b, where x0, x1, x2, … are the values of the measured parameter corresponding to y0, y1, y2, …:

Degree 1.
Points used: y0 = b; y1 = (b − a)/2; y2 = a.
Polynomial coefficients:
a0 = (1/4)[(x0 + 2x1 + x2) − 2((b + a)/(b − a))(x0 − x2)];
a1 = (1/(b − a))(x0 − x2).
Accuracy parameter: L1 = (1/2)((1/2)x0 − x1 − (1/2)x2).

Degree 2.
Points used: y0 = b; y1 = b − (1/4)(b − a); y2 = a + (1/4)(b − a); y3 = a.
Polynomial coefficients:
a0 = (2/3)((b + a)/(b − a))²(x0 − x1 − x2 + x3) − (1/3)((b + a)/(b − a))(x0 + x1 − x2 − x3) + (1/6)(−x0 + 4x1 + x2 − x3);
a1 = (2/(3(b − a))){[1 − 4((b + a)/(b − a))](x0 − x2) + [1 + 4((b + a)/(b − a))](x1 − x3)};
a2 = (2/3)(2/(b − a))²(x0 − x1 − x2 + x3).
Accuracy parameter: L2 = (1/3)((1/2)x0 − x1 + x2 − (1/2)x3).

3. Regression polynomials are used for the analytical calibration of non-standard sensors. The criterion for determining the coefficients is the root-mean-square error of approximation over the range of the measured value (the sum of squared errors is minimized):

I(a0, …, an) = Σ over i of (x_i − Σ from j=0 to n of a_j y_i^j)² → min.

To determine the polynomial coefficients, the least squares method is used: the criterion is minimized by solving the system of equations

∂I/∂a0 = 0, …, ∂I/∂an = 0.

Comparing the different polynomials, we can conclude that regression polynomials give the smallest mean square error, polynomials of best uniform approximation give the minimum maximum error, and asymptotic polynomials occupy an intermediate position between them.
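The least-squares route can be sketched end to end: build the normal equations ∂I/∂a_j = 0 and solve them. This is an illustrative pure-Python implementation; a production system would use a library solver, and the function names are invented for this sketch.

```python
def fit_regression_polynomial(ys, xs, n):
    """Least-squares fit of x ~ a0 + a1*y + ... + an*y^n from a calibration
    table (ys: sensor signals, xs: true measured values). Solves the normal
    equations dI/da_j = 0 by Gaussian elimination with partial pivoting."""
    m = n + 1
    # Normal-equation matrix A and right-hand side b
    A = [[sum(y ** (i + j) for y in ys) for j in range(m)] for i in range(m)]
    b = [sum(x * y ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Forward elimination
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    a = [0.0] * m
    for r in range(m - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, m))) / A[r][r]
    return a

def calibrate(a, y):
    """Analytical calibration: estimate x from signal y via the polynomial."""
    return sum(aj * y ** j for j, aj in enumerate(a))
```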

Application of interpolation and extrapolation when monitoring parameters and indicators

The process of obtaining information about continuously changing quantities in an automated process control system occurs discretely in time, so the task arises of restoring the values ​​of measured quantities at times that do not coincide with the moments of measurements.

For control, when it is necessary to know the value of a measured quantity at the current or future point in time, the method of extrapolating the value of a quantity obtained at a previous point in time is used.

To analyze production operations and calculate technical and economic indicators, it is necessary to determine the value of quantities at previous points in time; in this case, interpolation methods are used.

In most cases, extrapolation is carried out stepwise. With stepwise extrapolation, the value of the measured quantity at any given moment of time is judged by the last measured value. The stepwise extrapolation error is

σe² = 2[Rx(0) − Rx(T0)] + σm²,

where Rx(τ) is the autocorrelation function (establishing the degree of correlation), T0 is the sensor polling period, and σm is the measurement conversion error.

Thus, the error of stepwise extrapolation depends on the statistical properties of the measured quantity, the polling period and the error of the measuring channel, which must be taken into account when choosing the polling period.

For interpolation, piecewise-linear approximation is most often used, carried out over two points using the formula

x(t) = x(k − 1) + [x(k) − x(k − 1)](t − t_{k−1})/T0,  t_{k−1} ≤ t ≤ t_k.

Less accurate is step interpolation.
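Both operations are one-liners; the sketch below assumes uniform sampling is not required for interpolation (the two neighbor points carry their own timestamps).

```python
def step_extrapolate(last_value):
    """Stepwise extrapolation: the value at the current (or a future) moment
    is taken equal to the last measured value."""
    return last_value

def linear_interpolate(t, t0, x0, t1, x1):
    """Piecewise-linear interpolation between two measurements
    (t0, x0) and (t1, x1), for t0 <= t <= t1."""
    return x0 + (x1 - x0) * (t - t0) / (t1 - t0)
```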

Accounting for dynamic connections

The presence of an inertial sensor can significantly distort the frequency composition of the measured signal; for example, when measuring temperature in furnaces, massive covers are used to protect thermocouples from mechanical damage, which causes a significant dynamic error.

If the static transmission coefficient of the inertial sensor is taken equal to one, then the sensor output must be treated as a delayed copy of the input: at the current moment of time, the signal generated at the sensor output carries information about the value of the parameter at a previous moment of time, shifted by the sensor lag.

2. Algorithms for secondary information processing

The main secondary processing operations include:

· determination of integral and average values ​​of quantities and indicators;

· determining the rate of change of values ​​and indicators;

· determination of quantities and indicators that cannot be measured by the direct method (indirect measurement);

· predicting the values ​​of quantities;

· determination of statistical characteristics of quantities and indicators.

These operations are used for control and for the analysis of plant operation. Of great importance is the determination of the total amounts of matter or energy obtained in production over a certain time interval; examples are the consumption of electricity or fuel per hour, shift, day, and so on. The same purposes are served by determining the average values of the measured quantities, which serve as operational indicators (average temperature, average pressure, etc.).

Let us consider methods of discrete integration of a measured quantity continuously varying over time. The following are numerical methods of integration.

1. Rectangle method.

The essence of the method is to replace the realization x(t) by its stepwise extrapolation over time:

I(k) = T0 Σ from i=1 to k of x(i),

where T0 is the sensor polling period.

In this form the integration algorithm is rarely used, since its implementation requires storing all the values. In practice, the recurrent formula is used:

I(k) = I(k − 1) + T0 x(k).

2. Trapezoid method.

The trapezoid method is more accurate. The recurrence formula is

I(k) = I(k − 1) + T0 [x(k) + x(k − 1)]/2.

The error of the trapezoid method is less than the error of the rectangle method by the amount

ΔI = T0 [x(k) − x(0)]/2.

As calculations show, the error of discrete integration decreases by only about 10% when moving from the rectangle method to the trapezoid method for n > 10, while the computation becomes more complex; therefore, in practice, the rectangle method is used in most cases as simpler and more economical.

The average value is determined through the integral:

x̄ = (1/T) ∫ from 0 to T of x(t) dt ≈ I(k)/(kT0),

where T = kT0 is the integration time.

Differentiation of discretely measured quantities. To analyze the progress of a technological process, it is important to determine not only the numerical values of the parameters, but also the trend of their change at the current time (whether the parameter is increasing or decreasing). In this case, it is necessary to determine the rate of change of the parameter, that is, to carry out differentiation.

The derivative of the error must also be determined when implementing a controller, for example with PD or PID links.

The simplest discrete differentiation algorithm is based on the formula

dx/dt ≈ [x(k) − x(k − 1)]/T0,

where T0 is the sensor polling period.
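The integration, averaging and differentiation formulas of this section can be sketched together; the helper names are invented for this illustration.

```python
def integrate_rect(samples, T0):
    """Rectangle method, recurrent form I(k) = I(k-1) + T0 * x(k)."""
    I = 0.0
    for x in samples:
        I += T0 * x
    return I

def integrate_trap(samples, T0):
    """Trapezoid method, recurrent form I(k) = I(k-1) + T0*(x(k)+x(k-1))/2."""
    I = 0.0
    for k in range(1, len(samples)):
        I += T0 * (samples[k] + samples[k - 1]) / 2
    return I

def mean_value(samples, T0):
    """Average value over the integration interval T = n*T0."""
    return integrate_rect(samples, T0) / (len(samples) * T0)

def derivative(x_k, x_prev, T0):
    """Simplest discrete differentiation: (x(k) - x(k-1)) / T0."""
    return (x_k - x_prev) / T0
```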

3. Algorithms for predicting the values ​​of quantities and indicators

To calculate predicted values, it is necessary to construct a mathematical model of the time series. In short-term forecasting practice, the autoregressive model and the polynomial model are most widely used.

The autoregressive model has the form

x(n) = Σ from k=1 to p of a_k x(n − k),

where a_k are the coefficients and p is the order of the model. The predicted values are calculated using the formula

x̂(n + l) = Σ from k=1 to p of a_k x(n + l − k),

where x(n + l − k) are the measured or previously predicted values of the time series.

This algorithm is easy to implement, but its disadvantage is low accuracy, since the coefficients a_k are not refined as forecasting proceeds. The polynomial model method does not have this drawback:

x̂(n + l) = a1(n) + a2(n) l,

where n is the number of the current step and l is the number of forecast steps.

The estimates of the parameters of this model are updated as each new value of the time series arrives. For these purposes, exponential averages of various orders are used.

1st order: Z1(j) = γ y(j) + (1 − γ) Z1(j − 1);

2nd order: Z2(j) = γ Z1(j) + (1 − γ) Z2(j − 1);

…

r-th order: Zr(j) = γ Z(r−1)(j) + (1 − γ) Zr(j − 1), where γ is the forecasting setting parameter.

The choice of γ is based on the following property: if it is desirable for the forecast to rely mainly on the latest values of the time series, a value of γ close to 1 should be chosen; if earlier values of the time series must also be taken into account, γ should be reduced.

For a first-order (constant) model, the coefficient is calculated as a1(n) = Z1(n).

For a second-order (linear) model, the coefficients are calculated as

a1(n) = 2Z1(n) − Z2(n),  a2(n) = [γ/(1 − γ)][Z1(n) − Z2(n)].

The coefficients in the polynomial model are thus calculated through exponential averages of the 1st and 2nd orders; higher-order models are rarely used, because the quality of the forecast increases only slightly.
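A sketch of the forecast with a linear polynomial model whose coefficients come from first- and second-order exponential averages. The coefficient formulas and the initialization Z1 = Z2 = y(0) follow Brown's double exponential smoothing, assumed here to match the lecture's method.

```python
def brown_forecast(series, gamma, l):
    """Forecast l steps ahead with a linear model estimated from
    exponential averages (Brown's double exponential smoothing):
        Z1(j) = g*y(j) + (1-g)*Z1(j-1)
        Z2(j) = g*Z1(j) + (1-g)*Z2(j-1)
        a1 = 2*Z1 - Z2,  a2 = g/(1-g) * (Z1 - Z2)
        forecast = a1 + a2 * l"""
    z1 = z2 = series[0]          # common initialization choice (assumption)
    for y in series[1:]:
        z1 = gamma * y + (1 - gamma) * z1
        z2 = gamma * z1 + (1 - gamma) * z2
    a1 = 2 * z1 - z2
    a2 = gamma / (1 - gamma) * (z1 - z2)
    return a1 + a2 * l
```

For a constant series the trend estimate a2 vanishes and the forecast equals the level.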

Determination of statistical indicators of measured quantities

Knowledge of statistical characteristics is necessary to assess the quality of manufactured products and to determine the moment when the technological process is disrupted, since at that moment the values of the statistical characteristics of the measured quantities change. A feature of determining these characteristics is the use of recurrent formulas.

Mathematical expectation: non-recurrent formula m(n) = (1/n) Σ from i=1 to n of x(i); recurrent formula m(n) = m(n − 1) + [x(n) − m(n − 1)]/n.

Variance: non-recurrent formula D(n) = (1/n) Σ from i=1 to n of [x(i) − m(n)]²; recurrent formula D(n) = {(n − 1) D(n − 1) + [x(n) − m(n − 1)][x(n) − m(n)]}/n.
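The recurrent formulas can be sketched as a single pass over the data, updating both estimates per sample without storing the history (the product form of the variance update is the numerically stable Welford variant).

```python
def recurrent_stats(samples):
    """Recurrent estimates of expectation and variance:
        m(n)  = m(n-1) + (x(n) - m(n-1)) / n
        M2(n) = M2(n-1) + (x(n) - m(n-1)) * (x(n) - m(n))
        D(n)  = M2(n) / n
    Each new sample updates the estimates in O(1) time and memory."""
    m = 0.0
    M2 = 0.0
    for n, x in enumerate(samples, start=1):
        d = x - m
        m += d / n
        M2 += d * (x - m)
    return m, M2 / len(samples)
```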

4. Control algorithms

Control is a broad concept: it includes measuring quantities and indicators and comparing them with permissible limits.

Let us consider general and particular formulations of the problem of determining quantities and indicators.

General setting:

A set of quantities and indicators is specified that need to be determined in the control object, together with the required accuracy of their estimation. There is a set of sensors that are installed or can be installed on the automated object. For each individual indicator, it is required to find a group of sensors, their polling frequency and algorithms for processing the signals received from them, such that the value of the indicator is determined with the required accuracy.

The accuracy of estimating the required value is determined by the accuracy of the operation of the measuring circuits (sensor, converter), the frequency of their interrogation and the accuracy of the computational processing of the measuring signals into the desired value.

Particular formulations:

1. Determining the current value of a quantity directly by measuring it with an automatic device or sensor. Two cases are possible:

– when the required measurement accuracy is much lower than the accuracy of the sensor or converter;

– when the required measurement accuracy is higher than the accuracy of the sensor or converter.

The second case is more general. For control, it is necessary to find algorithms for converting the sensor signal that would increase the accuracy to the required value. To do this, it is necessary to analyze the existing error and identify its individual components, and then compensate for them using special algorithms.

Depending on the causes of the errors, the following error-reducing algorithms are used:

Analytical calibration of sensors.

If the error is caused by the nonlinearity of the static characteristic of the sensor.

Filtering the signal from interference.

If there is a source of significant interference within an object or sensor that interferes with the desired signal.

Extrapolation and interpolation

If a significant error in estimating a value is caused by a large value of the survey period.

Sensor dynamic error correction

If the sensor is an inertial link, and the measured value changes over time at a significant speed.

2. Determining the value of a quantity calculated from the signals measured by the sensor.

For example, estimating the total value, average value, speed, etc. In this case, it is necessary to select rational algorithms for processing the measured signal.

In addition, the use of analytical calibration, filtering and similar algorithms is not excluded here.

This task is most difficult in cases where the nature of the relationship between the measured signals and the desired quantity is not known (indirect measurement). In this case, it is necessary to analyze the equations of material and heat balance, which make it possible to identify this relationship or use regression analysis.

Determining the polling period for sensors of measured values

The polling period significantly affects the accuracy of control. Let us consider a method for determining the polling period based on the autocorrelation function.

Let the permissible root-mean-square error δ of determining the quantity x(t) be given. We need to find the time interval T0 between measurements at which the error in determining the value does not exceed δ. The technique is based on the dependence of the error on the autocorrelation function:

σ²(τ) = 2[Rx(0) − Rx(τ)],

where Rx(τ) is the autocorrelation function, estimated as

Rx(kT0) = (1/(n − k)) Σ from i=1 to n−k of [x(i) − x̄][x(i + k) − x̄],

where n is the sample size from which the autocorrelation function is determined.

The essence of the technique is as follows:

1. Data is collected with a small base polling period T0 (as small as possible), at 30-50 polling points. The obtained values x(1), x(2), … are entered into a table whose rows correspond to the moments of time T0, 2T0, 3T0, …, nT0 and whose columns contain the measured value x(i) and the deviations for the time shifts T0, 2T0, 3T0, …:

Δk(i) = x(i + k) − x(i), where i is the table row number and k is the column (shift) number.

The error value for each shift is estimated as

σ²(kT0) = (1/(n − k)) Σ over i of [x(i + k) − x(i)]².

2. A graph of the error σ(kT0) versus the polling period kT0 is plotted.

3. From the graph, the polling period corresponding to the permissible error δ is determined.

The value of polling periods for sensors used in practice.

· Consumption: 0.1 – 2s.

· Level: ≈5s.

· Pressure: 0.5 – 10s.

· Temperature: 5 – 30 s.

· Concentration: ≈20s.

Types of control

The general function of automatic control is to record the progress of the technological process over time and continuously (periodically) compare the process parameters with the specified ones.

The following types of control are distinguished:

1. Control of technological processes in normal mode.

2. Quality control of manufactured products.

3. Control of the process when it reaches the rated power level.

4. Monitoring the serviceability of equipment.

5. Control of switching the equipment on and off.

6. Equipment performance monitoring.

7. Control of the process in emergency modes.

The main control operation is that for each controlled parameter x_i(t) at the moment of time t it is necessary to check the fulfillment of the condition m_i ≤ x_i(t) ≤ M_i, i = 1, …, n, where n is the number of parameters, m_i is the lower permissible limit of change of the i-th parameter, and M_i is the upper permissible limit.

All controlled parameters can be divided into three groups:

1. Parameters requiring continuous monitoring.

2. Parameters requiring periodic monitoring.

3. Free process indicators.

Due to the discrete nature of the measurement process in automatic systems, truly continuous monitoring is impossible, and the question arises of choosing the sampling step (sampling period) T0.

This step must be selected from the condition that the maximum change of the parameter over the period T0 does not exceed some specified positive value δ. Taking this into account, continuous control reduces to checking the inequality

m_i + δ ≤ x_i(kT0) ≤ M_i − δ.

The parameters requiring periodic monitoring include those parameters for which it is permissible to exceed the established limits at some moments of time. For such parameters, the check is performed at discrete moments counted from t0, the start of time counting.

Free process indicators are functions of the parameters that need to be monitored: y = φ(x1, …, xn). Typically, in practice, free indicators require periodic monitoring.
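The limit check, with the margin δ used for quasi-continuous monitoring, can be sketched as follows; the convention that δ symmetrically narrows the band is an assumption.

```python
def monitor(x, m_i, M_i, delta=0.0):
    """Tolerance monitoring of a parameter: checks m_i <= x <= M_i.
    A safety margin delta narrows the band so the parameter cannot cross
    a limit between two polls (quasi-continuous monitoring)."""
    return (m_i + delta) <= x <= (M_i - delta)
```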

Control of the technological process in normal mode.

Depending on which group the technological parameter belongs to, appropriate monitoring is carried out (continuous or periodic).

If the established limits are exceeded, the time, the number of the parameter or relation whose limit was violated, and the amount of deviation from the limit (with its sign) are recorded. In addition, the operator running the process must be able to check the current value of any technological parameter; this is called on-demand control. Thus, control of the technology in normal mode comes down to determining the values of quantities and comparing them with pre-established limits.

Quality control of manufactured products.

This type of control is carried out using the same methods, however, in most cases, quality indicators require periodic monitoring.

Control of the process when it reaches the rated power level.

The main objective is to ensure safety, so the limit values ​​may differ from the limit value in normal operation. For these purposes, a special subroutine is used.

Monitoring the serviceability of equipment.

When equipment fails, manual or automatic switching-on of backup equipment is provided.

Equipment on/off control is carried out using discrete signals characterizing the current state of the equipment. For example, when a tank becomes full, it is disconnected and an empty tank is connected.

Equipment performance monitoring carried out on the basis of technical and economic indicators.

Control over the process in emergency modes.

Automatic alarm, protection and blocking are provided. It is possible to recognize emergency situations and automatically recover from such situations.






