Static and dynamic characteristics of measuring instruments. Methods for identifying and correcting DAC errors


The most important point characterizing both DACs and ADCs is that their inputs or outputs are digital, meaning the analog signal is quantized in amplitude. An N-bit word represents one of 2^N possible states, so an N-bit DAC (with a fixed voltage reference) can produce only 2^N different analog values, and an N-bit ADC can produce only 2^N different binary codes. Analog signals can be represented as voltage or current.

The resolution of an ADC or DAC can be expressed in several different ways: LSB weight, ppm of full scale (ppm FS), millivolts (mV), etc. Different devices (even from the same chip manufacturer) are specified differently, so to compare devices properly, ADC and DAC users must be able to convert between these specifications. Some values of the least significant bit (LSB) are given in Table 1.

Table 1. Quantization: Least Significant Bit (LSB) value

Resolution N | 2^N      | LSB (FS = 10 V)  | ppm FS | % FS     | dB FS
2-bit        | 4        | 2.5 V            | 250000 | 25       | -12
4-bit        | 16       | 625 mV           | 62500  | 6.25     | -24
6-bit        | 64       | 156 mV           | 15625  | 1.56     | -36
8-bit        | 256      | 39.1 mV          | 3906   | 0.39     | -48
10-bit       | 1024     | 9.77 mV (10 mV)  | 977    | 0.098    | -60
12-bit       | 4096     | 2.44 mV          | 244    | 0.024    | -72
14-bit       | 16384    | 610 µV           | 61     | 0.0061   | -84
16-bit       | 65536    | 153 µV           | 15     | 0.0015   | -96
18-bit       | 262144   | 38 µV            | 4      | 0.0004   | -108
20-bit       | 1048576  | 9.54 µV (10 µV)  | 1      | 0.0001   | -120
22-bit       | 4194304  | 2.38 µV          | 0.24   | 0.000024 | -132
24-bit       | 16777216 | 596 nV*          | 0.06   | 0.000006 | -144
*600 nV is the thermal noise of a 2.2 kOhm resistor at 25 °C in a 10 kHz bandwidth. Easy to remember: 10-bit quantization at a full scale of FS = 10 V corresponds to LSB = 10 mV, an accuracy of 1000 ppm or 0.1%. All other values can be obtained by multiplying by powers of 2.
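The entries in Table 1 follow directly from the definitions above and can be reproduced with a short helper (an illustrative sketch; the function name and the rounding shown are ours):

```python
import math

def lsb_metrics(n_bits, fs_volts=10.0):
    """Quantization figures for an n-bit converter with full scale fs_volts:
    LSB weight (V), LSB as ppm of FS, LSB as % of FS, and LSB in dB re FS."""
    states = 2 ** n_bits                   # number of distinct codes
    lsb = fs_volts / states                # weight of one LSB
    ppm_fs = 1e6 / states                  # parts-per-million of full scale
    pct_fs = 100.0 / states                # percent of full scale
    db_fs = 20 * math.log10(1.0 / states)  # level of one LSB relative to FS
    return lsb, ppm_fs, pct_fs, db_fs

# 10-bit, FS = 10 V: LSB ~ 9.77 mV, ~977 ppm, ~0.098 %, ~ -60 dB
```

Each additional bit halves the LSB weight, which is why every row of the table can be derived from the 10-bit row by powers of 2.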

Before examining the internal structure of ADCs and DACs, it is necessary to discuss their expected performance and most important parameters. Let us look at the definitions of errors and the technical requirements for analog-to-digital and digital-to-analog converters. This is essential for understanding the strengths and weaknesses of ADCs and DACs built on different principles.

The first data converters were intended for measurement and control applications, where the exact timing of the input-signal conversion was usually unimportant and data rates were low. In such systems the DC characteristics of the A/D and D/A converters matter, while sampling-time and AC characteristics do not.

Today, many, if not most, ADCs and DACs are used in systems that sample and reconstruct audio, video and radio signals, where the AC characteristics are decisive for the operation of the entire device, while the DC characteristics of the converters may be unimportant.

Figure 1 shows the transfer function of an ideal unipolar three-bit digital-to-analog converter. Both the input and the output are quantized, so the graph contains eight separate points. However this function is approximated, it is important to remember that the actual transfer characteristic of a digital-to-analog converter is not a continuous line but a set of discrete points.


Figure 1. Transfer function of an ideal three-bit digital-to-analog converter.
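The eight points of Figure 1 can be generated with a minimal model of the ideal unipolar DAC (a sketch; the convention that full scale equals the reference, so the all-ones code gives FS minus one LSB, follows the text):

```python
def ideal_dac(code, n_bits=3, fs=1.0):
    """Ideal unipolar DAC: each code maps to a discrete analog point,
    code/2**n_bits of full scale; the all-ones code gives FS - 1 LSB."""
    if not 0 <= code < 2 ** n_bits:
        raise ValueError("code out of range for a %d-bit DAC" % n_bits)
    return fs * code / 2 ** n_bits

# The transfer function is these 8 separate points, not a continuous line:
points = [ideal_dac(c) for c in range(8)]
```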

Figure 2 shows the transfer function of an ideal unsigned three-bit analog-to-digital converter. Note that the analog signal at the ADC input is not quantized, but the output is the result of quantizing that signal. The transfer characteristic of an analog-to-digital converter consists of eight horizontal segments; when analyzing the offset, gain and linearity of the ADC, we consider the line connecting the midpoints of these segments.



Figure 2. Transfer function of an ideal 3-bit ADC.

In both cases discussed, the full digital scale (all "1s") corresponds to the full analog scale, which coincides with the reference voltage or a voltage dependent on it. Therefore, a digital code represents a normalized relationship between an analog signal and a reference voltage.

In an ideal analog-to-digital converter, the transition to the next digital code occurs half an LSB away from the code center. Since the analog signal at the ADC input can take any value while the output digital signal is discrete, an error arises between the actual analog input and the value represented by the output code. This error can reach half an LSB and is known as quantization error, or quantization uncertainty. In devices processing AC signals, this quantization error manifests itself as quantization noise.
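The half-LSB bound on quantization error can be checked numerically (a sketch; the mid-tread rounding convention and names are ours):

```python
def ideal_adc(v, n_bits=3, fs=1.0):
    """Ideal unipolar ADC: code transitions sit half an LSB either side of
    each code center, so rounding to the nearest center models quantization."""
    lsb = fs / 2 ** n_bits
    code = int(v / lsb + 0.5)              # nearest code center (v >= 0)
    return max(0, min(code, 2 ** n_bits - 1))

# Quantization error never exceeds half an LSB (below the clipping region):
lsb = 1.0 / 8
worst = max(abs(ideal_adc(i / 1000) * lsb - i / 1000) for i in range(938))
```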

The examples in Figures 1 and 2 show the transfer characteristics of unsigned converters operating with signals of only one polarity. These are the simplest converters, but bipolar converters are more useful in real applications.

Two types of bipolar converters are currently in use. The simpler is an ordinary unipolar converter whose analog input carries a DC component that offsets the input signal by an amount equal to the weight of the most significant bit (MSB). Many converters can switch this offset voltage or current, allowing the converter to be used in either unipolar or bipolar mode.
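The offset scheme just described can be sketched as follows (illustrative names; the FS = 2 V span is an assumption of the sketch):

```python
def bipolar_offset_binary(v, n_bits=3, fs=2.0):
    """Bipolar conversion via a unipolar core: the input (-fs/2 .. +fs/2) is
    shifted up by half scale -- a DC offset equal to the MSB weight -- and
    then converted as an ordinary unipolar signal (offset-binary coding)."""
    shifted = v + fs / 2                   # apply the MSB-weight DC offset
    lsb = fs / 2 ** n_bits
    code = int(shifted / lsb + 0.5)        # round to nearest code center
    return max(0, min(code, 2 ** n_bits - 1))
```

With this coding, zero input lands at mid-scale (binary 100 for three bits), negative full scale at all zeros, and positive full scale at all ones.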

The other, more complex type of converter is the sign-magnitude converter, which in addition to N information bits has an extra bit indicating the sign of the analog signal. Sign-magnitude analog-to-digital converters are used rather rarely, mainly in digital voltmeters.

There are four types of DC errors in ADCs and DACs: offset error, gain error, and two types of linearity error. The offset and gain errors of ADCs and DACs are similar to those of conventional amplifiers. Figure 3 shows the case of bipolar input signals (note that the offset error and the zero error, which are identical in amplifiers and in unipolar ADCs and DACs, differ in bipolar converters and must be considered separately).



Figure 3. Converter zero-offset accuracy and gain accuracy

The transfer characteristic of both a DAC and an ADC can be expressed as D = K + GA, where D is the digital code, A is the analog signal, and K and G are constants. In a unipolar converter K is zero; in a bipolar converter with offset, K equals the MSB weight. The offset error is the amount by which the actual value of K differs from its ideal value; the gain error is the amount by which G differs from its ideal value.

In general, the gain error can be expressed as the difference between two gain coefficients, stated as a percentage. This difference can be viewed as the contribution of the gain error (in mV or LSBs) to the total error at maximum input signal. The user is usually given the means to trim these errors out. As with an amplifier, the offset is adjusted first with zero input, and then the gain is adjusted with an input close to full scale. The trimming algorithm for bipolar converters is more complex.
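A two-point measurement against the model D = K + GA separates the two errors; a minimal sketch (function and variable names are ours):

```python
def offset_and_gain_error(out_at_zero, out_at_fs, ideal_fs):
    """From the model D = K + G*A: the reading at zero input gives the
    offset K directly; the slope between the zero and full-scale readings
    gives the actual gain G. Ideal values: K = 0, G = 1 (unipolar case)."""
    offset_error = out_at_zero                       # K minus the ideal 0
    gain = (out_at_fs - out_at_zero) / ideal_fs      # actual normalized G
    gain_error_percent = (gain - 1.0) * 100.0
    return offset_error, gain_error_percent

# Trim order matters: null the offset at zero input first, then trim the
# gain near full scale, so the two adjustments do not interact.
```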

The integral nonlinearity (INL) of a DAC or ADC is similar to amplifier nonlinearity and is defined as the maximum deviation of the actual transfer characteristic of the converter from a straight line. It is generally expressed as a percentage of full scale (but may be given in LSBs). There are two common methods of approximating the transfer characteristic: the end-point method and the best-fit straight-line method (see Figure 4).



Figure 4. Methods for measuring integral linearity error

In the end-point method, the deviation of an arbitrary point of the characteristic (after gain correction) from a straight line drawn from the origin is measured. This is how Analog Devices, Inc. measures the integral nonlinearity of converters intended for measurement and control applications (since the magnitude of the error depends on the deviation from the ideal characteristic, not from an arbitrary "best fit").

The best-fit straight-line method provides a more adequate prediction of distortion in applications dealing with AC signals, and it yields lower (better-looking) nonlinearity figures. A straight line is fitted through the device's transfer characteristic using standard curve-fitting methods, and the maximum deviation from this line is then measured. Integral nonlinearity measured this way is typically only about 50% of the value estimated by the end-point method. This makes the method attractive for quoting impressive figures in a specification, but less useful for analyzing real-world error budgets. For AC applications it is better to specify harmonic distortion than DC nonlinearity, so the best-fit method is rarely needed for characterizing converter nonlinearity.

Another type of converter nonlinearity is differential nonlinearity (DNL). It relates to the code transitions of the converter. Ideally, a change of one LSB in the digital code corresponds exactly to a one-LSB change in the analog signal: in a DAC, changing the code by one LSB should change the analog output by exactly one LSB, while in an ADC, moving from one digital code to the next should correspond to a change of exactly one LSB at the analog input.

When the analog change corresponding to a one-LSB code change is larger or smaller than one LSB, a differential nonlinearity (DNL) error is present. The DNL of a converter is usually specified as the maximum differential nonlinearity found at any transition.
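DNL per transition can be computed directly from measured code levels (a sketch; the average step is taken as the LSB reference, one common convention):

```python
def dnl_per_transition(levels):
    """Differential nonlinearity: each step between adjacent code levels
    should be exactly 1 LSB; report (step/LSB - 1) for every transition."""
    lsb = (levels[-1] - levels[0]) / (len(levels) - 1)   # average step
    return [(b - a) / lsb - 1.0 for a, b in zip(levels, levels[1:])]

# Example: one wide step (+0.5 LSB) followed by a narrow one (-0.75 LSB)
steps = dnl_per_transition([0, 1, 2, 3.5, 3.75, 5, 6, 7])
```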

If the DAC's differential nonlinearity is less than -1 LSB at any transition (see Figure 5), the DAC is said to be non-monotonic, and its transfer characteristic contains one or more local maxima or minima. A differential nonlinearity greater than +1 LSB does not violate monotonicity but is also undesirable. In many DAC applications (especially closed-loop systems, where non-monotonicity can turn negative feedback into positive feedback) monotonicity is essential. DAC monotonicity is often explicitly stated in the datasheet, although if the differential nonlinearity is guaranteed to be less than 1 LSB (i.e. |DNL| < 1 LSB) the device will be monotonic even if this is not explicitly stated.

An ADC can also be non-monotonic, but the most common manifestation of DNL in an ADC is missing codes (see Figure 6). Missing codes (or non-monotonicity) in an ADC are just as undesirable as non-monotonicity in a DAC, and again they arise when the DNL exceeds 1 LSB.
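Both defects are easy to test for in software (a sketch; a real test would sweep the ADC input with a precise ramp and record the output codes):

```python
def is_monotonic(dac_outputs):
    """A DAC is monotonic if its analog output never decreases as the
    digital code increases."""
    return all(b >= a for a, b in zip(dac_outputs, dac_outputs[1:]))

def missing_codes(observed_codes, n_bits):
    """Codes never produced by the ADC over a dense input sweep."""
    seen = set(observed_codes)
    return [c for c in range(2 ** n_bits) if c not in seen]
```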



Figure 5. Non-ideal 3-bit DAC transfer function


Figure 6. Non-ideal 3-bit ADC transfer function (missing code)

Detecting missing codes is harder than detecting non-monotonicity. All ADCs exhibit some code-transition noise, illustrated in Figure 7 (think of this noise as the last digit of a digital voltmeter flickering between adjacent values). As resolution increases, the range of input signal corresponding to the transition noise can reach or even exceed one LSB. In that case, especially in combination with a negative DNL error, it can happen that for some (or even all) codes transition noise is present over the entire range of input values. There may then be codes for which no input value guarantees that code at the output, although some range of input may produce it occasionally.



Figure 7. Combined effects of code transition noise and differential nonlinearity (DNL)

For a low-resolution ADC, the no-missing-codes condition can be defined as a combination of transition noise and differential nonlinearity that guarantees some level (say, 0.2 LSB) of noise-free code for every code. This is not achievable, however, at the high resolutions of today's sigma-delta ADCs, or even at the lower resolutions of wide-bandwidth ADCs. In these cases the manufacturer must define noise levels and resolution in some other way. Which method is used matters less than that the specification clearly defines the method and the expected performance.
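One common way manufacturers express this "other way" is noise-free code resolution: the resolution left after peak-to-peak transition noise is accounted for (a sketch; the 256-LSB figure below is purely illustrative):

```python
import math

def noise_free_bits(n_bits, noise_pp_lsb):
    """Noise-free code resolution: full resolution minus the bits consumed
    by peak-to-peak input-referred noise expressed in LSBs."""
    return n_bits - math.log2(max(noise_pp_lsb, 1.0))

# e.g. a 24-bit converter with 256 LSB of peak-to-peak transition noise
nf = noise_free_bits(24, 256)   # 16 noise-free bits remain
```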



  • Contents
  • Introduction
  • 1. Technical specifications
  • 2. Development and description of a system of measuring channels to determine static and dynamic characteristics
  • 2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments
  • 2.2 Development of complexes of standardized metrological characteristics
  • 3. DEVELOPMENT OF METROLOGICAL MEASURING INSTRUMENTS
  • 3.1 Development of metrological reliability of measuring instruments
  • 3.2 Changes in metrological characteristics of measuring instruments during operation
  • 3.3 Development of models for standardizing metrological characteristics
  • 4. CLASSIFICATION OF SIGNALS
  • 5. Channel development
  • 5.1 Development of a channel model
  • 5.2 Development of a measuring channel model
  • Literature

Introduction

One of the main forms of state metrological supervision and departmental control aimed at ensuring the uniformity of measurements in the country is, as mentioned earlier, the verification of measuring instruments. Instruments released from production or repair, received from abroad, and those in operation or storage are subject to verification. The basic requirements for organizing and carrying out verification are established by GOST "GSI. Verification of measuring instruments. Organization and procedure." The term "verification" was introduced by GOST "GSI. Metrology. Terms and definitions" as "the determination by a metrological body of the errors of a measuring instrument and the establishment of its suitability for use." In some cases, instead of determining error values, verification checks whether the error lies within acceptable limits. Thus, verification of measuring instruments is carried out to establish their suitability for use; instruments whose verification confirms compliance with the metrological and technical requirements for the given SI are recognized as fit for use.

Measuring instruments are subjected to primary, periodic, extraordinary, inspection and expert verification. Primary verification is performed on instruments released from production or repair, and on imported instruments. Instruments in operation or storage undergo periodic verification at calibration intervals established to ensure the instrument remains fit for use between verifications. Inspection verification determines suitability for use in the course of state supervision and departmental metrological control over the condition and use of measuring instruments.

Expert verification is performed when disputes arise regarding the metrological characteristics (MX), serviceability or suitability for use of measuring instruments. Metrological certification is a set of activities to study the metrological characteristics and properties of a measuring instrument in order to decide whether it is suitable for use as a reference instrument. Metrological certification usually follows a special program of work, whose main stages are: experimental determination of metrological characteristics; analysis of the causes of failures; establishment of a verification interval; etc. Metrological certification of instruments used as references is carried out before commissioning, after repair, and when the category of the reference instrument changes. The results of metrological certification are documented appropriately (protocols, certificates, notices of unsuitability of the measuring instrument). The characteristics of the types of measuring instruments used determine the methods of their verification.

In the practice of calibration laboratories, various methods for calibrating measuring instruments are known, which for unification are reduced to the following:

* direct comparison using a comparator (i.e. using comparison tools);

* direct measurement method;

* method of indirect measurements;

* method of independent verification (i.e. verification of measuring instruments of relative values, which does not require transfer of unit sizes).

Verification of measuring systems is carried out by state metrological bodies known collectively as the State Metrological Service. The activities of the State Metrological Service are aimed at solving the scientific and technical problems of metrology and at implementing the necessary legislative and control functions, such as: establishing units of physical quantities approved for use; creating reference measuring instruments, methods and measuring instruments of the highest accuracy; developing all-Union verification schemes; determining physical constants; developing measurement theory, error-estimation methods, etc. The tasks facing the State Metrological Service are solved with the help of the State System for Ensuring the Uniformity of Measurements (GSI). The State System for Ensuring the Uniformity of Measurements is the regulatory and legal basis for the metrological support of scientific and practical activities in terms of assessing and ensuring measurement accuracy. It is a set of regulatory and technical documents that establish a unified nomenclature, methods for presenting and assessing the metrological characteristics of measuring instruments, rules for the standardization and certification of measurements and the registration of their results, and requirements for state tests, verification and examination of measuring instruments. The main regulatory and technical documents of the system are state standards. On the basis of these fundamental standards, regulatory and technical documents are developed that specify the general requirements for particular industries, measurement areas and measurement methods.

1. Technical specifications

1.1 Development and description of a system of measuring channels to determine static and dynamic characteristics.

1.2 Materials of scientific and methodological developments of the ISIT department

1.3 Purpose and objectives

1.3.1 This system is designed to determine the characteristics of the instrumental components of measurement errors.

1.3.2 Develop a measuring information system that automatically acquires the necessary information, processes it, and outputs it in the required form.

1.4 System requirements

1.4.1 The rules for selecting sets of standardized metrological characteristics for measuring instruments and the methods for their standardization are determined by the GOST 8.009-84 standard.

1.4.2 Set of standardized metrological characteristics:

1. measures and digital-to-analog converters;

2. measuring and recording instruments;

3. analog and analog-to-digital measuring converters.

1.4.3 Instrumental error according to the first model of normalized metrological characteristics includes, among its components:

- the random component;

- the dynamic error.

1.4.4 Instrumental error according to the second model of normalized metrological characteristics:

where the main error of the SI appears without being broken into components.

1.4.5 The models of standardized metrological characteristics comply with GOST 8.009-84 on the formation of complexes of standardized metrological characteristics.

2. Development and description of a system of measuring channels to determine static and dynamic characteristics

2.1 Development of the principle of selection and standardization of static and dynamic characteristics of measuring channels of measuring instruments

When using SI, it is fundamentally important to know the degree to which the information being measured, contained in the output signal, corresponds to its true value. For this purpose, certain metrological characteristics (MX) are introduced and standardized for each SI.

Metrological characteristics are characteristics of the properties of a measuring instrument that influence the measurement result and its errors. Characteristics established by regulatory and technical documents are called standardized, and those determined experimentally are called actual. The MX nomenclature, the rules for selecting standardized MX complexes for measuring instruments and the methods for their standardization are determined by the GOST 8.009-84 standard "GSI. Standardized metrological characteristics of measuring instruments."

Metrological characteristics of SI allow one to:

- determine measurement results and calculate estimates of the characteristics of the instrumental component of measurement error under real conditions of SI application;

- calculate the MX of channels of measuring systems consisting of a number of measuring instruments with known MX;

- make an optimal choice of SI providing the required quality of measurements under known conditions of use;

- compare SIs of various types, taking into account the conditions of use.

When developing principles for the selection and standardization of measuring instruments, it is necessary to adhere to a number of provisions outlined below.

1. The main condition for the possibility of solving all of the listed problems is the presence of an unambiguous connection between normalized MX and instrumental errors. This connection is established through a mathematical model of the instrumental component of the error, in which the normalized MX must be arguments. It is important that the MX nomenclature and methods of expressing them are optimal. Experience in operating various SIs shows that it is advisable to normalize the MX complex, which, on the one hand, should not be very large, and on the other hand, each standardized MX must reflect the specific properties of the SI and, if necessary, can be controlled.

Standardization of MX measuring instruments should be carried out on the basis of uniform theoretical premises. This is due to the fact that measuring instruments based on different principles can participate in measurement processes.

Normalized MX must be expressed in such a form that with their help it is possible to reasonably solve almost any measurement problems and at the same time it is quite simple to control the measuring instruments for compliance with these characteristics.

Normalized MX must provide the possibility of statistical integration and summation of the components of the instrumental measurement error.

In general, it can be defined as the sum (combination) of the following error components:

- Δ0(t), due to the difference between the actual conversion function under normal conditions and the nominal function assigned by the relevant documents to this type of SI; this error is called the main error;

- Δadd, caused by the reaction of the SI to changes in the external influencing quantities and in the informative parameters of the input signal relative to their nominal values; this error is called additional;

- Δdyn, caused by the reaction of the SI to the rate (frequency) of change of the input signal; this component, called the dynamic error, depends both on the dynamic properties of the measuring instrument and on the frequency spectrum of the input signal;

- Δint, caused by the interaction of the measuring instrument with the measurement object or with other measuring instruments connected in series with it in the measuring system; this error depends on the parameters of the SI input circuit and of the output circuit of the measurement object.

Thus, the instrumental component of the SI error can be represented as

Δinstr = Δ0(t) * Δadd * Δdyn * Δint,

where * is the symbol for the statistical combination of components.

The first two components represent the static error of the SI, and the third is the dynamic one. Of these, only the main error is determined by the properties of SI. Additional and dynamic errors depend both on the properties of the SI itself and on some other reasons (external conditions, parameters of the measuring signal, etc.).

The requirements for universality and simplicity of the statistical combination of the components of the instrumental error determine the need for their statistical independence - non-correlation. However, the assumption of the independence of these components is not always true.
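Under the non-correlation assumption just stated, statistical combination reduces to root-sum-square addition; a minimal sketch:

```python
import math

def combine_rss(components):
    """Statistical combination of independent (non-correlated) error
    components: the square root of the sum of squares."""
    return math.sqrt(sum(c * c for c in components))

# Independent components combine to less than their arithmetic sum:
combined = combine_rss([0.1, 0.2, 0.2])   # 0.3, versus 0.5 arithmetically
```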

Isolating the dynamic error of the SI as a summed component is permissible only in a particular, but very common case, when the SI can be considered a linear dynamic link and when the error is a very small value compared to the output signal. A dynamic link is considered linear if it is described by linear differential equations with constant coefficients. For SI, which are essentially nonlinear links, separating static and dynamic errors into separately summable components is unacceptable.

Normalized MX must be invariant to the conditions of use and operating mode of the SI and reflect only its properties.

The choice of MX must be carried out so that the user is able to calculate the characteristics of the SI under real operating conditions.

Standardized MX, given in the regulatory and technical documentation, reflect the properties not of a single instance of measuring instruments, but of the entire set of measuring instruments of this type, i.e. are nominal. A type is understood as a set of measuring instruments that have the same purpose, layout and design and satisfy the same requirements regulated in the technical specifications.

The metrological characteristics of an individual SI of this type can be any within the range of nominal MX values. It follows that the MX of a measuring instrument of this type should be described as a non-stationary random process. A mathematically strict account of this circumstance requires normalization of not only the limits of MX as random variables, but also their time dependence (i.e., autocorrelation functions). This will lead to an extremely complex standardization system and the practical impossibility of controlling MX, since it would have to be carried out at strictly defined intervals. As a result, a simplified standardization system was adopted, providing a reasonable compromise between mathematical rigor and the necessary practical simplicity. In the adopted system, low-frequency changes in the random components of the error, the period of which is commensurate with the duration of the verification interval, are not taken into account when normalizing MX. They determine the reliability indicators of measuring instruments, determine the choice of rational calibration intervals and other similar characteristics. High-frequency changes in the random components of the error, the correlation intervals of which are commensurate with the duration of the measurement process, must be taken into account by normalizing, for example, their autocorrelation functions.

2.2 Development of complexes of standardized metrological characteristics

The large variety of SI groups makes it impossible to regulate specific MX complexes for each of these groups in one regulatory document. At the same time, all SI cannot be characterized by a single set of normalized MX, even if it is presented in the most general form.

The main feature of dividing measuring instruments into groups is the commonality of the complex of standardized MXs necessary to determine the characteristic instrumental components of measurement errors. In this case, it is advisable to divide all measuring instruments into three large groups, presented according to the degree of complexity of MX: 1) measures and digital-to-analog converters; 2) measuring and recording instruments; 3) analog and analog-to-digital measuring converters.

When establishing a set of standardized MX, the following model of the instrumental component of measurement error was adopted:

Δinstr = ΔSI * Δint,

where the symbol * denotes the combination of the SI error under real conditions of use, ΔSI, with the error component Δint caused by the interaction of the SI with the measurement object. By combining we mean applying to the components some functional that allows the error caused by their joint influence to be calculated. In each case the functional is determined from the properties of the particular SI.

The entire MX population can be divided into two large groups. In the first, the instrumental component of the error is determined by statistically combining its individual components; the confidence interval containing the instrumental error is then determined with a given confidence probability less than one. For MX of this group, the following error model under real conditions of application is adopted (model 1):

where:

- the systematic component;

- the random component;

- the random component due to hysteresis;

- the combined additional errors;

- the dynamic error;

- L is the number of additional errors, equal to the number of quantities that significantly affect the error under real conditions.

Depending on the properties of a given type of SI and the operating conditions of its use, individual components may be missing.

The first model is selected if it is accepted that the error may occasionally exceed the value calculated from the standardized characteristics. In this case, the MX complex makes it possible to calculate point and interval characteristics within which the instrumental component of the measurement error lies with any given confidence probability close to, but less than, unity.

For the second MX group, statistical aggregation of the components is not applied. Such measuring instruments include laboratory instruments, as well as most reference instruments whose use does not involve repeated observations with averaging of results. The instrumental error in this case is defined as the arithmetic sum of the largest possible values of its components. This estimate gives a confidence interval with probability equal to one; it is an upper-bound estimate of the error interval, covering all possible values, including very rarely realized ones. Such estimates significantly tighten the requirements for MX and are justified only for the most critical measurements, for example those related to human health and life or where incorrect measurements could have catastrophic consequences.

Arithmetic summation of the largest possible values of the components of the instrumental error leads to the inclusion in the complex of normalized MX of permissible-error limits rather than statistical moments. This is also acceptable for measuring instruments with no more than three components, each determined by a separate standardized MX. In this case, the calculated estimates of instrumental error obtained by arithmetic combination of the largest values of the components and by statistical summation of their characteristics (with a probability lower than, but quite close to, unity) will differ little in practice. For this case, SI error model 2 is:

Here the main error of the SI appears without being broken into components (unlike model 1).

3. DEVELOPMENT OF METROLOGICAL MEASURING INSTRUMENTS

3.1 Development of metrological reliability of measuring instruments.

Model 2 is applicable only for those SIs whose random component is negligible.

The issues of choosing MX are regulated in sufficient detail in GOST 8.009-84, which lists the characteristics that should be standardized for the above-mentioned SI groups. This list may be adjusted for a specific measuring instrument, taking into account its features and operating conditions. It is important not to normalize those MX that make an insignificant contribution to the instrumental error compared to others. Whether a given error is significant is determined on the basis of the significance criteria given in GOST 8.009-84.

During operation, the metrological characteristics and parameters of a measuring instrument change. These changes are random, monotonic or fluctuating in nature and lead to failures, i.e. to the inability of the SI to perform its functions. Failures are divided into non-metrological and metrological.

A non-metrological failure is a failure caused by reasons not related to changes in the MX of the measuring instrument. Such failures are mostly obvious in nature, appear suddenly and can be detected without verification.

A metrological failure is a failure caused by an MX leaving its established permissible limits. As studies have shown, metrological failures occur much more often than non-metrological ones, which necessitates the development of special methods for their prediction and detection. Metrological failures are divided into sudden and gradual.

A sudden failure is a failure characterized by an abrupt change in one or more MX. Such failures, due to their randomness, cannot be predicted. Their consequences (loss of readings, loss of sensitivity, etc.) are easily detected during operation of the device, i.e. by the nature of their manifestation they are obvious. A feature of sudden failures is the constancy of their intensity over time, which makes it possible to apply classical reliability theory to their analysis. For this reason, failures of this kind will not be considered further.

A gradual failure is a failure characterized by a monotonic change in one or more MX. By the nature of their manifestation, gradual failures are hidden and can be detected only from the results of periodic monitoring of the measuring instrument. It is these failures that are considered in what follows.

The concept of metrological serviceability of a measuring instrument is closely related to the concept of "metrological failure". It refers to the state of the SI in which all standardized MX correspond to the established requirements. The ability of an SI to keep its metrological characteristics within the set values for a given time, under given modes and operating conditions, is called metrological reliability. The specificity of the problem of metrological reliability is that the basic assumption of classical reliability theory, the constancy of the failure rate over time, does not hold for it. Modern reliability theory is oriented toward products that have two characteristic states: operational and inoperative. A gradual change in the SI error makes it possible to introduce arbitrarily many operational states with different levels of operating efficiency, determined by how closely the error approaches the permissible limit values.

The concept of metrological failure is to a certain extent conditional, since it is determined by the MX tolerance, which in general can vary depending on specific conditions. It is also important that recording the exact time of occurrence of a metrological failure is impossible because of the hidden nature of its manifestation, whereas the obvious failures dealt with by classical reliability theory can be detected at the moment they occur. All this required the development of special methods for analyzing the metrological reliability of SI.

The reliability of a measuring instrument characterizes its behavior over time and is a generalized concept that includes stability, failure-free operation, durability, maintainability (for recoverable measuring instruments) and storability.

The stability of an SI is a qualitative characteristic reflecting the constancy of its MX over time. It is described by the time dependences of the parameters of the error distribution law. Metrological reliability and stability are different aspects of the same SI aging process. Stability provides more information about the constancy of the metrological properties of the measuring instrument; it is, so to speak, an "internal" property. Reliability, on the contrary, is an "external" property, since it depends both on stability and on the accuracy of measurements and the values of the tolerances used.

Failure-free operation is the property of an SI to maintain an operational state continuously for some time. It is characterized by two states: operational and inoperative. However, for complex measuring systems a larger number of states may occur, since not every failure leads to a complete cessation of their functioning. A failure is a random event associated with disruption or cessation of SI performance. This determines the random nature of failure-free-operation indicators, the main one of which is the distribution of the failure-free operating time of the SI.

Durability is the property of an SI to maintain its operational state until a limiting state occurs. An operational state is a state of the SI in which all its MX correspond to the normalized values. A limiting state is a state of SI in which its use is unacceptable.

After a metrological failure, the SI characteristics can be returned to acceptable ranges through appropriate adjustments. The process of making adjustments can be more or less lengthy depending on the nature of the metrological failure, the design of the measuring instrument and a number of other factors. Therefore, the concept of "maintainability" was introduced into the reliability characteristic. Maintainability is a property of a measuring instrument that consists in its adaptability to preventing and detecting the causes of failures and to restoring and maintaining its working condition through maintenance and repair. It is characterized by the expenditure of time and money needed to restore the measuring instrument after a metrological failure and to maintain it in working condition.

As will be shown below, the process of change of MX is continuous, regardless of whether the SI is in use or stored in a warehouse. The property of an SI to preserve the values of its indicators of failure-free operation, durability and maintainability during and after storage and transportation is called its storability.

3.2 Changes in metrological characteristics of measuring instruments during operation

The metrological characteristics of an SI may change during operation. In what follows we will speak of changes in the error Δ(t), bearing in mind that any other MX can be treated in a similar way.

It should be noted that not all error components are subject to change over time. For example, methodological errors depend only on the measurement technique used. Among instrumental errors, there are many components that are practically not subject to aging, for example, the size of the quantum in digital devices and the quantization error determined by it.

The change of MX of measuring instruments over time is due to aging processes in their components and elements caused by interaction with the external environment. These processes occur mainly at the molecular level and do not depend on whether the measuring instrument is in operation or kept in storage. Consequently, the main factor determining the aging of measuring instruments is the calendar time elapsed since their manufacture, i.e. their age. The rate of aging depends primarily on the materials and technologies used. Research has shown that the irreversible processes that change the error proceed very slowly, and in most cases these changes cannot be recorded directly in an experiment. Hence the great importance of various mathematical methods, on the basis of which models of error change are built and metrological failures are predicted.

The problem solved when determining the metrological reliability of measuring instruments is to find the initial changes in MX and to construct a mathematical model extrapolating the results obtained over a large time interval. Since the change of MX over time is a random process, the main tool for constructing such mathematical models is the theory of random processes.

The change of SI error over time is a non-stationary random process. A set of its realizations is shown in Fig. 1 in the form of curves of the error modulus. At each moment t_i they are characterized by a certain probability density distribution law p(Δ, t_i) (curves 1 and 2 in Fig. 2a). In the center of the band (curve Δ_cp(t)) the highest density of errors is observed; it gradually decreases toward the boundaries of the band, theoretically tending to zero at an infinite distance from the center. The upper and lower boundaries of the SI error band can therefore be presented only as quantile boundaries within which most of the realized errors are contained with a confidence probability P. Outside these boundaries, with probability (1 − P)/2, lie the errors most distant from the center of the realizations.

To apply a quantile description of the boundaries of the error band, in each of its sections t_i it is necessary to know the estimates of the mathematical expectation Δ_cp(t_i) and the standard deviation σ(t_i) of the individual realizations. The error value at the boundaries of the band in each section t_i is then

Δ_r(t_i) = Δ_cp(t_i) ± k·σ(t_i),

where k is a quantile factor corresponding to the given confidence probability P, whose value depends significantly on the type of the error distribution law across the sections. Determining the type of this law when studying SI aging processes is practically impossible, because the distribution laws can undergo significant changes over time.
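The quantile boundaries Δ_cp(t_i) ± k·σ(t_i) described above can be sketched numerically. The drift model, its coefficients, and the choice k = 1.645 (the one-sided 95% quantile of the normal law) are all illustrative assumptions, not data from the text.

```python
# Sketch: quantile boundaries of the error band at section t_i,
# per the relation  r(t_i) = mean(t_i) +/- k * sigma(t_i).
# Drift model and coefficients are hypothetical.
def band_boundaries(m, s, k=1.645):
    """Return (lower, upper) quantile boundaries of the error band."""
    return m - k * s, m + k * s

def mean_drift(t):
    # hypothetical linear drift of the mean error modulus, % per year
    return 0.05 + 0.01 * t

def sigma(t):
    # hypothetical slow growth of the standard deviation, %
    return 0.02 + 0.002 * t

lo, hi = band_boundaries(mean_drift(5), sigma(5))  # boundaries at t = 5 years
```

Tracking the upper boundary `hi` over time is exactly the 95% quantile trajectory that the text proposes as a model of the aging process.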

A metrological failure occurs when the curve Δ(t) intersects the line of the permissible error limit ±Δ_pr. Failures can occur at various times in the range from t_min to t_max (see Fig. 2a), these being the points at which the 5% and 95% quantiles intersect the line of the permissible error value. When the 95% quantile curve reaches the permissible limit, 5% of the devices have experienced a metrological failure. The distribution of the moments of occurrence of such failures is characterized by the probability density p_н(t) shown in Fig. 2b. Thus, as a model of the non-stationary random process of change of the SI error modulus over time, it is advisable to use the time dependence of the 95% quantile of this process.

Indicators of accuracy, metrological reliability and stability of an SI correspond to different functionals built on the trajectories of change of its MX over time. The accuracy of the SI is characterized by the value of its MX at the moment in question, and for a set of measuring instruments by the distribution of these values, represented by curve 1 for the initial moment and curve 2 for the moment t_i. Metrological reliability is characterized by the distribution of the times at which metrological failures occur (see Fig. 2b). SI stability is characterized by the distribution of MX increments over a given time.

3.3 Models for the standardization of metrological characteristics

The MX standardization system is based on the principle of adequacy of the measurement error estimate to its actual value, provided that the estimate actually found is an estimate "from above". The latter condition is explained by the fact that an estimate "from below" is always more dangerous, since it leads to greater damage from unreliable measurement information.

This approach is understandable, given that exact normalization of MX is impossible because of the many influencing factors that cannot be taken into account (owing to ignorance of them and the lack of tools for identifying them). Normalization is therefore, to a certain extent, an act of compromise between the desire for a full description of measurement characteristics and the possibility of realizing it in real conditions, under known experimental and theoretical limitations and the requirements of simplicity and clarity of engineering methods. In other words, overly complex methods of describing and normalizing MX are not viable.

The consumer obtains information about typical MX from the technical documentation for the SI and only in extremely rare, exceptional cases independently conducts an experimental study of the individual characteristics of the SI. It is therefore very important to know the relationship between the MX of the SI and the instrumental measurement errors. This would allow one, knowing a single MX complex of the SI, to find the measurement error directly, eliminating one of the most laborious and complex tasks: summing the components of the total measurement error. However, this is hampered by one more circumstance: the difference between the MX of a particular SI and the metrological properties of a population of identical SIs. For example, the systematic error of a given SI is a deterministic quantity, but for a set of SIs it is a random quantity. The NMX complex must be established based on the requirements of the actual operating conditions of the specific measuring instruments. On this basis it is advisable to divide all SIs into three functional groups. For the first and third groups of SIs, the characteristics of interaction with devices connected to the input and output of the SI, and the non-informative parameters of the output signal, should be normalized. In addition, for the third group the nominal transformation function f_nom(x) must be normalized (in SIs of the second group it is replaced by a scale or other calibrated reading device), as well as the full dynamic characteristics. These characteristics do not make sense for SIs of the second group, with the exception of recording instruments, for which it is advisable to normalize complete or partial dynamic characteristics.

The most common forms of recording the SI accuracy class are:

1) δ = ±[c + d(|x_k/x| − 1)],

where c and d are constant coefficients according to formula (3.6); x_k is the final value of the measurement range; x is the current value;

2) the same form written through the coefficients a and b, where b = d and a = c − b;

3) symbolic notation, characteristic of foreign SIs,

δ_op = ± …
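Assuming the standard two-term form of the accuracy-class notation, δ = ±[c + d(|x_k/x| − 1)], the limit of permissible relative error can be computed as follows. The class values 0.05/0.02 and the 10 V range are illustrative assumptions.

```python
# Sketch: limit of permissible relative error (%) for an accuracy class
# written as c/d, assuming the two-term form delta = +/-[c + d(|x_k/x| - 1)].
def permissible_relative_error(x, x_k, c, d):
    """x: current value; x_k: final value of the range; c, d: class coefficients."""
    return c + d * (abs(x_k / x) - 1.0)

# hypothetical accuracy class 0.05/0.02 on a 10 V range
full_scale = permissible_relative_error(10.0, 10.0, 0.05, 0.02)  # smallest at full scale
near_start = permissible_relative_error(1.0, 10.0, 0.05, 0.02)   # grows toward range start
```

The sketch illustrates why such channels are best used near the upper end of the range: the relative error limit grows as x decreases.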

GOST 8.009-84 provides two main models (MI and MII) for the formation of NMX complexes, corresponding to two models of the occurrence of SI errors and based on the statistical combination of these errors.

Model II is applicable for SIs whose random error component can be neglected. It involves calculating the largest possible values of the SI error components in order to guarantee, with probability P = 1, that the SI error does not go beyond the calculated limits. Model II is used for the most critical measurements, those involving technical and economic factors, possible catastrophic consequences, threats to human health, and the like. When the number of components exceeds three, this model gives a rougher (because rarely occurring components are included) but reliable estimate "from above" of the main SI error.

Model I gives a rational estimate of the main SI error with probability P < 1, since rarely realized error components are neglected.

Thus, the NMX complex for error models I and II provides for the statistical integration of individual error components, taking into account their significance.

However, for some SIs such statistical combination is impractical. These are precision laboratory and industrial (process-control) measuring instruments that measure slowly changing quantities under conditions close to normal, and exemplary (reference) measuring instruments with which repeated observations with averaging are not performed. For such instruments, the main error, or the arithmetic sum of the largest possible values of the individual error components, can be taken as the instrumental error (model III).

Arithmetic summation of the largest values of the error components is possible if the number of such components is no more than three. In this case the estimate of the total instrumental error differs little from that obtained by statistical summation.

4. CLASSIFICATION OF SIGNALS

A signal is a material carrier of information representing a certain physical process, one of whose parameters is functionally related to the physical quantity being measured. This parameter is called informative.

A measuring signal is a signal containing quantitative information about the physical quantity being measured. The basic concepts, terms and definitions in the field of measuring signals are established by GOST 16465-70, "Radio signals. Terms and definitions". Measuring signals are extremely varied; their classification according to various criteria is shown in Fig. 3.

Based on the character of the informative and time parameters, measuring signals are divided into analog, discrete and digital.

An analog signal is a signal described by a continuous or piecewise-continuous function Y_a(t); both the function itself and its argument t can take any values within given intervals, Y_a ∈ (Y_min; Y_max) and t ∈ (t_min; t_max).

A discrete signal is a signal that varies discretely in time or in level. In the first case the signal exists only at discrete moments nT, where T = const is the sampling interval (period) and n = 0, 1, 2, … is an integer; at these moments it can take any values Y_d(nT) ∈ (Y_min; Y_max), called samples. Such signals are described by lattice functions. In the second case, the values of the signal Y_a(t) exist at any time t ∈ (t_min; t_max), but they can take only a limited set of values h_i = nq that are multiples of the quantum q.

Digital signals are signals quantized in level and discrete in time, Y_d(nT), described by quantized lattice functions (quantized sequences) which at the discrete moments nT take only a finite series of discrete values, the quantization levels h_1, h_2, …, h_n.
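The three signal kinds defined above can be sketched in a few lines: a continuous model Y_a(t), its time-discretized samples Y(nT), and the level-quantized digital sequence. The sampling interval T, quantum q and test waveform are illustrative assumptions.

```python
# Sketch: analog model -> discrete samples (lattice function) -> digital
# (level-quantized) sequence. T, q and the waveform are illustrative.
import math

T = 0.125            # sampling interval (period)
q = 0.25             # quantum (quantization step)

def analog(t):
    """Continuous analog model Y_a(t)."""
    return math.sin(2 * math.pi * t)

samples = [analog(n * T) for n in range(8)]    # lattice function Y(nT)
digital = [round(y / q) * q for y in samples]  # quantized lattice function
```

Each digital value is a multiple of the quantum q, and the quantization error never exceeds q/2, which is the source of the LSB-related figures in Table 1.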

According to the character of their change over time, signals are divided into constant signals, whose values do not change over time, and variable signals, whose values do change. Constant signals are the simplest type of measuring signal.

Variable signals can be continuous in time or pulsed. A signal whose parameters change continuously is called continuous. A pulse signal is a signal of finite energy that differs appreciably from zero only during a limited time interval, commensurate with the settling time of the transient process in the system the signal is intended to act upon.

According to the degree of availability of a priori information, variable measuring signals are divided into deterministic, quasi-deterministic and random. A deterministic signal is a signal whose law of change is known and whose model contains no unknown parameters. The instantaneous values of a deterministic signal are known at any time. The signals at the outputs of measures (standards) are deterministic (to a certain degree of accuracy). For example, the output signal of a low-frequency sine-wave generator is characterized by the amplitude and frequency values set on its controls; the errors in setting these parameters are determined by the metrological characteristics of the generator.

Quasi-deterministic signals are signals with a partially known nature of change over time, i.e. with one or more unknown parameters. They are most interesting from a metrological point of view. The vast majority of measurement signals are quasi-deterministic.

Deterministic and quasi-deterministic signals are divided into elementary signals, described by simple mathematical formulas, and complex ones. Elementary signals include constant and harmonic signals, as well as signals described by the unit-step and delta functions.

Signals can be periodic or non-periodic. Non-periodic signals are divided into almost-periodic and transient. An almost-periodic signal is one whose values repeat approximately when a suitably chosen almost-period is added to the time argument. A periodic signal is a special case of such signals. Almost-periodic functions are obtained by adding periodic functions with incommensurable periods, for example Y(t) = sin(ωt) + sin(√2·ωt). Transient signals describe transient processes in physical systems.

A signal is called periodic if its instantaneous values repeat at a constant time interval. The period T of the signal is the smallest such interval, and the frequency f of a periodic signal is the reciprocal of the period.

A periodic signal is characterized by its spectrum. Three types of spectrum are distinguished:

* complex: a complex-valued function of a discrete argument taking values at integer multiples of the frequency f of the periodic signal Y(t);

* amplitude: a function of a discrete argument equal to the modulus of the complex spectrum of the periodic signal;

* phase: a function of a discrete argument equal to the argument of the complex spectrum of the periodic signal.
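The three spectrum types listed above can be illustrated by computing a discrete Fourier transform over one period of a sampled sine wave. The pure-Python DFT and the choice of N = 16 samples are illustrative; a real application would use an FFT library.

```python
# Sketch: complex, amplitude and phase spectra of a periodic signal,
# via a direct DFT over one period (illustrative, not optimized).
import cmath
import math

N = 16                                              # samples per period
signal = [math.sin(2 * math.pi * n / N) for n in range(N)]

# complex spectrum: function of the discrete argument k*f
spectrum = [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) / N
            for k in range(N)]

amplitude = [abs(c) for c in spectrum]              # modulus of complex spectrum
phase = [cmath.phase(c) for c in spectrum]          # argument of complex spectrum
```

For the pure sine, the amplitude spectrum is concentrated at the fundamental (k = 1), with amplitude 0.5 and phase −π/2, as expected from sin x = cos(x − π/2).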

A measuring system, by definition, is designed to receive, process and store measurement information about, in the general case, heterogeneous physical quantities via various measuring channels (IC). Therefore, calculating the error of a measuring system comes down to estimating the errors of its individual channels.

The resulting relative error of the IC will be equal to

where x is the current value of the measured quantity; x_P is the limit of the given channel's measurement range, at which the relative error is minimal; and the remaining terms are the relative errors calculated at the beginning and end of the range, respectively.

An IC is a chain of various sensing, converting and recording links.

5. Channel development

5.1 Development of a channel model

In real data transmission channels the signal is affected by complex interference, and it is almost impossible to give a mathematical description of the received signal. Therefore, when studying signal transmission through channels, idealized models of the channels are used. A data transmission channel model is a description of the channel that allows one to calculate or estimate its characteristics and, on that basis, to explore various ways of constructing a communication system without direct experiment.

The model of a continuous channel is the so-called Gaussian channel. The noise in it is additive and represents an ergodic normal process with zero mathematical expectation. The Gaussian channel adequately describes only channels with fluctuation noise. For multiplicative interference, a channel model with a Rayleigh distribution is used; for impulse noise, a channel with a hyperbolic distribution.
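A minimal simulation of the Gaussian channel described above, with additive zero-mean normal noise, might look as follows. The noise level, signal and sample count are illustrative assumptions.

```python
# Sketch: additive white Gaussian noise channel with zero-mean normal
# noise, per the continuous-channel model above. Parameters are illustrative.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def gaussian_channel(samples, noise_sigma):
    """Pass samples through an additive zero-mean Gaussian noise channel."""
    return [s + random.gauss(0.0, noise_sigma) for s in samples]

sent = [1.0] * 10_000
received = gaussian_channel(sent, noise_sigma=0.1)
mean = sum(received) / len(received)   # close to 1.0: the noise has zero mean
```

Because the noise has zero mathematical expectation, averaging many received samples recovers the transmitted level, which is why repeated observations with averaging suppress fluctuation noise.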

The discrete channel model coincides with the models of error sources.

A number of mathematical models of error distribution in real communication channels have been proposed, such as those of Gilbert, Mertz, Mandelbrot and others.

5.2 Development of a measuring channel model

Previously, measuring equipment was designed and manufactured mainly in the form of separate instruments intended to measure one or several physical quantities. Nowadays, scientific experiments, the automation of complex production processes, control and diagnostics are unthinkable without measurement information systems (MIS) of various purposes, which make it possible to obtain the necessary information automatically, directly from the object under study, to process it and to present it in the required form. Specialized measuring systems are being developed for almost all areas of science and technology.

When designing an MIS to given technical and operational characteristics, the task arises of choosing a rational structure and a set of technical means for its construction. The structure of an MIS is determined mainly by the measurement method on which it is based, and the number and type of technical means by the information process occurring in the system. The nature of the information process and the types of information transformation can be assessed by analyzing the information model of the MIS, but constructing such a model is a rather labor-intensive process, and the model itself is so complex that it hinders the solution of the problem.

Because in third-generation MIS information processing is carried out mainly by general-purpose computers, which are a structural component of the MIS and are selected from a limited number of serial machines, the information model of the MIS can be simplified by reducing it to a model of a measuring channel (MC). All measuring channels of the MIS, which cover the elements of the information process from receiving information from the object of study or control to displaying, processing or storing it, contain a certain limited number of types of information transformation. By combining all types of information conversion in one measuring channel and isolating it from the MIS, and also keeping in mind that analog signals always act at the input of the measuring system, we obtain two models of measuring channels: with direct (Fig. 4a) and reverse (Fig. 4b) transformation of measurement information.

On the models, in nodes 0 - 4, information is converted. The arrows indicate the direction of information flows, and their letter designations indicate the type of transformation.

Node 0 is the output of the object of research or control, at which analog information A is generated that characterizes the state of the object. Information A arrives at node 1, where it is converted to the form A_n for further transformation in the system. In node 1, conversion of a non-electrical information carrier into an electrical one, amplification, scaling, linearization and so on can be carried out, i.e. normalization of the parameters of the information carrier A.

In node 2, the normalized information carrier A_n is modulated for transmission over the communication line and presented in the form of an analog signal A_n or a discrete signal D_m.

Analog information A_n is demodulated in node 3_1 and sent to node 4_1, where it is measured and displayed.

Fig.4 Model of the measuring channel of direct (a) and reverse (b) transformations of measuring information

Discrete information in node 3_2 is either converted into analog information A_n and passed to node 4_1, or, after digital conversion, sent to a digital display device or to a processing device.

In some MCs, the normalized information carrier A_n from node 1 goes directly to node 4_1 for measurement and display. In other MCs, the analog information A, without the normalization operation, enters node 2 directly, where it is sampled.

Thus, the information model (Fig. 4a) has six branches through which information flows are transmitted: the analog branches 0-1-2-3_1-4_1 and 0-1-4_1, and the analog-discrete branches 0-1-2-3_2-4_1, 0-1-2-3_2-4_2, 0-2-3_2-4_1 and 0-2-3_2-4_2. The branch 0-1-4_1 is not used in constructing measuring channels of the MIS, but only in autonomous measuring instruments, and is therefore not shown in Fig. 4a.

The model shown in Fig. 4b differs from the model in Fig. 4a only in the presence of the branches 3_2-1′-0, 3_1-1′-0, 3_2-1′-1 and 3_1-1′-1, through which reverse transmission of the analog information carrier A_n is carried out. In node 1′, the output discrete information carrier is converted into a signal homogeneous with the input information carrier A or with the normalized carrier A_n. Compensation can be performed with respect to either A or A_n.

Analysis of the information models of the measuring channels of the MIS showed that when they are constructed on the basis of the direct conversion method only five variants of structure are possible, while with measurement methods using reverse (compensation) conversion of information there are 20.

In most cases (especially when constructing an MIS for remote objects), the generalized information model of the MIS measuring channel has the form shown in Fig. 4a. The analog-discrete branches 0-1-2-3_2-4_2 and 0-2-3_2-4_2 are the most widespread. As can be seen, for these branches the number of levels of information conversion in the IC does not exceed three.

Since the nodes contain the technical means that transform information, and the number of transformation levels is limited, these means can be combined into three groups. This makes it possible, when developing an MIS measuring channel, to select the technical means needed to implement a particular structure. The group of technical means of node 1 includes the entire set of primary measuring transducers, as well as unifying (normalizing) measuring transducers (UMTs) that perform scaling, linearization, power conversion, etc., together with test-generation blocks and reference measures.

Node 2, when analog-discrete branches are present, contains another group of measuring instruments: analog-to-digital converters (ADC) and switches (CM), which serve to connect the corresponding information source to the IC or to the processing device, as well as communication channels (CC).

The third group (node 3) combines code converters (PC), digital-to-analog converters (DAC) and delay lines (DL).
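The three groups of technical means above can be sketched as a direct-conversion channel pipeline: a normalizing transducer (node 1), an ADC (node 2) and a code-to-reading conversion (node 3). All function names, gains and resolutions are hypothetical, chosen only to illustrate the data flow.

```python
# Hypothetical sketch of a direct-conversion measuring channel built from
# the three node groups above. Names and numbers are illustrative.
def normalize(x, gain=2.0, offset=0.0):
    """Node 1: unifying (normalizing) transducer - scaling of the carrier."""
    return gain * x + offset

def adc(v, n_bits=8, v_ref=10.0):
    """Node 2: quantize the normalized analog value to a binary code."""
    code = int(v / v_ref * (2 ** n_bits))
    return max(0, min(2 ** n_bits - 1, code))

def display(code, n_bits=8, v_ref=10.0):
    """Node 3: convert the code back to a displayed analog reading."""
    return code * v_ref / (2 ** n_bits)

# 2.5 (object output) -> 5.0 V (normalized) -> code -> reading near 5.0 V
reading = display(adc(normalize(2.5)))
```

The residual difference between the normalized value and the final reading is the quantization error contributed by node 2, bounded by one LSB of the assumed 8-bit converter.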

The given IC structure, which implements the direct measurement method, is shown without the connections that control the operation of the switching element and the ADC. It is typical, and most multi-channel MIS, especially long-range ones, are built on its basis.

Of interest are the methods for calculating the IC for the various information models discussed above. A rigorous mathematical calculation is impossible, but by using simplified approaches to determining the components of the resulting error, their parameters and distribution laws, by specifying the value of the confidence probability and by taking the correlations between components into account, one can create and calculate a simplified mathematical model of a real measuring channel. Examples of calculating the error of channels with analog and digital recorders are considered in the works of P. V. Novitsky.



A digital-to-analog converter (DAC) is a device that converts a digital code into an analog signal (a voltage or current) proportional to the value of the code.

DACs are used to couple digital control systems with devices that are controlled by the level of an analog signal. A DAC is also an integral part of many ADC architectures and other converter structures.

The DAC is characterized by a conversion function, which relates a change in the digital code to a change in the output voltage or current. The DAC conversion function is expressed as follows:

U out = (U max / N max) · N in,

where

U out is the output voltage corresponding to the digital code N in applied to the DAC inputs;

U max is the maximum output voltage, corresponding to the maximum code N max applied to the inputs.

The quantity K DAC = U max / N max is called the digital-to-analog conversion coefficient. Although the characteristic is stepwise, owing to the discrete change of the input value (a digital code), DACs are considered linear converters.

If the value N in is represented through the weights of its bits, the conversion function can be expressed as follows:

U out = Σ A i · U i , where

i is the bit number of the input code N in; A i is the value of the i-th bit (zero or one); U i is the weight of the i-th bit; n is the number of bits of the input code (the number of bits of the DAC).

The bit weight is defined for a specific resolution and is calculated using the following formula:

U i = U OP · 2^-i,

where U OP is the DAC reference voltage.
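As a quick illustration, the bit-weight form of the conversion function can be sketched in a few lines of Python; the function name and the parameter values are ours, not from the text:

```python
def dac_transfer(bits, u_ref=10.0):
    """Ideal conversion function U_out = sum(A_i * U_i), with U_i = U_ref * 2**-i.

    bits[0] corresponds to i = 1, i.e. the most significant bit."""
    return sum(a * u_ref * 2 ** -(i + 1) for i, a in enumerate(bits))

# Example: code 101 with U_ref = 8 V gives 8 * (1/2 + 1/8) = 5.0 V
```

With all bits set the sum approaches U_ref but stays one LSB weight below it, in agreement with the full-scale relations later in the text.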

The operating principle of most DACs is the summation of weighted shares of an analog signal (the bit weights), selected according to the input code.

A DAC can be implemented by current summation, voltage summation, or voltage division. In the first two cases, the signals of current generators or EMF sources are summed according to the bit values of the input code; the last method is a code-controlled voltage divider. The last two methods are not widely used because of the practical difficulties of implementing them.

Methods for implementing a DAC with weighted summation of currents

Let's consider the construction of a simple DAC with weighted summation of currents.

This DAC consists of a set of resistors and a set of switches. The number of switches and the number of resistors equal the number of bits n of the input code. The resistor values are chosen according to the binary law: if R = 3 Ohm, then 2R = 6 Ohm, 4R = 12 Ohm and so on, i.e. each subsequent resistor is twice as large as the previous one. With a voltage source connected and the switches closed, a current flows through each resistor, and thanks to this choice of values the resistor currents are also distributed according to the binary law. When an input code N in is applied, the switches are turned on according to the values of the corresponding bits of the input code; a switch is closed if its bit equals one. Currents proportional to the weights of these bits are then summed in the node, and the total current flowing out of the node is proportional to the value of the input code N in.
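The binary distribution of the branch currents can be checked numerically. This is a minimal sketch using the example values from the text (R = 3 Ohm) and an assumed 10 V source:

```python
def node_current(bits, u=10.0, r=3.0):
    """Summed node current of a binary-weighted resistor DAC.

    bits[0] is the MSB and drives the smallest resistor R; each following
    bit sees a resistor twice as large (R, 2R, 4R, ...)."""
    return sum(b * u / (r * 2 ** i) for i, b in enumerate(bits))

# With both switches of a 2-bit code closed: 10/3 + 10/6 = 5 A
```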

The resistances of the matrix resistors are chosen fairly large (tens of kilohms), so in most practical cases the DAC acts as a current source for the load. If a voltage output is required, a current-to-voltage converter, for example one built on an operational amplifier, is connected to the output of such a DAC.

However, when the code at the DAC inputs changes, the current drawn from the reference voltage source changes as well. This is the main disadvantage of this DAC structure. It can be used only with a reference source that has low internal resistance; otherwise, at the moment the input code changes, the varying source current changes the voltage drop across the internal resistance and, in turn, causes an additional change in the output current not directly related to the code change. A DAC structure with change-over switches eliminates this drawback.

In such a structure there are two output nodes. Depending on the values of the bits of the input code, the switches connect the matrix resistors either to the node connected to the device output or to a second node, which is most often grounded. Current then flows through every resistor of the matrix at all times, regardless of the switch positions, and the current drawn from the reference voltage source is constant.

A common disadvantage of both structures considered is the large ratio between the smallest and largest resistor values in the matrix. Despite this large spread of values, the same absolute trimming accuracy must be ensured for the largest and the smallest resistor alike, which is quite difficult to achieve in an integrated DAC with more than 10 bits.

Structures based on R-2R resistive matrices are free of all the above disadvantages.

With this construction of the resistive matrix, the current in each successive parallel branch is half that in the preceding one. Because the matrix contains only two resistor values, they are fairly easy to trim accurately.
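The halving of branch currents in an R-2R ladder can be verified by collapsing the network into its Thevenin equivalent stage by stage. The sketch below models a voltage-mode ladder (each bit switch selects U_ref or ground); the names and component values are illustrative, not from the text:

```python
def r2r_output(bits, u_ref=5.0, r=10e3):
    """Open-circuit output of a voltage-mode R-2R ladder, computed by
    successive Thevenin reduction from the terminating resistor upward.

    bits[0] is the LSB; bits[-1] is the MSB at the node nearest the output."""
    v_th, r_th = 0.0, 2 * r                 # terminating 2R to ground
    for k, b in enumerate(bits):
        v_leg = b * u_ref                   # 2R leg: U_ref if bit is 1, else GND
        v_new = (v_th / r_th + v_leg / (2 * r)) / (1 / r_th + 1 / (2 * r))
        r_th = 1 / (1 / r_th + 1 / (2 * r))  # parallel combination
        v_th = v_new
        if k < len(bits) - 1:
            r_th += r                       # series R toward the next node
    return v_th

# For any code the result equals u_ref * code / 2**n, confirming that each
# successive branch carries half the current of the previous one.
```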

The output current of each of the structures considered is proportional not only to the value of the input code but also to the reference voltage; it is often said to be proportional to the product of these two quantities, and such DACs are therefore called multiplying DACs. This property is shared by every DAC in which the weighted currents corresponding to the bit weights are formed by resistive matrices.

Besides their direct purpose, multiplying DACs are used to multiply analog signals by a digital code and as code-controlled resistances and conductances. They are widely used as components of code-controlled (tunable) amplifiers, filters, reference voltage sources, signal conditioners, etc.

Basic parameters and errors of the DAC

The main parameters given in datasheets are:

1. Number of bits – number of bits of the input code.

2. Conversion coefficient – ​​the ratio of the output signal increment to the input signal increment for a linear conversion function.

3. Settling time of the output voltage or current – the time interval from a specified code change at the DAC input until the moment the output voltage or current finally enters a band one least significant bit (LSB) wide.

4. Maximum conversion frequency – the highest frequency of code changes at which the specified parameters comply with the established standards.

There are other parameters that characterize the performance of the DAC and the features of its functioning. These include: low and high level input voltage, current consumption, output voltage or current range.

The most important parameters for a DAC are those that determine its accuracy characteristics.

The accuracy characteristics of a DAC are determined first of all by its normalized errors.

Errors are divided into dynamic and static. Static errors are the errors that remain after the completion of all transient processes associated with changing the input code. Dynamic errors are determined by transient processes at the DAC output that arise as a result of a change in the input code.

Main types of static DAC errors:

The absolute conversion error at the scale endpoint is the deviation of the output voltage (current) from the nominal value corresponding to the endpoint of the conversion function scale. It is measured in LSB units.

Zero-offset voltage is the DC voltage at the DAC output when the input code corresponds to zero output. It is measured in LSB units. Conversion coefficient (scale) error is associated with the deviation of the slope of the conversion function from the required one.

DAC nonlinearity is the deviation of the actual conversion function from the specified straight line. It is the most troublesome error and the hardest to correct.

Nonlinearity errors are generally divided into two types - integral and differential.

Integral nonlinearity error is the maximum deviation of the actual characteristic from the ideal one; in effect, it is evaluated against the averaged conversion function. This error is specified as a percentage of the full range of the output quantity.

Differential nonlinearity is associated with inaccuracy in setting the bit weights, i.e. with errors of the divider elements, the spread of residual parameters of the switching elements, current generators, etc.
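Both nonlinearity measures can be computed from a measured characteristic. This is a minimal sketch that uses an endpoint-fit straight line as the reference (the "optimal" line mentioned in the text is found empirically, which we simplify here):

```python
def nonlinearity(levels):
    """Return (integral, differential) nonlinearity in LSB units.

    `levels` is the measured output voltage for every code 0 .. 2**N - 1."""
    n = len(levels)
    lsb = (levels[-1] - levels[0]) / (n - 1)          # average step height
    ideal = [levels[0] + k * lsb for k in range(n)]   # endpoint straight line
    inl = max(abs(v - w) for v, w in zip(levels, ideal)) / lsb
    dnl = max(abs(levels[k + 1] - levels[k] - lsb) for k in range(n - 1)) / lsb
    return inl, dnl
```

For an ideal characteristic both values are zero; a single step that is 20 % too tall shows up as 0.2 LSB in both measures.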

Methods for identifying and correcting DAC errors

It is desirable to correct errors during converter manufacture (technological trimming). However, correction is often also needed when a specific LSI sample is used in a particular device. In that case the correction is performed by adding elements to the device structure besides the LSI DAC itself; such methods are called structural.

Linearity is the most difficult to ensure, since the linearity errors are determined by the matched parameters of many elements and nodes. Most often only the zero offset and the conversion coefficient are adjusted.

The accuracy achieved by technological methods deteriorates when the converter is exposed to destabilizing factors, above all temperature; the ageing of components must also be kept in mind.

Zero-offset error and scale error are easily corrected at the DAC output. A constant offset is added to the output signal to compensate for the shift of the converter characteristic; the required conversion scale is set either by adjusting the gain of the amplifier at the converter output or, if the DAC is a multiplying one, by adjusting the reference voltage.

Correction methods with test control consist of identifying DAC errors across the entire set of permissible input influences and adding corrections calculated on the basis of this to the input or output value to compensate for these errors.

For any correction method with control using a test signal, the following actions are provided:

1. Measuring the characteristics of the DAC on a set of test influences sufficient to identify errors.

2. Identification of errors by calculating their deviations from measurement results.

3. Calculation of corrective amendments for the converted values ​​or the required corrective effects on the corrected blocks.

4. Carrying out correction.

Control can be carried out once, before the converter is installed in the device, using special laboratory measuring equipment. It can also be carried out by specialized equipment built into the device; in that case monitoring is, as a rule, performed periodically, whenever the converter is not directly involved in the operation of the device. Such control and correction of converters can be organized when the converter operates as part of a microprocessor measuring system.

The main disadvantage of any end-to-end testing method is the long testing time along with the heterogeneity and large volume of equipment used.

The correction values determined in one way or another are stored, as a rule, in digital form. Error correction using these values can be performed in either analog or digital form.

With digital correction, the corrections are added, with their sign taken into account, to the DAC input code, so that the code arriving at the DAC produces the required voltage or current at its output. The simplest implementation of this method places a digital memory in front of the corrected DAC. The input code serves as the address code; the memory locations hold code values, precomputed with the corrections taken into account, that are fed to the corrected DAC.
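The digital correction scheme can be sketched as follows. The `measure` argument stands in for the laboratory characterization of the imperfect DAC; all names and values here are illustrative, not from the text:

```python
def build_correction_table(measure, n_bits):
    """Precompute, for every desired output level (in LSB units), the input
    code that makes the imperfect DAC come closest to it.  The resulting
    table plays the role of the memory addressed by the input code."""
    codes = range(1 << n_bits)
    return [min(codes, key=lambda c: abs(measure(c) - target))
            for target in codes]

# Example: an 8-bit DAC with a 2 % gain error
table = build_correction_table(lambda c: 1.02 * c, 8)
```

Requesting level 100 from this table yields code 98, since 98 * 1.02 is the closest the gain-erred DAC can get to 100.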

For analog correction, an additional DAC is used alongside the main one. The range of its output signal corresponds to the maximum error of the corrected DAC. The input code is applied simultaneously to the inputs of the corrected DAC and to the address inputs of the correction memory, from which the correction for the given input code is read. The correction code is converted into a proportional signal, which is summed with the output signal of the corrected DAC. Because the required output range of the additional DAC is small compared with that of the corrected DAC, its own errors can be neglected.

In some cases, it becomes necessary to correct the dynamics of the DAC.

The transient response of the DAC differs for different code transitions; in other words, the settling time of the output signal differs. Therefore the maximum settling time must be assumed when using a DAC. In some cases, however, the behavior of the transfer characteristic can be corrected.

Features of using LSI DAC

For successful use of modern LSI DACs it is not enough to know the list of their main characteristics and the basic connection circuits.

The results of applying an LSI DAC depend significantly on operational requirements determined by the features of the particular chip. These include not only the permissible input signals, supply voltages, and load capacitance and resistance, but also the power-up sequence of the different supplies, the routing of the circuits connecting the supplies and the common bus, the use of filters, and so on.

For precision DACs the output noise voltage is of particular importance. A special aspect of the noise problem in DACs is the presence of glitches at the output caused by the switching of the switches inside the converter. The amplitude of these glitches can reach several tens of LSB weights and can hamper the analog signal-processing devices that follow the DAC. The solution is to use a sample-and-hold (S/H) circuit at the DAC output. The S/H is controlled from the digital part of the system that generates the new code combinations at the DAC input: before a new code is applied, the S/H switches to hold mode, breaking the analog signal path to the output. The glitch in the DAC output voltage therefore does not reach the S/H output; the S/H is then returned to track mode, following the DAC output.

When building a DAC on an LSI chip, special attention must be paid to the choice of the operational amplifier that converts the DAC output current into a voltage. When the input code is applied to the DAC, the op-amp output exhibits an error ΔU caused by its offset voltage, equal to

ΔU = U os (1 + R fb / R m),

where U os is the op-amp offset voltage; R fb is the resistance in the op-amp feedback circuit; R m is the resistance of the DAC resistive matrix (the DAC output resistance), which depends on the code applied to the input.

As the ratio R fb / R m varies from 1 to 0, the error caused by U os varies within the range (1...2)·U os. The influence of U os can be neglected when an op-amp with a sufficiently small offset voltage is used.

Because of the large area of the transistor switches in CMOS LSI chips, the output capacitance of an LSI DAC is significant (40...120 pF, depending on the input code). This capacitance strongly affects the time needed for the op-amp output voltage to settle to the required accuracy. To reduce its influence, R fb is bypassed with a capacitor C fb.

In some cases a bipolar output voltage is required at the DAC output. This can be achieved by introducing an offset of the output voltage range and, for multiplying DACs, by switching the polarity of the reference voltage source.

Note that if you use an integrated DAC with more bits than you need, the inputs of the unused bits are connected to the ground bus, fixing a logic zero on them. Moreover, to work with the widest possible range of the LSI DAC output signal, the unused bits are taken starting from the least significant one.

One practical example of DAC use is a waveform generator. I put together a small model in Proteus: a DAC controlled by an MCU (an ATmega8, though a Tiny would also do) generates signals of various shapes. The program is written in C in CodeVision AVR. Pressing a button switches the generated waveform.

The DAC is a National Semiconductor DAC0808, an 8-bit high-speed LSI device, connected in its standard configuration. Since its output is a current, it is converted to a voltage by an inverting amplifier built on an op-amp.

You can even produce quite interesting figures this way (reminds you of something, right?); choosing a higher bit depth gives smoother waveforms.
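A lookup table like the one such firmware cycles through can be generated offline. This sketch is not the original CodeVision AVR program, just an illustration with assumed parameters (8-bit DAC, 256 samples per period):

```python
import math

N, FULL = 256, 255   # samples per period, 8-bit full-scale code

# Sample tables for three waveforms an MCU could step through
sine = [round((math.sin(2 * math.pi * k / N) + 1) / 2 * FULL) for k in range(N)]
sawtooth = [k * FULL // (N - 1) for k in range(N)]
triangle = [min(k, N - k) * 2 * FULL // N for k in range(N)]
```

In the firmware the MCU would simply write these values to the DAC port at a fixed rate and switch tables on a button press.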


Digital-to-analog converters have static and dynamic characteristics.

Static characteristics of the DAC

The main static characteristics of a DAC are:

· resolution;

· nonlinearity;

· differential nonlinearity;

· monotonicity;

· conversion factor;

· absolute full scale error;

· relative full scale error;

· zero offset;

· absolute error.

Resolution is the increment of U OUT when converting adjacent values of D j, i.e. values differing by one least significant bit (LSB). This increment is the quantization step. For binary conversion codes, the nominal value of the quantization step is

h = U FS / (2^N − 1),

where U FS is the nominal maximum output voltage of the DAC (full-scale voltage) and N is the number of bits of the DAC. The higher the bit depth of the converter, the higher its resolution.
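As a numeric check of the quantization-step formula, a 10-bit DAC with a 10 V full scale has a step of about 9.78 mV, matching the LSB table at the start of this text:

```python
def quant_step(u_fs, n_bits):
    """Nominal quantization step h = U_FS / (2**N - 1)."""
    return u_fs / (2 ** n_bits - 1)

# quant_step(10.0, 10) is roughly 9.78 mV
```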

Full-scale error is the relative difference between the real and ideal values of the conversion scale limit in the absence of zero offset:

δ FS = (U FS real − U FS ideal) / U FS ideal · 100 %.

It is the multiplicative component of the total error. It is sometimes stated as a number of LSBs.

Zero-offset error is the value of U OUT when the DAC input code is zero. It is the additive component of the total error. It is typically stated in millivolts or as a percentage of full scale.

Nonlinearity is the maximum deviation of the actual conversion characteristic U OUT(D) from the optimal one (line 2 in Fig. 5.2). The optimal characteristic is found empirically so as to minimize the nonlinearity error. Nonlinearity is usually given in relative units, but in reference data it may also be given in LSBs.

Differential nonlinearity is the maximum change (sign included) in the deviation of the actual conversion characteristic U OUT(D) from the optimal one when moving from one input code value to an adjacent one. It is usually given in relative units or in LSBs.

Monotonicity of the conversion characteristic means that the DAC output voltage U OUT increases (decreases) as the input code D increases (decreases). If the differential nonlinearity exceeds the relative quantization step h/U FS, the converter characteristic is non-monotonic.

The temperature instability of the DAC is characterized by the temperature coefficients of the full-scale error and the zero-offset error.

Full scale and zero offset errors can be corrected by calibration (tuning). Nonlinearity errors cannot be eliminated by simple means.

Dynamic characteristics of the DAC

The dynamic characteristics of a DAC include the settling time and the conversion time.

As the input digital signal D(t) is incremented from 0 to (2^N − 1) by one LSB at a time, the output signal U OUT(t) forms a stepped curve. This dependence is usually called the DAC conversion characteristic. In the absence of hardware errors, the midpoints of the steps lie on the ideal straight line 1 (see Fig. 5.2), which corresponds to the ideal conversion characteristic. The actual conversion characteristic may differ significantly from the ideal one in the size and shape of the steps, as well as in their position on the coordinate plane. A number of parameters quantify these differences.

The dynamic parameters of the DAC are determined by the change in the output signal when the input code changes abruptly, usually from the value “all zeros” to “all ones” (Fig. 5.3).

Settling time is the time interval from the moment the input code changes (t = 0 in Fig. 5.3) until the last moment at which

|U OUT − U FS| = δ/2

is satisfied, where δ/2 usually corresponds to one LSB.

Slew rate is the maximum rate of change of U OUT(t) during the transient. It is defined as the ratio of the increment ΔU OUT to the time Δt over which this increment occurred. It is usually specified in the technical documentation of DACs with a voltage output; for digital-to-analog converters with a current output, this parameter largely depends on the type of the output op-amp.
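Both dynamic parameters can be estimated from a sampled record of the transient. A minimal sketch (the sampling grid and function names are ours, for illustration):

```python
def settling_time(t, v, v_final, half_lsb):
    """Last instant at which the output is still outside the +/- half_lsb band."""
    return max((ti for ti, vi in zip(t, v) if abs(vi - v_final) > half_lsb),
               default=0.0)

def slew_rate(t, v):
    """Maximum |dU_OUT/dt| over the record, in V/s."""
    return max(abs((v[i + 1] - v[i]) / (t[i + 1] - t[i]))
               for i in range(len(t) - 1))
```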

For multiplying DACs with voltage output, the unity gain frequency and power bandwidth are often specified, which are mainly determined by the properties of the output amplifier.

Figure 5.4 shows two linearization methods. The method that minimizes Δ L (Fig. 5.4, b) halves the error Δ L compared with linearization through the endpoints (Fig. 5.4, a).

For digital-to-analog converters with n binary digits, in the ideal case (in the absence of conversion errors), the analog output U OUT is related to the input binary number as follows:

U OUT = U OP (a 1·2^−1 + a 2·2^−2 + … + a n·2^−n),

where U OP is the reference voltage of the DAC (from the built-in or external source).

Since Σ 2^−i = 1 − 2^−n, with all bits on the DAC output voltage equals:

U OUT(a 1 … a n) = U OP (1 − 2^−n) = (U OP / 2^n)(2^n − 1) = Δ (2^n − 1) = U FS,

where U FS is the full-scale voltage.

Thus, with all bits on, the output voltage of the digital-to-analog converter, which in this case forms U FS, differs from the reference voltage U OP by the weight of the least significant bit Δ, defined as

Δ = U OP / 2^n.

When only the i-th bit is turned on, the DAC output voltage is determined by the relation:

U OUT(a i = 1) = U OP · 2^−i.
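The chain of equalities above is easy to verify numerically; n = 12 and U OP = 10 V are chosen arbitrarily for the check:

```python
n, u_op = 12, 10.0
delta = u_op / 2 ** n            # weight of the least significant bit
u_fs = u_op * (1 - 2 ** -n)      # output voltage with all bits on

# U_FS = delta * (2^n - 1), and it falls exactly one LSB short of U_OP
assert abs(u_fs - delta * (2 ** n - 1)) < 1e-12
assert abs((u_op - u_fs) - delta) < 1e-12
```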

A digital-to-analog converter converts the digital binary code Q4 Q3 Q2 Q1 into an analog quantity, usually a voltage U OUT or a current I OUT. Each bit of the binary code has a weight; the weight of the i-th bit is twice that of the (i−1)-th. The operation of the DAC can be described by the following formula:

U OUT = e (Q1·1 + Q2·2 + Q3·4 + Q4·8 + …),

where e is the voltage corresponding to the weight of the least significant bit and Q i is the value of the i-th bit of the binary code (0 or 1).

For example, the number 1001 corresponds to

U OUT = e (1·1 + 0·2 + 0·4 + 1·8) = 9·e,

and the number 1100 to

U OUT = e (0·1 + 0·2 + 1·4 + 1·8) = 12·e.

With a sequential increase in the values ​​of the input digital signal D(t) from 0 to 2N-1 through the least significant unit (EMP), the output signal U out (t) forms a stepped curve. This dependence is usually called the DAC conversion characteristic. In the absence of hardware errors, the midpoints of the steps are located on the ideal straight line 1 (Fig. 22), which corresponds to the ideal transformation characteristic. The actual transformation characteristic may differ significantly from the ideal one in terms of the size and shape of the steps, as well as their location on the coordinate plane. There are a number of parameters to quantify these differences.

Static parameters

Resolution- increment Uout when converting adjacent values ​​Dj, i.e. different on the EMR. This increment is the quantization step. For binary conversion codes, the nominal value of the quantization step is h=U psh /(2N-1), where U psh is the nominal maximum output voltage of the DAC (full scale voltage), N is the bit capacity of the DAC. The higher the bit depth of the converter, the higher its resolution.

Full scale error- the relative difference between the real and ideal values ​​of the conversion scale limit in the absence of a zero offset.

It is the multiplicative component of the total error. Sometimes indicated by the corresponding EMP number.

Zero offset error- the value of Uout when the DAC input code is zero. It is an additive component of the total error. Typically stated in millivolts or as a percentage of full scale:

Nonlinearity- maximum deviation of the actual conversion characteristic U out (D) from the optimal one (line 2 in Fig. 22). The optimal characteristic is found empirically so as to minimize the value of the nonlinearity error. Nonlinearity is usually defined in relative units, but in the reference data it is also given in the EMP. For the characteristics shown in Fig. 22.

Differential nonlinearity is the maximum change (taking into account the sign) of the deviation of the real transformation characteristic Uout(D) from the optimal one when moving from one input code value to another adjacent value. Usually defined in relative units or in EMR. For the characteristics shown in Fig. 22,

The monotonicity of the conversion characteristic is an increase (decrease) in the DAC output voltage Uout with an increase (decrease) in the input code D. If the differential nonlinearity is greater than the relative quantization step h/Upsh, then the converter characteristic is non-monotonic.

The temperature instability of a DA converter is characterized by the temperature coefficients of full scale error and zero offset error.

Full scale and zero offset errors can be corrected by calibration (tuning). Nonlinearity errors cannot be eliminated by simple means.

Dynamic parameters

The dynamic parameters of the DAC are determined by the change in the output signal when the input code changes abruptly, usually from the value “all zeros” to “all ones” (Fig. 23).

Settling time- time interval from the moment the input code changes (in Fig. 23 t=0) until the last time the equality is satisfied

|U out - U psh |= d/2,

Slew rate- maximum rate of change of Uout(t) during the transient process. It is defined as the ratio of the increment D Uout to the time t during which this increment occurred. Usually specified in the technical specifications of a DAC with a voltage output signal. For a DAC with a current output, this parameter largely depends on the type of output op-amp.

For multiplying DACs with voltage output, the unity gain frequency and power bandwidth are often specified, which are mainly determined by the properties of the output amplifier.

DAC noise

Noise at the DAC output can arise for various reasons related to physical processes in semiconductor devices. To assess the quality of high-resolution DACs, the concept of RMS noise is used; it is usually measured in nV/√Hz over a given frequency band.

Glitches (impulse noise) are sharp, short spikes or dips in the output voltage that occur while the output code changes, because the analog switches in different bits of the DAC do not open and close synchronously. For example, if, on the transition from code 011...111 to 100...000, the most-significant-bit switch of a weighted-current DAC opens later than the lower-bit switches close, a signal corresponding to code 000...000 will exist at the DAC output for some time.

Glitches are typical for high-speed DACs, in which the capacitances that could smooth them out are minimized. A radical way to suppress glitches is to use a sample-and-hold circuit. Glitches are assessed by their area (in pV·s).

Table 2 lists the most important characteristics of some types of digital-to-analog converters.

Table 2

DAC name | Resolution, bits | Channels | Output | Settling time, µs | Interface | Internal reference | Supply voltage, V | Power, mW | Note
General-purpose DACs
572PA1 | 10 | 1 | I | 5 | — | No | 5; 15 | 30 | MOS switches, multiplying
— | 10 | 1 | U | 25 | Serial | Yes | 5 or ±5 | 2 |
594PA1 | 12 | 1 | I | 3.5 | — | No | +5, −15 | 600 | Current switches
MAX527 | 12 | 4 | U | 3 | Parallel | No | ±5 | 110 | Input words loaded over an 8-bit bus
DAC8512 | 12 | 1 | U | 16 | Serial | Yes | 5 | 5 |
— | 14 | 8 | U | 20 | Parallel | No | 5; ±15 | 420 | MOS switches, with an inverse resistive matrix
— | 8 | 16 | U | 2 | Parallel | No | 5 or ±5 | 120 | MOS switches, with an inverse resistive matrix
— | 8 | 4 | — | 2 | Serial | No | 5 | 0.028 | Digital potentiometer
Micropower DACs
— | 10 | 1 | U | 25 | Serial | No | 5 | 0.7 | Multiplying, 8-pin package
— | 12 | 1 | U | 25 | Parallel | Yes | 5 or ±5 | 0.75 | Multiplying; 0.2 mW in power-down mode
MAX550B | 8 | 1 | U | 4 | Serial | No | 2.5–5 | 0.2 | 5 µW in power-down mode
— | 12 | 1 | U | 60 | Serial | No | 2.7–5 | 0.5 | Multiplying, SPI-compatible interface
— | 12 | 1 | I | 0.6 | Serial | No | 5 | 0.025 | Multiplying
— | 12 | 1 | U | 10 | Serial | No | 5 or 3 | 0.75 (at 5 V); 0.36 (at 3 V) | 6-pin package; 0.15 µW in power-down mode; I²C-compatible interface
Precision DACs





