A program for calculating software reliability. Review of software packages for calculating the reliability of technical systems


The reliability models considered here are of interest primarily for predicting failures during the operation and debugging of a program. The values of the model parameters are determined during operation or debugging based on data on the moments of failures. The lack of general reference data is explained by the fact that each programmer is a unique technological object for creating programs, and each program is a one-of-a-kind product.

The most developed apparatus for assessing reliability characteristics is based on the Jelinski-Moranda reliability model, which is discussed below.

Calculation method for predicting software failures

The model under consideration is based on the following assumptions:

    the time until the next failure is distributed exponentially;

    the failure rate of a program is proportional to the number of errors remaining in the program.

According to these assumptions, the probability of failure-free operation of the program as a function of time ti is

P(ti) = exp(−λi·ti), (1)

where λi = C·(N − (i − 1)). (2)

Here C is the proportionality coefficient;

N is the initial number of errors in the program.

In expression (1) the time ti is counted from the moment of the last, (i−1)-th, program failure, and the value λi changes when predicting different failures.

The values C and N in expression (2) are determined from the experimentally recorded time intervals Δti between the moments of failures occurring during program debugging. By the maximum likelihood technique, the value N is obtained as the solution of the nonlinear equation

Σ(i=1..K) 1/(N − i + 1) = K·Σ(i=1..K) Δti / Σ(i=1..K) (N − i + 1)·Δti , (3)

where K is the number of experimentally obtained intervals between failures.

The actual value of N is obtained by trial enumeration, taking into account that it must be an integer.

The value of the proportionality coefficient C is obtained as

C = K / Σ(i=1..K) (N − i + 1)·Δti . (4)

This technique works for K ≥ 2, i.e. at least two experimentally obtained intervals between the moments when errors occur are required.

Example of Software Failure Prediction

Suppose that during program debugging the time intervals Δt1 = 10, Δt2 = 20, Δt3 = 25 between program failures were recorded. The values Δt can be measured in units of time or in the number of program runs during testing. Let us determine the probability P(t4) = exp(−λ4·t4) of the program operating without the next, fourth, failure, counting from the moment the third failure is eliminated, and the mean time T4 until the next program failure.

We solve equation (3) with respect to N by direct enumeration.

For N = 4 and K = 3, the left-hand side of (3) is 1/4 + 1/3 + 1/2 ≈ 1.083, while the right-hand side is 3·(10 + 20 + 25)/(4·10 + 3·20 + 2·25) = 165/150 = 1.1.

For N = 5 the left-hand side is 1/5 + 1/4 + 1/3 ≈ 0.783, while the right-hand side is 165/(5·10 + 4·20 + 3·25) = 165/205 ≈ 0.805.

The smallest error is provided by N = 4, whence, in accordance with expression (4):

C = 3/(4·10 + 3·20 + 2·25) = 3/150 = 0.02.

Thus, the probability of failure-free operation, i.e. of the absence of the 4th failure, is

P(t4) = exp(−0.02·t4), and T4 = 1/λ4 = 50.

We recall that the time t4 is counted from the moment the third failure occurs and is measured in units of time or in the number of program runs.
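The enumeration procedure for this model can be sketched in a short script. This is a minimal illustration, not a definitive implementation: it assumes the maximum likelihood equation (3) is solved by integer enumeration, picking the integer N with the smaller residual at the sign change, and all function names are ours.

```python
def jm_fit(dt, n_max=1000):
    """Fit the Jelinski-Moranda model to the inter-failure intervals dt.

    N is found by integer enumeration: the residual of the maximum
    likelihood equation changes sign between two adjacent integers,
    and the integer with the smaller residual is taken.  C then follows
    from expression (4).
    """
    k = len(dt)
    total = sum(dt)

    def f(n):
        lhs = sum(1.0 / (n - i) for i in range(k))              # sum of 1/(N-i+1)
        rhs = k * total / sum((n - i) * dt[i] for i in range(k))
        return lhs - rhs

    n = k                       # N cannot be less than the number of observed failures
    while n < n_max and f(n) > 0:
        n += 1
    if n > k and abs(f(n - 1)) < abs(f(n)):
        n -= 1                  # pick the integer closest to the root
    c = k / sum((n - i) * dt[i] for i in range(k))
    return n, c


def jm_predict(n, c, k):
    """Failure rate and mean time to the (k+1)-th failure after k observed failures."""
    lam = c * (n - k)
    return lam, 1.0 / lam


n, c = jm_fit([10, 20, 25])
lam4, t4 = jm_predict(n, c, 3)
print(n, c, lam4, t4)  # → 4 0.02 0.02 50.0
```

Running this on the example data reproduces the values obtained by hand: N = 4, C = 0.02, λ4 = 0.02 and T4 = 50.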

Example of a star network calculation:

A local area network (LAN) usually includes a set of user workstations, a network administrator workstation (one of the user stations may be used), a server core (a set of hardware server platforms with server programs: file server, WWW server, database server, mail server, etc.), communications equipment (routers, switches, hubs) and a structured cabling system (cable equipment).

Calculation of LAN reliability begins with formulating the concept of failure for the given network. To do this, the management functions performed at the enterprise using the LAN are analyzed. The functions that must not be interrupted are selected, and the LAN equipment involved in their implementation is determined. For example: during the working day it must certainly be possible to query and record information in the database and to access the Internet.

For such a set of functions, the structural electrical diagram is used to determine the LAN equipment whose failure directly disrupts at least one of the specified functions, and a logical reliability block diagram is drawn up.

This takes into account the number and working conditions of the repair and restoration teams. The following conditions are generally accepted:

Recovery is limited: at any given time no more than one failed element can be under restoration, since there is a single repair team;

The average recovery time of a failed element is set either on the basis of permissible interruptions in LAN operation, or from the technical possibilities of delivering the element and putting it into operation.

Within the framework of the above approach to calculation, the reliability calculation scheme, as a rule, can be reduced to a series-parallel circuit.

Let us set as a criterion for LAN failure the failure of equipment included in the network core: servers, switches or cable equipment.

We assume that the failure of user workstations does not lead to a failure of the LAN: since the simultaneous failure of all workstations is an unlikely event, the network continues to function in the event of individual workstation failures.

Reliability of a star network.

Workstation failures do not cause failure of the entire network; the reliability of the LAN is determined by the reliability of the central node.

Let us assume that the local network under consideration includes one server, two switches and fourteen cable fragments belonging to the network core. The failure and restoration intensities for them are given below; as before, the availability factor is K_G = 1 − λ/μ.

The restoration intensity values are highest for cables, which are replaced from spares, and lowest for switches, which are repaired by specialized companies.

The characteristics of the server, switch and cable subsystems are calculated using the expressions for a series connection of elements.

Server subsystem:

λC = 2·λ1 = 2·10⁻⁵ 1/h; K_GC = 1 − 2·10⁻⁴; μC = λC/(1 − K_GC) = 0.1 1/h.

Switch subsystem:

λk = 2·10⁻⁵ 1/h; K_Gk = 1 − 2·10⁻³; μk = λk/(1 − K_Gk) = 0.01 1/h.

Cable subsystem:

λl = 14·10⁻⁶ 1/h; K_Gl = 1 − 14·10⁻⁶; μl = 1 1/h.

For the entire network:

λs = 6.5·10⁻⁵ 1/h; K_Gs = 1 − 2.4·10⁻³; μs = 0.027 1/h.

Calculation result:

T = 15 thousand hours, K_G = 0.998, T_V ≈ 37 hours.
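The subsystem arithmetic can be checked with a short script. This is a minimal sketch under two assumptions: the recovery intensity μ is recovered from the relation K_G = 1 − λ/μ used above, and the availability of a series connection is taken as the product of the subsystem availabilities.

```python
def subsystem_mu(lam, k_g):
    """Recovery intensity implied by the relation K_G = 1 - lambda/mu."""
    return lam / (1.0 - k_g)


def series_availability(k_gs):
    """Availability of a series connection: the product of subsystem K_G values."""
    a = 1.0
    for k_g in k_gs:
        a *= k_g
    return a


# Subsystem availability factors from the example
kg_server, kg_switch, kg_cable = 1 - 2e-4, 1 - 2e-3, 1 - 14e-6

print(round(subsystem_mu(2e-5, kg_server), 6))   # → 0.1
print(round(subsystem_mu(2e-5, kg_switch), 6))   # → 0.01
print(round(series_availability([kg_server, kg_switch, kg_cable]), 3))  # → 0.998
```

The script reproduces the recovery intensities of the server and switch subsystems and the overall availability factor K_G ≈ 0.998 quoted in the calculation result.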

Calculation of LAN cost:

14 network cards: 1500 rub.

Cable, 1 km: 2000 rub.

Connectors: 200 rub.

Server: 50 thousand rub.

Total: 253,700 rub.

One of the most important characteristics of software quality is reliability.

Reliability is the ability of a software tool to remain operational for a specified period of time, under certain operating conditions, taking into account the consequences of each failure for the user.

Operational is the state of a software tool in which it is capable of performing the specified functions with the parameters established by the terms of reference. The transition out of the operational state is associated with the failure event.

The cause of software failures is the impossibility of complete verification during testing and trials. When a software tool is operated under real conditions, a combination of input data may arise that causes a failure; hence the operability of the software tool depends on the input data, and the smaller this dependence, the higher the level of reliability.

To assess reliability, three groups of indicators are used: qualitative, ordinal and quantitative.

The main quantitative indicators of software reliability include:

The probability of failure-free operation P(t3) is the probability that, within a given operating time, no system failure occurs. Operating time is a duration or volume of work:

P(t3) = P(t ≥ t3),

where t is the random operating time of the software until failure and t3 is the specified operating time.

The probability of failure Q(t3) is the probability that a system failure occurs within the given operating time. This indicator is the complement of the previous one:

Q(t3) = 1 − P(t3).

The system failure rate λ(t) is the conditional probability density of a software failure occurring at a given moment in time, provided that no failure occurred before that moment:

λ(t) = f(t) / P(t),

where f(t) is the failure probability density at time t.

There is the following relationship between λ(t) and P(t):

P(t) = exp(−∫₀ᵗ λ(τ) dτ).

In the particular case λ = const,

P(t) = exp(−λ·t).

If during the testing process the number of failures over a certain time interval is recorded, then λ(t) can be estimated as the number of failures per unit of time.

Mean time to failure Ti is the mathematical expectation of the operating time of the software until the next failure. It is estimated as

Ti = (t1 + t2 + ... + tn)/n,

where ti is the operating time of the software between the (i−1)-th and i-th failures, and n is the number of failures.

Mean recovery time Tv is the mathematical expectation of the recovery time tвi, which is made up of the time spent on detecting and localizing the failure tо.л.i, the time to eliminate the failure tу.о.i, and the time of verification testing of functionality tп.п.i:

tвi = tо.л.i + tу.о.i + tп.п.i.

For this indicator, the term "time" means the time spent by the programmer on the listed types of work.

The availability coefficient K2 is the probability that the software is in an operational state at an arbitrary moment of time when it is used for its intended purpose:

K2 = Ti / (Ti + Tv).
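Given a log of operating intervals between failures and the corresponding recovery times, the indicators Ti, Tv and K2 can be estimated directly. A minimal sketch with hypothetical data (the numbers below are illustrative, not from the text):

```python
def reliability_indicators(uptimes, recovery_times):
    """Return (Ti, Tv, K2): mean time to failure, mean recovery time,
    and the availability coefficient K2 = Ti / (Ti + Tv)."""
    t_i = sum(uptimes) / len(uptimes)
    t_v = sum(recovery_times) / len(recovery_times)
    return t_i, t_v, t_i / (t_i + t_v)


# Hypothetical data: hours of operation between failures and hours to recover
uptimes = [120, 80, 100]
recovery_times = [2, 1, 3]
ti, tv, k2 = reliability_indicators(uptimes, recovery_times)
print(ti, tv, round(k2, 3))  # → 100.0 2.0 0.98
```

Here the mean time to failure is 100 h, the mean recovery time is 2 h, and the availability coefficient is 100/102 ≈ 0.98.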

The cause of software failures is errors, which may be due to an internal property of the software or to the software's reaction to changes in the external operating environment. This means that even with the most thorough testing, assuming all internal errors have been eliminated, one cannot say with complete confidence that a failure will not occur during operation.

The main means of determining quantitative reliability indicators are reliability models, by which we mean mathematical models built to assess the dependence of reliability on parameters known in advance or estimated during the creation of the software tool. In this regard, the determination of reliability indicators is usually considered in the unity of three processes: prediction, measurement and evaluation.

Prediction is the determination of quantitative reliability indicators based on the characteristics of the future software tool.

Measurement is the determination of quantitative reliability indicators based on the analysis of data on intervals between failures obtained by executing programs under test conditions.

Evaluation is the determination of quantitative reliability indicators based on data on intervals between failures obtained by testing the software tool under real operating conditions.

All reliability models can be classified according to which of the listed processes they support (prediction, measurement, evaluation). It should be noted that reliability models that use data on intervals between failures as initial information can equally be classified as measuring or evaluating. Some models, based on information obtained during software testing, make it possible to make predictions about the behavior of the software in operation.

Let's consider analytical and empirical models of reliability.

Analytical models make it possible to calculate quantitative indicators of reliability based on data on the behavior of the program during testing (measuring and evaluating models).

Empirical models are based on an analysis of the structural features of programs. They consider the dependence of reliability indicators on the number of intermodule connections, the number of cycles in modules, the ratio of the number of linear sections to the number of branch points, and the like. It should be noted that empirical models often do not provide final values of the reliability indicators.

Analytical modeling of software reliability includes four steps:

Formulation of the assumptions associated with the software testing procedure;

Development or selection of an analytical model based on assumptions about the testing procedure;

Selection of model parameters using the obtained data;

Application of the model - calculation of quantitative indicators of reliability using the model.

Analytical models fall into two groups: dynamic and static. In dynamic models of software reliability, the behavior of the program (the occurrence of failures) is considered over time. In static models, the occurrence of failures is not associated with time; only the dependence of the number of errors on the number of test runs (error-domain models) or on the characteristics of the input data (data-domain models) is taken into account. To use dynamic models, data on the occurrence of failures over time are required. Static models differ fundamentally from dynamic ones in that they do not take into account the times at which errors appear during testing and make no assumptions about the behavior of the risk function λ(t). These models are built on a solid statistical foundation.

Corcoran model

Application of the model requires knowledge of the following:

the model assumes a different probability of failure for different sources of errors and, accordingly, a different probability of correcting them;

the model uses as parameters the results of N tests, in which Ni errors of the i-th type are observed;

an error of the i-th type is detected during the N trials with probability ai.

The reliability level indicator R is calculated by the formula

R = N0/N + Σ(i=1..k) Yi·(Ni − 1)/N,

where N0 is the number of failure-free (successful) runs in the series of N tests,

k is the known number of error types,

Yi is the probability of occurrence of errors of the i-th type:

for Ni > 0, Yi = ai;

for Ni = 0, Yi = 0.
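A sketch of the Corcoran computation, assuming the reliability estimate takes the form R = N0/N + Σ Yi·(Ni − 1)/N with Yi = ai for Ni > 0 and Yi = 0 otherwise; the numbers in the example are hypothetical.

```python
def corcoran_reliability(n_total, n_success, error_counts, a_priori):
    """Corcoran reliability estimate.

    n_total      -- number of tests N
    n_success    -- failure-free tests N0
    error_counts -- Ni, errors of type i observed, i = 1..k
    a_priori     -- ai, probability of detecting an error of type i
    """
    r = n_success / n_total
    for n_i, a_i in zip(error_counts, a_priori):
        y_i = a_i if n_i > 0 else 0.0     # Yi = ai only for observed error types
        r += y_i * (n_i - 1) / n_total
    return r


# Hypothetical example: 20 tests, 15 failure-free, two error types
r = corcoran_reliability(20, 15, [3, 2], [0.5, 0.3])
print(round(r, 3))  # → 0.815
```

Here R = 15/20 + 0.5·2/20 + 0.3·1/20 = 0.815; error types that never occurred (Ni = 0) contribute nothing.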

Schumann model

The Schumann model is a discrete-time dynamic model in which data are collected during software testing over fixed or random time intervals. The model assumes that testing is carried out in several stages, each representing execution of the program on the full set of developed test data. Identified errors are logged but not corrected. At the end of a stage, quantitative reliability indicators are calculated, the errors found are corrected, the test sets are adjusted, and the next testing stage is carried out. The Schumann model assumes that the number of errors in the program is constant and that no new errors are introduced during correction. The rate of error detection is proportional to the number of remaining errors.

It is assumed that before testing begins there are Et errors in the program. During testing time τ, εc(τ) errors per machine-language instruction are detected.

Thus, the specific number of errors per machine instruction remaining in the system after testing time τ is:

εr(τ) = Et/It − εc(τ),

where It is the total number of machine instructions, which is assumed constant during the testing phase.

It is assumed that the value of the failure rate function Z(t) is proportional to the number of errors remaining in the program after the time τ spent on testing:

Z(t) = C·εr(τ),

where C is a constant of proportionality and t is the failure-free operating time of the program.

Then, if the failure-free operating time t is counted from the moment t = 0 while τ remains fixed, the reliability function, i.e. the probability of failure-free operation in the interval from 0 to t, is

R(t, τ) = exp(−C·(Et/It − εc(τ))·t), (1.9)

and the mean time to failure is

tav = 1/(C·(Et/It − εc(τ))).

It remains to find the initial number of errors Et and the proportionality coefficient C. During testing, information is collected about the time and number of errors on each run, i.e. the total testing time τ is the sum of the times of the individual runs:

τ = τ1 + τ2 + τ3 + … + τn.

Assuming that the error rate is constant and equal to λ, it can be estimated as the number of errors per unit time, where Ai is the number of errors on the i-th run.

Having data for two different testing times τa and τb, chosen arbitrarily subject to the requirement εc(τb) > εc(τa), we can write the model equation for both moments:

λa = C·(Et/It − εc(τa)), λb = C·(Et/It − εc(τb)).

Dividing one equation by the other and solving for Et gives

Et = It·(λa·εc(τb) − λb·εc(τa))/(λa − λb),

after which the unknown parameter C is obtained by substituting Et into either equation, for example C = λa/(Et/It − εc(τa)). The probability of failure-free operation of the program is then predicted by formula (1.9).

Let us carry out the calculations for a training program.

For example, the program has It = 4381 statements.

During successive test runs, the following data were obtained:

Let us choose two points subject to the requirement that the number of errors found in the interval A–B be greater than in the interval 0–A. Take 2 runs for point A and 8 runs for point B. Then the errors found at the testing stages in the intervals 0–A and A–B are, respectively:

εс(τА) = 3 ⁄ 4381= 0.0007

εс(τВ) = 7 ⁄ 4381 = 0.0015.

The testing times in the intervals 0–A and A–B are 13 and 12, respectively.

Let us calculate the error rates over the two intervals:

λA = 3/13 = 0.23,

λB = 7/12 = 0.58.

Then the number of errors present before testing begins, Et, and the coefficient C are found from the two-point equations for τa and τb, after which the probability of failure-free operation during time t at the accumulated testing time τ is calculated by formula (1.9).

Let us take t = 60 min.

The resulting reliability of failure-free operation is quite high, and the likelihood of failures and errors is low.
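The two-point estimation of Et and C can be sketched as follows. Since the full run table is not reproduced in the text, the example below uses synthetic data chosen so that the model holds exactly (Et = 100 errors, It = 10000 instructions, C = 2 are illustrative values, not from the example above).

```python
import math


def schumann_estimate(it, eps_a, eps_b, lam_a, lam_b):
    """Two-point estimation of the Schumann model parameters.

    it           -- total number of machine instructions It
    eps_a, eps_b -- detected errors per instruction at times tau_a, tau_b
    lam_a, lam_b -- observed error rates at tau_a and tau_b
    Returns (Et, C).
    """
    et = it * (lam_a * eps_b - lam_b * eps_a) / (lam_a - lam_b)
    c = lam_a / (et / it - eps_a)
    return et, c


def schumann_reliability(t, c, et, it, eps_c):
    """R(t, tau) = exp(-C * (Et/It - eps_c(tau)) * t), formula (1.9)."""
    return math.exp(-c * (et / it - eps_c) * t)


# Synthetic, self-consistent data: Et = 100, It = 10000, C = 2,
# so that lam = C*(Et/It - eps_c) at each measurement point.
it = 10_000
eps_a, eps_b = 0.002, 0.005
lam_a = 2 * (100 / it - eps_a)   # 0.016
lam_b = 2 * (100 / it - eps_b)   # 0.010

et, c = schumann_estimate(it, eps_a, eps_b, lam_a, lam_b)
print(round(et), round(c, 6))  # → 100 2.0
```

On data generated by the model itself, the two-point procedure recovers Et and C exactly; with real test data the two estimates obtained from different point pairs will generally differ.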

La Padula model

See the methodological guide for diploma design (L.E. Kunitsyna), pages 27-29.


# 06, June 2016. UDC

Review of software systems for calculating the reliability of technical systems

Shalamov A.V., master's student, Russia, Moscow, Bauman MSTU, Department of Design and Production Technology of Electronic Equipment.
Scientific supervisor: Soloviev V.A., Associate Professor, Russia, Moscow, Bauman MSTU, Department of Design and Production Technology of Electronic Equipment.

Introduction

Currently, there are many reliability calculation systems on the market, of both foreign and Russian production. The most popular foreign reliability calculation systems include Relex, Risk Spectrum, A.L.D. and ISOgraph. Among the Russian systems, Arbitr, ASM and ASONIKA-K can be singled out. Some of the above systems, in addition to tools for calculating reliability parameters, allow a wide range of related engineering problems to be solved. Below, the listed software complexes (PCs) are considered in more detail from the point of view of their use for calculating the reliability of electronic equipment.

PC Relex and Risk Spectrum

PC Relex and Risk Spectrum allow logical-probabilistic analysis of the reliability and safety of technical systems, for example calculating the reliability of modern automated process control systems, optimizing man-made risk and determining the optimal parameters of a maintenance system for potentially dangerous objects. The Risk Spectrum software has mainly been used in the probabilistic safety analysis of nuclear power facilities at the design stage. The Risk Spectrum complex is used at more than 50% of the world's nuclear power plants and is included in the list of software certified by the Certification Council

for software tools of Gosatomnadzor of Russia in 2003. PC Relex and Risk Spectrum can be used to calculate the reliability not only of control or technological systems, but also of instrument-making products in transport and defense technology. The modeling and calculation of reliability and safety indicators of technical systems, widely used in Europe and the USA, are based on logical-probabilistic methods that use event trees and fault trees as a means of constructing graphical reliability models (Figure 1). The apparatus of mathematical logic makes it possible to formalize the operating conditions of complex technical systems and calculate their reliability. If it can be asserted that the system is operable when its elements A and B are operable, then the operability of the system (event C) and the operability of elements A and B (events A and B) are connected by the logical operability equation C = A ∧ B, where ∧ denotes the logical AND operation. The logical operability equation for this case can be represented by a diagram of the series connection of elements A and B. In general, an event tree is a graphical model describing the logic of development of the various variants of the emergency process caused by the initiating event under consideration. A fault tree is a graphical model displaying the logic of events that lead to system failure due to various combinations of equipment failures and personnel errors.

Fig. 1. Fault tree in the Relex PC

Youth Scientific and Technical Bulletin of the FS, ISSN

The fault tree includes graphic elements that represent elementary random events (basic events) and logical operators. Each logical operator of Boolean algebra corresponds to a specific graphic element, which makes it possible to decompose complex events into simpler (basic or elementary) ones. The fault tree module of the Relex PC uses logical-dynamic operators that take into account the dependence of events, timing relationships and priorities. It allows the following indicators to be calculated: probability of failure, unavailability, failure flow parameter, and average number of failures. The values of the indicators are calculated both for the top event and for each intermediate one. For each selected event, the sets of corresponding minimal cut sets can be viewed and analyzed. In the Risk Spectrum PC, the event tree is presented as a table containing a header line, a field with an open binary graph, and several columns with characteristics of the final states of the modeled object that are realized during the emergency sequences (Figure 2). The header of the first column of the table indicates the designation of the initiating events. Subsequent column headers, from left to right, contain the names and symbols of intermediate events corresponding to the successful or unsuccessful execution of safety functions, operable or failed states of safety systems or individual components (equipment and technical means), and correct or erroneous actions of personnel. The columns characterizing the final states (FS) indicate their numbers, symbols, types (for example, FS with core damage), probabilities of realization, and the logical formulas corresponding to these emergency sequences (ES). With the help of the ES, options for the development of the emergency process are displayed on the event tree.
In this case, an accident is understood as a sequence of events leading to a certain final state of the object, including the initiating event of the accident, successful or unsuccessful activation of safety systems, and the actions of personnel during the development of the accident. Many well-known foreign companies work with the Relex PC: LG, Boeing, Motorola, Dell, Cessna, Siemens, Raytheon, HP, Honda, Samsung, Cisco Systems, Nokia, EADS, 3M, NASA, Intel, GM, Kodak, AT&T, Philips, Pirelli, Qualcomm, Seagate, Emerson. The Relex Reliability Studio 2007 PC includes various analytical modules for solving a wide range of problems: predicting reliability, maintainability,

analysis of failure modes, effects and criticality, Markov analysis, statistical analysis, assessment of life-cycle cost, as well as reliability block diagrams, fault/event trees, the failure reporting, analysis and corrective action system FRACAS (Failure Reporting Analysis and Corrective Action System), a system for assessing human factors, and risk analysis.

Fig. 2. Binary event tree in the Risk Spectrum PC

The reliability prediction module contains models for calculating the reliability indicators of elements. It includes an extensive database containing classification characteristics of elements and reliability characteristics. Calculations are carried out in accordance with the following standards: MIL-HDBK-217, Telcordia (Bellcore) TR-332, Prism, NSWC-98/LE1, CNET93, HRD5, GJB299. The maintainability analysis module implements the provisions of the MIL-HDBK-472 standard on the maintainability of systems; problems of predicting preventive maintenance are solved. The module for analyzing failure modes, effects and criticality meets the standards MIL-STD-1629, SAE ARP 5580, etc. Dangerous failures are ranked and assessed according to risk priorities. The Reliability Block Diagram (RBD) module is used to analyze complex redundant systems. It contains both analytical methods and Monte Carlo simulation. The fault tree/event tree module allows procedures for deductive and inductive analysis of the development of failures and

events in the system to be implemented. It is used for reliability and safety analysis and contains a wide range of logical-functional vertices. The Markov modeling module of the Relex PC makes it possible to use Markov processes in modeling and analyzing system reliability. The models developed with this apparatus are dynamic and reflect the necessary temporal conditions and other features and dependencies that define the trajectory of system transitions in the space of possible states formed by failures and restorations of elements. The Relex Markov module implements Markov processes with a discrete set of states and continuous time, taking into account the following features of the functioning and redundancy of systems: incompatible types of element failures, the sequence of failures, changes in element failure rates depending on events that have already occurred (in particular, the degree of load on the reserve), the number of restoration teams (limited/unlimited), the order of restoration, restrictions on spare parts, different operating efficiencies in various system states, and income (losses) for transitions between states. Calculated indicators: the probability of each state, and the probability of failure-free operation (failure) over a given time interval. The Weibull statistical analysis module is designed for processing the results of tests and operation. To describe catastrophic failures on a bathtub-shaped failure rate curve, the normal, lognormal and Weibull distributions are widely used. For example, the Weibull distribution, which is a distribution of minimum values, is most often used when predicting the probability of failure-free operation and the mean time between failures for a given operating time of the complex technical system being designed. The lognormal and Weibull distributions describe failures characteristic of the aging period equally well.
The Weibull statistical analysis module supports various types of distributions, including the normal, Weibull, lognormal, uniform, exponential, Gumbel, Rayleigh, binomial and others. Presentation and analysis of data for selected classes of parametric distributions is carried out using the "probability paper" method: on it, the analyzed distribution is represented by a straight line, which provides clarity and allows all methods of regression analysis to be applied naturally, in particular checking the adequacy of the model and the significance of regression coefficients (Fisher analysis). For estimating distribution parameters, the module offers

a large set of methods, for example the Hazen and Benard methods and their modifications, binomial estimation, the method of averages, and the maximum likelihood method and its modification. Using the economic calculation module, the life-cycle cost is assessed at all stages of creation, operation and disposal of the system.

PC ASM

The best known of the domestic PCs is the software complex for automated structural-logical modeling (PC ASM). Its theoretical basis is the general logical-probabilistic method of system analysis, which implements all the capabilities of the basic apparatus for modeling in the algebra of logic in the basis of the operations AND, OR, NOT. The form of representation of the initial structure of the system is a functional integrity diagram, which allows practically all known types of structural models of systems to be displayed. The complex automatically generates analytical models of system reliability and safety and calculates the probability of failure-free operation, mean time to failure, availability factor, mean time between failures, mean recovery time, probability of failure of a restorable system, probability of readiness of a mixed system, as well as the significance and contribution of elements to the various reliability indicators of the system as a whole. PC ASM also allows automatic determination of the shortest paths of successful operation, minimal failure cut sets and their combinations. The main advantage of Russian systems over foreign ones is the lower cost of implementation and support, the absence of technological dependence, and the convenience of staff training.

PC ASONIKA-K

Also presented on the Russian market is the ASONIKA-K system, a software tool for solving problems of analyzing and ensuring reliability within the computer-aided design of electronic equipment. In terms of its capabilities, the ASONIKA-K subsystem is not inferior to the foreign PCs of A.L.D. Group, Relex, Isograph, etc. Its advantage is the ability to use ready-made domestically produced components, as well as Russian standards, in the calculations. It meets the requirements of the "Moroz-6" complex of military standards for electronic equipment for critical applications, the US standard MIL-HDBK-217 and the Chinese standard GJB/z 299B. ASONIKA-K is a software tool built on client-server technology. The PC server database contains

continuously updated information on the reliability of both domestic and foreign electronic products, built on principles that significantly facilitate its administration, including: editing data on the reliability of electronic components, editing mathematical models of components, and adding new component classes. The ASONIKA-K software package includes the following subsystems: a system for calculating the reliability characteristics of components, a system for calculating product reliability indicators, a results analysis system, a project archiving system, a reference system, a database maintenance system, a user administration system, a system for analyzing and accounting for the influence of external factors on reliability, and an information and reference system on the reliability characteristics of components of modern complex computer technology and electronic components. The database of the client part of the PC contains information about the electronic equipment being designed.

Fig. 3. Analysis of redundancy in the ASONIKA-K PC

This organization of the client part makes it possible to carry out calculations in parallel from several workstations. The client part of the program has a graphical post-processor and interfaces with systems for modeling physical processes and structural design, including ASONIKA-T, P-CAD 2001, ASONIKA-M, etc. The mathematical core of the PC contains reliability models

8 exponential and DN distributions and can be adapted to any other reliability model. It allows you to calculate REA containing up to four hierarchical levels of disaggregation and having various types of redundancy. The calculation results can be presented both in text and graphic form. PC ASONIKA-K allows you to carry out the following types of analysis of reliability calculations: analysis of the results of reliability calculations of REA, the SRN of which is random connection components (tree-like, hierarchical) and analysis of the calculation results of the components, with serial connection. The use of the ASONIKA-K PC makes it possible to increase the reliability of electronic equipment by redundant its components. Figure 3 shows the values ​​of the probability of failure-free operation, the availability factor and the operational readiness factor of the entire facility as a whole. Failures of component parts are sudden and represent independent events; the time to failure is a random variable distributed according to an exponential law with a constant failure rate λ. The function and density of the MTBF distribution, as well as the dependence of the failure rate of the designed electronic equipment using graphical analysis, are also shown. The PC allows you to perform reliability calculations using various types redundancy of components: sliding hot standby, hot standby and without redundancy, and also provides ways to monitor their performance (continuous/periodic). In the future, it is planned to add two more modules to the PC: a system for accounting for the influence of external factors on reliability characteristics and an information and reference system for reliability characteristics element base. Conclusion PC Relex, Risk Spectrum and ASM implement a class of models for assessing reliability indicators of technical systems of logical-probabilistic modeling. 
It can be called a class of statistical models, since such models allow one to calculate indicators of reliability, safety, and efficiency of systems at an arbitrary point in time, depending on the possible sets of operational and inoperative states of system elements. Individual modules of the A.L.D. Group (RAM Commander), Relex, and Isograph PCs can be used for automated calculation of the reliability of domestic electronic equipment, but only when it is built on imported electronic components, whose reliability is assessed using various foreign reference books.

Youth Scientific and Technical Bulletin of the FS, ISSN

The use of foreign PCs requires users to be highly trained in the field of mathematical statistics and its application to problems of reliability theory. Russian PCs are not inferior in capabilities to foreign ones and can be recommended for calculating the reliability of domestic electronic equipment built on both imported and domestic electronic components. Their main advantage is the ability to carry out reliability calculations using domestic component databases and standards.



Software reliability. Introduction

Software reliability is a mysterious and elusive thing. If you try to find something on this topic in Yandex, you will see a bunch of theoretical articles where a lot of clever words and formulas are written, but not a single article contains a single example of a real calculation of program reliability.

If you want to have a good understanding of technology reliability issues and become a highly paid specialist, I invite you to take my reliability training course.

Things are even better at space industry enterprises. When I asked the specialists of one Ural research-and-production association how they calculate software reliability, they rolled their eyes and said: "What do you mean? We just take it equal to one, and that's it. We ensure reliability through testing." I agree that this approach has a right to exist, but I wanted more. In short, I wrote my own method; here it is. Below is a calculator with which you can calculate the reliability of your software.

The problem of software reliability is becoming increasingly important due to the constant complication of developed systems, the expansion of the range of tasks assigned to them, and, consequently, a significant increase in the volume and complexity of software. In short, we have reached the day when hardware has become more reliable than software, and one error in the program code can ruin a space mission worth billions of dollars.

The reliability of software is determined by the presence of various types of errors in programs, usually introduced into it during development. By software reliability we mean the ability to perform specified functions, maintaining over time the values ​​of established operational indicators within specified limits corresponding to specified modes and conditions of execution. An error is understood as any failure of a program to perform specified functions. The occurrence of an error is a failure of the program.

Software reliability indicators

The most common indicators of software reliability are the following:
– the initial number of errors N0 in the software after the program is built and before debugging;
– the number of errors n in the software detected, and the number remaining, after each debugging stage;
– mean time between failures (MTBF), hours;
– probability of failure-free operation (PFO) of the software over a specified operating time, P(t);
– software failure rate λ, in units of 10^-6 1/h.

Simplified software reliability assessment

First, let's look at the methods that the domestic regulatory framework offers us. The only regulatory document on this topic is GOST 28195-99. Software reliability assessment in accordance with it uses a very simplified method that merely states the actual reliability based on operating experience of the software package: P(t) = 1 - n/N, where n is the number of failures during software testing and N is the number of experiments (test runs) during testing. Obviously, nothing can be predicted using this method.
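The GOST-style estimate above is trivial to compute. A minimal sketch (the failure and run counts are hypothetical):

```python
def gost_pfo(n_failures, n_experiments):
    """Simplified estimate per GOST 28195: P = 1 - n/N,
    where n failures were observed in N test runs."""
    return 1 - n_failures / n_experiments

# hypothetical data: 3 failures observed in 100 test runs
print(gost_pfo(3, 100))  # -> 0.97
```

As the text notes, this only describes past operating experience; it predicts nothing about future behavior.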

Statistical assessment of software reliability

Of much greater interest is the average statistical estimate of the initial number N0 of errors in software after offline debugging. According to this estimate, the number of errors per 1K words of code is 4.34 for low-level languages (Assembler) and 1.44 for high-level languages (C++). Unfortunately, it is not entirely clear what the authors meant by the phrase "1K words of code." In English-language literature, it is customary to use the thousand-lines-of-code parameter (KLOC; below, TSC). For the Windows 2000 operating system, for example, the error density is 1.8-2.2 per TSC. Considering that Windows 2000 is written in the C programming language and has a similar number of errors, it can be assumed with a high degree of certainty that the domestic authors had the TSC parameter in mind.
Domestic authors also provide statistical indicators of the software failure rate λ. They are given in Table 1.1.

Table 1.1

Unfortunately, the authors do not say for which software language this is valid. In addition, correction factors are introduced:

Table 1.2

And a coefficient reflecting the impact of program running time:

Table 1.3

Then the software failure rate λ is determined using tables 1.1-1.3 by the expression:

λ_sw = λ · Kr · Kk · Kz · Ki (1.1)

Calculation example 1.
Let the software volume be 1 MB.
Then, according to Table 1.1, λ = 6.
We use average correction factors. Let:
Kr = 2 (short period of software use)
Kk = 0.25 (high-quality software)
Kz = 0.25 (high frequency of software changes)
Ki = 1 (average workload level)
λ_sw = 0.1 · 10^-6 failures/hour

P(t) = exp(-λ·t) (1.2)
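Formulas (1.1)-(1.2) and the arithmetic of Example 1 can be sketched as follows. The base rate 6·10^-6 and the correction factors are the values quoted in the example; since Tables 1.1-1.3 are not reproduced here, treat them as assumptions:

```python
import math

def lambda_software(lam_base, Kr, Kk, Kz, Ki):
    # (1.1): software failure rate adjusted by correction factors
    return lam_base * Kr * Kk * Kz * Ki

def pfo(lam, t_hours):
    # (1.2): exponential probability of failure-free operation
    return math.exp(-lam * t_hours)

# coefficients as quoted in Example 1
lam = lambda_software(6e-6, 2, 0.25, 0.25, 1)  # failures/hour
print(lam)              # the raw product of the quoted coefficients
print(pfo(lam, 1000))   # PFO over 1000 hours of operation
```

With λ in hand, formula (1.2) gives the PFO for any operating time; note that the raw product of the quoted coefficients is 0.75·10^-6, so the final figure depends on which table entries are used.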

This statistical model for assessing software reliability has significant advantages over the simplified one, but it also has a number of serious shortcomings: in particular, it does not take the software development language into account and uses very coarse software-volume intervals. It is impossible, for example, to say how reliable a 2 GB program that must run for 10 years will be.
In addition, the correction factors are subjective; it is unknown where their values came from.
The quantitative model of software reliability assessment is an attempt to eliminate these shortcomings.

Quantitative model for assessing software reliability

This model is based on my assumption that the level of software reliability depends on the size of the software (in bits or thousands of lines of code). This statement does not contradict the classical theory of reliability, according to which the more complex an object is, the lower its reliability. It's logical. The more lines of code there are, the more errors there will be in the end and the lower the probability of failure-free operation of the program.
We use an estimate of the number of errors depending on the development language from the statistical model:

Table 1.4

Knowing V, the amount of software code, in bits, we can get the number of lines of this code. It is more convenient to use the TSC parameter.

TSC = V/146000 (1.3)

Using the data in Table 1.4, we can obtain β, the rate of errors per thousand lines of code:

β = 1.44 · TSC/1000 (1.4)

Calculation example 2.
The software volume is 10 MB; the development language is C++.
Then, according to (1.3)-(1.4), β will be 0.08.
This indicator is very close to the result of Example 1.

This is how the idea came to compare the parameter λ, the software failure rate obtained by the statistical model, and β, the coefficient of the number of software errors.

Now attention! As we can see, there is a strong correlation of results between the software failure rate taking into account correction factors and β - the coefficient of the number of software errors. The use of other correction factors leads to similar results.

We can assume that β, the coefficient I introduced, has a physical meaning close to that of λ, the failure rate: λ characterizes the frequency of failures, while β characterizes the frequency of errors in the program, and hence of failures. But note that λ and β are different. λ, once determined for a transistor, does not change with the number of transistors; β is a dynamic coefficient. The larger the program, the larger β. This is also logical: the bigger the program, the more errors it contains. In addition, it can be assumed that the authors of Table 1.1 compiled it for software in the C language.

Obviously, the longer a program runs, the higher the likelihood that it will fail.
Using the exponential reliability model (which assumes a constant flow of failures), we can obtain the PFO of the software:

P(t) = exp(-λ·t)

To summarize: in order to assess the reliability of software, it is necessary to know its development language (high- or low-level) and the volume of its code.
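The whole quantitative model fits in a few lines. A sketch under two assumptions: that V in formula (1.3) is measured in bits, and that β can stand in for λ in the exponential model:

```python
import math

# errors per thousand lines of code, per Table 1.4
ERRORS_PER_KLOC = {"high": 1.44,   # high-level languages (C++)
                   "low": 4.34}    # low-level languages (Assembler)

def software_reliability(volume_bits, level, t_hours):
    """Quantitative model sketch: (1.3) converts volume to thousands
    of lines (TSC), (1.4) gives the error coefficient beta, which is
    then used as the rate in P(t) = exp(-beta*t)."""
    tsc = volume_bits / 146000                   # (1.3)
    beta = ERRORS_PER_KLOC[level] * tsc / 1000   # (1.4)
    return beta, math.exp(-beta * t_hours)

# hypothetical input: 1 MB of high-level (C++) code, 100 hours of operation
beta, p = software_reliability(1 * 1024 * 1024 * 8, "high", 100)
print(beta, p)
```

Because β grows with program volume, it must be recomputed for every new software size, exactly as the text warns below.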

Chernov V.Yu., Nikitin V.G., Ivanov Yu.P. Reliability of aviation instruments and measuring and computing systems. M., 2004.
Avduevsky V.S. (ed.) Reliability and efficiency in technology: Handbook. 1988.
Hatton L. Estimating source lines of code from object code. 2005.

Now try to calculate something yourself. For example, find the reliability of software that is 100 MB in size and must run for 100 hours. Important! Note that when the software volume changes, λ must be recalculated each time for the specific software size.

Laboratory report on the topic:

Software Reliability Models

1. The Schumann model is based on the following assumptions:

    the total number of instructions in a machine language program is constant;

    at the beginning of integration testing, the number of errors is equal to some constant value, and it decreases as errors are corrected; no new errors are introduced during program testing;

    errors are initially distinguishable; the total number of corrected errors can be used to judge the remaining ones;

    the program failure rate is proportional to the number of residual errors.

It is assumed that before testing begins (i.e., at the moment τ = 0) there are M errors in the program. During testing time τ, ε1(τ) errors per machine-language instruction are detected.

Then the specific number of errors per machine instruction remaining in the system after testing time τ is equal to:

ε2(τ) = M/I - ε1(τ), (1)

where I is the total number of machine instructions, which is assumed to be constant during the testing phase.

It is assumed that the value of the function of the number of errors Z(t) is proportional to the number of errors remaining in the program after the time τ spent on testing.

Z(t) = C · ε2(τ),

where C is a certain constant, t is the operating time of the program without failures.

Then, if the program operating time without failure t is counted from the point t = 0, while τ remains fixed, the reliability function, i.e., the probability of failure-free operation in the interval from 0 to t, is equal to

R(t) = exp(-C · ε2(τ) · t), (2)

where the failure rate is

λ = C · ε2(τ) = C · (M/I - ε1(τ)). (3)

We need to find the initial number of errors M and the proportionality coefficient C. These unknowns are estimated by running functional tests at two points of the debugging axis, τa and τb, chosen so that ε1(τa) < ε1(τb).

During the testing process, information is collected about the time and number of errors on each run, i.e., the total testing time τ is the sum of the times of the individual runs:

τ = τ1 + τ2 + τ3 + … + τn.

Assuming that the error rate is constant and equal to λ, we can calculate it as the number of errors per unit time:

λ = (1/τ) · Σ Ai,

where Ai is the number of errors found on the i-th run.

Then the detected specific number of errors per instruction is

ε1(τ) = (1/I) · Σ Ai. (5)

Having data for two different testing moments τa and τb, we can write equation (3) for τa and τb:

λa = C · (M/I - ε1(τa)), (6)

λb = C · (M/I - ε1(τb)). (7)

From relations (6) and (7) we find the unknown parameters C and M:

C* = (λa - λb) / (ε1(τb) - ε1(τa)), (8)

M* = I · (λa · ε1(τb) - λb · ε1(τa)) / (λa - λb). (9)

Having obtained the estimates M* and C*, we can calculate the reliability of the program using formula (2).

Example 1.

The program contains 2,000 command lines; before operation (after the debugging period), 15 of them contain errors. After 20 days of operation, 1 error was detected. Find the mean time of error-free operation of the program and its failure rate, given a proportionality coefficient equal to 0.7.

Failure rate:
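One possible reading of Examples 1 and 2 in code, assuming the failure rate is λ = C · (number of remaining errors) / I, in the spirit of the hazard function Z(t) = C · ε2(τ); treat this interpretation of the numbers as an assumption:

```python
import math

def schumann_rate(C, errors_remaining, I):
    # lambda = C * (errors remaining per instruction)
    return C * errors_remaining / I

# Example 1: I = 2000 command lines, 15 errors before operation,
# 1 error found after 20 days, proportionality coefficient C = 0.7
lam = schumann_rate(0.7, 15 - 1, 2000)  # failures per day
T0 = 1 / lam                            # mean time of error-free operation, days
# Example 2: probability of error-free operation over 90 days
p90 = math.exp(-lam * 90)
print(lam, T0, p90)
```

Under this reading, λ ≈ 0.0049 failures per day, so T0 is on the order of 200 days.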

Example 2.

Using the conditions of example 1, determine the probability of error-free operation of the program for 90 days.

Example 3.

Determine the initial number of possible errors in a program containing 2,000 command lines, if 2 errors were discovered during the first 60 days of operation, and one more error over the next 40 days. Determine T0, the mean time of error-free operation corresponding to the first and second periods of operation of the program, and the proportionality coefficient.

Failure rates:

2. Mills model. Suppose that during testing n original (inherent) errors and v of the S deliberately seeded errors are discovered. Then the estimate of N, the initial number of errors in the program, is

N = S · n / v.

The second part of the model is related to testing the hypothesis about N expressed by this estimate.

Consider the case when the program contains K inherent errors and S seeded errors. We test the program until all seeded errors are found, accumulating the number of inherent errors detected along the way. The model's confidence estimate is then calculated as

C = 1, if n > K; C = S / (S + K + 1), if n ≤ K, (11)

i.e., as the probability that the program contains no more than K errors.

The value of C is a measure of confidence in the model and shows how likely it is that the value of N has been estimated correctly. These two related relationships form a useful error model: the first predicts the possible number of errors initially present in the program, and the second establishes a confidence level for the forecast.

The formula for calculating C in the case where not all artificially seeded errors have been detected is modified so that the estimate can be performed after v (v ≤ S) seeded errors are detected:

C = 1, if n > K; C = C(S, v-1) / C(S+K+1, K+v), if n ≤ K, (12)

where the numerator and denominator for n ≤ K are binomial coefficients.

Example 4.

Let's assume that the program has 3 inherent errors, and that we deliberately seed 6 more errors.

During testing we found:

1) all 6 seeded errors and 2 inherent errors;

2) 5 of the seeded errors and 2 inherent errors;

3) 5 of the seeded errors and 4 inherent errors.

Find the reliability estimate using the Mills model.

In each case, count n inherent and v seeded errors among those found; the solution is then obtained according to formulas (11)-(12).
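A sketch of the Mills estimate N = S·n/v and formula (11), applied to case 1 of Example 4 (all 6 seeded errors found, 2 inherent errors found; the hypothesis K = 3 matches the example setup):

```python
def mills_estimate(S, v, n):
    # estimate of the initial number of inherent errors: N = S*n/v
    return S * n / v

def mills_confidence(S, K, n):
    # (11): confidence that the program holds no more than K inherent
    # errors; valid when all S seeded errors have been found
    return 1.0 if n > K else S / (S + K + 1)

# Example 4, case 1: S = 6 seeded errors, all found (v = 6), n = 2 inherent
print(mills_estimate(6, 6, 2))    # -> 2.0
print(mills_confidence(6, 3, 2))  # -> 0.6
```

Cases 2 and 3, where only 5 of the 6 seeded errors are found, require the binomial form (12) instead.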

3. Simple intuitive model. Using this model involves testing by two groups of programmers (or two programmers depending on the size of the program) independently of each other, using independent test sets. During testing, each group records all the errors it finds.

Let the first group discover n1 errors, the second group n2 errors, and let n12 be the number of errors discovered by both groups.

Let us denote by N the unknown number of errors present in the program before testing begins. Then the testing efficiency of each group can be determined as

p1 = n1/N, p2 = n2/N.

Testing efficiency can be interpreted as the probability that an error will be found. Thus, we can assume that the first group detects an error in the program with probability p1, and the second with probability p2. Then the probability p12 that an error is detected by both groups can be taken equal to n12/N. On the other hand, since the groups act independently of each other, p12 = p1 · p2. We get:

n12/N = (n1/N) · (n2/N).

From here we obtain an estimate of the initial number of program errors:

N = n1 · n2 / n12.

Example 5.

In the process of testing the program, the first group found 15 errors and the second group found 25 errors; 5 errors were found by both groups. Determine the reliability using the simple intuitive model.
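The estimate of Example 5 in code (a minimal sketch; the "remaining errors" step is an extra inference not stated in the example):

```python
def intuitive_estimate(n1, n2, n12):
    # N = n1*n2/n12: estimated number of errors before testing began
    return n1 * n2 / n12

# Example 5: group 1 found 15 errors, group 2 found 25, 5 were common
N = intuitive_estimate(15, 25, 5)
found_distinct = 15 + 25 - 5    # errors found by at least one group
print(N)                        # -> 75.0
print(N - found_distinct)       # -> 40.0 errors presumably still undetected
```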

4. Corcoran model

Application of the model is based on the following:

    the model assumes different failure probabilities for different sources (types) of errors and, accordingly, different probabilities of correcting them;

    the model uses as parameters the results of N tests, in which Ni errors of the i-th type are observed;

    an error of the i-th type is detected during the N tests with probability ai.

The reliability level indicator R is calculated using the following formula:

R = N0/N + Σ (i = 1..k) Yi · (Ni - 1) / N,

where N0 is the number of failure-free (successful) tests in the series of N tests, k is the known number of error types, ai is the probability of identifying an error of the i-th type during testing,

and Yi is the error-probability weight: Yi = ai for Ni > 0, Yi = 0 for Ni = 0.
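The Corcoran formula is easy to evaluate once the test tallies are known. A sketch with hypothetical data, since the probability tables of Examples 6-7 are not reproduced in the text:

```python
def corcoran_reliability(N, N0, observations):
    """R = N0/N + sum(Y_i * (N_i - 1) / N), where observations is a
    list of (a_i, N_i) pairs and Y_i = a_i if N_i > 0, else 0."""
    R = N0 / N
    for a_i, N_i in observations:
        Y_i = a_i if N_i > 0 else 0.0
        R += Y_i * (N_i - 1) / N
    return R

# hypothetical data: 100 tests, 80 successful; (a_i, N_i) per error type
data = [(0.09, 5), (0.26, 8), (0.17, 1), (0.17, 0)]
print(corcoran_reliability(100, 80, data))
```

Error types that never occurred (Ni = 0) contribute nothing, and a type observed exactly once (Ni = 1) also adds zero, since (Ni - 1) vanishes.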

Example 6.

100 tests of the program were carried out. 20 out of 100 tests were unsuccessful, and in the remaining cases the following data were obtained:

Error type

Probability of error a i

1. Calculation errors

2. Logical errors

3. I/O errors

4. Data manipulation errors

5. Pairing errors

6. Data definition errors

7. Errors in the database

Assess reliability using the Corcoran model.

Initial data:


Example 7. 100 tests of the program were carried out. 20 out of 100 tests were unsuccessful, and in the remaining cases the following data were obtained:

Error type, i

Probability of error occurrence ai

Number of errors N i during testing
