What is a bank card transaction? Operation prohibition


From January 1 of this year, the law has obliged banks to inform their customers about every transaction made. At the same time, the way in which the client must be informed is not fixed by law. Banks that care about their customers use the most convenient and timely method of notification - SMS. The rest use methods that are cheaper for themselves - notification by e-mail or messages in the personal account on the bank's website.

Whichever method your bank chooses, you should monitor the alerts carefully, because when it comes to fraudulent transactions, literally every minute can improve or ruin your chances of recovering the stolen amount.

Please keep in mind that fraud is any transaction with your card that is not made by you. There are several types of fraud:

  • Your card is stolen or lost and then used without your permission.
  • You did not receive a new card (or a replacement card) from the issuer and did not know that it was in the wrong hands until you received documentation of transactions that you did not make. Your personal information is used by another person to apply for the card. This type of fraud is very difficult to detect unless the issuing bank receives a complaint from the customer or the account is audited shortly after opening. If you are not a client of this bank, you may not know that someone has received a card under your name until you apply for a loan and are denied due to bad credit history.
  • The account statement contains data on transactions that you did not make, this may mean that a counterfeit card with the same number as yours is in circulation.
  • An attacker with fake documents, posing as the cardholder, gains control of the holder's account by requesting a replacement card for the same account, usually asking that it be sent to a different address. You usually find out about this when you receive a statement on the status of your account, or when bills stop arriving at your address.
  • You have your card, but an attacker performs transactions using the card number, for example, ordering goods by mail, telephone, or online.

If you find yourself in any of these situations, the first thing you need to do is contact your bank. By law, the bank must compensate you for the money missing from your card if a complaint about an illegal transaction was received within the first 24 hours after the funds were written off.

According to Deputy Head of Card Business Development Stepan Zaitsev, if the client does not report a disputed transaction within 24 hours, he can do so later. You can write a claim at any bank office. Further consideration will be carried out in the manner established by the bank. The period for responding to a client’s request is 30 working days (up to 60 – with the participation of foreign acquiring banks in the operation). To clarify the entire situation, the bank may request additional documents and information from the client.

1. Transactions and blocking

2. Transaction concept

When working with databases, errors and failures are possible. They can be caused by mistakes of users interacting with the DBMS or by unstable operation of computers. Therefore, the DBMS provides special methods for undoing the actions that caused such errors. An SQL command that affects the contents or structure of a database is not irreversible: the user can determine what happens when the work is finished - whether the changes made to the database are kept or discarded. To do this, sequences of operations on the database are combined into groups - transactions.

A transaction is a sequence of operations performed on a database that transfers it from one consistent state to another consistent state.

A transaction is considered an indivisible action on the database that is meaningful from the user's point of view; that is, it is a logical unit of the system's operation. A transaction begins whenever a database session occurs.

An example of a transaction is transferring money through an ATM. An amount of 100 thousand rubles is transferred from the current account to the card account. The program subtracts the amount from the current account and then adds it to the card account. Suppose that while the program is running, a power failure occurs after the first modification, and the card account is never credited. To avoid such a situation, both commands must be combined into a transaction: if not all commands in a transaction are executed, the transaction is rolled back.

Let's define a transaction for entering data about books newly received by the library. This operation can be divided into two sequential steps: first, data about the book is entered as a new row in the Books table; then data about all copies of the book is entered as a set of new rows in the Instances table. If this sequence of actions is interrupted, the database will no longer correspond to the real-world stock, so it should be performed as a single database operation.
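The library example can be sketched with Python's sqlite3 module, which wraps a group of statements in a transaction that is committed as a whole or rolled back as a whole. The table and column names here are illustrative, not taken from a real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (book_id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("CREATE TABLE Instances (copy_id INTEGER PRIMARY KEY, book_id INTEGER)")

def add_book(conn, book_id, title, copy_ids):
    """Insert a book and all its copies as a single transaction."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("INSERT INTO Books VALUES (?, ?)", (book_id, title))
            for copy_id in copy_ids:
                conn.execute("INSERT INTO Instances VALUES (?, ?)", (copy_id, book_id))
    except sqlite3.Error:
        return False  # the rollback restored the previous consistent state
    return True

ok = add_book(conn, 1, "SQL Basics", [101, 102, 103])
# A second attempt reuses copy number 101: the constraint violation
# rolls back the whole transaction, including the new Books row.
failed = add_book(conn, 2, "Another Book", [201, 101])
```

Because the second call fails partway through, its Books row is also discarded, and the database stays consistent with the actual library stock.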

3. Transaction properties. Ways to complete transactions

There are various transaction models that can be classified based on various properties including transaction structure, intra-transaction concurrency, duration, etc.

Currently, the following types of transactions are distinguished: flat or classic transactions, chain transactions and nested transactions.

Flat, or classic, transactions are characterized by the four classic ACID properties: atomicity, consistency, isolation, and durability (Atomicity, Consistency, Isolation, Durability).

· The property of atomicity is expressed in the fact that the transaction must be completed as a whole or not at all.

· The consistency property ensures that as a transaction progresses, data moves from one consistent state to another consistent state—the transaction does not destroy the mutual consistency of the data.

· The isolation property means that transactions competing for access to the database are physically processed sequentially, isolated from each other, but to users it appears as if they are being executed in parallel.

· The durability property means that if a transaction is completed successfully, the data changes it made cannot be lost under any circumstances, even in the event of subsequent errors.

There are two options for completing a transaction:

· If all statements execute successfully and no software or hardware failures occur during the transaction, the transaction is committed. (A commit is a write to disk of changes to the database that were made during the execution of a transaction.) As long as the transaction is not committed, these changes can be undone and the database can be returned to the state it was in when the transaction began. Committing a transaction means that all results of the transaction become permanent. They will become visible to other transactions only after the current transaction is committed.

· If a transaction fails, the database must be returned to its original state. Rolling back a transaction is the action of undoing all data changes that were made by SQL statements in the body of the current pending transaction.

4. Transact-SQL statements for working with transactions

The ANSI/ISO standard defines the COMMIT and ROLLBACK statements. In this standard, the start of a transaction is specified implicitly by the first data modification statement. The COMMIT statement means successful completion of the transaction: its results are recorded in external memory. The ROLLBACK statement aborts the transaction: its results are undone. Successful completion of the program in which the transaction was initiated means successful completion of the transaction (as if the COMMIT statement had been used); unsuccessful completion aborts the transaction (as if the ROLLBACK statement had been used). In this model, each statement that changes the state of the data is considered a transaction. This model was implemented in the first versions of commercial DBMSs; later, an extended transaction model was implemented in the SYBASE DBMS.

The extended transaction model (for example, in the SQL SERVER DBMS) provides a number of additional operations:

· the BEGIN TRANSACTION statement signals the start of a transaction;

· the COMMIT TRANSACTION statement reports the successful completion of the transaction. Like COMMIT in the ANSI/ISO standard model, it records all changes that were made in the database during the transaction;

· the SAVE TRANSACTION statement creates a save point inside the transaction, corresponding to the intermediate state of the database at the time this statement is executed. A SAVE TRANSACTION statement may carry a save point name, so several save points corresponding to several intermediate states can be remembered during one transaction;

· the ROLLBACK statement has two modifications. Used without a parameter, it rolls back the entire transaction; used with a save point name as a parameter, it partially rolls the transaction back to that save point.

Savepoints are useful in long, complex transactions to allow changes made by certain statements to be reversed.
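As an illustration, SQLite exposes the same idea through its SAVEPOINT syntax (SAVE TRANSACTION is the Transact-SQL spelling); a minimal sketch in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN TRANSACTION")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")           # remember the intermediate state: one row
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("INSERT INTO t VALUES (3)")
conn.execute("ROLLBACK TO sp1")         # partial rollback: undo rows 2 and 3 only
conn.execute("COMMIT")                  # row 1 becomes permanent

rows = [r[0] for r in conn.execute("SELECT x FROM t")]
```

Only the changes made after the save point are undone; the work done before it survives and is committed with the rest of the transaction.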

In most cases, you can set a parameter called AUTOCOMMIT, which automatically commits every successfully executed command, while actions that led to an error are automatically undone. Typically this mode is set using a command like:

SET AUTOCOMMIT ON;

and the return to normal interactive query processing is made with:

SET AUTOCOMMIT OFF;

In addition, AUTOCOMMIT can be enabled so that the DBMS turns it on automatically when a session is opened. If a user session ends abnormally, for example because of a system failure, the current transaction is automatically rolled back. It is not recommended to organize work so that single transactions contain many commands, especially unrelated ones: when changes are cancelled, too many actions will be undone, including ones that were necessary and did not cause errors. The best option is a transaction consisting of one command or several closely related commands.
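The effect of autocommit mode can be observed in Python's sqlite3 module, where setting isolation_level to None plays the role of AUTOCOMMIT ON (the SET AUTOCOMMIT command itself is DBMS-specific):

```python
import sqlite3

# isolation_level = None puts the connection in autocommit mode:
# every statement is committed as soon as it executes.
auto = sqlite3.connect(":memory:")
auto.isolation_level = None
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")   # already durable, no COMMIT needed
in_tx_auto = auto.in_transaction           # False: nothing is pending

# Default mode: a transaction is opened implicitly by the first INSERT
# and its changes stay pending until commit() or rollback().
manual = sqlite3.connect(":memory:")
manual.execute("CREATE TABLE t (x INTEGER)")
manual.execute("INSERT INTO t VALUES (1)")
in_tx_manual = manual.in_transaction       # True: the INSERT is not yet committed
manual.rollback()                          # undo the pending INSERT
count = manual.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

In the default mode the INSERT is undone by the rollback, which mirrors the automatic rollback of a session that ends abnormally.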

A trigger executes as an implicitly defined transaction, so transaction control commands can be used within it. In particular, when an integrity constraint violation is detected, the ROLLBACK TRANSACTION command must be used to abort the trigger and undo any changes the user attempted to make. If the trigger completes successfully, the COMMIT TRANSACTION command can be used.
Executing ROLLBACK TRANSACTION or COMMIT TRANSACTION does not interrupt the trigger, so you should carefully guard against attempts to roll back a transaction several times when different conditions are met.

Transaction example:

BEGIN TRAN
UPDATE account
SET balance = balance - 100
WHERE account_number = @s_account  -- WHERE clause assumed; the original statement had none
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
UPDATE card_account
SET balance = balance + 100
WHERE account_number = @s_account
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
COMMIT TRAN

The BEGIN TRAN command notifies the server that a transaction has begun. This means that until the server receives the COMMIT TRAN command, all changes are temporary. Therefore, if the server crashes after the first update, the transaction will be rolled back. No other process can access the modified data until the transaction is completed.

5. Transaction log.

Saving intermediate states and confirming or rolling back transactions is supported by a special mechanism, for which a system structure called the transaction log is maintained. The transaction log contains a sequence of records about changes to the database. It is designed to ensure reliable storage of data in the database, which presumes the possibility of restoring a consistent database state after any kind of hardware or software failure. General principles of logging and recovery:

· the results of committed transactions must be saved in the restored state of the database;

· the results of uncommitted transactions should not be present in the restored state of the database.

This means that the most recently consistent state of the database is restored.

The following situations are possible in which it is necessary to restore the database state:

· Recovery from sudden loss of RAM contents (soft crash). This situation may occur in the following cases: during a power outage or when a fatal processor failure occurs. The situation is characterized by the loss of that part of the database that was in the RAM buffers at the time of the failure.

· Recovery after failure of the main external database storage media (hard failure).

The system must be able to recover from both minor disruptions (for example, failed transactions) and major failures (for example, power failures, hard failures).

In the event of a soft failure, it is necessary to restore the contents of the database using the contents of the transaction logs stored on the disks. In the event of a hard failure, it is necessary to restore the contents of the database using archived copies and transaction logs that are stored on undamaged external media.

There are two main options for logging information. In the first option, each transaction maintains its own local log of the database changes made by that transaction. Such logs are called local logs; they are used for local transaction rollbacks. In addition, a general database change log is maintained, which is used to restore the database after soft and hard failures.

This approach allows you to quickly perform individual transaction rollbacks, but leads to duplication of information in the local and shared logs. Therefore, the second option is more often used - maintaining only a general database change log, which is also used when performing individual rollbacks.

The general structure of the log can be represented as a sequential file that records every database change that occurs during the execution of a transaction. All transactions have internal numbers, so the transaction log records all changes made by all transactions.

Each log entry is marked with the number of the transaction to which it relates and the values of the attributes that it changes; in addition, for each transaction, the commands that begin and end it are recorded in the log.

For greater reliability, the transaction log is often duplicated by the DBMS system tools, which is why the amount of external memory is many times greater than the actual amount of data in the database.

There are two options for transaction logging: a protocol with deferred updates and a protocol with immediate updates.

Logging based on the principle of deferred updates assumes the following mechanism for executing transactions:

1. When transaction T1 begins, an entry is made in the protocol

T1 Begin Transaction

2. During the execution of the transaction, for each modified record the new value is written to the protocol:

T1, RECORD_ID, attribute, new value

(RECORD_ID is the unique record number)

3. If all the actions that make up the transaction complete successfully, the transaction is partially committed and the following is entered into the protocol:

T1 COMMIT

4. Once the transaction is committed, the log records related to T1 are used to make changes to the database.

5. If a failure occurs, the DBMS examines the log and determines which transactions need to be redone. Transaction T1 must be redone if the protocol contains both the T1 Begin Transaction entry and the T1 COMMIT entry. The database may be in an inconsistent state, but all new values of the changed data items are contained in the log, so the transaction must be re-executed. To do this, the system procedure REDO() is used, which replaces all data element values with the new ones, scanning the protocol in forward order.

6. If the protocol does not contain the COMMIT command for a transaction, then no action against the database is required, and the transaction is started again.

An alternative mechanism, with immediate updates, makes changes to the database immediately, and not only the new but also all old values of the changed attributes are entered into the protocol, so each record looks like this:

T1, RECORD_ID, attribute, old value, new value

In this case, writing to the log precedes the execution of the operation on the database. When a transaction is committed, that is, when the T1 COMMIT command is encountered and executed, all changes have already been made to the database, and no further actions are required for this transaction.

When a transaction is rolled back, the system procedure UNDO() is executed, which restores all the old values of the cancelled transaction, sequentially going through the protocol starting from the BEGIN TRANSACTION command.

To recover from a failure, the following mechanism is used:

· If the protocol contains the start-of-transaction command for a transaction but no commit command confirming its completion, then the same sequence of actions is performed as when rolling back a transaction, that is, the old values are restored.

In fact, restoration occurs using more complex algorithms, because changes, both in the log and in the database, are not recorded immediately, but are buffered. Change logging is closely related not only to transaction management, but also to buffering database pages in RAM. If a database change record, which should be logged when any database modification operation is performed, was actually immediately written to external memory, this would result in a significant system slowdown. Therefore, entries in the log are also buffered: during normal operation, the next page is pushed into the external memory of the log only when it is completely filled with entries.
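The redo/undo logic described above can be sketched as a toy recovery procedure over an immediate-update log. The log layout and names here are purely illustrative; real DBMS logs are far more elaborate (buffering, checkpoints, sequence numbers):

```python
# Toy immediate-update log: a CHANGE record carries
# (record_id, attribute, old_value, new_value); BEGIN/COMMIT mark boundaries.
log = [
    ("T1", "BEGIN"),
    ("T1", "CHANGE", ("acct", "balance", 500, 400)),
    ("T1", "COMMIT"),
    ("T2", "BEGIN"),
    ("T2", "CHANGE", ("card", "balance", 0, 100)),
    # crash here: T2 never committed
]

def recover(db, log):
    committed = {t for t, *rest in log if rest[0] == "COMMIT"}
    # REDO committed transactions, scanning the log in forward order
    for t, *rest in log:
        if rest[0] == "CHANGE" and t in committed:
            rec, attr, old, new = rest[1]
            db[(rec, attr)] = new
    # UNDO uncommitted transactions, scanning the log in reverse order
    for t, *rest in reversed(log):
        if rest[0] == "CHANGE" and t not in committed:
            rec, attr, old, new = rest[1]
            db[(rec, attr)] = old
    return db

# Possibly inconsistent state found on disk after the crash:
db = {("acct", "balance"): 450, ("card", "balance"): 100}
recovered = recover(db, log)
```

After recovery, the committed transaction T1 is reflected in the data, while every trace of the uncommitted T2 is gone, which is exactly the pair of logging principles stated above.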

6. Locks.

In multi-user systems with a single database, multiple users or applications can work simultaneously. One of the main tasks of a DBMS is to ensure user isolation, that is, to create such an operating mode that each user feels as if he is working with the database alone. This DBMS task is commonly called transaction parallelism.

There are three main problems with parallel database processing:

§ Lost changes (the lost update problem). This situation occurs when two transactions simultaneously change the same record in the database. For example, two operators are taking orders. The first operator accepts an order for 30 monitors; when he queries the warehouse, 40 monitors are listed there, and, having received confirmation from the client, he processes the sale of 30 monitors out of 40. In parallel, a second operator accepts an order for 20 of the same monitors; querying the warehouse, he receives the same value of 40 and places the order for his client. Having finished working with the data, he executes an UPDATE command that records 20 as the number of monitors remaining in the warehouse. After this, the first operator finishes with his customer and also executes an UPDATE command, which records 10 as the number of monitors in stock. In total they have sold 50 monitors out of the 40 available, yet the database shows 10 monitors still in stock.

§ The intermediate data problem. This is associated with the ability to access uncommitted intermediate data. Suppose the first operator, negotiating with his customer, entered the ordered 30 monitors, but before finalizing the order the client wanted to find out some additional characteristics of the product. The application operator 1 is working with has already changed the monitor balance in the warehouse, which now shows 10 monitors remaining. At this time, the second operator tries to accept an order from his customer for 20 monitors, but his application shows that only 10 monitors are left, and the operator is forced to refuse his customer. Then the customer of the first operator decides not to buy the monitors, the operator rolls back the transaction, and there are 40 monitors in the warehouse again. This situation became possible because the second operator's application had access to the intermediate data that the first application generated.

§ The inconsistent data problem. This is associated with the possibility of data already read by one application being changed by another. Both operators start working almost simultaneously and receive an initial warehouse state of 40 monitors; then the first operator sells 30 monitors to his customer, finishes his work, and his application issues a COMMIT. The database state is consistent. At this moment, the customer of the second operator decides to place an order, and the second operator, accessing the data again, sees that the number of monitors has changed. The second operator concludes that the integrity of the transaction has been violated, because during a single piece of work he received two different warehouse states. This situation occurred because the first operator's application was able to modify a data tuple that the second operator's application had already read.
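The first of these anomalies, the lost update, can be reduced to a few lines of arithmetic. This is a schematic single-threaded simulation of the two operators' interleaved reads and writes, not real concurrent code:

```python
# Simulation of the monitor example: both operators read the stock (40),
# then each writes back a value computed from his own stale read.
stock = {"monitors": 40}

def interleaved():
    s = dict(stock)
    read1 = s["monitors"]        # operator 1 reads 40
    read2 = s["monitors"]        # operator 2 also reads 40
    s["monitors"] = read2 - 20   # operator 2 writes 20
    s["monitors"] = read1 - 30   # operator 1 overwrites with 10: the -20 is lost
    return s["monitors"]

def serial():
    s = dict(stock)
    s["monitors"] -= 30          # operator 1's order
    s["monitors"] -= 20          # operator 2's order
    return s["monitors"]

lost = interleaved()   # the database claims 10 monitors remain
correct = serial()     # serial execution would go negative, exposing the oversell
```

Serial execution would have driven the balance to -10, so the second order would have been refused; the interleaved execution silently loses one of the updates instead.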

Summarizing the above problems, we can distinguish the following types of conflicts between two parallel transactions:

· W-W – transaction 2 tries to change an object changed by transaction 1 that has not completed;

· R-W – transaction 2 tries to change an object read by transaction 1 that has not completed;

· W-R – transaction 2 tries to read an object changed by transaction 1 that has not completed.

7. Transaction serialization

In order to avoid such conflicts, it is necessary to develop some procedure for the coordinated execution of parallel transactions. This procedure must satisfy the following rules:

1. During the transaction, the user sees only the agreed upon data. The user should not see inconsistent intermediate data.

2. When two transactions are executed in parallel in the database, the results of their execution should be the same as if transaction 1 had been executed first and then transaction 2, or vice versa.

The procedure that implements these principles is called transaction serialization. It ensures that each user accessing the database works with it as if there were no other users simultaneously accessing the same data. The result of the joint execution of transactions is equivalent to the result of some sequential execution of the same transactions.

The simplest solution would be to execute transactions sequentially, but such a solution is not optimal in terms of time; there are more flexible methods for managing parallel access to the database. The most common mechanism for solving these problems is to lock an object (for example, a table) for the duration of the transaction. If a transaction accesses a locked object, it remains in a pending state until the object is unlocked, at which point it can begin processing it. However, blocking creates new problems - transaction delays due to blocking.

So, locks, also called object synchronization locks, can be applied to different types of objects. The largest object of locking can be the entire database, but this type of locking will make the database inaccessible to all other applications that work with this database. The next type of locking object is tables. A transaction that operates on a table locks it for the duration of the transaction. This type of locking is preferable to the previous one because it allows parallel execution of transactions that operate on other tables.

A number of DBMSs implement page-level locking. In this case, the DBMS only locks individual pages on disk when a transaction accesses them. This type of locking is even softer, and allows different transactions to work on the same table if they access different data pages.

In some DBMSs, row-level locking is possible, but such a locking mechanism requires additional costs to support. SQL Server strives to implement record-level locking to ensure maximum concurrency. As the number of row locks increases, the server may switch to page locks if the number of records exceeds a threshold.

8. Overriding locks at the request level. Types of locks

If the table name in the FROM clause is followed by one of the following keywords, the query overrides the lock manager and the specified lock type is applied:

· NOLOCK - allows dirty reading;

· PAGLOCK - page-level locking;

· ROWLOCK - record-level locking;

· TABLOCK - shared table lock;

· TABLOCKX - exclusive table locking

Currently, the problem of blocking is the subject of a large number of studies.

There are two basic types of locks (synchronization locks):

Shared (non-hard) locks – This mode means shared locking of an object and is used to perform a read operation on an object. Objects locked in this way do not change during the execution of a transaction and are accessible to other transactions, but only in read mode;

Exclusive (hard) locks – do not allow anyone other than the owner of this lock to access the data. These locks are used for commands that change the contents or structure of a table and last until the end of the transaction.

Locks of the same object by multiple read transactions are compatible; that is, multiple transactions are allowed to read the same object. A lock of an object by one transaction for reading is not compatible with a lock of the same object by another transaction for writing. Locks of the same object by different write transactions are incompatible.
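These compatibility rules are often written as a matrix; a minimal sketch (S denotes a shared/read lock, X an exclusive/write lock):

```python
# Compatibility of the two basic lock types:
# a requested lock is granted only if it is compatible with every held lock.
COMPATIBLE = {
    ("S", "S"): True,   # many readers may lock the same object
    ("S", "X"): False,  # a reader blocks a writer
    ("X", "S"): False,  # a writer blocks a reader
    ("X", "X"): False,  # writers exclude each other
}

def can_grant(requested, held_locks):
    """Grant `requested` only if compatible with all locks already held."""
    return all(COMPATIBLE[(held, requested)] for held in held_locks)

r1 = can_grant("S", ["S", "S"])  # another reader joins two readers
r2 = can_grant("X", ["S"])       # a writer must wait for the reader
r3 = can_grant("S", ["X"])       # a reader must wait for the writer
```

A transaction whose request is not granted is placed in the waiting state described above until the incompatible locks are released.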

However, using different types of locks leads to the problem of deadlocks. The deadlock problem first arose in the study of parallel processes in operating systems and was likewise associated with the management of shared objects. An example of a deadlock: let transaction A place an exclusive lock on table 1 and then request an exclusive lock on table 2, while transaction B, on the contrary, locks table 2 first and then table 1.

If both of these transactions started at the same time, then after performing modification operations on the first table, they will both end up waiting indefinitely: transaction A will wait for transaction B to complete and table 2 to be unlocked, and transaction B will wait in vain for transaction A to complete and table 1 to be unlocked.

Situations can be much more complex. The number of mutually blocked transactions may be much greater. Each transaction cannot detect this situation on its own. The DBMS must resolve it. Most commercial DBMSs have a mechanism to detect such deadlocks.

The basis of deadlock detection is the construction (or constant maintenance) of a transaction waiting graph. The wait graph can be a directed graph with transaction names at its vertices. If transaction T1 waits for transaction T2 to finish, then an arrow goes from vertex T1 to vertex T2. Additionally, arrows can be labeled with the names of blocked objects and the type of blocking.
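Deadlock detection then reduces to finding a cycle in this wait-for graph; a sketch using depth-first search:

```python
# Deadlock detection on a wait-for graph: vertices are transactions,
# an edge T1 -> T2 means "T1 waits for T2". A cycle means deadlock.
def has_deadlock(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:          # back edge: a cycle of waiting transactions
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(t) for t in list(wait_for) if t not in visited)

# The example from the text: A waits for B (table 2), B waits for A (table 1).
deadlocked = has_deadlock({"A": ["B"], "B": ["A"]})
ok = has_deadlock({"A": ["B"], "B": []})  # no cycle: B is not waiting for anyone
```

When a DBMS finds such a cycle, it typically breaks the deadlock by choosing one transaction in the cycle as a victim and rolling it back.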

The locking mechanism uses the concept of a lock isolation level, which determines how many tables will be locked. Traditionally, three levels of isolation are used:

· An isolation level called repeatable read implements a strategy in which, within a given transaction, all records retrieved by queries cannot be modified. These records cannot be changed until the transaction is completed.

· An isolation level called cursor stability prevents each record from being modified while it is being read, or from being read while it is being modified.

· The third isolation level is called read only. Read only locks the entire table and therefore cannot be used with modification commands. Thus, read only ensures that the query output is internally consistent with the table data.

So, the concurrency control facility in a DBMS determines the extent to which simultaneously issued commands interfere with each other. In modern DBMSs it is an adaptive mechanism that automatically finds the optimal solution, balancing maximum database performance against data availability for the commands being executed.

9. CONTROL QUESTIONS

1. Define transaction. Give examples of transactions.

2. List and describe the properties of transactions.

3. What are the possible options for completing transactions?

4. Which SQL statements serve to work with transactions in the extended transaction model?

5. Can I use transaction control commands in triggers?

6. What is the purpose of a transaction log?

7. In what cases is database recovery performed using the transaction log?

8. What transaction logging options are there?

9. What are the differences between the two transaction logging options, the deferred-update protocol and the immediate-update protocol?

10. What problems arise when users work in parallel with a database?

11. What database objects can be locked to implement the principle of user isolation?

12. Is it possible to set the type of blocking in queries?

13. What types of object acquisition by multiple transactions exist? Which ones are compatible?

14. What is the deadlock problem?

There are various transaction models that can be classified based on various properties including transaction structure, intra-transaction concurrency, duration, etc.

Currently, the following types of transactions are distinguished: flat or classic transactions, chain transactions and nested transactions.

Flat, or traditional, transactions are characterized by four classical properties: atomicity, consistency, isolation, durability (strength) - ACID (Atomicity, Consistency, Isolation, Durability). Traditional transactions are sometimes called ACID transactions. The properties mentioned above mean the following:

· Atomicity property(Atomicity) is expressed in the fact that the transaction must be completed in its entirety or not at all.

· Consistency property(Consistency) ensures that as transactions are executed, data moves from one consistent state to another - the transaction does not destroy the mutual consistency of the data.

· Isolation property Isolation means that transactions competing for access to the database are physically processed sequentially, isolated from each other, but to users it appears as if they are being executed in parallel.

· Durability property(Durability) is interpreted as follows: if a transaction is completed successfully, then the changes to the data that were made by it cannot be lost under any circumstances (even in the event of subsequent errors).

There are two options for completing a transaction. If all statements complete successfully and no software or hardware failures occur during the transaction, the transaction is committed.

Committing a transaction is the action of writing to disk the changes in the database that were made during the execution of a transaction.

As long as the transaction is not committed, it is permissible to undo these changes, restoring the database to the state it was in at the start of the transaction. Committing a transaction means that all results of the transaction become permanent. They will become visible to other transactions only after the current transaction has been committed. Until this point, all data affected by the transaction will be “visible” to the user in the state at the start of the current transaction.

If something happens during a transaction that makes it impossible for it to complete normally, the database must be returned to its original state. Rolling back a transaction is the action of undoing all data changes that were made by SQL statements in the body of the current pending transaction.

Each statement in a transaction does its part of the work, but for the entire work to complete successfully, all of their statements must complete unconditionally. Grouping statements in a transaction tells the DBMS that the entire group should be executed as a single unit, and that such execution should be supported automatically.

The ANSI/ISO SQL standard defines the transaction model and the functions of the COMMIT and ROLLBACK statements. The standard defines that a transaction begins with the first SQL statement, initiated by the user or contained in a program, that changes the current state of the database. All subsequent SQL statements constitute the body of the transaction. The transaction ends in one of four possible ways (Figure 11.1):

1. the COMMIT statement signals successful completion of the transaction; it makes permanent the changes made to the database within the current transaction;

2. The ROLLBACK statement aborts a transaction, undoing changes made to the database as part of that transaction; a new transaction begins immediately after ROLLBACK is used;

3. successful completion of the program in which the current transaction was initiated means the successful completion of the transaction (as if the COMMIT statement had been used);

4. erroneous termination of the program aborts the transaction (as if the ROLLBACK statement had been used).

In this model, each statement that changes the state of the database is considered a transaction, so upon successful completion of this statement, the database goes to a new stable state.
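The COMMIT and ROLLBACK outcomes described above can be sketched with SQLite from Python's standard library; the accounts table and its values are illustrative, not from the text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

# Outcome 1: COMMIT makes the change permanent.
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
conn.commit()

# Outcome 2: ROLLBACK undoes everything since the last commit.
conn.execute("UPDATE accounts SET balance = balance - 1000 WHERE id = 1")
conn.rollback()

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 60
```

The committed withdrawal of 40 survives; the rolled-back withdrawal of 1000 leaves no trace, and the database returns to the state it had at the start of that transaction.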

The first versions of commercial DBMSs implemented the ANSI/ISO transaction model. Subsequently, an expanded transaction model was implemented in the SYBASE DBMS, which includes a number of additional operations. The SYBASE model uses the following four statements:

· The BEGIN TRANSACTION statement signals the start of a transaction. Unlike the ANSI/ISO model, where the start of a transaction is specified implicitly by the first data modification statement, in the SYBASE model the start of a transaction is specified explicitly with this statement.

· The COMMIT TRANSACTION statement indicates the successful completion of a transaction. It is equivalent to the COMMIT statement in the ANSI/ISO standard model. This statement, like the COMMIT statement, records all changes that were made in the database during the transaction.

· The SAVE TRANSACTION statement creates a save point inside the transaction, which corresponds to the intermediate state of the database saved at the time of execution of this statement. The SAVE TRANSACTION statement can contain the name of the save point. Therefore, during the execution of a transaction, several save points may be remembered, corresponding to several intermediate states.

· The ROLLBACK statement has two forms. Used without a parameter, it rolls back the entire transaction and is equivalent to the ROLLBACK statement in the ANSI/ISO model. Written with a parameter, in the form ROLLBACK B, it performs a partial rollback of the transaction to savepoint B.
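A partial rollback to a savepoint can be sketched with SQLite, whose SAVEPOINT and ROLLBACK TO statements play the same role as SAVE TRANSACTION and the parameterized ROLLBACK in the SYBASE model. The table name below is illustrative, not from the text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")                # explicit start, as in the SYBASE model
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT b")          # intermediate state B
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO b")        # partial rollback: undoes only x = 2
conn.execute("COMMIT")

print([r[0] for r in conn.execute("SELECT x FROM t")])  # [1]
```

Only the work done after the savepoint is undone; the insert of 1, made before SAVEPOINT b, survives the partial rollback and is committed.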

Fig. 11.1. ANSI/ISO transaction model

The principles of executing transactions in the extended transaction model are presented in Fig. 11.2. In the figure, the operators are marked with numbers to make it easier for us to track the progress of the transaction in all valid cases.

Fig. 11.2. Examples of transaction execution in the extended model

The transaction begins with an explicit start-transaction statement, numbered 1 in our scheme. Statement 2 is a query and does not change the current state of the database, while statements 3 and 4 move the database to a new state. Statement 5 saves this new intermediate state of the database and marks it as savepoint A. Statements 6 and 7 follow, moving the database to another new state, and statement 8 saves that state as savepoint B. Statement 9 enters new data, and statement 10 checks condition 1; if condition 1 holds, statement 11 is executed, rolling the transaction back to the intermediate state B. The effects of statement 9 are thereby erased and the database returns to state B, even though after statement 9 it had already reached a newer state. After this rollback, statement 13 enters new data in place of statement 9, which had previously been executed from state B, and control passes to statement 14.

Statement 14 again checks a condition, this time condition 2; if it holds, control passes to statement 15, which rolls the transaction back to the intermediate state A. All statements that changed the database, from 6 through 13, are then considered unexecuted: the results of their execution disappear and we are back in state A, just as after the execution of statement 4. Control then passes to statement 17, which updates the contents of the database, and on to statement 18, which checks condition 3. The check ends either by passing control to statement 20, which commits the transaction, so that the database moves to a new stable state that cannot be changed within the current transaction; or, if control passes to statement 19, the transaction is rolled back to its beginning and the database returns to its initial state. All intermediate savepoints are discarded at that point, and no rollback to them is possible once statement 19 has executed.

Of course, SYBASE's extended transaction model supports a much more flexible transaction mechanism. Savepoints let you set markers within a transaction so that only part of the work done in it can be undone. Savepoints are useful in long and complex transactions, where they make it possible to undo the changes of particular statements. However, this costs additional system resources: a statement performs its work and the changes are then cancelled, so improving the processing logic is usually a better solution.

Transaction log

Saving intermediate states and committing or rolling back a transaction is ensured in the DBMS by a special mechanism, supported by a system structure called the transaction log.

However, the purpose of a transaction log is much broader. It is designed to ensure reliable storage of data in the database.

This requirement implies, in particular, the ability to restore a consistent database state after any kind of hardware or software failure. Obviously, some additional information is needed to perform such a reconstruction. In the vast majority of modern relational DBMSs, this redundant additional information is maintained in the form of a log of database changes, most often called the transaction log.

So, the general purpose of logging database changes is to ensure that the database can be restored to a consistent state after any failure. Since the basis for maintaining the integrity of the database is the transaction mechanism, logging and recovery are closely related to the concept of transaction. The general principles of recovery are the following:

· the results of committed transactions must be saved in the restored state of the database;

· The results of uncommitted transactions should not be present in the restored state of the database.

This, in fact, means that the most recently consistent state of the database is restored.

The following situations are possible in which it is necessary to restore the state of the database.

· Individual transaction rollback. This rollback should be applied in the following cases:

o the standard situation for rolling back a transaction is its explicit completion with the ROLLBACK statement;

o abnormal termination of the application program, which is logically equivalent to executing the ROLLBACK statement, but physically has a different execution mechanism;

o forced transaction rollback in case of deadlock during parallel execution of transactions. In such a case, to break the deadlock, this transaction can be selected as a “victim” and its execution can be forcibly terminated by the DBMS kernel.

· Recovery from sudden loss of RAM contents (soft crash). This situation may arise in the following cases:

o in case of emergency shutdown of electrical power;

o when a fatal processor failure occurs (for example, RAM control is triggered), etc. The situation is characterized by the loss of that part of the database that was contained in RAM buffers at the time of the failure.

· Recovery after failure of the main external database storage media (hard failure). This situation, given the fairly high reliability of modern external memory devices, may occur relatively rarely, but nevertheless, the DBMS should be able to restore the database even in this case. The basis of recovery is the backup copy and the database change log.

To restore a consistent database state when an individual transaction is rolled back, you must undo the database modification statements that were executed in that transaction. To restore a consistent database state in the event of a soft failure, it is necessary to restore the contents of the database using the contents of the transaction logs stored on disks. To restore a consistent database state in the event of a hard failure, it is necessary to restore the contents of the database using archived copies and transaction logs that are stored on undamaged external media.

In all three cases, the basis of recovery is redundant data storage. This redundant data is stored in a log containing a sequence of database change records.

There are two main options for logging information. In the first option, each transaction maintains a separate local log of database changes by that transaction. These logs are called local logs. They are used for individual transaction rollbacks and can be maintained in RAM (more correctly, in virtual) memory. In addition, a shared database change log is maintained, which is used to recover the state of the database after soft and hard failures.

This approach allows you to quickly perform individual transaction rollbacks, but leads to duplication of information in the local and shared logs. Therefore, the second option is more often used - maintaining only a general database change log, which is also used when performing individual rollbacks. Next we consider this option.

The general structure of the log can be conditionally presented in the form of a sequential file, which records every change in the database that occurs during the execution of a transaction. All transactions have their own internal numbers, so a single transaction log records all changes made by all transactions.

Each transaction log entry is labeled with the transaction number to which it relates and the attribute values ​​that it changes. In addition, for each transaction, the command to start and end the transaction is recorded in the log (see Figure 11.3).

For greater reliability, the transaction log is often duplicated by the system tools of commercial DBMSs, so the amount of external memory used is several times larger than the actual volume of data stored in the database.

There are two alternative transaction logging options: a delayed update protocol and an immediate update protocol.

The deferred-update protocol assumes the following transaction execution mechanism:

1. When transaction T1 begins, the record <T1 BEGIN TRANSACTION> is written to the log.

2. During the execution of the transaction, a new value is written to the log for each changed record: <T1, ID_RECORD, new value>. Here ID_RECORD is the unique record number.

3. If all the actions that make up transaction T1 complete successfully, the transaction is partially committed and the record <T1 COMMIT> is written to the log.

4. Once the transaction is committed, the log records related to T1 are used to make appropriate changes to the database.

5. If a failure occurs, the DBMS examines the log to determine which transactions need to be redone. Transaction T1 must be redone if the log contains both the <T1 BEGIN TRANSACTION> and <T1 COMMIT> records. The database may be in an inconsistent state, but all new values of the changed data elements are contained in the log, so the transaction is re-executed. This is done by a system procedure REDO(), which scans the log in forward order and replaces the values of the data elements with the new ones.

6. If the log does not contain the <T1 COMMIT> record, no action is required; the transaction is simply started again.

Fig. 11.3. Transaction log
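The deferred-update steps above can be sketched as a toy Python model: the database is only touched from the log, and only for transactions whose log contains both BEGIN and COMMIT records. The record identifiers and values are invented for illustration.

```python
# Toy deferred-update log: tuples of (transaction, action, ...).
log = [
    ("T1", "BEGIN"),
    ("T1", "WRITE", "rec7", 42),   # <T1, ID_RECORD, new value>
    ("T1", "COMMIT"),
    ("T2", "BEGIN"),
    ("T2", "WRITE", "rec9", 99),   # T2 never committed before the crash
]

def redo(log, db):
    # only transactions with a COMMIT record are replayed
    committed = {t for t, *rest in log if rest[0] == "COMMIT"}
    # scan the log in forward order, applying writes of committed transactions
    for t, *rest in log:
        if rest[0] == "WRITE" and t in committed:
            _, rec_id, new_value = rest
            db[rec_id] = new_value
    return db

print(redo(log, {}))  # {'rec7': 42}
```

T2's write never reaches the database: with deferred updates there is nothing to undo for an uncommitted transaction, which is exactly step 6 of the protocol.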

An alternative mechanism, with immediate update, applies changes to the database at once, and not only the new but also all old values of the changed attributes are written to the log, so each record looks like <T1, ID_RECORD, attribute, new value, old value, ...>. In this case, writing to the log precedes the execution of the operation on the database. When the transaction commits, that is, when the record <T1 COMMIT> is encountered and executed, all changes have already been made to the database and no further actions are required for this transaction.

When a transaction is rolled back, the system procedure UNDO() is executed; it restores all the old values changed by the cancelled transaction, scanning the log in reverse order back to the BEGIN TRANSACTION record.
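The immediate-update protocol and its UNDO() procedure can be sketched the same way; here each log record keeps both the old and the new value, and undo walks the log backwards. All names and values are illustrative.

```python
# Toy immediate-update protocol: the database is changed at once,
# and the log keeps old values so the change can be undone.
db = {"rec7": 1, "rec9": 2}
log = []

def write(txn, rec_id, new_value):
    log.append((txn, rec_id, db[rec_id], new_value))  # <T, ID_RECORD, old, new>
    db[rec_id] = new_value                            # change applied immediately

def undo(txn):
    # scan the log in reverse order back to the start of the transaction,
    # restoring the old value of every record the transaction changed
    for t, rec_id, old_value, _ in reversed(log):
        if t == txn:
            db[rec_id] = old_value

write("T1", "rec7", 10)
write("T1", "rec9", 20)
undo("T1")
print(db)  # {'rec7': 1, 'rec9': 2}
```

After undo("T1") the database is back in its state before the transaction, even though both writes had already been applied.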

To recover from a failure, the following mechanism is used:

· If the log contains a transaction's BEGIN TRANSACTION record but no COMMIT record confirming its completion, the same sequence of actions is performed as when rolling back a transaction: the old values are restored.

· If the failure occurred after the last database modification command was executed but before the commit was acknowledged, the commit is completed during recovery and no changes need to be made to the database itself; the work happens only at the log level.

· However, it should be noted that real recovery is considerably more complicated than the algorithms described above, because changes both to the log and to the database are not written out immediately but are buffered. The next section is devoted to this.

Journaling and Buffering

Change logging is closely related not only to transaction management, but also to buffering database pages in RAM.

If the database change record that should be logged when any database modification operation is performed were actually written to external memory immediately, this would result in a significant system slowdown. Therefore, entries in the log are also buffered: during normal operation, the next page is pushed into the external memory of the log only when it is completely filled with entries.

The problem is to develop some general push policy that would ensure that the state of the database can be restored after failures.

The problem does not occur with individual transaction rollbacks, because in these cases the contents of RAM are not lost and the contents of both the log buffer and the database page buffers can be used. But if there is a soft failure and the contents of the buffers are lost, some consistent state of the log and database in external memory must be available to perform database recovery.

The basic principle of a consistent policy for flushing the log buffer and the database page buffers is that a change to a database object must be recorded in the external memory of the log before the modified object reaches the external memory of the database. The corresponding logging (and buffer management) protocol is called Write Ahead Log (WAL): before a changed database object is written to external memory, the log records describing its changes must first have been written to the external memory of the log.

In other words, if in the external memory of the database there is a certain database object in relation to which a modification operation has been performed, then in the external memory of the log there is necessarily a record corresponding to this operation. The reverse is not true, that is, if the external memory log contains a record of some modification operation of a database object, then the modified object itself may not be in the external database memory.

An additional condition for pushing out buffers is the requirement that each successfully completed transaction must be actually committed to external memory. Whatever failure occurs, the system must be able to restore a database state containing the results of all transactions committed at the time of the failure.

A simple solution would be to flush the log buffer, followed by a bulk flush of the database page buffers modified by the transaction. Quite often this is done, but it causes significant overhead when performing the transaction commit operation.

It turns out that the minimum requirement to ensure that the last consistent state of the database can be restored is that when a transaction is committed, all records of changes to the database by that transaction are pushed into external log memory. In this case, the last journal entry made on behalf of this transaction is a special entry about the end of the transaction.
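The WAL invariant described above can be expressed as a minimal sketch: a page may only be flushed to "disk" after the log records describing its modification have been flushed. The class and names are invented for illustration.

```python
# Minimal sketch of the Write Ahead Log rule: the log reaches
# external memory before any page it describes does.
class WalBuffer:
    def __init__(self):
        self.log_buffer, self.log_disk = [], []    # log records in RAM / on disk
        self.dirty_pages, self.page_disk = {}, {}  # modified pages / database file

    def modify(self, page, value):
        self.log_buffer.append((page, value))      # log record written first, in RAM
        self.dirty_pages[page] = value

    def flush_log(self):
        self.log_disk.extend(self.log_buffer)
        self.log_buffer.clear()

    def flush_page(self, page):
        # WAL rule: if the page's log records are still buffered,
        # force the log out before the page itself
        if any(p == page for p, _ in self.log_buffer):
            self.flush_log()
        self.page_disk[page] = self.dirty_pages.pop(page)

buf = WalBuffer()
buf.modify("p1", "new")
buf.flush_page("p1")
print(buf.log_disk, buf.page_disk)  # [('p1', 'new')] {'p1': 'new'}
```

Whatever order of flushes the buffer manager chooses, the log record for a change is guaranteed to be on disk no later than the modified page, which is exactly the invariant recovery relies on.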

Let's now look at how database restore operations can be performed in a variety of situations if the system maintains a common log for all transactions with shared record buffering, supported by the WAL protocol.

Individual transaction rollback

In order to be able to perform an individual transaction rollback using the general log, all log entries for a given transaction are linked into a reverse list. The beginning of the list for pending transactions is the record of the last database change made by this transaction. For completed transactions (individual rollbacks of which are no longer possible), the beginning of the list is the record about the end of the transaction, which is necessarily pushed into the external log memory. The end of the list is always the first record about the database change made by this transaction. Typically, each record is given a unique transaction identifier so that a direct list of records of database changes by a given transaction can be reconstructed.

So, an individual transaction rollback (we emphasize once again that this is only possible for unfinished transactions) is performed as follows:

· The next record is selected from the list of this transaction.

· The opposite operation is performed: instead of the INSERT operation, the corresponding DELETE operation is performed, instead of the DELETE operation, INSERT is performed, and instead of the direct UPDATE operation, the reverse UPDATE operation is performed, restoring the previous state of the database object.

· Each of these inverse operations is also logged. For the individual rollback itself this is not strictly necessary, but a soft failure may occur while the rollback is in progress, and during recovery it will then be necessary to roll back a transaction whose individual rollback was not completed.

· When the rollback completes successfully, an end-of-transaction entry is written to the log. From the log's point of view, such a transaction is committed.
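The rollback procedure just described can be sketched as a toy model: walk the transaction's log records in reverse, apply the inverse of each operation, log the inverse, and finish with an end-of-transaction record. The record format and data are illustrative.

```python
# Toy individual rollback over a shared log.
# Log records: (txn, op, record_id, old_value, new_value).
INVERSE = {"INSERT": "DELETE", "DELETE": "INSERT", "UPDATE": "UPDATE"}

def rollback(txn, log, db):
    for t, op, rec_id, old, new in reversed(log):
        if t != txn:
            continue
        inverse = INVERSE[op]
        if inverse == "DELETE":
            del db[rec_id]          # undo an INSERT
        else:
            db[rec_id] = old        # undo a DELETE, or reverse an UPDATE
        # the inverse operation is itself logged, in case a soft
        # failure interrupts the rollback midway
        log.append((txn, inverse, rec_id, new, old))
    log.append((txn, "END", None, None, None))  # transaction is now "committed"

db = {"a": 1}
log = [("T1", "UPDATE", "a", 1, 5), ("T1", "INSERT", "b", None, 7)]
db["a"], db["b"] = 5, 7             # state after T1's operations
rollback("T1", log, db)
print(db)  # {'a': 1}
```

The INSERT is undone by a DELETE and the UPDATE by a reverse UPDATE, and from the log's point of view T1 ends with an end-of-transaction record, just as the text describes.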

Recovering from a soft failure

A major challenge to recovering from a soft failure is that a single logical database update operation can change multiple physical database blocks, such as a data page and multiple index pages. Database pages are buffered in RAM and ejected independently. Despite the use of the WAL protocol, after a soft failure, the set of external memory pages of the database may be inconsistent, that is, some of the external memory pages correspond to the object before the change, and some - after the change. Logical level operations are not applicable to such an object state.

The state of the external database memory is said to be physically consistent if the sets of pages of all objects are consistent, that is, they correspond to the state of the object either before it was changed or after it was changed.

We will assume that the log marks points of physical consistency of the database: points in time at which external memory contains the consistent results of all operations completed before that moment and no results of operations that had not completed, and the log buffer has been flushed to external memory. A little later we will look at how physical consistency can be achieved. Let us call such points tpc (time of physical consistency).

Then, at the time of soft failure, the following transaction states are possible:

· the transaction completed successfully: the COMMIT operation was executed, and confirmation was received that all of the transaction's operations reached external memory;

· the transaction was completed successfully, but for some operations confirmation of their execution in external memory was not received;

· the transaction received and executed the ROLLBACK command;

· the transaction has not been completed.

Physical Database Consistency

How can one ensure the presence of points of physical consistency of the database, that is, how can the state of the database at the moment tpc be restored? Two main approaches are used for this: the shadow mechanism and page-level logging of database changes.

When opening a file, the table mapping the numbers of its logical blocks to the addresses of physical blocks of external memory is read into RAM. When any block of a file is modified, a new block is allocated in external memory. In this case, the current mapping table (in RAM) is changed, and the shadow table remains unchanged. If a failure occurs while working on an open file, external memory automatically saves the state of the file before it was opened. To explicitly restore a file, it is enough to read the shadow mapping table into RAM again.

The general idea of ​​the shadow mechanism is shown in Fig. 11.4.

Fig. 11.4. Using shadow tables for mapping information

In the context of a database, the shadow mechanism is used as follows. Operations are periodically performed to establish database physical consistency points (checkpoints). To do this, all logical operations are completed, all RAM buffers whose contents do not match the contents of the corresponding external memory pages are ejected. The shadow table for mapping database files is replaced with the current one (more correctly, the current mapping table is written in place of the shadow one).

Restoring to tpc is instantaneous: the current mapping table is replaced with a shadow one (during recovery, the shadow mapping table is simply read). All recovery problems are solved, but at the cost of too much external memory. In the limit, you may need twice as much external memory as is actually needed to store the database. The shadow mechanism is a reliable, but too crude tool. A consistent state of external memory is ensured at one point in time common to all objects. In fact, it is enough to have a collection of consistent sets of pages, each of which can have its own time references.
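The shadow mechanism can be sketched in a few lines: a checkpoint copies the current mapping table over the shadow one, modifications change only the current table, and recovery simply reads the shadow table back. Page and block names are invented for illustration.

```python
# Toy shadow mechanism: logical page -> physical block mapping tables.
current = {"page1": "block_10"}
shadow = dict(current)

def checkpoint():
    global shadow
    shadow = dict(current)          # shadow table now reflects a consistent state

def modify(page, new_block):
    current[page] = new_block       # new block allocated; shadow stays untouched

def recover():
    global current
    current = dict(shadow)          # instantaneous: just reread the shadow table

modify("page1", "block_11")
checkpoint()
modify("page1", "block_12")
recover()
print(current)  # {'page1': 'block_11'}
```

The change made after the checkpoint (block_12) vanishes on recovery, while the checkpointed state survives, at the cost of keeping two copies of the mapping and, potentially, two copies of modified blocks.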

To meet this weaker requirement, page-by-page changes are logged in addition to logical logging of database change operations. The first stage of recovery from a soft failure consists of a page-by-page rollback of pending logical operations. Just as with logical records for transactions, the last record of page changes from a single logical operation is the end of operation record.

In this approach, there are two methods to solve the problem. When using the first method, a common log of logical and page operations is maintained. Naturally, the presence of two types of records, interpreted completely differently, complicates the structure of the journal. In addition, records of page changes, the relevance of which is local in nature, significantly (and not very meaningfully) increase the log.

Therefore, maintaining a separate (short) page change log is becoming increasingly popular. This technique is used, for example, in the well-known product Informix Online.

Let's assume that we have somehow managed to restore the external memory of the database to its state at the moment tpc (how this can be done is discussed a little later). Then:

· For transaction T1 no action is required. It ended before tpc, and all its results are reflected in the external database memory.

· For transaction T2, you need to redo the remaining operations (redo). Indeed, in external memory there are completely no traces of operations that were performed in transaction T2 after the moment tpc. Therefore, directly reinterpreting T2's operations again is correct and will result in a logically consistent database state (since transaction T2 completed successfully before the soft failure, the log contains a record of all changes made by that transaction).

· For transaction T3, the first part of its operations must be undone (undo) in reverse order. Indeed, the external memory of the database contains none of the results of T3's operations performed after the moment tpc. On the other hand, the external memory is guaranteed to contain the results of T3's operations performed before the moment tpc. Therefore, reverse interpretation of T3's operations is correct and leads to a consistent database state (since transaction T3 had not completed at the time of the soft failure, all consequences of its execution must be eliminated during recovery).

· For transaction T4, which managed to start after the moment tpc and end before the moment of soft failure, you need to perform a complete re-interpretation of operations (redo).

· Finally, for transaction T5 that started after the tpc moment and did not have time to complete by the time of the soft failure, no action is required. The results of this transaction's operations are completely absent from external database memory.
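The five cases above reduce to a small decision rule: given the consistency point tpc and each transaction's start and end times (end is None for a transaction still running at the crash), recovery either does nothing, redoes work, or undoes work. The function name and time values are illustrative.

```python
# Classify a transaction for soft-failure recovery relative to tpc.
def recovery_action(start, end, tpc):
    if end is not None and end <= tpc:
        return "nothing"        # T1: finished before tpc, fully on disk
    if end is not None and start <= tpc:
        return "redo tail"      # T2: committed, but its tail is missing on disk
    if end is None and start <= tpc:
        return "undo head"      # T3: uncommitted, its head reached disk
    if end is not None:
        return "redo all"       # T4: ran entirely after tpc and committed
    return "nothing"            # T5: ran entirely after tpc, uncommitted

tpc = 100
cases = [(10, 50), (20, 120), (30, None), (110, 140), (130, None)]
print([recovery_action(s, e, tpc) for s, e in cases])
# ['nothing', 'redo tail', 'undo head', 'redo all', 'nothing']
```

The rule makes explicit why T1 and T5 need no work at all: everything T1 did is already in external memory, and nothing T5 did ever reached it.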

Recovering from a Hard Crash

It is clear that the database change log is not sufficient to restore the last consistent state of the database after a hard failure. The basis of recovery in this case is the log and an archive copy of the database.

Recovery begins by restoring the database from the backup copy. Then, for all completed transactions, a redo is performed: their operations are re-executed in forward order.

More precisely, the following happens:

· all operations are performed according to the log in the forward direction;

· For transactions that were not completed at the time of failure, a rollback is performed.

In fact, since a hard crash does not result in the loss of memory buffers, it is possible to recover the database to a level where even pending transactions can continue. But usually this is not done, because recovery from a hard failure is a rather lengthy process.

Although there are special requirements for maintaining a log in terms of reliability, it is, in principle, possible to lose it. Then the only way to restore the database is to return to the backup copy. Of course, in this case you won't be able to get the last consistent state of the database, but it's better than nothing.

The last issue we'll look at briefly concerns producing database backups. The easiest way is to archive the database when the log becomes full. A so-called "yellow zone" is introduced in the log; once it is reached, the start of new transactions is temporarily blocked. When all transactions have completed and the database is therefore in a consistent state, it can be archived, after which filling the log starts again.

You can back up your database less often than the log becomes full. When the log is full and all started transactions have ended, you can archive the log itself. Since such an archived log is essentially only required to recreate the archived copy of the database, the log information can be significantly compressed during archiving.

The concept of transaction is at the core of the relational paradigm. A transaction consists of one or more DML commands followed by either a ROLLBACK or a COMMIT. It is possible to use the SAVEPOINT command for specific control within a transaction. Before looking at the syntax, it is necessary to review the concept of transactions. Related to this topic is the topic of consistent reading; this is implemented automatically at the Oracle server level, but some programmers can control it using SELECT commands.

Oracle's mechanism for ensuring transactional integrity is based on a combination of undo segments and the log file: this mechanism is undoubtedly the best created to date and fully meets international data processing standards. Manufacturers of other databases implement the standard in their own ways. In short, any relational database must pass the ACID test: it must guarantee atomicity (A), consistency (C), isolation (I) and durability (D).

Atomicity

The principle of atomicity states that either all parts of a transaction must succeed or none of them. For example, if a business analyst has approved a rule that whenever an employee's salary changes the employee's level must change as well, then your atomic transaction will consist of two parts. The database must ensure that either both changes are applied or neither is. If only one change succeeded, you would have an employee whose salary is incompatible with his level: data corruption in business terms. If anything at all goes wrong before the transaction is committed, the database must ensure that all work done since the start of the transaction is undone, and this must happen automatically. Although transaction atomicity may sound like a small thing, transactions can be long and very important. Consider another example: a ledger cannot contain data for half of August and half of September. Closing a month is, from a business point of view, one atomic transaction that may process millions of rows and thousands of tables and run for several hours (or be cancelled if something goes wrong). A transaction can be cancelled manually (by issuing the ROLLBACK command), but in the event of an error the rollback must be automatic and unconditional.
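The salary/level example can be sketched with SQLite from Python's standard library: both updates commit together or neither does. The table, column names, and the CHECK constraint standing in for the business rule are all illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (id INTEGER PRIMARY KEY, salary INTEGER, level INTEGER,
                      CHECK (level > 0));
    INSERT INTO emp VALUES (1, 1000, 1);
""")

try:
    with conn:  # one atomic transaction: commits on success, rolls back on error
        conn.execute("UPDATE emp SET salary = 2000 WHERE id = 1")
        conn.execute("UPDATE emp SET level = 0 WHERE id = 1")  # violates CHECK
except sqlite3.IntegrityError:
    pass        # the whole transaction was rolled back automatically

print(conn.execute("SELECT salary, level FROM emp").fetchone())  # (1000, 1)
```

Even though the salary update itself succeeded, the failure of the second statement rolls back both: the row keeps its original, mutually consistent values.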

Consistency

The data consistency principle states that the result of a query must be consistent with the state of the database at the time the query starts. Let's imagine a simple query that calculates the average value of a column in a table. If the table is large, it will take quite a long time to go through all the rows of the table. If other users are updating data while the query is running, should the query take the new values ​​or the old ones? Should the query result take into account rows that were added or ignore rows that were removed? The principle of consistency requires that the database ensure that any changes after the start of a query are not visible to that query; the query must return the average value of the column at the time the query was launched, regardless of how long the query lasted or what changes were made to the data. Oracle guarantees that if the request is completed successfully, the result will be consistent. However, if the database administrator has not configured the database appropriately, the query may fail with the famous “ORA-1555 snapshot too old” error. Previously, it was very difficult to resolve such errors, but in the latest versions, the administrator can easily resolve these situations.

Isolation

The isolation principle states that an unfinished (uncommitted) transaction must be invisible to the rest of the world. While a transaction is in progress, only the session performing it sees the changes; all other sessions must see the unchanged data. Why is that? First, the transaction may not complete at all (remember the principles of atomicity and consistency), so no one should see changes that may yet be cancelled. Second, during the transaction the data is incoherent in business terms: in our salary example there is a period of time when the salary has been changed but the level has not yet been changed. Transaction isolation requires the database to hide transactions in progress from other users: they see the data as it was before the changes while the transaction runs, and then see all the changes at once as a consistent set of data. Oracle guarantees transaction isolation: there is no way for a session (other than the one making the changes) to see uncommitted data. Reading uncommitted data (known as a dirty read) is not allowed by Oracle, even though some other databases permit it.
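Isolation can be demonstrated with two SQLite connections to the same database file (the text is about Oracle; this is only an illustrative sketch of the same principle, with invented file and table names): the writer's uncommitted row is invisible to the reader until COMMIT.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

reader = sqlite3.connect(path)      # a second, independent session
writer.execute("INSERT INTO t VALUES (1)")   # not committed yet

before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
writer.commit()
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(before, after)  # 0 1
```

The reader never sees the dirty, uncommitted row; only after COMMIT does the change become visible to other sessions, as a complete and consistent set of data.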

Durability

The durability principle states that once a transaction has completed successfully, it must be impossible to lose its changes. While a transaction is in progress, the isolation principle requires that no one other than the session making the changes can see them; but once the transaction commits, the changes must be available to everyone, and the database must guarantee that they are never lost. Oracle meets this requirement by writing all change vectors to the log files before the changes are committed. By applying this change log to backups, it is always possible to replay any changes that were made before the database was stopped or damaged. Of course, data can still be lost through user error, such as running inappropriate DML or dropping tables; but from the point of view of Oracle and the DBA, such statements are simply transactions like any other, and once committed, by the durability principle, they cannot be undone.

Executing SQL Statements

The entire SQL language consists of only a dozen or so commands. For now we are interested in four of them: SELECT, INSERT, UPDATE, and DELETE.

Executing the SELECT command

The SELECT command retrieves data. Executing a SELECT is a process consisting of several stages: the server process executing the query checks whether the required blocks of data are already in memory, in the database buffer cache. If they are, execution can continue; otherwise the server process must find the data on disk and copy it into the buffer cache.

Always remember: server processes read blocks from the datafiles into the database buffer cache, while DBWn writes blocks from the database buffer cache to the datafiles.

Once the blocks containing the data needed by the query are in the buffer cache, any further processing (such as sorting or aggregation) takes place in the session's PGA. When execution is complete, the result set is returned to the user process.

How does this relate to the ACID test? For consistency, if the query finds that a data block has changed since the query started, the server process goes to the undo segment (rollback segment) corresponding to that change, locates the old version of the data and (for the purposes of the current query) reverses the change. Thus any changes that occurred after the query started are invisible to it. Transaction isolation is guaranteed in the same way, except that for isolation the undo data is always available, since the undo for an active transaction can never be overwritten. For read consistency, however, if the undo data needed to reverse the changes no longer exists in the undo segment, the mechanism fails: this is where the “snapshot too old” error comes from.

Figure 8-4 shows the processing path for a SELECT query.

Step 1 is passing the user request from the user process to the server process. The server process scans the cache buffer for the necessary blocks and if they are in the buffer, it proceeds to step 4. If not, then step 2 finds the blocks in the data files and step 3 copies the data to the buffer. Step 4 passes the data to the server process where there may be additional processing before Step 5 returns the query result to the user process.

Executing the UPDATE Command

For any DML command, it is necessary to work with both data blocks and undo blocks, and also to generate redo: the A, C, and I principles of the ACID test require the creation of undo data, and D requires the creation of redo data.

Undo is not the opposite of redo! Redo protects all block changes, whether the block belongs to a table, an index, or an undo segment. As far as redo is concerned, an undo segment is a segment just like a table, and all changes to it must be durable.

The first phase of executing a DML command is the same as for a SELECT: the required blocks must be found in the buffer cache or copied from the datafiles into it. The only difference is that an empty (or expired) block of an undo segment is also required. After that, execution becomes more complicated than for a SELECT.

First, locks must be placed on all rows, and on the corresponding index entries, that will be affected.

Then the redo is generated: the server process writes to the log buffer the change vectors that will be applied to the data. Redo is generated for changes to both data blocks and undo blocks: if a column of a row is updated, then the rowid and the new value of the column are written to the log buffer (the change that will be applied to the table block), as well as the old value of the column (the change that will be applied to the undo block). If the column is part of an index key, the changes to the index are also written to the log buffer, together with the undo-block changes that protect the index changes.

Once all the redo has been generated, the data in the buffer cache is updated: the data block is changed to the new version with the updated column, and the old version is written to the undo block. From this moment until the transaction commits, all queries from other sessions that touch this row are redirected to the undo block; only the session performing the UPDATE sees the current version of the row in the table block. The same principle applies to any affected indexes.

Executing the INSERT and DELETE Commands

Conceptually, INSERT and DELETE are handled in the same manner as UPDATE. First the buffer cache is searched for the necessary blocks, and if they are not there, they are copied into memory.

Redo is generated in the same way: all the change vectors to be applied to the data and undo blocks are first written to the log buffer. For an INSERT command, the change vector for the table block (and possibly for index blocks) consists of the bytes that make up the new row (and possibly the new index key); the vector for the undo block is the rowid of the new row. For a DELETE command, the vector for the undo block is the entire row.

The key difference between the INSERT and DELETE commands is the amount of undo data generated. When a row is inserted, the only undo data is the rowid written to the undo block, because to reverse an INSERT the only information Oracle needs is the rowid of the new row; from it, a command of the following form can be constructed:

delete from table_name where rowid=rowid_of_new_row;

Running this command will undo the change.

For a DELETE command, the entire row (which may be several kilobytes) must be written to the undo block, so that the delete can later be undone, if necessary, by constructing a statement that inserts the whole row back into the table.
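SQLite also exposes a rowid for every table row, so the idea that undoing an INSERT needs nothing more than a delete by rowid can be sketched with Python's sqlite3 module (this illustrates the principle only, not Oracle's actual undo mechanism; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")

# The INSERT: the only "undo data" we keep is the new row's rowid.
cur = conn.execute("INSERT INTO t VALUES ('new row')")
undo_rowid = cur.lastrowid

# Applying the undo is exactly the statement from the text:
#   delete from t where rowid = <rowid_of_new_row>;
conn.execute("DELETE FROM t WHERE rowid = ?", (undo_rowid,))

remaining = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)  # 0
```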

Start and end of transaction

A session begins a transaction the moment it executes a DML command. The transaction may consist of any number of DML statements and continues until the session issues a ROLLBACK or COMMIT. Only committed changes become permanent and visible to other sessions. It is not possible to start a transaction within a transaction: the SQL standard does not allow a user to start one transaction and then open another before completing the first. This can be worked around in PL/SQL (Oracle's third-generation language), but not in standard SQL.

The transaction control commands are COMMIT, ROLLBACK, and SAVEPOINT. There are also circumstances, other than an explicit COMMIT or ROLLBACK, that end a transaction implicitly:

  • Executing a DDL or DCL command
  • Ending a user process (for example, the user has exited SQL*Plus or SQL Developer)
  • Client session "died"
  • Problems in the system

If the user issues a DDL command (CREATE, ALTER, or DROP) or a DCL command (GRANT or REVOKE), any active transaction is committed. This happens because DDL and DCL commands are themselves transactions, and since nested transactions are impossible in SQL, if the user already has a transaction in progress, its statements are committed together with the DDL or DCL command.

If you start a transaction by executing a DML statement and then close the program without an explicit COMMIT or ROLLBACK, the transaction will be ended, but whether it is committed or rolled back is entirely up to the program. Different programs may behave differently depending on how you exit. For example, on Windows you can usually leave a program either through the File – Exit menu item or by clicking the cross in the top-right corner, and the programmer may have handled these two paths differently: a COMMIT in the first case and a ROLLBACK in the second. Either way, it is a controlled exit.

If a client session fails for any reason, the database always rolls back the transaction. Such failures can have various causes: the user process may have been killed, the network may have gone down, or the user's machine may have crashed. In each case, no COMMIT or ROLLBACK was ever issued, and the database has to decide what happened: the session is killed and the active transaction is rolled back. The database behaves in exactly the same way when there are problems on the server side: if the database terminated abnormally, then at the next startup all transactions that were started but not explicitly completed are rolled back.

Controlling Transactions: COMMIT, ROLLBACK, SAVEPOINT, and SELECT FOR UPDATE

Oracle begins a transaction the moment the first DML statement is issued, and the transaction lasts until a ROLLBACK or COMMIT is issued. The SAVEPOINT command is not part of the SQL standard and is really just a convenient way for a programmer to roll back a transaction part of the way.

The COMMIT command is where many people (and even some DBAs) show a lack of understanding of the Oracle architecture. When you issue a COMMIT, all that physically happens is that LGWR writes the log buffer to disk. DBWn does absolutely nothing. This is one of the most important properties of Oracle for achieving high database performance.

What does DBWn do when a COMMIT command is executed? Answer: absolutely nothing

To make a transaction durable, all that is needed is to write to disk the changes made by the transaction: the changed data blocks themselves do not have to be on disk. If the changes are recorded as change vectors in the redo log on disk, then even if the database is damaged, every committed transaction can be replayed by restoring a backup taken before the failure and applying the changes from the logs. The key point to understand is that COMMIT merely flushes the log buffer to disk and marks the transaction as complete. That is why a transaction involving millions of updates made over several hours can be committed in a fraction of a second. Because LGWR writes the log buffer out in very nearly real time, virtually all of the transaction's changes are already on disk by the time COMMIT is issued. When you issue a COMMIT, LGWR immediately flushes the remainder of the log buffer to disk, and your session waits until the write completes; the delay is the time needed to write the last portion of the log buffer, which usually takes a few milliseconds. Your session can then continue, and from that moment other sessions are no longer redirected to the undo data when they read the updated rows, unless read consistency requires it. The change vectors written to the redo log are all the changes: those applied to data blocks (tables and indexes) as well as those applied to undo blocks.

The redo log contains all changes: those applied to data segments and those applied to undo segments, for both committed and uncommitted transactions.

The point that confuses people most is that the redo stream written by LGWR contains changes for both committed and uncommitted transactions. Furthermore, at any given moment DBWn may or may not have written the changed data-segment or undo-segment blocks of committed and uncommitted transactions to the data files. This means the database on disk is inconsistent: the data files may contain uncommitted changes and may be missing committed ones. But at any moment, should a problem occur, there is enough information in the redo log on disk to replay the committed transactions that are missing from the data files (using the changes to data blocks) and to rebuild the undo segments (using the changes to undo blocks) needed to roll back any uncommitted transactions that did reach the data files.

Any DDL command, as well as GRANT or REVOKE, commits the current transaction.

ROLLBACK

While a transaction is in progress, Oracle stores an image of the data before the transaction begins. This image is used by other sessions that access the data involved in the transaction. It is also used to cancel a transaction automatically if something goes wrong or the session cancels the transaction.

Syntax for canceling a transaction

ROLLBACK;

Until the transaction is rolled back, the data contains the changes, but the information needed to reverse those changes is available; that same information is used by other sessions to fulfil the isolation principle. A ROLLBACK cancels every change the transaction made, restoring the pre-transaction image of the data: all inserted rows are deleted, all deleted rows are restored, and all updated rows return to their original values. Other sessions never even know anything happened: they never saw the changes. The session that ran the transaction sees the data as it was before the transaction began.
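The effect is easy to demonstrate in any transactional database. Here is a small sketch using Python's sqlite3 module (SQLite rather than Oracle, but the ROLLBACK semantics shown here are the same; the table and its contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE emp (name TEXT, salary INTEGER)")
conn.execute("INSERT INTO emp VALUES ('Alice', 1000), ('Bob', 2000)")

conn.execute("BEGIN")
conn.execute("UPDATE emp SET salary = 9999 WHERE name = 'Alice'")  # change a value
conn.execute("DELETE FROM emp WHERE name = 'Bob'")                 # delete a row
conn.execute("INSERT INTO emp VALUES ('Carol', 3000)")             # add a row
conn.execute("ROLLBACK")

# Every change is undone: the update is reverted, the deleted row is back,
# and the inserted row is gone.
rows = conn.execute("SELECT name, salary FROM emp ORDER BY name").fetchall()
print(rows)  # [('Alice', 1000), ('Bob', 2000)]
```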

SAVEPOINT

A savepoint allows the programmer to set a marker within a transaction that can later be used to control the effect of a rollback. Instead of rolling back the entire transaction and ending it, it becomes possible to undo only the changes made after the marker, keeping the changes made before it. The transaction itself remains in progress: it is not committed, it can still be rolled back entirely, and its changes are still invisible to other sessions.

Command Syntax

SAVEPOINT savepoint;

This command creates a named point within the transaction that can later be referenced by a ROLLBACK command. In the example (a one-column table called TAB, with two sessions observing the row count at various moments during a transaction), two transactions are run: the first ends with COMMIT and the second with ROLLBACK. It shows that savepoints take effect only within the transaction, and only for the session that started it: the second session never sees anything that has not been committed.
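SQLite supports savepoints with the same partial-rollback semantics, so the behavior can be sketched with Python's sqlite3 module (the table name TAB follows the example in the text; everything else is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE tab (c1 INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO tab VALUES (1)")
conn.execute("SAVEPOINT sp1")
conn.execute("INSERT INTO tab VALUES (2)")
conn.execute("INSERT INTO tab VALUES (3)")

# Undo only the work done after the savepoint; the transaction stays open
# and the row inserted before the savepoint survives.
conn.execute("ROLLBACK TO SAVEPOINT sp1")
conn.execute("COMMIT")

rows = conn.execute("SELECT c1 FROM tab").fetchall()
print(rows)  # [(1,)]
```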

SELECT FOR UPDATE

The last transaction-control statement is SELECT FOR UPDATE. By default, Oracle provides the highest possible level of concurrency: readers do not block writers, and writers do not block readers. In other words, there is no problem if one session reads data that another session is changing, and vice versa. But sometimes you need to change this behavior and prevent other sessions from changing the data that your session is reading.

Typical application behavior is to retrieve data with a SELECT, display it to the user, and then let the user change it. Because Oracle supports concurrent users, nothing prevents another user from retrieving the same rows at the same time. If both sessions then try to change the same row, odd situations arise: the second update hangs until the first session commits, and one user's change can silently overwrite the other's, which is thoroughly confusing for the users. To avoid this, you can lock the rows that the query returns:

select * from regions for update;

The FOR UPDATE clause locks the rows that the query returns. Other sessions cannot change those rows, so this session's subsequent changes are guaranteed to succeed; other sessions can still read the locked rows. The session gets a consistent view of the data, but the price is that other sessions will hang if they try to change the locked rows.

A row lock taken by SELECT FOR UPDATE lasts until the session issues a COMMIT or ROLLBACK. One of these commands must be executed even if you never ran any DML.

The so-called “auto-commit”

To complete this overview of transaction management, we need to dispel the myths around the so-called “auto-commit”, or implicit commit. You will often hear that Oracle commits automatically in two situations: when a DDL statement is executed (the case described above) and when the user exits a program such as SQL*Plus.

In fact, it is all very simple. There is no such thing as auto-commit. When you execute a DDL statement, an ordinary COMMIT built into that statement is what does the work. But what happens when you exit the program? If you are using SQL*Plus on Windows and issue a DML statement followed by EXIT (EXIT is a SQL*Plus command, not a SQL command), your transaction is committed, because the SQL*Plus developers built a COMMIT into the EXIT command. If instead you click the cross in the top-right corner, a ROLLBACK is issued: again, because that is how the SQL*Plus developers programmed it. On another operating system SQL*Plus may behave differently; the only way to find out is to test the program (or read its source code, which is impossible unless you work at Oracle on this product).

SQL*Plus has a SET AUTOCOMMIT ON command. It tells SQL*Plus to append a COMMIT after every DML statement it sends, so every statement is committed as soon as it completes. But again, all of this happens entirely on the user-process side: the database itself has no auto-commit, and a long-running statement's changes remain isolated from other sessions until it completes and is committed. Even then, if you run a long statement and, say, kill the user process through the task manager, PMON will detect the orphaned session and roll back the transaction.

Every time a bank client uses a bank card to pay for goods, withdraw cash, or make a transfer, a transaction is carried out. And although the whole thing takes only a few moments, the full transaction cycle is quite an involved process that includes sending the request to debit the money, processing it, and executing it.

A transaction is any operation with a bank card, the execution of which leads to a change in the client’s account status. The transaction can be carried out in real time (online) and offline.

Online transactions require payment confirmation at the time of payment or funds transfer.

Online transactions include money transfers between cards, cash withdrawals from ATMs, and payment transactions at retail outlets and stores. Let's consider the process of completing an online transaction using the example of paying for goods in a shopping center.

There are three parties involved in the operation:

  • the acquiring bank serving the selected retail outlet (its POS terminal is installed in the store);
  • the issuing bank that services the payer's bank card;
  • an international payment system that is an intermediate link in conducting settlement transactions (Visa, MasterCard, etc.).

Online Transaction Procedure

The settlement transaction begins the moment the payment card is handed to the cashier and the POS terminal reads the data needed for payment (card number, expiry date, holder's name, and other information encoded on the magnetic stripe). The information read is transferred to the acquiring bank that services the POS terminal (as a rule, stores sign special terminal-service agreements under which a commission is charged on each transaction).

The received data is transferred by the acquiring bank to the data processing center (DPC) of the international payment system servicing the card.

The data center checks for the presence or absence of a payment card in the stop list (the stop list may contain cards suspected of fraud), as a result of which the operation is approved or rejected.

After this, the information is passed to the processing center of the issuing bank, where the payment is approved. Here the transaction is validated: the issuer checks that there are sufficient funds to complete the transaction, that the PIN code entered matches the real one, and that the established transaction limit is not exceeded.

The issuing bank's response is sent back through the data center to the acquiring bank and the store. Payment details are displayed on a check, which is handed over to the buyer.
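The chain of checks described above can be condensed into a toy sketch. Everything here (field names, limits, the order of checks) is invented for illustration; real authorization systems are far more elaborate and split these checks between the payment system and the issuer:

```python
# Illustrative only: a toy model of transaction authorization.
STOP_LIST = {"4000000000000002"}   # cards flagged for suspected fraud

def authorize(card, pin_entered, amount):
    """Return 'approved' or a decline reason, mimicking the checks in the text."""
    if card["number"] in STOP_LIST:
        return "declined: card is on the stop list"
    if card["blocked"] or card["expired"]:
        return "declined: card blocked or expired"
    if pin_entered != card["pin"]:
        return "declined: wrong PIN"
    if amount > card["balance"]:
        return "declined: insufficient funds"
    if amount > card["limit"]:
        return "declined: transaction limit exceeded"
    return "approved"

card = {"number": "4111111111111111", "pin": "1234", "balance": 500,
        "limit": 300, "blocked": False, "expired": False}

ok = authorize(card, "1234", 250)
too_big = authorize(card, "1234", 400)
print(ok, "|", too_big)  # approved | declined: transaction limit exceeded
```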

Features of online and offline operations

The considered actions when performing online transactions complete the interaction between the buyer and the store. But the transaction process itself does not end there. The fact is that funds are not debited from the card immediately: they are temporarily blocked. Funds are transferred to the store from the acquirer's account, and they are debited from the card only after the acquiring bank transfers to the issuer a financial document for their debiting. This may happen over several days or even a month.

Offline transactions follow a different principle. They pass without verification by the remote party and without approval or rejection. The transaction is pre-approved, the balance on the bank card is reserved, and all payment details are stored in the memory of the payment terminal.

An offline transaction is carried out later, when the information accumulated in the terminal is transmitted via communication channels to the servicing bank. As a rule, several days pass from the moment of request for payment to the moment of actual payment.

Offline transactions are used in cases where it is not possible to establish a connection with the processing center in real time (on airplanes, buses, taxis, etc.).

Prohibition and cancellation of transactions

The most common transactions are in-store payments, money transfers and cash withdrawals. There are several reasons why transactions may be prohibited.

The most common of them:

  • the bank card has been blocked;
  • there are not enough funds on the bank card necessary to complete the operation;
  • the payment card has established restrictions on making payments;
  • the payment card has expired;
  • an error was made when entering the PIN code;
  • the bank card is included in the stop list on suspicion of money laundering, fraud, etc.;
  • There are technical problems (on the website, with the ATM, etc.).

If the prohibition of transactions is not related to an insufficient card balance, you must contact the servicing bank to resolve the problems. In some cases, transactions can be canceled at the initiative of the clients themselves (of course, if we are not talking about cash withdrawals). You also need to know about the possibility of canceling transactions in order to be able to return funds debited from your card fraudulently.

The easiest way is to cancel the operation on the day on which it was performed.

The function of canceling operations is available in the terminals themselves.

If the data from the terminals has already been transferred to the bank, you should contact the financial institution itself.
