Automatic backup of an MS SQL Server database: understanding the built-in backup tools


Database administrators are divided into those who make backups and those who will make backups.

Introduction

This article describes the most common way to back up a 1C infobase using the tools of MS SQL Server 2008 R2, explains why you should do it this way and not otherwise, and dispels several myths. It contains quite a few references to the MS SQL documentation; it is more an overview of the backup mechanisms than a comprehensive guide. But for those facing this task for the first time, it gives simple step-by-step instructions that apply to simple situations. The article is not aimed at administration gurus (they already know all of this), but it does assume that the reader can install MS SQL Server himself and persuade this miracle of hostile technology to create a database in its depths, which in turn can be made to store 1C data.

I consider the TSQL BACKUP DATABASE command (and its brother BACKUP LOG) to be essentially the only means of backing up 1C databases using MS SQL Server as a DBMS. Why? Let's look at what methods we generally have:

How good and bad is each method?

  • Upload to .dt. Good: very compact format. Bad: takes a long time to create, requires exclusive access, does not save some minor data (such as user settings in earlier versions), and takes a long time to deploy. Verdict: not so much a backup method as a way to move data from one environment to another; ideal for narrow channels.
  • Copying the mdf and ldf files. Good: a very clear method for novice admins. Bad: requires releasing the database files from locks, which is only possible if the database is taken offline (the "take offline" command in the context menu), detached, or the server is simply stopped; obviously, users cannot work during this time. Verdict: this method makes sense only when an accident has already happened, so that a recovery attempt can at least fall back to the state it started from.
  • Backup by OS or hypervisor tools. Good: convenient for development and testing environments. Bad: not always friendly with data integrity; resource-intensive. Verdict: may have limited use in development; has no practical use in a production environment.
  • Backup using MS SQL tools. Good: no downtime required; allows restoring a consistent state at an arbitrary moment, if you take care of it in advance; excellent automation; economical in time and other resources. Bad: not a very compact format; not everyone knows how to use this method to the required extent. Verdict: the main tool for production environments.

The main difficulties in using the built-in MS SQL backup tools come from a basic misunderstanding of how they work. This is partly explained by plain laziness, partly by the lack of a simple and clear explanation at the level of "ready-made recipes" (let's just say I have not come across one), and the situation is further aggravated by the mythological advice of semi-gurus on the forums. I don't know what to do about the laziness, but I will try to explain the basics of backup.

What and why do we save?

A long time ago, in a galaxy far away, there was such a product of engineering and accounting thought as 1C:Enterprise 7.7. Apparently because the first versions of 1C:Enterprise were built around the popular dbf file format, its SQL version did not store enough information in the database for an MS SQL backup to be considered complete, and on top of that, every change to the database structure broke the conditions required by the Full recovery model, so one had to go to various lengths to force the backup system to perform its main function. But since version 8 came out, DBAs have finally been able to relax: standard backup tools allow you to build a complete and consistent backup system. Only the event log and a few minor things such as form position settings (in older versions) are left out of the backup, but losing this data does not affect the functionality of the system, although it is of course correct and useful to back up the event log as well.

Why do we need backup at all? Hm. At first glance, a strange question. Well, probably, firstly, to be able to deploy a copy of the system and secondly, to restore the system in case of failure? I agree about the first one, but the second purpose is the first backup myth.

Backup is the system's last line of defense. If a database administrator has to restore a production system from backups, it is likely that many serious mistakes were made in how the work was organized. You cannot treat backup as the primary means of ensuring data integrity; it is much closer to a fire-extinguishing system. A fire-extinguishing system is mandatory. It must be configured, tested and operational. But if it has actually gone off, that in itself is a serious emergency with many negative consequences.

To ensure that backup is used only for “peaceful” purposes, use other means to ensure performance:

  • Ensure the physical safety of your servers: fires, floods, poor power supplies, cleaners, construction workers, meteorites and wild animals are all just waiting around the corner to destroy your server room.
  • Address information security threats responsibly.
  • Make changes to the system skillfully and make sure in advance that these changes will not lead to deterioration. In addition to a plan for making changes, it is advisable to have a plan for “what to do if everything goes wrong.”
  • Actively use technologies to increase system availability and reliability instead of later dealing with the consequences of accidents. For MS SQL you should pay attention to the following features:
    • Using MS SQL clusters (although, to be honest, I think this is one of the most expensive and useless ways to occupy a database administrator for systems that do not require 24x7)
    • Database mirroring (synchronous and asynchronous depending on availability, performance and cost requirements)
    • Transaction log delivery
    • Replication using 1C tools (distributed databases)

Depending on the system availability requirements and the budget allocated for these purposes, it is quite possible to choose solutions that will reduce downtime and failure recovery time by 1-2 orders of magnitude. There is no need to be afraid of accessibility technologies: they are simple enough to learn in a few days with basic knowledge of MS SQL.

But, no matter what, backup is still necessary. This is the same reserve parachute that you can use when all other means of rescue fail. But, like a real reserve parachute, for this:

  • this system must be correctly and professionally configured in advance,
  • a specialist using the system must have theoretical and practical skills in its use (regularly reinforced),
  • the system should consist of the most reliable and simple components (this is our last hope).

Basic information about storing and processing MS SQL data

MS SQL stores data in data files with the mdf or ndf extension. Besides these files there is also the transaction log, stored in files with the ldf extension. Novice administrators are often irresponsible and careless about the transaction log, both in terms of performance and storage reliability. This is a very serious mistake. In fact it is rather the opposite: if there is a reliably functioning backup system and plenty of time can be allotted to restoring the system, then the data files can live on a fast but extremely unreliable RAID-0, but the transaction log must then be stored on a separate, reliable and performant resource (at least on a RAID-1). Why? Let's take a closer look. I'll say right away that the presentation is somewhat simplified, but sufficient for an initial understanding.

The data files store data in 8-kilobyte pages (combined into 64-kilobyte extents, but that is not important here). MS SQL does not guarantee that changes reach the data files immediately after a data modification command executes. No, the page in memory is merely marked as "dirty" (requiring saving). If the server has enough spare resources, the data soon ends up on disk. Moreover, the server works "optimistically": if the changes happen inside a transaction, they may well reach the disk before the transaction commits. So in the general case, under active load, the data files contain scattered pieces of unfinished data and of incomplete transactions for which it is not yet known whether they will be committed or rolled back. There is a special command, CHECKPOINT, which tells the server to flush all unsaved data to disk "right now", but its use cases are quite narrow. Suffice it to say that 1C does not use it (I have never encountered it doing so) and that during operation the data files are normally not in a consistent state.

To cope with this chaos we need the transaction log. The following events are recorded in it:

  • Information about the start of the transaction and its identifier.
  • Information about the fact of committing or canceling a transaction.
  • Information about all changes to the data in the data files (roughly speaking, what was there before and what it became).
  • Information about changes to the data files themselves or to the database structure (growing and shrinking files, allocating and freeing pages, creating and deleting tables and indexes).

All of this is written together with the identifier of the transaction in which it occurred, and in sufficient detail to understand how to get from the state before the operation to the state after it, and back (the exception is the bulk-logged recovery model).

It is important that this information is written to disk immediately. Until the information is written to the transaction log, the command is not considered executed. In a normal situation, when the log file is large enough and not badly fragmented, records are written to it sequentially, in small records (not necessarily multiples of 8 KB). Only data actually needed for recovery gets into the transaction log. In particular, it does NOT record which query text led to the modifications, what execution plan that query had, which user ran it, or other information unnecessary for recovery. The following query can give some idea of the structure of the transaction log data:

SELECT * FROM sys.fn_dblog(NULL, NULL)

Because hard drives handle sequential writes much more efficiently than a chaotic stream of reads and writes, and because SQL commands wait until the log write completes, the following recommendation arises:

If there is even the slightest possibility, then in a production environment the transaction logs should be placed on separate (from everything else) physical media, preferably with minimal latency for sequential writes and with maximum reliability. For simple systems a RAID-1 is quite suitable.

If a transaction is rolled back, the server returns all the changes already made to their previous state. That is why:

Rolling back a transaction in MS SQL Server usually takes about as long as the data-modifying operations of the transaction itself took. Try not to roll back transactions, or make the decision to roll back as early as possible.

If the server unexpectedly stops for some reason, then on restart it will analyze which data in the data files does not correspond to a consistent state (unwritten but committed transactions, and written but rolled-back transactions) and fix it. So if, for example, you started rebuilding the indexes of a large table and restarted the server, on startup it will spend a significant amount of time rolling that transaction back, and there is no way to interrupt the process.

What happens when the write position reaches the end of the transaction log file? Simple: if there is free space at the beginning, writing wraps around and continues in the free space at the start of the file, before the occupied part. Like a looped magnetic tape. If there is no space at the beginning, the server will usually try to grow the transaction log file; to the server, each newly allocated chunk is a new virtual log file, of which a physical log file may contain many, but this has little to do with backups. If the server fails to grow the file (the disk runs out of space, or growth is forbidden by the settings), the current transaction is rolled back with error 9002.
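The layout of virtual log files inside the physical log can be inspected, for example, with DBCC LOGINFO; a sketch (the command is undocumented but available in SQL Server 2008 R2, and the database name mydb is a placeholder):

```sql
-- Show the virtual log files (VLFs) of the current database.
-- Status = 2 means the VLF is active (still needed), 0 means reusable.
USE mydb;
DBCC LOGINFO;

-- On SQL Server 2016 SP2 and later the documented equivalent is:
-- SELECT * FROM sys.dm_db_log_info(DB_ID());
```

A log file with many thousands of tiny VLFs is a sign of the shrink/grow cycling warned against below.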

Oops. What do we do to make sure there is always room in the transaction log? This is where we get to the backup system and the recovery models. To roll back transactions and to restore the server to a correct state after a sudden shutdown, records must be kept in the transaction log starting from the beginning of the earliest open transaction. This minimum is written to and kept in the log unconditionally, regardless of the weather, the server settings and the wishes of the admin. The server cannot allow this information not to exist. So if you open a transaction in one session and perform various actions in others, the transaction log may run out unexpectedly. The oldest open transaction can be identified with the DBCC OPENTRAN command. But that is only the required minimum of information. What happens beyond it depends on the recovery model. There are three of them in SQL Server:

  • Simple: only the tail of the log necessary for operation is kept.
  • Full: the entire log is kept since the last transaction log backup. Note: not since the last full backup!
  • Bulk logged: a part (usually a very small part) of the operations is recorded in a very compact format (essentially just a note that such-and-such page of the data file has changed). Otherwise identical to Full.
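You can see and change the recovery model per database; a minimal sketch (the database name mydb is a placeholder):

```sql
-- Check the current recovery model of every database on the server
SELECT name, recovery_model_desc FROM sys.databases;

-- Switch a database to the Full model. Take a full backup right after,
-- otherwise the log backup chain has no starting point.
ALTER DATABASE [mydb] SET RECOVERY FULL;
```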

There are several myths associated with recovery models.

  • "Simple reduces the load on the disk subsystem." This is wrong: exactly as much is written as under Bulk logged, it is just considered free much earlier.
  • "Bulk logged reduces the load on the disk subsystem." For 1C this is almost never the case: practically the only operations that can qualify for minimal logging without extra dancing with a tambourine are loading data from a .dt upload and restructuring tables.
  • "Under the Bulk logged model some transactions are not included in the transaction log backup, and it does not allow restoring the state as of the time of that backup." Not quite true. If an operation is minimally logged, the current data pages themselves are included in the log backup, and the log can be "replayed" to its end (although not to an arbitrary point in time if there are minimally logged operations).

It is almost pointless to use the Bulk logged model for 1C databases, so we do not consider it further. But we will consider the choice between Full and Simple in more detail in the next part.

You can read more in the help:

  • Transaction log structure
    • Recovery Models and Transaction Log Management
    • Transaction Log Management
  • Using Transaction Log Backups

How backup works in Simple and Full recovery models

Based on the type of formation, backup copies are of three types:

  • Full(Full)
  • Differential(Differential, difference)
  • Log (a backup copy of the transaction log; given how often this term comes up, we will abbreviate it as RKZhT)

Don't get confused here: the Full recovery model and a full backup are quite different things. To keep them apart, below I will capitalize the recovery models (Simple, Full) and use lowercase words for the backup types.

Full and differential copy work the same for Simple and Full. Transaction log backup is completely missing from Simple.

Full backup

Allows you to restore the database state as of a certain point in time (the one at which the backup was started). It consists of a page-by-page copy of the used portion of the data files plus the active piece of the transaction log for the time the backup was being created.

Differential backup

Stores the pages of data that have changed since the last full backup. When restoring, you first restore the full backup in NORECOVERY mode (examples below), then apply any one of the subsequent differential backups to the resulting "blank", but of course only one made before the next full backup. This can significantly reduce the disk space needed to store backups.

Important points:

  • Without a previous full backup, a differential copy is useless. Therefore, it is advisable to store them somewhere close to each other.
  • Each subsequent differential backup includes all the pages that were in the previous differential backups taken since the last full backup (though possibly with different contents). So each subsequent differential backup is larger than the previous ones, until a full backup is taken again (any deviation from this is only due to compression algorithms).
  • To restore to some point it is enough to have the last full backup before that point and the last differential backup before that point. The intermediate backups are not needed for recovery (although they may be needed to choose the moment to recover to).

RKZhT (transaction log backup)

Contains a copy of the transaction log for a certain period, usually from the moment of the previous RKZhT up to the creation of the current one. A chain of RKZhT allows you to take a copy restored in NORECOVERY mode and bring it to any point in time covered by that chain. When a log backup is created with default parameters, space inside the transaction log file is freed (up to the start of the oldest open transaction).

Obviously, RKZhT make no sense under the Simple model (there the transaction log only holds information starting from the oldest open transaction).

When using RKZhT, an important concept arises: a continuous chain of RKZhT. This chain can be broken either by losing some of the backups in it, or by switching the database to Simple and back.

Attention: a set of RKZhT is essentially useless unless it forms a continuous chain, and the start time of the last successful full or differential backup must lie within the period of this chain.
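Whether the chain is intact can be checked against the backup history SQL Server keeps in msdb; a sketch (the database name mydb is a placeholder):

```sql
-- List recent backups of one database: type D = full, I = differential, L = log.
-- For an unbroken log chain, each log backup's first_lsn must equal
-- the previous log backup's last_lsn.
SELECT backup_start_date, type, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'mydb'
ORDER BY backup_start_date;
```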

Common misconceptions and myths:

  • "An RKZhT contains the transaction log data since the previous full or differential backup." No, it does not. An RKZhT also contains the seemingly useless data between the previous RKZhT and a subsequent full backup.
  • "A full or differential backup should free space inside the transaction log." No, it does not. Full and differential backups do not affect the RKZhT chain and do not truncate the log.
  • "The transaction log needs to be periodically cleaned out manually, shrunk and defragmented." No, it does not, and in fact this is undesirable. If you truncate the log between RKZhT, the chain of RKZhT needed for recovery is broken. And constantly shrinking and regrowing the file leads to its physical and logical fragmentation.
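Instead of shrinking, it is more useful to keep an eye on how much of the log is actually in use; one quick way (a sketch):

```sql
-- Log size and percentage used for every database on the server
DBCC SQLPERF(LOGSPACE);

-- If the log keeps growing under the Full model, the usual culprit is
-- missing or failed log backups; see what is holding the log:
SELECT name, log_reuse_wait_desc FROM sys.databases;
```

A value of LOG_BACKUP in log_reuse_wait_desc means exactly that: the log is waiting for the next RKZhT before it can reuse space.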

How it works under Simple

Let there be a database of 1000 GB. Every day the database grows by 2 GB, while 10 GB of old data is changed. The following backups have been made

  • Full copy of F1 from 0:00 February 1 (volume 1000 GB, compression is not taken into account for simplicity of the picture)
    • Differential copy of D1.1 from 0:00 February 2 (volume 12 GB)
    • Differential copy of D1.2 from 0:00 February 3 (volume 19 GB)
    • Differential copy of D1.3 from 0:00 February 4 (volume 25 GB)
    • Differential copy of D1.4 from 0:00 February 5 (volume 31 GB)
    • Differential copy D1.5 from 0:00 February 6 (volume 36 GB)
    • Differential copy of D1.6 from 0:00 February 7 (volume 40 GB)
  • Full copy of F2 from 0:00 February 8 (volume 1014 GB)
    • Differential copy of D2.1 from 0:00 February 9 (volume 12 GB)
    • Differential copy of D2.2 from 0:00 February 10 (volume 19 GB)
    • Differential copy of D2.3 from 0:00 February 11 (volume 25 GB)
    • Differential copy of D2.4 from 0:00 February 12 (volume 31 GB)
    • Differential copy D2.5 from 0:00 February 13 (volume 36 GB)
    • Differential copy of D2.6 from 0:00 February 14 (volume 40 GB)

Using this set, we can restore data at 0:00 on any day from February 1 to February 14. To do this, we need to take a full copy of F1 for the week of February 1-7 or a full copy of F2 for February 8-14, restore it in NORECOVERY mode and then apply a differential copy of the desired day.
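Under stated assumptions (the file paths and the database name mydb are placeholders), restoring to 0:00 February 10 from this set might look like:

```sql
-- 1. Restore the full backup, but keep the database in the "restoring"
--    state so that more backups can be applied on top of it.
RESTORE DATABASE [mydb] FROM DISK = N'C:\Backup\F2.bak'
    WITH NORECOVERY, REPLACE;

-- 2. Apply the differential backup of February 10 and bring the
--    database online (RECOVERY rolls back incomplete transactions).
RESTORE DATABASE [mydb] FROM DISK = N'C:\Backup\D2.2.diff'
    WITH RECOVERY;
```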

How it works under Full

Let us have the same set of backup full and differential backups as in the previous example. In addition to this, there are the following RKZhT:

  • RKZhT 1 for the period from 12:00 January 31 to 12:00 February 2 (about 30 GB)
  • RKZhT 2 for the period from 12:00 February 2 to 12:00 February 4 (about 30 GB)
  • RKZhT 3 for the period from 12:00 February 4 to 12:00 February 6 (about 30 GB)
  • RKZhT 4 for the period from 12:00 February 6 to 12:00 February 8 (about 30 GB)
  • RKZhT 5 for the period from 12:00 February 8 to 12:00 February 10 (about 30 GB)
  • RKZhT 6 for the period from 12:00 February 10 to 12:00 February 12 (about 30 GB)
  • RKZhT 7 for the period from 12:00 February 12 to 12:00 February 14 (about 30 GB)
  • RKZhT 8 for the period from 12:00 February 14 to 12:00 February 16 (about 30 GB)

Note:

  1. The size of the RKZhT will be approximately constant.
  2. Log backups can be taken less often than differential or full ones, or more often; the more often, the smaller each one is.
  3. We can now restore the system state to any point from 0:00 on February 1st, when we have the earliest complete copy, until 12:00 on February 16th.

In the simplest case, we need to restore:

  1. Last full copy before recovery
  2. Last differential copy before recovery
  3. All RKZhT from the last differential backup up to the moment we are restoring to.

For example, to restore the state as of 13:13:13 on February 10 we need:

  • Full copy F2 from 0:00 February 8
  • Differential copy D2.2 from 0:00 February 10
  • RKZhT 5 for the period from 12:00 February 8 to 12:00 February 10 (it contains the point where D2.2 ends)
  • RKZhT 6 for the period from 12:00 February 10 to 12:00 February 12

First F2 is restored, then D2.2, then RKZhT 5, then RKZhT 6 up to 13:13:13 on February 10. But a significant advantage of the Full model is that we have a choice: use the last full or differential backup, or NOT the last one. For example, if it turned out that D2.2 was corrupted and we still needed to restore to just before 13:13:13 on February 10, then under the Simple model we could only restore the data to the moment of D2.1. Under Full: DON'T PANIC, we have the following options:

  1. Restore F2, then D2.1, then RKZhT 5, then RKZhT 6 up to 13:13:13 on February 10.
  2. Restore F2, then RKZhT 4, then RKZhT 5, then RKZhT 6 up to 13:13:13 on February 10.
  3. Or even restore F1 and replay the entire chain of RKZhT up to RKZhT 6, stopping at 13:13:13 on February 10.

As you can see, the full model gives us more choice.
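In TSQL, restoring to a point in time uses the STOPAT option; a sketch under the same assumptions (the paths, database name and year are placeholders):

```sql
-- Restore the base backups, keeping the database in the restoring state
RESTORE DATABASE [mydb] FROM DISK = N'C:\Backup\F2.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE [mydb] FROM DISK = N'C:\Backup\D2.1.diff' WITH NORECOVERY;

-- Replay the log chain, stopping just at the moment of the failure
RESTORE LOG [mydb] FROM DISK = N'C:\Backup\RKZHT5.trn' WITH NORECOVERY;
RESTORE LOG [mydb] FROM DISK = N'C:\Backup\RKZHT6.trn'
    WITH STOPAT = N'2011-02-10T13:13:13', RECOVERY;
```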

Now imagine that we are very cunning, and a couple of days before the failure (13:13:13 February 10) we know there will be one. We restore the database on a neighboring server from a full backup, leaving it in NORECOVERY mode so that subsequent differential backups or RKZhT can still be applied. And every time an RKZhT is created, we immediately apply it to this standby database, again leaving it in NORECOVERY mode. Wow! Now restoring will take us only 10-15 minutes instead of restoring a huge database from scratch! Congratulations, we have reinvented log shipping, one of the ways to reduce downtime. If you transfer data this way not once per period but continuously, you get mirroring; and if the source database waits for the mirror database to be updated, it is synchronous mirroring, if it does not wait, asynchronous.

You can read more about high availability tools in the help:

  • High Availability (Database Engine)
    • Understanding High Availability Solutions
    • High Availability: Interaction and Collaboration

Other Backup Considerations

You can safely skip this section if you are bored with the theory and are itching to try out the backup settings.

File groups

1C:Enterprise essentially does not know how to work with filegroups: there is a single filegroup, period. In fact, a programmer or an MS SQL database administrator can move some tables, indexes, or even pieces of tables and indexes into separate filegroups (in the simplest case, into separate files). This is needed either to speed up access to some data (by putting it on very fast media), or, on the contrary, to sacrifice speed and put it on cheaper media (for example, rarely used but voluminous data). When filegroups are used, they can be backed up separately, and restored separately too, but keep in mind that all filegroups will have to be "caught up" to a single point by replaying the RKZhT.
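A filegroup-level backup is a variation of the same BACKUP DATABASE command; a sketch (the database and filegroup names are placeholders):

```sql
-- Back up only one filegroup of the database
BACKUP DATABASE [mydb]
    FILEGROUP = N'SECONDARY'   -- hypothetical filegroup name
    TO DISK = N'C:\Backup\mydb_secondary.bak'
    WITH INIT, FORMAT;
```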

Data files

While the placement of data across filegroups is managed by a person, when a filegroup contains several files MS SQL Server distributes the data among them on its own (if the files are equal in size, it tries to spread it evenly). From the application standpoint this is used to parallelize I/O operations. But from the backup standpoint there is another angle: for very large databases in the pre-SQL 2008 era, it was a typical problem to find a contiguous window for a full backup, and the destination disk might simply not be able to hold it. The simplest way out was to back up each file (or filegroup) in its own window. Now, with backup compression widespread, the problem has faded, but the technique is still worth keeping in mind.

Backup compression

MS SQL Server 2008 introduced a super-mega-ultra feature. From now on and forever, backups can be compressed when generated on the fly. This reduces the size of a 1C database backup by 5-10 times. And considering that the performance of the disk subsystem is usually the bottleneck of the DBMS, this not only reduces the cost of storage, but also greatly accelerates backup (although the load on the processors increases, but usually the processor power is quite sufficient on the DBMS server).

If in the 2008 version this feature was only available for the Enterprise edition (which is very expensive), then in 2008 R2 this feature was given to the Standard version, which is very pleasing.

Compression settings are not covered in the examples below, but I strongly recommend using backup compression unless there is a specific reason to disable it.
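Compression is enabled per backup with the COMPRESSION option, or server-wide via the backup compression default setting; a sketch (the database name is a placeholder):

```sql
-- One-off compressed backup, regardless of the server default
BACKUP DATABASE [mydb] TO DISK = N'C:\Backup\mydb.bak'
    WITH INIT, FORMAT, COMPRESSION, STATS = 1, CHECKSUM;

-- Or make compression the server-wide default:
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```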

One backup file - many internals

In fact, a backup is not just a file; it is a rather complex container that can hold many backups. This approach has a very long history (I have personally seen it since version 6.5), but right now, for administrators of "ordinary" databases, and especially 1C databases, there is no serious reason to deviate from the "one backup, one file" approach. For general education it is useful to explore the ability to put several backups into one file, but you will most likely never need it (or if you do, it will be while digging through the rubble left by a would-be administrator who used this ability without understanding it).
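If you do run into such a container, its contents can be listed without restoring anything; a sketch (the path is a placeholder):

```sql
-- Show every backup set stored inside one backup file:
-- type, database, start/finish time, position, etc.
RESTORE HEADERONLY FROM DISK = N'C:\Backup\mydb.bak';

-- A particular set is then addressed by its position:
-- RESTORE DATABASE [mydb] FROM DISK = N'C:\Backup\mydb.bak' WITH FILE = 2;
```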

Multiple mirror copies

SQL Server has another great feature. You can create a backup copy in parallel in several receivers. As a simple example, you can dump one copy onto a local disk and simultaneously store it on a network resource. A local copy is convenient, since recovery from it is much faster; a remote copy will withstand physical destruction of the main database server much better.

Examples of backup systems

Enough theory. It's time to prove with practice that this whole kitchen works.

Setting up typical server backups through Maintenance Plans

This section is written as ready-made recipes with explanations. It is rather long and boring because of the screenshots, so you can skip it.

Using the Maintenance Plan Wizard

Setting up server backup using TSQL scripts, examples of some features

The question immediately arises: what else is needed? You just set everything up and it works like clockwork, so why bother with scripts? Because maintenance plans:

  • do not allow mirrored backups;
  • do not allow compression settings different from the server default;
  • do not allow flexible reaction to problems (no error handling);
  • do not allow flexible use of security settings;
  • are very inconvenient to deploy (and keep identical) on a large number of servers (even, perhaps, on as few as 3-4).

Below are typical backup commands

Full backup

A full backup that overwrites the existing file (if any) and verifies page checksums before writing; progress is reported every percent:

BACKUP DATABASE [mydb] TO DISK = N'C:\Backup\mydb.bak' WITH INIT, FORMAT, STATS = 1, CHECKSUM

Differential backup

The differential backup is similar:

BACKUP DATABASE [mydb] TO DISK = N'C:\Backup\mydb.diff' WITH DIFFERENTIAL, INIT, FORMAT, STATS = 1, CHECKSUM

RKZhT

Transaction log backup

BACKUP LOG [mydb] TO DISK = N'C:\Backup\mydb.trn' WITH INIT, FORMAT

Mirror backup

It is often convenient to make not one backup copy at once, but two. For example, one can be stored locally on the server (so that it is at hand), and the second is immediately formed into a physically remote and protected from adverse influences storage:

BACKUP DATABASE [mydb] TO DISK = N'C:\Backup\mydb.bak' MIRROR TO DISK = N'\\safe-server\backup\mydb.bak' WITH INIT, FORMAT

An important point that is often missed: the account under which the MS SQL Server process runs must have access to "\\safe-server\backup\", otherwise the copy will fail. If MS SQL Server runs under the system account, access must be granted to the domain computer account "server_name$", but it is still better to configure MS SQL to run under a specially created user account.

If you specify two destinations without MIRROR TO, you will get not two mirror copies but a single backup striped across two files, and each of them separately will be useless.

After studying a lot of information from different sources, I decided to describe the process of setting up MS SQL Server database backups under the Full recovery model. Which model to use is up to you, but note that if your database has a large flow of information (for example, tens, hundreds or thousands of documents created per hour), losing a whole day of work would simply be unacceptable; in that case only the Full model will keep your data safe. This part is aimed at novice system administrators and contains, in my opinion, the minimum set of actions for backing up a 1C database. Installing and configuring the SQL Server itself and deploying a database on it are outside the scope of this article.

We will do all the configuration in SQL Server Management Studio. First you need to create a backup device. You do not strictly have to, but in my opinion it is much more convenient and correct: in Object Explorer, go to Server Objects -> Backup Devices. Specify the device name and the file in which the backups will be stored (preferably with the .bak extension); later you can view the media contents there, and all backups will be listed.
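The same backup device can be created in TSQL with sp_addumpdevice (the device and file names here are placeholders):

```sql
-- Register a logical backup device pointing at a file
EXEC sp_addumpdevice 'disk', 'mydb_backup_device', N'C:\Backup\mydb.bak';

-- Backups can then target the device by its logical name:
-- BACKUP DATABASE [mydb] TO mydb_backup_device WITH NOINIT;
```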

Now you can start setting up the Maintenance Plan. A Maintenance Plan can be created for all databases at once, but it is more convenient to create your own maintenance plan for each database.

Our maintenance plan will have three subplans: 1 - database backup (Full); 2 - database backup (Differential); 3 - transaction log backup. Each subplan has its own execution schedule. Set the schedules at your own discretion; in my case a full copy is made once a week on Sunday, a differential copy every day except Sunday, and a transaction log backup every hour. With this backup model you can restore the database to any date and time, and disk space is saved, because a full backup is performed only once a week and during the week only the changes are saved.

Setting up the daily schedule. The weekly schedule differs only in that "Sunday" is checked and "Monday" through "Saturday" are unchecked.

Schedule for the transaction log backup. The time window during the day is highlighted in red; restricting it makes sense if users work with the database only during certain hours. If the operating mode is 24x7, leave it at the default.

The figure below shows the weekly subplan editor; it consists of tasks that are performed in a given sequence. The sequence is set manually; a green arrow means the next task runs only if the previous one completed successfully, while a blue arrow means the next task runs regardless of how the previous one finished. Tasks are added to the maintenance subplan editor from the Toolbox, which appears in the upper left corner when the editor is open.

Tasks. Open each task and select the database for which it will be executed, plus any other settings it offers. Let's look at the tasks the weekly subplan of our maintenance plan contains.

1. "Check Database Integrity Task". The following tasks will only run if the database contains no errors. (Is there any point in backing up a database with errors?)
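Outside a maintenance plan, the same integrity check can be run manually (the database name is hypothetical):

```sql
-- Full logical and physical integrity check; report errors only
DBCC CHECKDB ('MyDb') WITH NO_INFOMSGS;
```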

2. "Rebuild Index Task". Indexes become highly fragmented as the database is used, and once fragmentation exceeds roughly 25%, SQL Server noticeably slows down. A rebuild is quite resource-intensive, so it is enough to run it once a week; in the daily subplan you can replace it with the less resource-intensive "Reorganize Index Task".
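A sketch of how to inspect fragmentation and choose between the two operations (the table name is hypothetical; the 25% threshold is the rule of thumb mentioned above):

```sql
-- Indexes in the current database with fragmentation above 25%
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 25;

-- Rebuild (resource-intensive, weekly) or reorganize (lighter, daily)
ALTER INDEX ALL ON dbo.MyTable REBUILD;
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;
```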

3. "Update Statistics Task". Updates the statistics the query optimizer relies on. By the way, this task can be run several times a day if your database is heavily loaded.
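The equivalent manual commands (the table name is hypothetical):

```sql
-- Update statistics for every table in the current database
EXEC sp_updatestats;

-- Or for a single table, scanning all rows rather than a sample
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```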

4. After updating statistics, you MUST clear the procedure cache. To do this, drag an "Execute T-SQL Statement Task" into the editor and enter DBCC FREEPROCCACHE in the "T-SQL statement:" field. But keep in mind that this command clears the cache of ALL databases, while we updated statistics for only one! To clear the procedure cache for a specific database, use the undocumented command DBCC FLUSHPROCINDB(db_id).
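A minimal sketch of clearing the cache for one database only (the database name is hypothetical; FLUSHPROCINDB is undocumented, so use it with care):

```sql
-- Clear cached plans for a single database instead of the whole server
DECLARE @db_id int = DB_ID('MyDb');
DBCC FLUSHPROCINDB (@db_id);
```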

5. "Back Up Database Task". In this task we specify which database to back up and the backup type (for the weekly subplan - Full, for the daily subplan - Differential, for the hourly one - Transaction log). Set the switch to "Back up databases across one or more files" and add the previously created backup device; in this case ALL copies are saved in the single file that was specified when the backup device was created. If the switch is left at "Create a backup file for every database", a separate file is created for each Full, Differential and transaction log backup, which is very inconvenient when restoring but convenient for storage. Don't forget to enable backup compression!
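The three backup types produced by these subplans correspond to the following T-SQL (the database and device names are hypothetical):

```sql
-- Weekly: full backup
BACKUP DATABASE MyDb TO MyDbBackup WITH COMPRESSION;

-- Daily: differential backup (changes since the last full backup)
BACKUP DATABASE MyDb TO MyDbBackup WITH DIFFERENTIAL, COMPRESSION;

-- Hourly: transaction log backup (requires the full recovery model)
BACKUP LOG MyDb TO MyDbBackup WITH COMPRESSION;
```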

6. "History Cleanup Task". Deletes the history records created when tasks are executed. You can also add a "Maintenance Cleanup Task" and configure it to delete text logs or outdated backups.

The transaction log backup subplan consists of a single "Back Up Database Task". For me it is more convenient to save the transaction log not to the backup device but to a separate file, which must be specified in the task settings.

2. View information about backup and restore events for a database

To find out when backups of a specific database were created, and when the database was restored from a backup, you can use the standard report "Backup and Restore Events". To generate it, right-click the corresponding database in Object Explorer and select "Reports" - "Standard Reports" - "Backup and Restore Events".
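The same information is available directly from the msdb history tables, which can be handy when Management Studio is not at hand (the database name is hypothetical):

```sql
-- Recent backups of one database, newest first
SELECT bs.database_name,
       bs.backup_finish_date,
       CASE bs.type WHEN 'D' THEN 'Full'
                    WHEN 'I' THEN 'Differential'
                    WHEN 'L' THEN 'Log' END AS backup_type,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = 'MyDb'
ORDER BY bs.backup_finish_date DESC;
```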

The generated report contains the following data:

  • Average Time Taken For Backup Operations
  • Successful Backup Operations
  • Backup Operation Errors
  • Successful Restore Operations

To view this information, you need to expand the corresponding grouping in the report.


Let's restore the "Test_Recovery" database to point "t4".

Let's start restoring the database from the full backup "Full2_Test_Recovery.bak" using SQL Server Management Studio. Right-click the "Test_Recovery" database and in the menu that appears select "Tasks", then "Restore", then "Database".

In the "Restore Database" window that appears, in the "Source" section select "Device". Click "Add", enter the path "\\vniz-tst-bkp01.test.local\Backup_SQL\Full2_Test_Recovery.bak" and click "OK". In the "Destination" section, select the database "Test_Recovery".

Click “Ok”

The database will be restored successfully.

Let's look at restoring a database using Transact-SQL.

Right-click on the “Test_Recovery” database and select “New Query” from the menu that appears:

In the window that appears, enter:

USE master
RESTORE DATABASE Test_Recovery
FROM DISK = '\\vniz-tst-bkp01.test.local\Backup_SQL\Full2_Test_Recovery.bak'
WITH REPLACE

The database will be restored successfully.

In this example we used the "REPLACE" parameter:

A restore normally prevents a database from being accidentally overwritten by another database. If the database specified in the RESTORE statement already exists on the current server and its family GUID differs from the family GUID of the database recorded in the backup set, the database will not be restored.

The REPLACE option overrides several important safety checks normally performed by a restore operation. The following checks are skipped:

  • The check that prevents restoring a backup created for a different database over an existing database. With REPLACE, the restore can write over an existing database regardless of which databases are contained in the backup set, even if the specified database name differs from the one recorded in the backup set. This may result in the database being accidentally overwritten by another database.
  • The check for a database using the full or bulk-logged recovery model for which no tail-log backup was taken and the STOPAT option was not applied. With REPLACE you may lose committed data, because the most recently logged data has not yet been backed up.
  • The check against overwriting existing files.
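To avoid losing the tail of the log, a safer sequence is to back up the log tail first and then restore through the chain to a point in time. A sketch under assumed names: the tail-backup path and the timestamp below are hypothetical, while the full-backup path is the one used earlier in this article:

```sql
-- 1. Save the not-yet-backed-up tail of the log,
--    leaving the database in the RESTORING state
BACKUP LOG Test_Recovery
TO DISK = 'D:\Backup\Test_Recovery_tail.trn'
WITH NORECOVERY;

-- 2. Restore the full backup without recovering,
--    so that log backups can be applied afterwards
RESTORE DATABASE Test_Recovery
FROM DISK = '\\vniz-tst-bkp01.test.local\Backup_SQL\Full2_Test_Recovery.bak'
WITH NORECOVERY, REPLACE;

-- 3. Roll the log forward to the desired moment
--    and bring the database online
RESTORE LOG Test_Recovery
FROM DISK = 'D:\Backup\Test_Recovery_tail.trn'
WITH STOPAT = '2024-01-15 12:00:00', RECOVERY;
```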

It is recommended to configure regular database backups (in case of hardware or software failures), and it is best to keep backup copies for the last few days, for example seven (for the last week).

To do this, you can use either the task scheduler built into SQL Server - "SQL Server Agent" (not included in the free version) - or the standard Windows Scheduler in combination with the SQLCMD.EXE utility, which allows you to run queries against SQL Server from the command line. In the scheduler you must create at least seven jobs (one for each day of the week), each of which will (once a week) overwrite one of seven files containing the corresponding database backup.

In addition, it is recommended to store backup files not only on the hard drive of the computer where SQL Server is installed, but also to duplicate them to tape or the hard drive of another computer on the network. To do this, you can either use special software that allows you to make backup copies of the entire disk, or use the same scheduler to copy files to tape or another computer (second step).

Using Windows Scheduler (free version)

To create a task in Windows Scheduler you need to:

Launch Notepad (Start->All Programs->Accessories->Notepad), enter the following two lines, then save them as a batch file (*.BAT):

SQLCMD -S (local) -E -Q "BACKUP DATABASE AltaSVHDb TO DISK = 'D:\BACKUP\AltaSVHDb_monday.bak' WITH INIT, NOFORMAT, SKIP, NOUNLOAD"
XCOPY D:\BACKUP\AltaSVHDb_monday.bak \\BACKUP_SERVER\Folder\*.* /Y

Where "(local)" is the server name (if a named instance of SQL Server is installed, you must specify the full name: "COMPUTER_NAME\SQLEXPRESS"), "AltaSVHDb" is the database name, "D:\BACKUP\AltaSVHDb_monday.bak" is the name of the backup file to create (it will vary by day of the week), "BACKUP_SERVER" is the name of the computer to which the additional copy will be made, and "Folder" is a folder on that computer (it must be shared).

Launch the Task Scheduling Wizard (Control Panel->Scheduled Tasks->Add Task) and click the “Next” button:

Click the “Browse” button and specify the path to the command file (*.BAT) created in step a):

Specify a name for the task, select the “weekly” run option and click the “Next” button:

Check the box next to the desired day of the week, and in the “Start time” field specify the time when the backup process should start (usually this is done at night), then click the “Next” button:

Enter the username and password (twice) of the OS account under which the task will be executed, and click the “Next” button:

Attention! In order for the task to complete successfully, you must grant the account specified here (domain or local computer) write rights to the above folder "\\BACKUP_SERVER\Folder", as well as configure access to SQL Server itself.

Click the “Finish” button

Note. To check the functionality of the created task, you need to right-click on the task of interest in the list of tasks (Control Panel->Scheduled Tasks) and select “Run” in the context menu, then make sure that the database backup file was successfully created in the paths that were specified in step a).

Using "SQL Server Agent" (not included in the free version)

To create a job in SQL Server Agent you need to:

Launch the SQL Server Management Studio utility and connect to the server under an administrator account.

In the left part of the window, right-click on the “Server objects/Backup devices” section and select “Create a backup device” in the context menu:

In the “Device Name” field, enter the name that will be associated with the database backup file, change the path in the “File” field if necessary and click “OK”:

In the left part of the window, right-click on the “SQL Server Agent/Tasks” section and select “Create task” in the context menu:

In the “Name” field, enter the name of the task:

On the “Steps” page, click the “Create” button:

In the window that appears, enter a name in the “Step Name” field, make sure that “Transact-SQL (T-SQL) Script” is selected in the “Type” field, and enter the line in the “Command” field:

BACKUP DATABASE AltaSVHDb TO AltaSVHDb_monday WITH INIT, NOFORMAT, SKIP, NOUNLOAD

Where "AltaSVHDb" is the database name and "AltaSVHDb_monday" is the name of the backup device created in step c) (it will vary by day of the week):

In the previous window, click the “OK” button; as a result, the following line should appear on the “Steps” page:

In order for the database backup file to be immediately copied to another computer on the network, you must repeat steps f) - h), selecting "Operating system (CmdExec)" in the "Type" field of the "New Job Step" window and specifying the following line in the "Command" field:

XCOPY D:\MSSQL\BACKUP\AltaSVHDb_monday.bak \\BACKUP_SERVER\Folder\*.* /Y

Where "D:\MSSQL\BACKUP\AltaSVHDb_monday.bak" is the path specified in step c) (it will vary by day of the week), "BACKUP_SERVER" is the name of the computer to which the copy will be made, and "Folder" is a folder on that computer (it must be shared):

Note. For the file to be copied successfully, the "SQL Server Agent" service must run under a Windows domain account that has write rights to the above-mentioned folder (see also "SQL2005_installation.doc" or "SQL2008_installation.doc"), and that account must also be granted access to SQL Server itself (see the section "Configuring access rights to the database": include the account in the "sysadmin" role on the "Server Roles" page, and leave the "User Mapping" and "Securables" pages unchanged).

On the “Schedules” page, click the “Create” button:

Enter a name in the "Name" field, make sure "Recurring" is selected in the "Schedule type" field and "Weekly" in the "Occurs" field. Check the box next to the desired day of the week (uncheck the rest), and in the "Occurs once at" field specify the time when the backup should start (usually at night):

In the previous window, click the “OK” button; as a result, the following line should appear on the “Schedules” page:

Click the “OK” button.

Note. To check that the created job works, right-click the job of interest in the "SQL Server Agent/Jobs" section and select "Start Job at Step" in the context menu, then select the first step of the job in the window that appears and click "OK". A window will appear showing the progress of the job. If the job ends with an error, a detailed description can be seen by selecting "View History" in the same context menu.






