Install SQL Server R Services (In-Database). Event services in SQL Server


Most of the time the database operates under “normal” conditions. What does this mean?

  • The SQL Server is well “fed”: the amount of RAM allocated to SQL Server should be about 70% of the total size of all database mdf files.
  • The CPU is loaded no more than 50% for at least 90% of the time.
  • There is sufficient disk space. In particular, tempdb is used for sorting; 1C relies on it for virtually everything it does, so it is worth taking care of free space for this database in advance.
  • The database recovery model is Simple. (It has been found empirically that a large ldf file slows 1C down, and the chances of actually restoring from the log file are very doubtful.)

It is also worth considering several nuances:

  • When using the Standard edition of SQL Server, a full index rebuild disconnects all users from the database, so take this into account when scheduling the Weekly maintenance plan (the plan is described below).
  • Keep in mind that the 1C server also consumes memory, especially if you use thin clients or web services.
  • It is better to limit the maximum amount of RAM in the SQL Server properties, so that as it approaches the limit it starts evicting unneeded data from memory in advance, instead of growing until it drives the entire server into a stupor. A sketch of this setting follows below.
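A minimal T-SQL sketch of capping SQL Server memory (the 48 GB value is only an illustration; choose it from your host's actual RAM and the 1C server's needs):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- leave headroom for the OS and the 1C server process
EXEC sp_configure 'max server memory (MB)', 49152;
RECONFIGURE;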

Under normal conditions it is rational to use two maintenance plans: Weekly (once a week) and Daily (on the remaining six days of the week).

Weekly

General form

Going through the maintenance plan tasks:

  1. Index rebuild. This task drops all existing indexes and builds them anew (roughly speaking, an inventory that puts everything back in order).
    Parameters:
    • Target database selection (this appears in almost every task, so I will not mention this parameter again within this article).
    • Object, where we select “Tables and views”.
    • Free space options: if disk space is scarce you can pick the “default” option, but I recommend “Change free space per page percentage”, with a recommended value of 20%. This leaves a reserve of free page space and keeps the indexes healthy longer. WARNING: it increases the size of the database.
    • Sort results in tempdb. No explanation needed, but be warned that tempdb will grow substantially during this step; the sort is meant to speed the process up, so be careful and keep some disk space in reserve.
    • Keep index online is a feature available in the Enterprise edition of SQL Server. It allows reindexing without disconnecting clients.

    !!! ATTENTION !!! In the Standard edition, clients are disconnected from the database for the duration of this step.

    Example settings
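    Roughly the same settings expressed directly in T-SQL, as a sketch only (the table name is a hypothetical 1C-style example, and ONLINE = ON requires Enterprise edition):

    ALTER INDEX ALL ON dbo._InfoRg12345
    REBUILD WITH (
        FILLFACTOR = 80,      -- 20% free space per page
        SORT_IN_TEMPDB = ON,  -- sort intermediate results in tempdb
        ONLINE = ON           -- Enterprise edition only
    );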


  2. Update statistics. This task gathers information about the distribution of data in the database. (Strictly speaking it is of little use right after a full index rebuild, but I still run it.)
    Parameters:
    • Object. The same tables and views as for the index rebuild.
    • Update. Here we update all statistics.
    • Scan type – Full scan.

    This way we update statistics across the entire database.

    Example settings
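    The T-SQL equivalent of the “Full scan” setting, as a sketch (the table name is again a hypothetical example; sp_updatestats covers the whole database but uses default sampling rather than a full scan):

    UPDATE STATISTICS dbo._InfoRg12345 WITH FULLSCAN;
    -- or, for every table in the database, with default sampling:
    EXEC sp_updatestats;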


  3. Execute T-SQL statement. This runs an arbitrary command in the SQL language; in particular we are interested in DBCC FREEPROCCACHE.

    As the name suggests, it clears the procedure cache.

    Example
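    Presumably the task body is just this single statement:

    -- clear the plan (procedure) cache of the entire instance
    DBCC FREEPROCCACHE;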


  4. Check database integrity. Little explanation needed: we make sure nothing is broken. Enable “Include indexes” in the parameters; we did not rebuild them in vain.

    Example settings
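    The underlying command, as a sketch (the database name is a placeholder; leaving out the NOINDEX option corresponds to “include indexes”):

    DBCC CHECKDB (N'MyBase') WITH NO_INFOMSGS;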


  5. Database backup. This deserves a longer discussion because of its many features; it is better to study the topic separately in other guides, since the format of this article does not allow an in-depth treatment of backups.
    Still, I want to warn you about a couple of nuances:
    • SQL Server does not know how to clean up its own container, so if you keep appending backups to one file (also called a “backup device”), you will eventually fill all the free space.
    • SQL Server remembers its backups, so after a manual one-off backup (for example, to move the database elsewhere, or to deploy it into another database for testing), the next differential will be counted from it. To prevent this, check “Copy-only backup” when making such a manual backup; the maintenance-plan backup task has no such option. In any case, in the weekly plan I still recommend the Full backup type.
    • It is also a good idea to verify the backup, so you can sleep better.
    • Compression can generally be used, but be careful: the differential backups must then be compressed as well.

    Example settings
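    A sketch of the copy-only nuance in T-SQL (names and paths are placeholders; CHECKSUM gives a basic verification of the written backup):

    -- regular weekly full backup, part of the backup chain
    BACKUP DATABASE [MyBase]
    TO DISK = N'D:\Backup\MyBase_full.bak' WITH INIT, CHECKSUM;

    -- one-off manual backup that does NOT reset the differential base
    BACKUP DATABASE [MyBase]
    TO DISK = N'D:\Backup\MyBase_copy.bak' WITH COPY_ONLY;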

  6. History cleanup.
    • Backup and restore history.
    • SQL Server Agent job history.
    • Maintenance plan history.

    I clean all of them. As the name implies, this task removes old events from the SQL Server logs. I figure events older than 4 weeks are unlikely to interest me: if there was a problem, it would have been reported within a month.

    Example settings


  7. Notify operator. Again, a point for self-study; as the name implies, it reports problems that occur while the plan is running.

Daily

General form

There is little to say separately: almost everything is the same as in Weekly.
The difference is in the first task, Index reorganize. Unlike a rebuild, reorganization tries to straighten out the existing indexes rather than redo everything from a clean slate. The greater the fragmentation, the more often it is worth running. Under normal conditions, though, once a day is enough to keep the indexes in shape until the next rebuild.

Options
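The T-SQL counterpart, as a sketch (same hypothetical table name as above; REORGANIZE is always online and needs no edition-specific options):

ALTER INDEX ALL ON dbo._InfoRg12345 REORGANIZE;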


You can also use differential backup.

That's all. I repeat: do not take this as dogma; it is simply an option that I developed and tested myself. It is relevant for databases from roughly 6 to 100 GB.

I wish you fast and reliable work.
P.S. Since I am not a full-time DBA, my remarks may well be superficial; I will gladly read your feedback in the comments and in PM.

You probably know that database maintenance is a whole complex of procedures: creating backups, checking integrity, maintaining indexes and statistics, and so on. Many articles and recommendations on this topic have been written on the Internet (including on Habr). Yet when implementing 1C:Enterprise we often find that database maintenance is configured either incorrectly or according to a very simplified scheme. For example, “production” databases get the Simple recovery model just to avoid dealing with transaction log management, even though losing a couple of hours of data is already critical for the company. Sometimes regular maintenance includes shrinking the database files (“so the database doesn't grow”), or statistics are ruined right after the indexes are rebuilt, and other similar mistakes are made. This happens because companies usually have no experienced database administrator, and maintenance falls to someone from the IT staff: an “unwitting” DBA. Such a DBA does not always understand all the risks and responsibilities placed on him.



For database maintenance, Microsoft offers Maintenance Plans in SQL Server Management Studio (SSMS). However, as practice shows, only an experienced DBA can create and configure a high-quality, reliable maintenance plan. Note that reliable maintenance is maintenance that is automated as far as possible, does not require regular manual checks by the administrator, and guarantees that the data can be restored after a failure.

The third-party programs available on the market that can make life easier mostly automate backup creation. The choice of such programs is very wide: they can make compressed and encrypted backups to FTP/Google Drive/Amazon and so on. Backups here can be compared to the shrimp Bubba talked about in the movie “Forrest Gump”: “...you can fry them, boil them, bake them, stew them, you can make shrimp kebabs, Creole shrimp, shrimp gumbo, fried with rice...”.

However, as was said, setting up backups is not everything, so such programs cover only part of the issues.

As a result, the “unwitting” DBA has to read articles, figure out SSMS, develop a backup strategy, hunt for scripts, and configure notifications. It takes a lot of time, and there is always something left to grumble about... And one wants to live serenely! To set it up once and forget it.

In this article I would like to review our program Quick Maintenance & Backup (QMB), which helps you set up database maintenance on Microsoft SQL Server quickly and easily. Certainly, for large and highly loaded databases you cannot do without an experienced DBA and individual performance tuning, but if you deal with many small databases (typically up to 50-80 GB), this utility will be useful to beginners and advanced users alike.

Key features of QMB

  • Easy and fast setup
  • Maintenance of several SQL Servers in one program. SQL Server versions 2000 and later are supported, including Express editions
  • 30 built-in tasks with open scripts, including popular Ola Hallengren scripts:

    – backups - full, differential, transaction log
    – integrity check
    – maintenance of indexes and statistics
    – maintenance of system databases
    – copying backup files with the ability to specify a storage period
    – automated verification of backups through recovery
    – maintaining database copies up to date

  • 7 predefined maintenance policies for Full and Simple recovery models
  • Monitoring free space on SQL Server disks
  • Custom tasks in scripting languages: Transact-SQL, CMD, VBScript, JavaScript, PowerShell, and others
  • Statistics on changes in database sizes. Calculation of average data growth
  • Notifications by e-mail
  • Detailed maintenance log

The short video below shows an end-to-end example of servicing setup using QMB. The following description complements the video and talks about some of the features of the program.

Concept: accessible to beginners, convenient for professionals

On the one hand, we tried to make the program accessible to beginners and implement the most common maintenance scenarios. On the other hand, we wanted it to be convenient for advanced users and support a wide variety of scenarios, including combining Transact-SQL database maintenance with other routine procedures of your applications. For example, in QMB you can create a scenario that first loads data into 1C:Enterprise and only then makes a backup and performs the rest of the maintenance. The result is a scheduler with its own framework for executing T-SQL scripts and batch files (with the ability to store the results of their execution).

Architecture

The program has three components: a GUI client, a QMB Service and a file database for storing its data. When installing QMB, all three program components are installed. Maintenance plans are not created, so the SQL Server Agent service is not required. Read more about architecture.

Maintenance policies, scenarios and tasks

As stated above, QMB does not create maintenance plans on SQL Server. Instead, it creates a maintenance policy, which is saved in local storage (the file database). In essence, a policy is a grouping of databases with similar properties that are maintained by the same rules. The policy contains a list of databases and the settings for storing and copying backups. A policy includes one or more maintenance scenarios. A scenario contains a set of tasks executed sequentially for each database included in the policy. If we draw an analogy with maintenance plans, a scenario can be compared to a nested Maintenance Plan (subplan).

A task in QMB can be one of five types:

  • T-SQL script
  • Creating a backup copy (T-SQL script)
  • Restoring a backup copy (dynamic T-SQL script)
  • Custom script (not T-SQL)
  • Copying backup files (used in the built-in task of the same name)

The program has two sets of built-in tasks. The first set is based on T-SQL scripts obtained from open sources or created by the QMB developers. The second set is based on the scripts of Ola Hallengren (a database administrator from Sweden), who developed three popular stored procedures for database maintenance. Ola's procedures are installed automatically into the master system database when a policy is created from a template.

Maintenance of large and small databases. Policy templates

You can create a maintenance policy from a template or manually from scratch. The current version of the program includes 7 templates, which differ mainly in:
  • Database recovery model: 5 policies with the full recovery model and 2 with the simple one.
  • Index maintenance order: for small databases, index defragmentation is performed every night and statistics are updated during the day; for large databases, index defragmentation is performed once a week.
  • The set of tasks used in the scenarios: maintenance uses either Ola Hallengren's or QMB's tasks/scripts.

For production databases with daily entry of operational information (OLTP databases) it is recommended to choose a policy with the Full recovery model, for example for 1C:Enterprise databases into which data is entered daily. This model allows restoring the database to the current or to an arbitrary point in time.

After creating a policy from a template, you can change any of its parameters - scenarios and schedules, tasks and notification order. In the future, the created policy can be copied for other servers registered in the program.

More information about the differences in templates can be found in the help.

Tasks

As mentioned above, there are 5 types of tasks in QMB - some of their features are described below.

Script execution

Most system tasks are T-SQL scripts. The script itself can be viewed in the task form:

Script texts (T-SQL, CMD, VBS, PowerShell and others) may contain markers that are replaced with the corresponding values before execution. For example, the marker ?DataBaseName? is replaced with the database name, and the marker ?BackupDirectory? with the path to the backup directory specified in the policy. A full list of markers can be found in the help.
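For illustration, a hypothetical templated task might look like this before marker substitution (a sketch; the two markers are the ones named above):

-- QMB replaces the markers with real values before sending this to SQL Server
BACKUP DATABASE [?DataBaseName?]
TO DISK = N'?BackupDirectory?\?DataBaseName?.bak' WITH INIT;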

Maintenance window optimization
It happens that a limited time window must fit not only SQL Server database maintenance but also other routine operations of your application: for example, testing and repairing 1C databases, data dumps via the 1C:Enterprise platform, running exchanges, and so on. Typically the Windows Task Scheduler or the 1C:Enterprise scheduler is used for this. In that case, however, the procedures must be spaced out in time with a generous margin so they are guaranteed not to overlap. As a result, the tasks may not fit into the available time window.

With QMB you can use the maintenance window to the fullest by sequencing the execution of T-SQL scripts and batch files in VBS, JavaScript, CMD, PowerShell, and others within one scenario. Below is a simple example of an alternative backup-copying task using the Robocopy utility:
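A minimal sketch of such a batch file (all paths are placeholders):

rem copy new and changed backup files to the network share, skipping older ones
robocopy "D:\SQLBackup" "\\backup-srv\sql" *.bak *.trn /E /XO /R:3 /W:10 /NP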

Note that a batch file can be executed either on the machine where the program is installed or on the SQL Server side. This lets you operate on backup files on the SQL Server side: for example, you can write a script that archives the latest backup and uploads it to a cloud storage, or implement your own copying algorithm. In the following articles I plan to describe this capability in more detail and provide scripts for working with 1C:Enterprise 8 databases.

Message output and maintenance log
All messages output during script execution are redirected to the program's maintenance log. This applies to messages produced by the print and raiserror commands in T-SQL scripts, as well as messages written to the console by echo commands in CMD scripts and batch files. And that's great, because readable, understandable logs are a huge time saver; as a bonus, the error text is included in the e-mail notification.
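For example, a maintenance script could report its progress like this (a sketch; severity 10 is informational, and WITH NOWAIT flushes the message immediately):

PRINT 'Index maintenance started';
RAISERROR('Processing table %s', 10, 1, '_InfoRg12345') WITH NOWAIT;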

Automated verification of backups through recovery

Having backups does not mean you will be able to restore data after a failure: a restore can fail for a variety of reasons. For example, the chain of backups may be broken, and you will not even know it until you try to restore the data. That is why best practices say a good DBA regularly tests the created backups by restoring from them; there is simply no other 100% reliable way. Microsoft likewise recommends testing every backup at least once. SSMS has no task for automated test restores, and few people are willing to check backups by hand every day.

QMB has a special task that sequentially restores the entire backup chain for each database in the policy: full backup, then differential backup, then transaction log backups. The restore is performed into a temporary test database, which is deleted after its integrity has been checked.
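In essence this boils down to the following T-SQL, shown here only as a simplified sketch (file paths and logical file names are placeholders; the real task builds the chain dynamically):

RESTORE DATABASE [QMB_TestRestore]
FROM DISK = N'D:\Backup\MyBase_full.bak'
WITH MOVE N'MyBase'     TO N'D:\Temp\QMB_TestRestore.mdf',
     MOVE N'MyBase_log' TO N'D:\Temp\QMB_TestRestore.ldf',
     NORECOVERY, REPLACE;
RESTORE DATABASE [QMB_TestRestore]
FROM DISK = N'D:\Backup\MyBase_diff.bak' WITH NORECOVERY;
RESTORE LOG [QMB_TestRestore]
FROM DISK = N'D:\Backup\MyBase_log1.trn' WITH RECOVERY;
DBCC CHECKDB (N'QMB_TestRestore') WITH NO_INFOMSGS;
DROP DATABASE [QMB_TestRestore];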

For example, our company has about 60 small databases, with a total volume of about 100 GB, on a virtual SQL Server. QMB checks that every database can be restored, every night. The check takes about an hour and a half, and it guarantees that all backups have been verified. If the backup chain is broken, you will receive a notification with an error like this:

1. Task “Recovery from backup copies to a temporary database with subsequent integrity check” (database: Buh_Oazis)


Such errors rarely occur, usually due to the carelessness or ignorance of employees. In this case, we simply do an additional full backup.

Tips for those who want to set up such a check:

  1. The restore operation is resource-intensive, so include it only in scenarios that run outside working hours.
  2. Creating the temporary test database and restoring backups into it requires spare disk space of at least the size of the largest database in the policy plus 10% of its volume.
  3. Restoring large databases can take considerable time. Do not enable the verification task unless you are sure the operation will finish within the allotted maintenance window.
  4. Place the task correctly in the scenario. Note that the restore is performed to the current point in time, i.e. the moment the task runs. For example, if the verification task is placed immediately after a full backup is created, only that last backup will be tested, because it alone is enough to restore the database to the current point in time.
  5. If the maintenance window is too short to verify the backups of all databases in the policy, you can verify only selected databases, or spread the checks across the days of the week: for example, verify the backups of databases A and B tonight, and databases C and D tomorrow.
  6. It is not recommended to back up to a network folder, because during a restore the backup files have to be “dragged” across the network, which significantly increases recovery time. It is better to create backups on a local disk and copy them to a network folder daily.

Automated maintenance of database copies up to date

The program also lets you keep copies of databases up to date; for example, a test database for 1C developers can be refreshed every night. To do this, create a task similar to the built-in “Restore from backup copies into a temporary database”, specify in it the source database whose backups are used and the database into which the restore is performed, and then place the task in the nightly scenario. The figure below shows a task that restores backups of the Accounting database into the AccountingCopy database. If the AccountingCopy database does not exist on the SQL Server, it will be created automatically.

During the recovery procedure, the AccountingCopy database will be switched to single-user mode with all user connections disconnected.

Copying backup files

The video showed how the program configures additional copying of backups to a network or local folder. Copying backups insures you, to a certain extent, against corruption of the files, the disk, or the entire server. With a virtual SQL Server, copying backups to a real physical disk lets you quickly restore one or several databases without waiting for the entire virtual machine to be restored.

Below I would like to focus on several features of copying backups using QMB:

  1. The frequency of copying is determined by the schedule of the scenario containing the “Copying backup files” task. The task can be placed in one or several scenarios.
  2. Only new and changed backup files are copied, which reduces network load and allows frequent copying: for example, after every new transaction log backup.
  3. A retention period can be set for the files in the network folder. Thus you can keep backups on the local SQL Server disk for, say, 1 week, and in the network folder for 1 month.
  4. Backup copying can be configured for only selected databases of the policy.

Database recovery

You can restore the database in the standard SSMS console. However, QMB has an analogue with simpler settings:

The “Restore from backup copy” command allows you to:

  • Restore a database from backups to a specified point in time, with automatic selection of the backup chain (if the backups were created on the same SQL Server).
  • Restore a database from a full backup file.
  • Restore backups of one database to another database, including a new one.
  • Perform a database integrity check after it has been restored.

Email Alerts

If you have ever used e-mail alerts in SSMS, you probably know that Database Mail messages contain minimal information. For example, in case of an error a message like this is sent:
TASK IN PROGRESS:
"Database Workers.NestedPlan_1" started at 05/19/2015 17:00:00
DURATION:
0 hour, 0 minute, 5 sec.
STATE:
Error
MESSAGES:
Failed to complete the task. The task was launched by Schedule 9 (MaintenancePlan). The last step performed was Step 1 (Transaction Log Backup).

From such a message you can tell that an error occurred, but to understand what kind of error it is (and assess its severity) you have to dig through the server logs. Moreover, Database Mail sends a notification every time the error occurs, so you may well end up with hundreds of identical messages in your mailbox.

Unlike Database Mail, QMB includes the first 15 lines of the error text in the notification. Usually that is enough to understand the cause and take action; the full text can be viewed in the program's maintenance log. An example of an error message:

The scenario "Resource-intensive tasks for medium OLTP databases (every night)" was executed with errors on the server "Srv05".

Scenario start: 06/06/2015 1:00
End of work: 06/06/2015 1:29
Duration: 00:29:28

Total tasks: 7
Tasks completed: 7
With errors: 1

1. Task “Recovery from backup copies to a temporary database with subsequent integrity check” (database: IPGor)
Message: 4305, Level: 16, State: 1, Line: 21
The log in this backup set starts with LSN 5235000000291100001, which cannot yet be applied to the database. An earlier log backup that includes LSN 5228000000281600001 can be restored.

Message: 3013, Level: 16, State: 1, Line: 21
RESTORE LOG aborted with error.

Message: 50000, Level: 16, State: 1, Line: 119
An error occurred during the recovery process

There is also a mechanism that prevents sending a large number of identical e-mails, for example when the same error recurs regularly.

Licensing policy and cost

The full version of the program can be downloaded from our website. There is a trial period (30 days from the first registration of a SQL Server), after which a license must be purchased for each SQL Server. However, QMB can maintain databases on SQL Express for free (with some restrictions); there are also inexpensive commercial licenses for SQL Express starting from RUB 1,560. The current cost of a Professional license for Russian companies is RUB 7,100. Specifications and prices can be viewed on the website.

Licenses are perpetual and not limited in time. If necessary, a license can easily be transferred from one server to another.

Support
From the moment you purchase a commercial license you can install any program updates for 1 year; to install later updates you will need to extend support.

Conclusion

Sometimes I come across the opinion that third-party programs for SQL Server are nothing but a crutch: that backups or maintenance configured through such programs are by definition worse than those run by the regular SQL Server Agent. In such cases I have to explain that SQL Server understands only Transact-SQL statements and does not care in the least whether it is SQL Server Agent or another program that sends the statement. For example, to check database integrity it must receive the command DBCC CHECKDB, and to make a backup, BACKUP DATABASE. Obviously, the result will always be identical regardless of who sends the command.

I hope you found this review helpful. Remember that poor performance and sudden stops of SQL Server damage the reputation of the entire IT organization, while data loss in most cases leads to even more serious consequences. If you have created a maintenance plan, but are not confident in its reliability, then you are sitting on a time bomb - I strongly advise you to prevent an emergency in advance rather than deal with its consequences.

Thank you for your attention, I am ready to answer your questions in the comments.


So, continuing the topic of maintaining 1C databases, let's take a closer look at the Microsoft SQL Server relational database management system. This product gives us great capabilities for processing, storing, backing up, and restoring databases. This begins a short series of articles devoted to the topic. Everything written below is my personal opinion on the matter and is open to criticism.

This article discusses the process of creating database maintenance plans. Operator notification, as well as an example of restoring a database, will be covered in the following articles.

In the test laboratory we have the following:

  • A Windows Server 2008 Enterprise server: SRV-1C-TEST.
  • Microsoft SQL Server 2008: SRV-1C-TEST.
  • A test database: Buh Firma.

As usual, we set ourselves the task:

Carry out database maintenance between 00:30 and 01:00, in a way that is unnoticeable (or barely noticeable) to the users of the database.

Let's start with the important points. An MS SQL database can have one of three recovery models:

  • Simple.
  • Full.
  • Bulk-logged.

Also, when backing up we are given three backup types to choose from:

  • Full.
  • Differential.
  • Transaction log.

A full backup saves the database mdf file and the transaction log. A differential backup copies the data that has changed since the last full backup was created. A transaction log backup, accordingly, saves only the transaction log itself.

If you select the simple model, you can restore the database only to the last differential or full backup. With the full recovery model, we can restore the database to the minute: for example, by creating a full backup at night and transaction log backups during the day. We will see below where this comes into play. I would also like to quote MSDN: “The bulk-logged recovery model is intended solely as a supplement to the full recovery model. In general it is similar to the full recovery model, except that most bulk operations are minimally logged.”

You can view your database's recovery model by opening the database properties (for example, of Buh Firma) and going to the Options page.

In MS SQL 2008, newly created databases use the Full recovery model by default.

How do you choose a recovery model? You only need to answer one question: is losing the information entered since the last full backup fatal? If the answer is yes, choose the full recovery model; if not, the simple one. The bulk-logged model should be used only for the duration of massive database operations.

Thus, if you chose the simple model, you can restore data only to the moment of the nightly full or differential backup, and users will have to re-enter everything after that point manually. If you choose the Full model, you must back up the transaction log as well, otherwise the log files will grow dramatically. With any recovery model, you should always have a full backup.
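A sketch of what this means in T-SQL for the Full model (paths are placeholders; the nightly full backup is supplemented by log backups during the day, which also keep the ldf file from growing):

-- at night: full backup
BACKUP DATABASE [Buh Firma]
TO DISK = N'D:\Backup\BuhFirma_full.bak' WITH INIT;

-- during the day, e.g. hourly: transaction log backups
BACKUP LOG [Buh Firma]
TO DISK = N'D:\Backup\BuhFirma_log.trn';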

First, we will create a nightly database maintenance plan, which will include the following sequence of actions:

  • Checking database integrity
  • Rebuilding the index
  • Update statistics
  • Clearing the DBMS procedural cache
  • Database backup
  • Cleaning after service
  • Clearing the log

To do this, connect to the MS SQL server using Microsoft SQL Server Management Studio. You can start it from Start - All Programs - Microsoft SQL Server 2008.

Connect to the SQL Server and go to Management - Maintenance Plans. Right-click Maintenance Plans and select New Maintenance Plan. Give it a name: SRV1CTEST.

Before us is the SRV1CTEST window, in which we will build the sequence of actions listed earlier. We immediately see that Nested_Plan1 has appeared. To the right of the nested plan's name is a schedule icon. Click it to open the job schedule properties. Here you can rename the nested plan, set the recurrence to Daily, and set the time. Now it remains to fill our plan with tasks. To do this, drag tasks from the Toolbox on the right side.

Let's start with Check Database Integrity.

After dragging the task in, double-click it. A window opens in which, in the Database line, we select our database Buh Firma. Next, add the Rebuild Index and Update Statistics tasks in the same way, remembering to select the right database in them.

The Rebuild Index procedure re-creates the index with a new fill factor, which improves performance of work in the database.

The Update Statistics task refreshes the information MS SQL keeps about the data in the tables, which also improves performance. After this operation, however, the procedure cache needs to be cleared.

Let's pause and talk about connecting the tasks. The connections define the order of execution. To create a connection between tasks, click a task once and an arrow appears; drag it to the next task. A connection can have three colors (blue, green, and red), each corresponding to a condition under which the transition fires: on mere completion of the previous task (Completion), on successful completion (Success), and on an error in the previous task (Error). You can see these options by right-clicking the arrow drawn between tasks. So, if Rebuild Index should fire only after Check Database Integrity completes successfully, we connect them with an arrow and, right-clicking the arrow, set its mode to Success; as you can see, its color changes to green.

At this point we have 3 tasks in our nested plan. As you may have noticed, there is no task for clearing the procedure cache in the Toolbox, so we will use the Execute T-SQL Statement task. Drag it into the plan and double-click it. In the window that opens, enter the following:

DBCC FREEPROCCACHE

I was prompted to write this article by a problem I recently solved for one of our customers.

More precisely, there were several problems layered on top of each other, as usual (or rather “layered” by the administrators who tried to solve them).
The article does not aim to give a comprehensive, detailed description of the entire Service Broker system with all its capabilities.
It is merely a description of the setting I have encountered most often when troubleshooting.
Probably everyone who reads this blog knows what Service Broker in SQL Server is, but to start from a common point I will say a few words about it.

This useful facility first appeared in SQL Server 2005 and has not changed much since. More precisely, it has gained some new capabilities, but the principles laid down back then have remained the same.
So.
As https://technet.microsoft.com/ru-ru/library/ms166049(v=sql.105).aspx explains, Service Broker helps you create asynchronous, loosely coupled applications in which independent components work together to perform a task. These components exchange messages containing the data needed to complete the task. This section describes the following aspects of Service Broker:

  • dialogs;
  • ordering and coordination of messages;
  • transaction-based asynchronous programming;
  • support for loosely coupled applications;
  • the components of Service Broker.

The main building blocks of the system are:

  • Messages. Blocks of information exchanged between participants.
  • Contract. Messages are collected into a contract so that the exchange of messages is more formalized.
  • Queues. Messages to be sent and received are placed in special queues that both the sender and the recipient have.
  • Service. All of the above components are connected by a service, which is the unit of interaction in the system.
  • Routes. To deliver messages to services, delivery routes are created.
  • Dialogue. A programmable way to send a message from an initiator to a recipient and back.
  • Transactions. All data processing during transmission and receipt is carried out on a transactional basis, excluding data loss.

My test environment includes:

  • Two instances of SQL Server installed on different virtual machines
  • Domain controller.
  • Endpoints are configured to use Windows authentication.
  • Users without logins are created on both servers, using certificates for mutual authentication.
  • Certificates were copied to both servers using backup.

How it all works together. Below is a simplified messaging diagram and its description.

1. When a message is sent from the initiator, it lands in the internal transmission queue, which is the internal system table sys.sysxmitqueue. Querying it returns what we would expect.

To view the messages in the queue, the sys.transmission_queue dynamic view is exposed externally; querying it gives almost the same result. This view, however, has one very useful element: the transmission_status column, which holds information about any error that occurred while transmitting and processing the message.
For example: “Connection attempt failed with error: ‘10061(No connection could be made because the target machine actively refused it.)’.”
The message is also written to the transaction log, which guarantees its transactional processing.
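A quick way to peek at pending messages and their errors (a sketch; these are real columns of the view):

SELECT to_service_name, enqueue_time, message_type_name, transmission_status
FROM sys.transmission_queue;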

All messages of all created services pass through this internal table (and hence through the view) and, if there are no errors, are transmitted onward. Messages can be encrypted with certificates before transmission; whether they are encrypted depends on the settings of the dialog that initiates the connection.
2. After the message is placed in the transmission queue (sys.transmission_queue), its classification is performed.
The essence of classification is determining where the service to which the message is addressed is located. Delivery routes, created at service deployment time, are used to determine the transmission direction. In this case two routes are configured: one points to the remote consumer service, the other to the local SQL Server instance, for delivering local notifications.
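A route of the first kind could be created roughly like this (a sketch; the service name is hypothetical, while the host and port match the endpoint described below):

CREATE ROUTE RouteToTarget
WITH SERVICE_NAME = N'//Target/NotifyService',  -- case-sensitive service name
     ADDRESS = N'TCP://SQL2014-I1:4022';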

We can watch the classification step in SQL Profiler traces if we select the Service Broker events. Note that some of the Service Broker events are located in the Security Audit section.

3. After the message is classified, a connection is established with the (in this case remote) endpoint. Here it is the server named SQL2014-I1, reached over TCP at endpoint port 4022. If an error occurs during the connection, it will show up both in the trace (as shown below) and in the transmission_status column of the sys.transmission_queue view.

4. At the next stage the message is delivered to the recipient's service and acknowledged by the receiving side. If you now query the transmission_queue view on the sender, it returns an empty result, since everything in the sender's queue has been transmitted.

In the trace on the initiator and sender we will see confirmation of the transfer.

5. Since the message has been delivered to the recipient, it should appear in the recipient's input queue.

Next, the target service must read the messages from its queue (preferably one at a time) and send the dialogue initiator a confirmation of receipt. This is done with special syntax, shown in the scripts attached at https://technet.microsoft.com/en-us/library/bb839483(v=sql.105).aspx.
As the data is read the queue empties, and confirmation messages are (or are not) sent to the sender; their composition, and whether they are sent at all, is up to the service developer.
If confirmations are programmed to be sent as the queue is read, they travel the same path as the original message, only in reverse order.
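Reading one message from the target queue looks roughly like this (a sketch; the queue name is hypothetical):

DECLARE @handle UNIQUEIDENTIFIER, @type SYSNAME, @body VARBINARY(MAX);
WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @type   = message_type_name,
        @body   = message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;  -- wait up to 5 seconds for a message
-- ending the conversation sends an EndDialog message back to the initiator:
-- END CONVERSATION @handle;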
In this post we examined in some detail the entire path from sender to recipient; in the next one we will start looking at how to troubleshoot Service Broker at each stage of processing and transmission.

Alexander Kalenik, Senior Premier Field Engineer (PFE), MSFT (Russia)

Quite often, developers of client-server applications need to build a mechanism that notifies a particular client in response to an event on the SQL server. Even more often, such a mechanism is the customer's rose-colored dream for the developer to implement. For example, if shipment limits for some consumer are exceeded, the managers working with that consumer must be notified immediately. Some customers demand (and all customers, without exception, dream) that when one user changes data, the information is refreshed for the other users of the system automatically, and immediately. The feasibility of such a requirement will not be discussed here (there is plenty to criticize in it); only solutions will be discussed. Microsoft SQL Server has a standard means of organizing notifications, alerts, but this tool is so limited that, by and large, it cannot serve as the basis of a guaranteed working mechanism. Here is why: communication with the client program can be accomplished by sending an e-mail or by emulating a “net send” message, and both are inconvenient for receiving notifications.

The e-mail tool is inconvenient for the following reasons:

a) there is no guarantee of delivery; mail may be lost.
b) mail may get stuck at intermediate nodes.
c) the tcp/ip protocol is required.
d) an smtp server and a configured mail profile are required.
e) the sql server must be specially configured so that it can send mail.
f) every client waiting for an event must have a mailbox.
g) an e-mail client has to be built into the client program.

Sending via “net send” is inconvenient for the following reasons:

a) there is no guarantee of delivery, since it is built on the mailslot facility, which offers no such guarantee.
b) correct netbios name resolution is required on the network.
c) the “Client for Microsoft Networks” is required on the client.
d) the standard mailslot is used, which may interfere with other programs.

And in general the alerts tool is inconvenient because every client must be registered as an operator and configured accordingly. So alerts may serve for the simplest cases, but for most cases they are not applicable.

Known implementations and concepts

The general public knows several options for implementing the mechanism for notifying the client by the server. This:

1. Creating an object (an extended stored procedure or an activex object) through which the sql server notifies the client over tcp/ip sockets, with the client listening on a socket, i.e. the client program becomes a tcp/ip server.
Disadvantages of this method:
a) it is tied to the tcp/ip protocol; on a network running only ipx, netbeui or appletalk such a mechanism cannot be used.
b) no asynchrony: if the event is generated from a trigger, there will be performance problems.

2. Creating an object (an extended stored procedure or an activex object) through which the sql server notifies the client via named pipes or mailslots, with the client listening on one or the other.
Disadvantages of this method:
a) correct netbios name resolution is required on the network.
b) the Client for Microsoft Networks is required on the client.
c) when mailslots are used there is no guarantee of delivery.
d) when named pipes are used, client computers running Windows 95/98/me are excluded, since a named pipe can only be created on NT-architecture operating systems.
e) no asynchrony: if the event is generated from a trigger, there will be performance problems.

3. Periodic polling of the SQL server by the client (periodically reading a special event table). This is a very simple method, yet free of most of the drawbacks above. Unfortunately it has two specific drawbacks of its own: a) receipt of a notification can be delayed by up to the polling timeout, and b) with a small timeout the traffic becomes significant. Still, with a small number of sessions this method is quite workable and is undeservedly neglected.

Proposed solution

We offer a solution to the problem that is free of the above (all of the above!) problems, yet quite simple. The idea is this: a binary object is placed on the server that the sql server can call (this can only be an extended stored procedure or an activex object), and it has two unrelated methods.
The first method uses the win32api createevent function to create a win32 kernel object of type "event" with a unique name passed as a parameter. It then calls the win32api waitforsingleobject function, whereupon the thread stops and waits until this kernel object is signaled. Note that any number of such kernel objects can be created; this is limited only by the number of handles in the system.
The second method opens the kernel event object named by its parameter and sets its "signaled" state using the win32api setevent function. Once this happens, the thread in the first method wakes up and returns control to its calling process. The second method does not wait for any result; it returns control to its caller immediately after setting the "signaled" state. This is how asynchrony is achieved.
Now all that remains is to write the t-sql stored procedures that manage this object, and the necessary functionality is "in our pocket". The client program, in a separate thread, launches the stored procedure that waits for an event, passing a unique address attribute as a parameter. This can be a username, a computer name, or any string; the main thing is that it is a unique identifier within the client-server system. The stored procedure returns a result only when an event is generated for that destination. When an event is received, the procedure is restarted. When the program closes, the thread waiting for the event is simply terminated via terminatethread.
At first glance this method has a "terrible" drawback: a constant connection to the sql server that does nothing most of the time. But that is only the first impression. In fact, resources are spent only on maintaining the connection itself, which is something like a few kilobytes per session. That's all! No other tangible resources are consumed, especially given the benefits described below. You also need not worry about extra licenses if you use the "per server" licensing model: one machine can hold any number of connections to the sql server and will still consume exactly one client license.

Ready solution

The solution consists of an activex object in the file algoevt.dll and two stored procedures, spwaitforevent and spraiseevent. Before use, the file must be placed on the server and the activex object registered with the regsvr32.exe system utility. All further work is done through the stored procedures. The ready-made solution implements slightly more functionality than the concept described: in addition to the fact of the event itself, you can also transmit arbitrary information as a string of up to 250 characters. Each procedure has two parameters: the first is the unique address identifier mentioned above, the second is the additional transmitted information. spwaitforevent must be called from the client in a separate thread (the lowest thread priority can be used). When an event is received, the procedure must be restarted. The query execution timeout must be set to infinite.
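The exact signatures are not given here, so treat the following calling pattern as an assumption based on the description above (two parameters: an addressee id and a string of up to 250 characters):

-- client side, in a separate low-priority thread, with an infinite query timeout;
-- returns only when an event addressed to 'manager_ivanov' is raised:
EXEC spwaitforevent 'manager_ivanov', '';

-- server side, e.g. from a trigger: wake the waiting client and pass extra info
EXEC spraiseevent 'manager_ivanov', 'Shipment limit exceeded for customer X';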






