Client-server system architecture


The components of an information network are not equal. Some own resources and are therefore called servers; others access those resources and are called clients. Let's look at how they interact with each other and what the client-server architecture is.

Client-server architecture

The "client-server" architecture is a way of organizing the interaction of structural components in a network in which some components, the servers, provide certain specialized functions (services), while others, the clients, use those services. These functions are usually divided into three groups according to the problems they solve:

  • data input and presentation functions, which handle user interaction with the system;
  • application functions, which are specific to each system;
  • resource management functions, which manage the file system, databases and other components.

For example, even a computer without a network connection contains presentation, application and management components at various levels: the operating system, application and service software, and various utilities. The same components are present on the Internet; the key task is to organize network interaction between them properly.

How the client-server architecture works

Client-server architecture is most often used to build corporate databases, in which information is not only stored but also periodically processed by various methods. The database is the main element of any corporate information system, and its core resides on the server. The most complex operations related to data entry, storage, processing and modification therefore take place on the server. When a user (client) accesses the database (server), the request is processed: the server accesses the database directly and returns a response, either the processing result or an error message. Server computers can handle simultaneous requests from multiple clients for the same file. Working over the network in this way speeds up the applications you use.

Client-server architecture: application of technology

This architecture is used to access various resources over the network: databases, mail servers, firewalls, proxy servers. Developing client-server applications improves the security, reliability and performance of both the applications and the network as a whole. Client-server applications are most often used for business automation.

1.5 Features and benefits of the client/server architecture

What is client/server architecture? To a certain extent, it can be called a return to the "host computer + terminals" model, since the core of such a system is a database server: an application that performs a set of data management actions, such as executing queries, storing and backing up data, tracking referential integrity, checking user rights and privileges, and maintaining a transaction log. At the same time, an ordinary personal computer can serve as the workstation, which lets users keep their familiar working environment (Fig. 5).

Fig.5. Stage 4: Data processing in a client/server architecture

What are the advantages of client-server information systems compared to their counterparts created on the basis of network versions of desktop DBMSs?

One of the most important benefits is the reduction in network traffic when executing queries. For example, if five records must be selected from a table containing a million, the client application sends a query to the server; the server compiles, optimizes and executes it, after which the query result (those five records, not the entire table) is transmitted back to the workstation (provided, of course, that the client application formulates its queries to the server correctly). As a first approximation, you often do not even have to think about whether an index exists that could speed up the search for the required records: if there is one, the server will use it; if not, the query will still be executed, although most likely it will take longer.
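To make this concrete, here is a small sketch (Python with the built-in sqlite3 module; the table and column names are invented for illustration, and 100,000 rows stand in for the million) in which the filtering happens on the database side, so only the matching rows come back to the caller:

```python
import sqlite3

# Server-side filtering: only the matching rows travel back to the
# client, not the whole table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.executemany(
    "INSERT INTO orders (customer) VALUES (?)",
    [("acme" if i % 20000 == 0 else "other",) for i in range(100_000)],
)
# An index can speed the search up, but the query works without one too.
conn.execute("CREATE INDEX idx_customer ON orders (customer)")

rows = conn.execute(
    "SELECT id FROM orders WHERE customer = 'acme'").fetchall()
print(len(rows))  # only these rows cross the "network", not 100,000
```

The same query runs whether or not the index exists; the server simply chooses a slower plan without it.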

The second advantage of the client/server architecture is the ability to store business rules on the server, which avoids duplicating code in the various applications that use a common database. In addition, any editing of the data, including editing by non-standard means, can then only take place within the framework of these rules.

Moreover, for describing server-side business rules in the most typical situations (as in the example with customers and orders) there are very convenient tools: so-called CASE tools (CASE stands for Computer-Aided System Engineering), which let you describe such rules and create the database objects that implement them (indexes, triggers) literally by drawing connections between tables with the mouse, without any programming. In this case, the client application is spared a significant part of the code that would otherwise implement the business rules directly in the application. Some of the data processing code can also be implemented as server stored procedures, further "lightening" the client application, so the requirements for workstations need not be so high. This ultimately reduces the cost of the information system even when an expensive server DBMS and a powerful database server are used.
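As an illustration of keeping a business rule on the server, the following sketch (Python with sqlite3; the table, trigger and rule are invented for the example) defines a trigger that rejects any update violating the rule, no matter which client attempts it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL);
-- Business rule stored on the server: a balance may never go negative.
CREATE TRIGGER no_overdraft
BEFORE UPDATE ON accounts
WHEN NEW.balance < 0
BEGIN
    SELECT RAISE(ABORT, 'balance cannot be negative');
END;
""")
conn.execute("INSERT INTO accounts (balance) VALUES (100)")

error = None
try:
    # Any client, even one editing "by non-standard means", hits the rule.
    conn.execute("UPDATE accounts SET balance = -50 WHERE id = 1")
except sqlite3.IntegrityError as exc:
    error = str(exc)
print(error)
```

Because the trigger lives in the database, every application sharing this database inherits the rule without duplicating its code.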

In addition to the listed capabilities, modern server DBMSs have numerous means of managing user privileges and access rights to various database objects. Typically, a database stores information about its users, their passwords and privileges, and each database object, such as a table, is owned by a user. The owner of an object can grant other users the right to use the object in one way or another (for example, allow some other user to read data from it).

Some server DBMSs support so-called roles, which are sets of rights to access certain database objects. This is convenient when there are many users with the same job responsibilities. Take, for example, a commercial bank. Obviously, a teller may add entries to the table that stores information about account transactions but should not edit the bank's chart of accounts, while other bank employees generally should not modify the table of account transactions at all. If a bank has several dozen tellers, it makes sense, when the server supports it, to define an appropriate role, describe its set of rights to database objects and assign it to the relevant group of users.
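The role idea can be sketched as follows (a conceptual Python model, not the API of any real DBMS; the role, table and action names are invented for the bank example above):

```python
# A role is simply a named set of (object, right) pairs that can be
# assigned to many users at once.
ROLES = {
    "teller": {("account_ops", "INSERT"), ("account_ops", "SELECT")},
}

def allowed(role, table, action):
    """Check whether a role grants a given right on a given object."""
    return (table, action) in ROLES.get(role, set())

print(allowed("teller", "account_ops", "INSERT"),
      allowed("teller", "chart_of_accounts", "UPDATE"))
```

Granting the "teller" role to several dozen users then takes one assignment per user instead of one grant per user per object.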

Modern server DBMSs also have extensive capabilities for data backup, archiving and query optimization. They also, as a rule, support parallel data processing, especially when a multiprocessor computer is used as the database server.

So, in the simplest case, a client-server information system consists of three main components:

  • a database server that manages data storage, access and protection, performs backups, monitors data integrity in accordance with business rules and, most importantly, fulfills client requests;
  • a client that provides the user interface, executes application logic, validates data, sends requests to the server and receives responses from it;
  • network and communication software that enables interaction between client and server through network protocols.

There are also more complex implementations of the client/server architecture, for example, three-tier information systems using an application server, as well as information systems using a Web server, which runs applications that deliver data to the user's Web browser.

1.6. System components

Client

The client computer is the end user's entry point into the client-server environment. The workstation must therefore have reasonably good computing capabilities and be able to make requests for shared system resources. The client uses resources provided to it by one or more servers, and it is the active member of this pairing: it sends requests and receives responses. In this case the client computer corresponds to a specific user. The same workstation may function as a client in some situations and as a server in others. The client can be anything from an Intel 386-based machine to a powerful RISC workstation. These workstations run under a graphical user interface (GUI) and look similar to the user. By interacting with the user, the client effectively hides the server and network, creating the illusion of application integrity and independence from all other processes, machines or networks.

Server

The server performs a series of tasks for numerous clients. The essence of its operation is processing multiple, often spontaneous, client requests, so the server must support multitasking and memory sharing. The operating system software on the server performs the same functions as on the client computer (for example, interrupt handling and communication), as well as the physical processes of writing and reading data. Servers provide program execution, database and file processing, printing, fax transmission, communications, access restriction and network management. The server is quite specialized, i.e. it performs a certain set of functionally related processes.

Network

The network connects workstations and shared resources and is the medium through which data is transmitted. Networks can be classified by their geographic extent: local networks serve individual buildings or several nearby buildings (for example, a campus); metropolitan networks serve entire cities or metropolitan areas; beyond them come regional and national networks.

Applications

The software ties the other three components of the architecture together. Its distinctive feature is that data processing is physically distributed between the client and the server while appearing to the user as a single whole (so-called combined processing).

There are two different kinds of software in client-server technology. The software installed on the server (the back-end tool) handles the collection, storage and processing of data. Examples of such programs are Oracle, Sybase and Ingres.

The software on the client computer (the front-end application) is more interactive, easier to use and more user friendly. Examples include programs such as Developer 2000, Power Builder and Designer 2000.

With the growing popularity of client/server technology, many vendors of the relevant software appeared on the market, which inevitably led to chaos and disorder. As the chaos grew, rules were created requiring developers to follow certain standards. These standards reflect the requirement that the software used on front-end machines and on database machines be compatible.

Each database machine has its own front-end software: for Oracle it is Developer 2000, for Sybase it is Power Builder. A feature of the system, however, is that a front-end tool can also communicate with another vendor's database machine. For example, with an Oracle database a Power Builder application can be used with minor modifications.

1.6.1 Putting all the parts together

The client-server system is a harmonious composition of three separate technologies that work inextricably together to provide efficient storage and fast access to the data.

The software on the client computer, the so-called front-end software, is responsible for the screen and user input/output. The software on the server is responsible for processing the entered information and accessing the data disks. For example, a user on a client machine creates a request for data in the database, the front-end program sends this request through the network to the server. The database server conducts a search and sends back data corresponding to the request (see system design in Fig. 6).

Fig.6. Design of the client-server system
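The request/response cycle described above can be sketched with a minimal TCP client and server (Python sockets; the "query" text and reply format are invented for illustration):

```python
import socket
import threading

def serve_one(server_sock):
    """Back end: accept one request, 'process' it, return the result."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"RESULT for [{request}]".encode())

server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]
worker = threading.Thread(target=serve_one, args=(server_sock,))
worker.start()

# Front end: send the request over the network and read the reply.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"find customer 42")
reply = client.recv(1024).decode()
client.close()
worker.join()
server_sock.close()
print(reply)
```

A real database front end sends SQL rather than free text, but the division of labor is the same: the client formats and ships the request, the server does the search.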

1.7 Multi-tier information systems and the Internet

Distributed information systems represent the next stage in the development of information systems architecture. The need for them arises as information systems grow: the number of users increases, remote branches appear, and centralized storage and processing of data become necessary. With a large number of users, problems arise with the timely and synchronized replacement of client application versions on workstations (especially when the enterprise is geographically dispersed), with maintaining settings, and with overload of the network and database server.

These problems are solved by creating multi-tier information systems with a “thin” client (Fig. 7).

In this case, the problem of maintaining settings is solved by transferring this functionality to an intermediate tier (such software is called middleware), known as an application server. The application server can also be assigned other functions, such as performing calculations, processing data and generating reports. Accordingly, these functions are removed from the client application, so the requirements for both workstation resources and the update frequency of the client application are reduced. With a reasonable distribution of functions between the application server and the client, the latter usually contains only the functionality needed to give the user an interface for viewing and editing data. For this reason it is called a "thin" client (as opposed to the classic "thick" client characteristic of the traditional client/server architecture).
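The division of responsibilities between the tiers can be sketched in a few lines of Python (the classes and method names are purely illustrative, not a real framework):

```python
class DatabaseServer:
    """Bottom tier: stores the data and answers queries."""
    def __init__(self):
        self.rows = {1: {"name": "Alice", "limit": 500}}
    def query(self, key):
        return self.rows[key]

class ApplicationServer:
    """Middle tier (middleware): data access, calculations, reports."""
    def __init__(self, db):
        self.db = db
    def customer_report(self, key):
        row = self.db.query(key)
        return f"{row['name']} (credit limit {row['limit']})"

class ThinClient:
    """Top tier: only the viewing/editing interface; the rest is delegated."""
    def __init__(self, app_server):
        self.app_server = app_server
    def show(self, key):
        return "DISPLAY: " + self.app_server.customer_report(key)

client = ThinClient(ApplicationServer(DatabaseServer()))
print(client.show(1))
```

Notice that the thin client contains no data access or calculation code at all; updating a report format means redeploying the middle tier, not every workstation.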

As for timely updating of thin-client versions, this problem is often solved by delivering applications with the technologies used on the Internet (Web servers, Web browsers, Internet protocols). When such technologies are used for corporate purposes within an enterprise-scale network, the term intranet is usually applied.

Fig.7. Stage 5: Data processing in a multi-tier architecture

The most common ways to deliver thin clients with such technologies today are copying or installing applications from a Web server; one option is to copy an ActiveX component that fully implements the thin client's functionality and runs inside the browser.

Speaking about the use of the Internet/intranet, one cannot help but dwell on the possibility of creating applications for Web servers. Such applications, on the one hand, can be clients of server DBMSs, and on the other hand, they usually generate dynamic HTML pages (including data from those DBMSs) at the request of a client application whose role is played by a Web browser (in this case called an "ultra-thin" client, Fig. 8). Note that such applications have recently become increasingly widespread.

Fig.8. How a Web Application Works

1.8 Why are multi-tier information systems needed?

Information systems created on the basis of the classical client/server architecture, called two-tier systems or systems with a "thick" client, consist of a database server containing tables, indexes, triggers and other objects that implement the business rules of the information system, and one or more client applications that provide the user interface and perform validation and data processing according to the algorithms they contain. Client applications created to access data sources call functions of the application programming interfaces of the client parts of the corresponding server DBMS. These calls are made, for example, through the Borland Database Engine (BDE) library, although this is not strictly required (some Oracle users call Oracle Call Interface functions directly in their applications). Accordingly, such a client application requires that the end user's computer have the client part of the server DBMS in use (and a license for it), and that a set of dynamically loaded libraries, both from the client part and from the BDE (or a library replacing it), be present in memory: database drivers, libraries containing the API functions of the client parts, and so on. Access through ODBC additionally requires the corresponding ODBC driver and the ODBC administrator on the workstation. All this raises the technical requirements imposed on the workstation hardware and ultimately increases the cost of the entire system (Fig. 9).

Another factor that increases the cost of operating an information system is the need to install and configure the BDE, ODBC and the client part of the server DBMS, which is often a very labor-intensive process, especially with a large and heterogeneous fleet of workstations. Note that when creating a distribution kit for a client application you can, as a rule, include the BDE in it, but in the vast majority of cases you cannot include the client part of the server DBMS, since it must be installed according to the rules specified in the license agreement of the server DBMS manufacturer.

There is another important factor: the more complex the configuration that provides access to workstation data, the more often disruptions occur in its operation. According to some Western sources, reconfiguring and maintaining software that allows workstations to access data leads to an average of four days of workstation downtime per year.

There is one more factor, directly related to the considerable popularity of development tools that use the BDE. Today, both on the Russian and the global market, there are many software products (especially encyclopedias and reference books) whose installation also installs the BDE. There is no guarantee that the version of the BDE included with such a product is newer than the one used in the corporate information system, or that its installer will not overwrite the BDE configuration file with its own (this, of course, is contrary to the rules for creating distribution kits, but such cases happen from time to time even with good commercial products). Either event usually disrupts the software that provides access to the data.

Fig. 9. Classic client application ("thick" client).

The way out of this situation is to create systems with a so-called "thin" client, in particular a client that contains neither the BDE nor the client part of the server DBMS. In this case, the functionality related to data access (and often some other functionality) is assigned to another application, usually called an application server, which is itself a client of the server DBMS. In turn, client applications do not access the server DBMS directly by calling client API functions; they access the application server, which serves as their data source, and neither the client part of the server DBMS nor BDE-type libraries are required on the workstation where such a client application runs. Instead, for example, a single dynamically loaded library may be used. The information system thus becomes three-tier, and the application server is the middle link in the chain "thin client - application server - database server" and accordingly belongs to the class of middleware products (Fig. 10).

Fig. 10. Problem solving: thin client and application server

How can this technology be implemented in practice? On the one hand, with a set of components and classes that support the creation of application servers and client parts; on the other hand, with MIDAS, which allows you to launch remote application servers, exchange information about OLE servers between registries, and balance the load across multiple application servers. This chapter will look at the simplest practical examples of implementing such a three-tier system.

1.9 TERMINOLOGY OF DISTRIBUTED DBMS

This section outlines specific terms used in the book concerning the interaction of components and programs. Such interaction is unavoidable in distributed DBMSs, so if you only plan to develop local DBMSs, you can skip this section.

Today there are three parallel, competing technologies for the interaction of objects and programs: MIDAS (Multitier Distributed Application Services Suite) from Inprise (Borland), COM (Component Object Model) from Microsoft Corporation, and CORBA (Common Object Request Broker Architecture) from the independent OMG (Object Management Group) consortium. The basic principles of these technologies and the terms used in them are described below.

1.10 MIDAS technology

MIDAS (Multitier Distributed Application Services Suite) is a product from Inprise (Borland) designed to support server applications created with C++ Builder 3 and Delphi 3. It extends the capabilities provided to developers by Microsoft's DCOM (Distributed Component Object Model) technology and allows such systems to achieve high performance, reliability and protection against failures.

The architecture of a three-tier information system built with MIDAS is shown in Fig. 11.

Fig. 11. Architecture of a three-tier information system using MIDAS

Let's look at what the technologies used in MIDAS are.

Remote Data Broker allows you to create distributed three-tier information systems consisting of a server DBMS, a middle tier and a "thin" client; the middle tier can, in general, consist of several application servers running on several computers. Recall that a "thin" client (an example of which was discussed above) is an application that contains no business rules and only provides a user interface.

The data source for the thin client is the application server, which receives requests from the client to retrieve or change data. When such a request is received, the application server contacts the database server of which it is a client with its own request. Having received the result of its own request from the server, the application server transmits the data to the client.

MIDAS also includes a component that stores data received from the application server in a client-side cache; it has both navigation methods and methods for editing data. In addition, this component has methods for saving the cached data to a file and restoring it from there, implementing the so-called "briefcase model": a data processing model in which the "thin" client edits data mostly without a connection to the server, using only the cache or local storage, and only occasionally connects to the application server to hand over the changed data for further processing.
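A minimal sketch of the briefcase model in Python (the class and field names are invented; MIDAS implements this with its own components):

```python
import json
import os
import tempfile

class BriefcaseCache:
    """Edit data offline in a cache, persist it to a file, sync later."""
    def __init__(self, rows):
        self.rows = rows      # data previously fetched from the app server
        self.dirty = set()    # keys edited while disconnected
    def edit(self, key, field, value):
        self.rows[key][field] = value
        self.dirty.add(key)
    def save(self, path):
        """Persist the cache between disconnected sessions."""
        with open(path, "w") as f:
            json.dump({"rows": self.rows, "dirty": sorted(self.dirty)}, f)
    @classmethod
    def load(cls, path):
        with open(path) as f:
            state = json.load(f)
        cache = cls(state["rows"])
        cache.dirty = set(state["dirty"])
        return cache
    def sync(self, server_rows):
        """On reconnect, push only the changed records to the server."""
        for key in self.dirty:
            server_rows[key] = self.rows[key]
        self.dirty.clear()

# Offline session: edit in the cache, save to a file, restore, then sync.
cache = BriefcaseCache({"order-1": {"qty": 2}})
cache.edit("order-1", "qty", 5)
path = os.path.join(tempfile.mkdtemp(), "briefcase.json")
cache.save(path)
restored = BriefcaseCache.load(path)
server = {}
restored.sync(server)
print(server)
```

Only the dirty records cross the connection at sync time, which is what makes the model tolerant of slow or intermittent links.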

Once the client receives a set of data from the application server, this set can be used by the component that, along with other components and the libraries that support them, makes up the client part of the Remote Data Broker.

Note that the Remote Data Broker gives developers ample means of solving a problem typical of multi-user data access: attempts by several users to edit the same data simultaneously. The locking mechanism used in the traditional two-tier client/server model may be ineffective or even unacceptable here, since the interval between editing a record and storing it in the database can be very long. Therefore, when the application server tries to save a modified record in the database, it locates the record either by the key field or by all fields, depending on the value of the property of the component responsible for this process on the application server, and compares all fields of the record with the original values (those that were in the client's cache when the record was received from the server, before the user changed it). If any fields were modified by another user between the client receiving the original record and the attempt to save the changes, the record can be passed back to the client application for further processing by the user.
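This "compare with the original values" check, commonly known as optimistic concurrency control, can be sketched as follows (Python; the in-memory dict stands in for a database table and the function name is invented):

```python
def save_record(table, key, original, modified):
    """Save only if the stored row still matches what the client fetched."""
    current = table[key]
    if current != original:
        # Another user changed the row in the meantime: hand the current
        # version back to the client for reconciliation instead of saving.
        return False, current
    table[key] = modified
    return True, modified

table = {"42": {"price": 10}}
cached = {"price": 10}            # what this client originally received
table["42"] = {"price": 12}       # a second user edits the row meanwhile
ok, row = save_record(table, "42", cached, {"price": 11})
print(ok, row)
```

No lock is held between fetch and save; the conflict is detected only at save time, which is exactly why the approach suits long disconnected editing sessions.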

Note also that the remote data modules (Remote Data Module objects) that form part of the Remote Data Broker's server part allow you to provide a DCOM interface for the corresponding objects, making them externally manageable and thus turning the application server into a DCOM server. An object is published in this way by selecting the export option from the context menu of the corresponding component in the remote data module while developing the application server.

Business Object Broker searches, on behalf of the thin client, for the desired application servers among the externally accessible servers published in the global registry, which is an open part of the registries of the computers that host application servers. It is used when application servers need to be duplicated, when a client application must be able to connect to another server if the one in use fails, or when clients must be distributed evenly across application servers. Another important component of MIDAS is ConstraintBroker, which makes it possible to apply the business rules of a database server on a thin client. Typically, when designing databases, business rules and referential integrity rules are implemented as database objects such as indexes, triggers and stored procedures. This approach allows these objects to be used by various client applications without writing additional code.

In the case of a classic two-tier client-server information system, when data changes, the client application tries to send the changed record to the server, and the server, in turn, tries to save it in the database by starting the corresponding transaction. If a record does not satisfy the referential integrity conditions defined on the server, the transaction is rolled back and the server returns an error message to the client application, after which the user must edit the data intended to be saved. If such incidents occur frequently, it will result in network congestion and increased server response time.

To reduce the number of incorrect records sent to the server, part of the business rules is sometimes reproduced in the client application. In this case, partial checking of a record's compliance with the business rules is performed without contacting the server, but an incorrect record can still be sent, since the code contained in stored procedures and triggers is usually not reproduced in client applications. Moreover, when the business rules change, such an application must be modified, which entails the labor of installing and configuring the new version on workstations.

When ConstraintBroker is used, this problem is solved differently. The Remote Data Broker not only delivers data to the client application but also accesses the application server's data dictionary to obtain the server's constraints and pass them on to the client. Accordingly, when a record is about to be transferred to the application server, its compliance with the server rules is checked directly in the client application without contacting the database server, which reduces the load on the servers and the network. Note that when the business rules change, the corresponding changes should be made to the application server's data dictionary; this can be done with the utility included in MIDAS, which also lets you enter server constraints and create and modify tables, indexes, triggers, stored procedures and referential integrity rules on the database server.
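The idea of validating against server-published constraints on the client side can be sketched like this (Python; the constraint dictionary and field rules are invented, whereas a real ConstraintBroker obtains the rules from the server's data dictionary):

```python
# Rules "delivered" from the server along with the data: each field maps
# to a predicate the value must satisfy.
CONSTRAINTS = {
    "age": lambda v: 0 <= v <= 150,
    "email": lambda v: "@" in v,
}

def violations(record):
    """Return the fields that break a rule, without contacting the server."""
    return [field for field, rule in CONSTRAINTS.items()
            if field in record and not rule(record[field])]

print(violations({"age": -3, "email": "user@example.com"}))
```

A record that fails locally never reaches the network; only records that pass every delivered rule are sent on to the application server.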

Thus, MIDAS technology makes it possible to create multi-tier information systems with a "thin" client that requires no installation or configuration, while providing protection against application server failures and reducing server and network load by transferring business rules and server constraints to the client application along with the data.

In addition to the obvious advantages of the three-tier architecture, MIDAS also gives developers additional means of increasing the reliability of the resulting information system. For example, if there are several application servers of the same type on the network, the failure of one of them leads to the redistribution of its thin clients among the other servers; the Business Object Broker handles this. It also balances client connections evenly across the application servers.

But that is not all. It is the three-tier architecture that makes it possible to truly centralize data storage and processing while giving access to up-to-date information even when a workstation is located at a considerable distance from the application server and laying a local network is out of the question, since the application server can be reached by other means, such as a modem connection or the Internet. The requirements for the reliability of such a connection are low: with this architecture, data caching is actively used on the workstation, and ConstraintBroker allows modified data to be checked against the server rules directly on the workstation. The use of "thin" clients and application servers managed by MIDAS is therefore one of the solutions for geographically dispersed enterprises and organizations with remote branches, including those in other cities and countries.

1.11 COM technology

COM technology is developed by Microsoft and is designed so that one program (the client) can make an object belonging to another program (the server) work as if that object were part of the client. The two programs may, in general, run on different computers (even ones located in different parts of the world), be written in different languages and run under different operating systems. The computers themselves can also be of different types, for example an IBM-compatible PC and a SUN workstation.

A key concept in COM is the interface. An interface has a unique identifier and a set of parameters describing the methods, events and properties of a shared object. The interface identifier, or IID (Interface Identifier), is a special case of a GUID (Globally Unique Identifier). Win32 includes functions that generate GUIDs, and the likelihood of two GUIDs matching is negligible. The interface parameters generally describe a certain class with a CLSID identifier (Class ID, also implemented as a GUID): the types and names of the fields it uses, the number and types of parameters of the available methods and properties, the names of those methods and properties, and so on. Having obtained the interface of an external COM object, the client can use it in the same way as its own objects. Every COM object has an IUnknown interface, through which the object's main interface can be obtained.
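For a feel of what GUIDs look like, Python's uuid module can stand in for the Win32 generation functions (the variable names merely suggest an IID and a CLSID; this is not COM itself):

```python
import uuid

# Every COM interface (IID) and class (CLSID) is identified by a GUID,
# a 128-bit value conventionally written as 36 hexadecimal characters
# and hyphens. The chance of two independently generated values
# colliding is negligible.
iid = uuid.uuid4()
clsid = uuid.uuid4()
print(len(str(iid)), iid != clsid)
```
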

A COM server is an executable program or DLL that contains one or more COM objects.

Depending on the location of the client and server, three options are possible:

The client and server are located on the same machine and run in the same process (this is how the Delphi program interacts with ActiveX components)", in this case the server is a DLL ; the client and server are located on the same machine, but run in different processes (for example, Exel tables are inserted into a Word document); in this case, the server is a program;

The client and server are located on different machines; the server can be either a program or a DLL; in this case the distributed version of COM, called DCOM, is used.

In the first case, the client, using the object interface, directly accesses the object’s methods in its own address space (Fig. 12).

Fig. 12 Interaction between client and server in one process.

If the server runs in another process or on another machine, two intermediaries sit between the object and the client: the Proxy and the Stub (Fig. 13). The client pushes the call's parameters onto the stack and calls the object's interface method. This call, however, is intercepted by the Proxy, which packages the call's parameters into a COM packet and forwards it to the Stub in the other process, possibly on another machine. The Stub unpacks the parameters, pushes them onto the stack, and calls the required method of the object. Thus the object's method executes in the server process's own address space.
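The Proxy/Stub pair can be sketched in miniature. The Python example below illustrates only the marshalling idea, not real COM: the `Calculator` class, the pickle-based "COM packet", and the in-process "transport" are all invented for this sketch.

```python
import pickle

# The real object, living in the "server" process.
class Calculator:
    def add(self, a, b):
        return a + b

# Stub: unpacks a marshalled call and invokes the method on the real object.
class Stub:
    def __init__(self, obj):
        self.obj = obj

    def dispatch(self, packet):
        method, args = pickle.loads(packet)        # unpack the call parameters
        result = getattr(self.obj, method)(*args)  # call in the server's address space
        return pickle.dumps(result)                # marshal the result back

# Proxy: looks like the object to the client, but forwards every call to the Stub.
class Proxy:
    def __init__(self, stub):
        self.stub = stub  # stands in for the cross-process transport

    def __getattr__(self, name):
        def call(*args):
            packet = pickle.dumps((name, args))    # pack the call into a "packet"
            return pickle.loads(self.stub.dispatch(packet))
        return call

proxy = Proxy(Stub(Calculator()))
print(proxy.add(2, 3))  # the client sees an ordinary method call -> 5
```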

1.12 Technology CORBA

Like COM, CORBA makes extensive use of the object interface. The main difference between CORBA and COM is the layer integrated into it that implements access to remote objects.

In accordance with this technology, the interaction between client and server looks like this (Fig. 14).

Two intermediary objects are created on the client's machine: the Stub and the ORB (Object Request Broker). The Stub acts as an authorized representative of the object: using the object's interface, the client calls the Stub as if it were the object itself.

Fig. 13. Interaction between client and server in different processes.

Fig. 14. Interaction between client and server in CORBA.

Having received a method call, the Stub relays it to the ORB object, which sends a broadcast message to the network. One of the Smart Agent objects installed in the client's network environment (on the local network or on the Internet) responds to this message. A Smart Agent maintains a network directory in which the object servers known to it are registered. It finds the network address of the required server and passes the request to the ORB object on the server machine. Note that data exchange between the ORBs (client and server) and the Smart Agent uses the UDP protocol, which consumes network resources more sparingly than TCP. Through the BOA (Basic Object Adapter), the data reaches a special server object called the Skeleton. The Skeleton pushes the call parameters onto the stack in the object's address space and performs the call itself. The BOA also filters calls to the server object: using its methods, the server can declare some of its fields and properties read-only or hide them completely from a given client. (Because the technology treats the data exchanged between client and server simply as strings of bytes, on systems protected from "outside" clients the client must place its authorization key in the call buffer.)
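The directory role of the Smart Agent can be mimicked in a few lines of Python. This is a toy registry, not CORBA: the class name, the server name, and the addresses are invented, and real Smart Agents discover servers over UDP rather than through direct method calls.

```python
# Toy sketch of the Smart Agent role: a directory in which object servers
# register their network addresses and which resolves a client's request.
class SmartAgent:
    def __init__(self):
        self.registry = {}  # server name -> list of registered addresses

    def register(self, name, address):
        self.registry.setdefault(name, []).append(address)

    def resolve(self, name):
        addresses = self.registry.get(name, [])
        if not addresses:
            raise LookupError(f"no server registered under {name!r}")
        return addresses[0]  # a failed call could be retried against the others

agent = SmartAgent()
agent.register("BankServer", "192.0.2.10:9000")  # invented addresses
agent.register("BankServer", "192.0.2.11:9000")  # second server for failover
print(agent.resolve("BankServer"))  # 192.0.2.10:9000
```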

The highlight of CORBA is the way an object's interface is described. For this purpose a special language, IDL (Interface Definition Language), was developed, very reminiscent of C++. Once the interface has been described in this language, the IDL compiler automatically creates the Stub and Skeleton objects. Developers exchange interface information in terms of a high-level language, while the interface-description compiler translates its text into machine instructions for the specific computer (client or server). As a result, a high degree of independence of the data exchange from the client and server hardware is achieved.

To implement the technology, at least one Smart Agent must exist in the client's network environment. If data exchange takes place on an office local network, the Smart Agent is installed on the host machine (on a file server or a machine running a SQL server); for exchange over the Internet, on one of its nodes. When a server is created, its objects are automatically registered with one or more Smart Agents. Thus a Smart Agent "knows" at which network addresses its servers are located. This also increases the system's reliability: if one of the servers fails, the Smart Agent will retry the call and, if it fails again, switch to another server.

1.13 Some conclusions

Thus, the client-server architecture has a number of significant advantages over the traditional architecture of information systems built on network versions of desktop DBMSs: higher performance, lower network traffic, better means of ensuring security and data integrity, and the ability to define business rules.

We also note that client-server systems can be improved further by moving to a multi-tier architecture with a "thin" client or, if necessary, to Web-server applications.

1.14. Application of Client/Server systems

The use of client-server systems is mainly concentrated in:

Banking;

Air ticket sales system;

Internet networks.

Banking

We are all very familiar with basic banking operations. Here they are:

1. Deposit and withdrawal of cash and non-cash funds;

2. Providing loans;

3. Investments;

4. Carrying out the instructions of the bank's clients.

These are just a few of the many functions performed by a bank these days. The globalization of the economy has led to a wide distribution of bank branches throughout the country. So, for example, a bank client has an account in New York, but he wants to pay a check in Los Angeles, or get cash from an ATM in Florida.

Possibilities we could only dream of before have become reality with the advent of the client-server architecture. Here is how it looks now: a depositor who opened an account in Los Angeles wants to withdraw money in Florida. He finds the nearest branch in Florida and withdraws the money at a bank machine.

How does the transfer happen?

After the user enters the account number, the local terminal transmits the account number and the requested amount to the host computer. The server verifies the account number and checks that the balance is sufficient. If there is enough money in the account, the required amount is debited from it and the new balance is recorded on the server. This is how payments are carried out between the local terminal and the server.
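Assuming a toy in-memory account store, the server-side check-and-debit step might be sketched like this. The account number, field names, and response format are invented for illustration; a real banking server would of course use a database and a transaction mechanism.

```python
# Toy account store on the "server"; account numbers and balances are invented.
accounts = {"40817-001": 500.0}

def handle_withdrawal(account_no, amount):
    """Server-side handling of a withdrawal request from a local terminal."""
    balance = accounts.get(account_no)
    if balance is None:
        return {"ok": False, "error": "unknown account"}
    if balance < amount:                      # check that the balance is sufficient
        return {"ok": False, "error": "insufficient funds"}
    accounts[account_no] = balance - amount   # record the new balance on the server
    return {"ok": True, "new_balance": accounts[account_no]}

print(handle_withdrawal("40817-001", 200.0))  # {'ok': True, 'new_balance': 300.0}
print(handle_withdrawal("40817-001", 900.0))  # {'ok': False, 'error': 'insufficient funds'}
```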

Air ticket sales system

Today, for example, you can book tickets in Connecticut for a TWA flight from New York via St. Louis to San Francisco. This is made possible through the combined technological efforts of client-server configured networks and databases.

When a ticket is needed from New York to St. Louis and from St. Louis to San Francisco, a seat is reserved for the traveler on each flight. The advantage of this system is that, in response to a request made by a passenger from New York about the status of his order, the purchase terminal receives a response specifically about his tickets.

Internet

The Internet is the most striking example of a system organized along client-server lines. The Internet can be called the widest selection of materials of every kind, accessible from anywhere in the world. Some time ago, access to this material was available only to those who knew exactly where it was located. The client-server architecture made it publicly accessible.

It is known that the Internet is a collection of small networks located around the world. In order for all networks to understand each other, it is necessary that they speak the same language, called TCP/IP. Regardless of geographic distance and platform, it becomes possible for client and server machines to talk to each other.

Let's see why the World Wide Web (WWW) can be called the most popular client-server application on the Internet.

The WWW can be thought of as a collection of many pages of information of various kinds - sports, religion, technology, theater, art, music - all stored on computers. A computer holding such information is called a Web server. At the user's request, the client computer contacts the server, the request being made through a browser program. The browser displays the contents of the server in the form of a list, much like the table of contents of a book. The user selects what he wants and requests it from the server. The server provides exactly the information that was requested.
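This request-response loop can be demonstrated with Python's standard library: a minimal "Web server" is started in a background thread and a "browser" fetches a page from it. The page content and handler name are invented; a real browser and Web server differ only in scale, not in the scheme.

```python
import http.server
import threading
import urllib.request

# A minimal "Web server": it returns the same page for every GET request.
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        content = b"<h1>Hello from the server</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(content)))
        self.end_headers()
        self.wfile.write(content)

    def log_message(self, *args):
        pass  # keep the example quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Page)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser": the client requests the page and displays what came back.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as reply:
    status, body = reply.status, reply.read().decode()

server.shutdown()
print(status, body)  # 200 <h1>Hello from the server</h1>
```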

1.15. Examples of development of individual database servers

Although their main functions remain the same, individual database servers differ depending on the application. Some of these differences are listed below:

Compatibility with each other;

Optimization and performance;

Data integrity control;

Transaction processing;

Concurrency, deadlock protection and multi-user access control;

Tamper protection and client authentication;

Backup, data recovery and other database functions.

Regardless of how the concept of client-server architecture is defined (and there are many such definitions in the literature), the basis of the concept is a distributed computing model. In the most general case, the client and the server are two interacting processes, one of which provides some service to the other.

The term "client-server" refers to this architecture software package, in which its functional parts interact according to the “request-response” scheme. If we consider two interacting parts of this complex, then one of them (the client) performs an active function, that is, it initiates requests, and the other (the server) passively responds to them. As the system develops, the roles may change, for example, some software block will simultaneously perform the functions of a server in relation to one block and a client in relation to another.

Server - one or more multi-user processors with a single memory field which, in accordance with users' needs, provide them with computing, communication and database-access functions. A server can also simply be a program that provides services to other programs. Examples of servers are the Apache web server, database servers such as MySQL and Oracle, and network file and Windows printer services.

Client - a workstation for a single user, providing a login mode and the other functions needed at his workplace: calculations, communication, database access, and so on. A client can also simply be a program that uses a service provided by a server program. Examples of clients are MSIE (MS Internet Explorer) and an ICQ client.

Often people simply refer to the computer on which one of these programs runs as a client or server.

In essence, client and server are roles performed by programs. Clients and servers can physically reside on the same computer, and the same program can be both a client and a server at the same time; these are just roles.

If we draw an analogy with society: a bank or a store are "servers" - they provide services to their clients. But the bank may at the same time be a client of some other company, and so on.

Client-server processing is an environment in which application processing is distributed between a client and a server. Machines of different types are often involved, and the client and server communicate using a fixed set of standard exchange protocols and procedures for accessing remote platforms.

DBMSs for personal computers (such as Clipper, dBase, FoxPro, Paradox, Clarion) have network versions that simply share database files in the same PC formats, using network locks to restrict access to tables and records. In this case, all the work is carried out on the PCs; the server is used merely as a shared remote disk of large capacity. This way of working risks data loss through hardware failures.

Compared to such systems, systems built in the Client-Server architecture have the following advantages:

    they allow the size and complexity of the programs running on a workstation to be increased;

    they transfer the most labor-intensive operations to the server, a machine with greater computing power;

    they minimize the possibility of losing information held in the database, thanks to the server's internal data-protection mechanisms, such as transaction tracking, rollback after a failure, and means of ensuring data integrity;

    they reduce the amount of information transmitted over the network several times over.

    In a client-server architecture, the database server not only provides access to shared data, but also handles all processing of that data. The client sends requests to the server to read or change data, which are formulated in SQL. The server itself makes all the necessary changes or selections, while monitoring the integrity and consistency of the data, and sends the results in the form of a set of records or a return code to the client’s computer.

    It allows you to optimally distribute the computing load between the client and the server, which also affects many characteristics of the system: cost, performance, support.
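The division of labor described above - the client sends only SQL text, the server returns only the result set - can be imitated with Python's built-in sqlite3 module. sqlite3 is an embedded engine rather than a network server, so this is only an analogy; the table and values are invented.

```python
import sqlite3

# Stand-in for the database server: holds the data and executes SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(10.0,), (25.5,), (7.0,)])

# The "client request": only the SQL text travels to the engine,
# and only the result set comes back - not the raw table data.
total, = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
print(total)  # 42.5
conn.close()
```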

    1.2. History

    The architecture and term "client-server" were first used in the early 80s. The first applications with a client-server architecture were databases.

    Before this there was no clear division: a program usually did everything itself, including working with data in the file system, presenting data to the user, and so on. Over time, the volume and business-criticality of data grew, and this began to give rise to problems (performance, security, and others).

    Then it was decided that it would be convenient to keep the database on a powerful dedicated computer (the server) and let many users on small computers (the clients) use it over the network - and so it was done.

    Essentially, the "explosion" in the popularity of client-server technology was triggered by IBM's invention of SQL, a simple query language for relational databases. Today SQL is the universal standard for working with databases. More recently this "explosion" has continued with the advent of the Internet, where literally every interaction follows the client-server architecture.

    1.3. Protocols

    The server and client on the network “talk” to each other in a “language” (in the broad sense of the word) that is understandable to both parties. This “language” is called a protocol.

    In the case of a bank, the protocol can be called the forms that the client fills out.

    In our case, examples of protocols:

    FTP (File Transfer Protocol)

    HTTP (Hyper Text Transfer Protocol)

    SMTP (Simple Mail Transfer Protocol)

    IP (Internet Protocol)

    MySQL Client/Server Protocol

    Note that protocols exist at different levels. Layering classifications differ, but one of the best-known is the OSI (Open Systems Interconnection) model, which has 7 layers.

    For example, HTTP is an application (seventh - highest) layer protocol, and IP is a network (third) layer protocol.

    1.4. Distribution of functions in the client-server architecture

    In the classical two-tier client-server architecture, the three main parts of the application have to be distributed across two physical modules. Typically the data-storage software is located on the server (for example, a database server) and the user interface on the client side, while data processing has to be divided between the client and server parts. This is the main drawback of the two-tier architecture, from which several unpleasant features follow that greatly complicate the development of client-server systems.

    Developing such systems is quite complex, and one of the most important tasks is precisely deciding how the application's functionality should be distributed between the client and server parts. In trying to solve this problem, developers arrive at two-tier, three-tier and multi-tier architectures, depending on how many intermediate links are inserted between the client and the server.

    The main task that the client application solves is providing an interface with the user, i.e. entering data and presenting results in a user-friendly form, and managing application scenarios.

    The main functions of a server DBMS are ensuring the reliability, consistency and security of data, managing client requests, and fast processing of SQL queries.

    The entire logic of the application - application tasks, business rules - in a two-tier architecture is distributed by the developer between two processes: client and server (Fig. 1).

    At first, most of the application's functions were performed by the client, with the server only processing SQL queries. This architecture is called "thick client - thin server".

    The emergence of the ability to create stored procedures on the server, i.e. compiled programs with internal operating logic, has led to a tendency to transfer an increasing part of the functions to the server. The server became more and more “fat”, and the client became “thinner”.

    This solution has obvious advantages, for example, it is easier to maintain, because all changes need to be made in only one place - on the server.

    The models discussed above have the following disadvantages.

    1. “Thick” client:

    – complexity of administration;

    – software updates become more complicated, since they must be rolled out simultaneously across the entire system;

    – the distribution of privileges becomes more complicated, since access is restricted by tables rather than by actions;

    – the network is overloaded by transferring unprocessed data across it;

    – weak data protection, since it is difficult to distribute privileges correctly.

    2. “Fat” server:

    – implementation becomes more complicated, since languages like PL/SQL are not well suited to developing such software and there are no good debugging tools;

    – the performance of programs written in languages like PL/SQL is noticeably lower than that of programs created in other languages, which matters for complex systems;

    – programs written in DBMS languages usually do not work reliably; an error in them can bring down the entire database server;

    – the resulting programs are completely unportable to other systems and platforms.

    To solve these problems, multi-level (three or more levels) client-server architectures are used. A multi-level client-server architecture can significantly simplify distributed computing, making it not only more reliable, but also more accessible.

    However, the language in which stored procedures are written is not powerful or flexible enough to conveniently implement complex application logic.

    Then a tendency arose to entrust the execution of application tasks and business rules to a separate application component (or several components) that can run either on a dedicated computer - the application server - or on the same computer where the database server runs. This is how three-tier and multi-tier client-server architectures emerged.


    Fig. 1. Distribution of functions between client and server

    Special middleware software has emerged that should ensure the joint functioning of many components of such a multi-component application. Such applications are flexible, scalable, but difficult to develop.




Client-server is a computing or network architecture in which tasks or network load are distributed between service providers, called servers, and service customers, called clients. Often, clients and servers communicate over a computer network and can be either different physical devices or software.

Advantages

Makes it possible, in most cases, to distribute the functions of a computing system among several independent computers on the network. This simplifies maintenance: in particular, replacing, repairing, upgrading or moving a server does not affect the clients.

All data is stored on the server, which is usually far better protected than most clients. It is also easier to enforce access control on the server, so that only clients with the appropriate rights can reach the data.

Allows you to combine different clients. Clients with different hardware platforms, operating systems, etc. can often use the resources of one server.


Flaws

Server failure can render the entire computer network inoperable.

Supporting the operation of this system requires a separate specialist - a system administrator.

High cost of equipment.


Multi-tier client-server architecture

Multi-level client-server architecture is a type of client-server architecture in which the data processing function is carried out on one or more separate servers. This allows you to separate the functions of storing, processing and presenting data for more efficient use of the capabilities of servers and clients.

Special cases of multi-level architecture:

Three-tier architecture


Dedicated server network

A dedicated server network (client/server network) is a local area network (LAN) in which network devices are centralized and managed by one or more servers. Individual workstations or clients (such as PCs) must access network resources through the server(s).

Introduction

A lot has already been written about client-server technology. The excitement around the topic two years ago has now clearly subsided; articles in the press and hallway conversations have taken on a calm, businesslike tone and, as a rule, discuss specific aspects of applying the technology. No one now asks "To be or not to be for the client-server architecture?" - everyone knows the answer: "To be!"

However, many readers may have only recently become interested in this topic, so, in our opinion, it is worth returning to it again and calmly, in a businesslike manner, discussing what client-server architecture is, why it is needed and how to approach it.

What is client-server architecture?

Generally speaking, a client-server system is characterized by the presence of two interacting independent processes - a client and a server, which, in general, can be executed on different computers, exchanging data over the network. According to this scheme, data processing systems based on DBMS, mail and other systems can be built. We will, of course, talk about databases and systems based on them. And here it will be more convenient not just to consider the client-server architecture, but to compare it with another - file-server.

In a file-server system, data is stored on a file server (for example, Novell NetWare or Windows NT Server) and processed on workstations, which, as a rule, run one of the so-called "desktop DBMSs" - Access, FoxPro, Paradox, etc.

The application on the workstation is “responsible for everything” - for creating the user interface, logical data processing and for direct data manipulation. The file server provides only the lowest level of services - opening, closing and modifying files, I emphasize - files, not a database. The database exists only in the "brain" of the workstation.

Thus, several independent and inconsistent processes are involved in the direct manipulation of data. In addition, to carry out any processing (search, modification, summation, etc.), all data must be transferred over the network from the server to the workstation (see Fig. Comparison of file-server and client-server models)

In a client-server system, there are (at least) two applications - a client and a server, sharing among themselves those functions that, in a file-server architecture, are performed entirely by an application on a workstation. Data storage and direct manipulation is carried out by a database server, which can be Microsoft SQL Server, Oracle, Sybase, etc.

The user interface is created by the client, for the construction of which you can use a number of special tools, as well as most desktop DBMSs. Data processing logic can be executed on both the client and the server. The client sends requests to the server, usually formulated in SQL. The server processes these requests and sends the result to the client (of course, there can be many clients).

Thus, one process is responsible for directly manipulating the data. At the same time, data processing occurs in the same place where the data is stored - on the server, which eliminates the need to transfer large amounts of data over the network.

When do you need a client-server architecture?

Even a very detailed analysis of the features of the client-server architecture may not answer the question “What will this give me?” Let's look at this architecture in terms of business needs. What qualities does a client-server bring to an information system:

Reliability

Anyone who has even once been in the role of database administrator at the moment the database "died" - because a server or workstation froze, the power failed, or some other misfortune struck - will never again neglect reliability issues (if, of course, he manages to keep the role). If you have not yet played this role, I hope you have the imagination to replay this thriller in your head, and the prudence to keep your database (and yourself) as safe as possible. How does the client-server architecture help here?

The database server performs data modification based on a transaction mechanism, which gives any set of operations declared as a transaction the following properties:

atomicity - under any circumstances, either all of the transaction's operations are performed or none of them;

consistency - the data is left in a consistent state when the transaction completes;

isolation - transactions initiated by different users do not interfere with one another;

durability (failure resistance) - once a transaction completes, its results will not be lost.

The transaction mechanism supported by the database server is much more efficient than the similar mechanism in desktop DBMSs, because the server centrally controls the operation of transactions. In addition, in a file-server system, a failure on any of the workstations can lead to data loss and its inaccessibility to other workstations, while in a client-server system, a failure on the client almost never affects the integrity of the data and their availability to other clients.
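The atomicity property can be demonstrated with Python's sqlite3 (again an embedded engine, used here only to illustrate the transaction mechanism; the accounts and amounts are invented). A failure injected between the two halves of a transfer leaves the data exactly as it was:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])
conn.commit()

def transfer(amount, fail_midway=False):
    # 'with conn' commits the transaction on success and rolls it back
    # on any exception - atomicity in miniature.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? "
                     "WHERE name = 'alice'", (amount,))
        if fail_midway:
            raise RuntimeError("simulated failure between the two updates")
        conn.execute("UPDATE accounts SET balance = balance + ? "
                     "WHERE name = 'bob'", (amount,))

try:
    transfer(80.0, fail_midway=True)
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100.0, 'bob': 50.0} - the half-done transfer was undone
```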

Scalability

Scalability is the ability of the system to adapt to the growth in the number of users and the volume of the database with an adequate increase in the performance of the hardware platform, without replacing software.

It is well known that the capabilities of desktop DBMSs are seriously limited - five to seven users and 30-50 MB, respectively. The numbers, of course, represent some average values; in specific cases they can deviate in either direction. Most importantly, these barriers cannot be overcome by increasing hardware capabilities.

Systems based on database servers can support thousands of users and hundreds of GB of information - just give them the appropriate hardware platform.

Safety

The database server provides powerful means of protecting data from unauthorized access that are impossible in desktop DBMSs. At the same time, access rights can be administered very flexibly - down to the level of individual table fields. In addition, direct access to tables can be prohibited altogether, with users interacting with the data only through intermediate objects - views and stored procedures. So the administrator can be sure that no overly clever user will read what he is not supposed to read.

Flexibility

In a data application, there are three logical layers:

user interface;

logical processing rules (business rules);

data management (one should not confuse logical layers with physical levels, which will be discussed below).

As already mentioned, in a file server architecture, all three layers are implemented in one monolithic application running on a workstation. Therefore, changes in any of the layers clearly lead to modification of the application and subsequent updating of its versions on workstations.

In the two-tier client-server application shown in Fig. 1, as a rule, all user-interface functions are implemented on the client and all data-management functions on the server, while the business rules can be implemented either on the server, using server programming mechanisms (stored procedures, triggers, views, etc.), or on the client. In a three-tier application a third, intermediate tier appears that implements the business rules - the most frequently changed components of the application (see Fig. "Three-tier client-server application model").

The presence of not one, but several levels allows you to flexibly and cost-effectively adapt the application to changing business requirements.

Let's try to illustrate all of the above with a small example. Let's assume that a certain organization's payroll rules (business rules) have changed and the corresponding software needs to be updated.

1) In a file-server system, we "simply" make changes to the application and update its versions on the workstations. But this "simply" entails maximum labor costs.

2) In a two-tier client-server system, if the payroll algorithm is implemented on the server (for example, as a stored procedure), we change only the server part, leaving the client application untouched.

3) In a three-tier system, the algorithm is executed by a business-rules server, implemented, for example, as an OLE server, and we update one of its objects without changing anything either in the client application or on the database server.
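The three logical layers of the payroll example can be caricatured in a few Python functions. The rule itself (time and a half after 40 hours), the records, and the formatting are all invented; the point is only that changing the rule touches a single function, leaving the other layers alone.

```python
# Business-rules layer: the payroll rule lives in one place.
# The time-and-a-half-after-40-hours rule is an invented example.
def payroll_rule(hours, rate):
    overtime = max(0, hours - 40)
    return rate * min(hours, 40) + 1.5 * rate * overtime

# Data layer: stand-in for the database server, returning stored records.
def data_layer():
    return [("alice", 45, 10.0), ("bob", 38, 12.0)]

# Presentation layer: stand-in for the client UI, formatting the results.
def presentation_layer():
    for name, hours, rate in data_layer():
        print(f"{name}: {payroll_rule(hours, rate):.2f}")

presentation_layer()
# alice: 475.00
# bob: 456.00
```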

Stages of building a client-server system.

Suppose you are using an application today that is implemented in a file-server architecture using Microsoft Access, and you are thinking about its development. The following steps may be considered.

1. Transfer the database to Microsoft SQL Server, keeping the interface and operating logic unchanged. At the same time, you will not take advantage of all the advantages of the client-server architecture, but you can rest assured that your data is securely stored.

2. Develop a full-fledged two-tier client-server application using the same Access - SQL Server combination, which works very well. This can be done, for example, by gradually changing individual components of the application obtained in step 1. An alternative would be to develop a completely new application using Visual Basic, Delphi, or any other of the dozens of available tools as a client.

3. If you are planning serious growth of your organization, a three-tier architecture will allow you to distribute the growing load between servers more flexibly and minimize the costs of maintaining and developing the system.

We hope this article has given you a general understanding of client-server architecture and its benefits. In future issues we plan to talk in more detail about Microsoft SQL Server and building systems based on it.

As a rule, computers and programs that are part of an information system are not equal. Some of them own resources (file system, processor, printer, database, etc.), others have the ability to access these resources. The computer (or program) that manages a resource is called the server of that resource (file server, database server, computing server...). The client and server of a resource can be located either within the same computer system or on different computers connected by a network.

The basic principle of the client-server technology is to divide the application functions into three groups:

· data entry and display (user interaction);

· applied functions specific to a given subject area;

· resource management functions (file system, database, etc.)

Therefore, in any application the following components are distinguished:

· data presentation component

· application component

· resource management component

Communication between components is carried out according to certain rules, which are called the “interaction protocol”.
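The three-component split described above can be sketched in code. This is a minimal illustration, not a real framework; all class and method names here are hypothetical.

```python
# Sketch of the three application components: presentation, applied
# functions, and resource management (all names are illustrative).

class ResourceManager:
    """Resource management component: owns the data store."""
    def __init__(self):
        self._rows = {1: ("Alice", 50000)}   # toy in-memory "database"

    def fetch(self, emp_id):
        return self._rows.get(emp_id)

class Application:
    """Applied functions: business logic specific to the subject area."""
    def __init__(self, resources):
        self._resources = resources

    def monthly_salary(self, emp_id):
        name, annual = self._resources.fetch(emp_id)
        return name, round(annual / 12, 2)

class Presentation:
    """Data entry and display: user interaction only."""
    def __init__(self, app):
        self._app = app

    def show(self, emp_id):
        name, salary = self._app.monthly_salary(emp_id)
        return f"{name}: {salary} per month"

# Each component talks only to the next one through a fixed interface --
# that interface plays the role of the "interaction protocol".
ui = Presentation(Application(ResourceManager()))
print(ui.show(1))  # Alice: 4166.67 per month
```

Distributing these three components across machines in different ways is exactly what produces the client-server models classified in the next section.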

5.1.2. Client-server interaction models

The research company Gartner Group, which specializes in information technology, has proposed the following classification of two-tier client-server interaction models (these models are called two-tier because the three application components are distributed in different ways between two nodes):

Historically, the distributed data presentation model appeared first; it was implemented on a general-purpose mainframe with non-intelligent ("dumb") terminals attached. Data management and user interaction were combined in one program; only the "picture" generated on the central computer was transmitted to the terminal.

Then, with the advent of personal computers (PCs) and local area networks, remote database access models were implemented. For some time, the basic architecture for PC networks was the file-server architecture. In this case, one of the computers is a file server, while the clients run applications combining a presentation component and an application component (DBMS and application program). The exchange protocol is a set of low-level file-system operation calls. This architecture, usually implemented with desktop DBMSs, has obvious disadvantages: high network traffic and a lack of unified access to resources.

With the advent of the first specialized database servers, a different implementation of the remote database access model became possible. Here the DBMS kernel runs on the server, and the exchange protocol is the SQL language. Compared to a file server, this approach reduces the network load and unifies the client-server interface. However, network traffic remains quite high, and applications still cannot be administered satisfactorily, since presentation and application functions are combined in one program.
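The key point of this model is that only SQL text and result rows cross the network, not raw data files. A rough sketch, using Python's standard `sqlite3` module as a stand-in for a remote database server (the table and values are invented for illustration):

```python
import sqlite3

# sqlite3 stands in here for a remote database server: the client ships
# a short SQL string and receives only the result rows, not the files
# the DBMS stores the table in (as a file server would have to send).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0), (3, 75.0)])

# Only this query and one result row cross the "network" boundary.
(total,) = conn.execute(
    "SELECT SUM(balance) FROM accounts WHERE balance > ?", (80,)).fetchone()
print(total)  # 350.0
```

In a file-server architecture, by contrast, the client-side DBMS would read the entire table over the network to compute the same sum.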

Later, the concept of an active server appeared, built on a stored-procedure mechanism. This allowed part of the application component to be moved to the server (the distributed application model). Procedures are stored in the database dictionary, shared among multiple clients, and executed on the same computer as the SQL server. The advantages of this approach: application functions can be administered centrally, and network traffic drops significantly, since calls to stored procedures are transmitted rather than full SQL queries. The disadvantage is that the tools for developing stored procedures are limited compared to general-purpose languages such as C and Pascal.
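The active-server idea can be modeled as follows: procedures live in a dictionary on the server, and clients transmit only a short named call. This is a toy sketch of the principle, not a real DBMS interface; all names are hypothetical.

```python
# Sketch of an "active server": stored procedures are kept in the
# server's dictionary, and clients send only a name plus arguments.

class ActiveServer:
    def __init__(self):
        self._data = {"acc1": 100.0, "acc2": 50.0}   # toy data store
        self._procedures = {}                        # procedure dictionary

    def create_procedure(self, name, body):
        self._procedures[name] = body

    def call(self, name, *args):
        # Executed on the server, next to the data.
        return self._procedures[name](self._data, *args)

def transfer(data, src, dst, amount):
    """Application logic moved to the server as a stored procedure."""
    data[src] -= amount
    data[dst] += amount
    return data[dst]

server = ActiveServer()
server.create_procedure("transfer", transfer)

# The client transmits only the call, not the SQL that implements it.
print(server.call("transfer", "acc1", "acc2", 30))  # 80.0
```

The network cost of the call is proportional to its name and arguments, which is why this model cuts traffic compared with shipping full SQL queries.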

In practice, a mixed approach is now usually used:

· the simplest application functions are performed by stored procedures on the server;

· more complex functions are implemented directly in the client application program.

A number of commercial DBMS vendors have announced plans to implement stored-procedure execution mechanisms based on the Java language. This corresponds to the "thin client" concept, in which the client's only remaining function is to display data (the remote data presentation model).

Recently, there has also been a trend towards the distributed application model. A characteristic feature of such applications is the logical division of the application into two or more parts, each of which can be executed on a separate computer. The parts communicate with each other by exchanging messages in a pre-agreed format. In this case, the two-tier client-server architecture becomes three-tier, and in some cases may include even more tiers.
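The phrase "messages in a pre-agreed format" is the essence of multi-tier design. A minimal sketch, using JSON as the agreed wire format; the field names and prices are illustrative assumptions, not a real protocol:

```python
import json

# Sketch: presentation and application tiers exchange JSON messages.
# In a real system the string would travel over a network socket.

def application_tier(raw_request: str) -> str:
    """Middle tier: parse the request, apply business logic, reply."""
    request = json.loads(raw_request)
    if request["action"] == "quote":
        price = {"widget": 9.99, "gadget": 24.50}[request["item"]]
        reply = {"status": "ok", "price": round(price * request["qty"], 2)}
    else:
        reply = {"status": "error", "reason": "unknown action"}
    return json.dumps(reply)

# The presentation tier builds a message instead of touching data directly.
wire_message = json.dumps({"action": "quote", "item": "widget", "qty": 3})
print(json.loads(application_tier(wire_message)))
```

Because each tier sees only messages, the application tier can later be moved to another machine, or split further, without changing the presentation code.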

5.1.3. Transaction monitors

When an information system combines a large number of different information resources and application servers, the question of optimally managing all its components arises. Specialized tools are used for this: transaction processing monitors (often simply called "transaction monitors"). Here the concept of a transaction is broader than in database theory: it is not an atomic action on the database, but any action in the system, such as issuing a message, writing to an index file, or printing a report.

To communicate between the application program and the transaction monitor, a specialized API (Application Program Interface) is used, implemented as a library of basic function calls (establish a connection, call a specific service, etc.). Application servers (services) are also created using this API, and each service is assigned a unique name. The transaction monitor, having received a request from an application program, passes the call to the appropriate service (spawning the necessary process if it is not already running), and after the application server processes the request, returns the results to the client. The XA protocol was developed for the interaction of transaction monitors with database servers. This unified interface allows several different DBMSs to be used within one application.
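The dispatch behavior described above can be sketched as a minimal monitor-like object: services are registered by name, and a service process is spawned lazily on its first call. This is a simplified model of the idea, not the API of any real transaction monitor; all names are hypothetical.

```python
# Minimal sketch of a transaction-monitor-style dispatcher.

class TransactionMonitor:
    def __init__(self):
        self._registry = {}   # service name -> factory (how to start it)
        self._running = {}    # service name -> started instance

    def register(self, name, factory):
        """Assign a unique name to a service, as the API in the text does."""
        self._registry[name] = factory

    def call(self, name, *args):
        # Spawn the service on first use, then route the request to it.
        if name not in self._running:
            self._running[name] = self._registry[name]()
        return self._running[name](*args)

class PrintReport:
    """A toy application server: here a 'transaction' is printing a report."""
    def __call__(self, title):
        return f"report '{title}' queued"

tpm = TransactionMonitor()
tpm.register("print_report", PrintReport)
print(tpm.call("print_report", "payroll"))  # report 'payroll' queued
```

A real monitor would add what the list below describes: moving services between nodes, restarting them on failure, and adding services without stopping the system.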

Using transaction monitors on large systems provides the following benefits:

· The concentration of all application functions on the application server provides significant independence both from the implementation of the user interface and from the specific method of resource management. This also ensures centralized administration of applications, since the entire application is located in one place, and is not “spread out” across the network among client workstations.

· The transaction monitor is able to start and stop application servers itself. Depending on the network load and available computing resources, it can transfer or copy some server processes to other nodes. This achieves load balancing.

· Dynamic system configuration is provided, i.e. a new resource server or application server can be added without stopping the system.

· System reliability increases, since in the event of a failure the application server can be moved to a backup computer.

· It becomes possible to manage distributed databases (for more details, see the next paragraph).

5.2. Processing distributed data

In modern business there is often a need to give geographically remote groups of users access to the same data. An example is a bank with several branches. These branches may be located in different cities, countries or even on different continents, yet financial transactions (moving money between accounts) must be processed across branches, and their results must be visible simultaneously in all branches.

There are two approaches to organizing the processing of distributed data.

1. Distributed database technology. Such a database includes fragments of data located on various network nodes. From the users' point of view, it looks as if all data is stored in one place. Naturally, such a scheme places stringent demands on the performance and reliability of communication channels.

2. Replication technology. In this case, the data is duplicated at each network node. In this approach:

· Only data modification operations are transmitted, not the data itself

· transmission can be asynchronous (non-simultaneous for different nodes)

· data is located where it is processed

This reduces the requirements on communication-channel bandwidth; moreover, if the communication line of one computer fails, users at the other nodes can continue to work. However, it allows the database states seen by different users at the same moment in time to differ, so conflicts between two copies of the same record cannot be ruled out.
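The replication properties listed above can be sketched directly: each node keeps a log of modification operations (not data), ships the log asynchronously, and the replica lags until the operations are replayed. This is a toy model of operation-based replication; all names are hypothetical.

```python
# Sketch of operation-based replication between two branch nodes:
# only modification operations travel, and transmission is asynchronous.

class Node:
    def __init__(self):
        self.records = {"acc1": 100.0}   # local copy of the data
        self.log = []                    # operations not yet shipped

    def update(self, key, delta):
        self.records[key] += delta
        self.log.append((key, delta))    # log the operation, not the data

    def replicate_to(self, other):
        for key, delta in self.log:      # ship and replay the operations
            other.records[key] += delta
        self.log.clear()

branch_a, branch_b = Node(), Node()
branch_a.update("acc1", -40)             # states diverge: B still sees 100.0
print(branch_b.records["acc1"])          # 100.0  (asynchronous lag)
branch_a.replicate_to(branch_b)
print(branch_b.records["acc1"])          # 60.0   (converged after replay)
```

The window between the two prints is exactly the period in which different users see unequal database states; if branch_b had updated the same record in that window, the two copies would conflict.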






