Which network architectures are most common today? Basic definitions and terms


For all the variety of specific implementations of modern information networks, the vast majority are based on one of a handful of typical architectures.

Today it is customary to define five typical architectures for building information networks:

· terminal-host computer architecture;

· peer-to-peer architecture;

· client-server architecture;

· computer-network architecture;

· intelligent network architecture.

It should be noted that within each of these standard architectures there is some variety in implementation approaches, but essentially they all fall within the boundaries of one of the basic concepts of building an information network listed above.

3.1. TERMINAL-HOST COMPUTER ARCHITECTURE

Terminal-host architecture (terminal-host computer architecture) is a concept of building an information network in which all data processing is carried out by one host computer or a group of host computers.

This architecture defines two types of data terminal equipment (Data Terminal Equipment, DTE). The first type carries out data storage and processing, network routing, and network management; it is represented by the so-called main (central) computers, or mainframes. Host computers generally interact, via multiplexers and demultiplexers, with the second type of terminal equipment, the terminals (Fig. 3.1), whose tasks are:

· sending commands to the mainframe to organize sessions and perform tasks;

· entering into the mainframe the data necessary to complete tasks;

· receiving the results of calculations from the mainframe.

The main computer together with a group of terminals forms a centralized data processing complex. Here the functions of the interacting partners (mainframe and terminals) are sharply asymmetrical.

At the time this architecture appeared, personal computers (PCs) did not yet exist. The inequality of the partners was therefore determined by the complexity and high cost of the mainframes produced, as well as by the desire to make the equipment at specialists' workplaces simple, compact, and inexpensive. The network uses a single OS, which runs on the mainframe.

The mainframe is a classic example of the centralization of computing: all information and computing resources, and the storage and processing of huge amounts of data, are concentrated in a single complex.

The main advantages of the centralized terminal-host architecture stem from its ease of administration and information security. All terminals were of the same type, so the devices at user workstations behaved predictably and could be replaced at any time. The costs of maintaining terminals and communication lines were easy to predict.

A classic example of a network architecture with a central computer is the famous ALOHA network ("hello" in Hawaiian), the network of the University of Hawaii. It began operating in 1970 and provided communication between the central computer in Honolulu and terminals located throughout the islands of the Hawaiian archipelago. The ALOHA network did not use multiplexers and demultiplexers; instead, two radio-frequency channels were allocated: one for transmitting messages from the mainframe to the terminals, the other for the opposite direction. The second channel was shared among the terminals using a random access method.
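The random-access idea used on ALOHA's shared return channel can be sketched with a small simulation (Python; the terminal count, slot model, and transmission probability are illustrative assumptions, not parameters of the real network):

```python
import random

def simulate_aloha(num_terminals=20, num_slots=1000, p_transmit=0.05, seed=1):
    """Estimate how often random-access transmissions collide.

    Each terminal independently decides to transmit in a time slot with
    probability p_transmit; a slot with two or more transmissions is a
    collision and every frame in it is lost (as in slotted ALOHA).
    """
    rng = random.Random(seed)
    ok, collided, idle = 0, 0, 0
    for _ in range(num_slots):
        senders = sum(1 for _ in range(num_terminals) if rng.random() < p_transmit)
        if senders == 0:
            idle += 1
        elif senders == 1:
            ok += 1
        else:
            collided += 1
    return ok, collided, idle

ok, collided, idle = simulate_aloha()
print(f"successful slots: {ok}, collisions: {collided}, idle: {idle}")
```

Raising the offered load (more terminals or a higher transmission probability) increases the share of collided slots, which is why random access works well only at light load.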

In networks of this architecture, terminals were gradually replaced by PCs. As a result, some data processing functions previously performed by mainframes migrated to the PCs. Switching and routing tasks were also removed from the central computers and transferred to switching nodes. Instead of multiplexers and demultiplexers, data communications equipment (DCE) came into use.

As a result, the pure "terminal-host" architecture was gradually supplanted by other architectures, above all by the client-server architecture.

3.2. PEER-TO-PEER NETWORK ARCHITECTURE

Peer-to-peer architecture is a concept of an information network in which every workstation can both provide and consume resources. Such a network (architecture) is sometimes simply called a peer network.

A peer-to-peer network is characterized by the fact that all workstations (computers) in it have equal rights (Fig. 3.2) and access each other's resources symmetrically. Thanks to this, a user can perform distributed data processing and can work with application programs, external devices, and files located on any system. Peer-to-peer architecture provides:

· connecting a peer-to-peer network as a single client to a large client-server local area network;



· facilitated organization of teleconferences.

The role that each computer plays in its interactions with other computers on the network when providing a given service is not fixed, as it is, for example, in a client-server architecture, but depends on the context of the operation being performed and on the current situation. In some cases a computer acts as a server, in others as a client.
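This context-dependent role can be illustrated with a minimal in-process sketch (Python; the `Peer` class, its methods, and the file names are invented for illustration):

```python
class Peer:
    """A node that can act as a server (sharing files) or as a client."""

    def __init__(self, name, shared_files=None):
        self.name = name
        self.shared = dict(shared_files or {})    # filename -> contents

    def serve(self, filename):
        """Server role: answer another peer's request."""
        return self.shared.get(filename)

    def fetch(self, other, filename):
        """Client role: request a file from another peer."""
        data = other.serve(filename)
        if data is not None:
            self.shared[filename] = data          # this peer can now serve it too
        return data

a = Peer("A", {"report.txt": "quarterly figures"})
b = Peer("B")
print(b.fetch(a, "report.txt"))   # here B is the client, A the server
print(a.fetch(b, "report.txt"))   # roles reverse: A fetches from B
```

After the first transfer both peers hold the file, so either one can serve it: the server/client roles are a property of the operation, not of the machine.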

This architecture is characterized by simplicity of networking and easy expansion.

The main advantages of peer-to-peer architecture over the terminal-host and client-server architectures are its low cost, ease of operation, and good match to the way user groups actually work. It provides convenient means of exchanging data and of retrieving the necessary programs and data from any computer on the network.

The use of a peer-to-peer architecture does not exclude the use of elements of other architectures in the same network. In this case it is customary to speak of an integral architecture, in which some types of interaction use symmetric protocols and others use asymmetric (relative to network objects) protocols.

In the early days of personal computers, peer-to-peer networking was the generally accepted way of sharing files and peripherals. Peer-to-peer networks consume rather few computer resources, but intensive network activity noticeably slows down the local work of the user whose machine is acting as a server.

The main restrictions for peer-to-peer networks are as follows:

· The number of computers in a peer-to-peer network should be limited to about 10-30, depending on the intensity of message exchange on the network.

· Workstations connected by a peer-to-peer network are not normally used as application servers. These networks are designed to share resources such as files, multi-user databases, and peripheral equipment (printers, scanners, etc.).

· Applications on a computer serving as a server in a peer-to-peer network suffer when its resources are used by others. The degree of performance degradation can be controlled by assigning higher priorities to local tasks, but this slows down other users' access to the machine's shared hardware and software resources.

A problem with peer-to-peer networking arises when workstations are disconnected from the network: the services provided by a disconnected station disappear from the network. This makes it necessary to monitor the status of network components, which can be disconnected independently at any time. Solving security and data-integrity problems also becomes more difficult.

Peer-to-peer architecture is effective in small local networks. In large networks (with many stations), including local ones, it gives way to the client-server architecture.

One of the first peer-to-peer network systems was IBM's PC LAN, developed in cooperation with Microsoft. PC LAN was easy to install and manage and did not require a network administrator to keep it running. However, when the number of computers in such a network approached a hundred, performance deteriorated sharply.

Initially, the departmental ARPANET (see Section 5 of this manual), which later became the original core of the Internet, was also based on a peer-to-peer architecture.

In the 1990s, peer-to-peer architecture, because of its inherent limitations, lost ground to other network architectures. Interest in this network concept has now revived, however, not least because of the sharply increased performance of workstations. Research projects, system prototypes, and software products devoted to the subject have appeared, and the search for new technical solutions continues. It can be assumed that many next-generation distributed systems will be based on a peer-to-peer architecture.

3.3. CLIENT-SERVER ARCHITECTURE

Client-server architecture (CSA) is a concept of network organization in which the bulk of the network's resources is concentrated in servers that serve their clients.

The technological revolution brought about by the advent of the PC made it possible in many cases to have computing and information resources on the user's desktop and to manage them at will through a windowed graphical interface. The increase in PC performance made it possible to move parts of the system (the user interface, the application logic) onto the PC, directly at the workplace, while leaving the data processing functions on the central computer. The system became distributed: one part of its functions runs on the central computer, the other on a personal computer connected to it through a communication network. Thus the client-server model of interaction between computers and programs on the network emerged, and on this basis application development tools for implementing information systems began to evolve.

As the name suggests, CSA defines two types of components interacting in the network: servers and clients. Each is a complex of interrelated application programs. Servers provide the resources users need; clients use those resources and provide convenient user interfaces.

The terms "client" and "server" refer to the roles that different components play in a distributed computing environment. The client and server components do not have to run on different machines, although most often that is exactly what happens: the client application runs on the user's workstation, and the server on a special dedicated machine.

The client generates a request to the server to perform the corresponding function. For example, a file server stores shared data, organizes access to it, and transfers it to the client. Data processing is distributed, in one proportion or another, between the server and the client. Recently, a client that takes on a large share of the processing has come to be called a "thick" (fat) client.
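The request-response split can be sketched with a toy TCP "file server" on the loopback interface (Python; the one-line protocol of sending a bare file name, and the file contents, are assumptions made for brevity):

```python
import socket
import threading

# Shared data lives on the server; the client only sends a request
# and receives the result (illustrative names and contents).
FILES = {"readme.txt": b"shared network data"}

def serve_one(server_sock):
    """Accept a single client, look up the requested file, send it back."""
    conn, _ = server_sock.accept()
    with conn:
        name = conn.recv(1024).decode().strip()   # the client's request
        conn.sendall(FILES.get(name, b"<not found>"))

server = socket.socket()
server.bind(("127.0.0.1", 0))                     # ephemeral port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

# Client side: form the request and let the server do the lookup.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"readme.txt")
reply = client.recv(1024).decode()
client.close()
server.close()
print(reply)
```

The asymmetry is visible in the code: only the server holds the data and performs the lookup; the client merely formulates the request and presents the result.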

Modern client-server architecture distinguishes four groups of objects: clients, servers, data, and network services. Clients reside in systems (for example, computers) at user workstations. Data is stored mainly on servers. Network services are shared application programs that interact with clients, servers, and data. In addition, the services manage distributed data-processing procedures and inform users about changes occurring in the network.

Depending on the complexity of the application processes being performed and the number of working clients, two-tier and three-tier architectures are distinguished.

The simplest is the two-tier architecture (Fig. 3.3). Here clients perform simple data-processing operations, implement the interface for interacting with the server, and send it requests. Most of the processing is done by the server, which for this purpose often holds a database (DB) and in that case is called a database server. The database server is responsible for storing and managing the data and for its integrity, and also allows simultaneous access by several users. The client part is a "fat client": an application that concentrates the basic rules of the system and hosts the software user interface.
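A two-tier interaction can be sketched as follows (Python; an in-memory SQLite database stands in for the database server, and the discount rule is a hypothetical business rule kept on the fat client):

```python
import sqlite3

# The "database server" (an in-memory SQLite database stands in for it)
# only stores data and answers queries; all business rules live on the client.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 120.0), (2, 80.0), (3, 40.0)])
db.commit()

def fat_client_report(conn, discount_threshold=100.0):
    """Fat client: fetch raw rows, then apply the (hypothetical) discount
    rule and format the result locally."""
    rows = conn.execute("SELECT id, amount FROM orders").fetchall()
    report = []
    for order_id, amount in rows:
        discounted = amount * 0.9 if amount >= discount_threshold else amount
        report.append(f"order {order_id}: {discounted:.2f}")
    return report

for line in fat_client_report(db):
    print(line)
```

Note that every raw row crosses the client-server boundary before the rule is applied, which is exactly the bandwidth cost discussed below for two-tier systems.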

Despite the simplicity of building such an architecture, it has serious drawbacks, the most significant being high demands on network resources and bandwidth, and the difficulty of updating software, since the interaction logic is distributed between the client and the database server. In addition, with a large number of clients, the hardware requirements of the database server (the most expensive node in any information system) grow.

The next step in the development of client-server architecture was the introduction of a middle tier that implements the mechanisms for managing database access (Fig. 3.4). In a three-tier architecture, application servers and database servers are used instead of a single server. Their use makes it possible to dramatically increase the performance of the local network.

The advantages of this architecture are obvious. The application server can now connect to various databases. The database server is freed from the task of parallelizing work between different users, which significantly reduces its hardware requirements. The requirements on client machines can also be reduced, since resource-intensive operations are performed by the application server and the client now handles only data visualization. For this reason, this version of CSA is often called the "thin client" variant.
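The same example restructured as three tiers makes the difference visible (Python; SQLite again stands in for the database server, and the function names are illustrative):

```python
import sqlite3

# Tier 3: the database server (SQLite stands in for it) only stores data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 120.0), (2, 80.0)])
db.commit()

# Tier 2: the application server holds the business logic and is the only
# component that talks to the database.
def app_server_total(conn):
    (total,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
    return total

# Tier 1: the thin client only visualizes what the application server returns.
def thin_client_show(total):
    return f"total orders: {total:.2f}"

print(thin_client_show(app_server_total(db)))
```

Only the final aggregate crosses over to the client; the raw rows never leave the middle tier, which is why the client can be "thin".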

But the bottleneck here, as in two-tier CSA, remains the increased demand on network bandwidth, which severely restricts the use of such systems in networks with unstable communications and low bandwidth (mobile networks, GPRS, and in some cases the Internet).

Further development of CSA is associated with the multi-tier (N-tier) architecture, which uses program-partitioning tools or distributed objects to divide the computing load among as many application servers as the current load requires. With a multi-tier system model, the number of possible client locations is significantly greater than with the two- and three-tier models.



Data terminal equipment (DTE) is a class of network devices that generate or receive data in accordance with accepted protocols, process and store it, and operate under the control of an application process.

Along with DTE, another type of equipment is widely used in networks: DCE (Data Communication Equipment), which is neither the source nor the final recipient of the data.

Multiplexer – a device that combines several separate information flows into a common aggregated stream that can be transmitted over one physical communication channel.

Demultiplexer – a device that divides the aggregated stream back into its component flows.
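The pair of definitions can be illustrated by a toy multiplexer/demultiplexer that tags each unit of data with its channel (Python; the channel names and tagging scheme are invented for illustration):

```python
def multiplex(streams):
    """Combine several named flows into one aggregated stream by tagging
    each unit of data with its channel id (a simple form of multiplexing)."""
    aggregated = []
    for channel, items in streams.items():
        for item in items:
            aggregated.append((channel, item))
    return aggregated

def demultiplex(aggregated):
    """Split the aggregated stream back into its component flows."""
    streams = {}
    for channel, item in aggregated:
        streams.setdefault(channel, []).append(item)
    return streams

inputs = {"terminal-1": ["LOGIN", "RUN"], "terminal-2": ["STATUS"]}
link = multiplex(inputs)              # one stream over one physical channel
print(demultiplex(link) == inputs)    # the demultiplexer restores the flows
```

Real multiplexers interleave by time or frequency rather than concatenating, but the round trip (demultiplex after multiplex recovers each flow intact) is the defining property.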

Terminal – a device for the prompt input and output of information, used by a remote user to interact with a computer or network.

The term "mainframe" generally has two interpretations: 1) a high-performance computer with a significant amount of RAM and external memory, designed for centralized storage of large volumes of data and for intensive computational work; 2) a computer with the IBM System/360, 370, 390, or zSeries architecture.

Peer-to-peer– from English peer-to-peer - equal to equal.

Application server – a computer that allows other computers to run the operating system and applications from it rather than from their local drives.

The most common types of servers are file servers, database servers, print servers, e-mail servers, Web servers, and others. Recently, multifunctional application servers have been deployed intensively.

Architecture refers to the organization of interaction between network nodes. In the standard classification there are three main architectures, which correspond to the main types of LAN.

Architecture type - bus

The specific feature of this architecture is that each LAN node transmits data onto a common backbone, so any network node has access to the information on the backbone.

Architecture type - star

The specific feature is that each LAN node is given a separate channel for communication with the central node of the network. From a node, information goes to the server, which can pass it on to other nodes.

Architecture type - ring

The specific feature is that the network nodes are connected in sequence, so data can be exchanged directly only between neighboring nodes. If data must be exchanged with other LAN nodes, it is passed along in transit.
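Transit forwarding around a ring can be sketched as follows (Python; the node names are illustrative):

```python
def ring_deliver(nodes, src, dst):
    """Pass a frame around the ring, node by node, until it reaches dst.

    Returns the list of nodes the frame visited (intermediate nodes
    forward it in transit; the addressee keeps it)."""
    path = []
    i = nodes.index(src)
    while True:
        i = (i + 1) % len(nodes)      # hand the frame to the next neighbour
        path.append(nodes[i])
        if nodes[i] == dst:           # the addressee keeps the frame
            return path

hops = ring_deliver(["A", "B", "C", "D"], src="A", dst="C")
print(hops)   # the frame transits B before reaching C
```

Delivery from "C" back to "A" would transit "D", showing that the path length depends on where the two nodes sit on the ring.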

Data channels

Whereas previously only wired local area networks were used, wireless ones are now popular in many situations. Currently the following types of LAN are distinguished:

    wired cable LAN

    fiber optic cable LAN

    wireless LANs

Typically, LANs are built on a building's structured cabling system (SCS) as the data transmission medium. When designing a LAN of any type, reliability and security requirements should be taken into account. As a rule, to ensure security there is a single authorization point for all applications and resources on the local network. A wireless network is used where a traditional wired LAN is impossible or unprofitable.

2.1. General points when organizing a LAN

A computer connected to the network is called a workstation, a computer that provides its resources is called a server, and a computer that has access to shared resources is called a client.

Several computers located in the same room, or functionally performing the same type of work (accounting or planning, registration of incoming products, etc.), are connected to each other and combined into a workgroup so that they can share resources: programs, documents, printers, a fax, and so on.

A workgroup is organized so that the computers in it contain all the resources necessary for normal operation. As a rule, a workgroup of more than 10-15 computers includes a dedicated server: a fairly powerful computer holding all the shared directories and the special software for managing access to the whole network or part of it.

Groups of servers are combined into domains. A domain user can log on to the network at any workstation in that domain and gain access to all of its resources. Typically, in server networks, all shared printers are connected to print servers.

From the point of view of how computer interaction is organized, networks are divided into peer-to-peer networks (Peer-to-Peer Network) and networks with a dedicated server (Dedicated Server Network). In a peer-to-peer network each computer plays an equal role. However, growth in the number of computers and in the volume of transmitted data leads to network bandwidth becoming a bottleneck.

The widely used Windows 95 (98) operating system, developed by Microsoft, is designed primarily for working in peer-to-peer networks and for supporting the computer as a client of other networks.

Windows 95, like Windows for Workgroups, can serve as a network server. Compatibility with old MS-DOS and Windows 3.x network drivers is ensured. The new operating system allows you to:

Share hard disks, printers, fax cards, organize peer-to-peer local area networks (LAN);

Use remote access and turn an office computer into a dial-up server;

Support 16-bit DOS network drivers.

The network administrator can set the overall design of the desktop system, determine what operations will be available to network users, and control the configuration of the desktop system.

A network located in a relatively small area is called a local area network (LAN, Local Area Network). In recent years, the structure of LANs has become more complex because of the creation of heterogeneous networks connecting different computer platforms. The ability to hold video conferences and use multimedia raises the requirements for network software. Modern servers can store binary large objects (BLOBs) containing text, graphics, audio, and video files. In particular, if you need to obtain the HR department's database over the network, BLOB technology will let you transfer not only personal data (last name, first name, patronymic, year of birth) but also portraits in digital form.

Two technologies for using the server

There are two technologies for using a server: file-server technology and client-server architecture. In the first model, a file server stores most of the programs and data; at the user's request, the necessary program and data are sent to the workstation, where the information is processed.

In systems with a client-server architecture, data exchange is carried out between the client application (front-end) and the server application (back-end). Data is stored and processed on a powerful server, which also controls access to resources and data. The workstation receives only the results of the query. Developers of information processing applications commonly use this technology.

The use of large and complex applications has led to the development of multi-tier, primarily three-tier, architecture with the data placed on a separate database server. All calls to the database go through the application server, where they are combined. Reducing the number of database calls reduces the DBMS license fees.

6. Topology is the configuration of the connections between network elements. Topology largely determines such important characteristics of a network as its reliability, performance, cost, security, etc.

One approach to classifying LAN topologies distinguishes two main classes: broadcast and sequential.

In broadcast configurations, each personal computer transmits signals that can be received by the other computers. Such configurations include the "common bus", "tree", and "star with a passive center" topologies. A star network can be thought of as a kind of tree whose root has a branch to each connected device.

In sequential configurations, each physical segment transmits information to only one personal computer. Examples of sequential configurations are: random (arbitrary connection of computers), hierarchical, "ring", "chain", "star with an intelligent center", "snowflake", and others.

Let's briefly look at the three most common (basic) LAN topologies: star, bus, and ring.

In a star topology, each computer is connected through a dedicated network adapter by a separate cable to the central node, which is either a passive connector or an active repeater.

The disadvantage of this topology is its low reliability, since failure of the central node shuts down the entire network, as well as the usually long total cable length (which depends on the actual placement of the computers). Sometimes, to increase reliability, a special relay is installed in the central node to disconnect failed cable runs.

The common-bus topology uses a single cable to which all the computers are connected; the computers transmit information on it one at a time.

The advantages of this topology are, as a rule, a shorter cable length and higher reliability than a star, since the failure of an individual station does not disrupt the operation of the network as a whole. The disadvantages are that a break in the main cable makes the entire network inoperable, and that information security at the physical level is poor, since messages sent by one computer to another can, in principle, be received by any other computer.
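The broadcast nature of the bus, and the security weakness it implies, can be sketched as follows (Python; the station names and frame contents are illustrative):

```python
def bus_broadcast(stations, sender, dest, frame):
    """On a shared bus every station sees every frame; only the addressee
    keeps it, but any station could read it (the security weakness)."""
    seen_by = []
    accepted = {}
    for station in stations:
        if station == sender:
            continue
        seen_by.append(station)        # the signal reaches every station
        if station == dest:
            accepted[station] = frame  # only the addressee keeps the frame
    return seen_by, accepted

seen, accepted = bus_broadcast(["A", "B", "C", "D"],
                               sender="A", dest="C", frame="payroll data")
print(seen)      # every other station observed the frame
print(accepted)  # but only C accepted it
```

Filtering by destination address is a convention enforced by the network adapter, not by the medium, which is why a bus segment offers no physical-level confidentiality.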

In a ring topology, data is relayed from one computer to the next. If a computer receives data not addressed to it, it passes the data on around the ring; the addressee does not forward data intended for itself.

The advantage of the ring topology is higher reliability in the event of cable breaks than with a common bus, since there are two paths to each computer. Its disadvantages include greater cable length, lower performance than a star (though comparable to a common bus), and poor information security, as with the common-bus topology.

The topology of a real LAN may be exactly the same as one of the above or include a combination of them. The structure of the network is generally determined by the following factors: the number of computers being connected, requirements for reliability and efficiency of information transfer, economic considerations, etc.

Network architecture refers to the set of standards, topologies, and low-level protocols necessary to create a functional network.

Over the years, many different architectures have been developed in network technology. Let's look at them.

Token Ring.

The technology was developed by IBM in the 1970s and later standardized by the IEEE 802 project as the 802.5 specification. It has the following characteristics:

· physical topology – “star”;

· logical topology – “ring”;

· access method – token passing;

· data transfer rate – 4 or 16 Mbit/s;

· transmission medium – twisted pair (2 pairs are used);

· maximum cable length (without repeaters):

UTP – 150 m (at 4 Mbit/s),

STP – 300 m (at 4 Mbit/s) or 100 m (at 16 Mbit/s);

· maximum segment length with repeaters:

UTP – 365 m,

STP – 730 m;

· maximum number of computers per segment – 72 or 260 (depending on cable type).

Computers in Token Ring networks are connected using MSAU hubs and unshielded or shielded twisted pair (optical fiber can also be used).

The advantages of the Token Ring architecture include its long transmission range when repeaters are used (up to 730 m). It can be used in real-time automated systems.

The disadvantages of the architecture are the rather high cost and the low compatibility of the equipment.

ARCNet.

The ARCNet network environment was developed by Datapoint Corporation in 1977. It did not become an official standard, but it conforms to the IEEE 802.4 specification. This simple, flexible, and inexpensive architecture for small networks (up to 256 computers) has the following parameters:

· physical topology – “bus” or “star”;

· logical topology – “bus”;

· access method – token passing;

· data transfer rate – 2.5 or 20 Mbit/s;

· transmission medium – twisted pair or coaxial cable;

· maximum frame size – 516 bytes;

· maximum segment length:

for twisted pair – 244 m (for any topology),

for coaxial cable – 305 m or 610 m (for bus or star topology, respectively).

Hubs are used to connect the computers. The main cable type is RG-62 coaxial; twisted pair and optical fiber are also supported. BNC connectors are used for coaxial cable and RJ-45 connectors for twisted pair. The main advantages are the low cost of the equipment and the relatively long range.

AppleTalk.

A proprietary network environment proposed by Apple in 1983 and built into Macintosh computers. It includes a whole set of protocols corresponding to the OSI model. At the network-architecture level, the LocalTalk protocol is used, which has the following characteristics:



· topology – “bus” or “tree”;

· access method – CSMA/CA;

· data transfer rate – 230.4 Kbps;

· data transmission medium – shielded twisted pair;

· maximum network length – 300 m;

· maximum number of computers – 32.

The very low bandwidth has led many manufacturers to offer expansion adapters that let AppleTalk work with higher-bandwidth network environments: EtherTalk, TokenTalk, and FDDITalk. In local networks built on IBM-compatible computers, the AppleTalk network environment is practically never found.

100VG-AnyLAN.

The 100VG-AnyLAN architecture was developed in the 1990s by AT&T and Hewlett-Packard to combine Ethernet and Token Ring networks. In 1995 it received the status of IEEE standard 802.12. It has the following parameters:

· topology – “star”;

· access method – by request priority;

· data transfer speed – 100 Mbit/s;

· transmission medium – twisted pair category 3, 4 or 5 (all 4 pairs are used);

· maximum segment length (for HP equipment) – 225 m.

Due to the complexity and high cost of the equipment, it is currently practically not used.

Architecture for home networks.

Home PNA.

In 1996, a number of companies came together to create a standard that would allow home networks to be built over ordinary telephone wiring. The result of this work was the appearance in 1998 of the Home PNA 1.0 architecture, followed by Home PNA 2.0 and Home PNA 3.0. Their brief characteristics:

Table No. 1. Comparison of Home PNA standards.

All of these standards use the most popular media access method, CSMA/CD; the medium is telephone cable, and RJ-11 telephone connectors are used. Home PNA devices can also work with twisted pair and coaxial cable, in which case the transmission range increases significantly.

It should be remembered that telephone lines in Russia do not meet the standards of developed countries in either quality or coverage. Adapter prices are quite high. Nevertheless, this architecture can be considered an alternative to wireless networks in office buildings and residential buildings.

Home networks based on electrical wiring.

This technology appeared recently and is called Home PLC (Power Line Communication). The equipment is available for sale but is not yet popular.

HomePlug network parameters:

· topology – “bus”;

· data transfer rate – up to 85 Mbit/s;

· access method – CSMA/CD;

· transmission medium – electrical wiring;

The disadvantages of Home PLC networks are their vulnerability to interception, which makes encryption mandatory, and their greater sensitivity to electrical interference. Moreover, the technology is still expensive.

Technologies used in modern local networks.

Ethernet.

The Ethernet architecture unites a whole family of standards that have both common and distinct features. It was originally created by Xerox in the mid-1970s as a transmission system with a speed of 2.93 Mbit/s. After refinement with the participation of DEC and Intel, the Ethernet architecture served as the basis for the IEEE 802.3 standard adopted in 1985, which defined the following parameters:

· topology – “bus”;

· access method – CSMA/CD;

· transmission speed – 10 Mbit/s;

· transmission medium – coaxial cable;

· the use of terminators is mandatory;

· maximum length of a network segment – up to 500 m;

· maximum network length – up to 2.5 km;

· maximum number of computers in a segment – 100;

· the maximum number of computers on the network is 1024.

The original version provided for two types of coaxial cable, "thick" and "thin" (standards 10Base-5 and 10Base-2, respectively).

In the early 1990s, specifications appeared for building Ethernet networks over twisted pair (10Base-T) and optical fiber (10Base-FL). In 1995, the IEEE 802.3u standard was published, providing transmission speeds of up to 100 Mbit/s. In 1998, the IEEE 802.3z and 802.3ab standards appeared, and in 2002, IEEE 802.3ae. A comparison of the standards is given in Table No. 12.2.

Table No. 12.2. Characteristics of various Ethernet standards.

Implementation | Speed, Mbit/s | Topology | Transmission medium | Maximum cable length, m
Ethernet
10Base-5 | 10 | "bus" | thick coaxial cable | 500
10Base-2 | 10 | "bus" | thin coaxial cable | 185 (realistically up to 300)
10Base-T | 10 | "star" | twisted pair | 100
10Base-FL | 10 | "star" | optical fiber | 500 (hub to station); 2000 (between concentrators)
Fast Ethernet
100Base-TX | 100 | "star" | category 5 twisted pair (2 pairs used) | 100
100Base-T4 | 100 | "star" | category 3, 4, or 5 twisted pair (4 pairs used) | 100
100Base-FX | 100 | "star" | multimode or single-mode optical fiber | 2000 (multimode); 15,000 (single-mode); realistically up to 40 km
Gigabit Ethernet
1000Base-T | 1000 | "star" | twisted pair, category 5 or higher | 100
1000Base-CX | 1000 | "star" | special STP-type cable | 25
1000Base-SX | 1000 | "star" | optical fiber | 250-550 (multimode, depending on type)
1000Base-LX | 1000 | "star" | optical fiber | 550 (multimode); 5000 (single-mode); realistically up to 80 km
10 Gigabit Ethernet
10GBase-X | 10,000 | "star" | optical fiber | 300-40,000 (depending on cable type and laser wavelength)

A disadvantage of Ethernet networks is their use of the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) media access method. As the number of computers increases, the number of collisions grows, which reduces network throughput and increases frame delivery time. Therefore, the recommended load on an Ethernet network is considered to be 30-40% of the total bandwidth. This drawback is easily eliminated by replacing hubs with bridges and switches, which can isolate data transfer between two computers on the network from the others.
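The collision-recovery behavior just described can be sketched as the truncated binary exponential backoff that classic CSMA/CD stations apply after each collision. This is a minimal illustration; the function name is ours, but the slot counts follow the standard Ethernet rule (after the n-th collision, wait a random number of slot times from 0 to 2^min(n, 10) − 1):

```python
import random

def backoff_slots(attempt, max_exp=10):
    """Truncated binary exponential backoff used by CSMA/CD:
    after the n-th collision a station waits a random number of
    slot times drawn from 0 .. 2**min(n, 10) - 1."""
    k = min(attempt, max_exp)
    return random.randrange(2 ** k)

# After the 1st collision the wait is 0 or 1 slots,
# after the 3rd it is 0..7 slots, and so on; the exponent
# stops growing after the 10th collision.
for attempt in (1, 3, 16):
    slots = backoff_slots(attempt)
    assert 0 <= slots < 2 ** min(attempt, 10)
```

The random spread is exactly why a heavily loaded shared bus degrades: more stations mean more collisions, and each collision forces longer average waits.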

Ethernet has many advantages: the technology itself is easy to implement, the equipment is inexpensive, and almost any type of cable can be used. Therefore, this network architecture can currently be said to be dominant.

Wireless networks

Wi-Fi is a technology that is popular worldwide and rapidly developing in Russia, providing wireless connection of mobile users to the local network and the Internet (Fig. 12.5).


The 802.11 standard specifies the use of half-duplex transceivers only, which cannot transmit and receive information simultaneously. Therefore, to avoid collisions, all 802.11 standards use CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) as the media access method.

The main disadvantage of Wi-Fi networks is the short data transmission range: for most devices it does not exceed 150 m (maximum 300 m) in open space and only a few meters indoors.

This problem is solved by the WiMAX architecture, developed within the IEEE 802.16 working group. This technology, which also uses radio signals as the transmission medium, provides users with high-speed wireless access at distances of up to several tens of kilometers (Fig. 12.6).


Fig. 12.6. Wireless connection of mobile users to the local network and the Internet (up to tens of km).

The newer Bluetooth technology uses a 2.4 GHz radio signal. It has low power consumption, which allows it to be used in portable devices such as laptops and mobile phones (Fig. 12.7).



Fig. 12.7. Wireless connection of mobile users to the local network and the Internet (up to ten meters).

Bluetooth requires virtually no setup. It has a short range (up to 10 meters) and speeds of 400-700 Kbit/s.


Today the concept of a computer network will no longer surprise anyone. However, many of us, when mentioning networks, do not think much about what such a connection is and how network services work. Let us try to consider this question as briefly as possible, since a whole monograph could be written about networks and their capabilities in the modern world.

Network architecture: main types

So, as follows from the basic interpretation of the term itself, a network is a certain number of terminals (computers, laptops, mobile devices) connected to each other.

Today there are two main types of connections: wired and wireless, the latter using a connection through a router such as a Wi-Fi router. But this is just the tip of the iceberg. In fact, a network architecture involves several components and therefore may be classified in different ways. It is generally accepted that there are currently three types of networks:

  • peer-to-peer networks;
  • networks with dedicated servers;
  • hybrid networks that include all types of nodes.

In addition, a separate category consists of broadcast, global, local, municipal, private networks and other varieties. Let's focus on the basic concepts.

Description of networks by type

Let us start with networks based on the "host computer–client" interaction. As is already clear, the dominant position here is occupied by the central terminal, from which the network and all its components are managed. Client terminals can only send requests to establish a connection and, subsequently, to receive information. The main terminal in such a network cannot play the role of a client machine.

Networks of the second type, often called peer-to-peer, differ from the first in that resources are distributed equally among all connected terminals. The simplest example is downloading files using torrents. The final file, fully or partially downloaded, may be located on different terminals. The user's system, when downloading it, uses all resources currently available on the network to fetch parts of the desired file: the more of them there are, the higher the download speed. Network addressing does not play a special role here. The main condition is that the appropriate software is installed on the client machine, and it is this software that issues the client requests.

The client-server network architecture is the simplest. For a simplified understanding, the connection between computer terminals (no matter how it is made) can be represented as a library: there are storage shelves with books (the central server) and tables where visitors read the material taken from the shelves.

Obviously, there is a clear relationship here: the visitor comes to the library, registers or provides already registered personal data (network identification based on the assigned IP address), then searches for the required literature (a network request), and finally picks up the book and reads it.

Naturally, this comparison is the most primitive, because modern networks work much more complexly. Nevertheless, for a simplified understanding of the structure, such an example is perfect.

Terminal identification issues

Now a few words about how computers on any type of network are recognized. When connected, any terminal is assigned two kinds of IP address, or, more simply, identifier: internal and external. The internal address is not unique, but the external IP is: at any given moment, no two machines on the Internet share the same public address. This is what allows any device, be it a computer terminal or a mobile gadget, to be identified.

The corresponding protocol is responsible for all this. At the moment the most widely used is IPv4. However, as practice shows, it has already outlived its usefulness, since it can no longer provide enough unique addresses for the growing number of client devices. Just look at mobile equipment: over the last decade so many gadgets have come into use that almost every inhabitant of the Earth has a mobile phone at his disposal.

IPv6 protocol

Thus, the architecture of networks, in particular the Internet, began to change, and the fourth version is being replaced by the sixth (IPv6). While it has not yet received particularly widespread use, the future, as they say, is not far off, and soon almost all providers will switch to this protocol (provided that they have an active DHCP server supporting version six).

Judge for yourself: this protocol uses 128-bit addresses, which provides a vastly larger address space than the 32-bit addresses of the fourth version.
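The difference in address-space size is easy to check with Python's standard `ipaddress` module (a small illustrative calculation; the sample addresses are arbitrary, with the IPv6 one taken from the documentation range):

```python
import ipaddress

# 32-bit IPv4 vs 128-bit IPv6 address space sizes.
ipv4_total = 2 ** 32        # about 4.3 billion addresses
ipv6_total = 2 ** 128       # about 3.4 * 10**38 addresses

# IPv6 offers 2**96 times more addresses than IPv4.
assert ipv6_total // ipv4_total == 2 ** 96

# The standard library parses both address families.
a4 = ipaddress.ip_address("192.168.0.1")
a6 = ipaddress.ip_address("2001:db8::1")
print(a4.version, a6.version)  # 4 6
```

Even if only a tiny fraction of the IPv6 space is ever assigned, the pool is large enough to give every device its own globally unique address.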

Dedicated Servers

Now let us look at dedicated servers. The designation speaks for itself: they are designed for specific tasks. Roughly speaking, this is a real or virtual Internet server completely at the disposal of the user who rents it. This is the meaning of hosting: the owner of the hosted resource can post any information on the allocated space.

In addition, it is not the tenant who is responsible for security, but the party renting out the server space. There are many examples of such servers: mail, games, file-sharing services, personal pages (not to be confused with accounts in social networks and services of that type), and much more.

Local networks

A local network, or, as it is often called, a "LAN", is organized to unite a limited number of terminals into one network. In terms of connection, the architecture of a local network can use either a wired connection or VPN-type access. In both cases, a connection to the main administrative server is required. Network services can operate in two modes: with automatic identification (assigning an address to each machine) or with manual entry of parameters.

A distinctive feature of local networks is that every terminal must be registered with the central server (and with the administrator). In addition, access to "shared" information can be either complete or limited, depending on the settings. Even the so-called cloud services are, in fact, virtual networks in which users, after being authenticated, receive rights to access certain information, download or edit files, and so on; sometimes it is even possible for several users to change the contents of a file simultaneously in real time.

Internet architecture: a little history

Finally, we come to the network that is the largest in the world today. Of course, this is the Internet, or World Wide Web. ARPANET, a communication system developed for military purposes in the United States back in 1969, is considered its prototype. At that time the connection was tested between only two nodes, but over time cable connections were established even with terminals located in the UK.

It was only much later, when identification based on TCP/IP protocols and the domain naming system appeared, that what we call the Internet today arose.

In general, it is believed that there is no single central server on the Internet where all information could be stored; indeed, no disk storage of such capacity exists today. All information is distributed among hundreds of thousands of individual servers of various types. In other words, the Internet can equally be classified as a peer-to-peer or a hybrid network. At the same time, on a separate machine you can create your own Internet server, which will not only allow you to manage network parameters and store the necessary information, but also provide other users with access to it. Wi-Fi sharing is perhaps the simplest example.

Basic parameters and settings

As for parameters and settings, everything is simple. As a rule, manual entry of network IP, DNS or proxy server addresses has long been unnecessary: any provider offers automatic recognition of a computer or mobile device on the network.

On Windows systems, these settings are accessed through the network connection properties by selecting the IPv4 protocol parameters (or IPv6, if it is in use). As a rule, the settings specify automatic address assignment, which saves the user from entering data manually. True, in some cases, especially when setting up RDP clients (remote access) or when organizing access to certain specific services, manual data entry is mandatory.

Conclusion

As you can see, understanding what network architecture is, in general terms, is not particularly difficult. Only the basic aspects of how networks operate were considered here, so as to explain the subject even to the most unprepared user. In reality, of course, everything is much more complicated, since we did not touch on DNS servers, proxies, DHCP, WINS and so on, or on software-related issues. Still, even this minimal information should be enough to understand the structure and basic principles of operation of networks of any type.

2.1.2 Architectural principle of building networks

The architectural principle of building networks (with the exception of peer-to-peer networks in which computers have equal rights) is called “client-server”.

In a peer-to-peer network, all computers have equal rights. Each of them can act both as a server, that is, provide files and hardware resources (drives, printers, etc.) to other computers, and as a client, using the resources of other computers. For example, if a printer is installed on your computer, then all other network users will be able to print their documents with its help, and you, in turn, will be able to work with the Internet, which is connected through a neighboring computer.

The most important concepts in the theory of client-server networks are “subscriber”, “server”, “client”.

A subscriber (node, host, station) is a device connected to the network and actively participating in information exchange. Most often the subscriber (node) is a computer, but it can also be, for example, a network printer or another peripheral device that can connect directly to the network.

A server is a network subscriber (node) that provides its resources to other subscribers, but does not use their resources itself. Thus, it serves the network. There can be several servers on the network, and it is not at all necessary that the server is the most powerful computer. A dedicated server is a server that deals only with network tasks. A non-dedicated server can perform other tasks in addition to network maintenance. A specific type of server is a network printer.

A client is a network subscriber who only uses network resources, but does not give his resources to the network, that is, the network serves him, and he only uses it. The client computer is also often called workstation. In principle, each computer can be both a client and a server at the same time.

Server and client are often understood not as the computers themselves, but as the software applications running on them. In this case, an application that only gives resources to the network is a server, and an application that only uses network resources is a client.
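This division into a server application (gives resources) and a client application (uses them) can be sketched with Python's standard socket module. A minimal echo exchange on the loopback interface; the function name and the message are arbitrary, and a real server would of course serve many clients:

```python
import socket
import threading

def serve_once(host="127.0.0.1"):
    """A minimal server application: accepts one client,
    echoes its message back, and exits."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]    # the actual port chosen by the OS

    def handler():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # provide the "resource"
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

# The client application only *uses* the resource, as in the definition.
port = serve_once()
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"request")
    assert cli.recv(1024) == b"request"
```

Note that both roles run here on one machine, illustrating the remark above that the same computer can be both client and server at the same time.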

2.1.3 Topology of local networks

The topology (layout, configuration, structure) of a computer network usually refers to the physical arrangement of the computers relative to each other and the way they are connected by communication lines. It is important to note that the concept of topology applies primarily to local networks, in which the structure of connections can be easily traced. In global networks the structure of connections is usually hidden from users and is not very important, since each communication session can be carried out along its own path.

The topology determines the requirements for equipment, the type of cable used, the permissible and most convenient methods of managing the exchange, reliability of operation, and possibilities for network expansion. And although a network user rarely has to choose a topology, it is necessary to know about the features of the main topologies, their advantages and disadvantages.

There are three basic network topologies:

a) bus topology

Bus – all computers are connected in parallel to one communication line. Information from each computer is transmitted simultaneously to all other computers (Fig. 1).

Fig. 1. Bus network topology

The bus topology (or, as it is also called, the common bus), by its very structure, assumes identical network equipment in the computers, as well as equality of all subscribers in accessing the network. Computers on the bus can transmit only one at a time, since in this case the communication line is the only one. If several computers transmit information simultaneously, it will be distorted as a result of overlap (a conflict, or collision). The bus always implements the so-called half-duplex exchange mode (in both directions, but in turn, not simultaneously).

In the bus topology there is no clearly defined central subscriber through which all information passes, and this increases reliability (after all, if a center fails, the entire system it controls ceases to function). Adding new subscribers to the bus is quite simple and is usually possible even while the network is running. In most cases, a bus requires the minimum amount of connecting cable compared to other topologies.

Since there is no central subscriber, resolving possible conflicts falls on the network equipment of each individual subscriber. In this respect, the network equipment in the bus topology is more complex than in other topologies. However, due to the widespread use of bus networks (primarily the most popular one, Ethernet), the cost of the network equipment is not too high.

Fig. 2. Cable break in a network with a bus topology

An important advantage of the bus is that if any of the computers on the network fails, healthy machines will be able to continue communication normally.

If the cable is broken or damaged, the coordination of the communication line is disrupted, and communication stops even between those computers that remain connected to each other. A short circuit at any point on the bus cable disables the entire network.

A failure of any subscriber's network equipment on the bus can bring down the entire network. In addition, such a failure is quite difficult to localize, since all subscribers are connected in parallel, and it is impossible to understand which of them has failed.

When passing along the communication line of a bus network, information signals are attenuated and not restored in any way, which imposes strict restrictions on the total length of the communication lines. Moreover, each subscriber may receive signals of different levels depending on the distance to the transmitting subscriber, which imposes additional requirements on the receiving nodes of the network equipment.

If we assume that the signal in the network cable is attenuated to the maximum permissible level over a length Lpr, then the total length of the bus cannot exceed Lpr. In this sense, the bus provides the shortest length compared to the other basic topologies.

To increase the length of a bus network, several segments (parts of the network, each of which is itself a bus) are often used, interconnected with special signal amplifiers and restorers called repeaters (Fig. 3 shows the connection of two segments; the maximum network length in this case increases to 2·Lpr, since each of the segments can be up to Lpr in length). However, this increase in network length cannot continue indefinitely: length restrictions are associated with the finite speed of signal propagation along the communication lines.
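The segment arithmetic above can be checked directly against the 10Base-5 figures quoted earlier (500 m per segment, 2.5 km total network length). The five-segment figure is an assumption based on the usual limit of classic coaxial Ethernet, not stated in the text:

```python
# Signal attenuation limits one bus segment to L_pr metres; a repeater
# restores the signal, so each additional segment adds up to L_pr more.
L_pr = 500        # 10Base-5 segment limit, metres (from the table above)
segments = 5      # assumed maximum segments in one path (classic Ethernet)

max_network_length = segments * L_pr
print(max_network_length)  # 2500 m, i.e. the 2.5 km limit quoted earlier
```

The product matches the 2.5 km maximum network length given in the Ethernet parameter list, which is why repeaters extend but do not remove the length restriction.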

Fig. 3. Connecting bus network segments using a repeater

b) star topology;

Star – peripheral computers are each connected to one central computer using a separate communication line (Fig. 4). Information from a peripheral computer is transmitted only to the central computer, and from the central computer to one or more peripheral ones.

Fig. 4. Star network topology

A star is the only network topology with a clearly designated center to which all other subscribers are connected. Information exchange occurs exclusively through the central computer, which carries a heavy load and therefore, as a rule, can do nothing but serve the network. Clearly, the network equipment of the central subscriber must be significantly more complex than that of the peripheral subscribers. There is no need to speak of equal rights for all subscribers (as in a bus). Usually the central computer is the most powerful, and all exchange-management functions are assigned to it. In principle, no conflicts are possible in a star network, since management is completely centralized.

As for the star's resistance to computer failures, the failure of a peripheral computer or its network equipment does not affect the functioning of the rest of the network in any way, but any failure of the central computer makes the network completely inoperable. Therefore, special measures must be taken to increase the reliability of the central computer and its network equipment.

A cable break or short circuit in a star topology disrupts communication with only one computer, and all other computers can continue to work normally.

Unlike a bus, in a star there are only two subscribers on each communication line: a central one and one of the peripheral ones. Most often, two communication lines are used to connect them, each of which transmits information in one direction, that is, on each communication line there is only one receiver and one transmitter. This is the so-called point-to-point transmission. All this significantly simplifies network equipment compared to a bus and eliminates the need to use additional, external terminators.

A serious disadvantage of the star topology is the strict limitation on the number of subscribers. Typically, the central subscriber can serve no more than 8-16 peripheral subscribers. Within these limits, connecting new subscribers is quite simple, but beyond them it is simply impossible. In a star, it is possible to connect another central subscriber instead of a peripheral one (the result is a topology of several interconnected stars).

The star shown in Fig. 4 is called an active, or true, star. There is also a topology called the passive star, which is only superficially similar to a star (Fig. 5). Currently it is much more widespread than the active star; suffice it to say that it is used in the most popular network today, Ethernet.

In the center of a network with this topology there is not a computer, but a special device - a concentrator or, as it is also called, a hub, which performs the same function as a repeater, that is, it restores incoming signals and forwards them to all other communication lines.


Fig. 5. Passive star topology and its equivalent circuit

It turns out that although the cable layout resembles a true or active star, we are in fact dealing with a bus topology, since information from each computer is transmitted simultaneously to all other computers and there is no central subscriber. Of course, a passive star is more expensive than an ordinary bus, since a hub is also required. However, it provides a number of additional capabilities associated with the advantages of the star, in particular simpler maintenance and repair of the network. That is why the passive star has recently been displacing the true bus, which is considered an unpromising topology.

It is also possible to distinguish an intermediate type of topology between an active and passive star. In this case, the hub not only relays the signals arriving at it, but also controls the exchange, but does not itself participate in the exchange (this is done in the 100VG-AnyLAN network).

The great advantage of a star (both active and passive) is that all connection points are collected in one place. This makes it possible to easily monitor the operation of the network, localize faults by simply disconnecting certain subscribers from the center (which is impossible, for example, in the case of a bus topology), and also limit the access of unauthorized persons to connection points vital for the network. In the case of a star, a peripheral subscriber can be approached by either one cable (which transmits in both directions) or two (each cable transmits in one of two counter directions), with the latter being much more common.

A common disadvantage of all star topologies (both active and passive) is a cable consumption significantly greater than in other topologies. For example, if the computers are located in one line (as in Fig. 1), then a star topology requires several times more cable than a bus. This significantly affects the cost of the network as a whole and complicates cable installation.

c) ring topology;

Ring (Fig. 6).

Fig. 6. Ring network topology

A ring is a topology in which each computer is connected by communication lines to two others: it receives information from one and transmits information to the other. On each communication line, as in the case of a star, only one transmitter and one receiver operate (point-to-point communication). This allows you to avoid using external terminators.

An important feature of the ring is that each computer relays (restores, amplifies) the signal coming to it, that is, it acts as a repeater. Signal attenuation in the entire ring does not matter, only the attenuation between neighboring computers on the ring matters. In practice, the size of ring networks reaches tens of kilometers (for example, in an FDDI network). The ring is significantly superior to any other topology in this regard.

In a ring topology, there is no clearly defined center; all computers can be identical and have equal rights. However, quite often a special subscriber is allocated in the ring who manages or controls the exchange. It is clear that the presence of such a single control subscriber reduces the reliability of the network, since its failure will immediately paralyze the entire exchange.

Strictly speaking, computers in a ring are not completely equal (unlike, for example, a bus topology): one of them necessarily receives information from the currently transmitting computer earlier, and the others later. It is on this feature of the topology that the network exchange control methods specially designed for the ring are based. In these methods, the right to the next transmission (or, as they also say, to take over the network) passes sequentially to the next computer in the circle. Connecting new subscribers to the ring is quite simple, although it requires shutting down the entire network for the duration of the connection. As with a bus, the maximum number of subscribers in a ring can be quite large (up to a thousand or more). The ring topology is usually highly resistant to overloads and ensures reliable operation with large flows of information, since, as a rule, there are no conflicts (unlike a bus) and there is no central subscriber that could be overloaded (unlike a star).
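The idea that "the right to the next transmission passes sequentially to the next computer in the circle" can be sketched as a simple token-passing model. This is an illustration only, not any specific protocol; the station names and frame queues are invented:

```python
from collections import deque

def token_ring(stations, frames, rounds=10):
    """Illustrative sketch of token passing in a ring: the right to
    transmit circulates around the ring; only the current token holder
    may send one frame, so collisions cannot occur."""
    order = deque(stations)
    sent = []
    for _ in range(rounds * len(stations)):
        holder = order[0]
        if frames.get(holder):                 # station has data queued
            sent.append((holder, frames[holder].pop(0)))
        order.rotate(-1)                       # pass the token on
    return sent

# Stations A and C have frames to send; B stays silent.
log = token_ring(["A", "B", "C"], {"A": ["a1"], "C": ["c1", "c2"]})
assert log == [("A", "a1"), ("C", "c1"), ("C", "c2")]
```

Because at most one station holds the token at any moment, the model never produces the overlapping transmissions that plague a shared bus, which is exactly the overload resistance described above.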


Fig. 7. A network with two rings

The signal in the ring passes sequentially through all the computers on the network, so the failure of even one of them (or of its network equipment) disrupts the operation of the network as a whole. This is a significant drawback of the ring.

Likewise, a break or short circuit in any of the ring cables makes the entire network inoperable. Of the three topologies considered, the ring is the most vulnerable to cable damage, so in the case of the ring topology two (or more) parallel communication lines are usually laid, one of which is kept in reserve.

Sometimes a ring network is based on two parallel ring communication lines transmitting information in opposite directions. The purpose of such a solution is to increase (ideally, to double) the speed of information transfer over the network. In addition, if one of the cables is damaged, the network can operate over the other, although the maximum speed will decrease.

d) other topologies.

In practice, other local network topologies are often used, but most networks are focused on three basic topologies.

Network topology indicates not only the physical location of computers, but also the nature of the connections between them, the features of the distribution of information and signals over the network. It is the nature of the connections that determines the degree of fault tolerance of the network, the required complexity of network equipment, the most appropriate method of managing the exchange, possible types of transmission media (communication channels), the permissible size of the network (the length of communication lines and the number of subscribers), the need for electrical coordination, and much more.

Moreover, the physical location of the computers connected by the network has almost no effect on the choice of topology. No matter how the computers are located, they can be connected using any pre-selected topology (Fig. 8).

If the connected computers are located along the contour of a circle, they can be connected like a star or a bus. When computers are located around a certain center, they can be connected using bus or ring topologies.

Finally, when the computers are arranged in a line, they can be connected in a star or ring. Another thing is what the required cable length will be.


Fig. 8. Examples of using different topologies

It should be noted that topology is still not the main factor when choosing the type of network. Much more important, for example, is the level of network standardization, exchange speed, number of subscribers, cost of equipment, and selected software. But on the other hand, some networks allow different topologies at different levels. This choice rests entirely with the user, who must take into account all the considerations listed in this section.


