Shared secret key distribution protocol. Hierarchy of key information


Annotation: In previous lectures, we discussed cryptography with symmetric and asymmetric keys. However, we have not yet discussed how secret keys in symmetric key cryptography and public keys in asymmetric key cryptography are distributed and maintained. This lecture addresses these two issues. First, we will talk about symmetric key distribution using a trusted third party. Second, we will show how two parties can establish a symmetric key between themselves without a trusted third party. Third, we will look at the Kerberos system, KDCs, and the entity authentication protocol. Fourth, we will discuss public key certification using Certificate Authorities (CAs) based on the X.509 recommendations. Finally, we will briefly explore the idea of Public Key Infrastructure (PKI) and outline some of its modes of operation.

5.1. Distribution with symmetric keys

For encrypting large messages, symmetric key cryptography is more efficient than asymmetric key cryptography. Symmetric key cryptography, however, requires an encryption key that is shared between the two parties.

If Alice needs to exchange confidential messages with N people, she needs N different keys. What if N people need to communicate with each other? Then the total number of keys required is N(N − 1). If we allow Alice and Bob to use the same key for communication in both directions, only N(N − 1)/2 keys are needed. This means that if one million people communicate with each other, each person holds almost one million different keys, and in total almost half a trillion keys are needed. This is called the N² problem, because the number of keys required for N parties grows as N².
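The counts above can be reproduced with a short sketch (Python here purely for illustration):

```python
# Pairwise secret keys needed for N communicating parties.
def keys_unidirectional(n: int) -> int:
    # one key per ordered pair (a separate key for each direction)
    return n * (n - 1)

def keys_shared(n: int) -> int:
    # one key per unordered pair (both directions share a key)
    return n * (n - 1) // 2

n = 1_000_000
print(keys_unidirectional(n))  # 999999000000 -- nearly a trillion
print(keys_shared(n))          # 499999500000 -- still about half a trillion
```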

The number of keys is not the only problem; key distribution is another. Alice and Bob want to communicate with each other, so they need a way to exchange secret keys. If Alice wants to contact one million people, how can she exchange one million keys with them? Using the Internet in the open is clearly not a safe method. We need an efficient way to maintain and distribute secret keys.

Key Distribution Center: KDC

A practical solution is to involve a trusted third party, called here a key distribution center (KDC). To reduce the number of keys, each person establishes a shared secret key with the KDC, as shown in Fig. 5.1.


Fig. 5.1.

A secret key is established between the KDC and each member of the community. Alice has a secret key with the KDC, which we call K_Alice. Bob has a secret key with the KDC, which we call K_Bob. Now the question is how Alice can send a confidential message to Bob. The process is as follows:

  1. Alice sends a request to the KDC stating that she needs a temporary session secret key between herself and Bob.
  2. KDC informs Bob of Alice's request.
  3. If Bob agrees, a session key is created between them.

The secret keys that Alice and Bob each share with the KDC are used to authenticate them to the KDC and to prevent Eve from impersonating either of them. We will discuss later in this lecture how the session key is established between Alice and Bob.

As the number of people using the KDC grows, the system becomes unmanageable and the KDC becomes a bottleneck. To solve this problem, we need multiple KDCs: we can divide the world into domains, and each domain can have one or more KDCs (for redundancy in case of failure). Now, if Alice wants to send a confidential message to Bob, who belongs to a different domain, she contacts her KDC, which in turn contacts the KDC in Bob's domain. The two KDCs can create a secret key between Alice and Bob.

Key management

In addition to choosing a cryptographic system suitable for a particular IS, an important problem is key management. No matter how complex and reliable the cryptosystem itself is, it rests on the use of keys. While exchanging keys between two users to ensure confidential communication is trivial, in an information system with tens or hundreds of users, key management is a serious problem.

Key information means the totality of all keys operating in the IS. If key information is not managed reliably enough, then an attacker who takes possession of it gains unlimited access to all information.

Key management is an information process that includes three elements:

* key generation;

* accumulation of keys;

* key distribution.

Let's consider how they should be implemented in order to ensure the security of key information in the IS.

Key generation

At the very beginning of our discussion of cryptographic methods, it was said that you should not use non-random keys merely because they are easier to remember. Serious ISs use special hardware and software methods for generating random keys. As a rule, pseudo-random number generators are used, but the degree of randomness of their output must be quite high. Ideal generators are devices based on "natural" random processes; for example, there are production models of key generators based on white radio noise. Another random mathematical object is the decimal digits of irrational numbers, such as π or e, which are calculated by standard mathematical methods.
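As a sketch of the software side, a key can be drawn from the operating system's entropy source; Python's standard `secrets` module is used here as one illustrative option:

```python
import secrets

def generate_key(n_bytes: int = 32) -> bytes:
    # Draw a 256-bit key from the OS cryptographic entropy source.
    return secrets.token_bytes(n_bytes)

k1, k2 = generate_key(), generate_key()
print(len(k1))   # 32 bytes = 256 bits
print(k1 != k2)  # two independently generated keys differ
```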

In ISs with average security requirements, software key generators are quite acceptable; they compute a pseudo-random number as a complex function of the current time and/or a number entered by the user.

Key accumulation

Key accumulation refers to the organization of key storage, accounting, and disposal.

Since the key is the most attractive object for an attacker, opening the way to confidential information, special attention should be paid to the issues of accumulating keys.

Secret keys should never be written explicitly on a medium that can be read or copied.

In a fairly complex information system, one user can work with a large amount of key information, and sometimes there is even a need to organize mini-databases of key information. Such databases are responsible for accepting, storing, recording and deleting used keys.

Thus, all information about the keys in use must be stored in encrypted form. Keys that encrypt key information are called master keys. It is advisable that every user know the master keys by heart and not store them on any tangible media at all.
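A minimal sketch of the master-key idea: key records are stored only after encryption under the master key. The XOR keystream below (derived by hashing the master key with a per-record nonce and a counter) is an illustrative stand-in, not a production cipher; a real system would use an authenticated cipher such as AES-GCM.

```python
import hashlib

def xor_keystream(master_key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Derive a keystream from the master key and a per-record nonce,
    # then XOR it with the data. XOR makes encryption and decryption
    # the same operation. Illustrative only.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(
            master_key + nonce + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

master = b"master key the user knows by heart"
nonce = b"record-001"
session_key = bytes(range(16))                      # a key to be stored
stored = xor_keystream(master, nonce, session_key)  # encrypted at rest
assert xor_keystream(master, nonce, stored) == session_key
```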

A very important condition for information security is the periodic updating of key information in the IS. In this case, both regular keys and master keys must be reassigned. In especially critical information systems, it is advisable to update key information daily.

The issue of updating key information is also related to the third element of key management - key distribution.

Key distribution

Key distribution is the most critical process in key management. There are two requirements for it:
  1. Efficiency and accuracy of distribution
  2. Secrecy of distributed keys.
Recently, there has been a noticeable shift towards the use of public key cryptosystems, in which the problem of key distribution disappears. Nevertheless, the distribution of key information in information systems requires new effective solutions.

Distribution of keys between users can be implemented with two different approaches:

  1. By creating one or several key distribution centers. The disadvantage of this approach is that the distribution center knows who is assigned what keys and this makes it possible to read all messages circulating in the IS. Possible abuses have a significant impact on protection.
  2. Direct key exchange between users of the information system.
In this case, the problem is to reliably authenticate the subjects.

In both cases, the authenticity of the communication session must be guaranteed. This can be achieved in two ways:

  1. The request-response mechanism, which works as follows. If user A wants to be sure that the messages he receives from B are not false, he includes an unpredictable element (a request) in the message he sends to B. When responding, user B must perform some operation on this element (for example, add 1). This cannot be done in advance, since it is not known what random number will arrive in the request. After receiving a response with the result of this operation, user A can be confident that the session is genuine. The disadvantage of this method is the possibility of establishing a pattern, albeit a complex one, between the request and the response.
  2. The time stamp mechanism. It involves recording the time of each message, so that each IS user can know how "old" an incoming message is.
In both cases, encryption should be used to ensure that the response was not sent by an attacker and that the timestamp has not been altered.
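The request-response mechanism can be sketched as follows; the `+1` transformation matches the example in the text, while the function names are illustrative (a real protocol would also encrypt the response, as noted above):

```python
import secrets

def make_challenge() -> int:
    # A's unpredictable element (the request)
    return secrets.randbits(64)

def respond(challenge: int) -> int:
    # B's agreed transformation of the challenge, e.g. add 1
    return challenge + 1

def verify(challenge: int, response: int) -> bool:
    # A checks that B really operated on the fresh challenge
    return response == challenge + 1

c = make_challenge()
assert verify(c, respond(c))
assert not verify(c, c)  # simply echoing the challenge back fails
```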

When using time stamps, the problem arises of the acceptable delay interval for confirming the authenticity of the session. After all, a message with a "time stamp" cannot, in principle, be transmitted instantly. In addition, the computer clocks of the recipient and the sender cannot be perfectly synchronized. What delay in the "stamp" should be considered suspicious?

Therefore, in real information systems, for example, in credit card payment systems, it is the second mechanism for establishing authenticity and protecting against counterfeits that is used. The interval used is from one to several minutes. A large number of known methods of stealing electronic money are based on “wedging” into this gap with false requests to withdraw money.
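A freshness check with a tolerance window can be sketched as follows; the two-minute window is an assumption consistent with the one-to-several-minutes interval mentioned above:

```python
import time

MAX_SKEW_SECONDS = 120  # assumed tolerance window

def is_fresh(message_timestamp: float, now: float) -> bool:
    # Accept a message only if its stamp is within the window;
    # stamps too far in the future (beyond clock skew) are rejected too.
    return -MAX_SKEW_SECONDS <= now - message_timestamp <= MAX_SKEW_SECONDS

now = time.time()
print(is_fresh(now - 30, now))    # True: half a minute old
print(is_fresh(now - 600, now))   # False: ten minutes old, suspicious
```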

For key exchange, you can use public key cryptosystems using the same RSA algorithm.

But the Diffie-Hellman algorithm turned out to be very effective, allowing two users to exchange a key without intermediaries, which can then be used for symmetric encryption.

Diffie-Hellman algorithm

Diffie and Hellman proposed using the discrete exponentiation function to build public key cryptographic systems.

The irreversibility of the transformation is ensured by the fact that the exponential function is easy to compute in a finite Galois field of p elements (p is either a prime or a power of a prime), while computing logarithms in such fields is a much more labor-intensive operation.

If y = a^x mod p, 1 < x < p − 1, where a is a fixed element of the field GF(p), then x = log_a y over GF(p). Given x, it is easy to compute y: this requires at most about 2·log2(x) multiplications (square-and-multiply).

The inverse problem of computing x from y is much more difficult. If p is chosen suitably, extracting the logarithm requires a number of operations proportional to

L(p) = exp{ (ln p · ln ln p)^0.5 }

To exchange information, the first user selects a random number x1, uniformly distributed over the integers 1 … p − 1. He keeps this number secret and sends the other user the number

y1 = a^x1 mod p

The second user does the same, generating x2 and computing y2 = a^x2 mod p, which he sends to the first user. As a result, both of them can compute k12 = a^(x1·x2) mod p.

To compute k12, the first user raises y2 to the power x1 modulo p; the second user does the same with y1 and x2. Thus, both users obtain a common key k12, which can be used to encrypt information with conventional algorithms. Unlike the RSA algorithm, this algorithm does not itself encrypt the actual information.
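The exchange can be sketched as below; the modulus and base are toy values chosen for illustration (real deployments use primes of 2048 bits or more):

```python
import secrets

p = 2**64 - 59   # a prime modulus (toy size, insecure in practice)
a = 5            # public base element of GF(p)

x1 = secrets.randbelow(p - 2) + 1   # first user's secret, in 1 .. p-2
x2 = secrets.randbelow(p - 2) + 1   # second user's secret

y1 = pow(a, x1, p)   # sent openly: first user -> second
y2 = pow(a, x2, p)   # sent openly: second user -> first

k_first = pow(y2, x1, p)    # first user raises y2 to the power x1
k_second = pow(y1, x2, p)   # second user raises y1 to the power x2
assert k_first == k_second == pow(a, x1 * x2, p)  # common key k12
```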

Not knowing x1 and x2, an attacker can try to compute k12 from the intercepted y1 and y2 alone. Whether this problem is equivalent to computing a discrete logarithm is the main open question for public key systems; no simple solution has been found to date. If direct exponentiation with 1000-bit primes requires about 2000 operations, the inverse transformation (computing the logarithm in the Galois field) requires about 10^30 operations.

As you can see, despite the simplicity of the Diffie-Hellman algorithm, its second disadvantage compared to the RSA system is the absence of a guaranteed lower bound on the complexity of discovering the key.

In addition, although the described algorithm allows us to bypass the problem of hidden key transfer, the need for authentication remains. Without additional means, one of the users cannot be sure that he exchanged keys with exactly the user he needs. The danger of imitation in this case remains.

To summarize what has been said about key distribution: the key management problem comes down to finding a key distribution protocol that would provide:

* the ability to dispense with a key distribution center;

* mutual confirmation of the authenticity of session participants;

* confirmation of the authenticity of the session by the request-response mechanism, the use of software or hardware for this;

* using a minimum number of messages when exchanging keys.

With traditional encryption, both parties involved in the data exchange must receive the same key, to which other users are denied access. This usually requires frequent changes of keys to reduce the amount of data lost in the event that one of the keys becomes known to the enemy.

Therefore, the reliability of any cryptographic system largely depends on the key distribution systems, which is a means of delivering keys to two parties planning to exchange data without allowing others to see those keys.

For two parties, A and B, key distribution can be organized in various ways, as indicated below:

  • 1. The key can be selected by party A and physically delivered to party B.
  • 2. The key can be selected by a third party and physically delivered to participants A and B.
  • 3. If participants A and B already use some shared key, one of the parties can transmit the new key to the other in encrypted form using the old key.
  • 4. If both parties A and B have cryptographically secure communication channels with a third party C, then the latter can deliver the key to A and B through these secure channels.

Options 1 and 2 involve transferring the key from hand to hand. With channel encryption, this requirement may be quite reasonable, since any channel encryption device involves exchanging data only with the corresponding device at the other end of the channel.

But in the case of end-to-end encryption, physical delivery of the key is practically unacceptable. In any distributed system, each host or terminal may communicate with many other hosts and terminals. Therefore, each such device needs many keys, which have to be supplied dynamically. The problem turns out to be very difficult to solve, especially in the case of large, globally distributed systems.

The scale of the problem depends on the number of communicating pairs that must be serviced. If end-to-end encryption is implemented at the network or IP level, then one key is required for each pair of communicating hosts on the network. Therefore, with N hosts, the number of required keys is N(N − 1)/2. If encryption is carried out at the application level, then each pair of communicating users or processes needs its own key, and the network can have hundreds of hosts and thousands of users and processes. Fig. 6.2 shows, for the case of end-to-end encryption, how the complexity of the key distribution problem depends on the number of pairs participating in the data exchange. For example, on a network of 1,000 nodes with node-level encryption, about half a million keys would have to be distributed. And if such a network supports about 10,000 applications, application-level encryption could require the distribution of about 50 million keys.

Fig. 6.2.

Returning to the list of key distribution methods, we note that method 3 is possible both for channel encryption and for end-to-end encryption, but if an adversary ever manages to gain access to one of the keys, then he will be able to obtain all subsequent ones. Additionally, the initial distribution of potentially millions of keys must still be completed.

For end-to-end encryption, a scheme that is some variation of method 4 is widely used. In this scheme, some key distribution center is responsible for delivering keys to pairs of users (master nodes, processes, applications). Each user must receive his own unique key, which he uses together with the key distribution center for the purpose of organizing key delivery.

Fig. 6.3.

Using a key distribution center involves organizing a certain hierarchy of keys. In a minimal configuration, such a hierarchy includes two levels (Fig. 6.3). Communication between end systems is encrypted using a temporary key, often called a session key. Typically, a session key is used only for a specific logical connection, such as a virtual circuit, or for transporting data, after which the key is no longer used. The session key is obtained from the key distribution center using the same means of data delivery on the network that serve to organize communications between end users. Accordingly, session keys are transmitted in encrypted form, and the master key, common to the key distribution center and the given end system or specific user, is used for encryption.

A unique master key is created for each end system or end user and shared with the key distribution center. Of course, these master keys must also be distributed somehow. However, this problem is much simpler. As mentioned, N objects communicating in pairs require N(N − 1)/2 session keys, but only N master keys are required, one for each object. Therefore, master keys can be distributed in some non-cryptographic manner, such as physical delivery to the recipient.

Key distribution can be implemented in different ways. A typical scenario is shown in Fig. 6.4. This scenario assumes that each user has a unique master key shared with a key distribution center (KDC).

Let's assume that user A intends to create a logical connection with user B and a one-time session key is required to protect the data that is supposed to be transferred during this connection.

In this case, user A has a master key Ka, known only to himself and the KDC, and similarly B shares the master key Kb with the KDC.

The information exchange system looks like this:

  • 1. Initiator A sends the KDC a request for a session key to protect the logical connection with B. The message must include information that uniquely identifies A and B, as well as some identifier N1, unique to this request, usually called a nonce. The nonce could be the current time, a counter value, or a random number; at a minimum, it must be unique for each request, and to prevent an adversary from forging the message it should be hard for the adversary to guess. A random number is therefore a good choice for a nonce.
  • 2. The KDC responds to the request with a message encrypted using the key Ka. The only user who can receive and read this message is A, so A can be sure the message came from the KDC. The message contains two elements intended for A:
    • - the one-time session key Ks, to be used in the communication session;
    • - the original request message, including the nonce, so that user A can match the response with the corresponding request.

In this way, A can make sure that his original request was not altered on the way to the KDC, and the nonce ensures that he does not confuse the response to this request with a response to any previous request.

Fig. 6.4.

In addition, the message also includes two elements intended for B:

    • - the one-time session key Ks, to be used in the communication session;
    • - the identifier ID_A of user A (for example, his network address).

Both elements are encrypted using the key Kb (the master key shared by the KDC and B), and they are intended to be sent on to B to establish the connection and identify A.

  • 3. Party A saves the session key for the upcoming communication session and forwards to B the information received from the KDC that is intended for B (namely, E(Kb, [Ks || ID_A])). Since this information is encrypted with Kb, it is protected. Recipient B now knows the session key Ks and knows that the received information came from the KDC (since it is encrypted with Kb).

At this point, the session key has been delivered to both party A and party IN, and so they can begin to exchange data securely. But before that, it is advisable to perform two more operations.

  • 4. Using the newly received session key Ks for encryption, party B sends party A a new nonce N2.
  • 5. Using the same key Ks, party A returns f(N2), where f is a function that performs some transformation of N2 (for example, adding one).

These actions are intended to convince recipient B that the message he originally received (step 3) was not a replay.

It should be noted that the key transfer itself is performed in steps 1-3, while steps 4 and 5 (and partly step 3) provide an authentication function.
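Steps 1-3 above can be simulated end to end. Everything here is an illustrative sketch: the XOR keystream stands in for a real symmetric cipher, and the message formats are invented for the example.

```python
import hashlib, json, secrets

def _stream(key: bytes, n: int) -> bytes:
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def enc(key: bytes, obj) -> bytes:
    # "Encrypt" a JSON-serializable object (toy XOR stream cipher).
    data = json.dumps(obj).encode()
    return bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))

def dec(key: bytes, blob: bytes):
    data = bytes(a ^ b for a, b in zip(blob, _stream(key, len(blob))))
    return json.loads(data)

Ka = secrets.token_bytes(32)   # A's master key, shared with the KDC
Kb = secrets.token_bytes(32)   # B's master key, shared with the KDC

# Step 1: A -> KDC, a cleartext request carrying a random nonce N1.
n1 = secrets.randbits(64)
request = {"from": "A", "to": "B", "nonce": n1}

# Step 2: KDC -> A, everything under Ka; the ticket for B is under Kb.
ks = secrets.token_bytes(32).hex()            # one-time session key Ks
ticket = enc(Kb, {"Ks": ks, "ID": "A"}).hex()
reply = enc(Ka, {"Ks": ks, "request": request, "ticket": ticket})

# Step 3: A decrypts, checks the nonce, and forwards the ticket to B.
msg = dec(Ka, reply)
assert msg["request"]["nonce"] == n1          # reply matches A's request
opened_by_b = dec(Kb, bytes.fromhex(msg["ticket"]))
assert opened_by_b["Ks"] == msg["Ks"]         # B now holds Ks
assert opened_by_b["ID"] == "A"               # ...and knows it came via the KDC
```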

This approach creates a kind of vicious circle: in order to share a secret (the transmitted message), the sender and recipient must already have a common secret (the encryption key). Previously, this problem was solved by a non-cryptographic method: transferring the key over communication channels physically protected from eavesdropping (Fig. 1). However, creating such a channel and maintaining it in readiness for the occasional urgent need to transfer a key is quite labor-intensive and costly.

Fig. 1.

The problem was successfully resolved within the framework of modern cryptography, which arose a little more than a quarter of a century ago, so called in contrast to “traditional cryptography” already known by that time. The solution is to use asymmetric (two-key) ciphers or key distribution schemes over open communication channels.

In the first case, the encryption and decryption procedures are performed on different keys, so there is no need to keep the encryption key secret. However, due to extremely low efficiency characteristics and susceptibility to certain special types of attacks, such ciphers turned out to be of little use for hiding directly user information. Instead, asymmetric ciphers are used as part of combined schemes, when a data array is encrypted with a symmetric cipher on a one-time key, which in turn is encrypted with a two-key cipher and in this form is transmitted along with the data.

Schemes for distributing keys over open communication channels solve the same problem in a slightly different way: during an interaction session, two correspondents develop a common secret key, which is then used to encrypt the transmitted data with a symmetric cipher. Moreover, intercepting information in the channel during a session of generating such a key does not give the enemy the opportunity to obtain the key itself: K=K(X,Y) is incomputable (Fig. 2).


Fig. 2.

Problems of asymmetric cryptography

Today, asymmetric cryptography quite successfully solves the problem of distributing keys over open communication channels. However, several problems cause some concern for its future. The strength of all asymmetric cryptography schemes rests on the presumed impossibility of efficiently solving certain mathematical problems, such as factorization of large numbers and computing discrete logarithms in large finite fields. But this impossibility is only an assumption, which could be refuted at any time, for example if the hypothesis P = NP were proven. That would lead to the collapse of all modern cryptography, since the presumed-hard problems on which it rests are quite closely related, and breaking even one cryptosystem would mean breaking most of the others. Intensive research is being conducted in this direction, but the problem remains open.

Another threat to modern cryptosystems comes from so-called quantum computers, information processing devices built on the principles of quantum mechanics, the idea of which was first proposed by the famous American physicist R. Feynman. In 1994, P. Shor proposed a factorization algorithm for a quantum computer that factors a number in time polynomial in the size of the number. In 2001, this algorithm was successfully demonstrated on a working prototype of a quantum computer created by specialists from IBM and Stanford University.

According to experts, a quantum computer capable of breaking the RSA cryptosystem can be created in about 15-25 years.

Another unfortunate fact about asymmetric cryptosystems is that the minimum “secure size” of keys is constantly growing due to progress in the field. Over the entire quarter-century history of such systems, it has already grown approximately 10 times, while over the same period for traditional symmetric ciphers, the key size has changed less than twice.

All of the above makes the long-term prospects of asymmetric cryptography systems not entirely reliable and forces us to look for alternative ways to solve the same problems. Some of them can be solved within the framework of so-called quantum cryptography, or quantum communication.

Elliptic curves are a mathematical object that can be defined over any field (finite, real, rational, or complex). In cryptography, finite fields are commonly used. An elliptic curve is the set of points (x, y) satisfying the following equation:

y 2 = x 3 + ax + b,

and also the point at infinity. For points on a curve, the addition operation is quite easily introduced, which plays the same role as the multiplication operation in the RSA and ElGamal cryptosystems.

In real cryptosystems based on elliptic curves, the following equation is used:

y 2 = x 3 + ax + b mod p,

where p is prime.

The discrete logarithm problem on an elliptic curve is as follows: given a point G on an elliptic curve of order r (r being the number of points on the curve) and another point Y on the same curve, find a unique number x such that Y = xG, that is, Y is G added to itself x times.
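A sketch over a tiny curve (y² = x³ + x + 6 over GF(11), a standard textbook example) shows point addition, scalar multiplication, and why the discrete log is recovered here only by brute force:

```python
# Toy curve y^2 = x^3 + x + 6 over GF(11); base point G = (2, 7) has order 13.
P_MOD, A = 11, 1
INF = None  # the point at infinity

def ec_add(P, Q):
    # Group addition of points on the curve (chord-and-tangent rule).
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                   # P + (-P)
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    # k-fold addition of P (real systems use double-and-add).
    R = INF
    for _ in range(k):
        R = ec_add(R, P)
    return R

G = (2, 7)
Y = ec_mul(5, G)          # public point for the secret x = 5
# Recovering x from Y and G is the EC discrete log problem;
# brute force works only because this curve is tiny.
x = next(k for k in range(1, 13) if ec_mul(k, G) == Y)
assert x == 5
assert ec_mul(13, G) is INF   # the order of G is 13
```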

Public key distribution

So far, the benefits of public key encryption methods may not have seemed obvious. However, they make it easy to solve the problem of deriving a shared secret key for a communication session between any pair of users of an information system. Back in 1976, Diffie and Hellman proposed a public key distribution protocol for this purpose. It involves each of the two communicating users independently generating a random number, transforming it through some procedure, exchanging the transformed numbers over an open communication channel, and computing a shared secret key from the information received from the partner. Each such key exists only during one communication session, or even part of it. Thus, public key distribution allows each pair of system users to derive their own shared secret key, simplifying the procedure for distributing secret keys. However, everything is not so simple: since the subscribers have no pre-distributed shared secret before the communication session, they cannot, in principle, verify each other's authenticity by exchanging messages over an open channel. For example, keys can be sent using the ElGamal algorithm described above as modified by Shamir, but how can you be sure that you are dealing with your partner and not an interceptor? To confirm authenticity, each participant in the secret network must still have his own secret key, known only to him and distinguishing him from all other subscribers. In this case, the Diffie-Hellman algorithm provides a procedure for presenting a password such that its repeated use does not reduce the reliability of the proof of the owner's authenticity. As a result, the two functions of a shared secret key usually delivered over a secret channel, protecting information in the communication channel from a third party and confirming each subscriber's authenticity to the partner, are separated.

The Diffie-Hellman public key distribution algorithm looks like this:

    Let there be two subscribers of an open network, A and B, who both know the pair of public numbers p and d. In addition, A has a secret key x from the interval (1, n), and B has a secret key y from the same interval.

    Subscriber A sends B the value Z' = d^x mod p, and subscriber B sends A the value Z'' = d^y mod p.

    After this, they compute the common key Z as Z = Z''^x = Z'^y (mod p).

Using special techniques, the time for generating a public key in the Diffie-Hellman system can be reduced by 5 times compared to the ElGamal system as modified by Shamir, and by 30 times compared to RSA at the same level of security. This, from the point of view of most practical applications, turns out to be a noticeable advantage, since encryption and decryption using the RSA algorithm is approximately a thousand times slower than classical algorithms such as DES. Note that for many applications of public key cryptographic systems, the computation time of cryptographic transformations is not of great importance. For example, when identifying users using credit cards, it will not make a difference whether it requires one microsecond or one second. The same applies to the choice of a common encryption key for another, faster, but not capable of key exchange cryptographic system.

The need, in public key distribution systems, to have individual secret passwords distributed in advance from a center to confirm users' authenticity is not as burdensome a task as producing and distributing pairs of secret keys from the center for communication between subscribers. The validity period of such a password can be significantly longer than that of a communication key, say a year, and their total number in the communication network equals the number of subscribers. Moreover, with some types of communication, the partner's authenticity can be confirmed by recognizing physical characteristics: by voice in telephone communication, or by appearance and voice when communicating over television channels. It should be noted that key distribution using public key cryptosystems has a single advantage: the need to keep only one key at each secret communication node. Classical symmetric cryptosystems require as many keys at a node as it has subscribers. However, public key systems also have weaknesses. While breaking a ciphertext carrying a key in a classical system is fundamentally impossible, since the plaintext is random and contains no redundant information, in public key systems the cryptanalyst always has hope of success. Further, if the number d is common to all network participants, then its compromise, in the form of discovered special properties that make taking logarithms easier, leads to the compromise of the entire network. If d is individual for each pair of subscribers, then, first, the abundance of keys makes it easier to find a weak one among them, and second, although distributing and storing unclassified keys is incomparably easier than secret ones, it still causes plenty of trouble.
Therefore, if a cryptographer has the opportunity to use the services of a secret channel, he will always prefer it to public key distribution.

Of the practically operating communication networks that use a public key distribution system, the most seriously protected is the US government telephone network based on STU-III devices. It began operating in 1987 and now has more than 150 thousand subscribers. In Russia, a similar network, called ATS-1 or the "turntable", is also reliably protected, but it has hundreds of times fewer subscribers. By the early eighties, cryptologists had come to understand the advantages of so-called hybrid systems, in which public key encryption procedures are used only to transmit keys and digital signatures, while the information itself is protected by a classical algorithm such as DES, whose key is transmitted using public key encryption. The first serial device of this type was the Datacryptor from Racal-Milgo, released in 1979. The Datacryptor key management appliance is designed primarily for government communications networks and is certified to comply with the English standard for protecting unclassified but sensitive information. It provides alarms about violations of cryptographic requirements and error notifications. The device establishes encrypted communication by generating and transmitting a shared secret key using the RSA algorithm. Subsequently, many devices of this type were produced to protect information. Other examples of the use of new cryptographic ideas are demonstrated by many commercial networks, especially banking networks such as SWIFT. In addition, the RSA digital signature system is used in the Nuclear Test Limitation Treaty verification equipment developed by Sandia Laboratories in 1982, in the BPMIS network, and in other systems.






