Types of encryption algorithms


Almost all cryptographic methods in use break a message into a large number of parts (or characters) of fixed size, each of which is encrypted separately, though not necessarily independently. This greatly simplifies encryption, since messages usually have different lengths.

There are three main encryption methods: stream, block, and feedback encryption.

The following four characteristic features of cryptographic methods are highlighted.

1. Operations on individual bits or blocks.

2. Dependence or independence of the encryption function on the results of encrypting previous parts of the message.

3. Dependence or independence of the encryption of individual message characters on their position in the text. For example, in stream encryption, different characters of a message are encrypted taking their position in the message into account. This property is called positional dependence (or independence) of the cipher.

4. Symmetry or asymmetry of the encryption function. This important property defines the essential difference between conventional symmetric (single-key) cryptosystems and asymmetric two-key (public key) cryptosystems. The main difference between the two is that in an asymmetric cryptosystem, knowledge of the encryption (or decryption) key is not sufficient to discover the corresponding decryption (or encryption) key.

Main characteristics of cryptosystems

Cryptosystem type          Operations on      Depends on          Positional        Symmetry/
                           bits or blocks     message characters  dependence        asymmetry

Stream encryption          bits               does not depend     depends           symmetric

Block encryption           blocks             does not depend     does not depend   symmetric or asymmetric

Feedback from ciphertext   bits or blocks     depends             does not depend   symmetric

In a cryptosystem in which the encryption function depends on the characters of the message, error propagation may occur: if, for example, one bit of the ciphertext is corrupted during transmission, the decrypted plaintext may contain several corrupted bits. Insertion and deletion errors can also lead to catastrophic error propagation during decryption.

Stream ciphers. Stream encryption consists of adding the bits of the plaintext modulo 2 to the bits of a pseudo-random sequence (the keystream).
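As a minimal sketch of this idea (not from the article), stream encryption can be modeled in Python; here `random.Random` merely stands in for a proper keystream generator and is not cryptographically secure:

```python
import random

def keystream(seed, n):
    """Pseudo-random byte sequence; random.Random is only a stand-in for
    a real keystream generator and is NOT cryptographically secure."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

def stream_xor(data, seed):
    """Stream cipher core: XOR each plaintext byte with a keystream byte.
    The same call decrypts, since (p ^ k) ^ k == p."""
    return bytes(d ^ k for d, k in zip(data, keystream(seed, len(data))))

ct = stream_xor(b"attack at dawn", seed=42)
assert stream_xor(ct, seed=42) == b"attack at dawn"
```

Flipping one ciphertext bit before decryption corrupts exactly one plaintext bit, which illustrates the absence of error propagation mentioned below.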

The advantages of stream ciphers include the absence of error propagation, simple implementation, and high encryption speed.

Their disadvantage is the need to transmit synchronization information ahead of the message, which must be received before any message can be decrypted. The reason is that if two different messages are encrypted with the same key, the same pseudo-random sequence must be used to decrypt them. This situation poses a serious threat to the cryptographic strength of the system, so an additional, randomly chosen message key is often used: it is transmitted at the beginning of the message and modifies the encryption key, so that different messages are encrypted with different sequences.

Stream ciphers are widely used in military systems and other similar systems to encrypt data and digitized speech signals. Until recently, such applications were predominant for this encryption method. This is explained, in particular, by the relative simplicity of designing and implementing generators of good encryption sequences. But the main factor, of course, remains the absence of error propagation in the stream cipher.

Since relatively low-quality channels are used to transmit data and voice in tactical communication networks, any cryptographic system that adds to the already high error rate is unusable. In such cases, a cryptosystem that does not propagate errors is required.

However, error propagation can also be a positive property. Suppose, for example, that encrypted data must be transmitted over a channel with a very low error probability (say, 10^-5) and it is very important that the data be received absolutely accurately. This is a typical situation in computer networks, where an error in a single bit can have catastrophic consequences, so the communication channel must be very reliable. In this situation one error is as dangerous as 100 or 1000 errors, but 100 or 1000 errors are easier to detect than a single one. Here, error propagation is no longer a disadvantage of the cipher.

The standard method for generating sequences for stream encryption is the one used in the DES data encryption standard in output feedback (OFB) mode.

Block ciphers. For block ciphers, the plaintext is first split into blocks of equal length, then a key-dependent encryption function is used to transform the plaintext block of length T bits into a ciphertext block of the same length. An important property of block ciphers is that each bit of a ciphertext block is a function of all (or almost all) of the bits of the corresponding plaintext block, and no two plaintext blocks can be represented by the same ciphertext block. The block cipher algorithm can be used in various ways. The four encryption modes in the DES standard actually apply to any block cipher.

These modes received the following names:

    direct encryption, or Electronic Code Book (ECB) mode;

    Cipher Block Chaining (CBC);

    Cipher Feedback (CFB);

    Output Feedback (OFB).
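To make the chaining idea concrete, here is a toy sketch (not from the article): a stand-in "block cipher" built from XOR, wired up in ECB and CBC fashion. XOR is of course not a secure block cipher; only the mode structure matters here:

```python
def toy_encrypt(block, key):
    # stand-in for a real block cipher E_k; XOR is NOT secure
    return bytes(b ^ k for b, k in zip(block, key))

def ecb(blocks, key):
    # electronic codebook: each block is encrypted independently
    return [toy_encrypt(b, key) for b in blocks]

def cbc(blocks, key, iv):
    # cipher block chaining: XOR in the previous ciphertext block first
    out, prev = [], iv
    for b in blocks:
        prev = toy_encrypt(bytes(x ^ y for x, y in zip(b, prev)), key)
        out.append(prev)
    return out

blocks = [b"AAAA", b"AAAA"]                  # two identical plaintext blocks
e = ecb(blocks, b"KKKK")
c = cbc(blocks, b"KKKK", b"IVIV")
assert e[0] == e[1]                          # ECB leaks the repetition
assert c[0] != c[1]                          # CBC hides it
```

The two asserts show exactly the "dictionary" weakness of direct encryption discussed below: identical plaintext blocks produce identical ciphertext blocks in ECB but not in CBC.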

The main advantage of direct block encryption (electronic codebook) is that, in a well-designed block cipher system, small changes in the ciphertext cause large and unpredictable changes in the corresponding plaintext, and vice versa.

However, the use of a block cipher in this mode has serious shortcomings. The first is that, because the encryption is deterministic, "dictionary" cryptanalysis is possible in a limited form even with a relatively large block length, for example 50-100 bits.

It is clear that a block of this size may repeat in a message owing to the large redundancy of typical natural-language text. As a result, identical plaintext blocks of length T bits in a message are represented by identical ciphertext blocks, which gives the cryptanalyst some information about the contents of the message.

Another potential disadvantage of this cipher is error propagation (a problem for all types of ciphers except stream ciphers). Changing a single bit in a received ciphertext block causes the entire block to be decrypted incorrectly, which in turn produces from 1 to T corrupted bits in the recovered plaintext.

Due to the noted disadvantages, block ciphers are rarely used in this mode to encrypt long messages. However, in financial institutions, where messages often consist of one or two blocks, block ciphers (specifically the DES algorithm) are widely used in this simple form. Since this application involves the possibility of frequent changes of the encryption key, the probability of encrypting two identical blocks of plaintext with the same key is very small. Block ciphers are most often used in encryption systems with feedback from the ciphertext.

It is also possible to build hybrid systems that combine stream and block encryption, using the best properties of each of these ciphers. In such systems, stream encryption is combined with pseudo-random permutations: the plaintext is first encrypted as in ordinary stream encryption, then the resulting ciphertext is divided into blocks of fixed size, and in each block a pseudo-random permutation is performed under the control of a key (different permutations for different blocks are preferable).
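A sketch of such a hybrid scheme (illustrative only; `random.Random` stands in for the key-driven keystream and permutation generators and is not secure):

```python
import random

BLOCK = 8  # fixed block size for the permutation stage

def _permutations(seed, n_blocks, size):
    """Key-driven sequence of permutations, one per block."""
    prng = random.Random(seed)
    perms = []
    for _ in range(n_blocks):
        p = list(range(size))
        prng.shuffle(p)          # a different permutation for each block
        perms.append(p)
    return perms

def hybrid_encrypt(data, xor_seed, perm_seed):
    # step 1: ordinary stream encryption (XOR with a keystream)
    rng = random.Random(xor_seed)
    mixed = bytes(b ^ rng.randrange(256) for b in data)
    # step 2: pseudo-random permutation inside each fixed-size block
    out = bytearray()
    n = (len(mixed) + BLOCK - 1) // BLOCK
    for i, perm in enumerate(_permutations(perm_seed, n, BLOCK)):
        chunk = mixed[i * BLOCK:(i + 1) * BLOCK]
        out.extend(chunk[p] for p in perm if p < len(chunk))
    return bytes(out)

def hybrid_decrypt(data, xor_seed, perm_seed):
    # undo the per-block permutation, then the keystream XOR
    n = (len(data) + BLOCK - 1) // BLOCK
    un = bytearray()
    for i, perm in enumerate(_permutations(perm_seed, n, BLOCK)):
        chunk = data[i * BLOCK:(i + 1) * BLOCK]
        idx = [p for p in perm if p < len(chunk)]
        inv = bytearray(len(chunk))
        for j, p in enumerate(idx):
            inv[p] = chunk[j]
        un.extend(inv)
    rng = random.Random(xor_seed)
    return bytes(b ^ rng.randrange(256) for b in un)

msg = b"hybrid stream plus block permutation"
ct = hybrid_encrypt(msg, xor_seed=1, perm_seed=2)
assert hybrid_decrypt(ct, xor_seed=1, perm_seed=2) == msg
```

Because the permutation only shuffles bytes within a block, a single corrupted ciphertext byte still corrupts only a single plaintext byte: the scheme does not propagate errors.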

The order of these two operations can be reversed without affecting the basic properties of the system. The result is a cipher that does not propagate errors but has an additional property that a stream cipher lacks: an eavesdropper cannot tell which plaintext bit corresponds to which ciphertext bit, which makes the encrypted message harder to crack. Note, however, that this is no longer a true block cipher, since each ciphertext bit is a function of only one, rather than all, of the plaintext bits.

A public-key cryptosystem must be a block cipher operating on blocks of fairly large length. This is because a cryptanalyst who knows the public encryption key could precompute a table of correspondences between plaintext and ciphertext blocks. If the block length is small (say, 30 bits), the number of possible blocks is not too large (2^30 ≈ 10^9 for 30 bits), and a complete table could be compiled, allowing instant decryption of any message encrypted with the known public key.

Many different public-key cryptosystems have been proposed, the most famous of which is RSA (Rivest, Shamir, Adleman). The cryptographic strength of this system is based on the difficulty of factoring large numbers into primes and on the choice of two large prime numbers for the encryption and decryption keys.

It is known that the RSA algorithm cannot be used for high-speed encryption. Even the most optimized software implementations are slow, and the various hardware implementations provide encryption speeds of only 10 to 100 Kbps (using prime numbers of the minimum length that still ensures the required cryptographic strength). This means that the use of RSA for block encryption is limited, although its use for key distribution, authentication, and digital signature generation offers interesting possibilities. Some currently known public-key algorithms allow faster encryption than RSA, but they are not yet as popular.

Encryption systems with feedback. Feedback encryption systems come in various practical forms. As in block cipher systems, messages are divided into blocks of T bits, and special functions convert these blocks into ciphertext blocks also of T bits. However, whereas in a block cipher such a function depends only on the key, in feedback ciphers it depends on both the key and one or more preceding ciphertext blocks. This general definition of feedback encryption covers, as special cases, a large number of different types of systems used in practice.

The use of block cipher cryptosystems with feedback offers a number of important advantages. The first and most significant is the ability to detect message manipulation by active eavesdroppers; this exploits the propagation of errors and the ability of such systems to easily generate a message authentication code (MAC). The second advantage is that ciphertext autokey (CTAK) ciphers, used in place of block ciphers, do not require initial synchronization: if the beginning of a message is missed on reception, the rest can still be decrypted successfully (after t consecutive ciphertext bits have been received). Note also that feedback encryption systems are used not only to encrypt messages but also to authenticate them.

Block cipher cryptosystems with feedback also have certain disadvantages. The main one is error propagation: one erroneous bit in transmission can cause from 1 to t + 1 errors in the decrypted text. Thus the requirement to increase t for greater cryptographic strength conflicts with the system requirements related to error propagation. Another disadvantage is that the design and implementation of feedback encryption systems is often more difficult than that of stream encryption systems. Although feedback encryption systems of various types have been in wide use for many years, there are very few specific algorithms for them; in most cases the published algorithms are block cipher algorithms modified for special applications.

The first conclusion that can be drawn from this analysis is that most practical cryptosystems use either stream encryption or feedback encryption algorithms. Most stream cipher cryptosystems use either commercial (including proprietary) algorithms or classified government algorithms. This situation will apparently continue in the coming years.

It is also likely that most feedback encryption systems will be based on variants of block cipher algorithms, in particular the best-known block cipher, DES. As for other encryption methods, it can be said that, despite the rapid growth of publications on public-key cryptosystems, only one of them, RSA, has stood the test of time.

But the RSA algorithm has severe implementation limitations and is therefore unsuitable for some cryptographic applications. It can certainly be argued, however, that public-key cryptosystems have had a significant impact on data encryption technology: they are finding increasing use, mainly for generating digital signatures and for key management in conventional cryptosystems (such as key encryption).

Potential users of cryptography can choose between stream cipher systems and feedback cipher systems (possibly based on block cipher algorithms). However, there are certain application areas, such as financial transactions, where direct block encryption ("electronic codebook") can be used. The choice of a cryptoalgorithm largely depends on its purpose. Some information to guide the choice of encryption type is given in the table.

Sergey Panasenko,
Head of the software development department at Ankad,
[email protected]

Basic Concepts

The process of converting open (plain) data into encrypted data and back is usually called ciphering, and the two components of this process are called encryption and decryption, respectively. Mathematically, this transformation is represented by the following dependencies describing the operations on the original information:

C = Ek1(M)

M′ = Dk2(C),

where M (message) is open information (in the literature on information security it is often called “source text”);
C (cipher text) - the ciphertext (or cryptogram) obtained as a result of encryption;
E (encryption) - an encryption function that performs cryptographic transformations on the source text;
k1 (key) - parameter of function E, called the encryption key;
M′ - information obtained as a result of decryption;
D (decryption) - decryption function that performs inverse cryptographic transformations on the ciphertext;
k2 is the key used to decrypt information.

The concept of “key” in the GOST 28147-89 standard (symmetric encryption algorithm) is defined as follows: “a specific secret state of some parameters of a cryptographic transformation algorithm, ensuring the selection of one transformation from a set of possible transformations for a given algorithm.” In other words, the key is a unique element with which you can change the results of the encryption algorithm: the same source text will be encrypted differently when using different keys.

In order for the result of decryption to match the original message (i.e., for M′ = M), two conditions must be met simultaneously: the decryption function D must match the encryption function E, and the decryption key k2 must match the encryption key k1.

If a cryptographically strong encryption algorithm was used, then without the correct key k2 it is impossible to obtain M′ = M. Cryptographic strength is the main characteristic of encryption algorithms and indicates, above all, how difficult it is to obtain the original text from the encrypted text without the key k2.

Encryption algorithms are divided into two categories: symmetric and asymmetric. In the former, the encryption and decryption keys are related by k1 = k2 = k (i.e., functions E and D use the same key). In asymmetric encryption, the public encryption key k1 is calculated from the secret key k2 in such a way that the reverse transformation is impossible, for example by the formula k1 = a^k2 mod p (a and p are parameters of the algorithm used).
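With toy numbers (illustrative only, far too small for real use), the asymmetry of k1 = a^k2 mod p can be shown directly: the forward direction is one call to Python's built-in pow, while recovering the secret exponent means solving a discrete logarithm by search:

```python
a, p = 5, 2003                 # toy public parameters
k2 = 1234                      # secret key
k1 = pow(a, k2, p)             # public key: k1 = a^k2 mod p -- fast

# inverting this is a discrete-logarithm search, feasible only because p
# is tiny; the exponent found need not equal k2 if a's order is less
# than p - 1, but it produces the same public key
x = next(e for e in range(p) if pow(a, e, p) == k1)
assert pow(a, x, p) == k1
```

For realistic parameter sizes (hundreds of digits) the search loop becomes hopeless, which is exactly what the security of such systems rests on.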

Symmetric encryption

Symmetric encryption algorithms date back to ancient times: this method of hiding information was used by the Roman emperor Gaius Julius Caesar in the 1st century BC, and the algorithm he invented is known as the "Caesar cipher."

Currently, the best known symmetric encryption algorithm is DES (Data Encryption Standard), developed in 1977. Until recently, it was the “US standard”, since the government of this country recommended its use for implementing various data encryption systems. Despite the fact that DES was originally planned to be used for no more than 10-15 years, attempts to replace it began only in 1997.

We will not consider DES in detail (almost all books on the list of additional materials contain a detailed description of it), but will turn to more modern encryption algorithms. It is worth noting only that the main reason for changing the encryption standard is its relatively weak cryptographic strength, caused by the fact that the DES key length is only 56 significant bits. It is known that any strong encryption algorithm can be cracked by trying all possible encryption keys (a brute-force attack). It is easy to calculate that a cluster of 1 million processors, each testing 1 million keys per second, would check all 2^56 DES keys in almost 20 hours. And since such computing power is quite realistic by today's standards, it is clear that a 56-bit key is too short and the DES algorithm needs to be replaced with a stronger one.
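The arithmetic behind that estimate can be checked in a couple of lines (a quick sanity check, not from the article):

```python
keys = 2 ** 56                    # size of the DES key space
rate = 10 ** 6 * 10 ** 6          # 1 million processors x 1 million keys/s
hours = keys / rate / 3600        # exhaustive-search time in hours
print(round(hours, 1))            # -> 20.0
```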

Today, two modern strong encryption algorithms are increasingly used: the domestic standard GOST 28147-89 and the new US crypto standard - AES (Advanced Encryption Standard).

Standard GOST 28147-89

The algorithm defined by GOST 28147-89 (Fig. 1) has an encryption key length of 256 bits. It encrypts information in blocks of 64 bits (such algorithms are called block algorithms), each of which is divided into two subblocks of 32 bits (N1 and N2). Subblock N1 is processed in a certain way, after which its value is added to the value of subblock N2 (the addition is performed modulo 2, i.e., the logical XOR operation, "exclusive or"), and then the subblocks are swapped. This transformation is performed a certain number of times ("rounds"): 16 or 32, depending on the operating mode of the algorithm. In each round, two operations are performed.

The first is key mixing: the contents of subblock N1 are added modulo 2^32 to a 32-bit part of the key Kx. The full encryption key is represented as a concatenation of eight 32-bit subkeys: K0, K1, K2, K3, K4, K5, K6, K7. During encryption, one of these subkeys is used, depending on the round number and the operating mode of the algorithm.

The second operation is table substitution. After key mixing, subblock N1 is divided into 8 parts of 4 bits each, and the value of each part is replaced according to the substitution table for that part of the subblock. The subblock is then rotated left by 11 bits.

Table substitution (Substitution box, S-box) is often used in modern encryption algorithms, so it is worth explaining how this operation is organized. The output values of the blocks are recorded in a table. A data block of a certain size (in our case, 4-bit) has a numerical representation that determines the index of the output value. For example, if the S-box is 4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1 and the 4-bit block "0100" (value 4) arrives at the input, then according to the table the output value is 15, i.e. "1111" (0 → 4, 1 → 11, 2 → 2, ...).
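The substitution step, together with the rotation that follows it in the GOST round, can be sketched as follows. For simplicity the example table from the text is used for all eight 4-bit parts; in the real standard each part has its own table:

```python
SBOX = [4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1]

def substitute(n1):
    """Replace each of the eight 4-bit parts of a 32-bit subblock via the
    S-box: the part's value is the index, the table entry is the output.
    (Same example table for all parts, which the real standard does not do.)"""
    out = 0
    for i in range(8):
        nibble = (n1 >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def rol32(x, r=11):
    """Cyclic left rotation of a 32-bit word, as in the GOST round."""
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

assert SBOX[0b0100] == 15        # input "0100" (value 4) -> 15, i.e. "1111"
```

Note that the table is a permutation of 0..15, so the substitution is reversible, which is what makes decryption possible.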

The algorithm defined by GOST 28147-89 provides four modes of operation: simple replacement, gamma, gamma with feedback, and generation of an imitation prefix. They use the same encryption transformation described above, but since the purposes of the modes differ, the transformation is carried out differently in each of them.

In simple replacement mode, the 32 rounds described above are performed to encrypt each 64-bit block of information. The 32-bit subkeys are used in the following sequence:

K0, K1, K2, K3, K4, K5, K6, K7, K0, K1, etc. - in rounds 1 to 24;

K7, K6, K5, K4, K3, K2, K1, K0 - in rounds 25 to 32.

Decryption in this mode is carried out in exactly the same way, but with a slightly different sequence of using subkeys:

K0, K1, K2, K3, K4, K5, K6, K7 - in rounds 1 to 8;

K7, K6, K5, K4, K3, K2, K1, K0, K7, K6, etc. - in rounds 9 to 32.

All blocks are encrypted independently of each other, i.e., the encryption result of each block depends only on its contents (the corresponding block of the original text). If there are several identical blocks of original (plain) text, the corresponding ciphertext blocks will also be identical, which gives a cryptanalyst trying to crack the cipher additional useful information. Therefore, this mode is used mainly for encrypting the encryption keys themselves (multi-key schemes are very often implemented, in which, for a number of reasons, the keys are encrypted with one another). Two other operating modes are intended for encrypting the information itself: gamma and gamma with feedback.

In gamma mode, each plaintext block is added bit-by-bit modulo 2 to a 64-bit block of the cipher gamma. The cipher gamma is a special sequence obtained by performing certain operations on registers N1 and N2 (see Fig. 1):

1. The initial fill, a 64-bit value called the synchronization message, is written to registers N1 and N2.

2. The contents of registers N1 and N2 (at this point, the synchronization message) are encrypted in simple replacement mode.

3. The contents of register N1 are added modulo (2^32 - 1) to the constant C1 = 2^24 + 2^16 + 2^8 + 2^2, and the result is written to register N1.

4. The contents of register N2 are added modulo 2^32 to the constant C2 = 2^24 + 2^16 + 2^8 + 1, and the result is written to register N2.

5. The contents of registers N1 and N2 are output as a 64-bit gamma block of the cipher (in this case, N1 and N2 form the first gamma block).

If the next gamma block is needed (i.e., encryption or decryption must continue), the process returns to step 2.

For decryption, the gamma is generated in the same way and then XORed again with the ciphertext bits. Since this operation is reversible, if the gamma is generated correctly, the original text is obtained.
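The gamma-generation loop above can be sketched as follows. Here `encrypt_pair` is only a stand-in for the 32-round simple-replacement encryption of the register pair (the real transformation is not reproduced); the constants are those from steps 3 and 4:

```python
C1 = 2**24 + 2**16 + 2**8 + 2**2      # constant from step 3
C2 = 2**24 + 2**16 + 2**8 + 1         # constant from step 4

def gamma_blocks(n1, n2, encrypt_pair, count):
    """Yield 64-bit gamma blocks from the 32-bit register halves n1, n2
    (the synchronization message). encrypt_pair stands in for the
    simple-replacement encryption of the register pair."""
    for _ in range(count):
        n1, n2 = encrypt_pair(n1, n2)          # step 2
        n1 = (n1 + C1) % (2**32 - 1)           # step 3
        n2 = (n2 + C2) % 2**32                 # step 4
        yield (n1 << 32) | n2                  # step 5: a 64-bit gamma block

# dummy stand-in "cipher", only to show that the same key and sync message
# reproduce the same gamma, so XOR-ing twice restores the plaintext
enc = lambda a, b: ((a * 69069 + 1) % 2**32, (b * 69069 + 5) % 2**32)
msg = [0x0123456789ABCDEF, 0x0F1E2D3C4B5A6978]
ct  = [m ^ g for m, g in zip(msg, gamma_blocks(7, 9, enc, 2))]
pt  = [c ^ g for c, g in zip(ct,  gamma_blocks(7, 9, enc, 2))]
assert pt == msg
```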

Encryption and decryption in gamma mode

To generate the gamma needed for decryption, the user decrypting the cryptogram must have the same key and the same value of the synchronization message that were used when encrypting the information. Otherwise, it will not be possible to recover the original text from the encrypted one.

In most implementations of the GOST 28147-89 algorithm, the synchronization message is not secret, however, there are systems where the synchronization message is the same secret element as the encryption key. For such systems, the effective key length of the algorithm (256 bits) is increased by another 64 bits of the secret synchronization message, which can also be considered as a key element.

In gamma mode with feedback, the N1 and N2 registers are filled, starting from the 2nd block, not with the previous gamma block but with the result of encrypting the previous plaintext block (Fig. 2). The first block in this mode is generated exactly as in the previous mode.

Fig. 2. Generation of the cipher gamma in gamma mode with feedback.

Before considering the imitation-prefix generation mode, the concept itself should be defined. An imitation prefix is a cryptographic checksum calculated using the encryption key and designed to verify the integrity of messages. When generating an imitation prefix, the following operations are performed: the first 64-bit block of the information array for which the prefix is calculated is written to registers N1 and N2 and encrypted in the reduced simple replacement mode (the first 16 of the 32 rounds are performed). The result is added modulo 2 to the next block of information, and the result is stored in N1 and N2.

The cycle repeats up to the last block of information. The resulting 64-bit contents of registers N1 and N2, or a part of them, is called the imitation prefix. Its size is chosen based on the required message reliability: with an imitation prefix of length r bits, the probability that a change in the message will go unnoticed is 2^-r. Most often a 32-bit imitation prefix is used, i.e., half the contents of the registers. This is enough, since, like any checksum, the imitation prefix is intended primarily to protect against accidental corruption of information. To protect against intentional modification of data, other cryptographic methods are used, primarily an electronic digital signature.
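A sketch of this MAC loop (with a dummy stand-in for the reduced 16-round encryption; the real GOST transformation is not reproduced here):

```python
MASK64 = (1 << 64) - 1

def encrypt16(x):
    # dummy stand-in for the reduced (16-round) simple-replacement
    # encryption of the 64-bit register pair; NOT the real GOST rounds
    return (x * 0x5DEECE66D + 11) & MASK64

def imitation_prefix(blocks, r=32):
    """Chain each 64-bit block through the keyed transformation: encrypt
    the state, XOR in the next block, repeat; return the low r bits of
    the final state as the checksum."""
    state = 0
    for b in blocks:
        state = encrypt16(state ^ b)
    return state & ((1 << r) - 1)
```

With the default r = 32 the prefix is half the register contents, matching the common choice described above; tampering with any block changes the chained state and, with probability 1 - 2^-r, the prefix.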

When exchanging information, the imitation prefix serves as an additional means of control. It is calculated for the plaintext when information is encrypted and is sent along with the ciphertext. After decryption, a new value of the imitation prefix is calculated and compared with the one sent. If the values do not match, the ciphertext was corrupted during transmission or the wrong keys were used during decryption. The imitation prefix is especially useful for verifying correct decryption of key information when multi-key schemes are used.

The GOST 28147-89 algorithm is considered very strong: at present, no method of breaking it more effective than the brute-force method mentioned above has been proposed. Its high security is achieved primarily by the large key length, 256 bits. When a secret synchronization message is used, the effective key length grows to 320 bits, and keeping the substitution table secret adds further bits. In addition, cryptographic strength depends on the number of transformation rounds, which according to GOST 28147-89 must be 32 (the full dispersion effect of the input data is achieved after 8 rounds).

AES standard

Unlike the GOST 28147-89 algorithm, which remained secret for a long time, the American AES encryption standard, designed to replace DES, was selected through an open competition, where all interested organizations and individuals could study and comment on the candidate algorithms.

A competition to replace DES was announced in 1997 by the US National Institute of Standards and Technology (NIST - National Institute of Standards and Technology). 15 candidate algorithms were submitted to the competition, developed by both well-known organizations in the field of cryptography (RSA Security, Counterpane, etc.) and individuals. The results of the competition were announced in October 2000: the winner was the Rijndael algorithm, developed by two cryptographers from Belgium, Vincent Rijmen and Joan Daemen.

The Rijndael algorithm does not resemble most known symmetric encryption algorithms, whose structure is called a "Feistel network" and which are similar in this respect to the Russian GOST 28147-89. The peculiarity of the Feistel network is that the input value is divided into two or more subblocks, some of which are processed in each round according to a certain rule and then superimposed on the unprocessed subblocks (see Fig. 1).

Unlike the domestic encryption standard, the Rijndael algorithm represents a data block as a two-dimensional byte array of size 4×4, 4×6, or 4×8 (several fixed sizes of the encrypted information block are allowed). All operations are performed on individual bytes of the array, as well as on its independent columns and rows.

The Rijndael algorithm performs four transformations. BS (ByteSub) is a table replacement of each byte of the array (Fig. 3). SR (ShiftRow) is a shift of the rows of the array (Fig. 4): the first row remains unchanged, while the others are cyclically shifted byte-by-byte to the left by a fixed number of bytes depending on the array size; for example, for a 4×4 array, rows 2, 3, and 4 are shifted by 1, 2, and 3 bytes respectively. Next comes MC (MixColumn), an operation on the independent columns of the array (Fig. 5), in which each column is multiplied by a fixed matrix c(x) according to a certain rule. Finally, AK (AddRoundKey) adds the key: each bit of the array is added modulo 2 to the corresponding bit of the round key, which in turn is calculated in a certain way from the encryption key (Fig. 6).


Fig. 3. Operation BS.

Fig. 4. Operation SR.

Fig. 5. Operation MC.
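The SR step and its inverse can be sketched for the 4×4 case (the array is given as a list of rows; illustrative code, not an AES implementation):

```python
def shift_rows(state):
    """Row i of the 4x4 byte array is rotated left by i positions
    (row 0 unchanged; rows 1-3 by 1, 2, 3 bytes)."""
    return [row[i:] + row[:i] for i, row in enumerate(state)]

def inv_shift_rows(state):
    """Inverse of SR: rotate each row right by its index."""
    return [row[len(row) - i:] + row[:len(row) - i]
            for i, row in enumerate(state)]

state = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
assert shift_rows(state)[1] == [5, 6, 7, 4]          # row 1 shifted by 1
assert inv_shift_rows(shift_rows(state)) == state    # SR then inverse = identity
```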

The number of encryption rounds (R) in the Rijndael algorithm is variable (10, 12, or 14) and depends on the block and key sizes (several fixed sizes are also defined for the key).

Decryption is performed using the corresponding inverse operations: table replacement using the table inverse to the one used during encryption; the inverse of SR, which rotates the rows to the right instead of the left; the inverse of MC, which is multiplication by the same rules by another matrix d(x) satisfying the condition c(x) * d(x) = 1; and AK, which is its own inverse since it uses only the XOR operation. These inverse operations are applied during decryption in the reverse order to that used during encryption.

Rijndael has become the new data encryption standard thanks to a number of advantages over the other algorithms. First of all, it provides high encryption speed on all platforms, in both software and hardware implementations. It offers incomparably better possibilities for parallelizing computations than the other algorithms submitted to the competition. In addition, its resource requirements are minimal, which is important when it is used in devices with limited computing capabilities.

The only disadvantage of the algorithm can be considered its unconventional structure. The properties of algorithms based on the Feistel network have been well studied, whereas Rijndael, by contrast, may contain hidden vulnerabilities that will be discovered only after some time has passed since its wide deployment began.

Asymmetric encryption

Asymmetric encryption algorithms, as already noted, use two keys: k1, the encryption (public) key, and k2, the decryption (secret) key. The public key is calculated from the secret one: k1 = f(k2).

Asymmetric encryption algorithms are based on one-way functions. By definition, a function y = f(x) is one-way if it is easy to compute for all possible values of x, while for most possible values of y it is quite difficult to compute a value of x such that y = f(x).

An example of a one-way function is the multiplication of two large numbers: N = P*Q. The multiplication itself is a simple operation, but the inverse function, decomposition of N into two large factors, called factorization, is by modern time estimates a rather complex mathematical problem. For example, factoring an N of 664 bits with P ≈ Q requires approximately 10^23 operations, while inversely computing x for the modular exponentiation y = a^x mod p with known a, p, and y (with the same sizes of a and p) requires about 10^26 operations. This latter example is called the discrete logarithm problem (DLP), and functions of this kind are often used in asymmetric encryption algorithms, as well as in electronic digital signature algorithms.
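The asymmetry of effort can be seen even at toy scale (the numbers below are illustrative, not from the article): multiplying is a single cheap operation, while the naive inverse already takes thousands of trial divisions:

```python
import math

P, Q = 10007, 10009            # toy primes; real ones are hundreds of digits
N = P * Q                      # forward direction: one cheap multiplication

def factor(n):
    """Inverse direction by trial division: cost grows like sqrt(n),
    which is what makes factoring a large N impractical."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None

assert factor(N) == (P, Q)
```

Trial division is, of course, only the most naive factoring method; even the best known algorithms remain superpolynomial in the bit length of N, which is the basis of the operation counts quoted above.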

Another important class of functions used in asymmetric encryption are one-way trapdoor functions. A function is called one-way with a trapdoor if it is one-way and, in addition, the inverse x = f^(-1)(y) can be computed efficiently once the "trapdoor" is known (some secret number; in the case of asymmetric encryption algorithms, the value of the secret key).

One-way trapdoor functions underlie the widely used asymmetric encryption algorithm RSA.

RSA algorithm

Developed in 1978 by three authors (Rivest, Shamir, Adleman), it got its name from the first letters of the developers' last names. The reliability of the algorithm is based on the difficulty of factoring large numbers and calculating discrete logarithms. The main parameter of the RSA algorithm is the system module N, which is used to carry out all calculations in the system, and N = P*Q (P and Q are secret random prime large numbers, usually of the same dimension).

The secret key k2 is chosen randomly and must satisfy the following conditions:

1 < k2 < F(N),   GCD(k2, F(N)) = 1,

where GCD is the greatest common divisor; that is, k2 must be coprime to the value of the Euler function F(N). The latter is equal to the number of positive integers in the range from 1 to N that are coprime to N, and is calculated as F(N) = (P - 1)*(Q - 1).

The public key k1 is calculated from the relation (k2*k1) mod F(N) = 1, for which the generalized (extended) Euclidean algorithm, i.e. the algorithm for computing the greatest common divisor, is used. Encryption of a data block M with the RSA algorithm is performed as C = M^k1 mod N. Note that since in a real RSA cryptosystem the number k1 is very large (currently its size can reach 2048 bits), direct computation of M^k1 is infeasible; it is obtained by a combination of repeated squarings of M and multiplications of the intermediate results.

Inverting this function at such sizes is not feasible; in other words, M cannot be found from known C, N and k1. However, the holder of the secret key k2 can compute M = C^k2 mod N by simple transformations. Obviously, besides the secret key itself, the secrecy of the parameters P and Q must also be ensured: if an attacker obtains their values, he will be able to compute the secret key k2.
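The whole cycle can be traced with textbook-sized numbers. In the sketch below, the primes P = 61 and Q = 53 are illustrative values, not parameters from this article; and since k1 and k2 are mutual inverses modulo F(N), the code fixes the public exponent and derives the secret one, which is equivalent to the order of computation described above.

```python
from math import gcd

# Toy RSA with tiny primes (illustration only; real P and Q are hundreds of
# digits long). Notation follows the article: N = P*Q, F(N) = (P-1)*(Q-1),
# public key k1, secret key k2 with (k1*k2) mod F(N) = 1.

P, Q = 61, 53
N = P * Q                     # the system modulus
F = (P - 1) * (Q - 1)         # Euler's function of N

k1 = 17                       # public exponent, must be coprime to F
assert gcd(k1, F) == 1
k2 = pow(k1, -1, F)           # modular inverse via the extended Euclidean algorithm

M = 65                        # plaintext block; must satisfy M < N
C = pow(M, k1, N)             # encryption:  C = M^k1 mod N (fast modular exponentiation)
assert pow(C, k2, N) == M     # decryption:  M = C^k2 mod N
```

Python's three-argument `pow` performs exactly the repeated-squaring trick mentioned above, and `pow(k1, -1, F)` (Python 3.8+) computes the modular inverse.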

Which encryption is better?

The main disadvantage of symmetric encryption is the need to transfer keys “from hand to hand”. This drawback is very serious, since it makes it impossible to use symmetric encryption in systems with an unlimited number of participants. However, otherwise, symmetric encryption has some advantages that are clearly visible against the background of the serious disadvantages of asymmetric encryption.

The first of these drawbacks is the low speed of encryption and decryption, caused by resource-intensive operations. Another, "theoretical", drawback is that the cryptographic strength of asymmetric algorithms has not been mathematically proven; this stems primarily from the discrete logarithm problem, for which it has not yet been proven that no solution in acceptable time exists. Extra difficulties are also created by the need to protect public keys from substitution: by replacing the public key of a legitimate user with his own, an attacker can have an important message encrypted with his public key and subsequently easily decrypt it with his private key.

However, these shortcomings do not prevent the widespread use of asymmetric encryption algorithms. Today there are cryptosystems that support certification of public keys, as well as combining symmetric and asymmetric encryption algorithms. But this is a topic for a separate article.

Additional sources of information

For those readers who are seriously interested in encryption, the author recommends broadening their horizons with the help of the following books.

  1. Brassard J. "Modern cryptology."
  2. Petrov A. A. "Computer security: cryptographic methods of protection."
  3. Romanets Yu. V., Timofeev P. A., Shangin V. F. "Information protection in modern computer systems."
  4. Sokolov A.V., Shangin V.F. "Information protection in distributed corporate networks and systems."

A complete description of encryption algorithms can be found in the following documents:

  1. GOST 28147-89. Information processing system. Cryptographic protection. Cryptographic conversion algorithm. - M.: State Standard of the USSR, 1989.
  2. AES algorithm: http://www.nist.gov/ae.
  3. RSA algorithm: http://www.rsasecurity.com/rsalabs/pkcs/pkcs-1.

Basic concepts and definitions

As the information society emerges, large states become able to access technical means of total surveillance of millions of people. Therefore, cryptography is becoming one of the main tools to ensure confidentiality, trust, authorization, electronic payments, corporate security and other important things.

The problem of protecting information by transforming it is the subject of cryptology, which is divided into two directions: cryptography and cryptanalysis. The goals of these directions are directly opposite.

Cryptography is engaged in the search for and study of mathematical methods of transforming information. The area of interest of cryptanalysis is the possibility of decrypting information without knowledge of the keys.

Modern cryptography includes 4 main sections:

1. Symmetric cryptosystems.

2. Public key cryptosystems.

3. Electronic signature systems.

4. Key management.

The main areas of use of cryptographic methods are the transfer of confidential information through communication channels, establishing the authenticity of transmitted messages, storing information on media in encrypted form.

Cryptography makes it possible to transform information in such a way that its reading (recovery) is possible only if the key is known. Texts based on a certain alphabet will be considered as information to be encrypted and decrypted.

Alphabet – a finite set of signs used to encode information. Examples:

· the alphabet Z33 – the 32 letters of the Russian alphabet plus the space;

· the alphabet Z256 – the characters of the standard ASCII and KOI-8 codes;

· the binary alphabet Z2 – two characters (0 and 1);

· octal or hexadecimal alphabets.

Text – an ordered set of elements of the alphabet.

Encryption – the process of transforming the original (plain) text into ciphertext.

Decryption (the reverse of encryption) – the process of transforming the ciphertext back into the original text on the basis of the key.

Key – information necessary for unhindered encryption and decryption of texts.

A cryptographic system is a family T = {T1, T2, ..., Tk} of plaintext transformations. The members of this family are indexed by the symbol k; the parameter k is the key. The key space K is the set of possible key values. Typically the key is a sequence of characters of the alphabet.

Cryptosystems are divided into symmetric and asymmetric. In symmetric cryptosystems, the same key is used both for encryption and for decryption. In asymmetric (public key) systems, two keys are used, a public one and a private one, which are mathematically related to each other. Information is encrypted with the public key, which is available to everyone, and decrypted with the private key, known only to the recipient of the message.

The terms key distribution and key management refer to processes of information handling whose content is the generation of keys and their distribution among users.

An electronic (digital) signature is a cryptographic transformation attached to the text which allows, when the text is received by another user, the authorship and authenticity of the message to be verified.

Cryptographic strength is a characteristic of a cipher that determines its resistance to decryption without knowing the key (i.e., resistance to cryptanalysis). There are several indicators of cryptographic strength:

· the number of all possible keys;

· the average time required for cryptanalysis.

Requirements for cryptosystems

The process of cryptographic data closure can be carried out either in software or in hardware. The hardware implementation is significantly more expensive, but has high performance, simplicity, and security. The software implementation is more practical and allows for a certain flexibility in use.

Generally accepted requirements for cryptographic systems:

· an encrypted message must be readable only if the key is available;

· the number of operations necessary to determine the used key from a fragment of an encrypted message and the corresponding plaintext must be at least the total number of possible keys;

· the number of operations required to decrypt information by searching through possible keys must have a strict lower bound and go beyond the capabilities of modern computers (taking into account the capabilities of network computing);

· knowledge of the encryption algorithm should not affect the reliability of the protection;

· a slight change in the key should lead to a significant change in the type of encrypted message;

· the structural elements of the encryption algorithm must be unchanged;

· additional bits introduced into the message during the encryption process must be completely and securely hidden in the ciphertext;

· the length of the ciphertext must be equal to the length of the original text;

· there should be no simple and easily established dependencies between the keys used sequentially in the encryption process;

· any of the many possible keys must provide reliable information protection;

· the algorithm must allow both software and hardware implementation, while changing the key length should not lead to a qualitative deterioration of the encryption algorithm.

Basic encryption algorithms

The encryption-decryption method is called a cipher. The key used for decryption may differ from the key used for encryption, but in most algorithms the keys coincide.

Key-based algorithms are divided into two classes: symmetric (secret-key) and asymmetric (public-key). Symmetric algorithms use the same key for encryption and decryption, or the decryption key is easily calculated from the encryption key. Asymmetric algorithms use different keys, and the decryption key cannot be derived from the encryption key.

Symmetric algorithms are divided into stream ciphers and block ciphers. Stream ciphers encrypt information bit by bit, while block ciphers operate on a fixed set of data bits (a typical block size is 64 bits) and encrypt this set as a single whole.

Typically, the encryption key is a file or data array and is stored on a personal key medium (for example, a flash drive or smart card); It is mandatory to take measures to ensure that the personal key media is not accessible to anyone other than its owner.

Authenticity is ensured due to the fact that without prior decryption it is almost impossible to carry out semantic modification and forgery of a cryptographically closed message. A fake message cannot be encrypted correctly without knowing the secret key.

Data integrity is ensured by attaching to the transmitted data a special code (an imitation insert) generated using a secret key. The imitation insert is a kind of checksum, i.e. a reference characteristic of the message against which its integrity is checked. The algorithm for generating the imitation insert must make it depend, by some complex cryptographic law, on every bit of the message. The recipient verifies the integrity of the message by generating, with the secret key, the imitation insert corresponding to the received message and comparing it with the received value of the imitation insert. If they match, it is concluded that the information was not modified on the way from sender to recipient.

Symmetric encryption is ideal for encrypting information "for oneself", for example to prevent unauthorized access to it in the owner's absence. Possessing high encryption speed, single-key cryptosystems make it possible to solve many important information security problems. However, the autonomous use of symmetric cryptosystems in computer networks raises the problem of distributing the encryption keys among the users.

Before you begin exchanging encrypted data, you must exchange secret keys with all recipients. The transfer of the secret key of a symmetric cryptosystem cannot be carried out via public communication channels; the secret key must be transferred to the sender and recipient via a secure channel (or using a courier). To ensure effective protection of messages circulating on the network, a huge number of frequently changing keys are required (one key for each pair of users). The problem of distributing secret keys with a large number of users is a very time-consuming and complex task. In a network, N(N-1)/2 secret keys must be distributed among N users.
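The scale of the problem is easy to check with a minimal sketch of the two key counts (the linear 2N figure for asymmetric systems is discussed later in the text):

```python
# Keys needed for pairwise-secure communication among n users:
# one secret key per PAIR of users in a symmetric system,
# one key pair (public + private) per USER in an asymmetric one.

def symmetric_keys(n: int) -> int:
    return n * (n - 1) // 2          # N(N-1)/2

def asymmetric_keys(n: int) -> int:
    return 2 * n                     # 2N

for n in (10, 100, 1000):
    print(f"{n} users: {symmetric_keys(n)} symmetric vs {asymmetric_keys(n)} asymmetric keys")
# For 1000 users: 499500 symmetric keys versus only 2000 asymmetric ones.
```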

Asymmetric ciphers allow the public key to be available to everyone (for example, published in a newspaper), so anyone can encrypt a message. However, only the user holding the decryption key can decrypt it. The encryption key is called the public key, and the decryption key the private key, or secret key.

The private and public keys are generated in pairs. The secret key must remain with its owner and be reliably protected from unauthorized access (similar to the encryption key in symmetric algorithms). A copy of the public key must be kept by each subscriber of the cryptographic network with whom the owner of the secret key exchanges information.

Public key cryptographic systems use so-called irreversible, or one-way, functions, which have the following property: given a value x it is relatively easy to compute f(x), but if y = f(x), there is no easy way to compute the value x. The set of classes of irreversible functions gives rise to the whole variety of public key systems.

The process of transmitting encrypted information in an asymmetric cryptosystem is carried out as follows.

Preparatory stage:

· subscriber B generates a pair of keys: a secret key kB and a public key KB;

· the public key KB is sent to subscriber A and the other subscribers (or is made available, for example, on a shared resource).

Usage (exchange of information between A and B):

· subscriber A encrypts the message with subscriber B's public key KB and sends the ciphertext to subscriber B;

· subscriber B decrypts the message with his secret key kB; no one else can decrypt this message, because they do not have subscriber B's secret key.

Information protection in an asymmetric cryptosystem is based on the secrecy of the recipient's key kB.

Advantages of asymmetric cryptosystems over symmetric ones:

· asymmetric cryptosystems solve the complex problem of distributing keys among users, since each user can generate his own key pair himself, and users' public keys can be freely published and distributed over the network;

· the quadratic dependence of the number of keys on the number of users disappears: in an asymmetric cryptosystem the number of keys grows linearly with the number of subscribers (a system of N users needs 2N keys), not quadratically as in symmetric systems;

· asymmetric cryptosystems allow protocols of interaction between parties that do not trust each other, since the private key need be known only to its owner.

Disadvantages of asymmetric cryptosystems:

· at present there is no mathematical proof of the irreversibility of the functions used in asymmetric algorithms;

· asymmetric encryption is significantly slower than symmetric encryption, since encryption and decryption use very resource-intensive operations; for the same reason, a hardware encryptor for an asymmetric algorithm is much harder to implement than one for a symmetric algorithm;

· public keys need to be protected from substitution.

Modern encryption-decryption algorithms are quite complex and cannot be performed manually. True cryptographic algorithms are designed for use by computers or special hardware devices. In most applications, cryptography is done by software and there are many cryptography packages available.

Symmetric algorithms are faster than asymmetric ones. In practice, the two kinds are often used together: a public key algorithm is used to transmit a randomly generated secret key, which is then used to encrypt and decrypt the message itself.

Many high-quality cryptographic algorithms are widely available. The most famous symmetric algorithms are DES and IDEA; the best asymmetric algorithm is RSA. In Russia, GOST 28147-89 is adopted as the encryption standard.

Table 1 shows the classification of cryptographic information closure.

Table 1

Type of conversion | Transformation method | Variety of the method | Implementation
Encryption | Replacement (substitution) | Simple (monoalphabetic) | Software
 | | Polyalphabetic single-circuit ordinary | Software
 | | Polyalphabetic single-circuit monophonic | Software
 | | Polyalphabetic multi-circuit | Software
 | Permutation | Simple | Software
 | | Complicated by a table | Software
 | | Complicated by routes | Software
 | Analytical transformation | By the rules of matrix algebra | Software
 | | By special dependencies | Software
 | Gamming | With a finite short gamma | Hardware-software
 | | With a finite long gamma | Hardware-software
 | | With an infinite gamma | Hardware-software
 | Combined | Substitution + permutation | Hardware-software
 | | Substitution + gamming | Hardware-software
 | | Permutation + gamming | Hardware-software
 | | Gamming + gamming | Hardware-software
Coding | Semantic | By special tables (dictionaries) | Software
 | Symbolic | By code alphabet | Software
Other types | Dissection/scattering | Semantic | Hardware-software
 | | Mechanical | Software
 | Compression-expansion | |

I. Encryption is understood as the kind of cryptographic closure in which every character of the protected message is subjected to transformation.

All known encryption methods can be divided into five groups: replacement (substitution), permutation, analytical transformation, gamming, and combined encryption. Each of these methods can have several varieties.

Varieties of the replacement (substitution) method:

1) Simple (monoalphabetic) – the characters of the text are replaced by other characters of the same alphabet. If the volume of ciphertext is large, the frequencies of the letters in the ciphertext approach the letter frequencies of the alphabet (of the language the text is written in), and decryption becomes very simple. Nowadays this method is used rarely, and only when the text to be encrypted is short.

2) Polyalphabetic substitution – a fairly simple transformation which consists in replacing the characters of the source text with characters of other alphabets according to a more or less complex rule. Achieving high cryptographic strength requires the use of large keys.

In polyalphabetic single-circuit ordinary substitution, several alphabets are used to replace the characters of the source text, and the alphabets are changed sequentially and cyclically: the first character is replaced by the corresponding character of the first alphabet, the second by a character of the second alphabet, and so on, until all the selected alphabets have been used, after which their use is repeated.

The feature of polyalphabetic single-circuit monophonic substitution is that the number and composition of the alphabets are chosen so that all characters appear in the ciphertext with the same frequency. This makes cryptanalysis of the ciphertext by statistical processing difficult. The frequencies of occurrence are equalized by providing more substitute elements for frequently occurring characters of the source text than for rarely occurring ones.
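A minimal sketch of the idea, with an invented (purely illustrative) substitute table: the frequent letters E and T receive several cipher symbols each, so the ciphertext symbol frequencies flatten out, yet every code still maps back to exactly one letter.

```python
import random

# Monophonic (homophonic) substitution sketch. The table below is invented
# for illustration: frequent letters get several cipher symbols, rare ones few.

HOMOPHONES = {
    "E": ["17", "42", "58", "73"],   # frequent letter: many substitutes
    "T": ["09", "31", "66"],
    "A": ["12", "55"],
    "S": ["24"],                     # rarer letter: a single substitute
}

def encrypt(text: str, rng: random.Random) -> list[str]:
    """Replace each letter by a randomly chosen one of its substitutes."""
    return [rng.choice(HOMOPHONES[ch]) for ch in text]

ciphertext = encrypt("TEASE", random.Random(7))

# Decryption simply inverts the table: every code maps to exactly one letter.
decode = {code: ch for ch, codes in HOMOPHONES.items() for code in codes}
assert "".join(decode[c] for c in ciphertext) == "TEASE"
```

Because each occurrence of "E" may be written four different ways, a frequency count of the ciphertext symbols no longer betrays the letter frequencies of the language.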

Polyalphabetic multi-circuit substitution uses several sets (circuits) of alphabets cyclically, each circuit in general having its own individual period of application. That period is measured, as a rule, in the number of characters after whose encryption the circuit of alphabets changes.

The permutation method is a simple method of cryptographic transformation, usually used in combination with other methods. It consists in rearranging the characters of the text according to certain rules within the encrypted block of characters. All encryption and decryption procedures based on permutation are sufficiently formalized and can be implemented algorithmically.

Encryption by simple permutation is carried out as follows:

· select a keyword with non-repeating characters;

· the encrypted text is written in consecutive lines under the keyword symbols;

· the ciphertext is written out in columns in the sequence in which the letters of the key are located in the alphabet (or in the order of numbers in a natural series, if it is digital).

Example (the plaintext here is an illustrative English phrase; q denotes a space):

plain text: BE CAREFUL TODAY

key: 5 8 1 3 7 4 6 2

encryption scheme:

B E q C A R E F

U L q T O D A Y

Reading the columns in the order given by the key digits and grouping the characters in twos (one column at a time), we obtain the ciphertext:

qqFYCTRDBUEAAOEL
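The procedure can be sketched in a few lines; the digit key is the one from the text, and the plaintext is an illustrative English phrase (q marks the space character):

```python
# Simple permutation (columnar transposition): the text is written in rows
# of len(KEY) characters under the digit key, then the columns are read out
# in ascending order of the key digits.

KEY = [5, 8, 1, 3, 7, 4, 6, 2]

def encrypt(plaintext: str, key: list[int]) -> str:
    n = len(key)
    rows = [plaintext[i:i + n] for i in range(0, len(plaintext), n)]
    # column indices sorted by their key digit: the column under digit 1 first
    order = sorted(range(n), key=lambda j: key[j])
    return "".join(row[j] for j in order for row in rows if j < len(row))

print(encrypt("BE CAREFUL TODAY", KEY).replace(" ", "q"))
# -> qqFYCTRDBUEAAOEL  (q marks the space character)
```

Decryption reverses the column readout, which is why the key must be kept secret but the procedure itself is fully algorithmic.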

The disadvantage of encryption by simple permutation is that if the length of the ciphertext is large, patterns of key symbols may appear in the ciphertext. To eliminate this drawback, you can change the key after encrypting a certain number of characters. By changing the key frequently enough, the strength of encryption can be significantly increased. At the same time, however, the organization of the encryption and decryption process becomes more complicated.

Permutation complicated by a table uses a special table, into which certain complicating elements are introduced, for writing the characters of the encrypted text. The table is a matrix whose dimensions can be chosen arbitrarily. The characters of the text are written into it as in simple permutation; the complication is that a certain number of table cells are not used. The number and position of the unused elements form an additional encryption key. The text is written into the table in blocks of (m x n - S) elements (m x n being the table dimensions and S the number of unused elements). The rest of the encryption procedure is the same as for simple permutation.

By varying the size of the table, the sequence of key characters, and the number and location of unused elements, you can obtain the required strength of the ciphertext.

Permutation complicated by routes offers high encryption strength; it uses a complicated permutation scheme along routes of Hamiltonian type. The characters of the text are written at the vertices of a certain hypercube, and the ciphertext characters are read out along Hamiltonian routes, several different routes being used.

The encryption method based on analytical transformations provides sufficiently reliable closure of information. For this, methods of matrix algebra can be used, for example multiplication of a matrix by a vector. If the matrix is used as the key and the characters of the source text are taken as the components of the vector, then the components of the resulting vector are the characters of the ciphertext. Decryption uses the same rule of matrix-by-vector multiplication, except that the inverse of the matrix used for closure is taken as the basis, and the corresponding number of ciphertext characters as the factor vector. The values of the result vector are the numeric equivalents of the plaintext characters.

Gamming – this method consists in superimposing on the source text a pseudo-random sequence (the gamma) generated from the key. The gamma can be applied to the source text in two ways. In the first method, the characters of the source text and of the gamma are replaced by their numeric equivalents, which are then added modulo K, where K is the number of characters in the alphabet:

tc = (tp + tg) mod K, where tc, tp, tg are the characters of the ciphertext, the plaintext, and the gamma respectively.

In the second method, the characters of the source text and the gamma are represented in binary, and the corresponding bits are added modulo 2. Instead of addition modulo 2, other logical operations can be used, for example transformation by the rule of logical equivalence or non-equivalence. Such a replacement is equivalent to introducing one more key: the choice of the rule by which the characters of the encrypted message are formed from the characters of the source text and the gamma.

The strength of encryption using the gamma method is determined mainly by the properties of the gamma - the length of the period and the uniformity of statistical characteristics. The latter property ensures that there are no patterns in the appearance of various symbols within a period.

With good statistical properties of the gamma, the strength of the encryption is determined only by the length of its period. Moreover, if the period of the gamma exceeds the length of the encrypted text, such a cipher is theoretically absolutely secure. Any sequence of random symbols can serve as an infinite gamma, for example the sequence of digits of the number π. In computer encryption the gamma sequence is produced by a pseudorandom number generator.
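A minimal sketch of the second (bitwise) method: a key-seeded generator produces the gamma, which is XORed onto the message bytes, and applying the same gamma again restores the text. Python's `random.Random` is used here only as a stand-in for a proper cryptographic gamma generator.

```python
import random

# Gamming sketch: XOR a key-seeded pseudorandom gamma onto the message bytes.
# random.Random is NOT cryptographically strong; a real system would take its
# gamma from a stream cipher keystream.

def gamma_apply(data: bytes, key: int) -> bytes:
    gen = random.Random(key)                      # gamma generator seeded by the key
    return bytes(b ^ gen.randrange(256) for b in data)

msg = b"ATTACK AT DAWN"
cipher = gamma_apply(msg, key=2024)

assert gamma_apply(cipher, key=2024) == msg       # XOR is self-inverse
assert gamma_apply(cipher, key=9999) != msg       # wrong key gives a wrong gamma
```

The same function serves for encryption and decryption precisely because addition modulo 2 is its own inverse.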

Combined encryption methods use several different methods simultaneously, i.e. sequential encryption of the original text using two or more methods. This is a fairly effective means of increasing encryption strength.

A typical example of a combination cipher is the US National Data Encryption Standard (DES).

II. Coding is understood as the kind of cryptographic closure in which certain elements of the protected data (not necessarily individual characters) are replaced with pre-selected codes (numeric, alphabetic, or alphanumeric combinations, etc.).

This method has two varieties: semantic and symbolic coding. In semantic coding the coded elements have a quite definite meaning (words, sentences, groups of sentences). In symbolic coding every character of the protected message is encoded; symbolic coding is in essence the same as substitution encryption.

When used correctly, codes are much harder to break than other classical systems, for three reasons. First, the effective key length is large (several hundred bits for encryption; a codebook runs to hundreds of thousands or a million bits). Second, codes remove redundancy, which makes the cryptanalyst's job harder. Third, codes operate on relatively large blocks of plaintext (words and phrases) and therefore hide local information that might otherwise give the cryptanalyst valuable "clues".

Among the shortcomings of coding is the fact that the key is not used well enough: when coding an individual word or phrase, only a very small part of the codebook is involved. As a result, a heavily used code can be partially analyzed and is especially sensitive to attack in the presence of known plaintext. For these reasons, codes must be changed more frequently to ensure reliability.

III. Other methods of cryptographic closure include data dissection/scattering and data compression. Dissection/scattering consists in cutting the array of protected data into elements, none of which reveals the content of the protected information by itself, and storing these elements in different memory zones. The reverse procedure is called data assembly. Quite obviously, the algorithm for scattering and assembling the data must be kept secret.

Data compression represents the replacement of frequently occurring identical strings of data or sequences of identical characters with some pre-selected characters.

Hash functions

A hash function is a one-way function designed to produce a digest, or "fingerprint", of a file, message, or some block of data.

Initially, hash functions were used to create a unique image of information sequences of arbitrary length for the purpose of identifying them and establishing their authenticity. The image itself must be a small block of fixed length, typically 30, 60, 64, 128, 256, or 512 bits. Operations of searching, sorting, and the like over large arrays or databases are thereby significantly simplified, i.e. take much less time. To keep the probability of error at the required level, a number of requirements are imposed on the hash function:

· the hash function must be sensitive to all kinds of changes in the text M, such as insertions, deletions, and permutations;

· the hash function must have the property of irreversibility, that is, the task of selecting a document “M” that would have the required hash function value must be computationally intractable;

· the probability that the hash function values ​​of two different documents (regardless of their lengths) will coincide should be negligible.

These requirements can be met by a large number of existing mathematical functions used for sorting, searching, and the like. Later, however, on the basis of Simmons' work on authentication theory, it became clear that hashing methods could usefully be applied in message authentication schemes in communication channels and telecommunication systems. Accordingly, a number of research directions opened up in cryptography related to the development of new hash functions and the improvement of existing ones. The main idea behind the use of hash functions is to derive one-way functions from them, which are the basic building block of modern cryptographic mechanisms and authentication methods.
Let's look at the basic concepts related to one-way hashing functions.

Most hash functions are based on a one-way function f(·) which produces an output value of length n given two input values of length n. These inputs are a block of the source text Mi and the hash value Hi-1 of the previous block of text (Fig. 1):

Hi = f(Mi, Hi-1).

The hash value calculated when the last block of text is entered becomes the hash value of the entire message M.

Fig.1. One-way hash function diagram

As a result, a one-way hash function always produces output of a fixed length n, regardless of the length of the input text. The hashing algorithm is iterative, so hash functions are also called iterative algorithms. The essence of the algorithm is its one-wayness: the function must work in one direction only – compress, mix and disperse, but never restore. Such schemes make it possible to detect changes in source texts, which ensures data integrity, and in digital signature algorithms they also ensure data authenticity. However, in their pure (keyless) form these functions cannot by themselves confirm authenticity.
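The iterative scheme can be sketched with a toy compression function f (illustrative only: it has the right one-directional shape but is nowhere near collision-resistant):

```python
# Sketch of the iterative hashing scheme H_i = f(M_i, H_{i-1}) described
# above, with a toy compression function. Real designs (SHA-2, GOST hashes)
# use far stronger mixing.

BLOCK = 8  # bytes per message block; the state is n = 64 bits

def f(m_i: bytes, h_prev: int) -> int:
    """Toy compression function: mix a message block into the running state."""
    h = h_prev
    for byte in m_i:
        h = (h * 31 + byte) % (1 << 64)   # keep the state at 64 bits
    return h

def toy_hash(message: bytes, iv: int = 0x5A5A5A5A5A5A5A5A) -> int:
    h = iv                                 # H_0: a fixed initial value
    for i in range(0, len(message), BLOCK):
        h = f(message[i:i + BLOCK], h)     # H_i = f(M_i, H_{i-1})
    return h                               # the last H_i is the digest

# Fixed-length digest regardless of input length; any change alters it.
assert toy_hash(b"message one") != toy_hash(b"message two")
```

The digest of the final block depends, through the chain of states, on every block before it, which is exactly what makes the scheme sensitive to any modification of the text.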

A review of encryption algorithms common in the world allows you not only to select the algorithm necessary for your task, but also to estimate the costs of its implementation and the capabilities and requirements awaiting the user.

Encryption is a method of protecting information

From time immemorial there has been no value greater than information. The twentieth century is the century of computer science and informatization. Technology makes it possible to transmit and store ever larger amounts of information. This benefit also has a downside: information is becoming increasingly vulnerable for various reasons:

  • increasing volumes of stored and transmitted data;
  • an expanding circle of users with access to computer resources, programs and data;
  • the increasing complexity of the operating modes of computer systems.

Therefore, the problem of protecting information from unauthorized access during transmission and storage is becoming increasingly important. The essence of this problem is the constant struggle between information security specialists and their "opponents".

    Characteristics of composite encryption algorithms

    Information protection is a set of measures, methods and means that ensure:

    • exclusion of unauthorized access to computer resources, programs and data;
    • checking the integrity of information;
    • exclusion of unauthorized use of programs (protection of programs from copying).

    The obvious trend towards the transition to digital methods of transmitting and storing information makes it possible to use unified methods and algorithms to protect discrete (text, fax, telex) and continuous (speech) information.

    A proven method of protecting information from unauthorized access is encryption (cryptography). Encryption is the process of converting open data (plaintext) into encrypted data (ciphertext) or encrypted data into open data according to certain rules using keys. In English literature, encryption/decryption is enciphering/deciphering.

    Cryptographic methods make it possible to:

  • encrypt information;
  • implement an electronic signature;
  • distribute encryption keys;
  • protect against accidental or intentional modification of information.

    There are certain requirements for encryption algorithms:

    • a high level of protection against decryption and possible modification;
    • the security of the information must rest only on knowledge of the key and must not depend on whether the algorithm is known (Kerckhoffs' rule);
    • a small change in the source text or the key must lead to a significant change in the ciphertext (the "avalanche" effect);
    • the range of key values must rule out decrypting the data by enumerating the keys;
    • economical implementation of the algorithm with adequate speed;
    • the cost of decrypting the data without knowing the key must exceed the cost of the data itself.

    Legends of deep antiquity...

    Boris Obolikshto

    Cryptology is an ancient science, and this is usually emphasized with the story of Julius Caesar (100 - 44 BC), whose correspondence with Cicero (106 - 43 BC) and other "subscribers" in Ancient Rome was encrypted. The Caesar cipher, also known as the cyclic substitution cipher, replaces each letter of a message with the letter of the alphabet a fixed number of positions away from it. The alphabet is treated as cyclic, that is, after Z comes A. Caesar replaced each letter with the one three positions after it.
    Today cryptology customarily operates on symbols not as letters but as the numbers corresponding to them. Thus in the Latin alphabet we can use the numbers 0 (corresponding to A) through 25 (Z). Denoting the number of the original symbol by x and that of the encoded one by y, we can write the rule of a substitution cipher as:

    y = x + z (mod N), (1)

    where z is the secret key, N is the number of characters in the alphabet, and addition modulo N is an operation analogous to ordinary addition, except that if ordinary summation gives a result greater than or equal to N, the value of the sum is the remainder of dividing it by N.
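Rule (1) is easy to demonstrate in code. The sketch below assumes the 26-letter Latin alphabet and leaves non-letter characters untouched:

```python
# Rule (1): y = x + z (mod N), for the 26-letter Latin alphabet.
def caesar(text: str, z: int, decrypt: bool = False) -> str:
    N = 26
    shift = -z if decrypt else z
    out = []
    for ch in text.upper():
        if ch.isalpha():
            x = ord(ch) - ord("A")                      # A=0 ... Z=25
            out.append(chr((x + shift) % N + ord("A")))
        else:
            out.append(ch)                              # leave non-letters as-is
    return "".join(out)
```

With z = 3 this reproduces Caesar's own cipher; decryption simply applies the shift in the opposite direction.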

    In the accepted notation, the Caesar cipher corresponds to the secret-key value z = 3 (and for Caesar Augustus, z = 4). Such ciphers are broken extremely simply even without knowing the key: it is enough to know the encryption algorithm, and the key can be found by simple enumeration (a so-called brute-force attack). Cryptology consists of two parts: cryptography, which studies methods of encrypting and/or authenticating messages, and cryptanalysis, which considers ways of breaking and forging cryptograms. The weakness of the first ciphers for many centuries created an atmosphere of secrecy around the work of the cryptographer and slowed the development of cryptology as a science.

    For more than two thousand years, so-called “pre-scientific” cryptography has semi-intuitively “groped” for quite a few interesting solutions. The simplest action is to perform the substitution in a non-alphabetical order. It is also a good idea to rearrange the symbols in the message (permutation ciphers).

    The work of the great architect Leon Battista Alberti (1404 - 1472) is considered to be the first systematic work on cryptography. The period until the middle of the 17th century was already full of work on cryptography and cryptanalysis. The intrigues around ciphergrams in Europe at that time are surprisingly interesting. Alas, limited by the magazine’s capabilities, we will choose only one surname known from school - François Viète (1540 - 1603), who at the court of King Henry IV of France was so successful in cryptanalysis (which did not yet bear this proud name) that the Spanish King Philip II complained to the Pope about the French use of black magic. But everything happened without bloodshed - at that time, advisers from the Argenti family, whom we would call today cryptanalysts, were already serving at the Pope’s court.

    It can be argued that for centuries the decryption of cryptograms has been aided by frequency analysis of individual symbols and their combinations. The probabilities of individual letters in text vary greatly (in Russian, for example, the letter "o" appears 45 times more often than the letter "f"). This, on the one hand, serves as a basis both for key discovery and for the analysis of encryption algorithms, and on the other hand is the reason for the significant redundancy (in the information sense) of natural-language text. No simple substitution can hide the frequency with which a symbol occurs: in Russian text, the symbols corresponding to the letters "o", "e", "a", "i", "t", "n" stick out like an awl from a sack. But information theory and the measure of redundancy had not yet been created, and to combat the cryptographer's enemy, frequency analysis, RANDOMIZATION was proposed. Its author, Carl Friedrich Gauss (1777 - 1855), mistakenly believed that he had created an unbreakable cipher.

    The next notable figure in the history of cryptology whom we should not miss is the Dutchman Auguste Kerckhoffs (1835 - 1903). To him we owe the remarkable "Kerckhoffs rule": the strength of a cipher must be determined ONLY by the secrecy of the key. Considering when this rule was formulated, it can be regarded as a very great discovery (more than half a century before the creation of a systematic theory!). The rule assumes that the encryption ALGORITHM IS NOT SECRET, and hence that an open discussion of its advantages and shortcomings can be held. It thereby moves work on cryptology into the category of OPEN scientific work, allowing discussion, publication, and so on.

    20th century - from intuition to science

    The last name we will mention in pre-scientific cryptology is that of AT&T engineer Gilbert S. Vernam. In 1926 he proposed a truly unbreakable cipher. The idea of the cipher is to choose a new value of z in equation (1) for each successive symbol. In other words, the secret key must be used only once. If such a key is chosen at random, then, as Shannon rigorously proved 23 years later, the cipher is unbreakable. This cipher is the theoretical basis of the so-called "cipher pads" (one-time pads), which came into widespread use during the Second World War. A cipher pad contains a set of single-use keys, selected in turn when encrypting messages. Vernam's proposal, however, does not solve the problem of secret communication: instead of a way to transmit a secret message, one must now find a way to transmit a secret key EQUAL to it IN LENGTH, i.e., containing the same number of characters as the plaintext.

    In 1949, Claude Shannon's paper "Communication Theory of Secrecy Systems" marked the beginning of scientific cryptology. Shannon showed that for a certain "random cipher" the number of ciphertext characters that a cryptanalyst with unlimited resources needs in order to recover the key (and break the cipher) is

    H(Z) / (r log N), (2)

    where H(Z) is the entropy of the key, r is the redundancy of the plaintext, and N is the size of the alphabet.

    From how efficiently archivers compress text files, we know well how great the redundancy of plaintext is: reducing redundancy is precisely their job (and they work only on its most easily removed part). With a plaintext redundancy of about 0.75 and a 56-bit key (as DES assumes), 11 characters of ciphertext suffice for a cryptanalyst with unlimited resources to recover the key.


    Strictly speaking, relation (2) has not been proven for an arbitrary cipher, but is true for known special cases. A remarkable conclusion follows from (2): the work of a cryptanalyst can be made more difficult not only by improving the cryptosystem, but also by reducing the redundancy of the plaintext. Moreover, if plaintext redundancy is reduced to zero, then even a short key will produce a cipher that a cryptanalyst cannot solve.
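Relation (2) is easy to evaluate numerically. The sketch below reproduces the figure of 11 characters quoted above under the assumption (ours, not stated explicitly in the text) of a 96-symbol printable alphabet:

```python
import math

# Relation (2): ciphertext characters needed by an unlimited-resource
# cryptanalyst to recover the key: H(Z) / (r * log2 N).
def unicity_distance(key_entropy_bits: float, redundancy: float,
                     alphabet_size: int) -> float:
    return key_entropy_bits / (redundancy * math.log2(alphabet_size))

# DES-style parameters from the text: 56-bit key, redundancy r = 0.75.
# The alphabet size N = 96 is our assumption, chosen to match the quoted figure.
chars_needed = unicity_distance(56, 0.75, 96)   # approximately 11.3
```

The result falls as the key grows or as the redundancy shrinks, which is exactly the conclusion drawn in the text: compressing the plaintext (lowering r) pushes the required amount of ciphertext up.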

    Before encryption, therefore, the information should undergo statistical coding (compression, archiving). This reduces its volume and redundancy and raises its entropy (the average amount of information per character). Since the compressed text contains no repeated letters or words, decryption (cryptanalysis) becomes harder.
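The effect of statistical coding can be observed directly: compressing redundant text shrinks its volume and raises the average information per byte. A small sketch, with zlib standing in for any archiver:

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Average bits of information per byte (Shannon entropy estimate)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Highly redundant plaintext: the same phrase repeated many times.
text = b"the quick brown fox jumps over the lazy dog " * 50
packed = zlib.compress(text)   # statistical coding before encryption
```

After compression, `packed` is far shorter than `text`, and its per-byte entropy is markedly higher, which is exactly what the paragraph above recommends doing before encryption.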

    Classification of encryption algorithms

    1. Symmetric (secret-key, single-key).
    1.1. Stream ciphers (encryption of a data stream):

    • with a one-time or infinite key (infinite-key cipher);
    • with a finite key (Vernam system);
    • based on a pseudorandom number generator (PRNG).

    1.2. Block ciphers (encryption of data block by block):
    1.2.1. Permutation ciphers (P-boxes);
    1.2.2. Substitution ciphers (S-boxes):

    • monoalphabetic (Caesar code);
    • polyalphabetic (Vigenère cipher, Jefferson cylinder, Wheatstone disc, Enigma);

    1.2.3. composite (Table 1):

    • Lucifer (IBM, USA);
    • DES (Data Encryption Standard, USA);
    • FEAL-1 (Fast Enciphering Algorithm, Japan);
    • IDEA/IPES (International Data Encryption Algorithm / Improved Proposed Encryption Standard, Ascom-Tech AG, Switzerland);
    • B-Crypt (British Telecom, UK);
    • GOST 28147-89 (USSR);
    • Skipjack (USA).

    2. Asymmetric (public-key):

    • Diffie-Hellman (DH);
    • Rivest-Shamir-Adleman (RSA);
    • ElGamal.

    In addition, encryption algorithms are divided into ciphers proper and codes. Ciphers work on individual bits, letters and symbols; codes operate on linguistic elements (syllables, words, phrases).

    Symmetric encryption algorithms

    Symmetric encryption algorithms (or secret key cryptography) are based on the fact that the sender and recipient of information use the same key. This key must be kept secret and transmitted in a manner that cannot be intercepted.

    Information exchange proceeds in three stages:

  • the sender transfers the key to the recipient (in a network with several subscribers, each pair of subscribers must have its own key, different from the keys of the other pairs);
  • the sender encrypts the message with the key and sends it to the recipient;
  • the recipient receives the message and decrypts it with the same key.

    Using a unique key for each day and each communication session further increases the security of the system.

    Stream ciphers

    In stream ciphers, that is, when encrypting a data stream, each bit of the original information is encrypted independently of the others by gamma superposition.

    Gamma superposition is the imposition of a cipher gamma (a random or pseudo-random sequence of ones and zeros) onto the open data according to a certain rule. Typically "exclusive OR" is used, also called modulo-2 addition and implemented in assembly-language programs by the XOR instruction. For decryption, the same gamma is imposed on the encrypted data.
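A minimal sketch of gamma superposition, using Python's standard PRNG as the gamma source (it is not cryptographically secure; it stands in here only to show the XOR structure):

```python
import random

def gamma_stream(key: int, length: int) -> bytes:
    """Pseudo-random gamma from a PRNG seeded with the key.
    random.Random is NOT cryptographically secure -- illustration only."""
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(length))

def xor_gamma(data: bytes, key: int) -> bytes:
    # Modulo-2 addition (XOR) of the gamma onto the data.
    # The same call decrypts, because (x ^ g) ^ g == x.
    return bytes(d ^ g for d, g in zip(data, gamma_stream(key, len(data))))
```

Because XOR is its own inverse, one and the same function encrypts and decrypts, exactly as described above.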

    When a random gamma of the same size as the data is superimposed once, breaking the cipher is impossible (so-called cryptosystems with a one-time or infinite key). Here "infinite" means that the gamma never repeats.

    In some stream ciphers, the key is shorter than the message. Thus, the Vernam telegraph system uses a paper ring containing a gamma. Of course, the strength of such a cipher is not ideal.

    It is clear that exchanging keys the size of the information being encrypted is not always appropriate. More often, therefore, a gamma obtained from a pseudorandom number generator (PRNG) is used. In this case the key is the seed (the initial or initializing value, IV) that starts the PRNG. Each PRNG has a period after which the generated sequence repeats. Obviously, the period of the pseudorandom gamma must exceed the length of the information being encrypted.

    A PRNG is considered secure if, when the algorithm is known but the initial value is not, observing fragments of its output does not allow the missing parts or the whole sequence to be reconstructed.

    When using a PRNG, several options are possible:

  • Bitwise encryption of the data stream. The key is used as the initial value of the PRNG, and the output bit stream is summed modulo 2 with the original information. Such systems lack the error-propagation property.
  • Bitwise encryption of the data stream with feedback from the ciphertext. This system is similar to the previous one, except that the ciphertext is fed back as a parameter into the PRNG. Error propagation is characteristic; the extent of the propagation depends on the structure of the PRNG.
  • Bitwise encryption of the data stream with feedback from the source text. The PRNG is driven by the original information. Unlimited error propagation is characteristic.
  • Bitwise encryption of the data stream with feedback from both the ciphertext and the source text.

    Block ciphers

    With block encryption, information is divided into blocks of a fixed length and encrypted block by block. Block ciphers come in two main types:

  • permutation ciphers (transposition, P-boxes);
  • substitution ciphers (S-boxes).

    Permutation ciphers rearrange the elements of the plain data (bits, letters, symbols) into a new order. There are ciphers of horizontal, vertical and double permutation, grilles, labyrinths, slogans, and others.

    Substitution ciphers replace elements of the plain data with other elements according to a certain rule. There are simple, complex and pairwise substitution ciphers, alphabetic-syllabic encryption, and column-substitution ciphers. Substitution ciphers fall into two groups:

  • monoalphabetic (Caesar code);
  • polyalphabetic (Vigenère cipher, Jefferson cylinder, Wheatstone disc, Enigma).

    In monoalphabetic substitution ciphers, a letter of the original text is replaced by another, predetermined letter. For example, in the Caesar code a letter is replaced by the letter a fixed number of positions away from it in the Latin alphabet. Such a cipher is obviously quite easy to break: count how often each letter appears in the ciphertext and compare the result with the letter frequencies known for each language.
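The frequency attack just described takes only a few lines. The sketch below assumes English-like text and simply aligns the commonest ciphertext letter with 'E', the most frequent letter of English:

```python
from collections import Counter

def break_caesar(ciphertext: str) -> int:
    """Guess the key z by aligning the commonest ciphertext letter with 'E'.
    Reliable only for text long enough for frequencies to show through."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord("E")) % 26
```

For short or atypical texts a real attack would score all 26 shifts against a full frequency table instead of trusting a single letter.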

    In polyalphabetic substitutions, different characters from a certain set are used sequentially to replace some character of the original message in each case of its occurrence. It is clear that this set is not infinite; after a certain number of characters it must be used again. This is the weakness of purely polyalphabetic ciphers.

    In modern cryptographic systems, as a rule, both encryption methods (substitution and permutation) are used together. Such a cipher is called a product (composite) cipher. It is more secure than a cipher using only substitutions or only permutations.

    Block encryption can be implemented in two ways:

  • Without feedback. Several bits (a block) of the plaintext are encrypted at once, and every bit of the plaintext affects every bit of the ciphertext. However, the blocks do not influence one another: two identical blocks of source text yield identical ciphertext. Such algorithms should therefore be used only to encrypt a random sequence of bits (for example, keys). Examples are DES in ECB mode and GOST 28147-89 in simple-replacement mode.
  • With feedback. The feedback is typically organized as follows: the previous ciphertext block is added modulo 2 to the current block. An initialization value serves as the first block of the feedback chain. An error in one bit affects two blocks, the erroneous one and the next. An example is DES in CBC mode.

    A PRNG can also be used for block encryption:

    1. Block-by-block encryption of the data stream. The encryption of successive blocks (substitutions and permutations) depends on a PRNG controlled by the key.
    2. Block-by-block encryption of the data stream with feedback. The PRNG is driven by the ciphertext, the plaintext, or both.
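The feedback (chaining) scheme described above can be sketched with a toy block cipher. Here XOR with the key stands in for DES, purely to show how modulo-2 chaining removes the identical-block problem of the no-feedback mode:

```python
BLOCK = 8

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    # Toy "block cipher": XOR with an 8-byte key (stands in for DES).
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        # previous ciphertext block is added modulo 2 to the current block
        mixed = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_encrypt_block(mixed, key)   # ciphertext feeds the chain
        out += prev
    return out

def cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        mixed = toy_encrypt_block(block, key)  # XOR cipher is its own inverse
        out += bytes(m ^ c for m, c in zip(mixed, prev))
        prev = block
    return out
```

Two identical plaintext blocks now produce different ciphertext blocks, because each block is mixed with the previous ciphertext before encryption; the initialization value plays the role of "block zero" in the chain.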

    The US federal standard DES (Data Encryption Standard) is very widespread, on which the international standard ISO 8372-87 is based. DES has been endorsed by the American National Standards Institute (ANSI) and recommended for use by the American Bankers Association (ABA). DES provides 4 operating modes:

    • ECB (Electronic Codebook): electronic codebook mode;
    • CBC (Cipher Block Chaining): block chaining;
    • CFB (Cipher Feedback): ciphertext feedback;
    • OFB (Output Feedback): output feedback.

    GOST 28147-89 is the domestic (Russian) standard for data encryption. The standard includes three algorithms for encrypting (decrypting) data (the simple-replacement mode, the gamma mode and the gamma mode with feedback) plus a mode for generating an imitation insert.

    The imitation insert can be used to detect accidental or intentional modification of the encrypted information. It can be generated either before encrypting (after decrypting) the whole message, or block by block simultaneously with encryption (decryption). In the latter case the first block of information is encrypted with the first sixteen cycles of the simple-replacement mode, then added modulo 2 to the second block; the sum is again encrypted with the first sixteen cycles, and so on.

    The GOST 28147-89 encryption algorithms have the advantages of the other symmetric-system algorithms and surpass them in capability. Thus GOST 28147-89 (256-bit key, 32 encryption cycles), compared with algorithms such as DES (56-bit key, 16 cycles) and FEAL-1 (64-bit key, 4 cycles), has higher cryptographic strength thanks to its longer key and greater number of encryption cycles.

    It should be noted that, unlike DES, the substitution block in GOST 28147-89 can be changed arbitrarily, so it effectively acts as an additional 512-bit key.

    GOST 28147-89 gamma algorithms (256-bit key, 512-bit substitution block, 64-bit initialization vector) are superior in cryptographic strength to the B-Crypt algorithm (56-bit key, 64-bit initialization vector).

    The advantages of GOST 28147-89 are also the presence of protection against the imposition of false data (generation of imitative insertion) and the same encryption cycle in all four GOST algorithms.

    Block algorithms can also be used to generate gamma. In this case, the gamma is generated in blocks and added block by block modulo 2 with the source text. Examples include B-Crypt, DES in CFB and OFB modes, GOST 28147-89 in gamma and feedback gamma modes.

    Asymmetric encryption algorithms

    Asymmetric encryption algorithms (or public key cryptography) use one key (public) to encrypt information and another (secret) to decrypt it. These keys are different and cannot be derived from each other.

    The information exchange scheme is as follows:

  • the recipient computes a public key and a private key, keeps the private key secret and makes the public key available (tells the sender, a group of network users, or publishes it);
  • the sender encrypts the message with the recipient's public key and sends it to the recipient;
  • the recipient receives the message and decrypts it with his private key.

    RSA

    Protected by US patent No. 4405829. Developed in 1977 at the Massachusetts Institute of Technology (USA) and named after the first letters of its authors' surnames (Rivest, Shamir, Adleman). Its cryptographic strength is based on the computational complexity of factoring a large number into prime factors.
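The principle can be shown with a toy example using the classic small primes 61 and 53; real RSA keys use primes hundreds of digits long, and the security rests precisely on the difficulty of factoring n:

```python
# Toy RSA with classic small primes; real keys use primes hundreds of
# digits long, and security rests on the difficulty of factoring n = p*q.
p, q = 61, 53
n = p * q                  # public modulus, 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: d*e = 1 (mod phi) -> 2753

def rsa_encrypt(m: int) -> int:
    """Encrypt with the public key (e, n)."""
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:
    """Decrypt with the private key (d, n)."""
    return pow(c, d, n)
```

Anyone who could factor n = 3233 back into 61 and 53 could recompute d; for a modulus hundreds of digits long, no known method does this in reasonable time.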

    ElGamal

    Developed in 1985 and named after its author, Taher ElGamal. It is used in the US digital signature standard DSS (Digital Signature Standard). Its cryptographic strength is based on the computational complexity of taking logarithms of integers in finite fields.

    Comparison of symmetric and asymmetric encryption algorithms

    In asymmetric systems, it is necessary to use long keys (512 bits or more). A long key dramatically increases encryption time. In addition, key generation is very time-consuming. But you can distribute keys over unsecured channels.

    Symmetric algorithms use shorter keys, meaning encryption is faster. But in such systems, key distribution is difficult.

    Therefore, when designing a secure system, both symmetric and asymmetric algorithms are often used. Since a public key system allows keys to be distributed in symmetric systems, it is possible to combine asymmetric and symmetric encryption algorithms in a system for transmitting protected information. The first one is used to distribute keys, while the second one is used to actually encrypt the transmitted information.

    Information can be exchanged as follows:

  • the recipient computes a public key and a private key, keeps the private key secret and makes the public key available;
  • the sender encrypts a session key with the recipient's public key and sends it to the recipient over an insecure channel;
  • the recipient receives the session key and decrypts it with his private key;
  • the sender encrypts the message with the session key and forwards it to the recipient;
  • the recipient receives the message and decrypts it.

    It should be noted that in government and military communication systems only symmetric algorithms are used, since there is no strictly mathematical proof of the strength of public-key systems, though the opposite has not been proven either.

    Verifying the authenticity of information. Digital signature

    When transmitting information, the following must be provided together or separately:

    • Confidentiality - an attacker should not be able to find out the content of the transmitted message.
    • Authenticity, which includes two concepts:
    1. integrity - the message must be protected from accidental or intentional modification;
    2. Sender identification (authorship verification) - the recipient must be able to verify who sent the message.

    Encryption can provide confidentiality and, in some systems, integrity.

    The integrity of a message is checked by computing a check function of the message: a number of small fixed length. With high probability, this check function must change even under small changes to the message (deletion, insertion, permutation or reordering of information). The check function goes by different names and is computed in different ways:

  • Message Authentication Code (MAC);
  • Quadratic Congruential Manipulation Detection Code (QCMDC);
  • Manipulation Detection Code (MDC);
  • Message Digest Algorithm (MD5);
  • checksum;
  • Block Check Character (BCC);
  • cyclic redundancy check (CRC);
  • hash function;
  • imitation insert in GOST 28147-89;
  • n-bit Algorithm with Truncation.

    Some encryption algorithm may be used in computing the check function, and the checksum itself may be encrypted.

    A digital signature is widely used (a digital addition to the transmitted information, guaranteeing the integrity of the latter and allowing its authorship to be verified). Digital signature models are known based on symmetric encryption algorithms, but when using public key systems, digital signatures are more convenient.

    To use the RSA algorithm, the message must first be compressed by a hash function (MD5, the Message Digest Algorithm, yields a 128-bit hash H). The message signature S is computed as follows:

    S = H^d mod n

    The signature is sent along with the message.

    The verification process consists of computing the hash function of the message (H') and comparing it with

    H' = S^e mod n

    where H is the hash of the message,

    S is its signature,

    d is the secret key,
    e is the public key.
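A toy sketch of the scheme above: S = H^d mod n to sign, H' = S^e mod n to verify. The tiny key pair is illustrative only, and the MD5 hash is reduced modulo n so that it fits the toy modulus (a real system signs the full-size hash):

```python
import hashlib

# Toy illustration: S = H^d mod n to sign, H' = S^e mod n to verify.
# The tiny key pair is for demonstration; the MD5 hash is reduced modulo n
# so it fits the toy modulus -- a real system signs the full-size hash.
n, e, d = 3233, 17, 2753   # toy RSA key pair

def message_hash(msg: bytes) -> int:
    return int.from_bytes(hashlib.md5(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(message_hash(msg), d, n)        # S = H^d mod n

def verify(msg: bytes, s: int) -> bool:
    return pow(s, e, n) == message_hash(msg)   # compare H' with S^e mod n
```

Only the holder of d can produce a signature that checks out, while anyone knowing the public key (e, n) can verify it.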

    The following standards are devoted to authentication:

    • authentication: ISO 8730-90, ISO/IEC 9594-90 and ITU X.509;
    • integrity: GOST 28147-89, ISO 8731-90;
    • digital signature: ISO 7498, GOST R 34.10-94 (Russia), DSS (Digital Signature Standard, USA).

    ISO is the International Organization for Standardization; ITU is the International Telecommunication Union.

    Implementation of encryption algorithms

    Encryption algorithms are implemented in software or hardware. There are a great variety of purely software implementations of various algorithms. Due to their low cost (some are completely free), as well as the increasing speed of PC processors, ease of operation and reliability, they are very competitive. The Diskreet program from the Norton Utilities package, which implements DES, is widely known.

    One cannot fail to mention the PGP package (Pretty Good Privacy, version 2.1, by Philip Zimmermann), which comprehensively solves practically all the problems of protecting transmitted information. It applies data compression before encryption, powerful key management, symmetric (IDEA) and asymmetric (RSA) encryption algorithms, computation of a check function for the digital signature, and reliable key generation.

    Publications of the magazine "Monitor" with detailed descriptions of various algorithms and corresponding listings give everyone the opportunity to write their own program (or use a ready-made listing).

    Hardware implementation of the algorithms is possible either with specialized chips (crystals are produced for the DH, RSA, DES, Skipjack and GOST 28147-89 algorithms) or with general-purpose components (digital signal processors, DSPs, are promising thanks to their low cost and high performance).

    Among Russian developments, noteworthy are the boards "Krypton" (company "Ankad") and "Grim" (methodology and algorithms of the company "LAN-Crypto", technical development of the Scientific and Production Center "ELiPS").

    "Krypton" are single-board devices that use cryptoprocessors (specialized 32-bit microcomputers, also called "blooming"). Bloomings implement GOST 28147-89 algorithms in hardware; they consist of a computer and RAM for storing keys. Moreover, the cryptoprocessor has three areas for storing keys, which allows you to build multi-level key systems.

    For greater encryption reliability, two cryptoprocessors operate simultaneously, and a 64-bit data block is considered correctly encrypted only if the information at the outputs of both Blooming chips matches. The encryption speed is 250 KB/s.

    In addition to the two Blooming chips, the board contains:

  • a controller for interfacing with the computer bus (except for Krypton-EC, the boards are designed for the ISA bus);
  • the board's BIOS, which interfaces with the computer, self-tests the device and loads keys into the cryptoprocessors;
  • a random number sensor (RNS), built on noise diodes, for generating encryption keys.

    The following types of Krypton boards are produced:

    • "Krypton-EC" is intended for PCs of the EC series 1841-1845;
    • "Krypton-3";
    • "Krypton-4" (overall dimensions have been reduced by moving a number of discrete elements into the base crystals, the exchange speed has been increased thanks to an internal 8-byte buffer);
    • "Krypton-IR" is additionally equipped with an IR controller (smart card, smart card, smart card).

    In the Krypton-EC, Krypton-3 and Krypton-4 devices the keys are stored as a file on a floppy disk. In Krypton-IR the keys are held on the smart card, which makes them difficult to forge or copy.

    The Grim board uses Analog Devices digital signal processors ADSP-2105 and ADSP-2101, giving encryption speeds of 125 and 210 KB/s respectively. The board includes a physical random number generator and ROM holding programs for the initial test, access-rights checking, and key loading and generation. Keys are stored on a non-standard formatted floppy disk. The board implements the GOST 28147-89 and digital signature algorithms.

    To protect information transmitted over communication channels, channel encryption devices are used, which are manufactured in the form of an interface card or a stand-alone module. The encryption speed of various models is from 9600 bps to 35 Mbps.

    In conclusion, we note that information encryption is not a panacea. It should be considered only as one of the methods of information protection and must be used in combination with legislative, organizational and other measures.

    Public key cryptology

    Boris Obolikshto

    It would seem that the impetus given by Shannon should have caused an avalanche of results in scientific cryptology. But that did not happen. Only the rapid development of telecommunications and of remote computer access, together with the imperfection of the existing secret-key cryptosystems, brought to life the next and perhaps most interesting stage of cryptology, which dates from the paper by Whitfield Diffie and Martin E. Hellman, "New Directions in Cryptography", which appeared in November 1976. W. Diffie himself dates the results published in November 1976 to May of that year; thus we have a reason to celebrate the TWENTIETH ANNIVERSARY of public-key cryptology from May to November.

    One of the problems that has remained unresolved in traditional cryptography is the distribution of secret keys. The idea of ​​transmitting a “secret” key over an open channel seems crazy at first glance, but if, abandoning perfect secrecy, we limit ourselves to practical security, then we can come up with a way to exchange keys.

    The first of the popular methods was exponential key exchange. Its essence is as follows:

    • Alice and Bob (using the likeable Alice and Bob as the parties, rather than the abstract "A" and "B", has become a tradition in this field of cryptology) choose random numbers Xa and Xb, respectively.
    • Alice sends Bob Ya = a^Xa (mod q), and Bob sends Alice Yb = a^Xb (mod q).

    Here a is a so-called primitive element of the finite Galois field GF(q); its property remarkable for us is that its powers yield all the non-zero elements of the field. The value used as the secret key is

    K = a^(Xa Xb) (mod q),

    which Alice obtains by raising the number sent by Bob to the power Xa known only to her, and Bob by raising the number received from Alice to the power Xb known only to him. A cryptanalyst is forced to compute the logarithm of at least one of the transmitted numbers.
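The exchange can be traced in a few lines. The toy field below uses the prime q = 2579 with a = 2 taken as a primitive element (an assumption for this textbook-size illustration; real use needs a q hundreds of digits long):

```python
# Exponential key exchange with toy textbook-size parameters.
# q = 2579 is prime; a = 2 is assumed primitive for this illustration.
q = 2579
a = 2

xa, xb = 765, 853            # Alice's and Bob's secret random exponents

ya = pow(a, xa, q)           # Alice sends Bob   Ya = a^Xa mod q
yb = pow(a, xb, q)           # Bob sends Alice   Yb = a^Xb mod q

key_alice = pow(yb, xa, q)   # (a^Xb)^Xa = a^(Xa*Xb) mod q
key_bob = pow(ya, xb, q)     # (a^Xa)^Xb = a^(Xa*Xb) mod q
```

Both sides arrive at the same secret value, while an eavesdropper sees only q, a, Ya and Yb and must take a discrete logarithm to recover either exponent.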

    The strength of exponential key exchange rests on the so-called one-wayness of the exponentiation function: the computational complexity of obtaining Ya from Xa for a q 1000 bits long is on the order of 2000 multiplications of 1000-bit numbers, while the inverse operation would require roughly 10^30 operations. ONE-WAY functions, which exhibit such an asymmetry between the computational complexity of the forward and inverse problems, play a leading role in public-key cryptography.

    Even more interesting is the one-way trapdoor function. The idea is to build a function that can be inverted only by knowing a certain "loophole", the secret key. The function's parameters then serve as a public key, which Alice can send to Bob over an insecure channel; Bob uses the received public key to encrypt (computing the forward function) and sends the result to Alice over the same channel; Alice, knowing the "loophole" (the secret key), easily computes the inverse function, while a cryptanalyst without the secret key is doomed to solve a far harder problem.

    In 1976, R. C. Merkle managed to construct such a function from the knapsack packing problem. The problem itself is one-way: knowing the subset of weights placed in the knapsack, it is easy to compute the total weight, but knowing the weight, it is not easy to determine the subset. A one-dimensional version of the problem was used: a weight vector and the sum of the components of its subvectors. With a "loophole" built in, the result was the so-called Merkle-Hellman knapsack system. The first public-key cryptosystem was working, and Merkle offered $100 to anyone who could break it.
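The "loophole" idea can be illustrated with a superincreasing weight vector, for which the otherwise hard subset-sum problem becomes trivially decodable by a single greedy pass. This is a sketch of the underlying principle only, not the actual Merkle-Hellman system with its modular disguise of the weights:

```python
# A superincreasing vector: each weight exceeds the sum of all previous ones.
# This is the private "loophole" that makes decoding easy for its holder.
weights = [2, 3, 7, 14, 30, 57, 120, 251]

def subset_sum(bits):
    # Forward (easy) direction: sum the weights selected by the message bits.
    return sum(w for w, b in zip(weights, bits) if b)

def recover(total):
    # Inverse direction: greedy decoding from the largest weight down.
    # This works ONLY because the vector is superincreasing; for a general
    # vector, subset-sum is computationally hard.
    bits = []
    for w in reversed(weights):
        if w <= total:
            bits.append(1)
            total -= w
        else:
            bits.append(0)
    return list(reversed(bits)) if total == 0 else None

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert recover(subset_sum(message)) == message
```

In the real system the superincreasing vector is hidden by modular multiplication, and only the disguised, seemingly general vector is published.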

    The award went to Adi Shamir six years later, following his March 1982 publication of an attack on the single-iteration Merkle-Hellman knapsack system. At the Crypto'82 conference, L. Adleman demonstrated the breaking of the knapsack system on an Apple II computer. Note that Shamir did not construct a way to invert the problem and obtain the value of the secret key; he managed to construct a key that is not necessarily equal to the secret one but nevertheless allows the cipher to be broken. Herein lies one of the greatest dangers for public-key cryptography: there is no rigorous proof of the one-wayness of the algorithms used, i.e., no one is guaranteed that a decryption method will not be found that does not require solving the inverse problem, whose high complexity is what allows one to hope for the practical strength of the cipher. It is well enough if a world-renowned scientist (by 1982, A. Shamir was already known as one of the authors of the RSA system) breaks a particular system. But what if an unambitious hacker succeeds first?

    To conclude the drama of the knapsack system, let us mention another wager Merkle offered: $1000 to anyone who could break the improved system with many iterations. That sum, too, had to be paid out. It went to E. Brickell, who in the summer of 1984 broke a system with forty iterations and a hundred transmissions in an hour of Cray-1 time.

    The fate of the RSA system, named after the first letters of the surnames of its authors R. Rivest and the already familiar A. Shamir and L. Adleman, has been far happier. By the way, Alice and Bob owe their very birth to the first systematic exposition of the RSA algorithm. With their "help," the authors in 1977 described a system based on the one-way properties of the factorization function (multiplying is easy, factoring is hard).
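The factorization trapdoor can be sketched with deliberately tiny primes. Real keys use moduli of 2048 bits and padding schemes; this only shows the encrypt/decrypt symmetry around the secret factors:

```python
# Toy RSA: the public key is (n, e); the private exponent d can be computed
# only by whoever knows the factorization n = p * q.
p, q = 61, 53
n = p * q                   # public modulus (3233)
phi = (p - 1) * (q - 1)     # Euler's totient, derivable only from p and q
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

m = 42                      # plaintext encoded as a number smaller than n
c = pow(m, e, n)            # anyone can encrypt with the public key
assert pow(c, d, n) == m    # only the holder of d can decrypt
```

An attacker who could factor n into p and q would recover phi and hence d, which is why the practical strength of RSA rests on the difficulty of factoring.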

    The development of public-key cryptology allowed cryptographic systems to find widespread commercial use quickly. But intensive use of cryptography has its complications. From time to time we learn of trouble in one security system or another. The latest high-profile incident in the world was the breaking of the Kerberos system. Developed in the mid-1980s, this system is quite popular worldwide, and its compromise caused considerable concern among users.

    In the case of Kerberos, the problem lay not in the encryption algorithm but in the way random numbers were obtained, that is, in the implementation of the algorithm. When news spread last October of the random-number-generation flaw that Berkeley students had found in Netscape software, Stephen Laudin discovered a similar problem in Kerberos. Together with Brian Dole, he managed to find a hole in the Kerberos system. The characters in this story are not amateurs. Graduates of Purdue University (Indiana), they collaborated with the COAST (Computer Operations, Audit, and Security Technology) laboratory, which deals professionally with computer security and is led by Prof. Spafford, also the founder of PCERT (Purdue Computer Emergency Response Team), the university's quick-response team for computer emergencies. PCERT, in turn, is a member of the similar international organization FIRST (Forum of Incident Response Teams). As we can see, the sappers found the mine, and this gives hope that users of cryptosystems will not be left defenseless even when flaws are identified.

    The first statement to the press (dated February 16, 1996) was made on behalf of the discoverers by Prof. Spafford. Along with information about the unreliability of the password system and the possibility of breaking it within five minutes, it announced that further technical details would be withheld until the developers had made changes preventing unauthorized access.

    We have not been spared mistakes either. Fortunately, there are professionals in our field who can promptly find and point out the weak spots of a security system. Less than a month ago, P. V. Leskov and V. V. Tatyanin, specialists of the Kyiv company Fintronik LLC, demonstrated the shortcomings of one popular banking security system: opening the ciphertexts took them less than 6 minutes, and an undetected violation of document integrity (bypassing the authentication system) took less than 5 minutes. Here too, reader, we shall have to wait until the developers make the necessary changes. Then we will be able to describe in more detail how and what was done.


    Data encryption is extremely important to protect privacy. In this article, I will discuss the different types and methods of encryption that are used to protect data today.

    Did you know?
    Back in Roman times, encryption was used by Julius Caesar to make letters and messages unreadable to the enemy. It played an important role as a military tactic, especially during wars.

    As the capabilities of the Internet continue to grow, more and more of our business is conducted online. Chief among these activities are Internet banking, online payment, email, and the exchange of private and official messages, all of which involve confidential data and information. If this data falls into the wrong hands, it can harm not only the individual user but the entire online business system.

    To prevent this from happening, a number of network security measures have been developed to protect the transmission of personal data. Chief among these are the processes of encrypting and decrypting data, known together as cryptography. There are three main encryption methods in use in most systems today: hashing, symmetric encryption, and asymmetric encryption. In the following sections, I will discuss each of these types of encryption in more detail.

    Encryption types

    Symmetric encryption

    In symmetric encryption, normal readable data, known as plaintext, is scrambled so that it becomes unreadable. This scrambling is done using a key. Once the data is encrypted, it can be sent securely to the receiver. On the recipient's side, the encrypted data is decoded using the same key that was used for encoding.

    Thus, it is clear that the key is the most important part of symmetric encryption. It must be hidden from outsiders, since anyone who has access to it can decrypt the private data. This is why this type of encryption is also known as secret-key encryption.

    In modern systems, the key is usually a string of data that is derived from a strong password, or from a completely random source. It is fed into symmetric encryption software, which uses it to keep the input data secret. Data scrambling is achieved using a symmetric encryption algorithm, such as Data Encryption Standard (DES), Advanced Encryption Standard (AES), or International Data Encryption Algorithm (IDEA).
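Python's standard library includes none of these ciphers, so the following is a minimal sketch of the symmetric principle only: the same key both scrambles and unscrambles the data. It uses a hash-derived XOR keystream and is not a secure cipher; the function names are invented for illustration:

```python
# Illustration of the symmetric principle: applying the SAME keyed transform
# twice returns the original plaintext. Not a real cipher.
import hashlib
from itertools import count

def keystream(key, length):
    # Derive a pseudo-random byte stream from the key by hashing a counter.
    out = b""
    for i in count():
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor_crypt(key, data):
    # XOR is its own inverse, so this one function both encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret"
ciphertext = xor_crypt(key, b"attack at dawn")
assert xor_crypt(key, ciphertext) == b"attack at dawn"
```

A real system would use AES via a vetted library; the point here is only that sender and receiver must hold the identical key.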

    Restrictions

    The weakest link in this type of encryption is the security of the key, both in terms of storage and transmission to the authenticated user. If a hacker is able to obtain this key, he can easily decrypt the encrypted data, defeating the entire purpose of encryption.

    Another disadvantage is that the software that processes the data cannot work with encrypted data. Therefore, to be able to use this software, the data must first be decoded. If the software itself is compromised, then an attacker can easily obtain the data.

    Asymmetric encryption

    Asymmetric-key encryption works similarly to symmetric-key encryption in that a key is used to encrypt the messages being transmitted. However, instead of the same key, a completely different one is used to decrypt the message.

    The key used for encoding is available to any and all network users. As such it is known as a "public" key. On the other hand, the key used for decryption is kept secret and is intended for private use by the user himself. Hence, it is known as the "private" key. Asymmetric encryption is also known as public key encryption.

    Since with this method the secret key needed to decrypt a message never has to be transmitted and is usually known only to its owner (the receiver), the likelihood that a hacker will be able to decrypt the message is much lower.

    Diffie-Hellman and RSA are examples of algorithms that use public key encryption.

    Restrictions

    Many hackers use man-in-the-middle as a form of attack to bypass this type of encryption. In asymmetric encryption, you are given a public key that is used to securely exchange data with another person or service. However, hackers use network deception to trick you into communicating with them while you are led to believe that you are on a secure line.

    To better understand this type of attack, consider two communicating parties, Sasha and Natasha, and a hacker, Sergei, intent on intercepting their conversation. First, Sasha sends Natasha a message over the network asking for her public key. Sergei intercepts this message, obtains her public key, and uses it to encrypt and forward to Natasha a false message containing his own public key instead of Sasha's.

    Natasha, thinking the message came from Sasha, now encrypts her reply with Sergei's public key and sends it back. Sergei again intercepts this message, decrypts it, modifies it if he wishes, re-encrypts it with the public key that Sasha originally sent, and forwards it to Sasha.

    Thus, when Sasha receives the message, he is led to believe it came from Natasha and remains unaware of the foul play.

    Hashing

    The hashing technique uses an algorithm known as a hash function to generate a special string from the given data, known as a hash. This hash has the following properties:

    • the same data always produces the same hash;
    • it is not feasible to recover the original data from the hash alone;
    • it is not practical to try different combinations of inputs in order to produce the same hash.

    Thus, the main difference between hashing and the other two forms of data encryption is that once the data is encrypted (hashed), it cannot be retrieved back in its original form (decrypted). This fact ensures that even if a hacker gets his hands on the hash, it will be of no use to him, since he will not be able to decrypt the contents of the message.

    Message Digest 5 (MD5) and Secure Hashing Algorithm (SHA) are two widely used hashing algorithms.

    Restrictions

    As mentioned earlier, it is almost impossible to decrypt data from a given hash. However, this is only true if strong hashing is implemented. In the case of a weak implementation of the hashing technique, using enough resources and brute force attacks, a persistent hacker can find data that matches the hash.

    Combination of encryption methods

    As discussed above, each of these three encryption methods suffers from some disadvantages. However, when a combination of these methods is used, they form a secure and highly effective encryption system.

    Most often, the secret-key and public-key techniques are combined and used together. The symmetric (secret-key) method allows fast encryption and decryption, while the public-key method offers a safer and more convenient way to transmit the secret key. This combination of methods is known as a "digital envelope". The PGP email encryption program is based on the digital-envelope technique.
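A digital envelope can be sketched with deliberately tiny, illustrative parameters: a random session key protects the message with a toy XOR scheme, and only that short key is sealed with the recipient's public RSA key. Real envelopes use AES and padded RSA or elliptic-curve key exchange; every name and parameter below is invented for illustration:

```python
# Digital-envelope sketch: symmetric speed for the bulk data, public-key
# convenience for delivering the session key. Toy parameters throughout.
import hashlib
import secrets

# Recipient's toy RSA key pair (tiny primes, illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def xor_crypt(key, data):
    # Toy symmetric layer: XOR against a hash-derived keystream.
    ks = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, ks))

# Sender: pick a random session key (small enough to fit under the modulus),
# encrypt the message with it, and seal the key with the recipient's public key.
session_key = secrets.randbelow(n - 2) + 2
ciphertext = xor_crypt(session_key.to_bytes(2, "big"), b"meet at noon")
wrapped_key = pow(session_key, e, n)

# Recipient: unseal the session key with the private exponent, then decrypt.
recovered = pow(wrapped_key, d, n)
assert xor_crypt(recovered.to_bytes(2, "big"), ciphertext) == b"meet at noon"
```

Only the short session key ever passes through the slow public-key operation; the message itself is handled by the fast symmetric layer.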

    Hashing is used for password verification. If a system stores a hash of the password instead of the password itself, it is more secure, since even if a hacker obtains this hash, he cannot read the password from it. During verification, the system hashes the incoming password and checks whether the result matches what is stored. This way, the actual password is visible only during the brief moments when it is being changed or verified, greatly reducing the likelihood of it falling into the wrong hands.
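The scheme just described can be sketched with Python's standard hashlib. PBKDF2 with a random salt stands in for "strong hashing"; the helper names are invented for illustration:

```python
# Store only (salt, hash) pairs; the password itself is never persisted.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed-table attacks;
    # the high iteration count slows down brute-force guessing.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored):
    # Recompute the hash of the candidate password and compare in
    # constant time to avoid leaking information through timing.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse")
assert verify("correct horse", salt, stored)
assert not verify("wrong guess", salt, stored)
```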

    Hashing is also used to authenticate data using a secret key. A hash is generated from the data together with this key. Only the data and the hash are transmitted; the key itself never is. Thus, if changes are made either to the data or to the hash, they are easily detected.
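Keyed hashing of this kind is exactly what an HMAC provides; a short sketch using Python's standard hmac module:

```python
# Message authentication: sender and receiver share a secret key, and the
# tag proves both the origin and the integrity of the data.
import hashlib
import hmac

key = b"shared secret key"
data = b"transfer 100 to account 42"
tag = hmac.new(key, data, hashlib.sha256).digest()

# The receiver, holding the same key, recomputes the tag over the data.
assert hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest())

# Any change to the data changes the tag, so tampering is detected.
tampered = b"transfer 900 to account 42"
assert not hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest())
```

Without the key, an attacker can neither verify nor forge a valid tag, even though the hash function itself is public.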

    In conclusion, these techniques can be used to efficiently encode data into an unreadable format that can ensure that it remains secure. Most modern systems typically use a combination of these encryption methods along with strong algorithm implementations to improve security. In addition to security, these systems also provide many additional benefits, such as verifying the user's identity and ensuring that the data received cannot be tampered with.





    
