Coursework: Storage media and their development. Material for the curious


Increase in capacity of external storage media

Let us see how the capacity and technology of the most common removable storage media have changed over time.

All removable storage media can be loosely divided into the following groups:

Operating principle | Order of capacity | Media type and capacity
Mechanical (perforation) | Tens of bytes | Punched card, 45 or 80 columns: 45 or 80 bytes (8 of them service bytes)
Mechanical (perforation) | Unlimited (depends on length) | Punched tape
Magnetic | Tens of kilobytes | Magnetic drum: 20 KB–100 KB (latest models up to 1 GB)
Magnetic | Unlimited (depends on length) | Magnetic tape
Magnetic | Hundreds of kilobytes | 8" floppy disk: 80 KB–1.6 MB; 5.25" floppy disk: 110 KB–1.2 MB
Magnetic | Megabytes | 3.5" floppy disk: 720 KB–2.88 MB
Magnetic | Tens to hundreds of megabytes | ZIP disk: 100 MB–250 MB
Semiconductor | Tens of megabytes | Flash memory: 8 MB–128 MB (latest models up to 64 GB)
Optical | Hundreds of megabytes | CD: 640 MB–800 MB (reading laser wavelength 780 nm, infrared)
Optical | Gigabytes | DVD: 4.7 GB (reading laser wavelength 650 nm, red)
Optical | Tens of gigabytes | BR-DVD (BD-ROM) and HD-DVD: 54 GB (30 GB) (reading laser wavelength 405 nm, violet)
Optical | Hundreds of gigabytes | Holographic disc HVD, red laser: 200 GB–1.6 TB (reading laser wavelength 650 nm)
Optical | Terabytes | Holographic disc HVD, violet laser: 3.9 TB (reading laser wavelength 405 nm)
Nano-optical (atomic) | Terabytes, petabytes, exabytes | In development (reading laser wavelength 210 nm, ultraviolet)

Mechanical memory (perforation)

In 1725, Basile Bouchon, a Lyon weaver, invented perforated paper tape for recording programs, to simplify the production of complex patterns on a loom. He glued the tape into a loop and used his invention to program looms.

In 1728, Jean-Baptiste Falcon improved Bouchon's invention, replacing the perforated tape with cards connected in a chain. This made it easy to swap out fragments of the program.

Bouchon-Falcon looms were semi-automatic and required the program to be fed by hand. A tremendous advance in automation was achieved by Joseph-Marie Jacquard, a French inventor and the son of a Lyon weaver. In 1801, he created an automatic loom controlled by punched cards: the presence or absence of holes in the card caused a thread to rise or fall with each stroke of the shuttle, producing a programmed pattern. The Jacquard machine was the first mass-produced industrial device to work automatically according to a given program. It was awarded a medal at the Paris Exhibition, and soon more than 10 thousand such machines were operating in France alone.

Jacquard loom punch cards:

In 1884, Herman Hollerith filed the first patent for data storage on punched cards.

Among the computers of the mid-20th century, the most widespread punched card in the USSR was the 80-column card, shown below. Each column encoded one byte; 8 of the 80 bytes were service bytes. There were also 45-column versions of punched cards.

According to GOST, a punched card had to measure 187.4 mm in length and 82.5 mm in width. Information was entered onto it in a two-dimensional matrix, a kind of table, usually consisting of 12 rows and 40 or 80 columns.

The processing speed of machine-read punched cards reached 2000 cards per minute. Information was read using electromechanical sensors or photocells. Abroad, punched cards with 90, 40 and 21 columns (with 6, 12 and 10 rows, respectively) were also used.
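As a rough illustration, the reader throughput implied by the figures above (an 80-column card, one byte per column, 8 service bytes, 2000 cards per minute) can be computed directly; the constants below simply restate the text:

```python
# Throughput of a punched-card reader, from the figures in the text:
# 80 columns per card, 1 byte per column, 8 service bytes per card,
# and a reading speed of 2000 cards per minute.
CARD_COLUMNS = 80
SERVICE_BYTES = 8
CARDS_PER_MINUTE = 2000

payload_per_card = CARD_COLUMNS - SERVICE_BYTES           # usable bytes per card
bytes_per_second = CARD_COLUMNS * CARDS_PER_MINUTE / 60   # raw read rate
payload_per_second = payload_per_card * CARDS_PER_MINUTE / 60

print(payload_per_card)           # 72
print(round(bytes_per_second))    # 2667
print(round(payload_per_second))  # 2400
```

Even at full speed, then, a card reader moved well under 3 KB of data per second.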

Magnetic memory

The idea that magnetization could be used to record sound was first expressed by Oberlin Smith in 1888. The device Smith described had all the distinctive features of a tape recorder: a magnetic storage medium, a mechanism for feeding it, and a magnetic head.

In 1898, the Dane Valdemar Poulsen created and patented the telegraphone, a device for the magnetic recording of sound. It consisted of a copper cylinder wrapped with thin steel wire and an electromagnet that moved along it. In the 1930s in Germany, the idea arose of using, instead of wire, a tape coated with sprayed-on magnetic powder.

In 1932, the Austrian engineer Gustav Tauschek invented "drum memory", a cylindrical magnetic store.

In 1952, magnetic tape was first used as an external storage medium, in the IBM 701 computer; the tape drive was called the Model 726. The first tape reel could hold 1.4 MB of data. The recording density was 100 characters per inch on seven tracks, and at a tape speed of 75 inches per second this gave a reading speed of 7500 bytes per second. The tape itself was developed by 3M (later Imation). Magnetic drums came into use at around the same time.
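The quoted figures can be cross-checked against one another: linear throughput equals recording density times tape speed, so a reading speed of 7500 bytes/s at 75 inches per second implies a density of 100 characters per inch:

```python
# Cross-check of the Model 726 figures: density = throughput / tape speed.
THROUGHPUT = 7500      # bytes per second, as quoted in the text
TAPE_SPEED_IPS = 75    # inches per second, as quoted in the text

density_bpi = THROUGHPUT / TAPE_SPEED_IPS  # implied characters per inch
print(density_bpi)  # 100.0
```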

In 1962, IBM released the first external storage devices with removable disks.

In 1967, IBM invented the first floppy disk. IBM turned to floppy disks after creating the first hard disk in 1956: on September 13, 1956, the company began shipping the first random-access hard drive, called RAMAC (Random Access Method of Accounting and Control). RAMAC was the size of a large cabinet and weighed over a ton; its capacity was 5 MB. The drive contained 50 disks 24 inches (61 cm) in diameter, rotating at 1200 rpm. The read/write heads were moved to each disk in turn by a servo drive. Average access time was 0.6 seconds, and the data transfer rate could reach 9 KB/s. The platters were coated with iron oxide. The prototype of the first floppy disk was a bare disk without a protective envelope. After numerous improvements, in 1971 IBM introduced the 8-inch floppy disk: a flexible plastic disk coated with iron oxide and placed in a cardboard envelope.
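Using only the figures quoted above (5 MB capacity, 9 KB/s peak transfer rate), one can estimate how long a full read of RAMAC would take; treating MB and KB as binary units is an assumption here:

```python
# Time to stream the whole RAMAC drive at its quoted peak rate.
# Binary MB/KB units are an assumption, not stated in the text.
CAPACITY_BYTES = 5 * 1024 * 1024   # 5 MB
PEAK_RATE = 9 * 1024               # 9 KB/s

full_read_seconds = CAPACITY_BYTES / PEAK_RATE
print(round(full_read_seconds))          # 569 seconds
print(round(full_read_seconds / 60, 1))  # about 9.5 minutes
```

Reading the entire one-ton cabinet thus took under ten minutes, remarkable for 1956.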

The photo below shows floppy disks without protective envelopes:

A great variety of all kinds of magnetic and magneto-optical disks have appeared, which I will not list.

Semiconductor memory

In 1984, flash memory (Flash Erase EEPROM) appeared. The first version was developed by Toshiba; Intel introduced a similar solution only in 1988. The main difference between flash and its predecessors was the method of erasing: data could be reset either in some minimum unit (most often a block of 256 or 512 bytes) or by clearing the entire chip at once.
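The erase behaviour described here, where writing is only possible into erased cells and erasure happens a whole block at a time, can be sketched in a toy model. The 512-byte block size is one of the sizes mentioned above; the class and its method names are illustrative, not any real driver API:

```python
# Toy model of flash erase semantics: programming can only clear bits
# (1 -> 0), so changing data back requires erasing the whole block.
BLOCK_SIZE = 512

class FlashBlock:
    def __init__(self):
        # Erased state: every cell holds all ones (0xFF).
        self.cells = bytearray([0xFF] * BLOCK_SIZE)

    def program(self, offset, data):
        # Writing ANDs new data into the cells: bits can go 1 -> 0 only.
        for i, byte in enumerate(data):
            self.cells[offset + i] &= byte

    def erase(self):
        # Erasure resets the entire block at once, never a single byte.
        self.cells = bytearray([0xFF] * BLOCK_SIZE)

block = FlashBlock()
block.program(0, b"\xf0")
block.program(0, b"\x0f")   # overlapping write: only zero-bits accumulate
print(hex(block.cells[0]))  # 0x0
block.erase()
print(hex(block.cells[0]))  # 0xff
```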

The first flash memory drives to hit the market were ATA Flash cards, manufactured in the standard PC Card form factor with a built-in ATA controller, so that in operation the card emulates an ordinary hard drive. There are three types of PC Card ATA (I, II, III), differing in thickness (3.3, 5.0 and 10.5 mm, respectively). All types are backward compatible: a thinner card can always be used in a thicker slot, since the connector thickness is the same 3.3 mm for all types. Type I cards are the most widely used. The cards operate at 3.3 V and 5 V and come in capacities up to 2 GB. Because of their large physical size, flash cards of this standard never became widespread, and today they are practically unused.

USB flash memory (a USB drive, or "flash drive"), used instead of floppy disks to transfer information between computers, is a completely new type of flash storage that appeared on the market only in 2001. A USB drive resembles an oblong keychain consisting of two halves: a protective cap and the drive itself with a USB connector (one or two flash memory chips plus a USB controller).

Optical memory

In 1972, Philips first demonstrated a device in which information was read optically from a transparent plastic disc. The new medium could hold a 5- to 7-minute video clip or a high-quality stereo sound recording lasting 70 minutes. Recording and reading were carried out in analog form.

In 1978, the same Philips created a digital optical sound-recording system with the modern CD as its storage medium.

In 1981, Philips, together with Sony, introduced a modified digital optical audio-recording system whose parameters became a de facto world standard and were approved by the International Electrotechnical Commission (IEC) in 1982. These standard parameters are: disc diameter 120 mm; recording as a continuous spiral track starting at the center of the disc; track width 1 µm; spiral pitch 1.6 µm; recording at a constant linear speed of 1.2...1.4 m/s; surface recording density 106 Mbit/cm²; reading speed 2 Mbit/s; EFM modulation; error correction using a doubly interleaved Reed-Solomon code. Information is recorded frame by frame. The CD turned out to be so successful and capacious that the makers of personal computers almost immediately took notice of it, and in 1986 the first CD-ROM drives began to be built into PCs.
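The listed parameters are roughly self-consistent: dividing the recordable area by the 1.6 µm spiral pitch gives the total track length, and dividing that by the linear speed gives the playing time. The inner and outer radii of the data area (25 mm and 58 mm) are standard values assumed here, not given in the text:

```python
# Sanity check of the CD parameters: track length = recordable area / pitch,
# playing time = track length / linear speed.
import math

PITCH = 1.6e-6               # m, spiral pitch (from the text)
R_IN, R_OUT = 0.025, 0.058   # m, assumed data-area radii
LINEAR_SPEED = 1.3           # m/s, middle of the quoted 1.2...1.4 m/s range

area = math.pi * (R_OUT**2 - R_IN**2)   # recordable annulus, m^2
track_length = area / PITCH             # total spiral length, m
play_minutes = track_length / LINEAR_SPEED / 60

print(round(track_length))  # 5378 m of track
print(round(play_minutes))  # 69 minutes, close to the 74-minute standard
```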

For the user, optical discs are a cheap though less compact alternative to flash memory. The still-widespread CD-RW, written with an infrared semiconductor laser (laser diode), is currently being displaced by DVD-RW, written with a red semiconductor laser; the DVD format was proposed back in 1995.

The first HD-DVD and BR-DVD devices, using a violet semiconductor laser with a wavelength of 405 nm, are also now appearing.

Hybrid drives that can record in several formats at once are also being developed. In addition, holographic optical discs (HVD), which store pages of information in volumetric holograms, are expected to appear soon. I have already discussed them in the corresponding article.

There are also more distant prospects for the development of the optical recording method. For example, the atomic holographic recording described on my website in the corresponding article.

Development of optical information storage technology

The table below provides comparative characteristics of drives and optical discs of four common formats:

The wavelength of semiconductor lasers has been steadily shortening, which makes it possible to record information more densely on optical discs, and the trend is toward the ultraviolet range. Thus, on May 17, 2006, Japanese researchers at NTT Basic Research Laboratories created an ultraviolet LED with a wavelength of about 210 nm, the shortest wavelength at which light still propagates through air. This is the first step toward UV laser diodes.
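The reason shorter wavelengths matter is the diffraction limit: the focused spot area scales roughly with the square of the wavelength, so areal density grows roughly as 1/λ². A back-of-the-envelope comparison against the 780 nm CD laser (ignoring differences in numerical aperture):

```python
# Areal density scales roughly as 1/wavelength^2 (diffraction limit),
# so each shorter-wavelength laser packs more data onto the same disc.
CD_WAVELENGTH = 780  # nm, infrared (CD)

gains = {}
for name, wavelength in [("DVD", 650), ("BD/HD DVD", 405), ("UV (projected)", 210)]:
    gains[name] = (CD_WAVELENGTH / wavelength) ** 2
    print(f"{name}: ~{gains[name]:.1f}x the areal density of CD")
```

This simple scaling already explains much of the CD-to-DVD-to-Blu-ray capacity progression; the rest comes from tighter track pitch, better optics and coding.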

A schematic diagram of the wavelengths of semiconductor lasers used in various read-write devices can be seen in the following figure:

Thus, in the near future we can expect holographic discs on which information will be recorded by a laser-diode beam with a wavelength of about 210 nm.

The storage media of the 20th century also include electronic recording media.

They have a significant advantage over paper (sheets, newspapers, magazines) in volume and unit cost. For storing and delivering short-lived (rather than archival) information their advantage is overwhelming; they also offer substantial possibilities for presenting information in a form convenient for the consumer (formatting, sorting).

Their disadvantages are the small screen size (or considerable weight) and fragility of the reading devices, and dependence on a power supply.

Electronic media are now actively displacing paper in all areas of life, which yields significant savings in wood. Their drawback is that each type and format of medium requires a corresponding reading device.

Types of material storage media from the 20th century to the present

Wire recorder. At the beginning of the 20th century, sound-recording technology continued to improve, and the magnetic recorder appeared (Fig. 10). In 1900 the public first saw a device in which sound was recorded by magnetizing sections of steel wire. At that time, an hour of recording required 7 kilometers of wire weighing about 200 kg.

Punched cards. By the middle of the 20th century, punched cards were in wide use (Fig. 11). The first computing machines of the 1920s-1950s still had much in common with antique mechanisms. Storage media of those days knew nothing of "convenience" or "high recording density". Data were loaded from punched cards, cardboard cards with holes punched in them. Information was recorded and read according to particular schemes, but at its base lay a binary code: a hole means 1, no hole means 0.

HDD

The hard drive was next to enter the arena (Fig. 12). This happened in 1956, when IBM began selling the first disk storage system, the 305 RAMAC. This engineering miracle consisted of 50 disks 60 cm in diameter and weighed about a ton. Its capacity, a full 5 MB, was simply phenomenal at the time.

The main advantage of the new device was its high speed: in RAMAC the read/write head moved freely over the surface of the disk, so data were written and retrieved much faster than with magnetic drums.

In the late 1960s, IBM released a high-speed drive with two 30 MB disks. A capacity of 60 MB was then more than sufficient, and drive manufacturers set about shrinking their models. By the early 1980s, hard drives had shrunk to the size of today's 5.25-inch drives, and the price had dropped to $2,000 for a 10 MB drive. By 1991 the maximum capacity had grown to 100 MB, by 1997 to 10 GB, and today the largest hard drives hold about 1 TB.
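The milestones quoted above trace a steady exponential: from 10 MB around 1980 to 10 GB in 1997 is a factor of 1000 in 17 years, which corresponds to capacity doubling roughly every 1.7 years:

```python
# Implied capacity doubling time from the 10 MB (c. 1980) -> 10 GB (1997)
# milestones quoted in the text: a 1000x increase over 17 years.
import math

growth_factor = 1000
years = 1997 - 1980

doubling_time = years / math.log2(growth_factor)
print(round(doubling_time, 1))  # capacity doubled roughly every 1.7 years
```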

CD

In the mid-1970s, a number of large companies began developing a fundamentally new type of storage: optical media. Philips and Sony achieved outstanding success in this field; the result of their intensive work was the CD (Compact Disc) standard (Fig. 13), first demonstrated in 1980.

CDs and players for them went on sale in 1982. Thanks to the low cost of the media, the CD format quickly gained popularity, although at first CDs were used only to store audio (up to 74 minutes). To adapt their invention to arbitrary data, Philips and Sony created the CD-ROM (Compact Disc Read Only Memory) standard in 1984. As a result, a single CD gained the ability to store up to 650 MB of information, a huge figure at the time.

Over time, media capacity grew to 700 MB (or 80 minutes of audio). In 1988, Taiyo Yuden announced the recordable CD-R (Compact Disc Recordable) format.

In 1997 the CD-RW format appeared, allowing data on a disc to be rewritten repeatedly. In 1996 the DVD (Digital Versatile Disc) format had arrived to replace the CD. Essentially it is the same CD but with increased recording density, achieved by shrinking the pits and changing the laser type. In addition, a DVD can carry two working layers on one disc: a single-layer disc holds 4.7 GB, a dual-layer disc 8.5 GB. Special drives were, of course, released to work with DVDs.

In 1997, the DVD format was supplemented with DVD-R and DVD-RW discs. Licensing this technology was very expensive, so a number of companies united in the so-called "DVD+RW Alliance" and in 2002 released discs in the DVD+R and DVD+RW standards. Many older DVD drives refused to work with the new type of disc, but the "impostors" nevertheless managed to gain popularity. Today DVD-R(W) and DVD+R(W) coexist peacefully, and modern drives support both formats.

Flash memory. The first version of flash memory (Flash Erase EEPROM) was developed in 1984 by Toshiba; four years later a similar storage solution was presented by Intel. Drives based on flash memory are called solid-state drives because they have no moving parts, which makes flash memory more reliable than other media.

Standard operating overloads are 15 g, and short-term overloads can reach 2000 g; in theory, a card should work even under the highest space-flight overloads and survive a drop from a height of three meters. Moreover, under such conditions, operation of the card is claimed for up to 100 years.

Erasure on these cards happens by regions: you cannot change a single bit or byte without rewriting the whole region. Data can be reset either in some minimum unit, for example 256 or 512 bytes, or entirely. The first flash drives were ATA Flash cards, manufactured as PC Cards with a built-in ATA controller. More and more flash card standards then began to appear, such as CompactFlash Type I (CF I) and CompactFlash Type II (CF II), released in 1994 by SanDisk as a modification of the PC Card.

In 1995, Toshiba developed the SmartMedia Card (SMC), which has no built-in controller.

In 1997, Infineon Technologies (a division of Siemens) created the MultiMediaCard (MMC). Even smaller than the cards discussed above and weighing only 1.5 g, it was intended for portable devices. Later, Panasonic (Matsushita Electric), together with SanDisk and Toshiba, developed the Secure Digital (SD) standard, whose cards are equipped with protection against illegal copying.

In 2001, USB flash drives appeared (Fig. 14): a protective cap and the drive itself with a USB connector (inside are one or two flash memory chips and a USB controller), equipped with protection against illegal copying. Technology does not stand still. In optical storage, great prospects await AO-DVD (Articulated Optical Digital Versatile Disc) discs, on which work is in full swing at Iomega. The development rests on the idea of using nanostructures: regions of the disc smaller than the wavelength of the laser light, which can be tilted at different angles. Information is read by analyzing how the reflected beam is distributed. In theory, the capacity of an AO-DVD disc could exceed 800 GB.

Development of holographic memory has been under way for quite some time. Optware has achieved the greatest success here, having already shown the public prototype discs in the HVD (Holographic Versatile Disc) format. It is quite possible that in a few years they will replace Blu-ray and HD DVD. A holographic disc consists of several reflective layers of different types, and two lasers are used to read them. Without going into technical details, the theoretical capacity of an HVD can reach 3.9 TB.

Very soon, flash drives may be displaced by PRAM memory. It does not promise incredible storage volumes, but offers increased performance instead. Another promising technology, FeRAM (Ferroelectric Random Access Memory), is still in early development. It is based on using ferroelectric capacitors as memory cells and water molecules to insulate those cells. The recording density of such a drive could be raised to several thousand terabytes per square centimeter.

Some technologies will never become widespread and will be forgotten. One thing, however, is clear: the capacity and speed of storage media keep growing faster, and no slowdown in their development is in sight.

Thus, methods of documenting information and forms of transmitting information are being modernized and becoming more convenient to use.

Today, almost everyone heading to work, to school, or just out on errands carries in a pocket a USB flash drive or a small memory card holding photographs of children, family and relatives, necessary documents and materials, a favorite playlist, and so on.

You cannot put a rock painting in your pocket to look at it, yet those information carriers, too, are part of the universal human heritage.


20th century storage media

The technology of recording information on magnetic media appeared relatively recently, around the middle of the 20th century (the 1940s-50s). Several decades later, in the 1960s and 70s, it became widespread throughout the world.
Magnetic tape consists of a strip of dense material onto which a layer of ferromagnetic material is deposited; it is this layer that "remembers" the information. The recording process resembles recording on vinyl, except that instead of a cutting stylus a magnetic head with an induction coil is used: the current fed to the head drives its electromagnet. The magnetic field of the electromagnet varies in time with the sound vibrations, and under its influence the small magnetic particles (domains) on the tape's surface line up in a particular order. On playback, the process runs in reverse: the magnetized tape induces electrical signals in the magnetic head, which, after amplification, go on to a loudspeaker.
A compact cassette (audio cassette, or simply cassette) is a magnetic-tape storage medium that in the second half of the 20th century was the common medium for sound recording; it was used to record both digital and audio information. Philips first introduced the compact cassette in 1964. Thanks to its relative cheapness, it was the most popular recorded-audio medium for a long time (from the early 1970s to the 1990s), but since the 1990s it has been displaced by compact discs.
Nowadays there are many different types of magnetic media in the world: floppy disks for computers, audio and video cassettes, reel-to-reel tapes, and so on. But new laws of physics are gradually being discovered, and with them new possibilities for recording information. Just a couple of decades ago, many information carriers appeared based on a new technology: reading information with lenses and a laser beam.
The development of material media follows a continuous search for objects of high durability and large information capacity with minimal physical dimensions.

Since the 1980s, optical (laser) discs have become increasingly widespread. These are plastic or aluminum discs designed for recording and reproducing information with a laser beam.

Based on application technology, CDs are divided into 3 main classes:

1. Discs that allow signals to be recorded once and played back multiple times without the possibility of erasing them (CD-R)

2. Reversible optical discs that allow multiple recording, playback and erasing of signals (CD-RW)

3. Digital versatile discs (DVD) with large capacity (up to 17 GB).

Working with information today is unthinkable without a computer: it was originally created as a means of processing information, and only later did it take on many other functions: storing, transforming, creating and exchanging information.

A computer needs a device with which to store information, and then a storage medium on which information can be carried from place to place. Some of them:

1. Punched card reader: stored programs and data sets on punched cards, cardboard cards with holes punched in a certain sequence.

2. Magnetic tape drive (streamer): based on a tape mechanism and cassettes with magnetic tape.

3. Floppy disk drive (FDD): uses flexible magnetic disks, floppy disks, of 5.25 or 3.5 inches, as the storage medium.

4. Hard disk drive (HDD): the logical continuation of magnetic-storage technology.

5. CDs and DVDs.

6. Flash memory.



"May you live in an era of change" is a laconic curse quite understandable to anyone, say, over thirty. The current stage of human development has made us unwitting witnesses to a unique "era of change". It is not only the scale of modern scientific progress that matters here: in terms of significance for civilization, the transition from stone to copper tools was obviously far more important than yet another doubling of processor performance, however technologically advanced the new chip may be. But the enormous, ever-increasing speed of technological change is simply staggering. If a hundred years ago every self-respecting gentleman had to keep abreast of every "novelty" in science and technology so as not to look a fool in the eyes of those around him, today, given the volume and pace at which such novelties appear, keeping track of them all is simply impossible; the question is no longer even posed that way. This recently unimaginable inflation of technologies, and of the human capabilities tied to them, has effectively killed a wonderful literary genre, "technical fiction". There is no longer any need for it: the future has come closer than ever, and a planned story about a "wonder technology" risks reaching the reader later than something similar rolls off a research institute's production line.

The progress of human technical thought has always been reflected most quickly in information technology. Methods of collecting, storing, systematizing and distributing information run like a thread through the entire history of mankind. Breakthroughs, whether in the technical or the human sciences, were one way or another reflected in IT. The civilizational path humanity has traversed is a series of successive steps to improve the methods of storing and transmitting data. In this article we will try to examine and analyze the main stages in the development of information carriers and to compare them, starting from the most primitive, clay tablets, up to the latest successes in creating a machine-brain interface.

The task is no joke, the intrigued reader will say: how can one, while maintaining even basic correctness, compare such substantially different technologies of past and present? What helps resolve this is the fact that the way people perceive information has not really changed: the forms of recording and of reading information, through sounds, images and coded symbols (writing), remain the same. In many ways this fact becomes, so to speak, the common denominator that will allow us to make qualitative comparisons.

Methodology

To begin, it is worth recalling the truisms we will be operating with. The elementary unit of a binary system is the "bit", while the minimum unit of data storage and processing in a computer is the "byte", which in its standard form contains 8 bits. The more familiar megabyte corresponds to 1 MB = 1024 KB = 1,048,576 bytes.

These units are at present universal measures of the volume of digital data on any medium, so they will be convenient in what follows. Their universality lies in the fact that a group of bits, in essence a set of 1/0 values, can describe any material phenomenon and thereby digitize it. Whether it is an elaborate font, a picture, or a melody, all of these things consist of individual components, each of which can be assigned its own unique digital code. Understanding this basic principle lets us move forward.
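The unit relationships just stated can be written out as a quick check (binary prefixes, as in the text):

```python
# The unit relationships from the text, with binary prefixes.
BITS_PER_BYTE = 8
KB = 1024        # bytes in a kilobyte
MB = 1024 * KB   # bytes in a megabyte

print(MB)                  # 1048576 bytes
print(MB * BITS_PER_BYTE)  # 8388608 bits
```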

The difficult, analog childhood of civilization

The evolutionary development of our species threw people into the embrace of an analog perception of the surrounding world, which largely predetermined the course of our technological development.

To the modern eye, the technologies that arose at the very dawn of humanity look primitive, and to someone not versed in the details, that is exactly how the entire pre-digital existence of humanity may appear. But is that so? Was "childhood" really that difficult? Looking into the question, we find very simple technologies of storing and processing information at the stage of their emergence. The first information carrier of its kind created by man was the portable flat object with images applied to it. Tablets and parchments made it possible not only to preserve information but to process it more efficiently than ever before. At this stage, the ability to concentrate large amounts of information in specially designated places, repositories where it was systematized and carefully guarded, became the main impetus for the development of all mankind.

The first known data centers, as we would now call them, until recently known as libraries, arose in the Middle East, between the Nile and the Euphrates, around the 2nd millennium BC. All this time, the format of the information carrier itself largely determined the ways of interacting with it. And it matters little whether it is a clay tablet, a papyrus scroll, or a standard A4 sheet of paper: all these millennia are united by the analog method of entering data onto the medium and reading it back.

The period in which the analog mode of interaction with information dominated stretched successfully into the present day, giving way to the digital format only very recently, already in the 21st century.

Having outlined the approximate temporal and semantic boundaries of the analog stage of our civilization, we can now return to the question posed at the beginning of this section: just how capable were the methods of data storage that we had and used until very recently, knowing nothing of iPads, flash drives and optical discs?

Let's do the calculation

If we set aside the final, declining stage of analog storage technologies, which has lasted for the past 30 years, we can note with some sadness that the technologies themselves went largely unchanged for thousands of years. A real breakthrough in this area came relatively recently, at the end of the 19th century, but more on that below. Until the middle of that century, two main methods of recording data could be distinguished: writing and painting. The essential difference between them, entirely regardless of the medium on which recording takes place, lies in the logic by which information is registered.
Painting
Painting seems to be the simplest way of transmitting data, requiring no additional knowledge either at the stage of creating the data or at the stage of using it; it is, in effect, the format natively perceived by a person. The more accurately the light reflected from the surfaces of surrounding objects onto the retina of the artist's eye is transferred to the surface of the canvas, the more informative the image will be. Imperfections in the technique and materials used by the image's creator are the noise that will subsequently interfere with accurately reading the information recorded in this way.

How informative is an image? What quantitative amount of information does a drawing carry? At this stage of understanding the process of transferring information graphically, we can finally dive into the first calculations, and a basic computer science course will come to our aid.

Any raster image is discrete: it is just a set of points. Knowing this property, we can translate the displayed information it carries into units we understand. Since the presence or absence of a contrasting point is in fact the simplest binary code, 1 / 0, each point carries 1 bit of information. In turn, an image of, say, 100 × 100 points will contain:

V = K * I = 100 × 100 × 1 bit = 10,000 bits / 8 = 1,250 bytes / 1024 ≈ 1.22 KB

But let us not forget that the above calculation is correct only for a monochrome image. In the case of the much more commonly used color images, the volume of transmitted information naturally increases significantly. If we take as sufficient a 24-bit ("photographic quality") color depth, which, let me remind you, supports 16,777,216 colors, then for the same number of points we get a much larger amount of data:

V = K * I = 100 × 100 × 24 bits = 240,000 bits / 8 = 30,000 bytes / 1024 ≈ 29.30 KB
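Both figures can be checked with a tiny script; the function name here is mine, purely for illustration:

```python
# Raw (uncompressed) data volume of a raster image: V = width * height * depth.
def image_volume_kb(width_px, height_px, bits_per_pixel):
    """Size of an uncompressed raster image in kilobytes."""
    bits = width_px * height_px * bits_per_pixel
    return bits / 8 / 1024  # bits -> bytes -> kilobytes

mono = image_volume_kb(100, 100, 1)    # 1-bit monochrome image, ~1.22 KB
color = image_volume_kb(100, 100, 24)  # 24-bit "photographic" color, ~29.30 KB

print(f"100x100 monochrome: {mono:.2f} KB")
print(f"100x100 24-bit color: {color:.2f} KB")
```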

As we know, a point has no size, so in theory any area allocated for an image could carry an infinitely large amount of information. In practice there are quite definite limits, and accordingly the volume of data can be determined.

Numerous studies have found that a person with average visual acuity, from a distance comfortable for reading (30 cm), can distinguish about 188 lines per centimeter, which roughly corresponds to the standard 600 dpi setting for scanning images with household scanners. Therefore, from one square centimeter of a surface, without additional devices, the average person can read 188 × 188 points, which is equivalent to:

For a monochrome image:
Vm = K * I = 188 × 188 × 1 bit = 35,344 bits / 8 = 4,418 bytes / 1024 ≈ 4.31 KB

For photographic quality images:
Vc = K * I = 188 × 188 × 24 bits = 848,256 bits / 8 = 106,032 bytes / 1024 ≈ 103.55 KB

For greater clarity, based on these results, we can easily establish how much information the familiar A4 sheet of paper, measuring 29.7 × 21 cm, carries:

VA4 = L1 × L2 × Vm = 29.7 cm × 21 cm × 4.31 KB = 2,688.15 KB / 1024 ≈ 2.62 MB (monochrome image)

VA4 = L1 × L2 × Vc = 29.7 cm × 21 cm × 103.55 KB = 64,584.14 KB / 1024 ≈ 63.07 MB (color image)
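The whole chain, from per-centimeter density to a full A4 sheet, can be sketched as below (function names are mine; the tiny discrepancy with the figures above comes from rounding the density to 4.31 KB before multiplying):

```python
def density_kb_per_cm2(lines_per_cm, bits_per_pixel):
    """KB of image data readable from one square centimeter."""
    return lines_per_cm ** 2 * bits_per_pixel / 8 / 1024

def sheet_volume_mb(width_cm, height_cm, lines_per_cm, bits_per_pixel):
    """Digital equivalent, in MB, of an image covering a whole sheet."""
    area_cm2 = width_cm * height_cm
    return area_cm2 * density_kb_per_cm2(lines_per_cm, bits_per_pixel) / 1024

mono_a4 = sheet_volume_mb(29.7, 21, 188, 1)    # ~2.63 MB, monochrome
color_a4 = sheet_volume_mb(29.7, 21, 188, 24)  # ~63.07 MB, 24-bit color
```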

Writing
If with the visual arts the "picture" is more or less clear, then with writing everything is not so simple. The obvious differences in how text and drawings transmit information dictate a different approach to determining their information content. Unlike an image, writing is a standardized, coded transmission of data. Without knowing the code of the words embedded in the writing and of the letters that form them, the information load of, say, Sumerian cuneiform is essentially zero for most of us, whereas ancient images on the ruins of Babylon, for example, will be perceived quite correctly even by a person completely ignorant of the intricacies of the ancient world. It becomes quite obvious that the information content of a text depends greatly on whose hands it falls into and how it is deciphered by a specific person.

Nevertheless, even under such circumstances, which somewhat blur the rigor of our approach, we can quite unambiguously calculate the amount of information that was placed as text on various kinds of flat surfaces.
Resorting to the already familiar binary coding and the standard byte, written text, which can be thought of as a series of letters forming words and sentences, can easily be reduced to the digital form of 1 / 0.

The familiar 8-bit byte can take up to 256 different combinations, which should be enough for a digital description of any existing alphabet, as well as digits and punctuation marks. This suggests the conclusion that any standard alphabetic character applied to a surface occupies 1 byte in digital equivalent.

The situation is a little different with hieroglyphs, which have also been in wide use for several thousand years. By replacing an entire word with a single character, this encoding clearly uses the space allotted to it far more effectively in terms of information load than alphabet-based writing. At the same time, the number of unique characters, each of which needs its own non-repeating combination of 1s and 0s, is many times greater. In the most common hieroglyphic languages, Chinese and Japanese, no more than 50,000 unique characters are actually in use according to statistics; in Japanese even fewer, as the country's Ministry of Education has designated only 1,850 characters for everyday use. In any case, the 256 combinations that fit into one byte are no longer enough. One byte is good, but two are better, as the modified folk wisdom goes: two bytes give us 65,536 combinations, which in principle is sufficient to convert an actively used language into digital form, assigning two bytes to the overwhelming majority of hieroglyphs.

Current practice tells us that about 1,800 legible, unique characters can be placed on a standard A4 sheet. With simple arithmetic, we can determine how much information, in digital equivalent, one standard typewritten sheet of alphabetic writing and of the more informative hieroglyphic writing will carry:

V = n * I = 1,800 × 1 byte = 1,800 bytes / 1024 ≈ 1.76 KB, or 2.89 bytes/cm²

V = n * I = 1,800 × 2 bytes = 3,600 bytes / 1024 ≈ 3.52 KB, or 5.78 bytes/cm²
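The same two-line comparison in code (a minimal sketch; the helper name is mine):

```python
A4_AREA_CM2 = 29.7 * 21  # ~623.7 cm2

def text_volume(chars, bytes_per_char, area_cm2=A4_AREA_CM2):
    """Return (KB per sheet, bytes per cm2) for a page of text."""
    total_bytes = chars * bytes_per_char
    return total_bytes / 1024, total_bytes / area_cm2

alpha_kb, alpha_density = text_volume(1800, 1)  # alphabetic: 1 byte per character
hiero_kb, hiero_density = text_volume(1800, 2)  # hieroglyphic: 2 bytes per character
```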

Industrial Leap

The 19th century became a turning point for methods of recording and storing analog data; this was a consequence of the emergence of revolutionary materials and recording methods that were destined to change the world of information technology. One of the main innovations was sound recording.

Thomas Edison's invention of the phonograph first gave us cylinders with grooves cut into them, and soon gramophone records, the first prototypes of optical discs.

Reacting to sound vibrations, the phonograph's cutter tirelessly scored grooves into the surface, first of metal and a little later of polymer. Depending on the captured vibration, the cutter cut a winding groove of varying depth and width into the material, which made it possible to record sound and then, by purely mechanical means, reproduce the engraved vibrations.

At Edison's presentation of the first phonograph at the Paris Academy of Sciences, there was an embarrassing incident: an elderly linguist, upon hearing human speech reproduced by a mechanical device, jumped from his seat and indignantly attacked the inventor with his fists, accusing him of fraud. According to this respected member of the academy, metal could never reproduce the melodiousness of the human voice, and Edison himself was an ordinary ventriloquist. But you and I know that this is certainly not the case. Moreover, in the twentieth century people learned to store sound recordings in digital format, and now we will plunge into some numbers, after which it will become quite clear how much information fits on an ordinary vinyl record (vinyl became the most characteristic and widespread material of this technology).

Just as earlier with images, here we will build on the human capacity to perceive information. It is widely known that the human ear typically perceives sound vibrations from 20 to 20,000 Hz. Based on this, a sampling rate of 44,100 Hz was adopted for converting sound to digital form, since for a correct conversion the sampling frequency must be at least twice the highest frequency in the signal. Another important factor is the encoding depth of each of those 44,100 samples per second. This parameter determines how many bits describe each sample: the more precisely the position of the sound wave at a given instant is recorded, the more bits are needed to encode it and the better the digitized sound will be. The parameters chosen for the most common uncompressed format today, used on audio CDs, are 16-bit depth at a sampling rate of 44.1 kHz. Although more "capacious" combinations exist, up to 32 bit / 192 kHz, which might better match the actual quality of a recording, we will use 16 bit / 44.1 kHz in our calculations. It was this combination that in the 1980s and 1990s dealt a crushing blow to the analog audio recording industry, becoming in effect a full-fledged alternative to it.

So, taking the stated values as the initial sound parameters, we can calculate the digital equivalent of the volume of analog information that a gramophone record carries:

V = f * I = 44,100 Hz × 16 bits = 705,600 bits/sec / 8 = 88,200 bytes/sec / 1024 ≈ 86.13 KB/sec
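In code this is one line; note that, like the formula above, the article counts a single channel, whereas a real stereo CD stream is twice as large:

```python
def pcm_data_rate_kb(sample_rate_hz, bit_depth, channels=1):
    """Uncompressed PCM data rate in KB per second."""
    return sample_rate_hz * bit_depth * channels / 8 / 1024

mono_rate = pcm_data_rate_kb(44100, 16)              # ~86.13 KB/s, as above
stereo_rate = pcm_data_rate_kb(44100, 16, channels=2)  # ~172.27 KB/s for stereo
```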

By this calculation we obtained the amount of information needed to encode one second of high-quality sound. Since records varied in size, as did the density of the grooves on their surface, the amount of information on specific specimens of this medium also varied significantly. The maximum duration of a high-quality recording on a 30 cm vinyl record was just under 30 minutes per side, which was at the edge of the material's capabilities; usually the value did not exceed 20–22 minutes. Given this characteristic, it follows that a vinyl surface could accommodate:

Vv = V * t = 86.13 KB/sec × 60 sec × 30 = 155,034 KB / 1024 ≈ 151.40 MB

But in practice no more than:
Vvf = 86.13 KB/sec × 60 sec × 22 = 113,691.6 KB / 1024 ≈ 111.03 MB

The total area of such a record was:
S = π * r² = 3.14 × 15 cm × 15 cm = 706.50 cm²

In practice, then, about 160.93 KB of information falls on each square centimeter of a record. Naturally, this proportion will not scale linearly with diameter, since the calculation uses the entire surface of the medium rather than the effective recording area.
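Putting the vinyl figures together (using math.pi rather than the rounded 3.14, hence the hair's-width difference in density):

```python
import math

RATE_KB_S = 44100 * 16 / 8 / 1024  # ~86.13 KB/s from the previous calculation

def side_capacity_mb(minutes):
    """Digital equivalent of one side of a record, in MB."""
    return RATE_KB_S * minutes * 60 / 1024

edge = side_capacity_mb(30)     # ~151.4 MB, the edge of the format
typical = side_capacity_mb(22)  # ~111.0 MB, a typical side

area_cm2 = math.pi * 15 ** 2             # 12" record, radius 15 cm
density_kb = typical * 1024 / area_cm2   # ~160.85 KB per cm2 of the whole surface
```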

Magnetic tape
The latest and perhaps the most effective carrier of data recorded and read by analog methods is magnetic tape. Tape is in fact the only medium to have survived the analog era quite successfully.

The technology of recording information by magnetization was patented at the end of the 19th century by the Danish physicist Valdemar Poulsen, but unfortunately it did not become widespread at the time. The technology was first used on an industrial scale only in 1935, when German engineers created the first tape recorder on its basis. Over 80 years of active use, magnetic tape has undergone significant changes. Different materials and different geometric parameters of the tape itself were tried, but all these improvements rested on a single principle, developed back in 1898 by Poulsen: magnetic recording of vibrations.

One of the most widely used formats was a tape consisting of a flexible base coated with one of the metal oxides (iron, chromium, cobalt). The width of the tape used in household audio tape recorders was usually one inch (2.54 cm), and its thickness started at 10 microns; as for length, it varied significantly between reels and most often ranged from hundreds of meters to a thousand. For example, a reel 30 cm in diameter could hold about 1,000 m of tape.

Sound quality depended on many parameters, both of the tape itself and of the equipment reading it, but in general, with the right combination of these parameters, high-quality studio recordings could be made on magnetic tape. Higher quality was achieved by using more tape per unit of recorded time: the more tape used to record a moment of sound, the wider the range of frequencies that could be transferred to the medium. For high-quality studio material, the recording speed was at least 38.1 cm/sec. For home listening, a recording made at 19 cm/sec was sufficient for reasonably full sound. As a result, a 1,000 m reel could accommodate up to 45 minutes of studio sound, or up to 90 minutes of content acceptable to most consumers. For technical recordings or speeches, where the width of the frequency range did not matter much, a tape consumption of 1.19 cm/sec allowed as much as 24 hours of sound to be recorded on the same reel.

Having a general understanding of magnetic tape recording technologies in the second half of the twentieth century, we can more or less correctly convert the capacity of reel-to-reel media into familiar units of data volume, as we have already done for gramophone records.

A square centimeter of such a medium will accommodate:
Vo = V / (w × v) = 86.13 KB/sec / (2.54 cm × 19 cm/sec) ≈ 1.78 KB/cm²

Total volume of a reel with 1,000 meters of tape:
Vh = V * t = 86.13 KB/sec × 60 sec × 90 = 465,102 KB / 1024 ≈ 454.20 MB
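The tape figures follow the same pattern: data rate divided by the area of tape passing the head per second, and data rate multiplied by playing time (function names are mine):

```python
RATE_KB_S = 44100 * 16 / 8 / 1024  # ~86.13 KB/s of digitized sound

def tape_density_kb_cm2(width_cm, speed_cm_s):
    """KB of sound per square centimeter of tape passing the head."""
    return RATE_KB_S / (width_cm * speed_cm_s)

def reel_capacity_mb(minutes):
    """Digital equivalent of a reel holding the given minutes of sound."""
    return RATE_KB_S * minutes * 60 / 1024

home = tape_density_kb_cm2(2.54, 19)  # ~1.78 KB/cm2 at home-recording speed
reel = reel_capacity_mb(90)           # ~454.2 MB for 90 minutes on a 1000 m reel
```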

Do not forget that the actual footage of tape on a reel varied greatly; it depended primarily on the diameter of the reel and the thickness of the tape. Reels holding 500–750 meters of tape were quite common thanks to their convenient dimensions; for the average music lover this was the equivalent of an hour of sound, which was quite enough to hold an average music album.

The life of video cassettes, which used the same principle of registering an analog signal on magnetic tape, was short but no less bright. By the time this technology came into industrial use, the recording density on magnetic tape had increased dramatically. A half-inch tape 259.4 meters long held 180 minutes of video material of, by today's standards, very questionable quality. The first video recording formats produced a picture of around 352 × 288 lines; the best samples achieved around 352 × 576. In terms of bitrate, the most advanced playback methods approached 3,060 kbit/sec, at a tape reading speed of 2.339 cm/sec. A standard three-hour cassette could hold about 1,724.74 MB, which on the whole is not bad at all; as a result, video cassettes remained in high demand until very recently.
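As a sanity check, cassette capacity is just bitrate times duration. Note that the stated 1,724.74 MB for 180 minutes corresponds to an average stream of about 1,308 kbit/sec, not the 3,060 kbit/sec peak; the sketch below (function names are mine) makes both relationships explicit:

```python
def cassette_capacity_mb(bitrate_kbit_s, minutes):
    """MB stored at a constant bitrate for the given duration."""
    bits = bitrate_kbit_s * 1024 * minutes * 60
    return bits / 8 / 1024 / 1024

def implied_bitrate_kbit_s(capacity_mb, minutes):
    """Average bitrate implied by a stated capacity and duration."""
    bits = capacity_mb * 1024 * 1024 * 8
    return bits / (minutes * 60) / 1024

peak = cassette_capacity_mb(3060, 180)      # ~4034 MB if the peak rate were sustained
avg = implied_bitrate_kbit_s(1724.74, 180)  # ~1308 kbit/s behind the 1724.74 MB figure
```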

Magic number

The appearance and widespread adoption of the digit (binary coding) belong entirely to the twentieth century. Although the very philosophy of coding with the binary pair 1 / 0, Yes / No, had hovered around humanity at different times and on different continents, sometimes taking the most amazing forms, it finally materialized in 1937. MIT student Claude Shannon, building on the work of the great mathematician George Boole, applied the principles of Boolean algebra to electrical circuits, which in fact became the starting point for cybernetics in the form in which we know it now.

In less than a hundred years, both the hardware and software components of digital technology have undergone a huge number of major changes. The same is true for storage media. Starting with ultra-inefficient paper carriers of digital data, we have arrived at ultra-efficient solid-state storage. On the whole, the second half of the last century passed under the banner of experiments and the search for new forms of media, which can succinctly be called a general jumble of formats.

Punched cards
Punched cards were perhaps the first step toward interaction between computer and human. This form of communication lasted quite a long time; even now this medium can occasionally be found in certain research institutes scattered across the CIS.

One of the most common punched card formats was the IBM format introduced back in 1928, which became the basis for Soviet industry as well. According to GOST, such a punched card measured 18.74 × 8.25 cm. It could hold no more than 80 bytes, with only about 0.52 bytes per cm². At this density, 1 gigabyte of data would occupy roughly 21 hectares of punched cards, and one such gigabyte would weigh many tons.
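The density and area figures can be checked in a few lines; by this arithmetic, one gibibyte works out to about 21 hectares of cards:

```python
CARD_BYTES = 80
CARD_W_CM, CARD_H_CM = 18.74, 8.25  # GOST punched card dimensions

# Bytes stored per square centimeter of card
density = CARD_BYTES / (CARD_W_CM * CARD_H_CM)  # ~0.52 bytes/cm2

cards_per_gb = 1024 ** 3 / CARD_BYTES       # cards needed for 1 GB
area_cm2 = cards_per_gb * CARD_W_CM * CARD_H_CM
hectares = area_cm2 / 1e8                   # 1 hectare = 10^8 cm2 -> ~20.8 ha
```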

Magnetic tapes
In 1951, the first data carriers based on pulsed magnetization of tape, designed specifically for registering digital data, were released. The technology made it possible to fit up to 50 characters per centimeter of half-inch metal tape. It was subsequently improved substantially, increasing the number of bits per unit area many times over while reducing the cost of the carrier material as much as possible.

Today, according to recent statements by Sony, its developments make it possible to place about 23 gigabits of information per cm². Such figures suggest that magnetic tape recording has not become obsolete and has quite bright prospects for further use.

Gramophone records
Probably the most amazing method of storing digital data, but only at first glance. The idea of pressing a live program onto a thin layer of vinyl arose in 1976 at the American company Processor Technology. The essence of the idea was to make the storage medium as cheap as possible. The company's employees took an audio tape with data recorded in the existing Kansas City Standard audio format and transferred it to vinyl. Besides reducing the cost of the medium, this solution made it possible to bind a pressed record into an ordinary magazine, allowing small programs to be distributed en masse.

In May 1977, magazine subscribers were the first to receive in their issue a record containing a 4K BASIC interpreter for the Motorola 6800 processor. The record's playing time was 6 minutes.
For obvious reasons, the technology did not catch on; the last such record, the so-called Floppy-ROM, was officially released in September 1978, its fifth release.

Hard drives (Winchesters)
The first hard drive was introduced by IBM in 1956: the IBM 350 came as part of the company's first mass-produced computer. This "hard drive" weighed a total of 971 kg and was about the size of a wardrobe. It contained 50 disks 61 cm in diameter, and the total amount of information it could hold was a modest 3.5 megabytes.

The data recording technology itself was, so to speak, a derivative of gramophone records and magnetic tape. The disks inside the case carried many magnetic marks, which were written and read by the movable head of the drive. Like a gramophone pickup, at each moment the head moved across the surface of each disk, gaining access to the required cell, which carried a magnetic vector of a certain direction.

This technology is alive today and, moreover, is actively developing. Less than a year ago, Western Digital released the world's first hard drive with a capacity of 10 TB. Inside the case there were 7 platters, and the case was filled with helium instead of air.

Optical discs
They owe their appearance to a partnership between two corporations, Sony and Philips. The optical disc was introduced in 1982 as a viable digital alternative to analog audio media. With a diameter of 12 cm, the first samples could hold up to 650 MB, which at a sound quality of 16 bit / 44.1 kHz amounted to 74 minutes of sound, and this value was not chosen by chance: Beethoven's 9th Symphony, dearly loved either by one of Sony's executives or by one of the Philips developers, lasts exactly 74 minutes, and now it could fit entirely on one disc.

The technology for writing and reading information is very simple. Pits are burned into the mirror surface of the disc; when the disc is read optically, they register clearly as 1 / 0.

Optical media technology is thriving in 2015 as well. The Blu-ray disc with four-layer recording holds about 111.7 GB of data on its surface and, given its modest price, is an ideal medium for very "capacious" high-resolution films with deep color reproduction.

Solid state drives, flash memory, SD cards
All of these are offspring of a single technology. The recording principle, developed back in the 1950s, is based on storing an electric charge in an isolated region of a semiconductor structure. For a long time it found no practical implementation as the basis for a full-fledged storage medium. The main reason was the large size of transistors, which even at their maximum possible density could not yield a competitive product in the data storage market. The technology was remembered, and attempts to implement it were made periodically throughout the 1970s and 1980s.

The real finest hour for solid-state storage came in the late 1980s, when semiconductors began to reach acceptable sizes. In 1989, Japan's Toshiba presented a completely new type of memory, "flash", a name that neatly symbolized the main pros and cons of media built on this technology: unprecedented speed of data access, but a rather limited number of rewrite cycles and, for some media of this type, the need for an internal power supply.

To date, media manufacturers have achieved the greatest concentration of memory capacity with the SDXC card standard. With dimensions of 24 × 32 × 2.1 mm, such cards can hold up to 2 TB of data.

The cutting edge of scientific progress

All the media we have dealt with so far come from the world of inanimate nature, but let us not forget that the very first information storage device each of us has dealt with is the human brain.

The principles of how the nervous system functions are, in general outline, clear today. And surprising as it may sound, the physical principles of the brain are quite comparable to the organizational principles of modern computers.
A neuron, the structural and functional unit of the nervous system, is what our brain is built from: a microscopic cell of very complex structure that is, in effect, an analogue of the familiar transistor. Neurons interact by means of various signals propagated with the help of ions, which in turn generate electric charges, thus creating a rather unusual electrical circuit.

Even more interesting is the neuron's principle of operation: like its silicon analogue, this structure oscillates between two binary states. In microprocessors, the difference in voltage levels serves as the conventional 1 / 0; the neuron, in turn, has a potential difference and at any moment can take one of two possible polarities, either "+" or "-". A significant difference between a neuron and a transistor is how quickly it can switch between these states. Because of its structural organization, which we will not examine in detail, a neuron is thousands of times more inert than its silicon counterpart, which naturally affects its speed, that is, the number of requests processed per unit of time.

But not everything is so sad for living beings: unlike computers, where processes run sequentially, the billions of neurons integrated into the brain solve their tasks in parallel, which provides a number of advantages. Millions of these low-frequency processors quite successfully allow humans, in particular, to interact with their environment.

Having studied the structure of the human brain, the scientific community has concluded that the brain is in fact an integral structure combining a computing processor, working memory and long-term memory. Because of the brain's neural structure, there are no clear physical boundaries between these hardware components, only blurred zones of specialization. This is confirmed by dozens of real-life precedents in which, due to various circumstances, people had part of their brain removed, up to half its total volume. After such interventions, patients not only did not turn into "vegetables"; in some cases they restored all their functions over time and happily lived to a ripe old age, living proof of the flexibility and perfection of our brain.

Returning to the topic of the article, we can come to an interesting conclusion: the structure of the human brain is actually similar to the solid-state drives discussed above. After such a comparison, bearing in mind all its simplifications, we can ask how much data this storage can accommodate. Surprisingly, we can get a quite definite answer, so let's do the calculation.

As a result of scientific work carried out in 2009 by Suzana Herculano-Houzel, a neuroscientist at the Federal University of Rio de Janeiro in Brazil, it was found that the average human brain, weighing about one and a half kilograms, contains approximately 86 billion neurons; let me remind you that scientists previously believed the average figure to be 100 billion neurons. Based on these numbers and equating each individual neuron to one bit, we get:

V = 86,000,000,000 bits / (1024 × 1024 × 1024) ≈ 80.09 Gbit / 8 ≈ 10.01 GB
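Under the (very crude) one-bit-per-neuron assumption, the conversion is trivial:

```python
NEURONS = 86_000_000_000  # Herculano-Houzel's 2009 estimate

gigabits = NEURONS / 1024 ** 3  # one bit per neuron -> ~80.09 Gbit
gigabytes = gigabits / 8        # -> ~10.01 GB

print(f"{gigabits:.2f} Gbit = {gigabytes:.2f} GB")
```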

Is that a lot or a little, and how competitive can this storage medium be? It is still very hard to say. Every year the scientific community pleases us with more and more progress in studying the nervous system of living organisms. One can even find references to the artificial introduction of information into the memory of mammals. But by and large, the secrets of the brain's thinking still remain a mystery to us.

Bottom line

Although the article has not presented every type of data carrier, of which there is a huge variety, the most typical representatives have found a place in it. Summarizing the material, one can clearly trace a pattern: the entire history of the development of data carriers builds on the heritage of the stages that preceded it. The progress of the last 25 years in storage media rests firmly on the experience of at least the previous 100–150 years, while the growth rate of storage capacity over this quarter century has been exponential, a unique case in the entire known history of mankind.

Despite how archaic analog data recording seems to us now, until the end of the twentieth century it was an entirely competitive way of working with information. An album of high-quality images could contain gigabytes of data in digital equivalent which, until the early 1990s, were simply physically impossible to place on an equally compact digital medium, not to mention the lack of acceptable ways of working with such data arrays.

The first shoots of optical disc recording and the rapid development of HDDs in the late 1980s crushed many analog recording formats in just one decade. Although the first optical music discs did not differ qualitatively from vinyl records, offering 74 minutes of recording versus 50–60 (double-sided), their compactness, versatility and the further development of the digital direction, as expected, finally buried the analog format for mass use.

The new era of information media, on the threshold of which we stand, may significantly affect the world we will find ourselves in 10–20 years from now. Already, advanced work in bioengineering gives us a superficial understanding of the principles by which neural networks operate and lets us control certain of their processes. Although the potential for placing data in structures similar to the human brain is not that great, there are things that should not be forgotten. The functioning of the nervous system is still rather mysterious because it is so little studied, and even at a first approximation it is obvious that the principles of placing and storing data in it follow somewhat different laws than the analog and digital methods of information processing. Just as the analog stage of human development served as a foundation for the digital one, during the transition to the era of biological media the two previous stages will serve as a foundation, a kind of catalyst, for the next leap. The need to intensify the bioengineering field was obvious earlier, but only now has the technological level of human civilization risen to a point where such work can really be crowned with success. Whether this new stage in the development of IT will absorb the previous one, as we have already had the honor of observing, or will proceed in parallel, is too early to predict, but it will obviously change our lives radically.


FEDERAL AGENCY FOR EDUCATION

State educational institution of higher professional education

"ALTAI STATE UNIVERSITY"

HISTORY DEPARTMENT

Department of Archival Science and Historical Informatics

CHUPRIN VLADIMIR VLADIMIROVICH

EVOLUTION OF MATERIAL CARRIERS OF INFORMATION

Coursework by a 2nd year full-time student

Barnaul 2010

Coursework on the topic: Evolution of material storage media

This topic is, in my opinion, relevant first of all from an ethical point of view. In our lives we deal with documents almost every day, yet most people have no idea of the history of their evolution. For this reason I chose this topic.

The purpose of the work is to consider the evolution of material storage media.

Coursework objectives:

    Consideration of the evolution of material storage media from the 40th millennium BC to the 2nd millennium BC.

    The evolution of material media from the beginning of our era to the beginning of the 20th century AD.

    Consideration of the evolution of material storage media from the beginning of the 20th century to the present day.

The object of research in my coursework is the history of the development of material media. The subject of the work is a step-by-step consideration of the evolution of material storage media.

I defined the chronological framework as running from the fortieth millennium BC to the present day, because it was around the fortieth millennium BC that the first material carriers of information appeared.

The work is divided into chapters as follows: the first chapter examines the most ancient media, from the 40th millennium BC to the 2nd millennium BC, which are practically no longer used; the second chapter, covering the beginning of our era to the beginning of the 20th century AD, examines the material media recognized as traditional; and the third chapter examines the newest media.

The work consists of an introduction, three chapters, a conclusion, appendices and a list of references.

Our civilization in its current state is unthinkable without information carriers. Human memory is unreliable, so quite long ago humanity came up with the idea of recording its thoughts.
A storage medium is any device designed to record and store information.

Evolution is the development of a phenomenon or process through gradual, continuous changes that pass into one another without jumps or breaks. Development can be observed almost everywhere, and material media are no exception.

Examining laws and by-laws such as the Federal Law on Archiving in the Russian Federation,¹ it seemed to me that very little attention is paid to ancient and to the newest information media: these acts focus mainly on traditional media such as paper. Different researchers treat ancient information carriers differently: some consider them necessary and very important, while others simply ignore their existence; but let us not dwell on this, since each of us has our own opinion and attitude.

Chapter one: The evolution of material storage media from the 40th millennium BC to the 2nd millennium BC.

The first carriers of information were, apparently, cave walls. Rock paintings (Appendix 1. Fig. 1.) and petroglyphs (from the Greek petros, "stone", and glyphe, "carving") depicted animals, hunting and everyday scenes. In fact, it is not known for certain whether cave paintings were meant to convey information, served simply as decoration, combined these functions, or were needed for something else entirely. In any case, they are the oldest storage media known today; their appearance dates to approximately the 40th millennium BC.

Around the seventh century BC, clay tablets (Appendix 1. Fig. 2.), wax tablets (Appendix 1. Fig. 3.), animal bones (Appendix 1. Fig. 4.), animal skins, wooden planks (Appendix 1. Fig. 5.) and the like began to be used as material carriers of information.

Scribes used clay tablets, skins, bones and wooden planks for correspondence. Tablets with symbols drawn in ink or burned in with a hot needle were tied together with a leather strap and placed in a basket, forming a "book." The oldest records were of an economic or religious nature.

The clay tablet is the oldest writing instrument, having existed almost unchanged for millennia. Clay tablets appeared where the first writing arose: in Egypt and Mesopotamia. They were wooden boards with a layer of raw clay on the front surface. One wrote on a clay tablet with reed or bone sticks, and the tablet was then dried. Because the clay layer was quite thin, the tablet did not crack on drying and remained intact for quite a long time. An inscription was erased by wetting the tablet with water and smoothing the clay surface. If the writing had to be preserved for a long time, the tablet was fired in a kiln; inscriptions on fired tablets were not destroyed over time. For this reason archaeologists today often find shards with ancient writings, by deciphering which one can learn how ancient peoples lived. The structure of the clay and the small surface of the tablet were ideally suited to cuneiform, which consists of grouped wedge-shaped strokes pressed into the wet clay. Cuneiform writing originated in Ancient Sumer around the 3rd millennium BC. In the everyday life of many peoples, clay tablets survived until the invention of paper, and in some places until its general spread. The reason, again, was the availability of clay: making a clay tablet required neither a special workshop nor money. The emergence of the first schools and the birth of literature are associated with the clay tablet, which contributed to the development of social life, trade, science and art. We can therefore safely say that with the invention of the clay tablet, human civilization entered an era of cultural flourishing.

A wax tablet is a board of hard material (boxwood, beech, bone) with a hollowed-out recess into which dark wax is poured. In the Roman Empire it replaced lead sheets. One wrote on the tablet by making marks in the wax with a sharp metal stick, a stylus. If necessary, the inscriptions could be erased and smoothed over, and the tablet used again. In Ancient Rome wax tablets were used for writing; in the Middle Ages, mainly for rough notes, business records, letters and cash accounts. They were folded with the wax inward and joined in twos (diptych), threes (triptych) or larger sets (polyptych) with a leather strap. In Rus', writing on wax tablets (tsery) also saw some use, as evidenced by the numerous styluses found during archaeological excavations in Novgorod and by one polyptych, the Novgorod Codex. In hot climates records on wax tablets were short-lived, but some original wax tablets have survived to this day (for example, with records of French kings). Miniatures depicting medieval people writing on them have also been preserved (for example, an image of the 12th-century writer Hildegard of Bingen).

In the second half of the third millennium BC, papyrus (Appendix 1. Fig. 6.) appeared as a material carrier of information.

Wooden or bamboo "books" were awkward and heavy: Chinese emperors had to sign 50 kg of "documents" a day! By learning to glue sheets of papyrus from strips of reed, the Egyptians made the work of officials easier.

Papyrus (Greek πάπυρος), or byblos, is a writing material used in ancient times in Egypt and other countries. It was made from the wetland plant of the same name (Cyperus papyrus), which belongs to the sedge family; in ancient times wild papyrus was common in the Nile Delta, but it has now almost disappeared. To make the writing material, papyrus stems were stripped of bark and the pith was cut lengthwise into thin strips. The strips were laid out overlapping on a flat surface, another layer of strips was laid across them at right angles, and the whole was placed under a large smooth stone and left in the scorching sun. After drying, the papyrus sheet was beaten with a hammer and smoothed. The resulting sheets were then glued to one another; the first sheet was called the protocolon (Greek προτόκολλον). In their final form the sheets looked like long ribbons and were therefore kept as scrolls (at a later time they were also combined into books, Lat. codex). The side on which the fibers ran horizontally was the front side (Lat. recto).

When the main text was no longer needed, the reverse side could be used, for example, to write down literary works (often, however, the unneeded text was simply washed off). In Ancient Egypt papyri appeared in the predynastic era, probably simultaneously with the invention of writing. They were written on in cursive scripts, first hieratic and, in the 1st millennium BC, demotic; hieroglyphs proper were used to record sacred texts. In addition, images could be applied to papyri (famous examples are the Turin erotic papyrus and the vignettes of the Book of the Dead). In antiquity papyrus was the main writing material throughout the Greco-Roman world. Papyrus production in Egypt was very large, and papyrus workshops existed there even before the time of the caliphs; but papyri have survived only in Egypt, thanks to its unique climate. The discoveries of Greek papyri in Egypt (especially at Oxyrhynchus) made an invaluable contribution to classical philology; they are studied by a special discipline, papyrology.

Thus, for example, one of the papyri has preserved for us Aristotle's Athenian Polity, of which otherwise only the title would have been known. The works of Menander and Philodemus of Gadara and the Latin poem "Alcestis of Barcelona" have also reached us on papyri.

Papyrus began to lose its position as the main writing material in Europe and the Middle East in the 8th century. Within some two hundred years papyrus scrolls ceased to be kept, and parchment (Appendix 1. Fig. 7.), obtained by dressing the skins of various animals (sheep, goats, calves and, in urgent need, even cats), replaced papyrus as the main material.

For writing, brushes made from the stems of plants of the genus Juncus (rush) or Phragmites (reed) were used, and later a reed pen with a split tip, the qalam, together with black and red ink.

The reed thickets on the banks of the Nile quickly thinned out. In the East papyrus was used until the 8th century AD, but in Europe it was forgotten as early as the early Middle Ages. Around the second century BC, parchment, a writing material made from untanned animal skin, began to be used for records; the term also denotes an ancient manuscript on such material. From ancient times to the present day, parchment has been known among Jews as "gvil", the canonical material for recording the Sinai Revelation in handwritten Torah scrolls (Sefer Torah). Torah passages for tefillin and mezuzah are written on the more common type of parchment, "klaf" (Lat. vellum). Only the skins of kosher animal species are used to make these types of parchment.

According to the Greek historian Ctesias, by the 5th century BC leather had already long been used as a writing material by the Persians, from where, under the name "diphthera" (διφθέρα), it soon passed to Greece, where processed sheep and goat skins were used for writing.

According to Pliny the Elder, in the 2nd century BC the kings of Egypt of the Hellenistic period, wishing to support the book wealth of the Library of Alexandria, which had found a rival in the Pergamon Library in Asia Minor, banned the export of papyrus from Egypt. The Pergamenes then turned to leather-making, improved the ancient diphthera and put it into circulation under the names δέρμα and σωμάτιον, and later, after the place of its main production, περγαμηνή (among the Romans, membrana; from the 4th century AD, pergamena). The king of Pergamon, Eumenes II (197-159 BC), is erroneously credited as the inventor of parchment.

During the early days of printing, there was a brief period when parchment and paper were used interchangeably: although most of the Gutenberg Bible was printed on paper, parchment versions also survive.

The rapid growth of printing from the late Middle Ages onward led to a decline in the use of parchment, since its price, the complexity of its production and the achievable volume of output no longer met publishers' needs. From then until the present day, parchment has been used mainly by artists and, in exceptional cases, for book publishing. In medieval monastic book culture, parchment codices gradually replaced papyrus scrolls: from the 4th century AD the custom of writing liturgical books on parchment was already widespread, and in the Middle Ages papyrus was almost never used for this purpose.

The Middle Ages knew two main types of parchment: parchment proper (Lat. pergamen) and vellum (Lat. vellum, from French vélin). Parchment was made from the skins of sheep, rams, calves, pigs and other animals; vellum from the skins of newborn and especially stillborn lambs and calves. In southern Europe in the Middle Ages goat and sheep skins were used; in Germany and France, mainly calf skins. Parchment was not made from donkey skin.

Parchment was thicker and coarser than vellum, but the early Middle Ages hardly knew vellum: it came into wide use in book production only from the end of the 12th century.

Whatever skins were used, the parchment makers began by washing the skin and removing the coarsest, toughest hair. The skins were then limed, that is, soaked for a long time in a lime solution: they were kept in the lime for three to ten days, depending on the ambient temperature, and then washed in water. This made the hair easier to remove.

Once the hair had come out, the skins were stretched on wooden frames and fleshed, that is, the lower layer of the skin, the subcutaneous tissue, was separated from the dermis. This operation was performed with semicircular knives. The skins were then ground and smoothed with pumice.

During the final operation, chalk powder was rubbed into the parchment to absorb the fat not removed by the previous treatments; it also made the parchment lighter and more uniform in color. To whiten the parchment further, flour, egg white or milk was rubbed into it.

The Russian National Library holds a manuscript of St. Augustine written on excellent parchment, soft, thin and almost white, whose workmanship represents a kind of perfection.

Scribes and artists received parchment already cut and, as a rule, gathered into quires. The advantage of parchment over papyrus is that it can be written on both sides of the sheet and can also be reused.

From the 40th millennium BC to our era, material carriers came a very long way, from stone and rock to the parchment of the second century BC. The beginning of our era was marked by the appearance of paper.²

Chapter two: Evolution of material media from the beginning of our era to the beginning of the 20th century AD

In the late 1st or early 2nd century AD, paper appeared (Appendix 2. Fig. 1.) (the word presumably derives from the Italian bambagia, "cotton"). Paper is a material in the form of sheets for writing, drawing, packaging and so on, produced from cellulose: from plants, as well as from recycled materials (rags and waste paper).

Chinese chronicles report that paper was invented in 105 AD by Cai Lun. However, in 1957 a tomb was discovered in the Baoqia cave in the northern Chinese province of Shanxi, where scraps of sheets of paper were found. Examination established that the paper had been made in the 2nd century BC.

Before Cai Lun, paper in China was made from hemp, and even earlier from silk, produced from rejected silkworm cocoons.

Cai Lun crushed mulberry fibers, wood ash, rags and hemp, mixed it all with water, and spread the resulting mass on a mold (a wooden frame with a bamboo sieve). After drying the mass in the sun, he smoothed it with stones. The result was durable sheets of paper.

After Cai Lun's invention, the papermaking process began to improve rapidly: starch, glue, natural dyes and the like came to be added to increase strength.

At the beginning of the 7th century the method of making paper became known in Korea and Japan, and after another 150 years it reached the Arabs through prisoners of war.

In the 6th-8th centuries paper was produced in Central Asia, Korea, Japan and other Asian countries. In the 11th-12th centuries paper appeared in Europe, where it soon replaced animal parchment. From the 15th-16th centuries, with the introduction of printing, paper production grew rapidly. Paper was made in a very primitive way: the mass was ground by hand with wooden mallets in a mortar and scooped out into molds with a mesh bottom.

Of great importance for the development of paper production was the invention, in the second half of the 17th century, of a grinding apparatus, the hollander beater (roll). By the end of the 18th century, beaters could already produce large quantities of paper pulp, but manual sheet forming (scooping) held back the growth of production. In 1799, N. L. Robert (France) invented the paper machine, mechanizing sheet forming by using an endlessly moving wire mesh. In England, the brothers H. and S. Fourdrinier, having purchased Robert's patent, continued work on mechanizing sheet forming and in 1806 patented a paper machine. By the middle of the 19th century the paper machine had evolved into a complex unit operating continuously and largely automatically. In the 20th century, paper production became a large, highly mechanized industry with a continuous-flow technological scheme, powerful thermal power plants and complex chemical shops for producing fibrous semi-finished products.

Making paper requires plant substances with sufficiently long fibers which, mixed with water, yield a homogeneous, plastic, so-called paper pulp. The semi-finished products for paper production can be:

wood pulp or cellulose; cellulose of annual plants (straw, cane, hemp, rice and others); semi-cellulose; waste paper; rag half-stock; and, for special types of paper, asbestos, wool and other fibers.

Paper production consists of the following processes:

preparation of paper pulp (grinding and mixing components, sizing, filling and coloring of paper pulp);

forming the paper web on the paper machine (dilution of the pulp with water and cleaning it of impurities, sheet forming, pressing and drying, as well as primary finishing);

final finishing (calendering, cutting);

sorting and packaging.

Milling gives the fibers the required fineness and physical properties. Grinding is carried out in batch and continuous devices (hollander beaters, conical and disk mills, refiners and others). To make the paper suitable for writing and give it hydrophobic properties, rosin size, paraffin emulsion, alumina and other substances that promote bonding are introduced into the pulp (so-called sizing); to strengthen the bonds between fibers and increase mechanical strength and rigidity, starch and animal glue are added; to increase the wet strength of the paper, urea- and melamine-formaldehyde resins. To raise whiteness, smoothness, softness and opacity, and to improve the printing properties of paper, mineral fillers (kaolin, chalk, talc) are introduced; for coloring and added whiteness, aniline (more rarely mineral) dyes. Some types of paper, for example absorbent and electrical-insulating papers, are produced without sizing or filler. Hemp-pulp and rice papers are whiter than wood-pulp paper, so they often need no additional chemical bleaching of the fibers.

The finished paper pulp, at a concentration of 2.5-3.5%, is pumped from the preparation department to the mixing basin, from which it is fed to the paper machine. The pulp is first diluted with circulating water (to a concentration of 0.1-0.7%) and passed through cleaning equipment (sand traps, vortex and centrifugal cleaners, and knotters).

The most common is the so-called Fourdrinier (flat-wire) paper machine. It consists of a wire section, a press section, a drying section, a calender and a reel. The paper pulp flows in a continuous stream onto the machine's moving endless wire, where the forming, dewatering and compaction of the paper web take place. Further dewatering and compaction occur in the press section, formed by several roll presses, between whose rolls the web is carried throughout the process on a woolen felt that serves as an elastic cushion. The final removal of water takes place in the drying section, where the paper web comes into contact alternately on each surface with cast-iron drying cylinders, heated from inside by steam and arranged in a checkerboard pattern in two tiers. The surface of the paper is smoothed by pressing it against the cylinders with upper and lower felts. The web is then finished in a calender, a vertical stack of 5-8 metal rolls: as it passes between the rolls from top to bottom, it becomes smoother, denser and more even in thickness. The finished paper is wound into rolls on the reel, a driven cylinder against which a roller with the wound paper is pressed.

Sound recording began in the 18th century, when a revolution in the storage and transmission of information was brought about by the music box. Until then, all storage media had been designed for a single reading device: the human eye. In the box, the melody was recorded not in musical notation but by the projections of a rotating roller, read by a special mechanism. To record a melody in advance, a metal disk with a deep spiral groove was used. At certain points in the groove, pinpoint depressions (pits) were made, whose placement corresponds to the melody. As the disk rotates, driven by a clockwork spring, a special metal needle slides along the groove and "reads" the sequence of dots. The needle is attached to a membrane, which produces a sound each time the needle drops into a pit.

At the end of the 19th century the phonograph (Appendix 2. Fig. 2.) and the gramophone (Appendix 2. Fig. 3.) appeared. Mechanical musical instruments with replaceable rollers remained in great demand until the 1930s. But already in 1877 Thomas Edison had invented the phonograph, a device that recorded sound on tinfoil or wax cylinders, and in 1887 Emile Berliner developed a method for the mass reproduction of gramophone records. At first, the recording on each of them lasted only 3 minutes.

The phonograph (from the Greek φωνή, "sound", and γράφω, "to write") was the first device for recording and reproducing sound. It was invented by Thomas Alva Edison and presented on November 21, 1877. Sound is recorded on the medium in the form of a track whose depth is proportional to the loudness of the sound; the phonograph's soundtrack runs in a cylindrical spiral over a replaceable rotating drum. During playback, a needle moving along the groove transmits vibrations to an elastic membrane, which emits the sound. The invention was an astonishing event for its time; its further developments were the gramophone and the patefon (portable gramophone). The impetus for Edison's creation of the device was the desire to record telephone conversations in his laboratory at Menlo Park (New Jersey, USA): one day, at a telegraph repeater, he heard sounds resembling unintelligible speech. The first recordings were indentations made by a moving needle on the surface of foil wrapped around a cylinder that rotated while the sound was produced. The whole device cost $18. With this technique it proved possible to record the words of the children's song "Mary Had a Little Lamb." The public demonstration of the device immediately made Edison famous; to many, the reproduction of sound seemed like magic, and some dubbed Edison "the wizard of Menlo Park." Edison himself was so amazed by the discovery that he said: "I was never so taken aback in my life. I was always afraid of things that worked the first time." The invention was also demonstrated at the White House and the French Academy.

Edison received a patent for his invention (U.S. Patent 200,521), issued by the US Patent Office on February 19, 1878.

Between 1878 and 1887 Edison set aside work on the phonograph (he was occupied with the incandescent lamp). When he resumed it, he began using a wax-coated cylinder to record sound (an idea proposed by Charles Tainter). In 1887 the inventor Emil Berliner proposed a sound carrier not in cylindrical form but as a flat disk (the patent was received in 1896). On a disk the audio track is a spiral, which increases the recording time. Berliner called his device the "gramophone."

The original plan was to use the phonograph as a secretarial machine for recording voices during dictation.

The patefon (named after the French firm Pathé), a portable gramophone, had the form of a small suitcase. The record was turned by a spring motor that had to be wound with a special crank. Thanks to its modest size and weight, simple design and independence from the electrical mains, the patefon became very widespread among lovers of classical, popular and dance music. Until the middle of the 20th century it was an indispensable accessory of home parties and trips to the country. Records were produced in three standard sizes: minion, grande and giant.

From the beginning of our era to the beginning of the twentieth century there was a great breakthrough in the evolution of material media: until the 18th century, media were designed mainly for the visual transmission of information, while from the 18th century it became possible to perceive recorded information by ear as well, not to mention the creation of paper, which we use to this day.³

Chapter Three: Evolution of material storage media from the beginning of the 20th century to the present day

At the beginning of the 20th century, sound-recording technology continued to improve. The records of the gramophone and record player (Appendix 2. Fig. 4.) acted like the rollers of music boxes: the grooves guided the movement of the needle and mechanically drove the membrane. But already in 1900 a magnetic wire recorder was first shown to the public, in which sound was recorded by magnetizing sections of steel wire. An hour of recording at the beginning of the 20th century required 7 kilometers of wire weighing about 2 quintals.
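A quick sanity check on the wire-recorder figures just quoted (7 km of wire and about 2 quintals, i.e. roughly 200 kg, per hour of recording) shows how impractical the medium was. The inputs come from the text; the derived speed and linear density are merely illustrative arithmetic:

```python
# Derive the wire speed and linear density implied by the figures in the text:
# 7 km of wire and ~200 kg (2 quintals) per hour of recording.

wire_length_m = 7_000     # 7 km of wire per hour of recording
wire_mass_kg = 200        # about 2 quintals
recording_time_s = 3_600  # one hour

speed_m_per_s = wire_length_m / recording_time_s       # wire speed past the head
grams_per_meter = wire_mass_kg * 1000 / wire_length_m  # linear density of the wire

print(f"wire speed:     {speed_m_per_s:.2f} m/s")      # -> 1.94 m/s
print(f"linear density: {grams_per_meter:.1f} g/m")    # -> 28.6 g/m
```

So the wire had to race past the recording head at nearly two meters per second, which explains the enormous consumption of material.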

Punched cards (Appendix 2. Fig. 5.) then carried over into computing. The first computers of the 1920s-1950s still had much in common with the old music boxes: the storage media of those days knew nothing of "convenience" or "high recording density." Data was loaded from punched cards, cardboard cards with holes punched in them. Information was recorded and read according to particular schemes, but it rested on a binary code: the presence of a hole is 1, its absence is 0.
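The hole/no-hole encoding just described can be sketched in a few lines. The 8-bit rows and the ASCII interpretation below are illustrative assumptions; real card codes (Hollerith, and later EBCDIC) were more involved:

```python
# A hole ('O') reads as 1, its absence ('.') as 0; each row is one byte.

def read_row(row: str) -> int:
    """Interpret one card row: 'O' marks a hole (1), '.' no hole (0)."""
    bits = "".join("1" if ch == "O" else "0" for ch in row)
    return int(bits, 2)

card = [
    ".O..O..O",  # 0b01001001 = 73
    ".O....O.",  # 0b01000010 = 66
    ".O..OO.O",  # 0b01001101 = 77
]

decoded = "".join(chr(read_row(row)) for row in card)
print(decoded)  # -> IBM
```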

Next to enter the arena was the hard drive (Appendix 3. Fig. 1.). This happened in 1956, when IBM began selling the first disk storage system, the 305 RAMAC. The engineering marvel consisted of 50 disks 60 cm in diameter and weighed about a ton. Its capacity was phenomenal for the time: a full 5 MB! The main advantage of the new product was its speed: in the RAMAC system the read/write head moved freely over the surface of the disks, so data was written and retrieved much faster than with magnetic drums.

In the late sixties IBM released a high-speed drive with two 30 MB disks. A capacity of 60 MB was more than enough at the time, and drive manufacturers set to work on shrinking their models. By the early eighties hard drives had shrunk to the size of today's 5.25-inch drives, and prices had dropped to $2,000 for a 10 MB model. By 1991 the maximum capacity had risen to 100 MB, by 1997 to 10 GB; today the maximum capacity of a hard drive ("Winchester") is about 1 TB.

In the mid-seventies, a number of large companies began developing a fundamentally new type of storage: optical media. Philips and Sony achieved outstanding success in this field; the result of their intensive work was the CD (Compact Disc) standard (Appendix 3. Fig. 2.), first demonstrated in 1980. CDs and the corresponding players went on sale in 1982. Thanks to the phenomenally low cost of the media, the CD format immediately gained popularity, but at first CDs were used only for storing audio (up to 74 minutes of sound). To adapt their invention to arbitrary data, Philips and Sony created the CD-ROM (Compact Disc Read Only Memory) standard in 1984. As a result, one compact disc could hold up to 650 MB of information, a huge figure at the time; the capacity later grew to 700 MB (or 80 minutes of audio). In 1988, Taiyo Yuden announced the recordable CD-R (Compact Disc Recordable) format, and the CD-RW format, which allows data on a disc to be rewritten many times, was introduced in 1997.

In 1996 CDs began to give way to the DVD (Digital Versatile Disc) format. Essentially it is the same CD but with a higher recording density, achieved by shrinking the pits and changing the laser type. In addition, a DVD can have two working layers on one side: the capacity of a single-layer disc is 4.7 GB, of a double-layer disc 8.5 GB. Naturally, special drives were released to work with DVDs.
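The relationship between "74 minutes of audio" and "about 650 MB of data" can be checked from the standard CD parameters (44.1 kHz sampling, 16-bit samples, stereo; in data mode each 2352-byte audio sector carries only 2048 bytes of payload, the rest going to extra error correction). This is a back-of-the-envelope sketch, not a full account of the sector formats:

```python
# Estimate CD capacity from the standard audio parameters.

SAMPLE_RATE = 44_100   # samples per second
BYTES_PER_SAMPLE = 2   # 16-bit samples
CHANNELS = 2           # stereo
MINUTES = 74

audio_bytes = MINUTES * 60 * SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
data_bytes = audio_bytes * 2048 // 2352  # usable payload in CD-ROM mode

print(f"raw audio: {audio_bytes / 10**6:.0f} MB")  # -> 783 MB
print(f"data mode: {data_bytes / 2**20:.0f} MiB")  # -> 650 MiB
```

The ~650 MB data figure quoted in the text thus falls straight out of the 74-minute audio capacity once the heavier error correction of data discs is accounted for.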

In 1997, the DVD format was supplemented with DVD-R and DVD-RW discs. The license price for this technology was very high, so a number of companies united into the so-called “DVD+RW Alliance” and in 2002 released discs of the DVD+R and DVD+RW standards. Many old DVD drives refused to work with the new type of disc, but the “impostors” still managed to gain popularity. Today DVD-R(W) and DVD+R(W) coexist peacefully, and modern drives support both formats.

The first version of flash memory (Flash Erase EEPROM) was developed by Toshiba in 1984; four years later Intel presented a similar storage solution. Drives based on flash memory are called solid-state because they have no moving parts, which makes flash memory more reliable than other media. Rated operating shock is 15 g, and short-term shocks can reach 2000 g; in theory a card should work flawlessly under the highest possible cosmic overloads and survive a drop from a three-meter height. Moreover, under such conditions the card's operation is guaranteed for up to 100 years. Erasing on these cards is done in blocks, so a single bit or byte cannot be changed without rewriting the whole block; data can be cleared either in some minimum unit, for example 256 or 512 bytes, or entirely. The first flash drives were ATA Flash cards, manufactured as PC Cards with a built-in ATA controller.
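The block-erase behavior described above (no way to change a single byte without rewriting its whole block) can be sketched with a toy model. Programming flash can only clear bits (1 to 0); restoring any bit to 1 requires erasing an entire block back to 0xFF. The block size and the class API here are illustrative choices, not those of any real card:

```python
# Toy model of flash block-erase semantics.

BLOCK_SIZE = 256  # bytes; real devices use blocks from hundreds of bytes to many KiB

class ToyFlash:
    def __init__(self, num_blocks: int):
        # erased flash reads as all ones (0xFF)
        self.mem = bytearray([0xFF] * (num_blocks * BLOCK_SIZE))

    def program(self, addr: int, data: bytes) -> None:
        # programming ANDs new data into the cells: bits can only go 1 -> 0
        for i, b in enumerate(data):
            self.mem[addr + i] &= b

    def erase_block(self, block: int) -> None:
        # only a whole-block erase can bring bits back to 1
        start = block * BLOCK_SIZE
        self.mem[start:start + BLOCK_SIZE] = bytes([0xFF] * BLOCK_SIZE)

flash = ToyFlash(num_blocks=2)
flash.program(0, b"\x0f")  # 0xFF AND 0x0F -> 0x0F
flash.program(0, b"\xf0")  # bits cannot be set back: 0x0F AND 0xF0 -> 0x00
print(hex(flash.mem[0]))   # -> 0x0
flash.erase_block(0)       # the whole block returns to the erased state
print(hex(flash.mem[0]))   # -> 0xff
```

This is why flash controllers buffer writes and relocate data internally: naive in-place updates would force a costly block erase on every small change.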

More and more flash card standards then began to emerge, such as Compact Flash Type I (CF I) and Compact Flash Type II (CF II), released in 1994 by SanDisk as a modification of the PC Card.

In 1995, Toshiba developed the SmartMedia Card (SMC), which has no built-in controller.

In 1997, Infineon Technologies (a division of Siemens) created the MultiMediaCard (MMC). These cards are even smaller than those discussed above and weigh only 1.5 g, making them well suited to portable devices.

Later, Panasonic (Matsushita Electric), together with SanDisk and Toshiba, developed the Secure Digital (SD) standard, whose cards are equipped with protection against illegal copying.

In 2001 the USB flash drive appeared (Appendix 3. Fig. 3.). It consists of a protective cap and the drive itself with a USB connector; inside are one or two flash memory chips and a USB controller.

Technology does not stand still. In optical storage, great prospects await AO-DVD (Articulated Optical Digital Versatile Disc) discs, on which work is in full swing at Iomega. The development rests on the idea of using nanostructures: areas of the disc smaller than the wavelength of the laser light, which can themselves lie at different angles of inclination. Information is then read by analyzing how the reflected beam is distributed. In theory, the capacity of an AO-DVD disc can exceed 800 GB.⁴

Development in the field of holographic memory has been under way for quite some time. Optware has achieved the greatest success here: it has already shown the public prototype discs of the HVD (Holographic Versatile Disc) format, and it is quite possible that in a few years they will replace Blu-ray and HD DVD. A holographic disc consists of several reflective layers of different types, and two lasers are used to read them. Without going into technical details, we note that the theoretical capacity of an HVD can reach 3.9 TB.

Flash drives may soon be succeeded by PRAM (phase-change) memory. It does not promise incredible storage volumes, but offers increased performance instead. Another promising technology, FeRAM (Ferroelectric Random Access Memory), is still in early development. It is based on using ferroelectric capacitors as memory cells and water molecules to insulate those cells. The recording density of such a drive could in principle be raised to several thousand terabytes per square centimeter; unfortunately, for now this is only theory.

Some technologies will never become widespread and will be forgotten. One thing is clear, however: the capacity and speed of storage media keep growing, and there is no sign that this development will slow in the near future. 5

Conclusion:

Today almost everyone, heading to work, to school, or simply running errands, carries a small memory card with photographs of children and family, a favorite playlist, and so on. We all know a great deal about modern media, but what about the older ones, which we rarely even think of? You can hardly put a rock painting in your pocket to look at, say, while riding the bus, yet these early information carriers are part of our common heritage. Meanwhile, current storage media keep developing and improving: physical dimensions shrink while capacity grows. Legislation on new media does not stand still either. Storage media have penetrated our lives so deeply that it is impossible to imagine life without them.

Appendices:

Appendix 1. Fig. 1.

Appendix 1. Fig. 2. Clay tablet.

Appendix 1. Fig. 3. Wax tablet.

Appendix 1. Fig. 4. Fortune-telling bones in ancient China.

Appendix 1. Fig. 5. Wooden planks of ancient Babylon.

Appendix 1. Fig. 6. Papyrus.

Appendix 1. Fig. 7. Parchment.

Appendix 2. Fig. 1. Paper.

Appendix 2. Fig. 2. Phonograph.

Appendix 2. Fig. 3. Gramophone.

Appendix 2. Fig. 4. Record player.

Appendix 2. Fig. 5. Punch card.

Appendix 3. Fig. 1. Hard drive.

Appendix 3. Fig. 2. Compact disc.

Appendix 3. Fig. 3. USB flash drive.

    Ilyushenko M.P., Kuznetsova T.V., Livshits Ya.Z. Documentation: The Document and Systems of Documentation. – M.: MGIAN, 1977.

    Korenny A.A. Information and Communication. – Kyiv: Naukova Dumka, 1986.

    Kushnarenko N.N. Documentation: Textbook. – 6th ed., stereotyped. – Kyiv: Znannia, 2005.

    Large Encyclopedic Dictionary / A.M. Prokhorov (ed.). – 2nd ed., revised and expanded. – M.: Great Russian Encyclopedia; St. Petersburg: Norint, 1999.

    Bachilo I.L., Lopatin V.N., Fedotov M.A. Information Law: Textbook. – St. Petersburg: Legal Center Press, 2001.

    Klimenko S.V., Krokhin I.V., Kushch V.M., Lagutin Yu.L. Electronic Documents in Corporate Networks. – M., 2001.

    Larin M.V. Document Management and New Information Technologies. – M.: Scientific Book, 2001.

    Tkanev A. Electronic Signature: The Right to Life // Business Advocate. No. 9, 2005.

    Levin V.I. Information Media in the Digital Age / Under the general editorship of D.G. Kraskovsky. – M.: ComputerPress, 2000.

    Lendyel P., Morvai S. Chemistry and Technology of Paper Production. – M.: Lesnaya Promyshlennost, 1978.

1 Federal Law of the Russian Federation of October 22, 2004, No. 125-FZ "On Archival Affairs in the Russian Federation"
