What is flash memory in a computer? Flash memory


The basic design of the device has remained unchanged since 1995, when flash drives first began to be produced on an industrial scale. Without going into details, a USB flash drive consists of three key elements:

  • USB connector - the familiar connector that serves as the interface between the flash drive and a computer system, be it a personal computer, a multimedia center or even a car radio;
  • memory controller - a very important element of the circuit; it connects the device's memory to the USB connector and manages data transfer in both directions;
  • memory chip - the most expensive and important part of a USB flash drive; it determines how much information the device can store and the speed of reading/writing data.

What can change in this scheme? Nothing in principle, but modern industry offers several variations of it: a combination of eSATA and USB connectors, or two USB connectors.

1 -- USB connector; 2 -- microcontroller; 3 -- control points; 4 -- flash memory chip; 5 -- quartz resonator; 6 -- LED; 7 -- "write protection" switch; 8 -- space for an additional memory chip.

Operating principle

Flash memory stores information in an array of floating-gate transistors called cells. In traditional devices with single-level cells (SLC), each cell can store only one bit. Newer devices with multi-level cells (MLC) or triple-level cells (TLC) can store more than one bit by using several distinct levels of electrical charge on the transistor's floating gate.

Types of flash memory

NOR

This type of flash memory is based on the NOR gate, because in a floating-gate transistor a low voltage on the gate denotes a one.

The transistor has two gates: control and floating. The latter is completely isolated and is capable of retaining electrons for up to 10 years. The cell also has a drain and a source. When programming with voltage, an electric field is created at the control gate and a tunnel effect occurs. Some electrons tunnel through the insulator layer and reach the floating gate. The charge on the floating gate changes the "width" of the drain-source channel and its conductivity, which is used for reading.

Programming and reading cells have very different power consumption: flash memory devices consume quite a lot of current when writing, while the energy consumption is low when reading.

To erase information, a high negative voltage is applied to the control gate, and electrons from the floating gate move (tunnel) to the source.

In the NOR architecture, each transistor must be connected to an individual contact, which increases the size of the circuit. This problem is solved using NAND architecture.

NAND

The NAND type is based on the NAND element. The operating principle is the same; it differs from the NOR type only in the placement of the cells and their contacts. As a result, an individual contact to each cell is no longer necessary, so the size and cost of a NAND chip can be significantly reduced. Writing and erasing are also faster. However, this architecture does not allow access to an arbitrary cell.

NAND and NOR architectures now exist in parallel and do not compete with each other, since they are used in different areas of data storage.

Flash memory is a type of long-lived, non-volatile memory for computers whose contents can be electrically erased and reprogrammed. Unlike Electrically Erasable Programmable Read-Only Memory (EEPROM), operations on it are performed on blocks rather than on individual bytes. Flash memory also costs much less than EEPROM, which is why it has become the dominant technology wherever stable, long-term data storage is required. It is used in a wide variety of devices: digital audio players, photo and video cameras, mobile phones and smartphones (where special Android applications exist for working with the memory card). It is also used in USB flash drives, traditionally employed to save information and transfer it between computers, and it has gained some fame in the world of gamers, where it is often used to store game progress data.

General description

Flash memory is a type of memory capable of storing information for a long time without power. It also offers very high data access speeds and better resistance to mechanical shock than hard drives. It is thanks to these characteristics that it has become so popular in devices powered by batteries and rechargeable cells. Another undeniable advantage is that flash memory packaged in a solid card is almost impossible to destroy by ordinary physical means: it can withstand boiling water and high pressure.

Low-level data access

The way data residing in flash memory is accessed is very different from that of conventional memory types. Low-level access is provided through a driver. Conventional RAM responds to read and write requests immediately, returning the results of such operations, but flash memory is designed in such a way that erase and write operations take noticeable time to complete.

Design and principle of operation

Today the most widespread flash memory is built on single-transistor cells with a "floating" gate. This provides greater data storage density than dynamic RAM, which requires a transistor and a capacitor per cell. The market is now full of different technologies for constructing the basic cells of this type of media, developed by the leading manufacturers. They differ in the number of layers, the methods of writing and erasing information, and the organization of the structure, which is usually reflected in the name.

Currently, two types of chips are most common: NOR and NAND. In both, the storage transistors are connected to the bit lines - in parallel and in series, respectively. The first type has fairly large cells and allows fast random access, so programs can be executed directly from memory. The second has smaller cells and fast sequential access, which is much more convenient for building block-oriented devices that store large amounts of information.

Portable devices have traditionally used NOR memory for their built-in storage. However, devices with a USB interface, which use NAND memory, are becoming increasingly popular, and NAND is gradually displacing NOR.

The main problem is fragility

The first mass-produced flash drives did not please users with high speeds. Now, however, read and write speeds are high enough to watch a full-length movie from a flash drive or run an operating system from it. A number of manufacturers have already demonstrated machines in which the hard drive is replaced by flash memory. But the technology has a very significant drawback that stands in the way of replacing magnetic disks: by design, flash memory allows only a limited number of erase/write cycles, a limit that is reachable even in small portable devices, let alone on computers, where such operations happen far more often. If this type of media is used as a solid-state drive in a PC, a critical situation can arrive very quickly.

This is because such a drive relies on the ability of field-effect transistors to store a charge in a "floating" gate; the presence or absence of that charge in the transistor is interpreted as a binary one or zero. Writing and erasing data in NAND memory is carried out by Fowler-Nordheim tunneling of electrons through a dielectric layer. This process does not require large currents, which allows cells to be made very small. But it is this same process that wears the cells out, since the electric current forces electrons to penetrate toward the gate, overcoming the dielectric barrier. The guaranteed retention time of such memory is about ten years. Wear of the chip occurs not because of reading information but because of erase and write operations, since reading does not change the structure of the cells; it only passes an electric current through them.

Naturally, memory manufacturers are actively working to increase the service life of solid-state drives of this type: they strive to spread write/erase operations evenly across the cells of the array so that some do not wear out more than others. The load is distributed mainly by software. For example, "wear leveling" technology is used: data that changes frequently is moved around the flash memory address space, so writes land at different physical addresses. Each controller has its own leveling algorithm, and since implementation details are not disclosed, it is very difficult to compare the effectiveness of different models. As flash drive capacities grow every year, ever more efficient algorithms are needed to guarantee stable operation of the devices.
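As a rough illustration of the idea only (controllers' real algorithms are proprietary), a dynamic wear-leveling policy could be sketched like this; the block counts and data structures here are assumptions made up for the example:

```python
# Simplified sketch of dynamic wear leveling (not any real controller's algorithm).
# Each logical write is redirected to the least-worn free physical block.

class WearLevelingFTL:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks        # erases per physical block
        self.logical_to_physical = {}                        # logical block -> physical block
        self.free_blocks = set(range(num_physical_blocks))   # unmapped physical blocks

    def write_logical_block(self, logical_block, data):
        # Pick the least-worn free physical block for the fresh copy of the data.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        # Release (and count an erase of) the previously used physical block, if any.
        old = self.logical_to_physical.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        self.logical_to_physical[logical_block] = target
        return target   # where the data physically landed

ftl = WearLevelingFTL(num_physical_blocks=8)
for _ in range(20):
    ftl.write_logical_block(0, b"frequently changed data")
print(ftl.erase_counts)   # the wear is spread across many physical blocks
```

Repeated writes to the same logical block end up hitting different physical blocks, which is exactly the behaviour described above.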

Troubleshooting

One very effective way to combat this phenomenon has been to reserve a certain amount of memory: special logic redirects physical blocks that fail during intensive use of a flash drive to spare ones, ensuring load uniformity and error correction. To prevent the loss of information, cells that fail are blocked or replaced with backup ones. This software distribution of blocks ensures load uniformity and increases the number of usable cycles by a factor of 3-5, but even this is not enough.

Flash drives and other similar media also keep the file system tables in a service area. This protects against read failures at the logical level, for example after an incorrect shutdown or a sudden loss of power. And since the operating system does not cache writes to removable devices, frequent rewriting has the most detrimental effect on the file allocation table and directory entries. Even special memory card utilities cannot help here. Suppose, for example, that in a single session a user overwrote a thousand files. It would seem that the blocks where those files reside were each written only once. But the service areas were rewritten with every update of every file, that is, the allocation tables went through this procedure a thousand times. For this reason, the blocks occupied by this data will fail first. Wear leveling technology also works with such blocks, but its effectiveness is limited. And no matter what kind of computer you use, the flash drive will fail exactly when its maker intended it to.

It is worth noting that the increase in chip capacity has actually reduced the total number of write cycles: the cells are becoming smaller, so less and less voltage is enough to degrade the oxide barriers that isolate the "floating" gate. As device capacity grows, the problem of reliability keeps getting worse, and the class of a memory card now depends on many factors. The reliability of a particular product is determined by its technical features as well as by the current market situation. Due to fierce competition, manufacturers are forced to cut production costs by any means: simplified designs, cheaper components, weakened production control and other methods. For example, a Samsung memory card will cost more than its lesser-known analogues, but its reliability raises far fewer questions. Even there it is hard to talk about a complete absence of problems, and from devices by completely unknown manufacturers it is hard to expect anything more.

Development prospects

Alongside its obvious advantages, the SD memory card has a number of disadvantages that prevent further expansion of its scope. That is why there is a constant search for alternative solutions in this area. Of course, first of all, attempts are made to improve existing types of flash memory without any fundamental changes to the existing production process. Therefore, only one thing is beyond doubt: companies manufacturing these types of drives will try to use their full potential, continuing to improve traditional technology, before switching to another type. For example, the Sony memory card is currently available in a wide range of capacities, so it is expected to keep selling actively.

However, today, on the threshold of industrial implementation, there is a whole range of technologies for alternative data storage, some of which can be implemented immediately upon the onset of a favorable market situation.

Ferroelectric RAM (FRAM)

Ferroelectric RAM (FRAM) is proposed as a way to increase the potential of non-volatile memory. It is generally accepted that the operating mechanism of existing technologies, in which data is rewritten during the reading process with all the accompanying changes to the basic components, restrains the speed potential of devices. FRAM, by contrast, is memory characterized by simplicity, high reliability and high speed. These properties are typical of DRAM, the volatile random access memory in use today, but FRAM adds the possibility of long-term data storage characteristic of flash memory. Among the advantages of the technology is resistance to various types of penetrating radiation, which may be in demand in special devices used in conditions of increased radioactivity or in space exploration. The information storage mechanism relies on the ferroelectric effect: the material retains its polarization in the absence of an external electric field. Each FRAM memory cell is formed by sandwiching an ultra-thin film of crystalline ferroelectric material between a pair of flat metal electrodes, forming a capacitor. The data is stored inside the crystal structure, which prevents the charge leakage that causes loss of information. Data in FRAM memory is retained even when the power supply is turned off.

Magnetic RAM (MRAM)

Another type of memory considered very promising today is MRAM. It offers fairly high speed and is non-volatile. In this case a thin magnetic film placed on a silicon substrate is used. MRAM is static memory: it does not need periodic refreshing, and information is not lost when the power is turned off. Most experts agree that this type of memory can be called a next-generation technology, since existing prototypes demonstrate fairly high speeds. Another advantage of this solution is the low cost of the chips. Flash memory is manufactured using a specialized CMOS process, whereas MRAM chips can be produced using a standard manufacturing process, and the materials can be the same ones used in conventional magnetic media. Producing such chips in large quantities is much cheaper than producing any of the others. An important property of MRAM is its ability to turn on instantly, which is especially valuable for mobile devices. Indeed, in this type of memory the value of a cell is determined by a magnetic state rather than an electrical charge, as in traditional flash memory.

Ovonic Unified Memory (OUM)

Another type of memory that many companies are actively working on is a solid-state drive based on amorphous semiconductors. It relies on phase-change technology, similar in principle to recording on conventional rewritable discs: in an electric field the material changes its phase state from crystalline to amorphous, and this change persists even after the field is removed. Such devices differ from traditional optical discs in that the heating is produced by an electric current rather than a laser. In optical discs the two states are distinguished by the difference in the material's reflectivity, which is picked up by the drive's sensor; in OUM the difference in the material's electrical resistance is measured instead. In theory such a solution offers high data storage density, maximum reliability and increased performance. The maximum number of rewrite cycles is also very high; flash memory lags behind by several orders of magnitude.

Chalcogenide RAM (CRAM) and Phase Change Memory (PRAM)

This technology is also based on the principle that in one phase the substance used in the carrier acts as a non-conducting amorphous material, and in the second it serves as a crystalline conductor. The transition of a memory cell from one state to another is carried out due to electric fields and heating. Such chips are characterized by resistance to ionizing radiation.

Information-Multilayered Imprinted CArd (Info-MICA)

The operation of devices built on the basis of this technology is carried out according to the principle of thin-film holography. Information is recorded as follows: first, a two-dimensional image is formed and transferred to a hologram using CGH technology. Data is read by fixing the laser beam on the edge of one of the recorded layers, which serve as optical waveguides. The light propagates along an axis that is parallel to the plane of the layer, forming an output image corresponding to the information recorded earlier. The initial data can be obtained at any time thanks to the reverse coding algorithm.

This type of memory compares favorably with semiconductor memory due to the fact that it provides high recording density, low power consumption, as well as low cost of storage media, environmental safety and protection from unauthorized use. But such a memory card does not allow rewriting of information, so it can only serve as long-term storage, a replacement for paper media, or an alternative to optical disks for distributing multimedia content.

Modern people like to be mobile and to carry various high-tech gadgets that make life easier and, let's be honest, richer and more interesting. And they appeared in just 10-15 years! Miniature, lightweight, convenient, digital... Gadgets achieved all this thanks to new microprocessor technologies, but an even greater contribution was made by one remarkable data storage technology, which we will talk about today. So, flash memory.

There is an opinion that the name "flash" refers to a flash of light. Actually, this is not quite true. One version of its origin says that Toshiba first used the word "flash" in 1989-90 in the sense of "fast, instant" when describing its new chips. Intel is often credited as the inventor, having introduced flash memory with the NOR architecture in 1988. A year later, Toshiba developed the NAND architecture, which is still used today alongside NOR in flash chips. In fact, these can now be considered two different types of memory with somewhat similar production technology. In this article we will try to understand their design and operating principle, and also consider various practical uses.

NOR

In this architecture the cell behaves like a NOR logic gate: input voltages are converted into output voltages corresponding to "0" and "1". Different voltages are needed to read and write data in a memory cell. The cell diagram is shown in the figure below.

It is typical for most flash chips and is a transistor with two insulated gates: control and floating. An important feature of the latter is its ability to hold electrons, that is, charge. The cell also has a "drain" and a "source". During programming, a positive voltage on the control gate creates a channel between them - a flow of electrons. Some of the electrons, having higher energy, overcome the insulator layer and land on the floating gate, where they can remain for several years. A certain range of electron count (charge) on the floating gate corresponds to a logical one, and anything greater corresponds to a zero. When reading, these states are recognized by measuring the threshold voltage of the transistor. To erase information, a high negative voltage is applied to the control gate, and the electrons move (tunnel) from the floating gate to the source. In technologies from different manufacturers this operating principle may differ in how current is supplied and how data is read from the cell.

Note also that in flash memory only one element (a transistor) is needed to store 1 bit of information, whereas volatile memory types require several transistors and a capacitor for this. This makes it possible to significantly reduce the size of the chips, simplify the production process and, consequently, reduce costs. But one bit is far from the limit: Intel already ships StrataFlash memory in which each cell stores 2 bits of information, and there are trial samples with 4- and even 9-bit cells! This memory uses multi-level cell technology: the cells have the normal structure, but their charge is divided into several levels, each of which is assigned a particular combination of bits. In theory it is possible to read/write even more bits per cell, but in practice problems arise with noise and with the gradual leakage of electrons during long-term storage. In general, today's memory chips are characterized by cell data retention measured in years and by 100 thousand to several million read/write cycles.

Among the disadvantages of flash memory with the NOR architecture it is worth noting poor scalability: the chip area cannot be reduced simply by shrinking the transistors. This is related to the way the cell matrix is organized: in the NOR architecture, an individual contact must be made to each transistor. NAND flash fares much better in this regard.

NAND

The design and operating principle of its cells is the same as in NOR. Besides the logic, however, there is another important difference - the architecture of cell placement and contacts. Unlike the case described above, here there is a contact matrix with transistors at the intersections of rows and columns. This is comparable to a passive matrix in displays :) (while NOR is comparable to an active TFT matrix). For memory, this organization is somewhat better - the chip area can be significantly reduced thanks to the smaller cells. The disadvantage, to be sure, is the lower speed in byte-by-byte random access operations compared to NOR.

There are also such architectures as: DiNOR (Mitsubishi), superAND (Hitachi), etc. They do not represent anything fundamentally new, but only combine the best properties of NAND and NOR.

And yet, be that as it may, NOR and NAND today are produced on equal terms and practically do not compete with each other, because, due to their qualities, they are used in different areas of data storage. This will be discussed further...

Where is memory needed...

The scope of application of any type of flash memory depends primarily on its speed characteristics and reliability of information storage. The address space of NOR memory allows you to work with individual bytes or words (2 bytes). In NAND, cells are grouped into small blocks (similar to a hard drive cluster). It follows from this that when reading and writing sequentially, NAND will have a speed advantage. However, on the other hand, NAND is significantly inferior in random access operations and does not allow direct work with bytes of information. For example, to change one byte you need:

  1. read into the buffer the block of information in which it is located
  2. change the required byte in the buffer
  3. write the block with the changed byte back
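A minimal sketch of those three steps, with an in-memory stand-in for the chip and invented nand_* callbacks standing in for the real controller commands (not a real chip API):

```python
# Illustration of the NAND read-modify-write cycle for changing a single byte.
# The nand_* callables are hypothetical placeholders, not a real chip API.

BLOCK_SIZE = 2048   # assumed block size in bytes for this example

def change_byte(nand_read_block, nand_erase_block, nand_write_block, address, new_value):
    block_index = address // BLOCK_SIZE
    offset = address % BLOCK_SIZE
    buffer = bytearray(nand_read_block(block_index))   # 1. read the whole block into a buffer
    buffer[offset] = new_value                         # 2. change the required byte in RAM
    nand_erase_block(block_index)                      #    erase before reprogramming
    nand_write_block(block_index, bytes(buffer))       # 3. write the block back

# Tiny in-memory stand-in for one NAND block, just to exercise the function.
storage = {0: bytes(BLOCK_SIZE)}
change_byte(lambda blk: storage[blk],
            lambda blk: storage.__setitem__(blk, b"\xff" * BLOCK_SIZE),
            lambda blk, data: storage.__setitem__(blk, data),
            address=5, new_value=0x42)
print(storage[0][5])   # 66
```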

If we add block fetch and access delays to the execution time of the above operations, we get figures that are by no means competitive with NOR (note that this applies specifically to byte-by-byte writing). Sequential writing/reading is another matter - here NAND, on the contrary, shows significantly higher speeds. Therefore, and also because memory capacity can be increased without increasing chip size, NAND flash has found use for storing large amounts of information and transferring it. The most common devices based on this type of memory are now flash drives and memory cards. As for NOR flash, chips with this organization are used to store program code (BIOS, memory of pocket computers, mobile phones, etc.), sometimes as integrated solutions (RAM, ROM and a processor on one mini-board, or even in one chip). A good example of such use is the Gumstix project: a single-board computer the size of a stick of gum. It is NOR chips that provide the level of storage reliability required in such cases and more flexible options for working with the data. NOR flash capacity is usually measured in single megabytes and rarely exceeds a few tens.

And there will be a flash...

Of course, flash is a promising technology. However, despite high production growth rates, storage devices based on it are still too expensive to compete with desktop or laptop hard drives. For now, the sphere of flash memory's dominance is limited to mobile devices. As you understand, this segment of information technology is not so small. In addition, according to the manufacturers, flash expansion will not stop there. So, what are the main development trends in this area?

First, as mentioned above, there is a strong focus on integrated solutions. Moreover, projects like Gumstix are only intermediate stages on the path to implementing all functions in one chip.

So far, so-called single-chip (system-on-chip) solutions combine flash memory with a controller, processor, SDRAM or special software in one chip. For example, Intel StrataFlash in combination with Persistent Storage Manager (PSM) software makes it possible to use the memory simultaneously for storing data and executing program code. PSM is essentially a file system supported by Windows CE 2.1 and higher. All this is aimed at reducing the number of components and shrinking mobile devices while increasing their functionality and performance.

No less interesting and relevant is Renesas's development - superAND flash memory with built-in management functions. Previously these were implemented separately in the controller, but now they are integrated directly into the chip: bad-sector monitoring, error correction (ECC - error checking and correction) and wear leveling. Since they are present in one form or another in most branded external controllers as well, let's take a brief look at them.

Let's start with bad sectors. Yes, they occur in flash memory too: chips come off the assembly line with, on average, up to 2% of non-working cells - this is a normal technological tolerance. Over time their number may grow (the environment is not particularly to blame: flash chips are not afraid of electromagnetic or physical (shaking, etc.) influences). Therefore, like hard drives, flash memory has reserve capacity. If a bad sector appears, the monitoring function replaces its address in the file allocation table with the address of a sector from the spare area.


The ECC algorithm is what actually detects bad data - it compares the information that was meant to be recorded with what was actually recorded. Also, because of the limited cell resource (on the order of several million read/write cycles each), it is important to have a function that ensures uniform wear. Here is a simple but typical case: a 32 MB key-fob drive with 30 MB occupied, where something is constantly being written to and deleted from the free space. Some cells sit idle while others rapidly exhaust their resource. To prevent this, in branded devices the free space is conventionally divided into sections, and the number of write operations is tracked and recorded for each of them.

Even more complex all-in-one configurations are now widely offered by companies such as Intel, Samsung, Hitachi and others. Their products are multifunctional devices implemented in a single chip (typically containing a processor, flash memory and SDRAM). They are aimed at mobile devices where high performance with minimal size and low power consumption is important: PDAs, smartphones, phones for 3G networks. One example of such developments is a chip from Samsung that combines an ARM processor (203 MHz), 256 MB of NAND memory and 256 MB of SDRAM. It is compatible with common operating systems - Windows CE, Palm OS, Symbian, Linux - and has USB support. On its basis it is possible to create multifunctional mobile devices with low power consumption, capable of working with video, sound, voice and other resource-intensive applications.

Another direction for improving flash is reducing power consumption and size while increasing memory capacity and speed. This applies above all to chips with the NOR architecture, since with the development of mobile computers that support wireless networks, NOR flash, thanks to its small size and low power consumption, is becoming a universal solution for storing and executing program code. 512 Mbit NOR chips from the same Renesas will soon go into mass production. Their supply voltage will be 3.3 V (recall that they store information without any power at all), and the write speed will be 4 MB/s. Meanwhile, Intel is already presenting its StrataFlash Wireless Memory System (LV18/LV30) - a universal flash memory system for wireless technologies. Its capacity can reach 1 Gbit, and the operating voltage is 1.8 V. The chips are manufactured on a 0.13-micron process, with plans to move to a 0.09-micron process. Among this company's innovations it is also worth noting a burst mode of operation with NOR memory: it allows information to be read not one byte at a time but in blocks of 16 bytes, so that with a 66 MHz data bus the speed of exchange with the processor reaches 92 Mbit/s!

Well, as you can see, technology is developing rapidly. It is quite possible that by the time this article is published something new will appear. So, if anything happens, don’t blame me :) I hope the material was interesting to you.

Despite the progress of computer technology, just 3-4 years ago many new computers (and even more so older ones) included a floppy drive. Even significant reductions in the cost of optical drives and CDs could not displace 3.5-inch floppy disks. Optical media are simply inconvenient to use: while reading data from them causes no particular discomfort, writing and erasing takes time. And the reliability of discs, although many times higher than that of floppies, still begins to decline after a while, especially with active use. As always, at the most inopportune moment the drive will act up because of old age (its own or the disc's) and report that no disc can be seen on the horizon.

That's why floppy disks lasted so long. It is still quite possible to carry small things like documents or source codes of programs on them. But now, even for this type of data, sometimes 1.38 MB of free space is not enough.

The solution to the problem has been looming for quite some time. Its name is flash memory. It was invented back in the 80s of the last century, but reached actual mass products by the end of the 90s. And at first it was available to us as memory cards, and then in the form of MP3 players, which today have already changed the abbreviation MP3 to a prouder and more general epithet “digital”.

This was followed by the advent of USB flash drives. The process of their penetration was not the fastest at first. It began with the appearance of 16-64 MB solutions. Now this is minuscule, but 8 years ago, compared to a floppy disk, it was wow. And added to this was ease of use, high read/write speed and, of course, a high price. At that time, such flash drives were more expensive than an optical drive, which themselves were valued at about $100.

However, the convenience of flash drives has had a decisive influence on consumer choice. As a result, a real boom began in 2005. The cost of flash memory has fallen many times, and along with it the capacity of storage devices has increased. As a result, today you can buy a 32 GB flash drive for just 2000-2500 rubles, whereas a year ago it cost almost twice as much.

Progress in the field of flash memory has been so successful that today it is already beginning to compete with hard drives. So far only in the area of read/write speed and access time, as well as in energy performance and durability, but victory in capacity in the coming years also cannot be ruled out. The only advantage of HDD is the price. One "hard" gigabyte costs much less. But this is only a matter of time.

So, flash memory is one of the most promising computer technologies for storing data. But where did it come from and what are its possible limitations and disadvantages? It is precisely these questions that this article aims to answer.

Past

While Japanese shippers were unloading one of the first shipments of Apple computers, which arrived in refrigerators because of the apple on the boxes, a Japanese scientist named Fujio Masuoka was working on a new type of memory in a Toshiba research laboratory. A name for it was not found right away, but the scientist saw the prospects of the invention from the very beginning.

However, the name was decided on quite quickly. Fujio's colleague, Mr. Shoji Ariizumi, suggested calling the new memory "flash". One translation of this word means a camera flash (and, in principle, any other flash of light). This idea was suggested to Shoji by the method of erasing data.

The new technology was presented in 1984 in San Francisco at an event called the International Electron Devices Meeting, held by the IEEE. It was noticed immediately by quite large companies. For example, Intel released its first commercial NOR chip in 1988.

Five years later, in 1989, Toshiba introduced NAND flash memory technology at a similar event. Today this type is used in the vast majority of devices. We'll tell you why exactly in the next section.

NOR and NAND

NOR memory was introduced a little earlier because it is a little easier to manufacture, and its transistors structurally resemble an ordinary MOSFET (a unipolar field-effect transistor with an insulated gate). The only difference is that in NOR memory the transistor has, in addition to the control gate, a second, "floating" gate. Thanks to a special insulating layer, the latter can retain electrons for many years, preserving the transistor's charge.

In general, NOR memory got its name because it works as a NOR gate (NOR is a logical NOR operation; it takes the value “true” only when both inputs are “false”). So the empty NOR memory cell is filled with the logic value "1". By the way, the same applies to NAND memory. And, as you might guess, it got its name because of a similar principle of working with a NAND gate (NAND is a logical NAND operation; it takes the value “false” only when “true” is applied to both inputs).

What do these "NOT-AND" and "NOT-OR" mean in practice? The NOR memory chip can only be erased as a whole, although in more modern incarnations of the technology it is divided into several blocks, usually of 64, 128 or 256 KB. At the same time, this type of memory has an external address bus, which allows byte-by-byte reading and programming (writing). This makes it possible not only to address data as precisely as possible, but also to execute code directly "in place", without first copying it into RAM. This capability is called XIP (eXecute In Place).
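To make the block-erase granularity concrete, here is a small sketch that computes which address range gets wiped when the block containing a single byte is erased; the 128 KB block size is just an assumed example value:

```python
# NOR flash is read byte by byte but erased only a whole block at a time.
ERASE_BLOCK = 128 * 1024   # assumed erase-block size (64, 128 or 256 KB are typical)

def erase_range_for(address):
    start = (address // ERASE_BLOCK) * ERASE_BLOCK
    return start, start + ERASE_BLOCK - 1

print([hex(a) for a in erase_range_for(0x21000)])   # ['0x20000', '0x3ffff']
```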

It's also worth talking about a relatively new NOR memory function called BBM (Bad Block Management). Over time, some cells may become unusable (more precisely, writing to them becomes impossible), and the chip controller, noticing this, reassigns the address of such cells to another, still working block. Hard drives do something similar, as we wrote about in the article "".
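A toy sketch of that redirection logic, with the block counts and the way failures are detected invented purely for illustration:

```python
# Simplified bad-block management: failed blocks are redirected to a spare area.

class BadBlockManager:
    def __init__(self, user_blocks, spare_blocks):
        self.remap = {}                                             # bad block -> spare block
        self.spares = list(range(user_blocks, user_blocks + spare_blocks))

    def resolve(self, block):
        """Return the physical block actually used for a given block address."""
        return self.remap.get(block, block)

    def mark_bad(self, block):
        if not self.spares:
            raise RuntimeError("no spare blocks left")
        self.remap[block] = self.spares.pop(0)

bbm = BadBlockManager(user_blocks=1024, spare_blocks=24)
bbm.mark_bad(17)          # suppose block 17 failed verification after a write
print(bbm.resolve(17))    # 1024 - the first block of the spare area
print(bbm.resolve(18))    # 18   - healthy blocks map to themselves
```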

Thus, NOR memory is well suited for cases where maximum accuracy of data reading and fairly infrequent changes are required. Can you guess where we're going with this? That's right - to the firmware of various devices, in particular the BIOS of motherboards, video cards, etc. This is where NOR flash is now most often used.

As for NAND, the situation is a little trickier. Reading and programming are performed page by page, while erasing can only be done one block at a time. A block consists of several pages; a page is usually 512, 2048 or 4096 bytes, and the number of pages in a block typically varies from 32 to 128. So there is no question of any "in-place" execution. Another limitation of NAND memory is that the pages of a block can only be written sequentially.
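For a feel of this geometry, here is how a byte offset maps onto (block, page, offset) for one common configuration, assumed here as 2048-byte pages and 64 pages per block:

```python
# NAND addressing sketch: geometry values are one typical configuration, assumed for the example.
PAGE_SIZE = 2048                           # bytes per page
PAGES_PER_BLOCK = 64                       # pages per erase block
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK   # 131072 bytes = 128 KB

def locate(byte_offset):
    block = byte_offset // BLOCK_SIZE
    page = (byte_offset % BLOCK_SIZE) // PAGE_SIZE
    offset = byte_offset % PAGE_SIZE
    return block, page, offset

print(locate(0))         # (0, 0, 0)
print(locate(300_000))   # (2, 18, 992)
```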

As a result, this coarseness of access sometimes leads to errors, especially with MLC memory (more on this type below). The ECC mechanism is used to correct them: it can fix from 1 to 22 erroneous bits per 2048 bits of data. If correction is impossible, the mechanism records that an error occurred during writing or erasing, and the block is marked as "bad".

By the way, to prevent the formation of bad blocks, flash memory uses a special method called "wear levelling". It works quite simply: since the "survivability" of a flash block depends on the number of erase and write operations, and this number differs from block to block, the device controller counts these operations per block and over time tries to write to the blocks that have been used less - that is, the less "worn" ones.

Well, as for the scope of application of NAND memory, due to the possibility of denser placement of transistors, and at the same time cheaper production, it is used in all flash memory cards and USB flash drives, as well as SSDs.

Well, a little about SLC (Single-Level Cell - single-level cell) and MLC (Multi-Level Cell - multi-level cell) cells. Initially, only the first type was available. It assumes that only two states, that is, one bit of data, can be stored in one cell. MLC chips were invented later. Their capabilities are a little wider - depending on the voltage, the controller can read more than two values ​​from them (usually four), which allows you to store 2 or more bits in one cell.

The advantages of MLC are obvious - with the same physical size, twice as much data fits into one cell. The disadvantages, however, are no less significant. First of all, this is the reading speed - it is naturally lower than that of SLC. After all, it is necessary to create a more accurate voltage, and after that it is necessary to correctly decipher the information received. And then the second drawback arises - inevitable errors when reading and writing data. No, the data is not damaged, but it does affect the speed of operation.
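The idea can be pictured as mapping the cell's measured threshold voltage onto one of four ranges, each decoded to a pair of bits; the voltage boundaries below are invented for the example and are not real chip parameters:

```python
# MLC read sketch: four voltage ranges, each decoded to a pair of bits.
def decode_mlc_cell(threshold_voltage):
    if threshold_voltage < 1.0:
        return 0b11   # the erased state conventionally reads as all ones
    elif threshold_voltage < 2.0:
        return 0b10
    elif threshold_voltage < 3.0:
        return 0b01
    else:
        return 0b00

print(bin(decode_mlc_cell(0.4)))   # 0b11
print(bin(decode_mlc_cell(2.5)))   # 0b1  (i.e. bits 01)
```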

A rather significant drawback of flash memory is the limited number of data write and erase cycles. In this regard, it still can’t compete very well with hard drives, but overall the situation is improving every year. Here are the service life data for different types of flash memory:

  • SLC NAND – up to 100 thousand cycles;
  • MLC NAND – up to 10 thousand cycles;
  • SLC NOR – from 100 to 1000 thousand cycles;
  • MLC NOR – up to 100 thousand cycles.

Here's another disadvantage of MLC memory - it is less durable. Well, NOR flash is generally beyond competition. True, this is of little use to the average person - anyway, his flash drive is most likely built on the basis of NAND flash, and even on MLC chips. However, technology does not stand still and NAND flash with millions of cycles of writing and erasing data is gradually coming to the masses. So over time, these parameters will become of little significance to us.
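To put these cycle counts into perspective, here is a back-of-the-envelope lifetime estimate; the capacity and daily write volume are assumptions invented for the example, and real figures also depend on write amplification:

```python
capacity_gb = 32
cycles = 10_000             # MLC NAND endurance from the list above
written_per_day_gb = 20     # assumed daily writes, spread evenly by wear leveling

total_writable_gb = capacity_gb * cycles              # 320,000 GB may be written in total
lifetime_days = total_writable_gb / written_per_day_gb
print(round(lifetime_days / 365, 1), "years")          # ~43.8 years under these assumptions
```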

"Cards"

Having dealt with the types of flash memory, let's now move on to real products based on it. We will omit the description of BIOS chips, since they are of little interest to most readers. Nor does it make much sense to dwell on USB flash drives: with them everything is extremely simple - they connect via the USB interface, and the chips inside are entirely up to the manufacturer. There are no standards for these media other than USB compatibility.

But standards are required for flash cards, which are used today in digital cameras, players, mobile phones and other mobile devices. A card reader for them is available in most laptops and netbooks, and one can also be found in household DVD (or Blu-ray) players or car radios.

There is one universal characteristic for these devices - the number of supported memory cards. Sometimes on card readers you can see proud inscriptions “20-in-1” or even “30-in-1”, indicating the number of supported formats. But what is most surprising is that there are only 6 fundamentally different mass formats. All the rest are their modifications. It is these six standards that we will focus on further.

CompactFlash

The CompactFlash format occupies a special place among all other flash memory card formats. First of all, because it was the very first mass standard. It was introduced by SanDisk in 1994. And it is still actively used in digital SLR cameras, as well as computer routers and other highly specialized devices.

The most interesting thing is that the first CF cards were based on NOR chips manufactured by Intel. But then they were quickly transferred to NAND flash, which reduced the cost and increased capacity.

CompactFlash was created as a format for external data storage. But since there were no card readers 15 years ago, and USB was just being designed, CF cards were created based on the ATA (IDE) interface specifications. Thus, such a card can be connected to a regular IDE connector or inserted into a PC Card slot via a passive adapter. This is why CompactFlash is very convenient to use in routers and similar devices - speed and large volume are not required there, but size, shock resistance and low heating are much more relevant.

In addition, it is not difficult to make an adapter for a USB or FireWire interface. And, most interestingly, most card readers use the CompactFlash I/O system to exchange data between the computer and other formats: SD/MMC, Memory Stick, xD and SmartMedia.

Now about the various modifications of the CompactFlash standard. Initially, such cards were issued in a single “cartridge” measuring 43x36x3.3 mm. It is still used today. But when the one-inch IBM Microdrive hard drive was introduced, a second form factor with dimensions of 43x36x5.0 mm was added. Thus, the first became known as CF Type I, and the second - CF Type II. After the release of the Microdrive (and its analogues) was stopped, the relevance of the CF Type II came to naught.

CompactFlash has several more revisions. Their need arose as read/write speeds and volumes increased. So revision 2.0 increased the maximum speed to 16 MB/s. Later, revision 3.0 appeared, increasing this value to 66 MB/s. Well, the latest version 4.0/4.1 allows you to exchange data at speeds of up to 133 MB/s. The last value corresponds to the UDMA133 standard, which is also losing its relevance.

To replace the fourth revision, they are already preparing... no, not a new revision - a new format: CFast. Its main fundamental difference is the use of the Serial ATA interface instead of IDE. Of course, this removes any backward compatibility with the previous type of connector, but it raises the maximum speed to 300 MB/s and allows capacities well beyond 137 GB. Note that CFast uses seven pins for data exchange, just like a regular SATA interface, but power is supplied through 17 pins, whereas SATA devices have 15. So you won't be able to connect a CFast card directly to the motherboard; you'll have to use an adapter. Such cards should appear this year; at CES 2009 in January, the first 32 GB samples were already demonstrated.

Now it remains to talk about the data exchange speed and capacities of the CompactFlash cards available today. The speed of CF cards (and of other flash memory drives except SSDs) is measured exactly like that of CD drives: 1x corresponds to 150 KB/s. The fastest representatives bear the inscription 300x, which corresponds to 45 MB/s. That is not small, but it is still far from hard drives, let alone SSDs. Over time, though, the speed will only increase.
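A one-liner for converting the "x" rating into throughput, using the 1x = 150 KB/s convention mentioned above (decimal megabytes, as card vendors usually quote them):

```python
def x_rating_to_mb_per_s(rating):
    return rating * 150 / 1000   # 1x = 150 KB/s; result in decimal MB/s

print(x_rating_to_mb_per_s(300))   # 45.0 MB/s
print(x_rating_to_mb_per_s(133))   # ~20 MB/s
```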

Well, as for the volume, CompactFlash cards with capacities ranging from 2 MB to 100 GB have been released over the years. Today, the most common options are from 1 to 32 GB. However, 48, 64 and 100 GB versions are already available for sale, although they are still quite rare. So far, the CompactFlash format offers the highest capacity flash memory cards. But others may offer other advantages. We read about them further.

SmartMedia

SmartMedia became the second mass-market flash card format. It was introduced a year after CompactFlash, in the summer of 1995, and was created as a competitor to CF. What did SmartMedia have to offer? First of all, smaller size - more precisely, only a smaller thickness of just 0.76 mm; the width and length of such cards, 45x37 mm, were almost the same as CompactFlash's 43x36 mm. It should be noted that in thinness SM has still not been surpassed by any other format. Even ultra-compact microSD cards are fatter at 1 mm.

This figure was achieved thanks to the removal of the controller chip. It was transferred to the card reader. Yes, and inside the SM card itself, at first there could be one NAND chip, but then, as technology improved, there were more of them.

But the absence of a controller inside the card has certain disadvantages. Firstly, as the volume grew and new media models were released, the card reader firmware had to be updated. And this operation was not always available if the card reader was very old. Also, over time, confusion began with the operating voltage of SmartMedia cards. Initially it was 5.0 V, and then 3.3 V. And if the card reader did not support one of them, then it could not work with such cards. Moreover, when inserting a 3.3 volt card into a 5.0 volt card reader, it could be damaged or burned.

Secondly, for the SmartMedia format it is impossible to use the method of calculating the wear level of flash memory blocks (we described the wear levelling method in the last section). And this potentially threatens to shorten the life of the memory card.

However, all this did not prevent SmartMedia from being used for quite a long time as the main format for digital cameras - in 2001, up to half of such devices on the market supported it, although at that time this market was much more modest than today. SmartMedia has not found itself in other digital devices such as players, PDAs or mobile phones. And camera manufacturers began to abandon SM. Cameras were becoming smaller and smaller and the thinness of these cards was no longer enough. Well, the second significant disadvantage is the growing need for more capacity. SmartMedia cards reached a capacity of only 128 MB. 256 MB variants were planned, but they were never released.

In general, SmartMedia was conceived as a replacement for 3.5-inch floppy disks. A special adapter called FlashPath was even released for them. It was introduced in May 1998 and a year later they sold a million units. It was developed by SmartDisk, which, by the way, produced similar adapters for MemoryStick and SD/MMC cards.

The most amazing thing is that FlashPath works with any floppy drive bearing the distinguishing "HD" (High-Density) logo - in short, any drive that reads a 1.44 MB floppy. But there is one "but" - two of them, in fact. First, a special driver is required to recognize the FlashPath adapter and the card inside it, and if it is not available for your OS, you are out of luck; booting from such a "floppy" is also impossible. The second "but" is the speed: it does not exceed that of a regular floppy disk. And if 1.44 MB could be copied or written in a little over a minute, 64 MB would take more than an hour.

Today the SmartMedia format can be called dead. Some card readers still support it (especially the geeky all-in-1 ones), but this compatibility is simply not relevant. Although, of course, this standard made a certain contribution to the development of flash technologies.

MMC

The MMC format was introduced third in 1997. It was developed by SanDisk and Siemens AG. The abbreviation MMC stands for MultiMediaCard, which immediately indicates the purpose of the standard - digital multimedia devices. This is where MMC is most often used.

In principle, MMC is very closely related to SD, especially their first versions. However, they diverged in their development and today the second is the most common. So we will talk about it in the next subsection.

MMC, unlike CompactFlash and SmartMedia, has a more compact size. In terms of length and width: 24x32 mm. The thickness of MMC cards is 1.4 mm, which is approximately twice that of SM. But this parameter is not as critical as the other two measurements.

Over the entire existence of MMC, as many as eight different modifications of its cards have been presented. The first (simply MMC) uses a one-bit serial interface for data transmission, and its controller operates at a frequency of up to 20 MHz. This means a maximum speed of no more than 20 Mbps (2.5 MB/s or approximately 17x). In principle, quite modest by modern standards, but 12 years ago this was enough.

In 2004, the RS-MMC form factor was introduced. The prefix RS means Reduced-Size or “reduced size”. Its dimensions are as follows: 24x18x1.4 mm. You can see that the height has almost halved. Otherwise it was exactly the same MMC memory card. But to install it in a card reader you need to use a mechanical adapter.

The DV-MMC format turned out to be quite short-lived (DV stands for Dual-Voltage - double voltage). Such cards could operate at a standard voltage of 3.3 V and at a reduced voltage of 1.8 V. This is necessary to save energy. There is a clear focus on mobile devices here. But DV-MMC cards were quickly phased out due to the advent of the MMC+ (or MMCplus) and MMCmobile formats.

MMC+ and MMCmobile differed quite significantly from the original MMC specification and represented its fourth version. This did not prevent them from keeping full backward compatibility with older card readers and devices, but a firmware update was required to use their new capabilities. Those capabilities were as follows: 4- and 8-bit data exchange interfaces were added to the one-bit one, and the controller frequency could range from 26 to 52 MHz. All this raised the maximum speed to 416 Mbit/s (52 MB/s). Both formats supported operation at 1.8 or 3.3 V. In size, MMCplus and MMCmobile did not differ from MMC and RS-MMC, respectively.
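The quoted figures follow directly from bus width and clock frequency; a quick check:

```python
# Peak MMC bus throughput: bus width (bits) x clock (MHz) gives Mbit/s; divide by 8 for MB/s.
def mmc_peak_mb_per_s(bus_width_bits, clock_mhz):
    return bus_width_bits * clock_mhz / 8

print(mmc_peak_mb_per_s(1, 20))    # 2.5  MB/s - original MMC
print(mmc_peak_mb_per_s(8, 52))    # 52.0 MB/s - MMCplus / MMCmobile
```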

Later, the smallest MMC appeared - MMCmicro. The card dimensions were 14x12x1.1 mm. This format was based on MMC+ with some limitations. In particular, due to the lack of additional contacts (MMC has 7, MMC+ has 13), the data exchange interface did not support 8-bit data transfer.

There is also such an unusual format as miCard. It was introduced in the summer of 2007 with the goal of creating a universal card that can be inserted into both an SD/MMC card reader and a USB connector. The first cards were supposed to have a capacity of 8 GB. The maximum reaches 2048 GB.

Well, the last one is SecureMMC. It is also based on the version 4.x specification that is used in MMC+. Its main feature is support for DRM protection. By the way, this is what originally distinguished the SD format from MMC. SecureMMC is an attempt to compete with SD. So let's move on to this standard.

SD

The SD (Secure Digital) format is by far the most popular. It and its modifications are used everywhere: in digital players and cameras (even DSLRs), in PDAs and mobile phones. Probably the reason for this is its constant support and development from many companies.

SD was introduced in 1999 by Matsushita and Toshiba. A full-size Secure Digital card has the same length and width as an MMC, with dimensions of 32x24x2.1 mm. The greater thickness is explained by the presence of a write-protect switch. However, the SD specification also allows cards without it (called Thin SD), in which case the thickness drops to 1.4 mm.

Initially, the SD release aimed to compete with Memory Stick (discussed below), which supported DRM protection for media files. At the time, the developing companies mistakenly assumed that the giants of the media industry would flood online stores to such an extent that all files would be protected by DRM, so they decided to get in ahead of the game.

Secure Digital is based on the MMC specifications. This is why SD card readers easily work with MMC. Why not the other way around? To protect contacts from wear on SD cards, they were slightly recessed into the housing. Therefore, the contacts of a card reader aimed only at working with MMC simply will not reach the contacts of the SD card.

In terms of variety of formats, SD is no less "modest" than its predecessor. First of all, it is worth noting that two more form factors were presented: miniSD (20x21.5x1.4 mm) and microSD (11x15x1 mm). The latter was originally created by SanDisk and was called T-Flash and then TransFlash, and was later adopted as a standard by the SD Card Association.

The remaining differences concern card capacity, and here there is some confusion. It started with the first generation of cards, which reached 2 GB. An SD card is identified by a 128-bit register. Of these bits, 12 indicate the number of memory clusters and another 3 indicate the number of blocks per cluster (4, 8, 16, 32, 64, 128, 256 or 512 - eight values in total, which fits in three bits). The standard block size in the first versions was 512 bytes. In total, 4096x512x512 gives 1 GB of data. There we are.

When this upper limit began to pinch, version 1.01 of the specification appeared, which allowed an additional bit to be used for the block size - it could now also be 1024 or 2048 bytes, raising the maximum capacity to 2 and 4 GB respectively. But there was a problem: older devices could incorrectly determine the size of the new memory cards.
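The capacity arithmetic from the two paragraphs above, written out explicitly (field sizes as described, maximum values assumed):

```python
clusters = 2 ** 12           # 12-bit field -> up to 4096 clusters
blocks_per_cluster = 512     # 3-bit field  -> 4, 8, ..., 512 blocks per cluster
block_size = 512             # standard block size in the first SD versions, bytes

print(clusters * blocks_per_cluster * block_size // 2**30, "GB")   # 1 GB ceiling for SD 1.0

# SD 1.01 allowed 1024- or 2048-byte blocks, raising the ceiling to 2 and 4 GB:
print(clusters * blocks_per_cluster * 2048 // 2**30, "GB")          # 4 GB
```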

In June 2006 a new edition of the standard appeared - SD 2.0. It was even given a new name - SDHC, or Secure Digital High Capacity. The name speaks for itself: the main innovation of SDHC is the ability to create cards of up to 2 TB (2048 GB). There is no minimum limit in principle, but in practice SDHC cards come in capacities of 4 GB and above. Notably, the maximum is artificially limited to 32 GB; for higher-capacity cards the SDXC standard is proposed (more about it below), although several manufacturers have introduced 64 GB SDHC cards.

The SD 2.0 standard uses 22 bits of data to define the size, but four of them are reserved for future use. So card readers that were not originally designed to work with SDHC will not be able to recognize new memory cards. But new devices can easily recognize old cards.

Along with the announcement of the SDHC format, identification by speed classes appeared. There are three options: SD Class 2, 4 and 6. The number indicates the card's minimum sustained data rate, so a Class 6 card guarantees at least 6 MB/s. The upper limit is naturally not capped, although so far the situation with SD cards is roughly the same as with CompactFlash - the fastest models have reached 300x, or 45 MB/s (one "x" being 150 KB/s, as with CD-ROM drives).

It is worth adding that the miniature form factors have been modernized as well: miniSDHC and microSDHC have not been forgotten, though it is mostly the former that actually reach the shelves. Today their maximum capacity has already hit 16 GB, with 32 GB options on the way.

Well, the latest innovation is the SDXC standard. Whether it is formally called version 3.0 of the specification, we were unable to find out. In any case, it does not differ from SDHC all that dramatically. First of all, the artificial limit on maximum capacity has been removed, so it can now reach 2 TB. The maximum data transfer speed has been increased to 104 MB/s, with a promise of 300 MB/s in the future. And exFAT was chosen as the main file system (discussed below), whereas SDHC mostly makes do with FAT32. The first SDXC cards, with capacities of 32 and 64 GB, have already been announced, but devices that support them will take a while to arrive.

That is about it for SD cards themselves. But several more interesting things have been released within this standard. For example, the SDIO (Secure Digital Input Output) specification: using the form factor and interface of SD cards, it lets you build devices such as GPS receivers, Wi-Fi and Bluetooth controllers, modems, FM tuners, Ethernet adapters and so on. In this case the SD slot serves as a kind of analogue of USB.

SanDisk has distinguished itself with SD Plus cards, which have a USB connector built right in. Eye-Fi is another interesting development: a memory card with a built-in Wi-Fi controller that can transfer data from the card to a computer, so there is no need to even remove it from the camera or phone.

All in all, Secure Digital is today the most popular and fastest-growing format. Sony is still trying to resist it with its Memory Stick, but not very successfully.

Memory Stick

Sony is known for its dislike of formats and standards that it did not develop itself - understandably, since it earns no royalties from them. That is how, in due course, DVD+R/RW, Blu-ray and Memory Stick cards appeared. Introduced in October 1998, Memory Stick cards are still found almost exclusively in Sony products, and by and large only Sony, plus SanDisk to a small extent, produces them. The result is logical: relatively low prevalence and a higher price than other flash cards of similar capacity.

Over the entire existence of the Memory Stick, Sony has released as many as seven modifications, and unlike MMC, all of them are still in use. The natural result is confusion - and an opportunity for card reader manufacturers to boast about the number of standards their products recognize.

It all started with the plain Memory Stick, an elongated memory card measuring 50x21.5x2.8 mm, shaped somewhat like a stick of chewing gum. As we wrote above, its distinguishing feature was DRM support, which turned out never to be needed. Capacities ranged from 4 to 128 MB.

In time that was no longer enough, and since an updated standard had not yet been developed, the Memory Stick Select format was announced. It was a regular Memory Stick card, but with two 128 MB memory chips inside, switched between with a small slider on the card itself. Not a very convenient solution, which is why it remained temporary and transitional.

The capacity problem was properly solved with the release of Memory Stick PRO in 2003. Theoretically such a card can store up to 32 GB of data, though in practice none larger than 4 GB were made. Naturally, most older devices do not recognize the PRO version, while new ones handle first-generation Memory Sticks without trouble. A sub-variant, High Speed Memory Stick PRO, makes things even more confusing: all Memory Stick PRO cards of 1 GB and above were of this kind and could operate in a special high-speed mode. Happily, they remained backward compatible with older devices, although the speed then dropped back to normal.

Over time it became clear that the cards would have to shrink - the Memory Stick "plates" are not convenient everywhere. Thus Memory Stick Duo appeared, measuring 31x20x1.6 mm, slightly smaller than Secure Digital. Unfortunately, these cards were based on the first version of the Memory Stick standard and inherited its capacity limit - and 128 MB in 2002 was not respectable at all. So in 2003 Memory Stick PRO Duo appeared, and it is this standard that is developing most actively today: 16 GB cards already exist, 32 GB options are on the way, and the theoretical limit, according to Sony, is 2 TB.

In December 2006, Sony, together with SanDisk, announced a new modification of its flash memory cards - Memory Stick PRO-HG Duo. Its main difference from other options is its higher operating speed. In addition to the 4-bit communication interface, an 8-bit one has been added. And the controller frequency has increased from 40 to 60 MHz. As a result, the theoretical speed limit increased to 480 Mbit/s or 60 MB/s.

Well, following the latest fashion, the Memory Stick Micro format (also called M2) appeared in February 2006, with dimensions of 15x12.5x1.2 mm - slightly larger than microSD. Capacities range from 128 MB to 16 GB, with a theoretical maximum of 32 GB. Through an adapter, an M2 card can be inserted into a Memory Stick PRO slot, but if its capacity exceeds 4 GB, recognition problems may arise.

Quite a tangle, but if you look closely it is not really that complicated: Memory Stick is the original, not particularly compact format; Memory Stick PRO is the higher-capacity, faster variant; Memory Stick (PRO) Duo is the smaller version of the cards; Memory Stick PRO-HG Duo is the accelerated version of Memory Stick PRO Duo; and Memory Stick Micro (M2) is the smallest Memory Stick. Now we can move on to the last card standard - xD.

xD-Picture Card

Olympus and Fujifilm felt that the flash card formats existing in the early years of this century did not match their idea of the ideal data storage for cameras. How else to explain the development of their own xD-Picture Card standard?

The name of the format suggests it was created for storing images, yet Olympus builds digital voice recorders around it and Fujifilm builds MP3 players. Still, there are far fewer such devices than cameras with xD support. And even if you add up the camera sales of Fujifilm and Olympus, they come nowhere near the figures of the market leaders, Canon and Nikon - who quietly use CompactFlash in mid-range and high-end SLRs, while Secure Digital has taken root in everything else. Since xD cards are not very widespread, they lag behind the most popular formats in development and, on top of that, cost more - roughly 2-3 times more for the same capacity.

Obviously, the main focus of the developers of the xD format (by the way, Toshiba and Samsung are producing cards based on it) was to reduce the size of the memory card. Its dimensions are as follows - 20x25x1.78 mm. About the same as two Memory Stick Micros.

The capacity of the very first xD cards, presented in July 2002, ranged from 16 to 512 MB. In February 2005 the first update appeared, raising the theoretical maximum to 8 GB; the new standard was called xD Type M. The capacity increase came from switching to MLC memory, which at the same time turned out to be slower. In practice Type M cards have reached 2 GB, and so far neither Type M nor the newer variants have gone beyond that.

To solve the speed problem, xD Type H was introduced in November 2005. It went back to SLC memory, but because of high production costs it was decided to discontinue it in 2008. It was replaced in April 2008 by Type M+, whose cards are roughly 1.5 times faster than Type M.

Backward compatibility between xD versions only works one way: the newest devices easily recognize older cards, but older devices will not necessarily recognize the new ones. The situation is roughly the same as with the other standards.

As for speed, xD does not shine there either, just as with capacity. Today the average Type M+ card reads at 6.00 MB/s (40x) and writes at 3.75 MB/s (25x).

All in all, the xD-Picture Card format is more expensive at retail than SD and CF. The cards are quite compact, but their capacity no longer meets modern requirements, and the same goes for speed. For shooting 640x480 video at 30 frames per second, Type M+ is still sufficient, but for today's SLR cameras, which produce 12-24 MP frames and 720p or 1080p video, it is clearly not enough - a 200-300x card would not hurt at all. So we see little point in continuing to support and develop xD, and we would not be surprised if it were quietly shut down and the next generation of cameras moved to SD and/or CF.

SSD

The abbreviation SSD began to appear in news feeds and article headlines relatively recently - a couple of years ago. The reason is that the technology only started to spread once flash memory came to be used for data storage more and more often, and those same headlines (and articles) spoke of the imminent rapid growth of this market, promising along the way that HDDs would be displaced - at least from the laptop and netbook segment.

The most interesting thing is that an SSD is not necessarily a flash-memory drive. SSD stands for Solid State Drive, so what matters here is the principle rather than the specific memory type: data is stored in "solid-state" memory in which nothing spins, rotates or jumps. So the SSD is not a couple of years old at all but, formally, about fifty. The technology went by other names back then, but the principle has remained the same.

Today two types of SSDs are relevant: those based on volatile memory and those based on non-volatile memory. The first use SRAM or DRAM and are also called RAM drives. From time to time manufacturers announce such SSDs as ultra-fast storage media, and some even let you expand the capacity yourself: the board simply carries slots for ordinary memory modules (DDR, DDR2 or, in the most recent versions, DDR3).

Well, non-volatile memory is, of course, flash. It has been possible to create SSDs based on it for a long time, but the volumes of such drives were far from the capabilities of hard drives, and the cost was much higher. And the speed was not great. But today these shortcomings are gradually being eliminated.

The first generation of SSDs offered capacities from 16 to 64 GB, and such "flash drives" cost hundreds and even thousands of dollars. That was about two years ago. Today 64-512 GB options are available at prices from $200 to $1,500. It is still a long way from hard drives, but much better than before, and a 1 TB SSD in the 2.5-inch hard drive format is already on the way. Recall that mobile HDDs have not yet exceeded 500 GB, and desktop ones have only just reached the 2 TB mark. So SSDs are moving forward by leaps and bounds.

As for the speed of work, it is also constantly growing. The first generation of SSDs lagged somewhat behind mobile hard drives, but modern drives have already surpassed them. Suffice it to recall the Intel X25-M SSD introduced last year, which has a read speed of 250 MB/s and a write speed of 70 MB/s. And it doesn’t cost as much as a flight to the ISS - about $350 with a capacity of 80 GB.

Of course, there are especially fast models such as the Fusion-IO drives with 800/694 MB/s read/write or the PhotoFast G-Monster PCIe SSD with 1000/1000 MB/s, but they are priced like a small jet. And naturally, for data exchange they use not Serial ATA but PCI Express x8 - only that kind of link can provide the required bandwidth. Incidentally, PCI Express x1 is actively used to connect SSDs in netbooks: their storage is built exactly in this format, as a small PCI-E x1 card.

Such high speeds in SSDs are achieved by reading data from several chips in parallel. The Intel X25-M mentioned above, for example, works on the principle of a RAID level 0 array: one piece of data goes to the first chip, the next to the second, and so on. It is extremely difficult to organize a similar mechanism in an ordinary USB flash drive or memory card, since they almost always carry just one flash memory chip.
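
To picture that "RAID 0 inside a single drive", here is a toy sketch of the idea (by no means Intel's actual controller logic): incoming data is dealt out round-robin across several chips, and reading them back in the same order restores it. Real controllers interleave whole pages rather than individual bits, which is what the sketch assumes.

```python
def stripe(data: bytes, num_chips: int, page: int = 4096):
    """Deal the data out round-robin into per-chip write queues (RAID-0 style)."""
    queues = [[] for _ in range(num_chips)]
    for i in range(0, len(data), page):
        queues[(i // page) % num_chips].append(data[i:i + page])
    return queues

def unstripe(queues):
    """Read the chips back round-robin and reassemble the original byte stream."""
    out, i = [], 0
    while any(queues):                 # some queue still holds data
        q = queues[i % len(queues)]
        if q:
            out.append(q.pop(0))
        i += 1
    return b"".join(out)

data = bytes(range(256)) * 64          # 16 KB of test data
chips = stripe(data, num_chips=4)      # four chips get one 4 KB page each
assert unstripe(chips) == data         # parallel reads, same data back
```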

To increase capacity and reduce cost, MLC memory is often used in SSDs (including the X25-M), while more expensive models get SLC chips. But whereas a USB flash drive or SD card is written to relatively rarely, an SSD is written to continuously while the system runs - and in most cases you do not even notice it. Modern programs constantly keep various logs; the operating system moves rarely used data to the swap file to free up RAM; even a simple file access requires the access time to be recorded.

So, in any case, you have to install more durable chips in the SSD. You also have to worry about algorithms for calculating the wear level and redistributing data - they must be more advanced than those of conventional flash drives. SSDs even have an additional volatile cache chip, just like a regular hard drive. The cache contains block address data and wear level data. When turned off, the latter are saved to flash memory.

In any case, for now, flash-based SSD technology continues to develop rapidly. It offers several undeniable advantages over HDD:

  • significantly shorter data access time;
  • constant data reading speed;
  • zero noise level;
  • less energy consumption.

At the moment, all that remains is to raise the number of rewrite cycles to the point where it no longer needs to be thought about; capacity will keep growing regardless, and may well catch up with and even overtake hard drives within the next 2-3 years. As for price, it falls on its own when a technology is promising, actively promoted and selling in ever greater volumes. Whether SSDs will manage to displace HDDs from desktop computers we do not know, but in mobile devices they are already well on their way.

Future

And so we come to the end. The conclusion from all of the above is that flash memory will keep spreading and improving. Whether it can replace hard drives is not yet clear, but it has what it takes. There is one more catch, though - the file system.

Modern file systems are optimized for hard drives, and an HDD is structured quite differently from an SSD. To begin with, data on a hard drive is accessed through LBA addressing: from such an address the drive can work out on which platter, track and sector the requested information lies. But flash has no platters, tracks or sectors - it has blocks divided into pages. Today the problem is solved by translating addresses from one scheme to the other, although it would be far more convenient if everything were addressed natively.

Another peculiarity of flash memory is that data can only be written into previously erased blocks, and erasing takes noticeable time. It would therefore make sense to erase completely unused blocks during idle periods.

Modern disk file systems are also optimized to minimize access time: they try to lay data out so it can be found on the platters as quickly as possible. For flash memory this is simply irrelevant - every block is reached equally fast. And it would not hurt if the file system itself helped track the wear level of the flash chips.
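
To tie the last three points together - address translation, erase-before-write and wear tracking - here is a deliberately tiny flash translation layer sketch. It is purely illustrative (no real controller or file system works this simply), but it shows why a logical address ends up as a (block, page) pair, why updates go to a fresh page instead of overwriting, and where an erase counter for wear leveling would live.

```python
PAGES_PER_BLOCK = 64

class TinyFTL:
    """Toy flash translation layer: LBA -> (block, page), erase-before-write, wear counters."""

    def __init__(self, num_blocks):
        self.flash = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]  # None = erased page
        self.erase_count = [0] * num_blocks                  # wear level of each block
        self.mapping = {}                                    # logical address -> (block, page)
        self.free = [(b, p) for b in range(num_blocks) for p in range(PAGES_PER_BLOCK)]

    def write(self, lba, data):
        # A flash page cannot be overwritten in place, so every update
        # goes to a fresh erased page; the old page becomes garbage.
        block, page = self.free.pop(0)
        self.flash[block][page] = data
        self.mapping[lba] = (block, page)

    def read(self, lba):
        block, page = self.mapping[lba]
        return self.flash[block][page]

    def erase_block(self, block):
        # Erasing works only on whole blocks, is slow, and wears the block out,
        # hence the counter. (In this toy, call it only on blocks full of garbage.)
        self.flash[block] = [None] * PAGES_PER_BLOCK
        self.erase_count[block] += 1
        self.free.extend((block, p) for p in range(PAGES_PER_BLOCK))

ftl = TinyFTL(num_blocks=4)
ftl.write(0, b"hello")
ftl.write(0, b"hello, updated")   # lands in a new page; no in-place overwrite
print(ftl.read(0))                # b'hello, updated'
```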

So the task for the near future is new file systems optimized for flash memory. They do exist already, but modern operating systems support them poorly. Curiously, one of the first was FFS2 from Microsoft, released back in the early 90s.

Linux keeps up with progress: the JFFS, JFFS2, YAFFS, LogFS and UBIFS file systems were created for it. Sun has also distinguished itself with ZFS, which is optimized not only for hard drives but also for flash drives - both as primary storage and as a cache.

Nevertheless, the most popular file systems for flash drives (not counting SSDs) remain FAT and FAT32. They are simply the most convenient: supported by every operating system and requiring no extra drivers. But they are no longer quite enough - the 4 GB limit on maximum file size, for example, is already becoming unacceptable.

However, Microsoft has a replacement - exFAT, formerly known as FAT64. As we already wrote, it was chosen as the main FS for SDXC cards. In addition to being optimized for flash memory, it supports files up to 16 exabytes (16.7 million terabytes) in size, and more than 65,536 files can be stored in one folder.

exFAT is supported today by Windows Mobile 6.0 and higher, Windows XP SP2 and higher, Windows Vista SP1, Windows Server 2008 and Windows 7 starting from build 6801. Note that under Windows Vista an exFAT-formatted flash drive cannot be used as a ReadyBoost cache; that support appears in Windows 7. As for other operating systems, a free kernel module is available for Linux that provides read-only access to exFAT.

So the most promising file systems for flash drives today look to be ZFS and exFAT. Both are still poorly spread, although the latter has the better chance of becoming popular: it has already been chosen as the main file system for the latest generation of SD cards, and all the most recent versions of Windows "know" it.

Beyond that, we simply wait for flash drive capacities to keep growing and prices to keep falling. It is a very good technology, and we wish it nothing but success.

New Year is a pleasant, bright holiday on which we all sum up the past year, look to the future with hope and give gifts. In this spirit, I would like to thank all Habr residents for their support, help and the interest shown in my previous articles. If you had not supported the first one back then, the following ones (five articles already) would never have appeared. Thank you! And, of course, I want to give a gift in the form of a popular-science article about how analytical equipment that looks rather intimidating at first glance can be used in a fun, interesting and useful way (both personally and socially). Today, on New Year's Eve, on the festive operating table are a USB flash drive from A-Data and a SO-DIMM SDRAM module from Samsung.

Theoretical part

I will try to be as brief as possible so that we all have time to prepare plenty of Olivier salad for the festive table, so some of the material is given as links - read it at your leisure if you like...
What kind of memory is there?
At the moment there are many ways of storing information. Some require a constant power supply (RAM), some are permanently "sewn" into the control chips of the equipment around us (ROM), and some combine qualities of both (hybrid). Flash belongs to the last group: it is nominally non-volatile memory, but the laws of physics are hard to cancel, and from time to time the information on flash drives still has to be rewritten.

The one thing that unites all these kinds of memory is a more or less common operating principle: there is some two- or three-dimensional matrix that is filled with 0s and 1s and from which those values can later be read back or replaced. In essence it is a direct analogue of their predecessor - ferrite core memory.

What is flash memory and what types does it come in (NOR and NAND)?
Let's start with flash memory. Once upon a time the well-known ixbt wrote quite a bit about what Flash is and what its two main varieties are. In particular, there are NOR (logical NOT-OR) and NAND (logical NOT-AND) flash memory, which differ somewhat in organization (for example, NOR is two-dimensional while NAND can be three-dimensional) but share one common element - the floating gate transistor.


Schematic representation of a floating gate transistor.

So how does this engineering marvel work? The full description involves a fair amount of physics, but in short: between the control gate and the channel through which current flows from source to drain we place that same floating gate, surrounded by a thin layer of dielectric. When current flows through such a "modified" field-effect transistor, some high-energy electrons tunnel through the dielectric and end up inside the floating gate. While tunneling and wandering around inside the gate they lose part of their energy and practically cannot get back out.

NB: "practically" is the key word, because without rewriting - without refreshing the cells at least once every few years - Flash is "reset to zero" just as RAM is when the computer is switched off.

Here again we have a two-dimensional array that needs to be filled with 0s and 1s. Since accumulating charge on a floating gate takes quite a long time, RAM uses a different solution: the memory cell consists of a capacitor and an ordinary field-effect transistor. The capacitor itself is physically a primitive device, yet it is implemented in hardware in a rather non-trivial way:


RAM cell design.

Again, ixbt has a good article dedicated to DRAM and SDRAM memory. It is not exactly fresh, but the fundamentals are described very well.

The only question that torments me is: can DRAM have a multi-level cell, like flash? It seems like yes, but still...

Practical part

Flash
Those who have been using flash drives for quite some time have probably already seen a “bare” drive, without a case. But I will still briefly mention the main parts of a USB flash drive:


The main elements of a USB Flash drive: 1. USB connector, 2. controller, 3. PCB (multilayer printed circuit board), 4. NAND memory chip, 5. quartz reference-frequency oscillator, 6. LED indicator (many flash drives no longer have one), 7. write-protection switch (likewise missing on many flash drives), 8. space for an additional memory chip.

Let's go from the simple to the complex. First, the crystal oscillator. To my deep regret, the quartz plate itself was lost during polishing, so we can only admire the package.


Crystal oscillator housing

Along the way I happened to see what the reinforcing fibers inside the PCB look like, as well as the filler spheres of which the board largely consists. Note that the fibers are laid with a twist - this is clearly visible in the top image:


Reinforcing fibers inside the PCB, which make up the bulk of the board (red arrows indicate fibers perpendicular to the cut)

And here is the first important part of the flash drive - the controller:


Controller. The top image was obtained by combining several SEM micrographs

To be honest, I did not quite understand the idea behind the engineers placing some additional conductors inside the chip itself. Perhaps it is simply easier and cheaper to do it that way from a technological point of view.

After processing this picture, I shouted "Yayyyyyyyyyyyyyyyyyyyyyyy!" and ran around the room. So, we present to your attention the 500 nm process technology in all its glory, with perfectly drawn boundaries of drain, source and control gate - and even the contacts have survived relatively intact:


"Ide!" microelectronics - 500 nm controller technology with beautifully drawn individual drains (Drain), sources (Source) and control gates (Gate)

Now let's move on to dessert - the memory chips. We begin with the contacts that literally feed this memory. Besides the main one (the "fattest" contact in the picture) there are also many small ones. By the way, even the "fat" one is less than two human hair diameters across, so everything in this world is relative:


SEM images of the contacts powering the memory chip

As for the memory itself, success awaits us here too: we managed to photograph individual blocks, whose boundaries are indicated by arrows. Look at the image at maximum magnification and strain your eyes - the contrast is genuinely hard to make out, but it is there (for clarity I have marked one cell with lines):


Memory cells 1. Block boundaries are marked with arrows. Lines indicate individual cells

At first it looked to me like an image artifact, but after processing all the photos at home I realized that these are either control gates stretched along the vertical axis in an SLC cell, or several cells packed together in an MLC one. Although I mentioned MLC above, this remains an open question. For reference, the "thickness" of a cell (i.e. the distance between the two light dots in the bottom image) is about 60 nm.

So as not to be unsubstantiated, here are similar photos from the other half of the flash drive - the picture is much the same:


Memory cells 2. Block boundaries are highlighted with arrows. Lines indicate individual cells

Of course, the chip itself is not just a set of such memory cells; there are some other structures inside it, the identity of which I could not determine:


Other structures inside NAND memory chips

DRAM
Of course, I did not cut up the entire Samsung SO-DIMM board; I only "detached" one of the memory chips with a hot-air gun. It is worth noting that one of the tips suggested after the first publication came in handy here - sawing at an angle. Keep this in mind when studying the images below, especially since cutting at 45 degrees also made it possible to obtain something like "tomographic" sections of the capacitors.

However, according to tradition, let's start with contacts. It was nice to see what a “chipped” BGA looks like and what the soldering itself is like:


"Chipped" BGA solders

And now it is time to shout "Ide!" for the second time, because we managed to see the individual capacitors - the concentric circles in the image, marked with arrows. They are what stores our data, as charge on their plates, while the computer is running. Judging by the photographs, such a capacitor is about 300 nm wide and about 100 nm thick.

Due to the fact that the chip is cut at an angle, some capacitors are cut neatly in the middle, while others have only the “sides” cut off:


DRAM memory at its finest

If anyone doubts that these structures are capacitors, then you can look at a more “professional” photo (though without a scale mark).

The only point that confused me is that the capacitors are arranged in two rows (lower left photo), which would seem to mean two bits of information per cell. As mentioned above, information on multi-bit recording does exist, but to what extent the technology is applicable and actually used in modern industry remains an open question for me.

Of course, in addition to the memory cells themselves, there are also some auxiliary structures inside the module, the purpose of which I can only guess:


Other structures inside a DRAM memory chip

Afterword

In addition to the links scattered throughout the text, worth a look, in my opinion, are this review (even though it dates from 1997), the site hosting it (with a photo gallery, chip art, patents and much, much more) and this company, which does reverse engineering professionally.

Unfortunately, I could not find many videos about Flash and RAM production, so we will have to make do with just the assembly of USB Flash drives:

P.S.: Once again, Happy New Year of the Black Water Dragon everyone!!!
Funny how it turned out: I wanted the article about Flash to be one of the first, but fate decreed otherwise. Fingers crossed, the next two articles at least (about biological objects and displays) should appear in early 2012. In the meantime, as a teaser, here is the carbon tape:


Carbon tape on which the samples under study were attached. I think regular tape looks similar.
