The personal computer: its appearance and stages of evolution. A brief history of the creation and development of computers


It is simply impossible to imagine modern life without a computer. Yet just 10-12 years ago, not everyone could afford this "miracle" of modern electronics. We are going to trace the evolutionary development of personal computers and identify the key stages in the PC's transition from the category of "for those who can afford it" to the category of "publicly available". The history of computer technology records only eight names of people who made the greatest contributions at the main evolutionary stages of PC production. Over the course of several decades, electronics not only overtook but largely replaced mechanics. Not just evolutionary but revolutionary steps were taken, and in less than a century society became thoroughly "addicted" to computers.

Instead of a preface

Perhaps it is simply impossible to imagine modern life without a computer today. Yet just ten years ago, not everyone could afford the "miracle" of modern electronics. I remember sitting in the library over books, copying what I needed into notes. And those terrible handwritten abstracts, and the callus on the middle finger of my right hand...

Unlike the German computer, which was based on relays, most of the elements in ENIAC were vacuum tubes. It was a real monster: it cost almost 500 thousand dollars and occupied an entire room. The device weighed 27 tons and comprised about 17.5 thousand vacuum tubes of various types, 7.2 thousand silicon diodes, 1.5 thousand relays, 70 thousand resistors and 10 thousand capacitors. The machine required a 174 kW power supply. Its computing power was 357 multiplication operations or 5 thousand addition operations per second, and its calculations were based on the decimal number system. The computer easily worked with numbers 20 digits long.

Despite its computational superiority, ENIAC had plenty of shortcomings. For example, if even one tube burned out, the entire computer failed. Programming the machine was also a lengthy process: solving a problem took only a few minutes, while entering the data could take several days.

ENIAC never became widespread; the device was produced in a single copy and saw no further use. But some of the principles underlying the design of ENIAC were later reflected in more advanced models of electronic computing technology.

"Made in USSR"

In 1951, the Small Electronic Calculating Machine (MESM) was created on the territory of the Ukrainian SSR. It contained 6 thousand vacuum tubes and barely fit in the left wing of the dormitory building of the former monastic village of Feofaniya (10 km from Kyiv). MESM was created in the computer technology laboratory of the Institute of Electrical Engineering of the Academy of Sciences of the Ukrainian SSR under the leadership of academician S.A. Lebedev.

Lebedev's thoughts about creating a high-powered computing machine first appeared back in the 1930s, when the young scientist was researching the stability of power systems. But the war that broke out in the 1940s forced all such endeavors to be set aside for a time.

In 1948, Lebedev and a group of engineers moved to Feofaniya (which housed one of the departments of the Institute of Electrical Engineering of the Academy of Sciences of the Ukrainian SSR) and began three years of work on a secret project to create the first domestic computer.

"The machine occupied a room of 60 square meters. MESM worked at a speed unprecedented at that time - 3 thousand operations per minute (modern computers perform millions of operations per second) - and could carry out subtraction, addition, multiplication, division, shifts, comparison by sign, comparison by absolute value, transfer of control, transfer of numbers from a magnetic drum, and addition of commands. The total power of its vacuum tubes was 25 kW."

After a series of tests, S.A. Lebedev proved that his machine was "smarter than a person." This was followed by a series of public demonstrations and the conclusion of an expert commission putting MESM into operation (December 1951).

MESM was practically the only computer in the country solving various scientific and technical problems in the fields of thermonuclear processes, space flight and rocketry, long-distance power lines, mechanics, and statistical quality control. One of the most important problems solved on MESM was calculating the stability of parallel operation of the units of the Kuibyshev hydroelectric power station, described by a system of nonlinear second-order differential equations. It was necessary to determine the conditions under which the maximum possible power could be transmitted to Moscow without compromising the stability of the system. In connection with the rapid development of jet and rocket technology, the machine was also tasked with external ballistics calculations of varying complexity, ranging from relatively simple multivariate calculations of trajectories within the earth's atmosphere with small differences in altitude to very complex ones associated with the flight of objects beyond the earth's atmosphere.

MESM was used in many research projects right up to 1957, after which the machine was dismantled. The equipment was transferred to the Kiev Polytechnic Institute for laboratory work.

The first computers with data storage capabilities

As mentioned earlier, some of the first electronic computing systems became prototypes for more advanced computerized devices. The main task facing the developers of new computers was to endow machines with the ability to store processed and received data in electronic memory.

One of these machines was called "The Manchester Baby". In 1948, at the University of Manchester (UK), an electronic computing device capable of storing data in internal random access memory was developed, and a year later it was put into operation. The Manchester Mark 1 was an improved machine implementing the von Neumann stored-program design.

The device could not only read information from punched tape but could also input and output data from a magnetic drum directly while a program was running. The "memory" system was built from Williams cathode-ray tubes (a development patented in 1946).

The "Manchester Baby" had completely "non-childish" dimensions: 17 m in length. The system consisted of 75 thousand vacuum tubes, 3 thousand mechanical relays, 4 Williams tubes (a computer memory of 96 40-bit words), a magnetic drum (1024-4096 40-bit words), a processor with 30 instructions, and a power supply system. The machine required from 3 to 12 seconds for the simplest mathematical operations.

In 1951, the "Baby" was retired, and its place was taken by a full-fledged commercial computer, the Ferranti Mark 1.

Around the same period, in Cambridge (UK), a group of engineers led by Maurice Wilkes created a computer with a program stored in memory: EDSAC (Electronic Delay Storage Automatic Computer). This device became the first widely used electronic computer with internal memory.

The computer used almost 3 thousand vacuum tubes. Its main memory comprised 1024 memory cells: 32 mercury ultrasonic delay lines, each storing 32 words of 17 bits, including the sign bit. Additional delay lines could be connected, making it possible to work with words of 35 binary digits. Calculations were carried out in the binary system at a speed of 100 to 15 thousand operations per second. Power consumption was 12 kW; the occupied floor area, 20 square meters.

In 1953, under the leadership of Wilkes and Renwick, work began on the second model, the EDSAC-2. Ferrite-core elements with a total capacity of 1024 words were now used as RAM (random access memory). The new machine gained ROM (read-only memory) - first on diodes, then on a ferrite matrix. But the main innovation was microprogram control: some commands could be composed of sets of micro-operations, with the microprograms recorded in permanent memory. This computer was used until 1965.
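The idea of microprogram control can be sketched in a few lines: each machine instruction is expanded into a fixed sequence of micro-operations held in read-only storage. The instruction names and micro-operations below are invented for illustration; this is a toy model of the principle, not EDSAC-2's actual control scheme.

```python
# Toy model of microprogram control: each machine instruction is
# expanded into micro-operations stored in a read-only table.
MICROCODE_ROM = {
    "ADD": ["fetch_a", "fetch_b", "alu_add", "store"],
    "SUB": ["fetch_a", "fetch_b", "alu_sub", "store"],
}

def execute(instruction):
    """Run an instruction by stepping through its microprogram."""
    trace = []
    for micro_op in MICROCODE_ROM[instruction]:
        trace.append(micro_op)   # one control step per micro-operation
    return trace

print(execute("ADD"))  # ['fetch_a', 'fetch_b', 'alu_add', 'store']
```

The appeal of the scheme is visible even in the toy: to change or add an instruction, one edits the table in permanent memory rather than rewiring the control logic.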

"Transistor" story

The beginning of the era of computers "for everyday life" is associated with the same IBM. After a change of management in 1956, the company changed its production vector. In 1957, IBM introduced the FORTRAN language ("FORmula TRANslation"), used for scientific computing. In 1959, the first transistor-based IBM computers appeared, reaching such a level of reliability and speed that the military began using them in air-defense early-warning systems. In 1964, the entire IBM System/360 family was introduced. These were the first deliberately designed family of computers, the first general-purpose computers, and the first computers with byte-addressable memory (and the list of firsts does not end there). IBM System z computers compatible with System/360 are still being produced - an absolute record for compatibility.

The evolutionary development of computer technology included: reduction in size, transition to more advanced components, growth in computing power, increases in RAM and permanent storage, the possibility of widespread use in various industries, and, finally, the personalization of the computer.

In the 1950s and 60s, transistor computers replaced tube computers. Semiconductor diodes and transistors served as the main elements, with magnetic cores and magnetic drums (distant ancestors of modern hard drives) as memory devices. The second difference of these computers: it became possible to program in algorithmic languages. The first high-level languages (Fortran, Algol, COBOL) were developed. These two important improvements made writing computer programs much easier and faster. Programming, while remaining a science, became more applied. All this led to a reduction in size and a significant reduction in the cost of computers, which then began to be built for sale for the first time.
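The gain from "formula translation" is easy to appreciate: a calculation that once took pages of machine code could be written almost exactly as the formula reads. A sketch in Python, standing in here for an early high-level language (the function name is ours):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, written the way the formula reads."""
    d = math.sqrt(b * b - 4 * a * c)   # discriminant, assumed non-negative
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))  # x^2 - 3x + 2 = 0 has roots 2.0 and 1.0
```

The compiler, the first kind of system program, did the work of turning such formulas back into machine instructions.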

The computing capacity of these machines reached 30 thousand operations per second, with 32 KB of RAM. Their big advantages were reduced dimensions and reduced energy consumption. Programming transistor computers became the basis for the emergence of the first "operating systems." Working with the machine became easier, within reach not only of scientists but also of less "advanced" users. Computer equipment appeared in factories and offices (mainly in accounting departments).

Among the transistor electronic computing devices of this period, the most famous are:

Late 50s. The most powerful computer in Europe was the Soviet M-20, with an average speed of 20 thousand 3-address commands per second over 45-bit floating-point numbers; its RAM was implemented on ferrite cores and had a capacity of 4096 words.

1954-1957. NCR (USA) produces the first transistor computer – NCR-304;

1955. The Bell Telephone Laboratories transistor computer, TRADIC, contains 800 individual transistor elements;

1958. NEC Corporation develops the first Japanese computers, the NEC-1101 and NEC-1102;

Note that these are not the only representatives of the “transistor” history in the evolution of computers. During this period, developments were carried out at the Massachusetts Institute of Technology (USA), in many scientific and technical laboratories throughout the Soviet Union, and in leading European research and technological higher schools.

Microchips and mass production

It took developers only a few years to produce a computer with new components. Just as transistors replaced vacuum tubes (which in turn had replaced mechanical relays), microcircuits now occupied their evolutionary niche. The end of the 1960s brought the following metamorphoses: integrated circuits were developed, combining chains of transistors on a single semiconductor; semiconductor memory appeared, becoming the main element of computer RAM; the simultaneous execution of several tasks was mastered (the principle of dialogue mode); the central processor could now work in parallel with, and control, various peripheral devices; and remote access to computer data became possible.

It was during this period that the "famous" family of IBM computers appeared. The production of electronic computing equipment moved onto the conveyor belt, and mass production of computerized equipment was established.

Of course, there is more to say about the IBM System/360 (S/360). In 1964, the company released a series of computers of different sizes and functionality. Depending on requirements, both small machines with low performance and large machines with higher performance could equally be used in production. All the machines ran similar software, so replacing a low-power device with a more advanced one did not require rewriting the main program. To ensure compatibility, IBM pioneered the use of microcode technology, applied in all but the highest-end models of the series. This series became the first in which a clear distinction was drawn between a computer's architecture and its implementation.

The S/360 cost the company $5 billion (a colossal expense by 1964 standards), though even this did not make it the most expensive program of its time; that primacy remained with R&D projects. The 360 was succeeded by the 370, 390 and System z, all retaining the same computer architecture. Based on the S/360, other companies produced their own model series, for example the 470 family from Amdahl, Hitachi mainframes, the UNIVAC 9200/9300/9400, the Soviet ES computer series, etc.

Thanks to the widespread use of the IBM/360, the 8-bit character and the 8-bit byte as the minimum addressable memory cell, both invented for it, became the standard for all computer equipment. The IBM/360 was also the first 32-bit computer system. The older models of the IBM/360 family and the IBM/370 family that followed were among the first computers with virtual memory and the first production computers to support virtual machines. The IBM/360 family was also the first to use microcode to implement individual processor instructions.

But these systems had one drawback: the low quality of components. This was especially pronounced in Soviet electronic computers, which remained large and lagged behind Western developments in functionality. To compensate, domestic designers had to build special processors for specific tasks (which excluded the possibility of multiprogramming).

The first minicomputers (prototypes of modern personal computers) also appeared during this period. The most important thing that happened to the PC in the late 60s and early 70s was the transition from a large number of discrete elements to a single part combining all the necessary components. The microprocessor is the heart of any computer, and society owes its appearance to Intel: it was Intel that produced the first microchip, a truly revolutionary and evolutionary leap for computer technology.

Along with the rapid improvement of hardware, electronic computing systems began to be combined into local and global computer networks (the prototype of the Internet). Programming languages were refined, and more advanced operating systems were written.

Supercomputers and personal portable electronics

The seventies and eighties became the main period of mass production of computers for general consumption, though there were no dramatic innovations during this time. Electronic computing technology divided into two camps: supermachines with incredible computing capabilities, and more personalized systems. The element base of these systems was the large-scale integrated circuit (LSI), with more than a thousand elements placed on a single chip. The power of such computers reached tens of millions of operations per second, and RAM grew to several hundred megabytes.

Computerized systems used in production remained complex, but market leadership shifted to personal computers. It was during this period that the term "electronic computing machine" gave way to the now-familiar "computer".

The era of personal computers begins with the Apple, IBM PC (XT, AT, PS/2), Iskra, Elektronika, ES-1840, ES-1841 and others. These systems were inferior in functionality to supercomputers, but thanks to its consumer orientation the PC became firmly established in the market: the device became generally available, and a number of innovations appeared that simplified working with it (the graphical user interface, new peripheral devices, global networks).

After the release of the Intel 4004 and Intel 8008 microprocessors, the technology was picked up by other companies: microprocessors were produced both based on the Intel designs and as companies' own modifications.

This is where the young Apple Computer Company of Steve Jobs and Steve Wozniak appeared on the scene with its first product, the Apple-1 computer. Few were interested in the development; there was only one order for a batch of Apple-1 computers: Paul Terrell, owner of the Byte Shop computer store, ordered 50 units. But on one condition: these had to be not bare computer boards but fully finished machines. Overcoming difficulties in financing production, Apple Computer managed to fulfill its obligations on time, and the Apple-1 appeared on the shelves of Terrell's store. True, the machines arrived without "accessories," essentially as assembled boards, but Terrell accepted the delivery and paid the promised $500 per unit.

Note that most PCs of that time were supplied as separate components, the assembly of which was carried out by distributors or end customers.

So, in 1976, the Apple I went on sale for $666.66 apiece. The Apple I was fully assembled on a single circuit board carrying about 30 chips, which is why many consider it the first full-fledged PC. But to get a working computer, users had to add a case, power supply, keyboard and monitor. An additional board, released later at a cost of $75, provided a connection to a cassette recorder for data storage.

Many experts do not consider the Apple I the first personal electronic device, giving that title instead to the Altair 8800 microcomputer, created by Ed Roberts and distributed through catalogs in 1974-1975. In fact, though, that device did not meet all user requirements.

The company continued production, and an updated model, the Apple II, went on sale. This series of PCs was equipped with a 1 MHz MOS Technology 6502 processor, 4 KB of RAM (expandable to 48 KB), 4 KB of ROM containing a monitor program and an Integer BASIC interpreter, and an interface for connecting a cassette recorder. The Apple II became the best-selling device on the electronics market (more than 5 million units were sold over its production run). The Apple II looked more like an office tool than a piece of electronic equipment: a full-fledged computer suited equally to the home, the manager's desk, or the school classroom.

A composite video output in NTSC format was used to connect a monitor (or TV). Computers sold in Europe used an additional PAL encoder located on an expansion card. Sound was provided by a speaker controlled through a register in memory (1 bit was used). The computer had 8 expansion connectors, one of which allowed additional RAM to be connected, while the rest provided I/O (serial and parallel ports, external device controllers). The initial retail price was $1,298-$2,638 depending on the model configuration.

The Apple II grew into a whole family and retained its leadership in the computer equipment market until the early 90s.

General PC Standard

At the end of 1980, IBM decided to produce its own PC. The supply of microprocessors for the future IBM PC was entrusted to Intel, and the project of Harvard dropout Bill Gates - the PC-DOS operating system - was adopted as the main OS.

The company not only set production rates but also set its own standards for computer manufacturing. Any PC manufacturer could purchase a license from IBM and assemble similar computers, and microprocessor makers could manufacture components for them (in fact, only Apple managed to maintain its own architecture). This is how the IBM PC XT, a model with a hard drive, appeared, followed by the IBM PC AT, built on the 80286 microprocessor.

1985 was marked by the release of high-performance PCs: Intel and Motorola brought out the 80386 and M68020 microprocessors, respectively. Year after year, computer designs improved, with the names IBM and Intel constantly in the air. New microprocessors achieved incredible data-processing power - up to 50 million operations per second. In 1993, Intel released the P5 Pentium with a 64-bit data bus, followed by the Pentium II and III. The Pentium 4 was already equipped with Hyper-Threading (HT) technology, allowing it to process information in two parallel threads.

Computers improved in every respect: energy consumption fell and dimensions shrank, while computing power grew enormously, RAM increased (up to 4 gigabytes), and hard drive capacities came to be measured in terabytes.

Almost all computers produced in the world switched to Microsoft's new "windowed" operating system, Windows, and the MS Office suite of applications. Thus the personal computer standards were defined: the IBM PC architecture and the Windows OS.

As for size, alongside desktop computers, portable electronics went into production: laptops and netbooks, then tablets and smartphones (the phone-computer).

Instead of an afterword

Over the course of several decades, personal computers have evolved from electronic "calculating machines" into everyday equipment. A PC today is not just an electronic computing device but an entire industry of knowledge, entertainment, work, education and other consumer opportunities.

Mikhail Polyukhovich

How could we live without computers, smartphones and other gadgets today? It is even harder to grasp that 50 years ago these technologies could be found only in science fiction books.

We offer a short excursion into history to find out how personal computers developed.

The first computers were created after the end of World War II. They were very large and expensive (costing even more than the latest modern MacBook). Consequently, only employees of serious organizations, banks or leading universities got to play with such toys. The development of home PCs (personal computers) began in the second half of the twentieth century. The first was the PDP-8 minicomputer, released in March 1965 by Digital Equipment Corporation.

It should be noted that when we call the PDP-8 a minicomputer, we mean that it did not take up an entire room. The PDP-8 was no bigger than an ordinary refrigerator, which sounds pretty wild for our time. Its price was $18,500, but this did not stop computer enthusiasts from buying this miracle of technology. The PDP-8 thus became not only the first home PC but also the first commercially successful computer.

The next "breakthrough" came from MITS, which released the Altair 8800 computer in 1975. It is considered one of the "revolutionaries" of home PCs and the first link in the chain of personal computer manufacturing companies.

What was the secret of the Altair 8800? It was compact, capable and inexpensive. For just $439, anyone could purchase the parts for a computer and assemble it with the help of Popular Electronics magazine; for $621 you could get a ready-made model. The Altair 8800 had an Intel 8080 microprocessor clocked at 2 MHz and could handle 8- and 16-bit numbers. Incidentally, Bill Gates started his career thanks to the Altair 8800!

At the same time, two more computer enthusiasts - Steve Jobs and Steve Wozniak - decided to create a company to develop computer equipment. Their truly revolutionary project was the Apple II, which appeared in 1977. Jobs and Wozniak demonstrated what a general-purpose computer should be like. From that time on, the technology could be used not only by enthusiasts and radio amateurs but also by ordinary citizens.

IBM PC 5150

In 1981, IBM joined the craze and released the IBM PC 5150, which may still be found in some government offices.

The computer is considered one of the most successful home PCs in the world; millions were sold. It was built on an Intel 8088 processor and set the standard for the entire industry of IBM-compatible machines.

Apple Macintosh

Apple's next successful product was the Macintosh, which finally defined the shape of the personal computer. The main innovations the product flaunted were a mouse and a fully graphical interface. In essence, it is the granddaddy of all modern iMacs and MacBooks. It was also the first computer to say "hello" to its future users.

The IBM PC Convertible, the world's first laptop, was introduced by IBM in 1986. It had an Intel 80C88 processor and 256 kilobytes of RAM, expandable to 512 kilobytes. The laptop also boasted two disk drives and a modem. The PC sold very poorly: it was heavy, not fast enough, and its LCD screen was difficult to read. Still, the IBM PC Convertible remains the first laptop to go into mass production and to influence the further development of the industry.

A little about the future

Technology never stops developing. Nowadays, most companies are trying to create high-performance computers that do not take up much space. The leader is Apple, whose products over the past ten years have gained enormous popularity in every corner of the Earth.

Large personal computers are beginning to give way to ultra-thin laptops and tablets (although there are still enthusiasts who build and upgrade PCs themselves). According to experts, in 100 years the functions of a laptop or PC will be performed by smart watches, smartphones and "hologramophones," while powerful PCs will be used to process large volumes of information.

In the history of civilization there have been several information revolutions: transformations of social relations due to fundamental changes in information processing and information technology. The consequence of each such transformation was the acquisition of a new quality by human society.

The first revolution was associated with the invention of writing, which led to a gigantic qualitative and quantitative leap: it became possible to transfer knowledge from generation to generation.

The second (mid-16th century) was caused by the invention of printing, which radically changed industrial society, culture, and the organization of activity.

The third (late 19th century) was due to the invention of electricity, thanks to which the telegraph, telephone, and radio appeared, making it possible to quickly transmit and accumulate information in any volume.

The fourth (1970s) is associated with the invention of microprocessor technology and the advent of the personal computer. Computers, computer networks, and data transmission systems (information communications) were created on microprocessors and integrated circuits. This period is characterized by three fundamental innovations:


the transition from mechanical and electrical means of information conversion to electronic ones;

miniaturization of all components, devices, instruments, machines;

creation of software-controlled devices and processes.

First period 1945-1955


It is known that the computer was invented by the English mathematician Charles Babbage in the mid-nineteenth century. His "Analytical Engine" was never actually able to work, because the technology of the time could not produce the precision mechanical parts that computing required. It is also known, of course, that this computer did not have an operating system.


Real progress in the creation of digital computers came after the Second World War. In the mid-40s, the first tube computing devices were created. At that time, the same group of people participated in the design, operation, and programming of a computer. It was more research work in the field of computer technology than the use of computers as a tool for solving practical problems in other applied areas. Programming was carried out exclusively in machine language. There was no talk of operating systems; all the tasks of organizing the computing process were solved manually by each programmer from the control panel. There was no other system software apart from libraries of mathematical and utility routines.


Second period 1955 – 1965


From the mid-50s, a new period in the development of computing technology began, associated with a new technical base: semiconductor elements. Second-generation computers became more reliable; they could now work continuously for so long that they could be entrusted with genuinely important practical tasks. It was during this period that personnel split into distinct roles: programmers, operators, and computer developers.

In these years the first algorithmic languages appeared, and with them the first system programs - compilers. The cost of CPU time rose, requiring a reduction in the overhead between program runs. The first batch processing systems appeared, which simply automated the launch of one program after another and thereby increased processor utilization.

Batch processing systems were the prototype of modern operating systems; they became the first system programs designed to control the computing process.

During the implementation of batch processing systems, a formalized job control language was developed, with which the programmer informed the system and the operator what work he wanted performed on the computer. A collection of several jobs, usually in the form of a deck of punched cards, was called a job package.
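The batch monitor's job can be caricatured in a few lines of Python: jobs queue up (the "deck of punched cards") and run to completion one after another with no operator in the loop. This is an illustrative sketch, not historical code; all names are invented.

```python
from collections import deque

def run_batch(jobs):
    """Run each queued job in order, collecting its result."""
    queue = deque(jobs)          # the job package, in submission order
    results = []
    while queue:
        job = queue.popleft()    # load the next job...
        results.append(job())    # ...and run it to completion
    return results

print(run_batch([lambda: 1 + 1, lambda: 2 * 21]))  # [2, 42]
```

The point of the arrangement was throughput: the processor moves straight from one job to the next instead of idling while an operator mounts the next deck.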


Third period 1965 – 1980

The next important period in the development of computers spans 1965-1980. At this time, the technical base shifted from individual semiconductor elements, such as transistors, to integrated circuits, giving far greater capabilities to the new, third generation of computers.


This period was also characterized by the creation of families of software-compatible machines.

The first family of software-compatible machines built on integrated circuits was the IBM/360 series. Developed in the early 60s, this family significantly outperformed second-generation machines in price/performance. Soon the idea of software-compatible machines became generally accepted.

Software compatibility also required operating system compatibility. Such operating systems would have to work on both large and small computing systems, with many diverse peripherals or few, in commercial settings and in scientific research. Operating systems built to satisfy all these conflicting requirements turned out to be extremely complex monsters. They consisted of many millions of lines of assembly code, written by thousands of programmers, and contained thousands of errors, causing an endless stream of corrections. Each new version of the operating system corrected some errors and introduced others.

However, despite their enormous size and many problems, OS/360 and other similar operating systems for third-generation machines did satisfy most consumer requirements. The most important achievement of this generation of OS was the implementation of multiprogramming: a way of organizing the computing process in which several programs alternately execute on a single processor. While one program performs an I/O operation, the processor does not sit idle, as it did when programs ran sequentially (single-program mode), but executes another program (multiprogram mode). Each program is loaded into its own section of RAM, called a partition.
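The effect is easy to demonstrate with two threads standing in for two partitions (an analogy only; real multiprogramming is an OS mechanism, and the job names here are invented): while "job A" waits on simulated I/O, the processor gets on with "job B".

```python
import threading
import time

log = []

def io_bound_job():
    log.append("job A: start I/O")
    time.sleep(0.1)                    # simulated card-reader / tape wait
    log.append("job A: I/O done")

def cpu_bound_job():
    total = sum(range(1000))           # useful work done during A's wait
    log.append(f"job B: computed {total}")

a = threading.Thread(target=io_bound_job)
b = threading.Thread(target=cpu_bound_job)
a.start()
b.start()
a.join()
b.join()
print(log)  # job B finishes inside job A's I/O wait
```

Run sequentially, the same two jobs would take the compute time plus the full I/O wait; overlapped, the wait costs almost nothing.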


Another innovation was spooling. Spooling at that time was defined as a way of organizing the computing process in which jobs were read from punched cards onto disk at the pace at which they arrived at the computer center, and then, when the current job finished, the next job was loaded from disk into the freed partition.
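At its core, spooling is just a first-in-first-out buffer between a slow input device and the partitions; a minimal sketch (the job names and functions are hypothetical, chosen only to illustrate the queue discipline):

```python
from collections import deque

# The disk acts as a FIFO buffer between the card reader and memory:
# jobs are queued as they arrive, so the reader never blocks the CPU.
disk_queue = deque()

def job_arrives(name):
    # Read from punched cards straight onto disk at the reader's pace.
    disk_queue.append(name)

def partition_freed():
    # When a memory partition frees up, load the next queued job, if any.
    return disk_queue.popleft() if disk_queue else None

job_arrives("payroll")
job_arrives("inventory")
print(partition_freed())  # payroll
print(partition_freed())  # inventory
print(partition_freed())  # None
```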

Alongside the multiprogrammed batch-processing systems, a new type of OS emerged: time-sharing systems. The form of multiprogramming used in time-sharing systems aims to give each individual user the illusion of sole possession of the computer.

Fourth period: 1980 - present

The next period in the evolution of operating systems is associated with the advent of large-scale integrated circuits (LSI). These years saw a sharp increase in the degree of integration and a drop in the cost of microcircuits. The computer became affordable to individuals, and the era of personal computers began. Architecturally, personal computers were no different from minicomputers of the PDP-11 class, but their prices were dramatically different. Whereas a minicomputer let a department of an enterprise or a university have its own computer, the personal computer made this possible for a single person.

Computers came to be widely used by non-specialists, which required the development of "friendly" software and put an end to the closed caste of programmers.

In networked operating systems, users must be aware of other computers and must log into another machine to use its resources, mainly files. Each machine on the network runs its own local operating system, which differs from the OS of a standalone computer by the presence of additional facilities that let the machine work on the network. A network OS has no fundamental differences from the OS of a single-processor computer. It necessarily contains software support for network interface devices (a network adapter driver), as well as tools for remote login to other computers on the network and means of accessing remote files, but these additions do not significantly change the structure of the operating system itself.

Two companies that are now giants played a very important role in the development of computers: Microsoft® and Intel®. The first greatly influenced the evolution of computer software, while the second became famous for the microprocessors it produced.



First generation - computers using vacuum tubes (1946-1956). The starting point of the computer era is usually taken to be February 15, 1946, when scientists at the University of Pennsylvania in the USA commissioned the world's first electronic computer, ENIAC. It used 18 thousand vacuum tubes. The machine occupied 135 m² of floor space, weighed 30 tons and consumed 150 kW of electricity. It was used to solve problems related to the creation of the atomic bomb. And although mechanical and electromechanical machines appeared much earlier, all further progress in computing is associated precisely with electronic computers. In the USSR, academician S.A. Lebedev created the fastest computer in Europe, the BESM, in 1952. The speed of the first machines was several thousand operations per second.

Second generation - transistor computers (1956-1964). The transistor, a semiconductor device, was invented in the USA in 1947-48 by Bardeen, Brattain, and Shockley. Transistor-based computers dramatically shrank in size, weight and power consumption while gaining in performance and reliability. A typical domestic machine (the Minsk and Ural series) contained about 25 thousand transistors. Our best computer, the BESM-6, reached a speed of 1 million op/s.

Third generation - computers on microcircuits with a low degree of integration (1964-1971). The microcircuit was invented in 1958 by J. Kilby in the USA. Microcircuits made it possible to increase the speed and reliability of computers while reducing their dimensions, weight and power consumption. The first computer built on microcircuits, the IBM-360, was released in the USA in 1965, as was the first minicomputer, the PDP-8, the size of a refrigerator. In the USSR, third-generation large computers of the ES series (ES-1022 through ES-1060), analogues of the American IBM-360 and IBM-370, were produced together with the CMEA countries from 1972.

Fourth generation - computers based on microprocessors (1971 - present). A microprocessor is an arithmetic-logic device, most often implemented as a single microcircuit with a high degree of integration. Microprocessors led to a sharp reduction in the size, weight and power consumption of computers and an increase in their performance and reliability. The first microprocessor, the Intel-4004, was released in the USA by Intel in 1971; it was a 4-bit device. In 1972 the 8-bit Intel-8008 followed, and in 1974 the Intel-8080. In 1975 the world's first personal computer, the Altair-8800, built on the Intel-8080, appeared. The era of personal computers had begun.

In 1976, the Apple personal computer, based on a MOS Technology 6502 microprocessor, appeared and was a great commercial success; it laid the foundation for Apple's later line of machines, including the Macintosh series. The first computer from IBM, called the IBM PC, appeared in 1981. It was built on a 16-bit Intel-8088 microprocessor and could address 1 MB of RAM (most machines of the time were limited to 64 KB). It effectively became the standard for personal computers; IBM-compatible machines now make up 90% of all personal computers produced in the world. In 1983 the IBM PC/XT, with a hard drive, was released on the basis of the Intel-8088. In 1982 the 16-bit Intel-80286 processor was produced, which IBM used in 1984 in the IBM PC/AT series; its performance was 3-4 times higher than that of the IBM PC/XT. In 1985 Intel developed the 32-bit Intel-80386 processor.

It contained approximately 275 thousand transistors and could address 4 GB of memory. For the Intel-80286 and Intel-80386 processors, mathematical coprocessors appeared, the Intel-80287 and Intel-80387 respectively, which sped up mathematical and floating-point calculations. The 80486 (1989), Pentium (1993), Pentium Pro (1995), Pentium II (1997) and Pentium III (1999) processors already have a built-in math coprocessor. Many modern personal computers are based on Pentium processors.

Fifth generation (prospective) - computers that use new technologies and a new element base, such as ultra-large-scale integrated circuits and optical and magneto-optical elements, that understand ordinary spoken language, and that are equipped with huge databases. They are also expected to employ elements of artificial intelligence and to recognize visual and audio images. Such projects are being developed in the leading industrialized countries.

The personal computer (PC) has greatly changed humanity's relationship with computing resources. With each new PC model, people shifted more and more functions onto the machine, from simple calculations to accounting and design. That is why malfunctions, failures and downtime of computer equipment have become not just unwelcome annoyances but a real disaster that can lead to direct economic losses and other unacceptable consequences.

The first milestones in the development of personal computers


In the second half of the 20th century, only large companies had computers, not only because of the high price of the equipment but also because of its impressive size. Enterprises that developed and manufactured computer equipment therefore strove to miniaturize their products and reduce their cost. As a result, microminiaturization and the widespread adoption of microcircuits meant that a computer could fit on a desk, and in 1973 Xerox introduced the first personal computer, the Alto. For the first time, programs and files were displayed on the screen in the form of "windows."

In 1975 the first commercial PC, the Altair-8800, built on the Intel 8080 microprocessor, was released. Its RAM was 256 bytes, and the PC was controlled from a special switch panel. For data input and output, an 8-inch floppy drive, purchased separately, was used. The first version of the i8080 microprocessor was manufactured in a 48-pin planar package with a maximum clock frequency of 2 MHz. However, the processor had a serious flaw that caused it to freeze; only a "reset" signal could revive the system. A corrected and improved version of the processor, the 8080A, was released six months later in a DIP-40 package, with the maximum clock frequency raised to 2.5 MHz.

The beginning of the journey of Apple and Intel


In 1976, Steve Jobs and Steve Wozniak assembled a working computer board called the Apple I in Palo Alto. It was housed in a wooden case and did not have a keyboard or screen. The board contained a processor, 8 KB of RAM, and the ability to display information on the screen.

In 1977, Wozniak and Jobs developed the first complete PC, the Apple II, in a plastic case, with an integrated keyboard, and a TV used as a display. That same year, Commodore introduced a PC called the PET.

In June 1978, Intel created the first 16-bit microprocessor, the i8086. Thanks to the segmented memory organization, it could address up to 1024 KB of RAM. The i8086 used an instruction set that is also used in modern processors. With the advent of the i8086 processor, the x86 architecture became famous. The processor clock frequency ranged from 4 to 10 MHz. It should be noted that the 8086 processor gained popularity mainly thanks to the Compaq DeskPro computer.
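The 1024 KB limit follows directly from the 8086's segment:offset scheme, in which a 16-bit segment is shifted left by four bits and added to a 16-bit offset to form a 20-bit physical address. A minimal sketch of the computation:

```python
# 8086 real-mode address formation: segment * 16 + offset,
# truncated to the 20-bit width of the address bus.
def physical_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(physical_address(0x1234, 0x0010)))  # 0x12350
# The 20-bit address space is what gives the 1024 KB (1 MB) limit:
print((0xFFFF_F + 1) // 1024)                 # 1024 (KB)
```

Note that many segment:offset pairs map to the same physical address, a quirk that later software for the x86 family had to take into account.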

In 1981, Osborne Computer began producing the first portable PC, which was the size of a suitcase and weighed 11 kg.

IBM's first steps


In 1981, IBM released the IBM PC, an open-architecture microcomputer based on Intel's 16-bit 8088 microprocessor. The 16-bit i8088, with its 8-bit data bus, had clock speeds of 5 to 10 MHz. The PC was equipped with a monochrome text display, two 160 KB 5.25-inch floppy disk drives and 64 KB of RAM.

In 1983 the IBM PC XT (eXtended Technology) appeared, with 256 KB of RAM and a 10 MB hard drive. Its processor ran at 4.77 MHz.

The IBM PC AT (Advanced Technology) was introduced in 1984. The computer ran on an Intel 80286 microprocessor (produced since February 1982) and the ISA architecture, and came with a 20 MB hard drive. The 80286 made it possible to switch to the AT bus: a 16-bit data bus and a 24-bit address bus. It became possible to address up to 16 MB of RAM (compared with 640 KB on the original IBM PC). The motherboard carried a battery that powered a clock chip, with the time kept in a small memory of about 50 bytes. Processor clock speeds: 80286-6 - 6 MHz, 80286-8 - 8 MHz, 80286-10 - 10 MHz, 80286-12 - 12.5 MHz.
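The jump from 1 MB to 16 MB, and later to 4 GB on the 80386, comes straight from the widening of the address bus; each extra address line doubles the addressable memory. A quick sanity check of the figures quoted in this article:

```python
# Addressable memory is 2 raised to the number of address lines.
def addressable(address_bits: int) -> int:
    return 2 ** address_bits  # bytes

print(addressable(20) // 2**20)  # 8086/8088, 20 lines:  1 (MB)
print(addressable(24) // 2**20)  # 80286 AT bus, 24 lines: 16 (MB)
print(addressable(32) // 2**30)  # 80386, 32 lines: 4 (GB)
```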

In October 1985, Intel created the first 32-bit microprocessor, the i80386, which contained about 275 thousand transistors. The first PC to use this microprocessor was the Compaq DeskPro 386. A cheaper version with a 16-bit external bus, the i80386SX, did not appear until June 1988, at which point the original chip received the DX suffix. It was the 386 that delivered a noticeable jump in the clock speed of personal computers: different models of 386 processors operated at 16, 20, 25, 33 and 40 MHz.

Intel's Colossal Breakthrough


In 1989, Intel released the 486DX microprocessor. It packed 1.2 million transistors on a single chip and was fully compatible with x86 processors. This chip was the first to combine a CPU, a math coprocessor and cache memory. Clock speeds of the various 486 modifications ranged from 16 to 150 MHz; computers based on the 486 reached 133 MHz (the so-called DX4). The 486 DX2 processors had a multiplication factor of 2 (with a 50 MHz system bus, the processor ran at 100 MHz). Later, processors with the DX4 index were produced; their multiplication factor was not 4 but 3. After Intel's 486 processors left the market, AMD released the 486DX4-120 and 486DX4-133. The introduction of multipliers gave rise to the concept of overclocking: raising performance by increasing the bus clock frequency or the multiplication factor. Systems were sold in which i486 processors were overclocked to 160 MHz.
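The arithmetic behind these model names is simple: the core clock is the system-bus frequency times the internal multiplier. A small illustrative helper (the 40 MHz x 4 pairing shown for 160 MHz is just one plausible combination, not a documented Intel part):

```python
# Core clock = system-bus frequency (MHz) x internal multiplier.
def core_clock(bus_mhz: float, multiplier: float) -> float:
    return bus_mhz * multiplier

print(core_clock(50, 2))  # 486DX2: 50 MHz bus x 2 -> 100 MHz
print(core_clock(33, 3))  # 486DX4: 33 MHz bus x 3 -> 99 MHz (sold as "100")
print(core_clock(40, 4))  # a hypothetical overclock -> 160 MHz
```

Raising either factor is what "overclocking" means, which is why both bus-speed and multiplier tweaks appear in accounts of overclocked i486 systems.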

In March 1993, Intel began shipping 66 and 60 MHz versions of the Pentium processor. Pentium-based PCs are fully compatible with computers using the i8088, i80286, i80386 and i486 microprocessors. The new processor contained approximately 3.1 million transistors and had a 32-bit address bus and a 64-bit external data bus.

In May 1997, Intel introduced the Pentium II processor, based on the Pentium Pro. A processing unit for MMX instructions was added to the P6 core, and the second-level cache was moved out of the processor package onto the cartridge board, which helped the Pentium II spread widely. Clock speeds rose noticeably: across the various models they were 233, 266, 300, 333, 350, 400, 433, 450, 466, 500 and 533 MHz.

Intel's sixth-generation 32-bit microprocessor, the Pentium III, was released in February 1999. It largely copied the Pentium II but added new features: 70 new SSE instructions (Streaming SIMD Extensions, also called MMX2) aimed at multimedia support, and an improved L1 cache controller. Clock frequencies of the Pentium III (Katmai) were 450, 500, 533, 550 and 600 MHz; on the Coppermine core, from 533 to 1133 MHz; Pentium III processors on the Tualatin core ran from 1000 to 1400 MHz.

The era of multi-core processors


At the end of November 2000, Intel introduced Pentium 4 processors clocked above 1 GHz, built on the NetBurst architecture and using fast Rambus memory with an effective system-bus frequency of 400 MHz. The processors added 144 new SSE2 instructions. Clock speeds of the first Pentium 4 processors ranged from 1.4 to 2.0 GHz; later modifications rose from 2.2 to 3.8 GHz.

In July 2006, Intel created its dual-core Core 2 processors; the first models in the line were the Intel Core 2 Duo and Intel Core 2 Extreme. They were built on the new Intel Core architecture, which the company called the most significant step in the development of its microprocessors since the introduction of the Intel Pentium brand in 1993. Thanks to EM64T technology, Intel Core 2 processors can run in both 32-bit and 64-bit modes. The main differences from the Pentium 4 family are lower heat generation and power consumption, along with greater overclocking headroom. Core 2 Duo clock frequencies range from 1.5 to 3.5 GHz.

At the beginning of 2007 the Core 2 Quad, a quad-core processor, was introduced, with clock frequencies from 2.33 to 3.2 GHz.

In January 2010, Intel Core i3 processors appeared. They added an integrated graphics processor that handles calculations in "graphics" mode, plus a built-in "intelligent" auto-overclocking function: at medium and low loads the processor runs at its rated performance and saves energy, while increased load automatically raises performance. The cache (the processor's internal fast memory) was enlarged and is distributed dynamically between the cores depending on load. The new processors run hotter, especially during auto-overclocking, and accordingly require a more efficient cooling system. Clock frequencies of the i-series processors (i3, i5, i7) range from 2.66 to 3.6 GHz.






