Unix systems. Early history of UNIX


The UNIX operating system, the progenitor of many modern operating systems such as Linux, Android and Mac OS X, was created within the walls of Bell Labs, the research division of AT&T. Bell Labs has been a veritable breeding ground for scientists whose discoveries changed technology. It was at Bell Labs that William Shockley, John Bardeen and Walter Brattain created the first bipolar transistor in 1947. One can say the laser was invented at Bell Labs as well, although masers had already been built by then. Claude Shannon, the founder of information theory, also worked there. So did the creators of UNIX and the C language, Ken Thompson and Dennis Ritchie (we will return to them shortly), and so does the author of C++, Bjarne Stroustrup.

On the way to UNIX

Before we talk about UNIX itself, let's remember those operating systems that were created before it, and which largely defined what UNIX is, and through it, many other modern operating systems.

The development of UNIX was not Bell Labs' first foray into operating systems. In 1957 the laboratory began developing an operating system called BESYS (short for Bell Operating System). The project was led by Victor Vyssotsky, the son of a Russian astronomer who had emigrated to America. BESYS was an internal project, never released as a commercial product, although it was distributed on magnetic tape to anyone who asked. The system was designed to run on IBM 704-709x series machines (IBM 7090, 7094).

IBM 704

First and foremost, BESYS was intended for batch execution of large numbers of programs: a list of jobs is submitted, and their execution is scheduled to occupy as much of the machine's resources as possible so that the computer never sits idle. At the same time, BESYS already contained the beginnings of time sharing - essentially what we now call multitasking. When full-fledged time-sharing systems appeared, this capability let several people work on one computer simultaneously, each from their own terminal.

In 1964 Bell Labs upgraded its computers, and as a result BESYS could not run on the new IBM machines; portability across platforms was out of the question at the time. Back then IBM shipped its computers without operating systems. The Bell Labs developers could have started writing a new operating system of their own, but instead they joined the development of the Multics operating system.

The Multics project (short for Multiplexed Information and Computing Service) was proposed by MIT professor Jack Dennis. In 1963, he and his students developed a specification for a new operating system and managed to interest representatives of General Electric in the project. As a result, Bell Labs joined MIT and General Electric in developing a new operating system.

And the project's goals were ambitious. First, it was to be an operating system with full time sharing. Second, Multics was written not in assembly language but in PL/I, one of the first high-level languages, developed in 1964. Third, Multics could run on multiprocessor computers. The system also had a hierarchical file system, file names could be quite long and contain almost any characters, and the file system supported symbolic links to directories.

Unfortunately, work on Multics dragged on; the Bell Labs programmers never saw the product released and left the project in April 1969. The release came in October of that same year, but the first version is said to have been terribly buggy, and for another year the remaining developers fixed the bugs users reported to them. A year later Multics was already a fairly reliable system.

Multics remained in development for quite a while afterward; the last release, version 12.5, came in 1992. But that is another story. What matters here is that Multics had a huge influence on the future UNIX.

Birth of UNIX

UNIX appeared almost by accident, and the culprit was Space Travel, a space-flight game written by Ken Thompson. Back in 1969, Space Travel was first written for that same Multics, and after Bell Labs lost access to new versions of Multics, Ken rewrote the game in Fortran and ported it to the GECOS operating system that shipped with the GE-635 computer. Two problems crept in here. First, that computer's display output system was not very good, and second, playing on it was a bit expensive - something like $50-75 per hour of machine time.

But one day Ken Thompson came across a rarely used DEC PDP-7 that was quite suitable for running Space Travel and, moreover, had a better display processor.

Porting the game to the PDP-7 was not easy: it essentially required writing a new operating system just to run it. That did not stop him - which shows what lengths programmers will go to for a favorite game. This is how UNIX, or rather Unics, was born. The name, suggested by Brian Kernighan, is an abbreviation of Uniplexed Information and Computing System. Recall that Multics stands for Multiplexed Information and Computing Service; Unics was thus, in a way, a counterpoint to Multics in its simplicity. Indeed, Multics was already being criticized for its complexity. For comparison, the first versions of the Unics kernel occupied only 12 KB of RAM versus 135 KB for Multics.

Ken Thompson

This time the developers did not (yet) experiment with high-level languages, and the first version of Unics was written in assembly language. Besides Thompson himself, Dennis Ritchie and, later, Douglas McIlroy, Joe Ossanna and Rudd Canaday took part in the development. At first, Kernighan, who had proposed the name of the OS, provided only moral support.

A little later, in 1970, when multitasking had been implemented, the operating system was renamed UNIX and the name ceased to be treated as an abbreviation. That year is considered UNIX's official year of birth, and it is from January 1, 1970 that system time is counted (as the number of seconds elapsed since that date). The date also goes by a grander name: the beginning of the UNIX Epoch. Remember how we were all scared by the Y2K problem? Well, a similar problem awaits us in 2038, when the signed 32-bit integers often used to store time will no longer suffice, and dates will wrap around to negative values. One would like to believe that by then all vital software will use 64-bit variables for the purpose, pushing this dreadful date back by roughly another 292 billion years - and then we'll think of something. 🙂
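The wraparound is easy to demonstrate. Here is a minimal Python sketch (the `to_time32` helper is hypothetical, purely for illustration) that emulates how a signed 32-bit time_t runs out of room:

```python
import struct
from datetime import datetime, timezone

def to_time32(ts: int) -> int:
    # Reinterpret a timestamp as a signed 32-bit integer,
    # the way a 32-bit time_t stores it.
    return struct.unpack("<i", struct.pack("<I", ts & 0xFFFFFFFF))[0]

last_ok = 2**31 - 1  # the last representable second
print(datetime.fromtimestamp(to_time32(last_ok), tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later the sign bit flips and the value goes negative,
# wrapping the date back to December 1901.
print(to_time32(last_ok + 1))
# -2147483648
```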

By 1971 UNIX was already a full-fledged operating system, and Bell Labs even registered UNIX as a trademark. That same year UNIX was rewritten for the more powerful PDP-11, and the first official version of UNIX (also called the First Edition) was released.

In parallel with the development of Unics/UNIX, starting in 1969, Ken Thompson and Dennis Ritchie were developing a new language, B, based on the BCPL language, which in turn can be considered a descendant of Algol-60. Ritchie proposed rewriting UNIX in B, which was portable although interpreted, and then kept modifying the language to suit new needs. In 1972 the second version of UNIX, the Second Edition, was released, written almost entirely in B; only a fairly small module of about 1,000 lines remained in assembler, so porting UNIX to other computers became comparatively easy. This is how UNIX became portable.

Ken Thompson and Dennis Ritchie

The B language then evolved alongside UNIX until it gave birth to C, one of the most famous programming languages, which these days people either disparage or exalt as an ideal. In 1973 the third edition of UNIX was released with a built-in C compiler, and starting with the fifth version, released in 1974, UNIX is considered to have been completely rewritten in C. By the way, it was also in UNIX, in 1973, that the concept of pipes first appeared.
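To show the idea of pipes in action, here is a minimal sketch using Python's `os.pipe`, which wraps the same kernel primitive: a one-way channel where bytes written into one end come out the other, in order. Shell pipelines like `who | sort` are built on exactly this mechanism.

```python
import os

# pipe() returns two file descriptors: a read end and a write end.
r, w = os.pipe()
os.write(w, b"who | sort - the classic pipeline idea\n")
os.close(w)              # closing the write end signals EOF to the reader
data = os.read(r, 4096)  # everything written is read back in order
os.close(r)
print(data.decode(), end="")
```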

Beginning in 1974-1975, UNIX spread beyond Bell Labs. Thompson and Ritchie published a paper on UNIX in Communications of the ACM, and AT&T provided UNIX to educational institutions as a teaching tool. In 1975 the sixth version of UNIX was released, from which various independent implementations of the system began to appear, and in 1976 what can be called the first port of UNIX to foreign hardware took place - to the Interdata 8/32 computer.

The UNIX operating system turned out to be so successful that, starting in the late 70s, other developers began to make similar systems. Let's now switch from the original UNIX to its clones and see what other operating systems have appeared thanks to it.

The emergence of BSD

The proliferation of this operating system was, oddly enough, helped along by American officials who, back in 1956 - well before UNIX was born - imposed restrictions on AT&T, the owner of Bell Labs. The Department of Justice had forced AT&T to sign an agreement forbidding the company from engaging in business unrelated to telephone and telegraph networks and equipment. By the 1970s, however, AT&T had realized what a success UNIX had become and wanted to commercialize it. To get the officials' permission, AT&T provided the UNIX source code to several American universities.

One of the universities with access to the source code was the University of California at Berkeley, and when you have someone else's source code in hand, the urge to tweak something inevitably arises - especially since the license did not forbid it. Thus, a few years later (in 1978), the first UNIX-compatible system created outside the walls of AT&T appeared: BSD UNIX.

University of California at Berkeley

BSD is an abbreviation of Berkeley Software Distribution, a system for distributing programs in source-code form under a very permissive license. The BSD license was created precisely for distributing the new UNIX-compatible system. It allows reuse of the source code distributed under it and, unlike the GPL (which did not yet exist), imposes no restrictions on derivative programs. It is also very short and avoids a mass of tedious legal language.

The first version of BSD (1BSD) was more an add-on to the original UNIX version 6 than a standalone system: it added a Pascal compiler and the ex text editor. The second version of BSD, released in 1979, included such well-known programs as vi and the C shell.

After BSD UNIX appeared, the number of UNIX-compatible systems began to grow incredibly fast. Separate branches split off from BSD UNIX itself, different operating systems exchanged code with one another, and the intertwining became confusing enough that from here on we will not dwell on every version of every UNIX, but will instead look at how the most famous of them came to be.

Perhaps the best-known direct descendants of BSD UNIX are FreeBSD, OpenBSD and, to a slightly lesser extent, NetBSD. All of them descend from the so-called 386BSD, released in 1992. As the name suggests, 386BSD was a port of BSD UNIX to the Intel 80386 processor, also created by Berkeley graduates. The authors believed the UNIX source code obtained from AT&T had been modified enough that AT&T's license no longer applied; AT&T itself thought otherwise, and lawsuits ensued over the operating system. Judging by the fact that 386BSD went on to become the parent of many other operating systems, everything ended well for it.

The FreeBSD project (which at first had no name of its own) began as a set of patches for 386BSD; for whatever reason those patches were not accepted, and when it became clear that 386BSD was no longer going to be developed, in 1993 the project pivoted to creating a full operating system of its own, named FreeBSD.

Beastie. FreeBSD Mascot

At about the same time, the 386BSD developers themselves started a new project, NetBSD, from which OpenBSD in turn branched off. As you can see, the result is quite a sprawling tree of operating systems. The goal of the NetBSD project was to create a UNIX that could run on as many architectures as possible - maximum portability; even NetBSD drivers are required to be cross-platform.

NetBSD logo

Solaris

However, the first system to branch off from BSD was SunOS, the brainchild - as the name tells you - of Sun Microsystems, now sadly defunct. That happened in 1983. SunOS shipped with the computers Sun itself built. Strictly speaking, a year earlier, in 1982, Sun had released Sun UNIX, based on the Unisoft UNIX v7 code base (Unisoft, founded in 1981, specialized in porting UNIX to various hardware), but it is SunOS 1.0 that is based on the 4.1BSD code. SunOS was updated regularly until version 4.1.4 in 1994, after which it was renamed Solaris 2. Where did the 2 come from? The story is a little confusing: the name Solaris was first applied to SunOS versions 4.1.1-4.1.4, developed from 1990 to 1994; think of it as a rebranding that only really took hold with Solaris 2. Versions 2.1 through 2.6 followed until 1997, and in 1998, instead of Solaris 2.7, simply Solaris 7 was released; from then on only that number grew. At the moment, the latest version is Solaris 11, released on November 9, 2011.

OpenSolaris logo

The history of Solaris is also rather tangled. Until 2005 Solaris was an entirely commercial operating system, but that year Sun decided to open part of the Solaris 10 source code and create the OpenSolaris project. Moreover, while Sun was alive, Solaris 10 was free to use, with official technical support available for purchase. Then, in early 2010, after Oracle acquired Sun, it made Solaris 10 a paid system. Fortunately, Oracle has not yet managed to ruin OpenSolaris.

Linux. Where would we be without it?

And now it is time to talk about the most famous of the UNIX implementations: Linux. The history of Linux is remarkable in that three interesting projects came together in it. But before we get to Linux's creator, Linus Torvalds, we must mention two other programmers: Andrew Tanenbaum, who unknowingly pushed Linus toward creating Linux, and Richard Stallman, whose tools Linus used when building his operating system.

Andrew Tanenbaum is a professor at the Vrije Universiteit Amsterdam whose main field is operating systems. Together with Albert Woodhull he wrote the well-known book "Operating Systems: Design and Implementation," which inspired Torvalds to write Linux. The book examines a UNIX-like system called Minix. Unfortunately, Tanenbaum long viewed Minix only as a vehicle for teaching operating system design, not as a full-fledged working OS. The Minix source code carried a rather restrictive license: you could study the code, but you could not distribute modified versions of Minix, and for a long time the author himself declined to apply the patches people sent him.

Andrew Tanenbaum

The first version of Minix was released along with the first edition of the book in 1987; the second and third versions of Minix came out with the corresponding editions of the book. The third version, released in 2005, can already be used both as a standalone operating system for a computer (there are LiveCD versions of Minix that require no hard-drive installation) and as an embedded operating system. The latest version, Minix 3.2.0, was released in July 2011.

Now let us recall Richard Stallman. These days he is perceived mostly as a propagandist for free software, yet many now-famous programs exist thanks to him, and at one point his project made Torvalds' life much easier. The most interesting part is that Linus and Richard approached the creation of an operating system from opposite ends, and in the end the projects merged into GNU/Linux. This calls for some explanation of what GNU is and where it came from.

Richard Stallman

One could talk about Stallman at length - for instance, that he graduated with honors in physics from Harvard University. Stallman then worked at MIT, where in the 1970s he began writing his famous EMACS editor. The editor's source code was available to everyone, which was nothing unusual at MIT: for a long time a kind of friendly anarchy reigned there, which Steven Levy, author of the wonderful book "Hackers: Heroes of the Computer Revolution," called the "hacker ethic." But a little later MIT began tightening computer security: users were given passwords, and unauthorized users could not access the machines. Stallman strongly opposed the practice; he wrote a program that let anyone discover any user's password, and he advocated leaving passwords blank. For example, he sent users messages like this: "I see that you have chosen the password [such and such]. I suggest you switch to the password 'carriage return.' It's much easier to type, and it is consistent with the principle that there should be no passwords here." But his efforts came to nothing. Worse, the new people arriving at MIT were already beginning to worry about rights to their programs, about copyright and other such abominations.

Stallman later said (as quoted in the same book by Levy): "I cannot believe that software should have owners. What happened sabotaged humanity as a whole. It prevented people from getting the most out of the programs." Or another quote from him: "The machines began to break down, and there was no one to fix them. No one made the necessary changes to the software. Non-hackers reacted to this simply - they began to use purchased commercial systems, bringing with them fascism and licensing agreements."

As a result, Richard Stallman left MIT and decided to create his own free implementation of a UNIX-compatible operating system. Thus, on September 27, 1983, the GNU project appeared; the name is a recursive acronym for "GNU's Not UNIX." The first GNU program was EMACS. In 1988 the GNU project produced its own license, the GPL (GNU General Public License), which obliges authors of programs built on GPL-licensed source code to release their own source code under the GPL as well.

Until 1990, a variety of software for the future operating system was written within the GNU project (and not only by Stallman), but the OS still lacked a kernel of its own. Work on a kernel began only in 1990, in a project called GNU Hurd, but it never took off; its last release came in 2009. Linux, however, did take off - and we have finally arrived at it.

Enter the young Finn Linus Torvalds. While studying at the University of Helsinki, Linus took a course on C and UNIX, and in preparation for it he bought that very book by Tanenbaum describing Minix. Described is the operative word: Minix itself had to be purchased separately, on 16 floppy disks, and it cost $169 at the time (alas, Finland had no Gorbushka - Moscow's famous market for cheap software - but what can you do, savages 🙂). Torvalds also had to buy, on credit, a $3,500 computer with an 80386 processor, because until then he had only an old machine with a 68008 processor that could not run Minix (happily, once he had made the first version of Linux, grateful users chipped in and paid off his computer loan).

Linus Torvalds

Although Torvalds generally liked Minix, he gradually came to see its limitations and shortcomings. He was especially irritated by the terminal emulator that shipped with the system. So he decided to write his own terminal emulator, and along the way learn how the 386 processor works. Torvalds wrote the emulator at a low level, starting from the bootloader; the emulator gradually acquired new capabilities, then, to transfer files, Linus had to write a disk driver and a file system driver, and it snowballed from there. That is how the operating system later called Linux appeared (at the time it had no name at all).

When the operating system began to take shape, the first program Linus got running on it was bash. More precisely, he tweaked his operating system until bash would finally run, and after that he gradually brought up other programs. And the OS was not supposed to be called Linux at all. Here is a quote from Torvalds' autobiography, published under the title "Just for Fun": "In my mind I called it Linux. Honestly, I never intended to release it under the name Linux, because it seemed too immodest to me. What name did I have in mind for the final version? Freax. (Get it? Freaks - fans - with an x at the end, from Unix.)"

On August 25, 1991, the following historic message appeared in the comp.os.minix newsgroup: "Hello to all minix users! I'm writing a (free) operating system (just a hobby - it won't be big and professional like gnu) for 386 and 486 AT clones. I've been working on this since April, and it looks like it will be ready soon. Write to me about what you like/dislike in minix, since my OS resembles it (among other things, it has, for practical reasons, the same physical layout of the file system). So far I've ported bash (1.08) and gcc (1.40), and everything seems to work. This means I'll have something working within a few months, and I'd like to know what features most people want. All suggestions are accepted, but implementation is not guaranteed :-)"

Note that GNU and gcc are already mentioned here (at the time the abbreviation stood for GNU C Compiler). And recall Stallman and his GNU project, which had been building an operating system from the other end. The merger finally happened. That is why Stallman takes offense when the operating system is called simply Linux rather than GNU/Linux: Linux is, after all, the kernel, while many of the surrounding pieces came from the GNU project.

On September 17, 1991, Linus Torvalds first posted his operating system, then at version 0.01, on a public FTP server. Since then, all progressive humanity has celebrated this day as Linux's birthday; the especially impatient start celebrating on August 25, when Linus announced in the newsgroup that he was writing an operating system. Linux kept developing, and the name Linux stuck, because the address where the system was posted looked like ftp.funet.fi/pub/OS/Linux. Ari Lemke, the instructor who allocated space on the server for Linus, thought Freax did not look very presentable, so he named the directory "Linux" - a blend of the author's name and the x at the end of UNIX.

Tux. Linux logo

It is also worth noting that although Torvalds wrote Linux under the influence of Minix, there is a fundamental difference between them from a design standpoint. Tanenbaum is a proponent of microkernel operating systems, in which the kernel is small and has few functions, while all drivers and services run as separate independent modules. The Linux kernel, by contrast, is monolithic: many operating-system features are built into it, so under Linux, if you need some special capability, you may have to recompile the kernel with the corresponding changes. The microkernel architecture has the advantages of reliability and simplicity; at the same time, barring careless microkernel design, a monolithic kernel will run faster, since it does not need to exchange large amounts of data with external modules. After Linux appeared, in 1992 a heated debate broke out in comp.os.minix between Torvalds and Tanenbaum, and their supporters, over which architecture is better: microkernel or monolithic. Tanenbaum argued that microkernels were the future and that Linux was obsolete before it was even released. Almost 20 years have passed since then... Incidentally, GNU Hurd, which was to become the kernel of the GNU operating system, was also designed as a microkernel.

Mobile Linux

So, Linux has been developing steadily since 1991, and although its share on ordinary users' desktop computers is still small, it has long been popular on servers and supercomputers, where it is Windows that is trying to carve out a share. Linux has also taken a strong position on phones and tablets, because Android is Linux too.

Android logo

The history of Android began with Android Inc, a company founded in 2003 that apparently developed mobile software (what exactly it worked on in its first years is still not widely publicized). Less than two years later, Android Inc was acquired by Google. No official details could be found about what the Android Inc developers were doing before the takeover, although by 2005, after the purchase by Google, rumor had it that they were developing a new operating system for phones. In any case, the first release of Android took place on October 22, 2008, after which new versions began appearing regularly. One notable feature of Android's development has been the attacks on the system over allegedly infringed patents, and the legal status of its Java implementation remains unclear - but let us not wade into these non-technical squabbles.

But Android is not the only mobile representative of Linux; there is also the MeeGo operating system. Whereas Android is backed by a corporation as powerful as Google, MeeGo has no single strong patron: it is developed by a community under the auspices of The Linux Foundation, supported by companies such as Intel, Nokia, AMD, Novell, ASUS, Acer, MSI and others. At the moment the main help comes from Intel, which is not surprising, since the MeeGo project grew out of Moblin, a project initiated by Intel. Moblin was a Linux distribution meant to run on portable devices built around Intel Atom processors. Another mobile Linux worth mentioning is Openmoko. Linux is moving quickly to gain a foothold on phones and tablets - Google has taken the matter seriously with Android - but the prospects of the other mobile Linuxes remain vague.

As you can see, Linux can now run on many systems with a wide range of processors. In the early 1990s, however, Torvalds did not believe Linux could be ported to anything beyond the 386.

Mac OS X

Now let us switch to another UNIX-compatible operating system: Mac OS X. The early versions of Mac OS, up to the ninth, were not based on UNIX, so we will not dwell on them. The interesting part for us begins after Steve Jobs was forced out of Apple in 1985 and founded NeXT, a company building computers and the software for them. NeXT hired the programmer Avie Tevanian, who had been developing the Mach microkernel for a UNIX-compatible operating system at Carnegie Mellon University. The Mach kernel was intended as a replacement for the BSD UNIX kernel.

NeXT company logo

Avie Tevanian led the team developing the new UNIX-compatible operating system, called NeXTSTEP. To avoid reinventing the wheel, NeXTSTEP was based on that same Mach kernel. From a programming standpoint, NeXTSTEP, unlike many other operating systems of the day, was object-oriented, and the Objective-C programming language, now widely used in Mac OS X, played a huge role in it. The first version of NeXTSTEP was released in 1989. Although NeXTSTEP was originally designed for Motorola 68000-family processors, in the early 1990s it was ported to the 80386 and 80486. Business did not go well for NeXT, and in 1996 Apple offered to buy the company from Jobs in order to use NeXTSTEP in place of Mac OS. One could also tell the story of the rivalry between NeXTSTEP and BeOS, which ended in NeXTSTEP's victory, but let us not stretch an already long tale; besides, BeOS has no relation to UNIX, so it does not concern us here, though it was a very interesting operating system in its own right, and it is a pity its development was cut short.

A year later, when Jobs returned to Apple, the adaptation of NeXTSTEP to Apple computers began, and over the following years the operating system was ported to PowerPC and Intel processors. The server version, Mac OS X Server 1.0, was released in 1999, and in 2001 came the operating system for end users, Mac OS X (10.0).

Later, an operating system for the iPhone, called Apple iOS, was developed on the basis of Mac OS X. The first version of iOS came out in 2007; the iPad runs the same operating system.

Conclusion

After all of the above, you may be wondering: what, exactly, counts as a UNIX operating system? There is no single answer. From a formal standpoint there is the Single UNIX Specification, a standard an operating system must satisfy in order to bear the name UNIX. Do not confuse it with the POSIX standard, which a non-UNIX-like operating system can also follow. Incidentally, the name POSIX was proposed by the same Richard Stallman, and formally the POSIX standard is ISO/IEC 9945. Certification against the Single UNIX Specification is expensive and time-consuming, so not many operating systems bother with it. Systems that have received such certification include Mac OS X, Solaris, SCO and a few lesser-known ones. Neither Linux nor the *BSDs are on the list, yet no one doubts their "UNIX-ness." For that reason the programmer and writer Eric Raymond proposed two further criteria for deciding whether a given operating system is UNIX-like. The first is inheritance of source code from the original UNIX developed at AT&T and Bell Labs; the BSD systems fall into this category. The second is being "UNIX in functionality": operating systems that behave largely as the UNIX specification describes but have neither received a formal certificate nor have any connection to the original UNIX source code. These include Linux, Minix and QNX.

We will probably stop here, or this will turn into far too many letters. This overview has covered mainly the origins of the best-known operating systems - the BSD variants, Linux, Mac OS X, Solaris - while some UNIXes, such as QNX, Plan 9 and Plan B, were left out. Who knows, perhaps we will get to them in the future.


UNIX systems are of great historical importance: they gave rise to many of the operating-system and software concepts and approaches that are popular today. The C language was also created in the course of developing UNIX systems.

Examples of well-known UNIX-like operating systems include: BSD, Solaris, Linux, Android, MeeGo, NeXTSTEP, Mac OS X, Apple iOS.

History

Predecessors

The first versions of UNIX were written in assembly language and had no built-in high-level language compiler. Around 1969, Ken Thompson, with the assistance of Dennis Ritchie, developed and implemented the B language, a simplified version (for implementation on minicomputers) of the BCPL language developed in 1966. Like BCPL, B was an interpreted language. In 1972, the second edition of UNIX, rewritten in B, was released. In 1969-1973, a compiled language called C was developed on the basis of B.

Split

An important reason for the UNIX split was the implementation of the TCP/IP protocol stack in 1980. Before this, machine-to-machine communication in UNIX was in its infancy - the most significant method of communication was UUCP (a means of copying files from one UNIX system to another, originally operating over telephone networks using modems).

Two network application programming interfaces were proposed: Berkeley sockets and TLI (Transport Layer Interface).

The Berkeley sockets interface was developed at the University of California, Berkeley, and used the TCP/IP protocol stack developed there. TLI was created by AT&T according to the transport layer definition of the OSI model and first appeared in System V Release 3. Although that release contained TLI and STREAMS, it initially did not implement TCP/IP or other network protocols; such implementations were provided by third parties.

The implementation of TCP/IP was officially and finally included in the base distribution of System V Release 4. This, along with other considerations (mostly market ones), caused the final split between the two branches of UNIX - BSD (University of California, Berkeley) and System V (the commercial version from AT&T). Subsequently, many companies, having licensed System V from AT&T, developed their own commercial varieties of UNIX, such as AIX, CLIX, HP-UX, IRIX and Solaris.

Modern UNIX implementations are generally neither pure System V nor pure BSD systems; they implement features of both.

Free UNIX-like operating systems

Currently, GNU/Linux and members of the BSD family are rapidly taking over the market from commercial UNIX systems and simultaneously penetrating both end-user desktop computers and mobile and embedded systems.

Proprietary systems

The influence of UNIX on the evolution of operating systems

The ideas behind UNIX had a huge impact on the development of computer operating systems. Currently, UNIX systems are recognized as one of the most historically important operating systems.

Widely used in systems programming, the C language, originally created for the development of UNIX, has surpassed UNIX itself in popularity. C was the first “tolerant” language that did not try to impose a particular programming style on the programmer. It was also the first high-level language to provide access to all the capabilities of the machine, such as pointers, arrays, bit shifts and increments. On the other hand, that freedom led to buffer overflow errors around C standard library functions such as gets and scanf. The result has been many notorious vulnerabilities, such as the one exploited by the famous Morris worm.

The early developers of UNIX helped introduce the principles of modular programming and reuse into engineering practice.

UNIX made it possible to use TCP/IP protocols on relatively inexpensive computers, which led to the rapid growth of the Internet. This, in turn, contributed to the rapid discovery of several major security vulnerabilities in the UNIX architecture and system utilities.

Over time, UNIX's leading developers developed cultural norms for software development that became as important as UNIX itself.

Social role in the community of IT professionals and historical role

The original UNIXes ran on large multi-user computers, for which the hardware manufacturers also offered proprietary OSes such as RSX-11 and its descendant VMS. Although, in a number of opinions, the UNIX of that day had disadvantages compared to those OSes (for example, the lack of serious database engines), it was (a) cheaper, and sometimes free for academic institutions, and (b) portable from one piece of hardware to another, being developed in the portable C language, which “decoupled” program development from specific hardware. The user experience also turned out to be “decoupled” from the hardware and the manufacturer: a person who had worked with UNIX on a VAX could easily work with it on a 68xxx machine, and so on.

Hardware manufacturers at the time were often cool toward UNIX, considering it a toy and offering their proprietary OSes for serious work - primarily DBMSes and the business applications built on them in commercial settings. DEC is known to have made such comments regarding its VMS. Corporations listened; the academic environment did not - it had everything it needed in UNIX, often did not require official support from the manufacturer, managed on its own, and valued UNIX's low cost and portability.

Thus, UNIX was perhaps the first OS portable to different hardware.

UNIX's second big rise came with the introduction of RISC processors around 1989. Even before that, there were so-called workstations: powerful single-user personal computers with enough memory, a hard drive, and a sufficiently advanced OS (multitasking, memory protection) to run serious applications such as CAD. Among the manufacturers of such machines, Sun Microsystems stood out, making a name for itself on them.

Before the advent of RISC processors, these stations typically used a Motorola 68xxx processor, the same as in Apple computers (albeit with a more advanced operating system than Apple's).

Around 1989, commercial implementations of RISC architecture processors appeared on the market. The logical decision of a number of companies (Sun and others) was to port UNIX to these architectures, which immediately entailed the transfer of the entire UNIX software ecosystem.

Proprietary serious operating systems such as VMS began their decline precisely from this moment (even if it was possible to port the OS itself to RISC, applications were a much harder problem, since in those ecosystems they were often developed in assembler or in proprietary languages like BLISS), and UNIX became the OS for the most powerful computers in the world.

However, at this time the PC ecosystem began to move to a GUI in the form of Windows 3.0. The enormous advantages of the GUI, as well as, for example, unified support for all types of printers, were appreciated by both developers and users. This greatly undermined UNIX's position in the market - implementations such as SCO and Interactive UNIX were unable to support Windows applications. As for the GUI for UNIX, called X11 (there were other implementations, much less popular), it could not fully work on a regular user PC due to memory requirements - for normal operation X11 required 16 MB, while Windows 3.1 ran both Word and Excel simultaneously in 8 MB (which became the standard PC memory size at the time) with sufficient performance. With high memory prices, this was a limiting factor.

The success of Windows gave impetus to an internal Microsoft project called Windows NT, which was API-compatible with Windows but had the architectural features of a serious OS, like UNIX: multitasking, full memory protection, support for multiprocessor machines, access rights on files and directories, a system log. Windows NT also introduced the journaling file system NTFS, which at the time exceeded in capabilities every file system shipped as standard with UNIX; UNIX analogues existed only as separate commercial products from Veritas and others.

Although Windows NT was not popular initially due to its high memory requirements (the same 16 MB), it allowed Microsoft to enter the market for server solutions, such as database management systems. Many at the time doubted that Microsoft, which had traditionally specialized in desktop software, could be a player in the enterprise software market, which already had its big names such as Oracle and Sun. Adding to this doubt was the fact that Microsoft's DBMS, SQL Server, began as a simplified version of Sybase SQL Server, licensed from Sybase and 99% compatible with it in all aspects.

In the second half of the 1990s, Microsoft began to squeeze UNIX in the corporate server market.

The combination of the above factors, as well as a huge drop in the price of 3D video processors, which went from professional equipment to home equipment, essentially killed the very concept of a workstation by the beginning of the 2000s.

In addition, Microsoft systems were easier to manage, especially in typical usage scenarios.

But UNIX has now begun its third dramatic rise.

In addition, Stallman and his comrades, fully aware that the success of non-corporate software requires non-proprietary development tools, developed a set of compilers for various programming languages (gcc), which, together with the previously developed GNU utilities (replacements for the standard UNIX utilities), made up a necessary and quite powerful toolkit for a developer.

To create a completely free UNIX, essentially only the OS kernel was missing. And it was developed by the Finnish student Linus Torvalds. The kernel was written from scratch and is not, in terms of source code, a derivative of either BSD or System V (although concepts were borrowed; for example, Linux had the namei and bread functions). However, in a number of details (system calls, a rich /proc, the absence of sysctl) it gravitates more toward the latter.

  • POSIX 1003.2-1992, which defines the behavior of utilities, including the command interpreter;
  • POSIX 1003.1b-1993, which supplements POSIX 1003.1-1988, specifies support for real-time systems;
  • POSIX 1003.1c-1995, which supplements POSIX 1003.1-1988, defines threads, also known as pthreads.

All POSIX standards are collected in the IEEE 1003 family.

For compatibility purposes, several UNIX system creators have proposed using the SVR4 system's ELF format for binary and object files. A single format ensures complete compliance between binary files within the same computer architecture.

The directory structure of some systems, in particular GNU/Linux, is defined in the Filesystem Hierarchy Standard. However, this standard is controversial in many respects, and even within the GNU/Linux community it is far from universally followed.

Standard UNIX commands

  • Creating and navigating files and directories: touch, pwd, mkdir, rmdir, find;
  • Viewing and editing files: more, less, ex, emacs;
  • Text processing: echo, cat, grep, sort, uniq, sed, awk, tee, head, tail, cut, split, printf;
  • File comparison: comm, cmp, diff, patch;
  • Various shell utilities: yes, test, xargs, expr;
  • System administration: chmod, chown, who, mount, umount;
  • Communications: mail, telnet, ftp, finger, rsh, ssh;
  • Command shells: bash, csh, ksh, tcsh, zsh;
  • Working with source code and object code: cc, gcc, ld, yacc, bison, lex, flex, ar, ranlib, make;
  • Compression and archiving: compress, uncompress, gzip, gunzip, tar;
  • Working with binary files: strings.

Below is a partial list of the commands from Section 1 of the First Edition of UNIX:

  • b, bas, bcd, boot
  • cat, chdir, check, chmod, chown, cmp
  • date, db, dbppt, dsw, dtf
  • mail, mesg, mkdir, mkfs, mount
  • rew, rkd, rkf, rkl, rmdir, roff



In 1964, Bell Labs underwent an upgrade of computers, as a result of which BESYS could no longer be run on new computers from IBM; cross-platform was out of the question at that time. At that time, IBM supplied computers without operating systems. Developers from Bell Labs could have started writing a new operating system, but they did it differently - they joined the development of the Multics operating system.

The Multics project (short for Multiplexed Information and Computing Service) was proposed by MIT professor Jack Dennis. In 1963, he and his students developed a specification for a new operating system and managed to interest representatives of General Electric in the project. As a result, Bell Labs joined MIT and General Electric in developing a new operating system.

And the ideas behind the project were very ambitious. Firstly, it had to be an operating system with full time sharing. Secondly, Multics was written not in assembly language but in one of the first high-level languages, PL/1, developed in 1964. Thirdly, Multics could run on multiprocessor computers. The system also had a hierarchical file system, file names could contain any characters and be quite long, and the file system provided symbolic links to directories.

Unfortunately, work on Multics dragged on; the Bell Labs programmers never saw the release of the product and left the project in April 1969. The release took place in October of the same year, but they say the first version was terribly buggy, and for another year the remaining developers fixed the bugs that users reported to them; a year later, Multics was already a fairly reliable system.

Multics remained in development for quite some time; the last release, version 12.5, came in 1992. But that is a different story; what matters here is that Multics had a huge influence on the future UNIX.

Birth of UNIX

UNIX appeared almost by accident, and the computer game “Space Travel”, a space flight game written by Ken Thompson, was to blame for this. It was back in 1969, the game “Space Travel” was first designed for that same Multics operating system, and after Bell Labs was cut off from access to new versions of Multics, Ken rewrote the game in Fortran and ported it to the GECOS operating system, which came with the GE-635 computer. But here two problems crept in. Firstly, this computer did not have a very good system for display output, and, secondly, playing on this computer was a bit expensive - something like $50-75 per hour.

But one day Ken Thompson came across a DEC PDP-7 computer that was rarely used, and could well be suitable for running Space Travel, and it also had a better video processor.

Ken Thompson

This time the developers did not (yet) experiment with high-level languages, and the first version of Unics was written in assembly language. Thompson himself, Dennis Ritchie, and later Douglas McIlroy, Joe Ossanna and Rudd Canaday took part in the development of Unics. At first, Brian Kernighan, who proposed the name of the OS, provided only moral support.

A little later, in 1970, when multitasking was implemented, the operating system was renamed UNIX and was no longer considered an abbreviation. This year is considered the official year of birth of UNIX, and it is from January 1, 1970 that system time is counted (as the number of seconds elapsed since that date). The same date is referred to more grandly as the start of the UNIX epoch (in English, the Unix Epoch). Remember how we were all scared by the Y2K problem? A similar problem awaits us in 2038, when the 32-bit signed integers often used to store dates will no longer be able to represent the time, and times and dates will become negative. One would like to believe that by then all vital software will use 64-bit variables for this purpose, pushing the terrible date back by roughly another 292 billion years - and then we'll think of something.

By 1971, UNIX was already a full-fledged operating system and Bell Labs even staked out the UNIX trademark. In the same year, UNIX was rewritten to run on the more powerful PDP-11 computer, and it was in this year that the first official version of UNIX (also called First Edition) was released.

In parallel with the development of Unics/UNIX, starting in 1969, Ken Thompson and Dennis Ritchie developed a new language, B, based on the BCPL language, which in turn can be considered a descendant of Algol-60. Ritchie proposed rewriting UNIX in B, which, although interpreted, was portable, and then continued to modify the language to suit new needs. In 1972, the second version of UNIX, the Second Edition, was released; it was written almost entirely in B, with only a fairly small module of about 1000 lines remaining in assembler, so porting UNIX to other computers was now relatively easy. This is how UNIX became portable.

Ken Thompson and Dennis Ritchie

Then the B language developed along with UNIX until it gave birth to C, one of the most famous programming languages, which nowadays is either slandered or exalted as an ideal. In 1973, the third edition of UNIX was released with a built-in C compiler, and starting with the 5th version, released in 1974, UNIX is considered to have been completely rewritten in C. By the way, it was also in UNIX, in 1973, that the concept of pipes first appeared.

Beginning in 1974-1975, UNIX began to spread beyond Bell Labs. Thompson and Ritchie published a paper on UNIX in Communications of the ACM, and AT&T provided UNIX to educational institutions as a teaching tool. In 1976, one can say, the first port of UNIX to another system took place - to the Interdata 8/32 computer. In addition, in 1975 the 6th version of UNIX was released, from which various implementations of the operating system began to appear.

The UNIX operating system turned out to be so successful that, starting in the late 70s, other developers began to make similar systems. Let's now switch from the original UNIX to its clones and see what other operating systems have appeared thanks to it.

The emergence of BSD

The proliferation of this operating system was largely facilitated by American officials who, back in 1956, well before the birth of UNIX, imposed restrictions on AT&T, the owner of Bell Labs. At that time, the Department of Justice forced AT&T to sign an agreement prohibiting the company from engaging in activities not related to telephone and telegraph networks and equipment. By the 70s, however, AT&T had realized what a successful project UNIX had turned out to be and wanted to make it commercial. To get the officials to allow this, AT&T transferred the UNIX source code to some American universities.

One of the universities that had access to the source code was the University of California at Berkeley, and where there is someone else's source code, the desire involuntarily arises to fix something in the program for yourself, especially since the license did not prohibit it. Thus, a few years later (in 1978), the first UNIX-compatible system not created within the walls of AT&T appeared: BSD UNIX.

University of California at Berkeley

BSD is an abbreviation of Berkeley Software Distribution, a system for distributing programs in source code under a very permissive license. The BSD license was created precisely for the distribution of the new UNIX-compatible system. It allows the reuse of the source code distributed under it and, unlike the GPL (which did not yet exist), imposes no restrictions on derivative programs. It is also very short and does not bury the reader in tedious legal terms.

The first version of BSD (1BSD) was more an add-on to the original UNIX version 6 than a standalone system. 1BSD added a Pascal compiler and the ex text editor. The second version of BSD, released in 1979, included such well-known programs as vi and the C shell.

After BSD UNIX appeared, the number of UNIX-compatible systems began to grow incredibly quickly. Already from BSD UNIX, separate branches of operating systems began to branch off, different operating systems exchanged code with each other, the intertwining became quite confusing, so in the future we will not dwell on each version of all UNIX systems, but will look at how the most famous of them appeared.

Perhaps the most famous direct descendants of BSD UNIX are the operating systems FreeBSD, OpenBSD and, to a slightly lesser extent, NetBSD. All of them descended from the so-called 386BSD, released in 1992. 386BSD, as the name suggests, was a port of BSD UNIX to the Intel 80386 processor. This system was also created by graduates of the University of Berkeley. The authors believed that the UNIX source code received from AT&T was sufficiently modified to void the AT&T license, however, AT&T itself did not think so, so there were lawsuits surrounding this operating system. Judging by the fact that 386BSD itself became the parent of many other operating systems, everything ended well for it.

The FreeBSD project (at the beginning it did not have its own name) appeared as a set of patches for 386BSD; however, for some reason these patches were not accepted, and when it became clear that 386BSD would no longer be developed, in 1993 the project was redirected toward creating a full-fledged operating system, called FreeBSD.

Beastie. FreeBSD Mascot

At the same time, the 386BSD developers themselves created a new project, NetBSD, from which, in turn, OpenBSD branched. As you can see, it turns out to be a rather sprawling tree of operating systems. The goal of the NetBSD project was to create a UNIX system that could run on as many architectures as possible, that is, to achieve maximum portability. Even NetBSD drivers must be cross-platform.

NetBSD logo

Solaris

However, the first to branch off from BSD was the SunOS operating system, the brainchild, as you can guess from the name, of Sun Microsystems, a company that is unfortunately now defunct. This happened in 1983. SunOS was the operating system that shipped with the computers Sun itself built. Strictly speaking, a year earlier, in 1982, Sun had released the Sun UNIX operating system, based on the Unisoft UNIX v7 code base (Unisoft, founded in 1981, was in the business of porting UNIX to various hardware), but it is SunOS 1.0 that is based on the 4.1BSD code. SunOS was regularly updated until 1994, when version 4.1.4 was released, and was then renamed Solaris 2. Where did the two come from? It is a somewhat confusing story: the name Solaris was first applied to SunOS versions 4.1.1 - 4.1.4, developed from 1990 to 1994. Consider it a kind of rebranding that only took hold starting with Solaris 2. Then, until 1997, versions Solaris 2.1 through 2.6 were released, and in 1998, instead of Solaris 2.7, simply Solaris 7 came out; from then on only that number increased. At the moment, the latest version of Solaris is 11, released on November 9, 2011.

OpenSolaris logo

The history of Solaris is also rather complicated. Until 2005 Solaris was a fully commercial operating system, but that year Sun decided to open up part of the Solaris 10 source code and create the OpenSolaris project. Moreover, while Sun was alive, Solaris 10 could be used for free, with the option of buying official technical support. Then, in early 2010, when Oracle acquired Sun, it made Solaris 10 a paid system. Fortunately, Oracle has not yet managed to ruin OpenSolaris.

Linux. Where would we be without it?

And now it is the turn of the most famous of the UNIX implementations: Linux. The history of Linux is remarkable in that three interesting projects came together in it. But before we talk about the creator of Linux, Linus Torvalds, we need to mention two more programmers: Andrew Tanenbaum, who unknowingly pushed Linus to create Linux, and Richard Stallman, whose tools Linus used when creating his operating system.

Andrew Tanenbaum is a professor at the Vrije Universiteit Amsterdam whose main field is operating systems. Together with Albert Woodhull, he wrote the well-known book “Operating Systems: Design and Implementation,” which inspired Torvalds to write Linux. The book discusses a UNIX-like system called Minix. Unfortunately, Tanenbaum long viewed Minix only as a project for teaching operating systems, not as a full-fledged working OS. The Minix source code was under a rather restrictive license: you could study the code, but you could not distribute your own modified versions of Minix, and for a long time the author himself did not want to apply the patches that were sent to him.

Andrew Tanenbaum

The first version of Minix was released along with the first edition of the book in 1987, the subsequent second and third versions of Minix were published along with the corresponding editions of the book about operating systems. The third version of Minix, released in 2005, can already be used as a stand-alone operating system for a computer (there are LiveCD versions of Minix that do not require installation on a hard drive), and as an embedded operating system for microcontrollers. The latest version of Minix 3.2.0 was released in July 2011.

Now let's recall Richard Stallman. These days he is perceived mostly as an advocate of free software, although many now well-known programs appeared thanks to him, and at one time his project made Torvalds' life much easier. Interestingly, Linus and Richard approached the creation of an operating system from opposite ends, and in the end the projects merged into GNU/Linux. Here we need to explain what GNU is and where it came from.

Richard Stallman

One can talk about Stallman at length: for example, he graduated with honors in physics from Harvard University. Stallman also worked at MIT, where in the 1970s he began writing his famous EMACS editor. The editor's source code was available to everyone, which was nothing unusual at MIT, where for a long time there reigned a kind of friendly anarchy, or, as Steven Levy called it in his wonderful book “Hackers: Heroes of the Computer Revolution,” the “hacker ethic.” But some time later, MIT began to tighten computer security: users were given passwords, and unauthorized users could not access the computers. Stallman strongly opposed this practice; he wrote a program that could reveal any user's password, and advocated leaving passwords blank. For example, he sent users messages like this: “I see that you have chosen the password [such and such]. I suppose you could switch to the ‘carriage return’ password. It's much easier to type, and it is consistent with the principle that there should be no passwords here.” But his efforts came to nothing. Moreover, the new people coming to MIT were already starting to worry about the rights to their programs, about copyright and other such abominations.

Stallman later said (quoting from the same book by Levy): “I cannot believe that software should have owners. What happened sabotaged humanity as a whole. It prevented people from getting the most out of the programs.” And another quote from him: “The machines began to break down, and there was no one to fix them. Nobody made the necessary changes to the software. Non-hackers reacted to this simply: they began to use purchased commercial systems, bringing with them fascism and licensing agreements.”

As a result, Richard Stallman left MIT and decided to create his own free implementation of a UNIX-compatible operating system. Thus, on September 27, 1983, the GNU project appeared, whose name is a recursive acronym for “GNU's Not UNIX.” The first GNU program was EMACS. Within the GNU project, its own license was developed in 1988: the GNU GPL (GNU General Public License), which obliges the authors of programs based on source code distributed under it to release their source code under the GPL as well.

Until 1990, various software for the future operating system was written within GNU (and not only by Stallman), but the OS still had no kernel of its own. Work on a kernel, the GNU Hurd project, began only in 1990, but it never took off; its most recent version was released in 2009. Linux, on the other hand, did take off, and we have finally come to it.

And here the Finnish student Linus Torvalds enters the scene. While studying at the University of Helsinki, Linus took a course on the C language and the UNIX system; in preparation for it, he bought the very book by Tanenbaum that described Minix. Minix itself had to be purchased separately, on 16 floppy disks, and it cost $169 at the time (oh, there was no Gorbushka market in Finland then, but what can you do, savages). In addition, Torvalds had to buy, on credit, a $3,500 computer with an 80386 processor, because before that he only had an old computer with a 68008 processor on which Minix could not run (fortunately, once he had released the first version of Linux, grateful users chipped in and paid off his computer loan).

Linus Torvalds

Although Torvalds generally liked Minix, he gradually began to see its limitations and shortcomings. He was especially irritated by the terminal emulation program that came with the operating system. As a result, he decided to write his own terminal emulator and, at the same time, learn the workings of the 386 processor. Torvalds wrote the emulator at a low level, starting from booting via the BIOS; gradually the emulator acquired new capabilities, then, in order to transfer files, Linus had to write a disk driver and a file system driver, and away it went. This is how the Linux operating system appeared (at the time it did not yet have a name).

When the operating system began to take shape, the first program Linus ran on it was bash. It would be more accurate to say that he tweaked his operating system until bash could finally run. After that he gradually got other programs working under it. And the operating system was not supposed to be called Linux at all. Here is a quote from Torvalds' autobiography, published under the title "Just for Fun": "In my mind I called it Linux. Honestly, I never intended to release it under the name Linux because it seemed too immodest to me. What name did I have in mind for the final version? Freax. (Get it? Freaks, with the x from Unix at the end.)"

On August 25, 1991, the following historic message appeared in the comp.os.minix newsgroup: "Hello to all minix users! I'm writing a (free) operating system (an amateur version – it won't be as big and professional as gnu) for 386 and 486 AT clones. I've been working on this since April and it looks like it will be ready soon. Write to me about what you like/dislike in minix, since my OS is similar to it (among other things, it has, for practical reasons, the same physical layout of the file system). So far I have ported bash (1.08) and gcc (1.40) to it, and everything seems to work. This means that in the coming months I will have something working, and I would like to know what features most people need. All suggestions are accepted, but implementation is not guaranteed."

Note that GNU and the gcc program are already mentioned here (at the time the abbreviation stood for GNU C Compiler). Recall Stallman and his GNU project, which had approached the operating system from the other end. Here the two finally merged. This is why Stallman takes offense when the operating system is called simply Linux rather than GNU/Linux: Linux is the kernel, while many of the surrounding components were taken from the GNU project.

On September 17, 1991, Linus Torvalds first posted his operating system, then at version 0.01, on a public FTP server. Since then, all progressive humanity has celebrated this day as the birthday of Linux. The particularly impatient begin celebrating on August 25, when Linus announced in the newsgroup that he was writing an operating system. Linux kept developing, and the name Linux stuck, because the address where the operating system was posted looked like ftp.funet.fi/pub/OS/Linux. Ari Lemke, the instructor who allocated space on the server for Linus, thought Freax did not look very presentable, and he named the directory "Linux" – a blend of the author's name with the x from UNIX at the end.

Tux. Linux logo

It is also worth noting that although Torvalds wrote Linux under the influence of Minix, there is a fundamental difference between Linux and Minix from a design point of view. Tanenbaum is a proponent of microkernel operating systems, in which the kernel proper is small and has few functions, while all drivers and operating system services run as separate independent modules. Linux, by contrast, has a monolithic kernel, in which many operating system features are built in; as a result, if you need some special feature under Linux, you may have to recompile the kernel with the appropriate changes. The microkernel architecture has the advantages of reliability and simplicity; at the same time, a monolithic kernel will generally run faster, since it does not need to exchange large amounts of data with external modules. After Linux appeared, a heated debate broke out in 1992 in the comp.os.minix newsgroup between Torvalds and Tanenbaum, and their respective supporters, about which architecture was better – microkernel or monolithic. Tanenbaum argued that microkernels were the future, and that Linux was obsolete before it was even released. Almost 20 years have passed since then... Incidentally, GNU Hurd, which was supposed to become the kernel of the GNU operating system, was also designed as a microkernel.

Mobile Linux

So, Linux has been developing steadily since 1991, and although its share on ordinary users' desktops is still small, it has long been popular on servers and supercomputers – an area where it is Windows that is trying to carve out a share. Linux has also taken a strong position on phones and tablets, since Android is Linux too.

Android logo

The history of Android began with Android Inc, a company founded in 2003 that was apparently developing mobile software (what exactly it worked on in its first years is still not widely publicized). Less than two years later, Android Inc was acquired by Google. No official details have surfaced about what the Android Inc developers were doing before the acquisition, although by 2005, after the purchase by Google, rumors were already circulating that they were building a new operating system for phones. In any case, the first commercial Android release took place on October 22, 2008, after which new versions began to appear regularly. One notable feature of Android's history has been the patent attacks alleging infringement, and the legally murky situation around its Java implementation, but let's not go into these non-technical squabbles.

But Android is not the only mobile representative of Linux; there is also the MeeGo operating system. Where Android is backed by a corporation as powerful as Google, MeeGo has no single strong patron: it is developed by a community under the auspices of The Linux Foundation, with support from companies such as Intel, Nokia, AMD, Novell, ASUS, Acer, MSI and others. At the moment the main support comes from Intel, which is not surprising, since the MeeGo project grew out of the Moblin project initiated by Intel. Moblin is a Linux distribution designed to run on portable devices powered by Intel Atom processors. Let's also mention another mobile Linux – Openmoko. Linux is moving quickly to gain a foothold on phones and tablets; Google has taken the matter seriously with Android, but the prospects of the other mobile Linux variants are still vague.

As you can see, Linux now runs on many systems with different processors – although in the early 1990s Torvalds did not believe Linux could be ported to anything beyond the 386.

Mac OS X

Now let's switch to another UNIX-compatible operating system – Mac OS X. The early versions of Mac OS, up to Mac OS 9, were not based on UNIX, so we will not dwell on them. The interesting part for us began after Steve Jobs was forced out of Apple in 1985 and founded NeXT, a company that developed computers and software for them. NeXT hired the programmer Avadis "Avie" Tevanian, who had previously worked on the Mach microkernel for a UNIX-compatible operating system being developed at Carnegie Mellon University. The Mach kernel was intended as a replacement for the BSD UNIX kernel.

NeXT company logo

Avie Tevanian led the team developing a new UNIX-compatible operating system called NeXTSTEP. To avoid reinventing the wheel, NeXTSTEP was built on that same Mach kernel. From a programming point of view, NeXTSTEP, unlike many operating systems of its day, was object-oriented, with a central role played by the Objective-C programming language, which is now widely used in Mac OS X. The first version of NeXTSTEP was released in 1989. Although NeXTSTEP was originally designed for Motorola 68000-series processors, in the early 1990s it was ported to the 80386 and 80486. Business did not go well for NeXT, and in 1996 Apple offered to buy the company from Jobs in order to use NeXTSTEP in place of Mac OS. We could also recount the rivalry between NeXTSTEP and BeOS, which ended in NeXTSTEP's victory, but that would lengthen an already long story; besides, BeOS has no relation to UNIX, so it does not concern us here – although it was a very interesting system in its own right, and it is a pity that its development was cut short.

A year later, when Jobs returned to Apple, the work of adapting NeXTSTEP for Apple computers continued, and a few years later the operating system was ported to PowerPC and Intel processors. Thus, the server version, Mac OS X Server 1.0, was released in 1999, and the version for end users, Mac OS X (10.0), followed in 2001.

Later, an operating system for the iPhone was developed on the basis of Mac OS X; it became known as Apple iOS. The first version of iOS was released in 2007. The iPad runs on the same operating system.

Conclusion

After all of the above, you may wonder: what exactly counts as a UNIX operating system? There is no single answer. From a formal point of view there is the Single UNIX Specification – a standard an operating system must satisfy in order to be called UNIX. Do not confuse it with the POSIX standard, which even a non-UNIX-like operating system can follow. Incidentally, the name POSIX was proposed by the same Richard Stallman, and formally the POSIX standard is numbered ISO/IEC 9945. Certification against the Single UNIX Specification is expensive and time-consuming, so not many operating systems have gone through it. Certified systems include Mac OS X, Solaris, SCO and several other less well-known ones. Neither Linux nor the *BSDs are on that list, yet no one doubts their "UNIX-ness". Therefore, the programmer and writer Eric Raymond proposed two additional criteria for deciding whether a given operating system is UNIX-like. The first is genetic inheritance of source code from the original UNIX developed at AT&T and Bell Labs – the BSD systems fall into this group. The second is being "UNIX in functionality": operating systems that behave very much as the UNIX specification describes, but have not obtained a formal certificate and have no connection to the source code of the original UNIX. This group includes Linux, Minix and QNX.

We'll probably stop here, otherwise this would grow far too long. This overview covered mainly the history of the best-known operating systems – the BSD variants, Linux, Mac OS X, Solaris – while some UNIXes, such as QNX, Plan 9, Plan B and a few others, were left out. Who knows, maybe we'll get to them in the future.

Links

  • Hackers, heroes of the computer revolution
  • FreeBSD Manual

All pictures are taken from Wikipedia

The UNIX system has stood the test of time and survived.

In relation to this system, a family of standards has been developed:

  • POSIX 1003.1-1988, 1990 – describes the UNIX OS system calls (system entry points, i.e. the Application Programming Interface, API)
  • POSIX 1003.2-1992 – defines the command interpreter and the set of utilities of the UNIX OS
  • POSIX 1003.1b-1993 – additions related to real-time applications
  • X/Open – a group coordinating the development of standards for the UNIX OS

Distinctive features of the UNIX OS

    The system is written in a high-level language (C), which makes it easy to understand, change and transfer to other hardware platforms. UNIX is one of the most open systems.

    UNIX is a multitasking, multiuser system with a wide range of services. One server can serve requests from a large number of users, and only a single system needs to be administered.

    Availability of standards. Despite the variety of versions, the basis of the entire UNIX family is a fundamentally identical architecture and a number of standard interfaces, which simplifies the transition of users from one system to another.

    Simple yet powerful modular user interface. There is a certain set of utilities, each of which solves a highly specialized problem, and from them it is possible to construct complex software processing systems.

    Using a single hierarchical, easily maintained file system that provides access to data stored in files on disk and to computer devices through a unified file system interface.

    Quite a large number of applications, including freely distributed ones.

Basic architecture of the UNIX operating system. Model of the UNIX system.

UNIX OS kernel structure.

UNIX follows a two-level system model: the kernel and applications.

The kernel directly interacts with the computer hardware, isolating application programs from the hardware features of the computing system.

The kernel has a set of services provided to application programs. These include input/output operations, creation and control of processes, interaction between processes, signals, etc.

All applications request kernel services through the system call interface.

The second level consists of applications or tasks, both system ones, which determine the overall functionality of the system, and application ones, which provide the UNIX user interface. The interaction scheme of all applications with the kernel is the same.

The kernel provides the basic functionality of the operating system: it creates and manages processes, allocates memory, and provides access to files and peripheral devices. Application tasks interact with the kernel through a standard system call interface, which represents the set of kernel services and defines the format of service requests.

A process requests a kernel service through a system call to a specific procedure; in form, a system call looks like an ordinary C library function call. The kernel processes the request on behalf of the process and returns the necessary data to it.

The kernel consists of three main subsystems:

1) file subsystem;

2) input-output subsystem;

3) process and memory management subsystem.

The file subsystem provides a unified interface for accessing data located on disk drives and on peripheral devices. The same read/write functions can be used both when working with files on disk and when transferring data to or from a terminal, printer, or other external device.

The file subsystem controls file access rights, performs file placement and deletion operations, and writes and reads data.

Since most application functions work through the file system interface, file access rights largely determine a user's privileges in the system; this is how the privileges of individual users are formed.

There are three user categories associated with each file:

  • the owner;
  • the owning group;
  • all other users.

The file subsystem redirects requests addressed to peripheral devices to the corresponding modules of the input/output subsystem.

The input/output subsystem processes requests from the file subsystem and the process control subsystem to access peripheral devices, provides the necessary data buffering and interacts with device drivers.

Drivers are special kernel modules that directly serve external devices.

The process and memory management subsystem controls the creation and deletion of processes, the distribution of system resources (memory and processor time) between processes, process synchronization, and interprocess communication.

System resources are allocated by a special kernel task called the process scheduler. The scheduler starts system processes and ensures that no single process monopolizes shared system resources.

The memory management module allocates RAM to application tasks and supports virtual memory: part of a process can be kept in secondary storage (the hard disk) and moved into RAM as needed.

A process releases the processor when it starts a long I/O operation or when its time slice expires. The scheduler then selects the ready process with the highest priority and starts it.

The interprocess communication module is responsible for notifying processes about events using signals and for providing the ability to transfer data between different processes.

The first meaning of the term "file system" concerns the structures into which files on storage media can be organized. There are several types of such structures – linear, tree-like, object-based and others – but today only tree structures are in wide use.

In a tree structure, each file is located in a specific file container – a directory – and each directory is in turn located within some other directory. Through this nesting of file system elements (files and directories) within each other, a tree is built whose internal vertices are non-empty directories and whose leaves are files or empty directories. The root of this tree is called the root directory and is denoted by a special character or group of characters (for example, "C:" in the Windows operating system). Each file has a name that defines its location in the file system tree; the full file name consists of the names of all the nodes on the path from the root to the file (or directory), written from left to right and separated by special delimiter characters.

A great number of file systems exist today, each designed for a particular purpose: fast access to data, data integrity in the event of system failures, simplicity of implementation, compact data storage, and so on. Among this variety, however, we can single out file systems that share a number of characteristics, namely:

Files and directories are identified not by names but by index nodes (i-nodes) – indexes into a general per-file-system array of files. An i-node stores information about the data blocks used on the medium, as well as the file length, the file's owner, access rights, and other service information, collectively called the file's metadata. Logical "name–i-node" associations are nothing more than the contents of directories.

Thus, each file corresponds to exactly one i-node but can be associated with several names – in UNIX these are called hard links (see Figure 1.22, "Example of a hard link"). A file is deleted when the last hard link to it is deleted.

An important feature of such file systems is that file names are case-sensitive: the files test.txt and TEST.txt are different (that is, they are different entries in the directory).

Certain blocks of the physical storage medium (fixed for a given file system) hold the so-called superblock. The superblock is the most critical area of the file system, containing the information needed for the operation of the file system as a whole and for its identification. It holds a "magic number" – an identifier distinguishing this file system from others – a list of free blocks, a list of free i-nodes, and some other service information.

  • Besides directories and regular files for storing information, the file system may contain the following types of files:

    Special device file

    Provides access to a physical device. When such a file is created, the device type (block or character) is specified, along with the major number – the index of the driver in the operating system's driver table – and the minor number – a parameter passed to a driver that serves multiple devices, specifying which "subdevice" is meant (for example, which of several IDE devices or COM ports).

    Named pipe

    Symbolic link

    A special type of file whose contents are not data but the name of some other file (see Figure 1.23, "Example of a symbolic link"). To the user, such a file is indistinguishable from the one it points to.

    A symbolic link has a number of advantages over a hard link: it can link files in different file systems (after all, i-node numbers are unique only within one file system), and deletion is more transparent – the link can be deleted entirely independently of the target file.

    Socket
  • Such file systems inherit the features of the original UNIX. These include, for example: s5 (used in versions of UNIX System V), ufs (BSD UNIX), ext2, ext3, reiserfs (Linux), and qnxfs (QNX). All of these file systems differ in the format of their internal structures but are compatible in their basic concepts.

    Directory tree

    The second meaning of the term "file system" leads us to the set of procedures, defined earlier, that access files on various media. A distinctive feature of the UNIX family of operating systems is a single file system tree spanning any number of storage media, with the same or different types of file systems on them. This is achieved by mounting – temporarily substituting the tree of one file system in place of a directory of another – so that instead of several unrelated trees, the system has one large branched tree with a single root directory.

    The file subsystem of UNIX operating systems has a unified mechanism for processing file requests – the file system switch, or virtual file system (VFS). VFS presents the user with a standard set of functions (an interface) for working with files, regardless of where the files reside or which file system they belong to.

    UNIX standards specify that the root directory of the single file system tree is named / , and the same character serves as the separator when forming a fully qualified file name. A full file name might then be, for example, /usr/share/doc/bzip2/README . Given a full file name, the task of VFS is to locate the file in the file system tree, determine its type at that point in the tree, and "switch" – that is, hand the file over for further processing to the driver of the specific file system. This approach allows an almost unlimited number of different file systems on one computer under one operating system, and the user need not even know that the files physically reside on different storage media.

    The use of common names of the main files and directory structure greatly facilitates work in the operating system, its administration and portability. Some of these structures are used when the system starts, some are used during operation, but all of them are of great importance for the OS as a whole, and violation of this structure can lead to the inoperability of the system or its individual components.

    Figure 1.24. Standard directories in the UNIX file system

    Let us give a short description of the main system directories, formally described by a special standard – the Filesystem Hierarchy Standard. All directories can be divided into two groups: those for static (rarely changing) information – /bin, /usr – and those for dynamic (frequently changing) information – /var, /tmp. Based on this, administrators can place each of these directories on its own medium with the appropriate characteristics.

    Root directory

    The root directory / is the basis of any UNIX file system. All other directories and files are located within the structure (tree) generated by the root directory, regardless of their physical location.

    /bin

    This directory contains frequently used general-purpose commands and utilities – all the basic commands that are available even when only the root file system is mounted. Examples: ls, cp, sh, and so on.

    /boot

    The directory contains everything necessary for the operating system boot process: bootloader program, operating system kernel image, etc.

    /dev

    This directory contains special device files, which serve as the access interface to peripheral devices. Having such a directory does not mean that special device files cannot be created elsewhere; it is simply convenient to have one directory for all files of this type.

    /etc

    This directory contains system configuration files. Examples include /etc/fstab, which lists the file systems to be mounted, and /etc/resolv.conf, which specifies rules for composing local DNS queries. Among the most important files are the system initialization and deinitialization scripts. In systems inheriting from UNIX System V, the directories /etc/rc0.d through /etc/rc6.d are allocated for them, together with a common description file, /etc/inittab.

    /home (optional)

    The directory contains the home directories of users. Its existence in the root directory is not necessary and its contents depend on the characteristics of a particular UNIX-like operating system.

    /lib

    Directory for static and dynamic libraries, necessary to run programs located in the /bin and /sbin directories.

    /mnt

    A standard directory for temporarily mounting file systems such as floppy disks, flash disks, CD-ROMs, etc.

    /root (optional)

    The directory contains the superuser's home directory. Its existence in the root directory is not necessary.

    /sbin

    This directory contains commands and utilities for the system administrator. Examples: route, halt, init, and so on. The /usr/sbin and /usr/local/sbin directories serve similar purposes.

    /usr

    This directory follows the structure of the root directory - it contains the /usr/bin, /usr/lib, /usr/sbin directories, which serve similar purposes.

    The /usr/include directory contains C header files for various libraries located on the system.

    The /usr/local directory is the next level of repetition of the root structure; it is used to store programs installed by the administrator in addition to the standard operating system distribution.

    The /usr/share directory stores immutable data for installed programs. Of particular interest is the /usr/share/doc directory, which contains documentation for all installed programs.

    /var, /tmp

    Used to store temporary data of processes – system and user, respectively.





    
