Virtualization definition. Free server virtualization platforms


Recently, many companies, not only in the IT sector but in other areas as well, have begun to take a serious look at virtualization technologies. Home users, too, have discovered the reliability and convenience of virtualization platforms that let them run several operating systems in virtual machines simultaneously. According to various IT market researchers, virtualization is currently among the most promising technologies. The market for virtualization platforms and management tools is growing rapidly: new players appear on it periodically, and large vendors are busily acquiring small companies that develop software for virtualization platforms and tools for making virtual infrastructures more efficient.

Meanwhile, many companies are not yet ready to invest heavily in virtualization because they cannot accurately assess the economic effect of introducing the technology and lack sufficiently qualified personnel. While many Western countries already have professional consultants who can analyze an IT infrastructure, prepare a plan for virtualizing a company's physical servers, and assess the profitability of the project, in Russia such people are still very few. The situation will of course change in the coming years, and as companies come to appreciate the benefits of virtualization, specialists will appear with enough knowledge and experience to implement virtualization technologies at various scales. For now, many companies are only running local experiments with virtualization tools, mostly on free platforms.

Fortunately, many vendors offer, alongside their commercial virtualization systems, free platforms with limited functionality, so that companies can use virtual machines in parts of the production environment and, at the same time, evaluate the possibility of moving to more serious platforms. In the desktop sector, users are also starting to use virtual machines in their daily work and do not place heavy demands on virtualization platforms, so free tools are usually considered first.

Leaders in virtualization platforms

The development of virtualization tools at various levels of system abstraction has been going on for more than thirty years. However, only relatively recently has the hardware of servers and desktop PCs made it possible to take this technology seriously for operating system virtualization. Over the years, both companies and enthusiasts have developed various tools for virtualizing operating systems, but not all of them are actively maintained today or in a state fit for effective use. Today the leaders in virtualization tools are VMware, Microsoft, SWSoft (together with its Parallels subsidiary), XenSource, Virtual Iron and InnoTek. Besides the products of these vendors, there are also projects such as QEMU and Bochs, as well as virtualization tools from operating system developers (for example, Solaris Containers), which are used by a narrow circle of specialists rather than widely.

Companies that have achieved some success in the market for server virtualization platforms distribute some of their products for free, while relying not on the platforms themselves, but on management tools, without which it is difficult to use virtual machines on a large scale. In addition, commercial desktop virtualization platforms designed for use by IT professionals and software development companies have significantly more capabilities than their free counterparts.

However, if you use server virtualization on a small scale, in the SMB (Small and Medium Business) sector, free platforms may well fill a niche in a company's production environment and provide significant cash savings.

When to use free platforms

If you do not need mass deployment of virtual servers, constant monitoring of physical server performance under changing loads, or a high degree of availability, virtual machines on free platforms can support an organization's internal servers. As the number of virtual servers grows and their consolidation on physical hosts increases, powerful tools for managing and maintaining the virtual infrastructure become necessary. If you need storage networks such as a Storage Area Network (SAN), backup and disaster recovery tools, or hot migration of running virtual machines to other hardware, the capabilities of free virtualization platforms may not suffice. It should be noted, however, that free platforms are constantly updated and acquire new functions, which expands their scope of use.

Another important point is technical support. Free virtualization platforms either live within the open source community, where enthusiasts develop and support the product, or are supported by the platform vendor. The first option assumes active user participation in development and the filing of bug reports, and does not guarantee that your problems with the platform will be solved; in the second case, technical support is most often not provided at all. The personnel deploying free platforms must therefore be highly qualified.

Free desktop virtualization platforms are best used for isolating user environments, decoupling them from specific hardware, for educational purposes, for studying operating systems, and for safe testing of various software. It is unlikely that free desktop platforms should be used on a large scale for software development or testing in software companies, since they do not have sufficient functionality for this. However, for home use, free virtualization products are quite suitable, and there are even examples where virtual machines based on free desktop virtualization systems are used in a production environment.

Free server virtualization platforms

In almost any organization using a server infrastructure, there is often a need to use both standard network services (DNS, DHCP, Active Directory) and several internal servers (applications, databases, corporate portals) that do not experience heavy loads and are distributed across different physical servers. These servers can be consolidated into several virtual machines on one physical host. At the same time, the process of migrating servers from one hardware platform to another is simplified, hardware costs are reduced, the backup procedure is simplified and their manageability is increased. Depending on the types of operating systems that run network services and the requirements for the virtualization system, you can choose the appropriate free product for the corporate environment. When choosing a server virtualization platform, it is necessary to take into account performance characteristics (they depend both on the virtualization technology used and on the quality of implementation of various components of the manufacturers' platform), ease of deployment, the ability to scale the virtual infrastructure and the availability of additional management, maintenance and monitoring tools.


OpenVZ

OpenVZ is an open source virtualization platform developed by a community of independent developers supported by SWSoft. The product is distributed under the GNU GPL license. The OpenVZ core is part of Virtuozzo, a commercial SWSoft product with greater capabilities than OpenVZ. Both products use a distinctive virtualization technique: virtualization at the level of operating system instances. This method is less flexible than full virtualization (only Linux operating systems can be run, since one kernel is shared by all virtual environments), but it keeps performance losses minimal (about 1-3 percent). Systems running under OpenVZ cannot be called full-fledged virtual machines; they are rather virtual environments (Virtual Environments, VE) in which hardware components are not emulated. Even so, different Linux distributions can be installed as virtual environments on the same physical server, and each virtual environment has its own process tree, system libraries and users, and can use network interfaces in its own way.

Virtual environments appear to users and applications running in them to be almost completely isolated environments that can be managed independently of other environments. Due to these factors and high performance, OpenVZ and SWSoft Virtuozzo products have become most widespread in supporting virtual private servers (VPS) in hosting systems. Based on OpenVZ, it is possible to provide clients with several dedicated virtual servers based on the same hardware platform, each of which can have different applications installed and which can be rebooted separately from other virtual environments. The OpenVZ architecture is presented below:

Some independent experts conducted a comparative analysis of the performance of virtual servers based on the commercial platforms SWSoft Virtuozzo and VMware ESX Server for hosting purposes and concluded that Virtuozzo copes better with this task. Of course, the OpenVZ platform on which Virtuozzo is built has the same high performance, but it lacks the advanced controls that Virtuozzo has.

The OpenVZ environment is also great for training purposes, where anyone can experiment with their own isolated environment without endangering other environments on that host. Meanwhile, using the OpenVZ platform for other purposes is not advisable at the moment due to the obvious inflexibility of the virtualization solution at the operating system level.


Virtual Iron

Virtual Iron entered the virtualization platform market relatively recently, but quickly began competing with such serious server platform vendors as VMware, XenSource and SWSoft. Virtual Iron's products are based on the free Xen hypervisor, maintained by the open source Xen community. Virtual Iron is a virtualization platform that does not require a host operating system (a so-called bare-metal platform) and is aimed at large enterprise environments. Virtual Iron products provide all the tools needed to create, manage, and integrate virtual machines into a company's production environment. Virtual Iron supports 32- and 64-bit guest and host operating systems, as well as virtual SMP (Symmetric Multi-Processing), which lets virtual machines use multiple processors.

Virtual Iron originally used paravirtualization to run guests in virtual machines, just like XenSource's products based on the Xen hypervisor. Paravirtualization requires special versions of the guest systems whose source code has been modified to run on the virtualization platform. This means changing the operating system kernel, which is not a big problem for open source operating systems but is unacceptable for proprietary closed systems such as Windows. Performance losses in paravirtualized systems are insignificant. As practice has shown, however, operating system manufacturers have been reluctant to include paravirtualization support in their products, so the technique has not gained much popularity. As a result, Virtual Iron was one of the first to adopt hardware virtualization, which allows unmodified guest systems to run. The latest version of the platform, Virtual Iron 3.7, runs virtual machines only on server platforms with hardware virtualization support. The following processors are officially supported:

  • Intel® Xeon® 3000, 5000, 5100, 5300, 7000, 7100 Series
  • Intel® Core™ 2 Duo E6000 Series
  • Intel® Pentium® D-930, 940, 950, 960
  • AMD Opteron™ 2200 or 8200 Series Processors
  • AMD Athlon™ 64 x2 Dual-Core Processor
  • AMD Turion™ 64 x2 Dual-Core Processor
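Since the platform requires hardware virtualization support, it is worth verifying that the processor actually advertises it before attempting an installation. A minimal sketch, assuming a Linux host where CPU capabilities are listed in /proc/cpuinfo (the vmx flag denotes Intel VT, svm denotes AMD-V):

```python
def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if the CPU flags advertise Intel VT (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            supported = has_hw_virt(f.read())
        print("hardware virtualization:", "yes" if supported else "no")
    except FileNotFoundError:
        print("/proc/cpuinfo not found (not a Linux system)")
```

Note that a BIOS setting can still disable the feature even when the flag is present, so this check is necessary but not sufficient.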

In addition, on the Virtual Iron website you can find lists of equipment certified by the company for its virtualization platform.

Virtual Iron products come in three editions:

  • Single Server Virtualization and Management
  • Multiple Server Virtualization and Management
  • Virtual Desktop Infrastructure (VDI) Solution

Currently the free edition is the Single Server one, which allows Virtual Iron to be installed on one physical host in the organization's infrastructure. It supports the iSCSI protocol, SAN networks and local storage systems.

The free edition of Single Server has the following minimum installation requirements:

  • 2 GB RAM
  • CD-ROM drive
  • 36 GB disk space
  • Ethernet network interface
  • Fiber channel network interface (optional)
  • Support for hardware virtualization in the processor

Virtual Iron lets you appreciate the full capabilities of hardware virtualization and virtual machine management tools. The free edition is intended primarily for evaluating the effectiveness and convenience of the virtualization platform and its management tools, but it can also be used in a production environment to support a company's internal servers. The absence of a separate host platform means, first, that no money is spent on a host OS license and, second, that performance losses for supporting guest systems are reduced. A typical application of the free edition of Virtual Iron is deploying several virtual servers in the infrastructure of a small SMB organization in order to decouple vital servers from the hardware and make them more manageable. Later, when the commercial version of the platform is purchased, the virtual server infrastructure can be expanded and features such as effective backup tools and "hot" migration of virtual servers between hosts can be used.


VMware Server

In terms of convenience and ease of use, VMware Server is the undisputed leader, and in performance it does not lag behind commercial platforms (especially on Linux host systems). Its disadvantages include the lack of hot migration support and of backup tools, which, however, are most often provided only by commercial platforms. VMware Server is an excellent choice for quickly deploying an organization's internal servers, including from pre-built virtual server templates, which can be found in abundance on various online resources.

Results

Summing up the review of free server virtualization platforms, we can say that each of them currently occupies its own niche in the SMB sector, where through the use of virtual machines one can significantly increase the efficiency of the IT infrastructure, make it more flexible and reduce the cost of purchasing equipment. Free platforms, first of all, allow you to evaluate the capabilities of virtualization not on paper and experience all the advantages of this technology. In conclusion, here is a summary table of the characteristics of free virtualization platforms that will help you choose the appropriate server platform for your purposes. After all, it is through free virtualization that the path to further investment in virtualization projects based on commercial systems lies.

| Platform name, developer | Host OS | Officially supported guest OS | Virtual SMP support | Virtualization technique | Typical use | Performance |
| --- | --- | --- | --- | --- | --- | --- |
| OpenVZ (open source community project supported by SWSoft) | Linux | Various Linux distributions | Yes | Operating system level virtualization | Isolation of virtual servers (including for hosting services) | No losses |
| Virtual Iron (Virtual Iron Software, Inc.) | Not required | Windows, Red Hat, SUSE | Yes (up to 8) | Hardware virtualization | Server virtualization in a production environment | Close to native |
| Virtual Server 2005 R2 SP1 (Microsoft) | Windows | Windows, Linux (Red Hat and SUSE) | No | Native virtualization, hardware virtualization | Virtualization of internal servers in a corporate environment | Close to native (with Virtual Machine Additions installed) |
| VMware Server (VMware) | Windows, Linux | DOS, Windows, Linux, FreeBSD, NetWare, Solaris | Yes | Native virtualization, hardware virtualization | Consolidation of small enterprise servers, development/testing | Close to native |
| Xen Express and Xen (XenSource, supported by Intel and AMD) | NetBSD, Linux, Solaris | Linux, NetBSD, FreeBSD, OpenBSD, Solaris, Windows, Plan 9 | Yes | Paravirtualization, hardware virtualization | Developers, testers, IT professionals, server consolidation for small enterprises | Close to native (some losses in networking and intensive disk usage) |

Subject: Introduction to virtual machines. Methods for installing Unix-like and Windows-like operating systems on a virtual machine.

Objective: to become familiar with virtualization software products, learn how to install various operating systems in a virtual machine, and gain the skills to configure them.

Theoretical information

Virtualization is the isolation of computing processes and resources from each other: a virtual view of resources that is not constrained by their implementation, physical configuration or geographic location. The virtualized resources are typically computing power and data storage. In a broad sense, virtualization means hiding the real implementation of a process or object from the one who uses it. In computer technology, the term "virtualization" usually refers to the abstraction of computing resources: the user is provided with a system that "encapsulates" (hides) its own implementation. Simply put, the user works with a convenient representation of the object, and it does not matter to him how the object is actually structured.

The term "virtualization" itself appeared in computer technology in the sixties of the last century, along with the term "virtual machine", which denoted the product of virtualizing a software and hardware platform.

Types of virtualization

The concept of virtualization can be divided into two fundamentally different categories:

    platform virtualization

The products of this type of virtualization are virtual machines: software abstractions running on top of real hardware and software systems.

    resource virtualization

This type of virtualization aims to combine or simplify the presentation of hardware resources for the user and obtain certain user abstractions of equipment, namespaces, networks, etc.

During the laboratory work we will become familiar with platform virtualization for organizing guest operating systems.

Platform virtualization is the creation of software systems based on existing hardware and software systems, which may be dependent on or independent of them. The system that provides the hardware resources and software is called the host, and the systems it simulates are called guests. For guest systems to function stably on the host platform, the host software and hardware must be sufficiently reliable and provide the necessary set of interfaces for accessing its resources.

Virtual machine:

A software and/or hardware system that emulates the hardware of a certain platform (the target, or guest, platform) and executes programs for the target platform on the host platform (the host);

Or it virtualizes a certain platform and creates environments on it that isolate programs and even operating systems from each other (sandbox).

There are several types of platform virtualization, each of which has its own approach to the concept of “virtualization”.

Full emulation (simulation)

With this type of virtualization, the virtual machine completely virtualizes all the hardware while keeping the guest operating system unchanged. This approach makes it possible to emulate different hardware architectures. Its main disadvantage is that the emulated hardware slows the guest system down very significantly, which makes working with it quite inconvenient.

Partial emulation (native virtualization)

In this case, the virtual machine virtualizes only as much of the hardware as is needed to run guests in isolation. This approach allows you to run only guest operating systems designed for the same architecture as the host, and several guest instances can run simultaneously. This type of virtualization offers a significant performance gain over full emulation and is widely used today. To improve performance further, virtualization platforms of this kind use a special "layer" between the guest operating system and the hardware, the hypervisor, which lets the guest system access hardware resources directly. The hypervisor, also called a virtual machine monitor, is one of the key concepts in the world of virtualization.

Examples of products for native virtualization: VMware products (Workstation, Server, Player), Microsoft Virtual PC, VirtualBox, Parallels Desktop and others.

Partial virtualization ("address space virtualization")

With this approach, the virtual machine simulates several instances of the hardware environment (but not all), in particular, the address space. This type of virtualization allows you to share resources and isolate processes, but does not allow you to separate instances of guest operating systems. Strictly speaking, with this type of virtualization, virtual machines are not created by the user, but some processes are isolated at the operating system level.

Paravirtualization

When using paravirtualization, there is no need to simulate the hardware, but instead (or in addition to this), a special application programming interface (API) is used to interact with the guest operating system.

Operating system level virtualization

The essence of this type of virtualization is the virtualization of a physical server at the operating system level in order to create several secure virtualized servers on one physical one. The guest system, in this case, shares the use of one kernel of the host operating system with other guest systems. A virtual machine is an environment for applications that run in isolation. This type of virtualization is used when organizing hosting systems, when it is necessary to support several virtual client servers within one kernel instance.

Application Layer Virtualization

This type of virtualization is unlike all the others: whereas in the previous cases virtual environments or virtual machines are created to isolate applications, here the application itself is placed in a container together with the elements it needs to run: registry entries, configuration files, user and system objects. The result is an application that does not require installation on a similar platform. When such an application is transferred to another machine and launched, the virtual environment created for the program resolves conflicts between it, the operating system, and other applications. This method of virtualization resembles the behavior of interpreters for various programming languages (indeed, the Java Virtual Machine (JVM) interpreter also falls into this category).

Quick reference on virtual machines:

Oracle VirtualBox is a cross-platform free (GNU GPL) virtualization software product for the operating systems Microsoft Windows, Linux, FreeBSD, Mac OS X, Solaris/OpenSolaris, ReactOS, DOS and others. Both 32-bit and 64-bit OS versions are supported.

VMware Workstation - allows you to create and run simultaneously several virtual machines (x86 architecture), each of which runs its own guest operating system. Both 32-bit and 64-bit OS versions are supported.

VMware Player is a free (for personal, non-commercial use) software product designed for creating (starting from version 3.0) and launching ready-made virtual machines (created in VMware Workstation or VMware Server). A free solution with limited functionality compared to VMware Workstation.

Microsoft Virtual PC is a virtualization software package for the Windows operating system.

Information technologies have brought many useful and interesting things to the life of modern society. Every day, inventive and talented people come up with more and more new applications for computers as effective tools for production, entertainment and collaboration. Many different software and hardware, technologies and services allow us to improve the convenience and speed of working with information every day. It is becoming more and more difficult to single out truly useful technologies from the stream of technologies falling upon us and learn to use them with maximum benefit. This article will talk about another incredibly promising and truly effective technology that is rapidly breaking into the world of computers - virtualization technology.

In a broad sense, the concept of virtualization is the hiding of the real implementation of a process or object from its true representation for the one who uses it. The product of virtualization is something convenient for use, in fact, having a more complex or completely different structure, different from that which is perceived when working with the object. In other words, there is a separation of representation from the implementation of something. In computer technology, the term “virtualization” usually refers to the abstraction of computing resources and the provision to the user of a system that “encapsulates” (hides) its own implementation. Simply put, the user works with a convenient representation of the object, and it does not matter to him how the object is structured in reality.

The term "virtualization" itself appeared in computer technology in the sixties of the last century along with the term "virtual machine", denoting the product of virtualizing a software and hardware platform. At that time, virtualization was more an interesting technical curiosity than a promising technology. Developments in the field of virtualization in the sixties and seventies were carried out only by IBM. With the advent of the experimental paging system on the IBM M44/44X computer, the term "virtual machine" was used for the first time, replacing the earlier term "pseudo machine". Later, in the System/360 and System/370 series mainframes, virtual machines could be used to run previous versions of operating systems. Until the end of the nineties, no one else ventured to use this original technology seriously. In the nineties, however, the prospects of the virtualization approach became obvious: with the growth of hardware capacity in both personal computers and server solutions, it would soon be possible to run several virtual machines on one physical platform.

In 1997, Connectix released the first version of Virtual PC for the Macintosh platform, and in 1998 it patented its virtualization techniques. Connectix was subsequently acquired by Microsoft, and VMware by EMC; the two companies are now the main potential competitors in the future virtualization technology market. Potential, because VMware is currently the undisputed leader in this market, though Microsoft, as always, has an ace up its sleeve.

Since their inception, the terms “virtualization” and “virtual machine” have acquired many different meanings and are used in different contexts. Let's try to understand what virtualization really is.

Types of virtualization

The concept of virtualization can be divided into two fundamentally different categories:

  • platform virtualization

The product of this type of virtualization is virtual machines - certain software abstractions that run on the platform of real hardware and software systems.

  • resource virtualization

This type of virtualization aims to combine or simplify the presentation of hardware resources for the user and obtain certain user abstractions of equipment, namespaces, networks, etc.


Platform virtualization

Platform virtualization refers to the creation of software systems based on existing hardware and software systems, dependent or independent of them. The system that provides the hardware resources and software is called the host, and the systems it simulates are called guests. In order for guest systems to function stably on the host system platform, it is necessary that the host software and hardware be sufficiently reliable and provide the necessary set of interfaces to access its resources. There are several types of platform virtualization, each of which has its own approach to the concept of “virtualization”. The types of platform virtualization depend on how fully the hardware is simulated. There is still no consensus on virtualization terms, so some of the types of virtualization listed below may differ from what other sources provide.

Types of platform virtualization:

  • Full emulation (simulation).

With this type of virtualization, the virtual machine completely virtualizes all the hardware while keeping the guest operating system unchanged. This approach makes it possible to emulate different hardware architectures: for example, virtual machines with guests for x86 processors can run on platforms with other architectures (say, on Sun RISC servers). For a long time this type of virtualization was used to develop software for new processors before they were physically available, and such emulators are also used for low-level debugging of operating systems. The main disadvantage of the approach is that the emulated hardware slows the guest system down very significantly, which makes working with it quite inconvenient; outside system software development and educational use, the approach is therefore rare.

Examples of products for creating emulators: Bochs, PearPC, QEMU (without acceleration), Hercules Emulator.
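Why full emulation is slow is easiest to see in miniature: the emulator decodes and executes every guest instruction in software, so each one costs many host instructions. The tiny two-register instruction set below is invented purely for illustration:

```python
def emulate(program):
    """Interpret a toy guest ISA entirely in software, as a full emulator does."""
    regs = {"A": 0, "B": 0}
    for op, *args in program:
        if op == "LOAD":        # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":       # ADD dst, src  ->  dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs

guest_code = [("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("HALT",)]
print(emulate(guest_code))  # {'A': 5, 'B': 3}
```

Each guest LOAD or ADD here triggers a dictionary lookup, a branch and several host-level operations, which is exactly the overhead that makes fully emulated guests feel sluggish.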

  • Partial emulation (native virtualization).

In this case, the virtual machine virtualizes only as much of the hardware as is needed to run guests in isolation. This approach allows you to run only guest operating systems designed for the same architecture as the host, and several guest instances can run simultaneously. This type of virtualization offers a significant performance gain over full emulation and is widely used today. To improve performance further, virtualization platforms of this kind use a special "layer" between the guest operating system and the hardware, the hypervisor, which lets the guest system access hardware resources directly. The hypervisor, also called the virtual machine monitor, is one of the key concepts in the world of virtualization. Acting as the link between guest systems and the hardware, it significantly increases platform performance, bringing it close to that of the physical platform.

The disadvantages of this type of virtualization include the dependence of virtual machines on the architecture of the hardware platform.

Examples of native virtualization products: VMware Workstation, VMware Server, VMware ESX Server, Virtual Iron, Virtual PC, VirtualBox, Parallels Desktop and others.
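The hypervisor's role can be sketched as a toy model: unprivileged guest computation runs directly on the processor, while privileged instructions trap into the monitor, which applies them only to that guest's virtual state. The class and operation names below are illustrative, not a real VMM interface:

```python
class Hypervisor:
    """Toy virtual machine monitor: keeps private virtual state per guest."""
    def __init__(self):
        self.vms = {}

    def create_vm(self, name):
        self.vms[name] = {"interrupts_enabled": True}

    def trap(self, vm, privileged_op):
        # On real hardware the CPU faults when a guest executes a privileged
        # instruction; the monitor catches the fault and emulates its effect
        # against the guest's virtual state instead of the physical machine.
        state = self.vms[vm]
        if privileged_op == "cli":   # guest disables its *virtual* interrupts
            state["interrupts_enabled"] = False
        elif privileged_op == "sti":
            state["interrupts_enabled"] = True
        return state

hv = Hypervisor()
hv.create_vm("guest1")
hv.create_vm("guest2")
hv.trap("guest1", "cli")
# guest1's virtual interrupt flag changed; guest2 and the host are untouched
print(hv.vms["guest1"], hv.vms["guest2"])
```

The point of the sketch is the mediation itself: no guest ever touches the shared physical state, which is what keeps several guests from interfering with each other.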

  • Partial virtualization ("address space virtualization").

With this approach, the virtual machine simulates several instances of the hardware environment (but not all), in particular, the address space. This type of virtualization allows you to share resources and isolate processes, but does not allow you to separate instances of guest operating systems. Strictly speaking, with this type of virtualization, virtual machines are not created by the user, but some processes are isolated at the operating system level. Currently, many of the well-known operating systems use this approach. An example is the use of UML (User-mode Linux), in which the “guest” kernel runs in the user space of the base kernel (in its context).

  • Paravirtualization.

With paravirtualization there is no need to simulate the hardware; instead (or in addition), a special application programming interface (API) is used for interaction with the guest operating system. This approach requires modification of the guest system's code, which the Open Source community does not consider particularly critical. Paravirtualization systems also have their own hypervisor, and the guest's API calls to it are called "hypercalls". Many doubt the prospects of this approach, since at the moment hardware manufacturers' virtualization efforts are aimed at natively virtualized systems, while paravirtualization support must come from operating system vendors, who have little faith in the capabilities of the tool they offer. Current paravirtualization providers include XenSource and Virtual Iron, which claim that paravirtualization is faster.
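To make the idea of hypercalls concrete, here is a toy Python model (all class and method names are illustrative, not any real hypervisor's API): the paravirtualized guest never touches the "hardware" itself, but asks the hypervisor to perform privileged operations on its behalf.

```python
class Hypervisor:
    """Toy hypervisor: owns the 'hardware' and services hypercalls."""
    def __init__(self, total_pages):
        self.free_pages = list(range(total_pages))
        self.page_owner = {}

    def hypercall_alloc_page(self, guest_id):
        # A paravirtualized guest asks the hypervisor for memory
        # instead of manipulating page tables itself.
        page = self.free_pages.pop()
        self.page_owner[page] = guest_id
        return page


class ParavirtGuest:
    """Guest OS whose kernel was modified to call the hypervisor API."""
    def __init__(self, guest_id, hv):
        self.guest_id, self.hv, self.pages = guest_id, hv, []

    def allocate_memory(self):
        # The modified kernel replaces a privileged instruction
        # with an explicit hypercall.
        self.pages.append(self.hv.hypercall_alloc_page(self.guest_id))


hv = Hypervisor(total_pages=8)
g1, g2 = ParavirtGuest("linux", hv), ParavirtGuest("bsd", hv)
g1.allocate_memory()
g2.allocate_memory()
```

The point of the sketch is the control flow: the privileged operation happens inside the hypervisor, which keeps track of which guest owns what.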

  • Operating system level virtualization.

The essence of this type of virtualization is virtualizing a physical server at the operating-system level in order to create several secure virtual servers on one physical machine. Each guest system shares one kernel of the host operating system with the other guests; a virtual machine here is an isolated environment for applications. This type of virtualization is used in hosting systems, where several virtual client servers must be supported within one kernel instance.

Examples of OS level virtualization: Linux-VServer, Virtuozzo, OpenVZ, Solaris Containers and FreeBSD Jails.
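The defining property of OS-level virtualization, many isolated environments sharing one host kernel, can be sketched as a minimal conceptual model (the names are hypothetical, not OpenVZ or Jails code):

```python
class Kernel:
    """One host kernel, shared by every container (as in OpenVZ or Jails)."""
    def __init__(self, version):
        self.version = version


class Container:
    """Isolated environment: its own process list, but no kernel of its own."""
    def __init__(self, name, kernel):
        self.name, self.kernel, self.processes = name, kernel, []

    def run(self, proc):
        # Processes are isolated per container...
        self.processes.append(proc)


host_kernel = Kernel("5.15")
web = Container("web", host_kernel)
db = Container("db", host_kernel)
web.run("nginx")
db.run("postgres")
# ...but every container points at the very same kernel object.
```

This is why such containers are lightweight, and also why a guest cannot run a different operating system than the host.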

  • Application layer virtualization.

This type of virtualization is unlike the others: whereas in the previous cases virtual environments or virtual machines are created to isolate applications, here the application itself is placed in a container together with the elements it needs to run: registry keys, configuration files, user and system objects. The result is an application that does not require installation on a similar platform. When such an application is moved to another machine and launched, the virtual environment created for it resolves conflicts between the application and the operating system, as well as with other applications. This method of virtualization resembles the behavior of language runtimes (not for nothing does the Java Virtual Machine (JVM) also fall into this category).

Examples of this approach are: Thinstall, Altiris, Trigence, Softricity.
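A simplified sketch of the conflict-resolution idea, assuming a toy key-value "registry" rather than a real Windows registry: the application's reads resolve against its private overlay first and fall through to the system, which is roughly how such containers avoid clashing with the host. Python's `ChainMap` makes the layered lookup explicit:

```python
from collections import ChainMap

# The host's "real" registry/configuration (hypothetical keys).
system_registry = {"PATH": "/usr/bin", "APP_HOME": "/opt/old"}

# Private overlay shipped inside the application's container.
container_overlay = {"APP_HOME": "/bundle/app", "APP_CONF": "/bundle/conf"}

# Reads resolve against the overlay first, then fall through to the
# system, so the app sees its own settings without modifying (or
# conflicting with) the host's configuration.
virtualized_view = ChainMap(container_overlay, system_registry)
```

Writes in real application-virtualization products are handled similarly, being redirected into the container instead of the host.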

Resource virtualization

When describing platform virtualization, we considered the concept of virtualization in a narrow sense, mainly applying it to the process of creating virtual machines. However, if we consider virtualization in a broad sense, we can come to the concept of resource virtualization, which generalizes approaches to creating virtual systems. Resource virtualization allows you to concentrate, abstract, and simplify management of groups of resources such as networks, data stores, and namespaces.

Types of resource virtualization:

  • Combination, aggregation and concentration of components.

This type of resource virtualization refers to the organization of several physical or logical objects into resource pools (groups) that provide convenient interfaces to the user. Examples of this type of virtualization:

  • multiprocessor systems, which appear to the user as one powerful system;
  • RAID arrays and volume managers, which combine several physical disks into one logical disk;
  • virtualization of storage systems, used in building SAN (Storage Area Network) networks;
  • virtual private networks (VPN) and network address translation (NAT), which create virtual spaces of network addresses and names.
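The aggregation idea can be illustrated with a small sketch (a JBOD/RAID-0-style capacity model, not a real volume manager): several physical disks are presented behind a single logical interface whose capacity is their sum.

```python
class PhysicalDisk:
    def __init__(self, size_gb):
        self.size_gb = size_gb


class LogicalVolume:
    """Aggregates several physical disks behind one interface, the way
    a volume manager or a RAID-0 set presents a single large disk."""
    def __init__(self, disks):
        self.disks = disks

    @property
    def size_gb(self):
        # Simple concatenation: capacities add up; the user sees one disk.
        return sum(d.size_gb for d in self.disks)


volume = LogicalVolume([PhysicalDisk(146), PhysicalDisk(146), PhysicalDisk(73)])
```

Real RAID levels trade some of this raw capacity for redundancy, but the user-facing principle, many resources behind one interface, is the same.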

  • Computer clustering and distributed computing (grid computing).

This type of virtualization includes techniques used to combine many individual computers into global systems (metacomputers) that jointly solve a common problem.

  • Resource sharing (partitioning).

When dividing resources in the process of virtualization, any one large resource is divided into several objects of the same type that are convenient for use. In storage area networks, this is called resource zoning.
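A minimal sketch of partitioning, assuming a simple equal split (real zoning schemes are far more flexible): one large resource is divided into several same-sized pieces, with any remainder left unallocated.

```python
def partition(total_gb, n_parts):
    """Split one large resource into n equal, independently usable
    pieces (cf. zoning in a SAN); the remainder stays unallocated."""
    part = total_gb // n_parts
    return [part] * n_parts, total_gb - part * n_parts


parts, leftover = partition(1000, 3)
```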

  • Encapsulation.

Many know this word from object-oriented programming, where it means an object hiding its implementation within itself. Applied to virtualization, it is the process of creating a system that gives the user a convenient interface while hiding the complexity of its implementation. For example, a CPU's use of a cache to speed up computation is not reflected in its external interfaces.

Resource virtualization, unlike platform virtualization, has a broader and vaguer meaning and represents a lot of different approaches aimed at improving the user experience of systems as a whole. Therefore, further we will rely mainly on the concept of platform virtualization, since the technologies associated with this concept are currently the most dynamically developing and effective.

Where is virtualization used?

Operating system virtualization has advanced very well over the past three to four years, both technologically and in a marketing sense. On the one hand, it has become much easier to use virtualization products, they have become more reliable and functional, and on the other hand, many new interesting applications have been found for virtual machines. The scope of application of virtualization can be defined as “the place where there are computers,” but at the moment the following options for using virtualization products can be identified:

Server consolidation.

At the moment, the applications running on servers in companies' IT infrastructures place only a light load on the servers' hardware resources (5-15 percent on average). Virtualization allows you to migrate these physical servers to virtual ones and place them all on one physical server, raising its load to 60-80 percent. This increases equipment utilization and yields significant savings on hardware, maintenance, and electricity.
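The consolidation arithmetic behind these figures can be made explicit. A worked sketch, assuming servers averaging 10 percent load consolidated up to a 70 percent target (the specific numbers are illustrative, drawn from the ranges above):

```python
def consolidation_ratio(avg_load_pct, target_load_pct):
    """How many lightly loaded physical servers can be folded onto
    one host without exceeding the target utilization."""
    return target_load_pct // avg_load_pct


# Servers averaging 10% load, consolidated until the host reaches ~70%.
ratio = consolidation_ratio(10, 70)        # 7 old servers per new host
servers_before = 21
hosts_after = -(-servers_before // ratio)  # ceiling division
```

In this example, 21 underutilized servers collapse into 3 well-utilized hosts, which is where the savings on hardware, maintenance, and power come from.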

Application development and testing.

Many virtualization products allow you to run multiple different operating systems simultaneously, allowing developers and software testers to test their applications on different platforms and configurations. Also, convenient tools for creating “snapshots” of the current state of the system with one click of the mouse and the same simple restoration from this state allow you to create test environments for various configurations, which significantly increases the speed and quality of development.

Use in business.

This use case for virtual machines is the most extensive and creative. It includes everything that may be needed in the daily handling of IT resources in business. For example, based on virtual machines, you can easily create backup copies of workstations and servers (by simply copying a folder), build systems that provide minimal recovery time after failures, etc. This group of use cases includes all those business solutions that take advantage of the main advantages of virtual machines.

Using virtual workstations.

With the advent of virtual machines, it no longer makes sense to tie a workstation to specific hardware. Once you have created a virtual machine with your work or home environment, you can use it on any other computer. You can also use ready-made virtual machine templates (Virtual Appliances) that solve a specific problem (for example, an application server). This concept can be implemented on top of hosting servers that run roaming user desktops (somewhat like mainframes), and in the future the user will be able to take such a desktop along without synchronizing data with a laptop. This use case also makes it possible to create secure user workstations, for example for demonstrating software to a customer: you can limit the time the virtual machine may be used, after which it will no longer start. This use case has great potential.

All of the listed use cases for virtual machines are actually just areas of their application at the moment; over time, undoubtedly, new ways to make virtual machines work in various IT industries will appear. But let's see how things stand with virtualization now.

How virtualization works today

Today, IT infrastructure virtualization projects are being actively implemented by many leading system integrators that are authorized partners of virtualization platform vendors. In the process of virtualizing an IT infrastructure, a virtual infrastructure is created: a set of systems based on virtual machines that support the operation of the entire IT infrastructure, offering many new capabilities while preserving the existing pattern of IT activity. Vendors of various virtualization platforms are ready to provide information about successful virtual infrastructure projects in large banks, industrial companies, hospitals, and educational institutions. The many benefits of virtualization allow companies to save on maintenance, personnel, and hardware while improving business continuity, data replication, and disaster recovery. The virtualization market is also filling up with powerful tools for managing, migrating, and supporting virtual infrastructures, allowing the benefits of virtualization to be used to the fullest. Let's look at exactly how virtualization saves money for companies that implement a virtual infrastructure.

10 reasons to use virtual machines

  • Savings on hardware when consolidating servers.

Significant savings on the purchase of hardware occur when placing several virtual production servers on one physical server. Depending on the virtualization platform vendor, options for workload balancing, control of allocated resources, migration between physical hosts, and backup are available. All this entails real savings on the maintenance, management and administration of server infrastructure.

  • Ability to support older operating systems to ensure compatibility.

When a new version of an operating system is released, the old version can be kept running in a virtual machine until the new OS is fully tested. Conversely, you can bring up the new OS in a virtual machine and try it out without affecting the main system.

  • Ability to isolate potentially dangerous environments.

If any application or component raises doubts about its reliability and security, you can use it in a virtual machine without the risk of damaging vital system components. This isolated environment is also called a sandbox. In addition, you can create virtual machines that are limited by security policies (for example, the machine will stop starting after two weeks).

  • Ability to create required hardware configurations.

Sometimes it is necessary to use a specific hardware configuration (processor time, amount of allocated RAM and disk space) when checking how applications perform under certain conditions. Creating such conditions on a physical machine is quite difficult; in a virtual machine, it's a couple of mouse clicks.

  • Virtual machines can create views of devices that you don't have.

For example, many virtualization systems allow you to create virtual SCSI disks, virtual multi-core processors, etc. This can be useful for creating various types of simulations.

  • Several virtual machines, united in a virtual network, can be running simultaneously on one host.

This feature provides unlimited possibilities for creating virtual network models between several systems on one physical computer. This is especially necessary when you need to simulate a distributed system consisting of several machines. You can also create several isolated user environments (for work, entertainment, surfing the Internet), launch them and switch between them as needed to perform certain tasks.

  • Virtual machines provide excellent learning opportunities for operating systems.

You can create a repository of ready-to-use virtual machines with different guest operating systems and run them as needed for training purposes. They can be subjected to all sorts of experiments with impunity, since if the system is damaged, restoring it from a saved state will take a couple of minutes.

  • Virtual machines improve mobility.

The folder with the virtual machine can be moved to another computer, and the virtual machine can be launched there immediately. There is no need to create any images for migration, and, moreover, the virtual machine is decoupled from specific hardware.

  • Virtual machines can be organized into "application packages".

You can create a virtual environment for a specific use case (for example, a designer's machine, a manager's machine, etc.), installing all the required software in it, and deploy desktops as needed.

  • Virtual machines are more manageable.

Using virtual machines significantly improves manageability for backups, virtual machine snapshots, and disaster recovery.

The advantages of virtual machines do not end there, of course; the list above is merely food for thought and further exploration of their capabilities. Like any new and promising technology, virtual machines also have their drawbacks:

  • Inability to emulate all devices.

At the moment, all major hardware platform devices are supported by virtualization system vendors, but if you use, for example, any controllers or devices that are not supported by them, you will have to abandon virtualization of such an environment.

  • Virtualization requires additional hardware resources.

Currently, the use of various virtualization techniques has made it possible to bring the performance of virtual machines closer to real ones, however, in order for a physical host to be able to run at least a couple of virtual machines, a sufficient amount of hardware resources is required for them.

  • Some virtualization platforms require specific hardware.

In particular, VMware's excellent ESX Server platform would be entirely wonderful if it did not impose such stringent hardware requirements.

  • Good virtualization platforms cost good money.

Sometimes, the cost of deploying one virtual server is equal to the cost of another physical one; under certain conditions this may not be practical. Fortunately, there are many free solutions, but they are mainly aimed at home users and small businesses.

Despite these entirely surmountable shortcomings, virtualization continues to gain momentum, and in 2007 a significant expansion is expected in both the market for virtualization platforms and the market for virtual infrastructure management tools.

However, due to the complexity and high cost of deploying and maintaining virtual infrastructure, as well as the difficulty of properly assessing the return on investment, many virtualization projects fail. According to a study conducted by Computer Associates among various companies that have attempted virtualization, 44 percent could not describe the result as successful. This circumstance holds back many companies planning virtualization projects. Another problem is the lack of truly competent specialists in this field.

What does the future hold for virtualization?

2006 was a key year for virtualization technologies: the many new players entering the market, the numerous releases of virtualization platforms and management tools, and the considerable number of partnership agreements and alliances concluded all indicate that the technology will be very much in demand. The virtualization market is in the final stage of its formation. Many hardware manufacturers have announced support for virtualization technologies, which is a sure sign of success for any new technology. Virtualization is coming closer to people: interfaces for using virtual machines are being simplified, informal conventions for various tools and techniques are emerging (though not yet officially established), and migration from one virtual platform to another is becoming easier. Virtualization will certainly occupy its niche among the necessary technologies and tools for designing enterprise IT infrastructure. Regular users will find uses for virtual machines too: as the performance of desktop hardware platforms grows, it will become possible to maintain several user environments on one machine and switch between them.

Hardware manufacturers are also not going to remain static: in addition to existing hardware virtualization techniques, hardware systems will soon appear that natively support virtualization and provide convenient interfaces for the software being developed. This will allow you to quickly develop reliable and efficient virtualization platforms. It is possible that any installed operating system will be immediately virtualized, and special low-level software, supported by hardware functions, will switch between running operating systems without compromising performance.

The very idea inherent in virtualization technologies opens up wide possibilities for their use. After all, ultimately, everything is done for the convenience of the user and simplifying the use of things familiar to him. Whether it is possible to significantly save money on this, time will tell.

Virtualization, in computing, is the process of presenting a set of computing resources, or a logical combination of them, in a way that offers some advantage over the original configuration. It is a new, virtual view of the resources, not constrained by the implementation, geographic location, or physical configuration of their component parts. Typically, virtualized resources include computing power and data storage.

“Over the past few years, the server virtualization market has matured greatly. In many organizations, more than 75% of servers are virtual, this indicates a high level of saturation,” said Michael Warrilow, research director at Gartner.

Analysts say attitudes toward virtualization vary more than ever among organizations of different sizes. The popularity of virtualization among companies with larger IT budgets remained level in 2014-2015; such companies continue to use virtualization actively, and saturation is approaching in this segment. Among organizations with smaller IT budgets, the popularity of virtualization is expected to decline over the next two years (through the end of 2017). This trend is already visible.

"Physicalization"

According to Gartner, companies are increasingly resorting to so-called “physicalization” - launching servers without virtualization software. It is expected that by the end of 2017, in more than 20% of these companies, less than a third of the operating systems on x86 servers will be virtual. For comparison, in 2015 there were half as many such organizations.

Analysts note that companies have different reasons for abandoning virtualization. Today, customers have new options - they can take advantage of software-defined infrastructure or hyper-converged integrated systems. The emergence of such options forces virtualization technology providers to act more actively: expand the functionality of their solutions available out of the box, simplify interaction with products and reduce payback periods for customers.

Hyperconverged integrated systems

At the beginning of May 2016, Gartner published a forecast regarding hyperconverged integrated systems. According to analysts, in 2016 this segment will grow by 79% compared to 2015 to almost $2 billion and will reach the mainstream stage within five years.

In the coming years, the hyperconverged integrated systems segment will experience the highest growth rates of any other integrated systems. By the end of 2019, it will grow to approximately $5 billion and occupy 24% of the integrated systems market, Gartner predicts, noting that the growth of this area will lead to cannibalization of other market segments.

Analysts refer to hyperconverged integrated systems (HCIS) as hardware and software platforms that combine software-defined computing nodes and a software-defined storage system, standard associated equipment and a common control panel.

Types of virtualization

Virtualization is a general term that covers the abstraction of resources for many aspects of computing. Some of the most typical examples of virtualization are given below.

Paravirtualization

Paravirtualization is a virtualization technique in which guest operating systems are prepared for execution in a virtualized environment by slightly modifying their kernel. The operating system communicates with the hypervisor, which provides it with a guest API, rather than using resources such as the memory page table directly; the virtualization-related code is localized directly in the operating system. Paravirtualization requires that the guest operating system be modified for the hypervisor, which is the method's drawback: such modification is only possible when the guest OS is open source and its license permits modification. At the same time, paravirtualization offers performance close to that of a real, non-virtualized system, along with the ability to support different operating systems simultaneously, as with full virtualization.

Infrastructure virtualization

In this case, we will understand by this term the creation of an IT infrastructure independent of hardware. For example, when the service we need is located on a guest virtual machine and, in principle, it is not particularly important to us on which physical server it is located.

Virtualization of servers, desktops, applications - there are many methods for creating such an independent infrastructure. In this case, several virtual or “guest” machines are hosted on one physical or host server using special software called a “hypervisor”.

Modern virtualization systems, in particular VMware and Citrix XenServer, mostly follow the bare-metal principle, that is, they are installed directly on the hardware.

Example

In this example, the virtual system is built not on a bare-metal hypervisor but on a combination of the Linux CentOS 5.2 operating system and VMware Server, running on an Intel SR1500PAL server platform with two Intel Xeon 3.2/1/800 processors, 4 GB of RAM, two 36 GB HDDs in RAID1, and four 146 GB HDDs in a RAID10 shared volume of 292 GB. The host machine runs four virtual machines:

  • A Postfix mail server based on the FreeBSD (Unix) operating system; the POP3 protocol was used to deliver mail to end users.
  • A Squid proxy server based on the same FreeBSD system.
  • A dedicated domain controller, DNS, and DHCP server based on Windows 2003 Server Standard Edition.
  • A control workstation based on Windows XP for office purposes.
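A memory budget for such a host can be sketched as follows; the per-VM allocations and the hypervisor overhead below are hypothetical, since the original configuration does not specify them. The point is only that the four guests must fit inside the host's 4 GB:

```python
HOST_RAM_MB = 4096
HYPERVISOR_OVERHEAD_MB = 512  # assumed reserve for CentOS + VMware Server

# Hypothetical per-VM allocations for the four guests described above.
vms = {
    "postfix-freebsd": 768,
    "squid-freebsd": 768,
    "dc-win2003": 1024,
    "ws-winxp": 512,
}

allocated = sum(vms.values())
headroom = HOST_RAM_MB - HYPERVISOR_OVERHEAD_MB - allocated
assert headroom >= 0, "over-committed host memory"
```

With these assumed figures the plan leaves a 512 MB margin; sizing this budget honestly is the first step of any consolidation project.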

Server virtualization

  • A virtual machine is an environment that appears to the "guest" operating system as a hardware environment. In reality, it is a software environment simulated by the host system's software. This simulation must be robust enough for the guest's drivers to operate reliably. With paravirtualization, the virtual machine does not simulate the hardware; instead, it offers the guest a special API for interacting with the hypervisor (the calls to this API are known as "hypercalls").

Everyone has heard about virtualization by now; it is no exaggeration to say that it is one of the main trends in IT today. However, many administrators still have very fragmentary and scattered knowledge of the subject, mistakenly believing that virtualization is only for large companies. Given the relevance of the topic, we decided to create a new section and begin a series of articles on virtualization.

What is virtualization?

Virtualization today is a very broad and diverse concept, but we will not consider all its aspects today; this goes far beyond the scope of this article. For those who are just getting acquainted with this technology, a simplified model will be enough, so we tried to simplify and generalize this material as much as possible, without going into details of implementation on a particular platform.

So what is virtualization? This is the ability to run several virtual machines isolated from each other on one physical computer, each of which will “think” that it is running on a separate physical PC. Consider the following diagram:

Special software runs on top of the real hardware: the hypervisor (or virtual machine monitor), which emulates virtual hardware and mediates the virtual machines' interaction with the real hardware. It is also responsible for communication between virtual PCs and the real environment via the network, shared folders, a shared clipboard, and so on.

The hypervisor can work either directly on top of the hardware or at the operating system level; there are also hybrid implementations that work on top of a specially configured OS in a minimal configuration.

Using the hypervisor, virtual machines are created; for each of them the minimum required set of virtual hardware is emulated, and access is provided to the shared resources of the main PC, called the "host". Each virtual machine, like a regular PC, contains its own instance of an OS and application software, and subsequent interaction with it is no different from working with a regular PC or server.

How is a virtual machine structured?

Despite the apparent complexity, a virtual machine (VM) is just a folder of files. Depending on the specific implementation, their set and number may vary, but any VM is based on the same minimal set of files; the presence of the rest is not critical.

The virtual hard disk file is the most important; its loss is equivalent to the failure of a regular PC's hard disk. The second most important is the VM configuration file, which describes the virtual machine's hardware and the shared host resources allocated to it. Such resources include, for example, the virtual memory, which is a dedicated area of the host's shared memory.

In principle, the loss of the configuration file is not critical: with only the virtual HDD file, you can start the virtual machine by recreating its configuration, just as, having only a hard drive, you can connect it to another PC of a similar configuration and get a fully functional machine.

In addition, the virtual machine's folder may contain other files; they are not critical, although their loss may be undesirable (for example, snapshots that let you roll back the virtual PC's state).
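To illustrate how compact a VM's description really is, here is a sketch that parses a hypothetical, simplified configuration format (not a real .vmx or libvirt XML file) into the handful of fields that define a machine:

```python
# A hypothetical, simplified VM configuration (illustrative format only):
CONFIG_TEXT = """\
name = office-ws
memory_mb = 1024
disk = office-ws.vhd
cpus = 2
"""

def parse_vm_config(text):
    """Turn 'key = value' lines into a dict describing the VM."""
    cfg = {}
    for line in text.splitlines():
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip()
    return cfg

vm = parse_vm_config(CONFIG_TEXT)
```

Losing this file means losing only the description; the `disk` entry points at the virtual hard disk file, which holds the actual data.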

Benefits of Virtualization

Depending on the purpose, desktop and server virtualization are divided. The first is used primarily for training and testing purposes. Now, in order to study some technology or test the implementation of any service in a corporate network, all you need is a fairly powerful PC and desktop virtualization tools. The number of virtual machines that you can have in your virtual laboratory is limited only by the size of the disk; the number of simultaneously running machines is limited mainly by the amount of available RAM.

The figure below shows the window of a desktop virtualization tool from our test laboratory with Windows 8 running inside.

Server virtualization is widely used in IT infrastructures of any scale and allows one physical server to run several virtual servers. The advantages of this technology are obvious:

Optimal use of computing resources

It's no secret that the computing power of even entry-level servers, let alone average PCs, is excessive for many tasks and server roles and is not fully used. This is usually addressed by adding extra roles to a server, but that approach significantly complicates administration and increases the likelihood of failure. Virtualization lets you safely use the spare computing resources by giving each critical role its own server. Now, to perform maintenance on, say, a web server, you don't have to stop the database server.

Saving physical resources

Using one physical server instead of several allows you to effectively save energy, space in the server room, and costs for related infrastructure. This is especially important for small companies that can significantly reduce rental costs due to the reduction in the physical size of the equipment, for example, there is no need to have a well-ventilated server room with air conditioning.

Increased infrastructure scalability and extensibility

As a company grows, the ability to quickly and without significant costs increase the computing power of the enterprise becomes increasingly important. Typically, this situation involves replacing servers with more powerful ones, followed by migration of roles and services from old servers to new ones. Carrying out such a transition without failures, downtime (including planned ones) and various kinds of “transition periods” is almost impossible, which makes each such expansion a small emergency for the company and administrators, who are often forced to work at night and on weekends.

Virtualization allows us to solve this issue much more efficiently. If there are free host computing resources, you can easily add them to the desired virtual machine, for example, increasing the amount of available memory or adding processor cores. If it is necessary to increase performance more significantly, a new host is created on a more powerful server, where the virtual machine in need of resources is transferred.

Downtime in this situation is very short and comes down to the time required to copy VM files from one server to another. In addition, many modern hypervisors include a “live migration” feature that allows you to move virtual machines between hosts without stopping them.
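A rough lower bound on that downtime is simply the file-copy time. A worked sketch, assuming an offline (cold) move of a 36 GB virtual disk over gigabit Ethernet and ignoring protocol overhead:

```python
def offline_migration_downtime_s(vm_size_gb, link_gbit):
    """Rough lower bound on downtime for a cold move: the time
    needed to copy the VM's files over the network link."""
    size_bits = vm_size_gb * 8 * 1024**3   # GiB -> bits
    return size_bits / (link_gbit * 10**9)  # bits / (bits per second)


# A 36 GB virtual disk over gigabit Ethernet:
t = offline_migration_downtime_s(36, 1)  # roughly five minutes
```

Live migration avoids even this pause by copying memory pages while the VM keeps running, transferring only the final dirty pages during a brief switchover.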

Increased fault tolerance

Perhaps the physical failure of a server is one of the most unpleasant moments in the work of a system administrator. The situation is complicated by the fact that a physical instance of the OS is almost always hardware dependent, which makes it impossible to quickly launch the system on another hardware. Virtual machines do not have this drawback; if the host server fails, all virtual machines are quickly and without problems transferred to another, working server.

In this case, differences in the hardware of the servers do not play any role; you can take virtual machines from a server on the Intel platform and successfully launch them a few minutes later on a new host running on the AMD platform.

The same circumstance allows you to take servers down for maintenance or change their hardware without stopping the virtual machines running on them; it is enough to move them temporarily to another host.

Ability to support legacy operating systems

Despite constant progress and the release of new software versions, the corporate sector often continues to use outdated software versions; 1C:Enterprise 7.7 is a good example. Virtualization allows such software to be integrated into a modern infrastructure at no extra cost; it can also be useful when an old PC running an outdated OS has broken down, and it is not possible to run it on modern hardware. The hypervisor allows you to emulate a set of outdated hardware to ensure compatibility with older operating systems, and special utilities allow you to transfer a physical system to a virtual environment without data loss.

Virtual networks

It's hard to imagine a modern PC without some kind of network connection. Therefore, modern virtualization technologies make it possible to virtualize not only computers but also networks. Like a regular computer, a virtual machine can have one or more network adapters, which can be connected either to an external network, through one of the host's physical network interfaces, or to one of the virtual networks. A virtual network is a virtual network switch to which network adapters of virtual machines are connected. If necessary, in such a network, using the hypervisor, DHCP and NAT services can be implemented to access the Internet through the host’s Internet connection.

Virtual networks make it possible to build fairly complex network configurations even within a single host; as an example, consider the following diagram:

The host is connected to the external network through the physical adapter LAN 0; the virtual machine VM5 is connected to the same external network through the same physical interface via its virtual adapter VM LAN 0. To other machines on the external network, the host and VM5 look like two different PCs: each has its own network address and its own network card with its own MAC address. VM5's second network card is connected to the virtual switch VMNET 1, along with the network adapters of virtual machines VM1-VM4. Thus, within one physical host we have built a protected internal network that can reach the external network only through VM5 acting as a router.
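The reachability argument in the diagram can be sketched as a toy model: nodes can talk if a chain of nodes sharing a network segment connects them. All names below are taken from the diagram; the code is purely conceptual and has nothing to do with any real hypervisor API.

```python
# Toy model of the topology above: which network segments each node is on.
attachments = {
    "host": {"LAN"},
    "VM5":  {"LAN", "VMNET1"},   # two NICs: external LAN + internal switch
    "VM1":  {"VMNET1"},
    "VM2":  {"VMNET1"},
    "VM3":  {"VMNET1"},
    "VM4":  {"VMNET1"},
}

def reachable(net, src, dst):
    """Breadth-first search: two nodes can communicate if a chain of
    nodes sharing a common segment connects them."""
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for other, segs in net.items():
            if other not in seen and net[node] & segs:
                seen.add(other)
                frontier.append(other)
    return False

print(reachable(attachments, "VM1", "host"))  # True: path exists via VM5
no_router = {**attachments, "VM5": {"LAN"}}   # VM5 loses its VMNET1 NIC
print(reachable(no_router, "VM1", "host"))    # False: internal net isolated
```

The second check makes the point of the diagram explicit: remove VM5's internal adapter and the VM1-VM4 network is completely cut off from the outside.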

In practice, virtual networks make it easy to organize several networks with different security levels within one physical server, for example, placing potentially vulnerable hosts in a DMZ without spending extra money on network equipment.

Snapshots

Another virtualization feature whose usefulness is hard to overstate. Its essence is that at any moment, without stopping the virtual machine, you can save a snapshot of its current state, and more than one. For an administrator this is a real treat: it becomes easy and fast to roll back to a known-good state if something suddenly goes wrong. Unlike creating a disk image and then restoring the system from it, which can take considerable time, switching between snapshots takes a matter of minutes.

Snapshots are also handy for training and testing: with them you can build an entire tree of virtual machine states and quickly switch between different configuration options. The figure below shows the snapshot tree of a router from our test lab, familiar to you from our earlier materials:
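At its core, such a snapshot tree is a simple data structure: each snapshot records a saved state and points to the snapshot it branched from. A minimal, purely conceptual sketch in Python (not how any real hypervisor stores snapshots; the names and states are illustrative):

```python
import copy

class SnapshotTree:
    """Toy snapshot tree: each snapshot stores a full copy of the VM
    'state' and remembers which snapshot it branched from."""
    def __init__(self, initial_state):
        self.snapshots = {"root": copy.deepcopy(initial_state)}
        self.parents = {"root": None}
        self.current = "root"
        self.state = initial_state

    def take(self, name):
        # Save the current state as a child of the current snapshot.
        self.snapshots[name] = copy.deepcopy(self.state)
        self.parents[name] = self.current
        self.current = name

    def revert(self, name):
        # Jump to any saved snapshot; the working state becomes a copy of it.
        self.state = copy.deepcopy(self.snapshots[name])
        self.current = name

# Usage: branch two configurations off the same baseline.
vm = SnapshotTree({"pkg": []})
vm.take("baseline")
vm.state["pkg"].append("dhcp")
vm.take("with-dhcp")
vm.revert("baseline")          # back to the clean baseline in one step
vm.state["pkg"].append("nat")
vm.take("with-nat")            # a second branch off the same parent
```

Real hypervisors achieve the same branching far more efficiently with copy-on-write disk layers, but the tree of parent pointers is exactly the structure shown in the figure.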

Conclusion

Although we aimed for only a brief overview, the article turned out rather lengthy. We nevertheless hope that this material helps you realistically assess what virtualization technology has to offer and, with a clear picture of the benefits it can bring to your IT infrastructure, move on to our new materials and to putting virtualization into everyday practice.

2024 gtavrl.ru.