Virtualization basics and an introduction to KVM


Hypervisors (virtualization technologies) have existed for more than 30 years, and during this time they have become one of the main building blocks of the cloud ecosystem. Many companies that need virtualization choose between two popular hypervisors, VMware and KVM. Let's figure out which one is better. But first, a little theory.

What is a hypervisor?

A hypervisor is a program that separates the operating system from the hardware. Hypervisors virtualize server resources (processor, memory, disk, network interfaces, etc.) and use them to create several separate virtual machines on a single server. Each virtual machine is isolated from its neighbors so that it cannot affect their operation. For a hypervisor to work, hardware virtualization support is required: Intel VT on Intel processors and AMD-V on AMD processors.

Hypervisors are divided into two types. Type 1 hypervisors run directly on the server hardware, and the users' operating systems run on top of the hypervisor. These hypervisors can expose server management functionality to users, and most enterprises use them.

Type 2 hypervisors, also known as hosted hypervisors, run on top of an operating system already installed on the server, and the operating systems for new users are created on top of the hypervisor.

Desktop hypervisors such as Oracle VirtualBox or VMware Workstation are type 2 hypervisors, while VMware ESXi and KVM are type 1: they are installed directly on the server and do not require a separate general-purpose operating system underneath (in the case of KVM, the hypervisor is part of the Linux kernel itself).

VMware vSphere

Before purchasing VMware vSphere, you can try the trial version (60 days), after which you need to buy a license, or put up with the limitations of the free version.

The free version, called VMware Free vSphere Hypervisor, has no CPU or memory limits for the host, but there are a number of others:

  • The product API is read-only;
  • a virtual machine cannot have more than 8 cores;
  • it cannot be used in conjunction with Veeam to create backups;
  • connection to vCenter Server is not supported;
  • High Availability, vMotion (live migration of VMs) and Storage vMotion (live migration of VM storage) are also not supported.

VMware's product stands out from its counterparts in supporting a large number of guest operating systems: Windows, Linux, Solaris, FreeBSD, NetWare, macOS and others.

Installing a VMware distribution on a server is very simple: just boot from CD, flash drive, or PXE. In addition, scripts are supported to automate the process of installing software, configuring the network, and connecting to the vCenter Server.

It is also important that there is a special tool, VMware vCenter Converter, that allows you to use MS Virtual Server, Virtual PC and Hyper-V images in ESXi, as well as physical servers and images of disk partitions created by programs such as Acronis True Image, Norton Ghost and others.

VMware vSphere has built-in Microsoft Active Directory integration, which means you can authenticate users in a private or hybrid cloud using Microsoft Domain Services. Flexible resource allocation allows for hot add CPU, RAM and hard disk (including resizing the current hard disk without rebooting).

VMware Fault Tolerance is a VMware technology that protects virtual machines with continuous-availability clustering. If the host (ESXi server) running the primary working copy of a protected virtual machine fails, the virtual machine instantly switches to a "secondary", or "shadow", copy running on another ESXi server. For machines protected by VMware Fault Tolerance, the entire memory state and processor instruction stream are continuously replicated in real time from the primary copy to the shadow copy. If the primary ESXi host fails, users do not even notice the failover to the second host. This is what distinguishes Fault Tolerance from High Availability: with High Availability, if a physical server fails, the virtual machines are restarted on other nodes, and users cannot access the virtual servers while their operating systems reboot.

In addition to VMware Fault Tolerance, the VMware vCloud Suite Enterprise license provides high availability, resiliency, and disaster recovery with vSphere HA, vMotion, Storage vMotion, and vCenter Site Recovery Manager.

To reduce planned downtime when servicing servers or storage systems, the vMotion and Storage vMotion functions move virtual machines and their disks online without interrupting applications or users. vSphere Replication supports multiple replication options for vCenter Site Recovery Manager (SRM) to protect against major disasters. SRM provides centralized disaster recovery planning, automatic failover and failback from a backup site or vCloud, and non-disruptive disaster recovery testing.

A peculiarity of this hypervisor is its selectiveness about hardware: before installing, you must carefully check the existing hardware for compatibility with the desired version of ESXi. VMware publishes a hardware compatibility guide on its website for this purpose.

Licensing of VMware products has its own specifics. Additional confusion is added by periodic changes (from one vSphere version to the next) in the VMware licensing policy. There are several points to consider before purchasing VMware vSphere licenses:

  • the hypervisor is licensed per physical processor (CPU). Each server CPU requires a separate vSphere license (cores are not physical processors and do not count towards licensing);
  • the available functionality of an ESXi server is determined by the vSphere license installed on it; VMware publishes a detailed licensing guide;
  • for each purchased vSphere license, you must purchase a service support package (at least one year);
  • VMware does not impose limits on the amount of memory (RAM) installed on the server or on the number of running virtual machines.

Another VMware product, vCenter Server, can be used to manage multiple hosts with ESXi hypervisors, storage systems, and networking equipment. The vSphere client plug-ins provided by VMware partners give IT administrators the ability to manage third-party elements in the data center directly from this console. vCenter users can therefore back up, protect data, and manage servers, networks and security directly from the vCenter interface. In the same console, you can configure triggers that will notify you of problems and get data about the operation of the entire infrastructure in the form of graphs or tables.

KVM

KVM is an easy-to-use, lightweight, low-overhead, and fairly functional hypervisor. It allows you to deploy a virtualization platform on the Linux operating system in the shortest possible time. During operation, KVM accesses the operating system kernel through a processor-specific module (kvm-intel or kvm-amd). Initially, KVM only supported x86 processors, but modern versions support a wide variety of processors and guest operating systems, including Linux, BSD, Solaris, Windows, etc. Incidentally, all of the Wikimedia Foundation's wiki resources (MediaWiki, Wikipedia, Wikivoyage, Wikidata, Wikiversity) use this particular hypervisor.

Because guest operating systems interact with a hypervisor that is integrated into the Linux kernel, they can access the hardware directly without the guest operating system having to be modified. Thanks to this, guests run with almost no slowdown.

KVM allows virtual machines to use unmodified disk images in QEMU, VMware and other formats that contain operating systems. Each virtual machine gets its own virtual hardware: network cards, disks, video card, and so on.

Thanks to support for unmodified VMware images, a physical server can easily be virtualized with the same VMware vCenter Converter utility, and the resulting file then transferred to the hypervisor.

Installing KVM on a Linux operating system consists of installing the KVM package and the libvirt virtualization library, and then carefully setting up the virtualization environment. Depending on the operating system used on the host, you need to configure a network bridge and a VNC console connection for access to the virtual machines.
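As a rough sketch, the first step looks like this (package names vary between distributions; the lines below assume a Debian/Ubuntu-family host and are only an illustration):

sudo apt-get install qemu-kvm libvirt-clients libvirt-daemon-system virtinst bridge-utils
lsmod | grep kvm          # kvm plus kvm_intel or kvm_amd should be loaded
sudo virsh list --all     # libvirt responds; no virtual machines are defined yet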

KVM is more difficult to administer: there is no out-of-the-box transparent access to files, processes, consoles, and network interfaces, so you have to configure all of this yourself. Changing VM parameters in KVM (CPU, RAM, disk) is not very convenient and requires additional steps, sometimes including a reboot of the guest OS.

The project itself does not offer convenient graphical tools for managing virtual machines, only the virsh command-line utility, which implements all the necessary functions. For convenient management of virtual machines, you can additionally install the virt-manager package.

KVM has no built-in equivalent of VMware Fault Tolerance, so the usual way to build an HA cluster is network replication with DRBD. A DRBD cluster supports only two nodes, and the nodes synchronize without encryption, so for a more secure connection you must use a VPN.
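To give an idea of what this looks like in practice, here is a minimal sketch of bringing up a DRBD 8.x resource, assuming a resource named r0 has already been defined in /etc/drbd.d/r0.res on both nodes (the resource name and file are illustrative):

sudo drbdadm create-md r0          # initialize DRBD metadata (run on both nodes)
sudo drbdadm up r0                 # attach and connect the resource (both nodes)
sudo drbdadm primary --force r0    # on one node only: make it primary and start the initial sync
cat /proc/drbd                     # watch the synchronization progress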

In addition, to build a high availability cluster, you will need the Heartbeat program, which allows the nodes in the cluster to exchange service messages about their status, and Pacemaker, the cluster resource manager.

The KVM hypervisor is distributed as an open source product, and for corporate users there is the commercial Red Hat Virtualization (RHV, formerly RHEV) solution based on KVM and the oVirt virtual infrastructure management platform.

The undoubted advantage of this hypervisor is that it can run on any server. The hypervisor is quite unpretentious in terms of resources, which makes it easy to use for testing tasks.

Please note that KVM itself comes without a vendor support service. If something does not work out, you can count on forums and community help, or switch to the commercially supported Red Hat Virtualization.

So what should you choose?

Both hypervisors are mature, reliable, high-performance virtualization systems, each with its own characteristics to consider when choosing.

KVM is generally more scalable than VMware, primarily because vSphere has limits on the number of servers it can manage. In addition, VMware has added support for a large number of storage area networks (SANs) from multiple vendors. This means VMware has more storage options than KVM, but it also makes VMware storage harder to support as it expands.

KVM is usually the most popular hypervisor for companies looking to reduce implementation costs and are less interested in enterprise-grade features.

Research has shown that KVM's TCO is typically 39 percent lower than VMware, although actual TCO is dependent on specific factors such as operational parameters and site workload.

Tight integration with the host operating system is one of the most common reasons why developers choose KVM. Especially those using Linux. The inclusion of KVM in many Linux distributions also makes it a convenient choice for developers.

Cloud providers offering IaaS services to their customers typically opt for an infrastructure built on VMware products. Solutions based on VMware vSphere contain all the important enterprise functions for ensuring high and continuous availability, support more guest operating systems, and can interface the customer's infrastructure with cloud services.

At Cloud4Y, we see VMware as the leading virtualization solution. However, we are also interested in other solutions, including Xen and KVM. And here is what we noticed: there is not much information that allows these hypervisors to be compared. The last good study we found on the net dates back to 2012 and can, of course, no longer be considered relevant. Today we present a study that is also not the most recent but, in our opinion, quite useful, devoted to the performance of the KVM and Xen hypervisors.

KVM hypervisor

Yes, virtualization gurus will forgive us, but first, we will remind readers what a hypervisor is and what it is for. To perform tasks that differ in their meaning (software development, hosting, etc.), the easiest way is to use virtual machines: they will allow you to have several different operating systems with an appropriate software environment. For ease of working with virtual machines, hypervisors are used - software tools that allow you to quickly deploy, stop and start a VM. KVM is one of the most widely used hypervisors.


KVM is software that allows you to organize virtualization on computers running Linux and similar operating systems. KVM has long been part of the Linux kernel and is developed in parallel with it. This hypervisor can only be used on systems where virtualization is supported in hardware, via Intel VT or AMD-V.


During operation, KVM accesses the kernel directly through a processor-specific module (kvm-intel or kvm-amd). In addition, the stack includes the main kvm.ko kernel module and userspace components, most notably the widespread QEMU. KVM lets you work directly with VM files and disk images. Each virtual machine is provided with its own isolated space.
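A quick way to see which of these modules is active on a given host (a generic check, not tied to any particular distribution):

lsmod | grep kvm            # expect kvm plus kvm_intel or kvm_amd in the output
sudo modprobe kvm_intel     # load the Intel module manually if needed (use kvm_amd on AMD hosts)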

Xen hypervisor

Xen started as a project by Cambridge students that eventually grew into a commercial version of Xen. The first release dates back to 2003, and in 2007 the source code was purchased by Citrix. Xen is a cross-platform hypervisor with rich functionality and enormous capabilities, which makes it usable in the corporate sphere. Xen supports paravirtualization, a special operating system kernel mode in which the kernel is adapted to run together with the hypervisor.

Only the minimum necessary set of functions is included in the Xen code itself: virtual memory and processor time management, DMA handling, the real-time timer, and interrupts. All other functionality is moved out into domains, that is, into the virtual machines running on top of it. This makes Xen the lightest hypervisor.

Research essence

The testing was based on two SuperMicro servers, each with a quad-core Intel Xeon E3-1220 processor @ 3.10 GHz, 24 GB of Kingston DDR3 RAM, and four Western Digital RE-3 160 GB drives (RAID 10). The BIOS versions are identical.
For both the host and the virtual machines, Fedora 20 (with SELinux) was used. Here are the software versions taken:

  • Kernel: 3.14.8
  • For KVM: qemu-kvm 1.6.2
  • For Xen: xen 4.3.2
All root file systems are XFS with the default configuration. The virtual machines were created with virt-manager using the default settings applicable to KVM and Xen. The virtual disks used raw images, and each VM was allocated 8 GB of RAM and 4 vCPUs (virtual processors). Guests running on Xen used PVHVM.
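The study does not publish its exact provisioning commands, but a comparable KVM guest could be defined roughly like this (the ISO path, names and disk size are illustrative, not the authors' actual setup):

sudo virt-install \
  --virt-type=kvm \
  --name fedora20-test \
  --ram 8192 \
  --vcpus 4 \
  --os-variant fedora20 \
  --cdrom /var/lib/libvirt/boot/Fedora-20-x86_64-netinst.iso \
  --disk path=/var/lib/libvirt/images/fedora20-test.img,size=20,format=raw,bus=virtio \
  --network network=default,model=virtio \
  --graphics vnc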

Explanations

Some of you may object that Fedora 20's sponsor, Red Hat, spends a significant amount of effort supporting KVM, while it has not made significant contributions to Xen in years.


In addition, resource contention between virtual machines was tightly controlled and minimized in this testing. On most virtualization hosts you will have multiple virtual machines competing for CPU time, I/O devices, and network access; our testing does not take this into account. A hypervisor may perform poorly when resource contention is low and then do much better than its competitors when contention is higher.

This study was conducted on Intel processors, so results may differ for AMD and ARM.

Results

Tests run directly on the hardware, that is, without virtualization (hereinafter "bare metal"), served as the baseline for the virtual machine tests. The performance variance between the two non-virtualized servers was 0.51% or less.


KVM performance was within about 1.5% of bare metal in almost all tests. Only two tests showed a different result: one of them was the 7-Zip test, where KVM was 2.79% slower than bare metal. Oddly, KVM was 4.11% faster in the PostMark benchmark (which simulates a heavily loaded mail server). Xen's performance deviated from bare metal more than KVM's did: in three tests Xen was within 2.5% of bare-metal speed, and in the other tests it turned out to be even slower.

In the PostMark benchmark, Xen was 14.41% slower than bare metal. When the test was rerun, the results differed from the original by 2%. KVM's best benchmark, MAFFT, was the second worst for Xen.

Here's a quick summary of the testing:

Benchmark                    Best value   Bare metal   KVM          Xen
Timed MAFFT Alignment        lower        7.78         7.795        8.42
Smallpt                      lower        160          162          167.5
POV-Ray                      lower        230.02       232.44       235.89
PostMark                     higher       3667         3824         3205
OpenSSL                      higher       397.68       393.95       388.25
John the Ripper (MD5)        higher       49548        48899.5      46653.5
John the Ripper (DES)        higher       7374833.5    7271833.5    6911167
John the Ripper (Blowfish)   higher       3026         2991.5       2856
CLOMP                        higher       3.3          3.285        3.125
C-Ray                        lower        35.35        35.66        36.13
7-Zip                        higher       12467.5      12129.5      11879

If you want to see the full results, please follow the link.

Instead of a conclusion

In our testing, KVM was almost always within about 2% of bare metal. Xen was within 2.5% in three tests out of ten and 5-7% slower in the rest. Although KVM performed well in the PostMark benchmark, note that we ran only one I/O test; to get a more reliable picture, several more would be worth running.


To choose the right hypervisor, you need to properly assess the nature of your workloads. If your workloads are light on CPU and heavy on I/O, then run more I/O tests. If you work mainly with audio and video, try x264 or mp3 benchmarks.

As mister_fog rightly pointed out, in 2007 Citrix bought not the Xen source code but the XenSource company, which was founded by the Xen developers and was engaged in the commercial development of this open source project.


Recently an interesting report was released by Principled Technologies, a company that specializes, among other things, in all kinds of testing of hardware and software environments. The report claims that the ESXi hypervisor can run more virtual machines on the same hardware than the RHV KVM hypervisor.

It is clear that the study is biased (at least if you look at the title), but since there are not so many such documents, we decided to pay attention to it.

For testing, they used a Lenovo x3650 M5 rack server running Microsoft SQL Server 2016 in virtual machines under an OLTP load. OPM (orders per minute), a quantitative measure of executed transactions, was used as the main performance indicator.

Without memory overcommit techniques, the OPM results for a single host running 15 virtual machines are approximately the same on both hypervisors:

But as the number of virtual machines grows, vSphere performs much better:

The crosses mark the machines that simply did not start on RHV; the product console reported an error.

Despite enabling memory optimization techniques in Red Hat Virtualization Manager (RHV-M), such as memory ballooning and kernel same-page merging, the sixteenth virtual machine still refused to start on KVM.

On vSphere, meanwhile, they kept increasing the number of VMs until they ran into a lack of resources.

It turned out that with memory overcommit enabled, vSphere was able to run 24 virtual machines, while RHV managed only 15. From this the authors concluded that 1.6 times more virtual machines can be run on VMware vSphere.

This can hardly be called an objective test, but it is clear that in this case ESXi handles memory and other VM resource optimizations better than KVM.



Recall that RHEV is based on the Kernel-based Virtual Machine (KVM) hypervisor and supports the OpenStack open cloud architecture. Let's see what's new in the updated RHEV version 3.4.

Infrastructure

  • SNMP configuration service to support third-party monitoring systems.
  • Saving the settings of an RHEV cloud installation so that it can be restored after a failure or replicated in other clouds.
  • RHEV authentication services have been rewritten and improved.
  • The ability to hot add a processor to a VM (Hot Plug CPU). This requires support from the guest OS (a command-line sketch follows this list).
  • Non-root users now have access to logs.
  • New installer based on TUI (textual user interface).
  • IPv6 support.
  • Possibility of choosing a connection to the VM console in Native Client or noVNC mode.
  • Possibility to change some settings of the running virtual machine.
  • Full support for RHEL 7 as a guest OS.
  • Ability to enable / disable KSM (Kernel Samepage Merging) at the cluster level.
  • Ability to reboot a VM from RHEV-M or with a console command.
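In RHEV the CPU hot add is driven from the RHEV-M console, but since the platform is built on KVM/libvirt, the equivalent operation at the libvirt level looks roughly like this (the guest name vm1 and the counts are illustrative, and the guest must be defined with a maximum vCPU count higher than the current one):

sudo virsh setvcpus vm1 4 --live   # hot add vCPUs up to the defined maximum
sudo virsh vcpucount vm1           # show the current and maximum vCPU counts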

Networking

  • Tighter integration with the OpenStack infrastructure:
    • Security and scalability improvements for networks deployed with Neutron.
    • Supports Open vSwitch technology (scalable virtual switch) and SDN capabilities.
  • Network Labels - labels that can be used when referring to devices.
  • Correct virtual network adapter (vNIC) numbering order.
  • Support for iproute2.
  • A single point to configure the network settings for multiple hosts on a specified network.

Storage capabilities

  • Mixed storage domains - the ability to simultaneously use disk devices from iSCSI, FCP, NFS, Posix and Gluster storage to organize storage of virtual machines.
  • Multiple Storage Domains - the ability to distribute disks of one virtual machine across multiple storages within the data center.
  • Ability to specify disks that will participate in creating snapshots, as well as those that will not.
  • The mechanism for restoring a VM from a backup has been improved - now it is possible to specify a snapshot of the state to which you want to rollback.
  • Asynchronous management of Gluster storage tasks.
  • Read-Only Disk for Engine - This feature enables the Red Hat Enterprise Virtualization Manager management tool to use read-only disks.
  • Multipathing access for iSCSI storage.

Virtualization tools

  • Guest OS agents (ovirt-guest-agent) for OpenSUSE and Ubuntu.
  • SPICE Proxy - the ability to use proxy servers to allow users to access their VMs (if, for example, they are outside the infrastructure network).
  • SSO (Single Sign-On) Method Control - the ability to switch between different pass-through authentication mechanisms. So far, there are only two options: guest agent SSO and no SSO.
  • Support for multiple versions of the same virtual machine template.

Scheduler and Service Level Enhancements

  • Improvements to the virtual machine scheduler.
  • Affinity / Anti-Affinity groups (rules for the existence of virtual machines on hosts - place machines together or separately).
  • Power-Off Capacity is a power policy that allows you to shut down a host and prepare its virtual machines for migration to another location.
  • Even Virtual Machine Distribution - the ability to distribute virtual machines to hosts based on the number of VMs.
  • High-Availability Virtual Machine Reservation - the mechanism allows you to guarantee the recovery of virtual machines in the event of a failure of one or more host servers. It works on the basis of calculating the available capacity of the computing resources of the cluster hosts.

Improvements to the interface

  • Bug fixes related to the fact that the interface did not always react to events taking place in the infrastructure.
  • Support for low screen resolutions (when some elements of the control console were not visible at low resolutions).

You can download Red Hat Enterprise Virtualization 3.4 from this link. Documentation is available.



The new version of RHEL OS has many new interesting features, among which many relate to virtualization technologies. Some of the major new features in RHEL 7:

  • Built-in support for packaged Docker applications.
  • Kernel patching utility Technology Preview - patching the kernel without rebooting the OS.
  • Direct and indirect integration with Microsoft Active Directory.
  • XFS is now the default file system for boot, root and user data partitions.
    • For XFS, the maximum file system size has been increased from 100 TB to 500 TB.
    • For ext4, this size has been increased from 16 TB to 50 TB.
  • Improved OS installation process (new wizard).
  • Ability to manage Linux servers using Open Linux Management Infrastructure (OpenLMI).
  • NFS and GFS2 file system improvements.
  • New capabilities of KVM virtualization technology.
  • Ability to run RHEL 7 as a guest OS.
  • Improvements to NetworkManager and a new command-line utility, nmcli, for performing network tasks.
  • Supports Ethernet network connections at speeds up to 40 Gbps.
  • Supports WiGig wireless technology (IEEE 802.11ad) (at speeds up to 7 Gbps).
  • New Team Driver mechanism that virtually combines network devices and ports into a single interface at the L2 level.
  • New dynamic firewall service, firewalld, a flexible firewall that is used by default instead of plain iptables and supports multiple network trust zones.
  • GNOME 3 in classic desktop mode.

For more information on the new features in RHEL 7, see Red Hat.

In terms of virtualization, Red Hat Enterprise Linux 7 introduces the following major innovations:

  • Technological preview of the virtio-blk-data-plane feature, which allows QEMU I/O commands to be executed in a separate optimized thread.
  • A technological preview of PCI Bridge technology has appeared, allowing more than 32 PCI devices to be supported in QEMU.
  • QEMU Sandboxing - improved isolation between RHEL 7 host guest OSs.
  • Support for "hot" adding virtual processors to machines (vCPU Hot Add).
  • Multiple-queue NICs - each vCPU gets its own transmit and receive queues, so network traffic is not funneled through a single vCPU (for Linux guest OSs only).
  • Hot Migration Page Delta Compression technology allows the KVM hypervisor to migrate faster.
  • KVM introduces support for paravirtualized functions of Microsoft OS, for example, Memory Management Unit (MMU) and Virtual Interrupt Controller. This allows Windows guests to run faster (these features are disabled by default).
  • Supports EOI Acceleration technology based on Intel and AMD Advanced Programmable Interrupt Controller (APIC) interface.
  • Technological preview of USB 3.0 support in KVM guest operating systems.
  • Supports Windows 8, Windows 8.1, Windows Server 2012 and Windows Server 2012 R2 guest operating systems on a KVM hypervisor.
  • I/O throttling functions for guest operating systems on QEMU.
  • Support for Ballooning technologies and transparent huge pages.
  • The new virtio-rng device is available as a random number generator for guest operating systems.
  • Support for hot migration of guest operating systems from a Red Hat Enterprise Linux 6.5 host to a Red Hat Enterprise Linux 7 host.
  • Supports mapping NVIDIA GRID and Quadro devices as a second device in addition to emulated VGA.
  • Paravirtualized ticketlocks, which improve performance when there are more vCPUs than physical CPUs on the host.
  • Improved error handling for PCIe devices.
  • New Virtual Function I / O (VFIO) driver improves security.
  • Supports Intel VT-d Large Pages Technology when using the VFIO driver.
  • Improved timekeeping accuracy for virtual machines on KVM.
  • Support for images of the QCOW2 version 3 format.
  • Improved Live Migration statistics - total time, expected downtime and bandwidth.
  • A dedicated thread for live migration, so that hot migrations do not impact guest OS performance.
  • Emulation of AMD Opteron G5 processors.
  • Support for new Intel processor instructions for KVM guest operating systems.
  • Supports read-only VPC and VHDX virtual disk formats.
  • New features of the libguestfs utility for working with virtual disks of machines.
  • New Windows Hardware Quality Labs (WHQL) drivers for Windows guest operating systems.
  • Integration with VMware vSphere: Open VM Tools, 3D graphics drivers for OpenGL and X11, and improved communication mechanism between the guest OS and the ESXi hypervisor.

Release Notes for the new OS version are available at this link, and you can also read more about the virtualization functions in the new RHEL 7 release. The sources for the Red Hat Enterprise Linux 7 RPM packages are now available only through the Git repository.



Ravello has found an interesting way to leverage nested virtualization in its Cloud Application Hypervisor product, which allows it to universally deploy VMs across different virtualization platforms in the public clouds of different service providers.

The main component of this system is HVX technology - its own hypervisor (based on Xen), which is part of the Linux OS and runs nested virtual machines without changing them using binary translation techniques. Further, these machines can be hosted in Amazon EC2, HP Cloud, Rackspace and even private clouds managed by VMware vCloud Director (support for the latter is expected soon).

The Ravello product is a SaaS service, and such nested "matryoshka" VMs can simply be uploaded to any of the supported hosting sites, regardless of the hypervisor it uses. A virtual network between the machines is created as an L2 overlay on top of the hoster's existing L3 infrastructure using a GRE-like protocol (but based on UDP):

The very mechanics of the proposed Cloud Application Hypervisor service are as follows:

  • The user uploads virtual machines to the cloud (machines created on ESXi / KVM / Xen platforms are supported).
  • Describes multi-machine applications using a special GUI or API.
  • Publishes its VMs to one or more supported clouds.
  • The resulting configuration is saved as a snapshot in the Ravello cloud (from which it can later be restored or redeployed); this storage can be built on Amazon S3 or CloudFiles, or on its own block storage or NFS volumes.
  • After that, each user can get a multi-machine configuration of their application on demand.

The obvious question that comes up first is what about performance? Well, first of all, the Cloud Application Hypervisor is aimed at development and test teams for which performance is not a critical factor.

And secondly, performance tests of such nested VMs show quite decent results:

For those interested in HVX technology, there is a good overview video in Russian:



The new version of the open virtualization platform RHEV 3.0 is based on the Red Hat Enterprise Linux 6 distribution and, traditionally, the KVM hypervisor.

New features of Red Hat Enterprise Virtualization 3.0:

  • The Red Hat Enterprise Virtualization Manager management tool is now Java-based, running on the JBoss platform (previously .NET was used, and, accordingly, was tied to Windows, now you can use Linux for the management server).
  • A self-service portal for users to self-deploy virtual machines, create templates, and administer their own environments.
  • New RESTful API allowing access to all solution components from third-party applications.
  • An advanced administration mechanism that provides the ability to granularly assign permissions, delegate authority based on user roles, and hierarchical privilege management.
  • Supports local server disks as storage for virtual machines (but Live Migration is not supported for them).
  • An integrated reporting engine that analyzes historical performance data and predicts virtual infrastructure development.
  • Optimized for WAN connections, including dynamic compression technologies and automatic adjustment of desktop effects and color depth. In addition, the new version of SPICE has enhanced support for Linux guest desktops.
  • Updated KVM hypervisor based on the latest Red Hat Enterprise Linux 6.1 released in May 2011.
  • Supports up to 160 logical CPUs and 2 TB of memory for host servers, 64 vCPUs and 512 GB of memory for virtual machines.
  • New possibilities for the administration of large installations of RHEV 3.0.
  • Support for large memory pages (Transparent Huge Pages, 2 MB instead of 4 KB) in guest operating systems, which improves performance by reducing the number of page lookups.
  • Optimization of the vhost-net component. Now the KVM networking stack has been moved from user mode to kernel mode, which significantly increases performance and reduces network latency.
  • Using the functions of the sVirt library, which provides hypervisor security.
  • The paravirtualized x2apic controller has appeared, which reduces VM overhead (especially effective for intensive workloads).
  • Async-IO technology to optimize I / O and improve performance.

You can download the final release of Red Hat Enterprise Virtualization 3.0 using this link.

And, finally, a short video review of Red Hat Enterprise Virtualization Manager 3.0 (RHEV-M):




ConVirt 2.0 Open Source allows you to manage the Xen and KVM hypervisors included in free and commercial Linux distributions, deploy virtual servers from templates, monitor performance, automate administrator tasks, and configure all aspects of the virtual infrastructure. ConVirt 2.0 supports live migration of virtual machines, thin-provisioned virtual disks (growing as they fill up with data), resource control for virtual machines (including running ones), extensive monitoring functions, and intelligent placement of virtual machines on host servers (manual load balancing).

ConVirt 2.0 currently exists only in the Open Source edition, but the developers promise to soon release a ConVirt 2.0 Enterprise edition, which will differ from the free edition in the following features:

Feature comparison between ConVirt 2.0 Open Source and ConVirt 2.0 Enterprise, by category:

Architecture
  • Multi-platform Support
  • Agent-less Architecture
  • Universal Web Access
  • Datacenter-wide Console

Administration
  • Start, Stop, Pause, Resume
  • Maintenance Mode
  • Snapshot
  • Change Resource Allocation on a Running VM

Monitoring
  • Real-time Data
  • Historical Information
  • Server Pools
  • Storage Pools
  • Alerts and Notifications

Provisioning
  • Template-based Provisioning
  • Template Library
  • Integrated Virtual Appliance Catalogs
  • Thin Provisioning
  • Scheduled Provisioning

Automation
  • Intelligent Virtual Machine Placement
  • Live Migration
  • Host Private Networking
  • SAN, NAS Storage Support

Advanced Automation
  • High Availability
  • Backup and Recovery
  • VLAN Setup
  • Storage Automation
  • Dynamic Resource Allocation
  • Power Saving Mode

Security
  • SSH Access
  • Multi-user Administration
  • Auditing
  • Fine-Grained Access Control

Integration
  • Open Repository
  • Command Line Interface
  • Programmatic API


Convirture, which since 2007 had been developing XenMan, a GUI for managing the Xen hypervisor, recently released the free Convirture ConVirt 1.0 - the new name for XenMan.

With ConVirt, you can manage Xen and KVM hypervisors using the following features:

  • Management of multiple host servers.
  • Snapshots.
  • Live migration of virtual machines between hosts.
  • VM backup.
  • The simplest monitoring of hosts and virtual machines.
  • Support for Virtual Appliances.

You can download Convirture ConVirt 1.0 from this link.

In the life of a sysadmin, there comes a day when you have to deploy an enterprise infrastructure from scratch or redo an existing one that you have inherited. In this article I will talk about how to properly deploy a hypervisor based on Linux KVM and libvirt with LVM (Logical Volume Manager) support.

We will go through all the intricacies of hypervisor management, including console and GUI utilities, resource expansion, and migration of virtual machines to another hypervisor.

First, let's figure out what virtualization is. The official definition is: "Virtualization is the provision of a set of computing resources or their logical association, abstracted from the hardware implementation and while providing logical isolation from each other of computing processes running on one physical resource." That is, in human terms, having one powerful server, we can turn it into several medium servers, and each of them will perform its task assigned to it in the infrastructure, without interfering with others.

System administrators who work closely with virtualization in the enterprise, masters and virtuosos of their craft, are divided into two camps. Some are adherents of the high-tech but insanely expensive VMware for Windows. Others are lovers of open source and free solutions based on Linux KVM. It would take a long time to enumerate the advantages of VMware, but here we will focus on virtualization based on Linux KVM.

Virtualization technologies and hardware requirements

There are now two popular hardware virtualization technologies: Intel VT and AMD-V. Intel VT (Intel Virtualization Technology) implements virtualization of real-mode addressing; the corresponding hardware I/O virtualization is called VT-d. This technology is often referred to as VMX (Virtual Machine eXtensions). AMD created its own virtualization extensions and originally called them AMD Secure Virtual Machine (SVM). When the technology hit the market, it became known as AMD Virtualization (AMD-V for short).

Before putting the hardware into operation, make sure that the equipment supports one of these two technologies (you can see the specifications on the manufacturer's website). If support for virtualization is available, it must be enabled in the BIOS before deploying the hypervisor.
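If some Linux is already running on the machine (a live USB will also do), the presence of these extensions can be checked from the shell; a non-zero count means the CPU advertises VT-x or AMD-V:

egrep -c '(vmx|svm)' /proc/cpuinfo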

Other hypervisor requirements include hardware RAID (1, 5, 10) support, which increases the hypervisor's fault tolerance in the event of a hard drive failure. If there is no support for hardware RAID, then you can use software as a last resort. But RAID is a must-have!

The solution described in this article carries three virtual machines and successfully runs on the minimum requirements: Core 2 Quad Q6600 / 8 GB DDR2 PC6400 / 2 × 250 GB SATA HDD (hardware RAID 1).

Installing and configuring the hypervisor

I'll show you how to configure a hypervisor using Debian Linux 9.6.0 x86-64 as an example. You can use any Linux distribution you like.

When you have decided on the hardware and it has finally arrived, it is time to install the hypervisor. When installing the OS, we do everything as usual except for partitioning the disks. Inexperienced administrators often choose the option "Automatically partition all disk space without using LVM". Then all the data is written to one volume, which is bad for several reasons. First, if the hard drive fails, you lose all your data. Second, resizing the file system later will be a lot of hassle.

In general, in order to avoid unnecessary gestures and wasted time, I recommend using disk partitioning with LVM.

Logical Volume Manager

The Logical Volume Manager (LVM) is a subsystem, available on Linux and OS/2, built on top of Device Mapper. Its purpose is to present areas from one hard disk, or from multiple hard disks, as a single logical volume. LVM combines physical volumes (PV) into a volume group (VG), which in turn is divided into logical volumes (LV).

All Linux distributions with kernel 2.6 and higher now have LVM2 support. To use LVM2 on an OS with a 2.4 kernel, you need to install a patch.
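The installer described below does all of this through menus, but the same PV -> VG -> LV chain can also be built from the shell; a minimal sketch, assuming /dev/sdb is a spare disk (the device and names are illustrative):

sudo pvcreate /dev/sdb                          # mark the disk as an LVM physical volume
sudo vgcreate vg_sata /dev/sdb                  # create a volume group on top of it
sudo lvcreate -n vg_sata_root -L 10G vg_sata    # carve out a 10 GB logical volume
sudo lvs                                        # list logical volumes and their sizes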

After the system detects the hard drives, the hard drive partition manager will start. Select the item Guided - use entire disk and set up LVM.


Now we select the disk on which our volume group will be installed.



The system will offer options for partitioning the media. Select "All files in one partition" and move on.




After saving the changes, we get one volume group and two volumes in it. The first is the root partition, and the second is the swap file. Here many will ask: why not choose the layout manually and create the LVM yourself?

My answer is simple: when creating a volume group (VG), the boot partition is not placed inside the VG but is created as a separate partition with the ext2 file system. If you do not take this into account, the boot volume ends up inside a logical group, which will doom you to torment and suffering when you need to restore it. This is why the boot partition is placed on a non-LVM volume.



Let's move on to configuring the volume group for the hypervisor. Select the item "Configure the Logical Volume Manager".



The system will notify that all changes will be written to disk. We agree.



Let's create a new group - for example, let's name it vg_sata.



INFO

Servers use SATA, SSD, SAS, SCSI and NVMe media. It is good practice to name a volume group not after the host but after the type of media it is built from. I advise naming volume groups like this: vg_sata, vg_ssd, vg_nvme, and so on. This makes it clear what media the volume group consists of.




We create our first logical volume. This will be the volume for the root partition of the operating system. We select the item "Create logical volume".



Select a group for the new logical volume. We have only one.



Assign a name to the logical volume. The most sensible convention is to prefix the name with the volume group name - for example, vg_sata_root, vg_ssd_root, and so on.



We specify the size of the new logical volume. I advise allocating 10 GB for the root, but it can be less, since a logical volume can always be expanded later (see the sketch after the list below).



By analogy with the example above, create the following logical volumes:

  • vg_sata_home - 20 GB for user directories;
  • vg_sata_opt - 10 GB for installing application software;
  • vg_sata_var - 10 GB for frequently changing data, for example, system logs and other programs;
  • vg_sata_tmp - 5 GB for temporary data; if the amount of temporary data is large, you can make it bigger. In our example this volume was not created because it was not needed;
  • vg_sata_swap - equal to the amount of RAM. This is a swap section, and we create it for safety reasons - in case the hypervisor runs out of RAM.
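Growing one of these volumes later does not require reinstalling anything; a minimal sketch, assuming an ext4 file system on vg_sata_opt (the volume name and size are illustrative):

sudo lvextend -L +5G /dev/vg_sata/vg_sata_opt   # add 5 GB to the logical volume
sudo resize2fs /dev/vg_sata/vg_sata_opt         # grow the ext4 file system to match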

After creating all the volumes, exit the manager.



Now we have several volumes for creating partitions of the operating system. As you might guess, each partition has its own logical volume.



Create a partition of the same name for each logical volume.



We save and record the changes made.



After saving the disk layout changes, the basic system components will be installed, and then you will be prompted to select and install additional system components. Of all the components, we need ssh-server and standard system utilities.



After installation, the GRUB boot loader will be generated and written to disk. Install it on the physical disk where the boot partition resides, that is, /dev/sda.




Now we are waiting for the bootloader to finish writing to disk, and after the notification, we restart the hypervisor.





After rebooting the system, connect to the hypervisor via SSH. First of all, as root, install the utilities needed for work.

$ sudo apt-get install -y sudo htop screen net-tools dnsutils bind9utils sysstat telnet traceroute tcpdump wget curl gcc rsync

Configure SSH to taste. I advise switching to key-based authentication straight away. Restart the service and check that it is running.

$ sudo nano /etc/ssh/sshd_config
$ sudo systemctl restart sshd; sudo systemctl status sshd

Before installing virtualization software, you need to check the physical volumes and the state of the logical group.

$ sudo pvscan
$ sudo lvs

Install virtualization components and utilities for creating a network bridge on the hypervisor interface.

$ sudo apt-get update; sudo apt-get upgrade -y
$ sudo apt install qemu-kvm libvirt-bin libvirt-dev libvirt-daemon-system libvirt-clients virtinst bridge-utils

After installation, configure the network bridge on the hypervisor. Comment out the current network interface settings and enter new ones:

$ sudo nano /etc/network/interfaces

The content will be something like this:

auto br0
iface br0 inet static
    address 192.168.1.61
    netmask 255.255.255.192
    gateway 192.168.1.1
    broadcast 192.168.1.63
    dns-nameservers 127.0.0.1
    dns-search site
    bridge_ports enp2s0
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0

Add our user, under which we will work with the hypervisor, to the libvirt and kvm groups (for RHEL, the group is called qemu).

$ sudo gpasswd -a iryzhevtsev kvm
$ sudo gpasswd -a iryzhevtsev libvirt

Now we need to define our volume group as a libvirt storage pool, start it, and set it to start automatically when the system boots.

$ sudo virsh pool-list
$ sudo virsh pool-define-as vg_sata logical --target /dev/vg_sata
$ sudo virsh pool-start vg_sata; sudo virsh pool-autostart vg_sata
$ sudo virsh pool-list

INFO

For the LVM group to work properly with QEMU-KVM, you must first activate the logical group through the virsh console.
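Once the pool is active, disks for future guests can be created directly in it; a minimal sketch in which the volume name and size are illustrative:

$ sudo virsh vol-create-as vg_sata vm1-root 10G    # creates the logical volume vm1-root inside vg_sata
$ sudo virsh vol-list vg_sata                      # list the volumes in the pool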

Now we download the distribution kit for installation on guest systems and put it in the desired folder.

$ sudo wget https://mirror.yandex.ru/debian-cd/9.5.0/amd64/iso-cd/debian-9.5.0-amd64-netinst.iso
$ sudo mv debian-9.5.0-amd64-netinst.iso /var/lib/libvirt/images/; ls -al /var/lib/libvirt/images/

To connect to virtual machines via VNC, edit the /etc/libvirt/libvirtd.conf file:

$ sudo grep "listen_addr =" /etc/libvirt/libvirtd.conf

Uncomment and change the line to listen_addr = "0.0.0.0". Save the file, restart the libvirt daemon, and check that all services are up and running.
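A minimal way to apply and verify this (assuming systemd manages libvirt, as on stock Debian 9):

$ sudo systemctl restart libvirtd
$ sudo systemctl status libvirtd
$ sudo virsh list --all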


For me personally, it is easiest to think of KVM (Kernel-based Virtual Machine) as a level of abstraction over the Intel VT-x and AMD-V hardware virtualization technologies. We take a machine with a processor that supports one of these technologies, install Linux on it, install KVM on top of Linux, and as a result we get the ability to create virtual machines. This is roughly how cloud hosting services such as Amazon Web Services work. Along with KVM, Xen is sometimes also used, but a discussion of that technology is beyond the scope of this post. Unlike container virtualization technologies such as Docker, KVM allows you to run any OS as a guest system, but it also has higher virtualization overhead.

Note: The steps described below were tested by me on Ubuntu Linux 14.04, but in theory they will be largely valid for other versions of Ubuntu and other Linux distributions. Everything should work both on the desktop and on the server, which is accessed via SSH.

Installing KVM

Check if Intel VT-x or AMD-V is supported by our processor:

grep -E "(vmx | svm)" / proc / cpuinfo

If the command prints anything, virtualization is supported and you can proceed.

Install KVM:

sudo apt-get update
sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils

What is customary to store where:

  • /var/lib/libvirt/boot/ - ISO images for installing guest systems;
  • /var/lib/libvirt/images/ - images of the hard drives of guest systems;
  • /var/log/libvirt/ - all logs should be looked for here;
  • /etc/libvirt/ - directory with configuration files.

Now that KVM is installed, let's create our first virtual machine.

Creation of the first virtual machine

I chose FreeBSD as the guest system. Downloading the ISO image of the system:

cd /var/lib/libvirt/boot/
sudo wget http://ftp.freebsd.org/path/to/some-freebsd-disk.iso

In most cases, virtual machines are managed using the virsh utility:

sudo virsh --help

Before starting the virtual machine, we need to collect some additional information.

We look at the list of available networks:

sudo virsh net-list

Viewing information about a specific network (named default):

sudo virsh net-info default

Let's see the list of available optimizations for guest OS:

sudo virt-install --os-variant list

So, now we create a virtual machine with 1 CPU, 1 GB of RAM and 32 GB of disk space, connected to the default network:

sudo virt-install \
  --virt-type=kvm \
  --name freebsd10 \
  --ram 1024 \
  --vcpus=1 \
  --os-variant=freebsd8 \
  --hvm \
  --cdrom=/var/lib/libvirt/boot/FreeBSD-10.2-RELEASE-amd64-disc1.iso \
  --network network=default,model=virtio \
  --graphics vnc \
  --disk path=/var/lib/libvirt/images/freebsd10.img,size=32,bus=virtio

You can see:

WARNING Unable to connect to graphical console: virt-viewer not
installed. Please install the "virt-viewer" package.

Domain installation still in progress. You can reconnect to the console
to complete the installation process.

This is normal and it should be.

Then we look at the properties of the virtual machine in XML format:

sudo virsh dumpxml freebsd10

This gives the most complete information, including, for example, the MAC address, which we will need later. For now, we are looking for the VNC information, which is in the <graphics type='vnc' ...> element.
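If the VNC port is hard to spot in the XML, it can also be queried directly (a small convenience, not from the original text):

sudo virsh vncdisplay freebsd10    # prints the VNC display, e.g. :0, which means TCP port 5900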

Using your favorite client (I personally use Remmina), we connect over VNC, using SSH port forwarding if necessary, and land straight in the FreeBSD installer. Then everything is as usual: Next, Next, Next, and we get an installed system.

Basic commands

Let's now take a look at the basic commands for working with KVM.

Getting a list of all virtual machines:

sudo virsh list --all

Getting information about a specific virtual machine:

sudo virsh dominfo freebsd10

Start virtual machine:

sudo virsh start freebsd10

Stop virtual machine:

sudo virsh shutdown freebsd10

Forcefully power off the virtual machine (despite the name, this does not delete it):

sudo virsh destroy freebsd10

Reboot the virtual machine:

sudo virsh reboot freebsd10

Clone a virtual machine:

sudo virt-clone -o freebsd10 -n freebsd10-clone \
  --file /var/lib/libvirt/images/freebsd10-clone.img

Enable / disable autorun:

sudo virsh autostart freebsd10
sudo virsh autostart --disable freebsd10

Launching virsh in interactive mode (inside it, all the commands described above are available):

sudo virsh

Editing the virtual machine's properties in XML; here, among other things, you can change the memory limit and so on:

sudo virsh edit freebsd10

Important! Unfortunately, comments are removed from the edited XML.

When the virtual machine is stopped, the disk can also be resized:

sudo qemu-img resize /var/lib/libvirt/images/freebsd10.img -2G
sudo qemu-img info /var/lib/libvirt/images/freebsd10.img

Important! Your guest OS will most likely not like the disk suddenly getting bigger or smaller. In the best case, it will boot into emergency mode and suggest repartitioning the disk. You probably do not want to do this. It may be much easier to create a new virtual machine and migrate all the data to it.

Backing up and restoring is pretty straightforward: it is enough to save the dumpxml output somewhere along with the disk image, and then restore them. There is a video on YouTube demonstrating this process; it really is simple.
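In practice that boils down to something like the following sketch (the /backup destination is illustrative):

sudo virsh dumpxml freebsd10 > freebsd10.xml                           # save the domain definition
sudo cp /var/lib/libvirt/images/freebsd10.img /backup/freebsd10.img    # save the disk image
# to restore on this or another host:
sudo virsh define freebsd10.xml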

Network settings

An interesting question: how do you find out which IP address a virtual machine received after booting? KVM does this in a tricky way. I ended up writing the following Python script:

#!/usr/bin/env python3

# virt-ip.py script
# (c) 2016 Aleksander Alekseev
# http://site/

import sys
import re
import os
import subprocess
from xml.etree import ElementTree

def eprint(msg):
    print(msg, file=sys.stderr)

if len(sys.argv) < 2:
    eprint("USAGE: " + sys.argv[0] + " <domain>")
    eprint("Example: " + sys.argv[0] + " freebsd10")
    sys.exit(1)

if os.geteuid() != 0:
    eprint("ERROR: you should be root")
    eprint("Hint: run `sudo " + sys.argv[0] + " ...`")
    sys.exit(1)

if subprocess.call("which arping 2>&1 >/dev/null", shell=True) != 0:
    eprint("ERROR: arping not found")
    eprint("Hint: run `sudo apt-get install arping`")
    sys.exit(1)

domain = sys.argv[1]

if not re.match("^[a-zA-Z0-9_-]*$", domain):
    eprint("ERROR: invalid characters in domain name")
    sys.exit(1)

domout = subprocess.check_output("virsh dumpxml " + domain + " || true",
                                 shell=True)
domout = domout.decode("utf-8").strip()

if domout == "":
    # error message already printed by dumpxml
    sys.exit(1)

doc = ElementTree.fromstring(domout)

# 1. list all network interfaces
# 2. run `arping` on every interface in parallel, asking for the given MAC
# 3. grep the replies
cmd = "(ifconfig | cut -d ' ' -f 1 | grep -E '.' | " + \
      "xargs -P0 -I IFACE arping -i IFACE -c 1 {} 2>&1 | " + \
      "grep 'bytes from') || true"

for child in doc.iter():
    if child.tag == "mac":
        macaddr = child.attrib["address"]
        macout = subprocess.check_output(cmd.format(macaddr),
                                         shell=True)
        print(macout.decode("utf-8"))

The script works with both the default network and the bridged network, the configuration of which will be discussed further. In practice, however, it is much more convenient to configure KVM so that it always assigns the same IP addresses to guest systems. To do this, edit the network settings:

sudo virsh net-edit default

In the network definition that opens, find the <dhcp> section; by default it contains only a <range> element with the pool of addresses that dnsmasq hands out to guests. After the <range> line, add a <host> entry that binds the guest's MAC address (it can be seen in the dumpxml output above) to the IP address you want that guest to always receive. After making these edits, restart the default network, for example with virsh net-destroy default followed by virsh net-start default, so that the change takes effect.
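The same binding can also be added without hand-editing the XML, via virsh net-update; a minimal sketch in which the MAC address, name and IP are placeholders:

sudo virsh net-update default add ip-dhcp-host \
  "<host mac='52:54:00:aa:bb:cc' name='freebsd10' ip='192.168.122.10'/>" \
  --live --config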

We reboot the guest system and check that it received the expected IP address via DHCP. If you want the guest to have a static IP address, this is configured as usual within the guest itself.

Virt-manager program

You may also be interested in the virt-manager program:

sudo apt-get install virt-manager
sudo usermod -a -G libvirtd USERNAME

This is what its main window looks like:

As you can see, virt-manager is not only a GUI for virtual machines running locally. With its help, you can manage virtual machines running on other hosts, as well as look at beautiful graphs in real time. I personally find it especially convenient in virt-manager that you do not need to search in the configs on which port the VNC of a particular guest system is running. You just find the virtual machine in the list, double-click, and you get access to the monitor.

It is also very convenient to use virt-manager to do things that would otherwise require laborious editing of XML files and, in some cases, additional commands. For example, renaming virtual machines, configuring CPU affinity and the like. By the way, using CPU affinity significantly reduces the effect of noisy neighbors and the impact of virtual machines on the host system. Always use it whenever possible.

If you decide to use KVM as a replacement for VirtualBox, keep in mind that they cannot share hardware virtualization with each other. For KVM to work on your desktop, you will not only have to stop all virtual machines in VirtualBox and Vagrant but also reboot the system. I personally find KVM much more convenient than VirtualBox, if only because it does not require you to run sudo /sbin/rcvboxdrv setup after each kernel update, it works properly with Unity, and in general it lets you hide all the windows away.






