

For your virtual machines (VMs) to connect over a network to your host, to other VMs on your host, and to locations on an external network, the VM networking must be configured accordingly. To provide VM networking, the RHEL 8 hypervisor and newly created VMs have a default network configuration, which can also be modified further. For example:

  • You can enable the VMs on your host to be discovered and connected to by locations outside the host, as if the VMs were on the same network as the host.
  • You can partially or completely isolate a VM from inbound network traffic to increase its security and minimize the risk of any problems with the VM impacting the host.

The following sections explain the various types of VM network configuration and provide instructions for setting up selected VM network configurations.

13.1. Understanding virtual networking

The connection of virtual machines (VMs) to other devices and locations on a network has to be facilitated by the host hardware. The following sections explain the mechanisms of VM network connections and describe the default VM network setting.

13.1.1. How virtual networks work

Virtual networking uses the concept of a virtual network switch. A virtual network switch is a software construct that operates on a host machine. VMs connect to the network through the virtual network switch. Based on the configuration of the virtual switch, a VM can use an existing virtual network managed by the hypervisor, or a different network connection method.

The following figure shows a virtual network switch connecting two VMs to the network:

From the perspective of a guest operating system, a virtual network connection is the same as a physical network connection. Host machines view virtual network switches as network interfaces. When the libvirtd service is first installed and started, it creates virbr0, the default network interface for VMs.

To view information about this interface, use the ip utility on the host.

$ ip addr show virbr0
3: virbr0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

By default, all VMs on a single host are connected to the same NAT-type virtual network, named default, which uses the virbr0 interface. For details, see Virtual networking default configuration.

For basic outbound-only network access from VMs, no additional network setup is usually needed, because the default network is installed along with the libvirt-daemon-config-network package, and is automatically started when the libvirtd service is started.
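To confirm that the default network exists, is active, and starts automatically with libvirtd, you can query libvirt directly. A brief sketch; the exact output varies by host:

```
# List all libvirt virtual networks, including inactive ones
virsh net-list --all

# Show state, autostart, and persistence details for the default network
virsh net-info default

# Optionally, make sure the default network starts together with libvirtd
virsh net-autostart default
```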

If a different VM network functionality is needed, you can create additional virtual networks and network interfaces and configure your VMs to use them. In addition to the default NAT, these networks and interfaces can be configured to use one of the following modes:

  • Routed mode
  • Bridged mode
  • Isolated mode
  • Open mode

13.1.2. Virtual networking default configuration

When the libvirtd service is first installed on a virtualization host, it contains an initial virtual network configuration in network address translation (NAT) mode. By default, all VMs on the host are connected to the same libvirt virtual network, named default. VMs on this network can connect to locations both on the host and on the network beyond the host, but with the following limitations:

  • VMs on the network are visible to the host and other VMs on the host, but the network traffic is affected by the firewalls in the guest operating system’s network stack and by the libvirt network filtering rules attached to the guest interface.
  • VMs on the network can connect to locations outside the host but are not visible to them. Outbound traffic is affected by the NAT rules, as well as the host system’s firewall.
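The definition of the default network can be displayed with virsh net-dumpxml default. A typical definition resembles the following sketch; the subnet matches the virbr0 address shown earlier, but the exact DHCP range and bridge options can vary by host:

```xml
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```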

The following diagram illustrates the default VM network configuration:

13.2. Using the web console for managing virtual machine network interfaces

Using the RHEL 8 web console, you can manage the virtual network interfaces for the virtual machines to which the web console is connected. You can:

  • View information about network interfaces and edit them.
  • Add network interfaces to virtual machines, and disconnect or delete the interfaces.

13.2.1. Viewing and editing virtual network interface information in the web console

Using the RHEL 8 web console, you can view and modify the virtual network interfaces on a selected virtual machine (VM):

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interfaces configured for the VM as well as options to Add, Delete, Edit, or Unplug network interfaces.

    The information includes the following:

    • Type - The type of network interface for the VM. The types include virtual network, bridge to LAN, and direct attachment.

      Generic Ethernet connection is not supported in RHEL 8 and later.

    • Model type - The model of the virtual network interface.
    • MAC Address - The MAC address of the virtual network interface.
    • IP Address - The IP address of the virtual network interface.
    • Source - The source of the network interface. This is dependent on the network type.
    • State - The state of the virtual network interface.

  3. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings dialog opens.

  4. Change the interface type, source, model, or MAC address.
  5. Click Save. The network interface is modified.

    Changes to the virtual network interface settings take effect only after restarting the VM.

    Additionally, the MAC address can be modified only when the VM is shut off.

13.2.2. Adding and connecting virtual network interfaces in the web console

Using the RHEL 8 web console, you can create a virtual network interface and connect a virtual machine (VM) to it.

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interfaces configured for the VM as well as options to Add, Delete, Edit, or Plug network interfaces.

  3. Click Plug in the row of the virtual network interface you want to connect.

    The selected virtual network interface connects to the VM.

13.2.3. Disconnecting and removing virtual network interfaces in the web console

Using the RHEL 8 web console, you can disconnect the virtual network interfaces connected to a selected virtual machine (VM).

Procedure

  1. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Network Interfaces.

    The Network Interfaces section displays information about the virtual network interfaces configured for the VM as well as options to Add, Delete, Edit, or Unplug network interfaces.

  3. Click Unplug in the row of the virtual network interface you want to disconnect.

    The selected virtual network interface disconnects from the VM.

13.3. Recommended virtual machine networking configurations

In many scenarios, the default VM networking configuration is sufficient. However, if adjusting the configuration is required, you can use the command-line interface (CLI) or the RHEL 8 web console to do so. The following sections describe selected VM network setups for such situations.

13.3.1. Configuring externally visible virtual machines using the command-line interface

By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual bridge on the host. This ensures that the VM can use the host’s network interface controller (NIC) for connecting to outside networks, but the VM is not reachable from external systems.

If you require a VM to appear on the same external network as the hypervisor, you must use bridged mode instead. To do so, attach the VM to a bridge device connected to the hypervisor’s physical network device. To use the command-line interface for this, follow the instructions below.

Prerequisites

  • A shut-down existing VM with the default NAT setup.
  • The IP configuration of the hypervisor. This varies depending on the network connection of the host. As an example, this procedure uses a scenario where the host is connected to the network with an Ethernet cable, and the host's physical NIC MAC address is assigned a static IP by a DHCP server. Therefore, the Ethernet interface is treated as the hypervisor IP.

    To obtain the IP configuration of the ethernet interface, use the ip addr utility:

    # ip addr
    [...]
    enp0s25:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff
        inet 10.0.0.148/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s25

Procedure

  1. Create and set up a bridge connection for the physical interface on the host. For instructions, see Configuring a network bridge.

    Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of the physical ethernet interface to the bridge interface.

  2. Modify the VM’s network to use the created bridged interface. For example, the following sets testguest to use bridge0.

    # virt-xml testguest --edit --network bridge=bridge0
    Domain 'testguest' defined successfully.
  3. Start the VM.

    # virsh start testguest
  4. In the guest operating system, adjust the IP and DHCP settings of the system’s network interface as if the VM was another physical system in the same network as the hypervisor.

    The specific steps for this will differ depending on the guest OS used by the VM. For example, if the guest OS is RHEL 8, see Configuring an Ethernet connection.
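Assuming the example names used above (enp0s25 as the host's physical NIC, bridge0 as the new bridge, testguest as the VM), the whole procedure can be sketched with nmcli and virsh. The connection and interface names are illustrative and must match your environment:

```shell
# Create the bridge and enslave the physical interface to it
nmcli connection add type bridge con-name bridge0 ifname bridge0
nmcli connection add type ethernet con-name bridge0-port1 ifname enp0s25 master bridge0

# Activate the bridge; with DHCP, the bridge obtains the IP previously held by the NIC
nmcli connection up bridge0

# Point the VM's network interface at the bridge, then start the VM
virt-xml testguest --edit --network bridge=bridge0
virsh start testguest
```

Note that bringing the bridge up briefly interrupts the host's network connectivity while the IP configuration moves from the physical interface to the bridge.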

Verification

  1. Ensure the newly created bridge is running and contains both the host’s physical interface and the interface of the VM.

    # ip link show master bridge0
    2: enp0s25:  mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000
        link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff
    10: vnet0:  mtu 1500 qdisc fq_codel master bridge0 state UNKNOWN mode DEFAULT group default qlen 1000
        link/ether fe:54:00:89:15:40 brd ff:ff:ff:ff:ff:ff
  2. Ensure the VM appears on the same external network as the hypervisor:

    1. In the guest operating system, obtain the IP address of the system. For example, if it is a Linux guest:

      # ip addr
      [...]
      enp0s0:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
          link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
          inet 10.0.0.150/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s0
    2. From an external system connected to the local network, connect to the VM using the obtained IP address.

      # ssh root@10.0.0.150
      root@10.0.0.150's password:
      Last login: Mon Sep 24 12:05:36 2019
      root~#

      If the connection works, the network has been configured successfully.

Troubleshooting

  • In certain situations, such as when using a client-to-site VPN while the VM is hosted on the client, using bridged mode for making your VMs available to external locations is not possible.

    To work around this problem, you can set destination NAT using nftables for the VM.
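For example, to forward connections arriving at the host's TCP port 2222 to the SSH port of a VM on the default NAT network, a destination-NAT rule could look like the following sketch. The VM address 192.168.122.100, the table name, and the port numbers are hypothetical and must be adapted to your setup:

```shell
# Create a NAT table with a prerouting chain that hooks in before routing
nft add table ip natdemo
nft add chain ip natdemo prerouting '{ type nat hook prerouting priority -100; }'

# Rewrite the destination of incoming TCP port 2222 to the VM's SSH port
nft add rule ip natdemo prerouting tcp dport 2222 dnat to 192.168.122.100:22
```

The host's forward chain must also permit the translated traffic for the connection to succeed.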

13.3.2. Configuring externally visible virtual machines using the web console

By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual bridge on the host. This ensures that the VM can use the host’s network interface controller (NIC) for connecting to outside networks, but the VM is not reachable from external systems.

If you require a VM to appear on the same external network as the hypervisor, you must use bridged mode instead. To do so, attach the VM to a bridge device connected to the hypervisor’s physical network device. To use the RHEL 8 web console for this, follow the instructions below.

Prerequisites

  • The web console VM plug-in is installed on your system.
  • A shut-down existing VM with the default NAT setup.
  • The IP configuration of the hypervisor. This varies depending on the network connection of the host. As an example, this procedure uses a scenario where the host is connected to the network with an Ethernet cable, and the host's physical NIC MAC address is assigned a static IP by a DHCP server. Therefore, the Ethernet interface is treated as the hypervisor IP.

    To obtain the IP configuration of the ethernet interface, go to the Networking tab in the web console, and see the Interfaces section.

Procedure

  1. Create and set up a bridge connection for the physical interface on the host. For instructions, see Configuring network bridges in the web console.

    Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of the physical ethernet interface to the bridge interface.

  2. Modify the VM’s network to use the bridged interface. In the Network Interfaces tab of the VM:

    1. Click Add Network Interface
    2. In the Add Virtual Network Interface dialog, set:

      • Interface Type to Bridge to LAN
      • Source to the newly created bridge, for example bridge0

    3. Click Add
    4. Optional: Click Unplug for all the other interfaces connected to the VM.

  3. Click Run to start the VM.
  4. In the guest operating system, adjust the IP and DHCP settings of the system’s network interface as if the VM was another physical system in the same network as the hypervisor.

    The specific steps for this will differ depending on the guest OS used by the VM. For example, if the guest OS is RHEL 8, see Configuring an Ethernet connection.

Verification

  1. In the Networking tab of the host’s web console, click the row with the newly created bridge to ensure it is running and contains both the host’s physical interface and the interface of the VM.

  2. Ensure the VM appears on the same external network as the hypervisor.

    1. In the guest operating system, obtain the IP address of the system. For example, if it is a Linux guest:

      # ip addr
      [...]
      enp0s0:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
          link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
          inet 10.0.0.150/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s0
    2. From an external system connected to the local network, connect to the VM using the obtained IP address.

      # ssh root@10.0.0.150
      root@10.0.0.150's password:
      Last login: Mon Sep 24 12:05:36 2019
      root~#

      If the connection works, the network has been configured successfully.

Troubleshooting

  • In certain situations, such as when using a client-to-site VPN while the VM is hosted on the client, using bridged mode for making your VMs available to external locations is not possible.

    To work around this problem, you can set destination NAT using nftables for the VM.

13.4. Types of virtual machine network connections

To modify the networking properties and behavior of your VMs, change the type of virtual network or interface the VMs use. The following sections describe the connection types available to VMs in RHEL 8.

13.4.1. Virtual networking with network address translation

By default, virtual network switches operate in network address translation (NAT) mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected VMs to use the host machine’s IP address for communication with any external network. When the virtual network switch is operating in NAT mode, computers external to the host cannot communicate with the VMs inside the host.

Virtual network switches use NAT configured by firewall rules. Editing these rules while the switch is running is not recommended, because incorrect rules may result in the switch being unable to communicate.

13.4.2. Virtual networking in routed mode

When using routed mode, the virtual switch connects to the physical LAN connected to the host machine, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, the virtual machines (VMs) are all in a single subnet, separate from the host machine. The VM subnet is routed through a virtual switch, which exists on the host machine. This enables incoming connections, but requires extra routing-table entries for systems on the external network.

Routed mode uses routing based on the IP address:
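A libvirt network in routed mode is defined by setting the forward mode to route. A minimal sketch, assuming the host's physical device is enp0s25 and a hypothetical 192.168.30.0/24 guest subnet:

```xml
<network>
  <name>routednet</name>
  <forward mode='route' dev='enp0s25'/>
  <ip address='192.168.30.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.30.2' end='192.168.30.254'/>
    </dhcp>
  </ip>
</network>
```

Such a definition can be loaded with virsh net-define and activated with virsh net-start; routers on the external network still need a route for the guest subnet pointing at the host.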

Common topologies that use routed mode include DMZs and virtual server hosting.

DMZ

You can create a network where one or more nodes are placed in a controlled sub-network for security reasons. Such a sub-network is known as a demilitarized zone (DMZ).

Host machines in a DMZ typically provide services to WAN (external) host machines as well as LAN (internal) host machines. Since this requires them to be accessible from multiple locations, and considering that these locations are controlled and operated in different ways based on their security and trust level, routed mode is the best configuration for this environment.

Virtual server hosting

A virtual server hosting provider may have several host machines, each with two physical network connections. One interface is used for management and accounting, the other for the VMs to connect through. Each VM has its own public IP address, but the host machines use private IP addresses so that only internal administrators can manage the VMs.

13.4.3. Virtual networking in bridged mode

In most VM networking modes, VMs automatically create and connect to the virbr0 virtual bridge. In contrast, in bridged mode, the VM connects to an existing Linux bridge on the host. As a result, the VM is directly visible on the physical network. This enables incoming connections, but does not require any extra routing-table entries.

Bridged mode uses connection switching based on the MAC address:

In bridged mode, the VM appears within the same subnet as the host machine. All other physical machines on the same physical network can detect and access the VM.

Bridged network bonding

It is possible to use multiple physical bridge interfaces on the hypervisor by joining them together with a bond. The bond can then be added to a bridge, after which the VMs can be added to the bridge as well. However, the bonding driver has several modes of operation, and not all of these modes work with a bridge where VMs are in use.

The following bonding modes are usable:

  • mode 1 (active-backup)
  • mode 2 (balance-xor)
  • mode 4 (802.3ad)

In contrast, using modes 0, 3, 5, or 6 is likely to cause the connection to fail. Also note that media-independent interface (MII) monitoring should be used to monitor bonding modes, as Address Resolution Protocol (ARP) monitoring does not work correctly.
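As a sketch, a mode 1 (active-backup) bond with MII monitoring can be created with nmcli and then attached to a bridge for the VMs to use. The interface and connection names below are illustrative:

```shell
# Create an active-backup bond with MII link monitoring every 100 ms
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100"

# Enslave two physical NICs to the bond
nmcli connection add type ethernet con-name bond0-port1 ifname enp1s0 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname enp2s0 master bond0

# Create a bridge and make the bond a port of it
nmcli connection add type bridge con-name bridge0 ifname bridge0
nmcli connection modify bond0 connection.master bridge0 connection.slave-type bridge
```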

For more information on bonding modes, refer to the Red Hat Knowledgebase.

Common scenarios

The most common use cases for bridged mode include:

  • Deploying VMs in an existing network alongside host machines, making the difference between virtual and physical machines invisible to the end user.
  • Deploying VMs without making any changes to existing physical network configuration settings.
  • Deploying VMs that must be easily accessible to an existing physical network.
  • Placing VMs on a physical network where they must access DHCP services.
  • Connecting VMs to an existing network where virtual LANs (VLANs) are used.

13.4.4. Virtual networking in isolated mode

When using isolated mode, virtual machines connected to the virtual switch can communicate with each other and with the host machine, but their traffic will not pass outside of the host machine, and they cannot receive traffic from outside the host machine. This mode uses dnsmasq for basic functionality such as DHCP.
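In a libvirt network definition, isolated mode corresponds to omitting the forward element entirely. A minimal sketch with a hypothetical 192.168.100.0/24 subnet:

```xml
<network>
  <name>isolatednet</name>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
```

Because no forward element is present, guest traffic stays on the host-internal bridge, while dnsmasq still provides DHCP on the subnet.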

13.4.5. Virtual networking in open mode

When using open mode for networking, libvirt does not generate any firewall rules for the network. As a result, libvirt does not overwrite firewall rules provided by the host, and the user can therefore manually manage the VM’s firewall rules.
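In a libvirt network definition, open mode is selected with a forward element whose mode is open. A minimal sketch with a hypothetical subnet:

```xml
<network>
  <name>opennet</name>
  <forward mode='open'/>
  <ip address='192.168.110.1' netmask='255.255.255.0'/>
</network>
```

With this definition, libvirt adds no firewall rules for the network, so reachability is governed entirely by the rules you manage on the host.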

13.4.6. Comparison of virtual machine connection types

The following table provides information about the locations to which selected types of virtual machine (VM) network configurations can connect, and to which they are visible.

Table 13.1. Virtual machine connection types

 Connection type   Connection to   Connection to other   Connection to       Visible to
                   the host        VMs on the host       outside locations   outside locations

 Bridged mode      YES             YES                   YES                 YES
 NAT               YES             YES                   YES                 no
 Routed mode       YES             YES                   YES                 YES
 Isolated mode     YES             YES                   no                  no
 Open mode         Depends on the host’s firewall rules

13.5. Booting virtual machines from a PXE server

Virtual machines (VMs) that use Preboot Execution Environment (PXE) can boot and load their configuration from a network. This chapter describes how to use libvirt to boot VMs from a PXE server on a virtual or bridged network.

These procedures are provided only as an example. Ensure that you have sufficient backups before proceeding.

13.5.1. Setting up a PXE boot server on a virtual network

This procedure describes how to configure a libvirt virtual network to provide Preboot Execution Environment (PXE). This enables virtual machines on your host to be configured to boot from a boot image available on the virtual network.

Prerequisites

  • A local PXE server (DHCP and TFTP), such as:

    • libvirt internal server
    • manually configured dhcpd and tftpd
    • dnsmasq
    • Cobbler server

  • PXE boot images, such as PXELINUX configured by Cobbler or manually.

Procedure

  1. Place the PXE boot images and configuration in the /var/lib/tftpboot directory.
  2. Set folder permissions:

    # chmod -R a+r /var/lib/tftpboot
  3. Set folder ownership:

    # chown -R nobody: /var/lib/tftpboot
  4. Update SELinux context:

    # chcon -R --reference /usr/sbin/dnsmasq /var/lib/tftpboot
    # chcon -R --reference /usr/libexec/libvirt_leaseshelper /var/lib/tftpboot
  5. Shut down the virtual network:

    # virsh net-destroy default
  6. Open the virtual network configuration file in your default editor:

    # virsh net-edit default
  7. Edit the <ip> element in the configuration to include the appropriate address, network mask, DHCP address range, and boot file, where BOOT_FILENAME is the name of the boot image file.

        <ip address='192.168.122.1' netmask='255.255.255.0'>
           <tftp root='/var/lib/tftpboot'/>
           <dhcp>
              <range start='192.168.122.2' end='192.168.122.254'/>
              <bootp file='BOOT_FILENAME'/>
           </dhcp>
        </ip>
  8. Start the virtual network:

    # virsh net-start default

Verification

  • Verify that the default virtual network is active:

    # virsh net-list
    Name             State    Autostart   Persistent
    ---------------------------------------------------
    default          active   no          no

13.5.2. Booting virtual machines using PXE and a virtual network

To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a virtual network, you must enable PXE booting.

Procedure

  • Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the default virtual network, into a new 10 GB qcow2 image file:

    # virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10

    • Alternatively, you can manually edit the XML configuration file of an existing VM:

      1. Ensure the <os> element has a <boot dev='network'/> element inside:

           <os>
              <type arch='x86_64'>hvm</type>
              <boot dev='network'/>
              <boot dev='hd'/>
           </os>
      2. Ensure the guest network is configured to use your virtual network:

           <interface type='network'>
              <source network='default'/>
           </interface>

Verification

  • Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.

13.5.3. Booting virtual machines using PXE and a bridged network

To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a bridged network, you must enable PXE booting.

Prerequisites

  • Network bridging is enabled.
  • A PXE boot server is available on the bridged network.

Procedure

  • Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the breth0 bridged network, into a new 10 GB qcow2 image file:

    # virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10

    • Alternatively, you can manually edit the XML configuration file of an existing VM:

      1. Ensure the <os> element has a <boot dev='network'/> element inside:

           <os>
              <type arch='x86_64'>hvm</type>
              <boot dev='network'/>
              <boot dev='hd'/>
           </os>
      2. Ensure the VM is configured to use your bridged network:

           <interface type='bridge'>
              <source bridge='breth0'/>
           </interface>

Verification

  • Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.

13.6. Additional resources

  • Configuring and managing networking
  • Attach specific network interface cards as SR-IOV devices to increase VM performance.

