The following is Part II of my three-part series on Hyper-V vs VMware vSphere. You’ll recall in Part I, I evaluated Hyper-V’s install process, hardware considerations, networking and virtual switches.
Here, I take a deeper dive into failover clustering, clustered shared volumes and storage. Enjoy!
In order to get all of the benefits of High Availability and similar features that require shared storage between your Hyper-V hosts in a “cluster,” you must install the Failover Clustering feature on your Hyper-V hosts. This is, as the name indicates, what allows you to “cluster” the hosts together.
Installing this feature is just as easy as installing the Hyper-V role – simply check its box under Features in the add roles and features wizard, and it will run through the install. It’s worth noting that the server on which you are installing it must be a member of the domain at the time of installation, or the install won’t go through.
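If you prefer PowerShell over the wizard, the same install is a one-liner (a quick sketch; run it on each host):

```powershell
# Install the Failover Clustering feature along with its management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
```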
Once the feature is installed and the server is rebooted, you must run through the Pre-Cluster Validation report, which will check out the hardware, drivers, etc. and alert you to anything that would prevent your cluster from working and/or being supported by Microsoft. See Part I for the importance of having HCL-listed hardware.
From here, you create a new cluster, give it a name, an IP address, etc. and add your single node into it. Rinse and repeat to add additional hosts.
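For reference, the validation and cluster creation steps can be scripted as well; the host names, cluster name, and IP address below are placeholders for your environment:

```powershell
# Validate the prospective nodes before creating the cluster
Test-Cluster -Node "HV-HOST1", "HV-HOST2"

# Create the cluster on the first node, then join the second
New-Cluster -Name "HV-CLUSTER" -Node "HV-HOST1" -StaticAddress "10.0.0.50"
Add-ClusterNode -Cluster "HV-CLUSTER" -Name "HV-HOST2"
```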
Any network interfaces that have an IP address assigned to them (as is usually the case for interfaces or NIC teams that are used for management, cluster heartbeats, or iSCSI storage access) will show up as Networks in the Failover Clustering management interface. You can rename them to be more readable, and as long as the subnets stay the same across hosts, all hosts can share those networks without having to do anything special to keep the names consistent.
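The rename itself can be done in the GUI or with a quick bit of PowerShell (the network names here are just examples):

```powershell
# Give a cluster network a more readable name
(Get-ClusterNetwork -Name "Cluster Network 1").Name = "iSCSI-A"
```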
Jumping ahead to the steady state of a common Hyper-V 2012 deployment without System Center Virtual Machine Manager (SCVMM, which we’ll mention in more detail later), the Failover Clustering management utility is where you will migrate your virtual machines across the nodes in your cluster.
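Without SCVMM, you can also kick off a live migration from PowerShell; the VM and node names here are hypothetical:

```powershell
# Live migrate a clustered VM to another node in the cluster
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-HOST2" -MigrationType Live
```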
Cluster Shared Volumes
To briefly define a “Cluster Shared Volume” (CSV) for someone familiar with ESXi, consider it the equivalent of a VMFS datastore. It’s essentially a unit of storage that can be accessed by multiple hosts at the same time, just like the VMware File System (VMFS) allows. It functions much like a clustered file system, but it still runs Microsoft’s well-known NTFS as the underlying file system.
In order to add a CSV to a cluster, you present the storage as appropriate for your configuration and your vendor’s best practices, and it shows up in the “Add Disk” wizard as available, cluster-capable storage. Once it has been added, you can right-click it and select “Add to Cluster Shared Volumes.”
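As a rough PowerShell equivalent (the disk name reported by your cluster will vary):

```powershell
# Add all available, cluster-capable disks to the cluster,
# then promote one of them to a Cluster Shared Volume
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```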
One of the bigger advantages VMware/ESXi still has over Hyper-V is its built-in Native Multipathing (NMP) capability. For those unfamiliar, NMP handles the use, failover, failback, and in some cases, load balancing/distribution of all the paths to your shared storage array.
The closest equivalent natively available in Hyper-V 2012 is the MPIO driver, used in combination with the built-in iSCSI initiator. Microsoft provides its own generic Device Specific Module (DSM), and you should check with your storage manufacturer to verify that the Microsoft DSM is supported for access to the storage (the EMC VNX is supported).
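Getting MPIO with the Microsoft DSM going is scriptable too; a minimal sketch for iSCSI storage, assuming your array vendor supports the Microsoft DSM (a reboot may be required along the way):

```powershell
# Install the MPIO feature and let the Microsoft DSM claim iSCSI devices
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Optionally set the default load balance policy, e.g. Round Robin
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```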
For those who are familiar with and have used the VMware NMP beyond “install and let it run,” you know how incredibly valuable and efficient it is in nearly all deployment use cases (aside from NFS, which is a different beast entirely).
It’s worth noting that storage vendors such as EMC provide their own multipathing plugins or software that deliver enhanced, optimized path management, load balancing, and failover. In EMC’s case, this software is PowerPath/VE (Virtual Edition), and it makes a great add-on to any Hyper-V 2012 + EMC storage deployment.
Stay tuned this week for Part III, which will look at performance and management features and wrap up with my final conclusions.
To start, let me say that Hyper-V 2012 is a strong product; a very strong product. Its entrance to the market was loud and impressive, but the noise around it has since quieted down. After spending quite a bit of time using it, and coming from a background of years on the VMware vSphere platform, I’ve been able to make a comparison between the two and wanted to share my thoughts on how it stacks up.
For those who aren’t aware, Hyper-V 2012 can be installed as a Server Role on top of Server 2012, and the installation process (once the OS is already loaded) is just about as easy as it gets. You check the box for the Hyper-V role in the add roles wizard, and in a few minutes, it’s installed and live on your server. There is also a standalone version called Hyper-V Server 2012 that you can install onto the server hardware directly; it is command line only, as it runs the “Core” installation of Windows Server. The other difference is that Server 2012 with the Hyper-V role added includes licensing for the Server OS virtual machines that will run on top of the host, whereas the standalone Hyper-V Server 2012 does not.
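For completeness, the role install is also a single line of PowerShell:

```powershell
# Install the Hyper-V role and management tools, rebooting when done
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```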
I won’t spend much time on the licensing details other than to say that unless you are running a small number of virtual machines, the Datacenter edition is probably the best bang for the buck, as it covers an unlimited number of Windows Server OS VMs running on the licensed host. To state it another way: once you pay for the Datacenter edition license, install it on a server, and enable the Hyper-V role, any Server VMs that you subsequently run on that physical server require no additional Server OS license.
It’s important to note that at this point, you have the equivalent of a standalone hypervisor server; a “standalone ESXi host” is a good point of comparison. You do not have the benefits of high availability, live migration of either the virtual workload or its underlying storage, or dynamic workload balancing.
Throughout our time installing, upgrading, and managing Hyper-V hosts, we’ve found that Microsoft is very picky about the underlying hardware and references its hardware compatibility list in several documents and interfaces (especially the Failover Clustering interface, which we discussed earlier). While the other hypervisor vendors have HCLs too, theirs are almost a background consideration in many cases, compared to Microsoft’s position of putting the HCL “front and center.”
Given this, we strongly suggest that anyone deploying Hyper-V 2012, especially those upgrading from previous versions, very carefully validate that their hardware is certified for Server 2012 before going into production.
One of the major improvements in Server 2012, and one with a direct, positive impact on Hyper-V, is the addition of NIC Teaming in the Server 2012 OS itself, as opposed to it only being available through vendor-specific teaming utilities.
The result is a capability very similar to what I’ve been accustomed to having from VMware for some time now. NIC Teaming lets you team NICs from different vendors, combine onboard NICs with add-on NICs in PCIe slots, and load balance traffic across the team (in an “active/active” manner) without any configuration on the upstream switches.
Additionally, you can run your NICs in an active/standby configuration, or the commonplace active/active with “switch awareness” (a port channel in the Cisco world, which can be static, “channel-group mode on” in Cisco terms, or use LACP, “channel-group mode active” in Cisco terms).
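Each of these modes maps directly to a parameter on the NIC Teaming cmdlets; here’s a sketch with placeholder adapter names:

```powershell
# Switch-independent, active/active team (no upstream switch configuration needed)
New-NetLbfoTeam -Name "VM-Team" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# For a static port channel, use -TeamingMode Static;
# for an LACP port channel, use -TeamingMode Lacp
```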
Assuming a standard 6- or 8-NIC server design, the only real “difference” in the networking configuration for a Hyper-V server as opposed to an ESXi host is that there is generally a NIC (or two) dedicated to “outside of Hyper-V” server OS communication. This can be done with a team, and it’s something that should be discussed as part of your design and hardware purchase (to ensure you have the correct number of NIC ports).
Hyper-V Virtual Switches
It took me a little while to understand how this topic actually applies, especially since I was trying to relate it to what I was used to working with in VMware.
Imagine that you have 2 NICs in an active/active, switch-independent team that you’d like to use for the “normal” virtual machine network/LAN traffic. First, you must create this team using the NIC Teaming interface in Server Manager in Server 2012 (this is done OUTSIDE of Hyper-V, as it’s an OS feature, not a Hyper-V-specific feature).
Once it’s configured, note the non-editable name that Windows assigns to the logical team interface, as this is what you will point your virtual switch at as its communication link. This is a minor annoyance: being able to change the name to something more usable, or having Hyper-V use the team name instead of the OS-assigned name, would be a simpler, more user-friendly implementation. Maybe in a patch or future update.
Setting Hyper-V aside for a moment and speaking specifically to the Server 2012 OS’s use of the network team interface, you have the ability to specify a VLAN, and you can also create additional LOGICAL sub-interfaces off of the NIC team for specific VLANs. If you set no VLAN, the interface will listen for traffic on all VLANs that the upstream physical switch is passing down to it.
If you specify one on the team interface itself, that’s the only VLAN it will be listening for traffic on.
If you create a sub-interface and assign it a VLAN ID, that interface will only listen for that specific VLAN’s traffic, but the logical TEAM interface will continue to listen on all others.
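For illustration, here’s what adding a VLAN-specific sub-interface to a team looks like in PowerShell (the team name and VLAN ID are examples; as noted next, the recommendation for Hyper-V is to skip this):

```powershell
# Create a logical team sub-interface that listens only for VLAN 10 traffic
Add-NetLbfoTeamNic -Team "VM-Team" -VlanID 10
```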
The recommendation for Hyper-V is to NOT create sub-interfaces on the network team interface and to NOT assign any VLANs to the team interface, but to leave the VLAN tagging to the vNICs of the virtual machines (this is a bit different from ESXi’s vSwitching, which we’ll outline further below).
Now, back to Hyper-V. From Hyper-V Manager, you’ll find the Hyper-V Virtual Switch Manager in the top right corner, and once you open it, you’ll have the ability to add a new network. There are three network types, but the most common, and the one you’ll want to create to allow virtual machines to talk to your physical LAN, is the “External Network” type.
If you create a network called “Server Network,” you’ll notice that you can ONLY assign it a VLAN if you share it with the management OS. What this means is that if you want to tag it with a VLAN at this level, you won’t be keeping it as a dedicated network for Hyper-V; instead, you’ll be sharing it with the Server 2012 OS you installed Hyper-V onto. I suggest you NOT check this option, and that you do not assign it a VLAN.
As part of creating the new network, you’ll also select the network interface you want it to use via a drop-down menu. Once you select an interface for a new virtual switch, it cannot be used again for another one, which is important to take note of for your configuration.
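The same external switch can be created from PowerShell; the switch name matches the example above, and the adapter name assumes your team interface shows up under the team’s name (check Get-NetAdapter for the actual name):

```powershell
# Create an external virtual switch bound to the team interface,
# without sharing it with the management OS
New-VMSwitch -Name "Server Network" -NetAdapterName "VM-Team" -AllowManagementOS $false
```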
So at this point, consider the diagram below, taken as a screenshot from ESXi. You have created a virtual switch and assigned it physical interfaces on which to connect (the center piece and the pieces to the right).
What you have NOT done, which is what confused us for a while, is create the “VM Port Groups” on the left-hand side (see “ExNet” in the diagram).
In a typical VMware deployment, you would create a VM Port Group for each VLAN you wanted to make available to your virtual machines, and in that port group, you would set a VLAN ID. The upstream communication would be over trunking (or tagging) interfaces, providing you with access to the VLANs you need; this part is the same in Hyper-V. What’s different, however, is that Hyper-V 2012 does not have the concept of a VM Port Group. Instead, you assign a VLAN ID and “virtual port” on the vNICs of the virtual machines themselves.
So let’s say that Server Network has access to VLANs 10-20, and you want a VM to be on the “DMZ” VLAN, which we’ll say is VLAN 15 in our example. On the VM you are putting in the DMZ, you would go to Edit Settings, find its NIC, check the box to “Enable VLAN Tagging,” and specify VLAN 15. Then you’d press OK, and the VM would be able to communicate out on the network.
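The same per-vNIC tagging can be done from PowerShell; the VM name is hypothetical, and the VLAN matches the example above:

```powershell
# Put the VM's network adapter in access mode on VLAN 15
Set-VMNetworkAdapterVlan -VMName "DMZ-VM01" -Access -VlanId 15
```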
I know this can be a bit confusing, so if you have any questions, please do not hesitate to post in the comments or drop us an email– we’re happy to help clarify!