At last month’s VMware Users Group meeting in Savannah, GA, I told a story about a company’s IT department and how its world was turned upside down when an absent-minded employee and lax security policies met head on. The purpose of the anecdote was to show how it is possible to secure sensitive data while giving end-users more flexibility and choice. The resulting solution was founded on a solid, secure implementation of VMware View.
With the recent release of VMware’s Horizon Suite, I can’t help but wonder how these products would have changed or improved the original solution. When I look at the Horizon Suite’s product portfolio, I see a lot of opportunity for what could have been. Three major points stand out that could have really improved that initial View deployment.
Access to Remote Applications
Horizon Workspace would have set the stage for a more simplified implementation of a BYOD policy. The delivery of full desktops to end-users can be cumbersome and financially prohibitive. From one SSO portal, Workspace allows users to access individual applications and files. The enhanced security provides peace of mind and allows IT management deeper insights and controls through auditing and quotas. The best part is that the end-user will now have a multitude of secured devices to choose from in order to access their data. From home computers to library kiosks, there will be no excuse for not having access to the data you need when you need it most.
Controlling the Handsets
In the example I provided, we discussed the use of a mobile device management system in order to control access and security on end-user handsets and tablets. As I tote around my six pounds of laptop, I can surely attest to the rise in popularity of mobile devices like smartphones and tablets. Since most of these devices are personally owned, they become increasingly difficult to manage. It is not hard to imagine what might happen to unsecured corporate data on a smartphone if it were to fall into the hands of a…toddler. So how are we, as IT managers, supposed to let people use their personal devices while maintaining the security of our corporate data? VMware Horizon Mobile is the answer. This tool allows IT to deliver a managed corporate persona to the end-user’s device, letting users keep their own personalization while IT enforces corporate security on the corporate persona.
No Man Left Behind
One of the biggest issues we found during our View deployment was that not all of our end-users were easily converted to a virtual desktop. These endpoints felt extremely detached from the infrastructure as a whole and required a lot of extra attention, which detracted from the benefits we were gaining through our View deployment. If we had been able to use Horizon Mirage, our management capabilities for these endpoints would have been significantly more streamlined. The policy-driven remediation of applications would have meant a few hours of repair for branch offices instead of the days required to ship the device, complete the repairs, and ship it back. It would also have been far cheaper. Have you ever mailed a laptop overnight express with insurance? Twice?
As you can imagine, this is just the tip of the iceberg for the entire portfolio of the VMware Horizon Suite. The sheer scope of this product is enough to overwhelm any sized IT department. At eGroup, we’re ‘getting our hands dirty’ with all this great technology so you don’t have to!
The following is Part II of my three-part series on Hyper-V vs VMware vSphere. You’ll recall in Part I, I evaluated Hyper-V’s install process, hardware considerations, networking and virtual switches.
Here, I take a deeper dive into failover clustering, clustered shared volumes and storage. Enjoy!
In order to get all of the benefits of High Availability and similar features that require shared storage between your Hyper-V hosts in a “cluster,” you must install the Failover Clustering feature on your Hyper-V hosts. This is, as the name indicates, what gives you the ability to “cluster” the hosts together.
Installing this feature is just as easy as installing the Hyper-V role: simply check the box under Features in the add roles and features wizard, and it will run through the install. It’s worth noting that the server on which you are installing the feature must be a member of the domain at the time of installation, or the install won’t go through.
Once the feature is installed and the server is rebooted, you must run the cluster validation report, which checks out the hardware, drivers, etc., and alerts you to anything that would prevent your cluster from working and/or being supported by Microsoft. See Part I for the importance of having HCL-listed hardware.
From here, you create a new cluster, give it a name, an IP address, etc. and add your single node into it. Rinse and repeat to add additional hosts.
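For those who prefer PowerShell over the wizards, the steps above can be sketched as follows (the host names, cluster name, and IP address are placeholders for your own environment):

```powershell
# Install the Failover Clustering feature (run on each node)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Run the cluster validation report against the prospective nodes
Test-Cluster -Node "HV-HOST1","HV-HOST2"

# Create the cluster with a name and a static IP, adding both nodes
New-Cluster -Name "HV-CLUSTER" -Node "HV-HOST1","HV-HOST2" -StaticAddress "10.0.0.50"
```

Review the validation report before running New-Cluster; an unsupported configuration will be flagged there.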
Any network interfaces that have an IP address assigned to them (as is usually the case for interfaces or NIC teams that are used for management, cluster heartbeats, or iSCSI storage access) will show up as Networks in the Failover Clustering management interface. You can rename them to be more readable, and as long as the subnets stay the same across hosts, all hosts can share those networks without having to do anything special to keep the names consistent.
Jumping ahead to the future state of a common Hyper-V 2012 deployment without System Center Virtual Machine Manager (SCVMM, which we’ll mention in more detail later), the Failover Clustering management utility is where you will be able to migrate your virtual machines across nodes in your cluster.
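Without SCVMM, a live migration can also be kicked off from PowerShell instead of the Failover Clustering GUI; a quick sketch, with the VM and destination node names as placeholders:

```powershell
# Live-migrate a clustered VM to another node with no downtime
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-HOST2" -MigrationType Live
```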
Cluster Shared Volumes
To briefly define a “Cluster Shared Volume” (CSV) for someone familiar with ESXi, consider it the equivalent of a VMFS datastore. It’s essentially a unit of storage that can be accessed by multiple hosts at the same time, just as the VMware File System (VMFS) allows. It behaves like a clustered file system, but it still runs Microsoft’s well-known NTFS as the underlying file system.
In order to add a CSV to a cluster, you present the storage as appropriate for your configuration and your vendor’s best practices, and it becomes available to the “Add Disk” wizard as available, cluster-capable storage. Once it has been added, you can then right-click it and select “Add to Cluster Shared Volumes.”
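The equivalent PowerShell, assuming the LUN has already been presented to all nodes (the disk name is whatever the cluster assigned; “Cluster Disk 1” here is a placeholder):

```powershell
# Add any newly presented, cluster-capable disks to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Promote the clustered disk to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```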
One of the bigger advantages VMware/ESXi still has over Hyper-V is its built-in Native Multipathing Plugin (NMP). For those unfamiliar, the NMP handles the use, failover, failback, and in some cases load balancing/load distribution of all the paths to your shared storage array.
The only equivalent utility natively available in Hyper-V 2012 is the MPIO driver, used in combination with the built-in iSCSI initiator. Microsoft provides its own generic Device Specific Module (DSM), and you should check with your storage manufacturer to verify that using the Microsoft DSM is supported for access to the storage (the EMC VNX, for example, IS supported).
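A rough sketch of enabling MPIO with the Microsoft DSM for iSCSI-attached storage (verify against your array vendor’s documentation before using this in production):

```powershell
# Install the Multipath I/O feature (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO

# Have the Microsoft DSM automatically claim iSCSI-attached devices
Enable-MSDSMAutomaticClaim -BusType iSCSI
```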
For those who are familiar with and have used the VMware NMP beyond “install and let it run,” you know how incredibly valuable and efficient it is in nearly all deployment use cases (aside from NFS, which is a different beast entirely).
It’s worth noting that storage vendors such as EMC provide their own multipathing plugins or software that you can use to provide highly enhanced, optimized path management, load balancing, and failover. In EMC’s case, this software is their PowerPath V/E, or Virtual Edition, and it makes a great add-on to any Hyper-V 2012 + EMC Storage deployment.
Stay tuned this week for Part III which will take a look at performance and management features with my final conclusion wrap up.
In the past, when designing enterprise storage solutions for virtual infrastructures, we were basically given two primary considerations to complete a valid design.
One, you must have enough capacity to store your VMs.
Two, you must have enough I/O to meet your virtual workloads’ requirements.
A third consideration, though not strictly a necessity, was software integration with the hypervisor and storage array.
As disk sizes have increased, we no longer have to throw shelves and shelves of disks at the design to meet capacity needs. Likewise, with the introduction of flash (EFDs, SSDs, etc.) into our storage solutions, we no longer have to throw shelves and shelves of disks at the solution to meet our I/O requirements (a la “short stroking”). Now, customers are pushing for software-defined storage solutions as their number one priority. The demand for “VM aware” storage is greater than it has ever been.
ENTER the Host
We can argue about which network protocol and medium to use for a virtual storage solution (Fibre Channel, Ethernet, NFS, iSCSI, etc.) until we are blue in the face, but what’s not up for debate is this: if we move the active data closer to the CPU, we get lower latency and faster response times.
To realize this, we must first accept two conditions.
- Capacity Storage is becoming a commodity
- I/O requirements can and will be met with Host based Flash solutions
With these assumptions, we can begin to focus on storage solutions that truly provide “value add” at the software layer. Let’s focus on the second condition.
Solutions for utilizing host-based flash aren’t exactly new. Product offerings from companies like Fusion-io allow us to insert high quantities of flash storage into our virtual hosts. The real advantage of these peripheral interconnects comes from the software that manages them efficiently and transparently as data is stored and retrieved.
I believe it takes the following attributes for a host-based flash solution to be truly enterprise-worthy:
- It must be truly clustered and compatible with cross-host functions like vMotion, DRS, HA, etc. And “being compatible” should mean that the product allows true network-based sharing and movement of a VM’s flash footprint as it goes through vMotion, DRS migrations, etc.
- It must support write acceleration, and in a manner that doesn’t incur data loss on host failures.
- It must not be a virtual appliance. I repeat, it must not be a virtual appliance. Virtual appliances often lead to performance, availability, scalability, and manageability problems. It must have seamless, simple integration into the hypervisor management console.
- It must not require in-guest or application level changes. This is operationally un-scalable.
- It must be flash technology agnostic!
- It must work on a per-VM basis. We should have the ability to apply such a solution at the VM level, when we want to and for as long as we want to. Such a solution should be “VM aware” with proper reporting. For example, we should be able, at any point in time, to see the total IOPS a particular VM is consuming from the solution, and make intelligent administrative decisions based on that information.
I recently had the privilege of discussing such a solution with Satyam Vaghani. You may recall Satyam for his long tenure at VMware where he helped create ESXi, VMFS, VAAI and virtual volumes. He is now the CTO and cofounder of PernixData where he is working on his next game changing technology.
I asked about PernixData’s product offerings and how they might change the storage playing field.
His response: “At PernixData, we envision the need for a brand new yet seamless software defined data tier in front of the primary storage system and alongside the VMs. This tier provides IOPS for data-in-motion (“hot” data), and unburdens the storage system so it can focus on providing capacity and data services for your data-at-rest.”
More to come: a full follow-up interview with Satyam will appear in the months ahead.
One thing is for certain: my designs for enterprise storage systems that support virtual workloads have changed. I now have to focus not only on capacity and I/O requirements but how the next generation software defined data center interacts with the data as it is being stored and fetched.
To start, let me say that Hyper-V 2012 is a strong product, a very strong product. Its entrance to the market was loud and impressive, but the noise around it has since quieted down. After spending quite a bit of time using it, and coming from a background of years on the VMware vSphere platform, I’ve been able to make a comparison between the two and wanted to share my thoughts on how it stacks up.
For those who aren’t aware, Hyper-V 2012 can be installed as a Server Role on top of Server 2012, and the installation process (once the OS is already loaded) is just about as easy as it gets. You check the box for the Hyper-V role in the add new role wizard, and in a few minutes, it’s installed and live on your server. There is also a standalone, Core-only (command line only) version called Hyper-V 2012 Server, that you can install onto the server hardware directly. The difference here is that the Role added on Server 2012 includes licensing for Server OS virtual machines that will run on top of the host, whereas the Hyper-V 2012 Server installation does not. In addition, Hyper-V Server is command line only, as it runs the “core” installation of Windows Server.
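If you’d rather skip the wizard, the role can also be added from PowerShell; a minimal sketch:

```powershell
# Add the Hyper-V role plus its management tools, then reboot to finish
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```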
I won’t spend much time on the licensing details other than to say that unless you are running a small number of virtual machines, going with the Datacenter edition is probably the best bang for the buck, as it covers an unlimited number of Windows Server OS VMs running on the licensed host. To state it another way: once you pay for the Datacenter edition license, install it on a server, and enable the Hyper-V role, all server VMs that you subsequently run on that physical server require no additional Server OS license.
It’s important to note that at this point, you have the equivalent of a standalone hypervisor server (a standalone ESXi host is a good point of comparison). You do not have high availability, live migration for either the virtual workload or its underlying storage, or dynamic workload balancing.
Throughout our time installing, upgrading, and managing Hyper-V hosts, we’ve found that Microsoft is very picky about the underlying hardware used, and references its hardware compatibility list (HCL) in several documents and interfaces (especially in the Failover Clustering interface, which we’ll get to later). While the other hypervisor vendors also publish HCLs, theirs are often a background consideration compared to Microsoft’s position of putting the HCL front and center.
Given this fact, we’d strongly suggest to anyone deploying Hyper-V 2012, and especially those who are upgrading from previous versions, that you very carefully validate that your hardware is certified for Server 2012 before going into production.
One of the major improvements in Server 2012, which has an upward and positive impact to Hyper-V, is the addition of NIC Teaming through the Server 2012 OS itself, as opposed to only being available through the use of vendor specific teaming utilities.
The result here is a very similar capability to what I’ve been accustomed to having from VMware for some time now. NIC Teaming provides the ability to load balance NICs from different vendors, NICs that are on the motherboard with NICs that are add-ons through a PCIe slot, as well as load balancing of traffic through the NICs (in an “active/active” manner) without any configuration on the upstream switches.
Additionally, you can load balance your NICs in an active/standby configuration, or the commonplace active/active with “switch awareness” (a port channel in the Cisco world– which can be “static”– channel-group mode on in Cisco terms, or using LACP, channel-group mode active in Cisco terms).
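Teams can also be built from PowerShell; here is a sketch of two of the variants described above (the team and NIC names are placeholders for your own adapters):

```powershell
# Switch-independent, active/active team - no upstream switch config needed
New-NetLbfoTeam -Name "VM-Team" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# LACP team - requires a matching port channel on the upstream switch
New-NetLbfoTeam -Name "Mgmt-Team" -TeamMembers "NIC3","NIC4" `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts
```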
Assuming a standard 6 or 8 NIC server design, the only real “difference” in the networking configuration for a Hyper-V server as opposed to an ESXi host is that there is generally a NIC (or two) dedicated for “outside of Hyper-V” server OS communication. This can be done with a team, and is something that should be discussed as part of your design and hardware purchase (to ensure you have the correct amount of NIC ports).
Hyper-V Virtual Switches
This topic took me a little while to understand how it actually applied, especially since I was trying to relate it to what I was used to working with from VMware.
Imagine that you have 2 NICs in an active/active, switch independent team, that you’d like to use for the “normal” virtual machine network/LAN traffic. First, you must create this team using the NIC Teaming interface through the Server Manager in Server 2012 (this is done OUTSIDE of Hyper-V, as it’s an OS feature, and not a Hyper-V specific feature).
Once configured, note the non-editable name that Windows assigns to the logical team interface, as this is what you will be creating your virtual switch to use as its communication link. This is a minor annoyance. Being able to change the name to something more usable, or having Hyper-V use the team name as opposed to the OS assigned name, would be a simple and a more user-friendly implementation. Maybe in a patch or future update.
Staying off of the Hyper-V configuration and speaking specifically to the Server 2012 OS use of the network team interface, you have the ability to specify a VLAN, and can also create additional LOGICAL sub-interfaces off of the NIC team for specific VLANs. If you set no VLAN, the interface will listen for traffic on all VLANs that the upstream, physical switch is passing down to it.
If you specify one on the team interface itself, that’s the only VLAN it will be listening for traffic on.
If you create a sub-interface and assign it a VLAN ID, that interface will only listen for that specific VLAN’s traffic, but the logical TEAM interface will continue to listen on all others.
The recommendation for Hyper-V is to NOT create sub-interfaces on the network team interface, and to NOT assign any VLANs to the team interface, but leave the VLAN tagging to the vNICs for the virtual machines (this is a bit different than ESXi’s vSwitching, which we’ll outline further below).
Now, back to Hyper-V. From Hyper-V Manager, you’ll find the Hyper-V Virtual Switch Manager in the top right corner, and once you open it, you’ll have the ability to add a new network. There are three network types, but the most common, and the one you’ll want to create to allow virtual machines to talk to your physical LAN, is the “External Network” type.
If you create a network called “Server Network,” you’ll notice that you can ONLY assign it a VLAN if you share it with the management OS. What this means is that if you want to tag it with a VLAN at this level, you won’t be keeping it as a dedicated network for Hyper-V, but rather will be sharing it with the Server 2012 OS you installed Hyper-V onto as well. I suggest you NOT check this option, and that you do not assign it a VLAN.
As part of the creation of the new network, you’ll have the ability to select the network interface you want to use for it as well via a drop down menu. Once you select an interface for a new virtual switch, it cannot be used again for another one. This is important to take note of for your configuration.
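In PowerShell, creating the external switch on top of the team looks roughly like this (the switch name and team interface name are placeholders; -AllowManagementOS $false keeps the switch dedicated to VMs rather than sharing it with the Server 2012 OS):

```powershell
# Create an external virtual switch bound to the team's logical interface,
# without sharing it with the management OS
New-VMSwitch -Name "Server Network" -NetAdapterName "VM-Team" -AllowManagementOS $false
```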
So at this point, consider the diagram below, taken as a screenshot from ESXi. You have created a virtual switch, and assigned it physical interfaces on which to connect– the center piece, and the pieces to the right.
What you have NOT done, which is what confused us for a while, is create the “VM Port Groups” on the left hand side (See “ExNet” in the diagram).
In a typical VMware deployment, you would create a VM Port Group for each VLAN you wanted to have available to your virtual machines, and in that port group, you would set a VLAN ID. The upstream communication would be on trunking (or tagging) interfaces, and thus providing you with access to the VLANs you need– and this part is the same in Hyper-V. What’s different, however, is that Hyper-V 2012 does not have the concept of a VM Port Group. Instead, you assign a VLAN ID and “virtual port” on the vNIC of the virtual machines themselves.
So let’s say that Server Network has access to VLANs 10-20, and you want a VM to be on the “DMZ” VLAN, which we’ll say is VLAN 15 in our example. On the VM you are putting in the DMZ, you would go to Edit Settings, find its NIC, check the box to “Enable VLAN Tagging,” and specify VLAN 15. Then you’d press OK, and the VM would be able to communicate out on the network.
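The same per-vNIC VLAN assignment can be scripted; a sketch, with a hypothetical VM name:

```powershell
# Tag the VM's vNIC as an access port on VLAN 15 (the "DMZ" VLAN)
Set-VMNetworkAdapterVlan -VMName "DMZ-WEB01" -Access -VlanId 15
```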
I know this can be a bit confusing, so if you have any questions, please do not hesitate to post in the comments or drop us an email– we’re happy to help clarify!
South Carolina Farm Bureau Insurance Delivers a Better Desktop Experience to its Users
eGroup collaborates to replace SC Farm Bureau’s clunky and costly desktop PC environment with VMware View
MT. PLEASANT, S.C. – January 16, 2013 – eGroup, the Southeast’s leading provider of cloud, application and end-user computing services, today announced the South Carolina Farm Bureau Insurance Companies (SCFB Insurance) have successfully deployed their virtual desktop infrastructure (VDI) program across the enterprise, saving time and money.
Given the initial success of the project and positive feedback from the first 100 users, SCFB Insurance plans to roll out its VDI program to the remainder of the company’s user base in 2013.
Facing the prospect of having to refresh several hundred desktops at an estimated cost of $300,000, SCFB Insurance’s information technology (IT) department decided it was finally time to adopt a VDI strategy.
With the consultation of eGroup’s End-User Computing team, SCFB Insurance identified administrative personnel and QuickBooks™ users as a good target group to replace their clunky desktops with Dell Wyse P20s. The benefits were instantaneous, according to Dave Riberdy, Infrastructure Architect, SCFB Insurance.
“Our administrative personnel were severely burdened by 30-minute login times to QuickBooks and chronic latency issues, given the number of staff using the application. Moving QuickBooks to the cloud and having staff access it through View on their Wyse P20s cut login times to mere minutes and significantly improved application performance,” commented Riberdy.
The IT department also experienced immediate productivity benefits. For example, using View they no longer had to audit each and every machine for software versioning, nor did they have to push Adobe Reader updates out individually to each machine. Riberdy now has a golden version of the desktop that can be delivered with a press of a button.
“We were fortunate we had a few idle servers and some extra SAN space to get the pilot off the ground. Once we did, management witnessed just how powerful VDI can be and the impressive time and cost results that are realized almost immediately,” said Riberdy. “We were thrilled to have eGroup support us. Given View’s complexity, we needed experts to get it up and running quickly and successfully. There really was no margin for error in the eyes of management.”
SCFB Insurance even captured softer metrics over the course of the project, such as a 90 percent reduction in help desk calls related to desktop PC hardware issues and the elimination of time wasted removing harmful, non-business software from individuals’ machines.
“One of the key validations that our VDI strategy is working is that employees who have yet to be converted keep asking when we’ll get to them,” commented Riberdy. “What IT team doesn’t like happy, productive and content users?”
The next phase of SCFB Insurance’s VDI strategy involves its field agents. In the near future, agents will be able to use View on their iPads to access claims applications while consulting with a client.
Mike Carter, Principal, eGroup, said: “Working with SCFB Insurance on their VDI transformation has been a wonderful experience. Living and breathing technology all day long, we understand the impact that comes from the deployment of innovative end user computing strategies that guarantee a defined outcome. Many times, customer leadership does not recognize the immediate value of enabling technologies like this until it’s ‘in production’, which is why we’re thrilled to ensure the project’s success from pre-planning, to rollout, to end-user education, and ultimately in the hands of the users and being effectively utilized.”
To hear Dave Riberdy detail the project in depth, you can listen to this recent eGroup Roundtable Webinar.