One of the changes in the VNX platform from the previous generation of EMC storage arrays is that Failover Mode 4 (ALUA) is supposed to be the new default. I say "supposed to be" because during the beta program I did not find that to be the case: the ESXi hosts we connected to the array all used the previous default, Failover Mode 1 (Active/Passive). But now I think I know why. There has been some discussion of this on forums and message boards, and the popular thought is that "everything defaults to Failover Mode 4 now," but I've found that isn't quite true. Kind of.
Wow! I should change "released" to say "soon to be unleashed upon the world"!
Trend Micro has done it AGAIN – hitting the ground running this year – easily outpacing most of the competition in the race to win the cloud-enabled anti-malware software market.
From cloud-enabled, VDI-aware endpoint protection to VMsafe API-based virtual machine protection through the Core Protection modules, Trend has a forward-looking, effective (and affordable) security solution for every cloud-savvy data center.
Earlier this summer, given our track record with both messaging and security, eGroup was invited to participate in the InterScan Messaging Security Virtual Appliance (IMSVA) 8.0 Hybrid SaaS Email Security Early Adopter Program. The results were pretty fantastic!
Another new feature introduced with vSphere 4.1 that doesn't seem to get much coverage is Network I/O Control (NetIOC). Much like the Storage I/O Control introduced alongside it, NetIOC uses shares and hard limits to determine how the network resource (bandwidth) is divided up.
When moving to 10Gb Ethernet, it's common to see substantially fewer links and cables; traditionally, as many as a dozen 1Gbps links were used to provide the necessary bandwidth for all of the network traffic in a virtualized environment. Because fewer physical links now carry traffic for the same number of machines, it's important to be able to prioritize, limit, or otherwise "protect" the traffic of important functions.
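To make the shares-plus-limits idea concrete, here is a small sketch of how a contended uplink might be divided. This is my own illustration, not VMware code: bandwidth is split in proportion to shares, and any traffic class capped by a hard limit returns its excess to the remaining classes. The class names, share values, and limits below are hypothetical examples.

```python
# Illustrative sketch of shares + hard limits (in the spirit of NetIOC).
# Not VMware code; class names and numbers are made-up examples.

def allocate(capacity, classes):
    """classes: {name: (shares, limit_or_None)} -> {name: bandwidth}.

    Bandwidth is divided in proportion to shares; any class whose
    proportional slice exceeds its hard limit is capped at the limit,
    and the excess is redistributed among the remaining classes.
    """
    alloc = {}
    remaining = dict(classes)
    cap = capacity
    while remaining:
        total_shares = sum(s for s, _ in remaining.values())
        # Classes whose proportional slice would exceed their hard limit.
        capped = {name: lim for name, (s, lim) in remaining.items()
                  if lim is not None and cap * s / total_shares > lim}
        if not capped:
            # No limit in play: a straight shares-proportional split.
            for name, (s, _) in remaining.items():
                alloc[name] = cap * s / total_shares
            return alloc
        for name, lim in capped.items():
            alloc[name] = lim          # pin at the hard limit
            cap -= lim                 # excess goes back into the pool
            del remaining[name]
    return alloc

example = allocate(10_000, {          # a 10Gbps uplink, in Mbps
    "vMotion":    (50, 2_000),        # 50 shares, 2Gbps hard limit
    "VM traffic": (100, None),
    "NFS":        (50, None),
})
```

Here vMotion's proportional slice (2,500Mbps) exceeds its 2,000Mbps limit, so it is pinned there and VM traffic and NFS split the remaining 8,000Mbps 100:50.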
One new "feature" introduced in vSphere 4.1, probably better described as an enhancement, is Load Based Teaming.
Briefly, Load Based Teaming supplements the three traditional load-balancing choices (route based on originating virtual port ID, source MAC hash, or IP hash) with a dynamic, "on-the-fly" policy that remaps virtual ports to physical uplinks for better utilization of the available bandwidth.
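The gist of the dynamic policy can be sketched as follows. This is a conceptual illustration, not VMware's implementation: when an uplink stays saturated (vSphere 4.1 uses roughly 75% utilization over a sampling window), port-to-uplink assignments are revisited and traffic is moved to a less-loaded uplink. The uplink names, loads, and the "move heaviest port first" choice are my own assumptions for the sketch.

```python
# Conceptual sketch of the load-based teaming idea; not VMware code.
# Threshold, data, and the eviction order are illustrative assumptions.

THRESHOLD = 0.75  # rebalance when an uplink exceeds ~75% utilization

def rebalance(uplinks, port_load, mapping):
    """uplinks: {name: capacity_mbps}; port_load: {port: mbps};
    mapping: {port: uplink}. Returns a new mapping in which ports are
    moved off saturated uplinks onto the least-loaded uplink."""
    mapping = dict(mapping)  # work on a copy

    def usage(uplink):
        return sum(load for p, load in port_load.items()
                   if mapping[p] == uplink)

    for uplink, capacity in uplinks.items():
        # Consider this uplink's ports, heaviest first.
        ports = sorted((p for p in mapping if mapping[p] == uplink),
                       key=lambda p: port_load[p], reverse=True)
        for p in ports:
            if usage(uplink) / capacity <= THRESHOLD:
                break  # uplink is back under the threshold
            # Move the port to the proportionally least-loaded uplink.
            target = min(uplinks, key=lambda u: usage(u) / uplinks[u])
            if target != uplink:
                mapping[p] = target
    return mapping
```

For example, with three ports (6,000, 3,000, and 500Mbps) all pinned to one 10Gbps uplink, the uplink sits at 95% and the heaviest port gets moved to the idle uplink, leaving both well under the threshold. The static policies in the list above would never make that move, since they hash each port to an uplink without looking at load.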
One of the little-known (and little-used) enhancements from the upgrade to vSphere (and the accompanying VMware Tools update) is the set of in-guest Windows Perfmon counter DLLs. See the image below.
If you run into an issue with the "Overview" page under the Performance tab in vCenter, follow these steps to resolve it.
While adding storage to a backup server, I ran across a "bug" in Windows Server 2003 Enterprise Edition when trying to format the newly attached storage.
Here’s the quick fix:
In ESX 4, many improvements have been introduced in the CPU scheduler, including further relaxed co-scheduling, lower lock contention, and multi-core-aware load balancing. Co-scheduling overhead has been further reduced by measuring co-scheduling skew accurately and by allowing more scheduling choices. Lower lock contention is achieved by replacing the scheduler cell lock with finer-grained locks; eliminating the scheduler cell also lets a virtual machine get higher aggregate cache capacity and memory bandwidth. Lastly, multi-core-aware load balancing achieves high CPU utilization while minimizing the cost of migrations.
VMware has made a significant change that, when used in conjunction with the latest Intel (Nehalem) or AMD (Istanbul) CPUs, can take single-vCPU VM performance to new heights.
Did you forget about all the fancy CPU masking you did in version 3.x to get vMotion to work across different CPU revisions? Well, if you did, you may run into this when you upgrade. You can reference VMware Knowledge Base article 1011294 when calling VMware support if the resolution does not work.