Archive for VMware

Has VMware just killed some certification kudos?

Oh come on, at least let me finish the outline!?

So I woke up this morning to what seemed to be an innocuous email from VMware Education, confirming some changes to candidate IDs. Nothing much of interest here:

“To streamline the certification exam registration process, and provide you a single consolidated view of your training and certification histories, we have updated our candidate tracking systems. Part of this update was the creation of a new Candidate ID for all users. Your former Candidate ID VCP###### has been replaced by VMW-########X-########. This new ID will be recognized at both vmware.com/certification and pearsonvue.com/vmware.

From now on, when you are ready to register for a VMware certification exam, you will begin at vmware.com/certification. Once you have registered with VMware and received authorization, you can proceed to pearsonvue.com/vmware to schedule your exam date, time and location.

Please keep your new Candidate ID handy. Should you misplace this email, you can find your Candidate ID by logging in at vmware.com/certification to access myPreferences.”

Until we get to this bit:

“Note: We will no longer be using “Certification Numbers” such as 00001 or VCAPDCA-123456 going forward. If you are a veteran certified individual, you’ll find your original VCP Certification number reflected in the last digits of the new Candidate ID.”

I may be reading this wrong, but it would appear that you will no longer have a VCAP or VCP number… Perhaps worse still, maybe not even a VCDX number?!

Not massively important in the grand scheme of things (especially since the number of VCAP holders jumped from ~550 to >2000 when the VCAP5 came out, so it's hard to keep track of how many there are now as it is!). Still, a bit of a downer for people who want to show that not only have they been certified, but that they were perhaps some of the pioneers in doing so.

Just my 2p… I would love to hear other people’s thoughts on this. Does anyone actually care?

London VMUG – 4th July 2013

It’s that time of year again, when we all look forward to the imminent arrival of our local VMUG. In my case it’s the Summer London VMUG, being held next Thursday, the 4th of July 2013, at 33 Queen Street, London, EC4R 1AP.

For those of you who haven’t previously attended, a VMUG is a great place to:

  • Meet and swap ideas with other virtualisation professionals in the community. A great opportunity to put real faces to all those twitterati with whom you have had many conversations over time, but aren’t quite sure what they actually look like (especially if they’re particularly selective about which photos they use)! 🙂
  • See some great presentations from other community members. Some of my most memorable have been things like network virtualisation from Greg Ferro (one of the best tech presentations I’ve been fortunate enough to attend, anywhere!), converged networking from Julian Wood of wooditwork.com, and the standing room only presentation from the inimitable Mike Laverick, talking about his 42U “home” lab!
  • Attend vendor presentations on new and existing technologies / products. These can be a real mixed bag IMHO. I’ve seen some truly awesome presentations on tech which I would definitely be interested in deploying, as well as some right shockers where half the audience was nearly asleep! Either way, these events don’t pay for themselves, so spending 45 minutes at a vendor presentation which has the potential to turn out to be really useful is well worth it for the great event they help to fund.

Of course, after the VMUG it is tradition for us all to pop along to a local watering hole to set the virtualisation and tech worlds to rights over a few vBeers. Once again, if you can spare the time, these are great things to attend. The things I’ve learned, and the great people I’ve met, through these events have made them something I look forward to each quarter. The fact that you might get a free pint on top is just an added bonus!

However you look at it, VMUGs are great FREE events, organised in their spare time by some very generous people. If you can get the time out of the office and make it to the venue, you won’t be disappointed!

Register for the London VMUG here, get the latest news by following @LonVMUG on Twitter, and don’t forget to join in the conversation throughout the day using the hashtag #lonvmug.

If you see me at the London VMUG or the subsequent vBeers, don’t hesitate to come and say hi; you can’t miss me, I’m 6’7″!

Yet MORE Intel NUC Models on the way for your Nanolab!

For those of you who are regular followers of my blog, you will know I am a great proponent of the Intel NUC range for its low noise, low power, low(ish) cost, high performance and, most importantly, high WAF (Wife Acceptance Factor)!

Unbelievably, having announced their second-generation triumvirate of models only 2 months ago (and due out in a couple of weeks), Intel are at it again, announcing a third generation already! The new models include a pair of Haswell-based “Wilson Canyon” Core i3 / Core i5 processor options, featuring up to 4 USB 3.0 ports and a full-size SATA connector, and are expected to land some time around Q3 this year.

I have updated the CPU table below with the currently available info on the new models, and will add CPU benchmarks once they become available on www.cpubenchmark.net (for consistency). The table also includes the recently leaked specs for the new Gen 8 HP Microservers, which are based on Intel Pentium / Celeron processors.

| Gen | Model | Cores / Threads per Core / Logical CPUs | Clock Speed / Turbo (GHz) | Cache | Max TDP (Watts) | CPU Bench | Features |
|---|---|---|---|---|---|---|---|
| 1 | Intel Celeron 847 | 2 / 1 / 2 | 1.1 / None | 2 MB | 17 | 986 | None |
| 1 | Intel Core i3-3217U | 2 / 2 / 4 | 1.80 / None | 3 MB | 17 | 2272 | None |
| 2 | Intel Core i5-3427U | 2 / 2 / 4 | 1.80 / 2.80 | 3 MB | 17 | 3611 | vPro & VT-d |
| 2 | Intel Core i7-3537U | 2 / 2 / 4 | 2.00 / 3.10 | 4 MB | 17 | 3766 | VT-d |
| 3 | Intel Core i3-4010U | 2 / 2 / 4 | 1.70 / None | 3 MB | 15 | 2253 | VT-d |
| 3 | Intel Core i5-4250U | 2 / 2 / 4 | 1.30 / 2.60 | 3 MB | 15 | 3572 | VT-d |
| 1 (G7) | AMD Athlon II Neo N36L | 2 / 1 / 2 | 1.30 / None | 2 MB | 12 | 751 | None |
| 2 (G7) | AMD Turion II Neo N40L | 2 / 2 / 4 | 1.50 / None | 2 MB | 15 | 946 | None |
| 3 (G7) | AMD Turion II Neo N54L | 2 / 2 / 4 | 2.20 / None | 2 MB | 25 | 1314 | None |
| 4 (G8) | Intel Celeron G530T | 2 / 2 / 4 | 2.00 / None | 2 MB | 35 | 1604 | iLO |
| 4 (G8) | Intel Pentium G630T | 2 / 1 / 2 | 2.30 / None | 3 MB | 35 | 2154 | iLO |

IMHO you can’t beat the NUC for the price / performance / noise characteristics mentioned above. In an ideal world I would be happy to give up 2-3cm of extra board size to get some extra RAM slots and a second gigabit NIC that’s on the VMware HCL, but as a tidy home lab solution they’re hard to beat!

As regards this latest batch of models, I personally still think the sweet spot is the 2nd Gen Intel Core i5-3427U model (DC53427HYE), which includes vPro for remote access and will turbo to a handsome 2.8GHz, for as little as ~£235 when I last checked. More than enough for most home lab requirements!

Maximising Perceived Memory Utilisation in vSphere

One of the few drawbacks of the Intel NUC, HP Microserver and most other small form factor motherboards is that most of these systems have only two DIMM slots. With current technology, this leaves you with a maximum of 2x8GB DIMMs, or 16GB RAM in total. A couple of months ago the following tweet piqued my interest in trying to maximise the memory availability in my Nanolab environment, which consists of 2 NUC boxes with a total of 32GB of RAM in the cluster.


One of the ways in which VMware maximise VM consolidation ratios when virtualising is through a technology called Transparent Page Sharing (or TPS for short). TPS periodically scans for identical 4KB memory pages across the VMs on a host and collapses them into a single shared copy, so a group of similar guests (for example, all running the same OS) can consume far less physical RAM than the sum of their allocations.

I was only running a relatively small number of VMs in my environment, but was starting to run out of RAM and was disappointed with the levels of memory sharing I was seeing. What I had completely forgotten was that large memory page support is enabled by default, and these large (2MB) pages will only be broken back down into 4KB pages, where TPS can share them, if the host comes under memory contention.

I decided that, as I am not running any particularly high-performance apps in my lab, I would prefer the visibility of how much RAM I actually still had available over gaining maximum application performance through large page support. Making the change was very simple: set the advanced host setting Mem.AllocGuestLargePage to 0 and wait for TPS to kick in, which it did later that day.
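
If you would rather script the change than click through each host’s advanced settings, here is a minimal sketch using the pyVmomi Python SDK. It assumes a reachable vCenter; the hostname and credentials are placeholders for illustration rather than details from my lab, and it simply sets the option on every host it finds:

```python
# Minimal pyVmomi sketch (assumptions: pyVmomi installed; vCenter hostname and
# credentials below are placeholders, not details from the original post).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Walk every ESXi host in the inventory and flip the advanced setting
hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True)
for host in hosts.view:
    opt_mgr = host.configManager.advancedOption
    # 0 = stop backing guest memory with 2MB large pages, so TPS can share
    # the resulting 4KB pages straight away rather than only under contention
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Mem.AllocGuestLargePage", value=0)])
    print("Updated Mem.AllocGuestLargePage on", host.name)

hosts.Destroy()
Disconnect(si)
```

The same change can of course be made by hand under each host’s advanced settings; the script just saves a few clicks if you have more than a couple of nodes.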

My memory utilisation before was as follows:

Large Page Support Enabled

Afterwards it was as follows (you can see that not only can I see the RAM savings, but I have also added several more VMs in between screenshots):

Large Page Support Disabled

Assuming you have a large number of similar VMs within your home lab, disabling large memory page support can give you easy visibility of your maximum memory savings and the RAM you actually have available. Implementing this in your production environments may not be ideal, depending on your specific workloads; however, if your production policy is to be reasonably aggressive with memory overcommitment, I recommend you highlight this issue to your capacity management team to ensure they don’t go out buying extra servers or RAM unnecessarily early!
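
If you want to put a number on the savings rather than just eyeballing the resource allocation graphs, a similar pyVmomi sketch (same placeholder connection details as above) can total up the per-VM shared memory that vCenter reports via quickStats:

```python
# Sketch: total up per-VM shared memory (MB) as a rough measure of TPS savings.
# Connection details are placeholders, as in the previous example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder,
                                              [vim.VirtualMachine], True)

total_shared = 0
for vm in vms.view:
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        shared = vm.summary.quickStats.sharedMemory  # reported in MB
        print(f"{vm.name}: {shared} MB shared")
        total_shared += shared

print(f"Total shared memory across all powered-on VMs: {total_shared} MB")
vms.Destroy()
Disconnect(si)
```

Bear in mind that shared memory figures include zeroed pages as well as pages shared between VMs, so treat the total as a best case rather than an exact figure.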

Further reading:
