Archive for 29th October 2015

Quick Tip: Install a VIB into an Existing vSphere 5.5 ESXi Host

The following will likely work in other versions of vSphere, but I used it in vSphere 5.5 a while ago, then forgot to hit publish on this post!

In that case I had installed a new ESXi host and not included the custom VIB with the drivers for the SATA card. I did this deliberately, as I thought I would have no need for the local HBA at the time. The thing I forgot is that the host profiles I had created from other hosts included a local HBA, so the host profiles would not remediate without one. Annoying! So I used the following steps to manually add the specific VIB I needed (in this case sata-xahci-1.10-1.x86_64.vib).

SSH to your ESXi host (having enabled the SSH server from the vSphere Client):

# ssh root@<hostip>
# cd /tmp


Copy the vib file onto the host (in my case I had it stored on my web server, but you could equally use any other standard method to get the file onto the host):

# wget http://www.tekhead.org/wp-uploads/www.tekhead.org/sata-xahci-1.10-1.x86_64.zip


Unzip the vib file:

# unzip sata-xahci-1.10-1.x86_64.zip


Install the vib:

# esxcli software vib install -v file:/tmp/sata-xahci-1.10-1.x86_64.vib
 
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VFrontDe_bootbank_sata-xahci_1.10-1
VIBs Removed:
VIBs Skipped:
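
As an aside, if the download you grab happens to be packaged as an offline bundle (i.e. a zip with the depot metadata included) rather than just a zipped .vib, you should be able to skip the unzip step and point esxcli at the zip directly using the depot option. I haven't tested it with this particular package, so treat it as a sketch:

# esxcli software vib install -d file:/tmp/sata-xahci-1.10-1.x86_64.zip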


Check that the vib is installed by grepping for part of its name (in my case "xahci"):

# esxcli software vib list | grep -i xahci
sata-xahci   1.10-1   VFrontDe   CommunitySupported   2014-10-31


Remove the old files (no longer needed):

# rm sata-xahci-1.10-1.x86_64.*


Finally, reboot your ESXi host. Job done!
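
If the host is already in service, it's worth dropping it into maintenance mode before the reboot and taking it back out afterwards. From memory, the ESXi shell equivalent looks something like the below (double-check the syntax against your build before relying on it):

# esxcli system maintenanceMode set --enable true
# esxcli system shutdown reboot --reason "Installing sata-xahci VIB"

...and once the host is back up:

# esxcli system maintenanceMode set --enable false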


Without good Analytics you don't have a competitive storage product

Throughout my career, analysing storage utilisation for solution design and capacity management has never been an easy task! Even recently, when I speak to customers about utilisation, they often don't have the management tools in place on their legacy arrays or servers to help us understand what their true workloads look like, or indeed often even basic statistics.

Gathering them is laborious at best, and almost impossible at worst. For example:

  • One previous major vendor I used to work with was only able to surface a small amount of basic throughput and latency data covering the previous 30 days or so, along with a bit of controller and port utilisation, through their Java-based BUI (Java version specific of course – I still shudder at the thought).
  • More recently, another vendor I have used has a web-based stats console which can aggregate multiple arrays, but it uses a rather outdated method of visualisation: you have to fill in a big form to get the stats generated, and the resulting graphs don't include any kind of trending data, 95th percentiles, etc.
  • Another vendor array I work with fairly regularly requires you to run an API call against the array, which only provides you with the stats since the last time you ran it. By running that call every 30 to 60 seconds you can build up a body of stats over time (a rough sketch of what that looks like follows below this list). Not brilliant, and it's a total pain to rationalise the exported data.
  • Even if you have the stats at the array, you then need to gather the same stats at the connected hosts, to ensure that they roughly correlate and that you don't have any potential issues on the network (which is significantly more likely if, say, you are running storage and IP traffic on a converged network fabric).
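
To give a flavour of just how clunky that polling approach gets, a rough sketch of it might look something like the below. The API endpoint, credentials and output file here are entirely made up for illustration, as every vendor's API is different:

while true; do
    # pull the stats delta since the last call and append it to a local file
    curl -sk -u statsuser:password "https://array.example.com/api/perf/delta" >> /var/tmp/array-stats.log
    # wait 30 seconds before taking the next sample
    sleep 30
done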

In a word: clunky!

One of the things that struck me about many, if not all, of the vendors at Storage Field Day 8 was how much better their management consoles and analytics engines were than virtually all of those I have used in the past.

Several vendors use their dial home features to send the analytics back to HQ. This way the stats for individual customers as well as their customer base as a whole can be kept almost indefinitely and used to improve the product, as well as pre-emptively warning customers of potential issues through analysis of this “big data”. This also avoids customers having to spend yet more money on storing the data about their data storage!

Of those we spoke to, one vendor in particular really stood out for me: Nimble Storage. Their InfoSight platform gathers 30-70m data points per array, per day, which are uploaded to their central analytics platform and accessible via their very user-friendly interface. It can produce a number of very useful graphs and statistics, send scheduled reports, and will even provide predictive upgrade modelling based on current trends.

Recently they have also added a new opt-in VMVision service which can actually plug into your vCenter server to track the IO stats for the VMs from a host / VM perspective as well, presenting these in conjunction with the array data. This will show you exactly where your potential bottlenecks are (and are not), meaning that in a troubleshooting scenario you can avoid wasting precious time looking in the wrong place; all of the data is automatically rationalised into a single view, with no administrative effort required.

As certain storage array features are becoming relatively commoditised, it’s becoming harder for vendors to set themselves apart from the field. Having strong analytics and management tools is definitely one way to do this. So much so, I was compelled to tweet the following at the time:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors' products or services, and I was not compensated in any way for my time at the event.

VMworld Europe 2015 Day Four and Closing Thoughts

VMworld

As with every year, the final day of VMworld had a pretty subdued atmosphere. In the main this is due to the number of people who head home early, combined with those left looking distinctly hungover from the VMworld party on Wednesday night! Fortunately I remained reasonably coherent all night, other than losing my voice somewhat due to the volume (yes, I am turning into a grumpy old man who likes his virtual slippers by the fireside) and catching the inevitable VMworld lurgy, which in my case kicked in during rather than after the event!

The morning was largely spent shooting the breeze, chewing the fat, grinding our axes and many other classic metaphors, with the guys in the bloggers area. Needless to say we set the world to rights, defined the product strategy VMware should be taking for the next 20 years, and redefined the UK tax system so that it was fairer for all involved… ahem

I managed to squeeze in a couple more sessions over lunch time, including a great group discussion on NSX and vCD integration led by Ray Budavari and Wade Holmes. The vast majority of people in the room came from service providers, and of those only one was using NSX without vCD, so it appears that there is life in the old dog yet!

NSX & vCD

One of the interesting points from the session is that it looks as though the different editions of NSX will eventually be rationalised. vSphere will likely be the "favourite child" of NSX, getting new features first, etc., but multi-hypervisor support will continue to be a feature in the future. Probably quite reassuring if you have already made a significant investment in the technology, though upgrades are likely to be a bit of a concern as they bring together the different Code Streams (groan).

After that I managed to catch up with the inimitable Alastair Cooke, key member of the #vBrownbag posse, who gave me some excellent advice for my upcoming trip to Storage Field Day 8. It’s always a pleasure to catch up with Alastair. He was a massive help in passing my VCAP-DCD4 back in the day, so if you don’t already subscribe to his excellent blog, I highly recommend you check it out!

After that it was time to hop on the shuttle bus and head for the airport for my flight home…


Closing Thoughts

Another VMworld Europe comes and goes, and much like the US edition there weren't a huge number of life-changing announcements to write home about. Of the things which were in play, however, Cloud Native Apps were most definitely front and centre! VIC is a great option for those organisations looking to get their feet wet in the container space, whilst being assured of the security that comes with being backed by the support resources of a company like VMware.

Final Thoughts

If you’re a software vendor or enterprise with some chunky legacy custom applications and you are considering going down the CNA route, just remember you don’t need to boil the ocean! Instead of spending the next 3 years cloudifying™ and microservicing™ your app for some major release, think about starting small.

  • Target new application functionality to be built with microservices at the core.
  • Find the performance bottlenecks in the existing application and rewrite that code to be able to scale out in a microservice architecture.
  • Think about how deep you want to go with your new microservice architecture. Design it at a business function/task level, which can be improved and iterated over time.
  • Consider, for each microservice, what you would want to happen if that service fails. For example, if your search services fail, customers can still access content. This is fine for most sites, but it wouldn't work for Google, so make sure you drill down on your requirements! One example of this is that if the recommendation engine in Netflix fails for some reason, users still get a default list of recommendations, rather than a big fat error message!
  • Don’t forget about security! Microservices are awesome, but they introduce a whole new level of complexity…
  • Above all, always bear in mind Baguley’s law (see below)!

For those customers wanting to scale beyond the 10k container limit imposed by vCenter itself, Photon will be an option too, though I have a sneaking suspicion that customers of that scale may look at doing something a little more open/custom anyway.

VMware Photon

When it comes to VMware's other applications you can definitely see some decent forward momentum, particularly in products which have been bought in and integrated, such as vRA and NSX. In many cases I think the migration processes to the newer versions are all a bit too "rip and replace unless you have a 100% vanilla install", but as the products mature further I think this will become less of an issue. It will definitely cause a few customers some pain in the short term though, especially if they just went out and spent thousands on PSO to implement the current version, only to have to redo half the work to upgrade! I guess you could continue to hold off until a later version if you want to reduce hassle, but if you never set foot on the path you'll never actually reach your destination!

It was great to catch up with many old and new faces at the event, especially at the vExpert event and in the bloggers area. It’s funny how you feel you kind of know people pretty well before you’ve even met them, if only via your 140 character interactions, so when you’re actually face to face for the first time it’s like catching up with old friends!

Until next year…

A Few Links

I was kindly invited to do a wee interview for VMworld TV by Eric Sloof at the vExpert party on Monday night. If you want to take a look, the link is below. I won’t embed it as the thumbnail image looks like I’m having some kind of embolism!
https://www.youtube.com/watch?v=N8CXTxvtb-I

If you didn’t manage to attend the event, it’s not too late to take advantage of some of the awesome content and sessions. VMware post a significant number of the most popular sessions on YouTube for free public consumption. Did I mention they’re free? Andreas Lesslhumer has kindly put together a summary list of all of the available videos from both US and EU events here:
http://www.running-system.com/vmworld-2015-general-sessions-and-technical-sessions-available-online/

Needless to say I’m very much looking forward to next year’s event already. If you want to attend, I suggest you preregister now to be notified when tickets become available:
https://www.vmworld.com/en/pre-register.html


Walk a mile in another man’s shoes…

I know it has now become something of a tradition to post how far you walked during the week, so here are my stats. I would caveat, however, that I am not a lazy bar steward; I'm 6'7″ tall, so I don't need to take as many steps as other people! 🙂

Day         Steps   ~Distance
Monday      8564    6 km
Tuesday     7374    5 km
Wednesday   10274   7 km
Thursday    8356    6 km

It now becomes obvious why everyone says bring comfortable shoes! 🙂


Quote of the Week

This undoubtedly goes to VMware CTO Joe Baguley, during the CNA Panel session on Day three:

VMworld Europe 2015 Day Three Roundup

Day three was quite simply Cloud Native Apps day for me!

I began in the morning with an internal partner briefing with some of the guys in the CNA team. Needless to say this was really interesting, and for me it was a total nerdgasm! I did get a real sense that VMware are certainly not planning to get left behind in this new era; in fact, far from it, as some of their future plans will push the boundaries of what is already bleeding edge today. For the Pratchett fans amongst you, I would suggest that we are indeed living in Interesting Times!

Immediately following this I legged it down to Hall 8 for the CNA panel session, hosted by VMware CTO Joe Baguley, and featuring some regular faces from the London VMUG including Robbie Jerrom and Andy Jenkins. One of the interesting discussions which came up was about DevOps. DevOps is a nice vision, but developers today understand code; point them at a faulty storage array and they will look at you blankly… There is a skills gap there!

If the entire world is expected to become more DevOps-focussed, infrastructure will have to become a hell of a lot easier, or everything will need to just move to the public cloud. The reverse holds true of course: point most infra guys at something much more complex than a PowerShell / Bash / Perl script and you're asking for trouble.

A true DevOps culture will require people with a very particular set of skills. Skills they have acquired over a very long career. Skills that make them a nightmare for… (ok I’ll stop now!).

Next was a wee session on the performance of Docker on vSphere. This actually turned out to be a stats fest, comparing the relative performance of Docker running on native tin versus virtualised. The TLDR for the session was that running Docker in a VM adds minimal overhead for most things. There is slightly more impact on network latency than on other resources, but depending on the scale-out nature of the solution it can actually perform better than native, due to optimal NUMA scheduling.

Consider requirements over performance when looking at how to roll out your container platform. If you are running to performance margins of sub 5-10% on any resource then you have under-designed your infrastructure!

The final session of the day (INF5229) was actually probably my favourite of the whole week. If it is released on YouTube I recommend you catch it above any other session! Ben Corrie (Lead Engineer on Project Bonneville) took us through a clear and detailed explanation of the differences between running Docker on Linux inside of a standard VM compared to running vSphere Integrated Containers and Photon.

After a quick overview of some of the basics, Ben then proceeded to do several live demos using a one-day-old build, inside of his Mac Mini test lab (with the appropriate nod given to Mr William Lam of course)! I'm convinced he must have slaughtered many small animals to the gods of the Demos, as the whole thing went off without a hitch! Perhaps Bill Gates could have done with his help back in 1998!

Most importantly, Ben showed that via the use of vSphere Integrated Containers you are no longer limited to simply containerising Linux; the same process can be applied to virtually any OS, with his example being MS-DOS running Doom in a container!!! When cloning Windows VMs, the same technology as shown last year will be used, which makes it possible to generate a new SID and do a domain join almost instantly.

It’s also worth noting that this is not based on the notoriously compromised TPS, and is all new code. Whether that makes it more secure of course, is anyone’s guess! 🙂

MS-DOS Container under Docker and VIC, running Doom!

Once the sessions were all done for the day I wandered down to the Solutions Exchange for the annual "Hall Crawl", where I was admiring Atlantis Computing CTO Ruben Spruijt's Intel NUC homelab, running in a hyper-converged configuration. The only negative I would suggest is that his case is the wrong way round!


The day finished off with the VMworld party, and a great performance from Faithless on the main stage. As a Brit, I thought this was an excellent choice, but I did see a few confused faces among many of our EU counterparts, at least until Insomnia started playing!

Day Three QotD

Robbie Jerrom produced Quote of the Day for me on the CNA panel (which was where my Quote of the Event came from, but more of that later). It is very simple but succinct in getting across a relatively complex subject:

A micro service does one thing, really well.

