
VMworld Europe 2015 Day Three Roundup

Day three was quite simply Cloud Native Apps day for me!

I began in the morning with an internal partner briefing with some of the guys in the CNA team. Needless to say this was really interesting and for me it was a total nerdgasm! I did get a real sense that VMware are certainly not planning to get left behind in this new era; in fact far from it, as some of their future plans will push the boundaries of what is already bleeding edge today. For the Pratchett fans amongst you, I would suggest that we are indeed living in Interesting Times!

Immediately following this I legged it down to Hall 8 for the CNA panel session, hosted by VMware CTO Joe Baguley and featuring some regular faces from the London VMUG, including Robbie Jerrom and Andy Jenkins. One of the interesting discussions which came up was about DevOps: it is a nice vision, but developers today understand code; point them at a faulty storage array and they will look at you blankly… There is a skills gap there!

If the entire world is expected to become more DevOps-focussed, infrastructure will have to become a hell of a lot easier, or everything will simply need to move to the public cloud. The reverse holds true of course: point most infra guys at something much more complex than a PowerShell / Bash / Perl script and you’re asking for trouble.

A true DevOps culture will require people with a very particular set of skills. Skills they have acquired over a very long career. Skills that make them a nightmare for… (ok I’ll stop now!).

Next was a wee session on the performance of Docker on vSphere. This actually turned out to be a stats fest, comparing the relative performance of Docker running on native tin versus virtualised. The TL;DR for the session was that running Docker in a VM adds minimal overhead for most workloads. There is slightly more impact on network latency than on other resources, but depending on the scale-out nature of the solution it can actually perform better than native, thanks to optimal NUMA scheduling.
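If you want a rough feel for that overhead in your own lab, running an identical container benchmark on a bare-metal Docker host and again on a Docker host inside a vSphere VM is enough for a ballpark comparison. A minimal sketch along those lines (sysbench and the target hostname below are illustrative choices of mine, not what was used in the session):

# Same CPU benchmark, run once on a bare-metal Docker host and once on a Docker host inside a vSphere VM, then compare the numbers
docker run --rm ubuntu:14.04 bash -c "apt-get update -qq && apt-get install -y -qq sysbench && sysbench --test=cpu --cpu-max-prime=20000 run"

# Network latency is where the overhead shows up most, so a simple round-trip test between two hosts/containers is worth adding (placeholder address)
ping -c 100 other-docker-host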

Consider requirements over performance when looking at how to roll out your container platform. If you are running to performance margins of sub 5-10% on any resource then you have under-designed your infrastructure!

The final session of the day (INF5229) was probably my favourite of the whole week. If it is released on YouTube I recommend you catch it above any other session! Ben Corrie (Lead Engineer on Project Bonneville) took us through a clear and detailed explanation of the differences between running Docker on Linux inside a standard VM and running vSphere Integrated Containers and Photon.

After a quick overview of some of the basics, Ben then proceeded to do several live demos using a one-day-old build, inside his Mac Mini test lab (with the appropriate nod given to Mr William Lam of course)! I’m convinced he must have slaughtered many small animals to the demo gods, as the whole thing went off without a hitch! Perhaps Bill Gates could have done with his help back in 1998!

Most importantly, Ben showed that with vSphere Integrated Containers you are no longer limited to containerising Linux; the same process can be applied to virtually any OS, his example being MS-DOS running Doom in a container!!! For cloning Windows VMs, the same technology shown last year will be used, which allows a new SID to be generated and a domain join completed almost instantly.

It’s also worth noting that this is not based on the notoriously compromised TPS, and is all new code. Whether that makes it more secure, of course, is anyone’s guess! 🙂
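The practical upshot is that the standard Docker client doesn’t need to know any of this is going on underneath; it simply talks to the Docker API endpoint exposed on the vSphere side, and each “container” it starts is backed by a lightweight VM rather than a shared Linux kernel. A rough sketch of what that workflow looks like (the endpoint address and image name are placeholders for illustration, not something from Ben’s demo):

# Point the stock Docker client at the Docker API endpoint exposed on the vSphere side (address and port are placeholders)
export DOCKER_HOST=tcp://vch.lab.local:2376

# From here the usual Docker workflow applies, with each container backed by its own lightweight VM
docker ps
docker run -d my-registry/some-image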

MS-DOS Container under Docker and VIC, running Doom!


Once the sessions were all done for the day I wandered down to the Solutions Exchange for the annual “Hall Crawl”, where I was admiring Atlantis Computing CTO Ruben Spruijt’s Intel NUC homelab, running in a hyper converged configuration. The only negative I would suggest is that his case is the wrong way round!


The day finished off with the VMworld party, and a great performance from Faithless on the main stage. As a Brit I thought this was a great choice, but I did see a few confused faces among our EU counterparts, at least until Insomnia started playing!

Day Three QotD

Robbie Jerrom produced the Quote of the Day for me on the CNA panel (which was also where my Quote of the Event came from, but more on that later). It is very simple, but succinctly gets across a relatively complex subject:

A micro service does one thing, really well.

 

FreeNAS 0.7.2 NFS and iSCSI Performance in a vSphere 4.1 Lab

While doing some lab testing and benchmarking for my upcoming VCAP-DCD exam I came across some interesting results when messing about with NFS and iSCSI using FreeNAS. I plan to re-run the same set of tests soon using the EMC Celerra simulator once I have it set up.

The results come from very simplistic testing using a buffered read test only (it would be reasonable to expect write times to be the same or slower, and this was just a quick test for my own info). For this I used the following hdparm command in some Ubuntu VMs:

sudo hdparm -t /dev/sda

A sample output from this command would be:

/dev/sda:  Timing buffered disk reads: 142 MB in  3.11 seconds =  45.59 MB/sec

As this was a quick performance comparison I only repeated the test three times per storage and protocol type, but even with this simplistic testing the results were fairly conclusive.
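For anyone wanting to reproduce this, the three runs per datastore were nothing more sophisticated than re-running the same command, e.g.:

# Repeat the buffered read test three times so an average can be eyeballed
for i in 1 2 3; do sudo hdparm -t /dev/sda; done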

The test hardware was my 24GB RAM ESX and ESXi cluster-in-a-box solution (hence some results will be faster than you could achieve over a gigabit network, as this is all running on one host), running under 64-bit Windows 7 with VMware Workstation 8. Components are:

  • 4x ESX/ESXi 4.1 hosts running in Workstation 8 with an SSD datastore. 4GB RAM and 2x vCPUs each.
  • 1x FreeNAS 0.7.2 instance running in Workstation 8 with an SSD datastore and a SATA datastore. I use this over FreeNAS 8 as it has a significantly smaller memory footprint (512 MB instead of 2 GB). 1x vCPU and 512 MB RAM.
  • 64-bit Ubuntu Linux VMs running nested under the ESX(i) virtual hosts. 1x vCPU and 512 MB RAM each.

Storage components are:

  • SATA 2 onboard ICH10R controller
  • 1x Crucial M4 128GB SSD (500MB/sec Read, 175MB/Sec Write)
  • 1x Seagate 250GB 7200RPM SATA
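Before the numbers, a quick note on the plumbing: presenting the FreeNAS exports to the hosts is straightforward. As an example, an NFS export can be added as a datastore from the ESX service console along these lines (the hostname, export path and datastore label are placeholders rather than my exact settings):

# Mount the FreeNAS NFS export as a datastore (placeholder hostname, path and label)
esxcfg-nas -a -o freenas.lab.local -s /mnt/ssd/nfs FreeNAS-NFS-SSD

# List NAS datastores to confirm the mount
esxcfg-nas -l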

The results of the testing are as follows:

Protocol                 Storage Type   Read MB/sec
Local VMFS               SSD            383
Local VMFS               SATA           88
FreeNAS 0.7.2 w/ NFS     SSD            11
FreeNAS 0.7.2 w/ NFS     SATA           5
FreeNAS 0.7.2 w/ iSCSI   SSD            175
FreeNAS 0.7.2 w/ iSCSI   SATA           49

As you can see, FreeNAS with NFS does not play nicely with ESX(i) 4. I have seen stats and posts confirming that these issues are not apparent in the real world with the likes of NetApp FAS or Oracle Unified Storage (which is apparently awesome on NFS), but for your home lab the answer is clear:

For best VM performance using FreeNAS 7, stick to iSCSI!
