
VMworld Europe 2014 – Day Three Roundup and Closing Thoughts

Well that’s it, it’s all over! Having never been to a VMworld prior to this week, I have to say the event does indeed live up to the hype!

Day Three
Day three started pretty subdued, not only from the point of view of the attendees but of a couple of the presenters as well; it definitely seems people had a good time at the VMworld party the night before!

Mixing in a bit of session time with a visit to the solution exchange and a bit of Hands on Labbing was the order of the day. I did have a quite amusing chat with one of the guys working on the Oracle stand. He said that the vast majority of people who had spoken to him had berated them about licensing and support in virtual environments, along with asking why they were advertising OVM at a VMware event. I think the poor guy was not far from the end of his tether!

My last role was at Oracle, so I fully feel the pain around the licensing questions, as they were almost always the first thing people asked me about when I told them I worked there! It doesn’t help that the latest hard vs soft partitioning licensing guide still dates from 2011!

Oracle Tastiness!

One thing I will be very interested to see is what becomes the de facto stance on how many hosts you must license once shared-nothing vMotion between clusters, vCenters and DCs comes along in vSphere 6. It raises the question of whether any Oracle auditor might have the audacity to suggest that you need to license all hosts in all DCs!

This of course assumes that the specific auditor will not accept mandatory cluster affinity as per Richard’s comments here: http://www.licenseconsulting.eu/vmworld-tv-oracle-on-licensing-vmware-virtualized-environments-updated/

Hopefully in this scenario, common sense would prevail, but that’s deep enough down that rabbit hole for now! 🙂

The sessions I managed to attend on day 3 were as follows:

STO2521 – VSAN Best Practices
Rawlinson Rivera & Kiran Madnani provided a very useful overview of a number of example use cases and how to apply different VSAN configurations. As this covered multiple use cases there was some repetition of content, but not so much as to be distracting. Key takeaway: when it comes to disk groups, more = better!

VSAN Use Cases

STO2496 – Storage Best Practices for Next-Gen Storage Platforms
Being a bit of a storage geek, for me this was one of the best sessions of the entire week. Not only was it entertaining, but the quantity and quality of the information was intense to say the least! One of the key areas they covered was storage benchmarking, and specifically not just using the standard 4K, 100% read profiles which vendors use to produce stats for their marketing material.

Absurd Testing at the Chad & Vaughn Show

TEX1985 – Lessons Learned from a Real Life VSAN POC at Trend Micro
It’s always interesting to see how real customers found the use of a technology. Arsenio Mateos from Trend Micro was not particularly detailed on any specific issues they experienced, as he concentrated more on the decisions behind the solution and the benefits it brought them. Cormac Hogan, on the other hand, was very open and went into some detail on the configuration issues and bugs which were common among other customer deployments. I was also the grateful recipient of a signed copy of the book Cormac co-wrote with Duncan Epping.

EUC2027 – Characterise Performance in Horizon 6
My final session rounded out the day. I don’t use or design VMware Horizon View in my current role, where customers most commonly have managed RDS or Citrix XenApp farms. I mainly went to the session to see the VMware approach to sizing the new session host desktops in Horizon 6. Unsurprisingly, it turns out that they arrive at very similar ratios and guidelines to Citrix (shocking)! The really interesting takeaway for me from this session was the VMware View Planner tool, which looked like it could definitely have some value in load testing and gauging requirements for customers with or without VMware View.

By this time it was 4.30, and everything had closed. If I’m honest I was a bit gutted as I had believed the HoLs were going to be open until 6. I was most of the way through my NSX lab, so I guess I’ll just have to finish it up from home!

After the event, my remaining colleagues and I wandered into town to check out the Sagrada Familia, and grab some light refreshments + tasty tapas.

Sagrada Familia

Wrapping Up
Session Surveys – The one thing I didn’t actually get done at the show was the session surveys (but I plan to fill them in this weekend). I understand these are as valuable to the speakers as to VMware, so I have no issues spending a bit of time giving feedback. If you haven’t already, then I suggest you do, especially if you want to see the same guys & gals back next year!

If I could make any suggestions to VMware for next year, they would be few and far between:

  • Keep the hang space and hands on labs and/or solution exchange open until 6pm on day 3. It’s minimal extra effort but it will allow attendees to make the absolute most of the event and facilities, especially those who don’t have an early flight back the same day.
  • Make the information on getting to the event a bit easier to find on the VMworld.com site (rather than burying it in the FAQs)
  • Free Segways (or foot massages) for all attendees!

I enjoyed a wander or two around the solution hall, but for me the best and most useful elements of the entire week were the breakout sessions (being there live gave me the opportunity to ask questions at the end), and networking with others both at the event and at the vendor-sponsored evenings.

As a side note, I will probably be creating PDFs of all of my notes and posting these on the blog imminently for anyone who may find them useful.

So finally, a big thank you to everyone who made VMworld a success; the organisers, the vendors, the speakers, the HoL team and all of the people with whom I had such interesting and entertaining discussions!

Key Stats
  • Number of days attended: 4 (including partner day)
  • Blog articles published: 6
  • Blogs word count: 6,516
  • Live breakout / HoL sessions attended: 14
  • Total session notes word count: 10,412
  • Average notes word count per session: 743
  • Hands on Labs completed: 2
  • Number of steps walked: No idea as I don’t have a Fitbit!
  • Total hours slept in 4 nights: < 24
  • Contacts made: Many
  • Knowledge gained: Incalculable

Cannot See Any iSCSI Devices on Synology from a vSphere Host

Just a quick fix I discovered this weekend. It’s probably quite specific but hopefully if you come across this in future it will save you some time.

I had just finished rebuilding the second node in my lab from 5.1 with a fresh install. I added the software iSCSI initiator and connected it to my iSCSI target (Synology DS412+) using Dynamic Discovery. I then rescanned for a list of devices, and although I was picking up the IQN for the iSCSI server, I couldn’t see any devices!

I tried lots of things, including removing and re-adding the initiator and messing with iSCSI bindings, but nothing worked! Very frustrating.

After a bit of googlage, I came across this KB article from VMware:

Cannot see some or all storage devices in VMware vCenter Server or VirtualCenter (1016222)

Although this was specific to VI3/vSphere 4, it did trigger a thought! Just before I built the new node, I rejigged all of my storage LUNs, deleting 3 old ones in the process (which just so happened to be the first 3 LUNs on my NAS). What I believe happened is that the remaining LUNs were being presented to node 2 with different LUN IDs from the ones node 1 already had for them, so they refused to show up!
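
As a side note, if you want to check that theory rather than guess, the LUN number each host assigns to every device is available via the vSphere API. Below is a minimal pyVmomi sketch, purely illustrative: the vCenter address and credentials are placeholders, and the unverified SSL context is a home-lab shortcut only.

```python
# Illustrative sketch: print the LUN number each host assigns to every SCSI
# device, so you can spot hosts that disagree on LUN IDs. Hostnames and
# credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab only (self-signed certs)
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    dev = host.config.storageDevice
    # Map the internal ScsiLun keys to their canonical names (naa./t10. IDs)
    names = {lun.key: lun.canonicalName for lun in dev.scsiLun}
    print(host.name)
    for adapter in dev.scsiTopology.adapter:
        for target in adapter.target:
            for lun in target.lun:
                print("  LUN {}: {}".format(lun.lun, names.get(lun.scsiLun)))
Disconnect(si)
```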

So now, the fix. Incredibly simple as it turns out:

  1. Created three new temp 10GB LUNs on the Synology, which would then assume LUN IDs 1/2/3 as the deleted LUNs originally had.
  2. Rescanned the new node of the cluster for storage (this step and step 5 can also be scripted; see the sketch after these steps).
  3. Confirmed all of the LUNs were now visible.
  4. Deleted the three temp LUNs from the Synology (I don’t plan to add any more nodes for now so I have no need of these temp LUNs, but as they’re thin provisioned anyway it actually wouldn’t hurt to leave them there).
  5. Rescanned the ESXi host again to ensure it could still see the LUNs.
  6. Job done!
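
For anyone who prefers to script the rescans in steps 2 and 5, here’s a hedged pyVmomi sketch along the same lines as the one above; it reuses the `si`/`view` connection objects from that snippet, so treat it as illustrative rather than copy-paste ready.

```python
# Illustrative sketch: rescan HBAs and VMFS volumes on every host, roughly
# what the "Rescan" action does in the vSphere client. Assumes the si/view
# objects from the earlier snippet are still in scope.
for host in view.view:
    ss = host.configManager.storageSystem
    ss.RescanAllHba()   # pick up new or changed iSCSI devices
    ss.RescanVmfs()     # pick up any VMFS datastores on those devices
    print("Rescanned {}".format(host.name))
```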

Synology iSCSI Devices

Not much to it, but worth a quick post I thought as this simple issue wasted a chunk of my time!

London VMUG 17th July 2014 – Last Chance to Register

Just a quick reminder that this is your last chance to register for this quarter’s London VMUG. For those of you who haven’t previously attended a VMUG, it’s a brilliant way to meet other people in our industry, watch a load of community and vendor sessions, and generally immerse yourself in the techie melting pot.

As it happens, at this VMUG I will be presenting my own session, Noddy’s Guide to Storage Design – Storage 101, where I go through the basics of storage design decisions and impacts, along with a few tips I’ve picked up over the years. I may well follow this up by turning it into a series of blog posts. At the time of writing I have over 40 slides in my deck, and that’s just the basics. My biggest issue is certainly not a lack of content, so I need to work on cutting it down before Thursday for sure!

Storage is complex… who knew?!

Fortunately Mike Laverick has kindly agreed to FeedForward with me and I’ll be running through my initial draft with him this evening!

Of the other sessions, the ones I’m particularly looking forward to are:

  • When Did Turkeys Ever Vote for Christmas? – Mike Laverick, VMware
  • Vendors: VMware Vision and Strategy – Martyn Storey, VMware
  • Hitting the Big Red Button with vCO and SRM – Sam McGeown

VMUG Agenda

If you are coming along, I highly recommend getting there early. Doors open from 8.30 and it’s a great time to catch up with other attendees. At the end of the day there is of course the most excellent London #vBeers at the Pavilion End! If you haven’t been before, just hang about by the lifts and tag along with a regular…

If you do see me on the day (I’m 6’7” so you can’t miss me), please feel free to come and say hi!

Register here: London VMUG

VMware vSphere NanoLab – Part 4 – Network and Storage Choices

Over the past few posts I have gone into the detail of configuring a high-WAF vSphere NanoLab, mainly from the perspective of compute. In my case this consists of two Intel NUC nodes, each running a dual-core 1.8GHz Core i3 processor with 16GB of RAM. The main question people have been asking me since I published the series is: what do I use for networking and storage?

Prior to the NanoLab, I had always gone for a vInception type of setup, i.e. everything running inside a single powerful workstation with plenty of RAM. This limits your options a bit; in my case it meant simply using local SSD & SATA storage, presented either as iSCSI from my Windows 2008 R2 server or from a nested FreeNAS 7 VM. For a bit of extra capacity I also had a couple of spare disks in an HP Microserver N36L presented via another FreeNAS 7 VM under ESXi.

The most frustrating thing with running your VMFS storage from a Windows host is the monthly patching and reboots, meaning you have to take down your entire environment every time. In my case this also includes this blog, which is hosted as a VM on this environment, so moving forward I wanted something a little more secure, flexible and robust, which also adhered to the cost, noise and size requirements you might expect of a NanoLab.

Storage

The speed of your storage can make or break your experience and productivity when running a home lab. My requirements for a storage device / NAS were:

  • Minimal size
  • Silent or as near silent as possible
  • Low power consumption
  • Minimum 4 disk slots and ability to do RAID 5 (to minimise disk cost and provide flexibility for later growth)
  • Reasonable price

Optionally:

  • VAAI support
  • Decent warranty (if not a home build)
  • Reasonable component redundancy
  • USB3 support in case I want to add any external drives later for some speedy additional storage / backup

After going back and forth between a home-made solution based on another HP Microserver and a pre-configured NAS, I decided that the additional features available in the Synology “Plus” line were too good to pass up. These include:

  • VAAI support for Hardware Assisted Locking (ATS), Block Zero, Full Copy, Thin Provisioning
  • iSCSI snapshot and backup
  • Link aggregation support for the dual gigabit NICs
  • 2-3 year warranty depending on the model
  • iSCSI or NFS (VAAI on iSCSI volumes only)

They were also recommended by a number of vExperts such as Jason Nash, Chris Wahl and Julian Wood, which is always a good justification to go for one! 🙂
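
Once the lab LUNs are presented, it’s worth confirming from the ESXi side that hardware acceleration is actually being reported for them. Here’s a rough pyVmomi sketch (host name and credentials are placeholders, and I’m assuming the standard container-view lookup); a device should show as vStorageSupported against a VAAI-capable Synology iSCSI LUN.

```python
# Rough sketch: report the VAAI / hardware acceleration status of each SCSI
# disk a host can see. Connection details are placeholders for a home lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab only (self-signed certs)
si = SmartConnect(host="esxi-nuc-01.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    for lun in host.config.storageDevice.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk):
            print("{}: {} -> {}".format(host.name, lun.canonicalName,
                                        lun.vStorageSupport))
Disconnect(si)
```

The same information is also available directly on the host with `esxcli storage core device vaai status get` if you’d rather not script it.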

The 1512+ was very tempting, but in the end I chose the DS412+ due to its near-silent sub-20dB operation (thanks to an external power brick and 2 hot-swap silent cooling fans), low power consumption (max 44W under heavy load), minimal footprint and reduced cost. I was tempted to wait and see if a DS413+ came out any time soon, but Synology are being cagey and I needed the lab upgrades done and dusted in a short period. I already have a DS413j which I use for backups, so I can confirm they are very well-built little machines, and the noise level claims are indeed accurate!

Into the 412+ I have loaded a pair of 240GB SanDisk Extreme SSDs using SHR (Synology Hybrid RAID). This is effectively just RAID 1 mirroring while only two drives are installed, but gives me the ability to expand out to a RAID 5 equivalent as I need more space and the price of SSDs (inevitably) comes down. Eventually the box will have around 720GB or more of usable SSD storage, more than enough for a decent bunch of lab VMs! Another alternative would be a pair of SSDs for VM boot partitions / config files, and a pair of SATA drives for VM data partitions.
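
To put rough numbers on that claim, the sketch below shows how I think about SHR usable capacity with single-drive redundancy. It’s a deliberate simplification that assumes identical drive sizes; the real SHR algorithm handles mixed sizes more cleverly.

```python
def shr_usable_gb(drives_gb):
    """Approximate usable capacity of an SHR volume with single-drive redundancy.
    With two identical drives SHR behaves like RAID 1 (mirror); with three or
    more it behaves like RAID 5, so one drive's worth of space goes to parity.
    Simplified: assumes all drives are the same size."""
    if len(drives_gb) < 2:
        return 0  # no redundancy possible with a single drive
    return (len(drives_gb) - 1) * min(drives_gb)

print(shr_usable_gb([240, 240]))            # 240GB - the mirrored pair today
print(shr_usable_gb([240, 240, 240, 240]))  # 720GB - the box fully loaded
```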

Networking

Although you can easily build a great home lab on a flat network with any old cheap switch, the ability to experiment with more advanced features is highly desirable. My requirements for a managed switch were:

  • Minimal size
  • Passive cooling (for silent operation)
  • Low power consumption
  • Minimum of 8x 1 gigabit ports (or preferably more)
  • Link aggregation
  • QoS
  • Security – VLANs, PVLANs, ACLs, & Layer 3 switching
  • SSH access for command line management

Optionally:

  • I am studying for a few Cisco exams over the next year, so a Cisco-branded switch would be preferable
  • Decent warranty

After a great suggestion from Jasper, and reading an article by Vladan, I ended up going for the ten-port Cisco SG300-10.

SG300-10

This 10-port switch will allow for:

  • 1-2 ports per NUC (for 2-4 NUC boxes)
  • 2 LACP enabled ports for the Synology lab storage
  • 2 ports for my personal data storage server (might replace this with a second mid-range Synology NAS later)
  • 2 uplink ports (In my case for a router and a second wireless access point)

This switch is passively cooled, only uses around 10W of power, and as an added bonus Cisco include a limited lifetime warranty! Great if you are going to invest that much in a switch for home!

“As long as the original End User continues to own or use the Product, provided that: fan and power supply warranty is limited to five (5) years. In the event of discontinuance of product manufacture, Cisco warranty support is limited to five (5) years from the announcement of discontinuance.” http://www.cisco.com/en/US/docs/general/warranty/English/LH2DEN__.html

If I had been going for a switch purely on cost I would probably have chosen one of the HP models as these have some great bang for your buck, but I did want to stick to a Cisco branded one. I would also have loved to go for the PoE model so I could plug in a VoiP phone later, but the cost for the SG300-10P / MP was at least 50% more, and power consumption would be higher, even when idle.

WAF

The entire NanoLab setup above, of 2 NUC boxes, a DS412+ and an SG300-10, takes up about the same volume of space as a large shoe box, is virtually silent, and idles at a combined 50-60 watts, staying under 100 watts even under load. That’s less than a couple of halogen light bulbs!

In my next post I will go through the process of configuring the network and storage, including link aggregation and suggested VLAN configuration.

Earlier parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3
