Looking Forward to Storage Field Day 8

I have been a fan of the Tech Field Day events for some time. They provide a really interesting approach to tech marketing and are a great way of keeping up with the latest developments in the industry, as tech startups and established players alike take to the stage to showcase / discuss (and often get grilled by the delegates about) their shiniest new toys and features.

One of the key services I see the guys at TFD providing (free!) to the community is helping us keep track of the bewildering array (pardon the pun) of vendors and solutions that are available out there, in an easy-to-consume format. It’s important to keep up with current trends and releases in the storage arena, even if you only have vendor X or Y in your current environment, if only so that when your IT Director says he wants to go out and buy vendor Z, you can have a sensible, fact-based conversation about why or why not to consider them (instead of the obvious knee-jerk reaction they will probably be expecting!). In my case I’m just a massive geek who loves talking / learning / reading / writing about tech, so keeping up definitely isn’t a chore for me! 🙂

So with that in mind, I am very honoured and excited to have been invited to attend Storage Field Day 8 from the 21st to 23rd October this year. Thank you very much to Stephen Foskett (@SFoskett) and Claire Chaplais (@cchaplais) for the awesome opportunity!

I would consider myself an IT generalist with a penchant for virtualisation and storage. The thing that has really drawn my interest to the storage field has been the fact that it is one of the fastest moving parts of the industry today, with the most innovation and potential disruption from startups.

You don’t have to be an established player to be successful any more. The cost of entry when basing your solutions on Intel chips and white box chassis with a layer of cleverly written software is a heck of a lot lower than the custom-hardware-driven solutions of the past! As many companies have a wide selection of storage silos across their estates, it is also not so difficult to encourage them to try out your new solution by initially replacing a single silo. Lastly, let’s be honest, we all like an underdog, and there are quite a few underdogs nipping at the bellies of the 800lb storage gorillas as we speak!

Morpheus doesn’t like high-margin storage

For the past three years I have been working as a Solution Architect at Claranet, an independent pan-European managed services provider, designing hosting solutions for the mid-market; an interesting and challenging sector where aspirations sometimes exceed budgets. That said, I will try not to repeat the traditional service provider mantra of “Can I securely multi-tenant it?” and “Do you provide an Opex commercial model?” too much…

I am really looking forward to enabling my brain sponge and soaking up the vast combined knowledge of the delegates and presenters at the event (some of whom I listen to regularly on the highly recommended podcasts Greybeards on Storage and In Tech We Trust and all of whom are known for producing awesome community content), so be sure to check them out and follow them on twitter!

The list of vendors at SFD8 is extensive too… with some new names who only came out of stealth in the past year along with the more familiar ones, it should be a fascinating week!

SFD8 Vendors

You can join the live stream during the event, and recordings of all sessions are available after, all of which you can find here:
http://techfieldday.com/event/sfd8/

PS: Being half Saffa, half Scot I was a bit concerned I might miss some of the RWC 2015 action by being in the States during the semi final stage, but after spending this Saturday sitting in the stands during the (now infamous) SA vs Japan game, I’m sadly less concerned about that possible outcome now!

Storage, Tech Field Day

Tech Startup Spotlight – Hedvig

After posting this comment last week, I thought it might be worth following up with a quick post. I’ll be honest and say that until Friday I hadn’t actually heard of Hedvig, but I was invited along by the folks at Tech Field Day to attend a Webex with this up and coming distributed storage company, who have recently raised $18 million in their Series B funding round, having only come out of stealth in March 2015.

Hedvig are a “Software Defined Storage” company, but in their own words they are not YASS (Yet Another Storage Solution). Their new solution has been in development for a number of years by their founder and CEO Avinash Lakshman, the guy who invented Cassandra at Facebook as well as Amazon Dynamo, so a chap who knows about designing distributed systems! It’s based around a software-only distributed storage architecture, which supports both hyper-converged and traditional infrastructure models.

It’s still pretty early days, but apparently it has been tested up to 1,000 nodes in a single cluster, with about 20 Petabytes, so it would appear to be reasonably scalable! 🙂 It’s also elastic, as it is designed to be able to shrink by evacuating nodes, as well as add more. When you get to those kinds of scales, power can become a major part of your cost to serve, so it’s interesting to note that both x86 and ARM hardware are supported in the initial release, though none of their customers are actually using the latter as yet.

In terms of features and functionality, so far it appears to have all the usual gubbins such as thin provisioning, compression, global deduplication, multi-site replication with up to 6 copies, etc; all included within the standard price. There is no specific HCL from a hardware support perspective, which in some ways could be good as it’s flexible, but in others it risks being a thorn in their side for future support. They will provide recommendations during the sales cycle though (e.g. 20 cores / 64GB RAM, 2 SSDs for journalling and metadata per node), but ultimately it’s the customer’s choice on what they run. Multiple hypervisors are supported, though I saw no mention of VAAI support just yet.

The software supports auto-tiering via two methods: hot blocks are moved on demand, and a 24/7 background housekeeping process reshuffles storage at non-busy times. All of this is fully automated with no need for admin input (something which many admins will love, and others will probably freak out about!). This is driven by their philosophy of requiring as little human intervention as possible. A noteworthy goal in light of the modern IT trend of individuals often being responsible for concurrently managing significantly more infrastructure than our technical forefathers! (See Cats vs Chickens).
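To make the two-path tiering idea concrete, here is a toy sketch in Python. The class name, thresholds and two-tier layout are all my own inventions for illustration; Hedvig haven't published their actual heuristics. It just shows the shape of the approach: promote a block to the fast tier the moment it proves hot, and let a periodic housekeeping pass demote anything that has gone cold.

```python
# Toy model of two-path auto-tiering: on-demand promotion of hot blocks,
# plus a periodic background reshuffle. All names and thresholds are
# illustrative assumptions, not Hedvig's actual design.

class TieredStore:
    HOT_THRESHOLD = 3  # reads per window before a block counts as "hot"

    def __init__(self):
        self.ssd = {}    # hot tier: block_id -> data
        self.hdd = {}    # cold tier: block_id -> data
        self.reads = {}  # block_id -> access count in the current window

    def write(self, block_id, data):
        self.hdd[block_id] = data  # new writes land on the cold tier
        self.reads[block_id] = 0

    def read(self, block_id):
        self.reads[block_id] = self.reads.get(block_id, 0) + 1
        if block_id in self.ssd:
            return self.ssd[block_id]
        data = self.hdd[block_id]
        # Path 1: promote a hot block on demand, as soon as it crosses the bar.
        if self.reads[block_id] >= self.HOT_THRESHOLD:
            self.ssd[block_id] = self.hdd.pop(block_id)
        return data

    def reshuffle(self):
        # Path 2: background housekeeping at quiet times - demote blocks
        # that did not stay hot during the last window.
        for block_id in list(self.ssd):
            if self.reads[block_id] < self.HOT_THRESHOLD:
                self.hdd[block_id] = self.ssd.pop(block_id)
        for block_id in self.reads:
            self.reads[block_id] = 0  # decay counters for the next window
```

In a real array the "admin input" Hedvig removes would be the threshold tuning and the scheduling of that reshuffle pass; here both are hard-coded to keep the sketch short.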

Where things start to get interesting though is when it comes to the file system itself. It seems that the software can present block, file and object storage, but the underlying file system is actually based on key-value pairs. (Looks like Jeff Layton wasn’t too far off with this article from 2014.) They didn’t go into a great deal of detail on the subject, but their architecture overview says:

“The Hedvig Storage Service operates as an optimized key value store and is responsible for writing data directly to the storage media. It captures all random writes into the system, sequentially ordering them into a log structured format that flushes sequential writes to disk.”
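The quote describes the classic log-structured write path. As a purely illustrative sketch (my own toy code, not Hedvig's on-disk format), the trick is that every write, however random the key, becomes a sequential append to a log, with an in-memory index mapping each key to the offset of its latest value:

```python
import io

# Toy log-structured key-value store. Random writes are captured and
# flushed as sequential appends; an index maps keys to log offsets.
# Illustrative only - Hedvig's actual format is not public.

class LogStructuredKV:
    def __init__(self):
        self.log = io.BytesIO()  # stands in for the sequential on-disk log
        self.index = {}          # key -> (offset, length) of the latest value

    def put(self, key, value):
        offset = self.log.seek(0, io.SEEK_END)  # always append: sequential I/O
        self.log.write(value)
        self.index[key] = (offset, len(value))  # newer writes shadow older ones

    def get(self, key):
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length)
```

Note that an overwrite is just another append; the old value becomes garbage in the log, which is why real log-structured systems pair this with a compaction process to reclaim the dead space.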

Supported Access Protocols
  • Block – iSCSI and Cinder
  • File – NFS (SMB coming in future release)
  • Object – S3 or SWIFT APIs

Working for a service provider, my first thought is generally a version of “Can I multi-tenant it securely, whilst ensuring consistent performance for all tenants?”. Neither multi-tenancy of the file access protocols (e.g. attaching the array to multiple domains for different security domains per volume) nor storage performance QoS is currently possible, however I understand that Hedvig are looking at both on their roadmap.

So, a few thoughts to close… Well they definitely seem to be a really interesting storage company, and I’m fascinated to find out more about how their key-value filesystem works in detail. I’d suggest they’re not quite there yet from a service provider perspective, but for private clouds in the enterprise market, mixed hypervisor environments, and big data analytics, they definitely have something interesting to bring to the table. I’ll certainly be keeping my eye on them in the future.

For those wanting to find out a bit more, they have an architectural white paper and datasheet on their website.

Startup Spotlight, Storage, Tech Field Day

Assigning vCenter Permissions and Roles for DRS Affinity Rules

Today I was looking at a permissions element for a solution. The requirement was to provide a customer with sufficient permissions to be able to configure host and virtual machine affinity / anti-affinity groups in the vCenter console themselves, without providing any more permissions than absolutely necessary.

After spending some time trawling through vCenter roles and permissions, I couldn’t immediately find the appropriate setting; certainly nothing specifically relating to DRS permissions. A bit of Googling and Twittering also yielded nothing concrete. I finally found that the key privilege required to allow users to create and modify affinity groups is “Host \ Inventory \ Modify Cluster”. Unfortunately, using this privilege is a bit like using a sledgehammer to crack a nut!


By granting the Modify Cluster privilege, you also provide sufficient permissions to enable, configure and disable HA, modify EVC settings, and change pretty much anything you like within DRS. All of these settings are relatively safe to modify without risking uptime (though they do present some risk in the event of unexpected downtime); what is far more concerning is that these permissions also allow you to enable, configure and disable DPM! It doesn’t take a great deal of imagination to come up with a scenario where, for example, a junior administrator accidentally enables DPM on your cluster, a large percentage of your estate unexpectedly shuts down overnight without the appropriate config to boot back up, and all hell breaks loose at 9am!

The next question then becomes: how do you ensure that this scenario is at least partly mitigated? Well, it turns out that DPM can be controlled via vCenter Scheduled Tasks. Based on that, a potential workaround is to grant the Modify Cluster privilege to the users in question, then set a scheduled task to auto-disable DPM on a regular basis (such as hourly). This should at least minimise the risk, without necessarily eradicating it. Not ideal, but it would work. I’m not convinced this would be such a great idea on a critical production system. Certainly a bit of key training before letting anyone loose in vCenter, even with “limited” permissions, is always a good idea!

I have tested this in my homelab on vSphere 5.5 and it seems to work pretty well. I don’t have vSphere 6 set up in my homelab at the moment, so can’t confirm if the same configuration options are available, however it seems likely. I’ll test this again once I have upgraded my lab.

It would be great to see VMware provide more granular permissions in this area, as even basic affinity rules such as VM-VM anti-affinity are absolutely critical in many application solutions to ensure resilience and availability of services such as Active Directory, Exchange, web services, etc. To allow VM administrators to achieve this, it should not be necessary to start handing out sledgehammers to all and sundry! 🙂

If anyone has any other suggested solutions or workarounds to this, I would be very interested to hear them! Fire me a message via Twitter, and I will happily update this post with any other suggested alternatives. Unfortunately, due to inundation with spam, I removed the ability to post comments on my site back in 2014. sigh


VMware

SpiceWorld London Day Two

So that’s it, finito, over, done! Day two of SpiceWorld London is officially closed, and we are all left contemplating what we’ve learned, the new acquaintances made and how we are going to use the information we have learned over the past couple of days to influence our jobs and careers moving forward.

With the removal of the marketing track and most of the main hall sessions, today was a quieter, more tech-focussed event than Tuesday. The more subdued atmosphere may also have something to do with last night’s party of course… That said, I saw significantly more discussion and interaction at the sessions I attended, which always makes for a more engaging event. I was able to catch 4 sessions on a variety of subjects including cryptography, Windows 10, certification and a session on the SpiceWorks community, and what they’re developing.

The first (and most well attended) session of the day was on all of the new improvements and features in Windows 10. Two of those features in particular stood out to me, one of which generated some (rather heated) debate in the room!

The first feature which rather concerned me was around the potential privacy issues with Cortana. There have been a number of fairly high profile privacy issues with recent editions of OS X and Ubuntu, yet Microsoft seem to be quite happy to have joined in. Many smarter people than me have articulated the risks of the direction in which our industry is headed when it comes to privacy!

The feature which actually generated the most debate in the room was the requirement to have a Microsoft Account in order to use the features of Cortana, and the subsequent impact it may have when users either want (or don’t want) to use their personal accounts to enable this feature on their corporate devices. Undoubtedly this would be Microsoft’s preference as it enables them to build up a more accurate profile of you from the data collected in both halves of your life (tin foil hats at the ready people!).


The alternative of course is for users to maintain two separate identities with MS, for example based on their corporate email. This then has the potential to lead to confusion for users, and additional work for the IT department, who most likely have to set up and support these accounts in addition to everything else they have to manage. There were some fairly strong opinions in the room to say the least, and the atmosphere got pretty tense at one point!

On the plus side, it was nice to be reminded that Microsoft do seem to be taking security pretty seriously these days, with a long list of security features now built into Windows.

The final session I attended mirrored the first in many ways, being all about IT certification, this time led by CBT Nuggets instructor Chris Ward. Chris’ style of presentation was very different, and the structured part of the session was relatively short, lending itself to a much more interactive event. One discussion I found particularly interesting was started by one of the attendees who runs a team of contractors at a large organisation. His challenge was with an individual who was still working from some very old certs and skills, who kept saying he had no time to train himself and that the company should send him on a paid VMware course. To me, there are numerous issues with this situation, and yes I may be over simplifying a bit, but:

  • One of the reasons for using contractors is that they help you to fill skills gaps. If you have a team full of contractors, and you don’t have the right skills available, you’re doing contracting/outsourcing wrong!
  • Yes, contractors earn more and pay less tax, but they need to fund their own holidays, and more importantly, their own development. It’s not all roses! If contractors are not willing to invest in their own skills, why would an organisation want to hire (or in this case renew) them?
  • Contractor or not, people can’t expect their employer to drive their training, or indeed fund all of it. Individuals need to take some level of responsibility for this themselves, particularly in this new Self Study Era we seem to be moving into…

My final key takeaway from Chris’ presentation was something which I intend to make my life’s goal; maintaining the grooming standard!

Reflecting back on the past couple of days, I would say that if you are in a first or second line IT engineering role, perhaps working as the sole IT guy in an SMB, or even as an IT manager, then the SpiceWorld conference is definitely worth checking out! There are a wide variety of sessions on different areas of IT and you can dip in and out of subjects depending on your interests. You can also take this one step further by attending your local SpiceCorps meet up.

If you want to go a bit deeper, then you might want to consider, either alternatively or additionally, attending some of the more vendor-focussed user groups such as VMUG or CitrixUG, and if talking to tech marketers is your thing, there are plenty of them at the massive vendor-agnostic events such as Cloud Expo and Apps World.

Disclaimer: Please note that Spiceworks kindly provided my entry to the event, but there was no expectation or request for me to write about their products or services.

Spiceworld