Tech Startup Spotlight – Hedvig

After posting this comment last week, I thought it might be worth following up with a quick post. I’ll be honest and say that until Friday I hadn’t actually heard of Hedvig, but I was invited along by the folks at Tech Field Day to attend a Webex with this up-and-coming distributed storage company, who recently raised $18 million in their Series B funding round, having only come out of stealth in March 2015.

Hedvig are a “Software Defined Storage” company, but in their own words they are not YASS (Yet Another Storage Solution). Their new solution has been in development for a number of years by their founder and CEO Avinash Lakshman, the guy who invented Cassandra at Facebook as well as working on Amazon’s Dynamo, so a chap who knows about designing distributed systems! It’s based around a software-only distributed storage architecture, which supports both hyper-converged and traditional infrastructure models.

It’s still pretty early days, but apparently it has been tested at up to 1,000 nodes in a single cluster, with about 20 petabytes, so it would appear to be reasonably scalable! 🙂 It’s also elastic, as it is designed to be able to shrink by evacuating nodes, as well as grow by adding more. When you get to those kinds of scales, power can become a major part of your cost to serve, so it’s interesting to note that both x86 and ARM hardware are supported in the initial release, though none of their customers are actually using the latter as yet.

In terms of features and functionality, so far it appears to have all the usual gubbins such as thin provisioning, compression, global deduplication, multi-site replication with up to 6 copies, etc., all included within the standard price. There is no specific HCL from a hardware support perspective, which in some ways could be good as it’s flexible, but in others it risks being a thorn in their side for future support. They will provide recommendations during the sales cycle though (e.g. 20 cores / 64GB RAM, and 2 SSDs for journalling and metadata per node), but ultimately it’s the customer’s choice what they run. Multiple hypervisors are supported, though I saw no mention of VAAI support just yet.
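Global deduplication of the kind mentioned above is typically built on content addressing: hash each block, and if the hash has been seen before, store a reference rather than a second copy. A minimal sketch of the general technique (entirely my own illustration; the class and names here are hypothetical, not Hedvig’s implementation):

```python
import hashlib

class DedupBlockStore:
    """Toy content-addressed block store: identical blocks are stored once."""

    def __init__(self):
        self.chunks = {}  # sha256 digest -> block bytes (stored once)
        self.refs = {}    # sha256 digest -> reference count

    def write(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest in self.chunks:
            self.refs[digest] += 1       # duplicate: just bump the refcount
        else:
            self.chunks[digest] = block  # new content: store it once
            self.refs[digest] = 1
        return digest                    # caller keeps the digest as a block pointer

    def read(self, digest: str) -> bytes:
        return self.chunks[digest]
```

Writing the same 4KB block a thousand times consumes the space of one block plus a thousand tiny references, which is where the headline capacity savings come from.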

The software supports auto-tiering via two methods, with hot blocks being moved on demand, and a 24/7 background housekeeping process which reshuffles storage at non-busy times. All of this is fully automated with no need for admin input (something which many admins will love, and others will probably freak out about!). This is driven by their philosophy of requiring as little human intervention as possible. A noteworthy goal in light of the modern IT trend of individuals often being responsible for concurrently managing significantly more infrastructure than our technical forefathers! (See Cats vs Chickens).
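To illustrate the general shape of those two tiering methods (the class, thresholds and tier names below are my own illustrative sketch, not Hedvig’s internals), you can think of an on-demand promotion path plus a background demotion pass:

```python
class TieringEngine:
    """Toy two-tier auto-tiering: hot blocks promoted to SSD on demand,
    cold blocks demoted again by a background housekeeping pass."""

    HOT_THRESHOLD = 3  # promote after this many accesses (illustrative value)

    def __init__(self):
        self.ssd, self.hdd, self.access_count = {}, {}, {}

    def read(self, block_id):
        self.access_count[block_id] = self.access_count.get(block_id, 0) + 1
        if block_id in self.ssd:
            return self.ssd[block_id]
        data = self.hdd[block_id]
        if self.access_count[block_id] >= self.HOT_THRESHOLD:
            self.ssd[block_id] = self.hdd.pop(block_id)  # hot: move on demand
        return data

    def housekeeping(self):
        """Background reshuffle at quiet times: demote blocks that went cold."""
        for block_id in list(self.ssd):
            if self.access_count.get(block_id, 0) < self.HOT_THRESHOLD:
                self.hdd[block_id] = self.ssd.pop(block_id)
        self.access_count.clear()  # start a fresh observation window
```

The point of having both paths is that promotion reacts immediately to workload spikes, while demotion is deferred to off-peak hours so the reshuffle itself doesn’t compete with production I/O.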

Where things start to get interesting though is when it comes to the file system itself. It seems that the software can present block, file and object storage, but the underlying file system is actually based on key-value pairs (looks like Jeff Layton wasn’t too far off with this article from 2014). They didn’t go into a great deal of detail on the subject, but their architecture overview says:

“The Hedvig Storage Service operates as an optimized key value store and is responsible for writing data directly to the storage media. It captures all random writes into the system, sequentially ordering them into a log structured format that flushes sequential writes to disk.”
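The quote above describes a classic log-structured write path: random writes are captured and appended sequentially to a log, with an in-memory index mapping each key to the offset of its latest value. A minimal sketch of that pattern (my own illustration, not Hedvig’s code):

```python
class LogStructuredKV:
    """Toy log-structured key-value store: every write, however random,
    becomes a sequential append; reads go via an in-memory index."""

    def __init__(self):
        self.log = bytearray()  # stand-in for the sequential on-disk log
        self.index = {}         # key -> (offset, length) of the latest value

    def put(self, key: str, value: bytes):
        offset = len(self.log)
        self.log.extend(value)                   # sequential append, never in-place
        self.index[key] = (offset, len(value))   # newest write wins

    def get(self, key: str) -> bytes:
        offset, length = self.index[key]
        return bytes(self.log[offset:offset + length])
```

Note that an overwrite simply appends a newer record and repoints the index; reclaiming the superseded space is the job of a background compaction pass, analogous to the housekeeping process described earlier.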

Supported Access Protocols
Block – iSCSI and Cinder
File – NFS (SMB coming in future release)
Object – S3 or SWIFT APIs

Working for a service provider, my first thought is generally a version of “Can I multi-tenant it securely, whilst ensuring consistent performance for all tenants?”. Neither multi-tenancy of the file access protocols (e.g. attaching the array to multiple domains for different security domains per volume) nor storage performance QoS is possible just yet; however, I understand that Hedvig have both on their roadmap.

So, a few thoughts to close… They definitely seem to be a really interesting storage company, and I’m fascinated to find out more as to how their key-value filesystem works in detail. I’d suggest they’re not quite there yet from a service provider perspective, but for private clouds in the enterprise market, mixed hypervisor environments, and big data analytics, they definitely have something interesting to bring to the table. I’ll certainly be keeping my eye on them in the future.

For those wanting to find out a bit more, they have an architectural white paper and datasheet on their website.

Startup Spotlight, Storage, Tech Field Day

Assigning vCenter Permissions and Roles for DRS Affinity Rules

Today I was looking at a permissions element for a solution. The requirement was to provide a customer with sufficient permissions to be able to configure host and virtual machine affinity / anti-affinity groups in the vCenter console themselves, without providing any more permissions than absolutely necessary.

After spending some time trawling through vCenter roles and permissions, I couldn’t immediately find the appropriate setting; certainly nothing specifically relating to DRS permissions. A bit of Googling and Twittering also yielded nothing concrete. I finally found that the key permission required to allow users to create and modify affinity groups is the “Host \ Inventory \ Modify Cluster” privilege. Unfortunately the use of this permission is a bit like using a sledgehammer to crack a nut!


By providing the Modify Cluster permission, you also provide sufficient rights to enable, configure and disable HA, modify EVC settings, and change pretty much anything you like within DRS. All of these settings are relatively safe to modify without risking uptime (though they do present some risk in the event of unexpected downtime); what is far more concerning is that these permissions also allow you to enable, configure and disable DPM! It doesn’t take a great deal of imagination to come up with a scenario where, for example, a junior administrator accidentally enables DPM on your cluster, a large percentage of your estate unexpectedly shuts down overnight without the appropriate config to boot back up, and all hell breaks loose at 9am!

The next question then becomes, how do you ensure that this scenario is at least partly mitigated? Well it turns out that DPM can be controlled via vCenter Scheduled Tasks. Based on that, the potential workaround for this solution is to enable the Modify Cluster privilege for your users in question, then set a scheduled task to auto-disable DPM on a regular basis (such as hourly). This should at least minimise any risk, without necessarily eradicating it. Not ideal, but it would work. I’m not convinced as to whether this would be such a great idea for use on a critical production system. Certainly a bit of key training before letting anyone loose in vCenter, even with “limited” permissions, is always a good idea!

I have tested this in my homelab on vSphere 5.5 and it seems to work pretty well. I don’t have vSphere 6 set up in my homelab at the moment, so can’t confirm if the same configuration options are available, however it seems likely. I’ll test this again once I have upgraded my lab.

It would be great to see VMware provide more granular permissions in this area, as even basic affinity rules such as VM-VM anti-affinity are absolutely critical in many application solutions to ensure resilience and availability of services such as Active Directory, Exchange, web services, etc. To allow VM administrators to achieve this, it should not be necessary to start handing out sledgehammers to all and sundry! 🙂

If anyone has any other suggested solutions or workarounds to this, I would be very interested to hear them! Fire me a message via Twitter, and I will happily update this post with any other suggested alternatives. Unfortunately, due to inundation with spam, I removed the ability to post comments from my site back in 2014. Sigh.


VMware

SpiceWorld London Day Two

So that’s it, finito, over, done! Day two of SpiceWorld London is officially closed, and we are all left contemplating what we’ve learned, the new acquaintances made and how we are going to use the information we have learned over the past couple of days to influence our jobs and careers moving forward.

With the removal of the marketing track and most of the main hall sessions, today was a quieter, more tech-focussed event than Tuesday. The more subdued atmosphere may also have something to do with last night’s party of course… That said, I saw significantly more discussion and interaction at the sessions I attended, which always makes for a more engaging event. I was able to catch 4 sessions on a variety of subjects including cryptography, Windows 10, certification, and a session on the Spiceworks community and what they’re developing.

The first (and most well attended) session of the day was on all of the new improvements and features in Windows 10. Two of those features in particular stood out to me, one of which generated some (rather heated) debate in the room!

The first feature which rather concerned me was around the potential privacy issues with Cortana. There have been a number of fairly high profile privacy issues with recent editions of OS X and Ubuntu, yet Microsoft seem to be quite happy to have joined in. Many smarter people than me have articulated the risks of the direction in which our industry is headed when it comes to privacy!

The feature which actually generated the most debate in the room was the requirement to have a Microsoft Account in order to use the features of Cortana, and the subsequent impact it may have when users either want (or don’t want) to use their personal accounts to enable this feature on their corporate devices. Undoubtedly this would be Microsoft’s preference, as it enables them to build up a more accurate profile of you from the data collected in both halves of your life (tin foil hats at the ready, people!).


The alternative of course is for users to maintain two separate identities with MS, for example based on their corporate email. This then has the potential to lead to confusion for users, and additional work for the IT department, who most likely have to set up and support these accounts in addition to everything else they have to manage. There were some fairly strong opinions in the room to say the least, and the atmosphere got pretty tense at one point!

On the plus side, it was nice to be reminded that Microsoft do seem to be taking security pretty seriously these days, with a quick reminder of all of the security features now built into Windows.

The final session I attended mirrored the first in many ways, being all about IT certification, this time led by CBT Nuggets instructor Chris Ward. Chris’ style of presentation was very different, and the structured part of the session was relatively short, lending itself to a much more interactive event. One discussion I found particularly interesting was started by one of the attendees who runs a team of contractors at a large organisation. His challenge was with an individual who was still working from some very old certs and skills, who kept saying he had no time to train himself and that the company should send him on a paid VMware course. To me, there are numerous issues with this situation, and yes I may be over simplifying a bit, but:

  • One of the reasons for using contractors is that they help you to fill skills gaps. If you have a team full of contractors, and you don’t have the right skills available, you’re doing contracting/outsourcing wrong!
  • Yes, contractors earn more and pay less tax, but they need to fund their own holidays, and more importantly, their own development. It’s not all roses! If contractors are not willing to invest in their own skills, why would an organisation want to hire (or in this case renew) them?
  • Contractor or not, people can’t expect their employer to drive their training, or indeed fund all of it. Individuals need to take some level of responsibility for this themselves, particularly in this new Self Study Era we seem to be moving into…

My final key takeaway from Chris’ presentation was something which I intend to make my life’s goal; maintaining the grooming standard!

Reflecting back on the past couple of days, I would say that if you are in a first or second line IT engineering role, perhaps working as the sole IT guy in an SMB, or even as an IT manager, then the SpiceWorld conference is definitely worth checking out! There are a wide variety of sessions on different areas of IT and you can dip in and out of subjects depending on your interests. You can also take this one step further by attending your local SpiceCorps meet up.

If you want to go a bit deeper, then you might want to consider, either alternatively or additionally, attending some of the more vendor-focussed user groups such as VMUG or CitrixUG; and if talking to tech marketers is your thing, there are plenty of them at the massive vendor-agnostic events such as Cloud Expo and Apps World.

Disclaimer: Please note that Spiceworks kindly provided my entry to the event, but there was no expectation or request for me to write about their products or services.

Spiceworld

SpiceWorld London – Day One

As I clickety clack my way home on the train from my first day experiencing SpiceWorld, I thought it would be worth jotting down a few thoughts from the day. For those people who haven’t heard of the conference before, I would describe it as a vendor sponsored conference largely about Spiceworks, but with a healthy sprinkling of community content for good measure.

The day (unsurprisingly) started with the keynote session, which kicked off with something which is apparently a SpiceWorld tradition: an amusing video, this time about the Spiceworks staff who weren’t able to come to London for the event, and so held their own mock conference, featuring a smoking Microsoft Clippy as keynote speaker and the currently-in-secret-development iGunbrellunger (don’t ask!).

Clippy – the root of all that is evil in the world!

The keynote was split into three main sections, most of which explained, for the benefit of non-Spiceheads, where Spiceworks originated. Some of the key facts:

  • Founded in 2005 with a vision to create iTunes for system management.
  • First version released in 2006, which was (quite transparently) a free service with Google AdWords built into the client from day one
  • Reached 1m users within about 4 years
  • By the end of 2014 they were serving 3.4m users and 100m page views per month on their platform

For me, some of the more interesting things covered were the thought processes and principles by which the company was founded. These include concentrating on developing the 20% of functionality which users require 80% of the time (hence not spending resources developing stuff users will hardly use), and building a strong community to which services could be provided. This week Spiceworks have released their latest feature, an SDK for developers to fill the feature gaps with whatever they can dream up, with the results made available via an App Store interface. Extensibility FTW!

Their commercial model was further enhanced through the years by allowing users to rate ads so they didn’t receive irrelevant content. In 2010 Spiceworks used the performance, configuration and even warranty data they held on their customers’ solutions to warn customers when they might need to upgrade kit, and to offer them the appropriate SKUs to order from their partner suppliers, all from within the client… Very clever indeed! Similarly, when client printers are running low on ink, they notify administrators and offer the ability to procure replacements. A very simple but highly effective solution, and as long as those partners are offering competitive rates, then a win-win for all it seems!

I had some very interesting conversations in the vendor breakout area including a couple of particular interest to me. The first of these was with a company I had only recently heard about, Cyberoam, who provide UTM devices for SMBs. They aren’t massively well known in the UK, but have significantly larger market shares in other parts of the world, such as South Africa, where I’m told they rank 3rd in terms of unit sales. Their offerings seem pretty interesting and relatively keenly priced, particularly as the software on all models is identical, with the only differences between models tending to be around their throughput/connection capabilities.

Cyberoam are now also part of Sophos, so have pretty decent backing and are definitely worth checking out, if the interface demo I saw was anything to go by. Comparing their product lineup, if you are looking for something with high availability and the ability to rack mount, then your real entry point solution is something like the Cyberoam 25iNG, capable of 125Mbps of full UTM throughput, or >1Gbps of standard firewall traffic. Certainly comparable with many of the big name solutions out there.

The second company which I took note of was Scale Computing. Although they are a relatively mature player in the hyper-converged space, having been around since the noughties, they only recently presented at Tech Field Day, where they were pretty well received. Also targeting the SME space, they too are keenly priced, starting at about £20k for 3 nodes and a bunch of SATA drives. As you move up the model range you get more compute and faster SAS disks. Their licensing model is all inclusive, including the underlying KVM hypervisor (though you still need to buy Windows licenses if that’s your chosen OS, so some of those KVM savings are lost already). For me, the only element I feel potentially lets the product down is the lack of SSDs, but if the primary audience is only looking to run a handful of VMs such as DCs, file servers, Exchange etc., then it could be a very good value proposition.


I attended a number of sessions throughout the rest of the day including Andrew Bettany on the IT certification hamster wheel (something I think we all know too well!), the Ctrl Alt Tech IT Pro Web Show, a very brief session from Dell on Big Data, and Unitrends’ session on their new Free Edition. These sessions were all fine, though I felt they potentially lacked the depth I have seen at other events. That said, the last two sessions of the afternoon were really what made the day for me.

The penultimate session of the day was Andy Malone talking about TOR and the Dark Web. This session was genuinely quite disturbing, but gave a great insight into the kinds of content available via TOR, and how to identify and lock down users from potentially using TOR networks to abuse your IT services.


In the demo, Andy actually loaded up the TOR client live on stage and went fishing in the depths for some content that was not too NSFW, but it wasn’t that easy to find:

Sites don’t always last long on the Dark Web

Andy also described some of the potential fingerprints left behind after a user has been using TOR, allowing you to at least know it’s going on, if not what has been accessed.

The final session was a real breath of fresh air, and definitely made a nice change from the usual tech conference keynotes. It was presented by special guest Simon Singh, who talked about the subjects of several of his books, and finished with a live demonstration of a real Enigma machine which he had brought along. This was really quite fascinating, especially considering the level of complexity of these cryptographic systems even 70+ years ago!

The day ended with some great community discussion at the Unitrends Happy Hour, after which it was time for me to head home, missing the chance to head out to the Namco Funscape for the Totally 80s party!

So closing thoughts for day one? Well as I mentioned above I would like to have perhaps seen a little more technical depth to one or two of the presentations, but overall it was definitely a worthwhile experience and has opened my eyes to some of the challenges and the perceptions which some of my customers have. The price for the event is typically around £150 for the two days, with numerous early bird discounts, so is significantly less expensive than other paid vendor events. If you don’t have the budget to go to a paid event, or would like to build on the knowledge you have gained from Spiceworld, I suggest you check out your local VMUG event, or even better the UK VMUG event held in Birmingham every year. These events are well attended by vendors and community members alike, so well worth checking out!

Anyhow, I’m definitely looking forward to day two and it’s getting late, so for now, nuff said!

Disclaimer: Please note that Spiceworks kindly provided my entry to the event, but there was no expectation or request for me to write about their products or services.

Spiceworld