Preview – Open Homelab Project at #LonVMUG – 14th April 2016

So this is just a very brief post to firstly say don’t forget it’s the London VMUG on 14th April 2016, at Tech UK (10 Saint Bride Street, EC4A). There are a load of really interesting sessions coming up, both vendor and community.

For example:

  • We have a keynote from Luca Dell’Oca, who provided one of the best non-vendorised vendor sessions I have ever seen at a VMUG (and his session title this time sounds like he may be looking to up the ante!).
  • We have loads of sessions on VSAN, including the 6.2 updates (also see the Storage Field Day 9 sessions here for a deep dive on that).
  • We even have a session from the London VMUG leadership team’s “Darth Vader” himself, Simon Gallagher, talking about App Volumes!

It should be an awesome day!

[Image: London VMUG agenda, April 2016]

The keen eyed among you may also notice that I have a session in the list as well…

If you want to come along and be part of a unique session, never seen before, never done before, and probably never to be done again (especially if it all goes pear-shaped!), then feel free to join the Open Homelab Project session I will be attempting to herd / steer / keep on the rails!

I suggested a few months ago to Simon G that we do some roundtable sessions at the London VMUG and volunteered to run some as an experiment. These are my favourite sessions at the UK VMUG as you get a dozen or so people round a table and chew the fat on a specific subject area.

It turns out that we don’t actually have anywhere in our new venue to run this session for a small group, so it’s been converted into a “square table”, i.e. “no table”, session in one of the standard rooms instead!

Running a roundtable with a room full of people is certainly going to be a challenge, and a bit of an experiment; worst case scenario, it all falls apart and we never do it again! Yay! But hopefully it will actually be a really worthwhile session, and I plan to share the results here afterwards as a kind of crowdsourced homelab advice tree or something! To be honest, with less than two weeks to go I haven’t really figured out the details yet, but rest assured, by a week on Thursday I will at least have the title decided!


Whatever happens it should be interesting! So if you want to share your homelab requirements with the group and get some advice and tips on how to design and build it, or if you want to tell us how awesome your lab is already and why you chose to build it like that, please do come along to the session and join in! 🙂

Register here:
London VMUG Meeting Registration – Thursday, 14th April 2016

VMUG, VMware

Exclusive: Intel announce new 3D-MAN’D memory technology!

I am incredibly proud to bring you the news that Intel have chosen Tekhead.it exclusively to announce their new memory technology today, known as 3D-MAN’D!

Information on this new technology is still reasonably scarce, but Intel have informed us that the new semi-volatile memory can provide automated selective filtering of data both inline and at rest, with snapshots, deduplication and compression enabled via an add-on “alcohol” license.

Other data services such as replication are in the pipeline, and Intel will tell you about that when you’re old enough!

The technology is capable of storing billions of transactions, images, and movies for up to 100 years, though latency is wildly variable, and tends to degrade over the lifetime of the media.

Lastly, the solution is based on a serial interface, so application developers will need to rewrite code to avoid multi-threading / multi-tasking as this can cause corruption of data.


Exciting times ahead!

UPDATE: For more information on this ground-breaking technology, please see Intel’s own announcement here.

UPDATE2: Great news! We have another exclusive update / correction from a senior source at Intel:

Life, Storage

Amazon AWS Tips and Gotchas – Part 4 – Direct Connect & Public / Private VIFs

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, specific to Direct Connect.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

Tips and Gotchas – Part 4
10. VPC Private / Public Access Considerations

If you have gone out and bought a shiny new Direct Connect to your AWS platform, you might reasonably assume that all of the users and applications on your MPLS will automatically start using this for accessing S3 content and other AWS endpoints. Unfortunately, this is not so simple!

At a high level, here is a diagram showing the two primary Direct Connect configurations, Public and Private:

[Image: AWS Direct Connect Public and Private VIF]

More info on Direct Connect here:
AWS Direct Connect by Camil Samaha

A key point to note about Direct Connect is that it supports multiple VIFs per 1Gbps or 10Gbps link:

If you are not a giant enterprise and don’t need this kind of bandwidth, you can buy single VIFs from your preferred network provider, but you will pay on a per-VIF basis, so multiple VPCs plus Direct Connect access to public endpoints will bump up your costs a bit.
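As a quick illustration of how a link gets carved up, something like the following boto3 sketch would list each Direct Connect connection and the VIFs on it. This is purely illustrative; the region is an assumption, and you would need credentials with Direct Connect read permissions:

```python
# Minimal sketch: list each Direct Connect link and the VIFs carved out of it.
# Purely illustrative; region and credentials are assumptions.
import boto3

dx = boto3.client('directconnect', region_name='eu-west-1')

for conn in dx.describe_connections()['connections']:
    print(f"{conn['connectionId']} ({conn['bandwidth']}): {conn['connectionName']}")
    vifs = dx.describe_virtual_interfaces(connectionId=conn['connectionId'])
    for vif in vifs['virtualInterfaces']:
        # 'private' VIFs terminate on a VPC; 'public' VIFs reach AWS endpoints
        print(f"  VLAN {vif['vlan']}: {vif['virtualInterfaceType']} VIF "
              f"'{vif['virtualInterfaceName']}'")
```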

The question therefore becomes: what is the most cost-effective and simple solution for accessing service endpoints (such as S3 in the examples below) when you also want to access private resources in your own VPCs?

This does not always have a straightforward answer if you are on a tight budget.

Accessing S3 via your Direct Connect

As I understand it, the S3 endpoint acts very much like VPC peering, only it is from your VPC to S3, and is therefore subject to similar restrictions. Specifically, the S3 endpoint documentation has a very key statement:

“Endpoint connections cannot be extended out of a VPC. Resources on the other side of a VPN connection, a VPC peering connection, an AWS Direct Connect connection, or a ClassicLink connection in your VPC cannot use the endpoint to communicate with resources in the endpoint service”.

Basically, this means that for every VPC you want to communicate with directly from your MPLS, you need another VIF, and hence another connection from your service provider. If you want to access S3 and other AWS public endpoints directly, you will also need an additional connection dedicated to that. This assumes your requirements are not enough to justify buying a 1Gbps / 10Gbps pipe for your sole use, and that you are using a partner to deliver it. If you can buy 1Gbps or above, then you can subdivide your pipe into multiple VIFs for little / no extra cost.
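For reference, attaching an S3 endpoint to a VPC is nearly a one-liner via the API; roughly something like the boto3 sketch below, where the VPC ID, route table ID and region are all hypothetical placeholders. Just remember the restriction quoted above: nothing on the far side of a Direct Connect, VPN or VPC peer can transit it.

```python
# Minimal sketch: attach a gateway endpoint for S3 to a VPC route table.
# The VPC ID, route table ID and region are hypothetical placeholders.
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

response = ec2.create_vpc_endpoint(
    VpcId='vpc-11112222',                      # hypothetical VPC
    ServiceName='com.amazonaws.eu-west-1.s3',  # the S3 service in this region
    RouteTableIds=['rtb-33334444'],            # routes to S3 get injected here
)
print(response['VpcEndpoint']['VpcEndpointId'])

# Instances inside vpc-11112222 can now reach S3 privately, but traffic
# arriving over a VPN, VPC peer or Direct Connect private VIF cannot use
# this endpoint - exactly the restriction quoted above.
```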

Here are four potential solutions for different use cases, though they are definitely NOT all recommended or supported.

  • Assuming you are using a Private VIF, then by default, content in S3 is actually accessed over the internet (e.g. using HTTPS if your bucket is configured as such).
    This may come as a surprise, as you would expect to buy a connection and be able to access any AWS service.
    [Image: AWS Direct Connect Private VIF]
  • If you have a Direct Connect from your MPLS into Amazon as a Public connection / VIF, you can then route to the content over your Direct Connect; however, this means you are bypassing your VPC and going straight into Amazon.
    This is a bit like having a private internet connection, so accessing VPCs securely would still require you to run an IPsec VPN over the top of your “public” connection. This will work fine, will let you maximise the utilisation of the bandwidth on your Direct Connect, and will reduce your Direct Connect costs by sharing one connection between all VPCs. This is OK, but frankly not brilliant, as you are ultimately still depending on VPNs to secure your data. If you want very secure, private access to your VPCs, you should really just spend the money! 🙂
    [Image: AWS Direct Connect Public VIF]
  • If you have a Direct Connect from your MPLS into Amazon as a Private connection / VIF, you could proxy the connectivity to S3 via an EC2 instance. The content is requested by your instance using the standard S3 API and forwarded back to your clients. This means your EC2 instance is now a bottleneck to your S3 storage, and if you want to avoid it becoming a SPoF, you need at least a couple of them.
    It is worth specifically noting that, although technically possible, this method would be strictly against all support and recommendations from AWS! S3 endpoints and VPC peers are for accessing content from your VPCs; they are NOT meant to be transitive.
    [Image: AWS Direct Connect Private VIF]
  • Lastly, Amazon’s primary recommended method is to run multiple VIFs, mixing both public and private. The biggest downside here is that each VIF will likely have a specific amount of bandwidth associated with it, and you will have to procure multiple connections from your provider (unless you are big enough to justify buying a minimum of 1Gbps!). See the rough API sketch after this list.
    [Image: AWS Direct Connect Public and Private VIFs]
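To give a flavour of that recommended approach, below is a minimal boto3 sketch carving one private and one public VIF out of a single Direct Connect link. The connection ID, VLANs, ASN, gateway ID and addresses are all hypothetical placeholders; in a real deployment the peering addresses and advertised prefixes would be agreed with AWS and your provider.

```python
# Minimal sketch: carve one private and one public VIF out of a single
# Direct Connect link. All IDs, VLANs, ASNs and addresses are hypothetical.
import boto3

dx = boto3.client('directconnect', region_name='eu-west-1')

# Private VIF: terminates on a Virtual Private Gateway attached to one VPC
private_vif = dx.create_private_virtual_interface(
    connectionId='dxcon-aaaabbbb',
    newPrivateVirtualInterface={
        'virtualInterfaceName': 'mpls-to-prod-vpc',
        'vlan': 101,
        'asn': 65000,                        # your on-premises BGP ASN
        'virtualGatewayId': 'vgw-12345678',  # VGW attached to the target VPC
    },
)

# Public VIF: BGP peering with AWS public address space (S3 et al.)
public_vif = dx.create_public_virtual_interface(
    connectionId='dxcon-aaaabbbb',
    newPublicVirtualInterface={
        'virtualInterfaceName': 'mpls-to-aws-public',
        'vlan': 102,
        'asn': 65000,
        'amazonAddress': '198.51.100.1/30',    # peering /30 (placeholder)
        'customerAddress': '198.51.100.2/30',
        'routeFilterPrefixes': [{'cidr': '203.0.113.0/24'}],  # prefixes you advertise
    },
)

print(private_vif['virtualInterfaceId'], public_vif['virtualInterfaceId'])
```

Each VIF then behaves as a separate logical circuit on the same physical link, which is exactly the multiple-VIFs-per-link point made earlier.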

As this scales to many accounts, many VPCs and many VIFs, things also start to get a bit complex when it comes to routing (especially if you want many or all of the VPCs in question to be able to route to each other); I will cover that in the next post.

Until then…

[Image: AWS Direct Connect VIF networking]

Find more posts in this series here:
http://www.tekhead.org/tag/awsgotchas/

Amazon AWS Tips and Gotchas – Part 5 – Managing Multiple VPCs

AWS, Cloud

NetApp – Is this the dawn of a new day?

Many people in the storage industry believed that NetApp made a pretty big mistake by underestimating the power of flash and its impact on the storage market. What really impressed me is that at Storage Field Day 9, Dave Hitz stood up and openly agreed!

He then went on to explain how they had recognised this and made a strategic decision to purchase one of the hottest and most innovative flash storage companies in the world, SolidFire. This has clearly been done with the intention of using SolidFire as Polyfilla for the hole in their product portfolio, but I would suggest that it is as much about SolidFire becoming a catalyst for modernising and reforming the organisation.

As with almost any company which has been around for a long time and grown to a significant size (currently standing at around 12,500 employees), NetApp has become rather a behemoth, with all of the usual process-driven issues which beset companies of that scale. Much like an oil tanker, they don’t so much measure their turning circle in metres as they do in miles.

With the exception of a few key figures and some public battles with a certain 3-letter competitor, their marketing has also historically been relatively conservative and their customers the same. As a current and historical NetApp customer and ex-NetApp admin myself, by no means am I denigrating the amazing job they have done over the years, or indeed the quality of the products they have produced! However, of late I have generally considered them to be mostly in the camp of “nobody ever got fired for buying IBM”.

Nobody ever got fired for buying IBM

In stark contrast, they have just spent a significant chunk of change on a company that is the polar opposite. SolidFire not only have brilliant engineers and impressive technology, but have also furnished their tech marketing team with some of the most well-known and talented figures in the industry. These guys have been backed up by a strong but relatively small sales organisation, who were not afraid to qualify out of shaky opportunities quickly, allowing them to concentrate their limited resources on chasing business where their unique solution had the best chance of winning. Through this very clear strategy, they were able to grow revenues significantly year on year, ultimately leading to their very attractive $870m exit.

Having experienced a number of M&As myself, both as the acquiring company and the acquired, I can see some parallels to my own experiences. Needless to say, the teams from both sides of this new venture are in for a pretty bumpy ride over the coming months! NetApp must make the transformation into a cutting-edge infrastructure company with a strong social presence, and prove themselves to be more agile in responding to changing market requirements. This will not be easy for some individuals in the legacy organisation, who are perhaps more comfortable with the status quo. The guys coming in from SolidFire are going to feel rather like they’re nailing jelly to a tree at times, especially when they run into many of the old processes and old-guard attitudes at their new employer.


What gives me hope that the eventual outcome could be a very positive one is that NetApp senior management have already identified and accepted these challenges, and have put a number of policies in place to mitigate them. For example, as I understand it, the staff at SolidFire have been given a remit to ask some “hard questions” whenever they come across blockers to achieving success for the organisation; questions which are robust in nature, to say the least! That said, some are as simple as asking “Why?”. With executive sponsorship behind this endeavour ensuring that responses like “because that’s how we’ve always done it” will not be acceptable, I am confident it will enable the SolidFire guys and gals to work with their new colleagues to effect positive change within the organisation.

I think this is reflected in Jeramiah Dooley’s recent post here, which echoes so many elements of this post I almost considered not hitting publish! 😮

If the eventual outcome of this is to make NetApp stronger and more viable in the long term, then all the better it will be for those who stick around to enjoy it! This, of course, will benefit the industry as a whole by maintaining a strong and broad set of storage companies to keep competition fierce and prices low for customers. Win-win!


It is certainly going to be an interesting couple of years, and I for one am looking forward to seeing the results!

You can find the session videos from all the guys at NetApp here, I would say they are well worth the time to watch:
NetApp Presents at Storage Field Day 9

Further Reading
Some of the other SFD9 delegates had their own takes on the presentation we saw. Check them out here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 9 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.

Storage, Tech Field Day