Tag Archive for 5.5

Assigning vCenter Permissions and Roles for DRS Affinity Rules

Today I was looking at a permissions element for a solution. The requirement was to provide a customer with sufficient permissions to configure host and virtual machine affinity / anti-affinity groups themselves in the vCenter console, without granting any more permissions than absolutely necessary.

After spending some time trawling through vCenter roles and permissions, I couldn’t immediately find the appropriate setting; certainly nothing specifically relating to DRS permissions. A bit of Googling and Twittering also yielded nothing concrete. I finally found that the key privilege required to allow users to create and modify affinity groups is “Host \ Inventory \ Modify Cluster”. Unfortunately, using this privilege is a bit like using a sledgehammer to crack a nut!
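For reference, a role containing that privilege can also be created programmatically. Below is a minimal pyVmomi sketch, assuming placeholder vCenter details and a hypothetical role name; the privilege ID "Host.Inventory.EditCluster" is my assumption for the "Modify Cluster" entry, so verify it against authorizationManager.privilegeList before relying on it.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details - adjust for your environment
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
authz = si.RetrieveContent().authorizationManager

# Confirm the exact privilege ID behind the "Modify Cluster" entry in the client
for priv in authz.privilegeList:
    if priv.privId.startswith("Host.Inventory"):
        print(priv.privId, "-", priv.name)

# Create a minimal custom role containing only that privilege
# ("Host.Inventory.EditCluster" is assumed - verify it from the listing above)
role_id = authz.AddAuthorizationRole(name="DRS Rule Editors",
                                     privIds=["Host.Inventory.EditCluster"])
print("Created role with ID", role_id)

Disconnect(si)
```

The role then still needs to be assigned to the relevant users on the cluster object in the usual way.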

roles

By granting the Modify Cluster permission, you also provide sufficient rights to enable, configure and disable HA, modify EVC settings, and change pretty much anything you like within DRS. All of these settings are relatively safe to modify without risking uptime (though they do present some risk in the event of unexpected downtime); what is far more concerning is that these permissions also allow you to enable, configure and disable DPM! It doesn’t take a great deal of imagination to come up with a scenario where, for example, a junior administrator accidentally enables DPM on your cluster, a large percentage of your estate unexpectedly shuts down overnight without the appropriate configuration to boot back up, and all hell breaks loose at 9am!

The next question then becomes: how do you ensure that this scenario is at least partly mitigated? Well, it turns out that DPM can be controlled via vCenter Scheduled Tasks. Based on that, the potential workaround for this solution is to grant the Modify Cluster privilege to the users in question, then set a scheduled task to auto-disable DPM on a regular basis (such as hourly). This should at least minimise the risk, without necessarily eradicating it. Not ideal, but it would work. I’m not convinced it would be such a great idea on a critical production system. Certainly a bit of key training before letting anyone loose in vCenter, even with “limited” permissions, is always a good idea!
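If you would rather drive the auto-disable step from outside vCenter (from a cron job, say), the same change can be made through the API. Here is a minimal pyVmomi sketch, assuming a cluster called "Production" and placeholder vCenter credentials; it simply switches DPM off on that cluster.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the cluster by name ("Production" is a placeholder)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Production")
view.DestroyView()

# Build a partial cluster spec that only turns DPM off, then apply it;
# modify=True merges this change with the existing cluster configuration
spec = vim.cluster.ConfigSpecEx()
spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=False)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
print("DPM disable task submitted:", task.info.key)

Disconnect(si)
```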

I have tested this in my homelab on vSphere 5.5 and it seems to work pretty well. I don’t have vSphere 6 set up in my homelab at the moment, so I can’t confirm whether the same configuration options are available, although it seems likely. I’ll test this again once I have upgraded my lab.

It would be great to see VMware provide more granular permissions in this area, as even basic affinity rules such as VM-VM anti-affinity are absolutely critical in many application solutions to ensure the resilience and availability of services such as Active Directory, Exchange, web services, etc. To allow VM administrators to achieve this, it should not be necessary to start handing out sledgehammers to all and sundry! 🙂
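For completeness, this is what the end goal looks like through the API: a sketch (again pyVmomi, again with placeholder details) that adds a VM-VM anti-affinity rule keeping two domain controllers on separate hosts. The cluster name "Production", the VM names "DC01"/"DC02" and the rule name are all assumptions for illustration.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

cluster = find_by_name(vim.ClusterComputeResource, "Production")  # placeholder
dc01 = find_by_name(vim.VirtualMachine, "DC01")                   # placeholder
dc02 = find_by_name(vim.VirtualMachine, "DC02")                   # placeholder

# Keep the two domain controllers apart with a VM-VM anti-affinity rule
rule = vim.cluster.AntiAffinityRuleSpec(name="Separate-DCs", enabled=True, vm=[dc01, dc02])
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```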

If anyone has any other suggested solutions or workarounds to this, I would be very interested to hear them. Fire me a message via Twitter and I will happily update this post with any suggested alternatives. Unfortunately, due to inundation with spam, I removed the ability to post comments on my site back in 2014. Sigh.


Cannot See Any iSCSI Devices on Synology from a vSphere Host

Just a quick fix I discovered this weekend. It’s probably quite specific but hopefully if you come across this in future it will save you some time.

I had just finished rebuilding the second node in my lab from 5.1 with a fresh install. I added the software iSCSI initiator and connected it to my iSCSI target (Synology DS412+) using Dynamic Discovery. I then rescanned for a list of devices, and although I was picking up the IQN for the iSCSI server, I couldn’t see any devices!
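As an aside, if you ever want to script that initial setup rather than click through the client, the same steps can be driven through the host's storage system. The sketch below uses pyVmomi and assumes placeholder vCenter credentials, a host called "esx02.lab.local" and the Synology at 192.168.1.50; the software iSCSI adapter is looked up rather than hard-coded, since its vmhba number varies between hosts.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the ESXi host by name ("esx02.lab.local" is a placeholder)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx02.lab.local")
view.DestroyView()

storage = host.configManager.storageSystem

# Enable the software iSCSI initiator if it is not already enabled
storage.UpdateSoftwareInternetScsiEnabled(True)

# Locate the software iSCSI adapter (e.g. vmhba33) rather than hard-coding it
hba = next(a for a in storage.storageDeviceInfo.hostBusAdapter
           if isinstance(a, vim.host.InternetScsiHba) and a.isSoftwareBased)

# Add the Synology as a dynamic discovery (send targets) address - IP is a placeholder
target = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])

# Rescan so any devices behind the new target show up
storage.RescanAllHba()
storage.RescanVmfs()

Disconnect(si)
```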

I tried lots of things, including removing and re-adding the initiator and messing with the iSCSI port bindings, but nothing! Very frustrating.

After a bit of googlage, I came across this KB article from VMware:

Cannot see some or all storage devices in VMware vCenter Server or VirtualCenter (1016222)

Although this was specific to VI3/vSphere 4, it did trigger a thought! Just before I built the new node, I rejigged all of my storage LUNs, deleting three old ones in the process (which just so happened to be the first three LUNs on my NAS). What I believe this caused is that the remaining LUNs were presented to node 2 with different LUN IDs from those node 1 was already seeing, so they refused to show up!

So now, the fix. Incredibly simple as it turns out:

  1. Created three new temporary 10GB LUNs on the NAS, which would then assume LUN IDs 1/2/3 as the originals had (before I deleted them).
  2. Rescanned the new node of the cluster for storage (see the sketch after this list).
  3. Confirmed all of the LUNs were now visible.
  4. Deleted the three temporary LUNs from the Synology (I don’t plan to add any more nodes for now, so I have no need of them, but as they’re thin provisioned it wouldn’t hurt to leave them there anyway).
  5. Rescanned the ESXi host again to ensure it could still see the LUNs.
  6. Job done!
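For steps 2, 3 and 5, a short pyVmomi sketch along these lines can trigger the rescan and print what the host now sees, which also makes it easy to compare LUN IDs between the two nodes. The vCenter details and host name are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the freshly rebuilt node by name ("esx02.lab.local" is a placeholder)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx02.lab.local")
view.DestroyView()

storage = host.configManager.storageSystem

# Steps 2 and 5: rescan the host's adapters and VMFS volumes
storage.RescanAllHba()
storage.RescanVmfs()

# Step 3: walk the SCSI topology and print each target's LUN numbers and devices,
# which makes it easy to spot whether the LUN IDs line up across the two nodes
for adapter in storage.storageDeviceInfo.scsiTopology.adapter:
    for tgt in adapter.target:
        for lun in tgt.lun:
            device = next((d for d in storage.storageDeviceInfo.scsiLun
                           if d.key == lun.scsiLun), None)
            if device is not None:
                print("LUN %d -> %s" % (lun.lun, device.canonicalName))

Disconnect(si)
```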

Synology iSCSI Devices

Not much to it, but worth a quick post I thought as this simple issue wasted a chunk of my time!
