VVols, Oh VVols, wherefore art thou VVols?

Some technologies take over the (IT infrastructure) world, some sink without a trace, and others find a niche where they fit a requirement. I suspect VMware’s VVols (Virtual Volumes) fall into the last category. VVols was released as a feature of vSphere 6.0 in March 2015 and updated to version 2.0 with vSphere 6.5 in late 2016. The primary function of VVols is to give a storage array visibility into, and control of, the storage presented to individual VMs. This contrasts with the usual VMFS model, where the array only sees a datastore and cannot tell which VMs are using that datastore. The result is less storage management in the ESXi hypervisor and more in your storage array. The assumption is that the storage array, or storage team, is good at managing storage capabilities and can provide a better service to the VMs than the vSphere hypervisor’s native capabilities: for example, storage array-based snapshots rather than vSphere snapshots, or storage replication at the VM level rather than the datastore level. An interesting element is the ability to control performance per disk, per VM, from the array, rather than layering per-datastore performance management from the array with per-disk performance management from the hypervisor. VVols applies to block storage (iSCSI, Fibre Channel, and NVMe-oF) as well as to NFS-based storage. An NFS server already knows about the individual VM disks because they are just files on the NFS share, but NFS-based arrays can still benefit from VVols by offloading advanced storage features to the array. Thanks, Ben, for the correction.
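To make that per-disk control concrete, here is a rough sketch using the community pyVmomi SDK. It is a sketch, not a complete tool: it assumes you already have a connected ServiceInstance and have looked up the storage policy’s profile ID through the separate SPBM endpoint, and the VM name, disk label, and GUID are all placeholders. It attaches a storage policy to one virtual disk of a VM, which is the granularity that VVols exposes to the array.

```python
# Rough sketch: attach an SPBM storage policy to a single virtual disk,
# the per-disk granularity that VVols gives the storage array.
# Assumes `vm` is a connected pyVmomi vim.VirtualMachine and `profile_id`
# is a storage policy GUID, looked up separately via the SPBM API.
from pyVmomi import vim


def set_disk_policy(vm, disk_label, profile_id):
    for dev in vm.config.hardware.device:
        if (isinstance(dev, vim.vm.device.VirtualDisk)
                and dev.deviceInfo.label == disk_label):
            change = vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev,
                # The profile on the device spec is what ties this one
                # disk to the array-side service level.
                profile=[vim.vm.DefinedProfileSpec(profileId=profile_id)],
            )
            return vm.ReconfigVM_Task(
                spec=vim.vm.ConfigSpec(deviceChange=[change]))
    raise ValueError(f"Disk {disk_label!r} not found on {vm.name}")


# Example: give just "Hard disk 2" a low-latency policy
# (the disk label and GUID below are placeholders)
# task = set_disk_policy(vm, "Hard disk 2",
#                        "aa6d5a82-1c88-45da-85d3-3d74b91a5bad")
```

With VMFS, the equivalent change usually means moving the disk to a different datastore; with VVols, the array simply applies a different service level to that one virtual volume.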

The Pure Storage presentation at Tech Field Day Extra at VMworld US 2022 inspired my thinking about VVols. It had been quite a while since I heard much talk about VVols, so I wondered whether it was still a thing. A quick Google search shows that storage vendors are still talking about VVols, and vSphere 7 brought some updates to VVols, so there must be customers using and benefiting from VVols; it hasn’t sunk without a trace. But why didn’t VVols take over the world? I suspect it is a combination of easily used features in vSphere and plentiful performance from all-flash arrays. After all, the infrastructure only needs to be good enough not to limit the applications that it hosts. If your applications don’t demand more performance and capability than vSphere, VMFS, and flash together can deliver, then the simplest solution will provide the best benefit. The place where VVols has value is where VMFS limits the application. It might be that vSphere snapshots cause VM stuns, which affect application performance; these stuns are part of vSphere’s snapshot behavior and can be a big issue when snapshots happen frequently during working hours. It might be a highly critical application that requires very low and stable disk latency, such as real-time commodity trading. These are use cases where the application requirements demand specific storage capabilities; usually, these are business-critical applications at the core of the business.

Do you need VVols in your vSphere environment? Ask yourself a simple question: does VMFS simplify or complicate your storage design? If VMFS simplifies, you probably spend little time thinking about storage for your VMs and have at least half a dozen VMs for every datastore. If VMFS complicates your storage design, you probably have datastores dedicated to specific VMs or applications and spend significant time tuning the datastore and LUN configurations. If VMFS simplifies your storage, keep using VMFS and don’t spend too much time on storage. If VMFS complicates, then look closely at VVols; it will probably be easier to build complicated storage configurations with VVols than with VMFS.
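If you want a quick way to apply that half-a-dozen-VMs rule of thumb, here is a minimal pyVmomi sketch that lists each datastore with its type and the number of VMs using it. The vCenter address and credentials are placeholders, and the certificate check is disabled for lab use only.

```python
# Minimal sketch: count VMs per datastore and show the datastore type.
# pip install pyvmomi
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder credentials
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # ds.summary.type is VMFS, NFS, vsan, or VVOL
        print(f"{ds.name:30} {ds.summary.type:6} {len(ds.vm):4} VMs")
    view.Destroy()
finally:
    Disconnect(si)
```

Datastores carrying only one or two VMs each are a hint that you are hand-crafting storage per application, which is exactly the situation where VVols earns its keep.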

© 2023, Alastair. All rights reserved.
