This is part two of my series on using pyVmomi on Linux to work with vCenter to create a batch of VMs. You can find part one, where we connected to vCenter and retrieved VM information, here. Creating a VM with any automation tool requires specifying all of the attributes of the VM; there is no wizard like the one in the vSphere client, so we need to construct a set of linked objects. The most complex object is the hardware specification, and within it the objects for disk controllers and drives. Usually, each object is created using a constructor method and then added to its parent with an operation property.
The approximate hierarchy of the objects that make up a VM looks like this; there can be configuration at each level as well as in the sub-objects:
- Configuration Specification
  - Network device
  - Disk Controller
    - Hard disk
This script is largely taken from this sample in the pyVmomi community samples repository and from more information here on StackOverflow. I use a function to create each VM, calling the function inside a loop to create multiple VMs. The function accepts the VM name and a few other objects. The unusual variable is service_instance, which refers to the vCenter connection that we established in the first blog post.
```python
def create_vm(vm_name, service_instance, vm_folder, resource_pool, datastore,
              net_name, sizeGB, RAM, vCPUs):
    devices = []
    datastore_path = '[' + datastore.name + '] ' + vm_name
    vmx_file = vim.vm.FileInfo(logDirectory=None,
                               snapshotDirectory=None,
                               suspendDirectory=None,
                               vmPathName=datastore_path)
```
Once a few basics such as the VM home directory are set up, we need to specify the network adapters for this VM. Most crucial here are the type of NIC (VMXNET3) and the portgroup that this NIC connects to. The portgroup is an object called net_name, and oddly we need to pass both the object and its name to the NIC device. Once the NIC is specified, the last line, devices.append(nicspec), adds the NIC to the VM's device list.
```python
    nicspec = vim.vm.device.VirtualDeviceSpec()
    nicspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic_type = vim.vm.device.VirtualVmxnet3()
    nicspec.device = nic_type
    nicspec.device.deviceInfo = vim.Description()
    nicspec.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nicspec.device.backing.network = net_name
    nicspec.device.backing.deviceName = net_name.name
    nicspec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
    nicspec.device.connectable.startConnected = True
    nicspec.device.connectable.allowGuestControl = True
    devices.append(nicspec)
```
Next we need a SCSI controller for the disks; I am using PVSCSI since I need good storage performance. One trick is to make sure that device.slotInfo.pciSlotNumber and device.busNumber are unique within this VM.
```python
    scsi_ctr = vim.vm.device.VirtualDeviceSpec()
    scsi_ctr.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    scsi_ctr.device = vim.vm.device.ParaVirtualSCSIController()
    scsi_ctr.device.deviceInfo = vim.Description()
    scsi_ctr.device.slotInfo = vim.vm.device.VirtualDevice.PciBusSlotInfo()
    scsi_ctr.device.slotInfo.pciSlotNumber = 16
    scsi_ctr.device.controllerKey = 100
    scsi_ctr.device.unitNumber = 3
    scsi_ctr.device.busNumber = 0
    scsi_ctr.device.hotAddRemove = True
    scsi_ctr.device.sharedBus = 'noSharing'
    scsi_ctr.device.scsiCtlrUnitNumber = 7
    devices.append(scsi_ctr)
```
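If you create more than one controller, a small helper can generate those unique numbers. This is a sketch of my own; the function name, base slot, and slot spacing are illustrative and not part of the pyVmomi API.

```python
# Hypothetical helper: generate a unique busNumber/pciSlotNumber pair for
# each SCSI controller. The base slot (16) and stride (32) are illustrative.
def scsi_controller_slots(count, base_slot=16, slot_stride=32):
    return [{'busNumber': i, 'pciSlotNumber': base_slot + i * slot_stride}
            for i in range(count)]

print(scsi_controller_slots(2))
# → [{'busNumber': 0, 'pciSlotNumber': 16}, {'busNumber': 1, 'pciSlotNumber': 48}]
```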
The scsiCtlrUnitNumber does not need to be unique; the convention is to use 7 for all SCSI controllers on the VM. Next we add a hard disk to the SCSI controller that we just specified. Notice that the file name is built by substituting variables into the string; %s is a substitution placeholder in Python. You need to make sure that the file name is unique; creating the VM will fail if the file already exists.
```python
    unit_number = 0
    sizeGB = 16
    controller = scsi_ctr.device
    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.fileOperation = "create"
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk_spec.device = vim.vm.device.VirtualDisk()
    disk_spec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk_spec.device.backing.diskMode = 'persistent'
    disk_spec.device.backing.fileName = '[%s] %s/%s.vmdk' % (datastore.name, vm_name, vm_name)
    disk_spec.device.unitNumber = unit_number
    disk_spec.device.capacityInKB = sizeGB * 1024 * 1024
    disk_spec.device.controllerKey = controller.key
    devices.append(disk_spec)
```
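The path and size arithmetic in this block can be checked standalone: capacityInKB expects kibibytes, and on a SCSI bus unit number 7 is reserved for the controller itself, so a VM with more than seven disks on one controller needs to skip it. The helper names and the example values ('datastore1', 'web01') below are my own illustration, not pyVmomi API:

```python
# Standalone versions of the arithmetic used above; names are illustrative.
def vmdk_path(datastore_name, vm_name):
    # Same %s substitution pattern as disk_spec.device.backing.fileName.
    return '[%s] %s/%s.vmdk' % (datastore_name, vm_name, vm_name)

def capacity_in_kb(size_gb):
    return size_gb * 1024 * 1024  # GB to KiB

def disk_unit_number(disk_index):
    # Unit 7 is reserved for the SCSI controller, so data disks skip it.
    return disk_index if disk_index < 7 else disk_index + 1

print(vmdk_path('datastore1', 'web01'))  # → [datastore1] web01/web01.vmdk
print(capacity_in_kb(16))                # → 16777216
print(disk_unit_number(7))               # → 8
```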
Once all of the devices are specified, we can pass the hardware specification, along with a few other variables, to a VM config object, then pass the config object to a CreateVM task and wait for the task to complete.
```python
    config = vim.vm.ConfigSpec(name=vm_name,
                               memoryMB=RAM,
                               numCPUs=vCPUs,
                               files=vmx_file,
                               guestId='ubuntu64Guest',
                               version='vmx-09',
                               deviceChange=devices)
    task = vm_folder.CreateVM_Task(config=config, pool=resource_pool)
    tasks.wait_for_tasks(service_instance, [task])
```
Yay, we have a VM which we can now power on. The sample iterates through every VM in the folder, but the test on vm.name means that only the VM we just created gets powered on.
```python
    vms = vm_folder.childEntity
    for vm in vms:
        if not (hasattr(vm, 'childEntity') or isinstance(vm, vim.VirtualApp)):
            if (vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn
                    and vm.name == vm_name):
                task = vm.PowerOn()
                tasks.wait_for_tasks(service_instance, [task])
```
In a normal deployment, you would probably present some install media to the VM, like an ISO. In my case, there is a PXE server that deploys Linux to the VM when it PXE boots. Once there is an OS on the first hard disk, the VM will boot that OS rather than PXE booting.
Another oddity that I found was that if I assigned multiple SCSI controllers and hard disks to the VM at creation time, then all of the disks were connected to the last SCSI controller. I wanted the disks spread across controllers to maximize performance. In part three I will look at adding SCSI controllers and disks to an existing VM.
© 2018, Alastair. All rights reserved.