I am continuing to learn about Cohesity and share what I learn with you. This week I joined my Cohesity cluster to Active Directory so that I could use AD accounts to manage the platform rather than the built-in account. The process is shown in this video and took all of five minutes to complete.

The security model in Cohesity is reasonably straightforward but flexible. Accounts are given a role, which defaults to being global but can be filtered to specific objects. There are roles for cluster administrators, backup operators, and backup viewers, as well as a couple more that I haven’t investigated. There is also a facility to create custom roles based on your specific security policies. I granted one AD group administrative rights to replace using the admin account, and gave another group the operator role so that they could look after data management but not change the cluster setup.

One important thing is to secure the built-in admin account’s password. Configuring AD authentication supplements built-in authentication, so the local accounts still exist. Set a complex password and document it in whatever safe location you use for system passwords.

Now that the cluster is joined to AD, the login page has a drop-down for domain selection. Delegating user authentication to Active Directory was quick and easy on my Cohesity cluster.
Past Cohesity videos
Cohesity Virtual Edition Deployment Walk Through
Cohesity – Archive and Tier to public cloud
Disclaimer: Cohesity is my paying customer, and I am helping them by making this video and blog post. The topic and content of the video were entirely my idea, and everything was created and posted before Cohesity got to do any review. I think they like it this way as they are so danged busy at the moment.
I am surprised that we do not have more SaaS-based management platforms; ever since Cloud Physics launched in 2013, it has made sense to me that SaaS is a great model for managing infrastructure. All of the usual SaaS benefits apply: the software is always up to date, and that is not the IT team’s problem. But the real genius of Cloud Physics is that they have a vast information warehouse of data about their customers’ environments and can learn from that data to help every customer operate better. Just before VMworld USA, my friends at Cohesity launched their own SaaS management platform called Helios. One aspect of Helios is to unify management of multiple Cohesity clusters, both on-premises and in public cloud. Another is to enable more intelligent use of the information inside those Cohesity clusters.
I have an off/on relationship with vForum Sydney. I first attended the event when it was called TSX in 2007, right at the start of my time teaching VMware training courses. Back then there were a couple of hundred people at TSX Sydney; now vForum attracts thousands of attendees and feels like a smaller VMworld. I’ve attended most of the vForum events, except when it conflicted with VMworld EMEA one year and, I think, OpenStack Summit another year. Along with VMUG UserCons, vForum is a gathering of the virtualization community, and it brings some superstars in from overseas too. I will be at vForum Sydney this year and am really looking forward to seeing my friends and doing some community activities.
In 2011 I attended my first VMworld, and the community parties (VMunderground and CXI) were a revelation to me, a great place to meet people and talk. I came back and organized VMdownunderground, the community warm-up party before vForum Sydney. The party has happened before vForum every year, with Ryan McBride taking over the organization when I couldn’t make it to vForum. There will be a VMdownunderground again, so you can come along to talk to other community people. Please register here on Eventbrite so you get a reminder and the address. We started organizing things this week, a little too late to get a lot of sponsorship, which means you will need to buy your own drinks. Great thanks to Actifio for sponsoring the event at short notice; hopefully we can provide some snacks.
There will also be vBrownBag TechTalks at vForum, as there have been often through the years. TechTalks are brief presentations that provide technical education of some sort. Presentations are video recorded and often live streamed, then published to the vBrownBag YouTube channel. If you would like to present a TechTalk at vForum Sydney this year, then just fill out this form, and I will be in touch to schedule your session.
We all like the idea of a single pane of glass system monitoring, but the reality is that often monitoring data is siloed away in a bunch of different tools that do not speak to each other, not even the same language. We end up with several single panes of glass, each dedicated to their own data. Often each team is only aware of their own data, with no ability to correlate data between different infrastructure and application layers. What we could use is a Rosetta Stone that allows translation between the various data languages in our enterprise and permits data to be ingested for analysis and delivery to our favorite pane of glass.
There is definitely a divide between what is possible with on-premises IT infrastructure and what is possible with public cloud services. On-premises infrastructure is finite, dedicated, under our direct control, and paid for up front even if we don’t use it. Public cloud infrastructure is effectively infinite but must be shared with other tenants of the public cloud. We have limited control of public cloud infrastructure, but we only need to pay for what we actually use. These differences mean that most organizations are using a hybrid cloud approach, some IT on-premises and some from one or more public clouds. One of the first infrastructure elements to be outsourced to the public cloud is cold data storage, and it was no coincidence that S3 was one of the first services that AWS offered. The two usual initial adoption models are tape replacement and tiering. Both of these adoption models treat cloud storage as an extension of on-premises backup storage.
I’ve just started working with Cohesity and have made a video of my first deployment of the Cohesity DataPlatform Virtual Edition. I had been through the OVA deployment and configuration, but this was my first ever hands-on time with the Cohesity UI, and I had not watched or read any other walk-through. I was impressed: I deployed the appliance, backed up two VMs, and completed one whole-VM restore and two file-level restores in under an hour of elapsed time. The video is about half that length; I sped up quite a few places where I was waiting for data movement. You can find the video here on my Notes for Engineers YouTube channel.
I was impressed that the user interface is straightforward to work with and is focused on routine tasks; protection and recovery are both front and center. The opening screen after logging on to the appliance has plenty of useful information for at-a-glance status and health.
In the coming months, I will spend quite a bit more time with the Cohesity product, learning and sharing what I learn with you. I do want to spend some time looking at the data management features and how Public Cloud is impacting the use of products like Cohesity.
Disclosure: Cohesity is a customer of Demitasse, and this article and video are part of my work for Cohesity, but Cohesity did not suggest, control or review this content before publication.
There seems to be a fashion for renaming your backup product as a data management product. I think that there are significant differences between data protection and data management; some products are not merely being renamed but are fundamentally different. I think it is worthwhile identifying the difference between products that are made for data protection and those that were designed from the start for data management. As always happens, I expect marketing departments to jump on board the new name even when it is not within the capabilities of their products.
This is the fourth article in my series about using PyVmomi for VM build automation from Linux. In the earlier posts, we connected to vCenter, created a VM, and added SCSI controllers and drives to the VM.
The VMs I needed to create are for a storage benchmarking tool, so I needed to be sure that competition for CPU or RAM was not going to limit the storage performance. I had parameters in the script for the vCPU count and RAM amount. Now I needed to add reservations for the entire RAM footprint and a decent amount of GHz per vCPU. I was rather expecting a hard time setting resource controls on my VMs, since apparently simple things like adding disks to SCSI controllers had seemed painful. Much to my surprise, I could set reservations very easily, and setting shares and limits looked easy too. I needed to add memory and CPU allocation objects to the VM configuration specification before creating the VM. In the example below, I set the RAM reservation equal to the configured RAM and the CPU reservation to 1.5GHz for each vCPU.
# Build the VM configuration specification, including the devices
# created in the earlier posts in this series.
config = vim.vm.ConfigSpec(name=vm_name, memoryMB=RAM, numCPUs=vCPUs,
                           files=vmx_file, guestId='ubuntu64Guest',
                           version='vmx-09', deviceChange=devices)
# Reserve the VM's entire configured RAM.
config.memoryAllocation = vim.ResourceAllocationInfo(reservation=RAM)
# Reserve 1.5GHz (1500MHz) for each vCPU.
ResMHz = vCPUs * 1500
config.cpuAllocation = vim.ResourceAllocationInfo(reservation=ResMHz)
logging.debug("Creating VM " + vm_name)
task = vm_folder.CreateVM_Task(config=config, pool=resource_pool)
Both the memoryAllocation and cpuAllocation objects have shares and limit properties as well as the reservation property that I am using. Since I was reserving a lot of resources, there was no reason for me to use shares or limits; your use case may be different.
I found PyVmomi significantly harder to work with than PowerCLI, mostly because there are far fewer examples to work from. A lot of my time was spent reading the vSphere API and object definitions. It does feel like PyVmomi is aimed more at developers than operations teams and that some formal software development training might have made it all easier. The nice part is that I was able to get everything I wanted working within Python.
I could not believe that it was so long since I did an AutoLab release.
I completed this release in June and started the testing; six weeks later, testing is complete and I have even updated the deployment guide. Now I know why AutoLab releases take so long: I needed to test two different operating systems, three different vSphere releases, and two Horizon View versions inside the lab, across four different virtualization platforms.
The great news is that AutoLab v3.0 supports vSphere 6.5 and 6.7 as well as all the older versions. Horizon 7.0 and 7.5 are supported, as is Windows Server 2016.
This is the last version of AutoLab that will be Windows-based. My plan for the next version is to use a single Linux appliance to replace the NAS, Router, and Domain Controller. The Windows VC will be replaced by the VCSA, as it has been in all my lab environments. This will be a significant effort, so there may be another long wait for a new version!
Joy is a funny thing: we all know what it is, and we can usually tell when people are experiencing joy. Many of us feel too little joy in our lives. It seems far too easy to get caught up in all the duty of our adult lives and lose the joy that we had when we were younger. For the last year, I have been focussing more on the things that bring me joy and trying to help people around me find their joy. The funny thing is that I cannot tell you how to find your joy; I can only ask you to think about what brings you joy and tell you what brings me joy.
Five minutes from my shedquarters