Business process automation is an essential part of what IT delivers, and IT automation is central to making that delivery possible. You may recall lots of vBrownBag content around “Automate all the things” a few years ago: plenty of help using PowerShell to automate your vSphere environment, and more recent content helping you learn to automate with Python. The thing is that the best automation is the automation you don’t have to write. A vendor with an open API that allows customers to develop automation is fantastic, and an open API is now table stakes for products in private and public cloud deployments. Unfortunately, some vendors stop at the API and expect their customers to develop all of their own automation on top of it. If a large number of your customers are writing precisely the same automation, then your product is missing the feature that would remove the need to write it. For example, if most of your customers need to write reports that show the availability and performance of your product, then your product should have those reports built in. Customers are far better served by built-in features than by lots of duplicated effort building the same features on top of your API.
I had a great chat with a Zerto customer while we were eating lunch at ZertoCon last week. The customer is an MSP that offers Backup and DR as services to their customers, with the DR as a Service (DRaaS) delivered using Zerto. MSPs are a large market for Zerto; they put their own brand on the Zerto product and provide DRaaS to their customers. The comment from the MSP’s DRaaS engineer was that when Zerto reports a problem, there is a real problem to solve, rather than a Zerto software issue. The MSP’s Backup as a Service offering used a different vendor’s product, and the MSP frequently had support calls open with that vendor to resolve problems with the backup product. The engineer I chatted with could not believe how often the backup team had to open support requests with the backup vendor; he seldom, if ever, opens tickets with Zerto. For enterprise customers, this reliability translates into less staff time to deliver consistent DR using Zerto.
The headline feature of Zerto 7.0, released this month, is long-term retention. Previously Zerto would retain recovery points for a maximum of 30 days; now retention is a policy set by the customer. Recovery for up to 30 days meets most DR and backup/restore requirements, but not compliance and archive. With long-term retention, Zerto can be used for all of the “bad things happen” use cases that I outlined a few months ago. I wonder whether that MSP will deprecate their Backup as a Service and rename their DRaaS as “data protection as a service” when they upgrade to Zerto 7.0. Unifying backup and DR makes a lot of sense. A single “copy” action from production can satisfy both requirements; then policies determine which copies go where and how long they are retained.
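That single-copy, multiple-retention idea can be sketched in a few lines of Python. This is purely illustrative; the names and structure below are my own, not Zerto’s API or schema.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical sketch: one protection "copy" action fans out to multiple
# destinations, each with its own retention policy.

@dataclass
class RetentionPolicy:
    destination: str      # e.g. a local journal for DR, object storage for archive
    keep_for: timedelta   # how long recovery points are retained

def plan_copies(vm_name, policies):
    """Return one planned copy per destination for a protected VM."""
    return [
        {"vm": vm_name, "dest": p.destination, "expires_after_days": p.keep_for.days}
        for p in policies
    ]

policies = [
    RetentionPolicy("local-journal", timedelta(days=30)),     # DR / operational restore
    RetentionPolicy("object-storage", timedelta(days=2555)),  # ~7-year compliance archive
]

print(plan_copies("finance-db", policies))
```

One read of the production VM satisfies both the 30-day DR requirement and the multi-year compliance requirement; only the policies differ.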
Hopefully, you understand that cloud-native applications have very different architecture and different databases compared to traditional enterprise applications. There are now modern enterprise applications, which use cloud-native services and databases alongside traditional application constructs. We also see more enterprises having parts of their estate in the public cloud and using cloud-native services. Hybrid cloud is not merely using both on-premises and cloud services, but also often has a melding of cloud-native and enterprise techniques. Against that backdrop, it makes a lot of sense for enterprise data protection platforms to add cloud-native data protection. Cohesity buying Imanis Data adds non-relational database protection to the platform.
Disclosure: This post is part of my work with Cohesity.
I had a briefing with Imanis in the middle of 2018, well before the acquisition. Imanis’s mission was to offer enterprise data management for Hadoop and NoSQL data. Their platform is software-only and uses a scale-out architecture. It provides a distributed file system with deduplication and compression, and it protects data using API-based access rather than agents. So far, so good, so what? The exciting part for me is data awareness and the use of machine learning in the platform. Data awareness means that the Imanis platform knows about the data structures inside the databases and can use this knowledge to aid migration and do analytics on the protected data. I particularly liked that the analytics include ransomware detection, a role that I think backup products are ideally placed to fulfill.
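To make the ransomware-detection idea concrete, here is a minimal sketch of one signal a backup platform could watch: a sudden jump in the nightly change rate, since encrypted files no longer deduplicate or compress, so the incremental backup balloons. This is my own illustration, not Imanis’s actual algorithm.

```python
# Illustrative sketch only: flag a backup run whose change rate is far
# above the recent baseline, one possible signal of ransomware encrypting
# files across the protected data set.

def looks_suspicious(history, latest, threshold=3.0):
    """Flag the latest run if its change rate is well above the recent average."""
    baseline = sum(history) / len(history)
    return latest > baseline * threshold

# Typical nightly change rates (2-3% of data) vs. a run where most files changed.
normal_runs = [0.02, 0.03, 0.025, 0.02]
print(looks_suspicious(normal_runs, 0.03))  # False - an ordinary night
print(looks_suspicious(normal_runs, 0.65))  # True - most of the data rewritten
```

A real product would combine several signals (entropy, dedupe ratio, file-type changes), but the backup platform is the natural place to compute all of them because it already sees every incremental change.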
I saw Imanis again when Cohesity unveiled their app store. You can deploy Imanis directly onto your Cohesity cluster and use the Cohesity scale-out storage for Imanis. It appears that Cohesity will continue to sell Imanis as a stand-alone product, as well as building more integration into the Cohesity platform. I hope that we see Cohesity’s policy-based protection integrated into Imanis; I want to apply the same protection policy to the parts of my application that reside in VMs and in cloud-native data stores. Cohesity continues to expand the coverage of their data protection, covering enterprise platforms, Office 365, and now cloud-native databases. The aim is clearly to be a one-stop shop for data protection and management in enterprises, and particularly to support multi-cloud organizations.
One of the beautiful things about a small conference is feeling less like I’m lost in a crowd. I spent three days in Nashville at ZertoCon with a few hundred other attendees. A smaller conference meant smaller queues for food and less walking around the conference center. A smaller crowd also meant that after chatting with the person next to me in a line, I saw them and talked with them again the next day. It is quite a different feeling compared to the massive conferences I often attend. There were also some lovely personal touches, such as having Zil (a Zerto staff member, or Zertonian) play guitar for the opening keynote and again at the customer appreciation party. The crowd at the party really appreciated the Zertonians who performed when the house band took a break. I went to more sessions at ZertoCon than I have at any conference since my first VMworld. The sessions were good, with knowledgeable presenters in rooms that were small enough that questions didn’t feel like an interruption. I also had a one-on-one briefing, as well as easy access to Zerto and sponsor staff to answer questions.
I liked the center of Nashville too, although it will be more pleasant when the construction right next to my hotel is finished. The bottom of Broadway is quite a sight: three or four blocks of wall-to-wall live music venues. Even on a Monday night every venue had live performances, and many bars had multiple bands playing. One bar we went to had four floors with three different groups performing. I was surprised that almost all of the music was classic rock covers, with very little country or western.
My favorite part of all of the conferences that I go to is catching up with my community friends. Ariel and Edgar were there before me and won the hackathon by adding Zerto components to their vDocumentation tool. I caught up with Nick Scuola, Shannon Snowden, and Justin Paul, who are all Zertonians, along with Kaitlyn, who was my host for the event. I also got to hang out with Eric Siebert properly for the first time.
Small conferences are very different from large ones. There was more focus on education and technical content at ZertoCon than at the massive vendor conferences. ZertoCon felt personal and the Zertonians I met wanted to have real relationships with attendees rather than simply to make sales.
This week I had a look at using my Cohesity cluster to protect a Nutanix Acropolis Hypervisor (AHV) environment. Most of my experience with Nutanix was using vSphere as the hypervisor, in which case Cohesity sees a vSphere environment. This time I deployed the Nutanix Community Edition (CE) which uses AHV and allows a single node deployment. The protection process for AHV works the same as vSphere protection, but the restore processes are not quite as smooth. Below is a walk-through of protection and recovery; you can see the same process in this video. You can also read more about Cohesity protection for Nutanix AHV on the product page and learn more about Cohesity through the Build Day Live event.
Next week I will be in Nashville for ZertoCon. It is two firsts for me, as I have never been to Tennessee or to ZertoCon. I also get my longest flight yet, almost fifteen hours direct from Auckland to Chicago, before my connection to Nashville. Unfortunately, I don’t get to Nashville until Sunday night, too late for the ZertoCon Hackathon. Hopefully, I will be sufficiently rested to make the most of the education opportunities at ZertoCon on Monday; there are hands-on labs as well as instructor-led training and certification testing. The sessions run Tuesday and Wednesday; I’m interested in some of the cloud migration sessions, as well as a vExpert session run by Ariel Sanchez and Nick Scuola. I am also expecting to learn quite a bit about Zerto 7.0, which was released this month. Say hi if you see me at ZertoCon, and look out for Ariel too.
On my way back from Nashville, I will have a couple of days in San Jose. Who should I catch up with while I’m there?
When I first met Pure Storage, at Virtualization Field Day Three, the company objective was to deliver an all-flash storage array that was cost-competitive with disk-based arrays. At that time Pure Storage was optimizing for cost, rather than all-out performance. But even when price was a primary concern, Pure also wanted to deliver the data services that enterprise IT expects. Over the five years since I met Pure, I have seen many profound changes in the company, and in the industry as a whole. While Pure is still very conscious of the cost of their products, they now have a sufficient range of products that there are performance-optimized options, and there are whole new families of products that were not even conceived of in 2014. One thing that has not changed is their business approach: the Forever Flash guarantee is one aspect of how Pure wants to make life simpler for its customers.
In my last post about Cohesity, I showed you how to set up replication between Cohesity clusters so that you could have DR using an off-site Cohesity cluster. Today I will walk through how that actual recovery might happen. You can watch the video of the process here on YouTube. We think of DR planning as protection against major events: floods, fires, tornadoes, and the like. The reality is that most DR activities are more mundane. Real disasters are infrequent, and a DR plan is mostly insurance that we pray we never need to use. Often the DR environment is also the test and development environment, and the usual recovery is to bring up an isolated copy of production. Using the DR environment for testing delivers additional value from what would otherwise be expensive idle equipment, and each test also validates that parts of our DR plan will work if a disaster does occur.
In my series of posts about copying data, I talked about Disaster Recovery (DR) as a reason to copy data between sites, particularly in a form that allows rapid recovery of a large workload. Today I will walk through the process of replicating a set of VMs from one Cohesity cluster to another. If you would prefer to see the process in action, take a look at my YouTube video. You can also refer to the Cohesity site for more information about Disaster Recovery and replication. In another post and video, I will show you the recovery of those VMs.
In the last two blog posts of this series, I looked at the ways we copy data for protection and the ways we copy it to improve the business. Since we are making these copies of the same data for different purposes, it is worth considering how a single product could make all of these copies without a lot of redundant copying and storage. Each time we make a copy of the production data we impact the production system, so minimizing that impact delivers a business benefit. The challenge is that the different reasons for copying data have very different requirements, so a single product that serves all of these needs will have to be flexible and feature-rich.
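As a rough sketch of how one product might capture those very different requirements, each purpose becomes a policy over the same production data. The field names and numbers here are illustrative, not any vendor’s schema.

```python
# Sketch: one copy-data product, several purposes, one read of production.

from dataclasses import dataclass

@dataclass
class CopyPolicy:
    purpose: str
    rpo_minutes: int      # how stale the copy is allowed to be
    retention_days: int   # how long copies are kept
    destination: str

policies = [
    CopyPolicy("disaster-recovery", 5,    7,    "dr-site"),
    CopyPolicy("backup-restore",    1440, 30,   "local-cluster"),
    CopyPolicy("compliance",        1440, 2555, "cloud-archive"),
    CopyPolicy("test-dev",          1440, 3,    "dev-cluster"),
]

# With one product per purpose, production is read once per policy;
# a unified product reads production once and fans the copies out.
naive_reads_per_day = len(policies)
unified_reads_per_day = 1
print(f"production impact: {naive_reads_per_day}x reads vs {unified_reads_per_day}x")
```

The spread of values is the point: a 5-minute RPO for DR versus a 7-year compliance retention is why a single product for all these needs has to be so flexible.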