I am surprised that we do not have more SaaS-based management platforms. Ever since Cloud Physics launched in 2013, it has made sense to me that SaaS is a great model for managing infrastructure. All of the usual SaaS benefits apply: the software is always up to date, and keeping it that way is not the IT team’s problem. But the real genius of Cloud Physics is that it has a vast information warehouse of data about its customers’ environments and can learn from that data to help every customer operate better. Just before VMworld USA, my friends at Cohesity launched their own SaaS management platform, called Helios. One aspect of Helios is to unify the management of multiple Cohesity clusters, both on-premises and in the public cloud. Another is to enable more intelligent use of the information inside those clusters.
The first job of a SaaS-based systems management tool is to reduce the number of tool transitions that IT operations staff need to make in their workflow. Helios centralizes management of multiple Cohesity clusters that would otherwise each have their own management URL. A central management location for a collection of clusters is excellent, particularly since it is not just unified reporting: you can use the unified console to make changes, such as updating data protection policies across all of your Cohesity clusters. While unified management may be the current value of Helios, there is far more value in the future capabilities that Helios will deliver from its analytics.
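To make the idea concrete, here is a minimal sketch of what a unified console does behind the scenes: fan one policy change out to every registered cluster instead of an operator visiting each cluster's own URL. The endpoint path and payload shape are my invention for illustration; this is not the actual Cohesity API.

```python
# Hypothetical sketch of fan-out from a central console. The
# "/api/protection-policies" path and the payload fields are invented
# for illustration; they are not the real Cohesity Helios API.

def build_policy_updates(cluster_urls, policy_name, retention_days):
    """Return the (url, payload) request pairs a central console would
    issue to apply one data protection policy across all clusters."""
    payload = {"policy": policy_name, "retentionDays": retention_days}
    return [(f"{url}/api/protection-policies", payload) for url in cluster_urls]

clusters = ["https://cluster-a.example.com", "https://cluster-b.example.com"]
for url, body in build_policy_updates(clusters, "gold", 30):
    print(url, body)
```

The point is the shape of the workflow: one change at the console becomes N identical changes at the clusters, which is exactly the tool-transition saving described above.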
Helios is also a repository of metadata about your data protection, as well as every other Cohesity customer’s data protection. This information warehouse can be used for analytics, both within your own metadata and comparatively across all customers. Keep in mind that Helios does not store your backed-up data, just information about the backups. I wrote a little about using the public cloud to store backups last month and made a video about doing this with Cohesity.
The initial customer analytics insights in Helios seem to be about capacity planning and SLA optimization. Neither of these strikes me as a killer use: there is already capacity planning inside the cluster, and I would have hoped that the cluster could manage SLA compliance without Helios. I would like to see more insight into the data lifecycle in my organization, such as file shares that contain data that has not been updated for years. I would also like to see machine learning work out what is “normal” behavior and alert me when there has been a significant deviation from that normal. One example is the detection of a ransomware infection, since ransomware produces specific file modification behaviors. Ideally, this would be coupled with the ability to restore all encrypted files to their last pre-infection state.
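The simplest version of "alert on deviation from normal" is a statistical outlier test on backup metadata. The sketch below flags a backup run whose changed-file count sits far outside the historical norm, which is the kind of signal mass encryption by ransomware produces. This is my illustration of the idea, not Cohesity's actual algorithm, and the three-sigma threshold is an assumption.

```python
# Hypothetical anomaly check on per-run changed-file counts.
# A z-score far above normal suggests mass file modification,
# one signature of ransomware. Illustrative only.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Return True if `latest` is more than `threshold` standard
    deviations above the mean of `history` (changed-file counts
    from previous backup runs)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu
    return (latest - mu) / sigma > threshold

# A week of normal daily change counts, then two candidate runs.
normal_days = [1200, 1350, 1100, 1280, 1320, 1150, 1240]
print(is_anomalous(normal_days, 1300))    # ordinary day -> False
print(is_anomalous(normal_days, 250000))  # mass-encryption spike -> True
```

A real system would use richer features (entropy of changed files, rename patterns, extension churn), but even this crude baseline shows why the backup platform is well placed to spot an infection.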
Cross-Customer Analytics
Business benchmarking is a valuable tool: how do we rank against similar businesses? Helios includes benchmarking so you can identify how you are doing compared to similar businesses. Benchmarking is much more interesting to management than to IT operations, but it is still useful in justifying how you operate your data protection. I would be interested in seeing machine learning applied here too: identifying common data protection metadata patterns that immediately precede mass restores might help speed the detection of data loss events at other customers. Similarly, there may be patterns that precede rapid drops in available capacity that can also be used to help other customers avoid issues. Smart people learn from their mistakes; smarter people learn from the mistakes of others.
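Mechanically, benchmarking of this kind reduces to a percentile rank: where does your metric sit among anonymized peer values? The sketch below shows the calculation for a made-up metric (backup success rate); the peer numbers are invented, and this is not the Helios implementation.

```python
# Hypothetical cross-customer benchmark: what percentage of peers
# have a metric value at or below yours? Peer values would come from
# the anonymized, aggregated metadata warehouse.
from bisect import bisect_right

def percentile_rank(peer_values, own_value):
    """Percentage of peers whose value is at or below `own_value`."""
    ranked = sorted(peer_values)
    return 100.0 * bisect_right(ranked, own_value) / len(ranked)

# Invented peer backup success rates (%), plus our own.
peers = [92.1, 95.5, 97.0, 98.2, 99.1, 96.4, 94.8, 99.5]
print(percentile_rank(peers, 98.2))  # -> 75.0, i.e. top quartile
```

The interesting engineering is not this arithmetic but the anonymization and the choice of "similar businesses" to compare against.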
I really like the ideas behind Helios; now I need to get some hands-on time to see what it is really like. I am also very interested to see how the analytics component progresses over the coming months and years.
Disclosure: This post is part of my work with Cohesity.
© 2018 – 2019, Alastair. All rights reserved.