Having decided that HCI is all about simplified deployment and management, I have started thinking about how simple it is to manage modern on-premises infrastructure. I feel that HCI is often compared with the technology it is replacing, rather than the alternatives that are available now. One aspect of that is when an HCI vendor says that a client replaced five racks of five-year-old equipment with half a rack of HCI. A new SAN and servers would not require five racks, so some of the reduced footprint is about hardware generations rather than HCI. Another aspect is simplified management. HCI management is centered around VMs and is ideally policy based. Older architectures tend to have isolated control of each hardware type: compute, network, and storage. Comparing HCI's simple management to a ten-year-old management practice is also not valid; HCI needs to be compared to the manageability of the modern products with which HCI products compete.
I usually think of SolarWinds as a suite of on-premises software that helps monitor and manage many aspects of IT and application infrastructure. While this is true, it isn't the complete story. As I have heard at a couple of Tech Field Day events (TFD13 & TFD16, plus my TFD disclaimer), SolarWinds also has a suite of online services that can help monitor and manage, even without on-premises SolarWinds management. This shift towards SaaS solutions for management is significant in a couple of ways. It seems evident that cloud-native applications would be managed by cloud-native management platforms; it makes sense to use current application architectures. Initially, it was less clear that managing on-premises infrastructure from cloud-based services was a smart idea. For some uses it is still not acceptable; some on-premises infrastructure does have to be managed wholly on-premises. But there is a lot to like about a management application as a service: it is always up-to-date and was implemented following the vendor's best practices. It always has enough capacity for your business growth without a lot of planning by your IT team. So, what are these online management products that SolarWinds has?
The spring conference season is upon us, and particularly upon me. I am starting the season with a trip to Dell Technologies World (DTW) in Las Vegas at the end of April. I will be attending the conference as the guest of Dell, which will provide my flights, hotel accommodation, probably some nice hospitality, and access to some interesting people. I am interested to see the continued merging of the EMC and Dell data center product lines. Now that the initial dust has settled, I expect to see announcements around the development of some products and silence about others. This will tell us the makeup of the combined storage product line. After the Dell presentation at TFD16 in February, my interest in managing fleets of x86 servers is renewed, so I will be looking for more stories about how easy it is to manage dozens of servers. For me, this is a great comparison with the ease of management that we expect from HyperConverged products. While I'm talking about HCI, I really do need to learn a bit more about Dell's VxRail product, as I hear that it is hugely successful.
I will be in Las Vegas from April 29th to May 2nd, although DTW continues until the 3rd. It will be a big geek week in Las Vegas; InteropITX will be in town the same week. I have a pass for Interop, so I may see people there as well as at DTW.
What information do your organization's security tools take as input to decide whether an action is safe? Most security software only takes technical IT information as inputs; firewalls use IP information, and malware detection often uses file fingerprinting. I think we know that there are significant shortcomings with looking only at IT data when we assess risk. These inputs fail to address the most significant threat in any security landscape: the people. At Tech Field Day 16 we heard from Forcepoint (video here) about how their User and Entity Behavior Analytics (UEBA) product takes in a lot of external data to make decisions about the risk associated with specific actions by a staff member, or other entity. My usual TFD disclaimer applies.
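To make the idea concrete, here is a minimal sketch of how a UEBA-style score might blend behavioral context with traditional technical signals. The signal names, weights, and scoring model are my own illustrative assumptions, not Forcepoint's actual algorithm.

```python
# Hypothetical UEBA-style scoring: a weighted blend of behavioral and
# technical signals, clamped to a 0-100 risk scale. All names and
# weights below are illustrative assumptions.

def risk_score(signals: dict, weights: dict) -> float:
    """Weighted sum of normalized signals (0.0 = normal, 1.0 = anomalous)."""
    score = sum(weights.get(name, 0.0) * value for name, value in signals.items())
    return max(0.0, min(100.0, score))

# Behavioral context (off-hours activity, HR flags) sits alongside
# classic IT indicators (new device, bulk downloads).
weights = {
    "off_hours_login": 20,
    "bulk_file_download": 35,
    "new_device": 15,
    "hr_flag_resignation": 30,
}
action = {
    "off_hours_login": 0.5,
    "bulk_file_download": 1.0,
    "new_device": 0.0,
    "hr_flag_resignation": 1.0,
}
print(risk_score(action, weights))  # 10 + 35 + 0 + 30 = 75.0
```

The point of the sketch is the inputs, not the math: the same file download scores very differently for an employee who has just resigned than for one who has not.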
It was bound to happen eventually: no matter how much I resisted the idea, I needed a PowerShell script with a GUI. This week I needed a script that allows searching for a VM and selecting from a few lists, as well as some optional configuration. I could have written a script that prompted the user for a series of answers, but what I needed was a GUI with a bunch of Windows Forms controls, just like I used to use in Visual Basic. The good news is that there is a web application designed for precisely this purpose, which generates a PowerShell script skeleton that you can fill out with your application code. I used POSHGUI.com to create the form part, then put together the main script logic as usual.
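For flavor, here is a minimal sketch of the kind of Windows Forms skeleton this approach produces. The control names, layout, and the `Get-VM` call in the comment are my own illustrative assumptions, not the actual POSHGUI.com output.

```powershell
# Minimal Windows Forms skeleton in PowerShell (illustrative sketch,
# not generated POSHGUI.com code). Requires Windows PowerShell or
# PowerShell on Windows.
Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing

$form      = New-Object System.Windows.Forms.Form
$form.Text = 'Find VM'
$form.Size = New-Object System.Drawing.Size(320, 160)

$vmBox          = New-Object System.Windows.Forms.TextBox
$vmBox.Location = New-Object System.Drawing.Point(10, 10)
$vmBox.Width    = 280

$okButton          = New-Object System.Windows.Forms.Button
$okButton.Text     = 'Search'
$okButton.Location = New-Object System.Drawing.Point(10, 50)
$okButton.Add_Click({
    # Main script logic goes here, e.g. Get-VM -Name "*$($vmBox.Text)*"
    $form.Close()
})

$form.Controls.AddRange(@($vmBox, $okButton))
[void]$form.ShowDialog()
```

The generated skeleton handles the form plumbing; you fill in the `Add_Click` handlers with your own logic, just as you would have done in Visual Basic.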
Is it just me who gets annoyed when category definitions are arbitrary and fail to match real business needs? One example is Gartner's All-Flash Array (AFA) storage analysis. Any product that can be either all-flash or hybrid is excluded, so vendors create unique product IDs that are really just an all-flash configuration of a hybrid array. Gartner's definition of AFA gets in the way of customers looking for a set of benefits. I have come to realize that I have made the same mistake about HyperConverged Infrastructure (HCI) as a category. The realization arrived as I took part in Tech Field Day 16, particularly this presentation by Adam Carter. Naturally, my standard TFD disclaimer applies. HCI is not really about putting clustered storage inside a bunch of hypervisor hosts; it is far more about the simplicity of operating an environment designed purely to run VMs. There is a range of vendors with products that make it easy to deploy and manage a virtualization platform, which is what HCI is really about. To me, the big surprise is that VMware does not have a general-purpose deployment tool, even for a basic vSphere cluster.
I am in Austin, Texas, this week as a delegate at Tech Field Day 16; this is my fourteenth Tech Field Day (TFD) event. I want to spend a few moments talking about what I think the events are and share some of "The Rules" that we talk about for the event. My list of rules is by no means definitive; in fact, I hope to have corrections and additions to this list over time. Naturally, this post is covered by my TFD disclaimer post.
I have long believed that success in the public cloud is not just about meeting the NIST definition; it also requires developer enablement. The rampant success of AWS is not driven by EC2 compute instances; it is driven by services that enable developers to rapidly build applications that satisfy business needs. I believe this is why we have seen IaaS-based public clouds fail: they don't deliver services that developers want to consume. Is there a parallel in private cloud? It seems that Stratoscale believes there is; they have pivoted from providing only an on-premises IaaS cloud to delivering familiar AWS services on-premises. To be clear, they do not offer all of the AWS services and don't provide every API for every service. They are good, but not miracle workers; more services and more extensive API coverage will come over time. Nonetheless, the services that they offer are pretty amazing. There are clones of AWS networking and load balancing, a database service, Hadoop-as-a-service, object and file services, as well as a Kubernetes-as-a-service offering. All these services are delivered on-premises using a software-only HCI deployment; you can re-use existing physical servers or buy your choice of new servers.
Developing software against AWS services is undoubtedly a popular practice, but it usually locks you into deployment on the AWS public cloud. With Stratoscale you can use many of the same services but deploy to on-premises infrastructure by changing one URL in the deployment process. Developers could use AWS for the development phase and then deploy to production on-premises, or the other way around. Applications could also be built with a split between on-premises and public cloud services, using the same architectures in both locations. I think that the strongest enterprise use case for Stratoscale is organizations that want the agility of public cloud development but have regulatory or compliance requirements to keep their applications and data on-premises. The other strong use case is for smaller public cloud providers to offer their own AWS-compatible services and serve niche requirements.
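The "change one URL" idea can be sketched as follows. The endpoint values and helper function here are my own assumptions for illustration; they show the pattern of pointing identical deployment code at a different service endpoint, not Stratoscale's actual configuration.

```python
# Illustrative sketch: the same deployment code targets AWS or a
# hypothetical on-premises region purely by swapping the endpoint base.
# Both base URLs below are assumptions for illustration.

def service_endpoint(service: str, base: str) -> str:
    """Build the URL a deployment script would call for a given service."""
    return f"https://{service}.{base}"

AWS_BASE = "us-east-1.amazonaws.com"           # public cloud region
ONPREM_BASE = "stratoscale.example.internal"   # hypothetical on-prem region

# Identical code path, different target:
print(service_endpoint("s3", AWS_BASE))      # https://s3.us-east-1.amazonaws.com
print(service_endpoint("s3", ONPREM_BASE))   # https://s3.stratoscale.example.internal
```

In practice the AWS SDKs support overriding the endpoint a client talks to, which is what makes this single-variable switch between locations plausible for real deployment tooling.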
This is a very cool product. I hope to see more from Stratoscale as they expand their product and educate customers about the possibility of AWS-compatible services on-premises.
I am continuing my look at the Rubrik platform. In my previous blog post, I looked at the deployment process for the Rubrik Edge virtual appliance, as well as backups and restores from that Edge appliance. Today I want to dig a little deeper into the backup policies (SLA Domains, in Rubrik terms) as well as look at using replication to protect against losing the Edge appliance itself. I will start with replication and then loop back to policies, since replication is driven by those policies.
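As a rough mental model, a policy like this bundles snapshot frequency, retention, and a replication target into one object that protected VMs are assigned to. The field names and values below are my own sketch, not Rubrik's actual API or data model.

```python
# My own illustrative model of what a backup policy bundles together;
# field names and values are assumptions, not Rubrik's actual schema.
gold_sla = {
    "name": "Gold",
    "snapshot_every_hours": 4,     # how often to take a snapshot
    "retention_days": 30,          # how long to keep local snapshots
    "replicate_to": "remote-cluster",   # replication is part of the policy
    "replica_retention_days": 7,   # how long replicas live remotely
}

def snapshots_retained(sla: dict) -> int:
    """Local snapshots kept at steady state under this policy."""
    per_day = 24 // sla["snapshot_every_hours"]
    return per_day * sla["retention_days"]

print(snapshots_retained(gold_sla))  # 6 per day * 30 days = 180
```

The useful property of this model is the one the post relies on: because the replication target lives inside the policy, assigning a VM to the policy is all it takes to get both local backups and off-appliance copies.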
For every data center full of servers, there are dozens or even hundreds of remote or branch offices. These locations are where businesses actually sell their products and make money. Delivering IT to these ROBO locations is a challenge, in part because there are lots of them, so controlling cost is crucial. While we might hope that all our business processes run out of cloud applications, the reality is that many of these ROBO locations need to have their own servers. One retail branch I visited ran around 20 VMs: enough to need a virtualization platform, but not enough for a SAN and multiple ESXi servers. Since these locations are where the money is made, they are also where the data is generated. Protecting that data in the remote or branch office is what Rubrik Edge is all about.