There are a lot of ways that big cloud companies run their IT estate differently from how enterprise IT companies run theirs. One of the significant differences is that cloud vendors have developers operate their infrastructure; this is the Site Reliability Engineer (SRE) role that Google talks about. The SRE role is approximately like the IT operations function in Enterprise IT, but at a scale that enterprises never experience. In Enterprise IT, when something is broken an Operations engineer will fix the problem, then move on to the next task. In an SRE role, the problem is not simply fixed; the SRE will make software changes to the underlying platform to avoid the problem ever occurring again. This difference in approach is one of the essential elements of moving from IT as pets to IT as cattle (or probably poultry, since a cow, sheep, or goat is a pretty expensive and non-disposable asset).
My original Platform 9 Meme, thanks for the reminder Sirish
This is part two of my series on using pyVmomi on Linux to work with vCenter to create a bunch of VMs. You can find part one, where we connected to vCenter and retrieved VM information, here. Creating a VM with any automation tool requires specifying all of the attributes of the VM; there is no wizard like the one in the vSphere client, so we need to construct a bunch of linked objects. The most complex object is the hardware specification object, and within it the objects for disk controllers and drives. Usually, each object is created using a constructor method and then added to its parent with an operation property.
I am interested in seeing what customers do with Microsoft Azure Stack. I think it is a significant part of a hybrid cloud infrastructure for enterprise customers. The interesting part to me is the split in responsibilities between the customer, Dell, and Microsoft. The workload is managed by the customer as another Azure region. The software platform is managed by Microsoft with their own updating cycle. The hardware platform is managed by the hardware vendor but is on-site in the customer’s datacentre. There is an inherent risk with customer-managed premises underneath cloud provider managed software; this needs to be addressed by the hardware vendor. Even more complication comes if customers choose custom, snowflake hardware configurations.
It does seem that there is always a new programming language to learn. I do wish that I had done some real programming courses when I was a student. My Physics degree from the 1990s didn’t prepare me well for needing to write a lot of scripts, which seem to get more complicated every month. I have been working on a project which requires that a Linux virtual appliance be used to build a bunch of Linux VMs. I did start by looking at the option of running PowerCLI on Linux and quickly came to the conclusion that it was too soon for me to use that technology. So, I fell back to Python and the Python bindings for the VMware vSphere API, called pyVmomi. This Python module allows me to interact with vCenter to get tasks done.
The springtime conference season continues; I will be attending Pure Accelerate in San Francisco from May 22nd to 24th as a guest of Pure Storage. This will be my first time at Accelerate; the last two years the conference never quite worked out for me. I learned quite a lot about the FlashArray product when we did a vBrownBag Build Day Live event last year. I am expecting to learn a bit more about the FlashBlade product and how it is being used by real customers. I also expect more presence for FlashStack, which is a converged infrastructure with Cisco UCS servers and Pure Storage FlashArray. I also expect to see some evolution of the Pure1 web service that is used to manage an estate of Pure Storage arrays. There has been quite a contingent of bloggers and influencers at past Pure Accelerate conferences; I expect to see a few friends and make a few new ones. I also have quite a few friends at Pure Storage, so it will be nice to catch up with them if they have a spare moment at the conference.
I am always interested in new ways of delivering virtual desktops. Although it is still not the year of VDI, there are plenty of customers who need remote access to a desktop. Tonight, I heard from Tocario, who are based in Stuttgart and have a turn-key solution for service providers and medium businesses to deliver desktops from a data center. The desktops can run any x86 operating system, as all display and device remoting is provided in the virtual hardware (KVM hypervisor) rather than using in-guest drivers and agents. The client side can be a mobile device with a native client or an HTML5 client. The HTML5 client even has screen sharing, potentially for dozens of students watching one VM’s screen in their web browser.
Deployment starts with a scale-out management component that is usually on existing virtualization but can be bare metal. Then the physical hosts are deployed from those management hosts, KVM on bare metal. The hosts consume shared storage, either NFS or iSCSI, to provide resilience. The management cluster offers load balancing and the usual brokering. There is full multi-tenancy for service providers and larger businesses. Service providers also get nested multi-tenancy. A reseller can have their clients access through the reseller’s tenant portal as if the platform belonged to the reseller but without the management requirement. Service providers also get some built-in sales enablement, such as a self-service trial function. All licensing is per-desktop VM; service providers pay for usage per-month, while on-premises deployments are bought in packs of 10 or 100 desktops, again priced per desktop per month.
Having decided that HCI is all about simplified deployment and management, I have started thinking about how simple it is to manage modern on-premises infrastructure. I feel that HCI is often compared with the technology it is replacing, rather than the alternatives that are available now. One aspect of that is when an HCI vendor says that a client replaced five racks of five-year-old equipment with half a rack of HCI. A new SAN and servers would not require five racks, so some of the reduced footprint is about hardware generations, rather than HCI. Another aspect is simplified management. HCI uses management that is centered around VMs and ideally policy based. Older architectures tend to have isolated control of each hardware type: compute, network, and storage. Comparing HCI’s simple management to a ten-year-old management practice is also not valid; HCI needs to be compared to the manageability of the modern products with which it competes.
I usually think of SolarWinds as a suite of on-premises software that helps monitor and manage many aspects of IT and application infrastructure. While this is true, it isn’t the complete story. As I have heard at a couple of Tech Field Day events (TFD13 & TFD16, plus TFD disclaimer), SolarWinds also has a suite of online services that can help monitor and manage, even without on-premises SolarWinds management. This shift towards SaaS solutions for management is essential in a couple of ways. It seems evident that cloud-native applications would be managed by cloud-native management platforms; it makes sense to use current application architectures. Initially, it was less clear that managing on-premises infrastructure from cloud-based services was a smart idea. For some uses it is still not acceptable; some on-premises infrastructure does have to be managed wholly on-premises. But there is a lot to like about a management application as a service: it is always up-to-date and was implemented following the vendor’s best practices. It always has enough capacity for your business growth without a lot of planning by your IT team. So, what are these online management products that SolarWinds has?
The spring conference season is upon us, and particularly upon me. I am starting the season with a trip to Dell Technologies World (DTW) in Las Vegas at the end of April. I will be attending the conference as the guest of Dell, who will provide my flights, hotel accommodation, probably some nice hospitality, and access to some interesting people. I am interested to see the continued merging of the EMC and Dell data center product lines. Now that the initial dust has settled I expect to see some announcements around the development of some products and silence about others. This will tell us the makeup of the combined storage product line. After the Dell presentation at TFD16 in February, my interest in managing fleets of x86 servers is renewed, so I will be looking for more stories about how easy it is to manage dozens of servers. For me, this is a great comparison to the ease of management that we expect from HyperConverged products. While I’m talking about HCI, I really do need to learn a bit more about Dell’s VxRail product as I hear that it is hugely successful.
I will be in Las Vegas April 29th to May 2nd, although DTW continues until the 3rd. It will be a big geek week in Las Vegas; InteropITX will be in town the same week. I have a pass for Interop, so may see people there as well as at DTW.
What information do your organization’s security tools take as input to decide whether an action is safe? Most security software only takes IT technical information as inputs; firewalls use IP information, and malware detection often uses file fingerprinting. I think we know that there are significant shortcomings with just looking at IT data when we assess risk. These inputs fail to address the most significant threat in any security landscape: the people. At Tech Field Day 16 we heard from Forcepoint (video here) about how their User and Entity Behaviour Analytics (UEBA) product takes a lot of external data to make decisions about the risk associated with specific actions by a staff member, or other entity. My usual TFD disclaimer applies.