AWS Surprises – No vMotion

In my on-premises VMware experience, vMotion was a game-changing technology, so I was very surprised to find that there is no equivalent for EC2 instances on AWS. The basic premise of vMotion is that it divorces a Virtual Machine (VM) from the underlying physical server. VMware’s vSphere goes even further, using vMotion to provide mobility within a cluster of physical hosts and abstract the individual hosts away into a cluster. On AWS, the VM service is EC2, and it offers no way to move an EC2 instance (VM) to another physical host without powering the instance off.

The crucial architectural difference here is that vSphere wants us to stop thinking about individual physical servers, and AWS wants us to stop thinking about individual VMs. On-premises, it is common to have a single VM that offers a service, e.g., this is the CRM server. The CRM server VM is critical and must remain operational at all times, so we want to migrate a running VM to a new physical server. On AWS, we build services rather than individual servers; the service should remain operational even if one of its servers has a performance problem or an outage. Rather than one single server for CRM, we might have five EC2 instances and a load balancer to deliver the CRM application. If one instance is overloaded or fails, the load balancer uses the instances that are still operational. When we use EC2 Auto Scaling, the instances are created automatically and can even be destroyed and replaced automatically if they fail.

A single EC2 instance is a disposable resource, so there is no need to migrate one between physical servers. This disposability of the compute resource is a common characteristic of cloud-native applications. If you are looking for vMotion on AWS, then you are probably building or bringing a legacy architecture into the public cloud. Aim to move away from on-premises-style architecture as soon as possible in your public cloud journey.
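As a minimal sketch of this service-not-server pattern, the boto3 call below creates an Auto Scaling group of five instances behind a load balancer target group. This is illustrative only; the launch template name, target group ARN, and subnet IDs are placeholders I have assumed.

# Minimal sketch of the “service, not server” pattern with boto3.
# Assumes a launch template, ALB target group, and subnets already
# exist; all names and ARNs below are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="crm-service",
    LaunchTemplate={"LaunchTemplateName": "crm-server", "Version": "$Latest"},
    MinSize=5,
    MaxSize=10,
    DesiredCapacity=5,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # spread across AZs
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/crm/abc123"],
    # Use the load balancer's health check: an instance that fails it is
    # terminated and replaced automatically -- no vMotion required.
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)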


AWS Surprises – Change is Constant

Being comfortable with change is an integral part of a career in IT; I like the saying that “if you don’t like change, you will have to accept being irrelevant.” The thing is that rates of change vary widely. VMware releases a new major version of each product every three to four years. Between major version releases, there are usually minor version releases every year or two, and between those are point or update releases every few months. As a VMware trainer, I might teach a particular version of a course for a year or more. Similarly, customers would run a particular release, with a fixed feature set, for years.

On AWS (surprise!), there are no exposed versions. Products update and gain new features every week. In fact, AWS publishes a weekly newsletter of new features and capabilities, and there are usually at least 20 items each week. For me, this means that courses are also changing constantly: about every two months there is a new release of Architecting on AWS with new slides, and every second week there is an update to the labs because of AWS console changes for one service or another. Because of these changes, the courses (and certifications) tend to focus on basic principles rather than details such as speeds and specifications.

With all of these new services, new features, and new capabilities, there is change all of the time. In fact, I like to paraphrase Werner Vogels with “Everything changes, all the time.” Another perspective might be the growth in the number of AWS services. When I first attended AWS training in about 2014, there were 42 services; as of early 2020, there are over 200. There is no way for a single person to stay completely up to date on every AWS service, or even know every aspect of any significant service. As a result, on AWS more than on any other platform, knowing where to find the answer is more important than knowing the right answer, since the answer often changes. There is a psychological shift in not expecting to know the details off the top of your head but expecting to look up the answer. AWS expertise is not about knowing every product fact and feature.

There is another angle, too: architectural design patterns change, so you should be slow to judge old architectures. As an example, before the Transit Gateway service, it was very painful to join a lot of VPCs together into a routed network using only VPC peering. As soon as Transit Gateway was released, it became the standard for new VPC connectivity. However, the older networks using peering and routers in EC2 instances did not disappear because they still work.
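To make the peering pain concrete: a full mesh of N VPCs needs N×(N−1)/2 peering connections, while a Transit Gateway needs just one attachment per VPC. Here is a minimal boto3 sketch of the newer pattern, with placeholder IDs I have assumed:

# Minimal sketch: attach several VPCs to one Transit Gateway instead of
# building a full mesh of VPC peering connections. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="hub for shared routing")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# One attachment per VPC, versus N*(N-1)/2 peering connections for a
# full mesh (10 VPCs: 10 attachments instead of 45 peerings). In practice
# you would wait for the gateway to reach the 'available' state first.
for vpc_id, subnet_id in [
    ("vpc-aaaa1111", "subnet-aaaa1111"),
    ("vpc-bbbb2222", "subnet-bbbb2222"),
    ("vpc-cccc3333", "subnet-cccc3333"),
]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=[subnet_id],
    )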

When everything changes all of the time, you need to check your assumptions and validate whether you should keep on doing what you have always done.


Vendor Briefing – Retrospect

I associate Retrospect with endpoint backup and still have the Dantz brand attached to it in my head, so I am about a decade out of date on both the product and the company. After changing ownership a few times (including EMC and Roxio ownership), Retrospect is now owned by StorCentric, along with Drobo, Vexata, and Nexsan. Retrospect can now protect servers as well as endpoint devices such as laptops and desktops, use the public cloud as a backup destination, and be managed from a SaaS console. This month Retrospect announced new versions of both their Retrospect Backup and Retrospect Virtual products. JG Heithcock briefed me about both the company and the updates. StorCentric has assembled a portfolio of storage from high-end NVMe all-flash to SMB-focussed, with Retrospect in the SMB data protection category.

One of the key new features in both Retrospect Backup 17 and Retrospect Virtual 2020 is simple onboarding: essentially a single, Internet-accessible URL for deploying a pre-configured agent and license. Simple onboarding is essential for endpoint protection, where a laptop may never connect to the corporate LAN and so cannot easily get updates from the on-premises corporate servers. For on-premises resources such as servers and desktops, the simple onboarding can integrate with your chosen software deployment tool.

I like the SaaS console for managing across multiple Retrospect servers, although complete management is still available at each individual server. The web console provides a holistic view of your data protection status for the entire organization. I also like that restores can happen from local storage on the Retrospect server or from lower-cost storage on a public cloud. Licensing is flexible: either a monthly subscription covering all version updates or a perpetual license for a specific version.

You can also read what Dan Frith wrote about the Retrospect announcement.


A Phase Complete, Learning about Cohesity

Today marks the end of my journey of documenting my learning about Cohesity, so I thought it might be useful to recap some of the things I learned. Probably the most significant thing is that simplicity is the ultimate sophistication. Making a product that is easy to use for complex requirements requires focus; it is easy to get caught up in the minute details and end up missing the ease of use. With Cohesity, I found that features are easy to use, and the amount of time I spent with the Cohesity console was less than I expected. I liked that I could reuse the Protection Policies across different data sources. Even restores are simple due to the universal search feature, which is especially helpful when users only know the name of a file, not the directory where they saved it before deleting their important version.

I also found a lot of breadth in Cohesity; for a single-product company, the product does a lot of different things: data protection, as well as data storage with protection; protection for VMs, for SaaS (Office 365), and for physical servers. I barely scratched the surface of using the public cloud with Cohesity since I only used AWS as storage expansion for my Cohesity cluster. I haven’t tried their migration from on-premises, DevOps integration, or DR to the cloud. You can find all of the videos and blog posts about my Cohesity learning experience on my Cohesity page.


AWS Surprises – No VM Console Access

In a vSphere environment, connecting to the “physical” console of a VM is natural. Whether you need to change the BIOS settings, boot from the network, or watch the operating system boot, the VM console is a significant part of working with vSphere VMs. I was rather shocked (more AWS surprises here) that the same is not possible with AWS and EC2. There are a bunch of routine operational tasks that you cannot do with EC2 instances: no PXE booting, no attaching ISOs, and no installing your own operating system. Fundamentally, EC2 instances are always deployed from Amazon Machine Images (AMIs), which contain an installed operating system, so there is no requirement to attach ISOs before boot or to boot from the network. Usually, all of our remote management happens after the operating system is up, using OS-native tools such as SSH and RDP when we need access to an instance.
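A minimal boto3 sketch of that workflow: every instance starts from an AMI, so there is no ISO to mount and no console-driven install. The AMI ID and key pair name below are placeholders.

# Minimal sketch: EC2 instances always launch from an AMI with the OS
# already installed; there is no BIOS screen or installer to watch.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",   # AMI = pre-installed operating system
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-ssh-key",              # management happens later, over SSH/RDP
)
print(response["Instances"][0]["InstanceId"])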


AWS Surprises – Reduce your bill by giving AWS more responsibility

This article is part of my series on how AWS surprised me as I transitioned from an on-premises vSphere specialist to teaching AWS courses. In a previous life, I worked for an IT outsourcing company here in New Zealand; much like any other outsourcing company around the world, we would take on your IT systems for a fee. In my experience, the more problems you get the outsourcer to manage for you, the larger the bill. It is a bit of a surprise to me that a given outcome often costs customers less to achieve with an AWS managed service than with a more basic service. The more problems you hand over to AWS, usually the smaller your bill.


Protecting Physical Servers with Cohesity

Cohesity started with data protection and management for virtual machines and can also help customers who have valuable data and applications on physical servers. Like every other physical backup product, there is an agent to install and then register with the Cohesity cluster. We will take a quick walk through this simple process. Then we will see that once the Cohesity cluster knows about your physical machines, data protection and recovery are just as simple as for virtual machines. You can watch me walk through the process in this video.


Vendor Briefing – Formulus Black

New servers in your data center can have a lot of RAM DIMMs in them, or some RAM and some Persistent Memory (PMEM) such as Optane DIMMs. Either way, there is a lot of fast capacity that might help solve application performance issues that are caused by slow storage. Formulus Black would like to help you by turning some DIMMs into a block device for your application. Their FORSA software takes either RAM or PMEM and makes it into a local block device with latency and throughput that are better than NVMe SSDs. Your application can run directly on the Linux host where FORSA is installed, or inside VMs under the KVM hypervisor on that host; the VM option allows applications that require Windows to be accelerated with DIMM-based storage. There are other drivers that turn Optane DIMMs into a disk device, and other RAM disk drivers around, but FORSA has some unique features around management and data services. One feature is the ability to rapidly clone a volume (usually RAM-based) to an SSD to provide data protection. Another is replicating a volume from one FORSA node to another for high availability. There is also a GUI for friendlier management and deduplication to increase the effective capacity of the high-speed volume.
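FORSA itself is proprietary, but you can get a rough feel for why DIMM-backed storage wins with a generic stand-in: compare small-read latency on a RAM-backed tmpfs (/dev/shm on most Linux distributions) against the same workload on a disk path. This sketch is only illustrative, not FORSA, and note that Linux’s page cache will flatter the disk numbers for repeated reads; a rigorous benchmark would use O_DIRECT.

# Illustrative only: time repeated 4 KiB reads on a RAM-backed path
# versus a disk-backed path. Page caching makes this a rough comparison.
import os
import time

def read_latency_us(path, size=4096, iterations=1000):
    """Write a small file, then time repeated reads of it (microseconds)."""
    with open(path, "wb") as f:
        f.write(os.urandom(size))
        f.flush()
        os.fsync(f.fileno())
    start = time.perf_counter()
    for _ in range(iterations):
        with open(path, "rb") as f:
            f.read(size)
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed / iterations * 1e6

if __name__ == "__main__":
    print("RAM-backed (tmpfs):", read_latency_us("/dev/shm/probe.bin"), "us")
    print("Disk-backed:", read_latency_us("/var/tmp/probe.bin"), "us")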

I expect we will see more use of DIMM-based storage when there are cost-effective options for PMEM. Increasing application (particularly database) performance is important, but finding the expertise to fix and optimize applications is expensive. Moving from the fastest NVMe SSD you can get to DIMM-attached storage can offer an order of magnitude better application performance for a relatively small increase in hardware cost. Formulus Black says that they are targeting medium-sized companies and Wall Street. The presentation I saw had some significant improvements in IOPS and 99th-percentile latency, which translates into fewer storage bottlenecks. If you are struggling to get enough storage performance with local NVMe SSDs, give them a look.


AWS Surprises – It Is Not All about VMs

This is the first in a series of posts about things that surprised me when I started to work with AWS services. This particular surprise was one of the first: back in 2013, I was still teaching VMware courses and was invited to attend some AWS training. On the first day, the trainer explained that you could deploy as much software-defined network and as many VMs as you wanted. That was all a given at a time when VMware was struggling to integrate Nicira (now NSX) with vSphere. Then my mind was blown when the instructor said that EC2 (the VM service) is not that interesting; the real fun is in the AWS services that you use to assemble applications. The next two days were spent learning about these services, with labs where I actually built an application. It was just too easy, even when I started going outside the lines. The lab had the usual prescriptive guidance about how to configure everything to work as intended. I built an Auto Scaling group of EC2 instances that would respond to the number of requests in a queue, take input data from object storage, and place results in more object storage. The lab instructions only told us to scale out the cluster; I worked out how to configure scale-in too and managed to test it within the lab time.
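As a hypothetical sketch of that queue-driven pattern with today’s tooling (all names below are placeholders): a single target-tracking policy now creates both the scale-out and scale-in CloudWatch alarms that I had to configure separately back then.

# Minimal sketch: scale a worker fleet on SQS queue depth. A single
# target-tracking policy covers both scale-out and scale-in.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="worker-fleet",
    PolicyName="queue-depth-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "work-queue"}],
            "Statistic": "Average",
        },
        # Add or remove workers to keep roughly ten messages waiting
        # in the queue.
        "TargetValue": 10.0,
    },
)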

Now that I teach the AWS courses, I do see more acceptance of applications that run in VMs, the way they do on-premises. But I now talk about EC2 instances as the last resort for when you have no better way to achieve your objectives. Usually, a managed service is a better option because it requires less work from you and often because it costs less in AWS bills too. I noticed that the Developing on AWS course is very focussed on serverless application development, meaning no EC2. New applications shouldn’t require working with legacy constructs like VMs, but you can have your VMs for your older applications.
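To show what “no EC2” looks like in practice, here is a minimal, hypothetical AWS Lambda handler in Python; the unit of deployment is a function, not a VM, so there is nothing to provision, patch, or scale yourself.

# Minimal sketch of a serverless compute unit: an AWS Lambda handler.
import json

def handler(event, context):
    # 'event' carries the request payload, e.g. from API Gateway.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }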


AWS Is a Land of Surprises

As I started working with and writing about AWS, there were a few things that surprised me about how different AWS is from on-premises vSphere. As I have been teaching official AWS courses, I have continued to notice things that surprise me. I’m planning to write a separate blog post with more detail about each of these strange things, and as I think of more, this list will get longer.
