Pets Vs. Poultry in Your Application

It has been a while since the phrase “Pets versus Cattle” was at the top of the conversational pile, but I think it is still a useful tool for approaching application architecture. Originally, the phrase referred to on-premises enterprise IT servers as pets: we gave our servers individual names and would spend a lot of time troubleshooting issues to return a sick server to a healthy state. By contrast, cloud-native application instances were the cattle: each instance gets a numeric reference for a name, and if one stops working, it is destroyed and replaced with a new working instance.

Cohesity Data Migration in Action

One of the new features in the Cohesity Data Platform version 6.4 is called Data Migration and is part of the Smart Files function. Data Migration automates moving files from a file share to the Cohesity platform, leaving a symbolic link in place of each migrated file. The objective is to free the file server or NAS from holding old or infrequently accessed files, which reduces the need to expand capacity on file servers or NAS devices that are poor at data efficiency or use high-cost storage. I talked about this in a past blog post or two, and you can also read what Dan Frith wrote; now I have recorded a video of the actual migration.

The Data Migration job is simple to set up, requiring a source share, criteria for migration, and a name for the new Cohesity View to hold the migrated files. The View and its share are created automatically, so don’t use the name of an existing View.
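
To make the idea concrete, here is a tiny Python sketch of the migrate-and-stub pattern that Data Migration automates. The paths and the age criterion are invented example values, and the real feature does this on the Cohesity cluster against an SMB or NFS share rather than with a client-side script.

```python
import os
import shutil
import time

# Toy illustration only: Cohesity's Data Migration does this server-side against
# a share; this sketch just shows the migrate-and-stub pattern. The paths and the
# 180-day threshold are made-up example values.
SOURCE_SHARE = "/mnt/fileserver/projects"      # hypothetical source share mount
MIGRATED_VIEW = "/mnt/cohesity/archive-view"   # hypothetical target (the new View's share)
AGE_THRESHOLD = 180 * 24 * 3600                # "not accessed in 180 days" criterion

now = time.time()
for root, _dirs, files in os.walk(SOURCE_SHARE):
    for name in files:
        src = os.path.join(root, name)
        if os.path.islink(src):
            continue  # already migrated, a symbolic link is in place
        if now - os.path.getatime(src) < AGE_THRESHOLD:
            continue  # still actively accessed, leave it alone
        dst = os.path.join(MIGRATED_VIEW, os.path.relpath(src, SOURCE_SHARE))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)   # move the file onto the archive share
        os.symlink(dst, src)    # leave a symbolic link where the file used to be
```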

Cohesity Relieves NAS Space Stress

File servers (or NAS) with disks that fill up are a constant problem in any organization. Twenty years ago, I spent more than a few weekends swapping in larger hard drives on physical file servers. Now that those file servers are virtual, the virtual disks can grow until a datastore is full and a SAN LUN needs to be made larger; it is still a lot of work and a lot of money to store a lot of data, often of questionable value. If you are suffering from overloaded file servers, then you might want to look at a couple of ways that Cohesity can help.

Disclosure: This post is part of my work with Cohesity.

The first way is that Cohesity offers a scale-out multi-protocol NAS platform. I made a video walking through creating a NAS share on my Cohesity cluster, and you can also take a look at Theresa Miller’s video here. Cohesity is more than just a scale-out NAS; this video outlines some of the value-add offered by Smart Files and by applications that run directly on the Cohesity cluster.

The second way that Cohesity can help is by migrating older (less frequently accessed) files off your file servers or NAS appliances and onto the Cohesity cluster. This is what I was talking about in my post on reviving the HSM dream. You can get more details from this video that Mike Letschin recorded.

Vendor Briefing: BlueCat Networks

We tend not to think a lot about some of the most fundamental IT infrastructure services, yet they come up a lot when we are troubleshooting problems. One rule of thumb is that if it looks like an application networking problem, then it is a DNS issue. Even when it is definitely not a DNS issue, it is a DNS issue. I am being a bit flippant here, but the reality of troubleshooting non-obvious problems is that name resolution in one form or another is a common problem area. When it is change time, the issue of IP address assignment and management also comes up. Many years ago, I was working at a global pharmaceutical company where IP address management for servers was handled by DHCP reservations. Apparently, there had been a slip-up where a new server was assigned the IP address of another server and a highly critical application had gone down.

These two pieces of context came up for me when I was chatting with BlueCat Networks about their product. Their core is IP addresses and DNS names: providing integrated IPAM, DHCP, and DNS. You may feel that these areas are covered by your Windows or Linux servers, or by your network platform, and you may be right for your organization. At a massive scale, though, you need a more coordinated and integrated system that keeps IP addresses attached to hostnames across the whole enterprise. The target market for BlueCat Networks is very large networks with critical applications and high change rates, where security, scalability, and automation are critical. One aspect of the product that I found interesting is the ability to run analytics against DNS requests, noticing when a server suddenly changes its behavior in a way that indicates it has been compromised. Your print server probably shouldn’t be trying to locate your payroll system. I’m always interested in management tools that gain more insights from the data they have about system behavior.
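
To illustrate the kind of analysis I mean, here is a toy Python sketch, not how BlueCat implements it: build a per-host baseline of queried names from historical DNS logs, then flag lookups that fall outside that baseline. The log format and hostnames are invented.

```python
from collections import defaultdict

def build_baseline(history):
    """history: iterable of (client_host, queried_name) tuples from past DNS logs."""
    baseline = defaultdict(set)
    for client, name in history:
        baseline[client].add(name)
    return baseline

def flag_anomalies(baseline, new_queries):
    """Yield queries for names a client has never looked up before."""
    for client, name in new_queries:
        if name not in baseline.get(client, set()):
            yield client, name

# Invented example data: the print server suddenly looks up the payroll system.
history = [("print-server-01", "print-queue.corp.example"),
           ("print-server-01", "ntp.corp.example")]
today = [("print-server-01", "payroll.corp.example")]
for client, name in flag_anomalies(build_baseline(history), today):
    print(f"unusual DNS query: {client} -> {name}")
```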

If you have a large network and an IP address problem, BlueCat might be your savior.

Reporting using the Cohesity REST API

I mentioned the Cohesity REST API when I looked at the developer portal; now I’d like to show you a little about how to access that API and gather information from your Cohesity clusters. For my example, I am going to do some very basic work directly with the API using Python. There are language bindings for Python and PowerShell that make accessing the API simpler, but direct access to the API is also worth illustrating. I chose a couple of basic tasks: reporting on capacity and on VMs that are not protected. Below I show how I accessed data via the API; I also posted a video of the same process if you prefer to watch.
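
Here is a minimal sketch of the approach. The cluster address and credentials are placeholders, and the endpoint paths and field names are written from memory of the public v1 API, so check them against the API documentation on the developer portal before relying on them.

```python
import requests

# Sketch of direct REST access with the requests library; paths and field names
# are from memory of the public v1 API and should be verified against the docs.
CLUSTER = "https://cohesity-01.lab.example"  # hypothetical cluster address
CREDS = {"username": "admin", "password": "secret", "domain": "LOCAL"}

# Authenticate and collect a bearer token (verify=False only because my lab
# cluster uses a self-signed certificate).
resp = requests.post(f"{CLUSTER}/irisservices/api/v1/public/accessTokens",
                     json=CREDS, verify=False)
resp.raise_for_status()
headers = {"Authorization": f"Bearer {resp.json()['accessToken']}"}

# Task 1: basic capacity reporting from the cluster object.
cluster = requests.get(f"{CLUSTER}/irisservices/api/v1/public/cluster",
                       params={"fetchStats": "true"},
                       headers=headers, verify=False).json()
stats = cluster.get("stats", {}).get("usagePerfStats", {})
print(cluster.get("name"),
      stats.get("totalPhysicalUsageBytes"),
      stats.get("physicalCapacityBytes"))

# Task 2: VMs that are not covered by any protection job.
vms = requests.get(f"{CLUSTER}/irisservices/api/v1/public/protectionSources/virtualMachines",
                   params={"protected": "false"},
                   headers=headers, verify=False).json()
for vm in vms:
    print("unprotected:", vm.get("name"))
```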

Pure Cloud Block Store Available on AWS

One of the fun elements of being briefed about a product that is not yet released, and probably has not had its form finalized, is that only part of the product is revealed. This week at Pure Accelerate, the Pure Cloud Block Store (CBS) was launched in its production form. CBS is an implementation of the Flash Array that runs on AWS rather than on-premises. In my earlier post about CBS, I talked about the storage architecture: S3 object storage for durable capacity, EC2 Instance Store for a read cache, and EBS IO1 for the write buffer. This storage architecture remains in place in CBS, but it is not attached to the controllers as I thought. The EC2 instances that hold the IO1 volumes and Instance Store are called Virtual Disks, and the basic CBS has seven of these as a “disk shelf.” The controllers in CBS have only boot volumes; all the data and metadata storage is in the Virtual Disks, which is the same architecture as a physical Flash Array. One other element that I did not foresee is a DynamoDB table that stores the system configuration, rather than keeping this configuration on the disks.

Developer Enablement, Cohesity Developer Portal

Following on from my quick look at using PowerShell with Cohesity, I want to highlight the developer resources page at developer.cohesity.com. The developer portal has a few useful resources for automation around Cohesity: detailed documentation of the REST API that is the core of all developer access to Cohesity, and a repository of sample scripts that you can re-use and re-purpose to your needs. There are samples in Python, PowerShell, and Ansible, as well as details of how to build an application to run on a Cohesity cluster.

Cohesity Developer Portal
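
If hand-crafting REST calls is not your thing, the Python bindings make this much shorter. The sketch below uses the package, class, and attribute names as I recall them from the samples repository (pip install cohesity-management-sdk), with placeholder cluster details, so verify the specifics on the portal.

```python
# Minimal sketch using the Python bindings instead of raw REST calls. Package,
# class, and attribute names are as I recall them from the samples repository;
# the cluster address and credentials are placeholders.
from cohesity_management_sdk.cohesity_client import CohesityClient

client = CohesityClient(cluster_vip="cohesity-01.lab.example",
                        username="admin",
                        password="secret",
                        domain="LOCAL")

# List the protection jobs configured on the cluster.
for job in client.protection_jobs.get_protection_jobs():
    print(job.name, job.environment)
```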

All-In Public Cloud for Backup

Born-in-the-cloud companies approach problem-solving differently from on-premises software companies, so Druva looks at the world differently from other enterprise backup vendors. One difference is the expectation that infrastructure is rented from cloud providers rather than purchased from hardware vendors and deployed on-site. Druva offers Backup-as-a-Service and prefers to deploy as little infrastructure as possible inside its customers’ environments. Initially, Druva provided a backup solution for distributed endpoints (laptops and PCs) that live outside the corporate offices. Highly mobile staff who generate business data are a prime target for endpoint backup, and backup to the public cloud works extremely well for this use case. More recently, Druva has added support for enterprise virtualization as a data source for backup to the cloud.

Keeping the HSM Dream Alive

Way back in the 1990s, I was involved in managing large numbers of Windows file servers as a central repository of business data. These file servers grew and grew over time, with more and more files stored. Many organizations now have years and years of files stored on file servers and high-performance NAS appliances. Over time, knowledge of the value of these files is diluted, but the fear that something important may get lost never fades. IT teams are left as the holders of this business data and must treat every file as if some manager or regulator may demand access at any moment. Back in the ’90s, there was also a dream of Hierarchical Storage Management (HSM), which allowed data to move to lower-cost storage when it was not frequently accessed, freeing space on the expensive, fast storage for more frequently accessed data. At the time, there was no built-in support for data mobility in operating systems, so each HSM product had its own custom file-system driver to redirect access to migrated files.

Vendor Briefing – Zadara

Enterprise storage delivered on-premises or in the cloud, as a service where you pay only for what you use. That is my one-sentence summary of Zadara. We know that storage management is hard, and multi-cloud storage management is very hard. Zadara’s business is to deliver multi-cloud enterprise storage as a service. I was surprised to hear that they have product deployed on five continents, which means hardware shipped to and maintained on five continents. All that hardware is on Zadara’s books; their customers pay a monthly fee based on their consumption, and Zadara carries the cost and risk of the hardware.

Zadara is a scale-out storage platform using standard x86 servers, with options for hard drives, SSDs, and NVMe flash including Optane. The hardware is shipped to customers; however, customers are billed only for the resources that they use: performance and capacity. Within a scale-out cluster, virtual arrays can be defined that have a dedicated subset of the overall cluster’s physical resources. These virtual arrays have dedicated hardware, so they perform very much like a physical array and allow multi-tenant consumption of a larger cluster. Zadara maintains a service control plane that manages every deployed device down to the drive level. This global control plane does not have access to customers’ data on the arrays; the data is encrypted at rest using customer-controlled keys. One unusual new capability is the ability to run Docker containers directly on the storage cluster; I’m sure that will drive some interesting use cases.
