Vendor Briefing – Atlantis Computing

I last talked to Atlantis Computing at Virtualization Field Day 3 in 2014, when they were releasing their USX data platform. I read about the release of their hyperconverged platform, HyperScale, in 2015. HyperScale is a software HCI that is delivered on top of a very restricted list of partner hardware. There were four hardware partners, and each essentially offered two all-flash configurations, one at 12TB and the other at 24TB of capacity per node. I missed this year's announcement of a 4TB ROBO-scale version and the addition of Dell as a partner.

One of the nice things about the Atlantis HCI is that you can integrate it with Atlantis USX on other hardware. This is a good way either to extend your existing hardware into your new HCI deployment or to migrate your VMs off your existing hardware and onto Atlantis HyperScale. This integration and migration story is not something most HCI vendors want to talk about, and it is a real benefit of a software HCI product. Migrating onto an HCI platform can be very time-consuming and may have its own restrictions if you don't already have 10GbE in use.

The new material in the briefing is not yet announced; expect to hear interesting things from Atlantis over the next month, maybe even at one of the conferences running next week. When that part becomes public I may have some more to say about Atlantis.


A Different Shed

People are very interesting. We each perceive the world slightly differently, sometimes very differently. Like most people, I am intrigued by how I think and how other people think. On my last trip to the US, I was thinking about how people perceive difference. This "social construction of difference" is something I learned a little about at university. One aspect is how my accent is a trigger for my friends to notice that I am different from them.

[Image: shedquarters]

On this trip, the trigger word for my US friends was "schedule," which I pronounce differently to them. I believe I follow my English origins and pronounce it as if there were no C, making it "shedule." My US friends have a K in place of the CH and so pronounce it "skedule." I do wonder where the K came from. I also notice that the extraneous K does not bother me, but the shed bothers my friends. That's not to say that I am more tolerant; there are trigger words for me too. One is solder, a crucial part of assembling electronic devices. My US friends seem not to notice that there is an L in the word, which I find disconcerting. Don't get me started on router or aluminium, both troubling words.

These small differences in language are part of how we identify people who are like us and people who are different. Pointing out the differences reinforces both the sense of belonging and the sense of difference. I want to talk a bit more about belonging and difference as well as how I perceive people in some coming blog posts. They will make a nice break from all the vendor briefing blog posts I’ve been doing.


Vendor Briefing – FalconStor

I've heard about FalconStor as a storage virtualization platform for a while. I also think that we will see more products that virtualize clouds for mobility in the near(ish) future. The vision that FalconStor lays out is one where data can move freely between clouds: on-premises clouds, managed private clouds, and various public clouds. I definitely see that as a destination that customers will want to get to, but most are nowhere near ready. This is good for FalconStor, as they have not yet delivered on their vision; more product development is underway.

What can FalconStor deliver today? Virtualization of your existing iSCSI and Fibre Channel block storage. Replication between dissimilar storage and between sites. Deduplication to reduce WAN costs, and public cloud egress costs too. Physical appliances, virtual appliances, and virtual appliances on public cloud. Multi-tenancy for service providers, including authentication integrated with Active Directory or other LDAP directories. Analytics from block storage through to application performance. A unified user interface and API across multiple locations for the virtualized storage.

My thoughts on gaps: first, it is block-storage-centered, with no object or file storage built into the storage virtualization. Next, to use FalconStor for mobility you will be replicating whole operating systems, along with the application sets within those OSs. I think future multi-cloud mobility needs to be application-centered, moving only the application and its data without all of the re-creatable dependencies. The issue with this vision is that it requires applications to be redeveloped, a very slow and expensive process. For the near future, there will be a need to replicate or transfer whole VMs.

FalconStor has a good vision of multi-cloud data portability and they are executing on making the vision into a product. What I don’t know is whether enterprises see enough value in the current product to provide the income that will be needed to fund developing the vision.


Vendor Briefing – Scality

It seems to be the season of object storage; I keep getting briefings on object storage products. At Tech Field Day Extra (TFDx) at VMworld USA, we had a briefing from Scality. Please refer to my standard TFD disclosure post. Scality didn't spend a lot of time digging into their scale-out storage platform at TFDx. Rather, they focused on some new capabilities and a new packaging option for developers wanting to test against a Scality object store. This last is a great move, as object storage is usually consumed by applications rather than end users or infrastructure teams. There is a Docker image for a complete Scality object store with an S3-compatible interface. With this Docker image, a developer can use Docker Compose to create a multi-container application with their code and an object store, as well as any other supporting containers. Then software testing, including continuous integration, can be completed without needing a production-ready Scality deployment in the test environment. I've written in other places about how important it is to enable developers, and that is what Scality has done.

Scality also told us about their version 6.0 product. They have enhanced a lot of the AWS compatibility features; support for the AWS IAM security model and the AWS command-line tools will also ease developer adoption. While we didn't get deep into the product architecture, we did get some of the highlights: a multi-petabyte-scale architecture using a scale-out collection of x86 servers, a strict consistency model for data, and design guides for solution availability and performance. A RESTful API for monitoring is also nice, allowing visual reporting using Grafana. I expect to hear more about the internals of the Scality product this week at Storage Field Day in Silicon Valley.
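To make the developer-testing idea concrete, here is a minimal sketch of what code under test might look like when pointed at a locally running S3-compatible container such as the Scality object store. The endpoint URL, credentials, and bucket name are assumptions for illustration, not documented Scality defaults.

```python
# Minimal sketch: exercise a local S3-compatible endpoint (for example a
# Scality object store container) from a CI test. Endpoint, credentials,
# and bucket name below are illustrative assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8000",    # assumed local container port
    aws_access_key_id="accessKey1",           # assumed test credentials
    aws_secret_access_key="verySecretKey1",
)

s3.create_bucket(Bucket="ci-test-bucket")
s3.put_object(Bucket="ci-test-bucket", Key="hello.txt", Body=b"hello object store")
body = s3.get_object(Bucket="ci-test-bucket", Key="hello.txt")["Body"].read()
assert body == b"hello object store"
print("round trip OK")
```

Because boto3 only needs an endpoint URL to be redirected, the same test code can later run unchanged against a production Scality deployment or AWS S3 itself.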


Vendor Briefing – Pivot3

You may have been surprised to see the name Pivot3 turn up as a leader in HCI in recent analyst reports. We are more used to seeing Nutanix and SimpliVity in the lead, then a collection of others spread nearer or further away depending on the evaluation criteria. Pivot3 has made up a huge amount of ground over the last year as an HCI vendor. The company has been around for a long time, originally with a scale-out storage product that targeted video surveillance. This year they acquired NexGen Storage to add high-performance flash and storage QoS to their products. At the same time, Pivot3 hugely expanded their marketing team and partner program to make the products much more visible.

Disclosure – I learned about NexGen Storage last year at Virtualization Field Day; see my standard TFD disclosure.

Lately I've been seeing Erasure Coding (EC) a lot more in products. Pivot3 use EC to distribute stored data over the nodes in their HCI. EC allows data to be stored with high durability; Pivot3 can survive up to five concurrent failures. EC is also space efficient: in that high-durability configuration, usable space is nearly 75% of physical disk space. The challenge with EC is that it tends to be CPU intensive or to add latency to IO. Pivot3 always store all persistent data using EC, so they have had ten years to optimize their EC.

I hadn't thought about some of the other consequences of starting in video surveillance. One is that the sheer velocity of data means that there is no backup; the primary copy must be very durable. Another is that the data is very critical. You cannot afford to lose the one video frame that allows you to identify who committed a crime. Customers who have used Pivot3 to store critical video data are more likely to trust them to store VMs.

I was impressed with how the NexGen acquisition has been integrated. Within a few months of the deal closing, there was an integrated product. The initial integration is in management, a unified console to manage HCI and NexGen storage. This follows my expectation that HCI is about simplification, more than necessarily about a pure scale-out architecture on commodity hardware. Pivot3 delivers on the simplicity and policy-based management of the NexGen storage alongside their HCI. The harder job of integrating the NexGen features into the HCI storage is underway. Expect to see the separate NexGen appliance go away as the QoS and flash features are added to the HCI. This is a huge engineering effort; it won't happen overnight, but it will happen.
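The erasure-coding numbers above follow directly from stripe geometry. As a rough, purely illustrative calculation (not Pivot3's actual layout): a stripe of k data fragments plus m parity fragments survives m concurrent failures and keeps k/(k+m) of raw capacity usable, so a hypothetical 15 data plus 5 parity layout would tolerate five failures while leaving 75% of physical space usable.

```python
# Rough erasure-coding arithmetic, purely illustrative -- not Pivot3's
# actual stripe geometry. A stripe of k data fragments plus m parity
# fragments survives m concurrent failures and keeps k/(k+m) of raw
# capacity usable.
def ec_usable_fraction(data_fragments: int, parity_fragments: int) -> float:
    return data_fragments / (data_fragments + parity_fragments)

# Hypothetical 15+5 layout: tolerates 5 failures, ~75% usable.
print(ec_usable_fraction(15, 5))   # 0.75
# Compare with triple mirroring: also survives multiple failures,
# but only ~33% of raw capacity is usable.
print(ec_usable_fraction(1, 2))    # 0.333...
```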

The other part I found interesting is the differentiation in physical hardware. Pivot3 have solutions that work on blades as well as rack servers. They support a software-only deployment model as well as hardware appliances, and even models with enough PCIe slots for Teradici cards for VDI. I think we can expect to keep hearing about Pivot3 HCI in the future.


Vendor Briefing – StacksWare

Today's briefing was with one of the founders of StacksWare. Don't forget the second S, or you get some other company. Their product gives visibility into the applications installed and running in your virtual and physical environment. A single OVA is deployed onto your vSphere environment and connected to both vCenter and AD. The appliance then pushes metadata about the applications into a cloud data warehouse. Each customer has their own dashboard with both historic and real-time views of application usage.

The first use case is simple software inventory: what software is installed on which machines. If you are faced with an audit from Microsoft, Adobe, or another vendor, then this can be a quick resolution (or a quick bill if you aren't compliant). Real-time usage is also useful to identify concurrency for licensing, for products that allow you to license peak concurrency rather than named users or machines.
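Peak-concurrency licensing boils down to counting the maximum number of simultaneously open usage sessions. Here is a small, generic sketch of that calculation; it is not StacksWare code, and the session data format is an assumption for illustration.

```python
# Generic sketch of computing peak concurrent usage from session
# start/end times. Not StacksWare's code or data format; it just
# illustrates what "license peak concurrency" measures.
def peak_concurrency(sessions):
    """sessions: iterable of (start, end) timestamps for one application."""
    events = []
    for start, end in sessions:
        events.append((start, 1))    # session opens
        events.append((end, -1))     # session closes
    current = peak = 0
    for _, delta in sorted(events):
        current += delta
        peak = max(peak, current)
    return peak

# Three sessions, at most two running at the same time.
print(peak_concurrency([(9, 12), (10, 11), (13, 15)]))  # 2
```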

There is also a compliance perspective, both from the angle of making sure only supported versions of software are used and of making sure that only authorized people are using controlled software. In the distant past, I worked at a pharma company. They needed to prove who had access to certain data and needed to identify whether server administrators had accessed applications on the server. StacksWare identifies every executable run by every user, giving immediate visibility. There is also a security angle: notifications can be set up that will alert (for example) if an out-of-date version of Java is launched.

There is a trial of the software available on the StacksWare site. List pricing is US$3 per device per month; I imagine there is discounting for volume and time commitments. By the way, StacksWare started as a research project at Stanford that was supported by VMware. They also won the TechTarget Startup Spotlight award at VMworld 2016.


Vendor Briefing – NooBaa at TFDx VMworld

This week I have been at VMworld USA, always a busy time with lots of technology and making lots of videos. I did spend one afternoon attending the Tech Field Day Extra (TFDx) event at VMworld, where I was briefed by a few vendors. As it is a TFD event, please refer to my standard TFD disclaimer. NooBaa was one of the vendors at TFDx; they also reached out a while before and briefed me a little ahead of the event.

NooBaa has a software-based object storage platform. There are three layers to the software: storage nodes, access nodes, and a master control node. The storage and access nodes use a scale-out architecture, while the control node is kept out of the data path. The access node provides S3 object and file access. It can even be installed onto application servers to remove the first network hop to access objects. The storage node software can be installed onto any existing Windows or Linux host and use any local storage on that machine. Storage nodes are combined into pools, and storage buckets are built on the pools. Bucket data can be written to multiple pools, and pools can have different levels of mirroring. You can even use an external S3 store, like AWS S3, as a pool.

I like the flexibility of building an object store out of whatever hardware I want, just by installing software. For production use, I would want to use bare-metal Linux installs as the platform, light on the CPU count and with plenty of RAM and disks. But there is a heap of interesting uses that aren't really about replicating a big object store. One possibility is using trapped disk capacity in servers or desktops, machines with large disks that will never be filled. Another is easily building dedicated object stores for development. NooBaa has a free community edition that may suit a lot of non-production use cases. The free edition allows up to 20TB of data stored, and there is an option to buy support for the free edition too. The full edition is licensed per TB of data stored, not the physical disk size or the provisioned size of pools or buckets.

A challenge for me is that I don’t have anything that consumes object storage, so it is hard to work out how to test the community edition. Maybe I need a coding challenge that includes some object storage.
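If I do get around to that challenge, the starting point would probably be a tiny S3 consumer, much like the Scality test sketch earlier: any S3-compatible endpoint will do. The endpoint URL, credentials, and bucket name below are placeholders, not NooBaa defaults.

```python
# A minimal "coding challenge" sketch for exercising an S3-compatible
# object store such as NooBaa's community edition. Endpoint, credentials,
# and bucket name are placeholders for illustration only.
import hashlib
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://noobaa.lab.local:6001",  # placeholder endpoint
    aws_access_key_id="DEMO_ACCESS_KEY",          # placeholder credentials
    aws_secret_access_key="DEMO_SECRET_KEY",
)

payload = b"some data worth keeping"
checksum = hashlib.sha256(payload).hexdigest()

# Store the object with its checksum as metadata, read it back,
# and confirm nothing was mangled on the way through.
s3.put_object(Bucket="challenge", Key="item-001",
              Body=payload, Metadata={"sha256": checksum})
obj = s3.get_object(Bucket="challenge", Key="item-001")
assert hashlib.sha256(obj["Body"].read()).hexdigest() == obj["Metadata"]["sha256"]
print("object stored and verified")
```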


Vendor Briefing – Zerto

Zerto is another company I have known for a while. At VMworld they announced a new release of their Zerto Virtual Replication, with some cool new features. I've written before that the biggest challenge in writing about Zerto's products is that they just work. New versions just work like the old versions and add new features without a huge learning curve.

My favourite new feature is the ability to replicate VMs to Azure as an off-site DR location. This is in addition to the existing ability to replicate to AWS. Azure is a desirable DR destination because a lot of customers have Microsoft Enterprise License Agreements (ELAs) that include Azure credits, even if the customer didn't want Azure. A result of these credits and Zerto's new feature is effectively free DR to the cloud. A nice part is that the achievable Recovery Time Objective (RTO) on Azure is very similar to using on-premises hardware; RTOs of under an hour are achievable with Zerto on Azure.

Also in the new version is support for one-to-many replication. This enables using two data centers on a campus as well as replicating to a cloud, or more remote, data center. There is a whole host of natural disasters that could affect a campus; floods, earthquakes, tornadoes, wildfires and the like could all shut down an entire site. The ability to have an additional DR-to-the-cloud strategy is hugely beneficial.

The final part that I found interesting is the mobile application for monitoring your Zerto environment. There have been many attempts to bring infrastructure management to mobile devices; I still haven't seen it stick. Most IT infrastructure management is still done from a large-screen device. I will be interested to see whether Zerto has built something that does deliver what IT managers and operations staff want on mobile devices. For me, the most interesting part was that the mobile app came out of an internal hackathon that Zerto's engineering team runs periodically. I plan to find out more about these hackathons, as innovation happens in these intense, small-group activities.


Vendor Briefing – SimpliVity

You may know that I have been doing a bit of work with SimpliVity over the last year or so. If you work for SimpliVity you may have heard my voice on some training materials; I hope you managed to stay awake.

This week SimpliVity announced some new hardware and features.

The new hardware is an option for all-flash OmniStack HCI nodes. All-flash is a great fit for SimpliVity, as writes are deduplicated inline, so there are very few overwrites of blocks on the flash. This means they don't need to add as much special write handling or use high-endurance flash. An all-flash configuration keeps worst-case IO latency low and brings more high-performance workloads into scope. I suspect there is also a simple piece of marketing here, where customers are demanding all-flash without any technical justification. The new all-flash configurations are available on both Lenovo and Cisco hardware, as well as directly on SimpliVity OmniCube nodes.

The other new feature that I like is the DR automation called RapidDR. It uses a wizard-based interface to set up automation of VM failover. It sits on top of SimpliVity's native data protection, which replicates VM contents between data centers. I think this idea started out as a set of scripts written by one of SimpliVity's partners to solve one customer's specific needs. Now the functionality has been built into the SimpliVity platform and GUI, with the improved functionality and robustness that you would expect. The only downside is that RapidDR is a licensed capability, licensed per protected VM. I suspect it won't be terribly expensive, so customers are unlikely to build their own scripts to replicate the functionality.

A final nice feature is an option for SQL Server log truncation when SimpliVity does an application-aware backup of the VM. I imagine that this will make a lot of DBAs happy, as they won't need to resort to manually truncating logs or using the simple recovery model.


Vendor Briefing – Robin Systems

Today’s briefing was with Robin Systems.

Their pitch is around Application Defined Datacenters. It is an interesting focus, since SDDC is usually all about the software that runs the data center and less about the applications that deliver value to the business.

They have a software product that automates a lot of the application lifecycle and adds its own persistent storage for containers. The storage software seems to be a large part of the product; it includes QoS settings for applications. The actual storing of data uses local SSDs or hard disks in the compute nodes, as well as some AWS storage. All the storage is pooled and has tiering, plus the QoS I mentioned. The compute capacity is also pooled, making this something of a hyperconverged platform for containers.

The real proof is in what the product does for real customers with real applications; there are a few references on the website. Being the week before VMworld, I'm not going to get a chance to dig deeper. Luckily we will also be able to hear about Robin Systems at Cloud Field Day next month, or at Oracle OpenWorld.

I get briefings from a few vendors each month. I'm going to try to share my immediate impressions in short blog posts as I get off the phone with each company. This is obviously the first of these posts.
