August 26, 2015 | Cloud
Unless you’ve been living under a rock for the last year, you’ll undoubtedly know that microservices are the new hotness. An emerging trend I’ve observed is that the people actually running microservices in production tend to be larger, well-funded companies such as Netflix, Gilt, Yelp and Hailo, and each organisation has its own way of developing, building and deploying.
As an industry we haven’t yet converged on the best way of doing these things for a microservice-based application. We often still need to assemble our own platform, and in my opinion this is a frequent source of difficulty for smaller, less operationally-savvy organisations that are jumping on the microservice bandwagon…
Don’t get me wrong: at OpenCredo we’ve built and deployed many microservice-based applications into production (across a range of organisations), and there are plenty of other challenges in embracing this style of application, primarily organisational (the ever-present Conway’s law) and architectural (just how do I divide my application into bounded contexts?).
However, this article will focus on the technical platform challenges. In this context I use the word ‘platform’ to represent everything involved in getting the code I write on my machine pushed through a build pipeline to a reliable production deployment, i.e. continuously integrated, tested, consolidated, validated, deployed, coordinated, monitored and maintained. Woah, I hear you say, that’s a lot of stuff… yep, welcome to the unglamorous side of microservices…
Surely containers solve all of this? Not exactly. Increasingly, microservices are being conflated with container technology, and some of the motivations behind this are valid – containers (such as Docker and rkt) appear to be great packaging mechanisms for running small, isolated and ephemeral services – but other motives, such as enforcing good architectural principles like encapsulation, are decidedly more suspect.
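To make the ‘packaging mechanism’ point concrete, here is a minimal sketch of a Dockerfile for a small JVM-based service – the service name, jar file and port are purely illustrative assumptions, not taken from any real project:

```dockerfile
# Hypothetical Dockerfile packaging a small JVM-based microservice.
# Image tag, jar name and port are illustrative only.
FROM java:8-jre
COPY target/accounts-service.jar /opt/accounts-service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/accounts-service.jar"]
```

The resulting image bundles the service and its runtime into a single immutable artefact, which is exactly what makes containers attractive for small, isolated and ephemeral services.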
Therefore, although the two technologies are quite separate, I’m going to assume that people are predominantly deploying microservices within some kind of container-like vehicle – perhaps a VM image, a Docker container, or a Cloud Foundry Warden/Garden container. This means you will in all likelihood require some kind of deployment fabric that acts as a platform/cluster manager, handling the scheduling and orchestration of these container-like application vehicles.
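As a sketch of what ‘scheduling and orchestration’ means in practice, here is a minimal application definition for Marathon (a scheduler that runs on Apache Mesos), asking the fabric to keep three instances of a containerised service alive – the app id, image name, resource figures and health-check path are all hypothetical:

```json
{
  "id": "/accounts-service",
  "instances": 3,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "example/accounts-service:1.0",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080 }]
    }
  },
  "healthChecks": [{ "protocol": "HTTP", "path": "/health" }]
}
```

Given a definition like this, the fabric decides which hosts run the containers, restarts any instance that fails its health check, and maintains the requested instance count – responsibilities that would otherwise land on your own scripts.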
The top three open source cluster management fabrics at the moment are Apache Mesos (combined with Marathon or Aurora), Google’s Kubernetes, and some flavour of Cloud Foundry (including Lattice). Vendor-specific fabrics also exist, such as AWS’s EC2 Container Service (ECS), and they do cover quite a lot of the ‘platform’ responsibilities mentioned above, but they come with a degree of vendor lock-in. The open source solutions, by contrast, largely assume you are going to ‘roll your own’ platform.
There are some notable exceptions here – Mesosphere are working on their Datacenter Operating System (DCOS), Red Hat have created the OpenShift v3 / fabric8 / Kubernetes platform, CoreOS offer the Kubernetes-based Tectonic, and Pivotal and IBM have created Cloud Foundry ecosystems with Pivotal Cloud Foundry and Bluemix respectively. Other notable attempts at creating an open microservice platform include Cisco’s Mantl (formerly known as microservices-infrastructure) and Capgemini’s Apollo, both of which are based on Apache Mesos and leverage other open technologies such as HashiCorp’s Terraform, CoreOS’s etcd and Mesosphere’s Marathon.
So, if you do decide to go it alone with something like Mesos, Kubernetes or Cloud Foundry (or even if you utilise one of the emerging platforms), what are some of the technical challenges you might face?
Over the coming weeks we’ll be addressing these issues, and their associated solutions, in a series of blog posts. Stay tuned, and feel free to ask questions!