Open Credo

February 14, 2018 | Cloud

Fargate As An Enabler For Serverless Continuous Delivery

AWS announced a number of new container products at re:Invent 2017, and of particular interest to me was a new Elastic Container Service (ECS) launch type called Fargate.

Prior to Fargate, when it came to creating a continuous delivery pipeline in AWS, using containers through ECS in its standard form was the closest you could get to an always-up, hands-off, managed style of setup. Traditionally, ECS has allowed you to create a configured pool of “worker” instances, with ECS acting as a scheduler that provisions containers on those instances.



Miles Wilson


Fargate takes this one step further, allowing you to run tasks and services on infrastructure fully managed by AWS so that you no longer have to manage your own EC2 instances, instead relying entirely on a serverless, Container As A Service platform.

This essentially provides us with a serverless option for running containers – no EC2 instances for operations teams to worry about, meaning no patching, no AMI baking, and so on. This has the potential to dramatically reduce the barrier to entry for containerisation, and further reduces any customer’s need to look outside of the Amazon ecosystem. This all-AWS, all-serverless setup should reduce costs as well – you’re no longer paying for any resource that you’re not actively using. It is slightly different to Lambda, where you pay nothing unless someone is making requests to your application; here you pay for the resources allocated to your container, whether or not it is serving traffic.

This in-depth post walks through all the steps required to do this, and also tries to answer the inevitable follow-on question: “is this a good idea?”

The Goal: Create An All AWS Serverless CD Setup

With that in mind, I set out to deploy a very simple “Hello World” type application utilising Fargate, as part of an entirely Amazon-based product suite.

I’m going to use CodeCommit, CodeBuild, the Elastic Container Registry, the Elastic Container Service and CodePipeline to glue these services together.

Below is a visualisation of the steps I want my deployment pipeline to be able to cover.

Pipeline diagram showing all steps

All the source code and CloudFormation templates used in this article are available in a git repository. Each step in the post has an associated branch, with the relevant code snippets discussed in the post committed to it.

Step 1 – Checkout From SCM

Pipeline diagram with 1st step highlighted

For the purposes of this post, I’ve already set up an AWS account, and configured my command line client. All the examples shown below are run directly from my laptop.

I first need to create a repository to store the code for my application. AWS has CodeCommit for this, and the setup is simple enough:
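A sketch of the CLI call this refers to (the repository name and description are illustrative):

```shell
# Create a new CodeCommit repository to hold the application source
aws codecommit create-repository \
    --repository-name hello-world \
    --repository-description "Hello World demo application"
```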

After running the create-repository command, the CLI returns a clone URL, which I can use with my standard git client to check out and start working with my code:
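The clone URL follows the standard CodeCommit HTTPS shape; the region and repository name here are illustrative:

```shell
# Clone the empty CodeCommit repository using the returned URL
git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/hello-world
cd hello-world
```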

I’m ready to start writing an application!

I’ve created a very simple Python app, which is in the repository I set up to accompany this blog post. To get the code from this external repository into my newly created AWS CodeCommit repo, I will add it as a remote to the repository I just cloned from CodeCommit.
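As an illustration only (the real app lives in the accompanying repository), a stand-in with the same behaviour – “Hello World!” at the root endpoint – might look like this, using only the standard library:

```python
# app.py – illustrative stand-in for the demo application in this post
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to any GET request with a plain-text greeting
        body = b"Hello World!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        # Keep container logs quiet for this demo
        pass

def serve(port=5000):
    # Bind on all interfaces so the containerised app is reachable
    HTTPServer(("0.0.0.0", port), HelloHandler).serve_forever()
```

The port (5000) is an assumption for this sketch, matched by the Dockerfile’s EXPOSE and the load balancer configuration later on.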

Step 2 – Build Docker Container

Pipeline diagram with 2nd step highlighted

Local Build and Test

The application I’ve created is very simple – it just returns “Hello World!” when the root endpoint is called, but it’s enough to prove out the pipeline. I can check that the app works as expected by building and running the Docker container locally on my laptop:
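Something along these lines (image name and port are illustrative; the app is assumed to listen on 5000):

```shell
# Build the image and run it locally, mapping the app's port to the host
docker build -t hello-world .
docker run --rm -d -p 5000:5000 --name hello hello-world
curl http://localhost:5000/
# Hello World!
docker stop hello
```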

Begin Defining “Pipeline” Template

If I want AWS to build my Docker image for me, I’ll need to set up an ECR Repository, CodeBuild, and CodePipeline.
CodePipeline allows you to stitch together a deployment pipeline, and provides a UI as well as a number of prebuilt “Actions” that you assemble into “Stages”. One of these actions calls CodeBuild, which is analogous to TravisCI – it executes scripts inside containers for you, typically to compile and test applications.
Whilst I might normally turn to Terraform at this point to create these three resources within my AWS account, I’m trying to use only AWS products, so I’ll use CloudFormation. I’m going to walk through each resource in my initial CloudFormation template (cf/build-pipeline.yml in the repo), one at a time.

Define ECR Repository, Permissions & Storage

To start with, I will create an Elastic Container Registry Repository to hold our container images. This will be my central repository, and be used by all clusters to pull Docker images from.
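A minimal sketch of what this resource might look like in the template (the repository name is illustrative):

```yaml
Resources:
  Repository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: hello-world
```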

Then I need an IAM Role that has the relevant permissions to build the container: this means allowing the CodeBuild service access to the ECR Registry, as well as the ability to clone the CodeCommit git repo, and ship logs to CloudWatch.
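A sketch of such a role – the exact action list here is indicative rather than exhaustive:

```yaml
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: build-permissions
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  # Push/pull images in ECR
                  - ecr:GetAuthorizationToken
                  - ecr:BatchCheckLayerAvailability
                  - ecr:InitiateLayerUpload
                  - ecr:UploadLayerPart
                  - ecr:CompleteLayerUpload
                  - ecr:PutImage
                  # Clone from CodeCommit
                  - codecommit:GitPull
                  # Ship build logs to CloudWatch
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                  # Read/write pipeline artifacts in S3
                  - s3:GetObject
                  - s3:PutObject
                Resource: "*"
```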

CodePipeline needs an S3 bucket to store artifacts between build stages, so I add one to the template:
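The artifact bucket itself needs nothing beyond the resource declaration:

```yaml
  ArtifactBucket:
    Type: AWS::S3::Bucket
```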

Define CodeBuild Project

Now I can define a CodeBuild project. This is going to check out the source from CodeCommit, build the Docker container, and push it to ECR. I’m passing the ECR Repository name to the build process as an environment variable, so it knows where to push the container image once it’s been built.
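A sketch of the project definition – the environment variable name (IMAGE_REPO) and build image are assumptions for this post; PrivilegedMode is needed so CodeBuild can run Docker:

```yaml
  BuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: hello-world-build
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Source:
        Type: CODEPIPELINE
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/docker:17.09.0
        PrivilegedMode: true
        EnvironmentVariables:
          # Full ECR repository URI, so buildspec.yml knows where to push
          - Name: IMAGE_REPO
            Value: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Repository}
```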

CodeBuild expects to find a file called buildspec.yml in the root directory of the source artifact (our CodeCommit repo). This file describes the build and test process you would like CodeBuild to execute. The one used in my case is very simple – it just executes Docker build, and pushes to the repository provided in the environment variable.
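A sketch of that buildspec – the IMAGE_REPO variable name is an assumption (whatever the CodeBuild project passes in), and the `aws ecr get-login` form is the 2017-era login command:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate the Docker client against ECR
      - $(aws ecr get-login --no-include-email)
  build:
    commands:
      - docker build -t ${IMAGE_REPO}:${CODEBUILD_RESOLVED_SOURCE_VERSION} .
  post_build:
    commands:
      - docker push ${IMAGE_REPO}:${CODEBUILD_RESOLVED_SOURCE_VERSION}
```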

${CODEBUILD_RESOLVED_SOURCE_VERSION} is an environment variable that CodeBuild injects into every build process, that contains the git checksum used for the build. I’m tagging the container with the git checksum, so it is possible to identify which version of an application is running in the various environments.

Define CodePipeline and Stages

Now the CodePipeline can be defined. To start with, this will just check out the code from CodeCommit (the Checkout stage) and then ask CodeBuild to build and push our Docker container (the Build stage). The S3 bucket is used to pass the cloned repository to CodeBuild.
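A sketch of the pipeline resource – artifact names and the pipeline’s own IAM role (PipelineRole, not shown) are illustrative:

```yaml
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Checkout
          Actions:
            - Name: Source
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeCommit
                Version: "1"
              Configuration:
                RepositoryName: hello-world
                BranchName: master
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Build
          Actions:
            - Name: CodeBuild
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: "1"
              Configuration:
                ProjectName: !Ref BuildProject
              InputArtifacts:
                - Name: SourceOutput
              OutputArtifacts:
                - Name: BuildOutput
```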

Enable Automatic Triggering of Pipeline

In order to have the pipeline executed on every commit to the master branch, I need to use CloudWatch Events. For CloudWatch to be able to trigger the pipeline, I need to create an IAM Role for it to assume:
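Something along these lines:

```yaml
  EventRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: events.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: start-pipeline
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: codepipeline:StartPipelineExecution
                Resource: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
```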

Once the permissions are in place, the event needs to trigger the pipeline:
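A sketch of the rule, matching pushes to the master branch of the CodeCommit repository (the repository ARN expression is illustrative):

```yaml
  PipelineTrigger:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Repository State Change
        resources:
          - !Sub arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:hello-world
        detail:
          referenceType:
            - branch
          referenceName:
            - master
      Targets:
        - Id: pipeline
          Arn: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
          RoleArn: !GetAtt EventRole.Arn
```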

Create Pipeline & Associated Resources Using CloudFormation

I’ve placed the build pipeline template in the cf folder of the application repository. Let’s create the stack from the command line, which in turn will create the pipeline:
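The stack name here is illustrative; CAPABILITY_IAM is needed because the template creates IAM roles:

```shell
aws cloudformation create-stack \
    --stack-name build-pipeline \
    --template-body file://cf/build-pipeline.yml \
    --capabilities CAPABILITY_IAM
```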

Test it!

At this point, I can create a new commit, push to CodeCommit, and see the app build in the console:

CodePipeline screenshot showing build phases

Step 3 – Deploy To Test

Pipeline diagram with 3rd step highlighted

Now I have an application that gets built and pushed into a Docker registry.
A Docker container is not much use unless it’s running somewhere – so I will create another template that defines the “Environment”. This template can be used to stand up multiple Fargate-compatible ECS clusters in our pipeline (Test and Production, in this simplified case).

To make this post a little simpler, I’ve already set up a VPC, with public subnets and the required routing tables. If you need to set this up, the reference template from AWS Labs will be useful.

Define “Environment” Template

I’m going to export some of the outputs from this template, which means I can retrieve information from deployed stacks based on this template when I deploy my service. This allows me to run multiple stacks based on this template for different environments.

Although this template is included in the repository created for this blog post, that is for ease of use only. It would normally be kept in an independent repository, as it is not associated with the application.

Create TEST Environment using CloudFormation

Using the template defined above, I will create a cluster called “test” via the command line:
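The parameter name is an assumption about the environment template:

```shell
aws cloudformation create-stack \
    --stack-name test \
    --template-body file://cf/ecs-cluster.yml \
    --parameters ParameterKey=EnvironmentName,ParameterValue=test
```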

I now have two separate CloudFormation templates:
cf/build-pipeline.yml – which contains the definition of the pipeline.
cf/ecs-cluster.yml – which contains the definition of an ECS cluster (Environment).

Define “Service Deployment” Template

I now need to create a template to deploy our service into ECS. This template (cf/service.yml) describes everything required to launch the container based application on a given ECS Cluster.

The template is parameterised to allow me to pass in the ECR repository name, an image tag I wish to deploy, and the environment I would like to deploy the service to.

Define Logging Resources and Permissions

I want to view logs from all instances of this service together in CloudWatch Logs, so I need a log group to aggregate logs from my containers once they’re deployed.
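A sketch of the log group (the naming scheme and retention period are illustrative):

```yaml
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /ecs/${EnvironmentName}/hello-world
      RetentionInDays: 7
```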

Containers in ECS have their own “Task Execution Role”, much like an EC2 Instance Role. I need one that can push logs to the log group I created, pull the Docker image, and register the service with a load balancer.
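A sketch of such a role, here leaning on the AWS-managed task execution policy for the image-pull and logging permissions:

```yaml
  TaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```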

Define ECS Run Task To Run Application

Each Service in ECS can have multiple tasks associated with it. (If you are familiar with Kubernetes, tasks are equivalent to a Kubernetes “Pod” in that they group one or more containers together as a single item.) The task I wish to run only has one container in it, and I’ve allocated 512 MB of RAM and 256 CPU units (a quarter of a vCPU). The NetworkMode “awsvpc” means that my container gets an Elastic Network Interface associated with it, rather than using Docker’s bridge or host modes, which would normally be used to map a virtual interface on the container to a “physical” interface on the host. Fargate requires the “awsvpc” mode, as there is no host available to make use of the other modes.
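A sketch of the task definition – the parameter names (RepositoryName, ImageTag) follow the parameterisation described earlier, and the container port is an assumption:

```yaml
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: hello-world
      Cpu: "256"
      Memory: "512"
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn
      ContainerDefinitions:
        - Name: hello-world
          Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${RepositoryName}:${ImageTag}
          PortMappings:
            - ContainerPort: 5000
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref LogGroup
              awslogs-region: !Ref AWS::Region
              awslogs-stream-prefix: hello-world
```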

Configure ECS Task to run on Fargate

I then assign the task to the service. Note that I set the LaunchType to “FARGATE” – this means AWS will run the tasks on the serverless Fargate platform, not on ECS EC2 instances. It is possible to run hybrid clusters, with some workloads deployed to EC2 instances and others to Fargate.
Fargate requires that your containers are executed inside a VPC, so I pass through the parameters exported from my cluster definition:
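A sketch of the service – the export names used with Fn::ImportValue are assumptions about what the environment template exposes:

```yaml
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster:
        Fn::ImportValue: !Sub ${EnvironmentName}-ClusterName
      LaunchType: FARGATE
      DesiredCount: 2
      TaskDefinition: !Ref TaskDefinition
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          Subnets:
            - Fn::ImportValue: !Sub ${EnvironmentName}-PublicSubnetOne
            - Fn::ImportValue: !Sub ${EnvironmentName}-PublicSubnetTwo
          SecurityGroups:
            - Fn::ImportValue: !Sub ${EnvironmentName}-SecurityGroup
      LoadBalancers:
        - ContainerName: hello-world
          ContainerPort: 5000
          TargetGroupArn: !Ref TargetGroup
```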

I can also get the service to register itself against a load balancer once the stack is launched. The TargetGroup defines my healthcheck and the port I wish to expose on the ALB; the ListenerRule picks up requests sent to /welcomer and forwards them to the TargetGroup.
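A sketch of these two resources – TargetType must be “ip” for awsvpc tasks; the imported VPC and listener names are assumptions:

```yaml
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      TargetType: ip
      Protocol: HTTP
      Port: 5000
      HealthCheckPath: /
      VpcId:
        Fn::ImportValue: !Sub ${EnvironmentName}-VpcId

  ListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn:
        Fn::ImportValue: !Sub ${EnvironmentName}-Listener
      Priority: 1
      Conditions:
        - Field: path-pattern
          Values:
            - /welcomer
      Actions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
```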

Test it!

I can test out this template from the command line:
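Something along these lines, with placeholder values for the parameters described earlier (the git SHA placeholder is not filled in here):

```shell
aws cloudformation create-stack \
    --stack-name hello-world-test \
    --template-body file://cf/service.yml \
    --capabilities CAPABILITY_IAM \
    --parameters \
        ParameterKey=EnvironmentName,ParameterValue=test \
        ParameterKey=RepositoryName,ParameterValue=hello-world \
        ParameterKey=ImageTag,ParameterValue=<git-sha>
```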

Adjust CodeBuild To Enable Continuous Deployment

However, I want this to be continuously delivered – so there are a few changes I need to make.

Firstly, I need to capture the latest built version of the container from the CodeBuild process. This is easily achieved by adding the following to my buildspec.yml:
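One common approach is to write the tag into a small JSON file and export it as a build artifact, roughly like this (the file and key names are illustrative):

```yaml
  post_build:
    commands:
      - printf '{"ImageTag":"%s"}' "${CODEBUILD_RESOLVED_SOURCE_VERSION}" > build.json
artifacts:
  files:
    - build.json
```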

This captures the git commit reference in a way that I can use later on as a parameter to my service CloudFormation template.

Add “Deploy To Test” Stage into Pipeline Template

I now add an additional stage to my CodePipeline definition:
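A sketch of the stage – artifact names, the CloudFormation service role, and the build.json lookup via Fn::GetParam are assumptions consistent with the earlier sketches:

```yaml
        - Name: DeployToTest
          Actions:
            - Name: Deploy
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: "1"
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: hello-world-test
                TemplatePath: SourceOutput::cf/service.yml
                Capabilities: CAPABILITY_IAM
                RoleArn: !GetAtt PipelineRole.Arn
                ParameterOverrides: |
                  {
                    "EnvironmentName": "test",
                    "RepositoryName": "hello-world",
                    "ImageTag": { "Fn::GetParam": ["BuildOutput", "build.json", "ImageTag"] }
                  }
              InputArtifacts:
                - Name: SourceOutput
                - Name: BuildOutput
```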

This stage runs my cf/service.yml CloudFormation template from the application’s CodeCommit repository. By setting the ActionMode to CREATE_UPDATE, a new stack will be created automatically if the app has never been deployed before; otherwise the existing stack will be updated. The update is performed without downtime, and in the event that the new version of the service does not become healthy, the deployment is rolled back automatically as well.

Step 4 – Deploy To Production

Pipeline diagram with 4th step highlighted

In a more formalised setup, I would want a number of test phases to be included in this pipeline, but for the purposes of exploring a “Serverless” Continuous Delivery pipeline, I’m going to assume that if the healthcheck passes in test, it’s okay to deploy that version to production.

Create PROD Environment using CloudFormation

In order to do this, I need a production cluster, which I will again start via the command line:
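As with the test environment, the stack name and parameter are illustrative:

```shell
aws cloudformation create-stack \
    --stack-name production \
    --template-body file://cf/ecs-cluster.yml \
    --parameters ParameterKey=EnvironmentName,ParameterValue=production
```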

Add “Deploy To Prod” Stage into Pipeline Template

I will also need a new stage at the end of my pipeline:
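A sketch of the production stage, mirroring the test deployment with the environment swapped over (names are illustrative):

```yaml
        - Name: DeployToProd
          Actions:
            - Name: Deploy
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: "1"
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: hello-world-production
                TemplatePath: SourceOutput::cf/service.yml
                Capabilities: CAPABILITY_IAM
                RoleArn: !GetAtt PipelineRole.Arn
                ParameterOverrides: |
                  {
                    "EnvironmentName": "production",
                    "RepositoryName": "hello-world",
                    "ImageTag": { "Fn::GetParam": ["BuildOutput", "build.json", "ImageTag"] }
                  }
              InputArtifacts:
                - Name: SourceOutput
                - Name: BuildOutput
```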

I can update the pipeline from the command line easily:
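An update-stack against the same template applies the new stages:

```shell
aws cloudformation update-stack \
    --stack-name build-pipeline \
    --template-body file://cf/build-pipeline.yml \
    --capabilities CAPABILITY_IAM
```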

Test It!

A subsequent push to my CodeCommit repository results in CodePipeline going all the way through to production, with zero downtime and zero servers that need to be maintained. I’m paying only for the containers in use, traffic, and the build time required by CodeBuild.

Completed Pipeline in CodePipeline


With regards to the “is this a good idea?” question, the answer, as always, is about tradeoffs. To actually productionise this pipeline, a good deal more work would need to be done: tests, failure scenarios, notifications and so on would all need to be defined and implemented. However, it’s encouraging to know that it is at least possible to deliver an application without running a server – indeed, without stepping outside of Amazon at all.

In a future post, I’d like to explore using tooling from outside the AWS ecosystem to try and find a more comfortable solution – in places the AWS offering doesn’t feel particularly coherent, and the AWS console can be bewildering to those who, at the end of the day, just want to know why their build failed, or to find a copy of a test report.

Below is a summary of my experience, expressed as a list of pros and cons, of using an entirely AWS stack to deploy a containerised workload.


Pros

  • No instances to manage – This reduces operations overheads dramatically, as well as increasing security – it’s hard to overstate how powerful it is to be able to run containers in “your” network without managing servers.
  • A “one-stop shop” – If I have a user in the AWS console, I can inspect every aspect of my software delivery lifecycle and, more importantly, be billed in one invoice.
  • IAM Roles at service level – I can limit an individual service’s access to specific AWS resources. This is exposed natively to the container if it’s using the Amazon SDK (which most client libraries for AWS services do), so despite being incredibly fine-grained, the permissions model is transparent to the calling application.
  • Simplified networking model – Because containers deployed to ECS in awsvpc networking mode get Elastic Network Interfaces assigned to them, I can work with the well-understood AWS concepts of VPCs and security groups, without the added complexity of an additional software-defined network.


Cons

  • CloudFormation is vague – Working with CloudFormation to get this setup up and running was quite frustrating – the error messages are vague, and there’s no way to increase their verbosity.
  • Time from commit to running feels slow – If you’re used to pushing to GitHub and watching a build process kick off in your browser window pretty much instantly (as I am), it feels sluggish waiting for CodePipeline to pick up the change and start work.
  • Debugging is hard – CodeBuild gives very little away when something is configured incorrectly, and CodePipeline just gives you a red light instead of a green one, leaving you to work out what happened without any further information.
  • CodeCommit feels immature – Especially in comparison to GitHub or Bitbucket – it’s painful to navigate around, and the review tools are not up to scratch. This could prove a frustrating experience for a development team forced to use it simply because it’s the AWS-supported system.


In terms of the original goal I set out to achieve, I have shown that it is possible to create a “serverless continuous delivery” pipeline capable of building and deploying an application without running a single server – indeed, without stepping outside of Amazon at all.

A significant part of my all-AWS setup was the use of Fargate. This provided me with the ability to push my built containers into a stable, production-like environment, without having to worry about any of the underlying machines needed to run those containers.

Fargate itself is thus extremely powerful, especially when coupled with CloudFormation templates. Having said that, whilst AWS excels at the “runtime” part of building and running software, the development tooling and processes feel rather immature in comparison to the more established SaaS vendors.

Whilst the goal to demonstrate an all-AWS setup was achieved, given a completely free hand, my approach to such a challenge would generally be a bit more pragmatic and mixed, using other tooling (e.g. Terraform) and SaaS offerings (such as GitHub, TravisCI, etc.) to make this a more streamlined, developer-friendly experience. So entirely AWS serverless CD is indeed an option, albeit subject to a few of the challenges noted!


