August 26, 2016 | Kubernetes

Kubernetes from scratch to AWS with Terraform and Ansible (part 2)

This post is the second in a series of three tutorial articles walking through a sample project that demonstrates how to provision Kubernetes on AWS from scratch, using Terraform and Ansible. To understand the goal of the project, it is best to start with the first part.

WRITTEN BY

Lorenzo Nicora


Part 1: Provision the infrastructure, with Terraform.
Part 2 (this article): Install and configure Kubernetes, with Ansible.
Part 3: Complete the setup and smoke test it, deploying an nginx service.

The complete working project is available here: https://github.com/opencredo/k8s-terraform-ansible-sample

Installing Kubernetes

In the previous article, we created all the AWS resources using Terraform. No Kubernetes components have been installed yet.

We have 9 EC2 instances (hosts, in Ansible terms), 3 of each type (a group, for Ansible):

  • Controllers: the Kubernetes HA masters
    • Will run the Kubernetes API Server, Controller Manager and Scheduler services
  • Workers: the Kubernetes Nodes (a.k.a. Minions)
    • Will run the Docker, Kubernetes Proxy and Kubelet services
    • Will have CNI installed, for networking between containers
  • etcd: a 3-node etcd cluster, maintaining the Kubernetes state

All hosts need the certificates we generated in the first part, for HTTPS.

First of all, we have to install Python 2.5+ on all machines.

Ansible project organisation

The Ansible part of the project is organised as suggested by the Ansible documentation (a rough layout sketch follows the list below). There are multiple playbooks, to be run independently:

  1. Bootstrap Ansible (install Python). Install, configure and start all the required components (infra.yaml)
  2. Configure Kubernetes CLI (kubectl) on your machine (kubectl.yaml)
  3. Setup internal routing between containers (kubernetes-routing.yaml)
  4. Smoke test it, deploying an nginx service (kubernetes-nginx.yaml) + manual operations
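
For orientation, the Ansible part of the repository is laid out roughly as follows (an approximate sketch; anything not named elsewhere in this article is an assumption, so check the repository for the exact layout):

ansible/
  ansible.cfg
  hosts/                    (dynamic inventory, described below)
  roles/
    common/
    controller/
    etcd/
    worker/
  infra.yaml
  kubectl.yaml
  kubernetes-routing.yaml
  kubernetes-nginx.yaml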

This article walks through the first playbook (infra.yaml).

The code snippets here have been simplified. Please refer to the project repository for the complete versions.

Installing Kubernetes components

The first playbook takes care of bootstrapping Ansible and installing the Kubernetes components. The actual tasks are separated into roles: common (executed on all hosts), plus one role per machine type: controller, etcd and worker.

Before proceeding, we have to understand how Ansible identifies and finds hosts.

Dynamic Inventory

Ansible works on groups of hosts. Each host must have a unique handle and an address Ansible can use to SSH into the box.

The most basic approach is using a static inventory, a hardwired file associating groups to hosts and specifying the IP address (or DNS name) of each host.

A more realistic approach uses a Dynamic Inventory, plus a static file defining groups based on instance tags. For AWS, Ansible provides an EC2 Dynamic Inventory script out of the box.

The configuration file ec2.ini, downloaded from the Ansible repo, requires some changes. It is very long, so only the modified parameters are shown here:
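
The following is a sketch of the kind of changes involved; the region and the filter tag are assumed values, not necessarily those used by the real project:

[ec2]
# Limit the inventory to the AWS region used by the project (assumed value)
regions = eu-west-1

# Only inventory the instances created by this project, selected by tag (assumed tag)
instance_filters = tag:ansibleFilter=Kubernetes01

# Connect to the machines by IP address, not by DNS name
destination_variable = ip_address
vpc_destination_variable = ip_address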

Note that we use instance tags to filter and identify hosts, and IP addresses for connecting to the machines.

A separate file defines groups based on instance tags. It creates nicely named groups: controller, etcd and worker (otherwise we would have to use groups awkwardly named tag_ansibleNodeType_worker…). If we add new hosts to a group, this file remains untouched.
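
The dynamic inventory script generates groups named tag_<key>_<value>, and the groups file simply aliases them. A minimal sketch, assuming the instances are tagged with ansibleNodeType:

[controller:children]
tag_ansibleNodeType_controller

[etcd:children]
tag_ansibleNodeType_etcd

[worker:children]
tag_ansibleNodeType_worker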

To make the inventory work, we put the Dynamic Inventory Python script and its configuration file in the same directory as the groups file.

ansible/
  hosts/
    ec2.py
    ec2.ini
    groups    

The final step is configuring Ansible to use this directory as the inventory. In ansible.cfg:
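
A minimal sketch of the relevant settings (the SSH user and the key path are assumptions):

[defaults]
inventory = ./hosts
remote_user = ubuntu
private_key_file = ~/.ssh/id_rsa
host_key_checking = False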

Bootstrapping Ansible

Now we are ready to execute the playbook (infra.yaml) that installs all the components. The first step is installing Python on all boxes with the raw module, which executes a shell command remotely, via SSH, with no bells and whistles.
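
A minimal sketch of the bootstrap play, assuming Ubuntu instances (the playbook in the repository may differ):

- hosts: all
  gather_facts: no   # gathering facts requires Python, which is not installed yet
  tasks:
    - name: Install Python with a raw shell command
      raw: sudo apt-get update -qq && sudo apt-get install -qq -y python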

Installing and configuring Kubernetes components: Roles

The second part of the playbook installs and configures all the Kubernetes components. It applies different roles to hosts, depending on their groups. Note that groups and roles have identical names here, but this is not a general rule.
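
In practice, this means one play per group, each applying the role with the matching name. A simplified sketch (the actual infra.yaml in the repository may differ):

- hosts: all
  become: true
  roles:
    - common

- hosts: etcd
  become: true
  roles:
    - etcd

- hosts: controller
  become: true
  roles:
    - controller

- hosts: worker
  become: true
  roles:
    - worker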

Ansible executes the common role (omitted here) on all machines. The other roles do the real job: they install, set up and start services using systemd:

  1. Copy the certificates and keys (generated with Terraform in the first part)
  2. Download the service binaries directly from the official source, unpack them and copy them to the right directory
  3. Create the systemd unit file, using a template
  4. Reload systemd and (re)start the service
  5. Verify the service is running

Here are the tasks of the etcd role. The other roles are not substantially different.
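
A simplified sketch of those tasks (the etcd version, file names and paths are assumptions; see the repository for the real tasks file):

- name: Create the configuration directory
  file:
    path: /etc/etcd
    state: directory

- name: Copy the certificates and key
  copy:
    src: "{{ item }}"
    dest: /etc/etcd/
  with_items:
    - ca.pem
    - kubernetes.pem
    - kubernetes-key.pem

- name: Download the etcd release
  get_url:
    url: https://github.com/coreos/etcd/releases/download/v3.0.1/etcd-v3.0.1-linux-amd64.tar.gz
    dest: /tmp/etcd.tar.gz

- name: Unpack the etcd binaries
  unarchive:
    src: /tmp/etcd.tar.gz
    dest: /opt
    copy: no

- name: Copy the binaries to the right directory
  shell: cp /opt/etcd-v3.0.1-linux-amd64/etcd* /usr/local/bin/

- name: Create the systemd unit file from a template
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service

- name: Reload systemd
  command: systemctl daemon-reload

- name: Restart and enable the etcd service
  service:
    name: etcd
    state: restarted
    enabled: yes

- name: Verify etcd is running
  command: systemctl status etcd
  changed_when: false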

We generate etcd.service from a template. Ports are hardwired (they may be externalised as variables), but host IP addresses are facts gathered by Ansible.
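
A sketch of the etcd.service.j2 template, showing the hardwired ports and the IP addresses taken from Ansible facts (the certificate paths, cluster token and data directory are assumptions):

[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd

[Service]
ExecStart=/usr/local/bin/etcd \
  --name {{ ansible_hostname }} \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --listen-client-urls https://{{ ansible_default_ipv4.address }}:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://{{ ansible_default_ipv4.address }}:2379 \
  --listen-peer-urls https://{{ ansible_default_ipv4.address }}:2380 \
  --initial-advertise-peer-urls https://{{ ansible_default_ipv4.address }}:2380 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster {% for host in groups['etcd'] %}{{ hostvars[host].ansible_hostname }}=https://{{ hostvars[host].ansible_default_ipv4.address }}:2380{% if not loop.last %},{% endif %}{% endfor %} \
  --initial-cluster-state new \
  --data-dir /var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target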

Known simplifications

The most significant simplifications, compared to a real world project, concern two aspects:

  • Ansible workflow is simplistic: at every execution, it restarts all services. In a production environment, you should add guard conditions, trigger operations only when required (e.g. when the configuration has changed) and avoid restarting all nodes of a cluster at the same time.
  • As in the first part, using fixed internal DNS names, rather than IPs, would be more realistic.

Next steps

The infra.yaml playbook has installed and run all the services required by Kubernetes. In the next article, we will set up routing between containers, to allow Kubernetes Pods living on different nodes to talk to each other.

 
