DevOps as a Culture

Getting started

In this section you will learn the basics of running a CI/CD pipeline and the steps required to automate integration and deployment.

Running automated workflow using GitLab CI

GitLab CI is the main tool used for running automated workflows in Pan-Net. It is part of GitLab, integrated into its user interface and accessible from the GitLab web UI; most of its features are also exposed through a RESTful API. It is available in both GitLab Enterprise Edition and GitLab Community Edition. In this section you will learn how to use it to create your own CI/CD pipelines.
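For example, a pipeline can be started over the REST API using a pipeline trigger token. The following is a sketch only; the hostname, project ID and token are placeholders:

```shell
# Trigger a pipeline on the develop branch via the GitLab API
# (gitlab.example.com, <PROJECT_ID> and <TRIGGER_TOKEN> are placeholders)
curl -X POST \
  --form token=<TRIGGER_TOKEN> \
  --form ref=develop \
  "https://gitlab.example.com/api/v4/projects/<PROJECT_ID>/trigger/pipeline"
```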

How to start with GitLab CI

The CI/CD pipeline is defined in a descriptor file. In GitLab CI this descriptor is the YAML file .gitlab-ci.yml located in the root directory of the git repository.
Here you specify when the pipeline should be executed and which stages and jobs of the pipeline should run. The jobs are executed by a GitLab Runner. The complete documentation to GitLab CI can be found here.

Basic example of Gitlab CI Pipeline

In this example you can see a .gitlab-ci.yml with two stages. Once new code is pushed into the GitLab repository, GitLab evaluates the descriptor file and, if applicable, tasks a GitLab Runner, which in turn runs the Docker image alpine:latest and executes the commands from the individual jobs inside it. First it runs the stage first_stage with its job_1. Next it runs the jobs from second_stage, but job_3 is executed only if the code was pushed to the develop branch.

.gitlab-ci.yml
image: alpine:latest

stages:
  - first_stage
  - second_stage

job_1:
  stage: first_stage
  script:
    - echo "Run me in first stage"

job_2:
  stage: second_stage
  script:
    - echo "Run me in second stage"

job_3:
  stage: second_stage
  script:
    - echo "Run me in second_stage but only if git branch is develop"
  only:
    refs:
      - develop
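Note that only is GitLab's older job-control syntax; newer GitLab versions also provide the more flexible rules keyword. As a sketch, job_3 above could equivalently be written as:

```yaml
job_3:
  stage: second_stage
  script:
    - echo "Run me in second_stage but only if git branch is develop"
  rules:
    # run the job only for pushes to the develop branch
    - if: '$CI_COMMIT_BRANCH == "develop"'
```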

Other CI tools



Infrastructure as Code (IaC)

To fully automate the process of integration and deployment, you must also automate the provisioning of infrastructure (networks, virtual machines, connection topology...) wherever applicable. In the first stages of the CI pipeline it is therefore necessary to specify and create the infrastructure where your application will run after it is deployed. This approach is called Infrastructure as Code: the infrastructure is described in source code file(s), and during the deployment phase a provisioning tool reads this source code and deploys the infrastructure.
In Pan-Net the most common IaC deployment tools are Openstack Heat and Hashicorp Terraform.

How to start with Terraform

To run Terraform, the only thing you need is a single binary, which runs on almost any modern operating system. To install it, go to the download page, download the package with the binary for your operating system, unpack it and run it.
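Once the binary is on your PATH, a typical workflow consists of three commands, run in the directory containing your .tf files:

```shell
terraform init    # download the providers referenced in the configuration
terraform plan    # preview the changes Terraform would make
terraform apply   # create or update the infrastructure
```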

To create IaC, you define the infrastructure in a human-readable configuration language called HCL (HashiCorp Configuration Language). The infrastructure is defined in one or more files with the .tf file extension. How many files you use, and which resources you define in each, is up to you. Use this time to ask yourself questions like:

  • Am I deploying to more than one datacenter?
  • Do I need to create similar / parallel infrastructures?

Answering these questions is relevant, especially for large infrastructures, because the answers will dictate the directory structure and the parametrization of variables and values.
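If you answered yes to both questions, one common (but by no means mandatory) convention is to keep shared resource definitions in reusable modules and have one directory per datacenter or environment. The names below are purely illustrative:

```
├── modules/
│   └── webserver/        # reusable resource definitions
│       ├── main.tf
│       └── variables.tf
├── datacenter-a/
│   ├── main.tf           # instantiates modules with DC-specific values
│   └── terraform.tfvars
└── datacenter-b/
    ├── main.tf
    └── terraform.tfvars
```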

The complete documentation to Hashicorp Terraform can be found here.

Basic example of Terraform IaC file

The following example is a whole IaC definition in one file. It defines one virtual machine and one network, and uses one pre-created router.

main.tf
provider "openstack" {
  version = "1.29.0"
}

resource "openstack_compute_keypair_v2" "admin_keypair" {
  name = "admin_keypair"
  public_key = "ssh-rsa THIS IS JUST AN EXAMPLE KEY"
}

# Create VM
resource "openstack_compute_instance_v2" "http" {
  name        = "http"
  image_name  = "ubuntu-18.04-x86_64"
  flavor_name = "m1.medium"
  key_pair    = openstack_compute_keypair_v2.admin_keypair.name
  user_data   = file("scripts/first-boot.sh")
  network {
    port = openstack_networking_port_v2.http.id
  }
}

# Create Network port
resource "openstack_networking_port_v2" "http" {
  name           = "port-instance-http"
  network_id     = openstack_networking_network_v2.generic.id
  admin_state_up = true
  security_group_ids = [
    openstack_compute_secgroup_v2.ssh.id,
    openstack_compute_secgroup_v2.http.id,
  ]
  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.http.id
  }
}

# Create floating ip
resource "openstack_networking_floatingip_v2" "http" {
  pool = var.external_network
}

# Attach floating ip to instance
resource "openstack_compute_floatingip_associate_v2" "http" {
  floating_ip = openstack_networking_floatingip_v2.http.address
  instance_id = openstack_compute_instance_v2.http.id
}

# Look up the pre-created router
data "openstack_networking_router_v2" "generic" {
  status = "ACTIVE"
}

# Network creation
resource "openstack_networking_network_v2" "generic" {
  name = "network-generic"
}

# Subnet http configuration
resource "openstack_networking_subnet_v2" "http" {
  name            = "subnet-http"
  network_id      = openstack_networking_network_v2.generic.id
  cidr            = "192.168.1.0/24"
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}

# Router interface configuration
resource "openstack_networking_router_interface_v2" "http" {
  router_id = data.openstack_networking_router_v2.generic.id
  subnet_id = openstack_networking_subnet_v2.http.id
}

# Access group, open input port 80
resource "openstack_compute_secgroup_v2" "http" {
  name        = "http"
  description = "Open input http port"
  rule {
    from_port   = 80
    to_port     = 80
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
}

# Access group, open input port 22
resource "openstack_compute_secgroup_v2" "ssh" {
  name        = "ssh"
  description = "Open input ssh port"
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
}
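Note that the file above references var.external_network without declaring it. In a complete configuration this variable would be declared, typically in a separate variables.tf file; the default value below is just an illustration:

variables.tf
```hcl
variable "external_network" {
  description = "Name of the external (floating IP) network pool"
  type        = string
  default     = "external-network" # illustrative value
}
```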

Additional information sources:



Configuration Management

Once the infrastructure is deployed, the next step is to install and configure the software. The tool we in Pan-Net use most for configuration automation is Ansible.

How to start with Ansible

Ansible is an agentless automation tool that uses SSH to connect to remote hosts. There is therefore no need to pre-prepare the VM image (except for Python, which is required on the target host) or to pre-bake an agent that would listen and wait for instructions from the runner or the local host.

The complete Ansible documentation can be found here.

Ansible is known for its simplicity and fast on-ramp to productivity. To get started, the user needs to know only a few basic concepts:

  1. Ansible uses an inventory to describe the hosts on which to apply changes
  2. Ansible processes a YAML file called a playbook, which contains a user-defined list of plays and tasks in the order they should be executed
  3. To create the plays and tasks, the user draws on an extensive library of modules
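A minimal static inventory, for example, is a plain INI-style file grouping hosts under a group name; the hostnames below are hypothetical:

```
# inventory file (INI format); hostnames are hypothetical
[webservers]
web1.example.com
web2.example.com
```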

The installation of Ansible is straightforward, as the main dependency is Python. On Linux or Mac systems the simplest way to install it is to run pip install ansible. On Windows the process is a bit lengthier and you can check it here.


Basic example of Ansible playbook

In this example Ansible will install (or update) the httpd rpm package, then enable and start the Apache service.

playbook.yml
---
- name: Playbook
  hosts: webservers
  become: yes
  tasks:
    - name: Ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: Ensure apache is running and it is enabled
      systemd:
        name: httpd
        state: started
        enabled: yes
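Assuming an inventory file that defines the webservers group, the playbook is executed with the ansible-playbook command:

```shell
# apply playbook.yml to the hosts listed in the file "inventory"
ansible-playbook -i inventory playbook.yml
```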

Alternative Configuration Management Systems



Useful Tips

  • Ansible will need a hosts inventory. Although not required, it is highly recommended to build the inventory dynamically; from our experience there are several ways to tackle this.

  • Keep in mind that you will have to authenticate against the remote hosts (VMs), so you will have to inject the SSH public key into the destination VM before you can configure it:

    • In an OpenStack environment, you can use an OpenStack keypair and UserData.
  • Do not store secrets in the configuration code. Use a secret store for this purpose. We recommend using a combination of:

    • GitLab CI/CD Variables and
    • Hashicorp Vault (preferred) or Ansible Vault
  • For dynamic changes in configuration files, consider using the Ansible template module rather than lineinfile

  • Make the pipeline idempotent (multiple runs against the same host should yield changes at most once).

    • The most common violation of idempotence is restarting services after every pipeline run
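The usual Ansible remedy for the restart problem is a handler: the restart is attached to a task via notify, so it runs only when that task actually reports a change. A minimal sketch, in which the template file and paths are illustrative:

```yaml
---
- name: Idempotent restart example
  hosts: webservers
  become: yes
  tasks:
    - name: Deploy apache config from template
      template:
        src: httpd.conf.j2               # illustrative template name
        dest: /etc/httpd/conf/httpd.conf
      notify: Restart apache             # handler runs only if this task changed something
  handlers:
    - name: Restart apache
      systemd:
        name: httpd
        state: restarted
```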

Last updated: October 23, 2020