Manage NGINX configurations with Ansible


When we set up more servers behind a load balancer, editing the NGINX configuration and managing virtual hosts became frustrating: we could no longer simply copy and paste code between SSH terminals.

This is where Ansible saved our lives.

Configuration

After installing Ansible we created a configuration file, ansible.cfg, just to tell it to read a specific hosts file instead of the default one, so every developer can pull the repository and launch the playbook without any editing. Its content is very simple:

[defaults]

inventory = ./hosts
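As an optional convenience (not part of the original setup), if you are targeting throwaway machines such as Vagrant boxes, you can also disable SSH host key checking in the same file so Ansible doesn't stop at the interactive host key prompt the first time it connects:

```ini
[defaults]
inventory = ./hosts
# Optional: skip the host key prompt for freshly created VMs
host_key_checking = False
```

For long-lived production hosts you will usually want to leave host key checking enabled.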

The content of hosts, called the inventory, is equally simple:

[vagrant]

192.168.33.10 ansible_user=ubuntu

[app]

ec2-xxx-xxx-xxx-xxx.eu-west-1.compute.amazonaws.com ansible_user=ubuntu
ec2-xxx-xxx-xxx-xxx.eu-west-1.compute.amazonaws.com ansible_user=ubuntu

In this case we set up two groups: vagrant and app. vagrant will be our Vagrant machine (we’ll see it later) and app our real servers. Note that we added the ansible_user parameter to tell Ansible which user to connect as. If you use AWS EC2 you’ll need this.
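As a side note, when every host in a group shares the same connection user, Ansible's INI inventory also lets you set it once at group level with a :vars section. A hedged equivalent of the app group above:

```ini
[app]
ec2-xxx-xxx-xxx-xxx.eu-west-1.compute.amazonaws.com
ec2-xxx-xxx-xxx-xxx.eu-west-1.compute.amazonaws.com

[app:vars]
ansible_user=ubuntu
```

This avoids repeating ansible_user on every host line as the group grows.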

Templates

The next step is to create two template files which will contain our actual code: nginx.conf.tpl, which represents /etc/nginx/nginx.conf, the NGINX configuration file, and yoursite.com.conf.tpl, which contains our virtual host and will be placed in /etc/nginx/sites-available/yoursite.com.conf.

The (abridged) content of nginx.conf.tpl will look like this:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
  worker_connections 768;
  # multi_accept on;
}

http {
  # Prevent flood
  limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
  limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
  # Format log
  log_format timed_combined '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '$request_time $upstream_response_time $pipe';

And the (also abridged) content of yoursite.com.conf.tpl:

# Expires map
map $sent_http_content_type $expires {
    default                    off;
    text/html                  epoch;
    text/css                   max;
    application/javascript     max;
    ~image/                    max;
}

server {
  listen 80 default_server;
  listen [::]:80 default_server;
  server_name {{ inventory_hostname }};

  expires $expires;

In this last one we used {{ inventory_hostname }}, which is an Ansible variable: anything between double curly braces in a .tpl file will be parsed as a variable (Ansible uses the Jinja2 templating engine).
In this case it will be replaced with the host name from the inventory file, so for example the line will be rendered as server_name ec2-xxx-xxx-xxx-xxx.eu-west-1.compute.amazonaws.com;.
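Inventory names are not the only values available: Ansible also exposes gathered facts to templates. For example (this line is our illustration, not part of the original templates), you could size the worker processes from the ansible_processor_vcpus fact instead of hard-coding auto:

```nginx
worker_processes {{ ansible_processor_vcpus }};
```

When the template is rendered, the fact is replaced with the number of virtual CPUs Ansible detected on that host.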

Playbooks

Now that we have created our templates, we have to tell Ansible to read them and upload them to specific directories on our servers: this can be achieved with Playbooks, which are files containing rules and tasks to be executed.

Our playbook, called nginx.yml (or nginx.yaml), will perform these tasks:

  1. Upload our virtual host file – yoursite.com.conf.tpl – to /etc/nginx/sites-available/yoursite.com.conf;
  2. Upload our nginx configuration file – nginx.conf.tpl – to /etc/nginx/nginx.conf;
  3. Reload nginx.

This can be written like this:

---
- hosts: app
  become: true
  vars:
    nginx_path: /etc/nginx
    nginx_sites: "{{ nginx_path }}/sites-available"
  tasks:
    - name: Setup nginx vhost
      template:
        src: yoursite.com.conf.tpl
        dest: "{{ nginx_sites }}/yoursite.com.conf"
    - name: Setup nginx conf
      template:
        src: nginx.conf.tpl
        dest: "{{ nginx_path }}/nginx.conf"
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

But let’s analyze the code piece by piece.

hosts: app tells Ansible to use the app group from our inventory, i.e. the two EC2 instances.
become: true runs the tasks as root, as if we were using sudo.
vars contains our two variables, which are used later in the tasks.
tasks contains the actions to perform; here we’ll go through the keys of the last task.
name is the task name. It will appear in your terminal when you run the playbook.
template is a module that renders templates; we pass it two arguments: src, the local file location, and dest, the remote path it will be uploaded to. Learn more in the template module documentation.
notify triggers an event (called a handler in Ansible), defined below, which restarts nginx.
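Since a broken nginx.conf would take the site down on reload, it is worth knowing that the template module also accepts a validate parameter, which runs a check command against the rendered file (passed in as %s) before it replaces the destination. A sketch of the "Setup nginx conf" task with validation added (the validate line is our addition, not part of the original playbook):

```yaml
    - name: Setup nginx conf
      template:
        src: nginx.conf.tpl
        dest: "{{ nginx_path }}/nginx.conf"
        validate: nginx -t -c %s
      notify: restart nginx
```

You can also preview what a playbook would change without applying anything by running ansible-playbook nginx.yml --check --diff.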

Running

Everything is done; now we just have to run ansible-playbook nginx.yml and let the magic happen.

Testing

Before uploading our brand new configurations, we want to test them; otherwise we risk breaking our web servers and taking our site down.

For this reason we adopted Vagrant, creating a new machine with nginx and our files.
To set up an instance, we have to create a Vagrantfile with the following content:

Vagrant.configure(2) do |config|

  config.vm.box = "ubuntu/xenial64"

  # Create a private network, which allows host-only access to the machine using a specific IP.
  config.vm.network "private_network", ip: "192.168.33.10"

  # SSH settings
  config.ssh.insert_key = false
  config.ssh.forward_agent = true
  config.ssh.private_key_path = ["~/.ssh/id_rsa", "~/.vagrant.d/insecure_private_key"]
  config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/authorized_keys"
  config.vm.provision "shell", inline: <<-SHELL
    # (inline provisioning script truncated in the original post)
  SHELL
end

With this, we create a machine running Ubuntu 16.04 (config.vm.box = "ubuntu/xenial64") and a private network, so we can check that everything works by pointing our browser at 192.168.33.10 (config.vm.network "private_network", ip: "192.168.33.10"). In the SSH settings section, we disable the creation of Vagrant's default SSH key and upload our own from ~/.ssh/id_rsa.pub, so we can access the machine simply with ssh [email protected].

Once you have saved the file, run vagrant up from your terminal and the machine will be initialized.

Now we have to install nginx and other packages on our machine. You can SSH into the Vagrant instance, or create a new playbook called nginx_install.yml which will:

  1. Install nginx;
  2. Install PHP (just for testing purposes);
  3. Create our project directory in /var/www/project, where we’ll upload a simple php file to check the configuration;
  4. Upload the php file to /var/www/project;
  5. Delete the default nginx virtual host and its symlink;
  6. Upload our configurations;
  7. Restart nginx.

This can be achieved by the following playbook, which points to vagrant hosts in the inventory.

---
- hosts: vagrant
  become: true
  vars:
    doc_root: /var/www/project
    nginx_sites: /etc/nginx/sites-available
    conf_file: yoursite.com
  tasks:
    - name: Update apt
      apt:
        update_cache: yes

    - name: Install nginx
      apt:
        name: nginx
        state: latest

    - name: Install php7.0
      apt:
        name: "{{ item }}"
        state: latest
      with_items:
        - php-fpm
        - php-mysql
        - php-mbstring
        - php-mcrypt

    - name: Create custom document root
      file:
        path: "{{ doc_root }}"
        state: directory
        owner: www-data
        group: www-data

    - name: Create HTML file
      copy:
        src: index.php
        dest: "{{ doc_root }}/index.php"
        owner: www-data
        group: www-data
        mode: 0644

    - name: Delete default nginx vhost
      file:
        path: "{{ nginx_sites }}/default"
        state: absent

    - name: Delete default nginx vhost symlink
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent

    - name: Setup nginx vhost
      template:
        src: yoursite.com.conf.tpl
        dest: "{{ nginx_sites }}/{{ conf_file }}.conf"

    - name: Create symlink nginx vhost
      file:
        src: "{{ nginx_sites }}/{{ conf_file }}.conf"
        dest: "/etc/nginx/sites-enabled/{{ conf_file }}.conf"
        state: link
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

Save it, run `ansible-playbook nginx_install.yml`, and then point your browser at 192.168.33.10: you should see the test page served by the new configuration.

Finally, if you need to test your configuration with the machine already set up, you can duplicate the nginx.yml file as nginx_vagrant.yml, replace hosts: app with hosts: vagrant, and run ansible-playbook nginx_vagrant.yml.
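If you would rather not maintain two nearly identical playbooks, one common pattern (our suggestion, not from the original post) is to make the target group a variable with a default and override it from the command line:

```yaml
---
- hosts: "{{ target | default('app') }}"
  become: true
```

Running ansible-playbook nginx.yml keeps the current behaviour, while ansible-playbook nginx.yml -e target=vagrant points the same playbook at the Vagrant box.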

 

You can see the complete repository here: Github.com – Ansible Nginx

Replies to “Manage NGINX configurations with Ansible”

  1. The Vagrantfile for this tutorial is incomplete. There is no closing “end” statement, and the inline shell provisioner on line 13 is truncated at the “<”. If you try to run vagrant up with it as-is, it will throw an error on line 13:

    Vagrantfile:13: syntax error, unexpected '<'
    config.vm.provision "shell", inline: <

    To fix this, complete the inline argument (for example with a heredoc or a quoted string) and add a closing end statement at the bottom of the file.
