Compare commits

master..new-app-ini

No commits in common. 'master' and 'new-app-ini' have entirely different histories.

  1. TODO (56 changes)
  2. docs/ansible_do.md (7 changes)
  3. docs/ansible_linode.md (81 changes)
  4. docs/ansible_playbooks.md (63 changes)
  5. docs/index.md (52 changes)
  6. docs/quickstart.md (1 change)
  7. group_vars/all/main.yml (8 changes)
  8. mkdocs.yml (1 change)
  9. podcharlesreid1.yml (3 changes)
  10. podwebhooks.yml (14 changes)
  11. roles/docker/files/install.sh (3 changes)
  12. roles/init-nonroot/tasks/main.yml (1 change)
  13. roles/letsencrypt/tasks/main.yml (4 changes)
  14. roles/pod-bots/README.md (38 changes)
  15. roles/pod-bots/defaults/main.yml (2 changes)
  16. roles/pod-bots/handlers/main.yml (6 changes)
  17. roles/pod-bots/meta/main.yml (60 changes)
  18. roles/pod-bots/tasks/main.yml (2 changes)
  19. roles/pod-bots/tests/inventory (2 changes)
  20. roles/pod-bots/tests/test.yml (5 changes)
  21. roles/pod-bots/vars/main.yml (2 changes)
  22. roles/pod-charlesreid1/README.md (65 changes)
  23. roles/pod-charlesreid1/defaults/main.yml (8 changes)
  24. roles/pod-charlesreid1/tasks/main.yml (62 changes)
  25. roles/pod-webhooks/README.md (38 changes)
  26. roles/pod-webhooks/defaults/main.yml (20 changes)
  27. roles/pod-webhooks/handlers/main.yml (6 changes)
  28. roles/pod-webhooks/tasks/main.yml (524 changes)
  29. roles/pod-webhooks/templates/captain-hook-canary.service.j2 (27 changes)
  30. roles/pod-webhooks/templates/pod-webhooks.service.j2 (16 changes)
  31. roles/sshkeys/tasks/main.yml (9 changes)

TODO (56 changes)

@@ -1,56 +0,0 @@
captain hook config:
- need to have a template
- requires us to set a secret
- have been using "charles@charlesreid1.com"
- md5
captain hook canary setup:
- install service script that checks for the canary file every 10 seconds
- it should run a script in the captain hook install dir
- if it finds the canary file, it should use a docker pod scripts dir script to update captain hook
pod-webhooks:
- need to install captain hook canary and captain hook pull host
- debian/dotfiles/bluebear_scripts/captain_hook_canary.sh
- debian/dotfiles/bluebear_scripts/captain_hook_pull_host.py
- debian/dotfiles/service/captain-hook-canary.service
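The canary-watcher behavior described in the items above (check for a canary file every 10 seconds, then run an update script from the pod scripts dir) could be sketched roughly like this; the default paths and the `update_captain_hook.sh` helper name are assumptions for illustration, not files from this repo:

```shell
#!/bin/sh
# Minimal sketch of a captain-hook-canary watcher (hypothetical paths).
# When the canary file appears, remove it and run the update script,
# so each trigger causes exactly one update.

CANARY_FILE="${CANARY_FILE:-/tmp/captain-hook-canary}"
UPDATE_SCRIPT="${UPDATE_SCRIPT:-/home/charles/pod-webhooks/scripts/update_captain_hook.sh}"

check_canary() {
    if [ -f "${CANARY_FILE}" ]; then
        rm -f "${CANARY_FILE}"
        "${UPDATE_SCRIPT}"
    fi
}

# The systemd service would run this loop:
watch_loop() {
    while true; do
        check_canary
        sleep 10
    done
}
```

The `.service` unit would then just `ExecStart` this script and rely on systemd to restart it on failure.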
making domain swappable:
- submodules of pod-charlesreid1 would need to be reviewed in detail...
- need to template more files than we are currently templating
- the jinja copy from, copy to approach works well
- gitea
- mediawiki
- nginx
- letsencrypt
- the pod-charlesreid1 role defaults has a top_domain set to charlesreid1.com
- it says, "check for letsencrypt certs to this domain (top level domain of entire pod)"
- this does not match up with the nginx config files... which is how things are REALLY set
- top domain is used by gitea...
subdomains/domains approach needs to be:
- specify a list of top level domains
- subdomains are fixed, but needs to be eg pages.${TOP_DOMAIN}
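The fixed-subdomain scheme above amounts to a small cross product of subdomains and top-level domains; a hedged sketch (the domain list here is illustrative):

```shell
# Expand the fixed subdomain set over a list of top-level domains,
# yielding e.g. pages.charlesreid1.com, pages.charlesreid1.red, ...
subdomains="pages hooks bots"
top_domains="charlesreid1.com charlesreid1.red"

for td in ${top_domains}; do
    for sub in ${subdomains}; do
        echo "${sub}.${td}"
    done
done
```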
pod-charlesreid1 /www setup
https://git.charlesreid1.com/charlesreid1/charlesreid1.com
/www/charlesreid1.com/
charlesreid1.com-src/ <-- clone of charlesreid1.com repo, src branch
git/ <-- .git dir for charlesreid1.com repo gh-pages branch
git.data/ <-- .git dir for charlesreid1-data
htdocs/ <-- clone of charlesreid1.com repo gh-pages branch
data/ <-- clone of charlesreid1-data

docs/ansible_do.md (7 changes)

@@ -15,7 +15,7 @@ Table of Contents
## Droplet setup
Start by logging in to your Digital Ocean account
Start by logging in to your digital ocean account
and creating a droplet. You should be able to
create or specify an SSH key.
@@ -55,8 +55,6 @@ Now you can run the base playbook.
defined by default. Define it using the
`--extra-vars` flag.
Specifying a machine name using the `--extra-vars` flag:
```plain
ANSIBLE_CONFIG="do.cfg" \
ansible-playbook \
@@ -67,8 +65,7 @@ ANSIBLE_CONFIG="do.cfg" \
## Run pod playbooks
Once you've run the base playbook, you can install the
docker pod with the corresponding playbook by specifying
`ANSIBLE_CONFIG` and pointing to the Digital Ocean config file.
docker pod with the corresponding playbook.
pod-charlesreid1:

docs/ansible_linode.md (81 changes)

@@ -1,81 +0,0 @@
# Linode Quickstart
This quickstart walks through the process
of setting up a Linode node
using these Ansible playbooks.
Table of Contents
=================
* [Node setup](#node-setup)
* [Run provision and base playbooks](#run-provision-and-base-playbooks)
* [Run pod playbooks](#run-pod-playbooks)
## Node setup
Start by logging in to your Linode account
and creating a new node. You should be able to
create or specify an SSH key.
!!! warning
You must modify the path to the SSH private
key, specified in `linode.cfg` (the Linode
Ansible config file), to match the SSH key that
you added to the droplet at its creation.
!!! warning
Once you create your droplet and it is connected
to the internet via a public IP, you must update
the file `linodehosts` (the Linode Ansible
inventory file) to point to the correct IP address
for the node.
## Run provision and base playbooks
Once you have the correct SSH key in `linode.cfg`
and the correct droplet IP address in `linodehosts`,
you are ready to run the Ansible playbooks.
Run the provision playbook to prepare the droplet for Ansible:
```plain
ANSIBLE_CONFIG="linode.cfg" \
ansible-playbook \
provision.yml
```
Now you can run the base playbook.
!!! warning
You must provide a `machine_name` parameter to
the base playbook. This variable is **_not_**
defined by default. Define it using the
`--extra-vars` flag.
Specifying a machine name using the `--extra-vars` flag:
```plain
ANSIBLE_CONFIG="linode.cfg" \
ansible-playbook \
--extra-vars "machine_name=redbeard" \
base.yml
```
## Run pod playbooks
Once you've run the base playbook, you can install the
docker pod with the corresponding playbook by specifying
`ANSIBLE_CONFIG` and pointing to the Linode config file.
pod-charlesreid1:
```plain
ANSIBLE_CONFIG="linode.cfg" \
ansible-playbook \
--extra-vars "machine_name=redbeard" \
podcharlesreid1.yml
```

docs/ansible_playbooks.md (63 changes)

@@ -10,6 +10,8 @@ Table of Contents
* [provision\.yml: Provision Your Remote Node](#provisionyml-provision-your-remote-node)
* [base\.yml: the base plays](#baseyml-the-base-plays)
* [podcharlesreid1\.yml: charlesreid1 docker pod play](#podcharlesreid1yml-charlesreid1-docker-pod-play)
* [charlesreid1hooks\.yml: webhooks server docker pod play](#charlesreid1hooksyml-webhooks-server-docker-pod-play)
* [charlesreid1bots\.yml: bots docker pod play](#charlesreid1botsyml-bots-docker-pod-play)
* [List of Tags](#list-of-tags)
@@ -25,15 +27,11 @@ step installs `/usr/bin/python`.
ANSIBLE_CONFIG="vagrant.cfg" vagrant provision
```
Running plays against a Linode/Digital Ocean node requires
Running plays against a Digital Ocean node requires
the provision playbook to be run explicitly with the
command:
```plain
# Linode
ANSIBLE_CONFIG="linode.cfg" ansible-playbook provision.yml
# Digital Ocean
ANSIBLE_CONFIG="do.cfg" ansible-playbook provision.yml
```
@@ -60,17 +58,8 @@ ANSIBLE_CONFIG="vagrant.cfg" \
base.yml
```
To run on Linode:
```plain
ANSIBLE_CONFIG="linode.cfg" \
ansible-playbook \
--vault-password-file=.vault_secret \
--extra-vars "machine_name=yoyo" \
base.yml
```
To run on Digital Ocean:
To run on Digital Ocean, use the same command
but specify the corresponding config file:
```plain
ANSIBLE_CONFIG="do.cfg" \
@@ -83,12 +72,17 @@ ANSIBLE_CONFIG="do.cfg" \
## podcharlesreid1.yml: charlesreid1 docker pod play
**host: krash**
**host: redbeard**
The charlesreid1 docker pod runs the following:
- nginx
- letsencrypt/certs
- mediawiki
- gitea
- files/etc
**Example:** Deploy the charlesreid1 docker pod play
on a Vagrant machine.
@@ -113,25 +107,42 @@ ANSIBLE_CONFIG="vagrant.cfg" \
podcharlesreid1.yml
```
**Linode Example:**
**Example:** Deploy the charlesreid1 docker pod play
to a Digital Ocean droplet.
```plain
ANSIBLE_CONFIG="linode.cfg" \
ANSIBLE_CONFIG="do.cfg" \
ansible-playbook \
--vault-password-file=.vault_secret \
--extra-vars "machine_name=yoyo" \
podcharlesreid1.yml
```
**Digital Ocean Example:**
```plain
ANSIBLE_CONFIG="do.cfg" \
ansible-playbook \
--vault-password-file=.vault_secret \
--extra-vars "machine_name=yoyo" \
podcharlesreid1.yml
```
## charlesreid1hooks.yml: webhooks server docker pod play
**host: bluebear**
**host: bluebeard**
The webhooks server docker pod runs the following:
- captain hook webhook server
- hooks.charlesreid1.com domain
- static site hosting for pages.charlesreid1.com
- pages.charlesreid1.com domain
## charlesreid1bots.yml: bots docker pod play
**host: bluebear**
The bots docker pod runs several Python
scripts to keep some Twitter bots going:
- Ginsberg bot flock
- Milton bot flock
- Apollo Space Junk bot flock
## List of Tags

docs/index.md (52 changes)

@@ -20,24 +20,11 @@ Table of Contents
Before you get started:
* Provision a compute node (Vagrant or cloud provider)
* If using Vagrant, see the [Ansible Vagrant](ansible_vagrant.md) page for
instructions on how to provision virtual machines.
* If using a cloud provider, follow the instructions provided by your
cloud provider.
* Provision a compute node (vagrant or cloud provider)
* Configure and enable SSH access
* If using Vagrant, see the [Ansible Vagrant](ansible_vagrant.md) page for
instructions on how to get SSH key information from Vagrant virtual machines.
* If using a cloud provider, you should be provided with an SSH key or
SSH access instructions by your cloud provider.
* Run Ansible with the `base.yml` playbook - see [Ansible Playbooks](ansible_playbooks.md#baseyml-the-base-plays)
and `base.yml` for information and details about this playbook.
* Run Ansible with the pod-charlesreid1 playbook `pod-charlesreid1.yml`
* Configure DNS to point to the IP address of the compute node
* Run Ansible with the `base.yml` playbook
* Run Ansible with the pod playbook of your choice
* Configure DNS to point to compute node IP address
## Docker Pods
@@ -49,11 +36,6 @@ are ready to run these docker pods.
| Pod | Link |
|------------------|--------------------------------------------------------|
| pod-charlesreid1 | <https://git.charlesreid1.com/docker/pod-charlesreid1> |
The following pods **HAVE BEEN DEACTIVATED:**
| Pod | Link |
|------------------|--------------------------------------------------------|
| pod-webhooks | <https://git.charlesreid1.com/docker/pod-webhooks> |
| pod-bots | <https://git.charlesreid1.com/docker/pod-bots> |
@@ -63,11 +45,14 @@ The following pods **HAVE BEEN DEACTIVATED:**
There is one playbook per docker pod, plus a base playbook
and a provision playbook.
| Playbook | Description | Link |
|------------------------|----------------------------------------------------------------------------------------------------------------------|----------------|
| `provision.yml` | (Vagrant-only) Playbook to provision new Ubuntu machines with `/usr/bin/python`. | [link](ansible_playbooks.md#provisionyml-provision-your-remote-node) |
| `base.yml` | Base playbook run by all of the pod playbooks above. | [link](ansible_playbooks.md#baseyml-the-base-plays) |
| `podcharlesreid1.yml` | Playbook to install and run the charlesreid1.com docker pod | [link](https://git.charlesreid1.com/docker/pod-charlesreid1) |
| Playbook | Description |
|------------------------|----------------------------------------------------------------------------------------------------------------------|
| `provision.yml` | (Vagrant-only) Playbook to provision new Ubuntu machines with `/usr/bin/python`. |
| `base.yml` | Base playbook run by all of the pod playbooks above. |
| `podcharlesreid1.yml` | Playbook to install and run the charlesreid1.com docker pod (<https://git.charlesreid1.com/docker/pod-charlesreid1>) |
| `podwebhooks.yml` | (TBA) Playbook to install and run the webhooks pod (<https://git.charlesreid1.com/docker/pod-webhooks>) |
| `podbots.yml` | (TBA) Playbook to install and run the bot pod (<https://git.charlesreid1.com/docker/pod-bots>) |
## Roles
@@ -97,6 +82,8 @@ respective docker pod.
| Role Name | Description |
|-----------------------|--------------------------------------------------------------|
| pod-charlesreid1 | Role specific to the charlesreid1.com docker pod |
| pod-webhooks | Role specific to \{hooks,pages\}.charlesreid1.com docker pod |
| pod-bots | Role specific to bots docker pod |
## Getting Started with Playbooks
@@ -105,7 +92,6 @@ respective docker pod.
|-----------------------------------------------|-----------------------------------------------------------------|
| [docs/index.md](index.md) | Documentation index |
| [docs/quickstart.md](quickstart.md) | Quick start for the impatient (uses Vagrant) |
| [docs/ansible_linode.md](ansible_linode.md) | Guide for running charlesreid1.com playbooks on Linode |
| [docs/ansible_do.md](ansible_do.md) | Guide for running charlesreid1.com playbooks on Digital Ocean |
| [docs/ansible_vagrant.md](ansible_vagrant.md) | Guide for running charlesreid1.com playbooks on Vagrant |
@@ -196,14 +182,8 @@ on how to set up a Vagrant virtual machine to run the
Ansible playbook against, for testing purposes.
## Linode Deployment
See [Ansible Linode](ansible_linode.md) for instructions on how to set up a Linode node
to run the Ansible playbook against.
## Digital Ocean Deployment
## DigitalOcean Deployment
See [Ansible Digital Ocean](ansible_do.md) for instructions on how to set up a Digital Ocean
See [Ansible Digital Ocean](ansible_do.md) for instructions on how to set up a DigitalOcean
node to run the Ansible playbook against.

docs/quickstart.md (1 change)

@@ -13,6 +13,7 @@ Table of Contents
* [Provision Vagrant Machines](#provision-vagrant-machines)
* [Configure Ansible-Vagrant SSH Info](#configure-ansible-vagrant-ssh-info)
* [Cloud Node Setup](#cloud-node-setup)
* [Installing SSH Keys](#installing-ssh-keys)
* [Run Ansible](#run-ansible)
* [Set Up Vault Secret](#set-up-vault-secret)
* [Run the Base Playbook](#run-the-base-playbook)

group_vars/all/main.yml (8 changes)

@@ -25,9 +25,17 @@ charlesreid1_admin_email: "charles@charlesreid1.com"
charlesreid1_port_default: "80"
charlesreid1_port_gitea: "80"
charlesreid1_port_files: "80"
charlesreid1_port_pages: "80"
charlesreid1_port_hooks: "80"
charlesreid1_port_bots: "80"
charlesreid1_port_ssl_default: "443"
charlesreid1_port_ssl_gitea: "443"
charlesreid1_port_ssl_files: "443"
charlesreid1_port_ssl_pages: "443"
charlesreid1_port_ssl_hooks: "443"
charlesreid1_port_ssl_bots: "443"

mkdocs.yml (1 change)

@@ -25,7 +25,6 @@ nav:
- 'Index': 'index.md'
- 'Quickstart': 'quickstart.md'
- 'Ansible on Vagrant': 'ansible_vagrant.md'
- 'Ansible on Linode': 'ansible_linode.md'
- 'Ansible on DigitalOcean': 'ansible_do.md'
- 'Ansible Playbooks': 'ansible_playbooks.md'
- 'Ansible Vault': 'ansible_vault.md'

podcharlesreid1.yml (3 changes)

@@ -14,6 +14,9 @@
- "charlesreid1.red"
- "www.charlesreid1.red"
- "git.charlesreid1.red"
- "pages.charlesreid1.red"
- "bots.charlesreid1.red"
- "hooks.charlesreid1.red"

podwebhooks.yml (14 changes)

@@ -0,0 +1,14 @@
---
# main playbook for webhooks docker pod
# SSL certs are all handled by the pod-charlesreid1 compute node
- name: Install webhooks docker pod (pages.* and hooks.* and bots.* subdomains)
hosts: servers
become: yes
roles:
- role: pod-webhooks
tags: pod-webhooks
charlesreid1_server_name_default: "charlesreid1.red"

roles/docker/files/install.sh (3 changes)

@@ -17,9 +17,10 @@ sudo true
wget -qO- https://get.docker.com/ | sh
# Install docker-compose
COMPOSE_VERSION=`git ls-remote https://github.com/docker/compose | grep refs/tags | grep -oP "[0-9]+\.[0-9][0-9]+\.[0-9]+$" | sort | tail -n 1`
COMPOSE_VERSION=`git ls-remote https://github.com/docker/compose | grep refs/tags | grep -oP "[0-9]+\.[0-9][0-9]+\.[0-9]+$" | tail -n 1`
sudo sh -c "curl -L https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose"
sudo chmod +x /usr/local/bin/docker-compose
sudo sh -c "curl -L https://raw.githubusercontent.com/docker/compose/${COMPOSE_VERSION}/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose"
# Install docker-cleanup command
cd /tmp
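A side note on the version-detection pipeline above: `tail -n 1` only picks the latest release if the incoming tag list is version-sorted, and a plain lexicographic sort would order `1.9.1` after `1.25.0`. A version-aware `sort -V` makes the pick deterministic; the sample tags below are illustrative, not real `git ls-remote` output:

```shell
# Pick the highest version from tag-like strings with a version-aware
# sort (GNU sort -V), mirroring the grep -oP filter in install.sh.
latest="$(printf '1.9.1\n1.25.0\n1.10.0\n' | sort -V | tail -n 1)"
echo "${latest}"   # prints 1.25.0
```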

roles/init-nonroot/tasks/main.yml (1 change)

@@ -5,7 +5,6 @@
become: yes
user:
name: "{{ username }}"
password: "{{ charlesreid1_system_password }}"
shell: /bin/bash
groups: wheel
append: yes

roles/letsencrypt/tasks/main.yml (4 changes)

@@ -65,7 +65,7 @@
- name: "Install /etc/letsencrypt/options-nginx-ssl.conf"
become: yes
get_url:
url: "https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf"
url: "https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf"
dest: /etc/letsencrypt/options-ssl-nginx.conf
when:
- not ssl_options_installed.stat.exists
@@ -79,7 +79,7 @@
- name: "Install /etc/letsencrypt/ssl-dhparams.conf"
become: yes
get_url:
url: "https://raw.githubusercontent.com/certbot/certbot/master/certbot/certbot/ssl-dhparams.pem"
url: "https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem"
dest: /etc/letsencrypt/ssl-dhparams.pem
when:
- not dhparams_installed.stat.exists

roles/pod-bots/README.md (38 changes)

@@ -0,0 +1,38 @@
Role Name
=========
A brief description of the role goes here.
Requirements
------------
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
Role Variables
--------------
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
Dependencies
------------
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
Example Playbook
----------------
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
- hosts: servers
roles:
- { role: username.rolename, x: 42 }
License
-------
BSD
Author Information
------------------
An optional section for the role authors to include contact information, or a website (HTML is not allowed).

roles/pod-bots/defaults/main.yml (2 changes)

@@ -0,0 +1,2 @@
---
# defaults file for pod-bots

roles/pod-bots/handlers/main.yml (6 changes)

@@ -0,0 +1,6 @@
---
# handlers file for pod-bots
#
- name: restart pod-bots
service: name=pod-bots state=restarted

roles/pod-bots/meta/main.yml (60 changes)

@@ -0,0 +1,60 @@
galaxy_info:
author: your name
description: your description
company: your company (optional)
# If the issue tracker for your role is not on github, uncomment the
# next line and provide a value
# issue_tracker_url: http://example.com/issue/tracker
# Some suggested licenses:
# - BSD (default)
# - MIT
# - GPLv2
# - GPLv3
# - Apache
# - CC-BY
license: license (GPLv2, CC-BY, etc)
min_ansible_version: 2.4
# If this a Container Enabled role, provide the minimum Ansible Container version.
# min_ansible_container_version:
# Optionally specify the branch Galaxy will use when accessing the GitHub
# repo for this role. During role install, if no tags are available,
# Galaxy will use this branch. During import Galaxy will access files on
# this branch. If Travis integration is configured, only notifications for this
# branch will be accepted. Otherwise, in all cases, the repo's default branch
# (usually master) will be used.
#github_branch:
#
# Provide a list of supported platforms, and for each platform a list of versions.
# If you don't wish to enumerate all versions for a particular platform, use 'all'.
# To view available platforms and versions (or releases), visit:
# https://galaxy.ansible.com/api/v1/platforms/
#
# platforms:
# - name: Fedora
# versions:
# - all
# - 25
# - name: SomePlatform
# versions:
# - all
# - 1.0
# - 7
# - 99.99
galaxy_tags: []
# List tags for your role here, one per line. A tag is a keyword that describes
# and categorizes the role. Users find roles by searching for tags. Be sure to
# remove the '[]' above, if you add tags to this list.
#
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
# Maximum 20 tags per role.
dependencies: []
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
# if you add dependencies to this list.

roles/pod-bots/tasks/main.yml (2 changes)

@@ -0,0 +1,2 @@
---
# tasks file for pod-bots

roles/pod-bots/tests/inventory (2 changes)

@@ -0,0 +1,2 @@
localhost

roles/pod-bots/tests/test.yml (5 changes)

@@ -0,0 +1,5 @@
---
- hosts: localhost
remote_user: root
roles:
- pod-bots

roles/pod-bots/vars/main.yml (2 changes)

@@ -0,0 +1,2 @@
---
# vars file for pod-bots

roles/pod-charlesreid1/README.md (65 changes)

@@ -1,72 +1,17 @@
pod-charlesreid1 ansible role
=============================
Role Name
=========
This ansible role installs pod-charlesreid1, a docker pod that runs charlesreid1.com.
A brief description of the role goes here.
Requirements
------------
???
Tasks
-----
phase 1:
- clone pod contents
phase 2:
- /www setup
- server_name_default top level domain clone
- docker and docker compose checks
- mediawiki prep
- gitea prep
phase 3:
- construct the pod (docker-compose build)
- install service
- (port mapping in Dockerfile)
- (letsencrypt cert check)
- enable service
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
Role Variables
--------------
List of role variables (set in `defaults/main.yml`):
- `username`
- `pod_install_dir`
- `admin_email`
- `server_name_default`
- `nginx_subdomains_ip`
- `port_default`
- `port_gitea`
- `port_ssl_default`
- `port_ssl_gitea`
- `gitea_app_name`
- `gitea_domain`
- `gitea_secret_key`
- `gitea_internal_token`
- `mysql_password`
- `mediawiki_secretkey`
Most of these have default values set from top-level Ansible variables
prefixed with `charlesreid1`:
- `nonroot_user` (used to set `username`)
- `charlesreid1_admin_email`
- `charlesreid1_server_name_default`
- `charlesreid1_nginx_subdomains_ip`
- `charlesreid1_port_default`
- `charlesreid1_port_gitea`
- `charlesreid1_port_ssl_default`
- `charlesreid1_port_ssl_gitea`
- `charlesreid1_gitea_secret_key`
- `charlesreid1_gitea_internal_token`
- `charlesreid1_mysql_password`
- `charlesreid1_mediawiki_secretkey`
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
Dependencies
------------

roles/pod-charlesreid1/defaults/main.yml (8 changes)

@@ -18,9 +18,17 @@ nginx_subdomains_ip: "{{ charlesreid1_nginx_subdomains_ip }}"
port_default: "{{ charlesreid1_port_default }}"
port_gitea: "{{ charlesreid1_port_gitea }}"
port_files: "{{ charlesreid1_port_files }}"
port_pages: "{{ charlesreid1_port_pages }}"
port_hooks: "{{ charlesreid1_port_hooks }}"
port_bots: "{{ charlesreid1_port_bots }}"
port_ssl_default: "{{ charlesreid1_port_ssl_default }}"
port_ssl_gitea: "{{ charlesreid1_port_ssl_gitea }}"
port_ssl_files: "{{ charlesreid1_port_ssl_files }}"
port_ssl_pages: "{{ charlesreid1_port_ssl_pages }}"
port_ssl_hooks: "{{ charlesreid1_port_ssl_hooks }}"
port_ssl_bots: "{{ charlesreid1_port_ssl_bots }}"
# end nginx configuration variables
# ----------------

roles/pod-charlesreid1/tasks/main.yml (62 changes)

@@ -11,7 +11,7 @@
# clone pod contents
#
# /www setup
# server_name_default top level domain clone
# server-name_default top level domain clone
# docker and docker compose checks
# mediawiki prep
# gitea prep
@@ -65,19 +65,6 @@
- pod-charlesreid1
# Init submodules
- name: Initialize pod-charlesreid1 submodules
become: yes
become_user: "{{ username }}"
command: "git submodule update --init"
args:
chdir: "{{ pod_install_dir }}"
when:
- "pod_charlesreid1_clone_check.stat.exists"
tags:
- pod-charlesreid1
# Pull submodules
- name: Pull pod-charlesreid1 submodules
become: yes
@@ -101,6 +88,7 @@
# Then use the template module to use the template.
- name: Fetch the docker-compose template from the remote machine
run_once: true
fetch:
src: "{{ pod_install_dir }}/docker-compose.yml.j2"
dest: "/tmp/pod-charlesreid1-docker-compose.yml.j2"
@@ -153,7 +141,9 @@
#
# /www/<domain>/
# git/ <-- .git dir for charlesreid1.com repo gh-pages branch
# git.data/ <-- .git dir for charlesreid1-data
# htdocs/ <-- clone of charlesreid1.com repo gh-pages branch
# data/ <-- clone of charlesreid1-data
# -------------
# Install and run the clone www script
@@ -255,6 +245,7 @@
# #####################################
# NGIX CONFIG PREP
#
@@ -280,6 +271,7 @@
# HTTP
- name: Fetch d-nginx-charlesreid1 http configuration templates from remote machine
run_once: true
fetch:
src: "{{ pod_install_dir }}/d-nginx-charlesreid1/conf.d_templates/http.DOMAIN.conf.j2"
dest: "/tmp/http.DOMAIN.conf.j2"
@@ -306,6 +298,7 @@
# HTTPS
- name: Fetch d-nginx-charlesreid1 https configuration templates from remote machine
run_once: true
fetch:
src: "{{ pod_install_dir }}/d-nginx-charlesreid1/conf.d_templates/https.DOMAIN.conf.j2"
dest: "/tmp/https.DOMAIN.conf.j2"
@@ -333,6 +326,7 @@
- name: Fetch d-nginx-charlesreid1 https subdomains configuration templates from remote machine
run_once: true
fetch:
src: "{{ pod_install_dir }}/d-nginx-charlesreid1/conf.d_templates/https.DOMAIN.subdomains.conf.j2"
dest: "/tmp/https.DOMAIN.subdomains.conf.j2"
@@ -571,6 +565,46 @@
register: register_letsencrypt_livecert_gitea
#- name: Check if LetsEncrypt cert for files server name is present
# tags:
# - letsencrypt
# - pod-charlesreid1
# - pod-charlesreid1-certs
# stat:
# path: "/etc/letsencrypt/live/files.{{ server_name_default }}"
# register: register_letsencrypt_livecert_files
- name: Check if LetsEncrypt cert for pages server name is present
tags:
- letsencrypt
- pod-charlesreid1
- pod-charlesreid1-certs
stat:
path: "/etc/letsencrypt/live/pages.{{ server_name_default }}"
register: register_letsencrypt_livecert_pages
- name: Check if LetsEncrypt cert for hooks server name is present
tags:
- letsencrypt
- pod-charlesreid1
- pod-charlesreid1-certs
stat:
path: "/etc/letsencrypt/live/hooks.{{ server_name_default }}"
register: register_letsencrypt_livecert_hooks
- name: Check if LetsEncrypt cert for bots server name is present
tags:
- letsencrypt
- pod-charlesreid1
- pod-charlesreid1-certs
stat:
path: "/etc/letsencrypt/live/bots.{{ server_name_default }}"
register: register_letsencrypt_livecert_bots
# If top level and subdomain certs are present, start/restart the
# pod-charlesreid1 service.

roles/pod-webhooks/README.md (38 changes)

@@ -0,0 +1,38 @@
Role Name
=========
A brief description of the role goes here.
Requirements
------------
Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
Role Variables
--------------
A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
Dependencies
------------
A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
Example Playbook
----------------
Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:
- hosts: servers
roles:
- { role: username.rolename, x: 42 }
License
-------
BSD
Author Information
------------------
An optional section for the role authors to include contact information, or a website (HTML is not allowed).

roles/pod-webhooks/defaults/main.yml (20 changes)

@@ -0,0 +1,20 @@
---
# defaults file for pod-webhooks
username: "{{ nonroot_user }}"
# where pod-webhooks is installed
webhooks_install_dir: "/home/{{ username }}/pod-webhooks"
# shared secret
# (must be entered every time you create a webhook)
captain_hook_secret: "{{ charlesreid1_captain_hook_secret }}"
# ----------------
# subpages nginx
# configuration variables
server_name_default: "{{ charlesreid1_server_name_default }}"
# end nginx configuration variables
# ----------------
#

roles/pod-webhooks/handlers/main.yml (6 changes)

@@ -0,0 +1,6 @@
---
# handlers file for pod-webhooks
#
- name: restart pod-webhooks
service: name=pod-webhooks state=restarted

roles/pod-webhooks/tasks/main.yml (524 changes)

@@ -0,0 +1,524 @@
---
###########################
# Set up webhooks pod
#
# git.charlesreid1.com/docker/pod-webhooks
# git.charlesreid1.com/docker/d-nginx-subdomains
#
# Tasks:
# ------
#
# clone pod contents
#
# /www setup
# pages subdomain clone
# hooks subdomain clone
# bots subdomain clone
# docker and docker compose checks
# pages subdomain prep
# captain hook setup
# captain hook canary setup
#
# construct the pod (docker-compose build)
# install service
# (port mapping in Dockerfile)
# (letsencrypt cert check)
# enable service
#
# NOTE: This is almost identical to
# pod-charlesreid1, except for a few
# different sections. We could have
# made everything shared, but f--k it
# this has dragged on long enough.
#
###########################
# #####################################
# CLONE POD-WEBHOOKS
# Check if we already cloned it
- name: Check if pod-webhooks repo is cloned
stat:
path: "{{ webhooks_install_dir }}"
register: pod_webhooks_clone_check
tags:
- pod-webhooks
# Clone it
- name: Clone pod-webhooks
become: yes
become_user: "{{ username }}"
git:
repo: 'https://github.com/charlesreid1-docker/pod-webhooks.git'
dest: "{{ webhooks_install_dir }}"
recursive: yes
when:
- "not pod_webhooks_clone_check.stat.exists"
tags:
- pod-webhooks
# Pull it
- name: Pull pod-webhooks
become: yes
become_user: "{{ username }}"
command: "git pull"
args:
chdir: "{{ webhooks_install_dir }}"
when:
- "pod_webhooks_clone_check.stat.exists"
tags:
- pod-webhooks
# Pull submodules
- name: Pull pod-webhooks submodules
become: yes
become_user: "{{ username }}"
command: "git submodule update --remote"
args:
chdir: "{{ webhooks_install_dir }}"
when:
- "pod_webhooks_clone_check.stat.exists"
tags:
- pod-webhooks
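The three tasks above implement a clone-or-update pattern: clone with submodules on first run, pull and refresh submodules afterwards. As a plain-shell sketch (the function name and parameters are illustrative, not part of the repo):

```shell
# Clone-or-update, roughly what the git tasks above do via Ansible.
# Usage: clone_or_update <repo-url> <dest-dir>
clone_or_update() {
    repo_url="$1"
    dest="$2"
    if [ ! -d "$dest/.git" ]; then
        # First run: clone, including submodules
        git clone --recursive "$repo_url" "$dest"
    else
        # Subsequent runs: pull and refresh submodules
        git -C "$dest" pull
        git -C "$dest" submodule update --remote
    fi
}
```

The role keeps the clone, pull, and submodule steps as separate tasks so the pull path only runs when the stat check says the checkout already exists.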
# #####################################
# BUILD DOCKER-COMPOSE FILE FROM TEMPLATE
#
- name: Fetch the docker-compose template from the remote machine
run_once: true
fetch:
src: "{{ webhooks_install_dir }}/docker-compose.yml.j2"
dest: "/tmp/pod-webhooks-docker-compose.yml.j2"
flat: yes
fail_on_missing: yes
tags:
- pod-webhooks
- pod-webhooks-docker
- name: Install the docker-compose file
become: yes
become_user: "{{ username }}"
template:
src: "/tmp/pod-webhooks-docker-compose.yml.j2"
dest: "{{ webhooks_install_dir }}/docker-compose.yml"
mode: 0640
force: yes
tags:
- pod-webhooks
- pod-webhooks-docker
# #####################################
# SET UP /WWW DIRECTORY
#
# Create /www directory
# for subdomains content
- name: Create the /www directory
become: yes
file:
path: "/www"
state: directory
recurse: yes
owner: "{{ username }}"
group: "{{ username }}"
tags:
- pod-webhooks
- pod-webhooks-content
# Templating the scripts that populate /www
# with subdomain pages is done in the
# tasks below...
# #####################################
# SUBDOMAIN PAGES SETUP (ALL)
#
# Initializes the /www folder structure for
# /www/pages.*
# /www/hooks.*
# /www/bots.*
#
# This is done with template python scripts
#
# /www/<subdomain>.charlesreid1.com/
# <subdomain>.charlesreid1.com-src/
# git/
# htdocs/
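The layout in the comment above, made concrete as a sketch (directory names are taken from the comment; the real setup is done by the templated python scripts, and the root is parameterized here only for illustration):

```shell
# Create the per-subdomain skeleton under $WWW_ROOT
# (defaults to /www; override for testing).
WWW_ROOT="${WWW_ROOT:-/www}"

make_subdomain_dirs() {
    sub="$1"
    base="$WWW_ROOT/${sub}.charlesreid1.com/${sub}.charlesreid1.com-src"
    # git/ holds the clone, htdocs/ holds the served content
    mkdir -p "$base/git" "$base/htdocs"
}
```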
- name: "Fetch the initial subdomain clone commands script template"
fetch:
src: "{{ webhooks_install_dir }}/scripts/subdomains_init_setup.py.j2"
dest: "/tmp/subdomains_init_setup.py.j2"
flat: yes
fail_on_missing: yes
tags:
- pod-webhooks
- pod-webhooks-content
- name: "Install the initial subdomain clone commands script"
become: yes
become_user: "{{ username }}"
template:
src: "/tmp/subdomains_init_setup.py.j2"
dest: "{{ webhooks_install_dir }}/scripts/subdomains_init_setup.py"
mode: 0755
force: yes
tags:
- pod-webhooks
- pod-webhooks-content
- name: Run initial clone commands to set up bots/pages/hooks subdomains at /www/
  become: yes
  become_user: "{{ username }}"
  command: "python {{ webhooks_install_dir }}/scripts/subdomains_init_setup.py"
  tags:
    - pod-webhooks
    - pod-webhooks-content
# #####################################
# PAGES SETUP
#
# Initializes the contents of /www/pages.*/*
- name: Fetch the initial pages script
fetch:
src: "{{ webhooks_install_dir }}/scripts/pages_init_setup.py.j2"
dest: "/tmp/pages_init_setup.py.j2"
flat: yes
fail_on_missing: yes
tags:
- pod-webhooks
- pod-webhooks-content
- name: Install the pages init setup script
become: yes
become_user: "{{ username }}"
template:
src: "/tmp/pages_init_setup.py.j2"
dest: "{{ webhooks_install_dir }}/scripts/pages_init_setup.py"
mode: 0755
force: yes
tags:
- pod-webhooks
- pod-webhooks-content
- name: Run initial clone commands to set up pages at /www/pages.charlesreid1.com
  become: yes
  become_user: "{{ username }}"
  command: "python {{ webhooks_install_dir }}/scripts/pages_init_setup.py"
  tags:
    - pod-webhooks
    - pod-webhooks-content
# #####################################
# DOCKER/DOCKER COMPOSE
# The docker role, in the base playbook,
# will install docker-compose, but we want
# to double check that the executable exists
- name: Check that docker compose executable is available
stat:
path: "/usr/local/bin/docker-compose"
register: webhooks_register_docker_compose
tags:
- pod-webhooks
- pod-webhooks-docker
# Also make sure the docker daemon is running
- name: Enable docker service
become: yes
service:
name: docker
enabled: yes
state: restarted
tags:
- pod-webhooks
- pod-webhooks-docker
- pod-webhooks-services
# #####################################
# NGINX CONFIG PREP
#
# prepare the config files for the
# subdomains nginx server:
# - copy templates from remote machine
# - clean conf.d directory
# - copy rendered templates to remote machine
- name: Clean d-nginx-subdomains conf.d directory
become: yes
become_user: "{{ username }}"
command: "python {{ webhooks_install_dir }}/d-nginx-subdomains/scripts/clean_config.py"
tags:
- pod-webhooks
# Install the d-nginx-subdomains configuration templates
#
- name: Fetch d-nginx-subdomains configuration templates from remote machine
run_once: true
fetch:
src: "{{ webhooks_install_dir }}/d-nginx-subdomains/conf.d_templates/http.subdomains.conf.j2"
dest: "/tmp/http.subdomains.conf.j2"
flat: yes
fail_on_missing: yes
tags:
- pod-webhooks
- name: Install the d-nginx-subdomains configuration templates
become: yes
become_user: "{{ username }}"
template:
src: "/tmp/http.subdomains.conf.j2"
dest: "{{ webhooks_install_dir }}/d-nginx-subdomains/conf.d/http.subdomains.conf"
force: yes
tags:
- pod-webhooks
# #####################################
# CAPTAIN HOOK SETUP
- name: Fetch the captain hook config file template
fetch:
src: "{{ webhooks_install_dir }}/b-captain-hook/config.json.j2"
dest: "/tmp/captain_hook_config.json.j2"
flat: yes
fail_on_missing: yes
tags:
- captain-hook
- name: Install the captain hook config file
become: yes
become_user: "{{ username }}"
template:
src: "/tmp/captain_hook_config.json.j2"
dest: "{{ webhooks_install_dir }}/b-captain-hook/config.json"
mode: 0755
force: yes
tags:
- captain-hook
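The config file rendered here carries the webhook secret; per the repo's TODO, the value in use has been an md5 hash of an email address. A sketch of generating such a value (the actual keys inside config.json.j2 are not shown in this diff):

```shell
# Generate a webhook-secret value in the style the TODO describes:
# the md5 of an email address, printed as a 32-char hex digest.
printf '%s' 'charles@charlesreid1.com' | md5sum | cut -d' ' -f1
```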
# #####################################
# CAPTAIN HOOK CANARY SCRIPT SETUP
#
# Start with the canary script first.
#
# The whole pod has to be built and the
# pod startup service installed
# before the canary service can be
# installed.
# Script 1 - canary script itself
# Use the template provided to make it
#
- name: Fetch the captain hook canary script template from the remote machine
run_once: true
fetch:
src: "{{ webhooks_install_dir }}/scripts/captain_hook_canary.sh.j2"
dest: "/tmp/captain_hook_canary.sh.j2"
flat: yes
fail_on_missing: yes
tags:
- captain-hook
# Install the captain hook canary script
#
- name: Install the captain hook canary script
become: yes
become_user: "{{ username }}"
template:
src: "/tmp/captain_hook_canary.sh.j2"
dest: "{{ webhooks_install_dir }}/scripts/captain_hook_canary.sh"
mode: 0755
force: yes
tags:
- captain-hook
# Script 2 - pull host script
# Do it all again for the pull host script
# Use the template provided to make it
#
- name: Fetch the captain hook pull host script template from the remote machine
run_once: true
fetch:
src: "{{ webhooks_install_dir }}/scripts/captain_hook_pull_host.py.j2"
dest: "/tmp/captain_hook_pull_host.py.j2"
flat: yes
fail_on_missing: yes
tags:
- captain-hook
# Install the captain hook pull host script
- name: Install the captain hook pull host script
become: yes
become_user: "{{ username }}"
template:
src: "/tmp/captain_hook_pull_host.py.j2"
dest: "{{ webhooks_install_dir }}/scripts/captain_hook_pull_host.py"
mode: 0755
force: yes
tags:
- captain-hook
# #####################################
# CONSTRUCT THE POD
#
# This task is very time-consuming.
- name: Build pod-webhooks from scratch
  become: yes
  become_user: "{{ username }}"
  command: "/usr/local/bin/docker-compose build --no-cache"
  args:
    chdir: "{{ webhooks_install_dir }}"
  when:
    - "webhooks_register_docker_compose.stat.exists"
  tags:
    - pod-webhooks
    - pod-webhooks-docker
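If the `community.docker` collection is available, the raw `docker-compose` command above could be replaced with the `docker_compose` module, which gives proper changed/failed reporting. A hedged sketch (the module and parameter names come from that collection, not from this repo; note that unlike `docker-compose build`, this also brings the pod up):

```yaml
- name: Build pod-webhooks images (module-based alternative)
  become: yes
  become_user: "{{ username }}"
  community.docker.docker_compose:
    project_src: "{{ webhooks_install_dir }}"
    build: yes
    nocache: yes
    state: present
  tags:
    - pod-webhooks
    - pod-webhooks-docker
```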
# #####################################
# INSTALL STARTUP SERVICE
#
# Check if the webhooks docker pod service
# is installed. If not, install it.
### # Just kidding - don't bother.
### # Always reinstall the startup service.
### #
### - name: Check if pod-webhooks service is installed
### stat:
### path: "/etc/systemd/system/pod-webhooks.service"
### register: pod_webhooks_service_check
### tags:
### - pod-webhooks-services
# Fetch the pod-webhooks service template
#
- name: Fetch the pod-webhooks template from remote host machine
run_once: true
fetch:
src: "{{ webhooks_install_dir }}/scripts/pod-webhooks.service.j2"
dest: "/tmp/pod-webhooks.service.j2"
flat: yes
fail_on_missing: yes
tags:
- pod-webhooks-services
# Apply the template and install it for goodness sake
#
- name: Install pod-webhooks service
become: yes
template:
src: "/tmp/pod-webhooks.service.j2"
dest: "/etc/systemd/system/pod-webhooks.service"
mode: 0774
tags:
- pod-webhooks-services
# Now enable the pod-webhooks service.
# Don't worry about SSL cert checks, not our problem.
- name: Enable pod-webhooks service
become: yes
service:
name: pod-webhooks
enabled: yes
state: restarted
  when:
    - "webhooks_register_docker_compose.stat.exists and webhooks_register_docker_compose.stat.executable"
tags:
- pod-webhooks-services
# #####################################
# CAPTAIN HOOK CANARY SERVICE SETUP
### # Begin by checking to see if installed
### # Just kidding - always reinstall the canary service from the repo template
### #
### - name: Check if the captain hook canary service is installed
### stat:
### path: "/etc/systemd/system/captain-hook-canary.service"
### register: canary_service_check
### tags:
### - pod-webhooks-services
### - captain-hook
# Fetch the captain hook canary startup service template onto local computer
#
- name: Fetch the captain hook canary service template file from the remote machine
run_once: true
fetch:
src: "{{ webhooks_install_dir }}/scripts/captain-hook-canary.service.j2"
dest: "/tmp/captain-hook-canary.service.j2"
flat: yes
fail_on_missing: yes
tags:
- pod-webhooks-services
- captain-hook
# Apply the captain hook canary startup service template
#
- name: Install the captain hook canary startup service
become: yes
template:
src: "/tmp/captain-hook-canary.service.j2"
dest: "/etc/systemd/system/captain-hook-canary.service"
mode: 0774
force: yes
tags:
- pod-webhooks-services
- captain-hook
# Now enable the captain hook canary startup service.
#
- name: Enable the captain hook canary startup service
become: yes
service:
name: captain-hook-canary
enabled: yes
state: restarted
tags:
- pod-webhooks-services
- captain-hook

27
roles/pod-webhooks/templates/captain-hook-canary.service.j2

@@ -0,0 +1,27 @@
# Service script for starting up the
# captain hook canary service.
#
# The main purpose of this service is to
# allow the captain hook webhook container
# to send a signal to the host machine
# (by touching a file in a shared directory).
#
# Each repository has its own webhooks,
# and each repository can create their own
# canary files and have custom actions to
# deal with them.
[Unit]
Description=captain hook canary script
Requires=pod-webhooks.service
After=pod-webhooks.service
[Service]
Restart=always
ExecStart={{ webhooks_install_dir }}/scripts/captain_hook_canary.sh
ExecStop=/bin/sh -c '/usr/bin/pgrep -f captain_hook_canary | /usr/bin/xargs /bin/kill'
[Install]
WantedBy=default.target
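The TODO file in this diff describes what this service runs: a script that checks for a canary file every 10 seconds and, when the file appears, runs an update script from the pod's scripts directory. A hypothetical sketch of that check (both paths below are illustrative assumptions; the real script is rendered from captain_hook_canary.sh.j2):

```shell
# Check for the canary file touched by the webhook container and,
# if present, consume it and run the host-side update script.
# Both paths are placeholders, not values from this repo.
CANARY_FILE="${CANARY_FILE:-/tmp/captain-hook/canary}"
UPDATE_SCRIPT="${UPDATE_SCRIPT:-/opt/pod-webhooks/scripts/update_captain_hook.sh}"

check_canary() {
    if [ -f "$CANARY_FILE" ]; then
        rm -f "$CANARY_FILE"   # consume the signal
        "$UPDATE_SCRIPT"       # update captain hook on the host
    fi
}

# The service keeps this running in a poll loop, e.g.:
#   while true; do check_canary; sleep 10; done
```

Keeping the loop in a foreground process is what lets the unit use `Restart=always`; the `ExecStop` pgrep/kill pair then tears it down by process name.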

16
roles/pod-webhooks/templates/pod-webhooks.service.j2

@@ -0,0 +1,16 @@
# Service script for starting up the webhooks docker pod
# (hooks subdomain, pages subdomain)
[Unit]
Description=webhooks and subdomains docker pod
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStart=/usr/local/bin/docker-compose -f {{ webhooks_install_dir }}/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f {{ webhooks_install_dir }}/docker-compose.yml down
[Install]
WantedBy=default.target

9
roles/sshkeys/tasks/main.yml

@@ -124,12 +124,3 @@
- nonroot-ssh
##################################
# nonroot: automatically accept new keys
- name: Automatically accept new SSH keys
  become: yes
  become_user: "{{ username }}"
  lineinfile:
    path: "/home/{{ username }}/.ssh/config"
    line: "StrictHostKeyChecking=accept-new"
    create: yes
    mode: 0600
  tags:
    - nonroot-ssh
