134 Commits

Author SHA1 Message Date
a907947e78 remove plan 2026-04-13 18:52:21 -07:00
b44d535a93 Merge branch 'claude-plans-mysql-no-root' into claude-plans
* claude-plans-mysql-no-root:
  add fixes for getting non-root mysql user
2026-04-13 18:50:33 -07:00
98d9368d65 add fixes for getting non-root mysql user 2026-04-13 18:50:02 -07:00
b475f2142e Merge branch 'claude-plans-first-fix-sql-backups' into claude-plans
* claude-plans-first-fix-sql-backups:
  fix wiki backups and canary script to check for missing trailer, not just nonzero files
  remove stupid dead file
  add plan to fix sql backups, plus implemented fixes for sql backups
2026-04-13 18:38:30 -07:00
647e24013e fix wiki backups and canary script to check for missing trailer, not just nonzero files 2026-04-13 18:38:02 -07:00
dba56b4d43 remove stupid dead file 2026-04-13 18:37:40 -07:00
134450d922 add plan to fix sql backups, plus implemented fixes for sql backups 2026-04-13 18:30:32 -07:00
2772e3447c add mysql no root pw transition plan 2026-04-02 21:38:49 -07:00
c6f0183d4b add mediawiki upgrade plan 2026-04-02 21:38:39 -07:00
1b3d1776b4 overwrite existing rendered templates by default 2026-03-25 20:58:10 -07:00
1f47029098 Merge pull request #2 from charlesreid1-docker/claude-audit
Implement changes recommended from Claude audit
2026-03-25 20:25:07 -07:00
0a89fe68c8 use docker volume for nginx log storage 2026-03-25 19:53:29 -07:00
d0b8e83ffc convert _.conf to _.conf.j2 2026-03-25 19:53:12 -07:00
89d5849708 apply rate-limiting to wiki urls consistently 2026-03-25 19:47:31 -07:00
13a3a1cb5e add X-Forwarded-Proto header to let mw/gitea know requests came in over https 2026-03-25 19:46:53 -07:00
892eddcbbb fix imagemagick thumbnail size limit 2026-03-25 19:43:57 -07:00
c95fcfaaf2 implement frontend/backend network segmentation 2026-03-25 19:43:19 -07:00
0e09187b3e Merge branch 'claude-fix-backup-scripts' into claude-audit
* claude-fix-backup-scripts:
  fix var handling (more defensive)
2026-03-25 19:41:52 -07:00
8afcd3073b fix var handling (more defensive) 2026-03-25 19:41:39 -07:00
efb3fa0140 fix variable name error 2026-03-25 19:39:32 -07:00
3048c35647 actually print error message in backups canary 2026-03-25 19:38:29 -07:00
86997b5a55 add restart policy for mysql and mw 2026-03-25 19:37:13 -07:00
fadee2ea91 fix min password length to 10 2026-03-25 19:35:06 -07:00
87582b77b2 Merge branch 'claude-pin-versions' into claude-audit
* claude-pin-versions:
  pin gitea and nginx versions
2026-03-25 19:34:26 -07:00
aaa226d82a pin gitea and nginx versions 2026-03-25 19:33:08 -07:00
5f26ebac25 Merge branch 'claude-mw-upgrade-key' into claude-audit
* claude-mw-upgrade-key:
  no .mysql.rootpw.cnf file (empty)
  get MW upgrade key from env var
2026-03-25 10:36:56 -07:00
940e21f507 no .mysql.rootpw.cnf file (empty) 2026-03-25 10:35:38 -07:00
99ab12a2ba get MW upgrade key from env var 2026-03-25 10:35:38 -07:00
6c49dd3171 fix stupid timer issue 2025-12-06 19:04:03 -08:00
99616d5de5 5 per second 2025-10-16 02:43:30 -07:00
83f898192a bump rate limit to 6 requests per second 2025-10-16 02:42:59 -07:00
5eb9ee5c3c add rate limits to /wiki, /w, and gitea endpoints 2025-10-16 02:42:44 -07:00
df23627e9a add a mediawiki cache directory to mw conf 2025-10-16 02:42:25 -07:00
76dc820b2d bind-mount /var/log/nginx between container and host 2025-10-16 02:42:09 -07:00
6b2b21b668 add base nginx.conf with rate limiting 2025-09-24 12:26:20 -07:00
bcb04257fa add slow query log config for mysql 2025-09-24 12:26:08 -07:00
0cad1e0398 add nginx and mysql config files 2025-09-24 12:25:50 -07:00
14d70a919d add rate-limiting to https config 2025-09-24 12:25:13 -07:00
3aba9729e6 add Troubleshooting.md 2025-06-14 03:48:49 -07:00
eb840384d1 update gitea theme name in app.ini.j2 2025-06-14 03:47:59 -07:00
5bf613cd56 ban more jerks 2025-05-24 19:36:17 -07:00
ccfed3f3fc update mw skin 2025-05-24 19:36:17 -07:00
194e619537 3 weeks for backups 2025-03-09 10:39:23 -07:00
a0f9548fcf ban more jerks 2025-03-07 16:13:15 -08:00
418315150a ban more jerks 2025-03-07 15:55:14 -08:00
ebb304d374 ban more jerks 2025-03-07 15:43:51 -08:00
8580c2c1f0 ban jerks 2025-03-06 12:24:43 -08:00
a3f460113a add instructions for blocking IP addresses 2024-11-16 19:17:46 -08:00
e94f911d99 add "ban jerks" section to nginx config 2024-11-16 19:17:31 -08:00
f7446c5a2d chmod the logs 2023-10-22 08:27:17 -07:00
6d1fa940a7 add wikifiles restore script 2023-10-15 13:06:49 -07:00
cfac7c69dc fix env var problem 2023-10-15 13:06:48 -07:00
3287d57554 fix script comment 2023-10-15 13:06:48 -07:00
d347024939 update gitea app.ini jinja template 2023-10-02 07:34:19 -07:00
8e4f86c8c6 smol makefile fix 2023-08-22 04:33:15 -07:00
5b855a575a make adjustments to bring all pod backup scripts in sync 2022-07-16 13:19:39 -07:00
4248f86c64 fixup restore db script 2022-07-15 17:52:58 -07:00
f36011d4cc fixup restore wikifiles 2022-07-15 17:49:59 -07:00
4953dfb8f3 remove tree subdomain 2022-06-05 21:05:20 -07:00
d003935769 update php.ini upload size to match localsettings.php 2022-03-23 20:05:47 -07:00
58e795bd98 fix backup canary script 2022-03-17 15:20:02 -07:00
0709e883ea 8am 2022-03-17 14:37:04 -07:00
8965515215 run backups canary every day 2022-03-17 14:36:00 -07:00
69523ba027 remove tree 2022-03-17 14:18:04 -07:00
2a4ed33024 add tree htpasswd to docker-compose 2022-03-09 20:32:57 -08:00
f880c44b79 add .tree.htpasswd to tree subdomain for auth protection 2022-03-09 20:18:26 -08:00
5cac0fa869 fix cert for tree subdomain 2022-03-09 09:01:14 -08:00
303ebf8ea3 add tree subdomain to renew cert script 2022-03-09 08:36:49 -08:00
4d638c456e bind-mount /www tree subdomain htdocs 2022-03-08 09:09:11 -08:00
72fc465d1d add tree subdomain to nginx config 2022-03-08 09:08:52 -08:00
2f579f4cfa restore 2022-03-08 09:02:06 -08:00
1bc4bb4902 add mw to skin footer 2022-03-06 18:49:12 -08:00
d91b7dc735 flush wikifiles and wikidb 2022-02-20 19:13:45 -08:00
acb2f57176 jerks 2022-02-20 19:13:45 -08:00
3482004df0 add php.ini 2022-02-07 18:18:54 -08:00
4ed1b479ef JERKS 2022-02-07 16:08:44 -08:00
5a931c2e38 another jerk 2022-02-07 15:49:43 -08:00
17da345041 more jerks 2022-02-07 15:47:19 -08:00
5e9be9e6c8 fix one more robots.txt 2022-02-07 15:20:27 -08:00
0148fe3e55 fix bind-mounting robots.txt 2022-02-07 15:07:46 -08:00
a144d6070b fix parsing of du command 2022-02-06 17:36:38 -08:00
989036ac21 add certbot to rsyslog filters 2022-02-06 17:36:38 -08:00
523ed50647 tell tar to stop crying about the log file and just skip it 2022-01-23 12:12:48 -08:00
03f81f4a25 more horrible hard-coded python binary 2022-01-18 22:02:34 -08:00
002ad20d7d stupid stupid stupid hard-coded shim path 2022-01-18 21:57:14 -08:00
2cb6a39990 restore weekly schedule 2022-01-18 21:48:58 -08:00
920ff3839e update gitea robots 2022-01-16 13:37:47 -08:00
d3dae75d38 add robots.txt to charlesreid1.com and git.charlesreid1.com 2022-01-16 13:27:27 -08:00
4004ba6ccb add robots dir 2022-01-16 13:27:15 -08:00
cf982ee2c6 add robots.txt to docker-compose template 2022-01-16 13:26:52 -08:00
efd9487953 add cut cmd to du cmd in aws backup script 2022-01-16 13:26:37 -08:00
b2552b6345 fix gitea backup script 2022-01-16 12:28:06 -08:00
1a8f699ab4 UGH more endless fixes 2022-01-16 12:07:09 -08:00
5e3ab1768c add boto/botocore checks, rearrange service installation steps 2022-01-16 11:53:43 -08:00
291ff2d28a restore daily runs 2022-01-16 11:53:11 -08:00
229975883c restore once a week schedule 2022-01-15 09:20:26 -08:00
af7ef822f0 remove commented lines 2022-01-15 09:18:28 -08:00
cc3688a982 add botocore/boto3 check for canary 2022-01-15 08:51:16 -08:00
e080cda745 add missing directive to rsyslog conf file 2022-01-15 08:05:34 -08:00
45c0f1390f update certbot renewal service 2022-01-14 13:24:51 -08:00
dacef1ac09 fix rsyslog config file 2022-01-14 13:22:52 -08:00
03a8456a2a fix execstartpre for canary service 2022-01-12 14:19:14 -08:00
d1d749d8e4 update makefile and add rsyslog config file 2022-01-12 14:06:56 -08:00
74adabc43a update log strategy - all services log to syslog, rely on user to filter system log 2022-01-12 13:55:37 -08:00
3566305577 add rsyslog filtering option 2022-01-12 13:53:36 -08:00
7442b2ee87 completely remove StandardOutput: from all serivces 2022-01-10 11:17:07 -08:00
9aa49166a6 remove StandardOutput from service files https://github.com/systemd/systemd/pull/10944 2022-01-10 10:38:02 -08:00
f06ac24ecb fix file: to append: 2022-01-10 01:36:18 -08:00
b796cc9756 bump backup services schedule to daily 2022-01-09 11:52:24 -08:00
25063ed251 pin mediawiki version to 1.34 in mw Dockerfile 2021-12-30 16:40:02 -08:00
72a47d71f2 more fail2ban cleanup 2021-12-30 16:31:31 -08:00
dba09976fb remove non-functional fail2banlog ext 2021-12-30 16:30:03 -08:00
7a3c76b9f9 remove unused script (use one in scripts/ instead) 2021-12-30 15:56:30 -08:00
18fd6038df fix clean-templates file 2021-12-30 15:56:30 -08:00
18814b6a1d fix pod install dir variable name 2021-12-30 15:43:08 -08:00
fc35d94b3c fix typos in apply templates script 2021-12-30 14:46:39 -08:00
3604bc1378 ignore environment when cleaning rendered templates 2021-12-30 14:44:14 -08:00
f0f65db9e3 make mkdocs-material submodule url https instad of git so it works without ssh key preconfigured 2021-12-30 14:37:15 -08:00
e5686d4d9a Merge branch 'feature/environment-template'
* feature/environment-template:
  massive rename of all ansible variables
  prep apply templates script for ansible variable rename
  fix missing var name in environment.j2
2021-12-30 12:00:06 -08:00
30c4a24b8d massive rename of all ansible variables 2021-12-30 11:59:45 -08:00
904122db17 prep apply templates script for ansible variable rename 2021-12-30 11:59:43 -08:00
8760edf0c3 fix missing var name in environment.j2 2021-12-30 11:56:53 -08:00
b4650771bc add environment template 2021-12-30 11:41:26 -08:00
b8182774a4 add --ignore-failed-read flag to gitea tar command 2021-12-26 19:26:48 -08:00
bb3b6c027a update certbot service to send logs to /var/log 2021-12-24 15:41:49 -08:00
1d18b5e71c send backup canary logs to /var/log 2021-12-24 15:41:22 -08:00
858cb6c3c8 send backup service logs to /var/log 2021-12-24 15:41:04 -08:00
0a5f9f99ac fix service description 2021-12-24 15:39:32 -08:00
2ac521e1c9 fix env var name in clean olderthan script 2021-12-19 10:48:58 -08:00
ffc4f1d0c0 add --no-progress flag to aws bacup script 2021-12-19 10:48:40 -08:00
7246b0845c cover cleanolderthan service with makefile install/uninstall rules 2021-12-12 11:29:02 -08:00
67acb4a32b Merge branch 'clean-backups'
* clean-backups:
  add systemd timer for clean backups service
2021-12-12 11:25:10 -08:00
15d4bcecc7 add systemd timer for clean backups service 2021-12-12 11:24:56 -08:00
9c92f3fd75 Merge branch 'service-updates'
* service-updates:
  add service to clean files older than N days
  add ExecStartPre to existing backup services
  clean older than 45 days
2021-12-12 11:16:38 -08:00
64 changed files with 1472 additions and 358 deletions

.gitmodules

@@ -1,3 +1,3 @@
[submodule "mkdocs-material"]
path = mkdocs-material
url = git@github.com:charlesreid1-docker/mkdocs-material.git
url = https://github.com/charlesreid1/mkdocs-material


@@ -63,13 +63,14 @@ help:
templates:
@find * -name "*.service.j2" | xargs -I '{}' chmod 644 {}
@find * -name "*.timer.j2" | xargs -I '{}' chmod 644 {}
python3 $(POD_CHARLESREID1_DIR)/scripts/apply_templates.py
/home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/apply_templates.py
list-templates:
@find * -name "*.j2"
clean-templates:
python3 $(POD_CHARLESREID1_DIR)/scripts/clean_templates.py
# sudo is required because bind-mounted gitea files end up owned by root. stupid docker.
sudo -E /home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/clean_templates.py
# Backups
@@ -97,31 +98,42 @@ mw-fix-skins:
# /www Dir
clone-www:
python3 $(POD_CHARLESREID1_DIR)/scripts/git_clone_www.py
/home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/git_clone_www.py
pull-www:
python3 $(POD_CHARLESREID1_DIR)/scripts/git_pull_www.py
/home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/git_pull_www.py
install:
ifeq ($(shell which systemctl),)
$(error Please run this make command on a system with systemctl installed)
endif
@/home/charles/.pyenv/shims/python3 -c 'import botocore' || (echo "Please install the botocore library using python3 or pip3 binary"; exit 1)
@/home/charles/.pyenv/shims/python3 -c 'import boto3' || (echo "Please install the boto3 library using python3 or pip3 binary"; exit 1)
sudo cp $(POD_CHARLESREID1_DIR)/scripts/pod-charlesreid1.service /etc/systemd/system/pod-charlesreid1.service
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-aws.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-cleanolderthan.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-gitea.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-wikidb.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-wikifiles.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-gitea.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-aws.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/canary/pod-charlesreid1-canary.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/certbot/pod-charlesreid1-certbot.{service,timer} /etc/systemd/system/.
sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/10-pod-charlesreid1-rsyslog.conf /etc/rsyslog.d/.
sudo chmod 664 /etc/systemd/system/pod-charlesreid1*
sudo systemctl daemon-reload
sudo systemctl restart rsyslog
sudo systemctl enable pod-charlesreid1
sudo systemctl enable pod-charlesreid1-backups-wikidb.timer
sudo systemctl enable pod-charlesreid1-backups-wikifiles.timer
sudo systemctl enable pod-charlesreid1-backups-gitea.timer
sudo systemctl enable pod-charlesreid1-backups-aws.timer
sudo systemctl enable pod-charlesreid1-backups-cleanolderthan.timer
sudo systemctl enable pod-charlesreid1-canary.timer
sudo systemctl enable pod-charlesreid1-certbot.timer
@@ -129,37 +141,54 @@ endif
sudo systemctl start pod-charlesreid1-backups-wikifiles.timer
sudo systemctl start pod-charlesreid1-backups-gitea.timer
sudo systemctl start pod-charlesreid1-backups-aws.timer
sudo systemctl start pod-charlesreid1-backups-cleanolderthan.timer
sudo systemctl start pod-charlesreid1-canary.timer
sudo systemctl start pod-charlesreid1-certbot.timer
sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-aws.service.log
sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-cleanolderthan.service.log
sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-gitea.service.log
sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-wikidb.service.log
sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-wikifiles.service.log
sudo chown syslog:syslog /var/log/pod-charlesreid1-canary.service.log
uninstall:
ifeq ($(shell which systemctl),)
$(error Please run this make command on a system with systemctl installed)
endif
-sudo systemctl disable pod-charlesreid1
-sudo systemctl disable pod-charlesreid1-backups-aws.timer
-sudo systemctl disable pod-charlesreid1-backups-cleanolderthan.timer
-sudo systemctl disable pod-charlesreid1-backups-gitea.timer
-sudo systemctl disable pod-charlesreid1-backups-wikidb.timer
-sudo systemctl disable pod-charlesreid1-backups-wikifiles.timer
-sudo systemctl disable pod-charlesreid1-backups-gitea.timer
-sudo systemctl disable pod-charlesreid1-backups-aws.timer
-sudo systemctl disable pod-charlesreid1-canary.timer
-sudo systemctl disable pod-charlesreid1-certbot.timer
# Leave the pod running!
# -sudo systemctl stop pod-charlesreid1
-sudo systemctl stop pod-charlesreid1-backups-aws.timer
-sudo systemctl stop pod-charlesreid1-backups-cleanolderthan.timer
-sudo systemctl stop pod-charlesreid1-backups-gitea.timer
-sudo systemctl stop pod-charlesreid1-backups-wikidb.timer
-sudo systemctl stop pod-charlesreid1-backups-wikifiles.timer
-sudo systemctl stop pod-charlesreid1-backups-gitea.timer
-sudo systemctl stop pod-charlesreid1-backups-aws.timer
-sudo systemctl stop pod-charlesreid1-canary.timer
-sudo systemctl stop pod-charlesreid1-certbot.timer
-sudo rm -f /etc/systemd/system/pod-charlesreid1.service
-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-aws.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-cleanolderthan.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-gitea.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-wikidb.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-wikifiles.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-gitea.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-aws.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-canary.{service,timer}
-sudo rm -f /etc/systemd/system/pod-charlesreid1-certbot.{service,timer}
sudo systemctl daemon-reload
-sudo rm -f /etc/rsyslog.d/10-pod-charlesreid1-rsyslog.conf
-sudo systemctl restart rsyslog
.PHONY: help


@@ -0,0 +1,324 @@
# Upgrade Plan: MediaWiki 1.34 → 1.39+ and MySQL 5.7 → 8.0
## Context
MediaWiki 1.34 (EOL Nov 2021) and MySQL 5.7 (EOL Oct 2023) are both end-of-life and no longer receive security patches. The goal is to upgrade both with **minimal downtime** by running old and new versions side-by-side, testing the new stack, then switching over — with the ability to roll back instantly.
**Additional motivation:** The REST API v1 endpoint `/w/rest.php/v1/page/{title}/with_html` returns a 500 error ("Unable to fetch Parsoid HTML") because MW 1.34 does not bundle Parsoid. MW 1.39 bundles Parsoid in-process, which is required for this endpoint to work. This blocks tools (e.g., MediaWiki MCP) that rely on the REST API to fetch rendered HTML.
## Strategy: Blue-Green Deployment
Run the old stack ("blue") untouched while building and testing the new stack ("green") alongside it. Nginx acts as the switch — changing one `proxy_pass` line flips between old and new.
```
                          ┌─ stormy_mw (MW 1.34) ──── stormy_mysql (MySQL 5.7)      ← BLUE (old)
nginx ── proxy_pass ──────┤
                          └─ stormy_mw_new (MW 1.39) ─ stormy_mysql_new (MySQL 8)   ← GREEN (new)
```
Both stacks use **separate volumes** — the old data is never touched.
---
## Decisions (Locked In)
- **Target:** MediaWiki 1.39 LTS (smallest jump from 1.34, can do 1.39→1.42 later)
- **Skin:** Patch Bootstrap2 to replace deprecated API calls for MW 1.39 compatibility
- **EmbedVideo:** Skip for now — don't include in green stack. Add back later if needed.
- **Extensions in green stack:** SyntaxHighlight_GeSHi, ParserFunctions, Math (all have REL1_39 branches)
---
## Phase 1: Preparation (no downtime)
All work happens on the VPS alongside the running production stack.
### 1.1 Full backup
```bash
# Database dump
make backups
# or manually:
./scripts/backups/wikidb_dump.sh
# Also back up the MW volume (uploaded images, cache)
docker run --rm -v stormy_mw_data:/data -v /tmp/mw_backup:/backup \
alpine tar czf /backup/mw_data_backup.tar.gz -C /data .
```
### 1.2 Create new Dockerfiles
**`d-mediawiki-new/Dockerfile`** — based on `mediawiki:1.39`
- Same structure as current Dockerfile
- Update extension COPY paths for new versions
- Update apt packages if needed (texlive, imagemagick still required)
- Apache config stays the same (port 8989)
**`d-mysql-new/Dockerfile`** — based on `mysql:8.0`
- Same structure as current
- Keep slow-log config (syntax compatible with 8.0)
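As a starting point, here is a minimal sketch of the new MediaWiki Dockerfile, written as a shell heredoc for illustration. The `FROM`, `EXPOSE`, and `CMD` lines mirror the existing `d-mediawiki/Dockerfile`; the COPY steps are placeholders to fill in from the current file:
```bash
# Sketch only: scaffold d-mediawiki-new, mirroring d-mediawiki's structure
mkdir -p d-mediawiki-new
cat > d-mediawiki-new/Dockerfile <<'EOF'
FROM mediawiki:1.39
EXPOSE 8989
# TODO: COPY extensions, skins, LocalSettings.php, apache conf, php/php.ini
#       following the existing d-mediawiki/Dockerfile
CMD apache2-foreground
EOF
```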
### 1.3 Update extensions for target MW version
Create `scripts/mw/build_extensions_dir_139.sh` to clone REL1_39 branches:
| Extension | Current | New |
|-----------|---------|-----|
| SyntaxHighlight_GeSHi | REL1_34 | REL1_39 |
| ParserFunctions | REL1_34 | REL1_39 |
| Math | REL1_34 | REL1_39 |
| EmbedVideo | v2.7.3 | **Skipped** (add back later) |
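A minimal sketch of `scripts/mw/build_extensions_dir_139.sh`, following the clone-and-checkout pattern of the existing extensions build script (same wikimedia repo URLs, with REL1_34 swapped for REL1_39):
```bash
#!/bin/bash
# Sketch: clone REL1_39 branches of the extensions kept in the green stack
set -eu
mkdir -p extensions
cd extensions
for Extension in SyntaxHighlight_GeSHi ParserFunctions Math; do
    if [ ! -d "${Extension}" ]; then
        git clone "https://github.com/wikimedia/mediawiki-extensions-${Extension}.git" "${Extension}"
        ( cd "${Extension}" && git checkout --track remotes/origin/REL1_39 )
    else
        echo "Skipping ${Extension}"
    fi
done
```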
### 1.4 Patch Bootstrap2 skin
Replace deprecated calls in `skins/Bootstrap2/`:
- `wfRunHooks('hook', ...)` → `Hooks::run('hook', ...)` (MW 1.35+)
- `wfMsg('key')` → `wfMessage('key')->text()`
- `wfEmptyMsg('key')` → `wfMessage('key')->isDisabled()`
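A quick way to enumerate the call sites before patching (grep patterns are a rough sketch; review hits manually):
```bash
# Locate deprecated API calls in the skin
grep -rn 'wfRunHooks\|wfMsg(\|wfEmptyMsg(' skins/Bootstrap2/
```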
### 1.5 Update LocalSettings.php.j2 (new copy for green stack)
Changes needed for MW 1.39:
- `require_once "$IP/extensions/Math/Math.php"` → `wfLoadExtension( 'Math' )`
- `$wgDBmysql5 = true;` — remove (deprecated in 1.39)
- Remove `wfLoadExtension( 'EmbedVideo' )` (skipped for now)
- Review other deprecated settings
- Add Parsoid configuration (bundled in MW 1.39, runs in-process — no separate container needed):
```php
# Parsoid (required for REST API with_html endpoint)
wfLoadExtension( 'Parsoid', "$IP/vendor/wikimedia/parsoid/extension.json" );
$wgParsoidSettings = [
'useSelser' => true,
];
```
---
## Phase 2: Build Green Stack (no downtime)
### 2.1 Add new services to docker-compose.yml.j2
```yaml
stormy_mysql_new:
restart: always
build: d-mysql-new
container_name: stormy_mysql_new
volumes:
- "stormy_mysql_new_data:/var/lib/mysql"
- "./d-mysql/conf.d:/etc/mysql/conf.d:ro"
environment:
- MYSQL_ROOT_PASSWORD={{ pod_charlesreid1_mysql_password }}
networks:
- backend_new
stormy_mw_new:
restart: always
build: d-mediawiki-new
container_name: stormy_mw_new
volumes:
- "stormy_mw_new_data:/var/www/html"
environment:
- MEDIAWIKI_SITE_SERVER=https://{{ pod_charlesreid1_server_name }}
- MEDIAWIKI_SECRETKEY={{ pod_charlesreid1_mediawiki_secretkey }}
- MEDIAWIKI_UPGRADEKEY={{ pod_charlesreid1_mediawiki_upgradekey }}
- MYSQL_HOST=stormy_mysql_new
- MYSQL_DATABASE=wikidb
- MYSQL_USER=root
- MYSQL_PASSWORD={{ pod_charlesreid1_mysql_password }}
depends_on:
- stormy_mysql_new
networks:
- frontend
- backend_new
```
Add `stormy_mysql_new_data`, `stormy_mw_new_data` to volumes, `backend_new` to networks.
### 2.2 Build and start green containers
```bash
docker compose build stormy_mysql_new stormy_mw_new
docker compose up -d stormy_mysql_new stormy_mw_new
```
Old containers keep running — no disruption.
### 2.3 Migrate database to new MySQL 8.0
```bash
# Dump from old MySQL 5.7
docker exec stormy_mysql sh -c \
'mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD" --default-character-set=binary' \
> /tmp/wikidb_for_upgrade.sql
# Load into new MySQL 8.0
docker exec -i stormy_mysql_new sh -c \
'mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' \
< /tmp/wikidb_for_upgrade.sql
```
### 2.4 Migrate MW uploaded files
```bash
# Copy images/uploads from old volume to new volume
docker run --rm \
-v stormy_mw_data:/old:ro \
-v stormy_mw_new_data:/new \
alpine sh -c 'cp -a /old/images /new/images 2>/dev/null; echo done'
```
### 2.5 Run MediaWiki database upgrade
```bash
docker exec stormy_mw_new php /var/www/html/maintenance/update.php --quick
```
This migrates the DB schema from MW 1.34 → 1.39 format.
---
## Phase 3: Test Green Stack (no downtime)
### 3.1 Direct browser test
Temporarily expose the new MW on a different port for testing:
```yaml
stormy_mw_new:
ports:
- "8990:8989" # temporary, for direct testing
```
Visit `http://<vps-ip>:8990` to verify MW loads, pages render, login works.
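A scripted version of the same smoke test (placeholder IP as above; expects HTTP 200):
```bash
curl -s -o /dev/null -w '%{http_code}\n' "http://<vps-ip>:8990/wiki/Main_Page"
```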
### 3.2 Test via nginx (brief switchover)
Edit nginx config to point `/wiki/` and `/w/` at `stormy_mw_new:8989`:
```nginx
proxy_pass http://stormy_mw_new:8989/wiki/;
```
```bash
docker exec stormy_nginx nginx -s reload
```
Test the live site. If broken, switch back:
```nginx
proxy_pass http://stormy_mw:8989/wiki/;
```
```bash
docker exec stormy_nginx nginx -s reload
```
**Switchover and rollback each take ~2 seconds** (nginx reload, no container restart).
### 3.3 Test checklist
- [ ] Wiki pages render correctly
- [ ] Bootstrap2 skin displays properly
- [ ] Login works
- [ ] Math equations render
- [ ] Syntax highlighting works
- [ ] Image uploads work
- [ ] File downloads work
- [ ] Edit pages (as sysop)
- [ ] Search works
- [ ] Special pages load
- [ ] REST API: `curl -s -o /dev/null -w '%{http_code}' https://wiki.golly.life/w/rest.php/v1/page/Main_Page/with_html` returns `200`
- [ ] REST API: response contains rendered HTML (not "Unable to fetch Parsoid HTML")
- [ ] MediaWiki MCP tool can fetch pages without 500 errors
---
## Phase 4: Switchover (~2 seconds downtime)
Once testing passes:
### 4.1 Final data sync
Right before switchover, re-dump and re-load the database to capture any edits made since Phase 2:
```bash
# Fresh dump
docker exec stormy_mysql sh -c \
'mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD" --default-character-set=binary' \
> /tmp/wikidb_final.sql
# Load into new
docker exec -i stormy_mysql_new sh -c \
'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "DROP DATABASE wikidb; CREATE DATABASE wikidb;"'
docker exec -i stormy_mysql_new sh -c \
'mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /tmp/wikidb_final.sql
# Re-run schema upgrade
docker exec stormy_mw_new php /var/www/html/maintenance/update.php --quick
```
### 4.2 Switch nginx
Update proxy_pass in nginx config, reload. **This is the only moment of downtime.**
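For safety, validate the config before reloading. A sketch, using the same reload command as Phase 3.2:
```bash
# After editing proxy_pass in the rendered nginx conf:
docker exec stormy_nginx nginx -t        # validate config first
docker exec stormy_nginx nginx -s reload # the ~2s switchover
```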
### 4.3 Stop old containers (optional, can defer)
```bash
docker compose stop stormy_mysql stormy_mw
```
Keep volumes intact for rollback.
---
## Phase 5: Rollback (if needed)
At any point after switchover:
```bash
# Point nginx back to old containers
# (edit proxy_pass back to stormy_mw:8989)
docker compose start stormy_mysql stormy_mw
docker exec stormy_nginx nginx -s reload
```
Old containers + old volumes are untouched. Rollback is instant.
**Keep old containers and volumes for at least 2 weeks** before removing.
---
## Files to Create/Modify
| File | Action |
|------|--------|
| `d-mediawiki-new/Dockerfile` | Create — based on `mediawiki:1.39` |
| `d-mediawiki-new/charlesreid1-config/` | Create — copy from d-mediawiki, update extensions |
| `d-mysql-new/Dockerfile` | Create — based on `mysql:8.0` |
| `docker-compose.yml.j2` | Add green stack services, volumes, network |
| `d-nginx-charlesreid1/conf.d/https.DOMAIN.conf.j2` | Switchover: change proxy_pass targets |
| `scripts/mw/build_extensions_dir_139.sh` | Create — clone REL1_39 branches |
| `d-mediawiki-new/charlesreid1-config/mediawiki/LocalSettings.php.j2` | Update for MW 1.39 compat |
| `d-mediawiki-new/charlesreid1-config/mediawiki/skins/Bootstrap2/` | Patch deprecated API calls |
---
## Risk Assessment
| Risk | Likelihood | Mitigation |
|------|-----------|------------|
| Bootstrap2 skin breaks on MW 1.39 | MEDIUM | Patching deprecated calls; have Vector as fallback |
| Math extension rendering changes | LOW | REL1_39 branch exists; test rendering |
| MySQL 8 query compatibility | LOW | MW 1.39 officially supports MySQL 8.0 |
| Uploaded images lost | NONE | Copied to new volume; old volume preserved |
| Database corruption on migration | LOW | Old DB untouched; dump/restore is safe |
| Pages using EmbedVideo break | LOW | Videos won't render but pages still load; add back later |
---
## Implementation Order
1. **Prepare** new Dockerfiles and extension builds (Phase 1)
2. **Build** green stack alongside production (Phase 2)
3. **Test** thoroughly (Phase 3)
4. **Switch** when confident (Phase 4)
5. **Clean up** old containers after 2 weeks (Phase 5)

PlanFixBackups.md

@@ -0,0 +1,405 @@
# Plan: Fix the Broken wikidb Backup Script
## Status
**BLOCKING:** The MySQL no-root-password migration (`MySqlNoRootPasswordPlan.md`)
is on hold until backups are working. We will not touch the database until we
have a verified, complete, restorable dump in hand.
## What we observed
On 2026-04-13 at 18:02 PDT we ran `scripts/backups/wikidb_dump.sh` as a
pre-flight safety net. After ~14 seconds the output file stopped growing at
459,628,206 bytes (~439 MB) and the script hung. After 6+ minutes:
- The `mysqldump` process inside `stormy_mysql` was still alive but in `S`
(sleeping) state, using ~1% CPU.
- `SHOW PROCESSLIST` on MySQL showed **no** mysqldump connection — MySQL had
already dropped it.
- The dump file ended mid-`INSERT`, mid-row, with **no** `-- Dump completed on …`
trailer. The dump is unusable.
So: every "successful" run of this script may have been silently producing
truncated dumps. We do not know how long this has been broken or whether any
recent backup in `/home/charles/backups` or in S3 is restorable. **That is
question one.**
## Root cause hypothesis
`scripts/backups/wikidb_dump.sh` runs:
```bash
DOCKERX="${DOCKER} exec -t"
${DOCKERX} ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD" --default-character-set=binary' > "${BACKUP_TARGET}"
```
The `-t` flag allocates a pseudo-TTY inside the container. Two problems with
that:
1. **PTY corrupts binary output.** A PTY translates `LF` → `CRLF` on output.
`mysqldump --default-character-set=binary` writes raw `_binary` blobs that
contain `\n` bytes; these get rewritten in transit, silently corrupting the
dump even when it does complete.
2. **PTY buffers can deadlock on large streams.** PTYs have small kernel
buffers (typically 4 KB). When the redirect target (`> file`) drains slower
than mysqldump produces, or when MySQL hits `net_write_timeout` and closes
the connection, mysqldump can end up sleeping on a PTY write that will
never complete. That matches what we saw: MySQL connection gone, mysqldump
alive but sleeping, file frozen at ~439 MB.
The script also strips the first line with `tail -n +2` to drop mysqldump's
"Using a password on the command line interface can be insecure" warning. The
warning goes to **stderr**, not stdout, so this `tail` is at best a no-op and
at worst silently deletes the first line of real SQL.
## Affected files
| File | Change |
|------|--------|
| `scripts/backups/wikidb_dump.sh` | Remove `-t`; switch auth to `MYSQL_PWD` env; remove broken `tail -n +2`; add completion-trailer check; add `--single-transaction --quick --routines --triggers --events` |
| `scripts/backups/wikidb_restore_test.sh` | **NEW** — restore the latest dump into a throwaway MySQL container and run sanity queries |
| `scripts/backups/README.md` *(if present)* | Document the restore-test command and integrity check |
We will not touch `scripts/mysql/restore_database.sh` here — it is broken
independently (references the deleted `.mysql.rootpw.cnf`) and is tracked
separately.
---
## Phase 0: Triage (do this first, before any changes)
### Step 0.1: Kill the hung mysqldump
```bash
docker exec stormy_mysql sh -c 'pkill -9 mysqldump || true'
# also kill the host-side docker exec wrapper if it is still around
pgrep -af 'docker exec.*mysqldump' || true
```
After this, confirm nothing is running:
```bash
docker exec stormy_mysql sh -c 'pgrep -a mysqldump || echo none'
```
### Step 0.2: Remove the truncated dump
```bash
rm -i /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql
```
### Step 0.3: Audit existing backups — are *any* of them complete?
We need to know whether we have a known-good dump anywhere. For each candidate
file, the last bytes should contain `-- Dump completed on`:
```bash
for f in $(find /home/charles/backups -name 'wikidb_*.sql' -mtime -30 | sort); do
trailer=$(tail -c 200 "$f" | tr -d '\0' | grep -o 'Dump completed on[^"]*' || echo "MISSING")
size=$(stat -c %s "$f")
echo "$f size=$size trailer=$trailer"
done
```
Any file showing `MISSING` is truncated and **not a real backup**. Record the
results — we need to know whether the most recent good dump is from yesterday,
last week, or never.
### Step 0.4: Audit the S3 backups
```bash
source ./environment
aws s3 ls "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}/" --recursive | grep wikidb | tail -20
```
Pull the most recent one down to a scratch dir and trailer-check it the same
way as Step 0.3. **Do not assume it is good just because it exists.**
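A sketch of the pull-and-check, assuming the newest key sorts last and reusing the Step 0.3 trailer test:
```bash
latest=$(aws s3 ls "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}/" --recursive \
  | grep wikidb | sort | tail -1 | awk '{print $4}')
mkdir -p /tmp/s3check
aws s3 cp "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}/${latest}" /tmp/s3check/
tail -c 200 "/tmp/s3check/$(basename "$latest")" | grep -q 'Dump completed on' \
  && echo OK || echo TRUNCATED
```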
### Step 0.5: Decide whether to pause writes
If Step 0.3 + 0.4 show no recent good backup, consider whether to pause writes
to the wiki (read-only mode via `$wgReadOnly` in `LocalSettings.php`) until we
have one. This is a judgement call — if the most recent good backup is days old
but the wiki is low-traffic, the risk of leaving it writable while we fix the
script is low. Decide explicitly, do not just drift.
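If we do pause writes, a minimal way to do it without a rebuild (hypothetical maintenance message; delete the appended line to re-enable writes):
```bash
# Append a read-only flag to LocalSettings.php in the running MW container
docker exec stormy_mw sh -c \
  'echo "\$wgReadOnly = \"Backups under repair; wiki is temporarily read-only.\";" >> /var/www/html/LocalSettings.php'
```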
---
## Phase 1: Fix the script
### Step 1.1: Edit `scripts/backups/wikidb_dump.sh`
Replace the docker exec block with:
```bash
# Pass the password via env to avoid:
# - the cmdline-password warning on stderr
# - the password showing up in `ps` inside the container
# No `-t`: PTY corrupts binary dumps and can deadlock on large output.
docker exec -i \
-e MYSQL_PWD \
"${CONTAINER_NAME}" \
sh -c 'exec mysqldump \
--user=root \
--single-transaction \
--quick \
--routines \
--triggers \
--events \
--default-character-set=binary \
--databases wikidb' \
> "${BACKUP_TARGET}"
```
Notes on each flag:
- `-i` — keep stdin open (no PTY). This is the single most important change.
- `-e MYSQL_PWD` — forwards the host's `MYSQL_PWD` env var into the container
for this one exec call. mysqldump reads `MYSQL_PWD` automatically. Set it on
the host before invoking the script:
```bash
export MYSQL_PWD="$(docker exec stormy_mysql printenv MYSQL_ROOT_PASSWORD)"
```
We pull it from the container so we don't have to duplicate the secret on
the host. The systemd unit / cron wrapper that runs this script will need
the same line.
- `--single-transaction` — InnoDB-only consistent snapshot without table
locks. wikidb is InnoDB. This is the standard recommendation for live MW
databases.
- `--quick` — stream rows one at a time instead of buffering whole tables in
RAM. Important for large `text` / `revision` tables.
- `--routines --triggers --events` — include stored programs. Cheap insurance.
- Removed `-uroot -p"$MYSQL_ROOT_PASSWORD"` from the inner sh -c, replaced
with `--user=root` + `MYSQL_PWD`.
### Step 1.2: Remove the broken `tail -n +2` block
The "warning" it was trying to strip went to stderr, never stdout. The
existing code:
```bash
tail -n +2 "${BACKUP_TARGET}" > "${BACKUP_TARGET}.tmp"
mv "${BACKUP_TARGET}.tmp" "${BACKUP_TARGET}"
```
is silently deleting the first line of real SQL (typically the
`-- MySQL dump …` header comment). Delete the block entirely.
### Step 1.3: Add an integrity check
After the dump, before declaring success:
```bash
# A complete mysqldump always ends with `-- Dump completed on …`.
if ! tail -c 200 "${BACKUP_TARGET}" | grep -q 'Dump completed on'; then
echo "ERROR: dump file ${BACKUP_TARGET} is missing the completion trailer." >&2
echo " mysqldump did not finish successfully." >&2
exit 2
fi
# Sanity: file should be at least 50 MB. Tune the floor as you like.
size=$(stat -c %s "${BACKUP_TARGET}")
if [ "${size}" -lt $((50 * 1024 * 1024)) ]; then
echo "ERROR: dump file ${BACKUP_TARGET} is only ${size} bytes; suspicious." >&2
exit 3
fi
echo "Dump OK: ${BACKUP_TARGET} (${size} bytes)"
```
`set -eux` is already at the top of the script, so any failed step exits
non-zero. Good — make sure whatever runs the script (systemd, cron) actually
notices that exit code and alerts.
---
## Phase 2: Verify the new script works
### Step 2.1: Run it
```bash
export MYSQL_PWD="$(docker exec stormy_mysql printenv MYSQL_ROOT_PASSWORD)"
source ./environment
bash ./scripts/backups/wikidb_dump.sh
```
Time it. On a healthy `--quick` stream, 400 MB of wikidb should take well
under a minute on local disk.
### Step 2.2: Verify the trailer
```bash
tail -c 200 /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql | tr -d '\0'
```
Must end with `-- Dump completed on YYYY-MM-DD HH:MM:SS`.
### Step 2.3: Verify the byte count is sane
It should be **larger** than the truncated 439 MB we saw earlier (because the
truncated file was missing the tail end of a table). Compare to the largest
recent S3 backup if you have one.
### Step 2.4: Spot-check the SQL
```bash
head -50 /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql
```
Should start with `-- MySQL dump …` (NOT with `CREATE TABLE` — if it starts
with `CREATE TABLE` then the dead `tail -n +2` is still there, eating the
header).
---
## Phase 3: Prove the dump is restorable
A backup is only a backup if you have actually restored from it. Until then
it is a file of unknown provenance.
### Step 3.1: Spin up a throwaway MySQL container
```bash
docker run -d --rm \
--name wikidb_restore_test \
-e MYSQL_ROOT_PASSWORD=temp_test_pw_$$ \
mysql:5.7 # or whatever version stormy_mysql is — check with: docker inspect stormy_mysql --format '{{.Config.Image}}'
```
Wait for it to be ready:
```bash
until docker exec wikidb_restore_test sh -c 'mysqladmin -uroot -p"$MYSQL_ROOT_PASSWORD" ping' 2>/dev/null; do
sleep 2
done
```
### Step 3.2: Pipe the dump in
```bash
docker exec -i wikidb_restore_test sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' \
< /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql
```
Should complete with no errors.
### Step 3.3: Run sanity queries against the restored DB
```bash
docker exec wikidb_restore_test sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
USE wikidb;
SELECT COUNT(*) AS pages FROM page;
SELECT COUNT(*) AS revisions FROM revision;
SELECT COUNT(*) AS texts FROM text;
SELECT MAX(rev_timestamp) AS most_recent_edit FROM revision;
"'
```
Compare those numbers to live `stormy_mysql`:
```bash
docker exec -i stormy_mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
USE wikidb;
SELECT COUNT(*) FROM page;
SELECT COUNT(*) FROM revision;
SELECT COUNT(*) FROM text;
SELECT MAX(rev_timestamp) FROM revision;
"'
```
They should match (allowing for any edits between the dump time and the live
query).
### Step 3.4: Tear down
```bash
docker stop wikidb_restore_test
```
`--rm` removes it on stop. No leftover state.
### Step 3.5: Bake this into a script
Save the Phase 3 commands as `scripts/backups/wikidb_restore_test.sh` so we
can re-run it on demand. It should take a backup file path as its single
argument and exit non-zero on any mismatch.
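A sketch of that script, bundling Steps 3.1 through 3.4. It loads the dump into a throwaway container matching the production image and prints the sanity counts; `set -e` makes any load error exit non-zero, and comparing the counts against live is left as in Step 3.3:
```bash
#!/bin/bash
# wikidb_restore_test.sh: restore a wikidb dump into a throwaway MySQL
# container and run sanity queries.
# Usage: wikidb_restore_test.sh /path/to/wikidb_YYYYMMDD.sql
set -eu
DUMP="${1:?usage: wikidb_restore_test.sh <dump.sql>}"
NAME="wikidb_restore_test"

# Match the production MySQL image so the dump loads into the same version
IMAGE=$(docker inspect stormy_mysql --format '{{.Config.Image}}')

docker run -d --rm --name "${NAME}" \
    -e MYSQL_ROOT_PASSWORD="temp_test_pw_$$" \
    "${IMAGE}"
# --rm removes the container on stop; always tear down on exit
trap 'docker stop "${NAME}" >/dev/null 2>&1 || true' EXIT

# Wait for mysqld to accept connections
until docker exec "${NAME}" sh -c 'mysqladmin -uroot -p"$MYSQL_ROOT_PASSWORD" ping' >/dev/null 2>&1; do
    sleep 2
done

# Load the dump; set -e aborts with a non-zero exit on any load error
docker exec -i "${NAME}" sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < "${DUMP}"

# Print sanity counts for comparison against live stormy_mysql (Step 3.3)
docker exec "${NAME}" sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
USE wikidb;
SELECT COUNT(*) AS pages FROM page;
SELECT COUNT(*) AS revisions FROM revision;
SELECT MAX(rev_timestamp) AS most_recent_edit FROM revision;
"'
```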
---
## Phase 4: Verify the scheduled-backup path
Whatever runs `wikidb_dump.sh` on a schedule needs to:
1. Set `MYSQL_PWD` (or otherwise provide the password) before invoking.
2. Actually notice and alert on a non-zero exit.
### Step 4.1: Find the scheduler
```bash
systemctl list-timers --all | grep -i backup
ls /etc/systemd/system/ | grep -i backup
crontab -l
sudo crontab -l
```
### Step 4.2: Inspect whatever you find
Confirm it sources `./environment` (or otherwise gets `MYSQL_PWD`), runs the
script, and surfaces failures (slack canary webhook? email? exit-code check?
journalctl?). If the failure path is "we'd notice in the logs eventually,"
that is not a failure path.
### Step 4.3: Trigger the scheduled job manually and confirm a clean run
```bash
sudo systemctl start <whatever-the-unit-is>.service
journalctl -u <whatever-the-unit-is>.service --since "5 min ago"
```
The journal should show the "Dump OK" line from Step 1.3.
---
## Phase 5: Commit and unblock the MySQL work
### Step 5.1: Commit the script + new restore-test script
Branch, commit, push, PR. Reference this plan in the PR description.
### Step 5.2: Update `MySqlNoRootPasswordPlan.md` Step 4 (Take a fresh backup)
It should now point at the fixed script and the restore-test script — Phase 0
of the no-root-password plan should require **both** a successful dump AND a
successful restore-test before proceeding.
### Step 5.3: Resume the MySQL no-root-password migration
Only after Phase 3 above has passed at least once on a fresh dump.
---
## Rollback
There is nothing to roll back in Phases 0–3 — we are only modifying a script
and creating throwaway containers. If the new script doesn't work, the old
script is in git history (`git checkout -- scripts/backups/wikidb_dump.sh`)
and we are no worse off than we are right now (which is: backups are
broken).
---
## Notes / open questions
- **How long has this been broken?** Answer with Phase 0.3 + 0.4. If every
recent dump is truncated, this has been broken since whenever the wiki grew
past the first PTY-buffer-stall threshold. We should figure out an
approximate date so we know what window of "we thought we had backups" was
fictional.
- **Why no alert?** Phase 4 needs to answer this. A backup pipeline that can
silently produce 439 MB of garbage for an unknown number of days is the
real bug. The script fix is necessary but not sufficient.
- **Should we move off `mysqldump` entirely?** For a database this size,
`mysqldump` is fine. Not worth re-architecting. The fix is one flag and
one integrity check.
- **`docker exec -t` elsewhere in the repo?** Worth a grep — same bug pattern
could exist in any other backup or maintenance script.
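A starting point for that grep (rough pattern; review hits by hand):
```bash
grep -rn -- 'exec -t' scripts/   # catches "exec -t" and "exec -ti"
grep -rn -- 'exec -it' scripts/
```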

Troubleshooting.md

@@ -0,0 +1,19 @@
To get a shell in a container image that has been built, before it is running in a pod, use `docker run`:
```
docker run --rm -it --entrypoint bash <image-name-or-id>
docker run --rm -it --entrypoint bash pod-charlesreid1_stormy_mediawiki
```
To get a shell in a container that is running in a pod, use `docker exec`:
```
docker exec -it <image-name> /bin/bash
docker exec -it stormy_mw /bin/bash
```
Also, if changes are not being picked up and you've already tried rebuilding the container image, try making a trivial edit to the Dockerfile: editing it invalidates Docker's layer cache and forces a genuine rebuild.
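A heavier hammer, assuming docker compose manages the pod and the service to rebuild is `stormy_mw`:
```
# Rebuild the image from scratch, ignoring all cached layers
docker compose build --no-cache stormy_mw
```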


@@ -6,12 +6,14 @@
;; https://github.com/go-gitea/gitea/blob/master/conf/app.ini
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
APP_NAME = {{ gitea_app_name }}
APP_NAME = {{ pod_charlesreid1_gitea_app_name }}
RUN_USER = git
RUN_MODE = prod
WORK_PATH = /data/gitea
[ui]
DEFAULT_THEME = arc-green
DEFAULT_THEME = gitea-dark
THEMES = gitea-dark
[database]
DB_TYPE = sqlite3
@@ -31,17 +33,17 @@ DISABLE_HTTP_GIT = false
[server]
PROTOCOL = http
DOMAIN = git.{{ server_name_default }}
DOMAIN = git.{{ pod_charlesreid1_server_name }}
#CERT_FILE = /www/gitea/certs/cert.pem
#KEY_FILE = /www/gitea/certs/key.pem
SSH_DOMAIN = git.{{ server_name_default }}
SSH_DOMAIN = git.{{ pod_charlesreid1_server_name }}
HTTP_PORT = 3000
HTTP_ADDR = 0.0.0.0
ROOT_URL = https://git.{{ server_name_default }}
ROOT_URL = https://git.{{ pod_charlesreid1_server_name }}
;ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/
DISABLE_SSH = false
; port to display in clone url:
SSH_PORT = 222
;SSH_PORT = 222
; port for built-in ssh server to listen on:
SSH_LISTEN_PORT = 22
OFFLINE_MODE = false
@@ -92,9 +94,9 @@ ENABLED = false
[security]
INSTALL_LOCK = true
SECRET_KEY = {{ gitea_secret_key }}
MIN_PASSWORD_LENGTH = 6
INTERNAL_TOKEN = {{ gitea_internal_token }}
SECRET_KEY = {{ pod_charlesreid1_gitea_secretkey }}
MIN_PASSWORD_LENGTH = 10
INTERNAL_TOKEN = {{ pod_charlesreid1_gitea_internaltoken }}
[other]
SHOW_FOOTER_BRANDING = false


@@ -1,4 +1,4 @@
FROM mediawiki
FROM mediawiki:1.34
EXPOSE 8989
@@ -41,17 +41,13 @@ RUN chown -R www-data:www-data /var/www/html/*
# Skins
COPY charlesreid1-config/mediawiki/skins /var/www/html/skins
RUN chown -R www-data:www-data /var/www/html/skins
RUN touch /var/www/html/skins
# Settings
COPY charlesreid1-config/mediawiki/LocalSettings.php /var/www/html/LocalSettings.php
RUN chown -R www-data:www-data /var/www/html/LocalSettings*
RUN chmod 600 /var/www/html/LocalSettings.php
# MediaWiki Fail2ban log directory
RUN mkdir -p /var/log/mwf2b
RUN chown -R www-data:www-data /var/log/mwf2b
RUN chmod 700 /var/log/mwf2b
# Apache conf file
COPY charlesreid1-config/apache/*.conf /etc/apache2/sites-enabled/
RUN a2enmod rewrite
@@ -59,4 +55,10 @@ RUN service apache2 restart
## make texvc
#CMD cd /var/www/html/extensions/Math && make && apache2-foreground
# PHP conf file
# https://hub.docker.com/_/php/
COPY php/php.ini /usr/local/etc/php/
# Start
CMD apache2-foreground


@@ -5,6 +5,10 @@ To update the MediaWiki skin:
- Rebuild the MW container while the docker pod is still running (won't affect the docker pod)
- When finished rebuilding the MW container, restart the docker pod.
The skin currently in use is in `charlesreid1-config/mediawiki/skins/Bootstrap2`
To rebuild and then restart the pod:
```
# switch to main pod directory
cd ../


@@ -1,4 +1,4 @@
ServerName {{ server_name_default }}
ServerName {{ pod_charlesreid1_server_name }}
Listen 8989
@@ -7,10 +7,10 @@ Listen 8989
# talks to apache via 127.0.0.1
# on port 8989
ServerAlias www.{{ server_name_default }}
ServerAlias www.{{ pod_charlesreid1_server_name }}
LogLevel warn
ServerAdmin {{ admin_email }}
ServerAdmin {{ pod_charlesreid1_mediawiki_admin_email }}
DirectoryIndex index.html index.cgi index.php


@@ -13,8 +13,8 @@ if ( !defined( 'MEDIAWIKI' ) ) {
}
## The protocol and server name to use in fully-qualified URLs
$wgServer = 'https://{{ server_name_default }}';
$wgCanonicalServer = 'https://{{ server_name_default }}';
$wgServer = 'https://{{ pod_charlesreid1_server_name }}';
$wgCanonicalServer = 'https://{{ pod_charlesreid1_server_name }}';
## The URL path to static resources (images, scripts, etc.)
$wgStylePath = "$wgScriptPath/skins";
@@ -47,6 +47,7 @@ $wgDBmysql5 = true;
# Shared memory settings
$wgMainCacheType = CACHE_ACCEL;
$wgCacheDirectory = "$IP/cache";
$wgMemCachedServers = [];
# To enable image uploads, make sure the 'images' directory
@@ -104,7 +105,7 @@ $wgAuthenticationTokenVersion = "1";
# Site upgrade key. Must be set to a string (default provided) to turn on the
# web installer while LocalSettings.php is in place
$wgUpgradeKey = "984c1d9858dabc27";
$wgUpgradeKey = getenv('MEDIAWIKI_UPGRADEKEY');
# No license info
$wgRightsPage = "";
@@ -156,7 +157,7 @@ $wgPutIPinRC=true;
# Getting some weird "Error creating thumbnail: Invalid thumbnail parameters" messages w/ thumbnail
# http://www.gossamer-threads.com/lists/wiki/mediawiki/169439
$wgMaxImageArea=64000000;
$wgMaxShellMemory=0;
$wgMaxShellMemory=512000;
$wgFavicon="$wgScriptPath/favicon.ico";
@@ -209,13 +210,6 @@ wfLoadExtension( 'EmbedVideo' );
require_once "$IP/extensions/Math/Math.php";
#############################################
# Fail2banlog extension
# https://www.mediawiki.org/wiki/Extension:Fail2banlog
require_once "$IP/extensions/Fail2banlog/Fail2banlog.php";
$wgFail2banlogfile = "/var/log/apache2/mwf2b.log";
#############################################
# Fix cookies crap
@@ -224,7 +218,7 @@ session_save_path("/tmp");
##############################################
# Secure login
$wgServer = "https://{{ server_name_default }}";
$wgServer = "https://{{ pod_charlesreid1_server_name }}";
$wgSecureLogin = true;
###################################


@@ -1,93 +0,0 @@
#!/bin/bash
#
# clone or download each extension
# and build o
mkdir -p extensions
(
cd extensions
##############################
Extension="SyntaxHighlight_GeSHi"
if [ ! -d ${Extension} ]
then
## This requires mediawiki > 1.31
## (so does REL1_31)
#git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
## This manually downloads REL1_30
#wget https://extdist.wmflabs.org/dist/extensions/SyntaxHighlight_GeSHi-REL1_30-87392f1.tar.gz -O SyntaxHighlight_GeSHi.tar.gz
#tar -xzf SyntaxHighlight_GeSHi.tar.gz -C ${PWD}
#rm -f SyntaxHighlight_GeSHi.tar.gz
# Best of both worlds
git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
(
cd ${Extension}
git checkout --track remotes/origin/REL1_34
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="ParserFunctions"
if [ ! -d ${Extension} ]
then
git clone https://github.com/wikimedia/mediawiki-extensions-ParserFunctions.git ${Extension}
(
cd ${Extension}
git checkout --track remotes/origin/REL1_34
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="EmbedVideo"
if [ ! -d ${Extension} ]
then
git clone https://github.com/HydraWiki/mediawiki-embedvideo.git ${Extension}
(
cd ${Extension}
git checkout v2.7.3
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="Math"
if [ ! -d ${Extension} ]
then
git clone https://github.com/wikimedia/mediawiki-extensions-Math.git ${Extension}
(
cd ${Extension}
git checkout REL1_34
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="Fail2banlog"
if [ ! -d ${Extension} ]
then
git clone https://github.com/charlesreid1-docker/mw-fail2ban.git ${Extension}
(
cd ${Extension}
git checkout master
)
else
echo "Skipping ${Extension}"
fi
##############################
# fin
)


@@ -106,7 +106,7 @@ include('/var/www/html/skins/Bootstrap2/navbar.php');
<div class="container-fixed">
<div class="navbar-header">
<a href="/wiki/" class="navbar-brand">
{{ top_domain }} wiki
{{ pod_charlesreid1_server_name }} wiki
</a>
</div>
<div>


@@ -11,7 +11,7 @@
</span>
Made from the command line with vim by
<a href="http://charlesreid1.com">charlesreid1</a><br />
with help from <a href="https://getbootstrap.com/">Bootstrap</a> and <a href="http://getpelican.com">Pelican</a>.
with help from <a href="https://getbootstrap.com/">Bootstrap</a> and <a href="http://mediawiki.org">MediaWiki</a>.
</p>
<p style="text-align: center">


@@ -6,14 +6,14 @@
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a href="/" class="navbar-brand">{{ top_domain }}</a>
<a href="/" class="navbar-brand">{{ pod_charlesreid1_server_name }}</a>
</div>
<div>
<div class="collapse navbar-collapse" id="myNavbar">
<ul class="nav navbar-nav">
<li>
<a href="https://{{ top_domain }}/wiki">Wiki</a>
<a href="https://{{ pod_charlesreid1_server_name }}/wiki">Wiki</a>
</li>
</ul>


@@ -1086,7 +1086,8 @@ html {
}
body {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
font-size: 14px;
/*font-size: 14px;*/
font-size: 20px;
line-height: 1.42857143;
color: #c8c8c8;
background-color: #272b30;

d-mediawiki/php/php.ini

@@ -0,0 +1,3 @@
post_max_size = 128M
memory_limit = 128M
upload_max_filesize = 100M


@@ -4,8 +4,4 @@ MAINTAINER charles@charlesreid1.com
# make mysql data a volume
VOLUME ["/var/lib/mysql"]
# put password in a password file
RUN printf "[client]\nuser=root\npassword=$MYSQL_ROOT_PASSWORD" > /root/.mysql.rootpw.cnf
RUN chmod 0600 /root/.mysql.rootpw.cnf
RUN chown mysql:mysql /var/lib/mysql


@@ -0,0 +1,5 @@
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 0


@@ -1,6 +0,0 @@
# https://serverfault.com/a/525011
server {
server_name _;
listen *:80 default_server deferred;
return 444;
}


@@ -1,6 +1,6 @@
####################
#
# {{ server_name_default }}
# {{ pod_charlesreid1_server_name }}
# http/{{ port_default }}
#
# basically, just redirects to https
@@ -10,20 +10,20 @@
server {
listen 80;
listen [::]:80;
server_name {{ server_name_default }};
return 301 https://{{ server_name_default }}$request_uri;
server_name {{ pod_charlesreid1_server_name }};
return 301 https://{{ pod_charlesreid1_server_name }}$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name www.{{ server_name_default }};
return 301 https://www.{{ server_name_default }}$request_uri;
server_name www.{{ pod_charlesreid1_server_name }};
return 301 https://www.{{ pod_charlesreid1_server_name }}$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name git.{{ server_name_default }};
return 301 https://git.{{ server_name_default }}$request_uri;
server_name git.{{ pod_charlesreid1_server_name }};
return 301 https://git.{{ pod_charlesreid1_server_name }}$request_uri;
}


@@ -1,9 +1,9 @@
####################
#
# {{ server_name_default }}
# {{ pod_charlesreid1_server_name }}
# https/443
#
# {{ server_name_default }} and www.{{ server_name_default }}
# {{ pod_charlesreid1_server_name }} and www.{{ pod_charlesreid1_server_name }}
# should handle the following cases:
# - w/ and wiki/ should reverse proxy stormy_mw
# - gitea subdomain should reverse proxy stormy_gitea
@@ -15,30 +15,46 @@
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name {{ server_name_default }} default_server;
server_name {{ pod_charlesreid1_server_name }};
ssl_certificate /etc/letsencrypt/live/{{ server_name_default }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/{{ server_name_default }}/privkey.pem;
ssl_certificate /etc/letsencrypt/live/{{ pod_charlesreid1_server_name }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/{{ pod_charlesreid1_server_name }}/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
include /etc/nginx/conf.d/secheaders.conf;
include /etc/nginx/conf.d/csp.conf;
location / {
try_files $uri $uri/ =404;
root /www/{{ server_name_default }}/htdocs;
root /www/{{ pod_charlesreid1_server_name }}/htdocs;
index index.html;
}
location = /robots.txt {
alias /var/www/robots/robots.txt;
}
location /wiki/ {
# Apply rate limit here.
limit_req zone=gitealimit burst=20 nodelay;
# Limit download rate to 500 KB/s per connection (4 Mbps)
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/wiki/;
}
location /w/ {
# Apply rate limit here.
limit_req zone=gitealimit burst=20 nodelay;
# Limit download rate to 500 KB/s per connection (4 Mbps)
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/w/;
}
@@ -55,31 +71,43 @@ server {
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name www.{{ server_name_default }};
server_name www.{{ pod_charlesreid1_server_name }};
ssl_certificate /etc/letsencrypt/live/www.{{ server_name_default }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.{{ server_name_default }}/privkey.pem;
ssl_certificate /etc/letsencrypt/live/www.{{ pod_charlesreid1_server_name }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.{{ pod_charlesreid1_server_name }}/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
include /etc/nginx/conf.d/secheaders.conf;
include /etc/nginx/conf.d/csp.conf;
root /www/{{ server_name_default }}/htdocs;
root /www/{{ pod_charlesreid1_server_name }}/htdocs;
location / {
try_files $uri $uri/ =404;
index index.html;
}
location = /robots.txt {
alias /var/www/robots/robots.txt;
}
location /wiki/ {
limit_req zone=gitealimit burst=20 nodelay;
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/wiki/;
}
location /w/ {
# Apply rate limit here.
limit_req zone=gitealimit burst=20 nodelay;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/w/;
}
@@ -94,18 +122,29 @@ server {
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name git.{{ server_name_default }};
server_name git.{{ pod_charlesreid1_server_name }};
ssl_certificate /etc/letsencrypt/live/git.{{ server_name_default }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.{{ server_name_default }}/privkey.pem;
ssl_certificate /etc/letsencrypt/live/git.{{ pod_charlesreid1_server_name }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.{{ pod_charlesreid1_server_name }}/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
include /etc/nginx/conf.d/secheaders.conf;
include /etc/nginx/conf.d/giteacsp.conf;
location / {
# Apply the rate limit here.
# Allows a burst of 20 requests, but anything beyond the max is queued.
limit_req zone=gitealimit burst=20 nodelay;
# Limit download rate to 500 KB/s per connection (4 Mbps)
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass http://stormy_gitea:3000/;
}
location = /robots.txt {
alias /var/www/robots/gitea.txt;
}
}


@@ -0,0 +1,37 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
# Gitea rate limiting:
# 5 requests per second rate limit
limit_req_zone $binary_remote_addr zone=gitealimit:10m rate=5r/s;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}


@@ -0,0 +1,16 @@
User-agent: *
Disallow: */commit/*
Disallow: */src/*
Disallow: */tree/*
Disallow: */activity/*
Disallow: */wiki/*
Disallow: */releases/*
Disallow: */pulls/*
Disallow: */stars
Disallow: */watchers
Disallow: */forks
Disallow: *?tab=activity
Disallow: *?tab=stars
Disallow: *?tab=following
Disallow: *?tab=followers
Disallow: *?lang=*

View File

@@ -0,0 +1,2 @@
User-agent: *
Disallow: /w/

View File

@@ -5,7 +5,7 @@ services:
# https://stackoverflow.com/a/39039830
stormy_gitea:
image: gitea/gitea:latest
image: gitea/gitea:1.24.5
container_name: stormy_gitea
environment:
- USER_UID=1000
@@ -13,6 +13,7 @@ services:
restart: always
volumes:
- "stormy_gitea_data:/data"
- "./d-nginx-charlesreid1/robots:/var/www/robots:ro"
- "./d-gitea/custom:/data/gitea"
- "./d-gitea/data:/app/gitea/data"
- "/gitea_repositories:/data/git/repositories"
@@ -23,53 +24,68 @@ services:
max-file: "10"
ports:
- "22:22"
networks:
- frontend
stormy_mysql:
restart: always
build: d-mysql
container_name: stormy_mysql
volumes:
- "stormy_mysql_data:/var/lib/mysql"
- "./d-mysql/conf.d:/etc/mysql/conf.d:ro"
logging:
driver: "json-file"
options:
max-size: 1m
max-file: "10"
environment:
- MYSQL_ROOT_PASSWORD={{ mysql_password }}
- MYSQL_ROOT_PASSWORD={{ pod_charlesreid1_mysql_password }}
- MYSQL_DATABASE=wikidb
- MYSQL_USER=wikiuser
- MYSQL_PASSWORD={{ pod_charlesreid1_mysql_wikiuser_password }}
networks:
- backend
stormy_mw:
restart: always
build: d-mediawiki
container_name: stormy_mw
volumes:
- "stormy_mw_data:/var/www/html"
- "./mwf2b:/var/log/mwf2b"
logging:
driver: "json-file"
options:
max-size: 1m
max-file: "10"
environment:
- MEDIAWIKI_SITE_SERVER=https://{{ server_name_default }}
- MEDIAWIKI_SECRETKEY={{ mediawiki_secretkey }}
- MEDIAWIKI_SITE_SERVER=https://{{ pod_charlesreid1_server_name }}
- MEDIAWIKI_SECRETKEY={{ pod_charlesreid1_mediawiki_secretkey }}
- MEDIAWIKI_UPGRADEKEY={{ pod_charlesreid1_mediawiki_upgradekey }}
- MYSQL_HOST=stormy_mysql
- MYSQL_DATABASE=wikidb
- MYSQL_USER=root
- MYSQL_PASSWORD={{ mysql_password }}
- MYSQL_USER=wikiuser
- MYSQL_PASSWORD={{ pod_charlesreid1_mysql_wikiuser_password }}
depends_on:
- stormy_mysql
networks:
- frontend
- backend
stormy_nginx:
restart: always
image: nginx
image: nginx:1.27.5
container_name: stormy_nginx
hostname: {{ server_name_default }}
hostname: charlesreid1.com
hostname: {{ pod_charlesreid1_server_name }}
command: /bin/bash -c "nginx -g 'daemon off;'"
volumes:
- "./d-nginx-charlesreid1/nginx.conf:/etc/nginx/nginx.conf:ro"
- "./d-nginx-charlesreid1/conf.d:/etc/nginx/conf.d:ro"
- "./d-nginx-charlesreid1/robots:/var/www/robots:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/letsencrypt:/etc/letsencrypt"
- "/www/{{ server_name_default }}/htdocs:/www/{{ server_name_default }}/htdocs:ro"
- "/etc/letsencrypt:/etc/letsencrypt:ro"
- "/www/{{ pod_charlesreid1_server_name }}/htdocs:/www/{{ pod_charlesreid1_server_name }}/htdocs:ro"
- "stormy_nginx_logs:/var/log/nginx"
logging:
driver: "json-file"
options:
@@ -82,8 +98,15 @@ services:
ports:
- "80:80"
- "443:443"
networks:
- frontend
networks:
frontend:
backend:
volumes:
stormy_mysql_data:
stormy_mw_data:
stormy_gitea_data:
stormy_nginx_logs:
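With this segmentation, stormy_mysql is reachable only from containers on the backend network, while stormy_mw bridges both. A quick way to sanity-check the rendered topology (compose prefixes the network names with the project name, which may differ per install):

```
for c in stormy_nginx stormy_gitea stormy_mw stormy_mysql; do
  printf '%s: ' "$c"
  docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' "$c"
done
# Expect nginx and gitea on *_frontend only, mysql on *_backend only, mw on both.
```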

docs/BlockIps.md Normal file
View File

@@ -0,0 +1,9 @@
To block an IP address:
* Modify the nginx config file template at
`d-nginx-charlesreid1/conf.d/https.DOMAIN.conf.j2`
* Re-render the Jinja templates into config files via
`make clean-templates && make templates`
* Stop and restart the pod service:
`sudo systemctl stop pod-charlesreid1 &&
sudo systemctl start pod-charlesreid1`
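
For reference, a block usually amounts to a `deny` rule in the relevant `server` or `location` block of that template — a minimal sketch with documentation-range example addresses:

```
# in d-nginx-charlesreid1/conf.d/https.DOMAIN.conf.j2, inside the server block:
deny 203.0.113.7;       # single address (example)
deny 198.51.100.0/24;   # whole range (example)
```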

View File

@@ -10,10 +10,12 @@ export POD_CHARLESREID1_USER="nonrootuser"
# ----------
export POD_CHARLESREID1_MW_ADMIN_EMAIL="email@example.com"
export POD_CHARLESREID1_MW_SECRET_KEY="SecretKeyString"
export POD_CHARLESREID1_MW_UPGRADE_KEY="UpgradeKeyString"
# mysql:
# ------
export POD_CHARLESREID1_MYSQL_PASSWORD="SuperSecretPassword"
export POD_CHARLESREID1_MYSQL_WIKIUSER_PASSWORD="AnotherSecretPassword"
# gitea:
# ------

environment.j2 Normal file
View File

@@ -0,0 +1,36 @@
#!/bin/bash
# multiple templates:
# -------------------
export POD_CHARLESREID1_DIR="{{ pod_charlesreid1_pod_install_dir }}"
export POD_CHARLESREID1_TLD="{{ pod_charlesreid1_server_name }}"
export POD_CHARLESREID1_USER="{{ pod_charlesreid1_username }}"
export POD_CHARLESREID1_VPN_IP_ADDR="{{ pod_charlesreid1_vpn_ip_addr }}"
# mediawiki:
# ----------
export POD_CHARLESREID1_MW_ADMIN_EMAIL="{{ pod_charlesreid1_mediawiki_admin_email }}"
export POD_CHARLESREID1_MW_SECRET_KEY="{{ pod_charlesreid1_mediawiki_secretkey }}"
# mysql:
# ------
export POD_CHARLESREID1_MYSQL_PASSWORD="{{ pod_charlesreid1_mysql_password }}"
export POD_CHARLESREID1_MYSQL_WIKIUSER_PASSWORD="{{ pod_charlesreid1_mysql_wikiuser_password }}"
# gitea:
# ------
export POD_CHARLESREID1_GITEA_APP_NAME="{{ pod_charlesreid1_gitea_app_name }}"
export POD_CHARLESREID1_GITEA_SECRET_KEY="{{ pod_charlesreid1_gitea_secretkey }}"
export POD_CHARLESREID1_GITEA_INTERNAL_TOKEN="{{ pod_charlesreid1_gitea_internaltoken }}"
# aws:
# ----
export AWS_ACCESS_KEY_ID="{{ pod_charlesreid1_backups_aws_access_key }}"
export AWS_SECRET_ACCESS_KEY="{{ pod_charlesreid1_backups_aws_secret_access_key }}"
export AWS_DEFAULT_REGION="{{ pod_charlesreid1_backups_aws_region }}"
# backups and scripts:
# --------------------
export POD_CHARLESREID1_BACKUP_DIR="{{ pod_charlesreid1_backups_dir }}"
export POD_CHARLESREID1_BACKUP_S3BUCKET="{{ pod_charlesreid1_backups_bucket }}"
export POD_CHARLESREID1_CANARY_WEBHOOK="{{ pod_charlesreid1_backups_canary_slack_url }}"

View File

@@ -8,25 +8,28 @@ from jinja2 import Environment, FileSystemLoader, select_autoescape
# Should existing files be overwritten
OVERWRITE = False
OVERWRITE = True
# Map of jinja variables to environment variables
jinja_to_env = {
"pod_install_dir": "POD_CHARLESREID1_DIR",
"top_domain": "POD_CHARLESREID1_TLD",
"server_name_default" : "POD_CHARLESREID1_TLD",
"username": "POD_CHARLESREID1_USER",
# docker-compose:
"mysql_password" : "POD_CHARLESREID1_MYSQL_PASSWORD",
"mediawiki_secretkey" : "POD_CHARLESREID1_MW_SECRET_KEY",
# mediawiki:
"admin_email": "POD_CHARLESREID1_MW_ADMIN_EMAIL",
# gitea:
"gitea_app_name": "POD_CHARLESREID1_GITEA_APP_NAME",
"gitea_secret_key": "POD_CHARLESREID1_GITEA_SECRET_KEY",
"gitea_internal_token": "POD_CHARLESREID1_GITEA_INTERNAL_TOKEN",
# aws:
"backup_canary_webhook_url": "POD_CHARLESREID1_CANARY_WEBHOOK",
"pod_charlesreid1_pod_install_dir": "POD_CHARLESREID1_DIR",
"pod_charlesreid1_server_name": "POD_CHARLESREID1_TLD",
"pod_charlesreid1_username": "POD_CHARLESREID1_USER",
"pod_charlesreid1_vpn_ip_addr": "POD_CHARLESREID1_VPN_IP_ADDR",
"pod_charlesreid1_mediawiki_admin_email": "POD_CHARLESREID1_MW_ADMIN_EMAIL",
"pod_charlesreid1_mediawiki_secretkey": "POD_CHARLESREID1_MW_SECRET_KEY",
"pod_charlesreid1_mediawiki_upgradekey": "POD_CHARLESREID1_MW_UPGRADE_KEY",
"pod_charlesreid1_mysql_password": "POD_CHARLESREID1_MYSQL_PASSWORD",
"pod_charlesreid1_mysql_wikiuser_password": "POD_CHARLESREID1_MYSQL_WIKIUSER_PASSWORD",
"pod_charlesreid1_gitea_app_name": "POD_CHARLESREID1_GITEA_APP_NAME",
"pod_charlesreid1_gitea_secretkey": "POD_CHARLESREID1_GITEA_SECRET_KEY",
"pod_charlesreid1_gitea_internaltoken": "POD_CHARLESREID1_GITEA_INTERNAL_TOKEN",
"pod_charlesreid1_backups_aws_access_key": "AWS_ACCESS_KEY_ID",
"pod_charlesreid1_backups_aws_secret_access_key": "AWS_SECRET_ACCESS_KEY",
"pod_charlesreid1_backups_aws_region": "AWS_DEFAULT_REGION",
"pod_charlesreid1_backups_dir": "POD_CHARLESREID1_BACKUP_DIR",
"pod_charlesreid1_backups_bucket": "POD_CHARLESREID1_BACKUP_S3BUCKET",
"pod_charlesreid1_backups_canary_slack_url": "POD_CHARLESREID1_CANARY_WEBHOOK",
}
scripts_dir = os.path.dirname(os.path.abspath(__file__))
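The rendering loop itself is outside this hunk, but the map above is consumed by building a Jinja context from the environment — a minimal sketch reusing this file's `jinja_to_env` and `scripts_dir` (the actual loop may differ):

```
import os
from jinja2 import Environment, FileSystemLoader, select_autoescape

# One context entry per mapped variable; a KeyError here means the
# environment file did not export something listed in jinja_to_env.
context = {jinja_var: os.environ[env_var] for jinja_var, env_var in jinja_to_env.items()}
env = Environment(loader=FileSystemLoader(scripts_dir), autoescape=select_autoescape())
# e.g. env.get_template("environment.j2").render(**context)
```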

View File

@@ -0,0 +1,28 @@
if ( $programname startswith "pod-charlesreid1-canary" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-canary.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-certbot" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-certbot.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-aws" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-aws.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-cleanolderthan" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-cleanolderthan.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-gitea" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-gitea.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-wikidb" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-wikidb.service.log" flushOnTXEnd="on")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-wikifiles" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-wikifiles.service.log" flushOnTXEnd="on")
stop
}

View File

@@ -13,3 +13,40 @@ for the systemd service.
Use `make install` in the top level of this repo to install
the rendered service and timer files.
## syslog filtering
Due to a bug in the systemd version bundled with Ubuntu 18.04, we can't use the simple approach of
directing each service's output and error streams to a dedicated file.
Instead, the services all send their stderr and stdout to the system log, and rsyslog
filters those messages and collects them into a separate log file per service.
First, install the services.
Then, install the following rsyslog config file:
`/etc/rsyslog.d/10-pod-charlesreid1-rsyslog.conf`:
```
if $programname == 'pod-charlesreid1-canary' then /var/log/pod-charlesreid1-canary.service.log
if $programname == 'pod-charlesreid1-canary' then stop
if $programname == 'pod-charlesreid1-backups-aws' then /var/log/pod-charlesreid1-backups-aws.service.log
if $programname == 'pod-charlesreid1-backups-aws' then stop
if $programname == 'pod-charlesreid1-backups-cleanolderthan' then /var/log/pod-charlesreid1-backups-cleanolderthan.service.log
if $programname == 'pod-charlesreid1-backups-cleanolderthan' then stop
if $programname == 'pod-charlesreid1-backups-gitea' then /var/log/pod-charlesreid1-backups-gitea.service.log
if $programname == 'pod-charlesreid1-backups-gitea' then stop
if $programname == 'pod-charlesreid1-backups-wikidb' then /var/log/pod-charlesreid1-backups-wikidb.service.log
if $programname == 'pod-charlesreid1-backups-wikidb' then stop
if $programname == 'pod-charlesreid1-backups-wikifiles' then /var/log/pod-charlesreid1-backups-wikifiles.service.log
if $programname == 'pod-charlesreid1-backups-wikifiles' then stop
```
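After installing the file, restart rsyslog and confirm the filter catches a tagged message (logger's `-t` tag is what `$programname` matches against):

```
sudo systemctl restart rsyslog
logger -t pod-charlesreid1-canary "rsyslog filter test"
tail -n 1 /var/log/pod-charlesreid1-canary.service.log
```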

View File

@@ -37,22 +37,22 @@ if [ "$#" == "0" ]; then
echo ""
echo "Checking that directory exists"
/usr/bin/test -d ${POD_CHARLESREID1_BACKUP_DIR}
/usr/bin/test -d "${POD_CHARLESREID1_BACKUP_DIR}"
echo "Checking that we can access the S3 bucket"
aws s3 ls s3://${POD_CHARLESREID1_BACKUP_S3BUCKET} > /dev/null
aws s3 ls "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}" > /dev/null
# Get name of last backup, to copy to AWS
LAST_BACKUP=$(/bin/ls -1 -t ${POD_CHARLESREID1_BACKUP_DIR} | /usr/bin/head -n1)
LAST_BACKUP=$(/bin/ls -1 -t "${POD_CHARLESREID1_BACKUP_DIR}" | /usr/bin/head -n1)
echo "Last backup found: ${LAST_BACKUP}"
echo "Last backup directory: ${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}"
BACKUP_SIZE=$(du -hs ${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP})
BACKUP_SIZE=$(/usr/bin/du -hs "${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}" | cut -f 1)
echo "Backup directory size: ${BACKUP_SIZE}"
# Copy to AWS
echo "Backing up directory ${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}"
aws s3 cp --only-show-errors --recursive ${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP} s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}/backups/${LAST_BACKUP}
aws s3 cp --only-show-errors --no-progress --recursive "${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}" "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}/backups/${LAST_BACKUP}"
echo "Done."
else

View File

@@ -24,7 +24,7 @@ def main():
alert(msg)
# verify there is a backup newer than N days
newer_backups = subprocess.getoutput(f'find {backup_dir} -mtime -{N}').split('\n')
newer_backups = subprocess.getoutput(f'find {backup_dir}/* -mtime -{N}').split('\n')
if len(newer_backups)==1 and newer_backups[0]=='':
msg = "Local Backups Error:\n"
msg += f"The backup directory `{backup_dir}` is missing backup files from the last {N} day(s)!"
@@ -35,7 +35,7 @@ def main():
newest_backup_files = subprocess.getoutput(f'find {newest_backup_path} -type f').split('\n')
# verify the most recent backup directory is not empty
if len(newest_backup_files)==1 and newer_backups[0]=='':
if len(newest_backup_files)==1 and newest_backup_files[0]=='':
msg = "Local Backups Error:\n"
msg += f"The most recent backup directory `{newest_backup_path}` is empty!"
alert(msg)
@@ -48,6 +48,24 @@ def main():
msg += f"Backup file name: {backup_file}!"
alert(msg)
# verify .sql dumps end with the mysqldump completion trailer.
# A non-empty file can still be truncated mid-row (e.g. PTY deadlock,
# net_write_timeout) — without this check, a 439 MB partial dump looks
# healthy to a size-only canary.
for backup_file in newest_backup_files:
if not backup_file.endswith('.sql'):
continue
with open(backup_file, 'rb') as f:
f.seek(0, os.SEEK_END)
f.seek(max(0, f.tell() - 512))
tail = f.read()
if b'Dump completed on' not in tail:
msg = "Local Backups Error:\n"
msg += f"SQL backup file `{backup_file}` is missing the "
msg += "`-- Dump completed on ...` trailer.\n"
msg += "mysqldump did not finish — the dump is truncated and not restorable."
alert(msg)
# verify the most recent backup files exist in the s3 backups bucket
bucket_base_path = os.path.join('backups', newest_backup_name)
for backup_file in newest_backup_files:
@@ -64,10 +82,12 @@ def check_exists(bucket_name, bucket_path):
# File does not exist
msg = "S3 Backups Error:\n"
msg += f"Failed to find the file `{bucket_path}` in bucket `{bucket_name}`"
alert(msg)
else:
# Problem accessing backups on bucket
msg = "S3 Backups Error:\n"
msg += f"Failed to access the file `{bucket_path}` in bucket `{bucket_name}`"
alert(msg)
def alert(msg):
@@ -97,7 +117,7 @@ def alert(msg):
raise Exception(response.status_code, response.text)
print("Goodbye.")
sys.exit(1)
sys.exit(0)
if __name__ == '__main__':

View File

@@ -5,9 +5,10 @@ After=docker.service
[Service]
Type=oneshot
StandardError=file:{{ pod_install_dir }}/.pod-charlesreid1-canary.service.error.log
StandardOutput=file:{{ pod_install_dir }}/.pod-charlesreid1-canary.service.output.log
ExecStart=/bin/bash -ac '. {{ pod_install_dir }}/environment; {{ pod_install_dir }}/scripts/backups/canary/vp/bin/python3 {{ pod_install_dir }}/scripts/backups/canary/backups_canary.py'
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-canary
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; /home/charles/.pyenv/shims/python3 {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/canary/backups_canary.py'
User=charles
Group=charles

View File

@@ -2,7 +2,7 @@
Description=Timer to run the pod-charlesreid1 backups canary
[Timer]
OnCalendar=Sun *-*-* 9:03:00
OnCalendar=*-*-* 7:01:00
[Install]
WantedBy=timers.target

View File

@@ -6,7 +6,7 @@ set -eux
# Number of days of backups to retain.
# Everything older than this many days will be deleted
N="45"
N="22"
function usage {
set +x
@@ -39,7 +39,7 @@ if [ "$#" == "0" ]; then
echo "Backup directory: ${POD_CHARLESREID1_BACKUP_DIR}"
echo ""
echo "Cleaning backups directory $BACKUP_DIR"
echo "Cleaning backups directory $POD_CHARLESREID1_BACKUP_DIR"
echo "The following files older than $N days will be deleted:"
find ${POD_CHARLESREID1_BACKUP_DIR} -mtime +${N}

View File

@@ -53,7 +53,7 @@ if [ "$#" == "0" ]; then
# We don't need to use docker, since these directories
# are both bind-mounted into the Docker container
echo "Backing up custom directory"
tar czf ${CUSTOM_TARGET} ${POD_CHARLESREID1_DIR}/d-gitea/custom
tar --exclude='gitea.log' --ignore-failed-read -czf ${CUSTOM_TARGET} ${POD_CHARLESREID1_DIR}/d-gitea/custom
echo "Backing up data directory"
tar czf ${DATA_TARGET} ${POD_CHARLESREID1_DIR}/d-gitea/data

View File

@@ -5,10 +5,10 @@ After=docker.service
[Service]
Type=oneshot
StandardError=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-aws.service.error.log
StandardOutput=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-aws.service.output.log
ExecStartPre=/usr/bin/test -f {{ pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_install_dir }}/environment; {{ pod_install_dir }}/scripts/backups/aws_backup.sh'
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-backups-aws
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/aws_backup.sh'
User=charles
Group=charles

View File

@@ -3,6 +3,7 @@ Description=Timer to copy the latest pod-charlesreid1 backup to an S3 bucket
[Timer]
OnCalendar=Sun *-*-* 2:56:00
#OnCalendar=*-*-* 2:56:00
[Install]
WantedBy=timers.target

View File

@@ -1,12 +1,14 @@
[Unit]
Description=Copy the latest pod-charlesreid1 backup to an S3 bucket
Description=Clean pod-charlesreid1 backups older than N days
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
StandardError=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-cleanolderthan.service.error.log
StandardOutput=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-cleanolderthan.service.output.log
ExecStart=/bin/bash -ac '. {{ pod_install_dir }}/environment; {{ pod_install_dir }}/scripts/backups/clean_olderthan.sh'
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-backups-cleanolderthan
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/clean_olderthan.sh'
User=charles
Group=charles

View File

@@ -0,0 +1,9 @@
[Unit]
Description=Timer to clean files older than N days from the pod-charlesreid1 backups dir
[Timer]
OnCalendar=Sun *-*-* 2:28:00
#OnCalendar=*-*-* 2:28:00
[Install]
WantedBy=timers.target

View File

@@ -5,10 +5,10 @@ After=docker.service
[Service]
Type=oneshot
StandardError=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-gitea.service.error.log
StandardOutput=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-gitea.service.output.log
ExecStartPre=/usr/bin/test -f {{ pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_install_dir }}/environment; {{ pod_install_dir }}/scripts/backups/gitea_backup.sh'
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-backups-gitea
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/gitea_backup.sh'
User=charles
Group=charles

View File

@@ -3,6 +3,7 @@ Description=Timer to back up pod-charlesreid1 gitea files
[Timer]
OnCalendar=Sun *-*-* 2:12:00
#OnCalendar=*-*-* 2:12:00
[Install]
WantedBy=timers.target

View File

@@ -5,10 +5,10 @@ After=docker.service
[Service]
Type=oneshot
StandardError=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-wikidb.service.error.log
StandardOutput=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-wikidb.service.output.log
ExecStartPre=/usr/bin/test -f {{ pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_install_dir }}/environment; {{ pod_install_dir }}/scripts/backups/wikidb_dump.sh'
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-backups-wikidb
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/wikidb_dump.sh'
User=charles
Group=charles

View File

@@ -1,13 +1,14 @@
[Unit]
Description=Back up the pod-charlesreid1 wiki files
Description=Back up pod-charlesreid1 wiki files
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
StandardError=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-wikifiles.service.error.log
StandardOutput=file:{{ pod_install_dir }}/.pod-charlesreid1-backups-wikifiles.service.output.log
ExecStartPre=/usr/bin/test -f {{ pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_install_dir }}/environment; {{ pod_install_dir }}/scripts/backups/wikifiles_dump.sh'
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-backups-wikifiles
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/wikifiles_dump.sh'
User=charles
Group=charles

View File

@@ -1,5 +1,5 @@
[Unit]
Description=Timer to back up the pod-charlesreid1 wiki files
Description=Timer to back up pod-charlesreid1 wiki files
[Timer]
OnCalendar=Sun *-*-* 2:08:00

View File

@@ -5,7 +5,8 @@
set -eux
CONTAINER_NAME="stormy_mysql"
STAMP="`date +"%Y%m%d"`"
DATESTAMP="`date +"%Y%m%d"`"
TIMESTAMP="`date +"%Y%m%d_%H%M%S"`"
function usage {
set +x
@@ -20,7 +21,7 @@ function usage {
echo "Example:"
echo ""
echo " ./wikidb_dump.sh"
echo " (creates ${POD_CHARLESREID1_BACKUP_DIR}/20200101/wikidb_20200101.sql)"
echo " (creates ${POD_CHARLESREID1_BACKUP_DIR}/YYYYMMDD/wikidb_YYYYMMDD_HHMMSS.sql)"
echo ""
exit 1;
}
@@ -36,26 +37,63 @@ fi
if [ "$#" == "0" ]; then
TARGET="wikidb_${STAMP}.sql"
BACKUP_TARGET="${POD_CHARLESREID1_BACKUP_DIR}/${STAMP}/${TARGET}"
TARGET="wikidb_${TIMESTAMP}.sql"
BACKUP_DIR="${POD_CHARLESREID1_BACKUP_DIR}/${DATESTAMP}"
BACKUP_TARGET="${BACKUP_DIR}/${TARGET}"
echo ""
echo "pod-charlesreid1: wikidb_dump.sh"
echo "--------------------------------"
echo ""
echo "Backup directory: ${POD_CHARLESREID1_BACKUP_DIR}"
echo "Backup directory: ${BACKUP_DIR}"
echo "Backup target: ${BACKUP_TARGET}"
echo ""
mkdir -p ${POD_CHARLESREID1_BACKUP_DIR}/${STAMP}
DOCKER=$(which docker)
DOCKERX="${DOCKER} exec -t"
mkdir -p "${BACKUP_DIR}"
echo "Running mysqldump inside the mysql container"
${DOCKERX} ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD"' 2>&1 | grep -v "Using a password" > ${BACKUP_TARGET}
echo "Done."
# Pull the root password out of the container so we don't duplicate the
# secret on the host, and forward it in via MYSQL_PWD (which mysqldump
# reads automatically). No -t: a PTY corrupts --default-character-set=binary
# output (LF→CRLF translation on binary blobs) and its small kernel buffer
# can deadlock on large dumps.
set +x
MYSQL_PWD="$(docker exec "${CONTAINER_NAME}" printenv MYSQL_ROOT_PASSWORD)"
export MYSQL_PWD
set -x
docker exec -i \
-e MYSQL_PWD \
"${CONTAINER_NAME}" \
sh -c 'exec mysqldump \
--user=root \
--single-transaction \
--quick \
--routines \
--triggers \
--events \
--default-character-set=binary \
--databases wikidb' \
> "${BACKUP_TARGET}"
unset MYSQL_PWD
# A complete mysqldump always ends with "-- Dump completed on ...".
# Missing trailer means the dump is truncated and not restorable.
if ! tail -c 200 "${BACKUP_TARGET}" | grep -q 'Dump completed on'; then
echo "ERROR: dump file ${BACKUP_TARGET} is missing the completion trailer." >&2
echo " mysqldump did not finish successfully." >&2
exit 2
fi
size=$(stat -c %s "${BACKUP_TARGET}")
if [ "${size}" -lt $((50 * 1024 * 1024)) ]; then
echo "ERROR: dump file ${BACKUP_TARGET} is only ${size} bytes; suspicious." >&2
exit 3
fi
echo "Dump OK: ${BACKUP_TARGET} (${size} bytes)"
else
usage

View File

@@ -0,0 +1,110 @@
#!/bin/bash
#
# Restore a wikidb dump into a throwaway MySQL 5.7 container and run sanity
# queries against it. Compares row counts to live stormy_mysql. Exits non-zero
# on any failure.
#
# Usage:
# ./wikidb_restore_test.sh <path-to-dump.sql>
#
# A backup is only a backup if you have actually restored from it.
set -euo pipefail
DUMP="${1:-}"
if [ -z "${DUMP}" ] || [ ! -f "${DUMP}" ]; then
echo "Usage: $0 <path-to-wikidb-dump.sql>" >&2
exit 1
fi
LIVE_CONTAINER="stormy_mysql"
TEST_CONTAINER="wikidb_restore_test_$$"
TEST_PW="temp_restore_test_pw_$$"
IMAGE="mysql:5.7"
cleanup() {
docker stop "${TEST_CONTAINER}" >/dev/null 2>&1 || true
}
trap cleanup EXIT
echo "[1/5] Starting throwaway MySQL container ${TEST_CONTAINER}..."
docker run -d --rm \
--name "${TEST_CONTAINER}" \
-e MYSQL_ROOT_PASSWORD="${TEST_PW}" \
"${IMAGE}" >/dev/null
echo "[2/5] Waiting for MySQL to accept authenticated connections..."
# `mysqladmin ping` returns OK before the root user is actually set up, so we
# have to probe with a real authenticated query and accept only success.
ready=0
for i in $(seq 1 60); do
if docker exec -e MYSQL_PWD="${TEST_PW}" "${TEST_CONTAINER}" \
mysql -uroot -e 'SELECT 1' >/dev/null 2>&1; then
ready=1
break
fi
sleep 2
done
if [ "${ready}" -ne 1 ]; then
echo "ERROR: MySQL in ${TEST_CONTAINER} never became ready." >&2
docker logs "${TEST_CONTAINER}" 2>&1 | tail -20 >&2
exit 4
fi
echo "[3/5] Piping dump into throwaway MySQL..."
docker exec -i -e MYSQL_PWD="${TEST_PW}" "${TEST_CONTAINER}" \
mysql -uroot < "${DUMP}"
echo "[4/5] Querying restored DB..."
restored=$(docker exec -e MYSQL_PWD="${TEST_PW}" "${TEST_CONTAINER}" \
mysql -uroot -N -B -e "
USE wikidb;
SELECT COUNT(*) FROM page;
SELECT COUNT(*) FROM revision;
SELECT COUNT(*) FROM text;
SELECT COALESCE(MAX(rev_timestamp), 'none') FROM revision;
")
echo "--- restored ---"
echo "${restored}"
echo "[5/5] Querying live ${LIVE_CONTAINER}..."
LIVE_PW="$(docker exec "${LIVE_CONTAINER}" printenv MYSQL_ROOT_PASSWORD)"
live=$(docker exec -e MYSQL_PWD="${LIVE_PW}" "${LIVE_CONTAINER}" \
mysql -uroot -N -B -e "
USE wikidb;
SELECT COUNT(*) FROM page;
SELECT COUNT(*) FROM revision;
SELECT COUNT(*) FROM text;
SELECT COALESCE(MAX(rev_timestamp), 'none') FROM revision;
")
echo "--- live ---"
echo "${live}"
r_page=$(echo "${restored}" | sed -n '1p')
r_rev=$(echo "${restored}" | sed -n '2p')
r_text=$(echo "${restored}" | sed -n '3p')
l_page=$(echo "${live}" | sed -n '1p')
l_rev=$(echo "${live}" | sed -n '2p')
l_text=$(echo "${live}" | sed -n '3p')
fail=0
for kind in page rev text; do
r_var="r_${kind}"
l_var="l_${kind}"
r="${!r_var}"
l="${!l_var}"
if [ "${r}" != "${l}" ]; then
echo "MISMATCH: ${kind} count restored=${r} live=${l}" >&2
fail=1
else
echo "OK: ${kind} count = ${r}"
fi
done
if [ "${fail}" -ne 0 ]; then
echo "RESTORE TEST FAILED." >&2
exit 5
fi
echo "RESTORE TEST PASSED."

View File

@@ -5,7 +5,8 @@
set -eux
CONTAINER_NAME="stormy_mw"
STAMP="`date +"%Y%m%d"`"
DATESTAMP="`date +"%Y%m%d"`"
TIMESTAMP="`date +"%Y%m%d_%H%M%S"`"
function usage {
set +x
@@ -20,7 +21,7 @@ function usage {
echo "Example:"
echo ""
echo " ./wikifiles_dump.sh"
echo " (creates ${POD_CHARLESREID1_BACKUP_DIR}/20200101/wikifiles_20200101.tar.gz)"
echo " (creates ${POD_CHARLESREID1_BACKUP_DIR}/YYYYMMDD/wikifiles_YYYYMMDD_HHMMSS.tar.gz)"
echo ""
exit 1;
}
@@ -36,18 +37,19 @@ fi
if [ "$#" == "0" ]; then
TARGET="wikifiles_${STAMP}.tar.gz"
BACKUP_TARGET="${POD_CHARLESREID1_BACKUP_DIR}/${STAMP}/${TARGET}"
TARGET="wikifiles_${TIMESTAMP}.tar.gz"
BACKUP_DIR="${POD_CHARLESREID1_BACKUP_DIR}/${DATESTAMP}"
BACKUP_TARGET="${BACKUP_DIR}/${TARGET}"
echo ""
echo "pod-charlesreid1: wikifiles_dump.sh"
echo "-----------------------------------"
echo ""
echo "Backup directory: ${POD_CHARLESREID1_BACKUP_DIR}"
echo "Backup directory: ${BACKUP_DIR}"
echo "Backup target: ${BACKUP_TARGET}"
echo ""
mkdir -p ${POD_CHARLESREID1_BACKUP_DIR}/${STAMP}
mkdir -p "${BACKUP_DIR}"
DOCKER=$(which docker)
DOCKERX="${DOCKER} exec -t"
@@ -62,6 +64,7 @@ if [ "$#" == "0" ]; then
echo "Step 3: Clean up tar.gz file"
${DOCKERX} ${CONTAINER_NAME} /bin/rm -f /tmp/${TARGET}
echo "Successfully wrote wikifiles dump to file: ${BACKUP_TARGET}"
echo "Done."
else

View File

@@ -0,0 +1,47 @@
#!/bin/bash
#
# Restore wiki files from a tar file
# into the stormy_mw container.
set -eu
function usage {
echo ""
echo "restore_wikifiles.sh script:"
echo "Restore wiki files from a tar file"
echo "into the stormy_mw container"
echo ""
echo " ./restore_wikifiles.sh <tar-file>"
echo ""
echo "Example:"
echo ""
echo " ./restore_wikifiles.sh /path/to/wikifiles.tar.gz"
echo ""
echo ""
exit 1;
}
# NOTE:
# images/ is assumed to be the only directory that needs backup/restore.
# If any other directories are missing, add them back here.
# (skins and extensions are static, added into the image at build time.)
if [[ "$#" -eq 1 ]];
then
NAME="stormy_mw"
TAR=$(basename "$1")
echo "Checking that container ${NAME} exists"
docker ps --format '{{.Names}}' | grep ${NAME} || exit 1;
echo "Copying dir $1 into container ${NAME}"
set -x
docker cp $1 ${NAME}:/tmp/${TAR}
docker exec -it ${NAME} rm -rf /var/www/html/images.old
docker exec -it ${NAME} mv /var/www/html/images /var/www/html/images.old
docker exec -it ${NAME} tar -xf /tmp/${TAR} -C / && rm -f /tmp/${TAR}
docker exec -it ${NAME} chown -R www-data:www-data /var/www/html/images
else
usage
fi

View File

@@ -5,6 +5,8 @@ After=docker.service
[Service]
Type=oneshot
StandardError=file:{{ pod_install_dir }}/.pod-charlesreid1-certbot.service.error.log
StandardOutput=file:{{ pod_install_dir }}/.pod-charlesreid1-certbot.service.output.log
ExecStart=/bin/bash -ac '. {{ pod_install_dir }}/environment; {{ pod_install_dir }}/scripts/certbot/renew_charlesreid1_certs.sh'
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-certbot
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/certbot/renew_charlesreid1_certs.sh'

View File

@@ -2,8 +2,8 @@
Description=Timer to renew certificates for pod-charlesreid1
[Timer]
# Run on the first Sunday of every month
OnCalendar=Sun *-*-01..07 4:03:00
# Run daily
OnCalendar=*-*-* 4:03:00
[Install]
WantedBy=timers.target

View File

@@ -34,7 +34,7 @@ if [ "$#" == "0" ]; then
sudo systemctl stop ${SERVICE}
echo "Stop pod"
docker-compose -f {{ pod_install_dir }}/docker-compose.yml down
docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml down
echo "Run certbot renew"
SUBS="git www"
@@ -63,7 +63,7 @@ if [ "$#" == "0" ]; then
done
echo "Start pod"
docker-compose -f {{ pod_install_dir }}/docker-compose.yml up -d
docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml up -d
echo "Enable and start system service ${SERVICE}"
sudo systemctl enable ${SERVICE}

View File

@@ -13,7 +13,9 @@ def clean():
rname = tname[:-3]
rpath = os.path.join(tdir, rname)
if os.path.exists(rpath):
ignore_list = ['environment']
if os.path.exists(rpath) and rname not in ignore_list:
print(f"Removing file {rpath}")
os.remove(rpath)
else:

View File

@@ -11,8 +11,8 @@ directory structure for charlesreid1.com
content. (Or, charlesreid1.XYZ, whatever.)
"""
SERVER_NAME_DEFAULT = '{{ server_name_default }}'
USERNAME = '{{ username }}'
SERVER_NAME_DEFAULT = '{{ pod_charlesreid1_server_name }}'
USERNAME = '{{ pod_charlesreid1_username }}'

View File

@@ -10,8 +10,8 @@ This script git pulls the /www directory
for updating charlesreid1.com content.
"""
SERVER_NAME_DEFAULT = '{{ server_name_default }}'
USERNAME = '{{ username }}'
SERVER_NAME_DEFAULT = '{{ pod_charlesreid1_server_name }}'
USERNAME = '{{ pod_charlesreid1_username }}'

View File

@@ -80,19 +80,5 @@ fi
##############################
Extension="Fail2banlog"
if [ ! -d ${Extension} ]
then
git clone https://github.com/charlesreid1-docker/mw-fail2ban.git ${Extension}
(
cd ${Extension}
git checkout master
)
else
echo "Skipping ${Extension}"
fi
##############################
# fin
)

View File

@@ -1,13 +1,6 @@
#!/bin/bash
#
# fix LocalSettings.php in the mediawiki container.
#
# docker is stupid, so it doesn't let you bind mount
# a single file into a docker volume.
#
# so, rather than rebuilding the entire goddamn container
# just to update LocalSettings.php when it changes, we just
# use a docker cp command to copy it into the container.
set -eux
NAME="stormy_mw"

View File

@@ -1,12 +1,6 @@
#!/bin/bash
#
# fix extensions dir in the mediawiki container
#
# in theory, we should be able to update the
# extensions folder in d-mediawiki/charlesreid1-config,
# but in reality this falls on its face.
# So, we have to fix the fucking extensions directory
# ourselves.
set -eux
NAME="stormy_mw"

View File

@@ -1,13 +1,6 @@
#!/bin/bash
#
# fix skins in the mediawiki container.
#
# docker is stupid, so it doesn't let you bind mount
# a single file into a docker volume.
#
# so, rather than rebuilding the entire goddamn container
# just to update the skin when it changes, we just
# use a docker cp command to copy it into the container.
set -eux
NAME="stormy_mw"

View File

@@ -2,7 +2,7 @@
#
# Restore wiki files from a tar file
# into the stormy_mw container.
set -eux
set -eu
function usage {
echo ""
@@ -31,16 +31,16 @@ then
NAME="stormy_mw"
TAR=$(basename "$1")
echo "Checking that container exists"
echo "Checking that container ${NAME} exists"
docker ps --format '{{.Names}}' | grep ${NAME} || exit 1;
echo "Copying $1 into container ${NAME}"
echo "Copying dir $1 into container ${NAME}"
set -x
docker cp $1 ${NAME}:/tmp/${TAR}
docker exec -it ${NAME} rm -rf /var/www/html/images.old
docker exec -it ${NAME} mv /var/www/html/images /var/www/html/images.old
docker exec -it ${NAME} tar -xf /tmp/${TAR} -C / && rm -f /tmp/${TAR}
docker exec -it ${NAME} chown -R www-data:www-data /var/www/html/images
set +x
else
usage

View File

@@ -1,35 +1,36 @@
#!/bin/bash
echo "this script is deprecated, see ../backups/wikidb_dump.sh"
##
## Dump a database to an .sql file
## from the stormy_mysql container.
#set -eu
#
# Dump a database to an .sql file
# from the stormy_mysql container.
set -x
function usage {
echo ""
echo "dump_database.sh script:"
echo "Dump a database to an .sql file "
echo "from the stormy_mysql container."
echo ""
echo " ./dump_database.sh <sql-dump-file>"
echo ""
echo "Example:"
echo ""
echo " ./dump_database.sh /path/to/wikidb_dump.sql"
echo ""
echo ""
exit 1;
}
CONTAINER_NAME="stormy_mysql"
if [[ "$#" -gt 0 ]];
then
TARGET="$1"
mkdir -p $(dirname $TARGET)
docker exec -i ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > $TARGET
else
usage
fi
#function usage {
# echo ""
# echo "dump_database.sh script:"
# echo "Dump a database to an .sql file "
# echo "from the stormy_mysql container."
# echo ""
# echo " ./dump_database.sh <sql-dump-file>"
# echo ""
# echo "Example:"
# echo ""
# echo " ./dump_database.sh /path/to/wikidb_dump.sql"
# echo ""
# echo ""
# exit 1;
#}
#
#CONTAINER_NAME="stormy_mysql"
#
#if [[ "$#" -gt 0 ]];
#then
#
# TARGET="$1"
# mkdir -p $(dirname $TARGET)
# set -x
# docker exec -i ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > $TARGET
#
#else
# usage
#fi

View File

@@ -6,6 +6,7 @@
# Note that this expects the .sql dump
# to create its own databases.
# Use the --databases flag with mysqldump.
set -eu
function usage {
echo ""
@@ -42,31 +43,23 @@ function usage {
# because of all these one-off
# "whoopsie we don't do that" problems.
if [[ "$#" -eq 1 ]];
then
CONTAINER_NAME="stormy_mysql"
TARGET=$(basename $1)
TARGET_DIR=$(dirname $1)
if [[ "$#" -eq 1 ]];
then
# Step 1: Copy the sql dump into the container
set -x
# Step 1: Copy the sql dump into the container
docker cp $1 ${CONTAINER_NAME}:/tmp/${TARGET}
set +x
# Step 2: Run sqldump inside the container
set -x
docker exec -i ${CONTAINER_NAME} sh -c "/usr/bin/mysql --defaults-file=/root/.mysql.rootpw.cnf < /tmp/${TARGET}"
set +x
# Step 3: Clean up sql dump from inside container
set -x
docker exec -i ${CONTAINER_NAME} sh -c "/bin/rm -fr /tmp/${TARGET}.sql"
set +x
docker exec -i ${CONTAINER_NAME} sh -c "/bin/rm -fr /tmp/${TARGET}"
set +x
else
usage
fi
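For reference, a dump made with `--databases` (as in wikidb_dump.sh) begins with statements along these lines, which is why this restore script doesn't need to create the database first (the charset clause varies by server):

```
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `wikidb` /*!40100 DEFAULT CHARACTER SET utf8mb4 */;
USE `wikidb`;
```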

View File

@@ -7,9 +7,9 @@ After=docker.service
Restart=always
StandardError=null
StandardOutput=null
ExecStartPre=/usr/bin/test -f {{ pod_install_dir }}/docker-compose.yml
ExecStart=/usr/local/bin/docker-compose -f {{ pod_install_dir }}/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f {{ pod_install_dir }}/docker-compose.yml stop
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml
ExecStart=/usr/local/bin/docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml stop
[Install]
WantedBy=default.target