233 Commits

Author SHA1 Message Date
f134f3cfbe fix networking issues in gitea action runner 2026-04-19 16:27:52 -07:00
021673c5aa fix stupid volume issue 2026-04-19 16:18:17 -07:00
f0e7740409 upgrade mediawiki (AGAIN) 2026-04-19 16:18:02 -07:00
45b60b42b4 pod-charlesreid1 service: stdout journal 2026-04-19 16:17:46 -07:00
f58fa07a9d remove endless tex packages from mw Dockerfile 2026-04-19 12:07:40 -07:00
c67149837a fix edit box style 2026-04-19 12:07:14 -07:00
6fcc762836 add runner config dir 2026-04-19 11:38:55 -07:00
90a98652be add gitea runner config 2026-04-19 11:36:10 -07:00
3cc52ddb6f get gitea container set up for gitea actions 2026-04-19 11:00:45 -07:00
714d939a0f fix [edit] style 2026-04-15 23:33:59 -07:00
e007447c99 use -i not -it 2026-04-15 23:33:45 -07:00
8b0645d700 fix [edit] style 2026-04-15 23:11:13 -07:00
e4eeab201f remove plan files 2026-04-15 23:10:59 -07:00
5470e90660 Merge branch 'claude-plans-execute-upgrade'
* claude-plans-execute-upgrade:
  add build extensions dir script
  update gitignore
  add php error logging
  fix bootstrap2 skin for 1.39
  update LocalSettings.php
  upgrade to mediawiki 1.39 in mw dockerfile
  bump mysql dockerfile version from 5.7 to 8.0
  add mysql no root password plan
  add execution notes
  update plan
  remove plan
  add fixes for getting non-root mysql user
  fix wiki backups and canary script to check for missing trailer, not just nonzero files
  remove stupid dead file
  add plan to fix sql backups, plus implemented fixes for sql backups
  add mysql no root pw transition plan
  add mediawiki upgrade plan
2026-04-15 22:58:04 -07:00
49d5f64c98 add build extensions dir script 2026-04-15 22:56:36 -07:00
2cba400f81 update gitignore 2026-04-15 22:55:08 -07:00
ef59df6890 add php error logging 2026-04-15 22:55:02 -07:00
047288fe7b fix bootstrap2 skin for 1.39 2026-04-15 22:54:08 -07:00
fc0abbad25 update LocalSettings.php 2026-04-15 22:53:51 -07:00
e51d8b19f2 upgrade to mediawiki 1.39 in mw dockerfile 2026-04-15 22:53:13 -07:00
fe1a35d06e bump mysql dockerfile version from 5.7 to 8.0 2026-04-15 22:52:48 -07:00
a7b4a4c9ff add mysql no root password plan 2026-04-15 22:51:27 -07:00
47fd49cecb add execution notes 2026-04-15 22:29:05 -07:00
564dfba199 update plan 2026-04-15 22:28:48 -07:00
a907947e78 remove plan 2026-04-13 18:52:21 -07:00
b44d535a93 Merge branch 'claude-plans-mysql-no-root' into claude-plans
* claude-plans-mysql-no-root:
  add fixes for getting non-root mysql user
2026-04-13 18:50:33 -07:00
98d9368d65 add fixes for getting non-root mysql user 2026-04-13 18:50:02 -07:00
b475f2142e Merge branch 'claude-plans-first-fix-sql-backups' into claude-plans
* claude-plans-first-fix-sql-backups:
  fix wiki backups and canary script to check for missing trailer, not just nonzero files
  remove stupid dead file
  add plan to fix sql backups, plus implemented fixes for sql backups
2026-04-13 18:38:30 -07:00
647e24013e fix wiki backups and canary script to check for missing trailer, not just nonzero files 2026-04-13 18:38:02 -07:00
dba56b4d43 remove stupid dead file 2026-04-13 18:37:40 -07:00
134450d922 add plan to fix sql backups, plus implemented fixes for sql backups 2026-04-13 18:30:32 -07:00
91120fa093 fix stupid environment.example problems 2026-04-13 17:20:13 -07:00
227b8c058e fix math rendering to be cheap ass <code> 2026-04-13 17:19:05 -07:00
2772e3447c add mysql no root pw transition plan 2026-04-02 21:38:49 -07:00
c6f0183d4b add mediawiki upgrade plan 2026-04-02 21:38:39 -07:00
1b3d1776b4 overwrite existing rendered templates by default 2026-03-25 20:58:10 -07:00
1f47029098 Merge pull request #2 from charlesreid1-docker/claude-audit
Implement changes recommended from Claude audit
2026-03-25 20:25:07 -07:00
0a89fe68c8 use docker volume for nginx log storage 2026-03-25 19:53:29 -07:00
d0b8e83ffc convert _.conf to _.conf.j2 2026-03-25 19:53:12 -07:00
89d5849708 apply rate-limiting to wiki urls consistently 2026-03-25 19:47:31 -07:00
13a3a1cb5e add X-Forwarded-Proto header to let mw/gitea know requests came in over https 2026-03-25 19:46:53 -07:00
892eddcbbb fix imagemagick thumbnail size limit 2026-03-25 19:43:57 -07:00
c95fcfaaf2 implement frontend/backend network segmentation 2026-03-25 19:43:19 -07:00
0e09187b3e Merge branch 'claude-fix-backup-scripts' into claude-audit
* claude-fix-backup-scripts:
  fix var handling (more defensive)
2026-03-25 19:41:52 -07:00
8afcd3073b fix var handling (more defensive) 2026-03-25 19:41:39 -07:00
efb3fa0140 fix variable name error 2026-03-25 19:39:32 -07:00
3048c35647 actually print error message in backups canary 2026-03-25 19:38:29 -07:00
86997b5a55 add restart policy for mysql and mw 2026-03-25 19:37:13 -07:00
fadee2ea91 fix min password length to 10 2026-03-25 19:35:06 -07:00
87582b77b2 Merge branch 'claude-pin-versions' into claude-audit
* claude-pin-versions:
  pin gitea and nginx versions
2026-03-25 19:34:26 -07:00
aaa226d82a pin gitea and nginx versions 2026-03-25 19:33:08 -07:00
5f26ebac25 Merge branch 'claude-mw-upgrade-key' into claude-audit
* claude-mw-upgrade-key:
  no .mysql.rootpw.cnf file (empty)
  get MW upgrade key from env var
2026-03-25 10:36:56 -07:00
940e21f507 no .mysql.rootpw.cnf file (empty) 2026-03-25 10:35:38 -07:00
99ab12a2ba get MW upgrade key from env var 2026-03-25 10:35:38 -07:00
6c49dd3171 fix stupid timer issue 2025-12-06 19:04:03 -08:00
99616d5de5 5 per second 2025-10-16 02:43:30 -07:00
83f898192a bump rate limit to 6 requests per second 2025-10-16 02:42:59 -07:00
5eb9ee5c3c add rate limits to /wiki, /w, and gitea endpoints 2025-10-16 02:42:44 -07:00
df23627e9a add a mediawiki cache directory to mw conf 2025-10-16 02:42:25 -07:00
76dc820b2d bind-mount /var/log/nginx between container and host 2025-10-16 02:42:09 -07:00
6b2b21b668 add base nginx.conf with rate limiting 2025-09-24 12:26:20 -07:00
bcb04257fa add slow query log config for mysql 2025-09-24 12:26:08 -07:00
0cad1e0398 add nginx and mysql config files 2025-09-24 12:25:50 -07:00
14d70a919d add rate-limiting to https config 2025-09-24 12:25:13 -07:00
3aba9729e6 add Troubleshooting.md 2025-06-14 03:48:49 -07:00
eb840384d1 update gitea theme name in app.ini.j2 2025-06-14 03:47:59 -07:00
5bf613cd56 ban more jerks 2025-05-24 19:36:17 -07:00
ccfed3f3fc update mw skin 2025-05-24 19:36:17 -07:00
194e619537 3 weeks for backups 2025-03-09 10:39:23 -07:00
a0f9548fcf ban more jerks 2025-03-07 16:13:15 -08:00
418315150a ban more jerks 2025-03-07 15:55:14 -08:00
ebb304d374 ban more jerks 2025-03-07 15:43:51 -08:00
8580c2c1f0 ban jerks 2025-03-06 12:24:43 -08:00
a3f460113a add instructions for blocking IP addresses 2024-11-16 19:17:46 -08:00
e94f911d99 add "ban jerks" section to nginx config 2024-11-16 19:17:31 -08:00
f7446c5a2d chmod the logs 2023-10-22 08:27:17 -07:00
6d1fa940a7 add wikifiles restore script 2023-10-15 13:06:49 -07:00
cfac7c69dc fix env var problem 2023-10-15 13:06:48 -07:00
3287d57554 fix script comment 2023-10-15 13:06:48 -07:00
d347024939 update gitea app.ini jinja template 2023-10-02 07:34:19 -07:00
8e4f86c8c6 smol makefile fix 2023-08-22 04:33:15 -07:00
5b855a575a make adjustments to bring all pod backup scripts in sync 2022-07-16 13:19:39 -07:00
4248f86c64 fixup restore db script 2022-07-15 17:52:58 -07:00
f36011d4cc fixup restore wikifiles 2022-07-15 17:49:59 -07:00
4953dfb8f3 remove tree subdomain 2022-06-05 21:05:20 -07:00
d003935769 update php.ini upload size to match localsettings.php 2022-03-23 20:05:47 -07:00
58e795bd98 fix backup canary script 2022-03-17 15:20:02 -07:00
0709e883ea 8am 2022-03-17 14:37:04 -07:00
8965515215 run backups canary every day 2022-03-17 14:36:00 -07:00
69523ba027 remove tree 2022-03-17 14:18:04 -07:00
2a4ed33024 add tree htpasswd to docker-compose 2022-03-09 20:32:57 -08:00
f880c44b79 add .tree.htpasswd to tree subdomain for auth protection 2022-03-09 20:18:26 -08:00
5cac0fa869 fix cert for tree subdomain 2022-03-09 09:01:14 -08:00
303ebf8ea3 add tree subdomain to renew cert script 2022-03-09 08:36:49 -08:00
4d638c456e bind-mount /www tree subdomain htdocs 2022-03-08 09:09:11 -08:00
72fc465d1d add tree subdomain to nginx config 2022-03-08 09:08:52 -08:00
2f579f4cfa restore 2022-03-08 09:02:06 -08:00
1bc4bb4902 add mw to skin footer 2022-03-06 18:49:12 -08:00
d91b7dc735 flush wikifiles and wikidb 2022-02-20 19:13:45 -08:00
acb2f57176 jerks 2022-02-20 19:13:45 -08:00
3482004df0 add php.ini 2022-02-07 18:18:54 -08:00
4ed1b479ef JERKS 2022-02-07 16:08:44 -08:00
5a931c2e38 another jerk 2022-02-07 15:49:43 -08:00
17da345041 more jerks 2022-02-07 15:47:19 -08:00
5e9be9e6c8 fix one more robots.txt 2022-02-07 15:20:27 -08:00
0148fe3e55 fix bind-mounting robots.txt 2022-02-07 15:07:46 -08:00
a144d6070b fix parsing of du command 2022-02-06 17:36:38 -08:00
989036ac21 add certbot to rsyslog filters 2022-02-06 17:36:38 -08:00
523ed50647 tell tar to stop crying about the log file and just skip it 2022-01-23 12:12:48 -08:00
03f81f4a25 more horrible hard-coded python binary 2022-01-18 22:02:34 -08:00
002ad20d7d stupid stupid stupid hard-coded shim path 2022-01-18 21:57:14 -08:00
2cb6a39990 restore weekly schedule 2022-01-18 21:48:58 -08:00
920ff3839e update gitea robots 2022-01-16 13:37:47 -08:00
d3dae75d38 add robots.txt to charlesreid1.com and git.charlesreid1.com 2022-01-16 13:27:27 -08:00
4004ba6ccb add robots dir 2022-01-16 13:27:15 -08:00
cf982ee2c6 add robots.txt to docker-compose template 2022-01-16 13:26:52 -08:00
efd9487953 add cut cmd to du cmd in aws backup script 2022-01-16 13:26:37 -08:00
b2552b6345 fix gitea backup script 2022-01-16 12:28:06 -08:00
1a8f699ab4 UGH more endless fixes 2022-01-16 12:07:09 -08:00
5e3ab1768c add boto/botocore checks, rearrange service installation steps 2022-01-16 11:53:43 -08:00
291ff2d28a restore daily runs 2022-01-16 11:53:11 -08:00
229975883c restore once a week schedule 2022-01-15 09:20:26 -08:00
af7ef822f0 remove commented lines 2022-01-15 09:18:28 -08:00
cc3688a982 add botocore/boto3 check for canary 2022-01-15 08:51:16 -08:00
e080cda745 add missing directive to rsyslog conf file 2022-01-15 08:05:34 -08:00
45c0f1390f update certbot renewal service 2022-01-14 13:24:51 -08:00
dacef1ac09 fix rsyslog config file 2022-01-14 13:22:52 -08:00
03a8456a2a fix execstartpre for canary service 2022-01-12 14:19:14 -08:00
d1d749d8e4 update makefile and add rsyslog config file 2022-01-12 14:06:56 -08:00
74adabc43a update log strategy - all services log to syslog, rely on user to filter system log 2022-01-12 13:55:37 -08:00
3566305577 add rsyslog filtering option 2022-01-12 13:53:36 -08:00
7442b2ee87 completely remove StandardOutput: from all serivces 2022-01-10 11:17:07 -08:00
9aa49166a6 remove StandardOutput from service files https://github.com/systemd/systemd/pull/10944 2022-01-10 10:38:02 -08:00
f06ac24ecb fix file: to append: 2022-01-10 01:36:18 -08:00
b796cc9756 bump backup services schedule to daily 2022-01-09 11:52:24 -08:00
25063ed251 pin mediawiki version to 1.34 in mw Dockerfile 2021-12-30 16:40:02 -08:00
72a47d71f2 more fail2ban cleanup 2021-12-30 16:31:31 -08:00
dba09976fb remove non-functional fail2banlog ext 2021-12-30 16:30:03 -08:00
7a3c76b9f9 remove unused script (use one in scripts/ instead) 2021-12-30 15:56:30 -08:00
18fd6038df fix clean-templates file 2021-12-30 15:56:30 -08:00
18814b6a1d fix pod install dir variable name 2021-12-30 15:43:08 -08:00
fc35d94b3c fix typos in apply templates script 2021-12-30 14:46:39 -08:00
3604bc1378 ignore environment when cleaning rendered templates 2021-12-30 14:44:14 -08:00
f0f65db9e3 make mkdocs-material submodule url https instad of git so it works without ssh key preconfigured 2021-12-30 14:37:15 -08:00
e5686d4d9a Merge branch 'feature/environment-template'
* feature/environment-template:
  massive rename of all ansible variables
  prep apply templates script for ansible variable rename
  fix missing var name in environment.j2
2021-12-30 12:00:06 -08:00
30c4a24b8d massive rename of all ansible variables 2021-12-30 11:59:45 -08:00
904122db17 prep apply templates script for ansible variable rename 2021-12-30 11:59:43 -08:00
8760edf0c3 fix missing var name in environment.j2 2021-12-30 11:56:53 -08:00
b4650771bc add environment template 2021-12-30 11:41:26 -08:00
b8182774a4 add --ignore-failed-read flag to gitea tar command 2021-12-26 19:26:48 -08:00
bb3b6c027a update certbot service to send logs to /var/log 2021-12-24 15:41:49 -08:00
1d18b5e71c send backup canary logs to /var/log 2021-12-24 15:41:22 -08:00
858cb6c3c8 send backup service logs to /var/log 2021-12-24 15:41:04 -08:00
0a5f9f99ac fix service description 2021-12-24 15:39:32 -08:00
2ac521e1c9 fix env var name in clean olderthan script 2021-12-19 10:48:58 -08:00
ffc4f1d0c0 add --no-progress flag to aws bacup script 2021-12-19 10:48:40 -08:00
7246b0845c cover cleanolderthan service with makefile install/uninstall rules 2021-12-12 11:29:02 -08:00
67acb4a32b Merge branch 'clean-backups'
* clean-backups:
  add systemd timer for clean backups service
2021-12-12 11:25:10 -08:00
15d4bcecc7 add systemd timer for clean backups service 2021-12-12 11:24:56 -08:00
9c92f3fd75 Merge branch 'service-updates'
* service-updates:
  add service to clean files older than N days
  add ExecStartPre to existing backup services
  clean older than 45 days
2021-12-12 11:16:38 -08:00
b838446576 add service to clean files older than N days 2021-12-12 10:50:44 -08:00
25b0f900a7 add ExecStartPre to existing backup services 2021-12-12 10:50:30 -08:00
0b2943fc3a clean older than 45 days 2021-12-12 10:50:07 -08:00
6fb8e7fdaa update apply templates script to include ignore list 2021-12-10 18:14:42 -08:00
573c0a3723 filter warning about password during mysqldump 2021-12-05 12:38:05 -08:00
6f5ee63c34 fix var problems with build extensions script and fix_* mw scripts 2021-12-04 17:53:36 -08:00
1e2e7a577f fix hard-coded vars 2021-12-04 17:17:12 -08:00
79d644e5bf typo fixes 2021-11-27 10:35:35 -08:00
f67faa651b fix var mapping 2021-11-27 09:55:09 -08:00
d5c441f9bf add chmod +x for shell scripts 2021-11-21 09:44:39 -08:00
c20a32b616 run certbot service as root 2021-11-20 11:45:58 -08:00
18d5d46406 create boto3 s3 resource in backups canary script 2021-11-20 10:39:01 -08:00
b8650cea95 update clean olderthan script 2021-11-17 15:06:23 -08:00
7caae4c5d6 less verbose aws commands 2021-11-17 15:00:45 -08:00
9e7f971a33 fix backups canary script 2021-10-17 14:29:38 -07:00
bf78d136c7 add canary to install process in makefile 2021-10-09 16:16:49 -07:00
dbd2effd68 update backups canary to use the right python 2021-10-09 16:16:03 -07:00
2c6a231983 fix canary service file 2021-10-02 13:34:25 -07:00
9f894f8780 fix canary timer syntax 2021-10-02 13:27:16 -07:00
07fd8e8a09 fix env var checks in apply templates script 2021-10-02 13:27:02 -07:00
31357bf16b restore backup timers to their final time 2021-10-02 13:17:28 -07:00
1a456a72b4 fix up aws backup script to use native aws cli env vars 2021-10-02 12:13:59 -07:00
2fe66094a6 fix chmod commands for installed template files 2021-10-02 12:07:17 -07:00
ca88f9ff5c fix permissions in makefile when installing service/timer files 2021-10-02 08:26:59 -07:00
5dc5ad5fb2 update timer syntax 2021-10-02 08:14:58 -07:00
455e3aa6e8 correctly specify aws credentials before using aws cli 2021-10-02 08:14:50 -07:00
fda32ac686 minor makefile improvements 2021-09-29 08:54:49 -07:00
3add031dd5 add aws backups to makefile 2021-09-29 08:34:33 -07:00
0f93a15f20 fix problem with aws backup script 2021-09-29 08:34:14 -07:00
20a569277b ignore log files 2021-09-27 22:00:30 -07:00
c6f7e290f4 fix wikifiles dump script 2021-09-27 21:59:45 -07:00
2a3c0b56c8 update timer description 2021-09-27 21:59:32 -07:00
2e6a339fbb fix service output/error syntax 2021-09-11 17:29:25 -07:00
619b09cc2c use abs path to bash 2021-09-11 17:25:40 -07:00
e7859eb4c5 more certbot updates 2021-09-11 17:22:40 -07:00
92a7189dbe update descriptions of service/timer for certbot 2021-09-11 17:19:48 -07:00
0401c08a56 install certbot with make install command 2021-09-11 17:18:16 -07:00
83e22c1cd2 update gitignore 2021-09-11 17:16:22 -07:00
e0ae04dee4 fix typos 2021-09-11 17:15:34 -07:00
61cd05b01a Merge branch 'cert-renewal'
* cert-renewal:
  add certbot renewal script, plus service, plus timer
2021-09-11 17:13:46 -07:00
1bd7893507 add certbot renewal script, plus service, plus timer 2021-09-11 17:13:30 -07:00
1d7e3b4c55 run backups canary on sunday at 9 am 2021-09-11 13:32:05 -07:00
ffe898d656 add gitea backups to makefile 2021-09-11 12:49:15 -07:00
d1895de16f update gitignore 2021-09-11 12:35:04 -07:00
40e9ef3880 Merge branch 'main' of https://github.com/charlesreid1-docker/pod-charlesreid1
* 'main' of https://github.com/charlesreid1-docker/pod-charlesreid1:
  reschedule aws backups for an hour after other backups
  add gitea timer/service
  add gitea backup script
2021-09-11 12:33:58 -07:00
89f8e4dd15 Merge branch 'add-gitea-backups'
* add-gitea-backups:
  reschedule aws backups for an hour after other backups
  add gitea timer/service
  add gitea backup script
2021-09-11 12:28:41 -07:00
30ad04448c reschedule aws backups for an hour after other backups 2021-09-11 12:23:51 -07:00
47c60ef5f9 add gitea timer/service 2021-09-11 12:23:29 -07:00
83b4a08fbd add gitea backup script 2021-09-11 12:23:20 -07:00
753df5176a update d-gitea readme a bit 2021-09-11 12:06:05 -07:00
941923c5da fix mediawiki build extensions script 2021-09-11 12:05:49 -07:00
9103e60eec prefix install/uninstall commands with sudo 2021-09-11 12:05:37 -07:00
ac8c6e7c7c remove unused file 2021-09-11 12:03:50 -07:00
39eb2f8b00 update gitignore for d-gitea 2021-09-11 12:02:20 -07:00
9e3db8ea2e update service to specify full path to test 2021-09-11 11:39:11 -07:00
ea814e572f add more rendered templates to gitignore 2021-09-11 11:30:09 -07:00
8181e334eb update apply templates script 2021-09-11 11:29:26 -07:00
b6209c2bfa revamp jinja-to-env var map 2021-09-11 11:28:38 -07:00
895605e340 fix apply templates 2021-09-11 11:13:32 -07:00
dd119618e9 ignore gitea data dir 2021-09-11 11:09:48 -07:00
46aeb84217 Merge branch 'make-stuff'
* make-stuff: (57 commits)
  backup canary -> backups canary
  add backup canary
  update env example
  remove gitea from makefile
  update scripts readme
  add readme for backup scripts
  update timers to run on sunday
  more scripts cleanup
  clean up existing scripts, remove gitea dump scripts
  add aws backup scripts (first draft)
  disable gitea backups
  remove gitea, mediawiki, mysql, nginx submodules
  add nginx dir
  add d-mysql directory contents
  adding charlesreid1.com wiki config dir - includes MW skin
  update gitignore
  fix clean templates script
  add d-mediawiki files
  revamp makefile, add mw make commands
  remove unused image
  ...
2021-09-11 10:26:57 -07:00
857f5eaad8 Merge branch 'backup-stuff' into make-stuff
* backup-stuff:
  backup canary -> backups canary
  add backup canary
  update env example
  remove gitea from makefile
  update scripts readme
  add readme for backup scripts
  update timers to run on sunday
  more scripts cleanup
  clean up existing scripts, remove gitea dump scripts
  add aws backup scripts (first draft)
2021-09-11 10:25:27 -07:00
a4e157223a backup canary -> backups canary 2021-09-10 16:32:16 -07:00
1cd2100c03 add backup canary 2021-09-10 16:30:12 -07:00
cfb48578da update env example 2021-09-10 16:29:53 -07:00
8f049c05d3 remove gitea from makefile 2021-09-10 14:22:04 -07:00
2db3cf5001 update scripts readme 2021-09-10 14:21:39 -07:00
48d8184022 add readme for backup scripts 2021-09-10 14:20:14 -07:00
ad7cad9521 update timers to run on sunday 2021-09-10 14:10:03 -07:00
cef2e260b0 more scripts cleanup 2021-09-10 14:02:34 -07:00
ae4abd454b clean up existing scripts, remove gitea dump scripts 2021-09-10 13:52:33 -07:00
ab245284d7 add aws backup scripts (first draft) 2021-09-10 13:52:09 -07:00
9c8317c2bc disable gitea backups 2021-09-09 16:09:32 -07:00
71 changed files with 1394 additions and 623 deletions

.gitignore (vendored), 18 changes

```diff
@@ -1,20 +1,26 @@
+*.log
 *.pyc
 environment
 attic
 # gitea
-#d-gitea/data/
-d-gitea/custom/conf/app.ini
-d-gitea/custom/gitea.db
-d-gitea/custom/avatars
-d-gitea/custom/log/
-d-gitea/custom/queues/
+d-gitea/data/
+d-gitea/custom/
 # mediawiki
 charlesreid1.wiki.conf
+d-mediawiki/mediawiki/
+d-mediawiki/charlesreid1-config/mediawiki/skins/Bootstrap2/Bootstrap2.php
+d-mediawiki/charlesreid1-config/mediawiki/skins/Bootstrap2/navbar.php
+d-mediawiki/charlesreid1-config/mediawiki/mathjax
+# nginx
+d-nginx-charlesreid1/conf.d/http.DOMAIN.conf
+d-nginx-charlesreid1/conf.d/https.DOMAIN.conf
 # scripts dir
 scripts/git_*_www.py
+scripts/certbot/renew_charlesreid1_certs.sh
 *.timer
 *.service
```

.gitmodules (vendored), 2 changes

```diff
@@ -1,3 +1,3 @@
 [submodule "mkdocs-material"]
 	path = mkdocs-material
-	url = git@github.com:charlesreid1-docker/mkdocs-material.git
+	url = https://github.com/charlesreid1/mkdocs-material
```

Makefile, 131 changes

```diff
@@ -26,7 +26,7 @@ help:
 	@echo "--------------------------------------------------"
 	@echo " Backups:"
 	@echo ""
-	@echo "make backups: Create backups of every service (gitea, wiki database, wiki files) in ~/backups"
+	@echo "make backups: Create backups of every service (wiki database, wiki files) in ~/backups"
 	@echo ""
 	@echo "make clean-backups: Remove files from ~/backups directory older than 30 days"
 	@echo ""
@@ -53,7 +53,7 @@ help:
 	@echo ""
 	@echo "make install: Install and start systemd service to run pod-charlesreid1."
 	@echo " Also install and start systemd service for pod-charlesreid1 backup services"
-	@echo " for each service (gitea/mediawiki/mysql) part of pod-charlesreid1."
+	@echo " for each service (mediawiki/mysql) part of pod-charlesreid1."
 	@echo ""
 	@echo "make uninstall: Remove all systemd startup services and timers part of pod-charlesreid1"
 	@echo ""
@@ -61,18 +61,20 @@ help:
 # Templates
 templates:
-	python3 $(POD_CHARLESREID1_DIR)/scripts/apply_templates.py
+	@find * -name "*.service.j2" | xargs -I '{}' chmod 644 {}
+	@find * -name "*.timer.j2" | xargs -I '{}' chmod 644 {}
+	/home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/apply_templates.py
 
 list-templates:
 	@find * -name "*.j2"
 
 clean-templates:
-	python3 $(POD_CHARLESREID1_DIR)/scripts/clean_templates.py
+	# sudo is required because bind-mounted gitea files end up owned by root. stupid docker.
+	sudo -E /home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/clean_templates.py
 
 # Backups
-backups: templates
-	$(POD_CHARLESREID1_DIR)/scripts/backups/gitea_dump.sh
+backups:
 	$(POD_CHARLESREID1_DIR)/scripts/backups/wikidb_dump.sh
 	$(POD_CHARLESREID1_DIR)/scripts/backups/wikifiles_dump.sh
@@ -88,52 +90,105 @@ mw-fix-extensions: mw-build-extensions
 	$(POD_CHARLESREID1_DIR)/scripts/mw/build_extensions_dir.sh
 
 mw-fix-localsettings:
-	$(POD_CHARLESEREID1_DIR)/scripts/mw/fix_LocalSettings.sh
+	$(POD_CHARLESREID1_DIR)/scripts/mw/fix_LocalSettings.sh
 
 mw-fix-skins:
-	$(POD_CHARLESEREID1_DIR)/scripts/mw/fix_skins.sh
+	$(POD_CHARLESREID1_DIR)/scripts/mw/fix_skins.sh
 
 # /www Dir
-clone-www: templates
-	python3 $(POD_CHARLESREID1_DIR)/scripts/git_clone_www.py
+clone-www:
+	/home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/git_clone_www.py
 
-pull-www: templates
-	python3 $(POD_CHARLESREID1_DIR)/scripts/git_pull_www.py
+pull-www:
+	/home/charles/.pyenv/shims/python3 $(POD_CHARLESREID1_DIR)/scripts/git_pull_www.py
 
-install: templates
+install:
 ifeq ($(shell which systemctl),)
 	$(error Please run this make command on a system with systemctl installed)
 endif
-	cp $(POD_CHARLESREID1_DIR)/scripts/pod-charlesreid1.service /etc/systemd/system/pod-charlesreid1.service
-	cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-gitea.{service,timer} /etc/systemd/system/.
-	cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-wikidb.{service,timer} /etc/systemd/system/.
-	cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-wikifiles.{service,timer} /etc/systemd/system/.
-	systemctl daemon-reload
-	systemctl enable pod-charlesreid1
-	systemctl enable pod-charlesreid1-backups-gitea.timer
-	systemctl enable pod-charlesreid1-backups-wikidb.timer
-	systemctl enable pod-charlesreid1-backups-wikifiles.timer
-	systemctl start pod-charlesreid1-backups-gitea.timer
-	systemctl start pod-charlesreid1-backups-wikidb.timer
-	systemctl start pod-charlesreid1-backups-wikifiles.timer
+	@/home/charles/.pyenv/shims/python3 -c 'import botocore' || (echo "Please install the botocore library using python3 or pip3 binary"; exit 1)
+	@/home/charles/.pyenv/shims/python3 -c 'import boto3' || (echo "Please install the boto3 library using python3 or pip3 binary"; exit 1)
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/pod-charlesreid1.service /etc/systemd/system/pod-charlesreid1.service
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-aws.{service,timer} /etc/systemd/system/.
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-cleanolderthan.{service,timer} /etc/systemd/system/.
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-gitea.{service,timer} /etc/systemd/system/.
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-wikidb.{service,timer} /etc/systemd/system/.
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/pod-charlesreid1-backups-wikifiles.{service,timer} /etc/systemd/system/.
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/canary/pod-charlesreid1-canary.{service,timer} /etc/systemd/system/.
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/certbot/pod-charlesreid1-certbot.{service,timer} /etc/systemd/system/.
+	sudo cp $(POD_CHARLESREID1_DIR)/scripts/backups/10-pod-charlesreid1-rsyslog.conf /etc/rsyslog.d/.
+	sudo chmod 664 /etc/systemd/system/pod-charlesreid1*
+	sudo systemctl daemon-reload
+	sudo systemctl restart rsyslog
+	sudo systemctl enable pod-charlesreid1
+	sudo systemctl enable pod-charlesreid1-backups-wikidb.timer
+	sudo systemctl enable pod-charlesreid1-backups-wikifiles.timer
+	sudo systemctl enable pod-charlesreid1-backups-gitea.timer
+	sudo systemctl enable pod-charlesreid1-backups-aws.timer
+	sudo systemctl enable pod-charlesreid1-backups-cleanolderthan.timer
+	sudo systemctl enable pod-charlesreid1-canary.timer
+	sudo systemctl enable pod-charlesreid1-certbot.timer
+	sudo systemctl start pod-charlesreid1-backups-wikidb.timer
+	sudo systemctl start pod-charlesreid1-backups-wikifiles.timer
+	sudo systemctl start pod-charlesreid1-backups-gitea.timer
+	sudo systemctl start pod-charlesreid1-backups-aws.timer
+	sudo systemctl start pod-charlesreid1-backups-cleanolderthan.timer
+	sudo systemctl start pod-charlesreid1-canary.timer
+	sudo systemctl start pod-charlesreid1-certbot.timer
+	sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-aws.service.log
+	sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-cleanolderthan.service.log
+	sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-gitea.service.log
+	sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-wikidb.service.log
+	sudo chown syslog:syslog /var/log/pod-charlesreid1-backups-wikifiles.service.log
+	sudo chown syslog:syslog /var/log/pod-charlesreid1-canary.service.log
 
 uninstall:
 ifeq ($(shell which systemctl),)
 	$(error Please run this make command on a system with systemctl installed)
 endif
-	systemctl disable pod-charlesreid1
-	systemctl disable pod-charlesreid1-backups-gitea.timer
-	systemctl disable pod-charlesreid1-backups-wikidb.timer
-	systemctl disable pod-charlesreid1-backups-wikifiles.timer
-	systemctl stop pod-charlesreid1
-	systemctl stop pod-charlesreid1-backups-gitea.timer
-	systemctl stop pod-charlesreid1-backups-wikidb.timer
-	systemctl stop pod-charlesreid1-backups-wikifiles.timer
-	rm -f /etc/systemd/system/pod-charlesreid1.service
-	rm -f /etc/systemd/system/pod-charlesreid1-backups-gitea.{service,timer}
-	rm -f /etc/systemd/system/pod-charlesreid1-backups-wikidb.{service,timer}
-	rm -f /etc/systemd/system/pod-charlesreid1-backups-wikifiles.{service,timer}
-	systemctl daemon-reload
+	-sudo systemctl disable pod-charlesreid1
+	-sudo systemctl disable pod-charlesreid1-backups-aws.timer
+	-sudo systemctl disable pod-charlesreid1-backups-cleanolderthan.timer
+	-sudo systemctl disable pod-charlesreid1-backups-gitea.timer
+	-sudo systemctl disable pod-charlesreid1-backups-wikidb.timer
+	-sudo systemctl disable pod-charlesreid1-backups-wikifiles.timer
+	-sudo systemctl disable pod-charlesreid1-canary.timer
+	-sudo systemctl disable pod-charlesreid1-certbot.timer
+	# Leave the pod running!
+	# -sudo systemctl stop pod-charlesreid1
+	-sudo systemctl stop pod-charlesreid1-backups-aws.timer
+	-sudo systemctl stop pod-charlesreid1-backups-cleanolderthan.timer
+	-sudo systemctl stop pod-charlesreid1-backups-gitea.timer
+	-sudo systemctl stop pod-charlesreid1-backups-wikidb.timer
+	-sudo systemctl stop pod-charlesreid1-backups-wikifiles.timer
+	-sudo systemctl stop pod-charlesreid1-canary.timer
+	-sudo systemctl stop pod-charlesreid1-certbot.timer
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1.service
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-aws.{service,timer}
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-cleanolderthan.{service,timer}
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-gitea.{service,timer}
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-wikidb.{service,timer}
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1-backups-wikifiles.{service,timer}
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1-canary.{service,timer}
+	-sudo rm -f /etc/systemd/system/pod-charlesreid1-certbot.{service,timer}
+	sudo systemctl daemon-reload
+	-sudo rm -f /etc/rsyslog.d/10-pod-charlesreid1-rsyslog.conf
+	-sudo systemctl restart rsyslog
 
 .PHONY: help
```
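The install target enables seven timers (wikidb, wikifiles, gitea, aws, cleanolderthan, canary, certbot). After `make install`, a quick sanity check is to list the timers and confirm each is loaded with a next-trigger time; this is ordinary systemd usage, not a command from the repo, and exact output will vary by system:

```
systemctl list-timers 'pod-charlesreid1-*'
systemctl status pod-charlesreid1
```

If a timer is missing from the list, re-check that its `.service`/`.timer` pair actually landed in `/etc/systemd/system/` and that `systemctl daemon-reload` ran.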

Troubleshooting.md (new file), 19 lines

@@ -0,0 +1,19 @@
To get a shell in a container that has been created, before it is running in a pod, use `docker run`:
```
docker run --rm -it --entrypoint bash <image-name-or-id>
docker run --rm -it --entrypoint bash pod-charlesreid1_stormy_mediawiki
```
To get a shell in a container that is running in a pod, use `docker exec`:
```
docker exec -it <image-name> /bin/bash
docker exec -it stormy_mw /bin/bash
```
Also, if changes are not being picked up and you've already tried rebuilding the container image, try editing the Dockerfile; even a trivial edit invalidates Docker's build cache and forces the affected layers to rebuild.
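A heavier-handed alternative to editing the Dockerfile is to skip the build cache entirely and then recreate the container from the fresh image (the image tag below is illustrative, not necessarily the one this repo uses):

```
# rebuild every layer from scratch, ignoring cached layers
docker build --no-cache -t pod-charlesreid1_stormy_mediawiki .

# remove the stale container and let docker-compose recreate it
docker compose up -d --force-recreate
```

This is slower than a cache-busting edit but removes any doubt about which layer is stale.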

```diff
@@ -18,7 +18,7 @@ The data directory contains any instance-specific gitea data.
 The data directory is bind-mounted to `/app/gitea/data` in the container.
 
-## Repository Data
+## Repository Drive
 
 Gitea stores all of its repositories in a separate drive that is at
 `/gitea_repositories` on the host machine.
```

```diff
@@ -6,12 +6,14 @@
 ;; https://github.com/go-gitea/gitea/blob/master/conf/app.ini
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 
-APP_NAME = {{ gitea_app_name }}
+APP_NAME = {{ pod_charlesreid1_gitea_app_name }}
 RUN_USER = git
 RUN_MODE = prod
+WORK_PATH = /data/gitea
 
 [ui]
-DEFAULT_THEME = arc-green
+DEFAULT_THEME = gitea-dark
+THEMES = gitea-dark
 
 [database]
 DB_TYPE = sqlite3
@@ -31,17 +33,17 @@ DISABLE_HTTP_GIT = false
 [server]
 PROTOCOL = http
-DOMAIN = git.{{ server_name_default }}
+DOMAIN = git.{{ pod_charlesreid1_server_name }}
 #CERT_FILE = /www/gitea/certs/cert.pem
 #KEY_FILE = /www/gitea/certs/key.pem
-SSH_DOMAIN = git.{{ server_name_default }}
+SSH_DOMAIN = git.{{ pod_charlesreid1_server_name }}
 HTTP_PORT = 3000
 HTTP_ADDR = 0.0.0.0
-ROOT_URL = https://git.{{ server_name_default }}
+ROOT_URL = https://git.{{ pod_charlesreid1_server_name }}
 ;ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/
 DISABLE_SSH = false
 ; port to display in clone url:
-SSH_PORT = 222
+;SSH_PORT = 222
 ; port for built-in ssh server to listen on:
 SSH_LISTEN_PORT = 22
 OFFLINE_MODE = false
@@ -92,9 +94,12 @@ ENABLED = false
 [security]
 INSTALL_LOCK = true
-SECRET_KEY = {{ gitea_secret_key }}
-MIN_PASSWORD_LENGTH = 6
-INTERNAL_TOKEN = {{ gitea_internal_token }}
+SECRET_KEY = {{ pod_charlesreid1_gitea_secretkey }}
+MIN_PASSWORD_LENGTH = 10
+INTERNAL_TOKEN = {{ pod_charlesreid1_gitea_internaltoken }}
 
+[actions]
+ENABLED = true
 
 [other]
 SHOW_FOOTER_BRANDING = false
```



@@ -0,0 +1,20 @@
log:
level: info
runner:
# Label format: <label>:<runner-type>:<image>
# "ubuntu-latest" is the standard GitHub Actions label.
# Map it (and common aliases) to a Docker image so jobs don't sit waiting.
labels:
# alpine: ~50MB, has python3+pip; install extras with: apk add --no-cache git curl
- "alpine:docker://python:3.12-alpine"
- "ubuntu-latest:docker://catthehacker/ubuntu:act-22.04"
- "ubuntu-22.04:docker://catthehacker/ubuntu:act-22.04"
- "ubuntu-20.04:docker://catthehacker/ubuntu:act-20.04"
- "ubuntu-24.04:docker://catthehacker/ubuntu:act-22.04"
container:
network: "pod-charlesreid1_frontend"
cache:
enabled: true
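With these labels registered, a minimal workflow can target the lightweight alpine runner. This is a hypothetical sketch, not a file in this repo; the `runs-on` value must match one of the labels configured above:

```yaml
# .gitea/workflows/ci.yml (hypothetical example)
name: ci
on: [push]
jobs:
  test:
    # "alpine" resolves to docker://python:3.12-alpine via the label above
    runs-on: alpine
    steps:
      - run: python3 --version
```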


@@ -1,22 +1,12 @@
FROM mediawiki FROM mediawiki:1.39.12
EXPOSE 8989 EXPOSE 8989
VOLUME ["/var/www/html"]
# Install ImageMagick # Install ImageMagick (used for image thumbnailing)
# and math stuff mentioned by Math extension readme
RUN apt-get update && \ RUN apt-get update && \
apt-get install -y build-essential \ apt-get install -y imagemagick && \
dvipng \ rm -rf /var/lib/apt/lists/*
ocaml \
ghostscript \
imagemagick \
texlive-latex-base \
texlive-latex-extra \
texlive-fonts-recommended \
texlive-lang-greek \
texlive-latex-recommended
# Copy skins, config files, and other particulars into container # Copy skins, config files, and other particulars into container
@@ -24,15 +14,13 @@ RUN apt-get update && \
# MediaWiki needs everything, everything, to be in one folder. # MediaWiki needs everything, everything, to be in one folder.
# Docker is totally incapable of mounting a file in a volume. # Docker is totally incapable of mounting a file in a volume.
# I cannot update LocalSettings.php without clearing the cache. # I cannot update LocalSettings.php without clearing the cache.
# I cannot clear the cache without reinstalling all of latex.
# I can't bind-mount the skins dir, because then it's owned by root. # I can't bind-mount the skins dir, because then it's owned by root.
# I can't fix the fact that all bind-mounted dirs are owned by root, # I can't fix the fact that all bind-mounted dirs are owned by root,
# because I can only add commands in THIS DOCKERFILE. # because I can only add commands in THIS DOCKERFILE.
# and when you run the commands in this dockerfile, # and when you run the commands in this dockerfile,
# YOU CANNOT SEE THE BIND-MOUNTED STUFF. # YOU CANNOT SEE THE BIND-MOUNTED STUFF.
# Extensions # Extensions (REL1_39 branches; EmbedVideo skipped for the 1.34 -> 1.39 upgrade)
COPY charlesreid1-config/mediawiki/extensions/EmbedVideo /var/www/html/extensions/EmbedVideo
COPY charlesreid1-config/mediawiki/extensions/Math /var/www/html/extensions/Math COPY charlesreid1-config/mediawiki/extensions/Math /var/www/html/extensions/Math
COPY charlesreid1-config/mediawiki/extensions/ParserFunctions /var/www/html/extensions/ParserFunctions COPY charlesreid1-config/mediawiki/extensions/ParserFunctions /var/www/html/extensions/ParserFunctions
COPY charlesreid1-config/mediawiki/extensions/SyntaxHighlight_GeSHi /var/www/html/extensions/SyntaxHighlight_GeSHi COPY charlesreid1-config/mediawiki/extensions/SyntaxHighlight_GeSHi /var/www/html/extensions/SyntaxHighlight_GeSHi
@@ -41,22 +29,27 @@ RUN chown -R www-data:www-data /var/www/html/*
# Skins # Skins
COPY charlesreid1-config/mediawiki/skins /var/www/html/skins COPY charlesreid1-config/mediawiki/skins /var/www/html/skins
RUN chown -R www-data:www-data /var/www/html/skins RUN chown -R www-data:www-data /var/www/html/skins
RUN touch /var/www/html/skins
# MathJax 3.2.2 (self-hosted, served via Apache alias at /w/mathjax/*).
# Math extension runs in 'source' mode; MathJax renders client-side, so we
# never call out to restbase/mathoid. See LocalSettings.php.j2.
COPY charlesreid1-config/mediawiki/mathjax /var/www/html/mathjax
RUN chown -R www-data:www-data /var/www/html/mathjax
# Settings # Settings
COPY charlesreid1-config/mediawiki/LocalSettings.php /var/www/html/LocalSettings.php COPY charlesreid1-config/mediawiki/LocalSettings.php /var/www/html/LocalSettings.php
RUN chown -R www-data:www-data /var/www/html/LocalSettings* RUN chown -R www-data:www-data /var/www/html/LocalSettings*
RUN chmod 600 /var/www/html/LocalSettings.php RUN chmod 600 /var/www/html/LocalSettings.php
# MediaWiki Fail2ban log directory
RUN mkdir -p /var/log/mwf2b
RUN chown -R www-data:www-data /var/log/mwf2b
RUN chmod 700 /var/log/mwf2b
# Apache conf file # Apache conf file
COPY charlesreid1-config/apache/*.conf /etc/apache2/sites-enabled/ COPY charlesreid1-config/apache/*.conf /etc/apache2/sites-enabled/
RUN a2enmod rewrite RUN a2enmod rewrite
RUN service apache2 restart RUN service apache2 restart
## make texvc # PHP conf file
#CMD cd /var/www/html/extensions/Math && make && apache2-foreground # https://hub.docker.com/_/php/
COPY php/php.ini /usr/local/etc/php/
# Start
CMD apache2-foreground CMD apache2-foreground


@@ -5,6 +5,10 @@ To update the MediaWiki skin:
- Rebuild the MW container while the docker pod is still running (won't affect the docker pod) - When finished rebuilding the MW container, restart the docker pod.
- When finished rebuilding the MW container, restart the docker pod. - When finished rebuilding the MW container, restart the docker pod.
The skin currently in use is in `charlesreid1-config/mediawiki/skins/Bootstrap2`
To rebuild and then restart the pod:
``` ```
# switch to main pod directory # switch to main pod directory
cd ../ cd ../


@@ -1,4 +1,4 @@
ServerName {{ server_name_default }} ServerName {{ pod_charlesreid1_server_name }}
Listen 8989 Listen 8989
@@ -7,10 +7,10 @@ Listen 8989
# talks to apache via 127.0.0.1 # talks to apache via 127.0.0.1
# on port 8989 # on port 8989
ServerAlias www.{{ server_name_default }} ServerAlias www.{{ pod_charlesreid1_server_name }}
LogLevel warn LogLevel warn
ServerAdmin {{ admin_email }} ServerAdmin {{ pod_charlesreid1_mediawiki_admin_email }}
DirectoryIndex index.html index.cgi index.php DirectoryIndex index.html index.cgi index.php


@@ -13,8 +13,8 @@ if ( !defined( 'MEDIAWIKI' ) ) {
} }
## The protocol and server name to use in fully-qualified URLs ## The protocol and server name to use in fully-qualified URLs
$wgServer = 'https://{{ server_name_default }}'; $wgServer = 'https://{{ pod_charlesreid1_server_name }}';
$wgCanonicalServer = 'https://{{ server_name_default }}'; $wgCanonicalServer = 'https://{{ pod_charlesreid1_server_name }}';
## The URL path to static resources (images, scripts, etc.) ## The URL path to static resources (images, scripts, etc.)
$wgStylePath = "$wgScriptPath/skins"; $wgStylePath = "$wgScriptPath/skins";
@@ -43,10 +43,11 @@ $wgDBpassword = getenv('MYSQL_PASSWORD');
# MySQL specific settings # MySQL specific settings
$wgDBprefix = ""; $wgDBprefix = "";
$wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=binary"; $wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=binary";
$wgDBmysql5 = true; # $wgDBmysql5 removed — deprecated in MW 1.39
# Shared memory settings # Shared memory settings
$wgMainCacheType = CACHE_ACCEL; $wgMainCacheType = CACHE_ACCEL;
$wgCacheDirectory = "$IP/cache";
$wgMemCachedServers = []; $wgMemCachedServers = [];
# To enable image uploads, make sure the 'images' directory # To enable image uploads, make sure the 'images' directory
@@ -83,16 +84,25 @@ $wgPingback = false;
# available UTF-8 locale # available UTF-8 locale
$wgShellLocale = "en_US.utf8"; $wgShellLocale = "en_US.utf8";
# If you have the appropriate support software installed # Math rendering: Math extension emits raw LaTeX ('source' mode), then
# you can enable inline LaTeX equations: # a self-hosted MathJax 3 build at /w/mathjax/ renders it client-side.
$wgUseTeX = true; # No mathoid, no restbase, no external CDN.
$wgTexvc = "$IP/extensions/Math/math/texvc"; $wgDefaultUserOptions['math'] = 'source';
#$wgTexvc = '/usr/bin/texvc'; $wgMathValidModes = [ 'source' ];
# Skip TeX validation entirely — default validator calls out to restbase,
# Set MathML as default rendering option # which breaks air-gapped installs even when we only emit source HTML.
$wgDefaultUserOptions['math'] = 'mathml'; $wgMathDisableTexFilter = 'always';
$wgMathFullRestbaseURL = 'https://en.wikipedia.org/api/rest_'; $wgHooks['BeforePageDisplay'][] = function ( $out, $skin ) {
$wgMathMathMLUrl = 'https://mathoid-beta.wmflabs.org/'; $out->addHeadItem( 'mathjax',
'<script>window.MathJax = {'
. 'tex: { inlineMath: [["$","$"],["\\\\(","\\\\)"]], '
. 'displayMath: [["$$","$$"],["\\\\[","\\\\]"]], '
. 'processEscapes: true }, '
. 'options: { processHtmlClass: "mwe-math-fallback-source-inline|mwe-math-fallback-source-display|mwe-math-element" } '
. '};</script>'
. '<script async src="/w/mathjax/tex-chtml.js"></script>'
);
};
# Site language code, should be one of the list in ./languages/data/Names.php # Site language code, should be one of the list in ./languages/data/Names.php
$wgLanguageCode = "en"; $wgLanguageCode = "en";
@@ -104,7 +114,7 @@ $wgAuthenticationTokenVersion = "1";
# Site upgrade key. Must be set to a string (default provided) to turn on the # Site upgrade key. Must be set to a string (default provided) to turn on the
# web installer while LocalSettings.php is in place # web installer while LocalSettings.php is in place
$wgUpgradeKey = "984c1d9858dabc27"; $wgUpgradeKey = getenv('MEDIAWIKI_UPGRADEKEY');
# No license info # No license info
$wgRightsPage = ""; $wgRightsPage = "";
@@ -156,7 +166,7 @@ $wgPutIPinRC=true;
# Getting some weird "Error creating thumbnail: Invalid thumbnail parameters" messages w/ thumbnail # Getting some weird "Error creating thumbnail: Invalid thumbnail parameters" messages w/ thumbnail
# http://www.gossamer-threads.com/lists/wiki/mediawiki/169439 # http://www.gossamer-threads.com/lists/wiki/mediawiki/169439
$wgMaxImageArea=64000000; $wgMaxImageArea=64000000;
$wgMaxShellMemory=0; $wgMaxShellMemory=512000;
$wgFavicon="$wgScriptPath/favicon.ico"; $wgFavicon="$wgScriptPath/favicon.ico";
@@ -197,24 +207,22 @@ $wgSyntaxHighlightDefaultLang = "text";
wfLoadExtension( 'ParserFunctions' ); wfLoadExtension( 'ParserFunctions' );
############################################## ##############################################
# Embed videos extension # Embed videos extension — SKIPPED for MW 1.39 upgrade (add back later)
# https://github.com/HydraWiki/mediawiki-embedvideo/
# require_once("$IP/extensions/EmbedVideo/EmbedVideo.php");
wfLoadExtension( 'EmbedVideo' );
########################################### ###########################################
# Math extension # Math extension
# https://github.com/wikimedia/mediawiki-extensions-Math.git # https://github.com/wikimedia/mediawiki-extensions-Math.git
require_once "$IP/extensions/Math/Math.php"; wfLoadExtension( 'Math' );
############################################# ###########################################
# Fail2banlog extension # Parsoid (bundled in MW 1.39, runs in-process)
# https://www.mediawiki.org/wiki/Extension:Fail2banlog # Required for REST API /v1/page/{title}/with_html endpoint
require_once "$IP/extensions/Fail2banlog/Fail2banlog.php"; wfLoadExtension( 'Parsoid', "$IP/vendor/wikimedia/parsoid/extension.json" );
$wgFail2banlogfile = "/var/log/apache2/mwf2b.log"; $wgParsoidSettings = [
'useSelser' => true,
];
############################################# #############################################
# Fix cookies crap # Fix cookies crap
@@ -224,7 +232,7 @@ session_save_path("/tmp");
############################################## ##############################################
# Secure login # Secure login
$wgServer = "https://{{ server_name_default }}"; $wgServer = "https://{{ pod_charlesreid1_server_name }}";
$wgSecureLogin = true; $wgSecureLogin = true;
################################### ###################################
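For illustration, in 'source' mode the Math extension wraps raw TeX from a `<math>` tag in a span whose class matches the `processHtmlClass` option configured above, so MathJax typesets it client-side. An assumed example of the wikitext input:

```
<math>\int_0^1 x^2\,dx = \frac{1}{3}</math>
```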


@@ -1,93 +0,0 @@
#!/bin/bash
#
# clone or download each extension
# and build o
mkdir -p extensions
(
cd extensions
##############################
Extension="SyntaxHighlight_GeSHi"
if [ ! -d ${Extension} ]
then
## This requires mediawiki > 1.31
## (so does REL1_31)
#git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
## This manually downloads REL1_30
#wget https://extdist.wmflabs.org/dist/extensions/SyntaxHighlight_GeSHi-REL1_30-87392f1.tar.gz -O SyntaxHighlight_GeSHi.tar.gz
#tar -xzf SyntaxHighlight_GeSHi.tar.gz -C ${PWD}
#rm -f SyntaxHighlight_GeSHi.tar.gz
# Best of both worlds
git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
(
cd ${Extension}
git checkout --track remotes/origin/REL1_34
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="ParserFunctions"
if [ ! -d ${Extension} ]
then
git clone https://github.com/wikimedia/mediawiki-extensions-ParserFunctions.git ${Extension}
(
cd ${Extension}
git checkout --track remotes/origin/REL1_34
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="EmbedVideo"
if [ ! -d ${Extension} ]
then
git clone https://github.com/HydraWiki/mediawiki-embedvideo.git ${Extension}
(
cd ${Extension}
git checkout v2.7.3
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="Math"
if [ ! -d ${Extension} ]
then
git clone https://github.com/wikimedia/mediawiki-extensions-Math.git ${Extension}
(
cd ${Extension}
git checkout REL1_34
)
else
echo "Skipping ${Extension}"
fi
##############################
Extension="Fail2banlog"
if [ ! -d ${Extension} ]
then
git clone https://github.com/charlesreid1-docker/mw-fail2ban.git ${Extension}
(
cd ${Extension}
git checkout master
)
else
echo "Skipping ${Extension}"
fi
##############################
# fin
)


@@ -23,12 +23,12 @@ class SkinBootstrap2 extends SkinTemplate {
// cmr 05/08/2014 // cmr 05/08/2014
$template = 'Bootstrap2Template'; $template = 'Bootstrap2Template';
function setupSkinUserCss( OutputPage $out ) { // MW 1.39: Skin::setupSkinUserCss() was removed. initPage() is the
global $wgHandheldStyle; // per-request hook that still receives OutputPage and runs before
// headElement is generated, so addStyle() calls land in the <head>.
public function initPage( OutputPage $out ) {
parent::initPage( $out );
parent::setupSkinUserCss( $out );
// Append to the default screen common & print styles...
$out->addStyle( 'Bootstrap2/IE50Fixes.css', 'screen', 'lt IE 5.5000' ); $out->addStyle( 'Bootstrap2/IE50Fixes.css', 'screen', 'lt IE 5.5000' );
$out->addStyle( 'Bootstrap2/IE55Fixes.css', 'screen', 'IE 5.5000' ); $out->addStyle( 'Bootstrap2/IE55Fixes.css', 'screen', 'IE 5.5000' );
$out->addStyle( 'Bootstrap2/IE60Fixes.css', 'screen', 'IE 6' ); $out->addStyle( 'Bootstrap2/IE60Fixes.css', 'screen', 'IE 6' );
@@ -36,7 +36,6 @@ class SkinBootstrap2 extends SkinTemplate {
$out->addStyle( 'Bootstrap2/rtl.css', 'screen', '', 'rtl' ); $out->addStyle( 'Bootstrap2/rtl.css', 'screen', '', 'rtl' );
$out->addStyle( 'Bootstrap2/bootstrap.css' ); $out->addStyle( 'Bootstrap2/bootstrap.css' );
$out->addStyle( 'Bootstrap2/slate.css' ); $out->addStyle( 'Bootstrap2/slate.css' );
$out->addStyle( 'Bootstrap2/main.css' ); $out->addStyle( 'Bootstrap2/main.css' );
@@ -72,7 +71,8 @@ class Bootstrap2Template extends QuickTemplate {
// -------- Start ------------ // -------- Start ------------
// Adding the following line makes Geshi work // Adding the following line makes Geshi work
$this->html( 'headelement' ); // (MW 1.39: read $this->data directly to avoid QuickTemplate::html('headelement') deprecation)
echo $this->data['headelement'];
// Left this out because the [edit] buttons were becoming right-aligned // Left this out because the [edit] buttons were becoming right-aligned
// Got around that behavior by changing shared.css // Got around that behavior by changing shared.css
// -------- End ------------ // -------- End ------------
@@ -106,7 +106,7 @@ include('/var/www/html/skins/Bootstrap2/navbar.php');
<div class="container-fixed"> <div class="container-fixed">
<div class="navbar-header"> <div class="navbar-header">
<a href="/wiki/" class="navbar-brand"> <a href="/wiki/" class="navbar-brand">
{{ top_domain }} wiki {{ pod_charlesreid1_server_name }} wiki
</a> </a>
</div> </div>
<div> <div>
@@ -146,7 +146,7 @@ include('/var/www/html/skins/Bootstrap2/navbar.php');
echo ' '; echo ' ';
echo $tab['class']; echo $tab['class'];
} }
echo '" id="' . Sanitizer::escapeId( "ca-$key" ) . '">'; echo '" id="' . Sanitizer::escapeIdForAttribute( "ca-$key" ) . '">';
echo '<a href="'; echo '<a href="';
echo htmlspecialchars($tab['href']); echo htmlspecialchars($tab['href']);
echo '">'; echo '">';
@@ -329,7 +329,7 @@ include('/var/www/html/skins/Bootstrap2/footer.php');
<?php } <?php }
if($this->data['feeds']) { ?> if($this->data['feeds']) { ?>
<li id="feedlinks"><?php foreach($this->data['feeds'] as $key => $feed) { <li id="feedlinks"><?php foreach($this->data['feeds'] as $key => $feed) {
?><a id="<?php echo Sanitizer::escapeId( "feed-$key" ) ?>" href="<?php ?><a id="<?php echo Sanitizer::escapeIdForAttribute( "feed-$key" ) ?>" href="<?php
echo htmlspecialchars($feed['href']) ?>" rel="alternate" type="application/<?php echo $key ?>+xml" class="feedlink"<?php echo $this->skin->tooltipAndAccesskey('feed-'.$key) ?>><?php echo htmlspecialchars($feed['text'])?></a>&nbsp; echo htmlspecialchars($feed['href']) ?>" rel="alternate" type="application/<?php echo $key ?>+xml" class="feedlink"<?php echo $this->skin->tooltipAndAccesskey('feed-'.$key) ?>><?php echo htmlspecialchars($feed['text'])?></a>&nbsp;
<?php } ?></li><?php <?php } ?></li><?php
} }
@@ -390,7 +390,7 @@ include('/var/www/html/skins/Bootstrap2/footer.php');
} }
//wfRunHooks( 'BootstrapTemplateToolboxEnd', array( &$this ) ); //wfRunHooks( 'BootstrapTemplateToolboxEnd', array( &$this ) );
wfRunHooks( 'BootstrapTemplateToolboxEnd', array( &$this ) ); Hooks::run( 'BootstrapTemplateToolboxEnd', array( &$this ) );
?> ?>
</ul> </ul>
<!-- <!--
@@ -429,7 +429,7 @@ include('/var/www/html/skins/Bootstrap2/footer.php');
<?php if ( is_array( $cont ) ) { ?> <?php if ( is_array( $cont ) ) { ?>
<ul class="nav nav-list"> <ul class="nav nav-list">
<li class="nav-header"><?php $out = wfMsg( $bar ); if (wfEmptyMsg($bar, $out)) echo htmlspecialchars($bar); else echo htmlspecialchars($out); ?></li> <li class="nav-header"><?php $msg = wfMessage( $bar ); if ($msg->isDisabled()) echo htmlspecialchars($bar); else echo htmlspecialchars($msg->text()); ?></li>
<?php foreach($cont as $key => $val) { ?> <?php foreach($cont as $key => $val) { ?>
<li id="<?php echo Sanitizer::escapeId($val['id']) ?>"<?php <li id="<?php echo Sanitizer::escapeId($val['id']) ?>"<?php
if ( $val['active'] ) { ?> class="active" <?php } if ( $val['active'] ) { ?> class="active" <?php }


@@ -11,7 +11,7 @@
</span> </span>
Made from the command line with vim by Made from the command line with vim by
<a href="http://charlesreid1.com">charlesreid1</a><br /> <a href="http://charlesreid1.com">charlesreid1</a><br />
with help from <a href="https://getbootstrap.com/">Bootstrap</a> and <a href="http://getpelican.com">Pelican</a>. with help from <a href="https://getbootstrap.com/">Bootstrap</a> and <a href="http://mediawiki.org">MediaWiki</a>.
</p> </p>
<p style="text-align: center"> <p style="text-align: center">


@@ -518,8 +518,18 @@ a.new:visited {
color: #a55858; color: #a55858;
} }
span.editsection { .mw-editsection, .editsection {
font-size: small; font-size: small;
font-weight: normal;
margin-left: 1em;
}
.editOptions {
background-color: #777;
}
.mw-editsection-bracket {
margin-left: 0;
} }
#preftoc { #preftoc {


@@ -6,14 +6,14 @@
<span class="icon-bar"></span> <span class="icon-bar"></span>
<span class="icon-bar"></span> <span class="icon-bar"></span>
</button> </button>
<a href="/" class="navbar-brand">{{ top_domain }}</a> <a href="/" class="navbar-brand">{{ pod_charlesreid1_server_name }}</a>
</div> </div>
<div> <div>
<div class="collapse navbar-collapse" id="myNavbar"> <div class="collapse navbar-collapse" id="myNavbar">
<ul class="nav navbar-nav"> <ul class="nav navbar-nav">
<li> <li>
<a href="https://{{ top_domain }}/wiki">Wiki</a> <a href="https://{{ pod_charlesreid1_server_name }}/wiki">Wiki</a>
</li> </li>
</ul> </ul>


@@ -1086,7 +1086,8 @@ html {
} }
body { body {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
font-size: 14px; /*font-size: 14px;*/
font-size: 20px;
line-height: 1.42857143; line-height: 1.42857143;
color: #c8c8c8; color: #c8c8c8;
background-color: #272b30; background-color: #272b30;

d-mediawiki/php/php.ini

@@ -0,0 +1,6 @@
post_max_size = 128M
memory_limit = 128M
upload_max_filesize = 100M
display_errors = Off
log_errors = On
error_log = /var/log/apache2/php_errors.log


@@ -1,11 +1,7 @@
FROM mysql:5.7 FROM mysql:8.0
MAINTAINER charles@charlesreid1.com MAINTAINER charles@charlesreid1.com
# make mysql data a volume # make mysql data a volume
VOLUME ["/var/lib/mysql"] VOLUME ["/var/lib/mysql"]
# put password in a password file
RUN printf "[client]\nuser=root\npassword=$MYSQL_ROOT_PASSWORD" > /root/.mysql.rootpw.cnf
RUN chmod 0600 /root/.mysql.rootpw.cnf
RUN chown mysql:mysql /var/lib/mysql RUN chown mysql:mysql /var/lib/mysql


@@ -0,0 +1,5 @@
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 0


@@ -1,6 +0,0 @@
# https://serverfault.com/a/525011
server {
server_name _;
listen *:80 default_server deferred;
return 444;
}


@@ -1,6 +1,6 @@
#################### ####################
# #
# {{ server_name_default }} # {{ pod_charlesreid1_server_name }}
# http/{{ port_default }} # http/{{ port_default }}
# #
# basically, just redirects to https # basically, just redirects to https
@@ -10,20 +10,20 @@
server { server {
listen 80; listen 80;
listen [::]:80; listen [::]:80;
server_name {{ server_name_default }}; server_name {{ pod_charlesreid1_server_name }};
return 301 https://{{ server_name_default }}$request_uri; return 301 https://{{ pod_charlesreid1_server_name }}$request_uri;
} }
server { server {
listen 80; listen 80;
listen [::]:80; listen [::]:80;
server_name www.{{ server_name_default }}; server_name www.{{ pod_charlesreid1_server_name }};
return 301 https://www.{{ server_name_default }}$request_uri; return 301 https://www.{{ pod_charlesreid1_server_name }}$request_uri;
} }
server { server {
listen 80; listen 80;
listen [::]:80; listen [::]:80;
server_name git.{{ server_name_default }}; server_name git.{{ pod_charlesreid1_server_name }};
return 301 https://git.{{ server_name_default }}$request_uri; return 301 https://git.{{ pod_charlesreid1_server_name }}$request_uri;
} }


@@ -1,9 +1,9 @@
#################### ####################
# #
# {{ server_name_default }} # {{ pod_charlesreid1_server_name }}
# https/443 # https/443
# #
# {{ server_name_default }} and www.{{ server_name_default }} # {{ pod_charlesreid1_server_name }} and www.{{ pod_charlesreid1_server_name }}
# should handle the following cases: # should handle the following cases:
# - w/ and wiki/ should reverse proxy stormy_mw # - w/ and wiki/ should reverse proxy stormy_mw
# - gitea subdomain should reverse proxy stormy_gitea # - gitea subdomain should reverse proxy stormy_gitea
@@ -15,30 +15,46 @@
server { server {
listen 443 ssl; listen 443 ssl;
listen [::]:443 ssl; listen [::]:443 ssl;
server_name {{ server_name_default }} default_server; server_name {{ pod_charlesreid1_server_name }};
ssl_certificate /etc/letsencrypt/live/{{ server_name_default }}/fullchain.pem; ssl_certificate /etc/letsencrypt/live/{{ pod_charlesreid1_server_name }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/{{ server_name_default }}/privkey.pem; ssl_certificate_key /etc/letsencrypt/live/{{ pod_charlesreid1_server_name }}/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf; include /etc/letsencrypt/options-ssl-nginx.conf;
include /etc/nginx/conf.d/secheaders.conf; include /etc/nginx/conf.d/secheaders.conf;
include /etc/nginx/conf.d/csp.conf; include /etc/nginx/conf.d/csp.conf;
location / { location / {
try_files $uri $uri/ =404; try_files $uri $uri/ =404;
root /www/{{ server_name_default }}/htdocs; root /www/{{ pod_charlesreid1_server_name }}/htdocs;
index index.html; index index.html;
} }
location = /robots.txt {
alias /var/www/robots/robots.txt;
}
location /wiki/ { location /wiki/ {
# Apply rate limit here.
limit_req zone=gitealimit burst=20 nodelay;
# Limit download rate to 500 KB/s per connection (4 Mbps)
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/wiki/; proxy_pass http://stormy_mw:8989/wiki/;
} }
location /w/ { location /w/ {
# Apply rate limit here.
limit_req zone=gitealimit burst=20 nodelay;
# Limit download rate to 500 KB/s per connection (4 Mbps)
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/w/; proxy_pass http://stormy_mw:8989/w/;
} }
@@ -55,31 +71,43 @@ server {
server { server {
listen 443 ssl; listen 443 ssl;
listen [::]:443 ssl; listen [::]:443 ssl;
server_name www.{{ server_name_default }}; server_name www.{{ pod_charlesreid1_server_name }};
ssl_certificate /etc/letsencrypt/live/www.{{ server_name_default }}/fullchain.pem; ssl_certificate /etc/letsencrypt/live/www.{{ pod_charlesreid1_server_name }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.{{ server_name_default }}/privkey.pem; ssl_certificate_key /etc/letsencrypt/live/www.{{ pod_charlesreid1_server_name }}/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf; include /etc/letsencrypt/options-ssl-nginx.conf;
include /etc/nginx/conf.d/secheaders.conf; include /etc/nginx/conf.d/secheaders.conf;
include /etc/nginx/conf.d/csp.conf; include /etc/nginx/conf.d/csp.conf;
root /www/{{ server_name_default }}/htdocs; root /www/{{ pod_charlesreid1_server_name }}/htdocs;
location / { location / {
try_files $uri $uri/ =404; try_files $uri $uri/ =404;
index index.html; index index.html;
} }
location = /robots.txt {
alias /var/www/robots/robots.txt;
}
location /wiki/ { location /wiki/ {
limit_req zone=gitealimit burst=20 nodelay;
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/wiki/; proxy_pass http://stormy_mw:8989/wiki/;
} }
location /w/ { location /w/ {
# Apply rate limit here.
limit_req zone=gitealimit burst=20 nodelay;
proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_pass http://stormy_mw:8989/w/; proxy_pass http://stormy_mw:8989/w/;
} }
@@ -94,18 +122,29 @@ server {
server { server {
listen 443 ssl; listen 443 ssl;
listen [::]:443 ssl; listen [::]:443 ssl;
server_name git.{{ server_name_default }}; server_name git.{{ pod_charlesreid1_server_name }};
ssl_certificate /etc/letsencrypt/live/git.{{ server_name_default }}/fullchain.pem; ssl_certificate /etc/letsencrypt/live/git.{{ pod_charlesreid1_server_name }}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.{{ server_name_default }}/privkey.pem; ssl_certificate_key /etc/letsencrypt/live/git.{{ pod_charlesreid1_server_name }}/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf; include /etc/letsencrypt/options-ssl-nginx.conf;
include /etc/nginx/conf.d/secheaders.conf; include /etc/nginx/conf.d/secheaders.conf;
include /etc/nginx/conf.d/giteacsp.conf; include /etc/nginx/conf.d/giteacsp.conf;
location / { location / {
# Apply the rate limit here.
# Allows a burst of 20 requests, but anything beyond the max is queued.
limit_req zone=gitealimit burst=20 nodelay;
# Limit download rate to 500 KB/s per connection (4 Mbps)
limit_rate 500k;
proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host; proxy_set_header Host $host;
proxy_pass http://stormy_gitea:3000/; proxy_pass http://stormy_gitea:3000/;
} }
location = /robots.txt {
alias /var/www/robots/gitea.txt;
}
} }


@@ -0,0 +1,37 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
# Gitea rate limiting:
# 5 requests per second rate limit
limit_req_zone $binary_remote_addr zone=gitealimit:10m rate=5r/s;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}


@@ -0,0 +1,16 @@
User-agent: *
Disallow: */commit/*
Disallow: */src/*
Disallow: */tree/*
Disallow: */activity/*
Disallow: */wiki/*
Disallow: */releases/*
Disallow: */pulls/*
Disallow: */stars
Disallow: */watchers
Disallow: */forks
Disallow: *?tab=activity
Disallow: *?tab=stars
Disallow: *?tab=following
Disallow: *?tab=followers
Disallow: *?lang=*


@@ -0,0 +1,2 @@
User-agent: *
Disallow: /w/


@@ -5,7 +5,7 @@ services:
   # https://stackoverflow.com/a/39039830
   stormy_gitea:
-    image: gitea/gitea:latest
+    image: gitea/gitea:1.24.5
     container_name: stormy_gitea
     environment:
       - USER_UID=1000
@@ -13,6 +13,7 @@ services:
     restart: always
     volumes:
       - "stormy_gitea_data:/data"
+      - "./d-nginx-charlesreid1/robots:/var/www/robots:ro"
       - "./d-gitea/custom:/data/gitea"
       - "./d-gitea/data:/app/gitea/data"
       - "/gitea_repositories:/data/git/repositories"
@@ -23,53 +24,91 @@ services:
         max-file: "10"
     ports:
       - "22:22"
+    networks:
+      - frontend
+  stormy_gitea_runner:
+    image: gitea/act_runner:latest
+    container_name: stormy_gitea_runner
+    restart: always
+    volumes:
+      - "stormy_gitea_runner_data:/data"
+      - "/var/run/docker.sock:/var/run/docker.sock"
+      - "./d-gitea/runner/config.yaml:/etc/act_runner/config.yaml:ro"
+    environment:
+      - GITEA_INSTANCE_URL=http://stormy_gitea:3000
+      - GITEA_RUNNER_REGISTRATION_TOKEN={{ pod_charlesreid1_gitea_runner_token }}
+      - GITEA_RUNNER_NAME=stormy-runner
+      - CONFIG_FILE=/etc/act_runner/config.yaml
+    logging:
+      driver: "json-file"
+      options:
+        max-size: 1m
+        max-file: "10"
+    depends_on:
+      - stormy_gitea
+    networks:
+      - frontend
   stormy_mysql:
+    restart: always
     build: d-mysql
     container_name: stormy_mysql
     volumes:
       - "stormy_mysql_data:/var/lib/mysql"
+      - "./d-mysql/conf.d:/etc/mysql/conf.d:ro"
     logging:
       driver: "json-file"
       options:
         max-size: 1m
         max-file: "10"
     environment:
-      - MYSQL_ROOT_PASSWORD={{ mysql_password }}
+      - MYSQL_ROOT_PASSWORD={{ pod_charlesreid1_mysql_password }}
+      - MYSQL_DATABASE=wikidb
+      - MYSQL_USER=wikiuser
+      - MYSQL_PASSWORD={{ pod_charlesreid1_mysql_wikiuser_password }}
+    networks:
+      - backend
   stormy_mw:
+    restart: always
     build: d-mediawiki
     container_name: stormy_mw
     volumes:
-      - "stormy_mw_data:/var/www/html"
+      - "stormy_mw_images:/var/www/html/images"
+      - "./mwf2b:/var/log/mwf2b"
     logging:
       driver: "json-file"
       options:
         max-size: 1m
         max-file: "10"
     environment:
-      - MEDIAWIKI_SITE_SERVER=https://{{ server_name_default }}
-      - MEDIAWIKI_SECRETKEY={{ mediawiki_secretkey }}
+      - MEDIAWIKI_SITE_SERVER=https://{{ pod_charlesreid1_server_name }}
+      - MEDIAWIKI_SECRETKEY={{ pod_charlesreid1_mediawiki_secretkey }}
+      - MEDIAWIKI_UPGRADEKEY={{ pod_charlesreid1_mediawiki_upgradekey }}
      - MYSQL_HOST=stormy_mysql
      - MYSQL_DATABASE=wikidb
-      - MYSQL_USER=root
-      - MYSQL_PASSWORD={{ mysql_password }}
+      - MYSQL_USER=wikiuser
+      - MYSQL_PASSWORD={{ pod_charlesreid1_mysql_wikiuser_password }}
    depends_on:
      - stormy_mysql
+    networks:
+      - frontend
+      - backend
  stormy_nginx:
    restart: always
-    image: nginx
+    image: nginx:1.27.5
    container_name: stormy_nginx
-    hostname: {{ server_name_default }}
-    hostname: charlesreid1.com
+    hostname: {{ pod_charlesreid1_server_name }}
    command: /bin/bash -c "nginx -g 'daemon off;'"
    volumes:
+      - "./d-nginx-charlesreid1/nginx.conf:/etc/nginx/nginx.conf:ro"
      - "./d-nginx-charlesreid1/conf.d:/etc/nginx/conf.d:ro"
+      - "./d-nginx-charlesreid1/robots:/var/www/robots:ro"
      - "/etc/localtime:/etc/localtime:ro"
-      - "/etc/letsencrypt:/etc/letsencrypt"
+      - "/etc/letsencrypt:/etc/letsencrypt:ro"
-      - "/www/{{ server_name_default }}/htdocs:/www/{{ server_name_default }}/htdocs:ro"
+      - "/www/{{ pod_charlesreid1_server_name }}/htdocs:/www/{{ pod_charlesreid1_server_name }}/htdocs:ro"
+      - "stormy_nginx_logs:/var/log/nginx"
    logging:
      driver: "json-file"
      options:
@@ -82,8 +121,19 @@ services:
    ports:
      - "80:80"
      - "443:443"
+    networks:
+      - frontend
+networks:
+  frontend:
+  backend:
volumes:
  stormy_mysql_data:
+  stormy_mw_images:
+    external: true
  stormy_mw_data:
+    external: true
  stormy_gitea_data:
+  stormy_gitea_runner_data:
+  stormy_nginx_logs:

docs/BlockIps.md Normal file

@@ -0,0 +1,9 @@
To block an IP address:
* Modify the nginx config file template at
`d-nginx-charlesreid1/conf.d/https.DOMAIN.conf.j2`
* Re-render the Jinja templates into config files via
`make clean-templates && make templates`
* Stop and restart the pod service:
`sudo systemctl stop pod-charlesreid1 &&
sudo systemctl start pod-charlesreid1`
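When deciding whether an offending address belongs to a larger range worth denying in the nginx template, the stdlib `ipaddress` module is handy (illustrative helper, not part of this repo):

```python
import ipaddress

def in_block(ip, cidr):
    """Return True if `ip` falls inside the CIDR range `cidr`."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

# e.g. decide whether to deny 203.0.113.0/24 instead of a single address
print(in_block("203.0.113.57", "203.0.113.0/24"))  # → True
print(in_block("198.51.100.7", "203.0.113.0/24"))  # → False
```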


@@ -2,31 +2,36 @@
# multiple templates:
# -------------------
-POD_CHARLESREID1_DIR="/path/to/pod-charlesreid1"
-POD_CHARLESREID1_TLD="example.com"
+export POD_CHARLESREID1_DIR="/path/to/pod-charlesreid1"
+export POD_CHARLESREID1_TLD="example.com"
+export POD_CHARLESREID1_USER="nonrootuser"
+export POD_CHARLESREID1_VPN_IP_ADDR="1.2.3.4"
# mediawiki:
# ----------
-POD_CHARLESREID1_MW_ADMIN_EMAIL="email@example.com"
-POD_CHARLESREID1_MW_SECRET_KEY="SecretKeyString"
+export POD_CHARLESREID1_MW_ADMIN_EMAIL="email@example.com"
+export POD_CHARLESREID1_MW_SECRET_KEY="SecretKeyString"
+export POD_CHARLESREID1_MW_UPGRADE_KEY="UpgradeKeyString"
# mysql:
# ------
-POD_CHARLESREID1_MYSQL_PASSWORD="SuperSecretPassword"
+export POD_CHARLESREID1_MYSQL_PASSWORD="SuperSecretPassword"
+export POD_CHARLESREID1_MYSQL_WIKIUSER_PASSWORD="AnotherSecretPassword"
# gitea:
# ------
-POD_CHARLESREID1_GITEA_APP_NAME=""
-POD_CHARLESREID1_GITEA_SECRET_KEY="GiteaSecretKey"
-POD_CHARLESREID1_GITEA_INTERNAL_TOKEN="GiteaInternalToken"
+export POD_CHARLESREID1_GITEA_APP_NAME=""
+export POD_CHARLESREID1_GITEA_SECRET_KEY="GiteaSecretKey"
+export POD_CHARLESREID1_GITEA_INTERNAL_TOKEN="GiteaInternalToken"
# aws:
# ----
-POD_CHARLESREID1_AWS_ACCESS_KEY="AAAAAAAAAAAAAAAAAAAA"
-POD_CHARLESREID1_AWS_ACCESS_SECRET="0000000000000000000000000000000000000000"
+export AWS_ACCESS_KEY_ID="AAAAAAA"
+export AWS_SECRET_ACCESS_KEY="BBBBBBBB"
+export AWS_DEFAULT_REGION="us-west-1"
# backups and scripts:
# --------------------
-POD_CHARLESREID1_USER="charles"
-POD_CHARLESREID1_BACKUP_S3BUCKET="name-of-backups-bucket"
-POD_CHARLESREID1_BACKUPCANARY_WEBHOOKURL="https://hooks.slack.com/services/000000000/AAAAAAAAA/111111111111111111111111"
+export POD_CHARLESREID1_BACKUP_DIR="/path/to"
+export POD_CHARLESREID1_BACKUP_S3BUCKET="name-of-backups-bucket"
+export POD_CHARLESREID1_CANARY_WEBHOOK="https://hooks.slack.com/services/000000000/AAAAAAAAA/111111111111111111111111"

environment.j2 Normal file

@@ -0,0 +1,37 @@
#!/bin/bash
# multiple templates:
# -------------------
export POD_CHARLESREID1_DIR="{{ pod_charlesreid1_pod_install_dir }}"
export POD_CHARLESREID1_TLD="{{ pod_charlesreid1_server_name }}"
export POD_CHARLESREID1_USER="{{ pod_charlesreid1_username }}"
export POD_CHARLESREID1_VPN_IP_ADDR="{{ pod_charlesreid1_vpn_ip_addr }}"
# mediawiki:
# ----------
export POD_CHARLESREID1_MW_ADMIN_EMAIL="{{ pod_charlesreid1_mediawiki_admin_email }}"
export POD_CHARLESREID1_MW_SECRET_KEY="{{ pod_charlesreid1_mediawiki_secretkey }}"
export POD_CHARLESREID1_MW_UPGRADE_KEY="{{ pod_charlesreid1_mediawiki_upgradekey }}"
# mysql:
# ------
export POD_CHARLESREID1_MYSQL_PASSWORD="{{ pod_charlesreid1_mysql_password }}"
export POD_CHARLESREID1_MYSQL_WIKIUSER_PASSWORD="{{ pod_charlesreid1_mysql_wikiuser_password }}"
# gitea:
# ------
export POD_CHARLESREID1_GITEA_APP_NAME="{{ pod_charlesreid1_gitea_app_name }}"
export POD_CHARLESREID1_GITEA_SECRET_KEY="{{ pod_charlesreid1_gitea_secretkey }}"
export POD_CHARLESREID1_GITEA_INTERNAL_TOKEN="{{ pod_charlesreid1_gitea_internaltoken }}"
# aws:
# ----
export AWS_ACCESS_KEY_ID="{{ pod_charlesreid1_backups_aws_access_key }}"
export AWS_SECRET_ACCESS_KEY="{{ pod_charlesreid1_backups_aws_secret_access_key }}"
export AWS_DEFAULT_REGION="{{ pod_charlesreid1_backups_aws_region }}"
# backups and scripts:
# --------------------
export POD_CHARLESREID1_BACKUP_DIR="{{ pod_charlesreid1_backups_dir }}"
export POD_CHARLESREID1_BACKUP_S3BUCKET="{{ pod_charlesreid1_backups_bucket }}"
export POD_CHARLESREID1_CANARY_WEBHOOK="{{ pod_charlesreid1_backups_canary_slack_url }}"
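The systemd services in this repo source this rendered file with `/bin/bash -ac '. environment; ...'` so that every `export` line lands in the script's environment. A rough Python equivalent of that loading step, for tooling that wants the same variables without a shell (illustrative sketch; it handles simple quoting only and does not evaluate shell substitutions):

```python
import os
import shlex

def load_env_file(path):
    """Parse lines like: export KEY="VALUE" into os.environ.

    Comments and blank lines are ignored; values are unquoted via shlex.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("export "):
                line = line[len("export "):]
            key, _, value = line.partition("=")
            os.environ[key] = " ".join(shlex.split(value))
```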


@@ -21,10 +21,12 @@ Cleans all rendered Jinja templates. Does not require environment variables.
This script is destructive! Be careful!
-# Ansible Scripts
+# /www Directory Scripts
-These scripts are used by ansible when setting up a machine
-to run the charlesreid1 docker pod.
+These scripts set up or pull a git repo that is set up to
+have a peculiar directory structure.
+The clone script is used by Ansible when setting up this pod.
## `git_clone_www.py`


@@ -2,36 +2,43 @@ import os
import re
import sys
import glob
+import time
+import subprocess
from jinja2 import Environment, FileSystemLoader, select_autoescape

+"""
+Apply Default Values to all Jinja Templates
+"""

# Should existing files be overwritten
OVERWRITE = True

+# Map of jinja variables to environment variables
+jinja_to_env = {
+    "pod_charlesreid1_pod_install_dir": "POD_CHARLESREID1_DIR",
+    "pod_charlesreid1_server_name": "POD_CHARLESREID1_TLD",
+    "pod_charlesreid1_username": "POD_CHARLESREID1_USER",
+    "pod_charlesreid1_vpn_ip_addr": "POD_CHARLESREID1_VPN_IP_ADDR",
+    "pod_charlesreid1_mediawiki_admin_email": "POD_CHARLESREID1_MW_ADMIN_EMAIL",
+    "pod_charlesreid1_mediawiki_secretkey": "POD_CHARLESREID1_MW_SECRET_KEY",
+    "pod_charlesreid1_mediawiki_upgradekey": "POD_CHARLESREID1_MW_UPGRADE_KEY",
+    "pod_charlesreid1_mysql_password": "POD_CHARLESREID1_MYSQL_PASSWORD",
+    "pod_charlesreid1_mysql_wikiuser_password": "POD_CHARLESREID1_MYSQL_WIKIUSER_PASSWORD",
+    "pod_charlesreid1_gitea_app_name": "POD_CHARLESREID1_GITEA_APP_NAME",
+    "pod_charlesreid1_gitea_secretkey": "POD_CHARLESREID1_GITEA_SECRET_KEY",
+    "pod_charlesreid1_gitea_internaltoken": "POD_CHARLESREID1_GITEA_INTERNAL_TOKEN",
+    "pod_charlesreid1_gitea_runner_token": "POD_CHARLESREID1_GITEA_RUNNER_TOKEN",
+    "pod_charlesreid1_backups_aws_access_key": "AWS_ACCESS_KEY_ID",
+    "pod_charlesreid1_backups_aws_secret_access_key": "AWS_SECRET_ACCESS_KEY",
+    "pod_charlesreid1_backups_aws_region": "AWS_DEFAULT_REGION",
+    "pod_charlesreid1_backups_dir": "POD_CHARLESREID1_BACKUP_DIR",
+    "pod_charlesreid1_backups_bucket": "POD_CHARLESREID1_BACKUP_S3BUCKET",
+    "pod_charlesreid1_backups_canary_slack_url": "POD_CHARLESREID1_CANARY_WEBHOOK",
+}

scripts_dir = os.path.dirname(os.path.abspath(__file__))
repo_root = os.path.abspath(os.path.join(scripts_dir, '..'))

def check_env_vars():
-    env_var_list = [
-        'POD_CHARLESREID1_DIR',
-        'POD_CHARLESREID1_TLD',
-        'POD_CHARLESREID1_USER',
-        'POD_CHARLESREID1_MYSQL_PASSWORD',
-        'POD_CHARLESREID1_MW_ADMIN_EMAIL',
-        'POD_CHARLESREID1_GITEA_APP_NAME',
-        'POD_CHARLESREID1_GITEA_SECRET_KEY',
-        'POD_CHARLESREID1_GITEA_INTERNAL_TOKEN',
-        'POD_CHARLESREID1_BACKUP_S3BUCKET',
-        'POD_CHARLESREID1_AWS_ACCESS_KEY',
-        'POD_CHARLESREID1_AWS_ACCESS_SECRET',
-        'POD_CHARLESREID1_BACKUPCANARY_WEBHOOKURL',
-    ]
+    env_var_list = jinja_to_env.values()
    nerrs = 0
    print("Checking environment variables")
    for env_var in env_var_list:
@@ -48,6 +55,8 @@ def main():
    check_env_vars()

+    ignore_list = ['environment']

    p = os.path.join(repo_root,'**','*.j2')
    template_files = glob.glob(p, recursive=True)
@@ -63,41 +72,35 @@ def main():
        rname = tname[:-3]
        rpath = os.path.join(tdir, rname)

+        if rname in ignore_list:
+            print(f"\nSkipping template on ignore list: {tname}\n")
+            continue

        env = Environment(loader=FileSystemLoader(tdir))
        print(f"Rendering template {tname}:")
        print(f"  Template path: {tpath}")
        print(f"  Output path: {rpath}")

-        #content = env.get_template(tpath).render({
-        content = env.get_template(tname).render({
-            "pod_install_dir": os.environ['POD_CHARLESREID1_DIR'],
-            "top_domain": os.environ['POD_CHARLESREID1_TLD'],
-            "server_name_default" : os.environ['POD_CHARLESREID1_TLD'],
-            "username": os.environ['POD_CHARLESREID1_USER'],
-            # docker-compose:
-            "mysql_password" : os.environ['POD_CHARLESREID1_MYSQL_PASSWORD'],
-            "mediawiki_secretkey" : os.environ['POD_CHARLESREID1_MW_ADMIN_EMAIL'],
-            # mediawiki:
-            "admin_email": os.environ['POD_CHARLESREID1_MW_ADMIN_EMAIL'],
-            # gitea:
-            "gitea_app_name": os.environ['POD_CHARLESREID1_GITEA_APP_NAME'],
-            "gitea_secret_key": os.environ['POD_CHARLESREID1_GITEA_SECRET_KEY'],
-            "gitea_internal_token": os.environ['POD_CHARLESREID1_GITEA_INTERNAL_TOKEN'],
-            # aws:
-            "aws_backup_s3_bucket": os.environ['POD_CHARLESREID1_BACKUP_S3BUCKET'],
-            "aws_access_key": os.environ['POD_CHARLESREID1_AWS_ACCESS_KEY'],
-            "aws_access_secret": os.environ['POD_CHARLESREID1_AWS_ACCESS_SECRET'],
-            "backup_canary_webhook_url": os.environ['POD_CHARLESREID1_BACKUPCANARY_WEBHOOKURL'],
-        })
+        jinja_vars = {}
+        for k, v in jinja_to_env.items():
+            jinja_vars[k] = os.environ[v]
+        content = env.get_template(tname).render(jinja_vars)

        # Write to file
        if os.path.exists(rpath) and not OVERWRITE:
-            raise Exception("Error: file %s already exists!"%(rpath))
+            msg = "\n[!!!] Warning: file %s already exists! Skipping...\n"%(rpath)
+            print(msg)
+            time.sleep(1)
        else:
            with open(rpath,'w') as f:
                f.write(content)
            print(f"  Done!")
            print("")
+        if rpath[-3:] == ".sh":
+            subprocess.call(['chmod', '+x', rpath])

if __name__=="__main__":
    main()
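The `jinja_to_env` mapping drives the whole render step: each Jinja variable is filled from the corresponding environment variable. The core idea can be sketched with the stdlib's `string.Template` standing in for Jinja (the mapping entries below are from the script; `string.Template` uses `${...}` rather than Jinja's `{{ ... }}` syntax):

```python
import os
from string import Template

# Two entries from the script's jinja_to_env mapping, for illustration
jinja_to_env = {
    "pod_charlesreid1_server_name": "POD_CHARLESREID1_TLD",
    "pod_charlesreid1_username": "POD_CHARLESREID1_USER",
}

def render(template_text):
    # Build the variable dict from the environment, mirroring the
    # `jinja_vars[k] = os.environ[v]` loop in apply_templates.py
    jinja_vars = {k: os.environ[v] for k, v in jinja_to_env.items()}
    return Template(template_text).substitute(jinja_vars)

os.environ["POD_CHARLESREID1_TLD"] = "example.com"
os.environ["POD_CHARLESREID1_USER"] = "charles"
print(render("server ${pod_charlesreid1_server_name} user ${pod_charlesreid1_username}"))
# → server example.com user charles
```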


@@ -0,0 +1,28 @@
if ( $programname startswith "pod-charlesreid1-canary" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-canary.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-certbot" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-certbot.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-aws" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-aws.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-cleanolderthan" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-cleanolderthan.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-gitea" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-gitea.service.log" flushOnTXEnd="off")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-wikidb" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-wikidb.service.log" flushOnTXEnd="on")
stop
}
if ( $programname startswith "pod-charlesreid1-backups-wikifiles" ) then {
action(type="omfile" file="/var/log/pod-charlesreid1-backups-wikifiles.service.log" flushOnTXEnd="on")
stop
}
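Each block above routes messages whose `programname` starts with a given prefix into a dedicated log file and then stops further processing. The routing amounts to a first-match prefix table, sketched here (an illustration of the rule logic, not of rsyslog's implementation):

```python
PREFIXES = [
    "pod-charlesreid1-canary",
    "pod-charlesreid1-certbot",
    "pod-charlesreid1-backups-aws",
    "pod-charlesreid1-backups-cleanolderthan",
    "pod-charlesreid1-backups-gitea",
    "pod-charlesreid1-backups-wikidb",
    "pod-charlesreid1-backups-wikifiles",
]

def route(programname):
    """Return the log file for the first matching prefix, or None."""
    for prefix in PREFIXES:
        if programname.startswith(prefix):
            return f"/var/log/{prefix}.service.log"
    return None  # no `stop`: message falls through to the default rules

print(route("pod-charlesreid1-backups-gitea"))
```

Because matching is by `startswith`, more specific prefixes must be listed before any prefix they extend.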

scripts/backups/Readme.md Normal file

@@ -0,0 +1,52 @@
# backup scripts
This directory contains several files for several services:
* Systemd .service file (Jinja template) to define a service that backs up files
* Systemd .timer file (Jinja template) to define a timer that runs the service on a schedule
* Shell script .sh that actually performs the backup operation and is called by the .service file
Use `make templates` in the top level of this repo to render
the Jinja templates using the environment variables in the
environment file. This bakes the script locations into the
rendered systemd service files.
Use `make install` in the top level of this repo to install
the rendered service and timer files.
## syslog filtering
Due to a bug in systemd bundled with Ubuntu 18.04, we can't just use the nice easy solution of
directing output and error to a specific file.
Instead, the services all send their stderr and stdout to the system log, and then rsyslog
filters those messages and collects them into a separate log file.
First, install the services.
Then, install the following rsyslog config file:
`/etc/rsyslog.d/10-pod-charlesreid1-rsyslog.conf`:
```
if $programname == 'pod-charlesreid1-canary' then /var/log/pod-charlesreid1-canary.service.log
if $programname == 'pod-charlesreid1-canary' then stop
if $programname == 'pod-charlesreid1-backups-aws' then /var/log/pod-charlesreid1-backups-aws.service.log
if $programname == 'pod-charlesreid1-backups-aws' then stop
if $programname == 'pod-charlesreid1-backups-cleanolderthan' then /var/log/pod-charlesreid1-backups-cleanolderthan.service.log
if $programname == 'pod-charlesreid1-backups-cleanolderthan' then stop
if $programname == 'pod-charlesreid1-backups-gitea' then /var/log/pod-charlesreid1-backups-gitea.service.log
if $programname == 'pod-charlesreid1-backups-gitea' then stop
if $programname == 'pod-charlesreid1-backups-wikidb' then /var/log/pod-charlesreid1-backups-wikidb.service.log
if $programname == 'pod-charlesreid1-backups-wikidb' then stop
if $programname == 'pod-charlesreid1-backups-wikifiles' then /var/log/pod-charlesreid1-backups-wikifiles.service.log
if $programname == 'pod-charlesreid1-backups-wikifiles' then stop
```

scripts/backups/aws_backup.sh Executable file

@@ -0,0 +1,60 @@
#!/usr/bin/env bash
#
# Find the last backup created, and copy it
# to an S3 bucket.
set -eux
function usage {
set +x
echo ""
echo "aws_backup.sh script:"
echo ""
echo "Find the last backup that was created,"
echo "and copy it to the backups bucket."
echo ""
echo " ./aws_backup.sh"
echo ""
exit 1;
}
if [ "$(id -u)" == "0" ]; then
echo ""
echo ""
echo "This script should NOT be run as root!"
echo ""
echo ""
exit 1;
fi
if [ "$#" == "0" ]; then
echo ""
echo "pod-charlesreid1: aws_backup.sh"
echo "-----------------------------------"
echo ""
echo "Backup directory: ${POD_CHARLESREID1_BACKUP_DIR}"
echo "Backup bucket: ${POD_CHARLESREID1_BACKUP_S3BUCKET}"
echo ""
echo "Checking that directory exists"
/usr/bin/test -d "${POD_CHARLESREID1_BACKUP_DIR}"
echo "Checking that we can access the S3 bucket"
aws s3 ls "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}" > /dev/null
# Get name of last backup, to copy to AWS
LAST_BACKUP=$(/bin/ls -1 -t "${POD_CHARLESREID1_BACKUP_DIR}" | /usr/bin/head -n1)
echo "Last backup found: ${LAST_BACKUP}"
echo "Last backup directory: ${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}"
BACKUP_SIZE=$(/usr/bin/du -hs "${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}" | cut -f 1)
echo "Backup directory size: ${BACKUP_SIZE}"
# Copy to AWS
echo "Backing up directory ${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}"
aws s3 cp --only-show-errors --no-progress --recursive "${POD_CHARLESREID1_BACKUP_DIR}/${LAST_BACKUP}" "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}/backups/${LAST_BACKUP}"
echo "Done."
else
usage
fi
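The `ls -1 -t | head -n1` step selects the most recently modified entry in the backup directory. The same selection in Python, for reference (a hypothetical helper, not code from this repo):

```python
import os

def newest_backup(backup_dir):
    """Return the most recently modified entry in backup_dir,
    equivalent to `ls -1 -t backup_dir | head -n1`."""
    entries = os.listdir(backup_dir)
    if not entries:
        return None
    return max(entries,
               key=lambda name: os.path.getmtime(os.path.join(backup_dir, name)))
```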


@@ -0,0 +1,124 @@
import os
import sys
import json
import requests
import boto3
import botocore
import subprocess

webhook_url = os.environ['POD_CHARLESREID1_CANARY_WEBHOOK']
backup_dir = os.environ['POD_CHARLESREID1_BACKUP_DIR']
backup_bucket = os.environ['POD_CHARLESREID1_BACKUP_S3BUCKET']

# Check for backups created in the last N days
N = 7

def main():
    # verify the backups directory exists
    if not os.path.exists(backup_dir):
        msg = "Local Backups Error:\n"
        msg += f"The backup directory `{backup_dir}` does not exist!"
        alert(msg)

    # verify there is a backup newer than N days
    newer_backups = subprocess.getoutput(f'find {backup_dir}/* -mtime -{N}').split('\n')
    if len(newer_backups)==1 and newer_backups[0]=='':
        msg = "Local Backups Error:\n"
        msg += f"The backup directory `{backup_dir}` is missing backup files from the last {N} day(s)!"
        alert(msg)

    newest_backup_name = subprocess.getoutput(f'ls -t {backup_dir} | head -n1')
    newest_backup_path = os.path.join(backup_dir, newest_backup_name)
    newest_backup_files = subprocess.getoutput(f'find {newest_backup_path} -type f').split('\n')

    # verify the most recent backup directory is not empty
    if len(newest_backup_files)==1 and newest_backup_files[0]=='':
        msg = "Local Backups Error:\n"
        msg += f"The most recent backup directory `{newest_backup_path}` is empty!"
        alert(msg)

    # verify the most recent backup files have nonzero size
    for backup_file in newest_backup_files:
        if os.path.getsize(backup_file)==0:
            msg = "Local Backups Error:\n"
            msg += f"The most recent backup directory `{newest_backup_path}` contains an empty backup file!\n"
            msg += f"Backup file name: {backup_file}!"
            alert(msg)

    # verify .sql dumps end with the mysqldump completion trailer.
    # A non-empty file can still be truncated mid-row (e.g. PTY deadlock,
    # net_write_timeout) — without this check, a 439 MB partial dump looks
    # healthy to a size-only canary.
    for backup_file in newest_backup_files:
        if not backup_file.endswith('.sql'):
            continue
        with open(backup_file, 'rb') as f:
            f.seek(0, os.SEEK_END)
            f.seek(max(0, f.tell() - 512))
            tail = f.read()
        if b'Dump completed on' not in tail:
            msg = "Local Backups Error:\n"
            msg += f"SQL backup file `{backup_file}` is missing the "
            msg += "`-- Dump completed on ...` trailer.\n"
            msg += "mysqldump did not finish — the dump is truncated and not restorable."
            alert(msg)

    # verify the most recent backup files exist in the s3 backups bucket
    bucket_base_path = os.path.join('backups', newest_backup_name)
    for backup_file in newest_backup_files:
        backup_name = os.path.basename(backup_file)
        backup_bucket_path = os.path.join(bucket_base_path, backup_name)
        check_exists(backup_bucket, backup_bucket_path)

def check_exists(bucket_name, bucket_path):
    s3 = boto3.resource('s3')
    try:
        s3.Object(bucket_name, bucket_path).load()
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "404":
            # File does not exist
            msg = "S3 Backups Error:\n"
            msg += f"Failed to find the file `{bucket_path}` in bucket `{bucket_name}`"
            alert(msg)
        else:
            # Problem accessing backups on bucket
            msg = "S3 Backups Error:\n"
            msg += f"Failed to access the file `{bucket_path}` in bucket `{bucket_name}`"
            alert(msg)

def alert(msg):
    title = ":bangbang: pod-charlesreid1 backups canary"
    hostname = subprocess.getoutput('hostname')
    msg += f"\n\nHost: {hostname}"
    slack_data = {
        "username": "backups_canary",
        "channel" : "#alerts",
        "attachments": [
            {
                "color": "#CC0000",
                "fields": [
                    {
                        "title": title,
                        "value": msg,
                        "short": "false",
                    }
                ]
            }
        ]
    }
    # Content-Length must be the byte length of the JSON body;
    # sys.getsizeof(dict) reports Python object overhead, not payload size.
    payload = json.dumps(slack_data)
    headers = {'Content-Type': "application/json", 'Content-Length': str(len(payload))}
    response = requests.post(webhook_url, data=payload, headers=headers)
    if response.status_code != 200:
        raise Exception(response.status_code, response.text)
    print("Goodbye.")
    sys.exit(0)

if __name__ == '__main__':
    main()
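The mysqldump trailer check is the part of the canary most worth testing in isolation: a dump is trusted only if its last bytes contain the `-- Dump completed on ...` line. The same logic, extracted into a standalone helper for illustration:

```python
import os

def dump_is_complete(sql_path, tail_bytes=512):
    """True if the file ends with mysqldump's completion trailer."""
    with open(sql_path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        f.seek(max(0, f.tell() - tail_bytes))
        return b'Dump completed on' in f.read()
```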


@@ -0,0 +1,14 @@
[Unit]
Description=Backup canary service for pod-charlesreid1
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-canary
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; /home/charles/.pyenv/shims/python3 {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/canary/backups_canary.py'
User=charles
Group=charles


@@ -0,0 +1,8 @@
[Unit]
Description=Timer to run the pod-charlesreid1 backups canary
[Timer]
OnCalendar=*-*-* 7:01:00
[Install]
WantedBy=timers.target


@@ -0,0 +1,3 @@
boto3
botocore
requests


@@ -2,13 +2,24 @@
#
# Clean any files older than N days
# from the backup directory.
-set -eu
+set -eux
# Number of days of backups to retain.
# Everything older than this many days will be deleted
-N="30"
+N="22"
-BACKUP_DIR="$HOME/backups"
+function usage {
+    set +x
+    echo ""
+    echo "clean_olderthan.sh script:"
+    echo ""
+    echo "Clean files older than ${N} days from the"
+    echo "backups directory, ~/backups"
+    echo ""
+    echo "    ./clean_olderthan.sh"
+    echo ""
+    exit 1;
+}
if [ "$(id -u)" == "0" ]; then
    echo ""
@@ -21,8 +32,21 @@ fi
if [ "$#" == "0" ]; then
-    echo "Cleaning backups directory $BACKUP_DIR"
-    echo "Files older than $N days will be deleted"
-    find $BACKUP_DIR -mtime +${N} -delete
+    echo ""
+    echo "pod-charlesreid1: clean_olderthan.sh"
+    echo "------------------------------------"
+    echo ""
+    echo "Backup directory: ${POD_CHARLESREID1_BACKUP_DIR}"
+    echo ""
+    echo "Cleaning backups directory $POD_CHARLESREID1_BACKUP_DIR"
+    echo "The following files older than $N days will be deleted:"
+    find ${POD_CHARLESREID1_BACKUP_DIR} -mtime +${N}
+    echo "Deleting files"
+    find ${POD_CHARLESREID1_BACKUP_DIR} -mtime +${N} -delete
+    echo "Done"
+else
+    usage
fi
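The retention rule is `find -mtime +N -delete`: anything last modified more than N days ago is pruned. An equivalent sketch in Python (a hypothetical helper mirroring the script's behavior for files):

```python
import os
import time

def prune_older_than(backup_dir, days):
    """Delete files under backup_dir last modified more than `days` days ago.

    Returns the list of removed paths.
    """
    cutoff = time.time() - days * 86400
    removed = []
    for root, _, files in os.walk(backup_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```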

scripts/backups/gitea_backup.sh Executable file

@@ -0,0 +1,64 @@
#!/bin/bash
#
# Back up the Gitea custom/ and data/ directories.
# These are needed to restore the site.
# (Repository data is also needed, but it is not backed
# up by this script; it lives on a separate drive.)
set -eux
CONTAINER_NAME="stormy_gitea"
STAMP="`date +"%Y%m%d"`"
function usage {
set +x
echo ""
echo "gitea_backup.sh script:"
echo ""
echo "Create a tar file containing gitea"
echo "custom/ and data/ directories."
echo ""
echo " ./gitea_backup.sh"
echo ""
exit 1;
}
if [ "$(id -u)" == "0" ]; then
echo ""
echo ""
echo "This script should NOT be run as root!"
echo ""
echo ""
exit 1;
fi
if [ "$#" == "0" ]; then
CUSTOM_NAME="gitea_custom_${STAMP}.tar.gz"
DATA_NAME="gitea_data_${STAMP}.tar.gz"
CUSTOM_TARGET="${POD_CHARLESREID1_BACKUP_DIR}/${STAMP}/${CUSTOM_NAME}"
DATA_TARGET="${POD_CHARLESREID1_BACKUP_DIR}/${STAMP}/${DATA_NAME}"
echo ""
echo "pod-charlesreid1: gitea_backup.sh"
echo "-----------------------------------"
echo ""
echo "Backup target: custom: ${CUSTOM_TARGET}"
echo "Backup target: data: ${DATA_TARGET}"
echo ""
mkdir -p ${POD_CHARLESREID1_BACKUP_DIR}/${STAMP}
# We don't need to use docker, since these directories
# are both bind-mounted into the Docker container
echo "Backing up custom directory"
tar --exclude='gitea.log' --ignore-failed-read -czf ${CUSTOM_TARGET} ${POD_CHARLESREID1_DIR}/d-gitea/custom
echo "Backing up data directory"
tar czf ${DATA_TARGET} ${POD_CHARLESREID1_DIR}/d-gitea/data
echo "Done."
else
usage
fi
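The `--exclude='gitea.log'` flag keeps the rotating log file out of the archive. The same effect with Python's `tarfile`, using the `filter` callback on `TarFile.add` (an illustrative sketch, not part of this repo):

```python
import tarfile

def make_archive(out_path, src_dir, exclude_names=("gitea.log",)):
    """Create a gzipped tar of src_dir, skipping members whose
    basename is in exclude_names (like tar --exclude)."""
    def _filter(member):
        if member.name.split("/")[-1] in exclude_names:
            return None  # returning None drops the member from the archive
        return member
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, filter=_filter)
```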


@@ -1,86 +0,0 @@
#!/bin/bash
#
# Run the gitea dump command and send the dump file
# to the specified backup directory.
#
# Backup directory:
# /home/user/backups/gitea
BACKUP_DIR="$HOME/backups/gitea"
CONTAINER_NAME="stormy_gitea"
function usage {
set +x
echo ""
echo "gitea_dump.sh script:"
echo ""
echo "Run the gitea dump command inside the gitea docker container,"
echo "and copy the resulting zip file to the specified directory."
echo "The resulting gitea dump zip file will be timestamped."
echo ""
echo " ./gitea_dump.sh"
echo ""
echo "Example:"
echo ""
echo " ./gitea_dump.sh"
echo " (creates ${BACKUP_DIR}/gitea-dump_20200101_000000.zip)"
echo ""
exit 1;
}
if [ "$(id -u)" == "0" ]; then
echo ""
echo ""
echo "This script should NOT be run as root!"
echo ""
echo ""
exit 1;
fi
if [ "$#" == "0" ]; then
STAMP="`date +"%Y-%m-%d"`"
TARGET="gitea-dump_${STAMP}.zip"
echo ""
echo "pod-charlesreid1: gitea_dump.sh"
echo "-------------------------------"
echo ""
echo "Backup target: ${BACKUP_DIR}/${TARGET}"
echo ""
mkdir -p $BACKUP_DIR
## If this script is being run from a cron job,
## don't use -i flag with docker
#CRON="$( pstree -s $$ | /bin/grep -c cron )"
#DOCKER="/usr/local/bin/docker"
#DOCKERX=""
#if [[ "$CRON" -eq 1 ]];
#then
# DOCKERX="${DOCKER} exec -t"
#else
# DOCKERX="${DOCKER} exec -it"
#fi
DOCKER="/usr/local/bin/docker"
DOCKERX="${DOCKER} exec -t"
echo "Step 1: Run gitea dump command inside docker machine"
set -x
${DOCKERX} --user git ${CONTAINER_NAME} /bin/bash -c 'cd /app/gitea && /app/gitea/gitea dump --file gitea-dump.zip --skip-repository'
set +x
echo "Step 2: Copy gitea dump file out of docker machine"
set -x
${DOCKER} cp ${CONTAINER_NAME}:/app/gitea/gitea-dump.zip ${BACKUP_DIR}/${TARGET}
set +x
echo "Step 3: Clean up gitea dump file"
set -x
${DOCKERX} ${CONTAINER_NAME} /bin/bash -c "rm -f /app/gitea/gitea-dump.zip"
set +x
echo "Done."
else
usage
fi


@@ -0,0 +1,14 @@
[Unit]
Description=Copy the latest pod-charlesreid1 backup to an S3 bucket
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-backups-aws
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/aws_backup.sh'
User=charles
Group=charles


@@ -0,0 +1,9 @@
[Unit]
Description=Timer to copy the latest pod-charlesreid1 backup to an S3 bucket
[Timer]
OnCalendar=Sun *-*-* 2:56:00
#OnCalendar=*-*-* 2:56:00
[Install]
WantedBy=timers.target


@@ -0,0 +1,14 @@
[Unit]
Description=Clean pod-charlesreid1 backups older than N days
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-backups-cleanolderthan
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/clean_olderthan.sh'
User=charles
Group=charles


@@ -0,0 +1,9 @@
[Unit]
Description=Timer to clean files older than N days from the pod-charlesreid1 backups dir
[Timer]
OnCalendar=Sun *-*-* 2:28:00
#OnCalendar=*-*-* 2:28:00
[Install]
WantedBy=timers.target


@@ -5,8 +5,10 @@ After=docker.service
[Service]
Type=oneshot
-StandardError={{ pod_install_dir }}/.pod-charlesreid1-backups-gitea.service.error.log
-StandardOutput={{ pod_install_dir }}/.pod-charlesreid1-backups-gitea.service.output.log
-ExecStart={{ pod_install_dir }}/scripts/backups/gitea_dump.sh
+StandardError=syslog
+StandardOutput=syslog
+SyslogIdentifier=pod-charlesreid1-backups-gitea
+ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
+ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/gitea_backup.sh'
User=charles
Group=charles


@@ -2,8 +2,8 @@
Description=Timer to back up pod-charlesreid1 gitea files
[Timer]
-OnCalendar=*-*-* 0/2:23:00
+OnCalendar=Sun *-*-* 2:12:00
+#OnCalendar=*-*-* 2:12:00
[Install]
WantedBy=timers.target


@@ -5,9 +5,10 @@ After=docker.service
[Service]
Type=oneshot
-StandardError={{ pod_install_dir }}/.pod-charlesreid1-backups-wikidb.service.error.log
-StandardOutput={{ pod_install_dir }}/.pod-charlesreid1-backups-wikidb.service.output.log
-ExecStart={{ pod_install_dir }}/scripts/backups/wikidb_dump.sh
+StandardError=syslog
+StandardOutput=syslog
+SyslogIdentifier=pod-charlesreid1-backups-wikidb
+ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
+ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/wikidb_dump.sh'
User=charles
Group=charles


@@ -2,7 +2,7 @@
 Description=Timer to back up the pod-charlesreid1 wiki database
 [Timer]
-OnCalendar=*-*-* 0/2:03:00
+OnCalendar=Sun *-*-* 2:02:00
 [Install]
 WantedBy=timers.target


@@ -1,12 +1,14 @@
 [Unit]
-Description=Back up the pod-charlesreid1 wiki database
+Description=Back up pod-charlesreid1 wiki files
 Requires=docker.service
 After=docker.service
 [Service]
 Type=oneshot
-StandardError={{ pod_install_dir }}/.pod-charlesreid1-backups-wikifiles.service.error.log
-StandardOutput={{ pod_install_dir }}/.pod-charlesreid1-backups-wikifiles.service.output.log
-ExecStart={{ pod_install_dir }}/scripts/backups/wikifiles_dump.sh
+StandardError=syslog
+StandardOutput=syslog
+SyslogIdentifier=pod-charlesreid1-backups-wikifiles
+ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
+ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/backups/wikifiles_dump.sh'
 User=charles
 Group=charles


@@ -1,9 +1,8 @@
 [Unit]
-Description=Timer to back up the pod-charlesreid1 wiki database
+Description=Timer to back up pod-charlesreid1 wiki files
 [Timer]
-OnCalendar=*-*-* 0/2:13:00
+OnCalendar=Sun *-*-* 2:08:00
 [Install]
 WantedBy=timers.target


@@ -2,13 +2,11 @@
 #
 # Run the mysql dump command to back up wikidb table, and send the
 # resulting SQL file to the specified backup directory.
-#
-# Backup directory:
-# /home/user/backups/mysql
-BACKUP_DIR="$HOME/backups"
+set -eux
 CONTAINER_NAME="stormy_mysql"
-STAMP="`date +"%Y%m%d"`"
+DATESTAMP="`date +"%Y%m%d"`"
+TIMESTAMP="`date +"%Y%m%d_%H%M%S"`"
 function usage {
 set +x
@@ -23,7 +21,7 @@ function usage {
 echo "Example:"
 echo ""
 echo " ./wikidb_dump.sh"
-echo " (creates ${BACKUP_DIR}/20200101/wikidb_20200101.sql)"
+echo " (creates ${POD_CHARLESREID1_BACKUP_DIR}/YYYYMMDD/wikidb_YYYYMMDD_HHMMSS.sql)"
 echo ""
 exit 1;
 }
@@ -39,36 +37,64 @@ fi
 if [ "$#" == "0" ]; then
-TARGET="wikidb_${STAMP}.sql"
-BACKUP_TARGET="${BACKUP_DIR}/${STAMP}/${TARGET}"
+TARGET="wikidb_${TIMESTAMP}.sql"
+BACKUP_DIR="${POD_CHARLESREID1_BACKUP_DIR}/${DATESTAMP}"
+BACKUP_TARGET="${BACKUP_DIR}/${TARGET}"
 echo ""
 echo "pod-charlesreid1: wikidb_dump.sh"
 echo "--------------------------------"
 echo ""
+echo "Backup directory: ${BACKUP_DIR}"
 echo "Backup target: ${BACKUP_TARGET}"
 echo ""
-mkdir -p ${BACKUP_DIR}/${STAMP}
-# If this script is being run from a cron job,
-# don't use -i flag with docker
-CRON="$( pstree -s $$ | /bin/grep -c cron )"
-DOCKER=$(which docker)
-DOCKERX=""
-if [[ "$CRON" -eq 1 ]];
-then
-DOCKERX="${DOCKER} exec -t"
-else
-DOCKERX="${DOCKER} exec -it"
-fi
-echo "Running mysqldump"
-set -x
-${DOCKERX} ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > ${BACKUP_TARGET}
-set +x
+mkdir -p "${BACKUP_DIR}"
+echo "Running mysqldump inside the mysql container"
+# Pull the root password out of the container so we don't duplicate the
+# secret on the host, and forward it in via MYSQL_PWD (which mysqldump
+# reads automatically). No -t: a PTY corrupts --default-character-set=binary
+# output (LF→CRLF translation on binary blobs) and its small kernel buffer
+# can deadlock on large dumps.
+set +x
+MYSQL_PWD="$(docker exec "${CONTAINER_NAME}" printenv MYSQL_ROOT_PASSWORD)"
+export MYSQL_PWD
+set -x
+docker exec -i \
+    -e MYSQL_PWD \
+    "${CONTAINER_NAME}" \
+    sh -c 'exec mysqldump \
+        --user=root \
+        --single-transaction \
+        --quick \
+        --routines \
+        --triggers \
+        --events \
+        --default-character-set=binary \
+        --databases wikidb' \
+    > "${BACKUP_TARGET}"
+unset MYSQL_PWD
+# A complete mysqldump always ends with "-- Dump completed on ...".
+# Missing trailer means the dump is truncated and not restorable.
+if ! tail -c 200 "${BACKUP_TARGET}" | grep -q 'Dump completed on'; then
+    echo "ERROR: dump file ${BACKUP_TARGET} is missing the completion trailer." >&2
+    echo "       mysqldump did not finish successfully." >&2
+    exit 2
+fi
+size=$(stat -c %s "${BACKUP_TARGET}")
+if [ "${size}" -lt $((50 * 1024 * 1024)) ]; then
+    echo "ERROR: dump file ${BACKUP_TARGET} is only ${size} bytes; suspicious." >&2
+    exit 3
+fi
+echo "Dump OK: ${BACKUP_TARGET} (${size} bytes)"
+echo "Done."
 else
 usage
 fi
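The completion-trailer check in the dump script above can be factored into a standalone helper. This is a minimal sketch of the same check (the function name `dump_is_complete` is not from the repo, just illustrative): a complete mysqldump ends with a "-- Dump completed on ..." comment line, so its absence in the last few bytes means the dump was cut off mid-write.

```shell
#!/bin/bash
# Sketch of the trailer check used in wikidb_dump.sh: a complete mysqldump
# ends with a "-- Dump completed on ..." comment, so a missing trailer means
# the dump was truncated and is not safe to restore from.
set -eu

dump_is_complete() {
    # $1: path to a .sql dump; returns 0 only if the trailer is present.
    tail -c 200 "$1" | grep -q 'Dump completed on'
}
```

Usage, e.g. in a canary script: `dump_is_complete "${BACKUP_TARGET}" || exit 2`.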


@@ -0,0 +1,110 @@
#!/bin/bash
#
# Restore a wikidb dump into a throwaway MySQL 5.7 container and run sanity
# queries against it. Compares row counts to live stormy_mysql. Exits non-zero
# on any failure.
#
# Usage:
# ./wikidb_restore_test.sh <path-to-dump.sql>
#
# A backup is only a backup if you have actually restored from it.
set -euo pipefail
DUMP="${1:-}"
if [ -z "${DUMP}" ] || [ ! -f "${DUMP}" ]; then
echo "Usage: $0 <path-to-wikidb-dump.sql>" >&2
exit 1
fi
LIVE_CONTAINER="stormy_mysql"
TEST_CONTAINER="wikidb_restore_test_$$"
TEST_PW="temp_restore_test_pw_$$"
IMAGE="mysql:5.7"
cleanup() {
docker stop "${TEST_CONTAINER}" >/dev/null 2>&1 || true
}
trap cleanup EXIT
echo "[1/5] Starting throwaway MySQL container ${TEST_CONTAINER}..."
docker run -d --rm \
--name "${TEST_CONTAINER}" \
-e MYSQL_ROOT_PASSWORD="${TEST_PW}" \
"${IMAGE}" >/dev/null
echo "[2/5] Waiting for MySQL to accept authenticated connections..."
# `mysqladmin ping` returns OK before the root user is actually set up, so we
# have to probe with a real authenticated query and accept only success.
ready=0
for i in $(seq 1 60); do
if docker exec -e MYSQL_PWD="${TEST_PW}" "${TEST_CONTAINER}" \
mysql -uroot -e 'SELECT 1' >/dev/null 2>&1; then
ready=1
break
fi
sleep 2
done
if [ "${ready}" -ne 1 ]; then
echo "ERROR: MySQL in ${TEST_CONTAINER} never became ready." >&2
docker logs "${TEST_CONTAINER}" 2>&1 | tail -20 >&2
exit 4
fi
echo "[3/5] Piping dump into throwaway MySQL..."
docker exec -i -e MYSQL_PWD="${TEST_PW}" "${TEST_CONTAINER}" \
mysql -uroot < "${DUMP}"
echo "[4/5] Querying restored DB..."
restored=$(docker exec -e MYSQL_PWD="${TEST_PW}" "${TEST_CONTAINER}" \
mysql -uroot -N -B -e "
USE wikidb;
SELECT COUNT(*) FROM page;
SELECT COUNT(*) FROM revision;
SELECT COUNT(*) FROM text;
SELECT COALESCE(MAX(rev_timestamp), 'none') FROM revision;
")
echo "--- restored ---"
echo "${restored}"
echo "[5/5] Querying live ${LIVE_CONTAINER}..."
LIVE_PW="$(docker exec "${LIVE_CONTAINER}" printenv MYSQL_ROOT_PASSWORD)"
live=$(docker exec -e MYSQL_PWD="${LIVE_PW}" "${LIVE_CONTAINER}" \
mysql -uroot -N -B -e "
USE wikidb;
SELECT COUNT(*) FROM page;
SELECT COUNT(*) FROM revision;
SELECT COUNT(*) FROM text;
SELECT COALESCE(MAX(rev_timestamp), 'none') FROM revision;
")
echo "--- live ---"
echo "${live}"
r_page=$(echo "${restored}" | sed -n '1p')
r_rev=$(echo "${restored}" | sed -n '2p')
r_text=$(echo "${restored}" | sed -n '3p')
l_page=$(echo "${live}" | sed -n '1p')
l_rev=$(echo "${live}" | sed -n '2p')
l_text=$(echo "${live}" | sed -n '3p')
fail=0
for kind in page rev text; do
r_var="r_${kind}"
l_var="l_${kind}"
r="${!r_var}"
l="${!l_var}"
if [ "${r}" != "${l}" ]; then
echo "MISMATCH: ${kind} count restored=${r} live=${l}" >&2
fail=1
else
echo "OK: ${kind} count = ${r}"
fi
done
if [ "${fail}" -ne 0 ]; then
echo "RESTORE TEST FAILED." >&2
exit 5
fi
echo "RESTORE TEST PASSED."
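The comparison step in the restore test above leans on bash indirect expansion (`${!var}`) to loop over the per-table count variables. A reduced, self-contained sketch of that pattern (the `compare_counts` wrapper is illustrative, not a function in the repo):

```shell
#!/bin/bash
# Reduced sketch of wikidb_restore_test.sh's compare step: pick row counts out
# of newline-separated query output with sed -n 'Np', then compare restored vs
# live values via bash indirect expansion (${!var}).
set -eu

compare_counts() {
    # $1: restored query output, $2: live query output (one count per line)
    local restored="$1" live="$2" fail=0
    local r_page r_rev r_text l_page l_rev l_text kind
    r_page=$(echo "${restored}" | sed -n '1p')
    r_rev=$(echo "${restored}" | sed -n '2p')
    r_text=$(echo "${restored}" | sed -n '3p')
    l_page=$(echo "${live}" | sed -n '1p')
    l_rev=$(echo "${live}" | sed -n '2p')
    l_text=$(echo "${live}" | sed -n '3p')
    for kind in page rev text; do
        local r_var="r_${kind}" l_var="l_${kind}"
        if [ "${!r_var}" != "${!l_var}" ]; then
            echo "MISMATCH: ${kind} restored=${!r_var} live=${!l_var}" >&2
            fail=1
        fi
    done
    return "${fail}"
}
```

The indirect expansion avoids writing three near-identical `if` blocks, at the cost of being bash-only.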


@@ -2,13 +2,11 @@
 #
 # Create a tar file containing wiki files
 # from the mediawiki docker container.
-#
-# Backup directory:
-# /home/user/backups/mediawiki
-BACKUP_DIR="$HOME/backups"
+set -eux
 CONTAINER_NAME="stormy_mw"
-STAMP="`date +"%Y%m%d"`"
+DATESTAMP="`date +"%Y%m%d"`"
+TIMESTAMP="`date +"%Y%m%d_%H%M%S"`"
 function usage {
 set +x
@@ -23,7 +21,7 @@ function usage {
 echo "Example:"
 echo ""
 echo " ./wikifiles_dump.sh"
-echo " (creates ${BACKUP_DIR}/20200101/wikifiles_20200101.tar.gz)"
+echo " (creates ${POD_CHARLESREID1_BACKUP_DIR}/YYYYMMDD/wikifiles_YYYYMMDD_HHMMSS.tar.gz)"
 echo ""
 exit 1;
 }
@@ -39,48 +37,36 @@ fi
 if [ "$#" == "0" ]; then
-TARGET="wikifiles_${STAMP}.tar.gz"
-BACKUP_TARGET="${BACKUP_DIR}/${STAMP}/${TARGET}"
+TARGET="wikifiles_${TIMESTAMP}.tar.gz"
+BACKUP_DIR="${POD_CHARLESREID1_BACKUP_DIR}/${DATESTAMP}"
+BACKUP_TARGET="${BACKUP_DIR}/${TARGET}"
 echo ""
 echo "pod-charlesreid1: wikifiles_dump.sh"
 echo "-----------------------------------"
 echo ""
+echo "Backup directory: ${BACKUP_DIR}"
 echo "Backup target: ${BACKUP_TARGET}"
 echo ""
-mkdir -p ${BACKUP_DIR}/${STAMP}
-# If this script is being run from a cron job,
-# don't use -i flag with docker
-CRON="$( pstree -s $$ | /bin/grep -c cron )"
+mkdir -p ${BACKUP_DIR}
 DOCKER=$(which docker)
-DOCKERX=""
-if [[ "$CRON" -eq 1 ]];
-then
 DOCKERX="${DOCKER} exec -t"
-else
-DOCKERX="${DOCKER} exec -it"
-fi
 echo "Step 1: Compress wiki files inside container"
-set -x
 ${DOCKERX} ${CONTAINER_NAME} /bin/tar czf /tmp/${TARGET} /var/www/html/images
-set +x
 echo "Step 2: Copy tar.gz file out of container"
-mkdir -p $(dirname "$1")
-set -x
+mkdir -p $(dirname "${BACKUP_TARGET}")
 ${DOCKER} cp ${CONTAINER_NAME}:/tmp/${TARGET} ${BACKUP_TARGET}
-set +x
 echo "Step 3: Clean up tar.gz file"
-set -x
 ${DOCKERX} ${CONTAINER_NAME} /bin/rm -f /tmp/${TARGET}
-set +x
+echo "Successfully wrote wikifiles dump to file: ${BACKUP_TARGET}"
 echo "Done."
 else
 usage
 fi


@@ -0,0 +1,47 @@
#!/bin/bash
#
# Restore wiki files from a tar file
# into the stormy_mw container.
set -eu
function usage {
echo ""
echo "restore_wikifiles.sh script:"
echo "Restore wiki files from a tar file"
echo "into the stormy_mw container"
echo ""
echo " ./restore_wikifiles.sh <tar-file>"
echo ""
echo "Example:"
echo ""
echo " ./restore_wikifiles.sh /path/to/wikifiles.tar.gz"
echo ""
echo ""
exit 1;
}
# NOTE:
# I assume images/ is the only directory to back up/restore.
# If there are more I forgot, add them back in here.
# (skins and extensions are static, added into image at build time.)
if [[ "$#" -eq 1 ]];
then
NAME="stormy_mw"
TAR=$(basename "$1")
echo "Checking that container ${NAME} exists"
docker ps --format '{{.Names}}' | grep ${NAME} || exit 1;
echo "Copying dir $1 into container ${NAME}"
set -x
docker cp $1 ${NAME}:/tmp/${TAR}
docker exec -it ${NAME} rm -rf /var/www/html/images.old
docker exec -it ${NAME} mv /var/www/html/images /var/www/html/images.old
docker exec -it ${NAME} tar -xf /tmp/${TAR} -C / && rm -f /tmp/${TAR}
docker exec -it ${NAME} chown -R www-data:www-data /var/www/html/images
else
usage
fi


@@ -0,0 +1,12 @@
[Unit]
Description=Renew certificates for pod-charlesreid1
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
StandardError=syslog
StandardOutput=syslog
SyslogIdentifier=pod-charlesreid1-certbot
ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/environment
ExecStart=/bin/bash -ac '. {{ pod_charlesreid1_pod_install_dir }}/environment; {{ pod_charlesreid1_pod_install_dir }}/scripts/certbot/renew_charlesreid1_certs.sh'


@@ -0,0 +1,9 @@
[Unit]
Description=Timer to renew certificates for pod-charlesreid1
[Timer]
# Run daily
OnCalendar=*-*-* 4:03:00
[Install]
WantedBy=timers.target


@@ -0,0 +1,76 @@
#!/bin/bash
#
# renew/run certbot on krash
set -eux
SERVICE="pod-charlesreid1"
function usage {
set +x
echo ""
echo "renew_charlesreid1_certs.sh script:"
echo ""
echo "Renew all certs used in the charlesreid1.com pod"
echo ""
echo " ./renew_charlesreid1_certs.sh"
echo ""
exit 1;
}
if [ "$(id -u)" != "0" ]; then
echo ""
echo ""
echo "This script should be run as root."
echo ""
echo ""
exit 1;
fi
if [ "$#" == "0" ]; then
# disable system service that will re-spawn docker pod
echo "Disable and stop system service ${SERVICE}"
sudo systemctl disable ${SERVICE}
sudo systemctl stop ${SERVICE}
echo "Stop pod"
docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml down
echo "Run certbot renew"
SUBS="git www"
DOMS="charlesreid1.com"
# top level domains
for DOM in $DOMS; do
certbot certonly \
--standalone \
--non-interactive \
--agree-tos \
--email charles@charlesreid1.com \
-d ${DOM}
done
# subdomains
for SUB in $SUBS; do
for DOM in $DOMS; do
certbot certonly \
--standalone \
--non-interactive \
--agree-tos \
--email charles@charlesreid1.com \
-d ${SUB}.${DOM}
done
done
echo "Start pod"
docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml up -d
echo "Enable and start system service ${SERVICE}"
sudo systemctl enable ${SERVICE}
sudo systemctl start ${SERVICE}
echo "Done"
else
usage
fi


@@ -13,7 +13,9 @@ def clean():
 rname = tname[:-3]
 rpath = os.path.join(tdir, rname)
-if os.path.exists(rpath):
+ignore_list = ['environment']
+if os.path.exists(rpath) and rname not in ignore_list:
 print(f"Removing file {rpath}")
 os.remove(rpath)
 else:


@@ -11,8 +11,8 @@ directory structure for charlesreid1.com
 content. (Or, charlesreid1.XYZ, whatever.)
 """
-SERVER_NAME_DEFAULT = '{{ server_name_default }}'
-USERNAME = '{{ username }}'
+SERVER_NAME_DEFAULT = '{{ pod_charlesreid1_server_name }}'
+USERNAME = '{{ pod_charlesreid1_username }}'


@@ -10,8 +10,8 @@ This script git pulls the /www directory
 for updating charlesreid1.com content.
 """
-SERVER_NAME_DEFAULT = '{{ server_name_default }}'
-USERNAME = '{{ username }}'
+SERVER_NAME_DEFAULT = '{{ pod_charlesreid1_server_name }}'
+USERNAME = '{{ pod_charlesreid1_username }}'


@@ -1,37 +1,27 @@
 #!/bin/bash
 #
-# clone or download each extension, and build
+# Clone each REL1_39 extension into d-mediawiki-new for the MW 1.39 green stack.
+# EmbedVideo is intentionally skipped for now (add back later if needed).
 set -eux
-MW_DIR="${POD_CHARLESREID1_DIR}/d-mediawiki"
-CONF_DIR="${MW_DIR}/charlesreid1-config"
-MW_CONF_DIR="${MW_CONF_DIR}/mediawiki"
+MW_DIR="${POD_CHARLESREID1_DIR}/d-mediawiki-new"
+MW_CONF_DIR="${MW_DIR}/charlesreid1-config/mediawiki"
 EXT_DIR="${MW_CONF_DIR}/extensions"
-mkdir -p ${EXT_DIR}/extensions
+mkdir -p ${EXT_DIR}
 (
-cd ${EXT_DIR}/extensions
+cd ${EXT_DIR}
 ##############################
 Extension="SyntaxHighlight_GeSHi"
 if [ ! -d ${Extension} ]
 then
-## This requires mediawiki > 1.31
-## (so does REL1_31)
-#git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
-## This manually downloads REL1_30
-#wget https://extdist.wmflabs.org/dist/extensions/SyntaxHighlight_GeSHi-REL1_30-87392f1.tar.gz -O SyntaxHighlight_GeSHi.tar.gz
-#tar -xzf SyntaxHighlight_GeSHi.tar.gz -C ${PWD}
-#rm -f SyntaxHighlight_GeSHi.tar.gz
-# Best of both worlds
-git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
+git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git ${Extension}
 (
 cd ${Extension}
-git checkout --track remotes/origin/REL1_34
+git checkout --track remotes/origin/REL1_39
 )
 else
 echo "Skipping ${Extension}"
@@ -45,21 +35,7 @@ then
 git clone https://github.com/wikimedia/mediawiki-extensions-ParserFunctions.git ${Extension}
 (
 cd ${Extension}
-git checkout --track remotes/origin/REL1_34
-)
-else
-echo "Skipping ${Extension}"
-fi
-##############################
-Extension="EmbedVideo"
-if [ ! -d ${Extension} ]
-then
-git clone https://github.com/HydraWiki/mediawiki-embedvideo.git ${Extension}
-(
-cd ${Extension}
-git checkout v2.7.3
+git checkout --track remotes/origin/REL1_39
 )
 else
 echo "Skipping ${Extension}"
@@ -73,21 +49,7 @@ then
 git clone https://github.com/wikimedia/mediawiki-extensions-Math.git ${Extension}
 (
 cd ${Extension}
-git checkout REL1_34
-)
-else
-echo "Skipping ${Extension}"
-fi
-##############################
-Extension="Fail2banlog"
-if [ ! -d ${Extension} ]
-then
-git clone https://github.com/charlesreid1-docker/mw-fail2ban.git ${Extension}
-(
-cd ${Extension}
-git checkout master
+git checkout --track remotes/origin/REL1_39
 )
 else
 echo "Skipping ${Extension}"


@@ -1,20 +1,12 @@
 #!/bin/bash
 #
 # fix LocalSettings.php in the mediawiki container.
-#
-# docker is stupid, so it doesn't let you bind mount
-# a single file into a docker volume.
-#
-# so, rather than rebuilding the entire goddamn container
-# just to update LocalSettings.php when it changes, we just
-# use a docker cp command to copy it into the container.
 set -eux
 NAME="stormy_mw"
 MW_DIR="${POD_CHARLESREID1_DIR}/d-mediawiki"
-CONF_DIR="${MW_DIR}/charlesreid1-config"
-MW_CONF_DIR="${MW_CONF_DIR}/mediawiki"
+MW_CONF_DIR="${MW_DIR}/charlesreid1-config/mediawiki"
 echo "Checking that container exists"
 docker ps --format '{{.Names}}' | grep ${NAME} || exit 1;


@@ -1,12 +1,6 @@
 #!/bin/bash
 #
 # fix extensions dir in the mediawiki container
-#
-# in theory, we should be able to update the
-# extensions folder in d-mediawiki/charlesreid1-config,
-# but in reality this falls on its face.
-# So, we have to fix the fucking extensions directory
-# ourselves.
 set -eux
 NAME="stormy_mw"
@@ -14,8 +8,7 @@ NAME="stormy_mw"
 EXTENSIONS="SyntaxHighlight_GeSHi ParserFunctions EmbedVideo Math Fail2banlog"
 MW_DIR="${POD_CHARLESREID1_DIR}/d-mediawiki"
-CONF_DIR="${MW_DIR}/charlesreid1-config"
-MW_CONF_DIR="${MW_CONF_DIR}/mediawiki"
+MW_CONF_DIR="${MW_DIR}/charlesreid1-config/mediawiki"
 EXT_DIR="${MW_CONF_DIR}/extensions"
 echo "Checking that container exists..."


@@ -1,20 +1,12 @@
 #!/bin/bash
 #
 # fix skins in the mediawiki container.
-#
-# docker is stupid, so it doesn't let you bind mount
-# a single file into a docker volume.
-#
-# so, rather than rebuilding the entire goddamn container
-# just to update the skin when it changes, we just
-# use a docker cp command to copy it into the container.
 set -eux
 NAME="stormy_mw"
 MW_DIR="${POD_CHARLESREID1_DIR}/d-mediawiki"
-CONF_DIR="${MW_DIR}/charlesreid1-config"
-MW_CONF_DIR="${MW_CONF_DIR}/mediawiki"
+MW_CONF_DIR="${MW_DIR}/charlesreid1-config/mediawiki"
 SKINS_DIR="${MW_CONF_DIR}/skins"
 echo "Checking that container exists"
@@ -24,8 +16,8 @@ echo "Checking that skins dir exists"
 test -d ${SKINS_DIR}
 echo "Installing skins into $NAME"
-docker exec -it $NAME /bin/bash -c 'rm -rf /var/www/html/skins'
+docker exec -i $NAME /bin/bash -c 'rm -rf /var/www/html/skins'
 docker cp ${SKINS_DIR} $NAME:/var/www/html/skins
-docker exec -it $NAME /bin/bash -c 'chown -R www-data:www-data /var/www/html/skins'
+docker exec -i $NAME /bin/bash -c 'chown -R www-data:www-data /var/www/html/skins'
 echo "Finished installing skins into $NAME"


@@ -2,7 +2,7 @@
 #
 # Restore wiki files from a tar file
 # into the stormy_mw container.
-set -eux
+set -eu
 function usage {
 echo ""
@@ -31,16 +31,16 @@ then
 NAME="stormy_mw"
 TAR=$(basename "$1")
-echo "Checking that container exists"
+echo "Checking that container ${NAME} exists"
 docker ps --format '{{.Names}}' | grep ${NAME} || exit 1;
-echo "Copying $1 into container ${NAME}"
+echo "Copying dir $1 into container ${NAME}"
 set -x
 docker cp $1 ${NAME}:/tmp/${TAR}
+docker exec -it ${NAME} rm -rf /var/www/html/images.old
 docker exec -it ${NAME} mv /var/www/html/images /var/www/html/images.old
 docker exec -it ${NAME} tar -xf /tmp/${TAR} -C / && rm -f /tmp/${TAR}
 docker exec -it ${NAME} chown -R www-data:www-data /var/www/html/images
-set +x
 else
 usage


@@ -1,35 +1,36 @@
 #!/bin/bash
-#
-# Dump a database to an .sql file
-# from the stormy_mysql container.
-set -x
-
-function usage {
-echo ""
-echo "dump_database.sh script:"
-echo "Dump a database to an .sql file "
-echo "from the stormy_mysql container."
-echo ""
-echo " ./dump_database.sh <sql-dump-file>"
-echo ""
-echo "Example:"
-echo ""
-echo " ./dump_database.sh /path/to/wikidb_dump.sql"
-echo ""
-echo ""
-exit 1;
-}
-
-CONTAINER_NAME="stormy_mysql"
-
-if [[ "$#" -gt 0 ]];
-then
-
-TARGET="$1"
-mkdir -p $(dirname $TARGET)
-docker exec -i ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > $TARGET
-
-else
-usage
-fi
+echo "this script is deprecated, see ../backups/wikidb_dump.sh"
+##
+## Dump a database to an .sql file
+## from the stormy_mysql container.
+#set -eu
+#
+#function usage {
+# echo ""
+# echo "dump_database.sh script:"
+# echo "Dump a database to an .sql file "
+# echo "from the stormy_mysql container."
+# echo ""
+# echo " ./dump_database.sh <sql-dump-file>"
+# echo ""
+# echo "Example:"
+# echo ""
+# echo " ./dump_database.sh /path/to/wikidb_dump.sql"
+# echo ""
+# echo ""
+# exit 1;
+#}
+#
+#CONTAINER_NAME="stormy_mysql"
+#
+#if [[ "$#" -gt 0 ]];
+#then
+#
+# TARGET="$1"
+# mkdir -p $(dirname $TARGET)
+# set -x
+# docker exec -i ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > $TARGET
+#
+#else
+# usage
+#fi


@@ -6,6 +6,7 @@
 # Note that this expects the .sql dump
 # to create its own databases.
 # Use the --databases flag with mysqldump.
+set -eu
 function usage {
 echo ""
@@ -42,31 +43,23 @@ function usage {
 # because of all these one-off
 # "whoopsie we don't do that" problems.
-if [[ "$#" -eq 1 ]];
-then
 CONTAINER_NAME="stormy_mysql"
 TARGET=$(basename $1)
 TARGET_DIR=$(dirname $1)
+if [[ "$#" -eq 1 ]];
+then
-# Step 1: Copy the sql dump into the container
 set -x
+# Step 1: Copy the sql dump into the container
 docker cp $1 ${CONTAINER_NAME}:/tmp/${TARGET}
-set +x
 # Step 2: Run sqldump inside the container
-set -x
 docker exec -i ${CONTAINER_NAME} sh -c "/usr/bin/mysql --defaults-file=/root/.mysql.rootpw.cnf < /tmp/${TARGET}"
-set +x
 # Step 3: Clean up sql dump from inside container
-set -x
-docker exec -i ${CONTAINER_NAME} sh -c "/bin/rm -fr /tmp/${TARGET}.sql"
-set +x
-set +x
+docker exec -i ${CONTAINER_NAME} sh -c "/bin/rm -fr /tmp/${TARGET}"
 else
 usage
 fi


@@ -5,11 +5,11 @@ After=docker.service
 [Service]
 Restart=always
-StandardError=null
-StandardOutput=null
-ExecStartPre=test -f {{ pod_install_dir }}/docker-compose.yml
-ExecStart=/usr/local/bin/docker-compose -f {{ pod_install_dir }}/docker-compose.yml up
-ExecStop=/usr/local/bin/docker-compose -f {{ pod_install_dir }}/docker-compose.yml stop
+StandardError=journal
+StandardOutput=journal
+ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml
+ExecStart=/usr/local/bin/docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml up
+ExecStop=/usr/local/bin/docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml stop
 [Install]
 WantedBy=default.target