Compare commits: `claude-pla...main` (32 commits)
**`.gitignore`** (vendored, 3 changed lines)

```
@@ -9,9 +9,10 @@ d-gitea/custom/
 
 # mediawiki
 charlesreid1.wiki.conf
+d-mediawiki/mediawiki/
 d-mediawiki/charlesreid1-config/mediawiki/skins/Bootstrap2/Bootstrap2.php
 d-mediawiki/charlesreid1-config/mediawiki/skins/Bootstrap2/navbar.php
-d-mediawiki/mediawiki/
+d-mediawiki/charlesreid1-config/mediawiki/mathjax
 
 # nginx
 d-nginx-charlesreid1/conf.d/http.DOMAIN.conf
```
@@ -1,324 +0,0 @@

# Upgrade Plan: MediaWiki 1.34 → 1.39+ and MySQL 5.7 → 8.0

## Context

MediaWiki 1.34 (EOL Nov 2021) and MySQL 5.7 (EOL Oct 2023) are both end-of-life and no longer receive security patches. The goal is to upgrade both with **minimal downtime** by running old and new versions side-by-side, testing the new stack, then switching over — with the ability to roll back instantly.

**Additional motivation:** The REST API v1 endpoint `/w/rest.php/v1/page/{title}/with_html` returns a 500 error ("Unable to fetch Parsoid HTML") because MW 1.34 does not bundle Parsoid. MW 1.39 bundles Parsoid in-process, which is required for this endpoint to work. This blocks tools (e.g., MediaWiki MCP) that rely on the REST API to fetch rendered HTML.

## Strategy: Blue-Green Deployment

Run the old stack ("blue") untouched while building and testing the new stack ("green") alongside it. Nginx acts as the switch — changing one `proxy_pass` line flips between old and new.
```
┌─ stormy_mw (MW 1.34) ──── stormy_mysql (MySQL 5.7) ← BLUE (old)
nginx ── proxy_pass ──────┤
└─ stormy_mw_new (MW 1.39) ─ stormy_mysql_new (MySQL 8) ← GREEN (new)
```

Both stacks use **separate volumes** — the old data is never touched.
---

## Decisions (Locked In)

- **Target:** MediaWiki 1.39 LTS (smallest jump from 1.34, can do 1.39→1.42 later)
- **Skin:** Patch Bootstrap2 to replace deprecated API calls for MW 1.39 compatibility
- **EmbedVideo:** Skip for now — don't include in green stack. Add back later if needed.
- **Extensions in green stack:** SyntaxHighlight_GeSHi, ParserFunctions, Math (all have REL1_39 branches)

---

## Phase 1: Preparation (no downtime)

All work happens on the VPS alongside the running production stack.
### 1.1 Full backup

```bash
# Database dump
make backups
# or manually:
./scripts/backups/wikidb_dump.sh

# Also back up the MW volume (uploaded images, cache)
docker run --rm -v stormy_mw_data:/data -v /tmp/mw_backup:/backup \
  alpine tar czf /backup/mw_data_backup.tar.gz -C /data .
```
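Before moving on, it's worth a quick check that the tarball from the step above is actually readable; a minimal sketch (path matches the command above):

```shell
# Verify the volume tarball is readable before relying on it
tar tzf /tmp/mw_backup/mw_data_backup.tar.gz >/dev/null 2>&1 \
  && echo "tarball OK" \
  || echo "WARNING: tarball missing or unreadable" >&2
```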
### 1.2 Create new Dockerfiles

**`d-mediawiki-new/Dockerfile`** — based on `mediawiki:1.39`
- Same structure as current Dockerfile
- Update extension COPY paths for new versions
- Update apt packages if needed (texlive, imagemagick still required)
- Apache config stays the same (port 8989)

**`d-mysql-new/Dockerfile`** — based on `mysql:8.0`
- Same structure as current
- Keep slow-log config (syntax compatible with 8.0)
### 1.3 Update extensions for target MW version

Create `scripts/mw/build_extensions_dir_139.sh` to clone REL1_39 branches:

| Extension | Current | New |
|-----------|---------|-----|
| SyntaxHighlight_GeSHi | REL1_34 | REL1_39 |
| ParserFunctions | REL1_34 | REL1_39 |
| Math | REL1_34 | REL1_39 |
| EmbedVideo | v2.7.3 | **Skipped** (add back later) |
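The heart of that script is a small clone loop. A sketch (the gerrit.wikimedia.org URL pattern is an assumption; mirror however the existing build_extensions_dir script fetches extensions. The leading `echo` makes it a dry run):

```shell
# Dry-run sketch of the clone loop for build_extensions_dir_139.sh.
# Remove the leading `echo` once the remote URLs are confirmed.
for ext in SyntaxHighlight_GeSHi ParserFunctions Math; do
  echo git clone --depth 1 --branch REL1_39 \
    "https://gerrit.wikimedia.org/r/mediawiki/extensions/${ext}" \
    "extensions-139/${ext}"
done
```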
### 1.4 Patch Bootstrap2 skin

Replace deprecated calls in `skins/Bootstrap2/`:
- `wfRunHooks('hook', ...)` → `Hooks::run('hook', ...)` (MW 1.35+)
- `wfMsg('key')` → `wfMessage('key')->text()`
- `wfEmptyMsg('key')` → `wfMessage('key')->isDisabled()`
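A quick inventory of the deprecated calls makes the patch reviewable; grep for them before and after (a sketch, run from wherever the skin sources live):

```shell
# List remaining deprecated calls; re-run after patching until nothing prints.
grep -rn 'wfRunHooks\|wfMsg(\|wfEmptyMsg' skins/Bootstrap2/ || echo "clean"
```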
### 1.5 Update LocalSettings.php.j2 (new copy for green stack)

Changes needed for MW 1.39:
- `require_once "$IP/extensions/Math/Math.php"` → `wfLoadExtension( 'Math' )`
- `$wgDBmysql5 = true;` — remove (deprecated in 1.39)
- Remove `wfLoadExtension( 'EmbedVideo' )` (skipped for now)
- Review other deprecated settings
- Add Parsoid configuration (bundled in MW 1.39, runs in-process — no separate container needed):

```php
# Parsoid (required for REST API with_html endpoint)
wfLoadExtension( 'Parsoid', "$IP/vendor/wikimedia/parsoid/extension.json" );
$wgParsoidSettings = [
    'useSelser' => true,
];
```
---

## Phase 2: Build Green Stack (no downtime)

### 2.1 Add new services to docker-compose.yml.j2
```yaml
stormy_mysql_new:
  restart: always
  build: d-mysql-new
  container_name: stormy_mysql_new
  volumes:
    - "stormy_mysql_new_data:/var/lib/mysql"
    - "./d-mysql/conf.d:/etc/mysql/conf.d:ro"
  environment:
    - MYSQL_ROOT_PASSWORD={{ pod_charlesreid1_mysql_password }}
  networks:
    - backend_new

stormy_mw_new:
  restart: always
  build: d-mediawiki-new
  container_name: stormy_mw_new
  volumes:
    - "stormy_mw_new_data:/var/www/html"
  environment:
    - MEDIAWIKI_SITE_SERVER=https://{{ pod_charlesreid1_server_name }}
    - MEDIAWIKI_SECRETKEY={{ pod_charlesreid1_mediawiki_secretkey }}
    - MEDIAWIKI_UPGRADEKEY={{ pod_charlesreid1_mediawiki_upgradekey }}
    - MYSQL_HOST=stormy_mysql_new
    - MYSQL_DATABASE=wikidb
    - MYSQL_USER=root
    - MYSQL_PASSWORD={{ pod_charlesreid1_mysql_password }}
  depends_on:
    - stormy_mysql_new
  networks:
    - frontend
    - backend_new
```

Add `stormy_mysql_new_data`, `stormy_mw_new_data` to volumes, `backend_new` to networks.
### 2.2 Build and start green containers

```bash
docker compose build stormy_mysql_new stormy_mw_new
docker compose up -d stormy_mysql_new stormy_mw_new
```

Old containers keep running — no disruption.
### 2.3 Migrate database to new MySQL 8.0

```bash
# Dump from old MySQL 5.7
docker exec stormy_mysql sh -c \
  'mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD" --default-character-set=binary' \
  > /tmp/wikidb_for_upgrade.sql

# Load into new MySQL 8.0
docker exec -i stormy_mysql_new sh -c \
  'mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' \
  < /tmp/wikidb_for_upgrade.sql
```
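Given the truncated-dump problems documented in the wikidb backup plan, it's worth confirming the dump actually completed before loading it; a minimal check (a complete mysqldump always ends with a completion trailer):

```shell
# A complete mysqldump ends with a `-- Dump completed on ...` trailer.
tail -c 200 /tmp/wikidb_for_upgrade.sql 2>/dev/null | grep -q 'Dump completed on' \
  && echo "dump looks complete" \
  || echo "WARNING: dump missing or truncated, do not load it" >&2
```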
### 2.4 Migrate MW uploaded files

```bash
# Copy images/uploads from old volume to new volume
docker run --rm \
  -v stormy_mw_data:/old:ro \
  -v stormy_mw_new_data:/new \
  alpine sh -c 'cp -a /old/images /new/images 2>/dev/null; echo done'
```
### 2.5 Run MediaWiki database upgrade

```bash
docker exec stormy_mw_new php /var/www/html/maintenance/update.php --quick
```

This migrates the DB schema from MW 1.34 → 1.39 format.
---

## Phase 3: Test Green Stack (no downtime)

### 3.1 Direct browser test

Temporarily expose the new MW on a different port for testing:
```yaml
stormy_mw_new:
  ports:
    - "8990:8989"   # temporary, for direct testing
```

Visit `http://<vps-ip>:8990` to verify MW loads, pages render, login works.
### 3.2 Test via nginx (brief switchover)

Edit nginx config to point `/wiki/` and `/w/` at `stormy_mw_new:8989`:

```nginx
proxy_pass http://stormy_mw_new:8989/wiki/;
```

```bash
docker exec stormy_nginx nginx -s reload
```
Test the live site. If broken, switch back:

```nginx
proxy_pass http://stormy_mw:8989/wiki/;
```

```bash
docker exec stormy_nginx nginx -s reload
```

**Switchover and rollback each take ~2 seconds** (nginx reload, no container restart).
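Since this same edit-and-reload dance happens again in Phase 4 and in rollback, it can be wrapped in a tiny helper; a sketch (the conf path is an assumption, use whichever rendered nginx conf is actually in play):

```shell
# flip <target>: point proxy_pass at stormy_mw (blue) or stormy_mw_new (green)
flip() {
  conf="d-nginx-charlesreid1/conf.d/https.DOMAIN.conf"   # assumed path
  sed -i -E "s|proxy_pass http://stormy_mw(_new)?:8989|proxy_pass http://$1:8989|g" "$conf"
  docker exec stormy_nginx nginx -s reload
}
# usage: flip stormy_mw_new   # switch to green
#        flip stormy_mw       # roll back to blue
```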
### 3.3 Test checklist

- [ ] Wiki pages render correctly
- [ ] Bootstrap2 skin displays properly
- [ ] Login works
- [ ] Math equations render
- [ ] Syntax highlighting works
- [ ] Image uploads work
- [ ] File downloads work
- [ ] Edit pages (as sysop)
- [ ] Search works
- [ ] Special pages load
- [ ] REST API: `curl -s -o /dev/null -w '%{http_code}' https://wiki.golly.life/w/rest.php/v1/page/Main_Page/with_html` returns `200`
- [ ] REST API: response contains rendered HTML (not "Unable to fetch Parsoid HTML")
- [ ] MediaWiki MCP tool can fetch pages without 500 errors
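The REST API line in the checklist is the one this upgrade was partly motivated by; wrapping it in a function makes it easy to re-run per page (domain as in the checklist; `check_rest` is a hypothetical helper name):

```shell
# check_rest [Title]: print the HTTP status of the with_html REST endpoint
check_rest() {
  curl -s -o /dev/null -w '%{http_code}' \
    "https://wiki.golly.life/w/rest.php/v1/page/${1:-Main_Page}/with_html"
}
# e.g. [ "$(check_rest)" = "200" ] && echo "REST API OK"
```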
---

## Phase 4: Switchover (~2 seconds downtime)

Once testing passes:

### 4.1 Final data sync

Right before switchover, re-dump and re-load the database to capture any edits made since Phase 2:
```bash
# Fresh dump
docker exec stormy_mysql sh -c \
  'mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD" --default-character-set=binary' \
  > /tmp/wikidb_final.sql

# Load into new
docker exec -i stormy_mysql_new sh -c \
  'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "DROP DATABASE wikidb; CREATE DATABASE wikidb;"'
docker exec -i stormy_mysql_new sh -c \
  'mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /tmp/wikidb_final.sql

# Re-run schema upgrade
docker exec stormy_mw_new php /var/www/html/maintenance/update.php --quick
```
### 4.2 Switch nginx

Update proxy_pass in nginx config, reload. **This is the only moment of downtime.**

### 4.3 Stop old containers (optional, can defer)

```bash
docker compose stop stormy_mysql stormy_mw
```

Keep volumes intact for rollback.
---

## Phase 5: Rollback (if needed)

At any point after switchover:

```bash
# Point nginx back to old containers
# (edit proxy_pass back to stormy_mw:8989)
docker compose start stormy_mysql stormy_mw
docker exec stormy_nginx nginx -s reload
```

Old containers + old volumes are untouched. Rollback is instant.

**Keep old containers and volumes for at least 2 weeks** before removing.
---

## Files to Create/Modify

| File | Action |
|------|--------|
| `d-mediawiki-new/Dockerfile` | Create — based on `mediawiki:1.39` |
| `d-mediawiki-new/charlesreid1-config/` | Create — copy from d-mediawiki, update extensions |
| `d-mysql-new/Dockerfile` | Create — based on `mysql:8.0` |
| `docker-compose.yml.j2` | Add green stack services, volumes, network |
| `d-nginx-charlesreid1/conf.d/https.DOMAIN.conf.j2` | Switchover: change proxy_pass targets |
| `scripts/mw/build_extensions_dir_139.sh` | Create — clone REL1_39 branches |
| `d-mediawiki-new/charlesreid1-config/mediawiki/LocalSettings.php.j2` | Update for MW 1.39 compat |
| `d-mediawiki-new/charlesreid1-config/mediawiki/skins/Bootstrap2/` | Patch deprecated API calls |
## Risk Assessment

| Risk | Likelihood | Mitigation |
|------|-----------|------------|
| Bootstrap2 skin breaks on MW 1.39 | MEDIUM | Patching deprecated calls; have Vector as fallback |
| Math extension rendering changes | LOW | REL1_39 branch exists; test rendering |
| MySQL 8 query compatibility | LOW | MW 1.39 officially supports MySQL 8.0 |
| Uploaded images lost | NONE | Copied to new volume; old volume preserved |
| Database corruption on migration | LOW | Old DB untouched; dump/restore is safe |
| Pages using EmbedVideo break | LOW | Videos won't render but pages still load; add back later |

---
## Implementation Order

1. **Prepare** new Dockerfiles and extension builds (Phase 1)
2. **Build** green stack alongside production (Phase 2)
3. **Test** thoroughly (Phase 3)
4. **Switch** when confident (Phase 4)
5. **Clean up** old containers after 2 weeks (Phase 5)
@@ -1,405 +0,0 @@

# Plan: Fix the Broken wikidb Backup Script

## Status

**BLOCKING:** The MySQL no-root-password migration (`MySqlNoRootPasswordPlan.md`) is on hold until backups are working. We will not touch the database until we have a verified, complete, restorable dump in hand.
## What we observed

On 2026-04-13 at 18:02 PDT we ran `scripts/backups/wikidb_dump.sh` as a pre-flight safety net. After ~14 seconds the output file stopped growing at 459,628,206 bytes (~439 MB) and the script hung. After 6+ minutes:

- The `mysqldump` process inside `stormy_mysql` was still alive but in `S` (sleeping) state, using ~1% CPU.
- `SHOW PROCESSLIST` on MySQL showed **no** mysqldump connection — MySQL had already dropped it.
- The dump file ended mid-`INSERT`, mid-row, with **no** `-- Dump completed on …` trailer. The dump is unusable.

So: every "successful" run of this script may have been silently producing truncated dumps. We do not know how long this has been broken or whether any recent backup in `/home/charles/backups` or in S3 is restorable. **That is question one.**
## Root cause hypothesis

`scripts/backups/wikidb_dump.sh` runs:

```bash
DOCKERX="${DOCKER} exec -t"
${DOCKERX} ${CONTAINER_NAME} sh -c 'exec mysqldump wikidb --databases -uroot -p"$MYSQL_ROOT_PASSWORD" --default-character-set=binary' > "${BACKUP_TARGET}"
```
The `-t` flag allocates a pseudo-TTY inside the container. Two problems with that:

1. **PTY corrupts binary output.** A PTY translates `LF` → `CRLF` on output. `mysqldump --default-character-set=binary` writes raw `_binary` blobs that contain `\n` bytes; these get rewritten in transit, silently corrupting the dump even when it does complete.
2. **PTY buffers can deadlock on large streams.** PTYs have small kernel buffers (typically 4 KB). When the redirect target (`> file`) drains slower than mysqldump produces, or when MySQL hits `net_write_timeout` and closes the connection, mysqldump can end up sleeping on a PTY write that will never complete. That matches what we saw: MySQL connection gone, mysqldump alive but sleeping, file frozen at ~439 MB.

The script also strips the first line with `tail -n +2` to drop mysqldump's "Using a password on the command line interface can be insecure" warning. The warning goes to **stderr**, not stdout, so this `tail` is at best a no-op and at worst silently deletes the first line of real SQL.
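The LF→CRLF mangling is easy to demonstrate without touching production: `script(1)` allocates a PTY the same way `docker exec -t` does (assumes util-linux `script` is available):

```shell
# Without a PTY: printf's newlines arrive as bare \n
printf 'one\ntwo\n' | od -c | head -2
# Through a PTY (as with `docker exec -t`): the same bytes arrive as \r\n
script -qec "printf 'one\ntwo\n'" /dev/null | od -c | head -2
```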
## Affected files

| File | Change |
|------|--------|
| `scripts/backups/wikidb_dump.sh` | Remove `-t`; switch auth to `MYSQL_PWD` env; remove broken `tail -n +2`; add completion-trailer check; add `--single-transaction --quick --routines --triggers --events` |
| `scripts/backups/wikidb_restore_test.sh` | **NEW** — restore the latest dump into a throwaway MySQL container and run sanity queries |
| `scripts/backups/README.md` *(if present)* | Document the restore-test command and integrity check |

We will not touch `scripts/mysql/restore_database.sh` here — it is broken independently (references the deleted `.mysql.rootpw.cnf`) and is tracked separately.
---

## Phase 0: Triage (do this first, before any changes)

### Step 0.1: Kill the hung mysqldump

```bash
docker exec stormy_mysql sh -c 'pkill -9 mysqldump || true'
# list any host-side `docker exec` wrapper still hanging around; kill it by PID if found
pgrep -af 'docker exec.*mysqldump' || true
```

After this, confirm nothing is running:

```bash
docker exec stormy_mysql sh -c 'pgrep -a mysqldump || echo none'
```
### Step 0.2: Remove the truncated dump

```bash
rm -i /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql
```
### Step 0.3: Audit existing backups — are *any* of them complete?

We need to know whether we have a known-good dump anywhere. For each candidate file, the last bytes should contain `-- Dump completed on`:

```bash
for f in $(find /home/charles/backups -name 'wikidb_*.sql' -mtime -30 | sort); do
  trailer=$(tail -c 200 "$f" | tr -d '\0' | grep -o 'Dump completed on[^"]*' || echo "MISSING")
  size=$(stat -c %s "$f")
  echo "$f size=$size trailer=$trailer"
done
```

Any file showing `MISSING` is truncated and **not a real backup**. Record the results — we need to know whether the most recent good dump is from yesterday, last week, or never.
### Step 0.4: Audit the S3 backups

```bash
source ./environment
aws s3 ls "s3://${POD_CHARLESREID1_BACKUP_S3BUCKET}/" --recursive | grep wikidb | tail -20
```

Pull the most recent one down to a scratch dir and trailer-check it the same way as Step 0.3. **Do not assume it is good just because it exists.**
### Step 0.5: Decide whether to pause writes

If Step 0.3 + 0.4 show no recent good backup, consider whether to pause writes to the wiki (read-only mode via `$wgReadOnly` in `LocalSettings.php`) until we have one. This is a judgement call — if the most recent good backup is days old but the wiki is low-traffic, the risk of leaving it writable while we fix the script is low. Decide explicitly, do not just drift.

---
## Phase 1: Fix the script

### Step 1.1: Edit `scripts/backups/wikidb_dump.sh`

Replace the docker exec block with:

```bash
# Pass the password via env to avoid:
#  - the cmdline-password warning on stderr
#  - the password showing up in `ps` inside the container
# No `-t`: PTY corrupts binary dumps and can deadlock on large output.
docker exec -i \
  -e MYSQL_PWD \
  "${CONTAINER_NAME}" \
  sh -c 'exec mysqldump \
    --user=root \
    --single-transaction \
    --quick \
    --routines \
    --triggers \
    --events \
    --default-character-set=binary \
    --databases wikidb' \
  > "${BACKUP_TARGET}"
```
Notes on each flag:

- `-i` — keep stdin open (no PTY). This is the single most important change.
- `-e MYSQL_PWD` — forwards the host's `MYSQL_PWD` env var into the container for this one exec call. mysqldump reads `MYSQL_PWD` automatically. Set it on the host before invoking the script:

  ```bash
  export MYSQL_PWD="$(docker exec stormy_mysql printenv MYSQL_ROOT_PASSWORD)"
  ```

  We pull it from the container so we don't have to duplicate the secret on the host. The systemd unit / cron wrapper that runs this script will need the same line.
- `--single-transaction` — InnoDB-only consistent snapshot without table locks. wikidb is InnoDB. This is the standard recommendation for live MW databases.
- `--quick` — stream rows one at a time instead of buffering whole tables in RAM. Important for large `text` / `revision` tables.
- `--routines --triggers --events` — include stored programs. Cheap insurance.
- Removed `-uroot -p"$MYSQL_ROOT_PASSWORD"` from the inner `sh -c`, replaced with `--user=root` + `MYSQL_PWD`.
### Step 1.2: Remove the broken `tail -n +2` block

The "warning" it was trying to strip went to stderr, never stdout. The existing code:

```bash
tail -n +2 "${BACKUP_TARGET}" > "${BACKUP_TARGET}.tmp"
mv "${BACKUP_TARGET}.tmp" "${BACKUP_TARGET}"
```

is silently deleting the first line of real SQL (typically the `-- MySQL dump …` header comment). Delete the block entirely.
### Step 1.3: Add an integrity check

After the dump, before declaring success:

```bash
# A complete mysqldump always ends with `-- Dump completed on …`.
if ! tail -c 200 "${BACKUP_TARGET}" | grep -q 'Dump completed on'; then
  echo "ERROR: dump file ${BACKUP_TARGET} is missing the completion trailer." >&2
  echo "       mysqldump did not finish successfully." >&2
  exit 2
fi

# Sanity: file should be at least a few MB. Tune the floor as you like.
size=$(stat -c %s "${BACKUP_TARGET}")
if [ "${size}" -lt $((50 * 1024 * 1024)) ]; then
  echo "ERROR: dump file ${BACKUP_TARGET} is only ${size} bytes; suspicious." >&2
  exit 3
fi

echo "Dump OK: ${BACKUP_TARGET} (${size} bytes)"
```

`set -eux` is already at the top of the script, so any failed step exits non-zero. Good — make sure whatever runs the script (systemd, cron) actually notices that exit code and alerts.
---

## Phase 2: Verify the new script works

### Step 2.1: Run it

```bash
export MYSQL_PWD="$(docker exec stormy_mysql printenv MYSQL_ROOT_PASSWORD)"
source ./environment
bash ./scripts/backups/wikidb_dump.sh
```

Time it. On a healthy `--quick` stream, 400 MB of wikidb should take well under a minute on local disk.
### Step 2.2: Verify the trailer

```bash
tail -c 200 /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql | tr -d '\0'
```

Must end with `-- Dump completed on YYYY-MM-DD HH:MM:SS`.

### Step 2.3: Verify the byte count is sane

It should be **larger** than the truncated 439 MB we saw earlier (because the truncated file was missing the tail end of a table). Compare to the largest recent S3 backup if you have one.

### Step 2.4: Spot-check the SQL

```bash
head -50 /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql
```

Should start with `-- MySQL dump …` (NOT with `CREATE TABLE` — if it starts with `CREATE TABLE` then the dead `tail -n +2` is still there, eating the header).
---

## Phase 3: Prove the dump is restorable

A backup is only a backup if you have actually restored from it. Until then it is a file of unknown provenance.

### Step 3.1: Spin up a throwaway MySQL container

```bash
# use the same MySQL version as stormy_mysql — check with:
#   docker inspect stormy_mysql --format '{{.Config.Image}}'
docker run -d --rm \
  --name wikidb_restore_test \
  -e MYSQL_ROOT_PASSWORD=temp_test_pw_$$ \
  mysql:5.7
```
Wait for it to be ready:

```bash
until docker exec wikidb_restore_test sh -c 'mysqladmin -uroot -p"$MYSQL_ROOT_PASSWORD" ping' 2>/dev/null; do
  sleep 2
done
```
### Step 3.2: Pipe the dump in

```bash
docker exec -i wikidb_restore_test sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' \
  < /home/charles/backups/$(date +%Y%m%d)/wikidb_*.sql
```

Should complete with no errors.
### Step 3.3: Run sanity queries against the restored DB

```bash
docker exec wikidb_restore_test sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
USE wikidb;
SELECT COUNT(*) AS pages FROM page;
SELECT COUNT(*) AS revisions FROM revision;
SELECT COUNT(*) AS texts FROM text;
SELECT MAX(rev_timestamp) AS most_recent_edit FROM revision;
"'
```

Compare those numbers to live `stormy_mysql`:

```bash
docker exec -i stormy_mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
USE wikidb;
SELECT COUNT(*) FROM page;
SELECT COUNT(*) FROM revision;
SELECT COUNT(*) FROM text;
SELECT MAX(rev_timestamp) FROM revision;
"'
```

They should match (allowing for any edits between the dump time and the live query).
### Step 3.4: Tear down

```bash
docker stop wikidb_restore_test
```

`--rm` removes it on stop. No leftover state.

### Step 3.5: Bake this into a script

Save the Phase 3 commands as `scripts/backups/wikidb_restore_test.sh` so we can re-run it on demand. It should take a backup file path as its single argument and exit non-zero on any mismatch.
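A skeleton for that script, with just the argument handling and the cheap trailer gate (a sketch; the container spin-up and sanity queries from Steps 3.1–3.3 slot in where marked):

```shell
#!/bin/sh
# wikidb_restore_test.sh (sketch): exits non-zero if the dump is unusable.
set -eu

# refuse to restore-test a dump with no completion trailer
check_trailer() {
  tail -c 200 "$1" | grep -q 'Dump completed on'
}

main() {
  dump="${1:?usage: wikidb_restore_test.sh /path/to/wikidb_dump.sql}"
  check_trailer "$dump" || { echo "ERROR: $dump is truncated" >&2; exit 2; }
  echo "trailer OK: $dump"
  # then: Step 3.1 (throwaway container), 3.2 (pipe dump in), 3.3 (sanity queries)
}

# main "$@"   # uncomment when saving this as a script
```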
---

## Phase 4: Verify the scheduled-backup path

Whatever runs `wikidb_dump.sh` on a schedule needs to:

1. Set `MYSQL_PWD` (or otherwise provide the password) before invoking.
2. Actually notice and alert on a non-zero exit.
### Step 4.1: Find the scheduler

```bash
systemctl list-timers --all | grep -i backup
ls /etc/systemd/system/ | grep -i backup
crontab -l
sudo crontab -l
```

### Step 4.2: Inspect whatever you find

Confirm it sources `./environment` (or otherwise gets `MYSQL_PWD`), runs the script, and surfaces failures (slack canary webhook? email? exit-code check? journalctl?). If the failure path is "we'd notice in the logs eventually," that is not a failure path.
### Step 4.3: Trigger the scheduled job manually and confirm a clean run

```bash
sudo systemctl start <whatever-the-unit-is>.service
journalctl -u <whatever-the-unit-is>.service --since "5 min ago"
```

The journal should show the "Dump OK" line from Step 1.3.
---

## Phase 5: Commit and unblock the MySQL work

### Step 5.1: Commit the script + new restore-test script

Branch, commit, push, PR. Reference this plan in the PR description.

### Step 5.2: Update `MySqlNoRootPasswordPlan.md` Step 4 (Take a fresh backup)

It should now point at the fixed script and the restore-test script — Phase 0 of the no-root-password plan should require **both** a successful dump AND a successful restore-test before proceeding.

### Step 5.3: Resume the MySQL no-root-password migration

Only after Phase 3 above has passed at least once on a fresh dump.
---

## Rollback

There is nothing to roll back in Phase 0–3 — we are only modifying a script and creating throwaway containers. If the new script doesn't work, the old script is in git history (`git checkout -- scripts/backups/wikidb_dump.sh`) and we are no worse off than we are right now (which is: backups are broken).
---
|
|
||||||
|
|
||||||
## Notes / open questions
|
|
||||||
|
|
||||||
- **How long has this been broken?** Answer with Phase 0.3 + 0.4. If every
  recent dump is truncated, this has been broken since whenever the wiki grew
  past the first PTY-buffer-stall threshold. We should figure out an
  approximate date so we know what window of "we thought we had backups" was
  fictional.
- **Why no alert?** Phase 4 needs to answer this. A backup pipeline that can
  silently produce 439 MB of garbage for an unknown number of days is the
  real bug. The script fix is necessary but not sufficient.
- **Should we move off `mysqldump` entirely?** For a database this size,
  `mysqldump` is fine. Not worth re-architecting. The fix is one flag and
  one integrity check.
- **`docker exec -t` elsewhere in the repo?** Worth a grep — same bug pattern
  could exist in any other backup or maintenance script.
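That grep can be as simple as the following sketch (the `scripts/` layout and the two files below are hypothetical, just to demonstrate the pattern):

```shell
# Scan for docker exec invocations that allocate a TTY (-t / -it): the same
# PTY bug that truncated the dumps. Paths and contents are hypothetical.
mkdir -p /tmp/exec-scan/scripts
cat > /tmp/exec-scan/scripts/bad.sh <<'EOF'
docker exec -it stormy_mysql mysqldump wikidb > wikidb.sql
EOF
cat > /tmp/exec-scan/scripts/good.sh <<'EOF'
docker exec -i stormy_mysql mysqldump wikidb > wikidb.sql
EOF

# -t may appear alone or inside a flag cluster like -it, so match any
# single-dash cluster containing 't' right after "docker exec".
grep -rlE 'docker exec (-[A-Za-z]*t[A-Za-z]* )' /tmp/exec-scan/scripts
```

Only `bad.sh` should be reported; `-i` alone keeps stdout a clean pipe and is safe.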
```diff
@@ -98,6 +98,9 @@ SECRET_KEY = {{ pod_charlesreid1_gitea_secretkey }}
 MIN_PASSWORD_LENGTH = 10
 INTERNAL_TOKEN = {{ pod_charlesreid1_gitea_internaltoken }}
 
+[actions]
+ENABLED = true
+
 [other]
 SHOW_FOOTER_BRANDING = false
 ; Show version information about Gitea and Go in the footer
```
d-gitea/runner/config.yaml (new file, 20 lines)
```diff
@@ -0,0 +1,20 @@
+log:
+  level: info
+
+runner:
+  # Label format: <label>:<runner-type>:<image>
+  # "ubuntu-latest" is the standard GitHub Actions label.
+  # Map it (and common aliases) to a Docker image so jobs don't sit waiting.
+  labels:
+    # alpine: ~50MB, has python3+pip; install extras with: apk add --no-cache git curl
+    - "alpine:docker://python:3.12-alpine"
+    - "ubuntu-latest:docker://catthehacker/ubuntu:act-22.04"
+    - "ubuntu-22.04:docker://catthehacker/ubuntu:act-22.04"
+    - "ubuntu-20.04:docker://catthehacker/ubuntu:act-20.04"
+    - "ubuntu-24.04:docker://catthehacker/ubuntu:act-22.04"
+
+container:
+  network: "pod-charlesreid1_frontend"
+
+cache:
+  enabled: true
```
```diff
@@ -1,22 +1,12 @@
-FROM mediawiki:1.34
+FROM mediawiki:1.39.12
 
 EXPOSE 8989
 
-VOLUME ["/var/www/html"]
-
-# Install ImageMagick
-# and math stuff mentioned by Math extension readme
+# Install ImageMagick (used for image thumbnailing)
 RUN apt-get update && \
-    apt-get install -y build-essential \
-    dvipng \
-    ocaml \
-    ghostscript \
-    imagemagick \
-    texlive-latex-base \
-    texlive-latex-extra \
-    texlive-fonts-recommended \
-    texlive-lang-greek \
-    texlive-latex-recommended
+    apt-get install -y --no-install-recommends imagemagick && \
+    rm -rf /var/lib/apt/lists/*
 
 # Copy skins, config files, and other particulars into container
```
```diff
@@ -24,37 +14,35 @@ RUN apt-get update && \
 # MediaWiki needs everything, everything, to be in one folder.
 # Docker is totally incapable of mounting a file in a volume.
 # I cannot update LocalSettings.php without clearing the cache.
-# I cannot clear the cache without reinstalling all of latex.
 # I can't bind-mount the skins dir, because then it's owned by root.
 # I can't fix the fact that all bind-mounted dirs are owned by root,
 # because I can only add commands in THIS DOCKERFILE.
 # and when you run the commands in this dockerfile,
 # YOU CANNOT SEE THE BIND-MOUNTED STUFF.
 
-# Extensions
-COPY charlesreid1-config/mediawiki/extensions/EmbedVideo /var/www/html/extensions/EmbedVideo
+# Extensions (REL1_39 branches; EmbedVideo skipped for the 1.34 -> 1.39 upgrade)
 COPY charlesreid1-config/mediawiki/extensions/Math /var/www/html/extensions/Math
 COPY charlesreid1-config/mediawiki/extensions/ParserFunctions /var/www/html/extensions/ParserFunctions
 COPY charlesreid1-config/mediawiki/extensions/SyntaxHighlight_GeSHi /var/www/html/extensions/SyntaxHighlight_GeSHi
-RUN chown -R www-data:www-data /var/www/html/*
 
 # Skins
 COPY charlesreid1-config/mediawiki/skins /var/www/html/skins
-RUN chown -R www-data:www-data /var/www/html/skins
-RUN touch /var/www/html/skins
+
+# MathJax 3.2.2 (self-hosted, served via Apache alias at /w/mathjax/*).
+# Math extension runs in 'source' mode; MathJax renders client-side, so we
+# never call out to restbase/mathoid. See LocalSettings.php.j2.
+COPY charlesreid1-config/mediawiki/mathjax /var/www/html/mathjax
 
 # Settings
 COPY charlesreid1-config/mediawiki/LocalSettings.php /var/www/html/LocalSettings.php
-RUN chown -R www-data:www-data /var/www/html/LocalSettings*
-RUN chmod 600 /var/www/html/LocalSettings.php
 
 # Apache conf file
 COPY charlesreid1-config/apache/*.conf /etc/apache2/sites-enabled/
-RUN a2enmod rewrite
-RUN service apache2 restart
 
-## make texvc
-#CMD cd /var/www/html/extensions/Math && make && apache2-foreground
+RUN chown -R www-data:www-data /var/www/html/* /var/www/html/skins /var/www/html/mathjax /var/www/html/LocalSettings* && \
+    touch /var/www/html/skins && \
+    chmod 600 /var/www/html/LocalSettings.php && \
+    a2enmod rewrite
 
 # PHP conf file
 # https://hub.docker.com/_/php/
```
```diff
@@ -43,7 +43,7 @@ $wgDBpassword = getenv('MYSQL_PASSWORD');
 # MySQL specific settings
 $wgDBprefix = "";
 $wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=binary";
-$wgDBmysql5 = true;
+# $wgDBmysql5 removed — deprecated in MW 1.39
 
 # Shared memory settings
 $wgMainCacheType = CACHE_ACCEL;
```
```diff
@@ -84,16 +84,25 @@ $wgPingback = false;
 # available UTF-8 locale
 $wgShellLocale = "en_US.utf8";
 
-# If you have the appropriate support software installed
-# you can enable inline LaTeX equations:
-$wgUseTeX = true;
-$wgTexvc = "$IP/extensions/Math/math/texvc";
-#$wgTexvc = '/usr/bin/texvc';
-
-# Set MathML as default rendering option
-$wgDefaultUserOptions['math'] = 'mathml';
-$wgMathFullRestbaseURL = 'https://en.wikipedia.org/api/rest_';
-$wgMathMathMLUrl = 'https://mathoid-beta.wmflabs.org/';
+# Math rendering: Math extension emits raw LaTeX ('source' mode), then
+# a self-hosted MathJax 3 build at /w/mathjax/ renders it client-side.
+# No mathoid, no restbase, no external CDN.
+$wgDefaultUserOptions['math'] = 'source';
+$wgMathValidModes = [ 'source' ];
+# Skip TeX validation entirely — default validator calls out to restbase,
+# which breaks air-gapped installs even when we only emit source HTML.
+$wgMathDisableTexFilter = 'always';
+$wgHooks['BeforePageDisplay'][] = function ( $out, $skin ) {
+    $out->addHeadItem( 'mathjax',
+        '<script>window.MathJax = {'
+        . 'tex: { inlineMath: [["$","$"],["\\\\(","\\\\)"]], '
+        . 'displayMath: [["$$","$$"],["\\\\[","\\\\]"]], '
+        . 'processEscapes: true }, '
+        . 'options: { processHtmlClass: "mwe-math-fallback-source-inline|mwe-math-fallback-source-display|mwe-math-element" } '
+        . '};</script>'
+        . '<script async src="/w/mathjax/tex-chtml.js"></script>'
+    );
+};
 
 # Site language code, should be one of the list in ./languages/data/Names.php
 $wgLanguageCode = "en";
```
```diff
@@ -198,17 +207,22 @@ $wgSyntaxHighlightDefaultLang = "text";
 wfLoadExtension( 'ParserFunctions' );
 
 ##############################################
-# Embed videos extension
-# https://github.com/HydraWiki/mediawiki-embedvideo/
-# require_once("$IP/extensions/EmbedVideo/EmbedVideo.php");
-
-wfLoadExtension( 'EmbedVideo' );
+# Embed videos extension — SKIPPED for MW 1.39 upgrade (add back later)
 
 ###########################################
 # Math extension
 # https://github.com/wikimedia/mediawiki-extensions-Math.git
 
-require_once "$IP/extensions/Math/Math.php";
+wfLoadExtension( 'Math' );
+
+###########################################
+# Parsoid (bundled in MW 1.39, runs in-process)
+# Required for REST API /v1/page/{title}/with_html endpoint
+
+wfLoadExtension( 'Parsoid', "$IP/vendor/wikimedia/parsoid/extension.json" );
+$wgParsoidSettings = [
+    'useSelser' => true,
+];
 
 #############################################
 # Fix cookies crap
```
```diff
@@ -23,12 +23,12 @@ class SkinBootstrap2 extends SkinTemplate {
 	// cmr 05/08/2014
 	$template = 'Bootstrap2Template';
 
-	function setupSkinUserCss( OutputPage $out ) {
-		global $wgHandheldStyle;
-
-		parent::setupSkinUserCss( $out );
-
-		// Append to the default screen common & print styles...
+	// MW 1.39: Skin::setupSkinUserCss() was removed. initPage() is the
+	// per-request hook that still receives OutputPage and runs before
+	// headElement is generated, so addStyle() calls land in the <head>.
+	public function initPage( OutputPage $out ) {
+		parent::initPage( $out );
+
 		$out->addStyle( 'Bootstrap2/IE50Fixes.css', 'screen', 'lt IE 5.5000' );
 		$out->addStyle( 'Bootstrap2/IE55Fixes.css', 'screen', 'IE 5.5000' );
 		$out->addStyle( 'Bootstrap2/IE60Fixes.css', 'screen', 'IE 6' );
```
```diff
@@ -36,13 +36,12 @@ class SkinBootstrap2 extends SkinTemplate {
 
 		$out->addStyle( 'Bootstrap2/rtl.css', 'screen', '', 'rtl' );
 
-		$out->addStyle('Bootstrap2/bootstrap.css' );
-		$out->addStyle('Bootstrap2/slate.css' );
-		$out->addStyle('Bootstrap2/main.css' );
-		$out->addStyle('Bootstrap2/dox.css' );
-
-		$out->addStyle('Bootstrap2/css/font-awesome.css');
+		$out->addStyle( 'Bootstrap2/bootstrap.css' );
+		$out->addStyle( 'Bootstrap2/slate.css' );
+		$out->addStyle( 'Bootstrap2/main.css' );
+		$out->addStyle( 'Bootstrap2/dox.css' );
+		$out->addStyle( 'Bootstrap2/css/font-awesome.css' );
 		//$out->addStyle('Bootstrap2/cmr-bootstrap-cyborg.css');
 		//$out->addStyle('Bootstrap2/cmr-bootstrap-cyborg-wiki.css');
 		//
```
```diff
@@ -72,7 +71,8 @@ class Bootstrap2Template extends QuickTemplate {
 
 	// -------- Start ------------
 	// Adding the following line makes Geshi work
-	$this->html( 'headelement' );
+	// (MW 1.39: read $this->data directly to avoid QuickTemplate::html('headelement') deprecation)
+	echo $this->data['headelement'];
 	// Left this out because the [edit] buttons were becoming right-aligned
 	// Got around that behavior by changing shared.css
 	// -------- End ------------
```
```diff
@@ -146,7 +146,7 @@ include('/var/www/html/skins/Bootstrap2/navbar.php');
 	echo ' ';
 	echo $tab['class'];
 }
-echo '" id="' . Sanitizer::escapeId( "ca-$key" ) . '">';
+echo '" id="' . Sanitizer::escapeIdForAttribute( "ca-$key" ) . '">';
 echo '<a href="';
 echo htmlspecialchars($tab['href']);
 echo '">';
```
```diff
@@ -329,7 +329,7 @@ include('/var/www/html/skins/Bootstrap2/footer.php');
 <?php }
 if($this->data['feeds']) { ?>
 	<li id="feedlinks"><?php foreach($this->data['feeds'] as $key => $feed) {
-		?><a id="<?php echo Sanitizer::escapeId( "feed-$key" ) ?>" href="<?php
+		?><a id="<?php echo Sanitizer::escapeIdForAttribute( "feed-$key" ) ?>" href="<?php
 		echo htmlspecialchars($feed['href']) ?>" rel="alternate" type="application/<?php echo $key ?>+xml" class="feedlink"<?php echo $this->skin->tooltipAndAccesskey('feed-'.$key) ?>><?php echo htmlspecialchars($feed['text'])?></a>
 	<?php } ?></li><?php
 }
```
```diff
@@ -390,7 +390,7 @@ include('/var/www/html/skins/Bootstrap2/footer.php');
 }
 
 //wfRunHooks( 'BootstrapTemplateToolboxEnd', array( &$this ) );
-wfRunHooks( 'BootstrapTemplateToolboxEnd', array( &$this ) );
+Hooks::run( 'BootstrapTemplateToolboxEnd', array( &$this ) );
 ?>
 </ul>
 <!--
```
```diff
@@ -429,7 +429,7 @@ include('/var/www/html/skins/Bootstrap2/footer.php');
 
 <?php if ( is_array( $cont ) ) { ?>
 	<ul class="nav nav-list">
-	<li class="nav-header"><?php $out = wfMsg( $bar ); if (wfEmptyMsg($bar, $out)) echo htmlspecialchars($bar); else echo htmlspecialchars($out); ?></li>
+	<li class="nav-header"><?php $msg = wfMessage( $bar ); if ($msg->isDisabled()) echo htmlspecialchars($bar); else echo htmlspecialchars($msg->text()); ?></li>
 	<?php foreach($cont as $key => $val) { ?>
 	<li id="<?php echo Sanitizer::escapeId($val['id']) ?>"<?php
 	if ( $val['active'] ) { ?> class="active" <?php }
```
```diff
@@ -518,8 +518,18 @@ a.new:visited {
 	color: #a55858;
 }
 
-span.editsection {
+.mw-editsection, .editsection {
 	font-size: small;
+	font-weight: normal;
+	margin-left: 1em;
+}
+
+.editOptions {
+	background-color: #777;
+}
+
+.mw-editsection-bracket {
+	margin-left: 0;
 }
 
 #preftoc {
```
```diff
@@ -1,3 +1,6 @@
 post_max_size = 128M
 memory_limit = 128M
 upload_max_filesize = 100M
+display_errors = Off
+log_errors = On
+error_log = /var/log/apache2/php_errors.log
```
```diff
@@ -1,7 +1,2 @@
-FROM mysql:5.7
+FROM mysql:8.0
 MAINTAINER charles@charlesreid1.com
-
-# make mysql data a volume
-VOLUME ["/var/lib/mysql"]
-
-RUN chown mysql:mysql /var/lib/mysql
```
```diff
@@ -27,6 +27,29 @@ services:
     networks:
       - frontend
 
+  stormy_gitea_runner:
+    image: gitea/act_runner:latest
+    container_name: stormy_gitea_runner
+    restart: always
+    volumes:
+      - "stormy_gitea_runner_data:/data"
+      - "/var/run/docker.sock:/var/run/docker.sock"
+      - "./d-gitea/runner/config.yaml:/etc/act_runner/config.yaml:ro"
+    environment:
+      - GITEA_INSTANCE_URL=http://stormy_gitea:3000
+      - GITEA_RUNNER_REGISTRATION_TOKEN={{ pod_charlesreid1_gitea_runner_token }}
+      - GITEA_RUNNER_NAME=stormy-runner
+      - CONFIG_FILE=/etc/act_runner/config.yaml
+    logging:
+      driver: "json-file"
+      options:
+        max-size: 1m
+        max-file: "10"
+    depends_on:
+      - stormy_gitea
+    networks:
+      - frontend
+
   stormy_mysql:
     restart: always
     build: d-mysql
```
```diff
@@ -52,7 +75,7 @@ services:
     build: d-mediawiki
     container_name: stormy_mw
     volumes:
-      - "stormy_mw_data:/var/www/html"
+      - "stormy_mw_images:/var/www/html/images"
    logging:
       driver: "json-file"
       options:
```
```diff
@@ -107,6 +130,8 @@ networks:
 
 volumes:
   stormy_mysql_data:
-  stormy_mw_data:
+  stormy_mw_images:
+    external: true
   stormy_gitea_data:
+  stormy_gitea_runner_data:
  stormy_nginx_logs:
```
```diff
@@ -5,6 +5,7 @@
 export POD_CHARLESREID1_DIR="/path/to/pod-charlesreid1"
 export POD_CHARLESREID1_TLD="example.com"
 export POD_CHARLESREID1_USER="nonrootuser"
+export POD_CHARLESREID1_VPN_IP_ADDR="1.2.3.4"
 
 # mediawiki:
 # ----------
```
```diff
@@ -31,6 +32,6 @@ export AWS_DEFAULT_REGION="us-west-1"
 
 # backups and scripts:
 # --------------------
-export POD_CHARLESREID1_USER="charles"
+export POD_CHARLESREID1_BACKUP_DIR="/path/to"
 export POD_CHARLESREID1_BACKUP_S3BUCKET="name-of-backups-bucket"
 export POD_CHARLESREID1_CANARY_WEBHOOK="https://hooks.slack.com/services/000000000/AAAAAAAAA/111111111111111111111111"
```
```diff
@@ -11,6 +11,7 @@ export POD_CHARLESREID1_VPN_IP_ADDR="{{ pod_charlesreid1_vpn_ip_addr }}"
 # ----------
 export POD_CHARLESREID1_MW_ADMIN_EMAIL="{{ pod_charlesreid1_mediawiki_admin_email }}"
 export POD_CHARLESREID1_MW_SECRET_KEY="{{ pod_charlesreid1_mediawiki_secretkey }}"
+export POD_CHARLESREID1_MW_UPGRADE_KEY="{{ pod_charlesreid1_mediawiki_upgradekey }}"
 
 # mysql:
 # ------
```
```diff
@@ -24,6 +24,7 @@ jinja_to_env = {
     "pod_charlesreid1_gitea_app_name": "POD_CHARLESREID1_GITEA_APP_NAME",
     "pod_charlesreid1_gitea_secretkey": "POD_CHARLESREID1_GITEA_SECRET_KEY",
     "pod_charlesreid1_gitea_internaltoken": "POD_CHARLESREID1_GITEA_INTERNAL_TOKEN",
+    "pod_charlesreid1_gitea_runner_token": "POD_CHARLESREID1_GITEA_RUNNER_TOKEN",
     "pod_charlesreid1_backups_aws_access_key": "AWS_ACCESS_KEY_ID",
     "pod_charlesreid1_backups_aws_secret_access_key": "AWS_SECRET_ACCESS_KEY",
     "pod_charlesreid1_backups_aws_region": "AWS_DEFAULT_REGION",
```
```diff
@@ -1,9 +1,10 @@
 #!/bin/bash
 #
-# clone or download each extension, and build
+# Clone each REL1_39 extension into d-mediawiki-new for the MW 1.39 green stack.
+# EmbedVideo is intentionally skipped for now (add back later if needed).
 set -eux
 
-MW_DIR="${POD_CHARLESREID1_DIR}/d-mediawiki"
+MW_DIR="${POD_CHARLESREID1_DIR}/d-mediawiki-new"
 MW_CONF_DIR="${MW_DIR}/charlesreid1-config/mediawiki"
 EXT_DIR="${MW_CONF_DIR}/extensions"
```
```diff
@@ -17,20 +18,10 @@ cd ${EXT_DIR}
 Extension="SyntaxHighlight_GeSHi"
 if [ ! -d ${Extension} ]
 then
-    ## This requires mediawiki > 1.31
-    ## (so does REL1_31)
-    #git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
-
-    ## This manually downloads REL1_30
-    #wget https://extdist.wmflabs.org/dist/extensions/SyntaxHighlight_GeSHi-REL1_30-87392f1.tar.gz -O SyntaxHighlight_GeSHi.tar.gz
-    #tar -xzf SyntaxHighlight_GeSHi.tar.gz -C ${PWD}
-    #rm -f SyntaxHighlight_GeSHi.tar.gz
-
-    # Best of both worlds
-    git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git SyntaxHighlight_GeSHi
+    git clone https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi.git ${Extension}
     (
     cd ${Extension}
-    git checkout --track remotes/origin/REL1_34
+    git checkout --track remotes/origin/REL1_39
     )
 else
     echo "Skipping ${Extension}"
```
```diff
@@ -44,21 +35,7 @@ then
     git clone https://github.com/wikimedia/mediawiki-extensions-ParserFunctions.git ${Extension}
     (
     cd ${Extension}
-    git checkout --track remotes/origin/REL1_34
-    )
-else
-    echo "Skipping ${Extension}"
-fi
-
-##############################
-
-Extension="EmbedVideo"
-if [ ! -d ${Extension} ]
-then
-    git clone https://github.com/HydraWiki/mediawiki-embedvideo.git ${Extension}
-    (
-    cd ${Extension}
-    git checkout v2.7.3
+    git checkout --track remotes/origin/REL1_39
     )
 else
     echo "Skipping ${Extension}"
```
```diff
@@ -72,7 +49,7 @@ then
     git clone https://github.com/wikimedia/mediawiki-extensions-Math.git ${Extension}
     (
     cd ${Extension}
-    git checkout REL1_34
+    git checkout --track remotes/origin/REL1_39
     )
 else
     echo "Skipping ${Extension}"
```
```diff
@@ -16,8 +16,8 @@ echo "Checking that skins dir exists"
 test -d ${SKINS_DIR}
 
 echo "Installing skins into $NAME"
-docker exec -it $NAME /bin/bash -c 'rm -rf /var/www/html/skins'
+docker exec -i $NAME /bin/bash -c 'rm -rf /var/www/html/skins'
 docker cp ${SKINS_DIR} $NAME:/var/www/html/skins
-docker exec -it $NAME /bin/bash -c 'chown -R www-data:www-data /var/www/html/skins'
+docker exec -i $NAME /bin/bash -c 'chown -R www-data:www-data /var/www/html/skins'
 
 echo "Finished installing skins into $NAME"
```
```diff
@@ -5,8 +5,8 @@ After=docker.service
 
 [Service]
 Restart=always
-StandardError=null
-StandardOutput=null
+StandardError=journal
+StandardOutput=journal
 ExecStartPre=/usr/bin/test -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml
 ExecStart=/usr/local/bin/docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml up
 ExecStop=/usr/local/bin/docker-compose -f {{ pod_charlesreid1_pod_install_dir }}/docker-compose.yml stop
```