Compare commits

..

13 Commits

Author SHA1 Message Date
holger krekel
6573ccc05f cache images from the first commit in the PR and reuse them until the end of the PR 2026-03-07 21:34:21 +01:00
holger krekel
25285005c3 same as https://github.com/chatmail/relay/pull/887/changes 2026-03-07 20:24:59 +01:00
holger krekel
482194437d docs: update lxc.rst for per-relay caching and parallel deploy
Update the quick-start and CLI reference sections to reflect
per-relay cached images (localchat-test0, localchat-test1),
parallel deploy, DNS readiness checks, and revised memory
limits (256 MiB for the DNS container).  Also mention the section
timing summary printed at the end of lxc-test.
2026-03-07 20:24:59 +01:00
holger krekel
735e9d3e7f feat: route output through Out, add DNS check and version-string excludes
Extend the Out class with section(), section_line(), and print()
methods that replace the standalone _section()/_section_line()
helpers.  Section timings are recorded and printed as a summary
at the end of lxc-test.

Add RelayContainer.check_dns() which retries 'getent hosts
pypi.org' to verify external DNS resolution works inside the
container, called right after configure_dns() in lxc-start.

Add DIFF_EXCLUDES to get_version_string() so that diffs limited
to test directories do not cause a version mismatch and
unnecessary redeployment.

Update test_lxc.py to use QuietOut for the new Out API.
2026-03-07 20:24:53 +01:00
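The DNS readiness check described in this commit can be sketched as follows. This is a minimal illustration, not the actual `RelayContainer.check_dns()` code; `wait_for_dns` and its parameters are assumed names, and the runner defaults to local execution rather than running inside a container:

```python
import subprocess
import time


def wait_for_dns(run, hostname="pypi.org", attempts=10, delay=1.0):
    """Retry `getent hosts <hostname>` until it succeeds or attempts run out.

    `run` executes a command and returns its exit code; in the real
    implementation it would execute inside the container.
    """
    for _ in range(attempts):
        if run(["getent", "hosts", hostname]) == 0:
            return True
        time.sleep(delay)
    return False


def local_run(cmd):
    # Illustrative runner: execute on the local host, suppress output.
    return subprocess.run(cmd, capture_output=True, check=False).returncode
```

Polling with a bounded retry count keeps container startup deterministic: either external resolution works within the window, or the failure surfaces immediately after `configure_dns()` instead of later in the deploy.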
holger krekel
4b79606d49 feat: per-relay image caching, static IPs, and parallel deploy
Switch from a single localchat-relay image to per-relay cached
images (localchat-test0, localchat-test1) and add a DNS image
(localchat-ns).  Assign static IPs via a fixed incusbr0 bridge
subnet (10.200.200.0/24) so containers always get deterministic
addresses.

Container launch is split into 'incus init' + device-override +
'incus start' to set the static IP before boot.

Deploy runs in parallel via _run_cmdeploy_parallel(), which
captures output per-relay and shows progress lines.  Tests now
run in both directions (test0↔test1, test1↔test0).

publish_image() returns bool (True if published, False if cached)
so lxc-test can report cache hits.
2026-03-07 14:40:00 +01:00
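The three-step launch described above ('incus init' + device-override + 'incus start') can be sketched as a command builder. This is a hedged illustration: `static_ip_launch_cmds` is an assumed helper name, and the exact incus flags may differ from what the implementation uses:

```python
def static_ip_launch_cmds(image, name, ip):
    """Build the incus commands that launch container `name` from `image`
    with a static IP: init without booting, override the inherited NIC
    with a fixed address, then start.  Returns a list of argv lists."""
    return [
        ["incus", "init", image, name],
        ["incus", "config", "device", "override", name, "eth0",
         f"ipv4.address={ip}"],
        ["incus", "start", name],
    ]
```

Setting the address between init and start is what makes the IP deterministic: the override is in place before the container's first DHCP lease, so the bridge always hands out the configured address.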
holger krekel
6e52bfe8c4 refactor: extract sdist build into util with lock-based idempotency
Move _build_chatmaild from deployers.py into util.py as
build_chatmaild_sdist() with fcntl-based file locking so
parallel deploys do not race on the sdist.  The build is
called once from run_cmd() before pyinfra starts; deployers.py
now only calls get_chatmaild_sdist() to locate the pre-built
archive.

Add test_build_chatmaild_sdist and test_get_chatmaild_sdist_errors.
2026-03-07 14:38:47 +01:00
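The fcntl-based locking idea from this commit can be sketched like so. This is a simplified shape, assuming a `with_build_lock` helper rather than the real `build_chatmaild_sdist()` signature:

```python
import fcntl
from pathlib import Path


def with_build_lock(lock_path, build):
    """Run `build()` while holding an exclusive fcntl lock on `lock_path`,
    so parallel callers serialize instead of racing on the sdist."""
    lock_file = Path(lock_path)
    lock_file.parent.mkdir(parents=True, exist_ok=True)
    with open(lock_file, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            return build()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

An fcntl lock is released automatically if the holding process dies, which makes it a good fit for serializing parallel deploys without risking a stale lock file.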
holger krekel
95c76aa2b0 ci: replace staging workflows with LXC-local testing
Replace the two staging-server CI workflows and their zone-file
helpers with a single lxc-test job in ci.yaml that runs
'cmdeploy lxc-test' inside an ubuntu-24.04 runner.

The new workflow installs Incus from the Zabbly apt repository,
initialises it, bootstraps the venv, caches the base LXC image
together with SSH keys, and runs the full LXC pipeline
(container creation, deploy, DNS zones, tests).
2026-03-07 14:36:27 +01:00
holger krekel
4f109e8c31 address link2xt comments (zone parsing and turn v0.4 release) 2026-03-07 07:35:07 +01:00
holger krekel
8c30714279 simplify start instructions 2026-03-06 12:12:28 +01:00
holger krekel
23f21d36b1 make helpers testable and test them, also streamline intro of docs 2026-03-06 11:01:25 +01:00
holger krekel
4ed3f5dd91 fix lxc-test to not re-run deploy when nothing changed + some other beautifications 2026-03-06 10:52:08 +01:00
holger krekel
972b46be74 use explicit pytest options instead of env vars to pass ssh info around. 2026-03-06 10:24:15 +01:00
holger krekel
7edb4e860a feat: add LXC container support for local chatmail development
Add cmdeploy "lxc-test" command to run cmdeploy against local containers,
with supplementary lxc-start, lxc-stop and lxc-status subcommands.
See doc/source/lxc.rst for full documentation including prerequisites,
DNS setup, TLS handling, DNS-free testing, and known limitations.

Apart from adding lxc-specific docs, tests, and implementation files in the cmdeploy/lxc directory,
this PR adds the --ssh-config option to cmdeploy run/dns/status/test commands and pyinfra invocations,
and also to sshexec (Execnet) handling.  This lets the host operate without any DNS entries for a relay,
routing all resolution through ssh-config.  It is used by the "lxc-test" command, which performs
a completely local setup -- again, see the docs for more details.

While working on DNS/SSH things I also unified all zone-file handling
to use actual BIND format as it is easy enough to parse back.
2026-03-06 10:06:00 +01:00
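The ssh-config-based resolution mentioned above can be illustrated with a toy parser. This is not the real `resolve_host_from_ssh_config()` from sshexec; it only shows the principle that a `Host`/`HostName` mapping in an `--ssh-config` file can route relay names to local container IPs without any DNS entries:

```python
def parse_ssh_config_hosts(text):
    """Minimal parse: map Host aliases to their HostName values."""
    hosts, current = {}, None
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue  # skip blanks and comments
        key = parts[0].lower()
        if key == "host":
            current = parts[1]
        elif key == "hostname" and current:
            hosts[current] = parts[1]
    return hosts


# Hypothetical ssh-config entry pointing a relay name at a container IP.
EXAMPLE = """\
Host test0.localchat
    HostName 10.200.200.10
    User root
"""
```

Because ssh resolves `HostName` itself, any tool that shells out through ssh (pyinfra, pytest, sshexec) transparently reaches the container while the host's own resolver knows nothing about the relay domain.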
20 changed files with 803 additions and 982 deletions

View File

@@ -15,28 +15,102 @@ jobs:
with: with:
ref: ${{ github.event.pull_request.head.sha }} ref: ${{ github.event.pull_request.head.sha }}
- name: download filtermail - name: download filtermail
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.0/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.5.2/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
- name: run chatmaild tests - name: run chatmaild tests
working-directory: chatmaild working-directory: chatmaild
run: pipx run tox run: pipx run tox
scripts: scripts:
name: deploy-chatmail tests name: deploy-chatmail tests
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
- name: initenv - name: initenv
run: scripts/initenv.sh run: scripts/initenv.sh
- name: append venv/bin to PATH - name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH run: echo venv/bin >>$GITHUB_PATH
- name: run formatting checks - name: run formatting checks
run: cmdeploy fmt -v run: cmdeploy fmt -v
- name: run deploy-chatmail offline tests - name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy run: pytest --pyargs cmdeploy
# all other cmdeploy commands require a staging server lxc-test:
# see https://github.com/deltachat/chatmail/issues/100 name: LXC deploy and test
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: install incus
run: |
# zabbly is the official incus community packages source
curl -fsSL https://pkgs.zabbly.com/key.asc \
| sudo gpg --dearmor -o /etc/apt/keyrings/zabbly.gpg
sudo sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.gpg
EOF'
sudo apt-get update
sudo apt-get install -y incus
- name: initialise incus
run: |
sudo systemctl stop docker.socket docker || true
sudo iptables -P FORWARD ACCEPT
sudo sysctl -w fs.inotify.max_user_instances=65535
sudo sysctl -w fs.inotify.max_user_watches=65535
sudo incus admin init --minimal
sudo usermod -aG incus-admin "$USER"
- name: initenv
run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: restore cached images
id: cache-images
uses: actions/cache@v4
with:
path: |
/tmp/localchat-base.tar.gz
/tmp/localchat-ns.tar.gz
/tmp/localchat-test0.tar.gz
/tmp/localchat-test1.tar.gz
lxconfigs/id_localchat*
key: incus-images-${{ runner.os }}-${{ github.ref_name }}
restore-keys: |
incus-images-${{ runner.os }}-${{ github.ref_name }}-
incus-images-${{ runner.os }}-main-
incus-images-${{ runner.os }}-
- name: import cached images
run: |
for alias in localchat-base localchat-ns localchat-test0 localchat-test1; do
if [ -f /tmp/$alias.tar.gz ]; then
sg incus-admin -c "incus image import /tmp/$alias.tar.gz --alias $alias" || true
fi
done
- name: lxc-test
run: sg incus-admin -c 'cmdeploy lxc-test'
- name: export images for cache
if: always()
run: |
for alias in localchat-base localchat-ns localchat-test0 localchat-test1; do
if ! [ -f /tmp/$alias.tar.gz ]; then
sg incus-admin -c "incus image export $alias /tmp/$alias" || true
fi
done

View File

@@ -1,20 +0,0 @@
;; Zone file for staging-ipv4.testrun.org
$ORIGIN staging-ipv4.testrun.org.
$TTL 300
@ IN SOA ns.testrun.org. root.nine.testrun.org (
2023010101 ; Serial
7200 ; Refresh
3600 ; Retry
1209600 ; Expire
3600 ; Negative response caching TTL
)
;; Nameservers.
@ IN NS ns.testrun.org.
;; DNS records.
@ IN A 37.27.95.249
mta-sts.staging-ipv4.testrun.org. CNAME staging-ipv4.testrun.org.
www.staging-ipv4.testrun.org. CNAME staging-ipv4.testrun.org.

View File

@@ -1,21 +0,0 @@
;; Zone file for staging2.testrun.org
$ORIGIN staging2.testrun.org.
$TTL 300
@ IN SOA ns.testrun.org. root.nine.testrun.org (
2023010101 ; Serial
7200 ; Refresh
3600 ; Retry
1209600 ; Expire
3600 ; Negative response caching TTL
)
;; Nameservers.
@ IN NS ns.testrun.org.
;; DNS records.
@ IN A 37.27.24.139
mta-sts.staging2.testrun.org. CNAME staging2.testrun.org.
www.staging2.testrun.org. CNAME staging2.testrun.org.

View File

@@ -1,104 +0,0 @@
name: deploy on staging-ipv4.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging-ipv4.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging-ipv4.testrun.org
url: https://staging-ipv4.testrun.org/
concurrency: staging-ipv4.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging-ipv4.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging-ipv4.testrun.org:/var/lib/acme acme-ipv4 || true
rsync -avz root@staging-ipv4.testrun.org:/etc/dkimkeys dkimkeys-ipv4 || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys-ipv4/dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys-ipv4 root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme-ipv4/acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme-ipv4 root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging-ipv4.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_IPV4_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging-ipv4.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme-ipv4/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys-ipv4/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging-ipv4.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging-ipv4.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown root:root -R /var/lib/acme || true
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- name: setup dependencies
run: |
ssh root@staging-ipv4.testrun.org apt update
ssh root@staging-ipv4.testrun.org apt install -y git python3.11-venv python3-dev gcc
ssh root@staging-ipv4.testrun.org git clone https://github.com/chatmail/relay
ssh root@staging-ipv4.testrun.org "cd relay && git checkout " ${{ github.head_ref }}
ssh root@staging-ipv4.testrun.org "cd relay && scripts/initenv.sh"
- name: initialize config
run: |
ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy init staging-ipv4.testrun.org"
ssh root@staging-ipv4.testrun.org "sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' relay/chatmail.ini"
ssh root@staging-ipv4.testrun.org "sed -i 's/#\s*mtail_address/mtail_address/' relay/chatmail.ini"
- run: ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy run --verbose --skip-dns-check --ssh-host localhost"
- name: set DNS entries
run: |
ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy dns --zonefile staging-generated.zone --ssh-host localhost"
ssh root@staging-ipv4.testrun.org cat relay/staging-generated.zone >> .github/workflows/staging-ipv4.testrun.org-default.zone
cat .github/workflows/staging-ipv4.testrun.org-default.zone
scp .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: ssh root@staging-ipv4.testrun.org "cd relay && CHATMAIL_DOMAIN2=ci-chatmail.testrun.org scripts/cmdeploy test --slow --ssh-host localhost"
- name: cmdeploy dns
run: ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy dns -v --ssh-host localhost"

View File

@@ -1,97 +0,0 @@
name: deploy on staging2.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging2.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging2.testrun.org
url: https://staging2.testrun.org/
concurrency: staging2.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging2.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging2.testrun.org:/var/lib/acme . || true
rsync -avz root@staging2.testrun.org:/etc/dkimkeys . || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging2.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging2.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging2.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging2.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org chown root:root -R /var/lib/acme || true
- name: add hpk42 key to staging server
run: ssh root@staging2.testrun.org 'curl -s https://github.com/hpk42.keys >> .ssh/authorized_keys'
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- run: |
cmdeploy init staging2.testrun.org
sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
- run: cmdeploy run --verbose --skip-dns-check
- name: set DNS entries
run: |
cmdeploy dns --zonefile staging-generated.zone --verbose
cat staging-generated.zone >> .github/workflows/staging.testrun.org-default.zone
cat .github/workflows/staging.testrun.org-default.zone
scp .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
- name: cmdeploy dns
run: cmdeploy dns -v

View File

@@ -85,13 +85,13 @@ def mockout():
captured_green = [] captured_green = []
captured_plain = [] captured_plain = []
def red(self, msg, **kw): def red(self, msg):
self.captured_red.append(msg) self.captured_red.append(msg)
def green(self, msg, **kw): def green(self, msg):
self.captured_green.append(msg) self.captured_green.append(msg)
def print(self, msg="", **kw): def __call__(self, msg):
self.captured_plain.append(msg) self.captured_plain.append(msg)
return MockOut() return MockOut()

View File

@@ -1,7 +1,6 @@
import importlib.resources import importlib.resources
import io import io
import os import os
from contextlib import contextmanager
from pyinfra.operations import files, server, systemd from pyinfra.operations import files, server, systemd
@@ -11,28 +10,6 @@ def has_systemd():
return os.path.isdir("/run/systemd/system") return os.path.isdir("/run/systemd/system")
@contextmanager
def blocked_service_startup():
"""Prevent services from auto-starting during package installation.
Installs a ``/usr/sbin/policy-rc.d`` that exits 101, blocking any
service from being started by the package manager. This avoids bind
conflicts and CPU/RAM spikes during initial setup. The file is removed
when the context exits.
"""
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
yield
files.file("/usr/sbin/policy-rc.d", present=False)
def get_resource(arg, pkg=__package__): def get_resource(arg, pkg=__package__):
return importlib.resources.files(pkg).joinpath(arg) return importlib.resources.files(pkg).joinpath(arg)

View File

@@ -10,14 +10,17 @@ import pathlib
import shutil import shutil
import subprocess import subprocess
import sys import sys
import time
from contextlib import contextmanager
from pathlib import Path from pathlib import Path
import pyinfra import pyinfra
from chatmaild.config import read_config, write_initial_config from chatmaild.config import read_config, write_initial_config
from packaging import version from packaging import version
from termcolor import colored
from . import dns, remote from . import dns, remote
from .lxc.cli import ( from .lxc.cli import ( # noqa: F401
lxc_start_cmd, lxc_start_cmd,
lxc_start_cmd_options, lxc_start_cmd_options,
lxc_status_cmd, lxc_status_cmd,
@@ -27,14 +30,13 @@ from .lxc.cli import (
lxc_test_cmd, lxc_test_cmd,
lxc_test_cmd_options, lxc_test_cmd_options,
) )
from .lxc.incus import DNSConfigurationError
from .sshexec import ( from .sshexec import (
LocalExec, LocalExec,
SSHExec, SSHExec,
resolve_host_from_ssh_config, resolve_host_from_ssh_config,
resolve_key_from_ssh_config, resolve_key_from_ssh_config,
) )
from .util import Out from .util import build_chatmaild_sdist
from .www import main as webdev_main from .www import main as webdev_main
# #
@@ -121,11 +123,12 @@ def run_cmd(args, out):
env["CHATMAIL_WEBSITE_ONLY"] = "True" if args.website_only else "" env["CHATMAIL_WEBSITE_ONLY"] = "True" if args.website_only else ""
env["CHATMAIL_DISABLE_MAIL"] = "True" if args.disable_mail else "" env["CHATMAIL_DISABLE_MAIL"] = "True" if args.disable_mail else ""
env["CHATMAIL_REQUIRE_IROH"] = "True" if require_iroh else "" env["CHATMAIL_REQUIRE_IROH"] = "True" if require_iroh else ""
if not args.website_only:
build_chatmaild_sdist()
if not args.dns_check_disabled: if not args.dns_check_disabled:
env["CHATMAIL_ADDR_V4"] = remote_data.get("A") or "" env["CHATMAIL_ADDR_V4"] = remote_data.get("A") or ""
env["CHATMAIL_ADDR_V6"] = remote_data.get("AAAA") or "" env["CHATMAIL_ADDR_V6"] = remote_data.get("AAAA") or ""
env["DEBIAN_FRONTEND"] = "noninteractive"
env["TERM"] = "linux"
deploy_path = importlib.resources.files(__package__).joinpath("run.py").resolve() deploy_path = importlib.resources.files(__package__).joinpath("run.py").resolve()
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra" pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
@@ -153,10 +156,7 @@ def run_cmd(args, out):
return 1 return 1
try: try:
ret = out.shell(cmd, env=env) out.check_call(cmd, env=env)
if ret:
out.red("Deploy failed")
return 1
if args.website_only: if args.website_only:
out.green("Website deployment completed.") out.green("Website deployment completed.")
elif ( elif (
@@ -272,7 +272,7 @@ def test_cmd(args, out):
pytest_args.extend(["--ssh-host", args.ssh_host]) pytest_args.extend(["--ssh-host", args.ssh_host])
if args.ssh_config: if args.ssh_config:
pytest_args.extend(["--ssh-config", str(Path(args.ssh_config).resolve())]) pytest_args.extend(["--ssh-config", str(Path(args.ssh_config).resolve())])
ret = out.shell(" ".join(pytest_args), env=env) ret = out.run_ret(pytest_args, env=env)
return ret return ret
@@ -309,8 +309,8 @@ def fmt_cmd(args, out):
format_args.extend(sources) format_args.extend(sources)
check_args.extend(sources) check_args.extend(sources)
out.shell(" ".join(format_args), quiet=not args.verbose) out.check_call(" ".join(format_args), quiet=not args.verbose)
out.shell(" ".join(check_args), quiet=not args.verbose) out.check_call(" ".join(check_args), quiet=not args.verbose)
def bench_cmd(args, out): def bench_cmd(args, out):
@@ -331,6 +331,59 @@ def webdev_cmd(args, out):
# #
class Out:
"""Convenience output printer providing coloring and section formatting."""
SECTION_WIDTH = 72
def __init__(self):
self.section_timings = []
def red(self, msg, file=sys.stderr):
print(colored(msg, "red"), file=file, flush=True)
def green(self, msg, file=sys.stderr):
print(colored(msg, "green"), file=file, flush=True)
def print(self, msg="", **kwargs):
"""Print to stdout with automatic flush."""
print(msg, flush=True, **kwargs)
@contextmanager
def section(self, title):
"""Context manager that prints a section header and records elapsed time."""
bar = "\u2501" * (self.SECTION_WIDTH - len(title) - 5)
self.green(f"\u2501\u2501\u2501 {title} {bar}")
t0 = time.time()
yield
elapsed = time.time() - t0
self.section_timings.append((title, elapsed))
self.print(f"{'':>{self.SECTION_WIDTH - 10}}({elapsed:.1f}s)")
self.print()
def section_line(self, title):
"""Print a section header without timing."""
bar = "\u2501" * (self.SECTION_WIDTH - len(title) - 5)
self.green(f"\u2501\u2501\u2501 {title} {bar}")
self.print()
def __call__(self, msg, red=False, green=False, file=sys.stdout):
color = "red" if red else ("green" if green else None)
print(colored(msg, color), file=file, flush=True)
def check_call(self, arg, env=None, quiet=False):
if not quiet:
self(f"[$ {arg}]", file=sys.stderr)
return subprocess.check_call(arg, shell=True, env=env)
def run_ret(self, args, env=None, quiet=False):
if not quiet:
cmdstring = " ".join(args)
self(f"[$ {cmdstring}]", file=sys.stderr)
proc = subprocess.run(args, env=env, check=False)
return proc.returncode
def add_ssh_host_option(parser): def add_ssh_host_option(parser):
parser.add_argument( parser.add_argument(
"--ssh-host", "--ssh-host",
@@ -360,6 +413,15 @@ def add_config_option(parser):
help="path to the chatmail.ini file", help="path to the chatmail.ini file",
) )
parser.add_argument(
"--verbose",
"-v",
dest="verbose",
action="store_true",
default=False,
help="provide verbose logging",
)
def add_subcommand(subparsers, func, add_config=True): def add_subcommand(subparsers, func, add_config=True):
name = func.__name__ name = func.__name__
@@ -371,14 +433,6 @@ def add_subcommand(subparsers, func, add_config=True):
p.set_defaults(func=func) p.set_defaults(func=func)
if add_config: if add_config:
add_config_option(p) add_config_option(p)
p.add_argument(
"-v",
"--verbose",
dest="verbose",
action="count",
default=0,
help="increase verbosity (can be repeated: -v, -vv)",
)
return p return p
@@ -387,23 +441,6 @@ Setup your chatmail server configuration and
deploy it via SSH to your remote location. deploy it via SSH to your remote location.
""" """
# Explicit subcommand registry: (cmd_func, options_func_or_None, needs_config).
# LXC commands don't need a chatmail.ini (no config); all others do.
SUBCOMMANDS = [
(init_cmd, init_cmd_options, True),
(run_cmd, run_cmd_options, True),
(dns_cmd, dns_cmd_options, True),
(status_cmd, status_cmd_options, True),
(test_cmd, test_cmd_options, True),
(fmt_cmd, fmt_cmd_options, True),
(bench_cmd, None, True),
(webdev_cmd, None, True),
(lxc_start_cmd, lxc_start_cmd_options, False),
(lxc_stop_cmd, lxc_stop_cmd_options, False),
(lxc_status_cmd, lxc_status_cmd_options, False),
(lxc_test_cmd, lxc_test_cmd_options, False),
]
def get_parser(): def get_parser():
"""Return an ArgumentParser for the 'cmdeploy' CLI""" """Return an ArgumentParser for the 'cmdeploy' CLI"""
@@ -412,10 +449,15 @@ def get_parser():
parser.set_defaults(func=None, inipath=None) parser.set_defaults(func=None, inipath=None)
subparsers = parser.add_subparsers(title="subcommands") subparsers = parser.add_subparsers(title="subcommands")
for func, addopts, needs_config in SUBCOMMANDS: # find all subcommands in the module namespace
subparser = add_subcommand(subparsers, func, add_config=needs_config) glob = globals()
if addopts is not None: for name, func in glob.items():
addopts(subparser) if name.endswith("_cmd"):
needs_config = not name.startswith("lxc_")
subparser = add_subcommand(subparsers, func, add_config=needs_config)
addopts = glob.get(name + "_options")
if addopts is not None:
addopts(subparser)
return parser return parser
@@ -437,7 +479,7 @@ def main(args=None):
if args.func is None: if args.func is None:
return parser.parse_args(["-h"]) return parser.parse_args(["-h"])
out = Out(verbosity=args.verbose) out = Out()
kwargs = {} kwargs = {}
if args.inipath is not None and args.func.__name__ not in ("init_cmd", "fmt_cmd"): if args.inipath is not None and args.func.__name__ not in ("init_cmd", "fmt_cmd"):
@@ -455,9 +497,6 @@ def main(args=None):
if res is None: if res is None:
res = 0 res = 0
return res return res
except DNSConfigurationError as exc:
out.red(str(exc))
return 1
except KeyboardInterrupt: except KeyboardInterrupt:
out.red("KeyboardInterrupt") out.red("KeyboardInterrupt")
sys.exit(130) sys.exit(130)

View File

@@ -3,9 +3,6 @@ Chat Mail pyinfra deploy.
""" """
import os import os
import shutil
import subprocess
import sys
from io import BytesIO, StringIO from io import BytesIO, StringIO
from pathlib import Path from pathlib import Path
@@ -17,12 +14,14 @@ from pyinfra.facts.files import Sha256File
from pyinfra.facts.systemd import SystemdEnabled from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd from pyinfra.operations import apt, files, pip, server, systemd
from cmdeploy.cmdeploy import Out
from cmdeploy.util import get_chatmaild_sdist, get_version_string
from .acmetool import AcmetoolDeployer from .acmetool import AcmetoolDeployer
from .basedeploy import ( from .basedeploy import (
Deployer, Deployer,
Deployment, Deployment,
activate_remote_units, activate_remote_units,
blocked_service_startup,
configure_remote_units, configure_remote_units,
get_resource, get_resource,
has_systemd, has_systemd,
@@ -35,7 +34,6 @@ from .nginx.deployer import NginxDeployer
from .opendkim.deployer import OpendkimDeployer from .opendkim.deployer import OpendkimDeployer
from .postfix.deployer import PostfixDeployer from .postfix.deployer import PostfixDeployer
from .selfsigned.deployer import SelfSignedTlsDeployer from .selfsigned.deployer import SelfSignedTlsDeployer
from .util import Out, get_version_string
from .www import build_webpages, find_merge_conflict, get_paths from .www import build_webpages, find_merge_conflict, get_paths
@@ -54,20 +52,6 @@ class Port(FactBase):
return output[0] return output[0]
def _build_chatmaild(dist_dir) -> None:
dist_dir = Path(dist_dir).resolve()
if dist_dir.exists():
shutil.rmtree(dist_dir)
dist_dir.mkdir()
subprocess.check_output(
[sys.executable, "-m", "build", "-n"]
+ ["--sdist", "chatmaild", "--outdir", str(dist_dir)]
)
entries = list(dist_dir.iterdir())
assert len(entries) == 1
return entries[0]
def remove_legacy_artifacts(): def remove_legacy_artifacts():
if not has_systemd(): if not has_systemd():
return return
@@ -83,7 +67,7 @@ def remove_legacy_artifacts():
def _install_remote_venv_with_chatmaild() -> None: def _install_remote_venv_with_chatmaild() -> None:
remove_legacy_artifacts() remove_legacy_artifacts()
dist_file = _build_chatmaild(dist_dir=Path("chatmaild/dist")) dist_file = get_chatmaild_sdist()
remote_base_dir = "/usr/local/lib/chatmaild" remote_base_dir = "/usr/local/lib/chatmaild"
remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}" remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}"
remote_venv_dir = f"{remote_base_dir}/venv" remote_venv_dir = f"{remote_base_dir}/venv"
@@ -149,16 +133,33 @@ class UnboundDeployer(Deployer):
self.need_restart = False self.need_restart = False
def install(self): def install(self):
# Run local DNS resolver `unbound`. `resolvconf` takes care of # Run local DNS resolver `unbound`.
# setting up /etc/resolv.conf to use 127.0.0.1 as the resolver. # `resolvconf` takes care of setting up /etc/resolv.conf
# to use 127.0.0.1 as the resolver.
# On an IPv4-only system, if unbound is started but not configured, #
# it causes subsequent steps to fail to resolve hosts. # On an IPv4-only system, if unbound is started but not
with blocked_service_startup(): # configured, it causes subsequent steps to fail to resolve hosts.
apt.packages( # Here, we use policy-rc.d to prevent unbound from starting up
name="Install unbound", # on initial install. Later, we will configure it and start it.
packages=["unbound", "unbound-anchor", "dnsutils"], #
) # For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
#
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
)
files.file("/usr/sbin/policy-rc.d", present=False)
def configure(self): def configure(self):
server.shell( server.shell(
@@ -463,15 +464,14 @@ class ChatmailDeployer(Deployer):
         ("iroh", None, None),
     ]

-    def __init__(self, config):
-        self.config = config
-        self.mail_domain = config.mail_domain
+    def __init__(self, mail_domain):
+        self.mail_domain = mail_domain

     def install(self):
         files.put(
             name="Disable installing recommended packages globally",
-            src=BytesIO(b'APT::Install-Recommends "false";\n'),
-            dest="/etc/apt/apt.conf.d/00InstallRecommends",
+            src=BytesIO(b'APT::Install-Recommends "0";\n'),
+            dest="/etc/apt/apt.conf.d/99no-recommends",
             user="root",
             group="root",
             mode="644",
@@ -494,17 +494,6 @@ class ChatmailDeployer(Deployer):
         )

     def configure(self):
-        # Ensure the per-domain mailbox directory exists before
-        # chatmail-metadata starts (it crashes without it).
-        files.directory(
-            name="Ensure vmail mailbox directory exists",
-            path=f"/home/vmail/mail/{self.mail_domain}",
-            user="vmail",
-            group="vmail",
-            mode="700",
-            present=True,
-        )
         # This file is used by auth proxy.
         # https://wiki.debian.org/EtcMailName
         server.shell(
@@ -514,15 +503,6 @@ class ChatmailDeployer(Deployer):
             ],
         )
-
-        files.directory(
-            name=f"Ensure mailboxes directory {self.config.mailboxes_dir} exists",
-            path=str(self.config.mailboxes_dir),
-            user="vmail",
-            group="vmail",
-            mode="700",
-            present=True,
-        )


 class FcgiwrapDeployer(Deployer):
     def install(self):
@@ -641,7 +621,7 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
     tls_deployer = get_tls_deployer(config, mail_domain)
     all_deployers = [
-        ChatmailDeployer(config),
+        ChatmailDeployer(mail_domain),
         LegacyRemoveDeployer(),
         FiltermailDeployer(),
         JournaldDeployer(),

View File

@@ -91,19 +91,18 @@ def check_full_zone(sshexec, remote_data, out, zonefile) -> int:
     if required_diff:
         out.red("Please set required DNS entries at your DNS provider:\n")
         for line in required_diff:
-            out.print(line)
-        out.print()
+            out(line)
+        out("")
         returncode = 1
         if remote_data.get("dkim_entry") in required_diff:
-            out.print(
-                "If the DKIM entry above does not work with your DNS provider,"
-                " you can try this one:\n"
+            out(
+                "If the DKIM entry above does not work with your DNS provider, you can try this one:\n"
             )
-            out.print(remote_data.get("web_dkim_entry") + "\n")
+            out(remote_data.get("web_dkim_entry") + "\n")
     if recommended_diff:
-        out.print("WARNING: these recommended DNS entries are not set:\n")
+        out("WARNING: these recommended DNS entries are not set:\n")
         for line in recommended_diff:
-            out.print(line)
+            out(line)
     if not (recommended_diff or required_diff):
         out.green("Great! All your DNS entries are verified and correct.")

View File

@@ -1,15 +1,15 @@
+import os
 import urllib.request

 from chatmaild.config import Config
 from pyinfra import host
-from pyinfra.facts.server import Arch, Command, Sysctl
+from pyinfra.facts.server import Arch, Sysctl
 from pyinfra.facts.systemd import SystemdEnabled
 from pyinfra.operations import apt, files, server, systemd

 from cmdeploy.basedeploy import (
     Deployer,
     activate_remote_units,
-    blocked_service_startup,
     configure_remote_units,
     get_resource,
     has_systemd,
@@ -28,11 +28,9 @@ class DovecotDeployer(Deployer):
         arch = host.get_fact(Arch)
         if has_systemd() and "dovecot.service" in host.get_fact(SystemdEnabled):
             return  # already installed and running
-
-        with blocked_service_startup():
-            _install_dovecot_package("core", arch)
-            _install_dovecot_package("imapd", arch)
-            _install_dovecot_package("lmtpd", arch)
+        _install_dovecot_package("core", arch)
+        _install_dovecot_package("imapd", arch)
+        _install_dovecot_package("lmtpd", arch)

     def configure(self):
         configure_remote_units(self.config.mail_domain, self.units)
@@ -136,25 +134,19 @@ def _configure_dovecot(config: Config, debug: bool = False) -> (bool, bool):
     # as per https://doc.dovecot.org/2.3/configuration_manual/os/
     # it is recommended to set the following inotify limits
-    can_modify = host.get_fact(Command, "systemd-detect-virt -c || true") == "none"
-    for name in ("max_user_instances", "max_user_watches"):
-        key = f"fs.inotify.{name}"
-        value = host.get_fact(Sysctl)[key]
-        if value > 65534:
-            continue
-        if not can_modify:
-            print(
-                "\n!!!! refusing to attempt sysctl setting in shared-kernel containers\n"
-                f"!!!! dovecot: sysctl {key!r}={value}, should be >65534 for production setups\n"
-                "!!!!"
-            )
-            continue
-        server.sysctl(
-            name=f"Change {key}",
-            key=key,
-            value=65535,
-            persist=True,
-        )
+    if not os.environ.get("CHATMAIL_NOSYSCTL"):
+        for name in ("max_user_instances", "max_user_watches"):
+            key = f"fs.inotify.{name}"
+            if host.get_fact(Sysctl)[key] > 65535:
+                # Skip updating limits if already sufficient
+                # (enables running in incus containers where sysctl readonly)
+                continue
+            server.sysctl(
+                name=f"Change {key}",
+                key=key,
+                value=65535,
+                persist=True,
+            )

     timezone_env = files.line(
         name="Set TZ environment variable",
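The rewritten inotify loop reduces to a pure decision function. `wanted_sysctl_updates` is a hypothetical helper, not project code, mirroring the skip-if-sufficient logic of the new branch:

```python
def wanted_sysctl_updates(current: dict, floor: int = 65535) -> dict:
    """Return the fs.inotify keys that still need raising to *floor*."""
    updates = {}
    for name in ("max_user_instances", "max_user_watches"):
        key = f"fs.inotify.{name}"
        if current.get(key, 0) > floor:
            continue  # already sufficient (e.g. host-configured incus container)
        updates[key] = floor
    return updates
```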

View File

@@ -14,10 +14,10 @@ class FiltermailDeployer(Deployer):
     def install(self):
         arch = host.get_fact(facts.server.Arch)
-        url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.0/filtermail-{arch}"
+        url = f"https://github.com/chatmail/filtermail/releases/download/v0.5.2/filtermail-{arch}"
         sha256sum = {
-            "x86_64": "3fd8b18282252c75a5bbfa603d8c1b65f6563e5e920bddf3e64e451b7cdb43ce",
-            "aarch64": "2bd191de205f7fd60158dd8e3516ab7e3efb14627696f3d7dc186bdcd9e10a43",
+            "x86_64": "ce24ca0075aa445510291d775fb3aea8f4411818c7b885ae51a0fe18c5f789ce",
+            "aarch64": "c5d783eefa5332db3d97a0e6a23917d72849e3eb45da3d16ce908a9b4e5a797d",
         }[arch]
         self.need_restart |= files.download(
             name="Download filtermail",
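Pinning a release binary by sha256, as `files.download` does on the remote host, comes down to a digest comparison. `verify_sha256` is an illustrative local equivalent, not part of the deploy code:

```python
import hashlib

def verify_sha256(data: bytes, expected: str) -> bool:
    """Check downloaded bytes against a pinned hex sha256 digest."""
    return hashlib.sha256(data).hexdigest() == expected
```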

View File

@@ -1,10 +1,17 @@
 """lxc-start/stop/status/test subcommands for testing with local containers."""

 import os
+import subprocess
+import threading
 import time

-from ..util import get_git_hash, get_version_string, shell
-from .incus import RELAY_IMAGE_ALIAS, Incus, RelayContainer
+from ..util import (
+    collapse,
+    get_git_hash,
+    get_version_string,
+    shell,
+)
+from .incus import Incus, RelayContainer

 RELAY_NAMES = ("test0", "test1")
@@ -34,20 +41,14 @@ def lxc_start_cmd_options(parser):

 def lxc_start_cmd(args, out):
     """Create/Ensure and start LXC relay and DNS containers."""
-    with out.section("Preparing container setup"):
-        _lxc_start_cmd(args, out)
-
-
-def _lxc_start_cmd(args, out):
-    ix = Incus(out)
-    sub = out.new_prefixed_out()
-    out.green("Ensuring base image ...")
-    ix.ensure_base_image()
+    ix = Incus()

     out.green("Ensuring DNS container (ns-localchat) ...")
     dns_ct = ix.get_dns_container()
     dns_ct.ensure()
-    sub.print(f"DNS container IP: {dns_ct.ipv4}")
+    if not ix.find_dns_image():
+        with out.section("LXC: publishing DNS image"):
+            dns_ct.publish_as_dns_image()
+    out.print(f" DNS container IP: {dns_ct.ipv4}")

     names = args.names if args.names else RELAY_NAMES
     relays = list(ix.get_container(n) for n in names)
@@ -56,12 +57,12 @@ def _lxc_start_cmd(args, out):
         ct.ensure()
         ip = ct.ipv4
-        sub.print("Configuring container hostname ...")
+        out.print(" Configuring container hostname ...")
         ct.configure_hosts(ip)
-        sub.print(f"Writing {ct.ini.name} ...")
+        out.print(f" Writing {ct.ini.name} ...")
         ct.write_ini(disable_ipv6=args.ipv4_only)
-        sub.print(f"Config: {ct.ini}")
+        out.print(f" Config: {ct.ini}")
         if args.ipv4_only:
             ct.disable_ipv6()
             ipv6 = None
@@ -72,9 +73,9 @@ def _lxc_start_cmd(args, out):
                 check=False,
             )
             ipv6 = output.strip() if output else None
-        sub.print(f"{_format_addrs(ip, ipv6)}")
-        sub.green(f"Container {ct.name!r} ready: {ct.domain} -> {ip}")
+        out.print(f" {_format_addrs(ip, ipv6)}")
+        out.green(f" Container {ct.name!r} ready: {ct.domain} -> {ip}")
         out.print()

     # Reset DNS zones only for the containers we just started
@@ -84,37 +85,44 @@ def _lxc_start_cmd(args, out):
     if started:
         out.print(
-            f"Resetting DNS zones for {len(started)} domain(s) (A + AAAA records) ..."
+            f"Resetting DNS zones for {len(started)}"
+            " domain(s) (A + AAAA records) ..."
         )
         dns_ct.reset_dns_records(dns_ct.ipv4, started)
         for ct in relays:
             if ct.name in started_cnames:
-                sub.print(f"Configuring DNS in {ct.name} ...")
+                out.print(f" Configuring and testing DNS in {ct.name} ...")
                 ct.configure_dns(dns_ct.ipv4)
+                if not ct.check_dns():
+                    out.red(
+                        f" DNS check failed for {ct.name}"
+                        ": cannot resolve external hosts"
+                    )
+                    return 1

     # Generate the unified SSH config
     out.green("Writing ssh-config ...")
     ssh_cfg = ix.write_ssh_config()
-    sub.print(f"{ssh_cfg}")
+    out.print(f" {ssh_cfg}")

     # Verify SSH via the generated config
     for ct in relays:
-        sub.print(f"Verifying SSH to {ct.name} via ssh-config ...")
+        out.print(f" Verifying SSH to {ct.name} via ssh-config ...")
         if ct.verify_ssh(ssh_cfg):
-            sub.print(f"SSH OK: ssh -F lxconfigs/ssh-config {ct.domain}")
+            out.print(f" SSH OK: ssh -F lxconfigs/ssh-config {ct.domain}")
         else:
-            sub.red(f"WARNING: SSH verification failed for {ct.name}")
+            out.red(f" WARNING: SSH verification failed for {ct.name}")

     # Print integration suggestions
     ssh_cfg = ix.ssh_config_path
     if not ix.check_ssh_include():
-        sub.green(
-            "\n(Optional) To use containers from any SSH client, add to ~/.ssh/config:"
+        out.green(
+            "\n (Optional) To use containers from any SSH client, add to ~/.ssh/config:"
         )
-        sub.green(f" Include {ssh_cfg}")
+        out.green(f" Include {ssh_cfg}")

-    # Optionally run cmdeploy run + dns on each relay
+    # Optionally run cmdeploy run on each relay
     if args.run:
         for ct in relays:
             with out.section(f"cmdeploy run: {ct.sname} ({ct.domain})"):
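The new `ct.check_dns()` call above is, per the commit message, a retry around `getent hosts pypi.org` inside the container. The retry-until-zero-exit pattern it relies on can be sketched generically; `retry_shell` is a hedged standalone sketch, not the actual `RelayContainer` method:

```python
import subprocess
import time

def retry_shell(cmd: str, attempts: int = 5, delay: float = 2.0) -> bool:
    """Return True as soon as *cmd* exits 0, retrying up to *attempts* times."""
    for i in range(attempts):
        result = subprocess.run(cmd, shell=True, capture_output=True)
        if result.returncode == 0:
            return True
        if i + 1 < attempts:
            time.sleep(delay)  # give the resolver time to come up
    return False
```

In the real code the command would be executed inside the container (e.g. `retry_shell("incus exec test0 -- getent hosts pypi.org")`, names illustrative).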
@@ -123,20 +131,6 @@ def _lxc_start_cmd(args, out):
                 out.red(f"Deploy to {ct.sname} failed (exit {ret})")
                 return ret
-
-    with out.section("loading DNS zones"):
-        for ct in relays:
-            ret = _run_cmdeploy(
-                "dns", ct, ix, out,
-                extra=["--zonefile", str(ct.zone)],
-            )
-            if ret:
-                out.red(f"DNS for {ct.sname} failed (exit {ret})")
-                return ret
-            if ct.zone.exists():
-                dns_ct.set_dns_records(ct.zone.read_text())
-            out.print(f"Restarting filtermail-incoming on {ct.name}")
-            ct.bash("systemctl restart filtermail-incoming")


 # -------------------------------------------------------------------
 # lxc-stop
@@ -163,7 +157,7 @@ def lxc_stop_cmd_options(parser):

 def lxc_stop_cmd(args, out):
     """Stop (and optionally destroy) local LXC relay containers."""
-    ix = Incus(out)
+    ix = Incus()
     names = args.names or RELAY_NAMES
     destroy = args.destroy or args.destroy_all
@@ -171,6 +165,9 @@ def lxc_stop_cmd(args, out):
         if destroy:
             out.green(f"Destroying container {ct.name!r} ...")
             ct.destroy()
+            if hasattr(ct, "image_alias"):
+                out.green(f" Deleting cached image {ct.image_alias!r} ...")
+                ix.run(["image", "delete", ct.image_alias], check=False)
         else:
             out.green(f"Stopping container {ct.name!r} ...")
             ct.stop(force=True)
@@ -207,7 +204,7 @@ def lxc_test_cmd(args, out):
     All commands run directly on the host using
     ``--ssh-config lxconfigs/ssh-config`` for SSH access.
     """
-    ix = Incus(out)
+    ix = Incus()
     t_total = time.time()
     relay_names = list(RELAY_NAMES)
     if args.one:
@@ -215,36 +212,48 @@ def lxc_test_cmd(args, out):
     local_hash = get_git_hash()

-    # Per-relay: start, deploy, then snapshot the first relay as a
-    # reusable image so the second relay launches pre-deployed.
+    # Per-relay: start containers, then deploy in parallel.
     ipv4_only_flags = {RELAY_NAMES[0]: False, RELAY_NAMES[1]: True}
+
+    # Phase 1: start all containers (sequential, fast)
     for ct in map(ix.get_container, relay_names):
         name = ct.sname
         ipv4_only = ipv4_only_flags.get(name, False)
-        v_flag = " -" + "v" * out.verbosity if out.verbosity > 0 else ""
-        start_cmd = f"cmdeploy lxc-start{v_flag} {name}"
-        if ipv4_only:
-            start_cmd += " --ipv4-only"
-        with out.section(f"cmdeploy lxc-start: {name}"):
-            ret = out.shell(start_cmd, cwd=str(ix.project_root))
+        label = "IPv4-only" if ipv4_only else "dual-stack"
+        with out.section(f"LXC: lxc-start {name} ({label})"):
+            args.names = [name]
+            args.ipv4_only = ipv4_only
+            args.run = False
+            ret = lxc_start_cmd(args, out)
             if ret:
                 return ret

-        status = _deploy_status(ct, local_hash, ix)
-        with out.section(f"cmdeploy run: {name}"):
-            if "IN-SYNC" in status:
-                out.print(f"{name} is {status}, skipping")
-            else:
-                ret = _run_cmdeploy("run", ct, ix, out, extra=["--skip-dns-check"])
-                if ret:
-                    out.red(f"Deploy to {name} failed (exit {ret})")
-                    return ret
-
-        # Snapshot the first relay so subsequent ones launch pre-deployed
-        if not ix.find_image([RELAY_IMAGE_ALIAS]):
-            with out.section("lxc-test: caching relay image"):
-                ct.publish_as_relay_image()
+    # Phase 2: deploy all relays in parallel
+    to_deploy = []
+    for ct in map(ix.get_container, relay_names):
+        status = _deploy_status(ct, local_hash, ix)
+        if "IN-SYNC" in status:
+            out.section_line(f"cmdeploy run: {ct.sname}: {status}, skipping")
+        else:
+            to_deploy.append(ct)
+
+    if to_deploy:
+        with out.section("cmdeploy run (parallel)"):
+            ret = _run_cmdeploy_parallel(
+                "run", to_deploy, ix, out, extra=["--skip-dns-check"]
+            )
+            if ret:
+                return ret
+
+    # Phase 3: publish images (sequential, fast)
+    for ct in map(ix.get_container, relay_names):
+        if ct.publish_image():
+            out.section_line(f"LXC: published {ct.sname} image")
+        else:
+            out.section_line(
+                f"LXC: publish {ct.sname} image: skipped, cached",
+            )

     for ct in map(ix.get_container, relay_names):
         with out.section(f"cmdeploy dns: {ct.sname} ({ct.domain})"):
@@ -253,31 +262,31 @@ def lxc_test_cmd(args, out):
             out.red(f"DNS for {ct.sname} failed (exit {ret})")
             return ret

-    with out.section(f"lxc-test: loading DNS zones {' & '.join(relay_names)}"):
+    with out.section("LXC: PowerDNS zone update"):
         dns_ct = ix.get_dns_container()
         for ct in map(ix.get_container, relay_names):
             if ct.zone.exists():
                 zone_data = ct.zone.read_text()
-                out.print(f"Loading {ct.zone} into PowerDNS ...")
+                out.print(f" Loading {ct.zone} into PowerDNS ...")
                 dns_ct.set_dns_records(zone_data)

-        # Restart filtermail so its in-process DNS cache
-        # does not hold stale negative DKIM responses
-        # from before the zones were loaded.
-        for ct in map(ix.get_container, relay_names):
-            out.print(f"Restarting filtermail-incoming on {ct.name} ...")
-            ct.bash("systemctl restart filtermail-incoming")
-
-    with out.section("cmdeploy test"):
-        first = ix.get_container(relay_names[0])
-        env = None
-        if len(relay_names) > 1:
-            env = os.environ.copy()
-            env["CHATMAIL_DOMAIN2"] = ix.get_container(relay_names[1]).domain
-        ret = _run_cmdeploy("test", first, ix, out, **({"env": env} if env else {}))
-        if ret:
-            out.red(f"Tests failed (exit {ret})")
-            return ret
+    # Run tests in both directions when two relays are available.
+    test_pairs = [(0, 1), (1, 0)] if len(relay_names) > 1 else [(0,)]
+    for pair in test_pairs:
+        first = ix.get_container(relay_names[pair[0]])
+        label = first.sname
+        env = None
+        if len(pair) > 1:
+            second = ix.get_container(relay_names[pair[1]])
+            label = f"{first.sname} \u2194 {second.sname}"
+            env = os.environ.copy()
+            env["CHATMAIL_DOMAIN2"] = second.domain
+
+        with out.section(f"cmdeploy test: {label}"):
+            ret = _run_cmdeploy("test", first, ix, out, **({"env": env} if env else {}))
+            if ret:
+                out.red(f"Tests failed (exit {ret})")
+                return ret

     elapsed = time.time() - t_total
     out.section_line(f"lxc-test complete ({elapsed:.1f}s)")
@@ -301,7 +310,7 @@ def lxc_status_cmd_options(parser):

 def lxc_status_cmd(args, out):
     """Show status of local LXC chatmail containers."""
-    ix = Incus(out)
+    ix = Incus()
     containers = ix.list_managed()
     if not containers:
         out.red("No LXC containers found. Run 'cmdeploy lxc-start' first.")
@@ -314,10 +323,10 @@ def lxc_status_cmd(args, out):
     data = ix.run_json(["storage", "show", "default"], check=False)
     if data:
         storage_path = data.get("config", {}).get("source")
-    msg = "Container status"
     if storage_path:
-        msg += f": {storage_path}"
-    out.section_line(msg)
+        out.green(f"Containers: ({storage_path})")
+    else:
+        out.green("Containers:")

     dns_ip = None
     for c in containers:
@@ -325,7 +334,6 @@ def lxc_status_cmd(args, out):
         if c["name"] == ix.get_dns_container().name:
             dns_ip = c["ip"]

-    out.section_line("Host ssh and DNS configuration")
     _print_ssh_status(out, ix)
     _print_dns_forwarding_status(out, dns_ip)
     return 0
@@ -344,16 +352,16 @@ def _print_container_status(out, c, ix, local_hash):
         tag = "running"
     else:
         tag = f"running {_deploy_status(ct, local_hash, ix)}"
-    out.print(f"{cname:20s} {tag}")
+    out.print(f" {cname:20s} {tag}")

     # Second line: domain, IPv4, IPv6
     domain = c.get("domain", "")
     ip = c.get("ip") or "?"
     ipv6 = c.get("ipv6")
-    out.print(f"{domain:20s} {_format_addrs(ip, ipv6)}")
+    out.print(f" {domain:20s} {_format_addrs(ip, ipv6)}")

     # Third line: RAM (RSS), config
-    detail_out = out.new_prefixed_out(" " * 21)
+    indent = " " * 21
     try:
         used, total = ct.rss_mib()
     except Exception:
@@ -366,42 +374,41 @@ def _print_container_status(out, c, ix, local_hash):
     else:
         detail = ram_str
-    detail_out.print(detail)
+    out.print(f" {indent}{detail}")
     out.print()


 def _print_ssh_status(out, ix):
     """Print SSH integration status."""
-    out.print()
     ssh_cfg = ix.ssh_config_path
     if ix.check_ssh_include():
         out.green("SSH: ~/.ssh/config includes lxconfigs/ssh-config ✓")
     else:
         out.red("SSH: ~/.ssh/config does NOT include lxconfigs/ssh-config")
-        sub = out.new_prefixed_out()
-        sub.print("Add to ~/.ssh/config:")
-        sub.print(f" Include {ssh_cfg}")
+        out.print(" Add to ~/.ssh/config:")
+        out.print(f" Include {ssh_cfg}")


 def _print_dns_forwarding_status(out, dns_ip):
     """Print host DNS forwarding status for .localchat."""
-    sub = out.new_prefixed_out()
     if not dns_ip:
         out.red("DNS: ns-localchat container not found")
         return
     try:
-        rv = shell("resolvectl status incusbr0")
+        rv = shell("resolvectl status incusbr0", timeout=5)
         dns_ok = dns_ip in rv.stdout and "localchat" in rv.stdout
-    except Exception:
+    except (FileNotFoundError, subprocess.TimeoutExpired, OSError):
         dns_ok = None
     if dns_ok is True:
         out.green(f"DNS: .localchat forwarding to {dns_ip}")
     elif dns_ok is False:
         out.red("DNS: .localchat forwarding NOT configured")
-        sub.print("Run:")
-        sub.print(f" sudo resolvectl dns incusbr0 {dns_ip}")
-        sub.print(" sudo resolvectl domain incusbr0 ~localchat")
+        out.print(" Run:")
+        out.print(f" sudo resolvectl dns incusbr0 {dns_ip}")
+        out.print(" sudo resolvectl domain incusbr0 ~localchat")
     else:
-        sub.print("DNS: .localchat forwarding status UNKNOWN")
+        out.print(" DNS: .localchat forwarding status UNKNOWN")


 # -------------------------------------------------------------------
@@ -427,7 +434,7 @@ def _deploy_status(ct, local_hash, ix):
         return "NOT DEPLOYED"

     # A container launched from the relay image has the same
-    # git hash but a different domain always redeploy.
+    # git hash but a different domain - always redeploy.
     deployed_domain = ct.deployed_domain()
     if deployed_domain and deployed_domain != ct.domain:
         return f"DOMAIN-MISMATCH (deployed: {deployed_domain})"
@@ -443,7 +450,7 @@ def _deploy_status(ct, local_hash, ix):
     if deployed_hash != local_hash:
         return f"STALE (deployed: {short}, local: {local_short})"

-    # Hash matches check for uncommitted diffs
+    # Hash matches - check for uncommitted diffs
     local_version = get_version_string()
     if deployed != local_version:
         return f"DIRTY ({local_short}, undeployed changes)"
@@ -451,8 +458,26 @@ def _deploy_status(ct, local_hash, ix):
     return f"IN-SYNC ({short})"


-def _add_name_args(parser, help_text):
-    parser.add_argument("names", nargs="*", metavar="NAME", help=help_text)
+def _add_name_args(parser, help_text=None):
+    """Add optional positional NAME arguments."""
+    parser.add_argument(
+        "names",
+        nargs="*",
+        metavar="NAME",
+        help=help_text or "Relay name(s) to operate on.",
+    )
+
+
+def _build_cmdeploy_cmd(subcmd, ct, ix, extra=None):
+    """Build the ``cmdeploy <subcmd>`` command string."""
+    extra_str = " ".join(extra) if extra else ""
+    return collapse(f"""\
+        cmdeploy {subcmd}
+        --config {ct.ini}
+        --ssh-config {ix.ssh_config_path}
+        --ssh-host {ct.domain}
+        {extra_str}
+    """)


 def _run_cmdeploy(subcmd, ct, ix, out, extra=None, **kwargs):
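`_build_cmdeploy_cmd` leans on a `collapse` helper imported from `..util`, whose body is not shown in this diff. A plausible sketch (an assumption, not the project's actual implementation) folds the indented multi-line template into a single shell command line:

```python
def collapse(text: str) -> str:
    """Join a multi-line command template into one space-separated line."""
    parts = (line.strip() for line in text.splitlines())
    return " ".join(part for part in parts if part)  # drop blank lines
```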
@@ -461,15 +486,73 @@ def _run_cmdeploy(subcmd, ct, ix, out, extra=None, **kwargs):

     *ct* is a Container (uses ``ct.ini`` and ``ct.domain``).
     Returns the subprocess exit code.
     """
-    extra_str = " ".join(extra) if extra else ""
-    v_flag = " -" + "v" * out.verbosity if out.verbosity > 0 else ""
-    cmd = f"""
-        cmdeploy {subcmd}{v_flag}
-        --config {ct.ini}
-        --ssh-config {ix.ssh_config_path}
-        --ssh-host {ct.domain}
-        {extra_str}
-    """
+    cmd = _build_cmdeploy_cmd(subcmd, ct, ix, extra=extra)
     if "cwd" not in kwargs:
         kwargs["cwd"] = str(ix.project_root)
-    return out.shell(cmd, **kwargs)
+    out.print(f" [$ {cmd}]")
+    return shell(cmd, capture_output=False, **kwargs).returncode
+
+
+# Number of tail lines to print on failure.
+_FAIL_CONTEXT_LINES = 40
+
+
+def _run_cmdeploy_parallel(subcmd, containers, ix, out, extra=None):
+    """Run ``cmdeploy <subcmd>`` for every container in parallel.
+
+    Output is captured and filtered: only lines containing
+    ``"Starting operation"`` are printed (prefixed with the relay
+    short-name). On failure the last *_FAIL_CONTEXT_LINES*
+    lines of that process's output are shown.
+    """
+    procs = []  # list of (container, Popen, collected_lines)
+    cwd = str(ix.project_root)
+    for ct in containers:
+        cmd = _build_cmdeploy_cmd(subcmd, ct, ix, extra=extra)
+        out.print(f" [{ct.sname}] $ {cmd}")
+        proc = subprocess.Popen(
+            cmd,
+            shell=True,
+            text=True,
+            stdout=subprocess.PIPE,
+            stderr=subprocess.STDOUT,
+            cwd=cwd,
+        )
+        procs.append((ct, proc, []))
+
+    def _reader(ct, proc, lines):
+        prefix = f" [{ct.sname}]"
+        for raw in proc.stdout:
+            line = raw.rstrip("\n")
+            lines.append(line)
+            if "Starting operation" in line:
+                out.print(f"{prefix} {line}")
+
+    threads = []
+    for ct, proc, lines in procs:
+        t = threading.Thread(
+            target=_reader,
+            args=(ct, proc, lines),
+            daemon=True,
+        )
+        t.start()
+        threads.append(t)
+    for t in threads:
+        t.join()
+    for _, proc, _ in procs:
+        proc.wait()
+
+    # Check results
+    first_failure = 0
+    for ct, proc, lines in procs:
+        if proc.returncode:
+            out.red(f"Deploy to {ct.sname} failed (exit {proc.returncode})")
+            tail = lines[-_FAIL_CONTEXT_LINES:]
+            for tl in tail:
+                out.print(f" [{ct.sname}] {tl}")
+            if not first_failure:
+                first_failure = proc.returncode
+    return first_failure
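The capture-per-process pattern `_run_cmdeploy_parallel` uses (one reader thread per `Popen`, lines collected so a failure tail can be printed later) can be reduced to a minimal standalone sketch. `run_parallel` is illustrative, not the function above:

```python
import subprocess
import threading

def run_parallel(cmds):
    """Run shell commands concurrently; return each one's captured lines."""
    results, threads = [], []

    def worker(cmd, lines):
        proc = subprocess.Popen(
            cmd, shell=True, text=True,
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        )
        for raw in proc.stdout:  # stream line-by-line while the process runs
            lines.append(raw.rstrip("\n"))
        proc.wait()
        lines.append(f"<exit {proc.returncode}>")

    for cmd in cmds:
        lines = []
        results.append(lines)
        t = threading.Thread(target=worker, args=(cmd, lines))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return results
```

Reading `proc.stdout` in a thread per process avoids deadlocking on full pipe buffers, which is why the real code does the same.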

View File

@@ -14,14 +14,18 @@ DOMAIN_SUFFIX = ".localchat"
 UPSTREAM_IMAGE = "images:debian/12"
 BASE_IMAGE_ALIAS = "localchat-base"
 BASE_SETUP_NAME = "localchat-base-setup"
-RELAY_IMAGE_ALIAS = "localchat-relay"
+DNS_IMAGE_ALIAS = "localchat-ns"
 DNS_CONTAINER_NAME = "ns-localchat"
 DNS_DOMAIN = "ns.localchat"
-
-
-class DNSConfigurationError(Exception):
-    """Raised when the DNS container is not reachable or not answering."""
+BRIDGE_IPV4 = "10.200.200.1/24"
+DNS_IP = "10.200.200.2"
+RELAY_IPS = {
+    "test0": "10.200.200.10",
+    "test1": "10.200.200.11",
+    "test2": "10.200.200.12",
+}


 def _extract_ip(net_data, family="inet"):
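Per the commit message, container launch is split into `incus init`, a device override that pins the static IP from `RELAY_IPS`, and `incus start`. A hedged sketch of that command sequence (the exact device name and keys are assumptions, not taken from this diff):

```python
def launch_commands(name, image, ip, bridge="incusbr0"):
    """Sketch the init -> device override -> start sequence for a static IP."""
    return [
        ["incus", "init", image, name],
        # Override the NIC before first boot so the address is deterministic.
        ["incus", "config", "device", "override", name, "eth0",
         f"ipv4.address={ip}", f"network={bridge}"],
        ["incus", "start", name],
    ]
```

Each inner list could then be passed to `Incus.run()`; splitting init from start is what lets the override land before the container ever boots.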
@@ -47,8 +51,7 @@ class Incus:
     all modules share a single entry point for Incus interactions.
     """

-    def __init__(self, out):
-        self.out = out
+    def __init__(self):
         self.project_root = Path(__file__).resolve().parent.parent.parent.parent.parent
         self.lxconfigs_dir = self.project_root / "lxconfigs"
         self.lxconfigs_dir.mkdir(exist_ok=True)
@@ -69,7 +72,7 @@ class Incus:
         """
         containers = self.list_managed()
         key_path = self.ssh_key_path
-        lines = ["# Auto-generated by cmdeploy lxc-start do not edit\n"]
+        lines = ["# Auto-generated by cmdeploy lxc-start - do not edit\n"]
         for c in containers:
             hosts = [c["name"]]
             domain = c.get("domain", "")
@@ -95,81 +98,19 @@ class Incus:
         user_ssh_config = Path.home() / ".ssh" / "config"
         if not user_ssh_config.exists():
             return False
-        lines = user_ssh_config.read_text().splitlines()
-        target = f"include {self.ssh_config_path}".lower()
-        return any(line.strip().lower() == target for line in lines)
-
-    def get_host_nameservers(self):
-        """Return upstream nameservers found on the host."""
-        ns = []
-        for path in ["/run/systemd/resolve/resolv.conf", "/etc/resolv.conf"]:
-            p = Path(path)
-            if p.exists():
-                for line in p.read_text().splitlines():
-                    if line.strip().startswith("nameserver "):
-                        addr = line.split()[1]
-                        if addr not in ("127.0.0.1", "127.0.0.53", "::1"):
-                            if addr not in ns:
-                                ns.append(addr)
-                if ns:
-                    break
-        return ns
+        lines = filter(None, map(str.strip, user_ssh_config.open("r")))
+        return f"Include {self.ssh_config_path}" in lines
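The rewritten `check_ssh_include` does an exact membership test over stripped lines; note that the removed version lower-cased both sides, so the new check is case-sensitive. A standalone equivalent for comparison (`has_include` is an illustrative name):

```python
def has_include(config_text: str, include_path: str) -> bool:
    """Exact, case-sensitive check for an `Include <path>` line."""
    lines = filter(None, map(str.strip, config_text.splitlines()))
    return f"Include {include_path}" in lines
```

Since OpenSSH keywords are case-insensitive, a lower-case `include` line would now be missed; whether that trade-off is intended is worth double-checking in review.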
def run(self, args, check=True, capture=True, input=None): def run(self, args, check=True, capture=True, input=None):
"""Run an incus command. """Run an incus command."""
cmd = ["incus"] + list(args)
When *capture* is True and *verbosity* >= 1, output is streamed kwargs = dict(check=check, text=True, input=input)
to the terminal line-by-line while also being captured for if capture:
later return via result.stdout. kwargs["capture_output"] = True
""" else:
cmd = ["incus", "--quiet"] + list(args) kwargs["stdout"] = None
sub = self.out.new_prefixed_out(" ") kwargs["stderr"] = None
return subprocess.run(cmd, **kwargs) # noqa: PLW1510
if not capture:
# Simple case: let subprocess handle streams (no capture)
if self.out.verbosity >= 1:
sub.print(f"$ {' '.join(cmd)}")
return subprocess.run(
cmd, text=True, input=input, check=check, stdout=None, stderr=None
)
# Capture case: we may need to stream while capturing
if sub.verbosity >= 1:
cmd_lines = " ".join(cmd).splitlines()
sub.print(f"$ {cmd_lines.pop(0)}")
if sub.verbosity >= 2:
for line in cmd_lines:
sub.print(f" {line}")
proc = subprocess.Popen(
cmd,
text=True,
stdin=subprocess.PIPE if input else subprocess.DEVNULL,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
stdout_lines = []
if input:
proc.stdin.write(input)
proc.stdin.close()
for line in proc.stdout:
stdout_lines.append(line)
if sub.verbosity >= 2:
sub.print(f" > {line.rstrip()}")
stderr = proc.stderr.read()
ret = proc.wait()
stdout = "".join(stdout_lines)
if check and ret != 0:
full_output = stdout + stderr
for line in full_output.splitlines():
if sub.verbosity < 1: # and we haven't printed it yet
sub.red(line)
raise subprocess.CalledProcessError(ret, cmd, output=stdout, stderr=stderr)
return subprocess.CompletedProcess(cmd, ret, stdout=stdout, stderr=stderr)
     def run_json(self, args, check=True):
         """Run an incus command with ``--format=json``.
@@ -197,19 +138,25 @@ class Incus:
             return None
         return result.stdout.strip()

-    def find_image(self, aliases):
-        """Return the first alias from *aliases* that exists, else None."""
+    def _find_image(self, alias):
+        """Return *alias* if an image with that alias exists, else None."""
         images = self.run_json(["image", "list"], check=False) or []
-        existing = {a.get("name") for img in images for a in img.get("aliases", [])}
-        for alias in aliases:
-            if alias in existing:
-                return alias
+        for img in images:
+            for a in img.get("aliases", []):
+                if a.get("name") == alias:
+                    return alias
         return None

+    def find_dns_image(self):
+        """Return the DNS image alias if it exists, else None."""
+        return self._find_image(DNS_IMAGE_ALIAS)
+
     def delete_images(self):
-        """Delete the cached base and relay images."""
-        for alias in (RELAY_IMAGE_ALIAS, BASE_IMAGE_ALIAS):
-            self.run(["image", "delete", alias], check=False)  # ok if absent
+        """Delete all cached localchat images."""
+        for alias in (DNS_IMAGE_ALIAS, BASE_IMAGE_ALIAS):
+            self.run(["image", "delete", alias], check=False)
+        for name in RELAY_IPS:
+            self.run(["image", "delete", f"localchat-{name}"], check=False)
     def list_managed(self):
         """Return list of dicts with name, ip, ipv6, domain, status, memory_usage."""
@@ -244,25 +191,32 @@ class Incus:
         slow apt-get install step.

         Returns the image alias.
         """
-        if self.find_image([BASE_IMAGE_ALIAS]):
-            self.out.print(f"  Base image '{BASE_IMAGE_ALIAS}' already cached.")
+        if self._find_image(BASE_IMAGE_ALIAS):
             return BASE_IMAGE_ALIAS
-        self.out.print("  Building base image (one-time setup) ...")
+        print("  Building base image (one-time setup) ...")
         self.run(["delete", BASE_SETUP_NAME, "--force"], check=False)
         self.run(["image", "delete", BASE_IMAGE_ALIAS], check=False)
-        self.run(["launch", UPSTREAM_IMAGE, BASE_SETUP_NAME])
-        ct = Container(self, BASE_SETUP_NAME)
+        self.run(
+            ["launch", UPSTREAM_IMAGE, BASE_SETUP_NAME, "-c", "limits.memory=512MiB"]
+        )
+        ct = Container(self, BASE_SETUP_NAME, memory="512MiB")
         ct.wait_ready()
         key_path = self.ssh_key_path
         pub_key = key_path.with_suffix(".pub").read_text().strip()
-        host_ns = self.get_host_nameservers()
-        ns_lines = "\n".join(f"nameserver {n}" for n in host_ns)
-        ct.bash(f"""
-            printf '{ns_lines}\n' > /etc/resolv.conf
+        print("  ── apt-get install (base image) ──")
+        ct.bash(
+            f"""\
+            systemctl disable --now systemd-resolved 2>/dev/null || true
+            rm -f /etc/resolv.conf
+            echo 'nameserver 9.9.9.9' > /etc/resolv.conf
+            while fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
+                echo "Waiting for other apt-get instance to finish..."
+                sleep 5
+            done
             apt-get -o DPkg::Lock::Timeout=60 update
             DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server python3
             systemctl enable ssh
@@ -271,14 +225,39 @@ class Incus:
             chmod 700 /root/.ssh
             echo '{pub_key}' > /root/.ssh/authorized_keys
             chmod 600 /root/.ssh/authorized_keys
-            """)
+            """,
+            capture=False,
+        )
+        print("  ── base image install done ──")
         self.run(["stop", BASE_SETUP_NAME])
         self.run(["publish", BASE_SETUP_NAME, f"--alias={BASE_IMAGE_ALIAS}"])
         self.run(["delete", BASE_SETUP_NAME, "--force"])
-        self.out.print(f"  Base image '{BASE_IMAGE_ALIAS}' ready.")
+        print(f"  Base image '{BASE_IMAGE_ALIAS}' ready.")
         return BASE_IMAGE_ALIAS
+    def ensure_bridge(self):
+        """Ensure incusbr0 exists and uses our fixed IPv4 subnet."""
+        bridge = self.run_json(["network", "show", "incusbr0"], check=False)
+        if bridge and bridge.get("config", {}).get("ipv4.address") == BRIDGE_IPV4:
+            return
+        print(f"  Configuring incusbr0 with static subnet {BRIDGE_IPV4} ...")
+        if not bridge:
+            self.run(["network", "create", "incusbr0"], check=False)
+        self.run(
+            [
+                "network",
+                "set",
+                "incusbr0",
+                f"ipv4.address={BRIDGE_IPV4}",
+                "ipv4.nat=true",
+                "ipv6.address=none",
+                "dns.mode=none",
+            ]
+        )
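Pinning the bridge to a fixed subnet is what makes the per-container `ipv4.address` overrides deterministic. A sketch of how per-relay addresses can be derived from such a subnet with the stdlib `ipaddress` module; the `BRIDGE_IPV4` value follows the PR's 10.200.200.0/24 subnet, but the host-offset layout of `RELAY_IPS` is a hypothetical example (the actual constant values are not shown in this hunk):

```python
import ipaddress

# Bridge address in the fixed 10.200.200.0/24 subnet (assumed constant value).
BRIDGE_IPV4 = "10.200.200.1/24"
subnet = ipaddress.ip_interface(BRIDGE_IPV4).network

# Hypothetical deterministic layout: relay N gets host address 10 + N.
RELAY_IPS = {name: str(subnet[10 + i]) for i, name in enumerate(["test0", "test1"])}
print(RELAY_IPS)  # → {'test0': '10.200.200.10', 'test1': '10.200.200.11'}
```

Because the subnet never changes between runs, DNS zone data and ssh configs written against these addresses stay valid across container rebuilds.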
     def get_container(self, name):
         """Return a container handle for the given name.
@@ -297,24 +276,32 @@ class Incus:
 class Container:
-    """The base container handle wraps all interactions with incus."""
+    """Lightweight handle for an Incus container.

-    def __init__(self, incus, name, domain=None):
+    Carries the container *name* and provides convenience methods
+    for running commands, managing lifecycle, and extracting state
+    so callers don't repeat the name everywhere.
+    """
+
+    def __init__(self, incus, name, domain=None, memory="200MiB", ipv4=None):
         self.incus = incus
-        self.out = incus.out
         self.name = name
         self.domain = domain or f"{name}{DOMAIN_SUFFIX}"
-        self.ipv4 = None
+        self.memory = memory
+        self.ipv4 = ipv4
         self.ipv6 = None

-    def bash(self, script, check=True):
+    def bash(self, script, check=True, capture=True):
         """Returns stdout from executing ``bash -ec <script>`` inside this container.

         *script* is dedented and stripped so callers can use triple-quoted strings.
         When *check* is False, returns *None* on non-zero exit instead of raising.
+        When *capture* is False, output streams to the terminal and None is returned.
         """
-        script = textwrap.dedent(script).strip()
-        cmd = ["exec", self.name, "--", "bash", "-ec", script]
+        cmd = ["exec", self.name, "--", "bash", "-ec", textwrap.dedent(script).strip()]
+        if not capture:
+            self.incus.run(cmd, check=check, capture=False)
+            return None
         return self.incus.run_output(cmd, check=check)
     def run_cmd(self, *args, check=True):
@@ -336,19 +323,28 @@ class Container:
             cmd.append("--force")
         self.incus.run(cmd, check=False)

-    def launch(self):
-        """Launch from the best available image, return the alias used."""
-        image = self.incus.find_image([RELAY_IMAGE_ALIAS, BASE_IMAGE_ALIAS])
-        if not image:
-            raise RuntimeError(
-                f"No base image '{BASE_IMAGE_ALIAS}' found. "
-                "Call ensure_base_image() before launching containers."
-            )
-        self.out.print(f"  Launching from '{image}' image ...")
+    def launch(self, image=None):
+        """Launch from the specified image, or the base image if None."""
+        self.incus.ensure_bridge()
+        if image is None:
+            image = self.incus.ensure_base_image()
         cfg = []
         cfg += ("-c", f"{LABEL_KEY}=true")
         cfg += ("-c", f"user.localchat-domain={self.domain}")
-        self.incus.run(["launch", image, self.name, *cfg])
+        cfg += ("-c", f"limits.memory={self.memory}")
+        self.incus.run(["init", image, self.name, *cfg])
+        if self.ipv4:
+            self.incus.run(
+                [
+                    "config",
+                    "device",
+                    "override",
+                    self.name,
+                    "eth0",
+                    f"ipv4.address={self.ipv4}",
+                ]
+            )
+        self.incus.run(["start", self.name])
        return image
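The single `incus launch` is split into `init` / device-override / `start` so the static IP is attached to eth0 before the container ever boots. A sketch that builds the three invocations in order (command shapes mirror the diff; the helper itself is illustrative):

```python
def launch_commands(image, name, ipv4=None, memory="200MiB"):
    """Yield the incus invocations for the init/override/start split."""
    cfg = ["-c", f"limits.memory={memory}"]
    yield ["incus", "init", image, name, *cfg]
    if ipv4:
        # The device override must land between init and start, so the
        # static address is in place when the container first boots.
        yield ["incus", "config", "device", "override", name, "eth0",
               f"ipv4.address={ipv4}"]
    yield ["incus", "start", name]

cmds = list(launch_commands("localchat-base", "test0-localchat",
                            ipv4="10.200.200.10"))
print(len(cmds))  # → 3
```

Without an `ipv4` argument the generator collapses back to a plain init + start pair, matching the DNS-less case.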
     def ensure(self):
@@ -361,12 +357,19 @@ class Container:
         data = self.incus.run_json(["list", self.name], check=False) or []
         existing = [c for c in data if c["name"] == self.name]
+        image = None
         if existing:
-            if existing[0]["status"] != "Running":
+            status = existing[0]["status"]
+            if status != "Running":
+                print(f"  Starting stopped {self.name} container ...")
                 self.start()
+            else:
+                print(f"  {self.name} already running")
         else:
-            self.launch()
+            image = self.launch()
         self.wait_ready()
+        if image:
+            print(f"  Ensured {self.name} (launched from {image!r} image)")
         return self
     def destroy(self):
@@ -432,14 +435,22 @@ class RelayContainer(Container):
             incus,
             f"{name}-localchat",
             domain=f"_{name}{DOMAIN_SUFFIX}",
+            memory="500MiB",
+            ipv4=RELAY_IPS.get(name),
         )
         self.sname = name
+        self.image_alias = f"localchat-{name}"
         self.ini = incus.lxconfigs_dir / f"chatmail-{name}.ini"
         self.zone = incus.lxconfigs_dir / f"{name}.zone"

     def launch(self):
-        """Launch (from a potentially cached image) and clear inherited chatmail-version."""
-        image = super().launch()
+        """Launch from a cached per-relay image if available, else from base."""
+        cached = self.incus._find_image(self.image_alias)
+        if cached:
+            print(f"  Using cached image {cached!r}")
+        else:
+            print("  No cached image, building from base")
+        image = super().launch(image=cached)
         self.bash("rm -f /etc/chatmail-version")
         return image
@@ -451,19 +462,14 @@ class RelayContainer(Container):

     def disable_ipv6(self):
         """Disable IPv6 inside the container via sysctl."""
-        # incus provides net.* virtualization for LXC containers so that
-        # these sysctls only affect the container's network namespace.
-        self.bash("""
+        self.bash("""\
             sysctl -w net.ipv6.conf.all.disable_ipv6=1
             sysctl -w net.ipv6.conf.default.disable_ipv6=1
+            mkdir -p /etc/sysctl.d
+            printf 'net.ipv6.conf.all.disable_ipv6=1\\n
+            net.ipv6.conf.default.disable_ipv6=1\\n'
+            > /etc/sysctl.d/99-disable-ipv6.conf
         """)
-        self.push_file_content(
-            "/etc/sysctl.d/99-disable-ipv6.conf",
-            """
-            net.ipv6.conf.all.disable_ipv6=1
-            net.ipv6.conf.default.disable_ipv6=1
-            """,
-        )
     def configure_hosts(self, ip):
         """Set hostname and /etc/hosts inside the container."""
@@ -474,21 +480,23 @@ class RelayContainer(Container):
             echo '{ip} {self.name} {self.domain}' >> /etc/hosts
         """)

-    def publish_as_relay_image(self):
-        """Publish this container as a reusable relay image.
+    def publish_image(self):
+        """Publish this container as a reusable per-relay image.

-        Stops the container, 'publishes' it as 'localchat-relay', then restarts it.
+        Returns True if an image was published,
+        False if a cached image already existed.
         """
-        if self.incus.find_image([RELAY_IMAGE_ALIAS]):
-            return
-        self.out.print(
-            f"  Locally caching {self.name!r} as '{RELAY_IMAGE_ALIAS}' image ..."
-        )
+        if self.incus._find_image(self.image_alias):
+            return False
+        self.bash("apt-get clean && rm -rf /var/lib/apt/lists/*")
+        print(f"  Publishing {self.name!r} as {self.image_alias!r} image ...")
         self.incus.run(
-            ["publish", self.name, f"--alias={RELAY_IMAGE_ALIAS}", "--force"]
+            ["publish", self.name, f"--alias={self.image_alias}", "--force"],
+            capture=False,
         )
         self.wait_ready()
-        self.out.print(f"  Relay image '{RELAY_IMAGE_ALIAS}' ready.")
+        print(f"  Image {self.image_alias!r} ready.")
+        return True
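The new `publish_image()` contract is publish-once-per-alias: `True` means a fresh image was published, `False` means the cached image was reused. The cache-hit logic can be sketched independently of incus with callables standing in for the lookup and publish steps (all names here are illustrative):

```python
def ensure_published(find_image, publish, alias):
    """Publish only when no cached image exists; mirrors publish_image()'s
    True/False contract (True = freshly published, False = cache hit)."""
    if find_image(alias):
        return False
    publish(alias)
    return True

cache = set()
print(ensure_published(cache.__contains__, cache.add, "localchat-test0"))  # → True
print(ensure_published(cache.__contains__, cache.add, "localchat-test0"))  # → False
```

This is the property the head commit relies on: the image built from the first commit in a PR is reused until the end of the PR.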
     def deployed_version(self):
         """Read /etc/chatmail-version, or None if absent."""
@@ -503,50 +511,40 @@ class RelayContainer(Container):

     def verify_ssh(self, ssh_config):
         """Verify SSH connectivity to this container."""
-        cmd = f"ssh -F {ssh_config} -o ConnectTimeout=60 root@{self.domain} hostname"
-        return shell(cmd, timeout=60).returncode == 0
+        cmd = f"ssh -F {ssh_config} -o ConnectTimeout=10 root@{self.domain} hostname"
+        return shell(cmd, timeout=15).returncode == 0

     def configure_dns(self, dns_ip):
-        """Point this container's resolver at *dns_ip* and verify DNS is reachable."""
-        self.bash(f"""
+        """Point this container's resolver at *dns_ip*.
+
+        Disables systemd-resolved to free port 53 and writes
+        a static /etc/resolv.conf.  Also configures unbound
+        (if present) to forward .localchat queries.
+        """
+        self.bash(f"""\
             systemctl disable --now systemd-resolved 2>/dev/null || true
             rm -f /etc/resolv.conf
-            printf 'nameserver {dns_ip}\\n' >/etc/resolv.conf
+            echo 'nameserver {dns_ip}' > /etc/resolv.conf
             mkdir -p /etc/unbound/unbound.conf.d
+            printf 'server:\\n  domain-insecure: "localchat"\\n\\n
+            forward-zone:\\n  name: "localchat"\\n
+            forward-addr: {dns_ip}\\n'
+            > /etc/unbound/unbound.conf.d/localchat-forward.conf
+            systemctl restart unbound 2>/dev/null || true
         """)
-        self.push_file_content(
-            "/etc/unbound/unbound.conf.d/localchat-forward.conf",
-            f"""
-            server:
-              domain-insecure: "localchat"
-
-            forward-zone:
-              name: "localchat"
-              forward-addr: {dns_ip}
-            """,
-        )
-        self.bash("systemctl restart unbound 2>/dev/null || true")
-        self._wait_dns_reachable(dns_ip)

-    def _wait_dns_reachable(self, dns_ip, timeout=10):
-        """Poll until *dns_ip* answers a DNS query from this container."""
-        if self.bash("which dig", check=False) is None:
-            self.bash(
-                "DEBIAN_FRONTEND=noninteractive "
-                "apt-get install -y dnsutils 2>/dev/null || true"
-            )
-        deadline = time.time() + timeout
-        while time.time() < deadline:
-            result = self.bash(
-                f"dig @{dns_ip} . SOA +short +time=1 +tries=1",
-                check=False,
-            )
-            if result and result.strip():
-                return
-            time.sleep(0.5)
-        raise DNSConfigurationError(
-            f"DNS at {dns_ip} not reachable from {self.name} after {timeout}s"
-        )
+    def check_dns(self, retries=5, delay=2):
+        """Verify that external DNS resolution works inside the container."""
+        for i in range(retries):
+            result = self.bash(
+                "getent hosts pypi.org",
+                check=False,
+            )
+            if result:
+                return True
+            if i < retries - 1:
+                time.sleep(delay)
+        return False
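`check_dns()` replaces the deadline-based `dig` poll with a simple bounded retry around `getent hosts pypi.org`, skipping the final sleep so a definitive failure is reported promptly. The retry skeleton, with a callable standing in for the in-container lookup (the callable-based shape is illustrative):

```python
import time

def check_dns(resolve, retries=5, delay=2):
    """Retry *resolve* until it returns truthy; mirrors the new check_dns loop,
    including the no-sleep-after-last-attempt detail."""
    for i in range(retries):
        if resolve():
            return True
        if i < retries - 1:
            time.sleep(delay)
    return False

attempts = iter([None, None, "140.211.168.150 pypi.org"])
print(check_dns(lambda: next(attempts), delay=0))  # → True
```

Returning a boolean (instead of raising like the removed `_wait_dns_reachable`) lets the `lxc-start` caller decide how to report an unreachable resolver.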
     def write_ini(self, disable_ipv6=False):
         """Generate a chatmail.ini config file in lxconfigs/."""
@@ -564,14 +562,34 @@ class RelayContainer(Container):


 class DNSContainer(Container):
-    """Container handle for the PowerDNS name server.
-
-    Manages the authoritative and recursive DNS services required for
-    name resolution in the local testing environment.
-    """
+    """Specialised container handle for the PowerDNS name server."""

     def __init__(self, incus):
-        super().__init__(incus, DNS_CONTAINER_NAME, domain=DNS_DOMAIN)
+        super().__init__(
+            incus, DNS_CONTAINER_NAME, domain=DNS_DOMAIN, memory="256MiB", ipv4=DNS_IP
+        )
+
+    def launch(self):
+        """Launch from cached DNS image if available, else from base image."""
+        cached = self.incus._find_image(DNS_IMAGE_ALIAS)
+        if cached:
+            print(f"  Using cached image {cached!r}")
+        else:
+            print("  No cached image, building from base")
+        return super().launch(image=cached)
+
+    def publish_as_dns_image(self):
+        """Publish this container as a reusable DNS image."""
+        if self.incus._find_image(DNS_IMAGE_ALIAS):
+            return
+        self.bash("apt-get clean && rm -rf /var/lib/apt/lists/*")
+        print(f"  Publishing {self.name!r} as {DNS_IMAGE_ALIAS!r} image ...")
+        self.incus.run(
+            ["publish", self.name, f"--alias={DNS_IMAGE_ALIAS}", "--force"],
+            capture=False,
+        )
+        self.wait_ready()
+        print(f"  DNS image {DNS_IMAGE_ALIAS!r} ready.")
     def pdnsutil(self, *args, check=True):
         """Run ``pdnsutil <args>`` inside the DNS container."""
@@ -582,25 +600,11 @@ class DNSContainer(Container):
         self.pdnsutil("replace-rrset", zone, name, rtype, ttl, rdata)

     def restart_services(self):
-        """Restart pdns and pdns-recursor, then wait until DNS is answering."""
-        self.bash("""
+        """Restart pdns and pdns-recursor."""
+        self.bash("""\
             systemctl restart pdns
             systemctl restart pdns-recursor || true
         """)
-        self._wait_dns_ready()
-
-    def _wait_dns_ready(self, timeout=60):
-        """Poll until the recursor answers a query on port 53."""
-        deadline = time.time() + timeout
-        while time.time() < deadline:
-            result = self.bash(
-                "dig @127.0.0.1 . SOA +short +time=1 +tries=1",
-                check=False,
-            )
-            if result and result.strip():
-                return
-            time.sleep(0.5)
-        raise DNSConfigurationError(f"DNS recursor not answering after {timeout}s")
     def ensure(self):
         """Create the DNS container with PowerDNS if needed.
@@ -620,36 +624,18 @@ class DNSContainer(Container):
             check=False,
         )

-    def destroy(self):
-        """Stop, delete, and reset bridge DNS config."""
-        super().destroy()
-        self.incus.run(["network", "unset", "incusbr0", "dns.mode"], check=False)
-        self.incus.run(["network", "unset", "incusbr0", "raw.dnsmasq"], check=False)
-
     def _install_powerdns(self):
         """Install and configure PowerDNS if not already present."""
         if self.run_cmd("which", "pdns_server", check=False) is not None:
             return
-        host_ns = self.incus.get_host_nameservers()
-        ns_lines = "\n".join(f"nameserver {n}" for n in host_ns)
-        self.bash(f"""
+        self.bash("""\
             systemctl disable --now systemd-resolved 2>/dev/null || true
             rm -f /etc/resolv.conf
-            printf '{ns_lines}\n' > /etc/resolv.conf
-            # Block automatic service startup during package installation
-            printf '#!/bin/sh\\nexit 101\\n' > /usr/sbin/policy-rc.d
-            chmod +x /usr/sbin/policy-rc.d
+            echo 'nameserver 9.9.9.9' > /etc/resolv.conf
             apt-get -o DPkg::Lock::Timeout=60 update
             DEBIAN_FRONTEND=noninteractive apt-get install -y \
                 pdns-server pdns-backend-sqlite3 sqlite3 pdns-recursor dnsutils
-            # Remove the startup block
-            rm /usr/sbin/policy-rc.d
             systemctl stop pdns pdns-recursor || true
             mkdir -p /var/lib/powerdns
             sqlite3 /var/lib/powerdns/pdns.sqlite3 \
@@ -659,7 +645,7 @@ class DNSContainer(Container):
         self.push_file_content(
             "/etc/powerdns/pdns.conf",
-            """
+            """\
             launch=gsqlite3
             gsqlite3-database=/var/lib/powerdns/pdns.sqlite3
             local-address=127.0.0.1
@@ -669,22 +655,22 @@ class DNSContainer(Container):
         self.push_file_content(
             "/etc/powerdns/recursor.conf",
-            """
+            """\
             local-address=0.0.0.0
             local-port=53
             forward-zones=localchat=127.0.0.1:5353
+            forward-zones-recurse=.=9.9.9.9;149.112.112.112
             allow-from=0.0.0.0/0
             dont-query=
             dnssec=off
             """,
         )
-        self.bash("""
+        self.bash("""\
             systemctl start pdns
             systemctl start pdns-recursor
             echo 'nameserver 127.0.0.1' > /etc/resolv.conf
         """)
-        self._wait_dns_ready()
     def reset_dns_records(self, dns_ip, domains):
         """Create DNS zones with initial A records via pdnsutil.
@@ -700,7 +686,7 @@ class DNSContainer(Container):
         for d in domains:
             domain = d["domain"]
             ip = d["ip"]
-            self.out.print(f"  {domain} -> {ip}")
+            print(f"  {domain} -> {ip}")

             # Delete and recreate zone fresh (removes stale records)
             self.pdnsutil("delete-zone", domain, check=False)
@@ -717,11 +703,11 @@ class DNSContainer(Container):
             ipv6 = d.get("ipv6")
             if ipv6:
                 self.replace_rrset(domain, ".", "AAAA", "3600", ipv6)
-                self.out.print(f"    zone reset: SOA, NS, A, AAAA ({ip}, {ipv6})")
+                print(f"    zone reset: SOA, NS, A, AAAA ({ip}, {ipv6})")
             else:
                 # Remove any stale AAAA record
                 self.pdnsutil("delete-rrset", domain, ".", "AAAA", check=False)
-                self.out.print(f"    zone reset: SOA, NS, A ({ip}, IPv4-only)")
+                print(f"    zone reset: SOA, NS, A ({ip}, IPv4-only)")
         self.restart_services()


@@ -89,9 +89,7 @@ def test_concurrent_logins_same_account(
     assert login_results.get()


-def test_no_vrfy(cmfactory, chatmail_config):
-    ac = cmfactory.get_online_account()
-    addr = ac.get_config("addr")
+def test_no_vrfy(chatmail_config):
     domain = chatmail_config.mail_domain
     s = smtplib.SMTP(domain)
@@ -100,7 +98,7 @@ def test_no_vrfy(cmfactory, chatmail_config):
     s.putcmd("vrfy", f"wrongaddress@{chatmail_config.mail_domain}")
     result = s.getreply()
     print(result)
-    s.putcmd("vrfy", addr)
+    s.putcmd("vrfy", f"echo@{chatmail_config.mail_domain}")
     result2 = s.getreply()
     print(result2)
     assert result[0] == result2[0] == 252


@@ -409,16 +409,13 @@ class ChatmailACFactory:
     def _make_transport(self, domain):
         """Build a transport config dict for the given domain."""
         addr, password = self.gencreds(domain)
+        server = self._ssh_config_host_map.get(domain, domain)
         transport = {
             "addr": addr,
             "password": password,
+            "imapServer": server,
+            "smtpServer": server,
         }
-        # To support running against local relays without host DNS resolution
-        # we attempt resolving the domain via ssh-config
-        # because otherwise core fails to find the address
-        server = self._ssh_config_host_map.get(domain)
-        if server is not None:
-            transport.update({"imapServer": server, "smtpServer": server})
         if self.chatmail_config.tls_cert_mode == "self":
             transport["certificateChecks"] = "acceptInvalidCertificates"
         return transport
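The simplified `_make_transport` always fills `imapServer`/`smtpServer`, falling back to the domain itself via `dict.get(domain, domain)` when no ssh-config host mapping exists. A standalone sketch of that assembly (function signature and sample values are illustrative):

```python
def make_transport(addr, password, domain, ssh_config_host_map,
                   tls_cert_mode="letsencrypt"):
    # Endpoint falls back to the domain itself when no mapping exists.
    server = ssh_config_host_map.get(domain, domain)
    transport = {
        "addr": addr,
        "password": password,
        "imapServer": server,
        "smtpServer": server,
    }
    if tls_cert_mode == "self":
        transport["certificateChecks"] = "acceptInvalidCertificates"
    return transport

t = make_transport("a@x.localchat", "pw", "x.localchat", {}, tls_cert_mode="self")
print(t["imapServer"])  # → x.localchat
```

The fallback removes the need for the conditional `transport.update(...)` branch that the old code carried.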
@@ -487,16 +484,13 @@ def cmfactory(


 @pytest.fixture
 def remote(sshdomain, pytestconfig):
-    r = Remote(sshdomain, ssh_config=pytestconfig.getoption("ssh_config"))
-    yield r
-    r.close()
+    return Remote(sshdomain, ssh_config=pytestconfig.getoption("ssh_config"))


 class Remote:
     def __init__(self, sshdomain, ssh_config=None):
         self.sshdomain = sshdomain
         self.ssh_config = ssh_config
-        self._procs = []

     def iter_output(self, logcmd="", ready=None):
         getjournal = "journalctl -f" if not logcmd else logcmd
@@ -512,15 +506,12 @@ class Remote:
             command.extend(["-F", self.ssh_config])
         command.append(f"root@{self.sshdomain}")
         [command.append(arg) for arg in getjournal.split()]
-        popen = subprocess.Popen(
+        self.popen = subprocess.Popen(
             command,
-            stdin=subprocess.DEVNULL,
             stdout=subprocess.PIPE,
-            stderr=subprocess.DEVNULL,
         )
-        self._procs.append(popen)
         while 1:
-            line = popen.stdout.readline()
+            line = self.popen.stdout.readline()
             res = line.decode().strip().lower()
             if not res:
                 break
@@ -529,12 +520,6 @@ class Remote:
             ready = None
         yield res

-    def close(self):
-        while self._procs:
-            proc = self._procs.pop()
-            proc.kill()
-            proc.wait()
-

 @pytest.fixture
 def lp(request):
@pytest.fixture @pytest.fixture
def lp(request): def lp(request):


@@ -8,11 +8,10 @@ import pytest

 from cmdeploy.lxc import cli
 from cmdeploy.lxc.incus import Incus
-from cmdeploy.util import Out

 pytestmark = pytest.mark.skipif(
-    not shutil.which("incus"),
-    reason="incus not installed",
+    not shutil.which("incus") or not shutil.which("lxc"),
+    reason="incus/lxc not installed",
 )
@@ -23,14 +22,12 @@ pytestmark = pytest.mark.skipif(


 @pytest.fixture
 def ix():
-    out = Out()
-    return Incus(out)
+    return Incus()


 @pytest.fixture(scope="session")
 def lxc_setup():
-    out = Out()
-    ix = Incus(out)
+    ix = Incus()
     ix.get_dns_container().ensure()
     return ix.list_managed()
@@ -129,6 +126,8 @@ class TestLxcStatus:
         assert "status" in result.stdout.lower()

     def test_shows_containers(self, lxc_setup, capsys):
+        from cmdeploy.cmdeploy import Out
+
         class QuietOut(Out):
             def red(self, msg, **kw):
                 pass


@@ -1,71 +1,13 @@
-import sys
-
-from cmdeploy.util import Out, collapse, get_git_hash, get_version_string, shell
-
-
-class TestOut:
-    def test_prefix_default(self, capsys):
-        out = Out()
-        out.print("hello")
-        assert capsys.readouterr().out == "hello\n"
-
-    def test_prefix_custom(self, capsys):
-        out = Out(prefix=">> ")
-        out.print("hello")
-        assert capsys.readouterr().out == ">> hello\n"
-
-    def test_prefix_print_file(self):
-        import io
-
-        buf = io.StringIO()
-        out = Out(prefix=":: ")
-        out.print("msg", file=buf)
-        assert ":: msg" in buf.getvalue()
-
-    def test_new_prefixed_out(self, capsys):
-        parent = Out(prefix="A")
-        child = parent.new_prefixed_out("B")
-        child.print("x")
-        assert capsys.readouterr().out == "ABx\n"
-        # shares section_timings
-        assert child.section_timings is parent.section_timings
-
-    def test_section_no_auto_indent(self, capsys):
-        out = Out(prefix="")
-        with out.section("test"):
-            out.print("inside")
-        captured = capsys.readouterr().out
-        # "inside" should NOT be indented by section()
-        lines = captured.strip().splitlines()
-        inside_line = [l for l in lines if "inside" in l][0]
-        assert inside_line == "inside"
-
-    def test_section_records_timing(self):
-        out = Out()
-        with out.section("s1"):
-            pass
-        assert len(out.section_timings) == 1
-        assert out.section_timings[0][0] == "s1"
-
-    def test_shell_failure_shows_output(self):
-        """When a shell command fails, its output and exit code are shown."""
-        import subprocess
-
-        result = subprocess.run(
-            [
-                sys.executable,
-                "-c",
-                "from cmdeploy.util import Out; Out(prefix='').shell("
-                "\"echo 'boom on stderr' >&2; exit 42\")",
-            ],
-            capture_output=True,
-            text=True,
-            check=False,
-        )
-        # the command's stderr is merged into stdout by Popen
-        assert "boom on stderr" in result.stdout
-        # Out.red() prints the failure notice to stderr
-        assert "exit code 42" in result.stderr
+import pytest
+
+from cmdeploy.util import (
+    build_chatmaild_sdist,
+    collapse,
+    get_chatmaild_sdist,
+    get_git_hash,
+    get_version_string,
+    shell,
+)
 def test_collapse():
@@ -118,3 +60,38 @@ def test_git_helpers_with_commits_and_diffs(tmp_path):
     new_hash = get_git_hash(root=tmp_path)
     assert new_hash != git_hash
     assert get_version_string(root=tmp_path) == new_hash
+
+    # Diffs inside excluded test dirs are invisible to the version string
+    test_dir = tmp_path / "cmdeploy" / "src" / "cmdeploy" / "tests"
+    test_dir.mkdir(parents=True)
+    test_file = test_dir / "test_foo.py"
+    test_file.write_text("pass")
+    shell("git add .", cwd=tmp_path, check=True)
+    shell("git commit -m 'add test file'", cwd=tmp_path, check=True)
+    test_file.write_text("assert True")
+    assert get_version_string(root=tmp_path) == get_git_hash(root=tmp_path)
+
+
+def test_build_chatmaild_sdist(tmp_path):
+    dist_dir = tmp_path / "dist"
+    # First call builds the sdist
+    result = build_chatmaild_sdist(dist_dir)
+    assert result.name.endswith(".tar.gz")
+    assert result.stat().st_size > 0
+    # Second call is idempotent - returns the same file, no rebuild
+    mtime = result.stat().st_mtime
+    result2 = build_chatmaild_sdist(dist_dir)
+    assert result2 == result
+    assert result2.stat().st_mtime == mtime
+
+
+def test_get_chatmaild_sdist_errors(tmp_path):
+    with pytest.raises(FileNotFoundError):
+        get_chatmaild_sdist(tmp_path / "nonexistent")
+    empty = tmp_path / "empty"
+    empty.mkdir()
+    with pytest.raises(FileNotFoundError):
+        get_chatmaild_sdist(empty)
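The idempotence that `test_build_chatmaild_sdist` checks (second call returns the same file without rebuilding) is a build-once cache keyed on the dist directory's contents. A sketch of that pattern with a counting stub in place of the real sdist build (the helper and filenames here are stand-ins, not the actual `build_chatmaild_sdist` implementation):

```python
import tempfile
from pathlib import Path

def get_or_build(dist_dir: Path, build):
    """Return an existing sdist from *dist_dir*, building only when absent."""
    dist_dir.mkdir(parents=True, exist_ok=True)
    existing = sorted(dist_dir.glob("*.tar.gz"))
    if existing:
        return existing[0]
    return build(dist_dir)

calls = []
def fake_build(d):
    calls.append(d)
    p = d / "chatmaild-0.1.tar.gz"
    p.write_bytes(b"sdist")
    return p

tmp = Path(tempfile.mkdtemp())
first = get_or_build(tmp, fake_build)
second = get_or_build(tmp, fake_build)
print(first == second, len(calls))  # → True 1
```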


@@ -1,108 +1,11 @@
"""Shared utility functions for cmdeploy.""" """Shared utility functions for cmdeploy."""
import os import fcntl
import shutil
import subprocess import subprocess
import sys import sys
import textwrap import textwrap
import time
import fcntl
import os
import shutil
import subprocess
import sys
import time
from contextlib import contextmanager
from pathlib import Path

from termcolor import colored

# Note: collapse() and get_git_hash() are defined elsewhere in this module
# (outside the hunks shown in this diff).


class Out:
    """Convenience output printer providing coloring and section formatting."""

    def __init__(self, prefix="", verbosity=0):
        self.section_timings = []
        self.prefix = prefix
        self.sepchar = "\u2501"
        self.verbosity = verbosity
        env_width = os.environ.get("_CMDEPLOY_WIDTH")
        if env_width:
            self.section_width = int(env_width)
        else:
            self.section_width = shutil.get_terminal_size((80, 24)).columns

    def new_prefixed_out(self, newprefix=" "):
        """Return a new Out with an extended prefix,
        sharing section_timings with the parent.
        """
        out = Out(
            prefix=self.prefix + newprefix,
            verbosity=self.verbosity,
        )
        out.section_timings = self.section_timings
        return out

    def red(self, msg, file=sys.stderr):
        print(colored(self.prefix + msg, "red"), file=file, flush=True)

    def green(self, msg, file=sys.stderr):
        print(colored(self.prefix + msg, "green"), file=file, flush=True)

    def print(self, msg="", **kwargs):
        """Print to stdout with automatic flush."""
        if msg:
            msg = self.prefix + msg
        print(msg, flush=True, **kwargs)

    def _format_header(self, title):
        """Return a formatted section header string."""
        width = self.section_width - len(self.prefix)
        bar = self.sepchar * (width - len(title) - 5)
        return f"{self.sepchar * 3} {title} {bar}"

    @contextmanager
    def section(self, title):
        """Context manager that prints a section header and records elapsed time."""
        self.green(self._format_header(title))
        t0 = time.time()
        yield
        elapsed = time.time() - t0
        self.section_timings.append((title, elapsed))

    def section_line(self, title):
        """Print a section header without timing."""
        self.green(self._format_header(title))

    def shell(self, cmd, quiet=False, **kwargs):
        """Print *cmd*, run it, and re-print its output with the current prefix.

        *cmd* is passed through :func:`collapse`, so callers
        can use triple-quoted f-strings freely.
        Stdout and stderr are merged, read line-by-line,
        and each line is printed with ``self.prefix`` prepended.
        When the command exits non-zero, a red error line is printed.
        """
        cmd = collapse(cmd)
        if not quiet:
            self.print(f"$ {cmd}")
        indent = self.prefix + " "
        env = kwargs.pop("env", None)
        if env is None:
            env = os.environ.copy()
        env["_CMDEPLOY_WIDTH"] = str(self.section_width - len(indent))
        proc = subprocess.Popen(
            cmd,
            shell=True,
            text=True,
            stdin=subprocess.DEVNULL,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            env=env,
            **kwargs,
        )
        for line in proc.stdout:
            sys.stdout.write(indent + line)
            sys.stdout.flush()
        ret = proc.wait()
        if ret:
            self.red(f"command failed with exit code {ret}: {cmd}")
        return ret
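The `section()`/`section_timings` interplay above can be reduced to a small self-contained sketch (illustrative only, not part of the module): a context manager records `(title, elapsed)` tuples, and a summary loop prints them at the end, as `lxc-test` does.

```python
import time
from contextlib import contextmanager

# Standalone sketch (not the Out class itself) of the section-timing
# pattern that Out.section() implements.
section_timings = []

@contextmanager
def section(title):
    t0 = time.time()
    yield
    section_timings.append((title, time.time() - t0))

with section("deploy"):
    pass  # real work (e.g. running cmdeploy subcommands) goes here

# Summary printed at the end, one line per recorded section.
for title, elapsed in section_timings:
    print(f"{title}: {elapsed:.2f}s")
```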
def _project_root():
    """Return the project root directory."""

@@ -133,7 +36,6 @@ def shell(cmd, check=False, **kwargs):
    """
    if "capture_output" not in kwargs and "stdout" not in kwargs:
        kwargs["capture_output"] = True
    kwargs.setdefault("stdin", subprocess.DEVNULL)
    return subprocess.run(collapse(cmd), shell=True, text=True, check=check, **kwargs)
@@ -150,20 +52,75 @@ def get_git_hash(root=None):
    return None
DIFF_EXCLUDES = (
    ":(exclude)cmdeploy/src/cmdeploy/tests",
    ":(exclude)chatmaild/src/chatmaild/tests",
)
"""Git pathspecs appended to ``git diff`` so that changes
limited to test files do not affect the deployed version string."""
def get_version_string(root=None):
    """Return ``git_hash\\ngit_diff`` for the local working tree.

    Used by :class:`~cmdeploy.deployers.GithashDeployer` to write
    ``/etc/chatmail-version`` and by ``lxc-status`` to compare
    the deployed state against the local checkout.

    Changes inside directories listed in :data:`DIFF_EXCLUDES`
    are ignored so that test-only edits do not trigger
    a redeployment.
    """
    if root is None:
        root = _project_root()
    git_hash = get_git_hash(root=root) or "unknown"
    excludes = " ".join(f"'{e}'" for e in DIFF_EXCLUDES)
    try:
        git_diff = shell(
            f"git diff -- . {excludes}",
            cwd=str(root),
        ).stdout.strip()
    except Exception:
        git_diff = ""
    if git_diff:
        return f"{git_hash}\n{git_diff}"
    return git_hash
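As a self-contained check of how :data:`DIFF_EXCLUDES` is expanded into the ``git diff`` invocation above (no git repository required):

```python
# Reproduce the pathspec join used by get_version_string().
DIFF_EXCLUDES = (
    ":(exclude)cmdeploy/src/cmdeploy/tests",
    ":(exclude)chatmaild/src/chatmaild/tests",
)
excludes = " ".join(f"'{e}'" for e in DIFF_EXCLUDES)
cmd = f"git diff -- . {excludes}"
print(cmd)
# → git diff -- . ':(exclude)cmdeploy/src/cmdeploy/tests' ':(exclude)chatmaild/src/chatmaild/tests'
```

With these pathspecs, a diff that touches only the two test directories produces empty output, so the version string stays equal to the bare git hash.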
def _chatmaild_default_dist_dir():
    return _project_root() / "chatmaild" / "dist"


def build_chatmaild_sdist(dist_dir=None):
    """Build the chatmaild sdist if not already present (idempotent, process-safe)."""
    if dist_dir is None:
        dist_dir = _chatmaild_default_dist_dir()
    dist_dir = Path(dist_dir).resolve()
    dist_dir.mkdir(parents=True, exist_ok=True)
    lockfile = dist_dir.parent / ".dist.lock"
    with open(lockfile, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)
        existing = [p for p in dist_dir.iterdir() if p.suffix == ".gz"]
        if existing:
            return existing[0]
        subprocess.check_output(
            [sys.executable, "-m", "build", "-n"]
            + ["--sdist", "chatmaild", "--outdir", str(dist_dir)],
            cwd=str(_project_root()),
        )
        return get_chatmaild_sdist(dist_dir)


def get_chatmaild_sdist(dist_dir=None):
    """Return the path to the pre-built chatmaild sdist."""
    if dist_dir is None:
        dist_dir = _chatmaild_default_dist_dir()
    entries = list(Path(dist_dir).iterdir())
    if len(entries) == 0:
        raise FileNotFoundError(f"dist directory is empty: {dist_dir}")
    if len(entries) > 1:
        raise ValueError(f"expected one file in {dist_dir}, found {len(entries)}")
    return entries[0]
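The lock-then-check pattern in ``build_chatmaild_sdist()`` can be sketched standalone: an exclusive ``flock`` makes the "reuse the existing artifact or build it" decision race-free across processes. The ``.tar.gz`` file below is a stand-in for the real sdist build (POSIX-only, like the module itself, since it uses ``fcntl``).

```python
import fcntl
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    dist_dir = Path(tmp) / "dist"
    dist_dir.mkdir()
    lockfile = dist_dir.parent / ".dist.lock"
    with open(lockfile, "w") as fh:
        # Held until the file handle closes; concurrent callers block here.
        fcntl.flock(fh, fcntl.LOCK_EX)
        existing = [p for p in dist_dir.iterdir() if p.suffix == ".gz"]
        if not existing:
            # Stand-in for the actual `python -m build --sdist` step.
            (dist_dir / "chatmaild-0.1.tar.gz").touch()
        result = sorted(p.name for p in dist_dir.iterdir())
print(result)
```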


@@ -1,15 +1,21 @@
Local testing with LXC/Incus
============================
.. warning::

   cmdeploy LXC support is geared towards local testing and CI only.
   Do not base production setups on it.
The ``cmdeploy`` tool includes support for running
chatmail relays inside local
`Incus <https://linuxcontainers.org/incus/>`_ LXC containers.
This is useful for development, testing, and CI
without requiring a remote server.

LXC system containers behave like lightweight virtual machines.
They share the host's kernel but run their own init system
(systemd), package manager, and network stack,
so the cmdeploy deployment scripts work exactly
as they would on a real Debian server or cloud VPS.
Prerequisites
-------------
@@ -26,16 +32,6 @@ After installing incus, initialise and grant yourself access::
   sudo incus admin init --minimal
   sudo usermod -aG incus-admin $USER
.. caution::

   Adding yourself to ``incus-admin`` grants effective root access
   to the host: any member can mount host directories into a container
   and manipulate them as root.
   This is fine for local testing of your own relay branches,
   but do **not** use it for production setups
   or for testing untrusted relay branches from others.
.. warning::

   You **must now log out and back in** (or run ``newgrp incus-admin``)
@@ -57,13 +53,13 @@ Quick start
   source venv/bin/activate  # activate venv
   cmdeploy lxc-test         # create containers, deploy, test
The ``lxc-test`` command executes each ``cmdeploy`` subcommand
as a subprocess, so you can copy-paste and run them individually.
A section timing summary is printed at the end.
No host DNS delegation or ``~/.ssh/config`` changes are needed
because ``lxc-test`` passes ssh-related CLI options to the
``cmdeploy run`` and ``cmdeploy test`` commands.
CLI reference
-------------
@@ -86,12 +82,29 @@ CLI reference
   Pass ``NAME`` to stop specific containers.
   Use ``--destroy`` to also delete the containers and their config files.
   Use ``--destroy-all`` to additionally destroy
   the ``ns-localchat`` DNS container **and** remove all cached
   images (``localchat-base`` and the per-relay images),
   giving a fully clean slate for the next ``lxc-test``.
   User containers are **never** destroyed unless named explicitly.
``lxc-test [--one]``
   Idempotent full pipeline:

   1. ``lxc-start``: create ``test0`` + ``test1`` containers,
      configure DNS with readiness check
   2. ``cmdeploy run``: deploy chatmail services
      on all relays **in parallel**
   3. publish per-relay cached images (``localchat-test0``,
      ``localchat-test1``) after first successful deploy
   4. ``cmdeploy dns --zonefile``: generate standard
      BIND-format zone files, load full DNS records
   5. ``cmdeploy test``: run full test suite
      with ``-n4 -x``
   By default creates, deploys, and tests both ``test0`` and ``test1``
   for dual-domain federation testing (sets ``CHATMAIL_DOMAIN2=_test1.localchat``).
   ``test0`` runs dual-stack (IPv4 + IPv6) while ``test1`` runs
   IPv4-only (``disable_ipv6 = True``).
@@ -176,7 +189,7 @@ running two `PowerDNS <https://www.powerdns.com/>`_ services:
* **pdns-recursor** (recursive) listens on the Incus
  bridge so all containers can use it.
  Forwards ``.localchat`` queries to the local
  authoritative server and everything else to Quad9 (``9.9.9.9``).
After the DNS container is up, ``lxc-start`` configures the Incus bridge
to advertise its IP via DHCP and disables Incus's own DNS.
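The DNS readiness check mentioned for ``lxc-start`` can be sketched as a retry loop in the spirit of ``RelayContainer.check_dns()``, which retries ``getent hosts`` until external resolution works. The function name, retry count, and delay below are illustrative assumptions, not the exact implementation (which runs inside the container).

```python
import subprocess
import time

def wait_for_dns(hostname, attempts=10, delay=1.0):
    """Retry `getent hosts <hostname>` until resolution succeeds."""
    for _ in range(attempts):
        ret = subprocess.run(
            ["getent", "hosts", hostname],
            capture_output=True, text=True,
        )
        if ret.returncode == 0:
            return True
        time.sleep(delay)
    return False
```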
@@ -212,14 +225,18 @@ per-container ``chatmail-*.ini`` files, zone files, and ``ssh-config``.
The only state *outside* the repository is the Incus containers and images themselves
(managed via the ``incus`` CLI, labelled with ``user.localchat-managed=true``).
Several cached images are published to the local Incus image store:

* ``localchat-base``: Debian 12 with openssh-server and Python
  (built on first run)
* ``localchat-test0``, ``localchat-test1``: per-relay snapshots
  published after the first successful ``cmdeploy run``.
  Subsequent containers launch from these images
  so the deploy step is mostly no-ops.

Relay containers are limited to **500 MiB RAM**
and the DNS container to **256 MiB**.
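The commit history describes launching containers in three steps (``incus init`` + device override + ``incus start``) so a static IP on the ``incusbr0`` bridge subnet (10.200.200.0/24) is assigned before first boot. A hedged sketch of the command sequence, with names, device key, and addresses as illustrative assumptions:

```python
# Build (not run) the three incus commands for a static-IP launch.
def launch_commands(image, name, ip):
    return [
        f"incus init {image} {name}",
        f"incus config device override {name} eth0 ipv4.address={ip}",
        f"incus start {name}",
    ]

cmds = launch_commands("localchat-test0", "test0", "10.200.200.10")
print(cmds)
```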
.. _lxc-tls: