Compare commits


13 Commits

Author SHA1 Message Date
holger krekel
6573ccc05f cache images from the first commit in the PR and re-use them until the end of the PR 2026-03-07 21:34:21 +01:00
holger krekel
25285005c3 same as https://github.com/chatmail/relay/pull/887/changes 2026-03-07 20:24:59 +01:00
holger krekel
482194437d docs: update lxc.rst for per-relay caching and parallel deploy
Update the quick-start and CLI reference sections to reflect
per-relay cached images (localchat-test0, localchat-test1),
parallel deploy, DNS readiness checks, and revised memory
limits (256 MiB for DNS container).  Add mention of section
timing summary printed at the end of lxc-test.
2026-03-07 20:24:59 +01:00
holger krekel
735e9d3e7f feat: route output through Out, add DNS check and version-string excludes
Extend the Out class with section(), section_line(), and print()
methods that replace the standalone _section()/_section_line()
helpers.  Section timings are recorded and printed as a summary
at the end of lxc-test.

Add RelayContainer.check_dns() which retries 'getent hosts
pypi.org' to verify external DNS resolution works inside the
container, called right after configure_dns() in lxc-start.

Add DIFF_EXCLUDES to get_version_string() so that diffs limited
to test directories do not cause a version mismatch and
unnecessary redeployment.

Update test_lxc.py to use QuietOut for the new Out API.
2026-03-07 20:24:53 +01:00
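The Out/section-timing design described in the commit above can be sketched roughly as follows. The method names `section()`, `section_line()`, and `print()` come from the commit message; the bodies and the `summary()` helper are illustrative assumptions, not the actual implementation.

```python
import time


class Out:
    """Sketch of an output helper that records per-section timings."""

    def __init__(self):
        self.timings = []  # (title, seconds) pairs
        self._title = None
        self._start = None

    def _finish_current(self):
        if self._title is not None:
            self.timings.append((self._title, time.monotonic() - self._start))
            self._title = None

    def section(self, title):
        """Close the previous section (if any) and start timing a new one."""
        self._finish_current()
        self._title = title
        self._start = time.monotonic()
        self.print(f"=== {title} ===")

    def section_line(self, msg):
        self.print(f"  {msg}")

    def print(self, *args):
        print(*args)

    def summary(self):
        """Print how long each section took, as lxc-test does at the end."""
        self._finish_current()
        for title, secs in self.timings:
            self.print(f"{title}: {secs:.2f}s")


out = Out()
out.section("deploy")
out.section_line("relay test0 done")
out.section("test")
out.summary()
```

Routing all output through one object like this also makes it easy to substitute a `QuietOut` in tests, as the commit's `test_lxc.py` change suggests.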
holger krekel
4b79606d49 feat: per-relay image caching, static IPs, and parallel deploy
Switch from a single localchat-relay image to per-relay cached
images (localchat-test0, localchat-test1) and add a DNS image
(localchat-ns).  Assign static IPs via a fixed incusbr0 bridge
subnet (10.200.200.0/24) so containers always get deterministic
addresses.

Container launch is split into 'incus init' + device-override +
'incus start' to set the static IP before boot.

Deploy runs in parallel via _run_cmdeploy_parallel(), which
captures output per-relay and shows progress lines.  Tests now
run in both directions (test0↔test1, test1↔test0).

publish_image() returns bool (True if published, False if cached)
so lxc-test can report cache hits.
2026-03-07 14:40:00 +01:00
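The 'incus init' + device-override + 'incus start' split described above can be sketched as a command builder. The image name `localchat-test0` and the 10.200.200.0/24 subnet come from the commit message; the specific host address and the device-override option syntax are assumptions for illustration.

```python
def static_ip_commands(name, ip, image="localchat-base"):
    """Return the incus command sequence for a deterministic-IP launch.

    Splitting init from start lets the static address be pinned on the
    container's NIC before first boot, so DHCP never hands out a
    different one. (Option syntax here is an assumption.)
    """
    return [
        ["incus", "init", image, name],
        ["incus", "config", "device", "override", name,
         "eth0", f"ipv4.address={ip}"],
        ["incus", "start", name],
    ]


# Each relay gets a fixed address on the 10.200.200.0/24 bridge subnet:
cmds = static_ip_commands("localchat-test0", "10.200.200.10")
```

Deterministic addresses matter here because the DNS container's zone files can then reference relay IPs that never change between runs.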
holger krekel
6e52bfe8c4 refactor: extract sdist build into util with lock-based idempotency
Move _build_chatmaild from deployers.py into util.py as
build_chatmaild_sdist() with fcntl-based file locking so
parallel deploys do not race on the sdist.  The build is
called once from run_cmd() before pyinfra starts; deployers.py
now only calls get_chatmaild_sdist() to locate the pre-built
archive.

Add test_build_chatmaild_sdist and test_get_chatmaild_sdist_errors.
2026-03-07 14:38:47 +01:00
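The fcntl-based idempotency described above can be sketched as a small lock wrapper; the function name and lock-file path below are illustrative, not the actual `build_chatmaild_sdist()` signature.

```python
import fcntl
import tempfile
from pathlib import Path


def build_once_locked(lockfile: Path, build):
    """Run build() under an exclusive fcntl lock.

    A second parallel deploy blocks here until the first build finishes,
    so concurrent callers never race on producing the sdist archive.
    """
    lockfile.parent.mkdir(parents=True, exist_ok=True)
    with open(lockfile, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # released when f is closed
        return build()


tmp = Path(tempfile.mkdtemp())
result = build_once_locked(tmp / "sdist.lock", lambda: "chatmaild-0.3.tar.gz")
```

Calling the build once from `run_cmd()` before pyinfra starts (as the commit does) additionally avoids even contending for the lock during deploys.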
holger krekel
95c76aa2b0 ci: replace staging workflows with LXC-local testing
Replace the two staging-server CI workflows and their zone-file
helpers with a single lxc-test job in ci.yaml that runs
'cmdeploy lxc-test' inside an ubuntu-24.04 runner.

The new workflow installs Incus from the Zabbly apt repository,
initialises it, bootstraps the venv, caches the base LXC image
together with SSH keys, and runs the full LXC pipeline
(container creation, deploy, DNS zones, tests).
2026-03-07 14:36:27 +01:00
holger krekel
4f109e8c31 address link2xt comments (zone parsing and turn v0.4 release) 2026-03-07 07:35:07 +01:00
holger krekel
8c30714279 simplify start instructions 2026-03-06 12:12:28 +01:00
holger krekel
23f21d36b1 make helpers testable and test them, also streamline intro of docs 2026-03-06 11:01:25 +01:00
holger krekel
4ed3f5dd91 fix lxc-test to not re-run deploy when nothing changed + some other beautifications 2026-03-06 10:52:08 +01:00
holger krekel
972b46be74 don't use env vars but explicit pytest options to pass ssh info around. 2026-03-06 10:24:15 +01:00
holger krekel
7edb4e860a feat: add LXC container support for local chatmail development
Add cmdeploy "lxc-test" command to run cmdeploy against local containers,
with supplementary lxc-start, lxc-stop and lxc-status subcommands.
See doc/source/lxc.rst for full documentation including prerequisites,
DNS setup, TLS handling, DNS-free testing, and known limitations.

Apart from adding lxc-specific docs, tests, and implementation files in the cmdeploy/lxc directory,
this PR adds the --ssh-config option to cmdeploy run/dns/status/test commands and pyinfra invocations,
and also to sshexec (Execnet) handling.  This lets the host operate without any DNS entries for a relay,
routing all resolution through ssh-config.  The "lxc-test" command uses this to perform
a completely local setup -- again, see the docs for more details.

While working on DNS/SSH things I also unified all zone-file handling
to use actual BIND format, since it is easy enough to parse back.
2026-03-06 10:06:00 +01:00
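"Easy enough to parse back" might look like the sketch below for simple one-line records; this is an illustrative assumption, not the PR's actual parser, and it deliberately skips directives and the multi-line SOA block.

```python
KNOWN_TYPES = {"A", "AAAA", "NS", "CNAME", "MX", "TXT", "SRV"}


def parse_zone_records(text):
    """Parse one-line BIND-style records into (name, type, rdata) tuples.

    Strips ';' comments, ignores $ORIGIN/$TTL directives, and skips
    anything (like SOA continuation lines) that is not NAME [IN] TYPE RDATA.
    """
    records = []
    for line in text.splitlines():
        line = line.split(";", 1)[0].strip()  # drop comments
        if not line or line.startswith("$"):
            continue
        parts = [p for p in line.split() if p != "IN"]
        if len(parts) >= 3 and parts[1] in KNOWN_TYPES:
            records.append((parts[0], parts[1], " ".join(parts[2:])))
    return records


zone = """$TTL 300
@ IN NS ns.testrun.org.
@ IN A 37.27.24.139
www CNAME @
"""
recs = parse_zone_records(zone)
```

Round-tripping through real BIND syntax, as the commit chooses to do, means the same files can feed both the test DNS container and a human reader.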
75 changed files with 3347 additions and 2011 deletions

View File

@@ -1,39 +0,0 @@
name: No-DNS
on:
# Triggers when a PR is merged into main or a direct push occurs
push:
branches: [ "main" ]
# Triggers for any PR (and its subsequent commits) targeting the main branch
pull_request:
branches: [ "main" ]
permissions: {}
# Newest push wins: Prevents multiple runs from clashing and wasting runner efforts
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
no-dns:
name: LXC deploy and test
uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.14.5
with:
cmlxc_commands: |
cmlxc init
# single cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo --type ipv4 cm0
cmlxc -v test-cmdeploy cm0
# cross cmdeploy relay test (two ipv4 relays)
cmlxc -v deploy-cmdeploy --source ./repo --ipv4-only --type ipv4 cm1
cmlxc -v test-cmdeploy cm0 cm1
# cross cmdeploy/madmail relay tests
cmlxc -v deploy-madmail mad0
cmlxc -v test-cmdeploy cm0 mad0
cmlxc -v test-mini mad0 cm0
cmlxc -v test-mini cm0 mad0

View File

@@ -1,78 +1,116 @@
name: CI
on:
# Triggers when a PR is merged into main or a direct push occurs
push:
branches: [ "main" ]
# Triggers for any PR (and its subsequent commits) targeting the main branch
pull_request:
branches: [ "main" ]
permissions: {}
# Newest push wins: Prevents multiple runs from clashing and wasting runner efforts
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
push:
jobs:
tox:
name: isolated chatmaild tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
# Checkout pull request HEAD commit instead of merge commit
# Otherwise `test_deployed_state` will be unhappy.
with:
ref: ${{ github.event.pull_request.head.sha }}
persist-credentials: false
- name: download filtermail
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.4/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
- name: run chatmaild tests
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.5.2/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
- name: run chatmaild tests
working-directory: chatmaild
run: pipx run tox
scripts:
name: deploy-chatmail tests
name: deploy-chatmail tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
with:
ref: ${{ github.event.pull_request.head.sha }}
persist-credentials: false
- uses: actions/checkout@v4
- name: initenv
- name: initenv
run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: run formatting checks
run: cmdeploy fmt -v
- name: run formatting checks
run: cmdeploy fmt -v
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
lxc-test:
name: LXC deploy and test
uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.14.5
with:
cmlxc_commands: |
cmlxc init
# single cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo cm0
cmlxc -v test-mini cm0
cmlxc -v test-cmdeploy cm0
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
# cross cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo --ipv4-only cm1
cmlxc -v test-cmdeploy cm0 cm1
- name: install incus
run: |
# zabbly is the official incus community packages source
curl -fsSL https://pkgs.zabbly.com/key.asc \
| sudo gpg --dearmor -o /etc/apt/keyrings/zabbly.gpg
sudo sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.gpg
EOF'
sudo apt-get update
sudo apt-get install -y incus
# cross cmdeploy/madmail relay tests
cmlxc -v deploy-madmail mad0
cmlxc -v test-cmdeploy cm0 mad0
cmlxc -v test-mini cm0 mad0
cmlxc -v test-mini mad0 cm0
- name: initialise incus
run: |
sudo systemctl stop docker.socket docker || true
sudo iptables -P FORWARD ACCEPT
sudo sysctl -w fs.inotify.max_user_instances=65535
sudo sysctl -w fs.inotify.max_user_watches=65535
sudo incus admin init --minimal
sudo usermod -aG incus-admin "$USER"
- name: initenv
run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: restore cached images
id: cache-images
uses: actions/cache@v4
with:
path: |
/tmp/localchat-base.tar.gz
/tmp/localchat-ns.tar.gz
/tmp/localchat-test0.tar.gz
/tmp/localchat-test1.tar.gz
lxconfigs/id_localchat*
key: incus-images-${{ runner.os }}-${{ github.ref_name }}
restore-keys: |
incus-images-${{ runner.os }}-${{ github.ref_name }}-
incus-images-${{ runner.os }}-main-
incus-images-${{ runner.os }}-
- name: import cached images
run: |
for alias in localchat-base localchat-ns localchat-test0 localchat-test1; do
if [ -f /tmp/$alias.tar.gz ]; then
sg incus-admin -c "incus image import /tmp/$alias.tar.gz --alias $alias" || true
fi
done
- name: lxc-test
run: sg incus-admin -c 'cmdeploy lxc-test'
- name: export images for cache
if: always()
run: |
for alias in localchat-base localchat-ns localchat-test0 localchat-test1; do
if ! [ -f /tmp/$alias.tar.gz ]; then
sg incus-admin -c "incus image export $alias /tmp/$alias" || true
fi
done

View File

@@ -1,37 +0,0 @@
# Notify the docker repo to build and test a new image after relay CI passes.
#
# Sends a repository_dispatch event to chatmail/docker with the relay ref
# and short SHA, which triggers docker-ci.yaml to build, push to GHCR,
# and run integration tests via cmlxc.
name: Trigger Docker build
on:
push:
branches: [main]
workflow_dispatch:
permissions: {}
jobs:
dispatch:
name: Dispatch build to chatmail/docker
runs-on: ubuntu-latest
if: github.repository == 'chatmail/relay'
steps:
- name: Compute short SHA
id: sha
run: echo "short=$(echo '${{ github.sha }}' | cut -c1-7)" >> "$GITHUB_OUTPUT"
- name: Send repository_dispatch
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3
with:
token: ${{ secrets.CHATMAIL_DOCKER_DISPATCH_TOKEN }}
repository: chatmail/docker
event-type: relay-updated
client-payload: >-
{
"relay_ref": "${{ github.ref_name }}",
"relay_sha": "${{ github.sha }}",
"relay_sha_short": "${{ steps.sha.outputs.short }}"
}

View File

@@ -7,8 +7,6 @@ on:
- 'scripts/build-docs.sh'
- '.github/workflows/docs-preview.yaml'
permissions: {}
jobs:
scripts:
name: build
@@ -18,8 +16,6 @@ jobs:
url: https://staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: initenv
run: scripts/initenv.sh
@@ -38,22 +34,18 @@ jobs:
- name: Get Pullrequest ID
id: prepare
run: |
export PULLREQUEST_ID=$(echo "${GITHUB_REF}" | cut -d "/" -f3)
export PULLREQUEST_ID=$(echo "${{ github.ref }}" | cut -d "/" -f3)
echo "prid=$PULLREQUEST_ID" >> $GITHUB_OUTPUT
if [ $(expr length "${{ secrets.USERNAME }}") -gt "1" ]; then echo "uploadtoserver=true" >> $GITHUB_OUTPUT; fi
- run: |
echo "baseurl: /${STEPS_PREPARE_OUTPUTS_PRID}" >> _config.yml
env:
STEPS_PREPARE_OUTPUTS_PRID: ${{ steps.prepare.outputs.prid }}
echo "baseurl: /${{ steps.prepare.outputs.prid }}" >> _config.yml
- name: Upload preview
run: |
mkdir -p "$HOME/.ssh"
echo "${{ secrets.CHATMAIL_STAGING_SSHKEY }}" > "$HOME/.ssh/key"
chmod 600 "$HOME/.ssh/key"
rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/staging.chatmail.at/doc/relay/${STEPS_PREPARE_OUTPUTS_PRID}/"
env:
STEPS_PREPARE_OUTPUTS_PRID: ${{ steps.prepare.outputs.prid }}
rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}/"
- name: check links
working-directory: doc

View File

@@ -10,8 +10,6 @@ on:
- 'scripts/build-docs.sh'
- '.github/workflows/docs.yaml'
permissions: {}
jobs:
scripts:
name: build
@@ -21,8 +19,6 @@ jobs:
url: https://chatmail.at/doc/relay/
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: initenv
run: scripts/initenv.sh

View File

@@ -1,20 +0,0 @@
;; Zone file for staging-ipv4.testrun.org
$ORIGIN staging-ipv4.testrun.org.
$TTL 300
@ IN SOA ns.testrun.org. root.nine.testrun.org (
2023010101 ; Serial
7200 ; Refresh
3600 ; Retry
1209600 ; Expire
3600 ; Negative response caching TTL
)
;; Nameservers.
@ IN NS ns.testrun.org.
;; DNS records.
@ IN A 37.27.95.249
mta-sts.staging-ipv4.testrun.org. CNAME staging-ipv4.testrun.org.
www.staging-ipv4.testrun.org. CNAME staging-ipv4.testrun.org.

View File

@@ -1,21 +0,0 @@
;; Zone file for staging2.testrun.org
$ORIGIN staging2.testrun.org.
$TTL 300
@ IN SOA ns.testrun.org. root.nine.testrun.org (
2023010101 ; Serial
7200 ; Refresh
3600 ; Retry
1209600 ; Expire
3600 ; Negative response caching TTL
)
;; Nameservers.
@ IN NS ns.testrun.org.
;; DNS records.
@ IN A 37.27.24.139
mta-sts.staging2.testrun.org. CNAME staging2.testrun.org.
www.staging2.testrun.org. CNAME staging2.testrun.org.

View File

@@ -1,26 +0,0 @@
name: GitHub Actions Security Analysis with zizmor
on:
push:
branches: ["main"]
pull_request:
branches: ["**"]
permissions: {}
jobs:
zizmor:
name: Run zizmor
runs-on: ubuntu-latest
permissions:
security-events: write # Required for upload-sarif (used by zizmor-action) to upload SARIF files.
contents: read
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@b1d7e1fb5de872772f31590499237e7cce841e8e # v0.5.3

.github/zizmor.yml vendored
View File

@@ -1,7 +0,0 @@
rules:
unpinned-uses:
config:
policies:
actions/*: ref-pin
dependabot/*: ref-pin
chatmail/*: ref-pin

.gitignore vendored
View File

@@ -5,6 +5,7 @@ __pycache__/
*.swp
*qr-*.png
chatmail*.ini
lxconfigs/
# C extensions

View File

@@ -1,89 +1,5 @@
# Changelog for chatmail deployment
## 1.10.0 2026-04-30
* start mtail after networking is fully up <https://github.com/chatmail/relay/pull/942>
* support specifying custom filtermail binary through environment variable <https://github.com/chatmail/relay/pull/941>
* add automated zizmor scanning of github workflows <https://github.com/chatmail/relay/pull/938>
* added dispatch for *automated builds of chatmail relay docker images* <https://github.com/chatmail/relay/pull/934>
* do not bind SMTP client sockets to public addresses <https://github.com/chatmail/relay/pull/932>
* underline in docs that scripts/initenv.sh should be used for building the docs <https://github.com/chatmail/relay/pull/933>
* automatic oldest-first message removal from mailboxes to always stay under max_mailbox_size <https://github.com/chatmail/relay/pull/929>
* remove --slow from cmdeploy test <https://github.com/chatmail/relay/pull/931>
* handle missing inotify sysctl keys in containers <https://github.com/chatmail/relay/pull/930>
* replace resolvconf with static resolv.conf <https://github.com/chatmail/relay/pull/928>
* disable fsync for LMTP and IMAP services <https://github.com/chatmail/relay/pull/925>
* re-use cmlxc workflow, replacing CI with hetzner staging servers with local lxc containers <https://github.com/chatmail/relay/pull/917>
* explicitly install resolvconf <https://github.com/chatmail/relay/pull/924>
* detect stale dovecot binary and force restart in activate() <https://github.com/chatmail/relay/pull/922>
* Rename filtermail_http_port to filtermail_http_port_incoming <https://github.com/chatmail/relay/pull/921>
* consolidated is_in_container() check <https://github.com/chatmail/relay/pull/920>
* restart dovecot after package replacement (rebase, test condense) <https://github.com/chatmail/relay/pull/913>
* Set permissions on dovecot pin prefs <https://github.com/chatmail/relay/pull/915>
* Route `/mxdeliv/` to configurable port <https://github.com/chatmail/relay/pull/901>
* fix VM detection, automated testing fixes, use newer chatmail-turn and move to standard BIND DNS zone format <https://github.com/chatmail/relay/pull/912>
* Upgrade to filtermail 0.6.1 <https://github.com/chatmail/relay/pull/910>
* pin dovecot packages to prevent apt upgrades <https://github.com/chatmail/relay/pull/908>
* add rpc server to cmdeploy along with client <https://github.com/chatmail/relay/pull/906>
* remove unused deps from chatmaild <https://github.com/chatmail/relay/pull/905>
* set default smtp_tls_security_level to "verify" unconditionally <https://github.com/chatmail/relay/pull/902>
* prefer IPv4 in SMTP client <https://github.com/chatmail/relay/pull/900>
* Install dovecot .deb packages atomically <https://github.com/chatmail/relay/pull/899>
* stop installing cron package <https://github.com/chatmail/relay/pull/898>
* Rewrite dovecot install logic, update <https://github.com/chatmail/relay/pull/862>
* fix a test and some linting fixes <https://github.com/chatmail/relay/pull/897>
* Disable IP verification on domain-literal addresses <https://github.com/chatmail/relay/pull/895>
* disable installing recommended packages globally on the relay <https://github.com/chatmail/relay/pull/887>
* multiple bug fixes across chatmaild and cmdeploy <https://github.com/chatmail/relay/pull/883>
* remove /metrics from the website <https://github.com/chatmail/relay/pull/703>
* add Prometheus textfile output to fsreport <https://github.com/chatmail/relay/pull/881>
* chown opendkim: private key <https://github.com/chatmail/relay/pull/879>
* make sure chatmail-metadata was started <https://github.com/chatmail/relay/pull/882>
* dovecot update url <https://github.com/chatmail/relay/pull/880>
* upgrade to filtermail v0.5.2 <https://github.com/chatmail/relay/pull/876>
* download dovecot packages from github release <https://github.com/chatmail/relay/pull/875>
* replace DKIM verification with filtermail v0.5 <https://github.com/chatmail/relay/pull/831>
* remove CFFI deltachat bindings usage, and consolidate test support with rpc-bindings <https://github.com/chatmail/relay/pull/872>
* prepare chatmaild/cmdeploy changes for Docker support <https://github.com/chatmail/relay/pull/857>
* stabilize online benchmark timing adding rate-limit-aware cooldown between iterations <https://github.com/chatmail/relay/pull/867>
* move rate-limit cooldown to benchmark fixture <https://github.com/chatmail/relay/pull/868>
* reconfigure acmetool from redirector to proxy mode <https://github.com/chatmail/relay/pull/861>
* make tests work with `--ssh-host localhost` <https://github.com/chatmail/relay/pull/856>
* mark f-string with f prefix in test_expunged <https://github.com/chatmail/relay/pull/863>
* install also if dovecot.service=False in SystemdEnabled Fact <https://github.com/chatmail/relay/pull/841>
* Introduce support for self-signed chatmail relays <https://github.com/chatmail/relay/pull/855>
* Strip Received headers before delivery <https://github.com/chatmail/relay/pull/849>
* upgrade to filtermail v0.3 <https://github.com/chatmail/relay/pull/850>
* fix link to Maddy and update madmail URL <https://github.com/chatmail/relay/pull/847>
* accept self-signed certificates for IP-only relays <https://github.com/chatmail/relay/pull/846>
* enforce sending from public IP addresses <https://github.com/chatmail/relay/pull/845>
* port check: check addresses, fix single services <https://github.com/chatmail/relay/pull/844>
* remediates issue with improper concat on resolver injection <https://github.com/chatmail/relay/pull/834>
* ipv6 boolean not being respected during operations <https://github.com/chatmail/relay/pull/832>
* upgrade to filtermail v0.2 by <https://github.com/chatmail/relay/pull/825>
* fix link to filtermail <https://github.com/chatmail/relay/pull/824>
* print timestamps when sending messages <https://github.com/chatmail/relay/pull/823>
* fix flaky test_exceed_rate_limit <https://github.com/chatmail/relay/pull/822>
* Replace filtermail with rust reimplementation <https://github.com/chatmail/relay/pull/808>
* Set default internal SMTP ports in Config <https://github.com/chatmail/relay/pull/819>
* separate metrics for incoming and outgoing messages <https://github.com/chatmail/relay/pull/820>
* disable appending the Received header <https://github.com/chatmail/relay/pull/815>
* fail on errors in postfix/dovecot config <https://github.com/chatmail/relay/pull/813>
* tweak idle/hibernate metrics some more <https://github.com/chatmail/relay/pull/811>
* add config flag to export statistics <https://github.com/chatmail/relay/pull/806>
* add --website-only option to run subcommand <https://github.com/chatmail/relay/pull/768>
* Strip DKIM-Signature header before LMTP <https://github.com/chatmail/relay/pull/803>
* properly make sure that postfix gets restarted on failure <https://github.com/chatmail/relay/pull/802>
* expire.py: use absolute path to maildirsize <https://github.com/chatmail/relay/pull/807>
* pin Dovecot documentation URLs to version 2.3 <https://github.com/chatmail/relay/pull/800>
* try to use "build machine" and "deployment server" consistently <https://github.com/chatmail/relay/pull/797>
* adds instructions for migrating control machines <https://github.com/chatmail/relay/pull/795>
* use consistent naming schema in getting started <https://github.com/chatmail/relay/pull/793>
* remove jsok/serialize-workflow-action dependency <https://github.com/chatmail/relay/pull/790>
* streamline migration guide wording, provide titled steps <https://github.com/chatmail/relay/pull/789>
* increases default max mailbox size <https://github.com/chatmail/relay/pull/792>
* use daemon_name for OpenDKIM sign-verify decision instead of IP <https://github.com/chatmail/relay/pull/784>
## 1.9.0 2025-12-18
### Documentation

View File

@@ -6,11 +6,13 @@ build-backend = "setuptools.build_meta"
name = "chatmaild"
version = "0.3"
dependencies = [
"aiosmtpd",
"iniconfig",
"deltachat-rpc-server",
"deltachat-rpc-client",
"filelock",
"requests",
"crypt-r >= 3.13.1 ; python_version >= '3.11'",
"domain-validator",
]
[tool.setuptools]
@@ -22,8 +24,7 @@ where = ['src']
[project.scripts]
doveauth = "chatmaild.doveauth:main"
chatmail-metadata = "chatmaild.metadata:main"
chatmail-expire = "chatmaild.expire:daily_expire_main"
chatmail-quota-expire = "chatmaild.expire:quota_expire_main"
chatmail-expire = "chatmaild.expire:main"
chatmail-fsreport = "chatmaild.fsreport:main"
lastlogin = "chatmaild.lastlogin:main"
turnserver = "chatmaild.turnserver:main"
@@ -69,7 +70,6 @@ commands =
deps = pytest
pdbpp
pytest-localserver
aiosmtpd
execnet
commands = pytest -v -rsXx {posargs}
"""

View File

@@ -1,8 +1,7 @@
import ipaddress
import os
from pathlib import Path
import iniconfig
from domain_validator import DomainValidator
from chatmaild.user import User
@@ -21,19 +20,7 @@ def read_config(inipath):
class Config:
def __init__(self, inipath, params):
self._inipath = inipath
raw_domain = params["mail_domain"]
self.mail_domain_bare = raw_domain
if is_valid_ipv4(raw_domain):
self.ipv4_relay = raw_domain
self.mail_domain = f"[{raw_domain}]"
self.postfix_myhostname = ipaddress.IPv4Address(raw_domain).reverse_pointer
else:
DomainValidator().validate_domain_re(raw_domain)
self.ipv4_relay = None
self.mail_domain = raw_domain
self.postfix_myhostname = raw_domain
self.mail_domain = params["mail_domain"]
self.max_user_send_per_minute = int(params.get("max_user_send_per_minute", 60))
self.max_user_send_burst_size = int(params.get("max_user_send_burst_size", 10))
self.max_mailbox_size = params["max_mailbox_size"]
@@ -51,23 +38,19 @@ class Config:
self.filtermail_smtp_port_incoming = int(
params.get("filtermail_smtp_port_incoming", "10081")
)
self.filtermail_http_port_incoming = int(
params.get("filtermail_http_port_incoming", "10082")
)
self.filtermail_lmtp_port_transport = int(
params.get("filtermail_lmtp_port_transport", "10083")
)
self.postfix_reinject_port = int(params.get("postfix_reinject_port", "10025"))
self.postfix_reinject_port_incoming = int(
params.get("postfix_reinject_port_incoming", "10026")
)
self.mtail_address = params.get("mtail_address")
self.disable_ipv6 = params.get("disable_ipv6", "false").lower() == "true"
self.addr_v4 = os.environ.get("CHATMAIL_ADDR_V4", "")
self.addr_v6 = os.environ.get("CHATMAIL_ADDR_V6", "")
self.acme_email = params.get("acme_email", "")
self.imap_rawlog = params.get("imap_rawlog", "false").lower() == "true"
self.imap_compress = params.get("imap_compress", "false").lower() == "true"
if "iroh_relay" not in params:
self.iroh_relay = "https://" + raw_domain
self.iroh_relay = "https://" + params["mail_domain"]
self.enable_iroh_relay = True
else:
self.iroh_relay = params["iroh_relay"].strip()
@@ -93,27 +76,22 @@ class Config:
)
self.tls_cert_mode = "external"
self.tls_cert_path, self.tls_key_path = parts
elif raw_domain.startswith("_") or self.ipv4_relay:
elif self.mail_domain.startswith("_"):
self.tls_cert_mode = "self"
self.tls_cert_path = "/etc/ssl/certs/mailserver.pem"
self.tls_key_path = "/etc/ssl/private/mailserver.key"
else:
self.tls_cert_mode = "acme"
self.tls_cert_path = f"/var/lib/acme/live/{raw_domain}/fullchain"
self.tls_key_path = f"/var/lib/acme/live/{raw_domain}/privkey"
self.tls_cert_path = f"/var/lib/acme/live/{self.mail_domain}/fullchain"
self.tls_key_path = f"/var/lib/acme/live/{self.mail_domain}/privkey"
# deprecated option
mbdir = params.get("mailboxes_dir", f"/home/vmail/mail/{raw_domain}")
mbdir = params.get("mailboxes_dir", f"/home/vmail/mail/{self.mail_domain}")
self.mailboxes_dir = Path(mbdir.strip())
# old unused option (except for first migration from sqlite to maildir store)
self.passdb_path = Path(params.get("passdb_path", "/home/vmail/passdb.sqlite"))
@property
def max_mailbox_size_mb(self):
"""Return max_mailbox_size as an integer in megabytes."""
return parse_size_mb(self.max_mailbox_size)
def _getbytefile(self):
return open(self._inipath, "rb")
@@ -127,16 +105,6 @@ class Config:
return User(maildir, addr, password_path, uid="vmail", gid="vmail")
def parse_size_mb(limit):
"""Parse a size string like ``500M`` or ``2G`` and return megabytes."""
value = limit.strip().upper().removesuffix("B")
if value.endswith("G"):
return int(value[:-1]) * 1024
if value.endswith("M"):
return int(value[:-1])
return int(value)
def write_initial_config(inipath, mail_domain, overrides):
"""Write out default config file, using the specified config value overrides."""
content = get_default_config_content(mail_domain, **overrides)
@@ -189,19 +157,3 @@ def get_default_config_content(mail_domain, **overrides):
lines.append(line)
content = "\n".join(lines)
return content
def is_valid_ipv4(address: str) -> bool:
"""Check if a mail_domain is an IPv4 address."""
try:
ipaddress.IPv4Address(address)
return True
except ValueError:
return False
def format_mail_domain(raw_domain: str) -> str:
if is_valid_ipv4(raw_domain):
return f"[{raw_domain}]"
DomainValidator().validate_domain_re(raw_domain)
return raw_domain

View File

@@ -4,26 +4,17 @@ Expire old messages and addresses.
"""
import os
import re
import shutil
import sys
import time
from argparse import ArgumentParser
from collections import namedtuple
from datetime import datetime
from pathlib import Path
from stat import S_ISREG
from chatmaild.config import read_config
FileEntry = namedtuple("FileEntry", ("path", "mtime", "size"))
QuotaFileEntry = namedtuple("QuotaFileEntry", ("mtime", "quota_size", "path"))
# Quota cleanup factor of max_mailbox_size. The mailbox is reset to this size.
QUOTA_CLEANUP_FACTOR = 0.7
# e.g. "cur/1775324677.M448978P3029757.exam,S=3235,W=3305:2,S"
_dovecot_fn_rex = re.compile(r".+/(\d+)\..+,S=(\d+)")
def iter_mailboxes(basedir, maxnum):
@@ -83,42 +74,6 @@ class MailboxStat:
self.extrafiles.sort(key=lambda x: -x.size)
def parse_dovecot_filename(relpath):
m = _dovecot_fn_rex.match(relpath)
if not m:
return None
return QuotaFileEntry(int(m.group(1)), int(m.group(2)), relpath)
def scan_mailbox_messages(mbox):
messages = []
for sub in ("cur", "new"):
for name in os_listdir_if_exists(mbox / sub):
if entry := parse_dovecot_filename(f"{sub}/{name}"):
messages.append(entry)
return messages
def expire_to_target(mbox, target_bytes):
messages = scan_mailbox_messages(mbox)
total_size = sum(m.quota_size for m in messages)
# Keep recent 24 hours of messages protected from expiry because
# likely something is wrong with interactions on that address
# and quota-full signal can help the address owner's device to notice it
undeletable_messages_cutoff = time.time() - (3600 * 24)
removed = 0
for entry in sorted(messages):
if total_size <= target_bytes:
break
if entry.mtime > undeletable_messages_cutoff:
break
(mbox / entry.path).unlink(missing_ok=True)
total_size -= entry.quota_size
removed += 1
return removed
def print_info(msg):
print(msg, file=sys.stderr)
@@ -188,19 +143,6 @@ class Expiry:
else:
continue
changed = True
target_bytes = (
self.config.max_mailbox_size_mb * 1024 * 1024 * QUOTA_CLEANUP_FACTOR
)
removed = expire_to_target(Path(mbox.basedir), target_bytes)
if removed:
changed = True
self.del_files += removed
if self.verbose:
print_info(
f"quota-expire: removed {removed} message(s) from {mboxname}"
)
if changed:
self.remove_file(f"{mbox.basedir}/maildirsize")
@@ -212,9 +154,9 @@ class Expiry:
)
def daily_expire_main(args=None):
def main(args=None):
"""Expire mailboxes and messages according to chatmail config"""
parser = ArgumentParser(description=daily_expire_main.__doc__)
parser = ArgumentParser(description=main.__doc__)
ini = "/usr/local/lib/chatmaild/chatmail.ini"
parser.add_argument(
"chatmail_ini",
@@ -260,33 +202,5 @@ def daily_expire_main(args=None):
print(exp.get_summary())
def quota_expire_main(args=None):
"""Remove mailbox messages to stay within a megabyte target.
This entry point is called by dovecot when a quota threshold is passed.
"""
parser = ArgumentParser(description=quota_expire_main.__doc__)
parser.add_argument(
"target_mb",
type=int,
help="target mailbox size in megabytes",
)
parser.add_argument(
"mailbox_path",
type=Path,
help="path to a user mailbox",
)
args = parser.parse_args(args)
target_bytes = args.target_mb * 1024 * 1024
removed_count = expire_to_target(args.mailbox_path, target_bytes)
if removed_count:
(args.mailbox_path / "maildirsize").unlink(missing_ok=True)
print(
f"quota-expire: removed {removed_count} message(s)"
f" from {args.mailbox_path.name}",
file=sys.stderr,
)
return 0
if __name__ == "__main__":
main(sys.argv[1:])

View File

@@ -18,7 +18,6 @@ max_user_send_per_minute = 60
max_user_send_burst_size = 10
# maximum mailbox size of a chatmail address
# Oldest messages will be removed automatically, so mailboxes never run full.
max_mailbox_size = 500M
# maximum message size for an e-mail in bytes

View File

@@ -70,9 +70,6 @@ class Metadata:
# Some tokens have expired, remove them.
with self._modify_tokens(addr) as _tokens:
pass
elif isinstance(tokens, list):
with self._modify_tokens(addr) as tokens:
token_list = list(tokens.keys())
else:
token_list = []
return token_list

View File

@@ -25,19 +25,13 @@ def create_newemail_dict(config: Config):
return dict(email=f"{user}@{config.mail_domain}", password=f"{password}")
def create_dclogin_url(config, email, password):
def create_dclogin_url(email, password):
"""Build a dclogin: URL with credentials and self-signed cert acceptance.
Uses ic=3 (AcceptInvalidCertificates) so chatmail clients
can connect to servers with self-signed TLS certificates.
"""
if config.ipv4_relay:
imap_host = "&ih=" + config.ipv4_relay
smtp_host = "&sh=" + config.ipv4_relay
else:
imap_host = ""
smtp_host = ""
return f"dclogin:{quote(email, safe='@[]')}?p={quote(password, safe='')}&v=1{imap_host}{smtp_host}&ic=3"
return f"dclogin:{quote(email, safe='@')}?p={quote(password, safe='')}&v=1&ic=3"
def print_new_account():
@@ -46,9 +40,7 @@ def print_new_account():
result = dict(email=creds["email"], password=creds["password"])
if config.tls_cert_mode == "self":
result["dclogin_url"] = create_dclogin_url(
config, creds["email"], creds["password"]
)
result["dclogin_url"] = create_dclogin_url(creds["email"], creds["password"])
print("Content-Type: application/json")
print("")

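The `dclogin:` URL built above relies on `urllib.parse.quote` keeping `@` literal in the address part while percent-encoding everything unsafe in the password. A standalone sketch reproducing that encoding (hypothetical `build_dclogin_url`, mirroring the shape of the project's `create_dclogin_url` but not the project code itself):

```python
from urllib.parse import quote


def build_dclogin_url(email: str, password: str) -> str:
    # '@' stays literal in the address; in the password nothing is safe,
    # so "@" -> %40, " " -> %20, "+" -> %2B.
    return (
        f"dclogin:{quote(email, safe='@')}"
        f"?p={quote(password, safe='')}&v=1&ic=3"
    )


url = build_dclogin_url("user@example.org", "p@ss w+rd")
assert url == "dclogin:user@example.org?p=p%40ss%20w%2Brd&v=1&ic=3"
```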
View File

@@ -31,11 +31,6 @@ def example_config(make_config):
return make_config("chat.example.org")
@pytest.fixture
def ipv4_config(make_config):
return make_config("1.3.3.7")
@pytest.fixture
def maildomain(example_config):
return example_config.mail_domain

View File

@@ -1,13 +1,6 @@
from contextlib import nullcontext as does_not_raise
import pytest
from chatmaild.config import (
format_mail_domain,
is_valid_ipv4,
parse_size_mb,
read_config,
)
from chatmaild.config import read_config
def test_read_config_basic(example_config):
@@ -20,12 +13,6 @@ def test_read_config_basic(example_config):
example_config = read_config(inipath)
assert example_config.max_user_send_per_minute == 37
assert example_config.mail_domain == "chat.example.org"
assert example_config.ipv4_relay is None
def test_read_config_ipv4(ipv4_config):
assert ipv4_config.ipv4_relay == "1.3.3.7"
assert ipv4_config.mail_domain == "[1.3.3.7]"
def test_read_config_basic_using_defaults(tmp_path, maildomain):
@@ -134,45 +121,3 @@ def test_config_tls_external_bad_format(make_config):
"tls_external_cert_and_key": "/only/one/path.pem",
},
)
def test_parse_size_mb():
assert parse_size_mb("500M") == 500
assert parse_size_mb("2G") == 2048
assert parse_size_mb(" 1g ") == 1024
assert parse_size_mb("100MB") == 100
assert parse_size_mb("256") == 256
def test_max_mailbox_size_mb(make_config):
config = make_config("chat.example.org")
assert config.max_mailbox_size == "500M"
assert config.max_mailbox_size_mb == 500
@pytest.mark.parametrize(
["input", "result"],
[
("example.org", False),
("1.3.3.7", True),
("fe::1", False),
("ad.1e.dag.adf", False),
("12394142", False),
],
)
def test_is_valid_ipv4(input, result):
assert result == is_valid_ipv4(input)
@pytest.mark.parametrize(
["input", "result", "exception"],
[
("example.org", "example.org", does_not_raise()),
("1.3.3.7", "[1.3.3.7]", does_not_raise()),
("fe::1", None, pytest.raises(ValueError)),
("12394142", None, pytest.raises(ValueError)),
],
)
def test_format_mail_domain(input, result, exception):
with exception:
assert result == format_mail_domain(input)

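The removed tests above document the expected behaviour of `parse_size_mb` ("M"/"MB" pass through, "G"/"GB" multiply by 1024, a bare number is already megabytes). One plausible implementation matching exactly those cases (a sketch, not the project's actual code):

```python
def parse_size_mb(value: str) -> int:
    """Parse a human-readable size ('500M', '2G', ' 1g ', '256') into MB."""
    s = str(value).strip().upper().rstrip("B")  # accept "MB"/"GB" suffixes too
    if s.endswith("G"):
        return int(s[:-1]) * 1024
    if s.endswith("M"):
        return int(s[:-1])
    return int(s)  # bare number is already megabytes


assert parse_size_mb("2G") == 2048
assert parse_size_mb("100MB") == 100
```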
View File

@@ -1,7 +1,7 @@
import time
from chatmaild.doveauth import AuthDictProxy
from chatmaild.expire import daily_expire_main as main_expire
from chatmaild.expire import main as main_expire
def test_login_timestamps(example_config):

View File

@@ -1,7 +1,5 @@
import itertools
import os
import random
import time
from datetime import datetime
from fnmatch import fnmatch
from pathlib import Path
@@ -11,19 +9,13 @@ import pytest
from chatmaild.expire import (
FileEntry,
MailboxStat,
expire_to_target,
get_file_entry,
iter_mailboxes,
os_listdir_if_exists,
parse_dovecot_filename,
quota_expire_main,
scan_mailbox_messages,
)
from chatmaild.expire import daily_expire_main as expiry_main
from chatmaild.expire import main as expiry_main
from chatmaild.fsreport import main as report_main
MB = 1024 * 1024
def fill_mbox(folderdir):
password = folderdir.joinpath("password")
@@ -204,51 +196,3 @@ def test_os_listdir_if_exists(tmp_path):
tmp_path.joinpath("x").write_text("hello")
assert len(os_listdir_if_exists(str(tmp_path))) == 1
assert len(os_listdir_if_exists(str(tmp_path.joinpath("123123")))) == 0
# --- quota expire tests ---
_msg_counter = itertools.count(1)
def _create_message(basedir, sub, size, days_old=0, disk_size=None):
seq = next(_msg_counter)
mtime = int(time.time() - days_old * 86400)
name = f"{mtime}.M1P1Q{seq}.hostname,S={size},W={size}:2,S"
path = basedir / sub / name
path.parent.mkdir(parents=True, exist_ok=True)
path.write_bytes(b"x" * (disk_size if disk_size is not None else size))
os.utime(path, (mtime, mtime))
return path
def test_parse_dovecot_filename():
e = parse_dovecot_filename("cur/1775324677.M448978P3029757.exam,S=3235,W=3305:2,S")
assert e.path == "cur/1775324677.M448978P3029757.exam,S=3235,W=3305:2,S"
assert e.mtime == 1775324677
assert e.quota_size == 3235
assert parse_dovecot_filename("cur/msg_without_structure") is None
def test_expire_to_target(tmp_path):
_create_message(tmp_path, "cur", MB, days_old=10, disk_size=100)
_create_message(tmp_path, "new", MB, days_old=5)
_create_message(tmp_path, "cur", MB, days_old=0) # undeletable (<1 hour)
assert len(scan_mailbox_messages(tmp_path)) == 3
# removes oldest first, uses S= size not disk size
removed = expire_to_target(tmp_path, MB)
assert removed == 2
msgs = scan_mailbox_messages(tmp_path)
assert len(msgs) == 1
# the surviving message is the fresh undeletable one
assert msgs[0].mtime > time.time() - 3600
def test_quota_expire_main(tmp_path, capsys):
mbox = tmp_path / "user@example.org"
_create_message(mbox, "cur", 2 * MB, days_old=5)
(mbox / "maildirsize").write_text("x")
quota_expire_main([str(1), str(mbox)])
_, err = capsys.readouterr()
assert "quota-expire: removed 1 message(s) from user@example.org" in err
assert not (mbox / "maildirsize").exists()

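The maildir filenames in these tests embed the delivery timestamp and dovecot's quota size field (`,S=<bytes>`). A minimal regex-based parser matching the tested behaviour (a hypothetical `parse_maildir_name`, not the project's `parse_dovecot_filename`):

```python
import re

# "<mtime>.<unique>,S=<quota_size>,W=<wire_size>:2,<flags>" with an
# optional "cur/"/"new/" directory prefix; names without the S= field
# carry no quota information and yield None.
_MAILDIR_PAT = re.compile(r"(?:.*/)?(\d+)\.[^,]*,S=(\d+)")


def parse_maildir_name(name: str):
    m = _MAILDIR_PAT.match(name)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))


entry = parse_maildir_name(
    "cur/1775324677.M448978P3029757.exam,S=3235,W=3305:2,S"
)
assert entry == (1775324677, 3235)
```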
View File

@@ -372,14 +372,3 @@ def test_iroh_relay(dictproxy):
dictproxy.iroh_relay = "https://example.org/"
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b"Ohttps://example.org/\n"
def test_legacy_token_migration(metadata, testaddr):
with metadata.get_metadata_dict(testaddr).modify() as data:
data[metadata.DEVICETOKEN_KEY] = ["oldtoken1", "oldtoken2"]
assert metadata.get_tokens_for_addr(testaddr) == ["oldtoken1", "oldtoken2"]
mdict = metadata.get_metadata_dict(testaddr).read()
tokens = mdict[metadata.DEVICETOKEN_KEY]
assert isinstance(tokens, dict)
assert "oldtoken1" in tokens and "oldtoken2" in tokens

View File

@@ -48,8 +48,6 @@ def test_migration(tmp_path, example_config, caplog):
assert passdb_path.stat().st_size > 10000
example_config.passdb_path = passdb_path
# ensure logging.info records are captured regardless of global configuration
caplog.set_level("INFO")
assert not caplog.records

View File

@@ -19,35 +19,18 @@ def test_create_newemail_dict(example_config):
assert ac1["password"] != ac2["password"]
def test_create_newemail_dict_ip(ipv4_config):
ac = create_newemail_dict(ipv4_config)
assert ac["email"].endswith("@[1.3.3.7]")
def test_create_dclogin_url(example_config):
addr = "user@example.org"
password = "p@ss w+rd"
url = create_dclogin_url(example_config, addr, password)
def test_create_dclogin_url():
url = create_dclogin_url("user@example.org", "p@ss w+rd")
assert url.startswith("dclogin:")
assert "v=1" in url
assert "ic=3" in url
assert addr in url
assert "user@example.org" in url
# password special chars must be encoded
assert "p%40ss" in url
assert "w%2Brd" in url
def test_create_dclogin_url_ipv4(ipv4_config):
addr = "user@[1.3.3.7]"
password = "p@ss w+rd"
url = create_dclogin_url(ipv4_config, addr, password)
assert url.startswith("dclogin:")
assert "v=1" in url
assert "ic=3" in url
assert addr in url
def test_print_new_account(capsys, monkeypatch, maildomain, tmpdir, example_config):
monkeypatch.setattr(chatmaild.newemail, "CONFIG_PATH", str(example_config._inipath))
print_new_account()

View File

@@ -10,6 +10,7 @@ dependencies = [
"pillow",
"qrcode",
"markdown",
"pytest",
"setuptools>=68",
"termcolor",
"build",
@@ -20,7 +21,6 @@ dependencies = [
"execnet",
"imap_tools",
"deltachat-rpc-client",
"deltachat-rpc-server",
]
[project.scripts]

View File

@@ -1,4 +1,6 @@
from pyinfra.operations import apt, server
import importlib.resources
from pyinfra.operations import apt, files, server, systemd
from ..basedeploy import Deployer
@@ -7,6 +9,9 @@ class AcmetoolDeployer(Deployer):
def __init__(self, email, domains):
self.domains = domains
self.email = email
self.need_restart_redirector = False
self.need_restart_reconcile_service = False
self.need_restart_reconcile_timer = False
def install(self):
apt.packages(
@@ -14,41 +19,121 @@ class AcmetoolDeployer(Deployer):
packages=["acmetool"],
)
self.remove_file("/etc/cron.d/acmetool")
files.file(
name="Remove old acmetool cronjob, it is replaced with systemd timer.",
path="/etc/cron.d/acmetool",
present=False,
)
self.put_executable("acmetool/acmetool.hook", "/etc/acme/hooks/nginx")
self.remove_file("/usr/lib/acme/hooks/nginx")
files.put(
name="Install acmetool hook.",
src=importlib.resources.files(__package__)
.joinpath("acmetool.hook")
.open("rb"),
dest="/etc/acme/hooks/nginx",
user="root",
group="root",
mode="755",
)
files.file(
name="Remove acmetool hook from the wrong location where it was previously installed.",
path="/usr/lib/acme/hooks/nginx",
present=False,
)
def configure(self):
self.put_template(
"acmetool/response-file.yaml.j2",
"/var/lib/acme/conf/responses",
files.template(
src=importlib.resources.files(__package__).joinpath(
"response-file.yaml.j2"
),
dest="/var/lib/acme/conf/responses",
user="root",
group="root",
mode="644",
email=self.email,
)
self.put_template(
"acmetool/target.yaml.j2",
"/var/lib/acme/conf/target",
files.template(
src=importlib.resources.files(__package__).joinpath("target.yaml.j2"),
dest="/var/lib/acme/conf/target",
user="root",
group="root",
mode="644",
)
server.shell(
name=f"Remove old acmetool desired files for {self.domains[0]}",
commands=[f"rm -f /var/lib/acme/desired/{self.domains[0]}-*"],
)
self.put_template(
"acmetool/desired.yaml.j2",
f"/var/lib/acme/desired/{self.domains[0]}",
files.template(
src=importlib.resources.files(__package__).joinpath("desired.yaml.j2"),
dest=f"/var/lib/acme/desired/{self.domains[0]}", # 0 is mailhost TLD
user="root",
group="root",
mode="644",
domains=self.domains,
)
self.ensure_systemd_unit("acmetool/acmetool-redirector.service")
self.ensure_systemd_unit("acmetool/acmetool-reconcile.service")
self.ensure_systemd_unit("acmetool/acmetool-reconcile.timer")
service_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-redirector.service"
),
dest="/etc/systemd/system/acmetool-redirector.service",
user="root",
group="root",
mode="644",
)
self.need_restart_redirector = service_file.changed
reconcile_service_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-reconcile.service"
),
dest="/etc/systemd/system/acmetool-reconcile.service",
user="root",
group="root",
mode="644",
)
self.need_restart_reconcile_service = reconcile_service_file.changed
reconcile_timer_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-reconcile.timer"
),
dest="/etc/systemd/system/acmetool-reconcile.timer",
user="root",
group="root",
mode="644",
)
self.need_restart_reconcile_timer = reconcile_timer_file.changed
def activate(self):
self.ensure_service("acmetool-redirector.service")
self.ensure_service("acmetool-reconcile.service", running=False, enabled=False)
self.ensure_service("acmetool-reconcile.timer")
systemd.service(
name="Setup acmetool-redirector service",
service="acmetool-redirector.service",
running=True,
enabled=True,
restarted=self.need_restart_redirector,
)
self.need_restart_redirector = False
systemd.service(
name="Setup acmetool-reconcile service",
service="acmetool-reconcile.service",
running=False,
enabled=False,
daemon_reload=self.need_restart_reconcile_service,
)
self.need_restart_reconcile_service = False
systemd.service(
name="Setup acmetool-reconcile timer",
service="acmetool-reconcile.timer",
running=True,
enabled=True,
daemon_reload=self.need_restart_reconcile_timer,
)
self.need_restart_reconcile_timer = False
server.shell(
name=f"Reconcile certificates for: {', '.join(self.domains)}",

View File

@@ -1,11 +1,7 @@
import importlib.resources
import io
import os
from contextlib import contextmanager
from pyinfra import host
from pyinfra.facts.files import Sha256File
from pyinfra.facts.server import Command
from pyinfra.operations import files, server, systemd
@@ -14,47 +10,15 @@ def has_systemd():
return os.path.isdir("/run/systemd/system")
def is_in_container() -> bool:
"""Return True if running inside a container (Docker, LXC, etc.)."""
return (
host.get_fact(
Command,
"systemd-detect-virt --container --quiet 2>/dev/null && echo yes || true",
)
== "yes"
)
@contextmanager
def blocked_service_startup():
"""Prevent services from auto-starting during package installation.
Installs a ``/usr/sbin/policy-rc.d`` that exits 101, blocking any
service from being started by the package manager. This avoids bind
conflicts and CPU/RAM spikes during initial setup. The file is removed
when the context exits.
"""
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
yield
files.file("/usr/sbin/policy-rc.d", present=False)
def get_resource(arg, pkg=__package__):
return importlib.resources.files(pkg).joinpath(arg)
def configure_remote_units(deployer, mail_domain, units) -> None:
def configure_remote_units(mail_domain, units) -> None:
remote_base_dir = "/usr/local/lib/chatmaild"
remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
# install systemd units
for fn in units:
@@ -70,13 +34,15 @@ def configure_remote_units(deployer, mail_domain, units) -> None:
source_path = get_resource(f"service/{basename}.f")
content = source_path.read_text().format(**params).encode()
deployer.put_file(
files.put(
name=f"Upload {basename}",
src=io.BytesIO(content),
dest=f"/etc/systemd/system/{basename}",
**root_owned,
)
def activate_remote_units(deployer, units) -> None:
def activate_remote_units(units) -> None:
# activate systemd units
for fn in units:
basename = fn if "." in fn else f"{fn}.service"
@@ -86,8 +52,14 @@ def activate_remote_units(deployer, units) -> None:
enabled = False
else:
enabled = True
deployer.ensure_service(basename, running=enabled, enabled=enabled)
systemd.service(
name=f"Setup {basename}",
service=basename,
running=enabled,
enabled=enabled,
restarted=enabled,
daemon_reload=True,
)
class Deployment:
@@ -133,7 +105,6 @@ class Deployment:
class Deployer:
need_restart = False
daemon_reload = False
def install(self):
pass
@@ -143,113 +114,3 @@ class Deployer:
def activate(self):
pass
def ensure_service(self, service, running=True, enabled=True):
if running:
verb = "Start and enable"
else:
verb = "Stop"
systemd.service(
name=f"{verb} {service}",
service=service,
running=running,
enabled=enabled,
restarted=self.need_restart if running else False,
daemon_reload=self.daemon_reload,
)
self.daemon_reload = False
def ensure_systemd_unit(self, src, **kwargs):
dest_name = src.split("/")[-1].replace(".j2", "")
dest = f"/etc/systemd/system/{dest_name}"
if src.endswith(".j2"):
return self.put_template(src, dest, **kwargs)
return self.put_file(src, dest)
def put_file(self, src, dest, mode="644"):
if isinstance(src, str):
src = get_resource(src)
res = files.put(
name=f"Upload {dest}",
src=src,
dest=dest,
user="root",
group="root",
mode=mode,
)
return self._update_restart_signals(dest, res)
def put_executable(self, src, dest):
return self.put_file(src, dest, mode="755")
def put_template(self, src, dest, owner="root", **kwargs):
if isinstance(src, str):
src = get_resource(src)
res = files.template(
name=f"Upload {dest}",
src=src,
dest=dest,
user=owner,
group=owner,
mode="644",
**kwargs,
)
return self._update_restart_signals(dest, res)
def remove_file(self, dest):
res = files.file(name=f"Remove {dest}", path=dest, present=False)
return self._update_restart_signals(dest, res)
def ensure_line(self, path, line, **kwargs):
name = kwargs.pop("name", f"Ensure line in {path}")
res = files.line(name=name, path=path, line=line, **kwargs)
return self._update_restart_signals(path, res)
def ensure_directory(self, path, owner="root", mode="755", **kwargs):
name = kwargs.pop("name", f"Ensure directory {path}")
res = files.directory(
name=name,
path=path,
user=owner,
group=owner,
mode=mode,
present=True,
**kwargs,
)
return self._update_restart_signals(path, res)
def remove_directory(self, path, **kwargs):
name = kwargs.pop("name", f"Remove directory {path}")
res = files.directory(name=name, path=path, present=False, **kwargs)
return self._update_restart_signals(path, res)
def download_executable(self, url, dest, sha256sum, extract=None):
existing = host.get_fact(Sha256File, dest)
if existing == sha256sum:
return
tmp = f"{dest}.new"
if extract:
dl_cmd = f"curl -fSL {url} | {extract} >{tmp}"
else:
dl_cmd = f"curl -fSL {url} -o {tmp}"
server.shell(
name=f"Download {dest}",
commands=[
f"({dl_cmd}"
f" && echo '{sha256sum} {tmp}' | sha256sum -c"
f" && mv {tmp} {dest})",
f"chmod 755 {dest}",
],
)
self.need_restart = True
def _update_restart_signals(self, path, res):
if res.changed:
self.need_restart = True
if str(path).startswith("/etc/systemd/system/"):
self.daemon_reload = True
return res

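`download_executable` above verifies a download on the remote host with `sha256sum -c` before atomically `mv`-ing it into place. The same integrity check expressed in pure Python (a local illustration, not part of the deploy code):

```python
import hashlib


def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """In-memory equivalent of `echo '<sum>  <file>' | sha256sum -c`."""
    return hashlib.sha256(data).hexdigest() == expected_hex


payload = b"#!/bin/sh\necho ok\n"
digest = hashlib.sha256(payload).hexdigest()
assert sha256_matches(payload, digest)  # intact download passes
assert not sha256_matches(payload + b"x", digest)  # corruption is caught
```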
View File

@@ -10,6 +10,8 @@ import pathlib
import shutil
import subprocess
import sys
import time
from contextlib import contextmanager
from pathlib import Path
import pyinfra
@@ -18,7 +20,24 @@ from packaging import version
from termcolor import colored
from . import dns, remote
from .sshexec import LocalExec, SSHExec
from .lxc.cli import ( # noqa: F401
lxc_start_cmd,
lxc_start_cmd_options,
lxc_status_cmd,
lxc_status_cmd_options,
lxc_stop_cmd,
lxc_stop_cmd_options,
lxc_test_cmd,
lxc_test_cmd_options,
)
from .sshexec import (
LocalExec,
SSHExec,
resolve_host_from_ssh_config,
resolve_key_from_ssh_config,
)
from .util import build_chatmaild_sdist
from .www import main as webdev_main
#
# cmdeploy sub commands and options
@@ -82,20 +101,21 @@ def run_cmd_options(parser):
help="disable checks nslookup for dns",
)
add_ssh_host_option(parser)
add_ssh_config_option(parser)
def run_cmd(args, out):
"""Deploy chatmail services on the remote server."""
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain_bare
sshexec = get_sshexec(ssh_host)
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host, ssh_config=args.ssh_config)
require_iroh = args.config.enable_iroh_relay
strict_tls = args.config.tls_cert_mode == "acme"
if args.config.ipv4_relay:
args.dns_check_disabled = True
if not args.dns_check_disabled:
remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
if not dns.check_initial_remote_data(remote_data, strict_tls=strict_tls, print=out.red):
if not dns.check_initial_remote_data(
remote_data, strict_tls=strict_tls, print=out.red
):
return 1
env = os.environ.copy()
@@ -103,11 +123,32 @@ def run_cmd(args, out):
env["CHATMAIL_WEBSITE_ONLY"] = "True" if args.website_only else ""
env["CHATMAIL_DISABLE_MAIL"] = "True" if args.disable_mail else ""
env["CHATMAIL_REQUIRE_IROH"] = "True" if require_iroh else ""
if not args.website_only:
build_chatmaild_sdist()
if not args.dns_check_disabled:
env["CHATMAIL_ADDR_V4"] = remote_data.get("A") or ""
env["CHATMAIL_ADDR_V6"] = remote_data.get("AAAA") or ""
deploy_path = importlib.resources.files(__package__).joinpath("run.py").resolve()
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
if ssh_host == "localhost":
ssh_config = args.ssh_config
if ssh_config:
ssh_config = str(Path(ssh_config).resolve())
# Use pyinfra's native SSH data keys to configure the connection directly
# rather than relying on paramiko config parsing (see also sshexec.py)
ip = resolve_host_from_ssh_config(ssh_host, ssh_config)
key = resolve_key_from_ssh_config(ssh_host, ssh_config)
data_args = f"--data ssh_hostname={ip} --data ssh_known_hosts_file=/dev/null"
if key:
data_args += f" --data ssh_key={key}"
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y {data_args}"
if ssh_host in ["localhost", "@docker"]:
if ssh_host == "@docker":
env["CHATMAIL_NOPORTCHECK"] = "True"
env["CHATMAIL_NOSYSCTL"] = "True"
cmd = f"{pyinf} @local {deploy_path} -y"
if version.parse(pyinfra.__version__) < version.parse("3"):
@@ -118,11 +159,13 @@ def run_cmd(args, out):
out.check_call(cmd, env=env)
if args.website_only:
out.green("Website deployment completed.")
elif not args.dns_check_disabled and strict_tls and not remote_data["acme_account_url"]:
elif (
not args.dns_check_disabled
and strict_tls
and not remote_data["acme_account_url"]
):
out.red("Deploy completed but letsencrypt not configured")
out.red("Run 'cmdeploy run' again")
elif args.config.ipv4_relay:
out.green("Deploy completed.")
else:
out.green("Deploy completed, call `cmdeploy dns` next.")
return 0
@@ -137,19 +180,16 @@ def dns_cmd_options(parser):
dest="zonefile",
type=pathlib.Path,
default=None,
help="write out a zonefile",
help="write DNS records in standard BIND format to the given file",
)
add_ssh_host_option(parser)
add_ssh_config_option(parser)
def dns_cmd(args, out):
"""Check DNS entries and optionally generate dns zone file."""
if args.config.ipv4_relay:
ipv4 = args.config.ipv4_relay
print(f"[WARNING] {ipv4} is not a domain, skipping DNS checks.")
return 0
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host, verbose=args.verbose)
sshexec = get_sshexec(ssh_host, verbose=args.verbose, ssh_config=args.ssh_config)
tls_cert_mode = args.config.tls_cert_mode
strict_tls = tls_cert_mode == "acme"
remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
@@ -180,13 +220,14 @@ def dns_cmd(args, out):
def status_cmd_options(parser):
add_ssh_host_option(parser)
add_ssh_config_option(parser)
def status_cmd(args, out):
"""Display status for online chatmail instance."""
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain_bare
sshexec = get_sshexec(ssh_host, verbose=args.verbose)
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host, verbose=args.verbose, ssh_config=args.ssh_config)
out.green(f"chatmail domain: {args.config.mail_domain}")
if args.config.privacy_mail:
@@ -199,16 +240,21 @@ def status_cmd(args, out):
def test_cmd_options(parser):
parser.add_argument(
"--slow",
dest="slow",
action="store_true",
help="also run slow tests",
)
add_ssh_host_option(parser)
add_ssh_config_option(parser)
def test_cmd(args, out):
"""Run local and online tests for chatmail deployment."""
env = os.environ.copy()
env["CHATMAIL_INI"] = str(args.inipath.absolute())
if args.ssh_host:
env["CHATMAIL_SSH"] = args.ssh_host
env["CHATMAIL_INI"] = str(args.inipath.resolve())
pytest_path = shutil.which("pytest")
pytest_args = [
@@ -220,6 +266,12 @@ def test_cmd(args, out):
"-v",
"--durations=5",
]
if args.slow:
pytest_args.append("--slow")
if args.ssh_host:
pytest_args.extend(["--ssh-host", args.ssh_host])
if args.ssh_config:
pytest_args.extend(["--ssh-config", str(Path(args.ssh_config).resolve())])
ret = out.run_ret(pytest_args, env=env)
return ret
@@ -271,9 +323,7 @@ def bench_cmd(args, out):
def webdev_cmd(args, out):
"""Run local web development loop for static web pages."""
from .www import main
main()
webdev_main()
#
@@ -282,17 +332,44 @@ def webdev_cmd(args, out):
class Out:
"""Convenience output printer providing coloring."""
"""Convenience output printer providing coloring and section formatting."""
SECTION_WIDTH = 72
def __init__(self):
self.section_timings = []
def red(self, msg, file=sys.stderr):
print(colored(msg, "red"), file=file)
print(colored(msg, "red"), file=file, flush=True)
def green(self, msg, file=sys.stderr):
print(colored(msg, "green"), file=file)
print(colored(msg, "green"), file=file, flush=True)
def print(self, msg="", **kwargs):
"""Print to stdout with automatic flush."""
print(msg, flush=True, **kwargs)
@contextmanager
def section(self, title):
"""Context manager that prints a section header and records elapsed time."""
bar = "\u2501" * (self.SECTION_WIDTH - len(title) - 5)
self.green(f"\u2501\u2501\u2501 {title} {bar}")
t0 = time.time()
yield
elapsed = time.time() - t0
self.section_timings.append((title, elapsed))
self.print(f"{'':>{self.SECTION_WIDTH - 10}}({elapsed:.1f}s)")
self.print()
def section_line(self, title):
"""Print a section header without timing."""
bar = "\u2501" * (self.SECTION_WIDTH - len(title) - 5)
self.green(f"\u2501\u2501\u2501 {title} {bar}")
self.print()
def __call__(self, msg, red=False, green=False, file=sys.stdout):
color = "red" if red else ("green" if green else None)
print(colored(msg, color), file=file)
print(colored(msg, color), file=file, flush=True)
def check_call(self, arg, env=None, quiet=False):
if not quiet:
@@ -311,11 +388,21 @@ def add_ssh_host_option(parser):
parser.add_argument(
"--ssh-host",
dest="ssh_host",
help="Run commands on 'localhost' or on a specific SSH host "
help="Run commands on 'localhost', via '@docker', or on a specific SSH host "
"instead of chatmail.ini's mail_domain.",
)
def add_ssh_config_option(parser):
parser.add_argument(
"--ssh-config",
dest="ssh_config",
type=Path,
default=None,
help="Path to an SSH config file (e.g. lxconfigs/ssh-config).",
)
def add_config_option(parser):
parser.add_argument(
"--config",
@@ -325,6 +412,7 @@ def add_config_option(parser):
type=Path,
help="path to the chatmail.ini file",
)
parser.add_argument(
"--verbose",
"-v",
@@ -335,15 +423,16 @@ def add_config_option(parser):
)
def add_subcommand(subparsers, func):
def add_subcommand(subparsers, func, add_config=True):
name = func.__name__
assert name.endswith("_cmd")
name = name[:-4]
name = name[:-4].replace("_", "-")
doc = func.__doc__.strip()
help = doc.split("\n")[0].strip(".")
p = subparsers.add_parser(name, description=doc, help=help)
p.set_defaults(func=func)
add_config_option(p)
if add_config:
add_config_option(p)
return p
@@ -357,13 +446,15 @@ def get_parser():
"""Return an ArgumentParser for the 'cmdeploy' CLI"""
parser = argparse.ArgumentParser(description=description.strip())
parser.set_defaults(func=None, inipath=None)
subparsers = parser.add_subparsers(title="subcommands")
# find all subcommands in the module namespace
glob = globals()
for name, func in glob.items():
if name.endswith("_cmd"):
subparser = add_subcommand(subparsers, func)
needs_config = not name.startswith("lxc_")
subparser = add_subcommand(subparsers, func, add_config=needs_config)
addopts = glob.get(name + "_options")
if addopts is not None:
addopts(subparser)
@@ -371,24 +462,27 @@ def get_parser():
return parser
def get_sshexec(ssh_host: str, verbose=True):
def get_sshexec(ssh_host: str, verbose=True, ssh_config=None):
if ssh_host in ["localhost", "@local"]:
return LocalExec(verbose)
return LocalExec(verbose, docker=False)
elif ssh_host == "@docker":
return LocalExec(verbose, docker=True)
if verbose:
print(f"[ssh] login to {ssh_host}")
return SSHExec(ssh_host, verbose=verbose)
return SSHExec(ssh_host, verbose=verbose, ssh_config=ssh_config)
def main(args=None):
"""Provide main entry point for 'cmdeploy' CLI invocation."""
parser = get_parser()
args = parser.parse_args(args=args)
if not hasattr(args, "func"):
if args.func is None:
return parser.parse_args(["-h"])
out = Out()
kwargs = {}
if args.func.__name__ not in ("init_cmd", "fmt_cmd"):
if args.inipath is not None and args.func.__name__ not in ("init_cmd", "fmt_cmd"):
if not args.inipath.exists():
out.red(f"expecting {args.inipath} to exist, run init first?")
raise SystemExit(1)

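The `Out.section` context manager added above (a header bar plus an elapsed-time record used for the timing summary at the end of lxc-test) follows a standard `contextmanager` pattern. A stripped-down, uncolored sketch of the same idea:

```python
import time
from contextlib import contextmanager


class Timings:
    SECTION_WIDTH = 72

    def __init__(self):
        self.section_timings = []

    @contextmanager
    def section(self, title):
        # header line padded with box-drawing characters to a fixed width
        bar = "\u2501" * (self.SECTION_WIDTH - len(title) - 5)
        print(f"\u2501\u2501\u2501 {title} {bar}", flush=True)
        t0 = time.time()
        yield
        self.section_timings.append((title, time.time() - t0))


out = Timings()
with out.section("deploy"):
    time.sleep(0.01)
assert out.section_timings[0][0] == "deploy"
```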
View File

@@ -2,9 +2,7 @@
Chat Mail pyinfra deploy.
"""
import shutil
import subprocess
import sys
import os
from io import BytesIO, StringIO
from pathlib import Path
@@ -12,20 +10,21 @@ from chatmaild.config import read_config
from pyinfra import facts, host, logger
from pyinfra.api import FactBase
from pyinfra.facts import hardware
from pyinfra.facts.files import Sha256File
from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd
from cmdeploy.cmdeploy import Out
from cmdeploy.util import get_chatmaild_sdist, get_version_string
from .acmetool import AcmetoolDeployer
from .basedeploy import (
Deployer,
Deployment,
activate_remote_units,
blocked_service_startup,
configure_remote_units,
get_resource,
has_systemd,
is_in_container,
)
from .dovecot.deployer import DovecotDeployer
from .external.deployer import ExternalTlsDeployer
@@ -53,20 +52,6 @@ class Port(FactBase):
return output[0]
def _build_chatmaild(dist_dir) -> None:
dist_dir = Path(dist_dir).resolve()
if dist_dir.exists():
shutil.rmtree(dist_dir)
dist_dir.mkdir()
subprocess.check_output(
[sys.executable, "-m", "build", "-n"]
+ ["--sdist", "chatmaild", "--outdir", str(dist_dir)]
)
entries = list(dist_dir.iterdir())
assert len(entries) == 1
return entries[0]
def remove_legacy_artifacts():
if not has_systemd():
return
@@ -80,22 +65,25 @@ def remove_legacy_artifacts():
)
def _install_remote_venv_with_chatmaild(deployer) -> None:
def _install_remote_venv_with_chatmaild() -> None:
remove_legacy_artifacts()
dist_file = _build_chatmaild(dist_dir=Path("chatmaild/dist"))
dist_file = get_chatmaild_sdist()
remote_base_dir = "/usr/local/lib/chatmaild"
remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}"
remote_venv_dir = f"{remote_base_dir}/venv"
root_owned = dict(user="root", group="root", mode="644")
apt.packages(
name="apt install python3-virtualenv",
packages=["python3-virtualenv"],
)
deployer.ensure_directory(f"{remote_base_dir}/dist")
deployer.put_file(
files.put(
name="Upload chatmaild source package",
src=dist_file.open("rb"),
dest=remote_dist_file,
create_remote_dir=True,
**root_owned,
)
pip.virtualenv(
@@ -117,54 +105,63 @@ def _install_remote_venv_with_chatmaild(deployer) -> None:
)
def _configure_remote_venv_with_chatmaild(deployer, config) -> None:
def _configure_remote_venv_with_chatmaild(config) -> None:
remote_base_dir = "/usr/local/lib/chatmaild"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
deployer.put_file(
files.put(
name=f"Upload {remote_chatmail_inipath}",
src=config._getbytefile(),
dest=remote_chatmail_inipath,
**root_owned,
)
deployer.remove_file("/etc/cron.d/chatmail-metrics")
deployer.remove_file("/var/www/html/metrics")
files.file(
path="/etc/cron.d/chatmail-metrics",
present=False,
)
files.file(
path="/var/www/html/metrics",
present=False,
)
class UnboundDeployer(Deployer):
def __init__(self, config):
self.config = config
self.need_restart = False
def install(self):
# On an IPv4-only system, if unbound is started but not configured,
# it causes subsequent steps to fail to resolve hosts.
with blocked_service_startup():
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
)
# Run local DNS resolver `unbound`.
# `resolvconf` takes care of setting up /etc/resolv.conf
# to use 127.0.0.1 as the resolver.
#
# On an IPv4-only system, if unbound is started but not
# configured, it causes subsequent steps to fail to resolve hosts.
# Here, we use policy-rc.d to prevent unbound from starting up
# on initial install. Later, we will configure it and start it.
#
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
#
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
)
files.file("/usr/sbin/policy-rc.d", present=False)
def configure(self):
# Remove dynamic resolver managers that compete for /etc/resolv.conf.
apt.packages(
name="Purge resolvconf",
packages=["resolvconf"],
present=False,
extra_uninstall_args="--purge",
)
# systemd-resolved can't be purged due to dependencies; stop and mask.
server.shell(
name="Stop and mask systemd-resolved",
commands=[
"systemctl stop systemd-resolved.service || true",
"systemctl mask systemd-resolved.service",
],
)
# Configure unbound resolver with Quad9 fallback and a trailing newline
# (SolusVM bug).
self.put_file(
src=BytesIO(b"nameserver 127.0.0.1\nnameserver 9.9.9.9\n"),
dest="/etc/resolv.conf",
)
server.shell(
name="Generate root keys for validating DNSSEC",
commands=[
@@ -172,15 +169,26 @@ class UnboundDeployer(Deployer):
],
)
if self.config.disable_ipv6:
self.ensure_directory(
files.directory(
path="/etc/unbound/unbound.conf.d",
present=True,
user="root",
group="root",
mode="755",
)
self.put_template(
"unbound/unbound.conf.j2",
"/etc/unbound/unbound.conf.d/chatmail.conf",
conf = files.put(
src=get_resource("unbound/unbound.conf.j2"),
dest="/etc/unbound/unbound.conf.d/chatmail.conf",
user="root",
group="root",
mode="644",
)
else:
self.remove_file("/etc/unbound/unbound.conf.d/chatmail.conf")
conf = files.file(
path="/etc/unbound/unbound.conf.d/chatmail.conf",
present=False,
)
self.need_restart |= conf.changed
def activate(self):
server.shell(
@@ -190,25 +198,27 @@ class UnboundDeployer(Deployer):
],
)
self.ensure_service("unbound.service")
self.ensure_service(
"unbound-resolvconf.service",
running=False,
enabled=False,
systemd.service(
name="Start and enable unbound",
service="unbound.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
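Throughout this diff, `self.need_restart |= conf.changed` accumulates the `.changed` flags of pyinfra operations so the final `systemd.service(restarted=...)` only restarts when some config actually changed. Stripped of pyinfra, the pattern looks like this (`OpResult`/`plan_restart` are illustrative names):

```python
from dataclasses import dataclass

@dataclass
class OpResult:
    changed: bool  # stand-in for the .changed attribute pyinfra ops expose

def plan_restart(results) -> bool:
    """Restart the service only if at least one config operation changed."""
    need_restart = False
    for op in results:
        need_restart |= op.changed
    return need_restart
```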
class MtastsDeployer(Deployer):
def configure(self):
# Remove configuration.
self.remove_file("/etc/mta-sts-daemon.yml")
self.remove_directory("/usr/local/lib/postfix-mta-sts-resolver")
self.remove_file("/etc/systemd/system/mta-sts-daemon.service")
files.file("/etc/mta-sts-daemon.yml", present=False)
files.directory("/usr/local/lib/postfix-mta-sts-resolver", present=False)
files.file("/etc/systemd/system/mta-sts-daemon.service", present=False)
def activate(self):
self.ensure_service(
"mta-sts-daemon.service",
systemd.service(
name="Stop MTA-STS daemon",
service="mta-sts-daemon.service",
daemon_reload=True,
running=False,
enabled=False,
)
@@ -219,7 +229,14 @@ class WebsiteDeployer(Deployer):
self.config = config
def install(self):
self.ensure_directory("/var/www")
files.directory(
name="Ensure /var/www exists",
path="/var/www",
user="root",
group="root",
mode="755",
present=True,
)
def configure(self):
www_path, src_dir, build_dir = get_paths(self.config)
@@ -238,8 +255,14 @@ class WebsiteDeployer(Deployer):
logger.warning("Web page build failed, skipping website deployment")
return
# if it is not a hugo page, upload it as is
files.rsync(
f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"]
# pyinfra files.rsync (experimental) causes problems with ssh-config configuration
# the stable files.sync suffices here
files.sync(
src=str(www_path),
dest="/var/www/html",
user="www-data",
group="www-data",
delete=True,
)
@@ -249,11 +272,15 @@ class LegacyRemoveDeployer(Deployer):
# remove historic expunge script
# which is now implemented through a systemd timer (chatmail-expire)
self.remove_file("/etc/cron.d/expunge")
files.file(
path="/etc/cron.d/expunge",
present=False,
)
# Remove OBS repository key that is no longer used.
self.remove_file("/etc/apt/keyrings/obs-home-deltachat.gpg")
self.ensure_line(
files.file("/etc/apt/keyrings/obs-home-deltachat.gpg", present=False)
files.line(
name="Remove DeltaChat OBS home repository from sources.list",
path="/etc/apt/sources.list",
line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
escape_regex_characters=True,
@@ -261,7 +288,11 @@ class LegacyRemoveDeployer(Deployer):
)
# prior relay versions used filelogging
self.remove_directory("/var/log/journal/")
files.directory(
name="Ensure old logs on disk are deleted",
path="/var/log/journal/",
present=False,
)
# remove echobot if it is still running
if has_systemd() and host.get_fact(SystemdEnabled).get("echobot.service"):
systemd.service(
@@ -303,13 +334,22 @@ class TurnDeployer(Deployer):
"0fb3e792419494e21ecad536464929dba706bb2c88884ed8f1788141d26fc756",
),
}[host.get_fact(facts.server.Arch)]
self.download_executable(url, "/usr/local/bin/chatmail-turn", sha256sum)
existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/chatmail-turn")
if existing_sha256sum != sha256sum:
server.shell(
name="Download chatmail-turn",
commands=[
f"(curl -L {url} >/usr/local/bin/chatmail-turn.new && (echo '{sha256sum} /usr/local/bin/chatmail-turn.new' | sha256sum -c) && mv /usr/local/bin/chatmail-turn.new /usr/local/bin/chatmail-turn)",
"chmod 755 /usr/local/bin/chatmail-turn",
],
)
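The shell pipeline above stages the download as `chatmail-turn.new`, verifies the checksum, and only then moves it into place, so a failed or corrupted download never clobbers the working binary. The same stage-verify-rename flow in Python (with a local copy standing in for the `curl` step):

```python
import hashlib
import os
import shutil

def install_verified(src_path: str, dest: str, sha256sum: str) -> None:
    """Stage to dest.new, verify the checksum, then rename atomically,
    mirroring the shell pipeline (curl ... && sha256sum -c && mv)."""
    staged = dest + ".new"
    shutil.copyfile(src_path, staged)  # stand-in for the actual download
    digest = hashlib.sha256(open(staged, "rb").read()).hexdigest()
    if digest != sha256sum:
        os.unlink(staged)
        raise ValueError(f"checksum mismatch for {dest}")
    os.chmod(staged, 0o755)
    os.replace(staged, dest)  # atomic rename on POSIX
```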
def configure(self):
configure_remote_units(self, self.mail_domain, self.units)
configure_remote_units(self.mail_domain, self.units)
def activate(self):
activate_remote_units(self, self.units)
activate_remote_units(self.units)
class IrohDeployer(Deployer):
@@ -327,30 +367,72 @@ class IrohDeployer(Deployer):
"f8ef27631fac213b3ef668d02acd5b3e215292746a3fc71d90c63115446008b1",
),
}[host.get_fact(facts.server.Arch)]
self.download_executable(
url,
"/usr/local/bin/iroh-relay",
sha256sum,
extract="gunzip | tar -xf - ./iroh-relay -O",
)
existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/iroh-relay")
if existing_sha256sum != sha256sum:
server.shell(
name="Download iroh-relay",
commands=[
f"(curl -L {url} | gunzip | tar -x -f - ./iroh-relay -O >/usr/local/bin/iroh-relay.new && (echo '{sha256sum} /usr/local/bin/iroh-relay.new' | sha256sum -c) && mv /usr/local/bin/iroh-relay.new /usr/local/bin/iroh-relay)",
"chmod 755 /usr/local/bin/iroh-relay",
],
)
self.need_restart = True
def configure(self):
self.ensure_systemd_unit("iroh-relay.service")
self.put_file("iroh-relay.toml", "/etc/iroh-relay.toml")
systemd_unit = files.put(
name="Upload iroh-relay systemd unit",
src=get_resource("iroh-relay.service"),
dest="/etc/systemd/system/iroh-relay.service",
user="root",
group="root",
mode="644",
)
self.need_restart |= systemd_unit.changed
iroh_config = files.put(
name="Upload iroh-relay config",
src=get_resource("iroh-relay.toml"),
dest="/etc/iroh-relay.toml",
user="root",
group="root",
mode="644",
)
self.need_restart |= iroh_config.changed
def activate(self):
self.ensure_service(
"iroh-relay.service",
systemd.service(
name="Start and enable iroh-relay",
service="iroh-relay.service",
running=True,
enabled=self.enable_iroh_relay,
restarted=self.need_restart,
)
self.need_restart = False
class JournaldDeployer(Deployer):
def configure(self):
self.put_file("journald.conf", "/etc/systemd/journald.conf")
journald_conf = files.put(
name="Configure journald",
src=get_resource("journald.conf"),
dest="/etc/systemd/journald.conf",
user="root",
group="root",
mode="644",
)
self.need_restart = journald_conf.changed
def activate(self):
self.ensure_service("systemd-journald.service")
systemd.service(
name="Start and enable journald",
service="systemd-journald.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
self.need_restart = False
class ChatmailVenvDeployer(Deployer):
@@ -366,14 +448,14 @@ class ChatmailVenvDeployer(Deployer):
)
def install(self):
_install_remote_venv_with_chatmaild(self)
_install_remote_venv_with_chatmaild()
def configure(self):
_configure_remote_venv_with_chatmaild(self, self.config)
configure_remote_units(self, self.config.mail_domain_bare, self.units)
_configure_remote_venv_with_chatmaild(self.config)
configure_remote_units(self.config.mail_domain, self.units)
def activate(self):
activate_remote_units(self, self.units)
activate_remote_units(self.units)
class ChatmailDeployer(Deployer):
@@ -382,14 +464,17 @@ class ChatmailDeployer(Deployer):
("iroh", None, None),
]
def __init__(self, config):
self.config = config
self.mail_domain = config.mail_domain
def __init__(self, mail_domain):
self.mail_domain = mail_domain
def install(self):
self.put_file(
src=BytesIO(b'APT::Install-Recommends "false";\n'),
dest="/etc/apt/apt.conf.d/00InstallRecommends",
files.put(
name="Disable installing recommended packages globally",
src=BytesIO(b'APT::Install-Recommends "0";\n'),
dest="/etc/apt/apt.conf.d/99no-recommends",
user="root",
group="root",
mode="644",
)
apt.update(name="apt update", cache_time=24 * 3600)
apt.upgrade(name="upgrade apt packages", auto_remove=True)
@@ -403,15 +488,12 @@ class ChatmailDeployer(Deployer):
name="Install rsync",
packages=["rsync"],
)
def configure(self):
# metadata crashes if the mailboxes dir does not exist
self.ensure_directory(
str(self.config.mailboxes_dir),
owner="vmail",
mode="700",
apt.packages(
name="Ensure cron is installed",
packages=["cron"],
)
def configure(self):
# This file is used by auth proxy.
# https://wiki.debian.org/EtcMailName
server.shell(
@@ -430,20 +512,22 @@ class FcgiwrapDeployer(Deployer):
)
def activate(self):
self.ensure_service("fcgiwrap.service")
systemd.service(
name="Start and enable fcgiwrap",
service="fcgiwrap.service",
running=True,
enabled=True,
)
class GithashDeployer(Deployer):
def activate(self):
try:
git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
except Exception:
git_hash = "unknown\n"
try:
git_diff = subprocess.check_output(["git", "diff"]).decode()
except Exception:
git_diff = ""
self.put_file(src=StringIO(git_hash + git_diff), dest="/etc/chatmail-version")
files.put(
name="Upload chatmail relay git commit hash",
src=StringIO(get_version_string()),
dest="/etc/chatmail-version",
mode="700",
)
def get_tls_deployer(config, mail_domain):
@@ -469,12 +553,20 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
"""
config = read_config(config_path)
check_config(config)
bare_host = config.mail_domain_bare
mail_domain = config.mail_domain
if website_only:
Deployment().perform_stages([WebsiteDeployer(config)])
return
if host.get_fact(Port, port=53) != "unbound":
files.line(
name="Add 9.9.9.9 to resolv.conf",
path="/etc/resolv.conf",
# Guard against resolv.conf missing a trailing newline (SolusVM bug).
line="\nnameserver 9.9.9.9",
)
# Check if mtail_address interface is available (if configured)
if config.mtail_address and config.mtail_address not in (
"127.0.0.1",
@@ -489,7 +581,7 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
)
exit(1)
if not is_in_container():
if not os.environ.get("CHATMAIL_NOPORTCHECK"):
port_services = [
(["master", "smtpd"], 25),
("unbound", 53),
@@ -526,21 +618,21 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
)
exit(1)
tls_deployer = get_tls_deployer(config, bare_host)
tls_deployer = get_tls_deployer(config, mail_domain)
all_deployers = [
ChatmailDeployer(config),
ChatmailDeployer(mail_domain),
LegacyRemoveDeployer(),
FiltermailDeployer(),
JournaldDeployer(),
UnboundDeployer(config),
TurnDeployer(bare_host),
TurnDeployer(mail_domain),
IrohDeployer(config.enable_iroh_relay),
tls_deployer,
WebsiteDeployer(config),
ChatmailVenvDeployer(config),
MtastsDeployer(),
*([] if config.ipv4_relay else [OpendkimDeployer(bare_host)]),
OpendkimDeployer(mail_domain),
# Dovecot should be started before Postfix
# because it creates authentication socket
# required by Postfix.

View File

@@ -4,7 +4,11 @@ from . import remote
def parse_zone_records(text):
"""Yield ``(name, ttl, rtype, rdata)`` from standard BIND-format text."""
"""Yield ``(name, ttl, rtype, rdata)`` from standard BIND-format text.
Skips comment lines (starting with ``;``) and blank lines.
Each record line must have the format ``name TTL IN type rdata``.
"""
for raw_line in text.splitlines():
line = raw_line.strip()
if not line or line.startswith(";"):
@@ -43,36 +47,33 @@ def get_filled_zone_file(remote_data):
remote_data["sts_id"] = datetime.datetime.now().strftime("%Y%m%d%H%M")
d = remote_data["mail_domain"]
def append_record(name, rtype, rdata, ttl=3600):
lines.append(f"{name:<40} {ttl:<6} IN {rtype:<5} {rdata}")
lines = ["; Required DNS entries"]
if remote_data.get("A"):
append_record(f"{d}.", "A", remote_data["A"])
lines.append(f"{d}. 3600 IN A {remote_data['A']}")
if remote_data.get("AAAA"):
append_record(f"{d}.", "AAAA", remote_data["AAAA"])
append_record(f"{d}.", "MX", f"10 {d}.")
lines.append(f"{d}. 3600 IN AAAA {remote_data['AAAA']}")
lines.append(f"{d}. 3600 IN MX 10 {d}.")
if remote_data.get("strict_tls"):
append_record(f"_mta-sts.{d}.", "TXT", f'"v=STSv1; id={remote_data["sts_id"]}"')
append_record(f"mta-sts.{d}.", "CNAME", f"{d}.")
append_record(f"www.{d}.", "CNAME", f"{d}.")
lines.append(
f'_mta-sts.{d}. 3600 IN TXT "v=STSv1; id={remote_data["sts_id"]}"'
)
lines.append(f"mta-sts.{d}. 3600 IN CNAME {d}.")
lines.append(f"www.{d}. 3600 IN CNAME {d}.")
lines.append(remote_data["dkim_entry"])
lines.append("")
lines.append("; Recommended DNS entries")
append_record(f"{d}.", "TXT", '"v=spf1 a ~all"')
append_record(f"_dmarc.{d}.", "TXT", '"v=DMARC1;p=reject;adkim=s;aspf=s"')
lines.append(f'{d}. 3600 IN TXT "v=spf1 a ~all"')
lines.append(f'_dmarc.{d}. 3600 IN TXT "v=DMARC1;p=reject;adkim=s;aspf=s"')
if remote_data.get("acme_account_url"):
append_record(
f"{d}.",
"CAA",
f'0 issue "letsencrypt.org;accounturi={remote_data["acme_account_url"]}"',
lines.append(
f"{d}. 3600 IN CAA 0 issue"
f' "letsencrypt.org;accounturi={remote_data["acme_account_url"]}"'
)
append_record(f"_adsp._domainkey.{d}.", "TXT", '"dkim=discardable"')
append_record(f"_submission._tcp.{d}.", "SRV", f"0 1 587 {d}.")
append_record(f"_submissions._tcp.{d}.", "SRV", f"0 1 465 {d}.")
append_record(f"_imap._tcp.{d}.", "SRV", f"0 1 143 {d}.")
append_record(f"_imaps._tcp.{d}.", "SRV", f"0 1 993 {d}.")
lines.append(f'_adsp._domainkey.{d}. 3600 IN TXT "dkim=discardable"')
lines.append(f"_submission._tcp.{d}. 3600 IN SRV 0 1 587 {d}.")
lines.append(f"_submissions._tcp.{d}. 3600 IN SRV 0 1 465 {d}.")
lines.append(f"_imap._tcp.{d}. 3600 IN SRV 0 1 143 {d}.")
lines.append(f"_imaps._tcp.{d}. 3600 IN SRV 0 1 993 {d}.")
lines.append("")
return "\n".join(lines)
@@ -95,8 +96,7 @@ def check_full_zone(sshexec, remote_data, out, zonefile) -> int:
returncode = 1
if remote_data.get("dkim_entry") in required_diff:
out(
"If the DKIM entry above does not work with your DNS provider,"
" you can try this one:\n"
"If the DKIM entry above does not work with your DNS provider, you can try this one:\n"
)
out(remote_data.get("web_dkim_entry") + "\n")
if recommended_diff:

View File

@@ -1,32 +1,20 @@
import io
import os
import urllib.request
from chatmaild.config import Config
from pyinfra import host
from pyinfra.facts.deb import DebPackages
from pyinfra.facts.server import Arch, Command, Sysctl
from pyinfra.operations import apt, files, server
from pyinfra.facts.server import Arch, Sysctl
from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import (
Deployer,
activate_remote_units,
blocked_service_startup,
configure_remote_units,
is_in_container,
get_resource,
has_systemd,
)
DOVECOT_ARCHIVE_VERSION = "2.3.21+dfsg1-3"
DOVECOT_PACKAGE_VERSION = f"1:{DOVECOT_ARCHIVE_VERSION}"
DOVECOT_SHA256 = {
("core", "amd64"): "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d",
("core", "arm64"): "e7548e8a82929722e973629ecc40fcfa886894cef3db88f23535149e7f730dc9",
("imapd", "amd64"): "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86",
("imapd", "arm64"): "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f",
("lmtpd", "amd64"): "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab",
("lmtpd", "arm64"): "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f",
}
class DovecotDeployer(Deployer):
daemon_reload = False
@@ -38,59 +26,32 @@ class DovecotDeployer(Deployer):
def install(self):
arch = host.get_fact(Arch)
with blocked_service_startup():
debs = []
for pkg in ("core", "imapd", "lmtpd"):
deb, changed = _download_dovecot_package(pkg, arch)
self.need_restart |= changed
if deb:
debs.append(deb)
if debs:
deb_list = " ".join(debs)
# First dpkg may fail on missing dependencies (stderr suppressed);
# apt-get --fix-broken pulls them in, then dpkg retries cleanly.
server.shell(
name="Install dovecot packages",
commands=[
f"dpkg --force-confdef --force-confold -i {deb_list} 2> /dev/null || true",
"DEBIAN_FRONTEND=noninteractive apt-get -y --fix-broken install",
f"dpkg --force-confdef --force-confold -i {deb_list}",
],
)
self.need_restart = True
self.put_file(
src=io.StringIO(
"Package: dovecot-*\n"
"Pin: version *\n"
"Pin-Priority: -1\n"
),
dest="/etc/apt/preferences.d/pin-dovecot",
)
if has_systemd() and "dovecot.service" in host.get_fact(SystemdEnabled):
return # already installed and running
_install_dovecot_package("core", arch)
_install_dovecot_package("imapd", arch)
_install_dovecot_package("lmtpd", arch)
def configure(self):
configure_remote_units(self, self.config.mail_domain_bare, self.units)
_configure_dovecot(self, self.config)
configure_remote_units(self.config.mail_domain, self.units)
self.need_restart, self.daemon_reload = _configure_dovecot(self.config)
def activate(self):
activate_remote_units(self, self.units)
activate_remote_units(self.units)
# Detect stale binary: package installed but service still runs old (deleted) binary.
if not self.disable_mail and not self.need_restart:
stale = host.get_fact(
Command,
'pid=$(systemctl show -p MainPID --value dovecot.service 2>/dev/null);'
' [ "${pid:-0}" != "0" ] && readlink "/proc/$pid/exe" 2>/dev/null | grep -q "(deleted)"'
" && echo STALE || true",
)
if stale == "STALE":
self.need_restart = True
restart = False if self.disable_mail else self.need_restart
active = not self.disable_mail
self.ensure_service(
"dovecot.service",
running=active,
enabled=active,
systemd.service(
name="Disable dovecot for now"
if self.disable_mail
else "Start and enable Dovecot",
service="dovecot.service",
running=False if self.disable_mail else True,
enabled=False if self.disable_mail else True,
restarted=restart,
daemon_reload=self.daemon_reload,
)
self.need_restart = False
def _pick_url(primary, fallback):
@@ -102,88 +63,110 @@ def _pick_url(primary, fallback):
return fallback
def _download_dovecot_package(package: str, arch: str) -> tuple[str | None, bool]:
"""Download a dovecot .deb if needed, return (path, changed)."""
def _install_dovecot_package(package: str, arch: str):
arch = "amd64" if arch == "x86_64" else arch
arch = "arm64" if arch == "aarch64" else arch
pkg_name = f"dovecot-{package}"
sha256 = DOVECOT_SHA256.get((package, arch))
if sha256 is None:
op = apt.packages(packages=[pkg_name])
return None, bool(getattr(op, "changed", False))
installed_versions = host.get_fact(DebPackages).get(pkg_name, [])
if DOVECOT_PACKAGE_VERSION in installed_versions:
return None, False
url_version = DOVECOT_ARCHIVE_VERSION.replace("+", "%2B")
deb_base = f"{pkg_name}_{url_version}_{arch}.deb"
primary_url = f"https://download.delta.chat/dovecot/{deb_base}"
fallback_url = f"https://github.com/chatmail/dovecot/releases/download/upstream%2F{url_version}/{deb_base}"
primary_url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
fallback_url = f"https://github.com/chatmail/dovecot/releases/download/upstream%2F2.3.21%2Bdfsg1/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
url = _pick_url(primary_url, fallback_url)
deb_filename = f"/root/{deb_base}"
deb_filename = "/root/" + url.split("/")[-1]
match (package, arch):
case ("core", "amd64"):
sha256 = "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d"
case ("core", "arm64"):
sha256 = "e7548e8a82929722e973629ecc40fcfa886894cef3db88f23535149e7f730dc9"
case ("imapd", "amd64"):
sha256 = "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86"
case ("imapd", "arm64"):
sha256 = "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f"
case ("lmtpd", "amd64"):
sha256 = "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab"
case ("lmtpd", "arm64"):
sha256 = "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f"
case _:
apt.packages(packages=[f"dovecot-{package}"])
return
files.download(
name=f"Download {pkg_name}",
name=f"Download dovecot-{package}",
src=url,
dest=deb_filename,
sha256sum=sha256,
cache_time=60 * 60 * 24 * 365 * 10, # never redownload the package
)
return deb_filename, True
apt.deb(name=f"Install dovecot-{package}", src=deb_filename)
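The refactored downloader builds both mirror URLs from `DOVECOT_ARCHIVE_VERSION`, percent-encoding the `+` in the Debian version string (`%2B`) because it lands in a URL path. That URL construction, extracted as a standalone sketch (`deb_urls` is an illustrative name):

```python
DOVECOT_ARCHIVE_VERSION = "2.3.21+dfsg1-3"

def deb_urls(package: str, arch: str):
    """Build (primary, fallback) download URLs; '+' must be %-encoded."""
    url_version = DOVECOT_ARCHIVE_VERSION.replace("+", "%2B")
    deb_base = f"dovecot-{package}_{url_version}_{arch}.deb"
    primary = f"https://download.delta.chat/dovecot/{deb_base}"
    fallback = (
        "https://github.com/chatmail/dovecot/releases/download/"
        f"upstream%2F{url_version}/{deb_base}"
    )
    return primary, fallback
```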
def _configure_dovecot(deployer, config: Config, debug: bool = False):
def _configure_dovecot(config: Config, debug: bool = False) -> tuple[bool, bool]:
"""Configures Dovecot IMAP server."""
deployer.put_template(
"dovecot/dovecot.conf.j2",
"/etc/dovecot/dovecot.conf",
need_restart = False
daemon_reload = False
main_config = files.template(
src=get_resource("dovecot/dovecot.conf.j2"),
dest="/etc/dovecot/dovecot.conf",
user="root",
group="root",
mode="644",
config=config,
debug=debug,
disable_ipv6=config.disable_ipv6,
)
deployer.put_file("dovecot/auth.conf", "/etc/dovecot/auth.conf")
deployer.put_file(
"dovecot/push_notification.lua", "/etc/dovecot/push_notification.lua"
need_restart |= main_config.changed
auth_config = files.put(
src=get_resource("dovecot/auth.conf"),
dest="/etc/dovecot/auth.conf",
user="root",
group="root",
mode="644",
)
need_restart |= auth_config.changed
lua_push_notification_script = files.put(
src=get_resource("dovecot/push_notification.lua"),
dest="/etc/dovecot/push_notification.lua",
user="root",
group="root",
mode="644",
)
need_restart |= lua_push_notification_script.changed
# as per https://doc.dovecot.org/2.3/configuration_manual/os/
# it is recommended to set the following inotify limits
can_modify = not is_in_container()
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
value = host.get_fact(Sysctl).get(key, 0)
if value > 65534:
continue
if not can_modify:
print(
"\n!!!! refusing to attempt sysctl setting in containers\n"
f"!!!! dovecot: sysctl {key!r}={value}, should be >65534 for production setups\n"
"!!!!"
if not os.environ.get("CHATMAIL_NOSYSCTL"):
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
if host.get_fact(Sysctl)[key] > 65535:
# Skip updating limits if already sufficient
# (enables running in incus containers where sysctl is read-only)
continue
server.sysctl(
name=f"Change {key}",
key=key,
value=65535,
persist=True,
)
continue
server.sysctl(
name=f"Change {key}",
key=key,
value=65535,
persist=True,
)
deployer.ensure_line(
timezone_env = files.line(
name="Set TZ environment variable",
path="/etc/environment",
line="TZ=:/etc/localtime",
)
need_restart |= timezone_env.changed
deployer.put_file(
"service/10_restart_on_failure.conf",
"/etc/systemd/system/dovecot.service.d/10_restart.conf",
restart_conf = files.put(
name="dovecot: restart automatically on failure",
src=get_resource("service/10_restart.conf"),
dest="/etc/systemd/system/dovecot.service.d/10_restart.conf",
)
daemon_reload |= restart_conf.changed
# Validate dovecot configuration before restart
if deployer.need_restart:
if need_restart:
server.shell(
name="Validate dovecot configuration",
commands=["doveconf -n >/dev/null"],
)
return need_restart, daemon_reload

View File

@@ -7,7 +7,6 @@ listen = 0.0.0.0
protocols = imap lmtp
auth_mechanisms = plain
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@[]
{% if debug == true %}
auth_verbose = yes
@@ -134,11 +133,6 @@ protocol lmtp {
# mail_lua and push_notification_lua are needed for Lua push notification handler.
# <https://doc.dovecot.org/2.3/configuration_manual/push_notification/#configuration>
mail_plugins = $mail_plugins mail_lua notify push_notification push_notification_lua
# Disable fsync for LMTP. May lose a delivered message,
# but unlikely to cause problems with multiple relays.
# https://doc.dovecot.org/2.3/admin_manual/mailbox_formats/#fsyncing
mail_fsync = never
}
plugin {
@@ -150,26 +144,12 @@ plugin {
}
plugin {
# for now we define static quota-rules for all users
quota = maildir:User quota
quota_rule = *:storage={{ config.max_mailbox_size }}
quota_max_mail_size={{ config.max_message_size }}
quota_grace = 0
quota_rule = *:storage={{ config.max_mailbox_size_mb }}M
# Trigger at 75%% of quota, expire oldest messages down to 70%%.
# The percentages are chosen to prevent current Delta Chat users
# from seeing "quota warnings" which trigger at 80% and 95%.
quota_warning = storage=75%% quota-warning {{ config.max_mailbox_size_mb * 70 // 100 }} {{ config.mailboxes_dir }}/%u
}
service quota-warning {
executable = script /usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire
user = vmail
unix_listener quota-warning {
user = vmail
mode = 0600
}
# quota_over_flag_value = TRUE
}
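The template above computes the expire-down-to size as 70% of the mailbox quota with integer floor division (`max_mailbox_size_mb * 70 // 100`), triggering at 75% so users never see Delta Chat's own 80%/95% quota warnings. The arithmetic, isolated (`quota_warning_args` is an illustrative name):

```python
def quota_warning_args(max_mailbox_size_mb: int):
    """Return (trigger_percent, expire_to_mb): warn at 75% of quota and
    let chatmail-quota-expire trim oldest mail down to 70% of the quota."""
    expire_to_mb = max_mailbox_size_mb * 70 // 100  # same floor division as the template
    return 75, expire_to_mb
```

Floor division matters here: dovecot expects an integer megabyte value, so e.g. a 33 MB quota yields an expire target of 23 MB, not 23.1.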
# push_notification configuration
@@ -272,9 +252,6 @@ protocol imap {
# sort -sn <(sed 's/ / C: /' *.in) <(sed 's/ / S: /' *.out)
rawlog_dir = %h
# Disable fsync for IMAP. May lose IMAP changes like setting flags.
mail_fsync = never
}
{% endif %}

View File

@@ -1,7 +1,10 @@
import io
from pyinfra import host
from pyinfra.facts.files import File
from pyinfra.operations import files, systemd
from ..basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class ExternalTlsDeployer(Deployer):
@@ -20,22 +23,45 @@ class ExternalTlsDeployer(Deployer):
def configure(self):
# Verify cert and key exist on the remote host using pyinfra facts.
for path in (self.cert_path, self.key_path):
if host.get_fact(File, path=path) is None:
info = host.get_fact(File, path=path)
if info is None:
raise Exception(f"External TLS file not found on server: {path}")
self.ensure_systemd_unit(
"external/tls-cert-reload.path.j2",
cert_path=self.cert_path,
)
self.ensure_systemd_unit(
"external/tls-cert-reload.service",
# Deploy the .path unit (templated with the cert path).
# pkg=__package__ is required here because the resource files
# live in cmdeploy.external, not the default cmdeploy package.
source = get_resource("tls-cert-reload.path.f", pkg=__package__)
content = source.read_text().format(cert_path=self.cert_path).encode()
path_unit = files.put(
name="Upload tls-cert-reload.path",
src=io.BytesIO(content),
dest="/etc/systemd/system/tls-cert-reload.path",
user="root",
group="root",
mode="644",
)
service_unit = files.put(
name="Upload tls-cert-reload.service",
src=get_resource("tls-cert-reload.service", pkg=__package__),
dest="/etc/systemd/system/tls-cert-reload.service",
user="root",
group="root",
mode="644",
)
if path_unit.changed or service_unit.changed:
self.need_restart = True
def activate(self):
# No explicit reload needed here: dovecot/nginx read the cert
# on startup, and the .path watcher handles live changes.
self.ensure_service(
"tls-cert-reload.path",
systemd.service(
name="Enable tls-cert-reload path watcher",
service="tls-cert-reload.path",
running=True,
enabled=True,
restarted=self.need_restart,
daemon_reload=self.need_restart,
)
# No explicit reload needed here: dovecot/nginx read the cert
# on startup, and the .path watcher handles live changes.

View File

@@ -9,7 +9,7 @@
Description=Watch TLS certificate for changes
[Path]
PathChanged={{ cert_path }}
PathChanged={cert_path}
[Install]
WantedBy=multi-user.target
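The placeholder change above (`{{ cert_path }}` to `{cert_path}`) follows from the Python side switching from Jinja templating to plain `str.format()`, as seen in `ExternalTlsDeployer.configure()`. A sketch of that render step (unit text reproduced from this file):

```python
UNIT_TEMPLATE = """[Unit]
Description=Watch TLS certificate for changes

[Path]
PathChanged={cert_path}

[Install]
WantedBy=multi-user.target
"""

def render_path_unit(cert_path: str) -> str:
    # str.format needs single braces; Jinja's {{ }} would render as
    # literal braces here and break the systemd .path unit.
    return UNIT_TEMPLATE.format(cert_path=cert_path)
```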

View File

@@ -1,40 +1,52 @@
import os
from pyinfra import facts, host
from pyinfra.operations import files, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class FiltermailDeployer(Deployer):
services = ["filtermail", "filtermail-incoming", "filtermail-transport"]
services = ["filtermail", "filtermail-incoming"]
bin_path = "/usr/local/bin/filtermail"
config_path = "/usr/local/lib/chatmaild/chatmail.ini"
def install(self):
local_bin = os.environ.get("CHATMAIL_FILTERMAIL_BINARY")
if local_bin:
self.put_executable(
src=local_bin,
dest=self.bin_path,
)
return
def __init__(self):
self.need_restart = False
def install(self):
arch = host.get_fact(facts.server.Arch)
url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.4/filtermail-{arch}"
url = f"https://github.com/chatmail/filtermail/releases/download/v0.5.2/filtermail-{arch}"
sha256sum = {
"x86_64": "5295115952c72e4c4ec3c85546e094b4155a4c702c82bd71fcdcb744dc73adf6",
"aarch64": "6892244f17b8f26ccb465766e96028e7222b3c8adefca9fc6bfe9ff332ca8dff",
"x86_64": "ce24ca0075aa445510291d775fb3aea8f4411818c7b885ae51a0fe18c5f789ce",
"aarch64": "c5d783eefa5332db3d97a0e6a23917d72849e3eb45da3d16ce908a9b4e5a797d",
}[arch]
self.download_executable(url, self.bin_path, sha256sum)
self.need_restart |= files.download(
name="Download filtermail",
src=url,
sha256sum=sha256sum,
dest=self.bin_path,
mode="755",
).changed
def configure(self):
for service in self.services:
self.ensure_systemd_unit(
f"filtermail/{service}.service.j2",
self.need_restart |= files.template(
src=get_resource(f"filtermail/{service}.service.j2"),
dest=f"/etc/systemd/system/{service}.service",
user="root",
group="root",
mode="644",
bin_path=self.bin_path,
config_path=self.config_path,
)
).changed
def activate(self):
for service in self.services:
self.ensure_service(f"{service}.service")
systemd.service(
name=f"Start and enable {service}",
service=f"{service}.service",
running=True,
enabled=True,
restarted=self.need_restart,
daemon_reload=True,
)
self.need_restart = False

View File

@@ -1,11 +0,0 @@
[Unit]
Description=Chatmail transport service
[Service]
ExecStart={{ bin_path }} {{ config_path }} transport
Restart=always
RestartSec=30
User=vmail
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,558 @@
"""lxc-start/stop/status/test subcommands for testing with local containers."""
import os
import subprocess
import threading
import time
from ..util import (
collapse,
get_git_hash,
get_version_string,
shell,
)
from .incus import Incus, RelayContainer
RELAY_NAMES = ("test0", "test1")
# -------------------------------------------------------------------
# lxc-start
# -------------------------------------------------------------------
def lxc_start_cmd_options(parser):
_add_name_args(
parser,
help_text="Relay name(s) to create (default: test0 + test1).",
)
parser.add_argument(
"--ipv4-only",
dest="ipv4_only",
action="store_true",
help="Create an IPv4-only container.",
)
parser.add_argument(
"--run",
action="store_true",
help="Run 'cmdeploy run' on each container after starting it.",
)
def lxc_start_cmd(args, out):
"""Create/Ensure and start LXC relay and DNS containers."""
ix = Incus()
out.green("Ensuring DNS container (ns-localchat) ...")
dns_ct = ix.get_dns_container()
dns_ct.ensure()
if not ix.find_dns_image():
with out.section("LXC: publishing DNS image"):
dns_ct.publish_as_dns_image()
out.print(f" DNS container IP: {dns_ct.ipv4}")
names = args.names if args.names else RELAY_NAMES
relays = list(ix.get_container(n) for n in names)
for ct in relays:
out.green(f"Ensuring container {ct.name!r} ({ct.domain}) ...")
ct.ensure()
ip = ct.ipv4
out.print(" Configuring container hostname ...")
ct.configure_hosts(ip)
out.print(f" Writing {ct.ini.name} ...")
ct.write_ini(disable_ipv6=args.ipv4_only)
out.print(f" Config: {ct.ini}")
if args.ipv4_only:
ct.disable_ipv6()
ipv6 = None
else:
output = ct.bash(
"ip -6 addr show scope global -deprecated"
" | grep -oP '(?<=inet6 )[^/]+'",
check=False,
)
ipv6 = output.strip() if output else None
out.print(f" {_format_addrs(ip, ipv6)}")
out.green(f" Container {ct.name!r} ready: {ct.domain} -> {ip}")
out.print()
# Reset DNS zones only for the containers we just started
started_cnames = {ct.name for ct in relays}
managed = ix.list_managed()
started = [c for c in managed if c["name"] in started_cnames]
if started:
out.print(
f"Resetting DNS zones for {len(started)}"
" domain(s) (A + AAAA records) ..."
)
dns_ct.reset_dns_records(dns_ct.ipv4, started)
for ct in relays:
if ct.name in started_cnames:
out.print(f" Configuring and testing DNS in {ct.name} ...")
ct.configure_dns(dns_ct.ipv4)
if not ct.check_dns():
out.red(
f" DNS check failed for {ct.name}"
": cannot resolve external hosts"
)
return 1
# Generate the unified SSH config
out.green("Writing ssh-config ...")
ssh_cfg = ix.write_ssh_config()
out.print(f" {ssh_cfg}")
# Verify SSH via the generated config
for ct in relays:
out.print(f" Verifying SSH to {ct.name} via ssh-config ...")
if ct.verify_ssh(ssh_cfg):
out.print(f" SSH OK: ssh -F lxconfigs/ssh-config {ct.domain}")
else:
out.red(f" WARNING: SSH verification failed for {ct.name}")
# Print integration suggestions
if not ix.check_ssh_include():
out.green(
"\n (Optional) To use containers from any SSH client, add to ~/.ssh/config:"
)
out.green(f" Include {ssh_cfg}")
# Optionally run cmdeploy run on each relay
if args.run:
for ct in relays:
with out.section(f"cmdeploy run: {ct.sname} ({ct.domain})"):
ret = _run_cmdeploy("run", ct, ix, out, extra=["--skip-dns-check"])
if ret:
out.red(f"Deploy to {ct.sname} failed (exit {ret})")
return ret
# -------------------------------------------------------------------
# lxc-stop
# -------------------------------------------------------------------
def lxc_stop_cmd_options(parser):
parser.add_argument(
"--destroy",
action="store_true",
help="Delete containers and their config files after stopping.",
)
parser.add_argument(
"--destroy-all",
dest="destroy_all",
action="store_true",
help="Like --destroy, but also remove the ns-localchat DNS container.",
)
_add_name_args(
parser,
help_text="Container name(s) to stop (default: test0 + test1).",
)
def lxc_stop_cmd(args, out):
"""Stop (and optionally destroy) local LXC relay containers."""
ix = Incus()
names = args.names or RELAY_NAMES
destroy = args.destroy or args.destroy_all
for ct in map(ix.get_container, names):
if destroy:
out.green(f"Destroying container {ct.name!r} ...")
ct.destroy()
if hasattr(ct, "image_alias"):
out.green(f" Deleting cached image {ct.image_alias!r} ...")
ix.run(["image", "delete", ct.image_alias], check=False)
else:
out.green(f"Stopping container {ct.name!r} ...")
ct.stop(force=True)
if args.destroy_all:
dns_ct = ix.get_dns_container()
out.green(f"Destroying DNS container {dns_ct.name!r} ...")
dns_ct.destroy()
ix.delete_images()
if destroy:
ix.write_ssh_config()
out.green("LXC containers destroyed.")
else:
out.green("LXC containers stopped.")
# -------------------------------------------------------------------
# lxc-test
# -------------------------------------------------------------------
def lxc_test_cmd_options(parser):
parser.add_argument(
"--one",
action="store_true",
help="Only deploy and test against test0 (skip test1).",
)
def lxc_test_cmd(args, out):
"""Run full LXC pipeline: start, deploy, DNS, zone files, and tests.
All commands run directly on the host using
``--ssh-config lxconfigs/ssh-config`` for SSH access.
"""
ix = Incus()
t_total = time.time()
relay_names = list(RELAY_NAMES)
if args.one:
relay_names = relay_names[:1]
local_hash = get_git_hash()
# Per-relay: start containers, then deploy in parallel.
ipv4_only_flags = {RELAY_NAMES[0]: False, RELAY_NAMES[1]: True}
# Phase 1: start all containers (sequential, fast)
for ct in map(ix.get_container, relay_names):
name = ct.sname
ipv4_only = ipv4_only_flags.get(name, False)
label = "IPv4-only" if ipv4_only else "dual-stack"
with out.section(f"LXC: lxc-start {name} ({label})"):
args.names = [name]
args.ipv4_only = ipv4_only
args.run = False
ret = lxc_start_cmd(args, out)
if ret:
return ret
# Phase 2: deploy all relays in parallel
to_deploy = []
for ct in map(ix.get_container, relay_names):
status = _deploy_status(ct, local_hash, ix)
if "IN-SYNC" in status:
out.section_line(f"cmdeploy run: {ct.sname}: {status}, skipping")
else:
to_deploy.append(ct)
if to_deploy:
with out.section("cmdeploy run (parallel)"):
ret = _run_cmdeploy_parallel(
"run", to_deploy, ix, out, extra=["--skip-dns-check"]
)
if ret:
return ret
# Phase 3: publish images (sequential, fast)
for ct in map(ix.get_container, relay_names):
if ct.publish_image():
out.section_line(f"LXC: published {ct.sname} image")
else:
out.section_line(
f"LXC: publish {ct.sname} image: skipped, cached",
)
for ct in map(ix.get_container, relay_names):
with out.section(f"cmdeploy dns: {ct.sname} ({ct.domain})"):
ret = _run_cmdeploy("dns", ct, ix, out, extra=["--zonefile", str(ct.zone)])
if ret:
out.red(f"DNS for {ct.sname} failed (exit {ret})")
return ret
with out.section("LXC: PowerDNS zone update"):
dns_ct = ix.get_dns_container()
for ct in map(ix.get_container, relay_names):
if ct.zone.exists():
zone_data = ct.zone.read_text()
out.print(f" Loading {ct.zone} into PowerDNS ...")
dns_ct.set_dns_records(zone_data)
# Run tests in both directions when two relays are available.
test_pairs = [(0, 1), (1, 0)] if len(relay_names) > 1 else [(0,)]
for pair in test_pairs:
first = ix.get_container(relay_names[pair[0]])
label = first.sname
env = None
if len(pair) > 1:
second = ix.get_container(relay_names[pair[1]])
label = f"{first.sname} \u2194 {second.sname}"
env = os.environ.copy()
env["CHATMAIL_DOMAIN2"] = second.domain
with out.section(f"cmdeploy test: {label}"):
ret = _run_cmdeploy("test", first, ix, out, **({"env": env} if env else {}))
if ret:
out.red(f"Tests failed (exit {ret})")
return ret
elapsed = time.time() - t_total
out.section_line(f"lxc-test complete ({elapsed:.1f}s)")
if out.section_timings:
out.print("Section timings:")
for name, secs in out.section_timings:
out.print(f" {name:.<50s} {secs:5.1f}s")
out.print(f" {'total':.<50s} {elapsed:5.1f}s")
out.section_timings.clear()
return 0
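The dot-filled timing summary above relies on Python format specs (`:.<50s` pads with dots, `:5.1f` right-aligns seconds). A sketch with made-up timings; note the real command prints the wall-clock total rather than the sum:

```python
# Illustrative section timings (made up); the real values come from
# out.section_timings recorded by the Out.section() context manager.
timings = [("LXC: lxc-start test0", 12.3), ("cmdeploy run (parallel)", 98.7)]

# ':.<50s' left-aligns the name in a 50-char field, filling with dots;
# ':5.1f' right-aligns the seconds in a 5-char field with one decimal.
lines = [f"  {name:.<50s} {secs:5.1f}s" for name, secs in timings]
total = sum(secs for _, secs in timings)  # sketch only: lxc-test prints elapsed wall time
lines.append(f"  {'total':.<50s} {total:5.1f}s")
```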
# -------------------------------------------------------------------
# lxc-status
# -------------------------------------------------------------------
def lxc_status_cmd_options(parser):
pass
def lxc_status_cmd(args, out):
"""Show status of local LXC chatmail containers."""
ix = Incus()
containers = ix.list_managed()
if not containers:
out.red("No LXC containers found. Run 'cmdeploy lxc-start' first.")
return 1
local_hash = get_git_hash()
# Get storage pool path for display
storage_path = None
data = ix.run_json(["storage", "show", "default"], check=False)
if data:
storage_path = data.get("config", {}).get("source")
if storage_path:
out.green(f"Containers: ({storage_path})")
else:
out.green("Containers:")
dns_ip = None
for c in containers:
_print_container_status(out, c, ix, local_hash)
if c["name"] == ix.get_dns_container().name:
dns_ip = c["ip"]
_print_ssh_status(out, ix)
_print_dns_forwarding_status(out, dns_ip)
return 0
def _print_container_status(out, c, ix, local_hash):
"""Print name/status, domain/IPs, and RAM for one container."""
cname = c["name"]
is_running = c.get("status") == "Running"
ct = ix.get_container(cname)
# First line: name + running/STOPPED + deploy status
if not is_running:
tag = "STOPPED"
elif not isinstance(ct, RelayContainer):
tag = "running"
else:
tag = f"running {_deploy_status(ct, local_hash, ix)}"
out.print(f" {cname:20s} {tag}")
# Second line: domain, IPv4, IPv6
domain = c.get("domain", "")
ip = c.get("ip") or "?"
ipv6 = c.get("ipv6")
out.print(f" {domain:20s} {_format_addrs(ip, ipv6)}")
# Third line: RAM (RSS), config
indent = " " * 21
try:
used, total = ct.rss_mib()
except Exception:
ram_str = "RSS ?"
else:
ram_str = f"RSS {used}/{total} MiB ({used * 100 // total}%)"
if isinstance(ct, RelayContainer):
detail = f"{ram_str}, config: {os.path.relpath(ct.ini)}"
else:
detail = ram_str
out.print(f" {indent}{detail}")
out.print()
def _print_ssh_status(out, ix):
"""Print SSH integration status."""
out.print()
ssh_cfg = ix.ssh_config_path
if ix.check_ssh_include():
out.green("SSH: ~/.ssh/config includes lxconfigs/ssh-config ✓")
else:
out.red("SSH: ~/.ssh/config does NOT include lxconfigs/ssh-config")
out.print(" Add to ~/.ssh/config:")
out.print(f" Include {ssh_cfg}")
def _print_dns_forwarding_status(out, dns_ip):
"""Print host DNS forwarding status for .localchat."""
if not dns_ip:
out.red("DNS: ns-localchat container not found")
return
try:
rv = shell("resolvectl status incusbr0", timeout=5)
dns_ok = dns_ip in rv.stdout and "localchat" in rv.stdout
except (FileNotFoundError, subprocess.TimeoutExpired, OSError):
dns_ok = None
if dns_ok is True:
out.green(f"DNS: .localchat forwarding to {dns_ip}")
elif dns_ok is False:
out.red("DNS: .localchat forwarding NOT configured")
out.print(" Run:")
out.print(f" sudo resolvectl dns incusbr0 {dns_ip}")
out.print(" sudo resolvectl domain incusbr0 ~localchat")
else:
out.print(" DNS: .localchat forwarding status UNKNOWN")
# -------------------------------------------------------------------
# Internal helpers
# -------------------------------------------------------------------
def _format_addrs(ip, ipv6=None):
parts = [f"IPv4 {ip}"]
if ipv6:
parts.append(f"IPv6 {ipv6}")
return ", ".join(parts)
def _deploy_status(ct, local_hash, ix):
"""Return a human-readable deploy status string.
Compares the full deployed version (hash + diff) against
the local state built by :func:`~cmdeploy.util.get_version_string`.
"""
deployed = ct.deployed_version()
if deployed is None:
return "NOT DEPLOYED"
# A container launched from the relay image has the same
# git hash but a different domain - always redeploy.
deployed_domain = ct.deployed_domain()
if deployed_domain and deployed_domain != ct.domain:
return f"DOMAIN-MISMATCH (deployed: {deployed_domain})"
deployed_lines = deployed.splitlines()
deployed_hash = deployed_lines[0] if deployed_lines else ""
short = deployed_hash[:12]
if not local_hash:
return f"UNKNOWN (deployed: {short})"
local_short = local_hash[:12]
if deployed_hash != local_hash:
return f"STALE (deployed: {short}, local: {local_short})"
# Hash matches - check for uncommitted diffs
local_version = get_version_string()
if deployed != local_version:
return f"DIRTY ({local_short}, undeployed changes)"
return f"IN-SYNC ({short})"
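The status ladder in `_deploy_status` can be sketched as a pure function over the version strings (the standalone name below is illustrative, not the module's API; the deployed version carries the git hash on its first line, with any diff below):

```python
# Pure-function sketch of the deploy-status ladder: compare a deployed
# version string against the local git hash and full version string.
def deploy_status(deployed, local_hash, local_version):
    if deployed is None:
        return "NOT DEPLOYED"
    deployed_hash = deployed.splitlines()[0] if deployed else ""
    if not local_hash:
        return f"UNKNOWN (deployed: {deployed_hash[:12]})"
    if deployed_hash != local_hash:
        return f"STALE (deployed: {deployed_hash[:12]}, local: {local_hash[:12]})"
    # Hash matches - any remaining difference is an uncommitted diff.
    if deployed != local_version:
        return f"DIRTY ({local_hash[:12]}, undeployed changes)"
    return f"IN-SYNC ({deployed_hash[:12]})"
```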
def _add_name_args(parser, help_text=None):
"""Add optional positional NAME arguments."""
parser.add_argument(
"names",
nargs="*",
metavar="NAME",
help=help_text or "Relay name(s) to operate on.",
)
def _build_cmdeploy_cmd(subcmd, ct, ix, extra=None):
"""Build the ``cmdeploy <subcmd>`` command string."""
extra_str = " ".join(extra) if extra else ""
return collapse(f"""\
cmdeploy {subcmd}
--config {ct.ini}
--ssh-config {ix.ssh_config_path}
--ssh-host {ct.domain}
{extra_str}
""")
def _run_cmdeploy(subcmd, ct, ix, out, extra=None, **kwargs):
"""Run ``cmdeploy <subcmd>`` with standard --config/--ssh flags.
*ct* is a Container (uses ``ct.ini`` and ``ct.domain``).
Returns the subprocess exit code.
"""
cmd = _build_cmdeploy_cmd(subcmd, ct, ix, extra=extra)
if "cwd" not in kwargs:
kwargs["cwd"] = str(ix.project_root)
out.print(f" [$ {cmd}]")
return shell(cmd, capture_output=False, **kwargs).returncode
# Number of tail lines to print on failure.
_FAIL_CONTEXT_LINES = 40
def _run_cmdeploy_parallel(subcmd, containers, ix, out, extra=None):
"""Run ``cmdeploy <subcmd>`` for every container in parallel.
Output is captured and filtered: only lines containing
``"Starting operation"`` are printed (prefixed with the relay
short-name). On failure the last *_FAIL_CONTEXT_LINES*
lines of that process's output are shown.
"""
procs = [] # list of (container, Popen, collected_lines)
cwd = str(ix.project_root)
for ct in containers:
cmd = _build_cmdeploy_cmd(subcmd, ct, ix, extra=extra)
out.print(f" [{ct.sname}] $ {cmd}")
proc = subprocess.Popen(
cmd,
shell=True,
text=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
)
procs.append((ct, proc, []))
def _reader(ct, proc, lines):
prefix = f" [{ct.sname}]"
for raw in proc.stdout:
line = raw.rstrip("\n")
lines.append(line)
if "Starting operation" in line:
out.print(f"{prefix} {line}")
threads = []
for ct, proc, lines in procs:
t = threading.Thread(
target=_reader,
args=(ct, proc, lines),
daemon=True,
)
t.start()
threads.append(t)
for t in threads:
t.join()
for _, proc, _ in procs:
proc.wait()
# Check results
first_failure = 0
for ct, proc, lines in procs:
if proc.returncode:
out.red(f"Deploy to {ct.sname} failed " f"(exit {proc.returncode})")
tail = lines[-_FAIL_CONTEXT_LINES:]
for tl in tail:
out.print(f" [{ct.sname}] {tl}")
if not first_failure:
first_failure = proc.returncode
return first_failure


@@ -0,0 +1,754 @@
"""Core Incus operations for local chatmail LXC containers."""
import json
import subprocess
import textwrap
import time
from pathlib import Path
from ..util import shell
LABEL_KEY = "user.localchat-managed"
SSH_KEY_NAME = "id_localchat"
DOMAIN_SUFFIX = ".localchat"
UPSTREAM_IMAGE = "images:debian/12"
BASE_IMAGE_ALIAS = "localchat-base"
BASE_SETUP_NAME = "localchat-base-setup"
DNS_IMAGE_ALIAS = "localchat-ns"
DNS_CONTAINER_NAME = "ns-localchat"
DNS_DOMAIN = "ns.localchat"
BRIDGE_IPV4 = "10.200.200.1/24"
DNS_IP = "10.200.200.2"
RELAY_IPS = {
"test0": "10.200.200.10",
"test1": "10.200.200.11",
"test2": "10.200.200.12",
}
def _extract_ip(net_data, family="inet"):
"""Extract the first global-scope IP of *family* from network state data.
*net_data* is the ``state.network`` dict from ``incus list --format=json``.
*family* is ``"inet"`` for IPv4 or ``"inet6"`` for IPv6.
Returns the address string, or None.
"""
for iface_name, iface in net_data.items():
if iface_name == "lo":
continue
for addr in iface.get("addresses", []):
if addr["family"] == family and addr["scope"] == "global":
return addr["address"]
return None
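A standalone sketch of this parsing, fed a hand-made dict shaped like the `state.network` payload from `incus list --format=json` (sample addresses are made up):

```python
# Mirrors _extract_ip: first global-scope address of the family,
# skipping the loopback interface.
def extract_ip(net_data, family="inet"):
    for iface_name, iface in net_data.items():
        if iface_name == "lo":
            continue
        for addr in iface.get("addresses", []):
            if addr["family"] == family and addr["scope"] == "global":
                return addr["address"]
    return None

sample = {
    "lo": {"addresses": [
        {"family": "inet", "scope": "local", "address": "127.0.0.1"},
    ]},
    "eth0": {"addresses": [
        {"family": "inet", "scope": "global", "address": "10.200.200.10"},
        {"family": "inet6", "scope": "link", "address": "fe80::1"},
    ]},
}
```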
class Incus:
"""Gateway for all Incus container operations.
Instantiated once per CLI command and passed around so that
all modules share a single entry point for Incus interactions.
"""
def __init__(self):
self.project_root = Path(__file__).resolve().parent.parent.parent.parent.parent
self.lxconfigs_dir = self.project_root / "lxconfigs"
self.lxconfigs_dir.mkdir(exist_ok=True)
self.ssh_key_path = self.lxconfigs_dir / SSH_KEY_NAME
if not self.ssh_key_path.exists():
shell(
f"ssh-keygen -t ed25519 -f {self.ssh_key_path} -N '' -C localchat",
check=True,
)
self.ssh_config_path = self.lxconfigs_dir / "ssh-config"
def write_ssh_config(self):
"""Write ``lxconfigs/ssh-config`` mapping all containers to their IPs.
Each Host block maps the container name, the domain name, and the
short relay name (e.g. ``_test0``) to the container's IP, using the
shared localchat SSH key. Returns the path to the file.
"""
containers = self.list_managed()
key_path = self.ssh_key_path
lines = ["# Auto-generated by cmdeploy lxc-start - do not edit\n"]
for c in containers:
hosts = [c["name"]]
domain = c.get("domain", "")
if domain and domain != c["name"]:
hosts.append(domain)
short = domain.split(".")[0]
if short and short not in hosts:
hosts.append(short)
lines.append(f"\nHost {' '.join(hosts)}\n")
lines.append(f" Hostname {c['ip']}\n")
lines.append(" User root\n")
lines.append(f" IdentityFile {key_path}\n")
lines.append(" IdentitiesOnly yes\n")
lines.append(" StrictHostKeyChecking accept-new\n")
lines.append(" UserKnownHostsFile /dev/null\n")
lines.append(" LogLevel ERROR\n")
path = self.ssh_config_path
path.write_text("".join(lines))
return path
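The Host-block generation can be sketched in isolation (paths and names below are illustrative): container name, full domain, and short relay name all alias to the same container IP.

```python
# Sketch of one generated Host block, abbreviated to the key options.
def host_block(name, domain, ip, key_path):
    hosts = [name]
    if domain and domain != name:
        hosts.append(domain)
    short = domain.split(".")[0]
    if short and short not in hosts:
        hosts.append(short)
    return (
        f"Host {' '.join(hosts)}\n"
        f"    Hostname {ip}\n"
        f"    User root\n"
        f"    IdentityFile {key_path}\n"
    )

block = host_block("test0-localchat", "_test0.localchat",
                   "10.200.200.10", "lxconfigs/id_localchat")
```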
def check_ssh_include(self):
"""Check if the user's ~/.ssh/config already includes our ssh-config."""
user_ssh_config = Path.home() / ".ssh" / "config"
if not user_ssh_config.exists():
return False
lines = filter(None, map(str.strip, user_ssh_config.read_text().splitlines()))
return f"Include {self.ssh_config_path}" in lines
def run(self, args, check=True, capture=True, input=None):
"""Run an incus command."""
cmd = ["incus"] + list(args)
kwargs = dict(check=check, text=True, input=input)
if capture:
kwargs["capture_output"] = True
else:
kwargs["stdout"] = None
kwargs["stderr"] = None
return subprocess.run(cmd, **kwargs) # noqa: PLW1510
def run_json(self, args, check=True):
"""Run an incus command with ``--format=json``.
Returns the parsed JSON on success.
When *check* is True raises ``subprocess.CalledProcessError``
on non-zero exit; when False returns *None* instead.
"""
result = self.run(
list(args) + ["--format=json"],
check=check,
)
if result.returncode != 0:
return None
return json.loads(result.stdout)
def run_output(self, args, check=True):
"""Run an incus command and return its stripped stdout.
When *check* is False, returns *None* on non-zero exit
instead of raising.
"""
result = self.run(args, check=check)
if result.returncode != 0:
return None
return result.stdout.strip()
def _find_image(self, alias):
"""Return *alias* if an image with that alias exists, else None."""
images = self.run_json(["image", "list"], check=False) or []
for img in images:
for a in img.get("aliases", []):
if a.get("name") == alias:
return alias
return None
def find_dns_image(self):
"""Return the DNS image alias if it exists, else None."""
return self._find_image(DNS_IMAGE_ALIAS)
def delete_images(self):
"""Delete all cached localchat images."""
for alias in (DNS_IMAGE_ALIAS, BASE_IMAGE_ALIAS):
self.run(["image", "delete", alias], check=False)
for name in RELAY_IPS:
self.run(["image", "delete", f"localchat-{name}"], check=False)
def list_managed(self):
"""Return list of dicts with name, ip, ipv6, domain, status, memory_usage."""
containers = []
for ct in self.run_json(["list"]):
config = ct.get("config", {})
if config.get(LABEL_KEY) != "true":
continue
name = ct["name"]
state = ct.get("state", {})
net = state.get("network") or {}
containers.append(
{
"name": name,
"ip": _extract_ip(net, "inet"),
"ipv6": _extract_ip(net, "inet6"),
"domain": config.get(
"user.localchat-domain", f"{name}{DOMAIN_SUFFIX}"
),
"status": ct.get("status", "Unknown"),
"memory_usage": state.get("memory", {}).get("usage", 0),
}
)
return containers
def ensure_base_image(self):
"""Build and cache a base image with openssh and the SSH key.
The image is published as a local incus image with alias
'localchat-base'. Subsequent container launches use this
image instead of the upstream Debian 12, skipping the
slow apt-get install step.
Returns the image alias.
"""
if self._find_image(BASE_IMAGE_ALIAS):
return BASE_IMAGE_ALIAS
print(" Building base image (one-time setup) ...")
self.run(["delete", BASE_SETUP_NAME, "--force"], check=False)
self.run(["image", "delete", BASE_IMAGE_ALIAS], check=False)
self.run(
["launch", UPSTREAM_IMAGE, BASE_SETUP_NAME, "-c", "limits.memory=512MiB"]
)
ct = Container(self, BASE_SETUP_NAME, memory="512MiB")
ct.wait_ready()
key_path = self.ssh_key_path
pub_key = key_path.with_suffix(".pub").read_text().strip()
print(" ── apt-get install (base image) ──")
ct.bash(
f"""\
systemctl disable --now systemd-resolved 2>/dev/null || true
rm -f /etc/resolv.conf
echo 'nameserver 9.9.9.9' > /etc/resolv.conf
while fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
echo "Waiting for other apt-get instance to finish..."
sleep 5
done
apt-get -o DPkg::Lock::Timeout=60 update
DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server python3
systemctl enable ssh
apt-get clean
mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo '{pub_key}' > /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
""",
capture=False,
)
print(" ── base image install done ──")
self.run(["stop", BASE_SETUP_NAME])
self.run(["publish", BASE_SETUP_NAME, f"--alias={BASE_IMAGE_ALIAS}"])
self.run(["delete", BASE_SETUP_NAME, "--force"])
print(f" Base image '{BASE_IMAGE_ALIAS}' ready.")
return BASE_IMAGE_ALIAS
def ensure_bridge(self):
"""Ensure incusbr0 exists and uses our fixed IPv4 subnet."""
bridge = self.run_json(["network", "show", "incusbr0"], check=False)
if bridge and bridge.get("config", {}).get("ipv4.address") == BRIDGE_IPV4:
return
print(f" Configuring incusbr0 with static subnet {BRIDGE_IPV4} ...")
if not bridge:
self.run(["network", "create", "incusbr0"], check=False)
self.run(
[
"network",
"set",
"incusbr0",
f"ipv4.address={BRIDGE_IPV4}",
"ipv4.nat=true",
"ipv6.address=none",
"dns.mode=none",
]
)
def get_container(self, name):
"""Return a container handle for the given name.
Accepts both short relay names (``test0``) and full Incus
container names (``test0-localchat``). Returns
``DNSContainer`` for the DNS container and
``RelayContainer`` for everything else.
"""
if name == DNS_CONTAINER_NAME:
return DNSContainer(self)
return RelayContainer(self, name.removesuffix("-localchat"))
def get_dns_container(self):
"""Return a DNSContainer handle."""
return DNSContainer(self)
class Container:
"""Lightweight handle for an Incus container.
Carries the container *name* and provides convenience methods
for running commands, managing lifecycle, and extracting state
so callers don't repeat the name everywhere.
"""
def __init__(self, incus, name, domain=None, memory="200MiB", ipv4=None):
self.incus = incus
self.name = name
self.domain = domain or f"{name}{DOMAIN_SUFFIX}"
self.memory = memory
self.ipv4 = ipv4
self.ipv6 = None
def bash(self, script, check=True, capture=True):
"""Returns stdout from executing ``bash -ec <script>`` inside this container.
*script* is dedented and stripped so callers can use triple-quoted strings.
When *check* is False, returns *None* on non-zero exit instead of raising.
When *capture* is False, output streams to the terminal and None is returned.
"""
cmd = ["exec", self.name, "--", "bash", "-ec", textwrap.dedent(script).strip()]
if not capture:
self.incus.run(cmd, check=check, capture=False)
return None
return self.incus.run_output(cmd, check=check)
def run_cmd(self, *args, check=True):
"""Return stdout from running a command directly in the container (no shell).
When *check* is False, returns *None* on non-zero exit instead of raising.
"""
return self.incus.run_output(
["exec", self.name, "--", *args],
check=check,
)
def start(self):
self.incus.run(["start", self.name])
def stop(self, force=False):
cmd = ["stop", self.name]
if force:
cmd.append("--force")
self.incus.run(cmd, check=False)
def launch(self, image=None):
"""Launch from the specified image, or the base image if None."""
self.incus.ensure_bridge()
if image is None:
image = self.incus.ensure_base_image()
cfg = []
cfg += ("-c", f"{LABEL_KEY}=true")
cfg += ("-c", f"user.localchat-domain={self.domain}")
cfg += ("-c", f"limits.memory={self.memory}")
self.incus.run(["init", image, self.name, *cfg])
if self.ipv4:
self.incus.run(
[
"config",
"device",
"override",
self.name,
"eth0",
f"ipv4.address={self.ipv4}",
]
)
self.incus.run(["start", self.name])
return image
def ensure(self):
"""Create/start this container from the cached base image.
On first call, builds the base image (~30s).
Subsequent containers launch in ~2s from the cached image.
Returns ``self`` for chaining.
"""
data = self.incus.run_json(["list", self.name], check=False) or []
existing = [c for c in data if c["name"] == self.name]
image = None
if existing:
status = existing[0]["status"]
if status != "Running":
print(f" Starting stopped {self.name} container ...")
self.start()
else:
print(f" {self.name} already running")
else:
image = self.launch()
self.wait_ready()
if image:
print(f" Ensured {self.name} (launched from {image!r} image)")
return self
def destroy(self):
"""Stop, delete, and clean up config files."""
self.stop(force=True)
self.incus.run(["delete", self.name, "--force"], check=False)
def push_file_content(self, dest_path, content):
"""Write *content* to *dest_path* inside the container.
*content* is dedented and stripped so callers can use
indented triple-quoted strings.
"""
content = textwrap.dedent(content).strip() + "\n"
self.incus.run(
["file", "push", "-", f"{self.name}{dest_path}"],
input=content,
)
self.bash(f"chmod 644 {dest_path}")
def wait_ready(self, timeout=60):
"""Wait until the container is running with an IPv4 address.
Sets ``self.ipv4`` and ``self.ipv6`` (may be *None*),
or raises ``TimeoutError``.
"""
deadline = time.time() + timeout
while time.time() < deadline:
data = self.incus.run_json(
["list", self.name],
check=False,
)
if data and data[0].get("status") == "Running":
net = data[0].get("state", {}).get("network", {})
self.ipv4 = _extract_ip(net, "inet")
self.ipv6 = _extract_ip(net, "inet6")
if self.ipv4:
return
time.sleep(1)
raise TimeoutError(
f"Container {self.name!r} did not become ready within {timeout}s"
)
def rss_mib(self):
"""Return ``(used, total)`` memory from container (or None if unobtainable)."""
output = self.bash("free -m", check=False)
if output:
for line in output.splitlines():
if line.startswith("Mem:"):
parts = line.split()
return int(parts[2]), int(parts[1])
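The `free -m` parsing can be exercised against canned output (the sample below is illustrative): the `Mem:` row carries total and used MiB in columns 1 and 2.

```python
# Standalone sketch of the free -m parsing used by rss_mib().
def parse_free(output):
    for line in output.splitlines():
        if line.startswith("Mem:"):
            parts = line.split()
            return int(parts[2]), int(parts[1])  # (used, total)
    return None

sample = (
    "               total        used        free\n"
    "Mem:             500         123         377\n"
    "Swap:              0           0           0\n"
)
```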
class RelayContainer(Container):
"""Container handle for a chatmail relay.
Accepts the short relay name (e.g. ``test0``) and derives
the Incus container name and mail domain automatically.
"""
def __init__(self, incus, name):
super().__init__(
incus,
f"{name}-localchat",
domain=f"_{name}{DOMAIN_SUFFIX}",
memory="500MiB",
ipv4=RELAY_IPS.get(name),
)
self.sname = name
self.image_alias = f"localchat-{name}"
self.ini = incus.lxconfigs_dir / f"chatmail-{name}.ini"
self.zone = incus.lxconfigs_dir / f"{name}.zone"
def launch(self):
"""Launch from a cached per-relay image if available, else from base."""
cached = self.incus._find_image(self.image_alias)
if cached:
print(f" Using cached image {cached!r}")
else:
print(" No cached image, building from base")
image = super().launch(image=cached)
self.bash("rm -f /etc/chatmail-version")
return image
def destroy(self):
"""Stop, delete, and clean up config files."""
super().destroy()
if self.ini.exists():
self.ini.unlink()
def disable_ipv6(self):
"""Disable IPv6 inside the container via sysctl."""
self.bash("""\
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
mkdir -p /etc/sysctl.d
printf 'net.ipv6.conf.all.disable_ipv6=1\\nnet.ipv6.conf.default.disable_ipv6=1\\n' > /etc/sysctl.d/99-disable-ipv6.conf
""")
def configure_hosts(self, ip):
"""Set hostname and /etc/hosts inside the container."""
self.bash(f"""
echo '{self.name}' > /etc/hostname
hostname {self.name}
sed -i '/ {self.domain}$/d' /etc/hosts
echo '{ip} {self.name} {self.domain}' >> /etc/hosts
""")
def publish_image(self):
"""Publish this container as a reusable per-relay image.
Returns True if an image was published,
False if a cached image already existed.
"""
if self.incus._find_image(self.image_alias):
return False
self.bash("apt-get clean && rm -rf /var/lib/apt/lists/*")
print(f" Publishing {self.name!r} as {self.image_alias!r} image ...")
self.incus.run(
["publish", self.name, f"--alias={self.image_alias}", "--force"],
capture=False,
)
self.wait_ready()
print(f" Image {self.image_alias!r} ready.")
return True
def deployed_version(self):
"""Read /etc/chatmail-version, or None if absent."""
return self.bash("cat /etc/chatmail-version", check=False)
def deployed_domain(self):
"""Read the domain deployed on the container (postfix myhostname)."""
return self.bash(
"postconf -h myhostname 2>/dev/null",
check=False,
)
def verify_ssh(self, ssh_config):
"""Verify SSH connectivity to this container."""
cmd = f"ssh -F {ssh_config} -o ConnectTimeout=10 root@{self.domain} hostname"
return shell(cmd, timeout=15).returncode == 0
def configure_dns(self, dns_ip):
"""Point this container's resolver at *dns_ip*.
Disables systemd-resolved to free port 53 and writes
a static /etc/resolv.conf. Also configures unbound
(if present) to forward .localchat queries.
"""
self.bash(f"""\
systemctl disable --now systemd-resolved 2>/dev/null || true
rm -f /etc/resolv.conf
echo 'nameserver {dns_ip}' > /etc/resolv.conf
mkdir -p /etc/unbound/unbound.conf.d
printf 'server:\\n domain-insecure: "localchat"\\n\\nforward-zone:\\n name: "localchat"\\n forward-addr: {dns_ip}\\n' > /etc/unbound/unbound.conf.d/localchat-forward.conf
systemctl restart unbound 2>/dev/null || true
""")
def check_dns(self, retries=5, delay=2):
"""Verify that external DNS resolution works inside the container."""
for i in range(retries):
result = self.bash(
"getent hosts pypi.org",
check=False,
)
if result:
return True
if i < retries - 1:
time.sleep(delay)
return False
def write_ini(self, disable_ipv6=False):
"""Generate a chatmail.ini config file in lxconfigs/."""
from chatmaild.config import write_initial_config
overrides = {
"max_user_send_per_minute": 600,
"max_user_send_burst_size": 100,
"mtail_address": "127.0.0.1",
}
if disable_ipv6:
overrides["disable_ipv6"] = "True"
write_initial_config(self.ini, self.domain, overrides)
return self.ini
class DNSContainer(Container):
"""Specialised container handle for the PowerDNS name server."""
def __init__(self, incus):
super().__init__(
incus, DNS_CONTAINER_NAME, domain=DNS_DOMAIN, memory="256MiB", ipv4=DNS_IP
)
def launch(self):
"""Launch from cached DNS image if available, else from base image."""
cached = self.incus._find_image(DNS_IMAGE_ALIAS)
if cached:
print(f" Using cached image {cached!r}")
else:
print(" No cached image, building from base")
return super().launch(image=cached)
def publish_as_dns_image(self):
"""Publish this container as a reusable DNS image."""
if self.incus._find_image(DNS_IMAGE_ALIAS):
return
self.bash("apt-get clean && rm -rf /var/lib/apt/lists/*")
print(f" Publishing {self.name!r} as {DNS_IMAGE_ALIAS!r} image ...")
self.incus.run(
["publish", self.name, f"--alias={DNS_IMAGE_ALIAS}", "--force"],
capture=False,
)
self.wait_ready()
print(f" DNS image {DNS_IMAGE_ALIAS!r} ready.")
def pdnsutil(self, *args, check=True):
"""Run ``pdnsutil <args>`` inside the DNS container."""
return self.run_cmd("pdnsutil", *args, check=check)
def replace_rrset(self, zone, name, rtype, ttl, rdata):
"""Shortcut for ``pdnsutil replace-rrset``."""
self.pdnsutil("replace-rrset", zone, name, rtype, ttl, rdata)
def restart_services(self):
"""Restart pdns and pdns-recursor."""
self.bash("""\
systemctl restart pdns
systemctl restart pdns-recursor || true
""")
def ensure(self):
"""Create the DNS container with PowerDNS if needed.
Calls ``super().ensure()`` to create/start the container
and set up SSH, then installs PowerDNS and configures
the Incus bridge to use this container as DNS.
"""
super().ensure()
self._install_powerdns()
self.incus.run(
["network", "set", "incusbr0", "dns.mode=none"],
check=False,
)
self.incus.run(
["network", "set", "incusbr0", f"raw.dnsmasq=dhcp-option=6,{self.ipv4}"],
check=False,
)
def _install_powerdns(self):
"""Install and configure PowerDNS if not already present."""
if self.run_cmd("which", "pdns_server", check=False) is not None:
return
self.bash("""\
systemctl disable --now systemd-resolved 2>/dev/null || true
rm -f /etc/resolv.conf
echo 'nameserver 9.9.9.9' > /etc/resolv.conf
apt-get -o DPkg::Lock::Timeout=60 update
DEBIAN_FRONTEND=noninteractive apt-get install -y \
pdns-server pdns-backend-sqlite3 sqlite3 pdns-recursor dnsutils
systemctl stop pdns pdns-recursor || true
mkdir -p /var/lib/powerdns
sqlite3 /var/lib/powerdns/pdns.sqlite3 \
</usr/share/doc/pdns-backend-sqlite3/schema.sqlite3.sql
chown -R pdns:pdns /var/lib/powerdns
""")
self.push_file_content(
"/etc/powerdns/pdns.conf",
"""\
launch=gsqlite3
gsqlite3-database=/var/lib/powerdns/pdns.sqlite3
local-address=127.0.0.1
local-port=5353
""",
)
self.push_file_content(
"/etc/powerdns/recursor.conf",
"""\
local-address=0.0.0.0
local-port=53
forward-zones=localchat=127.0.0.1:5353
forward-zones-recurse=.=9.9.9.9;149.112.112.112
allow-from=0.0.0.0/0
dont-query=
dnssec=off
""",
)
self.bash("""\
systemctl start pdns
systemctl start pdns-recursor
echo 'nameserver 127.0.0.1' > /etc/resolv.conf
""")
def reset_dns_records(self, dns_ip, domains):
"""Create DNS zones with initial A records via pdnsutil.
Only sets SOA, NS, and A records as the minimal set
needed for SSH connectivity. Full records (MX, TXT, SRV,
CNAME, DKIM) are added later by ``cmdeploy dns``.
Args:
dns_ip: IP of the DNS container
domains: list of dicts with 'name', 'domain', 'ip'
"""
for d in domains:
domain = d["domain"]
ip = d["ip"]
print(f" {domain} -> {ip}")
# Delete and recreate zone fresh (removes stale records)
self.pdnsutil("delete-zone", domain, check=False)
self.pdnsutil("create-zone", domain, f"ns.{domain}")
serial = str(int(time.time()))
soa = f"ns.{domain} hostmaster.{domain} {serial} 3600 900 604800 300"
self.replace_rrset(domain, ".", "SOA", "3600", soa)
self.replace_rrset(domain, ".", "NS", "3600", f"ns.{domain}.")
self.replace_rrset(domain, ".", "A", "3600", ip)
self.replace_rrset(domain, "ns", "A", "3600", dns_ip)
# AAAA (domain -> container IPv6, if available)
ipv6 = d.get("ipv6")
if ipv6:
self.replace_rrset(domain, ".", "AAAA", "3600", ipv6)
print(f" zone reset: SOA, NS, A, AAAA ({ip}, {ipv6})")
else:
# Remove any stale AAAA record
self.pdnsutil("delete-rrset", domain, ".", "AAAA", check=False)
print(f" zone reset: SOA, NS, A ({ip}, IPv4-only)")
self.restart_services()
def set_dns_records(self, text):
"""Add or overwrite DNS records from standard BIND format.
Uses ``cmdeploy.dns.parse_zone_records`` to parse.
Zones are created automatically from the record names.
"""
from ..dns import parse_zone_records
zones_seen = set()
for name, ttl, rtype, rdata in parse_zone_records(text):
# Derive zone from name: find top-level .localchat domain
name_parts = name.split(".")
zone = name # fallback
for i in range(len(name_parts) - 1):
if name_parts[i + 1 :] == ["localchat"]:
zone = ".".join(name_parts[i:])
break
# Create zone if first time seeing it
if zone not in zones_seen:
self.pdnsutil(
"create-zone",
zone,
f"ns.{zone}",
check=False,
)
zones_seen.add(zone)
# Figure out the record name relative to zone
if name == zone:
relative = "."
elif name.endswith(f".{zone}"):
relative = name[: -(len(zone) + 1)]
else:
relative = name
self.replace_rrset(zone, relative, rtype, ttl, rdata)
if zones_seen:
self.restart_services()
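The zone-derivation and relative-name logic above can be sketched as standalone helpers (the names `derive_zone` and `relative_name` are hypothetical, assuming `.localchat` test domains):

```python
def derive_zone(name: str) -> str:
    """Return the top-level .localchat zone for a record name,
    e.g. 'ns.mail.localchat' -> 'mail.localchat'."""
    parts = name.split(".")
    for i in range(len(parts) - 1):
        if parts[i + 1:] == ["localchat"]:
            return ".".join(parts[i:])
    return name  # fallback: the name is treated as its own zone


def relative_name(name: str, zone: str) -> str:
    """Record name relative to its zone, '.' for the zone apex."""
    if name == zone:
        return "."
    if name.endswith(f".{zone}"):
        return name[: -(len(zone) + 1)]
    return name
```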

View File

@@ -78,11 +78,3 @@ counter rejected_unencrypted_mail_count
/Rejected unencrypted mail/ {
rejected_unencrypted_mail_count++
}
counter quota_expire_runs
counter quota_expire_removed_files
/quota-expire: removed (?P<count>\d+) message\(s\)/ {
quota_expire_runs++
quota_expire_removed_files += $count
}

View File

@@ -1,7 +1,10 @@
from pyinfra import facts, host
from pyinfra.operations import apt
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import (
Deployer,
get_resource,
)
class MtailDeployer(Deployer):
@@ -15,30 +18,51 @@ class MtailDeployer(Deployer):
(url, sha256sum) = {
"x86_64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_amd64.tar.gz",
"d55cb601049c5e61eabab29998dbbcea95d480e5448544f9470337ba2eea882e",
"123c2ee5f48c3eff12ebccee38befd2233d715da736000ccde49e3d5607724e4",
),
"aarch64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_arm64.tar.gz",
"f748db8ad2a1e0b63684d4c8868cf6a373a20f7e6922e5ece601fff0ee00eb1a",
"aa04811c0929b6754408676de520e050c45dddeb3401881888a092c9aea89cae",
),
}[host.get_fact(facts.server.Arch)]
self.download_executable(
url,
"/usr/local/bin/mtail",
sha256sum,
extract="gunzip | tar -xf - mtail -O",
server.shell(
name="Download mtail",
commands=[
f"(echo '{sha256sum} /usr/local/bin/mtail' | sha256sum -c) || (curl -L {url} | gunzip | tar -x -f - mtail -O >/usr/local/bin/mtail.new && mv /usr/local/bin/mtail.new /usr/local/bin/mtail)",
"chmod 755 /usr/local/bin/mtail",
],
)
def configure(self):
# Using our own systemd unit instead of `/usr/lib/systemd/system/mtail.service`.
# This allows reading from journalctl instead of from log files.
self.ensure_systemd_unit(
"mtail/mtail.service.j2",
files.template(
src=get_resource("mtail/mtail.service.j2"),
dest="/etc/systemd/system/mtail.service",
user="root",
group="root",
mode="644",
address=self.mtail_address or "127.0.0.1",
port=3903,
)
self.put_file("mtail/delivered_mail.mtail", "/etc/mtail/delivered_mail.mtail")
mtail_conf = files.put(
name="Mtail configuration",
src=get_resource("mtail/delivered_mail.mtail"),
dest="/etc/mtail/delivered_mail.mtail",
user="root",
group="root",
mode="644",
)
self.need_restart = mtail_conf.changed
def activate(self):
active = bool(self.mtail_address)
self.ensure_service("mtail.service", running=active, enabled=active)
systemd.service(
name="Start and enable mtail",
service="mtail.service",
running=bool(self.mtail_address),
enabled=bool(self.mtail_address),
restarted=self.need_restart,
)
self.need_restart = False

View File

@@ -1,13 +1,10 @@
[Unit]
Description=mtail
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/local/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs -"
Restart=on-failure
RestartSec=2s
[Install]
WantedBy=multi-user.target

View File

@@ -1,5 +1,5 @@
from chatmaild.config import Config
from pyinfra.operations import apt
from pyinfra.operations import apt, files, systemd
from cmdeploy.basedeploy import (
Deployer,
@@ -31,50 +31,87 @@ class NginxDeployer(Deployer):
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
#
self.put_executable(src="policy-rc.d", dest="/usr/sbin/policy-rc.d")
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
apt.packages(
name="Install nginx",
packages=["nginx", "libnginx-mod-stream"],
)
self.remove_file("/usr/sbin/policy-rc.d")
files.file("/usr/sbin/policy-rc.d", present=False)
def configure(self):
_configure_nginx(self, self.config)
self.need_restart = _configure_nginx(self.config)
def activate(self):
self.ensure_service("nginx.service")
systemd.service(
name="Start and enable nginx",
service="nginx.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
self.need_restart = False
def _configure_nginx(deployer, config: Config, debug: bool = False):
def _configure_nginx(config: Config, debug: bool = False) -> bool:
"""Configures nginx HTTP server."""
need_restart = False
deployer.put_template(
"nginx/nginx.conf.j2",
"/etc/nginx/nginx.conf",
main_config = files.template(
src=get_resource("nginx/nginx.conf.j2"),
dest="/etc/nginx/nginx.conf",
user="root",
group="root",
mode="644",
config=config,
disable_ipv6=config.disable_ipv6,
)
need_restart |= main_config.changed
deployer.put_template(
"nginx/autoconfig.xml.j2",
"/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
autoconfig = files.template(
src=get_resource("nginx/autoconfig.xml.j2"),
dest="/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
user="root",
group="root",
mode="644",
config=config,
)
need_restart |= autoconfig.changed
deployer.put_template(
"nginx/mta-sts.txt.j2",
"/var/www/html/.well-known/mta-sts.txt",
mta_sts_config = files.template(
src=get_resource("nginx/mta-sts.txt.j2"),
dest="/var/www/html/.well-known/mta-sts.txt",
user="root",
group="root",
mode="644",
config=config,
)
need_restart |= mta_sts_config.changed
# install CGI newemail script
#
cgi_dir = "/usr/lib/cgi-bin"
deployer.ensure_directory(cgi_dir)
files.directory(
name=f"Ensure {cgi_dir} exists",
path=cgi_dir,
user="root",
group="root",
)
deployer.put_executable(
files.put(
name="Upload cgi newemail.py script",
src=get_resource("newemail.py", pkg="chatmaild").open("rb"),
dest=f"{cgi_dir}/newemail.py",
user="root",
group="root",
mode="755",
)
return need_restart

View File

@@ -42,9 +42,6 @@ stream {
}
http {
# access_log setting is inherited by all server sections
access_log syslog:server=unix:/dev/log,facility=local7;
{% if config.tls_cert_mode == "self" %}
limit_req_zone $binary_remote_addr zone=newaccount:10m rate=2r/s;
{% endif %}
@@ -72,11 +69,9 @@ http {
index index.html index.htm;
server_name {{ config.mail_domain }} mta-sts.{{ config.mail_domain }};
server_name {{ config.mail_domain }} www.{{ config.mail_domain }} mta-sts.{{ config.mail_domain }};
location /mxdeliv {
proxy_pass http://127.0.0.1:{{ config.filtermail_http_port_incoming }};
}
access_log syslog:server=unix:/dev/log,facility=local7;
location / {
# First attempt to serve request as file, then
@@ -144,6 +139,7 @@ http {
listen 127.0.0.1:8443 ssl;
server_name www.{{ config.mail_domain }};
return 301 $scheme://{{ config.mail_domain }}$request_uri;
access_log syslog:server=unix:/dev/log,facility=local7;
}
server {

View File

@@ -4,9 +4,9 @@ Installs OpenDKIM
from pyinfra import host
from pyinfra.facts.files import File
from pyinfra.operations import apt, files, server
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class OpendkimDeployer(Deployer):
@@ -25,39 +25,65 @@ class OpendkimDeployer(Deployer):
domain = self.mail_domain
dkim_selector = "opendkim"
"""Configures OpenDKIM"""
need_restart = False
self.put_template(
"opendkim/opendkim.conf",
"/etc/opendkim.conf",
main_config = files.template(
src=get_resource("opendkim/opendkim.conf"),
dest="/etc/opendkim.conf",
user="root",
group="root",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= main_config.changed
self.remove_file("/etc/opendkim/screen.lua")
self.remove_file("/etc/opendkim/final.lua")
screen_script = files.file(
path="/etc/opendkim/screen.lua",
present=False,
)
need_restart |= screen_script.changed
self.ensure_directory(
"/etc/opendkim",
owner="opendkim",
final_script = files.file(
path="/etc/opendkim/final.lua",
present=False,
)
need_restart |= final_script.changed
files.directory(
name="Add opendkim directory to /etc",
path="/etc/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
self.put_template(
"opendkim/KeyTable",
"/etc/dkimkeys/KeyTable",
owner="opendkim",
keytable = files.template(
src=get_resource("opendkim/KeyTable"),
dest="/etc/dkimkeys/KeyTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= keytable.changed
self.put_template(
"opendkim/SigningTable",
"/etc/dkimkeys/SigningTable",
owner="opendkim",
signing_table = files.template(
src=get_resource("opendkim/SigningTable"),
dest="/etc/dkimkeys/SigningTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
self.ensure_directory(
"/var/spool/postfix/opendkim",
owner="opendkim",
need_restart |= signing_table.changed
files.directory(
name="Add opendkim socket directory to /var/spool/postfix",
path="/var/spool/postfix/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
if not host.get_fact(File, f"/etc/dkimkeys/{dkim_selector}.private"):
@@ -70,10 +96,12 @@ class OpendkimDeployer(Deployer):
_su_user="opendkim",
)
self.put_file(
"opendkim/systemd.conf",
"/etc/systemd/system/opendkim.service.d/10-prevent-memory-leak.conf",
service_file = files.put(
name="Configure opendkim to restart once a day",
src=get_resource("opendkim/systemd.conf"),
dest="/etc/systemd/system/opendkim.service.d/10-prevent-memory-leak.conf",
)
need_restart |= service_file.changed
files.file(
name="chown opendkim: /etc/dkimkeys/opendkim.private",
@@ -82,5 +110,15 @@ class OpendkimDeployer(Deployer):
group="opendkim",
)
self.need_restart = need_restart
def activate(self):
self.ensure_service("opendkim.service")
systemd.service(
name="Start and enable OpenDKIM",
service="opendkim.service",
running=True,
enabled=True,
daemon_reload=self.need_restart,
restarted=self.need_restart,
)
self.need_restart = False

View File

@@ -1,10 +1,11 @@
from pyinfra.operations import apt, server
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class PostfixDeployer(Deployer):
required_users = [("postfix", None, ["opendkim"])]
daemon_reload = False
def __init__(self, config, disable_mail):
self.config = config
@@ -18,46 +19,81 @@ class PostfixDeployer(Deployer):
def configure(self):
config = self.config
need_restart = False
self.put_template(
"postfix/main.cf.j2",
"/etc/postfix/main.cf",
main_config = files.template(
src=get_resource("postfix/main.cf.j2"),
dest="/etc/postfix/main.cf",
user="root",
group="root",
mode="644",
config=config,
disable_ipv6=config.disable_ipv6,
)
need_restart |= main_config.changed
self.put_template(
"postfix/master.cf.j2",
"/etc/postfix/master.cf",
master_config = files.template(
src=get_resource("postfix/master.cf.j2"),
dest="/etc/postfix/master.cf",
user="root",
group="root",
mode="644",
debug=False,
config=config,
)
need_restart |= master_config.changed
self.put_file(
"postfix/submission_header_cleanup",
"/etc/postfix/submission_header_cleanup",
header_cleanup = files.put(
src=get_resource("postfix/submission_header_cleanup"),
dest="/etc/postfix/submission_header_cleanup",
user="root",
group="root",
mode="644",
)
self.put_file("postfix/lmtp_header_cleanup", "/etc/postfix/lmtp_header_cleanup")
need_restart |= header_cleanup.changed
res = self.put_file(
"postfix/smtp_tls_policy_map", "/etc/postfix/smtp_tls_policy_map"
lmtp_header_cleanup = files.put(
src=get_resource("postfix/lmtp_header_cleanup"),
dest="/etc/postfix/lmtp_header_cleanup",
user="root",
group="root",
mode="644",
)
tls_policy_changed = res.changed
if tls_policy_changed:
need_restart |= lmtp_header_cleanup.changed
tls_policy_map = files.put(
name="Upload SMTP TLS Policy that accepts self-signed certificates for IP-only hosts",
src=get_resource("postfix/smtp_tls_policy_map"),
dest="/etc/postfix/smtp_tls_policy_map",
user="root",
group="root",
mode="644",
)
need_restart |= tls_policy_map.changed
if tls_policy_map.changed:
server.shell(
commands=["postmap /etc/postfix/smtp_tls_policy_map"],
)
# Login map that 1:1 maps email address to login.
self.put_file("postfix/login_map", "/etc/postfix/login_map")
self.put_file(
"service/10_restart_on_failure.conf",
"/etc/systemd/system/postfix@.service.d/10_restart.conf",
login_map = files.put(
src=get_resource("postfix/login_map"),
dest="/etc/postfix/login_map",
user="root",
group="root",
mode="644",
)
need_restart |= login_map.changed
restart_conf = files.put(
name="postfix: restart automatically on failure",
src=get_resource("service/10_restart.conf"),
dest="/etc/systemd/system/postfix@.service.d/10_restart.conf",
)
self.daemon_reload = restart_conf.changed
# Validate postfix configuration before restart
if self.need_restart:
if need_restart:
server.shell(
name="Validate postfix configuration",
# Capture postconf's stderr and fail if it emitted any warnings
@@ -65,11 +101,19 @@ class PostfixDeployer(Deployer):
"""bash -c 'w=$(postconf 2>&1 >/dev/null); [[ -z "$w" ]] || { echo "$w"; false; }'"""
],
)
self.need_restart = need_restart
def activate(self):
active = not self.disable_mail
self.ensure_service(
"postfix.service",
running=active,
enabled=active,
restart = False if self.disable_mail else self.need_restart
systemd.service(
name="disable postfix for now"
if self.disable_mail
else "Start and enable Postfix",
service="postfix.service",
running=False if self.disable_mail else True,
enabled=False if self.disable_mail else True,
restarted=restart,
daemon_reload=self.daemon_reload,
)
self.need_restart = False

View File

@@ -20,7 +20,7 @@ smtpd_tls_key_file={{ config.tls_key_path }}
smtpd_tls_security_level=may
smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=verify
smtp_tls_security_level={{ "verify" if config.tls_cert_mode == "acme" else "encrypt" }}
# Send SNI extension when connecting to other servers.
# <https://www.postfix.org/postconf.5.html#smtp_tls_servername>
smtp_tls_servername = hostname
@@ -54,16 +54,14 @@ smtpd_tls_exclude_ciphers = aNULL, RC4, MD5, DES
tls_preempt_cipherlist = yes
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = {{ config.postfix_myhostname }}
myhostname = {{ config.mail_domain }}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
# When postfix receives mail for $mydestination,
# it hands it over to dovecot via $local_transport.
mydestination = {{ config.mail_domain }}
local_transport = lmtp:unix:private/dovecot-lmtp
# postfix doesn't check whether local users exist or not:
local_recipient_maps =
# Postfix does not deliver mail for any domain by itself.
# Primary domain is listed in `virtual_mailbox_domains` instead
# and handed over to Dovecot.
mydestination =
relayhost =
{% if disable_ipv6 %}
@@ -71,6 +69,15 @@ mynetworks = 127.0.0.0/8
{% else %}
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
{% endif %}
{% if config.addr_v4 %}
smtp_bind_address = {{ config.addr_v4 }}
{% endif %}
{% if config.addr_v6 %}
smtp_bind_address6 = {{ config.addr_v6 }}
{% endif %}
{% if config.addr_v4 or config.addr_v6 %}
smtp_bind_address_enforce = yes
{% endif %}
mailbox_size_limit = 0
message_size_limit = {{config.max_message_size}}
recipient_delimiter = +
@@ -81,6 +88,8 @@ inet_protocols = ipv4
inet_protocols = all
{% endif %}
virtual_transport = lmtp:unix:private/dovecot-lmtp
virtual_mailbox_domains = {{ config.mail_domain }}
lmtp_header_checks = regexp:/etc/postfix/lmtp_header_cleanup
mua_client_restrictions = permit_sasl_authenticated, reject
@@ -93,12 +102,3 @@ smtpd_sender_login_maps = regexp:/etc/postfix/login_map
# Do not lookup SMTP client hostnames to reduce delays
# and avoid unnecessary DNS requests.
smtpd_peername_lookup = no
# Use filtermail-transport to relay messages.
# We can't force postfix to split messages per destination,
# when specifying a custom next-hop,
# so instead this is handled in filtermail.
# We use LMTP instead SMTP so we can communicate per-recipient errors back to postfix.
default_transport = lmtp-filtermail:inet:[127.0.0.1]:{{ config.filtermail_lmtp_port_transport }}
lmtp-filtermail_initial_destination_concurrency=10000
lmtp-filtermail_destination_concurrency_limit=10000

View File

@@ -80,9 +80,8 @@ filter unix - n n - - lmtp
127.0.0.1:{{ config.postfix_reinject_port }} inet n - n - 100 smtpd
-o syslog_name=postfix/reinject
-o milter_macro_daemon_name=ORIGINATING
-o smtpd_milters=unix:opendkim/opendkim.sock
-o cleanup_service_name=authclean
{% if not config.ipv4_relay %} -o smtpd_milters=unix:opendkim/opendkim.sock
{% endif %}
# Local SMTP server for reinjecting incoming filtered mail
127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 100 smtpd
@@ -101,8 +100,3 @@ filter unix - n n - - lmtp
# cannot send unprotected Subject.
authclean unix n - - - 0 cleanup
-o header_checks=regexp:/etc/postfix/submission_header_cleanup
lmtp-filtermail unix - - y - 10000 lmtp
-o syslog_name=postfix/lmtp-filtermail
-o lmtp_header_checks=
-o lmtp_tls_security_level=none

View File

@@ -57,32 +57,27 @@ def get_dkim_entry(mail_domain, pre_command, dkim_selector):
dkim_value_raw = f"v=DKIM1;k=rsa;p={dkim_pubkey};s=email;t=s"
dkim_value = '" "'.join(re.findall(".{1,255}", dkim_value_raw))
web_dkim_value = "".join(re.findall(".{1,255}", dkim_value_raw))
name = f"{dkim_selector}._domainkey.{mail_domain}."
return (
f'{name:<40} 3600 IN TXT "{dkim_value}"',
f'{name:<40} 3600 IN TXT "{web_dkim_value}"',
f'{dkim_selector}._domainkey.{mail_domain}. 3600 IN TXT "{dkim_value}"',
f'{dkim_selector}._domainkey.{mail_domain}. 3600 IN TXT "{web_dkim_value}"',
)
def get_authoritative_ns(domain):
ns_replies = [
def query_dns(typ, domain):
# Get the authoritative nameserver from the SOA record.
soa_answers = [
x.split()
for x in shell(
f"dig -r -q {domain} -t NS +noall +authority +answer", print=log_progress
f"dig -r -q {domain} -t SOA +noall +authority +answer", print=log_progress
).split("\n")
]
filtered_replies = [a for a in ns_replies if len(a) >= 5 and a[3] == "NS"]
if not filtered_replies:
soa = [a for a in soa_answers if len(a) >= 5 and a[3] == "SOA"]
if not soa:
return
return filtered_replies[0][4]
def query_dns(typ, domain):
ns = get_authoritative_ns(domain)
ns = soa[0][4]
# Query authoritative nameserver directly to bypass DNS cache.
direct_ns = f"@{ns}" if ns else ""
res = shell(f"dig {direct_ns} -r -q {domain} -t {typ} +short", print=log_progress)
res = shell(f"dig @{ns} -r -q {domain} -t {typ} +short", print=log_progress)
return next((line for line in res.split("\n") if not line.startswith(";")), "")
@@ -99,9 +94,11 @@ def check_zonefile(zonefile, verbose=True):
if not zf_line.strip() or zf_line.startswith(";"):
continue
print(f"dns-checking {zf_line!r}") if verbose else log_progress("")
zf_domain, _ttl, _in, zf_typ, zf_value = zf_line.split(None, 4)
zf_domain = zf_domain.rstrip(".")
zf_value = zf_value.strip()
parts = zf_line.split(None, 4)
zf_domain = parts[0].rstrip(".")
# parts[1]=TTL, parts[2]=IN, parts[3]=type, parts[4]=rdata
zf_typ = parts[3]
zf_value = parts[4].strip()
query_value = query_dns(zf_typ, zf_domain)
if zf_value != query_value:
assert zf_typ in ("A", "AAAA", "CNAME", "CAA", "SRV", "MX", "TXT"), zf_line
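Why a dig SOA authority answer needs at least five whitespace-separated fields before the record type and MNAME can be indexed (sample line, hypothetical values):

```python
# One answer line from `dig <domain> -t SOA +noall +authority +answer`,
# split on whitespace (values are illustrative):
line = ("example.org. 3600 IN SOA ns1.example.org. "
        "hostmaster.example.org. 1 7200 900 1209600 86400")
fields = line.split()
# fields[3] is the record type, fields[4] the primary nameserver (MNAME),
# so a usable answer has at least five fields.
assert fields[3] == "SOA"
assert fields[4] == "ns1.example.org."
```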

View File

@@ -1,8 +1,8 @@
import shlex
from pyinfra.operations import server
from pyinfra.operations import apt, server
from ..basedeploy import Deployer
from cmdeploy.basedeploy import Deployer
def openssl_selfsigned_args(domain, cert_path, key_path, days=36500):
@@ -12,15 +12,27 @@ def openssl_selfsigned_args(domain, cert_path, key_path, days=36500):
``www.<domain>`` and ``mta-sts.<domain>``.
"""
return [
"openssl", "req", "-x509",
"-newkey", "ec", "-pkeyopt", "ec_paramgen_curve:P-256",
"-noenc", "-days", str(days),
"-keyout", str(key_path),
"-out", str(cert_path),
"-subj", f"/CN={domain}",
"openssl",
"req",
"-x509",
"-newkey",
"ec",
"-pkeyopt",
"ec_paramgen_curve:P-256",
"-noenc",
"-days",
str(days),
"-keyout",
str(key_path),
"-out",
str(cert_path),
"-subj",
f"/CN={domain}",
# Mark as end-entity cert so it cannot be used as a CA to sign others.
"-addext", "basicConstraints=critical,CA:FALSE",
"-addext", "extendedKeyUsage=serverAuth,clientAuth",
"-addext",
"basicConstraints=critical,CA:FALSE",
"-addext",
"extendedKeyUsage=serverAuth,clientAuth",
"-addext",
f"subjectAltName=DNS:{domain},DNS:www.{domain},DNS:mta-sts.{domain}",
]
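The subjectAltName extension built above covers the bare, `www.`, and `mta-sts.` hostnames; a minimal sketch (hypothetical `san_for` helper):

```python
def san_for(domain: str) -> str:
    # Mirrors the -addext subjectAltName value built above.
    return f"subjectAltName=DNS:{domain},DNS:www.{domain},DNS:mta-sts.{domain}"
```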
@@ -34,11 +46,17 @@ class SelfSignedTlsDeployer(Deployer):
self.cert_path = "/etc/ssl/certs/mailserver.pem"
self.key_path = "/etc/ssl/private/mailserver.key"
def install(self):
apt.packages(
name="Install openssl",
packages=["openssl"],
)
def configure(self):
args = openssl_selfsigned_args(
self.mail_domain, self.cert_path, self.key_path,
self.mail_domain,
self.cert_path,
self.key_path,
)
cmd = shlex.join(args)
server.shell(
@@ -48,5 +66,3 @@ class SelfSignedTlsDeployer(Deployer):
def activate(self):
pass

View File

@@ -49,8 +49,13 @@ class SSHExec:
RemoteError = execnet.RemoteError
FuncError = FuncError
def __init__(self, host, verbose=False, python="python3", timeout=60):
self.gateway = execnet.makegateway(f"ssh=root@{host}//python={python}")
def __init__(
self, host, verbose=False, python="python3", timeout=60, ssh_config=None
):
spec = f"ssh=root@{host}//python={python}"
if ssh_config:
spec += f"//ssh_config={ssh_config}"
self.gateway = execnet.makegateway(spec)
self._remote_cmdloop_channel = bootstrap_remote(self.gateway, remote)
self.timeout = timeout
self.verbose = verbose
@@ -87,8 +92,9 @@ class SSHExec:
class LocalExec:
FuncError = FuncError
def __init__(self, verbose=False):
def __init__(self, verbose=False, docker=False):
self.verbose = verbose
self.docker = docker
def __call__(self, call, kwargs=None, log_callback=None):
if kwargs is None:
@@ -100,6 +106,10 @@ class LocalExec:
if not title:
title = call.__name__
where = "locally"
if self.docker:
if call == remote.rdns.perform_initial_checks:
kwargs["pre_command"] = "docker exec chatmail "
where = "in docker"
if self.verbose:
print_stderr(f"Running {where}: {title}(**{kwargs})")
return self(call, kwargs, log_callback=print_stderr)
@@ -108,3 +118,46 @@ class LocalExec:
res = self(call, kwargs, log_callback=remote.rshell.log_progress)
print_stderr()
return res
# pyinfra exposes a ``ssh_config_file`` data key that *should* let
# paramiko parse an SSH config file directly. In practice it silently
# fails to connect (zero hosts / zero operations), so we resolve the
# hostname and identity-file ourselves and pass them via
# ``--data ssh_hostname`` / ``--data ssh_key`` instead.
# Execnet uses ssh natively (and not paramiko) and doesn't have this problem.
def _get_from_ssh_config(host, ssh_config_path, key):
"""Internal helper to parse a value for a specific key from ssh-config."""
current_hosts = []
found_value = None
with open(ssh_config_path) as f:
for raw_line in f:
line = raw_line.strip()
if not line or line.startswith("#"):
continue
parts = line.split(None, 1)
if not parts:
continue
directive = parts[0].lower()
if directive == "host":
if host in current_hosts and found_value:
return found_value
current_hosts = parts[1].split()
found_value = None
elif directive == key.lower():
found_value = parts[1]
if host in current_hosts and found_value:
return found_value
return None
def resolve_host_from_ssh_config(host, ssh_config_path):
"""Resolve a host alias to its IP from an ssh-config file."""
return _get_from_ssh_config(host, ssh_config_path, "Hostname") or host
def resolve_key_from_ssh_config(host, ssh_config_path):
"""Resolve a host alias to its IdentityFile from an ssh-config file."""
return _get_from_ssh_config(host, ssh_config_path, "IdentityFile")
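A minimal standalone sketch of the ssh-config line parser above (hypothetical `ssh_config_value` helper taking the file content as a string, with sample config):

```python
def ssh_config_value(host, config_text, key):
    """Return the value of `key` inside the Host block matching `host`,
    or None if absent; mirrors the line-by-line parse used above."""
    current_hosts, found = [], None
    for raw_line in config_text.splitlines():
        line = raw_line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) < 2:
            continue
        directive = parts[0].lower()
        if directive == "host":
            # Leaving a matching Host block: report what we collected.
            if host in current_hosts and found:
                return found
            current_hosts = parts[1].split()
            found = None
        elif directive == key.lower():
            found = parts[1]
    return found if host in current_hosts else None


CONFIG = """\
Host test0
    Hostname 10.200.200.10
    IdentityFile ~/.ssh/id_test

Host test1
    Hostname 10.200.200.11
"""

assert ssh_config_value("test0", CONFIG, "Hostname") == "10.200.200.10"
assert ssh_config_value("test0", CONFIG, "IdentityFile") == "~/.ssh/id_test"
```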

View File

@@ -1,18 +1,18 @@
; Required DNS entries
zftest.testrun.org. 3600 IN A 135.181.204.127
zftest.testrun.org. 3600 IN AAAA 2a01:4f9:c012:52f4::1
zftest.testrun.org. 3600 IN MX 10 zftest.testrun.org.
_mta-sts.zftest.testrun.org. 3600 IN TXT "v=STSv1; id=202403211706"
mta-sts.zftest.testrun.org. 3600 IN CNAME zftest.testrun.org.
www.zftest.testrun.org. 3600 IN CNAME zftest.testrun.org.
opendkim._domainkey.zftest.testrun.org. 3600 IN TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoYt82CVUyz2ouaqjX2kB+5J80knAyoOU3MGU5aWppmwUwwTvj/oSTSpkc5JMtVTRmKKr8NUDWAL1Yw7dfGqqPHdHfwwjS3BIvDzYx+hzgtz62RnfNgV+/2MAoNpfX7cAFIHdRzEHNtwugc3RDLquqPoupAE3Y2YRw2T5zG5fILh4vwIcJZL5Uq6B92j8wwJqOex" "33n+vm1NKQ9rxo/UsHAmZlJzpooXcG/4igTBxJyJlamVSRR6N7Nul1v//YJb7J6v2o0iPHW6uE0StzKaPPNC2IVosSRFbD9H2oqppltptFSNPlI0E+t0JBWHem6YK7xcugiO3ImMCaaU8g6Jt/wIDAQAB;s=email;t=s"
zftest.testrun.org. 3600 IN A 135.181.204.127
zftest.testrun.org. 3600 IN AAAA 2a01:4f9:c012:52f4::1
zftest.testrun.org. 3600 IN MX 10 zftest.testrun.org.
_mta-sts.zftest.testrun.org. 3600 IN TXT "v=STSv1; id=202403211706"
mta-sts.zftest.testrun.org. 3600 IN CNAME zftest.testrun.org.
www.zftest.testrun.org. 3600 IN CNAME zftest.testrun.org.
opendkim._domainkey.zftest.testrun.org. 3600 IN TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoYt82CVUyz2ouaqjX2kB+5J80knAyoOU3MGU5aWppmwUwwTvj/oSTSpkc5JMtVTRmKKr8NUDWAL1Yw7dfGqqPHdHfwwjS3BIvDzYx+hzgtz62RnfNgV+/2MAoNpfX7cAFIHdRzEHNtwugc3RDLquqPoupAE3Y2YRw2T5zG5fILh4vwIcJZL5Uq6B92j8wwJqOex" "33n+vm1NKQ9rxo/UsHAmZlJzpooXcG/4igTBxJyJlamVSRR6N7Nul1v//YJb7J6v2o0iPHW6uE0StzKaPPNC2IVosSRFbD9H2oqppltptFSNPlI0E+t0JBWHem6YK7xcugiO3ImMCaaU8g6Jt/wIDAQAB;s=email;t=s"
; Recommended DNS entries
zftest.testrun.org. 3600 IN TXT "v=spf1 a ~all"
_dmarc.zftest.testrun.org. 3600 IN TXT "v=DMARC1;p=reject;adkim=s;aspf=s"
zftest.testrun.org. 3600 IN CAA 0 issue "letsencrypt.org;accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1371472956"
_adsp._domainkey.zftest.testrun.org. 3600 IN TXT "dkim=discardable"
_submission._tcp.zftest.testrun.org. 3600 IN SRV 0 1 587 zftest.testrun.org.
_submissions._tcp.zftest.testrun.org. 3600 IN SRV 0 1 465 zftest.testrun.org.
_imap._tcp.zftest.testrun.org. 3600 IN SRV 0 1 143 zftest.testrun.org.
_imaps._tcp.zftest.testrun.org. 3600 IN SRV 0 1 993 zftest.testrun.org.
zftest.testrun.org. 3600 IN TXT "v=spf1 a ~all"
_dmarc.zftest.testrun.org. 3600 IN TXT "v=DMARC1;p=reject;adkim=s;aspf=s"
zftest.testrun.org. 3600 IN CAA 0 issue "letsencrypt.org;accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1371472956"
_adsp._domainkey.zftest.testrun.org. 3600 IN TXT "dkim=discardable"
_submission._tcp.zftest.testrun.org. 3600 IN SRV 0 1 587 zftest.testrun.org.
_submissions._tcp.zftest.testrun.org. 3600 IN SRV 0 1 465 zftest.testrun.org.
_imap._tcp.zftest.testrun.org. 3600 IN SRV 0 1 143 zftest.testrun.org.
_imaps._tcp.zftest.testrun.org. 3600 IN SRV 0 1 993 zftest.testrun.org.

View File

@@ -12,7 +12,7 @@ def test_init(tmp_path, maildomain):
inipath = tmp_path.joinpath("chatmail.ini")
main(["init", "--config", str(inipath), maildomain])
config = read_config(inipath)
assert config.mail_domain_bare == maildomain
assert config.mail_domain == maildomain
def test_capabilities(imap):
@@ -89,17 +89,16 @@ def test_concurrent_logins_same_account(
assert login_results.get()
def test_no_vrfy(cmfactory, chatmail_config, maildomain):
ac = cmfactory.get_online_account()
addr = ac.get_config("addr")
def test_no_vrfy(chatmail_config):
domain = chatmail_config.mail_domain
s = smtplib.SMTP(maildomain)
s = smtplib.SMTP(domain)
s.starttls()
s.putcmd("vrfy", f"wrongaddress@{chatmail_config.mail_domain}")
result = s.getreply()
print(result)
s.putcmd("vrfy", addr)
s.putcmd("vrfy", f"echo@{chatmail_config.mail_domain}")
result2 = s.getreply()
print(result2)
assert result[0] == result2[0] == 252

View File

@@ -20,7 +20,7 @@ def test_fastcgi_working(maildomain, chatmail_config):
@pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning")
def test_newemail_configure(maildomain, rpc, chatmail_config):
def test_newemail_configure(maildomain, maildomain_ip, rpc, chatmail_config):
"""Test configuring accounts by scanning a QR code works."""
url = f"DCACCOUNT:https://{maildomain}/new"
for i in range(3):
@@ -30,12 +30,15 @@ def test_newemail_configure(maildomain, rpc, chatmail_config):
# set_config_from_qr, so fetch credentials via requests instead
res = requests.post(f"https://{maildomain}/new", verify=False)
data = res.json()
rpc.add_or_update_transport(account_id, {
"addr": data["email"],
"password": data["password"],
"imapServer": maildomain,
"smtpServer": maildomain,
"certificateChecks": "acceptInvalidCertificates",
})
rpc.add_or_update_transport(
account_id,
{
"addr": data["email"],
"password": data["password"],
"imapServer": maildomain_ip,
"smtpServer": maildomain_ip,
"certificateChecks": "acceptInvalidCertificates",
},
)
else:
rpc.add_transport_from_qr(account_id, url)

View File

@@ -5,7 +5,6 @@ import subprocess
import time
import pytest
from chatmaild.config import is_valid_ipv4
from cmdeploy import remote
from cmdeploy.cmdeploy import get_sshexec
@@ -13,8 +12,9 @@ from cmdeploy.cmdeploy import get_sshexec
class TestSSHExecutor:
@pytest.fixture(scope="class")
def sshexec(self, sshdomain):
return get_sshexec(sshdomain)
def sshexec(self, sshdomain, pytestconfig):
ssh_config = pytestconfig.getoption("ssh_config")
return get_sshexec(sshdomain, ssh_config=ssh_config)
def test_ls(self, sshexec):
out = sshexec(call=remote.rdns.shell, kwargs=dict(command="ls"))
@@ -22,8 +22,6 @@ class TestSSHExecutor:
assert out == out2
def test_perform_initial(self, sshexec, maildomain):
if is_valid_ipv4(maildomain):
pytest.skip(f"{maildomain} is not a domain")
res = sshexec(
remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=maildomain)
)
@@ -64,10 +62,8 @@ class TestSSHExecutor:
else:
pytest.fail("didn't raise exception")
def test_opendkim_restarted(self, sshexec, maildomain):
def test_opendkim_restarted(self, sshexec):
"""check that opendkim is not running for longer than a day."""
if is_valid_ipv4(maildomain):
pytest.skip(f"{maildomain} is an IPv4 relay, opendkim is not installed")
cmd = "systemctl show opendkim --timestamp=utc --property=ActiveEnterTimestamp"
out = sshexec(call=remote.rshell.shell, kwargs=dict(command=cmd))
datestring = out.split("=")[1]
@@ -76,44 +72,6 @@ class TestSSHExecutor:
assert (now - since_date).total_seconds() < 60 * 60 * 51
def test_dovecot_main_process_matches_installed_binary(sshdomain):
sshexec = get_sshexec(sshdomain)
main_pid = int(
sshexec(
call=remote.rshell.shell,
kwargs=dict(
command="timeout 10 systemctl show -p MainPID --value dovecot.service"
),
).strip()
)
assert main_pid != 0, "dovecot.service MainPID is 0 -- service not running?"
exe = sshexec(
call=remote.rshell.shell,
kwargs=dict(command=f"timeout 10 readlink /proc/{main_pid}/exe"),
).strip()
status_text = sshexec(
call=remote.rshell.shell,
kwargs=dict(
command="timeout 10 systemctl show -p StatusText --value dovecot.service"
),
).strip()
installed_version = sshexec(
call=remote.rshell.shell, kwargs=dict(command="timeout 10 dovecot --version")
).strip()
assert not exe.endswith("(deleted)"), (
f"running dovecot binary was deleted (stale after upgrade): {exe}"
)
expected_status_text = f"v{installed_version}"
assert status_text == expected_status_text or status_text.startswith(
f"{expected_status_text} "
), (
f"dovecot status version mismatch: "
f"StatusText={status_text!r}, installed={installed_version!r}"
)
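The StatusText acceptance rule above (exact version match, or the version followed by extra detail) can be isolated as a small predicate; a hedged sketch, with `status_matches` as a hypothetical helper name:

```python
def status_matches(status_text: str, installed_version: str) -> bool:
    # Accept "v2.3.21" exactly, or "v2.3.21 <anything>" with a separating
    # space -- mirrors the assertion logic in the test above.
    expected = f"v{installed_version}"
    return status_text == expected or status_text.startswith(f"{expected} ")
```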
def test_timezone_env(remote):
for line in remote.iter_output("env"):
print(line)
@@ -175,11 +133,10 @@ def test_authenticated_from(cmsetup, maildata):
@pytest.mark.parametrize("from_addr", ["fake@example.org", "fake@testrun.org"])
def test_reject_missing_dkim(cmsetup, maildata, from_addr):
domain = cmsetup.maildomain
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(10)
try:
sock.connect((domain, 25))
except socket.timeout:
sock = socket.create_connection((domain, 25), timeout=10)
sock.close()
except (socket.timeout, OSError):
pytest.skip(f"port 25 not reachable for {domain}")
recipient = cmsetup.gen_users(1)[0]
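The `socket.create_connection` probe introduced above generalizes to a small reachability helper; a minimal sketch (the helper name is hypothetical, not part of the PR):

```python
import socket


def port_reachable(host, port, timeout=10):
    # socket.timeout is a subclass of OSError on Python 3.10+,
    # so a single except clause covers both failure modes.
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False
```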
@@ -226,6 +183,7 @@ def test_rewrite_subject(cmsetup, maildata):
assert "Subject: Unencrypted subject" not in rcvd_msg
@pytest.mark.slow
def test_exceed_rate_limit(cmsetup, gencreds, maildata, chatmail_config):
"""Test that the per-account send-mail limit is exceeded."""
user1, user2 = cmsetup.gen_users(2)
@@ -248,6 +206,7 @@ def test_exceed_rate_limit(cmsetup, gencreds, maildata, chatmail_config):
pytest.fail("Rate limit was not exceeded")
@pytest.mark.slow
def test_expunged(remote, chatmail_config):
outdated_days = int(chatmail_config.delete_mails_after) + 1
find_cmds = [
@@ -286,15 +245,3 @@ def test_deployed_state(remote):
# assert len(git_status) == len(remote_version) # for some reason, we only get 11 lines from remote.iter_output()
for i in range(len(remote_version)):
assert git_status[i] == remote_version[i], "You have undeployed changes."
def test_nginx_access_log_only_defined_once(sshdomain):
sshexec = get_sshexec(sshdomain)
conf = sshexec(
call=remote.rshell.shell,
kwargs=dict(command="nginx -T 2>/dev/null"),
)
access_logs = [ln for ln in conf.splitlines() if ln.strip().startswith("access_log")]
assert len(access_logs) == 1, (
f"expected 1 access_log, found {len(access_logs)}: {access_logs}"
)

View File

@@ -15,7 +15,7 @@ def imap_mailbox(cmfactory, ssl_context):
(ac1,) = cmfactory.get_online_accounts(1)
user = ac1.get_config("addr")
password = ac1.get_config("mail_pw")
host = user.split("@")[1].strip("[").strip("]")
host = user.split("@")[1]
mailbox = imap_tools.MailBox(host, ssl_context=ssl_context)
mailbox.login(user, password)
mailbox.dc_ac = ac1
@@ -67,7 +67,7 @@ class TestEndToEndDeltaChat:
assert msg2.get_snapshot().text == "message0"
def test_exceed_quota(
self, cmfactory, lp, tmpdir, remote, chatmail_config, sshdomain
self, cmfactory, lp, tmpdir, remote, chatmail_config, sshdomain, pytestconfig
):
"""This is a very slow test as it needs to upload >100MB of mail data
before quota is exceeded, and thus depends on the speed of the upload.
@@ -92,7 +92,9 @@ class TestEndToEndDeltaChat:
lp.sec(f"filling remote inbox for {user}")
fn = f"7743102289.M843172P2484002.c20,S={quota},W=2398:2,"
path = chatmail_config.mailboxes_dir.joinpath(user, "cur", fn)
sshexec = get_sshexec(sshdomain)
sshexec = get_sshexec(
sshdomain, ssh_config=pytestconfig.getoption("ssh_config")
)
sshexec(call=rshell.write_numbytes, kwargs=dict(path=str(path), num=120))
res = sshexec(call=rshell.dovecot_recalc_quota, kwargs=dict(user=user))
assert res["percent"] >= 100
@@ -178,7 +180,7 @@ def test_hide_senders_ip_address(cmfactory, ssl_context):
chat.send_text("testing submission header cleanup")
user2.wait_for_incoming_msg()
addr = user2.get_config("addr")
host = addr.split("@")[1].strip("[").strip("]")
host = addr.split("@")[1]
pw = user2.get_config("mail_pw")
mailbox = imap_tools.MailBox(host, ssl_context=ssl_context)
mailbox.login(addr, pw)

View File

@@ -3,12 +3,15 @@ import os
from cmdeploy.cmdeploy import main
def test_status_cmd(chatmail_config, capsys, request):
def test_status_cmd(chatmail_config, capsys, request, pytestconfig):
os.chdir(request.config.invocation_params.dir)
command = ["status"]
if os.getenv("CHATMAIL_SSH"):
command.append("--ssh-host")
command.append(os.getenv("CHATMAIL_SSH"))
ssh_host = pytestconfig.getoption("ssh_host")
if ssh_host:
command.extend(["--ssh-host", ssh_host])
ssh_config = pytestconfig.getoption("ssh_config")
if ssh_config:
command.extend(["--ssh-config", ssh_config])
assert main(command) == 0
status_out = capsys.readouterr()
print(status_out.out)

View File

@@ -2,18 +2,96 @@ import imaplib
import itertools
import os
import random
import re
import smtplib
import socket
import ssl
import subprocess
import time
from pathlib import Path
import pytest
from chatmaild.config import format_mail_domain, is_valid_ipv4, read_config
from chatmaild.config import read_config
conftestdir = Path(__file__).parent
def pytest_addoption(parser):
parser.addoption(
"--slow", action="store_true", default=False, help="also run slow tests"
)
parser.addoption(
"--ssh-host",
dest="ssh_host",
default=None,
help="SSH host (overrides mail_domain for SSH operations).",
)
parser.addoption(
"--ssh-config",
dest="ssh_config",
default=None,
help="Path to an SSH config file (e.g. lxconfigs/ssh-config).",
)
def _parse_ssh_config_hosts(path):
"""Parse an OpenSSH config file and return a dict of hostname -> IP."""
mapping = {}
current_names = []
for ln in Path(path).read_text().splitlines():
line = ln.strip()
m = re.match(r"^Host\s+(.+)", line)
if m:
current_names = m.group(1).split()
continue
m = re.match(r"^Hostname\s+(\S+)", line)
if m and current_names:
ip = m.group(1)
for name in current_names:
mapping[name] = ip
current_names = []
return mapping
_original_getaddrinfo = socket.getaddrinfo
def _make_patched_getaddrinfo(host_map):
"""Return a getaddrinfo that resolves hosts in host_map to their IPs."""
def patched_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
if host in host_map:
ip = host_map[host]
return _original_getaddrinfo(ip, port, family, type, proto, flags)
return _original_getaddrinfo(host, port, family, type, proto, flags)
return patched_getaddrinfo
@pytest.fixture(autouse=True, scope="session")
def _setup_localchat_dns(pytestconfig):
"""Monkey-patch socket.getaddrinfo to resolve .localchat via ssh-config."""
ssh_config = pytestconfig.getoption("ssh_config")
if not ssh_config or not Path(ssh_config).exists():
yield {}
return
host_map = _parse_ssh_config_hosts(ssh_config)
if not host_map:
yield {}
return
socket.getaddrinfo = _make_patched_getaddrinfo(host_map)
try:
yield host_map
finally:
socket.getaddrinfo = _original_getaddrinfo
@pytest.fixture(scope="session")
def ssh_config_host_map(_setup_localchat_dns):
"""Return the host-name → IP map parsed from ssh-config."""
return _setup_localchat_dns
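The two pieces above (ssh-config parsing plus a session-wide `getaddrinfo` monkey-patch) combine into a small DNS shim; a self-contained sketch of the same pattern, using hypothetical host names and a throwaway config file:

```python
import re
import socket
import tempfile
from pathlib import Path

# Hypothetical ssh-config content, mirroring what lxc-start writes.
SSH_CONFIG = """\
Host test0.localchat
    Hostname 10.200.200.10
Host test1.localchat
    Hostname 10.200.200.11
"""


def parse_hosts(path):
    # Same Host/Hostname pairing logic as _parse_ssh_config_hosts above.
    mapping, names = {}, []
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if m := re.match(r"^Host\s+(.+)", line):
            names = m.group(1).split()
        elif (m := re.match(r"^Hostname\s+(\S+)", line)) and names:
            mapping.update({n: m.group(1) for n in names})
            names = []
    return mapping


with tempfile.NamedTemporaryFile("w", suffix="-ssh-config", delete=False) as f:
    f.write(SSH_CONFIG)

host_map = parse_hosts(f.name)
orig = socket.getaddrinfo
# Rewrite known host names to their IPs, delegate everything else.
socket.getaddrinfo = lambda host, port, *a: orig(host_map.get(host, host), port, *a)
try:
    infos = socket.getaddrinfo("test0.localchat", 25)
    assert infos[0][4][0] == "10.200.200.10"
finally:
    socket.getaddrinfo = orig
```

Because the rewritten name is a numeric IPv4 address, the delegated `getaddrinfo` call resolves it without touching DNS, which is exactly why the fixture works for `.localchat` names that no real resolver knows.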
def pytest_configure(config):
config._benchresults = {}
config.addinivalue_line(
@@ -21,12 +99,19 @@ def pytest_configure(config):
)
def _get_chatmail_config():
inipath = os.environ.get("CHATMAIL_INI")
if inipath:
path = Path(inipath).resolve()
return read_config(path), path
def pytest_runtest_setup(item):
markers = list(item.iter_markers(name="slow"))
if markers:
if not item.config.getoption("--slow"):
pytest.skip("skipping slow test, use --slow to run")
def _get_chatmail_config():
ini = os.environ.get("CHATMAIL_INI")
if ini:
path = Path(ini).resolve()
if path.exists():
return read_config(path), path
current = Path().resolve()
while 1:
path = current.joinpath("chatmail.ini").resolve()
@@ -49,17 +134,18 @@ def chatmail_config(pytestconfig):
@pytest.fixture(scope="session")
def maildomain(chatmail_config):
return chatmail_config.mail_domain_bare
return chatmail_config.mail_domain
@pytest.fixture(scope="session")
def maildomain_deliverable(maildomain):
return format_mail_domain(maildomain)
def sshdomain(maildomain, pytestconfig):
return pytestconfig.getoption("ssh_host") or maildomain
@pytest.fixture(scope="session")
def sshdomain(maildomain):
return os.environ.get("CHATMAIL_SSH", maildomain)
def maildomain_ip(maildomain, ssh_config_host_map):
"""Return the IP for maildomain from ssh-config, or maildomain itself."""
return ssh_config_host_map.get(maildomain, maildomain)
@pytest.fixture
@@ -303,26 +389,34 @@ from deltachat_rpc_client import DeltaChat, Rpc
class ChatmailACFactory:
"""RPC-based account factory for chatmail testing."""
def __init__(self, rpc, maildomain, gencreds, chatmail_config):
def __init__(
self,
rpc,
maildomain,
maildomain_ip,
gencreds,
chatmail_config,
ssh_config_host_map,
):
self.dc = DeltaChat(rpc)
self.rpc = rpc
self._maildomain = maildomain
self._maildomain_ip = maildomain_ip
self.gencreds = gencreds
self.chatmail_config = chatmail_config
self._ssh_config_host_map = ssh_config_host_map
def _make_transport(self, domain):
"""Build a transport config dict for the given domain."""
domain_deliverable = format_mail_domain(domain)
addr, password = self.gencreds(domain_deliverable)
addr, password = self.gencreds(domain)
server = self._ssh_config_host_map.get(domain, domain)
transport = {
"addr": addr,
"password": password,
# Setting server explicitly skips requesting autoconfig XML,
# see https://datatracker.ietf.org/doc/draft-ietf-mailmaint-autoconfig/
"imapServer": domain,
"smtpServer": domain,
"imapServer": server,
"smtpServer": server,
}
if domain.startswith("_") or is_valid_ipv4(domain):
if self.chatmail_config.tls_cert_mode == "self":
transport["certificateChecks"] = "acceptInvalidCertificates"
return transport
@@ -337,23 +431,9 @@ class ChatmailACFactory:
accounts = []
for _ in range(num):
account = self.dc.add_account()
domain_deliverable = format_mail_domain(domain)
addr, password = self.gencreds(domain_deliverable)
if is_valid_ipv4(domain):
# Use DCLOGIN scheme with explicit server hosts,
# matching how madmail presents its addresses to users.
qr = (
f"dclogin:{addr}"
f"?p={password}&v=1"
f"&ih={domain}&ip=993"
f"&sh={domain}&sp=465"
f"&ic=3&ss=default"
)
future = account.add_transport_from_qr.future(qr)
else:
future = account.add_or_update_transport.future(
self._make_transport(domain)
)
future = account.add_or_update_transport.future(
self._make_transport(domain)
)
futures.append(future)
# ensure messages stay in INBOX so that they can be
@@ -388,62 +468,57 @@ def rpc(tmp_path_factory):
@pytest.fixture
def cmfactory(rpc, gencreds, maildomain, chatmail_config):
def cmfactory(
rpc, gencreds, maildomain, maildomain_ip, chatmail_config, ssh_config_host_map
):
"""Return a ChatmailACFactory for creating online Delta Chat accounts."""
return ChatmailACFactory(
rpc=rpc,
maildomain=maildomain,
maildomain_ip=maildomain_ip,
gencreds=gencreds,
chatmail_config=chatmail_config,
ssh_config_host_map=ssh_config_host_map,
)
@pytest.fixture
def remote(sshdomain):
r = Remote(sshdomain)
yield r
r.close()
def remote(sshdomain, pytestconfig):
return Remote(sshdomain, ssh_config=pytestconfig.getoption("ssh_config"))
class Remote:
def __init__(self, sshdomain):
def __init__(self, sshdomain, ssh_config=None):
self.sshdomain = sshdomain
self._procs = []
self.ssh_config = ssh_config
def iter_output(self, logcmd="", ready=None):
getjournal = "journalctl -f" if not logcmd else logcmd
print(self.sshdomain)
if self.sshdomain in ("@local", "localhost"):
command = []
else:
command = ["ssh", f"root@{self.sshdomain}"]
match self.sshdomain:
case "@local" | "localhost":
command = []
case _:
command = ["ssh"]
if self.ssh_config:
command.extend(["-F", self.ssh_config])
command.append(f"root@{self.sshdomain}")
command.extend(getjournal.split())
popen = subprocess.Popen(
self.popen = subprocess.Popen(
command,
stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
)
self._procs.append(popen)
try:
while 1:
line = popen.stdout.readline()
res = line.decode().strip().lower()
if not res:
break
if ready is not None:
ready()
ready = None
yield res
finally:
popen.terminate()
popen.wait()
def close(self):
while self._procs:
proc = self._procs.pop()
proc.kill()
proc.wait()
while 1:
line = self.popen.stdout.readline()
res = line.decode().strip().lower()
if not res:
break
if ready is not None:
ready()
ready = None
yield res
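The command assembly in `Remote.iter_output` above can be factored into a pure function for clarity; a hedged sketch (`build_ssh_command` is a hypothetical name, not part of the PR):

```python
def build_ssh_command(sshdomain, logcmd="", ssh_config=None):
    # "@local"/"localhost" run the command directly; anything else
    # goes through ssh, optionally with -F pointing at the generated
    # lxconfigs/ssh-config file.
    getjournal = logcmd or "journalctl -f"
    if sshdomain in ("@local", "localhost"):
        command = []
    else:
        command = ["ssh"]
        if ssh_config:
            command += ["-F", str(ssh_config)]
        command.append(f"root@{sshdomain}")
    return command + getjournal.split()
```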
@pytest.fixture

View File

@@ -1,118 +0,0 @@
from unittest.mock import MagicMock, patch
from cmdeploy.basedeploy import Deployer
def test_put_file_restart_and_reload():
deployer = Deployer()
mock_res = MagicMock()
mock_res.changed = True
with patch("cmdeploy.basedeploy.files.put", return_value=mock_res):
deployer.put_file("foo.conf", "/etc/foo.conf")
assert deployer.need_restart is True
assert deployer.daemon_reload is False
deployer = Deployer()
deployer.put_file("test.service", "/etc/systemd/system/test.service")
assert deployer.need_restart is True
assert deployer.daemon_reload is True
def test_remove_file():
deployer = Deployer()
mock_res = MagicMock()
mock_res.changed = True
with patch("cmdeploy.basedeploy.files.file", return_value=mock_res) as mock_file:
deployer.remove_file("/etc/foo.conf")
mock_file.assert_called_once_with(
name="Remove /etc/foo.conf", path="/etc/foo.conf", present=False
)
assert deployer.need_restart is True
def test_ensure_systemd_unit():
deployer = Deployer()
mock_res = MagicMock()
mock_res.changed = True
# Plain service file
with patch("cmdeploy.basedeploy.files.put", return_value=mock_res) as mock_put:
deployer.ensure_systemd_unit("iroh-relay.service")
assert (
mock_put.call_args.kwargs["dest"]
== "/etc/systemd/system/iroh-relay.service"
)
assert deployer.need_restart is True
assert deployer.daemon_reload is True
deployer = Deployer()
# Template (.j2) dispatches to put_template and strips .j2 suffix
with patch("cmdeploy.basedeploy.files.template", return_value=mock_res) as mock_tpl:
deployer.ensure_systemd_unit(
"filtermail/chatmaild.service.j2",
bin_path="/usr/local/bin/filtermail",
)
assert (
mock_tpl.call_args.kwargs["dest"] == "/etc/systemd/system/chatmaild.service"
)
deployer = Deployer()
# Explicit dest_name override
with patch("cmdeploy.basedeploy.files.put", return_value=mock_res) as mock_put:
deployer.ensure_systemd_unit(
"acmetool/acmetool-reconcile.timer",
dest_name="acmetool-reconcile.timer",
)
assert (
mock_put.call_args.kwargs["dest"]
== "/etc/systemd/system/acmetool-reconcile.timer"
)
def test_ensure_service():
with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
deployer = Deployer()
deployer.need_restart = True
deployer.daemon_reload = True
deployer.ensure_service("nginx.service")
mock_svc.assert_called_once_with(
name="Start and enable nginx.service",
service="nginx.service",
running=True,
enabled=True,
restarted=True,
daemon_reload=True,
)
# daemon_reload is cleared to avoid multiple systemctl daemon-reload calls
# need_restart is kept to ensure all subsequent services also restart
assert deployer.need_restart is True
assert deployer.daemon_reload is False
with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
# Stopping suppresses restarted even when need_restart is True
deployer = Deployer()
deployer.need_restart = True
deployer.daemon_reload = True
deployer.ensure_service(
"mta-sts-daemon.service",
running=False,
enabled=False,
)
assert mock_svc.call_args.kwargs["restarted"] is False
assert deployer.need_restart is True
with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
# Multiple calls: daemon_reload resets after first, need_restart persists
deployer = Deployer()
deployer.need_restart = True
deployer.daemon_reload = True
deployer.ensure_service("chatmaild.service")
deployer.ensure_service("chatmaild-metadata.service")
second_call = mock_svc.call_args_list[1]
assert second_call.kwargs["restarted"] is True
assert second_call.kwargs["daemon_reload"] is False

View File

@@ -23,30 +23,21 @@ class TestCmdline:
run = parser.parse_args(["run"])
assert init and run
def test_init_not_overwrite(self, capsys, tmp_path, monkeypatch):
def test_init_not_overwrite(self, tmp_path, capsys, monkeypatch):
monkeypatch.delenv("CHATMAIL_INI", raising=False)
inipath = tmp_path / "chatmail.ini"
args = ["init", "--config", str(inipath), "chat.example.org"]
assert main(args) == 0
monkeypatch.chdir(tmp_path)
assert main(["init", "chat.example.org"]) == 0
capsys.readouterr()
assert main(args) == 1
assert main(["init", "chat.example.org"]) == 1
out, err = capsys.readouterr()
assert "path exists" in out.lower()
args.insert(1, "--force")
assert main(args) == 0
assert main(["init", "chat.example.org", "--force"]) == 0
out, err = capsys.readouterr()
assert "deleting config file" in out.lower()
def test_dns_skip_on_ip(self, capsys, tmp_path, monkeypatch):
monkeypatch.delenv("CHATMAIL_INI", raising=False)
inipath = tmp_path / "chatmail.ini"
assert main(["init", "--config", str(inipath), "1.3.3.7"]) == 0
assert main(["dns", "--config", str(inipath)]) == 0
out, err = capsys.readouterr()
assert out == "[WARNING] 1.3.3.7 is not a domain, skipping DNS checks.\n"
def test_www_folder(example_config, tmp_path):
reporoot = importlib.resources.files(__package__).joinpath("../../../../").resolve()

View File

@@ -4,7 +4,6 @@ import pytest
from cmdeploy import remote
from cmdeploy.dns import check_full_zone, check_initial_remote_data, parse_zone_records
from cmdeploy.remote.rdns import get_authoritative_ns
@pytest.fixture
@@ -15,15 +14,11 @@ def mockdns_base(monkeypatch):
if command.startswith("dig"):
if command == "dig":
return "."
if "with.public.soa" in command and "NS" in command:
return "domain.with.public.soa. 2419 IN NS ns1.first-ns.de."
if "with.hidden.soa" in command and "NS" in command:
if "SOA" in command:
return (
"domain.with.hidden.soa. 2137 IN NS ns1.desec.io.\n"
"domain.with.hidden.soa. 2137 IN NS ns2.desec.org."
"delta.chat. 21600 IN SOA ns1.first-ns.de. dns.hetzner.com."
" 2025102800 14400 1800 604800 3600"
)
if "NS" in command:
return "delta.chat. 21600 IN NS ns1.first-ns.de."
command_chunks = command.split()
domain, typ = command_chunks[4], command_chunks[6]
try:
@@ -130,17 +125,6 @@ class TestPerformInitialChecks:
assert not l
@pytest.mark.parametrize(
("domain", "ns"),
[
("domain.with.public.soa", "ns1.first-ns.de."),
("domain.with.hidden.soa", "ns1.desec.io."),
],
)
def test_get_authoritative_ns(domain, ns, mockdns):
assert get_authoritative_ns(domain) == ns
def test_parse_zone_records():
text = """
; This is a comment
@@ -148,28 +132,11 @@ def test_parse_zone_records():
; Another comment
www.some.domain. 3600 IN CNAME some.domain.
; Multi-word rdata
some.domain. 3600 IN MX 10 mail.some.domain.
; DKIM record (single line, multi-word TXT rdata)
dkim._domainkey.some.domain. 3600 IN TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG" "9w0BAQEFAAOCAQ8AMIIBCgKCAQEA"
; Another TXT record
_dmarc.some.domain. 3600 IN TXT "v=DMARC1;p=reject"
"""
records = list(parse_zone_records(text))
assert records == [
("some.domain", "3600", "A", "1.1.1.1"),
("www.some.domain", "3600", "CNAME", "some.domain."),
("some.domain", "3600", "MX", "10 mail.some.domain."),
(
"dkim._domainkey.some.domain",
"3600",
"TXT",
'"v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG" "9w0BAQEFAAOCAQ8AMIIBCgKCAQEA"',
),
("_dmarc.some.domain", "3600", "TXT", '"v=DMARC1;p=reject"'),
]
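A minimal reimplementation consistent with the expectations above (comments skipped, owner name with its trailing dot stripped, multi-word rdata kept verbatim); the real `cmdeploy.dns.parse_zone_records` may handle more cases:

```python
def parse_zone_records_sketch(text):
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith(";"):
            continue
        # name  ttl  IN  type  rdata...  (rdata may contain spaces)
        name, ttl, _class, rtype, rdata = line.split(None, 4)
        yield (name.rstrip("."), ttl, rtype, rdata)
```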
@@ -181,6 +148,7 @@ def test_parse_zone_records_invalid_line():
def parse_zonefile_into_dict(zonefile, mockdns_base, only_required=False):
if only_required:
# Only take records before the "; Recommended" section
zonefile = zonefile.split("; Recommended")[0]
for name, ttl, rtype, rdata in parse_zone_records(zonefile):
mockdns_base.setdefault(rtype, {})[name] = rdata

View File

@@ -1,237 +0,0 @@
from contextlib import nullcontext
from types import SimpleNamespace
import pytest
from pyinfra.facts.deb import DebPackages
from cmdeploy.dovecot import deployer as dovecot_deployer
def make_host(*fact_pairs):
"""Build a mock host; get_fact(cls) dispatches to the provided facts mapping.
Args:
*fact_pairs: tuples of (fact_class, fact_value) to register
Returns:
SimpleNamespace with get_fact that raises a clear error if an
unexpected fact type is requested.
"""
facts = dict(fact_pairs)
def get_fact(cls):
if cls not in facts:
registered = ", ".join(c.__name__ for c in facts)
raise LookupError(
f"unexpected get_fact({cls.__name__}); only registered: {registered}"
)
return facts[cls]
return SimpleNamespace(get_fact=get_fact)
@pytest.fixture
def deployer():
return dovecot_deployer.DovecotDeployer(
SimpleNamespace(mail_domain="chat.example.org"),
disable_mail=False,
)
@pytest.fixture
def patch_blocked(monkeypatch):
monkeypatch.setattr(dovecot_deployer, "blocked_service_startup", nullcontext)
@pytest.fixture
def mock_files_put(monkeypatch):
monkeypatch.setattr(
dovecot_deployer.files,
"put",
lambda **kwargs: SimpleNamespace(changed=False),
)
@pytest.fixture
def track_shell(monkeypatch):
calls = []
monkeypatch.setattr(
dovecot_deployer.server,
"shell",
lambda **kwargs: calls.append(kwargs) or SimpleNamespace(changed=False),
)
return calls
def test_download_dovecot_package_skips_epoch_matched_install(monkeypatch):
epoch_version = dovecot_deployer.DOVECOT_PACKAGE_VERSION
downloads = []
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host((DebPackages, {"dovecot-core": [epoch_version]})),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deb, changed = dovecot_deployer._download_dovecot_package("core", "amd64")
assert deb is None, f"expected no deb path when version matches, got {deb!r}"
assert changed is False, "should not flag changed when version already installed"
assert downloads == [], "should not download when version already installed"
def test_download_dovecot_package_uses_archive_version_for_url_and_filename(
monkeypatch,
):
downloads = []
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host((DebPackages, {})),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deb, changed = dovecot_deployer._download_dovecot_package("core", "amd64")
archive_version = dovecot_deployer.DOVECOT_ARCHIVE_VERSION.replace("+", "%2B")
expected_deb = f"/root/dovecot-core_{archive_version}_amd64.deb"
# Verify the returned path uses archive version, not package version (with epoch)
assert changed is True, "should flag changed when package not yet installed"
assert deb == expected_deb, f"deb path mismatch: {deb!r} != {expected_deb!r}"
assert dovecot_deployer.DOVECOT_PACKAGE_VERSION not in deb, (
f"deb path should use archive version (no epoch), got {deb!r}"
)
assert len(downloads) == 1, "files.download should be called exactly once"
def test_install_skips_dpkg_path_when_epoch_matched_packages_present(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host(
(
dovecot_deployer.DebPackages,
{
"dovecot-core": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
"dovecot-imapd": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
"dovecot-lmtpd": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
},
),
(dovecot_deployer.Arch, "x86_64"),
),
)
downloads = []
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deployer.install()
assert downloads == [], "should not download when all packages epoch-matched"
assert track_shell == [], "should not run dpkg when all packages epoch-matched"
assert deployer.need_restart is False, (
"need_restart should be False when nothing changed"
)
def test_install_unsupported_arch_falls_back_to_apt(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
# For unsupported architectures, all fact lookups return the arch string.
monkeypatch.setattr(
dovecot_deployer,
"host",
SimpleNamespace(get_fact=lambda cls: "riscv64"),
)
apt_calls = []
# Mirrors apt.packages() return value: OperationMeta with .changed property.
# Only lmtpd triggers a change to verify |= accumulation of changed flags.
def fake_apt(**kwargs):
apt_calls.append(kwargs)
changed = "lmtpd" in kwargs["packages"][0]
return SimpleNamespace(changed=changed)
monkeypatch.setattr(dovecot_deployer.apt, "packages", fake_apt)
deployer.install()
actual_pkgs = [c["packages"] for c in apt_calls]
assert actual_pkgs == [["dovecot-core"], ["dovecot-imapd"], ["dovecot-lmtpd"]], (
f"expected apt install of core/imapd/lmtpd, got {actual_pkgs}"
)
assert track_shell == [], "should not run dpkg for unsupported arch"
assert deployer.need_restart is True, (
"need_restart should be True when apt installed a package"
)
def test_install_runs_dpkg_when_packages_need_download(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host(
(dovecot_deployer.DebPackages, {}),
(dovecot_deployer.Arch, "x86_64"),
),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: SimpleNamespace(changed=True),
)
deployer.install()
assert len(track_shell) == 1, (
f"expected one server.shell() call for dpkg install, got {len(track_shell)}"
)
cmds = track_shell[0]["commands"]
assert len(cmds) == 3, f"expected 3 dpkg/apt commands, got: {cmds}"
assert cmds[0].startswith("dpkg --force-confdef --force-confold -i ")
assert "apt-get -y --fix-broken install" in cmds[1]
assert cmds[2].startswith("dpkg --force-confdef --force-confold -i ")
assert deployer.need_restart is True, (
"need_restart should be True after dpkg install"
)
def test_pick_url_falls_back_on_primary_error(monkeypatch):
def raise_error(req, timeout):
raise OSError("connection timeout")
monkeypatch.setattr(dovecot_deployer.urllib.request, "urlopen", raise_error)
result = dovecot_deployer._pick_url("http://primary", "http://fallback")
assert result == "http://fallback", (
f"should fall back when primary fails, got {result!r}"
)

View File

@@ -1,10 +1,11 @@
from pathlib import Path
import importlib.resources
from cmdeploy.www import build_webpages
def test_build_webpages(tmp_path, make_config):
src_dir = (Path(__file__).resolve() / "../../../../../www/src").resolve()
pkgroot = importlib.resources.files("cmdeploy")
src_dir = pkgroot.joinpath("../../../www/src").resolve()
assert src_dir.exists(), src_dir
config = make_config("chat.example.org")
build_dir = tmp_path.joinpath("build")

View File

@@ -0,0 +1,173 @@
"""Tests for cmdeploy lxc-* subcommands."""
import shutil
import subprocess
import sys
import pytest
from cmdeploy.lxc import cli
from cmdeploy.lxc.incus import Incus
pytestmark = pytest.mark.skipif(
not shutil.which("incus") or not shutil.which("lxc"),
reason="incus/lxc not installed",
)
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def ix():
return Incus()
@pytest.fixture(scope="session")
def lxc_setup():
ix = Incus()
ix.get_dns_container().ensure()
return ix.list_managed()
@pytest.fixture(scope="session")
def relay_container(lxc_setup):
test_names = {f"{n}-localchat" for n in cli.RELAY_NAMES}
relays = [c for c in lxc_setup if c["name"] in test_names and c.get("ip")]
if not relays:
pytest.skip("no test relay containers running")
return relays[0]
@pytest.fixture
def cmdeploy():
def run(*args):
return subprocess.run(
[sys.executable, "-m", "cmdeploy.cmdeploy", *args],
capture_output=True,
text=True,
check=False,
)
return run
# ---------------------------------------------------------------------------
# Tests
# ---------------------------------------------------------------------------
@pytest.mark.parametrize(
"subcmd, expected, absent",
[
(None, ["lxc-start", "lxc-stop", "lxc-test", "lxc-status"], ["lxc-destroy"]),
("lxc-start", ["--ipv4-only", "--run"], ["--config"]),
("lxc-stop", ["--destroy", "--destroy-all"], ["--config"]),
("lxc-test", ["--one"], ["--config"]),
("lxc-status", [], ["--config"]),
("run", ["--ssh-config"], ["--lxc"]),
("dns", ["--ssh-config"], []),
("test", ["--ssh-config"], []),
("status", ["--ssh-config"], []),
],
)
def test_help_options(cmdeploy, subcmd, expected, absent):
args = [subcmd, "--help"] if subcmd else ["--help"]
result = cmdeploy(*args)
output = result.stdout + result.stderr
assert result.returncode == 0
for flag in expected:
assert flag in output
for flag in absent:
assert flag not in output
class TestSSHConfig:
def test_lxconfigs(self, ix, lxc_setup):
d = ix.lxconfigs_dir
assert d.name == "lxconfigs"
assert d.exists()
path = ix.ssh_config_path
assert path.name == "ssh-config"
assert path.parent.name == "lxconfigs"
def test_write_ssh_config(self, ix, lxc_setup):
path = ix.write_ssh_config()
assert path.exists()
text = path.read_text()
for c in lxc_setup:
if c.get("ip"):
assert c["name"] in text
assert f"Hostname {c['ip']}" in text
assert "User root" in text
assert "IdentityFile" in text
assert "StrictHostKeyChecking accept-new" in text
def test_dns(ix, relay_container):
def dig(qname, qtype):
ct = ix.get_dns_container()
return ct.bash(f"dig @127.0.0.1 {qname} {qtype} +short").strip()
domain = relay_container["domain"]
assert dig(domain, "A") == relay_container["ip"]
assert domain in dig(domain, "MX")
assert "587" in dig(f"_submission._tcp.{domain}", "SRV")
class TestLxcStatus:
def test_cli_lxc_status_help(self, cmdeploy):
result = cmdeploy("lxc-status", "--help")
assert result.returncode == 0
assert "status" in result.stdout.lower()
def test_shows_containers(self, lxc_setup, capsys):
from cmdeploy.cmdeploy import Out
class QuietOut(Out):
def red(self, msg, **kw):
pass
def green(self, msg, **kw):
pass
ret = cli.lxc_status_cmd(None, QuietOut())
assert ret == 0
captured = capsys.readouterr().out
assert "ns-localchat" in captured
assert "running" in captured
def test_deploy_freshness(self, ix, monkeypatch):
ct = ix.get_container("x")
monkeypatch.setattr(
"cmdeploy.lxc.incus.RelayContainer.deployed_version",
lambda _self: "abc123def456",
)
monkeypatch.setattr(
"cmdeploy.lxc.incus.RelayContainer.deployed_domain",
lambda _self: ct.domain,
)
monkeypatch.setattr(
"cmdeploy.lxc.cli.get_version_string",
lambda: "abc123def456",
)
assert "IN-SYNC" in cli._deploy_status(ct, "abc123def456", ix)
assert "STALE" in cli._deploy_status(ct, "other_hash_here", ix)
# Hash matches but local has uncommitted changes
monkeypatch.setattr(
"cmdeploy.lxc.cli.get_version_string",
lambda: "abc123def456\ndiff --git a/foo",
)
assert "DIRTY" in cli._deploy_status(ct, "abc123def456", ix)
monkeypatch.setattr(
"cmdeploy.lxc.incus.RelayContainer.deployed_version",
lambda _self: None,
)
assert "NOT DEPLOYED" in cli._deploy_status(ct, "abc123", ix)

View File

@@ -0,0 +1,97 @@
import pytest
from cmdeploy.util import (
build_chatmaild_sdist,
collapse,
get_chatmaild_sdist,
get_git_hash,
get_version_string,
shell,
)
def test_collapse():
text = """
line 1
line 2
"""
assert collapse(text) == "line 1 line 2"
assert collapse(" single line ") == "single line"
def test_git_helpers_no_git(tmp_path):
# Not a git repo
assert get_git_hash(root=tmp_path) is None
assert get_version_string(root=tmp_path) == "unknown"
def test_git_helpers_empty_repo(tmp_path):
shell("git init", cwd=tmp_path, check=True)
# No commits yet
assert get_git_hash(root=tmp_path) is None
assert get_version_string(root=tmp_path) == "unknown"
def test_git_helpers_with_commits_and_diffs(tmp_path):
shell("git init", cwd=tmp_path, check=True)
shell("git config user.email you@example.com", cwd=tmp_path, check=True)
shell("git config user.name 'Your Name'", cwd=tmp_path, check=True)
# First commit
path = tmp_path / "file.txt"
path.write_text("content")
shell("git add file.txt", cwd=tmp_path, check=True)
shell("git commit -m initial", cwd=tmp_path, check=True)
git_hash = get_git_hash(root=tmp_path)
assert len(git_hash) >= 7 # usually 40, but git is git
assert get_version_string(root=tmp_path) == git_hash
# Create a diff
path.write_text("new content")
v = get_version_string(root=tmp_path)
assert v.startswith(git_hash + "\n")
assert "new content" in v
assert not v.endswith("\n")
# Commit again -> no diff
shell("git add file.txt", cwd=tmp_path, check=True)
shell("git commit -m second", cwd=tmp_path, check=True)
new_hash = get_git_hash(root=tmp_path)
assert new_hash != git_hash
assert get_version_string(root=tmp_path) == new_hash
# Diffs inside excluded test dirs are invisible to the version string
test_dir = tmp_path / "cmdeploy" / "src" / "cmdeploy" / "tests"
test_dir.mkdir(parents=True)
test_file = test_dir / "test_foo.py"
test_file.write_text("pass")
shell("git add .", cwd=tmp_path, check=True)
shell("git commit -m 'add test file'", cwd=tmp_path, check=True)
test_file.write_text("assert True")
assert get_version_string(root=tmp_path) == get_git_hash(root=tmp_path)
def test_build_chatmaild_sdist(tmp_path):
dist_dir = tmp_path / "dist"
# First call builds the sdist
result = build_chatmaild_sdist(dist_dir)
assert result.name.endswith(".tar.gz")
assert result.stat().st_size > 0
# Second call is idempotent - returns the same file, no rebuild
mtime = result.stat().st_mtime
result2 = build_chatmaild_sdist(dist_dir)
assert result2 == result
assert result2.stat().st_mtime == mtime
def test_get_chatmaild_sdist_errors(tmp_path):
with pytest.raises(FileNotFoundError):
get_chatmaild_sdist(tmp_path / "nonexistent")
empty = tmp_path / "empty"
empty.mkdir()
with pytest.raises(FileNotFoundError):
get_chatmaild_sdist(empty)

View File

@@ -0,0 +1,126 @@
"""Shared utility functions for cmdeploy."""
import fcntl
import subprocess
import sys
import textwrap
from pathlib import Path
def _project_root():
"""Return the project root directory."""
return Path(__file__).resolve().parent.parent.parent.parent
def collapse(text):
"""Dedent, join lines, and strip a (triple-quoted) string.
Handy for writing shell commands across multiple lines::
cmd = collapse(f\"""
cmdeploy run
--config {ct.ini}
--ssh-host {ct.domain}
\""")
"""
return textwrap.dedent(text).replace("\n", " ").strip()
def shell(cmd, check=False, **kwargs):
"""Run a shell command string with sensible defaults.
*cmd* is passed through :func:`collapse` first, so callers
can use triple-quoted f-strings freely.
Captures stdout/stderr by default; pass ``capture_output=False``
to stream output to the terminal instead.
"""
if "capture_output" not in kwargs and "stdout" not in kwargs:
kwargs["capture_output"] = True
return subprocess.run(collapse(cmd), shell=True, text=True, check=check, **kwargs)
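The interplay of ``collapse`` and ``shell`` can be seen in a self-contained sketch; ``collapse`` is re-implemented inline here so the snippet runs standalone:

```python
import subprocess
import textwrap

def collapse(text):
    # dedent, join lines with spaces, and strip (mirrors the helper above)
    return textwrap.dedent(text).replace("\n", " ").strip()

# a multi-line triple-quoted command collapses to a single shell line
cmd = collapse("""
    echo
    hello world
""")
result = subprocess.run(cmd, shell=True, text=True, capture_output=True)
print(result.stdout.strip())  # hello world
```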
def get_git_hash(root=None):
"""Return the local HEAD commit hash, or None."""
if root is None:
root = _project_root()
result = shell(
"git rev-parse HEAD",
cwd=str(root),
)
if result.returncode == 0:
return result.stdout.strip()
return None
DIFF_EXCLUDES = (
":(exclude)cmdeploy/src/cmdeploy/tests",
":(exclude)chatmaild/src/chatmaild/tests",
)
"""Git pathspecs appended to ``git diff`` so that changes
limited to test files do not affect the deployed version string."""
def get_version_string(root=None):
"""Return ``git_hash\\ngit_diff`` for the local working tree.
Used by :class:`~cmdeploy.deployers.GithashDeployer` to write
``/etc/chatmail-version`` and by ``lxc-status`` to compare
the deployed state against the local checkout.
Changes inside directories listed in :data:`DIFF_EXCLUDES`
are ignored so that test-only edits do not trigger
a redeployment.
"""
if root is None:
root = _project_root()
git_hash = get_git_hash(root=root) or "unknown"
excludes = " ".join(f"'{e}'" for e in DIFF_EXCLUDES)
try:
git_diff = shell(
f"git diff -- . {excludes}",
cwd=str(root),
).stdout.strip()
except Exception:
git_diff = ""
if git_diff:
return f"{git_hash}\n{git_diff}"
return git_hash
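The ``hash``-or-``hash\ndiff`` contract can be exercised standalone; this hypothetical ``split_version_string`` helper (not part of the module) shows how a consumer can take the value apart again:

```python
def split_version_string(version):
    """Split a 'git_hash' or 'git_hash\ngit_diff' value into (hash, diff);
    diff is '' when the working tree was clean."""
    git_hash, _, git_diff = version.partition("\n")
    return git_hash, git_diff

print(split_version_string("abc123"))                      # ('abc123', '')
print(split_version_string("abc123\ndiff --git a/foo")[0])  # abc123
```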
def _chatmaild_default_dist_dir():
return _project_root() / "chatmaild" / "dist"
def build_chatmaild_sdist(dist_dir=None):
"""Build the chatmaild sdist if not already present (idempotent, process-safe)."""
if dist_dir is None:
dist_dir = _chatmaild_default_dist_dir()
dist_dir = Path(dist_dir).resolve()
dist_dir.mkdir(parents=True, exist_ok=True)
lockfile = dist_dir.parent / ".dist.lock"
with open(lockfile, "w") as fh:
fcntl.flock(fh, fcntl.LOCK_EX)
existing = [p for p in dist_dir.iterdir() if p.suffix == ".gz"]
if existing:
return existing[0]
subprocess.check_output(
[sys.executable, "-m", "build", "-n"]
+ ["--sdist", "chatmaild", "--outdir", str(dist_dir)],
cwd=str(_project_root()),
)
return get_chatmaild_sdist(dist_dir)
def get_chatmaild_sdist(dist_dir=None):
"""Return the path to the pre-built chatmaild sdist."""
if dist_dir is None:
dist_dir = _chatmaild_default_dist_dir()
entries = list(Path(dist_dir).iterdir())
if len(entries) == 0:
raise FileNotFoundError(f"dist directory is empty: {dist_dir}")
if len(entries) > 1:
raise ValueError(f"expected one file in {dist_dir}, found {len(entries)}")
return entries[0]

View File

@@ -1,4 +1,5 @@
import hashlib
import importlib.resources
import re
import time
import traceback
@@ -36,7 +37,7 @@ def prepare_template(source):
def get_paths(config) -> (Path, Path, Path):
reporoot = (Path(__file__).resolve() / "../../../../").resolve()
reporoot = importlib.resources.files(__package__).joinpath("../../../").resolve()
www_path = Path(config.www_folder)
# if www_folder was not set, use default directory
if config.www_folder == "":
@@ -132,7 +133,8 @@ def find_merge_conflict(src_dir) -> Path:
def main():
reporoot = (Path(__file__).resolve() / "../../../../").resolve()
path = importlib.resources.files(__package__)
reporoot = path.joinpath("../../../").resolve()
inipath = reporoot.joinpath("chatmail.ini")
config = read_config(inipath)
config.webdev = True

View File

@@ -4,14 +4,12 @@
You can use the `make` command and `make html` to build web pages.
You need a Python environment with `sphinx` and other
dependencies, you can create it by running `scripts/initenv.sh`
from the repository root.
You need a Python environment where the following install was executed:
pip install furo sphinx-autobuild
To develop/change documentation, you can then do:
. venv/bin/activate
cd doc
make auto
A page will open at http://127.0.0.1:8000/ serving the docs and it will

View File

@@ -15,7 +15,7 @@ author = 'chatmail collective'
extensions = [
#'sphinx.ext.autodoc',
#'sphinx.ext.viewdoc',
#'sphinx.ext.viewcode',
'sphinxcontrib.mermaid',
]

View File

@@ -14,6 +14,8 @@ Minimal requirements and prerequisites
You will need the following:
- Control over a domain through a DNS provider of your choice.
- A Debian 12 **deployment server** with reachable SMTP/SUBMISSIONS/IMAPS/HTTPS ports.
IPv6 is encouraged if available. Chatmail relay servers only require
1GB RAM, one CPU, and perhaps 10GB storage for a few thousand active
@@ -26,11 +28,6 @@ You will need the following:
(An ed25519 private key is required due to an `upstream bug in
paramiko <https://github.com/paramiko/paramiko/issues/2191>`_)
- Control over a domain through a DNS provider of your choice
(there is experimental support for :ref:`DNS-less relays <iponly>`).
.. _setup:
Setup with ``scripts/cmdeploy``
-------------------------------------

View File

@@ -16,7 +16,6 @@ Contributions and feedback welcome through the https://github.com/chatmail/relay
proxy
migrate
overview
reverse_dns
lxc
related
faq
iponly

View File

@@ -1,29 +0,0 @@
.. _iponly:
Hosting without DNS records
===========================
.. note::
This option is experimental and might change without notice.
In case you don't have a domain,
for example in a local network,
you can run a chatmail relay with only an IPv4 address as well.
To deploy a relay without a domain,
run ``cmdeploy init`` with only the IPv4 address
during the :ref:`installation steps <setup>`,
for example ``cmdeploy init 13.12.23.42``.
Drawbacks
---------
- your transport encryption will only use self-signed TLS certificates,
which are vulnerable to MITM attacks.
The chatmail core's end-to-end encryption should suffice in most scenarios, though.
- your messages will not be DKIM-signed;
experimentally, most chatmail relays accept non-DKIM-signed messages from IPv4-only relays,
but some relays might not accept messages from yours.

288
doc/source/lxc.rst Normal file
View File

@@ -0,0 +1,288 @@
Local testing with LXC/Incus
============================
.. warning::
cmdeploy LXC support is geared towards local testing and CI only.
Do not base production setups on it.
The ``cmdeploy`` tool includes support for running
chatmail relays inside local
`Incus <https://linuxcontainers.org/incus/>`_ LXC containers.
This is useful for development, testing, and CI
without requiring a remote server.
LXC system containers behave like lightweight virtual machines.
They share the host's kernel but run their own init system
(systemd), package manager, and network stack,
so the cmdeploy deployment scripts work exactly
as they would on a real Debian server or cloud VPS.
Prerequisites
-------------
Install `Incus <https://linuxcontainers.org/incus/>`_
(LXC container manager).
See the `official installation guide
<https://linuxcontainers.org/incus/docs/main/installing/>`_
for full details.
After installing incus, initialise and grant yourself access::
sudo incus admin init --minimal
sudo usermod -aG incus-admin $USER
.. warning::
You **must now log out and back in** (or run ``newgrp incus-admin``)
after adding yourself to the group.
Without this, all ``cmdeploy lxc-*`` commands
will fail with permission errors.
Verify the installation works by running ``incus list``,
which should print an empty table without errors.
Quick start
-----------
::
cd relay
scripts/initenv.sh # bootstrap venv
source venv/bin/activate # activate venv
cmdeploy lxc-test # create containers, deploy, test
The ``lxc-test`` command executes each step as a separate ``cmdeploy``
subprocess command, so you can copy-paste and run the steps individually.
A section timing summary is printed at the end.
No host DNS delegation or ``~/.ssh/config`` changes are needed
because lxc-test passes ssh-related CLI options to
``cmdeploy run`` and ``cmdeploy test`` commands.
CLI reference
--------------
``lxc-start [--ipv4-only] [--run] [NAME ...]``
Create and start containers.
Without arguments, creates ``test0-localchat`` and ``ns-localchat`` (DNS).
Pass one or more ``NAME`` arguments to create user relay containers instead
(e.g. ``cmdeploy lxc-start myrelay``).
Use ``--ipv4-only`` to set ``disable_ipv6 = True`` in the generated ``chatmail.ini``,
producing an IPv4-only relay.
Use ``--run`` to automatically run ``cmdeploy run`` on each container after starting it.
Generates ``lxconfigs/ssh-config``.
It reuses existing containers and resets DNS zones to minimal records.
``lxc-stop [--destroy] [--destroy-all] [NAME ...]``
Stop relay containers.
Without arguments, stops ``test0-localchat`` and ``test1-localchat``.
Pass ``NAME`` to stop specific containers.
Use ``--destroy`` to also delete the containers and their config files.
Use ``--destroy-all`` to additionally destroy
the ``ns-localchat`` DNS container **and** remove all cached
images (``localchat-base``, per-relay images),
giving a fully clean slate for the next ``lxc-test``.
User containers are **never** destroyed unless named explicitly.
``lxc-test [--one]``
Idempotent full pipeline:
1. ``lxc-start``: create ``test0`` + ``test1`` containers,
configure DNS with readiness check
2. ``cmdeploy run``: deploy chatmail services
on all relays **in parallel**
3. publish per-relay cached images (``localchat-test0``,
``localchat-test1``) after first successful deploy
4. ``cmdeploy dns --zonefile``: generate standard
BIND-format zone files, load full DNS records
5. ``cmdeploy test``: run full test suite
with ``-n4 -x``
By default creates, deploys, and tests both ``test0`` and ``test1``
for dual-domain federation testing (sets ``CHATMAIL_DOMAIN2=_test1.localchat``).
test0 runs dual-stack (IPv4 + IPv6) while test1 runs IPv4-only (``disable_ipv6 = True``).
Pass ``--one`` to only deploy and test against ``test0``
(skips ``test1``, does not set ``CHATMAIL_DOMAIN2``).
``lxc-status``
Show live status of all LXC containers (including the DNS container),
deploy freshness (comparing ``/etc/chatmail-version``
against local ``git rev-parse HEAD`` and ``git diff``),
SSH config inclusion, and host DNS forwarding for ``.localchat``.
Reports **IN-SYNC**, **DIRTY** (hash matches but uncommitted changes exist),
**STALE** (different commit), or **NOT DEPLOYED**.
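The four states can be modeled with a small standalone sketch (hypothetical names, not the actual ``_deploy_status`` implementation):

```python
def classify(deployed_version, local_hash, local_has_diff):
    # deployed_version: contents of /etc/chatmail-version, or None
    if deployed_version is None:
        return "NOT DEPLOYED"
    deployed_hash = deployed_version.split("\n", 1)[0]
    if deployed_hash != local_hash:
        return "STALE"       # deployed from a different commit
    if local_has_diff:
        return "DIRTY"       # same commit, but uncommitted local changes
    return "IN-SYNC"

print(classify(None, "abc", False))          # NOT DEPLOYED
print(classify("abc\ndiff", "def", False))   # STALE
print(classify("abc", "abc", True))          # DIRTY
print(classify("abc", "abc", False))         # IN-SYNC
```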
Container types
-----------------
**Test relay containers** (``test0-localchat``, ``test1-localchat``)
Created automatically by ``lxc-test``.
**test0** has IPv4 and IPv6 configured,
**test1** is IPv4-only (``disable_ipv6 = True``).
**User relay containers** (``<name>-localchat``)
Created by ``cmdeploy lxc-start <name>``
where ``<name>`` does not start with ``test``.
These are personal development instances,
never touched by ``lxc-stop --destroy`` unless named explicitly.
**DNS container** (``ns-localchat``)
Singleton container running PowerDNS.
Created automatically when any relay is started.
.. _lxc-ssh-config:
SSH configuration
-----------------
``cmdeploy lxc-start`` generates ``lxconfigs/ssh-config``,
a standard OpenSSH config file mapping every container name,
its domain, and a short alias to the container's IP address::
Host test0-localchat _test0.localchat _test0
Hostname 10.204.0.42
User root
IdentityFile /path/to/relay/lxconfigs/id_localchat
IdentitiesOnly yes
StrictHostKeyChecking accept-new
UserKnownHostsFile /dev/null
LogLevel ERROR
All ``cmdeploy`` commands (``run``, ``dns``, ``status``, ``test``)
accept ``--ssh-config lxconfigs/ssh-config`` to use this file.
``lxc-test`` passes it automatically.
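The shape of one generated entry can be sketched as a tiny formatter (a hypothetical helper for illustration; the real generator lives in the lxc code):

```python
def ssh_config_entry(name, domain, alias, ip, identity_file):
    # one Host block as shown above; the options are fixed for all containers
    lines = [
        f"Host {name} {domain} {alias}",
        f"    Hostname {ip}",
        "    User root",
        f"    IdentityFile {identity_file}",
        "    IdentitiesOnly yes",
        "    StrictHostKeyChecking accept-new",
        "    UserKnownHostsFile /dev/null",
        "    LogLevel ERROR",
    ]
    return "\n".join(lines)

entry = ssh_config_entry("test0-localchat", "_test0.localchat", "_test0",
                         "10.204.0.42", "/path/to/relay/lxconfigs/id_localchat")
print(entry.splitlines()[0])  # Host test0-localchat _test0.localchat _test0
```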
**Using containers from the host shell:**
To make ``ssh _test0`` work from any terminal, add one line to ``~/.ssh/config``::
Include /absolute/path/to/relay/lxconfigs/ssh-config
.. _lxc-dns-setup:
.. _localchat-tld:
``.localchat`` DNS and name resolution
---------------------------------------
All LXC-managed chatmail domains use the ``.localchat`` pseudo-TLD
(e.g. ``_test0.localchat``, ``_test1.localchat``),
a non-delegated suffix that exists only within the local PowerDNS infrastructure.
A dedicated DNS container (``ns-localchat``)
is created so that local test relays interact
with DNS similarly to a regular public Internet setup.
On first start, ``cmdeploy lxc-start`` creates this container
running two `PowerDNS <https://www.powerdns.com/>`_ services:
* **pdns-server** (authoritative) serves ``.localchat``
zones from a local SQLite database.
* **pdns-recursor** (recursive) listens on the Incus
bridge so all containers can use it.
It forwards ``.localchat`` queries to the local
authoritative server and everything else to Quad9 (``9.9.9.9``).
After the DNS container is up, ``lxc-start`` configures the Incus bridge
to advertise its IP via DHCP and disables Incus's own DNS.
DNS records are then created in two phases matching the "cmdeploy run" deployment flow:
1. **``lxc-start``** resets each relay zone to
**SOA, NS, and A** records (plus **AAAA** for dual-stack containers).
If host DNS resolution is configured, users can
afterwards run ``cmdeploy run --config lxconfigs/chatmail-test0.ini
--ssh-config lxconfigs/ssh-config --ssh-host _test0.localchat``.
LXC subcommands do not depend on host DNS resolution
and resolve addresses via ``lxconfigs/ssh-config``.
2. **``cmdeploy dns --zonefile``** generates a standard
BIND-format zone file (MX, TXT/SPF, TXT/DMARC,
TXT/MTA-STS, SRV, CNAME, DKIM) and loads it
into PowerDNS.
This two-phase approach prevents premature configuration of mail records
before the relay is actually deployed and running.
Once ``cmdeploy run`` deploys `Unbound <https://nlnetlabs.nl/projects/unbound/>`_
inside a relay container, Unbound has a configuration plugin snippet
that forwards all ``.localchat`` queries to the PowerDNS recursor,
and lets all other queries go through normal recursive resolution.
State outside the repository
-----------------------------
All configuration generated by lxc subcommands lives in ``lxconfigs/``
(git-ignored), including the SSH key pair (``id_localchat``),
per-container ``chatmail-*.ini`` files, zone files, and ``ssh-config``.
The only state *outside* the repository is the Incus containers and images themselves
(managed via the ``incus`` CLI, labelled with ``user.localchat-managed=true``).
Several cached images are published to the local Incus image store:
* ``localchat-base``: Debian 12 with openssh-server and Python
(built on first run)
* ``localchat-test0``, ``localchat-test1``: per-relay snapshots
published after the first successful ``cmdeploy run``.
Subsequent containers launch from these images
so the deploy step is mostly a no-op.
Relay containers are limited to **500 MiB RAM**
and the DNS container to **256 MiB**.
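The image preference order described above can be sketched with an illustrative helper (hypothetical; not the actual CLI code):

```python
def choose_image(relay_name, published_images):
    # prefer the per-relay snapshot; fall back to the shared base image
    cached = f"localchat-{relay_name}"
    return cached if cached in published_images else "localchat-base"

print(choose_image("test0", {"localchat-base", "localchat-test0"}))  # localchat-test0
print(choose_image("test1", {"localchat-base"}))                     # localchat-base
```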
.. _lxc-tls:
TLS handling and underscore domains
------------------------------------
Container domains start with ``_`` (e.g. ``_test0.localchat``).
As described in :doc:`getting_started` ("Running a relay with self-signed certificates"),
underscore domains automatically use self-signed TLS
and ``smtp_tls_security_level = encrypt``.
This permits cross-relay federation between LXC containers
without any external certificate authority.
Delta Chat clients connecting to these relays
must be configured with
``certificateChecks = acceptInvalidCertificates``
(the test fixtures handle this automatically).
`PR #7926 on chatmail-core <https://github.com/chatmail/core/pull/7926>`_
is meant to make this special setting unnecessary for chatmail clients
that are connecting to underscore domains.
Known limitations
------------------
The LXC environment differs from a production
deployment in several ways:
**No ACME / Let's Encrypt**:
Self-signed TLS only (see :ref:`lxc-tls`);
ACME code paths are never exercised locally.
**No inbound connections from the internet**:
Containers sit on a private Incus bridge and are not port-forwarded.
Only the host and other containers on the same bridge can reach them.
**Local federation only**:
Cross-relay mail delivery (e.g. test0 → test1) works between containers on the same host,
but these relays are invisible to any external mail server.
**DNS is local only**:
The ``.localchat`` pseudo-TLD is not resolvable from the wider internet
(see :ref:`lxc-dns-setup`).
**IPv6 is ULA-only**:
Containers receive IPv6 addresses from the ``fd42:...`` ULA range on the Incus bridge.
These are not globally routable, but are sufficient for testing IPv6 service binding
(Postfix, Dovecot, Nginx) and DNS AAAA records inside the local environment.
test1 runs with ``disable_ipv6 = True`` to exercise the IPv4-only deployment path.

View File

@@ -102,12 +102,8 @@ short overview of ``chatmaild`` services:
Apple/Google/Huawei.
- `chatmail-expire <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/expire.py>`_
deletes old messages, large messages, and entire mailboxes
of users who have not logged in for longer than
``delete_inactive_users_after`` days.
- ``chatmail-quota-expire`` is called by Dovecot's ``quota_warning`` mechanism
and will automatically remove oldest messages to keep mailboxes well under ``max_mailbox_size``.
deletes users if they have not logged in for a longer while.
The timeframe can be configured in ``chatmail.ini``.
- `lastlogin <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/lastlogin.py>`_
is contacted by Dovecot when a user logs in and stores the date of
@@ -153,7 +149,6 @@ Chatmail relay dependency diagram
autoconfig.xml --- dovecot;
postfix --- |10080|filtermail-outgoing;
postfix --- |10081|filtermail-incoming;
postfix --- |10083|filtermail-transport;
filtermail-outgoing --- |10025 reinject|postfix;
filtermail-incoming --- |10026 reinject|postfix;
dovecot --- |doveauth.socket|doveauth;
@@ -161,8 +156,6 @@ Chatmail relay dependency diagram
/home/vmail/.../user"];
dovecot --- |lastlogin.socket|lastlogin;
dovecot --- chatmail-metadata;
dovecot --- |quota-warning|chatmail-quota-expire;
chatmail-quota-expire --- maildir;
lastlogin --- maildir;
doveauth --- maildir;
chatmail-expire-daily --- maildir;
@@ -296,7 +289,9 @@ ensured by ``filtermail`` proxy.
TLS requirements
~~~~~~~~~~~~~~~~
Filtermail (used for delivery) requires a valid TLS certificate.
Postfix is configured to require valid TLS by setting
`smtp_tls_security_level <https://www.postfix.org/postconf.5.html#smtp_tls_security_level>`_
to ``verify``.
You can test it by resolving ``MX`` records of your relay domain and
then connecting to MX relays (e.g ``mx.example.org``) with

View File

@@ -1,64 +0,0 @@
Configuring reverse DNS
=======================
Some email servers reject emails
that don't pass the `FCrDNS`_ check, also known as the `iprev`_ check.
.. _FCrDNS: https://en.wikipedia.org/wiki/Forward-confirmed_reverse_DNS
.. _iprev: https://datatracker.ietf.org/doc/html/rfc8601#section-3
Passing the check requires that the IP address that email is sent from
has a ``PTR`` record pointing to the domain name of the server,
and that the domain name has an ``A/AAAA`` record
pointing back to the IP address.
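The two lookups can be modeled with hypothetical lookup tables standing in for the reverse and forward zones (addresses are documentation examples):

```python
# minimal model of the FCrDNS (iprev) check, with dicts standing in
# for real PTR and A record lookups
PTR = {"198.51.100.7": "mx.example.org"}    # reverse zone: IP -> name
A = {"mx.example.org": ["198.51.100.7"]}    # forward zone: name -> IPs

def fcrdns_ok(ip):
    name = PTR.get(ip)                                  # reverse lookup
    return name is not None and ip in A.get(name, [])   # forward confirm

print(fcrdns_ok("198.51.100.7"))  # True
print(fcrdns_ok("203.0.113.9"))   # False
```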
Modern email relies on DKIM and SPF for authentication,
while the iprev check exists for
`historical reasons <https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-reverse-mapping-considerations-06#section-2.1>`_.
Chatmail relays don't resolve ``PTR`` records,
so you can ignore this section if configuring ``PTR`` records
is difficult and federation with legacy email servers that don't accept
a valid DKIM signature for authentication is not important.
Multi-homed setups
------------------
If you have a server with multiple IP addresses,
also known as multi-homed setup,
and don't publish all IP addresses in DNS,
you need to make sure you are using
the published address when making outgoing connections.
For example, your server may have a static IP
address, and a so-called Floating IP or Virtual IP
that can be moved between servers in case of
migration or for failover.
By using Floating IP you can avoid downtime
and keep the IP address reputation
for destinations that rely on IP reputation and IP blocklists.
In this case you will only publish
the Floating IP to DNS and only use the static IP
to SSH into the server.
If you have such a setup, make sure that
you not only set ``PTR`` records for the Floating IP,
but also make outgoing connections from the Floating IP.
Otherwise the reverse DNS check will succeed,
but the forward check, which makes sure your domain name
points to the IP address, will fail.
Such a setup is indistinguishable from someone
setting an IP address ``PTR`` to a domain they don't own,
and therefore does not pass the check.
On Linux you can configure source IP address with ``ip route`` command,
for example:
::
ip route change default via <default-gateway> dev eth0 src <source-address>
Make sure to persist the change after verifying it is working.
You can check what your outgoing IP address is
with ``curl icanhazip.com``.
Check both the IPv4 and IPv6 addresses.
For the IPv4 address use ``curl ipv4.icanhazip.com`` or ``curl -4 icanhazip.com``,
and similarly for IPv6 if you have it.