Compare commits

...

68 Commits

Author SHA1 Message Date
holger krekel
fb80f23cfd feat: Automatic per-user quota-preservation.
Replace daily timer-based message expire script
with Dovecot quota-warning-triggered cleanup.
When a user reaches 90% of their mailbox quota,
Dovecot calls the new script, which removes the largest and oldest messages
until usage drops below 80%.

The daily `chatmail-expire` service now only
handles deletion of inactive user mailboxes.
2026-04-18 02:04:36 +02:00
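The selection policy described above (remove the largest and oldest messages until usage drops below 80% of quota) can be sketched in Python. The function name and the message-tuple shape are illustrative, not the actual `chatmail-quota-expire` code:

```python
def select_for_deletion(messages, quota, target=0.8):
    """Pick messages to delete until usage drops below target * quota.

    messages: iterable of (path, size_bytes, mtime) tuples (assumed shape).
    Largest messages are removed first; ties are broken by age (oldest first).
    Returns the paths selected for deletion.
    """
    usage = sum(size for _, size, _ in messages)
    doomed = []
    # sort: biggest size first, then oldest mtime first
    for path, size, mtime in sorted(messages, key=lambda m: (-m[1], m[2])):
        if usage <= target * quota:
            break
        doomed.append(path)
        usage -= size
    return doomed
```

The real script would then unlink the selected maildir files and let Dovecot recalculate quota.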
link2xt
0aa08b7413 feat(dovecot): disable fsync for LMTP and IMAP services
This is aimed at reducing SSD wear.
SSDs wear out because of writes,
according to <https://superuser.com/a/440219/1777696>,
so anything that reduces writes should help.

For online users, the Maildir format we use
first stores a message in new/,
then moves it to cur/, and may even delete it
immediately for users with a single device
or for bots. Syncing all these changes to disk
is unnecessary and wears out SSDs.
2026-04-17 19:23:28 +00:00
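In Dovecot this is a one-line setting per service; a sketch of what such a change likely looks like in the configuration (the exact file placement is an assumption, the diff itself is not shown here):

```
# e.g. in conf.d/10-mail.conf or per-protocol blocks (placement assumed)
protocol lmtp {
  mail_fsync = never
}
protocol imap {
  mail_fsync = never
}
```

`mail_fsync` accepts `optimized` (the default), `always`, and `never`; `never` skips fsync/fdatasync on mail writes entirely.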
holger krekel
14dfabf2ff generate compliant IP-address email addresses 2026-04-17 14:40:52 +02:00
holger krekel
0a77b3339b ci: ensure consistent checkout and fix cross-relay test typo 2026-04-17 14:40:52 +02:00
holger krekel
001d8c80fc feat: re-use cmlxc workflow from chatmail/cmlxc to perform testing 2026-04-17 14:40:52 +02:00
j4n
1e376f7945 fix(cmdeploy): explicitly install resolvconf
Since ff541b8 introduced APT::Install-Recommends "false", we need to
explicitly install resolvconf. Fixes DNS breakage: apt.upgrade with
auto_remove=True purged resolvconf as an orphan, removing the
'nameserver 127.0.0.1' entry in /etc/resolv.conf that pointed to the
local unbound. As a consequence, DNS resolution broke and
filtermail-incoming exited because it could not find any resolvers.
2026-04-17 10:08:39 +02:00
j4n
1ae92e0639 fix(cmdeploy/dovecot): detect stale dovecot binary and force restart in activate()
When a previous deploy installed dovecot packages but the restart was
blocked (policy-rc.d) or the deploy aborted before activate(), the next
deploy sees the correct package version already installed and skips
restart. Extend activate() to check /proc/MainPID/exe for "(deleted)"
before the restart decision.
2026-04-16 15:29:04 +02:00
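The stale-binary check relies on Linux appending " (deleted)" to the `/proc/PID/exe` symlink when the inode backing a running executable has been unlinked, e.g. by a package upgrade. A minimal sketch (the real activate() logic in cmdeploy may differ):

```python
import os


def binary_is_stale(pid: int) -> bool:
    """Return True if the process runs an executable replaced on disk.

    The kernel marks /proc/PID/exe with a trailing " (deleted)" when the
    file backing the running binary no longer exists at that path.
    """
    try:
        return os.readlink(f"/proc/{pid}/exe").endswith(" (deleted)")
    except OSError:
        # process gone or not readable: treat as not stale
        return False
```

A deployer can call this on dovecot's MainPID before deciding whether a restart is needed even though the package version already matches.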
Jagoda Estera Ślązak
56386c231b refactor: Rename filtermail_http_port to filtermail_http_port_incoming (#921)
Since http port will be used for MTA-to-MTA,
it should be suffixed with "incoming" for consistency.

This will also make it clearer if we decide to
introduce client-relay http channel in the future.

Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-04-16 14:37:00 +02:00
j4n
2bdfecff72 cmdeploy: consolidate container detection into is_in_container() helper 2026-04-15 16:33:52 +02:00
j4n
cef739e3b3 cmdeploy/sshexec: remove dead @docker SSH host
@docker is no longer needed because we use @local inside the container now.
2026-04-15 16:33:52 +02:00
j4n
3d128d3c64 test: add dovecot deployer checks
Offline tests (test_dovecot_deployer.py, 5 tests):
- skips_epoch_matched_install: core epoch bug regression
- uses_archive_version_for_url_and_filename: epoch must not leak into URLs
- skips_dpkg_path_when_epoch_matched: end-to-end no-op deploy path
- unsupported_arch_falls_back_to_apt: integrated apt fallback with
  mixed changed results to verify |= accumulation
- pick_url_falls_back_on_primary_error: URL failover

Online test (test_1_basic.py):
- dovecot_main_process_matches_installed_binary: stale-binary
  regression guard: checks /proc/PID/exe is not deleted and
  status text matches dovecot --version
2026-04-15 15:46:03 +02:00
j4n
79f68342f4 fix: dovecot epoch version and stale-binary handling
Restart dovecot after package replacement even when `policy-rc.d` blocks
package-triggered restarts, and avoid reinstalling already-correct packages.

Adds proper version separation for dovecot packages:
- Split DOVECOT_VERSION into DOVECOT_ARCHIVE_VERSION (for URLs/filenames)
  and DOVECOT_PACKAGE_VERSION (epoch-prefixed for dpkg matching).
- Update _download_dovecot_package() to return (path, changed) tuple
  so install() can track whether packages triggered restart intent.
- Use self.need_restart |= changed consistently throughout deployer.
- Move self.need_restart = True inside `if debs:` block -- previously
  the apt pin file write unconditionally forced a restart every deploy.
- Comment on dpkg retry pattern (first dpkg may fail on missing deps,
  apt-get --fix-broken resolves, then dpkg retries).

Authored-by: Alex V. <119082209+Retengart@users.noreply.github.com>

fixup
2026-04-15 15:46:03 +02:00
Alexandre Gauthier
54863453c2 fix(cmdeploy): Set permissions on dovecot pin
Ensure the preferences.d snippet that pins dovecot packages to block
Debian dist-upgrades is owned by root:root and has 644 permissions.

Files in this directory are generally expected to be world-readable so that unprivileged operations, such as apt-get in simulation mode, keep working. Making them non-world-readable breaks such usages.
2026-04-10 15:52:49 +02:00
Jagoda Estera Ślązak
74326a8c54 feat(nginx): Route /mxdeliv/ to configurable port (#901) 2026-04-08 19:11:11 +02:00
holger krekel
59e5dea597 fix: make "cmdeploy test --config ..." work, without requiring or implicitly falling back to a "chatmail.ini" in parent dirs 2026-04-08 19:05:51 +02:00
holger krekel
d7d89d66c1 fix: properly terminate and wait on subprocesses on teardown 2026-04-08 19:05:51 +02:00
holger krekel
00d723bd6e refactor: deployer improvements (VM detection, mailboxes dir ensured to be there, proper unbound on ipv4) 2026-04-08 19:05:51 +02:00
holger krekel
c257bfca4b feat: update chatmail-turn to support private addresses 2026-04-08 19:05:51 +02:00
holger krekel
82c9831369 refactor: unify DNS zone-file to standard BIND format 2026-04-08 19:05:51 +02:00
Jagoda Estera Ślązak
b835318ce9 chore(deps): Upgrade to filtermail 0.6.1 (#910)
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-04-07 12:48:40 +02:00
j4n
b4a46d23e6 fix(cmdeploy): pin dovecot packages to prevent apt upgrades
As our .deb packages use Debian's version naming scheme, deploy an apt
preferences file that sets Pin-Priority: -1 for all dovecot-* packages
for every version of dovecot-* from every origin.
2026-03-31 17:12:30 +02:00
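Such a pin can be expressed with a single apt preferences stanza; the file path is illustrative:

```
# /etc/apt/preferences.d/chatmail-dovecot (path assumed)
Package: dovecot-*
Pin: version *
Pin-Priority: -1
```

A Pin-Priority below 0 prevents the matched packages from ever being installed or upgraded from an archive, while leaving locally installed .deb files untouched.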
DarkCat09
c6d9d27a84 fix(deps): add rpc server to cmdeploy along with client 2026-03-29 16:02:24 +00:00
DarkCat09
4521f03c99 fix: remove duplicate deps from cmdeploy 2026-03-29 13:52:08 +00:00
DarkCat09
c78859aec6 fix(deps): add aiosmtpd to testenv 2026-03-29 13:52:08 +00:00
DarkCat09
98bd5944cc chore(deps): remove unused deps from chatmaild 2026-03-29 13:52:08 +00:00
link2xt
e8933c455f fix: set default smtp_tls_security_level to "verify" unconditionally
This change was accidentally added in cf96be2cbb.
A relay should not stop validating TLS certificates of other relays
just because it itself has a self-signed or externally managed certificate.
An externally managed certificate is likely even valid.
2026-03-23 19:52:49 +00:00
link2xt
d3a483c403 feat(postfix): prefer IPv4 in SMTP client 2026-03-22 21:05:02 +00:00
j4n
e687120d96 fix(cmdeploy): Install dovecot .deb packages atomically
Since change 635ac7 we try to install Dovecot even if it is already
running. Dovecot upgrades then fail when the installed version differs
from the target, because dovecot-imapd/-lmtpd depend on dovecot-core
and packages are installed one at a time via apt.deb(),
i.e. `dpkg -i`, so dpkg cannot satisfy the dependencies:
```
  dpkg: dependency problems prevent configuration of dovecot-imapd:
    dovecot-imapd depends on dovecot-core (= 1:2.3.21+dfsg1-3); however:
      Version of dovecot-core on system is 1:2.3.21.1+dfsg1-1~bpo12+1.
```

Split _install_dovecot_package into _download_dovecot_package (download
only, return path) and a single server.shell call that passes all .deb
files to dpkg -i together. Uses the same 3-step pattern as pyinfra's
apt.deb: tolerant first dpkg -i, apt-get --fix-broken, then final
dpkg -i to fail if there are still errors.
2026-03-21 16:17:37 +01:00
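The three-step pattern described above can be sketched as a small shell function (names are illustrative; pyinfra's apt.deb implements the same idea):

```shell
# Install a set of interdependent .deb files together:
install_debs() {
    # 1. tolerant first pass: may fail while dependencies are still missing
    dpkg -i "$@" || true
    # 2. let apt resolve anything the first pass left broken
    apt-get --yes --fix-broken install
    # 3. final pass must succeed, otherwise surface the real error
    dpkg -i "$@"
}
# usage (paths illustrative):
# install_debs /tmp/dovecot-core.deb /tmp/dovecot-imapd.deb /tmp/dovecot-lmtpd.deb
```

Passing all .deb files to a single `dpkg -i` lets dpkg order the configuration of dovecot-core before dovecot-imapd/-lmtpd, avoiding the dependency error quoted above.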
373[Ø]™
7409bd3452 Merge pull request #898 from chatmail/373/decom-cron
chore(cmdeploy): stop installing cron package
2026-03-19 10:55:36 +00:00
ccclxxiii
1a34172487 chore(cmdeploy): stop installing cron package 2026-03-18 20:35:27 +00:00
j4n
38246ca8ea feat(cmdeploy): Add blocked_service_startup() context manager
Prevent services from auto-starting during package installation by
installing a policy-rc.d that exits 101. This avoids dovecot startup
failures when no TLS cert exists yet (e.g. acmetool failed on first run).

Picked out of 62fe113b from hpk/lxcdeploy branch.
2026-03-17 14:28:11 +01:00
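The mechanism is standard Debian policy: invoke-rc.d consults /usr/sbin/policy-rc.d during package installation, and exit code 101 means "action forbidden". A sketch of what such a context manager might look like (the actual cmdeploy helper may differ):

```python
import os
from contextlib import contextmanager

# invoke-rc.d consults this path during package installs
POLICY_RC_D = "/usr/sbin/policy-rc.d"


@contextmanager
def blocked_service_startup(path=POLICY_RC_D):
    """Temporarily forbid service starts triggered by package maintainer
    scripts: exit code 101 tells invoke-rc.d the action is not allowed."""
    with open(path, "w") as f:
        f.write("#!/bin/sh\nexit 101\n")
    os.chmod(path, 0o755)
    try:
        yield
    finally:
        os.remove(path)
```

Wrapping the dovecot package installation in this context avoids startup failures when no TLS certificate exists yet.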
j4n
2635ac7e6d fix(cmdeploy): Rewrite dovecot install logic, update
The old code did not install updates when the service was running; check
installed version instead of systemd status. Also, rewrite install logic
to extract dovecot version and hashes as module-level constants.
Use blocked_service_startup from lxcdeploy branch as it solves our
problem here too.
2026-03-17 14:28:11 +01:00
holger krekel
4fabfb31f8 fix test and some linting fixes 2026-03-16 13:25:57 +01:00
Jagoda Ślązak
36478dbfcf feat(filtermail): Disable IP verification on domain-literal addresses
Disables IP verification by upgrading filtermail to v0.6,
changelog: <https://github.com/chatmail/filtermail/releases/tag/v0.6.0>

Messages using domain-literal addresses are no longer
required to match the origin SMTP connection IP.

This allows for example a relay using IPv4 email addresses
to send messages to other relays over IPv6.

This is not considered a breaking change, as IP-address-only
relays are not a stable feature.

Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-03-13 20:47:10 +01:00
holger krekel
ff541b81ea chore: prevent installing recommended packages (e.g. without this setting, installing cron would also pull in exim). 2026-03-08 23:40:16 +01:00
Alex V.
ed9b4092a8 test: add error-path tests for all bug fixes
- test_doveauth: invalid localpart chars rejected, concurrent same-account creation
- test_expire: --mdir filtering uses msg.path correctly
- test_metadata: TURN exception returns N\n, success returns credentials
- test_turnserver: socket timeout, connection refused, happy path
- test_dns: get_dkim_entry returns (None, None) on CalledProcessError
- test_rshell: dovecot_recalc_quota handles empty/malformed output
2026-03-05 16:27:15 +01:00
Alex V.
1b8ad3ca12 fix: handle turn_credentials exceptions in metadata proxy
ConnectionRefusedError/FileNotFoundError/TimeoutError from
turn_credentials() would kill the dict proxy connection.
Return N (not found) response instead and log the error.
2026-03-05 16:27:15 +01:00
Alex V.
f85d304e65 fix: add 5s timeout to TURN credential socket
Hung TURN daemon would block dict proxy handler thread indefinitely.
Per Python docs, settimeout() raises TimeoutError on expiry.
2026-03-05 16:27:15 +01:00
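The fix amounts to one settimeout() call on the socket talking to the TURN daemon. A sketch, where the socket path, wire format, and function name are assumptions:

```python
import socket


def fetch_turn_credentials(sock_path, timeout=5.0):
    """Query the TURN daemon over a UNIX socket with a hard timeout.

    settimeout() makes connect()/recv() raise a timeout error instead of
    blocking the dict proxy handler thread indefinitely.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect(sock_path)
        return s.recv(4096)
```

The caller (see the metadata-proxy fix above in this log) catches the timeout alongside ConnectionRefusedError and FileNotFoundError and returns an "N" (not found) response.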
Alex V.
4d1856d8f1 fix(security): validate localpart chars and fix account creation race
- Reject localparts with chars outside [a-z0-9._-] to prevent
  filesystem issues from crafted usernames via IMAP/SMTP auth
- Use filelock to serialize concurrent account creation for same
  address, preventing TOCTOU race where two threads both create
  an account and last writer wins
2026-03-05 16:27:15 +01:00
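The localpart check described above can be sketched as a strict allow-list regex (charset taken from the commit message; the function name is illustrative):

```python
import re

# only lowercase letters, digits, dot, underscore, hyphen are allowed
_LOCALPART_RE = re.compile(r"[a-z0-9._-]+")


def is_valid_localpart(localpart: str) -> bool:
    """Reject anything that could smuggle path separators or other
    unexpected characters into the maildir layout (e.g. "../")."""
    return _LOCALPART_RE.fullmatch(localpart) is not None
```

The TOCTOU half of the fix would then wrap account creation for a given address in a per-address filelock so only one of two concurrent creators wins.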
Alex V.
ae2ab52aa9 fix(security): remove deprecated TLS 1.0/1.1 from nginx config
TLS 1.0/1.1 deprecated by RFC 8996. Nginx default is TLSv1.2 TLSv1.3.
Aligns with postfix (>=TLSv1.2) and dovecot (TLSv1.3) in the same stack.
2026-03-05 16:27:15 +01:00
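In nginx terms the fix amounts to a single directive, which also matches nginx's own modern default (shown here as a sketch, not the exact config line from the repo):

```nginx
ssl_protocols TLSv1.2 TLSv1.3;   # TLSv1 / TLSv1.1 removed per RFC 8996
```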
Alex V.
d0c396538b fix(security): use secrets.choice instead of random.choices for username
Per Python docs, secrets module should be used for security-sensitive
data. random.choices uses Mersenne Twister PRNG which is predictable.
secrets.choice was already used for password generation in the same file.
2026-03-05 16:27:15 +01:00
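The difference is one module swap; a sketch of the pattern (alphabet and length are illustrative, not the repo's actual values):

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits


def random_username(length=9):
    # secrets.choice draws from the OS CSPRNG; random.choices uses the
    # predictable Mersenne Twister and is unsuitable for account names
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```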
Alex V.
78a4e28408 fix: guard against IndexError in dovecot_recalc_quota
doveadm output ends with empty line, parts=[] causes parts[2] crash.
2026-03-05 16:27:15 +01:00
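The guard is simply a length check before indexing; a sketch, with the doveadm output shape assumed rather than quoted from the repo:

```python
def third_fields(doveadm_output: str):
    """Yield the third whitespace-separated field of each output line.

    Skips blank or short lines: doveadm output ends with an empty line,
    for which line.split() yields [] and parts[2] would raise IndexError.
    """
    for line in doveadm_output.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue
        yield parts[2]
```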
Alex V.
2432d4f498 fix: remove dead code and potential NameError in run_cmd
check_call always returns 0 or raises, making retcode!=0 branches
unreachable. Also remote_data was undefined with --skip-dns-check.
2026-03-05 16:27:15 +01:00
Alex V.
31301abb42 fix: handle build_webpages returning None in WebsiteDeployer
Exception in _build_webpages was silently caught, returning None.
rsync then received "None/" as source path, silently breaking deploy.
2026-03-05 16:27:15 +01:00
Alex V.
6b4edd8502 fix: return tuple from get_dkim_entry on CalledProcessError
Bare return yielded None, causing TypeError on tuple unpacking
in perform_initial_checks on fresh servers without DKIM keys.
2026-03-05 16:27:15 +01:00
Alex V.
9c467ab3e8 chore: fix ruff formatting in acmetool, dovecot, postfix deployers 2026-03-05 16:27:15 +01:00
link2xt
774350778b feat: remove /metrics from the website
Similar data is already generated by fsreport
available for the relay operator
and metrics for prometheus are generated by mtail.

Closes <https://github.com/chatmail/relay/issues/431>
2026-03-05 14:58:11 +01:00
j4n
06d53503e5 feat(chatmaild/fsreport): add Prometheus textfile output, count files
- Count files in report
- Extend size buckets to bigger messages (5, 10 MiB)
- Two textfile exporters:
  - Full, bucketed size statistics with --textfile option
  - Account count only matching metrics.py format with --legacy-metrics
    option (filename defaults to /var/www/html/metrics)
- Improve option help texts
2026-03-05 13:52:09 +01:00
Alex V.
b128935940 fix: use msg.path instead of nonexistent msg.relpath in fsreport
FileEntry namedtuple has (path, mtime, size), not relpath.
Crashes with AttributeError when --mdir flag is used.
2026-03-05 13:52:09 +01:00
missytake
2e38c61ca2 opendkim: chown opendkim: private key 2026-03-05 11:24:06 +01:00
missytake
9dd8ce8ce1 tests: make sure chatmail-metadata was started
Fix a flaky test: https://github.com/chatmail/relay/pull/856#issuecomment-3919881473
Since #856 chatmail-metadata is restarted every 5 seconds; if it didn't come up after that, the failure likely sits deeper.
2026-03-04 18:53:31 +01:00
j4n
0ae3f94ecc fix(cmdeploy): dovecot update url 2026-03-04 17:19:14 +01:00
Jagoda Ślązak
4481a12369 chore(deps): upgrade to filtermail v0.5.2
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-03-04 15:53:50 +01:00
373[Ø]™
a47016e9f2 Merge pull request #875 from chatmail/dovecot-github
fix(dovecot): download dovecot packages from github release
2026-03-03 16:03:21 +00:00
j4n
4e6ba7378d feat(cmdeploy): fall back to github url for dovecot 2026-03-02 10:29:03 +01:00
j4n
e428c646d1 fix(dovecot): download dovecot packages from github release 2026-02-26 21:06:55 +01:00
Jagoda Estera Ślązak
dbd5cd16f5 feat: replace DKIM verification with filtermail v0.5 (#831)
Upgrade to filtermail v0.5, which has a built-in DKIM verifier
and disable OpenDKIM on reinject_incoming.

Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-02-25 12:39:33 +01:00
holger krekel
e21f2a0fa2 feat: support externally managed TLS via tls_external_cert_and_key option (#860)
Adds a new tls_external_cert_and_key config option for chatmail servers
that manage their own TLS certificates (e.g. via an external ACME client
or a load balancer).

A systemd path unit (tls-cert-reload.path) watches the certificate file
via inotify and automatically reloads dovecot and nginx when it changes.
Postfix reads certs per TLS handshake so needs no reload.

Also extracts openssl_selfsigned_args() so cert generation parameters
are shared between SelfSignedTlsDeployer and the e2e test.
2026-02-24 09:46:38 +01:00
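A path unit of the kind described typically looks like this (unit body and certificate path are illustrative, not the exact file shipped by cmdeploy):

```ini
# tls-cert-reload.path (systemd watches the file via inotify)
[Unit]
Description=Watch externally managed TLS certificate for changes

[Path]
# actual path comes from the tls_external_cert_and_key option
PathChanged=/etc/chatmail/tls/cert.pem

[Install]
WantedBy=multi-user.target
```

When the watched file changes, systemd starts the matching tls-cert-reload.service, which would run something like `systemctl reload dovecot nginx`; postfix needs no reload because it reads certs per TLS handshake.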
holger krekel
8ca0909fa5 cleanup: remove CFFI deltachat bindings usage, and consolidate test support with rpc-bindings (#872)
* cleanup: remove CFFI deltachat bindings usage, and consolidate test support with rpc-bindings

major simplification: all chatmail fixtures used in the test are now created inside the cmdeploy plugin,
and do not inherit anything from other fixture machineries, let alone the legacy deltachat CFFI ones.
also fix that pytest report headers show correct chatmail domains under test
2026-02-24 08:27:56 +01:00
j4n
2c99cc84aa cmdeploy: prepare chatmaild/cmdeploy changes for Docker support
- chatmaild:
  - basedeploy.py: Add has_systemd() guard. During Docker image builds
    there's no running systemd, so deployers that query SystemdEnabled
    facts would crash; this change might also be helpful for non-systemd
    platforms.
- cmdeploy:
  - cmdeploy.py:
    - when deploying to @docker, auto-set CHATMAIL_NOPORTCHECK and
      CHATMAIL_NOSYSCTL since neither makes sense inside a container
    - --config default now reads CHATMAIL_INI env var, so Docker
      entrypoints can point to a mounted ini without CLI flags.
  - deployers.py:
    - skip port check / CHATMAIL_NOPORTCHECK
    - skip echobot systemd cleanup w/ has_systemd
  - dovecot/deployer.py:
    - Guard sysctl writes behind CHATMAIL_NOSYSCTL
    - invert dovecot install check so it works without systemd
  - sshexec.py: Add __call__ to LocalExec so cmdeploy status works with
    @local target. Without it, cmdeploy status tried to call the
    executor directly and got TypeError.

Consolidated from j4n/docker branch commits (selection):
- 8953fde feat(cmdeploy): read CHATMAIL_INI env var for default --config path
- 81d7782 fix(cmdeploy): add __call__ to LocalExec so status works with @local
- 8bba78e docker: disable port check if docker is running. fix #694
- 865b514 docker: replace config flags with env vars, drop docker param (instead of f26cb08)

Files: cmdeploy/src/cmdeploy/{basedeploy,cmdeploy,deployers,sshexec,dovecot/deployer}.py

Co-authored-by: Keonik1 <keonik.dev@gmail.com>
Co-authored-by: missytake <missytake@systemli.org>
2026-02-23 09:12:48 +01:00
373[Ø]™
73309778c2 Merge pull request #867 from chatmail/373/benchmark-filtermail-refinement
stabilize online benchmark timing by adding rate-limit-aware cooldown between iterations
2026-02-22 18:13:34 +00:00
373[Ø]™
50ecc2b315 Merge pull request #868 from chatmail/hpk/simplify-cooldown
refactor(benchmark): move rate-limit cooldown to benchmark fixture
2026-02-22 18:05:19 +00:00
holger krekel
7b5b180b4b refactor(benchmark): move rate-limit cooldown to benchmark fixture 2026-02-22 18:26:15 +01:00
373[Ø]™
193624e522 fix(benchmark): add rate-limit refill cooldown for send_10_receive_10 and avoid fixture signature mismatch 2026-02-22 15:58:21 +00:00
373[Ø]™
437287fadc feat(tests): add optional benchmark cooldown between iterations 2026-02-22 15:55:03 +00:00
link2xt
0ad679997a feat: reconfigure acmetool from redirector to proxy mode
This eliminates the problem of acmetool failing
to start when nginx is already installed and using port 80.

This also makes nginx redirect HTTP requests to HTTPS
for setups that don't have acmetool.
2026-02-21 22:10:20 +00:00
missytake
38cc1c7cd6 fix(cmdeploy): make tests work with --ssh-host localhost (#856)
* tests: fix test_remote[imap]
* cmdeploy: call LocalExec directly, not .logged()
* tests: fix TestSSHExecutor.test_logged
* tests: fix test_status_cmd with --ssh-host @local
* tests: fix test_logged with --ssh-host localhost
* tests: fix TestSSHExecutor::test_exception with --ssh-host localhost
* ci: deploy with --ssh-host localhost on staging-ipv4
* metadata: lower RestartSec
2026-02-19 21:34:39 +01:00
link2xt
7a6ed8340e test: mark f-string with f prefix in test_expunged
This one was accidentally left unmarked.
2026-02-19 19:41:14 +00:00
68 changed files with 2107 additions and 840 deletions


@@ -1,21 +1,32 @@
name: CI
name: Run unit-tests and container-based deploy+test verification
on:
pull_request:
# Triggers when a PR is merged into main or a direct push occurs
push:
branches: [ "main" ]
# Triggers for any PR (and its subsequent commits) targeting the main branch
pull_request:
branches: [ "main" ]
# Newest push wins: Prevents multiple runs from clashing and wasting runner efforts
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
tox:
name: isolated chatmaild tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
# Checkout pull request HEAD commit instead of merge commit
# Otherwise `test_deployed_state` will be unhappy.
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: download filtermail
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.1/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
- name: run chatmaild tests
working-directory: chatmaild
run: pipx run tox
@@ -24,7 +35,9 @@ jobs:
name: deploy-chatmail tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: initenv
run: scripts/initenv.sh
@@ -38,5 +51,23 @@ jobs:
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
# all other cmdeploy commands require a staging server
# see https://github.com/deltachat/chatmail/issues/100
lxc-test:
name: LXC deploy and test
uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.10.0
with:
cmlxc_commands: |
cmlxc init
# single cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo cm0
cmlxc -v test-mini cm0
cmlxc -v test-cmdeploy cm0
# cross cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo --ipv4-only cm1
cmlxc -v test-cmdeploy cm0 cm1
# cross cmdeploy/madmail relay tests
cmlxc -v deploy-madmail mad0
cmlxc -v test-cmdeploy cm0 mad0
cmlxc -v test-mini cm0 mad0
cmlxc -v test-mini mad0 cm0


@@ -1,95 +0,0 @@
name: deploy on staging-ipv4.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging-ipv4.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging-ipv4.testrun.org
url: https://staging-ipv4.testrun.org/
concurrency: staging-ipv4.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging-ipv4.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging-ipv4.testrun.org:/var/lib/acme acme-ipv4 || true
rsync -avz root@staging-ipv4.testrun.org:/etc/dkimkeys dkimkeys-ipv4 || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys-ipv4/dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys-ipv4 root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme-ipv4/acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme-ipv4 root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging-ipv4.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_IPV4_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging-ipv4.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme-ipv4/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys-ipv4/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging-ipv4.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging-ipv4.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown root:root -R /var/lib/acme || true
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- run: |
cmdeploy init staging-ipv4.testrun.org
sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' chatmail.ini
sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
cmdeploy run --verbose --skip-dns-check
- name: set DNS entries
run: |
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
cmdeploy dns --zonefile staging-generated.zone
cat staging-generated.zone >> .github/workflows/staging-ipv4.testrun.org-default.zone
cat .github/workflows/staging-ipv4.testrun.org-default.zone
scp .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
- name: cmdeploy dns
run: cmdeploy dns -v


@@ -1,98 +0,0 @@
name: deploy on staging2.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging2.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging2.testrun.org
url: https://staging2.testrun.org/
concurrency: staging2.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging2.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging2.testrun.org:/var/lib/acme . || true
rsync -avz root@staging2.testrun.org:/etc/dkimkeys . || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging2.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging2.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging2.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging2.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org chown root:root -R /var/lib/acme || true
- name: add hpk42 key to staging server
run: ssh root@staging2.testrun.org 'curl -s https://github.com/hpk42.keys >> .ssh/authorized_keys'
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- run: |
cmdeploy init staging2.testrun.org
sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
- run: cmdeploy run --verbose --skip-dns-check
- name: set DNS entries
run: |
ssh -o StrictHostKeyChecking=accept-new root@staging2.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
cmdeploy dns --zonefile staging-generated.zone --verbose
cat staging-generated.zone >> .github/workflows/staging.testrun.org-default.zone
cat .github/workflows/staging.testrun.org-default.zone
scp .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
- name: cmdeploy dns
run: cmdeploy dns -v

.gitignore vendored

@@ -4,7 +4,7 @@ __pycache__/
*$py.class
*.swp
*qr-*.png
chatmail.ini
chatmail*.ini
# C extensions


@@ -1,5 +1,24 @@
# Changelog for chatmail deployment
## Unreleased
### Features
- Automated per-user quota-keeping.
Replace daily timer-based message expire script
with Dovecot quota-warning-triggered cleanup (`chatmail-quota-expire`).
When a user reaches 90% of their mailbox quota,
Dovecot calls the new script, which removes the largest and oldest messages
until usage drops below 80%.
The daily `chatmail-expire` timer now only handles deletion
of inactive user mailboxes.
After upgrading, run the following once to clean up
mailboxes that are already over quota::
/usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire \
400 /home/vmail/mail/YOURDOMAIN --sweep
## 1.9.0 2025-12-18
### Documentation


@@ -6,10 +6,7 @@ build-backend = "setuptools.build_meta"
name = "chatmaild"
version = "0.3"
dependencies = [
"aiosmtpd",
"iniconfig",
"deltachat-rpc-server",
"deltachat-rpc-client",
"filelock",
"requests",
"crypt-r >= 3.13.1 ; python_version >= '3.11'",
@@ -24,8 +21,8 @@ where = ['src']
[project.scripts]
doveauth = "chatmaild.doveauth:main"
chatmail-metadata = "chatmaild.metadata:main"
chatmail-metrics = "chatmaild.metrics:main"
chatmail-expire = "chatmaild.expire:main"
chatmail-expire = "chatmaild.expire_inactive_users:main"
chatmail-quota-expire = "chatmaild.quota_expire:main"
chatmail-fsreport = "chatmaild.fsreport:main"
lastlogin = "chatmaild.lastlogin:main"
turnserver = "chatmaild.turnserver:main"
@@ -71,6 +68,7 @@ commands =
deps = pytest
pdbpp
pytest-localserver
aiosmtpd
execnet
commands = pytest -v -rsXx {posargs}
"""


@@ -25,8 +25,6 @@ class Config:
self.max_user_send_burst_size = int(params.get("max_user_send_burst_size", 10))
self.max_mailbox_size = params["max_mailbox_size"]
self.max_message_size = int(params.get("max_message_size", "31457280"))
self.delete_mails_after = params["delete_mails_after"]
self.delete_large_after = params["delete_large_after"]
self.delete_inactive_users_after = int(params["delete_inactive_users_after"])
self.username_min_length = int(params["username_min_length"])
self.username_max_length = int(params["username_max_length"])
@@ -38,6 +36,9 @@ class Config:
self.filtermail_smtp_port_incoming = int(
params.get("filtermail_smtp_port_incoming", "10081")
)
self.filtermail_http_port_incoming = int(
params.get("filtermail_http_port_incoming", "10082")
)
self.postfix_reinject_port = int(params.get("postfix_reinject_port", "10025"))
self.postfix_reinject_port_incoming = int(
params.get("postfix_reinject_port_incoming", "10026")
@@ -60,10 +61,23 @@ class Config:
self.privacy_pdo = params.get("privacy_pdo")
self.privacy_supervisor = params.get("privacy_supervisor")
# TLS certificate management: derived from the domain name.
# Domains starting with "_" use self-signed certificates
# All other domains use ACME.
if self.mail_domain.startswith("_"):
# TLS certificate management.
# If tls_external_cert_and_key is set, use externally managed certs.
# Otherwise derived from the domain name:
# - Domains starting with "_" use self-signed certificates
# - All other domains use ACME.
external = params.get("tls_external_cert_and_key", "").strip()
if external:
parts = external.split()
if len(parts) != 2:
raise ValueError(
"tls_external_cert_and_key must have two space-separated"
" paths: CERT_PATH KEY_PATH"
)
self.tls_cert_mode = "external"
self.tls_cert_path, self.tls_key_path = parts
elif self.mail_domain.startswith("_"):
self.tls_cert_mode = "self"
self.tls_cert_path = "/etc/ssl/certs/mailserver.pem"
self.tls_key_path = "/etc/ssl/private/mailserver.key"
@@ -79,6 +93,11 @@ class Config:
# old unused option (except for first migration from sqlite to maildir store)
self.passdb_path = Path(params.get("passdb_path", "/home/vmail/passdb.sqlite"))
@property
def max_mailbox_size_mb(self):
"""Return max_mailbox_size as an integer in megabytes."""
return parse_size_mb(self.max_mailbox_size)
def _getbytefile(self):
return open(self._inipath, "rb")
@@ -92,6 +111,16 @@ class Config:
return User(maildir, addr, password_path, uid="vmail", gid="vmail")
def parse_size_mb(limit):
"""Parse a size string like ``500M`` or ``2G`` and return megabytes."""
value = limit.strip().upper().rstrip("B")
if value.endswith("G"):
return int(value[:-1]) * 1024
if value.endswith("M"):
return int(value[:-1])
return int(value)
def write_initial_config(inipath, mail_domain, overrides):
"""Write out default config file, using the specified config value overrides."""
content = get_default_config_content(mail_domain, **overrides)
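A quick standalone check of the size parser (the function body is reproduced from the hunk above so the snippet runs on its own):

```python
# Copy of parse_size_mb as shown above, for illustration.
def parse_size_mb(limit):
    value = limit.strip().upper().rstrip("B")
    if value.endswith("G"):
        return int(value[:-1]) * 1024
    if value.endswith("M"):
        return int(value[:-1])
    return int(value)

print(parse_size_mb("500M"))   # 500
print(parse_size_mb("2G"))     # 2048
print(parse_size_mb("100MB"))  # trailing "B" is stripped first: 100
print(parse_size_mb("42"))     # bare numbers are taken as megabytes: 42
```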

View File

@@ -1,8 +1,11 @@
import json
import logging
import os
import re
import sys
import filelock
try:
import crypt_r
except ImportError:
@@ -13,6 +16,7 @@ from .dictproxy import DictProxy
from .migrate_db import migrate_from_db_to_maildir
NOCREATE_FILE = "/etc/chatmail-nocreate"
VALID_LOCALPART_RE = re.compile(r"^[a-z0-9._-]+$")
def encrypt_password(password: str):
@@ -52,6 +56,10 @@ def is_allowed_to_create(config: Config, user, cleartext_password) -> bool:
)
return False
if not VALID_LOCALPART_RE.match(localpart):
logging.warning("localpart %r contains invalid characters", localpart)
return False
return True
@@ -140,8 +148,13 @@ class AuthDictProxy(DictProxy):
if not is_allowed_to_create(self.config, addr, cleartext_password):
return
user.set_password(encrypt_password(cleartext_password))
print(f"Created address: {addr}", file=sys.stderr)
lock = filelock.FileLock(str(user.password_path) + ".lock", timeout=5)
with lock:
userdata = user.get_userdb_dict()
if userdata:
return userdata
user.set_password(encrypt_password(cleartext_password))
print(f"Created address: {addr}", file=sys.stderr)
return user.get_userdb_dict()

View File

@@ -115,11 +115,8 @@ class Expiry:
cutoff_without_login = (
self.now - int(self.config.delete_inactive_users_after) * 86400
)
cutoff_mails = self.now - int(self.config.delete_mails_after) * 86400
cutoff_large_mails = self.now - int(self.config.delete_large_after) * 86400
self.all_mboxes += 1
changed = False
if mbox.last_login and mbox.last_login < cutoff_without_login:
self.remove_mailbox(mbox.basedir)
return
@@ -131,25 +128,10 @@ class Expiry:
print_info(f"checking mailbox {date.strftime('%b %d')} {mboxname}")
else:
print_info(f"checking mailbox (no last_login) {mboxname}")
self.all_files += len(mbox.messages)
for message in mbox.messages:
if message.mtime < cutoff_mails:
self.remove_file(message.path, mtime=message.mtime)
elif message.size > 200000 and message.mtime < cutoff_large_mails:
# we only remove noticed large files (not unnoticed ones in new/)
parts = message.path.split("/")
if len(parts) >= 2 and parts[-2] == "cur":
self.remove_file(message.path, mtime=message.mtime)
else:
continue
changed = True
if changed:
self.remove_file(f"{mbox.basedir}/maildirsize")
def get_summary(self):
return (
f"Removed {self.del_mboxes} out of {self.all_mboxes} mailboxes "
f"and {self.del_files} out of {self.all_files} files in existing mailboxes "
f"in {time.time() - self.start:2.2f} seconds"
)

View File

@@ -13,9 +13,20 @@ to show storage summaries only for first 1000 mailboxes
python -m chatmaild.fsreport /path/to/chatmail.ini --maxnum 1000
to write Prometheus textfile for node_exporter
python -m chatmaild.fsreport --textfile /var/lib/prometheus/node-exporter/
writes to /var/lib/prometheus/node-exporter/fsreport.prom
to also write legacy metrics.py style output (default: /var/www/html/metrics):
python -m chatmaild.fsreport --textfile /var/lib/prometheus/node-exporter/ --legacy-metrics
"""
import os
import tempfile
from argparse import ArgumentParser
from datetime import datetime
@@ -48,7 +59,19 @@ class Report:
self.num_ci_logins = self.num_all_logins = 0
self.login_buckets = {x: 0 for x in (1, 10, 30, 40, 80, 100, 150)}
self.message_buckets = {x: 0 for x in (0, 160000, 500000, 2000000)}
KiB = 1024
MiB = 1024 * KiB
self.message_size_thresholds = (
0,
100 * KiB,
MiB // 2,
1 * MiB,
2 * MiB,
5 * MiB,
10 * MiB,
)
self.message_buckets = {x: 0 for x in self.message_size_thresholds}
self.message_count_buckets = {x: 0 for x in self.message_size_thresholds}
def process_mailbox_stat(self, mailbox):
# categorize login times
@@ -68,9 +91,10 @@ class Report:
for size in self.message_buckets:
for msg in mailbox.messages:
if msg.size >= size:
if self.mdir and not msg.relpath.startswith(self.mdir):
if self.mdir and f"/{self.mdir}/" not in msg.path:
continue
self.message_buckets[size] += msg.size
self.message_count_buckets[size] += 1
self.size_messages += sum(entry.size for entry in mailbox.messages)
self.size_extra += sum(entry.size for entry in mailbox.extrafiles)
@@ -93,9 +117,10 @@ class Report:
pref = f"[{self.mdir}] " if self.mdir else ""
for minsize, sumsize in self.message_buckets.items():
count = self.message_count_buckets[minsize]
percent = (sumsize / all_messages * 100) if all_messages else 0
print(
f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%)"
f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%), {count} msgs"
)
user_logins = self.num_all_logins - self.num_ci_logins
@@ -111,6 +136,75 @@ class Report:
for days, active in self.login_buckets.items():
print(f"last {days:3} days: {HSize(active)} {p(active)}")
def _write_atomic(self, filepath, content):
"""Atomically write content to filepath via tmp+rename."""
dirpath = os.path.dirname(os.path.abspath(filepath))
fd, tmppath = tempfile.mkstemp(dir=dirpath, suffix=".tmp")
try:
with os.fdopen(fd, "w") as f:
f.write(content)
os.chmod(tmppath, 0o644)
os.rename(tmppath, filepath)
except BaseException:
try:
os.unlink(tmppath)
except OSError:
pass
raise
def dump_textfile(self, filepath):
"""Dump metrics in Prometheus exposition format."""
lines = []
lines.append("# HELP chatmail_storage_bytes Mailbox storage in bytes.")
lines.append("# TYPE chatmail_storage_bytes gauge")
lines.append(f'chatmail_storage_bytes{{kind="messages"}} {self.size_messages}')
lines.append(f'chatmail_storage_bytes{{kind="extra"}} {self.size_extra}')
total = self.size_extra + self.size_messages
lines.append(f'chatmail_storage_bytes{{kind="total"}} {total}')
lines.append("# HELP chatmail_messages_bytes Sum of msg bytes >= threshold.")
lines.append("# TYPE chatmail_messages_bytes gauge")
for minsize, sumsize in self.message_buckets.items():
lines.append(f'chatmail_messages_bytes{{min_size="{minsize}"}} {sumsize}')
lines.append("# HELP chatmail_messages_count Number of msgs >= size threshold.")
lines.append("# TYPE chatmail_messages_count gauge")
for minsize, count in self.message_count_buckets.items():
lines.append(f'chatmail_messages_count{{min_size="{minsize}"}} {count}')
lines.append("# HELP chatmail_accounts Number of accounts.")
lines.append("# TYPE chatmail_accounts gauge")
user_logins = self.num_all_logins - self.num_ci_logins
lines.append(f'chatmail_accounts{{kind="all"}} {self.num_all_logins}')
lines.append(f'chatmail_accounts{{kind="ci"}} {self.num_ci_logins}')
lines.append(f'chatmail_accounts{{kind="user"}} {user_logins}')
lines.append(
"# HELP chatmail_accounts_active Non-CI accounts active within N days."
)
lines.append("# TYPE chatmail_accounts_active gauge")
for days, active in self.login_buckets.items():
lines.append(f'chatmail_accounts_active{{days="{days}"}} {active}')
self._write_atomic(filepath, "\n".join(lines) + "\n")
def dump_compat_textfile(self, filepath):
"""Dump legacy metrics.py style metrics."""
user_logins = self.num_all_logins - self.num_ci_logins
lines = [
"# HELP total number of accounts",
"# TYPE accounts gauge",
f"accounts {self.num_all_logins}",
"# HELP number of CI accounts",
"# TYPE ci_accounts gauge",
f"ci_accounts {self.num_ci_logins}",
"# HELP number of non-CI accounts",
"# TYPE nonci_accounts gauge",
f"nonci_accounts {user_logins}",
]
self._write_atomic(filepath, "\n".join(lines) + "\n")
def main(args=None):
"""Report about filesystem storage usage of all mailboxes and messages"""
@@ -127,19 +221,21 @@ def main(args=None):
"--days",
default=0,
action="store",
help="assume date to be days older than now",
help="assume date to be DAYS older than now",
)
parser.add_argument(
"--min-login-age",
default=0,
metavar="DAYS",
dest="min_login_age",
action="store",
help="only sum up message size if last login is at least min-login-age days old",
help="only sum up message size if last login is at least DAYS days old",
)
parser.add_argument(
"--mdir",
metavar="{cur,new,tmp}",
action="store",
help="only consider 'cur' or 'new' or 'tmp' messages for summary",
help="only consider messages in specified Maildir subdirectory for summary",
)
parser.add_argument(
@@ -148,6 +244,21 @@ def main(args=None):
action="store",
help="maximum number of mailboxes to iterate on",
)
parser.add_argument(
"--textfile",
metavar="PATH",
default=None,
help="write Prometheus textfile to PATH (directory or file); "
"if PATH is a directory, writes 'fsreport.prom' inside it",
)
parser.add_argument(
"--legacy-metrics",
metavar="FILENAME",
nargs="?",
const="/var/www/html/metrics",
default=None,
help="write legacy metrics.py textfile (default: /var/www/html/metrics)",
)
args = parser.parse_args(args)
@@ -161,7 +272,15 @@ def main(args=None):
rep = Report(now=now, min_login_age=int(args.min_login_age), mdir=args.mdir)
for mbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum):
rep.process_mailbox_stat(mbox)
rep.dump_summary()
if args.textfile:
path = args.textfile
if os.path.isdir(path):
path = os.path.join(path, "fsreport.prom")
rep.dump_textfile(path)
if args.legacy_metrics:
rep.dump_compat_textfile(args.legacy_metrics)
if not args.textfile and not args.legacy_metrics:
rep.dump_summary()
if __name__ == "__main__":

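The `_write_atomic` helper above is the standard tmp-plus-rename pattern: Prometheus's node_exporter may read the textfile at any moment, and `os.rename()` within one filesystem is atomic, so readers never see a half-written file. A self-contained sketch of the same pattern (file name here is illustrative):

```python
import os
import tempfile

def write_atomic(filepath, content):
    # Write to a temp file in the destination directory, then rename.
    # rename() is atomic within a filesystem, so concurrent readers
    # see either the old file or the complete new one, never a partial.
    dirpath = os.path.dirname(os.path.abspath(filepath))
    fd, tmppath = tempfile.mkstemp(dir=dirpath, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.chmod(tmppath, 0o644)
        os.rename(tmppath, filepath)
    except BaseException:
        try:
            os.unlink(tmppath)
        except OSError:
            pass
        raise

write_atomic("demo.prom", "chatmail_accounts 3\n")
```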
View File

@@ -23,12 +23,6 @@ max_mailbox_size = 500M
# maximum message size for an e-mail in bytes
max_message_size = 31457280
# days after which mails are unconditionally deleted
delete_mails_after = 20
# days after which large messages (>200k) are unconditionally deleted
delete_large_after = 7
# days after which users without a successful login are deleted (database and mails)
delete_inactive_users_after = 90
@@ -48,6 +42,13 @@ passthrough_senders =
# (space-separated, item may start with "@" to whitelist whole recipient domains)
passthrough_recipients =
# Use externally managed TLS certificates instead of built-in acmetool.
# Paths refer to files on the deployment server (not the build machine).
# Both files must already exist before running cmdeploy.
# Certificate renewal is your responsibility; changed files are
# picked up automatically by all relay services.
# tls_external_cert_and_key = /path/to/fullchain.pem /path/to/privkey.pem
# path to www directory - documented here: https://chatmail.at/doc/relay/getting_started.html#custom-web-pages
#www_folder = www

View File

@@ -101,7 +101,11 @@ class MetadataDictProxy(DictProxy):
# Handle `GETMETADATA "" /shared/vendor/deltachat/irohrelay`
return f"O{self.iroh_relay}\n"
elif keyname == "vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn":
res = turn_credentials()
try:
res = turn_credentials()
except Exception:
logging.exception("failed to get TURN credentials")
return "N\n"
port = 3478
return f"O{self.turn_hostname}:{port}:{res}\n"

View File

@@ -1,32 +0,0 @@
#!/usr/bin/env python3
import sys
from pathlib import Path
def main(vmail_dir=None):
if vmail_dir is None:
vmail_dir = sys.argv[1]
accounts = 0
ci_accounts = 0
for path in Path(vmail_dir).iterdir():
if not path.joinpath("cur").is_dir():
continue
accounts += 1
if path.name[:3] in ("ci-", "ac_"):
ci_accounts += 1
print("# HELP total number of accounts")
print("# TYPE accounts gauge")
print(f"accounts {accounts}")
print("# HELP number of CI accounts")
print("# TYPE ci_accounts gauge")
print(f"ci_accounts {ci_accounts}")
print("# HELP number of non-CI accounts")
print("# TYPE nonci_accounts gauge")
print(f"nonci_accounts {accounts - ci_accounts}")
if __name__ == "__main__":
main()

View File

@@ -2,8 +2,8 @@
"""CGI script for creating new accounts."""
import ipaddress
import json
import random
import secrets
import string
from urllib.parse import quote
@@ -15,13 +15,25 @@ ALPHANUMERIC = string.ascii_lowercase + string.digits
ALPHANUMERIC_PUNCT = string.ascii_letters + string.digits + string.punctuation
def wrap_ip(host):
if host.startswith("[") and host.endswith("]"):
return host
try:
ipaddress.ip_address(host)
return f"[{host}]"
except ValueError:
return host
def create_newemail_dict(config: Config):
user = "".join(random.choices(ALPHANUMERIC, k=config.username_max_length))
user = "".join(
secrets.choice(ALPHANUMERIC) for _ in range(config.username_max_length)
)
password = "".join(
secrets.choice(ALPHANUMERIC_PUNCT)
for _ in range(config.password_min_length + 3)
)
return dict(email=f"{user}@{config.mail_domain}", password=f"{password}")
return dict(email=f"{user}@{wrap_ip(config.mail_domain)}", password=f"{password}")
def create_dclogin_url(email, password):

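The effect of `wrap_ip` on the generated address can be checked in isolation (function body reproduced from the hunk above):

```python
import ipaddress

# Copy of wrap_ip as shown above: wrap a bare IP literal in brackets
# for use as the domain part of an email address; leave domain names
# and already-bracketed hosts untouched.
def wrap_ip(host):
    if host.startswith("[") and host.endswith("]"):
        return host
    try:
        ipaddress.ip_address(host)
        return f"[{host}]"
    except ValueError:
        return host

print(wrap_ip("1.2.3.4"))           # [1.2.3.4]
print(wrap_ip("[1.2.3.4]"))         # [1.2.3.4]
print(wrap_ip("chat.example.org"))  # chat.example.org
```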
View File

@@ -0,0 +1,152 @@
"""
Remove messages from a mailbox to meet a size target.
Dovecot calls this script when a user's quota is near its limit.
Files are scored by ``size * age`` so that large, old messages
are removed first.
Usage::
quota_expire <target_mb> <mailbox_path>
"""
import os
import sys
import time
from argparse import ArgumentParser
from collections import namedtuple
from stat import S_ISREG
FileEntry = namedtuple("FileEntry", ("path", "mtime", "size"))
def _get_file_entry(path):
try:
st = os.stat(path)
except FileNotFoundError:
return None
if not S_ISREG(st.st_mode):
return None
return FileEntry(path, st.st_mtime, st.st_size)
def _listdir(path):
try:
return os.listdir(path)
except FileNotFoundError:
return []
def scan_mailbox_messages(mailbox_dir):
messages = []
for sub in ("cur", "new", "tmp"):
subdir = f"{mailbox_dir}/{sub}"
for name in _listdir(subdir):
entry = _get_file_entry(f"{subdir}/{name}")
if entry is not None:
messages.append(entry)
return messages
def _remove_stale_caches(mailbox_dir):
for name in ("maildirsize", "dovecot.index.cache"):
try:
os.unlink(f"{mailbox_dir}/{name}")
except FileNotFoundError:
pass
def expire_to_target(mailbox_dir, target_bytes, now=None):
"""Remove highest-scored files until total size <= *target_bytes*.
Returns the list of removed file paths.
"""
if now is None:
now = time.time()
messages = scan_mailbox_messages(mailbox_dir)
total_size = sum(m.size for m in messages)
if total_size <= target_bytes:
return []
# Score: large and old files get the highest score.
scored = sorted(
messages,
key=lambda m: m.size * (now - m.mtime),
reverse=True,
)
removed = []
for entry in scored:
if total_size <= target_bytes:
break
try:
os.unlink(entry.path)
except FileNotFoundError:
continue
total_size -= entry.size
removed.append(entry.path)
if removed:
_remove_stale_caches(mailbox_dir)
return removed
def main(args=None):
"""Remove mailbox messages to stay within a megabyte target."""
parser = ArgumentParser(description=main.__doc__)
parser.add_argument(
"target_mb",
type=int,
help="target mailbox size in megabytes",
)
parser.add_argument(
"mailbox_path",
help="path to a user mailbox, or with --sweep the mailboxes directory",
)
parser.add_argument(
"--sweep",
action="store_true",
help="sweep all mailboxes under mailbox_path",
)
args = parser.parse_args(args)
target_bytes = args.target_mb * 1024 * 1024
if args.sweep:
return _sweep(args.mailbox_path, target_bytes)
removed = expire_to_target(args.mailbox_path, target_bytes)
if removed:
print(
f"removed {len(removed)} file(s) from {args.mailbox_path}"
f" to reach {args.target_mb} MB target",
file=sys.stderr,
)
return 0
def _sweep(mailboxes_dir, target_bytes):
try:
names = os.listdir(mailboxes_dir)
except FileNotFoundError:
print(f"directory not found: {mailboxes_dir}", file=sys.stderr)
return 1
for name in sorted(names):
if "@" not in name:
continue
mbox = f"{mailboxes_dir}/{name}"
removed = expire_to_target(mbox, target_bytes)
if removed:
print(
f"removed {len(removed)} file(s) from {name}",
file=sys.stderr,
)
return 0
if __name__ == "__main__":
sys.exit(main())
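The `size * age` score means a large new file can outrank a small old one, which is why deletion order is not purely oldest-first. A worked comparison of the two cases from the tests further below:

```python
MB = 1024 * 1024
DAY = 86400

# Score = size * (now - mtime); the highest score is removed first.
big_new = (10 * MB) * (1 * DAY)   # 10 MB, 1 day old  -> 10 "MB-days"
small_old = (1 * MB) * (5 * DAY)  # 1 MB, 5 days old  ->  5 "MB-days"

print(big_new > small_old)  # True: the big new file goes first
```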

View File

@@ -34,8 +34,6 @@ def test_read_config_testrun(make_config):
assert config.postfix_reinject_port == 10025
assert config.max_user_send_per_minute == 60
assert config.max_mailbox_size == "500M"
assert config.delete_mails_after == "20"
assert config.delete_large_after == "7"
assert config.username_min_length == 9
assert config.username_max_length == 9
assert config.password_min_length == 9
@@ -87,3 +85,37 @@ def test_config_tls_self(make_config):
assert config.tls_cert_mode == "self"
assert config.tls_cert_path == "/etc/ssl/certs/mailserver.pem"
assert config.tls_key_path == "/etc/ssl/private/mailserver.key"
def test_config_tls_external(make_config):
config = make_config(
"chat.example.org",
{
"tls_external_cert_and_key": "/custom/fullchain.pem /custom/privkey.pem",
},
)
assert config.tls_cert_mode == "external"
assert config.tls_cert_path == "/custom/fullchain.pem"
assert config.tls_key_path == "/custom/privkey.pem"
def test_config_tls_external_overrides_underscore(make_config):
config = make_config(
"_test.example.org",
{
"tls_external_cert_and_key": "/certs/fullchain.pem /certs/privkey.pem",
},
)
assert config.tls_cert_mode == "external"
assert config.tls_cert_path == "/certs/fullchain.pem"
assert config.tls_key_path == "/certs/privkey.pem"
def test_config_tls_external_bad_format(make_config):
with pytest.raises(ValueError, match="two space-separated"):
make_config(
"chat.example.org",
{
"tls_external_cert_and_key": "/only/one/path.pem",
},
)

View File

@@ -120,6 +120,60 @@ def test_handle_dovecot_protocol_iterate(gencreds, example_config):
assert not lines[2]
def test_invalid_localpart_characters(make_config):
"""Test that is_allowed_to_create rejects localparts with invalid characters."""
config = make_config("chat.example.org", {"username_min_length": "3"})
password = "zequ0Aimuchoodaechik"
domain = config.mail_domain
# valid localparts
assert is_allowed_to_create(config, f"abc123@{domain}", password)
assert is_allowed_to_create(config, f"a.b-c_d@{domain}", password)
# uppercase rejected
assert not is_allowed_to_create(config, f"Abc123@{domain}", password)
assert not is_allowed_to_create(config, f"ABCDEFG@{domain}", password)
# spaces and special chars rejected
assert not is_allowed_to_create(config, f"a b cde@{domain}", password)
assert not is_allowed_to_create(config, f"abc+def@{domain}", password)
assert not is_allowed_to_create(config, f"abc!def@{domain}", password)
assert not is_allowed_to_create(config, f"ab@cdef@{domain}", password)
assert not is_allowed_to_create(config, f"abc/def@{domain}", password)
assert not is_allowed_to_create(config, f"abc\\def@{domain}", password)
def test_concurrent_creation_same_account(dictproxy):
"""Test that concurrent creation of the same account doesn't corrupt password."""
addr = "racetest1@chat.example.org"
password = "zequ0Aimuchoodaechik"
num_threads = 10
results = queue.Queue()
def create():
try:
res = dictproxy.lookup_passdb(addr, password)
results.put(("ok", res))
except Exception:
results.put(("err", traceback.format_exc()))
threads = [threading.Thread(target=create, daemon=True) for _ in range(num_threads)]
for t in threads:
t.start()
for t in threads:
t.join(timeout=10)
passwords_seen = set()
for _ in range(num_threads):
status, res = results.get()
if status == "err":
pytest.fail(f"concurrent creation failed\n{res}")
passwords_seen.add(res["password"])
# all threads must see the same password hash
assert len(passwords_seen) == 1
def test_50_concurrent_lookups_different_accounts(gencreds, dictproxy):
num_threads = 50
req_per_thread = 5

View File

@@ -1,7 +1,6 @@
import os
import random
from datetime import datetime
from fnmatch import fnmatch
from pathlib import Path
import pytest
@@ -112,40 +111,48 @@ def test_report(mbox1, example_config):
report_main(args)
def test_report_mdir_filters_by_path(mbox1, example_config):
"""Test that Report with mdir='cur' only counts messages in cur/ subdirectory."""
from chatmaild.fsreport import Report
now = datetime.utcnow().timestamp()
# Set password mtime to old enough so min_login_age check passes
password = Path(mbox1.basedir).joinpath("password")
old_time = now - 86400 * 10 # 10 days ago
os.utime(password, (old_time, old_time))
# Reload mailbox with updated mtime
from chatmaild.expire import MailboxStat
mbox = MailboxStat(mbox1.basedir)
# Report without mdir — should count all messages
rep_all = Report(now=now, min_login_age=1, mdir=None)
rep_all.process_mailbox_stat(mbox)
total_all = rep_all.message_buckets[0]
# Report with mdir='cur' — should only count cur/ messages
rep_cur = Report(now=now, min_login_age=1, mdir="cur")
rep_cur.process_mailbox_stat(mbox)
total_cur = rep_cur.message_buckets[0]
# Report with mdir='new' — should only count new/ messages
rep_new = Report(now=now, min_login_age=1, mdir="new")
rep_new.process_mailbox_stat(mbox)
total_new = rep_new.message_buckets[0]
# cur has 500-byte msg, new has 600-byte msg (from fill_mbox)
assert total_cur == 500
assert total_new == 600
assert total_all == 500 + 600
def test_expiry_cli_basic(example_config, mbox1):
args = (str(example_config._inipath),)
expiry_main(args)
def test_expiry_cli_old_files(capsys, example_config, mbox1):
relpaths_old = ["cur/msg_old1", "cur/msg_old2"]
cutoff_days = int(example_config.delete_mails_after) + 1
create_new_messages(mbox1.basedir, relpaths_old, size=1000, days=cutoff_days)
relpaths_large = ["cur/msg_old_large1", "new/msg_old_large2"]
cutoff_days = int(example_config.delete_large_after) + 1
create_new_messages(
mbox1.basedir, relpaths_large, size=1000 * 300, days=cutoff_days
)
create_new_messages(mbox1.basedir, ["cur/shouldstay"], size=1000 * 300, days=1)
args = str(example_config._inipath), "--remove", "-v"
expiry_main(args)
out, err = capsys.readouterr()
allpaths = relpaths_old + relpaths_large + ["maildirsize"]
for path in allpaths:
for line in err.split("\n"):
if fnmatch(line, f"removing*{path}"):
break
else:
if path != "new/msg_old_large2":
pytest.fail(f"failed to remove {path}\n{err}")
assert "shouldstay" not in err
def test_get_file_entry(tmp_path):
assert get_file_entry(str(tmp_path.joinpath("123123"))) is None
p = tmp_path.joinpath("x")

View File

@@ -47,6 +47,8 @@ def test_one_mail(
make_config, make_popen, smtpserver, maildata, filtermail_mode, monkeypatch
):
monkeypatch.setenv("PYTHONUNBUFFERED", "1")
# DKIM is tested by cmdeploy tests.
monkeypatch.setenv("FILTERMAIL_SKIP_DKIM", "1")
smtp_inject_port = 20025
if filtermail_mode == "outgoing":
settings = dict(
@@ -64,6 +66,10 @@ def test_one_mail(
popen = make_popen(["filtermail", path, filtermail_mode])
line = popen.stderr.readline().strip()
# skip a warning that FILTERMAIL_SKIP_DKIM shouldn't be used in prod
if b"DKIM verification DISABLED!" in line:
line = popen.stderr.readline().strip()
if b"loop" not in line:
print(line.decode("ascii"), file=sys.stderr)
pytest.fail("starting filtermail failed")

View File

@@ -314,6 +314,51 @@ def test_persistent_queue_items(tmp_path, testaddr, token):
assert not queue_item < item2 and not item2 < queue_item
def test_turn_credentials_exception_returns_N(notifier, metadata, monkeypatch):
"""Test that turn_credentials() failure returns N\\n instead of crashing."""
import chatmaild.metadata
dictproxy = MetadataDictProxy(
notifier=notifier,
metadata=metadata,
turn_hostname="turn.example.org",
)
def mock_turn_credentials():
raise ConnectionRefusedError("socket not available")
monkeypatch.setattr(chatmaild.metadata, "turn_credentials", mock_turn_credentials)
transactions = {}
res = dictproxy.handle_dovecot_request(
"Lshared/0123/vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn"
"\tuser@example.org",
transactions,
)
assert res == "N\n"
def test_turn_credentials_success(notifier, metadata, monkeypatch):
"""Test that valid turn_credentials() returns TURN URI."""
import chatmaild.metadata
dictproxy = MetadataDictProxy(
notifier=notifier,
metadata=metadata,
turn_hostname="turn.example.org",
)
monkeypatch.setattr(chatmaild.metadata, "turn_credentials", lambda: "user:pass")
transactions = {}
res = dictproxy.handle_dovecot_request(
"Lshared/0123/vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn"
"\tuser@example.org",
transactions,
)
assert res == "Oturn.example.org:3478:user:pass\n"
def test_iroh_relay(dictproxy):
rfile = io.BytesIO(
b"\n".join(

View File

@@ -1,24 +0,0 @@
from chatmaild.metrics import main
def test_main(tmp_path, capsys):
paths = []
for x in ("ci-asllkj", "ac_12l3kj", "qweqwe", "ci-l1k2j31l2k3"):
p = tmp_path.joinpath(x)
p.mkdir()
p.joinpath("cur").mkdir()
paths.append(p)
tmp_path.joinpath("nomailbox").mkdir()
main(tmp_path)
out, _ = capsys.readouterr()
d = {}
for line in out.split("\n"):
if line.strip() and not line.startswith("#"):
name, num = line.split()
d[name] = int(num)
assert d["accounts"] == 4
assert d["ci_accounts"] == 3
assert d["nonci_accounts"] == 1

View File

@@ -19,6 +19,12 @@ def test_create_newemail_dict(example_config):
assert ac1["password"] != ac2["password"]
def test_create_newemail_dict_ip(make_config):
config = make_config("1.2.3.4")
ac = create_newemail_dict(config)
assert ac["email"].endswith("@[1.2.3.4]")
def test_create_dclogin_url():
url = create_dclogin_url("user@example.org", "p@ss w+rd")
assert url.startswith("dclogin:")

View File

@@ -0,0 +1,91 @@
import os
import time
from chatmaild.quota_expire import expire_to_target, scan_mailbox_messages
MB = 1024 * 1024
def _create_message(basedir, relpath, size, days_old=0):
path = basedir / relpath
path.parent.mkdir(parents=True, exist_ok=True)
path.write_bytes(b"x" * size)
mtime = time.time() - days_old * 86400
os.utime(path, (mtime, mtime))
return path
def test_scan_cur_new_tmp(tmp_path):
_create_message(tmp_path, "cur/msg1", 100)
_create_message(tmp_path, "new/msg2", 200)
_create_message(tmp_path, "tmp/msg3", 300)
messages = scan_mailbox_messages(str(tmp_path))
assert len(messages) == 3
sizes = sorted(m.size for m in messages)
assert sizes == [100, 200, 300]
def test_scan_ignores_subfolders(tmp_path):
_create_message(tmp_path, "cur/a", 10)
_create_message(tmp_path, ".DeltaChat/cur/b", 20)
assert len(scan_mailbox_messages(str(tmp_path))) == 1
def test_scan_empty(tmp_path):
assert scan_mailbox_messages(str(tmp_path)) == []
assert scan_mailbox_messages(str(tmp_path / "nope")) == []
def test_noop_under_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", MB)
assert expire_to_target(str(tmp_path), 2 * MB) == []
assert (tmp_path / "cur" / "msg1").exists()
def test_removes_to_target(tmp_path):
now = time.time()
for i in range(15):
_create_message(tmp_path, f"cur/msg{i:02d}", MB, days_old=i + 1)
removed = expire_to_target(str(tmp_path), 10 * MB, now=now)
assert len(removed) == 5
assert len(scan_mailbox_messages(str(tmp_path))) == 10
def test_scoring_prefers_large_old(tmp_path):
now = time.time()
_create_message(tmp_path, "cur/large_old", 2 * MB, days_old=30)
_create_message(tmp_path, "cur/small_new", MB, days_old=1)
removed = expire_to_target(str(tmp_path), 2 * MB, now=now)
assert len(removed) == 1
assert "large_old" in removed[0]
def test_scoring_large_new_beats_small_old(tmp_path):
now = time.time()
_create_message(tmp_path, "cur/big_new", 10 * MB, days_old=1)
_create_message(tmp_path, "cur/small_old", MB, days_old=5)
# big_new score: 10MB * 1d = 10 vs small_old score: 1MB * 5d = 5
removed = expire_to_target(str(tmp_path), 10 * MB, now=now)
assert len(removed) == 1
assert "big_new" in removed[0]
def test_exact_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", 5 * MB)
assert expire_to_target(str(tmp_path), 5 * MB) == []
def test_removes_stale_caches(tmp_path):
_create_message(tmp_path, "cur/msg1", 2 * MB, days_old=5)
(tmp_path / "maildirsize").write_text("x")
(tmp_path / "dovecot.index.cache").write_text("x")
expire_to_target(str(tmp_path), MB)
assert not (tmp_path / "maildirsize").exists()
assert not (tmp_path / "dovecot.index.cache").exists()
def test_no_cache_removal_when_under_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", MB)
(tmp_path / "maildirsize").write_text("x")
expire_to_target(str(tmp_path), 2 * MB)
assert (tmp_path / "maildirsize").exists()

View File

@@ -0,0 +1,73 @@
import socket
import threading
import time
from unittest.mock import patch
import pytest
from chatmaild.turnserver import turn_credentials
SOCKET_PATH = "/run/chatmail-turn/turn.socket"
@pytest.fixture
def turn_socket(tmp_path):
"""Create a real Unix socket server at a temp path."""
sock_path = str(tmp_path / "turn.socket")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)
yield sock_path, server
server.close()
def _call_turn_credentials(sock_path):
"""Call turn_credentials but connect to sock_path instead of hardcoded path."""
original_connect = socket.socket.connect
def patched_connect(self, address):
if address == SOCKET_PATH:
address = sock_path
return original_connect(self, address)
with patch.object(socket.socket, "connect", patched_connect):
return turn_credentials()
def test_turn_credentials_timeout(turn_socket):
"""Server accepts but never responds — must raise socket.timeout."""
sock_path, server = turn_socket
def accept_and_hang():
conn, _ = server.accept()
time.sleep(30)
conn.close()
t = threading.Thread(target=accept_and_hang, daemon=True)
t.start()
with pytest.raises(socket.timeout):
_call_turn_credentials(sock_path)
def test_turn_credentials_connection_refused(tmp_path):
"""Socket file doesn't exist — must raise ConnectionRefusedError or FileNotFoundError."""
missing = str(tmp_path / "nonexistent.socket")
with pytest.raises((ConnectionRefusedError, FileNotFoundError)):
_call_turn_credentials(missing)
def test_turn_credentials_success(turn_socket):
"""Server responds with credentials — must return stripped string."""
sock_path, server = turn_socket
def respond():
conn, _ = server.accept()
conn.sendall(b"testuser:testpass\n")
conn.close()
t = threading.Thread(target=respond, daemon=True)
t.start()
result = _call_turn_credentials(sock_path)
assert result == "testuser:testpass"

View File

@@ -4,6 +4,7 @@ import socket
def turn_credentials() -> str:
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client_socket:
client_socket.settimeout(5)
client_socket.connect("/run/chatmail-turn/turn.socket")
with client_socket.makefile("rb") as file:
return file.readline().decode("utf-8").strip()


@@ -10,7 +10,6 @@ dependencies = [
"pillow",
"qrcode",
"markdown",
"pytest",
"setuptools>=68",
"termcolor",
"build",
@@ -20,6 +19,8 @@ dependencies = [
"pytest-xdist",
"execnet",
"imap_tools",
"deltachat-rpc-client",
"deltachat-rpc-server",
]
[project.scripts]


@@ -67,7 +67,7 @@ class AcmetoolDeployer(Deployer):
)
files.template(
src=importlib.resources.files(__package__).joinpath("desired.yaml.j2"),
dest=f"/var/lib/acme/desired/{self.domains[0]}", # 0 is mailhost TLD
dest=f"/var/lib/acme/desired/{self.domains[0]}", # 0 is mailhost TLD
user="root",
group="root",
mode="644",


@@ -3,7 +3,7 @@ Description=acmetool HTTP redirector
[Service]
Type=notify
ExecStart=/usr/bin/acmetool redirector --service.uid=daemon
ExecStart=/usr/bin/acmetool redirector --service.uid=daemon --bind=127.0.0.1:402
Restart=always
RestartSec=30


@@ -1,10 +1,51 @@
import importlib.resources
import io
import os
from contextlib import contextmanager
from pyinfra import host
from pyinfra.facts.server import Command
from pyinfra.operations import files, server, systemd
def has_systemd():
"""Returns False during Docker image builds or any other non-systemd environment."""
return os.path.isdir("/run/systemd/system")
def is_in_container() -> bool:
"""Return True if running inside a container (Docker, LXC, etc.)."""
return (
host.get_fact(
Command,
"systemd-detect-virt --container --quiet 2>/dev/null && echo yes || true",
)
== "yes"
)
@contextmanager
def blocked_service_startup():
"""Prevent services from auto-starting during package installation.
Installs a ``/usr/sbin/policy-rc.d`` that exits 101, blocking any
service from being started by the package manager. This avoids bind
conflicts and CPU/RAM spikes during initial setup. The file is removed
when the context exits.
"""
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
yield
files.file("/usr/sbin/policy-rc.d", present=False)
def get_resource(arg, pkg=__package__):
return importlib.resources.files(pkg).joinpath(arg)
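The `blocked_service_startup()` context manager above works because `invoke-rc.d` consults `/usr/sbin/policy-rc.d` and treats exit code 101 as "action forbidden by policy". A minimal sketch of that mechanism, run against a policy script in a temporary directory rather than the real `/usr/sbin` path (the `dovecot start` arguments are illustrative):

```python
import os
import subprocess
import tempfile

# Write a minimal policy-rc.d: unconditionally forbid service actions.
tmpdir = tempfile.mkdtemp()
policy = os.path.join(tmpdir, "policy-rc.d")
with open(policy, "w") as f:
    f.write("#!/bin/sh\nexit 101\n")
os.chmod(policy, 0o755)

# invoke-rc.d would run this and, on exit code 101, skip starting
# the service during package installation.
code = subprocess.run([policy, "dovecot", "start"]).returncode
print("policy exit code:", code)
```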


@@ -1,32 +0,0 @@
;
; Required DNS entries for chatmail servers
;
{% if A %}
{{ mail_domain }}. A {{ A }}
{% endif %}
{% if AAAA %}
{{ mail_domain }}. AAAA {{ AAAA }}
{% endif %}
{{ mail_domain }}. MX 10 {{ mail_domain }}.
{% if strict_tls %}
_mta-sts.{{ mail_domain }}. TXT "v=STSv1; id={{ sts_id }}"
mta-sts.{{ mail_domain }}. CNAME {{ mail_domain }}.
{% endif %}
www.{{ mail_domain }}. CNAME {{ mail_domain }}.
{{ dkim_entry }}
;
; Recommended DNS entries for interoperability and security-hardening
;
{{ mail_domain }}. TXT "v=spf1 a ~all"
_dmarc.{{ mail_domain }}. TXT "v=DMARC1;p=reject;adkim=s;aspf=s"
{% if acme_account_url %}
{{ mail_domain }}. CAA 0 issue "letsencrypt.org;accounturi={{ acme_account_url }}"
{% endif %}
_adsp._domainkey.{{ mail_domain }}. TXT "dkim=discardable"
_submission._tcp.{{ mail_domain }}. SRV 0 1 587 {{ mail_domain }}.
_submissions._tcp.{{ mail_domain }}. SRV 0 1 465 {{ mail_domain }}.
_imap._tcp.{{ mail_domain }}. SRV 0 1 143 {{ mail_domain }}.
_imaps._tcp.{{ mail_domain }}. SRV 0 1 993 {{ mail_domain }}.


@@ -5,7 +5,6 @@ along with command line option and subcommand parsing.
import argparse
import importlib.resources
import importlib.util
import os
import pathlib
import shutil
@@ -109,7 +108,7 @@ def run_cmd(args, out):
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
if ssh_host in ["localhost", "@docker"]:
if ssh_host == "localhost":
cmd = f"{pyinf} @local {deploy_path} -y"
if version.parse(pyinfra.__version__) < version.parse("3"):
@@ -117,24 +116,18 @@ def run_cmd(args, out):
return 1
try:
retcode = out.check_call(cmd, env=env)
out.check_call(cmd, env=env)
if args.website_only:
if retcode == 0:
out.green("Website deployment completed.")
else:
out.red("Website deployment failed.")
elif retcode == 0:
out.green("Deploy completed, call `cmdeploy dns` next.")
out.green("Website deployment completed.")
elif not args.dns_check_disabled and strict_tls and not remote_data["acme_account_url"]:
out.red("Deploy completed but letsencrypt not configured")
out.red("Run 'cmdeploy run' again")
retcode = 0
else:
out.red("Deploy failed")
out.green("Deploy completed, call `cmdeploy dns` next.")
return 0
except subprocess.CalledProcessError:
out.red("Deploy failed")
retcode = 1
return retcode
return 1
def dns_cmd_options(parser):
@@ -207,17 +200,16 @@ def test_cmd_options(parser):
action="store_true",
help="also run slow tests",
)
add_ssh_host_option(parser)
def test_cmd(args, out):
"""Run local and online tests for chatmail deployment.
"""Run local and online tests for chatmail deployment."""
This will automatically pip-install 'deltachat' if it's not available.
"""
x = importlib.util.find_spec("deltachat")
if x is None:
out.check_call(f"{sys.executable} -m pip install deltachat")
env = os.environ.copy()
env["CHATMAIL_INI"] = str(args.inipath.absolute())
if args.ssh_host:
env["CHATMAIL_SSH"] = args.ssh_host
pytest_path = shutil.which("pytest")
pytest_args = [
@@ -231,7 +223,7 @@ def test_cmd(args, out):
]
if args.slow:
pytest_args.append("--slow")
ret = out.run_ret(pytest_args)
ret = out.run_ret(pytest_args, env=env)
return ret
@@ -322,7 +314,7 @@ def add_ssh_host_option(parser):
parser.add_argument(
"--ssh-host",
dest="ssh_host",
help="Run commands on 'localhost', via '@docker', or on a specific SSH host "
help="Run commands on 'localhost' or on a specific SSH host "
"instead of chatmail.ini's mail_domain.",
)
@@ -332,7 +324,7 @@ def add_config_option(parser):
"--config",
dest="inipath",
action="store",
default=Path("chatmail.ini"),
default=Path(os.environ.get("CHATMAIL_INI", "chatmail.ini")),
type=Path,
help="path to the chatmail.ini file",
)
@@ -384,9 +376,7 @@ def get_parser():
def get_sshexec(ssh_host: str, verbose=True):
if ssh_host in ["localhost", "@local"]:
return LocalExec(verbose, docker=False)
elif ssh_host == "@docker":
return LocalExec(verbose, docker=True)
return LocalExec(verbose)
if verbose:
print(f"[ssh] login to {ssh_host}")
return SSHExec(ssh_host, verbose=verbose)


@@ -5,13 +5,13 @@ Chat Mail pyinfra deploy.
import shutil
import subprocess
import sys
from io import StringIO
from io import BytesIO, StringIO
from pathlib import Path
from chatmaild.config import read_config
from pyinfra import facts, host, logger
from pyinfra.facts import hardware
from pyinfra.api import FactBase
from pyinfra.facts import hardware
from pyinfra.facts.files import Sha256File
from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd
@@ -19,20 +19,24 @@ from pyinfra.operations import apt, files, pip, server, systemd
from cmdeploy.cmdeploy import Out
from .acmetool import AcmetoolDeployer
from .selfsigned.deployer import SelfSignedTlsDeployer
from .basedeploy import (
Deployer,
Deployment,
activate_remote_units,
blocked_service_startup,
configure_remote_units,
get_resource,
has_systemd,
is_in_container,
)
from .dovecot.deployer import DovecotDeployer
from .external.deployer import ExternalTlsDeployer
from .filtermail.deployer import FiltermailDeployer
from .mtail.deployer import MtailDeployer
from .nginx.deployer import NginxDeployer
from .opendkim.deployer import OpendkimDeployer
from .postfix.deployer import PostfixDeployer
from .selfsigned.deployer import SelfSignedTlsDeployer
from .www import build_webpages, find_merge_conflict, get_paths
@@ -66,6 +70,8 @@ def _build_chatmaild(dist_dir) -> None:
def remove_legacy_artifacts():
if not has_systemd():
return
# disable legacy doveauth-dictproxy.service
if host.get_fact(SystemdEnabled).get("doveauth-dictproxy.service"):
systemd.service(
@@ -118,7 +124,6 @@ def _install_remote_venv_with_chatmaild() -> None:
def _configure_remote_venv_with_chatmaild(config) -> None:
remote_base_dir = "/usr/local/lib/chatmaild"
remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
@@ -129,16 +134,13 @@ def _configure_remote_venv_with_chatmaild(config) -> None:
**root_owned,
)
files.template(
src=get_resource("metrics.cron.j2"),
dest="/etc/cron.d/chatmail-metrics",
user="root",
group="root",
mode="644",
config={
"mailboxes_dir": config.mailboxes_dir,
"execpath": f"{remote_venv_dir}/bin/chatmail-metrics",
},
files.file(
path="/etc/cron.d/chatmail-metrics",
present=False,
)
files.file(
path="/var/www/html/metrics",
present=False,
)
@@ -148,33 +150,16 @@ class UnboundDeployer(Deployer):
self.need_restart = False
def install(self):
# Run local DNS resolver `unbound`.
# `resolvconf` takes care of setting up /etc/resolv.conf
# to use 127.0.0.1 as the resolver.
# Run local DNS resolver `unbound`. `resolvconf` takes care of
# setting up /etc/resolv.conf to use 127.0.0.1 as the resolver.
#
# On an IPv4-only system, if unbound is started but not
# configured, it causes subsequent steps to fail to resolve hosts.
# Here, we use policy-rc.d to prevent unbound from starting up
# on initial install. Later, we will configure it and start it.
#
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
#
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
)
files.file("/usr/sbin/policy-rc.d", present=False)
# On an IPv4-only system, if unbound is started but not configured,
# it causes subsequent steps to fail to resolve hosts.
with blocked_service_startup():
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils", "resolvconf"],
)
def configure(self):
server.shell(
@@ -266,6 +251,9 @@ class WebsiteDeployer(Deployer):
# if www_folder is a hugo page, build it
if build_dir:
www_path = build_webpages(src_dir, build_dir, self.config)
if www_path is None:
logger.warning("Web page build failed, skipping website deployment")
return
# if it is not a hugo page, upload it as is
files.rsync(
f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"]
@@ -300,7 +288,7 @@ class LegacyRemoveDeployer(Deployer):
present=False,
)
# remove echobot if it is still running
if host.get_fact(SystemdEnabled).get("echobot.service"):
if has_systemd() and host.get_fact(SystemdEnabled).get("echobot.service"):
systemd.service(
name="Disable echobot.service",
service="echobot.service",
@@ -332,12 +320,12 @@ class TurnDeployer(Deployer):
def install(self):
(url, sha256sum) = {
"x86_64": (
"https://github.com/chatmail/chatmail-turn/releases/download/v0.3/chatmail-turn-x86_64-linux",
"841e527c15fdc2940b0469e206188ea8f0af48533be12ecb8098520f813d41e4",
"https://github.com/chatmail/chatmail-turn/releases/download/v0.4/chatmail-turn-x86_64-linux",
"1ec1f5c50122165e858a5a91bcba9037a28aa8cb8b64b8db570aa457c6141a8a",
),
"aarch64": (
"https://github.com/chatmail/chatmail-turn/releases/download/v0.3/chatmail-turn-aarch64-linux",
"a5fc2d06d937b56a34e098d2cd72a82d3e89967518d159bf246dc69b65e81b42",
"https://github.com/chatmail/chatmail-turn/releases/download/v0.4/chatmail-turn-aarch64-linux",
"0fb3e792419494e21ecad536464929dba706bb2c88884ed8f1788141d26fc756",
),
}[host.get_fact(facts.server.Arch)]
@@ -470,10 +458,19 @@ class ChatmailDeployer(Deployer):
("iroh", None, None),
]
def __init__(self, mail_domain):
self.mail_domain = mail_domain
def __init__(self, config):
self.config = config
self.mail_domain = config.mail_domain
def install(self):
files.put(
name="Disable installing recommended packages globally",
src=BytesIO(b'APT::Install-Recommends "false";\n'),
dest="/etc/apt/apt.conf.d/00InstallRecommends",
user="root",
group="root",
mode="644",
)
apt.update(name="apt update", cache_time=24 * 3600)
apt.upgrade(name="upgrade apt packages", auto_remove=True)
@@ -486,12 +483,18 @@ class ChatmailDeployer(Deployer):
name="Install rsync",
packages=["rsync"],
)
apt.packages(
name="Ensure cron is installed",
packages=["cron"],
)
def configure(self):
# metadata crashes if the mailboxes dir does not exist
files.directory(
name="Ensure vmail mailbox directory exists",
path=str(self.config.mailboxes_dir),
user="vmail",
group="vmail",
mode="700",
present=True,
)
# This file is used by auth proxy.
# https://wiki.debian.org/EtcMailName
server.shell(
@@ -536,6 +539,20 @@ class GithashDeployer(Deployer):
)
def get_tls_deployer(config, mail_domain):
"""Select the appropriate TLS deployer based on config."""
tls_domains = [mail_domain, f"mta-sts.{mail_domain}", f"www.{mail_domain}"]
if config.tls_cert_mode == "acme":
return AcmetoolDeployer(config.acme_email, tls_domains)
elif config.tls_cert_mode == "self":
return SelfSignedTlsDeployer(mail_domain)
elif config.tls_cert_mode == "external":
return ExternalTlsDeployer(config.tls_cert_path, config.tls_key_path)
else:
raise ValueError(f"Unknown tls_cert_mode: {config.tls_cert_mode}")
def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -> None:
"""Deploy a chat-mail instance.
@@ -567,47 +584,47 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
Out().red(f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n")
exit(1)
port_services = [
(["master", "smtpd"], 25),
("unbound", 53),
]
if config.tls_cert_mode == "acme":
port_services.append(("acmetool", 80))
port_services += [
(["imap-login", "dovecot"], 143),
("nginx", 443),
(["master", "smtpd"], 465),
(["master", "smtpd"], 587),
(["imap-login", "dovecot"], 993),
("iroh-relay", 3340),
("mtail", 3903),
("stats", 3904),
("nginx", 8443),
(["master", "smtpd"], config.postfix_reinject_port),
(["master", "smtpd"], config.postfix_reinject_port_incoming),
("filtermail", config.filtermail_smtp_port),
("filtermail", config.filtermail_smtp_port_incoming),
]
for service, port in port_services:
print(f"Checking if port {port} is available for {service}...")
running_service = host.get_fact(Port, port=port)
services = [service] if isinstance(service, str) else service
if running_service:
if running_service not in services:
Out().red(
f"Deploy failed: port {port} is occupied by: {running_service}"
)
exit(1)
if not is_in_container():
port_services = [
(["master", "smtpd"], 25),
("unbound", 53),
]
if config.tls_cert_mode == "acme":
port_services.append(("acmetool", 402))
port_services += [
(["imap-login", "dovecot"], 143),
# acmetool previously listened on port 80,
# so don't complain during upgrade that moved it to port 402
# and gave the port to nginx.
(["acmetool", "nginx"], 80),
("nginx", 443),
(["master", "smtpd"], 465),
(["master", "smtpd"], 587),
(["imap-login", "dovecot"], 993),
("iroh-relay", 3340),
("mtail", 3903),
("stats", 3904),
("nginx", 8443),
(["master", "smtpd"], config.postfix_reinject_port),
(["master", "smtpd"], config.postfix_reinject_port_incoming),
("filtermail", config.filtermail_smtp_port),
("filtermail", config.filtermail_smtp_port_incoming),
]
for service, port in port_services:
print(f"Checking if port {port} is available for {service}...")
running_service = host.get_fact(Port, port=port)
services = [service] if isinstance(service, str) else service
if running_service:
if running_service not in services:
Out().red(
f"Deploy failed: port {port} is occupied by: {running_service}"
)
exit(1)
tls_domains = [mail_domain, f"mta-sts.{mail_domain}", f"www.{mail_domain}"]
if config.tls_cert_mode == "acme":
tls_deployer = AcmetoolDeployer(config.acme_email, tls_domains)
else:
tls_deployer = SelfSignedTlsDeployer(mail_domain)
tls_deployer = get_tls_deployer(config, mail_domain)
all_deployers = [
ChatmailDeployer(mail_domain),
ChatmailDeployer(config),
LegacyRemoveDeployer(),
FiltermailDeployer(),
JournaldDeployer(),
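The port check above allows each port to be "owned" by one of several process names (for example Dovecot's listener shows up as `imap-login`), so entries are either a string or a list. The normalization logic can be distilled into a small sketch (function name and sample services are illustrative):

```python
# Each entry is a service name or a list of acceptable names for a port.
port_services = [
    (["imap-login", "dovecot"], 993),
    ("nginx", 443),
]

def port_conflict(running_service, service):
    """Return True if a process owns the port but is not an expected owner."""
    allowed = [service] if isinstance(service, str) else service
    return running_service is not None and running_service not in allowed

print(port_conflict("dovecot", ["imap-login", "dovecot"]))  # expected owner
print(port_conflict("python3", "nginx"))  # unexpected owner: conflict
print(port_conflict(None, "nginx"))  # port free: no conflict
```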


@@ -1,11 +1,22 @@
import datetime
import importlib
from jinja2 import Template
from . import remote
def parse_zone_records(text):
"""Yield ``(name, ttl, rtype, rdata)`` from standard BIND-format text."""
for raw_line in text.splitlines():
line = raw_line.strip()
if not line or line.startswith(";"):
continue
try:
name, ttl, _in, rtype, rdata = line.split(None, 4)
except ValueError:
raise ValueError(f"Bad zone record line: {line!r}") from None
name = name.rstrip(".")
yield name, ttl, rtype.upper(), rdata
def get_initial_remote_data(sshexec, mail_domain):
return sshexec.logged(
call=remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=mail_domain)
@@ -31,13 +42,39 @@ def get_filled_zone_file(remote_data):
if not sts_id:
remote_data["sts_id"] = datetime.datetime.now().strftime("%Y%m%d%H%M")
template = importlib.resources.files(__package__).joinpath("chatmail.zone.j2")
content = template.read_text()
zonefile = Template(content).render(**remote_data)
lines = [x.strip() for x in zonefile.split("\n") if x.strip()]
d = remote_data["mail_domain"]
def append_record(name, rtype, rdata, ttl=3600):
lines.append(f"{name:<40} {ttl:<6} IN {rtype:<5} {rdata}")
lines = ["; Required DNS entries"]
if remote_data.get("A"):
append_record(f"{d}.", "A", remote_data["A"])
if remote_data.get("AAAA"):
append_record(f"{d}.", "AAAA", remote_data["AAAA"])
append_record(f"{d}.", "MX", f"10 {d}.")
if remote_data.get("strict_tls"):
append_record(f"_mta-sts.{d}.", "TXT", f'"v=STSv1; id={remote_data["sts_id"]}"')
append_record(f"mta-sts.{d}.", "CNAME", f"{d}.")
append_record(f"www.{d}.", "CNAME", f"{d}.")
lines.append(remote_data["dkim_entry"])
lines.append("")
zonefile = "\n".join(lines)
return zonefile
lines.append("; Recommended DNS entries")
append_record(f"{d}.", "TXT", '"v=spf1 a ~all"')
append_record(f"_dmarc.{d}.", "TXT", '"v=DMARC1;p=reject;adkim=s;aspf=s"')
if remote_data.get("acme_account_url"):
append_record(
f"{d}.",
"CAA",
f'0 issue "letsencrypt.org;accounturi={remote_data["acme_account_url"]}"',
)
append_record(f"_adsp._domainkey.{d}.", "TXT", '"dkim=discardable"')
append_record(f"_submission._tcp.{d}.", "SRV", f"0 1 587 {d}.")
append_record(f"_submissions._tcp.{d}.", "SRV", f"0 1 465 {d}.")
append_record(f"_imap._tcp.{d}.", "SRV", f"0 1 143 {d}.")
append_record(f"_imaps._tcp.{d}.", "SRV", f"0 1 993 {d}.")
lines.append("")
return "\n".join(lines)
def check_full_zone(sshexec, remote_data, out, zonefile) -> int:
@@ -58,7 +95,8 @@ def check_full_zone(sshexec, remote_data, out, zonefile) -> int:
returncode = 1
if remote_data.get("dkim_entry") in required_diff:
out(
"If the DKIM entry above does not work with your DNS provider, you can try this one:\n"
"If the DKIM entry above does not work with your DNS provider,"
" you can try this one:\n"
)
out(remote_data.get("web_dkim_entry") + "\n")
if recommended_diff:
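The `parse_zone_records` helper introduced above splits each BIND-format line into five fields, with the rdata kept as one trailing chunk. Re-stated inline as a runnable sketch (the sample zone text is illustrative, not a real deployment's zone):

```python
def parse_zone_records(text):
    """Yield (name, ttl, rtype, rdata) from standard BIND-format text."""
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line or line.startswith(";"):
            continue
        try:
            # maxsplit=4 keeps multi-token rdata (e.g. "10 example.org.") intact.
            name, ttl, _in, rtype, rdata = line.split(None, 4)
        except ValueError:
            raise ValueError(f"Bad zone record line: {line!r}") from None
        yield name.rstrip("."), ttl, rtype.upper(), rdata

zone = """\
; comments and blank lines are skipped

example.org.            3600 IN MX  10 example.org.
_imap._tcp.example.org. 3600 IN SRV 0 1 143 example.org.
"""
records = list(parse_zone_records(zone))
print(records)
```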


@@ -1,16 +1,33 @@
import io
import urllib.request
from chatmaild.config import Config
from pyinfra import host
from pyinfra.facts.server import Arch, Sysctl
from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.facts.deb import DebPackages
from pyinfra.facts.server import Arch, Command, Sysctl
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import (
Deployer,
activate_remote_units,
blocked_service_startup,
configure_remote_units,
get_resource,
is_in_container,
)
DOVECOT_ARCHIVE_VERSION = "2.3.21+dfsg1-3"
DOVECOT_PACKAGE_VERSION = f"1:{DOVECOT_ARCHIVE_VERSION}"
DOVECOT_SHA256 = {
("core", "amd64"): "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d",
("core", "arm64"): "e7548e8a82929722e973629ecc40fcfa886894cef3db88f23535149e7f730dc9",
("imapd", "amd64"): "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86",
("imapd", "arm64"): "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f",
("lmtpd", "amd64"): "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab",
("lmtpd", "arm64"): "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f",
}
class DovecotDeployer(Deployer):
daemon_reload = False
@@ -22,22 +39,64 @@ class DovecotDeployer(Deployer):
def install(self):
arch = host.get_fact(Arch)
if not host.get_fact(SystemdEnabled).get("dovecot.service"):
_install_dovecot_package("core", arch)
_install_dovecot_package("imapd", arch)
_install_dovecot_package("lmtpd", arch)
with blocked_service_startup():
debs = []
for pkg in ("core", "imapd", "lmtpd"):
deb, changed = _download_dovecot_package(pkg, arch)
self.need_restart |= changed
if deb:
debs.append(deb)
if debs:
deb_list = " ".join(debs)
# First dpkg may fail on missing dependencies (stderr suppressed);
# apt-get --fix-broken pulls them in, then dpkg retries cleanly.
server.shell(
name="Install dovecot packages",
commands=[
f"dpkg --force-confdef --force-confold -i {deb_list} 2> /dev/null || true",
"DEBIAN_FRONTEND=noninteractive apt-get -y --fix-broken install",
f"dpkg --force-confdef --force-confold -i {deb_list}",
],
)
self.need_restart = True
files.put(
name="Pin dovecot packages to block Debian dist-upgrades",
src=io.StringIO(
"Package: dovecot-*\n"
"Pin: version *\n"
"Pin-Priority: -1\n"
),
dest="/etc/apt/preferences.d/pin-dovecot",
user="root",
group="root",
mode="644",
)
def configure(self):
configure_remote_units(self.config.mail_domain, self.units)
self.need_restart, self.daemon_reload = _configure_dovecot(self.config)
config_restart, self.daemon_reload = _configure_dovecot(self.config)
self.need_restart |= config_restart
def activate(self):
activate_remote_units(self.units)
# Detect stale binary: package installed but service still runs old (deleted) binary.
if not self.disable_mail and not self.need_restart:
stale = host.get_fact(
Command,
'pid=$(systemctl show -p MainPID --value dovecot.service 2>/dev/null);'
' [ "${pid:-0}" != "0" ] && readlink "/proc/$pid/exe" 2>/dev/null | grep -q "(deleted)"'
" && echo STALE || true",
)
if stale == "STALE":
self.need_restart = True
restart = False if self.disable_mail else self.need_restart
systemd.service(
name="Disable dovecot for now" if self.disable_mail else "Start and enable Dovecot",
name="Disable dovecot for now"
if self.disable_mail
else "Start and enable Dovecot",
service="dovecot.service",
running=False if self.disable_mail else True,
enabled=False if self.disable_mail else True,
@@ -47,41 +106,49 @@ class DovecotDeployer(Deployer):
self.need_restart = False
def _install_dovecot_package(package: str, arch: str):
def _pick_url(primary, fallback):
try:
req = urllib.request.Request(primary, method="HEAD")
urllib.request.urlopen(req, timeout=10)
return primary
except Exception:
return fallback
def _download_dovecot_package(package: str, arch: str) -> tuple[str | None, bool]:
"""Download a dovecot .deb if needed, return (path, changed)."""
arch = "amd64" if arch == "x86_64" else arch
arch = "arm64" if arch == "aarch64" else arch
url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
deb_filename = "/root/" + url.split("/")[-1]
match (package, arch):
case ("core", "amd64"):
sha256 = "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d"
case ("core", "arm64"):
sha256 = "e7548e8a82929722e973629ecc40fcfa886894cef3db88f23535149e7f730dc9"
case ("imapd", "amd64"):
sha256 = "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86"
case ("imapd", "arm64"):
sha256 = "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f"
case ("lmtpd", "amd64"):
sha256 = "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab"
case ("lmtpd", "arm64"):
sha256 = "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f"
case _:
apt.packages(packages=[f"dovecot-{package}"])
return
pkg_name = f"dovecot-{package}"
sha256 = DOVECOT_SHA256.get((package, arch))
if sha256 is None:
op = apt.packages(packages=[pkg_name])
return None, bool(getattr(op, "changed", False))
installed_versions = host.get_fact(DebPackages).get(pkg_name, [])
if DOVECOT_PACKAGE_VERSION in installed_versions:
return None, False
url_version = DOVECOT_ARCHIVE_VERSION.replace("+", "%2B")
deb_base = f"{pkg_name}_{url_version}_{arch}.deb"
primary_url = f"https://download.delta.chat/dovecot/{deb_base}"
fallback_url = f"https://github.com/chatmail/dovecot/releases/download/upstream%2F{url_version}/{deb_base}"
url = _pick_url(primary_url, fallback_url)
deb_filename = f"/root/{deb_base}"
files.download(
name=f"Download dovecot-{package}",
name=f"Download {pkg_name}",
src=url,
dest=deb_filename,
sha256sum=sha256,
cache_time=60 * 60 * 24 * 365 * 10, # never redownload the package
)
apt.deb(name=f"Install dovecot-{package}", src=deb_filename)
return deb_filename, True
def _configure_dovecot(config: Config, debug: bool = False) -> (bool, bool):
def _configure_dovecot(config: Config, debug: bool = False) -> tuple[bool, bool]:
"""Configures Dovecot IMAP server."""
need_restart = False
daemon_reload = False
@@ -116,11 +183,18 @@ def _configure_dovecot(config: Config, debug: bool = False) -> (bool, bool):
# as per https://doc.dovecot.org/2.3/configuration_manual/os/
# it is recommended to set the following inotify limits
can_modify = not is_in_container()
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
if host.get_fact(Sysctl)[key] > 65535:
# Skip updating limits if already sufficient
# (enables running in incus containers where sysctl is read-only)
value = host.get_fact(Sysctl)[key]
if value > 65534:
continue
if not can_modify:
print(
"\n!!!! refusing to attempt sysctl setting in containers\n"
f"!!!! dovecot: sysctl {key!r}={value}, should be >65534 for production setups\n"
"!!!!"
)
continue
server.sysctl(
name=f"Change {key}",
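The stale-binary detection in `activate()` above rests on a Linux `/proc` detail: once a running process's executable is deleted or replaced on disk, `readlink /proc/<pid>/exe` ends in ` (deleted)`. A runnable sketch of that condition (Linux-only; the copied `sleep` binary and temp paths are illustrative stand-ins for a replaced Dovecot binary):

```python
import os
import shutil
import subprocess
import tempfile
import time

# Copy a harmless long-running binary and start it.
bindir = tempfile.mkdtemp()
copy = os.path.join(bindir, "sleep-copy")
shutil.copy(shutil.which("sleep"), copy)
proc = subprocess.Popen([copy, "30"])
time.sleep(0.5)  # let exec() complete before checking /proc

# Delete the binary out from under the running process.
os.remove(copy)

# The kernel marks the now-dangling exe link, which is what the
# deploy's shell check greps for before forcing a restart.
exe = os.readlink(f"/proc/{proc.pid}/exe")
state = "STALE" if "(deleted)" in exe else "OK"
print(state)
proc.kill()
```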


@@ -133,6 +133,11 @@ protocol lmtp {
# mail_lua and push_notification_lua are needed for Lua push notification handler.
# <https://doc.dovecot.org/2.3/configuration_manual/push_notification/#configuration>
mail_plugins = $mail_plugins mail_lua notify push_notification push_notification_lua
# Disable fsync for LMTP. May lose delivered message,
# but unlikely to cause problems with multiple relays.
# https://doc.dovecot.org/2.3/admin_manual/mailbox_formats/#fsyncing
mail_fsync = never
}
plugin {
@@ -144,12 +149,22 @@ plugin {
}
plugin {
# for now we define static quota-rules for all users
# for now we define static quota-rules for all users
quota = maildir:User quota
quota_rule = *:storage={{ config.max_mailbox_size }}
quota_max_mail_size={{ config.max_message_size }}
quota_grace = 0
# quota_over_flag_value = TRUE
# When a user reaches 90% quota, run chatmail-quota-expire
# to remove large/old messages until usage is below 80%.
quota_warning = storage=90%% quota-warning {{ config.max_mailbox_size_mb * 80 // 100 }} {{ config.mailboxes_dir }}/%u
}
service quota-warning {
executable = script /usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire
user = vmail
unix_listener quota-warning {
}
}
# push_notification configuration
@@ -252,6 +267,9 @@ protocol imap {
# sort -sn <(sed 's/ / C: /' *.in) <(sed 's/ / S: /' cat *.out)
rawlog_dir = %h
# Disable fsync for IMAP. May lose IMAP changes like setting flags.
mail_fsync = never
}
{% endif %}


@@ -0,0 +1,67 @@
import io
from pyinfra import host
from pyinfra.facts.files import File
from pyinfra.operations import files, systemd
from cmdeploy.basedeploy import Deployer, get_resource
class ExternalTlsDeployer(Deployer):
"""Expects TLS certificates to be managed on the server.
Validates that the configured certificate and key files
exist on the remote host. Installs a systemd path unit
that watches the certificate file and automatically
restarts/reloads affected services when it changes.
"""
def __init__(self, cert_path, key_path):
self.cert_path = cert_path
self.key_path = key_path
def configure(self):
# Verify cert and key exist on the remote host using pyinfra facts.
for path in (self.cert_path, self.key_path):
info = host.get_fact(File, path=path)
if info is None:
raise Exception(f"External TLS file not found on server: {path}")
# Deploy the .path unit (templated with the cert path).
# pkg=__package__ is required here because the resource files
# live in cmdeploy.external, not the default cmdeploy package.
source = get_resource("tls-cert-reload.path.f", pkg=__package__)
content = source.read_text().format(cert_path=self.cert_path).encode()
path_unit = files.put(
name="Upload tls-cert-reload.path",
src=io.BytesIO(content),
dest="/etc/systemd/system/tls-cert-reload.path",
user="root",
group="root",
mode="644",
)
service_unit = files.put(
name="Upload tls-cert-reload.service",
src=get_resource("tls-cert-reload.service", pkg=__package__),
dest="/etc/systemd/system/tls-cert-reload.service",
user="root",
group="root",
mode="644",
)
if path_unit.changed or service_unit.changed:
self.need_restart = True
def activate(self):
systemd.service(
name="Enable tls-cert-reload path watcher",
service="tls-cert-reload.path",
running=True,
enabled=True,
restarted=self.need_restart,
daemon_reload=self.need_restart,
)
# No explicit reload needed here: dovecot/nginx read the cert
# on startup, and the .path watcher handles live changes.
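The `.path.f` suffix used above signals that the unit file is a `str.format` template: `configure()` reads the text and substitutes `{cert_path}`, which is why the shipped file contains only that one placeholder (any literal braces would need doubling). A minimal sketch of the fill-in step, with an inline template and a hypothetical certificate path:

```python
# Inline stand-in for tls-cert-reload.path.f; the real file ships as a
# package resource. The cert path below is a hypothetical example.
template = (
    "[Unit]\n"
    "Description=Watch TLS certificate for changes\n"
    "[Path]\n"
    "PathChanged={cert_path}\n"
    "[Install]\n"
    "WantedBy=multi-user.target\n"
)

unit = template.format(cert_path="/etc/ssl/mail/fullchain.pem")
print(unit)
```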


@@ -0,0 +1,15 @@
# Watch the TLS certificate file for changes.
# When the cert is updated (e.g. renewed by an external process),
# this triggers tls-cert-reload.service to reload the affected services.
#
# NOTE: changes to the certificates are not detected if they cross bind-mount boundaries.
# After cert renewal, you must then trigger the reload explicitly:
# systemctl start tls-cert-reload.service
[Unit]
Description=Watch TLS certificate for changes
[Path]
PathChanged={cert_path}
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,15 @@
# Reload services that cache the TLS certificate.
#
# dovecot: caches the cert at startup; reload re-reads SSL certs
# without dropping existing connections.
# nginx: caches the cert at startup; reload gracefully picks up
# the new cert for new connections.
# postfix: reads the cert fresh on each TLS handshake,
# does NOT need a reload/restart.
[Unit]
Description=Reload TLS services after certificate change
[Service]
Type=oneshot
ExecStart=/bin/systemctl try-reload-or-restart dovecot
ExecStart=/bin/systemctl try-reload-or-restart nginx


@@ -14,10 +14,10 @@ class FiltermailDeployer(Deployer):
def install(self):
arch = host.get_fact(facts.server.Arch)
url = f"https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-{arch}"
url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.1/filtermail-{arch}"
sha256sum = {
"x86_64": "f14a31323ae2dad3b59d3fdafcde507521da2f951a9478cd1f2fe2b4463df71d",
"aarch64": "933770d75046c4fd7084ce8d43f905f8748333426ad839154f0fc654755ef09f",
"x86_64": "48b3fb80c092d00b9b0a0ef77a8673496da3b9aed5ec1851e1df936d5589d62f",
"aarch64": "c65bd5f45df187d3d65d6965a285583a3be0f44a6916ff12909ff9a8d702c22e",
}[arch]
self.need_restart |= files.download(
name="Download filtermail",


@@ -1 +0,0 @@
*/5 * * * * root {{ config.execpath }} {{ config.mailboxes_dir }} >/var/www/html/metrics


@@ -54,7 +54,7 @@ http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_certificate {{ config.tls_cert_path }};
ssl_certificate_key {{ config.tls_key_path }};
@@ -73,18 +73,18 @@ http {
access_log syslog:server=unix:/dev/log,facility=local7;
location /mxdeliv/ {
proxy_pass http://127.0.0.1:{{ config.filtermail_http_port_incoming }};
}
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
location /metrics {
default_type text/plain;
}
location /new {
{% if config.tls_cert_mode == "acme" %}
{% if config.tls_cert_mode != "self" %}
if ($request_method = GET) {
# Redirect to Delta Chat,
# which will in turn do a POST request.
@@ -106,7 +106,7 @@ http {
#
# Redirects are only for browsers.
location /cgi-bin/newemail.py {
{% if config.tls_cert_mode == "acme" %}
{% if config.tls_cert_mode != "self" %}
if ($request_method = GET) {
return 301 dcaccount:https://{{ config.mail_domain }}/new;
}
@@ -145,4 +145,25 @@ http {
return 301 $scheme://{{ config.mail_domain }}$request_uri;
access_log syslog:server=unix:/dev/log,facility=local7;
}
server {
listen 80;
{% if not disable_ipv6 %}
listen [::]:80;
{% endif %}
{% if config.tls_cert_mode == "acme" %}
location /.well-known/acme-challenge/ {
proxy_pass http://acmetool;
}
{% endif %}
return 301 https://$host$request_uri;
}
{% if config.tls_cert_mode == "acme" %}
upstream acmetool {
server 127.0.0.1:402;
}
{% endif %}
}

View File

@@ -37,21 +37,15 @@ class OpendkimDeployer(Deployer):
)
need_restart |= main_config.changed
screen_script = files.put(
src=get_resource("opendkim/screen.lua"),
dest="/etc/opendkim/screen.lua",
user="root",
group="root",
mode="644",
screen_script = files.file(
path="/etc/opendkim/screen.lua",
present=False,
)
need_restart |= screen_script.changed
final_script = files.put(
src=get_resource("opendkim/final.lua"),
dest="/etc/opendkim/final.lua",
user="root",
group="root",
mode="644",
final_script = files.file(
path="/etc/opendkim/final.lua",
present=False,
)
need_restart |= final_script.changed
@@ -109,6 +103,13 @@ class OpendkimDeployer(Deployer):
)
need_restart |= service_file.changed
files.file(
name="chown opendkim: /etc/dkimkeys/opendkim.private",
path="/etc/dkimkeys/opendkim.private",
user="opendkim",
group="opendkim",
)
self.need_restart = need_restart
def activate(self):

View File

@@ -1,42 +0,0 @@
mtaname = odkim.get_mtasymbol(ctx, "{daemon_name}")
if mtaname == "ORIGINATING" then
-- Outgoing message will be signed,
-- no need to look for signatures.
return nil
end
nsigs = odkim.get_sigcount(ctx)
if nsigs == nil then
return nil
end
local valid = false
local error_msg = "No valid DKIM signature found."
for i = 1, nsigs do
sig = odkim.get_sighandle(ctx, i - 1)
sigres = odkim.sig_result(sig)
-- All signatures that do not correspond to From:
-- were ignored in screen.lua and return sigres -1.
--
-- Any valid signature that was not ignored like this
-- means the message is acceptable.
if sigres == 0 then
valid = true
else
error_msg = "DKIM signature is invalid, error code " .. tostring(sigres) .. ", search https://github.com/trusteddomainproject/OpenDKIM/blob/master/libopendkim/dkim.h#L108"
end
end
if valid then
-- Strip all DKIM-Signature headers after successful validation
-- Delete in reverse order to avoid index shifting.
for i = nsigs, 1, -1 do
odkim.del_header(ctx, "DKIM-Signature", i)
end
else
odkim.set_reply(ctx, "554", "5.7.1", error_msg)
odkim.set_result(ctx, SMFIS_REJECT)
end
return nil

View File

@@ -45,12 +45,6 @@ SignHeaders *,+autocrypt,+content-type
# Default is empty.
OversignHeaders from,reply-to,subject,date,to,cc,resent-date,resent-from,resent-sender,resent-to,resent-cc,in-reply-to,references,list-id,list-help,list-unsubscribe,list-subscribe,list-post,list-owner,list-archive,autocrypt
# Script to ignore signatures that do not correspond to the From: domain.
ScreenPolicyScript /etc/opendkim/screen.lua
# Script to reject mails without a valid DKIM signature.
FinalPolicyScript /etc/opendkim/final.lua
# In Debian, opendkim runs as user "opendkim". A umask of 007 is required when
# using a local socket with MTAs that access the socket as a non-privileged
# user (for example, Postfix). You may need to add user "postfix" to group

View File

@@ -1,21 +0,0 @@
-- Ignore signatures that do not correspond to the From: domain.
from_domain = odkim.get_fromdomain(ctx)
if from_domain == nil then
return nil
end
n = odkim.get_sigcount(ctx)
if n == nil then
return nil
end
for i = 1, n do
sig = odkim.get_sighandle(ctx, i - 1)
sig_domain = odkim.sig_getdomain(sig)
if from_domain ~= sig_domain then
odkim.sig_ignore(sig)
end
end
return nil

View File

@@ -97,7 +97,9 @@ class PostfixDeployer(Deployer):
server.shell(
name="Validate postfix configuration",
# Extract stderr and quit with error if non-zero
commands=["""bash -c 'w=$(postconf 2>&1 >/dev/null); [[ -z "$w" ]] || { echo "$w"; false; }'"""],
commands=[
"""bash -c 'w=$(postconf 2>&1 >/dev/null); [[ -z "$w" ]] || { echo "$w"; false; }'"""
],
)
self.need_restart = need_restart

View File

@@ -20,7 +20,7 @@ smtpd_tls_key_file={{ config.tls_key_path }}
smtpd_tls_security_level=may
smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level={{ "verify" if config.tls_cert_mode == "acme" else "encrypt" }}
smtp_tls_security_level=verify
# Send SNI extension when connecting to other servers.
# <https://www.postfix.org/postconf.5.html#smtp_tls_servername>
smtp_tls_servername = hostname
@@ -88,6 +88,22 @@ inet_protocols = ipv4
inet_protocols = all
{% endif %}
# Postfix does not try IPv4 and IPv6 connections
# concurrently as of version 3.7.11.
#
# When relay has both A (IPv4) and AAAA (IPv6) records,
# but broken IPv6 connectivity,
# every second message is delayed by the connection timeout
# <https://www.postfix.org/postconf.5.html#smtp_connect_timeout>
# which defaults to 30 seconds. Reducing timeouts is not a solution
# as this will result in a failure to connect to slow servers.
#
# As a workaround we always prefer IPv4 when it is available.
#
# The setting is documented at
# <https://www.postfix.org/postconf.5.html#smtp_address_preference>
smtp_address_preference=ipv4
virtual_transport = lmtp:unix:private/dovecot-lmtp
virtual_mailbox_domains = {{ config.mail_domain }}
lmtp_header_checks = regexp:/etc/postfix/lmtp_header_cleanup

View File
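The comment block in the hunk above explains why `smtp_address_preference=ipv4` is set. The effect can be sketched with a small illustrative model (not Postfix internals): an MTA that alternates address families against a relay with broken IPv6 loses a full connect timeout on every IPv6-first attempt.

```python
SMTP_CONNECT_TIMEOUT = 30  # Postfix default smtp_connect_timeout, in seconds


def connect_delays(attempts, prefer_ipv4=False):
    """Model the extra delay per delivery attempt when the relay has
    A and AAAA records but IPv6 connectivity is broken."""
    delays = []
    for i in range(attempts):
        # Without a preference, every second attempt starts on IPv6
        # and stalls for the full connect timeout before falling back.
        tries_ipv6_first = not prefer_ipv4 and i % 2 == 0
        delays.append(SMTP_CONNECT_TIMEOUT if tries_ipv6_first else 0)
    return delays


assert connect_delays(4) == [30, 0, 30, 0]
assert connect_delays(4, prefer_ipv4=True) == [0, 0, 0, 0]
```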

@@ -86,7 +86,6 @@ filter unix - n n - - lmtp
# Local SMTP server for reinjecting incoming filtered mail
127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 100 smtpd
-o syslog_name=postfix/reinject_incoming
-o smtpd_milters=unix:opendkim/opendkim.sock
# Cleanup `Received` headers for authenticated mail
# to avoid leaking client IP.

View File

@@ -53,13 +53,14 @@ def get_dkim_entry(mail_domain, pre_command, dkim_selector):
print=log_progress,
)
except CalledProcessError:
return
return None, None
dkim_value_raw = f"v=DKIM1;k=rsa;p={dkim_pubkey};s=email;t=s"
dkim_value = '" "'.join(re.findall(".{1,255}", dkim_value_raw))
web_dkim_value = "".join(re.findall(".{1,255}", dkim_value_raw))
name = f"{dkim_selector}._domainkey.{mail_domain}."
return (
f'{dkim_selector}._domainkey.{mail_domain}. TXT "{dkim_value}"',
f'{dkim_selector}._domainkey.{mail_domain}. TXT "{web_dkim_value}"',
f'{name:<40} 3600 IN TXT "{dkim_value}"',
f'{name:<40} 3600 IN TXT "{web_dkim_value}"',
)
@@ -94,7 +95,7 @@ def check_zonefile(zonefile, verbose=True):
if not zf_line.strip() or zf_line.startswith(";"):
continue
print(f"dns-checking {zf_line!r}") if verbose else log_progress("")
zf_domain, zf_typ, zf_value = zf_line.split(maxsplit=2)
zf_domain, _ttl, _in, zf_typ, zf_value = zf_line.split(None, 4)
zf_domain = zf_domain.rstrip(".")
zf_value = zf_value.strip()
query_value = query_dns(zf_typ, zf_domain)

View File
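The `re.findall(".{1,255}")` calls above exist because a single character-string inside a DNS TXT record is limited to 255 bytes; zone files express longer values as adjacent quoted strings, which resolvers concatenate. A standalone sketch of the zone-file variant:

```python
import re


def split_txt_value(value: str) -> str:
    """Split a long TXT value into chunks of at most 255 characters,
    joined as adjacent quoted strings as zone files require."""
    return '" "'.join(re.findall(".{1,255}", value))


# Illustrative oversized DKIM value (a real p= blob is base64).
dkim_value_raw = "v=DKIM1;k=rsa;p=" + "A" * 400 + ";s=email;t=s"
chunks = split_txt_value(dkim_value_raw).split('" "')
assert all(len(c) <= 255 for c in chunks)
# Concatenating the chunks restores the original value.
assert "".join(chunks) == dkim_value_raw
```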

@@ -40,5 +40,5 @@ def dovecot_recalc_quota(user):
#
for line in output.split("\n"):
parts = line.split()
if parts[2] == "STORAGE":
if len(parts) >= 6 and parts[2] == "STORAGE":
return dict(value=int(parts[3]), limit=int(parts[4]), percent=int(parts[5]))

View File
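The added `len(parts) >= 6` guard above protects against short or malformed lines before indexing into the columns. A standalone sketch of the parsing (the sample line below is illustrative, modeled on tabular `doveadm quota` output with quota-root, type, name, value, limit, and percent columns):

```python
def parse_storage_quota(output: str):
    """Return dict(value, limit, percent) for the STORAGE row,
    or None if no well-formed row is present."""
    for line in output.split("\n"):
        parts = line.split()
        # Guard against short/malformed lines before indexing.
        if len(parts) >= 6 and parts[2] == "STORAGE":
            return dict(
                value=int(parts[3]), limit=int(parts[4]), percent=int(parts[5])
            )
    return None


sample = "Quota name Type Value Limit %\nUser quota STORAGE 90 100 90"
assert parse_storage_quota(sample) == dict(value=90, limit=100, percent=90)
# A truncated line no longer raises IndexError, it is simply skipped.
assert parse_storage_quota("User quota STORAGE") is None
```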

@@ -1,8 +1,31 @@
from pyinfra.operations import apt, files, server
import shlex
from pyinfra.operations import apt, server
from cmdeploy.basedeploy import Deployer
def openssl_selfsigned_args(domain, cert_path, key_path, days=36500):
"""Return the openssl argument list for a self-signed certificate.
The certificate uses an EC P-256 key with SAN entries for *domain*,
``www.<domain>`` and ``mta-sts.<domain>``.
"""
return [
"openssl", "req", "-x509",
"-newkey", "ec", "-pkeyopt", "ec_paramgen_curve:P-256",
"-noenc", "-days", str(days),
"-keyout", str(key_path),
"-out", str(cert_path),
"-subj", f"/CN={domain}",
# Mark as end-entity cert so it cannot be used as a CA to sign others.
"-addext", "basicConstraints=critical,CA:FALSE",
"-addext", "extendedKeyUsage=serverAuth,clientAuth",
"-addext",
f"subjectAltName=DNS:{domain},DNS:www.{domain},DNS:mta-sts.{domain}",
]
class SelfSignedTlsDeployer(Deployer):
"""Generates a self-signed TLS certificate for all chatmail endpoints."""
@@ -18,18 +41,13 @@ class SelfSignedTlsDeployer(Deployer):
)
def configure(self):
args = openssl_selfsigned_args(
self.mail_domain, self.cert_path, self.key_path,
)
cmd = shlex.join(args)
server.shell(
name="Generate self-signed TLS certificate if not present",
commands=[
f"[ -f {self.cert_path} ] || openssl req -x509"
f" -newkey ec -pkeyopt ec_paramgen_curve:P-256"
f" -noenc -days 36500"
f" -keyout {self.key_path}"
f" -out {self.cert_path}"
f' -subj "/CN={self.mail_domain}"'
f' -addext "extendedKeyUsage=serverAuth,clientAuth"'
f' -addext "subjectAltName=DNS:{self.mail_domain},DNS:www.{self.mail_domain},DNS:mta-sts.{self.mail_domain}"',
],
commands=[f"[ -f {self.cert_path} ] || {cmd}"],
)
def activate(self):

View File
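Building the command with `shlex.join`, as the refactored `configure()` above does, quotes each argument safely, so a path or subject containing spaces or shell metacharacters cannot break the generated shell command the way the old hand-built f-string could. A minimal demonstration (the argument list here is a shortened, hypothetical stand-in for the real `openssl_selfsigned_args()` output):

```python
import shlex

args = [
    "openssl", "req", "-x509",
    "-subj", "/CN=chat.example.org",
    "-out", "/tmp/my cert.pem",  # note the space in the path
]
cmd = shlex.join(args)
# Arguments with spaces come out quoted, unlike a naive " ".join().
assert cmd == "openssl req -x509 -subj /CN=chat.example.org -out '/tmp/my cert.pem'"
```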

@@ -5,5 +5,5 @@ After=network.target
[Service]
Type=oneshot
User=vmail
ExecStart=/usr/local/lib/chatmaild/venv/bin/chatmail-fsreport /usr/local/lib/chatmaild/chatmail.ini
ExecStart=/usr/local/lib/chatmaild/venv/bin/chatmail-fsreport /usr/local/lib/chatmaild/chatmail.ini

View File

@@ -4,7 +4,7 @@ Description=Chatmail dict proxy for IMAP METADATA
[Service]
ExecStart={execpath} /run/chatmail-metadata/metadata.socket {config_path}
Restart=always
RestartSec=30
RestartSec=5
User=vmail
RuntimeDirectory=chatmail-metadata
UMask=0077

View File

@@ -85,16 +85,26 @@ class SSHExec:
class LocalExec:
def __init__(self, verbose=False, docker=False):
FuncError = FuncError
def __init__(self, verbose=False):
self.verbose = verbose
self.docker = docker
def __call__(self, call, kwargs=None, log_callback=None):
if kwargs is None:
kwargs = {}
return call(**kwargs)
def logged(self, call, kwargs: dict):
title = call.__doc__
if not title:
title = call.__name__
where = "locally"
if self.docker:
if call == remote.rdns.perform_initial_checks:
kwargs["pre_command"] = "docker exec chatmail "
where = "in docker"
if self.verbose:
print(f"Running {where}: {call.__name__}(**{kwargs})")
return call(**kwargs)
print_stderr(f"Running {where}: {title}(**{kwargs})")
return self(call, kwargs, log_callback=print_stderr)
else:
print_stderr(title, end="")
res = self(call, kwargs, log_callback=remote.rshell.log_progress)
print_stderr()
return res

View File

@@ -1,17 +1,18 @@
; Required DNS entries for chatmail servers
zftest.testrun.org. A 135.181.204.127
zftest.testrun.org. AAAA 2a01:4f9:c012:52f4::1
zftest.testrun.org. MX 10 zftest.testrun.org.
_mta-sts.zftest.testrun.org. TXT "v=STSv1; id=202403211706"
mta-sts.zftest.testrun.org. CNAME zftest.testrun.org.
www.zftest.testrun.org. CNAME zftest.testrun.org.
opendkim._domainkey.zftest.testrun.org. TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoYt82CVUyz2ouaqjX2kB+5J80knAyoOU3MGU5aWppmwUwwTvj/oSTSpkc5JMtVTRmKKr8NUDWAL1Yw7dfGqqPHdHfwwjS3BIvDzYx+hzgtz62RnfNgV+/2MAoNpfX7cAFIHdRzEHNtwugc3RDLquqPoupAE3Y2YRw2T5zG5fILh4vwIcJZL5Uq6B92j8wwJqOex" "33n+vm1NKQ9rxo/UsHAmZlJzpooXcG/4igTBxJyJlamVSRR6N7Nul1v//YJb7J6v2o0iPHW6uE0StzKaPPNC2IVosSRFbD9H2oqppltptFSNPlI0E+t0JBWHem6YK7xcugiO3ImMCaaU8g6Jt/wIDAQAB;s=email;t=s"
; Required DNS entries
zftest.testrun.org. 3600 IN A 135.181.204.127
zftest.testrun.org. 3600 IN AAAA 2a01:4f9:c012:52f4::1
zftest.testrun.org. 3600 IN MX 10 zftest.testrun.org.
_mta-sts.zftest.testrun.org. 3600 IN TXT "v=STSv1; id=202403211706"
mta-sts.zftest.testrun.org. 3600 IN CNAME zftest.testrun.org.
www.zftest.testrun.org. 3600 IN CNAME zftest.testrun.org.
opendkim._domainkey.zftest.testrun.org. 3600 IN TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoYt82CVUyz2ouaqjX2kB+5J80knAyoOU3MGU5aWppmwUwwTvj/oSTSpkc5JMtVTRmKKr8NUDWAL1Yw7dfGqqPHdHfwwjS3BIvDzYx+hzgtz62RnfNgV+/2MAoNpfX7cAFIHdRzEHNtwugc3RDLquqPoupAE3Y2YRw2T5zG5fILh4vwIcJZL5Uq6B92j8wwJqOex" "33n+vm1NKQ9rxo/UsHAmZlJzpooXcG/4igTBxJyJlamVSRR6N7Nul1v//YJb7J6v2o0iPHW6uE0StzKaPPNC2IVosSRFbD9H2oqppltptFSNPlI0E+t0JBWHem6YK7xcugiO3ImMCaaU8g6Jt/wIDAQAB;s=email;t=s"
; Recommended DNS entries
_submission._tcp.zftest.testrun.org. SRV 0 1 587 zftest.testrun.org.
_submissions._tcp.zftest.testrun.org. SRV 0 1 465 zftest.testrun.org.
_imap._tcp.zftest.testrun.org. SRV 0 1 143 zftest.testrun.org.
_imaps._tcp.zftest.testrun.org. SRV 0 1 993 zftest.testrun.org.
zftest.testrun.org. CAA 0 issue "letsencrypt.org;accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1371472956"
zftest.testrun.org. TXT "v=spf1 a:zftest.testrun.org ~all"
_dmarc.zftest.testrun.org. TXT "v=DMARC1;p=reject;adkim=s;aspf=s"
_adsp._domainkey.zftest.testrun.org. TXT "dkim=discardable"
zftest.testrun.org. 3600 IN TXT "v=spf1 a ~all"
_dmarc.zftest.testrun.org. 3600 IN TXT "v=DMARC1;p=reject;adkim=s;aspf=s"
zftest.testrun.org. 3600 IN CAA 0 issue "letsencrypt.org;accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1371472956"
_adsp._domainkey.zftest.testrun.org. 3600 IN TXT "dkim=discardable"
_submission._tcp.zftest.testrun.org. 3600 IN SRV 0 1 587 zftest.testrun.org.
_submissions._tcp.zftest.testrun.org. 3600 IN SRV 0 1 465 zftest.testrun.org.
_imap._tcp.zftest.testrun.org. 3600 IN SRV 0 1 143 zftest.testrun.org.
_imaps._tcp.zftest.testrun.org. 3600 IN SRV 0 1 993 zftest.testrun.org.

View File
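Each record above now carries explicit TTL and class fields, so parsing a line takes a five-field split with the record data kept whole, matching the updated `check_zonefile` hunk earlier in this diff:

```python
line = "zftest.testrun.org. 3600 IN MX 10 zftest.testrun.org."
# maxsplit=4 keeps multi-word record data (like MX priority + host) intact.
zf_domain, ttl, rclass, zf_typ, zf_value = line.split(None, 4)
assert zf_domain.rstrip(".") == "zftest.testrun.org"
assert (ttl, rclass, zf_typ) == ("3600", "IN", "MX")
assert zf_value == "10 zftest.testrun.org."
```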

@@ -41,9 +41,9 @@ class TestDC:
def dc_ping_pong():
chat.send_text("ping")
msg = ac2._evtracker.wait_next_incoming_message()
msg.chat.send_text("pong")
ac1._evtracker.wait_next_incoming_message()
msg = ac2.wait_for_incoming_msg()
msg.get_snapshot().chat.send_text("pong")
ac1.wait_for_incoming_msg()
benchmark(dc_ping_pong, 5)
@@ -55,6 +55,6 @@ class TestDC:
for i in range(10):
chat.send_text(f"hello {i}")
for i in range(10):
ac2._evtracker.wait_next_incoming_message()
ac2.wait_for_incoming_msg()
benchmark(dc_send_10_receive_10, 5)
benchmark(dc_send_10_receive_10, 5, cooldown="auto")

View File

@@ -89,7 +89,9 @@ def test_concurrent_logins_same_account(
assert login_results.get()
def test_no_vrfy(chatmail_config):
def test_no_vrfy(cmfactory, chatmail_config):
ac = cmfactory.get_online_account()
addr = ac.get_config("addr")
domain = chatmail_config.mail_domain
s = smtplib.SMTP(domain)
@@ -98,7 +100,7 @@ def test_no_vrfy(chatmail_config):
s.putcmd("vrfy", f"wrongaddress@{chatmail_config.mail_domain}")
result = s.getreply()
print(result)
s.putcmd("vrfy", f"echo@{chatmail_config.mail_domain}")
s.putcmd("vrfy", addr)
result2 = s.getreply()
print(result2)
assert result[0] == result2[0] == 252

View File

@@ -7,13 +7,13 @@ import time
import pytest
from cmdeploy import remote
from cmdeploy.sshexec import SSHExec
from cmdeploy.cmdeploy import get_sshexec
class TestSSHExecutor:
@pytest.fixture(scope="class")
def sshexec(self, sshdomain):
return SSHExec(sshdomain)
return get_sshexec(sshdomain)
def test_ls(self, sshexec):
out = sshexec(call=remote.rdns.shell, kwargs=dict(command="ls"))
@@ -27,6 +27,7 @@ class TestSSHExecutor:
assert res["A"] or res["AAAA"]
def test_logged(self, sshexec, maildomain, capsys):
sshexec.verbose = False
sshexec.logged(
remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=maildomain)
)
@@ -52,6 +53,8 @@ class TestSSHExecutor:
remote.rdns.perform_initial_checks,
kwargs=dict(mail_domain=None),
)
except AssertionError:
pass
except sshexec.FuncError as e:
assert "rdns.py" in str(e)
assert "AssertionError" in str(e)
@@ -68,6 +71,44 @@ class TestSSHExecutor:
assert (now - since_date).total_seconds() < 60 * 60 * 51
def test_dovecot_main_process_matches_installed_binary(sshdomain):
sshexec = get_sshexec(sshdomain)
main_pid = int(
sshexec(
call=remote.rshell.shell,
kwargs=dict(
command="timeout 10 systemctl show -p MainPID --value dovecot.service"
),
).strip()
)
assert main_pid != 0, "dovecot.service MainPID is 0 -- service not running?"
exe = sshexec(
call=remote.rshell.shell,
kwargs=dict(command=f"timeout 10 readlink /proc/{main_pid}/exe"),
).strip()
status_text = sshexec(
call=remote.rshell.shell,
kwargs=dict(
command="timeout 10 systemctl show -p StatusText --value dovecot.service"
),
).strip()
installed_version = sshexec(
call=remote.rshell.shell, kwargs=dict(command="timeout 10 dovecot --version")
).strip()
assert not exe.endswith("(deleted)"), (
f"running dovecot binary was deleted (stale after upgrade): {exe}"
)
expected_status_text = f"v{installed_version}"
assert status_text == expected_status_text or status_text.startswith(
f"{expected_status_text} "
), (
f"dovecot status version mismatch: "
f"StatusText={status_text!r}, installed={installed_version!r}"
)
def test_timezone_env(remote):
for line in remote.iter_output("env"):
print(line)
@@ -83,10 +124,8 @@ def test_remote(remote, imap_or_smtp):
def test_use_two_chatmailservers(cmfactory, maildomain2):
ac1 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.switch_maildomain(maildomain2)
ac2 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.bring_accounts_online()
ac1 = cmfactory.get_online_account()
ac2 = cmfactory.get_online_account(domain=maildomain2)
cmfactory.get_accepted_chat(ac1, ac2)
domain1 = ac1.get_config("addr").split("@")[1]
domain2 = ac2.get_config("addr").split("@")[1]
@@ -146,7 +185,7 @@ def test_reject_missing_dkim(cmsetup, maildata, from_addr):
conn.starttls()
with conn as s:
with pytest.raises(smtplib.SMTPDataError, match="No valid DKIM signature"):
with pytest.raises(smtplib.SMTPDataError, match="No DKIM signature found"):
s.sendmail(from_addr=from_addr, to_addrs=recipient.addr, msg=msg)
@@ -205,24 +244,6 @@ def test_exceed_rate_limit(cmsetup, gencreds, maildata, chatmail_config):
pytest.fail("Rate limit was not exceeded")
@pytest.mark.slow
def test_expunged(remote, chatmail_config):
outdated_days = int(chatmail_config.delete_mails_after) + 1
find_cmds = [
f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/cur/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/new/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/new/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/tmp/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/tmp/*' -mtime +{outdated_days} -type f",
]
outdated_days = int(chatmail_config.delete_large_after) + 1
find_cmds.append(
f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
)
for cmd in find_cmds:
for line in remote.iter_output(cmd):
assert not line
def test_deployed_state(remote):

View File
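The new stale-binary test above relies on a Linux kernel detail: when a running executable is replaced or removed on disk, the `/proc/<pid>/exe` symlink target gets a ` (deleted)` suffix appended. The core check reduces to:

```python
def binary_is_stale(exe_target: str) -> bool:
    """True if a /proc/<pid>/exe readlink result indicates the running
    binary was deleted or replaced on disk (e.g. by a package upgrade)."""
    return exe_target.endswith("(deleted)")


assert binary_is_stale("/usr/sbin/dovecot (deleted)")
assert not binary_is_stale("/usr/sbin/dovecot")
```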

@@ -6,8 +6,8 @@ import imap_tools
import pytest
import requests
from cmdeploy.cmdeploy import get_sshexec
from cmdeploy.remote import rshell
from cmdeploy.sshexec import SSHExec
@pytest.fixture
@@ -27,6 +27,7 @@ class TestMetadataTokens:
def test_set_get_metadata(self, imap_mailbox):
"set and get metadata token for an account"
time.sleep(5) # make sure Metadata service had a chance to restart
client = imap_mailbox.client
client.send(b'a01 SETMETADATA INBOX (/private/devicetoken "1111" )\n')
res = client.readline()
@@ -62,8 +63,8 @@ class TestEndToEndDeltaChat:
chat.send_text("message0")
lp.sec("wait for ac2 to receive message")
msg2 = ac2._evtracker.wait_next_incoming_message()
assert msg2.text == "message0"
msg2 = ac2.wait_for_incoming_msg()
assert msg2.get_snapshot().text == "message0"
def test_exceed_quota(
self, cmfactory, lp, tmpdir, remote, chatmail_config, sshdomain
@@ -91,45 +92,41 @@ class TestEndToEndDeltaChat:
lp.sec(f"filling remote inbox for {user}")
fn = f"7743102289.M843172P2484002.c20,S={quota},W=2398:2,"
path = chatmail_config.mailboxes_dir.joinpath(user, "cur", fn)
sshexec = SSHExec(sshdomain)
sshexec = get_sshexec(sshdomain)
sshexec(call=rshell.write_numbytes, kwargs=dict(path=str(path), num=120))
res = sshexec(call=rshell.dovecot_recalc_quota, kwargs=dict(user=user))
assert res["percent"] >= 100
lp.sec("ac2: check quota is triggered")
starting = True
for line in remote.iter_output("journalctl -n0 -f -u dovecot"):
if starting:
chat.send_text("hello")
starting = False
def send_hello():
chat.send_text("hello")
for line in remote.iter_output(
"journalctl -n1 -f -u dovecot", ready=send_hello
):
if user not in line:
# print(line)
continue
if "quota exceeded" in line:
return
def test_securejoin(self, cmfactory, lp, maildomain2):
ac1 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.switch_maildomain(maildomain2)
ac2 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.bring_accounts_online()
ac1 = cmfactory.get_online_account()
ac2 = cmfactory.get_online_account(domain=maildomain2)
lp.sec("ac1: create QR code and let ac2 scan it, starting the securejoin")
qr = ac1.get_setup_contact_qr()
qr = ac1.get_qr_code()
lp.sec("ac2: start QR-code based setup contact protocol")
ch = ac2.qr_setup_contact(qr)
ch = ac2.secure_join(qr)
assert ch.id >= 10
ac1._evtracker.wait_securejoin_inviter_progress(1000)
ac1.wait_for_securejoin_inviter_success()
def test_dkim_header_stripped(self, cmfactory, maildomain2, lp, imap_mailbox):
"""Test that if a DC address receives a message, it has no
DKIM-Signature and Authentication-Results headers."""
ac1 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.switch_maildomain(maildomain2)
ac2 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.bring_accounts_online()
ac1 = cmfactory.get_online_account()
ac2 = cmfactory.get_online_account(domain=maildomain2)
chat = cmfactory.get_accepted_chat(ac1, imap_mailbox.dc_ac)
chat.send_text("message0")
chat2 = cmfactory.get_accepted_chat(ac2, imap_mailbox.dc_ac)
@@ -146,29 +143,28 @@ class TestEndToEndDeltaChat:
assert "dkim-signature" not in msg.headers
def test_read_receipts_between_instances(self, cmfactory, lp, maildomain2):
ac1 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.switch_maildomain(maildomain2)
ac2 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.bring_accounts_online()
ac1 = cmfactory.get_online_account()
ac2 = cmfactory.get_online_account(domain=maildomain2)
lp.sec("setup encrypted comms between ac1 and ac2 on different instances")
qr = ac1.get_setup_contact_qr()
ch = ac2.qr_setup_contact(qr)
qr = ac1.get_qr_code()
ch = ac2.secure_join(qr)
assert ch.id >= 10
ac1._evtracker.wait_securejoin_inviter_progress(1000)
ac1.wait_for_securejoin_inviter_success()
lp.sec("ac1 sends a message and ac2 marks it as seen")
chat = ac1.create_chat(ac2)
msg = chat.send_text("hi")
m = ac2._evtracker.wait_next_incoming_message()
m = ac2.wait_for_incoming_msg()
m.mark_seen()
# we can only indirectly wait for mark-seen to cause an smtp-error
lp.sec("try to wait for markseen to complete and check error states")
deadline = time.time() + 3.1
while time.time() < deadline:
msgs = m.chat.get_messages()
m_snap = m.get_snapshot()
msgs = m_snap.chat.get_messages()
for msg in msgs:
assert "error" not in m.get_message_info()
assert "error" not in m.get_info()
time.sleep(1)
@@ -180,7 +176,7 @@ def test_hide_senders_ip_address(cmfactory, ssl_context):
chat = cmfactory.get_accepted_chat(user1, user2)
chat.send_text("testing submission header cleanup")
user2._evtracker.wait_next_incoming_message()
user2.wait_for_incoming_msg()
addr = user2.get_config("addr")
host = addr.split("@")[1]
pw = user2.get_config("mail_pw")

View File

@@ -5,7 +5,11 @@ from cmdeploy.cmdeploy import main
def test_status_cmd(chatmail_config, capsys, request):
os.chdir(request.config.invocation_params.dir)
assert main(["status"]) == 0
command = ["status"]
if os.getenv("CHATMAIL_SSH"):
command.append("--ssh-host")
command.append(os.getenv("CHATMAIL_SSH"))
assert main(command) == 0
status_out = capsys.readouterr()
print(status_out.out)

View File

@@ -1,5 +1,5 @@
import imaplib
import io
import ipaddress
import itertools
import os
import random
@@ -15,6 +15,14 @@ from chatmaild.config import read_config
conftestdir = Path(__file__).parent
def _is_ip(domain):
try:
ipaddress.ip_address(domain)
return True
except ValueError:
return False
def pytest_addoption(parser):
parser.addoption(
"--slow", action="store_true", default=False, help="also run slow tests"
@@ -35,17 +43,29 @@ def pytest_runtest_setup(item):
pytest.skip("skipping slow test, use --slow to run")
@pytest.fixture(scope="session")
def chatmail_config(pytestconfig):
current = basedir = Path().resolve()
def _get_chatmail_config():
inipath = os.environ.get("CHATMAIL_INI")
if inipath:
path = Path(inipath).resolve()
return read_config(path), path
current = Path().resolve()
while 1:
path = current.joinpath("chatmail.ini").resolve()
if path.exists():
return read_config(path)
return read_config(path), path
if current == current.parent:
break
current = current.parent
return None, None
@pytest.fixture(scope="session")
def chatmail_config(pytestconfig):
config, path = _get_chatmail_config()
if config:
return config
basedir = Path().resolve()
pytest.skip(f"no chatmail.ini file found in {basedir} or parent dirs")
@@ -73,10 +93,17 @@ def sshdomain2(maildomain2):
def pytest_report_header():
domain = os.environ.get("CHATMAIL_DOMAIN")
if domain:
text = f"chatmail test instance: {domain}"
return ["-" * len(text), text, "-" * len(text)]
config, path = _get_chatmail_config()
domain2 = os.environ.get("CHATMAIL_DOMAIN2", "NOT SET")
domain = config.mail_domain if config else "NOT SET"
path = path if path else "NOT SET"
lines = [
f"chatmail.ini {domain} location: {path}",
f"chatmail2: {domain2}",
]
sep = "-" * max(map(len, lines))
return [sep, *lines, sep]
@pytest.fixture
@@ -91,15 +118,22 @@ def cm_data(request):
@pytest.fixture
def benchmark(request):
def bench(func, num, name=None, reportfunc=None):
def benchmark(request, chatmail_config):
def bench(func, num, name=None, reportfunc=None, cooldown=0.0):
if name is None:
name = func.__name__
if cooldown == "auto":
per_minute = max(chatmail_config.max_user_send_per_minute, 1)
cooldown = chatmail_config.max_user_send_burst_size * 60 / per_minute
durations = []
for i in range(num):
now = time.time()
func()
durations.append(time.time() - now)
if cooldown > 0 and i + 1 < num:
# Keep post-run cooldown out of measured benchmark duration.
time.sleep(cooldown)
durations.sort()
request.config._benchresults[name] = (reportfunc, durations)
@@ -257,6 +291,7 @@ def gencreds(chatmail_config):
def gen(domain=None):
domain = domain if domain else chatmail_config.mail_domain
addr_domain = f"[{domain}]" if _is_ip(domain) else domain
while 1:
num = next(count)
alphanumeric = "abcdefghijklmnopqrstuvwxyz1234567890"
@@ -270,108 +305,161 @@ def gencreds(chatmail_config):
password = "".join(
random.choices(alphanumeric, k=chatmail_config.password_min_length)
)
yield f"{user}@{domain}", f"{password}"
yield f"{user}@{addr_domain}", f"{password}"
return lambda domain=None: next(gen(domain))
#
# Delta Chat testplugin re-use
# Delta Chat RPC-based test support
# use the cmfactory fixture to get chatmail instance accounts
#
from deltachat_rpc_client import DeltaChat, Rpc
class ChatmailTestProcess:
"""Provider for chatmail instance accounts as used by deltachat.testplugin.acfactory"""
def __init__(self, pytestconfig, maildomain, gencreds, chatmail_config):
self.pytestconfig = pytestconfig
self.maildomain = maildomain
assert "." in self.maildomain, maildomain
class ChatmailACFactory:
"""RPC-based account factory for chatmail testing."""
def __init__(self, rpc, maildomain, gencreds, chatmail_config):
self.dc = DeltaChat(rpc)
self.rpc = rpc
self._maildomain = maildomain
self.gencreds = gencreds
self.chatmail_config = chatmail_config
self._addr2files = {}
def get_liveconfig_producer(self):
while 1:
user, password = self.gencreds(self.maildomain)
config = {
"addr": user,
"mail_pw": password,
}
# speed up account configuration
config["mail_server"] = self.maildomain
config["send_server"] = self.maildomain
if self.chatmail_config.tls_cert_mode == "self":
# Accept self-signed TLS certificates
config["imap_certificate_checks"] = "3"
yield config
def _make_transport(self, domain):
"""Build a transport config dict for the given domain."""
addr, password = self.gencreds(domain)
transport = {
"addr": addr,
"password": password,
# Setting server explicitly skips requesting autoconfig XML,
# see https://datatracker.ietf.org/doc/draft-ietf-mailmaint-autoconfig/
"imapServer": domain,
"smtpServer": domain,
}
if self.chatmail_config.tls_cert_mode == "self":
transport["certificateChecks"] = "acceptInvalidCertificates"
return transport
def cache_maybe_retrieve_configured_db_files(self, cache_addr, db_target_path):
pass
def get_online_account(self, domain=None):
"""Create, configure and bring online a single account."""
return self.get_online_accounts(1, domain)[0]
def cache_maybe_store_configured_db_files(self, acc):
pass
def get_online_accounts(self, num, domain=None):
"""Create multiple online accounts in parallel."""
domain = domain or self._maildomain
futures = []
accounts = []
for _ in range(num):
account = self.dc.add_account()
addr, password = self.gencreds(domain)
if _is_ip(domain):
# Use DCLOGIN scheme with explicit server hosts,
# matching how madmail presents its addresses to users.
qr = (
f"dclogin:{addr}"
f"?p={password}&v=1"
f"&ih={domain}&ip=993"
f"&sh={domain}&sp=465"
f"&ic=3&ss=default"
)
future = account.add_transport_from_qr.future(qr)
else:
future = account.add_or_update_transport.future(
self._make_transport(domain)
)
futures.append(future)
# ensure messages stay in INBOX so that they can be
# concurrently fetched via extra IMAP connections during tests
account.set_config("delete_server_after", "10")
accounts.append(account)
for future in futures:
future()
for account in accounts:
account.bring_online()
return accounts
def get_accepted_chat(self, ac1, ac2):
"""Create a 1:1 chat between ac1 and ac2 accepted on both sides."""
ac2.create_chat(ac1)
return ac1.create_chat(ac2)
@pytest.fixture(scope="session")
def rpc(tmp_path_factory):
"""Start a deltachat-rpc-server process for the test session."""
# NB: accounts_dir must NOT already exist as directory --
# core-rust only creates accounts.toml if the dir doesn't exist yet.
accounts_dir = str(tmp_path_factory.mktemp("dc") / "accounts")
rpc = Rpc(accounts_dir=accounts_dir)
rpc.start()
yield rpc
rpc.close()
@pytest.fixture
def cmfactory(request, gencreds, tmpdir, maildomain, chatmail_config):
# cloned from deltachat.testplugin.amfactory
pytest.importorskip("deltachat")
from deltachat.testplugin import ACFactory
testproc = ChatmailTestProcess(
request.config, maildomain, gencreds, chatmail_config
def cmfactory(rpc, gencreds, maildomain, chatmail_config):
"""Return a ChatmailACFactory for creating online Delta Chat accounts."""
return ChatmailACFactory(
rpc=rpc,
maildomain=maildomain,
gencreds=gencreds,
chatmail_config=chatmail_config,
)
class Data:
def read_path(self, path):
return
am = ACFactory(request=request, tmpdir=tmpdir, testprocess=testproc, data=Data())
# Skip upstream's init_imap to prevent extra imap connections not
# needed for relay testing
am._acsetup.init_imap = lambda acc: None
# nb. a bit hacky
# would probably be better if deltachat's test machinery grows native support
def switch_maildomain(maildomain2):
am.testprocess.maildomain = maildomain2
am.switch_maildomain = switch_maildomain
yield am
if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
if testproc.pytestconfig.getoption("--extra-info"):
logfile = io.StringIO()
am.dump_imap_summary(logfile=logfile)
print(logfile.getvalue())
# request.node.add_report_section("call", "imap-server-state", s)
@pytest.fixture
def remote(sshdomain):
return Remote(sshdomain)
r = Remote(sshdomain)
yield r
r.close()
class Remote:
def __init__(self, sshdomain):
self.sshdomain = sshdomain
self._procs = []
def iter_output(self, logcmd=""):
def iter_output(self, logcmd="", ready=None):
getjournal = "journalctl -f" if not logcmd else logcmd
self.popen = subprocess.Popen(
["ssh", f"root@{self.sshdomain}", getjournal],
match self.sshdomain:
case "@local" | "localhost":
command = []
case _:
command = ["ssh", f"root@{self.sshdomain}"]
command.extend(getjournal.split())
popen = subprocess.Popen(
command,
stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
)
while 1:
line = self.popen.stdout.readline()
res = line.decode().strip().lower()
if res:
self._procs.append(popen)
try:
while 1:
line = popen.stdout.readline()
res = line.decode().strip().lower()
if not res:
break
if ready is not None:
ready()
ready = None
yield res
else:
break
finally:
popen.terminate()
popen.wait()
def close(self):
while self._procs:
proc = self._procs.pop()
proc.kill()
proc.wait()
@pytest.fixture

View File

@@ -23,15 +23,19 @@ class TestCmdline:
run = parser.parse_args(["run"])
assert init and run
def test_init_not_overwrite(self, capsys, tmp_path, monkeypatch):
    monkeypatch.delenv("CHATMAIL_INI", raising=False)
    inipath = tmp_path / "chatmail.ini"
    args = ["init", "--config", str(inipath), "chat.example.org"]
    assert main(args) == 0
    capsys.readouterr()
    assert main(args) == 1
    out, err = capsys.readouterr()
    assert "path exists" in out.lower()
    args.insert(1, "--force")
    assert main(args) == 0
    out, err = capsys.readouterr()
    assert "deleting config file" in out.lower()

View File

@@ -3,7 +3,7 @@ from copy import deepcopy
import pytest
from cmdeploy import remote
from cmdeploy.dns import check_full_zone, check_initial_remote_data, parse_zone_records
@pytest.fixture
@@ -60,6 +60,29 @@ def mockdns(request, mockdns_base, mockdns_expected):
return mockdns_base
class TestGetDkimEntry:
def test_dkim_entry_returns_tuple_on_success(self, mockdns):
entry, web_entry = remote.rdns.get_dkim_entry(
"some.domain", "", dkim_selector="opendkim"
)
# May return None,None if openssl not available, but should never crash
if entry is not None:
assert "opendkim._domainkey.some.domain" in entry
assert "opendkim._domainkey.some.domain" in web_entry
def test_dkim_entry_returns_none_tuple_on_error(self, monkeypatch):
"""CalledProcessError must return (None, None), not bare None."""
from subprocess import CalledProcessError
def failing_shell(command, fail_ok=False, print=print):
raise CalledProcessError(1, command)
monkeypatch.setattr(remote.rdns, "shell", failing_shell)
result = remote.rdns.get_dkim_entry("some.domain", "", dkim_selector="opendkim")
assert result == (None, None)
class TestPerformInitialChecks:
def test_perform_initial_checks_ok1(self, mockdns, mockdns_expected):
remote_data = remote.rdns.perform_initial_checks("some.domain")
@@ -102,18 +125,49 @@ class TestPerformInitialChecks:
assert not l
def test_parse_zone_records():
text = """
; This is a comment
some.domain. 3600 IN A 1.1.1.1
; Another comment
www.some.domain. 3600 IN CNAME some.domain.
; Multi-word rdata
some.domain. 3600 IN MX 10 mail.some.domain.
; DKIM record (single line, multi-word TXT rdata)
dkim._domainkey.some.domain. 3600 IN TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG" "9w0BAQEFAAOCAQ8AMIIBCgKCAQEA"
; Another TXT record
_dmarc.some.domain. 3600 IN TXT "v=DMARC1;p=reject"
"""
records = list(parse_zone_records(text))
assert records == [
("some.domain", "3600", "A", "1.1.1.1"),
("www.some.domain", "3600", "CNAME", "some.domain."),
("some.domain", "3600", "MX", "10 mail.some.domain."),
(
"dkim._domainkey.some.domain",
"3600",
"TXT",
'"v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG" "9w0BAQEFAAOCAQ8AMIIBCgKCAQEA"',
),
("_dmarc.some.domain", "3600", "TXT", '"v=DMARC1;p=reject"'),
]
def test_parse_zone_records_invalid_line():
text = "invalid line"
with pytest.raises(ValueError, match="Bad zone record line"):
list(parse_zone_records(text))
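The contract pinned down by these two tests (skip comments and blank lines, keep multi-word rdata intact, raise on anything else) could be satisfied by a parser along these lines; this is a sketch, not the actual ``cmdeploy.dns`` implementation:

```python
def parse_zone_records(text):
    """Yield (name, ttl, type, rdata) tuples from BIND-style zone text.

    Lines starting with ';' and blank lines are skipped; anything else
    must have at least name, ttl, 'IN', type, and rdata fields.
    """
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):
            continue
        # maxsplit=4 keeps multi-word rdata (MX priority, TXT chunks) in one field
        parts = line.split(maxsplit=4)
        if len(parts) < 5 or parts[2] != "IN":
            raise ValueError(f"Bad zone record line: {line!r}")
        name, ttl, _class, rtype, rdata = parts
        yield name.rstrip("."), ttl, rtype, rdata
```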
def parse_zonefile_into_dict(zonefile, mockdns_base, only_required=False):
    if only_required:
        zonefile = zonefile.split("; Recommended")[0]
    for name, ttl, rtype, rdata in parse_zone_records(zonefile):
        mockdns_base.setdefault(rtype, {})[name] = rdata
class MockSSHExec:

View File

@@ -0,0 +1,238 @@
from contextlib import nullcontext
from types import SimpleNamespace
import pytest
from pyinfra.facts.deb import DebPackages
from cmdeploy.dovecot import deployer as dovecot_deployer
def make_host(*fact_pairs):
"""Build a mock host; get_fact(cls) dispatches to the provided facts mapping.
Args:
*fact_pairs: tuples of (fact_class, fact_value) to register
Returns:
SimpleNamespace with get_fact that raises a clear error if an
unexpected fact type is requested.
"""
facts = dict(fact_pairs)
def get_fact(cls):
if cls not in facts:
registered = ", ".join(c.__name__ for c in facts)
raise LookupError(
f"unexpected get_fact({cls.__name__}); "
f"only registered: {registered}"
)
return facts[cls]
return SimpleNamespace(get_fact=get_fact)
@pytest.fixture
def deployer():
return dovecot_deployer.DovecotDeployer(
SimpleNamespace(mail_domain="chat.example.org"),
disable_mail=False,
)
@pytest.fixture
def patch_blocked(monkeypatch):
monkeypatch.setattr(dovecot_deployer, "blocked_service_startup", nullcontext)
@pytest.fixture
def mock_files_put(monkeypatch):
monkeypatch.setattr(
dovecot_deployer.files,
"put",
lambda **kwargs: SimpleNamespace(changed=False),
)
@pytest.fixture
def track_shell(monkeypatch):
calls = []
monkeypatch.setattr(
dovecot_deployer.server,
"shell",
lambda **kwargs: calls.append(kwargs) or SimpleNamespace(changed=False),
)
return calls
def test_download_dovecot_package_skips_epoch_matched_install(monkeypatch):
epoch_version = dovecot_deployer.DOVECOT_PACKAGE_VERSION
downloads = []
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host((DebPackages, {"dovecot-core": [epoch_version]})),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deb, changed = dovecot_deployer._download_dovecot_package("core", "amd64")
assert deb is None, f"expected no deb path when version matches, got {deb!r}"
assert changed is False, "should not flag changed when version already installed"
assert downloads == [], "should not download when version already installed"
def test_download_dovecot_package_uses_archive_version_for_url_and_filename(
monkeypatch,
):
downloads = []
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host((DebPackages, {})),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deb, changed = dovecot_deployer._download_dovecot_package("core", "amd64")
archive_version = dovecot_deployer.DOVECOT_ARCHIVE_VERSION.replace("+", "%2B")
expected_deb = f"/root/dovecot-core_{archive_version}_amd64.deb"
# Verify the returned path uses archive version, not package version (with epoch)
assert changed is True, "should flag changed when package not yet installed"
assert deb == expected_deb, f"deb path mismatch: {deb!r} != {expected_deb!r}"
assert dovecot_deployer.DOVECOT_PACKAGE_VERSION not in deb, (
f"deb path should use archive version (no epoch), got {deb!r}"
)
assert len(downloads) == 1, "files.download should be called exactly once"
def test_install_skips_dpkg_path_when_epoch_matched_packages_present(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host(
(
dovecot_deployer.DebPackages,
{
"dovecot-core": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
"dovecot-imapd": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
"dovecot-lmtpd": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
},
),
(dovecot_deployer.Arch, "x86_64"),
),
)
downloads = []
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deployer.install()
assert downloads == [], "should not download when all packages epoch-matched"
assert track_shell == [], "should not run dpkg when all packages epoch-matched"
assert deployer.need_restart is False, (
"need_restart should be False when nothing changed"
)
def test_install_unsupported_arch_falls_back_to_apt(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
# For unsupported architectures, all fact lookups return the arch string.
monkeypatch.setattr(
dovecot_deployer,
"host",
SimpleNamespace(get_fact=lambda cls: "riscv64"),
)
apt_calls = []
# Mirrors apt.packages() return value: OperationMeta with .changed property.
# Only lmtpd triggers a change to verify |= accumulation of changed flags.
def fake_apt(**kwargs):
apt_calls.append(kwargs)
changed = "lmtpd" in kwargs["packages"][0]
return SimpleNamespace(changed=changed)
monkeypatch.setattr(dovecot_deployer.apt, "packages", fake_apt)
deployer.install()
actual_pkgs = [c["packages"] for c in apt_calls]
assert actual_pkgs == [["dovecot-core"], ["dovecot-imapd"], ["dovecot-lmtpd"]], (
f"expected apt install of core/imapd/lmtpd, got {actual_pkgs}"
)
assert track_shell == [], "should not run dpkg for unsupported arch"
assert deployer.need_restart is True, (
"need_restart should be True when apt installed a package"
)
def test_install_runs_dpkg_when_packages_need_download(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host(
(dovecot_deployer.DebPackages, {}),
(dovecot_deployer.Arch, "x86_64"),
),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: SimpleNamespace(changed=True),
)
deployer.install()
assert len(track_shell) == 1, (
f"expected one server.shell() call for dpkg install, got {len(track_shell)}"
)
cmds = track_shell[0]["commands"]
assert len(cmds) == 3, f"expected 3 dpkg/apt commands, got: {cmds}"
assert cmds[0].startswith("dpkg --force-confdef --force-confold -i ")
assert "apt-get -y --fix-broken install" in cmds[1]
assert cmds[2].startswith("dpkg --force-confdef --force-confold -i ")
assert deployer.need_restart is True, (
"need_restart should be True after dpkg install"
)
def test_pick_url_falls_back_on_primary_error(monkeypatch):
def raise_error(req, timeout):
raise OSError("connection timeout")
monkeypatch.setattr(dovecot_deployer.urllib.request, "urlopen", raise_error)
result = dovecot_deployer._pick_url("http://primary", "http://fallback")
assert result == "http://fallback", (
f"should fall back when primary fails, got {result!r}"
)

View File

@@ -0,0 +1,78 @@
"""Functional tests for tls_external_cert_and_key option."""
import json
import chatmaild.newemail
import pytest
from chatmaild.config import read_config, write_initial_config
def make_external_config(tmp_path, cert_key=None):
inipath = tmp_path / "chatmail.ini"
overrides = {}
if cert_key is not None:
overrides["tls_external_cert_and_key"] = cert_key
write_initial_config(inipath, "chat.example.org", overrides=overrides)
return inipath
def test_external_tls_config_reads_paths(tmp_path):
inipath = make_external_config(
tmp_path,
cert_key=(
"/etc/letsencrypt/live/chat.example.org/fullchain.pem"
" /etc/letsencrypt/live/chat.example.org/privkey.pem"
),
)
config = read_config(inipath)
assert config.tls_cert_mode == "external"
assert (
config.tls_cert_path == "/etc/letsencrypt/live/chat.example.org/fullchain.pem"
)
assert config.tls_key_path == "/etc/letsencrypt/live/chat.example.org/privkey.pem"
def test_external_tls_missing_option_uses_acme(tmp_path):
config = read_config(make_external_config(tmp_path))
assert config.tls_cert_mode == "acme"
def test_external_tls_bad_format_raises(tmp_path):
inipath = make_external_config(tmp_path, cert_key="/only/one/path.pem")
with pytest.raises(ValueError, match="two space-separated"):
read_config(inipath)
def test_external_tls_three_paths_raises(tmp_path):
inipath = make_external_config(tmp_path, cert_key="/a /b /c")
with pytest.raises(ValueError, match="two space-separated"):
read_config(inipath)
def test_external_tls_no_dclogin_url(tmp_path, capsys, monkeypatch):
inipath = make_external_config(
tmp_path, cert_key="/certs/fullchain.pem /certs/privkey.pem"
)
monkeypatch.setattr(chatmaild.newemail, "CONFIG_PATH", str(inipath))
chatmaild.newemail.print_new_account()
out, _ = capsys.readouterr()
lines = out.split("\n")
dic = json.loads(lines[2])
assert "dclogin_url" not in dic
def test_external_tls_selects_correct_deployer(tmp_path):
from cmdeploy.deployers import get_tls_deployer
from cmdeploy.external.deployer import ExternalTlsDeployer
from cmdeploy.selfsigned.deployer import SelfSignedTlsDeployer
inipath = make_external_config(
tmp_path, cert_key="/certs/fullchain.pem /certs/privkey.pem"
)
config = read_config(inipath)
deployer = get_tls_deployer(config, "chat.example.org")
assert isinstance(deployer, ExternalTlsDeployer)
assert not isinstance(deployer, SelfSignedTlsDeployer)
assert deployer.cert_path == "/certs/fullchain.pem"
assert deployer.key_path == "/certs/privkey.pem"

View File

@@ -0,0 +1,68 @@
from unittest.mock import patch
from cmdeploy.remote.rshell import dovecot_recalc_quota
def test_dovecot_recalc_quota_normal_output():
"""Normal doveadm output returns parsed dict."""
normal_output = (
"Quota name Type Value Limit %\n"
"User quota STORAGE 5 102400 0\n"
"User quota MESSAGE 2 - 0\n"
)
with patch("cmdeploy.remote.rshell.shell", return_value=normal_output):
result = dovecot_recalc_quota("user@example.org")
# shell is called twice (recalc + get), patch returns same for both
assert result == {"value": 5, "limit": 102400, "percent": 0}
def test_dovecot_recalc_quota_empty_output():
"""Empty doveadm output (trailing newline) must not IndexError."""
call_count = [0]
def mock_shell(cmd):
call_count[0] += 1
if "recalc" in cmd:
return ""
# quota get returns only empty lines
return "\n\n"
with patch("cmdeploy.remote.rshell.shell", side_effect=mock_shell):
result = dovecot_recalc_quota("user@example.org")
assert result is None
def test_dovecot_recalc_quota_malformed_output():
"""Malformed output with too few columns must not crash."""
call_count = [0]
def mock_shell(cmd):
call_count[0] += 1
if "recalc" in cmd:
return ""
# partial line, fewer than 6 parts
return "Quota name\nUser quota STORAGE\n"
with patch("cmdeploy.remote.rshell.shell", side_effect=mock_shell):
result = dovecot_recalc_quota("user@example.org")
assert result is None
def test_dovecot_recalc_quota_header_only():
"""Only header line, no data rows."""
call_count = [0]
def mock_shell(cmd):
call_count[0] += 1
if "recalc" in cmd:
return ""
return "Quota name Type Value Limit %\n"
with patch("cmdeploy.remote.rshell.shell", side_effect=mock_shell):
result = dovecot_recalc_quota("user@example.org")
assert result is None
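The parsing behavior these tests pin down (read the STORAGE row, return None for empty, malformed, or header-only output) could be implemented roughly like this; a sketch only, since the real ``dovecot_recalc_quota`` also runs ``doveadm quota recalc`` over SSH first:

```python
def parse_quota_output(output):
    """Parse `doveadm quota get` output into {'value', 'limit', 'percent'}.

    Returns None when no well-formed STORAGE data row is present.
    """
    for line in output.splitlines():
        parts = line.split()
        # Expected data row: "User quota STORAGE <value> <limit> <percent>"
        if len(parts) >= 6 and parts[2] == "STORAGE":
            value, limit, percent = parts[3], parts[4], parts[5]
            return {
                "value": int(value),
                # doveadm prints "-" for unlimited quotas
                "limit": int(limit) if limit != "-" else None,
                "percent": int(percent),
            }
    return None
```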

View File

@@ -198,6 +198,44 @@ and all other relays will accept connections from it
without requiring certificate verification.
This is useful for experimental setups and testing.
.. _external-tls:
Running a relay with externally managed certificates
-----------------------------------------------------
If you already have a TLS certificate manager
(e.g. Traefik, certbot, or another ACME client)
running on the deployment server,
you can configure the relay to use those certificates
instead of the built-in ``acmetool``.
Set the following in ``chatmail.ini``::
tls_external_cert_and_key = /path/to/fullchain.pem /path/to/privkey.pem
The paths must point to certificate and key files
on the deployment server.
During ``cmdeploy run``, these paths are written into
the Postfix, Dovecot, and Nginx configurations.
No certificate files are transferred from the build machine —
they must already exist on the server,
managed by your external certificate tool.
The deploy will verify that both files exist on the server.
``acmetool`` is **not** installed or run in this mode.
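The option must contain exactly two space-separated paths, as the functional tests for ``tls_external_cert_and_key`` check. A minimal sketch of the validation rule (illustrative, not the actual ``read_config`` code):

```python
def parse_cert_and_key(value):
    """Split a tls_external_cert_and_key value into (cert_path, key_path)."""
    parts = value.split()
    if len(parts) != 2:
        raise ValueError(
            "tls_external_cert_and_key must contain two space-separated paths"
        )
    return parts[0], parts[1]
```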
.. note::
You are responsible for certificate renewal.
When the certificate file changes on disk,
all relay services pick up the new certificate automatically
via a systemd path watcher installed during deploy.
The watcher uses inotify, which does not cross bind-mount boundaries.
If your certificate files are reached through a bind mount,
you must trigger the reload explicitly after renewal::
systemctl start tls-cert-reload.service
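The watcher behind this reload can be sketched as a systemd path/service
pair; the watched path and reload command below are illustrative
assumptions, the actual units are installed by ``cmdeploy``::

    # tls-cert-reload.path (sketch): fires when the certificate changes on disk
    [Path]
    PathChanged=/path/to/fullchain.pem

    [Install]
    WantedBy=multi-user.target

    # tls-cert-reload.service (sketch): reloads the TLS consumers
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/systemctl reload postfix dovecot nginx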
Migrating to a new build machine
----------------------------------

View File

@@ -102,17 +102,19 @@ short overview of ``chatmaild`` services:
Apple/Google/Huawei.
- `chatmail-expire <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/expire.py>`_
deletes entire mailboxes of users who have not logged in
for longer than ``delete_inactive_users_after`` days.
The timeframe can be configured in ``chatmail.ini``.
- `chatmail-quota-expire <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/quota_expire.py>`_
is called by Dovecot's ``quota_warning`` mechanism when a
user reaches 90% of their mailbox quota.
It removes the largest and oldest messages
until usage drops below 80% of the quota.
- `lastlogin <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/lastlogin.py>`_
is contacted by Dovecot when a user logs in and stores the date of
the login.
- `metrics <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/metrics.py>`_
collects some metrics and displays them at
``https://example.org/metrics``.
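The quota-expire cleanup described above (delete the largest and oldest messages until usage drops below 80% of quota) can be sketched as follows; illustrative Python only, the real ``quota_expire.py`` operates on maildir files and is triggered by Dovecot, and the exact size/age ordering is an assumption here:

```python
def select_messages_to_delete(messages, quota_limit, target_ratio=0.8):
    """Pick messages to delete until usage drops below target_ratio of quota.

    messages: list of (size_bytes, mtime) tuples currently in the mailbox.
    Returns the subset to delete, preferring largest messages first and
    breaking ties by age (oldest first).
    """
    usage = sum(size for size, _ in messages)
    target = quota_limit * target_ratio
    candidates = sorted(messages, key=lambda m: (-m[0], m[1]))
    to_delete = []
    for size, mtime in candidates:
        if usage <= target:
            break
        to_delete.append((size, mtime))
        usage -= size
    return to_delete
```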
``www/``
~~~~~~~~~
@@ -142,11 +144,9 @@ Chatmail relay dependency diagram
nginx-internal --- autoconfig.xml;
certs-nginx[("`TLS certs
/var/lib/acme`")] --> nginx-internal;
systemd-timer --- chatmail-metrics;
systemd-timer --- acmetool;
systemd-timer --- chatmail-expire-daily;
systemd-timer --- chatmail-expire-inactive;
systemd-timer --- chatmail-fsreport-daily;
chatmail-metrics --- website;
acmetool --> certs[("`TLS certs
/var/lib/acme`")];
nginx-external --- |993|dovecot;
@@ -162,9 +162,11 @@ Chatmail relay dependency diagram
/home/vmail/.../user"];
dovecot --- |lastlogin.socket|lastlogin;
dovecot --- chatmail-metadata;
dovecot --- |quota-warning|chatmail-quota-expire;
chatmail-quota-expire --- maildir;
lastlogin --- maildir;
doveauth --- maildir;
chatmail-expire-daily --- maildir;
chatmail-expire-inactive --- maildir;
chatmail-fsreport-daily --- maildir;
chatmail-metadata --- iroh-relay;
chatmail-metadata --- |encrypted device token| notifications.delta.chat;
@@ -308,6 +310,11 @@ When providing a TLS certificate to your chatmail relay server, make
sure to provide the full certificate chain and not just the last
certificate.
If you use an external certificate manager (e.g. Traefik or certbot),
set ``tls_external_cert_and_key`` in ``chatmail.ini``
to provide the certificate and key paths.
See :ref:`external-tls` for details.
If you are running an Exim server and don't see incoming connections
from a chatmail relay server in the logs, make sure ``smtp_no_mail`` log
item is enabled in the config with ``log_selector = +smtp_no_mail``. By