Compare commits

...

13 Commits

Author SHA1 Message Date
holger krekel
fb80f23cfd feat: Automatic per-user quota-preservation.
Replace daily timer-based message expire script
with Dovecot quota-warning-triggered cleanup.
When a user reaches 90% of their mailbox quota,
Dovecot calls the new script, which removes the largest and oldest messages
until usage drops below 80%.

The daily `chatmail-expire` service now only
handles deletion of inactive user mailboxes.
2026-04-18 02:04:36 +02:00
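For context, the quota-warning trigger that a setup like this relies on is wired up through Dovecot's `quota_warning` setting and a script service. The fragment below is an illustrative sketch only: the percentage, service name, and listener name are assumptions, not copied from this repository (only the `chatmail-quota-expire` script name appears in the commits):

```
plugin {
  quota = maildir:User quota
  # call the "quota-warn" service with args "90 %u" when usage crosses 90%
  quota_warning = storage=90%% quota-warn 90 %u
}

service quota-warn {
  executable = script /usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire
  user = vmail
  unix_listener quota-warn {
    user = vmail
  }
}
```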
link2xt
0aa08b7413 feat(dovecot): disable fsync for LMTP and IMAP services
This is aimed at reducing SSD wear.
SSDs wear out because of writes,
according to <https://superuser.com/a/440219/1777696>,
so anything that reduces writes should help.

For online users, the Maildir format we use
first stores each message in new/,
then moves it to cur/, and may even delete it
immediately afterwards for users with a single device
or for bots. Syncing all these changes to disk
is unnecessary and wears out SSDs.
2026-04-17 19:23:28 +00:00
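In Dovecot this behavior is controlled by the `mail_fsync` setting, which can be scoped per protocol. A minimal sketch of what "disable fsync for LMTP and IMAP" could look like in configuration (illustrative, not necessarily the exact change in this commit):

```
protocol lmtp {
  mail_fsync = never
}
protocol imap {
  mail_fsync = never
}
```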
holger krekel
14dfabf2ff generate compliant IP-address email addresses 2026-04-17 14:40:52 +02:00
holger krekel
0a77b3339b ci: ensure consistent checkout and fix cross-relay test typo 2026-04-17 14:40:52 +02:00
holger krekel
001d8c80fc feat: re-use cmlxc workflow from chatmail/cmlxc to perform testing 2026-04-17 14:40:52 +02:00
j4n
1e376f7945 fix(cmdeploy): explicitly install resolvconf
Since ff541b8 introduced APT::Install-Recommends "false", we need to
explicitly install resolvconf. Fixes DNS breakage caused by apt.upgrade
with auto_remove=True purging resolvconf as an orphan and removing
'nameserver 127.0.0.1' from /etc/resolv.conf, which pointed to the local
unbound. As a consequence, DNS resolution broke and filtermail-incoming
exited because it could not find any resolvers.
2026-04-17 10:08:39 +02:00
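The failure mode above boils down to /etc/resolv.conf losing its `nameserver 127.0.0.1` line once resolvconf is purged. A small hypothetical helper (not part of the repository) that detects this broken state:

```python
# Hypothetical helper illustrating the breakage described above:
# DNS only works while resolv.conf points at the local unbound.
def points_at_local_resolver(resolv_conf_text: str) -> bool:
    """Return True if any nameserver line targets 127.0.0.1."""
    for line in resolv_conf_text.splitlines():
        # a valid directive line looks like: "nameserver 127.0.0.1"
        if line.split()[:2] == ["nameserver", "127.0.0.1"]:
            return True
    return False
```

Typical use would be `points_at_local_resolver(Path("/etc/resolv.conf").read_text())`.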
j4n
1ae92e0639 fix(cmdeploy/dovecot): detect stale dovecot binary and force restart in activate()
When a previous deploy installed dovecot packages but the restart was
blocked (policy-rc.d) or the deploy aborted before activate(), the next
deploy sees the correct package version already installed and skips
restart. Extend activate() to check /proc/MainPID/exe for "(deleted)"
before the restart decision.
2026-04-16 15:29:04 +02:00
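The stale-binary check relies on a Linux kernel detail: once the on-disk file behind a running executable is unlinked or replaced, `/proc/<pid>/exe` resolves to a target ending in " (deleted)". A Python rendering of that check (a hypothetical illustration; the deployer itself does this with a systemctl/readlink shell one-liner):

```python
import os

def exe_is_deleted(pid: int) -> bool:
    """True if /proc/<pid>/exe points at a replaced or removed binary."""
    try:
        target = os.readlink(f"/proc/{pid}/exe")
    except OSError:  # process gone, not Linux, or no permission
        return False
    # the kernel appends " (deleted)" once the backing file is gone
    return target.endswith(" (deleted)")
```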
Jagoda Estera Ślązak
56386c231b refactor: Rename filtermail_http_port to filtermail_http_port_incoming (#921)
Since the HTTP port will be used for MTA-to-MTA traffic,
it should be suffixed with "incoming" for consistency.

This will also make things clearer if we decide to
introduce a client-relay HTTP channel in the future.

Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-04-16 14:37:00 +02:00
j4n
2bdfecff72 cmdeploy: consolidate container detection into is_in_container() helper 2026-04-15 16:33:52 +02:00
j4n
cef739e3b3 cmdeploy/sshexec: remove dead @docker SSH host
@docker is no longer needed because we use @local inside the container now.
2026-04-15 16:33:52 +02:00
j4n
3d128d3c64 test: add dovecot deployer checks
Offline tests (test_dovecot_deployer.py, 5 tests):
- skips_epoch_matched_install: core epoch bug regression
- uses_archive_version_for_url_and_filename: epoch must not leak into URLs
- skips_dpkg_path_when_epoch_matched: end-to-end no-op deploy path
- unsupported_arch_falls_back_to_apt: integrated apt fallback with
  mixed changed results to verify |= accumulation
- pick_url_falls_back_on_primary_error: URL failover

Online test (test_1_basic.py):
- dovecot_main_process_matches_installed_binary: stale-binary
  regression guard: checks /proc/PID/exe is not deleted and
  status text matches dovecot --version
2026-04-15 15:46:03 +02:00
j4n
79f68342f4 fix: dovecot epoch version and stale-binary handling
Restart dovecot after package replacement even when `policy-rc.d` blocks
package-triggered restarts, and avoid reinstalling already-correct packages.

Adds proper version separation for dovecot packages:
- Split DOVECOT_VERSION into DOVECOT_ARCHIVE_VERSION (for URLs/filenames)
  and DOVECOT_PACKAGE_VERSION (epoch-prefixed for dpkg matching).
- Update _download_dovecot_package() to return (path, changed) tuple
  so install() can track whether packages triggered restart intent.
- Use self.need_restart |= changed consistently throughout deployer.
- Move self.need_restart = True inside `if debs:` block -- previously
  the apt pin file write unconditionally forced a restart every deploy.
- Comment on dpkg retry pattern (first dpkg may fail on missing deps,
  apt-get --fix-broken resolves, then dpkg retries).

Authored-by: Alex V. <119082209+Retengart@users.noreply.github.com>

2026-04-15 15:46:03 +02:00
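The epoch split can be illustrated with a short sketch. The two constants match what this commit introduces; the helper functions are hypothetical simplifications of the deployer's logic, shown only to make the URL-versus-dpkg distinction concrete:

```python
# Debian archive filenames/URLs never carry the epoch...
DOVECOT_ARCHIVE_VERSION = "2.3.21+dfsg1-3"
# ...but dpkg reports and matches the epoch-prefixed form.
DOVECOT_PACKAGE_VERSION = f"1:{DOVECOT_ARCHIVE_VERSION}"

def deb_filename(package: str, arch: str) -> str:
    # e.g. dovecot-core_2.3.21+dfsg1-3_amd64.deb -- no epoch in the name
    return f"dovecot-{package}_{DOVECOT_ARCHIVE_VERSION}_{arch}.deb"

def install_needed(installed_version: str) -> bool:
    # compare against the epoch-prefixed form, otherwise an up-to-date
    # install looks "different" and gets reinstalled on every deploy
    return installed_version != DOVECOT_PACKAGE_VERSION
```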
Alexandre Gauthier
54863453c2 fix(cmdeploy): Set permissions on dovecot pin
Ensure the preferences.d snippet that pins dovecot packages to block
Debian dist-upgrades is owned by root:root and has 644 permissions.

Files in this directory are generally expected to be world-readable so that unprivileged operations, such as apt-get in simulation mode, keep working. Making them non-world-readable breaks such usages.
2026-04-10 15:52:49 +02:00
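For reference, an apt preferences pin of the kind being fixed looks roughly like this. Only the `Pin-Priority: -1` value and the root:root/644 requirement come from the commits; the `Package`/`Pin` stanza below is an illustrative assumption:

```
# /etc/apt/preferences.d/pin-dovecot  (must be root:root, mode 644)
Package: dovecot-*
Pin: release *
Pin-Priority: -1
```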
25 changed files with 726 additions and 336 deletions

View File

@@ -1,15 +1,26 @@
name: CI
name: Run unit-tests and container-based deploy+test verification
on:
pull_request:
# Triggers when a PR is merged into main or a direct push occurs
push:
branches: [ "main" ]
# Triggers for any PR (and its subsequent commits) targeting the main branch
pull_request:
branches: [ "main" ]
# Newest push wins: Prevents multiple runs from clashing and wasting runner efforts
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
tox:
name: isolated chatmaild tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
# Checkout pull request HEAD commit instead of merge commit
# Otherwise `test_deployed_state` will be unhappy.
with:
@@ -24,7 +35,9 @@ jobs:
name: deploy-chatmail tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: initenv
run: scripts/initenv.sh
@@ -38,5 +51,23 @@ jobs:
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
# all other cmdeploy commands require a staging server
# see https://github.com/deltachat/chatmail/issues/100
lxc-test:
name: LXC deploy and test
uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.10.0
with:
cmlxc_commands: |
cmlxc init
# single cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo cm0
cmlxc -v test-mini cm0
cmlxc -v test-cmdeploy cm0
# cross cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo --ipv4-only cm1
cmlxc -v test-cmdeploy cm0 cm1
# cross cmdeploy/madmail relay tests
cmlxc -v deploy-madmail mad0
cmlxc -v test-cmdeploy cm0 mad0
cmlxc -v test-mini cm0 mad0
cmlxc -v test-mini mad0 cm0

View File

@@ -1,104 +0,0 @@
name: deploy on staging-ipv4.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging-ipv4.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging-ipv4.testrun.org
url: https://staging-ipv4.testrun.org/
concurrency: staging-ipv4.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging-ipv4.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging-ipv4.testrun.org:/var/lib/acme acme-ipv4 || true
rsync -avz root@staging-ipv4.testrun.org:/etc/dkimkeys dkimkeys-ipv4 || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys-ipv4/dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys-ipv4 root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme-ipv4/acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme-ipv4 root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging-ipv4.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_IPV4_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging-ipv4.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme-ipv4/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys-ipv4/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging-ipv4.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging-ipv4.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown root:root -R /var/lib/acme || true
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- name: setup dependencies
run: |
ssh root@staging-ipv4.testrun.org apt update
ssh root@staging-ipv4.testrun.org apt install -y git python3.11-venv python3-dev gcc
ssh root@staging-ipv4.testrun.org git clone https://github.com/chatmail/relay
ssh root@staging-ipv4.testrun.org "cd relay && git checkout " ${{ github.head_ref }}
ssh root@staging-ipv4.testrun.org "cd relay && scripts/initenv.sh"
- name: initialize config
run: |
ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy init staging-ipv4.testrun.org"
ssh root@staging-ipv4.testrun.org "sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' relay/chatmail.ini"
ssh root@staging-ipv4.testrun.org "sed -i 's/#\s*mtail_address/mtail_address/' relay/chatmail.ini"
- run: ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy run --verbose --skip-dns-check --ssh-host localhost"
- name: set DNS entries
run: |
ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy dns --zonefile staging-generated.zone --ssh-host localhost"
ssh root@staging-ipv4.testrun.org cat relay/staging-generated.zone >> .github/workflows/staging-ipv4.testrun.org-default.zone
cat .github/workflows/staging-ipv4.testrun.org-default.zone
scp .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: ssh root@staging-ipv4.testrun.org "cd relay && CHATMAIL_DOMAIN2=ci-chatmail.testrun.org scripts/cmdeploy test --slow --ssh-host localhost"
- name: cmdeploy dns
run: ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy dns -v --ssh-host localhost"

View File

@@ -1,97 +0,0 @@
name: deploy on staging2.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging2.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging2.testrun.org
url: https://staging2.testrun.org/
concurrency: staging2.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging2.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging2.testrun.org:/var/lib/acme . || true
rsync -avz root@staging2.testrun.org:/etc/dkimkeys . || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging2.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging2.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging2.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging2.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org chown root:root -R /var/lib/acme || true
- name: add hpk42 key to staging server
run: ssh root@staging2.testrun.org 'curl -s https://github.com/hpk42.keys >> .ssh/authorized_keys'
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- run: |
cmdeploy init staging2.testrun.org
sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
- run: cmdeploy run --verbose --skip-dns-check
- name: set DNS entries
run: |
cmdeploy dns --zonefile staging-generated.zone --verbose
cat staging-generated.zone >> .github/workflows/staging.testrun.org-default.zone
cat .github/workflows/staging.testrun.org-default.zone
scp .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
- name: cmdeploy dns
run: cmdeploy dns -v

View File

@@ -1,5 +1,24 @@
# Changelog for chatmail deployment
## Unreleased
### Features
- Automated per-user quota-keeping.
Replace daily timer-based message expire script
with Dovecot quota-warning-triggered cleanup (`chatmail-quota-expire`).
When a user reaches 90% of their mailbox quota
Dovecot calls the new script which removes the largest and oldest messages
until usage drops below 80%.
The daily `chatmail-expire` timer now only handles deletion
of inactive user mailboxes.
After upgrading, run the following once to clean up
mailboxes that are already over quota::
/usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire \
400 /home/vmail/mail/YOURDOMAIN --sweep
## 1.9.0 2025-12-18
### Documentation

View File

@@ -21,7 +21,8 @@ where = ['src']
[project.scripts]
doveauth = "chatmaild.doveauth:main"
chatmail-metadata = "chatmaild.metadata:main"
chatmail-expire = "chatmaild.expire:main"
chatmail-expire = "chatmaild.expire_inactive_users:main"
chatmail-quota-expire = "chatmaild.quota_expire:main"
chatmail-fsreport = "chatmaild.fsreport:main"
lastlogin = "chatmaild.lastlogin:main"
turnserver = "chatmaild.turnserver:main"

View File

@@ -25,8 +25,6 @@ class Config:
self.max_user_send_burst_size = int(params.get("max_user_send_burst_size", 10))
self.max_mailbox_size = params["max_mailbox_size"]
self.max_message_size = int(params.get("max_message_size", "31457280"))
self.delete_mails_after = params["delete_mails_after"]
self.delete_large_after = params["delete_large_after"]
self.delete_inactive_users_after = int(params["delete_inactive_users_after"])
self.username_min_length = int(params["username_min_length"])
self.username_max_length = int(params["username_max_length"])
@@ -38,7 +36,9 @@ class Config:
self.filtermail_smtp_port_incoming = int(
params.get("filtermail_smtp_port_incoming", "10081")
)
self.filtermail_http_port = int(params.get("filtermail_http_port", "10082"))
self.filtermail_http_port_incoming = int(
params.get("filtermail_http_port_incoming", "10082")
)
self.postfix_reinject_port = int(params.get("postfix_reinject_port", "10025"))
self.postfix_reinject_port_incoming = int(
params.get("postfix_reinject_port_incoming", "10026")
@@ -93,6 +93,11 @@ class Config:
# old unused option (except for first migration from sqlite to maildir store)
self.passdb_path = Path(params.get("passdb_path", "/home/vmail/passdb.sqlite"))
@property
def max_mailbox_size_mb(self):
"""Return max_mailbox_size as an integer in megabytes."""
return parse_size_mb(self.max_mailbox_size)
def _getbytefile(self):
return open(self._inipath, "rb")
@@ -106,6 +111,16 @@ class Config:
return User(maildir, addr, password_path, uid="vmail", gid="vmail")
def parse_size_mb(limit):
"""Parse a size string like ``500M`` or ``2G`` and return megabytes."""
value = limit.strip().upper().rstrip("B")
if value.endswith("G"):
return int(value[:-1]) * 1024
if value.endswith("M"):
return int(value[:-1])
return int(value)
def write_initial_config(inipath, mail_domain, overrides):
"""Write out default config file, using the specified config value overrides."""
content = get_default_config_content(mail_domain, **overrides)

View File

@@ -115,11 +115,8 @@ class Expiry:
cutoff_without_login = (
self.now - int(self.config.delete_inactive_users_after) * 86400
)
cutoff_mails = self.now - int(self.config.delete_mails_after) * 86400
cutoff_large_mails = self.now - int(self.config.delete_large_after) * 86400
self.all_mboxes += 1
changed = False
if mbox.last_login and mbox.last_login < cutoff_without_login:
self.remove_mailbox(mbox.basedir)
return
@@ -131,25 +128,10 @@ class Expiry:
print_info(f"checking mailbox {date.strftime('%b %d')} {mboxname}")
else:
print_info(f"checking mailbox (no last_login) {mboxname}")
self.all_files += len(mbox.messages)
for message in mbox.messages:
if message.mtime < cutoff_mails:
self.remove_file(message.path, mtime=message.mtime)
elif message.size > 200000 and message.mtime < cutoff_large_mails:
# we only remove noticed large files (not unnoticed ones in new/)
parts = message.path.split("/")
if len(parts) >= 2 and parts[-2] == "cur":
self.remove_file(message.path, mtime=message.mtime)
else:
continue
changed = True
if changed:
self.remove_file(f"{mbox.basedir}/maildirsize")
def get_summary(self):
return (
f"Removed {self.del_mboxes} out of {self.all_mboxes} mailboxes "
f"and {self.del_files} out of {self.all_files} files in existing mailboxes "
f"in {time.time() - self.start:2.2f} seconds"
)

View File

@@ -23,12 +23,6 @@ max_mailbox_size = 500M
# maximum message size for an e-mail in bytes
max_message_size = 31457280
# days after which mails are unconditionally deleted
delete_mails_after = 20
# days after which large messages (>200k) are unconditionally deleted
delete_large_after = 7
# days after which users without a successful login are deleted (database and mails)
delete_inactive_users_after = 90

View File

@@ -2,6 +2,7 @@
"""CGI script for creating new accounts."""
import ipaddress
import json
import secrets
import string
@@ -14,6 +15,16 @@ ALPHANUMERIC = string.ascii_lowercase + string.digits
ALPHANUMERIC_PUNCT = string.ascii_letters + string.digits + string.punctuation
def wrap_ip(host):
if host.startswith("[") and host.endswith("]"):
return host
try:
ipaddress.ip_address(host)
return f"[{host}]"
except ValueError:
return host
def create_newemail_dict(config: Config):
user = "".join(
secrets.choice(ALPHANUMERIC) for _ in range(config.username_max_length)
@@ -22,7 +33,7 @@ def create_newemail_dict(config: Config):
secrets.choice(ALPHANUMERIC_PUNCT)
for _ in range(config.password_min_length + 3)
)
return dict(email=f"{user}@{config.mail_domain}", password=f"{password}")
return dict(email=f"{user}@{wrap_ip(config.mail_domain)}", password=f"{password}")
def create_dclogin_url(email, password):

View File

@@ -0,0 +1,152 @@
"""
Remove messages from a mailbox to meet a size target.
Dovecot calls this script when a user's quota is near its limit.
Files are scored by ``size * age`` so that large, old messages
are removed first.
Usage::
quota_expire <target_mb> <mailbox_path>
"""
import os
import sys
import time
from argparse import ArgumentParser
from collections import namedtuple
from stat import S_ISREG
FileEntry = namedtuple("FileEntry", ("path", "mtime", "size"))
def _get_file_entry(path):
try:
st = os.stat(path)
except FileNotFoundError:
return None
if not S_ISREG(st.st_mode):
return None
return FileEntry(path, st.st_mtime, st.st_size)
def _listdir(path):
try:
return os.listdir(path)
except FileNotFoundError:
return []
def scan_mailbox_messages(mailbox_dir):
messages = []
for sub in ("cur", "new", "tmp"):
subdir = f"{mailbox_dir}/{sub}"
for name in _listdir(subdir):
entry = _get_file_entry(f"{subdir}/{name}")
if entry is not None:
messages.append(entry)
return messages
def _remove_stale_caches(mailbox_dir):
for name in ("maildirsize", "dovecot.index.cache"):
try:
os.unlink(f"{mailbox_dir}/{name}")
except FileNotFoundError:
pass
def expire_to_target(mailbox_dir, target_bytes, now=None):
"""Remove highest-scored files until total size <= *target_bytes*.
Returns the list of removed file paths.
"""
if now is None:
now = time.time()
messages = scan_mailbox_messages(mailbox_dir)
total_size = sum(m.size for m in messages)
if total_size <= target_bytes:
return []
# Score: large and old files get the highest score.
scored = sorted(
messages,
key=lambda m: m.size * (now - m.mtime),
reverse=True,
)
removed = []
for entry in scored:
if total_size <= target_bytes:
break
try:
os.unlink(entry.path)
except FileNotFoundError:
continue
total_size -= entry.size
removed.append(entry.path)
if removed:
_remove_stale_caches(mailbox_dir)
return removed
def main(args=None):
"""Remove mailbox messages to stay within a megabyte target."""
parser = ArgumentParser(description=main.__doc__)
parser.add_argument(
"target_mb",
type=int,
help="target mailbox size in megabytes",
)
parser.add_argument(
"mailbox_path",
help="path to a user mailbox, or with --sweep the mailboxes directory",
)
parser.add_argument(
"--sweep",
action="store_true",
help="sweep all mailboxes under mailbox_path",
)
args = parser.parse_args(args)
target_bytes = args.target_mb * 1024 * 1024
if args.sweep:
return _sweep(args.mailbox_path, target_bytes)
removed = expire_to_target(args.mailbox_path, target_bytes)
if removed:
print(
f"removed {len(removed)} file(s) from {args.mailbox_path}"
f" to reach {args.target_mb} MB target",
file=sys.stderr,
)
return 0
def _sweep(mailboxes_dir, target_bytes):
try:
names = os.listdir(mailboxes_dir)
except FileNotFoundError:
print(f"directory not found: {mailboxes_dir}", file=sys.stderr)
return 1
for name in sorted(names):
if "@" not in name:
continue
mbox = f"{mailboxes_dir}/{name}"
removed = expire_to_target(mbox, target_bytes)
if removed:
print(
f"removed {len(removed)} file(s) from {name}",
file=sys.stderr,
)
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -34,8 +34,6 @@ def test_read_config_testrun(make_config):
assert config.postfix_reinject_port == 10025
assert config.max_user_send_per_minute == 60
assert config.max_mailbox_size == "500M"
assert config.delete_mails_after == "20"
assert config.delete_large_after == "7"
assert config.username_min_length == 9
assert config.username_max_length == 9
assert config.password_min_length == 9

View File

@@ -1,7 +1,6 @@
import os
import random
from datetime import datetime
from fnmatch import fnmatch
from pathlib import Path
import pytest
@@ -154,35 +153,6 @@ def test_expiry_cli_basic(example_config, mbox1):
expiry_main(args)
def test_expiry_cli_old_files(capsys, example_config, mbox1):
relpaths_old = ["cur/msg_old1", "cur/msg_old1"]
cutoff_days = int(example_config.delete_mails_after) + 1
create_new_messages(mbox1.basedir, relpaths_old, size=1000, days=cutoff_days)
relpaths_large = ["cur/msg_old_large1", "new/msg_old_large2"]
cutoff_days = int(example_config.delete_large_after) + 1
create_new_messages(
mbox1.basedir, relpaths_large, size=1000 * 300, days=cutoff_days
)
create_new_messages(mbox1.basedir, ["cur/shouldstay"], size=1000 * 300, days=1)
args = str(example_config._inipath), "--remove", "-v"
expiry_main(args)
out, err = capsys.readouterr()
allpaths = relpaths_old + relpaths_large + ["maildirsize"]
for path in allpaths:
for line in err.split("\n"):
if fnmatch(line, f"removing*{path}"):
break
else:
if path != "new/msg_old_large2":
pytest.fail(f"failed to remove {path}\n{err}")
assert "shouldstay" not in err
def test_get_file_entry(tmp_path):
assert get_file_entry(str(tmp_path.joinpath("123123"))) is None
p = tmp_path.joinpath("x")

View File

@@ -19,6 +19,12 @@ def test_create_newemail_dict(example_config):
assert ac1["password"] != ac2["password"]
def test_create_newemail_dict_ip(make_config):
config = make_config("1.2.3.4")
ac = create_newemail_dict(config)
assert ac["email"].endswith("@[1.2.3.4]")
def test_create_dclogin_url():
url = create_dclogin_url("user@example.org", "p@ss w+rd")
assert url.startswith("dclogin:")

View File

@@ -0,0 +1,91 @@
import os
import time
from chatmaild.quota_expire import expire_to_target, scan_mailbox_messages
MB = 1024 * 1024
def _create_message(basedir, relpath, size, days_old=0):
path = basedir / relpath
path.parent.mkdir(parents=True, exist_ok=True)
path.write_bytes(b"x" * size)
mtime = time.time() - days_old * 86400
os.utime(path, (mtime, mtime))
return path
def test_scan_cur_new_tmp(tmp_path):
_create_message(tmp_path, "cur/msg1", 100)
_create_message(tmp_path, "new/msg2", 200)
_create_message(tmp_path, "tmp/msg3", 300)
messages = scan_mailbox_messages(str(tmp_path))
assert len(messages) == 3
sizes = sorted(m.size for m in messages)
assert sizes == [100, 200, 300]
def test_scan_ignores_subfolders(tmp_path):
_create_message(tmp_path, "cur/a", 10)
_create_message(tmp_path, ".DeltaChat/cur/b", 20)
assert len(scan_mailbox_messages(str(tmp_path))) == 1
def test_scan_empty(tmp_path):
assert scan_mailbox_messages(str(tmp_path)) == []
assert scan_mailbox_messages(str(tmp_path / "nope")) == []
def test_noop_under_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", MB)
assert expire_to_target(str(tmp_path), 2 * MB) == []
assert (tmp_path / "cur" / "msg1").exists()
def test_removes_to_target(tmp_path):
now = time.time()
for i in range(15):
_create_message(tmp_path, f"cur/msg{i:02d}", MB, days_old=i + 1)
removed = expire_to_target(str(tmp_path), 10 * MB, now=now)
assert len(removed) == 5
assert len(scan_mailbox_messages(str(tmp_path))) == 10
def test_scoring_prefers_large_old(tmp_path):
now = time.time()
_create_message(tmp_path, "cur/large_old", 2 * MB, days_old=30)
_create_message(tmp_path, "cur/small_new", MB, days_old=1)
removed = expire_to_target(str(tmp_path), 2 * MB, now=now)
assert len(removed) == 1
assert "large_old" in removed[0]
def test_scoring_large_new_beats_small_old(tmp_path):
now = time.time()
_create_message(tmp_path, "cur/big_new", 10 * MB, days_old=1)
_create_message(tmp_path, "cur/small_old", MB, days_old=5)
# big_new score: 10MB * 1d = 10 vs small_old score: 1MB * 5d = 5
removed = expire_to_target(str(tmp_path), 10 * MB, now=now)
assert len(removed) == 1
assert "big_new" in removed[0]
def test_exact_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", 5 * MB)
assert expire_to_target(str(tmp_path), 5 * MB) == []
def test_removes_stale_caches(tmp_path):
_create_message(tmp_path, "cur/msg1", 2 * MB, days_old=5)
(tmp_path / "maildirsize").write_text("x")
(tmp_path / "dovecot.index.cache").write_text("x")
expire_to_target(str(tmp_path), MB)
assert not (tmp_path / "maildirsize").exists()
assert not (tmp_path / "dovecot.index.cache").exists()
def test_no_cache_removal_when_under_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", MB)
(tmp_path / "maildirsize").write_text("x")
expire_to_target(str(tmp_path), 2 * MB)
assert (tmp_path / "maildirsize").exists()

View File

@@ -3,6 +3,8 @@ import io
import os
from contextlib import contextmanager
from pyinfra import host
from pyinfra.facts.server import Command
from pyinfra.operations import files, server, systemd
@@ -11,6 +13,17 @@ def has_systemd():
return os.path.isdir("/run/systemd/system")
def is_in_container() -> bool:
"""Return True if running inside a container (Docker, LXC, etc.)."""
return (
host.get_fact(
Command,
"systemd-detect-virt --container --quiet 2>/dev/null && echo yes || true",
)
== "yes"
)
@contextmanager
def blocked_service_startup():
"""Prevent services from auto-starting during package installation.

View File

@@ -108,9 +108,7 @@ def run_cmd(args, out):
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
if ssh_host in ["localhost", "@docker"]:
if ssh_host == "@docker":
env["CHATMAIL_NOPORTCHECK"] = "True"
if ssh_host == "localhost":
cmd = f"{pyinf} @local {deploy_path} -y"
if version.parse(pyinfra.__version__) < version.parse("3"):
@@ -316,7 +314,7 @@ def add_ssh_host_option(parser):
parser.add_argument(
"--ssh-host",
dest="ssh_host",
help="Run commands on 'localhost', via '@docker', or on a specific SSH host "
help="Run commands on 'localhost' or on a specific SSH host "
"instead of chatmail.ini's mail_domain.",
)
@@ -378,9 +376,7 @@ def get_parser():
def get_sshexec(ssh_host: str, verbose=True):
if ssh_host in ["localhost", "@local"]:
return LocalExec(verbose, docker=False)
elif ssh_host == "@docker":
return LocalExec(verbose, docker=True)
return LocalExec(verbose)
if verbose:
print(f"[ssh] login to {ssh_host}")
return SSHExec(ssh_host, verbose=verbose)

View File

@@ -2,7 +2,6 @@
Chat Mail pyinfra deploy.
"""
import os
import shutil
import subprocess
import sys
@@ -28,6 +27,7 @@ from .basedeploy import (
configure_remote_units,
get_resource,
has_systemd,
is_in_container,
)
from .dovecot.deployer import DovecotDeployer
from .external.deployer import ExternalTlsDeployer
@@ -158,7 +158,7 @@ class UnboundDeployer(Deployer):
with blocked_service_startup():
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
packages=["unbound", "unbound-anchor", "dnsutils", "resolvconf"],
)
def configure(self):
@@ -584,7 +584,7 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
Out().red(f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n")
exit(1)
if not os.environ.get("CHATMAIL_NOPORTCHECK"):
if not is_in_container():
port_services = [
(["master", "smtpd"], 25),
("unbound", 53),

View File

@@ -13,9 +13,11 @@ from cmdeploy.basedeploy import (
blocked_service_startup,
configure_remote_units,
get_resource,
is_in_container,
)
DOVECOT_VERSION = "2.3.21+dfsg1-3"
DOVECOT_ARCHIVE_VERSION = "2.3.21+dfsg1-3"
DOVECOT_PACKAGE_VERSION = f"1:{DOVECOT_ARCHIVE_VERSION}"
DOVECOT_SHA256 = {
("core", "amd64"): "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d",
@@ -40,11 +42,14 @@ class DovecotDeployer(Deployer):
with blocked_service_startup():
debs = []
for pkg in ("core", "imapd", "lmtpd"):
deb = _download_dovecot_package(pkg, arch)
deb, changed = _download_dovecot_package(pkg, arch)
self.need_restart |= changed
if deb:
debs.append(deb)
if debs:
deb_list = " ".join(debs)
# First dpkg may fail on missing dependencies (stderr suppressed);
# apt-get --fix-broken pulls them in, then dpkg retries cleanly.
server.shell(
name="Install dovecot packages",
commands=[
@@ -53,6 +58,7 @@ class DovecotDeployer(Deployer):
f"dpkg --force-confdef --force-confold -i {deb_list}",
],
)
self.need_restart = True
files.put(
name="Pin dovecot packages to block Debian dist-upgrades",
src=io.StringIO(
@@ -61,15 +67,30 @@ class DovecotDeployer(Deployer):
"Pin-Priority: -1\n"
),
dest="/etc/apt/preferences.d/pin-dovecot",
user="root",
group="root",
mode="644",
)
def configure(self):
configure_remote_units(self.config.mail_domain, self.units)
self.need_restart, self.daemon_reload = _configure_dovecot(self.config)
config_restart, self.daemon_reload = _configure_dovecot(self.config)
self.need_restart |= config_restart
def activate(self):
activate_remote_units(self.units)
# Detect stale binary: package installed but service still runs old (deleted) binary.
if not self.disable_mail and not self.need_restart:
stale = host.get_fact(
Command,
'pid=$(systemctl show -p MainPID --value dovecot.service 2>/dev/null);'
' [ "${pid:-0}" != "0" ] && readlink "/proc/$pid/exe" 2>/dev/null | grep -q "(deleted)"'
" && echo STALE || true",
)
if stale == "STALE":
self.need_restart = True
restart = False if self.disable_mail else self.need_restart
systemd.service(
@@ -94,22 +115,22 @@ def _pick_url(primary, fallback):
return fallback
def _download_dovecot_package(package: str, arch: str):
"""Download a dovecot .deb if needed, return its path (or None)."""
def _download_dovecot_package(package: str, arch: str) -> tuple[str | None, bool]:
"""Download a dovecot .deb if needed, return (path, changed)."""
arch = "amd64" if arch == "x86_64" else arch
arch = "arm64" if arch == "aarch64" else arch
pkg_name = f"dovecot-{package}"
sha256 = DOVECOT_SHA256.get((package, arch))
if sha256 is None:
apt.packages(packages=[pkg_name])
return None
op = apt.packages(packages=[pkg_name])
return None, bool(getattr(op, "changed", False))
installed_versions = host.get_fact(DebPackages).get(pkg_name, [])
if DOVECOT_VERSION in installed_versions:
return None
if DOVECOT_PACKAGE_VERSION in installed_versions:
return None, False
url_version = DOVECOT_VERSION.replace("+", "%2B")
url_version = DOVECOT_ARCHIVE_VERSION.replace("+", "%2B")
deb_base = f"{pkg_name}_{url_version}_{arch}.deb"
primary_url = f"https://download.delta.chat/dovecot/{deb_base}"
fallback_url = f"https://github.com/chatmail/dovecot/releases/download/upstream%2F{url_version}/{deb_base}"
@@ -124,18 +145,7 @@ def _download_dovecot_package(package: str, arch: str):
cache_time=60 * 60 * 24 * 365 * 10, # never redownload the package
)
return deb_filename
def _can_set_inotify_limits() -> bool:
is_container = (
host.get_fact(
Command,
"systemd-detect-virt --container --quiet 2>/dev/null && echo yes || true",
)
== "yes"
)
return not is_container
return deb_filename, True
def _configure_dovecot(config: Config, debug: bool = False) -> tuple[bool, bool]:
@@ -173,7 +183,7 @@ def _configure_dovecot(config: Config, debug: bool = False) -> tuple[bool, bool]
# as per https://doc.dovecot.org/2.3/configuration_manual/os/
# it is recommended to set the following inotify limits
can_modify = _can_set_inotify_limits()
can_modify = not is_in_container()
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
value = host.get_fact(Sysctl)[key]
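The hunk above replaces the local `_can_set_inotify_limits` helper with the shared `is_in_container` fact. A standalone sketch of the same `systemd-detect-virt` probe (hypothetical; the real helper uses a pyinfra `Command` fact rather than `subprocess`):

```python
import subprocess

def is_in_container() -> bool:
    """Return True when running inside a container.

    systemd-detect-virt --container exits 0 inside containers,
    non-zero on bare metal or in a VM.
    """
    try:
        return subprocess.run(
            ["systemd-detect-virt", "--container", "--quiet"],
            capture_output=True,
        ).returncode == 0
    except FileNotFoundError:
        # systemd-detect-virt not installed: assume bare metal
        return False
```

Skipping the inotify sysctl writes in containers matters because `fs.inotify.*` keys are typically read-only there.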


@@ -133,6 +133,11 @@ protocol lmtp {
# mail_lua and push_notification_lua are needed for Lua push notification handler.
# <https://doc.dovecot.org/2.3/configuration_manual/push_notification/#configuration>
mail_plugins = $mail_plugins mail_lua notify push_notification push_notification_lua
# Disable fsync for LMTP. May lose a delivered message,
# but unlikely to cause problems with multiple relays.
# https://doc.dovecot.org/2.3/admin_manual/mailbox_formats/#fsyncing
mail_fsync = never
}
plugin {
@@ -144,12 +149,22 @@ plugin {
}
plugin {
# for now we define static quota-rules for all users
quota = maildir:User quota
quota_rule = *:storage={{ config.max_mailbox_size }}
quota_max_mail_size={{ config.max_message_size }}
quota_grace = 0
# quota_over_flag_value = TRUE
# When a user reaches 90% quota, run chatmail-quota-expire
# to remove large/old messages until usage is below 80%.
quota_warning = storage=90%% quota-warning {{ config.max_mailbox_size_mb * 80 // 100 }} {{ config.mailboxes_dir }}/%u
}
service quota-warning {
executable = script /usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire
user = vmail
unix_listener quota-warning {
}
}
# push_notification configuration
@@ -252,6 +267,9 @@ protocol imap {
# sort -sn <(sed 's/ / C: /' *.in) <(sed 's/ / S: /' *.out)
rawlog_dir = %h
# Disable fsync for IMAP. May lose IMAP changes like setting flags.
mail_fsync = never
}
{% endif %}


@@ -74,7 +74,7 @@ http {
access_log syslog:server=unix:/dev/log,facility=local7;
location /mxdeliv/ {
proxy_pass http://127.0.0.1:{{ config.filtermail_http_port }};
proxy_pass http://127.0.0.1:{{ config.filtermail_http_port_incoming }};
}
location / {


@@ -87,9 +87,8 @@ class SSHExec:
class LocalExec:
FuncError = FuncError
def __init__(self, verbose=False, docker=False):
def __init__(self, verbose=False):
self.verbose = verbose
self.docker = docker
def __call__(self, call, kwargs=None, log_callback=None):
if kwargs is None:
@@ -101,10 +100,6 @@ class LocalExec:
if not title:
title = call.__name__
where = "locally"
if self.docker:
if call == remote.rdns.perform_initial_checks:
kwargs["pre_command"] = "docker exec chatmail "
where = "in docker"
if self.verbose:
print_stderr(f"Running {where}: {title}(**{kwargs})")
return self(call, kwargs, log_callback=print_stderr)


@@ -71,6 +71,44 @@ class TestSSHExecutor:
assert (now - since_date).total_seconds() < 60 * 60 * 51
def test_dovecot_main_process_matches_installed_binary(sshdomain):
sshexec = get_sshexec(sshdomain)
main_pid = int(
sshexec(
call=remote.rshell.shell,
kwargs=dict(
command="timeout 10 systemctl show -p MainPID --value dovecot.service"
),
).strip()
)
assert main_pid != 0, "dovecot.service MainPID is 0 -- service not running?"
exe = sshexec(
call=remote.rshell.shell,
kwargs=dict(command=f"timeout 10 readlink /proc/{main_pid}/exe"),
).strip()
status_text = sshexec(
call=remote.rshell.shell,
kwargs=dict(
command="timeout 10 systemctl show -p StatusText --value dovecot.service"
),
).strip()
installed_version = sshexec(
call=remote.rshell.shell, kwargs=dict(command="timeout 10 dovecot --version")
).strip()
assert not exe.endswith("(deleted)"), (
f"running dovecot binary was deleted (stale after upgrade): {exe}"
)
expected_status_text = f"v{installed_version}"
assert status_text == expected_status_text or status_text.startswith(
f"{expected_status_text} "
), (
f"dovecot status version mismatch: "
f"StatusText={status_text!r}, installed={installed_version!r}"
)
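The check this test performs can be sketched as a standalone helper (hypothetical; not part of cmdeploy, which runs the equivalent via `systemctl show` and `readlink` over SSH):

```python
import os

def binary_is_stale(pid: int) -> bool:
    """True if the process's on-disk executable was replaced or removed.

    On Linux, /proc/<pid>/exe keeps pointing at the old inode and the
    symlink target gains a " (deleted)" suffix once the file is unlinked,
    e.g. after a package upgrade without a service restart.
    """
    try:
        return os.readlink(f"/proc/{pid}/exe").endswith(" (deleted)")
    except OSError:
        # no such pid, no /proc, or no permission to read the link
        return False
```

This is the same condition the new `activate()` code greps for to force a restart after a package upgrade.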
def test_timezone_env(remote):
for line in remote.iter_output("env"):
print(line)
@@ -206,24 +244,6 @@ def test_exceed_rate_limit(cmsetup, gencreds, maildata, chatmail_config):
pytest.fail("Rate limit was not exceeded")
@pytest.mark.slow
def test_expunged(remote, chatmail_config):
outdated_days = int(chatmail_config.delete_mails_after) + 1
find_cmds = [
f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/cur/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/new/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/new/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/tmp/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/tmp/*' -mtime +{outdated_days} -type f",
]
outdated_days = int(chatmail_config.delete_large_after) + 1
find_cmds.append(
f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
)
for cmd in find_cmds:
for line in remote.iter_output(cmd):
assert not line
def test_deployed_state(remote):


@@ -1,4 +1,5 @@
import imaplib
import ipaddress
import itertools
import os
import random
@@ -14,6 +15,14 @@ from chatmaild.config import read_config
conftestdir = Path(__file__).parent
def _is_ip(domain):
try:
ipaddress.ip_address(domain)
return True
except ValueError:
return False
def pytest_addoption(parser):
parser.addoption(
"--slow", action="store_true", default=False, help="also run slow tests"
@@ -282,6 +291,7 @@ def gencreds(chatmail_config):
def gen(domain=None):
domain = domain if domain else chatmail_config.mail_domain
addr_domain = f"[{domain}]" if _is_ip(domain) else domain
while 1:
num = next(count)
alphanumeric = "abcdefghijklmnopqrstuvwxyz1234567890"
@@ -295,7 +305,7 @@ def gencreds(chatmail_config):
password = "".join(
random.choices(alphanumeric, k=chatmail_config.password_min_length)
)
yield f"{user}@{domain}", f"{password}"
yield f"{user}@{addr_domain}", f"{password}"
return lambda domain=None: next(gen(domain))
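The gencreds change above wraps IP-literal domains in square brackets. A self-contained sketch (`_is_ip` is copied from the conftest; `format_addr` is an illustrative helper, not in the source):

```python
import ipaddress

def _is_ip(domain: str) -> bool:
    try:
        ipaddress.ip_address(domain)
        return True
    except ValueError:
        return False

def format_addr(user: str, domain: str) -> str:
    # RFC 5321 address literals put IP addresses in square brackets
    addr_domain = f"[{domain}]" if _is_ip(domain) else domain
    return f"{user}@{addr_domain}"
```

So `format_addr("alice", "203.0.113.7")` yields `alice@[203.0.113.7]`, while hostnames pass through unchanged.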
@@ -344,9 +354,22 @@ class ChatmailACFactory:
accounts = []
for _ in range(num):
account = self.dc.add_account()
future = account.add_or_update_transport.future(
self._make_transport(domain)
)
addr, password = self.gencreds(domain)
if _is_ip(domain):
# Use DCLOGIN scheme with explicit server hosts,
# matching how madmail presents its addresses to users.
qr = (
f"dclogin:{addr}"
f"?p={password}&v=1"
f"&ih={domain}&ip=993"
f"&sh={domain}&sp=465"
f"&ic=3&ss=default"
)
future = account.add_transport_from_qr.future(qr)
else:
future = account.add_or_update_transport.future(
self._make_transport(domain)
)
futures.append(future)
# ensure messages stay in INBOX so that they can be


@@ -0,0 +1,238 @@
from contextlib import nullcontext
from types import SimpleNamespace
import pytest
from pyinfra.facts.deb import DebPackages
from cmdeploy.dovecot import deployer as dovecot_deployer
def make_host(*fact_pairs):
"""Build a mock host; get_fact(cls) dispatches to the provided facts mapping.
Args:
*fact_pairs: tuples of (fact_class, fact_value) to register
Returns:
SimpleNamespace with get_fact that raises a clear error if an
unexpected fact type is requested.
"""
facts = dict(fact_pairs)
def get_fact(cls):
if cls not in facts:
registered = ", ".join(c.__name__ for c in facts)
raise LookupError(
f"unexpected get_fact({cls.__name__}); "
f"only registered: {registered}"
)
return facts[cls]
return SimpleNamespace(get_fact=get_fact)
@pytest.fixture
def deployer():
return dovecot_deployer.DovecotDeployer(
SimpleNamespace(mail_domain="chat.example.org"),
disable_mail=False,
)
@pytest.fixture
def patch_blocked(monkeypatch):
monkeypatch.setattr(dovecot_deployer, "blocked_service_startup", nullcontext)
@pytest.fixture
def mock_files_put(monkeypatch):
monkeypatch.setattr(
dovecot_deployer.files,
"put",
lambda **kwargs: SimpleNamespace(changed=False),
)
@pytest.fixture
def track_shell(monkeypatch):
calls = []
monkeypatch.setattr(
dovecot_deployer.server,
"shell",
lambda **kwargs: calls.append(kwargs) or SimpleNamespace(changed=False),
)
return calls
def test_download_dovecot_package_skips_epoch_matched_install(monkeypatch):
epoch_version = dovecot_deployer.DOVECOT_PACKAGE_VERSION
downloads = []
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host((DebPackages, {"dovecot-core": [epoch_version]})),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deb, changed = dovecot_deployer._download_dovecot_package("core", "amd64")
assert deb is None, f"expected no deb path when version matches, got {deb!r}"
assert changed is False, "should not flag changed when version already installed"
assert downloads == [], "should not download when version already installed"
def test_download_dovecot_package_uses_archive_version_for_url_and_filename(
monkeypatch,
):
downloads = []
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host((DebPackages, {})),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deb, changed = dovecot_deployer._download_dovecot_package("core", "amd64")
archive_version = dovecot_deployer.DOVECOT_ARCHIVE_VERSION.replace("+", "%2B")
expected_deb = f"/root/dovecot-core_{archive_version}_amd64.deb"
# Verify the returned path uses archive version, not package version (with epoch)
assert changed is True, "should flag changed when package not yet installed"
assert deb == expected_deb, f"deb path mismatch: {deb!r} != {expected_deb!r}"
assert dovecot_deployer.DOVECOT_PACKAGE_VERSION not in deb, (
f"deb path should use archive version (no epoch), got {deb!r}"
)
assert len(downloads) == 1, "files.download should be called exactly once"
def test_install_skips_dpkg_path_when_epoch_matched_packages_present(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host(
(
dovecot_deployer.DebPackages,
{
"dovecot-core": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
"dovecot-imapd": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
"dovecot-lmtpd": [dovecot_deployer.DOVECOT_PACKAGE_VERSION],
},
),
(dovecot_deployer.Arch, "x86_64"),
),
)
downloads = []
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: downloads.append(kwargs),
)
deployer.install()
assert downloads == [], "should not download when all packages epoch-matched"
assert track_shell == [], "should not run dpkg when all packages epoch-matched"
assert deployer.need_restart is False, (
"need_restart should be False when nothing changed"
)
def test_install_unsupported_arch_falls_back_to_apt(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
# For unsupported architectures, all fact lookups return the arch string.
monkeypatch.setattr(
dovecot_deployer,
"host",
SimpleNamespace(get_fact=lambda cls: "riscv64"),
)
apt_calls = []
# Mirrors apt.packages() return value: OperationMeta with .changed property.
# Only lmtpd triggers a change to verify |= accumulation of changed flags.
def fake_apt(**kwargs):
apt_calls.append(kwargs)
changed = "lmtpd" in kwargs["packages"][0]
return SimpleNamespace(changed=changed)
monkeypatch.setattr(dovecot_deployer.apt, "packages", fake_apt)
deployer.install()
actual_pkgs = [c["packages"] for c in apt_calls]
assert actual_pkgs == [["dovecot-core"], ["dovecot-imapd"], ["dovecot-lmtpd"]], (
f"expected apt install of core/imapd/lmtpd, got {actual_pkgs}"
)
assert track_shell == [], "should not run dpkg for unsupported arch"
assert deployer.need_restart is True, (
"need_restart should be True when apt installed a package"
)
def test_install_runs_dpkg_when_packages_need_download(
deployer, patch_blocked, mock_files_put, track_shell, monkeypatch
):
monkeypatch.setattr(
dovecot_deployer,
"host",
make_host(
(dovecot_deployer.DebPackages, {}),
(dovecot_deployer.Arch, "x86_64"),
),
)
monkeypatch.setattr(
dovecot_deployer,
"_pick_url",
lambda primary, fallback: primary,
)
monkeypatch.setattr(
dovecot_deployer.files,
"download",
lambda **kwargs: SimpleNamespace(changed=True),
)
deployer.install()
assert len(track_shell) == 1, (
f"expected one server.shell() call for dpkg install, got {len(track_shell)}"
)
cmds = track_shell[0]["commands"]
assert len(cmds) == 3, f"expected 3 dpkg/apt commands, got: {cmds}"
assert cmds[0].startswith("dpkg --force-confdef --force-confold -i ")
assert "apt-get -y --fix-broken install" in cmds[1]
assert cmds[2].startswith("dpkg --force-confdef --force-confold -i ")
assert deployer.need_restart is True, (
"need_restart should be True after dpkg install"
)
def test_pick_url_falls_back_on_primary_error(monkeypatch):
def raise_error(req, timeout):
raise OSError("connection timeout")
monkeypatch.setattr(dovecot_deployer.urllib.request, "urlopen", raise_error)
result = dovecot_deployer._pick_url("http://primary", "http://fallback")
assert result == "http://fallback", (
f"should fall back when primary fails, got {result!r}"
)


@@ -102,8 +102,14 @@ short overview of ``chatmaild`` services:
Apple/Google/Huawei.
- `chatmail-expire <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/expire.py>`_
deletes users if they have not logged in for a longer while.
The timeframe can be configured in ``chatmail.ini``.
deletes entire mailboxes of users who have not logged in
for longer than ``delete_inactive_users_after`` days.
- `chatmail-quota-expire <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/quota_expire.py>`_
is called by Dovecot's ``quota_warning`` mechanism when a
user reaches 90% of their mailbox quota.
It removes the largest and oldest messages
until usage drops below 80% of the quota.
- `lastlogin <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/lastlogin.py>`_
is contacted by Dovecot when a user logs in and stores the date of
@@ -139,7 +145,7 @@ Chatmail relay dependency diagram
certs-nginx[("`TLS certs
/var/lib/acme`")] --> nginx-internal;
systemd-timer --- acmetool;
systemd-timer --- chatmail-expire-daily;
systemd-timer --- chatmail-expire-inactive;
systemd-timer --- chatmail-fsreport-daily;
acmetool --> certs[("`TLS certs
/var/lib/acme`")];
@@ -156,9 +162,11 @@ Chatmail relay dependency diagram
/home/vmail/.../user"];
dovecot --- |lastlogin.socket|lastlogin;
dovecot --- chatmail-metadata;
dovecot --- |quota-warning|chatmail-quota-expire;
chatmail-quota-expire --- maildir;
lastlogin --- maildir;
doveauth --- maildir;
chatmail-expire-daily --- maildir;
chatmail-expire-inactive --- maildir;
chatmail-fsreport-daily --- maildir;
chatmail-metadata --- iroh-relay;
chatmail-metadata --- |encrypted device token| notifications.delta.chat;