Compare commits


1 Commit

Author: holger krekel
SHA: fb80f23cfd
feat: Automatic per-user quota-preservation.
Replace daily timer-based message expire script
with Dovecot quota-warning-triggered cleanup.
When a user reaches 90% of their mailbox quota,
Dovecot calls the new script which removes the largest and oldest messages
until usage drops below 80%.

The daily `chatmail-expire` service now only
handles deletion of inactive user mailboxes.
Date: 2026-04-18 02:04:36 +02:00
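The trigger described in the commit message can be wired up through Dovecot's `quota_warning` mechanism. A sketch of such a configuration (service name, listener, and script path are illustrative assumptions, not taken from this diff):

```
plugin {
  # invoke the quota-warning service when usage crosses 90%
  quota_warning = storage=90%% quota-warning 90 %u
}

service quota-warning {
  # hypothetical cleanup script; the actual entry point in this
  # commit is chatmail-quota-expire with its own arguments
  executable = script /usr/local/bin/quota-cleanup.sh
  unix_listener quota-warning {
    user = vmail
  }
}
```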
61 changed files with 1107 additions and 1342 deletions

View File

@@ -1,39 +0,0 @@
name: No-DNS
on:
# Triggers when a PR is merged into main or a direct push occurs
push:
branches: [ "main" ]
# Triggers for any PR (and its subsequent commits) targeting the main branch
pull_request:
branches: [ "main" ]
permissions: {}
# Newest push wins: Prevents multiple runs from clashing and wasting runner efforts
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
no-dns:
name: LXC deploy and test
uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.14.5
with:
cmlxc_commands: |
cmlxc init
# single cmdeploy relay test
cmlxc -v deploy-cmdeploy --source ./repo --type ipv4 cm0
cmlxc -v test-cmdeploy cm0
# cross cmdeploy relay test (two ipv4 relays)
cmlxc -v deploy-cmdeploy --source ./repo --ipv4-only --type ipv4 cm1
cmlxc -v test-cmdeploy cm0 cm1
# cross cmdeploy/madmail relay tests
cmlxc -v deploy-madmail mad0
cmlxc -v test-cmdeploy cm0 mad0
cmlxc -v test-mini mad0 cm0
cmlxc -v test-mini cm0 mad0

View File

@@ -1,4 +1,4 @@
name: CI
name: Run unit-tests and container-based deploy+test verification
on:
# Triggers when a PR is merged into main or a direct push occurs
@@ -9,8 +9,6 @@ on:
pull_request:
branches: [ "main" ]
permissions: {}
# Newest push wins: Prevents multiple runs from clashing and wasting runner efforts
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -27,9 +25,8 @@ jobs:
# Otherwise `test_deployed_state` will be unhappy.
with:
ref: ${{ github.event.pull_request.head.sha }}
persist-credentials: false
- name: download filtermail
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.4/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.1/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
- name: run chatmaild tests
working-directory: chatmaild
run: pipx run tox
@@ -41,7 +38,6 @@ jobs:
- uses: actions/checkout@v6
with:
ref: ${{ github.event.pull_request.head.sha }}
persist-credentials: false
- name: initenv
run: scripts/initenv.sh
@@ -57,7 +53,7 @@ jobs:
lxc-test:
name: LXC deploy and test
uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.14.5
uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.10.0
with:
cmlxc_commands: |
cmlxc init
@@ -75,4 +71,3 @@ jobs:
cmlxc -v test-cmdeploy cm0 mad0
cmlxc -v test-mini cm0 mad0
cmlxc -v test-mini mad0 cm0

View File

@@ -1,37 +0,0 @@
# Notify the docker repo to build and test a new image after relay CI passes.
#
# Sends a repository_dispatch event to chatmail/docker with the relay ref
# and short SHA, which triggers docker-ci.yaml to build, push to GHCR,
# and run integration tests via cmlxc.
name: Trigger Docker build
on:
push:
branches: [main]
workflow_dispatch:
permissions: {}
jobs:
dispatch:
name: Dispatch build to chatmail/docker
runs-on: ubuntu-latest
if: github.repository == 'chatmail/relay'
steps:
- name: Compute short SHA
id: sha
run: echo "short=$(echo '${{ github.sha }}' | cut -c1-7)" >> "$GITHUB_OUTPUT"
- name: Send repository_dispatch
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3
with:
token: ${{ secrets.CHATMAIL_DOCKER_DISPATCH_TOKEN }}
repository: chatmail/docker
event-type: relay-updated
client-payload: >-
{
"relay_ref": "${{ github.ref_name }}",
"relay_sha": "${{ github.sha }}",
"relay_sha_short": "${{ steps.sha.outputs.short }}"
}

View File

@@ -7,8 +7,6 @@ on:
- 'scripts/build-docs.sh'
- '.github/workflows/docs-preview.yaml'
permissions: {}
jobs:
scripts:
name: build
@@ -18,8 +16,6 @@ jobs:
url: https://staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: initenv
run: scripts/initenv.sh
@@ -38,22 +34,18 @@ jobs:
- name: Get Pullrequest ID
id: prepare
run: |
export PULLREQUEST_ID=$(echo "${GITHUB_REF}" | cut -d "/" -f3)
export PULLREQUEST_ID=$(echo "${{ github.ref }}" | cut -d "/" -f3)
echo "prid=$PULLREQUEST_ID" >> $GITHUB_OUTPUT
if [ $(expr length "${{ secrets.USERNAME }}") -gt "1" ]; then echo "uploadtoserver=true" >> $GITHUB_OUTPUT; fi
- run: |
echo "baseurl: /${STEPS_PREPARE_OUTPUTS_PRID}" >> _config.yml
env:
STEPS_PREPARE_OUTPUTS_PRID: ${{ steps.prepare.outputs.prid }}
echo "baseurl: /${{ steps.prepare.outputs.prid }}" >> _config.yml
- name: Upload preview
run: |
mkdir -p "$HOME/.ssh"
echo "${{ secrets.CHATMAIL_STAGING_SSHKEY }}" > "$HOME/.ssh/key"
chmod 600 "$HOME/.ssh/key"
rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/staging.chatmail.at/doc/relay/${STEPS_PREPARE_OUTPUTS_PRID}/"
env:
STEPS_PREPARE_OUTPUTS_PRID: ${{ steps.prepare.outputs.prid }}
rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}/"
- name: check links
working-directory: doc

View File

@@ -10,8 +10,6 @@ on:
- 'scripts/build-docs.sh'
- '.github/workflows/docs.yaml'
permissions: {}
jobs:
scripts:
name: build
@@ -21,8 +19,6 @@ jobs:
url: https://chatmail.at/doc/relay/
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: initenv
run: scripts/initenv.sh

View File

@@ -1,26 +0,0 @@
name: GitHub Actions Security Analysis with zizmor
on:
push:
branches: ["main"]
pull_request:
branches: ["**"]
permissions: {}
jobs:
zizmor:
name: Run zizmor
runs-on: ubuntu-latest
permissions:
security-events: write # Required for upload-sarif (used by zizmor-action) to upload SARIF files.
contents: read
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@b1d7e1fb5de872772f31590499237e7cce841e8e # v0.5.3

.github/zizmor.yml vendored
View File

@@ -1,7 +0,0 @@
rules:
unpinned-uses:
config:
policies:
actions/*: ref-pin
dependabot/*: ref-pin
chatmail/*: ref-pin

View File

@@ -1,88 +1,23 @@
# Changelog for chatmail deployment
## 1.10.0 2026-04-30
## Unreleased
* start mtail after networking is fully up <https://github.com/chatmail/relay/pull/942>
* support specifying custom filtermail binary through environment variable <https://github.com/chatmail/relay/pull/941>
* add automated zizmor scanning of github workflows <https://github.com/chatmail/relay/pull/938>
* added dispatch for *automated builds of chatmail relay docker images* <https://github.com/chatmail/relay/pull/934>
* do not bind SMTP client sockets to public addresses <https://github.com/chatmail/relay/pull/932>
* underline in docs that scripts/initenv.sh should be used for building the docs <https://github.com/chatmail/relay/pull/933>
* automatic oldest-first message removal from mailboxes to always stay under max_mailbox_size <https://github.com/chatmail/relay/pull/929>
* remove --slow from cmdeploy test <https://github.com/chatmail/relay/pull/931>
* handle missing inotify sysctl keys in containers <https://github.com/chatmail/relay/pull/930>
* replace resolvconf with static resolv.conf <https://github.com/chatmail/relay/pull/928>
* disable fsync for LMTP and IMAP services <https://github.com/chatmail/relay/pull/925>
* re-use cmlxc workflow, replacing CI with hetzner staging servers with local lxc containers <https://github.com/chatmail/relay/pull/917>
* explicitly install resolvconf <https://github.com/chatmail/relay/pull/924>
* detect stale dovecot binary and force restart in activate() <https://github.com/chatmail/relay/pull/922>
* Rename filtermail_http_port to filtermail_http_port_incoming <https://github.com/chatmail/relay/pull/921>
* consolidated is_in_container() check <https://github.com/chatmail/relay/pull/920>
* restart dovecot after package replacement (rebase, test condense) <https://github.com/chatmail/relay/pull/913>
* Set permissions on dovecot pin prefs <https://github.com/chatmail/relay/pull/915>
* Route `/mxdeliv/` to configurable port <https://github.com/chatmail/relay/pull/901>
* fix VM detection, automated testing fixes, use newer chatmail-turn and move to standard BIND DNS zone format <https://github.com/chatmail/relay/pull/912>
* Upgrade to filtermail 0.6.1 <https://github.com/chatmail/relay/pull/910>
* pin dovecot packages to prevent apt upgrades <https://github.com/chatmail/relay/pull/908>
* add rpc server to cmdeploy along with client <https://github.com/chatmail/relay/pull/906>
* remove unused deps from chatmaild <https://github.com/chatmail/relay/pull/905>
* set default smtp_tls_security_level to "verify" unconditionally <https://github.com/chatmail/relay/pull/902>
* prefer IPv4 in SMTP client <https://github.com/chatmail/relay/pull/900>
* Install dovecot .deb packages atomically <https://github.com/chatmail/relay/pull/899>
* stop installing cron package <https://github.com/chatmail/relay/pull/898>
* Rewrite dovecot install logic, update <https://github.com/chatmail/relay/pull/862>
* fix a test and some linting fixes <https://github.com/chatmail/relay/pull/897>
* Disable IP verification on domain-literal addresses <https://github.com/chatmail/relay/pull/895>
* disable installing recommended packages globally on the relay <https://github.com/chatmail/relay/pull/887>
* multiple bug fixes across chatmaild and cmdeploy <https://github.com/chatmail/relay/pull/883>
* remove /metrics from the website <https://github.com/chatmail/relay/pull/703>
* add Prometheus textfile output to fsreport <https://github.com/chatmail/relay/pull/881>
* chown opendkim: private key <https://github.com/chatmail/relay/pull/879>
* make sure chatmail-metadata was started <https://github.com/chatmail/relay/pull/882>
* dovecot update url <https://github.com/chatmail/relay/pull/880>
* upgrade to filtermail v0.5.2 <https://github.com/chatmail/relay/pull/876>
* download dovecot packages from github release <https://github.com/chatmail/relay/pull/875>
* replace DKIM verification with filtermail v0.5 <https://github.com/chatmail/relay/pull/831>
* remove CFFI deltachat bindings usage, and consolidate test support with rpc-bindings <https://github.com/chatmail/relay/pull/872>
* prepare chatmaild/cmdeploy changes for Docker support <https://github.com/chatmail/relay/pull/857>
* stabilize online benchmark timing adding rate-limit-aware cooldown between iterations <https://github.com/chatmail/relay/pull/867>
* move rate-limit cooldown to benchmark fixture <https://github.com/chatmail/relay/pull/868>
* reconfigure acmetool from redirector to proxy mode <https://github.com/chatmail/relay/pull/861>
* make tests work with `--ssh-host localhost` <https://github.com/chatmail/relay/pull/856>
* mark f-string with f prefix in test_expunged <https://github.com/chatmail/relay/pull/863>
* install also if dovecot.service=False in SystemdEnabled Fact <https://github.com/chatmail/relay/pull/841>
* Introduce support for self-signed chatmail relays <https://github.com/chatmail/relay/pull/855>
* Strip Received headers before delivery <https://github.com/chatmail/relay/pull/849>
* upgrade to filtermail v0.3 <https://github.com/chatmail/relay/pull/850>
* fix link to Maddy and update madmail URL <https://github.com/chatmail/relay/pull/847>
* accept self-signed certificates for IP-only relays <https://github.com/chatmail/relay/pull/846>
* enforce sending from public IP addresses <https://github.com/chatmail/relay/pull/845>
* port check: check addresses, fix single services <https://github.com/chatmail/relay/pull/844>
* remediates issue with improper concat on resolver injection <https://github.com/chatmail/relay/pull/834>
* ipv6 boolean not being respected during operations <https://github.com/chatmail/relay/pull/832>
* upgrade to filtermail v0.2 <https://github.com/chatmail/relay/pull/825>
* fix link to filtermail <https://github.com/chatmail/relay/pull/824>
* print timestamps when sending messages <https://github.com/chatmail/relay/pull/823>
* fix flaky test_exceed_rate_limit <https://github.com/chatmail/relay/pull/822>
* Replace filtermail with rust reimplementation <https://github.com/chatmail/relay/pull/808>
* Set default internal SMTP ports in Config <https://github.com/chatmail/relay/pull/819>
* separate metrics for incoming and outgoing messages <https://github.com/chatmail/relay/pull/820>
* disable appending the Received header <https://github.com/chatmail/relay/pull/815>
* fail on errors in postfix/dovecot config <https://github.com/chatmail/relay/pull/813>
* tweak idle/hibernate metrics some more <https://github.com/chatmail/relay/pull/811>
* add config flag to export statistics <https://github.com/chatmail/relay/pull/806>
* add --website-only option to run subcommand <https://github.com/chatmail/relay/pull/768>
* Strip DKIM-Signature header before LMTP <https://github.com/chatmail/relay/pull/803>
* properly make sure that postfix gets restarted on failure <https://github.com/chatmail/relay/pull/802>
* expire.py: use absolute path to maildirsize <https://github.com/chatmail/relay/pull/807>
* pin Dovecot documentation URLs to version 2.3 <https://github.com/chatmail/relay/pull/800>
* try to use "build machine" and "deployment server" consistently <https://github.com/chatmail/relay/pull/797>
* adds instructions for migrating control machines <https://github.com/chatmail/relay/pull/795>
* use consistent naming schema in getting started <https://github.com/chatmail/relay/pull/793>
* remove jsok/serialize-workflow-action dependency <https://github.com/chatmail/relay/pull/790>
* streamline migration guide wording, provide titled steps <https://github.com/chatmail/relay/pull/789>
* increases default max mailbox size <https://github.com/chatmail/relay/pull/792>
* use daemon_name for OpenDKIM sign-verify decision instead of IP <https://github.com/chatmail/relay/pull/784>
### Features
- Automated per-user quota-keeping.
Replace daily timer-based message expire script
with Dovecot quota-warning-triggered cleanup (`chatmail-quota-expire`).
When a user reaches 90% of their mailbox quota,
Dovecot calls the new script which removes the largest and oldest messages
until usage drops below 80%.
The daily `chatmail-expire` timer now only handles deletion
of inactive user mailboxes.
After upgrading, run the following once to clean up
mailboxes that are already over quota::
/usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire \
400 /home/vmail/mail/YOURDOMAIN --sweep
## 1.9.0 2025-12-18

View File

@@ -10,7 +10,6 @@ dependencies = [
"filelock",
"requests",
"crypt-r >= 3.13.1 ; python_version >= '3.11'",
"domain-validator",
]
[tool.setuptools]
@@ -22,8 +21,8 @@ where = ['src']
[project.scripts]
doveauth = "chatmaild.doveauth:main"
chatmail-metadata = "chatmaild.metadata:main"
chatmail-expire = "chatmaild.expire:daily_expire_main"
chatmail-quota-expire = "chatmaild.expire:quota_expire_main"
chatmail-expire = "chatmaild.expire_inactive_users:main"
chatmail-quota-expire = "chatmaild.quota_expire:main"
chatmail-fsreport = "chatmaild.fsreport:main"
lastlogin = "chatmaild.lastlogin:main"
turnserver = "chatmaild.turnserver:main"

View File

@@ -1,8 +1,7 @@
import ipaddress
import os
from pathlib import Path
import iniconfig
from domain_validator import DomainValidator
from chatmaild.user import User
@@ -21,25 +20,11 @@ def read_config(inipath):
class Config:
def __init__(self, inipath, params):
self._inipath = inipath
raw_domain = params["mail_domain"]
self.mail_domain_bare = raw_domain
if is_valid_ipv4(raw_domain):
self.ipv4_relay = raw_domain
self.mail_domain = f"[{raw_domain}]"
self.postfix_myhostname = ipaddress.IPv4Address(raw_domain).reverse_pointer
else:
DomainValidator().validate_domain_re(raw_domain)
self.ipv4_relay = None
self.mail_domain = raw_domain
self.postfix_myhostname = raw_domain
self.mail_domain = params["mail_domain"]
self.max_user_send_per_minute = int(params.get("max_user_send_per_minute", 60))
self.max_user_send_burst_size = int(params.get("max_user_send_burst_size", 10))
self.max_mailbox_size = params["max_mailbox_size"]
self.max_message_size = int(params.get("max_message_size", "31457280"))
self.delete_mails_after = params["delete_mails_after"]
self.delete_large_after = params["delete_large_after"]
self.delete_inactive_users_after = int(params["delete_inactive_users_after"])
self.username_min_length = int(params["username_min_length"])
self.username_max_length = int(params["username_max_length"])
@@ -54,20 +39,19 @@ class Config:
self.filtermail_http_port_incoming = int(
params.get("filtermail_http_port_incoming", "10082")
)
self.filtermail_lmtp_port_transport = int(
params.get("filtermail_lmtp_port_transport", "10083")
)
self.postfix_reinject_port = int(params.get("postfix_reinject_port", "10025"))
self.postfix_reinject_port_incoming = int(
params.get("postfix_reinject_port_incoming", "10026")
)
self.mtail_address = params.get("mtail_address")
self.disable_ipv6 = params.get("disable_ipv6", "false").lower() == "true"
self.addr_v4 = os.environ.get("CHATMAIL_ADDR_V4", "")
self.addr_v6 = os.environ.get("CHATMAIL_ADDR_V6", "")
self.acme_email = params.get("acme_email", "")
self.imap_rawlog = params.get("imap_rawlog", "false").lower() == "true"
self.imap_compress = params.get("imap_compress", "false").lower() == "true"
if "iroh_relay" not in params:
self.iroh_relay = "https://" + raw_domain
self.iroh_relay = "https://" + params["mail_domain"]
self.enable_iroh_relay = True
else:
self.iroh_relay = params["iroh_relay"].strip()
@@ -93,17 +77,17 @@ class Config:
)
self.tls_cert_mode = "external"
self.tls_cert_path, self.tls_key_path = parts
elif raw_domain.startswith("_") or self.ipv4_relay:
elif self.mail_domain.startswith("_"):
self.tls_cert_mode = "self"
self.tls_cert_path = "/etc/ssl/certs/mailserver.pem"
self.tls_key_path = "/etc/ssl/private/mailserver.key"
else:
self.tls_cert_mode = "acme"
self.tls_cert_path = f"/var/lib/acme/live/{raw_domain}/fullchain"
self.tls_key_path = f"/var/lib/acme/live/{raw_domain}/privkey"
self.tls_cert_path = f"/var/lib/acme/live/{self.mail_domain}/fullchain"
self.tls_key_path = f"/var/lib/acme/live/{self.mail_domain}/privkey"
# deprecated option
mbdir = params.get("mailboxes_dir", f"/home/vmail/mail/{raw_domain}")
mbdir = params.get("mailboxes_dir", f"/home/vmail/mail/{self.mail_domain}")
self.mailboxes_dir = Path(mbdir.strip())
# old unused option (except for first migration from sqlite to maildir store)
@@ -129,7 +113,7 @@ class Config:
def parse_size_mb(limit):
"""Parse a size string like ``500M`` or ``2G`` and return megabytes."""
value = limit.strip().upper().removesuffix("B")
value = limit.strip().upper().rstrip("B")
if value.endswith("G"):
return int(value[:-1]) * 1024
if value.endswith("M"):
@@ -189,19 +173,3 @@ def get_default_config_content(mail_domain, **overrides):
lines.append(line)
content = "\n".join(lines)
return content
def is_valid_ipv4(address: str) -> bool:
"""Check if a mail_domain is an IPv4 address."""
try:
ipaddress.IPv4Address(address)
return True
except ValueError:
return False
def format_mail_domain(raw_domain: str) -> str:
if is_valid_ipv4(raw_domain):
return f"[{raw_domain}]"
DomainValidator().validate_domain_re(raw_domain)
return raw_domain

View File

@@ -4,26 +4,17 @@ Expire old messages and addresses.
"""
import os
import re
import shutil
import sys
import time
from argparse import ArgumentParser
from collections import namedtuple
from datetime import datetime
from pathlib import Path
from stat import S_ISREG
from chatmaild.config import read_config
FileEntry = namedtuple("FileEntry", ("path", "mtime", "size"))
QuotaFileEntry = namedtuple("QuotaFileEntry", ("mtime", "quota_size", "path"))
# Quota cleanup factor of max_mailbox_size. The mailbox is reset to this size.
QUOTA_CLEANUP_FACTOR = 0.7
# e.g. "cur/1775324677.M448978P3029757.exam,S=3235,W=3305:2,S"
_dovecot_fn_rex = re.compile(r".+/(\d+)\..+,S=(\d+)")
def iter_mailboxes(basedir, maxnum):
@@ -83,42 +74,6 @@ class MailboxStat:
self.extrafiles.sort(key=lambda x: -x.size)
def parse_dovecot_filename(relpath):
m = _dovecot_fn_rex.match(relpath)
if not m:
return None
return QuotaFileEntry(int(m.group(1)), int(m.group(2)), relpath)
def scan_mailbox_messages(mbox):
messages = []
for sub in ("cur", "new"):
for name in os_listdir_if_exists(mbox / sub):
if entry := parse_dovecot_filename(f"{sub}/{name}"):
messages.append(entry)
return messages
def expire_to_target(mbox, target_bytes):
messages = scan_mailbox_messages(mbox)
total_size = sum(m.quota_size for m in messages)
# Keep recent 24 hours of messages protected from expiry because
# likely something is wrong with interactions on that address
# and quota-full signal can help the address owner's device to notice it
undeletable_messages_cutoff = time.time() - (3600 * 24)
removed = 0
for entry in sorted(messages):
if total_size <= target_bytes:
break
if entry.mtime > undeletable_messages_cutoff:
break
(mbox / entry.path).unlink(missing_ok=True)
total_size -= entry.quota_size
removed += 1
return removed
def print_info(msg):
print(msg, file=sys.stderr)
@@ -160,11 +115,8 @@ class Expiry:
cutoff_without_login = (
self.now - int(self.config.delete_inactive_users_after) * 86400
)
cutoff_mails = self.now - int(self.config.delete_mails_after) * 86400
cutoff_large_mails = self.now - int(self.config.delete_large_after) * 86400
self.all_mboxes += 1
changed = False
if mbox.last_login and mbox.last_login < cutoff_without_login:
self.remove_mailbox(mbox.basedir)
return
@@ -176,45 +128,17 @@ class Expiry:
print_info(f"checking mailbox {date.strftime('%b %d')} {mboxname}")
else:
print_info(f"checking mailbox (no last_login) {mboxname}")
self.all_files += len(mbox.messages)
for message in mbox.messages:
if message.mtime < cutoff_mails:
self.remove_file(message.path, mtime=message.mtime)
elif message.size > 200000 and message.mtime < cutoff_large_mails:
# we only remove noticed large files (not unnoticed ones in new/)
parts = message.path.split("/")
if len(parts) >= 2 and parts[-2] == "cur":
self.remove_file(message.path, mtime=message.mtime)
else:
continue
changed = True
target_bytes = (
self.config.max_mailbox_size_mb * 1024 * 1024 * QUOTA_CLEANUP_FACTOR
)
removed = expire_to_target(Path(mbox.basedir), target_bytes)
if removed:
changed = True
self.del_files += removed
if self.verbose:
print_info(
f"quota-expire: removed {removed} message(s) from {mboxname}"
)
if changed:
self.remove_file(f"{mbox.basedir}/maildirsize")
def get_summary(self):
return (
f"Removed {self.del_mboxes} out of {self.all_mboxes} mailboxes "
f"and {self.del_files} out of {self.all_files} files in existing mailboxes "
f"in {time.time() - self.start:2.2f} seconds"
)
def daily_expire_main(args=None):
def main(args=None):
"""Expire mailboxes and messages according to chatmail config"""
parser = ArgumentParser(description=daily_expire_main.__doc__)
parser = ArgumentParser(description=main.__doc__)
ini = "/usr/local/lib/chatmaild/chatmail.ini"
parser.add_argument(
"chatmail_ini",
@@ -260,33 +184,5 @@ def daily_expire_main(args=None):
print(exp.get_summary())
def quota_expire_main(args=None):
"""Remove mailbox messages to stay within a megabyte target.
This entry point is called by dovecot when a quota threshold is passed.
"""
parser = ArgumentParser(description=quota_expire_main.__doc__)
parser.add_argument(
"target_mb",
type=int,
help="target mailbox size in megabytes",
)
parser.add_argument(
"mailbox_path",
type=Path,
help="path to a user mailbox",
)
args = parser.parse_args(args)
target_bytes = args.target_mb * 1024 * 1024
removed_count = expire_to_target(args.mailbox_path, target_bytes)
if removed_count:
(args.mailbox_path / "maildirsize").unlink(missing_ok=True)
print(
f"quota-expire: removed {removed_count} message(s)"
f" from {args.mailbox_path.name}",
file=sys.stderr,
)
return 0
if __name__ == "__main__":
main(sys.argv[1:])

View File

@@ -18,18 +18,11 @@ max_user_send_per_minute = 60
max_user_send_burst_size = 10
# maximum mailbox size of a chatmail address
# Oldest messages will be removed automatically, so mailboxes never run full.
max_mailbox_size = 500M
# maximum message size for an e-mail in bytes
max_message_size = 31457280
# days after which mails are unconditionally deleted
delete_mails_after = 20
# days after which large messages (>200k) are unconditionally deleted
delete_large_after = 7
# days after which users without a successful login are deleted (database and mails)
delete_inactive_users_after = 90

View File

@@ -70,9 +70,6 @@ class Metadata:
# Some tokens have expired, remove them.
with self._modify_tokens(addr) as _tokens:
pass
elif isinstance(tokens, list):
with self._modify_tokens(addr) as tokens:
token_list = list(tokens.keys())
else:
token_list = []
return token_list

View File

@@ -2,6 +2,7 @@
"""CGI script for creating new accounts."""
import ipaddress
import json
import secrets
import string
@@ -14,6 +15,16 @@ ALPHANUMERIC = string.ascii_lowercase + string.digits
ALPHANUMERIC_PUNCT = string.ascii_letters + string.digits + string.punctuation
def wrap_ip(host):
if host.startswith("[") and host.endswith("]"):
return host
try:
ipaddress.ip_address(host)
return f"[{host}]"
except ValueError:
return host
def create_newemail_dict(config: Config):
user = "".join(
secrets.choice(ALPHANUMERIC) for _ in range(config.username_max_length)
@@ -22,22 +33,16 @@ def create_newemail_dict(config: Config):
secrets.choice(ALPHANUMERIC_PUNCT)
for _ in range(config.password_min_length + 3)
)
return dict(email=f"{user}@{config.mail_domain}", password=f"{password}")
return dict(email=f"{user}@{wrap_ip(config.mail_domain)}", password=f"{password}")
def create_dclogin_url(config, email, password):
def create_dclogin_url(email, password):
"""Build a dclogin: URL with credentials and self-signed cert acceptance.
Uses ic=3 (AcceptInvalidCertificates) so chatmail clients
can connect to servers with self-signed TLS certificates.
"""
if config.ipv4_relay:
imap_host = "&ih=" + config.ipv4_relay
smtp_host = "&sh=" + config.ipv4_relay
else:
imap_host = ""
smtp_host = ""
return f"dclogin:{quote(email, safe='@[]')}?p={quote(password, safe='')}&v=1{imap_host}{smtp_host}&ic=3"
return f"dclogin:{quote(email, safe='@')}?p={quote(password, safe='')}&v=1&ic=3"
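As a quick illustration of the URL shape produced by the new version of this function, with invented credentials on a hypothetical IP-only relay:

```python
from urllib.parse import quote

# hypothetical credentials; any real relay generates its own
email = "user@[203.0.113.7]"
password = "s3cr3t!"

# same quoting rules as above: '@', '[' and ']' stay literal in the
# address, while the password is fully percent-encoded
url = f"dclogin:{quote(email, safe='@[]')}?p={quote(password, safe='')}&v=1&ic=3"
print(url)  # dclogin:user@[203.0.113.7]?p=s3cr3t%21&v=1&ic=3
```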
def print_new_account():
@@ -46,9 +51,7 @@ def print_new_account():
result = dict(email=creds["email"], password=creds["password"])
if config.tls_cert_mode == "self":
result["dclogin_url"] = create_dclogin_url(
config, creds["email"], creds["password"]
)
result["dclogin_url"] = create_dclogin_url(creds["email"], creds["password"])
print("Content-Type: application/json")
print("")

View File

@@ -0,0 +1,152 @@
"""
Remove messages from a mailbox to meet a size target.
Dovecot calls this script when a user's quota is near its limit.
Files are scored by ``size * age`` so that large, old messages
are removed first.
Usage::
quota_expire <target_mb> <mailbox_path>
"""
import os
import sys
import time
from argparse import ArgumentParser
from collections import namedtuple
from stat import S_ISREG
FileEntry = namedtuple("FileEntry", ("path", "mtime", "size"))
def _get_file_entry(path):
try:
st = os.stat(path)
except FileNotFoundError:
return None
if not S_ISREG(st.st_mode):
return None
return FileEntry(path, st.st_mtime, st.st_size)
def _listdir(path):
try:
return os.listdir(path)
except FileNotFoundError:
return []
def scan_mailbox_messages(mailbox_dir):
messages = []
for sub in ("cur", "new", "tmp"):
subdir = f"{mailbox_dir}/{sub}"
for name in _listdir(subdir):
entry = _get_file_entry(f"{subdir}/{name}")
if entry is not None:
messages.append(entry)
return messages
def _remove_stale_caches(mailbox_dir):
for name in ("maildirsize", "dovecot.index.cache"):
try:
os.unlink(f"{mailbox_dir}/{name}")
except FileNotFoundError:
pass
def expire_to_target(mailbox_dir, target_bytes, now=None):
"""Remove highest-scored files until total size <= *target_bytes*.
Returns the list of removed file paths.
"""
if now is None:
now = time.time()
messages = scan_mailbox_messages(mailbox_dir)
total_size = sum(m.size for m in messages)
if total_size <= target_bytes:
return []
# Score: large and old files get the highest score.
scored = sorted(
messages,
key=lambda m: m.size * (now - m.mtime),
reverse=True,
)
removed = []
for entry in scored:
if total_size <= target_bytes:
break
try:
os.unlink(entry.path)
except FileNotFoundError:
continue
total_size -= entry.size
removed.append(entry.path)
if removed:
_remove_stale_caches(mailbox_dir)
return removed
def main(args=None):
"""Remove mailbox messages to stay within a megabyte target."""
parser = ArgumentParser(description=main.__doc__)
parser.add_argument(
"target_mb",
type=int,
help="target mailbox size in megabytes",
)
parser.add_argument(
"mailbox_path",
help="path to a user mailbox, or with --sweep the mailboxes directory",
)
parser.add_argument(
"--sweep",
action="store_true",
help="sweep all mailboxes under mailbox_path",
)
args = parser.parse_args(args)
target_bytes = args.target_mb * 1024 * 1024
if args.sweep:
return _sweep(args.mailbox_path, target_bytes)
removed = expire_to_target(args.mailbox_path, target_bytes)
if removed:
print(
f"removed {len(removed)} file(s) from {args.mailbox_path}"
f" to reach {args.target_mb} MB target",
file=sys.stderr,
)
return 0
def _sweep(mailboxes_dir, target_bytes):
try:
names = os.listdir(mailboxes_dir)
except FileNotFoundError:
print(f"directory not found: {mailboxes_dir}", file=sys.stderr)
return 1
for name in sorted(names):
if "@" not in name:
continue
mbox = f"{mailboxes_dir}/{name}"
removed = expire_to_target(mbox, target_bytes)
if removed:
print(
f"removed {len(removed)} file(s) from {name}",
file=sys.stderr,
)
return 0
if __name__ == "__main__":
sys.exit(main())
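The `size * age` scoring that `expire_to_target` uses can be seen in isolation (file names, sizes, and ages below are invented for the demonstration):

```python
import time

now = time.time()
# (name, size_bytes, mtime) — invented sample files
files = [
    ("small_new", 1_000, now - 60),
    ("large_old", 900_000, now - 30 * 86_400),
    ("large_new", 900_000, now - 60),
]

# same key as expire_to_target: a large, old file scores highest
# and is therefore removed first
scored = sorted(files, key=lambda f: f[1] * (now - f[2]), reverse=True)
print([name for name, _, _ in scored])  # ['large_old', 'large_new', 'small_new']
```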

View File

@@ -31,11 +31,6 @@ def example_config(make_config):
return make_config("chat.example.org")
@pytest.fixture
def ipv4_config(make_config):
return make_config("1.3.3.7")
@pytest.fixture
def maildomain(example_config):
return example_config.mail_domain

View File

@@ -1,13 +1,6 @@
from contextlib import nullcontext as does_not_raise
import pytest
from chatmaild.config import (
format_mail_domain,
is_valid_ipv4,
parse_size_mb,
read_config,
)
from chatmaild.config import read_config
def test_read_config_basic(example_config):
@@ -20,12 +13,6 @@ def test_read_config_basic(example_config):
example_config = read_config(inipath)
assert example_config.max_user_send_per_minute == 37
assert example_config.mail_domain == "chat.example.org"
assert example_config.ipv4_relay is None
def test_read_config_ipv4(ipv4_config):
assert ipv4_config.ipv4_relay == "1.3.3.7"
assert ipv4_config.mail_domain == "[1.3.3.7]"
def test_read_config_basic_using_defaults(tmp_path, maildomain):
@@ -47,8 +34,6 @@ def test_read_config_testrun(make_config):
assert config.postfix_reinject_port == 10025
assert config.max_user_send_per_minute == 60
assert config.max_mailbox_size == "500M"
assert config.delete_mails_after == "20"
assert config.delete_large_after == "7"
assert config.username_min_length == 9
assert config.username_max_length == 9
assert config.password_min_length == 9
@@ -134,45 +119,3 @@ def test_config_tls_external_bad_format(make_config):
"tls_external_cert_and_key": "/only/one/path.pem",
},
)
def test_parse_size_mb():
assert parse_size_mb("500M") == 500
assert parse_size_mb("2G") == 2048
assert parse_size_mb(" 1g ") == 1024
assert parse_size_mb("100MB") == 100
assert parse_size_mb("256") == 256
def test_max_mailbox_size_mb(make_config):
config = make_config("chat.example.org")
assert config.max_mailbox_size == "500M"
assert config.max_mailbox_size_mb == 500
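The assertions above pin down `parse_size_mb` completely. A minimal self-contained sketch consistent with those tests (not necessarily the actual chatmaild implementation):

```python
def parse_size_mb(value: str) -> int:
    """Parse a human-readable size like '500M', '2G', or '256' into megabytes."""
    text = value.strip().upper().removesuffix("B")  # accepts 'MB'/'GB' too
    if text.endswith("G"):
        return int(text[:-1]) * 1024
    if text.endswith("M"):
        return int(text[:-1])
    # A bare number is interpreted as megabytes already.
    return int(text)
```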
@pytest.mark.parametrize(
["input", "result"],
[
("example.org", False),
("1.3.3.7", True),
("fe::1", False),
("ad.1e.dag.adf", False),
("12394142", False),
],
)
def test_is_valid_ipv4(input, result):
assert result == is_valid_ipv4(input)
@pytest.mark.parametrize(
["input", "result", "exception"],
[
("example.org", "example.org", does_not_raise()),
("1.3.3.7", "[1.3.3.7]", does_not_raise()),
("fe::1", None, pytest.raises(ValueError)),
("12394142", None, pytest.raises(ValueError)),
],
)
def test_format_mail_domain(input, result, exception):
with exception:
assert result == format_mail_domain(input)
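The parametrized cases above fully specify the observable behavior of `is_valid_ipv4` and `format_mail_domain`: an IPv4 literal is bracketed for use in an address, a plain domain passes through, and IPv6 or digit-only strings are rejected. A sketch consistent with those cases (the error message and exact rejection rule are assumptions):

```python
import ipaddress


def is_valid_ipv4(value: str) -> bool:
    try:
        ipaddress.IPv4Address(value)
        return True
    except ValueError:
        return False


def format_mail_domain(value: str) -> str:
    if is_valid_ipv4(value):
        # Address-literal form for the right-hand side of an e-mail address.
        return f"[{value}]"
    if ":" in value or value.isdigit():
        raise ValueError(f"not a domain or IPv4 address: {value!r}")
    return value
```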

View File

@@ -1,7 +1,7 @@
import time
from chatmaild.doveauth import AuthDictProxy
from chatmaild.expire import daily_expire_main as main_expire
from chatmaild.expire import main as main_expire
def test_login_timestamps(example_config):

View File

@@ -1,9 +1,6 @@
import itertools
import os
import random
import time
from datetime import datetime
from fnmatch import fnmatch
from pathlib import Path
import pytest
@@ -11,19 +8,13 @@ import pytest
from chatmaild.expire import (
FileEntry,
MailboxStat,
expire_to_target,
get_file_entry,
iter_mailboxes,
os_listdir_if_exists,
parse_dovecot_filename,
quota_expire_main,
scan_mailbox_messages,
)
from chatmaild.expire import daily_expire_main as expiry_main
from chatmaild.expire import main as expiry_main
from chatmaild.fsreport import main as report_main
MB = 1024 * 1024
def fill_mbox(folderdir):
password = folderdir.joinpath("password")
@@ -162,35 +153,6 @@ def test_expiry_cli_basic(example_config, mbox1):
expiry_main(args)
def test_expiry_cli_old_files(capsys, example_config, mbox1):
relpaths_old = ["cur/msg_old1", "cur/msg_old2"]
cutoff_days = int(example_config.delete_mails_after) + 1
create_new_messages(mbox1.basedir, relpaths_old, size=1000, days=cutoff_days)
relpaths_large = ["cur/msg_old_large1", "new/msg_old_large2"]
cutoff_days = int(example_config.delete_large_after) + 1
create_new_messages(
mbox1.basedir, relpaths_large, size=1000 * 300, days=cutoff_days
)
create_new_messages(mbox1.basedir, ["cur/shouldstay"], size=1000 * 300, days=1)
args = str(example_config._inipath), "--remove", "-v"
expiry_main(args)
out, err = capsys.readouterr()
allpaths = relpaths_old + relpaths_large + ["maildirsize"]
for path in allpaths:
for line in err.split("\n"):
if fnmatch(line, f"removing*{path}"):
break
else:
if path != "new/msg_old_large2":
pytest.fail(f"failed to remove {path}\n{err}")
assert "shouldstay" not in err
def test_get_file_entry(tmp_path):
assert get_file_entry(str(tmp_path.joinpath("123123"))) is None
p = tmp_path.joinpath("x")
@@ -204,51 +166,3 @@ def test_os_listdir_if_exists(tmp_path):
tmp_path.joinpath("x").write_text("hello")
assert len(os_listdir_if_exists(str(tmp_path))) == 1
assert len(os_listdir_if_exists(str(tmp_path.joinpath("123123")))) == 0
# --- quota expire tests ---
_msg_counter = itertools.count(1)
def _create_message(basedir, sub, size, days_old=0, disk_size=None):
seq = next(_msg_counter)
mtime = int(time.time() - days_old * 86400)
name = f"{mtime}.M1P1Q{seq}.hostname,S={size},W={size}:2,S"
path = basedir / sub / name
path.parent.mkdir(parents=True, exist_ok=True)
path.write_bytes(b"x" * (disk_size if disk_size is not None else size))
os.utime(path, (mtime, mtime))
return path
def test_parse_dovecot_filename():
e = parse_dovecot_filename("cur/1775324677.M448978P3029757.exam,S=3235,W=3305:2,S")
assert e.path == "cur/1775324677.M448978P3029757.exam,S=3235,W=3305:2,S"
assert e.mtime == 1775324677
assert e.quota_size == 3235
assert parse_dovecot_filename("cur/msg_without_structure") is None
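Dovecot maildir filenames encode the delivery timestamp and, with the quota plugin, the message's quota size as `,S=<bytes>`. A sketch of a parser that satisfies the test above — the regex and the `FileEntry` field set are assumptions matching these assertions, not the real module:

```python
import re
from dataclasses import dataclass

# "<mtime>.<unique>,S=<quota bytes>" somewhere after cur/ or new/
_NAME_RE = re.compile(r"(?:^|/)(\d+)\.[^,/]+,S=(\d+)")


@dataclass
class FileEntry:
    path: str
    mtime: int
    quota_size: int


def parse_dovecot_filename(path: str):
    """Return a FileEntry for a Dovecot maildir filename, or None."""
    m = _NAME_RE.search(path)
    if m is None:
        return None
    return FileEntry(path=path, mtime=int(m.group(1)), quota_size=int(m.group(2)))
```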
def test_expire_to_target(tmp_path):
_create_message(tmp_path, "cur", MB, days_old=10, disk_size=100)
_create_message(tmp_path, "new", MB, days_old=5)
_create_message(tmp_path, "cur", MB, days_old=0) # undeletable (<1 hour)
assert len(scan_mailbox_messages(tmp_path)) == 3
# removes oldest first, uses S= size not disk size
removed = expire_to_target(tmp_path, MB)
assert removed == 2
msgs = scan_mailbox_messages(tmp_path)
assert len(msgs) == 1
# the surviving message is the fresh undeletable one
assert msgs[0].mtime > time.time() - 3600
def test_quota_expire_main(tmp_path, capsys):
mbox = tmp_path / "user@example.org"
_create_message(mbox, "cur", 2 * MB, days_old=5)
(mbox / "maildirsize").write_text("x")
quota_expire_main([str(1), str(mbox)])
_, err = capsys.readouterr()
assert "quota-expire: removed 1 message(s) from user@example.org" in err
assert not (mbox / "maildirsize").exists()

View File

@@ -372,14 +372,3 @@ def test_iroh_relay(dictproxy):
dictproxy.iroh_relay = "https://example.org/"
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b"Ohttps://example.org/\n"
def test_legacy_token_migration(metadata, testaddr):
with metadata.get_metadata_dict(testaddr).modify() as data:
data[metadata.DEVICETOKEN_KEY] = ["oldtoken1", "oldtoken2"]
assert metadata.get_tokens_for_addr(testaddr) == ["oldtoken1", "oldtoken2"]
mdict = metadata.get_metadata_dict(testaddr).read()
tokens = mdict[metadata.DEVICETOKEN_KEY]
assert isinstance(tokens, dict)
assert "oldtoken1" in tokens and "oldtoken2" in tokens
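The migration this test exercises upgrades a legacy list of device tokens into the newer dict form on first access. A minimal sketch under stated assumptions — the per-token value shape (`{}`) and the helper name are hypothetical:

```python
def migrate_tokens(data: dict, key: str = "devicetoken") -> list:
    """Upgrade a legacy token list to the dict form in place; return tokens."""
    tokens = data.get(key)
    if isinstance(tokens, list):
        # Legacy format: plain list of token strings.
        data[key] = {token: {} for token in tokens}
    return sorted(data.get(key, {}))
```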

View File

@@ -48,8 +48,6 @@ def test_migration(tmp_path, example_config, caplog):
assert passdb_path.stat().st_size > 10000
example_config.passdb_path = passdb_path
# ensure logging.info records are captured regardless of global configuration
caplog.set_level("INFO")
assert not caplog.records

View File

@@ -19,35 +19,24 @@ def test_create_newemail_dict(example_config):
assert ac1["password"] != ac2["password"]
def test_create_newemail_dict_ip(ipv4_config):
ac = create_newemail_dict(ipv4_config)
assert ac["email"].endswith("@[1.3.3.7]")
def test_create_newemail_dict_ip(make_config):
config = make_config("1.2.3.4")
ac = create_newemail_dict(config)
assert ac["email"].endswith("@[1.2.3.4]")
def test_create_dclogin_url(example_config):
addr = "user@example.org"
password = "p@ss w+rd"
url = create_dclogin_url(example_config, addr, password)
def test_create_dclogin_url():
url = create_dclogin_url("user@example.org", "p@ss w+rd")
assert url.startswith("dclogin:")
assert "v=1" in url
assert "ic=3" in url
assert addr in url
assert "user@example.org" in url
# password special chars must be encoded
assert "p%40ss" in url
assert "w%2Brd" in url
def test_create_dclogin_url_ipv4(ipv4_config):
addr = "user@[1.3.3.7]"
password = "p@ss w+rd"
url = create_dclogin_url(ipv4_config, addr, password)
assert url.startswith("dclogin:")
assert "v=1" in url
assert "ic=3" in url
assert addr in url
def test_print_new_account(capsys, monkeypatch, maildomain, tmpdir, example_config):
monkeypatch.setattr(chatmaild.newemail, "CONFIG_PATH", str(example_config._inipath))
print_new_account()

View File

@@ -0,0 +1,91 @@
import os
import time
from chatmaild.quota_expire import expire_to_target, scan_mailbox_messages
MB = 1024 * 1024
def _create_message(basedir, relpath, size, days_old=0):
path = basedir / relpath
path.parent.mkdir(parents=True, exist_ok=True)
path.write_bytes(b"x" * size)
mtime = time.time() - days_old * 86400
os.utime(path, (mtime, mtime))
return path
def test_scan_cur_new_tmp(tmp_path):
_create_message(tmp_path, "cur/msg1", 100)
_create_message(tmp_path, "new/msg2", 200)
_create_message(tmp_path, "tmp/msg3", 300)
messages = scan_mailbox_messages(str(tmp_path))
assert len(messages) == 3
sizes = sorted(m.size for m in messages)
assert sizes == [100, 200, 300]
def test_scan_ignores_subfolders(tmp_path):
_create_message(tmp_path, "cur/a", 10)
_create_message(tmp_path, ".DeltaChat/cur/b", 20)
assert len(scan_mailbox_messages(str(tmp_path))) == 1
def test_scan_empty(tmp_path):
assert scan_mailbox_messages(str(tmp_path)) == []
assert scan_mailbox_messages(str(tmp_path / "nope")) == []
def test_noop_under_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", MB)
assert expire_to_target(str(tmp_path), 2 * MB) == []
assert (tmp_path / "cur" / "msg1").exists()
def test_removes_to_target(tmp_path):
now = time.time()
for i in range(15):
_create_message(tmp_path, f"cur/msg{i:02d}", MB, days_old=i + 1)
removed = expire_to_target(str(tmp_path), 10 * MB, now=now)
assert len(removed) == 5
assert len(scan_mailbox_messages(str(tmp_path))) == 10
def test_scoring_prefers_large_old(tmp_path):
now = time.time()
_create_message(tmp_path, "cur/large_old", 2 * MB, days_old=30)
_create_message(tmp_path, "cur/small_new", MB, days_old=1)
removed = expire_to_target(str(tmp_path), 2 * MB, now=now)
assert len(removed) == 1
assert "large_old" in removed[0]
def test_scoring_large_new_beats_small_old(tmp_path):
now = time.time()
_create_message(tmp_path, "cur/big_new", 10 * MB, days_old=1)
_create_message(tmp_path, "cur/small_old", MB, days_old=5)
# big_new score: 10MB * 1d = 10 vs small_old score: 1MB * 5d = 5
removed = expire_to_target(str(tmp_path), 10 * MB, now=now)
assert len(removed) == 1
assert "big_new" in removed[0]
def test_exact_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", 5 * MB)
assert expire_to_target(str(tmp_path), 5 * MB) == []
def test_removes_stale_caches(tmp_path):
_create_message(tmp_path, "cur/msg1", 2 * MB, days_old=5)
(tmp_path / "maildirsize").write_text("x")
(tmp_path / "dovecot.index.cache").write_text("x")
expire_to_target(str(tmp_path), MB)
assert not (tmp_path / "maildirsize").exists()
assert not (tmp_path / "dovecot.index.cache").exists()
def test_no_cache_removal_when_under_limit(tmp_path):
_create_message(tmp_path, "cur/msg1", MB)
(tmp_path / "maildirsize").write_text("x")
expire_to_target(str(tmp_path), 2 * MB)
assert (tmp_path / "maildirsize").exists()

View File

@@ -1,4 +1,6 @@
from pyinfra.operations import apt, server
import importlib.resources
from pyinfra.operations import apt, files, server, systemd
from ..basedeploy import Deployer
@@ -7,6 +9,9 @@ class AcmetoolDeployer(Deployer):
def __init__(self, email, domains):
self.domains = domains
self.email = email
self.need_restart_redirector = False
self.need_restart_reconcile_service = False
self.need_restart_reconcile_timer = False
def install(self):
apt.packages(
@@ -14,41 +19,121 @@ class AcmetoolDeployer(Deployer):
packages=["acmetool"],
)
self.remove_file("/etc/cron.d/acmetool")
files.file(
name="Remove old acmetool cronjob; it is replaced by a systemd timer.",
path="/etc/cron.d/acmetool",
present=False,
)
self.put_executable("acmetool/acmetool.hook", "/etc/acme/hooks/nginx")
self.remove_file("/usr/lib/acme/hooks/nginx")
files.put(
name="Install acmetool hook.",
src=importlib.resources.files(__package__)
.joinpath("acmetool.hook")
.open("rb"),
dest="/etc/acme/hooks/nginx",
user="root",
group="root",
mode="755",
)
files.file(
name="Remove acmetool hook from the wrong location where it was previously installed.",
path="/usr/lib/acme/hooks/nginx",
present=False,
)
def configure(self):
self.put_template(
"acmetool/response-file.yaml.j2",
"/var/lib/acme/conf/responses",
files.template(
src=importlib.resources.files(__package__).joinpath(
"response-file.yaml.j2"
),
dest="/var/lib/acme/conf/responses",
user="root",
group="root",
mode="644",
email=self.email,
)
self.put_template(
"acmetool/target.yaml.j2",
"/var/lib/acme/conf/target",
files.template(
src=importlib.resources.files(__package__).joinpath("target.yaml.j2"),
dest="/var/lib/acme/conf/target",
user="root",
group="root",
mode="644",
)
server.shell(
name=f"Remove old acmetool desired files for {self.domains[0]}",
commands=[f"rm -f /var/lib/acme/desired/{self.domains[0]}-*"],
)
self.put_template(
"acmetool/desired.yaml.j2",
f"/var/lib/acme/desired/{self.domains[0]}",
files.template(
src=importlib.resources.files(__package__).joinpath("desired.yaml.j2"),
dest=f"/var/lib/acme/desired/{self.domains[0]}",  # domains[0] is the mail host domain
user="root",
group="root",
mode="644",
domains=self.domains,
)
self.ensure_systemd_unit("acmetool/acmetool-redirector.service")
self.ensure_systemd_unit("acmetool/acmetool-reconcile.service")
self.ensure_systemd_unit("acmetool/acmetool-reconcile.timer")
service_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-redirector.service"
),
dest="/etc/systemd/system/acmetool-redirector.service",
user="root",
group="root",
mode="644",
)
self.need_restart_redirector = service_file.changed
reconcile_service_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-reconcile.service"
),
dest="/etc/systemd/system/acmetool-reconcile.service",
user="root",
group="root",
mode="644",
)
self.need_restart_reconcile_service = reconcile_service_file.changed
reconcile_timer_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-reconcile.timer"
),
dest="/etc/systemd/system/acmetool-reconcile.timer",
user="root",
group="root",
mode="644",
)
self.need_restart_reconcile_timer = reconcile_timer_file.changed
def activate(self):
self.ensure_service("acmetool-redirector.service")
self.ensure_service("acmetool-reconcile.service", running=False, enabled=False)
self.ensure_service("acmetool-reconcile.timer")
systemd.service(
name="Setup acmetool-redirector service",
service="acmetool-redirector.service",
running=True,
enabled=True,
restarted=self.need_restart_redirector,
)
self.need_restart_redirector = False
systemd.service(
name="Setup acmetool-reconcile service",
service="acmetool-reconcile.service",
running=False,
enabled=False,
daemon_reload=self.need_restart_reconcile_service,
)
self.need_restart_reconcile_service = False
systemd.service(
name="Setup acmetool-reconcile timer",
service="acmetool-reconcile.timer",
running=True,
enabled=True,
daemon_reload=self.need_restart_reconcile_timer,
)
self.need_restart_reconcile_timer = False
server.shell(
name=f"Reconcile certificates for: {', '.join(self.domains)}",

View File

@@ -4,7 +4,6 @@ import os
from contextlib import contextmanager
from pyinfra import host
from pyinfra.facts.files import Sha256File
from pyinfra.facts.server import Command
from pyinfra.operations import files, server, systemd
@@ -51,10 +50,11 @@ def get_resource(arg, pkg=__package__):
return importlib.resources.files(pkg).joinpath(arg)
def configure_remote_units(deployer, mail_domain, units) -> None:
def configure_remote_units(mail_domain, units) -> None:
remote_base_dir = "/usr/local/lib/chatmaild"
remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
# install systemd units
for fn in units:
@@ -70,13 +70,15 @@ def configure_remote_units(deployer, mail_domain, units) -> None:
source_path = get_resource(f"service/{basename}.f")
content = source_path.read_text().format(**params).encode()
deployer.put_file(
files.put(
name=f"Upload {basename}",
src=io.BytesIO(content),
dest=f"/etc/systemd/system/{basename}",
**root_owned,
)
def activate_remote_units(deployer, units) -> None:
def activate_remote_units(units) -> None:
# activate systemd units
for fn in units:
basename = fn if "." in fn else f"{fn}.service"
@@ -86,8 +88,14 @@ def activate_remote_units(deployer, units) -> None:
enabled = False
else:
enabled = True
deployer.ensure_service(basename, running=enabled, enabled=enabled)
systemd.service(
name=f"Setup {basename}",
service=basename,
running=enabled,
enabled=enabled,
restarted=enabled,
daemon_reload=True,
)
class Deployment:
@@ -133,7 +141,6 @@ class Deployment:
class Deployer:
need_restart = False
daemon_reload = False
def install(self):
pass
@@ -143,113 +150,3 @@ class Deployer:
def activate(self):
pass
def ensure_service(self, service, running=True, enabled=True):
if running:
verb = "Start and enable"
else:
verb = "Stop"
systemd.service(
name=f"{verb} {service}",
service=service,
running=running,
enabled=enabled,
restarted=self.need_restart if running else False,
daemon_reload=self.daemon_reload,
)
self.daemon_reload = False
def ensure_systemd_unit(self, src, **kwargs):
dest_name = src.split("/")[-1].replace(".j2", "")
dest = f"/etc/systemd/system/{dest_name}"
if src.endswith(".j2"):
return self.put_template(src, dest, **kwargs)
return self.put_file(src, dest)
def put_file(self, src, dest, mode="644"):
if isinstance(src, str):
src = get_resource(src)
res = files.put(
name=f"Upload {dest}",
src=src,
dest=dest,
user="root",
group="root",
mode=mode,
)
return self._update_restart_signals(dest, res)
def put_executable(self, src, dest):
return self.put_file(src, dest, mode="755")
def put_template(self, src, dest, owner="root", **kwargs):
if isinstance(src, str):
src = get_resource(src)
res = files.template(
name=f"Upload {dest}",
src=src,
dest=dest,
user=owner,
group=owner,
mode="644",
**kwargs,
)
return self._update_restart_signals(dest, res)
def remove_file(self, dest):
res = files.file(name=f"Remove {dest}", path=dest, present=False)
return self._update_restart_signals(dest, res)
def ensure_line(self, path, line, **kwargs):
name = kwargs.pop("name", f"Ensure line in {path}")
res = files.line(name=name, path=path, line=line, **kwargs)
return self._update_restart_signals(path, res)
def ensure_directory(self, path, owner="root", mode="755", **kwargs):
name = kwargs.pop("name", f"Ensure directory {path}")
res = files.directory(
name=name,
path=path,
user=owner,
group=owner,
mode=mode,
present=True,
**kwargs,
)
return self._update_restart_signals(path, res)
def remove_directory(self, path, **kwargs):
name = kwargs.pop("name", f"Remove directory {path}")
res = files.directory(name=name, path=path, present=False, **kwargs)
return self._update_restart_signals(path, res)
def download_executable(self, url, dest, sha256sum, extract=None):
existing = host.get_fact(Sha256File, dest)
if existing == sha256sum:
return
tmp = f"{dest}.new"
if extract:
dl_cmd = f"curl -fSL {url} | {extract} >{tmp}"
else:
dl_cmd = f"curl -fSL {url} -o {tmp}"
server.shell(
name=f"Download {dest}",
commands=[
f"({dl_cmd}"
f" && echo '{sha256sum} {tmp}' | sha256sum -c"
f" && mv {tmp} {dest})",
f"chmod 755 {dest}",
],
)
self.need_restart = True
def _update_restart_signals(self, path, res):
if res.changed:
self.need_restart = True
if str(path).startswith("/etc/systemd/system/"):
self.daemon_reload = True
return res

View File

@@ -87,12 +87,10 @@ def run_cmd_options(parser):
def run_cmd(args, out):
"""Deploy chatmail services on the remote server."""
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain_bare
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host)
require_iroh = args.config.enable_iroh_relay
strict_tls = args.config.tls_cert_mode == "acme"
if args.config.ipv4_relay:
args.dns_check_disabled = True
if not args.dns_check_disabled:
remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
if not dns.check_initial_remote_data(remote_data, strict_tls=strict_tls, print=out.red):
@@ -103,6 +101,9 @@ def run_cmd(args, out):
env["CHATMAIL_WEBSITE_ONLY"] = "True" if args.website_only else ""
env["CHATMAIL_DISABLE_MAIL"] = "True" if args.disable_mail else ""
env["CHATMAIL_REQUIRE_IROH"] = "True" if require_iroh else ""
if not args.dns_check_disabled:
env["CHATMAIL_ADDR_V4"] = remote_data.get("A") or ""
env["CHATMAIL_ADDR_V6"] = remote_data.get("AAAA") or ""
deploy_path = importlib.resources.files(__package__).joinpath("run.py").resolve()
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
@@ -121,8 +122,6 @@ def run_cmd(args, out):
elif not args.dns_check_disabled and strict_tls and not remote_data["acme_account_url"]:
out.red("Deploy completed but letsencrypt not configured")
out.red("Run 'cmdeploy run' again")
elif args.config.ipv4_relay:
out.green("Deploy completed.")
else:
out.green("Deploy completed, call `cmdeploy dns` next.")
return 0
@@ -144,10 +143,6 @@ def dns_cmd_options(parser):
def dns_cmd(args, out):
"""Check DNS entries and optionally generate dns zone file."""
if args.config.ipv4_relay:
ipv4 = args.config.ipv4_relay
print(f"[WARNING] {ipv4} is not a domain, skipping DNS checks.")
return 0
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host, verbose=args.verbose)
tls_cert_mode = args.config.tls_cert_mode
@@ -185,7 +180,7 @@ def status_cmd_options(parser):
def status_cmd(args, out):
"""Display status for online chatmail instance."""
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain_bare
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host, verbose=args.verbose)
out.green(f"chatmail domain: {args.config.mail_domain}")
@@ -199,6 +194,12 @@ def status_cmd(args, out):
def test_cmd_options(parser):
parser.add_argument(
"--slow",
dest="slow",
action="store_true",
help="also run slow tests",
)
add_ssh_host_option(parser)
@@ -220,6 +221,8 @@ def test_cmd(args, out):
"-v",
"--durations=5",
]
if args.slow:
pytest_args.append("--slow")
ret = out.run_ret(pytest_args, env=env)
return ret

View File

@@ -12,6 +12,7 @@ from chatmaild.config import read_config
from pyinfra import facts, host, logger
from pyinfra.api import FactBase
from pyinfra.facts import hardware
from pyinfra.facts.files import Sha256File
from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd
@@ -24,6 +25,7 @@ from .basedeploy import (
activate_remote_units,
blocked_service_startup,
configure_remote_units,
get_resource,
has_systemd,
is_in_container,
)
@@ -80,22 +82,25 @@ def remove_legacy_artifacts():
)
def _install_remote_venv_with_chatmaild(deployer) -> None:
def _install_remote_venv_with_chatmaild() -> None:
remove_legacy_artifacts()
dist_file = _build_chatmaild(dist_dir=Path("chatmaild/dist"))
remote_base_dir = "/usr/local/lib/chatmaild"
remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}"
remote_venv_dir = f"{remote_base_dir}/venv"
root_owned = dict(user="root", group="root", mode="644")
apt.packages(
name="apt install python3-virtualenv",
packages=["python3-virtualenv"],
)
deployer.ensure_directory(f"{remote_base_dir}/dist")
deployer.put_file(
files.put(
name="Upload chatmaild source package",
src=dist_file.open("rb"),
dest=remote_dist_file,
create_remote_dir=True,
**root_owned,
)
pip.virtualenv(
@@ -117,54 +122,46 @@ def _install_remote_venv_with_chatmaild(deployer) -> None:
)
def _configure_remote_venv_with_chatmaild(deployer, config) -> None:
def _configure_remote_venv_with_chatmaild(config) -> None:
remote_base_dir = "/usr/local/lib/chatmaild"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
deployer.put_file(
files.put(
name=f"Upload {remote_chatmail_inipath}",
src=config._getbytefile(),
dest=remote_chatmail_inipath,
**root_owned,
)
deployer.remove_file("/etc/cron.d/chatmail-metrics")
deployer.remove_file("/var/www/html/metrics")
files.file(
path="/etc/cron.d/chatmail-metrics",
present=False,
)
files.file(
path="/var/www/html/metrics",
present=False,
)
class UnboundDeployer(Deployer):
def __init__(self, config):
self.config = config
self.need_restart = False
def install(self):
# Run local DNS resolver `unbound`. `resolvconf` takes care of
# setting up /etc/resolv.conf to use 127.0.0.1 as the resolver.
# On an IPv4-only system, if unbound is started but not configured,
# it causes subsequent steps to fail to resolve hosts.
with blocked_service_startup():
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
packages=["unbound", "unbound-anchor", "dnsutils", "resolvconf"],
)
def configure(self):
# Remove dynamic resolver managers that compete for /etc/resolv.conf.
apt.packages(
name="Purge resolvconf",
packages=["resolvconf"],
present=False,
extra_uninstall_args="--purge",
)
# systemd-resolved can't be purged due to dependencies; stop and mask.
server.shell(
name="Stop and mask systemd-resolved",
commands=[
"systemctl stop systemd-resolved.service || true",
"systemctl mask systemd-resolved.service",
],
)
# Configure unbound resolver with Quad9 fallback and a trailing newline
# (SolusVM bug).
self.put_file(
src=BytesIO(b"nameserver 127.0.0.1\nnameserver 9.9.9.9\n"),
dest="/etc/resolv.conf",
)
server.shell(
name="Generate root keys for validating DNSSEC",
commands=[
@@ -172,15 +169,26 @@ class UnboundDeployer(Deployer):
],
)
if self.config.disable_ipv6:
self.ensure_directory(
files.directory(
path="/etc/unbound/unbound.conf.d",
present=True,
user="root",
group="root",
mode="755",
)
self.put_template(
"unbound/unbound.conf.j2",
"/etc/unbound/unbound.conf.d/chatmail.conf",
conf = files.put(
src=get_resource("unbound/unbound.conf.j2"),
dest="/etc/unbound/unbound.conf.d/chatmail.conf",
user="root",
group="root",
mode="644",
)
else:
self.remove_file("/etc/unbound/unbound.conf.d/chatmail.conf")
conf = files.file(
path="/etc/unbound/unbound.conf.d/chatmail.conf",
present=False,
)
self.need_restart |= conf.changed
def activate(self):
server.shell(
@@ -190,25 +198,27 @@ class UnboundDeployer(Deployer):
],
)
self.ensure_service("unbound.service")
self.ensure_service(
"unbound-resolvconf.service",
running=False,
enabled=False,
systemd.service(
name="Start and enable unbound",
service="unbound.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
class MtastsDeployer(Deployer):
def configure(self):
# Remove configuration.
self.remove_file("/etc/mta-sts-daemon.yml")
self.remove_directory("/usr/local/lib/postfix-mta-sts-resolver")
self.remove_file("/etc/systemd/system/mta-sts-daemon.service")
files.file("/etc/mta-sts-daemon.yml", present=False)
files.directory("/usr/local/lib/postfix-mta-sts-resolver", present=False)
files.file("/etc/systemd/system/mta-sts-daemon.service", present=False)
def activate(self):
self.ensure_service(
"mta-sts-daemon.service",
systemd.service(
name="Stop MTA-STS daemon",
service="mta-sts-daemon.service",
daemon_reload=True,
running=False,
enabled=False,
)
@@ -219,7 +229,14 @@ class WebsiteDeployer(Deployer):
self.config = config
def install(self):
self.ensure_directory("/var/www")
files.directory(
name="Ensure /var/www exists",
path="/var/www",
user="root",
group="root",
mode="755",
present=True,
)
def configure(self):
www_path, src_dir, build_dir = get_paths(self.config)
@@ -249,11 +266,15 @@ class LegacyRemoveDeployer(Deployer):
# remove historic expunge script
# which is now implemented through a systemd timer (chatmail-expire)
self.remove_file("/etc/cron.d/expunge")
files.file(
path="/etc/cron.d/expunge",
present=False,
)
# Remove OBS repository key that is no longer used.
self.remove_file("/etc/apt/keyrings/obs-home-deltachat.gpg")
self.ensure_line(
files.file("/etc/apt/keyrings/obs-home-deltachat.gpg", present=False)
files.line(
name="Remove DeltaChat OBS home repository from sources.list",
path="/etc/apt/sources.list",
line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
escape_regex_characters=True,
@@ -261,7 +282,11 @@ class LegacyRemoveDeployer(Deployer):
)
# prior relay versions used filelogging
self.remove_directory("/var/log/journal/")
files.directory(
name="Ensure old logs on disk are deleted",
path="/var/log/journal/",
present=False,
)
# remove echobot if it is still running
if has_systemd() and host.get_fact(SystemdEnabled).get("echobot.service"):
systemd.service(
@@ -303,13 +328,22 @@ class TurnDeployer(Deployer):
"0fb3e792419494e21ecad536464929dba706bb2c88884ed8f1788141d26fc756",
),
}[host.get_fact(facts.server.Arch)]
self.download_executable(url, "/usr/local/bin/chatmail-turn", sha256sum)
existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/chatmail-turn")
if existing_sha256sum != sha256sum:
server.shell(
name="Download chatmail-turn",
commands=[
f"(curl -L {url} >/usr/local/bin/chatmail-turn.new && (echo '{sha256sum} /usr/local/bin/chatmail-turn.new' | sha256sum -c) && mv /usr/local/bin/chatmail-turn.new /usr/local/bin/chatmail-turn)",
"chmod 755 /usr/local/bin/chatmail-turn",
],
)
def configure(self):
configure_remote_units(self, self.mail_domain, self.units)
configure_remote_units(self.mail_domain, self.units)
def activate(self):
activate_remote_units(self, self.units)
activate_remote_units(self.units)
class IrohDeployer(Deployer):
@@ -327,30 +361,72 @@ class IrohDeployer(Deployer):
"f8ef27631fac213b3ef668d02acd5b3e215292746a3fc71d90c63115446008b1",
),
}[host.get_fact(facts.server.Arch)]
self.download_executable(
url,
"/usr/local/bin/iroh-relay",
sha256sum,
extract="gunzip | tar -xf - ./iroh-relay -O",
)
existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/iroh-relay")
if existing_sha256sum != sha256sum:
server.shell(
name="Download iroh-relay",
commands=[
f"(curl -L {url} | gunzip | tar -x -f - ./iroh-relay -O >/usr/local/bin/iroh-relay.new && (echo '{sha256sum} /usr/local/bin/iroh-relay.new' | sha256sum -c) && mv /usr/local/bin/iroh-relay.new /usr/local/bin/iroh-relay)",
"chmod 755 /usr/local/bin/iroh-relay",
],
)
self.need_restart = True
def configure(self):
self.ensure_systemd_unit("iroh-relay.service")
self.put_file("iroh-relay.toml", "/etc/iroh-relay.toml")
systemd_unit = files.put(
name="Upload iroh-relay systemd unit",
src=get_resource("iroh-relay.service"),
dest="/etc/systemd/system/iroh-relay.service",
user="root",
group="root",
mode="644",
)
self.need_restart |= systemd_unit.changed
iroh_config = files.put(
name="Upload iroh-relay config",
src=get_resource("iroh-relay.toml"),
dest="/etc/iroh-relay.toml",
user="root",
group="root",
mode="644",
)
self.need_restart |= iroh_config.changed
def activate(self):
self.ensure_service(
"iroh-relay.service",
systemd.service(
name="Start and enable iroh-relay",
service="iroh-relay.service",
running=True,
enabled=self.enable_iroh_relay,
restarted=self.need_restart,
)
self.need_restart = False
class JournaldDeployer(Deployer):
def configure(self):
self.put_file("journald.conf", "/etc/systemd/journald.conf")
journald_conf = files.put(
name="Configure journald",
src=get_resource("journald.conf"),
dest="/etc/systemd/journald.conf",
user="root",
group="root",
mode="644",
)
self.need_restart = journald_conf.changed
def activate(self):
self.ensure_service("systemd-journald.service")
systemd.service(
name="Start and enable journald",
service="systemd-journald.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
self.need_restart = False
class ChatmailVenvDeployer(Deployer):
@@ -366,14 +442,14 @@ class ChatmailVenvDeployer(Deployer):
)
def install(self):
_install_remote_venv_with_chatmaild(self)
_install_remote_venv_with_chatmaild()
def configure(self):
_configure_remote_venv_with_chatmaild(self, self.config)
configure_remote_units(self, self.config.mail_domain_bare, self.units)
_configure_remote_venv_with_chatmaild(self.config)
configure_remote_units(self.config.mail_domain, self.units)
def activate(self):
activate_remote_units(self, self.units)
activate_remote_units(self.units)
class ChatmailDeployer(Deployer):
@@ -387,9 +463,13 @@ class ChatmailDeployer(Deployer):
self.mail_domain = config.mail_domain
def install(self):
self.put_file(
files.put(
name="Disable installing recommended packages globally",
src=BytesIO(b'APT::Install-Recommends "false";\n'),
dest="/etc/apt/apt.conf.d/00InstallRecommends",
user="root",
group="root",
mode="644",
)
apt.update(name="apt update", cache_time=24 * 3600)
apt.upgrade(name="upgrade apt packages", auto_remove=True)
@@ -406,10 +486,13 @@ class ChatmailDeployer(Deployer):
def configure(self):
# metadata crashes if the mailboxes dir does not exist
self.ensure_directory(
str(self.config.mailboxes_dir),
owner="vmail",
files.directory(
name="Ensure vmail mailbox directory exists",
path=str(self.config.mailboxes_dir),
user="vmail",
group="vmail",
mode="700",
present=True,
)
# This file is used by auth proxy.
@@ -430,7 +513,12 @@ class FcgiwrapDeployer(Deployer):
)
def activate(self):
self.ensure_service("fcgiwrap.service")
systemd.service(
name="Start and enable fcgiwrap",
service="fcgiwrap.service",
running=True,
enabled=True,
)
class GithashDeployer(Deployer):
@@ -443,7 +531,12 @@ class GithashDeployer(Deployer):
git_diff = subprocess.check_output(["git", "diff"]).decode()
except Exception:
git_diff = ""
self.put_file(src=StringIO(git_hash + git_diff), dest="/etc/chatmail-version")
files.put(
name="Upload chatmail relay git commit hash",
src=StringIO(git_hash + git_diff),
dest="/etc/chatmail-version",
mode="700",
)
def get_tls_deployer(config, mail_domain):
@@ -469,24 +562,26 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
"""
config = read_config(config_path)
check_config(config)
bare_host = config.mail_domain_bare
mail_domain = config.mail_domain
if website_only:
Deployment().perform_stages([WebsiteDeployer(config)])
return
if host.get_fact(Port, port=53) != "unbound":
files.line(
name="Add 9.9.9.9 to resolv.conf",
path="/etc/resolv.conf",
# Guard against resolv.conf missing a trailing newline (SolusVM bug).
line="\nnameserver 9.9.9.9",
)
# Check if mtail_address interface is available (if configured)
if config.mtail_address and config.mtail_address not in (
"127.0.0.1",
"::1",
"localhost",
):
if config.mtail_address and config.mtail_address not in ('127.0.0.1', '::1', 'localhost'):
ipv4_addrs = host.get_fact(hardware.Ipv4Addrs)
all_addresses = [addr for addrs in ipv4_addrs.values() for addr in addrs]
if config.mtail_address not in all_addresses:
Out().red(
f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n"
)
Out().red(f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n")
exit(1)
if not is_in_container():
@@ -526,7 +621,7 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
)
exit(1)
tls_deployer = get_tls_deployer(config, bare_host)
tls_deployer = get_tls_deployer(config, mail_domain)
all_deployers = [
ChatmailDeployer(config),
@@ -534,13 +629,13 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
FiltermailDeployer(),
JournaldDeployer(),
UnboundDeployer(config),
TurnDeployer(bare_host),
TurnDeployer(mail_domain),
IrohDeployer(config.enable_iroh_relay),
tls_deployer,
WebsiteDeployer(config),
ChatmailVenvDeployer(config),
MtastsDeployer(),
*([] if config.ipv4_relay else [OpendkimDeployer(bare_host)]),
OpendkimDeployer(mail_domain),
# Dovecot should be started before Postfix
# because it creates authentication socket
# required by Postfix.

View File

@@ -5,13 +5,14 @@ from chatmaild.config import Config
from pyinfra import host
from pyinfra.facts.deb import DebPackages
from pyinfra.facts.server import Arch, Command, Sysctl
from pyinfra.operations import apt, files, server
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import (
Deployer,
activate_remote_units,
blocked_service_startup,
configure_remote_units,
get_resource,
is_in_container,
)
@@ -58,21 +59,26 @@ class DovecotDeployer(Deployer):
],
)
self.need_restart = True
self.put_file(
files.put(
name="Pin dovecot packages to block Debian dist-upgrades",
src=io.StringIO(
"Package: dovecot-*\n"
"Pin: version *\n"
"Pin-Priority: -1\n"
),
dest="/etc/apt/preferences.d/pin-dovecot",
user="root",
group="root",
mode="644",
)
def configure(self):
configure_remote_units(self, self.config.mail_domain_bare, self.units)
_configure_dovecot(self, self.config)
configure_remote_units(self.config.mail_domain, self.units)
config_restart, self.daemon_reload = _configure_dovecot(self.config)
self.need_restart |= config_restart
def activate(self):
activate_remote_units(self, self.units)
activate_remote_units(self.units)
# Detect stale binary: package installed but service still runs old (deleted) binary.
if not self.disable_mail and not self.need_restart:
@@ -85,12 +91,19 @@ class DovecotDeployer(Deployer):
if stale == "STALE":
self.need_restart = True
active = not self.disable_mail
self.ensure_service(
"dovecot.service",
running=active,
enabled=active,
restart = False if self.disable_mail else self.need_restart
systemd.service(
name="Disable dovecot for now"
if self.disable_mail
else "Start and enable Dovecot",
service="dovecot.service",
running=False if self.disable_mail else True,
enabled=False if self.disable_mail else True,
restarted=restart,
daemon_reload=self.daemon_reload,
)
self.need_restart = False
def _pick_url(primary, fallback):
@@ -134,26 +147,46 @@ def _download_dovecot_package(package: str, arch: str) -> tuple[str | None, bool
return deb_filename, True
def _configure_dovecot(deployer, config: Config, debug: bool = False):
def _configure_dovecot(config: Config, debug: bool = False) -> tuple[bool, bool]:
"""Configures Dovecot IMAP server."""
deployer.put_template(
"dovecot/dovecot.conf.j2",
"/etc/dovecot/dovecot.conf",
need_restart = False
daemon_reload = False
main_config = files.template(
src=get_resource("dovecot/dovecot.conf.j2"),
dest="/etc/dovecot/dovecot.conf",
user="root",
group="root",
mode="644",
config=config,
debug=debug,
disable_ipv6=config.disable_ipv6,
)
deployer.put_file("dovecot/auth.conf", "/etc/dovecot/auth.conf")
deployer.put_file(
"dovecot/push_notification.lua", "/etc/dovecot/push_notification.lua"
need_restart |= main_config.changed
auth_config = files.put(
src=get_resource("dovecot/auth.conf"),
dest="/etc/dovecot/auth.conf",
user="root",
group="root",
mode="644",
)
need_restart |= auth_config.changed
lua_push_notification_script = files.put(
src=get_resource("dovecot/push_notification.lua"),
dest="/etc/dovecot/push_notification.lua",
user="root",
group="root",
mode="644",
)
need_restart |= lua_push_notification_script.changed
# as per https://doc.dovecot.org/2.3/configuration_manual/os/
# it is recommended to set the following inotify limits
can_modify = not is_in_container()
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
value = host.get_fact(Sysctl).get(key, 0)
value = host.get_fact(Sysctl)[key]
if value > 65534:
continue
if not can_modify:
@@ -170,20 +203,25 @@ def _configure_dovecot(deployer, config: Config, debug: bool = False):
persist=True,
)
deployer.ensure_line(
timezone_env = files.line(
name="Set TZ environment variable",
path="/etc/environment",
line="TZ=:/etc/localtime",
)
need_restart |= timezone_env.changed
deployer.put_file(
"service/10_restart_on_failure.conf",
"/etc/systemd/system/dovecot.service.d/10_restart.conf",
restart_conf = files.put(
name="dovecot: restart automatically on failure",
src=get_resource("service/10_restart.conf"),
dest="/etc/systemd/system/dovecot.service.d/10_restart.conf",
)
daemon_reload |= restart_conf.changed
# Validate dovecot configuration before restart
if deployer.need_restart:
if need_restart:
server.shell(
name="Validate dovecot configuration",
commands=["doveconf -n >/dev/null"],
)
return need_restart, daemon_reload
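Editor's note: the stale-binary check in `DovecotDeployer.activate` above (package upgraded but the service still executing the old, deleted binary) relies on a Linux detail worth spelling out: once an executable on disk is replaced, the running process's `/proc/<pid>/exe` symlink ends in ` (deleted)`. A self-contained sketch of that check — the helper name is an assumption, the deploy code runs an equivalent test remotely over SSH:

```python
# Sketch of one way to detect a "stale" running binary on Linux.
# After an upgrade replaces the executable, the running process's
# /proc/<pid>/exe symlink target is suffixed with " (deleted)".
import os


def binary_is_stale(pid: int) -> bool:
    try:
        target = os.readlink(f"/proc/{pid}/exe")
    except OSError:
        # No /proc entry (non-Linux, or process gone): treat as not stale.
        return False
    return target.endswith(" (deleted)")
```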

View File

@@ -7,7 +7,6 @@ listen = 0.0.0.0
protocols = imap lmtp
auth_mechanisms = plain
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@[]
{% if debug == true %}
auth_verbose = yes
@@ -150,25 +149,21 @@ plugin {
}
plugin {
# for now we define static quota-rules for all users
quota = maildir:User quota
quota_rule = *:storage={{ config.max_mailbox_size }}
quota_max_mail_size={{ config.max_message_size }}
quota_grace = 0
quota_rule = *:storage={{ config.max_mailbox_size_mb }}M
# Trigger at 75%% of quota, expire oldest messages down to 70%%.
# The percentages are chosen to prevent current Delta Chat users
# from seeing "quota warnings" which trigger at 80% and 95%.
quota_warning = storage=75%% quota-warning {{ config.max_mailbox_size_mb * 70 // 100 }} {{ config.mailboxes_dir }}/%u
# When a user reaches 90% quota, run chatmail-quota-expire
# to remove large/old messages until usage is below 80%.
quota_warning = storage=90%% quota-warning {{ config.max_mailbox_size_mb * 80 // 100 }} {{ config.mailboxes_dir }}/%u
}
service quota-warning {
executable = script /usr/local/lib/chatmaild/venv/bin/chatmail-quota-expire
user = vmail
unix_listener quota-warning {
user = vmail
mode = 0600
}
}
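Editor's note: with the `quota_warning` wiring above, Dovecot invokes `chatmail-quota-expire` with the 80% target and the user's maildir once usage crosses 90%. Per the commit description, the script removes the largest and oldest messages until usage drops below the target. A minimal standalone sketch of that selection strategy — the function name, argument shapes, and the largest-first/oldest-first ordering are assumptions, not the actual script:

```python
# Hypothetical sketch of the quota-expire selection strategy:
# drop the largest messages first (ties broken oldest-first)
# until usage is at or below the target. Not the real
# chatmail-quota-expire implementation.


def select_messages_to_remove(messages, usage, target):
    """messages: list of (size, mtime) tuples.

    Returns the indices of messages to remove so that
    usage minus the removed sizes is <= target.
    """
    order = sorted(
        range(len(messages)),
        key=lambda i: (-messages[i][0], messages[i][1]),
    )
    removed = []
    for i in order:
        if usage <= target:
            break
        usage -= messages[i][0]
        removed.append(i)
    return removed
```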

View File

@@ -1,7 +1,10 @@
import io
from pyinfra import host
from pyinfra.facts.files import File
from pyinfra.operations import files, systemd
from ..basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class ExternalTlsDeployer(Deployer):
@@ -20,22 +23,45 @@ class ExternalTlsDeployer(Deployer):
def configure(self):
# Verify cert and key exist on the remote host using pyinfra facts.
for path in (self.cert_path, self.key_path):
if host.get_fact(File, path=path) is None:
info = host.get_fact(File, path=path)
if info is None:
raise Exception(f"External TLS file not found on server: {path}")
self.ensure_systemd_unit(
"external/tls-cert-reload.path.j2",
cert_path=self.cert_path,
)
self.ensure_systemd_unit(
"external/tls-cert-reload.service",
# Deploy the .path unit (templated with the cert path).
# pkg=__package__ is required here because the resource files
# live in cmdeploy.external, not the default cmdeploy package.
source = get_resource("tls-cert-reload.path.f", pkg=__package__)
content = source.read_text().format(cert_path=self.cert_path).encode()
path_unit = files.put(
name="Upload tls-cert-reload.path",
src=io.BytesIO(content),
dest="/etc/systemd/system/tls-cert-reload.path",
user="root",
group="root",
mode="644",
)
service_unit = files.put(
name="Upload tls-cert-reload.service",
src=get_resource("tls-cert-reload.service", pkg=__package__),
dest="/etc/systemd/system/tls-cert-reload.service",
user="root",
group="root",
mode="644",
)
if path_unit.changed or service_unit.changed:
self.need_restart = True
def activate(self):
# No explicit reload needed here: dovecot/nginx read the cert
# on startup, and the .path watcher handles live changes.
self.ensure_service(
"tls-cert-reload.path",
systemd.service(
name="Enable tls-cert-reload path watcher",
service="tls-cert-reload.path",
running=True,
enabled=True,
restarted=self.need_restart,
daemon_reload=self.need_restart,
)
# No explicit reload needed here: dovecot/nginx read the cert
# on startup, and the .path watcher handles live changes.

View File

@@ -9,7 +9,7 @@
Description=Watch TLS certificate for changes
[Path]
PathChanged={{ cert_path }}
PathChanged={cert_path}
[Install]
WantedBy=multi-user.target

View File

@@ -1,40 +1,52 @@
import os
from pyinfra import facts, host
from pyinfra.operations import files, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class FiltermailDeployer(Deployer):
services = ["filtermail", "filtermail-incoming", "filtermail-transport"]
services = ["filtermail", "filtermail-incoming"]
bin_path = "/usr/local/bin/filtermail"
config_path = "/usr/local/lib/chatmaild/chatmail.ini"
def install(self):
local_bin = os.environ.get("CHATMAIL_FILTERMAIL_BINARY")
if local_bin:
self.put_executable(
src=local_bin,
dest=self.bin_path,
)
return
def __init__(self):
self.need_restart = False
def install(self):
arch = host.get_fact(facts.server.Arch)
url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.4/filtermail-{arch}"
url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.1/filtermail-{arch}"
sha256sum = {
"x86_64": "5295115952c72e4c4ec3c85546e094b4155a4c702c82bd71fcdcb744dc73adf6",
"aarch64": "6892244f17b8f26ccb465766e96028e7222b3c8adefca9fc6bfe9ff332ca8dff",
"x86_64": "48b3fb80c092d00b9b0a0ef77a8673496da3b9aed5ec1851e1df936d5589d62f",
"aarch64": "c65bd5f45df187d3d65d6965a285583a3be0f44a6916ff12909ff9a8d702c22e",
}[arch]
self.download_executable(url, self.bin_path, sha256sum)
self.need_restart |= files.download(
name="Download filtermail",
src=url,
sha256sum=sha256sum,
dest=self.bin_path,
mode="755",
).changed
def configure(self):
for service in self.services:
self.ensure_systemd_unit(
f"filtermail/{service}.service.j2",
self.need_restart |= files.template(
src=get_resource(f"filtermail/{service}.service.j2"),
dest=f"/etc/systemd/system/{service}.service",
user="root",
group="root",
mode="644",
bin_path=self.bin_path,
config_path=self.config_path,
)
).changed
def activate(self):
for service in self.services:
self.ensure_service(f"{service}.service")
systemd.service(
name=f"Start and enable {service}",
service=f"{service}.service",
running=True,
enabled=True,
restarted=self.need_restart,
daemon_reload=True,
)
self.need_restart = False

View File

@@ -1,11 +0,0 @@
[Unit]
Description=Chatmail transport service
[Service]
ExecStart={{ bin_path }} {{ config_path }} transport
Restart=always
RestartSec=30
User=vmail
[Install]
WantedBy=multi-user.target

View File

@@ -78,11 +78,3 @@ counter rejected_unencrypted_mail_count
/Rejected unencrypted mail/ {
rejected_unencrypted_mail_count++
}
counter quota_expire_runs
counter quota_expire_removed_files
/quota-expire: removed (?P<count>\d+) message\(s\)/ {
quota_expire_runs++
quota_expire_removed_files += $count
}

View File

@@ -1,7 +1,10 @@
from pyinfra import facts, host
from pyinfra.operations import apt
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import (
Deployer,
get_resource,
)
class MtailDeployer(Deployer):
@@ -15,30 +18,51 @@ class MtailDeployer(Deployer):
(url, sha256sum) = {
"x86_64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_amd64.tar.gz",
"d55cb601049c5e61eabab29998dbbcea95d480e5448544f9470337ba2eea882e",
"123c2ee5f48c3eff12ebccee38befd2233d715da736000ccde49e3d5607724e4",
),
"aarch64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_arm64.tar.gz",
"f748db8ad2a1e0b63684d4c8868cf6a373a20f7e6922e5ece601fff0ee00eb1a",
"aa04811c0929b6754408676de520e050c45dddeb3401881888a092c9aea89cae",
),
}[host.get_fact(facts.server.Arch)]
self.download_executable(
url,
"/usr/local/bin/mtail",
sha256sum,
extract="gunzip | tar -xf - mtail -O",
server.shell(
name="Download mtail",
commands=[
f"(echo '{sha256sum} /usr/local/bin/mtail' | sha256sum -c) || (curl -L {url} | gunzip | tar -x -f - mtail -O >/usr/local/bin/mtail.new && mv /usr/local/bin/mtail.new /usr/local/bin/mtail)",
"chmod 755 /usr/local/bin/mtail",
],
)
def configure(self):
# Using our own systemd unit instead of `/usr/lib/systemd/system/mtail.service`.
# This allows reading from journalctl instead of log files.
self.ensure_systemd_unit(
"mtail/mtail.service.j2",
files.template(
src=get_resource("mtail/mtail.service.j2"),
dest="/etc/systemd/system/mtail.service",
user="root",
group="root",
mode="644",
address=self.mtail_address or "127.0.0.1",
port=3903,
)
self.put_file("mtail/delivered_mail.mtail", "/etc/mtail/delivered_mail.mtail")
mtail_conf = files.put(
name="Mtail configuration",
src=get_resource("mtail/delivered_mail.mtail"),
dest="/etc/mtail/delivered_mail.mtail",
user="root",
group="root",
mode="644",
)
self.need_restart = mtail_conf.changed
def activate(self):
active = bool(self.mtail_address)
self.ensure_service("mtail.service", running=active, enabled=active)
systemd.service(
name="Start and enable mtail",
service="mtail.service",
running=bool(self.mtail_address),
enabled=bool(self.mtail_address),
restarted=self.need_restart,
)
self.need_restart = False
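Editor's note: the `server.shell` one-liner in `install` above encodes a check-then-download idiom: verify the existing binary against the pinned sha256 (`sha256sum -c`), download only on mismatch, write to a `.new` temp file, and `mv` it into place atomically. The same pattern in standalone Python, for illustration only (the deploy code shells out to `sha256sum`/`curl` rather than using this helper):

```python
# Sketch of the check-then-download idiom used in the mtail install
# step: skip the download when the file already matches the pinned
# sha256, and replace atomically on update.
import hashlib
import os
import urllib.request


def ensure_download(url, dest, sha256sum):
    """Return True if a (re)download happened."""
    if os.path.exists(dest):
        with open(dest, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest == sha256sum:
            return False  # already up to date, like `sha256sum -c`
    tmp = dest + ".new"
    urllib.request.urlretrieve(url, tmp)
    os.replace(tmp, dest)  # atomic, like `mv mtail.new mtail`
    os.chmod(dest, 0o755)
    return True
```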

View File

@@ -1,13 +1,10 @@
[Unit]
Description=mtail
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/local/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs -"
Restart=on-failure
RestartSec=2s
[Install]
WantedBy=multi-user.target

View File

@@ -1,5 +1,5 @@
from chatmaild.config import Config
from pyinfra.operations import apt
from pyinfra.operations import apt, files, systemd
from cmdeploy.basedeploy import (
Deployer,
@@ -31,50 +31,87 @@ class NginxDeployer(Deployer):
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
#
self.put_executable(src="policy-rc.d", dest="/usr/sbin/policy-rc.d")
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
apt.packages(
name="Install nginx",
packages=["nginx", "libnginx-mod-stream"],
)
self.remove_file("/usr/sbin/policy-rc.d")
files.file("/usr/sbin/policy-rc.d", present=False)
def configure(self):
_configure_nginx(self, self.config)
self.need_restart = _configure_nginx(self.config)
def activate(self):
self.ensure_service("nginx.service")
systemd.service(
name="Start and enable nginx",
service="nginx.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
self.need_restart = False
def _configure_nginx(deployer, config: Config, debug: bool = False):
def _configure_nginx(config: Config, debug: bool = False) -> bool:
"""Configures nginx HTTP server."""
need_restart = False
deployer.put_template(
"nginx/nginx.conf.j2",
"/etc/nginx/nginx.conf",
main_config = files.template(
src=get_resource("nginx/nginx.conf.j2"),
dest="/etc/nginx/nginx.conf",
user="root",
group="root",
mode="644",
config=config,
disable_ipv6=config.disable_ipv6,
)
need_restart |= main_config.changed
deployer.put_template(
"nginx/autoconfig.xml.j2",
"/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
autoconfig = files.template(
src=get_resource("nginx/autoconfig.xml.j2"),
dest="/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
user="root",
group="root",
mode="644",
config=config,
)
need_restart |= autoconfig.changed
deployer.put_template(
"nginx/mta-sts.txt.j2",
"/var/www/html/.well-known/mta-sts.txt",
mta_sts_config = files.template(
src=get_resource("nginx/mta-sts.txt.j2"),
dest="/var/www/html/.well-known/mta-sts.txt",
user="root",
group="root",
mode="644",
config=config,
)
need_restart |= mta_sts_config.changed
# install CGI newemail script
#
cgi_dir = "/usr/lib/cgi-bin"
deployer.ensure_directory(cgi_dir)
files.directory(
name=f"Ensure {cgi_dir} exists",
path=cgi_dir,
user="root",
group="root",
)
deployer.put_executable(
files.put(
name="Upload cgi newemail.py script",
src=get_resource("newemail.py", pkg="chatmaild").open("rb"),
dest=f"{cgi_dir}/newemail.py",
user="root",
group="root",
mode="755",
)
return need_restart

View File

@@ -42,9 +42,6 @@ stream {
}
http {
# access_log setting is inherited by all server sections
access_log syslog:server=unix:/dev/log,facility=local7;
{% if config.tls_cert_mode == "self" %}
limit_req_zone $binary_remote_addr zone=newaccount:10m rate=2r/s;
{% endif %}
@@ -72,9 +69,11 @@ http {
index index.html index.htm;
server_name {{ config.mail_domain }} mta-sts.{{ config.mail_domain }};
server_name {{ config.mail_domain }} www.{{ config.mail_domain }} mta-sts.{{ config.mail_domain }};
location /mxdeliv {
access_log syslog:server=unix:/dev/log,facility=local7;
location /mxdeliv/ {
proxy_pass http://127.0.0.1:{{ config.filtermail_http_port_incoming }};
}
@@ -144,6 +143,7 @@ http {
listen 127.0.0.1:8443 ssl;
server_name www.{{ config.mail_domain }};
return 301 $scheme://{{ config.mail_domain }}$request_uri;
access_log syslog:server=unix:/dev/log,facility=local7;
}
server {

View File

@@ -4,9 +4,9 @@ Installs OpenDKIM
from pyinfra import host
from pyinfra.facts.files import File
from pyinfra.operations import apt, files, server
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class OpendkimDeployer(Deployer):
@@ -25,39 +25,65 @@ class OpendkimDeployer(Deployer):
domain = self.mail_domain
dkim_selector = "opendkim"
"""Configures OpenDKIM"""
need_restart = False
self.put_template(
"opendkim/opendkim.conf",
"/etc/opendkim.conf",
main_config = files.template(
src=get_resource("opendkim/opendkim.conf"),
dest="/etc/opendkim.conf",
user="root",
group="root",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= main_config.changed
self.remove_file("/etc/opendkim/screen.lua")
self.remove_file("/etc/opendkim/final.lua")
screen_script = files.file(
path="/etc/opendkim/screen.lua",
present=False,
)
need_restart |= screen_script.changed
self.ensure_directory(
"/etc/opendkim",
owner="opendkim",
final_script = files.file(
path="/etc/opendkim/final.lua",
present=False,
)
need_restart |= final_script.changed
files.directory(
name="Add opendkim directory to /etc",
path="/etc/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
self.put_template(
"opendkim/KeyTable",
"/etc/dkimkeys/KeyTable",
owner="opendkim",
keytable = files.template(
src=get_resource("opendkim/KeyTable"),
dest="/etc/dkimkeys/KeyTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= keytable.changed
self.put_template(
"opendkim/SigningTable",
"/etc/dkimkeys/SigningTable",
owner="opendkim",
signing_table = files.template(
src=get_resource("opendkim/SigningTable"),
dest="/etc/dkimkeys/SigningTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
self.ensure_directory(
"/var/spool/postfix/opendkim",
owner="opendkim",
need_restart |= signing_table.changed
files.directory(
name="Add opendkim socket directory to /var/spool/postfix",
path="/var/spool/postfix/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
if not host.get_fact(File, f"/etc/dkimkeys/{dkim_selector}.private"):
@@ -70,10 +96,12 @@ class OpendkimDeployer(Deployer):
_su_user="opendkim",
)
self.put_file(
"opendkim/systemd.conf",
"/etc/systemd/system/opendkim.service.d/10-prevent-memory-leak.conf",
service_file = files.put(
name="Configure opendkim to restart once a day",
src=get_resource("opendkim/systemd.conf"),
dest="/etc/systemd/system/opendkim.service.d/10-prevent-memory-leak.conf",
)
need_restart |= service_file.changed
files.file(
name="chown opendkim: /etc/dkimkeys/opendkim.private",
@@ -82,5 +110,15 @@ class OpendkimDeployer(Deployer):
group="opendkim",
)
self.need_restart = need_restart
def activate(self):
self.ensure_service("opendkim.service")
systemd.service(
name="Start and enable OpenDKIM",
service="opendkim.service",
running=True,
enabled=True,
daemon_reload=self.need_restart,
restarted=self.need_restart,
)
self.need_restart = False

View File

@@ -1,10 +1,11 @@
from pyinfra.operations import apt, server
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import Deployer
from cmdeploy.basedeploy import Deployer, get_resource
class PostfixDeployer(Deployer):
required_users = [("postfix", None, ["opendkim"])]
daemon_reload = False
def __init__(self, config, disable_mail):
self.config = config
@@ -18,46 +19,81 @@ class PostfixDeployer(Deployer):
def configure(self):
config = self.config
need_restart = False
self.put_template(
"postfix/main.cf.j2",
"/etc/postfix/main.cf",
main_config = files.template(
src=get_resource("postfix/main.cf.j2"),
dest="/etc/postfix/main.cf",
user="root",
group="root",
mode="644",
config=config,
disable_ipv6=config.disable_ipv6,
)
need_restart |= main_config.changed
self.put_template(
"postfix/master.cf.j2",
"/etc/postfix/master.cf",
master_config = files.template(
src=get_resource("postfix/master.cf.j2"),
dest="/etc/postfix/master.cf",
user="root",
group="root",
mode="644",
debug=False,
config=config,
)
need_restart |= master_config.changed
self.put_file(
"postfix/submission_header_cleanup",
"/etc/postfix/submission_header_cleanup",
header_cleanup = files.put(
src=get_resource("postfix/submission_header_cleanup"),
dest="/etc/postfix/submission_header_cleanup",
user="root",
group="root",
mode="644",
)
self.put_file("postfix/lmtp_header_cleanup", "/etc/postfix/lmtp_header_cleanup")
need_restart |= header_cleanup.changed
res = self.put_file(
"postfix/smtp_tls_policy_map", "/etc/postfix/smtp_tls_policy_map"
lmtp_header_cleanup = files.put(
src=get_resource("postfix/lmtp_header_cleanup"),
dest="/etc/postfix/lmtp_header_cleanup",
user="root",
group="root",
mode="644",
)
tls_policy_changed = res.changed
if tls_policy_changed:
need_restart |= lmtp_header_cleanup.changed
tls_policy_map = files.put(
name="Upload SMTP TLS Policy that accepts self-signed certificates for IP-only hosts",
src=get_resource("postfix/smtp_tls_policy_map"),
dest="/etc/postfix/smtp_tls_policy_map",
user="root",
group="root",
mode="644",
)
need_restart |= tls_policy_map.changed
if tls_policy_map.changed:
server.shell(
commands=["postmap /etc/postfix/smtp_tls_policy_map"],
)
# Login map that 1:1 maps email address to login.
self.put_file("postfix/login_map", "/etc/postfix/login_map")
self.put_file(
"service/10_restart_on_failure.conf",
"/etc/systemd/system/postfix@.service.d/10_restart.conf",
login_map = files.put(
src=get_resource("postfix/login_map"),
dest="/etc/postfix/login_map",
user="root",
group="root",
mode="644",
)
need_restart |= login_map.changed
restart_conf = files.put(
name="postfix: restart automatically on failure",
src=get_resource("service/10_restart.conf"),
dest="/etc/systemd/system/postfix@.service.d/10_restart.conf",
)
self.daemon_reload = restart_conf.changed
# Validate postfix configuration before restart
if self.need_restart:
if need_restart:
server.shell(
name="Validate postfix configuration",
# Extract stderr and quit with error if non-zero
@@ -65,11 +101,19 @@ class PostfixDeployer(Deployer):
"""bash -c 'w=$(postconf 2>&1 >/dev/null); [[ -z "$w" ]] || { echo "$w"; false; }'"""
],
)
self.need_restart = need_restart
def activate(self):
active = not self.disable_mail
self.ensure_service(
"postfix.service",
running=active,
enabled=active,
restart = False if self.disable_mail else self.need_restart
systemd.service(
name="disable postfix for now"
if self.disable_mail
else "Start and enable Postfix",
service="postfix.service",
running=False if self.disable_mail else True,
enabled=False if self.disable_mail else True,
restarted=restart,
daemon_reload=self.daemon_reload,
)
self.need_restart = False

View File

@@ -54,16 +54,14 @@ smtpd_tls_exclude_ciphers = aNULL, RC4, MD5, DES
tls_preempt_cipherlist = yes
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = {{ config.postfix_myhostname }}
myhostname = {{ config.mail_domain }}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
# When postfix receives mail for $mydestination,
# it hands it over to dovecot via $local_transport.
mydestination = {{ config.mail_domain }}
local_transport = lmtp:unix:private/dovecot-lmtp
# postfix doesn't check whether local users exist or not:
local_recipient_maps =
# Postfix does not deliver mail for any domain by itself.
# Primary domain is listed in `virtual_mailbox_domains` instead
# and handed over to Dovecot.
mydestination =
relayhost =
{% if disable_ipv6 %}
@@ -71,6 +69,15 @@ mynetworks = 127.0.0.0/8
{% else %}
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
{% endif %}
{% if config.addr_v4 %}
smtp_bind_address = {{ config.addr_v4 }}
{% endif %}
{% if config.addr_v6 %}
smtp_bind_address6 = {{ config.addr_v6 }}
{% endif %}
{% if config.addr_v4 or config.addr_v6 %}
smtp_bind_address_enforce = yes
{% endif %}
mailbox_size_limit = 0
message_size_limit = {{config.max_message_size}}
recipient_delimiter = +
@@ -81,6 +88,24 @@ inet_protocols = ipv4
inet_protocols = all
{% endif %}
# Postfix does not try IPv4 and IPv6 connections
# concurrently as of version 3.7.11.
#
# When relay has both A (IPv4) and AAAA (IPv6) records,
# but broken IPv6 connectivity,
# every second message is delayed by the connection timeout
# <https://www.postfix.org/postconf.5.html#smtp_connect_timeout>
# which defaults to 30 seconds. Reducing timeouts is not a solution
# as this will result in a failure to connect to slow servers.
#
# As a workaround we always prefer IPv4 when it is available.
#
# The setting is documented at
# <https://www.postfix.org/postconf.5.html#smtp_address_preference>
smtp_address_preference=ipv4
virtual_transport = lmtp:unix:private/dovecot-lmtp
virtual_mailbox_domains = {{ config.mail_domain }}
lmtp_header_checks = regexp:/etc/postfix/lmtp_header_cleanup
mua_client_restrictions = permit_sasl_authenticated, reject
@@ -93,12 +118,3 @@ smtpd_sender_login_maps = regexp:/etc/postfix/login_map
# Do not lookup SMTP client hostnames to reduce delays
# and avoid unnecessary DNS requests.
smtpd_peername_lookup = no
# Use filtermail-transport to relay messages.
# We can't force postfix to split messages per destination,
# when specifying a custom next-hop,
# so instead this is handled in filtermail.
# We use LMTP instead SMTP so we can communicate per-recipient errors back to postfix.
default_transport = lmtp-filtermail:inet:[127.0.0.1]:{{ config.filtermail_lmtp_port_transport }}
lmtp-filtermail_initial_destination_concurrency=10000
lmtp-filtermail_destination_concurrency_limit=10000

View File

@@ -80,9 +80,8 @@ filter unix - n n - - lmtp
127.0.0.1:{{ config.postfix_reinject_port }} inet n - n - 100 smtpd
-o syslog_name=postfix/reinject
-o milter_macro_daemon_name=ORIGINATING
-o smtpd_milters=unix:opendkim/opendkim.sock
-o cleanup_service_name=authclean
{% if not config.ipv4_relay %} -o smtpd_milters=unix:opendkim/opendkim.sock
{% endif %}
# Local SMTP server for reinjecting incoming filtered mail
127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 100 smtpd
@@ -101,8 +100,3 @@ filter unix - n n - - lmtp
# cannot send unprotected Subject.
authclean unix n - - - 0 cleanup
-o header_checks=regexp:/etc/postfix/submission_header_cleanup
lmtp-filtermail unix - - y - 10000 lmtp
-o syslog_name=postfix/lmtp-filtermail
-o lmtp_header_checks=
-o lmtp_tls_security_level=none

View File

@@ -64,25 +64,21 @@ def get_dkim_entry(mail_domain, pre_command, dkim_selector):
)
def get_authoritative_ns(domain):
ns_replies = [
def query_dns(typ, domain):
# Get the authoritative nameserver from the SOA record.
soa_answers = [
x.split()
for x in shell(
f"dig -r -q {domain} -t NS +noall +authority +answer", print=log_progress
f"dig -r -q {domain} -t SOA +noall +authority +answer", print=log_progress
).split("\n")
]
filtered_replies = [a for a in ns_replies if len(a) >= 5 and a[3] == "NS"]
if not filtered_replies:
soa = [a for a in soa_answers if len(a) >= 5 and a[3] == "SOA"]
if not soa:
return
return filtered_replies[0][4]
def query_dns(typ, domain):
ns = get_authoritative_ns(domain)
ns = soa[0][4]
# Query authoritative nameserver directly to bypass DNS cache.
direct_ns = f"@{ns}" if ns else ""
res = shell(f"dig {direct_ns} -r -q {domain} -t {typ} +short", print=log_progress)
res = shell(f"dig @{ns} -r -q {domain} -t {typ} +short", print=log_progress)
return next((line for line in res.split("\n") if not line.startswith(";")), "")
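Editor's note: the reworked `query_dns` above parses `dig ... -t SOA` output by whitespace-splitting each answer line. Fields of a dig answer line are `<name> <ttl> <class> <type> <rdata...>`, and for SOA records the first rdata field (index 4) is the primary nameserver (MNAME), which is why the code reads `a[3] == "SOA"` and `soa[0][4]`. A standalone sketch of just that parsing step, with an illustrative sample line:

```python
# Standalone sketch of the SOA parsing done in query_dns:
# dig answer lines are whitespace-separated as
#   <name> <ttl> <class> <type> <rdata...>
# and for SOA records, rdata field 0 (overall index 4) is the
# primary nameserver (MNAME).


def authoritative_ns(dig_output):
    answers = [line.split() for line in dig_output.split("\n")]
    soa = [a for a in answers if len(a) >= 5 and a[3] == "SOA"]
    return soa[0][4] if soa else None
```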

View File

@@ -1,8 +1,8 @@
import shlex
from pyinfra.operations import server
from pyinfra.operations import apt, server
from ..basedeploy import Deployer
from cmdeploy.basedeploy import Deployer
def openssl_selfsigned_args(domain, cert_path, key_path, days=36500):
@@ -34,7 +34,11 @@ class SelfSignedTlsDeployer(Deployer):
self.cert_path = "/etc/ssl/certs/mailserver.pem"
self.key_path = "/etc/ssl/private/mailserver.key"
def install(self):
apt.packages(
name="Install openssl",
packages=["openssl"],
)
def configure(self):
args = openssl_selfsigned_args(
@@ -48,5 +52,3 @@ class SelfSignedTlsDeployer(Deployer):
def activate(self):
pass

View File

@@ -12,7 +12,7 @@ def test_init(tmp_path, maildomain):
inipath = tmp_path.joinpath("chatmail.ini")
main(["init", "--config", str(inipath), maildomain])
config = read_config(inipath)
assert config.mail_domain_bare == maildomain
assert config.mail_domain == maildomain
def test_capabilities(imap):
@@ -89,11 +89,12 @@ def test_concurrent_logins_same_account(
assert login_results.get()
def test_no_vrfy(cmfactory, chatmail_config, maildomain):
def test_no_vrfy(cmfactory, chatmail_config):
ac = cmfactory.get_online_account()
addr = ac.get_config("addr")
domain = chatmail_config.mail_domain
s = smtplib.SMTP(maildomain)
s = smtplib.SMTP(domain)
s.starttls()
s.putcmd("vrfy", f"wrongaddress@{chatmail_config.mail_domain}")

View File

@@ -5,7 +5,6 @@ import subprocess
import time
import pytest
from chatmaild.config import is_valid_ipv4
from cmdeploy import remote
from cmdeploy.cmdeploy import get_sshexec
@@ -22,8 +21,6 @@ class TestSSHExecutor:
assert out == out2
def test_perform_initial(self, sshexec, maildomain):
if is_valid_ipv4(maildomain):
pytest.skip(f"{maildomain} is not a domain")
res = sshexec(
remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=maildomain)
)
@@ -64,10 +61,8 @@ class TestSSHExecutor:
else:
pytest.fail("didn't raise exception")
def test_opendkim_restarted(self, sshexec, maildomain):
def test_opendkim_restarted(self, sshexec):
"""check that opendkim is not running for longer than a day."""
if is_valid_ipv4(maildomain):
pytest.skip(f"{maildomain} is an IPv4 relay, opendkim is not installed")
cmd = "systemctl show opendkim --timestamp=utc --property=ActiveEnterTimestamp"
out = sshexec(call=remote.rshell.shell, kwargs=dict(command=cmd))
datestring = out.split("=")[1]
@@ -226,6 +221,7 @@ def test_rewrite_subject(cmsetup, maildata):
assert "Subject: Unencrypted subject" not in rcvd_msg
@pytest.mark.slow
def test_exceed_rate_limit(cmsetup, gencreds, maildata, chatmail_config):
"""Test that the per-account send-mail limit is exceeded."""
user1, user2 = cmsetup.gen_users(2)
@@ -248,23 +244,6 @@ def test_exceed_rate_limit(cmsetup, gencreds, maildata, chatmail_config):
pytest.fail("Rate limit was not exceeded")
def test_expunged(remote, chatmail_config):
outdated_days = int(chatmail_config.delete_mails_after) + 1
find_cmds = [
f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/cur/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/new/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/new/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/tmp/*' -mtime +{outdated_days} -type f",
f"find {chatmail_config.mailboxes_dir} -path '*/.*/tmp/*' -mtime +{outdated_days} -type f",
]
outdated_days = int(chatmail_config.delete_large_after) + 1
find_cmds.append(
f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
)
for cmd in find_cmds:
for line in remote.iter_output(cmd):
assert not line
def test_deployed_state(remote):
@@ -286,15 +265,3 @@ def test_deployed_state(remote):
# assert len(git_status) == len(remote_version) # for some reason, we only get 11 lines from remote.iter_output()
for i in range(len(remote_version)):
assert git_status[i] == remote_version[i], "You have undeployed changes."
def test_nginx_access_log_only_defined_once(sshdomain):
sshexec = get_sshexec(sshdomain)
conf = sshexec(
call=remote.rshell.shell,
kwargs=dict(command="nginx -T 2>/dev/null"),
)
access_logs = [l for l in conf.splitlines() if l.strip().startswith("access_log")]
assert len(access_logs) == 1, (
f"expected 1 access_log, found {len(access_logs)}: {access_logs}"
)

View File

@@ -15,7 +15,7 @@ def imap_mailbox(cmfactory, ssl_context):
(ac1,) = cmfactory.get_online_accounts(1)
user = ac1.get_config("addr")
password = ac1.get_config("mail_pw")
host = user.split("@")[1].strip("[").strip("]")
host = user.split("@")[1]
mailbox = imap_tools.MailBox(host, ssl_context=ssl_context)
mailbox.login(user, password)
mailbox.dc_ac = ac1
@@ -178,7 +178,7 @@ def test_hide_senders_ip_address(cmfactory, ssl_context):
chat.send_text("testing submission header cleanup")
user2.wait_for_incoming_msg()
addr = user2.get_config("addr")
host = addr.split("@")[1].strip("[").strip("]")
host = addr.split("@")[1]
pw = user2.get_config("mail_pw")
mailbox = imap_tools.MailBox(host, ssl_context=ssl_context)
mailbox.login(addr, pw)

View File

@@ -1,4 +1,5 @@
import imaplib
import ipaddress
import itertools
import os
import random
@@ -9,11 +10,25 @@ import time
from pathlib import Path
import pytest
from chatmaild.config import format_mail_domain, is_valid_ipv4, read_config
from chatmaild.config import read_config
conftestdir = Path(__file__).parent
def _is_ip(domain):
try:
ipaddress.ip_address(domain)
return True
except ValueError:
return False
def pytest_addoption(parser):
parser.addoption(
"--slow", action="store_true", default=False, help="also run slow tests"
)
def pytest_configure(config):
config._benchresults = {}
config.addinivalue_line(
@@ -21,6 +36,13 @@ def pytest_configure(config):
)
def pytest_runtest_setup(item):
markers = list(item.iter_markers(name="slow"))
if markers:
if not item.config.getoption("--slow"):
pytest.skip("skipping slow test, use --slow to run")
def _get_chatmail_config():
inipath = os.environ.get("CHATMAIL_INI")
if inipath:
@@ -49,12 +71,7 @@ def chatmail_config(pytestconfig):
@pytest.fixture(scope="session")
def maildomain(chatmail_config):
return chatmail_config.mail_domain_bare
@pytest.fixture(scope="session")
def maildomain_deliverable(maildomain):
return format_mail_domain(maildomain)
return chatmail_config.mail_domain
@pytest.fixture(scope="session")
@@ -274,6 +291,7 @@ def gencreds(chatmail_config):
def gen(domain=None):
domain = domain if domain else chatmail_config.mail_domain
addr_domain = f"[{domain}]" if _is_ip(domain) else domain
while 1:
num = next(count)
alphanumeric = "abcdefghijklmnopqrstuvwxyz1234567890"
@@ -287,7 +305,7 @@ def gencreds(chatmail_config):
password = "".join(
random.choices(alphanumeric, k=chatmail_config.password_min_length)
)
yield f"{user}@{domain}", f"{password}"
yield f"{user}@{addr_domain}", f"{password}"
return lambda domain=None: next(gen(domain))
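The `addr_domain` bracketing above follows RFC 5321 address-literal syntax (IP domains appear as `user@[1.2.3.4]`). A small illustrative sketch of that logic; `format_addr` is a hypothetical name for this example, not a repo helper:

```python
import ipaddress

def format_addr(user: str, domain: str) -> str:
    """Bracket IP-literal domains per RFC 5321, e.g. user@[1.2.3.4]."""
    try:
        ipaddress.ip_address(domain)
    except ValueError:
        return f"{user}@{domain}"   # regular domain name
    return f"{user}@[{domain}]"     # address literal
```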
@@ -312,8 +330,7 @@ class ChatmailACFactory:
def _make_transport(self, domain):
"""Build a transport config dict for the given domain."""
domain_deliverable = format_mail_domain(domain)
addr, password = self.gencreds(domain_deliverable)
addr, password = self.gencreds(domain)
transport = {
"addr": addr,
"password": password,
@@ -322,7 +339,7 @@ class ChatmailACFactory:
"imapServer": domain,
"smtpServer": domain,
}
if domain.startswith("_") or is_valid_ipv4(domain):
if self.chatmail_config.tls_cert_mode == "self":
transport["certificateChecks"] = "acceptInvalidCertificates"
return transport
@@ -337,9 +354,8 @@ class ChatmailACFactory:
accounts = []
for _ in range(num):
account = self.dc.add_account()
domain_deliverable = format_mail_domain(domain)
addr, password = self.gencreds(domain_deliverable)
if is_valid_ipv4(domain):
addr, password = self.gencreds(domain)
if _is_ip(domain):
# Use DCLOGIN scheme with explicit server hosts,
# matching how madmail presents its addresses to users.
qr = (
@@ -413,10 +429,10 @@ class Remote:
def iter_output(self, logcmd="", ready=None):
getjournal = "journalctl -f" if not logcmd else logcmd
print(self.sshdomain)
if self.sshdomain in ("@local", "localhost"):
command = []
else:
command = ["ssh", f"root@{self.sshdomain}"]
match self.sshdomain:
case "@local": command = []
case "localhost": command = []
case _: command = ["ssh", f"root@{self.sshdomain}"]
command.extend(getjournal.split())
popen = subprocess.Popen(
command,

View File

@@ -1,118 +0,0 @@
from unittest.mock import MagicMock, patch
from cmdeploy.basedeploy import Deployer
def test_put_file_restart_and_reload():
deployer = Deployer()
mock_res = MagicMock()
mock_res.changed = True
with patch("cmdeploy.basedeploy.files.put", return_value=mock_res):
deployer.put_file("foo.conf", "/etc/foo.conf")
assert deployer.need_restart is True
assert deployer.daemon_reload is False
deployer = Deployer()
deployer.put_file("test.service", "/etc/systemd/system/test.service")
assert deployer.need_restart is True
assert deployer.daemon_reload is True
def test_remove_file():
deployer = Deployer()
mock_res = MagicMock()
mock_res.changed = True
with patch("cmdeploy.basedeploy.files.file", return_value=mock_res) as mock_file:
deployer.remove_file("/etc/foo.conf")
mock_file.assert_called_once_with(
name="Remove /etc/foo.conf", path="/etc/foo.conf", present=False
)
assert deployer.need_restart is True
def test_ensure_systemd_unit():
deployer = Deployer()
mock_res = MagicMock()
mock_res.changed = True
# Plain service file
with patch("cmdeploy.basedeploy.files.put", return_value=mock_res) as mock_put:
deployer.ensure_systemd_unit("iroh-relay.service")
assert (
mock_put.call_args.kwargs["dest"]
== "/etc/systemd/system/iroh-relay.service"
)
assert deployer.need_restart is True
assert deployer.daemon_reload is True
deployer = Deployer()
# Template (.j2) dispatches to put_template and strips .j2 suffix
with patch("cmdeploy.basedeploy.files.template", return_value=mock_res) as mock_tpl:
deployer.ensure_systemd_unit(
"filtermail/chatmaild.service.j2",
bin_path="/usr/local/bin/filtermail",
)
assert (
mock_tpl.call_args.kwargs["dest"] == "/etc/systemd/system/chatmaild.service"
)
deployer = Deployer()
# Explicit dest_name override
with patch("cmdeploy.basedeploy.files.put", return_value=mock_res) as mock_put:
deployer.ensure_systemd_unit(
"acmetool/acmetool-reconcile.timer",
dest_name="acmetool-reconcile.timer",
)
assert (
mock_put.call_args.kwargs["dest"]
== "/etc/systemd/system/acmetool-reconcile.timer"
)
def test_ensure_service():
with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
deployer = Deployer()
deployer.need_restart = True
deployer.daemon_reload = True
deployer.ensure_service("nginx.service")
mock_svc.assert_called_once_with(
name="Start and enable nginx.service",
service="nginx.service",
running=True,
enabled=True,
restarted=True,
daemon_reload=True,
)
# daemon_reload is cleared to avoid multiple systemctl daemon-reload calls
# need_restart is kept to ensure all subsequent services also restart
assert deployer.need_restart is True
assert deployer.daemon_reload is False
with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
# Stopping suppresses restarted even when need_restart is True
deployer = Deployer()
deployer.need_restart = True
deployer.daemon_reload = True
deployer.ensure_service(
"mta-sts-daemon.service",
running=False,
enabled=False,
)
assert mock_svc.call_args.kwargs["restarted"] is False
assert deployer.need_restart is True
with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
# Multiple calls: daemon_reload resets after first, need_restart persists
deployer = Deployer()
deployer.need_restart = True
deployer.daemon_reload = True
deployer.ensure_service("chatmaild.service")
deployer.ensure_service("chatmaild-metadata.service")
second_call = mock_svc.call_args_list[1]
assert second_call.kwargs["restarted"] is True
assert second_call.kwargs["daemon_reload"] is False

View File

@@ -39,14 +39,6 @@ class TestCmdline:
out, err = capsys.readouterr()
assert "deleting config file" in out.lower()
def test_dns_skip_on_ip(self, capsys, tmp_path, monkeypatch):
monkeypatch.delenv("CHATMAIL_INI", raising=False)
inipath = tmp_path / "chatmail.ini"
assert main(["init", "--config", str(inipath), "1.3.3.7"]) == 0
assert main(["dns", "--config", str(inipath)]) == 0
out, err = capsys.readouterr()
assert out == "[WARNING] 1.3.3.7 is not a domain, skipping DNS checks.\n"
def test_www_folder(example_config, tmp_path):
reporoot = importlib.resources.files(__package__).joinpath("../../../../").resolve()

View File

@@ -4,7 +4,6 @@ import pytest
from cmdeploy import remote
from cmdeploy.dns import check_full_zone, check_initial_remote_data, parse_zone_records
from cmdeploy.remote.rdns import get_authoritative_ns
@pytest.fixture
@@ -15,15 +14,11 @@ def mockdns_base(monkeypatch):
if command.startswith("dig"):
if command == "dig":
return "."
if "with.public.soa" in command and "NS" in command:
return "domain.with.public.soa. 2419 IN NS ns1.first-ns.de."
if "with.hidden.soa" in command and "NS" in command:
if "SOA" in command:
return (
"domain.with.hidden.soa. 2137 IN NS ns1.desec.io.\n"
"domain.with.hidden.soa. 2137 IN NS ns2.desec.org."
"delta.chat. 21600 IN SOA ns1.first-ns.de. dns.hetzner.com."
" 2025102800 14400 1800 604800 3600"
)
if "NS" in command:
return "delta.chat. 21600 IN NS ns1.first-ns.de."
command_chunks = command.split()
domain, typ = command_chunks[4], command_chunks[6]
try:
@@ -130,17 +125,6 @@ class TestPerformInitialChecks:
assert not l
@pytest.mark.parametrize(
("domain", "ns"),
[
("domain.with.public.soa", "ns1.first-ns.de."),
("domain.with.hidden.soa", "ns1.desec.io."),
],
)
def test_get_authoritative_ns(domain, ns, mockdns):
assert get_authoritative_ns(domain) == ns
def test_parse_zone_records():
text = """
; This is a comment

View File

@@ -23,7 +23,8 @@ def make_host(*fact_pairs):
if cls not in facts:
registered = ", ".join(c.__name__ for c in facts)
raise LookupError(
f"unexpected get_fact({cls.__name__}); only registered: {registered}"
f"unexpected get_fact({cls.__name__}); "
f"only registered: {registered}"
)
return facts[cls]

View File

@@ -1,10 +1,11 @@
from pathlib import Path
import importlib.resources
from cmdeploy.www import build_webpages
def test_build_webpages(tmp_path, make_config):
src_dir = (Path(__file__).resolve() / "../../../../../www/src").resolve()
pkgroot = importlib.resources.files("cmdeploy")
src_dir = pkgroot.joinpath("../../../www/src").resolve()
assert src_dir.exists(), src_dir
config = make_config("chat.example.org")
build_dir = tmp_path.joinpath("build")

View File

@@ -1,4 +1,5 @@
import hashlib
import importlib.resources
import re
import time
import traceback
@@ -36,7 +37,7 @@ def prepare_template(source):
def get_paths(config) -> (Path, Path, Path):
reporoot = (Path(__file__).resolve() / "../../../../").resolve()
reporoot = importlib.resources.files(__package__).joinpath("../../../").resolve()
www_path = Path(config.www_folder)
# if www_folder was not set, use default directory
if config.www_folder == "":
@@ -132,7 +133,8 @@ def find_merge_conflict(src_dir) -> Path:
def main():
reporoot = (Path(__file__).resolve() / "../../../../").resolve()
path = importlib.resources.files(__package__)
reporoot = path.joinpath("../../../").resolve()
inipath = reporoot.joinpath("chatmail.ini")
config = read_config(inipath)
config.webdev = True
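The change above swaps `__file__`-relative path arithmetic for `importlib.resources.files`, which resolves paths from wherever the package is actually importable. An illustrative sketch of the pattern, demoed here with the stdlib `json` package rather than `cmdeploy`:

```python
import importlib.resources
from pathlib import Path

# Resolve a directory relative to a package's on-disk location,
# rather than relative to one module's __file__.
pkg_dir = Path(str(importlib.resources.files("json")))
stdlib_dir = pkg_dir.joinpath("..").resolve()
```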

View File

@@ -4,14 +4,12 @@
You can use the `make` command and `make html` to build web pages.
You need a Python environment with `sphinx` and other
dependencies, you can create it by running `scripts/initenv.sh`
from the repository root.
You need a Python environment where the following install was executed:
pip install furo sphinx-autobuild
To develop/change documentation, you can then do:
. venv/bin/activate
cd doc
make auto
A page will open at http://127.0.0.1:8000/ serving the docs and it will

View File

@@ -14,6 +14,8 @@ Minimal requirements and prerequisites
You will need the following:
- Control over a domain through a DNS provider of your choice.
- A Debian 12 **deployment server** with reachable SMTP/SUBMISSIONS/IMAPS/HTTPS ports.
IPv6 is encouraged if available. Chatmail relay servers only require
1GB RAM, one CPU, and perhaps 10GB storage for a few thousand active
@@ -26,11 +28,6 @@ You will need the following:
(An ed25519 private key is required due to an `upstream bug in
paramiko <https://github.com/paramiko/paramiko/issues/2191>`_)
- Control over a domain through a DNS provider of your choice
(there is experimental support for :ref:`DNS-less relays <iponly>`).
.. _setup:
Setup with ``scripts/cmdeploy``
-------------------------------------

View File

@@ -16,7 +16,5 @@ Contributions and feedback welcome through the https://github.com/chatmail/relay
proxy
migrate
overview
reverse_dns
related
faq
iponly

View File

@@ -1,29 +0,0 @@
.. _iponly:
Hosting without DNS records
===========================
.. note::
This option is experimental and might change without notice.
In case you don't have a domain,
for example in a local network,
you can run a chatmail relay with only an IPv4 address as well.
To deploy a relay without a domain,
run ``cmdeploy init`` with only the IPv4 address
during the :ref:`installation steps <setup>`,
for example ``cmdeploy init 13.12.23.42``.
Drawbacks
---------
- your transport encryption will only use self-signed TLS certificates,
which are vulnerable to MITM attacks.
The chatmail core's end-to-end encryption should suffice in most scenarios, though.
- your messages will not be DKIM-signed;
experimentally, most chatmail relays accept non-DKIM-signed messages from IPv4-only relays,
but some relays might not accept messages from yours.

View File

@@ -102,12 +102,14 @@ short overview of ``chatmaild`` services:
Apple/Google/Huawei.
- `chatmail-expire <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/expire.py>`_
deletes old messages, large messages, and entire mailboxes
of users who have not logged in for longer than
``delete_inactive_users_after`` days.
deletes entire mailboxes of users who have not logged in
for longer than ``delete_inactive_users_after`` days.
- ``chatmail-quota-expire`` is called by Dovecot's ``quota_warning`` mechanism
and will automatically remove oldest messages to keep mailboxes well under ``max_mailbox_size``.
- `chatmail-quota-expire <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/quota_expire.py>`_
is called by Dovecot's ``quota_warning`` mechanism when a
user reaches 90% of their mailbox quota.
It removes the largest and oldest messages
until usage drops below 80% of the quota.
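The 90%/80% watermark behaviour of ``chatmail-quota-expire`` can be sketched roughly as follows. This is an illustrative sketch under assumed names (``shrink_mailbox`` is hypothetical), not the actual chatmaild implementation:

```python
from pathlib import Path

def shrink_mailbox(maildir: Path, quota_bytes: int, low_watermark: float = 0.8):
    """Delete the largest (then oldest) messages until usage < low_watermark * quota."""
    msgs = [p for p in maildir.rglob("*") if p.is_file()]
    usage = sum(p.stat().st_size for p in msgs)
    # Largest first; among equal sizes, oldest first.
    msgs.sort(key=lambda p: (-p.stat().st_size, p.stat().st_mtime))
    removed = []
    for p in msgs:
        if usage < low_watermark * quota_bytes:
            break
        usage -= p.stat().st_size
        p.unlink()
        removed.append(p)
    return removed
```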
- `lastlogin <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/lastlogin.py>`_
is contacted by Dovecot when a user logs in and stores the date of
@@ -143,7 +145,7 @@ Chatmail relay dependency diagram
certs-nginx[("`TLS certs
/var/lib/acme`")] --> nginx-internal;
systemd-timer --- acmetool;
systemd-timer --- chatmail-expire-daily;
systemd-timer --- chatmail-expire-inactive;
systemd-timer --- chatmail-fsreport-daily;
acmetool --> certs[("`TLS certs
/var/lib/acme`")];
@@ -153,7 +155,6 @@ Chatmail relay dependency diagram
autoconfig.xml --- dovecot;
postfix --- |10080|filtermail-outgoing;
postfix --- |10081|filtermail-incoming;
postfix --- |10083|filtermail-transport;
filtermail-outgoing --- |10025 reinject|postfix;
filtermail-incoming --- |10026 reinject|postfix;
dovecot --- |doveauth.socket|doveauth;
@@ -165,7 +166,7 @@ Chatmail relay dependency diagram
chatmail-quota-expire --- maildir;
lastlogin --- maildir;
doveauth --- maildir;
chatmail-expire-daily --- maildir;
chatmail-expire-inactive --- maildir;
chatmail-fsreport-daily --- maildir;
chatmail-metadata --- iroh-relay;
chatmail-metadata --- |encrypted device token| notifications.delta.chat;
@@ -296,7 +297,9 @@ ensured by ``filtermail`` proxy.
TLS requirements
~~~~~~~~~~~~~~~~
Filtermail (used for delivery) requires valid TLS.
Postfix is configured to require valid TLS by setting
`smtp_tls_security_level <https://www.postfix.org/postconf.5.html#smtp_tls_security_level>`_
to ``verify``.
You can test it by resolving ``MX`` records of your relay domain and
then connecting to MX relays (e.g. ``mx.example.org``) with

View File

@@ -1,64 +0,0 @@
Configuring reverse DNS
=======================
Some email servers reject emails
if they don't pass the `FCrDNS`_ check, also known as the `iprev`_ check.
.. _FCrDNS: https://en.wikipedia.org/wiki/Forward-confirmed_reverse_DNS
.. _iprev: https://datatracker.ietf.org/doc/html/rfc8601#section-3
Passing the check requires that the IP address the email is sent from
has a ``PTR`` record pointing to the domain name of the server,
and that the domain name has an ``A/AAAA`` record
pointing back to the IP address.
Modern email relies on DKIM and SPF for authentication,
while the iprev check exists for
`historical reasons <https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-reverse-mapping-considerations-06#section-2.1>`_.
Chatmail relays don't resolve ``PTR`` records themselves,
so you can ignore this section if configuring ``PTR`` records
is difficult and federation with legacy email servers
that don't accept a valid DKIM signature for authentication is not important to you.
Multi-homed setups
------------------
If you have a server with multiple IP addresses,
also known as multi-homed setup,
and don't publish all IP addresses in DNS,
you need to make sure you are using
the published address when making outgoing connections.
For example, your server may have a static IP
address, and a so-called Floating IP or Virtual IP
that can be moved between servers in case of
migration or for failover.
By using a Floating IP you can avoid downtime
and keep the IP address's reputation
with destinations that rely on IP reputation and IP blocklists.
In this case you will only publish
the Floating IP to DNS and only use the static IP
to SSH into the server.
If you have such a setup, make sure that
you not only set ``PTR`` records for the Floating IP,
but also make outgoing connections from the Floating IP.
Otherwise the reverse DNS check will succeed,
but the forward check, which verifies that your domain name
points to the IP address, will fail.
Such a setup is indistinguishable from someone
setting an IP address's ``PTR`` record to a domain they don't own,
which as a result does not pass the check.
On Linux you can configure source IP address with ``ip route`` command,
for example:
::
ip route change default via <default-gateway> dev eth0 src <source-address>
Make sure to persist the change after verifying it is working.
You can check what your outgoing IP address is
with ``curl icanhazip.com``.
Check both the IPv4 and IPv6 addresses.
For the IPv4 address use ``curl ipv4.icanhazip.com`` or ``curl -4 icanhazip.com``,
and similarly for IPv6 if you have it.