Compare commits

...

19 Commits

Author SHA1 Message Date
link2xt
c2c73fc7a1 feat: add tool to analyze deferred queue
It prints all destinations with the number of recipients
and all the deferral reasons. The operator can then try
to fix the problems for these destinations,
e.g. by manually adding reverse-proxy
addresses to /etc/hosts for failing domains
or by routing IP addresses to another interface.
2026-05-12 19:08:30 +02:00
holger krekel
8db668c037 fix(logging): log all http requests to syslog 2026-05-10 23:32:42 +02:00
holger krekel
45fafa10a9 fix: legacy token metadata storage used the list type; if no new setmetadata happened, the user would not be notified at all. 2026-05-08 21:39:40 +02:00
missytake
ee435a7ef7 fix(dns): query correct NS if MNAME server is hidden (#954)
replaces #870
fix #851

* fix(dns): address possible IndexError
* fix(dns): remove redundant docstring
* fix(dns): don't make NS explicit if None
* bump cmlxc to 0.13.5 which fixes a powerdns config issue
* remove the unnecessary SOA mocks, simplify mock tests, and run ruff format

Co-authored-by: holger krekel <holger@merlinux.eu>
2026-05-08 19:34:42 +02:00
missytake
8fafd4e79f fix(nginx): properly redirect www to mail_domain 2026-05-07 23:00:02 +02:00
punkero-org
129b8a20bc fix(cmdeploy): stop and disable unbound-resolvconf
Commit 825831e purges resolvconf; however, the unbound service
activates a 'wants' unit for async resolvconf updates. This
results in errors during systemd startup, as the unit will now always fail.

Stop and disable the unbound-resolvconf unit activation.
2026-05-07 13:40:19 +02:00
holger krekel
a1f64ebd96 refactor: introduce automated change-tracking across deployers 2026-05-06 20:02:13 +02:00
j4n
fb64be97b5 fix(mtail): correct boot ordering and deploy restart logic
Correct the systemd unit modifications in 98bc1503 that led to startup
failures in some instances. Switch to After+Wants=network-online.target
and add RestartSec=2s to give late-binding interfaces more time to appear.

In the deployer, capture the files.template() return value and
appropriately set need_restart and daemon_reload.
2026-05-06 14:04:32 +02:00
Jagoda Estera Ślązak
b05e26819f fix: Increase concurrency limit and re-enable filtermail-transport (#949) 2026-05-05 18:30:20 +02:00
Jagoda Estera Ślązak
1db586b3eb fix(filtermail): Disable filtermail-transport for now (#948)
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-05-05 09:07:06 +02:00
Jagoda Ślązak
44fe2dc08f fix: Use path with no leading slash for mxdeliv
For compatibility with madmail, we want to use a path with no leading slash. This change saves us from having to follow redirects.

Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-05-01 17:37:35 +02:00
Jagoda Ślązak
8721600d13 build(deps): Upgrade to filtermail v0.6.4
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-05-01 17:37:31 +02:00
Jagoda Ślązak
dfed2b4681 feat: Use filtermail for delivery to remote MTAs
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-05-01 17:37:28 +02:00
holger krekel
f5fd286663 fix: make www tests work with editable instead of just plain installs 2026-05-01 16:52:09 +02:00
missytake
16b00da373 chore: prepare 1.10.0 release (#943)
Co-authored-by: j4n <j4n@systemli.org>
2026-04-30 15:51:17 +02:00
j4n
75606f5eb8 fix(mtail): start after networking is fully up 2026-04-30 14:23:32 +02:00
holger krekel
d256538f81 testing: support custom filtermail binary through CHATMAIL_FILTERMAIL_BINARY env var 2026-04-29 20:27:12 +02:00
link2xt
fdf8e5e345 ci: setup zizmor
Zizmor is a linter for GitHub Actions.
2026-04-29 16:58:19 +00:00
j4n
81a161d433 feat(ci): add repository_dispatch trigger to chatmail/docker
On push to main send a repository_dispatch event to chatmail/docker with
relay_ref, relay_sha, and relay_sha_short.

This triggers docker-ci.yaml to build a new Docker image from
the updated relay code, push to GHCR, and eventually run integration
tests via cmlxc's reusable lxc-test workflow.

Requires DOCKER_DISPATCH_TOKEN secret with repo scope on
chatmail/docker.

Also set workflow_dispatch to allow manual triggering of Docker builds
from any relay branch via the GitHub UI.
2026-04-29 15:43:19 +02:00
38 changed files with 775 additions and 700 deletions


@@ -9,6 +9,8 @@ on:
   pull_request:
     branches: [ "main" ]
 
+permissions: {}
+
 # Newest push wins: Prevents multiple runs from clashing and wasting runner efforts
 concurrency:
   group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -25,8 +27,9 @@ jobs:
       # Otherwise `test_deployed_state` will be unhappy.
       with:
         ref: ${{ github.event.pull_request.head.sha }}
+        persist-credentials: false
     - name: download filtermail
-      run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.1/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
+      run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.4/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
     - name: run chatmaild tests
       working-directory: chatmaild
       run: pipx run tox
@@ -38,6 +41,7 @@ jobs:
     - uses: actions/checkout@v6
       with:
         ref: ${{ github.event.pull_request.head.sha }}
+        persist-credentials: false
     - name: initenv
       run: scripts/initenv.sh
@@ -53,8 +57,9 @@ jobs:
   lxc-test:
     name: LXC deploy and test
-    uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.10.0
+    uses: chatmail/cmlxc/.github/workflows/lxc-test.yml@v0.13.5
     with:
+      cmlxc_version: v0.13.5
       cmlxc_commands: |
         cmlxc init
         # single cmdeploy relay test

.github/workflows/docker-dispatch.yaml (new file)

@@ -0,0 +1,37 @@
# Notify the docker repo to build and test a new image after relay CI passes.
#
# Sends a repository_dispatch event to chatmail/docker with the relay ref
# and short SHA, which triggers docker-ci.yaml to build, push to GHCR,
# and run integration tests via cmlxc.
name: Trigger Docker build

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions: {}

jobs:
  dispatch:
    name: Dispatch build to chatmail/docker
    runs-on: ubuntu-latest
    if: github.repository == 'chatmail/relay'
    steps:
      - name: Compute short SHA
        id: sha
        run: echo "short=$(echo '${{ github.sha }}' | cut -c1-7)" >> "$GITHUB_OUTPUT"
      - name: Send repository_dispatch
        uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3
        with:
          token: ${{ secrets.CHATMAIL_DOCKER_DISPATCH_TOKEN }}
          repository: chatmail/docker
          event-type: relay-updated
          client-payload: >-
            {
              "relay_ref": "${{ github.ref_name }}",
              "relay_sha": "${{ github.sha }}",
              "relay_sha_short": "${{ steps.sha.outputs.short }}"
            }
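The `client-payload` above is a plain JSON object. As a minimal sketch, the same payload can be constructed in Python — the `ref_name` and `sha` values below are placeholders standing in for the `${{ github.* }}` context expressions, not real values:

```python
import json

# Placeholder values standing in for ${{ github.ref_name }} and ${{ github.sha }}.
ref_name = "main"
sha = "0123456789abcdef0123456789abcdef01234567"

payload = {
    "relay_ref": ref_name,
    "relay_sha": sha,
    "relay_sha_short": sha[:7],  # same truncation as `cut -c1-7` in the workflow step
}

print(json.dumps(payload))
```

The receiving repository can then read these fields from `github.event.client_payload` in its own workflow.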


@@ -7,6 +7,8 @@ on:
- 'scripts/build-docs.sh' - 'scripts/build-docs.sh'
- '.github/workflows/docs-preview.yaml' - '.github/workflows/docs-preview.yaml'
permissions: {}
jobs: jobs:
scripts: scripts:
name: build name: build
@@ -16,6 +18,8 @@ jobs:
url: https://staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }} url: https://staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with:
persist-credentials: false
- name: initenv - name: initenv
run: scripts/initenv.sh run: scripts/initenv.sh
@@ -34,18 +38,22 @@ jobs:
- name: Get Pullrequest ID - name: Get Pullrequest ID
id: prepare id: prepare
run: | run: |
export PULLREQUEST_ID=$(echo "${{ github.ref }}" | cut -d "/" -f3) export PULLREQUEST_ID=$(echo "${GITHUB_REF}" | cut -d "/" -f3)
echo "prid=$PULLREQUEST_ID" >> $GITHUB_OUTPUT echo "prid=$PULLREQUEST_ID" >> $GITHUB_OUTPUT
if [ $(expr length "${{ secrets.USERNAME }}") -gt "1" ]; then echo "uploadtoserver=true" >> $GITHUB_OUTPUT; fi if [ $(expr length "${{ secrets.USERNAME }}") -gt "1" ]; then echo "uploadtoserver=true" >> $GITHUB_OUTPUT; fi
- run: | - run: |
echo "baseurl: /${{ steps.prepare.outputs.prid }}" >> _config.yml echo "baseurl: /${STEPS_PREPARE_OUTPUTS_PRID}" >> _config.yml
env:
STEPS_PREPARE_OUTPUTS_PRID: ${{ steps.prepare.outputs.prid }}
- name: Upload preview - name: Upload preview
run: | run: |
mkdir -p "$HOME/.ssh" mkdir -p "$HOME/.ssh"
echo "${{ secrets.CHATMAIL_STAGING_SSHKEY }}" > "$HOME/.ssh/key" echo "${{ secrets.CHATMAIL_STAGING_SSHKEY }}" > "$HOME/.ssh/key"
chmod 600 "$HOME/.ssh/key" chmod 600 "$HOME/.ssh/key"
rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}/" rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/staging.chatmail.at/doc/relay/${STEPS_PREPARE_OUTPUTS_PRID}/"
env:
STEPS_PREPARE_OUTPUTS_PRID: ${{ steps.prepare.outputs.prid }}
- name: check links - name: check links
working-directory: doc working-directory: doc


@@ -10,6 +10,8 @@ on:
       - 'scripts/build-docs.sh'
       - '.github/workflows/docs.yaml'
 
+permissions: {}
+
 jobs:
   scripts:
     name: build
@@ -19,6 +21,8 @@ jobs:
       url: https://chatmail.at/doc/relay/
     steps:
       - uses: actions/checkout@v4
+        with:
+          persist-credentials: false
       - name: initenv
         run: scripts/initenv.sh

.github/workflows/zizmor-scan.yml (new file)

@@ -0,0 +1,26 @@
name: GitHub Actions Security Analysis with zizmor
on:
push:
branches: ["main"]
pull_request:
branches: ["**"]
permissions: {}
jobs:
zizmor:
name: Run zizmor
runs-on: ubuntu-latest
permissions:
security-events: write # Required for upload-sarif (used by zizmor-action) to upload SARIF files.
contents: read
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@b1d7e1fb5de872772f31590499237e7cce841e8e # v0.5.3

.github/zizmor.yml (new file)

@@ -0,0 +1,7 @@
rules:
  unpinned-uses:
    config:
      policies:
        actions/*: ref-pin
        dependabot/*: ref-pin
        chatmail/*: ref-pin


@@ -1,5 +1,89 @@
# Changelog for chatmail deployment
## 1.10.0 2026-04-30
* start mtail after networking is fully up <https://github.com/chatmail/relay/pull/942>
* support specifying custom filtermail binary through environment variable <https://github.com/chatmail/relay/pull/941>
* add automated zizmor scanning of github workflows <https://github.com/chatmail/relay/pull/938>
* added dispatch for *automated builds of chatmail relay docker images* <https://github.com/chatmail/relay/pull/934>
* do not bind SMTP client sockets to public addresses <https://github.com/chatmail/relay/pull/932>
* underline in docs that scripts/initenv.sh should be used for building the docs <https://github.com/chatmail/relay/pull/933>
* automatic oldest-first message removal from mailboxes to always stay under max_mailbox_size <https://github.com/chatmail/relay/pull/929>
* remove --slow from cmdeploy test <https://github.com/chatmail/relay/pull/931>
* handle missing inotify sysctl keys in containers <https://github.com/chatmail/relay/pull/930>
* replace resolvconf with static resolv.conf <https://github.com/chatmail/relay/pull/928>
* disable fsync for LMTP and IMAP services <https://github.com/chatmail/relay/pull/925>
* re-use cmlxc workflow, replacing CI with hetzner staging servers with local lxc containers <https://github.com/chatmail/relay/pull/917>
* explicitly install resolvconf <https://github.com/chatmail/relay/pull/924>
* detect stale dovecot binary and force restart in activate() <https://github.com/chatmail/relay/pull/922>
* Rename filtermail_http_port to filtermail_http_port_incoming <https://github.com/chatmail/relay/pull/921>
* consolidated is_in_container() check <https://github.com/chatmail/relay/pull/920>
* restart dovecot after package replacement (rebase, test condense) <https://github.com/chatmail/relay/pull/913>
* Set permissions on dovecot pin prefs <https://github.com/chatmail/relay/pull/915>
* Route `/mxdeliv/` to configurable port <https://github.com/chatmail/relay/pull/901>
* fix VM detection, automated testing fixes, use newer chatmail-turn and move to standard BIND DNS zone format <https://github.com/chatmail/relay/pull/912>
* Upgrade to filtermail 0.6.1 <https://github.com/chatmail/relay/pull/910>
* pin dovecot packages to prevent apt upgrades <https://github.com/chatmail/relay/pull/908>
* add rpc server to cmdeploy along with client <https://github.com/chatmail/relay/pull/906>
* remove unused deps from chatmaild <https://github.com/chatmail/relay/pull/905>
* set default smtp_tls_security_level to "verify" unconditionally <https://github.com/chatmail/relay/pull/902>
* prefer IPv4 in SMTP client <https://github.com/chatmail/relay/pull/900>
* Install dovecot .deb packages atomically <https://github.com/chatmail/relay/pull/899>
* stop installing cron package <https://github.com/chatmail/relay/pull/898>
* Rewrite dovecot install logic, update <https://github.com/chatmail/relay/pull/862>
* fix a test and some linting fixes <https://github.com/chatmail/relay/pull/897>
* Disable IP verification on domain-literal addresses <https://github.com/chatmail/relay/pull/895>
* disable installing recommended packages globally on the relay <https://github.com/chatmail/relay/pull/887>
* multiple bug fixes across chatmaild and cmdeploy <https://github.com/chatmail/relay/pull/883>
* remove /metrics from the website <https://github.com/chatmail/relay/pull/703>
* add Prometheus textfile output to fsreport <https://github.com/chatmail/relay/pull/881>
* chown opendkim: private key <https://github.com/chatmail/relay/pull/879>
* make sure chatmail-metadata was started <https://github.com/chatmail/relay/pull/882>
* dovecot update url <https://github.com/chatmail/relay/pull/880>
* upgrade to filtermail v0.5.2 <https://github.com/chatmail/relay/pull/876>
* download dovecot packages from github release <https://github.com/chatmail/relay/pull/875>
* replace DKIM verification with filtermail v0.5 <https://github.com/chatmail/relay/pull/831>
* remove CFFI deltachat bindings usage, and consolidate test support with rpc-bindings <https://github.com/chatmail/relay/pull/872>
* prepare chatmaild/cmdeploy changes for Docker support <https://github.com/chatmail/relay/pull/857>
* stabilize online benchmark timing adding rate-limit-aware cooldown between iterations <https://github.com/chatmail/relay/pull/867>
* move rate-limit cooldown to benchmark fixture <https://github.com/chatmail/relay/pull/868>
* reconfigure acmetool from redirector to proxy mode <https://github.com/chatmail/relay/pull/861>
* make tests work with `--ssh-host localhost` <https://github.com/chatmail/relay/pull/856>
* mark f-string with f prefix in test_expunged <https://github.com/chatmail/relay/pull/863>
* install also if dovecot.service=False in SystemdEnabled Fact <https://github.com/chatmail/relay/pull/841>
* Introduce support for self-signed chatmail relays <https://github.com/chatmail/relay/pull/855>
* Strip Received headers before delivery <https://github.com/chatmail/relay/pull/849>
* upgrade to filtermail v0.3 <https://github.com/chatmail/relay/pull/850>
* fix link to Maddy and update madmail URL <https://github.com/chatmail/relay/pull/847>
* accept self-signed certificates for IP-only relays <https://github.com/chatmail/relay/pull/846>
* enforce sending from public IP addresses <https://github.com/chatmail/relay/pull/845>
* port check: check addresses, fix single services <https://github.com/chatmail/relay/pull/844>
* remediates issue with improper concat on resolver injection <https://github.com/chatmail/relay/pull/834>
* ipv6 boolean not being respected during operations <https://github.com/chatmail/relay/pull/832>
* upgrade to filtermail v0.2 <https://github.com/chatmail/relay/pull/825>
* fix link to filtermail <https://github.com/chatmail/relay/pull/824>
* print timestamps when sending messages <https://github.com/chatmail/relay/pull/823>
* fix flaky test_exceed_rate_limit <https://github.com/chatmail/relay/pull/822>
* Replace filtermail with rust reimplementation <https://github.com/chatmail/relay/pull/808>
* Set default internal SMTP ports in Config <https://github.com/chatmail/relay/pull/819>
* separate metrics for incoming and outgoing messages <https://github.com/chatmail/relay/pull/820>
* disable appending the Received header <https://github.com/chatmail/relay/pull/815>
* fail on errors in postfix/dovecot config <https://github.com/chatmail/relay/pull/813>
* tweak idle/hibernate metrics some more <https://github.com/chatmail/relay/pull/811>
* add config flag to export statistics <https://github.com/chatmail/relay/pull/806>
* add --website-only option to run subcommand <https://github.com/chatmail/relay/pull/768>
* Strip DKIM-Signature header before LMTP <https://github.com/chatmail/relay/pull/803>
* properly make sure that postfix gets restarted on failure <https://github.com/chatmail/relay/pull/802>
* expire.py: use absolute path to maildirsize <https://github.com/chatmail/relay/pull/807>
* pin Dovecot documentation URLs to version 2.3 <https://github.com/chatmail/relay/pull/800>
* try to use "build machine" and "deployment server" consistently <https://github.com/chatmail/relay/pull/797>
* adds instructions for migrating control machines <https://github.com/chatmail/relay/pull/795>
* use consistent naming schema in getting started <https://github.com/chatmail/relay/pull/793>
* remove jsok/serialize-workflow-action dependency <https://github.com/chatmail/relay/pull/790>
* streamline migration guide wording, provide titled steps <https://github.com/chatmail/relay/pull/789>
* increases default max mailbox size <https://github.com/chatmail/relay/pull/792>
* use daemon_name for OpenDKIM sign-verify decision instead of IP <https://github.com/chatmail/relay/pull/784>
## 1.9.0 2025-12-18

### Documentation


@@ -24,6 +24,7 @@ chatmail-metadata = "chatmaild.metadata:main"
 chatmail-expire = "chatmaild.expire:daily_expire_main"
 chatmail-quota-expire = "chatmaild.expire:quota_expire_main"
 chatmail-fsreport = "chatmaild.fsreport:main"
+chatmail-deferred = "chatmaild.deferred:main"
 lastlogin = "chatmaild.lastlogin:main"
 turnserver = "chatmaild.turnserver:main"


@@ -40,6 +40,9 @@ class Config:
         self.filtermail_http_port_incoming = int(
             params.get("filtermail_http_port_incoming", "10082")
         )
+        self.filtermail_lmtp_port_transport = int(
+            params.get("filtermail_lmtp_port_transport", "10083")
+        )
         self.postfix_reinject_port = int(params.get("postfix_reinject_port", "10025"))
         self.postfix_reinject_port_incoming = int(
             params.get("postfix_reinject_port_incoming", "10026")


@@ -0,0 +1,37 @@
"""
Analyze deferred mails and print most common failing destinations.
Example:
python -m chatmaild.deferred
"""
import json
import subprocess
from collections import Counter, defaultdict
def main():
p = subprocess.Popen(["postqueue", "-j"], text=True, stdout=subprocess.PIPE)
domain_reasons = defaultdict(Counter)
domain_total = Counter()
for line in p.stdout:
item = json.loads(line)
if item["queue_name"] != "deferred":
continue
for recipient in item["recipients"]:
_, domain = recipient["address"].rsplit("@", 1)
reason = recipient["delay_reason"].removeprefix("host 127.0.0.1[127.0.0.1] said: ")
domain_total[domain] += 1
domain_reasons[domain][reason] += 1
for domain, total in reversed(domain_total.most_common()):
print(f"{domain} ({total} recipients)")
for reason, count in domain_reasons[domain].most_common():
print(f" {count}: {reason}")
if __name__ == "__main__":
main()
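The tool relies on `postqueue -j` emitting one JSON object per queued message. A minimal standalone sketch of the same Counter/defaultdict grouping, fed with invented sample records (the field names match the tool above, but the messages themselves are fabricated for illustration):

```python
import json
from collections import Counter, defaultdict

# Hypothetical postqueue -j style records, one JSON object per line.
sample_lines = [
    json.dumps({"queue_name": "deferred", "recipients": [
        {"address": "a@example.org", "delay_reason": "connection timed out"}]}),
    json.dumps({"queue_name": "active", "recipients": [
        {"address": "b@example.org", "delay_reason": ""}]}),
    json.dumps({"queue_name": "deferred", "recipients": [
        {"address": "c@example.org", "delay_reason": "connection timed out"}]}),
]

domain_reasons = defaultdict(Counter)
domain_total = Counter()
for line in sample_lines:
    item = json.loads(line)
    if item["queue_name"] != "deferred":
        continue  # only deferred mail is interesting
    for recipient in item["recipients"]:
        _, domain = recipient["address"].rsplit("@", 1)
        domain_total[domain] += 1
        domain_reasons[domain][recipient["delay_reason"]] += 1

print(domain_total["example.org"])                            # 2
print(domain_reasons["example.org"]["connection timed out"])  # 2
```

The "active" record is skipped, so only the two deferred recipients are counted against example.org.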


@@ -70,6 +70,9 @@ class Metadata:
             # Some tokens have expired, remove them.
             with self._modify_tokens(addr) as _tokens:
                 pass
+        elif isinstance(tokens, list):
+            with self._modify_tokens(addr) as tokens:
+                token_list = list(tokens.keys())
         else:
             token_list = []
         return token_list
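The new `elif isinstance(tokens, list)` branch handles the legacy on-disk format; entering `_modify_tokens` presumably migrates the list to the dict format, after which `tokens.keys()` yields the original token strings. A standalone sketch of such a list-to-dict migration (a hypothetical helper, not the actual chatmaild code):

```python
def migrate_tokens(tokens):
    """Convert legacy list-format device tokens to the dict format,
    preserving order; each value holds per-token metadata (empty here)."""
    if isinstance(tokens, list):
        return {token: {} for token in tokens}
    return tokens  # already migrated

migrated = migrate_tokens(["oldtoken1", "oldtoken2"])
assert isinstance(migrated, dict)
assert list(migrated.keys()) == ["oldtoken1", "oldtoken2"]
```

This mirrors what `test_legacy_token_migration` below asserts: after reading through the accessor, the stored value is a dict that still contains both old tokens.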


@@ -372,3 +372,14 @@ def test_iroh_relay(dictproxy):
     dictproxy.iroh_relay = "https://example.org/"
     dictproxy.loop_forever(rfile, wfile)
     assert wfile.getvalue() == b"Ohttps://example.org/\n"
+
+
+def test_legacy_token_migration(metadata, testaddr):
+    with metadata.get_metadata_dict(testaddr).modify() as data:
+        data[metadata.DEVICETOKEN_KEY] = ["oldtoken1", "oldtoken2"]
+    assert metadata.get_tokens_for_addr(testaddr) == ["oldtoken1", "oldtoken2"]
+    mdict = metadata.get_metadata_dict(testaddr).read()
+    tokens = mdict[metadata.DEVICETOKEN_KEY]
+    assert isinstance(tokens, dict)
+    assert "oldtoken1" in tokens and "oldtoken2" in tokens


@@ -48,6 +48,8 @@ def test_migration(tmp_path, example_config, caplog):
     assert passdb_path.stat().st_size > 10000
     example_config.passdb_path = passdb_path
 
+    # ensure logging.info records are captured regardless of global configuration
+    caplog.set_level("INFO")
     assert not caplog.records


@@ -1,6 +1,4 @@
-import importlib.resources
-
-from pyinfra.operations import apt, files, server, systemd
+from pyinfra.operations import apt, server
 
 from ..basedeploy import Deployer
@@ -9,9 +7,6 @@ class AcmetoolDeployer(Deployer):
     def __init__(self, email, domains):
         self.domains = domains
         self.email = email
-        self.need_restart_redirector = False
-        self.need_restart_reconcile_service = False
-        self.need_restart_reconcile_timer = False
 
     def install(self):
         apt.packages(
@@ -19,121 +14,41 @@ class AcmetoolDeployer(Deployer):
             packages=["acmetool"],
         )
-        files.file(
-            name="Remove old acmetool cronjob, it is replaced with systemd timer.",
-            path="/etc/cron.d/acmetool",
-            present=False,
-        )
-        files.put(
-            name="Install acmetool hook.",
-            src=importlib.resources.files(__package__)
-            .joinpath("acmetool.hook")
-            .open("rb"),
-            dest="/etc/acme/hooks/nginx",
-            user="root",
-            group="root",
-            mode="755",
-        )
-        files.file(
-            name="Remove acmetool hook from the wrong location where it was previously installed.",
-            path="/usr/lib/acme/hooks/nginx",
-            present=False,
-        )
+        self.remove_file("/etc/cron.d/acmetool")
+        self.put_executable("acmetool/acmetool.hook", "/etc/acme/hooks/nginx")
+        self.remove_file("/usr/lib/acme/hooks/nginx")
 
     def configure(self):
-        files.template(
-            src=importlib.resources.files(__package__).joinpath(
-                "response-file.yaml.j2"
-            ),
-            dest="/var/lib/acme/conf/responses",
-            user="root",
-            group="root",
-            mode="644",
+        self.put_template(
+            "acmetool/response-file.yaml.j2",
+            "/var/lib/acme/conf/responses",
             email=self.email,
         )
-        files.template(
-            src=importlib.resources.files(__package__).joinpath("target.yaml.j2"),
-            dest="/var/lib/acme/conf/target",
-            user="root",
-            group="root",
-            mode="644",
+        self.put_template(
+            "acmetool/target.yaml.j2",
+            "/var/lib/acme/conf/target",
         )
         server.shell(
             name=f"Remove old acmetool desired files for {self.domains[0]}",
             commands=[f"rm -f /var/lib/acme/desired/{self.domains[0]}-*"],
         )
-        files.template(
-            src=importlib.resources.files(__package__).joinpath("desired.yaml.j2"),
-            dest=f"/var/lib/acme/desired/{self.domains[0]}",  # 0 is mailhost TLD
-            user="root",
-            group="root",
-            mode="644",
+        self.put_template(
+            "acmetool/desired.yaml.j2",
+            f"/var/lib/acme/desired/{self.domains[0]}",
             domains=self.domains,
         )
-        service_file = files.put(
-            src=importlib.resources.files(__package__).joinpath(
-                "acmetool-redirector.service"
-            ),
-            dest="/etc/systemd/system/acmetool-redirector.service",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        self.need_restart_redirector = service_file.changed
-        reconcile_service_file = files.put(
-            src=importlib.resources.files(__package__).joinpath(
-                "acmetool-reconcile.service"
-            ),
-            dest="/etc/systemd/system/acmetool-reconcile.service",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        self.need_restart_reconcile_service = reconcile_service_file.changed
-        reconcile_timer_file = files.put(
-            src=importlib.resources.files(__package__).joinpath(
-                "acmetool-reconcile.timer"
-            ),
-            dest="/etc/systemd/system/acmetool-reconcile.timer",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        self.need_restart_reconcile_timer = reconcile_timer_file.changed
+        self.ensure_systemd_unit("acmetool/acmetool-redirector.service")
+        self.ensure_systemd_unit("acmetool/acmetool-reconcile.service")
+        self.ensure_systemd_unit("acmetool/acmetool-reconcile.timer")
 
     def activate(self):
-        systemd.service(
-            name="Setup acmetool-redirector service",
-            service="acmetool-redirector.service",
-            running=True,
-            enabled=True,
-            restarted=self.need_restart_redirector,
-        )
-        self.need_restart_redirector = False
-        systemd.service(
-            name="Setup acmetool-reconcile service",
-            service="acmetool-reconcile.service",
-            running=False,
-            enabled=False,
-            daemon_reload=self.need_restart_reconcile_service,
-        )
-        self.need_restart_reconcile_service = False
-        systemd.service(
-            name="Setup acmetool-reconcile timer",
-            service="acmetool-reconcile.timer",
-            running=True,
-            enabled=True,
-            daemon_reload=self.need_restart_reconcile_timer,
-        )
-        self.need_restart_reconcile_timer = False
+        self.ensure_service("acmetool-redirector.service")
+        self.ensure_service("acmetool-reconcile.service", running=False, enabled=False)
+        self.ensure_service("acmetool-reconcile.timer")
         server.shell(
             name=f"Reconcile certificates for: {', '.join(self.domains)}",


@@ -4,6 +4,7 @@ import os
from contextlib import contextmanager from contextlib import contextmanager
from pyinfra import host from pyinfra import host
from pyinfra.facts.files import Sha256File
from pyinfra.facts.server import Command from pyinfra.facts.server import Command
from pyinfra.operations import files, server, systemd from pyinfra.operations import files, server, systemd
@@ -50,11 +51,10 @@ def get_resource(arg, pkg=__package__):
return importlib.resources.files(pkg).joinpath(arg) return importlib.resources.files(pkg).joinpath(arg)
def configure_remote_units(mail_domain, units) -> None: def configure_remote_units(deployer, mail_domain, units) -> None:
remote_base_dir = "/usr/local/lib/chatmaild" remote_base_dir = "/usr/local/lib/chatmaild"
remote_venv_dir = f"{remote_base_dir}/venv" remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini" remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
# install systemd units # install systemd units
for fn in units: for fn in units:
@@ -70,15 +70,13 @@ def configure_remote_units(mail_domain, units) -> None:
source_path = get_resource(f"service/{basename}.f") source_path = get_resource(f"service/{basename}.f")
content = source_path.read_text().format(**params).encode() content = source_path.read_text().format(**params).encode()
files.put( deployer.put_file(
name=f"Upload {basename}",
src=io.BytesIO(content), src=io.BytesIO(content),
dest=f"/etc/systemd/system/{basename}", dest=f"/etc/systemd/system/{basename}",
**root_owned,
) )
def activate_remote_units(units) -> None: def activate_remote_units(deployer, units) -> None:
# activate systemd units # activate systemd units
for fn in units: for fn in units:
basename = fn if "." in fn else f"{fn}.service" basename = fn if "." in fn else f"{fn}.service"
@@ -88,14 +86,8 @@ def activate_remote_units(units) -> None:
enabled = False enabled = False
else: else:
enabled = True enabled = True
systemd.service(
name=f"Setup {basename}", deployer.ensure_service(basename, running=enabled, enabled=enabled)
service=basename,
running=enabled,
enabled=enabled,
restarted=enabled,
daemon_reload=True,
)
class Deployment: class Deployment:
@@ -141,6 +133,7 @@ class Deployment:
class Deployer: class Deployer:
need_restart = False need_restart = False
daemon_reload = False
def install(self): def install(self):
pass pass
@@ -150,3 +143,113 @@ class Deployer:
def activate(self): def activate(self):
pass pass
def ensure_service(self, service, running=True, enabled=True):
if running:
verb = "Start and enable"
else:
verb = "Stop"
systemd.service(
name=f"{verb} {service}",
service=service,
running=running,
enabled=enabled,
restarted=self.need_restart if running else False,
daemon_reload=self.daemon_reload,
)
self.daemon_reload = False
def ensure_systemd_unit(self, src, **kwargs):
dest_name = src.split("/")[-1].replace(".j2", "")
dest = f"/etc/systemd/system/{dest_name}"
if src.endswith(".j2"):
            return self.put_template(src, dest, **kwargs)
        return self.put_file(src, dest)

    def put_file(self, src, dest, mode="644"):
        if isinstance(src, str):
            src = get_resource(src)
        res = files.put(
            name=f"Upload {dest}",
            src=src,
            dest=dest,
            user="root",
            group="root",
            mode=mode,
        )
        return self._update_restart_signals(dest, res)

    def put_executable(self, src, dest):
        return self.put_file(src, dest, mode="755")

    def put_template(self, src, dest, owner="root", **kwargs):
        if isinstance(src, str):
            src = get_resource(src)
        res = files.template(
            name=f"Upload {dest}",
            src=src,
            dest=dest,
            user=owner,
            group=owner,
            mode="644",
            **kwargs,
        )
        return self._update_restart_signals(dest, res)

    def remove_file(self, dest):
        res = files.file(name=f"Remove {dest}", path=dest, present=False)
        return self._update_restart_signals(dest, res)

    def ensure_line(self, path, line, **kwargs):
        name = kwargs.pop("name", f"Ensure line in {path}")
        res = files.line(name=name, path=path, line=line, **kwargs)
        return self._update_restart_signals(path, res)

    def ensure_directory(self, path, owner="root", mode="755", **kwargs):
        name = kwargs.pop("name", f"Ensure directory {path}")
        res = files.directory(
            name=name,
            path=path,
            user=owner,
            group=owner,
            mode=mode,
            present=True,
            **kwargs,
        )
        return self._update_restart_signals(path, res)

    def remove_directory(self, path, **kwargs):
        name = kwargs.pop("name", f"Remove directory {path}")
        res = files.directory(name=name, path=path, present=False, **kwargs)
        return self._update_restart_signals(path, res)

    def download_executable(self, url, dest, sha256sum, extract=None):
        existing = host.get_fact(Sha256File, dest)
        if existing == sha256sum:
            return
        tmp = f"{dest}.new"
        if extract:
            dl_cmd = f"curl -fSL {url} | {extract} >{tmp}"
        else:
            dl_cmd = f"curl -fSL {url} -o {tmp}"
        server.shell(
            name=f"Download {dest}",
            commands=[
                f"({dl_cmd}"
                f" && echo '{sha256sum} {tmp}' | sha256sum -c"
                f" && mv {tmp} {dest})",
                f"chmod 755 {dest}",
            ],
        )
        self.need_restart = True

    def _update_restart_signals(self, path, res):
        if res.changed:
            self.need_restart = True
            if str(path).startswith("/etc/systemd/system/"):
                self.daemon_reload = True
        return res
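The `_update_restart_signals` helper above is what makes the change-tracking automatic: every file operation funnels its pyinfra result through it. Here is a minimal, self-contained sketch of the pattern; `Result` and `FakeDeployer` are simplified stand-ins (not pyinfra types), and nesting the daemon-reload check under `changed` is an assumption recovered from the flattened listing.

```python
from dataclasses import dataclass


@dataclass
class Result:
    """Stand-in for a pyinfra operation result (hypothetical)."""
    changed: bool


class FakeDeployer:
    """Simplified stand-in showing the restart-signal bookkeeping."""

    def __init__(self):
        self.need_restart = False   # flips when any managed file changed
        self.daemon_reload = False  # flips when a systemd unit file changed

    def _update_restart_signals(self, path, res):
        if res.changed:
            self.need_restart = True
            # unit files additionally require `systemctl daemon-reload`
            if str(path).startswith("/etc/systemd/system/"):
                self.daemon_reload = True
        return res


d = FakeDeployer()
d._update_restart_signals("/etc/nginx/nginx.conf", Result(changed=False))
unchanged = (d.need_restart, d.daemon_reload)
d._update_restart_signals("/etc/systemd/system/mtail.service", Result(changed=True))
after_unit_change = (d.need_restart, d.daemon_reload)
```

The payoff is that `activate()` methods can call a single `ensure_service(...)` without each deployer hand-threading `restarted=`/`daemon_reload=` flags.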


@@ -12,7 +12,6 @@ from chatmaild.config import read_config
 from pyinfra import facts, host, logger
 from pyinfra.api import FactBase
 from pyinfra.facts import hardware
-from pyinfra.facts.files import Sha256File
 from pyinfra.facts.systemd import SystemdEnabled
 from pyinfra.operations import apt, files, pip, server, systemd
@@ -25,7 +24,6 @@ from .basedeploy import (
     activate_remote_units,
     blocked_service_startup,
     configure_remote_units,
-    get_resource,
     has_systemd,
     is_in_container,
 )
@@ -82,25 +80,22 @@ def remove_legacy_artifacts():
     )
-def _install_remote_venv_with_chatmaild() -> None:
+def _install_remote_venv_with_chatmaild(deployer) -> None:
     remove_legacy_artifacts()
     dist_file = _build_chatmaild(dist_dir=Path("chatmaild/dist"))
     remote_base_dir = "/usr/local/lib/chatmaild"
     remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}"
     remote_venv_dir = f"{remote_base_dir}/venv"
-    root_owned = dict(user="root", group="root", mode="644")
     apt.packages(
         name="apt install python3-virtualenv",
         packages=["python3-virtualenv"],
     )
-    files.put(
-        name="Upload chatmaild source package",
+    deployer.ensure_directory(f"{remote_base_dir}/dist")
+    deployer.put_file(
         src=dist_file.open("rb"),
         dest=remote_dist_file,
-        create_remote_dir=True,
-        **root_owned,
     )
     pip.virtualenv(
@@ -122,32 +117,22 @@ def _install_remote_venv_with_chatmaild() -> None:
     )
-def _configure_remote_venv_with_chatmaild(config) -> None:
+def _configure_remote_venv_with_chatmaild(deployer, config) -> None:
     remote_base_dir = "/usr/local/lib/chatmaild"
     remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
-    root_owned = dict(user="root", group="root", mode="644")
-    files.put(
-        name=f"Upload {remote_chatmail_inipath}",
+    deployer.put_file(
         src=config._getbytefile(),
         dest=remote_chatmail_inipath,
-        **root_owned,
     )
-    files.file(
-        path="/etc/cron.d/chatmail-metrics",
-        present=False,
-    )
-    files.file(
-        path="/var/www/html/metrics",
-        present=False,
-    )
+    deployer.remove_file("/etc/cron.d/chatmail-metrics")
+    deployer.remove_file("/var/www/html/metrics")
 class UnboundDeployer(Deployer):
     def __init__(self, config):
         self.config = config
-        self.need_restart = False
     def install(self):
         # On an IPv4-only system, if unbound is started but not configured,
@@ -176,13 +161,9 @@ class UnboundDeployer(Deployer):
         )
         # Configure unbound resolver with Quad9 fallback and a trailing newline
         # (SolusVM bug).
-        files.put(
-            name="Write static resolv.conf",
+        self.put_file(
             src=BytesIO(b"nameserver 127.0.0.1\nnameserver 9.9.9.9\n"),
             dest="/etc/resolv.conf",
-            user="root",
-            group="root",
-            mode="644",
         )
         server.shell(
             name="Generate root keys for validating DNSSEC",
@@ -191,26 +172,15 @@ class UnboundDeployer(Deployer):
             ],
         )
         if self.config.disable_ipv6:
-            files.directory(
+            self.ensure_directory(
                 path="/etc/unbound/unbound.conf.d",
-                present=True,
-                user="root",
-                group="root",
-                mode="755",
             )
-            conf = files.put(
-                src=get_resource("unbound/unbound.conf.j2"),
-                dest="/etc/unbound/unbound.conf.d/chatmail.conf",
-                user="root",
-                group="root",
-                mode="644",
+            self.put_template(
+                "unbound/unbound.conf.j2",
+                "/etc/unbound/unbound.conf.d/chatmail.conf",
             )
         else:
-            conf = files.file(
-                path="/etc/unbound/unbound.conf.d/chatmail.conf",
-                present=False,
-            )
-        self.need_restart |= conf.changed
+            self.remove_file("/etc/unbound/unbound.conf.d/chatmail.conf")
     def activate(self):
         server.shell(
@@ -220,27 +190,25 @@ class UnboundDeployer(Deployer):
             ],
         )
-        systemd.service(
-            name="Start and enable unbound",
-            service="unbound.service",
-            running=True,
-            enabled=True,
-            restarted=self.need_restart,
+        self.ensure_service("unbound.service")
+        self.ensure_service(
+            "unbound-resolvconf.service",
+            running=False,
+            enabled=False,
         )
 class MtastsDeployer(Deployer):
     def configure(self):
         # Remove configuration.
-        files.file("/etc/mta-sts-daemon.yml", present=False)
-        files.directory("/usr/local/lib/postfix-mta-sts-resolver", present=False)
-        files.file("/etc/systemd/system/mta-sts-daemon.service", present=False)
+        self.remove_file("/etc/mta-sts-daemon.yml")
+        self.remove_directory("/usr/local/lib/postfix-mta-sts-resolver")
+        self.remove_file("/etc/systemd/system/mta-sts-daemon.service")
     def activate(self):
-        systemd.service(
-            name="Stop MTA-STS daemon",
-            service="mta-sts-daemon.service",
-            daemon_reload=True,
+        self.ensure_service(
+            "mta-sts-daemon.service",
             running=False,
             enabled=False,
         )
@@ -251,14 +219,7 @@ class WebsiteDeployer(Deployer):
         self.config = config
     def install(self):
-        files.directory(
-            name="Ensure /var/www exists",
-            path="/var/www",
-            user="root",
-            group="root",
-            mode="755",
-            present=True,
-        )
+        self.ensure_directory("/var/www")
     def configure(self):
         www_path, src_dir, build_dir = get_paths(self.config)
@@ -288,15 +249,11 @@ class LegacyRemoveDeployer(Deployer):
         # remove historic expunge script
         # which is now implemented through a systemd timer (chatmail-expire)
-        files.file(
-            path="/etc/cron.d/expunge",
-            present=False,
-        )
+        self.remove_file("/etc/cron.d/expunge")
         # Remove OBS repository key that is no longer used.
-        files.file("/etc/apt/keyrings/obs-home-deltachat.gpg", present=False)
-        files.line(
-            name="Remove DeltaChat OBS home repository from sources.list",
+        self.remove_file("/etc/apt/keyrings/obs-home-deltachat.gpg")
+        self.ensure_line(
             path="/etc/apt/sources.list",
             line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
             escape_regex_characters=True,
@@ -304,11 +261,7 @@ class LegacyRemoveDeployer(Deployer):
         )
         # prior relay versions used filelogging
-        files.directory(
-            name="Ensure old logs on disk are deleted",
-            path="/var/log/journal/",
-            present=False,
-        )
+        self.remove_directory("/var/log/journal/")
        # remove echobot if it is still running
        if has_systemd() and host.get_fact(SystemdEnabled).get("echobot.service"):
            systemd.service(
@@ -350,22 +303,13 @@ class TurnDeployer(Deployer):
                 "0fb3e792419494e21ecad536464929dba706bb2c88884ed8f1788141d26fc756",
             ),
         }[host.get_fact(facts.server.Arch)]
-        existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/chatmail-turn")
-        if existing_sha256sum != sha256sum:
-            server.shell(
-                name="Download chatmail-turn",
-                commands=[
-                    f"(curl -L {url} >/usr/local/bin/chatmail-turn.new && (echo '{sha256sum} /usr/local/bin/chatmail-turn.new' | sha256sum -c) && mv /usr/local/bin/chatmail-turn.new /usr/local/bin/chatmail-turn)",
-                    "chmod 755 /usr/local/bin/chatmail-turn",
-                ],
-            )
+        self.download_executable(url, "/usr/local/bin/chatmail-turn", sha256sum)
     def configure(self):
-        configure_remote_units(self.mail_domain, self.units)
+        configure_remote_units(self, self.mail_domain, self.units)
     def activate(self):
-        activate_remote_units(self.units)
+        activate_remote_units(self, self.units)
 class IrohDeployer(Deployer):
@@ -383,72 +327,30 @@ class IrohDeployer(Deployer):
                 "f8ef27631fac213b3ef668d02acd5b3e215292746a3fc71d90c63115446008b1",
             ),
         }[host.get_fact(facts.server.Arch)]
-        existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/iroh-relay")
-        if existing_sha256sum != sha256sum:
-            server.shell(
-                name="Download iroh-relay",
-                commands=[
-                    f"(curl -L {url} | gunzip | tar -x -f - ./iroh-relay -O >/usr/local/bin/iroh-relay.new && (echo '{sha256sum} /usr/local/bin/iroh-relay.new' | sha256sum -c) && mv /usr/local/bin/iroh-relay.new /usr/local/bin/iroh-relay)",
-                    "chmod 755 /usr/local/bin/iroh-relay",
-                ],
-            )
-            self.need_restart = True
+        self.download_executable(
+            url,
+            "/usr/local/bin/iroh-relay",
+            sha256sum,
+            extract="gunzip | tar -xf - ./iroh-relay -O",
+        )
     def configure(self):
-        systemd_unit = files.put(
-            name="Upload iroh-relay systemd unit",
-            src=get_resource("iroh-relay.service"),
-            dest="/etc/systemd/system/iroh-relay.service",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        self.need_restart |= systemd_unit.changed
-        iroh_config = files.put(
-            name="Upload iroh-relay config",
-            src=get_resource("iroh-relay.toml"),
-            dest="/etc/iroh-relay.toml",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        self.need_restart |= iroh_config.changed
+        self.ensure_systemd_unit("iroh-relay.service")
+        self.put_file("iroh-relay.toml", "/etc/iroh-relay.toml")
     def activate(self):
-        systemd.service(
-            name="Start and enable iroh-relay",
-            service="iroh-relay.service",
-            running=True,
+        self.ensure_service(
+            "iroh-relay.service",
             enabled=self.enable_iroh_relay,
-            restarted=self.need_restart,
         )
-        self.need_restart = False
 class JournaldDeployer(Deployer):
     def configure(self):
-        journald_conf = files.put(
-            name="Configure journald",
-            src=get_resource("journald.conf"),
-            dest="/etc/systemd/journald.conf",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        self.need_restart = journald_conf.changed
+        self.put_file("journald.conf", "/etc/systemd/journald.conf")
     def activate(self):
-        systemd.service(
-            name="Start and enable journald",
-            service="systemd-journald.service",
-            running=True,
-            enabled=True,
-            restarted=self.need_restart,
-        )
-        self.need_restart = False
+        self.ensure_service("systemd-journald.service")
 class ChatmailVenvDeployer(Deployer):
@@ -464,14 +366,14 @@ class ChatmailVenvDeployer(Deployer):
     )
     def install(self):
-        _install_remote_venv_with_chatmaild()
+        _install_remote_venv_with_chatmaild(self)
     def configure(self):
-        _configure_remote_venv_with_chatmaild(self.config)
-        configure_remote_units(self.config.mail_domain, self.units)
+        _configure_remote_venv_with_chatmaild(self, self.config)
+        configure_remote_units(self, self.config.mail_domain, self.units)
     def activate(self):
-        activate_remote_units(self.units)
+        activate_remote_units(self, self.units)
 class ChatmailDeployer(Deployer):
@@ -485,13 +387,9 @@ class ChatmailDeployer(Deployer):
         self.mail_domain = config.mail_domain
     def install(self):
-        files.put(
-            name="Disable installing recommended packages globally",
+        self.put_file(
             src=BytesIO(b'APT::Install-Recommends "false";\n'),
             dest="/etc/apt/apt.conf.d/00InstallRecommends",
-            user="root",
-            group="root",
-            mode="644",
         )
         apt.update(name="apt update", cache_time=24 * 3600)
         apt.upgrade(name="upgrade apt packages", auto_remove=True)
@@ -508,13 +406,10 @@ class ChatmailDeployer(Deployer):
     def configure(self):
         # metadata crashes if the mailboxes dir does not exist
-        files.directory(
-            name="Ensure vmail mailbox directory exists",
-            path=str(self.config.mailboxes_dir),
-            user="vmail",
-            group="vmail",
+        self.ensure_directory(
+            str(self.config.mailboxes_dir),
+            owner="vmail",
             mode="700",
-            present=True,
         )
         # This file is used by auth proxy.
@@ -535,12 +430,7 @@ class FcgiwrapDeployer(Deployer):
     )
     def activate(self):
-        systemd.service(
-            name="Start and enable fcgiwrap",
-            service="fcgiwrap.service",
-            running=True,
-            enabled=True,
-        )
+        self.ensure_service("fcgiwrap.service")
 class GithashDeployer(Deployer):
@@ -553,12 +443,7 @@ class GithashDeployer(Deployer):
             git_diff = subprocess.check_output(["git", "diff"]).decode()
         except Exception:
             git_diff = ""
-        files.put(
-            name="Upload chatmail relay git commit hash",
-            src=StringIO(git_hash + git_diff),
-            dest="/etc/chatmail-version",
-            mode="700",
-        )
+        self.put_file(src=StringIO(git_hash + git_diff), dest="/etc/chatmail-version")
 def get_tls_deployer(config, mail_domain):
@@ -591,11 +476,17 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
         return
     # Check if mtail_address interface is available (if configured)
-    if config.mtail_address and config.mtail_address not in ('127.0.0.1', '::1', 'localhost'):
+    if config.mtail_address and config.mtail_address not in (
+        "127.0.0.1",
+        "::1",
+        "localhost",
+    ):
         ipv4_addrs = host.get_fact(hardware.Ipv4Addrs)
         all_addresses = [addr for addrs in ipv4_addrs.values() for addr in addrs]
         if config.mtail_address not in all_addresses:
-            Out().red(f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n")
+            Out().red(
+                f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n"
+            )
             exit(1)
     if not is_in_container():
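The mtail_address availability check above works by flattening pyinfra's `Ipv4Addrs` fact, a mapping from interface name to a list of addresses, into one flat list before the membership test. A small illustration with a made-up fact value:

```python
# Hypothetical shape of the Ipv4Addrs fact: {interface: [ipv4 addresses]}
ipv4_addrs = {"lo": ["127.0.0.1"], "wg0": ["10.8.0.1"]}

# Flatten all per-interface address lists into a single list,
# mirroring the comprehension used in deploy_chatmail()
all_addresses = [addr for addrs in ipv4_addrs.values() for addr in addrs]

mtail_address = "10.8.0.1"
available = mtail_address in all_addresses
```

If the configured address is absent (e.g. a VPN interface is down), the deploy aborts before any service is touched.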


@@ -5,14 +5,13 @@ from chatmaild.config import Config
 from pyinfra import host
 from pyinfra.facts.deb import DebPackages
 from pyinfra.facts.server import Arch, Command, Sysctl
-from pyinfra.operations import apt, files, server, systemd
+from pyinfra.operations import apt, files, server
 from cmdeploy.basedeploy import (
     Deployer,
     activate_remote_units,
     blocked_service_startup,
     configure_remote_units,
-    get_resource,
     is_in_container,
 )
@@ -59,26 +58,21 @@ class DovecotDeployer(Deployer):
             ],
         )
         self.need_restart = True
-        files.put(
-            name="Pin dovecot packages to block Debian dist-upgrades",
+        self.put_file(
             src=io.StringIO(
                 "Package: dovecot-*\n"
                 "Pin: version *\n"
                 "Pin-Priority: -1\n"
             ),
             dest="/etc/apt/preferences.d/pin-dovecot",
-            user="root",
-            group="root",
-            mode="644",
         )
     def configure(self):
-        configure_remote_units(self.config.mail_domain, self.units)
-        config_restart, self.daemon_reload = _configure_dovecot(self.config)
-        self.need_restart |= config_restart
+        configure_remote_units(self, self.config.mail_domain, self.units)
+        _configure_dovecot(self, self.config)
     def activate(self):
-        activate_remote_units(self.units)
+        activate_remote_units(self, self.units)
         # Detect stale binary: package installed but service still runs old (deleted) binary.
         if not self.disable_mail and not self.need_restart:
@@ -91,19 +85,12 @@ class DovecotDeployer(Deployer):
             if stale == "STALE":
                 self.need_restart = True
-        restart = False if self.disable_mail else self.need_restart
-        systemd.service(
-            name="Disable dovecot for now"
-            if self.disable_mail
-            else "Start and enable Dovecot",
-            service="dovecot.service",
-            running=False if self.disable_mail else True,
-            enabled=False if self.disable_mail else True,
-            restarted=restart,
-            daemon_reload=self.daemon_reload,
+        active = not self.disable_mail
+        self.ensure_service(
+            "dovecot.service",
+            running=active,
+            enabled=active,
         )
-        self.need_restart = False
 def _pick_url(primary, fallback):
@@ -147,39 +134,19 @@ def _download_dovecot_package(package: str, arch: str) -> tuple[str | None, bool
     return deb_filename, True
-def _configure_dovecot(config: Config, debug: bool = False) -> tuple[bool, bool]:
+def _configure_dovecot(deployer, config: Config, debug: bool = False):
     """Configures Dovecot IMAP server."""
-    need_restart = False
-    daemon_reload = False
-    main_config = files.template(
-        src=get_resource("dovecot/dovecot.conf.j2"),
-        dest="/etc/dovecot/dovecot.conf",
-        user="root",
-        group="root",
-        mode="644",
+    deployer.put_template(
+        "dovecot/dovecot.conf.j2",
+        "/etc/dovecot/dovecot.conf",
         config=config,
         debug=debug,
         disable_ipv6=config.disable_ipv6,
     )
-    need_restart |= main_config.changed
-    auth_config = files.put(
-        src=get_resource("dovecot/auth.conf"),
-        dest="/etc/dovecot/auth.conf",
-        user="root",
-        group="root",
-        mode="644",
-    )
-    need_restart |= auth_config.changed
-    lua_push_notification_script = files.put(
-        src=get_resource("dovecot/push_notification.lua"),
-        dest="/etc/dovecot/push_notification.lua",
-        user="root",
-        group="root",
-        mode="644",
-    )
-    need_restart |= lua_push_notification_script.changed
+    deployer.put_file("dovecot/auth.conf", "/etc/dovecot/auth.conf")
+    deployer.put_file(
+        "dovecot/push_notification.lua", "/etc/dovecot/push_notification.lua"
+    )
     # as per https://doc.dovecot.org/2.3/configuration_manual/os/
     # it is recommended to set the following inotify limits
@@ -203,25 +170,20 @@ def _configure_dovecot(config: Config, debug: bool = False)
         persist=True,
     )
-    timezone_env = files.line(
+    deployer.ensure_line(
         name="Set TZ environment variable",
         path="/etc/environment",
         line="TZ=:/etc/localtime",
     )
-    need_restart |= timezone_env.changed
-    restart_conf = files.put(
-        name="dovecot: restart automatically on failure",
-        src=get_resource("service/10_restart.conf"),
-        dest="/etc/systemd/system/dovecot.service.d/10_restart.conf",
+    deployer.put_file(
+        "service/10_restart_on_failure.conf",
+        "/etc/systemd/system/dovecot.service.d/10_restart.conf",
     )
-    daemon_reload |= restart_conf.changed
     # Validate dovecot configuration before restart
-    if need_restart:
+    if deployer.need_restart:
         server.shell(
             name="Validate dovecot configuration",
             commands=["doveconf -n >/dev/null"],
         )
-    return need_restart, daemon_reload


@@ -1,10 +1,8 @@
-import io
 from pyinfra import host
 from pyinfra.facts.files import File
-from pyinfra.operations import files, systemd
-from cmdeploy.basedeploy import Deployer, get_resource
+from ..basedeploy import Deployer
 class ExternalTlsDeployer(Deployer):
@@ -23,45 +21,24 @@ class ExternalTlsDeployer(Deployer):
     def configure(self):
         # Verify cert and key exist on the remote host using pyinfra facts.
         for path in (self.cert_path, self.key_path):
-            info = host.get_fact(File, path=path)
-            if info is None:
+            if host.get_fact(File, path=path) is None:
                 raise Exception(f"External TLS file not found on server: {path}")
-        # Deploy the .path unit (templated with the cert path).
-        # pkg=__package__ is required here because the resource files
-        # live in cmdeploy.external, not the default cmdeploy package.
-        source = get_resource("tls-cert-reload.path.f", pkg=__package__)
-        content = source.read_text().format(cert_path=self.cert_path).encode()
-        path_unit = files.put(
-            name="Upload tls-cert-reload.path",
-            src=io.BytesIO(content),
-            dest="/etc/systemd/system/tls-cert-reload.path",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        service_unit = files.put(
-            name="Upload tls-cert-reload.service",
-            src=get_resource("tls-cert-reload.service", pkg=__package__),
-            dest="/etc/systemd/system/tls-cert-reload.service",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        if path_unit.changed or service_unit.changed:
-            self.need_restart = True
+        self.ensure_systemd_unit(
+            "external/tls-cert-reload.path.j2",
+            cert_path=self.cert_path,
+        )
+        self.ensure_systemd_unit(
+            "external/tls-cert-reload.service",
+        )
     def activate(self):
-        systemd.service(
-            name="Enable tls-cert-reload path watcher",
-            service="tls-cert-reload.path",
-            running=True,
-            enabled=True,
-            restarted=self.need_restart,
-            daemon_reload=self.need_restart,
-        )
         # No explicit reload needed here: dovecot/nginx read the cert
         # on startup, and the .path watcher handles live changes.
+        self.ensure_service(
+            "tls-cert-reload.path",
+            running=True,
+            enabled=True,
+        )


@@ -9,7 +9,7 @@
 Description=Watch TLS certificate for changes
 [Path]
-PathChanged={cert_path}
+PathChanged={{ cert_path }}
 [Install]
 WantedBy=multi-user.target


@@ -1,52 +1,40 @@
-from pyinfra import facts, host
-from pyinfra.operations import files, systemd
-from cmdeploy.basedeploy import Deployer, get_resource
+import os
+from pyinfra import facts, host
+from cmdeploy.basedeploy import Deployer
 class FiltermailDeployer(Deployer):
-    services = ["filtermail", "filtermail-incoming"]
+    services = ["filtermail", "filtermail-incoming", "filtermail-transport"]
     bin_path = "/usr/local/bin/filtermail"
     config_path = "/usr/local/lib/chatmaild/chatmail.ini"
-    def __init__(self):
-        self.need_restart = False
     def install(self):
+        local_bin = os.environ.get("CHATMAIL_FILTERMAIL_BINARY")
+        if local_bin:
+            self.put_executable(
+                src=local_bin,
+                dest=self.bin_path,
+            )
+            return
         arch = host.get_fact(facts.server.Arch)
-        url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.1/filtermail-{arch}"
+        url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.4/filtermail-{arch}"
         sha256sum = {
-            "x86_64": "48b3fb80c092d00b9b0a0ef77a8673496da3b9aed5ec1851e1df936d5589d62f",
-            "aarch64": "c65bd5f45df187d3d65d6965a285583a3be0f44a6916ff12909ff9a8d702c22e",
+            "x86_64": "5295115952c72e4c4ec3c85546e094b4155a4c702c82bd71fcdcb744dc73adf6",
+            "aarch64": "6892244f17b8f26ccb465766e96028e7222b3c8adefca9fc6bfe9ff332ca8dff",
         }[arch]
-        self.need_restart |= files.download(
-            name="Download filtermail",
-            src=url,
-            sha256sum=sha256sum,
-            dest=self.bin_path,
-            mode="755",
-        ).changed
+        self.download_executable(url, self.bin_path, sha256sum)
     def configure(self):
         for service in self.services:
-            self.need_restart |= files.template(
-                src=get_resource(f"filtermail/{service}.service.j2"),
-                dest=f"/etc/systemd/system/{service}.service",
-                user="root",
-                group="root",
-                mode="644",
+            self.ensure_systemd_unit(
+                f"filtermail/{service}.service.j2",
                 bin_path=self.bin_path,
                 config_path=self.config_path,
-            ).changed
+            )
     def activate(self):
         for service in self.services:
-            systemd.service(
-                name=f"Start and enable {service}",
-                service=f"{service}.service",
-                running=True,
-                enabled=True,
-                restarted=self.need_restart,
-                daemon_reload=True,
-            )
-        self.need_restart = False
+            self.ensure_service(f"{service}.service")
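The new CHATMAIL_FILTERMAIL_BINARY branch lets an operator deploy a locally built filtermail instead of the pinned release. A sketch of that selection logic in isolation; `pick_filtermail_source` is a hypothetical helper for illustration only, since the real code calls `put_executable` or `download_executable` directly:

```python
def pick_filtermail_source(environ, release_url):
    """Return ('local', path) if an override binary is configured,
    otherwise ('download', url) for the pinned release."""
    local_bin = environ.get("CHATMAIL_FILTERMAIL_BINARY")
    if local_bin:
        return ("local", local_bin)
    return ("download", release_url)


override = pick_filtermail_source(
    {"CHATMAIL_FILTERMAIL_BINARY": "/tmp/filtermail"}, "https://example.invalid/fm"
)
default = pick_filtermail_source({}, "https://example.invalid/fm")
```

Because the override returns early, the sha256 pinning only applies to the release path; a local build is trusted as-is.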


@@ -0,0 +1,11 @@
+[Unit]
+Description=Chatmail transport service
+
+[Service]
+ExecStart={{ bin_path }} {{ config_path }} transport
+Restart=always
+RestartSec=30
+User=vmail
+
+[Install]
+WantedBy=multi-user.target


@@ -1,10 +1,7 @@
 from pyinfra import facts, host
-from pyinfra.operations import apt, files, server, systemd
+from pyinfra.operations import apt
-from cmdeploy.basedeploy import (
-    Deployer,
-    get_resource,
-)
+from cmdeploy.basedeploy import Deployer
 class MtailDeployer(Deployer):
@@ -18,51 +15,30 @@ class MtailDeployer(Deployer):
         (url, sha256sum) = {
             "x86_64": (
                 "https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_amd64.tar.gz",
-                "123c2ee5f48c3eff12ebccee38befd2233d715da736000ccde49e3d5607724e4",
+                "d55cb601049c5e61eabab29998dbbcea95d480e5448544f9470337ba2eea882e",
             ),
             "aarch64": (
                 "https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_arm64.tar.gz",
-                "aa04811c0929b6754408676de520e050c45dddeb3401881888a092c9aea89cae",
+                "f748db8ad2a1e0b63684d4c8868cf6a373a20f7e6922e5ece601fff0ee00eb1a",
             ),
         }[host.get_fact(facts.server.Arch)]
-        server.shell(
-            name="Download mtail",
-            commands=[
-                f"(echo '{sha256sum} /usr/local/bin/mtail' | sha256sum -c) || (curl -L {url} | gunzip | tar -x -f - mtail -O >/usr/local/bin/mtail.new && mv /usr/local/bin/mtail.new /usr/local/bin/mtail)",
-                "chmod 755 /usr/local/bin/mtail",
-            ],
-        )
+        self.download_executable(
+            url,
+            "/usr/local/bin/mtail",
+            sha256sum,
+            extract="gunzip | tar -xf - mtail -O",
+        )
     def configure(self):
         # Using our own systemd unit instead of `/usr/lib/systemd/system/mtail.service`.
         # This allows to read from journalctl instead of log files.
-        files.template(
-            src=get_resource("mtail/mtail.service.j2"),
-            dest="/etc/systemd/system/mtail.service",
-            user="root",
-            group="root",
-            mode="644",
+        self.ensure_systemd_unit(
+            "mtail/mtail.service.j2",
             address=self.mtail_address or "127.0.0.1",
             port=3903,
         )
-        mtail_conf = files.put(
-            name="Mtail configuration",
-            src=get_resource("mtail/delivered_mail.mtail"),
-            dest="/etc/mtail/delivered_mail.mtail",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        self.need_restart = mtail_conf.changed
+        self.put_file("mtail/delivered_mail.mtail", "/etc/mtail/delivered_mail.mtail")
     def activate(self):
-        systemd.service(
-            name="Start and enable mtail",
-            service="mtail.service",
-            running=bool(self.mtail_address),
-            enabled=bool(self.mtail_address),
-            restarted=self.need_restart,
-        )
-        self.need_restart = False
+        active = bool(self.mtail_address)
+        self.ensure_service("mtail.service", running=active, enabled=active)


@@ -1,10 +1,13 @@
 [Unit]
 Description=mtail
+After=network-online.target
+Wants=network-online.target
 [Service]
 Type=simple
 ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/local/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs -"
 Restart=on-failure
+RestartSec=2s
 [Install]
 WantedBy=multi-user.target


@@ -1,5 +1,5 @@
from chatmaild.config import Config from chatmaild.config import Config
from pyinfra.operations import apt, files, systemd from pyinfra.operations import apt
from cmdeploy.basedeploy import ( from cmdeploy.basedeploy import (
Deployer, Deployer,
@@ -31,87 +31,50 @@ class NginxDeployer(Deployer):
# For documentation about policy-rc.d, see: # For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt # https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
# #
-        files.put(
-            src=get_resource("policy-rc.d"),
-            dest="/usr/sbin/policy-rc.d",
-            user="root",
-            group="root",
-            mode="755",
-        )
+        self.put_executable(src="policy-rc.d", dest="/usr/sbin/policy-rc.d")
         apt.packages(
             name="Install nginx",
             packages=["nginx", "libnginx-mod-stream"],
         )
-        files.file("/usr/sbin/policy-rc.d", present=False)
+        self.remove_file("/usr/sbin/policy-rc.d")

     def configure(self):
-        self.need_restart = _configure_nginx(self.config)
+        _configure_nginx(self, self.config)

     def activate(self):
-        systemd.service(
-            name="Start and enable nginx",
-            service="nginx.service",
-            running=True,
-            enabled=True,
-            restarted=self.need_restart,
-        )
-        self.need_restart = False
+        self.ensure_service("nginx.service")


-def _configure_nginx(config: Config, debug: bool = False) -> bool:
+def _configure_nginx(deployer, config: Config, debug: bool = False):
     """Configures nginx HTTP server."""
-    need_restart = False
-    main_config = files.template(
-        src=get_resource("nginx/nginx.conf.j2"),
-        dest="/etc/nginx/nginx.conf",
-        user="root",
-        group="root",
-        mode="644",
+    deployer.put_template(
+        "nginx/nginx.conf.j2",
+        "/etc/nginx/nginx.conf",
         config=config,
         disable_ipv6=config.disable_ipv6,
     )
-    need_restart |= main_config.changed
-    autoconfig = files.template(
-        src=get_resource("nginx/autoconfig.xml.j2"),
-        dest="/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
-        user="root",
-        group="root",
-        mode="644",
+    deployer.put_template(
+        "nginx/autoconfig.xml.j2",
+        "/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
         config=config,
     )
-    need_restart |= autoconfig.changed
-    mta_sts_config = files.template(
-        src=get_resource("nginx/mta-sts.txt.j2"),
-        dest="/var/www/html/.well-known/mta-sts.txt",
-        user="root",
-        group="root",
-        mode="644",
+    deployer.put_template(
+        "nginx/mta-sts.txt.j2",
+        "/var/www/html/.well-known/mta-sts.txt",
         config=config,
     )
-    need_restart |= mta_sts_config.changed

     # install CGI newemail script
     #
     cgi_dir = "/usr/lib/cgi-bin"
-    files.directory(
-        name=f"Ensure {cgi_dir} exists",
-        path=cgi_dir,
-        user="root",
-        group="root",
-    )
-    files.put(
-        name="Upload cgi newemail.py script",
+    deployer.ensure_directory(cgi_dir)
+    deployer.put_executable(
         src=get_resource("newemail.py", pkg="chatmaild").open("rb"),
         dest=f"{cgi_dir}/newemail.py",
-        user="root",
-        group="root",
-        mode="755",
     )
-    return need_restart
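Taken together, the new `Deployer` helpers replace the hand-written per-file `need_restart` bookkeeping with automated change tracking. The intended semantics are pinned down by the new `test_basedeploy` tests further down in this diff; purely as an illustration (a simplified sketch inferred from those tests, not the actual `cmdeploy.basedeploy` implementation, and without the real pyinfra calls), the tracking could look like:

```python
# Simplified sketch of the Deployer change-tracking pattern.
# Names follow the diff; internals are guessed from the new tests.

class Deployer:
    def __init__(self):
        self.need_restart = False   # any managed file changed
        self.daemon_reload = False  # a systemd unit file changed

    def _track(self, dest, changed):
        # Every file operation feeds the restart decision.
        self.need_restart |= changed
        if dest.startswith("/etc/systemd/"):
            self.daemon_reload |= changed

    def put_file(self, src, dest):
        changed = True  # stand-in for files.put(...).changed
        self._track(dest, changed)

    def ensure_service(self, service, running=True, enabled=True):
        # Restart only services that should be running; clear daemon_reload
        # after the first service so `systemctl daemon-reload` happens once,
        # but keep need_restart so later services of this deployer restart too.
        restarted = self.need_restart and running
        call = dict(
            service=service,
            running=running,
            enabled=enabled,
            restarted=restarted,
            daemon_reload=self.daemon_reload,
        )
        self.daemon_reload = False
        return call
```

The design choice visible in the tests: `daemon_reload` resets after the first `ensure_service()` call, while `need_restart` persists for all services managed by the same deployer.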

View File

@@ -42,6 +42,9 @@ stream {
 }

 http {
+    # access_log setting is inherited by all server sections
+    access_log syslog:server=unix:/dev/log,facility=local7;
+
     {% if config.tls_cert_mode == "self" %}
     limit_req_zone $binary_remote_addr zone=newaccount:10m rate=2r/s;
     {% endif %}
@@ -69,11 +72,9 @@ http {
         index index.html index.htm;
-        server_name {{ config.mail_domain }} www.{{ config.mail_domain }} mta-sts.{{ config.mail_domain }};
-        access_log syslog:server=unix:/dev/log,facility=local7;
+        server_name {{ config.mail_domain }} mta-sts.{{ config.mail_domain }};

-        location /mxdeliv/ {
+        location /mxdeliv {
             proxy_pass http://127.0.0.1:{{ config.filtermail_http_port_incoming }};
         }
@@ -143,7 +144,6 @@ http {
         listen 127.0.0.1:8443 ssl;
         server_name www.{{ config.mail_domain }};
         return 301 $scheme://{{ config.mail_domain }}$request_uri;
-        access_log syslog:server=unix:/dev/log,facility=local7;
     }

     server {

View File

@@ -4,9 +4,9 @@ Installs OpenDKIM
 from pyinfra import host
 from pyinfra.facts.files import File
-from pyinfra.operations import apt, files, server, systemd
+from pyinfra.operations import apt, files, server

-from cmdeploy.basedeploy import Deployer, get_resource
+from cmdeploy.basedeploy import Deployer


 class OpendkimDeployer(Deployer):
@@ -25,65 +25,39 @@ class OpendkimDeployer(Deployer):
         domain = self.mail_domain
         dkim_selector = "opendkim"
         """Configures OpenDKIM"""
-        need_restart = False
-        main_config = files.template(
-            src=get_resource("opendkim/opendkim.conf"),
-            dest="/etc/opendkim.conf",
-            user="root",
-            group="root",
-            mode="644",
+        self.put_template(
+            "opendkim/opendkim.conf",
+            "/etc/opendkim.conf",
             config={"domain_name": domain, "opendkim_selector": dkim_selector},
         )
-        need_restart |= main_config.changed
-        screen_script = files.file(
-            path="/etc/opendkim/screen.lua",
-            present=False,
-        )
-        need_restart |= screen_script.changed
-        final_script = files.file(
-            path="/etc/opendkim/final.lua",
-            present=False,
-        )
-        need_restart |= final_script.changed
-        files.directory(
-            name="Add opendkim directory to /etc",
-            path="/etc/opendkim",
-            user="opendkim",
-            group="opendkim",
+        self.remove_file("/etc/opendkim/screen.lua")
+        self.remove_file("/etc/opendkim/final.lua")
+        self.ensure_directory(
+            "/etc/opendkim",
+            owner="opendkim",
             mode="750",
-            present=True,
         )
-        keytable = files.template(
-            src=get_resource("opendkim/KeyTable"),
-            dest="/etc/dkimkeys/KeyTable",
-            user="opendkim",
-            group="opendkim",
-            mode="644",
+        self.put_template(
+            "opendkim/KeyTable",
+            "/etc/dkimkeys/KeyTable",
+            owner="opendkim",
             config={"domain_name": domain, "opendkim_selector": dkim_selector},
         )
-        need_restart |= keytable.changed
-        signing_table = files.template(
-            src=get_resource("opendkim/SigningTable"),
-            dest="/etc/dkimkeys/SigningTable",
-            user="opendkim",
-            group="opendkim",
-            mode="644",
+        self.put_template(
+            "opendkim/SigningTable",
+            "/etc/dkimkeys/SigningTable",
+            owner="opendkim",
             config={"domain_name": domain, "opendkim_selector": dkim_selector},
         )
-        need_restart |= signing_table.changed
-        files.directory(
-            name="Add opendkim socket directory to /var/spool/postfix",
-            path="/var/spool/postfix/opendkim",
-            user="opendkim",
-            group="opendkim",
+        self.ensure_directory(
+            "/var/spool/postfix/opendkim",
+            owner="opendkim",
             mode="750",
-            present=True,
         )

         if not host.get_fact(File, f"/etc/dkimkeys/{dkim_selector}.private"):
@@ -96,12 +70,10 @@ class OpendkimDeployer(Deployer):
             _su_user="opendkim",
         )

-        service_file = files.put(
-            name="Configure opendkim to restart once a day",
-            src=get_resource("opendkim/systemd.conf"),
-            dest="/etc/systemd/system/opendkim.service.d/10-prevent-memory-leak.conf",
+        self.put_file(
+            "opendkim/systemd.conf",
+            "/etc/systemd/system/opendkim.service.d/10-prevent-memory-leak.conf",
         )
-        need_restart |= service_file.changed

         files.file(
             name="chown opendkim: /etc/dkimkeys/opendkim.private",
@@ -110,15 +82,5 @@ class OpendkimDeployer(Deployer):
             group="opendkim",
         )
-        self.need_restart = need_restart

     def activate(self):
-        systemd.service(
-            name="Start and enable OpenDKIM",
-            service="opendkim.service",
-            running=True,
-            enabled=True,
-            daemon_reload=self.need_restart,
-            restarted=self.need_restart,
-        )
-        self.need_restart = False
+        self.ensure_service("opendkim.service")

View File

@@ -1,11 +1,10 @@
-from pyinfra.operations import apt, files, server, systemd
+from pyinfra.operations import apt, server

-from cmdeploy.basedeploy import Deployer, get_resource
+from cmdeploy.basedeploy import Deployer


 class PostfixDeployer(Deployer):
     required_users = [("postfix", None, ["opendkim"])]
-    daemon_reload = False

     def __init__(self, config, disable_mail):
         self.config = config
@@ -19,81 +18,46 @@ class PostfixDeployer(Deployer):
     def configure(self):
         config = self.config
-        need_restart = False
-        main_config = files.template(
-            src=get_resource("postfix/main.cf.j2"),
-            dest="/etc/postfix/main.cf",
-            user="root",
-            group="root",
-            mode="644",
+        self.put_template(
+            "postfix/main.cf.j2",
+            "/etc/postfix/main.cf",
             config=config,
             disable_ipv6=config.disable_ipv6,
         )
-        need_restart |= main_config.changed
-        master_config = files.template(
-            src=get_resource("postfix/master.cf.j2"),
-            dest="/etc/postfix/master.cf",
-            user="root",
-            group="root",
-            mode="644",
+        self.put_template(
+            "postfix/master.cf.j2",
+            "/etc/postfix/master.cf",
             debug=False,
             config=config,
         )
-        need_restart |= master_config.changed
-        header_cleanup = files.put(
-            src=get_resource("postfix/submission_header_cleanup"),
-            dest="/etc/postfix/submission_header_cleanup",
-            user="root",
-            group="root",
-            mode="644",
+        self.put_file(
+            "postfix/submission_header_cleanup",
+            "/etc/postfix/submission_header_cleanup",
         )
-        need_restart |= header_cleanup.changed
-        lmtp_header_cleanup = files.put(
-            src=get_resource("postfix/lmtp_header_cleanup"),
-            dest="/etc/postfix/lmtp_header_cleanup",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        need_restart |= lmtp_header_cleanup.changed
-        tls_policy_map = files.put(
-            name="Upload SMTP TLS Policy that accepts self-signed certificates for IP-only hosts",
-            src=get_resource("postfix/smtp_tls_policy_map"),
-            dest="/etc/postfix/smtp_tls_policy_map",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        need_restart |= tls_policy_map.changed
-        if tls_policy_map.changed:
+        self.put_file("postfix/lmtp_header_cleanup", "/etc/postfix/lmtp_header_cleanup")
+        res = self.put_file(
+            "postfix/smtp_tls_policy_map", "/etc/postfix/smtp_tls_policy_map"
+        )
+        tls_policy_changed = res.changed
+        if tls_policy_changed:
             server.shell(
                 commands=["postmap /etc/postfix/smtp_tls_policy_map"],
             )

         # Login map that 1:1 maps email address to login.
-        login_map = files.put(
-            src=get_resource("postfix/login_map"),
-            dest="/etc/postfix/login_map",
-            user="root",
-            group="root",
-            mode="644",
-        )
-        need_restart |= login_map.changed
+        self.put_file("postfix/login_map", "/etc/postfix/login_map")

-        restart_conf = files.put(
-            name="postfix: restart automatically on failure",
-            src=get_resource("service/10_restart.conf"),
-            dest="/etc/systemd/system/postfix@.service.d/10_restart.conf",
+        self.put_file(
+            "service/10_restart_on_failure.conf",
+            "/etc/systemd/system/postfix@.service.d/10_restart.conf",
         )
-        self.daemon_reload = restart_conf.changed

         # Validate postfix configuration before restart
-        if need_restart:
+        if self.need_restart:
             server.shell(
                 name="Validate postfix configuration",
                 # Extract stderr and quit with error if non-zero
@@ -101,19 +65,11 @@ class PostfixDeployer(Deployer):
                 """bash -c 'w=$(postconf 2>&1 >/dev/null); [[ -z "$w" ]] || { echo "$w"; false; }'"""
                 ],
             )
-        self.need_restart = need_restart

     def activate(self):
-        restart = False if self.disable_mail else self.need_restart
-
-        systemd.service(
-            name="disable postfix for now"
-            if self.disable_mail
-            else "Start and enable Postfix",
-            service="postfix.service",
-            running=False if self.disable_mail else True,
-            enabled=False if self.disable_mail else True,
-            restarted=restart,
-            daemon_reload=self.daemon_reload,
+        active = not self.disable_mail
+        self.ensure_service(
+            "postfix.service",
+            running=active,
+            enabled=active,
         )
-        self.need_restart = False

View File

@@ -79,22 +79,6 @@ inet_protocols = ipv4
 inet_protocols = all
 {% endif %}

-# Postfix does not try IPv4 and IPv6 connections
-# concurrently as of version 3.7.11.
-#
-# When relay has both A (IPv4) and AAAA (IPv6) records,
-# but broken IPv6 connectivity,
-# every second message is delayed by the connection timeout
-# <https://www.postfix.org/postconf.5.html#smtp_connect_timeout>
-# which defaults to 30 seconds. Reducing timeouts is not a solution
-# as this will result in a failure to connect to slow servers.
-#
-# As a workaround we always prefer IPv4 when it is available.
-#
-# The setting is documented at
-# <https://www.postfix.org/postconf.5.html#smtp_address_preference>
-smtp_address_preference=ipv4
-
 virtual_transport = lmtp:unix:private/dovecot-lmtp
 virtual_mailbox_domains = {{ config.mail_domain }}
 lmtp_header_checks = regexp:/etc/postfix/lmtp_header_cleanup
@@ -109,3 +93,12 @@ smtpd_sender_login_maps = regexp:/etc/postfix/login_map
 # Do not lookup SMTP client hostnames to reduce delays
 # and avoid unnecessary DNS requests.
 smtpd_peername_lookup = no
+
+# Use filtermail-transport to relay messages.
+# We can't force postfix to split messages per destination
+# when specifying a custom next-hop,
+# so instead this is handled in filtermail.
+# We use LMTP instead of SMTP so we can communicate per-recipient errors back to postfix.
+default_transport = lmtp-filtermail:inet:[127.0.0.1]:{{ config.filtermail_lmtp_port_transport }}
+lmtp-filtermail_initial_destination_concurrency=10000
+lmtp-filtermail_destination_concurrency_limit=10000

View File

@@ -100,3 +100,8 @@ filter unix - n n - - lmtp
 # cannot send unprotected Subject.
 authclean unix n - - - 0 cleanup
   -o header_checks=regexp:/etc/postfix/submission_header_cleanup
+
+lmtp-filtermail unix - - y - 10000 lmtp
+  -o syslog_name=postfix/lmtp-filtermail
+  -o lmtp_header_checks=
+  -o lmtp_tls_security_level=none

View File

@@ -64,21 +64,25 @@ def get_dkim_entry(mail_domain, pre_command, dkim_selector):
     )


-def query_dns(typ, domain):
-    # Get autoritative nameserver from the SOA record.
-    soa_answers = [
+def get_authoritative_ns(domain):
+    ns_replies = [
         x.split()
         for x in shell(
-            f"dig -r -q {domain} -t SOA +noall +authority +answer", print=log_progress
+            f"dig -r -q {domain} -t NS +noall +authority +answer", print=log_progress
         ).split("\n")
     ]
-    soa = [a for a in soa_answers if len(a) >= 3 and a[3] == "SOA"]
-    if not soa:
+    filtered_replies = [a for a in ns_replies if len(a) >= 5 and a[3] == "NS"]
+    if not filtered_replies:
         return
-    ns = soa[0][4]
+    return filtered_replies[0][4]
+
+
+def query_dns(typ, domain):
+    ns = get_authoritative_ns(domain)
     # Query authoritative nameserver directly to bypass DNS cache.
-    res = shell(f"dig @{ns} -r -q {domain} -t {typ} +short", print=log_progress)
+    direct_ns = f"@{ns}" if ns else ""
+    res = shell(f"dig {direct_ns} -r -q {domain} -t {typ} +short", print=log_progress)
     return next((line for line in res.split("\n") if not line.startswith(";")), "")

View File

@@ -1,8 +1,8 @@
 import shlex

-from pyinfra.operations import apt, server
+from pyinfra.operations import server

-from cmdeploy.basedeploy import Deployer
+from ..basedeploy import Deployer


 def openssl_selfsigned_args(domain, cert_path, key_path, days=36500):
@@ -34,11 +34,7 @@ class SelfSignedTlsDeployer(Deployer):
         self.cert_path = "/etc/ssl/certs/mailserver.pem"
         self.key_path = "/etc/ssl/private/mailserver.key"

-    def install(self):
-        apt.packages(
-            name="Install openssl",
-            packages=["openssl"],
-        )
-
     def configure(self):
         args = openssl_selfsigned_args(
@@ -52,3 +48,5 @@ class SelfSignedTlsDeployer(Deployer):

     def activate(self):
         pass

View File

@@ -281,3 +281,13 @@ def test_deployed_state(remote):
     # assert len(git_status) == len(remote_version)  # for some reason, we only get 11 lines from remote.iter_output()
     for i in range(len(remote_version)):
         assert git_status[i] == remote_version[i], "You have undeployed changes."
+
+
+def test_nginx_access_log_only_defined_once(sshdomain):
+    sshexec = get_sshexec(sshdomain)
+    conf = sshexec(
+        call=remote.rshell.shell,
+        kwargs=dict(command="nginx -T 2>/dev/null"),
+    )
+    access_logs = [l for l in conf.splitlines() if l.strip().startswith("access_log")]
+    assert len(access_logs) == 1, f"expected 1 access_log, found {len(access_logs)}: {access_logs}"

View File

@@ -0,0 +1,118 @@
+from unittest.mock import MagicMock, patch
+
+from cmdeploy.basedeploy import Deployer
+
+
+def test_put_file_restart_and_reload():
+    deployer = Deployer()
+    mock_res = MagicMock()
+    mock_res.changed = True
+    with patch("cmdeploy.basedeploy.files.put", return_value=mock_res):
+        deployer.put_file("foo.conf", "/etc/foo.conf")
+        assert deployer.need_restart is True
+        assert deployer.daemon_reload is False
+
+        deployer = Deployer()
+        deployer.put_file("test.service", "/etc/systemd/system/test.service")
+        assert deployer.need_restart is True
+        assert deployer.daemon_reload is True
+
+
+def test_remove_file():
+    deployer = Deployer()
+    mock_res = MagicMock()
+    mock_res.changed = True
+    with patch("cmdeploy.basedeploy.files.file", return_value=mock_res) as mock_file:
+        deployer.remove_file("/etc/foo.conf")
+        mock_file.assert_called_once_with(
+            name="Remove /etc/foo.conf", path="/etc/foo.conf", present=False
+        )
+        assert deployer.need_restart is True
+
+
+def test_ensure_systemd_unit():
+    deployer = Deployer()
+    mock_res = MagicMock()
+    mock_res.changed = True
+
+    # Plain service file
+    with patch("cmdeploy.basedeploy.files.put", return_value=mock_res) as mock_put:
+        deployer.ensure_systemd_unit("iroh-relay.service")
+        assert (
+            mock_put.call_args.kwargs["dest"]
+            == "/etc/systemd/system/iroh-relay.service"
+        )
+        assert deployer.need_restart is True
+        assert deployer.daemon_reload is True
+
+    deployer = Deployer()
+    # Template (.j2) dispatches to put_template and strips .j2 suffix
+    with patch("cmdeploy.basedeploy.files.template", return_value=mock_res) as mock_tpl:
+        deployer.ensure_systemd_unit(
+            "filtermail/chatmaild.service.j2",
+            bin_path="/usr/local/bin/filtermail",
+        )
+        assert (
+            mock_tpl.call_args.kwargs["dest"] == "/etc/systemd/system/chatmaild.service"
+        )
+
+    deployer = Deployer()
+    # Explicit dest_name override
+    with patch("cmdeploy.basedeploy.files.put", return_value=mock_res) as mock_put:
+        deployer.ensure_systemd_unit(
+            "acmetool/acmetool-reconcile.timer",
+            dest_name="acmetool-reconcile.timer",
+        )
+        assert (
+            mock_put.call_args.kwargs["dest"]
+            == "/etc/systemd/system/acmetool-reconcile.timer"
+        )
+
+
+def test_ensure_service():
+    with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
+        deployer = Deployer()
+        deployer.need_restart = True
+        deployer.daemon_reload = True
+        deployer.ensure_service("nginx.service")
+        mock_svc.assert_called_once_with(
+            name="Start and enable nginx.service",
+            service="nginx.service",
+            running=True,
+            enabled=True,
+            restarted=True,
+            daemon_reload=True,
+        )
+        # daemon_reload is cleared to avoid multiple systemctl daemon-reload calls
+        # need_restart is kept to ensure all subsequent services also restart
+        assert deployer.need_restart is True
+        assert deployer.daemon_reload is False
+
+    with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
+        # Stopping suppresses restarted even when need_restart is True
+        deployer = Deployer()
+        deployer.need_restart = True
+        deployer.daemon_reload = True
+        deployer.ensure_service(
+            "mta-sts-daemon.service",
+            running=False,
+            enabled=False,
+        )
+        assert mock_svc.call_args.kwargs["restarted"] is False
+        assert deployer.need_restart is True
+
+    with patch("cmdeploy.basedeploy.systemd.service") as mock_svc:
+        # Multiple calls: daemon_reload resets after first, need_restart persists
+        deployer = Deployer()
+        deployer.need_restart = True
+        deployer.daemon_reload = True
+        deployer.ensure_service("chatmaild.service")
+        deployer.ensure_service("chatmaild-metadata.service")
+        second_call = mock_svc.call_args_list[1]
+        assert second_call.kwargs["restarted"] is True
+        assert second_call.kwargs["daemon_reload"] is False

View File

@@ -4,6 +4,7 @@ import pytest

 from cmdeploy import remote
 from cmdeploy.dns import check_full_zone, check_initial_remote_data, parse_zone_records
+from cmdeploy.remote.rdns import get_authoritative_ns


 @pytest.fixture
@@ -14,11 +15,15 @@ def mockdns_base(monkeypatch):
         if command.startswith("dig"):
             if command == "dig":
                 return "."
-            if "SOA" in command:
+            if "with.public.soa" in command and "NS" in command:
+                return "domain.with.public.soa. 2419 IN NS ns1.first-ns.de."
+            if "with.hidden.soa" in command and "NS" in command:
                 return (
-                    "delta.chat. 21600 IN SOA ns1.first-ns.de. dns.hetzner.com."
-                    " 2025102800 14400 1800 604800 3600"
+                    "domain.with.hidden.soa. 2137 IN NS ns1.desec.io.\n"
+                    "domain.with.hidden.soa. 2137 IN NS ns2.desec.org."
                 )
-            if "NS" in command:
-                return "delta.chat. 21600 IN NS ns1.first-ns.de."
             command_chunks = command.split()
             domain, typ = command_chunks[4], command_chunks[6]
             try:
@@ -125,6 +130,17 @@ class TestPerformInitialChecks:
         assert not l


+@pytest.mark.parametrize(
+    ("domain", "ns"),
+    [
+        ("domain.with.public.soa", "ns1.first-ns.de."),
+        ("domain.with.hidden.soa", "ns1.desec.io."),
+    ],
+)
+def test_get_authoritative_ns(domain, ns, mockdns):
+    assert get_authoritative_ns(domain) == ns
+
+
 def test_parse_zone_records():
     text = """
     ; This is a comment

View File

@@ -1,11 +1,10 @@
-import importlib.resources
+from pathlib import Path

 from cmdeploy.www import build_webpages


 def test_build_webpages(tmp_path, make_config):
-    pkgroot = importlib.resources.files("cmdeploy")
-    src_dir = pkgroot.joinpath("../../../www/src").resolve()
+    src_dir = (Path(__file__).resolve() / "../../../../../www/src").resolve()
     assert src_dir.exists(), src_dir
     config = make_config("chat.example.org")
     build_dir = tmp_path.joinpath("build")

View File

@@ -1,5 +1,4 @@
 import hashlib
-import importlib.resources
 import re
 import time
 import traceback
@@ -37,7 +36,7 @@ def prepare_template(source):

 def get_paths(config) -> (Path, Path, Path):
-    reporoot = importlib.resources.files(__package__).joinpath("../../../").resolve()
+    reporoot = (Path(__file__).resolve() / "../../../../").resolve()
     www_path = Path(config.www_folder)
     # if www_folder was not set, use default directory
     if config.www_folder == "":
@@ -133,8 +132,7 @@ def find_merge_conflict(src_dir) -> Path:

 def main():
-    path = importlib.resources.files(__package__)
-    reporoot = path.joinpath("../../../").resolve()
+    reporoot = (Path(__file__).resolve() / "../../../../").resolve()
     inipath = reporoot.joinpath("chatmail.ini")
     config = read_config(inipath)
     config.webdev = True

View File

@@ -153,6 +153,7 @@ Chatmail relay dependency diagram
     autoconfig.xml --- dovecot;
     postfix --- |10080|filtermail-outgoing;
     postfix --- |10081|filtermail-incoming;
+    postfix --- |10083|filtermail-transport;
     filtermail-outgoing --- |10025 reinject|postfix;
     filtermail-incoming --- |10026 reinject|postfix;
     dovecot --- |doveauth.socket|doveauth;
@@ -295,9 +296,7 @@ ensured by ``filtermail`` proxy.
 TLS requirements
 ~~~~~~~~~~~~~~~~

-Postfix is configured to require valid TLS by setting
-`smtp_tls_security_level <https://www.postfix.org/postconf.5.html#smtp_tls_security_level>`_
-to ``verify``.
+Filtermail (used for delivery) requires valid TLS.

 You can test it by resolving ``MX`` records of your relay domain and
 then connecting to MX relays (e.g ``mx.example.org``) with