Compare commits

...

850 Commits

Author SHA1 Message Date
j4n
606f36ee13 docker: integrate documentation
delete original markdown notes (the russian version was severely
outdated) and add a new section.
2026-02-18 17:05:28 +01:00
j4n
72973631f7 docker: move key files to repo root as per convention 2026-02-18 17:05:28 +01:00
j4n
5b5b09dc2e filtermail: wait, don't know where this came from 2026-02-18 17:05:28 +01:00
j4n
aa2f41158f docker: move all relevant files to repository root as per convention 2026-02-18 17:05:28 +01:00
j4n
e0ca4b25f4 docker/chatmaild/config: default to false on CHATMAIL_NOACME 2026-02-18 17:05:28 +01:00
j4n
0b593f98bf docker: remove empty line from chatmail.ini.f 2026-02-18 17:05:28 +01:00
j4n
2e23fadb54 docker: skip redundant cmdeploy run on container restart
Replace the old IMAGE_VERSION_FILE/RUNNING_VERSION_FILE mechanism with a
single deploy fingerprint (image_version:sha256(chatmail.ini)) stored at
/etc/chatmail/.deploy-fingerprint. On restart, if the fingerprint matches
the last successful deploy, skip cmdeploy run entirely. The fingerprint
lives on the container's writable layer. On fresh containers, setting
CMDEPLOY_STAGES non-empty in env forces a deploy run regardless of
fingerprint.

Also narrow the /home volume mount to /home/vmail.
2026-02-18 17:05:28 +01:00
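The fingerprint scheme described above can be sketched in Python (the path and the `image_version:sha256(chatmail.ini)` format follow the commit message; function names are illustrative, not the actual implementation):

```python
import hashlib
from pathlib import Path

# Path from the commit message; lives on the container's writable layer.
FINGERPRINT_FILE = Path("/etc/chatmail/.deploy-fingerprint")

def deploy_fingerprint(image_version: str, ini_path: Path) -> str:
    """Build the fingerprint as image_version:sha256(chatmail.ini)."""
    digest = hashlib.sha256(ini_path.read_bytes()).hexdigest()
    return f"{image_version}:{digest}"

def should_deploy(image_version: str, ini_path: Path,
                  forced_stages: str = "") -> bool:
    """Skip cmdeploy run when the fingerprint matches the last successful
    deploy, unless CMDEPLOY_STAGES is explicitly set (non-empty)."""
    if forced_stages:
        return True
    current = deploy_fingerprint(image_version, ini_path)
    try:
        return FINGERPRINT_FILE.read_text().strip() != current
    except FileNotFoundError:
        # Fresh container: no fingerprint yet, deploy.
        return True
```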
j4n
bc19966801 docker: run a dummy git init to make cmdeploy tooling happy 2026-02-18 17:05:28 +01:00
j4n
bafbaa1b81 docker: fix DKIM key permission denied on bind-mounted volumes
chown the entire /etc/acmekeys directory
2026-02-18 17:05:28 +01:00
j4n
feecf6affd docker: add build.sh to set GIT_HASH for local builds
Simple wrapper as .git is excluded from build context.
2026-02-18 17:05:28 +01:00
j4n
c2c3be1115 docker: add DKIM/ACME mount examples for bare-metal migration 2026-02-18 17:05:28 +01:00
j4n
c6d6e272be docker: pass CHATMAIL_NOSYSCTL and CHATMAIL_NOPORTCHECK to container
These got lost somehow
2026-02-18 17:05:28 +01:00
j4n
425e3db07a docker: slim build by excluding .git and non-essential files
Replace the in-Dockerfile `git rev-parse HEAD` with a GIT_HASH build arg
passed from docker-compose (local) or github.sha (CI), defaulting to
"unknown" when unset.

Also exclude .github/, docs/, tests/, and *.md (except www/**/*.md).
2026-02-18 17:05:28 +01:00
j4n
c22efeb74b docker: use docker-compose.override.yaml for user customizations
The base docker-compose.yaml was checked into git and thus would get
overwritten on pull.
- docker-compose.yaml uses named volumes as safe defaults
- docker-compose.override.yaml (gitignored) holds user customizations
- Compose automatically merges both files
2026-02-18 17:05:28 +01:00
j4n
71bd0da51a docs: document ghcr.io built images
Added pull instructions for pre-built images from ghcr.io (main branch,
tagged releases, feature branches).
2026-02-18 17:05:28 +01:00
j4n
0ed5ec75fb docker: add GitHub Action to build Docker image
Builds the Docker image on PRs and pushes that touch docker/, compose,
chatmaild/, or cmdeploy/ files.
- PRs: build only (no push, no login)
- Branch push (main, j4n/docker): build + push as :main or :j4n-docker
- Tagged release (v*): build + push as :1.2.3, :1.2, :sha-<hash>
Uses GITHUB_TOKEN for ghcr.io auth.
2026-02-18 17:05:28 +01:00
j4n
4fd0429cd3 docker: add Traefik support
USE_FOREIGN_CERT_MANAGER existed in compose/example.env but was never
read by any code. This wires it up end-to-end based on PR 662.

- Preliminarily add config options for this, and skip AcmetoolDeployer if
set.
- Add Traefik integration in docker/docker-compose-traefik.yaml, with
  traefik-certs-dumper
- post-hook.sh creates fullchain/privkey symlinks for chatmail
- Chatmail container uses ports 25/143/465/587/993 directly, Traefik
  handles 80/443
- docker/traefik/ contains config.yaml and dynamic configs
- docker/example-traefik.env for the Traefik setup
- rename USE_FOREIGN_CERT_MANAGER to CHATMAIL_NOACME
2026-02-18 17:05:28 +01:00
j4n
45717de6cb docker: add set -u to setup_chatmail_docker.sh 2026-02-18 17:05:28 +01:00
j4n
77dc67dde9 docker: auto-detect image upgrades and include install stage
Without version tracking, if a new image requires the install stage
(e.g. new package versions), the default configure,activate will skip
it and potentially fail silently.

At build time, the git hash is written to /etc/chatmail-image-version.
At runtime, setup_chatmail_docker.sh compares it against the persisted
/home/.chatmail-running-version (survives container restarts via the
/home volume). If they differ, the install stage is automatically
prepended to CMDEPLOY_STAGES. After a successful deploy, the running
version is updated.

Files: docker/chatmail_relay.dockerfile:68-69, docker/files/setup_chatmail_docker.sh:27-48
2026-02-18 17:05:28 +01:00
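The version-compare logic above can be sketched as follows (the real code is in setup_chatmail_docker.sh; this Python model just shows the decision):

```python
from pathlib import Path

def effective_stages(image_version_file: Path, running_version_file: Path,
                     stages: str = "configure,activate") -> str:
    """Prepend the install stage when the image's git hash differs from
    the version recorded after the last successful deploy."""
    image = image_version_file.read_text().strip()
    try:
        running = running_version_file.read_text().strip()
    except FileNotFoundError:
        running = ""  # first start: no persisted running version yet
    if image != running and "install" not in stages.split(","):
        stages = "install," + stages
    return stages
```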
j4n
f017f88901 docker: docs: replace outdated Russian Docker docs with redirect to EN 2026-02-18 17:05:28 +01:00
j4n
0585314468 docker: extract cert monitor from background process to systemd timer
The cert monitoring was an orphaned background process (`monitor_certificates &`).
Replace it with a proper systemd timer/service (running every 60s).
Also made journald ForwardToConsole=yes idempotent.
2026-02-18 17:05:28 +01:00
j4n
85ee7dbeb5 docker: document security implications of host networking + cgroups 2026-02-18 17:05:28 +01:00
j4n
e503e120e5 docker: add HEALTHCHECK, remove VOLUME, fix Dockerfile hygiene
- Added HEALTHCHECK that verifies chatmail services are active via systemctl
- Removed `VOLUME ["/sys/fs/cgroup", "/home"]` as anonymous volumes are
  an anti-pattern for user data (leads to data loss on upgrades). Let
  compose/`docker run -v` handle volume management.
- Changed TZ from Europe/London to UTC (server best practice)
- Removed duplicate WORKDIR /opt/chatmail
- Moved `unlink /etc/nginx/sites-enabled/default` from entrypoint.sh to
  Dockerfile build time
2026-02-18 17:05:28 +01:00
j4n
475975dfa0 docker: use @local instead of @docker inside container 2026-02-18 17:05:28 +01:00
j4n
a930f8f46b docker: whitelist env vars in entrypoint, quote $@ and paths
Instead of forwarding ALL environment variables into systemd's
PassEnvironment, only forward a whitelist of variables to prevent
unrelated environment variables from leaking into services.
2026-02-18 17:05:28 +01:00
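The whitelist approach can be sketched like this (the variable names in `ALLOWED` are plausible guesses from this log, not the entrypoint's actual list):

```python
# Hypothetical whitelist; the real list lives in entrypoint.sh.
ALLOWED = {"MAIL_DOMAIN", "CMDEPLOY_STAGES", "CHATMAIL_NOACME",
           "CHATMAIL_NOSYSCTL", "CHATMAIL_NOPORTCHECK"}

def pass_environment(environ: dict) -> list:
    """Return only whitelisted variable names, suitable for building a
    systemd PassEnvironment= line instead of forwarding everything."""
    return sorted(name for name in environ if name in ALLOWED)
```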
j4n
75ef0f2698 docker: remove duplicated dovecot hashes from Dockerfile 2026-02-18 17:05:28 +01:00
j4n
57f9327d4d docker: fix cert monitoring — wait for certs dir, use return not exit
Fix bugs in certificate monitoring function:
- `exit 0` inside monitor_certificates() would kill the background process
- calculate_hash() now checks dir existence instead of silently dying
- Added wait loop until $PATH_TO_SSL exists before monitoring

Files: docker/files/setup_chatmail_docker.sh:16-41
2026-02-18 17:05:28 +01:00
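The two fixes — return instead of exit, and waiting for the certs dir — can be modeled in Python (the actual script is shell; names follow the commit message):

```python
import hashlib
import time
from pathlib import Path

def calculate_hash(ssl_dir: Path):
    """Hash all cert files; return None (rather than dying silently)
    when the directory does not exist yet."""
    if not ssl_dir.is_dir():
        return None
    digest = hashlib.sha256()
    for path in sorted(ssl_dir.rglob("*")):
        if path.is_file():
            digest.update(path.read_bytes())
    return digest.hexdigest()

def wait_for_certs(ssl_dir: Path, timeout: float = 60.0,
                   poll: float = 1.0) -> bool:
    """Wait until the certs dir appears; `return` (not exit) on timeout
    so the calling monitor loop keeps running."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ssl_dir.is_dir():
            return True
        time.sleep(poll)
    return False
```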
j4n
e99d979eb8 docker: update docs to use @local for cmdeploy 2026-02-18 17:05:28 +01:00
j4n
ffa45c1ca1 docker: symlink chatmail.ini into /opt/chatmail for bench/tests 2026-02-18 17:05:28 +01:00
j4n
9f6de19121 fix(cmdeploy): add __call__ to LocalExec so status works with @local 2026-02-18 17:05:28 +01:00
j4n
cc779ec04f feat(cmdeploy): read CHATMAIL_INI env var for default --config path
Avoids needing --config /etc/chatmail/chatmail.ini on every command
inside the Docker container, where CHATMAIL_INI is already set.
2026-02-18 17:05:28 +01:00
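Reading CHATMAIL_INI for the default --config path could look like this (a sketch; the non-container fallback value is illustrative, not cmdeploy's actual behavior):

```python
import argparse
import os

def make_parser() -> argparse.ArgumentParser:
    """--config defaults to $CHATMAIL_INI when set (as inside the Docker
    container), so commands don't need to repeat the path."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--config",
        default=os.environ.get("CHATMAIL_INI", "chatmail.ini"),
        help="path to chatmail.ini (defaults to $CHATMAIL_INI if set)",
    )
    return parser
```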
j4n
04bd38baaa docker: add quickstart docker send note 2026-02-18 17:05:28 +01:00
j4n
4df6a96a14 docker: keep .git in build context for GithashDeployer 2026-02-18 17:05:28 +01:00
j4n
47131533df docker: set CHATMAIL_NOPORTCHECK during build-time install 2026-02-18 17:05:28 +01:00
j4n
a84c02e1e5 docker: replace config flags with env vars, drop docker param from deploy_chatmail
Remove change_kernel_settings/fs_inotify_max_user_instances_and_watchers
from chatmail.ini — use CHATMAIL_NOSYSCTL and CHATMAIL_NOPORTCHECK env
vars instead. deploy_chatmail() no longer takes a docker flag; deployers
check the env directly.
2026-02-18 17:05:28 +01:00
j4n
0edff3205f docker: remove dead utilities, fix cmdeploy run using wrong config path
Remove no longer needed docker/files/update_ini.sh and docker/cm_ini_to_env.py.
Fix cmdeploy run not receiving --config.
Document env files
2026-02-18 17:05:28 +01:00
j4n
a48552d69e docker: drop env to ini translation, use chatmail.ini directly
Remove update_ini.sh and the env-var-to-ini pipeline. The container now
has two config modes:

- Simple: set MAIL_DOMAIN in .env, container generates chatmail.ini
  with defaults via `cmdeploy init` on first start.
- Advanced: mount a custom chatmail.ini into the container; the init
  step is skipped when the file already exists.

This eliminates the fragile FORCE_REINIT_INI_FILE / INI_CMD_ARGS
machinery and the env vars that duplicated chatmail.ini settings.

Also add *.ini and .env to .dockerignore so local config files
don't leak into the image.
2026-02-18 17:05:28 +01:00
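The two config modes reduce to a simple check on first start (a sketch; the exact `cmdeploy init` flags are assumptions, not verified against the tool):

```python
import subprocess
from pathlib import Path

def ensure_chatmail_ini(ini_path: Path, mail_domain: str) -> str:
    """Advanced mode: a mounted chatmail.ini wins and init is skipped.
    Simple mode: generate defaults from MAIL_DOMAIN on first start."""
    if ini_path.exists():
        return "mounted"
    # Hypothetical invocation; real flags may differ.
    subprocess.run(
        ["cmdeploy", "init", "--config", str(ini_path), mail_domain],
        check=True,
    )
    return "generated"
```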
j4n
0c746553b3 docker: move cmdeploy into docker image 2026-02-18 17:05:28 +01:00
j4n
ce65866595 docker: make compose work with cgroups (v2), conversion scripts/docs 2026-02-18 17:05:28 +01:00
j4n
557ad2ed3c docker: don't overwrite existing DKIM keys on container start
opendkim-genkey was running unconditionally on every startup;
now check whether the key file exists and skip generation if it does.
2026-02-18 17:05:28 +01:00
j4n
87b1680621 docker: run install stage at build time, configure+activate at startup
Move the CMDEPLOY_STAGES=install execution into the Dockerfile so these
operations are baked into the image layer. On container start, only
configure and activate stages run by default. Users can override with
CMDEPLOY_STAGES="install,configure,activate" to force a full reinstall
without rebuilding the image.

Also fixes CERTS_MONITORING_TIMEOUT typo in docker-compose.yaml (was
"$CERTS MONITORING TIMEOUT"), and replaces the docker-commit workaround
in docs with CMDEPLOY_STAGES documentation.
2026-02-18 17:05:28 +01:00
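The CMDEPLOY_STAGES override described above can be sketched as:

```python
import os

def stages_to_run(environ=None) -> list:
    """Startup runs configure+activate by default; setting
    CMDEPLOY_STAGES="install,configure,activate" forces a full
    reinstall without rebuilding the image."""
    environ = os.environ if environ is None else environ
    raw = environ.get("CMDEPLOY_STAGES", "configure,activate")
    return [s.strip() for s in raw.split(",") if s.strip()]
```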
j4n
872fd2d846 docker: widen build context to repo root for build-time install stage
The Dockerfile will need access to chatmaild/ and cmdeploy/ source
trees to run CMDEPLOY_STAGES=install via pyinfra during image build,
moving install-time work out of container startup. The previous context
(./docker) only included helper scripts.

Also adds .dockerignore to exclude .git, data/, venv/ etc. from the
build context, and updates COPY paths accordingly.
2026-02-18 17:05:28 +01:00
j4n
fa2827a07e feat(cmdeploy): guard against non-running systemd
This enables docker image building without systemd running, which would
make pyinfra SystemdEnabled fail.
2026-02-18 17:05:28 +01:00
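A common way to guard against a non-running systemd (an assumption: the `sd_booted()` convention of checking /run/systemd/system, not necessarily what this commit does) looks like:

```python
from pathlib import Path

def systemd_is_running() -> bool:
    """During `docker build` there is no systemd as PID 1, so facts such
    as pyinfra's SystemdEnabled would fail. Per the sd_booted()
    convention, systemd creates /run/systemd/system when it is PID 1."""
    return Path("/run/systemd/system").is_dir()
```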
j4n
c68df8551c docker: remove echobot parts that were lingering in the feature branch 2026-02-18 17:05:28 +01:00
Keonik1
23ddd087ad cmdeploy: Add config parameters change_kernel_settings and fs_inotify_max_user_instances_and_watchers 2026-02-18 17:05:28 +01:00
missytake
4278799f51 cmdeploy: add config (, ) 2026-02-18 17:05:28 +01:00
missytake
ec26ac5dbf docker: use --network=host so chatmail-turn can use any port 2026-02-18 17:05:28 +01:00
missytake
ee4648967e docker: open ports for TURN + STUN 2026-02-18 17:05:28 +01:00
missytake
92c8b83a5e docker: move all configuration to example.env 2026-02-18 17:05:28 +01:00
missytake
c33b5ade30 doc: fix linebreak 2026-02-18 17:05:28 +01:00
missytake
09c0af2c99 docker: disable port check if docker is running. fix #694 2026-02-18 17:05:28 +01:00
missytake
8d76b28a59 Suggestions from @Keonik1
Co-authored-by: Keonik <57857901+Keonik1@users.noreply.github.com>
2026-02-18 17:05:28 +01:00
missytake
ed9c7631bc docker: enable DNS checks before cmdeploy run again 2026-02-18 17:05:28 +01:00
Keonik1
9c0a3a1718 fix unlink if default nginx conf does not exist
- https://github.com/chatmail/relay/pull/614#discussion_r2297828830
2026-02-18 17:05:28 +01:00
Keonik1
bb590bb5ae Fix issue with acmetool
- https://github.com/chatmail/relay/pull/614#discussion_r2279630626
2026-02-18 17:05:28 +01:00
Keonik1
e1c0bffa52 Delete ssh connection from docker installation
- https://github.com/chatmail/relay/pull/614#discussion_r2269986372
- https://github.com/chatmail/relay/pull/614#discussion_r2269991175
- https://github.com/chatmail/relay/pull/614#discussion_r2269995037
- https://github.com/chatmail/relay/pull/614#discussion_r2270004922
2026-02-18 17:05:28 +01:00
Keonik1
e272bb9069 fix docs - nginx "restart" to "reload"
https://github.com/chatmail/relay/pull/614#discussion_r2269896158
2026-02-18 17:05:27 +01:00
Keonik1
87bd0323c2 Fix bug with attaching certs 2026-02-18 17:05:27 +01:00
Keonik1
d2f169af0d pass values to MAIL_DOMAIN and ACME_EMAIL from vars for docker-compose-default
https://github.com/chatmail/relay/pull/614#discussion_r2279591922
2026-02-18 17:05:27 +01:00
Keonik1
0603be8cff change "restart nginx" to "reload nginx"
https://github.com/chatmail/relay/pull/614#discussion_r2269896158
2026-02-18 17:05:27 +01:00
Keonik1
5b66fb9ade add RECREATE_VENV var
https://github.com/chatmail/relay/pull/614#discussion_r2279742769
2026-02-18 17:05:27 +01:00
Keonik1
7f151b368b add 465 port
https://github.com/chatmail/relay/pull/614#discussion_r2279707059
2026-02-18 17:05:27 +01:00
Keonik1
59362b4cf9 add port 80 to docker-compose-default
https://github.com/chatmail/relay/pull/614#discussion_r2279656441
2026-02-18 17:05:27 +01:00
Keonik1
f8af0e2c33 rename dockerfile
https://github.com/chatmail/relay/pull/614#discussion_r2270031966
2026-02-18 17:05:27 +01:00
Keonik1
beef0ecb19 Add installation via docker compose (MVP 1) 2026-02-18 17:05:27 +01:00
Mark Felder
36eb63faa1 feat: Strip Received headers before delivery 2026-02-17 21:16:11 +01:00
Jagoda Estera Ślązak
91df11015e chore(deps): upgrade to filtermail v0.3 (#850)
## 0.3.0 - 2026-02-14

### Features

- Support legacy, pre-OpenPGP packet format

### Miscellaneous Tasks

- *(dist)* Switch to musl targets

### Refactor

- Remove unnecessary Arc
- Use a custom, minimal SMTP client instead of lettre

Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-02-14 18:02:05 +01:00
link2xt
d4f8a29243 docs: fix link to Maddy and update madmail URL 2026-02-13 09:49:29 +00:00
missytake
0144fc3ea8 postfix: only look for square brackets; they are only allowed for address literals 2026-02-12 10:45:15 +01:00
missytake
e7ce6679b9 postfix: IPv6 literals have a prefix 2026-02-12 10:45:15 +01:00
missytake
d1adf52f89 postfix: also accept self-signed for IPv6-only 2026-02-12 10:45:15 +01:00
missytake
56d0e2ca27 postfix: be more exact with nauta.cu 2026-02-12 10:45:15 +01:00
missytake
2613558db6 postfix uses POSIX EREs, not PCRE, so some stuff doesn't work 2026-02-12 10:45:15 +01:00
missytake
6843fcb1a0 postfix: fix tls policy regexp map 2026-02-12 10:45:15 +01:00
missytake
ff54ad88d8 postfix: use regexp to match IPv4 addresses 2026-02-12 10:45:15 +01:00
missytake
cce2b27ae7 postfix: accept self-signed certificates for IP-only relays 2026-02-12 10:45:15 +01:00
j4n
87022e3681 fix(cmdeploy): check if dns_check_disabled before trying to warn about LE
If --skip-dns-check is used and retcode != 0, remote_data is undefined.
2026-02-11 12:13:24 +01:00
j4n
06560dd071 feat(postfix): bind to mail_domain's A/AAAA addresses for outbound mail
Carry forward A/AAAA address from the DNS check to the postfix deploy
stage and set accordingly in main.cf.
2026-02-11 12:13:24 +01:00
j4n
1b0337a5f7 fix(cmdeploy): port check: check addresses, fix single services
Ensure that the interface for mtail_address is available and fix a bug
in port checking where single services were always passing regardless of
the specified service name.
2026-02-11 09:36:04 +01:00
373[Ø]™
dfcaf415b1 Merge pull request #834 from chatmail/373/fix-dns-resolver-injection
fix: remediates issue with improper concat on resolver injection
2026-01-30 23:36:46 +00:00
ccclxxiii
c0718325ef fix: simplify resolver fix 2026-01-30 22:17:53 +00:00
ccclxxiii
7d72b0e592 fix: [wip] fix concat issue which causes dns failure 2026-01-30 21:10:19 +00:00
373[Ø]™
8f1e23d98e Merge pull request #832 from chatmail/373/respect-ipv4-ipv6-boolean-config
remediates ipv6 boolean not being respected during operations
2026-01-30 17:53:36 +00:00
ccclxxiii
56aaf2649b chore: fixes bug in dovecot template 2026-01-30 15:52:32 +00:00
ccclxxiii
2660b4d24c feat: updates postfix for ipv4/v6 2026-01-30 15:27:02 +00:00
ccclxxiii
ea60ecfb57 feat: updates deployers for ipv4/v6 bool 2026-01-30 15:26:45 +00:00
ccclxxiii
2a3a224cc2 feat: adds template for unbound v4/v6 2026-01-30 15:24:26 +00:00
Jagoda Ślązak
e42139e97b chore(deps): upgrade to filtermail v0.2
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-01-28 20:46:02 +00:00
Jagoda Estera Ślązak
65b660c413 docs: update information about filtermail (#824)
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-01-27 13:20:09 +01:00
link2xt
dd2beb226a test(test_exceed_rate_limit): print timestamps when sending messages 2026-01-26 14:25:06 +00:00
link2xt
9c7508cc33 test: fix flaky test_exceed_rate_limit
filtermail's rate limiter uses the leaky bucket algorithm (GCRA).
Exceeding the limit requires sending at least max_user_send_per_minute
messages to exhaust the allowed burst, and then sending messages faster
than the leak rate. As we don't know how fast the network between the
server and the test runner is, try to send 3 times
max_user_send_per_minute messages to ensure the test does not fail
randomly.
2026-01-26 12:07:10 +00:00
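The GCRA burst-then-leak behaviour the test works around can be sketched with a minimal model (a simplification, not filtermail's actual Rust implementation):

```python
class GCRA:
    """Generic Cell Rate Algorithm: admits a burst of `burst` messages,
    then one message per `interval` seconds (the leak rate)."""

    def __init__(self, interval: float, burst: int):
        self.interval = interval
        self.burst = burst
        self.tat = 0.0  # theoretical arrival time of the next message

    def allow(self, now: float) -> bool:
        tat = max(self.tat, now)
        # Reject when the sender is more than a full burst ahead.
        if tat - now > self.interval * (self.burst - 1):
            return False
        self.tat = tat + self.interval
        return True
```

With `burst` set to max_user_send_per_minute, a sender must first drain the burst and then outpace the leak rate to be rejected, which is why the test sends 3x the limit.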
Jagoda Estera Ślązak
ab3492d9a1 feat(filtermail): Replace filtermail with rust reimplementation (#808)
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-01-23 16:31:45 +01:00
Jagoda Estera Ślązak
032faf0a94 feat(config): Set default internal SMTP ports in Config (#819)
Signed-off-by: Jagoda Ślązak <jslazak@jslazak.com>
2026-01-23 09:34:16 +01:00
link2xt
c45fe03652 fix(mtail): separate metrics for incoming and outgoing messages 2026-01-22 14:45:33 +00:00
feld
08bf4c234b Merge pull request #815 from chatmail/dovecot_lmtp_header
Dovecot: disable appending the Received header
2026-01-21 12:18:54 -08:00
j4n
2d0ccdb4a3 cmdeploy/{postfix,dovecot}/deployer.py: check config before restarting
postfix: also fail on warnings
2026-01-21 16:33:38 +01:00
j4n
3abba6f2fa dovecot.conf: fix the stats syntax
The ! character in != is an invalid token in Dovecot's unified filter
language (2.3.12+). The parser expected a comparison operator (=, >, <)
and choked on !.
2026-01-21 16:33:38 +01:00
missytake
f9aaeb0f42 ci: enable mtail for CI
github deployments: be lenient on the whitespace in sed replace of
mtail_address
2026-01-21 16:33:38 +01:00
missytake
e0c44bf04f Reapply "cmdeploy/dovecot/dovecot.conf.j2: tweak idle/hibernate metrics"
This reverts commit 0aa0324c81.
2026-01-21 16:33:38 +01:00
Mark Felder
8ff53d12cb Dovecot: disable appending the Received header
This suppresses a Received header that is only added during LMTP
processing and includes a timestamp.
2026-01-20 15:16:33 -08:00
missytake
0aa0324c81 Revert "cmdeploy/dovecot/dovecot.conf.j2: tweak idle/hibernate metrics"
This reverts commit bfcfc9b090.
2026-01-20 19:11:24 +01:00
j4n
bfcfc9b090 cmdeploy/dovecot/dovecot.conf.j2: tweak idle/hibernate metrics
Tune stats collection in tandem with the dashboard.
2026-01-20 17:38:04 +01:00
j4n
e101c36ab4 fix(cmdeploy): commit typo 2026-01-20 17:38:04 +01:00
j4n
be7aa21039 feat(dovecot): add config flag to export statistics (#806)
This adds exporting of some dovecot event metrics to help debug slow IMAP login and hibernation. For now, it re-uses the mtail_address config flag and configures the dovecot exporter port to be 3904.
2026-01-16 11:35:19 +01:00
adbenitez
4906b82e44 add --website-only option to run subcommand 2026-01-15 16:58:06 +01:00
missytake
5d49b4c0fd postfix: also strip Authentication-Results header 2026-01-15 15:41:21 +01:00
missytake
56c8f9faae tests: add test for stripping DKIM + Authentication-Results headers 2026-01-15 15:41:21 +01:00
Mark Felder
203a7da3f4 Strip DKIM-Signature header before LMTP
Currently we strip the DKIM-Signature header in the OpenDKIM final.lua
script after validation of the signature. We sign all messages upon
submission, but we do not verify messages which are from a local account
and delivered to another local account.

This corrects the problem and ensures that the plaintext headers of a
local to local delivery are sanitized the same as a message received
from another server.

The functionality in final.lua to strip the DKIM-Signature header can
now be retired.
2026-01-15 15:41:21 +01:00
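The sanitization described above — stripping DKIM-Signature (and, per the related commits, Authentication-Results) before delivery — can be sketched with the stdlib (the real stripping happens in the postfix/OpenDKIM pipeline, not in Python):

```python
from email import message_from_string

STRIP_HEADERS = ("DKIM-Signature", "Authentication-Results")

def sanitize_for_delivery(raw: str) -> str:
    """Strip signature/authentication headers so a local-to-local
    delivery is sanitized the same as mail received from another
    server."""
    msg = message_from_string(raw)
    for name in STRIP_HEADERS:
        del msg[name]  # removes all occurrences, case-insensitively
    return msg.as_string()
```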
missytake
a1667ca54d Update cmdeploy/src/cmdeploy/postfix/deployer.py 2026-01-15 12:51:29 +01:00
holger krekel
6401bbb32c fix: properly make sure that postfix gets restarted on failure 2026-01-15 12:51:29 +01:00
Mark Felder
325cc7a7b4 expire.py: use absolute path to maildirsize 2026-01-15 12:50:38 +01:00
link2xt
c2acbad802 docs: pin Dovecot documentation URLs to version 2.3
At least some old URLs are 404 already.
2026-01-07 20:08:45 +00:00
holger krekel
0e7ab96dc8 docs: use "build machine" and "deployment server" consistently in getting-started (#797) 2026-01-04 15:05:09 +01:00
373
d1f9523836 docs: adds instructions for migrating control machines (#795)
* docs: update index reference

* docs: adds control machine migration instructions

* docs: rename index ref

* docs: remove maddy-chatmail (404)

* docs: consistent underlining in header text

* docs: remove dedicated page reference

* docs: remove dedicated page for control machine migration

* docs: condense deployment machine migration into getting started per feedback

* docs: correct link to madmail

* docs: update verbiage based on feedback
2026-01-04 14:21:12 +01:00
373
bcf2fdb5d0 docs: consistent naming schema in documentation 2025-12-28 23:57:39 +01:00
link2xt
77a6f49c9b ci: remove jsok/serialize-workflow-action dependency
Deployments to test servers will not be cancelled anymore,
but it is not clear if we even want it.
This setup is much simpler because it only depends
on GitHub Actions features and does not allocate
a runner just to sleep there and wait in the queue.
2025-12-27 14:36:39 +00:00
holger krekel
99630e4d1b docs: streamline migration guide wording, provide titled steps (#789)
* docs: update migration guide after nine migration

* use $OLD_IP4 and $NEW_IP4 to make docs more readable. Also streamline "set TTL to 5 minute" phrasing a bit.

* fix tar commands

* refactor: streamline and refactor the migration guide to provide more clarity and focus

* recommend a "higher TTL" concrete value

Co-authored-by: missytake <missytake@systemli.org>

* scriptify another location

---------

Co-authored-by: missytake <missytake@systemli.org>
2025-12-27 13:10:56 +01:00
373
2f8199a7c6 test: update config test for proper assertion 2025-12-26 20:46:03 +01:00
373
4eeead2826 feat: increases default max mailbox size
this changeset increases the default max mailbox (quota) size, per a conversation in our development channel
2025-12-26 20:46:03 +01:00
link2xt
0d890274fd feat: use daemon_name for OpenDKIM sign-verify decision instead of IP
On FreeBSD 127.0.0.2 is not assigned to any interface by default,
so 127.0.0.2 source address hack cannot be used to make OpenDKIM
verify the signature instead of signing.

This change sets InternalHosts to `-` so no IP addresses
make OpenDKIM sign the message. Instead of IP address,
OpenDKIM in the outgoing pipeline is explicitly told
to sign messages by setting `{daemon_name}` macro to `ORIGINATING`.
2025-12-19 17:09:33 +00:00
link2xt
7191329a9f chore(release): prepare for 1.9.0 2025-12-18 23:49:48 +00:00
link2xt
1ae4c8451a ci: run tests against ci-chatmail.testrun.org instead of nine.testrun.org 2025-12-18 23:06:05 +00:00
holger krekel
f04a624e19 fix: use absolute path instead of relative path, and streamline some code parts according to comments at https://github.com/chatmail/relay/pull/785 2025-12-18 23:31:39 +01:00
holger krekel
24e3f33acd fix: expire messages also from DeltaChat IMAP subfolders 2025-12-18 23:04:50 +01:00
link2xt
610843a44a docs: add RELEASE.md and CONTRIBUTING.md 2025-12-18 09:21:19 +01:00
link2xt
966754a346 chore: setup git-cliff
I am running git-cliff 2.11.0.
Ran `git-cliff --init` to generate `cliff.toml`.
Removed emojis, replaced `doc` with `docs` to match chatmail core
convention.
2025-12-18 09:21:19 +01:00
link2xt
87153667ed chore: update the heading in the CHANGELOG.md
I have checked that nobody added any entries since 1.8.0 was released.
2025-12-17 21:18:02 +00:00
holger krekel
abe0cb5d08 address cliff's comments about dovecot/postfix 2025-12-17 16:21:40 +01:00
missytake
8c8c37c822 postfix: restart automatically on failure 2025-12-17 16:21:40 +01:00
missytake
e7bed4d2a1 dovecot: restart automatically on failure 2025-12-17 16:21:40 +01:00
j4n
df21076e9b acmetool: use a fixed name and reconcile instead of want 2025-12-17 11:57:41 +01:00
missytake
70da217442 opendkim: only display last sigerror 2025-12-17 10:39:50 +01:00
missytake
40fd62c562 opendkim: report DKIM error code in SMTP response 2025-12-17 10:39:50 +01:00
cliffmccarthy
d76b33def1 feat: Remove echo from passthrough recipients 2025-12-17 10:35:47 +01:00
cliffmccarthy
bab3de9768 feat: Remove echobot user from deployment 2025-12-17 10:35:47 +01:00
cliffmccarthy
49c66116bf feat: Remove echobot special cases 2025-12-17 10:35:47 +01:00
373
9bf99cc8a9 removes development notice 2025-12-16 15:06:45 +01:00
Mark Felder
1188aed061 Related: Add the Chatmail Cookbook project 2025-12-14 20:32:08 +01:00
Mark Felder
e15b8ebf11 docs README update
There is no sphinx-build to pip install
2025-12-14 20:31:19 +01:00
missytake
c84ddf69e8 add missing changelog entries 2025-12-12 14:18:42 +01:00
missytake
96fc3d9ff6 tests: don't let test_status_cmd test server state 2025-12-12 14:00:53 +01:00
missytake
4b5e8feb96 ci: run test_status_cmd at the end to avoid flakiness 2025-12-12 14:00:53 +01:00
Rodrigo Camacho
c98853570b updated location of the documentation for custom webpage location 2025-12-11 22:50:02 +01:00
Simon Laux
bad356503e Merge pull request #745 from chatmail/simon/i744
fix: Handle case where user followed the tutorial and set the CNAME record for mta-sts, but no TXT record for it yet.
2025-12-11 22:41:14 +01:00
adb
dba48e88d1 Merge pull request #760 from chatmail/adb/issue-734
add imap_compress option to chatmail.ini
2025-12-11 08:33:41 +01:00
adbenitez
3ae8834cbe update changelog 2025-12-11 08:33:24 +01:00
adb
81391f4066 Update cmdeploy/src/cmdeploy/dovecot/dovecot.conf.j2
Co-authored-by: missytake <missytake@systemli.org>
2025-12-10 20:43:03 +01:00
adbenitez
55cfd00505 add imap_compress option to chatmail.ini 2025-12-09 09:32:53 +01:00
holger krekel
b000213c68 remove echobot from relay deployment and make sure it's un-installed during "cmdeploy run" 2025-12-07 20:14:35 +01:00
link2xt
51d16b6bb8 Add hpk42 SSH key to staging server for debugging 2025-12-07 20:13:38 +01:00
link2xt
2beba8c455 ci: add deployment environments for all deployment workflows
Code posting the link to comments is removed
as deployment URLs are directly visible in the UI.
2025-12-07 15:21:44 +01:00
link2xt
33c67d22fa Add execnet dependency 2025-12-07 15:21:44 +01:00
j4n
166bf68915 Remove DKIM-Signature from incoming mail after checking (#747)
The original https://github.com/chatmail/relay/pull/533 attempted to remove the header through postfix, but that is too early. Instead, remove the headers in the OpenDKIM `final.lua` script after the validation.
2025-12-04 12:23:27 +01:00
Treefit
abb70a6b14 Handle case where user followed the tutorial and set the CNAME record
for mta-sts, but no TXT record for it yet.
2025-11-28 09:34:44 +01:00
Maikel Frias Mosquea
96108bbaba fix: cmdeploy webdev now works as intended
Before, cmdeploy webdev just kept regenerating the files non-stop;
with this change it truly stops unless there's an actual change.
2025-11-25 22:26:47 +01:00
Mark Felder
8f68672e31 FreeBSD/pf example: fix small inconsistency
harmless, but better to be consistent
2025-11-21 10:02:44 +01:00
Mark Felder
9e6e3af534 Proxy example for FreeBSD/pf 2025-11-20 17:03:31 +01:00
missytake
fa5a6a64b3 opendkim: use opendkim as selector as before 2025-11-16 19:53:54 +01:00
holger krekel
6b7c002e24 use non-underscore naming for basedeploy helpers 2025-11-16 19:53:54 +01:00
holger krekel
4b2f98788d remove unneeded __init__ files 2025-11-16 19:53:54 +01:00
holger krekel
13faa42abd shift mtail deployer to subdir 2025-11-16 19:53:54 +01:00
holger krekel
7c12136991 move out nginx deployer 2025-11-16 19:53:54 +01:00
holger krekel
3637bba5dc move dovecot deployer out to dovecot/ directory 2025-11-16 19:53:54 +01:00
holger krekel
e2b157bd96 move postfix deployer to postfix directory 2025-11-16 19:53:54 +01:00
holger krekel
83abb3a3e1 factor out opendkim deployer 2025-11-16 19:53:54 +01:00
link2xt
2e3e3101b6 Add robots.txt to exclude all web crawlers 2025-11-16 10:31:14 +00:00
missytake
213d68ed02 acmetool: accept new Let's Encrypt Terms of Services (#729) 2025-11-16 09:51:39 +01:00
link2xt
68cc6676ef Update changelog 2025-11-15 10:51:04 +00:00
link2xt
14ca95d25a fix(postfix): set smtpd_tls_mandatory_protocols for port 25
smtp_tls_mandatory_protocols does not affect port 25
because we require STARTTLS on port 25 since commit
8d7e1dad0e

We don't have any smtpd ports with opportunistic TLS.
Submission ports require TLSv1.3 and starting with this commit
MX port will require TLSv1.2 instead of TLSv1.

I have not managed to connect using TLSv1.1
even without this fix to reproduce the problem,
but I have checked that setting
`-o smtpd_tls_mandatory_protocols=>=TLSv1.3`
no longer allows connecting with TLSv1.2 using
`openssl s_client -connect example.org:25 -starttls smtp -tls1_2`.

`smtpd_tls_protocols` setting is removed
because it does not affect anything except the internal ports
and its `git blame` points to the wrong commit.
2025-11-15 10:51:04 +00:00
link2xt
3524b055db fix(postfix): set smtp_tls_mandatory_protocols to require TLSv1.2 for outgoing connections
According to
<https://www.postfix.org/postconf.5.html#smtp_tls_security_level>
for outgoing connections with smtp_tls_security_level
`encrypt` and higher (such as `verify` that we currently use)
the setting `smtp_tls_mandatory_protocols`
is used instead of `smtp_tls_protocols`.
According to `postconf -d`
(and `postconf` because the default is not changed)
current setting value is `smtp_tls_mandatory_protocols = >=TLSv1`.
But we only want to connect outside with TLS 1.2 and TLS 1.3.

`smtp_tls_protocols` which was already set to `>= TLSv1.2`
in commit 0155f32df6
only affected outgoing connections with the `may` level
exception set for nauta.cu domain via `smtp_tls_policy_maps`
which does not support STARTTLS at all.
2025-11-15 10:51:04 +00:00
holger krekel
7b16f1330d Update doc/source/overview.rst
Co-authored-by: missytake <missytake@systemli.org>
2025-11-13 21:03:54 +01:00
holger krekel
7a907b138c fix heading 2025-11-13 21:03:54 +01:00
holger krekel
0ff0159a89 update mermaid overview graph 2025-11-13 21:03:54 +01:00
holger krekel
81d2bf89c7 move all cleanup of historic artifacts into LegacyRemoveDeployer 2025-11-13 21:03:30 +01:00
missytake
514a911529 docs: document which services are involved in delivering an internal msg (#678)
* doc: add diagram for internal message

* doc: apostrophe for clarity
2025-11-13 21:02:19 +01:00
holger krekel
fc7240a1ad simplify importing of resource files (avoid importlib.resources.files boilerplate) 2025-11-13 18:59:03 +01:00
holger krekel
bdcccd858c add a comment about absolute imports 2025-11-13 18:59:03 +01:00
holger krekel
af30d2b55d fix import to work with "pyinfra" which needs a file location and thus does not start "run.py" as part of the package 2025-11-13 18:59:03 +01:00
holger krekel
5664b97db4 fixing path resolution for "fmt" command 2025-11-13 18:59:03 +01:00
holger krekel
81364bd523 fix an import 2025-11-13 18:59:03 +01:00
holger krekel
3c3e54fceb apply results of "cmdeploy fmt" 2025-11-13 18:59:03 +01:00
holger krekel
ae96b752a3 rename "deployer.py" to "basedeploy.py" 2025-11-13 18:59:03 +01:00
holger krekel
33b69fac95 move the monster __init__.py to a deployers.py file 2025-11-13 18:59:03 +01:00
holger krekel
165dc10f59 avoid "deploy.py" next to "deployer.py" 2025-11-13 18:59:03 +01:00
cliffmccarthy
3df3c031d4 Organize cmdeploy into install, configure, and activate stages (#695)
* refactor: Move all imports to top of cmdeploy/__init__.py

* refactor: Move addition of 9.9.9.9 resolver earlier

- Moved the "Add 9.9.9.9 to resolv.conf" step earlier, before the
  creation of users or updates to any config files.  This should not
  affect any of those operations.  Moving this step earlier makes it
  easier to accommodate the restructuring of the deployment process
  into separate components with separate stages for install,
  configure, and activate.

- Added a Deployer class that defines the base for objects that will
  handle installation of individual components, with install,
  configure, and activate stages.  
- The CMDEPLOY_STAGES environment variable is used to determine what
  stages to run.  If this is not defined, all stages run as usual.
- Added import of Deployer to cmdeploy/__init__.py.  This is not yet
  used, but the next series of commits will use it.
- In deploy_chatmail(), define an empty list of deployers, and call
  the create_groups() and create_users() methods for the items in the
  list.  This list will get filled with Deployer objects in the next
  series of commits.

* refactor: Add DovecotDeployer

* refactor: Add PostfixDeployer

- Removed now-unused 'debug' variable from deploy_chatmail().

* refactor: Add NginxDeployer

- Use policy-rc.d during nginx install.  This is needed to keep nginx
  from starting up and interfering with acmetool.  For more information see:
    - https://serverfault.com/questions/861583/how-to-stop-nginx-from-being-automatically-started-on-install
    - https://major.io/p/install-debian-packages-without-starting-daemons/
    - https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt

* refactor: Add OpendkimDeployer

- Note that this moves the installation of the opendkim package
  earlier in the deployment sequence.  Previously, it was installed
  during the _configure_opendkim() routine.

* refactor: Add UnboundDeployer

* refactor: Add IrohDeployer

- This splits the existing deploy_iroh_relay() routine into methods
  for the install, configure, and activate stages.

* refactor: Add JournaldDeployer

* refactor: Add AcmetoolDeployer

- This splits the existing deploy_acmetool() routine into methods for
  the install, configure, and activate stages.

* refactor: Add MtailDeployer

- This splits the existing deploy_mtail() routine into methods for the
  install, configure, and activate stages.

* refactor: Add MtastsDeployer

- This splits the existing _uninstall_mta_sts_daemon() routine into
  methods for the configure and activate stages.

* refactor: Add RspamdDeployer

- This replaces the existing _remove_rspamd() routine with a method
  for the install stage.

* refactor: Split _install_remote_venv_with_chatmaild into stages

- Split _install_remote_venv_with_chatmaild() into three routines, to
  handle the install, configure, and activate stages.
- This moves the upload of chatmail.ini later in the deployment
  process, because it is a configuration file specific to the
  instance, not software installation that would be uniform across all
  deployments.

* refactor: Add ChatmailVenvDeployer

* refactor: Add ChatmailDeployer

- This moves the installation of cron earlier in the deployment sequence.

* refactor: Add FcgiwrapDeployer

* refactor: Add EchobotDeployer

- This class is a special case because it has a dependency on the
  Postfix and Dovecot deployers.  When deciding whether to restart the
  echobot service, it needs to know whether the Postfix and Dovecot
  deployers restarted their services.  To support this dependency, the
  PostfixDeployer and DovecotDeployer objects are passed to the
  EchobotDeployer object, so it can check their was_restarted
  attributes.

* refactor: Add WebsiteDeployer

- This adds a step to create /var/www in the install stage, because
  the directory needs to exist for the rsync in the configure stage to
  work.

* refactor: Add TurnDeployer

- This splits the existing deploy_turn_server() routine into methods
  for the install, configure, and activate stages.

* refactor: Move curl installation from IrohDeployer to ChatmailDeployer

- The 'curl' program is used in TurnDeployer and IrohDeployer, so it
  makes more sense to install it at the beginning in ChatmailDeployer,
  rather than have each thing that uses it install it separately.

* refactor: Reorder deploy_chatmail()

- The previous commits that added Deployer classes mostly kept
  deployment operations in the same order that they were in before.
  To organize the process into separate stages for install, configure,
  and activate, we need to reorder the method calls.  This is the
  commit that does that, and thus this is the commit that has the
  largest effect on the order of operations.
- The calls for the deployer objects are all reordered here so that
  the methods are called in the same sequence for each stage.  This
  will allow us to collect the calls into loops in the next commit.
  This commit provides a way to see a diff showing exactly how the
  sequence changed.
- The sequence of deployers was largely based on preserving the order
  of the "activate" stage, as this seems like the place order might be
  the most likely to matter.  Installation of packages and
  configuration of files should generally be able to run in any order.
  (ChatmailDeployer handles updating the apt data, and therefore needs
  to be first, however.)

* refactor: Call install, configure, and activate methods in loops

- Revised deploy_chatmail() to use all_deployers to call the
  install(), configure(), and activate() methods on all the deployers,
  rather than listing them explicitly in the code.

* docs: Add architectural information about deployer classes

- Updated overview.rst to describe the Deployer class hierarchy and
  the motivations behind it.

* fix: Block unbound from starting up on install

- On an IPv4-only system, if unbound is started but not configured, it
  causes subsequent steps to fail to resolve hosts.
- Revised UnboundDeployer.install_impl() to use policy-rc.d to prevent
  the service from starting when installed.  This is the same
  mechanism used to keep nginx from starting on install.

* feat: Remove obs-home-deltachat.gpg

- We don't install Dovecot from OBS anymore.
- Removed files.put() that creates
  /etc/apt/keyrings/obs-home-deltachat.gpg; replaced this with a
  files.file() that sets present=False to remove the file from any
  existing installations where it already has been installed.
- Removed now-unused obs-home-deltachat.gpg file.
- Clarified description of sources.list operation.
- Suggested in review by missytake and hpk42.

* feat: Reorder deployers

- Moved fcgiwrap before nginx.
- Exchanged order of turn and unbound.
- Moved journald as early as possible.
- Suggested in review by missytake.

* chore: Add CHANGELOG.md entry for cmdeploy refactor

* refactor: Move unit list to ChatmailVenvDeployer

- Split _configure_remote_venv_with_chatmaild() into two functions.
  _configure_remote_venv_with_chatmaild() handles details specific to
  the "venv", while the new _configure_remote_units() is a more
  general function that is applicable to several services.
- Renamed _activate_remote_venv_with_chatmaild() to
  _activate_remote_units() because it doesn't have anything
  venv-specific.
- Removed the list of units from the helper functions (where it appeared
  twice); moved it to ChatmailVenvDeployer, where it is passed as an
  argument to _configure_remote_units() and _activate_remote_units().

* refactor: Move turnserver out of ChatmailVenvDeployer

- Revised TurnDeployer to use _configure_remote_units() and
  _activate_remote_units().  This class no longer uses need_restart
  and daemon_reload attributes to keep track of state.  The activate
  stage of ChatmailVenvDeployer was unconditionally restarting the
  service every time, so we don't need to keep track of extra state in
  an attempt to avoid restarting it; we can just handle the
  unconditional restart in TurnDeployer.activate_impl().
- Removed turnserver from the unit list in ChatmailVenvDeployer.

* refactor: Move echobot out of ChatmailVenvDeployer

- Revised EchobotDeployer to use _configure_remote_units() and
  _activate_remote_units().  The 'activate' stage of
  ChatmailVenvDeployer was unconditionally restarting the service
  every time, so EchobotDeployer no longer needs to depend on the
  was_restarted attributes of the postfix and dovecot deployers in an
  attempt to avoid restarting it; we can just handle the unconditional
  restart in EchobotDeployer.activate_impl().
- Removed echobot from the unit list in ChatmailVenvDeployer.
- Removed now-unused was_restarted attribute from PostfixDeployer and
  DovecotDeployer.

* refactor: Move doveauth out of ChatmailVenvDeployer

- Revised DovecotDeployer to use _configure_remote_units() and
  _activate_remote_units() to deploy doveauth.  This keeps the
  Dovecot-related services in a single deployer class, leaving only
  services that are part of the chatmail project in
  ChatmailVenvDeployer.
- Removed doveauth from the unit list in ChatmailVenvDeployer.

* strike unnecessary deployer variables

* remove indirection with "stages"

* simplify required_users configuration (a method is not needed for now)

* further reduce indirections for staged install

* now that the Deployer class is clean and not mixed with what is in Deployment, use the simpler "install", "configure" and "activate" names instead of *_impl

* remove static method and make Deployer instances not set any default state

* strike unnecessary *,** argument flexibility

* use a Deployer for setting the remote git hash

* refactor: Revise AcmetoolDeployer for new Deployer interface

* style: Formatting revisions

* refactor: Pass all constructor arguments by position

- The constructor arguments do not have default values; they are all
  required.  Revised deploy_chatmail() to pass them by position rather
  than name, so that the caller is not coupled to the names of the
  arguments inside the method definition.

* refactor: Simplify interface to Deployer.install()

- In the current code, the only class using the interface that sets
  need_restart() from the return value of the install() method was
  IrohDeployer.  That interface was created when the install method
  was a static method, but now it is an instance method with access to
  'self'.  Therefore, we don't need to pass anything up to the caller
  to have them set the attribute, we can just set it.
- Revised IrohDeployer.install() to set self.need_restart directly,
  rather than returning a value.
- Revised Deployment.install() to ignore the return value of the
  deployers' install() methods.
- need_restart is still present in the base Deployer class to ensure
  that it is always defined, even when classes do not set it in a
  constructor.  Apart from this initialization for convenience, there
  is no longer any specific exposure of need_restart in the interface
  of the Deployer class.
- In general, install() methods should use 'self' as little as
  possible, preferably not at all.  In particular, install() methods
  should never depend on "config" data, such as the config dictionary
  in self.config or specific values like self.mail_domain.  This
  ensures that these methods can be used to perform generic
  installation operations that are applicable across multiple relay
  deployments, and therefore can be called in the process of building
  a general-purpose container image.

* docs: Update cmdeploy architecture details

- Revised cmdeploy documentation in doc/source/overview.rst to reflect
  the recent revisions to the Deployer interface.

* docs: Remove section about use of objects

---------

Co-authored-by: holger krekel <holger@merlinux.eu>
2025-11-13 16:51:51 +01:00
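The staged design this PR describes (a Deployer base class with install, configure, and activate stages, run in loops, with CMDEPLOY_STAGES selecting stages) can be sketched in Python; the class and attribute names below follow the commit messages, but the sketch is illustrative, not the actual cmdeploy code:

```python
import os


class Deployer:
    """One component of the relay, deployed in three stages (sketch)."""

    def __init__(self, config):
        self.config = config
        # Always defined in the base class, even if a subclass never sets it.
        self.need_restart = False

    def install(self):
        """Install packages; should avoid depending on instance config."""

    def configure(self):
        """Upload instance-specific configuration files."""

    def activate(self):
        """Enable and (re)start services."""


class Deployment:
    """Calls each stage on all deployers in a loop, in a fixed order."""

    def __init__(self, deployers):
        self.deployers = deployers
        # If CMDEPLOY_STAGES is not defined, all stages run as usual.
        stages = os.environ.get("CMDEPLOY_STAGES", "install,configure,activate")
        self.stages = [s for s in stages.split(",") if s]

    def run(self):
        ran = []
        for stage in ("install", "configure", "activate"):
            if stage not in self.stages:
                continue
            for deployer in self.deployers:
                # Return values of stage methods are ignored; deployers
                # record state (e.g. need_restart) on themselves.
                getattr(deployer, stage)()
                ran.append((stage, type(deployer).__name__))
        return ran
```

Running all stages for each deployer in sequence is what lets package installation happen in any order while the activate stage preserves a meaningful ordering.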
missytake
5515dc4c4b cmdeploy: fix status cmd after sshexec rework (#723)
* cmdeploy: fix status cmd after sshexec rework

* tests: test cmdeploy status

* tests: move test to online tests

* tests: require chatmail_config for status test
2025-11-12 12:24:31 +01:00
holger krekel
50b986a265 Split README into sphinx doc structured sections (#711)
refactor README.rst and architecture file into sphinx doc project, automatically deploying on main merges and PRs.

* add FAQs from https://chatmail.at/relays landing page

* fix links, and streamline postfix/dovecot mentioning

* add linkcheck to CI, fix several links and streamline the DKIM section while at it

* some streamlining, rename to "overview"

* ci: upload documentation to chatmail.at/doc/relay

* ci: main should be uploaded when docs.yaml changes

* ci: fix typo

* Update .github/workflows/docs-preview.yaml

Co-authored-by: missytake <missytake@systemli.org>
2025-11-11 14:49:25 +01:00
missytake
f24bc99c6f config: xstore@testrun.org is deprecated (#722) 2025-11-11 11:46:35 +01:00
link2xt
a0ebb2bdbc ci: pin jsok/serialize-workflow-action 2025-11-08 21:03:48 +00:00
link2xt
132bdcb5e5 Update the changelog 2025-11-08 19:20:39 +00:00
link2xt
7d593841bb fix: change hook permissions from 744 to 755
There is no reason for it not to be executable by non-owners.
2025-11-08 19:20:39 +00:00
link2xt
83e7caeaf8 Replace acmetool cronjob with a timer 2025-11-08 19:20:39 +00:00
link2xt
1cff4a94f1 Setup acmetool hook into correct place 2025-11-08 19:20:39 +00:00
missytake
ded9dd470d www: add changelog 2025-11-06 16:19:02 +01:00
Alexander
b94ad729fd Update cmdeploy/src/cmdeploy/__init__.py
Co-authored-by: missytake <missytake@systemli.org>
2025-11-06 16:17:12 +01:00
Alexander Dietrich
b60267f37f Skip www_folder if merge conflict marker found 2025-11-06 16:17:12 +01:00
missytake
a0aa2912dd ci: fix test methods for deltachat 2.23.0 2025-11-06 12:33:59 +01:00
Serge Matveenko
76108c1c03 Test dig output with dns comments 2025-11-06 11:26:02 +01:00
Serge Matveenko
61b8dc4637 Improve dns responses parsing 2025-11-06 11:26:02 +01:00
Lars-Dominik Braun
d42f579291 turnserver: Strip newline from response. 2025-11-03 22:57:43 +00:00
Serge Matveenko
dd3cf4d449 Update dovecot-core deb sha256 sums 2025-10-30 11:23:19 +01:00
holger krekel
7361cc9350 fix changelog references 2025-10-29 13:33:25 +01:00
missytake
00f199816d unpublish mutual help group invite link 2025-10-28 16:12:07 +01:00
link2xt
8d7e1dad0e Require STARTTLS for incoming port 25 connections
We already require that outgoing connections
use STARTTLS so other servers need a valid TLS
certificate to accept messages from us.
It is then very unlikely that they cannot use TLS
to send messages to us.

Conversely, if they can only send messages to us without TLS,
their server likely does not have STARTTLS on its port 25,
and then we don't want to accept messages from them
because we will likely not be able to reply.
2025-10-28 01:44:14 +00:00
link2xt
c0da7bb3bf docs: chatmail-turn listens on 3478 UDP, not TCP port 2025-10-28 01:08:06 +00:00
holger krekel
863ded6480 try to limit index cache max size 2025-10-28 01:42:37 +01:00
missytake
d75321b355 doc: write down some basic infos on chatmail-turn (#693)
Co-authored-by: l <link2xt@testrun.org>
2025-10-27 09:00:07 +01:00
link2xt
9148b16d81 acmetool: use ECDSA keys instead of RSA 2025-10-25 08:00:31 +00:00
holger krekel
fa9aa5b015 guard expire/fsreport file iteration against vanishing, improve reporting
also activates actual deletion (after quite some dry test runs on nine)
2025-10-22 20:30:12 +02:00
link2xt
0155f32df6 Require TLS 1.2 for outgoing SMTP connections 2025-10-22 02:46:29 +00:00
holger krekel
9ddd5d8b2b Replace expiry "find" commands with a new chatmaild.expire python module + a reporting one 2025-10-21 20:50:46 +00:00
missytake
4cfe228a1f filtermail: further optimize check_armored_payload() 2025-10-21 00:57:27 +02:00
holger krekel
741a20450c Add a system test for running the filtermail module 2025-10-20 19:02:14 +00:00
adb
b7fadcd4be filtermail: improve check_armored_payload() (#679) 2025-10-20 09:55:53 +02:00
missytake
7db26f33d9 nginx: be more specific with the server name (#636) 2025-10-19 14:02:41 +02:00
link2xt
2b90f7db37 filtermail: run CPU-intensive handle_DATA in a thread pool executor
See
<https://docs.python.org/3/library/asyncio-eventloop.html#executing-code-in-thread-or-process-pools>
for the documentation.

This should avoid processing of large messages from hogging asyncio
thread and delaying async operations like accepting new connections.
2025-10-19 10:43:11 +00:00
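The pattern this commit references (offloading CPU-bound work from the asyncio event loop into a thread pool) looks roughly like the following; the handler and check names are illustrative, not the actual filtermail code:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def check_message(data: bytes) -> str:
    """Stand-in for CPU-intensive work, e.g. scanning a large message."""
    return "250 OK" if data else "550 empty message"


async def handle_DATA(loop, pool, data: bytes) -> str:
    # Run the blocking check in a worker thread so the event loop stays
    # free to accept new connections while large messages are processed.
    return await loop.run_in_executor(pool, check_message, data)


async def main() -> str:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as pool:
        return await handle_DATA(loop, pool, b"mail body")
```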
holger krekel
e37dd5153a remove logging and just print to sys.stderr 2025-10-18 19:50:13 +00:00
missytake
f21e4ff55b opendkim: increase DNSTimeout from 5 (default) to 60
fix #667
2025-10-17 11:27:18 +02:00
cliffmccarthy
21258a267a test: Handle Git errors in test_deployed_state()
- This is a counterpart to pull request #607.  Revised
  test_deployed_state() to perform the same error-handling on Git
  commands that cmdeploy does.  If 'git rev-parse' returns an error,
  the value "unknown" is used.  If 'git diff' returns an error, the
  null string is used.
- This fixes failures in environments where Git is not installed or
  where the .git subdirectory is not present (as long as the server
  was deployed in the same way).
2025-10-16 16:15:35 +02:00
missytake
e7ddf6dc32 cmdeploy: make --ssh-host expect '@docker' instead of 'docker' 2025-10-14 22:27:02 +02:00
missytake
e3c77a5b37 cmdeploy: introduce LocalExec object 2025-10-14 22:27:02 +02:00
missytake
8256080ad1 Revert "tests: first attempt to mock shell() call"
This reverts commit a0c632a7006a83c8b39cff86228296c32c5c5b9e.
2025-10-14 22:27:02 +02:00
missytake
248b225665 tests: first attempt to mock shell() call 2025-10-14 22:27:02 +02:00
missytake
79591adca4 cmdeploy: prepare for being able to run commands in docker containers 2025-10-14 22:27:02 +02:00
missytake
185757cf40 tests: disable failing stderr capturing in test_logged for now 2025-10-14 22:27:02 +02:00
missytake
87a3adec03 cmdeploy: allow to run SSH commands locally
fix #604
related to #629
pulled out of https://github.com/Keonik1/relay/pull/3
2025-10-14 22:27:02 +02:00
cliffmccarthy
4f5719f590 test: Add retries to test_rewrite_subject() (#670)
- test_rewrite_subject() is prone to failure when it checks for the
  delivered message, because fetch_all_messages() raises "ValueError:
  no messages in imap folder".  The check has the potential to happen
  before the server has had a chance to deliver the message to the
  user's inbox.
- Added a function try_n_times() that attempts to call a function the
  specified number of times, with a 1-second sleep between calls.  The
  call is retried until it doesn't raise an exception.  The last call
  is made without a 'try' block, so that the final exception passes
  through to the caller if it does not return.
- Wrapped call to fetch_all_messages() in try_n_times(), with 5
  attempts specified.  This should usually allow enough time for the
  message to get moved from the postfix queue to the user's inbox.
2025-10-14 21:18:15 +02:00
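A minimal version of the try_n_times() helper described in this commit might look like this (a sketch; the real helper lives in the cmdeploy tests):

```python
import time


def try_n_times(func, n=5, delay=1.0):
    """Call func until it stops raising, retrying up to n attempts total.

    The final attempt is made outside the try block, so if it raises,
    the exception passes through to the caller unchanged.
    """
    for _ in range(n - 1):
        try:
            return func()
        except Exception:
            time.sleep(delay)
    return func()  # last attempt: let any exception propagate
```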
cliffmccarthy
9787b63cbb test: Return None for success in test_timezone_env() (#671)
- test_timezone_env() is producing the warning,
  "PytestReturnNotNoneWarning: Test functions should return None, but
  src/cmdeploy/tests/online/test_1_basic.py::test_timezone_env
  returned <class 'bool'>".
- Revised test_timezone_env() to return None for success instead of
  True.
2025-10-14 21:17:56 +02:00
missytake
6f600fa329 config: add www_folder to default config (#634) 2025-10-14 21:17:08 +02:00
missytake
20b6e0c528 www: chown /var/www/html to www-data 2025-10-14 21:16:49 +02:00
missytake
262e98f0ba filtermail: allow Version comment in incoming PGP messages (#655)
fix #616

* filtermail: accept any Version comment in incoming messages
2025-10-14 19:15:13 +02:00
cliffmccarthy
d720b8107d Don't print echobot link when disabling mail
- On a fresh install, if cmdeploy is run the first time with the
  --disable-mail option, the echobot invite-link.txt file will not
  exist yet.
- Only print the echobot invite link if --disable-mail was not
  specified.  This fixes the fresh-install error case, and also makes
  sense when disabling mail in general, because the echo bot will not
  be available at that time.
2025-10-13 21:46:47 +02:00
link2xt
d7f50183ea feat: setup TURN server 2025-10-10 18:32:32 +00:00
missytake
248603ab0a cmdeploy: remove colors from cmdeploy init again, hard to test 2025-10-09 23:54:44 +02:00
missytake
123531f1eb cmdeploy: add --force to cmdeploy init for recreating chatmail.ini 2025-10-09 23:54:44 +02:00
Keonik1
1170adc1d4 cmdeploy: start and enable fcgiwrap 2025-10-08 13:11:02 +02:00
missytake
a6f7ff3652 ci: skip DNS checks during cmdeploy run 2025-10-08 13:07:24 +02:00
Keonik1
d39076f0d6 cmdeploy: cmdeploy run option to skip DNS checks 2025-10-08 13:07:24 +02:00
Keonik1
65c0bf13f2 cmdeploy: add acme_email config value 2025-10-08 13:06:48 +02:00
link2xt
0ed7c360a9 Update changelog 2025-10-05 02:37:50 +00:00
link2xt
af272545dd Restart iroh-relay if the binary is updated 2025-10-05 02:37:23 +00:00
link2xt
7725a73cf5 Ensure that downloaded iroh-relay matches expected SHA-256 sum
Previously we only used SHA-256 sum
to check if we need to update the binary.
2025-10-05 02:37:23 +00:00
link2xt
e65311c0df Update iroh-relay to 0.35.0 2025-10-05 02:37:23 +00:00
link2xt
d091b865c7 fix: ignore all RCPT TO: parameters
Stalwart sends `NOTIFY=DELAY,FAILURE`
to request Delivery Status Notifications.
aiosmtpd does not support any parameters,
not just ORCPT, so we have to ignore all of them.
2025-10-05 02:36:40 +00:00
cliffmccarthy
6e28cf9ca1 Add CHANGELOG.md entry for #648 2025-10-03 19:48:32 +00:00
cliffmccarthy
9b6dfa9cdc Use max username length in newemail.py, not min
- username_min_length and username_max_length are both set to a
  default value of 9 in the chatmail.ini.f template.  When they have
  the same value, it doesn't matter which one we use in newemail.py
  (which handles the /new URL).  However, if they are configured to
  different values by the admin, then the current implementation using
  username_min_length chooses from a smaller set of possible
  usernames.
- Revised create_newemail_dict() in newemail.py to use
  username_max_length as the length of the random username it offers
  via the /new URL.  This randomizes within a much larger set of
  possible usernames.
2025-10-03 19:48:32 +00:00
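The change described above amounts to something like the following (a sketch of the idea; the real create_newemail_dict() in newemail.py does more than this):

```python
import random
import string


def random_username(config: dict) -> str:
    """Pick a random username of the configured *maximum* length.

    Using username_max_length (not username_min_length) randomizes
    over the largest possible set of usernames.
    """
    length = int(config["username_max_length"])
    alphabet = string.ascii_lowercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))
```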
missytake
44ab006dca echobot: restart after postfix + dovecot were started (#642)
* echobot: restart after postfix + dovecot were started

fix #641

* cmdeploy: restart echobot only if dovecot *and* postfix were restarted
2025-09-25 09:00:26 +02:00
link2xt
c56805211f Increase maxproc for reinjecting ports from 10 to 100
Otherwise under high load filtermail
starts printing "Connection refused" errors to the log.
2025-09-24 16:10:26 +00:00
missytake
05ec64bf4a fix link to Mutual Help group 2025-09-23 13:42:47 +02:00
link2xt
290e80e795 Revert "dovecot: keep mailbox index only in memory (#632)"
This reverts commit 7bf2dfd62e.
2025-09-22 22:55:57 +00:00
missytake
56fab1b071 CI: fix lint (#633) 2025-09-22 12:57:43 +02:00
link2xt
00ab53800e Update changelog 2025-09-18 15:28:15 +00:00
link2xt
fc65072edb Allow ports 143 and 993 to be used by dovecot process 2025-09-18 15:26:58 +00:00
missytake
7bf2dfd62e dovecot: keep mailbox index only in memory (#632)
Co-authored-by: holger krekel  <holger@merlinux.eu>
2025-09-12 09:30:17 +02:00
missytake
b801838b69 doc: released 1.7.0 2025-09-12 00:55:49 +02:00
missytake
abd50e20ed cmdeploy: suppress SSH login info message 2025-09-11 20:31:03 +02:00
missytake
d6fb38750a www: make www_folder behavior testable 2025-09-11 19:51:32 +02:00
missytake
3b73457de3 www: introduce www_folder config item
fix #529
2025-09-11 19:51:32 +02:00
missytake
ba06a4ff70 cmdeploy: postfix runs on other ports as well, of course 2025-08-29 23:48:54 +02:00
missytake
7fdaffe829 cmdeploy: on Ubuntu, postfix calls its port 25 process 'smtpd' 2025-08-29 23:48:54 +02:00
missytake
73831c74d9 cmdeploy: fix lint 2025-08-27 08:36:33 +02:00
missytake
d8cbe9d6af cmdeploy: use ports from config for port checking 2025-08-27 08:36:33 +02:00
missytake
180ddb8168 doc: add changelog entry 2025-08-27 08:36:33 +02:00
missytake
a1eeea4632 acmetool: remove unused imports 2025-08-27 08:36:33 +02:00
missytake
a49aa0e655 acmetool: remove outdated systemctl stop nginx 2025-08-27 08:36:33 +02:00
missytake
7e81495b51 cmdeploy: exit if a necessary port is occupied by an unexpected process 2025-08-27 08:36:33 +02:00
missytake
6fde062613 fix lint 2025-08-27 08:35:04 +02:00
missytake
84e0376762 cmdeploy: get SSHExec again, timeout is likely 2025-08-27 08:35:04 +02:00
missytake
d690c22c06 cmdeploy: print echobot link at the end of cmdeploy run 2025-08-27 08:35:04 +02:00
missytake
5410c1bebc CI: remove lint checks from test deployments 2025-08-27 08:34:26 +02:00
missytake
915bd39dd5 CI: fail on lint issues 2025-08-27 08:34:26 +02:00
cliffmccarthy
2de8b155c2 docs: Rework architecture diagram based on review feedback
- Implemented changes suggested in review by missytake:
    - Removed relation between acmetool-redirector and certs.
    - Added internal nginx listening on port 8443.
    - Changed direction of arrows between certs and the services that
      use them.  This makes the arrow show the direction of
      information flow, rather than a "depends on" relation.
    - For filesystem paths, added a descriptive name to the node.
- Replaced most arrows with plain lines, to simply show that a
  relationship exists between the two nodes.  This also reduces visual
  clutter, since the graph is pretty dense with information already.
- Split nginx and certs into two nodes, to reduce entanglement in the
  graph.  These "linked" nodes are given a different shape and filled
  with a different colour, to highlight the fact that they are the
  same node.
- Revised text about the meaning of edges in the graph.
2025-08-19 13:04:33 +02:00
cliffmccarthy
c975aa3bd1 docs: Indicate draft status in ARCHITECTURE.md
- Suggested in review by hpk42.
2025-08-19 13:04:33 +02:00
cliffmccarthy
6b73f6933a docs: Add ARCHITECTURE.md with diagram of components
- For starters, this file is just a diagram of components of a
  chatmail server.  In the future, this document can grow into a more
  complete description of the architecture of the server, the
  deployment process, and the design intent behind what is and isn't
  in the code base.
- The name ARCHITECTURE.md is inspired by this article, which also has
  good suggestions about what to put in the file:
  https://matklad.github.io/2021/02/06/ARCHITECTURE.md.html
2025-08-19 13:04:33 +02:00
cliffmccarthy
3ce350de9e feat: Check whether GCC is installed in initenv.sh
- Before proceeding with installation of Python dependencies, check
  whether the 'gcc' command is available by running it with the
  --version argument.  If it is not available, print a helpful message
  and exit.
- For the current set of Python dependencies, without GCC, the build
  process fails when building the crypt-r package.  According to the
  error message, on my system the exact command it tries to run is
  'x86_64-linux-gnu-gcc', but rather than depend on this variant
  specifically, the script checks for the generic 'gcc' command, so as
  to avoid coupling the check to an architecture or operating system.
  Similar problems arise if we attempt to check for packages by name;
  the compiler binary is provided by 'gcc-11', but the symlinks that
  provide the unversioned commands (as used by the Python build) come
  from a package named 'gcc'.  Trying to be too precise in what we
  check for could lead to unnecessary failures in some environments,
  or become a maintenance challenge in the future.  For that reason,
  this change simply attempts to run 'gcc' and uses that as a
  probably-sufficient proxy for having what the Python package install
  will need.
2025-08-16 10:04:44 +02:00
cliffmccarthy
1e05974970 feat: Make sure build-essential is installed
- The Python modules installed by initenv.sh require a compiler to build.
- Revised initenv.sh to check whether build-essential is installed
  before proceeding, if the system is based on Debian or Ubuntu.
2025-08-16 10:04:44 +02:00
cliffmccarthy
577c04d537 feat: Add try blocks around Git commands in cmdeploy/__init__.py
- Added 'try' blocks around the 'git rev-parse' and 'git diff'
  commands that are run in deploy_chatmail().  If there is an error
  running rev-parse, git_hash is set to "unknown".  If there is an
  error running diff, git_diff is set to the null string.
- This allows the deployment process to work in two scenarios that would
  otherwise fail with an exception:
    - Systems where the 'git' command is not available.
    - When running with a copy of the tree content of chatmail/relay,
      but without a copy of the .git directory.
2025-08-08 12:28:29 +02:00
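The error handling this commit describes can be sketched as follows (the helper name is hypothetical; the real code runs these commands inline in deploy_chatmail()):

```python
import subprocess


def get_git_state(cwd="."):
    """Return (git_hash, git_diff), tolerating a missing git or .git dir."""
    try:
        git_hash = subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            cwd=cwd, text=True, stderr=subprocess.DEVNULL,
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        # 'git' not installed, or no .git directory present
        git_hash = "unknown"
    try:
        git_diff = subprocess.check_output(
            ["git", "diff"],
            cwd=cwd, text=True, stderr=subprocess.DEVNULL,
        )
    except (OSError, subprocess.CalledProcessError):
        git_diff = ""
    return git_hash, git_diff
```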
missytake
d880937d44 doc: added maddy-chatmail to README (#605)
* doc: added maddy-chatmail to README

* Update README.md

Co-authored-by: holger krekel  <holger@merlinux.eu>

---------

Co-authored-by: holger krekel <holger@merlinux.eu>
2025-07-28 16:16:14 +02:00
missytake
46d2334e9c add changelog 2025-07-09 08:42:25 +02:00
missytake
0ba94dc613 dovecot: set TZ=:/etc/localtime to improve performance 2025-07-09 08:42:25 +02:00
missytake
d379feea4f dovecot: only install if it isn't installed already 2025-07-08 19:41:19 +00:00
missytake
e82abee1b9 dovecot: fix errors on re-deployment 2025-07-08 19:41:19 +00:00
missytake
94060ff254 dovecot: never redownload the .deb file 2025-07-08 14:01:50 +02:00
missytake
1b5cbfbc3d dovecot: if architecture isn't supported, install dovecot from apt 2025-07-08 14:01:50 +02:00
missytake
f1dcecaa8f dovecot: verify checksums when downloading debs 2025-07-08 14:01:50 +02:00
missytake
650338925a add changelog 2025-07-08 14:01:50 +02:00
missytake
44f653ccca dovecot: install other dovecot packages 2025-07-08 14:01:50 +02:00
missytake
6c686da937 dovecot: apt install -f 2025-07-08 14:01:50 +02:00
missytake
387532cfca dovecot: download deb for correct arch 2025-07-08 14:01:50 +02:00
missytake
68904f8f61 dovecot: detect architecture 2025-07-08 14:01:50 +02:00
missytake
740fe8b146 dovecot: install from download.delta.chat instead of opensuse 2025-07-08 14:01:50 +02:00
Andrey
162dc85635 clarify about remote/local in readme (#597)
Closes #588
2025-07-07 10:24:38 +02:00
missytake
b699be3ac8 doc: specify where it needs to be the local PC 2025-07-07 10:24:38 +02:00
missytake
b4122beec4 fix lint 2025-06-29 19:49:49 +02:00
missytake
1596b2517c tests: test more reliably if port 25 is reachable 2025-06-29 19:49:49 +02:00
missytake
1f5b2e947c CI: ignore PLC0415 in ruff (imports outside top level) 2025-06-29 19:49:17 +02:00
holger krekel
8a59d94105 Update notifier.py docs
Update to current status and naming
2025-06-27 11:08:31 +02:00
link2xt
96a1dbac08 Expire push notification tokens after 90 days 2025-06-10 22:27:21 +00:00
link2xt
5215e1dc2b Update changelog 2025-06-04 20:57:31 +00:00
link2xt
624a33a61e Use static binary from official mtail release instead of Debian package
Debian has outdated version that does not actually work
with logs from stdin. It gets stuck after some time.
2025-06-04 20:56:27 +00:00
link2xt
6bc751213f Checkout non-merge commit in CI 2025-06-04 20:12:22 +00:00
link2xt
4b721bfcd4 Reconfigure imap-login to high-performance mode
High-security mode could be configured
to handle more connections by increasing process_limit,
but has problems logging in many users at once after
each Dovecot restart or config reload.
2025-06-03 16:30:06 +00:00
link2xt
4a6aa446cd Increase nginx connection limits 2025-06-02 18:28:57 +00:00
Sandra Snan
e0140bbad5 Remove contains from lua
Is this function even doing anything? If so, reject this PR. I'm still
trying to understand the code.
2025-06-02 18:12:58 +00:00
missytake
6cede707ac Update cmdeploy/src/cmdeploy/__init__.py
Co-authored-by: holger krekel  <holger@merlinux.eu>
2025-05-25 09:12:59 +02:00
missytake
b27937a16d doc: add changelog 2025-05-25 09:12:59 +02:00
missytake
30b6df20a9 cmdeploy: upload chatmail/relay version to /etc 2025-05-25 09:12:59 +02:00
missytake
6c27eaa506 cmdeploy fmt 2025-05-25 09:12:59 +02:00
missytake
0c28310861 make cmdeploy fmt happy 2025-05-24 08:47:49 +02:00
missytake
0125dda6d7 echo: add echo@ to passthrough_senders in default config 2025-05-24 08:47:49 +02:00
missytake
fe38fcbeba filtermail: add echo to passthrough_recipients by default 2025-05-24 08:47:49 +02:00
missytake
b4af6df55c chatmaild: allow echobot to receive unencrypted messages by default 2025-05-24 08:47:49 +02:00
missytake
15244f6462 lint: make ruff happy 2025-05-17 19:31:33 +02:00
missytake
23655df08a doc: add changelog 2025-05-17 19:31:33 +02:00
missytake
b925f3b5ab filtermail: respect message size limit in the config 2025-05-17 19:31:33 +02:00
missytake
823bc90eb1 cmdeploy: make it work without bash
Co-authored-by: link2xt <link2xt@testrun.org>
2025-05-16 21:27:50 +02:00
missytake
ed93678c9d cmdeploy: on ubuntu/debian, test if python3-dev is installed 2025-05-16 21:27:50 +02:00
Adon Metcalfe
2b4e18d16f Only update sysctl settings if needed
If running in a constrained environment (e.g. an incus / systemd container), setting sysctl limits may be restricted. This tweak checks the existing settings first and, if they are large enough, continues instead of applying them.
2025-05-15 12:39:01 +02:00
adbenitez
09ff56e5b9 add test 2025-05-05 12:59:09 +02:00
adbenitez
b35e84e479 avoid crash on spurious empty file in the pending_notifications dir 2025-05-05 12:59:09 +02:00
link2xt
0638bea363 filtermail: allow partial body length in OpenPGP payloads 2025-05-05 07:03:09 +00:00
Adon Metcalfe
ab9ec98bcc Update README.md
minor doc fix
2025-04-26 09:17:21 +02:00
missytake
b9a4471ee4 cmdeploy: run apt update to make sure dns-utils can be installed 2025-04-24 18:04:00 +02:00
link2xt
5f29c53232 Fix mox URL in the README 2025-04-23 16:59:26 +00:00
s0ph0s
1d4aa3d205 Add note to README about related projects 2025-04-17 11:54:23 +02:00
missytake
a78c903521 cmdeploy: config value for deleting large messages after X days 2025-04-16 14:14:44 +02:00
missytake
a0a1dd65a6 release v1.6.0 2025-04-11 12:21:53 +02:00
missytake
046552061e tests: maximum diff between timezones is 27h, +24h 2025-04-11 00:44:08 +02:00
missytake
1fba4a3cdf tests: check whether opendkim restarted in the last 48 hours 2025-04-11 00:44:08 +02:00
missytake
44ff6da5d2 DNS: add 9.9.9.9 to resolv.conf if unbound isn't there yet 2025-04-10 19:32:01 +02:00
holger krekel
71160b8f65 fix timezone handling such that client/server do not need to have the same timezone 2025-04-10 17:55:16 +02:00
holger krekel
9f74d0a608 cleanly time out trying to connect to port 25 and treat failure as "skip" not real failure. 2025-04-10 17:09:20 +02:00
missytake
c9078d7c92 doc: add changelog 2025-04-10 15:12:49 +02:00
Mark Felder
aa4259477f Postfix master.cf: use 127.0.0.1 for consistency 2025-04-10 15:12:49 +02:00
missytake
21f9885ffe unbound: check that 53 is not occupied by a different process 2025-04-10 15:12:31 +02:00
missytake
f9e885c442 doc: add changelog 2025-04-10 15:12:31 +02:00
missytake
b45be700a8 cmdeploy: disable nsd so it doesn't block port 53 2025-04-10 15:12:31 +02:00
missytake
9c381e1fbf added changelog 2025-04-09 17:41:38 +02:00
holger krekel
3cc9bc3ceb avoid initial runs to show acmetool not found errors 2025-04-09 17:41:38 +02:00
bjoern
2a89be8209 Merge pull request #549 from chatmail/r10s/conretize-timings
add a hint that deletion may be earlier
2025-04-08 22:59:59 +02:00
B. Petersen
c848b61346 add a hint that deletion may be earlier
there is another mention of times in privacy.md;
however, the gist there is that things get deleted,
and it is fine if that happens earlier (it is also not excluded).

targets discussion from https://github.com/chatmail/relay/pull/504
2025-04-08 14:57:15 +02:00
link2xt
49787044ff Do not encourage non-random addresses and weak passwords 2025-04-08 11:49:39 +00:00
missytake
04ae0b86fb added invite link to mutual help group 2025-04-08 10:39:57 +02:00
missytake
b0434dc927 chore: add blank issues + the mutual help chat group 2025-04-08 10:38:17 +02:00
missytake
7578c5f1d3 chore: add issue template 2025-04-08 10:38:17 +02:00
holger krekel
5ba99dc782 shorten first title to fit in one line in default layout 2025-04-02 16:41:01 +02:00
holger krekel
6d898d5431 use "relay" instead of "server" in most places, and "address" instead of "account" (#539)
* use "relay" instead of "server" and "address" instead of "account"

* consistent Dovecot capitalization and striking relay in two places

* address missytake's comments: use "chatmail relay servers" sometimes -- it's still fine to talk about relays being, or running on, servers
2025-04-02 16:23:23 +02:00
holger krekel
fc3fb93432 use default config files for any missing ini setting 2025-04-02 08:48:45 +02:00
holger krekel
c4f0146e16 Reject unencrypted incoming mail (#538)
* draft blocking of incoming non-encrypted mail

* create a new enforceE2EE file in address dirs by default and only accept incoming cleartext mail if the enforceE2EE file is missing

* Update cmdeploy/src/cmdeploy/service/filtermail.service.f

Co-authored-by: l <link2xt@testrun.org>

* fix benchmark so they setup encryption

* hack around limitations of aiosmtpd's handling of RCPT TO options

* add tests, and split incoming/outgoing handlers for clarity

* document mailbox directory structure, some streamlining of features/E2EE in intro

* use SMTP response code "523 Encryption Needed"

* filtermail: care for the case that the recipient does not exist


Co-authored-by: missytake <missytake@systemli.org>

* Update chatmaild/src/chatmaild/filtermail.py

Co-authored-by: l <link2xt@testrun.org>

* Update chatmaild/src/chatmaild/filtermail.py

Co-authored-by: l <link2xt@testrun.org>

* remove debug info print

* ensure multipart/report type for mailer-daemon messages

* Allow sending out Autocrypt Setup Messages

---------

Co-authored-by: l <link2xt@testrun.org>
Co-authored-by: missytake <missytake@systemli.org>
2025-04-01 20:52:43 +02:00
holger krekel
194030a456 enforce encryption for in-server mails (#535)
* enforce encryption for in-server mails

* make tests work with chatmail server only support e2ee internally

* fix echobot test

* simplify quota-exceeded test

* work around rpc-server fixture changes
2025-03-29 21:22:26 +01:00
holger krekel
ce240083c4 fix test and streamline impl 2025-03-27 18:00:09 +01:00
Mark Felder
0722876603 Ensure the candidates for possible accounts are directories with a "cur" subdir to ensure it is a real maildir 2025-03-27 18:00:09 +01:00
Mark Felder
724020ec2a Update URL 2025-03-26 10:24:42 +01:00
feld
b01348d313 Update README.md
Co-authored-by: holger krekel  <holger@merlinux.eu>
2025-03-26 10:24:42 +01:00
feld
46e31bbce3 Update README.md
Co-authored-by: holger krekel  <holger@merlinux.eu>
2025-03-26 10:24:42 +01:00
Mark Felder
a4f4627a75 Various README improvements
- e-mail -> email
- capitalization of software and protocols (Postfix, Dovecot, SMTP, etc)

Rephrased the install instructions to start by recommending the DNS records that cmdeploy is going to check for immediately anyway.

Refactored the migration guide to fix incorrect tar commands (wrong usage of the -C flag) and to suggest copying directly from server to server by logging in with SSH agent forwarding. The mail services should be disabled before proceeding with any changes; also ensure the echobot and mail spool are copied over to the new server so no messages are lost.
2025-03-26 10:24:42 +01:00
Mark Felder
8d34e036ec Limit the bind for the HTTPS server on 8443 to 127.0.0.1
This server bind was overlooked
2025-03-25 09:48:31 +01:00
bjoern
e004a5e2f6 Merge pull request #528 from chatmail/r10s/update-readme
update some links
2025-03-23 23:54:34 +01:00
B. Petersen
acf6e862d0 update some links 2025-03-23 20:42:25 +01:00
holger krekel
31faf2c78e remove Delta Chat mentions 2025-03-21 21:47:24 +01:00
holger krekel
f8c28d8b9f clarify client/server setup for deploying a chatmail server 2025-03-21 21:41:51 +01:00
holger krekel
f69a2355f6 better prerequisites 2025-03-21 21:40:05 +01:00
holger krekel
388c01105c streamline intro/getting started 2025-03-21 21:36:26 +01:00
holger krekel
f8996e1d7d Update README.md
Co-authored-by: bjoern <r10s@b44t.com>
2025-03-21 20:32:38 +01:00
holger krekel
6b3d5025d9 Update README.md
Co-authored-by: bjoern <r10s@b44t.com>
2025-03-21 20:32:38 +01:00
holger krekel
ed271189d2 trim the feature list 2025-03-21 20:32:38 +01:00
holger krekel
65f8a9a652 provide more summary info/prerequisites in the README 2025-03-21 20:32:38 +01:00
holger krekel
6c5b9fde1f Update README.md
Co-authored-by: l <link2xt@testrun.org>
2025-03-21 06:50:17 +01:00
holger krekel
258436442f rework intro to read when coming from chatmail.at 2025-03-21 06:50:17 +01:00
link2xt
05a32efa50 fix: send SNI when connecting to outside servers
Otherwise, email providers that let customers bring their own domain
and use the same IP addresses for all customers
send a wildcard certificate instead of the correct one,
and Postfix refuses to connect with an error:

    server certificate verification failed for example.org[A.B.C.D]:25: num=62:hostname mismatch
2025-03-16 11:21:16 +00:00
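The failure mode the commit above fixes can be illustrated with Python's `ssl` module (the function name is illustrative, not from the repo): passing `server_hostname=` is what puts the SNI extension on the wire, which lets a multi-tenant host return the per-domain certificate instead of a wildcard fallback.

```python
import socket
import ssl

def peer_cert_names(host: str, port: int = 443) -> list:
    """Handshake with SNI (via server_hostname=) and return the DNS
    names the presented certificate covers; without SNI, a multi-tenant
    host may present a wildcard cert that fails hostname verification."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        # server_hostname= both sends SNI and enables hostname checking
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
```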
Mark Felder
1142d06fdb Limit the bind for the HTTPS server on 8443 to 127.0.0.1 2025-03-15 07:42:09 +00:00
link2xt
35fe189be7 Pass through original_content instead of content in filtermail
This avoids unnecessary UTF-8 recoding and passes the bytestring through.
2025-03-11 13:27:16 +00:00
missytake
a78e8e10d2 Merge pull request #517 from chatmail/opendkim-path
opendkim: add absolute path to opendkim-genkey
2025-03-11 12:20:17 +01:00
missytake
9af37ccfbf opendkim: add absolute path to opendkim-genkey 2025-03-11 11:56:07 +01:00
l
803f3e6181 Merge pull request #514 from chatmail/link2xt/readme-tls
Document TLS requirements in the readme
2025-03-10 22:47:43 +00:00
link2xt
f188aef11e Document TLS requirements in the readme 2025-03-09 15:52:44 +00:00
link2xt
76d7e60018 Remove cleanup service from submission ports
It does not work because `smtpd_proxy_filter`
forwards the message to filtermail,
and we clean up the message once
filtermail reinjects it on port 10025.
2025-03-09 10:26:53 +00:00
link2xt
fe749159e4 Document that authclean cleans up the Subject 2025-03-08 02:42:35 +00:00
adbenitez
3c3532a292 update links in CHANGELOG.md 2025-03-06 22:10:15 +01:00
adb
710ca0070f Merge pull request #504 from chatmail/adb/delete-big-messages
delete big messages after 7 days
2025-03-04 17:40:44 +01:00
adbenitez
4038fefefd add changelog entry 2025-03-04 17:37:58 +01:00
Timotheus Pokorra
cdcdc0b724 update Let's Encrypt Subscriber Agreement 2025-03-04 16:00:28 +01:00
adbenitez
2313093b55 delete big messages after 7 days 2025-03-03 17:19:15 +01:00
missytake
3f2ec54725 mtail: fix getting logs from STDIN 2025-02-25 16:23:13 +01:00
missytake
e928a33f95 opendkim: restart once every day (#498)
fix #495
2025-02-19 21:50:48 +01:00
missytake
2780f53d3b CI: accept ns.testrun.org host key (#499) 2025-02-19 21:24:23 +01:00
missytake
c3f1bdca52 filtermail: strip any empty lines at the end (#496) 2025-02-19 16:38:01 +01:00
missytake
f4e371676b chatmaild: fix umask for doveauth + metadata (#494)
* chatmaild: fix umask for doveauth + metadata

fix #453
2025-02-17 19:10:26 +01:00
link2xt
8ec6e6e985 opendkim: use su instead of sudo 2025-02-17 19:09:50 +01:00
missytake
f4fc1a3f93 CI: stop nested acme directories on staging-ipv4 2025-02-17 01:17:11 +01:00
missytake
42bfb9f22f journald: remove old logs from disk. (#490)
fix #486
2025-02-17 00:27:04 +01:00
link2xt
1a35cdc7a9 Require TLS 1.3 on client-facing ports
I tested with -tls1_2 option
of openssl s_client
that TLS 1.2 connections
are no longer possible
on any ports except port 25.

Port 25 requires at least TLS 1.2
for encrypted connections.
2025-02-16 23:01:56 +00:00
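The `openssl s_client -tls1_2` probe described above can also be scripted from Python; a minimal sketch (host and port are parameters, nothing here is from the repo):

```python
import socket
import ssl

def tls12_rejected(host: str, port: int) -> bool:
    """Attempt a TLS 1.2-only handshake (like `openssl s_client -tls1_2`).
    Returns True if the server refuses it, i.e. TLS 1.3 is required."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    # pin both ends of the allowed range to TLS 1.2
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return False  # handshake succeeded: TLS 1.2 still allowed
    except ssl.SSLError:
        return True
```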
link2xt
2daac76574 Replace subject with [...] for outgoing mail
The `authclean` cleanup service is used by the
reinjecting smtpd running on localhost:10025 by default.
It runs after filtermail
and currently removes the `Received` header
to avoid leaking the IP address.
It can just as well be used to replace `Subject` lines
with `Subject: [...]`.
If there are multiple `Subject` lines,
all of them should be replaced.

This allows us to avoid dealing with
localized subjects, including SecureJoin
messages `vc-request` and `vg-request`
which can have Subject lines like
Subject: =?utf-8?q?Nachricht_von_nrn178fi4=40nine=2Etestrun=2Eorg?=
2025-02-16 22:35:51 +00:00
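The replacement the commit describes is straightforward with the stdlib `email` package; a sketch (the function name is hypothetical, not the actual chatmaild code) that replaces all `Subject` lines, localized or not:

```python
from email import policy
from email.parser import BytesParser

def obscure_subject(raw: bytes) -> bytes:
    """Replace every Subject line with 'Subject: [...]'."""
    msg = BytesParser(policy=policy.default).parsebytes(raw)
    del msg["Subject"]          # __delitem__ removes *all* Subject headers
    msg["Subject"] = "[...]"
    return msg.as_bytes()

raw = (b"Subject: hello\r\n"
       b"Subject: hello again\r\n"
       b"From: a@example.org\r\n\r\n"
       b"body\r\n")
cleaned = obscure_subject(raw)
```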
link2xt
5633582d31 Add changelog entry for MTA-STS daemon removal 2025-02-16 21:27:15 +00:00
link2xt
667a987dfc Remove MTA-STS daemon 2025-02-16 20:31:07 +00:00
link2xt
49907c78a3 Add changelog entry for crypt compatibility 2025-02-16 15:15:31 +00:00
adb
5cfdb0698f use old crypt lib in python < 3.11 (#483) 2025-02-16 12:18:42 +00:00
link2xt
7e6f8ddfba Simplify SPF record
There is no need to explicitly specify domain for `a` rule.
2025-02-15 03:51:49 +00:00
adb
4d915f9800 improve secure-join message detection (#473) 2025-01-28 04:48:07 +00:00
l
9e6ba1a164 fix: install gcc and python3-dev (#477)
These are needed to build crypt-r
2025-01-27 14:38:18 +00:00
adbenitez
20f76c83f8 replace deprecated crypt package with crypt-r 2025-01-26 19:48:46 +00:00
link2xt
b2995551a2 ci: remove iroh relay from zonefiles
iroh subdomain is not needed
since 95f8c4b269
2025-01-26 19:22:45 +00:00
link2xt
c8f46147e0 chore: ruff 0.9.2 fixes and formatting 2025-01-24 20:57:13 +01:00
missytake
9f6ea8121c added changelog 2025-01-08 17:21:18 +01:00
missytake
9c08cbfbec DNS: recommend DKIM record without space in between for some DNS web interfaces 2025-01-08 17:21:18 +01:00
missytake
c3190dd51a doc: fix migration guide
fix #464
2025-01-08 16:55:10 +01:00
missytake
5b8de76c22 fix tests 2024-12-21 00:04:40 +01:00
missytake
d6205d9a04 add changelog 2024-12-21 00:04:40 +01:00
missytake
6a32192e50 Revert rest of #462
This reverts commit 88a8dc905b.
2024-12-21 00:04:40 +01:00
missytake
5c78619750 DNS: make --all non-optional for cmdeploy dns 2024-12-21 00:04:40 +01:00
missytake
a7b808ebaf Release 1.5.0 2024-12-20 10:53:36 +01:00
missytake
d11038b7b3 DNS: out() instead of print() 2024-12-20 10:46:42 +01:00
missytake
88a8dc905b DNS: recommend cmdeploy dns --all in the README 2024-12-20 10:46:42 +01:00
missytake
a2fbb5dc37 add changelog 2024-12-20 10:46:42 +01:00
missytake
97c31e3820 fix tests 2024-12-20 10:46:42 +01:00
missytake
08c88caa46 CI: test all DNS records 2024-12-20 10:46:42 +01:00
missytake
8e5174ae44 DNS: add -all to cmdeploy dns 2024-12-20 10:46:42 +01:00
missytake
69fe5eac2b DNS: more elegant solution to fix mta-sts record 2024-12-17 18:27:56 +01:00
missytake
46f6a07239 Revert "DNS: fix _mta-sts TXT record on initial setup"
This reverts commit 6d4af3cf0c.
2024-12-17 18:27:56 +01:00
missytake
b268efbc6e DNS: fix _mta-sts TXT record on initial setup 2024-12-17 18:27:56 +01:00
link2xt
95f8c4b269 Update iroh and remove iroh. subdomain 2024-11-09 01:02:20 +00:00
missytake
12217437e3 cmdeploy: install curl for downloading iroh 2024-11-02 15:54:11 +00:00
missytake
35a254fc1c acmetool: only request iroh certificate if it's required 2024-10-31 18:10:58 +01:00
missytake
2c0b659893 dns: add iroh CNAME to zonefile 2024-10-31 18:10:58 +01:00
holger krekel
fe51dbd844 streamline 2024-10-31 17:30:09 +01:00
holger krekel
99fbe1d4c4 Apply suggestions from code review
Co-authored-by: missytake <missytake@systemli.org>
2024-10-31 17:30:09 +01:00
holger krekel
d3e71aa394 streamline intro, mention IP addresses 2024-10-31 17:30:09 +01:00
holger krekel
72df078d02 add support for specifying whole domains for passthrough 2024-10-30 17:17:08 +01:00
missytake
8ea96e505e dovecot: fix syntax error 2024-10-30 16:34:53 +01:00
missytake
a5fd5cfb55 dovecot: disable anvil authentication penalty
fix #441
2024-10-30 16:34:53 +01:00
missytake
3098afb342 CI: fix accepting ns.testrun.org SSH Host Key 2024-10-30 13:30:44 +01:00
missytake
dfc1042a3f CI: fix #422 nested acme&dkimkeys folders 2024-10-30 13:30:44 +01:00
holger krekel
af17b459ba also change privacy policy to circumscribe iroh-relay services 2024-10-30 13:30:44 +01:00
missytake
aae05ac832 CI: set necessary DNS records before cmdeploy run, so it doesn't fail 2024-10-30 13:30:44 +01:00
link2xt
5048bde6d0 Deploy iroh relay 2024-10-30 13:30:44 +01:00
missytake
b92d9c889b doc: use ssh+tar to transfer vmail + dkimkeys as well 2024-10-29 17:17:17 +01:00
link2xt
c35c44ad8d Replace rsync with tar 2024-10-29 17:17:17 +01:00
missytake
a9779d7e7c add changelog 2024-10-29 17:17:17 +01:00
missytake
70f77a93ea doc: fix step 9 -> step 6
Co-authored-by: holger krekel  <holger@merlinux.eu>
2024-10-29 17:17:17 +01:00
missytake
ebed7ebf5e doc: migration guide should use new --ssh-host command 2024-10-29 17:17:17 +01:00
missytake
648bf53e83 Guide on how to migrate chatmail to a new host
This guide doesn't require knowing about firewalls,
but utilizes the `cmdeploy run --disable-mail` command from #428.

supersedes #417
2024-10-29 17:17:17 +01:00
missytake
75f11e68de updated privacy policy to testrun UG 2024-10-29 16:53:33 +01:00
missytake
579e6fd1cd added changelog 2024-10-29 16:53:04 +01:00
missytake
30392df901 cmdeploy: add argument to specify different SSH host than mail_domain 2024-10-29 16:53:04 +01:00
link2xt
7f3f69fa72 fix: increase request_queue_size for UNIX sockets to 1000
Default value is 5.
This setting was lost during refactoring in commit bf0f6e2303
2024-10-27 14:20:42 +00:00
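For context, `request_queue_size` on Python's `socketserver` classes is the `listen()` backlog, and the inherited class-level default really is 5. A sketch of pinning it on a UNIX-socket server (class names are illustrative, not the actual chatmaild code):

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # echo one line back, just to have a complete handler
        self.wfile.write(self.rfile.readline())

class DictProxyServer(socketserver.ThreadingUnixStreamServer):
    # listen(2) backlog; with the default of 5, the kernel drops
    # connections when many Dovecot workers reconnect at once
    request_queue_size = 1000
```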
missytake
3e646efee9 add PR link to CHANGELOG.md 2024-10-27 12:23:03 +01:00
Mark Felder
8fe173439d Dovecot quota_max_mail_size to use the Chatmail max_message_size value 2024-10-27 12:23:03 +01:00
holger krekel
48fdff6700 fix wrong ref in changelog 2024-10-23 13:49:46 +02:00
link2xt
5055434e48 Fix OpenPGP payload check
Replace \r\r\n in literal.eml test with \r\n
to make `test_filtermail_no_literal_packets`
actually reach `check_openpgp_payload()`
and make `check_openpgp_payload()` more strict.
2024-10-22 18:41:27 +00:00
missytake
bbf508d95e docs: nicer linebreaks 2024-10-16 16:45:06 +02:00
missytake
80cbdda772 docs: mention the chatmail.ini in the cmdeploy description 2024-10-16 16:45:06 +02:00
missytake
babdff361c docs: more details for the repo overview #419 2024-10-16 16:45:06 +02:00
missytake
15f30d8841 cmdeploy: flag to disable postfix + dovecot for migration 2024-10-16 12:15:59 +02:00
link2xt
737ab54bf2 ci: test cmdeploy dns only once
It should be reliable.
2024-10-16 12:06:55 +02:00
link2xt
20fa5d9656 Query authoritative nameserver directly to bypass DNS cache
unbound-control is not installed out of the box
and even once installed `flush_zone` does not seem
to work reliably.

Instead of trying to flush the cache from unbound,
we now query authoritative nameserver directly using `dig`.
2024-10-15 22:19:47 +00:00
link2xt
a2f2e04ff9 fix: set acme_account_url even if some DNS records are not set
perform_initial_checks may exit early
and not add `acme_account_url` if the required DNS
records are not found.
In that case a subsequent `cmdeploy run` fails
with a KeyError.

To avoid this, `acme_account_url` should always be set.

Unlike DNS checks, running acmetool
may not fail due to network errors,
so it is more reliable and should be checked first.
2024-10-15 16:10:36 +00:00
link2xt
7573ef928f mention wireguard 2024-10-14 12:22:02 +02:00
link2xt
46297d4839 Document setting up DNAT 2024-10-14 12:22:02 +02:00
link2xt
5515607b63 Setup mtail (#388)
Co-authored-by: holger krekel <holger@merlinux.eu>
2024-10-14 09:18:35 +00:00
link2xt
d0ed8830f7 Add IMAP capabilities instead of overwriting them
I wanted to add `COMPRESS=DEFLATE`,
but it should be added only for sessions
that are logged in because `COMPRESS`
command does not work before logging in.

Dovecot already does it correctly
if we don't overwrite the capability string.
2024-10-13 20:18:34 +02:00
link2xt
a6bdbb748b Set CAA record flags to 0 2024-09-15 02:57:38 +00:00
missytake
ba811c2e1c DNS: fix checking for required DNS records (#412)
* Improve README for first setup

* DNS: fix flushing DNS when requesting records

* DNS: actually check whether mta-sts record is set correctly

* DNS: add changelog

* DNS: also check for www CNAME record

* DNS: fix tests

* lint: update ruff to 0.6.5 locally
2024-09-13 21:55:54 +02:00
holger krekel
3ef45c2ffd add changelog entry for #405 2024-09-02 23:02:34 +02:00
holger krekel
8d72d770a3 don't rename import as link2xt prefers 2024-09-02 23:01:28 +02:00
holger krekel
e32d81520a use "walrus" operator (didn't know about it, doh!) 2024-09-02 23:01:28 +02:00
holger krekel
e973bc1f41 organize remotely executing functions in "cmdeploy.remote" sub package 2024-09-02 23:01:28 +02:00
holger krekel
cdfce25494 add a note on deletion of accounts 2024-09-02 19:40:42 +02:00
link2xt
a1e80fdca1 Fix ruff warnings 2024-08-23 11:57:47 +00:00
holger krekel
7aa876a0bb remove defunct hispanilandia ref 2024-08-09 00:05:56 +02:00
holger krekel
dee36638cf fix #399 2024-08-09 00:02:34 +02:00
holger krekel
effd5bc6e9 upgrade debian packages on "cmdeploy run" 2024-08-02 13:30:36 +02:00
holger krekel
29eabba5a0 fix links 2024-08-01 19:22:37 +02:00
holger krekel
e7a9bf2a6c start more changes 2024-07-31 22:01:20 +02:00
holger krekel
93423ee1d1 make another release 2024-07-31 21:59:55 +02:00
holger krekel
888f7e669a simplify handle_set method for dictproxy subclasses 2024-07-31 21:51:35 +02:00
link2xt
1f1d1fdf59 fix: use separate transaction storage for each DictProxy handler
DictProxy can have transactions with the same name
(most frequently `1`) processed in parallel.
Dovecot expects that transaction names on each connection
are independent.
2024-07-31 18:49:18 +00:00
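The fix amounts to keeping transaction state on the per-connection handler object rather than on the shared server; a rough sketch under that assumption (class and method names are hypothetical):

```python
import socketserver

class DictProxyHandler(socketserver.StreamRequestHandler):
    """Each connection gets its own handler instance, so keeping the
    transactions dict here (not on the server) lets two connections
    both use transaction name "1" without clobbering each other."""

    def setup(self):
        super().setup()
        self.transactions = {}  # txn name -> pending changes, per connection

    def handle_begin(self, txn_id, user):
        self.transactions[txn_id] = {"user": user, "changes": []}

    def handle_set(self, txn_id, key, value):
        self.transactions[txn_id]["changes"].append((key, value))

    def handle_commit(self, txn_id):
        # pop so a reused transaction name starts fresh
        return self.transactions.pop(txn_id)["changes"]
```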
zrknlzr
dcab097e00 www: update custom chatmail address steps 2024-07-31 20:25:10 +02:00
missytake
a9bdc3d1d0 www: add button to sign-up on chatmail server 2024-07-31 11:21:42 +02:00
holger krekel
a7101be284 introduce imap_rawlog option for debugging 2024-07-31 02:01:06 +02:00
holger krekel
3ee0b7e288 fix #385 2024-07-30 17:37:33 +02:00
link2xt
e3f0bb195d Fix doveauth logging for created accounts
Currently logs look like this:
`Created address: <chatmaild.user.User object at 0x7fafac36bcd0>`
2024-07-30 12:09:12 +02:00
holger krekel
fae0863633 make disable_ipv6 optional (and default to false) to not break existing chatmail.ini files unnecessarily 2024-07-28 20:38:53 +02:00
missytake
7a64333c25 tests: fix wait_next_incoming_message() in cmdeploy bench 2024-07-28 20:21:09 +02:00
Christian Hagenest
1331e7e77a Add config option for ipv6 usage (#312)
* add allow_ipv6 config option

* add ipv6 config changes to cmdeploy

* fix name of config option for ipv6 in config.py

* move configure ipv6 before service start

* Use templates for disabling ipv6

* lint

* fix parameters in _configure_dovecot

* dont pass domain to _configure_nginx

* make disable_ipv6 boolean

Co-authored-by: missytake <missytake@systemli.org>

* implement namis suggestions reg boolean for ipv6

* Update chatmaild/src/chatmaild/config.py

Co-authored-by: missytake <missytake@systemli.org>

* ruff

* ruff again :)

* fix merge conflict

* CI: add CI machine with IPv6 disabled

* CI: fix sed statement

* CI: fix ubuntu reset

* CI: separate cert storage for staging2 and staging-ipv4

* add DNS records to proper zone

* CI: ignore if folders are missing

* CI: renames are not needed like this

* CI: fix default DNS zone for ipv4

* CI: use debian 12 instead of ubuntu, tired of trying to guess the correct image

* remove duplicated listen on 8443

* use jinja templates for disable_ipv6

* remove unused variable

* add missing % sign in jinja template

* more fun with jinja syntax

* CI: proper rsync paths for acme & DKIM caching

* Changelog: add disable_ipv6 config option

---------

Co-authored-by: missytake <missytake@systemli.org>
Co-authored-by: holger krekel <holger@merlinux.eu>
2024-07-28 20:06:24 +02:00
holger krekel
ac1f2dadad introduce max_message_size config option 2024-07-28 19:51:05 +02:00
holger krekel
4858a67be1 run filtermail as dedicated user 2024-07-28 19:02:22 +02:00
holger krekel
1238ed95da remove mailboxes_dir as default option 2024-07-28 18:17:10 +02:00
holger krekel
b32a57105d remove "passdb_path" as default option 2024-07-28 18:17:10 +02:00
holger krekel
87d6d2d5cb shift code around a bit and add changelog 2024-07-28 17:13:32 +02:00
Daniel Kahn Gillmor
5b05e0194f Add additional known obscured subjects from k-9/thunderbird-android.
These additional subjects were extracted from the thunderbird-android
source (which is inherited from k-9).

The extraction was done with:

```
git clone https://github.com/thunderbird/thunderbird-android/
cd thunderbird-android/legacy/ui/legacy/src/main/res
grep string\ name=\"encrypted_subject values-*/strings.xml | cut -f2 -d'>' | cut -f1 -d'<' | sort -u | sed -e 's/^/    "/' -e 's/$/",/'
```

(I did need to clean up one line's escaping to pass the linter's
expectations)

See also https://github.com/thunderbird/thunderbird-android/issues/8011
2024-07-28 17:13:32 +02:00
missytake
24843abed3 changelog: hint how admins can update 2024-07-28 16:30:34 +02:00
holger krekel
1f96334f8e add changelog 2024-07-28 16:30:34 +02:00
missytake
4db953b22b cmdeploy re-add -y for pyinfra 3 2024-07-28 16:30:34 +02:00
missytake
8e847093da chore: require pyinfra v3 2024-07-28 16:30:34 +02:00
missytake
023253ad9c cmdeploy: skip warnings only in pyinfra 3; pyinfra crashes otherwise 2024-07-28 16:30:34 +02:00
holger krekel
89c65d30d3 remove debug log 2024-07-28 11:19:03 +02:00
holger krekel
c4499d6c85 remove necessity for FileLock on set_password 2024-07-28 11:12:00 +02:00
holger krekel
29888c2f03 create mailboxes parent directories if needed 2024-07-28 11:12:00 +02:00
holger krekel
eaff92cebc don't use filelocks for writing password because there only is a single doveauth process anyway 2024-07-28 11:12:00 +02:00
holger krekel
4f4fd6a90c log error when a transaction id is not there 2024-07-28 11:12:00 +02:00
holger krekel
da3eb89b67 try debug a CI failure 2024-07-28 11:12:00 +02:00
holger krekel
765f081f6f refactor password/login-timestamp handling into a User object 2024-07-28 11:12:00 +02:00
holger krekel
5c87d69d46 simplify get_user_maildir 2024-07-28 11:12:00 +02:00
holger krekel
686f32d6b3 implement and test migration from sqlite to storing password in userdir 2024-07-28 11:12:00 +02:00
holger krekel
68a62537e1 merge lastlogin and doveauth logic to use the "password" file for both states 2024-07-28 11:12:00 +02:00
holger krekel
e3ff82544a shift lookup methods to class for consistency 2024-07-28 11:12:00 +02:00
holger krekel
eddfadaf7f move passwords to file in user maildir 2024-07-28 11:12:00 +02:00
holger krekel
1b3e2b32f2 only write last-login files for e-mail address directories 2024-07-28 11:12:00 +02:00
holger krekel
353d3bfb3f introduce last-login proxy 2024-07-28 11:12:00 +02:00
holger krekel
4a8fc84c82 Update chatmaild/src/chatmaild/delete_inactive_users.py
Co-authored-by: link2xt <link2xt@testrun.org>
2024-07-28 11:12:00 +02:00
holger krekel
641a6f8d2e streamline: make Config determine uid/gid/maildir of a user 2024-07-28 11:12:00 +02:00
holger krekel
7f3996ef58 make read/write user data atomic 2024-07-28 11:12:00 +02:00
holger krekel
dd770f7e10 small streamlining 2024-07-28 11:12:00 +02:00
holger krekel
4dbb19db46 delete users from mailboxes_dir 2024-07-28 11:12:00 +02:00
holger krekel
ad151c2cc1 remove last_login 2024-07-28 11:12:00 +02:00
holger krekel
28f357b598 write last login differently 2024-07-28 11:12:00 +02:00
holger krekel
bf0f6e2303 address review comments: renamed test and using socketserver ThreadingUnixStreamServer 2024-07-22 13:51:32 +02:00
holger krekel
35a0f07887 remove startup/socket setup from metadata 2024-07-22 13:51:32 +02:00
holger krekel
52aa7cad06 make doveauth also use generic dictproxy 2024-07-22 13:51:32 +02:00
holger krekel
22d77f4680 splitout base class for dictproxy 2024-07-22 13:51:32 +02:00
holger krekel
46c34bfbea use class for dispatching lookups 2024-07-22 13:51:32 +02:00
link2xt
052fb64a3d nginx: use numbers for upstream ports
Otherwise nginx fails when user actually tries to connect,
logs have errors such as
`invalid port in upstream "127.0.0.1:imaps"`
and
`invalid port in upstream "127.0.0.1:submissions"`.
2024-07-17 17:13:05 +00:00
link2xt
e8bf051cd0 refactor: use f-string in logging where it is easy
% is only interpreted if there are two or more arguments:
<https://docs.python.org/3/library/logging.html#logging.Logger.debug>
So it is safe to pass a single argument with an already-formatted
string.
2024-07-16 09:13:56 +00:00
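The linked behavior is easy to demonstrate: `LogRecord.getMessage()` applies `%`-formatting only when arguments are present, so a single pre-formatted argument is safe even if it contains a literal `%`. A small self-contained check:

```python
import logging

log = logging.getLogger("demo")
log.setLevel(logging.INFO)
log.propagate = False

records = []
handler = logging.Handler()
handler.emit = records.append   # capture records instead of printing
log.addHandler(handler)

user = "alice@example.org"
log.info("created account %s", user)            # lazy %-style, args present
log.info(f"created account {user}, 100% done")  # pre-formatted, no args

first, second = (r.getMessage() for r in records)
```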
holger krekel
d3c29b2f6e rename chatmail_domain to mail_domain like is used everywhere else 2024-07-16 10:34:08 +02:00
holger krekel
ef7f4965d4 add changelog entry 2024-07-16 10:34:08 +02:00
holger krekel
c593906c26 fix dns zone file comment syntax 2024-07-16 10:34:08 +02:00
holger krekel
27eea671dc fix pyinfra run to account for new pyinfra release 2024-07-16 10:34:08 +02:00
holger krekel
79a9d2345b more tests and refinements 2024-07-16 10:34:08 +02:00
holger krekel
c3caddcec9 separate between required and recommended entries 2024-07-16 10:34:08 +02:00
holger krekel
6d90182d2e add DNS tests, make remote ssh-exec errors show locally, cleanup ssh-bootstrap 2024-07-16 10:34:08 +02:00
holger krekel
ea503a6075 restructure DNS checks 2024-07-16 10:34:08 +02:00
holger krekel
ffe313528e simplify remote zone-file checking and insist for "dns" subcommand that all records are present 2024-07-16 10:34:08 +02:00
holger krekel
9b5b4c3787 - better debugging for DNS queries
- don't try to guess IP addresses but insist on A and AAAA records
- try to allow ipv4 or ipv6 only zones
- move chatmail.zone generation to jinja so we can have conditionals
2024-07-16 10:34:08 +02:00
holger krekel
c5bf3188a4 report back on ip determination -- deal with failure to obtain ip address 2024-07-16 10:34:08 +02:00
holger krekel
c4f46dc499 fix maildata handling after prematurely merging #369 2024-07-13 19:20:06 +02:00
Daniel Kahn Gillmor
c1fd573de2 Add tests for alternate mail subjects 2024-07-13 18:33:42 +02:00
Daniel Kahn Gillmor
c6b083472f Accept encrypted messages that use hcp_minimal
in draft-ietf-lamps-header-protection-22, hcp_minimal recommends
"[...]" as the obscured Subject header.  In the pending draft
-23 (hopefully released this week, going into a working group last
call), the same HCP will be renamed to hcp_baseline, but it still
recommends the use of "[...]" for the obscured Subject header.
2024-07-13 18:33:42 +02:00
holger krekel
254fe95394 postfix was hitting the "100 clients" smtp-submission connection limit (DC apps) and switched to stress mode, which brings more randomness/delay to smtp connections. We now allow 5K because it should be fine for the machine. 2024-07-13 17:19:15 +02:00
holger krekel
ac61ac082e Revert "postfix: fix timeout to 300s on submission ports"
This reverts commit 39584c7b7d.
2024-07-13 16:13:54 +02:00
link2xt
02df395dab filtermail: do not inject addresses into format string 2024-07-13 11:46:49 +02:00
link2xt
39584c7b7d postfix: fix timeout to 300s on submission ports
Otherwise smtpd reduces it to 10s on "overload".
2024-07-13 11:46:20 +02:00
link2xt
4ebc4f3069 postfix: do not lookup client hostnames 2024-07-13 11:45:54 +02:00
missytake
1eca8aa143 CI: don't let commits in other PRs interrupt CI runs (#361) 2024-07-12 12:05:21 +02:00
missytake
9c09d50e8f acmetool: reload nginx after requesting new cert 2024-07-12 11:07:35 +02:00
link2xt
d73e896e66 Add changelog entry for HTTPS/IMAP/SMTP multiplexing 2024-07-11 10:31:45 +00:00
link2xt
283045dc4a Multiplex HTTPS, IMAP and SMTP on port 443
Services are distinguished based on ALPN.
For example,
    openssl s_client -connect example.org:443 -alpn smtp
gives SMTP connection and
    openssl s_client -connect example.org:443 -alpn imap
gives IMAP connection.
2024-07-11 10:30:46 +00:00
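The same check can be scripted from Python's `ssl` module instead of `openssl s_client`; a sketch (the function name is illustrative and the host is a placeholder parameter):

```python
import socket
import ssl

def connect_service(host: str, proto: str) -> ssl.SSLSocket:
    """Connect to port 443 and request a service via ALPN ('smtp',
    'imap', or 'http/1.1'); the relay routes the TLS session to the
    matching backend based on the negotiated protocol."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols([proto])
    sock = socket.create_connection((host, 443), timeout=10)
    tls = ctx.wrap_socket(sock, server_hostname=host)
    # what the server actually agreed to (None if it ignored ALPN)
    assert tls.selected_alpn_protocol() == proto
    return tls
```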
holger krekel
180cfb3951 get rid of xfailing test 2024-07-11 12:08:33 +02:00
holger krekel
610637da80 don't report on xfail, it's useless 2024-07-11 02:16:08 +02:00
holger krekel
73e6f5e6da apply last review suggestions 2024-07-10 19:20:51 +02:00
holger krekel
b7e6926880 changing newline-naming as suggested 2024-07-10 19:20:51 +02:00
holger krekel
a7ef6ee35b don't use kwargs for overrides parameter 2024-07-10 19:20:51 +02:00
holger krekel
920e062293 let config.get_user_maildir return a Path 2024-07-10 19:20:51 +02:00
holger krekel
794a0608a1 Path-ify config.mailboxes_dir 2024-07-10 19:20:51 +02:00
holger krekel
fc09653de3 remove all occurrences of hardcoded /home/vmail for database and mailbox dirs 2024-07-10 19:20:51 +02:00
holger krekel
c8661fd135 introduce "mailboxes_dir" config ini option to avoid hardcoding /home/vmail/mail/....
in source code and to improve testability.
2024-07-10 19:20:51 +02:00
holger krekel
4b0600a453 be a bit more lenient on keeping old users 2024-07-10 00:02:34 +02:00
holger krekel
f1c10cac2b chunked deletion 2024-07-10 00:02:34 +02:00
holger krekel
af83ca0235 ensuring int-ness of last_login 2024-07-09 19:12:55 +02:00
holger krekel
8f6870ebb7 fix and streamline deletion test 2024-07-09 19:12:55 +02:00
holger krekel
0e8bdbd3e3 streamline address deletion test 2024-07-09 19:12:55 +02:00
holger krekel
0d593c22d1 apply code review and also catch "." as username 2024-07-09 19:12:55 +02:00
holger krekel
a1f0a3e23b Apply suggestions from code review
Co-authored-by: link2xt <link2xt@testrun.org>
2024-07-09 19:12:55 +02:00
holger krekel
9b15d8de24 more precise test, streamline wording (accounts -> address) 2024-07-09 19:12:55 +02:00
holger krekel
aaa51cf234 add changelog PR link 2024-07-09 19:12:55 +02:00
holger krekel
66c7115cfc run removal of inactive users daily 2024-07-09 19:12:55 +02:00
holger krekel
823386d824 delete inactive users works 2024-07-09 19:12:55 +02:00
holger krekel
433cb71211 basic remove-users functionality and tests 2024-07-09 19:12:55 +02:00
link2xt
62c60d3070 doveauth: log when a new account is created 2024-07-09 00:24:06 +02:00
holger krekel
698d328620 don't do PTR reverse checking 2024-07-08 21:48:27 +02:00
link2xt
4292355310 Add nonci_accounts metric
Calculating this with PromQL is not easy
due to interpolation.

Also add HELP and TYPE metadata for each metric.
2024-07-08 18:33:18 +00:00
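The HELP and TYPE metadata added here follow the Prometheus exposition format. A minimal sketch of rendering one metric that way — only the `nonci_accounts` name comes from the commit; the help text and gauge type are assumptions:

```python
def render_metric(name, value, help_text, metric_type="gauge"):
    """Render one metric in Prometheus exposition format, including
    the # HELP and # TYPE metadata lines that precede the sample."""
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} {metric_type}\n"
        f"{name} {value}\n"
    )
```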
holger krekel
85bb301255 feat: faster and simpler DNS checks, better ip-address determination (#346)
* drastically reduce round-trips for DNS checks, and do them during the 'run' and 'dns' subcommands
* provide progress dots for DNS checks and "--verbose" for seeing what is executed remotely
* introduce an ssh-mediated remote Python function execution mechanism
2024-07-08 20:10:52 +02:00
link2xt
0d61c13c58 DKIM-sign Content-Type and oversign all signed headers
Oversigning (including a header name in the DKIM-Signature
more times than it appears in the mail) prevents
adding more headers with the same name
without invalidating the DKIM signature.

We don't want middleboxes to insert a second From header,
add a Cc field to mails that don't have one, etc.
2024-07-08 14:27:11 +00:00
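Oversigning means listing each signed header in the DKIM `h=` tag one more time than it occurs in the message, so any later addition of that header breaks the signature. A small sketch of building such an `h=` list — a hypothetical helper, not the code this commit touches:

```python
def oversigned_headers(message_headers, signed_fields):
    """Build a DKIM h= tag value with oversigning.

    message_headers: list of (name, value) pairs from the message.
    signed_fields: header names to sign; each is listed one more time
    than it occurs, so appending another instance of that header later
    would invalidate the signature.
    """
    sign_list = []
    for field in signed_fields:
        count = sum(
            1 for name, _value in message_headers
            if name.lower() == field.lower()
        )
        sign_list.extend([field] * (count + 1))
    return ":".join(sign_list)
```

Note how a header that does not occur at all (like Cc below) still appears once in the list, preventing a middlebox from adding it.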
holger krekel
15f79e0826 remove fix-file-owner which takes forever on servers with many mail directories
(it's unclear why this is still needed and should be fixed differently in any case)
2024-07-06 10:31:41 +02:00
holger krekel
3d96f0fdfa Support iterating over all users with doveadm commands (#344) 2024-07-06 01:19:57 +00:00
link2xt
733b9604ba dovecot: enable gzip compression on disk 2024-07-05 20:13:03 +00:00
link2xt
969fdd7995 Remove sieve to enable hardlink deduplication in LMTP
LMTP does not deduplicate messages
if sieve plugin is used.

We don't check for the Auto-Submitted header anymore
as the iOS application has a notification service
and should not display "You have a new message".
2024-07-05 19:22:26 +00:00
link2xt
b1d11d7747 Revert 57c29c14a4
Apparently this causes outlook.com messages to be rejected
even though they don't use the `l=` tag.
2024-07-03 20:36:31 +00:00
link2xt
e948bdaea8 filtermail: do not allow ASCII armor without actual payload
The last line is removed as the "optional checksum",
so it can contain anything.
Make sure that there is at least some actual payload
besides this line.
2024-07-03 19:36:07 +00:00
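The check described here can be sketched as follows: given the lines between the armor markers, discard the last line (the optional checksum) and require some non-empty payload to remain. This is a simplified illustration, not filtermail's actual implementation (it ignores armor headers, for instance):

```python
def armor_has_payload(body_lines):
    """Return True if an ASCII-armored block has real payload.

    body_lines: the lines between the BEGIN/END PGP MESSAGE markers.
    The last line is discarded as the "optional checksum" (it may
    contain anything); at least one non-empty payload line must remain.
    """
    payload = [line for line in body_lines[:-1] if line.strip()]
    return len(payload) > 0
```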
link2xt
17389b8667 Increase number of logged in IMAP sessions to 50000 2024-07-01 17:20:23 +00:00
link2xt
635b5de304 Replace bash with /bin/sh 2024-07-01 11:47:38 +02:00
holger krekel
67be981176 make a more complete test 2024-06-27 15:36:39 +02:00
missytake
0b8402c187 doveauth: ensure username length 2024-06-27 15:36:39 +02:00
missytake
7c98c1f8c9 test: ensure minimum username length 2024-06-27 15:36:39 +02:00
B. Petersen
0483603d4a fix headline ordering numbers, typo
before, the order was 2 - 3.1 - 3.2 - 3
I think the gist was to have subheadlines under "2.";
this is fixed by this PR.

moreover, the PR contains a small typo fix.
2024-06-24 14:26:55 +02:00
missytake
6b59b8be44 CI: accept ns.testrun.org host key 2024-06-19 14:34:17 +02:00
missytake
07ffc003e4 CI: fix check whether acme certs exist 2024-06-18 14:49:37 +02:00
missytake
4cb62df33f CI: change to staging2.testrun.org 2024-06-18 14:49:37 +02:00
missytake
ef58f011fb CI: disable CAA record for now 2024-06-18 14:49:37 +02:00
Christian Hagenest
f7ef236ac8 Revert "CI: disable requesting new certs for staging.testrun.org"
This reverts commit 127d9d6460.
2024-06-18 14:49:37 +02:00
Christian Hagenest
dbe906a331 bump actions/checkout to v4 in test-and-deploy.yml 2024-06-18 14:49:37 +02:00
Christian Hagenest
3899f41c61 switch to checkout@v4 #301 2024-06-18 14:49:37 +02:00
link2xt
57c29c14a4 Reject DKIM signatures that do not cover the whole message body 2024-06-18 02:48:54 +00:00
link2xt
2b5d903cc5 Allow SKESK packets in encrypted mails
They are not used by Delta Chat now,
but this will allow starting to use them in the future.
2024-06-13 19:48:59 +02:00
link2xt
c8d270a853 Check that OpenPGP has only PKESK and SEIPD packets (#323) 2024-06-12 17:21:37 +00:00
link2xt
72f4e9edbf filtermail: remove support for unencrypted MDNs
Delta Chat does not send them since 1.43.
1.44 has been released for a while already
and 1.46 is in the process of being released.
2024-06-11 16:18:39 +00:00
link2xt
1ce0a2b0ba Improve filtermail checks for encrypted messages
Ensure that the first part only contains "Version: 1"
and the second part only contains base64 payload
enclosed in "-----BEGIN PGP MESSAGE-----"
and "-----END PGP MESSAGE-----".
2024-06-11 16:18:39 +00:00
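A simplified sketch of this two-part check on a multipart/encrypted message; the regex and function are illustrative assumptions, not filtermail's actual code:

```python
import re

# Armored payload: base64 characters only, enclosed in the PGP MESSAGE
# markers. This simplified pattern allows no armor headers.
ARMOR_RE = re.compile(
    r"^-----BEGIN PGP MESSAGE-----\r?\n"
    r"[A-Za-z0-9+/=\r\n]+"
    r"-----END PGP MESSAGE-----\s*$"
)

def is_valid_encrypted_parts(version_part, payload_part):
    """Check the two MIME parts of a PGP/MIME encrypted message:
    the first must contain only "Version: 1", the second only base64
    payload enclosed in BEGIN/END PGP MESSAGE armor lines."""
    if version_part.strip() != "Version: 1":
        return False
    return ARMOR_RE.match(payload_part.strip()) is not None
```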
Christian Hagenest
044ebfb9a2 delete buggy dovecot submodule for dovebuild 2024-06-11 16:51:29 +02:00
missytake
a41b034aa2 update version to 1.3.0 2024-06-06 16:03:57 +02:00
missytake
e00f0b852d doc: add acl installation to changelog 2024-06-06 16:02:15 +02:00
missytake
501b12564c tests: mark expunged test as slow 2024-06-06 14:14:31 +02:00
holger krekel
229ad15a28 fix link 2024-06-04 16:58:25 +02:00
missytake
e4f35d8dae add changelog for #316 2024-06-04 14:30:39 +02:00
missytake
4271573e15 DNS: don't check DNS on cmdeploy init anymore 2024-06-04 14:30:39 +02:00
holger krekel
b651a9046b Apply suggestions from code review
Co-authored-by: missytake <missytake@systemli.org>
2024-05-30 19:03:09 +02:00
holger krekel
6b84eaf8af Update www/src/info.md
Co-authored-by: missytake <missytake@systemli.org>
2024-05-30 19:03:09 +02:00
holger krekel
1b076bcd22 more refinement 2024-05-30 19:03:09 +02:00
holger krekel
30437f6c46 refine 2024-05-30 19:03:09 +02:00
holger krekel
3171e40a26 reword further 2024-05-30 19:03:09 +02:00
holger krekel
61c915995b reworking the privacy policy entry point 2024-05-30 19:03:09 +02:00
Christian Hagenest
073bd86344 add changelog for PR 310 (cron) 2024-05-27 14:07:01 +02:00
Christian Hagenest
777a7addd2 Ensure cron is installed #282 (#310) 2024-05-27 14:04:40 +02:00
Christian Hagenest
4f28476c47 add a doc about dovecot building based on internal sysadmin docs (now with squash) (#309)
* add a doc about dovecot building based on internal sysadmin docs

* track discussion from chat

* WIP build-obs.sh

* add precise links for dovecot unstable

* WIP build-obs.sh

* WIP

* WIP IT BUILDS

* WIP: Build builds, OBS pushes, OBs doesn't build :( problem with .dsc

* it works

* move obs dir into script dir

* clean curl

* hack for file length problem

* wip hack

* wip hack

* wip try dpkg-source

* wip test without curl

* wip

* clean up

* remove unnecessary dependencies

* move readme wip

* edit README

* Update scripts/dovecot/build-obs.sh

Co-authored-by: missytake <missytake@systemli.org>

* Update scripts/dovecot/README.md

Co-authored-by: missytake <missytake@systemli.org>

* move SCRIPT_DIR

* fix up readme for dovecot script

* Add OBS

* clarify backports policy

---------

Co-authored-by: holger krekel <holger@merlinux.eu>
Co-authored-by: missytake <missytake@systemli.org>
2024-05-26 19:49:06 +02:00
Christian Hagenest
b05aec72c2 Revert "add a doc about dovecot building based on internal sysadmin docs" (#308)
* Revert "clarify backports policy"

This reverts commit 610675452e.

* Revert "Add OBS"

This reverts commit 83387f5d08.

* Revert "fix up readme for dovecot script"

This reverts commit 142206529c.

* Revert "move SCRIPT_DIR"

This reverts commit c0f200b1a9.

* Revert "Update scripts/dovecot/README.md"

This reverts commit 6d55f75bee.

* Revert "Update scripts/dovecot/build-obs.sh"

This reverts commit c68cbf1806.

* Revert "edit README"

This reverts commit 9677617c7f.

* Revert "move readme wip"

This reverts commit d8cf282953.

* Revert "remove unnecessary dependencies"

This reverts commit b959f57058.

* Revert "clean up"

This reverts commit 8768e6fd0b.

* Revert "wip"

This reverts commit acbf370383.

* Revert "wip test without curl"

This reverts commit 80dfdaee06.

* Revert "wip try dpkg-source"

This reverts commit 4d15ae9452.

* Revert "wip hack"

This reverts commit 9a68d42ee8.

* Revert "wip hack"

This reverts commit d732d099ac.

* Revert "hack for file length problem"

This reverts commit 582a2af799.

* Revert "clean curl"

This reverts commit fba3963d47.

* Revert "move obs dir into script dir"

This reverts commit e80d33e2e0.

* Revert "it works"

This reverts commit 6a3001bf22.

* Revert "WIP: Build builds, OBS pushes, OBs doesn't build :( problem with .dsc"

This reverts commit 368c41ba27.

* Revert "WIP IT BUILDS"

This reverts commit fa0d8432bc.

* Revert "WIP"

This reverts commit 2811e08563.

* Revert "WIP build-obs.sh"

This reverts commit 846a4066d8.

* Revert "add precise links for dovecot unstable"

This reverts commit 6e1477666e.

* Revert "WIP build-obs.sh"

This reverts commit 013def94f9.

* Revert "track discussion from chat"

This reverts commit 468bb04149.

* Revert "add a doc about dovecot building based on internal sysadmin docs"

This reverts commit 30a23dad17.
2024-05-26 19:46:43 +02:00
Christian Hagenest
610675452e clarify backports policy 2024-05-23 14:33:45 +02:00
Christian Hagenest
83387f5d08 Add OBS 2024-05-23 14:33:45 +02:00
Christian Hagenest
142206529c fix up readme for dovecot script 2024-05-23 14:33:45 +02:00
Christian Hagenest
c0f200b1a9 move SCRIPT_DIR 2024-05-23 14:33:45 +02:00
Christian Hagenest
6d55f75bee Update scripts/dovecot/README.md
Co-authored-by: missytake <missytake@systemli.org>
2024-05-23 14:33:45 +02:00
Christian Hagenest
c68cbf1806 Update scripts/dovecot/build-obs.sh
Co-authored-by: missytake <missytake@systemli.org>
2024-05-23 14:33:45 +02:00
Christian Hagenest
9677617c7f edit README 2024-05-23 14:33:45 +02:00
Christian Hagenest
d8cf282953 move readme wip 2024-05-23 14:33:45 +02:00
Christian Hagenest
b959f57058 remove unnecessary dependencies 2024-05-23 14:33:45 +02:00
Christian Hagenest
8768e6fd0b clean up 2024-05-23 14:33:45 +02:00
Christian Hagenest
acbf370383 wip 2024-05-23 14:33:45 +02:00
Christian Hagenest
80dfdaee06 wip test without curl 2024-05-23 14:33:45 +02:00
Christian Hagenest
4d15ae9452 wip try dpkg-source 2024-05-23 14:33:45 +02:00
Christian Hagenest
9a68d42ee8 wip hack 2024-05-23 14:33:45 +02:00
Christian Hagenest
d732d099ac wip hack 2024-05-23 14:33:45 +02:00
Christian Hagenest
582a2af799 hack for file length problem 2024-05-23 14:33:45 +02:00
Christian Hagenest
fba3963d47 clean curl 2024-05-23 14:33:45 +02:00
Christian Hagenest
e80d33e2e0 move obs dir into script dir 2024-05-23 14:33:45 +02:00
Christian Hagenest
6a3001bf22 it works 2024-05-23 14:33:45 +02:00
Christian Hagenest
368c41ba27 WIP: Build builds, OBS pushes, OBs doesn't build :( problem with .dsc 2024-05-23 14:33:45 +02:00
Christian Hagenest
fa0d8432bc WIP IT BUILDS 2024-05-23 14:33:45 +02:00
Christian Hagenest
2811e08563 WIP 2024-05-23 14:33:45 +02:00
Christian Hagenest
846a4066d8 WIP build-obs.sh 2024-05-23 14:33:45 +02:00
holger krekel
6e1477666e add precise links for dovecot unstable 2024-05-23 14:33:45 +02:00
Christian Hagenest
013def94f9 WIP build-obs.sh 2024-05-23 14:33:45 +02:00
holger krekel
468bb04149 track discussion from chat 2024-05-23 14:33:45 +02:00
holger krekel
30a23dad17 add a doc about dovecot building based on internal sysadmin docs 2024-05-23 14:33:45 +02:00
Christian Hagenest
17af249f90 fix link in changelog 2024-05-19 17:53:55 +02:00
Christian Hagenest
4e65291304 fix up 2024-05-19 17:09:35 +02:00
Christian Hagenest
505ad36b36 fix nginx.conf 2024-05-19 17:09:35 +02:00
Christian Hagenest
dcb614911a update changelog 2024-05-19 17:09:35 +02:00
Christian Hagenest
e06c3631b2 nginx logs => journald 2024-05-19 17:09:35 +02:00
Christian Hagenest
da236e6e1b only restart journald if conf was changed 2024-05-19 17:09:35 +02:00
Christian Hagenest
2796730a87 journald.conf storage=volatile 2024-05-19 17:09:35 +02:00
Christian Hagenest
f32e18c32a Recommend authentication via ssh key with ed25519 algorithm (#231) (#291)
* fix #231

* CI: disable CI for markdown files

* clarify need for ssh-add

* Update README.md

Co-authored-by: missytake <missytake@systemli.org>

---------

Co-authored-by: missytake <missytake@systemli.org>
2024-05-18 23:31:03 +02:00
Christian Hagenest
1a5fd331b6 add changelog 2024-05-18 23:06:03 +02:00
Christian Hagenest
772b86a4b5 update delete-mails-after value in test_config.py 2024-05-18 23:06:03 +02:00
Christian Hagenest
e0013b9bee change delete_mails_after default to 20 2024-05-18 23:06:03 +02:00
missytake
127d9d6460 CI: disable requesting new certs for staging.testrun.org 2024-05-18 22:02:51 +02:00
Christian Hagenest
cb7de8019b add acl to apt.packages (#293) 2024-05-17 21:36:36 +02:00
Christian Hagenest
2b5b06316d fix #272 (#290)
@missytake and I both tested the deployment manually, so I'll merge
2024-05-17 17:45:28 +02:00
link2xt
76b56d7b78 metadata: add support for /shared/vendor/deltachat/irohrelay 2024-05-07 15:52:54 +00:00
holger krekel
c1163228f6 add a test for imap capabilities offered from chatmail 2024-05-06 19:57:31 +02:00
holger krekel
8af825d7ea add chatmail entry 2024-05-06 19:57:31 +02:00
holger krekel
0a968aae93 add XCHATMAIL marker 2024-05-06 19:57:31 +02:00
link2xt
879cffc056 Configure more lints and switch from black to ruff format 2024-05-06 14:41:00 +00:00
link2xt
462e92cca0 Add changelog entry for 281 2024-05-05 21:21:06 +00:00
link2xt
e1b1a945b1 Authenticate echobot by passing /run/echobot/password to doveauth 2024-05-05 15:25:44 +00:00
link2xt
0493e27312 Move echobot into /var/lib/echobot 2024-05-05 15:25:44 +00:00
link2xt
e4f8c78efe Merge pull request #276 from deltachat/acmetool-tos
acmetool: accept new terms of service
2024-05-02 13:29:28 +00:00
missytake
e2cbf4e3e4 changelog for #276 2024-05-02 13:28:42 +00:00
missytake
f35d98bb40 acmetool: enable debugging 2024-05-01 10:45:21 +02:00
missytake
7ce1a5e841 ci: don't fail if /var/lib/acme isn't present 2024-05-01 00:41:11 +02:00
missytake
0a72c2fba7 acmetool: accept new terms of service
closes #275
2024-05-01 00:21:58 +02:00
link2xt
824f70f463 Document email authentication requirements 2024-04-19 21:12:54 +02:00
link2xt
39f5f64998 Reload Dovecot and Postfix when TLS certificate updates (#271) 2024-04-15 14:08:32 +00:00
Christian Hagenest
1752803199 changelog for #270 2024-04-11 19:41:43 +02:00
Christian Hagenest
e372599ce7 change location of changes per nami's recommendation 2024-04-11 19:15:28 +02:00
Christian Hagenest
ce9fb02a75 correct key for obs home deltachat 2024-04-11 19:15:28 +02:00
Christian Hagenest
4526f5e772 apt update after adding new repository 2024-04-11 19:15:28 +02:00
Christian Hagenest
616a42c8f3 add our obs repo to cmdeploy init 2024-04-11 19:15:28 +02:00
holger krekel
ecb5ef8a10 start new untagged section post 1.2.0 2024-04-04 18:30:11 +02:00
holger krekel
824c3dc1d7 prepare tagging 1.2.0 2024-04-04 18:28:35 +02:00
holger krekel
9b76d46558 refinements and fixes 2024-04-04 12:57:49 +02:00
holger krekel
cc4920ddc7 a bit of renaming 2024-04-04 12:57:49 +02:00
holger krekel
2af10175fa ignore and remove .tmp files in notification_dir 2024-04-04 12:57:49 +02:00
holger krekel
ae455fa9e1 avoid float with time, and be safe against crashes during file writing 2024-04-04 12:57:49 +02:00
holger krekel
60d7e516dd implemented suggestion for using an absolute deadline instead of retrying, but chose 5 hours for now: if our own notification server is down/buggy we have at least a bit of time to fix it 2024-04-04 12:57:49 +02:00
holger krekel
bf18905e02 address typo-level review comments 2024-04-04 12:57:49 +02:00
holger krekel
4d6f520f18 finally use persistent queue items with random file names, simplifying the flows 2024-04-04 12:57:49 +02:00
holger krekel
9da626dfc8 proper doc string for Notifier 2024-04-04 12:57:49 +02:00
holger krekel
1cca9aa441 fix failing CI (uncovering real bug) 2024-04-04 12:57:49 +02:00
holger krekel
3d054847a0 split metadata and notifier into separate files 2024-04-04 12:57:49 +02:00
holger krekel
a31d998e67 separate notification thread into own class, and test start_notification_threads 2024-04-04 12:57:49 +02:00
holger krekel
d313bea97f some more renaming 2024-04-04 12:57:49 +02:00
holger krekel
da04226594 fix 2024-04-04 12:57:49 +02:00
holger krekel
eb2de26638 fix changelog 2024-04-04 12:57:49 +02:00
holger krekel
f5652cdbc4 better naming 2024-04-04 12:57:49 +02:00
holger krekel
13172c92f3 some refinements and extending the tests 2024-04-04 12:57:49 +02:00
holger krekel
09df636183 extend testing 2024-04-04 12:57:49 +02:00
holger krekel
2b45ace3ba refine testing and code 2024-04-04 12:57:49 +02:00
holger krekel
9e05a7d1eb more precision 2024-04-04 12:57:49 +02:00
holger krekel
21e7c09c43 remove redundant test code for requests mocking 2024-04-04 12:57:49 +02:00
holger krekel
14d96e0a9b snap somewhat working again 2024-04-04 12:57:49 +02:00
holger krekel
459ffcabd6 better preserve notification order, using a queue again 2024-04-04 12:57:49 +02:00
missytake
75cc3fdab0 DNS: add changelog entry 2024-04-03 15:12:52 +02:00
missytake
2d26a40c2b DNS: lint 2024-04-03 15:12:52 +02:00
missytake
a78d4e6198 DNS: optimize dnsutils installation command 2024-04-03 15:12:52 +02:00
missytake
2a1e004962 DNS: ensure dig is installed 2024-04-03 15:12:52 +02:00
link2xt
5e55cc205d Run chatmail-metadata and doveauth as vmail 2024-03-30 23:08:42 +01:00
missytake
476c732373 CI: use [] consistently 2024-03-30 21:42:19 +01:00
missytake
71c50b7936 CI: fix local paths (this time\!) 2024-03-30 21:42:19 +01:00
missytake
79cb390f16 CI: fix local paths 2024-03-30 21:42:19 +01:00
missytake
c1452c9c6f CI: fix paths on ns.testrun.org 2024-03-30 21:42:19 +01:00
missytake
6e903d7498 CI: restore ACME & DKIM state from ns.testrun.org 2024-03-30 21:42:19 +01:00
link2xt
221f4a2b0c Apply systemd restrictions to echobot
These options are suggested by
`systemd-analyze security echobot.service`
2024-03-30 14:17:48 +00:00
link2xt
080ae058d8 Remove non-existent file pattern from MANIFEST.in 2024-03-30 09:14:01 +00:00
missytake
edb84c0b3b CI: chown /var/lib/acme to root after restoring state 2024-03-30 01:49:03 +01:00
missytake
04ef477d51 CI: fix rsync statements 2024-03-30 01:49:03 +01:00
holger krekel
5696788d3a add changelog entry 2024-03-29 08:54:11 +01:00
link2xt
1c2bf919ed Start Dovecot before Postfix 2024-03-29 04:24:54 +00:00
link2xt
d15c22c1e8 Configure users and groups before installing any packages
Otherwise packages may add the user
without the correct configuration (such as groups),
and the step adding the user will be skipped.
2024-03-29 04:24:54 +00:00
missytake
9c6e90ae27 make sure fmt and offline checks are only run after DKIM & ACME is restored 2024-03-29 04:24:54 +00:00
missytake
481791c277 re-enable running the CI in pull requests, but not concurrently 2024-03-29 04:24:54 +00:00
holger krekel
a25c7981f9 start unreleased changelog 2024-03-28 18:02:05 +01:00
holger krekel
53519f2865 prepare 1.1.0 tag 2024-03-28 17:59:42 +01:00
link2xt
3a50d82657 Move systemd unit templates to cmdeploy
They are part of deployment rather than service itself.
Different deployments may have different users,
filesystem layout etc.
2024-03-28 16:38:30 +01:00
holger krekel
c640087498 fix error string 2024-03-28 16:11:00 +01:00
holger krekel
2089f3ab58 persist pending notifications to directory so that they survive a restart 2024-03-28 16:11:00 +01:00
holger krekel
cbaa6924c1 use json instead of python's marshal 2024-03-28 16:11:00 +01:00
holger krekel
6ab3e9657d test and fix for edge case 2024-03-28 16:11:00 +01:00
holger krekel
16f237dc60 add changelog entry 2024-03-28 16:11:00 +01:00
holger krekel
554c33423f various naming refinements 2024-03-28 16:11:00 +01:00
holger krekel
5d5e2b199c remove timeout support, it's not needed 2024-03-28 16:11:00 +01:00
holger krekel
989ce70f97 refine logging 2024-03-28 16:11:00 +01:00
holger krekel
f5dc4cb71e more resilience 2024-03-28 16:11:00 +01:00
holger krekel
76512dfa2d move persistentdict into own file, rename 2024-03-28 16:11:00 +01:00
holger krekel
850112502f extend imap online test to cover multi-device 2024-03-28 16:11:00 +01:00
holger krekel
888fa88aa3 back to using marshal, and a filelock 2024-03-28 16:11:00 +01:00
holger krekel
15e7458666 add a persistent dict impl 2024-03-28 16:11:00 +01:00
holger krekel
0a93c76e66 add multi-token support 2024-03-28 16:11:00 +01:00
holger krekel
312f86223c fix target dir 2024-03-28 16:11:00 +01:00
holger krekel
27a60418ad use "devicetoken" consistently and take it from a var 2024-03-28 16:11:00 +01:00
holger krekel
46d31a91da properly startup metadata service and add online test for metadata 2024-03-28 16:11:00 +01:00
holger krekel
a8765d8847 store metadata in a per-mbox dir 2024-03-28 16:11:00 +01:00
holger krekel
8ee6ca1b80 store tokens on a per-maildir basis 2024-03-28 16:11:00 +01:00
holger krekel
1a2b73a862 store tokens in guid-directories 2024-03-28 16:11:00 +01:00
link2xt
c44f4efced Store raw tokens instead of dictionaries in metadata 2024-03-28 16:11:00 +01:00
holger krekel
9fdf4fd2af add to changelog 2024-03-26 23:37:48 +01:00
holger krekel
33353ccaf6 don't warn on hello 2024-03-26 23:37:01 +01:00
holger krekel
5fe3a269be add changelog entries 2024-03-25 17:51:15 +01:00
holger krekel
0b4770018d add a first changelog for the last week of changes 2024-03-25 17:51:15 +01:00
link2xt
75fcbd03ce echobot: ignore info messages 2024-03-25 14:38:41 +00:00
link2xt
377121bdee Fix echobot logging
Do not put log messages into format string
and enable INFO level when bot is started
via main() as it happens with systemd.
2024-03-25 14:38:41 +00:00
missytake
e5e58f4e38 tests: fix quota test after log line changed 2024-03-25 13:55:53 +01:00
missytake
04517f284c acmetool: reload postfix+dovecot after cert renew.
fix #234
2024-03-25 11:36:29 +01:00
holger krekel
e32fb37b5d fix some test and formatting/ruff issues 2024-03-21 16:19:54 +01:00
holger krekel
8d9019b1c5 fix runtime dovecot/sieve-compile error on every incoming message 2024-03-20 19:10:54 +01:00
holger krekel
63d3e05674 remove superfluous check in tests 2024-03-20 19:10:44 +01:00
holger krekel
e466a03055 fixes 2024-03-20 19:10:44 +01:00
holger krekel
1819a276cb implement persistence via marshal 2024-03-20 19:10:44 +01:00
holger krekel
9ec6430b71 make notifier take a directory 2024-03-20 19:10:44 +01:00
missytake
2097233fd6 expunge: reset maildirsize after expunging old mails 2024-03-18 07:03:06 +01:00
link2xt
4bca7891a2 Switch SPF from fail to softfail (~all instead of -all)
This is recommended so that an SPF failure
does not reject the message early when messages
are remailed, which breaks SPF but not DKIM.
2024-03-09 20:02:29 +00:00
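In SPF terms this swaps the `-` (fail) qualifier on the catch-all mechanism for `~` (softfail). An illustrative zone fragment — the domain and mechanisms are placeholders, not the actual record:

```
; hard fail (before): non-matching senders are rejected outright,
; which hits remailed messages whose DKIM signatures are still valid
example.org. IN TXT "v=spf1 a -all"

; softfail (after): non-matching senders are marked suspicious
; instead of being rejected early
example.org. IN TXT "v=spf1 a ~all"
```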
link2xt
2e23e743fd dovecot: increase default_client_limit 2024-03-09 14:01:00 +01:00
link2xt
edc593586b Implement "iterate" command in metadata server
Otherwise Dovecot times out when trying to iterate over the metadata
of a folder. Apparently this happens when attempting to delete
a folder from the server over IMAP.
2024-03-08 05:39:59 +01:00
holger krekel
1e229ad2de Add tests to metadata/token handling and post notifications in background thread (#224) 2024-03-08 01:56:33 +00:00
missytake
8baee557ee make sure rsync is installed, later commands depend on it 2024-03-07 19:14:48 +01:00
link2xt
42e50b089f Push notification extension
This change adds XDELTAPUSH capability.

Delta Chat clients detecting this capability
can set /private/devicetoken IMAP metadata
on the inbox to subscribe for Apple (APNS)
notifications.

Notifications are implemented in a new
`chatmail-metadata` service
which handles requests to set /private/devicetoken
IMAP metadata from Delta Chat clients
and /private/messagenew requests from
push_notification_lua script.

To avoid sending notifications for
MDNs, webxdc updates and Delta Chat sync messages,
messages with Auto-Submitted header are ignored
by setting $Auto keyword (flag) on them in Sieve script
and skipping such messages in push_notification_lua script.
Outgoing messages are also ignored.
2024-03-06 19:00:04 +00:00
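The suppression rule described above boils down to: never notify for outgoing messages or for messages carrying an Auto-Submitted header. A minimal Python sketch of that decision — a hypothetical helper, since the real logic lives in the Sieve script and push_notification_lua script:

```python
from email.message import EmailMessage

def should_notify(msg, outgoing=False):
    """Decide whether a freshly delivered message should trigger a
    push notification: skip outgoing messages and anything carrying
    an Auto-Submitted header (MDNs, webxdc updates, sync messages)."""
    if outgoing:
        return False
    return msg.get("Auto-Submitted") is None
```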
missytake
e6a3fab6aa config: only block words if they are in privacy* config keys 2024-03-05 00:38:23 +01:00
holger krekel
ccd6e3e99c fix bailout if there is no TXT entry 2024-03-04 20:04:11 +01:00
missytake
21778fa4f3 tests: add test that we don't leak email addresses via VRFY 2024-03-03 22:49:03 +01:00
link2xt
14342383cf Generate our own single-line DKIM entry 2024-02-17 09:34:25 +00:00
missytake
926de76010 tests: make maildata work with python3.9 2024-02-17 09:27:02 +00:00
link2xt
ee25d35db1 Fix Python 3.9 support
I installed pyenv and then installed Python 3.9:
$ pyenv install 3.9
$ eval "$(pyenv init -)"
$ pyenv shell 3.9

In a clean repository I ran
$ scripts/cmdeploy init
$ scripts/cmdeploy run
$ scripts/cmdeploy dns
$ scripts/cmdeploy fmt

With the changes made all these commands work.

scripts/cmdeploy test fails some tests
using maildata fixture at
  importlib.resources.files(__package__).joinpath("mail-data")
line but this is not critical.
2024-02-17 09:27:02 +00:00
link2xt
ee2115584b Run scripts/cmdeploy fmt 2024-02-15 14:07:10 +00:00
missytake
1c9c088657 tests: add test that currently no outdated mails are stored on the server 2024-02-14 12:19:12 +01:00
missytake
b5afac2f1a expunge: run cronjob with vmail instead of dovecot. fix #210 2024-02-14 12:19:12 +01:00
link2xt
c8d9f20a48 fix: avoid "Argument list too long" in expunge.cron
Make `find` look for accounts.
2024-02-13 07:37:23 +00:00
missytake
6a30db7ce0 tests: test that echobot replies to msg. closes #199 2024-01-31 16:45:26 +01:00
link2xt
9e9ab80422 Do not subscribe to TLS reports 2024-01-31 14:35:54 +01:00
link2xt
5b9debfbdf Test dict protocol handler as a separate function 2024-01-30 23:49:17 +00:00
link2xt
788309b85a Merge Postfix TLS hardening
https://github.com/deltachat/chatmail/pull/97
2024-01-30 18:45:34 +00:00
link2xt
5bbb3d9b21 Rewrite and document smtpd_tls_exclude_ciphers 2024-01-27 02:10:02 +00:00
link2xt
6bc2186912 postfix: set tls_preempt_cipherlist 2024-01-26 19:45:53 +00:00
link2xt
8d5f91bf98 postfix: use new syntax for TLS version 2024-01-26 19:42:18 +00:00
missytake
9ddf60d0fc postfix: enforce TLS 1.2, disallow some insecure TLS ciphers 2024-01-26 19:41:48 +00:00
link2xt
05bdf65996 Add ADSP DNS record
ADSP RFC 5617 is declared historic because of no deployment:
<https://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-historic/>

However, it is declared as supported by <https://github.com/fastmail/authentication_milter>.

OpenDKIM has a release note from 2014-12-27 saying "Discontinue support for ADSP"
and does not support ADSP anymore.

Anyway, it does not hurt to publish a TXT record
indicating the strictest possible ADSP policy
that we apply to all incoming mail ourselves.
Unlike DMARC which allows either SPF or DKIM to pass,
ADSP requires that DKIM passes.
2024-01-26 15:04:09 +00:00
link2xt
6d6217812d Add missing login map 2024-01-25 23:17:57 +00:00
link2xt
ea36e73b8e postfix: require that login matches envelope FROM
Testing that envelope FROM matches From: header
already happens in filtermail
and tested with `test_reject_forged_from`.

The most important part here is
`reject_sender_login_mismatch` check
documented in
<https://www.postfix.org/postconf.5.html#reject_sender_login_mismatch>.
2024-01-25 23:17:57 +00:00
missytake
da268b57d4 tests: fix missing DKIM error message 2024-01-24 13:29:24 +00:00
link2xt
5588e13e54 Create opendkim configs before installing 2024-01-24 13:29:24 +00:00
link2xt
7c7f1cad7f Replace rspamd with OpenDKIM
OpenDKIM configuration
has two Lua scripts defining strict DKIM policy.

screen.lua filters out signatures that do not correspond
to the From: domain so they are not even checked.
final.lua rejects mail if it is not outgoing
and has no valid DKIM signatures.

OpenDKIM is configured as a milter on port 25 smtpd
to check DKIM signatures
and on mail reinjecting smtpd
to sign outgoing messages with DKIM signatures.
2024-01-24 13:29:24 +00:00
link2xt
a6b333672d Revert "Pin deltachat-rpc-server version"
This reverts commit 3940b9256d.

1.133.2 release has OpenSSL 3.2 downgraded to 3.1 and passes the tests.
2024-01-24 03:53:23 +00:00
link2xt
29857143c9 Dovecot: setup METADATA
There is no dictionary to set additional attributes,
but admin email can already be retrieved:

? GETMETADATA "" (/shared/admin)
* METADATA "" (/shared/admin {27}
mailto:root@c20.testrun.org)
? OK Getmetadata completed (0.001 + 0.000 secs).
2024-01-24 01:55:13 +00:00
missytake
d1460e7a1a tests: other bots could be in passthrough_recipients 2024-01-24 02:36:27 +01:00
missytake
87ab7e83d5 config: add xstore and groupsbot to default passthrough_recipients 2024-01-24 02:36:27 +01:00
link2xt
9f31357a9c Remove postscreen-related entries from Postfix master.cf
All these entries are related to `postscreen` service
which is currently not enabled.

For documentation see https://www.postfix.org/POSTSCREEN_README.html

If we later want to enable it, we can readd uncommented entries
and document it.
2024-01-24 02:08:30 +01:00
link2xt
c94ef0379a Update pip and setuptools in scripts/initenv.sh
This is to support Debian 11, which ships a setuptools
that does not support `-e` without setup.py
2024-01-24 01:31:48 +01:00
link2xt
bc66325d71 Cleanup Received headers after filtermail as well 2024-01-23 21:27:23 +00:00
link2xt
27f44ae911 Cleanup Received headers only on outgoing mail 2024-01-23 20:28:34 +00:00
link2xt
3940b9256d Pin deltachat-rpc-server version 2024-01-22 14:44:39 +00:00
link2xt
4886ff9b86 Do not use redirect on /cgi-bin/newemail.py
Delta Chat does not follow redirects,
so it breaks old QR codes printed on paper
and published on various web pages.
2024-01-21 13:20:00 +00:00
missytake
38a9fc3d6e CI: fix GH action description 2024-01-19 20:36:49 +01:00
missytake
e676545f7a CI: DEFAULT_DNS_ZONE doesn't need to be secret 2024-01-19 20:36:49 +01:00
missytake
ef95627138 CI: don't reset staging.testrun.org VPS on every CI run 2024-01-19 20:36:49 +01:00
missytake
bfaedb5cf1 CI: save /var/lib/rspamd/dkim from getting wiped 2024-01-19 20:36:49 +01:00
missytake
ea8d53aa9b CI: test DNS entries after online tests, less flaky 2024-01-19 20:36:49 +01:00
missytake
be7a000de6 CI: try cmdeploy dns 3 times as it is a bit flaky 2024-01-19 20:36:49 +01:00
missytake
ad3cf9ecaa CI: enable tests with 2 chatmail servers, with nine.testrun.org for now 2024-01-19 20:36:49 +01:00
missytake
691324a3e8 DNS: revert hardcoded DNS server for reverse DNS checks 2024-01-19 20:36:49 +01:00
missytake
23a9f893b4 CI: save /var/lib/acme from getting wiped 2024-01-19 20:36:49 +01:00
missytake
3ea826aecb CI: don't deploy to nine.testrun.org automatically 2024-01-19 20:36:49 +01:00
missytake
532d094a08 CI: check whether cmdeploy dns --zonefile works 2024-01-19 20:36:49 +01:00
missytake
0cea5840df CI: don't reset staging.testrun.org after each run 2024-01-19 20:36:49 +01:00
missytake
45686778ea unbound: ensure systemd service can be started after root keys were generated 2024-01-19 20:36:49 +01:00
missytake
45108d9c93 CI: deploy on staging.testrun.org and if it works, on nine.testrun.org 2024-01-19 20:36:49 +01:00
missytake
3665d957a7 tests: fix tests for new fastCGI route and DKIM responses 2024-01-17 11:23:04 +01:00
link2xt
86940b2ee1 Stop requesting DMARC reports
Nobody reads these XML reports
and we know our DKIM is valid
when `cmdeploy dns` is happy.
2024-01-17 01:37:54 +00:00
link2xt
24fb9eb65b Nicer /new URL for new accounts and redirect GET requests
If a user manually types https://nine.testrun.org/new
in the browser, at least Firefox and Brave suggest
opening the app after following the redirect.
2024-01-15 13:06:29 +00:00
link2xt
700256c273 Split DKIM checks into separate rules
Now errors distinguish between a missing DKIM signature,
a missing DNS entry, and an invalid DKIM signature.
2024-01-15 02:36:10 +00:00
link2xt
d575d62b18 rspamd: give the reason to MTA when incoming mail is rejected
This information is not secret, and it makes it easier for mail server admins
to debug why chatmail does not accept their emails.
If the sending server generates bounce messages, users will also see the reason
and can forward it to their server's support.
It also shows up in /var/log/rspamd/rspamd.log on the chatmail server.
2024-01-14 13:12:46 +00:00
link2xt
8cdf8ce376 Merge 'rspamd' branch, replacing OpenDKIM with rspamd
This adds DKIM and SPF checks and replaces OpenDKIM with rspamd for
DKIM signing.
2024-01-14 09:30:31 +00:00
link2xt
7c9abfbde3 Reject on DKIM PERMFAIL and SPF PERMFAIL as well 2024-01-14 09:19:04 +00:00
link2xt
95de87a325 Fixup rspamd disabled.conf deployment message 2024-01-14 08:45:39 +00:00
link2xt
5366df8dc6 Replace rspamd rule weights with a strict rule 2024-01-14 08:45:23 +00:00
link2xt
0a6db5161d Remove unused _configure_opendkim 2024-01-12 19:05:23 +00:00
link2xt
62e25e44fd Disable ratelimit module like other modules 2024-01-12 18:56:11 +00:00
link2xt
ce9fe920dc Do not return anything from remove_opendkim() 2024-01-12 18:47:57 +00:00
link2xt
c171866faf Actually disable phishing, rbl and hfilter 2024-01-12 18:46:07 +00:00
missytake
7758c94e31 rspamd: remove redis (not needed) 2024-01-12 15:49:06 +00:00
missytake
66debb9245 lint fixes, final touch 2024-01-12 15:49:06 +00:00
missytake
3542232393 rspamd: reject emails with invalid SPF, DKIM, DMARC 2024-01-12 15:49:06 +00:00
missytake
536c12d989 tests: use generic recipient for DKIM testing 2024-01-12 15:49:06 +00:00
missytake
265403e110 revert "Significantly lower ratelimit" 2024-01-12 15:49:01 +00:00
missytake
fd679af577 rspamd: generate DKIM keys with rspamadm 2024-01-12 15:47:36 +00:00
missytake
ecbf135549 rspamd: install rspamd + redis 2024-01-12 15:47:36 +00:00
missytake
7b90b936dd tests: add test for rejecting SPF & DMARC fails 2024-01-12 15:47:36 +00:00
missytake
17a919ee53 lint: fix 3 issues 2024-01-12 15:47:36 +00:00
missytake
1b15ec0eae rspamd: Significantly lower ratelimit; without read receipts this should be more than fine 2024-01-12 15:47:36 +00:00
missytake
bf863f05b6 rspamd: add redis-server for caching 2024-01-12 15:47:36 +00:00
missytake
a2316beab1 rspamd: disable RBL checks 2024-01-12 15:47:36 +00:00
missytake
28fc91f5f3 rspamd: add rate limiting 2024-01-12 15:47:36 +00:00
missytake
67062677b0 disable some unnecessary rspamd modules 2024-01-12 15:47:36 +00:00
missytake
faf8ffe678 do DKIM signing with rspamd instead of openDKIM 2024-01-12 15:47:36 +00:00
missytake
5821098699 DNS: added www subdomain to zonefile 2024-01-12 13:34:23 +00:00
link2xt
542d63888a nginx: redirect www. to non-www 2024-01-12 13:34:23 +00:00
link2xt
449f8a014c Fix indentation in nginx.conf.j2 2024-01-12 13:34:23 +00:00
link2xt
57764d0cf5 dns: require www. subdomain and request TLS certificate for it 2024-01-12 13:34:23 +00:00
link2xt
c39a79e26a dns: check mta-sts CNAME directly without resolving to IP 2024-01-12 13:34:23 +00:00
link2xt
b6622fc68e chore: run scripts/cmdeploy fmt 2024-01-12 12:18:28 +00:00
link2xt
75b41641f0 doveauth: fix home directory returned from lookup_passdb
It is currently unused, but it is better to have it correct
in case debugging options such as rawlogs are enabled.
2024-01-08 16:40:08 +00:00
link2xt
30a61972fb Update autoconfig XML URL with RFC draft
The old page does not exist anymore, and linking to the web archive is not ideal.
2024-01-08 16:33:04 +01:00
missytake
bcc54602ee postfix: cleanup submission headers 2024-01-05 12:13:31 +01:00
missytake
f9998d5721 tests: if sender's public IP address is in the Received header 2024-01-05 12:13:31 +01:00
nudeldudel
8605ceba5e Update master.cf.j2
Add submission-header-cleanup to reduce the meta-data
2024-01-05 12:13:31 +01:00
missytake
30bcf9ff77 www: change nine.testrun.org occurrence to mail_domain 2024-01-05 12:12:52 +01:00
177 changed files with 9423 additions and 2049 deletions

18
.dockerignore Normal file

@@ -0,0 +1,18 @@
data/
venv/
__pycache__
*.pyc
*.orig
*.ini
.pytest_cache
.env
# Slim build context — .git/ alone can be 100s of MB
.git
.github/
docs/
tests/
# Exclude markdown files but keep www/src/*.md (used by WebsiteDeployer)
*.md
!www/**/*.md

33
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,33 @@
---
name: Bug report
about: Report something that isn't working.
title: ''
assignees: ''
---
<!--
Please fill out as much of this form as you can (leaving out stuff that is not applicable is ok).
-->
- Server OS (Operating System) - preferably Debian 12:
- On which OS you run cmdeploy:
- chatmail/relay version: `git rev-parse HEAD`
## Expected behavior
*What did you try to achieve?*
## Actual behavior
*What happened instead?*
### Steps to reproduce the problem:
1.
2.
### Screenshots
### Logs

1
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1 @@
blank_issues_enabled: true


@@ -9,8 +9,13 @@ jobs:
name: isolated chatmaild tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
# Checkout pull request HEAD commit instead of merge commit
# Otherwise `test_deployed_state` will be unhappy.
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: download filtermail
run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
- name: run chatmaild tests
working-directory: chatmaild
run: pipx run tox
@@ -19,7 +24,7 @@ jobs:
name: deploy-chatmail tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: initenv
run: scripts/initenv.sh

76
.github/workflows/docker-build.yaml vendored Normal file

@@ -0,0 +1,76 @@
name: Docker Build
on:
pull_request:
paths:
- 'docker/**'
- 'docker-compose.yaml'
- '.dockerignore'
- 'chatmaild/**'
- 'cmdeploy/**'
- '.github/workflows/docker-build.yaml'
push:
branches:
- main
- j4n/docker
paths:
- 'docker/**'
- 'docker-compose.yaml'
- '.dockerignore'
- 'chatmaild/**'
- 'cmdeploy/**'
- '.github/workflows/docker-build.yaml'
tags:
- 'v*'
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build:
name: Build Docker image
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GHCR
if: github.event_name == 'push'
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels)
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
# Tagged releases: v1.2.3 → :1.2.3, :1.2, :latest
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
# Branch pushes: j4n/docker → :j4n-docker
type=ref,event=branch
# Always: :sha-<hash>
type=sha
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
file: docker/chatmail_relay.dockerfile
push: ${{ github.event_name == 'push' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
GIT_HASH=${{ github.sha }}

53
.github/workflows/docs-preview.yaml vendored Normal file

@@ -0,0 +1,53 @@
name: documentation preview
on:
pull_request:
paths:
- 'doc/**'
- 'scripts/build-docs.sh'
- '.github/workflows/docs-preview.yaml'
jobs:
scripts:
name: build
runs-on: ubuntu-latest
environment:
name: 'staging.chatmail.at/doc/relay/'
url: https://staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}
steps:
- uses: actions/checkout@v4
- name: initenv
run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo `pwd`/venv/bin >>$GITHUB_PATH
- name: build documentation
working-directory: doc
run: sphinx-build source build
- name: build documentation second time (for TOC)
working-directory: doc
run: sphinx-build source build
- name: Get Pullrequest ID
id: prepare
run: |
export PULLREQUEST_ID=$(echo "${{ github.ref }}" | cut -d "/" -f3)
echo "prid=$PULLREQUEST_ID" >> $GITHUB_OUTPUT
if [ $(expr length "${{ secrets.USERNAME }}") -gt "1" ]; then echo "uploadtoserver=true" >> $GITHUB_OUTPUT; fi
- run: |
echo "baseurl: /${{ steps.prepare.outputs.prid }}" >> _config.yml
- name: Upload preview
run: |
mkdir -p "$HOME/.ssh"
echo "${{ secrets.CHATMAIL_STAGING_SSHKEY }}" > "$HOME/.ssh/key"
chmod 600 "$HOME/.ssh/key"
rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/staging.chatmail.at/doc/relay/${{ steps.prepare.outputs.prid }}/"
- name: check links
working-directory: doc
run: sphinx-build --builder linkcheck source build

47
.github/workflows/docs.yaml vendored Normal file

@@ -0,0 +1,47 @@
name: build and upload documentation
on:
push:
branches:
- main
- 'missytake/docs-ci'
paths:
- 'doc/**'
- 'scripts/build-docs.sh'
- '.github/workflows/docs.yaml'
jobs:
scripts:
name: build
runs-on: ubuntu-latest
environment:
name: 'chatmail.at/doc/relay/'
url: https://chatmail.at/doc/relay/
steps:
- uses: actions/checkout@v4
- name: initenv
run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo `pwd`/venv/bin >>$GITHUB_PATH
- name: build documentation
working-directory: doc
run: sphinx-build source build
- name: build documentation second time (for TOC)
working-directory: doc
run: sphinx-build source build
- name: check links
working-directory: doc
run: sphinx-build --builder linkcheck source build
- name: upload documentation
run: |
mkdir -p "$HOME/.ssh"
echo "${{ secrets.CHATMAIL_STAGING_SSHKEY }}" > "$HOME/.ssh/key"
chmod 600 "$HOME/.ssh/key"
rsync -rILvh -e "ssh -i $HOME/.ssh/key -o StrictHostKeyChecking=no" $GITHUB_WORKSPACE/doc/build/ "${{ secrets.USERNAME }}@chatmail.at:/var/www/html/chatmail.at/doc/relay/"


@@ -0,0 +1,20 @@
;; Zone file for staging-ipv4.testrun.org
$ORIGIN staging-ipv4.testrun.org.
$TTL 300
@ IN SOA ns.testrun.org. root.nine.testrun.org. (
2023010101 ; Serial
7200 ; Refresh
3600 ; Retry
1209600 ; Expire
3600 ; Negative response caching TTL
)
;; Nameservers.
@ IN NS ns.testrun.org.
;; DNS records.
@ IN A 37.27.95.249
mta-sts.staging-ipv4.testrun.org. CNAME staging-ipv4.testrun.org.
www.staging-ipv4.testrun.org. CNAME staging-ipv4.testrun.org.


@@ -0,0 +1,21 @@
;; Zone file for staging2.testrun.org
$ORIGIN staging2.testrun.org.
$TTL 300
@ IN SOA ns.testrun.org. root.nine.testrun.org. (
2023010101 ; Serial
7200 ; Refresh
3600 ; Retry
1209600 ; Expire
3600 ; Negative response caching TTL
)
;; Nameservers.
@ IN NS ns.testrun.org.
;; DNS records.
@ IN A 37.27.24.139
mta-sts.staging2.testrun.org. CNAME staging2.testrun.org.
www.staging2.testrun.org. CNAME staging2.testrun.org.


@@ -0,0 +1,96 @@
name: deploy on staging-ipv4.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging-ipv4.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging-ipv4.testrun.org
url: https://staging-ipv4.testrun.org/
concurrency: staging-ipv4.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging-ipv4.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging-ipv4.testrun.org:/var/lib/acme acme-ipv4 || true
rsync -avz root@staging-ipv4.testrun.org:/etc/dkimkeys dkimkeys-ipv4 || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys-ipv4/dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys-ipv4 root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme-ipv4/acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme-ipv4 root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging-ipv4.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_IPV4_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging-ipv4.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme-ipv4/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys-ipv4/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging-ipv4.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging-ipv4.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown root:root -R /var/lib/acme || true
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- run: |
cmdeploy init staging-ipv4.testrun.org
sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' chatmail.ini
sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
- run: cmdeploy run --verbose --skip-dns-check
- name: set DNS entries
run: |
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
cmdeploy dns --zonefile staging-generated.zone
cat staging-generated.zone >> .github/workflows/staging-ipv4.testrun.org-default.zone
cat .github/workflows/staging-ipv4.testrun.org-default.zone
scp .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
- name: cmdeploy dns
run: cmdeploy dns -v

98
.github/workflows/test-and-deploy.yaml vendored Normal file

@@ -0,0 +1,98 @@
name: deploy on staging2.testrun.org, and run tests
on:
push:
branches:
- main
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
jobs:
deploy:
name: deploy on staging2.testrun.org, and run tests
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: staging2.testrun.org
url: https://staging2.testrun.org/
concurrency: staging2.testrun.org
steps:
- uses: actions/checkout@v4
- name: prepare SSH
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan staging2.testrun.org > ~/.ssh/known_hosts
# save previous acme & dkim state
rsync -avz root@staging2.testrun.org:/var/lib/acme . || true
rsync -avz root@staging2.testrun.org:/etc/dkimkeys . || true
# store previous acme & dkim state on ns.testrun.org, if it contains useful certs
if [ -f dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys root@ns.testrun.org:/tmp/ || true; fi
if [ "$(ls -A acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme root@ns.testrun.org:/tmp/ || true; fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild staging2.testrun.org to have a clean VPS
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_SERVER_ID }}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: upload TLS cert after rebuilding
run: |
echo " --- wait until staging2.testrun.org VPS is rebuilt --- "
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u ; do sleep 1 ; done
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u
# download acme & dkim state from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme acme-restore || true
rsync -avz root@ns.testrun.org:/tmp/dkimkeys dkimkeys-restore || true
# restore acme & dkim state to staging2.testrun.org
rsync -avz acme-restore/acme root@staging2.testrun.org:/var/lib/ || true
rsync -avz dkimkeys-restore/dkimkeys root@staging2.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org chown root:root -R /var/lib/acme || true
- name: add hpk42 key to staging server
run: ssh root@staging2.testrun.org 'curl -s https://github.com/hpk42.keys >> .ssh/authorized_keys'
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- run: |
cmdeploy init staging2.testrun.org
sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
- run: cmdeploy run --verbose --skip-dns-check
- name: set DNS entries
run: |
ssh -o StrictHostKeyChecking=accept-new root@staging2.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
cmdeploy dns --zonefile staging-generated.zone --verbose
cat staging-generated.zone >> .github/workflows/staging.testrun.org-default.zone
cat .github/workflows/staging.testrun.org-default.zone
scp .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: cmdeploy test
run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
- name: cmdeploy dns
run: cmdeploy dns -v

6
.gitignore vendored

@@ -164,3 +164,9 @@ cython_debug/
#.idea/
chatmail.zone
# docker
/data/
/custom/
docker-compose.override.yaml
.env

526
CHANGELOG.md Normal file

@@ -0,0 +1,526 @@
# Changelog for chatmail deployment
## 1.9.0 2025-12-18
### Documentation
- Add RELEASE.md and CONTRIBUTING.md
- README update, mention Chatmail Cookbook project
### Bug Fixes
- Expire messages also from IMAP subfolders
- Use absolute path instead of relative path in message expiration script
- Restart Postfix and Dovecot automatically on failure
- acmetool: Use a fixed name and `reconcile` instead of `want`
### Features
- Report DKIM error code in SMTP response
- Remove development notice from the web pages
### Miscellaneous Tasks
- Update the heading in the CHANGELOG.md
- Setup git-cliff
- Run tests against ci-chatmail.testrun.org instead of nine.testrun.org
- Cleanup remaining echobot code, remove echobot user from deployment and passthrough recipients
## 1.8.0 2025-12-12
- Add imap_compress option to chatmail.ini
([#760](https://github.com/chatmail/relay/pull/760))
- Remove echobot from relays
([#753](https://github.com/chatmail/relay/pull/753))
- Fix `cmdeploy webdev`
([#743](https://github.com/chatmail/relay/pull/743))
- Add robots.txt to exclude all web crawlers
([#732](https://github.com/chatmail/relay/pull/732))
- acmetool: accept new Let's Encrypt ToS: https://letsencrypt.org/documents/LE-SA-v1.6-August-18-2025.pdf
([#729](https://github.com/chatmail/relay/pull/729))
- Organized cmdeploy into install, configure, and activate stages
([#695](https://github.com/chatmail/relay/pull/695))
- docs: move readme.md docs to sphinx documentation rendered at https://chatmail.at/doc/relay
([#711](https://github.com/chatmail/relay/pull/711))
- acmetool: replace cronjob with a systemd timer
([#719](https://github.com/chatmail/relay/pull/719))
- remove xstore@testrun.org from default passthrough recipients
([#722](https://github.com/chatmail/relay/pull/722))
- don't deploy the website if there are merge conflicts in the www folder
([#714](https://github.com/chatmail/relay/pull/714))
- acmetool: use ECDSA keys instead of RSA
([#689](https://github.com/chatmail/relay/pull/689))
- Require TLS 1.2 for outgoing SMTP connections
([#685](https://github.com/chatmail/relay/pull/685), [#730](https://github.com/chatmail/relay/pull/730))
- require STARTTLS for incoming port 25 connections
([#684](https://github.com/chatmail/relay/pull/684), [#730](https://github.com/chatmail/relay/pull/730))
- filtermail: run CPU-intensive handle_DATA in a thread pool executor
([#676](https://github.com/chatmail/relay/pull/676))
- don't use the complicated logging module in filtermail to exclude a potential source of errors.
([#674](https://github.com/chatmail/relay/pull/674))
- Specify nginx.conf to only handle `mail_domain`, www, and mta-sts domains
([#636](https://github.com/chatmail/relay/pull/636))
- Setup TURN server
([#621](https://github.com/chatmail/relay/pull/621))
- cmdeploy: make --ssh-host work with localhost
([#659](https://github.com/chatmail/relay/pull/659))
- Update iroh-relay to 0.35.0
([#650](https://github.com/chatmail/relay/pull/650))
- filtermail: accept mails from Protonmail
([#616](https://github.com/chatmail/relay/pull/616))
- Ignore all RCPT TO: parameters
([#651](https://github.com/chatmail/relay/pull/651))
- Increase opendkim DNS Timeout from 5 to 60 seconds
([#672](https://github.com/chatmail/relay/pull/672))
- Add config parameter for Let's Encrypt ACME email
([#663](https://github.com/chatmail/relay/pull/663))
- Use max username length in newemail.py, not min
([#648](https://github.com/chatmail/relay/pull/648))
- Add startup for `fcgiwrap.service` because sometimes it did not start automatically.
([#657](https://github.com/chatmail/relay/pull/657))
- Add `cmdeploy init --force` command for recreating chatmail.ini
([#656](https://github.com/chatmail/relay/pull/656))
- Increase maxproc for reinjecting ports from 10 to 100
([#646](https://github.com/chatmail/relay/pull/646))
- Allow ports 143 and 993 to be used by `dovecot` process
([#639](https://github.com/chatmail/relay/pull/639))
- Add `--skip-dns-check` argument to `cmdeploy run` command, which disables DNS record checking before installation.
([#661](https://github.com/chatmail/relay/pull/661))
- Rework expiry of message files and mailboxes in Python
to only do a single iteration over sometimes millions of messages
instead of doing "find" commands that iterate 9 times over the messages.
Provide an "fsreport" CLI for more fine grained analysis of message files.
([#637](https://github.com/chatmail/relay/pull/637))
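The single-iteration idea described in the entry above can be sketched as follows; the function name, size threshold, and cutoff keys are illustrative, not the actual chatmaild implementation:

```python
import os
import time


def expire(maildir: str, cutoffs: dict[str, float]) -> list[str]:
    """One walk over all message files; each file is tested against
    every expiry rule, instead of one full "find" pass per rule."""
    now = time.time()
    doomed = []
    for root, _dirs, files in os.walk(maildir):
        for name in files:
            path = os.path.join(root, name)
            age = now - os.path.getmtime(path)
            size = os.path.getsize(path)
            # hypothetical rule: large files expire sooner than ordinary messages
            limit = cutoffs["large"] if size > 1_000_000 else cutoffs["default"]
            if age > limit:
                doomed.append(path)
    return doomed
```

The point of the design is that the filesystem metadata for each message is read once, with all expiry rules applied in the same pass.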
- Add installation via docker compose (MVP 1). The instructions, known issues and limitations are located in `/docs`
([#614](https://github.com/chatmail/relay/pull/614))
- Add configuration parameters
([#614](https://github.com/chatmail/relay/pull/614)):
- `change_kernel_settings` - Whether to change kernel parameters during installation (default: `True`)
- `fs_inotify_max_user_instances_and_watchers` - Value for kernel parameters `fs.inotify.max_user_instances` and `fs.inotify.max_user_watches` (default: `65535`)
## 1.7.0 2025-09-11
- Make www upload path configurable
([#618](https://github.com/chatmail/relay/pull/618))
- Check whether GCC is installed in initenv.sh
([#608](https://github.com/chatmail/relay/pull/608))
- Expire push notification tokens after 90 days
([#583](https://github.com/chatmail/relay/pull/583))
- Use official `mtail` binary instead of `mtail` package
([#581](https://github.com/chatmail/relay/pull/581))
- dovecot: install from download.delta.chat instead of openSUSE Build Service
([#590](https://github.com/chatmail/relay/pull/590))
- Reconfigure Dovecot imap-login service to high-performance mode
([#578](https://github.com/chatmail/relay/pull/578))
- Set timezone to improve dovecot performance
([#584](https://github.com/chatmail/relay/pull/584))
- Increase nginx connection limits
([#576](https://github.com/chatmail/relay/pull/576))
If `dns-utils` needs to be installed before `cmdeploy run`, run `apt update` first to make sure it works
([#560](https://github.com/chatmail/relay/pull/560))
- filtermail: respect config message size limit
([#572](https://github.com/chatmail/relay/pull/572))
- Don't deploy if one of the ports used for chatmail relay services is occupied by an unexpected process
([#568](https://github.com/chatmail/relay/pull/568))
- Add config value after how many days large files are deleted
([#555](https://github.com/chatmail/relay/pull/555))
- cmdeploy: push relay version to /etc/chatmail-version
([#573](https://github.com/chatmail/relay/pull/573))
- filtermail: allow partial body length in OpenPGP payloads
([#570](https://github.com/chatmail/relay/pull/570))
- chatmaild: allow echobot to receive unencrypted messages by default
([#556](https://github.com/chatmail/relay/pull/556))
## 1.6.0 2025-04-11
- Handle Port-25 connect errors more gracefully (common with VPNs)
([#552](https://github.com/chatmail/relay/pull/552))
- Avoid "acmetool not found" during initial run
([#550](https://github.com/chatmail/relay/pull/550))
- Fix timezone handling such that client/servers do not need to use
same timezone.
([#553](https://github.com/chatmail/relay/pull/553))
- Enforce end-to-end encryption for incoming messages.
New user address mailboxes now get an `enforceE2EEincoming` file
which prohibits incoming cleartext messages from other domains.
An outside MTA trying to submit a cleartext message will
get a "523 Encryption Needed" response, see RFC5248.
If the file does not exist (as is the case for all existing accounts)
incoming cleartext messages are accepted.
([#538](https://github.com/chatmail/server/pull/538))
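A minimal sketch of the marker-file mechanism described above; the helper names are hypothetical (the real check lives in the delivery path), but the marker filename and the SMTP response come from the entry:

```python
import os


def accepts_cleartext(mailbox_dir: str) -> bool:
    """Hypothetical helper: a mailbox rejects incoming cleartext exactly
    when the `enforceE2EEincoming` marker file exists in its directory."""
    return not os.path.exists(os.path.join(mailbox_dir, "enforceE2EEincoming"))


def smtp_response_for_cleartext(mailbox_dir: str) -> str:
    # What an outside MTA submitting a cleartext message would see
    return "250 OK" if accepts_cleartext(mailbox_dir) else "523 Encryption Needed"
```

Because the marker is a plain file in the mailbox, existing accounts (which lack it) keep accepting cleartext, while newly created mailboxes opt in automatically.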
- Enforce end-to-end encryption between local addresses
([#535](https://github.com/chatmail/server/pull/535))
- unbound: check that port 53 is not occupied by a different process
([#537](https://github.com/chatmail/server/pull/537))
- unbound: before unbound is there, use 9.9.9.9 for resolving
([#518](https://github.com/chatmail/relay/pull/518))
- Limit the bind for the HTTPS server on 8443 to 127.0.0.1
([#522](https://github.com/chatmail/server/pull/522))
([#532](https://github.com/chatmail/server/pull/532))
- Send SNI when connecting to outside servers
([#524](https://github.com/chatmail/server/pull/524))
- postfix master.cf: use 127.0.0.1 for consistency
([#544](https://github.com/chatmail/relay/pull/544))
- Pass through `original_content` instead of `content` in filtermail
([#509](https://github.com/chatmail/server/pull/509))
- Document TLS requirements in the readme
([#514](https://github.com/chatmail/server/pull/514))
- Remove cleanup service from submission ports
([#512](https://github.com/chatmail/server/pull/512))
- cmdeploy dovecot: delete big messages after 7 days
([#504](https://github.com/chatmail/server/pull/504))
- mtail: fix getting logs from STDIN
([#502](https://github.com/chatmail/server/pull/502))
- filtermail: don't require exactly 2 lines after openPGP payload
([#497](https://github.com/chatmail/server/pull/497))
- cmdeploy dns: offer alternative DKIM record format for some web interfaces
([#470](https://github.com/chatmail/server/pull/470))
- journald: remove old logs from disk
([#490](https://github.com/chatmail/server/pull/490))
- opendkim: restart once every day to mend RAM leaks
([#498](https://github.com/chatmail/server/pull/498))
- migration guide: let opendkim own the DKIM keys directory
([#468](https://github.com/chatmail/server/pull/468))
- improve secure-join message detection
([#473](https://github.com/chatmail/server/pull/473))
- use old crypt lib in python < 3.11
([#483](https://github.com/chatmail/server/pull/483))
- chatmaild: set umask to 0700 for doveauth + metadata
([#490](https://github.com/chatmail/server/pull/492))
- remove MTA-STS daemon
([#488](https://github.com/chatmail/server/pull/488))
- replace `Subject` with `[...]` for all outgoing mails.
([#481](https://github.com/chatmail/server/pull/481))
- opendkim: use su instead of sudo
([#491](https://github.com/chatmail/server/pull/491))
## 1.5.0 2024-12-20
- cmdeploy dns: always show recommended DNS records
([#463](https://github.com/chatmail/server/pull/463))
- add `--all` to `cmdeploy dns`
([#462](https://github.com/chatmail/server/pull/462))
- fix `_mta-sts` TXT DNS record
([#461](https://github.com/chatmail/server/pull/461))
- deploy `iroh-relay` and also update "realtime relay services" in privacy policy.
([#434](https://github.com/chatmail/server/pull/434))
([#451](https://github.com/chatmail/server/pull/451))
- add guide to migrate chatmail to a new server
([#429](https://github.com/chatmail/server/pull/429))
- disable anvil authentication penalty
([#414](https://github.com/chatmail/server/pull/444))
- increase `request_queue_size` for UNIX sockets to 1000.
([#437](https://github.com/chatmail/server/pull/437))
- add argument to `cmdeploy run` for specifying
a different SSH host than `mail_domain`
([#439](https://github.com/chatmail/server/pull/439))
query authoritative nameserver to bypass DNS cache
([#424](https://github.com/chatmail/server/pull/424))
- add mtail support (new optional `mtail_address` ini value)
This defines the address on which [`mtail`](https://google.github.io/mtail/)
exposes its metrics collected from the logs.
If you want to collect the metrics with Prometheus,
setup a private network (e.g. WireGuard interface)
and assign an IP address from this network to the host.
If you do not plan to collect metrics,
keep this setting unset.
([#388](https://github.com/chatmail/server/pull/388))
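For operators who do collect metrics, a hedged sketch of what this might look like in `chatmail.ini` (the `[params]` section name and the address are illustrative assumptions, not taken from the repository):

```ini
[params]
# Private (e.g. WireGuard) address on which mtail exposes its metrics;
# leave this unset if you do not plan to collect metrics.
mtail_address = 10.0.0.2
```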
- fix checking for required DNS records
([#412](https://github.com/chatmail/server/pull/412))
- add support for specifying whole domains for recipient passthrough list
([#408](https://github.com/chatmail/server/pull/408))
- add a paragraph about "account deletion" to info page
([#405](https://github.com/chatmail/server/pull/405))
- avoid nginx listening on IPv6 if IPv6 is disabled
([#402](https://github.com/chatmail/server/pull/402))
- refactor ssh-based execution to allow organizing remote functions in
modules.
([#396](https://github.com/chatmail/server/pull/396))
- trigger "apt upgrade" during "cmdeploy run"
([#398](https://github.com/chatmail/server/pull/398))
- drop hispanilandia passthrough address
([#401](https://github.com/chatmail/server/pull/401))
- set CAA record flags to 0
- add IMAP capabilities instead of overwriting them
([#413](https://github.com/chatmail/server/pull/413))
- fix OpenPGP payload check
([#435](https://github.com/chatmail/server/pull/435))
- fix Dovecot quota_max_mail_size to use max_message_size config value
([#438](https://github.com/chatmail/server/pull/438))
## 1.4.1 - 2024-07-31
- fix metadata dictproxy which would confuse transactions
resulting in missed notifications and other issues.
([#393](https://github.com/chatmail/server/pull/393))
([#394](https://github.com/chatmail/server/pull/394))
- add optional "imap_rawlog" config option. If true,
.in/.out files are created in user home dirs
containing the imap protocol messages.
([#389](https://github.com/chatmail/server/pull/389))
## 1.4.0 - 2024-07-28
- Add `disable_ipv6` config option to chatmail.ini.
Required if the server doesn't have IPv6 connectivity.
([#312](https://github.com/chatmail/server/pull/312))
- allow current K-9/Thunderbird mail releases to send encrypted messages
  to outside recipients by accepting their localized "encrypted subject" strings.
([#370](https://github.com/chatmail/server/pull/370))
- Migrate and remove sqlite database in favor of password/lastlogin tracking
in a user's maildir.
([#379](https://github.com/chatmail/server/pull/379))
- Require pyinfra V3 installed on the client side,
run `./scripts/initenv.sh` to upgrade locally.
([#378](https://github.com/chatmail/server/pull/378))
- don't hardcode "/home/vmail" paths but rather set them
once in the config object and use it everywhere else,
thereby also improving testability.
([#351](https://github.com/chatmail/server/pull/351))
  This temporarily introduced obligatory "passdb_path" and "mailboxes_dir"
  settings, which were removed/obsoleted in
([#380](https://github.com/chatmail/server/pull/380))
- BREAKING: new required chatmail.ini value 'delete_inactive_users_after = 100'
which removes users from database and mails after 100 days without any login.
([#350](https://github.com/chatmail/server/pull/350))
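The corresponding `chatmail.ini` line (value taken from the entry above):

```
# remove accounts (and their mails) after 100 days without any login
delete_inactive_users_after = 100
```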
- Refine DNS checking to distinguish between "required" and "recommended" settings
([#372](https://github.com/chatmail/server/pull/372))
- reload nginx in the acmetool cronjob
([#360](https://github.com/chatmail/server/pull/360))
- remove checking of reverse-DNS PTR records. Chatmail servers don't
  depend on it, and even in the wider e-mail system it's not common anymore.
  If it becomes an issue, a chatmail operator can still set reverse DNS properly.
([#348](https://github.com/chatmail/server/pull/348))
- Make DNS-checking faster and more interactive, run it fully during "cmdeploy run",
also introducing a generic mechanism for rapid remote ssh-based python function execution.
([#346](https://github.com/chatmail/server/pull/346))
- Don't fix file ownership of /home/vmail
([#345](https://github.com/chatmail/server/pull/345))
- Support iterating over all users with doveadm commands
([#344](https://github.com/chatmail/server/pull/344))
- Test and fix for attempts to create inadmissible accounts
([#333](https://github.com/chatmail/server/pull/321))
- check that OpenPGP has only PKESK, SKESK and SEIPD packets
([#323](https://github.com/chatmail/server/pull/323),
[#324](https://github.com/chatmail/server/pull/324))
- improve filtermail checks for encrypted messages and drop support for unencrypted MDNs
([#320](https://github.com/chatmail/server/pull/320))
- replace `bash` with `/bin/sh`
([#334](https://github.com/chatmail/server/pull/334))
- Increase the maximum number of logged-in IMAP sessions to 50000
([#335](https://github.com/chatmail/server/pull/335))
- filtermail: do not allow ASCII armor without actual payload
([#325](https://github.com/chatmail/server/pull/325))
- Remove sieve to enable hardlink deduplication in LMTP
([#343](https://github.com/chatmail/server/pull/343))
- dovecot: enable gzip compression on disk
([#341](https://github.com/chatmail/server/pull/341))
- DKIM-sign Content-Type and oversign all signed headers
([#296](https://github.com/chatmail/server/pull/296))
- Add nonci_accounts metric
([#347](https://github.com/chatmail/server/pull/347))
- doveauth: log when a new account is created
([#349](https://github.com/chatmail/server/pull/349))
- Multiplex HTTPS, IMAP and SMTP on port 443
([#357](https://github.com/chatmail/server/pull/357))
## 1.3.0 - 2024-06-06
- don't check necessary DNS records on cmdeploy init anymore
([#316](https://github.com/chatmail/server/pull/316))
- ensure cron and acl are installed
([#293](https://github.com/chatmail/server/pull/293),
[#310](https://github.com/chatmail/server/pull/310))
- change default for delete_mails_after from 40 to 20 days
([#300](https://github.com/chatmail/server/pull/300))
- save journald logs only to memory and send nginx logs to journald instead of a file
([#299](https://github.com/chatmail/server/pull/299))
- fix writing of multiple obs repositories in `/etc/apt/sources.list`
([#290](https://github.com/chatmail/server/pull/290))
- metadata: add support for `/shared/vendor/deltachat/irohrelay`
([#284](https://github.com/chatmail/server/pull/284))
- Emit "XCHATMAIL" capability from IMAP server
([#278](https://github.com/chatmail/server/pull/278))
- Move echobot into `/var/lib/echobot`
([#281](https://github.com/chatmail/server/pull/281))
- Accept Let's Encrypt's new Terms of Service
([#275](https://github.com/chatmail/server/pull/276))
- Reload Dovecot and Postfix when TLS certificate updates
([#271](https://github.com/chatmail/server/pull/271))
- Use forked version of dovecot without hardcoded delays
([#270](https://github.com/chatmail/server/pull/270))
## 1.2.0 - 2024-04-04
- Install dig on the server to resolve DNS records
([#267](https://github.com/chatmail/server/pull/267))
- preserve notification order and exponentially back off with
retries for tokens where we didn't get a successful return
([#265](https://github.com/chatmail/server/pull/263))
- Run chatmail-metadata and doveauth as vmail
([#261](https://github.com/chatmail/server/pull/261))
- Apply systemd restrictions to echobot
([#259](https://github.com/chatmail/server/pull/259))
- re-enable running the CI in pull requests, but not concurrently
([#258](https://github.com/chatmail/server/pull/258))
## 1.1.0 - 2024-03-28
### The changelog starts to record changes from March 15th, 2024
- Move systemd unit templates to cmdeploy package
([#255](https://github.com/chatmail/server/pull/255))
- Persist push tokens and support multiple devices per address
([#254](https://github.com/chatmail/server/pull/254))
- Avoid warning for regular doveauth protocol's hello message.
([#250](https://github.com/chatmail/server/pull/250))
- Fix various tests to pass again with "cmdeploy test".
([#245](https://github.com/chatmail/server/pull/245),
[#242](https://github.com/chatmail/server/pull/242))
- Ensure Let's Encrypt certificates are reloaded after renewal
([#244](https://github.com/chatmail/server/pull/244))
- Persist tokens to avoid iOS users losing push notifications when the
chatmail metadata service is restarted (happens regularly during deploys)
([#238](https://github.com/chatmail/server/pull/239))
- Fix sieve-script compile errors on incoming messages
([#237](https://github.com/chatmail/server/pull/239))
- Fix quota reporting after expunging of old mails
([#233](https://github.com/chatmail/server/pull/239))

CONTRIBUTING.md

@@ -0,0 +1,7 @@
# Contributing to the chatmail relay
Commit messages follow the [Conventional Commits] notation.
We use [git-cliff] to generate the changelog from commit messages before the release.
[Conventional Commits]: https://www.conventionalcommits.org/
[git-cliff]: https://git-cliff.org/
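A commit message following this convention might look like (hypothetical example):

```
fix(filtermail): reject ASCII armor without actual payload
```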

README.md

@@ -1,162 +1,20 @@
<img width="800px" src="www/src/collage-top.png"/>
# Chatmail relays for end-to-end encrypted email
# Chatmail services optimized for Delta Chat apps
Chatmail relay servers are interoperable Mail Transport Agents (MTAs) designed for:
This repository helps to setup a ready-to-use chatmail server
comprised of a minimal setup of the battle-tested
[postfix smtp](https://www.postfix.org) and [dovecot imap](https://www.dovecot.org) services.
- **Zero State:** no private data or metadata collected, messages are auto-deleted, low disk usage
The setup is designed and optimized for providing chatmail accounts
for use by [Delta Chat apps](https://delta.chat).
- **Instant/Realtime:** sub-second message delivery, realtime P2P
streaming, privacy-preserving Push Notifications for Apple, Google, and Huawei;
Chatmail accounts are automatically created by a first login,
after which the initially specified password is required for using them.
- **Security Enforcement**: only strict TLS, DKIM and OpenPGP with minimized metadata accepted
## Deploying your own chatmail server
- **Reliable Federation and Decentralization:** No spam or IP reputation checks, federating
depends on established IETF standards and protocols.
We use `chat.example.org` as the chatmail domain in the following steps.
Please substitute it with your own domain.
1. Install the `cmdeploy` command in a virtualenv
```
git clone https://github.com/deltachat/chatmail
cd chatmail
scripts/initenv.sh
```
2. Create chatmail configuration file `chatmail.ini`:
```
scripts/cmdeploy init chat.example.org # <-- use your domain
```
3. Setup first DNS records for your chatmail domain,
according to the hints provided by `cmdeploy init`.
Verify that SSH root login works:
```
ssh root@chat.example.org # <-- use your domain
```
4. Deploy to the remote chatmail server:
```
scripts/cmdeploy run
```
This script will also show you additional DNS records
which you should configure at your DNS provider
(it can take some time until they are public).
### Other helpful commands:
To check the status of your remotely running chatmail service:
```
scripts/cmdeploy status
```
To check whether your DNS records are correct:
```
scripts/cmdeploy dns
```
To test whether your chatmail service is working correctly:
```
scripts/cmdeploy test
```
To measure the performance of your chatmail service:
```
scripts/cmdeploy bench
```
## Overview of this repository
This repository drives the development of chatmail services,
comprised of minimal setups of
- [postfix smtp server](https://www.postfix.org)
- [dovecot imap server](https://www.dovecot.org)
as well as custom services that are integrated with these two:
- `chatmaild/src/chatmaild/doveauth.py` implements
create-on-login account creation semantics and is used
by Dovecot during login authentication and by Postfix
which in turn uses [Dovecot SASL](https://doc.dovecot.org/configuration_manual/authentication/dict/#complete-example-for-authenticating-via-a-unix-socket)
to authenticate users
to send mails for them.
- `chatmaild/src/chatmaild/filtermail.py` prevents
unencrypted e-mail from leaving the chatmail service
and is integrated into postfix's outbound mail pipelines.
There is also the `cmdeploy/src/cmdeploy/cmdeploy.py` command line tool
which helps with setting up and managing the chatmail service.
`cmdeploy run` uses [pyinfra-based scripting](https://pyinfra.com/)
in `cmdeploy/src/cmdeploy/__init__.py`
to automatically install all chatmail components on a server.
### Home page and getting started for users
`cmdeploy run` also creates default static Web pages and deploys them
to a nginx web server with:
- a default `index.html` along with a QR code that users can click to
create accounts on your chatmail provider,
- a default `info.html` that is linked from the home page,
- a default `policy.html` that is linked from the home page.
All `.html` files are generated
from the corresponding markdown `.md` files in the `www/src` directory.
### Refining the web pages
```
scripts/cmdeploy webdev
```
This starts a local live development cycle for chatmail Web pages:
- uses the `www/src/page-layout.html` file for producing static
HTML pages from `www/src/*.md` files
- continuously builds the web presence, reading files from the `www/src` directory
and generating html files and copying assets to the `www/build` directory.
- Starts a browser window automatically where you can "refresh" as needed.
## Emergency Commands to disable automatic account creation
If you need to stop account creation,
e.g. because some script is wildly creating accounts,
login to the server with ssh and run:
```
touch /etc/chatmail-nocreate
```
While this file is present, account creation will be blocked.
### Ports
[Postfix](http://www.postfix.org/) listens on ports 25 (smtp) and 587 (submission) and 465 (submissions).
[Dovecot](https://www.dovecot.org/) listens on ports 143 (imap) and 993 (imaps).
[nginx](https://www.nginx.com/) listens on port 443 (https).
[acmetool](https://hlandau.github.io/acmetool/) listens on port 80 (http).
Delta Chat apps will, however, discover all ports and configurations
automatically by reading the [autoconfig XML file](https://web.archive.org/web/20210624004729/https://developer.mozilla.org/en-US/docs/Mozilla/Thunderbird/Autoconfiguration) from the chatmail service.
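A minimal autoconfig XML sketch, matching the ports listed above (hypothetical values; the actual file served by a chatmail relay may differ):

```xml
<clientConfig version="1.1">
  <emailProvider id="chat.example.org">
    <incomingServer type="imap">
      <hostname>chat.example.org</hostname>
      <port>993</port>
      <socketType>SSL</socketType>
    </incomingServer>
    <outgoingServer type="smtp">
      <hostname>chat.example.org</hostname>
      <port>465</port>
      <socketType>SSL</socketType>
    </outgoingServer>
  </emailProvider>
</clientConfig>
```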
This repository contains everything needed to setup a ready-to-use chatmail relay on an ssh-reachable host.
For getting started and more information, please refer to the web version of this repository's documentation at
[https://chatmail.at/doc/relay](https://chatmail.at/doc/relay)

RELEASE.md

@@ -0,0 +1,15 @@
# Releasing a new version of chatmail relay
For example, to release version 1.9.0 of chatmail relay, do the following steps.
1. Update the changelog: `git cliff --unreleased --tag 1.9.0 --prepend CHANGELOG.md` or `git cliff -u -t 1.9.0 -p CHANGELOG.md`.
2. Open the changelog in the editor, edit it if required.
3. Commit the changes to the changelog with a commit message `chore(release): prepare for 1.9.0`.
4. Tag the release: `git tag --annotate 1.9.0`.
5. Push the release tag: `git push origin 1.9.0`.
6. Create a GitHub release: `gh release create 1.9.0`.


@@ -1,4 +1,3 @@
include src/chatmaild/*.f
include src/chatmaild/ini/*.ini.f
include src/chatmaild/ini/*.ini
include src/chatmaild/tests/mail-data/*


@@ -4,12 +4,15 @@ build-backend = "setuptools.build_meta"
[project]
name = "chatmaild"
version = "0.2"
version = "0.3"
dependencies = [
"aiosmtpd",
"iniconfig",
"deltachat-rpc-server",
"deltachat-rpc-client",
"filelock",
"requests",
"crypt-r >= 3.13.1 ; python_version >= '3.11'",
]
[tool.setuptools]
@@ -20,9 +23,12 @@ where = ['src']
[project.scripts]
doveauth = "chatmaild.doveauth:main"
filtermail = "chatmaild.filtermail:main"
echobot = "chatmaild.echo:main"
chatmail-metadata = "chatmaild.metadata:main"
chatmail-metrics = "chatmaild.metrics:main"
chatmail-expire = "chatmaild.expire:main"
chatmail-fsreport = "chatmaild.fsreport:main"
lastlogin = "chatmaild.lastlogin:main"
turnserver = "chatmaild.turnserver:main"
[project.entry-points.pytest11]
"chatmaild.testplugin" = "chatmaild.tests.plugin"
@@ -33,6 +39,19 @@ log_format = "%(asctime)s %(levelname)s %(message)s"
log_date_format = "%Y-%m-%d %H:%M:%S"
log_level = "INFO"
[tool.ruff]
lint.select = [
"F", # Pyflakes
"I", # isort
"PLC", # Pylint Convention
"PLE", # Pylint Error
"PLW", # Pylint Warning
]
lint.ignore = [
"PLC0415" # import-outside-top-level
]
[tool.tox]
legacy_tox_ini = """
[tox]
@@ -44,13 +63,14 @@ skipdist = True
skip_install = True
deps =
ruff
black
commands =
black --quiet --check --diff src/
ruff src/
ruff format --quiet --diff src/
ruff check src/
[testenv]
deps = pytest
pdbpp
pytest-localserver
execnet
commands = pytest -v -rsXx {posargs}
"""


@@ -0,0 +1 @@


@@ -1,48 +1,125 @@
import os
from pathlib import Path
import iniconfig
from chatmaild.user import User
def read_config(inipath):
assert Path(inipath).exists(), inipath
cfg = iniconfig.IniConfig(inipath)
return Config(inipath, params=cfg.sections["params"])
params = cfg.sections["params"]
default_config_content = get_default_config_content(params["mail_domain"])
df_params = iniconfig.IniConfig("ini", data=default_config_content)["params"]
new_params = dict(df_params.items())
new_params.update(params)
return Config(inipath, params=new_params)
class Config:
def __init__(self, inipath, params):
self._inipath = inipath
self.mail_domain = params["mail_domain"]
self.max_user_send_per_minute = int(params["max_user_send_per_minute"])
self.max_user_send_per_minute = int(params.get("max_user_send_per_minute", 60))
self.max_user_send_burst_size = int(params.get("max_user_send_burst_size", 10))
self.max_mailbox_size = params["max_mailbox_size"]
self.max_message_size = int(params.get("max_message_size", "31457280"))
self.delete_mails_after = params["delete_mails_after"]
self.delete_large_after = params["delete_large_after"]
self.delete_inactive_users_after = int(params["delete_inactive_users_after"])
self.username_min_length = int(params["username_min_length"])
self.username_max_length = int(params["username_max_length"])
self.password_min_length = int(params["password_min_length"])
self.passthrough_senders = params["passthrough_senders"].split()
self.passthrough_recipients = params["passthrough_recipients"].split()
self.filtermail_smtp_port = int(params["filtermail_smtp_port"])
self.postfix_reinject_port = int(params["postfix_reinject_port"])
self.www_folder = params.get("www_folder", "")
self.filtermail_smtp_port = int(params.get("filtermail_smtp_port", "10080"))
self.filtermail_smtp_port_incoming = int(
params.get("filtermail_smtp_port_incoming", "10081")
)
self.postfix_reinject_port = int(params.get("postfix_reinject_port", "10025"))
self.postfix_reinject_port_incoming = int(
params.get("postfix_reinject_port_incoming", "10026")
)
self.mtail_address = params.get("mtail_address")
self.disable_ipv6 = params.get("disable_ipv6", "false").lower() == "true"
self.noacme = os.environ.get("CHATMAIL_NOACME", "false").lower() == "true"
self.addr_v4 = os.environ.get("CHATMAIL_ADDR_V4", "")
self.addr_v6 = os.environ.get("CHATMAIL_ADDR_V6", "")
self.acme_email = params.get("acme_email", "")
self.imap_rawlog = params.get("imap_rawlog", "false").lower() == "true"
self.imap_compress = params.get("imap_compress", "false").lower() == "true"
if "iroh_relay" not in params:
self.iroh_relay = "https://" + params["mail_domain"]
self.enable_iroh_relay = True
else:
self.iroh_relay = params["iroh_relay"].strip()
self.enable_iroh_relay = False
self.privacy_postal = params.get("privacy_postal")
self.privacy_mail = params.get("privacy_mail")
self.privacy_pdo = params.get("privacy_pdo")
self.privacy_supervisor = params.get("privacy_supervisor")
# deprecated option
mbdir = params.get("mailboxes_dir", f"/home/vmail/mail/{self.mail_domain}")
self.mailboxes_dir = Path(mbdir.strip())
# old unused option (except for first migration from sqlite to maildir store)
self.passdb_path = Path(params.get("passdb_path", "/home/vmail/passdb.sqlite"))
def _getbytefile(self):
return open(self._inipath, "rb")
def get_user(self, addr) -> User:
if not addr or "@" not in addr or "/" in addr:
raise ValueError(f"invalid address {addr!r}")
def write_initial_config(inipath, mail_domain):
maildir = self.mailboxes_dir.joinpath(addr)
password_path = maildir.joinpath("password")
return User(maildir, addr, password_path, uid="vmail", gid="vmail")
def write_initial_config(inipath, mail_domain, overrides):
"""Write out default config file, using the specified config value overrides."""
content = get_default_config_content(mail_domain, **overrides)
inipath.write_text(content)
def get_default_config_content(mail_domain, **overrides):
from importlib.resources import files
inidir = files(__package__).joinpath("ini")
content = (
inidir.joinpath("chatmail.ini.f").read_text().format(mail_domain=mail_domain)
)
source_inipath = inidir.joinpath("chatmail.ini.f")
content = source_inipath.read_text().format(mail_domain=mail_domain)
# apply config overrides
new_lines = []
extra = overrides.copy()
for line in content.split("\n"):
new_line = line.strip()
if new_line and new_line[0] not in "#[":
name, value = map(str.strip, new_line.split("=", maxsplit=1))
value = extra.pop(name, value)
new_line = f"{name} = {value}"
new_lines.append(new_line)
for name, value in extra.items():
new_line = f"{name} = {value}"
new_lines.append(new_line)
content = "\n".join(new_lines)
# apply testrun privacy overrides
if mail_domain.endswith(".testrun.org"):
override_inipath = inidir.joinpath("override-testrun.ini")
privacy = iniconfig.IniConfig(override_inipath)["privacy"]
lines = []
for line in content.split("\n"):
for key, value in privacy.items():
value_lines = value.strip().split("\n")
value_lines = value.format(mail_domain=mail_domain).strip().split("\n")
if not line.startswith(f"{key} =") or not value_lines:
continue
if len(value_lines) == 1:
@@ -55,5 +132,4 @@ def write_initial_config(inipath, mail_domain):
else:
lines.append(line)
content = "\n".join(lines)
inipath.write_text(content)
return content
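The override-application loop above can be read in isolation; this is a self-contained sketch of the same technique (names match the diff, but it is a standalone illustration, not the module itself):

```python
# Rewrite "name = value" lines with override values and append any
# overrides that did not match an existing line (as in config.py above).
def apply_overrides(content, overrides):
    extra = dict(overrides)
    new_lines = []
    for line in content.split("\n"):
        new_line = line.strip()
        # skip comments and section headers, rewrite assignments
        if new_line and new_line[0] not in "#[":
            name, value = map(str.strip, new_line.split("=", maxsplit=1))
            value = extra.pop(name, value)
            new_line = f"{name} = {value}"
        new_lines.append(new_line)
    for name, value in extra.items():
        new_lines.append(f"{name} = {value}")
    return "\n".join(new_lines)


ini = "[params]\nmail_domain = chat.example.org\nmax_message_size = 31457280"
print(apply_overrides(ini, {"max_message_size": "1000000", "imap_rawlog": "true"}))
```

Overrides for existing keys replace the value in place; unknown keys are appended at the end, which is how `write_initial_config` can inject extra settings.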


@@ -1,133 +0,0 @@
import sqlite3
import contextlib
import time
from pathlib import Path
class DBError(Exception):
"""error during an operation on the database."""
class Connection:
def __init__(self, sqlconn, write):
self._sqlconn = sqlconn
self._write = write
def close(self):
self._sqlconn.close()
def commit(self):
self._sqlconn.commit()
def rollback(self):
self._sqlconn.rollback()
def execute(self, query, params=()):
cur = self.cursor()
try:
cur.execute(query, params)
except sqlite3.IntegrityError as e:
raise DBError(e)
return cur
def cursor(self):
return self._sqlconn.cursor()
def get_user(self, addr: str) -> {}:
"""Get a row from the users table."""
q = "SELECT addr, password, last_login from users WHERE addr = ?"
row = self._sqlconn.execute(q, (addr,)).fetchone()
result = {}
if row:
result = dict(
user=row[0],
password=row[1],
last_login=row[2],
)
return result
class Database:
def __init__(self, path: str):
self.path = Path(path)
self.ensure_tables()
def _get_connection(
self, write=False, transaction=False, closing=False
) -> Connection:
# we let the database serialize all writers at connection time
# to play it very safe (we don't have massive amounts of writes).
mode = "ro"
if write:
mode = "rw"
if not self.path.exists():
mode = "rwc"
uri = "file:%s?mode=%s" % (self.path, mode)
sqlconn = sqlite3.connect(
uri,
timeout=60,
isolation_level=None if transaction else "DEFERRED",
uri=True,
)
# Enable Write-Ahead Logging to avoid readers blocking writers and vice versa.
if write:
sqlconn.execute("PRAGMA journal_mode=wal")
if transaction:
start_time = time.time()
while 1:
try:
sqlconn.execute("begin immediate")
break
except sqlite3.OperationalError:
# another thread may be writing, give it a chance to finish
time.sleep(0.1)
if time.time() - start_time > 5:
# if it takes this long, something is wrong
raise
conn = Connection(sqlconn, write=write)
if closing:
conn = contextlib.closing(conn)
return conn
@contextlib.contextmanager
def write_transaction(self):
conn = self._get_connection(closing=False, write=True, transaction=True)
try:
yield conn
except Exception:
conn.rollback()
conn.close()
raise
else:
conn.commit()
conn.close()
def read_connection(self, closing=True) -> Connection:
return self._get_connection(closing=closing, write=False)
def get_schema_version(self) -> int:
with self.read_connection() as conn:
dbversion = conn.execute("PRAGMA user_version").fetchone()[0]
return dbversion
CURRENT_DBVERSION = 1
def ensure_tables(self):
with self.write_transaction() as conn:
if self.get_schema_version() > 1:
raise DBError(
"version is %s; downgrading schema is not supported"
% (self.get_schema_version(),)
)
conn.execute(
"""
CREATE TABLE IF NOT EXISTS users (
addr TEXT PRIMARY KEY,
password TEXT,
last_login INTEGER
)
""",
)
conn.execute("PRAGMA user_version=%s" % (self.CURRENT_DBVERSION,))


@@ -0,0 +1,98 @@
import logging
import os
from socketserver import StreamRequestHandler, ThreadingUnixStreamServer
class DictProxy:
def loop_forever(self, rfile, wfile):
# Transaction storage is local to each handler loop.
# Dovecot reuses transaction IDs across connections,
# starting transaction with the name `1`
# on two different connections to the same proxy sometimes.
transactions = {}
while True:
msg = rfile.readline().strip().decode()
if not msg:
break
res = self.handle_dovecot_request(msg, transactions)
if res:
wfile.write(res.encode("ascii"))
wfile.flush()
def handle_dovecot_request(self, msg, transactions):
# see https://doc.dovecot.org/2.3/developer_manual/design/dict_protocol/#dovecot-dict-protocol
short_command = msg[0]
parts = msg[1:].split("\t")
if short_command == "L":
return self.handle_lookup(parts)
elif short_command == "I":
return self.handle_iterate(parts)
elif short_command == "H":
return # no version checking
if short_command not in ("BSC"):
logging.warning(f"unknown dictproxy request: {msg!r}")
return
transaction_id = parts[0]
if short_command == "B":
return self.handle_begin_transaction(transaction_id, parts, transactions)
elif short_command == "C":
return self.handle_commit_transaction(transaction_id, parts, transactions)
elif short_command == "S":
addr = transactions[transaction_id]["addr"]
if not self.handle_set(addr, parts):
transactions[transaction_id]["res"] = "F\n"
logging.error(f"dictproxy-set failed for {addr!r}: {msg!r}")
def handle_lookup(self, parts):
logging.warning(f"lookup ignored: {parts!r}")
return "N\n"
def handle_iterate(self, parts):
# Empty line means ITER_FINISHED.
# If we don't return empty line Dovecot will timeout.
return "\n"
def handle_begin_transaction(self, transaction_id, parts, transactions):
addr = parts[1]
transactions[transaction_id] = dict(addr=addr, res="O\n")
def handle_set(self, addr, parts):
# For documentation on key structure see
# https://github.com/dovecot/core/blob/main/src/lib-storage/mailbox-attribute.h
return False
def handle_commit_transaction(self, transaction_id, parts, transactions):
# return whatever "set" command(s) set as result.
return transactions.pop(transaction_id)["res"]
def serve_forever_from_socket(self, socket):
dictproxy = self
class Handler(StreamRequestHandler):
def handle(self):
try:
dictproxy.loop_forever(self.rfile, self.wfile)
except Exception:
logging.exception("Exception in the handler")
raise
try:
os.unlink(socket)
except FileNotFoundError:
pass
with CustomThreadingUnixStreamServer(socket, Handler) as server:
try:
server.serve_forever()
except KeyboardInterrupt:
pass
class CustomThreadingUnixStreamServer(ThreadingUnixStreamServer):
request_queue_size = 1000


@@ -1,25 +1,23 @@
import json
import logging
import os
import time
import sys
import json
import crypt
from socketserver import (
UnixStreamServer,
StreamRequestHandler,
ThreadingMixIn,
)
import pwd
from .database import Database
from .config import read_config, Config
try:
import crypt_r
except ImportError:
import crypt as crypt_r
from .config import Config, read_config
from .dictproxy import DictProxy
from .migrate_db import migrate_from_db_to_maildir
NOCREATE_FILE = "/etc/chatmail-nocreate"
def encrypt_password(password: str):
# https://doc.dovecot.org/configuration_manual/authentication/password_schemes/
passhash = crypt.crypt(password, crypt.METHOD_SHA512)
# https://doc.dovecot.org/2.3/configuration_manual/authentication/password_schemes/
passhash = crypt_r.crypt(password, crypt_r.METHOD_SHA512)
return "{SHA512-CRYPT}" + passhash
@@ -46,58 +44,17 @@ def is_allowed_to_create(config: Config, user, cleartext_password) -> bool:
len(localpart) > config.username_max_length
or len(localpart) < config.username_min_length
):
if localpart != "echo":
logging.warning(
"localpart %s has to be between %s and %s chars long",
localpart,
config.username_min_length,
config.username_max_length,
)
return False
logging.warning(
"localpart %s has to be between %s and %s chars long",
localpart,
config.username_min_length,
config.username_max_length,
)
return False
return True
def get_user_data(db, user):
with db.read_connection() as conn:
result = conn.get_user(user)
if result:
result["uid"] = "vmail"
result["gid"] = "vmail"
return result
def lookup_userdb(db, user):
return get_user_data(db, user)
def lookup_passdb(db, config: Config, user, cleartext_password):
with db.write_transaction() as conn:
userdata = conn.get_user(user)
if userdata:
# Update last login time.
conn.execute(
"UPDATE users SET last_login=? WHERE addr=?", (int(time.time()), user)
)
userdata["uid"] = "vmail"
userdata["gid"] = "vmail"
return userdata
if not is_allowed_to_create(config, user, cleartext_password):
return
encrypted_password = encrypt_password(cleartext_password)
q = """INSERT INTO users (addr, password, last_login)
VALUES (?, ?, ?)"""
conn.execute(q, (user, encrypted_password, int(time.time())))
return dict(
home=f"/home/vmail/{user}",
uid="vmail",
gid="vmail",
password=encrypted_password,
)
def split_and_unescape(s):
"""Split strings using double quote as a separator and backslash as escape character
into parts."""
@@ -124,11 +81,12 @@ def split_and_unescape(s):
yield out
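The docstring above suggests the following behavior; here is a hypothetical stand-in, since the real body in doveauth.py is elided from this diff:

```python
# Sketch: split on unescaped double quotes, treating backslash as an
# escape character (so \" yields a literal quote inside a part).
def split_and_unescape(s):
    out = ""
    it = iter(s)
    for ch in it:
        if ch == "\\":
            out += next(it, "")  # take the escaped character literally
        elif ch == '"':
            yield out
            out = ""
        else:
            out += ch
    yield out


print(list(split_and_unescape('a"b\\"c')))
```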
def handle_dovecot_request(msg, db, config: Config):
short_command = msg[0]
if short_command == "L": # LOOKUP
parts = msg[1:].split("\t")
class AuthDictProxy(DictProxy):
def __init__(self, config):
super().__init__()
self.config = config
def handle_lookup(self, parts):
# Dovecot <2.3.17 has only one part,
# do not attempt to read any other parts for compatibility.
keyname = parts[0]
@@ -136,13 +94,14 @@ def handle_dovecot_request(msg, db, config: Config):
namespace, type, args = keyname.split("/", 2)
args = list(split_and_unescape(args))
config = self.config
reply_command = "F"
res = ""
if namespace == "shared":
if type == "userdb":
user = args[0]
if user.endswith(f"@{config.mail_domain}"):
res = lookup_userdb(db, user)
res = self.lookup_userdb(user)
if res:
reply_command = "O"
else:
@@ -150,51 +109,48 @@ def handle_dovecot_request(msg, db, config: Config):
elif type == "passdb":
user = args[1]
if user.endswith(f"@{config.mail_domain}"):
res = lookup_passdb(db, config, user, cleartext_password=args[0])
res = self.lookup_passdb(user, cleartext_password=args[0])
if res:
reply_command = "O"
else:
reply_command = "N"
json_res = json.dumps(res) if res else ""
return f"{reply_command}{json_res}\n"
return None
def handle_iterate(self, parts):
# example: I0\t0\tshared/userdb/
if parts[2] == "shared/userdb/":
result = "".join(
f"Oshared/userdb/{user}\t\n" for user in self.iter_userdb()
)
return f"{result}\n"
class ThreadedUnixStreamServer(ThreadingMixIn, UnixStreamServer):
request_queue_size = 100
def iter_userdb(self) -> list:
"""Get a list of all user addresses."""
return [x for x in os.listdir(self.config.mailboxes_dir) if "@" in x]
def lookup_userdb(self, addr):
return self.config.get_user(addr).get_userdb_dict()
def lookup_passdb(self, addr, cleartext_password):
user = self.config.get_user(addr)
userdata = user.get_userdb_dict()
if userdata:
return userdata
if not is_allowed_to_create(self.config, addr, cleartext_password):
return
user.set_password(encrypt_password(cleartext_password))
print(f"Created address: {addr}", file=sys.stderr)
return user.get_userdb_dict()
def main():
socket = sys.argv[1]
passwd_entry = pwd.getpwnam(sys.argv[2])
db = Database(sys.argv[3])
config = read_config(sys.argv[4])
socket, cfgpath = sys.argv[1:]
config = read_config(cfgpath)
class Handler(StreamRequestHandler):
def handle(self):
try:
while True:
msg = self.rfile.readline().strip().decode()
if not msg:
break
res = handle_dovecot_request(msg, db, config)
if res:
self.wfile.write(res.encode("ascii"))
self.wfile.flush()
else:
logging.warn("request had no answer: %r", msg)
except Exception:
logging.exception("Exception in the handler")
raise
migrate_from_db_to_maildir(config)
try:
os.unlink(socket)
except FileNotFoundError:
pass
dictproxy = AuthDictProxy(config=config)
with ThreadedUnixStreamServer(socket, Handler) as server:
os.chown(socket, uid=passwd_entry.pw_uid, gid=passwd_entry.pw_gid)
try:
server.serve_forever()
except KeyboardInterrupt:
pass
dictproxy.serve_forever_from_socket(socket)
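
For reference, the dovecot dict-protocol lines the proxy handles are a single command character followed by tab-separated fields; a minimal parse of the ITERATE example shown in `handle_iterate` (values illustrative):

```python
# parse a dovecot dict-protocol ITERATE request like the one in handle_iterate
msg = "I0\t0\tshared/userdb/"
command, rest = msg[0], msg[1:]
parts = rest.split("\t")
print(command, parts)
# a positive reply starts with "O" followed by the payload, a negative one is "N"
reply = "".join(f"Oshared/userdb/{user}\t\n" for user in ["someuser@example.org"])
print(repr(reply))
```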


@@ -1,84 +0,0 @@
#!/usr/bin/env python3
"""Advanced echo bot example.
It echoes back any message that has non-empty text and also supports the /help command.
"""
import logging
import os
import sys
from deltachat_rpc_client import Bot, DeltaChat, EventType, Rpc, events
from chatmaild.newemail import create_newemail_dict
from chatmaild.config import read_config
hooks = events.HookCollection()
@hooks.on(events.RawEvent)
def log_event(event):
if event.kind == EventType.INFO:
logging.info(event.msg)
elif event.kind == EventType.WARNING:
logging.warning(event.msg)
@hooks.on(events.RawEvent(EventType.ERROR))
def log_error(event):
logging.error(event.msg)
@hooks.on(events.MemberListChanged)
def on_memberlist_changed(event):
logging.info(
"member %s was %s", event.member, "added" if event.member_added else "removed"
)
@hooks.on(events.GroupImageChanged)
def on_group_image_changed(event):
logging.info("group image %s", "deleted" if event.image_deleted else "changed")
@hooks.on(events.GroupNameChanged)
def on_group_name_changed(event):
logging.info("group name changed, old name: %s", event.old_name)
@hooks.on(events.NewMessage(func=lambda e: not e.command))
def echo(event):
snapshot = event.message_snapshot
if snapshot.text or snapshot.file:
snapshot.chat.send_message(text=snapshot.text, file=snapshot.file)
@hooks.on(events.NewMessage(command="/help"))
def help_command(event):
snapshot = event.message_snapshot
snapshot.chat.send_text("Send me any message and I will echo it back")
def main():
path = os.environ.get("PATH")
venv_path = sys.argv[0].strip("echobot")
os.environ["PATH"] = path + ":" + venv_path
with Rpc() as rpc:
deltachat = DeltaChat(rpc)
system_info = deltachat.get_system_info()
logging.info("Running deltachat core %s", system_info.deltachat_core_version)
accounts = deltachat.get_all_accounts()
account = accounts[0] if accounts else deltachat.add_account()
bot = Bot(account, hooks)
if not bot.is_configured():
config = read_config(sys.argv[1])
password = create_newemail_dict(config).get("password")
email = "echo@" + config.mail_domain
bot.configure(email, password)
bot.run_forever()
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
main()


@@ -1,11 +0,0 @@
[Unit]
Description=Chatmail echo bot for testing that the setup works
[Service]
ExecStart={execpath} {config_path}
Environment="PATH={remote_venv_dir}:$PATH"
Restart=always
RestartSec=30
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,206 @@
"""
Expire old messages and addresses.
"""
import os
import shutil
import sys
import time
from argparse import ArgumentParser
from collections import namedtuple
from datetime import datetime
from stat import S_ISREG
from chatmaild.config import read_config
FileEntry = namedtuple("FileEntry", ("path", "mtime", "size"))
def iter_mailboxes(basedir, maxnum):
if not os.path.exists(basedir):
print_info(f"no mailboxes found at: {basedir}")
return
for name in os_listdir_if_exists(basedir)[:maxnum]:
if "@" in name:
yield MailboxStat(basedir + "/" + name)
def get_file_entry(path):
"""return a FileEntry or None if the path does not exist or is not a regular file."""
try:
st = os.stat(path)
except FileNotFoundError:
return None
if not S_ISREG(st.st_mode):
return None
return FileEntry(path, st.st_mtime, st.st_size)
def os_listdir_if_exists(path):
"""return a list of names obtained from os.listdir or an empty list if the path does not exist."""
try:
return os.listdir(path)
except FileNotFoundError:
return []
class MailboxStat:
last_login = None
def __init__(self, basedir):
self.basedir = str(basedir)
self.messages = []
self.extrafiles = []
self.scandir(self.basedir)
def scandir(self, folderdir):
for name in os_listdir_if_exists(folderdir):
path = f"{folderdir}/{name}"
if name in ("cur", "new", "tmp"):
for msg_name in os_listdir_if_exists(path):
entry = get_file_entry(f"{path}/{msg_name}")
if entry is not None:
self.messages.append(entry)
elif os.path.isdir(path):
self.scandir(path)
else:
entry = get_file_entry(path)
if entry is not None:
self.extrafiles.append(entry)
if name == "password":
self.last_login = entry.mtime
self.extrafiles.sort(key=lambda x: -x.size)
def print_info(msg):
print(msg, file=sys.stderr)
class Expiry:
def __init__(self, config, dry, now, verbose):
self.config = config
self.dry = dry
self.now = now
self.verbose = verbose
self.del_mboxes = 0
self.all_mboxes = 0
self.del_files = 0
self.all_files = 0
self.start = time.time()
def remove_mailbox(self, mboxdir):
if self.verbose:
print_info(f"removing {mboxdir}")
if not self.dry:
shutil.rmtree(mboxdir)
self.del_mboxes += 1
def remove_file(self, path, mtime=None):
if self.verbose:
if mtime is not None:
date = datetime.fromtimestamp(mtime).strftime("%b %d")
print_info(f"removing {date} {path}")
else:
print_info(f"removing {path}")
if not self.dry:
try:
os.unlink(path)
except FileNotFoundError:
print_info(f"file not found/vanished {path}")
self.del_files += 1
def process_mailbox_stat(self, mbox):
cutoff_without_login = (
self.now - int(self.config.delete_inactive_users_after) * 86400
)
cutoff_mails = self.now - int(self.config.delete_mails_after) * 86400
cutoff_large_mails = self.now - int(self.config.delete_large_after) * 86400
self.all_mboxes += 1
changed = False
if mbox.last_login and mbox.last_login < cutoff_without_login:
self.remove_mailbox(mbox.basedir)
return
mboxname = os.path.basename(mbox.basedir)
if self.verbose:
date = datetime.fromtimestamp(mbox.last_login) if mbox.last_login else None
if date:
print_info(f"checking mailbox {date.strftime('%b %d')} {mboxname}")
else:
print_info(f"checking mailbox (no last_login) {mboxname}")
self.all_files += len(mbox.messages)
for message in mbox.messages:
if message.mtime < cutoff_mails:
self.remove_file(message.path, mtime=message.mtime)
elif message.size > 200000 and message.mtime < cutoff_large_mails:
# we only remove noticed large files (not unnoticed ones in new/)
parts = message.path.split("/")
if len(parts) >= 2 and parts[-2] == "cur":
self.remove_file(message.path, mtime=message.mtime)
else:
continue
changed = True
if changed:
self.remove_file(f"{mbox.basedir}/maildirsize")
def get_summary(self):
return (
f"Removed {self.del_mboxes} out of {self.all_mboxes} mailboxes "
f"and {self.del_files} out of {self.all_files} files in existing mailboxes "
f"in {time.time() - self.start:2.2f} seconds"
)
def main(args=None):
"""Expire mailboxes and messages according to chatmail config"""
parser = ArgumentParser(description=main.__doc__)
ini = "/usr/local/lib/chatmaild/chatmail.ini"
parser.add_argument(
"chatmail_ini",
action="store",
nargs="?",
help=f"path pointing to chatmail.ini file, default: {ini}",
default=ini,
)
parser.add_argument(
"--days", action="store", help="assume date to be days older than now"
)
parser.add_argument(
"--maxnum",
default=None,
action="store",
help="maximum number of mailboxes to iterate on",
)
parser.add_argument(
"-v",
dest="verbose",
action="store_true",
help="print out removed files and mailboxes",
)
parser.add_argument(
"--remove",
dest="remove",
action="store_true",
help="actually remove all expired files and dirs",
)
args = parser.parse_args(args)
config = read_config(args.chatmail_ini)
now = datetime.utcnow().timestamp()
if args.days:
now = now - 86400 * int(args.days)
maxnum = int(args.maxnum) if args.maxnum else None
exp = Expiry(config, dry=not args.remove, now=now, verbose=args.verbose)
for mailbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum):
exp.process_mailbox_stat(mailbox)
print(exp.get_summary())
if __name__ == "__main__":
main(sys.argv[1:])
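
The cutoff arithmetic in `process_mailbox_stat` can be checked in isolation; a small sketch with made-up day values (the config names are reused, the values are illustrative):

```python
import time

# illustrative values mirroring delete_mails_after / delete_large_after
delete_mails_after = 20   # days
delete_large_after = 7    # days

now = time.time()
cutoff_mails = now - delete_mails_after * 86400
cutoff_large_mails = now - delete_large_after * 86400

def should_remove(mtime, size, in_cur):
    """Mirror the per-message decision: old mails go unconditionally,
    large (>200k) mails go earlier, but only from cur/ (already noticed)."""
    if mtime < cutoff_mails:
        return True
    if size > 200000 and mtime < cutoff_large_mails and in_cur:
        return True
    return False

print(should_remove(now - 30 * 86400, 1000, in_cur=False))   # True: older than 20 days
print(should_remove(now - 10 * 86400, 500000, in_cur=True))  # True: large, >7 days, in cur/
print(should_remove(now - 10 * 86400, 500000, in_cur=False)) # False: large but still in new/
```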


@@ -0,0 +1,44 @@
import json
import logging
import os
from contextlib import contextmanager
from random import randint
import filelock
class FileDict:
"""Concurrency-safe multi-reader/single-writer persistent dict."""
def __init__(self, path):
self.path = path
self.lock_path = path.with_name(path.name + ".lock")
@contextmanager
def modify(self):
# the OS will release the lock if the process dies,
# and the contextmanager will otherwise guarantee release
with filelock.FileLock(self.lock_path):
data = self.read()
yield data
write_path = self.path.with_name(self.path.name + ".tmp")
with write_path.open("w") as f:
json.dump(data, f)
os.rename(write_path, self.path)
def read(self):
try:
with self.path.open("r") as f:
return json.load(f)
except FileNotFoundError:
return {}
except Exception:
logging.warning(f"corrupt serialization state at: {self.path!r}")
return {}
def write_bytes_atomic(path, content):
rint = randint(0, 10000000)
tmp = path.with_name(path.name + f".tmp-{rint}")
tmp.write_bytes(content)
os.rename(tmp, path)
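
The write path of `FileDict` boils down to write-to-temp-then-rename; a dependency-free sketch of the same `modify` pattern (without the `filelock` guard the real class adds):

```python
import json
import os
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def modify(path: Path):
    # read current state; a missing file means an empty dict
    try:
        data = json.loads(path.read_text())
    except FileNotFoundError:
        data = {}
    yield data
    # write to a sibling tmp file, then rename atomically over the target
    tmp = path.with_name(path.name + ".tmp")
    tmp.write_text(json.dumps(data))
    os.rename(tmp, path)

path = Path(tempfile.mkdtemp()) / "metadata.json"
with modify(path) as data:
    data["devicetoken"] = {"tok1": 1700000000}
print(json.loads(path.read_text()))
```

Readers never see a half-written file: they observe either the old content or the fully renamed new one.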


@@ -1,163 +0,0 @@
#!/usr/bin/env python3
import asyncio
import logging
import time
import sys
from email.parser import BytesParser
from email import policy
from email.utils import parseaddr
from aiosmtpd.controller import Controller
from smtplib import SMTP as SMTPClient
from .config import read_config
def check_encrypted(message):
"""Check that the message is an OpenPGP-encrypted message."""
if not message.is_multipart():
return False
if message.get("subject") != "...":
return False
if message.get_content_type() != "multipart/encrypted":
return False
parts_count = 0
for part in message.iter_parts():
if parts_count == 0:
if part.get_content_type() != "application/pgp-encrypted":
return False
elif parts_count == 1:
if part.get_content_type() != "application/octet-stream":
return False
else:
return False
parts_count += 1
return True
def check_mdn(message, envelope):
if len(envelope.rcpt_tos) != 1:
return False
for name in ["auto-submitted", "chat-version"]:
if not message.get(name):
return False
if message.get_content_type() != "multipart/report":
return False
body = message.get_body()
if body.get_content_type() != "text/plain":
return False
if list(body.iter_attachments()) or list(body.iter_parts()):
return False
# even with all mime-structural checks an attacker
# could try to abuse the subject or body to contain links or other
# annoyance -- we skip on checking subject/body for now as Delta Chat
# should evolve to create E2E-encrypted read receipts anyway.
# and then MDNs are just encrypted mail and can pass the border
# to other instances.
return True
async def asyncmain_beforequeue(config):
port = config.filtermail_smtp_port
Controller(BeforeQueueHandler(config), hostname="127.0.0.1", port=port).start()
class BeforeQueueHandler:
def __init__(self, config):
self.config = config
self.send_rate_limiter = SendRateLimiter()
async def handle_MAIL(self, server, session, envelope, address, mail_options):
logging.info(f"handle_MAIL from {address}")
envelope.mail_from = address
max_sent = self.config.max_user_send_per_minute
if not self.send_rate_limiter.is_sending_allowed(address, max_sent):
return f"450 4.7.1: Too much mail from {address}"
parts = envelope.mail_from.split("@")
if len(parts) != 2:
return f"500 Invalid from address <{envelope.mail_from!r}>"
return "250 OK"
async def handle_DATA(self, server, session, envelope):
logging.info("handle_DATA before-queue")
error = self.check_DATA(envelope)
if error:
return error
logging.info("re-injecting the mail that passed checks")
client = SMTPClient("localhost", self.config.postfix_reinject_port)
client.sendmail(envelope.mail_from, envelope.rcpt_tos, envelope.content)
return "250 OK"
def check_DATA(self, envelope):
"""the central filtering function for e-mails."""
logging.info(f"Processing DATA message from {envelope.mail_from}")
message = BytesParser(policy=policy.default).parsebytes(envelope.content)
mail_encrypted = check_encrypted(message)
_, from_addr = parseaddr(message.get("from").strip())
logging.info(f"mime-from: {from_addr} envelope-from: {envelope.mail_from!r}")
if envelope.mail_from.lower() != from_addr.lower():
return f"500 Invalid FROM <{from_addr!r}> for <{envelope.mail_from!r}>"
if not mail_encrypted and check_mdn(message, envelope):
return
if envelope.mail_from in self.config.passthrough_senders:
return
passthrough_recipients = self.config.passthrough_recipients
envelope_from_domain = from_addr.split("@").pop()
for recipient in envelope.rcpt_tos:
if envelope.mail_from == recipient:
# Always allow sending emails to self.
continue
if recipient in passthrough_recipients:
continue
res = recipient.split("@")
if len(res) != 2:
return f"500 Invalid address <{recipient}>"
_recipient_addr, recipient_domain = res
is_outgoing = recipient_domain != envelope_from_domain
if is_outgoing and not mail_encrypted:
is_securejoin = message.get("secure-join") in [
"vc-request",
"vg-request",
]
if not is_securejoin:
return f"500 Invalid unencrypted mail to <{recipient}>"
class SendRateLimiter:
def __init__(self):
self.addr2timestamps = {}
def is_sending_allowed(self, mail_from, max_send_per_minute):
last = self.addr2timestamps.setdefault(mail_from, [])
now = time.time()
last[:] = [ts for ts in last if ts >= (now - 60)]
if len(last) <= max_send_per_minute:
last.append(now)
return True
return False
def main():
args = sys.argv[1:]
assert len(args) == 1
config = read_config(args[0])
logging.basicConfig(level=logging.WARN)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
task = asyncmain_beforequeue(config)
loop.create_task(task)
loop.run_forever()
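
The MIME shape that `check_encrypted` accepts can be reproduced with a hand-written message (boundary and payload are placeholders):

```python
from email import policy
from email.parser import BytesParser

raw = (
    b"Subject: ...\n"
    b"MIME-Version: 1.0\n"
    b'Content-Type: multipart/encrypted; protocol="application/pgp-encrypted"; boundary="BB"\n'
    b"\n"
    b"--BB\n"
    b"Content-Type: application/pgp-encrypted\n"
    b"\n"
    b"Version: 1\n"
    b"--BB\n"
    b"Content-Type: application/octet-stream\n"
    b"\n"
    b"-----BEGIN PGP MESSAGE-----\n"
    b"placeholder\n"
    b"-----END PGP MESSAGE-----\n"
    b"--BB--\n"
)
message = BytesParser(policy=policy.default).parsebytes(raw)
print(message.get_content_type())
print([part.get_content_type() for part in message.iter_parts()])
```

A literal "..." subject, a `multipart/encrypted` container, and exactly two parts (`application/pgp-encrypted` then `application/octet-stream`) are what the filter requires.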


@@ -1,10 +0,0 @@
[Unit]
Description=Chatmail Postfix BeforeQueue filter
[Service]
ExecStart={execpath} {config_path}
Restart=always
RestartSec=30
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,168 @@
"""
command line tool to analyze mailbox message storage
example invocation:
python -m chatmaild.fsreport /path/to/chatmail.ini
to show storage summaries for all "cur" folders
python -m chatmaild.fsreport /path/to/chatmail.ini --mdir cur
to show storage summaries only for first 1000 mailboxes
python -m chatmaild.fsreport /path/to/chatmail.ini --maxnum 1000
"""
import os
from argparse import ArgumentParser
from datetime import datetime
from chatmaild.config import read_config
from chatmaild.expire import iter_mailboxes
DAYSECONDS = 24 * 60 * 60
MONTHSECONDS = DAYSECONDS * 30
def HSize(size: int):
"""Format a size integer as a Human-readable string Kilobyte, Megabyte or Gigabyte"""
if size < 10000:
return f"{size / 1000:5.2f}K"
if size < 1000 * 1000:
return f"{size / 1000:5.0f}K"
if size < 1000 * 1000 * 1000:
return f"{int(size / 1000000):5.0f}M"
return f"{size / 1000000000:5.2f}G"
class Report:
def __init__(self, now, min_login_age, mdir):
self.size_extra = 0
self.size_messages = 0
self.now = now
self.min_login_age = min_login_age
self.mdir = mdir
self.num_ci_logins = self.num_all_logins = 0
self.login_buckets = {x: 0 for x in (1, 10, 30, 40, 80, 100, 150)}
self.message_buckets = {x: 0 for x in (0, 160000, 500000, 2000000)}
def process_mailbox_stat(self, mailbox):
# categorize login times
last_login = mailbox.last_login
if last_login:
self.num_all_logins += 1
if os.path.basename(mailbox.basedir)[:3] == "ci-":
self.num_ci_logins += 1
else:
for days in self.login_buckets:
if last_login >= self.now - days * DAYSECONDS:
self.login_buckets[days] += 1
cutoff_login_date = self.now - self.min_login_age * DAYSECONDS
if last_login and last_login <= cutoff_login_date:
# categorize message sizes
for size in self.message_buckets:
for msg in mailbox.messages:
if msg.size >= size:
if self.mdir and not msg.relpath.startswith(self.mdir):
continue
self.message_buckets[size] += msg.size
self.size_messages += sum(entry.size for entry in mailbox.messages)
self.size_extra += sum(entry.size for entry in mailbox.extrafiles)
def dump_summary(self):
all_messages = self.size_messages
print()
print("## Mailbox storage use analysis")
print(f"Mailbox data total size: {HSize(self.size_extra + all_messages)}")
print(f"Messages total size : {HSize(all_messages)}")
try:
percent = self.size_extra / (self.size_extra + all_messages) * 100
except ZeroDivisionError:
percent = 100
print(f"Extra files : {HSize(self.size_extra)} ({percent:.2f}%)")
print()
if self.min_login_age:
print(f"### Message storage for {self.min_login_age} days old logins")
pref = f"[{self.mdir}] " if self.mdir else ""
for minsize, sumsize in self.message_buckets.items():
percent = (sumsize / all_messages * 100) if all_messages else 0
print(
f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%)"
)
user_logins = self.num_all_logins - self.num_ci_logins
def p(num):
return f"({num / user_logins * 100:2.2f}%)" if user_logins else "100%"
print()
print(f"## Login stats, from date reference {datetime.fromtimestamp(self.now)}")
print(f"all: {HSize(self.num_all_logins)}")
print(f"non-ci: {HSize(user_logins)}")
print(f"ci: {HSize(self.num_ci_logins)}")
for days, active in self.login_buckets.items():
print(f"last {days:3} days: {HSize(active)} {p(active)}")
def main(args=None):
"""Report about filesystem storage usage of all mailboxes and messages"""
parser = ArgumentParser(description=main.__doc__)
ini = "/usr/local/lib/chatmaild/chatmail.ini"
parser.add_argument(
"chatmail_ini",
action="store",
nargs="?",
help=f"path pointing to chatmail.ini file, default: {ini}",
default=ini,
)
parser.add_argument(
"--days",
default=0,
action="store",
help="assume date to be days older than now",
)
parser.add_argument(
"--min-login-age",
default=0,
dest="min_login_age",
action="store",
help="only sum up message size if last login is at least min-login-age days old",
)
parser.add_argument(
"--mdir",
action="store",
help="only consider 'cur' or 'new' or 'tmp' messages for summary",
)
parser.add_argument(
"--maxnum",
default=None,
action="store",
help="maximum number of mailboxes to iterate on",
)
args = parser.parse_args(args)
config = read_config(args.chatmail_ini)
now = datetime.utcnow().timestamp()
if args.days:
now = now - 86400 * int(args.days)
maxnum = int(args.maxnum) if args.maxnum else None
rep = Report(now=now, min_login_age=int(args.min_login_age), mdir=args.mdir)
for mbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum):
rep.process_mailbox_stat(mbox)
rep.dump_summary()
if __name__ == "__main__":
main()
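
For reference, the `HSize` buckets above behave as follows (function copied verbatim from the diff):

```python
def HSize(size: int):
    """Format a size integer as a human-readable kilobyte, megabyte or gigabyte string."""
    if size < 10000:
        return f"{size / 1000:5.2f}K"
    if size < 1000 * 1000:
        return f"{size / 1000:5.0f}K"
    if size < 1000 * 1000 * 1000:
        return f"{int(size / 1000000):5.0f}M"
    return f"{size / 1000000000:5.2f}G"

for n in (5_000, 500_000, 50_000_000, 5_000_000_000):
    print(HSize(n).strip())
```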


@@ -8,17 +8,29 @@ mail_domain = {mail_domain}
#
#
# Account Restrictions
# Restrictions on user addresses
#
# how many mails a user can send out per minute
# email sending rate per user and minute
max_user_send_per_minute = 60
# maximum mailbox size of a chatmail account
max_mailbox_size = 100M
# per-user max burst size for sending rate limiting (GCRA bucket capacity)
max_user_send_burst_size = 10
# maximum mailbox size of a chatmail address
max_mailbox_size = 500M
# maximum message size for an e-mail in bytes
max_message_size = 31457280
# days after which mails are unconditionally deleted
delete_mails_after = 40
delete_mails_after = 20
# days after which large messages (>200k) are unconditionally deleted
delete_large_after = 7
# days after which users without a successful login are deleted (database and mails)
delete_inactive_users_after = 90
# minimum length a username must have
username_min_length = 9
@@ -29,22 +41,74 @@ username_max_length = 9
# minimum length a password must have
password_min_length = 9
# list of chatmail accounts which can send outbound un-encrypted mail
# list of chatmail addresses which can send outbound un-encrypted mail
passthrough_senders =
# list of e-mail recipients for which to accept outbound un-encrypted mails
# (space-separated, item may start with "@" to whitelist whole recipient domains)
passthrough_recipients =
# path to www directory - documented here: https://chatmail.at/doc/relay/getting_started.html#custom-web-pages
#www_folder = www
#
# Deployment Details
#
# where the filtermail SMTP service listens
# SMTP outgoing filtermail and reinjection
filtermail_smtp_port = 10080
# postfix accepts on the localhost reinject SMTP port
postfix_reinject_port = 10025
# SMTP incoming filtermail and reinjection
filtermail_smtp_port_incoming = 10081
postfix_reinject_port_incoming = 10026
# if set to "True" IPv6 is disabled
disable_ipv6 = False
# Your email address, which will be used by acmetool to manage Let's Encrypt SSL certificates
acme_email =
# Defaults to https://iroh.{{mail_domain}} and running `iroh-relay` on the chatmail
# service.
# If you set it to anything else, the service will be disabled
# and users will be directed to use the given iroh relay URL.
# Set it to empty string if you want users to use their default iroh relay.
# iroh_relay =
# Address on which `mtail` listens, e.g. 127.0.0.1 or some
# private network address like 192.168.10.1.
# You can point Prometheus or some other OpenMetrics-compatible
# collector to http://{{mtail_address}}:3903/metrics
# and display collected metrics with Grafana.
#
# WARNING: do not expose this service to the public IP address.
#
# `mtail` is not running if the setting is not set.
# mtail_address = 127.0.0.1
#
# Debugging options
#
# set to True if you want to track imap protocol execution
# in per-maildir ".in/.out" files.
# Note that you need to manually cleanup these files
# so use this option with caution on production servers.
imap_rawlog = false
# set to true if you want to enable the IMAP COMPRESS Extension,
# which allows IMAP connections to be efficiently compressed.
# WARNING: Enabling this makes it impossible to hibernate IMAP
# processes which will result in much higher memory/RAM usage.
imap_compress = false
#
# Privacy Policy
#
@@ -60,4 +124,3 @@ privacy_pdo =
# postal address of the privacy supervisor
privacy_supervisor =
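
The new `max_user_send_burst_size` comment mentions GCRA bucket capacity; a minimal GCRA sketch (illustrative only, not the relay's actual implementation) shows how rate and burst interact:

```python
class GCRA:
    """Generic cell rate algorithm: a steady rate plus a bounded burst."""

    def __init__(self, rate_per_minute, burst):
        self.emission_interval = 60.0 / rate_per_minute  # seconds per allowed send
        self.limit = burst * self.emission_interval      # bucket capacity in seconds
        self.tat = 0.0                                   # theoretical arrival time

    def allow(self, now):
        tat = max(self.tat, now)
        if tat - now > self.limit - self.emission_interval:
            return False  # bucket full: reject without advancing state
        self.tat = tat + self.emission_interval
        return True

# with rate 60/min and burst 10, exactly 10 sends pass at the same instant
limiter = GCRA(rate_per_minute=60, burst=10)
allowed = sum(limiter.allow(now=0.0) for _ in range(20))
print(allowed)
```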


@@ -1,7 +1,7 @@
[privacy]
passthrough_recipients = privacy@testrun.org
passthrough_recipients = privacy@testrun.org echo@{mail_domain}
privacy_postal =
Merlinux GmbH, Represented by the managing director H. Krekel,


@@ -0,0 +1,29 @@
import sys
from .config import read_config
from .dictproxy import DictProxy
class LastLoginDictProxy(DictProxy):
def __init__(self, config):
super().__init__()
self.config = config
def handle_set(self, addr, parts):
keyname = parts[1].split("/")
value = parts[2] if len(parts) > 2 else ""
if keyname[0] == "shared" and keyname[1] == "last-login":
addr = keyname[2]
timestamp = int(value)
user = self.config.get_user(addr)
user.set_last_login_timestamp(timestamp)
return True
return False
def main():
socket, config_path = sys.argv[1:]
config = read_config(config_path)
dictproxy = LastLoginDictProxy(config=config)
dictproxy.serve_forever_from_socket(socket)
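
The SETMETADATA keys `LastLoginDictProxy.handle_set` acts on have the shape `shared/last-login/<address>`; a quick parse (the address is a made-up example, and the dict-protocol value arrives as a string):

```python
# key and address are illustrative; the proxy receives these from dovecot
key = "shared/last-login/someuser@example.org"
keyname = key.split("/")
if keyname[0] == "shared" and keyname[1] == "last-login":
    addr = keyname[2]
    timestamp = int("1700000000")  # value field parsed to an integer timestamp
    print(addr, timestamp)
```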


@@ -0,0 +1,151 @@
import logging
import sys
import time
from contextlib import contextmanager
from .config import read_config
from .dictproxy import DictProxy
from .filedict import FileDict
from .notifier import Notifier
from .turnserver import turn_credentials
def _is_valid_token_timestamp(timestamp, now):
# a token is invalid after 90 days,
# or if its timestamp is in the future.
return timestamp > now - 3600 * 24 * 90 and timestamp < now + 60
class Metadata:
# each SETMETADATA on this key appends to dictionary
# mapping of unique device tokens
# which only ever get removed if the upstream indicates the token is invalid
DEVICETOKEN_KEY = "devicetoken"
def __init__(self, vmail_dir):
self.vmail_dir = vmail_dir
def get_metadata_dict(self, addr):
return FileDict(self.vmail_dir / addr / "metadata.json")
@contextmanager
def _modify_tokens(self, addr):
with self.get_metadata_dict(addr).modify() as data:
tokens = data.setdefault(self.DEVICETOKEN_KEY, {})
now = int(time.time())
if isinstance(tokens, list):
data[self.DEVICETOKEN_KEY] = tokens = {t: now for t in tokens}
expired_tokens = [
token
for token, timestamp in tokens.items()
if not _is_valid_token_timestamp(tokens[token], now)
]
for expired_token in expired_tokens:
del tokens[expired_token]
yield tokens
def add_token_to_addr(self, addr, token):
with self._modify_tokens(addr) as tokens:
tokens[token] = int(time.time())
def remove_token_from_addr(self, addr, token):
with self._modify_tokens(addr) as tokens:
if token in tokens:
del tokens[token]
def get_tokens_for_addr(self, addr):
mdict = self.get_metadata_dict(addr).read()
tokens = mdict.get(self.DEVICETOKEN_KEY, {})
now = int(time.time())
if isinstance(tokens, dict):
token_list = [
token
for token, timestamp in tokens.items()
if _is_valid_token_timestamp(timestamp, now)
]
if len(token_list) < len(tokens):
# Some tokens have expired, remove them.
with self._modify_tokens(addr) as _tokens:
pass
else:
token_list = []
return token_list
class MetadataDictProxy(DictProxy):
def __init__(self, notifier, metadata, iroh_relay=None, turn_hostname=None):
super().__init__()
self.notifier = notifier
self.metadata = metadata
self.iroh_relay = iroh_relay
self.turn_hostname = turn_hostname
def handle_lookup(self, parts):
# Lpriv/43f5f508a7ea0366dff30200c15250e3/devicetoken\tlkj123poi@c2.testrun.org
keyparts = parts[0].split("/", 2)
if keyparts[0] == "priv":
keyname = keyparts[2]
addr = parts[1]
if keyname == self.metadata.DEVICETOKEN_KEY:
res = " ".join(self.metadata.get_tokens_for_addr(addr))
return f"O{res}\n"
elif keyparts[0] == "shared":
keyname = keyparts[2]
if (
keyname == "vendor/vendor.dovecot/pvt/server/vendor/deltachat/irohrelay"
and self.iroh_relay
):
# Handle `GETMETADATA "" /shared/vendor/deltachat/irohrelay`
return f"O{self.iroh_relay}\n"
elif keyname == "vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn":
res = turn_credentials()
port = 3478
return f"O{self.turn_hostname}:{port}:{res}\n"
logging.warning(f"lookup ignored: {parts!r}")
return "N\n"
def handle_set(self, addr, parts):
# For documentation on key structure see
# https://github.com/dovecot/core/blob/main/src/lib-storage/mailbox-attribute.h
keyname = parts[1].split("/")
value = parts[2] if len(parts) > 2 else ""
if keyname[0] == "priv" and keyname[2] == self.metadata.DEVICETOKEN_KEY:
self.metadata.add_token_to_addr(addr, value)
return True
elif keyname[0] == "priv" and keyname[2] == "messagenew":
self.notifier.new_message_for_addr(addr, self.metadata)
return True
return False
def main():
socket, config_path = sys.argv[1:]
config = read_config(config_path)
iroh_relay = config.iroh_relay
mail_domain = config.mail_domain
vmail_dir = config.mailboxes_dir
if not vmail_dir.exists():
logging.error("vmail dir does not exist: %r", vmail_dir)
return 1
queue_dir = vmail_dir / "pending_notifications"
queue_dir.mkdir(exist_ok=True)
metadata = Metadata(vmail_dir)
notifier = Notifier(queue_dir)
notifier.start_notification_threads(metadata.remove_token_from_addr)
dictproxy = MetadataDictProxy(
notifier=notifier,
metadata=metadata,
iroh_relay=iroh_relay,
turn_hostname=mail_domain,
)
dictproxy.serve_forever_from_socket(socket)


@@ -1,7 +1,6 @@
#!/usr/bin/env python3
from pathlib import Path
import time
import sys
from pathlib import Path
def main(vmail_dir=None):
@@ -12,13 +11,21 @@ def main(vmail_dir=None):
ci_accounts = 0
for path in Path(vmail_dir).iterdir():
if not path.joinpath("cur").is_dir():
continue
accounts += 1
if path.name[:3] in ("ci-", "ac_"):
ci_accounts += 1
timestamp = int(time.time() * 1000)
print(f"accounts {accounts} {timestamp}")
print(f"ci_accounts {ci_accounts} {timestamp}")
print("# HELP total number of accounts")
print("# TYPE accounts gauge")
print(f"accounts {accounts}")
print("# HELP number of CI accounts")
print("# TYPE ci_accounts gauge")
print(f"ci_accounts {ci_accounts}")
print("# HELP number of non-CI accounts")
print("# TYPE nonci_accounts gauge")
print(f"nonci_accounts {accounts - ci_accounts}")
if __name__ == "__main__":


@@ -0,0 +1,63 @@
"""
migration code from old sqlite databases into per-maildir "password" files,
whose mtime reflects, and is kept updated as, the "last-login" time.
"""
import logging
import os
import sqlite3
import sys
from chatmaild.config import read_config
def get_all_rows(path):
assert path.exists()
uri = f"file:{path}?mode=ro"
sqlconn = sqlite3.connect(uri, timeout=60, isolation_level="DEFERRED", uri=True)
cur = sqlconn.cursor()
cur.execute("SELECT * from users")
rows = cur.fetchall()
sqlconn.close()
return rows
def migrate_from_db_to_maildir(config, chunking=10000):
path = config.passdb_path
if not path.exists():
return
all_rows = get_all_rows(path)
# don't transfer special/CI accounts
rows = [row for row in all_rows if row[0][:3] not in ("ci-", "ac_")]
logging.info(f"ignoring {len(all_rows) - len(rows)} CI accounts")
logging.info(f"migrating {len(rows)} sqlite database passwords to user dirs")
for i, row in enumerate(rows):
addr = row[0]
enc_password = row[1]
user = config.get_user(addr)
user.set_password(enc_password)
if len(row) == 3 and row[2]:
timestamp = int(row[2])
user.set_last_login_timestamp(timestamp)
if i > 0 and i % chunking == 0:
logging.info(f"migration-progress: {i} passwords transferred")
logging.info("migration: all passwords migrated")
oldpath = config.passdb_path.with_suffix(config.passdb_path.suffix + ".old")
os.rename(config.passdb_path, oldpath)
for path in config.passdb_path.parent.iterdir():
if path.name.startswith(config.passdb_path.name + "-"):
path.unlink()
logging.info(f"migration: moved database to {oldpath!r}")
if __name__ == "__main__":
config = read_config(sys.argv[1])
logging.basicConfig(level=logging.INFO)
migrate_from_db_to_maildir(config)
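
The read-only connection in `get_all_rows` uses an sqlite URI; a self-contained sketch with a throwaway database (schema reduced to the three columns the migration reads):

```python
import os
import sqlite3
import tempfile

# build a tiny stand-in for the old passdb
path = os.path.join(tempfile.mkdtemp(), "passdb.sqlite")
con = sqlite3.connect(path)
con.execute("CREATE TABLE users (addr TEXT, enc_password TEXT, last_login INTEGER)")
con.execute("INSERT INTO users VALUES ('user1@example.org', 'hash', 1700000000)")
con.commit()
con.close()

# open read-only via a file: URI, as the migration does
uri = f"file:{path}?mode=ro"
ro = sqlite3.connect(uri, timeout=60, isolation_level="DEFERRED", uri=True)
rows = ro.execute("SELECT * from users").fetchall()
ro.close()
print(rows)
```

Opening with `mode=ro` guarantees the migration cannot mutate the old database; it is only renamed aside afterwards.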


@@ -1,13 +1,13 @@
#!/usr/local/lib/chatmaild/venv/bin/python3
""" CGI script for creating new accounts. """
"""CGI script for creating new accounts."""
import json
import random
import secrets
import string
from chatmaild.config import read_config, Config
from chatmaild.config import Config, read_config
CONFIG_PATH = "/usr/local/lib/chatmaild/chatmail.ini"
ALPHANUMERIC = string.ascii_lowercase + string.digits
@@ -15,7 +15,7 @@ ALPHANUMERIC_PUNCT = string.ascii_letters + string.digits + string.punctuation
def create_newemail_dict(config: Config):
user = "".join(random.choices(ALPHANUMERIC, k=config.username_min_length))
user = "".join(random.choices(ALPHANUMERIC, k=config.username_max_length))
password = "".join(
secrets.choice(ALPHANUMERIC_PUNCT)
for _ in range(config.password_min_length + 3)
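
Plugging in the example config values from the chatmail.ini diff above (`username_max_length = 9`, `password_min_length = 9`), the generation amounts to:

```python
import random
import secrets
import string

ALPHANUMERIC = string.ascii_lowercase + string.digits
ALPHANUMERIC_PUNCT = string.ascii_letters + string.digits + string.punctuation

# lengths taken from the example chatmail.ini; other deployments may differ
user = "".join(random.choices(ALPHANUMERIC, k=9))
password = "".join(secrets.choice(ALPHANUMERIC_PUNCT) for _ in range(9 + 3))
print(len(user), len(password))  # 9 12
```

Note the asymmetry: usernames only need to be unguessable enough to avoid collisions, while passwords use `secrets` and a larger alphabet.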


@@ -0,0 +1,171 @@
"""
This module provides notification machinery for transmitting device tokens to
a central notification server which in turn contacts a phone provider's notification server
to trigger Delta Chat apps to retrieve messages and provide instant notifications to users.
The Notifier class arranges the queuing of tokens in separate PriorityQueues
from which NotifyThreads take and transmit them via HTTPS
to the `notifications.delta.chat` service.
The current lack of proper HTTP/2 support in Python leads us
to use multiple threads and connections to the Rust-implemented `notifications.delta.chat`,
which itself uses HTTP/2 and thus only a single connection to phone-notification providers.
If a token fails to cause a successful notification
it is moved to a retry-number specific PriorityQueue
which handles all tokens that failed a particular number of times
and which are scheduled for retry using exponential back-off timing.
If a token notification would be scheduled more than DROP_DEADLINE seconds
after its first attempt, it is dropped with a log error.
Note that tokens are opaque to the notification machinery here
and are encrypted, foreclosing any ability to distinguish
which device token ultimately goes to which phone-provider notification service,
or to understand the relation of "device tokens" and chatmail addresses.
The meaning and format of tokens are a matter of chatmail Core and
the `notifications.delta.chat` service.
"""
import logging
import math
import os
import time
from dataclasses import dataclass
from pathlib import Path
from queue import PriorityQueue
from threading import Thread
from uuid import uuid4
import requests
@dataclass
class PersistentQueueItem:
path: Path
addr: str
start_ts: int
token: str
def delete(self):
self.path.unlink(missing_ok=True)
@classmethod
def create(cls, queue_dir, addr, start_ts, token):
queue_id = uuid4().hex
start_ts = int(start_ts)
path = queue_dir.joinpath(queue_id)
tmp_path = path.with_name(path.name + ".tmp")
tmp_path.write_text(f"{addr}\n{start_ts}\n{token}")
os.rename(tmp_path, path)
return cls(path, addr, start_ts, token)
@classmethod
def read_from_path(cls, path):
addr, start_ts, token = path.read_text().split("\n", maxsplit=2)
return cls(path, addr, int(start_ts), token)
def __lt__(self, other):
return self.start_ts < other.start_ts
class Notifier:
URL = "https://notifications.delta.chat/notify"
CONNECTION_TIMEOUT = 60.0 # seconds until http-request is given up
BASE_DELAY = 8.0 # base seconds for exponential back-off delay
DROP_DEADLINE = 5 * 60 * 60 # drop notifications after 5 hours
def __init__(self, queue_dir):
self.queue_dir = queue_dir
max_tries = int(math.log(self.DROP_DEADLINE, self.BASE_DELAY)) + 1
self.retry_queues = [PriorityQueue() for _ in range(max_tries)]
def compute_delay(self, retry_num):
return 0 if retry_num == 0 else pow(self.BASE_DELAY, retry_num)
def new_message_for_addr(self, addr, metadata):
start_ts = int(time.time())
for token in metadata.get_tokens_for_addr(addr):
queue_item = PersistentQueueItem.create(
self.queue_dir, addr, start_ts, token
)
self.queue_for_retry(queue_item)
def requeue_persistent_queue_items(self):
for queue_path in self.queue_dir.iterdir():
if queue_path.name.endswith(".tmp"):
logging.warning(f"removing spurious queue item: {queue_path!r}")
queue_path.unlink()
continue
try:
queue_item = PersistentQueueItem.read_from_path(queue_path)
except ValueError:
logging.warning(f"removing spurious queue item: {queue_path!r}")
queue_path.unlink()
continue
self.queue_for_retry(queue_item)
def queue_for_retry(self, queue_item, retry_num=0):
delay = self.compute_delay(retry_num)
when = int(time.time()) + delay
deadline = queue_item.start_ts + self.DROP_DEADLINE
if retry_num >= len(self.retry_queues) or when > deadline:
queue_item.delete()
logging.error(f"notification exceeded deadline: {queue_item.token!r}")
return
self.retry_queues[retry_num].put((when, queue_item))
def start_notification_threads(self, remove_token_from_addr):
self.requeue_persistent_queue_items()
threads = {}
for retry_num in range(len(self.retry_queues)):
# use 4 threads for first-try tokens and fewer for subsequent tries
num_threads = 4 if retry_num == 0 else 2
threads[retry_num] = []
for _ in range(num_threads):
thread = NotifyThread(self, retry_num, remove_token_from_addr)
threads[retry_num].append(thread)
thread.start()
return threads
class NotifyThread(Thread):
def __init__(self, notifier, retry_num, remove_token_from_addr):
super().__init__(daemon=True)
self.notifier = notifier
self.retry_num = retry_num
self.remove_token_from_addr = remove_token_from_addr
def stop(self):
self.notifier.retry_queues[self.retry_num].put((None, None))
def run(self):
requests_session = requests.Session()
while self.retry_one(requests_session):
pass
def retry_one(self, requests_session, sleep=time.sleep):
when, queue_item = self.notifier.retry_queues[self.retry_num].get()
if when is None:
return False
wait_time = when - int(time.time())
if wait_time > 0:
sleep(wait_time)
self.perform_request_to_notification_server(requests_session, queue_item)
return True
def perform_request_to_notification_server(self, requests_session, queue_item):
timeout = self.notifier.CONNECTION_TIMEOUT
token = queue_item.token
try:
res = requests_session.post(self.notifier.URL, data=token, timeout=timeout)
except requests.exceptions.RequestException as e:
res = e
else:
if res.status_code in (200, 410):
if res.status_code == 410:
self.remove_token_from_addr(queue_item.addr, token)
queue_item.delete()
return
logging.warning(f"Notification request failed: {res!r}")
self.notifier.queue_for_retry(queue_item, retry_num=self.retry_num + 1)

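The retry scheduling described in the module docstring can be sketched numerically; `BASE_DELAY` and `DROP_DEADLINE` are copied from the `Notifier` class above, the rest mirrors its `__init__` and `compute_delay`:

```python
import math

BASE_DELAY = 8.0              # matches Notifier.BASE_DELAY
DROP_DEADLINE = 5 * 60 * 60   # matches Notifier.DROP_DEADLINE (5 hours)

# number of retry queues: one per retry count whose delay still fits the deadline
max_tries = int(math.log(DROP_DEADLINE, BASE_DELAY)) + 1

def compute_delay(retry_num):
    # first attempt is immediate, then exponential back-off
    return 0 if retry_num == 0 else pow(BASE_DELAY, retry_num)

schedule = [compute_delay(n) for n in range(max_tries)]
# schedule: [0, 8.0, 64.0, 512.0, 4096.0] -- total wait stays under the 5-hour deadline
```

So a token gets at most five attempts; a sixth would be scheduled past `DROP_DEADLINE` and is dropped instead.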
View File

@@ -0,0 +1,56 @@
From: {from_addr}
To: {to_addr}
Autocrypt-Setup-Message: v1
Subject: Autocrypt Setup Message
Date: Tue, 22 Jan 2019 12:56:29 +0100
Content-type: multipart/mixed; boundary="Y6fyGi9SoGeH8WwRaEdC6bbBcYOedDzrQ"
--Y6fyGi9SoGeH8WwRaEdC6bbBcYOedDzrQ
Content-Type: text/plain
This message contains all information to transfer your Autocrypt
settings along with your secret key securely from your original
device.
To set up your new device for Autocrypt, please follow the
instructions that should be presented by your new device.
You can keep this message and use it as a backup for your secret
key. If you want to do this, you should write down the Setup Code
and store it securely.
--Y6fyGi9SoGeH8WwRaEdC6bbBcYOedDzrQ
Content-Type: application/autocrypt-setup
Content-Disposition: attachment; filename="autocrypt-setup-message.html"
<html><body>
<p>
This is the Autocrypt setup file used to transfer settings and
keys between clients. You can decrypt it using the Setup Code
presented on your old device, and then import the contained key
into your keyring.
</p>
<pre>
-----BEGIN PGP MESSAGE-----
Passphrase-Format: numeric9x4
Passphrase-Begin: 17
jA0EBwMCFAxADoCdzeX/0ukBlqI5+pfpKb751qd/7nLNbkpy3gVcaf1QwRPZYt40
Ynp08UqRQ2g48ZlnzHLSwlTGOPTuv2Jt8ka+pgZ45xzvJSG2gau03xP4VsC271kR
VmCjdb0Y6Rk96mAwfGzrkbaRQ9Z7fIoL866GOv6h9neiVIkp+JYlTV6ISD0ZQJ4Q
I6dOQkB/TWZyVjtiJDOQHdfNWliA6NtqaLq19wlu9L5xXjuNpY95KwR8EJXWe0+o
Y3d2U/KxOAkXKghP2Qg1GtlPVeGC5T4p03TGI6pzKT+kHX6Rrm9wK6sM9aTquMmF
Vok84Jg1DFnwivWC2RILR81rXi7k/+Y6MUbveFgJ9cQduqpxnmD7TjOblYu7M6zp
YGAUxh8DRKlIMn2QsA++DBYQ6ACZvwuY8qTDLkqPDo4WqM313dsMJbyGjDdVE7EM
PESS+RlABETpZXz8g/ycr6DIUNdlbPcmYlsBfHWDOuR2GFFTwmlv5slWS39dJv38
E0eIe1CwdxI801Se7t7dUUS/ZF8wb6GlmxOcqGbF8eko1Z0S64IAm7/h13MRQCxI
geQnHfGYVJ2FOimoCMEKwfa9x++RFTDW0u7spDC2uWvK/1viV8OfRppFhLr/kmKb
18lWXuAz80DAjUDUsVqEq2MvJBJGoCJUEyjuRsLkHYRM5jYk4v50LyyR0Om73nWF
nZBqmqNzdr7Xb9PHHdFhnEc0VvoYbrcM0RVYcEMW3YbmejM891j1d6Iv+/n/qND/
NdebGrfWJMmFLf/iEkzTZ3/v5inW9LpWoRc94ioCjJTaEo8Rib6ARRFaJVIsmNXi
YicFGO98D+zX+a2t9Yz6IpPajVslnOp6ScpmXgts/2XWD7oE+JgxSAqo/dLVsHgP
Ufo=
=pulM
-----END PGP MESSAGE-----
</pre></body></html>
--Y6fyGi9SoGeH8WwRaEdC6bbBcYOedDzrQ--

View File

@@ -1,6 +1,6 @@
From: {from_addr}
To: {to_addr}
Subject: ...
Subject: {subject}
Date: Sun, 15 Oct 2023 16:43:21 +0000
Message-ID: <Mr.UVyJWZmkCKM.hGzNc6glBE_@c2.testrun.org>
In-Reply-To: <Mr.MvmCz-GQbi_.6FGRkhDf05c@c2.testrun.org>

View File

@@ -0,0 +1,44 @@
From: {from_addr}
To: {to_addr}
Subject: ...
Date: Sun, 15 Oct 2023 16:43:21 +0000
Message-ID: <Mr.UVyJWZmkCKM.hGzNc6glBE_@c2.testrun.org>
In-Reply-To: <Mr.MvmCz-GQbi_.6FGRkhDf05c@c2.testrun.org>
References: <Mr.3gckbNy5bch.uK3Hd2Ws6-w@c2.testrun.org>
<Mr.MvmCz-GQbi_.6FGRkhDf05c@c2.testrun.org>
Chat-Version: 1.0
Autocrypt: addr={from_addr}; prefer-encrypt=mutual;
keydata=xjMEZSwWjhYJKwYBBAHaRw8BAQdAQBEhqeJh0GueHB6kF/DUQqYCxARNBVokg/AzT+7LqH
rNFzxiYXJiYXpAYzIudGVzdHJ1bi5vcmc+wosEEBYIADMCGQEFAmUsFo4CGwMECwkIBwYVCAkKCwID
FgIBFiEEFTfUNvVnY3b9F7yHnmme1PfUhX8ACgkQnmme1PfUhX9A4AEAnHWHp49eBCMHK5t66gYPiW
XQuB1mwUjzGfYWB+0RXUoA/0xcQ3FbUNlGKW7Blp6eMFfViv6Mv2d3kNSXACB6nmcMzjgEZSwWjhIK
KwYBBAGXVQEFAQEHQBpY5L2M1XHo0uxf8SX1wNLBp/OVvidoWHQF2Jz+kJsUAwEIB8J4BBgWCAAgBQ
JlLBaOAhsMFiEEFTfUNvVnY3b9F7yHnmme1PfUhX8ACgkQnmme1PfUhX/INgEA37AJaNvruYsJVanP
IXnYw4CKd55UAwl8Zcy+M2diAbkA/0fHHcGV4r78hpbbL1Os52DPOdqYQRauIeJUeG+G6bQO
MIME-Version: 1.0
Content-Type: multipart/encrypted; protocol="application/pgp-encrypted";
boundary="YFrteb74qSXmggbOxZL9dRnhymywAi"
--YFrteb74qSXmggbOxZL9dRnhymywAi
Content-Description: PGP/MIME version identification
Content-Type: application/pgp-encrypted
Version: 1
--YFrteb74qSXmggbOxZL9dRnhymywAi
Content-Description: OpenPGP encrypted message
Content-Disposition: inline; filename="encrypted.asc";
Content-Type: application/octet-stream; name="encrypted.asc"
-----BEGIN PGP MESSAGE-----
yxJiAAAAAABIZWxsbyB3b3JsZCE=
=1I/B
-----END PGP MESSAGE-----
--YFrteb74qSXmggbOxZL9dRnhymywAi--

View File

@@ -0,0 +1,46 @@
Date: Fri, 8 Jul 1994 09:21:47 -0400
From: Mail Delivery Subsystem <MAILER-DAEMON@example.org>
Subject: Returned mail: User unknown
To: <owner-ups-mib@CS.UTK.EDU>
Auto-Submitted: auto-replied
MIME-Version: 1.0
Content-Type: multipart/report; report-type=delivery-status;
boundary="JAA13167.773673707/CS.UTK.EDU"
--JAA13167.773673707/CS.UTK.EDU
content-type: text/plain; charset=us-ascii
----- The following addresses had delivery problems -----
<arathib@vnet.ibm.com> (unrecoverable error)
<wsnell@sdcc13.ucsd.edu> (unrecoverable error)
--JAA13167.773673707/CS.UTK.EDU
content-type: message/delivery-status
Reporting-MTA: dns; cs.utk.edu
Original-Recipient: rfc822;arathib@vnet.ibm.com
Final-Recipient: rfc822;arathib@vnet.ibm.com
Action: failed
Status: 5.0.0 (permanent failure)
Diagnostic-Code: smtp;
550 'arathib@vnet.IBM.COM' is not a registered gateway user
Remote-MTA: dns; vnet.ibm.com
Original-Recipient: rfc822;johnh@hpnjld.njd.hp.com
Final-Recipient: rfc822;johnh@hpnjld.njd.hp.com
Action: delayed
Status: 4.0.0 (hpnjld.njd.jp.com: host name lookup failure)
Original-Recipient: rfc822;wsnell@sdcc13.ucsd.edu
Final-Recipient: rfc822;wsnell@sdcc13.ucsd.edu
Action: failed
Status: 5.0.0
Diagnostic-Code: smtp; 550 user unknown
Remote-MTA: dns; sdcc13.ucsd.edu
--JAA13167.773673707/CS.UTK.EDU
content-type: message/rfc822
[original message goes here]
--JAA13167.773673707/CS.UTK.EDU--

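The multipart/report fixture above follows the RFC 3464 delivery status notification format; Python's email package parses the `message/delivery-status` part into a list of header blocks, one per recipient after the reporting-MTA block. A minimal sketch with an abbreviated, made-up bounce:

```python
from email import policy
from email.parser import BytesParser

raw = b"""\
Content-Type: multipart/report; report-type=delivery-status; boundary="B"
MIME-Version: 1.0

--B
Content-Type: text/plain

delivery failed
--B
Content-Type: message/delivery-status

Reporting-MTA: dns; cs.utk.edu

Final-Recipient: rfc822;arathib@vnet.ibm.com
Action: failed
Status: 5.0.0
--B--
"""
msg = BytesParser(policy=policy.default).parsebytes(raw)
# block 0 of the payload describes the reporting MTA, later blocks one recipient each
status = next(p for p in msg.walk() if p.get_content_type() == "message/delivery-status")
recipient_block = status.get_payload()[1]
```

This is how a bounce handler would read the per-recipient `Action` and `Status` fields from fixtures like the one above.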
View File

@@ -7,7 +7,7 @@ Date: Sun, 15 Oct 2023 16:41:44 +0000
Message-ID: <Mr.3gckbNy5bch.uK3Hd2Ws6-w@c2.testrun.org>
References: <Mr.3gckbNy5bch.uK3Hd2Ws6-w@c2.testrun.org>
Chat-Version: 1.0
Autocrypt: addr=foobar@c2.testrun.org; prefer-encrypt=mutual;
Autocrypt: addr={from_addr}; prefer-encrypt=mutual;
keydata=xjMEZSrw3hYJKwYBBAHaRw8BAQdAiEKNQFU28c6qsx4vo/JHdt73RXdjMOmByf/XsGiJ7m
nNFzxmb29iYXJAYzIudGVzdHJ1bi5vcmc+wosEEBYIADMCGQEFAmUq8N4CGwMECwkIBwYVCAkKCwID
FgIBFiEEGil0OvTIa6RngmCLUYNnEa9leJAACgkQUYNnEa9leJCX3gEAhm0MehE5byBBU1avPczr/I
@@ -20,4 +20,4 @@ Content-Type: text/plain; charset=utf-8; format=flowed; delsp=no
Hi!

View File

@@ -0,0 +1,21 @@
Subject: Message from {from_addr}
From: <{from_addr}>
To: <{to_addr}>
Date: Sun, 15 Oct 2023 16:43:25 +0000
Message-ID: <Mr.78MWtlV7RAi.goCFzBhCYfy@c2.testrun.org>
Chat-Version: 1.0
Secure-Join: vc-request
Secure-Join-Invitenumber: RANDOM-TOKEN
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="Gl92xgZjOShJ5PGHntqYkoo2OK2Dvi"
--Gl92xgZjOShJ5PGHntqYkoo2OK2Dvi
Content-Type: text/plain; charset=utf-8
Buy viagra!
--Gl92xgZjOShJ5PGHntqYkoo2OK2Dvi--

View File

@@ -0,0 +1,21 @@
Subject: Message from {from_addr}
From: <{from_addr}>
To: <{to_addr}>
Date: Sun, 15 Oct 2023 16:43:25 +0000
Message-ID: <Mr.78MWtlV7RAi.goCFzBhCYfy@c2.testrun.org>
Chat-Version: 1.0
Secure-Join: vc-request
Secure-Join-Invitenumber: RANDOM-TOKEN
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="Gl92xgZjOShJ5PGHntqYkoo2OK2Dvi"
--Gl92xgZjOShJ5PGHntqYkoo2OK2Dvi
Content-Type: text/plain; charset=utf-8
Secure-Join: vc-request
--Gl92xgZjOShJ5PGHntqYkoo2OK2Dvi--

View File

@@ -1,11 +1,13 @@
import random
import importlib.resources
import itertools
from email.parser import BytesParser
import os
import random
from email import policy
from email.parser import BytesParser
from pathlib import Path
import pytest
from chatmaild.database import Database
from chatmaild.config import read_config, write_initial_config
@@ -13,8 +15,12 @@ from chatmaild.config import read_config, write_initial_config
def make_config(tmp_path):
inipath = tmp_path.joinpath("chatmail.ini")
def make_conf(mail_domain):
write_initial_config(inipath, mail_domain=mail_domain)
def make_conf(mail_domain, settings=None):
basedir = tmp_path.joinpath(f"vmail/{mail_domain}")
basedir.mkdir(parents=True, exist_ok=True)
overrides = settings.copy() if settings else {}
overrides["mailboxes_dir"] = str(basedir)
write_initial_config(inipath, mail_domain, overrides=overrides)
return read_config(inipath)
return make_conf
@@ -30,6 +36,11 @@ def maildomain(example_config):
return example_config.mail_domain
@pytest.fixture
def testaddr(maildomain):
return f"user.name@{maildomain}"
@pytest.fixture
def gencreds(maildomain):
count = itertools.count()
@@ -48,21 +59,39 @@ def gencreds(maildomain):
return lambda domain=None: next(gen(domain))
@pytest.fixture()
def db(tmpdir):
db_path = tmpdir / "passdb.sqlite"
print("database path:", db_path)
return Database(db_path)
@pytest.fixture
def maildata(request):
try:
datadir = importlib.resources.files(__package__).joinpath("mail-data")
except TypeError:
# on Python 3.8 or lower the above raises TypeError, so we get datadir this way:
datadir = Path(os.getcwd()).joinpath("chatmaild/src/chatmaild/tests/mail-data")
assert datadir.exists(), datadir
def maildata(name, from_addr, to_addr, subject="[...]"):
# Using `.read_bytes().decode()` instead of `.read_text()` to preserve newlines.
data = datadir.joinpath(name).read_bytes().decode()
text = data.format(from_addr=from_addr, to_addr=to_addr, subject=subject)
return BytesParser(policy=policy.SMTP).parsebytes(text.encode())
return maildata
@pytest.fixture
def maildata(request):
    datadir = importlib.resources.files(__package__).joinpath("mail-data")
    assert datadir.exists(), datadir

    def maildata(name, from_addr, to_addr):
        data = datadir.joinpath(name).read_text()
        text = data.format(from_addr=from_addr, to_addr=to_addr)
        return BytesParser(policy=policy.default).parsebytes(text.encode())

    return maildata
@pytest.fixture
def mockout():
    class MockOut:
        captured_red = []
        captured_green = []
        captured_plain = []

        def red(self, msg):
            self.captured_red.append(msg)

        def green(self, msg):
            self.captured_green.append(msg)

        def __call__(self, msg):
            self.captured_plain.append(msg)

    return MockOut()

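The maildata fixture's format-then-parse approach can be sketched standalone; the template string here is made up, but the mechanics match the fixture: substitute addresses into an `.eml` template, then parse the bytes with `policy.SMTP`:

```python
from email import policy
from email.parser import BytesParser

template = "From: {from_addr}\nTo: {to_addr}\nSubject: {subject}\n\nhello\n"
text = template.format(
    from_addr="alice@example.org", to_addr="bob@example.org", subject="[...]"
)
# policy.SMTP serializes with CRLF line endings, which suits messages
# that will be handed to an SMTP server
msg = BytesParser(policy=policy.SMTP).parsebytes(text.encode())
```

The resulting `EmailMessage` can then be fed to the filter handlers via `msg.as_bytes()`, exactly as the tests below do.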
View File

@@ -1,3 +1,5 @@
import pytest
from chatmaild.config import read_config
@@ -13,6 +15,14 @@ def test_read_config_basic(example_config):
assert example_config.mail_domain == "chat.example.org"
def test_read_config_basic_using_defaults(tmp_path, maildomain):
inipath = tmp_path.joinpath("chatmail.ini")
inipath.write_text(f"[params]\nmail_domain = {maildomain}")
example_config = read_config(inipath)
assert example_config.max_user_send_per_minute == 60
assert example_config.filtermail_smtp_port_incoming == 10081
def test_read_config_testrun(make_config):
config = make_config("something.testrun.org")
assert config.mail_domain == "something.testrun.org"
@@ -23,10 +33,43 @@ def test_read_config_testrun(make_config):
assert config.filtermail_smtp_port == 10080
assert config.postfix_reinject_port == 10025
assert config.max_user_send_per_minute == 60
assert config.max_mailbox_size == "100M"
assert config.delete_mails_after == "40"
assert config.max_mailbox_size == "500M"
assert config.delete_mails_after == "20"
assert config.delete_large_after == "7"
assert config.username_min_length == 9
assert config.username_max_length == 9
assert config.password_min_length == 9
assert config.passthrough_recipients == ["privacy@testrun.org"]
assert "privacy@testrun.org" in config.passthrough_recipients
assert config.passthrough_senders == []
def test_config_userstate_paths(make_config, tmp_path):
config = make_config("something.testrun.org")
mailboxes_dir = config.mailboxes_dir
passdb_path = config.passdb_path
assert mailboxes_dir.name == "something.testrun.org"
assert str(passdb_path) == "/home/vmail/passdb.sqlite"
assert config.mail_domain == "something.testrun.org"
path = config.get_user("user1@something.testrun.org").maildir
assert not path.exists()
assert path == mailboxes_dir.joinpath("user1@something.testrun.org")
with pytest.raises(ValueError):
config.get_user("")
with pytest.raises(ValueError):
config.get_user(None)
with pytest.raises(ValueError):
config.get_user("../some@something.testrun.org").maildir
with pytest.raises(ValueError):
config.get_user("..").maildir
with pytest.raises(ValueError):
config.get_user(".")
def test_config_max_message_size(make_config, tmp_path):
config = make_config("something.testrun.org", dict(max_message_size="10000"))
assert config.max_message_size == 10000

View File

@@ -0,0 +1,64 @@
import time
from chatmaild.doveauth import AuthDictProxy
from chatmaild.expire import main as main_expire
def test_login_timestamps(example_config):
testaddr = "someuser@chat.example.org"
user = example_config.get_user(testaddr)
# password file needs to be set because its mtime tracks last-login time
user.set_password("1l2k3j1l2k3j123")
for i in range(10):
user.set_last_login_timestamp(86400 * 4 + i)
assert user.get_last_login_timestamp() == 86400 * 4
def test_delete_inactive_users(example_config):
new = time.time()
old = new - (example_config.delete_inactive_users_after * 86400) - 1
dictproxy = AuthDictProxy(example_config)
def create_user(addr, last_login):
dictproxy.lookup_passdb(addr, "q9mr3faue")
user = example_config.get_user(addr)
user.maildir.joinpath("cur").mkdir()
user.maildir.joinpath("cur", "something").mkdir()
user.set_last_login_timestamp(timestamp=last_login)
# create some stale and some new accounts
to_remove = []
for i in range(150):
addr = f"oldold{i:03}@chat.example.org"
create_user(addr, last_login=old)
to_remove.append(addr)
remain = []
for i in range(5):
addr = f"newnew{i:03}@chat.example.org"
create_user(addr, last_login=new)
remain.append(addr)
# check pre and post-conditions for delete_inactive_users()
for addr in to_remove:
assert example_config.get_user(addr).maildir.exists()
main_expire(
args=[
"--remove",
str(example_config._inipath),
]
)
for p in example_config.mailboxes_dir.iterdir():
assert not p.name.startswith("old")
for addr in to_remove:
assert not example_config.get_user(addr).maildir.exists()
for addr in remain:
userdir = example_config.get_user(addr).maildir
assert userdir.exists()
assert userdir.joinpath("password").read_text()

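test_login_timestamps above implies that last-login timestamps are stored at day granularity: all ten writes within day 4 read back as the start of day 4. A sketch of that rounding, assuming this is indeed the mechanism:

```python
SECONDS_PER_DAY = 86400

def round_to_day(ts):
    # collapse all timestamps within one day to the day's start,
    # so frequent logins cause at most one stored value per day
    return int(ts) // SECONDS_PER_DAY * SECONDS_PER_DAY

# all of day 4's timestamps map to the same stored value
assert all(round_to_day(86400 * 4 + i) == 86400 * 4 for i in range(10))
```

Day granularity is plenty for the inactivity expiry below, which works in whole days anyway.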
View File

@@ -1,81 +1,135 @@
import io
import json
import pytest
import threading
import queue
import threading
import traceback
import pytest
import chatmaild.doveauth
from chatmaild.doveauth import get_user_data, lookup_passdb, handle_dovecot_request
from chatmaild.database import DBError
from chatmaild.doveauth import (
AuthDictProxy,
is_allowed_to_create,
)
from chatmaild.newemail import create_newemail_dict
def test_basic(db, example_config):
lookup_passdb(db, example_config, "asdf12345@chat.example.org", "q9mr3faue")
data = get_user_data(db, "asdf12345@chat.example.org")
@pytest.fixture
def dictproxy(example_config):
return AuthDictProxy(config=example_config)
def test_basic(dictproxy, gencreds):
addr, password = gencreds()
dictproxy.lookup_passdb(addr, password)
data = dictproxy.lookup_userdb(addr)
assert data
data2 = lookup_passdb(
db, example_config, "asdf12345@chat.example.org", "q9mr3jewvadsfaue"
)
data2 = dictproxy.lookup_passdb(addr, password)
assert data == data2
def test_dont_overwrite_password_on_wrong_login(db, example_config):
def test_iterate_addresses(dictproxy):
addresses = []
for i in range(10):
addresses.append(f"asdf1234{i}@chat.example.org")
dictproxy.lookup_passdb(addresses[-1], "q9mr3faue")
res = dictproxy.iter_userdb()
assert set(res) == set(addresses)
def test_invalid_username_length(example_config):
config = example_config
config.username_min_length = 6
config.username_max_length = 10
password = create_newemail_dict(config)["password"]
assert not is_allowed_to_create(config, f"a1234@{config.mail_domain}", password)
assert is_allowed_to_create(config, f"012345@{config.mail_domain}", password)
assert is_allowed_to_create(config, f"0123456@{config.mail_domain}", password)
assert is_allowed_to_create(config, f"0123456789@{config.mail_domain}", password)
assert not is_allowed_to_create(
config, f"0123456789x@{config.mail_domain}", password
)
def test_dont_overwrite_password_on_wrong_login(dictproxy):
"""Test that logging in with a different password doesn't create a new user"""
res = lookup_passdb(
db, example_config, "newuser12@chat.example.org", "kajdlkajsldk12l3kj1983"
res = dictproxy.lookup_passdb(
"newuser12@chat.example.org", "kajdlkajsldk12l3kj1983"
)
assert res["password"]
res2 = lookup_passdb(db, example_config, "newuser12@chat.example.org", "kajdslqwe")
res2 = dictproxy.lookup_passdb("newuser12@chat.example.org", "kajdslqwe")
# this function always returns a password hash, which is actually compared by dovecot.
assert res["password"] == res2["password"]
def test_nocreate_file(db, monkeypatch, tmpdir, example_config):
def test_nocreate_file(monkeypatch, tmpdir, dictproxy):
p = tmpdir.join("nocreate")
p.write("")
monkeypatch.setattr(chatmaild.doveauth, "NOCREATE_FILE", str(p))
lookup_passdb(
db, example_config, "newuser12@chat.example.org", "zequ0Aimuchoodaechik"
)
assert not get_user_data(db, "newuser12@chat.example.org")
dictproxy.lookup_passdb("newuser12@chat.example.org", "zequ0Aimuchoodaechik")
assert not dictproxy.lookup_userdb("newuser12@chat.example.org")
def test_db_version(db):
assert db.get_schema_version() == 1
def test_too_high_db_version(db):
with db.write_transaction() as conn:
conn.execute("PRAGMA user_version=%s;" % (999,))
with pytest.raises(DBError):
db.ensure_tables()
def test_handle_dovecot_request(db, example_config):
def test_handle_dovecot_request(dictproxy):
transactions = {}
# Test that password can contain ", ', \ and /
msg = (
'Lshared/passdb/laksjdlaksjdlak\\\\sjdlk\\"12j\\\'3l1/k2j3123"'
"some42123@chat.example.org\tsome42123@chat.example.org"
)
res = handle_dovecot_request(msg, db, example_config)
res = dictproxy.handle_dovecot_request(msg, transactions)
assert res
assert res[0] == "O" and res.endswith("\n")
userdata = json.loads(res[1:].strip())
assert userdata["home"] == "/home/vmail/some42123@chat.example.org"
assert userdata["home"].endswith("chat.example.org/some42123@chat.example.org")
assert userdata["uid"] == userdata["gid"] == "vmail"
assert userdata["password"].startswith("{SHA512-CRYPT}")
def test_50_concurrent_lookups_different_accounts(db, gencreds, example_config):
def test_handle_dovecot_protocol_hello_is_skipped(example_config, caplog):
dictproxy = AuthDictProxy(config=example_config)
rfile = io.BytesIO(b"H3\t2\t0\t\tauth\n")
wfile = io.BytesIO()
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b""
assert not caplog.messages
def test_handle_dovecot_protocol_user_not_exists(example_config):
dictproxy = AuthDictProxy(config=example_config)
rfile = io.BytesIO(
b"H3\t2\t0\t\tauth\nLshared/userdb/foobar@chat.example.org\tfoobar@chat.example.org\n"
)
wfile = io.BytesIO()
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b"N\n"
def test_handle_dovecot_protocol_iterate(gencreds, example_config):
dictproxy = AuthDictProxy(config=example_config)
dictproxy.lookup_passdb("asdf00000@chat.example.org", "q9mr3faue")
dictproxy.lookup_passdb("asdf11111@chat.example.org", "q9mr3faue")
rfile = io.BytesIO(b"H3\t2\t0\t\tauth\nI0\t0\tshared/userdb/")
wfile = io.BytesIO()
dictproxy.loop_forever(rfile, wfile)
lines = wfile.getvalue().decode("ascii").split("\n")
assert "Oshared/userdb/asdf00000@chat.example.org\t" in lines
assert "Oshared/userdb/asdf11111@chat.example.org\t" in lines
assert not lines[2]
def test_50_concurrent_lookups_different_accounts(gencreds, dictproxy):
num_threads = 50
req_per_thread = 5
results = queue.Queue()
def lookup(db):
def lookup():
for i in range(req_per_thread):
addr, password = gencreds()
try:
lookup_passdb(db, example_config, addr, password)
dictproxy.lookup_passdb(addr, password)
except Exception:
results.put(traceback.format_exc())
else:
@@ -83,7 +137,7 @@ def test_50_concurrent_lookups_different_accounts(db, gencreds, example_config):
threads = []
for i in range(num_threads):
thread = threading.Thread(target=lookup, args=(db,), daemon=True)
thread = threading.Thread(target=lookup, daemon=True)
threads.append(thread)
print(f"created {num_threads} threads, starting them and waiting for results")

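The byte strings in these tests follow Dovecot's dict protocol: `H...` is the handshake, `L<key>\t<user>` a lookup, `I<flags>\t<max>\t<prefix>` an iteration request; replies are `O<value>` (found), `N` (not found), or `F` (failure). A toy dispatcher (not the repo's implementation) covering the handshake and lookup cases:

```python
def handle_line(line, userdb):
    """Answer a single Dovecot dict-protocol request line (toy version)."""
    cmd, rest = line[0], line[1:].rstrip("\n")
    if cmd == "H":          # handshake: no reply expected
        return None
    if cmd == "L":          # lookup: key and requesting user, tab-separated
        key = rest.split("\t")[0]
        addr = key.rsplit("/", 1)[-1]
        return f"O{userdb[addr]}\n" if addr in userdb else "N\n"
    return "F\n"            # anything else: failure

# mirrors test_handle_dovecot_protocol_user_not_exists above
assert handle_line(
    "Lshared/userdb/foobar@chat.example.org\tfoobar@chat.example.org\n", {}
) == "N\n"
```

The real `AuthDictProxy` additionally tracks transactions and serves iteration, but the request/reply framing is the same.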
View File

@@ -0,0 +1,161 @@
import os
import random
from datetime import datetime
from fnmatch import fnmatch
from pathlib import Path
import pytest
from chatmaild.expire import (
FileEntry,
MailboxStat,
get_file_entry,
iter_mailboxes,
os_listdir_if_exists,
)
from chatmaild.expire import main as expiry_main
from chatmaild.fsreport import main as report_main
def fill_mbox(folderdir):
password = folderdir.joinpath("password")
password.write_text("xxx")
folderdir.joinpath("maildirsize").write_text("xxx")
garbagedir = folderdir.joinpath("garbagedir")
garbagedir.mkdir()
garbagedir.joinpath("bimbum").write_text("hello")
create_new_messages(folderdir, ["cur/msg1"], size=500)
create_new_messages(folderdir, ["new/msg2"], size=600)
def create_new_messages(basedir, relpaths, size=1000, days=0):
now = datetime.utcnow().timestamp()
for relpath in relpaths:
msg_path = Path(basedir).joinpath(relpath)
msg_path.parent.mkdir(parents=True, exist_ok=True)
msg_path.write_text("x" * size)
# accessed now, modified N days ago
os.utime(msg_path, (now, now - days * 86400))
@pytest.fixture
def mbox1(example_config):
mboxdir = example_config.mailboxes_dir.joinpath("mailbox1@example.org")
mboxdir.mkdir()
fill_mbox(mboxdir)
return MailboxStat(mboxdir)
def test_deltachat_folder(example_config):
"""Test old setups that might have a .DeltaChat folder where messages also need to get removed."""
mboxdir = example_config.mailboxes_dir.joinpath("mailbox1@example.org")
mboxdir.mkdir()
mbox2dir = mboxdir.joinpath(".DeltaChat")
mbox2dir.mkdir()
fill_mbox(mbox2dir)
mb = MailboxStat(mboxdir)
assert len(mb.messages) == 2
def test_fileentry_ordering(tmp_path):
entries = [FileEntry(f"x{i}", size=i + 10, mtime=1000 - i) for i in range(10)]
expected = list(entries)
random.shuffle(entries)
entries.sort(key=lambda x: x.size)
assert entries == expected
def test_no_mailboxes(tmp_path, capsys):
assert [] == list(iter_mailboxes(str(tmp_path.joinpath("notexists")), maxnum=10))
out, err = capsys.readouterr()
assert "no mailboxes" in err
def test_stats_mailbox(mbox1):
password = Path(mbox1.basedir).joinpath("password")
assert mbox1.last_login == password.stat().st_mtime
assert len(mbox1.messages) == 2
msgs = list(sorted(mbox1.messages, key=lambda x: x.size))
assert len(msgs) == 2
assert msgs[0].size == 500 # cur
assert msgs[1].size == 600 # new
create_new_messages(mbox1.basedir, ["large-extra"], size=1000)
create_new_messages(mbox1.basedir, ["index-something"], size=3)
mbox2 = MailboxStat(mbox1.basedir)
assert len(mbox2.extrafiles) == 5
assert mbox2.extrafiles[0].size == 1000
# cope well with mailbox dirs that have no password (for whatever reason)
Path(mbox1.basedir).joinpath("password").unlink()
mbox3 = MailboxStat(mbox1.basedir)
assert mbox3.last_login is None
def test_report_no_mailboxes(example_config):
args = (str(example_config._inipath),)
report_main(args)
def test_report(mbox1, example_config):
args = (str(example_config._inipath),)
report_main(args)
args = list(args) + "--days 1".split()
report_main(args)
args = list(args) + "--min-login-age 1".split()
report_main(args)
args = list(args) + "--mdir cur".split()
report_main(args)
def test_expiry_cli_basic(example_config, mbox1):
args = (str(example_config._inipath),)
expiry_main(args)
def test_expiry_cli_old_files(capsys, example_config, mbox1):
relpaths_old = ["cur/msg_old1", "cur/msg_old2"]
cutoff_days = int(example_config.delete_mails_after) + 1
create_new_messages(mbox1.basedir, relpaths_old, size=1000, days=cutoff_days)
relpaths_large = ["cur/msg_old_large1", "new/msg_old_large2"]
cutoff_days = int(example_config.delete_large_after) + 1
create_new_messages(
mbox1.basedir, relpaths_large, size=1000 * 300, days=cutoff_days
)
create_new_messages(mbox1.basedir, ["cur/shouldstay"], size=1000 * 300, days=1)
args = str(example_config._inipath), "--remove", "-v"
expiry_main(args)
out, err = capsys.readouterr()
allpaths = relpaths_old + relpaths_large + ["maildirsize"]
for path in allpaths:
for line in err.split("\n"):
if fnmatch(line, f"removing*{path}"):
break
else:
if path != "new/msg_old_large2":
pytest.fail(f"failed to remove {path}\n{err}")
assert "shouldstay" not in err
def test_get_file_entry(tmp_path):
assert get_file_entry(str(tmp_path.joinpath("123123"))) is None
p = tmp_path.joinpath("x")
p.write_text("hello")
entry = get_file_entry(str(p))
assert entry.size == 5
assert entry.mtime
def test_os_listdir_if_exists(tmp_path):
tmp_path.joinpath("x").write_text("hello")
assert len(os_listdir_if_exists(str(tmp_path))) == 1
assert len(os_listdir_if_exists(str(tmp_path.joinpath("123123")))) == 0

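The tests backdate message mtimes with `os.utime` and expect the expiry run to remove files older than the configured cutoffs. The selection logic can be sketched as follows (the helper name is made up, not the repo's API):

```python
import os
import tempfile
import time
from pathlib import Path

def files_older_than(basedir, days):
    # a file "expires" when its mtime is more than `days` days in the past
    cutoff = time.time() - days * 86400
    return [
        p for p in Path(basedir).rglob("*")
        if p.is_file() and p.stat().st_mtime < cutoff
    ]

with tempfile.TemporaryDirectory() as d:
    old = Path(d, "cur", "msg_old")
    old.parent.mkdir()
    old.write_text("x")
    now = time.time()
    os.utime(old, (now, now - 40 * 86400))   # accessed now, modified 40 days ago
    fresh = Path(d, "cur", "shouldstay")
    fresh.write_text("x")
    expired = files_older_than(d, 20)
    assert expired == [old]                  # only the backdated file qualifies
```

Note `os.utime` takes `(atime, mtime)`, so only the modification time is pushed into the past, matching the "accessed now, modified N days ago" comment in `create_new_messages`.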
View File

@@ -0,0 +1,39 @@
import threading
from chatmaild.filedict import FileDict, write_bytes_atomic
def test_basic(tmp_path):
fdict = FileDict(tmp_path.joinpath("metadata"))
assert fdict.read() == {}
with fdict.modify() as d:
d["devicetoken"] = [1, 2, 3]
d["456"] = 4.2
new = fdict.read()
assert new["devicetoken"] == [1, 2, 3]
assert new["456"] == 4.2
def test_bad_marshal_file(tmp_path, caplog):
fdict1 = FileDict(tmp_path.joinpath("metadata"))
fdict1.path.write_bytes(b"l12k3l12k3l")
assert fdict1.read() == {}
assert "corrupt" in caplog.records[0].msg
def test_write_bytes_atomic_concurrent(tmp_path):
p = tmp_path.joinpath("somefile.ext")
write_bytes_atomic(p, b"hello")
threads = []
for i in range(30):
content = f"hello{i}".encode("ascii")
t = threading.Thread(target=lambda: write_bytes_atomic(p, content))
t.start()
threads.append(t)
for t in threads:
t.join()
assert p.read_text().strip() != "hello"
assert len(list(p.parent.iterdir())) == 1

View File

@@ -1,145 +0,0 @@
from chatmaild.filtermail import (
check_encrypted,
BeforeQueueHandler,
SendRateLimiter,
check_mdn,
)
import pytest
@pytest.fixture
def maildomain():
# let's not depend on a real chatmail instance for the offline tests below
return "chatmail.example.org"
@pytest.fixture
def handler(make_config, maildomain):
config = make_config(maildomain)
return BeforeQueueHandler(config)
def test_reject_forged_from(maildata, gencreds, handler):
class env:
mail_from = gencreds()[0]
rcpt_tos = [gencreds()[0]]
# test that the filter lets good mail through
to_addr = gencreds()[0]
env.content = maildata(
"plain.eml", from_addr=env.mail_from, to_addr=to_addr
).as_bytes()
assert not handler.check_DATA(envelope=env)
# test that the filter rejects forged mail
env.content = maildata(
"plain.eml", from_addr="forged@c3.testrun.org", to_addr=to_addr
).as_bytes()
error = handler.check_DATA(envelope=env)
assert "500" in error
def test_filtermail_no_encryption_detection(maildata):
msg = maildata(
"plain.eml", from_addr="some@example.org", to_addr="other@example.org"
)
assert not check_encrypted(msg)
# https://xkcd.com/1181/
msg = maildata(
"fake-encrypted.eml", from_addr="some@example.org", to_addr="other@example.org"
)
assert not check_encrypted(msg)
def test_filtermail_encryption_detection(maildata):
msg = maildata("encrypted.eml", from_addr="1@example.org", to_addr="2@example.org")
assert check_encrypted(msg)
# if the subject is not "..." it is not considered ac-encrypted
msg.replace_header("Subject", "Click this link")
assert not check_encrypted(msg)
def test_filtermail_is_mdn(maildata, gencreds, handler):
from_addr = gencreds()[0]
to_addr = gencreds()[0] + ".other"
msg = maildata("mdn.eml", from_addr, to_addr)
class env:
mail_from = from_addr
rcpt_tos = [to_addr]
content = msg.as_bytes()
assert check_mdn(msg, env)
print(msg.as_string())
assert not handler.check_DATA(env)
def test_filtermail_to_multiple_recipients_no_mdn(maildata, gencreds):
from_addr = gencreds()[0]
to_addr = gencreds()[0] + ".other"
thirdaddr = gencreds()[0]
msg = maildata("mdn.eml", from_addr, to_addr)
class env:
mail_from = from_addr
rcpt_tos = [to_addr, thirdaddr]
content = msg.as_bytes()
assert not check_mdn(msg, env)
def test_send_rate_limiter():
limiter = SendRateLimiter()
for i in range(100):
if limiter.is_sending_allowed("some@example.org", 10):
if i <= 10:
continue
pytest.fail("limiter didn't work")
else:
assert i == 11
break
def test_exempt_privacy(maildata, gencreds, handler):
from_addr = gencreds()[0]
to_addr = "privacy@testrun.org"
handler.config.passthrough_recipients = [to_addr]
false_to = "privacy@something.org"
msg = maildata("plain.eml", from_addr, to_addr)
class env:
mail_from = from_addr
rcpt_tos = [to_addr]
content = msg.as_bytes()
# assert that None/no error is returned
assert not handler.check_DATA(envelope=env)
class env2:
mail_from = from_addr
rcpt_tos = [to_addr, false_to]
content = msg.as_bytes()
assert "500" in handler.check_DATA(envelope=env2)
def test_passthrough_senders(gencreds, handler, maildata):
acc1 = gencreds()[0]
to_addr = "recipient@something.org"
handler.config.passthrough_senders = [acc1]
msg = maildata("plain.eml", acc1, to_addr)
class env:
mail_from = acc1
rcpt_tos = [to_addr]
content = msg.as_bytes()
# assert that None/no error is returned
assert not handler.check_DATA(envelope=env)
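The rate-limiter test above implies that, with a limit of 10, eleven consecutive sends are allowed and the twelfth is rejected. A minimal rolling-window sketch with that behavior (the 60-second window and the deque-per-address layout are assumptions for illustration, not the project's actual implementation):

```python
import time
from collections import defaultdict, deque


class SendRateLimiter:
    """Allow a bounded number of sends per address in a rolling 60s window."""

    def __init__(self):
        self.addr2timestamps = defaultdict(deque)

    def is_sending_allowed(self, mail_from, max_send):
        stamps = self.addr2timestamps[mail_from]
        now = time.time()
        # forget sends that fell out of the rolling window
        while stamps and now - stamps[0] > 60:
            stamps.popleft()
        # "<=" matches the test: max_send + 1 sends pass before throttling
        if len(stamps) <= max_send:
            stamps.append(now)
            return True
        return False
```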

View File

@@ -0,0 +1,87 @@
import smtplib
import subprocess
import sys
import pytest
@pytest.fixture
def smtpserver():
from pytest_localserver import smtp
server = smtp.Server("127.0.0.1")
server.start()
yield server
server.stop()
@pytest.fixture
def make_popen(request):
def popen(cmdargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kw):
p = subprocess.Popen(
cmdargs,
stdout=stdout,
stderr=stderr,
**kw,
)
def fin():
p.terminate()
out, err = p.communicate()
print(out.decode("ascii"))
print(err.decode("ascii"), file=sys.stderr)
request.addfinalizer(fin)
return p
return popen
@pytest.mark.parametrize("filtermail_mode", ["outgoing", "incoming"])
def test_one_mail(
make_config, make_popen, smtpserver, maildata, filtermail_mode, monkeypatch
):
monkeypatch.setenv("PYTHONUNBUFFERED", "1")
smtp_inject_port = 20025
if filtermail_mode == "outgoing":
settings = dict(
postfix_reinject_port=smtpserver.port,
filtermail_smtp_port=smtp_inject_port,
)
else:
settings = dict(
postfix_reinject_port_incoming=smtpserver.port,
filtermail_smtp_port_incoming=smtp_inject_port,
)
config = make_config("example.org", settings=settings)
path = str(config._inipath)
popen = make_popen(["filtermail", path, filtermail_mode])
# Wait for filtermail to start accepting connections
import socket
import time
for _ in range(50): # 5 second timeout
try:
sock = socket.create_connection(("127.0.0.1", smtp_inject_port), timeout=0.1)
sock.close()
break
except OSError:  # includes ConnectionRefusedError
time.sleep(0.1)
else:
pytest.fail("filtermail failed to start accepting connections")
addr = f"user1@{config.mail_domain}"
config.get_user(addr).set_password("l1k2j3l1k2j3l")
# send encrypted mail
data = str(maildata("encrypted.eml", from_addr=addr, to_addr=addr))
client = smtplib.SMTP("localhost", smtp_inject_port)
client.sendmail(addr, [addr], data)
assert len(smtpserver.outbox) == 1
# send un-encrypted mail that errors
data = str(maildata("fake-encrypted.eml", from_addr=addr, to_addr=addr))
with pytest.raises(smtplib.SMTPDataError) as e:
client.sendmail(addr, [addr], data)
assert e.value.smtp_code == 523
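The startup-polling loop in the test above can be factored into a small reusable helper; a sketch (the helper name and timeout defaults are illustrative, not part of the project):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Poll until a TCP endpoint accepts connections, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.1):
                return True
        except OSError:
            time.sleep(0.1)
    return False
```

A test would then read `assert wait_for_port("127.0.0.1", smtp_inject_port)` instead of the inline loop.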

View File

@@ -0,0 +1,38 @@
import time
from chatmaild.doveauth import AuthDictProxy
from chatmaild.lastlogin import (
LastLoginDictProxy,
)
def test_handle_dovecot_request_last_login(testaddr, example_config):
dictproxy = LastLoginDictProxy(config=example_config)
authproxy = AuthDictProxy(config=example_config)
authproxy.lookup_passdb(testaddr, "1l2k3j1l2k3jl123")
dictproxy_transactions = {}
# Begin transaction
tx = "1111"
msg = f"B{tx}\t{testaddr}"
res = dictproxy.handle_dovecot_request(msg, dictproxy_transactions)
assert not res
assert dictproxy_transactions == {tx: dict(addr=testaddr, res="O\n")}
# set last-login info for user
user = dictproxy.config.get_user(testaddr)
timestamp = int(time.time())
msg = f"S{tx}\tshared/last-login/{testaddr}\t{timestamp}"
res = dictproxy.handle_dovecot_request(msg, dictproxy_transactions)
assert not res
assert len(dictproxy_transactions) == 1
read_timestamp = user.get_last_login_timestamp()
assert read_timestamp == timestamp // 86400 * 86400
# finish transaction
msg = f"C{tx}"
res = dictproxy.handle_dovecot_request(msg, dictproxy_transactions)
assert res == "O\n"
assert len(dictproxy_transactions) == 0
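The test above exercises the dovecot dict-proxy transaction flow: `B` begins a transaction and gets no immediate reply, `S` records a pending write, and `C` commits and answers `O\n`. A minimal sketch of that dispatch (not the project's real handler, which also persists the writes):

```python
def handle_dict_request(line: str, transactions: dict):
    """Dispatch one dovecot dict-protocol line against a transaction table."""
    cmd, rest = line[0], line[1:]
    if cmd == "B":                        # B<txid>\t<addr>: begin transaction
        txid, addr = rest.split("\t", 1)
        transactions[txid] = dict(addr=addr, res="O\n", writes=[])
        return None                       # no immediate reply
    if cmd == "S":                        # S<txid>\t<key>[\t<value>]: buffer write
        parts = rest.split("\t")
        transactions[parts[0]]["writes"].append(parts[1:])
        return None
    if cmd == "C":                        # C<txid>: commit, reply with result
        return transactions.pop(rest)["res"]
    return "F\n"                          # unknown command: fail
```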

View File

@@ -0,0 +1,329 @@
import io
import time
import pytest
import requests
from chatmaild.metadata import (
Metadata,
MetadataDictProxy,
)
from chatmaild.notifier import (
Notifier,
NotifyThread,
PersistentQueueItem,
)
@pytest.fixture
def notifier(metadata):
queue_dir = metadata.vmail_dir.joinpath("pending_notifications")
queue_dir.mkdir()
return Notifier(queue_dir)
@pytest.fixture
def metadata(tmp_path):
vmail_dir = tmp_path.joinpath("vmaildir")
vmail_dir.mkdir()
return Metadata(vmail_dir)
@pytest.fixture
def dictproxy(notifier, metadata):
return MetadataDictProxy(notifier=notifier, metadata=metadata)
@pytest.fixture
def testaddr2():
return "user2@example.org"
@pytest.fixture
def token():
return "01234"
def get_mocked_requests(statuslist):
class ReqMock:
requests = []
def post(self, url, data, timeout):
self.requests.append((url, data, timeout))
res = statuslist.pop(0)
if isinstance(res, Exception):
raise res
class Result:
status_code = res
return Result()
return ReqMock()
def test_metadata_persistence(tmp_path, testaddr, testaddr2):
metadata1 = Metadata(tmp_path)
metadata2 = Metadata(tmp_path)
assert not metadata1.get_tokens_for_addr(testaddr)
assert not metadata2.get_tokens_for_addr(testaddr)
metadata1.add_token_to_addr(testaddr, "01234")
metadata1.add_token_to_addr(testaddr2, "456")
assert metadata2.get_tokens_for_addr(testaddr) == ["01234"]
assert metadata2.get_tokens_for_addr(testaddr2) == ["456"]
metadata2.remove_token_from_addr(testaddr, "01234")
assert not metadata1.get_tokens_for_addr(testaddr)
assert metadata1.get_tokens_for_addr(testaddr2) == ["456"]
def test_remove_nonexisting(metadata, tmp_path, testaddr):
metadata.add_token_to_addr(testaddr, "123")
metadata.remove_token_from_addr(testaddr, "1l23k1l2k3")
assert metadata.get_tokens_for_addr(testaddr) == ["123"]
def test_notifier_remove_without_set(metadata, testaddr):
metadata.remove_token_from_addr(testaddr, "123")
assert not metadata.get_tokens_for_addr(testaddr)
def test_handle_dovecot_request_lookup_fails(dictproxy, testaddr):
transactions = {}
res = dictproxy.handle_dovecot_request(
f"Lpriv/123/chatmail\t{testaddr}", transactions
)
assert res == "N\n"
def test_handle_dovecot_request_happy_path(dictproxy, testaddr, token):
metadata = dictproxy.metadata
transactions = {}
notifier = dictproxy.notifier
# set device token in a transaction
tx = "1111"
msg = f"B{tx}\t{testaddr}"
res = dictproxy.handle_dovecot_request(msg, transactions)
assert not res and not metadata.get_tokens_for_addr(testaddr)
assert transactions == {tx: dict(addr=testaddr, res="O\n")}
msg = f"S{tx}\tpriv/guid00/devicetoken\t{token}"
res = dictproxy.handle_dovecot_request(msg, transactions)
assert not res
assert len(transactions) == 1
assert metadata.get_tokens_for_addr(testaddr) == [token]
msg = f"C{tx}"
res = dictproxy.handle_dovecot_request(msg, transactions)
assert res == "O\n"
assert len(transactions) == 0
assert metadata.get_tokens_for_addr(testaddr) == [token]
# trigger notification for incoming message
tx2 = "2222"
assert dictproxy.handle_dovecot_request(f"B{tx2}\t{testaddr}", transactions) is None
msg = f"S{tx2}\tpriv/guid00/messagenew"
assert dictproxy.handle_dovecot_request(msg, transactions) is None
queue_item = notifier.retry_queues[0].get()[1]
assert queue_item.token == token
assert dictproxy.handle_dovecot_request(f"C{tx2}", transactions) == "O\n"
assert not transactions
assert queue_item.path.exists()
def test_handle_dovecot_protocol_set_devicetoken(dictproxy):
rfile = io.BytesIO(
b"\n".join(
[
b"HELLO",
b"Btx00\tuser@example.org",
b"Stx00\tpriv/guid00/devicetoken\t01234",
b"Ctx00",
]
)
)
wfile = io.BytesIO()
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b"O\n"
assert dictproxy.metadata.get_tokens_for_addr("user@example.org") == ["01234"]
def test_handle_dovecot_protocol_set_get_devicetoken(dictproxy):
rfile = io.BytesIO(
b"\n".join(
[
b"HELLO",
b"Btx00\tuser@example.org",
b"Stx00\tpriv/guid00/devicetoken\t01234",
b"Ctx00",
]
)
)
wfile = io.BytesIO()
dictproxy.loop_forever(rfile, wfile)
assert dictproxy.metadata.get_tokens_for_addr("user@example.org") == ["01234"]
assert wfile.getvalue() == b"O\n"
rfile = io.BytesIO(
b"\n".join([b"HELLO", b"Lpriv/0123/devicetoken\tuser@example.org"])
)
wfile = io.BytesIO()
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b"O01234\n"
def test_handle_dovecot_protocol_iterate(dictproxy):
rfile = io.BytesIO(
b"\n".join(
[
b"H",
b"I9\t0\tpriv/5cbe730f146fea6535be0d003dd4fc98/\tci-2dzsrs@nine.testrun.org",
]
)
)
wfile = io.BytesIO()
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b"\n"
def test_notifier_thread_deletes_persistent_file(metadata, notifier, testaddr):
reqmock = get_mocked_requests([200])
metadata.add_token_to_addr(testaddr, "01234")
notifier.new_message_for_addr(testaddr, metadata)
NotifyThread(notifier, 0, None).retry_one(reqmock)
url, data, timeout = reqmock.requests[0]
assert data == "01234"
assert metadata.get_tokens_for_addr(testaddr) == ["01234"]
notifier.requeue_persistent_queue_items()
assert notifier.retry_queues[0].qsize() == 0
@pytest.mark.parametrize("status", [requests.exceptions.RequestException(), 404, 500])
def test_notifier_thread_connection_failures(
metadata, notifier, testaddr, status, caplog
):
"""test that tokens keep getting retried until they are given up."""
metadata.add_token_to_addr(testaddr, "01234")
notifier.new_message_for_addr(testaddr, metadata)
notifier.NOTIFICATION_RETRY_DELAY = 5
max_tries = len(notifier.retry_queues)
for i in range(max_tries):
caplog.clear()
reqmock = get_mocked_requests([status])
sleep_calls = []
NotifyThread(notifier, i, None).retry_one(reqmock, sleep=sleep_calls.append)
assert notifier.retry_queues[i].qsize() == 0
assert "request failed" in caplog.records[0].msg
if i > 0:
assert len(sleep_calls) == 1
if i + 1 < max_tries:
assert notifier.retry_queues[i + 1].qsize() == 1
assert len(caplog.records) == 1
else:
assert len(caplog.records) == 2
assert "deadline" in caplog.records[1].msg
notifier.requeue_persistent_queue_items()
assert notifier.retry_queues[0].qsize() == 0
def test_requeue_removes_tmp_files(notifier, metadata, testaddr, caplog):
metadata.add_token_to_addr(testaddr, "01234")
notifier.new_message_for_addr(testaddr, metadata)
p = notifier.queue_dir.joinpath("1203981203.tmp")
p.touch()
notifier2 = notifier.__class__(notifier.queue_dir)
notifier2.requeue_persistent_queue_items()
assert "spurious" in caplog.records[0].msg
assert not p.exists()
assert notifier2.retry_queues[0].qsize() == 1
when, queue_item = notifier2.retry_queues[0].get()
assert when <= int(time.time())
assert queue_item.addr == testaddr
def test_requeue_removes_invalid_files(notifier, metadata, testaddr, caplog):
metadata.add_token_to_addr(testaddr, "01234")
notifier.new_message_for_addr(testaddr, metadata)
# empty/invalid files should be ignored
p = notifier.queue_dir.joinpath("1203981203")
p.touch()
notifier2 = notifier.__class__(notifier.queue_dir)
notifier2.requeue_persistent_queue_items()
assert "spurious" in caplog.records[0].msg
assert not p.exists()
assert notifier2.retry_queues[0].qsize() == 1
when, queue_item = notifier2.retry_queues[0].get()
assert when <= int(time.time())
assert queue_item.addr == testaddr
def test_start_and_stop_notification_threads(notifier, testaddr):
threads = notifier.start_notification_threads(None)
for retry_num, threadlist in threads.items():
for t in threadlist:
t.stop()
t.join()
def test_multi_device_notifier(metadata, notifier, testaddr):
metadata.add_token_to_addr(testaddr, "01234")
metadata.add_token_to_addr(testaddr, "56789")
notifier.new_message_for_addr(testaddr, metadata)
reqmock = get_mocked_requests([200, 200])
NotifyThread(notifier, 0, None).retry_one(reqmock)
NotifyThread(notifier, 0, None).retry_one(reqmock)
assert notifier.retry_queues[0].qsize() == 0
assert notifier.retry_queues[1].qsize() == 0
url, data, timeout = reqmock.requests[0]
assert data == "01234"
url, data, timeout = reqmock.requests[1]
assert data == "56789"
assert metadata.get_tokens_for_addr(testaddr) == ["01234", "56789"]
def test_notifier_thread_run_gone_removes_token(metadata, notifier, testaddr):
metadata.add_token_to_addr(testaddr, "01234")
metadata.add_token_to_addr(testaddr, "45678")
notifier.new_message_for_addr(testaddr, metadata)
reqmock = get_mocked_requests([410, 200])
NotifyThread(notifier, 0, metadata.remove_token_from_addr).retry_one(reqmock)
NotifyThread(notifier, 0, None).retry_one(reqmock)
url, data, timeout = reqmock.requests[0]
assert data == "01234"
url, data, timeout = reqmock.requests[1]
assert data == "45678"
assert metadata.get_tokens_for_addr(testaddr) == ["45678"]
assert notifier.retry_queues[0].qsize() == 0
assert notifier.retry_queues[1].qsize() == 0
def test_persistent_queue_items(tmp_path, testaddr, token):
queue_item = PersistentQueueItem.create(tmp_path, testaddr, 432, token)
assert queue_item.addr == testaddr
assert queue_item.start_ts == 432
assert queue_item.token == token
item2 = PersistentQueueItem.read_from_path(queue_item.path)
assert item2.addr == testaddr
assert item2.start_ts == 432
assert item2.token == token
assert item2 == queue_item
item2.delete()
assert not item2.path.exists()
assert not queue_item < item2 and not item2 < queue_item
def test_iroh_relay(dictproxy):
rfile = io.BytesIO(
b"\n".join(
[
b"H",
b"Lshared/0123/vendor/vendor.dovecot/pvt/server/vendor/deltachat/irohrelay\tuser@example.org",
]
)
)
wfile = io.BytesIO()
dictproxy.iroh_relay = "https://example.org/"
dictproxy.loop_forever(rfile, wfile)
assert wfile.getvalue() == b"Ohttps://example.org/\n"
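The `PersistentQueueItem` tests above pin down an interface: items are created on disk, read back, compared, ordered only by `start_ts`, and deleted. A compact sketch satisfying those tests (the filename scheme and file layout are assumptions, not the project's on-disk format):

```python
from dataclasses import dataclass, field
from pathlib import Path


@dataclass(order=True)
class PersistentQueueItem:
    """A notification queue item persisted as one small file.

    Only start_ts takes part in ordering/equality, matching the tests."""

    path: Path = field(compare=False)
    addr: str = field(compare=False)
    start_ts: int = 0
    token: str = field(compare=False, default="")

    @classmethod
    def create(cls, queue_dir: Path, addr: str, start_ts: int, token: str):
        path = queue_dir.joinpath(f"{start_ts}-{token}")
        path.write_text(f"{addr}\n{start_ts}\n{token}")
        return cls(path, addr, start_ts, token)

    @classmethod
    def read_from_path(cls, path: Path):
        addr, start_ts, token = path.read_text().split("\n")
        return cls(path, addr, int(start_ts), token)

    def delete(self):
        self.path.unlink(missing_ok=True)
```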

View File

@@ -2,15 +2,23 @@ from chatmaild.metrics import main
def test_main(tmp_path, capsys):
paths = []
for x in ("ci-asllkj", "ac_12l3kj", "qweqwe", "ci-l1k2j31l2k3"):
p = tmp_path.joinpath(x)
p.mkdir()
p.joinpath("cur").mkdir()
paths.append(p)
tmp_path.joinpath("nomailbox").mkdir()
main(tmp_path)
out, _ = capsys.readouterr()
d = {}
for line in out.split("\n"):
if line.strip() and not line.startswith("#"):
name, num = line.split()
d[name] = int(num)
assert d["accounts"] == 4
assert d["ci_accounts"] == 3
assert d["nonci_accounts"] == 1

View File

@@ -0,0 +1,67 @@
import sqlite3
from chatmaild.migrate_db import migrate_from_db_to_maildir
def test_migration_not_exists(tmp_path, example_config):
example_config.passdb_path = tmp_path.joinpath("sqlite")
# migrating from a non-existent passdb must be a no-op
migrate_from_db_to_maildir(example_config)
def test_migration(tmp_path, example_config, caplog):
passdb_path = tmp_path.joinpath("passdb.sqlite")
uri = f"file:{passdb_path}?mode=rwc"
sqlconn = sqlite3.connect(uri, timeout=60, uri=True)
sqlconn.execute(
"""
CREATE TABLE users (
addr TEXT PRIMARY KEY,
password TEXT,
last_login INTEGER
)
"""
)
all = {}
for i in range(500):
values = (f"somsom{i:03}@example.org", f"passwo{i:03}", i * 86400)
sqlconn.execute(
"""
INSERT INTO users (addr, password, last_login)
VALUES (?, ?, ?)""",
values,
)
all[values[0]] = values[1:]
for i in range(500):
values = (f"pompom{i:03}@example.org", f"wopass{i:03}", "")
sqlconn.execute(
"""
INSERT INTO users (addr, password, last_login)
VALUES (?, ?, ?)""",
values,
)
all[values[0]] = values[1:]
sqlconn.commit()
sqlconn.close()
assert passdb_path.stat().st_size > 10000
example_config.passdb_path = passdb_path
assert not caplog.records
migrate_from_db_to_maildir(example_config, chunking=500)
assert len(caplog.records) > 3
for path in example_config.mailboxes_dir.iterdir():
if "@" not in path.name:
continue
password, last_login = all.pop(path.name)
user = example_config.get_user(path.name)
if last_login:
assert user.get_last_login_timestamp() == last_login
assert password == user.get_userdb_dict()["password"]
assert not all
assert not example_config.passdb_path.exists()
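The migration test passes `chunking=500`, suggesting rows are streamed in fixed-size chunks rather than loaded all at once. A sketch of that reading pattern (the helper is illustrative; only the table layout is taken from the test):

```python
import sqlite3


def iter_users_chunked(passdb_path, chunking=500):
    """Stream (addr, password, last_login) rows from the legacy passdb in
    fixed-size chunks so the migration never holds all users in memory."""
    sqlconn = sqlite3.connect(str(passdb_path))
    try:
        cursor = sqlconn.execute("SELECT addr, password, last_login FROM users")
        while True:
            rows = cursor.fetchmany(chunking)
            if not rows:
                break
            yield from rows
    finally:
        sqlconn.close()
```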

View File

@@ -0,0 +1,56 @@
def test_login_timestamp(testaddr, example_config):
user = example_config.get_user(testaddr)
user.set_password("someeqkjwelkqwjleqwe")
user.set_last_login_timestamp(100000)
assert user.get_last_login_timestamp() == 86400
user.set_last_login_timestamp(200000)
assert user.get_last_login_timestamp() == 86400 * 2
def test_get_user_dict_not_set(testaddr, example_config, caplog):
user = example_config.get_user(testaddr)
assert not caplog.records
assert user.get_userdb_dict() == {}
assert len(caplog.records) == 0
user.set_password("")
assert user.get_userdb_dict() == {}
assert len(caplog.records) == 1
def test_get_user_dict(make_config, tmp_path):
config = make_config("something.testrun.org")
addr = "user1@something.org"
user = config.get_user(addr)
enc_password = "l1k2j31lk2j3l1k23j123"
user.set_password(enc_password)
data = user.get_userdb_dict()
assert addr in str(data["home"])
assert data["uid"] == "vmail"
assert data["gid"] == "vmail"
assert data["password"] == enc_password
def test_no_mailboxes_dir(testaddr, example_config, tmp_path):
p = tmp_path.joinpath("a", "mailboxes")
example_config.mailboxes_dir = p
user = example_config.get_user(testaddr)
user.set_password("someeqkjwelkqwjleqwe")
user.set_last_login_timestamp(100000)
assert user.get_last_login_timestamp() == 86400
def test_set_get_cleartext_flag(testaddr, example_config, tmp_path):
p = tmp_path.joinpath("a", "mailboxes")
example_config.mailboxes_dir = p
user = example_config.get_user(testaddr)
user.set_password("someeqkjwelkqwjleqwe")
user.set_last_login_timestamp(100000)
assert user.get_last_login_timestamp() == 86400
assert not user.is_incoming_cleartext_ok()
user.allow_incoming_cleartext()
assert user.is_incoming_cleartext_ok()

View File

@@ -0,0 +1,9 @@
#!/usr/bin/env python3
import socket
def turn_credentials() -> str:
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client_socket:
client_socket.connect("/run/chatmail-turn/turn.socket")
with client_socket.makefile("rb") as file:
return file.readline().decode("utf-8").strip()
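`turn_credentials` reads a single newline-terminated line from a Unix socket. The same read pattern can be demonstrated with `socketpair`, without the daemon (the credential string below is a made-up placeholder):

```python
import socket


def read_line(sock) -> str:
    """Read one newline-terminated line from a connected socket."""
    with sock.makefile("rb") as file:
        return file.readline().decode("utf-8").strip()


# Simulate the daemon end of /run/chatmail-turn/turn.socket:
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
b.sendall(b"user:secret:turn.example.org\n")
b.close()
credentials = read_line(a)
```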

View File

@@ -0,0 +1,82 @@
import logging
import os
from chatmaild.filedict import write_bytes_atomic
def get_daytimestamp(timestamp) -> int:
return int(timestamp) // 86400 * 86400
class User:
def __init__(self, maildir, addr, password_path, uid, gid):
self.maildir = maildir
self.addr = addr
self.password_path = password_path
self.enforce_E2EE_path = maildir.joinpath("enforceE2EEincoming")
self.uid = uid
self.gid = gid
@property
def can_track(self):
return "@" in self.addr
def get_userdb_dict(self):
"""Return a non-empty dovecot 'userdb' style dict
if the user has an existing non-empty password"""
try:
pw = self.password_path.read_text()
except FileNotFoundError:
return {}
if not pw:
logging.error(f"password is empty for: {self.addr}")
return {}
home = str(self.maildir)
return dict(addr=self.addr, home=home, uid=self.uid, gid=self.gid, password=pw)
def is_incoming_cleartext_ok(self):
return not self.enforce_E2EE_path.exists()
def allow_incoming_cleartext(self):
if self.enforce_E2EE_path.exists():
self.enforce_E2EE_path.unlink()
def set_password(self, enc_password):
"""Set the specified password for this user.
This method can be called concurrently
but there is no guarantee which of the password-set calls will win.
"""
self.maildir.mkdir(exist_ok=True, parents=True)
password = enc_password.encode("ascii")
try:
write_bytes_atomic(self.password_path, password)
except PermissionError:
logging.error(f"could not write password for: {self.addr}")
raise
self.enforce_E2EE_path.touch()
def set_last_login_timestamp(self, timestamp):
"""Track login time with daily granularity
to minimize touching files and to minimize metadata leakage."""
if not self.can_track:
return
try:
mtime = int(os.stat(self.password_path).st_mtime)
except FileNotFoundError:
logging.error(f"Cannot get last login timestamp for {self.addr}")
return
timestamp = get_daytimestamp(timestamp)
if mtime != timestamp:
os.utime(self.password_path, (timestamp, timestamp))
def get_last_login_timestamp(self):
if self.can_track:
try:
return int(self.password_path.stat().st_mtime)
except FileNotFoundError:
pass

cliff.toml Normal file
View File

@@ -0,0 +1,94 @@
# git-cliff ~ configuration file
# https://git-cliff.org/docs/configuration
[changelog]
# A Tera template to be rendered for each release in the changelog.
# See https://keats.github.io/tera/docs/#introduction
body = """
{% if version %}\
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}\
## [unreleased]
{% endif %}\
{% for group, commits in commits | group_by(attribute="group") %}
### {{ group | striptags | trim | upper_first }}
{% for commit in commits %}
- {% if commit.scope %}*({{ commit.scope }})* {% endif %}\
{% if commit.breaking %}[**breaking**] {% endif %}\
{{ commit.message | upper_first }}\
{% endfor %}
{% endfor %}
"""
# Remove leading and trailing whitespaces from the changelog's body.
trim = true
# Render body even when there are no releases to process.
render_always = true
# An array of regex based postprocessors to modify the changelog.
postprocessors = [
# Replace the placeholder <REPO> with a URL.
#{ pattern = '<REPO>', replace = "https://github.com/orhun/git-cliff" },
]
# render body even when there are no releases to process
# render_always = true
# output file path
# output = "test.md"
[git]
# Parse commits according to the conventional commits specification.
# See https://www.conventionalcommits.org
conventional_commits = true
# Exclude commits that do not match the conventional commits specification.
filter_unconventional = true
# Require all commits to be conventional.
# Takes precedence over filter_unconventional.
require_conventional = false
# Split commits on newlines, treating each line as an individual commit.
split_commits = false
# An array of regex based parsers to modify commit messages prior to further processing.
commit_preprocessors = [
# Replace issue numbers with link templates to be updated in `changelog.postprocessors`.
#{ pattern = '\((\w+\s)?#([0-9]+)\)', replace = "([#${2}](<REPO>/issues/${2}))"},
# Check spelling of the commit message using https://github.com/crate-ci/typos.
# If the spelling is incorrect, it will be fixed automatically.
#{ pattern = '.*', replace_command = 'typos --write-changes -' },
]
# Prevent commits that are breaking from being excluded by commit parsers.
protect_breaking_commits = false
# An array of regex based parsers for extracting data from the commit message.
# Assigns commits to groups.
# Optionally sets the commit's scope and can decide to exclude commits from further processing.
commit_parsers = [
{ message = "^feat", group = "Features" },
{ message = "^fix", group = "Bug Fixes" },
{ message = "^docs", group = "Documentation" },
{ message = "^perf", group = "Performance" },
{ message = "^refactor", group = "Refactor" },
{ message = "^style", group = "Styling" },
{ message = "^test", group = "Testing" },
{ message = "^chore\\(release\\): prepare for", skip = true },
{ message = "^chore\\(deps.*\\)", skip = true },
{ message = "^chore\\(pr\\)", skip = true },
{ message = "^chore\\(pull\\)", skip = true },
{ message = "^chore|^ci", group = "Miscellaneous Tasks" },
{ body = ".*security", group = "Security" },
{ message = "^revert", group = "Revert" },
{ message = ".*", group = "Other" },
]
# Exclude commits that are not matched by any commit parser.
filter_commits = false
# Fail on a commit that is not matched by any commit parser.
fail_on_unmatched_commit = false
# An array of link parsers for extracting external references, and turning them into URLs, using regex.
link_parsers = []
# Include only the tags that belong to the current branch.
use_branch_tags = false
# Order releases topologically instead of chronologically.
topo_order = false
# Order commits topologically instead of chronologically.
topo_order_commits = true
# Order of commits in each group/release within the changelog.
# Allowed values: newest, oldest
sort_commits = "oldest"
# Process submodules commits
recurse_submodules = false
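The `commit_parsers` table above is evaluated top to bottom and the first matching pattern wins, which is why the `chore(release)` skip rules must precede the generic `^chore|^ci` rule. A sketch of that first-match grouping (only a subset of the parsers, with `None` standing for "skip"; `filter_unconventional` handling is omitted):

```python
import re

# First matching parser wins, mirroring the order in cliff.toml above.
COMMIT_PARSERS = [
    (r"^feat", "Features"),
    (r"^fix", "Bug Fixes"),
    (r"^docs", "Documentation"),
    (r"^chore\(release\): prepare for", None),   # skipped commits
    (r"^chore|^ci", "Miscellaneous Tasks"),
    (r".*", "Other"),
]


def group_for(message: str):
    """Return the changelog group for a commit message, or None to skip."""
    for pattern, group in COMMIT_PARSERS:
        if re.match(pattern, message):
            return group
```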

View File

@@ -6,7 +6,7 @@ build-backend = "setuptools.build_meta"
name = "cmdeploy"
version = "0.2"
dependencies = [
"pyinfra>=3",
"pillow",
"qrcode",
"markdown",
@@ -16,9 +16,10 @@ dependencies = [
"build",
"tox",
"ruff",
"black",
"pytest",
"pytest-xdist",
"execnet",
"imap_tools",
]
[project.scripts]
@@ -30,3 +31,16 @@ cmdeploy = "cmdeploy.cmdeploy:main"
[tool.pytest.ini_options]
addopts = "-v -ra --strict-markers"
[tool.ruff]
lint.select = [
"F", # Pyflakes
"I", # isort
"PLC", # Pylint Convention
"PLE", # Pylint Error
"PLW", # Pylint Warning
]
lint.ignore = [
"PLC0415" # import-outside-top-level
]

View File

@@ -1,523 +0,0 @@
"""
Chat Mail pyinfra deploy.
"""
import sys
import importlib.resources
import subprocess
import shutil
import io
from pathlib import Path
from pyinfra import host
from pyinfra.operations import apt, files, server, systemd, pip
from pyinfra.facts.files import File
from pyinfra.facts.systemd import SystemdEnabled
from .acmetool import deploy_acmetool
from chatmaild.config import read_config, Config
def _build_chatmaild(dist_dir) -> Path:
dist_dir = Path(dist_dir).resolve()
if dist_dir.exists():
shutil.rmtree(dist_dir)
dist_dir.mkdir()
subprocess.check_output(
[sys.executable, "-m", "build", "-n"]
+ ["--sdist", "chatmaild", "--outdir", str(dist_dir)]
)
entries = list(dist_dir.iterdir())
assert len(entries) == 1
return entries[0]
def remove_legacy_artifacts():
# disable legacy doveauth-dictproxy.service
if host.get_fact(SystemdEnabled).get("doveauth-dictproxy.service"):
systemd.service(
name="Disable legacy doveauth-dictproxy.service",
service="doveauth-dictproxy.service",
running=False,
enabled=False,
)
def _install_remote_venv_with_chatmaild(config) -> None:
remove_legacy_artifacts()
dist_file = _build_chatmaild(dist_dir=Path("chatmaild/dist"))
remote_base_dir = "/usr/local/lib/chatmaild"
remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}"
remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
apt.packages(
name="apt install python3-virtualenv",
packages=["python3-virtualenv"],
)
files.put(
name="Upload chatmaild source package",
src=dist_file.open("rb"),
dest=remote_dist_file,
create_remote_dir=True,
**root_owned,
)
files.put(
name=f"Upload {remote_chatmail_inipath}",
src=config._getbytefile(),
dest=remote_chatmail_inipath,
**root_owned,
)
pip.virtualenv(
name=f"chatmaild virtualenv {remote_venv_dir}",
path=remote_venv_dir,
always_copy=True,
)
server.shell(
name=f"forced pip-install {dist_file.name}",
commands=[
f"{remote_venv_dir}/bin/pip install --force-reinstall {remote_dist_file}"
],
)
files.template(
src=importlib.resources.files(__package__).joinpath("metrics.cron.j2"),
dest="/etc/cron.d/chatmail-metrics",
user="root",
group="root",
mode="644",
config={
"mail_domain": config.mail_domain,
"execpath": f"{remote_venv_dir}/bin/chatmail-metrics",
},
)
# install systemd units
for fn in (
"doveauth",
"filtermail",
"echobot",
):
params = dict(
execpath=f"{remote_venv_dir}/bin/{fn}",
config_path=remote_chatmail_inipath,
remote_venv_dir=remote_venv_dir,
)
source_path = importlib.resources.files("chatmaild").joinpath(f"{fn}.service.f")
content = source_path.read_text().format(**params).encode()
files.put(
name=f"Upload {fn}.service",
src=io.BytesIO(content),
dest=f"/etc/systemd/system/{fn}.service",
**root_owned,
)
systemd.service(
name=f"Setup {fn} service",
service=f"{fn}.service",
running=True,
enabled=True,
restarted=True,
daemon_reload=True,
)
def _configure_opendkim(domain: str, dkim_selector: str = "dkim") -> bool:
"""Configures OpenDKIM"""
need_restart = False
main_config = files.template(
src=importlib.resources.files(__package__).joinpath("opendkim/opendkim.conf"),
dest="/etc/opendkim.conf",
user="root",
group="root",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= main_config.changed
files.directory(
name="Add opendkim directory to /etc",
path="/etc/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
keytable = files.template(
src=importlib.resources.files(__package__).joinpath("opendkim/KeyTable"),
dest="/etc/dkimkeys/KeyTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= keytable.changed
signing_table = files.template(
src=importlib.resources.files(__package__).joinpath("opendkim/SigningTable"),
dest="/etc/dkimkeys/SigningTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= signing_table.changed
files.directory(
name="Add opendkim socket directory to /var/spool/postfix",
path="/var/spool/postfix/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
if not host.get_fact(File, f"/etc/dkimkeys/{dkim_selector}.private"):
server.shell(
name="Generate OpenDKIM domain keys",
commands=[
f"opendkim-genkey -D /etc/dkimkeys -d {domain} -s {dkim_selector}"
],
_sudo=True,
_sudo_user="opendkim",
)
return need_restart
def _install_mta_sts_daemon() -> bool:
need_restart = False
config = files.put(
name="upload postfix-mta-sts-resolver config",
src=importlib.resources.files(__package__).joinpath(
"postfix/mta-sts-daemon.yml"
),
dest="/etc/mta-sts-daemon.yml",
user="root",
group="root",
mode="644",
)
need_restart |= config.changed
server.shell(
name="install postfix-mta-sts-resolver with pip",
commands=[
"python3 -m virtualenv /usr/local/lib/postfix-mta-sts-resolver",
"/usr/local/lib/postfix-mta-sts-resolver/bin/pip install postfix-mta-sts-resolver",
],
)
systemd_unit = files.put(
name="upload mta-sts-daemon systemd unit",
src=importlib.resources.files(__package__).joinpath(
"postfix/mta-sts-daemon.service"
),
dest="/etc/systemd/system/mta-sts-daemon.service",
user="root",
group="root",
mode="644",
)
need_restart |= systemd_unit.changed
return need_restart
def _configure_postfix(config: Config, debug: bool = False) -> bool:
"""Configures Postfix SMTP server."""
need_restart = False
main_config = files.template(
src=importlib.resources.files(__package__).joinpath("postfix/main.cf.j2"),
dest="/etc/postfix/main.cf",
user="root",
group="root",
mode="644",
config=config,
)
need_restart |= main_config.changed
master_config = files.template(
src=importlib.resources.files(__package__).joinpath("postfix/master.cf.j2"),
dest="/etc/postfix/master.cf",
user="root",
group="root",
mode="644",
debug=debug,
config=config,
)
need_restart |= master_config.changed
return need_restart
def _configure_dovecot(config: Config, debug: bool = False) -> bool:
"""Configures Dovecot IMAP server."""
need_restart = False
main_config = files.template(
src=importlib.resources.files(__package__).joinpath("dovecot/dovecot.conf.j2"),
dest="/etc/dovecot/dovecot.conf",
user="root",
group="root",
mode="644",
config=config,
debug=debug,
)
need_restart |= main_config.changed
auth_config = files.put(
src=importlib.resources.files(__package__).joinpath("dovecot/auth.conf"),
dest="/etc/dovecot/auth.conf",
user="root",
group="root",
mode="644",
)
need_restart |= auth_config.changed
files.template(
src=importlib.resources.files(__package__).joinpath("dovecot/expunge.cron.j2"),
dest="/etc/cron.d/expunge",
user="root",
group="root",
mode="644",
config=config,
)
# as per https://doc.dovecot.org/configuration_manual/os/
# it is recommended to set the following inotify limits
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
server.sysctl(
name=f"Change {key}",
key=key,
value=65535,
persist=True,
)
return need_restart
def _configure_nginx(domain: str, debug: bool = False) -> bool:
"""Configures nginx HTTP server."""
need_restart = False
main_config = files.template(
src=importlib.resources.files(__package__).joinpath("nginx/nginx.conf.j2"),
dest="/etc/nginx/nginx.conf",
user="root",
group="root",
mode="644",
config={"domain_name": domain},
)
need_restart |= main_config.changed
autoconfig = files.template(
src=importlib.resources.files(__package__).joinpath("nginx/autoconfig.xml.j2"),
dest="/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
user="root",
group="root",
mode="644",
config={"domain_name": domain},
)
need_restart |= autoconfig.changed
mta_sts_config = files.template(
src=importlib.resources.files(__package__).joinpath("nginx/mta-sts.txt.j2"),
dest="/var/www/html/.well-known/mta-sts.txt",
user="root",
group="root",
mode="644",
config={"domain_name": domain},
)
need_restart |= mta_sts_config.changed
# install CGI newemail script
#
cgi_dir = "/usr/lib/cgi-bin"
files.directory(
name=f"Ensure {cgi_dir} exists",
path=cgi_dir,
user="root",
group="root",
)
files.put(
name="Upload cgi newemail.py script",
src=importlib.resources.files("chatmaild").joinpath("newemail.py").open("rb"),
dest=f"{cgi_dir}/newemail.py",
user="root",
group="root",
mode="755",
)
return need_restart
def check_config(config):
mail_domain = config.mail_domain
if mail_domain != "testrun.org" and not mail_domain.endswith(".testrun.org"):
blocked_words = "merlinux schmieder testrun.org".split()
for value in config.__dict__.values():
if any(x in str(value) for x in blocked_words):
raise ValueError(
f"please set your own privacy contacts/addresses in {config._inipath}"
)
return config
def deploy_chatmail(config_path: Path) -> None:
"""Deploy a chat-mail instance.
:param config_path: path to chatmail.ini
"""
config = read_config(config_path)
check_config(config)
mail_domain = config.mail_domain
from .www import build_webpages
apt.update(name="apt update", cache_time=24 * 3600)
server.group(name="Create vmail group", group="vmail", system=True)
server.user(name="Create vmail user", user="vmail", group="vmail", system=True)
server.group(name="Create opendkim group", group="opendkim", system=True)
server.user(
name="Add postfix user to opendkim group for socket access",
user="postfix",
groups=["opendkim"],
system=True,
)
# Run local DNS resolver `unbound`.
# `resolvconf` takes care of setting up /etc/resolv.conf
# to use 127.0.0.1 as the resolver.
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
)
server.shell(
name="Generate root keys for validating DNSSEC",
commands=["unbound-anchor -a /var/lib/unbound/root.key || true"],
)
systemd.service(
name="Start and enable unbound",
service="unbound.service",
running=True,
enabled=True,
)
# Deploy acmetool to have TLS certificates.
deploy_acmetool(nginx_hook=True, domains=[mail_domain, f"mta-sts.{mail_domain}"])
apt.packages(
name="Install Postfix",
packages="postfix",
)
apt.packages(
name="Install Dovecot",
packages=["dovecot-imapd", "dovecot-lmtpd"],
)
apt.packages(
name="Install OpenDKIM",
packages=[
"opendkim",
"opendkim-tools",
],
)
apt.packages(
name="Install nginx",
packages=["nginx"],
)
apt.packages(
name="Install fcgiwrap",
packages=["fcgiwrap"],
)
www_path = importlib.resources.files(__package__).joinpath("../../../www").resolve()
build_dir = www_path.joinpath("build")
src_dir = www_path.joinpath("src")
build_webpages(src_dir, build_dir, config)
files.rsync(f"{build_dir}/", "/var/www/html", flags=["-avz"])
_install_remote_venv_with_chatmaild(config)
debug = False
dovecot_need_restart = _configure_dovecot(config, debug=debug)
postfix_need_restart = _configure_postfix(config, debug=debug)
opendkim_need_restart = _configure_opendkim(mail_domain)
mta_sts_need_restart = _install_mta_sts_daemon()
nginx_need_restart = _configure_nginx(mail_domain)
systemd.service(
name="Start and enable OpenDKIM",
service="opendkim.service",
running=True,
enabled=True,
restarted=opendkim_need_restart,
)
systemd.service(
name="Start and enable MTA-STS daemon",
service="mta-sts-daemon.service",
daemon_reload=True,
running=True,
enabled=True,
restarted=mta_sts_need_restart,
)
systemd.service(
name="Start and enable Postfix",
service="postfix.service",
running=True,
enabled=True,
restarted=postfix_need_restart,
)
systemd.service(
name="Start and enable Dovecot",
service="dovecot.service",
running=True,
enabled=True,
restarted=dovecot_need_restart,
)
systemd.service(
name="Start and enable nginx",
service="nginx.service",
running=True,
enabled=True,
restarted=nginx_need_restart,
)
# This file is used by auth proxy.
# https://wiki.debian.org/EtcMailName
server.shell(
name="Setup /etc/mailname",
commands=[f"echo {mail_domain} >/etc/mailname; chmod 644 /etc/mailname"],
)
journald_conf = files.put(
name="Configure journald",
src=importlib.resources.files(__package__).joinpath("journald.conf"),
dest="/etc/systemd/journald.conf",
user="root",
group="root",
mode="644",
)
systemd.service(
name="Start and enable journald",
service="systemd-journald.service",
running=True,
enabled=True,
restarted=journald_conf,
)


@@ -1,78 +1,141 @@
import importlib.resources
from pyinfra.operations import apt, files, systemd, server
from pyinfra import host
from pyinfra.facts.systemd import SystemdStatus
from pyinfra.operations import apt, files, server, systemd
from ..basedeploy import Deployer
def deploy_acmetool(nginx_hook=False, email="", domains=[]):
"""Deploy acmetool."""
apt.packages(
name="Install acmetool",
packages=["acmetool"],
)
class AcmetoolDeployer(Deployer):
def __init__(self, email, domains):
self.domains = domains
self.email = email
self.need_restart_redirector = False
self.need_restart_reconcile_service = False
self.need_restart_reconcile_timer = False
files.put(
src=importlib.resources.files(__package__).joinpath("acmetool.cron").open("rb"),
dest="/etc/cron.d/acmetool",
user="root",
group="root",
mode="644",
)
def install(self):
apt.packages(
name="Install acmetool",
packages=["acmetool"],
)
files.file(
name="Remove old acmetool cronjob; it is replaced with a systemd timer.",
path="/etc/cron.d/acmetool",
present=False,
)
if nginx_hook:
files.put(
name="Install acmetool hook.",
src=importlib.resources.files(__package__)
.joinpath("acmetool.hook")
.open("rb"),
dest="/usr/lib/acme/hooks/nginx",
dest="/etc/acme/hooks/nginx",
user="root",
group="root",
mode="744",
mode="755",
)
files.file(
name="Remove acmetool hook from the wrong location where it was previously installed.",
path="/usr/lib/acme/hooks/nginx",
present=False,
)
files.template(
src=importlib.resources.files(__package__).joinpath("response-file.yaml.j2"),
dest="/var/lib/acme/conf/responses",
user="root",
group="root",
mode="644",
email=email,
)
def configure(self):
files.template(
src=importlib.resources.files(__package__).joinpath(
"response-file.yaml.j2"
),
dest="/var/lib/acme/conf/responses",
user="root",
group="root",
mode="644",
email=self.email,
)
files.template(
src=importlib.resources.files(__package__).joinpath("target.yaml.j2"),
dest="/var/lib/acme/conf/target",
user="root",
group="root",
mode="644",
)
files.template(
src=importlib.resources.files(__package__).joinpath("target.yaml.j2"),
dest="/var/lib/acme/conf/target",
user="root",
group="root",
mode="644",
)
service_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-redirector.service"
),
dest="/etc/systemd/system/acmetool-redirector.service",
user="root",
group="root",
mode="644",
)
if host.get_fact(SystemdStatus).get("nginx.service"):
server.shell(
name=f"Remove old acmetool desired files for {self.domains[0]}",
commands=[f"rm -f /var/lib/acme/desired/{self.domains[0]}-*"],
)
files.template(
src=importlib.resources.files(__package__).joinpath("desired.yaml.j2"),
dest=f"/var/lib/acme/desired/{self.domains[0]}",  # domains[0] is the mail domain
user="root",
group="root",
mode="644",
domains=self.domains,
)
service_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-redirector.service"
),
dest="/etc/systemd/system/acmetool-redirector.service",
user="root",
group="root",
mode="644",
)
self.need_restart_redirector = service_file.changed
reconcile_service_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-reconcile.service"
),
dest="/etc/systemd/system/acmetool-reconcile.service",
user="root",
group="root",
mode="644",
)
self.need_restart_reconcile_service = reconcile_service_file.changed
reconcile_timer_file = files.put(
src=importlib.resources.files(__package__).joinpath(
"acmetool-reconcile.timer"
),
dest="/etc/systemd/system/acmetool-reconcile.timer",
user="root",
group="root",
mode="644",
)
self.need_restart_reconcile_timer = reconcile_timer_file.changed
def activate(self):
systemd.service(
name="Stop nginx service to free port 80",
service="nginx",
running=False,
name="Setup acmetool-redirector service",
service="acmetool-redirector.service",
running=True,
enabled=True,
restarted=self.need_restart_redirector,
)
self.need_restart_redirector = False
systemd.service(
name="Setup acmetool-redirector service",
service="acmetool-redirector.service",
running=True,
enabled=True,
restarted=service_file.changed,
)
systemd.service(
name="Setup acmetool-reconcile service",
service="acmetool-reconcile.service",
running=False,
enabled=False,
daemon_reload=self.need_restart_reconcile_service,
)
self.need_restart_reconcile_service = False
server.shell(
name=f"Request certificate for: { ', '.join(domains) }",
commands=[f"acmetool want { ' '.join(domains)}"],
)
systemd.service(
name="Setup acmetool-reconcile timer",
service="acmetool-reconcile.timer",
running=True,
enabled=True,
daemon_reload=self.need_restart_reconcile_timer,
)
self.need_restart_reconcile_timer = False
server.shell(
name=f"Reconcile certificates for: {', '.join(self.domains)}",
commands=["acmetool --batch --xlog.severity=debug reconcile"],
)


@@ -0,0 +1,8 @@
[Unit]
Description=Renew TLS certificates with acmetool
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/bin/acmetool --batch reconcile


@@ -0,0 +1,8 @@
[Unit]
Description=Renew TLS certificates with acmetool
[Timer]
OnCalendar=*-*-* 16:20:00
[Install]
WantedBy=timers.target


@@ -1,4 +0,0 @@
SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
MAILTO=root
20 16 * * * root /usr/bin/acmetool --batch reconcile


@@ -3,3 +3,5 @@ set -e
EVENT_NAME="$1"
[ "$EVENT_NAME" = "live-updated" ] || exit 42
systemctl restart nginx.service
systemctl reload dovecot.service
systemctl reload postfix.service


@@ -0,0 +1,6 @@
satisfy:
names:
{%- for domain in domains %}
- {{ domain }}
{%- endfor %}


@@ -1,2 +1,2 @@
"acme-enter-email": "{{ email }}"
"acme-agreement:https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf": true
"acme-agreement:https://letsencrypt.org/documents/LE-SA-v1.6-August-18-2025.pdf": true


@@ -1,7 +1,8 @@
request:
provider: https://acme-v02.api.letsencrypt.org/directory
key:
type: rsa
type: ecdsa
ecdsa-curve: nistp256
challenge:
webroot-paths:
- /var/www/html/.well-known/acme-challenge


@@ -0,0 +1,116 @@
import importlib.resources
import io
import os
from pyinfra.operations import files, server, systemd
def has_systemd():
"""Returns False during Docker image builds or any other non-systemd environment."""
return os.path.isdir("/run/systemd/system")
def get_resource(arg, pkg=__package__):
return importlib.resources.files(pkg).joinpath(arg)
def configure_remote_units(mail_domain, units) -> None:
remote_base_dir = "/usr/local/lib/chatmaild"
remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
# install systemd units
for fn in units:
params = dict(
execpath=f"{remote_venv_dir}/bin/{fn}",
config_path=remote_chatmail_inipath,
remote_venv_dir=remote_venv_dir,
mail_domain=mail_domain,
)
basename = fn if "." in fn else f"{fn}.service"
source_path = get_resource(f"service/{basename}.f")
content = source_path.read_text().format(**params).encode()
files.put(
name=f"Upload {basename}",
src=io.BytesIO(content),
dest=f"/etc/systemd/system/{basename}",
**root_owned,
)
def activate_remote_units(units) -> None:
# activate systemd units
for fn in units:
basename = fn if "." in fn else f"{fn}.service"
if fn == "chatmail-expire" or fn == "chatmail-fsreport":
# don't auto-start but let the corresponding timer trigger execution
enabled = False
else:
enabled = True
systemd.service(
name=f"Setup {basename}",
service=basename,
running=enabled,
enabled=enabled,
restarted=enabled,
daemon_reload=True,
)
class Deployment:
def install(self, deployer):
# optional 'required_users' contains a list of (user, group, secondary-group-list) tuples.
# If the group is None, no group is created corresponding to that user.
# If the secondary group list is not None, all listed groups are created as well.
required_users = getattr(deployer, "required_users", [])
for user, group, groups in required_users:
if group is not None:
server.group(
name="Create {} group".format(group), group=group, system=True
)
if groups is not None:
for group2 in groups:
server.group(
name="Create {} group".format(group2), group=group2, system=True
)
server.user(
name="Create {} user".format(user),
user=user,
group=group,
groups=groups,
system=True,
)
deployer.install()
def configure(self, deployer):
deployer.configure()
def activate(self, deployer):
deployer.activate()
def perform_stages(self, deployers):
default_stages = "install,configure,activate"
stages = os.getenv("CMDEPLOY_STAGES", default_stages).split(",")
for stage in stages:
for deployer in deployers:
getattr(self, stage)(deployer)
class Deployer:
need_restart = False
def install(self):
pass
def configure(self):
pass
def activate(self):
pass
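The staged-deploy mechanism above can be sketched in isolation. This is a minimal, hedged sketch (the `EchoDeployer` class is hypothetical, invented for illustration) of how `CMDEPLOY_STAGES` selects which `Deployer` methods run:

```python
import os

class EchoDeployer:
    """Hypothetical deployer that records which stages ran."""

    def __init__(self):
        self.ran = []

    def install(self):
        self.ran.append("install")

    def configure(self):
        self.ran.append("configure")

    def activate(self):
        self.ran.append("activate")

def perform_stages(deployers):
    # CMDEPLOY_STAGES="configure,activate" skips the install stage,
    # e.g. when the image already contains all packages
    stages = os.getenv("CMDEPLOY_STAGES", "install,configure,activate").split(",")
    for stage in stages:
        for deployer in deployers:
            getattr(deployer, stage)()

os.environ["CMDEPLOY_STAGES"] = "configure,activate"
deployer = EchoDeployer()
perform_stages([deployer])
print(deployer.ran)  # ['configure', 'activate']
```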


@@ -1,14 +0,0 @@
{chatmail_domain}. A {ipv4}
{chatmail_domain}. AAAA {ipv6}
{chatmail_domain}. MX 10 {chatmail_domain}.
_submission._tcp.{chatmail_domain}. SRV 0 1 587 {chatmail_domain}.
_submissions._tcp.{chatmail_domain}. SRV 0 1 465 {chatmail_domain}.
_imap._tcp.{chatmail_domain}. SRV 0 1 143 {chatmail_domain}.
_imaps._tcp.{chatmail_domain}. SRV 0 1 993 {chatmail_domain}.
{chatmail_domain}. CAA 128 issue "letsencrypt.org;accounturi={acme_account_url}"
{chatmail_domain}. TXT "v=spf1 a:{chatmail_domain} -all"
_dmarc.{chatmail_domain}. TXT "v=DMARC1;p=reject;rua=mailto:{email};ruf=mailto:{email};fo=1;adkim=r;aspf=r"
_mta-sts.{chatmail_domain}. TXT "v=STSv1; id={sts_id}"
mta-sts.{chatmail_domain}. CNAME {chatmail_domain}.
_smtp._tls.{chatmail_domain}. TXT "v=TLSRPTv1;rua=mailto:{email}"
{dkim_entry}


@@ -0,0 +1,30 @@
;
; Required DNS entries for chatmail servers
;
{% if A %}
{{ mail_domain }}. A {{ A }}
{% endif %}
{% if AAAA %}
{{ mail_domain }}. AAAA {{ AAAA }}
{% endif %}
{{ mail_domain }}. MX 10 {{ mail_domain }}.
_mta-sts.{{ mail_domain }}. TXT "v=STSv1; id={{ sts_id }}"
mta-sts.{{ mail_domain }}. CNAME {{ mail_domain }}.
www.{{ mail_domain }}. CNAME {{ mail_domain }}.
{{ dkim_entry }}
;
; Recommended DNS entries for interoperability and security-hardening
;
{{ mail_domain }}. TXT "v=spf1 a ~all"
_dmarc.{{ mail_domain }}. TXT "v=DMARC1;p=reject;adkim=s;aspf=s"
{% if acme_account_url %}
{{ mail_domain }}. CAA 0 issue "letsencrypt.org;accounturi={{ acme_account_url }}"
{% endif %}
_adsp._domainkey.{{ mail_domain }}. TXT "dkim=discardable"
_submission._tcp.{{ mail_domain }}. SRV 0 1 587 {{ mail_domain }}.
_submissions._tcp.{{ mail_domain }}. SRV 0 1 465 {{ mail_domain }}.
_imap._tcp.{{ mail_domain }}. SRV 0 1 143 {{ mail_domain }}.
_imaps._tcp.{{ mail_domain }}. SRV 0 1 993 {{ mail_domain }}.


@@ -2,20 +2,24 @@
Provides the `cmdeploy` entry point function,
along with command line option and subcommand parsing.
"""
import argparse
import shutil
import subprocess
import importlib.resources
import importlib.util
import os
import pathlib
import shutil
import subprocess
import sys
from pathlib import Path
from termcolor import colored
import pyinfra
from chatmaild.config import read_config, write_initial_config
from cmdeploy.dns import show_dns, check_necessary_dns
from packaging import version
from termcolor import colored
from . import dns, remote
from .sshexec import LocalExec, SSHExec
#
# cmdeploy sub commands and options
@@ -28,20 +32,30 @@ def init_cmd_options(parser):
action="store",
help="fully qualified DNS domain name for your chatmail instance",
)
parser.add_argument(
"--force",
dest="recreate_ini",
action="store_true",
help="force recreation of the ini file",
)
def init_cmd(args, out):
"""Initialize chatmail config file."""
mail_domain = args.chatmail_domain
inipath = args.inipath
if args.inipath.exists():
print(f"Path exists, not modifying: {args.inipath}")
else:
write_initial_config(args.inipath, mail_domain)
out.green(f"created config file for {mail_domain} in {args.inipath}")
check_necessary_dns(
out,
mail_domain,
)
if not args.recreate_ini:
print(f"[WARNING] Path exists, not modifying: {inipath}")
return 1
else:
print(
f"[WARNING] Force argument was provided, deleting config file: {inipath}"
)
inipath.unlink()
write_initial_config(inipath, mail_domain, overrides={})
out.green(f"created config file for {mail_domain} in {inipath}")
def run_cmd_options(parser):
@@ -51,44 +65,129 @@ def run_cmd_options(parser):
action="store_true",
help="don't actually modify the server",
)
parser.add_argument(
"--disable-mail",
dest="disable_mail",
action="store_true",
help="install/upgrade the server, but disable postfix & dovecot for now",
)
parser.add_argument(
"--website-only",
action="store_true",
help="only update/deploy the website, skipping the full server upgrade/deployment; useful when only the web pages changed",
)
parser.add_argument(
"--skip-dns-check",
dest="dns_check_disabled",
action="store_true",
help="disable DNS checks via nslookup",
)
add_ssh_host_option(parser)
def run_cmd(args, out):
"""Deploy chatmail services on the remote server."""
mail_domain = args.config.mail_domain
if not check_necessary_dns(
out,
mail_domain,
):
sys.exit(1)
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host)
require_iroh = args.config.enable_iroh_relay
if not args.dns_check_disabled:
remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
if not dns.check_initial_remote_data(remote_data, print=out.red):
return 1
env = os.environ.copy()
env["CHATMAIL_INI"] = args.inipath
deploy_path = importlib.resources.files(__package__).joinpath("deploy.py").resolve()
env["CHATMAIL_WEBSITE_ONLY"] = "True" if args.website_only else ""
env["CHATMAIL_DISABLE_MAIL"] = "True" if args.disable_mail else ""
env["CHATMAIL_REQUIRE_IROH"] = "True" if require_iroh else ""
if not args.dns_check_disabled:
env["CHATMAIL_ADDR_V4"] = remote_data.get("A") or ""
env["CHATMAIL_ADDR_V6"] = remote_data.get("AAAA") or ""
deploy_path = importlib.resources.files(__package__).joinpath("run.py").resolve()
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
cmd = f"{pyinf} --ssh-user root {args.config.mail_domain} {deploy_path}"
out.check_call(cmd, env=env)
print("Deploy completed, call `cmdeploy dns` next.")
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
if ssh_host in ["localhost", "@docker"]:
if ssh_host == "@docker":
env["CHATMAIL_NOPORTCHECK"] = "True"
env["CHATMAIL_NOSYSCTL"] = "True"
cmd = f"{pyinf} @local {deploy_path} -y"
if version.parse(pyinfra.__version__) < version.parse("3"):
out.red("Please re-run scripts/initenv.sh to update pyinfra to version 3.")
return 1
try:
retcode = out.check_call(cmd, env=env)
if args.website_only:
if retcode == 0:
out.green("Website deployment completed.")
else:
out.red("Website deployment failed.")
elif retcode == 0:
out.green("Deploy completed, call `cmdeploy dns` next.")
elif not args.dns_check_disabled and not remote_data["acme_account_url"]:
out.red("Deploy completed but letsencrypt not configured")
out.red("Run 'cmdeploy run' again")
retcode = 0
else:
out.red("Deploy failed")
except subprocess.CalledProcessError:
out.red("Deploy failed")
retcode = 1
return retcode
def dns_cmd_options(parser):
parser.add_argument(
"--zonefile",
dest="zonefile",
help="print the whole zonefile for deploying directly",
type=pathlib.Path,
default=None,
help="write out a zonefile",
)
add_ssh_host_option(parser)
def dns_cmd(args, out):
"""Generate dns zone file."""
show_dns(args, out)
"""Check DNS entries and optionally generate dns zone file."""
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host, verbose=args.verbose)
remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
if not remote_data:
return 1
if not remote_data["acme_account_url"]:
out.red("could not get letsencrypt account url, please run 'cmdeploy run'")
return 1
if not remote_data["dkim_entry"]:
out.red("could not determine dkim_entry, please run 'cmdeploy run'")
return 1
zonefile = dns.get_filled_zone_file(remote_data)
if args.zonefile:
args.zonefile.write_text(zonefile)
out.green(f"DNS records successfully written to: {args.zonefile}")
return 0
retcode = dns.check_full_zone(
sshexec, remote_data=remote_data, zonefile=zonefile, out=out
)
return retcode
def status_cmd_options(parser):
add_ssh_host_option(parser)
def status_cmd(args, out):
"""Display status for online chatmail instance."""
ssh = f"ssh root@{args.config.mail_domain}"
ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
sshexec = get_sshexec(ssh_host, verbose=args.verbose)
out.green(f"chatmail domain: {args.config.mail_domain}")
if args.config.privacy_mail:
@@ -96,10 +195,8 @@ def status_cmd(args, out):
else:
out.red("no privacy settings")
s1 = "systemctl --type=service --state=running"
for line in out.shell_output(f"{ssh} -- {s1}").split("\n"):
if line.startswith(" "):
print(line)
for line in sshexec(remote.rshell.get_systemd_running):
print(line)
def test_cmd_options(parser):
@@ -128,7 +225,7 @@ def test_cmd(args, out):
"-n4",
"-rs",
"-x",
"-vrx",
"-v",
"--durations=5",
]
if args.slow:
@@ -138,14 +235,6 @@ def test_cmd(args, out):
def fmt_cmd_options(parser):
parser.add_argument(
"--verbose",
"-v",
dest="verbose",
action="store_true",
help="provide information on invocations",
)
parser.add_argument(
"--check",
"-c",
@@ -155,27 +244,31 @@ def fmt_cmd_options(parser):
def fmt_cmd(args, out):
"""Run formatting fixes (ruff and black) on all chatmail source code."""
"""Run formatting fixes on all chatmail source code."""
sources = [str(importlib.resources.files(x)) for x in ("chatmaild", "cmdeploy")]
black_args = [shutil.which("black")]
ruff_args = [shutil.which("ruff")]
chatmaild_dir = importlib.resources.files("chatmaild").resolve()
cmdeploy_dir = chatmaild_dir.joinpath(
"..", "..", "..", "cmdeploy", "src", "cmdeploy"
).resolve()
sources = [str(chatmaild_dir), str(cmdeploy_dir)]
format_args = [shutil.which("ruff"), "format"]
check_args = [shutil.which("ruff"), "check"]
if args.check:
black_args.append("--check")
format_args.append("--diff")
else:
ruff_args.append("--fix")
check_args.append("--fix")
if not args.verbose:
black_args.append("-q")
ruff_args.append("-q")
check_args.append("--quiet")
format_args.append("--quiet")
black_args.extend(sources)
ruff_args.extend(sources)
format_args.extend(sources)
check_args.extend(sources)
out.check_call(" ".join(black_args), quiet=not args.verbose)
out.check_call(" ".join(ruff_args), quiet=not args.verbose)
return 0
out.check_call(" ".join(format_args), quiet=not args.verbose)
out.check_call(" ".join(check_args), quiet=not args.verbose)
def bench_cmd(args, out):
@@ -211,16 +304,6 @@ class Out:
color = "red" if red else ("green" if green else None)
print(colored(msg, color), file=file)
def shell_output(self, arg, no_print=False, timeout=10):
if not no_print:
self(f"[$ {arg}]", file=sys.stderr)
output = subprocess.STDOUT
else:
output = subprocess.DEVNULL
return subprocess.check_output(
arg, shell=True, timeout=timeout, stderr=output
).decode()
def check_call(self, arg, env=None, quiet=False):
if not quiet:
self(f"[$ {arg}]", file=sys.stderr)
@@ -230,19 +313,36 @@ class Out:
if not quiet:
cmdstring = " ".join(args)
self(f"[$ {cmdstring}]", file=sys.stderr)
proc = subprocess.run(args, env=env)
proc = subprocess.run(args, env=env, check=False)
return proc.returncode
def add_ssh_host_option(parser):
parser.add_argument(
"--ssh-host",
dest="ssh_host",
help="Run commands on 'localhost', via '@docker', or on a specific SSH host "
"instead of chatmail.ini's mail_domain.",
)
def add_config_option(parser):
parser.add_argument(
"--config",
dest="inipath",
action="store",
default=Path("chatmail.ini"),
default=Path(os.environ.get("CHATMAIL_INI", "chatmail.ini")),
type=Path,
help="path to the chatmail.ini file",
)
parser.add_argument(
"--verbose",
"-v",
dest="verbose",
action="store_true",
default=False,
help="provide verbose logging",
)
def add_subcommand(subparsers, func):
@@ -281,12 +381,23 @@ def get_parser():
return parser
def get_sshexec(ssh_host: str, verbose=True):
if ssh_host in ["localhost", "@local"]:
return LocalExec(verbose, docker=False)
elif ssh_host == "@docker":
return LocalExec(verbose, docker=True)
if verbose:
print(f"[ssh] login to {ssh_host}")
return SSHExec(ssh_host, verbose=verbose)
def main(args=None):
"""Provide main entry point for 'xdcget' CLI invocation."""
"""Provide main entry point for 'cmdeploy' CLI invocation."""
parser = get_parser()
args = parser.parse_args(args=args)
if not hasattr(args, "func"):
return parser.parse_args(["-h"])
out = Out()
kwargs = {}
if args.func.__name__ not in ("init_cmd", "fmt_cmd"):
@@ -304,7 +415,6 @@ def main(args=None):
if res is None:
res = 0
return res
except KeyboardInterrupt:
out.red("KeyboardInterrupt")
sys.exit(130)


@@ -1,17 +0,0 @@
import os
import importlib.resources
import pyinfra
from cmdeploy import deploy_chatmail
def main():
config_path = os.getenv(
"CHATMAIL_INI",
importlib.resources.files("cmdeploy").joinpath("../../../chatmail.ini"),
)
deploy_chatmail(config_path)
if pyinfra.is_cli:
main()


@@ -0,0 +1,634 @@
"""
Chat Mail pyinfra deploy.
"""
import os
import shutil
import subprocess
import sys
from io import StringIO
from pathlib import Path
from chatmaild.config import read_config
from pyinfra import facts, host, logger
from pyinfra.facts import hardware
from pyinfra.api import FactBase
from pyinfra.facts.files import Sha256File
from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd
from cmdeploy.cmdeploy import Out
from .acmetool import AcmetoolDeployer
from .basedeploy import (
Deployer,
Deployment,
activate_remote_units,
configure_remote_units,
get_resource,
has_systemd,
)
from .dovecot.deployer import DovecotDeployer
from .filtermail.deployer import FiltermailDeployer
from .mtail.deployer import MtailDeployer
from .nginx.deployer import NginxDeployer
from .opendkim.deployer import OpendkimDeployer
from .postfix.deployer import PostfixDeployer
from .www import build_webpages, find_merge_conflict, get_paths
class Port(FactBase):
"""
Returns the process occupying a port.
"""
def command(self, port: int) -> str:
return (
"ss -lptn 'src :%d' | awk 'NR>1 {print $6,$7}' | sed 's/users:((\"//;s/\".*//'"
% (port,)
)
def process(self, output: list[str]) -> str:
return output[0]
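The shell pipeline in `Port.command` (awk plus sed over `ss -lptn` output) can be mirrored in pure Python. A hedged sketch, with the sample `ss` line invented for illustration:

```python
import re

def process_name(line: str) -> str:
    # mirrors: awk '{print $6,$7}' | sed 's/users:(("//;s/".*//'
    tail = " ".join(line.split()[5:7])       # the users:((...)) process field
    tail = tail.replace('users:(("', "", 1)  # drop the prefix
    return re.sub(r'".*', "", tail)          # keep text up to the next quote

# sample listener line as printed by `ss -lptn` (invented for illustration)
sample = 'LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1234,fd=6))'
print(process_name(sample))  # nginx
```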
def _build_chatmaild(dist_dir) -> None:
dist_dir = Path(dist_dir).resolve()
if dist_dir.exists():
shutil.rmtree(dist_dir)
dist_dir.mkdir()
subprocess.check_output(
[sys.executable, "-m", "build", "-n"]
+ ["--sdist", "chatmaild", "--outdir", str(dist_dir)]
)
entries = list(dist_dir.iterdir())
assert len(entries) == 1
return entries[0]
def remove_legacy_artifacts():
if not has_systemd():
return
# disable legacy doveauth-dictproxy.service
if host.get_fact(SystemdEnabled).get("doveauth-dictproxy.service"):
systemd.service(
name="Disable legacy doveauth-dictproxy.service",
service="doveauth-dictproxy.service",
running=False,
enabled=False,
)
def _install_remote_venv_with_chatmaild() -> None:
remove_legacy_artifacts()
dist_file = _build_chatmaild(dist_dir=Path("chatmaild/dist"))
remote_base_dir = "/usr/local/lib/chatmaild"
remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}"
remote_venv_dir = f"{remote_base_dir}/venv"
root_owned = dict(user="root", group="root", mode="644")
apt.packages(
name="apt install python3-virtualenv",
packages=["python3-virtualenv"],
)
files.put(
name="Upload chatmaild source package",
src=dist_file.open("rb"),
dest=remote_dist_file,
create_remote_dir=True,
**root_owned,
)
pip.virtualenv(
name=f"chatmaild virtualenv {remote_venv_dir}",
path=remote_venv_dir,
always_copy=True,
)
apt.packages(
name="install gcc and headers to build crypt_r source package",
packages=["gcc", "python3-dev"],
)
server.shell(
name=f"forced pip-install {dist_file.name}",
commands=[
f"{remote_venv_dir}/bin/pip install --force-reinstall {remote_dist_file}"
],
)
def _configure_remote_venv_with_chatmaild(config) -> None:
remote_base_dir = "/usr/local/lib/chatmaild"
remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644")
files.put(
name=f"Upload {remote_chatmail_inipath}",
src=config._getbytefile(),
dest=remote_chatmail_inipath,
**root_owned,
)
files.template(
src=get_resource("metrics.cron.j2"),
dest="/etc/cron.d/chatmail-metrics",
user="root",
group="root",
mode="644",
config={
"mailboxes_dir": config.mailboxes_dir,
"execpath": f"{remote_venv_dir}/bin/chatmail-metrics",
},
)
class UnboundDeployer(Deployer):
def __init__(self, config):
self.config = config
self.need_restart = False
def install(self):
# Run local DNS resolver `unbound`.
# `resolvconf` takes care of setting up /etc/resolv.conf
# to use 127.0.0.1 as the resolver.
#
# On an IPv4-only system, if unbound is started but not
# configured, it causes subsequent steps to fail to resolve hosts.
# Here, we use policy-rc.d to prevent unbound from starting up
# on initial install. Later, we will configure it and start it.
#
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
#
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
)
files.file("/usr/sbin/policy-rc.d", present=False)
def configure(self):
server.shell(
name="Generate root keys for validating DNSSEC",
commands=[
"unbound-anchor -a /var/lib/unbound/root.key || true",
],
)
if self.config.disable_ipv6:
files.directory(
path="/etc/unbound/unbound.conf.d",
present=True,
user="root",
group="root",
mode="755",
)
conf = files.put(
src=get_resource("unbound/unbound.conf.j2"),
dest="/etc/unbound/unbound.conf.d/chatmail.conf",
user="root",
group="root",
mode="644",
)
else:
conf = files.file(
path="/etc/unbound/unbound.conf.d/chatmail.conf",
present=False,
)
self.need_restart |= conf.changed
def activate(self):
server.shell(
name="Reset failed state of unbound.service",
commands=[
"systemctl reset-failed unbound.service",
],
)
systemd.service(
name="Start and enable unbound",
service="unbound.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
class MtastsDeployer(Deployer):
def configure(self):
# Remove configuration.
files.file("/etc/mta-sts-daemon.yml", present=False)
files.directory("/usr/local/lib/postfix-mta-sts-resolver", present=False)
files.file("/etc/systemd/system/mta-sts-daemon.service", present=False)
def activate(self):
systemd.service(
name="Stop MTA-STS daemon",
service="mta-sts-daemon.service",
daemon_reload=True,
running=False,
enabled=False,
)
class WebsiteDeployer(Deployer):
def __init__(self, config):
self.config = config
def install(self):
files.directory(
name="Ensure /var/www exists",
path="/var/www",
user="root",
group="root",
mode="755",
present=True,
)
def configure(self):
www_path, src_dir, build_dir = get_paths(self.config)
# if www_folder was set to a non-existing folder, skip upload
if not www_path.is_dir():
logger.warning("Building web pages is disabled in chatmail.ini, skipping")
elif (path := find_merge_conflict(src_dir)) is not None:
logger.warning(
f"Merge conflict found in {path}, skipping website deployment. Fix the merge conflict to upload your web pages."
)
else:
# if www_folder is a hugo page, build it
if build_dir:
www_path = build_webpages(src_dir, build_dir, self.config)
# if it is not a hugo page, upload it as is
files.rsync(
f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"]
)
class LegacyRemoveDeployer(Deployer):
def install(self):
apt.packages(name="Remove rspamd", packages="rspamd", present=False)
# remove historic expunge script
# which is now implemented through a systemd timer (chatmail-expire)
files.file(
path="/etc/cron.d/expunge",
present=False,
)
# Remove OBS repository key that is no longer used.
files.file("/etc/apt/keyrings/obs-home-deltachat.gpg", present=False)
files.line(
name="Remove DeltaChat OBS home repository from sources.list",
path="/etc/apt/sources.list",
line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
escape_regex_characters=True,
present=False,
)
# prior relay versions used filelogging
files.directory(
name="Ensure old logs on disk are deleted",
path="/var/log/journal/",
present=False,
)
# remove echobot if it is still running
if has_systemd() and host.get_fact(SystemdEnabled).get("echobot.service"):
systemd.service(
name="Disable echobot.service",
service="echobot.service",
running=False,
enabled=False,
)
def check_config(config):
mail_domain = config.mail_domain
if mail_domain != "testrun.org" and not mail_domain.endswith(".testrun.org"):
blocked_words = "merlinux schmieder testrun.org".split()
for key in config.__dict__:
value = config.__dict__[key]
if key.startswith("privacy") and any(
x in str(value) for x in blocked_words
):
raise ValueError(
f"please set your own privacy contacts/addresses in {config._inipath}"
)
return config
class TurnDeployer(Deployer):
def __init__(self, mail_domain):
self.mail_domain = mail_domain
self.units = ["turnserver"]
def install(self):
(url, sha256sum) = {
"x86_64": (
"https://github.com/chatmail/chatmail-turn/releases/download/v0.3/chatmail-turn-x86_64-linux",
"841e527c15fdc2940b0469e206188ea8f0af48533be12ecb8098520f813d41e4",
),
"aarch64": (
"https://github.com/chatmail/chatmail-turn/releases/download/v0.3/chatmail-turn-aarch64-linux",
"a5fc2d06d937b56a34e098d2cd72a82d3e89967518d159bf246dc69b65e81b42",
),
}[host.get_fact(facts.server.Arch)]
existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/chatmail-turn")
if existing_sha256sum != sha256sum:
server.shell(
name="Download chatmail-turn",
commands=[
f"(curl -L {url} >/usr/local/bin/chatmail-turn.new && (echo '{sha256sum} /usr/local/bin/chatmail-turn.new' | sha256sum -c) && mv /usr/local/bin/chatmail-turn.new /usr/local/bin/chatmail-turn)",
"chmod 755 /usr/local/bin/chatmail-turn",
],
)
def configure(self):
configure_remote_units(self.mail_domain, self.units)
def activate(self):
activate_remote_units(self.units)
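The download-verify-rename pattern used for the chatmail-turn binary above (and for iroh-relay below) can be sketched standalone. This is a minimal Python sketch, not the deployed shell pipeline; `fetch_verified` is a hypothetical helper illustrating the same guarantee: a half-downloaded or checksum-mismatched file never replaces a working binary.

```python
import hashlib
import os
import urllib.request

# Sketch of the atomic download pattern: write to a ".new" temp path,
# verify the pinned sha256, and only then rename into place.
def fetch_verified(url: str, sha256sum: str, dest: str) -> None:
    tmp = dest + ".new"
    urllib.request.urlretrieve(url, tmp)
    with open(tmp, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != sha256sum:
        os.remove(tmp)
        raise ValueError(f"checksum mismatch for {url}")
    os.replace(tmp, dest)  # atomic on POSIX filesystems
    os.chmod(dest, 0o755)
```

The shell version in `install()` above does the same thing with `curl`, `sha256sum -c`, and `mv`.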
class IrohDeployer(Deployer):
def __init__(self, enable_iroh_relay):
self.enable_iroh_relay = enable_iroh_relay
def install(self):
(url, sha256sum) = {
"x86_64": (
"https://github.com/n0-computer/iroh/releases/download/v0.35.0/iroh-relay-v0.35.0-x86_64-unknown-linux-musl.tar.gz",
"45c81199dbd70f8c4c30fef7f3b9727ca6e3cea8f2831333eeaf8aa71bf0fac1",
),
"aarch64": (
"https://github.com/n0-computer/iroh/releases/download/v0.35.0/iroh-relay-v0.35.0-aarch64-unknown-linux-musl.tar.gz",
"f8ef27631fac213b3ef668d02acd5b3e215292746a3fc71d90c63115446008b1",
),
}[host.get_fact(facts.server.Arch)]
existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/iroh-relay")
if existing_sha256sum != sha256sum:
server.shell(
name="Download iroh-relay",
commands=[
f"(curl -L {url} | gunzip | tar -x -f - ./iroh-relay -O >/usr/local/bin/iroh-relay.new && (echo '{sha256sum} /usr/local/bin/iroh-relay.new' | sha256sum -c) && mv /usr/local/bin/iroh-relay.new /usr/local/bin/iroh-relay)",
"chmod 755 /usr/local/bin/iroh-relay",
],
)
self.need_restart = True
def configure(self):
systemd_unit = files.put(
name="Upload iroh-relay systemd unit",
src=get_resource("iroh-relay.service"),
dest="/etc/systemd/system/iroh-relay.service",
user="root",
group="root",
mode="644",
)
self.need_restart |= systemd_unit.changed
iroh_config = files.put(
name="Upload iroh-relay config",
src=get_resource("iroh-relay.toml"),
dest="/etc/iroh-relay.toml",
user="root",
group="root",
mode="644",
)
self.need_restart |= iroh_config.changed
def activate(self):
systemd.service(
name="Start and enable iroh-relay",
service="iroh-relay.service",
running=True,
enabled=self.enable_iroh_relay,
restarted=self.need_restart,
)
self.need_restart = False
class JournaldDeployer(Deployer):
def configure(self):
journald_conf = files.put(
name="Configure journald",
src=get_resource("journald.conf"),
dest="/etc/systemd/journald.conf",
user="root",
group="root",
mode="644",
)
self.need_restart = journald_conf.changed
def activate(self):
systemd.service(
name="Start and enable journald",
service="systemd-journald.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
self.need_restart = False
class ChatmailVenvDeployer(Deployer):
def __init__(self, config):
self.config = config
self.units = (
"chatmail-metadata",
"lastlogin",
"chatmail-expire",
"chatmail-expire.timer",
"chatmail-fsreport",
"chatmail-fsreport.timer",
)
def install(self):
_install_remote_venv_with_chatmaild()
def configure(self):
_configure_remote_venv_with_chatmaild(self.config)
configure_remote_units(self.config.mail_domain, self.units)
def activate(self):
activate_remote_units(self.units)
class ChatmailDeployer(Deployer):
required_users = [
("vmail", "vmail", None),
("iroh", None, None),
]
def __init__(self, mail_domain):
self.mail_domain = mail_domain
def install(self):
apt.update(name="apt update", cache_time=24 * 3600)
apt.upgrade(name="upgrade apt packages", auto_remove=True)
apt.packages(
name="Install curl",
packages=["curl"],
)
apt.packages(
name="Install rsync",
packages=["rsync"],
)
apt.packages(
name="Ensure cron is installed",
packages=["cron"],
)
def configure(self):
# This file is used by auth proxy.
# https://wiki.debian.org/EtcMailName
server.shell(
name="Setup /etc/mailname",
commands=[
f"echo {self.mail_domain} >/etc/mailname; chmod 644 /etc/mailname"
],
)
class FcgiwrapDeployer(Deployer):
def install(self):
apt.packages(
name="Install fcgiwrap",
packages=["fcgiwrap"],
)
def activate(self):
systemd.service(
name="Start and enable fcgiwrap",
service="fcgiwrap.service",
running=True,
enabled=True,
)
class GithashDeployer(Deployer):
def activate(self):
try:
git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
except Exception:
git_hash = "unknown\n"
try:
git_diff = subprocess.check_output(["git", "diff"]).decode()
except Exception:
git_diff = ""
files.put(
name="Upload chatmail relay git commit hash",
src=StringIO(git_hash + git_diff),
dest="/etc/chatmail-version",
mode="700",
)
def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -> None:
"""Deploy a chat-mail instance.
:param config_path: path to chatmail.ini
:param disable_mail: whether to disable postfix & dovecot
:param website_only: if True, only deploy the website
"""
config = read_config(config_path)
check_config(config)
mail_domain = config.mail_domain
if website_only:
Deployment().perform_stages([WebsiteDeployer(config)])
return
if host.get_fact(Port, port=53) != "unbound":
files.line(
name="Add 9.9.9.9 to resolv.conf",
path="/etc/resolv.conf",
# Guard against resolv.conf missing a trailing newline (SolusVM bug).
line="\nnameserver 9.9.9.9",
)
# Check if mtail_address interface is available (if configured)
if config.mtail_address and config.mtail_address not in ('127.0.0.1', '::1', 'localhost'):
ipv4_addrs = host.get_fact(hardware.Ipv4Addrs)
all_addresses = [addr for addrs in ipv4_addrs.values() for addr in addrs]
if config.mtail_address not in all_addresses:
Out().red(f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n")
exit(1)
if not os.environ.get("CHATMAIL_NOPORTCHECK"):
port_services = [
(["master", "smtpd"], 25),
("unbound", 53),
("acmetool", 80),
(["imap-login", "dovecot"], 143),
("nginx", 443),
(["master", "smtpd"], 465),
(["master", "smtpd"], 587),
(["imap-login", "dovecot"], 993),
("iroh-relay", 3340),
("mtail", 3903),
("stats", 3904),
("nginx", 8443),
(["master", "smtpd"], config.postfix_reinject_port),
(["master", "smtpd"], config.postfix_reinject_port_incoming),
("filtermail", config.filtermail_smtp_port),
("filtermail", config.filtermail_smtp_port_incoming),
]
for service, port in port_services:
print(f"Checking if port {port} is available for {service}...")
running_service = host.get_fact(Port, port=port)
services = [service] if isinstance(service, str) else service
if running_service:
if running_service not in services:
Out().red(
f"Deploy failed: port {port} is occupied by: {running_service}"
)
exit(1)
tls_domains = [mail_domain, f"mta-sts.{mail_domain}", f"www.{mail_domain}"]
all_deployers = [
ChatmailDeployer(mail_domain),
LegacyRemoveDeployer(),
FiltermailDeployer(),
JournaldDeployer(),
UnboundDeployer(config),
TurnDeployer(mail_domain),
IrohDeployer(config.enable_iroh_relay),
]
if not config.noacme:
all_deployers.append(AcmetoolDeployer(config.acme_email, tls_domains))
all_deployers += [
WebsiteDeployer(config),
ChatmailVenvDeployer(config),
MtastsDeployer(),
OpendkimDeployer(mail_domain),
# Dovecot should be started before Postfix
# because it creates authentication socket
# required by Postfix.
DovecotDeployer(config, disable_mail),
PostfixDeployer(config, disable_mail),
FcgiwrapDeployer(),
NginxDeployer(config),
MtailDeployer(config.mtail_address),
GithashDeployer(),
]
Deployment().perform_stages(all_deployers)
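The port pre-check loop in `deploy_chatmail` can be sketched in isolation. `get_port_owner` below is a hypothetical stand-in for the pyinfra `Port` fact used in the real code; it returns the process name bound to a port, or `None` if the port is free.

```python
# Sketch of the pre-deploy port check: each well-known port must be
# free or already owned by one of the expected service(s).
def find_port_conflicts(port_services, get_port_owner):
    conflicts = []
    for service, port in port_services:
        expected = [service] if isinstance(service, str) else list(service)
        owner = get_port_owner(port)
        if owner and owner not in expected:
            conflicts.append((port, owner))
    return conflicts
```

The deploy aborts on the first conflict; this sketch collects all of them instead, which is a minor simplification for clarity.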

View File

@@ -1,205 +1,71 @@
import sys
import requests
import importlib
import subprocess
import datetime
from ipaddress import ip_address
import importlib
from jinja2 import Template
from . import remote
class DNS:
def __init__(self, out, mail_domain):
self.session = requests.Session()
self.out = out
self.ssh = f"ssh root@{mail_domain} -- "
try:
self.shell(f"unbound-control flush_zone {mail_domain}")
except subprocess.CalledProcessError:
pass
def shell(self, cmd):
try:
return self.out.shell_output(f"{self.ssh}{cmd}", no_print=True)
except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as e:
if "exit status 255" in str(e) or "timed out" in str(e):
self.out.red(f"Error: can't reach the server with: {self.ssh[:-4]}")
sys.exit(1)
else:
raise
def get_ipv4(self):
cmd = "ip a | grep 'inet ' | grep 'scope global' | grep -oE '[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}' | head -1"
return self.shell(cmd).strip()
def get_ipv6(self):
cmd = "ip a | grep inet6 | grep 'scope global' | sed -e 's#/64 scope global##' | sed -e 's#inet6##'"
return self.shell(cmd).strip()
def get(self, typ: str, domain: str) -> str | None:
"""Get a DNS entry"""
dig_result = self.shell(f"dig -r -q {domain} -t {typ} +short")
line = dig_result.partition("\n")[0]
if line:
return line
def check_ptr_record(self, ip: str, mail_domain) -> bool:
"""Check the PTR record for an IPv4 or IPv6 address."""
result = self.shell(f"dig -r -x {ip} +short").rstrip()
return result == f"{mail_domain}."
def get_initial_remote_data(sshexec, mail_domain):
return sshexec.logged(
call=remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=mail_domain)
)
def show_dns(args, out):
template = importlib.resources.files(__package__).joinpath("chatmail.zone.f")
mail_domain = args.config.mail_domain
ssh = f"ssh root@{mail_domain}"
dns = DNS(out, mail_domain)
def read_dkim_entries(entry):
lines = []
for line in entry.split("\n"):
if line.startswith(";") or not line.strip():
continue
line = line.replace("\t", " ")
lines.append(line)
return "\n".join(lines)
print("Checking your DKIM keys and DNS entries...")
try:
acme_account_url = out.shell_output(f"{ssh} -- acmetool account-url")
except subprocess.CalledProcessError:
print("Please run `cmdeploy run` first.")
return
dkim_entry = read_dkim_entries(out.shell_output(f"{ssh} -- opendkim-genzone -F"))
ipv6 = dns.get_ipv6()
reverse_ipv6 = dns.check_ptr_record(ipv6, mail_domain)
ipv4 = dns.get_ipv4()
reverse_ipv4 = dns.check_ptr_record(ipv4, mail_domain)
to_print = []
with open(template, "r") as f:
zonefile = (
f.read()
.format(
acme_account_url=acme_account_url,
email=f"root@{args.config.mail_domain}",
sts_id=datetime.datetime.now().strftime("%Y%m%d%H%M"),
chatmail_domain=args.config.mail_domain,
dkim_entry=dkim_entry,
ipv6=ipv6,
ipv4=ipv4,
)
.strip()
)
try:
with open(args.zonefile, "w+") as zf:
zf.write(zonefile)
print(f"DNS records successfully written to: {args.zonefile}")
return
except TypeError:
pass
started_dkim_parsing = False
for line in zonefile.splitlines():
line = line.format(
acme_account_url=acme_account_url,
email=f"root@{args.config.mail_domain}",
sts_id=datetime.datetime.now().strftime("%Y%m%d%H%M"),
chatmail_domain=args.config.mail_domain,
dkim_entry=dkim_entry,
ipv6=ipv6,
).strip()
for typ in ["A", "AAAA", "CNAME", "CAA"]:
if f" {typ} " in line:
domain, value = line.split(f" {typ} ")
current = dns.get(typ, domain.strip()[:-1])
if current != value.strip():
to_print.append(line)
if " MX " in line:
domain, typ, prio, value = line.split()
current = dns.get(typ, domain[:-1])
if not current:
to_print.append(line)
elif current.split()[1] != value:
to_print.append(line.replace(prio, str(int(current.split()[0]) + 1)))
if " SRV " in line:
domain, typ, prio, weight, port, value = line.split()
current = dns.get("SRV", domain[:-1])
if current != f"{prio} {weight} {port} {value}":
to_print.append(line)
if " TXT " in line:
domain, value = line.split(" TXT ")
current = dns.get("TXT", domain.strip()[:-1])
if domain.startswith("_mta-sts."):
if current:
if current.split("id=")[0] == value.split("id=")[0]:
continue
if current != value:
to_print.append(line)
if " IN TXT ( " in line:
started_dkim_parsing = True
dkim_lines = [line]
if started_dkim_parsing and line.startswith('"'):
dkim_lines.append(" " + line)
domain, data = "\n".join(dkim_lines).split(" IN TXT ")
current = dns.get("TXT", domain.strip()[:-1])
if current:
current = "( %s )" % (current.replace('" "', '"\n "'))
if current.replace(";", "\\;") != data:
to_print.append(dkim_entry)
else:
to_print.append(dkim_entry)
if to_print:
to_print.insert(
0, "You should configure the following DNS entries at your provider:\n"
)
to_print.append(
"\nIf you already configured the DNS entries, wait a bit until the DNS entries propagate to the Internet."
)
print("\n".join(to_print))
def check_initial_remote_data(remote_data, *, print=print):
mail_domain = remote_data["mail_domain"]
if not remote_data["A"] and not remote_data["AAAA"]:
print(f"Missing A and/or AAAA DNS records for {mail_domain}!")
elif remote_data["MTA_STS"] != f"{mail_domain}.":
print("Missing MTA-STS CNAME record:")
print(f"mta-sts.{mail_domain}. CNAME {mail_domain}.")
elif remote_data["WWW"] != f"{mail_domain}.":
print("Missing www CNAME record:")
print(f"www.{mail_domain}. CNAME {mail_domain}.")
else:
out.green("Great! All your DNS entries are correct.")
return remote_data
to_print = []
if not reverse_ipv4:
to_print.append(f"\tIPv4:\t{ipv4}\t{args.config.mail_domain}")
if not reverse_ipv6:
to_print.append(f"\tIPv6:\t{ipv6}\t{args.config.mail_domain}")
if len(to_print) > 0:
if len(to_print) == 1:
warning = "You should add the following PTR/reverse DNS entry:"
else:
warning = "You should add the following PTR/reverse DNS entries:"
out.red(warning)
for entry in to_print:
print(entry)
print(
"You can do so at your hosting provider (maybe this isn't your DNS provider)."
)
def get_filled_zone_file(remote_data):
sts_id = remote_data.get("sts_id")
if not sts_id:
remote_data["sts_id"] = datetime.datetime.now().strftime("%Y%m%d%H%M")
template = importlib.resources.files(__package__).joinpath("chatmail.zone.j2")
content = template.read_text()
zonefile = Template(content).render(**remote_data)
lines = [x.strip() for x in zonefile.split("\n") if x.strip()]
lines.append("")
zonefile = "\n".join(lines)
return zonefile
def check_full_zone(sshexec, remote_data, out, zonefile) -> int:
"""Check existing DNS records against the generated zone file
and return an exit code (0 if all required entries are set, 1 otherwise)."""
required_diff, recommended_diff = sshexec.logged(
remote.rdns.check_zonefile,
kwargs=dict(zonefile=zonefile, verbose=False),
)
returncode = 0
if required_diff:
out.red("Please set required DNS entries at your DNS provider:\n")
for line in required_diff:
out(line)
out("")
returncode = 1
if remote_data.get("dkim_entry") in required_diff:
out(
"If the DKIM entry above does not work with your DNS provider, you can try this one:\n"
)
out(remote_data.get("web_dkim_entry") + "\n")
if recommended_diff:
out("WARNING: these recommended DNS entries are not set:\n")
for line in recommended_diff:
out(line)
def check_necessary_dns(out, mail_domain):
"""Check whether $mail_domain and mta-sts.$mail_domain resolve."""
dns = DNS(out, mail_domain)
ipv4 = dns.get("A", mail_domain)
ipv6 = dns.get("AAAA", mail_domain)
mta_entry = dns.get("CNAME", "mta-sts." + mail_domain)
mta_ip = dns.get("A", mta_entry)
if not mta_ip:
mta_ip = dns.get("AAAA", mta_entry)
to_print = []
if not (ipv4 or ipv6):
to_print.append(f"\t{mail_domain}.\t\t\tA\t<your server's IPv4 address>")
if not mta_ip or not (mta_ip == ipv4 or mta_ip == ipv6):
to_print.append(f"\tmta-sts.{mail_domain}.\tCNAME\t{mail_domain}.")
if to_print:
to_print.insert(
0,
"\nFor chatmail to work, you need to configure this at your DNS provider:\n",
)
for line in to_print:
print(line)
print()
else:
dns.out.green("\nAll necessary DNS entries seem to be set.")
return True
if not (recommended_diff or required_diff):
out.green("Great! All your DNS entries are verified and correct.")
return returncode
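The post-processing in `get_filled_zone_file` above (strip indentation, drop blank lines, terminate with exactly one newline) can be shown without the Jinja2 rendering step. A minimal sketch:

```python
# Normalize a rendered zone file: strip each line, drop empties,
# and end the file with a single trailing newline.
def normalize_zonefile(rendered: str) -> str:
    lines = [x.strip() for x in rendered.split("\n") if x.strip()]
    lines.append("")
    return "\n".join(lines)
```

This keeps the generated zone file stable regardless of how much whitespace the template produces.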

View File

@@ -1,8 +1,10 @@
uri = proxy:/run/dovecot/doveauth.socket:auth
iterate_disable = yes
uri = proxy:/run/doveauth/doveauth.socket:auth
iterate_disable = no
iterate_prefix = userdb/
default_pass_scheme = plain
# %E escapes characters " (double quote), ' (single quote) and \ (backslash) with \ (backslash).
# See <https://doc.dovecot.org/configuration_manual/config_file/config_variables/#modifiers>
# See <https://doc.dovecot.org/2.3/configuration_manual/config_file/config_variables/#modifiers>
# for documentation.
#
# We escape user-provided input and use double quote as a separator.

View File

@@ -0,0 +1,158 @@
import os
from chatmaild.config import Config
from pyinfra import host
from pyinfra.facts.server import Arch, Sysctl
from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import (
Deployer,
activate_remote_units,
configure_remote_units,
get_resource,
has_systemd,
)
class DovecotDeployer(Deployer):
daemon_reload = False
def __init__(self, config, disable_mail):
self.config = config
self.disable_mail = disable_mail
self.units = ["doveauth"]
def install(self):
arch = host.get_fact(Arch)
if has_systemd() and "dovecot.service" in host.get_fact(SystemdEnabled):
return # already installed and running
_install_dovecot_package("core", arch)
_install_dovecot_package("imapd", arch)
_install_dovecot_package("lmtpd", arch)
def configure(self):
configure_remote_units(self.config.mail_domain, self.units)
self.need_restart, self.daemon_reload = _configure_dovecot(self.config)
def activate(self):
activate_remote_units(self.units)
restart = False if self.disable_mail else self.need_restart
systemd.service(
name="Disable dovecot for now" if self.disable_mail else "Start and enable Dovecot",
service="dovecot.service",
running=False if self.disable_mail else True,
enabled=False if self.disable_mail else True,
restarted=restart,
daemon_reload=self.daemon_reload,
)
self.need_restart = False
def _install_dovecot_package(package: str, arch: str):
arch = "amd64" if arch == "x86_64" else arch
arch = "arm64" if arch == "aarch64" else arch
url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
deb_filename = "/root/" + url.split("/")[-1]
match (package, arch):
case ("core", "amd64"):
sha256 = "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d"
case ("core", "arm64"):
sha256 = "e7548e8a82929722e973629ecc40fcfa886894cef3db88f23535149e7f730dc9"
case ("imapd", "amd64"):
sha256 = "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86"
case ("imapd", "arm64"):
sha256 = "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f"
case ("lmtpd", "amd64"):
sha256 = "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab"
case ("lmtpd", "arm64"):
sha256 = "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f"
case _:
apt.packages(packages=[f"dovecot-{package}"])
return
files.download(
name=f"Download dovecot-{package}",
src=url,
dest=deb_filename,
sha256sum=sha256,
cache_time=60 * 60 * 24 * 365 * 10, # never redownload the package
)
apt.deb(name=f"Install dovecot-{package}", src=deb_filename)
def _configure_dovecot(config: Config, debug: bool = False) -> tuple[bool, bool]:
"""Configures Dovecot IMAP server."""
need_restart = False
daemon_reload = False
main_config = files.template(
src=get_resource("dovecot/dovecot.conf.j2"),
dest="/etc/dovecot/dovecot.conf",
user="root",
group="root",
mode="644",
config=config,
debug=debug,
disable_ipv6=config.disable_ipv6,
)
need_restart |= main_config.changed
auth_config = files.put(
src=get_resource("dovecot/auth.conf"),
dest="/etc/dovecot/auth.conf",
user="root",
group="root",
mode="644",
)
need_restart |= auth_config.changed
lua_push_notification_script = files.put(
src=get_resource("dovecot/push_notification.lua"),
dest="/etc/dovecot/push_notification.lua",
user="root",
group="root",
mode="644",
)
need_restart |= lua_push_notification_script.changed
# as per https://doc.dovecot.org/2.3/configuration_manual/os/
# it is recommended to set the following inotify limits
if not os.environ.get("CHATMAIL_NOSYSCTL"):
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
if host.get_fact(Sysctl)[key] > 65535:
# Skip updating limits if they are already sufficient
# (enables running in incus containers where sysctl is read-only).
continue
server.sysctl(
name=f"Change {key}",
key=key,
value=65535,
persist=True,
)
timezone_env = files.line(
name="Set TZ environment variable",
path="/etc/environment",
line="TZ=:/etc/localtime",
)
need_restart |= timezone_env.changed
restart_conf = files.put(
name="dovecot: restart automatically on failure",
src=get_resource("service/10_restart.conf"),
dest="/etc/systemd/system/dovecot.service.d/10_restart.conf",
)
daemon_reload |= restart_conf.changed
# Validate dovecot configuration before restart
if need_restart:
server.shell(
name="Validate dovecot configuration",
commands=["doveconf -n >/dev/null"],
)
return need_restart, daemon_reload
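The `need_restart |= op.changed` accumulation used throughout `_configure_dovecot` (and the other deployers) can be shown standalone. `FakeOp` is a hypothetical stand-in for a pyinfra operation result, which exposes a `changed` attribute:

```python
# Each idempotent file operation reports whether it changed anything;
# a service restart is triggered only if at least one of them did.
class FakeOp:
    def __init__(self, changed: bool):
        self.changed = changed

def any_changed(*ops) -> bool:
    need_restart = False
    for op in ops:
        need_restart |= op.changed
    return need_restart
```

This avoids restarting Dovecot on every deploy run when none of its config files actually changed.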

View File

@@ -1,5 +1,9 @@
## Dovecot configuration file
{% if disable_ipv6 %}
listen = 0.0.0.0
{% endif %}
protocols = imap lmtp
auth_mechanisms = plain
@@ -13,13 +17,41 @@ auth_cache_size = 100M
mail_debug = yes
{% endif %}
# Prevent warnings similar to:
# config: Warning: service auth { client_limit=1000 } is lower than required under max. load (10200). Counted for protocol services with service_count != 1: service lmtp { process_limit=100 } + service imap-urlauth-login { process_limit=100 } + service imap-login { process_limit=10000 }
# config: Warning: service anvil { client_limit=1000 } is lower than required under max. load (10103). Counted with: service imap-urlauth-login { process_limit=100 } + service imap-login { process_limit=10000 } + service auth { process_limit=1 }
# master: Warning: service(stats): client_limit (1000) reached, client connections are being dropped
default_client_limit = 20000
mail_plugins = quota
# Increase number of logged in IMAP connections.
# Each connection is handled by a separate `imap` process.
# `imap` process should have `client_limit=1` as described in
# <https://doc.dovecot.org/2.3/configuration_manual/service_configuration/#service-limits>
# so each logged in IMAP session will need its own `imap` process.
#
# If this limit is reached,
# users will fail to LOGIN as `imap-login` process
# will accept them logging in but fail to transfer logged in
# connection to `imap` process until someone logs out and
# the following warning will be logged:
# Warning: service(imap): process_limit (1024) reached, client connections are being dropped
service imap {
process_limit = 50000
}
# these are the capabilities Delta Chat actually cares about,
# so keep the network overhead per login small
# https://github.com/deltachat/deltachat-core-rust/blob/master/src/imap/capabilities.rs
imap_capability = IMAP4rev1 IDLE MOVE QUOTA CONDSTORE NOTIFY
mail_server_admin = mailto:root@{{ config.mail_domain }}
mail_server_comment = Chatmail server
# `zlib` enables compressing messages stored in the maildir.
# See
# <https://doc.dovecot.org/2.3/configuration_manual/zlib_plugin/>
# for documentation.
#
# quota plugin documentation:
# <https://doc.dovecot.org/2.3/configuration_manual/quota_plugin/>
mail_plugins = zlib quota
imap_capability = +XDELTAPUSH XCHATMAIL
# Authentication for system users.
@@ -36,7 +68,13 @@ userdb {
##
# Mailboxes are stored in the "mail" directory of the vmail user home.
mail_location = maildir:/home/vmail/mail/%d/%u
mail_location = maildir:{{ config.mailboxes_dir }}/%u
# index/cache files are not very useful for chatmail relay operations
# but it's not clear how to disable them completely.
# According to https://doc.dovecot.org/2.3/settings/advanced/#core_setting-mail_cache_max_size
# if the cache file becomes larger than the specified size, it is truncated by dovecot
mail_cache_max_size = 500K
namespace inbox {
inbox = yes
@@ -69,14 +107,36 @@ mail_privileged_group = vmail
## Mail processes
##
# Enable IMAP COMPRESS (RFC 4978).
# Pass all IMAP METADATA requests to the server implementing Dovecot's dict protocol.
mail_attribute_dict = proxy:/run/chatmail-metadata/metadata.socket:metadata
# `imap_zlib` enables IMAP COMPRESS (RFC 4978).
# <https://datatracker.ietf.org/doc/html/rfc4978.html>
protocol imap {
mail_plugins = $mail_plugins imap_zlib imap_quota
mail_plugins = $mail_plugins imap_quota last_login {% if config.imap_compress %}imap_zlib{% endif %}
imap_metadata = yes
}
plugin {
last_login_dict = proxy:/run/chatmail-lastlogin/lastlogin.socket:lastlogin
#last_login_key = last-login/%u # default
last_login_precision = s
}
protocol lmtp {
mail_plugins = $mail_plugins quota
# notify plugin is a dependency of push_notification plugin:
# <https://doc.dovecot.org/2.3/settings/plugin/notify-plugin/>
#
# push_notification plugin documentation:
# <https://doc.dovecot.org/2.3/configuration_manual/push_notification/>
#
# mail_lua and push_notification_lua are needed for Lua push notification handler.
# <https://doc.dovecot.org/2.3/configuration_manual/push_notification/#configuration>
mail_plugins = $mail_plugins mail_lua notify push_notification push_notification_lua
}
plugin {
zlib_save = gz
}
plugin {
@@ -87,12 +147,16 @@ plugin {
# for now we define static quota-rules for all users
quota = maildir:User quota
quota_rule = *:storage={{ config.max_mailbox_size }}
quota_max_mail_size=30M
quota_max_mail_size={{ config.max_message_size }}
quota_grace = 0
# quota_over_flag_value = TRUE
}
# push_notification configuration
plugin {
# <https://doc.dovecot.org/2.3/configuration_manual/push_notification/#lua-lua>
push_notification_driver = lua:file=/etc/dovecot/push_notification.lua
}
service lmtp {
user=vmail
@@ -104,6 +168,8 @@ service lmtp {
}
}
lmtp_add_received_header = no
service auth {
unix_listener /var/spool/postfix/private/auth {
mode = 0660
@@ -119,26 +185,250 @@ service auth-worker {
}
service imap-login {
# High-security mode.
# Each process serves a single connection and exits afterwards.
# This is the default, but we set it explicitly to be sure.
# See <https://doc.dovecot.org/admin_manual/login_processes/#high-security-mode> for details.
service_count = 1
# Increase the number of simultaneous connections.
# High-performance mode as described in
# <https://doc.dovecot.org/2.3/admin_manual/login_processes/#high-performance-mode>
#
# As of Dovecot 2.3.19.1 the default is 100 processes.
# Combined with `service_count = 1` it means only 100 connections
# can be handled simultaneously.
process_limit = 10000
# So-called high-security mode described in
# <https://doc.dovecot.org/2.3/admin_manual/login_processes/#high-security-mode>
# and enabled by default with `service_count = 1` starts one process per connection
# and has problems logging in thousands of users after Dovecot restart.
service_count = 0
# Increase virtual memory size limit.
# Since imap-login processes handle TLS connections
# even after logging users in
# and many connections are handled by each process,
# memory size limit should be increased.
#
# Otherwise the whole process eventually dies
# with an error similar to
# imap-login: Fatal: master: service(imap-login):
# child 1422951 returned error 83
# (Out of memory (service imap-login { vsz_limit=256 MB },
# you may need to increase it)
# and takes down all its TLS connections at once.
vsz_limit = 1G
# Avoid startup latency for new connections.
#
# Should be set to at least the number of CPU cores
# according to the documentation.
process_min_avail = 10
}
service anvil {
# We are disabling anvil penalty on failed login attempts
# because it can only detect brute forcing by IP address
# not by username. As the correct IP address is not handed
# to dovecot anyway, it is more of a hindrance than a help.
# See <https://www.dovecot.org/list/dovecot/2012-May/135485.html> for details.
unix_listener anvil-auth-penalty {
mode = 0
}
}
ssl = required
ssl_cert = </var/lib/acme/live/{{ config.mail_domain }}/fullchain
ssl_key = </var/lib/acme/live/{{ config.mail_domain }}/privkey
ssl_dh = </usr/share/dovecot/dh.pem
ssl_min_protocol = TLSv1.2
ssl_min_protocol = TLSv1.3
ssl_prefer_server_ciphers = yes
{% if config.imap_rawlog %}
service postlogin {
executable = script-login -d rawlog
unix_listener postlogin {
}
}
service imap {
executable = imap postlogin
}
protocol imap {
#rawlog_dir = /tmp/rawlog/%u
# Put .in and .out imap protocol logging files into per-user homedir
# You can use a command like this to combine into one protocol stream:
# sort -sn <(sed 's/ / C: /' *.in) <(sed 's/ / S: /' *.out)
rawlog_dir = %h
}
{% endif %}
{% if not config.imap_compress %}
# Hibernate IDLE users to save memory and CPU resources
# NOTE: this will have no effect if imap_zlib plugin is used
imap_hibernate_timeout = 30s
service imap {
# Note that this change will allow any process running as
# $default_internal_user (dovecot) to access mails as any other user.
# This may be insecure in some installations, which is why this isn't
# done by default.
unix_listener imap-master {
user = $default_internal_user
}
}
# The following is the default already in v2.3.1+:
service imap {
extra_groups = $default_internal_group
}
service imap-hibernate {
unix_listener imap-hibernate {
mode = 0660
group = $default_internal_group
}
}
{% endif %}
{% if config.mtail_address %}
#
# Dovecot Statistics
#
# OpenMetrics endpoint at http://{{ config.mtail_address }}:3904/metrics
service stats {
inet_listener http {
port = 3904
address = {{ config.mtail_address }}
}
}
# IMAP Command Metrics
# - Bytes in/out for compression efficiency analysis
# - Lock wait time for contention debugging
# - Grouped by command name and reply state
metric imap_command {
filter = event=imap_command_finished
fields = bytes_in bytes_out lock_wait_usecs running_usecs
group_by = cmd_name tagged_reply_state
}
# Duration buckets for latency histograms (base 10: 10us, 100us, 1ms, 10ms, 100ms, 1s, 10s, 100s)
metric imap_command_duration {
filter = event=imap_command_finished
group_by = cmd_name duration:exponential:1:8:10
}
# Slow command outliers (>1 second = 1000000 usecs)
# Useful for alerting without high cardinality
metric imap_command_slow {
filter = event=imap_command_finished AND duration>1000000 AND NOT cmd_name=IDLE
group_by = cmd_name
}
# IDLE-specific Metrics
metric imap_idle {
filter = event=imap_command_finished AND cmd_name=IDLE
fields = bytes_in bytes_out running_usecs
group_by = tagged_reply_state
}
metric imap_idle_duration {
filter = event=imap_command_finished AND cmd_name=IDLE
# Base 10: 100ms to 27h (covers short wakeups to long idle sessions)
group_by = duration:exponential:5:11:10
}
metric imap_idle_commands {
filter = event=imap_command_finished AND cmd_name=IDLE
group_by = tagged_reply_state
}
metric imap_idle_failed {
filter = event=imap_command_finished AND cmd_name=IDLE AND NOT tagged_reply_state=OK
}
# Hibernation Metrics (requires imap_hibernate_timeout)
metric imap_hibernated {
filter = event=imap_client_hibernated
}
metric imap_hibernated_failed {
filter = event=imap_client_hibernated AND error=*
}
metric imap_unhibernated {
filter = event=imap_client_unhibernated
fields = hibernation_usecs
}
metric imap_unhibernated_reason {
filter = event=imap_client_unhibernated
group_by = reason
fields = hibernation_usecs
}
metric imap_unhibernated_reason_sleep {
filter = event=imap_client_unhibernated
group_by = reason hibernation_usecs:exponential:4:8:10
}
metric imap_unhibernated_failed {
filter = event=imap_client_unhibernated AND error=*
}
# Hibernation duration buckets (how long clients stayed hibernated)
# Base 10: 100ms to 27h
metric imap_hibernation_duration {
filter = event=imap_client_unhibernated
group_by = reason duration:exponential:5:11:10
}
# Authentication / Login Metrics
metric auth_request {
filter = event=auth_request_finished
group_by = success
}
metric auth_request_duration {
filter = event=auth_request_finished
group_by = success duration:exponential:2:6:10
}
metric auth_failed {
filter = event=auth_request_finished AND success=no
}
# Passdb cache effectiveness
metric auth_passdb {
filter = event=auth_passdb_request_finished
group_by = result cache
}
# Master login (post-auth userdb lookup)
metric auth_master_login {
filter = event=auth_master_client_login_finished
}
metric auth_master_login_failed {
filter = event=auth_master_client_login_finished AND error=*
}
# Mail Delivery (LMTP) - affects IDLE wakeup latency
metric mail_delivery {
filter = event=mail_delivery_finished
}
metric mail_delivery_duration {
filter = event=mail_delivery_finished
group_by = duration:exponential:3:7:10
}
metric mail_delivery_failed {
filter = event=mail_delivery_finished AND error=*
}
# Connection Events
metric client_connected {
filter = event=client_connection_connected AND category="service:imap"
}
metric client_disconnected {
filter = event=client_connection_disconnected AND category="service:imap"
fields = bytes_in bytes_out
}
{% endif %}
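The stats service above exposes these metrics as OpenMetrics text. A minimal sketch of parsing such an exposition into a dict; the sample payload and metric names are illustrative, not actual Dovecot output:

```python
# Parse an OpenMetrics/Prometheus text exposition into {series: value}.
# SAMPLE is made up for illustration; real output would come from the
# stats service's HTTP listener configured above (port 3904).
SAMPLE = """\
# TYPE dovecot_imap_command_count counter
dovecot_imap_command_count{cmd_name="IDLE",tagged_reply_state="OK"} 42
dovecot_imap_command_count{cmd_name="FETCH",tagged_reply_state="OK"} 7
"""

def parse_metrics(text: str) -> dict:
    values = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        # The value is the last space-separated token; labels contain no spaces here.
        series, _, value = line.rpartition(" ")
        values[series] = float(value)
    return values

metrics = parse_metrics(SAMPLE)
```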

View File

@@ -1,10 +0,0 @@
# delete all mails after {{ config.delete_mails_after }} days, in the Inbox
2 0 * * * dovecot find /home/vmail/mail/{{ config.mail_domain }}/*/cur -mtime +{{ config.delete_mails_after }} -type f -delete
# or in any IMAP subfolder
2 0 * * * dovecot find /home/vmail/mail/{{ config.mail_domain }}/*/.*/cur -mtime +{{ config.delete_mails_after }} -type f -delete
# even if they are unseen
2 0 * * * dovecot find /home/vmail/mail/{{ config.mail_domain }}/*/new -mtime +{{ config.delete_mails_after }} -type f -delete
2 0 * * * dovecot find /home/vmail/mail/{{ config.mail_domain }}/*/.*/new -mtime +{{ config.delete_mails_after }} -type f -delete
# or are only temporary (though they shouldn't be around after {{ config.delete_mails_after }} days anyway).
2 0 * * * dovecot find /home/vmail/mail/{{ config.mail_domain }}/*/tmp -mtime +{{ config.delete_mails_after }} -type f -delete
2 0 * * * dovecot find /home/vmail/mail/{{ config.mail_domain }}/*/.*/tmp -mtime +{{ config.delete_mails_after }} -type f -delete
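The cron entries above (removed in this diff) expire mail purely by file modification time, via `find -mtime +N`. The selection logic can be sketched self-contained; the paths and the day-to-seconds approximation of `-mtime +N` are illustrative:

```python
import os
import tempfile
import time

# Approximate `find <dir> -mtime +N -type f`: pick regular files whose
# mtime is more than `days` days in the past.
def expired_files(directory: str, days: int):
    cutoff = time.time() - days * 86400
    picked = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            picked.append(path)
    return picked

# Demo with a throwaway directory: one fresh file, one 40-day-old file.
with tempfile.TemporaryDirectory() as d:
    fresh = os.path.join(d, "fresh.eml")
    old = os.path.join(d, "old.eml")
    for p in (fresh, old):
        open(p, "w").close()
    forty_days_ago = time.time() - 40 * 86400
    os.utime(old, (forty_days_ago, forty_days_ago))
    stale = expired_files(d, 20)
```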

View File

@@ -0,0 +1,19 @@
function dovecot_lua_notify_begin_txn(user)
return user
end
function dovecot_lua_notify_event_message_new(user, event)
local mbox = user:mailbox(event.mailbox)
mbox:sync()
if user.username ~= event.from_address then
-- Incoming message:
-- set mailbox METADATA so listeners are notified of the new message.
mbox:metadata_set("/private/messagenew", "")
end
mbox:free()
end
function dovecot_lua_notify_end_txn(ctx, success)
end
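The guard in the Lua notify script above sets the METADATA flag only for messages the mailbox owner did not send themself. The predicate reduces to a one-liner; a sketch:

```python
# Mirror the Lua guard: a new message triggers the METADATA flag
# only when it did not originate from the mailbox owner.
def should_notify(username: str, from_address: str) -> bool:
    return username != from_address

incoming = should_notify("alice@example.org", "bob@example.org")
self_sent = should_notify("alice@example.org", "alice@example.org")
```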

View File

@@ -0,0 +1,52 @@
from pyinfra import facts, host
from pyinfra.operations import files, systemd
from cmdeploy.basedeploy import Deployer, get_resource
class FiltermailDeployer(Deployer):
services = ["filtermail", "filtermail-incoming"]
bin_path = "/usr/local/bin/filtermail"
config_path = "/usr/local/lib/chatmaild/chatmail.ini"
def __init__(self):
self.need_restart = False
def install(self):
arch = host.get_fact(facts.server.Arch)
url = f"https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-{arch}"
sha256sum = {
"x86_64": "f14a31323ae2dad3b59d3fdafcde507521da2f951a9478cd1f2fe2b4463df71d",
"aarch64": "933770d75046c4fd7084ce8d43f905f8748333426ad839154f0fc654755ef09f",
}[arch]
self.need_restart |= files.download(
name="Download filtermail",
src=url,
sha256sum=sha256sum,
dest=self.bin_path,
mode="755",
).changed
def configure(self):
for service in self.services:
self.need_restart |= files.template(
src=get_resource(f"filtermail/{service}.service.j2"),
dest=f"/etc/systemd/system/{service}.service",
user="root",
group="root",
mode="644",
bin_path=self.bin_path,
config_path=self.config_path,
).changed
def activate(self):
for service in self.services:
systemd.service(
name=f"Start and enable {service}",
service=f"{service}.service",
running=True,
enabled=True,
restarted=self.need_restart,
daemon_reload=True,
)
self.need_restart = False
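`files.download` above verifies the release binary against a SHA-256 digest pinned per architecture. The check itself boils down to hashing the payload and comparing hex digests; a sketch with illustrative data (not the real filtermail binaries):

```python
import hashlib

# Verify a downloaded payload against a pinned SHA-256 digest,
# as files.download does for the per-arch digests in FiltermailDeployer.
def verify_download(payload: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Illustrative payload; the pinned digest is computed from it here,
# whereas in the deployer it is hardcoded per release.
payload = b"filtermail binary bytes (illustrative)"
pinned = hashlib.sha256(payload).hexdigest()

ok = verify_download(payload, pinned)
tampered = verify_download(payload + b"!", pinned)
```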

View File

@@ -0,0 +1,11 @@
[Unit]
Description=Incoming Chatmail Postfix before queue filter
[Service]
ExecStart={{ bin_path }} {{ config_path }} incoming
Restart=always
RestartSec=30
User=vmail
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,11 @@
[Unit]
Description=Outgoing Chatmail Postfix before queue filter
[Service]
ExecStart={{ bin_path }} {{ config_path }} outgoing
Restart=always
RestartSec=30
User=vmail
[Install]
WantedBy=multi-user.target

View File

@@ -1,12 +1,13 @@
import importlib
import qrcode
import os
from PIL import ImageFont, ImageDraw, Image
import io
import os
import qrcode
from PIL import Image, ImageDraw, ImageFont
def gen_qr_png_data(maildomain):
url = f"DCACCOUNT:https://{maildomain}/cgi-bin/newemail.py"
url = f"DCACCOUNT:https://{maildomain}/new"
image = gen_qr(maildomain, url)
temp = io.BytesIO()
image.save(temp, format="png")

View File

@@ -0,0 +1,12 @@
[Unit]
Description=Iroh relay
[Service]
ExecStart=/usr/local/bin/iroh-relay --config-path /etc/iroh-relay.toml
Restart=on-failure
RestartSec=5s
User=iroh
Group=iroh
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,11 @@
enable_relay = true
http_bind_addr = "[::]:3340"
# Disable the built-in STUN server in iroh-relay 0.35,
# as we deploy our own TURN server instead.
# The STUN server will be removed in iroh-relay 1.0,
# at which point this line can be dropped.
enable_stun = false
enable_metrics = false
metrics_bind_addr = "127.0.0.1:9092"

View File

@@ -1,2 +1,3 @@
[Journal]
MaxRetentionSec=3d
Storage=volatile

View File

@@ -1 +1 @@
*/5 * * * * root {{ config.execpath }} /home/vmail/mail/{{ config.mail_domain }} >/var/www/html/metrics
*/5 * * * * root {{ config.execpath }} {{ config.mailboxes_dir }} >/var/www/html/metrics

View File

@@ -0,0 +1,80 @@
counter delivered_mail
/saved mail to INBOX$/ {
delivered_mail++
}
counter quota_exceeded
/Quota exceeded \(mailbox for user is full\)$/ {
quota_exceeded++
}
# Essentially the number of outgoing messages.
counter dkim_signed
/DKIM-Signature field added/ {
dkim_signed++
}
counter created_accounts
counter created_ci_accounts
counter created_nonci_accounts
/: Created address: (?P<addr>.*)$/ {
created_accounts++
$addr =~ /ci-/ {
created_ci_accounts++
} else {
created_nonci_accounts++
}
}
counter postfix_timeouts
/timeout after DATA/ {
postfix_timeouts++
}
counter postfix_noqueue
/postfix\/.*NOQUEUE/ {
postfix_noqueue++
}
counter warning_count
/warning/ {
warning_count++
}
counter filtered_outgoing_mail_count
counter outgoing_encrypted_mail_count
/Outgoing: Filtering encrypted mail\./ {
outgoing_encrypted_mail_count++
filtered_outgoing_mail_count++
}
counter outgoing_unencrypted_mail_count
/Outgoing: Filtering unencrypted mail\./ {
outgoing_unencrypted_mail_count++
filtered_outgoing_mail_count++
}
counter filtered_incoming_mail_count
counter incoming_encrypted_mail_count
/Incoming: Filtering encrypted mail\./ {
incoming_encrypted_mail_count++
filtered_incoming_mail_count++
}
counter incoming_unencrypted_mail_count
/Incoming: Filtering unencrypted mail\./ {
incoming_unencrypted_mail_count++
filtered_incoming_mail_count++
}
counter rejected_unencrypted_mail_count
/Rejected unencrypted mail/ {
rejected_unencrypted_mail_count++
}
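Each mtail rule above is a regular expression that bumps a counter whenever a journal line matches. The same accounting can be sketched in Python; the sample log lines are made up for illustration:

```python
import re

# Regex -> counter name, mirroring a few of the mtail rules above.
RULES = {
    "delivered_mail": re.compile(r"saved mail to INBOX$"),
    "quota_exceeded": re.compile(r"Quota exceeded \(mailbox for user is full\)$"),
    "dkim_signed": re.compile(r"DKIM-Signature field added"),
}

def count_events(lines):
    counters = {name: 0 for name in RULES}
    for line in lines:
        for name, pattern in RULES.items():
            if pattern.search(line):
                counters[name] += 1
    return counters

# Made-up journal excerpts for illustration.
sample = [
    "imap(alice): saved mail to INBOX",
    "lmtp(bob): Quota exceeded (mailbox for user is full)",
    "opendkim: DKIM-Signature field added (s=opendkim, d=example.org)",
    "imap(alice): saved mail to INBOX",
]
counts = count_events(sample)
```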

View File

@@ -0,0 +1,68 @@
from pyinfra import facts, host
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import (
Deployer,
get_resource,
)
class MtailDeployer(Deployer):
def __init__(self, mtail_address):
self.mtail_address = mtail_address
def install(self):
# Uninstall mtail package to install a static binary.
apt.packages(name="Uninstall mtail", packages=["mtail"], present=False)
(url, sha256sum) = {
"x86_64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_amd64.tar.gz",
"123c2ee5f48c3eff12ebccee38befd2233d715da736000ccde49e3d5607724e4",
),
"aarch64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_arm64.tar.gz",
"aa04811c0929b6754408676de520e050c45dddeb3401881888a092c9aea89cae",
),
}[host.get_fact(facts.server.Arch)]
server.shell(
name="Download mtail",
commands=[
f"(echo '{sha256sum} /usr/local/bin/mtail' | sha256sum -c) || (curl -L {url} | gunzip | tar -x -f - mtail -O >/usr/local/bin/mtail.new && mv /usr/local/bin/mtail.new /usr/local/bin/mtail)",
"chmod 755 /usr/local/bin/mtail",
],
)
def configure(self):
# Using our own systemd unit instead of `/usr/lib/systemd/system/mtail.service`.
# This allows reading from journalctl instead of from log files.
files.template(
src=get_resource("mtail/mtail.service.j2"),
dest="/etc/systemd/system/mtail.service",
user="root",
group="root",
mode="644",
address=self.mtail_address or "127.0.0.1",
port=3903,
)
mtail_conf = files.put(
name="Mtail configuration",
src=get_resource("mtail/delivered_mail.mtail"),
dest="/etc/mtail/delivered_mail.mtail",
user="root",
group="root",
mode="644",
)
self.need_restart = mtail_conf.changed
def activate(self):
systemd.service(
name="Start and enable mtail",
service="mtail.service",
running=bool(self.mtail_address),
enabled=bool(self.mtail_address),
restarted=self.need_restart,
)
self.need_restart = False

View File

@@ -0,0 +1,10 @@
[Unit]
Description=mtail
[Service]
Type=simple
ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/local/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs -"
Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@@ -19,6 +19,13 @@
<authentication>password-cleartext</authentication>
<username>%EMAILADDRESS%</username>
</incomingServer>
<incomingServer type="imap">
<hostname>{{ config.domain_name }}</hostname>
<port>443</port>
<socketType>SSL</socketType>
<authentication>password-cleartext</authentication>
<username>%EMAILADDRESS%</username>
</incomingServer>
<outgoingServer type="smtp">
<hostname>{{ config.domain_name }}</hostname>
<port>465</port>
@@ -33,5 +40,12 @@
<authentication>password-cleartext</authentication>
<username>%EMAILADDRESS%</username>
</outgoingServer>
<outgoingServer type="smtp">
<hostname>{{ config.domain_name }}</hostname>
<port>443</port>
<socketType>SSL</socketType>
<authentication>password-cleartext</authentication>
<username>%EMAILADDRESS%</username>
</outgoingServer>
</emailProvider>
</clientConfig>

View File

@@ -0,0 +1,117 @@
from chatmaild.config import Config
from pyinfra.operations import apt, files, systemd
from cmdeploy.basedeploy import (
Deployer,
get_resource,
)
class NginxDeployer(Deployer):
def __init__(self, config):
self.config = config
def install(self):
#
# If we allow nginx to start up on install, it will grab port
# 80, which then will block acmetool from listening on the port.
# That in turn prevents getting certificates, which then causes
# an error when we try to start nginx on the custom config
# that leaves port 80 open but also requires certificates to
# be present. To avoid getting into that interlocking mess,
# we use policy-rc.d to prevent nginx from starting up when it
# is installed.
#
# This approach allows us to avoid performing any explicit
# systemd operations during the install stage (as opposed to
# allowing it to start and then forcing it to stop), which allows
# the install stage to run in non-systemd environments like a
# container image build.
#
# For documentation about policy-rc.d, see:
# https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
#
files.put(
src=get_resource("policy-rc.d"),
dest="/usr/sbin/policy-rc.d",
user="root",
group="root",
mode="755",
)
apt.packages(
name="Install nginx",
packages=["nginx", "libnginx-mod-stream"],
)
files.file("/usr/sbin/policy-rc.d", present=False)
def configure(self):
self.need_restart = _configure_nginx(self.config)
def activate(self):
systemd.service(
name="Start and enable nginx",
service="nginx.service",
running=True,
enabled=True,
restarted=self.need_restart,
)
self.need_restart = False
def _configure_nginx(config: Config, debug: bool = False) -> bool:
"""Configures nginx HTTP server."""
need_restart = False
main_config = files.template(
src=get_resource("nginx/nginx.conf.j2"),
dest="/etc/nginx/nginx.conf",
user="root",
group="root",
mode="644",
config={"domain_name": config.mail_domain},
disable_ipv6=config.disable_ipv6,
)
need_restart |= main_config.changed
autoconfig = files.template(
src=get_resource("nginx/autoconfig.xml.j2"),
dest="/var/www/html/.well-known/autoconfig/mail/config-v1.1.xml",
user="root",
group="root",
mode="644",
config={"domain_name": config.mail_domain},
)
need_restart |= autoconfig.changed
mta_sts_config = files.template(
src=get_resource("nginx/mta-sts.txt.j2"),
dest="/var/www/html/.well-known/mta-sts.txt",
user="root",
group="root",
mode="644",
config={"domain_name": config.mail_domain},
)
need_restart |= mta_sts_config.changed
# install CGI newemail script
#
cgi_dir = "/usr/lib/cgi-bin"
files.directory(
name=f"Ensure {cgi_dir} exists",
path=cgi_dir,
user="root",
group="root",
)
files.put(
name="Upload cgi newemail.py script",
src=get_resource("newemail.py", pkg="chatmaild").open("rb"),
dest=f"{cgi_dir}/newemail.py",
user="root",
group="root",
mode="755",
)
return need_restart

View File

@@ -1,13 +1,46 @@
load_module modules/ngx_stream_module.so;
user www-data;
worker_processes auto;
# Increase the number of connections
# that a worker process can open
# to avoid errors such as
# accept4() failed (24: Too many open files)
# and
# socket() failed (24: Too many open files) while connecting to upstream
# in the logs.
# <https://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile>
worker_rlimit_nofile 2048;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
error_log syslog:server=unix:/dev/log,facility=local3;
events {
worker_connections 768;
# Increase to avoid errors such as
# 768 worker_connections are not enough while connecting to upstream
# in the logs.
# <https://nginx.org/en/docs/ngx_core_module.html#worker_connections>
worker_connections 2048;
# multi_accept on;
}
stream {
map $ssl_preread_alpn_protocols $proxy {
default 127.0.0.1:8443;
~\bsmtp\b 127.0.0.1:465;
~\bimap\b 127.0.0.1:993;
}
server {
listen 443;
{% if not disable_ipv6 %}
listen [::]:443;
{% endif %}
proxy_pass $proxy;
ssl_preread on;
}
}
http {
sendfile on;
tcp_nopush on;
@@ -26,14 +59,16 @@ http {
gzip on;
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
listen 127.0.0.1:8443 ssl default_server;
root /var/www/html;
index index.html index.htm;
server_name _;
server_name {{ config.domain_name }} www.{{ config.domain_name }} mta-sts.{{ config.domain_name }};
access_log syslog:server=unix:/dev/log,facility=local7;
location / {
# First attempt to serve request as file, then
@@ -41,11 +76,64 @@ http {
try_files $uri $uri/ =404;
}
location /metrics {
default_type text/plain;
location /metrics {
default_type text/plain;
}
location /new {
if ($request_method = GET) {
# Redirect to Delta Chat,
# which will in turn do a POST request.
return 301 dcaccount:https://{{ config.domain_name }}/new;
}
fastcgi_pass unix:/run/fcgiwrap.socket;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /usr/lib/cgi-bin/newemail.py;
}
# Old URL kept for compatibility with e.g. printed QR codes.
#
# The handler is duplicated here instead of redirecting to /new
# because Delta Chat core does not follow redirects;
# redirects are only for browsers.
location /cgi-bin/newemail.py {
if ($request_method = GET) {
return 301 dcaccount:https://{{ config.domain_name }}/new;
}
fastcgi_pass unix:/run/fcgiwrap.socket;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /usr/lib/cgi-bin/newemail.py;
}
# Proxy to iroh-relay service.
location /relay {
proxy_pass http://127.0.0.1:3340;
proxy_http_version 1.1;
# Upgrade header is normally set to "iroh derp http" or "websocket".
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# add cgi-bin support
include /usr/share/doc/fcgiwrap/examples/nginx.conf;
location /relay/probe {
proxy_pass http://127.0.0.1:3340;
proxy_http_version 1.1;
}
location /generate_204 {
proxy_pass http://127.0.0.1:3340;
proxy_http_version 1.1;
}
}
# Redirect www. to non-www
server {
listen 127.0.0.1:8443 ssl;
server_name www.{{ config.domain_name }};
return 301 $scheme://{{ config.domain_name }}$request_uri;
access_log syslog:server=unix:/dev/log,facility=local7;
}
}
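The stream map above multiplexes port 443 by the client's ALPN protocol list: `smtp` goes to the submissions port, `imap` to the IMAPS port, and everything else to the local HTTPS listener on 8443, using the same word-boundary matching as nginx's `~\bsmtp\b`. A sketch of the routing decision as a pure function:

```python
import re

# Route a TLS connection by its ALPN protocol list, mirroring the
# nginx stream map: smtp -> 465, imap -> 993, default -> local HTTPS.
def pick_upstream(alpn_protocols: str) -> str:
    if re.search(r"\bsmtp\b", alpn_protocols):
        return "127.0.0.1:465"
    if re.search(r"\bimap\b", alpn_protocols):
        return "127.0.0.1:993"
    return "127.0.0.1:8443"
```

The word boundaries matter: a hypothetical protocol name like `xsmtp` would not match and would fall through to the default.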

View File

@@ -1 +1 @@
dkim._domainkey.{{ config.domain_name }} {{ config.domain_name }}:{{ config.opendkim_selector }}:/etc/dkimkeys/dkim.private
{{ config.opendkim_selector }}._domainkey.{{ config.domain_name }} {{ config.domain_name }}:{{ config.opendkim_selector }}:/etc/dkimkeys/{{ config.opendkim_selector }}.private

View File

@@ -0,0 +1,123 @@
"""
Installs OpenDKIM
"""
from pyinfra import host
from pyinfra.facts.files import File
from pyinfra.operations import apt, files, server, systemd
from cmdeploy.basedeploy import Deployer, get_resource
class OpendkimDeployer(Deployer):
required_users = [("opendkim", None, ["opendkim"])]
def __init__(self, mail_domain):
self.mail_domain = mail_domain
def install(self):
apt.packages(
name="apt install opendkim opendkim-tools",
packages=["opendkim", "opendkim-tools"],
)
def configure(self):
"""Configures OpenDKIM."""
domain = self.mail_domain
dkim_selector = "opendkim"
need_restart = False
main_config = files.template(
src=get_resource("opendkim/opendkim.conf"),
dest="/etc/opendkim.conf",
user="root",
group="root",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= main_config.changed
screen_script = files.put(
src=get_resource("opendkim/screen.lua"),
dest="/etc/opendkim/screen.lua",
user="root",
group="root",
mode="644",
)
need_restart |= screen_script.changed
final_script = files.put(
src=get_resource("opendkim/final.lua"),
dest="/etc/opendkim/final.lua",
user="root",
group="root",
mode="644",
)
need_restart |= final_script.changed
files.directory(
name="Add opendkim directory to /etc",
path="/etc/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
keytable = files.template(
src=get_resource("opendkim/KeyTable"),
dest="/etc/dkimkeys/KeyTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= keytable.changed
signing_table = files.template(
src=get_resource("opendkim/SigningTable"),
dest="/etc/dkimkeys/SigningTable",
user="opendkim",
group="opendkim",
mode="644",
config={"domain_name": domain, "opendkim_selector": dkim_selector},
)
need_restart |= signing_table.changed
files.directory(
name="Add opendkim socket directory to /var/spool/postfix",
path="/var/spool/postfix/opendkim",
user="opendkim",
group="opendkim",
mode="750",
present=True,
)
if not host.get_fact(File, f"/etc/dkimkeys/{dkim_selector}.private"):
server.shell(
name="Generate OpenDKIM domain keys",
commands=[
f"/usr/sbin/opendkim-genkey -D /etc/dkimkeys -d {domain} -s {dkim_selector}"
],
_use_su_login=True,
_su_user="opendkim",
)
service_file = files.put(
name="Configure opendkim to restart once a day",
src=get_resource("opendkim/systemd.conf"),
dest="/etc/systemd/system/opendkim.service.d/10-prevent-memory-leak.conf",
)
need_restart |= service_file.changed
self.need_restart = need_restart
def activate(self):
systemd.service(
name="Start and enable OpenDKIM",
service="opendkim.service",
running=True,
enabled=True,
daemon_reload=self.need_restart,
restarted=self.need_restart,
)
self.need_restart = False

View File

@@ -0,0 +1,42 @@
mtaname = odkim.get_mtasymbol(ctx, "{daemon_name}")
if mtaname == "ORIGINATING" then
-- Outgoing message will be signed,
-- no need to look for signatures.
return nil
end
nsigs = odkim.get_sigcount(ctx)
if nsigs == nil then
return nil
end
local valid = false
local error_msg = "No valid DKIM signature found."
for i = 1, nsigs do
sig = odkim.get_sighandle(ctx, i - 1)
sigres = odkim.sig_result(sig)
-- Signatures that do not correspond to the From: domain
-- were ignored in screen.lua and yield sigres -1.
--
-- Any valid signature that was not ignored this way
-- means the message is acceptable.
if sigres == 0 then
valid = true
else
error_msg = "DKIM signature is invalid, error code " .. tostring(sigres) .. ", search https://github.com/trusteddomainproject/OpenDKIM/blob/master/libopendkim/dkim.h#L108"
end
end
if valid then
-- Strip all DKIM-Signature headers after successful validation
-- Delete in reverse order to avoid index shifting.
for i = nsigs, 1, -1 do
odkim.del_header(ctx, "DKIM-Signature", i)
end
else
odkim.set_reply(ctx, "554", "5.7.1", error_msg)
odkim.set_result(ctx, SMFIS_REJECT)
end
return nil
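The accept/reject loop in final.lua reduces to: accept if any signature verified (`sigres == 0`), otherwise reject with the last error seen (where -1 marks signatures screen.lua already ignored). A sketch of that decision:

```python
# Mirror final.lua's verdict: sigres 0 means a signature verified;
# -1 marks signatures screen.lua ignored; anything else is an error code.
def dkim_verdict(sig_results):
    valid = False
    error = "No valid DKIM signature found."
    for sigres in sig_results:
        if sigres == 0:
            valid = True
        else:
            error = f"DKIM signature is invalid, error code {sigres}"
    return ("accept", None) if valid else ("reject", error)
```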

Some files were not shown because too many files have changed in this diff.