Compare commits

..

13 Commits

Author SHA1 Message Date
j4n
07938544a1 docker: trim compose override example 2026-02-20 17:02:34 +01:00
j4n
3cc74a4c9a docker: get rid of CHATMAIL_* in compose 2026-02-20 16:56:05 +01:00
j4n
77676a4e87 docker: streamline overrides, rename datadirs, external TLS 2026-02-20 16:38:35 +01:00
j4n
dc2a6fda05 docker: migrate to new external tls logic
- remove all traces of CHATMAIL_NOACME; purge certwatch service
- introduce TLS_EXTERNAL_CERT_AND_KEY as per new logic
2026-02-20 10:00:44 +01:00
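The TLS_EXTERNAL_CERT_AND_KEY setting introduced above takes the certificate and key paths space-separated ("CERT_PATH KEY_PATH", per the config parser). A hypothetical compose override sketch — service name, mount paths, and file names are illustrative, not taken from the repo:

```yaml
services:
  chatmail:
    environment:
      # space-separated "CERT_PATH KEY_PATH", per the config format
      TLS_EXTERNAL_CERT_AND_KEY: "/certs/fullchain.pem /certs/privkey.pem"
    volumes:
      # certificates managed outside the container, e.g. by an external
      # ACME client or a load balancer
      - ./certs:/certs:ro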
j4n
d9dce2ccee Merge remote-tracking branch 'origin/hpk/tls-external' into j4n/docker-traefik 2026-02-19 21:04:21 +01:00
j4n
fcfc2cca1a fix(docker): remove CHATMAIL_INI from env 2026-02-19 20:41:18 +01:00
j4n
beb4041e3f fix(docker): Add TZ to env 2026-02-19 20:36:51 +01:00
holger krekel
da3d726fb1 feat: support externally managed TLS via tls_external_cert_and_key option
Adds a new tls_external_cert_and_key config option for chatmail servers
that manage their own TLS certificates (e.g. via an external ACME client
or a load balancer).

A systemd path unit (tls-cert-reload.path) watches the certificate file
via inotify and automatically reloads dovecot and nginx when it changes.
Postfix reads certificates on each TLS handshake, so it needs no reload.

Also extracts openssl_selfsigned_args() so cert generation parameters
are shared between SelfSignedTlsDeployer and the e2e test.
2026-02-19 19:49:53 +01:00
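The path-unit mechanism described above could look roughly like the following sketch; only the unit name tls-cert-reload.path comes from the commit message, the watched path and unit contents are assumptions:

```ini
# tls-cert-reload.path — hypothetical sketch of the watcher unit
[Unit]
Description=Watch externally managed TLS certificate

[Path]
# inotify-based; fires when the certificate file changes
PathChanged=/var/lib/chatmail/tls/cert.pem

[Install]
WantedBy=multi-user.target

# A matching tls-cert-reload.service (Type=oneshot) would then run
# something like: systemctl reload dovecot nginx
```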
j4n
854b7ef368 typo 2026-02-19 16:03:41 +01:00
j4n
7e30bafd57 docker: clear up docker compose v1/v2 differences (doc/compose.yaml) 2026-02-19 16:03:41 +01:00
j4n
3ef59c3def feat: add Docker and Compose support
Add Docker-based deployment: Dockerfile based on systemd image,
docker-compose.yaml, build script, entrypoint, external certificate
monitoring, CI workflow, and documentation.

This builds on the chatmaild/cmdeploy preparation in the previous
commit (j4n/docker-prep-chatmail) which added the env-var-driven
feature flags (CHATMAIL_NOSYSCTL, CHATMAIL_NOPORTCHECK, CHATMAIL_NOACME)
and @local deployment support needed by the container.

This is commit 2 of 3 merging squashed changes from the j4n/docker and docker
branches; the original commits were beef0ec..606f36e

Architecture overview (mostly by original author Keonik1):
- Debian-systemd image wrapping the existing cmdeploy install
- Host networking to avoid manually exposing the many needed ports
- Config via MAIL_DOMAIN env var or (new) mounted chatmail.ini
- New: cmdeploy stages: install at build, configure+activate at startup
- New: Monitoring service for external certs via systemd timer (chatmail-certmon)
- New: Image version tracking for automatic upgrade detection (cm + config hash)
- New: docker-compose.override.yaml pattern for user customizations
- New: GitHub Actions CI for ghcr.io image builds

Traefik reverse-proxy support is prepared but the specific files are
excluded from this PR and will be submitted separately.

TODO:
- [ ] Pull out CHATMAIL_NOACME as PR #855 introduced a proper mechanism
- [ ] Check if underlying image could be based on regular debian-slim
  images with a step to enable systemd, similar to
  https://github.com/alexdzyoba/docker-debian-systemd

Files added:
  .dockerignore
  .github/workflows/docker-build.yaml
  docker-compose.yaml
  docker-compose.override.yaml.example
  docker/build.sh
  docker/chatmail_relay.dockerfile
  docker/files/chatmail-certmon.{service,sh,timer}
  docker/files/entrypoint.sh
  docker/files/setup_chatmail.service
  docker/files/setup_chatmail_docker.sh
  env.example
  doc/source/docker.rst

Files modified:
  .gitignore
  doc/source/getting_started.rst
  doc/source/index.rst

Co-authored-by: Keonik1 <keonik.dev@gmail.com>
Co-authored-by: missytake <missytake@systemli.org>
2026-02-19 16:03:41 +01:00
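The "image version tracking" item above pairs the image version with a hash of the config so the container can detect at startup whether an upgrade or reconfigure is needed. A minimal sketch of that idea — the function name and stamp format are hypothetical, not taken from the repo:

```shell
# Combine an image version with a short hash of the config file; if the
# stored stamp differs from a freshly computed one, re-run configuration.
version_stamp() {
    ver="$1"; ini="$2"
    printf '%s-%s\n' "$ver" "$(sha256sum "$ini" | cut -c1-12)"
}
```

At container start, the entrypoint would compare the stamp stored in the data directory against the current one and re-run the configure stage only on mismatch.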
j4n
a7b3893fee cmdeploy: prepare chatmaild/cmdeploy changes for Docker support
- chatmaild:
  - basedeploy.py: Add has_systemd() guard. During Docker image builds
    there's no running systemd, so deployers that query SystemdEnabled
    facts would crash; this change might also be helpful for non-systemd
    platforms.
- cmdeploy:
  - cmdeploy.py:
    - when deploying to @docker, auto-set CHATMAIL_NOPORTCHECK and
      CHATMAIL_NOSYSCTL since neither makes sense inside a container
    - --config default now reads CHATMAIL_INI env var, so Docker
      entrypoints can point to a mounted ini without CLI flags.
  - deployers.py:
    - skip port check / CHATMAIL_NOPORTCHECK
    - skip echobot systemd cleanup w/ has_systemd
  - dovecot/deployer.py:
    - Guard sysctl writes behind CHATMAIL_NOSYSCTL
    - invert dovecot install check so it works without systemd
  - sshexec.py: Add __call__ to LocalExec so cmdeploy status works with
    @local target. Without it, cmdeploy status tried to call the
    executor directly and got TypeError.

Consolidated from j4n/docker branch commits (selection):
- 8953fde feat(cmdeploy): read CHATMAIL_INI env var for default --config path
- 81d7782 fix(cmdeploy): add __call__ to LocalExec so status works with @local
- 8bba78e docker: disable port check if docker is running. fix #694
- 865b514 docker: replace config flags with env vars, drop docker param (instead of f26cb08)

Files: cmdeploy/src/cmdeploy/{basedeploy,cmdeploy,deployers,sshexec,dovecot/deployer}.py

Co-authored-by: Keonik1 <keonik.dev@gmail.com>
Co-authored-by: missytake <missytake@systemli.org>
2026-02-19 16:03:41 +01:00
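The has_systemd() guard and the LocalExec.__call__ fix described above can be sketched as follows; signatures are illustrative, the real basedeploy/sshexec modules differ:

```python
import os
import subprocess


def has_systemd() -> bool:
    # systemd creates this directory when it runs as PID 1; it is absent
    # during Docker image builds, so deployers can skip systemd-only steps.
    return os.path.isdir("/run/systemd/system")


class LocalExec:
    """Executor sketch for the @local target."""

    def run(self, cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    # `cmdeploy status` invokes the executor directly, e.g.
    # executor(["systemctl", "status"]); without __call__ that raises
    # "TypeError: 'LocalExec' object is not callable".
    def __call__(self, cmd):
        return self.run(cmd)
```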
j4n
58fa5e5c98 cmdeploy: prepare chatmaild/cmdeploy changes for Docker support
- chatmaild:
  - basedeploy.py: Add has_systemd() guard. During Docker image builds
    there's no running systemd, so deployers that query SystemdEnabled
    facts would crash; this change might also be helpful for non-systemd
    platforms.
- cmdeploy:
  - cmdeploy.py:
    - when deploying to @docker, auto-set CHATMAIL_NOPORTCHECK and
      CHATMAIL_NOSYSCTL since neither makes sense inside a container
    - --config default now reads CHATMAIL_INI env var, so Docker
      entrypoints can point to a mounted ini without CLI flags.
  - deployers.py:
    - skip port check / CHATMAIL_NOPORTCHECK
    - skip echobot systemd cleanup w/ has_systemd
  - dovecot/deployer.py:
    - Guard sysctl writes behind CHATMAIL_NOSYSCTL
    - invert dovecot install check so it works without systemd
  - sshexec.py: Add __call__ to LocalExec so cmdeploy status works with
    @local target. Without it, cmdeploy status tried to call the
    executor directly and got TypeError.

Consolidated from j4n/docker branch commits (selection):
- 8953fde feat(cmdeploy): read CHATMAIL_INI env var for default --config path
- 81d7782 fix(cmdeploy): add __call__ to LocalExec so status works with @local
- 8bba78e docker: disable port check if docker is running. fix #694
- 865b514 docker: replace config flags with env vars, drop docker param (instead of f26cb08)

Files: cmdeploy/src/cmdeploy/{basedeploy,cmdeploy,deployers,sshexec,dovecot/deployer}.py

Co-authored-by: Keonik1 <keonik.dev@gmail.com>
Co-authored-by: missytake <missytake@systemli.org>
2026-02-19 16:03:39 +01:00
69 changed files with 1027 additions and 1512 deletions


@@ -15,7 +15,7 @@ jobs:
       with:
         ref: ${{ github.event.pull_request.head.sha }}
     - name: download filtermail
-      run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.6.0/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
+      run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
     - name: run chatmaild tests
       working-directory: chatmaild
       run: pipx run tox

.github/workflows/docker-build.yaml (new file, 76 lines)

@@ -0,0 +1,76 @@
name: Docker Build

on:
  pull_request:
    paths:
      - 'docker/**'
      - 'docker-compose.yaml'
      - '.dockerignore'
      - 'chatmaild/**'
      - 'cmdeploy/**'
      - '.github/workflows/docker-build.yaml'
  push:
    branches:
      - main
      - j4n/docker
    paths:
      - 'docker/**'
      - 'docker-compose.yaml'
      - '.dockerignore'
      - 'chatmaild/**'
      - 'cmdeploy/**'
      - '.github/workflows/docker-build.yaml'
    tags:
      - 'v*'

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    name: Build Docker image
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to GHCR
        if: github.event_name == 'push'
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            # Tagged releases: v1.2.3 → :1.2.3, :1.2, :latest
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            # Branch pushes: j4n/docker → :j4n-docker
            type=ref,event=branch
            # Always: :sha-<hash>
            type=sha
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/chatmail_relay.dockerfile
          push: ${{ github.event_name == 'push' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          build-args: |
            GIT_HASH=${{ github.sha }}


@@ -4,7 +4,6 @@ on:
   push:
     branches:
       - main
-      - j4n/docker-pr
   pull_request:
     paths-ignore:
       - 'scripts/**'
@@ -12,67 +11,7 @@ on:
       - 'CHANGELOG.md'
       - 'LICENSE'
-env:
-  REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}
 jobs:
-  build-docker:
-    name: Build Docker image
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      packages: write
-    outputs:
-      image: ${{ steps.image-ref.outputs.image }}
-    steps:
-      - uses: actions/checkout@v4
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Login to GHCR
-        if: github.event_name == 'push'
-        uses: docker/login-action@v3
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Extract metadata (tags, labels)
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
-          tags: |
-            # Tagged releases: v1.2.3 -> :1.2.3, :1.2, :latest
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            # Branch pushes: foo/docker-pr -> :foo-docker-pr
-            type=ref,event=branch
-            # Always: :sha-<hash>
-            type=sha
-      - name: Build and push
-        uses: docker/build-push-action@v6
-        with:
-          context: .
-          file: docker/chatmail_relay.dockerfile
-          push: ${{ github.event_name == 'push' }}
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-          build-args: |
-            GIT_HASH=${{ github.sha }}
-      - name: Output image reference
-        id: image-ref
-        run: |
-          SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7)
-          IMAGE="${{ env.REGISTRY }}/$(echo "${{ env.IMAGE_NAME }}" | tr '[:upper:]' '[:lower:]'):sha-${SHORT_SHA}"
-          echo "image=${IMAGE}" >> "$GITHUB_OUTPUT"
   deploy:
     name: deploy on staging-ipv4.testrun.org, and run tests
     runs-on: ubuntu-latest
@@ -116,7 +55,6 @@ jobs:
       run: echo venv/bin >>$GITHUB_PATH
     - name: upload TLS cert after rebuilding
-      id: wait-for-vps
       run: |
         echo " --- wait until staging-ipv4.testrun.org VPS is rebuilt --- "
         rm ~/.ssh/known_hosts
@@ -133,164 +71,25 @@ jobs:
     - name: run deploy-chatmail offline tests
       run: pytest --pyargs cmdeploy
-    - name: setup dependencies
-      run: |
-        ssh root@staging-ipv4.testrun.org apt update
-        ssh root@staging-ipv4.testrun.org apt install -y git python3.11-venv python3-dev gcc
-        ssh root@staging-ipv4.testrun.org git clone https://github.com/chatmail/relay
-        ssh root@staging-ipv4.testrun.org "cd relay && git checkout " ${{ github.head_ref }}
-        ssh root@staging-ipv4.testrun.org "cd relay && scripts/initenv.sh"
-    - name: initialize config
-      run: |
-        ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy init staging-ipv4.testrun.org"
-        ssh root@staging-ipv4.testrun.org "sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' relay/chatmail.ini"
-        ssh root@staging-ipv4.testrun.org "sed -i 's/#\s*mtail_address/mtail_address/' relay/chatmail.ini"
-    - run: ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy run --verbose --skip-dns-check --ssh-host localhost"
+    - run: |
+        cmdeploy init staging-ipv4.testrun.org
+        sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' chatmail.ini
+        sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
+        cmdeploy run --verbose --skip-dns-check
     - name: set DNS entries
       run: |
-        ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy dns --zonefile staging-generated.zone --ssh-host localhost"
-        ssh root@staging-ipv4.testrun.org cat relay/staging-generated.zone >> .github/workflows/staging-ipv4.testrun.org-default.zone
-        cat .github/workflows/staging-ipv4.testrun.org-default.zone
-        scp .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
-        ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
-        ssh root@ns.testrun.org systemctl reload nsd
-    - name: cmdeploy test
-      run: ssh root@staging-ipv4.testrun.org "cd relay && CHATMAIL_DOMAIN2=ci-chatmail.testrun.org scripts/cmdeploy test --slow --ssh-host localhost"
-    - name: cmdeploy dns
-      run: ssh root@staging-ipv4.testrun.org "cd relay && scripts/cmdeploy dns -v --ssh-host localhost"
-    # --- Docker deploy (push only, runs even if bare failed) ---
-    - name: stop bare services
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging-ipv4.testrun.org 'systemctl stop postfix dovecot nginx opendkim unbound filtermail doveauth chatmail-metadata iroh-relay mtail fcgiwrap acmetool 2>/dev/null || true'
-    - name: install Docker on VPS
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging-ipv4.testrun.org 'apt-get update && apt-get install -y ca-certificates curl'
-        ssh root@staging-ipv4.testrun.org 'install -m 0755 -d /etc/apt/keyrings'
-        ssh root@staging-ipv4.testrun.org 'curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc && chmod a+r /etc/apt/keyrings/docker.asc'
-        ssh root@staging-ipv4.testrun.org 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list'
-        ssh root@staging-ipv4.testrun.org 'apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin'
-    - name: prepare Docker bind mounts
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging-ipv4.testrun.org 'mkdir -p /srv/chatmail/certs /srv/chatmail/dkim'
-        ssh root@staging-ipv4.testrun.org 'cp -a /var/lib/acme/. /srv/chatmail/certs/ && cp -a /etc/dkimkeys/. /srv/chatmail/dkim/' || true
-    - name: upload chatmail.ini for Docker
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        # Reuse chatmail.ini already created by the bare-metal deploy steps
-        ssh root@staging-ipv4.testrun.org "cp relay/chatmail.ini /srv/chatmail/chatmail.ini"
-    - name: deploy with Docker
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7)
-        GHCR_IMAGE="${{ env.REGISTRY }}/$(echo "${{ env.IMAGE_NAME }}" | tr '[:upper:]' '[:lower:]'):sha-${SHORT_SHA}"
-        rsync -avz --exclude='.git' --exclude='venv' --exclude='__pycache__' ./ root@staging-ipv4.testrun.org:/srv/chatmail/relay/
-        # Login to GHCR on VPS and pull pre-built image
-        echo "${{ secrets.GITHUB_TOKEN }}" | ssh root@staging-ipv4.testrun.org 'docker login ghcr.io -u ${{ github.actor }} --password-stdin'
-        ssh root@staging-ipv4.testrun.org "docker pull ${GHCR_IMAGE}"
-        ssh root@staging-ipv4.testrun.org "cd /srv/chatmail/relay && CHATMAIL_IMAGE=${GHCR_IMAGE} MAIL_DOMAIN=staging-ipv4.testrun.org docker compose -f docker/docker-compose.yaml -f docker/docker-compose.ci.yaml up -d"
-    - name: wait for container healthy
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        # Stream journald inside the container
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail journalctl -f --no-pager' &
-        LOG_PID=$!
-        trap "kill $LOG_PID 2>/dev/null || true" EXIT
-        for i in $(seq 1 60); do
-          status=$(ssh root@staging-ipv4.testrun.org 'docker inspect --format={{.State.Health.Status}} chatmail 2>/dev/null' || echo "missing")
-          echo "  [$i/60] status=$status"
-          if [ "$status" = "healthy" ]; then
-            echo "Container is healthy."
-            exit 0
-          fi
-          if [ "$status" = "unhealthy" ]; then
-            echo "Container is unhealthy!"
-            break
-          fi
-          sleep 5
-        done
-        echo "Container did not become healthy."
-        kill $LOG_PID 2>/dev/null || true
-        echo "--- failed units ---"
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail systemctl --failed --no-pager' || true
-        echo "--- service logs ---"
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail journalctl -u dovecot -u postfix -u nginx -u unbound --no-pager -n 50' || true
-        echo "--- listening ports ---"
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail ss -tlnp' || true
-        echo "--- chatmail.ini ---"
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail cat /etc/chatmail/chatmail.ini' || true
-        exit 1
-    - name: show container state
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        echo "--- listening ports ---"
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail ss -tlnp'
-        echo "--- chatmail.ini ---"
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail cat /etc/chatmail/chatmail.ini'
-    - name: Docker integration tests
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail cmdeploy test --slow --ssh-host @local'
-    - name: Docker DNS
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        # Reset zone file in case bare DNS already appended to it
-        git checkout .github/workflows/staging-ipv4.testrun.org-default.zone
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail chown opendkim:opendkim -R /etc/dkimkeys'
-        ssh root@staging-ipv4.testrun.org 'docker exec chatmail cmdeploy dns --ssh-host @local --zonefile /opt/chatmail/staging.zone --verbose'
-        ssh root@staging-ipv4.testrun.org 'docker cp chatmail:/opt/chatmail/staging.zone /tmp/staging.zone'
-        scp root@staging-ipv4.testrun.org:/tmp/staging.zone staging-generated.zone
+        ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
+        cmdeploy dns --zonefile staging-generated.zone
         cat staging-generated.zone >> .github/workflows/staging-ipv4.testrun.org-default.zone
         cat .github/workflows/staging-ipv4.testrun.org-default.zone
         scp .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
         ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
         ssh root@ns.testrun.org systemctl reload nsd
-    - name: Docker final DNS check
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: ssh root@staging-ipv4.testrun.org 'docker exec chatmail cmdeploy dns -v --ssh-host @local'
-    # --- Cleanup ---
-    - name: add SSH keys
-      if: >-
-        !cancelled()
-        && steps.wait-for-vps.outcome == 'success'
-      run: ssh root@staging-ipv4.testrun.org 'curl -s https://github.com/hpk42.keys https://github.com/j4n.keys >> .ssh/authorized_keys'
+    - name: cmdeploy test
+      run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
+    - name: cmdeploy dns
+      run: cmdeploy dns -v


@@ -4,7 +4,6 @@ on:
   push:
     branches:
       - main
-      - j4n/docker-pr
   pull_request:
     paths-ignore:
      - 'scripts/**'
@@ -12,67 +11,7 @@ on:
       - 'CHANGELOG.md'
       - 'LICENSE'
-env:
-  REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}
 jobs:
-  build-docker:
-    name: Build Docker image
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      packages: write
-    outputs:
-      image: ${{ steps.image-ref.outputs.image }}
-    steps:
-      - uses: actions/checkout@v4
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-      - name: Login to GHCR
-        if: github.event_name == 'push'
-        uses: docker/login-action@v3
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Extract metadata (tags, labels)
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
-          tags: |
-            # Tagged releases: v1.2.3 -> :1.2.3, :1.2, :latest
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            # Branch pushes: foo/docker-pr -> :foo-docker-pr
-            type=ref,event=branch
-            # Always: :sha-<hash>
-            type=sha
-      - name: Build and push
-        uses: docker/build-push-action@v6
-        with:
-          context: .
-          file: docker/chatmail_relay.dockerfile
-          push: ${{ github.event_name == 'push' }}
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-          build-args: |
-            GIT_HASH=${{ github.sha }}
-      - name: Output image reference
-        id: image-ref
-        run: |
-          SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7)
-          IMAGE="${{ env.REGISTRY }}/$(echo "${{ env.IMAGE_NAME }}" | tr '[:upper:]' '[:lower:]'):sha-${SHORT_SHA}"
-          echo "image=${IMAGE}" >> "$GITHUB_OUTPUT"
   deploy:
     name: deploy on staging2.testrun.org, and run tests
     runs-on: ubuntu-latest
@@ -116,7 +55,6 @@ jobs:
       run: echo venv/bin >>$GITHUB_PATH
     - name: upload TLS cert after rebuilding
-      id: wait-for-vps
       run: |
         echo " --- wait until staging2.testrun.org VPS is rebuilt --- "
         rm ~/.ssh/known_hosts
@@ -144,6 +82,7 @@ jobs:
     - name: set DNS entries
       run: |
+        ssh -o StrictHostKeyChecking=accept-new root@staging2.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
         cmdeploy dns --zonefile staging-generated.zone --verbose
         cat staging-generated.zone >> .github/workflows/staging.testrun.org-default.zone
         cat .github/workflows/staging.testrun.org-default.zone
@@ -157,133 +96,3 @@ jobs:
     - name: cmdeploy dns
       run: cmdeploy dns -v
-    # --- Docker deploy (push only, runs even if bare failed) ---
-    - name: stop bare services
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging2.testrun.org 'systemctl stop postfix dovecot nginx opendkim unbound filtermail doveauth chatmail-metadata iroh-relay mtail fcgiwrap acmetool 2>/dev/null || true'
-    - name: install Docker on VPS
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging2.testrun.org 'apt-get update && apt-get install -y ca-certificates curl'
-        ssh root@staging2.testrun.org 'install -m 0755 -d /etc/apt/keyrings'
-        ssh root@staging2.testrun.org 'curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc && chmod a+r /etc/apt/keyrings/docker.asc'
-        ssh root@staging2.testrun.org 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list'
-        ssh root@staging2.testrun.org 'apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin'
-    - name: prepare Docker bind mounts
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging2.testrun.org 'mkdir -p /srv/chatmail/certs /srv/chatmail/dkim'
-        ssh root@staging2.testrun.org 'cp -a /var/lib/acme/. /srv/chatmail/certs/ && cp -a /etc/dkimkeys/. /srv/chatmail/dkim/' || true
-    - name: upload chatmail.ini for Docker
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        # Reuse chatmail.ini already created by the bare-metal deploy steps
-        scp chatmail.ini root@staging2.testrun.org:/srv/chatmail/chatmail.ini
-    - name: deploy with Docker
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7)
-        GHCR_IMAGE="${{ env.REGISTRY }}/$(echo "${{ env.IMAGE_NAME }}" | tr '[:upper:]' '[:lower:]'):sha-${SHORT_SHA}"
-        rsync -avz --exclude='.git' --exclude='venv' --exclude='__pycache__' ./ root@staging2.testrun.org:/srv/chatmail/relay/
-        # Login to GHCR on VPS and pull pre-built image
-        echo "${{ secrets.GITHUB_TOKEN }}" | ssh root@staging2.testrun.org 'docker login ghcr.io -u ${{ github.actor }} --password-stdin'
-        ssh root@staging2.testrun.org "docker pull ${GHCR_IMAGE}"
-        ssh root@staging2.testrun.org "cd /srv/chatmail/relay && CHATMAIL_IMAGE=${GHCR_IMAGE} MAIL_DOMAIN=staging2.testrun.org docker compose -f docker/docker-compose.yaml -f docker/docker-compose.ci.yaml up -d"
-    - name: wait for container healthy
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        # Stream journald inside the container
-        ssh root@staging2.testrun.org 'docker exec chatmail journalctl -f --no-pager' &
-        LOG_PID=$!
-        trap "kill $LOG_PID 2>/dev/null || true" EXIT
-        for i in $(seq 1 60); do
-          status=$(ssh root@staging2.testrun.org 'docker inspect --format={{.State.Health.Status}} chatmail 2>/dev/null' || echo "missing")
-          echo "  [$i/60] status=$status"
-          if [ "$status" = "healthy" ]; then
-            echo "Container is healthy."
-            exit 0
-          fi
-          if [ "$status" = "unhealthy" ]; then
-            echo "Container is unhealthy!"
-            break
-          fi
-          sleep 5
-        done
-        echo "Container did not become healthy."
-        kill $LOG_PID 2>/dev/null || true
-        echo "--- failed units ---"
-        ssh root@staging2.testrun.org 'docker exec chatmail systemctl --failed --no-pager' || true
-        echo "--- service logs ---"
-        ssh root@staging2.testrun.org 'docker exec chatmail journalctl -u dovecot -u postfix -u nginx -u unbound --no-pager -n 50' || true
-        echo "--- listening ports ---"
-        ssh root@staging2.testrun.org 'docker exec chatmail ss -tlnp' || true
-        echo "--- chatmail.ini ---"
-        ssh root@staging2.testrun.org 'docker exec chatmail cat /etc/chatmail/chatmail.ini' || true
-        exit 1
-    - name: show container state
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        echo "--- listening ports ---"
-        ssh root@staging2.testrun.org 'docker exec chatmail ss -tlnp'
-        echo "--- chatmail.ini ---"
-        ssh root@staging2.testrun.org 'docker exec chatmail cat /etc/chatmail/chatmail.ini'
-    - name: Docker integration tests
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        ssh root@staging2.testrun.org 'docker exec chatmail cmdeploy test --slow --ssh-host @local'
-    - name: Docker DNS
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: |
-        # Reset zone file in case bare DNS already appended to it
-        git checkout .github/workflows/staging.testrun.org-default.zone
-        ssh root@staging2.testrun.org 'docker exec chatmail chown opendkim:opendkim -R /etc/dkimkeys'
-        ssh root@staging2.testrun.org 'docker exec chatmail cmdeploy dns --ssh-host @local --zonefile /opt/chatmail/staging.zone --verbose'
-        ssh root@staging2.testrun.org 'docker cp chatmail:/opt/chatmail/staging.zone /tmp/staging.zone'
-        scp root@staging2.testrun.org:/tmp/staging.zone staging-generated.zone
-        cat staging-generated.zone >> .github/workflows/staging.testrun.org-default.zone
-        cat .github/workflows/staging.testrun.org-default.zone
-        scp .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
-        ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
-        ssh root@ns.testrun.org systemctl reload nsd
-    - name: Docker final DNS check
-      if: >-
-        !cancelled() && github.event_name == 'push'
-        && steps.wait-for-vps.outcome == 'success'
-      run: ssh root@staging2.testrun.org 'docker exec chatmail cmdeploy dns -v --ssh-host @local'
-    # --- Cleanup ---
-    - name: add SSH keys
-      if: >-
-        !cancelled()
-        && steps.wait-for-vps.outcome == 'success'
-      run: ssh root@staging2.testrun.org 'curl -s https://github.com/hpk42.keys https://github.com/j4n.keys >> .ssh/authorized_keys'


@@ -0,0 +1,37 @@
name: test tls_external_cert_and_key on staging2.testrun.org

on:
  workflow_run:
    workflows:
      - "deploy on staging2.testrun.org, and run tests"
    types:
      - completed

jobs:
  test-tls-external:
    name: test tls_external_cert_and_key
    runs-on: ubuntu-latest
    timeout-minutes: 30
    concurrency: staging2.testrun.org
    environment:
      name: staging2.testrun.org
    steps:
      - uses: actions/checkout@v4
      - name: prepare SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan staging2.testrun.org >> ~/.ssh/known_hosts 2>/dev/null
      - run: scripts/initenv.sh
      - name: append venv/bin to PATH
        run: echo venv/bin >>$GITHUB_PATH
      - name: run tls_external e2e test
        run: |
          python -m cmdeploy.tests.setup_tls_external \
            staging2.testrun.org

.gitignore (6 lines changed)

@@ -4,7 +4,7 @@ __pycache__/
 *$py.class
 *.swp
 *qr-*.png
-chatmail*.ini
+chatmail.ini

 # C extensions
@@ -168,5 +168,5 @@ chatmail.zone
 # docker
 /data/
 /custom/
-docker/docker-compose.override.yaml
-docker/.env
+docker-compose.override.yaml
+.env


@@ -24,6 +24,7 @@ where = ['src']
 [project.scripts]
 doveauth = "chatmaild.doveauth:main"
 chatmail-metadata = "chatmaild.metadata:main"
+chatmail-metrics = "chatmaild.metrics:main"
 chatmail-expire = "chatmaild.expire:main"
 chatmail-fsreport = "chatmaild.fsreport:main"
 lastlogin = "chatmaild.lastlogin:main"


@@ -75,7 +75,8 @@ class Config:
                 " paths: CERT_PATH KEY_PATH"
             )
             self.tls_cert_mode = "external"
-            self.tls_cert_path, self.tls_key_path = parts
+            self.tls_cert_path = parts[0]
+            self.tls_key_path = parts[1]
         elif self.mail_domain.startswith("_"):
             self.tls_cert_mode = "self"
             self.tls_cert_path = "/etc/ssl/certs/mailserver.pem"


@@ -1,11 +1,8 @@
import json import json
import logging import logging
import os import os
import re
import sys import sys
import filelock
try: try:
import crypt_r import crypt_r
except ImportError: except ImportError:
@@ -16,7 +13,6 @@ from .dictproxy import DictProxy
from .migrate_db import migrate_from_db_to_maildir from .migrate_db import migrate_from_db_to_maildir
NOCREATE_FILE = "/etc/chatmail-nocreate" NOCREATE_FILE = "/etc/chatmail-nocreate"
VALID_LOCALPART_RE = re.compile(r"^[a-z0-9._-]+$")
def encrypt_password(password: str): def encrypt_password(password: str):
@@ -56,10 +52,6 @@ def is_allowed_to_create(config: Config, user, cleartext_password) -> bool:
) )
return False return False
    if not VALID_LOCALPART_RE.match(localpart):
        logging.warning("localpart %r contains invalid characters", localpart)
        return False
return True return True
@@ -148,13 +140,8 @@ class AuthDictProxy(DictProxy):
if not is_allowed_to_create(self.config, addr, cleartext_password): if not is_allowed_to_create(self.config, addr, cleartext_password):
return return
lock = filelock.FileLock(str(user.password_path) + ".lock", timeout=5) user.set_password(encrypt_password(cleartext_password))
with lock: print(f"Created address: {addr}", file=sys.stderr)
            userdata = user.get_userdb_dict()
            if userdata:
                return userdata
            user.set_password(encrypt_password(cleartext_password))
            print(f"Created address: {addr}", file=sys.stderr)
return user.get_userdb_dict() return user.get_userdb_dict()
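The creation path removed here guarded first-time account creation with a per-user file lock plus a re-read, so concurrent requests all observe the same stored record. A standalone sketch of that double-checked pattern, using stdlib `fcntl` in place of the `filelock` package (function name and file layout are illustrative):

```python
import fcntl
import json
import os


def create_once(path: str, make_value):
    """Create a JSON record at `path` exactly once, even under concurrency.

    Takes an exclusive lock, re-checks whether another writer already
    created the record, and only then writes a fresh value.
    """
    lock_path = path + ".lock"
    with open(lock_path, "w") as lockf:
        fcntl.flock(lockf, fcntl.LOCK_EX)
        try:
            if os.path.exists(path):
                # another writer won the race; reuse its record
                with open(path) as f:
                    return json.load(f)
            value = make_value()
            with open(path, "w") as f:
                json.dump(value, f)
            return value
        finally:
            fcntl.flock(lockf, fcntl.LOCK_UN)
```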

View File

@@ -13,20 +13,9 @@ to show storage summaries only for first 1000 mailboxes
python -m chatmaild.fsreport /path/to/chatmail.ini --maxnum 1000 python -m chatmaild.fsreport /path/to/chatmail.ini --maxnum 1000
to write Prometheus textfile for node_exporter
python -m chatmaild.fsreport --textfile /var/lib/prometheus/node-exporter/
writes to /var/lib/prometheus/node-exporter/fsreport.prom
to also write legacy metrics.py style output (default: /var/www/html/metrics):
python -m chatmaild.fsreport --textfile /var/lib/prometheus/node-exporter/ --legacy-metrics
""" """
import os import os
import tempfile
from argparse import ArgumentParser from argparse import ArgumentParser
from datetime import datetime from datetime import datetime
@@ -59,19 +48,7 @@ class Report:
self.num_ci_logins = self.num_all_logins = 0 self.num_ci_logins = self.num_all_logins = 0
self.login_buckets = {x: 0 for x in (1, 10, 30, 40, 80, 100, 150)} self.login_buckets = {x: 0 for x in (1, 10, 30, 40, 80, 100, 150)}
KiB = 1024 self.message_buckets = {x: 0 for x in (0, 160000, 500000, 2000000)}
        MiB = 1024 * KiB
        self.message_size_thresholds = (
            0,
            100 * KiB,
            MiB // 2,
            1 * MiB,
            2 * MiB,
            5 * MiB,
            10 * MiB,
        )
        self.message_buckets = {x: 0 for x in self.message_size_thresholds}
        self.message_count_buckets = {x: 0 for x in self.message_size_thresholds}
def process_mailbox_stat(self, mailbox): def process_mailbox_stat(self, mailbox):
# categorize login times # categorize login times
@@ -91,10 +68,9 @@ class Report:
for size in self.message_buckets: for size in self.message_buckets:
for msg in mailbox.messages: for msg in mailbox.messages:
if msg.size >= size: if msg.size >= size:
if self.mdir and f"/{self.mdir}/" not in msg.path: if self.mdir and not msg.relpath.startswith(self.mdir):
continue continue
self.message_buckets[size] += msg.size self.message_buckets[size] += msg.size
self.message_count_buckets[size] += 1
self.size_messages += sum(entry.size for entry in mailbox.messages) self.size_messages += sum(entry.size for entry in mailbox.messages)
self.size_extra += sum(entry.size for entry in mailbox.extrafiles) self.size_extra += sum(entry.size for entry in mailbox.extrafiles)
@@ -117,10 +93,9 @@ class Report:
pref = f"[{self.mdir}] " if self.mdir else "" pref = f"[{self.mdir}] " if self.mdir else ""
for minsize, sumsize in self.message_buckets.items(): for minsize, sumsize in self.message_buckets.items():
count = self.message_count_buckets[minsize]
percent = (sumsize / all_messages * 100) if all_messages else 0 percent = (sumsize / all_messages * 100) if all_messages else 0
print( print(
f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%), {count} msgs" f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%)"
) )
user_logins = self.num_all_logins - self.num_ci_logins user_logins = self.num_all_logins - self.num_ci_logins
@@ -136,75 +111,6 @@ class Report:
for days, active in self.login_buckets.items(): for days, active in self.login_buckets.items():
print(f"last {days:3} days: {HSize(active)} {p(active)}") print(f"last {days:3} days: {HSize(active)} {p(active)}")
    def _write_atomic(self, filepath, content):
        """Atomically write content to filepath via tmp+rename."""
        dirpath = os.path.dirname(os.path.abspath(filepath))
        fd, tmppath = tempfile.mkstemp(dir=dirpath, suffix=".tmp")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(content)
            os.chmod(tmppath, 0o644)
            os.rename(tmppath, filepath)
        except BaseException:
            try:
                os.unlink(tmppath)
            except OSError:
                pass
            raise

    def dump_textfile(self, filepath):
        """Dump metrics in Prometheus exposition format."""
        lines = []
        lines.append("# HELP chatmail_storage_bytes Mailbox storage in bytes.")
        lines.append("# TYPE chatmail_storage_bytes gauge")
        lines.append(f'chatmail_storage_bytes{{kind="messages"}} {self.size_messages}')
        lines.append(f'chatmail_storage_bytes{{kind="extra"}} {self.size_extra}')
        total = self.size_extra + self.size_messages
        lines.append(f'chatmail_storage_bytes{{kind="total"}} {total}')
        lines.append("# HELP chatmail_messages_bytes Sum of msg bytes >= threshold.")
        lines.append("# TYPE chatmail_messages_bytes gauge")
        for minsize, sumsize in self.message_buckets.items():
            lines.append(f'chatmail_messages_bytes{{min_size="{minsize}"}} {sumsize}')
        lines.append("# HELP chatmail_messages_count Number of msgs >= size threshold.")
        lines.append("# TYPE chatmail_messages_count gauge")
        for minsize, count in self.message_count_buckets.items():
            lines.append(f'chatmail_messages_count{{min_size="{minsize}"}} {count}')
        lines.append("# HELP chatmail_accounts Number of accounts.")
        lines.append("# TYPE chatmail_accounts gauge")
        user_logins = self.num_all_logins - self.num_ci_logins
        lines.append(f'chatmail_accounts{{kind="all"}} {self.num_all_logins}')
        lines.append(f'chatmail_accounts{{kind="ci"}} {self.num_ci_logins}')
        lines.append(f'chatmail_accounts{{kind="user"}} {user_logins}')
        lines.append(
            "# HELP chatmail_accounts_active Non-CI accounts active within N days."
        )
        lines.append("# TYPE chatmail_accounts_active gauge")
        for days, active in self.login_buckets.items():
            lines.append(f'chatmail_accounts_active{{days="{days}"}} {active}')
        self._write_atomic(filepath, "\n".join(lines) + "\n")

    def dump_compat_textfile(self, filepath):
        """Dump legacy metrics.py style metrics."""
        user_logins = self.num_all_logins - self.num_ci_logins
        lines = [
            "# HELP total number of accounts",
            "# TYPE accounts gauge",
            f"accounts {self.num_all_logins}",
            "# HELP number of CI accounts",
            "# TYPE ci_accounts gauge",
            f"ci_accounts {self.num_ci_logins}",
            "# HELP number of non-CI accounts",
            "# TYPE nonci_accounts gauge",
            f"nonci_accounts {user_logins}",
        ]
        self._write_atomic(filepath, "\n".join(lines) + "\n")
def main(args=None): def main(args=None):
"""Report about filesystem storage usage of all mailboxes and messages""" """Report about filesystem storage usage of all mailboxes and messages"""
@@ -221,21 +127,19 @@ def main(args=None):
"--days", "--days",
default=0, default=0,
action="store", action="store",
help="assume date to be DAYS older than now", help="assume date to be days older than now",
) )
parser.add_argument( parser.add_argument(
"--min-login-age", "--min-login-age",
default=0, default=0,
metavar="DAYS",
dest="min_login_age", dest="min_login_age",
action="store", action="store",
help="only sum up message size if last login is at least DAYS days old", help="only sum up message size if last login is at least min-login-age days old",
) )
parser.add_argument( parser.add_argument(
"--mdir", "--mdir",
metavar="{cur,new,tmp}",
action="store", action="store",
help="only consider messages in specified Maildir subdirectory for summary", help="only consider 'cur' or 'new' or 'tmp' messages for summary",
) )
parser.add_argument( parser.add_argument(
@@ -244,21 +148,6 @@ def main(args=None):
action="store", action="store",
help="maximum number of mailboxes to iterate on", help="maximum number of mailboxes to iterate on",
) )
    parser.add_argument(
        "--textfile",
        metavar="PATH",
        default=None,
        help="write Prometheus textfile to PATH (directory or file); "
        "if PATH is a directory, writes 'fsreport.prom' inside it",
    )
    parser.add_argument(
        "--legacy-metrics",
        metavar="FILENAME",
        nargs="?",
        const="/var/www/html/metrics",
        default=None,
        help="write legacy metrics.py textfile (default: /var/www/html/metrics)",
    )
args = parser.parse_args(args) args = parser.parse_args(args)
@@ -272,15 +161,7 @@ def main(args=None):
rep = Report(now=now, min_login_age=int(args.min_login_age), mdir=args.mdir) rep = Report(now=now, min_login_age=int(args.min_login_age), mdir=args.mdir)
for mbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum): for mbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum):
rep.process_mailbox_stat(mbox) rep.process_mailbox_stat(mbox)
if args.textfile: rep.dump_summary()
        path = args.textfile
        if os.path.isdir(path):
            path = os.path.join(path, "fsreport.prom")
        rep.dump_textfile(path)
    if args.legacy_metrics:
        rep.dump_compat_textfile(args.legacy_metrics)
    if not args.textfile and not args.legacy_metrics:
        rep.dump_summary()
if __name__ == "__main__": if __name__ == "__main__":
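The deleted `--textfile` path wrote node_exporter textfiles through `_write_atomic`, the usual tmp-file-plus-`os.rename` idiom that keeps Prometheus from ever scraping a half-written file. The idiom extracted on its own:

```python
import os
import tempfile


def write_atomic(filepath: str, content: str) -> None:
    """Write content to filepath atomically via temp file + rename.

    Readers only ever see the old or the complete new file; os.rename
    is atomic when source and destination share a filesystem.
    """
    dirpath = os.path.dirname(os.path.abspath(filepath))
    fd, tmppath = tempfile.mkstemp(dir=dirpath, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.chmod(tmppath, 0o644)
        os.rename(tmppath, filepath)
    except BaseException:
        # best-effort cleanup of the temp file on any failure
        try:
            os.unlink(tmppath)
        except OSError:
            pass
        raise
```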

View File

@@ -101,11 +101,7 @@ class MetadataDictProxy(DictProxy):
# Handle `GETMETADATA "" /shared/vendor/deltachat/irohrelay` # Handle `GETMETADATA "" /shared/vendor/deltachat/irohrelay`
return f"O{self.iroh_relay}\n" return f"O{self.iroh_relay}\n"
elif keyname == "vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn": elif keyname == "vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn":
try: res = turn_credentials()
res = turn_credentials()
except Exception:
logging.exception("failed to get TURN credentials")
return "N\n"
port = 3478 port = 3478
return f"O{self.turn_hostname}:{port}:{res}\n" return f"O{self.turn_hostname}:{port}:{res}\n"
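The try/except dropped above translated a TURN backend failure into dovecot's `N` (not found) dict reply rather than letting the exception escape the request handler. Reduced to a standalone function (names are illustrative):

```python
import logging


def turn_reply(turn_hostname: str, get_credentials, port: int = 3478) -> str:
    """Build a dovecot dict reply for the TURN metadata key.

    Returns 'O<host>:<port>:<credentials>' on success and 'N' (not found)
    if the credential backend is unavailable, instead of crashing.
    """
    try:
        creds = get_credentials()
    except Exception:
        logging.exception("failed to get TURN credentials")
        return "N\n"
    return f"O{turn_hostname}:{port}:{creds}\n"
```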

View File

@@ -0,0 +1,32 @@
#!/usr/bin/env python3
import sys
from pathlib import Path


def main(vmail_dir=None):
    if vmail_dir is None:
        vmail_dir = sys.argv[1]
    accounts = 0
    ci_accounts = 0
    for path in Path(vmail_dir).iterdir():
        if not path.joinpath("cur").is_dir():
            continue
        accounts += 1
        if path.name[:3] in ("ci-", "ac_"):
            ci_accounts += 1
    print("# HELP total number of accounts")
    print("# TYPE accounts gauge")
    print(f"accounts {accounts}")
    print("# HELP number of CI accounts")
    print("# TYPE ci_accounts gauge")
    print(f"ci_accounts {ci_accounts}")
    print("# HELP number of non-CI accounts")
    print("# TYPE nonci_accounts gauge")
    print(f"nonci_accounts {accounts - ci_accounts}")


if __name__ == "__main__":
    main()

View File

@@ -3,6 +3,7 @@
"""CGI script for creating new accounts.""" """CGI script for creating new accounts."""
import json import json
import random
import secrets import secrets
import string import string
from urllib.parse import quote from urllib.parse import quote
@@ -15,9 +16,7 @@ ALPHANUMERIC_PUNCT = string.ascii_letters + string.digits + string.punctuation
def create_newemail_dict(config: Config): def create_newemail_dict(config: Config):
user = "".join( user = "".join(random.choices(ALPHANUMERIC, k=config.username_max_length))
secrets.choice(ALPHANUMERIC) for _ in range(config.username_max_length)
)
password = "".join( password = "".join(
secrets.choice(ALPHANUMERIC_PUNCT) secrets.choice(ALPHANUMERIC_PUNCT)
for _ in range(config.password_min_length + 3) for _ in range(config.password_min_length + 3)
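After this change the username comes from `random.choices` while the password sticks with `secrets.choice`; side by side, with a stand-in alphabet constant (the module's actual `ALPHANUMERIC` may differ):

```python
import random
import secrets
import string

# stand-in for the module-level constant
ALPHANUMERIC = string.ascii_lowercase + string.digits


def make_username(k: int) -> str:
    # non-secret identifier: the plain PRNG suffices here
    return "".join(random.choices(ALPHANUMERIC, k=k))


def make_password(k: int) -> str:
    # secret material: use the CSPRNG-backed secrets module
    return "".join(secrets.choice(ALPHANUMERIC) for _ in range(k))
```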

View File

@@ -110,7 +110,6 @@ def test_config_tls_external_overrides_underscore(make_config):
) )
assert config.tls_cert_mode == "external" assert config.tls_cert_mode == "external"
assert config.tls_cert_path == "/certs/fullchain.pem" assert config.tls_cert_path == "/certs/fullchain.pem"
assert config.tls_key_path == "/certs/privkey.pem"
def test_config_tls_external_bad_format(make_config): def test_config_tls_external_bad_format(make_config):

View File

@@ -120,60 +120,6 @@ def test_handle_dovecot_protocol_iterate(gencreds, example_config):
assert not lines[2] assert not lines[2]
def test_invalid_localpart_characters(make_config):
    """Test that is_allowed_to_create rejects localparts with invalid characters."""
    config = make_config("chat.example.org", {"username_min_length": "3"})
    password = "zequ0Aimuchoodaechik"
    domain = config.mail_domain

    # valid localparts
    assert is_allowed_to_create(config, f"abc123@{domain}", password)
    assert is_allowed_to_create(config, f"a.b-c_d@{domain}", password)

    # uppercase rejected
    assert not is_allowed_to_create(config, f"Abc123@{domain}", password)
    assert not is_allowed_to_create(config, f"ABCDEFG@{domain}", password)

    # spaces and special chars rejected
    assert not is_allowed_to_create(config, f"a b cde@{domain}", password)
    assert not is_allowed_to_create(config, f"abc+def@{domain}", password)
    assert not is_allowed_to_create(config, f"abc!def@{domain}", password)
    assert not is_allowed_to_create(config, f"ab@cdef@{domain}", password)
    assert not is_allowed_to_create(config, f"abc/def@{domain}", password)
    assert not is_allowed_to_create(config, f"abc\\def@{domain}", password)
def test_concurrent_creation_same_account(dictproxy):
    """Test that concurrent creation of the same account doesn't corrupt password."""
    addr = "racetest1@chat.example.org"
    password = "zequ0Aimuchoodaechik"
    num_threads = 10
    results = queue.Queue()

    def create():
        try:
            res = dictproxy.lookup_passdb(addr, password)
            results.put(("ok", res))
        except Exception:
            results.put(("err", traceback.format_exc()))

    threads = [threading.Thread(target=create, daemon=True) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join(timeout=10)

    passwords_seen = set()
    for _ in range(num_threads):
        status, res = results.get()
        if status == "err":
            pytest.fail(f"concurrent creation failed\n{res}")
        passwords_seen.add(res["password"])

    # all threads must see the same password hash
    assert len(passwords_seen) == 1
def test_50_concurrent_lookups_different_accounts(gencreds, dictproxy): def test_50_concurrent_lookups_different_accounts(gencreds, dictproxy):
num_threads = 50 num_threads = 50
req_per_thread = 5 req_per_thread = 5

View File

@@ -112,43 +112,6 @@ def test_report(mbox1, example_config):
report_main(args) report_main(args)
def test_report_mdir_filters_by_path(mbox1, example_config):
    """Test that Report with mdir='cur' only counts messages in cur/ subdirectory."""
    from chatmaild.fsreport import Report

    now = datetime.utcnow().timestamp()

    # Set password mtime to old enough so min_login_age check passes
    password = Path(mbox1.basedir).joinpath("password")
    old_time = now - 86400 * 10  # 10 days ago
    os.utime(password, (old_time, old_time))

    # Reload mailbox with updated mtime
    from chatmaild.expire import MailboxStat

    mbox = MailboxStat(mbox1.basedir)

    # Report without mdir — should count all messages
    rep_all = Report(now=now, min_login_age=1, mdir=None)
    rep_all.process_mailbox_stat(mbox)
    total_all = rep_all.message_buckets[0]

    # Report with mdir='cur' — should only count cur/ messages
    rep_cur = Report(now=now, min_login_age=1, mdir="cur")
    rep_cur.process_mailbox_stat(mbox)
    total_cur = rep_cur.message_buckets[0]

    # Report with mdir='new' — should only count new/ messages
    rep_new = Report(now=now, min_login_age=1, mdir="new")
    rep_new.process_mailbox_stat(mbox)
    total_new = rep_new.message_buckets[0]

    # cur has 500-byte msg, new has 600-byte msg (from fill_mbox)
    assert total_cur == 500
    assert total_new == 600
    assert total_all == 500 + 600
def test_expiry_cli_basic(example_config, mbox1): def test_expiry_cli_basic(example_config, mbox1):
args = (str(example_config._inipath),) args = (str(example_config._inipath),)
expiry_main(args) expiry_main(args)

View File

@@ -47,8 +47,6 @@ def test_one_mail(
make_config, make_popen, smtpserver, maildata, filtermail_mode, monkeypatch make_config, make_popen, smtpserver, maildata, filtermail_mode, monkeypatch
): ):
monkeypatch.setenv("PYTHONUNBUFFERED", "1") monkeypatch.setenv("PYTHONUNBUFFERED", "1")
# DKIM is tested by cmdeploy tests.
monkeypatch.setenv("FILTERMAIL_SKIP_DKIM", "1")
smtp_inject_port = 20025 smtp_inject_port = 20025
if filtermail_mode == "outgoing": if filtermail_mode == "outgoing":
settings = dict( settings = dict(
@@ -66,10 +64,6 @@ def test_one_mail(
popen = make_popen(["filtermail", path, filtermail_mode]) popen = make_popen(["filtermail", path, filtermail_mode])
line = popen.stderr.readline().strip() line = popen.stderr.readline().strip()
# skip a warning that FILTERMAIL_SKIP_DKIM shouldn't be used in prod
if b"DKIM verification DISABLED!" in line:
line = popen.stderr.readline().strip()
if b"loop" not in line: if b"loop" not in line:
print(line.decode("ascii"), file=sys.stderr) print(line.decode("ascii"), file=sys.stderr)
pytest.fail("starting filtermail failed") pytest.fail("starting filtermail failed")

View File

@@ -314,51 +314,6 @@ def test_persistent_queue_items(tmp_path, testaddr, token):
assert not queue_item < item2 and not item2 < queue_item assert not queue_item < item2 and not item2 < queue_item
def test_turn_credentials_exception_returns_N(notifier, metadata, monkeypatch):
    """Test that turn_credentials() failure returns N\\n instead of crashing."""
    import chatmaild.metadata

    dictproxy = MetadataDictProxy(
        notifier=notifier,
        metadata=metadata,
        turn_hostname="turn.example.org",
    )

    def mock_turn_credentials():
        raise ConnectionRefusedError("socket not available")

    monkeypatch.setattr(chatmaild.metadata, "turn_credentials", mock_turn_credentials)
    transactions = {}
    res = dictproxy.handle_dovecot_request(
        "Lshared/0123/vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn"
        "\tuser@example.org",
        transactions,
    )
    assert res == "N\n"


def test_turn_credentials_success(notifier, metadata, monkeypatch):
    """Test that valid turn_credentials() returns TURN URI."""
    import chatmaild.metadata

    dictproxy = MetadataDictProxy(
        notifier=notifier,
        metadata=metadata,
        turn_hostname="turn.example.org",
    )
    monkeypatch.setattr(chatmaild.metadata, "turn_credentials", lambda: "user:pass")
    transactions = {}
    res = dictproxy.handle_dovecot_request(
        "Lshared/0123/vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn"
        "\tuser@example.org",
        transactions,
    )
    assert res == "Oturn.example.org:3478:user:pass\n"
def test_iroh_relay(dictproxy): def test_iroh_relay(dictproxy):
rfile = io.BytesIO( rfile = io.BytesIO(
b"\n".join( b"\n".join(

View File

@@ -0,0 +1,24 @@
from chatmaild.metrics import main
def test_main(tmp_path, capsys):
paths = []
for x in ("ci-asllkj", "ac_12l3kj", "qweqwe", "ci-l1k2j31l2k3"):
p = tmp_path.joinpath(x)
p.mkdir()
p.joinpath("cur").mkdir()
paths.append(p)
tmp_path.joinpath("nomailbox").mkdir()
main(tmp_path)
out, _ = capsys.readouterr()
d = {}
for line in out.split("\n"):
if line.strip() and not line.startswith("#"):
name, num = line.split()
d[name] = int(num)
assert d["accounts"] == 4
assert d["ci_accounts"] == 3
assert d["nonci_accounts"] == 1

View File

@@ -1,73 +0,0 @@
import socket
import threading
import time
from unittest.mock import patch
import pytest
from chatmaild.turnserver import turn_credentials
SOCKET_PATH = "/run/chatmail-turn/turn.socket"
@pytest.fixture
def turn_socket(tmp_path):
"""Create a real Unix socket server at a temp path."""
sock_path = str(tmp_path / "turn.socket")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)
yield sock_path, server
server.close()
def _call_turn_credentials(sock_path):
"""Call turn_credentials but connect to sock_path instead of hardcoded path."""
original_connect = socket.socket.connect
def patched_connect(self, address):
if address == SOCKET_PATH:
address = sock_path
return original_connect(self, address)
with patch.object(socket.socket, "connect", patched_connect):
return turn_credentials()
def test_turn_credentials_timeout(turn_socket):
"""Server accepts but never responds — must raise socket.timeout."""
sock_path, server = turn_socket
def accept_and_hang():
conn, _ = server.accept()
time.sleep(30)
conn.close()
t = threading.Thread(target=accept_and_hang, daemon=True)
t.start()
with pytest.raises(socket.timeout):
_call_turn_credentials(sock_path)
def test_turn_credentials_connection_refused(tmp_path):
"""Socket file doesn't exist — must raise ConnectionRefusedError or FileNotFoundError."""
missing = str(tmp_path / "nonexistent.socket")
with pytest.raises((ConnectionRefusedError, FileNotFoundError)):
_call_turn_credentials(missing)
def test_turn_credentials_success(turn_socket):
"""Server responds with credentials — must return stripped string."""
sock_path, server = turn_socket
def respond():
conn, _ = server.accept()
conn.sendall(b"testuser:testpass\n")
conn.close()
t = threading.Thread(target=respond, daemon=True)
t.start()
result = _call_turn_credentials(sock_path)
assert result == "testuser:testpass"

View File

@@ -4,7 +4,6 @@ import socket
def turn_credentials() -> str: def turn_credentials() -> str:
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client_socket: with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client_socket:
client_socket.settimeout(5)
client_socket.connect("/run/chatmail-turn/turn.socket") client_socket.connect("/run/chatmail-turn/turn.socket")
with client_socket.makefile("rb") as file: with client_socket.makefile("rb") as file:
return file.readline().decode("utf-8").strip() return file.readline().decode("utf-8").strip()
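`turn_credentials` connects to a Unix stream socket and reads a single credentials line; a self-contained variant with the timeout guard and a configurable path (the real function hardcodes `/run/chatmail-turn/turn.socket`):

```python
import socket


def fetch_line(sock_path: str, timeout: float = 5.0) -> str:
    """Connect to a Unix stream socket, read one line, return it stripped.

    The timeout keeps callers from hanging forever if the server accepts
    the connection but never replies.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect(sock_path)
        with s.makefile("rb") as f:
            return f.readline().decode("utf-8").strip()
```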

View File

@@ -20,7 +20,6 @@ dependencies = [
"pytest-xdist", "pytest-xdist",
"execnet", "execnet",
"imap_tools", "imap_tools",
"deltachat-rpc-client",
] ]
[project.scripts] [project.scripts]

View File

@@ -67,7 +67,7 @@ class AcmetoolDeployer(Deployer):
) )
files.template( files.template(
src=importlib.resources.files(__package__).joinpath("desired.yaml.j2"), src=importlib.resources.files(__package__).joinpath("desired.yaml.j2"),
dest=f"/var/lib/acme/desired/{self.domains[0]}", # 0 is mailhost TLD dest=f"/var/lib/acme/desired/{self.domains[0]}", # 0 is mailhost TLD
user="root", user="root",
group="root", group="root",
mode="644", mode="644",

View File

@@ -3,7 +3,7 @@ Description=acmetool HTTP redirector
[Service] [Service]
Type=notify Type=notify
ExecStart=/usr/bin/acmetool redirector --service.uid=daemon --bind=127.0.0.1:402 ExecStart=/usr/bin/acmetool redirector --service.uid=daemon
Restart=always Restart=always
RestartSec=30 RestartSec=30

View File

@@ -1,7 +1,6 @@
import importlib.resources import importlib.resources
import io import io
import os import os
from contextlib import contextmanager
from pyinfra.operations import files, server, systemd from pyinfra.operations import files, server, systemd
@@ -11,28 +10,6 @@ def has_systemd():
return os.path.isdir("/run/systemd/system") return os.path.isdir("/run/systemd/system")
@contextmanager
def blocked_service_startup():
    """Prevent services from auto-starting during package installation.

    Installs a ``/usr/sbin/policy-rc.d`` that exits 101, blocking any
    service from being started by the package manager. This avoids bind
    conflicts and CPU/RAM spikes during initial setup. The file is removed
    when the context exits.
    """
    # For documentation about policy-rc.d, see:
    # https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
    files.put(
        src=get_resource("policy-rc.d"),
        dest="/usr/sbin/policy-rc.d",
        user="root",
        group="root",
        mode="755",
    )
    yield
    files.file("/usr/sbin/policy-rc.d", present=False)
def get_resource(arg, pkg=__package__): def get_resource(arg, pkg=__package__):
return importlib.resources.files(pkg).joinpath(arg) return importlib.resources.files(pkg).joinpath(arg)

View File

@@ -5,6 +5,7 @@ along with command line option and subcommand parsing.
import argparse import argparse
import importlib.resources import importlib.resources
import importlib.util
import os import os
import pathlib import pathlib
import shutil import shutil
@@ -108,7 +109,10 @@ def run_cmd(args, out):
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra" pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y" cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
if ssh_host == "localhost": if ssh_host in ["localhost", "@docker"]:
        if ssh_host == "@docker":
            env["CHATMAIL_NOPORTCHECK"] = "True"
            env["CHATMAIL_NOSYSCTL"] = "True"
cmd = f"{pyinf} @local {deploy_path} -y" cmd = f"{pyinf} @local {deploy_path} -y"
if version.parse(pyinfra.__version__) < version.parse("3"): if version.parse(pyinfra.__version__) < version.parse("3"):
@@ -116,18 +120,24 @@ def run_cmd(args, out):
return 1 return 1
try: try:
out.check_call(cmd, env=env) retcode = out.check_call(cmd, env=env)
if args.website_only: if args.website_only:
out.green("Website deployment completed.") if retcode == 0:
out.green("Website deployment completed.")
else:
out.red("Website deployment failed.")
elif retcode == 0:
out.green("Deploy completed, call `cmdeploy dns` next.")
elif not args.dns_check_disabled and strict_tls and not remote_data["acme_account_url"]: elif not args.dns_check_disabled and strict_tls and not remote_data["acme_account_url"]:
out.red("Deploy completed but letsencrypt not configured") out.red("Deploy completed but letsencrypt not configured")
out.red("Run 'cmdeploy run' again") out.red("Run 'cmdeploy run' again")
retcode = 0
else: else:
out.green("Deploy completed, call `cmdeploy dns` next.") out.red("Deploy failed")
return 0
except subprocess.CalledProcessError: except subprocess.CalledProcessError:
out.red("Deploy failed") out.red("Deploy failed")
return 1 retcode = 1
return retcode
def dns_cmd_options(parser): def dns_cmd_options(parser):
@@ -200,15 +210,17 @@ def test_cmd_options(parser):
action="store_true", action="store_true",
help="also run slow tests", help="also run slow tests",
) )
add_ssh_host_option(parser)
def test_cmd(args, out): def test_cmd(args, out):
"""Run local and online tests for chatmail deployment.""" """Run local and online tests for chatmail deployment.
env = os.environ.copy() This will automatically pip-install 'deltachat' if it's not available.
if args.ssh_host: """
env["CHATMAIL_SSH"] = args.ssh_host
    x = importlib.util.find_spec("deltachat")
    if x is None:
        out.check_call(f"{sys.executable} -m pip install deltachat")
pytest_path = shutil.which("pytest") pytest_path = shutil.which("pytest")
pytest_args = [ pytest_args = [
@@ -222,7 +234,7 @@ def test_cmd(args, out):
] ]
if args.slow: if args.slow:
pytest_args.append("--slow") pytest_args.append("--slow")
ret = out.run_ret(pytest_args, env=env) ret = out.run_ret(pytest_args)
return ret return ret
@@ -313,7 +325,7 @@ def add_ssh_host_option(parser):
parser.add_argument( parser.add_argument(
"--ssh-host", "--ssh-host",
dest="ssh_host", dest="ssh_host",
help="Run commands on 'localhost' or on a specific SSH host " help="Run commands on 'localhost', via '@docker', or on a specific SSH host "
"instead of chatmail.ini's mail_domain.", "instead of chatmail.ini's mail_domain.",
) )
@@ -375,7 +387,9 @@ def get_parser():
def get_sshexec(ssh_host: str, verbose=True): def get_sshexec(ssh_host: str, verbose=True):
if ssh_host in ["localhost", "@local"]: if ssh_host in ["localhost", "@local"]:
return LocalExec(verbose) return LocalExec(verbose, docker=False)
elif ssh_host == "@docker":
return LocalExec(verbose, docker=True)
if verbose: if verbose:
print(f"[ssh] login to {ssh_host}") print(f"[ssh] login to {ssh_host}")
return SSHExec(ssh_host, verbose=verbose) return SSHExec(ssh_host, verbose=verbose)
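The `@docker` alias added above reuses pyinfra's `@local` connector but exports the env flags that disable host-level checks inside containers. The command/env selection, reduced to a sketch (illustrative helper name):

```python
def build_pyinfra_cmd(ssh_host: str, deploy_path: str, dry_run: bool = False):
    """Sketch of run_cmd's command/env selection for localhost/@docker/SSH."""
    env = {}
    pyinf = "pyinfra --dry" if dry_run else "pyinfra"
    if ssh_host in ("localhost", "@docker"):
        if ssh_host == "@docker":
            # containers can't tweak host sysctls or own the host's ports
            env["CHATMAIL_NOPORTCHECK"] = "True"
            env["CHATMAIL_NOSYSCTL"] = "True"
        cmd = f"{pyinf} @local {deploy_path} -y"
    else:
        cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
    return cmd, env
```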

View File

@@ -2,24 +2,25 @@
Chat Mail pyinfra deploy. Chat Mail pyinfra deploy.
""" """
import os
import shutil import shutil
import subprocess import subprocess
import sys import sys
from io import BytesIO, StringIO from io import StringIO
from pathlib import Path from pathlib import Path
from chatmaild.config import read_config from chatmaild.config import read_config
from pyinfra import facts, host, logger from pyinfra import facts, host, logger
from pyinfra.api import FactBase
from pyinfra.facts import hardware from pyinfra.facts import hardware
from pyinfra.api import FactBase
from pyinfra.facts.files import Sha256File from pyinfra.facts.files import Sha256File
from pyinfra.facts.server import Command
from pyinfra.facts.systemd import SystemdEnabled from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd from pyinfra.operations import apt, files, pip, server, systemd
from cmdeploy.cmdeploy import Out from cmdeploy.cmdeploy import Out
from .acmetool import AcmetoolDeployer from .acmetool import AcmetoolDeployer
from .external.deployer import ExternalTlsDeployer
from .basedeploy import ( from .basedeploy import (
Deployer, Deployer,
Deployment, Deployment,
@@ -29,7 +30,6 @@ from .basedeploy import (
has_systemd, has_systemd,
) )
from .dovecot.deployer import DovecotDeployer from .dovecot.deployer import DovecotDeployer
from .external.deployer import ExternalTlsDeployer
from .filtermail.deployer import FiltermailDeployer from .filtermail.deployer import FiltermailDeployer
from .mtail.deployer import MtailDeployer from .mtail.deployer import MtailDeployer
from .nginx.deployer import NginxDeployer from .nginx.deployer import NginxDeployer
@@ -123,6 +123,7 @@ def _install_remote_venv_with_chatmaild() -> None:
def _configure_remote_venv_with_chatmaild(config) -> None: def _configure_remote_venv_with_chatmaild(config) -> None:
remote_base_dir = "/usr/local/lib/chatmaild" remote_base_dir = "/usr/local/lib/chatmaild"
remote_venv_dir = f"{remote_base_dir}/venv"
remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini" remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
root_owned = dict(user="root", group="root", mode="644") root_owned = dict(user="root", group="root", mode="644")
@@ -133,13 +134,16 @@ def _configure_remote_venv_with_chatmaild(config) -> None:
**root_owned, **root_owned,
) )
files.file( files.template(
path="/etc/cron.d/chatmail-metrics", src=get_resource("metrics.cron.j2"),
present=False, dest="/etc/cron.d/chatmail-metrics",
) user="root",
files.file( group="root",
path="/var/www/html/metrics", mode="644",
present=False, config={
"mailboxes_dir": config.mailboxes_dir,
"execpath": f"{remote_venv_dir}/bin/chatmail-metrics",
},
) )
@@ -267,9 +271,6 @@ class WebsiteDeployer(Deployer):
# if www_folder is a hugo page, build it # if www_folder is a hugo page, build it
if build_dir: if build_dir:
www_path = build_webpages(src_dir, build_dir, self.config) www_path = build_webpages(src_dir, build_dir, self.config)
            if www_path is None:
                logger.warning("Web page build failed, skipping website deployment")
                return
# if it is not a hugo page, upload it as is # if it is not a hugo page, upload it as is
files.rsync( files.rsync(
f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"] f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"]
@@ -478,14 +479,6 @@ class ChatmailDeployer(Deployer):
self.mail_domain = mail_domain self.mail_domain = mail_domain
def install(self): def install(self):
files.put(
            name="Disable installing recommended packages globally",
            src=BytesIO(b'APT::Install-Recommends "false";\n'),
            dest="/etc/apt/apt.conf.d/00InstallRecommends",
            user="root",
            group="root",
            mode="644",
        )
apt.update(name="apt update", cache_time=24 * 3600) apt.update(name="apt update", cache_time=24 * 3600)
apt.upgrade(name="upgrade apt packages", auto_remove=True) apt.upgrade(name="upgrade apt packages", auto_remove=True)
@@ -593,19 +586,15 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
         Out().red(f"Deploy failed: mtail_address {config.mtail_address} is not available (VPN up?).\n")
         exit(1)

-    if host.get_fact(Command, "systemd-detect-virt -c || true") == "none":
+    if not os.environ.get("CHATMAIL_NOPORTCHECK"):
         port_services = [
             (["master", "smtpd"], 25),
             ("unbound", 53),
         ]
         if config.tls_cert_mode == "acme":
-            port_services.append(("acmetool", 402))
+            port_services.append(("acmetool", 80))
         port_services += [
             (["imap-login", "dovecot"], 143),
-            # acmetool previously listened on port 80,
-            # so don't complain during upgrade that moved it to port 402
-            # and gave the port to nginx.
-            (["acmetool", "nginx"], 80),
             ("nginx", 443),
             (["master", "smtpd"], 465),
             (["master", "smtpd"], 587),

View File

@@ -1,30 +1,19 @@
-import urllib.request
+import os

 from chatmaild.config import Config
 from pyinfra import host
-from pyinfra.facts.deb import DebPackages
-from pyinfra.facts.server import Arch, Command, Sysctl
+from pyinfra.facts.server import Arch, Sysctl
+from pyinfra.facts.systemd import SystemdEnabled
 from pyinfra.operations import apt, files, server, systemd

 from cmdeploy.basedeploy import (
     Deployer,
     activate_remote_units,
-    blocked_service_startup,
     configure_remote_units,
     get_resource,
+    has_systemd,
 )

-DOVECOT_VERSION = "2.3.21+dfsg1-3"
-DOVECOT_SHA256 = {
-    ("core", "amd64"): "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d",
-    ("core", "arm64"): "e7548e8a82929722e973629ecc40fcfa886894cef3db88f23535149e7f730dc9",
-    ("imapd", "amd64"): "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86",
-    ("imapd", "arm64"): "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f",
-    ("lmtpd", "amd64"): "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab",
-    ("lmtpd", "arm64"): "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f",
-}

 class DovecotDeployer(Deployer):
     daemon_reload = False
@@ -36,10 +25,11 @@ class DovecotDeployer(Deployer):
     def install(self):
         arch = host.get_fact(Arch)
-        with blocked_service_startup():
-            _install_dovecot_package("core", arch)
-            _install_dovecot_package("imapd", arch)
-            _install_dovecot_package("lmtpd", arch)
+        if has_systemd() and "dovecot.service" in host.get_fact(SystemdEnabled):
+            return  # already installed and running
+        _install_dovecot_package("core", arch)
+        _install_dovecot_package("imapd", arch)
+        _install_dovecot_package("lmtpd", arch)

     def configure(self):
         configure_remote_units(self.config.mail_domain, self.units)
@@ -51,9 +41,7 @@ class DovecotDeployer(Deployer):
         restart = False if self.disable_mail else self.need_restart
         systemd.service(
-            name="Disable dovecot for now"
-            if self.disable_mail
-            else "Start and enable Dovecot",
+            name="Disable dovecot for now" if self.disable_mail else "Start and enable Dovecot",
             service="dovecot.service",
             running=False if self.disable_mail else True,
             enabled=False if self.disable_mail else True,
@@ -63,45 +51,38 @@ class DovecotDeployer(Deployer):
         self.need_restart = False


-def _pick_url(primary, fallback):
-    try:
-        req = urllib.request.Request(primary, method="HEAD")
-        urllib.request.urlopen(req, timeout=10)
-        return primary
-    except Exception:
-        return fallback
-
-
 def _install_dovecot_package(package: str, arch: str):
     arch = "amd64" if arch == "x86_64" else arch
     arch = "arm64" if arch == "aarch64" else arch
+    url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
+    deb_filename = "/root/" + url.split("/")[-1]

-    pkg_name = f"dovecot-{package}"
-    sha256 = DOVECOT_SHA256.get((package, arch))
-    if sha256 is None:
-        apt.packages(packages=[pkg_name])
-        return
-
-    installed_versions = host.get_fact(DebPackages).get(pkg_name, [])
-    if DOVECOT_VERSION in installed_versions:
-        return
-
-    url_version = DOVECOT_VERSION.replace("+", "%2B")
-    deb_base = f"{pkg_name}_{url_version}_{arch}.deb"
-    primary_url = f"https://download.delta.chat/dovecot/{deb_base}"
-    fallback_url = f"https://github.com/chatmail/dovecot/releases/download/upstream%2F{url_version}/{deb_base}"
-    url = _pick_url(primary_url, fallback_url)
-    deb_filename = f"/root/{deb_base}"
+    match (package, arch):
+        case ("core", "amd64"):
+            sha256 = "dd060706f52a306fa863d874717210b9fe10536c824afe1790eec247ded5b27d"
+        case ("core", "arm64"):
+            sha256 = "e7548e8a82929722e973629ecc40fcfa886894cef3db88f23535149e7f730dc9"
+        case ("imapd", "amd64"):
+            sha256 = "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86"
+        case ("imapd", "arm64"):
+            sha256 = "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f"
+        case ("lmtpd", "amd64"):
+            sha256 = "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab"
+        case ("lmtpd", "arm64"):
+            sha256 = "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f"
+        case _:
+            apt.packages(packages=[f"dovecot-{package}"])
+            return

     files.download(
-        name=f"Download {pkg_name}",
+        name=f"Download dovecot-{package}",
         src=url,
         dest=deb_filename,
         sha256sum=sha256,
         cache_time=60 * 60 * 24 * 365 * 10,  # never redownload the package
     )
-    apt.deb(name=f"Install {pkg_name}", src=deb_filename)
+    apt.deb(name=f"Install dovecot-{package}", src=deb_filename)


 def _configure_dovecot(config: Config, debug: bool = False) -> (bool, bool):
@@ -139,25 +120,19 @@ def _configure_dovecot(config: Config, debug: bool = False) -> (bool, bool):
     # as per https://doc.dovecot.org/2.3/configuration_manual/os/
     # it is recommended to set the following inotify limits
-    can_modify = host.get_fact(Command, "systemd-detect-virt -c || true") == "none"
-    for name in ("max_user_instances", "max_user_watches"):
-        key = f"fs.inotify.{name}"
-        value = host.get_fact(Sysctl)[key]
-        if value > 65534:
-            continue
-        if not can_modify:
-            print(
-                "\n!!!! refusing to attempt sysctl setting in shared-kernel containers\n"
-                f"!!!! dovecot: sysctl {key!r}={value}, should be >65534 for production setups\n"
-                "!!!!"
-            )
-            continue
-        server.sysctl(
-            name=f"Change {key}",
-            key=key,
-            value=65535,
-            persist=True,
-        )
+    if not os.environ.get("CHATMAIL_NOSYSCTL"):
+        for name in ("max_user_instances", "max_user_watches"):
+            key = f"fs.inotify.{name}"
+            if host.get_fact(Sysctl)[key] > 65535:
+                # Skip updating limits if already sufficient
+                # (enables running in incus containers where sysctl readonly)
+                continue
+            server.sysctl(
+                name=f"Change {key}",
+                key=key,
+                value=65535,
+                persist=True,
+            )

     timezone_env = files.line(
         name="Set TZ environment variable",

View File

@@ -1,8 +1,4 @@
-import io
-
-from pyinfra import host
-from pyinfra.facts.files import File
-from pyinfra.operations import files, systemd
+from pyinfra.operations import files, server, systemd

 from cmdeploy.basedeploy import Deployer, get_resource
@@ -21,18 +17,19 @@ class ExternalTlsDeployer(Deployer):
         self.key_path = key_path

     def configure(self):
-        # Verify cert and key exist on the remote host using pyinfra facts.
-        for path in (self.cert_path, self.key_path):
-            info = host.get_fact(File, path=path)
-            if info is None:
-                raise Exception(f"External TLS file not found on server: {path}")
+        server.shell(
+            name="Verify external TLS certificate and key exist",
+            commands=[
+                f"test -f {self.cert_path} && test -f {self.key_path}",
+            ],
+        )

         # Deploy the .path unit (templated with the cert path).
-        # pkg=__package__ is required here because the resource files
-        # live in cmdeploy.external, not the default cmdeploy package.
         source = get_resource("tls-cert-reload.path.f", pkg=__package__)
         content = source.read_text().format(cert_path=self.cert_path).encode()
+        import io
+
         path_unit = files.put(
             name="Upload tls-cert-reload.path",
             src=io.BytesIO(content),
@@ -63,5 +60,10 @@ class ExternalTlsDeployer(Deployer):
             restarted=self.need_restart,
             daemon_reload=self.need_restart,
         )
-        # No explicit reload needed here: dovecot/nginx read the cert
-        # on startup, and the .path watcher handles live changes.
+        # Always trigger a reload so services pick up the current cert.
+        # The path unit handles future changes via inotify.
+        server.shell(
+            name="Reload TLS services for current certificate",
+            commands=["systemctl start tls-cert-reload.service"],
+        )
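The deployer fills `tls-cert-reload.path.f` with plain `str.format` before uploading it. That templating step can be sketched as follows; the template text below is an assumed minimal example of such a systemd `.path` unit, not the shipped resource file:

```python
# Assumed minimal template, illustrating the {cert_path} placeholder that
# ExternalTlsDeployer.configure() fills via source.read_text().format(...).
TEMPLATE = """\
[Unit]
Description=Watch TLS certificate for changes

[Path]
PathChanged={cert_path}

[Install]
WantedBy=multi-user.target
"""

def render_path_unit(cert_path: str) -> str:
    """Render the .path unit text for a given certificate file."""
    return TEMPLATE.format(cert_path=cert_path)
```

`PathChanged=` fires when the watched file is changed and closed, which matches the renew-then-replace pattern of external ACME clients.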

View File

@@ -1,10 +1,6 @@
 # Watch the TLS certificate file for changes.
 # When the cert is updated (e.g. renewed by an external process),
-# this triggers tls-cert-reload.service to reload the affected services.
-#
-# NOTE: changes to the certificates are not detected if they cross bind-mount boundaries.
-# After cert renewal, you must then trigger the reload explicitly:
-#     systemctl start tls-cert-reload.service
+# this triggers tls-cert-reload.service to restart the affected services.

 [Unit]
 Description=Watch TLS certificate for changes

View File

@@ -11,5 +11,5 @@ Description=Reload TLS services after certificate change

 [Service]
 Type=oneshot
-ExecStart=/bin/systemctl try-reload-or-restart dovecot
-ExecStart=/bin/systemctl try-reload-or-restart nginx
+ExecStart=/bin/systemctl reload dovecot
+ExecStart=/bin/systemctl reload nginx

View File

@@ -14,10 +14,10 @@ class FiltermailDeployer(Deployer):
     def install(self):
         arch = host.get_fact(facts.server.Arch)
-        url = f"https://github.com/chatmail/filtermail/releases/download/v0.6.0/filtermail-{arch}"
+        url = f"https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-{arch}"
         sha256sum = {
-            "x86_64": "3fd8b18282252c75a5bbfa603d8c1b65f6563e5e920bddf3e64e451b7cdb43ce",
-            "aarch64": "2bd191de205f7fd60158dd8e3516ab7e3efb14627696f3d7dc186bdcd9e10a43",
+            "x86_64": "f14a31323ae2dad3b59d3fdafcde507521da2f951a9478cd1f2fe2b4463df71d",
+            "aarch64": "933770d75046c4fd7084ce8d43f905f8748333426ad839154f0fc654755ef09f",
         }[arch]
         self.need_restart |= files.download(
             name="Download filtermail",

View File

@@ -0,0 +1 @@
+*/5 * * * * root {{ config.execpath }} {{ config.mailboxes_dir }} >/var/www/html/metrics

View File

@@ -54,7 +54,7 @@ http {
     include /etc/nginx/mime.types;
     default_type application/octet-stream;

-    ssl_protocols TLSv1.2 TLSv1.3;
+    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
     ssl_prefer_server_ciphers on;
     ssl_certificate {{ config.tls_cert_path }};
     ssl_certificate_key {{ config.tls_key_path }};
@@ -79,6 +79,10 @@ http {
             try_files $uri $uri/ =404;
         }

+        location /metrics {
+            default_type text/plain;
+        }
+
         location /new {
 {% if config.tls_cert_mode != "self" %}
             if ($request_method = GET) {
@@ -141,25 +145,4 @@ http {
         return 301 $scheme://{{ config.mail_domain }}$request_uri;
         access_log syslog:server=unix:/dev/log,facility=local7;
     }
-
-    server {
-        listen 80;
-{% if not disable_ipv6 %}
-        listen [::]:80;
-{% endif %}
-
-{% if config.tls_cert_mode == "acme" %}
-        location /.well-known/acme-challenge/ {
-            proxy_pass http://acmetool;
-        }
-{% endif %}
-
-        return 301 https://$host$request_uri;
-    }
-
-{% if config.tls_cert_mode == "acme" %}
-    upstream acmetool {
-        server 127.0.0.1:402;
-    }
-{% endif %}
 }

View File

@@ -37,15 +37,21 @@ class OpendkimDeployer(Deployer):
         )
         need_restart |= main_config.changed

-        screen_script = files.file(
-            path="/etc/opendkim/screen.lua",
-            present=False,
+        screen_script = files.put(
+            src=get_resource("opendkim/screen.lua"),
+            dest="/etc/opendkim/screen.lua",
+            user="root",
+            group="root",
+            mode="644",
         )
         need_restart |= screen_script.changed

-        final_script = files.file(
-            path="/etc/opendkim/final.lua",
-            present=False,
+        final_script = files.put(
+            src=get_resource("opendkim/final.lua"),
+            dest="/etc/opendkim/final.lua",
+            user="root",
+            group="root",
+            mode="644",
         )
         need_restart |= final_script.changed
@@ -103,13 +109,6 @@ class OpendkimDeployer(Deployer):
         )
         need_restart |= service_file.changed

-        files.file(
-            name="chown opendkim: /etc/dkimkeys/opendkim.private",
-            path="/etc/dkimkeys/opendkim.private",
-            user="opendkim",
-            group="opendkim",
-        )
-
         self.need_restart = need_restart

     def activate(self):

View File

@@ -0,0 +1,42 @@
+mtaname = odkim.get_mtasymbol(ctx, "{daemon_name}")
+if mtaname == "ORIGINATING" then
+    -- Outgoing message will be signed,
+    -- no need to look for signatures.
+    return nil
+end
+
+nsigs = odkim.get_sigcount(ctx)
+if nsigs == nil then
+    return nil
+end
+
+local valid = false
+local error_msg = "No valid DKIM signature found."
+for i = 1, nsigs do
+    sig = odkim.get_sighandle(ctx, i - 1)
+    sigres = odkim.sig_result(sig)
+
+    -- All signatures that do not correspond to From:
+    -- were ignored in screen.lua and return sigres -1.
+    --
+    -- Any valid signature that was not ignored like this
+    -- means the message is acceptable.
+    if sigres == 0 then
+        valid = true
+    else
+        error_msg = "DKIM signature is invalid, error code " .. tostring(sigres) .. ", search https://github.com/trusteddomainproject/OpenDKIM/blob/master/libopendkim/dkim.h#L108"
+    end
+end
+
+if valid then
+    -- Strip all DKIM-Signature headers after successful validation.
+    -- Delete in reverse order to avoid index shifting.
+    for i = nsigs, 1, -1 do
+        odkim.del_header(ctx, "DKIM-Signature", i)
+    end
+else
+    odkim.set_reply(ctx, "554", "5.7.1", error_msg)
+    odkim.set_result(ctx, SMFIS_REJECT)
+end
+
+return nil
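The accept/reject decision in final.lua can be sketched in Python for clarity. `evaluate_signatures` and `strip_headers` are illustrative names, `-1` stands for a signature that screen.lua marked as ignored, and `0` for a verified one:

```python
def evaluate_signatures(sig_results):
    """Accept if any non-ignored signature verified (result 0);
    otherwise produce the rejection message, as final.lua does."""
    valid = False
    error_msg = "No valid DKIM signature found."
    for sigres in sig_results:
        if sigres == 0:
            valid = True
        else:
            error_msg = f"DKIM signature is invalid, error code {sigres}"
    return valid, error_msg

def strip_headers(headers):
    """After successful validation, drop all DKIM-Signature headers,
    iterating in reverse to avoid index shifting (as in the Lua loop)."""
    for i in range(len(headers) - 1, -1, -1):
        if headers[i][0].lower() == "dkim-signature":
            del headers[i]
    return headers
```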

View File

@@ -45,6 +45,12 @@ SignHeaders *,+autocrypt,+content-type
 # Default is empty.
 OversignHeaders from,reply-to,subject,date,to,cc,resent-date,resent-from,resent-sender,resent-to,resent-cc,in-reply-to,references,list-id,list-help,list-unsubscribe,list-subscribe,list-post,list-owner,list-archive,autocrypt

+# Script to ignore signatures that do not correspond to the From: domain.
+ScreenPolicyScript /etc/opendkim/screen.lua
+
+# Script to reject mails without a valid DKIM signature.
+FinalPolicyScript /etc/opendkim/final.lua
+
 # In Debian, opendkim runs as user "opendkim". A umask of 007 is required when
 # using a local socket with MTAs that access the socket as a non-privileged
 # user (for example, Postfix). You may need to add user "postfix" to group

View File

@@ -0,0 +1,21 @@
+-- Ignore signatures that do not correspond to the From: domain.
+
+from_domain = odkim.get_fromdomain(ctx)
+if from_domain == nil then
+    return nil
+end
+
+n = odkim.get_sigcount(ctx)
+if n == nil then
+    return nil
+end
+
+for i = 1, n do
+    sig = odkim.get_sighandle(ctx, i - 1)
+    sig_domain = odkim.sig_getdomain(sig)
+    if from_domain ~= sig_domain then
+        odkim.sig_ignore(sig)
+    end
+end
+
+return nil
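The screening rule translates to a few lines of Python; `screen_signatures` is an illustrative helper, not part of the repository:

```python
def screen_signatures(from_domain, signatures):
    """Mark signatures whose d= domain differs from the From: domain as
    ignored, as screen.lua does; pass everything through if the From:
    domain could not be determined."""
    if from_domain is None:
        return signatures
    return [
        {**sig, "ignored": sig["domain"] != from_domain}
        for sig in signatures
    ]
```

Only signatures left unignored here can later count as valid in final.lua.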

View File

@@ -97,9 +97,7 @@ class PostfixDeployer(Deployer):
         server.shell(
             name="Validate postfix configuration",
             # Extract stderr and quit with error if non-zero
-            commands=[
-                """bash -c 'w=$(postconf 2>&1 >/dev/null); [[ -z "$w" ]] || { echo "$w"; false; }'"""
-            ],
+            commands=["""bash -c 'w=$(postconf 2>&1 >/dev/null); [[ -z "$w" ]] || { echo "$w"; false; }'"""],
         )
         self.need_restart = need_restart
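The validation command above runs `postconf` and treats any stderr output as a fatal configuration warning. The same pattern in Python, using `sh -c` in place of `postconf` purely for illustration:

```python
import subprocess

def warnings_from(cmd):
    """Run a command and return whatever it wrote to stderr; the caller
    treats a non-empty result as a failed validation, mirroring the
    bash one-liner in the diff above."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.stderr.strip()
```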

View File

@@ -86,6 +86,7 @@ filter unix - n n - - lmtp
 # Local SMTP server for reinjecting incoming filtered mail
 127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 100 smtpd
     -o syslog_name=postfix/reinject_incoming
+    -o smtpd_milters=unix:opendkim/opendkim.sock

 # Cleanup `Received` headers for authenticated mail
 # to avoid leaking client IP.

View File

@@ -53,7 +53,7 @@ def get_dkim_entry(mail_domain, pre_command, dkim_selector):
             print=log_progress,
         )
     except CalledProcessError:
-        return None, None
+        return
     dkim_value_raw = f"v=DKIM1;k=rsa;p={dkim_pubkey};s=email;t=s"
     dkim_value = '" "'.join(re.findall(".{1,255}", dkim_value_raw))
     web_dkim_value = "".join(re.findall(".{1,255}", dkim_value_raw))
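DNS TXT records carry at most 255 bytes per character-string, which is why `get_dkim_entry` splits the DKIM record into quoted chunks with `re.findall(".{1,255}", ...)`. A standalone sketch of that chunking (`split_dkim_txt` is an illustrative name):

```python
import re

def split_dkim_txt(value: str) -> str:
    """Split a long DKIM record into 255-character chunks and join them
    with the '" "' separator used inside a DNS zone-file TXT value."""
    chunks = re.findall(".{1,255}", value)
    return '" "'.join(chunks)
```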

View File

@@ -40,5 +40,5 @@ def dovecot_recalc_quota(user):
     #
     for line in output.split("\n"):
         parts = line.split()
-        if len(parts) >= 6 and parts[2] == "STORAGE":
+        if parts[2] == "STORAGE":
             return dict(value=int(parts[3]), limit=int(parts[4]), percent=int(parts[5]))
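Note that the right-hand version drops the `len(parts) >= 6` guard; on a blank line in the `doveadm quota get` output, `parts[2]` would then raise an IndexError. The guarded parse, sketched standalone (`parse_quota` is an illustrative name, and the sample output below is assumed):

```python
def parse_quota(output: str):
    """Parse doveadm-quota-style output; the length guard keeps header
    and blank lines from raising IndexError."""
    for line in output.split("\n"):
        parts = line.split()
        if len(parts) >= 6 and parts[2] == "STORAGE":
            return dict(value=int(parts[3]), limit=int(parts[4]), percent=int(parts[5]))
```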

View File

@@ -4,7 +4,7 @@ Description=Chatmail dict proxy for IMAP METADATA
 [Service]
 ExecStart={execpath} /run/chatmail-metadata/metadata.socket {config_path}
 Restart=always
-RestartSec=5
+RestartSec=30
 User=vmail
 RuntimeDirectory=chatmail-metadata
 UMask=0077

View File

@@ -85,10 +85,9 @@ class SSHExec:


 class LocalExec:
-    FuncError = FuncError
-
-    def __init__(self, verbose=False):
+    def __init__(self, verbose=False, docker=False):
         self.verbose = verbose
+        self.docker = docker

     def __call__(self, call, kwargs=None, log_callback=None):
         if kwargs is None:

@@ -96,15 +95,11 @@ class LocalExec:
         return call(**kwargs)

     def logged(self, call, kwargs: dict):
-        title = call.__doc__
-        if not title:
-            title = call.__name__
         where = "locally"
+        if self.docker:
+            if call == remote.rdns.perform_initial_checks:
+                kwargs["pre_command"] = "docker exec chatmail "
+            where = "in docker"
         if self.verbose:
-            print_stderr(f"Running {where}: {title}(**{kwargs})")
-            return self(call, kwargs, log_callback=print_stderr)
-        else:
-            print_stderr(title, end="")
-            res = self(call, kwargs, log_callback=remote.rshell.log_progress)
-            print_stderr()
-            return res
+            print(f"Running {where}: {call.__name__}(**{kwargs})")
+        return call(**kwargs)

View File

@@ -41,9 +41,9 @@ class TestDC:
         def dc_ping_pong():
             chat.send_text("ping")
-            msg = ac2.wait_for_incoming_msg()
-            msg.get_snapshot().chat.send_text("pong")
-            ac1.wait_for_incoming_msg()
+            msg = ac2._evtracker.wait_next_incoming_message()
+            msg.chat.send_text("pong")
+            ac1._evtracker.wait_next_incoming_message()

         benchmark(dc_ping_pong, 5)

@@ -55,6 +55,6 @@ class TestDC:
         for i in range(10):
             chat.send_text(f"hello {i}")
         for i in range(10):
-            ac2.wait_for_incoming_msg()
+            ac2._evtracker.wait_next_incoming_message()

-        benchmark(dc_send_10_receive_10, 5, cooldown="auto")
+        benchmark(dc_send_10_receive_10, 5)

View File

@@ -89,9 +89,7 @@ def test_concurrent_logins_same_account(
     assert login_results.get()


-def test_no_vrfy(cmfactory, chatmail_config):
-    ac = cmfactory.get_online_account()
-    addr = ac.get_config("addr")
+def test_no_vrfy(chatmail_config):
     domain = chatmail_config.mail_domain
     s = smtplib.SMTP(domain)

@@ -100,7 +98,7 @@ def test_no_vrfy(cmfactory, chatmail_config):
     s.putcmd("vrfy", f"wrongaddress@{chatmail_config.mail_domain}")
     result = s.getreply()
     print(result)
-    s.putcmd("vrfy", addr)
+    s.putcmd("vrfy", f"echo@{chatmail_config.mail_domain}")
     result2 = s.getreply()
     print(result2)
     assert result[0] == result2[0] == 252

View File

@@ -7,13 +7,13 @@ import time

 import pytest

 from cmdeploy import remote
-from cmdeploy.cmdeploy import get_sshexec
+from cmdeploy.sshexec import SSHExec


 class TestSSHExecutor:
     @pytest.fixture(scope="class")
     def sshexec(self, sshdomain):
-        return get_sshexec(sshdomain)
+        return SSHExec(sshdomain)

     def test_ls(self, sshexec):
         out = sshexec(call=remote.rdns.shell, kwargs=dict(command="ls"))
@@ -27,7 +27,6 @@ class TestSSHExecutor:
         assert res["A"] or res["AAAA"]

     def test_logged(self, sshexec, maildomain, capsys):
-        sshexec.verbose = False
         sshexec.logged(
             remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=maildomain)
         )
@@ -53,8 +52,6 @@ class TestSSHExecutor:
                 remote.rdns.perform_initial_checks,
                 kwargs=dict(mail_domain=None),
             )
-        except AssertionError:
-            pass
         except sshexec.FuncError as e:
             assert "rdns.py" in str(e)
             assert "AssertionError" in str(e)
@@ -86,8 +83,10 @@ def test_remote(remote, imap_or_smtp):

 def test_use_two_chatmailservers(cmfactory, maildomain2):
-    ac1 = cmfactory.get_online_account()
-    ac2 = cmfactory.get_online_account(domain=maildomain2)
+    ac1 = cmfactory.new_online_configuring_account(cache=False)
+    cmfactory.switch_maildomain(maildomain2)
+    ac2 = cmfactory.new_online_configuring_account(cache=False)
+    cmfactory.bring_accounts_online()
     cmfactory.get_accepted_chat(ac1, ac2)
     domain1 = ac1.get_config("addr").split("@")[1]
     domain2 = ac2.get_config("addr").split("@")[1]
@@ -147,7 +146,7 @@ def test_reject_missing_dkim(cmsetup, maildata, from_addr):
     conn.starttls()
     with conn as s:
-        with pytest.raises(smtplib.SMTPDataError, match="No DKIM signature found"):
+        with pytest.raises(smtplib.SMTPDataError, match="No valid DKIM signature"):
             s.sendmail(from_addr=from_addr, to_addrs=recipient.addr, msg=msg)
@@ -219,7 +218,7 @@ def test_expunged(remote, chatmail_config):
     ]
     outdated_days = int(chatmail_config.delete_large_after) + 1
     find_cmds.append(
-        f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
+        "find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
     )
     for cmd in find_cmds:
         for line in remote.iter_output(cmd):

View File

@@ -6,8 +6,8 @@ import imap_tools

 import pytest
 import requests

-from cmdeploy.cmdeploy import get_sshexec
 from cmdeploy.remote import rshell
+from cmdeploy.sshexec import SSHExec


 @pytest.fixture
@@ -27,7 +27,6 @@ class TestMetadataTokens:
     def test_set_get_metadata(self, imap_mailbox):
         "set and get metadata token for an account"
-        time.sleep(5)  # make sure Metadata service had a chance to restart
         client = imap_mailbox.client
         client.send(b'a01 SETMETADATA INBOX (/private/devicetoken "1111" )\n')
         res = client.readline()
@@ -63,8 +62,8 @@ class TestEndToEndDeltaChat:
         chat.send_text("message0")

         lp.sec("wait for ac2 to receive message")
-        msg2 = ac2.wait_for_incoming_msg()
-        assert msg2.get_snapshot().text == "message0"
+        msg2 = ac2._evtracker.wait_next_incoming_message()
+        assert msg2.text == "message0"

     def test_exceed_quota(
         self, cmfactory, lp, tmpdir, remote, chatmail_config, sshdomain
@@ -92,41 +91,45 @@ class TestEndToEndDeltaChat:
         lp.sec(f"filling remote inbox for {user}")
         fn = f"7743102289.M843172P2484002.c20,S={quota},W=2398:2,"
         path = chatmail_config.mailboxes_dir.joinpath(user, "cur", fn)
-        sshexec = get_sshexec(sshdomain)
+        sshexec = SSHExec(sshdomain)
         sshexec(call=rshell.write_numbytes, kwargs=dict(path=str(path), num=120))
         res = sshexec(call=rshell.dovecot_recalc_quota, kwargs=dict(user=user))
         assert res["percent"] >= 100

         lp.sec("ac2: check quota is triggered")

-        def send_hello():
-            chat.send_text("hello")
-
-        for line in remote.iter_output(
-            "journalctl -n1 -f -u dovecot", ready=send_hello
-        ):
+        starting = True
+        for line in remote.iter_output("journalctl -n0 -f -u dovecot"):
+            if starting:
+                chat.send_text("hello")
+                starting = False
             if user not in line:
+                # print(line)
                 continue
             if "quota exceeded" in line:
                 return

     def test_securejoin(self, cmfactory, lp, maildomain2):
-        ac1 = cmfactory.get_online_account()
-        ac2 = cmfactory.get_online_account(domain=maildomain2)
+        ac1 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.switch_maildomain(maildomain2)
+        ac2 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.bring_accounts_online()

         lp.sec("ac1: create QR code and let ac2 scan it, starting the securejoin")
-        qr = ac1.get_qr_code()
+        qr = ac1.get_setup_contact_qr()

         lp.sec("ac2: start QR-code based setup contact protocol")
-        ch = ac2.secure_join(qr)
+        ch = ac2.qr_setup_contact(qr)
         assert ch.id >= 10
-        ac1.wait_for_securejoin_inviter_success()
+        ac1._evtracker.wait_securejoin_inviter_progress(1000)

     def test_dkim_header_stripped(self, cmfactory, maildomain2, lp, imap_mailbox):
         """Test that if a DC address receives a message, it has no
         DKIM-Signature and Authentication-Results headers."""
-        ac1 = cmfactory.get_online_account()
-        ac2 = cmfactory.get_online_account(domain=maildomain2)
+        ac1 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.switch_maildomain(maildomain2)
+        ac2 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.bring_accounts_online()

         chat = cmfactory.get_accepted_chat(ac1, imap_mailbox.dc_ac)
         chat.send_text("message0")
         chat2 = cmfactory.get_accepted_chat(ac2, imap_mailbox.dc_ac)
@@ -143,28 +146,29 @@ class TestEndToEndDeltaChat:
         assert "dkim-signature" not in msg.headers

     def test_read_receipts_between_instances(self, cmfactory, lp, maildomain2):
-        ac1 = cmfactory.get_online_account()
-        ac2 = cmfactory.get_online_account(domain=maildomain2)
+        ac1 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.switch_maildomain(maildomain2)
+        ac2 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.bring_accounts_online()

         lp.sec("setup encrypted comms between ac1 and ac2 on different instances")
-        qr = ac1.get_qr_code()
-        ch = ac2.secure_join(qr)
+        qr = ac1.get_setup_contact_qr()
+        ch = ac2.qr_setup_contact(qr)
         assert ch.id >= 10
-        ac1.wait_for_securejoin_inviter_success()
+        ac1._evtracker.wait_securejoin_inviter_progress(1000)

         lp.sec("ac1 sends a message and ac2 marks it as seen")
         chat = ac1.create_chat(ac2)
         msg = chat.send_text("hi")
-        m = ac2.wait_for_incoming_msg()
+        m = ac2._evtracker.wait_next_incoming_message()
         m.mark_seen()

         # we can only indirectly wait for mark-seen to cause an smtp-error
         lp.sec("try to wait for markseen to complete and check error states")
         deadline = time.time() + 3.1
         while time.time() < deadline:
-            m_snap = m.get_snapshot()
-            msgs = m_snap.chat.get_messages()
+            msgs = m.chat.get_messages()
             for msg in msgs:
-                assert "error" not in m.get_info()
+                assert "error" not in m.get_message_info()
             time.sleep(1)
@@ -176,7 +180,7 @@ def test_hide_senders_ip_address(cmfactory, ssl_context):
     chat = cmfactory.get_accepted_chat(user1, user2)
     chat.send_text("testing submission header cleanup")
-    user2.wait_for_incoming_msg()
+    user2._evtracker.wait_next_incoming_message()

     addr = user2.get_config("addr")
     host = addr.split("@")[1]
     pw = user2.get_config("mail_pw")

View File

@@ -5,11 +5,7 @@ from cmdeploy.cmdeploy import main

 def test_status_cmd(chatmail_config, capsys, request):
     os.chdir(request.config.invocation_params.dir)
-    command = ["status"]
-    if os.getenv("CHATMAIL_SSH"):
-        command.append("--ssh-host")
-        command.append(os.getenv("CHATMAIL_SSH"))
-    assert main(command) == 0
+    assert main(["status"]) == 0
     status_out = capsys.readouterr()
     print(status_out.out)

View File

@@ -1,4 +1,5 @@
 import imaplib
+import io
 import itertools
 import os
 import random
@@ -34,24 +35,17 @@ def pytest_runtest_setup(item):
 pytest.skip("skipping slow test, use --slow to run")

-def _get_chatmail_config():
-    current = Path().resolve()
-    while 1:
-        path = current.joinpath("chatmail.ini").resolve()
-        if path.exists():
-            return read_config(path), path
-        if current == current.parent:
-            break
-        current = current.parent
-    return None, None
-
-@pytest.fixture(scope="session")
-def chatmail_config(pytestconfig):
-    config, path = _get_chatmail_config()
-    if config:
-        return config
-    basedir = Path().resolve()
+@pytest.fixture(scope="session")
+def chatmail_config(pytestconfig):
+    current = basedir = Path().resolve()
+    while 1:
+        path = current.joinpath("chatmail.ini").resolve()
+        if path.exists():
+            return read_config(path)
+        if current == current.parent:
+            break
+        current = current.parent
     pytest.skip(f"no chatmail.ini file found in {basedir} or parent dirs")
@@ -79,17 +73,10 @@ def sshdomain2(maildomain2):
 def pytest_report_header():
-    config, path = _get_chatmail_config()
-    domain2 = os.environ.get("CHATMAIL_DOMAIN2", "NOT SET")
-    domain = config.mail_domain if config else "NOT SET"
-    path = path if path else "NOT SET"
-    lines = [
-        f"chatmail.ini {domain} location: {path}",
-        f"chatmail2: {domain2}",
-    ]
-    sep = "-" * max(map(len, lines))
-    return [sep, *lines, sep]
+    domain = os.environ.get("CHATMAIL_DOMAIN")
+    if domain:
+        text = f"chatmail test instance: {domain}"
+        return ["-" * len(text), text, "-" * len(text)]
@pytest.fixture
@@ -104,22 +91,15 @@ def cm_data(request):
 @pytest.fixture
-def benchmark(request, chatmail_config):
-    def bench(func, num, name=None, reportfunc=None, cooldown=0.0):
+def benchmark(request):
+    def bench(func, num, name=None, reportfunc=None):
         if name is None:
             name = func.__name__
-        if cooldown == "auto":
-            per_minute = max(chatmail_config.max_user_send_per_minute, 1)
-            cooldown = chatmail_config.max_user_send_burst_size * 60 / per_minute
         durations = []
         for i in range(num):
             now = time.time()
             func()
             durations.append(time.time() - now)
-            if cooldown > 0 and i + 1 < num:
-                # Keep post-run cooldown out of measured benchmark duration.
-                time.sleep(cooldown)
         durations.sort()
         request.config._benchresults[name] = (reportfunc, durations)
@@ -296,95 +276,79 @@ def gencreds(chatmail_config):
 #
-# Delta Chat RPC-based test support
+# Delta Chat testplugin re-use
 # use the cmfactory fixture to get chatmail instance accounts
 #
-from deltachat_rpc_client import DeltaChat, Rpc
-
-
-class ChatmailACFactory:
-    """RPC-based account factory for chatmail testing."""
-
-    def __init__(self, rpc, maildomain, gencreds, chatmail_config):
-        self.dc = DeltaChat(rpc)
-        self.rpc = rpc
-        self._maildomain = maildomain
-        self.gencreds = gencreds
-        self.chatmail_config = chatmail_config
-
-    def _make_transport(self, domain):
-        """Build a transport config dict for the given domain."""
-        addr, password = self.gencreds(domain)
-        transport = {
-            "addr": addr,
-            "password": password,
-            # Setting server explicitly skips requesting autoconfig XML,
-            # see https://datatracker.ietf.org/doc/draft-ietf-mailmaint-autoconfig/
-            "imapServer": domain,
-            "smtpServer": domain,
-        }
-        if self.chatmail_config.tls_cert_mode == "self":
-            transport["certificateChecks"] = "acceptInvalidCertificates"
-        return transport
-
-    def get_online_account(self, domain=None):
-        """Create, configure and bring online a single account."""
-        return self.get_online_accounts(1, domain)[0]
-
-    def get_online_accounts(self, num, domain=None):
-        """Create multiple online accounts in parallel."""
-        domain = domain or self._maildomain
-        futures = []
-        accounts = []
-        for _ in range(num):
-            account = self.dc.add_account()
-            future = account.add_or_update_transport.future(
-                self._make_transport(domain)
-            )
-            futures.append(future)
-            # ensure messages stay in INBOX so that they can be
-            # concurrently fetched via extra IMAP connections during tests
-            account.set_config("delete_server_after", "10")
-            accounts.append(account)
-        for future in futures:
-            future()
-        for account in accounts:
-            account.bring_online()
-        return accounts
-
-    def get_accepted_chat(self, ac1, ac2):
-        """Create a 1:1 chat between ac1 and ac2 accepted on both sides."""
-        ac2.create_chat(ac1)
-        return ac1.create_chat(ac2)
-
-
-@pytest.fixture(scope="session")
-def rpc(tmp_path_factory):
-    """Start a deltachat-rpc-server process for the test session."""
-    # NB: accounts_dir must NOT already exist as directory --
-    # core-rust only creates accounts.toml if the dir doesn't exist yet.
-    accounts_dir = str(tmp_path_factory.mktemp("dc") / "accounts")
-    rpc = Rpc(accounts_dir=accounts_dir)
-    rpc.start()
-    yield rpc
-    rpc.close()
+class ChatmailTestProcess:
+    """Provider for chatmail instance accounts as used by deltachat.testplugin.acfactory"""
+
+    def __init__(self, pytestconfig, maildomain, gencreds, chatmail_config):
+        self.pytestconfig = pytestconfig
+        self.maildomain = maildomain
+        assert "." in self.maildomain, maildomain
+        self.gencreds = gencreds
+        self.chatmail_config = chatmail_config
+        self._addr2files = {}
+
+    def get_liveconfig_producer(self):
+        while 1:
+            user, password = self.gencreds(self.maildomain)
+            config = {
+                "addr": user,
+                "mail_pw": password,
+            }
+            # speed up account configuration
+            config["mail_server"] = self.maildomain
+            config["send_server"] = self.maildomain
+            if self.chatmail_config.tls_cert_mode == "self":
+                # Accept self-signed TLS certificates
+                config["imap_certificate_checks"] = "3"
+            yield config
+
+    def cache_maybe_retrieve_configured_db_files(self, cache_addr, db_target_path):
+        pass
+
+    def cache_maybe_store_configured_db_files(self, acc):
+        pass

 @pytest.fixture
-def cmfactory(rpc, gencreds, maildomain, chatmail_config):
-    """Return a ChatmailACFactory for creating online Delta Chat accounts."""
-    return ChatmailACFactory(
-        rpc=rpc,
-        maildomain=maildomain,
-        gencreds=gencreds,
-        chatmail_config=chatmail_config,
-    )
+def cmfactory(request, gencreds, tmpdir, maildomain, chatmail_config):
+    # cloned from deltachat.testplugin.amfactory
+    pytest.importorskip("deltachat")
+    from deltachat.testplugin import ACFactory
+
+    testproc = ChatmailTestProcess(
+        request.config, maildomain, gencreds, chatmail_config
+    )
+
+    class Data:
+        def read_path(self, path):
+            return
+
+    am = ACFactory(request=request, tmpdir=tmpdir, testprocess=testproc, data=Data())
+    # Skip upstream's init_imap to prevent extra imap connections not
+    # needed for relay testing
+    am._acsetup.init_imap = lambda acc: None
+
+    # nb. a bit hacky
+    # would probably be better if deltachat's test machinery grows native support
+    def switch_maildomain(maildomain2):
+        am.testprocess.maildomain = maildomain2
+
+    am.switch_maildomain = switch_maildomain
+    yield am
+    if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
+        if testproc.pytestconfig.getoption("--extra-info"):
+            logfile = io.StringIO()
+            am.dump_imap_summary(logfile=logfile)
+            print(logfile.getvalue())
+            # request.node.add_report_section("call", "imap-server-state", s)

 @pytest.fixture
 def remote(sshdomain):
@@ -395,27 +359,19 @@ class Remote:
     def __init__(self, sshdomain):
         self.sshdomain = sshdomain

-    def iter_output(self, logcmd="", ready=None):
+    def iter_output(self, logcmd=""):
         getjournal = "journalctl -f" if not logcmd else logcmd
-        print(self.sshdomain)
-        match self.sshdomain:
-            case "@local": command = []
-            case "localhost": command = []
-            case _: command = ["ssh", f"root@{self.sshdomain}"]
-        [command.append(arg) for arg in getjournal.split()]
         self.popen = subprocess.Popen(
-            command,
+            ["ssh", f"root@{self.sshdomain}", getjournal],
             stdout=subprocess.PIPE,
         )
         while 1:
             line = self.popen.stdout.readline()
             res = line.decode().strip().lower()
-            if not res:
+            if res:
+                yield res
+            else:
                 break
-            if ready is not None:
-                ready()
-                ready = None
-            yield res
@pytest.fixture

View File

@@ -0,0 +1,362 @@
"""Setup and verify external TLS certificates for a chatmail server.
Generates a self-signed TLS certificate, uploads it to the chatmail
server via SCP, runs ``cmdeploy run``, and then probes all TLS-enabled
ports (nginx, postfix, dovecot) to verify the certificate is actually
served. After probing, checks remote service logs for errors.
Prerequisites
~~~~~~~~~~~~~
- SSH root access to the target server (same as ``cmdeploy run``)
- ``cmdeploy`` in PATH (activate the venv first)
How to run
~~~~~~~~~~
From the repository root::
# Full run: generate cert, deploy, probe ports, check services
python -m cmdeploy.tests.setup_tls_external DOMAIN
# Re-probe only (after a previous deploy)
python -m cmdeploy.tests.setup_tls_external DOMAIN \\
--skip-deploy --skip-certgen
# Override SSH host (e.g. when domain doesn't resolve to the server)
python -m cmdeploy.tests.setup_tls_external DOMAIN \\
--ssh-host staging-ipv4.testrun.org
Arguments
~~~~~~~~~
DOMAIN mail domain for the chatmail server (SSH root login must work)
Options
~~~~~~~
--skip-deploy skip ``cmdeploy run``, only probe ports
--skip-certgen skip cert generation/upload, use certs already on server
--ssh-host HOST SSH host override (defaults to DOMAIN)
"""
import argparse
import shutil
import smtplib
import socket
import ssl
import subprocess
import sys
import tempfile
import time
from pathlib import Path
# Cert paths on the remote server
REMOTE_CERT = "/etc/ssl/certs/tmp_fullchain.pem"
REMOTE_KEY = "/etc/ssl/private/tmp_privkey.pem"
# ---------------------------------------------------------------------------
# Config generation
# ---------------------------------------------------------------------------
def generate_config(domain: str, config_dir: Path) -> Path:
"""Generate a chatmail.ini with tls_external_cert_and_key for *domain*."""
from chatmaild.config import write_initial_config
ini_path = config_dir / "chatmail.ini"
write_initial_config(
ini_path,
domain,
overrides={
"tls_external_cert_and_key": f"{REMOTE_CERT} {REMOTE_KEY}",
},
)
print(f"[+] Generated chatmail.ini for {domain} in {config_dir}")
return ini_path
# ---------------------------------------------------------------------------
# Certificate generation
# ---------------------------------------------------------------------------
def generate_cert(domain: str, cert_dir: Path) -> tuple:
"""Generate a self-signed TLS cert+key for *domain* with proper SANs."""
from cmdeploy.selfsigned.deployer import openssl_selfsigned_args
cert_path = cert_dir / "fullchain.pem"
key_path = cert_dir / "privkey.pem"
subprocess.check_call(openssl_selfsigned_args(domain, cert_path, key_path, days=30))
print(f"[+] Generated cert for {domain} in {cert_dir}")
return cert_path, key_path
# ---------------------------------------------------------------------------
# Upload certs to remote server
# ---------------------------------------------------------------------------
def upload_certs(
ssh_host: str,
cert_path: Path,
key_path: Path,
) -> None:
"""SCP cert and key to the remote server."""
subprocess.check_call([
"scp", str(cert_path), f"root@{ssh_host}:{REMOTE_CERT}",
])
subprocess.check_call([
"scp", str(key_path), f"root@{ssh_host}:{REMOTE_KEY}",
])
# Ensure cert is world-readable and key is readable by ssl-cert group
# (dovecot/postfix/nginx need to read these files)
subprocess.check_call([
"ssh", f"root@{ssh_host}",
f"chmod 644 {REMOTE_CERT} && chmod 640 {REMOTE_KEY}"
f" && chgrp ssl-cert {REMOTE_KEY}",
])
print(f"[+] Uploaded cert/key to {ssh_host}")
# ---------------------------------------------------------------------------
# Deploy
# ---------------------------------------------------------------------------
def run_deploy(ini_path: Path) -> None:
"""Run ``cmdeploy run --skip-dns-check --config <ini>``."""
cmd = ["cmdeploy", "run", "--config", str(ini_path), "--skip-dns-check"]
print(f"[+] Running: {' '.join(cmd)}")
subprocess.check_call(cmd)
print("[+] Deploy completed successfully")
# ---------------------------------------------------------------------------
# TLS port probing
# ---------------------------------------------------------------------------
def get_peer_cert_binary(host: str, port: int) -> bytes:
"""Connect to host:port with TLS and return the DER-encoded peer cert."""
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with socket.create_connection((host, port), timeout=15) as sock:
with ctx.wrap_socket(sock, server_hostname=host) as ssock:
return ssock.getpeercert(binary_form=True)
def get_smtp_starttls_cert_binary(host: str, port: int = 587) -> bytes:
"""Connect via SMTP STARTTLS and return the DER cert."""
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with smtplib.SMTP(host, port, timeout=15) as smtp:
smtp.starttls(context=ctx)
return smtp.sock.getpeercert(binary_form=True)
def check_cert_matches(
label: str, served_der: bytes, expected_der: bytes,
) -> bool:
"""Compare served DER cert against the expected cert."""
if served_der == expected_der:
print(f" [OK] {label}: certificate matches")
return True
else:
print(f" [FAIL] {label}: certificate does NOT match")
return False
def load_cert_der(cert_pem_path: Path) -> bytes:
"""Load a PEM cert file and return its DER encoding."""
pem_text = cert_pem_path.read_text()
start = pem_text.index("-----BEGIN CERTIFICATE-----")
end = pem_text.index("-----END CERTIFICATE-----") + len(
"-----END CERTIFICATE-----"
)
return ssl.PEM_cert_to_DER_cert(pem_text[start:end])
def probe_all_ports(host: str, expected_cert_der: bytes) -> bool:
"""Probe TLS ports and verify the served certificate matches.
Checks ports 993 (IMAP), 465 (SMTPS), 587 (STARTTLS), and 443
(nginx stream). Port 8443 is skipped as nginx binds it to
localhost behind the stream proxy on 443.
"""
print(f"\n[+] Probing TLS ports on {host}...")
all_ok = True
for label, port in [
("IMAP/TLS (993)", 993),
("SMTP/TLS (465)", 465),
]:
try:
served = get_peer_cert_binary(host, port)
if not check_cert_matches(label, served, expected_cert_der):
all_ok = False
except Exception as e:
print(f" [FAIL] {label}: connection failed: {e}")
all_ok = False
# STARTTLS on port 587
try:
served = get_smtp_starttls_cert_binary(host, 587)
if not check_cert_matches("SMTP/STARTTLS (587)", served, expected_cert_der):
all_ok = False
except Exception as e:
print(f" [FAIL] SMTP/STARTTLS (587): connection failed: {e}")
all_ok = False
# Port 443 (nginx stream proxy with ALPN routing)
try:
served = get_peer_cert_binary(host, 443)
if not check_cert_matches("nginx/443 (stream)", served, expected_cert_der):
all_ok = False
except Exception as e:
print(f" [FAIL] nginx/443 (stream): connection failed: {e}")
all_ok = False
return all_ok
# ---------------------------------------------------------------------------
# Post-deploy service health checks
# ---------------------------------------------------------------------------
SERVICES = ["dovecot", "postfix", "nginx"]
def check_remote_services(ssh_host: str, since: str = "") -> bool:
"""SSH to the server and check for service failures or errors.
*since* is a ``journalctl --since`` timestamp (e.g. ``"5 min ago"``).
If empty, checks the entire boot journal.
"""
print(f"\n[+] Checking remote service health on {ssh_host}...")
all_ok = True
for svc in SERVICES:
try:
result = subprocess.run(
["ssh", f"root@{ssh_host}",
f"systemctl is-active {svc}.service"],
capture_output=True, text=True, timeout=15, check=False,
)
status = result.stdout.strip()
if status == "active":
print(f" [OK] {svc}: active")
else:
print(f" [FAIL] {svc}: {status}")
all_ok = False
except Exception as e:
print(f" [FAIL] {svc}: check failed: {e}")
all_ok = False
since_arg = f'--since="{since}"' if since else ""
print(f"\n[+] Checking journal for errors on {ssh_host}...")
for svc in SERVICES:
try:
result = subprocess.run(
["ssh", f"root@{ssh_host}",
f"journalctl -u {svc}.service {since_arg}"
f" --no-pager -p err -q"],
capture_output=True, text=True, timeout=15, check=False,
)
errors = result.stdout.strip()
if errors:
print(f" [WARN] {svc} errors in journal:")
for line in errors.splitlines()[:10]:
print(f" {line}")
all_ok = False
else:
print(f" [OK] {svc}: no errors in journal")
except Exception as e:
print(f" [FAIL] {svc}: journal check failed: {e}")
all_ok = False
return all_ok
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"domain",
help="mail domain (SSH root login must work to this host)",
)
parser.add_argument(
"--skip-deploy",
action="store_true",
help="skip cmdeploy run, only probe ports",
)
parser.add_argument(
"--skip-certgen",
action="store_true",
help="skip cert generation and upload (use existing)",
)
parser.add_argument(
"--ssh-host",
help="SSH host override (defaults to DOMAIN)",
)
args = parser.parse_args()
domain = args.domain
ssh_host = args.ssh_host or domain
print(f"[+] Domain: {domain}")
print(f"[+] SSH host: {ssh_host}")
print(f"[+] Remote cert: {REMOTE_CERT}")
print(f"[+] Remote key: {REMOTE_KEY}")
work_dir = Path(tempfile.mkdtemp(prefix="tls-external-test-"))
try:
# Generate chatmail.ini
ini_path = generate_config(domain, work_dir)
if not args.skip_certgen:
local_cert, local_key = generate_cert(domain, work_dir)
upload_certs(ssh_host, local_cert, local_key)
else:
local_cert = work_dir / "fullchain.pem"
subprocess.check_call([
"scp", f"root@{ssh_host}:{REMOTE_CERT}", str(local_cert),
])
# Record timestamp before deploy for journal filtering
deploy_start = time.strftime("%Y-%m-%d %H:%M:%S")
if not args.skip_deploy:
run_deploy(ini_path)
# Probe TLS ports
expected_der = load_cert_der(local_cert)
ports_ok = probe_all_ports(domain, expected_der)
# Check service health (only errors since deploy started)
services_ok = check_remote_services(ssh_host, since=deploy_start)
if ports_ok and services_ok:
print(
"\n[SUCCESS] All TLS port probes passed and services are healthy"
)
return 0
else:
if not ports_ok:
print("\n[FAILURE] Some TLS port probes failed", file=sys.stderr)
if not services_ok:
print(
"\n[FAILURE] Some services have errors", file=sys.stderr
)
return 1
finally:
shutil.rmtree(work_dir, ignore_errors=True)
if __name__ == "__main__":
sys.exit(main())
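The PEM-to-DER slicing that ``load_cert_der`` relies on can be exercised standalone with stdlib-only fake data (the payload below is not a real certificate; ``ssl.PEM_cert_to_DER_cert`` only base64-decodes the body, so any bytes serve for a demonstration):

```python
import base64
import ssl

def extract_first_cert_der(pem_text: str) -> bytes:
    # Same approach as load_cert_der(): slice out the first certificate
    # block from a fullchain bundle, then decode it to DER.
    start = pem_text.index("-----BEGIN CERTIFICATE-----")
    end = pem_text.index("-----END CERTIFICATE-----") + len(
        "-----END CERTIFICATE-----"
    )
    return ssl.PEM_cert_to_DER_cert(pem_text[start:end])

# Fake DER payload standing in for a real leaf certificate.
payload = b"not-a-real-certificate"
body = base64.encodebytes(payload).decode("ascii")
leaf = f"-----BEGIN CERTIFICATE-----\n{body}-----END CERTIFICATE-----\n"
bundle = leaf + leaf  # a "fullchain" bundle: leaf followed by issuer

# Only the first (leaf) certificate is extracted and decoded.
assert extract_first_cert_der(bundle) == payload
```

This mirrors why the probe compares only the leaf certificate: the served DER from ``getpeercert(binary_form=True)`` is the leaf, so the expected side must also be cut down to the first PEM block.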

View File

@@ -60,29 +60,6 @@ def mockdns(request, mockdns_base, mockdns_expected):
     return mockdns_base

-class TestGetDkimEntry:
-    def test_dkim_entry_returns_tuple_on_success(self, mockdns):
-        entry, web_entry = remote.rdns.get_dkim_entry(
-            "some.domain", "", dkim_selector="opendkim"
-        )
-        # May return None,None if openssl not available, but should never crash
-        if entry is not None:
-            assert "opendkim._domainkey.some.domain" in entry
-            assert "opendkim._domainkey.some.domain" in web_entry
-
-    def test_dkim_entry_returns_none_tuple_on_error(self, monkeypatch):
-        """CalledProcessError must return (None, None), not bare None."""
-        from subprocess import CalledProcessError
-
-        def failing_shell(command, fail_ok=False, print=print):
-            raise CalledProcessError(1, command)
-
-        monkeypatch.setattr(remote.rdns, "shell", failing_shell)
-        result = remote.rdns.get_dkim_entry("some.domain", "", dkim_selector="opendkim")
-        assert result == (None, None)
-        assert result[0] is None and result[1] is None
-
 class TestPerformInitialChecks:
     def test_perform_initial_checks_ok1(self, mockdns, mockdns_expected):
         remote_data = remote.rdns.perform_initial_checks("some.domain")

View File

@@ -1,68 +0,0 @@
from unittest.mock import patch
from cmdeploy.remote.rshell import dovecot_recalc_quota
def test_dovecot_recalc_quota_normal_output():
"""Normal doveadm output returns parsed dict."""
normal_output = (
"Quota name Type Value Limit %\n"
"User quota STORAGE 5 102400 0\n"
"User quota MESSAGE 2 - 0\n"
)
with patch("cmdeploy.remote.rshell.shell", return_value=normal_output):
result = dovecot_recalc_quota("user@example.org")
# shell is called twice (recalc + get), patch returns same for both
assert result == {"value": 5, "limit": 102400, "percent": 0}
def test_dovecot_recalc_quota_empty_output():
"""Empty doveadm output (trailing newline) must not IndexError."""
call_count = [0]
def mock_shell(cmd):
call_count[0] += 1
if "recalc" in cmd:
return ""
# quota get returns only empty lines
return "\n\n"
with patch("cmdeploy.remote.rshell.shell", side_effect=mock_shell):
result = dovecot_recalc_quota("user@example.org")
assert result is None
def test_dovecot_recalc_quota_malformed_output():
"""Malformed output with too few columns must not crash."""
call_count = [0]
def mock_shell(cmd):
call_count[0] += 1
if "recalc" in cmd:
return ""
# partial line, fewer than 6 parts
return "Quota name\nUser quota STORAGE\n"
with patch("cmdeploy.remote.rshell.shell", side_effect=mock_shell):
result = dovecot_recalc_quota("user@example.org")
assert result is None
def test_dovecot_recalc_quota_header_only():
"""Only header line, no data rows."""
call_count = [0]
def mock_shell(cmd):
call_count[0] += 1
if "recalc" in cmd:
return ""
return "Quota name Type Value Limit %\n"
with patch("cmdeploy.remote.rshell.shell", side_effect=mock_shell):
result = dovecot_recalc_quota("user@example.org")
assert result is None

View File

@@ -6,9 +6,9 @@ using Docker Compose.
 .. note::

-   - Docker support is experimental; CI builds and tests the image automatically, but please report bugs.
-   - The image wraps the cmdeploy process detailed in the :doc:`getting_started` instructions in a Debian-systemd image with r/w access to `/sys/fs`.
-   - Currently amd64-only (arm64 should work but is untested).
+   - Docker support is experimental and not yet covered by automated tests; please report bugs.
+   - This preliminary image simply wraps the cmdeploy process detailed in the :doc:`getting_started` instructions in a full Debian-systemd image with r/w access to `/sys/fs`.
+   - Currently the image has only been tested and built on amd64, though arm64 should theoretically work as well.
Setup Preparation
@@ -17,10 +17,12 @@ Setup Preparation
 We use ``chat.example.org`` as the chatmail domain in the following
 steps. Please substitute it with your own domain.

-1. Install docker and docker compose v2 (check with `docker compose version`); install it, e.g., on
+1. Install docker and docker compose v2 (check with `docker compose version`); install it, e.g., through

    - Debian 12 through the `official install instructions <https://docs.docker.com/engine/install/debian/#install-using-the-repository>`_
    - Debian 13+ with `apt install docker docker-compose`

+   If you must use v1 (EOL since 2023), use `docker-compose` in the following and modify `docker-compose.yaml` to use `privileged: true` instead of `cgroup: host`, though that will give the container all privileges.

 2. Setup the initial DNS records.
    The following is an example in the familiar BIND zone file format with
    a TTL of 1 hour (3600 seconds).
@@ -55,16 +57,16 @@ Create service directory
 Either:

-- Create a service directory and download the compose files::
+- Create a service directory, e.g., `/srv/chatmail-relay`::

     mkdir -p /srv/chatmail-relay && cd /srv/chatmail-relay
-    wget https://raw.githubusercontent.com/chatmail/relay/refs/heads/main/docker/docker-compose.yaml
-    wget https://raw.githubusercontent.com/chatmail/relay/refs/heads/main/docker/docker-compose.override.yaml.example -O docker-compose.override.yaml
+    wget https://raw.githubusercontent.com/chatmail/relay/refs/heads/main/docker-compose.yaml
+    wget https://raw.githubusercontent.com/chatmail/relay/refs/heads/main/docker-compose.override.yaml.example -O docker-compose.override.yaml

-- or clone the chatmail repo and enter the docker directory::
+- or clone the chatmail repo::

     git clone https://github.com/chatmail/relay
-    cd relay/docker
+    cd relay
Customize and start
@@ -103,23 +105,19 @@ You can test the installation with::
 You should check and extend your DNS records for better interoperability::

     # Show required DNS records
-    docker exec chatmail cmdeploy dns --ssh-host @local
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy dns --ssh-host @local

 You can check server status with::

-    docker exec chatmail cmdeploy status --ssh-host @local
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy status --ssh-host @local

 You can run some benchmarks (can also run from any machine with cmdeploy installed)::

-    docker exec chatmail cmdeploy bench
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy bench chat.example.org

 You can run the test suite with::

-    docker exec chatmail cmdeploy test --ssh-host localhost
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy test chat.example.org --ssh-host localhost
-
-You can look at logs::
-
-    docker exec chatmail journalctl -fu postfix@-
Customization
@@ -235,13 +233,11 @@ Clone the repository and build the Docker image::
 git clone https://github.com/chatmail/relay
 cd relay
-docker/build.sh
+docker compose build chatmail

 The build bakes all binaries, Python packages, and the install stage
-into the image. After building, only the ``docker/`` directory and a ``.env``
-with ``MAIL_DOMAIN`` are needed to run the container. The `build.sh` passes the
-git hash onto the docker build so it can be determined if there has been a
-change that warrants a redeploy.
+into the image. After building, only ``docker-compose.yaml`` and a ``.env`` with
+``MAIL_DOMAIN`` are needed to run the container.

 You can transfer a locally built image to your server directly (pigz is parallel `gzip`, which can be used instead as well)::

View File

@@ -235,11 +235,7 @@ The deploy will verify that both files exist on the server.
 You are responsible for certificate renewal.
 When the certificate file changes on disk,
 all relay services pick up the new certificate automatically
-via a systemd path watcher installed during deploy.
-The watcher uses inotify, which does not cross bind-mount boundaries.
-If you use such a setup, you must trigger the reload explicitly after renewal::
-
-    systemctl start tls-cert-reload.service
+(via a systemd path watcher installed during deploy).
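For example, a renewal hook along these lines could tie an external ACME client into the reload logic (a sketch only: the hook path is a certbot convention, `RENEWED_LINEAGE` is certbot's deploy-hook variable, and the target paths are placeholders from this page — adapt them to your `tls_external_cert_and_key` setting):

```shell
#!/bin/sh
# Hypothetical certbot deploy hook, e.g.
# /etc/letsencrypt/renewal-hooks/deploy/chatmail-tls
set -e
# Copy the renewed cert/key to the paths configured in chatmail.ini.
cp "$RENEWED_LINEAGE/fullchain.pem" /etc/ssl/certs/tmp_fullchain.pem
cp "$RENEWED_LINEAGE/privkey.pem"   /etc/ssl/private/tmp_privkey.pem
chmod 640 /etc/ssl/private/tmp_privkey.pem
chgrp ssl-cert /etc/ssl/private/tmp_privkey.pem
# Only needed when inotify cannot see the change (e.g. bind mounts):
systemctl start tls-cert-reload.service
```

Postfix reads certificates per TLS handshake, so the explicit reload matters mainly for dovecot and nginx.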
Migrating to a new build machine

View File

@@ -109,6 +109,10 @@ short overview of ``chatmaild`` services:
   is contacted by Dovecot when a user logs in and stores the date of
   the login.

+- `metrics <https://github.com/chatmail/relay/blob/main/chatmaild/src/chatmaild/metrics.py>`_
+  collects some metrics and displays them at
+  ``https://example.org/metrics``.

 ``www/``
 ~~~~~~~~~
@@ -138,9 +142,11 @@ Chatmail relay dependency diagram
     nginx-internal --- autoconfig.xml;
     certs-nginx[("`TLS certs
     /var/lib/acme`")] --> nginx-internal;
+    systemd-timer --- chatmail-metrics;
     systemd-timer --- acmetool;
     systemd-timer --- chatmail-expire-daily;
     systemd-timer --- chatmail-fsreport-daily;
+    chatmail-metrics --- website;
     acmetool --> certs[("`TLS certs
     /var/lib/acme`")];
     nginx-external --- |993|dovecot;

View File

@@ -1,9 +1,10 @@
-# Local overrides: copy to docker-compose.override.yaml in this directory.
+# Local overrides: copy to docker-compose.override.yaml in the repo root.
 # Compose automatically merges this with docker-compose.yaml.
 #
 #   cp docker-compose.override.yaml.example docker-compose.override.yaml
 #
-# Volumes are APPENDED to the base file's volumes list, environment and other scalar keys are MERGED by key.
+# Volumes are APPENDED to the base file's volumes list.
+# Environment and other scalar keys are MERGED by key.
services:
  chatmail:
    volumes:
@@ -22,18 +23,13 @@ services:
     ## Custom website:
     # - ./custom/www:/opt/chatmail-www
-    ## Debug — mount scripts for live editing:
-    # - ./chatmail-init.sh:/chatmail-init.sh
-    # - ./entrypoint.sh:/entrypoint.sh
+    ## Debug — mount scripts from the repo for live editing:
+    # - ./docker/files/setup_chatmail_docker.sh:/setup_chatmail_docker.sh
+    # - ./docker/files/entrypoint.sh:/entrypoint.sh
 #  environment:
     ## Mount certs (above) and set TLS_EXTERNAL_CERT_AND_KEY to in-container paths.
-    ## A tls-cert-reload.path watcher inside the container reloads services
-    ## when the cert file changes. However, inotify does not cross bind-mount
-    ## boundaries, so host-side renewals (certbot, acmetool, etc.) must
-    ## notify the container explicitly. Add this to your renewal hook:
-    ##
-    ##   docker exec chatmail systemctl start tls-cert-reload.service
+    ## Changed certs are picked up automatically (inotify via tls-cert-reload.path).
     ##
     ## Host acmetool (bare-metal migration): create mount above, and
     ##   rsync -a /var/lib/acme/live data/certs

View File

@@ -1,20 +1,16 @@
 # Base compose file — do not edit. Put customizations (data paths, extra
 # volumes, env overrides) in docker-compose.override.yaml instead.
-# See docker-compose.override.yaml.example in this directory for a starting point.
+# See docker/docker-compose.override.yaml.example for a starting point.
 #
-# Security notes: this container uses
-# - network_mode:host — chatmail needs many ports (25, 53, 80, 143, 443, 465,
-#   587, 993, 3340, 8443) and needs to operate from the real IP, which bridging
-#   would make tricky
-# - cgroup:host (required for systemd).
-# Together these give the container near-host-level access. This is acceptable
-# for a dedicated mail server, but be aware that the container can bind any
-# port and see all host network traffic.
+# Security note: this container uses network_mode:host (chatmail needs many
+# ports: 25, 53, 80, 143, 443, 465, 587, 993, 3340, 8443) and cgroup:host
+# (required for systemd). Together these give the container near-host-level
+# access. This is acceptable for a dedicated mail server, but be aware that
+# the container can bind any port and see all host network traffic.
 services:
   chatmail:
     build:
-      context: ../
+      context: ./
       dockerfile: docker/chatmail_relay.dockerfile
       args:
         GIT_HASH: ${GIT_HASH:-unknown}
@@ -30,7 +26,10 @@ services:
- /run - /run
- /run/lock - /run/lock
logging: logging:
driver: none driver: json-file
options:
max-size: "10m"
max-file: "3"
environment: environment:
MAIL_DOMAIN: $MAIL_DOMAIN MAIL_DOMAIN: $MAIL_DOMAIN
network_mode: "host" network_mode: "host"


@@ -5,5 +5,5 @@
 # .git/ is excluded from the build context (.dockerignore) so the hash
 # must be passed as a build arg from the host.
-export GIT_HASH=$(git rev-parse HEAD)
-exec docker compose -f docker/docker-compose.yaml build "$@"
+export GIT_HASH=$(git rev-parse --short HEAD)
+exec docker compose build "$@"
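Since the compose file defaults the build arg with `${GIT_HASH:-unknown}`, the export could degrade gracefully when the script runs outside a git checkout (e.g. from a source tarball). A hedged variant — not what the script above does, which assumes a checkout:

```shell
#!/bin/sh
# Sketch: resolve a short commit hash, falling back to "unknown" when
# not inside a git work tree (mirrors the compose default ${GIT_HASH:-unknown}).
GIT_HASH=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
export GIT_HASH
echo "$GIT_HASH"
```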


@@ -1,104 +1,94 @@
-# syntax=docker/dockerfile:1
 FROM jrei/systemd-debian:12 AS base
 ENV LANG=en_US.UTF-8
-RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
-    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
-    echo 'APT::Install-Recommends "0";' > /etc/apt/apt.conf.d/01norecommend && \
+RUN echo 'APT::Install-Recommends "0";' > /etc/apt/apt.conf.d/01norecommend && \
     echo 'APT::Install-Suggests "0";' >> /etc/apt/apt.conf.d/01norecommend && \
     apt-get update && \
-    DEBIAN_FRONTEND=noninteractive TZ=UTC \
     apt-get install -y \
-        ca-certificates \
-        gcc \
-        git \
-        python3 \
-        python3-dev \
-        python3-venv \
-        tzdata \
-        locales && \
+        ca-certificates && \
+    DEBIAN_FRONTEND=noninteractive \
+    TZ=UTC \
+    apt-get install -y tzdata && \
+    apt-get install -y locales && \
     sed -i -e "s/# $LANG.*/$LANG UTF-8/" /etc/locale.gen && \
     dpkg-reconfigure --frontend=noninteractive locales && \
-    update-locale LANG=$LANG
+    update-locale LANG=$LANG \
+    && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && \
+    apt-get install -y \
+        git \
+        python3 \
+        python3-venv \
+        python3-virtualenv \
+        gcc \
+        python3-dev \
+        opendkim \
+        opendkim-tools \
+        curl \
+        rsync \
+        unbound \
+        unbound-anchor \
+        dnsutils \
+        postfix \
+        acl \
+        nginx \
+        libnginx-mod-stream \
+        fcgiwrap \
+        cron \
+    && rm -rf /var/lib/apt/lists/*
 # --- Build-time: install cmdeploy venv and run install stage ---
 # Editable install so importlib.resources reads directly from the source tree.
 # On container start only "configure,activate" stages run.
-# Copy dependency metadata first so pip install layer is cached
-COPY cmdeploy/pyproject.toml /opt/chatmail/cmdeploy/pyproject.toml
-COPY chatmaild/pyproject.toml /opt/chatmail/chatmaild/pyproject.toml
-# Dummy scaffolding so editable install can discover packages
-RUN mkdir -p /opt/chatmail/cmdeploy/src/cmdeploy \
-        /opt/chatmail/chatmaild/src/chatmaild && \
-    touch /opt/chatmail/cmdeploy/src/cmdeploy/__init__.py \
-        /opt/chatmail/chatmaild/src/chatmaild/__init__.py
-# Dummy git repo: .git/ is excluded from the build context (.dockerignore)
-# but setuptools calls `git ls-files` when building the sdist.
-WORKDIR /opt/chatmail
-RUN --mount=type=cache,target=/root/.cache/pip \
-    git init -q && \
-    python3 -m venv /opt/cmdeploy && \
-    /opt/cmdeploy/bin/pip install -e chatmaild/ -e cmdeploy/
-# Full source copy (editable install's .egg-link still points here)
 COPY . /opt/chatmail/
+WORKDIR /opt/chatmail
+# Minimal chatmail.ini
 RUN printf '[params]\nmail_domain = build.local\n' > /tmp/chatmail.ini
+# Dummy git repo init: .git/ is excluded from the build context (.dockerignore)
+# but setuptools calls `git ls-files` when building the sdist.
+RUN git init -q && \
+    python3 -m venv /opt/cmdeploy && \
+    /opt/cmdeploy/bin/pip install --no-cache-dir \
+        -e chatmaild/ -e cmdeploy/
 RUN CMDEPLOY_STAGES=install \
     CHATMAIL_INI=/tmp/chatmail.ini \
+    CHATMAIL_NOSYSCTL=True \
+    CHATMAIL_NOPORTCHECK=True \
     /opt/cmdeploy/bin/pyinfra @local \
     /opt/chatmail/cmdeploy/src/cmdeploy/run.py -y
 RUN cp -a www/ /opt/chatmail-www/
-# Remove build-only packages — not needed at runtime.
-# Keep git: test_deployed_state needs `git rev-parse HEAD` to verify the
-# deployed version hash matches /etc/chatmail-version.
-RUN apt-get purge -y gcc python3-dev && \
-    apt-get autoremove -y && \
-    rm -f /tmp/chatmail.ini
+RUN rm -f /tmp/chatmail.ini
 # Record image version (used in deploy fingerprint at runtime).
 # GIT_HASH is passed as a build arg (from docker-compose or CI) so that
 # .git/ can be excluded from the build context via .dockerignore.
-# Two files: chatmail-image-version is the immutable build hash (survives
-# deploys); chatmail-version is overwritten by cmdeploy run and restored
-# from the image version after each deploy in chatmail-init.sh.
 ARG GIT_HASH=unknown
 RUN echo "$GIT_HASH" > /etc/chatmail-image-version && \
     echo "$GIT_HASH" > /etc/chatmail-version
-# Mock git HEAD so `git rev-parse HEAD` returns the source repo's commit hash.
-# The .git/ dir was created by `git init` earlier (for setuptools); we just
-# write the build hash into whatever branch HEAD points to.
-RUN head_ref=$(sed 's/^ref: //' /opt/chatmail/.git/HEAD) && \
-    mkdir -p "/opt/chatmail/.git/$(dirname "$head_ref")" && \
-    echo "$GIT_HASH" > "/opt/chatmail/.git/$head_ref"
 # --- End build-time install ---
 ENV TZ=:/etc/localtime
 ENV PATH="/opt/cmdeploy/bin:${PATH}"
 RUN ln -s /etc/chatmail/chatmail.ini /opt/chatmail/chatmail.ini
-ARG CHATMAIL_INIT_SERVICE_PATH=/lib/systemd/system/chatmail-init.service
-COPY ./docker/chatmail-init.service "$CHATMAIL_INIT_SERVICE_PATH"
-RUN ln -sf "$CHATMAIL_INIT_SERVICE_PATH" "/etc/systemd/system/multi-user.target.wants/chatmail-init.service"
+ARG SETUP_CHATMAIL_SERVICE_PATH=/lib/systemd/system/setup_chatmail.service
+COPY ./docker/files/setup_chatmail.service "$SETUP_CHATMAIL_SERVICE_PATH"
+RUN ln -sf "$SETUP_CHATMAIL_SERVICE_PATH" "/etc/systemd/system/multi-user.target.wants/setup_chatmail.service"
 # Remove default nginx site config at build time (not in entrypoint)
 RUN rm -f /etc/nginx/sites-enabled/default
-COPY --chmod=555 ./docker/chatmail-init.sh /chatmail-init.sh
-COPY --chmod=555 ./docker/entrypoint.sh /entrypoint.sh
-COPY --chmod=555 ./docker/healthcheck.sh /healthcheck.sh
-HEALTHCHECK --interval=10s --start-period=180s --timeout=10s --retries=3 \
-    CMD /healthcheck.sh
+COPY --chmod=555 ./docker/files/setup_chatmail_docker.sh /setup_chatmail_docker.sh
+COPY --chmod=555 ./docker/files/entrypoint.sh /entrypoint.sh
+HEALTHCHECK --interval=60s --timeout=10s --retries=3 \
+    CMD systemctl is-active dovecot postfix nginx unbound opendkim filtermail doveauth chatmail-metadata || exit 1
 STOPSIGNAL SIGRTMIN+3
@@ -106,3 +96,4 @@ ENTRYPOINT ["/entrypoint.sh"]
 CMD [ "--default-standard-output=journal+console", \
      "--default-standard-error=journal+console" ]
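Both Dockerfile variants run `git init -q` before the editable pip install because setuptools' file discovery shells out to `git ls-files` while `.git/` is excluded via `.dockerignore`. The effect is visible outside Docker (this sketch assumes `git` is installed; the pip side is not reproduced):

```shell
#!/bin/sh
# Sketch: in a fresh directory, `git ls-files` fails outright (exit 128);
# after a bare `git init -q` it succeeds and reports zero tracked files,
# which is enough for setuptools' file discovery not to error out.
cd "$(mktemp -d)"
git ls-files 2>/dev/null && echo "unexpected: already a repo"
git init -q
git ls-files | wc -l   # zero tracked files
```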


@@ -1,70 +0,0 @@
# Traefik reverse-proxy example — use as a compose override:
#
# docker compose -f docker-compose.yaml -f docker-compose-traefik.yaml up -d
#
# Traefik handles HTTP→HTTPS redirect and ACME certificate issuance.
# traefik-certs-dumper extracts the certificates to the filesystem so
# chatmail's Postfix/Dovecot/nginx can use them via TLS_EXTERNAL_CERT_AND_KEY.
#
# Prerequisites:
# mkdir -p traefik/data traefik/dynamic-configs
# touch traefik/data/acme.json && chmod 600 traefik/data/acme.json
# cp traefik/config.yaml.example traefik/config.yaml # see below
#
# Required .env variables (in addition to MAIL_DOMAIN):
# ACME_EMAIL=admin@example.org
services:
chatmail:
environment:
# Point chatmail at the certs dumped by traefik-certs-dumper.
# The container's tls-cert-reload.path watches for changes.
TLS_EXTERNAL_CERT_AND_KEY: >-
/traefik-certs/${MAIL_DOMAIN}/certificate.crt
/traefik-certs/${MAIL_DOMAIN}/privatekey.key
volumes:
- traefik-certs:/traefik-certs:ro
depends_on:
- traefik-certs-dumper
labels:
- traefik.enable=true
- traefik.http.services.chatmail.loadbalancer.server.scheme=https
- traefik.http.services.chatmail.loadbalancer.server.port=443
- traefik.http.routers.chatmail.rule=Host(`${MAIL_DOMAIN}`) || Host(`mta-sts.${MAIL_DOMAIN}`) || Host(`www.${MAIL_DOMAIN}`)
- traefik.http.routers.chatmail.tls=true
- traefik.http.routers.chatmail.tls.certresolver=letsEncrypt
traefik:
image: traefik:v3.3
container_name: traefik
restart: unless-stopped
network_mode: host
command:
- "--configFile=/config.yaml"
- "--certificatesresolvers.letsEncrypt.acme.email=${ACME_EMAIL}"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/config.yaml:/config.yaml:ro
- ./traefik/data/acme.json:/acme.json
- ./traefik/dynamic-configs:/dynamic/conf:ro
traefik-certs-dumper:
image: ldez/traefik-certs-dumper:v2.10.0
restart: unless-stopped
depends_on:
- traefik
entrypoint: sh -c '
apk add openssl
&& while ! [ -e /data/acme.json ]
|| ! [ $$(jq ".[] | .Certificates | length" /data/acme.json | jq -s "add") != 0 ]; do
sleep 1;
done
&& traefik-certs-dumper file
--version v3 --watch --domain-subdir=true
--source /data/acme.json --dest /certs'
volumes:
- ./traefik/data/acme.json:/data/acme.json:ro
- traefik-certs:/certs
volumes:
traefik-certs:
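The `>-` folded scalar above collapses the two lines into one space-separated string, so the container receives TLS_EXTERNAL_CERT_AND_KEY as `"<cert-path> <key-path>"`. Assuming the consumer splits on whitespace (an assumption; the setup script writes the same two-path value verbatim into chatmail.ini), the split can be sketched as:

```shell
#!/bin/sh
# Sketch: split the folded "cert key" value back into two paths.
# Breaks if paths contain spaces — the convention assumes they don't.
TLS_EXTERNAL_CERT_AND_KEY="/traefik-certs/example.org/certificate.crt /traefik-certs/example.org/privatekey.key"
set -- $TLS_EXTERNAL_CERT_AND_KEY   # word-split on IFS
cert=$1
key=$2
echo "cert=$cert"
echo "key=$key"
```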


@@ -1,11 +0,0 @@
# Used by .github/workflows/docker-ci.yaml
# The GHCR image is set via CHATMAIL_IMAGE env var at deploy time.
services:
chatmail:
image: ${CHATMAIL_IMAGE:-chatmail-relay:latest}
volumes:
- /srv/chatmail/chatmail.ini:/etc/chatmail/chatmail.ini
- /srv/chatmail/dkim:/etc/dkimkeys
- /srv/chatmail/certs:/var/lib/acme
environment:
TLS_EXTERNAL_CERT_AND_KEY: /var/lib/acme/live/${MAIL_DOMAIN}/fullchain /var/lib/acme/live/${MAIL_DOMAIN}/privkey


@@ -1,9 +0,0 @@
#!/bin/bash
set -eo pipefail
CHATMAIL_INIT_SERVICE_PATH="${CHATMAIL_INIT_SERVICE_PATH:-/lib/systemd/system/chatmail-init.service}"
env_vars="MAIL_DOMAIN CMDEPLOY_STAGES CHATMAIL_INI TLS_EXTERNAL_CERT_AND_KEY PATH"
sed -i "s|<envs_list>|$env_vars|g" "$CHATMAIL_INIT_SERVICE_PATH"
exec /lib/systemd/systemd "$@"

docker/files/entrypoint.sh (new executable file, 12 lines)

@@ -0,0 +1,12 @@
#!/bin/bash
set -eo pipefail
SETUP_CHATMAIL_SERVICE_PATH="${SETUP_CHATMAIL_SERVICE_PATH:-/lib/systemd/system/setup_chatmail.service}"
# Whitelist only the env vars needed by setup_chatmail_docker.sh.
# Forwarding all env vars (via printenv) would leak Docker internals,
# orchestrator secrets, and other unrelated variables into systemd.
env_vars="MAIL_DOMAIN CMDEPLOY_STAGES CHATMAIL_INI TLS_EXTERNAL_CERT_AND_KEY PATH"
sed -i "s|<envs_list>|$env_vars|g" "$SETUP_CHATMAIL_SERVICE_PATH"
exec /lib/systemd/systemd "$@"
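The `sed` templating step can be exercised in isolation. This sketch applies the same substitution to a stand-in unit file (the real entrypoint edits the installed setup_chatmail.service; GNU `sed -i` assumed):

```shell
#!/bin/sh
# Sketch of the placeholder substitution: fill PassEnvironment=<envs_list>
# with the whitelisted variable names, as the entrypoint does.
unit=$(mktemp)
printf 'PassEnvironment=<envs_list>\n' > "$unit"
env_vars="MAIL_DOMAIN CMDEPLOY_STAGES CHATMAIL_INI TLS_EXTERNAL_CERT_AND_KEY PATH"
sed -i "s|<envs_list>|$env_vars|g" "$unit"
cat "$unit"   # PassEnvironment=MAIL_DOMAIN CMDEPLOY_STAGES ...
```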


@@ -1,11 +1,11 @@
 [Unit]
 Description=Run container setup commands
 After=multi-user.target
-ConditionPathExists=/chatmail-init.sh
+ConditionPathExists=/setup_chatmail_docker.sh
 [Service]
 Type=oneshot
-ExecStart=/bin/bash /chatmail-init.sh
+ExecStart=/bin/bash /setup_chatmail_docker.sh
 RemainAfterExit=true
 WorkingDirectory=/opt/chatmail
 PassEnvironment=<envs_list>


@@ -2,6 +2,8 @@
 set -euo pipefail
 export CHATMAIL_INI="${CHATMAIL_INI:-/etc/chatmail/chatmail.ini}"
+export CHATMAIL_NOSYSCTL=True
+export CHATMAIL_NOPORTCHECK=True
 CMDEPLOY=/opt/cmdeploy/bin/cmdeploy
@@ -10,44 +12,37 @@ if [ -z "$MAIL_DOMAIN" ]; then
     exit 1
 fi
-# Generate DKIM keys if not mounted
+### MAIN
 if [ ! -f /etc/dkimkeys/opendkim.private ]; then
     /usr/sbin/opendkim-genkey -D /etc/dkimkeys -d "$MAIL_DOMAIN" -s opendkim
 fi
 # Fix ownership for bind-mounted keys (host opendkim UID may differ from container)
 chown -R opendkim:opendkim /etc/dkimkeys
-# Create chatmail.ini, skip if mounted
+# Journald: forward to console for docker logs
+grep -q '^ForwardToConsole=yes' /etc/systemd/journald.conf \
+    || echo "ForwardToConsole=yes" >> /etc/systemd/journald.conf
+systemctl restart systemd-journald
+# Create chatmail.ini (skips if file already exists, e.g. volume-mounted)
 mkdir -p "$(dirname "$CHATMAIL_INI")"
 if [ ! -f "$CHATMAIL_INI" ]; then
     $CMDEPLOY init --config "$CHATMAIL_INI" "$MAIL_DOMAIN"
 fi
-# Auto-detect IPv6: if the host has no IPv6 connectivity, set disable_ipv6
-# in the ini so dovecot/postfix/nginx bind to IPv4 only.
-# Uses network_mode:host so /proc/net/if_inet6 reflects the host's stack.
-if [ ! -e /proc/net/if_inet6 ]; then
-    if grep -q '^disable_ipv6 = False' "$CHATMAIL_INI"; then
-        sed -i 's/^disable_ipv6 = False/disable_ipv6 = True/' "$CHATMAIL_INI"
-        echo "[INFO] IPv6 not available, set disable_ipv6 = True"
-    fi
-fi
-# Inject external TLS paths from env var unless defined in chatmail.ini
+# Inject external TLS paths from env var (unless user mounted their own ini)
 if [ -n "${TLS_EXTERNAL_CERT_AND_KEY:-}" ]; then
     if ! grep -q '^tls_external_cert_and_key' "$CHATMAIL_INI"; then
         echo "tls_external_cert_and_key = $TLS_EXTERNAL_CERT_AND_KEY" >> "$CHATMAIL_INI"
     fi
 fi
-# Ensure mailboxes directory exists (chatmail-metadata needs it at startup,
-# but Dovecot only creates it on first mail delivery)
-mkdir -p "/home/vmail/mail/${MAIL_DOMAIN}"
-chown vmail:vmail "/home/vmail/mail/${MAIL_DOMAIN}"
 # --- Deploy fingerprint: skip cmdeploy run if nothing changed ---
 # On restart with identical image+config, systemd already brings up all
-# enabled services only configure+activate are needed here.
+# enabled services — the full cmdeploy run is redundant (~30s saved).
+# The install stage runs at image build time (Dockerfile), so only
+# configure+activate are needed here.
 IMAGE_VERSION_FILE="/etc/chatmail-image-version"
 FINGERPRINT_FILE="/etc/chatmail/.deploy-fingerprint"
 image_ver="none"
@@ -55,7 +50,7 @@ image_ver="none"
 config_hash=$(sha256sum "$CHATMAIL_INI" | cut -c1-16)
 current_fp="${image_ver}:${config_hash}"
-# CMDEPLOY_STAGES non-empty in env = operator override -> always run.
+# CMDEPLOY_STAGES non-empty in env = operator override → always run.
 # Otherwise, if fingerprint matches the last successful deploy, skip.
 if [ -z "${CMDEPLOY_STAGES:-}" ] \
     && [ -f "$FINGERPRINT_FILE" ] \
@@ -63,23 +58,6 @@
     echo "[INFO] No changes detected ($current_fp), skipping deploy."
 else
     export CMDEPLOY_STAGES="${CMDEPLOY_STAGES:-configure,activate}"
-    # Skip DNS check when MAIL_DOMAIN is a bare IP address
-    SKIP_DNS=""
-    if [[ "$MAIL_DOMAIN" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] || [[ "$MAIL_DOMAIN" =~ : ]]; then
-        SKIP_DNS="--skip-dns-check"
-    fi
-    $CMDEPLOY run --config "$CHATMAIL_INI" --ssh-host @local $SKIP_DNS
-    # Restore the build-time hash
-    cp /etc/chatmail-image-version /etc/chatmail-version
+    $CMDEPLOY run --config "$CHATMAIL_INI" --ssh-host @local
    echo "$current_fp" > "$FINGERPRINT_FILE"
 fi
-# Signal success to Docker healthcheck
-touch /run/chatmail-init.done
-# Forward journald to console so `docker compose logs` works
-grep -q '^ForwardToConsole=yes' /etc/systemd/journald.conf \
-    || echo "ForwardToConsole=yes" >> /etc/systemd/journald.conf
-systemctl restart systemd-journald
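The fingerprint gate in the setup script can be exercised outside the container. A sketch with temp files standing in for `/etc/chatmail-image-version`, the ini, and the fingerprint file (all paths here are stand-ins; the real branch runs `cmdeploy run` where the echo is):

```shell
#!/bin/sh
# Sketch of the deploy-fingerprint check: combine the image hash with a
# truncated sha256 of the config; skip when it matches the stored value.
workdir=$(mktemp -d)
echo "abc1234" > "$workdir/image-version"
printf '[params]\nmail_domain = example.org\n' > "$workdir/chatmail.ini"

image_ver=$(cat "$workdir/image-version")
config_hash=$(sha256sum "$workdir/chatmail.ini" | cut -c1-16)
current_fp="${image_ver}:${config_hash}"

fp_file="$workdir/.deploy-fingerprint"
if [ -f "$fp_file" ] && [ "$(cat "$fp_file")" = "$current_fp" ]; then
    echo "skip deploy"
else
    echo "run deploy"            # first run: no fingerprint yet
    echo "$current_fp" > "$fp_file"
fi
```

Re-running the same check with unchanged files then takes the "skip deploy" branch; touching the ini changes `config_hash` and forces a deploy again.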


@@ -1,16 +0,0 @@
#!/bin/bash
# returns 0 when chatmail-init succeeded and all expected services are running.
set -e
test -f /run/chatmail-init.done
# Core services
services="chatmail-metadata doveauth dovecot filtermail filtermail-incoming nginx postfix unbound"
# Optional services
for svc in iroh-relay turnserver; do
systemctl is-enabled "$svc" 2>/dev/null && services="$services $svc"
done
exec systemctl is-active $services