Compare commits

13 Commits

Author SHA1 Message Date
j4n
07938544a1 docker: trim compose override example 2026-02-20 17:02:34 +01:00
j4n
3cc74a4c9a docker: get rid of CHATMAIL_* in compose 2026-02-20 16:56:05 +01:00
j4n
77676a4e87 docker: streamline overrides, rename datadirs, external TLS 2026-02-20 16:38:35 +01:00
j4n
dc2a6fda05 docker: migrate to new external tls logic
- remove all traces of CHATMAIL_NOACME; purge certwatch service
- introduce TLS_EXTERNAL_CERT_AND_KEY as per new logic
2026-02-20 10:00:44 +01:00
j4n
d9dce2ccee Merge remote-tracking branch 'origin/hpk/tls-external' into j4n/docker-traefik 2026-02-19 21:04:21 +01:00
j4n
fcfc2cca1a fix(docker): remove CHATMAIL_INI from env 2026-02-19 20:41:18 +01:00
j4n
beb4041e3f fix(docker): Add TZ to env 2026-02-19 20:36:51 +01:00
holger krekel
da3d726fb1 feat: support externally managed TLS via tls_external_cert_and_key option
Adds a new tls_external_cert_and_key config option for chatmail servers
that manage their own TLS certificates (e.g. via an external ACME client
or a load balancer).

A systemd path unit (tls-cert-reload.path) watches the certificate file
via inotify and automatically reloads dovecot and nginx when it changes.
Postfix reads certs per TLS handshake so needs no reload.

Also extracts openssl_selfsigned_args() so cert generation parameters
are shared between SelfSignedTlsDeployer and the e2e test.
2026-02-19 19:49:53 +01:00
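The path-unit mechanism described in this commit could look roughly like the following sketch. This is not the actual file from the commit; the watched certificate path and the exact reload command are assumptions, only the unit name (tls-cert-reload.path) and the reload targets (dovecot, nginx) come from the message above.

```ini
# tls-cert-reload.path -- hypothetical sketch: systemd watches the
# externally managed certificate via inotify.
[Unit]
Description=Watch externally managed TLS certificate

[Path]
# Assumed path; the real deployment points this at tls_external_cert_and_key.
PathChanged=/etc/ssl/certs/mailserver.pem

[Install]
WantedBy=multi-user.target

# tls-cert-reload.service -- assumed companion unit: reload dovecot and
# nginx; postfix reads certs per TLS handshake so it needs no reload.
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reload dovecot nginx
```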
j4n
854b7ef368 typo 2026-02-19 16:03:41 +01:00
j4n
7e30bafd57 docker: clear up docker compose v1/v2 differences (doc/compose.yaml) 2026-02-19 16:03:41 +01:00
j4n
3ef59c3def feat: add Docker and Compose support
Add Docker-based deployment: Dockerfile based on systemd image,
docker-compose.yaml, build script, entrypoint, external certificate
monitoring, CI workflow, and documentation.

This builds on the chatmaild/cmdeploy preparation in the previous
commit (j4n/docker-prep-chatmail) which added the env-var-driven
feature flags (CHATMAIL_NOSYSCTL, CHATMAIL_NOPORTCHECK, CHATMAIL_NOACME)
and @local deployment support needed by the container.

This is commit 2 of 3 merging squashed changes from the j4n/docker and docker
branches; the original commits were beef0ec..606f36e

Architecture overview (mostly by original author Keonik1):
- Debian-systemd image wrapping the existing cmdeploy install
- Host networking to avoid manually exposing the many required ports
- Config via MAIL_DOMAIN env var or (new) mounted chatmail.ini
- New: cmdeploy stages: install at build, configure+activate at startup
- New: Monitoring service for external certs via systemd timer (chatmail-certmon)
- New: Image version tracking for automatic upgrade detection (cm + config hash)
- New: docker-compose.override.yaml pattern for user customizations
- New: GitHub Actions CI for ghcr.io image builds

Traefik reverse-proxy support is prepared but the specific files are
excluded from this PR and will be submitted separately.

TODO:
- [ ] Pull out CHATMAIL_NOACME as PR #855 introduced a proper mechanism
- [ ] Check if underlying image could be based on regular debian-slim
  images with a step to enable systemd, similar to
  https://github.com/alexdzyoba/docker-debian-systemd

Files added:
  .dockerignore
  .github/workflows/docker-build.yaml
  docker-compose.yaml
  docker-compose.override.yaml.example
  docker/build.sh
  docker/chatmail_relay.dockerfile
  docker/files/chatmail-certmon.{service,sh,timer}
  docker/files/entrypoint.sh
  docker/files/setup_chatmail.service
  docker/files/setup_chatmail_docker.sh
  env.example
  doc/source/docker.rst

Files modified:
  .gitignore
  doc/source/getting_started.rst
  doc/source/index.rst

Co-authored-by: Keonik1 <keonik.dev@gmail.com>
Co-authored-by: missytake <missytake@systemli.org>
2026-02-19 16:03:41 +01:00
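The docker-compose.override.yaml pattern listed above works because Compose automatically merges an override file over docker-compose.yaml. A minimal hypothetical override (service name, mount path, and values are assumptions for illustration) might look like:

```yaml
# docker-compose.override.yaml -- hypothetical user customization;
# Compose merges this over docker-compose.yaml automatically.
services:
  chatmail:
    environment:
      - TZ=Europe/Berlin
    volumes:
      # point the container at a locally maintained config
      - ./my-chatmail.ini:/etc/chatmail/chatmail.ini:ro
```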
j4n
a7b3893fee cmdeploy: prepare chatmaild/cmdeploy changes for Docker support
- chatmaild:
  - basedeploy.py: Add has_systemd() guard. During Docker image builds
    there's no running systemd, so deployers that query SystemdEnabled
    facts would crash; this change might also be helpful for non-systemd
    platforms.
- cmdeploy:
  - cmdeploy.py:
    - when deploying to @docker, auto-set CHATMAIL_NOPORTCHECK and
      CHATMAIL_NOSYSCTL since neither makes sense inside a container
    - --config default now reads CHATMAIL_INI env var, so Docker
      entrypoints can point to a mounted ini without CLI flags.
  - deployers.py:
    - skip port check / CHATMAIL_NOPORTCHECK
    - skip echobot systemd cleanup w/ has_systemd
  - dovecot/deployer.py:
    - Guard sysctl writes behind CHATMAIL_NOSYSCTL
    - invert dovecot install check so it works without systemd
  - sshexec.py: Add __call__ to LocalExec so cmdeploy status works with
    @local target. Without it, cmdeploy status tried to call the
    executor directly and got TypeError.

Consolidated from j4n/docker branch commits (selection):
- 8953fde feat(cmdeploy): read CHATMAIL_INI env var for default --config path
- 81d7782 fix(cmdeploy): add __call__ to LocalExec so status works with @local
- 8bba78e docker: disable port check if docker is running. fix #694
- 865b514 docker: replace config flags with env vars, drop docker param (instead of f26cb08)

Files: cmdeploy/src/cmdeploy/{basedeploy,cmdeploy,deployers,sshexec,dovecot/deployer}.py

Co-authored-by: Keonik1 <keonik.dev@gmail.com>
Co-authored-by: missytake <missytake@systemli.org>
2026-02-19 16:03:41 +01:00
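The CHATMAIL_INI fallback described above — "--config default now reads CHATMAIL_INI env var" — can be sketched as follows. This is a hypothetical minimal version; the actual default-resolution logic in cmdeploy.py may differ.

```python
import os


def default_config_path(cli_value=None):
    """Resolve the chatmail.ini path the way the commit describes:
    an explicit --config wins, then the CHATMAIL_INI environment
    variable, then a plain "chatmail.ini" in the working directory."""
    if cli_value:
        return cli_value
    return os.environ.get("CHATMAIL_INI", "chatmail.ini")
```

With this shape, a Docker entrypoint only needs to export CHATMAIL_INI pointing at the mounted ini and can invoke cmdeploy without any CLI flags.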
j4n
58fa5e5c98 cmdeploy: prepare chatmaild/cmdeploy changes for Docker support
- chatmaild:
  - basedeploy.py: Add has_systemd() guard. During Docker image builds
    there's no running systemd, so deployers that query SystemdEnabled
    facts would crash; this change might also be helpful for non-systemd
    platforms.
- cmdeploy:
  - cmdeploy.py:
    - when deploying to @docker, auto-set CHATMAIL_NOPORTCHECK and
      CHATMAIL_NOSYSCTL since neither makes sense inside a container
    - --config default now reads CHATMAIL_INI env var, so Docker
      entrypoints can point to a mounted ini without CLI flags.
  - deployers.py:
    - skip port check / CHATMAIL_NOPORTCHECK
    - skip echobot systemd cleanup w/ has_systemd
  - dovecot/deployer.py:
    - Guard sysctl writes behind CHATMAIL_NOSYSCTL
    - invert dovecot install check so it works without systemd
  - sshexec.py: Add __call__ to LocalExec so cmdeploy status works with
    @local target. Without it, cmdeploy status tried to call the
    executor directly and got TypeError.

Consolidated from j4n/docker branch commits (selection):
- 8953fde feat(cmdeploy): read CHATMAIL_INI env var for default --config path
- 81d7782 fix(cmdeploy): add __call__ to LocalExec so status works with @local
- 8bba78e docker: disable port check if docker is running. fix #694
- 865b514 docker: replace config flags with env vars, drop docker param (instead of f26cb08)

Files: cmdeploy/src/cmdeploy/{basedeploy,cmdeploy,deployers,sshexec,dovecot/deployer}.py

Co-authored-by: Keonik1 <keonik.dev@gmail.com>
Co-authored-by: missytake <missytake@systemli.org>
2026-02-19 16:03:39 +01:00
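A has_systemd() guard like the one these commits describe is commonly implemented by probing for the systemd runtime directory, which exists only when systemd is actually running (and therefore not during a Docker image build). The sketch below is an assumption about the approach; the real basedeploy.py implementation may differ.

```python
from pathlib import Path


def has_systemd() -> bool:
    """Heuristic guard: /run/systemd/system exists iff systemd is the
    running init, so deployers can skip SystemdEnabled facts when it
    is absent (e.g. inside a container build)."""
    return Path("/run/systemd/system").is_dir()
```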
47 changed files with 1041 additions and 958 deletions

@@ -15,7 +15,7 @@ jobs:
       with:
         ref: ${{ github.event.pull_request.head.sha }}
     - name: download filtermail
-      run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.5.2/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
+      run: curl -L https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-x86_64 -o /usr/local/bin/filtermail && chmod +x /usr/local/bin/filtermail
     - name: run chatmaild tests
       working-directory: chatmaild
       run: pipx run tox

@@ -1,375 +0,0 @@
name: Deploy
on:
push:
branches:
- main
- j4n/docker-pr
pull_request:
paths-ignore:
- 'scripts/**'
- '**/README.md'
- 'CHANGELOG.md'
- 'LICENSE'
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-docker:
name: Build Docker image
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
outputs:
image: ${{ steps.image-ref.outputs.image }}
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GHCR
if: github.event_name == 'push'
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels)
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
# Tagged releases: v1.2.3 -> :1.2.3, :1.2, :latest
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
# Branch pushes: foo/docker-pr -> :foo-docker-pr
type=ref,event=branch
# Always: :sha-<hash>
type=sha
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
file: docker/chatmail_relay.dockerfile
push: ${{ github.event_name == 'push' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
GIT_HASH=${{ github.sha }}
- name: Output image reference
id: image-ref
run: |
SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7)
IMAGE="${{ env.REGISTRY }}/$(echo "${{ env.IMAGE_NAME }}" | tr '[:upper:]' '[:lower:]'):sha-${SHORT_SHA}"
echo "image=${IMAGE}" >> "$GITHUB_OUTPUT"
deploy:
name: Deploy to ${{ matrix.host }}
needs: build-docker
# dont do the regular tests on this branch
if: >-
!cancelled() && (
github.event_name == 'push' ||
(github.event_name == 'pull_request' && !startsWith(github.head_ref, 'j4n/'))
)
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
include:
- host: staging2.testrun.org
acme_dir: acme
dkim_dir: dkimkeys
zone_file: staging.testrun.org-default.zone
disable_ipv6: false
add_ssh_keys: true
- host: staging-ipv4.testrun.org
acme_dir: acme-ipv4
dkim_dir: dkimkeys-ipv4
zone_file: staging-ipv4.testrun.org-default.zone
disable_ipv6: true
add_ssh_keys: false
environment:
name: ${{ matrix.host }}
url: https://${{ matrix.host }}/
concurrency: ${{ matrix.host }}
steps:
# --- Common setup ---
- uses: actions/checkout@v4
- name: prepare SSH and save ACME/DKIM
env:
HOST: ${{ matrix.host }}
ACME_DIR: ${{ matrix.acme_dir }}
DKIM_DIR: ${{ matrix.dkim_dir }}
ZONE: ${{ matrix.zone_file }}
run: |
mkdir ~/.ssh
echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan ${HOST} > ~/.ssh/known_hosts
# save previous acme & dkim state (trailing slash = copy contents)
rsync -avz root@${HOST}:/var/lib/acme/ ${ACME_DIR}/ || true
rsync -avz root@${HOST}:/etc/dkimkeys/ ${DKIM_DIR}/ || true
# backup to ns.testrun.org if contents are useful
if [ -f ${DKIM_DIR}/opendkim.private ]; then
rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" ${DKIM_DIR}/ root@ns.testrun.org:/tmp/${DKIM_DIR}/ || true
fi
if [ "$(ls -A ${ACME_DIR}/certs 2>/dev/null)" ]; then
rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" ${ACME_DIR}/ root@ns.testrun.org:/tmp/${ACME_DIR}/ || true
fi
# make sure CAA record isn't set
scp -o StrictHostKeyChecking=accept-new .github/workflows/${ZONE} root@ns.testrun.org:/etc/nsd/${HOST}.zone
ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/${HOST}.zone
ssh root@ns.testrun.org nsd-checkzone ${HOST} /etc/nsd/${HOST}.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: rebuild VPS
env:
SERVER_ID: ${{ matrix.host == 'staging2.testrun.org' && secrets.STAGING_SERVER_ID || secrets.STAGING_IPV4_SERVER_ID }}
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{"image":"debian-12"}' \
"https://api.hetzner.cloud/v1/servers/${SERVER_ID}/actions/rebuild"
- run: scripts/initenv.sh
- name: append venv/bin to PATH
run: echo venv/bin >>$GITHUB_PATH
- name: wait for VPS rebuild
id: wait-for-vps
env:
HOST: ${{ matrix.host }}
run: |
rm ~/.ssh/known_hosts
while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new root@${HOST} id -u ; do sleep 1 ; done
- name: restore ACME/DKIM
env:
HOST: ${{ matrix.host }}
ACME_DIR: ${{ matrix.acme_dir }}
DKIM_DIR: ${{ matrix.dkim_dir }}
run: |
# download from ns.testrun.org
rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/${ACME_DIR}/ acme-restore/ || true
rsync -avz root@ns.testrun.org:/tmp/${DKIM_DIR}/ dkimkeys-restore/ || true
# restore to VPS
rsync -avz acme-restore/ root@${HOST}:/var/lib/acme/ || true
rsync -avz dkimkeys-restore/ root@${HOST}:/etc/dkimkeys/ || true
ssh root@${HOST} chown root:root -R /var/lib/acme || true
- name: bare offline tests
if: github.ref == 'refs/heads/main' || github.event_name == 'pull_request'
run: pytest --pyargs cmdeploy
- name: bare deploy
if: github.ref == 'refs/heads/main' || github.event_name == 'pull_request'
env:
HOST: ${{ matrix.host }}
DISABLE_IPV6: ${{ matrix.disable_ipv6 }}
run: |
ssh root@${HOST} 'apt update && apt install -y git python3.11-venv python3-dev gcc'
ssh root@${HOST} 'git clone https://github.com/chatmail/relay'
ssh root@${HOST} "cd relay && git checkout ${{ github.head_ref || github.ref_name }}"
ssh root@${HOST} 'cd relay && scripts/initenv.sh'
ssh root@${HOST} "cd relay && scripts/cmdeploy init ${HOST}"
if [ "${DISABLE_IPV6}" = "true" ]; then
ssh root@${HOST} "sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' relay/chatmail.ini"
fi
ssh root@${HOST} "sed -i 's/#\s*mtail_address/mtail_address/' relay/chatmail.ini"
ssh root@${HOST} "cd relay && scripts/cmdeploy run --verbose --skip-dns-check --ssh-host localhost"
- name: bare DNS
if: github.ref == 'refs/heads/main' || github.event_name == 'pull_request'
env:
HOST: ${{ matrix.host }}
ZONE: ${{ matrix.zone_file }}
run: |
ssh root@${HOST} chown opendkim:opendkim -R /etc/dkimkeys
ssh root@${HOST} "cd relay && scripts/cmdeploy dns --zonefile staging-generated.zone --ssh-host localhost"
ssh root@${HOST} cat relay/staging-generated.zone >> .github/workflows/${ZONE}
cat .github/workflows/${ZONE}
scp .github/workflows/${ZONE} root@ns.testrun.org:/etc/nsd/${HOST}.zone
ssh root@ns.testrun.org nsd-checkzone ${HOST} /etc/nsd/${HOST}.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: bare integration tests
if: github.ref == 'refs/heads/main' || github.event_name == 'pull_request'
env:
HOST: ${{ matrix.host }}
run: ssh root@${HOST} "cd relay && CHATMAIL_DOMAIN2=ci-chatmail.testrun.org scripts/cmdeploy test --slow --ssh-host localhost"
- name: bare final DNS check
if: github.ref == 'refs/heads/main' || github.event_name == 'pull_request'
env:
HOST: ${{ matrix.host }}
run: ssh root@${HOST} "cd relay && scripts/cmdeploy dns -v --ssh-host localhost"
# --- Docker deploy (push only, runs even if bare failed) ---
- name: stop bare services
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: |
ssh root@${HOST} 'systemctl stop postfix dovecot nginx opendkim unbound filtermail doveauth chatmail-metadata iroh-relay mtail fcgiwrap acmetool 2>/dev/null || true'
- name: install Docker on VPS
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: |
ssh root@${HOST} 'apt-get update && apt-get install -y ca-certificates curl'
ssh root@${HOST} 'install -m 0755 -d /etc/apt/keyrings'
ssh root@${HOST} 'curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc && chmod a+r /etc/apt/keyrings/docker.asc'
ssh root@${HOST} 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list'
ssh root@${HOST} 'apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin'
- name: prepare Docker bind mounts
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: |
ssh root@${HOST} 'mkdir -p /srv/chatmail/certs /srv/chatmail/dkim'
ssh root@${HOST} 'cp -a /var/lib/acme/. /srv/chatmail/certs/ && cp -a /etc/dkimkeys/. /srv/chatmail/dkim/' || true
- name: generate and upload chatmail.ini
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: |
cmdeploy init ${HOST}
sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
scp chatmail.ini root@${HOST}:/srv/chatmail/chatmail.ini
- name: deploy with Docker
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: |
GHCR_IMAGE="${{ needs.build-docker.outputs.image }}"
rsync -avz --exclude='.git' --exclude='venv' --exclude='__pycache__' ./ root@${HOST}:/srv/chatmail/relay/
# Login to GHCR on VPS and pull pre-built image
echo "${{ secrets.GITHUB_TOKEN }}" | ssh root@${HOST} 'docker login ghcr.io -u ${{ github.actor }} --password-stdin'
ssh root@${HOST} "docker pull ${GHCR_IMAGE}"
ssh root@${HOST} "cd /srv/chatmail/relay && CHATMAIL_IMAGE=${GHCR_IMAGE} MAIL_DOMAIN=${HOST} docker compose -f docker-compose.yaml -f docker/docker-compose.ci.yaml up -d"
- name: wait for container healthy
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: |
# Stream journald inside the container
ssh root@${HOST} 'docker exec chatmail journalctl -f --no-pager' &
LOG_PID=$!
trap "kill $LOG_PID 2>/dev/null || true" EXIT
for i in $(seq 1 60); do
status=$(ssh root@${HOST} 'docker inspect --format={{.State.Health.Status}} chatmail 2>/dev/null' || echo "missing")
echo " [$i/60] status=$status"
if [ "$status" = "healthy" ]; then
echo "Container is healthy."
exit 0
fi
if [ "$status" = "unhealthy" ]; then
echo "Container is unhealthy!"
break
fi
sleep 5
done
echo "Container did not become healthy."
kill $LOG_PID 2>/dev/null || true
echo "--- failed units ---"
ssh root@${HOST} 'docker exec chatmail systemctl --failed --no-pager' || true
echo "--- service logs ---"
ssh root@${HOST} 'docker exec chatmail journalctl -u dovecot -u postfix -u nginx -u unbound --no-pager -n 50' || true
echo "--- listening ports ---"
ssh root@${HOST} 'docker exec chatmail ss -tlnp' || true
echo "--- chatmail.ini ---"
ssh root@${HOST} 'docker exec chatmail cat /etc/chatmail/chatmail.ini' || true
exit 1
- name: show container state
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: |
echo "--- listening ports ---"
ssh root@${HOST} 'docker exec chatmail ss -tlnp'
echo "--- chatmail.ini ---"
ssh root@${HOST} 'docker exec chatmail cat /etc/chatmail/chatmail.ini'
- name: Docker offline tests
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
run: CHATMAIL_DOCKER=chatmail pytest --pyargs cmdeploy
- name: Docker DNS
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
ZONE: ${{ matrix.zone_file }}
run: |
# Reset zone file in case bare DNS already appended to it
git checkout .github/workflows/${ZONE}
ssh root@${HOST} 'docker exec chatmail chown opendkim:opendkim -R /etc/dkimkeys'
ssh root@${HOST} 'docker exec chatmail cmdeploy dns --ssh-host @local --zonefile /opt/chatmail/staging.zone --verbose'
ssh root@${HOST} 'docker cp chatmail:/opt/chatmail/staging.zone /tmp/staging.zone'
scp root@${HOST}:/tmp/staging.zone staging-generated.zone
cat staging-generated.zone >> .github/workflows/${ZONE}
cat .github/workflows/${ZONE}
scp .github/workflows/${ZONE} root@ns.testrun.org:/etc/nsd/${HOST}.zone
ssh root@ns.testrun.org nsd-checkzone ${HOST} /etc/nsd/${HOST}.zone
ssh root@ns.testrun.org systemctl reload nsd
- name: Docker integration tests
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
run: CHATMAIL_DOCKER=chatmail CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
- name: Docker final DNS check
if: >-
!cancelled() && github.event_name == 'push'
&& steps.wait-for-vps.outcome == 'success'
env:
HOST: ${{ matrix.host }}
run: ssh root@${HOST} 'docker exec chatmail cmdeploy dns -v --ssh-host @local'
# --- Cleanup ---
- name: add SSH keys
if: >-
!cancelled() && matrix.add_ssh_keys
&& steps.wait-for-vps.outcome == 'success'
run: ssh root@${{ matrix.host }} 'curl -s https://github.com/hpk42.keys https://github.com/j4n.keys >> .ssh/authorized_keys'

.github/workflows/docker-build.yaml (new file, 76 lines)

@@ -0,0 +1,76 @@
name: Docker Build
on:
  pull_request:
    paths:
      - 'docker/**'
      - 'docker-compose.yaml'
      - '.dockerignore'
      - 'chatmaild/**'
      - 'cmdeploy/**'
      - '.github/workflows/docker-build.yaml'
  push:
    branches:
      - main
      - j4n/docker
    paths:
      - 'docker/**'
      - 'docker-compose.yaml'
      - '.dockerignore'
      - 'chatmaild/**'
      - 'cmdeploy/**'
      - '.github/workflows/docker-build.yaml'
    tags:
      - 'v*'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build:
    name: Build Docker image
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to GHCR
        if: github.event_name == 'push'
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            # Tagged releases: v1.2.3 → :1.2.3, :1.2, :latest
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            # Branch pushes: j4n/docker → :j4n-docker
            type=ref,event=branch
            # Always: :sha-<hash>
            type=sha
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: docker/chatmail_relay.dockerfile
          push: ${{ github.event_name == 'push' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          build-args: |
            GIT_HASH=${{ github.sha }}

@@ -0,0 +1,95 @@
name: deploy on staging-ipv4.testrun.org, and run tests
on:
  push:
    branches:
      - main
  pull_request:
    paths-ignore:
      - 'scripts/**'
      - '**/README.md'
      - 'CHANGELOG.md'
      - 'LICENSE'
jobs:
  deploy:
    name: deploy on staging-ipv4.testrun.org, and run tests
    runs-on: ubuntu-latest
    timeout-minutes: 30
    environment:
      name: staging-ipv4.testrun.org
      url: https://staging-ipv4.testrun.org/
    concurrency: staging-ipv4.testrun.org
    steps:
      - uses: actions/checkout@v4
      - name: prepare SSH
        run: |
          mkdir ~/.ssh
          echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan staging-ipv4.testrun.org > ~/.ssh/known_hosts
          # save previous acme & dkim state
          rsync -avz root@staging-ipv4.testrun.org:/var/lib/acme acme-ipv4 || true
          rsync -avz root@staging-ipv4.testrun.org:/etc/dkimkeys dkimkeys-ipv4 || true
          # store previous acme & dkim state on ns.testrun.org, if it contains useful certs
          if [ -f dkimkeys-ipv4/dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys-ipv4 root@ns.testrun.org:/tmp/ || true; fi
          if [ "$(ls -A acme-ipv4/acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme-ipv4 root@ns.testrun.org:/tmp/ || true; fi
          # make sure CAA record isn't set
          scp -o StrictHostKeyChecking=accept-new .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
          ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging-ipv4.testrun.org.zone
          ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
          ssh root@ns.testrun.org systemctl reload nsd
      - name: rebuild staging-ipv4.testrun.org to have a clean VPS
        run: |
          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"image":"debian-12"}' \
            "https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_IPV4_SERVER_ID }}/actions/rebuild"
      - run: scripts/initenv.sh
      - name: append venv/bin to PATH
        run: echo venv/bin >>$GITHUB_PATH
      - name: upload TLS cert after rebuilding
        run: |
          echo " --- wait until staging-ipv4.testrun.org VPS is rebuilt --- "
          rm ~/.ssh/known_hosts
          while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u ; do sleep 1 ; done
          ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org id -u
          # download acme & dkim state from ns.testrun.org
          rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme-ipv4/acme acme-restore || true
          rsync -avz root@ns.testrun.org:/tmp/dkimkeys-ipv4/dkimkeys dkimkeys-restore || true
          # restore acme & dkim state to staging2.testrun.org
          rsync -avz acme-restore/acme root@staging-ipv4.testrun.org:/var/lib/ || true
          rsync -avz dkimkeys-restore/dkimkeys root@staging-ipv4.testrun.org:/etc/ || true
          ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown root:root -R /var/lib/acme || true
      - name: run deploy-chatmail offline tests
        run: pytest --pyargs cmdeploy
      - run: |
          cmdeploy init staging-ipv4.testrun.org
          sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' chatmail.ini
          sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
          cmdeploy run --verbose --skip-dns-check
      - name: set DNS entries
        run: |
          ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
          cmdeploy dns --zonefile staging-generated.zone
          cat staging-generated.zone >> .github/workflows/staging-ipv4.testrun.org-default.zone
          cat .github/workflows/staging-ipv4.testrun.org-default.zone
          scp .github/workflows/staging-ipv4.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging-ipv4.testrun.org.zone
          ssh root@ns.testrun.org nsd-checkzone staging-ipv4.testrun.org /etc/nsd/staging-ipv4.testrun.org.zone
          ssh root@ns.testrun.org systemctl reload nsd
      - name: cmdeploy test
        run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
      - name: cmdeploy dns
        run: cmdeploy dns -v

.github/workflows/test-and-deploy.yaml (new file, 98 lines)

@@ -0,0 +1,98 @@
name: deploy on staging2.testrun.org, and run tests
on:
  push:
    branches:
      - main
  pull_request:
    paths-ignore:
      - 'scripts/**'
      - '**/README.md'
      - 'CHANGELOG.md'
      - 'LICENSE'
jobs:
  deploy:
    name: deploy on staging2.testrun.org, and run tests
    runs-on: ubuntu-latest
    timeout-minutes: 30
    environment:
      name: staging2.testrun.org
      url: https://staging2.testrun.org/
    concurrency: staging2.testrun.org
    steps:
      - uses: actions/checkout@v4
      - name: prepare SSH
        run: |
          mkdir ~/.ssh
          echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan staging2.testrun.org > ~/.ssh/known_hosts
          # save previous acme & dkim state
          rsync -avz root@staging2.testrun.org:/var/lib/acme . || true
          rsync -avz root@staging2.testrun.org:/etc/dkimkeys . || true
          # store previous acme & dkim state on ns.testrun.org, if it contains useful certs
          if [ -f dkimkeys/opendkim.private ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" dkimkeys root@ns.testrun.org:/tmp/ || true; fi
          if [ "$(ls -A acme/certs)" ]; then rsync -avz -e "ssh -o StrictHostKeyChecking=accept-new" acme root@ns.testrun.org:/tmp/ || true; fi
          # make sure CAA record isn't set
          scp -o StrictHostKeyChecking=accept-new .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
          ssh root@ns.testrun.org sed -i '/CAA/d' /etc/nsd/staging2.testrun.org.zone
          ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
          ssh root@ns.testrun.org systemctl reload nsd
      - name: rebuild staging2.testrun.org to have a clean VPS
        run: |
          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.HETZNER_API_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"image":"debian-12"}' \
            "https://api.hetzner.cloud/v1/servers/${{ secrets.STAGING_SERVER_ID }}/actions/rebuild"
      - run: scripts/initenv.sh
      - name: append venv/bin to PATH
        run: echo venv/bin >>$GITHUB_PATH
      - name: upload TLS cert after rebuilding
        run: |
          echo " --- wait until staging2.testrun.org VPS is rebuilt --- "
          rm ~/.ssh/known_hosts
          while ! ssh -o ConnectTimeout=180 -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u ; do sleep 1 ; done
          ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org id -u
          # download acme & dkim state from ns.testrun.org
          rsync -e "ssh -o StrictHostKeyChecking=accept-new" -avz root@ns.testrun.org:/tmp/acme acme-restore || true
          rsync -avz root@ns.testrun.org:/tmp/dkimkeys dkimkeys-restore || true
          # restore acme & dkim state to staging2.testrun.org
          rsync -avz acme-restore/acme root@staging2.testrun.org:/var/lib/ || true
          rsync -avz dkimkeys-restore/dkimkeys root@staging2.testrun.org:/etc/ || true
          ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org chown root:root -R /var/lib/acme || true
      - name: add hpk42 key to staging server
        run: ssh root@staging2.testrun.org 'curl -s https://github.com/hpk42.keys >> .ssh/authorized_keys'
      - name: run deploy-chatmail offline tests
        run: pytest --pyargs cmdeploy
      - run: |
          cmdeploy init staging2.testrun.org
          sed -i 's/#\s*mtail_address/mtail_address/' chatmail.ini
      - run: cmdeploy run --verbose --skip-dns-check
      - name: set DNS entries
        run: |
          ssh -o StrictHostKeyChecking=accept-new root@staging2.testrun.org chown opendkim:opendkim -R /etc/dkimkeys
          cmdeploy dns --zonefile staging-generated.zone --verbose
          cat staging-generated.zone >> .github/workflows/staging.testrun.org-default.zone
          cat .github/workflows/staging.testrun.org-default.zone
          scp .github/workflows/staging.testrun.org-default.zone root@ns.testrun.org:/etc/nsd/staging2.testrun.org.zone
          ssh root@ns.testrun.org nsd-checkzone staging2.testrun.org /etc/nsd/staging2.testrun.org.zone
          ssh root@ns.testrun.org systemctl reload nsd
      - name: cmdeploy test
        run: CHATMAIL_DOMAIN2=ci-chatmail.testrun.org cmdeploy test --slow
      - name: cmdeploy dns
        run: cmdeploy dns -v

@@ -0,0 +1,37 @@
name: test tls_external_cert_and_key on staging2.testrun.org
on:
  workflow_run:
    workflows:
      - "deploy on staging2.testrun.org, and run tests"
    types:
      - completed
jobs:
  test-tls-external:
    name: test tls_external_cert_and_key
    runs-on: ubuntu-latest
    timeout-minutes: 30
    concurrency: staging2.testrun.org
    environment:
      name: staging2.testrun.org
    steps:
      - uses: actions/checkout@v4
      - name: prepare SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.STAGING_SSH_KEY }}" >> ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan staging2.testrun.org >> ~/.ssh/known_hosts 2>/dev/null
      - run: scripts/initenv.sh
      - name: append venv/bin to PATH
        run: echo venv/bin >>$GITHUB_PATH
      - name: run tls_external e2e test
        run: |
          python -m cmdeploy.tests.setup_tls_external \
            staging2.testrun.org

.gitignore (2 changes)

@@ -4,7 +4,7 @@ __pycache__/
 *$py.class
 *.swp
 *qr-*.png
-chatmail*.ini
+chatmail.ini
 # C extensions

@@ -75,7 +75,8 @@ class Config:
                 " paths: CERT_PATH KEY_PATH"
             )
             self.tls_cert_mode = "external"
-            self.tls_cert_path, self.tls_key_path = parts
+            self.tls_cert_path = parts[0]
+            self.tls_key_path = parts[1]
         elif self.mail_domain.startswith("_"):
             self.tls_cert_mode = "self"
             self.tls_cert_path = "/etc/ssl/certs/mailserver.pem"
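The hunk above splits the configured tls_external_cert_and_key value into a certificate path and a key path. As a standalone sketch (the function name is hypothetical; the two-token format "CERT_PATH KEY_PATH" comes from the diff's error message):

```python
def parse_tls_external(value: str) -> tuple[str, str]:
    """Split a tls_external_cert_and_key setting into (cert_path, key_path),
    rejecting anything other than exactly two whitespace-separated paths."""
    parts = value.split()
    if len(parts) != 2:
        raise ValueError(
            "tls_external_cert_and_key expects two paths: CERT_PATH KEY_PATH"
        )
    return parts[0], parts[1]
```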

@@ -13,20 +13,9 @@ to show storage summaries only for first 1000 mailboxes
     python -m chatmaild.fsreport /path/to/chatmail.ini --maxnum 1000
 
-to write Prometheus textfile for node_exporter
-
-    python -m chatmaild.fsreport --textfile /var/lib/prometheus/node-exporter/
-
-writes to /var/lib/prometheus/node-exporter/fsreport.prom
-
-to also write legacy metrics.py style output (default: /var/www/html/metrics):
-
-    python -m chatmaild.fsreport --textfile /var/lib/prometheus/node-exporter/ --legacy-metrics
 """
 
 import os
-import tempfile
 from argparse import ArgumentParser
 from datetime import datetime
@@ -59,19 +48,7 @@ class Report:
self.num_ci_logins = self.num_all_logins = 0 self.num_ci_logins = self.num_all_logins = 0
self.login_buckets = {x: 0 for x in (1, 10, 30, 40, 80, 100, 150)} self.login_buckets = {x: 0 for x in (1, 10, 30, 40, 80, 100, 150)}
KiB = 1024 self.message_buckets = {x: 0 for x in (0, 160000, 500000, 2000000)}
MiB = 1024 * KiB
self.message_size_thresholds = (
    0,
    100 * KiB,
    MiB // 2,
    1 * MiB,
    2 * MiB,
    5 * MiB,
    10 * MiB,
)
self.message_buckets = {x: 0 for x in self.message_size_thresholds}
self.message_count_buckets = {x: 0 for x in self.message_size_thresholds}
def process_mailbox_stat(self, mailbox): def process_mailbox_stat(self, mailbox):
# categorize login times # categorize login times
@@ -91,10 +68,9 @@ class Report:
for size in self.message_buckets: for size in self.message_buckets:
for msg in mailbox.messages: for msg in mailbox.messages:
if msg.size >= size: if msg.size >= size:
if self.mdir and f"/{self.mdir}/" not in msg.path: if self.mdir and not msg.relpath.startswith(self.mdir):
continue continue
self.message_buckets[size] += msg.size self.message_buckets[size] += msg.size
self.message_count_buckets[size] += 1
self.size_messages += sum(entry.size for entry in mailbox.messages) self.size_messages += sum(entry.size for entry in mailbox.messages)
self.size_extra += sum(entry.size for entry in mailbox.extrafiles) self.size_extra += sum(entry.size for entry in mailbox.extrafiles)
@@ -117,10 +93,9 @@ class Report:
pref = f"[{self.mdir}] " if self.mdir else "" pref = f"[{self.mdir}] " if self.mdir else ""
for minsize, sumsize in self.message_buckets.items(): for minsize, sumsize in self.message_buckets.items():
count = self.message_count_buckets[minsize]
percent = (sumsize / all_messages * 100) if all_messages else 0 percent = (sumsize / all_messages * 100) if all_messages else 0
print( print(
f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%), {count} msgs" f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%)"
) )
user_logins = self.num_all_logins - self.num_ci_logins user_logins = self.num_all_logins - self.num_ci_logins
@@ -136,75 +111,6 @@ class Report:
for days, active in self.login_buckets.items(): for days, active in self.login_buckets.items():
print(f"last {days:3} days: {HSize(active)} {p(active)}") print(f"last {days:3} days: {HSize(active)} {p(active)}")
def _write_atomic(self, filepath, content):
    """Atomically write content to filepath via tmp+rename."""
    dirpath = os.path.dirname(os.path.abspath(filepath))
    fd, tmppath = tempfile.mkstemp(dir=dirpath, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.chmod(tmppath, 0o644)
        os.rename(tmppath, filepath)
    except BaseException:
        try:
            os.unlink(tmppath)
        except OSError:
            pass
        raise

def dump_textfile(self, filepath):
    """Dump metrics in Prometheus exposition format."""
    lines = []
    lines.append("# HELP chatmail_storage_bytes Mailbox storage in bytes.")
    lines.append("# TYPE chatmail_storage_bytes gauge")
    lines.append(f'chatmail_storage_bytes{{kind="messages"}} {self.size_messages}')
    lines.append(f'chatmail_storage_bytes{{kind="extra"}} {self.size_extra}')
    total = self.size_extra + self.size_messages
    lines.append(f'chatmail_storage_bytes{{kind="total"}} {total}')

    lines.append("# HELP chatmail_messages_bytes Sum of msg bytes >= threshold.")
    lines.append("# TYPE chatmail_messages_bytes gauge")
    for minsize, sumsize in self.message_buckets.items():
        lines.append(f'chatmail_messages_bytes{{min_size="{minsize}"}} {sumsize}')

    lines.append("# HELP chatmail_messages_count Number of msgs >= size threshold.")
    lines.append("# TYPE chatmail_messages_count gauge")
    for minsize, count in self.message_count_buckets.items():
        lines.append(f'chatmail_messages_count{{min_size="{minsize}"}} {count}')

    lines.append("# HELP chatmail_accounts Number of accounts.")
    lines.append("# TYPE chatmail_accounts gauge")
    user_logins = self.num_all_logins - self.num_ci_logins
    lines.append(f'chatmail_accounts{{kind="all"}} {self.num_all_logins}')
    lines.append(f'chatmail_accounts{{kind="ci"}} {self.num_ci_logins}')
    lines.append(f'chatmail_accounts{{kind="user"}} {user_logins}')

    lines.append(
        "# HELP chatmail_accounts_active Non-CI accounts active within N days."
    )
    lines.append("# TYPE chatmail_accounts_active gauge")
    for days, active in self.login_buckets.items():
        lines.append(f'chatmail_accounts_active{{days="{days}"}} {active}')

    self._write_atomic(filepath, "\n".join(lines) + "\n")

def dump_compat_textfile(self, filepath):
    """Dump legacy metrics.py style metrics."""
    user_logins = self.num_all_logins - self.num_ci_logins
    lines = [
        "# HELP total number of accounts",
        "# TYPE accounts gauge",
        f"accounts {self.num_all_logins}",
        "# HELP number of CI accounts",
        "# TYPE ci_accounts gauge",
        f"ci_accounts {self.num_ci_logins}",
        "# HELP number of non-CI accounts",
        "# TYPE nonci_accounts gauge",
        f"nonci_accounts {user_logins}",
    ]
    self._write_atomic(filepath, "\n".join(lines) + "\n")
def main(args=None): def main(args=None):
"""Report about filesystem storage usage of all mailboxes and messages""" """Report about filesystem storage usage of all mailboxes and messages"""
@@ -221,21 +127,19 @@ def main(args=None):
"--days", "--days",
default=0, default=0,
action="store", action="store",
help="assume date to be DAYS older than now", help="assume date to be days older than now",
) )
parser.add_argument( parser.add_argument(
"--min-login-age", "--min-login-age",
default=0, default=0,
metavar="DAYS",
dest="min_login_age", dest="min_login_age",
action="store", action="store",
help="only sum up message size if last login is at least DAYS days old", help="only sum up message size if last login is at least min-login-age days old",
) )
parser.add_argument( parser.add_argument(
"--mdir", "--mdir",
metavar="{cur,new,tmp}",
action="store", action="store",
help="only consider messages in specified Maildir subdirectory for summary", help="only consider 'cur' or 'new' or 'tmp' messages for summary",
) )
parser.add_argument( parser.add_argument(
@@ -244,21 +148,6 @@ def main(args=None):
action="store", action="store",
help="maximum number of mailboxes to iterate on", help="maximum number of mailboxes to iterate on",
) )
parser.add_argument(
    "--textfile",
    metavar="PATH",
    default=None,
    help="write Prometheus textfile to PATH (directory or file); "
    "if PATH is a directory, writes 'fsreport.prom' inside it",
)
parser.add_argument(
    "--legacy-metrics",
    metavar="FILENAME",
    nargs="?",
    const="/var/www/html/metrics",
    default=None,
    help="write legacy metrics.py textfile (default: /var/www/html/metrics)",
)
args = parser.parse_args(args) args = parser.parse_args(args)
@@ -272,15 +161,7 @@ def main(args=None):
rep = Report(now=now, min_login_age=int(args.min_login_age), mdir=args.mdir) rep = Report(now=now, min_login_age=int(args.min_login_age), mdir=args.mdir)
for mbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum): for mbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum):
rep.process_mailbox_stat(mbox) rep.process_mailbox_stat(mbox)
if args.textfile: rep.dump_summary()
path = args.textfile
if os.path.isdir(path):
path = os.path.join(path, "fsreport.prom")
rep.dump_textfile(path)
if args.legacy_metrics:
rep.dump_compat_textfile(args.legacy_metrics)
if not args.textfile and not args.legacy_metrics:
rep.dump_summary()
if __name__ == "__main__": if __name__ == "__main__":
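The message-size buckets in fsreport are cumulative: a message contributes to every threshold it meets or exceeds, which is why the summary prints "larger than X". A small illustrative sketch of that bucketing (the function and variable names here are mine, not fsreport's):

```python
KiB = 1024
MiB = 1024 * KiB


def bucket_message_sizes(sizes, thresholds=(0, 100 * KiB, MiB // 2, 1 * MiB)):
    """Sum message sizes into cumulative 'larger than threshold' buckets:
    each message is added to every bucket whose threshold it meets or
    exceeds, so buckets overlap by design."""
    buckets = {t: 0 for t in thresholds}
    counts = {t: 0 for t in thresholds}
    for size in sizes:
        for t in thresholds:
            if size >= t:
                buckets[t] += size
                counts[t] += 1
    return buckets, counts
```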


@@ -110,7 +110,6 @@ def test_config_tls_external_overrides_underscore(make_config):
) )
assert config.tls_cert_mode == "external" assert config.tls_cert_mode == "external"
assert config.tls_cert_path == "/certs/fullchain.pem" assert config.tls_cert_path == "/certs/fullchain.pem"
assert config.tls_key_path == "/certs/privkey.pem"
def test_config_tls_external_bad_format(make_config): def test_config_tls_external_bad_format(make_config):


@@ -47,8 +47,6 @@ def test_one_mail(
make_config, make_popen, smtpserver, maildata, filtermail_mode, monkeypatch make_config, make_popen, smtpserver, maildata, filtermail_mode, monkeypatch
): ):
monkeypatch.setenv("PYTHONUNBUFFERED", "1") monkeypatch.setenv("PYTHONUNBUFFERED", "1")
# DKIM is tested by cmdeploy tests.
monkeypatch.setenv("FILTERMAIL_SKIP_DKIM", "1")
smtp_inject_port = 20025 smtp_inject_port = 20025
if filtermail_mode == "outgoing": if filtermail_mode == "outgoing":
settings = dict( settings = dict(
@@ -66,10 +64,6 @@ def test_one_mail(
popen = make_popen(["filtermail", path, filtermail_mode]) popen = make_popen(["filtermail", path, filtermail_mode])
line = popen.stderr.readline().strip() line = popen.stderr.readline().strip()
# skip a warning that FILTERMAIL_SKIP_DKIM shouldn't be used in prod
if b"DKIM verification DISABLED!" in line:
line = popen.stderr.readline().strip()
if b"loop" not in line: if b"loop" not in line:
print(line.decode("ascii"), file=sys.stderr) print(line.decode("ascii"), file=sys.stderr)
pytest.fail("starting filtermail failed") pytest.fail("starting filtermail failed")


@@ -20,7 +20,6 @@ dependencies = [
"pytest-xdist", "pytest-xdist",
"execnet", "execnet",
"imap_tools", "imap_tools",
"deltachat-rpc-client",
] ]
[project.scripts] [project.scripts]


@@ -3,7 +3,7 @@ Description=acmetool HTTP redirector
[Service] [Service]
Type=notify Type=notify
ExecStart=/usr/bin/acmetool redirector --service.uid=daemon --bind=127.0.0.1:402 ExecStart=/usr/bin/acmetool redirector --service.uid=daemon
Restart=always Restart=always
RestartSec=30 RestartSec=30


@@ -5,6 +5,7 @@ along with command line option and subcommand parsing.
import argparse import argparse
import importlib.resources import importlib.resources
import importlib.util
import os import os
import pathlib import pathlib
import shutil import shutil
@@ -108,7 +109,10 @@ def run_cmd(args, out):
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra" pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y" cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
if ssh_host == "localhost": if ssh_host in ["localhost", "@docker"]:
if ssh_host == "@docker":
env["CHATMAIL_NOPORTCHECK"] = "True"
env["CHATMAIL_NOSYSCTL"] = "True"
cmd = f"{pyinf} @local {deploy_path} -y" cmd = f"{pyinf} @local {deploy_path} -y"
if version.parse(pyinfra.__version__) < version.parse("3"): if version.parse(pyinfra.__version__) < version.parse("3"):
@@ -206,15 +210,17 @@ def test_cmd_options(parser):
action="store_true", action="store_true",
help="also run slow tests", help="also run slow tests",
) )
add_ssh_host_option(parser)
def test_cmd(args, out): def test_cmd(args, out):
"""Run local and online tests for chatmail deployment.""" """Run local and online tests for chatmail deployment.
env = os.environ.copy() This will automatically pip-install 'deltachat' if it's not available.
if args.ssh_host: """
env["CHATMAIL_SSH"] = args.ssh_host
x = importlib.util.find_spec("deltachat")
if x is None:
out.check_call(f"{sys.executable} -m pip install deltachat")
pytest_path = shutil.which("pytest") pytest_path = shutil.which("pytest")
pytest_args = [ pytest_args = [
@@ -228,7 +234,7 @@ def test_cmd(args, out):
] ]
if args.slow: if args.slow:
pytest_args.append("--slow") pytest_args.append("--slow")
ret = out.run_ret(pytest_args, env=env) ret = out.run_ret(pytest_args)
return ret return ret
@@ -319,7 +325,7 @@ def add_ssh_host_option(parser):
parser.add_argument( parser.add_argument(
"--ssh-host", "--ssh-host",
dest="ssh_host", dest="ssh_host",
help="Run commands on 'localhost' or on a specific SSH host " help="Run commands on 'localhost', via '@docker', or on a specific SSH host "
"instead of chatmail.ini's mail_domain.", "instead of chatmail.ini's mail_domain.",
) )
@@ -381,7 +387,9 @@ def get_parser():
def get_sshexec(ssh_host: str, verbose=True): def get_sshexec(ssh_host: str, verbose=True):
if ssh_host in ["localhost", "@local"]: if ssh_host in ["localhost", "@local"]:
return LocalExec(verbose) return LocalExec(verbose, docker=False)
elif ssh_host == "@docker":
return LocalExec(verbose, docker=True)
if verbose: if verbose:
print(f"[ssh] login to {ssh_host}") print(f"[ssh] login to {ssh_host}")
return SSHExec(ssh_host, verbose=verbose) return SSHExec(ssh_host, verbose=verbose)


@@ -11,8 +11,8 @@ from pathlib import Path
from chatmaild.config import read_config from chatmaild.config import read_config
from pyinfra import facts, host, logger from pyinfra import facts, host, logger
from pyinfra.api import FactBase
from pyinfra.facts import hardware from pyinfra.facts import hardware
from pyinfra.api import FactBase
from pyinfra.facts.files import Sha256File from pyinfra.facts.files import Sha256File
from pyinfra.facts.systemd import SystemdEnabled from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd from pyinfra.operations import apt, files, pip, server, systemd
@@ -20,6 +20,7 @@ from pyinfra.operations import apt, files, pip, server, systemd
from cmdeploy.cmdeploy import Out from cmdeploy.cmdeploy import Out
from .acmetool import AcmetoolDeployer from .acmetool import AcmetoolDeployer
from .external.deployer import ExternalTlsDeployer
from .basedeploy import ( from .basedeploy import (
Deployer, Deployer,
Deployment, Deployment,
@@ -29,7 +30,6 @@ from .basedeploy import (
has_systemd, has_systemd,
) )
from .dovecot.deployer import DovecotDeployer from .dovecot.deployer import DovecotDeployer
from .external.deployer import ExternalTlsDeployer
from .filtermail.deployer import FiltermailDeployer from .filtermail.deployer import FiltermailDeployer
from .mtail.deployer import MtailDeployer from .mtail.deployer import MtailDeployer
from .nginx.deployer import NginxDeployer from .nginx.deployer import NginxDeployer
@@ -592,13 +592,9 @@ def deploy_chatmail(config_path: Path, disable_mail: bool, website_only: bool) -
("unbound", 53), ("unbound", 53),
] ]
if config.tls_cert_mode == "acme": if config.tls_cert_mode == "acme":
port_services.append(("acmetool", 402)) port_services.append(("acmetool", 80))
port_services += [ port_services += [
(["imap-login", "dovecot"], 143), (["imap-login", "dovecot"], 143),
# acmetool previously listened on port 80,
# so don't complain during upgrade that moved it to port 402
# and gave the port to nginx.
(["acmetool", "nginx"], 80),
("nginx", 443), ("nginx", 443),
(["master", "smtpd"], 465), (["master", "smtpd"], 465),
(["master", "smtpd"], 587), (["master", "smtpd"], 587),


@@ -1,5 +1,4 @@
import os import os
import urllib.request
from chatmaild.config import Config from chatmaild.config import Config
from pyinfra import host from pyinfra import host
@@ -52,21 +51,10 @@ class DovecotDeployer(Deployer):
self.need_restart = False self.need_restart = False
def _pick_url(primary, fallback):
    try:
        req = urllib.request.Request(primary, method="HEAD")
        urllib.request.urlopen(req, timeout=10)
        return primary
    except Exception:
        return fallback
def _install_dovecot_package(package: str, arch: str): def _install_dovecot_package(package: str, arch: str):
arch = "amd64" if arch == "x86_64" else arch arch = "amd64" if arch == "x86_64" else arch
arch = "arm64" if arch == "aarch64" else arch arch = "arm64" if arch == "aarch64" else arch
primary_url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb" url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
fallback_url = f"https://github.com/chatmail/dovecot/releases/download/upstream%2F2.3.21%2Bdfsg1/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
url = _pick_url(primary_url, fallback_url)
deb_filename = "/root/" + url.split("/")[-1] deb_filename = "/root/" + url.split("/")[-1]
match (package, arch): match (package, arch):


@@ -1,8 +1,4 @@
import io from pyinfra.operations import files, server, systemd
from pyinfra import host
from pyinfra.facts.files import File
from pyinfra.operations import files, systemd
from cmdeploy.basedeploy import Deployer, get_resource from cmdeploy.basedeploy import Deployer, get_resource
@@ -21,18 +17,19 @@ class ExternalTlsDeployer(Deployer):
self.key_path = key_path self.key_path = key_path
def configure(self): def configure(self):
# Verify cert and key exist on the remote host using pyinfra facts. server.shell(
for path in (self.cert_path, self.key_path): name="Verify external TLS certificate and key exist",
info = host.get_fact(File, path=path) commands=[
if info is None: f"test -f {self.cert_path} && test -f {self.key_path}",
raise Exception(f"External TLS file not found on server: {path}") ],
)
# Deploy the .path unit (templated with the cert path). # Deploy the .path unit (templated with the cert path).
# pkg=__package__ is required here because the resource files
# live in cmdeploy.external, not the default cmdeploy package.
source = get_resource("tls-cert-reload.path.f", pkg=__package__) source = get_resource("tls-cert-reload.path.f", pkg=__package__)
content = source.read_text().format(cert_path=self.cert_path).encode() content = source.read_text().format(cert_path=self.cert_path).encode()
import io
path_unit = files.put( path_unit = files.put(
name="Upload tls-cert-reload.path", name="Upload tls-cert-reload.path",
src=io.BytesIO(content), src=io.BytesIO(content),
@@ -63,5 +60,10 @@ class ExternalTlsDeployer(Deployer):
restarted=self.need_restart, restarted=self.need_restart,
daemon_reload=self.need_restart, daemon_reload=self.need_restart,
) )
# No explicit reload needed here: dovecot/nginx read the cert # Always trigger a reload so services pick up the current cert.
# on startup, and the .path watcher handles live changes. # The path unit handles future changes via inotify.
server.shell(
name="Reload TLS services for current certificate",
commands=["systemctl start tls-cert-reload.service"],
)


@@ -1,10 +1,6 @@
# Watch the TLS certificate file for changes. # Watch the TLS certificate file for changes.
# When the cert is updated (e.g. renewed by an external process), # When the cert is updated (e.g. renewed by an external process),
# this triggers tls-cert-reload.service to reload the affected services. # this triggers tls-cert-reload.service to restart the affected services.
#
# NOTE: changes to the certificates are not detected if they cross bind-mount boundaries.
# After cert renewal, you must then trigger the reload explicitly:
# systemctl start tls-cert-reload.service
[Unit] [Unit]
Description=Watch TLS certificate for changes Description=Watch TLS certificate for changes


@@ -11,5 +11,5 @@ Description=Reload TLS services after certificate change
[Service] [Service]
Type=oneshot Type=oneshot
ExecStart=/bin/systemctl try-reload-or-restart dovecot ExecStart=/bin/systemctl reload dovecot
ExecStart=/bin/systemctl try-reload-or-restart nginx ExecStart=/bin/systemctl reload nginx


@@ -14,10 +14,10 @@ class FiltermailDeployer(Deployer):
def install(self): def install(self):
arch = host.get_fact(facts.server.Arch) arch = host.get_fact(facts.server.Arch)
url = f"https://github.com/chatmail/filtermail/releases/download/v0.5.2/filtermail-{arch}" url = f"https://github.com/chatmail/filtermail/releases/download/v0.3.0/filtermail-{arch}"
sha256sum = { sha256sum = {
"x86_64": "ce24ca0075aa445510291d775fb3aea8f4411818c7b885ae51a0fe18c5f789ce", "x86_64": "f14a31323ae2dad3b59d3fdafcde507521da2f951a9478cd1f2fe2b4463df71d",
"aarch64": "c5d783eefa5332db3d97a0e6a23917d72849e3eb45da3d16ce908a9b4e5a797d", "aarch64": "933770d75046c4fd7084ce8d43f905f8748333426ad839154f0fc654755ef09f",
}[arch] }[arch]
self.need_restart |= files.download( self.need_restart |= files.download(
name="Download filtermail", name="Download filtermail",


@@ -145,25 +145,4 @@ http {
return 301 $scheme://{{ config.mail_domain }}$request_uri; return 301 $scheme://{{ config.mail_domain }}$request_uri;
access_log syslog:server=unix:/dev/log,facility=local7; access_log syslog:server=unix:/dev/log,facility=local7;
} }
server {
    listen 80;
{% if not disable_ipv6 %}
    listen [::]:80;
{% endif %}
{% if config.tls_cert_mode == "acme" %}
    location /.well-known/acme-challenge/ {
        proxy_pass http://acmetool;
    }
{% endif %}
    return 301 https://$host$request_uri;
}
{% if config.tls_cert_mode == "acme" %}
upstream acmetool {
    server 127.0.0.1:402;
}
{% endif %}
} }


@@ -37,15 +37,21 @@ class OpendkimDeployer(Deployer):
) )
need_restart |= main_config.changed need_restart |= main_config.changed
screen_script = files.file( screen_script = files.put(
path="/etc/opendkim/screen.lua", src=get_resource("opendkim/screen.lua"),
present=False, dest="/etc/opendkim/screen.lua",
user="root",
group="root",
mode="644",
) )
need_restart |= screen_script.changed need_restart |= screen_script.changed
final_script = files.file( final_script = files.put(
path="/etc/opendkim/final.lua", src=get_resource("opendkim/final.lua"),
present=False, dest="/etc/opendkim/final.lua",
user="root",
group="root",
mode="644",
) )
need_restart |= final_script.changed need_restart |= final_script.changed
@@ -103,13 +109,6 @@ class OpendkimDeployer(Deployer):
) )
need_restart |= service_file.changed need_restart |= service_file.changed
files.file(
name="chown opendkim: /etc/dkimkeys/opendkim.private",
path="/etc/dkimkeys/opendkim.private",
user="opendkim",
group="opendkim",
)
self.need_restart = need_restart self.need_restart = need_restart
def activate(self): def activate(self):


@@ -0,0 +1,42 @@
mtaname = odkim.get_mtasymbol(ctx, "{daemon_name}")
if mtaname == "ORIGINATING" then
    -- Outgoing message will be signed,
    -- no need to look for signatures.
    return nil
end

nsigs = odkim.get_sigcount(ctx)
if nsigs == nil then
    return nil
end

local valid = false
local error_msg = "No valid DKIM signature found."
for i = 1, nsigs do
    sig = odkim.get_sighandle(ctx, i - 1)
    sigres = odkim.sig_result(sig)
    -- All signatures that do not correspond to From:
    -- were ignored in screen.lua and return sigres -1.
    --
    -- Any valid signature that was not ignored like this
    -- means the message is acceptable.
    if sigres == 0 then
        valid = true
    else
        error_msg = "DKIM signature is invalid, error code " .. tostring(sigres) .. ", search https://github.com/trusteddomainproject/OpenDKIM/blob/master/libopendkim/dkim.h#L108"
    end
end

if valid then
    -- Strip all DKIM-Signature headers after successful validation.
    -- Delete in reverse order to avoid index shifting.
    for i = nsigs, 1, -1 do
        odkim.del_header(ctx, "DKIM-Signature", i)
    end
else
    odkim.set_reply(ctx, "554", "5.7.1", error_msg)
    odkim.set_result(ctx, SMFIS_REJECT)
end

return nil


@@ -45,6 +45,12 @@ SignHeaders *,+autocrypt,+content-type
# Default is empty. # Default is empty.
OversignHeaders from,reply-to,subject,date,to,cc,resent-date,resent-from,resent-sender,resent-to,resent-cc,in-reply-to,references,list-id,list-help,list-unsubscribe,list-subscribe,list-post,list-owner,list-archive,autocrypt OversignHeaders from,reply-to,subject,date,to,cc,resent-date,resent-from,resent-sender,resent-to,resent-cc,in-reply-to,references,list-id,list-help,list-unsubscribe,list-subscribe,list-post,list-owner,list-archive,autocrypt
# Script to ignore signatures that do not correspond to the From: domain.
ScreenPolicyScript /etc/opendkim/screen.lua
# Script to reject mails without a valid DKIM signature.
FinalPolicyScript /etc/opendkim/final.lua
# In Debian, opendkim runs as user "opendkim". A umask of 007 is required when # In Debian, opendkim runs as user "opendkim". A umask of 007 is required when
# using a local socket with MTAs that access the socket as a non-privileged # using a local socket with MTAs that access the socket as a non-privileged
# user (for example, Postfix). You may need to add user "postfix" to group # user (for example, Postfix). You may need to add user "postfix" to group


@@ -0,0 +1,21 @@
-- Ignore signatures that do not correspond to the From: domain.
from_domain = odkim.get_fromdomain(ctx)
if from_domain == nil then
    return nil
end

n = odkim.get_sigcount(ctx)
if n == nil then
    return nil
end

for i = 1, n do
    sig = odkim.get_sighandle(ctx, i - 1)
    sig_domain = odkim.sig_getdomain(sig)
    if from_domain ~= sig_domain then
        odkim.sig_ignore(sig)
    end
end

return nil
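Taken together, screen.lua and final.lua implement: ignore signatures whose domain differs from the From: domain, then accept the message if any remaining signature verifies. A rough Python restatement of that policy as a pure function (illustrative; the real hooks run inside OpenDKIM's Lua engine, and the names here are mine):

```python
def dkim_policy(from_domain, signatures):
    """Decide (accept, error_message) the way screen.lua + final.lua do.

    `signatures` is a list of (domain, result) pairs, where result 0
    means the signature verified. Signatures whose domain differs from
    the From: domain are ignored; the mail is accepted if any of the
    remaining signatures is valid."""
    relevant = [res for dom, res in signatures if dom == from_domain]
    if not relevant:
        # no signature aligned with the From: domain at all
        return (False, "No valid DKIM signature found.")
    if any(res == 0 for res in relevant):
        return (True, "")
    return (False, "DKIM signature is invalid.")
```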


@@ -86,6 +86,7 @@ filter unix - n n - - lmtp
# Local SMTP server for reinjecting incoming filtered mail # Local SMTP server for reinjecting incoming filtered mail
127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 100 smtpd 127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 100 smtpd
-o syslog_name=postfix/reinject_incoming -o syslog_name=postfix/reinject_incoming
-o smtpd_milters=unix:opendkim/opendkim.sock
# Cleanup `Received` headers for authenticated mail # Cleanup `Received` headers for authenticated mail
# to avoid leaking client IP. # to avoid leaking client IP.


@@ -5,5 +5,5 @@ After=network.target
[Service] [Service]
Type=oneshot Type=oneshot
User=vmail User=vmail
ExecStart=/usr/local/lib/chatmaild/venv/bin/chatmail-fsreport /usr/local/lib/chatmaild/chatmail.ini ExecStart=/usr/local/lib/chatmaild/venv/bin/chatmail-fsreport /usr/local/lib/chatmaild/chatmail.ini


@@ -4,7 +4,7 @@ Description=Chatmail dict proxy for IMAP METADATA
[Service] [Service]
ExecStart={execpath} /run/chatmail-metadata/metadata.socket {config_path} ExecStart={execpath} /run/chatmail-metadata/metadata.socket {config_path}
Restart=always Restart=always
RestartSec=5 RestartSec=30
User=vmail User=vmail
RuntimeDirectory=chatmail-metadata RuntimeDirectory=chatmail-metadata
UMask=0077 UMask=0077


@@ -50,9 +50,6 @@ class SSHExec:
FuncError = FuncError FuncError = FuncError
def __init__(self, host, verbose=False, python="python3", timeout=60): def __init__(self, host, verbose=False, python="python3", timeout=60):
docker_container = os.environ.get("CHATMAIL_DOCKER")
if docker_container:
python = f"docker exec -i {docker_container} python3"
self.gateway = execnet.makegateway(f"ssh=root@{host}//python={python}") self.gateway = execnet.makegateway(f"ssh=root@{host}//python={python}")
self._remote_cmdloop_channel = bootstrap_remote(self.gateway, remote) self._remote_cmdloop_channel = bootstrap_remote(self.gateway, remote)
self.timeout = timeout self.timeout = timeout
@@ -88,10 +85,9 @@ class SSHExec:
class LocalExec: class LocalExec:
FuncError = FuncError def __init__(self, verbose=False, docker=False):
def __init__(self, verbose=False):
self.verbose = verbose self.verbose = verbose
self.docker = docker
def __call__(self, call, kwargs=None, log_callback=None): def __call__(self, call, kwargs=None, log_callback=None):
if kwargs is None: if kwargs is None:
@@ -99,15 +95,11 @@ class LocalExec:
return call(**kwargs) return call(**kwargs)
def logged(self, call, kwargs: dict): def logged(self, call, kwargs: dict):
title = call.__doc__
if not title:
title = call.__name__
where = "locally" where = "locally"
if self.docker:
if call == remote.rdns.perform_initial_checks:
kwargs["pre_command"] = "docker exec chatmail "
where = "in docker"
if self.verbose: if self.verbose:
print_stderr(f"Running {where}: {title}(**{kwargs})") print(f"Running {where}: {call.__name__}(**{kwargs})")
return self(call, kwargs, log_callback=print_stderr) return call(**kwargs)
else:
print_stderr(title, end="")
res = self(call, kwargs, log_callback=remote.rshell.log_progress)
print_stderr()
return res


@@ -1,4 +1,3 @@
import time
def test_tls_imap(benchmark, imap): def test_tls_imap(benchmark, imap):
def imap_connect(): def imap_connect():
imap.connect() imap.connect()
@@ -42,9 +41,9 @@ class TestDC:
def dc_ping_pong(): def dc_ping_pong():
chat.send_text("ping") chat.send_text("ping")
msg = ac2.wait_for_incoming_msg() msg = ac2._evtracker.wait_next_incoming_message()
msg.get_snapshot().chat.send_text("pong") msg.chat.send_text("pong")
ac1.wait_for_incoming_msg() ac1._evtracker.wait_next_incoming_message()
benchmark(dc_ping_pong, 5) benchmark(dc_ping_pong, 5)
@@ -56,6 +55,6 @@ class TestDC:
for i in range(10): for i in range(10):
chat.send_text(f"hello {i}") chat.send_text(f"hello {i}")
for i in range(10): for i in range(10):
ac2.wait_for_incoming_msg() ac2._evtracker.wait_next_incoming_message()
benchmark(dc_send_10_receive_10, 5, cooldown="auto") benchmark(dc_send_10_receive_10, 5)


@@ -7,13 +7,13 @@ import time
import pytest import pytest
from cmdeploy import remote from cmdeploy import remote
from cmdeploy.cmdeploy import get_sshexec from cmdeploy.sshexec import SSHExec
class TestSSHExecutor: class TestSSHExecutor:
@pytest.fixture(scope="class") @pytest.fixture(scope="class")
def sshexec(self, sshdomain): def sshexec(self, sshdomain):
return get_sshexec(sshdomain) return SSHExec(sshdomain)
def test_ls(self, sshexec): def test_ls(self, sshexec):
out = sshexec(call=remote.rdns.shell, kwargs=dict(command="ls")) out = sshexec(call=remote.rdns.shell, kwargs=dict(command="ls"))
@@ -27,7 +27,6 @@ class TestSSHExecutor:
assert res["A"] or res["AAAA"] assert res["A"] or res["AAAA"]
def test_logged(self, sshexec, maildomain, capsys): def test_logged(self, sshexec, maildomain, capsys):
sshexec.verbose = False
sshexec.logged( sshexec.logged(
remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=maildomain) remote.rdns.perform_initial_checks, kwargs=dict(mail_domain=maildomain)
) )
@@ -53,8 +52,6 @@ class TestSSHExecutor:
remote.rdns.perform_initial_checks, remote.rdns.perform_initial_checks,
kwargs=dict(mail_domain=None), kwargs=dict(mail_domain=None),
) )
except AssertionError:
pass
except sshexec.FuncError as e: except sshexec.FuncError as e:
assert "rdns.py" in str(e) assert "rdns.py" in str(e)
assert "AssertionError" in str(e) assert "AssertionError" in str(e)
@@ -86,8 +83,10 @@ def test_remote(remote, imap_or_smtp):
def test_use_two_chatmailservers(cmfactory, maildomain2): def test_use_two_chatmailservers(cmfactory, maildomain2):
ac1 = cmfactory.get_online_account() ac1 = cmfactory.new_online_configuring_account(cache=False)
ac2 = cmfactory.get_online_account(domain=maildomain2) cmfactory.switch_maildomain(maildomain2)
ac2 = cmfactory.new_online_configuring_account(cache=False)
cmfactory.bring_accounts_online()
cmfactory.get_accepted_chat(ac1, ac2) cmfactory.get_accepted_chat(ac1, ac2)
domain1 = ac1.get_config("addr").split("@")[1] domain1 = ac1.get_config("addr").split("@")[1]
domain2 = ac2.get_config("addr").split("@")[1] domain2 = ac2.get_config("addr").split("@")[1]
@@ -147,7 +146,7 @@ def test_reject_missing_dkim(cmsetup, maildata, from_addr):
conn.starttls() conn.starttls()
with conn as s: with conn as s:
with pytest.raises(smtplib.SMTPDataError, match="No DKIM signature found"): with pytest.raises(smtplib.SMTPDataError, match="No valid DKIM signature"):
s.sendmail(from_addr=from_addr, to_addrs=recipient.addr, msg=msg) s.sendmail(from_addr=from_addr, to_addrs=recipient.addr, msg=msg)
@@ -219,7 +218,7 @@ def test_expunged(remote, chatmail_config):
] ]
outdated_days = int(chatmail_config.delete_large_after) + 1 outdated_days = int(chatmail_config.delete_large_after) + 1
find_cmds.append( find_cmds.append(
f"find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f" "find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
) )
for cmd in find_cmds: for cmd in find_cmds:
for line in remote.iter_output(cmd): for line in remote.iter_output(cmd):


@@ -7,7 +7,7 @@ import pytest
import requests import requests
from cmdeploy.remote import rshell from cmdeploy.remote import rshell
from cmdeploy.cmdeploy import get_sshexec from cmdeploy.sshexec import SSHExec
 @pytest.fixture
@@ -27,7 +26,6 @@ class TestMetadataTokens:
     def test_set_get_metadata(self, imap_mailbox):
         "set and get metadata token for an account"
-        time.sleep(5)  # make sure Metadata service had a chance to restart
         client = imap_mailbox.client
         client.send(b'a01 SETMETADATA INBOX (/private/devicetoken "1111" )\n')
         res = client.readline()
@@ -63,8 +62,8 @@ class TestEndToEndDeltaChat:
         chat.send_text("message0")
         lp.sec("wait for ac2 to receive message")
-        msg2 = ac2.wait_for_incoming_msg()
-        assert msg2.get_snapshot().text == "message0"
+        msg2 = ac2._evtracker.wait_next_incoming_message()
+        assert msg2.text == "message0"

     def test_exceed_quota(
         self, cmfactory, lp, tmpdir, remote, chatmail_config, sshdomain
@@ -92,41 +91,45 @@ class TestEndToEndDeltaChat:
         lp.sec(f"filling remote inbox for {user}")
         fn = f"7743102289.M843172P2484002.c20,S={quota},W=2398:2,"
         path = chatmail_config.mailboxes_dir.joinpath(user, "cur", fn)
-        sshexec = get_sshexec(sshdomain)
+        sshexec = SSHExec(sshdomain)
         sshexec(call=rshell.write_numbytes, kwargs=dict(path=str(path), num=120))
         res = sshexec(call=rshell.dovecot_recalc_quota, kwargs=dict(user=user))
         assert res["percent"] >= 100
         lp.sec("ac2: check quota is triggered")
-        def send_hello():
-            chat.send_text("hello")
-
-        for line in remote.iter_output(
-            "journalctl -n1 -f -u dovecot", ready=send_hello
-        ):
+        starting = True
+        for line in remote.iter_output("journalctl -n0 -f -u dovecot"):
+            if starting:
+                chat.send_text("hello")
+                starting = False
             if user not in line:
+                # print(line)
                 continue
             if "quota exceeded" in line:
                 return
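The `ready` callback used by the quota test above exists to avoid a race: the message that produces the "quota exceeded" log line must be sent only after the `journalctl -f` follower is attached. A minimal sketch of that pattern (plain generator, a simplified stand-in for `Remote.iter_output`, which actually reads from a subprocess):

```python
def iter_output(lines, ready=None):
    """Yield log lines; invoke `ready()` exactly once, before the first yield.

    Simplified stand-in for Remote.iter_output: the callback lets the caller
    trigger the action that produces log output only after the follower is
    attached, so no line can be missed.
    """
    for line in lines:
        if ready is not None:
            ready()
            ready = None  # never call it again
        yield line


events = []
output = list(iter_output(["quota exceeded"], ready=lambda: events.append("sent hello")))
print(events, output)  # -> ['sent hello'] ['quota exceeded']
```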
     def test_securejoin(self, cmfactory, lp, maildomain2):
-        ac1 = cmfactory.get_online_account()
-        ac2 = cmfactory.get_online_account(domain=maildomain2)
+        ac1 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.switch_maildomain(maildomain2)
+        ac2 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.bring_accounts_online()
         lp.sec("ac1: create QR code and let ac2 scan it, starting the securejoin")
-        qr = ac1.get_qr_code()
+        qr = ac1.get_setup_contact_qr()
         lp.sec("ac2: start QR-code based setup contact protocol")
-        ch = ac2.secure_join(qr)
+        ch = ac2.qr_setup_contact(qr)
         assert ch.id >= 10
-        ac1.wait_for_securejoin_inviter_success()
+        ac1._evtracker.wait_securejoin_inviter_progress(1000)
     def test_dkim_header_stripped(self, cmfactory, maildomain2, lp, imap_mailbox):
         """Test that if a DC address receives a message, it has no
         DKIM-Signature and Authentication-Results headers."""
-        ac1 = cmfactory.get_online_account()
-        ac2 = cmfactory.get_online_account(domain=maildomain2)
+        ac1 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.switch_maildomain(maildomain2)
+        ac2 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.bring_accounts_online()
         chat = cmfactory.get_accepted_chat(ac1, imap_mailbox.dc_ac)
         chat.send_text("message0")
         chat2 = cmfactory.get_accepted_chat(ac2, imap_mailbox.dc_ac)
@@ -143,28 +146,29 @@ class TestEndToEndDeltaChat:
         assert "dkim-signature" not in msg.headers

     def test_read_receipts_between_instances(self, cmfactory, lp, maildomain2):
-        ac1 = cmfactory.get_online_account()
-        ac2 = cmfactory.get_online_account(domain=maildomain2)
+        ac1 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.switch_maildomain(maildomain2)
+        ac2 = cmfactory.new_online_configuring_account(cache=False)
+        cmfactory.bring_accounts_online()
         lp.sec("setup encrypted comms between ac1 and ac2 on different instances")
-        qr = ac1.get_qr_code()
-        ch = ac2.secure_join(qr)
+        qr = ac1.get_setup_contact_qr()
+        ch = ac2.qr_setup_contact(qr)
         assert ch.id >= 10
-        ac1.wait_for_securejoin_inviter_success()
+        ac1._evtracker.wait_securejoin_inviter_progress(1000)
         lp.sec("ac1 sends a message and ac2 marks it as seen")
         chat = ac1.create_chat(ac2)
         msg = chat.send_text("hi")
-        m = ac2.wait_for_incoming_msg()
+        m = ac2._evtracker.wait_next_incoming_message()
         m.mark_seen()
         # we can only indirectly wait for mark-seen to cause an smtp-error
         lp.sec("try to wait for markseen to complete and check error states")
         deadline = time.time() + 3.1
         while time.time() < deadline:
-            m_snap = m.get_snapshot()
-            msgs = m_snap.chat.get_messages()
+            msgs = m.chat.get_messages()
             for msg in msgs:
-                assert "error" not in m.get_info()
+                assert "error" not in m.get_message_info()
             time.sleep(1)
@@ -176,7 +180,7 @@ def test_hide_senders_ip_address(cmfactory, ssl_context):
     chat = cmfactory.get_accepted_chat(user1, user2)
     chat.send_text("testing submission header cleanup")
-    user2.wait_for_incoming_msg()
+    user2._evtracker.wait_next_incoming_message()
     addr = user2.get_config("addr")
     host = addr.split("@")[1]
     pw = user2.get_config("mail_pw")


@@ -5,11 +5,7 @@ from cmdeploy.cmdeploy import main

 def test_status_cmd(chatmail_config, capsys, request):
     os.chdir(request.config.invocation_params.dir)
-    command = ["status"]
-    if os.getenv("CHATMAIL_SSH"):
-        command.append("--ssh-host")
-        command.append(os.getenv("CHATMAIL_SSH"))
-    assert main(command) == 0
+    assert main(["status"]) == 0
     status_out = capsys.readouterr()
     print(status_out.out)


@@ -1,4 +1,5 @@
 import imaplib
+import io
 import itertools
 import os
 import random
@@ -34,24 +35,17 @@ def pytest_runtest_setup(item):
         pytest.skip("skipping slow test, use --slow to run")


-def _get_chatmail_config():
-    current = Path().resolve()
+@pytest.fixture(scope="session")
+def chatmail_config(pytestconfig):
+    current = basedir = Path().resolve()
     while 1:
         path = current.joinpath("chatmail.ini").resolve()
         if path.exists():
-            return read_config(path), path
+            return read_config(path)
         if current == current.parent:
             break
         current = current.parent
-    return None, None
-
-
-@pytest.fixture(scope="session")
-def chatmail_config(pytestconfig):
-    config, path = _get_chatmail_config()
-    if config:
-        return config
-    basedir = Path().resolve()
     pytest.skip(f"no chatmail.ini file found in {basedir} or parent dirs")
@@ -79,17 +73,10 @@ def sshdomain2(maildomain2):

 def pytest_report_header():
-    config, path = _get_chatmail_config()
-    domain2 = os.environ.get("CHATMAIL_DOMAIN2", "NOT SET")
-    domain = config.mail_domain if config else "NOT SET"
-    path = path if path else "NOT SET"
-    lines = [
-        f"chatmail.ini {domain} location: {path}",
-        f"chatmail2: {domain2}",
-    ]
-    sep = "-" * max(map(len, lines))
-    return [sep, *lines, sep]
+    domain = os.environ.get("CHATMAIL_DOMAIN")
+    if domain:
+        text = f"chatmail test instance: {domain}"
+        return ["-" * len(text), text, "-" * len(text)]

 @pytest.fixture
@@ -104,22 +91,15 @@ def cm_data(request):

 @pytest.fixture
-def benchmark(request, chatmail_config):
-    def bench(func, num, name=None, reportfunc=None, cooldown=0.0):
+def benchmark(request):
+    def bench(func, num, name=None, reportfunc=None):
         if name is None:
             name = func.__name__
-        if cooldown == "auto":
-            per_minute = max(chatmail_config.max_user_send_per_minute, 1)
-            cooldown = chatmail_config.max_user_send_burst_size * 60 / per_minute
         durations = []
         for i in range(num):
             now = time.time()
             func()
             durations.append(time.time() - now)
-            if cooldown > 0 and i + 1 < num:
-                # Keep post-run cooldown out of measured benchmark duration.
-                time.sleep(cooldown)
         durations.sort()
         request.config._benchresults[name] = (reportfunc, durations)
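The `cooldown="auto"` branch of the benchmark fixture derives the pause between runs from the server's rate limit: the burst allowance spread over the per-minute send budget. A sketch of that arithmetic with hypothetical config values (stand-ins for `chatmail_config` attributes):

```python
# Hypothetical rate-limit settings, NOT real chatmail defaults:
max_user_send_per_minute = 80
max_user_send_burst_size = 20

# Guard against a zero/unset limit before dividing.
per_minute = max(max_user_send_per_minute, 1)
cooldown = max_user_send_burst_size * 60 / per_minute
print(cooldown)  # 15.0 -> sleep 15s between runs, kept out of the measured durations
```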
@@ -296,95 +276,79 @@ def gencreds(chatmail_config):

 #
-# Delta Chat RPC-based test support
+# Delta Chat testplugin re-use
 # use the cmfactory fixture to get chatmail instance accounts
 #

-from deltachat_rpc_client import DeltaChat, Rpc
-
-
-class ChatmailACFactory:
-    """RPC-based account factory for chatmail testing."""
-
-    def __init__(self, rpc, maildomain, gencreds, chatmail_config):
-        self.dc = DeltaChat(rpc)
-        self.rpc = rpc
-        self._maildomain = maildomain
+class ChatmailTestProcess:
+    """Provider for chatmail instance accounts as used by deltachat.testplugin.acfactory"""
+
+    def __init__(self, pytestconfig, maildomain, gencreds, chatmail_config):
+        self.pytestconfig = pytestconfig
+        self.maildomain = maildomain
+        assert "." in self.maildomain, maildomain
         self.gencreds = gencreds
         self.chatmail_config = chatmail_config
+        self._addr2files = {}

-    def _make_transport(self, domain):
-        """Build a transport config dict for the given domain."""
-        addr, password = self.gencreds(domain)
-        transport = {
-            "addr": addr,
-            "password": password,
-            # Setting server explicitly skips requesting autoconfig XML,
-            # see https://datatracker.ietf.org/doc/draft-ietf-mailmaint-autoconfig/
-            "imapServer": domain,
-            "smtpServer": domain,
-        }
-        if self.chatmail_config.tls_cert_mode == "self":
-            transport["certificateChecks"] = "acceptInvalidCertificates"
-        return transport
-
-    def get_online_account(self, domain=None):
-        """Create, configure and bring online a single account."""
-        return self.get_online_accounts(1, domain)[0]
-
-    def get_online_accounts(self, num, domain=None):
-        """Create multiple online accounts in parallel."""
-        domain = domain or self._maildomain
-        futures = []
-        accounts = []
-        for _ in range(num):
-            account = self.dc.add_account()
-            future = account.add_or_update_transport.future(
-                self._make_transport(domain)
-            )
-            futures.append(future)
-            # ensure messages stay in INBOX so that they can be
-            # concurrently fetched via extra IMAP connections during tests
-            account.set_config("delete_server_after", "10")
-            accounts.append(account)
-        for future in futures:
-            future()
-        for account in accounts:
-            account.bring_online()
-        return accounts
-
-    def get_accepted_chat(self, ac1, ac2):
-        """Create a 1:1 chat between ac1 and ac2 accepted on both sides."""
-        ac2.create_chat(ac1)
-        return ac1.create_chat(ac2)
-
-
-@pytest.fixture(scope="session")
-def rpc(tmp_path_factory):
-    """Start a deltachat-rpc-server process for the test session."""
-    # NB: accounts_dir must NOT already exist as directory --
-    # core-rust only creates accounts.toml if the dir doesn't exist yet.
-    accounts_dir = str(tmp_path_factory.mktemp("dc") / "accounts")
-    rpc = Rpc(accounts_dir=accounts_dir)
-    rpc.start()
-    yield rpc
-    rpc.close()
+    def get_liveconfig_producer(self):
+        while 1:
+            user, password = self.gencreds(self.maildomain)
+            config = {
+                "addr": user,
+                "mail_pw": password,
+            }
+            # speed up account configuration
+            config["mail_server"] = self.maildomain
+            config["send_server"] = self.maildomain
+            if self.chatmail_config.tls_cert_mode == "self":
+                # Accept self-signed TLS certificates
+                config["imap_certificate_checks"] = "3"
+            yield config
+
+    def cache_maybe_retrieve_configured_db_files(self, cache_addr, db_target_path):
+        pass
+
+    def cache_maybe_store_configured_db_files(self, acc):
+        pass

 @pytest.fixture
-def cmfactory(rpc, gencreds, maildomain, chatmail_config):
-    """Return a ChatmailACFactory for creating online Delta Chat accounts."""
-    return ChatmailACFactory(
-        rpc=rpc,
-        maildomain=maildomain,
-        gencreds=gencreds,
-        chatmail_config=chatmail_config,
-    )
+def cmfactory(request, gencreds, tmpdir, maildomain, chatmail_config):
+    # cloned from deltachat.testplugin.amfactory
+    pytest.importorskip("deltachat")
+    from deltachat.testplugin import ACFactory
+
+    testproc = ChatmailTestProcess(
+        request.config, maildomain, gencreds, chatmail_config
+    )
+
+    class Data:
+        def read_path(self, path):
+            return
+
+    am = ACFactory(request=request, tmpdir=tmpdir, testprocess=testproc, data=Data())
+
+    # Skip upstream's init_imap to prevent extra imap connections not
+    # needed for relay testing
+    am._acsetup.init_imap = lambda acc: None
+
+    # nb. a bit hacky
+    # would probably be better if deltachat's test machinery grows native support
+    def switch_maildomain(maildomain2):
+        am.testprocess.maildomain = maildomain2
+
+    am.switch_maildomain = switch_maildomain
+
+    yield am
+
+    if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
+        if testproc.pytestconfig.getoption("--extra-info"):
+            logfile = io.StringIO()
+            am.dump_imap_summary(logfile=logfile)
+            print(logfile.getvalue())
+            # request.node.add_report_section("call", "imap-server-state", s)
 @pytest.fixture
 def remote(sshdomain):

@@ -395,30 +359,19 @@ class Remote:
     def __init__(self, sshdomain):
         self.sshdomain = sshdomain

-    def iter_output(self, logcmd="", ready=None):
+    def iter_output(self, logcmd=""):
         getjournal = "journalctl -f" if not logcmd else logcmd
-        print(self.sshdomain)
-        match self.sshdomain:
-            case "@local": command = []
-            case "localhost": command = []
-            case _: command = ["ssh", f"root@{self.sshdomain}"]
-        docker_container = os.environ.get("CHATMAIL_DOCKER")
-        if docker_container:
-            command += ["docker", "exec", docker_container]
-        [command.append(arg) for arg in getjournal.split()]
         self.popen = subprocess.Popen(
-            command,
+            ["ssh", f"root@{self.sshdomain}", getjournal],
             stdout=subprocess.PIPE,
         )
         while 1:
             line = self.popen.stdout.readline()
             res = line.decode().strip().lower()
-            if not res:
-                break
-            if ready is not None:
-                ready()
-                ready = None
-            yield res
+            if res:
+                yield res
+            else:
+                break

 @pytest.fixture


@@ -0,0 +1,362 @@
"""Setup and verify external TLS certificates for a chatmail server.
Generates a self-signed TLS certificate, uploads it to the chatmail
server via SCP, runs ``cmdeploy run``, and then probes all TLS-enabled
ports (nginx, postfix, dovecot) to verify the certificate is actually
served. After probing, checks remote service logs for errors.
Prerequisites
~~~~~~~~~~~~~
- SSH root access to the target server (same as ``cmdeploy run``)
- ``cmdeploy`` in PATH (activate the venv first)
How to run
~~~~~~~~~~
From the repository root::
# Full run: generate cert, deploy, probe ports, check services
python -m cmdeploy.tests.setup_tls_external DOMAIN
# Re-probe only (after a previous deploy)
python -m cmdeploy.tests.setup_tls_external DOMAIN \\
--skip-deploy --skip-certgen
# Override SSH host (e.g. when domain doesn't resolve to the server)
python -m cmdeploy.tests.setup_tls_external DOMAIN \\
--ssh-host staging-ipv4.testrun.org
Arguments
~~~~~~~~~
DOMAIN mail domain for the chatmail server (SSH root login must work)
Options
~~~~~~~
--skip-deploy skip ``cmdeploy run``, only probe ports
--skip-certgen skip cert generation/upload, use certs already on server
--ssh-host HOST SSH host override (defaults to DOMAIN)
"""
import argparse
import shutil
import smtplib
import socket
import ssl
import subprocess
import sys
import tempfile
import time
from pathlib import Path
# Cert paths on the remote server
REMOTE_CERT = "/etc/ssl/certs/tmp_fullchain.pem"
REMOTE_KEY = "/etc/ssl/private/tmp_privkey.pem"
# ---------------------------------------------------------------------------
# Config generation
# ---------------------------------------------------------------------------
def generate_config(domain: str, config_dir: Path) -> Path:
"""Generate a chatmail.ini with tls_external_cert_and_key for *domain*."""
from chatmaild.config import write_initial_config
ini_path = config_dir / "chatmail.ini"
write_initial_config(
ini_path,
domain,
overrides={
"tls_external_cert_and_key": f"{REMOTE_CERT} {REMOTE_KEY}",
},
)
print(f"[+] Generated chatmail.ini for {domain} in {config_dir}")
return ini_path
# ---------------------------------------------------------------------------
# Certificate generation
# ---------------------------------------------------------------------------
def generate_cert(domain: str, cert_dir: Path) -> tuple:
"""Generate a self-signed TLS cert+key for *domain* with proper SANs."""
from cmdeploy.selfsigned.deployer import openssl_selfsigned_args
cert_path = cert_dir / "fullchain.pem"
key_path = cert_dir / "privkey.pem"
subprocess.check_call(openssl_selfsigned_args(domain, cert_path, key_path, days=30))
print(f"[+] Generated cert for {domain} in {cert_dir}")
return cert_path, key_path
# ---------------------------------------------------------------------------
# Upload certs to remote server
# ---------------------------------------------------------------------------
def upload_certs(
ssh_host: str,
cert_path: Path,
key_path: Path,
) -> None:
"""SCP cert and key to the remote server."""
subprocess.check_call([
"scp", str(cert_path), f"root@{ssh_host}:{REMOTE_CERT}",
])
subprocess.check_call([
"scp", str(key_path), f"root@{ssh_host}:{REMOTE_KEY}",
])
# Ensure cert is world-readable and key is readable by ssl-cert group
# (dovecot/postfix/nginx need to read these files)
subprocess.check_call([
"ssh", f"root@{ssh_host}",
f"chmod 644 {REMOTE_CERT} && chmod 640 {REMOTE_KEY}"
f" && chgrp ssl-cert {REMOTE_KEY}",
])
print(f"[+] Uploaded cert/key to {ssh_host}")
# ---------------------------------------------------------------------------
# Deploy
# ---------------------------------------------------------------------------
def run_deploy(ini_path: str) -> None:
"""Run ``cmdeploy run --skip-dns-check --config <ini>``."""
cmd = ["cmdeploy", "run", "--config", str(ini_path), "--skip-dns-check"]
print(f"[+] Running: {' '.join(cmd)}")
subprocess.check_call(cmd)
print("[+] Deploy completed successfully")
# ---------------------------------------------------------------------------
# TLS port probing
# ---------------------------------------------------------------------------
def get_peer_cert_binary(host: str, port: int) -> bytes:
"""Connect to host:port with TLS and return the DER-encoded peer cert."""
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with socket.create_connection((host, port), timeout=15) as sock:
with ctx.wrap_socket(sock, server_hostname=host) as ssock:
return ssock.getpeercert(binary_form=True)
def get_smtp_starttls_cert_binary(host: str, port: int = 587) -> bytes:
"""Connect via SMTP STARTTLS and return the DER cert."""
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with smtplib.SMTP(host, port, timeout=15) as smtp:
smtp.starttls(context=ctx)
return smtp.sock.getpeercert(binary_form=True)
def check_cert_matches(
label: str, served_der: bytes, expected_der: bytes,
) -> bool:
"""Compare served DER cert against the expected cert."""
if served_der == expected_der:
print(f" [OK] {label}: certificate matches")
return True
else:
print(f" [FAIL] {label}: certificate does NOT match")
return False
def load_cert_der(cert_pem_path: Path) -> bytes:
"""Load a PEM cert file and return its DER encoding."""
pem_text = cert_pem_path.read_text()
start = pem_text.index("-----BEGIN CERTIFICATE-----")
end = pem_text.index("-----END CERTIFICATE-----") + len(
"-----END CERTIFICATE-----"
)
return ssl.PEM_cert_to_DER_cert(pem_text[start:end])
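`load_cert_der` slices out the first certificate block before converting because `ssl.PEM_cert_to_DER_cert` requires the string to start with the BEGIN marker and end with the END marker, while a `fullchain.pem` typically carries intermediates after the leaf. The behavior can be exercised with a dummy base64 payload (NOT a real certificate):

```python
import ssl

# Dummy PEM wrapper around the base64 payload "AAAA" (decodes to three
# zero bytes) with junk before/after, mimicking a multi-block PEM file.
pem_text = (
    "junk such as a private key block\n"
    "-----BEGIN CERTIFICATE-----\n"
    "AAAA\n"
    "-----END CERTIFICATE-----\n"
    "trailing text, e.g. an intermediate cert\n"
)

# Passing the whole blob fails: the string must start with the BEGIN marker.
try:
    ssl.PEM_cert_to_DER_cert(pem_text)
except ValueError:
    pass  # expected

# Slicing out exactly the first BEGIN..END block works.
start = pem_text.index("-----BEGIN CERTIFICATE-----")
end = pem_text.index("-----END CERTIFICATE-----") + len("-----END CERTIFICATE-----")
der = ssl.PEM_cert_to_DER_cert(pem_text[start:end])
print(der)  # b'\x00\x00\x00'
```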
def probe_all_ports(host: str, expected_cert_der: bytes) -> bool:
"""Probe TLS ports and verify the served certificate matches.
Checks ports 993 (IMAP), 465 (SMTPS), 587 (STARTTLS), and 443
(nginx stream). Port 8443 is skipped as nginx binds it to
localhost behind the stream proxy on 443.
"""
print(f"\n[+] Probing TLS ports on {host}...")
all_ok = True
for label, port in [
("IMAP/TLS (993)", 993),
("SMTP/TLS (465)", 465),
]:
try:
served = get_peer_cert_binary(host, port)
if not check_cert_matches(label, served, expected_cert_der):
all_ok = False
except Exception as e:
print(f" [FAIL] {label}: connection failed: {e}")
all_ok = False
# STARTTLS on port 587
try:
served = get_smtp_starttls_cert_binary(host, 587)
if not check_cert_matches("SMTP/STARTTLS (587)", served, expected_cert_der):
all_ok = False
except Exception as e:
print(f" [FAIL] SMTP/STARTTLS (587): connection failed: {e}")
all_ok = False
# Port 443 (nginx stream proxy with ALPN routing)
try:
served = get_peer_cert_binary(host, 443)
if not check_cert_matches("nginx/443 (stream)", served, expected_cert_der):
all_ok = False
except Exception as e:
print(f" [FAIL] nginx/443 (stream): connection failed: {e}")
all_ok = False
return all_ok
# ---------------------------------------------------------------------------
# Post-deploy service health checks
# ---------------------------------------------------------------------------
SERVICES = ["dovecot", "postfix", "nginx"]
def check_remote_services(ssh_host: str, since: str = "") -> bool:
"""SSH to the server and check for service failures or errors.
*since* is a ``journalctl --since`` timestamp (e.g. ``"5 min ago"``).
If empty, checks the entire boot journal.
"""
print(f"\n[+] Checking remote service health on {ssh_host}...")
all_ok = True
for svc in SERVICES:
try:
result = subprocess.run(
["ssh", f"root@{ssh_host}",
f"systemctl is-active {svc}.service"],
capture_output=True, text=True, timeout=15, check=False,
)
status = result.stdout.strip()
if status == "active":
print(f" [OK] {svc}: active")
else:
print(f" [FAIL] {svc}: {status}")
all_ok = False
except Exception as e:
print(f" [FAIL] {svc}: check failed: {e}")
all_ok = False
since_arg = f'--since="{since}"' if since else ""
print(f"\n[+] Checking journal for errors on {ssh_host}...")
for svc in SERVICES:
try:
result = subprocess.run(
["ssh", f"root@{ssh_host}",
f"journalctl -u {svc}.service {since_arg}"
f" --no-pager -p err -q"],
capture_output=True, text=True, timeout=15, check=False,
)
errors = result.stdout.strip()
if errors:
print(f" [WARN] {svc} errors in journal:")
for line in errors.splitlines()[:10]:
print(f" {line}")
all_ok = False
else:
print(f" [OK] {svc}: no errors in journal")
except Exception as e:
print(f" [FAIL] {svc}: journal check failed: {e}")
all_ok = False
return all_ok
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"domain",
help="mail domain (SSH root login must work to this host)",
)
parser.add_argument(
"--skip-deploy",
action="store_true",
help="skip cmdeploy run, only probe ports",
)
parser.add_argument(
"--skip-certgen",
action="store_true",
help="skip cert generation and upload (use existing)",
)
parser.add_argument(
"--ssh-host",
help="SSH host override (defaults to DOMAIN)",
)
args = parser.parse_args()
domain = args.domain
ssh_host = args.ssh_host or domain
print(f"[+] Domain: {domain}")
print(f"[+] SSH host: {ssh_host}")
print(f"[+] Remote cert: {REMOTE_CERT}")
print(f"[+] Remote key: {REMOTE_KEY}")
work_dir = Path(tempfile.mkdtemp(prefix="tls-external-test-"))
try:
# Generate chatmail.ini
ini_path = generate_config(domain, work_dir)
if not args.skip_certgen:
local_cert, local_key = generate_cert(domain, work_dir)
upload_certs(ssh_host, local_cert, local_key)
else:
local_cert = work_dir / "fullchain.pem"
subprocess.check_call([
"scp", f"root@{ssh_host}:{REMOTE_CERT}", str(local_cert),
])
# Record timestamp before deploy for journal filtering
deploy_start = time.strftime("%Y-%m-%d %H:%M:%S")
if not args.skip_deploy:
run_deploy(ini_path)
# Probe TLS ports
expected_der = load_cert_der(local_cert)
ports_ok = probe_all_ports(domain, expected_der)
# Check service health (only errors since deploy started)
services_ok = check_remote_services(ssh_host, since=deploy_start)
if ports_ok and services_ok:
print(
"\n[SUCCESS] All TLS port probes passed and services are healthy"
)
return 0
else:
if not ports_ok:
print("\n[FAILURE] Some TLS port probes failed", file=sys.stderr)
if not services_ok:
print(
"\n[FAILURE] Some services have errors", file=sys.stderr
)
return 1
finally:
shutil.rmtree(work_dir, ignore_errors=True)
if __name__ == "__main__":
sys.exit(main())


@@ -6,9 +6,9 @@ using Docker Compose.
 .. note::

-   - Docker support is experimental, CI builds and tests the image automatically, but please report bugs.
-   - The image wraps the cmdeploy process detailed in the :doc:`getting_started` instructions in a Debian-systemd image with r/w access to `/sys/fs`
-   - Currently amd64-only (arm64 should work but is untested).
+   - Docker support is experimental and not yet covered by automated tests, please report bugs.
+   - This preliminary image simply wraps the cmdeploy process detailed in the :doc:`getting_started` instructions in a full Debian-systemd image with r/w access to `/sys/fs`
+   - Currently, the image has only been tested and built on amd64, though arm64 should theoretically work as well.

 Setup Preparation
@@ -21,7 +21,7 @@ steps. Please substitute it with your own domain.
    - Debian 12 through the `official install instructions <https://docs.docker.com/engine/install/debian/#install-using-the-repository>`_
    - Debian 13+ with `apt install docker docker-compose`

-   If you must use v1 (EOL since 2023), use `docker-compose` in the following and modify the `docker-compose.yaml` to use `privileged: true` instead of `cgroup: host`, though that gives the container full privileges.
+   If you must use v1 (EOL since 2023), use `docker-compose` in the following and modify the `docker-compose.yaml` to use `privileged: true` instead of `cgroup: host`, though that gives the container all privileges.
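For the v1 fallback described above, the edit to `docker-compose.yaml` amounts to swapping one key (a sketch; v1 does not understand `cgroup: host`, and `privileged: true` grants the container full privileges):

```yaml
services:
  chatmail:
    # docker-compose v1 fallback: replace `cgroup: host` with
    privileged: true
```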
 2. Setup the initial DNS records.

    The following is an example in the familiar BIND zone file format with
@@ -105,23 +105,19 @@ You can test the installation with::

 You should check and extend your DNS records for better interoperability::

     # Show required DNS records
-    docker exec chatmail cmdeploy dns --ssh-host @local
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy dns --ssh-host @local

 You can check server status with::

-    docker exec chatmail cmdeploy status --ssh-host @local
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy status --ssh-host @local

 You can run some benchmarks (these can also be run from any machine with cmdeploy installed)::

-    docker exec chatmail cmdeploy bench
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy bench chat.example.org

 You can run the test suite with::

-    docker exec chatmail cmdeploy test --ssh-host localhost
+    docker exec chatmail /opt/cmdeploy/bin/cmdeploy test chat.example.org --ssh-host localhost

-You can look at logs::
-
-    docker exec chatmail journalctl -fu postfix@-
-
 Customization
@@ -237,13 +233,11 @@ Clone the repository and build the Docker image::

     git clone https://github.com/chatmail/relay
     cd relay
-    docker/build.sh
+    docker compose build chatmail

 The build bakes all binaries, Python packages, and the install stage
 into the image. After building, only ``docker-compose.yaml`` and a ``.env`` with
 ``MAIL_DOMAIN`` are needed to run the container.
-The `build.sh` passes the git hash onto the docker build so it can be
-determined if there has been a change that warrants a redeploy.

 You can transfer a locally built image to your server directly (pigz, a parallel `gzip`, can be used instead)::
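The direct transfer mentioned above is usually a `docker save`/`docker load` pipe; the image name and target host below are assumptions, adjust to your build:

```shell
# Stream the image to the server without an intermediate registry:
docker save chatmail | gzip | ssh root@chat.example.org 'gunzip | docker load'

# Same, but with pigz to compress on all cores:
docker save chatmail | pigz | ssh root@chat.example.org 'unpigz | docker load'
```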


@@ -235,11 +235,7 @@ The deploy will verify that both files exist on the server.

 You are responsible for certificate renewal.
 When the certificate file changes on disk,
 all relay services pick up the new certificate automatically
-via a systemd path watcher installed during deploy.
-The watcher uses inotify, which does not cross bind-mount boundaries.
-If you use such a setup, you must trigger the reload explicitly after renewal::
-
-    systemctl start tls-cert-reload.service
+(via a systemd path watcher installed during deploy).
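For renewals that the inotify watcher cannot see (e.g. certificates written outside a bind mount and copied in), the manual trigger can be wired into the ACME client's renewal hook; the hook path below is a certbot convention and an assumption here:

```shell
#!/bin/sh
# e.g. /etc/letsencrypt/renewal-hooks/deploy/chatmail-reload.sh
# certbot runs deploy hooks after writing a renewed certificate;
# this starts the oneshot reload of dovecot and nginx.
systemctl start tls-cert-reload.service
```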
 Migrating to a new build machine


@@ -1,9 +1,10 @@
 # Local overrides: copy to docker-compose.override.yaml in the repo root.
 # Compose automatically merges this with docker-compose.yaml.
 #
 #   cp docker-compose.override.yaml.example docker-compose.override.yaml
 #
-# Volumes are APPENDED to the base file's volumes list, environment and other scalar keys are MERGED by key.
+# Volumes are APPENDED to the base file's volumes list.
+# Environment and other scalar keys are MERGED by key.

 services:
   chatmail:
     volumes:
@@ -23,17 +24,12 @@ services:
       # - ./custom/www:/opt/chatmail-www
       ## Debug — mount scripts from the repo for live editing:
-      # - ./docker/chatmail-init.sh:/chatmail-init.sh
-      # - ./docker/entrypoint.sh:/entrypoint.sh
+      # - ./docker/files/setup_chatmail_docker.sh:/setup_chatmail_docker.sh
+      # - ./docker/files/entrypoint.sh:/entrypoint.sh
     # environment:
       ## Mount certs (above) and set TLS_EXTERNAL_CERT_AND_KEY to in-container paths.
-      ## A tls-cert-reload.path watcher inside the container reloads services
-      ## when the cert file changes. However, inotify does not cross bind-mount
-      ## boundaries, so host-side renewals (certbot, acmetool, etc.) must
-      ## notify the container explicitly. Add this to your renewal hook:
-      ##
-      ##   docker exec chatmail systemctl start tls-cert-reload.service
+      ## Changed certs are picked up automatically (inotify via tls-cert-reload.path).
       ##
       ## Host acmetool (bare-metal migration): create mount above, and
       ##   rsync -a /var/lib/acme/live data/certs
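Pieced together, a minimal override for externally managed certificates could look like the following sketch (the host path `./data/certs` and the per-domain `live/` layout are assumptions, mirroring an acmetool-style directory):

```yaml
# docker-compose.override.yaml (sketch; host paths are assumptions)
services:
  chatmail:
    volumes:
      - ./data/certs:/var/lib/acme
    environment:
      TLS_EXTERNAL_CERT_AND_KEY: /var/lib/acme/live/${MAIL_DOMAIN}/fullchain /var/lib/acme/live/${MAIL_DOMAIN}/privkey
```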


@@ -2,15 +2,11 @@
 # volumes, env overrides) in docker-compose.override.yaml instead.
 # See docker/docker-compose.override.yaml.example for a starting point.
 #
-# Security notes: this container uses
-# - network_mode:host: chatmail needs many ports (25, 53, 80, 143, 443, 465,
-#   587, 993, 3340, 8443) and needs to operate from the real IP, which bridging
-#   would make tricky
-# - cgroup:host (required for systemd).
-# Together these give the container near-host-level access. This is acceptable
-# for a dedicated mail server, but be aware that the container can bind any
-# port and see all host network traffic.
+# Security note: this container uses network_mode:host (chatmail needs many
+# ports: 25, 53, 80, 143, 443, 465, 587, 993, 3340, 8443) and cgroup:host
+# (required for systemd). Together these give the container near-host-level
+# access. This is acceptable for a dedicated mail server, but be aware that
+# the container can bind any port and see all host network traffic.
 services:
   chatmail:
     build:
@@ -30,7 +26,10 @@ services:
       - /run
       - /run/lock
     logging:
-      driver: none
+      driver: json-file
+      options:
+        max-size: "10m"
+        max-file: "3"
     environment:
       MAIL_DOMAIN: $MAIL_DOMAIN
     network_mode: "host"


@@ -5,5 +5,5 @@
 # .git/ is excluded from the build context (.dockerignore) so the hash
 # must be passed as a build arg from the host.
-export GIT_HASH=$(git rev-parse HEAD)
+export GIT_HASH=$(git rev-parse --short HEAD)
 exec docker compose build "$@"
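`git rev-parse --short HEAD` yields an abbreviated hash: a prefix of the full SHA-1, at least seven characters, lengthened automatically if needed to stay unambiguous in the repository. A quick self-contained check of that relationship (scratch repo, synthetic identity):

```shell
# Demonstrate that the short hash is a unique prefix of the full hash.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=ci@local -c user.name=ci \
    commit -q --allow-empty -m "init"
full=$(git -C "$repo" rev-parse HEAD)
short=$(git -C "$repo" rev-parse --short HEAD)
case "$full" in
    "$short"*) echo "ok: $short is a prefix of $full" ;;
    *)         echo "unexpected" ;;
esac
```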


@@ -1,55 +1,59 @@
-# syntax=docker/dockerfile:1
 FROM jrei/systemd-debian:12 AS base
 ENV LANG=en_US.UTF-8
-RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
-    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
-    echo 'APT::Install-Recommends "0";' > /etc/apt/apt.conf.d/01norecommend && \
+RUN echo 'APT::Install-Recommends "0";' > /etc/apt/apt.conf.d/01norecommend && \
     echo 'APT::Install-Suggests "0";' >> /etc/apt/apt.conf.d/01norecommend && \
     apt-get update && \
-    DEBIAN_FRONTEND=noninteractive TZ=UTC \
     apt-get install -y \
-    ca-certificates \
-    gcc \
-    git \
-    python3 \
-    python3-dev \
-    python3-venv \
-    tzdata \
-    locales && \
+    ca-certificates && \
+    DEBIAN_FRONTEND=noninteractive \
+    TZ=UTC \
+    apt-get install -y tzdata && \
+    apt-get install -y locales && \
     sed -i -e "s/# $LANG.*/$LANG UTF-8/" /etc/locale.gen && \
     dpkg-reconfigure --frontend=noninteractive locales && \
-    update-locale LANG=$LANG
+    update-locale LANG=$LANG \
+    && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && \
+    apt-get install -y \
+    git \
+    python3 \
+    python3-venv \
+    python3-virtualenv \
+    gcc \
+    python3-dev \
+    opendkim \
+    opendkim-tools \
+    curl \
+    rsync \
+    unbound \
+    unbound-anchor \
+    dnsutils \
+    postfix \
+    acl \
+    nginx \
+    libnginx-mod-stream \
+    fcgiwrap \
+    cron \
+    && rm -rf /var/lib/apt/lists/*
 # --- Build-time: install cmdeploy venv and run install stage ---
 # Editable install so importlib.resources reads directly from the source tree.
 # On container start only "configure,activate" stages run.
-# Copy dependency metadata first so pip install layer is cached
-COPY cmdeploy/pyproject.toml /opt/chatmail/cmdeploy/pyproject.toml
-COPY chatmaild/pyproject.toml /opt/chatmail/chatmaild/pyproject.toml
-# Dummy scaffolding so editable install can discover packages
-RUN mkdir -p /opt/chatmail/cmdeploy/src/cmdeploy \
-    /opt/chatmail/chatmaild/src/chatmaild && \
-    touch /opt/chatmail/cmdeploy/src/cmdeploy/__init__.py \
-    /opt/chatmail/chatmaild/src/chatmaild/__init__.py
-# Dummy git repo: .git/ is excluded from the build context (.dockerignore)
-# but setuptools calls `git ls-files` when building the sdist.
-WORKDIR /opt/chatmail
-RUN --mount=type=cache,target=/root/.cache/pip \
-    git init -q && \
-    python3 -m venv /opt/cmdeploy && \
-    /opt/cmdeploy/bin/pip install -e chatmaild/ -e cmdeploy/
-# Full source copy (editable install's .egg-link still points here)
 COPY . /opt/chatmail/
+WORKDIR /opt/chatmail
+# Minimal chatmail.ini
 RUN printf '[params]\nmail_domain = build.local\n' > /tmp/chatmail.ini
+# Dummy git repo init: .git/ is excluded from the build context (.dockerignore)
+# but setuptools calls `git ls-files` when building the sdist.
+RUN git init -q && \
+    python3 -m venv /opt/cmdeploy && \
+    /opt/cmdeploy/bin/pip install --no-cache-dir \
+    -e chatmaild/ -e cmdeploy/
 RUN CMDEPLOY_STAGES=install \
     CHATMAIL_INI=/tmp/chatmail.ini \
     CHATMAIL_NOSYSCTL=True \
@@ -59,17 +63,11 @@ RUN CMDEPLOY_STAGES=install \
 RUN cp -a www/ /opt/chatmail-www/
-# Remove build-only packages and their deps — not needed at runtime
-RUN apt-get purge -y gcc git python3-dev && \
-    apt-get autoremove -y && \
-    rm -f /tmp/chatmail.ini
+RUN rm -f /tmp/chatmail.ini
 # Record image version (used in deploy fingerprint at runtime).
 # GIT_HASH is passed as a build arg (from docker-compose or CI) so that
 # .git/ can be excluded from the build context via .dockerignore.
-# Two files: chatmail-image-version is the immutable build hash (survives
-# deploys); chatmail-version is overwritten by cmdeploy run and restored
-# from the image version after each deploy in chatmail-init.sh.
 ARG GIT_HASH=unknown
 RUN echo "$GIT_HASH" > /etc/chatmail-image-version && \
     echo "$GIT_HASH" > /etc/chatmail-version
@@ -79,19 +77,18 @@ ENV TZ=:/etc/localtime
 ENV PATH="/opt/cmdeploy/bin:${PATH}"
 RUN ln -s /etc/chatmail/chatmail.ini /opt/chatmail/chatmail.ini
-ARG CHATMAIL_INIT_SERVICE_PATH=/lib/systemd/system/chatmail-init.service
-COPY ./docker/chatmail-init.service "$CHATMAIL_INIT_SERVICE_PATH"
-RUN ln -sf "$CHATMAIL_INIT_SERVICE_PATH" "/etc/systemd/system/multi-user.target.wants/chatmail-init.service"
+ARG SETUP_CHATMAIL_SERVICE_PATH=/lib/systemd/system/setup_chatmail.service
+COPY ./docker/files/setup_chatmail.service "$SETUP_CHATMAIL_SERVICE_PATH"
+RUN ln -sf "$SETUP_CHATMAIL_SERVICE_PATH" "/etc/systemd/system/multi-user.target.wants/setup_chatmail.service"
 # Remove default nginx site config at build time (not in entrypoint)
 RUN rm -f /etc/nginx/sites-enabled/default
-COPY --chmod=555 ./docker/chatmail-init.sh /chatmail-init.sh
-COPY --chmod=555 ./docker/entrypoint.sh /entrypoint.sh
-COPY --chmod=555 ./docker/healthcheck.sh /healthcheck.sh
-HEALTHCHECK --interval=10s --start-period=180s --timeout=10s --retries=3 \
-    CMD /healthcheck.sh
+COPY --chmod=555 ./docker/files/setup_chatmail_docker.sh /setup_chatmail_docker.sh
+COPY --chmod=555 ./docker/files/entrypoint.sh /entrypoint.sh
+HEALTHCHECK --interval=60s --timeout=10s --retries=3 \
+    CMD systemctl is-active dovecot postfix nginx unbound opendkim filtermail doveauth chatmail-metadata || exit 1
 STOPSIGNAL SIGRTMIN+3
@@ -99,3 +96,4 @@ ENTRYPOINT ["/entrypoint.sh"]
 CMD [ "--default-standard-output=journal+console", \
      "--default-standard-error=journal+console" ]
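`systemctl is-active` with multiple units exits non-zero if any one of them is not active, so the HEALTHCHECK marks the container unhealthy as soon as a single listed service goes down. A stub illustrating that all-or-nothing exit-code contract (the stub is a model, not systemd):

```shell
# Stub modeling `systemctl is-active UNIT...`: prints one state per unit
# and returns non-zero if any unit is not active.
is_active_stub() {
    rc=0
    for unit in "$@"; do
        case "$unit" in
            *.down) echo "inactive"; rc=3 ;;  # one bad unit poisons the result
            *)      echo "active" ;;
        esac
    done
    return "$rc"
}

# All healthy: exits 0. One down: non-zero, so "|| exit 1" fires.
is_active_stub dovecot postfix nginx && echo healthy
is_active_stub dovecot postfix.down nginx || echo unhealthy
```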


@@ -1,11 +0,0 @@
-# Used by .github/workflows/docker-ci.yaml
-# The GHCR image is set via CHATMAIL_IMAGE env var at deploy time.
-services:
-  chatmail:
-    image: ${CHATMAIL_IMAGE:-chatmail-relay:latest}
-    volumes:
-      - /srv/chatmail/chatmail.ini:/etc/chatmail/chatmail.ini
-      - /srv/chatmail/dkim:/etc/dkimkeys
-      - /srv/chatmail/certs:/var/lib/acme
-    environment:
-      TLS_EXTERNAL_CERT_AND_KEY: /var/lib/acme/live/${MAIL_DOMAIN}/fullchain /var/lib/acme/live/${MAIL_DOMAIN}/privkey


@@ -1,9 +0,0 @@
-#!/bin/bash
-set -eo pipefail
-CHATMAIL_INIT_SERVICE_PATH="${CHATMAIL_INIT_SERVICE_PATH:-/lib/systemd/system/chatmail-init.service}"
-env_vars="MAIL_DOMAIN CMDEPLOY_STAGES CHATMAIL_INI TLS_EXTERNAL_CERT_AND_KEY PATH"
-sed -i "s|<envs_list>|$env_vars|g" "$CHATMAIL_INIT_SERVICE_PATH"
-exec /lib/systemd/systemd "$@"

docker/files/entrypoint.sh Executable file

@@ -0,0 +1,12 @@
+#!/bin/bash
+set -eo pipefail
+SETUP_CHATMAIL_SERVICE_PATH="${SETUP_CHATMAIL_SERVICE_PATH:-/lib/systemd/system/setup_chatmail.service}"
+# Whitelist only the env vars needed by setup_chatmail_docker.sh.
+# Forwarding all env vars (via printenv) would leak Docker internals,
+# orchestrator secrets, and other unrelated variables into systemd.
+env_vars="MAIL_DOMAIN CMDEPLOY_STAGES CHATMAIL_INI TLS_EXTERNAL_CERT_AND_KEY PATH"
+sed -i "s|<envs_list>|$env_vars|g" "$SETUP_CHATMAIL_SERVICE_PATH"
+exec /lib/systemd/systemd "$@"
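The `<envs_list>` substitution can be reproduced in isolation; a scratch file stands in for the service unit:

```shell
# Reproduce the entrypoint's placeholder substitution against a scratch file.
unit=$(mktemp)
printf 'PassEnvironment=<envs_list>\n' > "$unit"
env_vars="MAIL_DOMAIN CMDEPLOY_STAGES CHATMAIL_INI TLS_EXTERNAL_CERT_AND_KEY PATH"
# '|' as the sed delimiter avoids clashing with '/' in substituted values.
sed -i "s|<envs_list>|$env_vars|g" "$unit"
cat "$unit"
# -> PassEnvironment=MAIL_DOMAIN CMDEPLOY_STAGES CHATMAIL_INI TLS_EXTERNAL_CERT_AND_KEY PATH
```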


@@ -1,11 +1,11 @@
 [Unit]
 Description=Run container setup commands
 After=multi-user.target
-ConditionPathExists=/chatmail-init.sh
+ConditionPathExists=/setup_chatmail_docker.sh
 
 [Service]
 Type=oneshot
-ExecStart=/bin/bash /chatmail-init.sh
+ExecStart=/bin/bash /setup_chatmail_docker.sh
 RemainAfterExit=true
 WorkingDirectory=/opt/chatmail
 PassEnvironment=<envs_list>


@@ -12,44 +12,37 @@ if [ -z "$MAIL_DOMAIN" ]; then
     exit 1
 fi
 
-# Generate DKIM keys if not mounted
+### MAIN
 if [ ! -f /etc/dkimkeys/opendkim.private ]; then
     /usr/sbin/opendkim-genkey -D /etc/dkimkeys -d "$MAIL_DOMAIN" -s opendkim
 fi
 # Fix ownership for bind-mounted keys (host opendkim UID may differ from container)
 chown -R opendkim:opendkim /etc/dkimkeys
 
-# Create chatmail.ini, skip if mounted
+# Journald: forward to console for docker logs
+grep -q '^ForwardToConsole=yes' /etc/systemd/journald.conf \
+    || echo "ForwardToConsole=yes" >> /etc/systemd/journald.conf
+systemctl restart systemd-journald
+
+# Create chatmail.ini (skips if file already exists, e.g. volume-mounted)
 mkdir -p "$(dirname "$CHATMAIL_INI")"
 if [ ! -f "$CHATMAIL_INI" ]; then
     $CMDEPLOY init --config "$CHATMAIL_INI" "$MAIL_DOMAIN"
 fi
 
-# Auto-detect IPv6: if the host has no IPv6 connectivity, set disable_ipv6
-# in the ini so dovecot/postfix/nginx bind to IPv4 only.
-# Uses network_mode:host so /proc/net/if_inet6 reflects the host's stack.
-if [ ! -e /proc/net/if_inet6 ]; then
-    if grep -q '^disable_ipv6 = False' "$CHATMAIL_INI"; then
-        sed -i 's/^disable_ipv6 = False/disable_ipv6 = True/' "$CHATMAIL_INI"
-        echo "[INFO] IPv6 not available, set disable_ipv6 = True"
-    fi
-fi
-
-# Inject external TLS paths from env var unless defined in chatmail.ini
+# Inject external TLS paths from env var (unless user mounted their own ini)
 if [ -n "${TLS_EXTERNAL_CERT_AND_KEY:-}" ]; then
     if ! grep -q '^tls_external_cert_and_key' "$CHATMAIL_INI"; then
         echo "tls_external_cert_and_key = $TLS_EXTERNAL_CERT_AND_KEY" >> "$CHATMAIL_INI"
     fi
 fi
 
-# Ensure mailboxes directory exists (chatmail-metadata needs it at startup,
-# but Dovecot only creates it on first mail delivery)
-mkdir -p "/home/vmail/mail/${MAIL_DOMAIN}"
-chown vmail:vmail "/home/vmail/mail/${MAIL_DOMAIN}"
-
 # --- Deploy fingerprint: skip cmdeploy run if nothing changed ---
 # On restart with identical image+config, systemd already brings up all
-# enabled services; only configure+activate are needed here.
+# enabled services — the full cmdeploy run is redundant (~30s saved).
+# The install stage runs at image build time (Dockerfile), so only
+# configure+activate are needed here.
 IMAGE_VERSION_FILE="/etc/chatmail-image-version"
 FINGERPRINT_FILE="/etc/chatmail/.deploy-fingerprint"
 image_ver="none"
@@ -57,7 +50,7 @@ image_ver="none"
 config_hash=$(sha256sum "$CHATMAIL_INI" | cut -c1-16)
 current_fp="${image_ver}:${config_hash}"
 # CMDEPLOY_STAGES non-empty in env = operator override -> always run.
 # Otherwise, if fingerprint matches the last successful deploy, skip.
 if [ -z "${CMDEPLOY_STAGES:-}" ] \
     && [ -f "$FINGERPRINT_FILE" ] \
@@ -65,23 +58,6 @@ if [ -z "${CMDEPLOY_STAGES:-}" ] \
     echo "[INFO] No changes detected ($current_fp), skipping deploy."
 else
     export CMDEPLOY_STAGES="${CMDEPLOY_STAGES:-configure,activate}"
-    # Skip DNS check when MAIL_DOMAIN is a bare IP address
-    SKIP_DNS=""
-    if [[ "$MAIL_DOMAIN" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] || [[ "$MAIL_DOMAIN" =~ : ]]; then
-        SKIP_DNS="--skip-dns-check"
-    fi
-    $CMDEPLOY run --config "$CHATMAIL_INI" --ssh-host @local $SKIP_DNS
-    # Restore the build-time hash
-    cp /etc/chatmail-image-version /etc/chatmail-version
+    $CMDEPLOY run --config "$CHATMAIL_INI" --ssh-host @local
     echo "$current_fp" > "$FINGERPRINT_FILE"
 fi
-
-# Signal success to Docker healthcheck
-touch /run/chatmail-init.done
-
-# Forward journald to console so `docker compose logs` works
-grep -q '^ForwardToConsole=yes' /etc/systemd/journald.conf \
-    || echo "ForwardToConsole=yes" >> /etc/systemd/journald.conf
-systemctl restart systemd-journald
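The fingerprint itself is just the image hash joined with a truncated digest of the config; the scheme can be sketched standalone (the `deadbeef` image hash and ini contents here are synthetic):

```shell
# Sketch of the deploy fingerprint: <image hash>:<first 16 hex chars of
# the chatmail.ini sha256>. Same image + same config => same fingerprint.
ini=$(mktemp)
printf '[params]\nmail_domain = example.org\n' > "$ini"
image_ver="deadbeef"   # stands in for /etc/chatmail-image-version
config_hash=$(sha256sum "$ini" | cut -c1-16)
current_fp="${image_ver}:${config_hash}"
echo "$current_fp"

# Any config edit changes the hash, which forces a fresh cmdeploy run:
printf 'extra = 1\n' >> "$ini"
new_hash=$(sha256sum "$ini" | cut -c1-16)
[ "$new_hash" != "$config_hash" ] && echo "config changed -> deploy runs"
```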


@@ -1,16 +0,0 @@
-#!/bin/bash
-# returns 0 when chatmail-init succeeded and all expected services are running.
-set -e
-test -f /run/chatmail-init.done
-
-# Core services
-services="chatmail-metadata doveauth dovecot filtermail filtermail-incoming nginx postfix unbound"
-# Optional services
-for svc in iroh-relay turnserver; do
-    systemctl is-enabled "$svc" 2>/dev/null && services="$services $svc"
-done
-exec systemctl is-active $services