Compare commits

..

2 Commits

Author       SHA1        Message                                           Date
missytake    0d301f9807  doc: add changelog                                2025-04-10 11:52:23 +02:00
Mark Felder  a5dffdf2e6  Postfix master.cf: use 127.0.0.1 for consistency  2025-04-10 11:52:23 +02:00
55 changed files with 275 additions and 1653 deletions

View File

@@ -12,7 +12,6 @@ Please fill out as much of this form as you can (leaving out stuff that is not a
- Server OS (Operating System) - preferably Debian 12:
- On which OS you run cmdeploy:
-- chatmail/relay version: `git rev-parse HEAD`
## Expected behavior

View File

@@ -1,5 +1,5 @@
blank_issues_enabled: true
contact_links:
- name: Mutual Help Chat Group
-url: https://i.delta.chat/#6CBFF8FFD505C0FDEA20A66674F2916EA8FBEE99&a=invitebot%40nine.testrun.org&g=Chatmail%20Mutual%20Help&x=7sFF7Ik50pWv6J1z7RVC5527&i=X69wTFfvCfs3d-JzqP0kVA3i&s=ibp-447dU-wUq-52QanwAtWc
+url: https://i.delta.chat/#C2846EB4C1CB8DF84B1818F5E3A638FC3FBDC981&a=stalebot1%40nine.testrun.org&g=Chatmail%20Mutual%20Help&x=7sFF7Ik50pWv6J1z7RVC5527&i=d7s1HvOsk5UrSf9AoqRZggg4&s=XmX_9BAW6-g5Ao5E8PyaeKNB
about: If you have troubles setting up the relay server, feel free to ask here.

View File

@@ -10,10 +10,6 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
-# Checkout pull request HEAD commit instead of merge commit
-# Otherwise `test_deployed_state` will be unhappy.
-with:
-ref: ${{ github.event.pull_request.head.sha }}
- name: run chatmaild tests
working-directory: chatmaild

View File

@@ -70,6 +70,9 @@ jobs:
rsync -avz dkimkeys-restore/dkimkeys root@staging-ipv4.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging-ipv4.testrun.org chown root:root -R /var/lib/acme || true
+- name: run formatting checks
+run: cmdeploy fmt -v
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
@@ -77,7 +80,7 @@ jobs:
cmdeploy init staging-ipv4.testrun.org
sed -i 's#disable_ipv6 = False#disable_ipv6 = True#' chatmail.ini
-- run: cmdeploy run --verbose --skip-dns-check
+- run: cmdeploy run
- name: set DNS entries
run: |

View File

@@ -70,12 +70,15 @@ jobs:
rsync -avz dkimkeys-restore/dkimkeys root@staging2.testrun.org:/etc/ || true
ssh -o StrictHostKeyChecking=accept-new -v root@staging2.testrun.org chown root:root -R /var/lib/acme || true
+- name: run formatting checks
+run: cmdeploy fmt -v
- name: run deploy-chatmail offline tests
run: pytest --pyargs cmdeploy
- run: cmdeploy init staging2.testrun.org
-- run: cmdeploy run --verbose --skip-dns-check
+- run: cmdeploy run --verbose
- name: set DNS entries
run: |

View File

@@ -1,50 +0,0 @@
This diagram shows components of the chatmail server; this is a draft
overview as of mid-August 2025:
```mermaid
graph LR;
cmdeploy --- sshd;
letsencrypt --- |80|acmetool-redirector;
acmetool-redirector --- |443|nginx-right(["`nginx
(external)`"]);
nginx-external --- |465|postfix;
nginx-external(["`nginx
(external)`"]) --- |8443|nginx-internal["`nginx
(internal)`"];
nginx-internal --- website["`Website
/var/www/html`"];
nginx-internal --- newemail.py;
nginx-internal --- autoconfig.xml;
certs-nginx[("`TLS certs
/var/lib/acme`")] --> nginx-internal;
cron --- chatmail-metrics;
cron --- acmetool;
chatmail-metrics --- website;
acmetool --> certs[("`TLS certs
/var/lib/acme`")];
nginx-external --- |993|dovecot;
autoconfig.xml --- postfix;
autoconfig.xml --- dovecot;
postfix --- echobot;
postfix --- |10080,10081|filtermail;
postfix --- users["`User data
home/vmail/mail`"];
postfix --- |doveauth.socket|doveauth;
dovecot --- |doveauth.socket|doveauth;
dovecot --- users;
dovecot --- |metadata.socket|chatmail-metadata;
doveauth --- users;
chatmail-expire-daily --- users;
chatmail-fsreport-daily --- users;
chatmail-metadata --- iroh-relay;
certs-nginx --> postfix;
certs-nginx --> dovecot;
style certs fill:#ff6;
style certs-nginx fill:#ff6;
style nginx-external fill:#fc9;
style nginx-right fill:#fc9;
```
The edges in this graph should not be taken too literally; they
reflect some sort of communication path or dependency relationship
between components of the chatmail server.

View File

@@ -2,121 +2,9 @@
## untagged
- filtermail: run CPU-intensive handle_DATA in a thread pool executor
([#676](https://github.com/chatmail/relay/pull/676))
- don't use the complicated logging module in filtermail to exclude a potential source of errors.
([#674](https://github.com/chatmail/relay/pull/674))
- Specify nginx.conf to only handle `mail_domain`, www, and mta-sts domains
([#636](https://github.com/chatmail/relay/pull/636))
- Setup TURN server
([#621](https://github.com/chatmail/relay/pull/621))
- cmdeploy: make --ssh-host work with localhost
([#659](https://github.com/chatmail/relay/pull/659))
- Update iroh-relay to 0.35.0
([#650](https://github.com/chatmail/relay/pull/650))
- filtermail: accept mails from Protonmail
([#655](https://github.com/chatmail/relay/pull/655))
- Ignore all RCPT TO: parameters
([#651](https://github.com/chatmail/relay/pull/651))
- Increase opendkim DNS Timeout from 5 to 60 seconds
([#672](https://github.com/chatmail/relay/pull/672))
- Add config parameter for Let's Encrypt ACME email
([#663](https://github.com/chatmail/relay/pull/663))
- Use max username length in newemail.py, not min
([#648](https://github.com/chatmail/relay/pull/648))
- Add startup for `fcgiwrap.service` because sometimes it did not start automatically.
([#657](https://github.com/chatmail/relay/pull/657))
- Add `cmdeploy init --force` command for recreating chatmail.ini
([#656](https://github.com/chatmail/relay/pull/656))
- Increase maxproc for reinjecting ports from 10 to 100
([#646](https://github.com/chatmail/relay/pull/646))
- Allow ports 143 and 993 to be used by `dovecot` process
([#639](https://github.com/chatmail/relay/pull/639))
- Add `--skip-dns-check` argument to `cmdeploy run` command, which disables DNS record checking before installation.
([#661](https://github.com/chatmail/relay/pull/661))
- Rework expiry of message files and mailboxes in Python
to only do a single iteration over sometimes millions of messages
instead of doing "find" commands that iterate 9 times over the messages.
Provide an "fsreport" CLI for more fine grained analysis of message files.
([#632](https://github.com/chatmail/relay/pull/632))
## 1.7.0 2025-09-11
- Make www upload path configurable
([#618](https://github.com/chatmail/relay/pull/618))
- Check whether GCC is installed in initenv.sh
([#608](https://github.com/chatmail/relay/pull/608))
- Expire push notification tokens after 90 days
([#583](https://github.com/chatmail/relay/pull/583))
- Use official `mtail` binary instead of `mtail` package
([#581](https://github.com/chatmail/relay/pull/581))
- dovecot: install from download.delta.chat instead of openSUSE Build Service
([#590](https://github.com/chatmail/relay/pull/590))
- Reconfigure Dovecot imap-login service to high-performance mode
([#578](https://github.com/chatmail/relay/pull/578))
- Set timezone to improve dovecot performance
([#584](https://github.com/chatmail/relay/pull/584))
- Increase nginx connection limits
([#576](https://github.com/chatmail/relay/pull/576))
- If `dns-utils` needs to be installed before cmdeploy run, apt update to make sure it works
([#560](https://github.com/chatmail/relay/pull/560))
- filtermail: respect config message size limit
([#572](https://github.com/chatmail/relay/pull/572))
- Don't deploy if one of the ports used for chatmail relay services is occupied by an unexpected process
([#568](https://github.com/chatmail/relay/pull/568))
- Add config value after how many days large files are deleted
([#555](https://github.com/chatmail/relay/pull/555))
- cmdeploy: push relay version to /etc/chatmail-version
([#573](https://github.com/chatmail/relay/pull/573))
- filtermail: allow partial body length in OpenPGP payloads
([#570](https://github.com/chatmail/relay/pull/570))
- chatmaild: allow echobot to receive unencrypted messages by default
([#556](https://github.com/chatmail/relay/pull/556))
## 1.6.0 2025-04-11
- Handle Port-25 connect errors more gracefully (common with VPNs)
([#552](https://github.com/chatmail/relay/pull/552))
- Avoid "acmetool not found" during initial run
([#550](https://github.com/chatmail/relay/pull/550))
- Fix timezone handling such that client/servers do not need to use
same timezone.
([#553](https://github.com/chatmail/relay/pull/553))
- Enforce end-to-end encryption for incoming messages.
New user address mailboxes now get an `enforceE2EEincoming` file
which prohibits incoming cleartext messages from other domains.
@@ -129,12 +17,6 @@
- Enforce end-to-end encryption between local addresses
([#535](https://github.com/chatmail/server/pull/535))
- unbound: check that port 53 is not occupied by a different process
([#537](https://github.com/chatmail/server/pull/537))
- unbound: before unbound is there, use 9.9.9.9 for resolving
([#518](https://github.com/chatmail/relay/pull/518))
- Limit the bind for the HTTPS server on 8443 to 127.0.0.1
([#522](https://github.com/chatmail/server/pull/522))
([#532](https://github.com/chatmail/server/pull/532))

View File

@@ -69,7 +69,7 @@ Please substitute it with your own domain.
mta-sts.chat.example.com. 3600 IN CNAME chat.example.com.
```
-2. On your local PC, clone the repository and bootstrap the Python virtualenv.
+2. Clone the repository and bootstrap the Python virtualenv.
```
git clone https://github.com/chatmail/relay
@@ -77,29 +77,30 @@ Please substitute it with your own domain.
scripts/initenv.sh
```
-3. On your local PC, create chatmail configuration file `chatmail.ini`:
+3. Create chatmail configuration file `chatmail.ini`:
```
scripts/cmdeploy init chat.example.org # <-- use your domain
```
-4. Verify that SSH root login to your remote server works:
+4. Verify that SSH root login works:
```
ssh root@chat.example.org # <-- use your domain
```
-5. From your local PC, deploy the remote chatmail relay server:
+5. Deploy the remote chatmail relay server:
```
scripts/cmdeploy run
```
-This script will also check that you have all necessary DNS records.
+This script will check that you have all necessary DNS records.
If DNS records are missing, it will recommend
which ones you should configure at your DNS provider
(it can take some time until they are public).
-### Other helpful commands
+### Other helpful commands:
To check the status of your remotely running chatmail service:
@@ -158,7 +159,7 @@ This repository has four directories:
The `cmdeploy/src/cmdeploy/cmdeploy.py` command line tool
helps with setting up and managing the chatmail service.
`cmdeploy init` creates the `chatmail.ini` config file.
-`cmdeploy run` uses a [pyinfra](https://pyinfra.com/)-based [`script`](cmdeploy/src/cmdeploy/__init__.py)
+`cmdeploy run` uses a [pyinfra](https://pyinfra.com/)-based [script](`cmdeploy/src/cmdeploy/__init__.py`)
to automatically install or upgrade all chatmail components on a relay,
according to the `chatmail.ini` config.
@@ -255,18 +256,6 @@ This starts a local live development cycle for chatmail web pages:
- Starts a browser window automatically where you can "refresh" as needed.
#### Custom web pages
You can skip uploading a web page
by setting `www_folder=disabled` in `chatmail.ini`.
If you want to manage your web pages outside this git repository,
you can set `www_folder` in `chatmail.ini` to a custom directory on your computer.
`cmdeploy run` will upload it as the server's home page,
and if it contains a `src/index.md` file,
will build it with hugo.
## Mailbox directory layout
Fresh chatmail addresses have a mailbox directory that contains:
@@ -544,15 +533,3 @@ Then reboot the relay or do `sysctl -p` and `nft -f /etc/nftables.conf`.
Once proxy relay is set up,
you can add its IP address to the DNS.
## Neighbors and Acquaintances
Here are some related projects that you may be interested in:
- [Mox](https://github.com/mjl-/mox): A Golang email server. [Work is in
progress](https://github.com/mjl-/mox/issues/251) to modify it to support all
of the features and configuration settings required to operate as a chatmail
relay.
- [Maddy-Chatmail](https://github.com/sadraiiali/maddy_chatmail): a plugin for the
[Maddy email server](https://maddy.email/) which aims to implement the
chatmail relay features and configuration options.

View File

@@ -27,10 +27,8 @@ chatmail-metadata = "chatmaild.metadata:main"
filtermail = "chatmaild.filtermail:main"
echobot = "chatmaild.echo:main"
chatmail-metrics = "chatmaild.metrics:main"
-chatmail-expire = "chatmaild.expire:main"
+delete_inactive_users = "chatmaild.delete_inactive_users:main"
-chatmail-fsreport = "chatmaild.fsreport:main"
lastlogin = "chatmaild.lastlogin:main"
-turnserver = "chatmaild.turnserver:main"
[project.entry-points.pytest11]
"chatmaild.testplugin" = "chatmaild.tests.plugin"
@@ -50,9 +48,6 @@ lint.select = [
"PLE", # Pylint Error
"PLW", # Pylint Warning
]
-lint.ignore = [
-"PLC0415" # import-outside-top-level
-]
[tool.tox]
legacy_tox_ini = """
@@ -72,6 +67,5 @@ commands =
[testenv]
deps = pytest
pdbpp
-pytest-localserver
commands = pytest -v -rsXx {posargs}
"""

View File

@@ -26,14 +26,12 @@ class Config:
self.max_mailbox_size = params["max_mailbox_size"]
self.max_message_size = int(params.get("max_message_size", "31457280"))
self.delete_mails_after = params["delete_mails_after"]
-self.delete_large_after = params["delete_large_after"]
self.delete_inactive_users_after = int(params["delete_inactive_users_after"])
self.username_min_length = int(params["username_min_length"])
self.username_max_length = int(params["username_max_length"])
self.password_min_length = int(params["password_min_length"])
self.passthrough_senders = params["passthrough_senders"].split()
self.passthrough_recipients = params["passthrough_recipients"].split()
-self.www_folder = params.get("www_folder", "")
self.filtermail_smtp_port = int(params["filtermail_smtp_port"])
self.filtermail_smtp_port_incoming = int(
params["filtermail_smtp_port_incoming"]
@@ -44,7 +42,6 @@ class Config:
)
self.mtail_address = params.get("mtail_address")
self.disable_ipv6 = params.get("disable_ipv6", "false").lower() == "true"
-self.acme_email = params.get("acme_email", "")
self.imap_rawlog = params.get("imap_rawlog", "false").lower() == "true"
if "iroh_relay" not in params:
self.iroh_relay = "https://" + params["mail_domain"]
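For context: `Config` reads required keys by indexing (raising on absence) and optional ones with `dict.get` fallbacks, and derives the iroh relay URL from the mail domain when none is configured. A standalone sketch of that last fallback (the parameter dict here is made up, not a real `chatmail.ini`):

```python
def resolve_iroh_relay(params: dict) -> str:
    # Mirrors the fallback in the surrounding diff: without an explicit
    # iroh_relay, point at the relay's own domain.
    if "iroh_relay" not in params:
        return "https://" + params["mail_domain"]
    return params["iroh_relay"]

assert resolve_iroh_relay({"mail_domain": "chat.example.org"}) == "https://chat.example.org"
assert resolve_iroh_relay(
    {"mail_domain": "chat.example.org", "iroh_relay": "https://relay.example"}
) == "https://relay.example"
```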
@@ -67,7 +64,7 @@ class Config:
def _getbytefile(self):
return open(self._inipath, "rb")
-def get_user(self, addr) -> User:
+def get_user(self, addr):
if not addr or "@" not in addr or "/" in addr:
raise ValueError(f"invalid address {addr!r}")
@@ -118,7 +115,7 @@ def get_default_config_content(mail_domain, **overrides):
lines = []
for line in content.split("\n"):
for key, value in privacy.items():
-value_lines = value.format(mail_domain=mail_domain).strip().split("\n")
+value_lines = value.strip().split("\n")
if not line.startswith(f"{key} =") or not value_lines:
continue
if len(value_lines) == 1:
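The changed line in this hunk decides whether `{mail_domain}` placeholders in multi-line privacy values get substituted before the value is split into lines. Sketched standalone with an invented value (not one from the real config):

```python
# Hypothetical privacy value containing a {mail_domain} placeholder.
value = "Operator of {mail_domain}\nContact: postmaster@{mail_domain}"

# Old side of the diff: format the placeholder, then strip and split.
value_lines = value.format(mail_domain="chat.example.org").strip().split("\n")
assert value_lines == [
    "Operator of chat.example.org",
    "Contact: postmaster@chat.example.org",
]
```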

View File

@@ -0,0 +1,31 @@
"""
Remove inactive users
"""
import os
import shutil
import sys
import time
from .config import read_config
def delete_inactive_users(config):
cutoff_date = time.time() - config.delete_inactive_users_after * 86400
for addr in os.listdir(config.mailboxes_dir):
try:
user = config.get_user(addr)
except ValueError:
continue
read_timestamp = user.get_last_login_timestamp()
if read_timestamp and read_timestamp < cutoff_date:
path = config.mailboxes_dir.joinpath(addr)
assert path == user.maildir
shutil.rmtree(path, ignore_errors=True)
def main():
(cfgpath,) = sys.argv[1:]
config = read_config(cfgpath)
delete_inactive_users(config)
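The removal rule in this added module reduces to a single timestamp comparison against a day-based cutoff. A minimal standalone sketch (the function name and values are illustrative, not part of the module):

```python
import time

def is_expired(last_login: float, days: int, now: float) -> bool:
    # A mailbox is removed when its last recorded login is older than
    # `days` days; a missing/zero timestamp is left alone, matching the
    # `if read_timestamp and read_timestamp < cutoff_date` guard above.
    cutoff = now - days * 86400
    return bool(last_login) and last_login < cutoff

now = time.time()
assert is_expired(now - 100 * 86400, 90, now)      # 100 days idle: expired
assert not is_expired(now - 10 * 86400, 90, now)   # recently active: kept
assert not is_expired(0, 90, now)                  # no login recorded: kept
```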

View File

@@ -1,182 +0,0 @@
"""
Expire old messages and addresses.
"""
import os
import shutil
import sys
import time
from argparse import ArgumentParser
from collections import namedtuple
from datetime import datetime
from stat import S_ISREG
from chatmaild.config import read_config
FileEntry = namedtuple("FileEntry", ("relpath", "mtime", "size"))
def iter_mailboxes(basedir, maxnum):
if not os.path.exists(basedir):
print_info(f"no mailboxes found at: {basedir}")
return
for name in os.listdir(basedir)[:maxnum]:
if "@" in name:
yield MailboxStat(basedir + "/" + name)
class MailboxStat:
last_login = None
def __init__(self, basedir):
self.basedir = str(basedir)
# all detected messages in cur/new/tmp folders
self.messages = []
# all detected files in mailbox top dir
self.extrafiles = []
# scan all relevant files (without recursion)
old_cwd = os.getcwd()
os.chdir(self.basedir)
for name in os.listdir("."):
if name in ("cur", "new", "tmp"):
for msg_name in os.listdir(name):
relpath = name + "/" + msg_name
st = os.stat(relpath)
self.messages.append(FileEntry(relpath, st.st_mtime, st.st_size))
else:
st = os.stat(name)
if S_ISREG(st.st_mode):
self.extrafiles.append(FileEntry(name, st.st_mtime, st.st_size))
if name == "password":
self.last_login = st.st_mtime
self.extrafiles.sort(key=lambda x: -x.size)
os.chdir(old_cwd)
def print_info(msg):
print(msg, file=sys.stderr)
class Expiry:
def __init__(self, config, dry, now, verbose):
self.config = config
self.dry = dry
self.now = now
self.verbose = verbose
self.del_mboxes = 0
self.all_mboxes = 0
self.del_files = 0
self.all_files = 0
self.start = time.time()
def remove_mailbox(self, mboxdir):
if self.verbose:
print_info(f"removing {mboxdir}")
if not self.dry:
shutil.rmtree(mboxdir)
self.del_mboxes += 1
def remove_file(self, path):
if self.verbose:
print_info(f"removing {path}")
if not self.dry:
try:
os.unlink(path)
except FileNotFoundError:
print_info(f"file not found/vanished {path}")
self.del_files += 1
def process_mailbox_stat(self, mbox):
cutoff_without_login = (
self.now - int(self.config.delete_inactive_users_after) * 86400
)
cutoff_mails = self.now - int(self.config.delete_mails_after) * 86400
cutoff_large_mails = self.now - int(self.config.delete_large_after) * 86400
self.all_mboxes += 1
changed = False
if mbox.last_login and mbox.last_login < cutoff_without_login:
self.remove_mailbox(mbox.basedir)
return
# all to-be-removed files are relative to the mailbox basedir
os.chdir(mbox.basedir)
mboxname = os.path.basename(mbox.basedir)
if self.verbose:
print_info(f"checking for mailbox messages in: {mboxname}")
self.all_files += len(mbox.messages)
for message in mbox.messages:
if message.mtime < cutoff_mails:
self.remove_file(message.relpath)
elif message.size > 200000 and message.mtime < cutoff_large_mails:
# we only remove noticed large files (not unnoticed ones in new/)
if message.relpath.startswith("cur/"):
self.remove_file(message.relpath)
else:
continue
changed = True
if changed:
self.remove_file("maildirsize")
def get_summary(self):
return (
f"Removed {self.del_mboxes} out of {self.all_mboxes} mailboxes "
f"and {self.del_files} out of {self.all_files} files in existing mailboxes "
f"in {time.time() - self.start:2.2f} seconds"
)
def main(args=None):
"""Expire mailboxes and messages according to chatmail config"""
parser = ArgumentParser(description=main.__doc__)
ini = "/usr/local/lib/chatmaild/chatmail.ini"
parser.add_argument(
"chatmail_ini",
action="store",
nargs="?",
help=f"path pointing to chatmail.ini file, default: {ini}",
default=ini,
)
parser.add_argument(
"--days", action="store", help="assume date to be days older than now"
)
parser.add_argument(
"--maxnum",
default=None,
action="store",
help="maximum number of mailboxes to iterate on",
)
parser.add_argument(
"-v",
dest="verbose",
action="store_true",
help="print out removed files and mailboxes",
)
parser.add_argument(
"--remove",
dest="remove",
action="store_true",
help="actually remove all expired files and dirs",
)
args = parser.parse_args(args)
config = read_config(args.chatmail_ini)
now = datetime.utcnow().timestamp()
if args.days:
now = now - 86400 * int(args.days)
maxnum = int(args.maxnum) if args.maxnum else None
exp = Expiry(config, dry=not args.remove, now=now, verbose=args.verbose)
for mailbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum):
exp.process_mailbox_stat(mailbox)
print(exp.get_summary())
if __name__ == "__main__":
main(sys.argv[1:])
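The deleted `Expiry` logic above applies three day-based cutoffs: inactive mailboxes, ordinary message retention, and an earlier cutoff for large files that only applies to messages already moved to `cur/`. A simplified restatement of the per-message decision (the day counts are made-up defaults, not the real config values):

```python
def should_remove(mtime: float, size: int, relpath: str, now: float,
                  mails_days: int = 40, large_days: int = 7) -> bool:
    # Older than the general retention window: always remove.
    if mtime < now - mails_days * 86400:
        return True
    # Large files expire sooner, but only once seen (in cur/, not new/).
    if size > 200000 and mtime < now - large_days * 86400:
        return relpath.startswith("cur/")
    return False

now = 10_000_000.0
assert should_remove(now - 41 * 86400, 100, "cur/a", now)
assert should_remove(now - 8 * 86400, 300_000, "cur/b", now)
assert not should_remove(now - 8 * 86400, 300_000, "new/b", now)
assert not should_remove(now - 1 * 86400, 100, "cur/c", now)
```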

View File

@@ -2,6 +2,7 @@
import asyncio
import base64
import binascii
+import logging
import sys
import time
from email import policy
@@ -37,12 +38,6 @@ def check_openpgp_payload(payload: bytes):
packet_type_id = payload[i] & 0x3F
i += 1
-while payload[i] >= 224 and payload[i] < 255:
-# Partial body length.
-partial_length = 1 << (payload[i] & 0x1F)
-i += 1 + partial_length
if payload[i] < 192:
# One-octet length.
body_len = payload[i]
@@ -61,7 +56,7 @@ def check_openpgp_payload(payload: bytes):
)
i += 5
else:
-# Impossible, partial body length was processed above.
+# Partial body length is not allowed.
return False
i += body_len
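The one-, two-, and five-octet branches above implement OpenPGP new-format body lengths (RFC 4880, section 4.2.2), with first octets 224–254 reserved for the partial body lengths this diff stops accepting. A standalone decoder sketch (not the project's code):

```python
def decode_length(buf: bytes, i: int):
    """Decode an OpenPGP new-format body length starting at buf[i].
    Returns (length, next_index, is_partial)."""
    first = buf[i]
    if first < 192:
        # One-octet length: 0..191.
        return first, i + 1, False
    if first < 224:
        # Two-octet length: 192..8383.
        return ((first - 192) << 8) + buf[i + 1] + 192, i + 2, False
    if first == 255:
        # Five-octet length: full 32-bit big-endian value follows.
        return int.from_bytes(buf[i + 1:i + 5], "big"), i + 5, False
    # 224..254: partial body length, a power-of-two chunk size.
    return 1 << (first & 0x1F), i + 1, True

assert decode_length(bytes([100]), 0) == (100, 1, False)
assert decode_length(bytes([192, 0]), 0) == (192, 2, False)
assert decode_length(bytes([255, 0, 0, 1, 0]), 0) == (256, 5, False)
assert decode_length(bytes([224]), 0) == (1, 1, True)
```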
@@ -82,14 +77,8 @@ def check_openpgp_payload(payload: bytes):
return False
-def check_armored_payload(payload: str, outgoing: bool):
+def check_armored_payload(payload: str):
-"""Check the armored PGP message for invalid content.
-:param payload: the armored PGP message
-:param outgoing: whether the message is outgoing or incoming
-:return: whether the message is a valid PGP message
-"""
-prefix = "-----BEGIN PGP MESSAGE-----\r\n"
+prefix = "-----BEGIN PGP MESSAGE-----\r\n\r\n"
if not payload.startswith(prefix):
return False
payload = payload.removeprefix(prefix)
@@ -101,16 +90,6 @@ def check_armored_payload(payload: str, outgoing: bool):
return False
payload = payload.removesuffix(suffix)
-version_comment = "Version: "
-if payload.startswith(version_comment):
-if outgoing: # Disallow comments in outgoing messages
-return False
-# Remove comments from incoming messages
-payload = payload.partition("\r\n")[2]
-while payload.startswith("\r\n"):
-payload = payload.removeprefix("\r\n")
# Remove CRC24.
payload = payload.rpartition("=")[0]
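The `rpartition("=")` above relies on the ASCII-armor CRC24 checksum being the last `=`-prefixed token of the armored body. A tiny illustration of the same trick (caveat: base64 padding also uses `=`, so this mirrors the shortcut in the diff rather than a full armor parser):

```python
def strip_crc24(body: str) -> str:
    # Drop everything from the last "=" onward; for an armored body whose
    # final line is the "=XXXX" CRC24, that removes the checksum.
    return body.rpartition("=")[0]

assert strip_crc24("SGVsbG8\r\n=abcd\r\n") == "SGVsbG8\r\n"
```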
@@ -146,7 +125,7 @@ def is_securejoin(message):
return True
-def check_encrypted(message, outgoing=True):
+def check_encrypted(message):
"""Check that the message is an OpenPGP-encrypted message.
MIME structure of the message must correspond to <https://www.rfc-editor.org/rfc/rfc3156>.
@@ -173,7 +152,7 @@ def check_encrypted(message, outgoing=True):
if part.get_content_type() != "application/octet-stream":
return False
-if not check_armored_payload(part.get_payload(), outgoing=outgoing):
+if not check_armored_payload(part.get_payload()):
return False
else:
return False
@@ -188,12 +167,7 @@ async def asyncmain_beforequeue(config, mode):
else:
port = config.filtermail_smtp_port_incoming
handler = IncomingBeforeQueueHandler(config)
-HackedController(
-handler,
-hostname="127.0.0.1",
-port=port,
-data_size_limit=config.max_message_size,
-).start()
+HackedController(handler, hostname="127.0.0.1", port=port).start()
def recipient_matches_passthrough(recipient, passthrough_recipients):
@@ -212,13 +186,11 @@ class HackedController(Controller):
class SMTPDiscardRCPTO_options(SMTP):
def _getparams(self, params):
-# Ignore RCPT TO parameters.
-#
-# Otherwise parameters such as `ORCPT=...`
-# or `NOTIFY=DELAY,FAILURE` (generated by Stalwart)
-# make aiosmtpd reject the message here:
-# <https://github.com/aio-libs/aiosmtpd/blob/98f578389ae86e5345cc343fa4e5a17b21d9c96d/aiosmtpd/smtp.py#L1379-L1384>
-return {}
+# aiosmtpd's SMTP daemon fails to handle a request if there are RCPT TO options
+# We just ignore them for our incoming filtermail purposes
+if len(params) == 1 and params[0].startswith("ORCPT"):
+return {}
+return super()._getparams(params)
class OutgoingBeforeQueueHandler:
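For context on what `_getparams` discards: RCPT TO may carry ESMTP parameters after the address, such as the DSN options of RFC 3461. A small illustration of how such a command line splits into address and parameter list (the helper name is hypothetical, not aiosmtpd's API):

```python
def split_rcpt_params(arg: str):
    # Everything after the closing ">" is a space-separated ESMTP
    # parameter list, e.g. ORCPT=... NOTIFY=...
    addr, _, rest = arg.partition(">")
    return addr + ">", rest.split()

addr, params = split_rcpt_params(
    "<bob@example.org> ORCPT=rfc822;bob@example.org NOTIFY=DELAY,FAILURE"
)
assert addr == "<bob@example.org>"
assert params == ["ORCPT=rfc822;bob@example.org", "NOTIFY=DELAY,FAILURE"]
```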
@@ -227,7 +199,7 @@ class OutgoingBeforeQueueHandler:
self.send_rate_limiter = SendRateLimiter()
async def handle_MAIL(self, server, session, envelope, address, mail_options):
-log_info(f"handle_MAIL from {address}")
+logging.info(f"handle_MAIL from {address}")
envelope.mail_from = address
max_sent = self.config.max_user_send_per_minute
if not self.send_rate_limiter.is_sending_allowed(address, max_sent):
@@ -240,15 +212,11 @@ class OutgoingBeforeQueueHandler:
return "250 OK"
async def handle_DATA(self, server, session, envelope):
-loop = asyncio.get_running_loop()
-return await loop.run_in_executor(None, self.sync_handle_DATA, envelope)
-def sync_handle_DATA(self, envelope):
-log_info("handle_DATA before-queue")
+logging.info("handle_DATA before-queue")
error = self.check_DATA(envelope)
if error:
return error
-log_info("re-injecting the mail that passed checks")
+logging.info("re-injecting the mail that passed checks")
client = SMTPClient("localhost", self.config.postfix_reinject_port)
client.sendmail(
envelope.mail_from, envelope.rcpt_tos, envelope.original_content
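The executor-based `handle_DATA` on the old side of this hunk follows the standard asyncio pattern for keeping a blocking, CPU-bound check out of the event loop. A self-contained sketch of that pattern (names and return codes are illustrative):

```python
import asyncio
import time

def cpu_bound_check(data: bytes) -> str:
    time.sleep(0.01)  # stand-in for expensive message parsing/filtering
    return "250 OK" if data else "550 empty message"

async def handle(data: bytes) -> str:
    # Offload the blocking check to the default thread pool so the event
    # loop stays responsive to other SMTP sessions.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, cpu_bound_check, data)

assert asyncio.run(handle(b"mail body")) == "250 OK"
assert asyncio.run(handle(b"")) == "550 empty message"
```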
@@ -257,10 +225,10 @@ class OutgoingBeforeQueueHandler:
def check_DATA(self, envelope):
"""the central filtering function for e-mails."""
-log_info(f"Processing DATA message from {envelope.mail_from}")
+logging.info(f"Processing DATA message from {envelope.mail_from}")
message = BytesParser(policy=policy.default).parsebytes(envelope.content)
-mail_encrypted = check_encrypted(message, outgoing=True)
+mail_encrypted = check_encrypted(message)
_, from_addr = parseaddr(message.get("from").strip())
@@ -297,15 +265,11 @@ class IncomingBeforeQueueHandler:
self.config = config
async def handle_DATA(self, server, session, envelope):
-loop = asyncio.get_running_loop()
-return await loop.run_in_executor(None, self.sync_handle_DATA, envelope)
-def sync_handle_DATA(self, envelope):
-log_info("handle_DATA before-queue")
+logging.info("handle_DATA before-queue")
error = self.check_DATA(envelope)
if error:
return error
-log_info("re-injecting the mail that passed checks")
+logging.info("re-injecting the mail that passed checks")
# the smtp daemon on reinject_port_incoming gives it to dkim milter
# which looks at source address to determine whether to verify or sign
@@ -321,10 +285,10 @@ class IncomingBeforeQueueHandler:
def check_DATA(self, envelope):
"""the central filtering function for e-mails."""
-log_info(f"Processing DATA message from {envelope.mail_from}")
+logging.info(f"Processing DATA message from {envelope.mail_from}")
message = BytesParser(policy=policy.default).parsebytes(envelope.content)
-mail_encrypted = check_encrypted(message, outgoing=False)
+mail_encrypted = check_encrypted(message)
if mail_encrypted or is_securejoin(message):
print("Incoming: Filtering encrypted mail.", file=sys.stderr)
@@ -363,19 +327,16 @@ class SendRateLimiter:
return False return False
def log_info(msg):
print(msg, file=sys.stderr)
def main(): def main():
args = sys.argv[1:] args = sys.argv[1:]
assert len(args) == 2 assert len(args) == 2
config = read_config(args[0]) config = read_config(args[0])
mode = args[1] mode = args[1]
logging.basicConfig(level=logging.WARN)
loop = asyncio.new_event_loop() loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop) asyncio.set_event_loop(loop)
assert mode in ["incoming", "outgoing"] assert mode in ["incoming", "outgoing"]
task = asyncmain_beforequeue(config, mode) task = asyncmain_beforequeue(config, mode)
loop.create_task(task) loop.create_task(task)
log_info("entering serving loop") logging.info("entering serving loop")
loop.run_forever() loop.run_forever()

View File

@@ -1,168 +0,0 @@
"""
command line tool to analyze mailbox message storage
example invocation:
python -m chatmaild.fsreport /path/to/chatmail.ini
to show storage summaries for all "cur" folders
python -m chatmaild.fsreport /path/to/chatmail.ini --mdir cur
to show storage summaries only for first 1000 mailboxes
python -m chatmaild.fsreport /path/to/chatmail.ini --maxnum 1000
"""
import os
from argparse import ArgumentParser
from datetime import datetime
from chatmaild.config import read_config
from chatmaild.expire import iter_mailboxes
DAYSECONDS = 24 * 60 * 60
MONTHSECONDS = DAYSECONDS * 30
def HSize(size: int):
"""Format a size integer as a Human-readable string Kilobyte, Megabyte or Gigabyte"""
if size < 10000:
return f"{size / 1000:5.2f}K"
if size < 1000 * 1000:
return f"{size / 1000:5.0f}K"
if size < 1000 * 1000 * 1000:
return f"{int(size / 1000000):5.0f}M"
return f"{size / 1000000000:5.2f}G"
class Report:
def __init__(self, now, min_login_age, mdir):
self.size_extra = 0
self.size_messages = 0
self.now = now
self.min_login_age = min_login_age
self.mdir = mdir
self.num_ci_logins = self.num_all_logins = 0
self.login_buckets = {x: 0 for x in (1, 10, 30, 40, 80, 100, 150)}
self.message_buckets = {x: 0 for x in (0, 160000, 500000, 2000000)}
def process_mailbox_stat(self, mailbox):
# categorize login times
last_login = mailbox.last_login
if last_login:
self.num_all_logins += 1
if os.path.basename(mailbox.basedir)[:3] == "ci-":
self.num_ci_logins += 1
else:
for days in self.login_buckets:
if last_login >= self.now - days * DAYSECONDS:
self.login_buckets[days] += 1
cutoff_login_date = self.now - self.min_login_age * DAYSECONDS
if last_login and last_login <= cutoff_login_date:
# categorize message sizes
for size in self.message_buckets:
for msg in mailbox.messages:
if msg.size >= size:
if self.mdir and not msg.relpath.startswith(self.mdir):
continue
self.message_buckets[size] += msg.size
self.size_messages += sum(entry.size for entry in mailbox.messages)
self.size_extra += sum(entry.size for entry in mailbox.extrafiles)
def dump_summary(self):
all_messages = self.size_messages
print()
print("## Mailbox storage use analysis")
print(f"Mailbox data total size: {HSize(self.size_extra + all_messages)}")
print(f"Messages total size : {HSize(all_messages)}")
try:
percent = self.size_extra / (self.size_extra + all_messages) * 100
except ZeroDivisionError:
percent = 100
print(f"Extra files : {HSize(self.size_extra)} ({percent:.2f}%)")
print()
if self.min_login_age:
print(f"### Message storage for {self.min_login_age} days old logins")
pref = f"[{self.mdir}] " if self.mdir else ""
for minsize, sumsize in self.message_buckets.items():
percent = (sumsize / all_messages * 100) if all_messages else 0
print(
f"{pref}larger than {HSize(minsize)}: {HSize(sumsize)} ({percent:.2f}%)"
)
user_logins = self.num_all_logins - self.num_ci_logins
def p(num):
return f"({num / user_logins * 100:2.2f}%)" if user_logins else "100%"
print()
print(f"## Login stats, from date reference {datetime.fromtimestamp(self.now)}")
print(f"all: {HSize(self.num_all_logins)}")
print(f"non-ci: {HSize(user_logins)}")
print(f"ci: {HSize(self.num_ci_logins)}")
for days, active in self.login_buckets.items():
print(f"last {days:3} days: {HSize(active)} {p(active)}")
def main(args=None):
"""Report about filesystem storage usage of all mailboxes and messages"""
parser = ArgumentParser(description=main.__doc__)
ini = "/usr/local/lib/chatmaild/chatmail.ini"
parser.add_argument(
"chatmail_ini",
action="store",
nargs="?",
help=f"path pointing to chatmail.ini file, default: {ini}",
default=ini,
)
parser.add_argument(
"--days",
default=0,
action="store",
help="assume date to be days older than now",
)
parser.add_argument(
"--min-login-age",
default=0,
dest="min_login_age",
action="store",
help="only sum up message size if last login is at least min-login-age days old",
)
parser.add_argument(
"--mdir",
action="store",
help="only consider 'cur' or 'new' or 'tmp' messages for summary",
)
parser.add_argument(
"--maxnum",
default=None,
action="store",
help="maximum number of mailboxes to iterate on",
)
args = parser.parse_args(args)
config = read_config(args.chatmail_ini)
now = datetime.utcnow().timestamp()
if args.days:
now = now - 86400 * int(args.days)
maxnum = int(args.maxnum) if args.maxnum else None
rep = Report(now=now, min_login_age=int(args.min_login_age), mdir=args.mdir)
for mbox in iter_mailboxes(str(config.mailboxes_dir), maxnum=maxnum):
rep.process_mailbox_stat(mbox)
rep.dump_summary()
if __name__ == "__main__":
main()

View File

@@ -23,9 +23,6 @@ max_message_size = 31457280
# days after which mails are unconditionally deleted # days after which mails are unconditionally deleted
delete_mails_after = 20 delete_mails_after = 20
# days after which large messages (>200k) are unconditionally deleted
delete_large_after = 7
# days after which users without a successful login are deleted (database and mails) # days after which users without a successful login are deleted (database and mails)
delete_inactive_users_after = 90 delete_inactive_users_after = 90
@@ -43,10 +40,7 @@ passthrough_senders =
# list of e-mail recipients for which to accept outbound un-encrypted mails # list of e-mail recipients for which to accept outbound un-encrypted mails
# (space-separated, item may start with "@" to whitelist whole recipient domains) # (space-separated, item may start with "@" to whitelist whole recipient domains)
passthrough_recipients = xstore@testrun.org echo@{mail_domain} passthrough_recipients = xstore@testrun.org
# path to www directory - documented here: https://github.com/chatmail/relay/#custom-web-pages
#www_folder = www
# #
# Deployment Details # Deployment Details
@@ -63,9 +57,6 @@ postfix_reinject_port_incoming = 10026
# if set to "True" IPv6 is disabled # if set to "True" IPv6 is disabled
disable_ipv6 = False disable_ipv6 = False
# Your email adress, which will be used in acmetool to manage Let's Encrypt SSL certificates
acme_email =
# Defaults to https://iroh.{{mail_domain}} and running `iroh-relay` on the chatmail # Defaults to https://iroh.{{mail_domain}} and running `iroh-relay` on the chatmail
# service. # service.
# If you set it to anything else, the service will be disabled # If you set it to anything else, the service will be disabled

View File

@@ -1,7 +1,7 @@
[privacy] [privacy]
passthrough_recipients = privacy@testrun.org xstore@testrun.org echo@{mail_domain} passthrough_recipients = privacy@testrun.org xstore@testrun.org
privacy_postal = privacy_postal =
Merlinux GmbH, Represented by the managing director H. Krekel, Merlinux GmbH, Represented by the managing director H. Krekel,

View File

@@ -1,24 +1,14 @@
import logging import logging
import sys import sys
import time
from contextlib import contextmanager
from .config import read_config from .config import read_config
from .dictproxy import DictProxy from .dictproxy import DictProxy
from .filedict import FileDict from .filedict import FileDict
from .notifier import Notifier from .notifier import Notifier
from .turnserver import turn_credentials
def _is_valid_token_timestamp(timestamp, now):
# A token is invalid after 90 days,
# or if its timestamp is in the future.
return timestamp > now - 3600 * 24 * 90 and timestamp < now + 60
class Metadata: class Metadata:
# each SETMETADATA on this key appends to dictionary # each SETMETADATA on this key appends to a list of unique device tokens
# mapping of unique device tokens
# which only ever get removed if the upstream indicates the token is invalid # which only ever get removed if the upstream indicates the token is invalid
DEVICETOKEN_KEY = "devicetoken" DEVICETOKEN_KEY = "devicetoken"
@@ -28,60 +18,29 @@ class Metadata:
def get_metadata_dict(self, addr): def get_metadata_dict(self, addr):
return FileDict(self.vmail_dir / addr / "metadata.json") return FileDict(self.vmail_dir / addr / "metadata.json")
@contextmanager
def _modify_tokens(self, addr):
with self.get_metadata_dict(addr).modify() as data:
tokens = data.setdefault(self.DEVICETOKEN_KEY, {})
now = int(time.time())
if isinstance(tokens, list):
data[self.DEVICETOKEN_KEY] = tokens = {t: now for t in tokens}
expired_tokens = [
token
for token, timestamp in tokens.items()
if not _is_valid_token_timestamp(timestamp, now)
]
for expired_token in expired_tokens:
del tokens[expired_token]
yield tokens
def add_token_to_addr(self, addr, token): def add_token_to_addr(self, addr, token):
with self._modify_tokens(addr) as tokens: with self.get_metadata_dict(addr).modify() as data:
tokens[token] = int(time.time()) tokens = data.setdefault(self.DEVICETOKEN_KEY, [])
if token not in tokens:
tokens.append(token)
def remove_token_from_addr(self, addr, token): def remove_token_from_addr(self, addr, token):
with self._modify_tokens(addr) as tokens: with self.get_metadata_dict(addr).modify() as data:
tokens = data.get(self.DEVICETOKEN_KEY, [])
if token in tokens: if token in tokens:
del tokens[token] tokens.remove(token)
def get_tokens_for_addr(self, addr): def get_tokens_for_addr(self, addr):
mdict = self.get_metadata_dict(addr).read() mdict = self.get_metadata_dict(addr).read()
tokens = mdict.get(self.DEVICETOKEN_KEY, {}) return mdict.get(self.DEVICETOKEN_KEY, [])
now = int(time.time())
if isinstance(tokens, dict):
token_list = [
token
for token, timestamp in tokens.items()
if _is_valid_token_timestamp(timestamp, now)
]
if len(token_list) < len(tokens):
# Some tokens have expired, remove them.
with self._modify_tokens(addr) as _tokens:
pass
else:
token_list = []
return token_list
class MetadataDictProxy(DictProxy): class MetadataDictProxy(DictProxy):
def __init__(self, notifier, metadata, iroh_relay=None, turn_hostname=None): def __init__(self, notifier, metadata, iroh_relay=None):
super().__init__() super().__init__()
self.notifier = notifier self.notifier = notifier
self.metadata = metadata self.metadata = metadata
self.iroh_relay = iroh_relay self.iroh_relay = iroh_relay
self.turn_hostname = turn_hostname
def handle_lookup(self, parts): def handle_lookup(self, parts):
# Lpriv/43f5f508a7ea0366dff30200c15250e3/devicetoken\tlkj123poi@c2.testrun.org # Lpriv/43f5f508a7ea0366dff30200c15250e3/devicetoken\tlkj123poi@c2.testrun.org
@@ -100,11 +59,6 @@ class MetadataDictProxy(DictProxy):
): ):
# Handle `GETMETADATA "" /shared/vendor/deltachat/irohrelay` # Handle `GETMETADATA "" /shared/vendor/deltachat/irohrelay`
return f"O{self.iroh_relay}\n" return f"O{self.iroh_relay}\n"
elif keyname == "vendor/vendor.dovecot/pvt/server/vendor/deltachat/turn":
res = turn_credentials()
port = 3478
return f"O{self.turn_hostname}:{port}:{res}\n"
logging.warning(f"lookup ignored: {parts!r}") logging.warning(f"lookup ignored: {parts!r}")
return "N\n" return "N\n"
@@ -128,7 +82,6 @@ def main():
config = read_config(config_path) config = read_config(config_path)
iroh_relay = config.iroh_relay iroh_relay = config.iroh_relay
mail_domain = config.mail_domain
vmail_dir = config.mailboxes_dir vmail_dir = config.mailboxes_dir
if not vmail_dir.exists(): if not vmail_dir.exists():
@@ -142,10 +95,7 @@ def main():
notifier.start_notification_threads(metadata.remove_token_from_addr) notifier.start_notification_threads(metadata.remove_token_from_addr)
dictproxy = MetadataDictProxy( dictproxy = MetadataDictProxy(
notifier=notifier, notifier=notifier, metadata=metadata, iroh_relay=iroh_relay
metadata=metadata,
iroh_relay=iroh_relay,
turn_hostname=mail_domain,
) )
dictproxy.serve_forever_from_socket(socket) dictproxy.serve_forever_from_socket(socket)

View File

@@ -15,7 +15,7 @@ ALPHANUMERIC_PUNCT = string.ascii_letters + string.digits + string.punctuation
def create_newemail_dict(config: Config): def create_newemail_dict(config: Config):
user = "".join(random.choices(ALPHANUMERIC, k=config.username_max_length)) user = "".join(random.choices(ALPHANUMERIC, k=config.username_min_length))
password = "".join( password = "".join(
secrets.choice(ALPHANUMERIC_PUNCT) secrets.choice(ALPHANUMERIC_PUNCT)
for _ in range(config.password_min_length + 3) for _ in range(config.password_min_length + 3)

View File

@@ -17,11 +17,11 @@ and which are scheduled for retry using exponential back-off timing.
If a token notification would be scheduled more than DROP_DEADLINE seconds If a token notification would be scheduled more than DROP_DEADLINE seconds
after its first attempt, it is dropped with a log error. after its first attempt, it is dropped with a log error.
Note that tokens are opaque to the notification machinery here Note that tokens are completely opaque to the notification machinery here
and are encrypted foreclosing all ability to distinguish and will in the future be encrypted foreclosing all ability to distinguish
which device token ultimately goes to which phone-provider notification service, which device token ultimately goes to which phone-provider notification service,
or to understand the relation of "device tokens" and chatmail addresses. or to understand the relation of "device tokens" and chatmail addresses.
The meaning and format of tokens is basically a matter of chatmail Core and The meaning and format of tokens is basically a matter of Delta-Chat Core and
the `notification.delta.chat` service. the `notification.delta.chat` service.
""" """
@@ -95,12 +95,7 @@ class Notifier:
logging.warning(f"removing spurious queue item: {queue_path!r}") logging.warning(f"removing spurious queue item: {queue_path!r}")
queue_path.unlink() queue_path.unlink()
continue continue
try: queue_item = PersistentQueueItem.read_from_path(queue_path)
queue_item = PersistentQueueItem.read_from_path(queue_path)
except ValueError:
logging.warning(f"removing spurious queue item: {queue_path!r}")
queue_path.unlink()
continue
self.queue_for_retry(queue_item) self.queue_for_retry(queue_item)
def queue_for_retry(self, queue_item, retry_num=0): def queue_for_retry(self, queue_item, retry_num=0):

View File

@@ -35,7 +35,6 @@ def test_read_config_testrun(make_config):
assert config.max_user_send_per_minute == 60 assert config.max_user_send_per_minute == 60
assert config.max_mailbox_size == "100M" assert config.max_mailbox_size == "100M"
assert config.delete_mails_after == "20" assert config.delete_mails_after == "20"
assert config.delete_large_after == "7"
assert config.username_min_length == 9 assert config.username_min_length == 9
assert config.username_max_length == 9 assert config.username_max_length == 9
assert config.password_min_length == 9 assert config.password_min_length == 9

View File

@@ -1,7 +1,7 @@
import time import time
from chatmaild.delete_inactive_users import delete_inactive_users
from chatmaild.doveauth import AuthDictProxy from chatmaild.doveauth import AuthDictProxy
from chatmaild.expire import main as main_expire
def test_login_timestamps(example_config): def test_login_timestamps(example_config):
@@ -45,12 +45,7 @@ def test_delete_inactive_users(example_config):
for addr in to_remove: for addr in to_remove:
assert example_config.get_user(addr).maildir.exists() assert example_config.get_user(addr).maildir.exists()
main_expire( delete_inactive_users(example_config)
args=[
"--remove",
str(example_config._inipath),
]
)
for p in example_config.mailboxes_dir.iterdir(): for p in example_config.mailboxes_dir.iterdir():
assert not p.name.startswith("old") assert not p.name.startswith("old")

View File

@@ -1,129 +0,0 @@
import os
import random
from datetime import datetime
from fnmatch import fnmatch
from pathlib import Path
import pytest
from chatmaild.expire import FileEntry, MailboxStat, iter_mailboxes
from chatmaild.expire import main as expiry_main
from chatmaild.fsreport import main as report_main
def fill_mbox(basedir):
basedir1 = basedir.joinpath("mailbox1@example.org")
basedir1.mkdir()
password = basedir1.joinpath("password")
password.write_text("xxx")
basedir1.joinpath("maildirsize").write_text("xxx")
garbagedir = basedir1.joinpath("garbagedir")
garbagedir.mkdir()
create_new_messages(basedir1, ["cur/msg1"], size=500)
create_new_messages(basedir1, ["new/msg2"], size=600)
return basedir1
def create_new_messages(basedir, relpaths, size=1000, days=0):
now = datetime.utcnow().timestamp()
for relpath in relpaths:
msg_path = Path(basedir).joinpath(relpath)
msg_path.parent.mkdir(parents=True, exist_ok=True)
msg_path.write_text("x" * size)
# accessed now, modified N days ago
os.utime(msg_path, (now, now - days * 86400))
@pytest.fixture
def mbox1(example_config):
basedir1 = fill_mbox(example_config.mailboxes_dir)
return MailboxStat(basedir1)
def test_fileentry_ordering(tmp_path):
entries = [FileEntry(f"x{i}", size=i + 10, mtime=1000 - i) for i in range(10)]
expected = list(entries)
random.shuffle(entries)
entries.sort(key=lambda x: x.size)
assert entries == expected
def test_no_mailboxes(tmp_path, capsys):
assert [] == list(iter_mailboxes(str(tmp_path.joinpath("notexists")), maxnum=10))
out, err = capsys.readouterr()
assert "no mailboxes" in err
def test_stats_mailbox(mbox1):
password = Path(mbox1.basedir).joinpath("password")
assert mbox1.last_login == password.stat().st_mtime
assert len(mbox1.messages) == 2
msgs = list(sorted(mbox1.messages, key=lambda x: x.size))
assert len(msgs) == 2
assert msgs[0].size == 500 # cur
assert msgs[1].size == 600 # new
create_new_messages(mbox1.basedir, ["large-extra"], size=1000)
create_new_messages(mbox1.basedir, ["index-something"], size=3)
mbox2 = MailboxStat(mbox1.basedir)
assert len(mbox2.extrafiles) == 4
assert mbox2.extrafiles[0].size == 1000
# cope well with mailbox dirs that have no password (for whatever reason)
Path(mbox1.basedir).joinpath("password").unlink()
mbox3 = MailboxStat(mbox1.basedir)
assert mbox3.last_login is None
def test_report_no_mailboxes(example_config):
args = (str(example_config._inipath),)
report_main(args)
def test_report(mbox1, example_config):
args = (str(example_config._inipath),)
report_main(args)
args = list(args) + "--days 1".split()
report_main(args)
args = list(args) + "--min-login-age 1".split()
report_main(args)
args = list(args) + "--mdir cur".split()
report_main(args)
def test_expiry_cli_basic(example_config, mbox1):
args = (str(example_config._inipath),)
expiry_main(args)
def test_expiry_cli_old_files(capsys, example_config, mbox1):
relpaths_old = ["cur/msg_old1", "cur/msg_old1"]
cutoff_days = int(example_config.delete_mails_after) + 1
create_new_messages(mbox1.basedir, relpaths_old, size=1000, days=cutoff_days)
relpaths_large = ["cur/msg_old_large1", "new/msg_old_large2"]
cutoff_days = int(example_config.delete_large_after) + 1
create_new_messages(
mbox1.basedir, relpaths_large, size=1000 * 300, days=cutoff_days
)
create_new_messages(mbox1.basedir, ["cur/shouldstay"], size=1000 * 300, days=1)
args = str(example_config._inipath), "--remove", "-v"
expiry_main(args)
out, err = capsys.readouterr()
allpaths = relpaths_old + relpaths_large + ["maildirsize"]
for path in allpaths:
for line in err.split("\n"):
if fnmatch(line, f"removing*{path}"):
break
else:
if path != "new/msg_old_large2":
pytest.fail(f"failed to remove {path}\n{err}")
assert "shouldstay" not in err

View File

@@ -241,9 +241,8 @@ def test_cleartext_passthrough_senders(gencreds, handler, maildata):
def test_check_armored_payload(): def test_check_armored_payload():
prefix = "-----BEGIN PGP MESSAGE-----\r\n" payload = """-----BEGIN PGP MESSAGE-----\r
comment = "Version: ProtonMail\r\n" \r
payload = """\r
wU4DSqFx0d1yqAoSAQdAYkX/ZN/Az4B0k7X47zKyWrXxlDEdS3WOy0Yf2+GJTFgg\r wU4DSqFx0d1yqAoSAQdAYkX/ZN/Az4B0k7X47zKyWrXxlDEdS3WOy0Yf2+GJTFgg\r
Zk5ql0mLG8Ze+ZifCS0XMO4otlemSyJ0K1ZPdFMGzUDBTgNqzkFabxXoXRIBB0AM\r Zk5ql0mLG8Ze+ZifCS0XMO4otlemSyJ0K1ZPdFMGzUDBTgNqzkFabxXoXRIBB0AM\r
755wlX41X6Ay3KhnwBq7yEqSykVH6F3x11iHPKraLCAGZoaS8bKKNy/zg5slda1X\r 755wlX41X6Ay3KhnwBq7yEqSykVH6F3x11iHPKraLCAGZoaS8bKKNy/zg5slda1X\r
@@ -279,25 +278,16 @@ UN4fiB0KR9JyG2ayUdNJVkXZSZLnHyRgiaadlpUo16LVvw==\r
\r \r
""" """
commented_payload = prefix + comment + payload assert check_armored_payload(payload) == True
assert check_armored_payload(commented_payload, outgoing=False) == True
assert check_armored_payload(commented_payload, outgoing=True) == False
payload = prefix + payload
assert check_armored_payload(payload, outgoing=False) == True
assert check_armored_payload(payload, outgoing=True) == True
payload = payload.removesuffix("\r\n") payload = payload.removesuffix("\r\n")
assert check_armored_payload(payload, outgoing=False) == True assert check_armored_payload(payload) == True
assert check_armored_payload(payload, outgoing=True) == True
payload = payload.removesuffix("\r\n") payload = payload.removesuffix("\r\n")
assert check_armored_payload(payload, outgoing=False) == True assert check_armored_payload(payload) == True
assert check_armored_payload(payload, outgoing=True) == True
payload = payload.removesuffix("\r\n") payload = payload.removesuffix("\r\n")
assert check_armored_payload(payload, outgoing=False) == True assert check_armored_payload(payload) == True
assert check_armored_payload(payload, outgoing=True) == True
payload = """-----BEGIN PGP MESSAGE-----\r payload = """-----BEGIN PGP MESSAGE-----\r
\r \r
@@ -305,8 +295,7 @@ HELLOWORLD
-----END PGP MESSAGE-----\r -----END PGP MESSAGE-----\r
\r \r
""" """
assert check_armored_payload(payload, outgoing=False) == False assert check_armored_payload(payload) == False
assert check_armored_payload(payload, outgoing=True) == False
payload = """-----BEGIN PGP MESSAGE-----\r payload = """-----BEGIN PGP MESSAGE-----\r
\r \r
@@ -314,48 +303,4 @@ HELLOWORLD
-----END PGP MESSAGE-----\r -----END PGP MESSAGE-----\r
\r \r
""" """
assert check_armored_payload(payload, outgoing=False) == False assert check_armored_payload(payload) == False
assert check_armored_payload(payload, outgoing=True) == False
# Test payload using partial body length
# as generated by GopenPGP.
payload = """-----BEGIN PGP MESSAGE-----\r
\r
wV4DdCVjRfOT3TQSAQdAY5+pjT6mlCxPGdR3be4w7oJJRUGIPI/Vnh+mJxGSm34w\r
LNlVc89S1g22uQYFif2sUJsQWbpoHpNkuWpkSgOaHmNvrZiY/YU5iv+cZ3LbmtUG\r
0uoBisSHh9O1c+5sYZSbrvYZ1NOwlD7Fv/U5/Mw4E5+CjxfdgNGp5o3DDddzPK78\r
jseDhdSXxnaiIJC93hxNX6R1RPt3G2gukyzx69wciPQShcF8zf3W3o75Ed7B8etV\r
QEeB16xzdFhKa9JxdjTu3osgCs21IO7wpcFkjc7nZzlW6jPnELJJaNmv4yOOCjMp\r
6YAkaN/BkL+jHTznHDuDsT5ilnTXpwHDU1Cm9PIx/KFcNCQnIB+2DcdIHPHUH1ci\r
jvqoeXAVWjKXEjS7PqPFuP/xGbrWG2ugs+toXJOKbgRkExvKs1dwPFKrgghvCVbW\r
AcKejQKAPArLwpkA7aD875TZQShvGt74fNs45XBlGOYOnNOAJ1KAmzrXLIDViyyB\r
kDsmTBk785xofuCkjBpXSe6vsMprPzCteDfaUibh8FHeJjucxPerwuOPEmnogNaf\r
YyL4+iy8H8I9/p7pmUqILprxTG0jTOtlk0bTVzeiF56W1xbtSEMuOo4oFbQTyOM2\r
bKXaYo774Jm+rRtKAnnI2dtf9RpK19cog6YNzfYjesLKbXDsPZbN5rmwyFiCvvxC\r
kQ6JLob+B2fPdY2gzy7LypxktS8Zi1HJcWDHJGVmQodaDLqKUObb4M26bXDe6oxI\r
NS8PJz5exVbM3KhZnUOEn6PJRBBf5a/ZqxlhZPcQo/oBuhKpBRpO5kSDwPIUByu3\r
UlXLSkpMqe9pUarAOEuQjfl2RVY7U+RrQYp4YP5keMO+i8NCefAFbowTTufO1JIq\r
2nVgCi/QVnxZyEc9OYt/8AE3g4cdojE+vsSDifZLSWYIetpfrohHv3dT3StD1QRG\r
0QE6qq6oKpg/IL0cjvuX4c7a7bslv2fXp8t75y37RU6253qdIebhxc/cRhPbc/yu\r
p0YLyD4SrvKTLP2ZV95jT4IPEpqm4AN3QmiOzdtqR2gLyb62L8QfqI/FdwsIiRiM\r
hqydwoqt/lfSqG1WKPh+6EkMkH+TDiCC1BQdbN1MNcyUtcjb35PR2c8Ld2TF3guA\r
jLIqMt/Vb7hBoMb2FcsOYY25ka9oV62OwgKWLXnFzk+modMR5fzb4kxVVAYEqP+D\r
T5KO1Vs76v1fyPGOq6BbBCvLwTqe/e6IZInJles4v5jrhnLcGKmNGivCUDe6X6NY\r
UKNt5RsZllwDQpaAb5dMNhyrk8SgIE7TBI7rvqIdUCE52Vy+0JDxFg5olRpFUfO6\r
/MyTW3Yo/ekk/npHr7iYYqJTCc21bDGLWQcIo/XO7WPxrKNWGBNPFnkRdw0MaKr4\r
+cEM3V8NFnSEpC12xA+RX/CezuJtwXZK5MpG76eYqMO6qyC+c25YcFecEufDZDxx\r
ZLqRszVRyxyWPtk/oIeQK2v9wOqY6N9/ff01gHz69vqYqN5bUw/QKZsmx1zW+gPw\r
6x2tDK2BHeYl182gCbhlKISRFwCtbjqZSkiKWao/VtygHkw0fK34avJuyQ/X9YaN\r
BRy+7Lf3VA53pnB5WJ1xwRXN8VDvmZeXzv2krHveCMemj0OjnRoCLu117xN0A5m9\r
Fm/RoDix5PolDHtWTtr2m1n2hp2LHnj8at9lFEd0SKhAYHVL9KjzycwWODZRXt+x\r
zGDDuooEeTvdY5NLyKcl4gETz1ZP4Ez5jGGjhPSwSpq1mU7UaJ9ZXXdr4KHyifW6\r
ggNzNsGhXTap7IWZpTtqXABydfiBshmH2NjqtNDwBweJVSgP10+r0WhMWlaZs6xl\r
V3o5yskJt6GlkwpJxZrTvN6Tiww/eW7HFV6NGf7IRSWY5tJc/iA7/92tOmkdvJ1q\r
myLbG7cJB787QjplEyVe2P/JBO6xYvbkJLf9Q+HaviTO25rugRSrYsoKMDfO8VlQ\r
1CcnTPVtApPZJEQzAWJEgVAM8uIlkqWJJMgyWT34sTkdBeCUFGloXQFs9Yxd0AGf\r
/zHEkYZSTKpVSvAIGu4=\r
=6iHb\r
-----END PGP MESSAGE-----\r
"""
assert check_armored_payload(payload, outgoing=False) == True
assert check_armored_payload(payload, outgoing=True) == True

View File

@@ -1,78 +0,0 @@
import smtplib
import subprocess
import sys
import pytest
@pytest.fixture
def smtpserver():
from pytest_localserver import smtp
server = smtp.Server("127.0.0.1")
server.start()
yield server
server.stop()
@pytest.fixture
def make_popen(request):
def popen(cmdargs, **kw):
p = subprocess.Popen(
cmdargs,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**kw,
)
def fin():
p.terminate()
out, err = p.communicate()
print(out.decode("ascii"))
print(err.decode("ascii"), file=sys.stderr)
request.addfinalizer(fin)
return p
return popen
@pytest.mark.parametrize("filtermail_mode", ["outgoing", "incoming"])
def test_one_mail(
make_config, make_popen, smtpserver, maildata, filtermail_mode, monkeypatch
):
monkeypatch.setenv("PYTHONUNBUFFERED", "1")
smtp_inject_port = 20025
if filtermail_mode == "outgoing":
settings = dict(
postfix_reinject_port=smtpserver.port,
filtermail_smtp_port=smtp_inject_port,
)
else:
settings = dict(
postfix_reinject_port_incoming=smtpserver.port,
filtermail_smtp_port_incoming=smtp_inject_port,
)
config = make_config("example.org", settings=settings)
path = str(config._inipath)
popen = make_popen(["filtermail", path, filtermail_mode])
line = popen.stderr.readline().strip()
if b"loop" not in line:
print(line.decode("ascii"), file=sys.stderr)
pytest.fail("starting filtermail failed")
addr = f"user1@{config.mail_domain}"
config.get_user(addr).set_password("l1k2j3l1k2j3l")
# send encrypted mail
data = str(maildata("encrypted.eml", from_addr=addr, to_addr=addr))
client = smtplib.SMTP("localhost", smtp_inject_port)
client.sendmail(addr, [addr], data)
assert len(smtpserver.outbox) == 1
# send un-encrypted mail that errors
data = str(maildata("fake-encrypted.eml", from_addr=addr, to_addr=addr))
with pytest.raises(smtplib.SMTPDataError) as e:
client.sendmail(addr, [addr], data)
assert e.value.smtp_code == 523

View File

@@ -242,22 +242,6 @@ def test_requeue_removes_tmp_files(notifier, metadata, testaddr, caplog):
assert queue_item.addr == testaddr assert queue_item.addr == testaddr
def test_requeue_removes_invalid_files(notifier, metadata, testaddr, caplog):
metadata.add_token_to_addr(testaddr, "01234")
notifier.new_message_for_addr(testaddr, metadata)
# empty/invalid files should be ignored
p = notifier.queue_dir.joinpath("1203981203")
p.touch()
notifier2 = notifier.__class__(notifier.queue_dir)
notifier2.requeue_persistent_queue_items()
assert "spurious" in caplog.records[0].msg
assert not p.exists()
assert notifier2.retry_queues[0].qsize() == 1
when, queue_item = notifier2.retry_queues[0].get()
assert when <= int(time.time())
assert queue_item.addr == testaddr
def test_start_and_stop_notification_threads(notifier, testaddr): def test_start_and_stop_notification_threads(notifier, testaddr):
threads = notifier.start_notification_threads(None) threads = notifier.start_notification_threads(None)
for retry_num, threadlist in threads.items(): for retry_num, threadlist in threads.items():

View File

@@ -1,9 +0,0 @@
#!/usr/bin/env python3
import socket
def turn_credentials() -> str:
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client_socket:
client_socket.connect("/run/chatmail-turn/turn.socket")
with client_socket.makefile("rb") as file:
return file.readline().decode("utf-8")
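The client pattern above (connect to a UNIX socket, read one credential line) can be demonstrated end-to-end with a throwaway server. The socket path, credential string, and server here are hypothetical stand-ins for the real `/run/chatmail-turn/turn.socket` service:

```python
import socket
import tempfile
import threading
from pathlib import Path

sock_path = str(Path(tempfile.mkdtemp()) / "turn.socket")

# Hypothetical one-shot server: hands out a single credential line per connection.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"user123:secret456\n")

t = threading.Thread(target=serve_once)
t.start()

def turn_credentials(path: str) -> str:
    # Same shape as the function above, parametrized on the socket path.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client_socket:
        client_socket.connect(path)
        with client_socket.makefile("rb") as file:
            return file.readline().decode("utf-8")

creds = turn_credentials(sock_path)
t.join()
srv.close()
```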

View File

@@ -58,8 +58,7 @@ class User:
if not self.addr.startswith("echo@"): if not self.addr.startswith("echo@"):
logging.error(f"could not write password for: {self.addr}") logging.error(f"could not write password for: {self.addr}")
raise raise
if not self.addr.startswith("echo@"): self.enforce_E2EE_path.touch()
self.enforce_E2EE_path.touch()
def set_last_login_timestamp(self, timestamp): def set_last_login_timestamp(self, timestamp):
"""Track login time with daily granularity """Track login time with daily granularity

View File

@@ -41,6 +41,3 @@ lint.select = [
"PLE", # Pylint Error "PLE", # Pylint Error
"PLW", # Pylint Warning "PLW", # Pylint Warning
] ]
lint.ignore = [
"PLC0415" # import-outside-top-level
]

View File

@@ -7,35 +7,17 @@ import io
import shutil import shutil
import subprocess import subprocess
import sys import sys
from io import StringIO
from pathlib import Path from pathlib import Path
from chatmaild.config import Config, read_config from chatmaild.config import Config, read_config
from pyinfra import facts, host, logger from pyinfra import facts, host
from pyinfra.api import FactBase from pyinfra.facts.files import File
from pyinfra.facts.files import File, Sha256File
from pyinfra.facts.server import Sysctl
from pyinfra.facts.systemd import SystemdEnabled from pyinfra.facts.systemd import SystemdEnabled
from pyinfra.operations import apt, files, pip, server, systemd from pyinfra.operations import apt, files, pip, server, systemd
from .acmetool import deploy_acmetool from .acmetool import deploy_acmetool
class Port(FactBase):
"""
Returns the process occupying a port.
"""
def command(self, port: int) -> str:
return (
"ss -lptn 'src :%d' | awk 'NR>1 {print $6,$7}' | sed 's/users:((\"//;s/\".*//'"
% (port,)
)
def process(self, output: list[str]) -> str:
return output[0]
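The `awk`/`sed` pipeline in the fact's command extracts the quoted process name from the last field of `ss -lptn` output. A pure-Python equivalent of that parsing, run against a fabricated sample line, looks like this:

```python
import re

# A sample `ss -lptn 'src :PORT'` data line (fabricated for illustration).
sample = 'LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=1234,fd=13))'

# Mirror of the sed substitutions above: strip the 'users:(("' prefix,
# then everything from the closing quote onward.
field = sample.split()[-1]                   # users:(("master",pid=1234,fd=13))
name = re.sub(r'^users:\(\("', "", field)
name = re.sub(r'".*', "", name)
```

`name` is then the bare process name (here `master`), matching what the shell pipeline prints.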
def _build_chatmaild(dist_dir) -> None: def _build_chatmaild(dist_dir) -> None:
dist_dir = Path(dist_dir).resolve() dist_dir = Path(dist_dir).resolve()
if dist_dir.exists(): if dist_dir.exists():
@@ -128,11 +110,6 @@ def _install_remote_venv_with_chatmaild(config) -> None:
"echobot", "echobot",
"chatmail-metadata", "chatmail-metadata",
"lastlogin", "lastlogin",
"turnserver",
"chatmail-expire",
"chatmail-expire.timer",
"chatmail-fsreport",
"chatmail-fsreport.timer",
): ):
execpath = fn if fn != "filtermail-incoming" else "filtermail" execpath = fn if fn != "filtermail-incoming" else "filtermail"
params = dict( params = dict(
@@ -141,34 +118,27 @@ def _install_remote_venv_with_chatmaild(config) -> None:
remote_venv_dir=remote_venv_dir, remote_venv_dir=remote_venv_dir,
mail_domain=config.mail_domain, mail_domain=config.mail_domain,
) )
-basename = fn if "." in fn else f"{fn}.service"
-source_path = importlib.resources.files(__package__).joinpath("service", f"{basename}.f")
+source_path = importlib.resources.files(__package__).joinpath(
+"service", f"{fn}.service.f"
+)
content = source_path.read_text().format(**params).encode()
files.put(
-name=f"Upload {basename}",
+name=f"Upload {fn}.service",
src=io.BytesIO(content),
-dest=f"/etc/systemd/system/{basename}",
+dest=f"/etc/systemd/system/{fn}.service",
**root_owned,
)
if fn == "chatmail-expire" or fn == "chatmail-fsreport":
# don't auto-start but let the corresponding timer trigger execution
enabled = False
else:
enabled = True
systemd.service(
-name=f"Setup {basename}",
-service=basename,
-running=enabled,
-enabled=enabled,
-restarted=enabled,
+name=f"Setup {fn} service",
+service=f"{fn}.service",
+running=True,
+enabled=True,
+restarted=True,
daemon_reload=True,
)
def _configure_opendkim(domain: str, dkim_selector: str = "dkim") -> bool:
"""Configures OpenDKIM"""
need_restart = False
@@ -260,6 +230,7 @@ def _configure_opendkim(domain: str, dkim_selector: str = "dkim") -> bool:
)
need_restart |= service_file.changed
return need_restart
@@ -330,40 +301,6 @@ def _configure_postfix(config: Config, debug: bool = False) -> bool:
return need_restart
def _install_dovecot_package(package: str, arch: str):
arch = "amd64" if arch == "x86_64" else arch
arch = "arm64" if arch == "aarch64" else arch
url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
deb_filename = "/root/" + url.split("/")[-1]
match (package, arch):
case ("core", "amd64"):
sha256 = "43f593332e22ac7701c62d58b575d2ca409e0f64857a2803be886c22860f5587"
case ("core", "arm64"):
sha256 = "4d21eba1a83f51c100f08f2e49f0c9f8f52f721ebc34f75018e043306da993a7"
case ("imapd", "amd64"):
sha256 = "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86"
case ("imapd", "arm64"):
sha256 = "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f"
case ("lmtpd", "amd64"):
sha256 = "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab"
case ("lmtpd", "arm64"):
sha256 = "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f"
case _:
apt.packages(packages=[f"dovecot-{package}"])
return
files.download(
name=f"Download dovecot-{package}",
src=url,
dest=deb_filename,
sha256sum=sha256,
cache_time=60 * 60 * 24 * 365 * 10, # never redownload the package
)
apt.deb(name=f"Install dovecot-{package}", src=deb_filename)
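The per-arch `sha256` pinning above is what `files.download(..., sha256sum=...)` enforces before the .deb is trusted. A minimal Python sketch of that check (with made-up payload bytes, not a real package):

```python
import hashlib

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """Check downloaded bytes against a pinned SHA-256 digest,
    as files.download(..., sha256sum=...) does before installing a .deb."""
    return hashlib.sha256(data).hexdigest() == expected_hex

payload = b"example .deb contents"
digest = hashlib.sha256(payload).hexdigest()
print(sha256_matches(payload, digest))    # pinned digest matches
print(sha256_matches(payload, "0" * 64))  # tampered/unexpected digest
```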
def _configure_dovecot(config: Config, debug: bool = False) -> bool:
"""Configures Dovecot IMAP server."""
need_restart = False
@@ -398,21 +335,19 @@ def _configure_dovecot(config: Config, debug: bool = False) -> bool:
)
need_restart |= lua_push_notification_script.changed
-# remove historic expunge script
-# which is now implemented through a systemd chatmail-expire service/timer
-files.file(
-path="/etc/cron.d/expunge",
-present=False,
+files.template(
+src=importlib.resources.files(__package__).joinpath("dovecot/expunge.cron.j2"),
+dest="/etc/cron.d/expunge",
+user="root",
+group="root",
+mode="644",
+config=config,
)
# as per https://doc.dovecot.org/configuration_manual/os/
# it is recommended to set the following inotify limits
for name in ("max_user_instances", "max_user_watches"):
key = f"fs.inotify.{name}"
if host.get_fact(Sysctl)[key] > 65535:
# Skip updating limits if already sufficient
# (enables running in incus containers where sysctl is read-only)
continue
server.sysctl(
name=f"Change {key}",
key=key,
@@ -420,13 +355,6 @@ def _configure_dovecot(config: Config, debug: bool = False) -> bool:
persist=True,
)
timezone_env = files.line(
name="Set TZ environment variable",
path="/etc/environment",
line="TZ=:/etc/localtime",
)
need_restart |= timezone_env.changed
return need_restart
@@ -507,77 +435,10 @@ def check_config(config):
return config
def deploy_turn_server(config):
(url, sha256sum) = {
"x86_64": (
"https://github.com/chatmail/chatmail-turn/releases/download/v0.3/chatmail-turn-x86_64-linux",
"841e527c15fdc2940b0469e206188ea8f0af48533be12ecb8098520f813d41e4",
),
"aarch64": (
"https://github.com/chatmail/chatmail-turn/releases/download/v0.3/chatmail-turn-aarch64-linux",
"a5fc2d06d937b56a34e098d2cd72a82d3e89967518d159bf246dc69b65e81b42",
),
}[host.get_fact(facts.server.Arch)]
need_restart = False
existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/chatmail-turn")
if existing_sha256sum != sha256sum:
server.shell(
name="Download chatmail-turn",
commands=[
f"(curl -L {url} >/usr/local/bin/chatmail-turn.new && (echo '{sha256sum} /usr/local/bin/chatmail-turn.new' | sha256sum -c) && mv /usr/local/bin/chatmail-turn.new /usr/local/bin/chatmail-turn)",
"chmod 755 /usr/local/bin/chatmail-turn",
],
)
need_restart = True
source_path = importlib.resources.files(__package__).joinpath(
"service", "turnserver.service.f"
)
content = source_path.read_text().format(mail_domain=config.mail_domain).encode()
systemd_unit = files.put(
name="Upload turnserver.service",
src=io.BytesIO(content),
dest="/etc/systemd/system/turnserver.service",
user="root",
group="root",
mode="644",
)
need_restart |= systemd_unit.changed
systemd.service(
name="Setup turnserver service",
service="turnserver.service",
running=True,
enabled=True,
restarted=need_restart,
daemon_reload=systemd_unit.changed,
)
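The shell snippet in `deploy_turn_server` downloads to a `.new` file, verifies the checksum, and only then renames over the old binary. The same download-verify-rename pattern, sketched in Python with throwaway data and hypothetical paths:

```python
import hashlib
import os
import tempfile

def install_atomically(data: bytes, expected_sha256: str, dest: str) -> None:
    """Write to dest + '.new', verify the digest, then rename into place,
    so a failed or corrupt download never clobbers the working binary."""
    tmp = dest + ".new"
    with open(tmp, "wb") as f:
        f.write(data)
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        os.remove(tmp)
        raise ValueError("checksum mismatch, keeping old binary")
    os.replace(tmp, dest)  # atomic rename on POSIX

# illustration with a throwaway file (hypothetical path and payload)
payload = b"fake binary"
dest = os.path.join(tempfile.mkdtemp(), "chatmail-turn")
install_atomically(payload, hashlib.sha256(payload).hexdigest(), dest)
print(open(dest, "rb").read() == payload)  # → True
```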
def deploy_mtail(config):
-# Uninstall mtail package, we are going to install a static binary.
-apt.packages(name="Uninstall mtail", packages=["mtail"], present=False)
+apt.packages(
+name="Install mtail",
+packages=["mtail"],
(url, sha256sum) = {
"x86_64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_amd64.tar.gz",
"123c2ee5f48c3eff12ebccee38befd2233d715da736000ccde49e3d5607724e4",
),
"aarch64": (
"https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_arm64.tar.gz",
"aa04811c0929b6754408676de520e050c45dddeb3401881888a092c9aea89cae",
),
}[host.get_fact(facts.server.Arch)]
server.shell(
name="Download mtail",
commands=[
f"(echo '{sha256sum} /usr/local/bin/mtail' | sha256sum -c) || (curl -L {url} | gunzip | tar -x -f - mtail -O >/usr/local/bin/mtail.new && mv /usr/local/bin/mtail.new /usr/local/bin/mtail)",
"chmod 755 /usr/local/bin/mtail",
],
)
# Using our own systemd unit instead of `/usr/lib/systemd/system/mtail.service`.
@@ -615,12 +476,12 @@ def deploy_mtail(config):
def deploy_iroh_relay(config) -> None:
(url, sha256sum) = {
"x86_64": (
-"https://github.com/n0-computer/iroh/releases/download/v0.35.0/iroh-relay-v0.35.0-x86_64-unknown-linux-musl.tar.gz",
-"45c81199dbd70f8c4c30fef7f3b9727ca6e3cea8f2831333eeaf8aa71bf0fac1",
+"https://github.com/n0-computer/iroh/releases/download/v0.28.1/iroh-relay-v0.28.1-x86_64-unknown-linux-musl.tar.gz",
+"2ffacf7c0622c26b67a5895ee8e07388769599f60e5f52a3bd40a3258db89b2c",
),
"aarch64": (
-"https://github.com/n0-computer/iroh/releases/download/v0.35.0/iroh-relay-v0.35.0-aarch64-unknown-linux-musl.tar.gz",
-"f8ef27631fac213b3ef668d02acd5b3e215292746a3fc71d90c63115446008b1",
+"https://github.com/n0-computer/iroh/releases/download/v0.28.1/iroh-relay-v0.28.1-aarch64-unknown-linux-musl.tar.gz",
+"b915037bcc1ff1110cc9fcb5de4a17c00ff576fd2f568cd339b3b2d54c420dc4",
),
}[host.get_fact(facts.server.Arch)]
@@ -629,18 +490,15 @@ def deploy_iroh_relay(config) -> None:
packages=["curl"],
)
-need_restart = False
-existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/iroh-relay")
-if existing_sha256sum != sha256sum:
-server.shell(
-name="Download iroh-relay",
-commands=[
-f"(curl -L {url} | gunzip | tar -x -f - ./iroh-relay -O >/usr/local/bin/iroh-relay.new && (echo '{sha256sum} /usr/local/bin/iroh-relay.new' | sha256sum -c) && mv /usr/local/bin/iroh-relay.new /usr/local/bin/iroh-relay)",
-"chmod 755 /usr/local/bin/iroh-relay",
-],
-)
-need_restart = True
+server.shell(
+name="Download iroh-relay",
+commands=[
+f"(echo '{sha256sum} /usr/local/bin/iroh-relay' | sha256sum -c) || (curl -L {url} | gunzip | tar -x -f - ./iroh-relay -O >/usr/local/bin/iroh-relay.new && mv /usr/local/bin/iroh-relay.new /usr/local/bin/iroh-relay)",
+"chmod 755 /usr/local/bin/iroh-relay",
+],
+)
+need_restart = False
systemd_unit = files.put(
name="Upload iroh-relay systemd unit",
@@ -681,7 +539,7 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
check_config(config)
mail_domain = config.mail_domain
-from .www import build_webpages, get_paths
+from .www import build_webpages
server.group(name="Create vmail group", group="vmail", system=True)
server.user(name="Create vmail user", user="vmail", group="vmail", system=True)
@@ -716,15 +574,9 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
path="/etc/apt/sources.list",
line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
escape_regex_characters=True,
-present=False,
+ensure_newline=True,
)
if host.get_fact(Port, port=53) != "unbound":
files.line(
name="Add 9.9.9.9 to resolv.conf",
path="/etc/resolv.conf",
line="nameserver 9.9.9.9",
)
apt.update(name="apt update", cache_time=24 * 3600)
apt.upgrade(name="upgrade apt packages", auto_remove=True)
@@ -733,39 +585,9 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
packages=["rsync"],
)
deploy_turn_server(config)
# Run local DNS resolver `unbound`.
# `resolvconf` takes care of setting up /etc/resolv.conf
# to use 127.0.0.1 as the resolver.
from cmdeploy.cmdeploy import Out
port_services = [
(["master", "smtpd"], 25),
("unbound", 53),
("acmetool", 80),
(["imap-login", "dovecot"], 143),
("nginx", 443),
(["master", "smtpd"], 465),
(["master", "smtpd"], 587),
(["imap-login", "dovecot"], 993),
("iroh-relay", 3340),
("nginx", 8443),
(["master", "smtpd"], config.postfix_reinject_port),
(["master", "smtpd"], config.postfix_reinject_port_incoming),
("filtermail", config.filtermail_smtp_port),
("filtermail", config.filtermail_smtp_port_incoming),
]
for service, port in port_services:
print(f"Checking if port {port} is available for {service}...")
running_service = host.get_fact(Port, port=port)
if running_service:
if running_service not in service:
Out().red(
f"Deploy failed: port {port} is occupied by: {running_service}"
)
exit(1)
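The preflight loop above accepts a port as available when it is free or already owned by one of the expected service names. The same logic, sketched as plain Python with an injected occupancy lookup standing in for `host.get_fact(Port, port=...)` and hypothetical occupancy data:

```python
def preflight_ports(port_services, occupant_of):
    """Return (port, process) conflicts: every expected port must be free
    or already owned by the service we are about to (re)deploy."""
    conflicts = []
    for service, port in port_services:
        allowed = service if isinstance(service, list) else [service]
        running = occupant_of(port)
        if running and running not in allowed:
            conflicts.append((port, running))
    return conflicts

# hypothetical occupancy: sshd squatting on port 443
occupied = {53: "unbound", 443: "sshd"}
checks = [("unbound", 53), ("nginx", 443), (["imap-login", "dovecot"], 993)]
print(preflight_ports(checks, occupied.get))  # → [(443, 'sshd')]
```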
apt.packages(
name="Install unbound",
packages=["unbound", "unbound-anchor", "dnsutils"],
@@ -789,7 +611,6 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
# Deploy acmetool to have TLS certificates.
tls_domains = [mail_domain, f"mta-sts.{mail_domain}", f"www.{mail_domain}"]
deploy_acmetool(
email=config.acme_email,
domains=tls_domains,
)
@@ -804,10 +625,10 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
packages="postfix",
)
-if not "dovecot.service" in host.get_fact(SystemdEnabled):
-_install_dovecot_package("core", host.get_fact(facts.server.Arch))
-_install_dovecot_package("imapd", host.get_fact(facts.server.Arch))
-_install_dovecot_package("lmtpd", host.get_fact(facts.server.Arch))
+apt.packages(
+name="Install Dovecot",
+packages=["dovecot-imapd", "dovecot-lmtpd"],
+)
apt.packages(
name="Install nginx",
@@ -819,16 +640,12 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
packages=["fcgiwrap"],
)
-www_path, src_dir, build_dir = get_paths(config)
-# if www_folder was set to a non-existing folder, skip upload
-if not www_path.is_dir():
-logger.warning("Building web pages is disabled in chatmail.ini, skipping")
-else:
-# if www_folder is a hugo page, build it
-if build_dir:
-www_path = build_webpages(src_dir, build_dir, config)
-# if it is not a hugo page, upload it as is
-files.rsync(f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"])
+www_path = importlib.resources.files(__package__).joinpath("../../../www").resolve()
+build_dir = www_path.joinpath("build")
+src_dir = www_path.joinpath("src")
+build_webpages(src_dir, build_dir, config)
+files.rsync(f"{build_dir}/", "/var/www/html", flags=["-avz"])
_install_remote_venv_with_chatmaild(config)
debug = False
@@ -876,19 +693,6 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
restarted=nginx_need_restart,
)
systemd.service(
name="Start and enable fcgiwrap",
service="fcgiwrap.service",
running=True,
enabled=True,
)
systemd.service(
name="Restart echobot if postfix and dovecot were just started",
service="echobot.service",
restarted=postfix_need_restart and dovecot_need_restart,
)
# This file is used by auth proxy.
# https://wiki.debian.org/EtcMailName
server.shell(
@@ -921,19 +725,5 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
name="Ensure cron is installed",
packages=["cron"],
)
try:
git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
except Exception:
git_hash = "unknown\n"
try:
git_diff = subprocess.check_output(["git", "diff"]).decode()
except Exception:
git_diff = ""
files.put(
name="Upload chatmail relay git commit hash",
src=StringIO(git_hash + git_diff),
dest="/etc/chatmail-version",
mode="700",
)
deploy_mtail(config)

View File

@@ -1,5 +1,7 @@
import importlib.resources
from pyinfra import host
from pyinfra.facts.systemd import SystemdStatus
from pyinfra.operations import apt, files, server, systemd
@@ -52,6 +54,12 @@ def deploy_acmetool(email="", domains=[]):
group="root",
mode="644",
)
if host.get_fact(SystemdStatus).get("nginx.service"):
systemd.service(
name="Stop nginx service to free port 80",
service="nginx",
running=False,
)
systemd.service(
name="Setup acmetool-redirector service",

View File

@@ -19,7 +19,7 @@ from packaging import version
from termcolor import colored
from . import dns, remote
-from .sshexec import SSHExec, LocalExec
+from .sshexec import SSHExec
#
# cmdeploy sub commands and options
@@ -32,30 +32,17 @@ def init_cmd_options(parser):
action="store",
help="fully qualified DNS domain name for your chatmail instance",
)
parser.add_argument(
"--force",
dest="recreate_ini",
action="store_true",
help="force recreate ini file",
)
def init_cmd(args, out):
"""Initialize chatmail config file."""
mail_domain = args.chatmail_domain
-inipath = args.inipath
if args.inipath.exists():
-if not args.recreate_ini:
-print(f"[WARNING] Path exists, not modifying: {inipath}")
-return 1
-else:
-print(
-f"[WARNING] Force argument was provided, deleting config file: {inipath}"
-)
-inipath.unlink()
-write_initial_config(inipath, mail_domain, overrides={})
-out.green(f"created config file for {mail_domain} in {inipath}")
+print(f"Path exists, not modifying: {args.inipath}")
+return 1
+else:
+write_initial_config(args.inipath, mail_domain, overrides={})
+out.green(f"created config file for {mail_domain} in {args.inipath}")
def run_cmd_options(parser):
@@ -72,24 +59,20 @@ def run_cmd_options(parser):
help="install/upgrade the server, but disable postfix & dovecot for now",
)
parser.add_argument(
-"--skip-dns-check",
-dest="dns_check_disabled",
-action="store_true",
-help="disable checks nslookup for dns",
+"--ssh-host",
+dest="ssh_host",
+help="specify an SSH host to deploy to; uses mail_domain from chatmail.ini by default",
)
-add_ssh_host_option(parser)
def run_cmd(args, out):
"""Deploy chatmail services on the remote server."""
-ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
-sshexec = get_sshexec(ssh_host)
+sshexec = args.get_sshexec()
require_iroh = args.config.enable_iroh_relay
-if not args.dns_check_disabled:
-remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
-if not dns.check_initial_remote_data(remote_data, print=out.red):
-return 1
+remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
+if not dns.check_initial_remote_data(remote_data, print=out.red):
+return 1
env = os.environ.copy()
env["CHATMAIL_INI"] = args.inipath
@@ -97,37 +80,21 @@ def run_cmd(args, out):
env["CHATMAIL_REQUIRE_IROH"] = "True" if require_iroh else ""
deploy_path = importlib.resources.files(__package__).joinpath("deploy.py").resolve()
pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
ssh_host = args.config.mail_domain if not args.ssh_host else args.ssh_host
cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"
if ssh_host in ["localhost", "@docker"]:
cmd = f"{pyinf} @local {deploy_path} -y"
if version.parse(pyinfra.__version__) < version.parse("3"):
out.red("Please re-run scripts/initenv.sh to update pyinfra to version 3.")
return 1
-try:
-retcode = out.check_call(cmd, env=env)
-if retcode == 0:
-if not args.disable_mail:
-print("\nYou can try out the relay by talking to this echo bot: ")
-sshexec = SSHExec(args.config.mail_domain, verbose=args.verbose)
-print(
-sshexec(
-call=remote.rshell.shell,
-kwargs=dict(command="cat /var/lib/echobot/invite-link.txt"),
-)
-)
-out.green("Deploy completed, call `cmdeploy dns` next.")
-elif not remote_data["acme_account_url"]:
-out.red("Deploy completed but letsencrypt not configured")
-out.red("Run 'cmdeploy run' again")
-retcode = 0
-else:
-out.red("Deploy failed")
-except subprocess.CalledProcessError:
-out.red("Deploy failed")
-retcode = 1
+retcode = out.check_call(cmd, env=env)
+if retcode == 0:
+out.green("Deploy completed, call `cmdeploy dns` next.")
+elif not remote_data["acme_account_url"]:
+out.red("Deploy completed but letsencrypt not configured")
+out.red("Run 'cmdeploy run' again")
+retcode = 0
+else:
+out.red("Deploy failed")
return retcode
@@ -139,13 +106,11 @@ def dns_cmd_options(parser):
default=None,
help="write out a zonefile",
)
add_ssh_host_option(parser)
def dns_cmd(args, out):
"""Check DNS entries and optionally generate dns zone file."""
-ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
-sshexec = get_sshexec(ssh_host, verbose=args.verbose)
+sshexec = args.get_sshexec()
remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
if not remote_data:
return 1
@@ -299,15 +264,6 @@ class Out:
return proc.returncode
def add_ssh_host_option(parser):
parser.add_argument(
"--ssh-host",
dest="ssh_host",
help="Run commands on 'localhost', via '@docker', or on a specific SSH host "
"instead of chatmail.ini's mail_domain.",
)
def add_config_option(parser):
parser.add_argument(
"--config",
@@ -363,16 +319,6 @@ def get_parser():
return parser
def get_sshexec(ssh_host: str, verbose=True):
if ssh_host in ["localhost", "@local"]:
return LocalExec(verbose, docker=False)
elif ssh_host == "@docker":
return LocalExec(verbose, docker=True)
if verbose:
print(f"[ssh] login to {ssh_host}")
return SSHExec(ssh_host, verbose=verbose)
def main(args=None):
"""Provide main entry point for 'cmdeploy' CLI invocation."""
parser = get_parser()
@@ -380,6 +326,12 @@ def main(args=None):
if not hasattr(args, "func"):
return parser.parse_args(["-h"])
def get_sshexec():
print(f"[ssh] login to {args.config.mail_domain}")
return SSHExec(args.config.mail_domain, verbose=args.verbose)
args.get_sshexec = get_sshexec
out = Out()
kwargs = {}
if args.func.__name__ not in ("init_cmd", "fmt_cmd"):

View File

@@ -45,7 +45,8 @@ def check_full_zone(sshexec, remote_data, out, zonefile) -> int:
and return (exitcode, remote_data) tuple."""
required_diff, recommended_diff = sshexec.logged(
-remote.rdns.check_zonefile, kwargs=dict(zonefile=zonefile, verbose=False),
+remote.rdns.check_zonefile,
+kwargs=dict(zonefile=zonefile, mail_domain=remote_data["mail_domain"]),
)
returncode = 0

View File

@@ -177,34 +177,20 @@ service auth-worker {
}
service imap-login {
-# High-performance mode as described in
-# <https://doc.dovecot.org/2.3/admin_manual/login_processes/#high-performance-mode>
-#
-# So-called high-security mode described in
-# <https://doc.dovecot.org/2.3/admin_manual/login_processes/#high-security-mode>
-# and enabled by default with `service_count = 1` starts one process per connection
-# and has problems logging in thousands of users after Dovecot restart.
-service_count = 0
+# High-security mode.
+# Each process serves a single connection and exits afterwards.
+# This is the default, but we set it explicitly to be sure.
+# See <https://doc.dovecot.org/admin_manual/login_processes/#high-security-mode> for details.
+service_count = 1
-# Increase virtual memory size limit.
-# Since imap-login processes handle TLS connections
-# even after logging users in
-# and many connections are handled by each process,
-# memory size limit should be increased.
+# Increase the number of simultaneous connections.
#
-# Otherwise the whole process eventually dies
-# with an error similar to
-# imap-login: Fatal: master: service(imap-login):
-# child 1422951 returned error 83
-# (Out of memory (service imap-login { vsz_limit=256 MB },
-# you may need to increase it)
-# and takes down all its TLS connections at once.
-vsz_limit = 1G
+# As of Dovecot 2.3.19.1 the default is 100 processes.
+# Combined with `service_count = 1` it means only 100 connections
+# can be handled simultaneously.
+process_limit = 10000
# Avoid startup latency for new connections.
-#
-# Should be set to at least the number of CPU cores
-# according to the documentation.
process_min_avail = 10
}

View File

@@ -0,0 +1,14 @@
# delete already seen big mails after 7 days, in the INBOX
2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/cur/*' -mtime +7 -size +200k -type f -delete
# delete all mails after {{ config.delete_mails_after }} days, in the Inbox
2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/cur/*' -mtime +{{ config.delete_mails_after }} -type f -delete
# or in any IMAP subfolder
2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/.*/cur/*' -mtime +{{ config.delete_mails_after }} -type f -delete
# even if they are unseen
2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/new/*' -mtime +{{ config.delete_mails_after }} -type f -delete
2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/.*/new/*' -mtime +{{ config.delete_mails_after }} -type f -delete
# or only temporary (but then they shouldn't be around after {{ config.delete_mails_after }} days anyway).
2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/tmp/*' -mtime +{{ config.delete_mails_after }} -type f -delete
2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/.*/tmp/*' -mtime +{{ config.delete_mails_after }} -type f -delete
3 0 * * * vmail find {{ config.mailboxes_dir }} -name 'maildirsize' -type f -delete
4 0 * * * vmail /usr/local/lib/chatmaild/venv/bin/delete_inactive_users /usr/local/lib/chatmaild/chatmail.ini
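The `find ... -mtime +N -delete` rules above expire messages by age from the maildir subfolders. A rough Python equivalent of that policy, exercised against a throwaway maildir (directory layout and filenames here are illustrative only):

```python
import os
import pathlib
import tempfile
import time

def expire_maildir(root: str, max_age_days: int) -> list:
    """Remove messages older than max_age_days from the cur/new/tmp
    subfolders of every maildir under `root`, like the cron rules above."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.parent.name in ("cur", "new", "tmp"):
            if path.stat().st_mtime < cutoff:
                path.unlink()
                removed.append(path.name)
    return removed

# demo with a throwaway maildir
root = tempfile.mkdtemp()
cur = pathlib.Path(root, "user@example.org", "cur")
cur.mkdir(parents=True)
old_msg = cur / "old-msg"
old_msg.write_text("x")
os.utime(old_msg, (time.time() - 40 * 86400,) * 2)  # backdate by 40 days
(cur / "fresh-msg").write_text("y")
print(expire_maildir(root, 20))  # → ['old-msg']
```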

View File

@@ -2,6 +2,15 @@ function dovecot_lua_notify_begin_txn(user)
return user
end
function contains(v, needle)
for _, keyword in ipairs(v) do
if keyword == needle then
return true
end
end
return false
end
function dovecot_lua_notify_event_message_new(user, event)
local mbox = user:mailbox(event.mailbox)
mbox:sync()

View File

@@ -1,11 +1,5 @@
enable_relay = true
http_bind_addr = "[::]:3340"
enable_stun = true
# Disable built-in STUN server in iroh-relay 0.35
# as we deploy our own TURN server instead.
# STUN server is going to be removed in iroh-relay 1.0
# and this line can be removed after upgrade.
enable_stun = false
enable_metrics = false
metrics_bind_addr = "127.0.0.1:9092"

View File

@@ -3,7 +3,7 @@ Description=mtail
[Service]
Type=simple
-ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/local/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs -"
+ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs /dev/stdin"
Restart=on-failure
[Install]

View File

@@ -2,25 +2,11 @@ load_module modules/ngx_stream_module.so;
user www-data;
worker_processes auto;
# Increase the number of connections
# that a worker process can open
# to avoid errors such as
# accept4() failed (24: Too many open files)
# and
# socket() failed (24: Too many open files) while connecting to upstream
# in the logs.
# <https://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile>
worker_rlimit_nofile 2048;
pid /run/nginx.pid;
error_log syslog:server=unix:/dev/log,facility=local3;
events {
-# Increase to avoid errors such as
-# 768 worker_connections are not enough while connecting to upstream
-# in the logs.
-# <https://nginx.org/en/docs/ngx_core_module.html#worker_connections>
-worker_connections 2048;
+worker_connections 768;
# multi_accept on;
}
@@ -66,7 +52,7 @@ http {
index index.html index.htm;
-server_name {{ config.domain_name }} www.{{ config.domain_name }} mta-sts.{{ config.domain_name }};
+server_name _;
access_log syslog:server=unix:/dev/log,facility=local7;

View File

@@ -13,7 +13,6 @@ OversignHeaders From
On-BadSignature reject
On-KeyNotFound reject
On-NoSignature reject
DNSTimeout 60
# Signing domain, selector, and key (required). For example, perform signing
# for domain "example.com" with selector "2020" (2020._domainkey.example.com),

View File

@@ -77,13 +77,13 @@ scache unix - - y - 1 scache
postlog unix-dgram n - n - 1 postlogd
filter unix - n n - - lmtp
# Local SMTP server for reinjecting outgoing filtered mail.
-127.0.0.1:{{ config.postfix_reinject_port }} inet n - n - 100 smtpd
+127.0.0.1:{{ config.postfix_reinject_port }} inet n - n - 10 smtpd
-o syslog_name=postfix/reinject
-o smtpd_milters=unix:opendkim/opendkim.sock
-o cleanup_service_name=authclean
# Local SMTP server for reinjecting incoming filtered mail
-127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 100 smtpd
+127.0.0.1:{{ config.postfix_reinject_port_incoming }} inet n - n - 10 smtpd
-o syslog_name=postfix/reinject_incoming
-o smtpd_milters=unix:opendkim/opendkim.sock

View File

@@ -12,23 +12,23 @@ All functions of this module
import re
-from .rshell import CalledProcessError, shell, log_progress
+from .rshell import CalledProcessError, shell
-def perform_initial_checks(mail_domain, pre_command=""):
+def perform_initial_checks(mail_domain):
"""Collecting initial DNS settings."""
assert mail_domain
-if not shell("dig", fail_ok=True, print=log_progress):
-shell("apt-get update && apt-get install -y dnsutils", print=log_progress)
+if not shell("dig", fail_ok=True):
+shell("apt-get install -y dnsutils")
A = query_dns("A", mail_domain)
AAAA = query_dns("AAAA", mail_domain)
MTA_STS = query_dns("CNAME", f"mta-sts.{mail_domain}")
WWW = query_dns("CNAME", f"www.{mail_domain}")
res = dict(mail_domain=mail_domain, A=A, AAAA=AAAA, MTA_STS=MTA_STS, WWW=WWW)
-res["acme_account_url"] = shell(pre_command + "acmetool account-url", fail_ok=True, print=log_progress)
+res["acme_account_url"] = shell("acmetool account-url", fail_ok=True)
res["dkim_entry"], res["web_dkim_entry"] = get_dkim_entry(
-mail_domain, pre_command, dkim_selector="opendkim"
+mail_domain, dkim_selector="opendkim"
)
if not MTA_STS or not WWW or (not A and not AAAA):
@@ -40,12 +40,11 @@ def perform_initial_checks(mail_domain, pre_command=""):
return res return res
def get_dkim_entry(mail_domain, pre_command, dkim_selector): def get_dkim_entry(mail_domain, dkim_selector):
try: try:
dkim_pubkey = shell( dkim_pubkey = shell(
f"{pre_command}openssl rsa -in /etc/dkimkeys/{dkim_selector}.private " f"openssl rsa -in /etc/dkimkeys/{dkim_selector}.private "
"-pubout 2>/dev/null | awk '/-/{next}{printf(\"%s\",$0)}'", "-pubout 2>/dev/null | awk '/-/{next}{printf(\"%s\",$0)}'"
print=log_progress
) )
except CalledProcessError: except CalledProcessError:
return return
@@ -62,7 +61,7 @@ def query_dns(typ, domain):
# Get autoritative nameserver from the SOA record. # Get autoritative nameserver from the SOA record.
soa_answers = [ soa_answers = [
x.split() x.split()
for x in shell(f"dig -r -q {domain} -t SOA +noall +authority +answer", print=log_progress).split( for x in shell(f"dig -r -q {domain} -t SOA +noall +authority +answer").split(
"\n" "\n"
) )
] ]
@@ -72,13 +71,13 @@ def query_dns(typ, domain):
ns = soa[0][4] ns = soa[0][4]
# Query authoritative nameserver directly to bypass DNS cache. # Query authoritative nameserver directly to bypass DNS cache.
res = shell(f"dig @{ns} -r -q {domain} -t {typ} +short", print=log_progress) res = shell(f"dig @{ns} -r -q {domain} -t {typ} +short")
if res: if res:
return res.split("\n")[0] return res.split("\n")[0]
return "" return ""
def check_zonefile(zonefile, verbose=True): def check_zonefile(zonefile, mail_domain):
"""Check expected zone file entries.""" """Check expected zone file entries."""
required = True required = True
required_diff = [] required_diff = []
@@ -90,7 +89,7 @@ def check_zonefile(zonefile, verbose=True):
continue continue
if not zf_line.strip() or zf_line.startswith(";"): if not zf_line.strip() or zf_line.startswith(";"):
continue continue
print(f"dns-checking {zf_line!r}") if verbose else log_progress("") print(f"dns-checking {zf_line!r}")
zf_domain, zf_typ, zf_value = zf_line.split(maxsplit=2) zf_domain, zf_typ, zf_value = zf_line.split(maxsplit=2)
zf_domain = zf_domain.rstrip(".") zf_domain = zf_domain.rstrip(".")
zf_value = zf_value.strip() zf_value = zf_value.strip()

View File
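
As background for the `query_dns` hunk above: the authoritative nameserver is pulled out of raw `dig … -t SOA` output by splitting each line into fields and taking field 5 of the SOA row. A minimal standalone sketch of that parsing step — the helper name and the sample record are illustrative, not taken from the repository:

```python
def authoritative_ns(dig_output: str) -> str:
    # Split each output line into whitespace-separated fields.
    rows = [line.split() for line in dig_output.split("\n")]
    # Keep rows that look like SOA records; field 4 (0-based) is the
    # primary nameserver (MNAME).
    soa = [row for row in rows if len(row) > 4 and row[3] == "SOA"]
    return soa[0][4] if soa else ""

# Made-up sample resembling `dig ... -t SOA +noall +authority +answer` output.
sample = (
    "example.org.\t3600\tIN\tSOA\tns1.example.org. "
    "hostmaster.example.org. 1 7200 3600 1209600 3600"
)
print(authoritative_ns(sample))  # ns1.example.org.
```

The extracted nameserver is then queried directly (`dig @{ns} …`) to bypass any caching resolver.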

@@ -1,14 +1,7 @@
-import sys
 from subprocess import DEVNULL, CalledProcessError, check_output
 
 
-def log_progress(data):
-    sys.stderr.write(".")
-    sys.stderr.flush()
-
-
-def shell(command, fail_ok=False, print=print):
+def shell(command, fail_ok=False):
     print(f"$ {command}")
     args = dict(shell=True)
     if fail_ok:

View File

@@ -1,9 +0,0 @@
-[Unit]
-Description=chatmail mail storage expiration job
-After=network.target
-
-[Service]
-Type=oneshot
-User=vmail
-ExecStart=/usr/local/lib/chatmaild/venv/bin/chatmail-expire /usr/local/lib/chatmaild/chatmail.ini -v --remove
-
View File

@@ -1,8 +0,0 @@
-[Unit]
-Description=Run Daily chatmail-expire job
-
-[Timer]
-OnCalendar=*-*-* 00:02:00
-
-[Install]
-WantedBy=timers.target

View File

@@ -1,9 +0,0 @@
-[Unit]
-Description=chatmail file system storage reporting job
-After=network.target
-
-[Service]
-Type=oneshot
-User=vmail
-ExecStart=/usr/local/lib/chatmaild/venv/bin/chatmail-fsreport /usr/local/lib/chatmaild/chatmail.ini
-

View File

@@ -1,9 +0,0 @@
-[Unit]
-Description=Run Daily Chatmail fsreport Job
-
-[Timer]
-OnCalendar=*-*-* 08:02:00
-Persistent=true
-
-[Install]
-WantedBy=timers.target

View File

@@ -1,16 +0,0 @@
-[Unit]
-Description=A wrapper for the TURN server
-After=network.target
-
-[Service]
-Type=simple
-Restart=always
-ExecStart=/usr/local/bin/chatmail-turn --realm {mail_domain} --socket /run/chatmail-turn/turn.socket
-
-# Create /run/chatmail-turn
-RuntimeDirectory=chatmail-turn
-User=vmail
-Group=vmail
-
-[Install]
-WantedBy=multi-user.target

View File

@@ -42,7 +42,6 @@ def bootstrap_remote(gateway, remote=remote):
 def print_stderr(item="", end="\n"):
     print(item, file=sys.stderr, end=end)
-    sys.stderr.flush()
 
 
 class SSHExec:
@@ -71,6 +70,10 @@ class SSHExec:
             raise self.FuncError(data)
 
     def logged(self, call, kwargs):
+        def log_progress(data):
+            sys.stderr.write(".")
+            sys.stderr.flush()
+
         title = call.__doc__
         if not title:
             title = call.__name__
@@ -79,22 +82,6 @@ class SSHExec:
             return self(call, kwargs, log_callback=print_stderr)
         else:
             print_stderr(title, end="")
-            res = self(call, kwargs, log_callback=remote.rshell.log_progress)
+            res = self(call, kwargs, log_callback=log_progress)
             print_stderr()
             return res
-
-
-class LocalExec:
-    def __init__(self, verbose=False, docker=False):
-        self.verbose = verbose
-        self.docker = docker
-
-    def logged(self, call, kwargs: dict):
-        where = "locally"
-        if self.docker:
-            if call == remote.rdns.perform_initial_checks:
-                kwargs['pre_command'] = "docker exec chatmail "
-            where = "in docker"
-        if self.verbose:
-            print(f"Running {where}: {call.__name__}(**{kwargs})")
-        return call(**kwargs)

View File

@@ -90,13 +90,8 @@ def test_concurrent_logins_same_account(
 def test_no_vrfy(chatmail_config):
-    domain = chatmail_config.mail_domain
     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-    sock.settimeout(10)
-    try:
-        sock.connect((domain, 25))
-    except socket.timeout:
-        pytest.skip(f"port 25 not reachable for {domain}")
+    sock.connect((chatmail_config.mail_domain, 25))
     banner = sock.recv(1024)
     print(banner)
     sock.send(b"VRFY wrongaddress@%s\r\n" % (chatmail_config.mail_domain.encode(),))

View File

@@ -1,8 +1,5 @@
 import datetime
 import smtplib
-import socket
-import subprocess
-import time
 
 import pytest
@@ -32,8 +29,7 @@ class TestSSHExecutor:
         )
         out, err = capsys.readouterr()
         assert err.startswith("Collecting")
-        # XXX could not figure out how capturing can be made to work properly
-        #assert err.endswith("....\n")
+        assert err.endswith("....\n")
         assert err.count("\n") == 1
 
         sshexec.verbose = True
@@ -42,8 +38,7 @@ class TestSSHExecutor:
         )
         out, err = capsys.readouterr()
         lines = err.split("\n")
-        # XXX could not figure out how capturing can be made to work properly
-        #assert len(lines) > 4
+        assert len(lines) > 4
         assert remote.rdns.perform_initial_checks.__doc__ in lines[0]
 
     def test_exception(self, sshexec, capsys):
@@ -60,20 +55,11 @@
     def test_opendkim_restarted(self, sshexec):
         """check that opendkim is not running for longer than a day."""
-        cmd = "systemctl show opendkim --timestamp=utc --property=ActiveEnterTimestamp"
-        out = sshexec(call=remote.rshell.shell, kwargs=dict(command=cmd))
-        datestring = out.split("=")[1]
-        since_date = datetime.datetime.strptime(datestring, "%a %Y-%m-%d %H:%M:%S %Z")
-        now = datetime.datetime.now(since_date.tzinfo)
-        assert (now - since_date).total_seconds() < 60 * 60 * 51
+        out = sshexec(call=remote.rshell.shell, kwargs=dict(command="systemctl status opendkim"))
+        assert type(out) == str
+        since_date_str = out.split("since ")[1].split(";")[0]
+        since_date = datetime.datetime.strptime(since_date_str, "%a %Y-%m-%d %H:%M:%S %Z")
+        assert (datetime.datetime.now() - since_date).total_seconds() < 60 * 60 * 24
 
 
-def test_timezone_env(remote):
-    for line in remote.iter_output("env"):
-        print(line)
-        if line == "tz=:/etc/localtime":
-            return
-    pytest.fail("TZ is not set")
-
-
 def test_remote(remote, imap_or_smtp):
@@ -130,35 +116,13 @@ def test_authenticated_from(cmsetup, maildata):
 @pytest.mark.parametrize("from_addr", ["fake@example.org", "fake@testrun.org"])
 def test_reject_missing_dkim(cmsetup, maildata, from_addr):
-    domain = cmsetup.maildomain
-    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-    sock.settimeout(10)
-    try:
-        sock.connect((domain, 25))
-    except socket.timeout:
-        pytest.skip(f"port 25 not reachable for {domain}")
     recipient = cmsetup.gen_users(1)[0]
-    msg = maildata(
-        "encrypted.eml", from_addr=from_addr, to_addr=recipient.addr
-    ).as_string()
-    conn = smtplib.SMTP(cmsetup.maildomain, 25, timeout=10)
-    with conn as s:
+    msg = maildata("encrypted.eml", from_addr=from_addr, to_addr=recipient.addr).as_string()
+    with smtplib.SMTP(cmsetup.maildomain, 25) as s:
         with pytest.raises(smtplib.SMTPDataError, match="No valid DKIM signature"):
             s.sendmail(from_addr=from_addr, to_addrs=recipient.addr, msg=msg)
 
 
-def try_n_times(n, f):
-    for _ in range(n - 1):
-        try:
-            return f()
-        except Exception:
-            time.sleep(1)
-    return f()
-
-
 def test_rewrite_subject(cmsetup, maildata):
     """Test that subject gets replaced with [...]."""
     user1, user2 = cmsetup.gen_users(2)
@@ -171,8 +135,7 @@ def test_rewrite_subject(cmsetup, maildata):
     ).as_string()
     user1.smtp.sendmail(from_addr=user1.addr, to_addrs=[user2.addr], msg=sent_msg)
-    # The message may need some time to get delivered by postfix.
-    messages = try_n_times(5, user2.imap.fetch_all_messages)
+    messages = user2.imap.fetch_all_messages()
     assert len(messages) == 1
     rcvd_msg = messages[0]
     assert "Subject: [...]" not in sent_msg
@@ -213,31 +176,6 @@ def test_expunged(remote, chatmail_config):
         f"find {chatmail_config.mailboxes_dir} -path '*/tmp/*' -mtime +{outdated_days} -type f",
         f"find {chatmail_config.mailboxes_dir} -path '*/.*/tmp/*' -mtime +{outdated_days} -type f",
     ]
-    outdated_days = int(chatmail_config.delete_large_after) + 1
-    find_cmds.append(
-        "find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
-    )
     for cmd in find_cmds:
         for line in remote.iter_output(cmd):
             assert not line
-
-
-def test_deployed_state(remote):
-    try:
-        git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
-    except Exception:
-        git_hash = "unknown\n"
-    try:
-        git_diff = subprocess.check_output(["git", "diff"]).decode()
-    except Exception:
-        git_diff = ""
-    git_status = [git_hash.strip()]
-    for line in git_diff.splitlines():
-        git_status.append(line.strip().lower())
-    remote_version = []
-    for line in remote.iter_output("cat /etc/chatmail-version"):
-        print(line)
-        remote_version.append(line)
-    # assert len(git_status) == len(remote_version)  # for some reason, we only get 11 lines from remote.iter_output()
-    for i in range(len(remote_version)):
-        assert git_status[i] == remote_version[i], "You have undeployed changes."
View File
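
A note on the timestamp parsing touched in `test_opendkim_restarted` above: both the old and new variants parse a systemd timestamp with the same `strptime` format string. A self-contained sketch of the `systemctl show … --property=ActiveEnterTimestamp` variant — the sample line below is made up, not captured output:

```python
from datetime import datetime

# `systemctl show <unit> --timestamp=utc --property=ActiveEnterTimestamp`
# prints a single key=value line; the value side parses with the same
# format string the test uses.
sample = "ActiveEnterTimestamp=Thu 2025-04-10 09:52:23 UTC"
datestring = sample.split("=")[1]
since = datetime.strptime(datestring, "%a %Y-%m-%d %H:%M:%S %Z")
print(since.isoformat())  # 2025-04-10T09:52:23
```

Note that `%Z` only matches the zone name without attaching a `tzinfo`, so the result is a naive datetime, which is why the test compares it against `datetime.now()`.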

@@ -307,7 +307,6 @@ def cmfactory(request, gencreds, tmpdir, maildomain):
     class Data:
         def read_path(self, path):
             return
 
-
     am = ACFactory(request=request, tmpdir=tmpdir, testprocess=testproc, data=Data())
     # nb. a bit hacky

View File

@@ -1,10 +1,8 @@
-import importlib
 import os
 
 import pytest
 
 from cmdeploy.cmdeploy import get_parser, main
-from cmdeploy.www import get_paths
 
 
 @pytest.fixture(autouse=True)
@@ -26,36 +24,6 @@ class TestCmdline:
     def test_init_not_overwrite(self, capsys):
         assert main(["init", "chat.example.org"]) == 0
         capsys.readouterr()
 
         assert main(["init", "chat.example.org"]) == 1
         out, err = capsys.readouterr()
         assert "path exists" in out.lower()
-
-        assert main(["init", "chat.example.org", "--force"]) == 0
-        out, err = capsys.readouterr()
-        assert "deleting config file" in out.lower()
-
-
-def test_www_folder(example_config, tmp_path):
-    reporoot = importlib.resources.files(__package__).joinpath("../../../../").resolve()
-    assert not example_config.www_folder
-    www_path, src_dir, build_dir = get_paths(example_config)
-    assert www_path.absolute() == reporoot.joinpath("www").absolute()
-    assert src_dir == reporoot.joinpath("www").joinpath("src")
-    assert build_dir == reporoot.joinpath("www").joinpath("build")
-
-    example_config.www_folder = "disabled"
-    www_path, _, _ = get_paths(example_config)
-    assert not www_path.is_dir()
-
-    example_config.www_folder = str(tmp_path)
-    www_path, src_dir, build_dir = get_paths(example_config)
-    assert www_path == tmp_path
-    assert not src_dir.exists()
-    assert not build_dir
-
-    src_path = tmp_path.joinpath("src")
-    os.mkdir(src_path)
-    with open(src_path / "index.md", "w") as f:
-        f.write("# Test")
-    www_path, src_dir, build_dir = get_paths(example_config)
-    assert www_path == tmp_path
-    assert src_dir == src_path
-    assert build_dir == tmp_path.joinpath("build")

View File

@@ -89,14 +89,18 @@ class TestZonefileChecks:
     def test_check_zonefile_all_ok(self, cm_data, mockdns_base):
         zonefile = cm_data.get("zftest.zone")
         parse_zonefile_into_dict(zonefile, mockdns_base)
-        required_diff, recommended_diff = remote.rdns.check_zonefile(zonefile)
+        required_diff, recommended_diff = remote.rdns.check_zonefile(
+            zonefile, "some.domain"
+        )
         assert not required_diff and not recommended_diff
 
     def test_check_zonefile_recommended_not_set(self, cm_data, mockdns_base):
         zonefile = cm_data.get("zftest.zone")
         zonefile_mocked = zonefile.split("; Recommended")[0]
         parse_zonefile_into_dict(zonefile_mocked, mockdns_base)
-        required_diff, recommended_diff = remote.rdns.check_zonefile(zonefile)
+        required_diff, recommended_diff = remote.rdns.check_zonefile(
+            zonefile, "some.domain"
+        )
         assert not required_diff
         assert len(recommended_diff) == 8

View File

@@ -3,7 +3,6 @@ import importlib.resources
 import time
 import traceback
 import webbrowser
-from pathlib import Path
 
 import markdown
 from chatmaild.config import read_config
@@ -31,25 +30,9 @@ def prepare_template(source):
     return render_vars, page_layout
 
 
-def get_paths(config) -> (Path, Path, Path):
-    reporoot = importlib.resources.files(__package__).joinpath("../../../").resolve()
-    www_path = Path(config.www_folder)
-    # if www_folder was not set, use default directory
-    if config.www_folder == "":
-        www_path = reporoot.joinpath("www")
-    src_dir = www_path.joinpath("src")
-    # if www_folder is a hugo page, build it
-    if src_dir.joinpath("index.md").is_file():
-        build_dir = www_path.joinpath("build")
-    # if it is not a hugo page, upload it as is
-    else:
-        build_dir = None
-    return www_path, src_dir, build_dir
-
-
-def build_webpages(src_dir, build_dir, config) -> Path:
+def build_webpages(src_dir, build_dir, config):
     try:
-        return _build_webpages(src_dir, build_dir, config)
+        _build_webpages(src_dir, build_dir, config)
     except Exception:
         print(traceback.format_exc())
@@ -123,11 +106,15 @@ def main():
     config = read_config(inipath)
     config.webdev = True
     assert config.mail_domain
+    www_path = reporoot.joinpath("www")
+    src_path = www_path.joinpath("src")
+    stats = None
+    build_dir = www_path.joinpath("build")
+    src_dir = www_path.joinpath("src")
+    index_path = build_dir.joinpath("index.html")
 
     # start web page generation, open a browser and wait for changes
-    www_path, src_path, build_dir = get_paths(config)
-    build_dir = build_webpages(src_path, build_dir, config)
-    index_path = build_dir.joinpath("index.html")
+    build_webpages(src_dir, build_dir, config)
     webbrowser.open(str(index_path))
     stats = snapshot_dir_stats(src_path)
     print(f"\nOpened URL: file://{index_path.resolve()}\n")
@@ -148,7 +135,7 @@ def main():
             changenum += 1
             stats = newstats
-            build_webpages(src_path, build_dir, config)
+            build_webpages(src_dir, build_dir, config)
             print(f"[{changenum}] regenerated web pages at: {index_path}")
             print(f"URL: file://{index_path.resolve()}\n\n")
             count = 0

View File
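
The `main()` hunk above keeps the watch loop that re-runs `build_webpages` whenever a snapshot of the source directory changes. A hedged sketch of that change-detection idea, with a local stand-in for `snapshot_dir_stats` (the real helper's implementation is not shown in this diff):

```python
import os
import tempfile

# Stand-in for cmdeploy's snapshot_dir_stats: map each file path to
# (mtime, size); a rebuild is triggered whenever two snapshots differ.
def snapshot_dir_stats(path):
    stats = {}
    for root, _dirs, files in os.walk(path):
        for name in files:
            p = os.path.join(root, name)
            st = os.stat(p)
            stats[p] = (st.st_mtime, st.st_size)
    return stats

with tempfile.TemporaryDirectory() as src:
    before = snapshot_dir_stats(src)   # empty dir -> empty snapshot
    with open(os.path.join(src, "index.md"), "w") as f:
        f.write("# hello")
    after = snapshot_dir_stats(src)    # now contains index.md
    print(before != after)  # True
```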

@@ -1,23 +1,5 @@
 #!/bin/sh
 set -e
-
-if command -v lsb_release 2>&1 >/dev/null; then
-    case "$(lsb_release -is)" in
-        Ubuntu | Debian )
-            if ! dpkg -l | grep python3-dev 2>&1 >/dev/null
-            then
-                echo "You need to install python3-dev for installing the other dependencies."
-                exit 1
-            fi
-            if ! gcc --version 2>&1 >/dev/null
-            then
-                echo "You need to install gcc for building Python dependencies."
-                exit 1
-            fi
-            ;;
-    esac
-fi
 
 python3 -m venv --upgrade-deps venv
 venv/bin/pip install -e chatmaild