Compare commits

..

2 Commits

Author       SHA1        Message                                           Date
missytake    0d301f9807  doc: add changelog                                2025-04-10 11:52:23 +02:00
Mark Felder  a5dffdf2e6  Postfix master.cf: use 127.0.0.1 for consistency  2025-04-10 11:52:23 +02:00
29 changed files with 101 additions and 513 deletions

View File

@@ -12,7 +12,6 @@ Please fill out as much of this form as you can (leaving out stuff that is not a
 - Server OS (Operating System) - preferably Debian 12:
 - On which OS you run cmdeploy:
-- chatmail/relay version: `git rev-parse HEAD`
 ## Expected behavior

View File

@@ -10,10 +10,6 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
-        # Checkout pull request HEAD commit instead of merge commit
-        # Otherwise `test_deployed_state` will be unhappy.
-        with:
-          ref: ${{ github.event.pull_request.head.sha }}
       - name: run chatmaild tests
        working-directory: chatmaild

View File

@@ -2,58 +2,9 @@
 ## untagged

-- Check whether GCC is installed in initenv.sh
-  ([#608](https://github.com/chatmail/relay/pull/608))
-- Expire push notification tokens after 90 days
-  ([#583](https://github.com/chatmail/relay/pull/583))
-- Use official `mtail` binary instead of `mtail` package
-  ([#581](https://github.com/chatmail/relay/pull/581))
-- dovecot: install from download.delta.chat instead of openSUSE Build Service
-  ([#590](https://github.com/chatmail/relay/pull/590))
-- Reconfigure Dovecot imap-login service to high-performance mode
-  ([#578](https://github.com/chatmail/relay/pull/578))
-- Set timezone to improve dovecot performance
-  ([#584](https://github.com/chatmail/relay/pull/584))
-- Increase nginx connection limits
-  ([#576](https://github.com/chatmail/relay/pull/576))
-- If `dns-utils` needs to be installed before cmdeploy run, apt update to make sure it works
-  ([#560](https://github.com/chatmail/relay/pull/560))
-- filtermail: respect config message size limit
-  ([#572](https://github.com/chatmail/relay/pull/572))
-- Add config value after how many days large files are deleted
-  ([#555](https://github.com/chatmail/relay/pull/555))
-- cmdeploy: push relay version to /etc/chatmail-version
-  ([#573](https://github.com/chatmail/relay/pull/573))
-- filtermail: allow partial body length in OpenPGP payloads
-  ([#570](https://github.com/chatmail/relay/pull/570))
-- chatmaild: allow echobot to receive unencrypted messages by default
-  ([#556](https://github.com/chatmail/relay/pull/556))
-
-## 1.6.0 2025-04-11
-
-- Handle Port-25 connect errors more gracefully (common with VPNs)
-  ([#552](https://github.com/chatmail/relay/pull/552))
 - Avoid "acmetool not found" during initial run
   ([#550](https://github.com/chatmail/relay/pull/550))
-- Fix timezone handling such that client/servers do not need to use
-  same timezone.
-  ([#553](https://github.com/chatmail/relay/pull/553))
 - Enforce end-to-end encryption for incoming messages.
   New user address mailboxes now get a `enforceE2EEincoming` file
   which prohibits incoming cleartext messages from other domains.
@@ -66,12 +17,6 @@
 - Enforce end-to-end encryption between local addresses
   ([#535](https://github.com/chatmail/server/pull/535))
-- unbound: check that port 53 is not occupied by a different process
-  ([#537](https://github.com/chatmail/server/pull/537))
-- unbound: before unbound is there, use 9.9.9.9 for resolving
-  ([#518](https://github.com/chatmail/relay/pull/518))
 - Limit the bind for the HTTPS server on 8443 to 127.0.0.1
   ([#522](https://github.com/chatmail/server/pull/522))
   ([#532](https://github.com/chatmail/server/pull/532))

View File

@@ -69,7 +69,7 @@ Please substitute it with your own domain.
 mta-sts.chat.example.com. 3600 IN CNAME chat.example.com.
 ```
-2. On your local PC, clone the repository and bootstrap the Python virtualenv.
+2. Clone the repository and bootstrap the Python virtualenv.
 ```
 git clone https://github.com/chatmail/relay
@@ -77,29 +77,30 @@ Please substitute it with your own domain.
 scripts/initenv.sh
 ```
-3. On your local PC, create chatmail configuration file `chatmail.ini`:
+3. Create chatmail configuration file `chatmail.ini`:
 ```
 scripts/cmdeploy init chat.example.org # <-- use your domain
 ```
-4. Verify that SSH root login to your remote server works:
+4. Verify that SSH root login works:
 ```
 ssh root@chat.example.org # <-- use your domain
 ```
-5. From your local PC, deploy the remote chatmail relay server:
+5. Deploy the remote chatmail relay server:
 ```
 scripts/cmdeploy run
 ```
-This script will also check that you have all necessary DNS records.
+This script will check that you have all necessary DNS records.
 If DNS records are missing, it will recommend
 which you should configure at your DNS provider
 (it can take some time until they are public).

-### Other helpful commands
+### Other helpful commands:

 To check the status of your remotely running chatmail service:
@@ -158,7 +159,7 @@ This repository has four directories:
 The `cmdeploy/src/cmdeploy/cmdeploy.py` command line tool
 helps with setting up and managing the chatmail service.
 `cmdeploy init` creates the `chatmail.ini` config file.
-`cmdeploy run` uses a [pyinfra](https://pyinfra.com/)-based [`script`](cmdeploy/src/cmdeploy/__init__.py)
+`cmdeploy run` uses a [pyinfra](https://pyinfra.com/)-based [script](`cmdeploy/src/cmdeploy/__init__.py`)
 to automatically install or upgrade all chatmail components on a relay,
 according to the `chatmail.ini` config.
@@ -532,15 +533,3 @@ Then reboot the relay or do `sysctl -p` and `nft -f /etc/nftables.conf`.
 Once proxy relay is set up,
 you can add its IP address to the DNS.
-
-## Neighbors and Acquaintances
-
-Here are some related projects that you may be interested in:
-
-- [Mox](https://github.com/mjl-/mox): A Golang email server. [Work is in
-  progress](https://github.com/mjl-/mox/issues/251) to modify it to support all
-  of the features and configuration settings required to operate as a chatmail
-  relay.
-- [Maddy-Chatmail](https://github.com/sadraiiali/maddy_chatmail): a plugin for the
-  [Maddy email server](https://maddy.email/) which aims to implement the
-  chatmail relay features and configuration options.

View File

@@ -48,9 +48,6 @@ lint.select = [
   "PLE",  # Pylint Error
   "PLW",  # Pylint Warning
 ]
-lint.ignore = [
-  "PLC0415"  # import-outside-top-level
-]
 [tool.tox]
 legacy_tox_ini = """

View File

@@ -26,7 +26,6 @@ class Config:
         self.max_mailbox_size = params["max_mailbox_size"]
         self.max_message_size = int(params.get("max_message_size", "31457280"))
         self.delete_mails_after = params["delete_mails_after"]
-        self.delete_large_after = params["delete_large_after"]
         self.delete_inactive_users_after = int(params["delete_inactive_users_after"])
         self.username_min_length = int(params["username_min_length"])
         self.username_max_length = int(params["username_max_length"])
@@ -65,7 +64,7 @@ class Config:
     def _getbytefile(self):
         return open(self._inipath, "rb")

-    def get_user(self, addr) -> User:
+    def get_user(self, addr):
         if not addr or "@" not in addr or "/" in addr:
             raise ValueError(f"invalid address {addr!r}")
@@ -116,7 +115,7 @@ def get_default_config_content(mail_domain, **overrides):
     lines = []
     for line in content.split("\n"):
         for key, value in privacy.items():
-            value_lines = value.format(mail_domain=mail_domain).strip().split("\n")
+            value_lines = value.strip().split("\n")
             if not line.startswith(f"{key} =") or not value_lines:
                 continue
             if len(value_lines) == 1:
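As an aside, the guard at the top of `get_user` in this hunk can be read as a standalone predicate. A minimal sketch (illustrative only, `is_valid_addr` is not the project's API):

```python
def is_valid_addr(addr: str) -> bool:
    """Reject empty values, strings without "@", and anything containing "/"
    (a slash could escape the per-address mailbox directory on disk)."""
    return bool(addr) and "@" in addr and "/" not in addr
```

For example, `is_valid_addr("user@chat.example.org")` holds, while `is_valid_addr("a@b/../c")` is rejected because of the path separator.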

View File

@@ -38,12 +38,6 @@ def check_openpgp_payload(payload: bytes):
         packet_type_id = payload[i] & 0x3F
         i += 1

-        while payload[i] >= 224 and payload[i] < 255:
-            # Partial body length.
-            partial_length = 1 << (payload[i] & 0x1F)
-            i += 1 + partial_length
-
         if payload[i] < 192:
             # One-octet length.
             body_len = payload[i]
@@ -62,7 +56,7 @@ def check_openpgp_payload(payload: bytes):
             )
             i += 5
         else:
-            # Impossible, partial body length was processed above.
+            # Partial body length is not allowed.
             return False
         i += body_len
@@ -173,12 +167,7 @@ async def asyncmain_beforequeue(config, mode):
     else:
         port = config.filtermail_smtp_port_incoming
         handler = IncomingBeforeQueueHandler(config)
-    HackedController(
-        handler,
-        hostname="127.0.0.1",
-        port=port,
-        data_size_limit=config.max_message_size,
-    ).start()
+    HackedController(handler, hostname="127.0.0.1", port=port).start()

 def recipient_matches_passthrough(recipient, passthrough_recipients):
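For context: the partial-body-length handling deleted in this hunk corresponds to one of the four new-format packet length encodings of RFC 4880 (section 4.2.2). A self-contained sketch of all four forms (not the project's code; `parse_new_format_length` is a name chosen here for illustration):

```python
def parse_new_format_length(payload: bytes, i: int):
    """Decode a new-format OpenPGP packet length starting at payload[i].
    Returns (body_len, next_index, is_partial) per RFC 4880 section 4.2.2."""
    octet = payload[i]
    if octet < 192:
        # One-octet length: 0..191.
        return octet, i + 1, False
    if octet < 224:
        # Two-octet length: ((1st - 192) << 8) + 2nd + 192, i.e. 192..8383.
        return ((octet - 192) << 8) + payload[i + 1] + 192, i + 2, False
    if octet < 255:
        # Partial body length: a power of two covering only the next chunk;
        # more length fields follow until a non-partial one terminates the packet.
        return 1 << (octet & 0x1F), i + 1, True
    # Five-octet length: 0xFF followed by a four-octet big-endian length.
    return int.from_bytes(payload[i + 1 : i + 5], "big"), i + 5, False
```

The removed `while` loop above handles the third case by repeatedly skipping partial chunks until a definite length is reached.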

View File

@@ -23,9 +23,6 @@ max_message_size = 31457280
 # days after which mails are unconditionally deleted
 delete_mails_after = 20

-# days after which large messages (>200k) are unconditionally deleted
-delete_large_after = 7
-
 # days after which users without a successful login are deleted (database and mails)
 delete_inactive_users_after = 90
@@ -43,7 +40,7 @@ passthrough_senders =
 # list of e-mail recipients for which to accept outbound un-encrypted mails
 # (space-separated, item may start with "@" to whitelist whole recipient domains)
-passthrough_recipients = xstore@testrun.org echo@{mail_domain}
+passthrough_recipients = xstore@testrun.org

 #
 # Deployment Details

View File

@@ -1,7 +1,7 @@
 [privacy]
-passthrough_recipients = privacy@testrun.org xstore@testrun.org echo@{mail_domain}
+passthrough_recipients = privacy@testrun.org xstore@testrun.org
 privacy_postal =
     Merlinux GmbH, Represented by the managing director H. Krekel,

View File

@@ -1,7 +1,5 @@
 import logging
 import sys
-import time
-from contextlib import contextmanager

 from .config import read_config
 from .dictproxy import DictProxy
@@ -9,15 +7,8 @@ from .filedict import FileDict
 from .notifier import Notifier

-def _is_valid_token_timestamp(timestamp, now):
-    # Token if invalid after 90 days
-    # or if the timestamp is in the future.
-    return timestamp > now - 3600 * 24 * 90 and timestamp < now + 60
-
 class Metadata:
-    # each SETMETADATA on this key appends to dictionary
-    # mapping of unique device tokens
+    # each SETMETADATA on this key appends to a list of unique device tokens
     # which only ever get removed if the upstream indicates the token is invalid
     DEVICETOKEN_KEY = "devicetoken"
@@ -27,51 +18,21 @@ class Metadata:
     def get_metadata_dict(self, addr):
         return FileDict(self.vmail_dir / addr / "metadata.json")

-    @contextmanager
-    def _modify_tokens(self, addr):
-        with self.get_metadata_dict(addr).modify() as data:
-            tokens = data.setdefault(self.DEVICETOKEN_KEY, {})
-            now = int(time.time())
-            if isinstance(tokens, list):
-                data[self.DEVICETOKEN_KEY] = tokens = {t: now for t in tokens}
-            expired_tokens = [
-                token
-                for token, timestamp in tokens.items()
-                if not _is_valid_token_timestamp(tokens[token], now)
-            ]
-            for expired_token in expired_tokens:
-                del tokens[expired_token]
-            yield tokens
-
     def add_token_to_addr(self, addr, token):
-        with self._modify_tokens(addr) as tokens:
-            tokens[token] = int(time.time())
+        with self.get_metadata_dict(addr).modify() as data:
+            tokens = data.setdefault(self.DEVICETOKEN_KEY, [])
+            if token not in tokens:
+                tokens.append(token)

     def remove_token_from_addr(self, addr, token):
-        with self._modify_tokens(addr) as tokens:
+        with self.get_metadata_dict(addr).modify() as data:
+            tokens = data.get(self.DEVICETOKEN_KEY, [])
             if token in tokens:
-                del tokens[token]
+                tokens.remove(token)

     def get_tokens_for_addr(self, addr):
         mdict = self.get_metadata_dict(addr).read()
-        tokens = mdict.get(self.DEVICETOKEN_KEY, {})
-        now = int(time.time())
-        if isinstance(tokens, dict):
-            token_list = [
-                token
-                for token, timestamp in tokens.items()
-                if _is_valid_token_timestamp(timestamp, now)
-            ]
-            if len(token_list) < len(tokens):
-                # Some tokens have expired, remove them.
-                with self._modify_tokens(addr) as _tokens:
-                    pass
-        else:
-            token_list = []
-        return token_list
+        return mdict.get(self.DEVICETOKEN_KEY, [])

 class MetadataDictProxy(DictProxy):

View File

@@ -17,11 +17,11 @@ and which are scheduled for retry using exponential back-off timing.
 If a token notification would be scheduled more than DROP_DEADLINE seconds
 after its first attempt, it is dropped with a log error.

-Note that tokens are opaque to the notification machinery here
-and are encrypted foreclosing all ability to distinguish
+Note that tokens are completely opaque to the notification machinery here
+and will in the future be encrypted foreclosing all ability to distinguish
 which device token ultimately goes to which phone-provider notification service,
 or to understand the relation of "device tokens" and chatmail addresses.

-The meaning and format of tokens is basically a matter of chatmail Core and
+The meaning and format of tokens is basically a matter of Delta-Chat Core and
 the `notification.delta.chat` service.
 """
@@ -95,12 +95,7 @@ class Notifier:
                 logging.warning(f"removing spurious queue item: {queue_path!r}")
                 queue_path.unlink()
                 continue
-            try:
-                queue_item = PersistentQueueItem.read_from_path(queue_path)
-            except ValueError:
-                logging.warning(f"removing spurious queue item: {queue_path!r}")
-                queue_path.unlink()
-                continue
+            queue_item = PersistentQueueItem.read_from_path(queue_path)
             self.queue_for_retry(queue_item)

     def queue_for_retry(self, queue_item, retry_num=0):
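The docstring's retry scheme can be sketched as follows. This is illustrative only: the concrete values of `DROP_DEADLINE` and the back-off base are assumptions for the example, not the project's actual constants:

```python
DROP_DEADLINE = 5 * 60 * 60  # assumed: give up after 5 hours of retrying
BASE_DELAY = 8               # assumed: first retry after 8 seconds

def next_retry_delay(retry_num: int) -> int:
    """Exponential back-off: 8s, 64s, 512s, ... per retry round."""
    return BASE_DELAY ** (retry_num + 1)

def should_drop(first_attempt_ts: int, scheduled_ts: int) -> bool:
    """Drop a notification that would be scheduled more than
    DROP_DEADLINE seconds after its first attempt."""
    return scheduled_ts - first_attempt_ts > DROP_DEADLINE
```

Because tokens are opaque, dropping after the deadline (with a log error) is the only back-pressure mechanism available; there is no way to ask the push provider whether a token is still worth retrying.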

View File

@@ -35,7 +35,6 @@ def test_read_config_testrun(make_config):
     assert config.max_user_send_per_minute == 60
     assert config.max_mailbox_size == "100M"
     assert config.delete_mails_after == "20"
-    assert config.delete_large_after == "7"
     assert config.username_min_length == 9
     assert config.username_max_length == 9
     assert config.password_min_length == 9

View File

@@ -304,45 +304,3 @@ HELLOWORLD
 \r
 """
 assert check_armored_payload(payload) == False
-
-# Test payload using partial body length
-# as generated by GopenPGP.
-payload = """-----BEGIN PGP MESSAGE-----\r
-\r
-wV4DdCVjRfOT3TQSAQdAY5+pjT6mlCxPGdR3be4w7oJJRUGIPI/Vnh+mJxGSm34w\r
-LNlVc89S1g22uQYFif2sUJsQWbpoHpNkuWpkSgOaHmNvrZiY/YU5iv+cZ3LbmtUG\r
-0uoBisSHh9O1c+5sYZSbrvYZ1NOwlD7Fv/U5/Mw4E5+CjxfdgNGp5o3DDddzPK78\r
-jseDhdSXxnaiIJC93hxNX6R1RPt3G2gukyzx69wciPQShcF8zf3W3o75Ed7B8etV\r
-QEeB16xzdFhKa9JxdjTu3osgCs21IO7wpcFkjc7nZzlW6jPnELJJaNmv4yOOCjMp\r
-6YAkaN/BkL+jHTznHDuDsT5ilnTXpwHDU1Cm9PIx/KFcNCQnIB+2DcdIHPHUH1ci\r
-jvqoeXAVWjKXEjS7PqPFuP/xGbrWG2ugs+toXJOKbgRkExvKs1dwPFKrgghvCVbW\r
-AcKejQKAPArLwpkA7aD875TZQShvGt74fNs45XBlGOYOnNOAJ1KAmzrXLIDViyyB\r
-kDsmTBk785xofuCkjBpXSe6vsMprPzCteDfaUibh8FHeJjucxPerwuOPEmnogNaf\r
-YyL4+iy8H8I9/p7pmUqILprxTG0jTOtlk0bTVzeiF56W1xbtSEMuOo4oFbQTyOM2\r
-bKXaYo774Jm+rRtKAnnI2dtf9RpK19cog6YNzfYjesLKbXDsPZbN5rmwyFiCvvxC\r
-kQ6JLob+B2fPdY2gzy7LypxktS8Zi1HJcWDHJGVmQodaDLqKUObb4M26bXDe6oxI\r
-NS8PJz5exVbM3KhZnUOEn6PJRBBf5a/ZqxlhZPcQo/oBuhKpBRpO5kSDwPIUByu3\r
-UlXLSkpMqe9pUarAOEuQjfl2RVY7U+RrQYp4YP5keMO+i8NCefAFbowTTufO1JIq\r
-2nVgCi/QVnxZyEc9OYt/8AE3g4cdojE+vsSDifZLSWYIetpfrohHv3dT3StD1QRG\r
-0QE6qq6oKpg/IL0cjvuX4c7a7bslv2fXp8t75y37RU6253qdIebhxc/cRhPbc/yu\r
-p0YLyD4SrvKTLP2ZV95jT4IPEpqm4AN3QmiOzdtqR2gLyb62L8QfqI/FdwsIiRiM\r
-hqydwoqt/lfSqG1WKPh+6EkMkH+TDiCC1BQdbN1MNcyUtcjb35PR2c8Ld2TF3guA\r
-jLIqMt/Vb7hBoMb2FcsOYY25ka9oV62OwgKWLXnFzk+modMR5fzb4kxVVAYEqP+D\r
-T5KO1Vs76v1fyPGOq6BbBCvLwTqe/e6IZInJles4v5jrhnLcGKmNGivCUDe6X6NY\r
-UKNt5RsZllwDQpaAb5dMNhyrk8SgIE7TBI7rvqIdUCE52Vy+0JDxFg5olRpFUfO6\r
-/MyTW3Yo/ekk/npHr7iYYqJTCc21bDGLWQcIo/XO7WPxrKNWGBNPFnkRdw0MaKr4\r
-+cEM3V8NFnSEpC12xA+RX/CezuJtwXZK5MpG76eYqMO6qyC+c25YcFecEufDZDxx\r
-ZLqRszVRyxyWPtk/oIeQK2v9wOqY6N9/ff01gHz69vqYqN5bUw/QKZsmx1zW+gPw\r
-6x2tDK2BHeYl182gCbhlKISRFwCtbjqZSkiKWao/VtygHkw0fK34avJuyQ/X9YaN\r
-BRy+7Lf3VA53pnB5WJ1xwRXN8VDvmZeXzv2krHveCMemj0OjnRoCLu117xN0A5m9\r
-Fm/RoDix5PolDHtWTtr2m1n2hp2LHnj8at9lFEd0SKhAYHVL9KjzycwWODZRXt+x\r
-zGDDuooEeTvdY5NLyKcl4gETz1ZP4Ez5jGGjhPSwSpq1mU7UaJ9ZXXdr4KHyifW6\r
-ggNzNsGhXTap7IWZpTtqXABydfiBshmH2NjqtNDwBweJVSgP10+r0WhMWlaZs6xl\r
-V3o5yskJt6GlkwpJxZrTvN6Tiww/eW7HFV6NGf7IRSWY5tJc/iA7/92tOmkdvJ1q\r
-myLbG7cJB787QjplEyVe2P/JBO6xYvbkJLf9Q+HaviTO25rugRSrYsoKMDfO8VlQ\r
-1CcnTPVtApPZJEQzAWJEgVAM8uIlkqWJJMgyWT34sTkdBeCUFGloXQFs9Yxd0AGf\r
-/zHEkYZSTKpVSvAIGu4=\r
-=6iHb\r
------END PGP MESSAGE-----\r
-"""
-assert check_armored_payload(payload) == True
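For orientation, the outer shape of such an armored message can be checked with a few lines. This is a simplified sketch (`looks_like_armored_message` is a name invented here); the real `check_armored_payload` additionally base64-decodes the body and validates the inner OpenPGP packets:

```python
def looks_like_armored_message(text: str) -> bool:
    """Very rough ASCII-armor shape check (RFC 4880 section 6.2)."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or lines[0] != "-----BEGIN PGP MESSAGE-----":
        return False
    if lines[-1] != "-----END PGP MESSAGE-----":
        return False
    # The line before the end marker is a CRC24 checksum like "=6iHb".
    return lines[-2].startswith("=") and len(lines[-2]) == 5
```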

View File

@@ -242,22 +242,6 @@ def test_requeue_removes_tmp_files(notifier, metadata, testaddr, caplog):
     assert queue_item.addr == testaddr

-def test_requeue_removes_invalid_files(notifier, metadata, testaddr, caplog):
-    metadata.add_token_to_addr(testaddr, "01234")
-    notifier.new_message_for_addr(testaddr, metadata)
-
-    # empty/invalid files should be ignored
-    p = notifier.queue_dir.joinpath("1203981203")
-    p.touch()
-
-    notifier2 = notifier.__class__(notifier.queue_dir)
-    notifier2.requeue_persistent_queue_items()
-    assert "spurious" in caplog.records[0].msg
-    assert not p.exists()
-    assert notifier2.retry_queues[0].qsize() == 1
-    when, queue_item = notifier2.retry_queues[0].get()
-    assert when <= int(time.time())
-    assert queue_item.addr == testaddr
-
 def test_start_and_stop_notification_threads(notifier, testaddr):
     threads = notifier.start_notification_threads(None)
     for retry_num, threadlist in threads.items():

View File

@@ -58,8 +58,7 @@ class User:
             if not self.addr.startswith("echo@"):
                 logging.error(f"could not write password for: {self.addr}")
             raise
-        if not self.addr.startswith("echo@"):
-            self.enforce_E2EE_path.touch()
+        self.enforce_E2EE_path.touch()

     def set_last_login_timestamp(self, timestamp):
         """Track login time with daily granularity

View File

@@ -41,6 +41,3 @@ lint.select = [
   "PLE",  # Pylint Error
   "PLW",  # Pylint Warning
 ]
-lint.ignore = [
-  "PLC0415"  # import-outside-top-level
-]

View File

@@ -7,35 +7,17 @@ import io
 import shutil
 import subprocess
 import sys
-from io import StringIO
 from pathlib import Path

 from chatmaild.config import Config, read_config
 from pyinfra import facts, host
-from pyinfra.api import FactBase
 from pyinfra.facts.files import File
-from pyinfra.facts.server import Sysctl
 from pyinfra.facts.systemd import SystemdEnabled
 from pyinfra.operations import apt, files, pip, server, systemd

 from .acmetool import deploy_acmetool

-class Port(FactBase):
-    """
-    Returns the process occuping a port.
-    """
-
-    def command(self, port: int) -> str:
-        return (
-            "ss -lptn 'src :%d' | awk 'NR>1 {print $6,$7}' | sed 's/users:((\"//;s/\".*//'"
-            % (port,)
-        )
-
-    def process(self, output: [str]) -> str:
-        return output[0]
-
 def _build_chatmaild(dist_dir) -> None:
     dist_dir = Path(dist_dir).resolve()
     if dist_dir.exists():
@@ -248,6 +230,7 @@ def _configure_opendkim(domain: str, dkim_selector: str = "dkim") -> bool:
     )
     need_restart |= service_file.changed
+
     return need_restart
@@ -318,40 +301,6 @@ def _configure_postfix(config: Config, debug: bool = False) -> bool:
     return need_restart

-def _install_dovecot_package(package: str, arch: str):
-    arch = "amd64" if arch == "x86_64" else arch
-    arch = "arm64" if arch == "aarch64" else arch
-    url = f"https://download.delta.chat/dovecot/dovecot-{package}_2.3.21%2Bdfsg1-3_{arch}.deb"
-    deb_filename = "/root/" + url.split("/")[-1]
-    match (package, arch):
-        case ("core", "amd64"):
-            sha256 = "43f593332e22ac7701c62d58b575d2ca409e0f64857a2803be886c22860f5587"
-        case ("core", "arm64"):
-            sha256 = "4d21eba1a83f51c100f08f2e49f0c9f8f52f721ebc34f75018e043306da993a7"
-        case ("imapd", "amd64"):
-            sha256 = "8d8dc6fc00bbb6cdb25d345844f41ce2f1c53f764b79a838eb2a03103eebfa86"
-        case ("imapd", "arm64"):
-            sha256 = "178fa877ddd5df9930e8308b518f4b07df10e759050725f8217a0c1fb3fd707f"
-        case ("lmtpd", "amd64"):
-            sha256 = "2f69ba5e35363de50962d42cccbfe4ed8495265044e244007d7ccddad77513ab"
-        case ("lmtpd", "arm64"):
-            sha256 = "89f52fb36524f5877a177dff4a713ba771fd3f91f22ed0af7238d495e143b38f"
-        case _:
-            apt.packages(packages=[f"dovecot-{package}"])
-            return
-    files.download(
-        name=f"Download dovecot-{package}",
-        src=url,
-        dest=deb_filename,
-        sha256sum=sha256,
-        cache_time=60 * 60 * 24 * 365 * 10,  # never redownload the package
-    )
-    apt.deb(name=f"Install dovecot-{package}", src=deb_filename)
-
 def _configure_dovecot(config: Config, debug: bool = False) -> bool:
     """Configures Dovecot IMAP server."""
     need_restart = False
@@ -399,10 +348,6 @@ def _configure_dovecot(config: Config, debug: bool = False) -> bool:
     # it is recommended to set the following inotify limits
     for name in ("max_user_instances", "max_user_watches"):
         key = f"fs.inotify.{name}"
-        if host.get_fact(Sysctl)[key] > 65535:
-            # Skip updating limits if already sufficient
-            # (enables running in incus containers where sysctl readonly)
-            continue
         server.sysctl(
             name=f"Change {key}",
             key=key,
@@ -410,13 +355,6 @@ def _configure_dovecot(config: Config, debug: bool = False) -> bool:
             persist=True,
         )

-    timezone_env = files.line(
-        name="Set TZ environment variable",
-        path="/etc/environment",
-        line="TZ=:/etc/localtime",
-    )
-    need_restart |= timezone_env.changed
-
     return need_restart
@@ -498,26 +436,9 @@ def check_config(config):
 def deploy_mtail(config):
-    # Uninstall mtail package, we are going to install a static binary.
-    apt.packages(name="Uninstall mtail", packages=["mtail"], present=False)
-
-    (url, sha256sum) = {
-        "x86_64": (
-            "https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_amd64.tar.gz",
-            "123c2ee5f48c3eff12ebccee38befd2233d715da736000ccde49e3d5607724e4",
-        ),
-        "aarch64": (
-            "https://github.com/google/mtail/releases/download/v3.0.8/mtail_3.0.8_linux_arm64.tar.gz",
-            "aa04811c0929b6754408676de520e050c45dddeb3401881888a092c9aea89cae",
-        ),
-    }[host.get_fact(facts.server.Arch)]
-
-    server.shell(
-        name="Download mtail",
-        commands=[
-            f"(echo '{sha256sum} /usr/local/bin/mtail' | sha256sum -c) || (curl -L {url} | gunzip | tar -x -f - mtail -O >/usr/local/bin/mtail.new && mv /usr/local/bin/mtail.new /usr/local/bin/mtail)",
-            "chmod 755 /usr/local/bin/mtail",
-        ],
-    )
+    apt.packages(
+        name="Install mtail",
+        packages=["mtail"],
+    )

     # Using our own systemd unit instead of `/usr/lib/systemd/system/mtail.service`.
@@ -653,15 +574,9 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
         path="/etc/apt/sources.list",
         line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
         escape_regex_characters=True,
-        present=False,
+        ensure_newline=True,
     )

-    if host.get_fact(Port, port=53) != "unbound":
-        files.line(
-            name="Add 9.9.9.9 to resolv.conf",
-            path="/etc/resolv.conf",
-            line="nameserver 9.9.9.9",
-        )
-
     apt.update(name="apt update", cache_time=24 * 3600)
     apt.upgrade(name="upgrade apt packages", auto_remove=True)
@@ -673,12 +588,6 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
     # Run local DNS resolver `unbound`.
     # `resolvconf` takes care of setting up /etc/resolv.conf
     # to use 127.0.0.1 as the resolver.
-    from cmdeploy.cmdeploy import Out
-
-    process_on_53 = host.get_fact(Port, port=53)
-    if process_on_53 not in (None, "unbound"):
-        Out().red(f"Can't install unbound: port 53 is occupied by: {process_on_53}")
-        exit(1)
     apt.packages(
         name="Install unbound",
         packages=["unbound", "unbound-anchor", "dnsutils"],
@@ -716,10 +625,10 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
         packages="postfix",
     )

-    if not "dovecot.service" in host.get_fact(SystemdEnabled):
-        _install_dovecot_package("core", host.get_fact(facts.server.Arch))
-        _install_dovecot_package("imapd", host.get_fact(facts.server.Arch))
-        _install_dovecot_package("lmtpd", host.get_fact(facts.server.Arch))
+    apt.packages(
+        name="Install Dovecot",
+        packages=["dovecot-imapd", "dovecot-lmtpd"],
+    )

     apt.packages(
         name="Install nginx",
@@ -816,19 +725,5 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
         name="Ensure cron is installed",
         packages=["cron"],
     )
-
-    try:
-        git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
-    except Exception:
-        git_hash = "unknown\n"
-    try:
-        git_diff = subprocess.check_output(["git", "diff"]).decode()
-    except Exception:
-        git_diff = ""
-    files.put(
-        name="Upload chatmail relay git commiit hash",
-        src=StringIO(git_hash + git_diff),
-        dest="/etc/chatmail-version",
-        mode="700",
-    )

     deploy_mtail(config)
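The removed mtail step follows a common verify-or-fetch pattern: compare the installed binary's SHA-256 against a pinned digest and only download when it does not match. A minimal standalone sketch of the same idea (file name and helper are placeholders, not the deploy script's API):

```python
import hashlib
import urllib.request
from pathlib import Path

def ensure_binary(dest: Path, url: str, sha256sum: str) -> None:
    """Download `url` to `dest` unless `dest` already has the expected hash."""
    if dest.exists():
        digest = hashlib.sha256(dest.read_bytes()).hexdigest()
        if digest == sha256sum:
            return  # already up to date, skip the download
    tmp = dest.with_suffix(".new")
    urllib.request.urlretrieve(url, tmp)
    if hashlib.sha256(tmp.read_bytes()).hexdigest() != sha256sum:
        tmp.unlink()
        raise RuntimeError(f"checksum mismatch for {url}")
    tmp.replace(dest)  # atomic move into place, like the mv in the shell one-liner
    dest.chmod(0o755)
```

Downloading to a `.new` file and renaming mirrors the shell command above: a half-finished download never overwrites a working binary.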

View File

@@ -19,7 +19,7 @@ from packaging import version
 from termcolor import colored

 from . import dns, remote
-from .sshexec import SSHExec, Local
+from .sshexec import SSHExec

 #
 # cmdeploy sub commands and options
@@ -62,18 +62,13 @@ def run_cmd_options(parser):
         "--ssh-host",
         dest="ssh_host",
         help="specify an SSH host to deploy to; uses mail_domain from chatmail.ini by default",
-        default=None,
     )

 def run_cmd(args, out):
     """Deploy chatmail services on the remote server."""
-    ssh_host = args.ssh_host if args.ssh_host else args.config.mail_domain
-    if ssh_host == "localhost":
-        sshexec = Local(ssh_host)
-    else:
-        sshexec = args.get_sshexec(ssh_host)
+    sshexec = args.get_sshexec()
     require_iroh = args.config.enable_iroh_relay

     remote_data = dns.get_initial_remote_data(sshexec, args.config.mail_domain)
     if not dns.check_initial_remote_data(remote_data, print=out.red):
@@ -85,25 +80,21 @@ def run_cmd(args, out):
     env["CHATMAIL_REQUIRE_IROH"] = "True" if require_iroh else ""
     deploy_path = importlib.resources.files(__package__).joinpath("deploy.py").resolve()
     pyinf = "pyinfra --dry" if args.dry_run else "pyinfra"
-    ssh_host = "@local" if ssh_host == "localhost" else f"--ssh-host {ssh_host}"
+    ssh_host = args.config.mail_domain if not args.ssh_host else args.ssh_host
     cmd = f"{pyinf} --ssh-user root {ssh_host} {deploy_path} -y"

     if version.parse(pyinfra.__version__) < version.parse("3"):
         out.red("Please re-run scripts/initenv.sh to update pyinfra to version 3.")
         return 1

-    try:
-        retcode = out.check_call(cmd, env=env)
-        if retcode == 0:
-            out.green("Deploy completed, call `cmdeploy dns` next.")
-        elif not remote_data["acme_account_url"]:
-            out.red("Deploy completed but letsencrypt not configured")
-            out.red("Run 'cmdeploy run' again")
-            retcode = 0
-        else:
-            out.red("Deploy failed")
-    except subprocess.CalledProcessError:
-        out.red("Deploy failed")
-        retcode = 1
+    retcode = out.check_call(cmd, env=env)
+    if retcode == 0:
+        out.green("Deploy completed, call `cmdeploy dns` next.")
+    elif not remote_data["acme_account_url"]:
+        out.red("Deploy completed but letsencrypt not configured")
+        out.red("Run 'cmdeploy run' again")
+        retcode = 0
+    else:
+        out.red("Deploy failed")
     return retcode
@@ -335,9 +326,9 @@ def main(args=None):
if not hasattr(args, "func"): if not hasattr(args, "func"):
return parser.parse_args(["-h"]) return parser.parse_args(["-h"])
def get_sshexec(host): def get_sshexec():
print(f"[ssh] login to {host}") print(f"[ssh] login to {args.config.mail_domain}")
return SSHExec(host, verbose=args.verbose) return SSHExec(args.config.mail_domain, verbose=args.verbose)
args.get_sshexec = get_sshexec args.get_sshexec = get_sshexec


@@ -177,34 +177,20 @@ service auth-worker {
 }

 service imap-login {
-  # High-performance mode as described in
-  # <https://doc.dovecot.org/2.3/admin_manual/login_processes/#high-performance-mode>
-  #
-  # So-called high-security mode described in
-  # <https://doc.dovecot.org/2.3/admin_manual/login_processes/#high-security-mode>
-  # and enabled by default with `service_count = 1` starts one process per connection
-  # and has problems logging in thousands of users after Dovecot restart.
-  service_count = 0
+  # High-security mode.
+  # Each process serves a single connection and exits afterwards.
+  # This is the default, but we set it explicitly to be sure.
+  # See <https://doc.dovecot.org/admin_manual/login_processes/#high-security-mode> for details.
+  service_count = 1

-  # Increase virtual memory size limit.
-  # Since imap-login processes handle TLS connections
-  # even after logging users in
-  # and many connections are handled by each process,
-  # memory size limit should be increased.
-  #
-  # Otherwise the whole process eventually dies
-  # with an error similar to
-  # imap-login: Fatal: master: service(imap-login):
-  # child 1422951 returned error 83
-  # (Out of memory (service imap-login { vsz_limit=256 MB },
-  # you may need to increase it)
-  # and takes down all its TLS connections at once.
-  vsz_limit = 1G
+  # Increase the number of simultaneous connections.
+  #
+  # As of Dovecot 2.3.19.1 the default is 100 processes.
+  # Combined with `service_count = 1` it means only 100 connections
+  # can be handled simultaneously.
+  process_limit = 10000

   # Avoid startup latency for new connections.
-  #
-  # Should be set to at least the number of CPU cores
-  # according to the documentation.
   process_min_avail = 10
 }


@@ -1,5 +1,5 @@
 # delete already seen big mails after 7 days, in the INBOX
-2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/cur/*' -mtime +{{ config.delete_large_after }} -size +200k -type f -delete
+2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/cur/*' -mtime +7 -size +200k -type f -delete
 # delete all mails after {{ config.delete_mails_after }} days, in the Inbox
 2 0 * * * vmail find {{ config.mailboxes_dir }} -path '*/cur/*' -mtime +{{ config.delete_mails_after }} -type f -delete
 # or in any IMAP subfolder
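For context, the `-mtime +7 -size +200k` predicate in the cron jobs above keeps only files older than seven days and larger than 200 KiB. A minimal Python equivalent of that per-file filter (illustrative only, not part of the deployment):

```python
import os
import tempfile
import time

def is_expirable(path, days=7, min_size=200 * 1024):
    # Mirror find's `-mtime +7 -size +200k` test for a single file.
    st = os.stat(path)
    age_days = (time.time() - st.st_mtime) / 86400
    return age_days > days and st.st_size > min_size

# Demo on a throwaway file backdated by ten days.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * (300 * 1024))
ten_days_ago = time.time() - 10 * 86400
os.utime(f.name, (ten_days_ago, ten_days_ago))
print(is_expirable(f.name))  # → True
```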


@@ -2,6 +2,15 @@ function dovecot_lua_notify_begin_txn(user)
   return user
 end

+function contains(v, needle)
+  for _, keyword in ipairs(v) do
+    if keyword == needle then
+      return true
+    end
+  end
+  return false
+end
+
 function dovecot_lua_notify_event_message_new(user, event)
   local mbox = user:mailbox(event.mailbox)
   mbox:sync()


@@ -3,7 +3,7 @@ Description=mtail

 [Service]
 Type=simple
-ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/local/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs -"
+ExecStart=/bin/sh -c "journalctl -f -o short-iso -n 0 | /usr/bin/mtail --address={{ address }} --port={{ port }} --progs /etc/mtail --logtostderr --logs /dev/stdin"
 Restart=on-failure

 [Install]


@@ -2,25 +2,11 @@ load_module modules/ngx_stream_module.so;

 user www-data;
 worker_processes auto;
-
-# Increase the number of connections
-# that a worker process can open
-# to avoid errors such as
-# accept4() failed (24: Too many open files)
-# and
-# socket() failed (24: Too many open files) while connecting to upstream
-# in the logs.
-# <https://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile>
-worker_rlimit_nofile 2048;
 pid /run/nginx.pid;
 error_log syslog:server=unix:/dev/log,facility=local3;

 events {
-    # Increase to avoid errors such as
-    # 768 worker_connections are not enough while connecting to upstream
-    # in the logs.
-    # <https://nginx.org/en/docs/ngx_core_module.html#worker_connections>
-    worker_connections 2048;
+    worker_connections 768;
     # multi_accept on;
 }


@@ -19,7 +19,7 @@ def perform_initial_checks(mail_domain):
     """Collecting initial DNS settings."""
     assert mail_domain
    if not shell("dig", fail_ok=True):
-        shell("apt-get update && apt-get install -y dnsutils")
+        shell("apt-get install -y dnsutils")

     A = query_dns("A", mail_domain)
     AAAA = query_dns("AAAA", mail_domain)
     MTA_STS = query_dns("CNAME", f"mta-sts.{mail_domain}")
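`query_dns` (not shown in this hunk) shells out to `dig`, which is why `dnsutils` gets installed on demand. For A records alone, a dependency-free approximation is possible with the standard library; this sketch is illustrative and not the project's implementation (CNAME and MX lookups genuinely need `dig` or a DNS library):

```python
import socket

def query_a(hostname):
    # Rough stand-in for `dig +short A <hostname>`:
    # IPv4 addresses only, deduplicated and sorted.
    try:
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

print(query_a("localhost"))  # typically ['127.0.0.1']
```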


@@ -1,6 +1,5 @@
 import inspect
 import os
-import subprocess
 import sys

 from queue import Queue
@@ -45,16 +44,30 @@ def print_stderr(item="", end="\n"):
     print(item, file=sys.stderr, end=end)


-class Exec:
+class SSHExec:
+    RemoteError = execnet.RemoteError
     FuncError = FuncError

-    def __init__(self, host, verbose, timeout):
-        self.host = host
+    def __init__(self, host, verbose=False, python="python3", timeout=60):
+        self.gateway = execnet.makegateway(f"ssh=root@{host}//python={python}")
+        self._remote_cmdloop_channel = bootstrap_remote(self.gateway, remote)
         self.timeout = timeout
         self.verbose = verbose

     def __call__(self, call, kwargs=None, log_callback=None):
-        return subprocess.check_output(call)
+        if kwargs is None:
+            kwargs = {}
+        assert call.__module__.startswith("cmdeploy.remote")
+        modname = call.__module__.replace("cmdeploy.", "")
+        self._remote_cmdloop_channel.send((modname, call.__name__, kwargs))
+        while 1:
+            code, data = self._remote_cmdloop_channel.receive(timeout=self.timeout)
+            if log_callback is not None and code == "log":
+                log_callback(data)
+            elif code == "finish":
+                return data
+            elif code == "error":
+                raise self.FuncError(data)

     def logged(self, call, kwargs):
         def log_progress(data):
@@ -72,33 +85,3 @@ class Exec:
         res = self(call, kwargs, log_callback=log_progress)
         print_stderr()
         return res
-
-
-class Local(Exec):
-    def __init__(self, host, verbose=False, timeout=60):
-        super().__init__(host, verbose, timeout)
-
-
-class SSHExec(Exec):
-    RemoteError = execnet.RemoteError
-
-    def __init__(self, host, verbose=False, timeout=60):
-        super().__init__(host, verbose, timeout)
-        self.gateway = execnet.makegateway(f"ssh=root@{host}//python=python3")
-        self._remote_cmdloop_channel = bootstrap_remote(self.gateway, remote)
-
-    def __call__(self, call, kwargs=None, log_callback=None):
-        if kwargs is None:
-            kwargs = {}
-        assert call.__module__.startswith("cmdeploy.remote")
-        modname = call.__module__.replace("cmdeploy.", "")
-        self._remote_cmdloop_channel.send((modname, call.__name__, kwargs))
-        while 1:
-            code, data = self._remote_cmdloop_channel.receive(timeout=self.timeout)
-            if log_callback is not None and code == "log":
-                log_callback(data)
-            elif code == "finish":
-                return data
-            elif code == "error":
-                raise self.FuncError(data)
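The `__call__` loop in SSHExec implements a small tagged-message protocol over the execnet channel: the remote side streams `("log", data)` tuples and terminates with `("finish", result)` or `("error", message)`. A self-contained sketch of that dispatch loop, using a `Queue` as a stand-in for the channel (the queue, the `FuncError` stand-in, and the sample payloads are illustrative, not the real execnet objects):

```python
from queue import Queue

class FuncError(Exception):
    """Stand-in for cmdeploy's FuncError; illustrative only."""

def receive_result(channel, log_callback=None):
    # Drain (code, data) tuples until the remote side signals completion,
    # mirroring the while-loop in SSHExec.__call__.
    while True:
        code, data = channel.get()
        if log_callback is not None and code == "log":
            log_callback(data)
        elif code == "finish":
            return data
        elif code == "error":
            raise FuncError(data)

# Simulate a remote call that logs once and then finishes.
channel = Queue()
channel.put(("log", "installing packages"))
channel.put(("finish", 0))
logs = []
result = receive_result(channel, log_callback=logs.append)
print(result, logs)  # → 0 ['installing packages']
```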


@@ -90,13 +90,8 @@ def test_concurrent_logins_same_account(


 def test_no_vrfy(chatmail_config):
-    domain = chatmail_config.mail_domain
     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-    sock.settimeout(10)
-    try:
-        sock.connect((domain, 25))
-    except socket.timeout:
-        pytest.skip(f"port 25 not reachable for {domain}")
+    sock.connect((chatmail_config.mail_domain, 25))
     banner = sock.recv(1024)
     print(banner)
     sock.send(b"VRFY wrongaddress@%s\r\n" % (chatmail_config.mail_domain.encode(),))


@@ -1,7 +1,5 @@
 import datetime
 import smtplib
-import socket
-import subprocess

 import pytest

@@ -57,20 +55,11 @@ class TestSSHExecutor:

     def test_opendkim_restarted(self, sshexec):
         """check that opendkim is not running for longer than a day."""
-        cmd = "systemctl show opendkim --timestamp=utc --property=ActiveEnterTimestamp"
-        out = sshexec(call=remote.rshell.shell, kwargs=dict(command=cmd))
-        datestring = out.split("=")[1]
-        since_date = datetime.datetime.strptime(datestring, "%a %Y-%m-%d %H:%M:%S %Z")
-        now = datetime.datetime.now(since_date.tzinfo)
-        assert (now - since_date).total_seconds() < 60 * 60 * 51
-
-
-def test_timezone_env(remote):
-    for line in remote.iter_output("env"):
-        print(line)
-        if line == "tz=:/etc/localtime":
-            return True
-    pytest.fail("TZ is not set")
+        out = sshexec(call=remote.rshell.shell, kwargs=dict(command="systemctl status opendkim"))
+        assert type(out) == str
+        since_date_str = out.split("since ")[1].split(";")[0]
+        since_date = datetime.datetime.strptime(since_date_str, "%a %Y-%m-%d %H:%M:%S %Z")
+        assert (datetime.datetime.now() - since_date).total_seconds() < 60 * 60 * 24


 def test_remote(remote, imap_or_smtp):
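Both variants of the opendkim test parse a systemd timestamp with the `%a %Y-%m-%d %H:%M:%S %Z` layout; the removed variant read it from the machine-readable `ActiveEnterTimestamp=` property instead of scraping `systemctl status` output. A sketch of that parsing with a hard-coded sample value (a real test fetches the line over SSH):

```python
import datetime

# Hypothetical output line from
# `systemctl show opendkim --timestamp=utc --property=ActiveEnterTimestamp`.
out = "ActiveEnterTimestamp=Thu 2025-04-10 09:52:23 UTC"

datestring = out.split("=")[1]
since_date = datetime.datetime.strptime(datestring, "%a %Y-%m-%d %H:%M:%S %Z")
print(since_date.strftime("%Y-%m-%d %H:%M:%S"))  # → 2025-04-10 09:52:23
```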
@@ -127,21 +116,9 @@ def test_authenticated_from(cmsetup, maildata):

 @pytest.mark.parametrize("from_addr", ["fake@example.org", "fake@testrun.org"])
 def test_reject_missing_dkim(cmsetup, maildata, from_addr):
-    domain = cmsetup.maildomain
-    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-    sock.settimeout(10)
-    try:
-        sock.connect((domain, 25))
-    except socket.timeout:
-        pytest.skip(f"port 25 not reachable for {domain}")
     recipient = cmsetup.gen_users(1)[0]
-    msg = maildata(
-        "encrypted.eml", from_addr=from_addr, to_addr=recipient.addr
-    ).as_string()
-    conn = smtplib.SMTP(cmsetup.maildomain, 25, timeout=10)
-    with conn as s:
+    msg = maildata("encrypted.eml", from_addr=from_addr, to_addr=recipient.addr).as_string()
+    with smtplib.SMTP(cmsetup.maildomain, 25) as s:
         with pytest.raises(smtplib.SMTPDataError, match="No valid DKIM signature"):
             s.sendmail(from_addr=from_addr, to_addrs=recipient.addr, msg=msg)

@@ -199,25 +176,6 @@ def test_expunged(remote, chatmail_config):
         f"find {chatmail_config.mailboxes_dir} -path '*/tmp/*' -mtime +{outdated_days} -type f",
         f"find {chatmail_config.mailboxes_dir} -path '*/.*/tmp/*' -mtime +{outdated_days} -type f",
     ]
-    outdated_days = int(chatmail_config.delete_large_after) + 1
-    find_cmds.append(
-        "find {chatmail_config.mailboxes_dir} -path '*/cur/*' -mtime +{outdated_days} -size +200k -type f"
-    )
     for cmd in find_cmds:
         for line in remote.iter_output(cmd):
             assert not line
-
-
-def test_deployed_state(remote):
-    git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
-    git_diff = subprocess.check_output(["git", "diff"]).decode()
-    git_status = [git_hash.strip()]
-    for line in git_diff.splitlines():
-        git_status.append(line.strip().lower())
-    remote_version = []
-    for line in remote.iter_output("cat /etc/chatmail-version"):
-        print(line)
-        remote_version.append(line)
-    # assert len(git_status) == len(remote_version)  # for some reason, we only get 11 lines from remote.iter_output()
-    for i in range(len(remote_version)):
-        assert git_status[i] == remote_version[i], "You have undeployed changes."


@@ -307,7 +307,6 @@ def cmfactory(request, gencreds, tmpdir, maildomain):
     class Data:
         def read_path(self, path):
             return
-
     am = ACFactory(request=request, tmpdir=tmpdir, testprocess=testproc, data=Data())
     # nb. a bit hacky


@@ -1,23 +1,5 @@
 #!/bin/sh
 set -e

-if command -v lsb_release 2>&1 >/dev/null; then
-    case "$(lsb_release -is)" in
-        Ubuntu | Debian )
-            if ! dpkg -l | grep python3-dev 2>&1 >/dev/null
-            then
-                echo "You need to install python3-dev for installing the other dependencies."
-                exit 1
-            fi
-            if ! gcc --version 2>&1 >/dev/null
-            then
-                echo "You need to install gcc for building Python dependencies."
-                exit 1
-            fi
-            ;;
-    esac
-fi
-
 python3 -m venv --upgrade-deps venv
 venv/bin/pip install -e chatmaild