Organize cmdeploy into install, configure, and activate stages (#695)

* refactor: Move all imports to top of cmdeploy/__init__.py

* refactor: Move addition of 9.9.9.9 resolver earlier

- Moved the "Add 9.9.9.9 to resolv.conf" step earlier, before the
  creation of users or updates to any config files.  This should not
  affect any of those operations.  Moving this step earlier makes it
  easier to accommodate the restructuring of the deployment process
  into separate components with separate stages for install,
  configure, and activate.

- Added a Deployer class that defines the base for objects that will
  handle installation of individual components, with install,
  configure, and activate stages.  
- The CMDEPLOY_STAGES environment variable is used to determine what
  stages to run.  If this is not defined, all stages run as usual.
- Added import of Deployer to cmdeploy/__init__.py.  This is not yet
  used, but the next series of commits will use it.
- In deploy_chatmail(), define an empty list of deployers, and call
  the create_groups() and create_users() methods for the items in the
  list.  This list will get filled with Deployer objects in the next
  series of commits.
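The staging mechanism described above can be sketched as follows. This is a minimal, hypothetical sketch (the real `Deployer` base class lives in cmdeploy and drives pyinfra operations); the comma-separated format of `CMDEPLOY_STAGES` is an assumption for illustration.

```python
import os

class Deployer:
    """Minimal sketch of a per-component deployer (illustrative only)."""

    def install(self):    # install packages/binaries
        pass

    def configure(self):  # write instance-specific config files
        pass

    def activate(self):   # enable/start systemd units
        pass

ALL_STAGES = ("install", "configure", "activate")

def selected_stages(env=os.environ):
    # CMDEPLOY_STAGES, e.g. "install,configure" (format assumed here);
    # when unset or empty, all stages run as usual.
    raw = env.get("CMDEPLOY_STAGES", "")
    if not raw.strip():
        return list(ALL_STAGES)
    return [s.strip() for s in raw.split(",") if s.strip()]
```

For example, `CMDEPLOY_STAGES=install,configure` would run only the first two stages, which is the kind of partial run the refactor enables.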

* refactor: Add DovecotDeployer

* refactor: Add PostfixDeployer

- Removed now-unused 'debug' variable from deploy_chatmail().

* refactor: Add NginxDeployer

- Use policy-rc.d during nginx install.  This is needed to keep nginx
  from starting up and interfering with acmetool.  For more information see:
    - https://serverfault.com/questions/861583/how-to-stop-nginx-from-being-automatically-started-on-install
    - https://major.io/p/install-debian-packages-without-starting-daemons/
    - https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
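The policy-rc.d mechanism from those links can be sketched like this. This is an illustrative sketch, not the cmdeploy implementation (which uses pyinfra's `files.put`/`apt.packages`); the helper name and the direct `apt-get` call are assumptions, and running it requires root on a Debian-like system.

```python
import os
import subprocess

POLICY_PATH = "/usr/sbin/policy-rc.d"
# Exit code 101 means "action forbidden by policy": while this script is
# in place, invoke-rc.d refuses to start (or stop) any service.
POLICY_SCRIPT = "#!/bin/sh\nexit 101\n"

def install_without_starting(packages):
    """Install Debian packages without letting their daemons auto-start."""
    with open(POLICY_PATH, "w") as f:
        f.write(POLICY_SCRIPT)
    os.chmod(POLICY_PATH, 0o755)
    try:
        subprocess.run(["apt-get", "install", "-y", *packages], check=True)
    finally:
        os.remove(POLICY_PATH)  # restore normal service handling
```

Removing the policy file afterwards is important: leaving it in place would silently block all later service starts.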

* refactor: Add OpendkimDeployer

- Note that this moves the installation of the opendkim package
  earlier in the deployment sequence.  Previously, it was installed
  during the _configure_opendkim() routine.

* refactor: Add UnboundDeployer

* refactor: Add IrohDeployer

- This splits the existing deploy_iroh_relay() routine into methods
  for the install, configure, and activate stages.

* refactor: Add JournaldDeployer

* refactor: Add AcmetoolDeployer

- This splits the existing deploy_acmetool() routine into methods for
  the install, configure, and activate stages.

* refactor: Add MtailDeployer

- This splits the existing deploy_mtail() routine into methods for the
  install, configure, and activate stages.

* refactor: Add MtastsDeployer

- This splits the existing _uninstall_mta_sts_daemon() routine into
  methods for the configure and activate stages.

* refactor: Add RspamdDeployer

- This replaces the existing _remove_rspamd() routine with a method
  for the install stage.

* refactor: Split _install_remote_venv_with_chatmaild into stages

- Split _install_remote_venv_with_chatmaild() into three routines, to
  handle the install, configure, and activate stages.
- This moves the upload of chatmail.ini later in the deployment
  process, because it is a configuration file specific to the
  instance, not software installation that would be uniform across all
  deployments.

* refactor: Add ChatmailVenvDeployer

* refactor: Add ChatmailDeployer

- This moves the installation of cron earlier in the deployment sequence.

* refactor: Add FcgiwrapDeployer

* refactor: Add EchobotDeployer

- This class is a special case because it has a dependency on the
  Postfix and Dovecot deployers.  When deciding whether to restart the
  echobot service, it needs to know whether the Postfix and Dovecot
  deployers restarted their services.  To support this dependency, the
  PostfixDeployer and DovecotDeployer objects are passed to the
  EchobotDeployer object, so it can check their was_restarted
  attributes.
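The dependency can be sketched with plain stubs (illustrative classes only; the real deployers wrap pyinfra operations, and the restart decision below stands in for the actual systemd logic):

```python
class PostfixDeployer:
    def __init__(self):
        self.was_restarted = False

    def activate(self):
        self.was_restarted = True  # stand-in for a real restart decision

class DovecotDeployer:
    def __init__(self):
        self.was_restarted = False

    def activate(self):
        self.was_restarted = True

class EchobotDeployer:
    def __init__(self, postfix, dovecot):
        # Hold references to the deployers we depend on.
        self.postfix = postfix
        self.dovecot = dovecot
        self.restarted = False

    def activate(self):
        # Restart echobot only if one of its dependencies restarted.
        if self.postfix.was_restarted or self.dovecot.was_restarted:
            self.restarted = True
```

This only works if the Postfix and Dovecot deployers run their activate stage before the echobot deployer, which is why ordering matters here.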

* refactor: Add WebsiteDeployer

- This adds a step to create /var/www in the install stage, because
  the directory needs to exist for the rsync in the configure stage to
  work.

* refactor: Add TurnDeployer

- This splits the existing deploy_turn_server() routine into methods
  for the install, configure, and activate stages.

* refactor: Move curl installation from IrohDeployer to ChatmailDeployer

- The 'curl' program is used in TurnDeployer and IrohDeployer, so it
  makes more sense to install it once at the beginning in
  ChatmailDeployer, rather than having each deployer that uses it
  install it separately.

* refactor: Reorder deploy_chatmail()

- The previous commits that added Deployer classes mostly kept
  deployment operations in the same order that they were in before.
  To organize the process into separate stages for install, configure,
  and activate, we need to reorder the method calls.  This commit does
  that reordering, and thus has the largest effect on the order of
  operations.
- The calls for the deployer objects are all reordered here so that
  the methods are called in the same sequence for each stage.  This
  will allow us to collect the calls into loops in the next commit.
  This commit provides a way to see a diff showing exactly how the
  sequence changed.
- The sequence of deployers was largely based on preserving the order
  of the "activate" stage, as this seems like the place order might be
  the most likely to matter.  Installation of packages and
  configuration of files should generally be able to run in any order.
  (ChatmailDeployer handles updating the apt data, and therefore needs
  to be first, however.)

* refactor: Call install, configure, and activate methods in loops

- Revised deploy_chatmail() to use all_deployers to call the
  install(), configure(), and activate() methods on all the deployers,
  rather than listing them explicitly in the code.
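The resulting loop structure can be sketched as follows (the demo classes are illustrative stand-ins, not the real cmdeploy deployers):

```python
def deploy(all_deployers, stages=("install", "configure", "activate")):
    """Run every deployer through each stage, one stage at a time."""
    order = []  # recorded only to illustrate the call sequence
    for stage in stages:
        for deployer in all_deployers:
            getattr(deployer, stage)()
            order.append((stage, type(deployer).__name__))
    return order

class NginxDemo:
    def install(self):
        pass

    def configure(self):
        pass

    def activate(self):
        pass

class PostfixDemo(NginxDemo):
    pass
```

Note the stage loop is outermost: all deployers finish installing before any of them is configured, which is exactly the grouping the reordering commit prepared for.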

* docs: Add architectural information about deployer classes

- Updated overview.rst to describe the Deployer class hierarchy and
  the motivations behind it.

* fix: Block unbound from starting up on install

- On an IPv4-only system, if unbound is started but not configured, it
  causes subsequent steps to fail to resolve hosts.
- Revised UnboundDeployer.install_impl() to use policy-rc.d to prevent
  the service from starting when installed.  This is the same
  mechanism used to keep nginx from starting on install.

* feat: Remove obs-home-deltachat.gpg

- We don't install Dovecot from OBS anymore.
- Removed files.put() that creates
  /etc/apt/keyrings/obs-home-deltachat.gpg; replaced this with a
  files.file() that sets present=False to remove the file from any
  existing installations where it already has been installed.
- Removed now-unused obs-home-deltachat.gpg file.
- Clarified description of sources.list operation.
- Suggested in review by missytake and hpk42.

* feat: Reorder deployers

- Moved fcgiwrap before nginx.
- Exchanged order of turn and unbound.
- Moved journald as early as possible.
- Suggested in review by missytake.

* chore: Add CHANGELOG.md entry for cmdeploy refactor

* refactor: Move unit list to ChatmailVenvDeployer

- Split _configure_remote_venv_with_chatmaild() into two functions.
  _configure_remote_venv_with_chatmaild() handles details specific to
  the "venv", while the new _configure_remote_units() is a more
  general function that is applicable to several services.
- Renamed _activate_remote_venv_with_chatmaild() to
  _activate_remote_units() because it doesn't have anything
  venv-specific.
- Removed list of units from helper functions (where it appeared
  twice); moved it to ChatmailVenvDeployer, where it is passed as an
  argument to _configure_remote_units() and _activate_remote_units().

* refactor: Move turnserver out of ChatmailVenvDeployer

- Revised TurnDeployer to use _configure_remote_units() and
  _activate_remote_units().  This class no longer uses need_restart
  and daemon_reload attributes to keep track of state.  The activate
  stage of ChatmailVenvDeployer was unconditionally restarting the
  service every time, so we don't need to keep track of extra state in
  an attempt to avoid restarting it; we can just handle the
  unconditional restart in TurnDeployer.activate_impl().
- Removed turnserver from the unit list in ChatmailVenvDeployer.

* refactor: Move echobot out of ChatmailVenvDeployer

- Revised EchobotDeployer to use _configure_remote_units() and
  _activate_remote_units().  The 'activate' stage of
  ChatmailVenvDeployer was unconditionally restarting the service
  every time, so EchobotDeployer no longer needs to depend on the
  was_restarted attributes of the postfix and dovecot deployers in an
  attempt to avoid restarting it; we can just handle the unconditional
  restart in EchobotDeployer.activate_impl().
- Removed echobot from the unit list in ChatmailVenvDeployer.
- Removed now-unused was_restarted attribute from PostfixDeployer and
  DovecotDeployer.

* refactor: Move doveauth out of ChatmailVenvDeployer

- Revised DovecotDeployer to use _configure_remote_units() and
  _activate_remote_units() to deploy doveauth.  This keeps the
  Dovecot-related services in a single deployer class, leaving only
  services that are part of the chatmail project in
  ChatmailVenvDeployer.
- Removed doveauth from the unit list in ChatmailVenvDeployer.

* strike unnecessary deployer variables

* remove indirection with "stages"

* simplify required_users configuration (a method is not needed for now)

* further reduce indirections for staged install

* now that the Deployer class is clean and not mixed with what is in Deployment, use the simpler "install", "configure", and "activate" names instead of *_impl

* remove static method and make Deployer instances not set any default state

* strike unnecessary *,** argument flexibility

* use a Deployer for setting the remote git hash

* refactor: Revise AcmetoolDeployer for new Deployer interface

* style: Formatting revisions

* refactor: Pass all constructor arguments by position

- The constructor arguments do not have default values; they are all
  required.  Revised deploy_chatmail() to pass them by position rather
  than name, so that the caller is not coupled to the names of the
  arguments inside the method definition.

* refactor: Simplify interface to Deployer.install()

- In the current code, the only class using the interface that sets
  need_restart from the return value of the install() method was
  IrohDeployer.  That interface was created when the install method
  was a static method, but now it is an instance method with access to
  'self'.  Therefore, we don't need to pass anything up to the caller
  to have them set the attribute, we can just set it.
- Revised IrohDeployer.install() to set self.need_restart directly,
  rather than returning a value.
- Revised Deployment.install() to ignore the return value of the
  deployers' install() methods.
- need_restart is still present in the base Deployer class to ensure
  that it is always defined, even when classes do not set it in a
  constructor.  Apart from this initialization for convenience, there
  is no longer any specific exposure of need_restart in the interface
  of the Deployer class.
- In general, install() methods should use 'self' as little as
  possible, preferably not at all.  In particular, install() methods
  should never depend on "config" data, such as the config dictionary
  in self.config or specific values like self.mail_domain.  This
  ensures that these methods can be used to perform generic
  installation operations that are applicable across multiple relay
  deployments, and therefore can be called in the process of building
  a general-purpose container image.

* docs: Update cmdeploy architecture details

- Revised cmdeploy documentation in doc/source/overview.rst to reflect
  the recent revisions to the Deployer interface.

* docs: Remove section about use of objects

---------

Co-authored-by: holger krekel <holger@merlinux.eu>
Author: cliffmccarthy (committed by GitHub)
Date: 2025-11-13 09:51:51 -06:00
Commit: 3df3c031d4 (parent 5515dc4c4b)
7 changed files with 799 additions and 481 deletions

@@ -2,6 +2,9 @@
 ## untagged
+- Organized cmdeploy into install, configure, and activate stages
+  ([#695](https://github.com/chatmail/relay/pull/695))
 - docs: move readme.md docs to sphinx documentation rendered at https://chatmail.at/doc/relay
   ([#711](https://github.com/chatmail/relay/pull/711))


@@ -11,6 +11,7 @@ from io import StringIO
 from pathlib import Path

 from chatmaild.config import Config, read_config
+from cmdeploy.cmdeploy import Out
 from pyinfra import facts, host, logger
 from pyinfra.api import FactBase
 from pyinfra.facts.files import File, Sha256File
@@ -18,7 +19,9 @@ from pyinfra.facts.server import Sysctl
 from pyinfra.facts.systemd import SystemdEnabled
 from pyinfra.operations import apt, files, pip, server, systemd

-from .acmetool import deploy_acmetool
+from .acmetool import AcmetoolDeployer
+from .deployer import Deployer, Deployment
+from .www import build_webpages, find_merge_conflict, get_paths


 class Port(FactBase):
@@ -61,13 +64,12 @@ def remove_legacy_artifacts():
     )


-def _install_remote_venv_with_chatmaild(config) -> None:
+def _install_remote_venv_with_chatmaild() -> None:
     remove_legacy_artifacts()
     dist_file = _build_chatmaild(dist_dir=Path("chatmaild/dist"))
     remote_base_dir = "/usr/local/lib/chatmaild"
     remote_dist_file = f"{remote_base_dir}/dist/{dist_file.name}"
     remote_venv_dir = f"{remote_base_dir}/venv"
-    remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
     root_owned = dict(user="root", group="root", mode="644")

     apt.packages(
@@ -83,13 +85,6 @@ def _install_remote_venv_with_chatmaild(config) -> None:
         **root_owned,
     )

-    files.put(
-        name=f"Upload {remote_chatmail_inipath}",
-        src=config._getbytefile(),
-        dest=remote_chatmail_inipath,
-        **root_owned,
-    )
-
     pip.virtualenv(
         name=f"chatmaild virtualenv {remote_venv_dir}",
         path=remote_venv_dir,
@@ -108,6 +103,20 @@ def _install_remote_venv_with_chatmaild(config) -> None:
         ],
     )


+def _configure_remote_venv_with_chatmaild(config) -> None:
+    remote_base_dir = "/usr/local/lib/chatmaild"
+    remote_venv_dir = f"{remote_base_dir}/venv"
+    remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
+    root_owned = dict(user="root", group="root", mode="644")
+
+    files.put(
+        name=f"Upload {remote_chatmail_inipath}",
+        src=config._getbytefile(),
+        dest=remote_chatmail_inipath,
+        **root_owned,
+    )
     files.template(
         src=importlib.resources.files(__package__).joinpath("metrics.cron.j2"),
         dest="/etc/cron.d/chatmail-metrics",
@@ -120,26 +129,21 @@ def _install_remote_venv_with_chatmaild(config) -> None:
         },
     )


+def _configure_remote_units(mail_domain, units) -> None:
+    remote_base_dir = "/usr/local/lib/chatmaild"
+    remote_venv_dir = f"{remote_base_dir}/venv"
+    remote_chatmail_inipath = f"{remote_base_dir}/chatmail.ini"
+    root_owned = dict(user="root", group="root", mode="644")
+
     # install systemd units
-    for fn in (
-        "doveauth",
-        "filtermail",
-        "filtermail-incoming",
-        "echobot",
-        "chatmail-metadata",
-        "lastlogin",
-        "turnserver",
-        "chatmail-expire",
-        "chatmail-expire.timer",
-        "chatmail-fsreport",
-        "chatmail-fsreport.timer",
-    ):
+    for fn in units:
         execpath = fn if fn != "filtermail-incoming" else "filtermail"
         params = dict(
             execpath=f"{remote_venv_dir}/bin/{execpath}",
             config_path=remote_chatmail_inipath,
             remote_venv_dir=remote_venv_dir,
-            mail_domain=config.mail_domain,
+            mail_domain=mail_domain,
         )
         basename = fn if "." in fn else f"{fn}.service"
@@ -153,6 +157,13 @@ def _install_remote_venv_with_chatmaild(config) -> None:
             dest=f"/etc/systemd/system/{basename}",
             **root_owned,
         )


+def _activate_remote_units(units) -> None:
+    # activate systemd units
+    for fn in units:
+        basename = fn if "." in fn else f"{fn}.service"
         if fn == "chatmail-expire" or fn == "chatmail-fsreport":
             # don't auto-start but let the corresponding timer trigger execution
             enabled = False
@@ -238,11 +249,6 @@ def _configure_opendkim(domain: str, dkim_selector: str = "dkim") -> bool:
         present=True,
     )

-    apt.packages(
-        name="apt install opendkim opendkim-tools",
-        packages=["opendkim", "opendkim-tools"],
-    )
-
     if not host.get_fact(File, f"/etc/dkimkeys/{dkim_selector}.private"):
         server.shell(
             name="Generate OpenDKIM domain keys",
@@ -263,14 +269,95 @@ def _configure_opendkim(domain: str, dkim_selector: str = "dkim") -> bool:
     return need_restart


-def _uninstall_mta_sts_daemon() -> None:
+class OpendkimDeployer(Deployer):
+    required_users = [("opendkim", None, ["opendkim"])]
+
+    def __init__(self, mail_domain):
+        self.mail_domain = mail_domain
+
+    def install(self):
+        apt.packages(
+            name="apt install opendkim opendkim-tools",
+            packages=["opendkim", "opendkim-tools"],
+        )
+
+    def configure(self):
+        self.need_restart = _configure_opendkim(self.mail_domain, "opendkim")
+
+    def activate(self):
+        systemd.service(
+            name="Start and enable OpenDKIM",
+            service="opendkim.service",
+            running=True,
+            enabled=True,
+            daemon_reload=self.need_restart,
+            restarted=self.need_restart,
+        )
+        self.need_restart = False
+
+
+class UnboundDeployer(Deployer):
+    def install(self):
+        # Run local DNS resolver `unbound`.
+        # `resolvconf` takes care of setting up /etc/resolv.conf
+        # to use 127.0.0.1 as the resolver.
+        #
+        # On an IPv4-only system, if unbound is started but not
+        # configured, it causes subsequent steps to fail to resolve hosts.
+        # Here, we use policy-rc.d to prevent unbound from starting up
+        # on initial install. Later, we will configure it and start it.
+        #
+        # For documentation about policy-rc.d, see:
+        # https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
+        #
+        files.put(
+            src=importlib.resources.files(__package__).joinpath("policy-rc.d"),
+            dest="/usr/sbin/policy-rc.d",
+            user="root",
+            group="root",
+            mode="755",
+        )
+        apt.packages(
+            name="Install unbound",
+            packages=["unbound", "unbound-anchor", "dnsutils"],
+        )
+        files.file("/usr/sbin/policy-rc.d", present=False)
+
+    def configure(self):
+        server.shell(
+            name="Generate root keys for validating DNSSEC",
+            commands=[
+                "unbound-anchor -a /var/lib/unbound/root.key || true",
+            ],
+        )
+
+    def activate(self):
+        server.shell(
+            name="Reset any failed state of unbound",
+            commands=[
+                "systemctl reset-failed unbound.service",
+            ],
+        )
+        systemd.service(
+            name="Start and enable unbound",
+            service="unbound.service",
+            running=True,
+            enabled=True,
+        )
+
+
+class MtastsDeployer(Deployer):
+    def configure(self):
         # Remove configuration.
         files.file("/etc/mta-sts-daemon.yml", present=False)
         files.directory("/usr/local/lib/postfix-mta-sts-resolver", present=False)
         files.file("/etc/systemd/system/mta-sts-daemon.service", present=False)

+    def activate(self):
         systemd.service(
             name="Stop MTA-STS daemon",
             service="mta-sts-daemon.service",
@@ -330,6 +417,35 @@ def _configure_postfix(config: Config, debug: bool = False) -> bool:
     return need_restart


+class PostfixDeployer(Deployer):
+    required_users = [("postfix", None, ["opendkim"])]
+
+    def __init__(self, config, disable_mail):
+        self.config = config
+        self.disable_mail = disable_mail
+
+    def install(self):
+        apt.packages(
+            name="Install Postfix",
+            packages="postfix",
+        )
+
+    def configure(self):
+        self.need_restart = _configure_postfix(self.config)
+
+    def activate(self):
+        restart = False if self.disable_mail else self.need_restart
+        systemd.service(
+            name="disable postfix for now" if self.disable_mail else "Start and enable Postfix",
+            service="postfix.service",
+            running=False if self.disable_mail else True,
+            enabled=False if self.disable_mail else True,
+            restarted=restart,
+        )
+        self.need_restart = False
+
+
 def _install_dovecot_package(package: str, arch: str):
     arch = "amd64" if arch == "x86_64" else arch
     arch = "arm64" if arch == "aarch64" else arch
@@ -430,6 +546,38 @@ def _configure_dovecot(config: Config, debug: bool = False) -> bool:
     return need_restart


+class DovecotDeployer(Deployer):
+    def __init__(self, config, disable_mail):
+        self.config = config
+        self.disable_mail = disable_mail
+        self.units = ["doveauth"]
+
+    def install(self):
+        arch = host.get_fact(facts.server.Arch)
+        if not "dovecot.service" in host.get_fact(SystemdEnabled):
+            _install_dovecot_package("core", arch)
+            _install_dovecot_package("imapd", arch)
+            _install_dovecot_package("lmtpd", arch)
+
+    def configure(self):
+        _configure_remote_units(self.config.mail_domain, self.units)
+        self.need_restart = _configure_dovecot(self.config)
+
+    def activate(self):
+        _activate_remote_units(self.units)
+        restart = False if self.disable_mail else self.need_restart
+        systemd.service(
+            name="disable dovecot for now" if self.disable_mail else "Start and enable Dovecot",
+            service="dovecot.service",
+            running=False if self.disable_mail else True,
+            enabled=False if self.disable_mail else True,
+            restarted=restart,
+        )
+        self.need_restart = False
+
+
 def _configure_nginx(config: Config, debug: bool = False) -> bool:
     """Configures nginx HTTP server."""
     need_restart = False
@@ -487,8 +635,92 @@ def _configure_nginx(config: Config, debug: bool = False) -> bool:
     return need_restart


-def _remove_rspamd() -> None:
-    """Remove rspamd"""
+class NginxDeployer(Deployer):
+    def __init__(self, config):
+        self.config = config
+
+    def install(self):
+        #
+        # If we allow nginx to start up on install, it will grab port
+        # 80, which then will block acmetool from listening on the port.
+        # That in turn prevents getting certificates, which then causes
+        # an error when we try to start nginx on the custom config
+        # that leaves port 80 open but also requires certificates to
+        # be present. To avoid getting into that interlocking mess,
+        # we use policy-rc.d to prevent nginx from starting up when it
+        # is installed.
+        #
+        # This approach allows us to avoid performing any explicit
+        # systemd operations during the install stage (as opposed to
+        # allowing it to start and then forcing it to stop), which allows
+        # the install stage to run in non-systemd environments like a
+        # container image build.
+        #
+        # For documentation about policy-rc.d, see:
+        # https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
+        #
+        files.put(
+            src=importlib.resources.files(__package__).joinpath("policy-rc.d"),
+            dest="/usr/sbin/policy-rc.d",
+            user="root",
+            group="root",
+            mode="755",
+        )
+        apt.packages(
+            name="Install nginx",
+            packages=["nginx", "libnginx-mod-stream"],
+        )
+        files.file("/usr/sbin/policy-rc.d", present=False)
+
+    def configure(self):
+        self.need_restart = _configure_nginx(self.config)
+
+    def activate(self):
+        systemd.service(
+            name="Start and enable nginx",
+            service="nginx.service",
+            running=True,
+            enabled=True,
+            restarted=self.need_restart,
+        )
+        self.need_restart = False
+
+
+class WebsiteDeployer(Deployer):
+    def __init__(self, config):
+        self.config = config
+
+    def install(self):
+        files.directory(
+            name="Ensure /var/www exists",
+            path="/var/www",
+            user="root",
+            group="root",
+            mode="755",
+            present=True,
+        )
+
+    def configure(self):
+        www_path, src_dir, build_dir = get_paths(self.config)
+        # if www_folder was set to a non-existing folder, skip upload
+        if not www_path.is_dir():
+            logger.warning("Building web pages is disabled in chatmail.ini, skipping")
+        elif (path := find_merge_conflict(src_dir)) is not None:
+            logger.warning(f"Merge conflict found in {path}, skipping website deployment. Fix merge conflict if you want to upload your web page.")
+        else:
+            # if www_folder is a hugo page, build it
+            if build_dir:
+                www_path = build_webpages(src_dir, build_dir, self.config)
+            # if it is not a hugo page, upload it as is
+            files.rsync(
+                f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"]
+            )
+
+
+class RspamdDeployer(Deployer):
+    def install(self):
         apt.packages(name="Remove rspamd", packages="rspamd", present=False)
@@ -507,7 +739,12 @@ def check_config(config):
     return config


-def deploy_turn_server(config):
+class TurnDeployer(Deployer):
+    def __init__(self, mail_domain):
+        self.mail_domain = mail_domain
+        self.units = ["turnserver"]
+
+    def install(self):
         (url, sha256sum) = {
             "x86_64": (
                 "https://github.com/chatmail/chatmail-turn/releases/download/v0.3/chatmail-turn-x86_64-linux",
@@ -519,8 +756,6 @@ def deploy_turn_server(config):
             ),
         }[host.get_fact(facts.server.Arch)]

-    need_restart = False
-
         existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/chatmail-turn")
         if existing_sha256sum != sha256sum:
             server.shell(
@@ -530,34 +765,19 @@ def deploy_turn_server(config):
                 "chmod 755 /usr/local/bin/chatmail-turn",
             ],
         )
-        need_restart = True
-
-    source_path = importlib.resources.files(__package__).joinpath(
-        "service", "turnserver.service.f"
-    )
-    content = source_path.read_text().format(mail_domain=config.mail_domain).encode()
-    systemd_unit = files.put(
-        name="Upload turnserver.service",
-        src=io.BytesIO(content),
-        dest="/etc/systemd/system/turnserver.service",
-        user="root",
-        group="root",
-        mode="644",
-    )
-    need_restart |= systemd_unit.changed
-
-    systemd.service(
-        name="Setup turnserver service",
-        service="turnserver.service",
-        running=True,
-        enabled=True,
-        restarted=need_restart,
-        daemon_reload=systemd_unit.changed,
-    )
+
+    def configure(self):
+        _configure_remote_units(self.mail_domain, self.units)
+
+    def activate(self):
+        _activate_remote_units(self.units)


-def deploy_mtail(config):
+class MtailDeployer(Deployer):
+    def __init__(self, mtail_address):
+        self.mtail_address = mtail_address
+
+    def install(self):
         # Uninstall mtail package, we are going to install a static binary.
         apt.packages(name="Uninstall mtail", packages=["mtail"], present=False)
@@ -580,15 +800,18 @@ def deploy_mtail(config):
             ],
         )

+    def configure(self):
         # Using our own systemd unit instead of `/usr/lib/systemd/system/mtail.service`.
         # This allows to read from journalctl instead of log files.
         files.template(
-            src=importlib.resources.files(__package__).joinpath("mtail/mtail.service.j2"),
+            src=importlib.resources.files(__package__).joinpath(
+                "mtail/mtail.service.j2"
+            ),
             dest="/etc/systemd/system/mtail.service",
             user="root",
             group="root",
             mode="644",
-            address=config.mtail_address or "127.0.0.1",
+            address=self.mtail_address or "127.0.0.1",
             port=3903,
         )
@@ -602,17 +825,24 @@ def deploy_mtail(config):
             group="root",
             mode="644",
         )
+        self.need_restart = mtail_conf.changed

+    def activate(self):
         systemd.service(
             name="Start and enable mtail",
             service="mtail.service",
-            running=bool(config.mtail_address),
-            enabled=bool(config.mtail_address),
-            restarted=mtail_conf.changed,
+            running=bool(self.mtail_address),
+            enabled=bool(self.mtail_address),
+            restarted=self.need_restart,
         )
+        self.need_restart = False


-def deploy_iroh_relay(config) -> None:
+class IrohDeployer(Deployer):
+    def __init__(self, enable_iroh_relay):
+        self.enable_iroh_relay = enable_iroh_relay
+
+    def install(self):
         (url, sha256sum) = {
             "x86_64": (
                 "https://github.com/n0-computer/iroh/releases/download/v0.35.0/iroh-relay-v0.35.0-x86_64-unknown-linux-musl.tar.gz",
@@ -624,13 +854,6 @@ def deploy_iroh_relay(config) -> None:
             ),
         }[host.get_fact(facts.server.Arch)]

-    apt.packages(
-        name="Install curl",
-        packages=["curl"],
-    )
-
-    need_restart = False
-
         existing_sha256sum = host.get_fact(Sha256File, "/usr/local/bin/iroh-relay")
         if existing_sha256sum != sha256sum:
             server.shell(
@@ -640,8 +863,10 @@ def deploy_iroh_relay(config) -> None:
                     "chmod 755 /usr/local/bin/iroh-relay",
                 ],
             )
-        need_restart = True
+            self.need_restart = True
+
+    def configure(self):
         systemd_unit = files.put(
             name="Upload iroh-relay systemd unit",
             src=importlib.resources.files(__package__).joinpath("iroh-relay.service"),
@@ -650,7 +875,7 @@ def deploy_iroh_relay(config) -> None:
             group="root",
             mode="644",
         )
-    need_restart |= systemd_unit.changed
+        self.need_restart |= systemd_unit.changed

         iroh_config = files.put(
             name="Upload iroh-relay config",
@@ -660,14 +885,172 @@ def deploy_iroh_relay(config) -> None:
group="root", group="root",
mode="644", mode="644",
) )
need_restart |= iroh_config.changed self.need_restart |= iroh_config.changed
def activate(self):
systemd.service( systemd.service(
name="Start and enable iroh-relay", name="Start and enable iroh-relay",
service="iroh-relay.service", service="iroh-relay.service",
running=True, running=True,
enabled=config.enable_iroh_relay, enabled=self.enable_iroh_relay,
restarted=need_restart, restarted=self.need_restart,
)
self.need_restart = False
class JournaldDeployer(Deployer):
    def configure(self):
        journald_conf = files.put(
            name="Configure journald",
            src=importlib.resources.files(__package__).joinpath("journald.conf"),
            dest="/etc/systemd/journald.conf",
            user="root",
            group="root",
            mode="644",
        )
        self.need_restart = journald_conf.changed

    def activate(self):
        systemd.service(
            name="Start and enable journald",
            service="systemd-journald.service",
            running=True,
            enabled=True,
            restarted=self.need_restart,
        )
        self.need_restart = False


class EchobotDeployer(Deployer):
    #
    # This deployer depends on the dovecot and postfix deployers because
    # it needs to decide whether to restart its service based on
    # whether those two services were restarted.
    #
    def __init__(self, mail_domain):
        self.mail_domain = mail_domain
        self.units = ["echobot"]

    def install(self):
        apt.packages(
            # required for setfacl for echobot
            name="Install acl",
            packages="acl",
        )

    def configure(self):
        _configure_remote_units(self.mail_domain, self.units)

    def activate(self):
        _activate_remote_units(self.units)


class ChatmailVenvDeployer(Deployer):
    def __init__(self, config):
        self.config = config
        self.units = (
            "filtermail",
            "filtermail-incoming",
            "chatmail-metadata",
            "lastlogin",
            "chatmail-expire",
            "chatmail-expire.timer",
            "chatmail-fsreport",
            "chatmail-fsreport.timer",
        )

    def install(self):
        _install_remote_venv_with_chatmaild()

    def configure(self):
        _configure_remote_venv_with_chatmaild(self.config)
        _configure_remote_units(self.config.mail_domain, self.units)

    def activate(self):
        _activate_remote_units(self.units)


class ChatmailDeployer(Deployer):
    required_users = [
        ("vmail", "vmail", None),
        ("echobot", None, None),
        ("iroh", None, None),
    ]

    def __init__(self, mail_domain):
        self.mail_domain = mail_domain

    def install(self):
        # Remove OBS repository key that is no longer used.
        files.file("/etc/apt/keyrings/obs-home-deltachat.gpg", present=False)
        files.line(
            name="Remove DeltaChat OBS home repository from sources.list",
            path="/etc/apt/sources.list",
            line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
            escape_regex_characters=True,
            present=False,
        )
        apt.update(name="apt update", cache_time=24 * 3600)
        apt.upgrade(name="upgrade apt packages", auto_remove=True)
        apt.packages(
            name="Install curl",
            packages=["curl"],
        )
        apt.packages(
            name="Install rsync",
            packages=["rsync"],
        )
        apt.packages(
            name="Ensure cron is installed",
            packages=["cron"],
        )

    def configure(self):
        # This file is used by auth proxy.
        # https://wiki.debian.org/EtcMailName
        server.shell(
            name="Setup /etc/mailname",
            commands=[
                f"echo {self.mail_domain} >/etc/mailname; chmod 644 /etc/mailname"
            ],
        )


class FcgiwrapDeployer(Deployer):
    def install(self):
        apt.packages(
            name="Install fcgiwrap",
            packages=["fcgiwrap"],
        )

    def activate(self):
        systemd.service(
            name="Start and enable fcgiwrap",
            service="fcgiwrap.service",
            running=True,
            enabled=True,
        )


class GithashDeployer(Deployer):
    def activate(self):
        try:
            git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
        except Exception:
            git_hash = "unknown\n"
        try:
            git_diff = subprocess.check_output(["git", "diff"]).decode()
        except Exception:
            git_diff = ""
        files.put(
            name="Upload chatmail relay git commit hash",
            src=StringIO(git_hash + git_diff),
            dest="/etc/chatmail-version",
            mode="700",
        )
@@ -681,64 +1064,12 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
     check_config(config)
     mail_domain = config.mail_domain
-    from .www import build_webpages, find_merge_conflict, get_paths
-    server.group(name="Create vmail group", group="vmail", system=True)
-    server.user(name="Create vmail user", user="vmail", group="vmail", system=True)
-    server.group(name="Create opendkim group", group="opendkim", system=True)
-    server.user(
-        name="Create opendkim user",
-        user="opendkim",
-        groups=["opendkim"],
-        system=True,
-    )
-    server.user(
-        name="Add postfix user to opendkim group for socket access",
-        user="postfix",
-        groups=["opendkim"],
-        system=True,
-    )
-    server.user(name="Create echobot user", user="echobot", system=True)
-    server.user(name="Create iroh user", user="iroh", system=True)
-    # Add our OBS repository for dovecot_no_delay
-    files.put(
-        name="Add Deltachat OBS GPG key to apt keyring",
-        src=importlib.resources.files(__package__).joinpath("obs-home-deltachat.gpg"),
-        dest="/etc/apt/keyrings/obs-home-deltachat.gpg",
-        user="root",
-        group="root",
-        mode="644",
-    )
-    files.line(
-        name="Add DeltaChat OBS home repository to sources.list",
-        path="/etc/apt/sources.list",
-        line="deb [signed-by=/etc/apt/keyrings/obs-home-deltachat.gpg] https://download.opensuse.org/repositories/home:/deltachat/Debian_12/ ./",
-        escape_regex_characters=True,
-        present=False,
-    )
     if host.get_fact(Port, port=53) != "unbound":
         files.line(
             name="Add 9.9.9.9 to resolv.conf",
             path="/etc/resolv.conf",
             line="nameserver 9.9.9.9",
         )
-    apt.update(name="apt update", cache_time=24 * 3600)
-    apt.upgrade(name="upgrade apt packages", auto_remove=True)
-    apt.packages(
-        name="Install rsync",
-        packages=["rsync"],
-    )
-    deploy_turn_server(config)
-    # Run local DNS resolver `unbound`.
-    # `resolvconf` takes care of setting up /etc/resolv.conf
-    # to use 127.0.0.1 as the resolver.
     from cmdeploy.cmdeploy import Out
     port_services = [
         (["master", "smtpd"], 25),
@@ -766,176 +1097,38 @@ def deploy_chatmail(config_path: Path, disable_mail: bool) -> None:
         )
         exit(1)
-    apt.packages(
-        name="Install unbound",
-        packages=["unbound", "unbound-anchor", "dnsutils"],
-    )
-    server.shell(
-        name="Generate root keys for validating DNSSEC",
-        commands=[
-            "unbound-anchor -a /var/lib/unbound/root.key || true",
-            "systemctl reset-failed unbound.service",
-        ],
-    )
-    systemd.service(
-        name="Start and enable unbound",
-        service="unbound.service",
-        running=True,
-        enabled=True,
-    )
-    deploy_iroh_relay(config)
-    # Deploy acmetool to have TLS certificates.
     tls_domains = [mail_domain, f"mta-sts.{mail_domain}", f"www.{mail_domain}"]
-    deploy_acmetool(
-        email=config.acme_email,
-        domains=tls_domains,
-    )
-    apt.packages(
-        # required for setfacl for echobot
-        name="Install acl",
-        packages="acl",
-    )
-    apt.packages(
-        name="Install Postfix",
-        packages="postfix",
-    )
-    if not "dovecot.service" in host.get_fact(SystemdEnabled):
-        _install_dovecot_package("core", host.get_fact(facts.server.Arch))
-        _install_dovecot_package("imapd", host.get_fact(facts.server.Arch))
-        _install_dovecot_package("lmtpd", host.get_fact(facts.server.Arch))
-    apt.packages(
-        name="Install nginx",
-        packages=["nginx", "libnginx-mod-stream"],
-    )
-    apt.packages(
-        name="Install fcgiwrap",
-        packages=["fcgiwrap"],
-    )
-    www_path, src_dir, build_dir = get_paths(config)
-    # if www_folder was set to a non-existing folder, skip upload
-    if not www_path.is_dir():
-        logger.warning("Building web pages is disabled in chatmail.ini, skipping")
-    elif (path := find_merge_conflict(src_dir)) is not None:
-        logger.warning(f"Merge conflict found in {path}, skipping website deployment. Fix merge conflict if you want to upload your web page.")
-    else:
-        # if www_folder is a hugo page, build it
-        if build_dir:
-            www_path = build_webpages(src_dir, build_dir, config)
-        # if it is not a hugo page, upload it as is
-        files.rsync(f"{www_path}/", "/var/www/html", flags=["-avz", "--chown=www-data"])
-    _install_remote_venv_with_chatmaild(config)
-    debug = False
-    dovecot_need_restart = _configure_dovecot(config, debug=debug)
-    postfix_need_restart = _configure_postfix(config, debug=debug)
-    nginx_need_restart = _configure_nginx(config)
-    _uninstall_mta_sts_daemon()
-    _remove_rspamd()
-    opendkim_need_restart = _configure_opendkim(mail_domain, "opendkim")
-    systemd.service(
-        name="Start and enable OpenDKIM",
-        service="opendkim.service",
-        running=True,
-        enabled=True,
-        daemon_reload=opendkim_need_restart,
-        restarted=opendkim_need_restart,
-    )
-    # Dovecot should be started before Postfix
-    # because it creates authentication socket
-    # required by Postfix.
-    systemd.service(
-        name="disable dovecot for now" if disable_mail else "Start and enable Dovecot",
-        service="dovecot.service",
-        running=False if disable_mail else True,
-        enabled=False if disable_mail else True,
-        restarted=dovecot_need_restart if not disable_mail else False,
-    )
-    systemd.service(
-        name="disable postfix for now" if disable_mail else "Start and enable Postfix",
-        service="postfix.service",
-        running=False if disable_mail else True,
-        enabled=False if disable_mail else True,
-        restarted=postfix_need_restart if not disable_mail else False,
-    )
-    systemd.service(
-        name="Start and enable nginx",
-        service="nginx.service",
-        running=True,
-        enabled=True,
-        restarted=nginx_need_restart,
-    )
-    systemd.service(
-        name="Start and enable fcgiwrap",
-        service="fcgiwrap.service",
-        running=True,
-        enabled=True,
-    )
-    systemd.service(
-        name="Restart echobot if postfix and dovecot were just started",
-        service="echobot.service",
-        restarted=postfix_need_restart and dovecot_need_restart,
-    )
-    # This file is used by auth proxy.
-    # https://wiki.debian.org/EtcMailName
-    server.shell(
-        name="Setup /etc/mailname",
-        commands=[f"echo {mail_domain} >/etc/mailname; chmod 644 /etc/mailname"],
-    )
-    journald_conf = files.put(
-        name="Configure journald",
-        src=importlib.resources.files(__package__).joinpath("journald.conf"),
-        dest="/etc/systemd/journald.conf",
-        user="root",
-        group="root",
-        mode="644",
-    )
-    systemd.service(
-        name="Start and enable journald",
-        service="systemd-journald.service",
-        running=True,
-        enabled=True,
-        restarted=journald_conf.changed,
-    )
+    all_deployers = [
+        ChatmailDeployer(mail_domain),
+        JournaldDeployer(),
+        UnboundDeployer(),
+        TurnDeployer(mail_domain),
+        IrohDeployer(config.enable_iroh_relay),
+        AcmetoolDeployer(config.acme_email, tls_domains),
+        WebsiteDeployer(config),
+        ChatmailVenvDeployer(config),
+        MtastsDeployer(),
+        OpendkimDeployer(mail_domain),
+        # Dovecot should be started before Postfix
+        # because it creates authentication socket
+        # required by Postfix.
+        DovecotDeployer(config, disable_mail),
+        PostfixDeployer(config, disable_mail),
+        FcgiwrapDeployer(),
+        NginxDeployer(config),
+        RspamdDeployer(),
+        EchobotDeployer(mail_domain),
+        MtailDeployer(config.mtail_address),
+        GithashDeployer(),
+    ]
+    Deployment().perform_stages(all_deployers)
     files.directory(
         name="Ensure old logs on disk are deleted",
         path="/var/log/journal/",
         present=False,
     )
-    apt.packages(
-        name="Ensure cron is installed",
-        packages=["cron"],
-    )
-    try:
-        git_hash = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode()
-    except Exception:
-        git_hash = "unknown\n"
-    try:
-        git_diff = subprocess.check_output(["git", "diff"]).decode()
-    except Exception:
-        git_diff = ""
-    files.put(
-        name="Upload chatmail relay git commit hash",
-        src=StringIO(git_hash + git_diff),
-        dest="/etc/chatmail-version",
-        mode="700",
-    )
-    deploy_mtail(config)

View File

@@ -2,9 +2,18 @@ import importlib.resources
 from pyinfra.operations import apt, files, server, systemd
 
+from ..deployer import Deployer
+
-def deploy_acmetool(email="", domains=[]):
-    """Deploy acmetool."""
+
+class AcmetoolDeployer(Deployer):
+    def __init__(self, email, domains):
+        self.domains = domains
+        self.email = email
+        self.need_restart_redirector = False
+        self.need_restart_reconcile_service = False
+        self.need_restart_reconcile_timer = False
+
+    def install(self):
         apt.packages(
             name="Install acmetool",
             packages=["acmetool"],
@@ -30,13 +39,14 @@ def deploy_acmetool(email="", domains=[]):
             present=False,
         )
 
+    def configure(self):
         files.template(
             src=importlib.resources.files(__package__).joinpath("response-file.yaml.j2"),
             dest="/var/lib/acme/conf/responses",
             user="root",
             group="root",
             mode="644",
-            email=email,
+            email=self.email,
         )
 
         files.template(
@@ -56,14 +66,7 @@ def deploy_acmetool(email="", domains=[]):
             group="root",
             mode="644",
         )
-        systemd.service(
-            name="Setup acmetool-redirector service",
-            service="acmetool-redirector.service",
-            running=True,
-            enabled=True,
-            restarted=service_file.changed,
-        )
+        self.need_restart_redirector = service_file.changed
 
         reconcile_service_file = files.put(
             src=importlib.resources.files(__package__).joinpath(
@@ -74,14 +77,7 @@ def deploy_acmetool(email="", domains=[]):
             group="root",
             mode="644",
         )
-        systemd.service(
-            name="Setup acmetool-reconcile service",
-            service="acmetool-reconcile.service",
-            running=False,
-            enabled=False,
-            daemon_reload=reconcile_service_file.changed,
-        )
+        self.need_restart_reconcile_service = reconcile_service_file.changed
 
         reconcile_timer_file = files.put(
             src=importlib.resources.files(__package__).joinpath("acmetool-reconcile.timer"),
@@ -90,16 +86,37 @@ def deploy_acmetool(email="", domains=[]):
             group="root",
             mode="644",
         )
+        self.need_restart_reconcile_timer = reconcile_timer_file.changed
+
+    def activate(self):
+        systemd.service(
+            name="Setup acmetool-redirector service",
+            service="acmetool-redirector.service",
+            running=True,
+            enabled=True,
+            restarted=self.need_restart_redirector,
+        )
+        self.need_restart_redirector = False
+
+        systemd.service(
+            name="Setup acmetool-reconcile service",
+            service="acmetool-reconcile.service",
+            running=False,
+            enabled=False,
+            daemon_reload=self.need_restart_reconcile_service,
+        )
+        self.need_restart_reconcile_service = False
+
         systemd.service(
             name="Setup acmetool-reconcile timer",
             service="acmetool-reconcile.timer",
             running=True,
             enabled=True,
-            daemon_reload=reconcile_timer_file.changed,
+            daemon_reload=self.need_restart_reconcile_timer,
         )
+        self.need_restart_reconcile_timer = False
 
         server.shell(
-            name=f"Request certificate for: {', '.join(domains)}",
-            commands=[f"acmetool want --xlog.severity=debug {' '.join(domains)}"],
+            name=f"Request certificate for: {', '.join(self.domains)}",
+            commands=[f"acmetool want --xlog.severity=debug {' '.join(self.domains)}"],
         )

View File

@@ -0,0 +1,57 @@
import os

from pyinfra.operations import server


class Deployment:
    def install(self, deployer):
        # optional 'required_users' contains a list of (user, group, secondary-group-list) tuples.
        # If the group is None, no group is created corresponding to that user.
        # If the secondary group list is not None, all listed groups are created as well.
        required_users = getattr(deployer, "required_users", [])
        for user, group, groups in required_users:
            if group is not None:
                server.group(
                    name="Create {} group".format(group), group=group, system=True
                )
            if groups is not None:
                for group2 in groups:
                    server.group(
                        name="Create {} group".format(group2), group=group2, system=True
                    )
            server.user(
                name="Create {} user".format(user),
                user=user,
                group=group,
                groups=groups,
                system=True,
            )
        deployer.install()

    def configure(self, deployer):
        deployer.configure()

    def activate(self, deployer):
        deployer.activate()

    def perform_stages(self, deployers):
        default_stages = "install,configure,activate"
        stages = os.getenv("CMDEPLOY_STAGES", default_stages).split(",")
        for stage in stages:
            for deployer in deployers:
                getattr(self, stage)(deployer)


class Deployer:
    need_restart = False

    def install(self):
        pass

    def configure(self):
        pass

    def activate(self):
        pass
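The dispatch order this produces can be seen in a small standalone sketch. It leaves out pyinfra, the `required_users` handling, and the environment variable; `DemoDeployer` and the recorded `calls` list exist purely for illustration. Because the outer loop is over stages, every deployer's `install()` runs before any `configure()`:

```python
# Standalone sketch of the staged dispatch; pyinfra is not involved.
calls = []


class Deployer:
    """Mirrors the base class above: every stage defaults to a no-op."""

    def install(self):
        pass

    def configure(self):
        pass

    def activate(self):
        pass


class Deployment:
    def perform_stages(self, deployers, stages=("install", "configure", "activate")):
        # Outer loop over stages: all installs happen before any configure.
        for stage in stages:
            for deployer in deployers:
                getattr(deployer, stage)()


class DemoDeployer(Deployer):
    def __init__(self, tag):
        self.tag = tag

    def install(self):
        calls.append(("install", self.tag))

    def configure(self):
        calls.append(("configure", self.tag))


Deployment().perform_stages([DemoDeployer("a"), DemoDeployer("b")])
print(calls)
```

The `activate` stage still runs here; it just inherits the no-op from the base class, so it records nothing.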

View File

@@ -0,0 +1,3 @@
#!/bin/sh
echo "All runlevel operations denied by policy" >&2
exit 101
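This script implements Debian's invoke-rc.d policy layer: during package installation it is placed at `/usr/sbin/policy-rc.d`, and an exit status of 101 tells `invoke-rc.d` that starting the service is denied, so nginx gets installed without being started. The exit-code contract can be observed without touching the system path; the temporary-file location below is purely illustrative:

```python
# Run a copy of the policy script the way maintainer scripts (via
# invoke-rc.d) would, and observe the "denied" exit status of 101.
import subprocess
import tempfile

script = (
    "#!/bin/sh\n"
    'echo "All runlevel operations denied by policy" >&2\n'
    "exit 101\n"
)
with tempfile.NamedTemporaryFile("w", suffix="-policy-rc.d", delete=False) as f:
    f.write(script)
    path = f.name

# Invoke through /bin/sh so no execute permission is needed on the file.
result = subprocess.run(
    ["/bin/sh", path, "nginx", "start"], capture_output=True, text=True
)
print(result.returncode)  # 101: the service start request is denied
```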

View File

@@ -297,3 +297,48 @@ actually it is a problem with your TLS certificate.
 .. _nginx: https://nginx.org
 .. _pyinfra: https://pyinfra.com
Architecture of cmdeploy
------------------------

cmdeploy is a Python program that uses the pyinfra library to deploy
chatmail relays, with all the necessary software, configuration, and
services.  The deployment process performs three primary types of
operation:

1. Installation of software, universal across all deployments.
2. Configuration of software, with deploy-specific variations.
3. Activation of services.

The process is implemented through a family of "deployer" objects
which all derive from a common ``Deployer`` base class, defined in
``cmdeploy/src/cmdeploy/deployer.py``.  Each object provides methods
for the three stages -- install, configure, and activate.  The
top-level procedure in ``deploy_chatmail()`` calls these methods for
all the deployer objects via the ``Deployment.perform_stages()``
method, also defined in ``deployer.py``.  This first calls all the
install methods, then all the configure methods, then all the
activate methods.

The ``Deployment`` class also supports a ``CMDEPLOY_STAGES``
environment variable, which limits the process to specific stages.
Note that some deployers keep state between the stages (this is one
reason why they are implemented as objects), and that state does not
propagate between stages that run in separate invocations of
cmdeploy.  This environment variable is intended for use in future
revisions to support building Docker images with software
pre-installed, and configuring containers at run time from
environment variables.
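A minimal sketch of how the variable narrows a run; the ``DemoDeployer`` and ``perform_stages`` stand-ins here are simplified copies for illustration, not the real cmdeploy entry point:

.. code-block:: python

```python
# Sketch: CMDEPLOY_STAGES selects which stages perform_stages() runs.
import os

ran = []


class DemoDeployer:
    def install(self):
        ran.append("install")

    def configure(self):
        ran.append("configure")

    def activate(self):
        ran.append("activate")


def perform_stages(deployers):
    # Mirrors the env handling in Deployment.perform_stages().
    stages = os.getenv("CMDEPLOY_STAGES", "install,configure,activate").split(",")
    for stage in stages:
        for deployer in deployers:
            getattr(deployer, stage)()


os.environ["CMDEPLOY_STAGES"] = "install"  # e.g. while pre-building an image
perform_stages([DemoDeployer()])
print(ran)  # only the install stage ran
```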

The ``install()`` methods of the deployer classes should use ``self``
as little as possible, preferably not at all.  In particular,
``install()`` methods should never depend on "config" data, such as
the config dictionary in ``self.config`` or specific values like
``self.mail_domain``.  This ensures that these methods perform
generic installation operations that are applicable across multiple
relay deployments, and therefore can be called when building a
general-purpose container image.
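A runnable sketch of this guideline, with the pyinfra operations replaced by recording stubs; ``ExampleDeployer`` and both stub helpers are hypothetical:

.. code-block:: python

```python
# Sketch: install() stays config-free, configure() consults per-deployment
# values.  apt_packages/files_put are recording stand-ins for pyinfra ops.
actions = []


def apt_packages(name, packages):
    actions.append(("apt", name, tuple(packages)))


def files_put(name, dest, contents):
    actions.append(("put", name, dest, contents))


class Deployer:
    def install(self):
        pass

    def configure(self):
        pass

    def activate(self):
        pass


class ExampleDeployer(Deployer):
    def __init__(self, mail_domain):
        self.mail_domain = mail_domain

    def install(self):
        # OK: identical for every deployment, so it can run while
        # building a generic container image.
        apt_packages(name="Install example", packages=["example"])

    def configure(self):
        # Per-deployment values are only consulted here.
        files_put(
            name="Configure example",
            dest="/etc/example.conf",
            contents=f"domain = {self.mail_domain}\n",
        )


d = ExampleDeployer("example.org")
d.install()
d.configure()
print(actions)
```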

Operations that start services for systemd-based deployments should
only be called from the ``activate()`` methods.  These methods will
not be called in non-systemd container environments.