Compare commits


18 Commits
2.2.1 ... 2.3.0

Author SHA1 Message Date
Anatoly Sablin
9b4aff58c7 Add migration documentation. 2020-01-30 23:17:01 +03:00
Anatoly Sablin
a20e41574d Update docs. Add new options and configuration. 2020-01-28 23:20:29 +03:00
Anatoly Sablin
72977d65ae Workaround for postgresql. 2020-01-28 23:18:39 +03:00
Anatoly Sablin
7555fff1a5 Add the postgresql backend for internal storage. 2020-01-28 22:15:26 +03:00
Anatoly Sablin
aed12e5536 Add the --dump-and-exit option to exit after printing the full configuration. 2020-01-28 01:02:43 +03:00
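A minimal usage sketch for this option; the jar and configuration paths are assumptions and depend on the installation (the flags themselves come from the help output further down in this compare):
```shell script
# Prints the resolved configuration and exits without starting the server.
java -jar ma1sd.jar -c /etc/ma1sd/ma1sd.yaml --dump-and-exit
```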
Anatoly Sablin
75efd9921d Improve logging configuration. Introduce the root and the app log levels. 2020-01-28 00:55:39 +03:00
Anatoly Sablin
9219bd4723 Add logging configuration. Add --dump option to just print the full configuration. 2020-01-25 14:57:22 +03:00
Anatoly Sablin
73526be2ac Add configuration to use the legacy query to get room names from old Synapse. 2020-01-25 14:04:40 +03:00
ma1uta
b827efca2c Merge pull request #13 from NullIsNot0/fix-room-names-patch
Fix room name retrieval after Synapse dropped table room_names
2020-01-25 10:50:55 +00:00
NullIsNot0
6b7a4c8a23 Fix room name retrieval after Synapse dropped table room_names
Recently Synapse dropped the unused (by Synapse itself) table "room_names", which breaks room name retrieval for ma1sd. There is a table "room_stats_state" from which we can retrieve a room name by its ID. Note that people-to-people conversations do not contain room names, because they are generated on the fly from the other participants' names separated by the word "and". That's why this query will only get names for rooms where a room name was set during the creation process (or changed later) and is the same for all participants.
Link to Synapse code where it drops "room_names" table: https://github.com/matrix-org/synapse/blob/master/synapse/storage/data_stores/main/schema/delta/56/drop_unused_event_tables.sql#L17
2020-01-10 18:23:29 +02:00
Anatoly Sablin
47f6239268 Add equals and hashCode methods for the MemoryThreePid. 2020-01-09 22:28:44 +03:00
ma1uta
0d6f65b469 Merge pull request #11 from NullIsNot0/master
Load DNS overwrite config on startup and remove duplicates from identity store before email notifications
2020-01-09 19:25:13 +00:00
Edgars Voroboks
be915aed94 Remove duplicates from identity store before email notifications
I use LDAP as the user store. I have set up "mail" and "otherMailbox" as threepid email attributes. When people get invited to rooms, they receive 2 (sometimes 3) invitation e-mails if they have the same e-mail address in the LDAP "mail" and "otherMailbox" fields. I think it's a good idea to check the identity store for duplicates before sending invitation e-mails.
2020-01-09 20:14:56 +02:00
NullIsNot0
ce938bb4a5 Load DNS overwrite config on startup
I recently noticed that the DNS overwrite does not happen. There are messages in the logs: "No DNS overwrite for <REDACTED>", but I definitely have configured DNS overwriting. I think it's because the DNS overwrite config is not loaded when ma1sd starts up.
Documented here: https://github.com/ma1uta/ma1sd/blob/master/docs/features/authentication.md#dns-overwrite and here: https://github.com/ma1uta/ma1sd/blob/master/docs/features/directory.md#dns-overwrite
2020-01-07 22:24:26 +02:00
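For context, the DNS overwrite block described in the linked docs has roughly the following shape (a sketch; the host name and URL are placeholders, and the exact keys should be checked against those pages):
```yaml
dns:
  overwrite:
    homeserver:
      client:
        - name: 'example.org'            # Matrix domain to overwrite
          value: 'http://localhost:8008' # where ma1sd should reach the homeserver instead
```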
Anatoly Sablin
15db563e8d Add documentation. 2019-12-26 22:49:25 +03:00
Anatoly Sablin
82a538c750 Add an option to enable/disable hash lookup via the LDAP provider. 2019-12-25 22:51:44 +03:00
Anatoly Sablin
84ca8ebbd9 Add support for MSC2134 (Identity hash lookup) in the LDAP provider. 2019-12-25 00:13:07 +03:00
Anatoly Sablin
774ebf4fa8 Fix for #9. Properly wrap the handlers with the sanitize handler. 2019-12-16 22:47:24 +03:00
28 changed files with 523 additions and 94 deletions

View File

@@ -108,7 +108,7 @@ dependencies {
compile 'net.i2p.crypto:eddsa:0.3.0'
// LDAP connector
compile 'org.apache.directory.api:api-all:1.0.0'
compile 'org.apache.directory.api:api-all:1.0.3'
// DNS lookups
compile 'dnsjava:dnsjava:2.1.9'

View File

@@ -136,5 +136,11 @@ exec:
hashEnabled: true # enable the hash lookup (default is false)
```
For LDAP providers:
```yaml
ldap:
  lookup: true
```
NOTE: Federation requests work only with the `none` algorithm.
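For a quick check of the v2 hash endpoints added in this release (the `/_matrix/identity/v2/hash_details` route listed further down in this compare), something like the following can be used; the host and token are placeholders, and the response shape should be verified against MSC2134:
```shell script
# Returns the current pepper and the hash algorithms accepted by the /lookup endpoint.
curl -H 'Authorization: Bearer <access_token>' \
     'https://ma1sd.example.org/_matrix/identity/v2/hash_details'
```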

View File

@@ -58,7 +58,47 @@ Commonly the `server.publicUrl` should be the same value as the `trusted_third_p
## Storage
### SQLite
`storage.provider.sqlite.database`: Absolute location of the SQLite database
```yaml
storage:
  backend: sqlite # default
  provider:
    sqlite:
      database: /var/lib/ma1sd/store.db # Absolute location of the SQLite database
```
### Postgresql
```yaml
storage:
  backend: postgresql
  provider:
    postgresql:
      database: //localhost:5432/ma1sd
      username: ma1sd
      password: secret_password
```
See the [migration instructions](migration-to-postgresql.md) for moving from SQLite to PostgreSQL.
## Logging
```yaml
logging:
  root: error # default level for all loggers (app and third-party libraries)
  app: info # log level for ma1sd only
```
Possible values: `trace`, `debug`, `info`, `warn`, `error`, `off`.
Default value for the root level: `info`.
Default value for the app level: `info`.
The app level can be set via the `MA1SD_LOG_LEVEL` environment variable, the configuration, or the start options:
| start option | equivalent configuration |
| --- | --- |
| | app: info |
| -v | app: debug |
| -vv | app: trace |
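A minimal sketch of how these options can be combined on the command line; the jar and configuration paths are assumptions and depend on the installation:
```shell script
# App-level debug logging via the -v flag
java -jar ma1sd.jar -c /etc/ma1sd/ma1sd.yaml -v
# Or via the environment variable documented above
MA1SD_LOG_LEVEL=debug java -jar ma1sd.jar -c /etc/ma1sd/ma1sd.yaml
```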
## Identity stores
See the [Identity stores](stores/README.md) for specific configuration

View File

@@ -0,0 +1,41 @@
# Migration from sqlite to postgresql
Starting from version 2.3.0, ma1sd supports PostgreSQL for internal storage in addition to SQLite (parameter `storage.backend`).
#### Migration steps
1. Create the PostgreSQL database and user for the ma1sd storage (see the sketch after this list)
2. Create a backup of the SQLite storage (default location: /var/lib/ma1sd/store.db)
3. Migrate the data from SQLite to PostgreSQL
4. Change the ma1sd configuration to use PostgreSQL
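For step 1, a minimal sketch using `psql`; the role, password and database names are placeholders and must match the values used in the pgloader command and in the ma1sd configuration below:
```shell script
# Assumed to run as the postgres superuser; adjust names and passwords as needed.
sudo -u postgres psql -c "CREATE ROLE ma1sd_user LOGIN PASSWORD 'ma1sd_password';"
sudo -u postgres psql -c "CREATE DATABASE ma1sd OWNER ma1sd_user;"
```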
For the data migration it is possible to use the https://pgloader.io tool.
Example of a migration command:
```shell script
pgloader --with "quote identifiers" /path/to/store.db pgsql://ma1sd_user:ma1sd_password@host:port/database
```
or (short version for database on localhost)
```shell script
pgloader --with "quote identifiers" /path/to/store.db pgsql://ma1sd_user:ma1sd_password@localhost/ma1sd
```
The option `--with "quote identifiers"` is used to create case-sensitive tables.
ma1sd_user - the PostgreSQL user for ma1sd.
ma1sd_password - the password of the PostgreSQL user.
host - the PostgreSQL host.
port - the database port (default 5432).
database - the database name.
Configuration example for postgresql storage:
```yaml
storage:
  backend: postgresql
  provider:
    postgresql:
      database: '//localhost/ma1sd' # or full variant //192.168.1.100:5432/ma1sd_database
      username: 'ma1sd_user'
      password: 'ma1sd_password'
```
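After the migration, a quick sanity check that the tables were copied can be done with `psql` (a sketch; the connection details are placeholders):
```shell script
# Lists the migrated tables; with --with "quote identifiers" the original case-sensitive names should appear.
psql -h localhost -U ma1sd_user -d ma1sd -c '\dt'
```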

View File

@@ -89,7 +89,7 @@ ldap:
#### 3PIDs
You can also change the attribute lists for 3PID, like email or phone numbers.
The following example would overwrite the [default list of attributes](../../src/main/java/io/kamax/ma1sd/config/ldap/LdapConfig.java#L64)
The following example would overwrite the [default list of attributes](../../src/main/java/io/kamax/mxisd/config/ldap/LdapConfig.java#L64)
for emails and phone number:
```yaml
ldap:

View File

@@ -130,6 +130,26 @@ threepid:
# synapseSql:
# lookup:
# query: 'select user_id as mxid, medium, address from user_threepids' # query to retrieve 3PIDs for hashes.
# legacyRoomNames: false # use the old query to get room names.
### hash lookup for ldap provider (with example of the ldap configuration)
# ldap:
# enabled: true
# lookup: true # hash lookup
# connection:
# host: 'ldap.domain.tld'
# port: 389
# bindDn: 'cn=admin,dc=domain,dc=tld'
# bindPassword: 'Secret'
# baseDNs:
# - 'dc=domain,dc=tld'
# attribute:
# uid:
# type: 'uid' # or mxid
# value: 'cn'
# name: 'displayName'
# identity:
# filter: '(objectClass=inetOrgPerson)'
#### MSC2140 (Terms)
#policy:
@@ -148,3 +168,6 @@ threepid:
# - '/_matrix/identity/v2/hash_details'
# - '/_matrix/identity/v2/lookup'
#
# logging:
# root: trace # logging level

View File

@@ -104,30 +104,30 @@ public class HttpMxisd {
HttpHandler asNotFoundHandler = SaneHandler.around(new AsNotFoundHandler(m.getAs()));
final RoutingHandler handler = Handlers.routing()
.add("OPTIONS", "/**", SaneHandler.around(new OptionsHandler()))
.add("OPTIONS", "/**", sane(new OptionsHandler()))
// Status endpoints
.get(StatusHandler.Path, SaneHandler.around(new StatusHandler()))
.get(VersionHandler.Path, SaneHandler.around(new VersionHandler()))
.get(StatusHandler.Path, sane(new StatusHandler()))
.get(VersionHandler.Path, sane(new VersionHandler()))
// Authentication endpoints
.get(LoginHandler.Path, SaneHandler.around(new LoginGetHandler(m.getAuth(), m.getHttpClient())))
.post(LoginHandler.Path, SaneHandler.around(new LoginPostHandler(m.getAuth())))
.post(RestAuthHandler.Path, SaneHandler.around(new RestAuthHandler(m.getAuth())))
.get(LoginHandler.Path, sane(new LoginGetHandler(m.getAuth(), m.getHttpClient())))
.post(LoginHandler.Path, sane(new LoginPostHandler(m.getAuth())))
.post(RestAuthHandler.Path, sane(new RestAuthHandler(m.getAuth())))
// Directory endpoints
.post(UserDirectorySearchHandler.Path, SaneHandler.around(new UserDirectorySearchHandler(m.getDirectory())))
.post(UserDirectorySearchHandler.Path, sane(new UserDirectorySearchHandler(m.getDirectory())))
// Profile endpoints
.get(ProfileHandler.Path, SaneHandler.around(new ProfileHandler(m.getProfile())))
.get(InternalProfileHandler.Path, SaneHandler.around(new InternalProfileHandler(m.getProfile())))
.get(ProfileHandler.Path, sane(new ProfileHandler(m.getProfile())))
.get(InternalProfileHandler.Path, sane(new InternalProfileHandler(m.getProfile())))
// Registration endpoints
.post(Register3pidRequestTokenHandler.Path,
SaneHandler.around(new Register3pidRequestTokenHandler(m.getReg(), m.getClientDns(), m.getHttpClient())))
sane(new Register3pidRequestTokenHandler(m.getReg(), m.getClientDns(), m.getHttpClient())))
// Invite endpoints
.post(RoomInviteHandler.Path, SaneHandler.around(new RoomInviteHandler(m.getHttpClient(), m.getClientDns(), m.getInvite())))
.post(RoomInviteHandler.Path, sane(new RoomInviteHandler(m.getHttpClient(), m.getClientDns(), m.getInvite())))
// Application Service endpoints
.get(AsUserHandler.Path, asUserHandler)
@@ -139,7 +139,7 @@ public class HttpMxisd {
.put("/transactions/{" + AsTransactionHandler.ID + "}", asTxnHandler) // Legacy endpoint
// Banned endpoints
.get(InternalInfoHandler.Path, SaneHandler.around(new InternalInfoHandler()));
.get(InternalInfoHandler.Path, sane(new InternalInfoHandler()));
keyEndpoints(handler);
identityEndpoints(handler);
termsEndpoints(handler);
@@ -191,10 +191,10 @@ public class HttpMxisd {
private void accountEndpoints(RoutingHandler routingHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
routingHandler.post(AccountRegisterHandler.Path, SaneHandler.around(new AccountRegisterHandler(m.getAccMgr())));
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new AccountGetUserInfoHandler(m.getAccMgr())),
routingHandler.post(AccountRegisterHandler.Path, sane(new AccountRegisterHandler(m.getAccMgr())));
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new AccountGetUserInfoHandler(m.getAccMgr()),
AccountGetUserInfoHandler.Path, true);
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new AccountLogoutHandler(m.getAccMgr())),
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new AccountLogoutHandler(m.getAccMgr()),
AccountLogoutHandler.Path, true);
}
}
@@ -202,7 +202,7 @@ public class HttpMxisd {
private void termsEndpoints(RoutingHandler routingHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
routingHandler.get(GetTermsHandler.PATH, new GetTermsHandler(m.getConfig().getPolicy()));
routingHandler.get(GetTermsHandler.PATH, sane(new GetTermsHandler(m.getConfig().getPolicy())));
routingHandler.post(AcceptTermsHandler.PATH, sane(new AcceptTermsHandler(m.getAccMgr())));
}
}
@@ -210,10 +210,10 @@ public class HttpMxisd {
private void hashEndpoints(RoutingHandler routingHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV2()) {
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, sane(new HashDetailsHandler(m.getHashManager())),
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.GET, new HashDetailsHandler(m.getHashManager()),
HashDetailsHandler.PATH, true);
wrapWithTokenAndAuthorizationHandlers(routingHandler, Methods.POST,
sane(new HashLookupHandler(m.getIdentity(), m.getHashManager())), HashLookupHandler.Path, true);
new HashLookupHandler(m.getIdentity(), m.getHashManager()), HashLookupHandler.Path, true);
}
}
@@ -227,11 +227,11 @@ public class HttpMxisd {
HttpHandler httpHandler) {
MatrixConfig matrixConfig = m.getConfig().getMatrix();
if (matrixConfig.isV1()) {
routingHandler.add(method, apiHandler.getPath(IdentityServiceAPI.V1), httpHandler);
routingHandler.add(method, apiHandler.getPath(IdentityServiceAPI.V1), sane(httpHandler));
}
if (matrixConfig.isV2()) {
String path = apiHandler.getPath(IdentityServiceAPI.V2);
wrapWithTokenAndAuthorizationHandlers(routingHandler, method, httpHandler, path, useAuthorization);
wrapWithTokenAndAuthorizationHandlers(routingHandler, method, httpHandler, apiHandler.getPath(IdentityServiceAPI.V2),
useAuthorization);
}
}
@@ -245,7 +245,7 @@ public class HttpMxisd {
} else {
wrappedHandler = httpHandler;
}
routingHandler.add(method, url, wrappedHandler);
routingHandler.add(method, url, sane(wrappedHandler));
}
@NotNull

View File

@@ -27,6 +27,8 @@ import io.kamax.mxisd.auth.AuthProviders;
import io.kamax.mxisd.backend.IdentityStoreSupplier;
import io.kamax.mxisd.backend.sql.synapse.Synapse;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.config.PostgresqlStorageConfig;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.crypto.CryptoFactory;
import io.kamax.mxisd.crypto.KeyManager;
import io.kamax.mxisd.crypto.SignatureManager;
@@ -109,7 +111,20 @@ public class Mxisd {
IdentityServerUtils.setHttpClient(httpClient);
srvFetcher = new RemoteIdentityServerFetcher(httpClient);
store = new OrmLiteSqlStorage(cfg);
StorageConfig.BackendEnum storageBackend = cfg.getStorage().getBackend();
StorageConfig.Provider storageProvider = cfg.getStorage().getProvider();
switch (storageBackend) {
case sqlite:
store = new OrmLiteSqlStorage(storageBackend, storageProvider.getSqlite().getDatabase());
break;
case postgresql:
PostgresqlStorageConfig postgresql = storageProvider.getPostgresql();
store = new OrmLiteSqlStorage(storageBackend, postgresql.getDatabase(), postgresql.getUsername(), postgresql.getPassword());
break;
default:
throw new IllegalStateException("Storage provider hasn't been configured");
}
keyMgr = CryptoFactory.getKeyManager(cfg.getKey());
signMgr = CryptoFactory.getSignatureManager(cfg, keyMgr);
clientDns = new ClientDnsOverwrite(cfg.getDns().getOverwrite());

View File

@@ -44,31 +44,46 @@ public class MxisdStandaloneExec {
try {
MxisdConfig cfg = null;
Iterator<String> argsIt = Arrays.asList(args).iterator();
boolean dump = false;
boolean exit = false;
while (argsIt.hasNext()) {
String arg = argsIt.next();
if (StringUtils.equalsAny(arg, "-h", "--help", "-?", "--usage")) {
System.out.println("Available arguments:" + System.lineSeparator());
System.out.println(" -h, --help Show this help message");
System.out.println(" --version Print the version then exit");
System.out.println(" -c, --config Set the configuration file location");
System.out.println(" -v Increase log level (log more info)");
System.out.println(" -vv Further increase log level");
System.out.println(" ");
System.exit(0);
} else if (StringUtils.equals(arg, "-v")) {
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "debug");
} else if (StringUtils.equals(arg, "-vv")) {
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "trace");
} else if (StringUtils.equalsAny(arg, "-c", "--config")) {
String cfgFile = argsIt.next();
cfg = YamlConfigLoader.loadFromFile(cfgFile);
} else if (StringUtils.equals("--version", arg)) {
System.out.println(Mxisd.Version);
System.exit(0);
} else {
System.err.println("Invalid argument: " + arg);
System.err.println("Try '--help' for available arguments");
System.exit(1);
switch (arg) {
case "-h":
case "--help":
case "-?":
case "--usage":
System.out.println("Available arguments:" + System.lineSeparator());
System.out.println(" -h, --help Show this help message");
System.out.println(" --version Print the version then exit");
System.out.println(" -c, --config Set the configuration file location");
System.out.println(" -v Increase log level (log more info)");
System.out.println(" -vv Further increase log level");
System.out.println(" --dump Dump the full ma1sd configuration");
System.out.println(" --dump-and-exit Dump the full ma1sd configuration and exit");
System.out.println(" ");
System.exit(0);
return;
case "-v":
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "debug");
break;
case "-vv":
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", "trace");
break;
case "-c":
case "--config":
String cfgFile = argsIt.next();
cfg = YamlConfigLoader.loadFromFile(cfgFile);
break;
case "--dump-and-exit":
exit = true;
case "--dump":
dump = true;
break;
default:
System.err.println("Invalid argument: " + arg);
System.err.println("Try '--help' for available arguments");
System.exit(1);
}
}
@@ -76,6 +91,13 @@ public class MxisdStandaloneExec {
cfg = YamlConfigLoader.tryLoadFromFile("ma1sd.yaml").orElseGet(MxisdConfig::new);
}
if (dump) {
YamlConfigLoader.dumpConfig(cfg);
if (exit) {
System.exit(0);
}
}
log.info("ma1sd starting");
log.info("Version: {}", Mxisd.Version);

View File

@@ -144,7 +144,13 @@ public class MembershipEventProcessor implements EventTypeProcessor {
.collect(Collectors.toList());
log.info("Found {} email(s) in identity store for {}", tpids.size(), inviteeId);
for (_ThreePid tpid : tpids) {
log.info("Removing duplicates from identity store");
List<_ThreePid> uniqueTpids = tpids.stream()
.distinct()
.collect(Collectors.toList());
log.info("There are {} unique email(s) in identity store for {}", uniqueTpids.size(), inviteeId);
for (_ThreePid tpid : uniqueTpids) {
log.info("Found Email to notify about room invitation: {}", tpid.getAddress());
Map<String, String> properties = new HashMap<>();
profiler.getDisplayName(sender).ifPresent(name -> properties.put("sender_display_name", name));

View File

@@ -41,7 +41,10 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;
public class LdapThreePidProvider extends LdapBackend implements IThreePidProvider {
@@ -137,4 +140,65 @@ public class LdapThreePidProvider extends LdapBackend implements IThreePidProvid
return mappingsFound;
}
private List<String> getAttributes() {
final List<String> attributes = getCfg().getAttribute().getThreepid().values().stream().flatMap(List::stream)
.collect(Collectors.toList());
attributes.add(getUidAtt());
return attributes;
}
private Optional<String> getAttributeValue(Entry entry, List<String> attributes) {
return attributes.stream()
.map(attr -> getAttribute(entry, attr))
.filter(Objects::nonNull)
.filter(Optional::isPresent)
.map(Optional::get)
.findFirst();
}
@Override
public Iterable<ThreePidMapping> populateHashes() {
List<ThreePidMapping> result = new ArrayList<>();
if (!getCfg().getIdentity().isLookup()) {
return result;
}
String filter = getCfg().getIdentity().getFilter();
try (LdapConnection conn = getConn()) {
bind(conn);
log.debug("Query: {}", filter);
List<String> attributes = getAttributes();
log.debug("Attributes: {}", GsonUtil.build().toJson(attributes));
for (String baseDN : getBaseDNs()) {
log.debug("Base DN: {}", baseDN);
try (EntryCursor cursor = conn.search(baseDN, filter, SearchScope.SUBTREE, attributes.toArray(new String[0]))) {
while (cursor.next()) {
Entry entry = cursor.get();
log.info("Found possible match, DN: {}", entry.getDn().getName());
Optional<String> mxid = getAttribute(entry, getUidAtt());
if (!mxid.isPresent()) {
continue;
}
for (Map.Entry<String, List<String>> attributeEntry : getCfg().getAttribute().getThreepid().entrySet()) {
String medium = attributeEntry.getKey();
getAttributeValue(entry, attributeEntry.getValue())
.ifPresent(s -> result.add(new ThreePidMapping(medium, s, buildMatrixIdFromUid(mxid.get()))));
}
}
} catch (CursorLdapReferralException e) {
log.warn("3PID is only available via referral, skipping", e);
} catch (IOException | LdapException | CursorException e) {
log.error("Unable to fetch 3PID mappings", e);
}
}
} catch (LdapException | IOException e) {
log.error("Unable to fetch 3PID mappings", e);
}
return result;
}
}

View File

@@ -107,14 +107,16 @@ public abstract class SqlThreePidProvider implements IThreePidProvider {
@Override
public Iterable<ThreePidMapping> populateHashes() {
if (StringUtils.isBlank(cfg.getLookup().getQuery())) {
String query = cfg.getLookup().getQuery();
if (StringUtils.isBlank(query)) {
log.warn("Lookup query not configured, skip.");
return Collections.emptyList();
}
log.debug("Uses query to match users: {}", query);
List<ThreePidMapping> result = new ArrayList<>();
try (Connection connection = pool.get()) {
PreparedStatement statement = connection.prepareStatement(cfg.getLookup().getQuery());
PreparedStatement statement = connection.prepareStatement(query);
try (ResultSet resultSet = statement.executeQuery()) {
while (resultSet.next()) {
String mxid = resultSet.getString("mxid");

View File

@@ -29,23 +29,27 @@ import java.util.Optional;
public class Synapse {
private SqlConnectionPool pool;
private final SqlConnectionPool pool;
private final SynapseSqlProviderConfig providerConfig;
public Synapse(SynapseSqlProviderConfig sqlCfg) {
this.pool = new SqlConnectionPool(sqlCfg);
providerConfig = sqlCfg;
}
public Optional<String> getRoomName(String id) {
return pool.withConnFunction(conn -> {
PreparedStatement stmt = conn.prepareStatement(SynapseQueries.getRoomName());
stmt.setString(1, id);
ResultSet rSet = stmt.executeQuery();
if (!rSet.next()) {
return Optional.empty();
}
String query = providerConfig.isLegacyRoomNames() ? SynapseQueries.getLegacyRoomName() : SynapseQueries.getRoomName();
return Optional.ofNullable(rSet.getString(1));
return pool.withConnFunction(conn -> {
try (PreparedStatement stmt = conn.prepareStatement(query)) {
stmt.setString(1, id);
ResultSet rSet = stmt.executeQuery();
if (!rSet.next()) {
return Optional.empty();
}
return Optional.ofNullable(rSet.getString(1));
}
});
}
}

View File

@@ -72,7 +72,10 @@ public class SynapseQueries {
}
public static String getRoomName() {
return "select r.name from room_names r, events e, (select r1.room_id,max(e1.origin_server_ts) ts from room_names r1, events e1 where r1.event_id = e1.event_id group by r1.room_id) rle where e.origin_server_ts = rle.ts and r.event_id = e.event_id and r.room_id = ?";
return "select name from room_stats_state where room_id = ? limit 1";
}
public static String getLegacyRoomName() {
return "select r.name from room_names r, events e, (select r1.room_id,max(e1.origin_server_ts) ts from room_names r1, events e1 where r1.event_id = e1.event_id group by r1.room_id) rle where e.origin_server_ts = rle.ts and r.event_id = e.event_id and r.room_id = ?";
}
}

View File

@@ -0,0 +1,47 @@
package io.kamax.mxisd.config;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class LoggingConfig {
private static final Logger LOGGER = LoggerFactory.getLogger("App");
private String root;
private String app;
public String getRoot() {
return root;
}
public void setRoot(String root) {
this.root = root;
}
public String getApp() {
return app;
}
public void setApp(String app) {
this.app = app;
}
public void build() {
LOGGER.info("Logging config:");
if (StringUtils.isNotBlank(getRoot())) {
LOGGER.info(" Default log level: {}", getRoot());
System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", getRoot());
}
String appLevel = System.getProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd");
if (StringUtils.isNotBlank(appLevel)) {
LOGGER.info(" Logging level set by environment: {}", appLevel);
} else if (StringUtils.isNotBlank(getApp())) {
System.setProperty("org.slf4j.simpleLogger.log.io.kamax.mxisd", getApp());
LOGGER.info(" Logging level set by the configuration: {}", getApp());
} else {
LOGGER.info(" Logging level hasn't set, use default");
}
}
}

View File

@@ -117,6 +117,7 @@ public class MxisdConfig {
private WordpressConfig wordpress = new WordpressConfig();
private PolicyConfig policy = new PolicyConfig();
private HashingConfig hashing = new HashingConfig();
private LoggingConfig logging = new LoggingConfig();
public AppServiceConfig getAppsvc() {
return appsvc;
@@ -342,6 +343,14 @@ public class MxisdConfig {
this.hashing = hashing;
}
public LoggingConfig getLogging() {
return logging;
}
public void setLogging(LoggingConfig logging) {
this.logging = logging;
}
public MxisdConfig inMemory() {
getKey().setPath(":memory:");
getStorage().getProvider().getSqlite().setDatabase(":memory:");
@@ -350,6 +359,8 @@ public class MxisdConfig {
}
public MxisdConfig build() {
getLogging().build();
if (StringUtils.isBlank(getServer().getName())) {
getServer().setName(getMatrix().getDomain());
log.debug("server.name is empty, using matrix.domain");
@@ -359,6 +370,7 @@ public class MxisdConfig {
getAuth().build();
getAccountConfig().build();
getDirectory().build();
getDns().build();
getExec().build();
getFirebase().build();
getForward().build();

View File

@@ -0,0 +1,54 @@
/*
* mxisd - Matrix Identity Server Daemon
* Copyright (C) 2017 Kamax Sarl
*
* https://www.kamax.io/
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package io.kamax.mxisd.config;
public class PostgresqlStorageConfig {
private String database;
private String username;
private String password;
public String getDatabase() {
return database;
}
public void setDatabase(String database) {
this.database = database;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}

View File

@@ -21,14 +21,21 @@
package io.kamax.mxisd.config;
import io.kamax.mxisd.exception.ConfigurationException;
import org.apache.commons.lang.StringUtils;
public class StorageConfig {
public enum BackendEnum {
sqlite,
postgresql
}
public static class Provider {
private SQLiteStorageConfig sqlite = new SQLiteStorageConfig();
private PostgresqlStorageConfig postgresql = new PostgresqlStorageConfig();
public SQLiteStorageConfig getSqlite() {
return sqlite;
}
@@ -37,16 +44,23 @@ public class StorageConfig {
this.sqlite = sqlite;
}
public PostgresqlStorageConfig getPostgresql() {
return postgresql;
}
public void setPostgresql(PostgresqlStorageConfig postgresql) {
this.postgresql = postgresql;
}
}
private String backend = "sqlite";
private BackendEnum backend = BackendEnum.sqlite; // or postgresql
private Provider provider = new Provider();
public String getBackend() {
public BackendEnum getBackend() {
return backend;
}
public void setBackend(String backend) {
public void setBackend(BackendEnum backend) {
this.backend = backend;
}
@@ -59,7 +73,7 @@ public class StorageConfig {
}
public void build() {
if (StringUtils.isBlank(getBackend())) {
if (getBackend() == null) {
throw new ConfigurationException("storage.backend");
}
}

View File

@@ -76,4 +76,14 @@ public class YamlConfigLoader {
}
}
public static void dumpConfig(MxisdConfig cfg) {
Representer rep = new Representer();
rep.getPropertyUtils().setBeanAccess(BeanAccess.FIELD);
rep.getPropertyUtils().setAllowReadOnlyProperties(true);
rep.getPropertyUtils().setSkipMissingProperties(true);
Yaml yaml = new Yaml(new Constructor(MxisdConfig.class), rep);
String dump = yaml.dump(cfg);
log.info("Full configuration:\n{}", dump);
}
}

View File

@@ -233,6 +233,7 @@ public abstract class LdapConfig {
private String filter;
private String token = "%3pid";
private Map<String, String> medium = new HashMap<>();
private boolean lookup = false;
public String getFilter() {
return filter;
@@ -262,6 +263,13 @@ public abstract class LdapConfig {
this.medium = medium;
}
public boolean isLookup() {
return lookup;
}
public void setLookup(boolean lookup) {
this.lookup = lookup;
}
}
public static class Profile {

View File

@@ -45,4 +45,21 @@ public class MemoryThreePid implements _ThreePid {
this.address = address;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
MemoryThreePid threePid = (MemoryThreePid) o;
if (!medium.equals(threePid.medium)) return false;
return address.equals(threePid.address);
}
@Override
public int hashCode() {
int result = medium.hashCode();
result = 31 * result + address.hashCode();
return result;
}
}

View File

@@ -24,9 +24,23 @@ import io.kamax.mxisd.UserIdType;
import io.kamax.mxisd.backend.sql.synapse.SynapseQueries;
import io.kamax.mxisd.config.sql.SqlConfig;
import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class SynapseSqlProviderConfig extends SqlConfig {
private transient final Logger log = LoggerFactory.getLogger(SynapseSqlProviderConfig.class);
private boolean legacyRoomNames = false;
public boolean isLegacyRoomNames() {
return legacyRoomNames;
}
public void setLegacyRoomNames(boolean legacyRoomNames) {
this.legacyRoomNames = legacyRoomNames;
}
@Override
protected String getProviderName() {
return "Synapse SQL";
@@ -65,4 +79,12 @@ public class SynapseSqlProviderConfig extends SqlConfig {
printConfig();
}
@Override
protected void printConfig() {
super.printConfig();
if (isEnabled()) {
log.info("Use legacy room name query: {}", isLegacyRoomNames());
}
}
}

View File

@@ -35,7 +35,9 @@ public class HashEngine {
hashStorage.clear();
for (IThreePidProvider provider : providers) {
try {
LOGGER.info("Populate hashes from the handler: {}", provider.getClass().getCanonicalName());
for (ThreePidMapping pidMapping : provider.populateHashes()) {
LOGGER.debug("Found 3PID: {}", pidMapping);
hashStorage.add(pidMapping, hash(pidMapping));
}
} catch (Exception e) {

View File

@@ -85,6 +85,7 @@ public class HashLookupHandler extends LookupHandler implements ApiHandler {
default:
throw new InvalidParamException();
}
hashManager.getRotationStrategy().newRequest();
}
private void noneAlgorithm(HttpServerExchange exchange, HashLookupRequest request, ClientHashLookupRequest input) throws Exception {

View File

@@ -46,6 +46,4 @@ public interface LookupStrategy {
Optional<SingleLookupReply> findRecursive(SingleLookupRequest request);
CompletableFuture<List<ThreePidMapping>> find(BulkLookupRequest requests);
CompletableFuture<List<ThreePidMapping>> find(HashLookupRequest request);
}

View File

@@ -26,17 +26,23 @@ import io.kamax.matrix.json.MatrixJson;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.exception.ConfigurationException;
import io.kamax.mxisd.hash.HashManager;
import io.kamax.mxisd.hash.storage.HashStorage;
import io.kamax.mxisd.lookup.*;
import io.kamax.mxisd.lookup.ALookupRequest;
import io.kamax.mxisd.lookup.BulkLookupRequest;
import io.kamax.mxisd.lookup.SingleLookupReply;
import io.kamax.mxisd.lookup.SingleLookupRequest;
import io.kamax.mxisd.lookup.ThreePidMapping;
import io.kamax.mxisd.lookup.fetcher.IBridgeFetcher;
import io.kamax.mxisd.lookup.provider.IThreePidProvider;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang3.tuple.Pair;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.UnknownHostException;
import java.util.*;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
@@ -238,13 +244,4 @@ public class RecursivePriorityLookupStrategy implements LookupStrategy {
result.complete(mapFoundAll);
return bulkLookupInProgress.remove(payloadId);
}
@Override
public CompletableFuture<List<ThreePidMapping>> find(HashLookupRequest request) {
HashStorage hashStorage = hashManager.getHashStorage();
CompletableFuture<List<ThreePidMapping>> result = new CompletableFuture<>();
result.complete(hashStorage.find(request.getHashes()).stream().map(Pair::getValue).collect(Collectors.toList()));
hashManager.getRotationStrategy().newRequest();
return result;
}
}

View File

@@ -28,8 +28,8 @@ import com.j256.ormlite.stmt.QueryBuilder;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.TableUtils;
import io.kamax.matrix.ThreePid;
import io.kamax.mxisd.config.MxisdConfig;
import io.kamax.mxisd.config.PolicyConfig;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.exception.ConfigurationException;
import io.kamax.mxisd.exception.InternalServerError;
import io.kamax.mxisd.exception.InvalidCredentialsException;
@@ -91,29 +91,31 @@ public class OrmLiteSqlStorage implements IStorage {
private Dao<AcceptedDao, Long> acceptedDao;
private Dao<HashDao, String> hashDao;
private Dao<ChangelogDao, String> changelogDao;
private StorageConfig.BackendEnum backend;
public OrmLiteSqlStorage(MxisdConfig cfg) {
this(cfg.getStorage().getBackend(), cfg.getStorage().getProvider().getSqlite().getDatabase());
public OrmLiteSqlStorage(StorageConfig.BackendEnum backend, String path) {
this(backend, path, null, null);
}
public OrmLiteSqlStorage(String backend, String path) {
if (StringUtils.isBlank(backend)) {
public OrmLiteSqlStorage(StorageConfig.BackendEnum backend, String database, String username, String password) {
if (backend == null) {
throw new ConfigurationException("storage.backend");
}
this.backend = backend;
if (StringUtils.isBlank(path)) {
if (StringUtils.isBlank(database)) {
throw new ConfigurationException("Storage destination cannot be empty");
}
withCatcher(() -> {
ConnectionSource connPool = new JdbcConnectionSource("jdbc:" + backend + ":" + path);
ConnectionSource connPool = new JdbcConnectionSource("jdbc:" + backend + ":" + database, username, password);
changelogDao = createDaoAndTable(connPool, ChangelogDao.class);
invDao = createDaoAndTable(connPool, ThreePidInviteIO.class);
expInvDao = createDaoAndTable(connPool, HistoricalThreePidInviteIO.class);
sessionDao = createDaoAndTable(connPool, ThreePidSessionDao.class);
asTxnDao = createDaoAndTable(connPool, ASTransactionDao.class);
accountDao = createDaoAndTable(connPool, AccountDao.class);
acceptedDao = createDaoAndTable(connPool, AcceptedDao.class);
acceptedDao = createDaoAndTable(connPool, AcceptedDao.class, true);
hashDao = createDaoAndTable(connPool, HashDao.class);
runMigration(connPool);
});
@@ -134,9 +136,27 @@ public class OrmLiteSqlStorage implements IStorage {
}
private <V, K> Dao<V, K> createDaoAndTable(ConnectionSource connPool, Class<V> c) throws SQLException {
return createDaoAndTable(connPool, c, false);
}
/**
* Workaround for https://github.com/j256/ormlite-core/issues/20.
*/
private <V, K> Dao<V, K> createDaoAndTable(ConnectionSource connPool, Class<V> c, boolean workaround) throws SQLException {
LOGGER.info("Create the dao: {}", c.getSimpleName());
Dao<V, K> dao = DaoManager.createDao(connPool, c);
TableUtils.createTableIfNotExists(connPool, c);
if (workaround && StorageConfig.BackendEnum.postgresql.equals(backend)) {
LOGGER.info("Workaround for postgresql on dao: {}", c.getSimpleName());
try {
dao.countOf();
LOGGER.info("Table exists, do nothing");
} catch (SQLException e) {
LOGGER.info("Table doesn't exist, create");
TableUtils.createTableIfNotExists(connPool, c);
}
} else {
TableUtils.createTableIfNotExists(connPool, c);
}
return dao;
}

View File

@@ -20,6 +20,7 @@
package io.kamax.mxisd.test.storage;
import io.kamax.mxisd.config.StorageConfig;
import io.kamax.mxisd.storage.ormlite.OrmLiteSqlStorage;
import org.junit.Test;
@@ -29,14 +30,14 @@ public class OrmLiteSqlStorageTest {
@Test
public void insertAsTxnDuplicate() {
OrmLiteSqlStorage store = new OrmLiteSqlStorage("sqlite", ":memory:");
OrmLiteSqlStorage store = new OrmLiteSqlStorage(StorageConfig.BackendEnum.sqlite, ":memory:");
store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
store.insertTransactionResult("mxisd", "2", Instant.now(), "{}");
}
@Test(expected = RuntimeException.class)
public void insertAsTxnSame() {
OrmLiteSqlStorage store = new OrmLiteSqlStorage("sqlite", ":memory:");
OrmLiteSqlStorage store = new OrmLiteSqlStorage(StorageConfig.BackendEnum.sqlite, ":memory:");
store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
store.insertTransactionResult("mxisd", "1", Instant.now(), "{}");
}